Beam

P2P Networking

Beam's P2P networking layer enables decentralized service discovery and intelligent routing without relying on central servers. This architecture provides censorship resistance and improved reliability, and eliminates single points of failure.

Note: The P2P networking layer is in active development. Core Tor functionality is available today, while the full decentralized mesh network is being built.

Network Architecture

The P2P network forms a self-organizing mesh where peers can discover and connect to services without any central coordination. Each peer maintains connections to multiple other peers, creating redundant paths for communication.

P2P Network Topology (diagram): Peer A (Service), Peer B (Relay), Peer C (Gateway), and Peer D (Client) form a mesh, coordinated by the Kademlia DHT (discovery) and the gossip protocol (propagation).

Key properties of this architecture:

  • No single point of failure — services remain accessible even if some peers go offline
  • Self-healing topology — the network automatically routes around failures
  • Censorship resistant — no central authority can block services
  • Horizontal scaling — capacity increases as more peers join

Kademlia DHT

Kademlia is a peer-to-peer distributed hash table (DHT) that Beam uses for service discovery. It's the same algorithm used by BitTorrent, IPFS, and Ethereum for decentralized data storage and retrieval.

How Kademlia Works

Each node in the network has a unique 256-bit identifier. When you want to find a service, Kademlia calculates the "distance" between node IDs using XOR (exclusive or). Nodes that are "closer" to a service's key are more likely to know about it.

Plain Text

// XOR distance calculation
distance(node_a, node_b) = node_a.id XOR node_b.id

// Example: finding service "myapp.local"
// 1. Hash "myapp.local" to get a 256-bit key
// 2. Query the nodes closest to that key
// 3. Each node returns even closer nodes
// 4. Repeat until the service record is found

// Lookup complexity: O(log n)
// For a network of 1 million peers, ~20 hops max
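The XOR metric above can be sketched in a few lines of Python. This is illustrative only: the 256-bit ID width comes from the text, but deriving IDs via SHA-256 of a name is an assumed scheme, not Beam's actual implementation.

```python
# Sketch of Kademlia's XOR distance over 256-bit IDs (illustrative;
# the node_id hashing scheme is an assumption, not Beam's actual code).
import hashlib

def node_id(name: str) -> int:
    """Derive a 256-bit identifier by hashing a name (assumed scheme)."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def distance(a: int, b: int) -> int:
    """XOR distance between two node IDs."""
    return a ^ b

# A lookup hashes the service name, then repeatedly queries the peers
# whose IDs are closest to that key:
key = node_id("myapp.local")
peers = [node_id(f"peer-{i}") for i in range(100)]
closest = sorted(peers, key=lambda p: distance(p, key))[:3]
```

Sorting by XOR distance is what lets each query step return "even closer" nodes, halving the remaining search space and giving the O(log n) lookup.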

Service Records

When a service registers with the DHT, it stores a record containing connection information. This record is replicated across multiple nodes for redundancy.

JSON

{
  "service_id": "myapp.local",
  "peer_id": "QmYjtig7VJQ6XsnUjqqJvj7QaMcCAwtrgNdahSiFoSdGKj",
  "endpoints": [
    "/ip4/192.168.1.100/tcp/8080",
    "/ip6/::1/tcp/8080"
  ],
  "metadata": {
    "version": "1.0",
    "protocol": "http",
    "capabilities": ["websocket", "http2"]
  },
  "ttl": 3600,
  "signature": "0x..."
}

The ttl field gives the record's lifetime in seconds (here one hour), after which it must be re-published; the signature proves the publisher owns the service ID.

K-Buckets

Each node maintains a routing table organized into "k-buckets" — groups of peer contacts sorted by XOR distance. When looking up a service, the node queries peers from the appropriate bucket, progressively narrowing down to the target.

  • Bucket 0 contains peers at distance 1 (the closest; distance 0 is the node itself)
  • Bucket 1 contains peers at distance 2-3
  • Bucket n contains peers at distance 2^n to 2^(n+1) - 1
  • Each bucket holds up to k peers (typically k=20)
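Because bucket n covers distances 2^n through 2^(n+1) - 1, the bucket index for a peer is simply the position of the highest set bit in the XOR distance. A minimal sketch (not Beam's actual code):

```python
# Map an XOR distance to its k-bucket index (sketch, not Beam's code).
def bucket_index(distance: int) -> int:
    """Bucket n covers distances 2^n .. 2^(n+1)-1, i.e. the index
    of the highest set bit in the distance."""
    if distance <= 0:
        raise ValueError("distance 0 is the node itself; no bucket")
    return distance.bit_length() - 1

# distance 1 -> bucket 0, distances 2-3 -> bucket 1, 4-7 -> bucket 2, ...
```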

Gossip Protocol

While Kademlia handles service discovery, the gossip protocol handles real-time propagation of announcements. When a service comes online or changes, the information spreads through the network like a rumor.

Message Types

  • ANNOUNCE — new service available
  • DEPARTURE — service going offline
  • HEARTBEAT — peer liveness check
  • ROUTE_UPDATE — routing table changes

Propagation

Each peer forwards messages to a random subset of its neighbors (the "fanout"). With a fanout of 6, a message can reach a network of 1,000 peers in approximately 4 rounds.

Plain Text

Round 1: origin sends to 6 peers
Round 2: 6 × 6 = 36 peers reached
Round 3: 36 × 6 = 216 peers reached
Round 4: 216 × 6 = 1,296 peers reached

// Propagation time: O(log n)
// Deduplication prevents message floods

Messages include a TTL (time-to-live) that decrements with each hop, preventing infinite propagation loops. Peers track recently seen message IDs to avoid processing duplicates.
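The TTL decrement and duplicate suppression described above can be sketched as a forwarding handler. This is an illustrative sketch, not Beam's wire protocol; the fanout of 6 comes from the text.

```python
# Gossip forwarding with TTL decrement and duplicate suppression
# (illustrative sketch, not Beam's wire protocol).
import random

FANOUT = 6                 # fanout from the text above
SEEN: set[str] = set()     # recently seen message IDs

def handle_gossip(msg_id: str, ttl: int, payload: dict, neighbors: list) -> list:
    """Return (peer, msg_id, ttl, payload) tuples to forward, or [].

    Drops duplicates and expired messages; decrements TTL on each hop
    so a message cannot loop through the network forever.
    """
    if msg_id in SEEN or ttl <= 0:
        return []          # duplicate or expired: do not forward
    SEEN.add(msg_id)
    targets = random.sample(neighbors, min(FANOUT, len(neighbors)))
    return [(peer, msg_id, ttl - 1, payload) for peer in targets]
```

A real implementation would also expire entries from the seen-set so it does not grow without bound.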

Intelligent Routing

Beam's routing system continuously monitors network conditions and selects optimal paths based on multiple quality metrics. This enables automatic failover and load balancing.

Quality Metrics

Routes are scored based on:

  • Latency (40%) — round-trip time to the destination
  • Bandwidth (30%) — available throughput capacity
  • Reliability (20%) — historical success rate
  • Congestion (10%) — current load level

Plain Text

// Route scoring formula
score = (latency_score × 0.4) +
        (bandwidth_score × 0.3) +
        (reliability_score × 0.2) -
        (congestion_penalty × 0.1)

// Routes are re-evaluated every 30 seconds
// Traffic shifts automatically to better paths
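With the weights listed above, the score reduces to a single weighted sum. The sketch below assumes each metric has already been normalized to 0.0-1.0; the actual normalization scheme isn't specified here.

```python
# Route scoring under the documented 40/30/20/10 weights (sketch;
# normalizing each metric to 0.0-1.0 is an assumption).
def route_score(latency: float, bandwidth: float,
                reliability: float, congestion: float) -> float:
    """Higher is better; congestion acts as a penalty."""
    return (latency * 0.4) + (bandwidth * 0.3) \
         + (reliability * 0.2) - (congestion * 0.1)

# A fast, high-capacity, reliable, uncongested path scores the 0.9 maximum:
best = route_score(latency=1.0, bandwidth=1.0, reliability=1.0, congestion=0.0)
# best == 0.9
```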

Multi-Path Routing

When available, Beam can split traffic across multiple paths simultaneously. This provides:

  • Automatic failover when one path fails
  • Load balancing across available routes
  • Increased aggregate bandwidth
  • Reduced dependency on any single path
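One simple way to split traffic across several live paths is to pick each path with probability proportional to its quality score. This is a sketch of the idea, not Beam's actual scheduler; the path names and scores are hypothetical.

```python
# Score-weighted path selection for multi-path routing (sketch;
# path names and scores below are hypothetical).
import random

def pick_path(paths: dict[str, float]) -> str:
    """Choose a path with probability proportional to its quality score."""
    names = list(paths)
    return random.choices(names, weights=[paths[p] for p in names], k=1)[0]

routes = {"path-a": 0.9, "path-b": 0.3}
# Over many picks, path-a carries roughly three quarters of the traffic,
# while path-b stays warm as an instant failover target.
```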

Peer Discovery

New peers join the network through several discovery mechanisms:

  • Bootstrap nodes — well-known entry points for initial connection
  • mDNS — local network discovery without internet access
  • DHT queries — find peers with specific capabilities
  • Peer exchange — learn about peers from existing connections

NAT Traversal

Many peers are behind NAT (Network Address Translation), which makes direct connections challenging. Beam uses several techniques to establish connections:

  • STUN — discovers your public IP and port mapping, enabling direct connections when both peers have compatible NAT types
  • Hole punching — coordinated connection attempts that "punch" through NAT by having both peers send packets simultaneously
  • TURN relay — fallback to relay servers when direct connection is impossible (e.g., symmetric NAT)

Beam attempts direct connection first, falling back to relay only when necessary. Relay nodes are other Beam peers that volunteer to forward traffic.
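The attempt order described above amounts to a fallback chain: try each strategy in turn and stop at the first success. A minimal sketch (the strategy function names are hypothetical stand-ins):

```python
# NAT-traversal fallback chain (sketch; strategy functions are
# hypothetical stand-ins, not Beam's actual API).
from typing import Callable, Optional

ConnectFn = Callable[[str], Optional[object]]

def connect(peer: str, strategies: list[ConnectFn]):
    """Try each connection strategy in order; return the first connection.

    Order follows the text: direct dial, then hole punching, then TURN relay.
    """
    for attempt in strategies:
        conn = attempt(peer)
        if conn is not None:
            return conn        # success: stop falling back
    return None                # all strategies failed; caller may retry

# Usage (hypothetical): connect(peer, [try_direct, try_hole_punch, try_turn_relay])
```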

Connection Management

Each peer maintains a target number of connections (default: 25-50). The connection manager continuously optimizes the peer set:

  • Prioritizes geographically diverse peers for resilience
  • Replaces low-quality connections opportunistically
  • Handles churn with automatic reconnection
  • Monitors connection health with periodic heartbeats

Connection Lifecycle

Plain Text

1. Discovery → Find peer via DHT, mDNS, or gossip
2. Handshake → Exchange capabilities and authenticate
3. Connection → Establish encrypted channel
4. Monitoring → Periodic health checks (every 30s)
5. Optimization → Replace if better peer available
6. Graceful close or timeout → Clean disconnection

Failed connections are retried with exponential backoff. After repeated failures, peers are temporarily blacklisted to avoid wasting resources.
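Exponential backoff doubles the wait after each failed attempt, up to a cap. A minimal sketch; the base delay and cap values here are assumptions, not Beam's actual settings.

```python
# Exponential backoff for failed connections (sketch; base and cap
# values are assumptions, not Beam's actual settings).
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before retry number `attempt` (0-based): 1s, 2s, 4s, ... capped at 60s."""
    return min(cap, base * (2 ** attempt))

# Delays for attempts 0..6: 1, 2, 4, 8, 16, 32, 60
```

Once the cap is hit repeatedly, the peer can be moved to a temporary blacklist instead of being retried forever.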

Performance Targets

The P2P layer is designed to meet these performance goals:

Discovery Performance

  • Service discovery: <2 seconds
  • Service registration: <500ms
  • Network join time: <10 seconds
  • Discovery success rate: >98%

Routing Performance

  • Route calculation: <100ms
  • Failover time: <5 seconds
  • Additional routing latency: <50ms

Scalability

  • Maximum network size: 1M+ peers
  • Connections per peer: 25-50
  • DHT replication factor: 20
  • Gossip fanout: 6

Development Status

The P2P networking layer is being developed in phases:

Phase 1: Core Discovery (Completed)

  • Basic Kademlia DHT implementation
  • Service registration and lookup
  • Bootstrap node connectivity
  • Local mDNS discovery

Phase 2: Routing System (In Progress)

  • Quality-aware path selection
  • Real-time metrics collection
  • Multi-path routing
  • Automatic failover

Phase 3: Advanced Features (Planned)

  • ML-based route prediction
  • Bandwidth reservation
  • Adaptive load balancing
  • Network analytics dashboard

Related Documentation

The P2P networking layer is under active development. For the latest updates, follow the project on GitHub.