P2P Networking
Beam's P2P networking layer enables decentralized service discovery and intelligent routing without relying on central servers. This architecture provides censorship resistance and improved reliability, and eliminates single points of failure.
Note: The P2P networking layer is in active development. Core Tor functionality is available today, while the full decentralized mesh network is being built.
Network Architecture
The P2P network forms a self-organizing mesh where peers can discover and connect to services without any central coordination. Each peer maintains connections to multiple other peers, creating redundant paths for communication.
P2P Network Topology
Key properties of this architecture:
- No single point of failure — services remain accessible even if some peers go offline
- Self-healing topology — the network automatically routes around failures
- Censorship resistant — no central authority can block services
- Horizontal scaling — capacity increases as more peers join
Kademlia DHT
Kademlia is a peer-to-peer distributed hash table (DHT) that Beam uses for service discovery. It's the same algorithm used by BitTorrent, IPFS, and Ethereum for decentralized data storage and retrieval.
How Kademlia Works
Each node in the network has a unique 256-bit identifier. When you want to find a service, Kademlia calculates the "distance" between node IDs using XOR (exclusive or). Nodes that are "closer" to a service's key are more likely to know about it.
```
// XOR Distance Calculation
distance(node_a, node_b) = node_a.id XOR node_b.id

// Example: Finding service "myapp.local"
1. Hash "myapp.local" to get a 256-bit key
2. Query nodes closest to that key
3. Each node returns even closer nodes
4. Repeat until service record is found

// Lookup complexity: O(log n)
// For a network of 1 million peers, ~20 hops max
```
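The XOR metric above can be sketched in runnable form (Python here for illustration; `node_id` and `xor_distance` are hypothetical helper names, not Beam's API):

```python
import hashlib

def node_id(name: str) -> int:
    """Derive a 256-bit identifier by hashing a name (illustrative)."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR of the two IDs."""
    return a ^ b

# Finding "myapp.local": hash the name to a key, then rank peers by distance
key = node_id("myapp.local")
peers = [node_id(f"peer-{i}") for i in range(8)]
closest = sorted(peers, key=lambda p: xor_distance(p, key))
```

XOR's properties (a node's distance to itself is zero, the metric is symmetric, and every step toward the key halves the remaining search space) are what make lookups converge in O(log n) steps.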
Service Records
When a service registers with the DHT, it stores a record containing connection information. This record is replicated across multiple nodes for redundancy.
```
ServiceRecord {
  service_id: "myapp.local",
  peer_id: "QmYjtig7VJQ6XsnUjqqJvj7QaMcCAwtrgNdahSiFoSdGKj",
  endpoints: [
    "/ip4/192.168.1.100/tcp/8080",
    "/ip6/::1/tcp/8080"
  ],
  metadata: {
    version: "1.0",
    protocol: "http",
    capabilities: ["websocket", "http2"]
  },
  ttl: 3600,         // Record expires in 1 hour
  signature: "0x..." // Proves ownership
}
```
K-Buckets
Each node maintains a routing table organized into "k-buckets" — groups of peer contacts sorted by XOR distance. When looking up a service, the node queries peers from the appropriate bucket, progressively narrowing down to the target.
- Bucket 0 contains peers with distance 1 (closest; distance 0 is the node itself)
- Bucket 1 contains peers with distance 2-3
- Bucket n contains peers with distance 2^n to 2^(n+1) - 1
- Each bucket holds up to k peers (typically k=20)
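Under those ranges, a peer's bucket index is just the position of the highest set bit of the XOR distance. A minimal sketch (the function name is an assumption, not Beam's API):

```python
def bucket_index(local_id: int, peer_id: int) -> int:
    """Bucket n holds peers whose XOR distance lies in [2^n, 2^(n+1) - 1]."""
    distance = local_id ^ peer_id
    if distance == 0:
        raise ValueError("a node does not store itself in its routing table")
    return distance.bit_length() - 1

bucket_index(0b1000, 0b1001)  # distance 1 -> bucket 0
bucket_index(0b0000, 0b0101)  # distance 5 -> bucket 2 (range 4-7)
```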
Gossip Protocol
While Kademlia handles service discovery, the gossip protocol handles real-time propagation of announcements. When a service comes online or changes, the information spreads through the network like a rumor.
Message Types
- `ANNOUNCE` — new service available
- `DEPARTURE` — service going offline
- `HEARTBEAT` — peer liveness check
- `ROUTE_UPDATE` — routing table changes
Propagation
Each peer forwards messages to a random subset of its neighbors (the "fanout"). With a fanout of 6, a message can blanket a network of 1,000 peers in roughly 4 rounds.
```
Round 1: Origin sends to 6 peers
Round 2: 6 peers × 6 = 36 peers reached
Round 3: 36 × 6 = 216 peers reached
Round 4: 216 × 6 = 1,296 peers reached

// Propagation time: O(log n)
// With deduplication to prevent message floods
```
Messages include a TTL (time-to-live) that decrements with each hop, preventing infinite propagation loops. Peers track recently seen message IDs to avoid processing duplicates.
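Both mechanisms (the decrementing TTL and the seen-message cache) can be sketched together. This is illustrative Python; the class and field names are assumptions, not Beam's implementation:

```python
import random

FANOUT = 6  # each peer forwards to this many random neighbors

class GossipNode:
    def __init__(self, name: str):
        self.name = name
        self.neighbors: list["GossipNode"] = []
        self.seen: set[str] = set()  # recently seen message IDs (dedup)

    def receive(self, msg_id: str, ttl: int) -> None:
        if msg_id in self.seen or ttl <= 0:
            return  # drop duplicates and expired messages
        self.seen.add(msg_id)
        # forward to a random subset of neighbors with decremented TTL
        sample = random.sample(self.neighbors, min(FANOUT, len(self.neighbors)))
        for peer in sample:
            peer.receive(msg_id, ttl - 1)
```

In a real network the seen set would be bounded (for example an LRU cache keyed by message ID) so it does not grow without limit.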
Intelligent Routing
Beam's routing system continuously monitors network conditions and selects optimal paths based on multiple quality metrics. This enables automatic failover and load balancing.
Quality Metrics
Routes are scored based on:
- Latency (40%) — round-trip time to the destination
- Bandwidth (30%) — available throughput capacity
- Reliability (20%) — historical success rate
- Congestion (10%) — current load level
```
// Route scoring formula
score = (latency_score × 0.4) +
        (bandwidth_score × 0.3) +
        (reliability_score × 0.2) -
        (congestion_penalty × 0.1)

// Routes are re-evaluated every 30 seconds
// Traffic shifts automatically to better paths
```
Multi-Path Routing
When available, Beam can split traffic across multiple paths simultaneously. This provides:
- Automatic failover when one path fails
- Load balancing across available routes
- Increased aggregate bandwidth
- Reduced dependency on any single path
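One simple way to realize these properties is weighted random selection over the scored paths; removing a failed path automatically shifts its share to the survivors. A sketch, not Beam's actual scheduler:

```python
import random

def pick_path(paths: dict[str, float]) -> str:
    """Choose a path with probability proportional to its quality score."""
    names = list(paths)
    return random.choices(names, weights=[paths[n] for n in names], k=1)[0]

paths = {"path-a": 0.78, "path-b": 0.65}
chosen = pick_path(paths)   # ~55% path-a, ~45% path-b
del paths["path-a"]         # simulate a path failure
fallback = pick_path(paths) # all traffic fails over to path-b
```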
Peer Discovery
New peers join the network through several discovery mechanisms:
- Bootstrap nodes — well-known entry points for initial connection
- mDNS — local network discovery without internet access
- DHT queries — find peers with specific capabilities
- Peer exchange — learn about peers from existing connections
NAT Traversal
Many peers are behind NAT (Network Address Translation), which makes direct connections challenging. Beam uses several techniques to establish connections:
- STUN — discovers your public IP and port mapping, enabling direct connections when both peers have compatible NAT types
- Hole punching — coordinated connection attempts that "punch" through NAT by having both peers send packets simultaneously
- TURN relay — fallback to relay servers when direct connection is impossible (e.g., symmetric NAT)
Beam attempts direct connection first, falling back to relay only when necessary. Relay nodes are other Beam peers that volunteer to forward traffic.
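That fallback order can be sketched as a simple strategy chain. The strategy functions below are stubs standing in for real STUN-assisted dialing, hole punching, and relaying; none of these names are Beam's API:

```python
from typing import Callable, Optional

Strategy = Callable[[str], Optional[str]]

def connect(peer: str, strategies: list[Strategy]) -> str:
    """Try each connection strategy in order; return the first that succeeds."""
    for strategy in strategies:
        conn = strategy(peer)
        if conn is not None:
            return conn
    raise ConnectionError(f"all strategies failed for {peer}")

# Illustrative stubs: direct dial and hole punch fail, relay succeeds
def direct(peer):     return None             # blocked (e.g. symmetric NAT)
def hole_punch(peer): return None             # simultaneous open failed
def turn_relay(peer): return f"relay:{peer}"  # volunteer relay reachable

conn = connect("peer-1", [direct, hole_punch, turn_relay])
```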
Connection Management
Each peer maintains a target number of connections (default: 25-50). The connection manager continuously optimizes the peer set:
- Prioritizes geographically diverse peers for resilience
- Replaces low-quality connections opportunistically
- Handles churn with automatic reconnection
- Monitors connection health with periodic heartbeats
Connection Lifecycle
```
1. Discovery    → Find peer via DHT, mDNS, or gossip
2. Handshake    → Exchange capabilities and authenticate
3. Connection   → Establish encrypted channel
4. Monitoring   → Periodic health checks (every 30s)
5. Optimization → Replace if better peer available
6. Graceful close or timeout → Clean disconnection
```
Failed connections are retried with exponential backoff. After repeated failures, peers are temporarily blacklisted to avoid wasting resources.
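A common shape for that retry logic is exponential backoff with "full jitter" so reconnecting peers don't stampede in sync. The base and cap values below are illustrative, not Beam's defaults:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before retry number `attempt`: doubles per failure, capped, jittered."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

# Attempt 0 waits up to 1s, attempt 3 up to 8s, attempt 10+ up to the 60s cap
delays = [backoff_delay(a) for a in range(12)]
```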
Performance Targets
The P2P layer is designed to meet these performance goals:
Discovery Performance
- Service discovery: <2 seconds
- Service registration: <500ms
- Network join time: <10 seconds
- Discovery success rate: >98%
Routing Performance
- Route calculation: <100ms
- Failover time: <5 seconds
- Additional routing latency: <50ms
Scalability
- Maximum network size: 1M+ peers
- Connections per peer: 25-50
- DHT replication factor: 20
- Gossip fanout: 6
Development Status
The P2P networking layer is being developed in phases:
Phase 1: Core Discovery (Completed)
- Basic Kademlia DHT implementation
- Service registration and lookup
- Bootstrap node connectivity
- Local mDNS discovery
Phase 2: Routing System (In Progress)
- Quality-aware path selection
- Real-time metrics collection
- Multi-path routing
- Automatic failover
Phase 3: Advanced Features (Planned)
- ML-based route prediction
- Bandwidth reservation
- Adaptive load balancing
- Network analytics dashboard
Related Documentation
- Architecture — see how P2P fits into the overall system design
- Tor Network — current tunneling implementation using Tor
- Security — P2P security considerations and threat model
- Why Decentralized? — benefits of the P2P architecture approach