# elide tun derp
Run a standalone DERP (Designated Encrypted Relay for Packets) server. When two Tailscale peers cannot establish a direct WireGuard connection -- due to NAT, firewalls, or restrictive networks -- they relay encrypted WireGuard packets through a DERP server instead. The relay sees only opaque, NaCl-box-authenticated ciphertext; it cannot decrypt the WireGuard traffic.
```shell
elide tun derp
```

The server is single-threaded with native async I/O. All connections share one event loop with no locks on the hot path. Default capacity is 65,536 concurrent connections.
---
## Prerequisites
- A public IP or load balancer -- peers must be able to reach the relay
- Port 443/tcp (recommended for production) and optionally port 3478/udp for STUN
---
## CLI flags
```shell
elide tun derp [OPTIONS]
```

| Flag | Default | Description |
|---|---|---|
| --host | 0.0.0.0 | Bind address for the DERP relay server |
| --port | 3340 | TCP port for the DERP relay. Clients connect via HTTP upgrade on this port |
| --stun-port | none (disabled) | Enable a STUN server on this UDP port. The standard STUN port is 3478. Allows clients to discover their public IP and port for NAT traversal |
| --tls-cert | none | Path to a PEM-encoded TLS certificate file. When provided with --tls-key, the DERP server listens over TLS |
| --tls-key | none | Path to a PEM-encoded TLS private key file |
| --mesh-with | none | Comma-separated DERP server URLs for mesh mode (e.g., derp://10.0.0.2:3340,derp://10.0.0.3:3340). Packets for unknown clients are forwarded to mesh peers |
| --json | false | Output server status as JSON after startup |
## Examples
### Basic relay
Start a relay on the default port:
```shell
elide tun derp
```

```
Starting DERP relay on 0.0.0.0:3340...
DERP relay running on 0.0.0.0:3340
Press Ctrl+C to stop.
```

### Custom port with STUN
Run DERP on port 443 with a STUN server on the standard port:
```shell
elide tun derp --port 443 --stun-port 3478
```

### With TLS (production)
Tailscale clients expect DERP servers to run on port 443 with TLS in production:
```shell
elide tun derp \
  --port 443 \
  --tls-cert /etc/letsencrypt/live/derp.example.com/fullchain.pem \
  --tls-key /etc/letsencrypt/live/derp.example.com/privkey.pem \
  --stun-port 3478
```

### Mesh deployment
Deploy multiple DERP relays across regions and mesh them together. Each relay forwards packets for unknown clients to its mesh peers via ForwardPacket frames.
On us-east (10.0.0.1):

```shell
elide tun derp --port 3340 --mesh-with derp://10.0.0.2:3340,derp://10.0.0.3:3340
```

On eu-west (10.0.0.2):

```shell
elide tun derp --port 3340 --mesh-with derp://10.0.0.1:3340,derp://10.0.0.3:3340
```

On ap-south (10.0.0.3):

```shell
elide tun derp --port 3340 --mesh-with derp://10.0.0.1:3340,derp://10.0.0.2:3340
```

Each mesh peer connection is maintained with configurable backoff and automatic reconnection. The mesh client sends ForwardPacket frames containing both source and destination public keys (64 bytes of key material plus the encrypted WireGuard packet). If a forwarding client falls behind, excess packets are silently dropped; this is safe because WireGuard retransmits at the tunnel layer.
## JSON output
```shell
elide tun derp --json
```

```json
{
  "status": "running",
  "bind": "0.0.0.0:3340",
  "mesh_peers": 0
}
```

---
## When to run your own DERP relay
Tailscale provides public DERP relays worldwide. Here is when self-hosting makes sense:
- **Latency.** Place a relay close to your users. If your nodes are concentrated in a region without a nearby Tailscale DERP, a self-hosted relay eliminates the extra round trip.
- **Privacy.** DERP relays see encrypted WireGuard packets (they cannot decrypt the content), but they do see source and destination public keys and connection metadata. Self-hosting keeps this metadata on your infrastructure.
- **Headscale.** If you run Headscale as your coordination server, you need your own DERP relays. Tailscale's public DERP servers require a Tailscale coordination server.
- **Reliability.** For production workloads, a dedicated relay with predictable capacity beats a shared public relay.

---
## Architecture
### Handshake state machine
Each connection progresses through three states:
```
on_accept --> AwaitingHttpUpgrade
  --> parse "GET /derp HTTP/1.1\r\n...\r\nUpgrade: DERP\r\n\r\n"
  --> send 101 Switching Protocols + ServerKey frame
  --> AwaitingClientInfo
  --> parse ClientInfo frame (NaCl-box decrypt with X25519)
  --> send ServerInfo frame, register peer in connection table
  --> Ready (relay loop)
```

The server generates a random X25519 keypair at startup. The server key is sent to each client in a ServerKey frame containing the 8-byte DERP magic followed by the 32-byte server public key. The client responds with a ClientInfo frame: its 32-byte public key, a 24-byte NaCl nonce, and NaCl-box-encrypted JSON containing the protocol version.
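As a sketch, the ServerKey frame from the handshake above is a plain byte layout. The magic value below is a hypothetical 8-byte stand-in, not the real DERP magic:

```python
import os
import struct

FRAME_SERVER_KEY = 0x01
MAGIC = b"DERPMAGC"  # hypothetical 8-byte stand-in for the real DERP magic

def server_key_frame(server_pub: bytes) -> bytes:
    """Build a ServerKey frame: [type u8][length u32 BE][magic 8B][key 32B]."""
    if len(server_pub) != 32:
        raise ValueError("expected a 32-byte X25519 public key")
    payload = MAGIC + server_pub
    return struct.pack(">BI", FRAME_SERVER_KEY, len(payload)) + payload

frame = server_key_frame(os.urandom(32))
assert len(frame) == 5 + 8 + 32  # header + magic + public key = 45 bytes
```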
### Frame protocol
DERP uses a binary framing protocol with a 5-byte header: `[type: u8][length: u32 BE]`. The maximum frame payload is 1 MiB.
| Type | Value | Direction | Description |
|---|---|---|---|
| ServerKey | 0x01 | S->C | Server's public key (magic + 32B key) |
| ClientInfo | 0x02 | C->S | Client's public key + encrypted client info |
| ServerInfo | 0x03 | S->C | Encrypted server info |
| SendPacket | 0x04 | C->S | Send WireGuard packet to a peer (dest key + data) |
| RecvPacket | 0x05 | S->C | Receive WireGuard packet from a peer (source key + data) |
| KeepAlive | 0x06 | Both | Empty keepalive frame |
| NotePreferred | 0x07 | C->S | Mark this server as the client's preferred DERP |
| PeerGone | 0x08 | S->C | A peer has disconnected |
| PeerPresent | 0x09 | S->C | A peer has connected |
| ForwardPacket | 0x0a | S->S | Mesh forwarding (source key + dest key + data) |
| WatchConns | 0x10 | C->S | Subscribe to peer presence changes |
| ClosePeer | 0x11 | C->S | Close a peer connection |
| Ping | 0x12 | C->S | Ping with 8 bytes of opaque data |
| Pong | 0x13 | S->C | Pong echoing the 8 bytes from Ping |
| Health | 0x14 | S->C | Health check (UTF-8 string, empty = healthy) |
| Restarting | 0x15 | S->C | Server is restarting (reconnect_in + try_for, both u32 BE) |
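A minimal sketch of encoding and incrementally decoding this framing, using the header layout and the 1 MiB payload cap described above (function names are illustrative):

```python
import struct

HEADER = struct.Struct(">BI")  # [type: u8][length: u32 BE]
MAX_FRAME = 1 << 20            # 1 MiB payload cap

def encode_frame(ftype: int, payload: bytes) -> bytes:
    if len(payload) > MAX_FRAME:
        raise ValueError("frame payload exceeds 1 MiB")
    return HEADER.pack(ftype, len(payload)) + payload

def decode_frame(buf: bytes):
    """Return (type, payload, bytes_consumed), or None if buf is incomplete."""
    if len(buf) < HEADER.size:
        return None
    ftype, length = HEADER.unpack_from(buf)
    if length > MAX_FRAME:
        raise ValueError("oversized frame")
    if len(buf) < HEADER.size + length:
        return None  # wait for more bytes from the socket
    payload = buf[HEADER.size:HEADER.size + length]
    return ftype, payload, HEADER.size + length

ping = encode_frame(0x12, b"12345678")  # Ping carries 8 opaque bytes
assert decode_frame(ping) == (0x12, b"12345678", 13)
```

Returning `None` for a short buffer lets the caller accumulate partial reads in a receive buffer and retry, which matches how a stream-oriented relay has to parse frames.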
### Relay forwarding
When the server receives a SendPacket frame from a Ready peer:
1. Extract the 32-byte destination public key and the WireGuard data payload
2. Look up the destination key in the local connection table (HashMap<[u8; 32], i32> mapping keys to socket fds)
3. If found locally, build a RecvPacket frame with the source peer's key and the data, and send it to the destination socket
4. If not found locally and mesh peers are configured, build a ForwardPacket frame (source key + dest key + data) and forward to each mesh peer
The server extracts frame payloads from the receive buffer without copying per relayed packet.
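The four steps above can be sketched as a lookup-and-rewrap routine. This is an illustrative model, not the server's implementation; `FakeConn` and `mesh_send` are stand-ins:

```python
import struct

FRAME_RECV_PACKET = 0x05
conns = {}  # 32-byte destination public key -> connection handle

def relay(src_key: bytes, payload: bytes, mesh_send) -> None:
    """Route a SendPacket payload: [dest key 32B][WireGuard data]."""
    dest_key, data = payload[:32], payload[32:]   # step 1: split the payload
    body = src_key + data                         # RecvPacket: source key + data
    frame = struct.pack(">BI", FRAME_RECV_PACKET, len(body)) + body
    conn = conns.get(dest_key)                    # step 2: local table lookup
    if conn is not None:
        conn.send(frame)                          # step 3: local delivery
    else:
        mesh_send(src_key, dest_key, data)        # step 4: hand off to mesh peers

class FakeConn:
    def __init__(self):
        self.sent = []
    def send(self, frame):
        self.sent.append(frame)

a_key, b_key = b"A" * 32, b"B" * 32
conns[b_key] = FakeConn()
relay(a_key, b_key + b"wg-data", mesh_send=lambda *args: None)
assert conns[b_key].sent[0][5:37] == a_key  # RecvPacket carries the source key
```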
### Connection health
The server maintains connection health through two mechanisms:
- Idle timeout (default: 120 seconds) -- Connections with no activity are evicted
- Server-initiated pings (default: every 60 seconds) -- Peers with no activity receive a Ping frame. If they do not respond with a Pong within two ping intervals, they are evicted
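The two mechanisms combine into a simple per-tick decision. A minimal sketch, assuming the defaults above and an illustrative `health_action` helper:

```python
import time

IDLE_TIMEOUT = 120.0   # seconds before an inactive connection is evicted
PING_INTERVAL = 60.0   # server-initiated ping cadence

def health_action(last_activity: float, last_pong: float, now: float) -> str:
    """Decide what to do with a connection on each health-check tick."""
    if now - last_activity >= IDLE_TIMEOUT:
        return "evict"                        # idle timeout fired
    if now - last_pong >= 2 * PING_INTERVAL:
        return "evict"                        # missed two ping intervals
    if now - last_activity >= PING_INTERVAL:
        return "ping"                         # quiet peer: probe it
    return "ok"

now = time.time()
assert health_action(now - 130, now - 10, now) == "evict"
assert health_action(now - 70, now - 70, now) == "ping"
assert health_action(now - 5, now - 5, now) == "ok"
```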
The server supports WatchConns subscribers: clients that opt in receive PeerPresent and PeerGone notifications as peers connect and disconnect.
### Event loop
The DERP server runs entirely on a single-threaded event loop using native async I/O for all socket operations (accept, read, write). The event loop processes completions in batches with pooled buffers for efficient frame assembly.
The server runs on a dedicated thread and shuts down gracefully when the process exits or shutdown() is called.
---
## Production deployment
### Systemd
```ini
[Unit]
Description=Elide DERP Relay
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/elide tun derp \
  --port 443 \
  --tls-cert /etc/letsencrypt/live/derp.example.com/fullchain.pem \
  --tls-key /etc/letsencrypt/live/derp.example.com/privkey.pem \
  --stun-port 3478
Restart=on-failure
RestartSec=5s

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadOnlyPaths=/etc/letsencrypt

[Install]
WantedBy=multi-user.target
```

### Docker
```dockerfile
FROM ghcr.io/elide-dev/elide:latest
EXPOSE 443/tcp 3478/udp
ENTRYPOINT ["elide", "tun", "derp", \
  "--port", "443", \
  "--stun-port", "3478"]
```

### Firewall rules
| Port | Protocol | Purpose |
|---|---|---|
| 443 (or --port) | TCP | DERP relay (HTTP upgrade to binary protocol) |
| 3478 (or --stun-port) | UDP | STUN endpoint discovery (RFC 8489) |
### Server defaults
| Parameter | Default | Description |
|---|---|---|
| Private key | Random X25519 | Generated fresh at each startup |
| Max connections | 65,536 | Per-server connection limit |
| Idle timeout | 120 seconds | Time before an inactive connection is evicted |
| Ping interval | 60 seconds | Interval between server-initiated health pings |
| Mesh peers | Empty | Set via the --mesh-with CLI flag |
### Mesh forwarding internals
When --mesh-with is specified, the server creates persistent client connections to each mesh peer. These clients connect to the peer servers, perform the same HTTP upgrade + NaCl-box handshake, and maintain connections with automatic reconnection on failure.
When a SendPacket frame arrives for a destination not in the local connection table, the server builds a ForwardPacket frame (96 bytes of overhead: 5-byte header + 32-byte source key + 32-byte dest key + 27 bytes of framing) and sends it to each mesh peer. The receiving mesh peer looks up the destination in its own local table and delivers via RecvPacket if found.
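The core ForwardPacket layout (header plus both keys ahead of the data) can be sketched as follows; any additional mesh-level framing mentioned above is not modeled here:

```python
import struct

FRAME_FORWARD_PACKET = 0x0A

def forward_packet_frame(src_key: bytes, dest_key: bytes, data: bytes) -> bytes:
    """ForwardPacket payload: [source key 32B][dest key 32B][WireGuard data]."""
    if len(src_key) != 32 or len(dest_key) != 32:
        raise ValueError("keys must be 32 bytes")
    body = src_key + dest_key + data
    return struct.pack(">BI", FRAME_FORWARD_PACKET, len(body)) + body

frame = forward_packet_frame(b"S" * 32, b"D" * 32, b"pkt")
assert len(frame) == 5 + 64 + 3  # header + both keys precede the data
```

Carrying both keys lets the receiving mesh peer deliver a RecvPacket that still names the original source, so the destination client cannot tell whether the packet was relayed locally or forwarded across the mesh.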
### Statistics
The server tracks shared statistics accessible via its management handle:
- peers_connected -- Current number of authenticated peers
- Packet relay counts (accessible via the handle's API)
---