# elide tun derp

Run a standalone DERP (Designated Encrypted Relay for Packets) server. When two Tailscale peers cannot establish a direct WireGuard connection -- due to NAT, firewalls, or restrictive networks -- they relay encrypted WireGuard packets through a DERP server instead. The relay sees only opaque, NaCl-box-authenticated ciphertext; it cannot decrypt the WireGuard traffic.

```bash
elide tun derp
```

The server is single-threaded with native async I/O. All connections share one event loop with no locks on the hot path. Default capacity is 65,536 concurrent connections.

---

## Prerequisites

- A public IP or load balancer -- peers must be able to reach the relay
- Port 443/tcp (recommended for production) and optionally port 3478/udp for STUN

---

## CLI flags

```bash
elide tun derp [OPTIONS]
```

| Flag | Default | Description |
| --- | --- | --- |
| `--host` | `0.0.0.0` | Bind address for the DERP relay server |
| `--port` | `3340` | TCP port for the DERP relay. Clients connect via HTTP upgrade on this port |
| `--stun-port` | none (disabled) | Enable a STUN server on this UDP port. The standard STUN port is 3478. Allows clients to discover their public IP and port for NAT traversal |
| `--tls-cert` | none | Path to a PEM-encoded TLS certificate file. When provided with `--tls-key`, the DERP server listens over TLS |
| `--tls-key` | none | Path to a PEM-encoded TLS private key file |
| `--mesh-with` | none | Comma-separated DERP server URLs for mesh mode (e.g., `derp://10.0.0.2:3340,derp://10.0.0.3:3340`). Packets for unknown clients are forwarded to mesh peers |
| `--json` | `false` | Output server status as JSON after startup |
---

## Examples

### Basic relay

Start a relay on the default port:

```bash
elide tun derp
```

```
Starting DERP relay on 0.0.0.0:3340...
DERP relay running on 0.0.0.0:3340
Press Ctrl+C to stop.
```

### Custom port with STUN

Run DERP on port 443 with a STUN server on the standard port:

```bash
elide tun derp --port 443 --stun-port 3478
```

### With TLS (production)

Tailscale clients expect DERP servers to run on port 443 with TLS in production:

```bash
elide tun derp \
  --port 443 \
  --tls-cert /etc/letsencrypt/live/derp.example.com/fullchain.pem \
  --tls-key /etc/letsencrypt/live/derp.example.com/privkey.pem \
  --stun-port 3478
```
Running without TLS is fine for testing or when the relay sits behind a TLS-terminating load balancer.

### Mesh deployment

Deploy multiple DERP relays across regions and mesh them together. Each relay forwards packets for unknown clients to its mesh peers via ForwardPacket frames.

On us-east (10.0.0.1):

```bash
elide tun derp --port 3340 --mesh-with derp://10.0.0.2:3340,derp://10.0.0.3:3340
```

On eu-west (10.0.0.2):

```bash
elide tun derp --port 3340 --mesh-with derp://10.0.0.1:3340,derp://10.0.0.3:3340
```

On ap-south (10.0.0.3):

```bash
elide tun derp --port 3340 --mesh-with derp://10.0.0.1:3340,derp://10.0.0.2:3340
```

Each mesh peer connection is maintained with configurable backoff and automatic reconnection. The mesh client sends ForwardPacket frames containing both source and destination public keys (64 bytes of key material + the encrypted WireGuard packet). If a forwarding client falls behind, excess packets are silently dropped -- this is safe because WireGuard retransmits at the tunnel layer.
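The drop-on-backpressure behavior described above can be sketched as a bounded per-peer queue. This is an illustrative model, not the server's implementation; the capacity value is a placeholder:

```python
from collections import deque

class ForwardQueue:
    """Bounded queue of frames destined for one mesh peer.

    When the peer cannot keep up, excess frames are silently dropped --
    safe because WireGuard retransmits at the tunnel layer.
    """

    def __init__(self, capacity: int = 1024):  # capacity is illustrative
        self.capacity = capacity
        self.frames: deque = deque()
        self.dropped = 0

    def push(self, frame: bytes) -> bool:
        """Queue a frame; return False (and count a drop) when full."""
        if len(self.frames) >= self.capacity:
            self.dropped += 1
            return False
        self.frames.append(frame)
        return True
```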

### JSON output

```bash
elide tun derp --json
```

```json
{
  "status": "running",
  "bind": "0.0.0.0:3340",
  "mesh_peers": 0
}
```

---

## When to run your own DERP relay

Tailscale provides public DERP relays worldwide. Here is when self-hosting makes sense:

- **Latency.** Place a relay close to your users. If your nodes are concentrated in a region without a nearby Tailscale DERP, a self-hosted relay eliminates the extra round trip.
- **Privacy.** DERP relays see encrypted WireGuard packets (they cannot decrypt the content), but they do see source and destination public keys and connection metadata. Self-hosting keeps this metadata on your infrastructure.
- **Headscale.** If you run Headscale as your coordination server, you need your own DERP relays. Tailscale's public DERP servers require a Tailscale coordination server.
- **Reliability.** For production workloads, a dedicated relay with predictable capacity beats a shared public relay.
DERP is a relay of last resort. Tailscale's DISCO protocol always attempts direct peer-to-peer connections first. DERP only carries traffic when NAT traversal fails entirely. In most networks, DERP handles the initial connection while DISCO finds a direct path, after which DERP traffic drops to zero.

---

## Architecture

### Handshake state machine

Each connection progresses through three states:

```
on_accept --> AwaitingHttpUpgrade
          --> parse "GET /derp HTTP/1.1\r\n...\r\nUpgrade: DERP\r\n\r\n"
          --> send 101 Switching Protocols + ServerKey frame
          --> AwaitingClientInfo
          --> parse ClientInfo frame (NaCl-box decrypt with X25519)
          --> send ServerInfo frame, register peer in connection table
          --> Ready (relay loop)
```

The server generates a random X25519 keypair at startup. The server key is sent to each client in a ServerKey frame containing the 8-byte DERP magic followed by the 32-byte server public key. The client responds with a ClientInfo frame: its 32-byte public key, a 24-byte NaCl nonce, and NaCl-box-encrypted JSON containing the protocol version.
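As a concrete sketch, the ServerKey frame and ClientInfo payload layout described above can be assembled and split like this. The exact magic constant is an assumption to verify against the implementation; the frame-type value comes from the frame protocol section:

```python
import struct

SERVER_KEY = 0x01  # ServerKey frame type
# 8-byte DERP magic ("DERP" + key emoji in UTF-8); this exact constant is an
# assumption -- verify against the implementation before relying on it.
DERP_MAGIC = b"DERP\xf0\x9f\x94\x91"

def server_key_frame(server_public_key: bytes) -> bytes:
    """Build a ServerKey frame: [type: u8][length: u32 BE] header,
    then the 8-byte magic followed by the 32-byte X25519 public key."""
    if len(server_public_key) != 32:
        raise ValueError("expected a 32-byte X25519 public key")
    payload = DERP_MAGIC + server_public_key
    return struct.pack(">BI", SERVER_KEY, len(payload)) + payload

def split_client_info(payload: bytes):
    """Split a ClientInfo payload into (client public key, NaCl nonce,
    NaCl-box ciphertext of the JSON client info)."""
    return payload[:32], payload[32:56], payload[56:]
```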

### Frame protocol

DERP uses a binary framing protocol with a 5-byte header: `[type: u8][length: u32 BE]`. The maximum frame payload is 1 MiB.
| Type | Value | Direction | Description |
| --- | --- | --- | --- |
| ServerKey | 0x01 | S->C | Server's public key (magic + 32B key) |
| ClientInfo | 0x02 | C->S | Client's public key + encrypted client info |
| ServerInfo | 0x03 | S->C | Encrypted server info |
| SendPacket | 0x04 | C->S | Send WireGuard packet to a peer (dest key + data) |
| RecvPacket | 0x05 | S->C | Receive WireGuard packet from a peer (source key + data) |
| KeepAlive | 0x06 | Both | Empty keepalive frame |
| NotePreferred | 0x07 | C->S | Mark this server as the client's preferred DERP |
| PeerGone | 0x08 | S->C | A peer has disconnected |
| PeerPresent | 0x09 | S->C | A peer has connected |
| ForwardPacket | 0x0a | S->S | Mesh forwarding (source key + dest key + data) |
| WatchConns | 0x10 | C->S | Subscribe to peer presence changes |
| ClosePeer | 0x11 | C->S | Close a peer connection |
| Ping | 0x12 | C->S | Ping with 8 bytes of opaque data |
| Pong | 0x13 | S->C | Pong echoing the 8 bytes from Ping |
| Health | 0x14 | S->C | Health check (UTF-8 string, empty = healthy) |
| Restarting | 0x15 | S->C | Server is restarting (reconnect_in + try_for, both u32 BE) |
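A minimal encoder/decoder for this header layout, as a sketch:

```python
import struct

MAX_FRAME_LEN = 1 << 20  # 1 MiB payload cap

def encode_frame(frame_type: int, payload: bytes) -> bytes:
    """Prepend the 5-byte DERP header: [type: u8][length: u32 BE]."""
    if len(payload) > MAX_FRAME_LEN:
        raise ValueError("frame payload exceeds 1 MiB")
    return struct.pack(">BI", frame_type, len(payload)) + payload

def decode_frame(buf: bytes):
    """Return (type, payload, remaining bytes); raise on a short buffer."""
    if len(buf) < 5:
        raise ValueError("short header")
    frame_type, length = struct.unpack(">BI", buf[:5])
    if len(buf) < 5 + length:
        raise ValueError("incomplete frame")
    return frame_type, buf[5:5 + length], buf[5 + length:]
```

For example, a SendPacket frame is `encode_frame(0x04, dest_key + wireguard_data)`.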

### Relay forwarding

When the server receives a SendPacket frame from a Ready peer:

1. Extract the 32-byte destination public key and the WireGuard data payload.
2. Look up the destination key in the local connection table (`HashMap<[u8; 32], i32>` mapping keys to socket fds).
3. If found locally, build a RecvPacket frame with the source peer's key and the data, and send it to the destination socket.
4. If not found locally and mesh peers are configured, build a ForwardPacket frame (source key + dest key + data) and forward it to each mesh peer.

The server extracts frame payloads from the receive buffer without copying per relayed packet.
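The four steps can be sketched as follows; `conns` and `mesh_peers` are hypothetical stand-ins for the connection table and the mesh client list:

```python
import struct

RECV_PACKET = 0x05
FORWARD_PACKET = 0x0a

def relay(src_key: bytes, frame_payload: bytes, conns: dict, mesh_peers: list):
    """Handle one SendPacket payload from a Ready peer."""
    dst_key, data = frame_payload[:32], frame_payload[32:]        # step 1
    sock = conns.get(dst_key)                                     # step 2
    if sock is not None:                                          # step 3
        inner = src_key + data
        sock.send(struct.pack(">BI", RECV_PACKET, len(inner)) + inner)
    else:                                                         # step 4
        inner = src_key + dst_key + data
        frame = struct.pack(">BI", FORWARD_PACKET, len(inner)) + inner
        for peer in mesh_peers:
            peer.send(frame)
```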

### Connection health

The server maintains connection health through two mechanisms:

- **Idle timeout** (default: 120 seconds) -- connections with no activity are evicted
- **Server-initiated pings** (default: every 60 seconds) -- peers with no activity receive a Ping frame. If they do not respond with a Pong within two ping intervals, they are evicted
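One way to sketch the combined rule, assuming the server tracks a per-connection last-activity timestamp and the time of the last unanswered ping (a simplified model, not the server's exact bookkeeping):

```python
from typing import Optional

IDLE_TIMEOUT = 120.0    # seconds (default from above)
PING_INTERVAL = 60.0    # seconds (default from above)

def health_action(now: float, last_activity: float,
                  last_ping_sent: Optional[float]) -> str:
    """Decide what to do with one connection: 'ok', 'ping', or 'evict'."""
    idle = now - last_activity
    if idle >= IDLE_TIMEOUT:
        return "evict"                              # idle timeout
    if idle >= PING_INTERVAL:
        if last_ping_sent is None:
            return "ping"                           # send a server-initiated Ping
        if now - last_ping_sent >= 2 * PING_INTERVAL:
            return "evict"                          # no Pong within 2 intervals
    return "ok"
```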

The server supports WatchConns subscribers: clients that opt in receive PeerPresent and PeerGone notifications as peers connect and disconnect.

### Event loop

The DERP server runs entirely on a single-threaded event loop using native async I/O for all socket operations (accept, read, write). The event loop processes completions in batches with pooled buffers for efficient frame assembly.

The server runs on a dedicated thread and shuts down gracefully when the process exits or `shutdown()` is called.

---

## Production deployment

### Systemd

```ini
[Unit]
Description=Elide DERP Relay
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/elide tun derp \
  --port 443 \
  --tls-cert /etc/letsencrypt/live/derp.example.com/fullchain.pem \
  --tls-key /etc/letsencrypt/live/derp.example.com/privkey.pem \
  --stun-port 3478
Restart=on-failure
RestartSec=5s

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadOnlyPaths=/etc/letsencrypt

[Install]
WantedBy=multi-user.target
```

### Docker

```dockerfile
FROM ghcr.io/elide-dev/elide:latest

EXPOSE 443/tcp 3478/udp

ENTRYPOINT ["elide", "tun", "derp", \
  "--port", "443", \
  "--stun-port", "3478"]
```

### Firewall rules

| Port | Protocol | Purpose |
| --- | --- | --- |
| 443 (or `--port`) | TCP | DERP relay (HTTP upgrade to binary protocol) |
| 3478 (or `--stun-port`) | UDP | STUN endpoint discovery (RFC 8489) |
---

## Server defaults

| Parameter | Default | Description |
| --- | --- | --- |
| Private key | Random X25519 | Generated fresh at each startup |
| Max connections | 65,536 | Per-server connection limit |
| Idle timeout | 120 seconds | Time before an inactive connection is evicted |
| Ping interval | 60 seconds | Interval between server-initiated health pings |
| Mesh peers | Empty | Set via the `--mesh-with` CLI flag |

## Mesh forwarding internals

When --mesh-with is specified, the server creates persistent client connections to each mesh peer. These clients connect to the peer servers, perform the same HTTP upgrade + NaCl-box handshake, and maintain connections with automatic reconnection on failure.

When a SendPacket frame arrives for a destination not in the local connection table, the server builds a ForwardPacket frame (96 bytes of overhead: 5-byte header + 32-byte source key + 32-byte dest key + 27 bytes of framing) and sends it to each mesh peer. The receiving mesh peer looks up the destination in its own local table and delivers via RecvPacket if found.
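The receiving side can be sketched as follows; `conns` is a hypothetical stand-in for the peer's local connection table, and the frame-type value comes from the frame protocol table:

```python
import struct

RECV_PACKET = 0x05  # frame type delivered to the local client

def on_forward_packet(payload: bytes, conns: dict) -> bool:
    """Handle a ForwardPacket payload (32B source key + 32B dest key + data):
    look up the destination locally and deliver it as a RecvPacket frame."""
    src_key, dst_key, data = payload[:32], payload[32:64], payload[64:]
    sock = conns.get(dst_key)
    if sock is None:
        return False  # unknown at this peer too; nothing to deliver
    inner = src_key + data
    sock.send(struct.pack(">BI", RECV_PACKET, len(inner)) + inner)
    return True
```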

## Statistics

The server tracks shared statistics accessible via its management handle:

- `peers_connected` -- current number of authenticated peers
- Packet relay counts (accessible via the handle's API)

---

## See also

- `elide tun` -- Overview of tunnel, DERP relay, and all subcommands
- `tun up` -- Bring up a WireGuard tunnel via Tailscale