Tailscale Configuration Reference

Configuration for Tailscale integration with elide serve.

Elide supports three modes of Tailscale integration:

  • Proxy mode (--tailscale): Connects to a running tailscaled daemon. The server binds to the node's Tailscale IP and can use Tailscale certificates.
  • Funnel mode (--funnel): Like proxy mode, but also exposes the server on the public internet via Tailscale Funnel.
  • Direct mode (direct = true): Runs as a standalone Tailscale node without tailscaled. Embeds a WireGuard data plane, DERP relay client, and DISCO endpoint discovery. Best for containerized and serverless deployments where installing tailscaled is impractical.

Proxy and funnel modes require a running tailscaled daemon on the host. Direct mode requires a pre-authentication key (authKey).

See TunnelConfig for traffic bridging settings that control how tunnel traffic reaches the HTTP server.

Quick example

```pkl
tailscale {
  direct = true
  authKey = env("TAILSCALE_AUTH_KEY")
  disco { heartbeatInterval = 2.s }
}
```

> This page is auto-generated from the PKL schema. See the guide for usage examples.

TailscaleConfig

Open class — can be extended.

Top-level Tailscale configuration for a network attachment.

Set direct = true for embedded mode or leave it false to connect to a running tailscaled daemon. Sub-blocks disco, dataPlane, and derp only take effect in direct mode.
| Field | Type | Default | Description |
|---|---|---|---|
| direct | Boolean | false | Enable direct (embedded) mode — run as a standalone Tailscale node. |
| authKey | String? | null | Pre-authentication key for joining the tailnet in direct mode. |
| controlUrl | String | "https://controlplane.tailscale.com" | URL of the Tailscale control (coordination) server. |
| stateDir | String | "~/.elide/tailscale" | Directory where node keys and cached configuration are persisted. |
| disco | DiscoConfig | (empty) | DISCO endpoint discovery tuning. |
| dataPlane | DataPlaneConfig | (empty) | Data plane tuning for the embedded WireGuard tunnel. |
| derp | DerpConfig | (empty) | DERP relay settings. |

direct

Enable direct (embedded) mode — run as a standalone Tailscale node without a tailscaled daemon.

When true, Elide embeds the full Tailscale data plane: WireGuard tunnel, DERP relay client, and DISCO endpoint discovery. An authKey is required. No external tailscaled process is needed.

When false (the default), Elide connects to a running tailscaled daemon via its local API socket.

authKey

Pre-authentication key for joining the tailnet in direct mode.

Required when direct = true; ignored otherwise. Obtain one from the Tailscale admin console or the CLI:

```shell
tailscale preauthkey create --reusable --expiry 90d
```

Prefer reading the key from the environment rather than hard-coding it:

```pkl
authKey = env("TAILSCALE_AUTH_KEY")
```

controlUrl

URL of the Tailscale control (coordination) server.

Defaults to the public Tailscale coordination server. Override this when using a self-hosted Headscale instance:

```pkl
controlUrl = "https://headscale.internal.example.com"
```

stateDir

Directory where node keys and cached configuration are persisted between restarts.

Stores the node private key, DERP map, and cached netmap. The directory is created automatically if it does not exist. Use an absolute path in production to avoid ambiguity.

Default: ~/.elide/tailscale
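For example, a containerized direct-mode deployment might pin state to a mounted volume so node identity survives restarts (the path below is illustrative):

```pkl
tailscale {
  direct = true
  authKey = env("TAILSCALE_AUTH_KEY")
  stateDir = "/var/lib/elide/tailscale"  // absolute path on a persistent volume
}
```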

disco

DISCO endpoint discovery tuning.

Controls NAT traversal probe timing, heartbeat intervals, and path degradation thresholds. The defaults match the constants in the DISCO state machine and are suitable for most deployments. See DiscoConfig for individual parameters.
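As a sketch, a deployment that favors fast path discovery over low probe overhead might tighten the two intervals (values are illustrative, not recommendations):

```pkl
disco {
  heartbeatInterval = 1.s  // refresh NAT pinholes more aggressively
  reprobeInterval = 10.s   // detect topology changes sooner, at higher probe cost
}
```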

dataPlane

Data plane tuning for the embedded WireGuard tunnel.

Adjusts MTU and buffer pool sizing. Only meaningful when direct = true. See DataPlaneConfig for details.
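For instance, on a network with a known 1500-byte path MTU, a sketch of a tuned data plane might look like:

```pkl
dataPlane {
  mtu = 1420            // 1500-byte path MTU minus WireGuard overhead
  bufferPoolSize = 256  // the default; raise under sustained burst traffic
}
```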

derp

DERP relay settings.

Configure a local relay server or add custom relay endpoints. See DerpConfig for details.

---

DiscoConfig

Open class — can be extended.

Tuning parameters for DISCO endpoint discovery.

DISCO probes candidate UDP endpoints to find direct WireGuard paths between peers, falling back to DERP relays when no direct path is available. These defaults match the constants in the DISCO state machine. Adjusting them trades faster path discovery against lower probe overhead.

State transitions

```
Unknown -> Probing -> Direct -> Degraded -> RelayOnly
```

A peer transitions Direct -> Degraded after degradedAfterFailures consecutive ping timeouts, then Degraded -> RelayOnly after degradedTimeout elapses with no successful pong.
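With the defaults, detecting a dead direct path can take up to 3 failed probes of 5 s each (about 15 s) to enter Degraded, plus up to 10 s more before falling back to RelayOnly, roughly 25 s worst case. A sketch of a more aggressive failover tuning (illustrative values):

```pkl
disco {
  probeTimeout = 2.s         // fail individual probes faster
  degradedAfterFailures = 2  // enter Degraded after ~4 s of silence
  degradedTimeout = 5.s      // fall back to DERP within ~9 s worst case
}
```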
| Field | Type | Default | Description |
|---|---|---|---|
| heartbeatInterval | Duration | 2.s | Interval between heartbeat pings sent to maintain NAT pinholes. |
| reprobeInterval | Duration | 25.s | Interval between full re-probe cycles across all candidate endpoints. |
| trustPeriod | Duration | 5.s | Duration a newly-selected best path must remain stable before it replaces the current path. |
| probeTimeout | Duration | 5.s | Timeout for an individual probe ping. |
| degradedAfterFailures | UInt8 | 3 | Number of consecutive ping failures before the path is marked degraded. |
| degradedTimeout | Duration | 10.s | Maximum time to remain in the Degraded state before falling back to relay-only connectivity. |

heartbeatInterval

Interval between heartbeat pings sent to maintain NAT pinholes.

While a peer is in the Direct state, a heartbeat ping is sent to the best endpoint whenever no data has been transmitted for this duration. Lower values keep NAT mappings alive more aggressively at the cost of additional background traffic.

Default: 2.s

reprobeInterval

Interval between full re-probe cycles across all candidate endpoints.

Periodically pings every known endpoint to discover better paths or detect topology changes (e.g., a peer that moved networks). Shorter intervals detect changes faster but increase probe traffic.

Default: 25.s

trustPeriod

Duration a newly-selected best path must remain stable before it replaces the current path.

Prevents flapping between endpoints that alternate as "best" on successive probe cycles. Reserved for future path-upgrade gating.

Default: 5.s

probeTimeout

Timeout for an individual probe ping.

If no pong is received within this duration, the ping counts as a failure and increments the counter checked by degradedAfterFailures.

Default: 5.s

degradedAfterFailures

Number of consecutive ping failures before the path is marked degraded.

Failures are counted only against the current best endpoint. After this many consecutive timeouts, the peer transitions from Direct to Degraded and begins probing for alternatives.

Default: 3

degradedTimeout

Maximum time to remain in the Degraded state before falling back to relay-only connectivity.

While in Degraded, traffic still uses the last-known direct path. If no successful pong arrives within this timeout, the peer falls back to RelayOnly and all traffic routes through DERP.

Default: 10.s

---

DataPlaneConfig

Open class — can be extended.

Data plane tuning for the embedded WireGuard tunnel.

Controls MTU and buffer pool sizing for encrypted packet processing in direct mode. These settings are ignored when direct = false.
| Field | Type | Default | Description |
|---|---|---|---|
| mtu | UInt16 | 1280 | Maximum transmission unit for tunnel packets, in bytes. |
| bufferPoolSize | UInt | 256 | Number of pre-allocated packet buffers in the tunnel buffer pool. |

mtu

Maximum transmission unit for tunnel packets, in bytes.

Must be less than or equal to the path MTU minus WireGuard overhead (60 bytes for IPv4, 80 bytes for IPv6). The default of 1280 is the IPv6 minimum MTU and is safe for virtually all network paths. Increase to 1420 on networks with a known 1500-byte path MTU for better throughput.

Default: 1280

bufferPoolSize

Number of pre-allocated packet buffers in the tunnel buffer pool.

Each buffer holds one MTU-sized packet. Larger pools reduce allocation churn under burst traffic but consume more memory (roughly bufferPoolSize * mtu bytes).

Default: 256
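As a rough sizing sketch, doubling the pool at the default MTU costs about 640 KiB (512 × 1280 bytes):

```pkl
dataPlane {
  bufferPoolSize = 512  // ~640 KiB of packet buffers at the default 1280-byte MTU
}
```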

---

DerpConfig

Open class — can be extended.

DERP (Designated Encrypted Relay for Packets) relay configuration.

DERP relays provide fallback encrypted connectivity when direct WireGuard paths cannot be established (e.g., behind symmetric NATs). Relays are authenticated — only nodes in the same tailnet can communicate through them.

By default, the node uses relays advertised by the control server. Use customRelays to add private relays or server = true to run a relay on this node.
| Field | Type | Default | Description |
|---|---|---|---|
| server | Boolean | false | Run a DERP relay server on this node. |
| serverPort | UInt16 | 3478 | Listen port for the local DERP relay server. |
| customRelays | Listing | (empty) | Additional DERP relay URLs, used alongside relays from the control server. |

server

Run a DERP relay server on this node.

When true, this node accepts relay connections from other nodes in the tailnet in addition to its normal server role. Useful for self-hosted infrastructure or air-gapped environments where public Tailscale relays are unreachable.

Default: false

serverPort

Listen port for the local DERP relay server.

Only used when server = true. The default 3478 allows coexistence with STUN on the same host. Set to 443 if this node is the primary relay and clients expect HTTPS-port connectivity.

Default: 3478
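A sketch of a node acting as the primary self-hosted relay, reachable on the HTTPS port:

```pkl
derp {
  server = true
  serverPort = 443  // clients expect HTTPS-port connectivity
}
```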

customRelays

Additional DERP relay URLs, used alongside relays from the control server.

Each entry is an HTTPS URL pointing to a DERP relay:

```pkl
customRelays {
  "https://derp-us.internal.example.com:443"
  "https://derp-eu.internal.example.com:443"
}
```

Nodes always prefer the lowest-latency relay regardless of whether it came from the control server or this list.

---