libvine
libvine is a Zig virtual network library for building small authenticated IPv4 overlays on top of the existing libzig stack.
For the MVP, libvine is intentionally narrow:
- Linux only
- IPv4 L3 overlay semantics
- one overlay network per process
- small peer sets
- static or bootstrap-assisted membership
libvine is not a replacement control plane. It uses libmesh extensively for discovery, signaling, route selection, session setup, and relay fallback. libvine owns overlay membership policy, virtual addressing, packet forwarding, and Linux TUN integration.
Stack Position
- libself: identity and authenticated peer metadata
- libmesh: discovery, signaling, path selection, and relay fallback
- libdice: NAT traversal assistance when direct paths need setup exchange
- libfast: encrypted transport for control and packet carriage
- libvine: overlay network semantics and Linux host integration
MVP Direction
The first milestone is a coherent library skeleton with stable public boundaries, tests that compile, and enough documentation for downstream consumers to follow the intended architecture before deeper control-plane and data-plane work lands.
CLI
vine is the operator-facing binary for running libvine as a real VPN process on Linux hosts.
Top-Level Commands
- vine help
- vine version
- vine identity
- vine config
- vine daemon
- vine status
- vine diagnostics
Command Intent
- identity: manages persistent libself-backed node identity
- config: manages validation and initialization of the node config file
- daemon: controls the long-running runtime lifecycle
- status: reports current node and network state
- diagnostics: exposes counters, snapshots, and debug output
Current State
The binary surface is being built incrementally. The first milestone is to freeze the operator contract before deeper identity, config, and daemon behavior is implemented.
Identity
libvine node identity comes from libself, not from overlay IP addresses.
Separation Of Concerns
- libself identity answers who the node is
- NetworkId answers which overlay the node is joining
- VinePrefix answers which virtual IP range the node advertises
- libmesh answers which path should carry traffic right now
Persistent Identity
The vine binary stores node identity on disk so the same node can restart without appearing as a new peer.
Current default path:
/var/lib/libvine/identity
The persisted identity file is a text format with a stable header and derived public fields:
- format
- seed
- public_key
- peer_id
- fingerprint
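The document lists the fields but not the on-disk layout. As an illustration only, the sketch below assumes a simple "key: value" line format, which may differ from the real file; parse_identity and the sample contents are hypothetical:

```python
# Hypothetical parser for the persisted identity file. Assumes one
# "key: value" pair per line; the real libvine format may differ.

def parse_identity(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    # The derived public fields named in the documentation.
    for required in ("format", "seed", "public_key", "peer_id", "fingerprint"):
        if required not in fields:
            raise ValueError(f"missing identity field: {required}")
    return fields

# Placeholder sample contents, not real identity material.
sample = """\
format: 1
seed: 0a1b
public_key: ed25519:abcd
peer_id: alpha-peer
fingerprint: SHA256:wxyz
"""
identity = parse_identity(sample)
```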
CLI Commands
- vine identity init
- vine identity show
- vine identity export-public
- vine identity fingerprint
Enrollment Use
export-public is intended for enrollment and allowlist workflows. It exposes public identity material
that another node or operator can use when binding a peer identity to an overlay prefix.
The important rule is that the peer is trusted as a libself identity first, and only then
associated with overlay addressing policy.
Configuration
vine reads operator configuration from a TOML file, by default:
/etc/libvine/vine.toml
The file defines overlay membership, local TUN settings, bootstrap peers, allowlisted peer prefixes,
and runtime policy toggles. Identity is configured by path, but identity itself is still derived from
libself material on disk rather than from overlay IP addresses.
Example
[node]
name = "alpha"
network_id = "home-net"
identity_path = "/var/lib/libvine/identity"
[tun]
name = "vine0"
address = "10.42.0.1"
prefix_len = 24
mtu = 1400
[[bootstrap_peers]]
peer_id = "seed-a"
address = "udp://198.51.100.10:4100"
[[allowed_peers]]
peer_id = "beta"
prefix = "10.42.1.0/24"
relay_capable = false
[[allowed_peers]]
peer_id = "relay-a"
prefix = "10.42.254.0/24"
relay_capable = true
[policy]
strict_allowlist = true
allow_relay = true
allow_signaling_upgrade = true
Sections
[node]
- name: operator-facing node label
- network_id: shared overlay network identifier
- identity_path: absolute path to the persisted libself identity file
[tun]
- name: Linux TUN interface name
- address: local overlay IPv4 address
- prefix_len: local prefix length
- mtu: interface MTU
[[bootstrap_peers]]
- peer_id: remote authenticated peer identifier
- address: bootstrap transport address
[[allowed_peers]]
- peer_id: remote authenticated peer identifier
- prefix: overlay prefix owned by that peer
- relay_capable: whether the peer may act as a relay fallback
[policy]
- strict_allowlist: require explicit peer allowlisting
- allow_relay: permit relay fallback
- allow_signaling_upgrade: permit direct-path upgrade after signaling
Validation
Use:
vine config validate -c /etc/libvine/vine.toml
Validation currently checks:
- TOML-shaped section and key parsing
- repeated bootstrap_peers and allowed_peers records
- boolean and integer field decoding
- the config path must be absolute and not group/world writable
- the identity path must be absolute and point to a 0600 file
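The path and permission checks can be sketched as small Python routines. The helper names are illustrative, not the vine implementation; only the rules themselves come from the list above:

```python
import os
import stat

def check_config_path(path: str) -> None:
    # Config path must be absolute and not group/world writable.
    if not os.path.isabs(path):
        raise ValueError("config path must be absolute")
    mode = os.stat(path).st_mode
    if mode & (stat.S_IWGRP | stat.S_IWOTH):
        raise ValueError("config must not be group/world writable")

def check_identity_path(path: str) -> None:
    # Identity path must be absolute and point to a 0600 file.
    if not os.path.isabs(path):
        raise ValueError("identity path must be absolute")
    if stat.S_IMODE(os.stat(path).st_mode) != 0o600:
        raise ValueError("identity file must be mode 0600")
```

Running both checks before daemon startup matches the intent stated below: reject a malformed or unsafe config early rather than at packet-forwarding time.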
Malformed config should be rejected before daemon startup rather than at packet-forwarding time.
Daemon
vine now has a dedicated daemon lifecycle surface:
- vine daemon run
- vine daemon start
- vine daemon stop
- vine daemon status
The current implementation is still a lightweight skeleton, but it already establishes the runtime ownership model that later slices will connect to real node startup and multi-peer operation.
Runtime Ownership
The daemon runtime owns:
- phase transitions: stopped, starting, running, stopping
- the configured config path for the active process
- pidfile and runtime state paths
- bounded startup sequencing
- bounded shutdown sequencing
- signal conventions for shutdown, reload, and diagnostics
Those behaviors live in lib/daemon/runtime.zig.
Foreground And Background
vine daemon run -c /etc/libvine/vine.toml
- runs in the foreground
- enters the bounded startup sequence
- transitions to running
vine daemon start -c /etc/libvine/vine.toml
- spawns the current vine executable in background daemon mode
- writes a pidfile
- leaves runtime state for status to inspect
Stop And Status
vine daemon stop
- reads the pidfile
- sends SIGTERM
- runs the bounded shutdown sequence
- removes the pidfile
vine daemon status
- reads the runtime state file when present
- reports the daemon phase
- reports the pid from pidfile or stored state
Signals
Current signal conventions are explicit:
- clean shutdown: SIGTERM
- reload request: SIGHUP
- diagnostics dump request: SIGUSR1
This keeps the control contract stable before the runtime is fully connected to live TUN and session ownership.
Runtime
Step 5 turns static config and persisted identity into a real startup model.
The key rule stays the same:
- libself identity determines who the node is
- config determines which overlay prefix the node advertises
Those are related at startup, but they are not the same thing.
Runtime Translation
lib/runtime/runtime_config.zig now translates:
- vine.toml
- the persisted identity file
into runtime-facing state:
- NodeConfig
- the local PeerId
- the local LocalMembership
- allowlist-driven admission policy
- bootstrap peer startup state
- relay-capable peer declarations
Startup Inputs
The runtime loader consumes:
- [node]
- [tun]
- [[bootstrap_peers]]
- [[allowed_peers]]
- [policy]
and the identity file referenced by node.identity_path.
The resulting startup state is then suitable for:
- Node.init
- Node.start
- foreground bring-up through vine up
Operator Commands
vine up -c /etc/libvine/vine.toml
- loads config
- loads persisted identity
- derives the local peer identity from libself
- binds the configured local prefix into local membership
- starts the node in foreground mode
vine down
- reads the daemon pidfile when present
- sends a shutdown request
- removes the pidfile for controlled local stop
Identity Versus Prefix
The same identity can be restarted with a different configured overlay prefix.
That should change the local membership prefix, but not the authenticated PeerId.
The runtime tests now lock that behavior in:
- identity is stable across reloads
- configured prefix is translated from config
- changing config address does not change peer identity
Enrollment
Enrollment in libvine is now explicit and deterministic.
The daemon no longer treats a discovered peer as automatically valid just because it exists on the network. A remote membership update must satisfy all of the following:
- the peer is on the allowlist by PeerId
- the update targets the same configured NetworkId
- the peer is claiming the exact overlay prefix it was configured to own
Phase-One Ownership Model
For phase one, each allowed peer owns one configured prefix.
That mapping comes from the node config and is translated into runtime enrollment state. The result is:
- one authenticated PeerId
- one owned VinePrefix
- an optional relay-capable flag
This keeps ownership simple enough for real multi-PC bring-up without introducing distributed routing policy.
Accepted Membership Flow
When a membership update is accepted:
- remote membership state is refreshed
- route-table state is updated for the accepted prefix
- topology callbacks can observe the change
When a membership update is rejected:
- remote membership state is unchanged
- route-table state is unchanged
Rejection Cases
Current rejection rules include:
- peer not present in the configured allowlist
- wrong overlay network ID
- claimed prefix does not match configured ownership
- overlapping configured peer prefixes at startup
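The admission rules above can be sketched as one decision function. AllowedPeer and admit are illustrative names, not the libvine API; the checks mirror the documented acceptance and rejection cases:

```python
from dataclasses import dataclass

@dataclass
class AllowedPeer:
    peer_id: str
    prefix: str            # e.g. "10.42.1.0/24"
    relay_capable: bool = False

def admit(update_peer_id, update_network_id, claimed_prefix,
          local_network_id, allowlist):
    # Reject peers not present in the configured allowlist.
    peer = allowlist.get(update_peer_id)
    if peer is None:
        return "reject: peer not in allowlist"
    # Reject updates for a different overlay network.
    if update_network_id != local_network_id:
        return "reject: wrong overlay network ID"
    # Reject prefixes that do not match configured ownership.
    if claimed_prefix != peer.prefix:
        return "reject: prefix does not match configured ownership"
    return "accept"

allowlist = {"beta": AllowedPeer("beta", "10.42.1.0/24")}
assert admit("beta", "home-net", "10.42.1.0/24", "home-net", allowlist) == "accept"
assert admit("beta", "home-net", "10.42.9.0/24", "home-net", allowlist).startswith("reject")
```

A rejected update leaves membership and route-table state untouched, matching the accepted-versus-rejected flow described earlier.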
Shutdown
Withdrawal follows the inverse path:
- remote membership is marked expired
- the matching route is withdrawn
- topology observers receive the change
This keeps local routing state aligned with membership state during daemon shutdown and peer departure.
Peers and Sessions
libvine keeps identity, overlay addressing, and transport reachability separate:
- PeerId comes from libself
- VinePrefix says which overlay range a peer owns
- the session manager decides which transport path is currently best
Configured peers are loaded from vine.toml and translated into runtime session candidates.
At runtime, libmesh reachability is mapped into three session classes:
- direct
- signaling_then_direct
- relay
The runtime preference order is fixed:
- direct
- signaling_then_direct
- relay
Relay sessions are only accepted for peers explicitly marked relay_capable = true.
This prevents the runtime from silently treating every peer as a relay target.
When a better path appears, the session manager promotes it. When a preferred path dies, the manager can fall back to an existing relay session without changing peer identity or prefix ownership.
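The fixed preference order and the relay-capability gate can be shown in a few lines. preferred_session is an illustrative helper, not the real session manager:

```python
# Fixed runtime preference order from the documentation.
PREFERENCE = ["direct", "signaling_then_direct", "relay"]

def preferred_session(available, relay_capable):
    """Pick the best available session class for a peer."""
    for cls in PREFERENCE:
        # Relay sessions are only accepted for relay-capable peers.
        if cls == "relay" and not relay_capable:
            continue
        if cls in available:
            return cls
    return None

assert preferred_session({"relay", "direct"}, relay_capable=False) == "direct"
assert preferred_session({"relay"}, relay_capable=True) == "relay"
assert preferred_session({"relay"}, relay_capable=False) is None
```

Promotion falls out of the same rule: when a direct session appears for a peer currently on relay, re-evaluating preferred_session selects the direct path without touching identity or prefix ownership.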
Use the CLI to inspect the current session snapshot:
vine sessions -c /etc/libvine/vine.toml
The command reports:
- configured peer count
- session counts by class
- preferred session per peer
- whether the peer is relay-capable
At this stage the command is config-driven and renders the runtime session-manager view for configured peers. The later diagnostics slices will extend this into a fuller daemon-backed runtime snapshot.
Packet Flow
Step 8 turns the runtime into a real packet path instead of just a session and membership model.
The flow is:
- read an IP packet from the TUN device
- inspect the overlay destination address
- match the destination against the route table
- choose the preferred session for the route’s peer
- send the packet over that session
- receive packets from authorized peers
- inject inbound packets back into the local TUN device
TunRuntime owns the bridge between:
- Node
- the Linux-facing TUN handle
- installed Linux route state
- packet drop reasons
The runtime keeps explicit drop behavior:
- unknown overlay destinations are dropped as unknown_route
- unauthorized peers are dropped as unauthorized_peer
- missing session state is dropped as no_session
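The outbound steps and explicit drop reasons can be sketched as a single dispatch function. The data shapes (routes as prefix-to-peer, sessions as peer-to-class) are illustrative, not the real TunRuntime structures:

```python
import ipaddress

def dispatch(dst_ip, routes, sessions, authorized):
    """Route lookup, authorization, and session lookup for one packet.

    routes:   prefix string -> owning peer_id
    sessions: peer_id -> current session class
    """
    dst = ipaddress.ip_address(dst_ip)
    peer = None
    for prefix, owner in routes.items():
        if dst in ipaddress.ip_network(prefix):
            peer = owner
            break
    if peer is None:
        return "drop: unknown_route"        # unknown overlay destination
    if peer not in authorized:
        return "drop: unauthorized_peer"    # peer not admitted
    if peer not in sessions:
        return "drop: no_session"           # no usable transport path
    return f"send via {sessions[peer]} to {peer}"

routes = {"10.42.1.0/24": "beta"}
assert dispatch("10.42.1.7", routes, {"beta": "direct"}, {"beta"}) == "send via direct to beta"
assert dispatch("10.42.9.7", routes, {}, set()) == "drop: unknown_route"
```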
Route cleanup stays explicit too:
- withdrawing membership detaches the installed route
- stale direct sessions downgrade to relay when relay fallback exists
- stale sessions without fallback tombstone the route
The fake transport tests exercise end-to-end packet movement between nodes without needing real sockets. That gives us coverage for:
- outbound route lookup
- preferred-session dispatch
- inbound TUN injection
- route teardown and downgrade behavior
This is the point where the daemon starts resembling a real overlay dataplane instead of just a configuration and identity shell.
Diagnostics
Step 9 turns vine into an operator-facing tool instead of a Zig-only runtime.
The current diagnostics commands are:
- vine status
- vine peers
- vine routes
- vine sessions
- vine counters
- vine snapshot
- vine ping <overlay-ip>
These commands are config-driven and are meant to answer different questions:
- status: is the node configured correctly, and what overlay identity is it using?
- peers: which peers and prefixes are configured?
- routes: which overlay prefixes point at which peers?
- sessions: which session class is preferred for each peer?
- counters: are packets failing, missing routes, or falling back to relay?
- snapshot: dump the major sections together
- ping: which peer and session would carry traffic to a given overlay IP?
Diagnostics support two explicit output modes:
- --format text
- --format json
Use text for shell-driven inspection and JSON for machine parsing.
The counters surface currently includes:
- packets_sent
- packets_received
- route_misses
- session_failures
- fallback_transitions
- relay_usage
relay_usage is especially useful when a deployment is technically working but direct paths are not being selected as often as expected.
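A minimal sketch of rendering that counters surface in both output modes. The counter names come from the list above; the values and the render helper are illustrative, not vine's actual output format:

```python
import json

# Sample values for the documented counter names.
counters = {
    "packets_sent": 120,
    "packets_received": 118,
    "route_misses": 2,
    "session_failures": 1,
    "fallback_transitions": 3,
    "relay_usage": 40,
}

def render(counters, fmt="text"):
    """Render counters as shell-friendly text or machine-parseable JSON."""
    if fmt == "json":
        return json.dumps(counters, sort_keys=True)
    return "\n".join(f"{k}: {v}" for k, v in counters.items())

print(render(counters, "text"))
```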
Example:
vine status --format text -c /etc/libvine/vine.toml
vine sessions --format json -c /etc/libvine/vine.toml
vine ping --format text -c /etc/libvine/vine.toml 10.42.9.7
This is still a lightweight diagnostics layer, but it is now good enough to inspect peer ownership, route intent, preferred transport paths, and relay dependence without reading the Zig code.
Multi-PC Showcase
The examples/multi-pc/ directory is the reference deployment story for vine as a real binary.
It uses four Linux machines running the same executable:
- alpha
- beta
- gamma
- relay
Each machine keeps its own libself identity file and advertises one overlay prefix into the same network:
- alpha -> 10.42.0.0/24
- beta -> 10.42.1.0/24
- gamma -> 10.42.2.0/24
- relay -> 10.42.254.0/24
Expected Path Story
The example is designed to demonstrate three normal cases and one failure case:
- alpha talks to beta over a direct session.
- gamma reaches beta after signaling-assisted setup.
- gamma reaches alpha through relay if no direct path can be formed.
- alpha falls back to relay if the direct path to beta disappears.
That is the intended vine operator model:
- direct when possible
- signaling when setup is needed
- relay when direct reachability fails
Narrative Walkthrough
Start with all four configs installed and one identity generated per machine.
Bring up relay first so the other nodes have a stable bootstrap target. Then start alpha, beta, and gamma.
Once all nodes are running:
- vine peers should show the allowlisted remote peers
- vine routes should show owned prefixes for those peers
- vine sessions should initially favor direct or signaling-assisted paths when available
Now force a direct-path failure between alpha and beta. The expected recovery is:
- route ownership stays the same
- the preferred session changes
- relay_usage and fallback_transitions increase
- traffic still has a path through relay
This is why relay capability belongs to the peer policy layer instead of to the overlay address itself.
Files
- examples/multi-pc/README.md
- examples/multi-pc/alpha.toml
- examples/multi-pc/beta.toml
- examples/multi-pc/gamma.toml
- examples/multi-pc/relay.toml
Copying The Binary To Multiple Machines
vine is meant to be the same binary on every Linux PC in the deployment.
The node role comes from local config and local identity, not from a machine-specific build.
Recommended Layout
Install the binary once per host:
/usr/local/bin/vine
Install config and state paths separately:
/etc/libvine/vine.toml
/var/lib/libvine/
That lets you copy one executable everywhere while keeping:
- one identity file per machine
- one node config per machine
- one runtime state directory per machine
Operator Sequence
On each machine:
- copy the vine binary to the same path
- create /etc/libvine and /var/lib/libvine
- run vine identity init
- install the machine-specific vine.toml
- validate with vine config validate
- start the daemon
What Must Differ Per Machine
- the persisted libself identity
- the local node name
- the local TUN address and advertised prefix
- the bootstrap addresses that point at reachable peers
What Must Stay Shared
- the binary itself
- the network_id
- the allowlist view of which peer owns which prefix
- the operator workflow and CLI surface
Bootstrap Peers And Relay Placement
Bootstrap peers and relay peers solve different problems.
- bootstrap peers help a node find the network
- relay-capable peers help a node keep forwarding traffic when direct paths fail
Do not collapse those roles conceptually even if one machine does both.
Bootstrap Selection
A good bootstrap peer should be:
- usually online
- reachable from all expected nodes
- stable enough that operators can hardcode its address
For the showcase config, relay is the obvious bootstrap target for alpha, beta, and gamma
because it is intended to stay reachable and already sits in the middle of the topology.
Relay Placement
A good relay-capable node should be:
- on a stable network
- likely to have public reachability
- not overloaded by unrelated workloads
The relay should not own a special identity class. It remains a normal allowlisted peer with:
- a normal libself identity
- a normal overlay prefix
- relay_capable = true in the allowlist
Practical Rule
If you only have one machine that is always online, that machine will usually be:
- the first bootstrap peer
- the first relay-capable peer
If you later add better infrastructure, keep bootstrap and relay decisions explicit in config rather than assuming every bootstrap peer should relay traffic.
Identity Enrollment Across Machines
Every machine needs its own local libself identity before it can join the overlay.
The enrollment rule is:
- identity is generated locally
- public identity material is shared with operators
- allowlist entries are updated on every participating machine
Recommended Flow
- On each machine, run vine identity init.
- Export the public identity material with vine identity export-public.
- Record the peer fingerprint with vine identity fingerprint.
- Distribute the public identity or fingerprint out of band.
- Update each machine's [[allowed_peers]] list so the authenticated peer ID matches the intended prefix owner.
Why This Matters
Overlay IPs can be changed later. The libself identity is the stable trust anchor.
That means enrollment is really:
- collect remote peer identity
- bind it to an owned prefix
- bind relay capability as policy
not:
- trust whoever claims a certain overlay IP
Operational Advice
- keep fingerprints in operator notes or inventory
- review allowlist updates before daemon restart or reload
- reject any peer whose claimed prefix does not match the configured owner
In other words, machine enrollment is a trust update first and a routing update second.
TUN Permissions And Route Installation
vine is a user-space VPN, but it still depends on Linux networking privileges.
The minimum operator expectation is:
- the process can create or open the configured TUN device
- the process can assign the configured overlay address
- the process can install and remove local routes for owned prefixes
What Usually Requires Privilege
On a normal Linux host, the following actions often need root or equivalent delegated capability:
- opening /dev/net/tun
- setting interface flags
- assigning interface addresses
- changing route tables
If those operations fail, the control-plane side of vine may still start while packet forwarding remains broken.
Practical Deployment Rule
Validate host readiness before relying on the daemon:
- confirm /dev/net/tun exists
- confirm the runtime user can access it
- confirm route changes are permitted
- confirm the configured interface name is acceptable for the target host
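The first two readiness checks can be sketched as a small pre-flight routine. The tun_ready helper is illustrative (not part of vine); the device path is parameterized so the check can be exercised on any host, with /dev/net/tun as the real target:

```python
import os

def tun_ready(device="/dev/net/tun"):
    """Report whether the TUN device exists and is accessible."""
    checks = {
        "exists": os.path.exists(device),
        "readable_writable": os.access(device, os.R_OK | os.W_OK),
    }
    checks["ok"] = all(checks.values())
    return checks

# On a host without the device this reports the failure instead of crashing.
print(tun_ready("/dev/net/tun"))
```

Route-change and interface-name checks require privileged operations and are better left to a tool like vine doctor on the target host.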
Failure Shape
When TUN setup or route installation fails, expect:
- no usable overlay interface
- no local packet injection path
- misleading peer/session state if you only inspect the control plane
That is why host readiness checks belong in vine doctor and why operators should treat TUN permissions as
part of initial deployment, not as a later optimization.
First Deployment On A LAN
The safest first real deployment is on one local network where you control all four machines.
Use that environment to prove:
- identities load correctly
- bootstrap works
- peer ownership matches config
- the TUN interface comes up
- traffic moves without relay unless needed
Recommended Order
- prepare all four configs from examples/multi-pc/
- generate one identity per machine
- replace placeholder peer IDs with real exported identities
- validate each config locally
- start relay
- start alpha, beta, and gamma
- inspect vine status, peers, routes, and sessions
Why Start On A LAN
A LAN removes several variables at once:
- NAT complexity
- public reachability problems
- firewall uncertainty
That makes it easier to tell whether failure is caused by:
- wrong allowlist identity
- wrong prefix ownership
- bad config distribution
- TUN or route setup problems
Success Criteria
The first LAN deployment should show:
- all peers enrolled into the same network_id
- expected prefixes installed
- at least one direct session
- observable relay fallback only when forced
Internet Deployment With One Relay-Capable Node
After the LAN proof, the next practical deployment is several machines on different networks with one publicly reachable relay-capable node.
The simplest version is:
- relay on a stable public host
- alpha, beta, and gamma behind normal home or office NATs
Recommended Placement
Put the relay-capable node where it has:
- a stable address
- predictable firewall rules
- enough uptime to be the bootstrap point
That single host becomes the operational anchor for:
- bootstrap discovery
- failed direct-session fallback
Configuration Advice
- keep allow_relay = true for remote nodes
- make the relay node explicit in every allowlist
- point remote bootstrap entries at the relay’s reachable UDP address
- avoid depending on an edge node that frequently changes networks
Success Criteria
An internet deployment is healthy when:
- nodes still join the same network_id
- overlay prefixes remain tied to configured peer IDs
- direct sessions appear when possible
- relay paths appear when necessary
- relay usage is visible through diagnostics instead of being guessed
The operator goal is not to eliminate relay entirely. The goal is to prefer direct paths while keeping connectivity when direct reachability is impossible.
Operations
The vine binary is intended to support both development-time foreground use and long-running daemon use.
Foreground Workflow
Foreground mode is for:
- initial configuration bring-up
- debugging TUN and routing behavior
- watching logs and counters interactively
The expected foreground flow is:
- validate config
- load identity
- start the node runtime
- observe peers, routes, sessions, and counters
- stop cleanly with a signal or explicit command
Daemon Workflow
Daemon mode is for:
- multi-PC deployments
- persistent background VPN service
- boot-time or service-manager startup
The expected daemon flow is:
- start with a config file
- record pidfile and state paths
- own TUN lifecycle and session lifecycle
- expose status and diagnostics through CLI subcommands
- stop cleanly without leaking routes or stale runtime state
Operational Priorities
- identity must load before overlay membership is advertised
- route ownership must remain tied to authenticated peers
- direct sessions should be preferred over relay when both are available
- relay fallback should be observable rather than silent
Architecture
libvine sits at the overlay edge of the libzig networking stack.
Responsibility Split
- libself owns node identity and authenticated peer metadata.
- libmesh owns discovery, signaling, path selection, and relay fallback.
- libdice helps with traversal setup when direct connectivity needs coordination.
- libfast carries encrypted control and packet traffic once a path is open.
- libvine owns overlay membership policy, virtual IPv4 addressing, forwarding, and Linux TUN integration.
Architectural Rule
If libmesh already owns a reachability concern, libvine reuses it instead of rebuilding it. libvine should consume libmesh decisions and expose overlay semantics on top of them rather than inventing a second control plane.
MVP Shape
The MVP is Linux-only, IPv4-only, and intentionally optimized for small peer sets. One process hosts one overlay network instance, and the first policy model is expected to be explicit and static enough to keep routing and membership deterministic.
MVP
Supported
- Linux-only runtime
- IPv4 L3 overlay traffic
- one overlay network per process
- small peer sets
- static or bootstrap-assisted membership
- libmesh-driven direct-first and relay-fallback connectivity
- Linux TUN-backed packet ingress and egress
Explicitly Out Of Scope
- multi-platform support
- IPv6 overlay forwarding
- dynamic distributed routing protocols
- kernel modules
- multi-tenant controllers
- production-grade PKI
- aggressive performance work beyond basic correctness
Addressing
The MVP uses an IPv4-only overlay model.
Overlay Semantics
- each node belongs to one named overlay network
- each node owns one primary IPv4 prefix for the MVP
- packet forwarding is based on prefix ownership, not dynamic distributed routing
- route selection prefers the best reachable session provided by libmesh
Initial Constraints
- no IPv6 data plane in the MVP
- no multi-prefix complexity unless later slices explicitly add it
- no overlapping prefix acceptance without a deterministic conflict policy
The addressing model stays intentionally small so control-plane compatibility and forwarding behavior can stabilize before broader routing features are attempted.
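One deterministic conflict policy for the "no overlapping prefixes" constraint is simply to reject the config at startup. This sketch uses Python's ipaddress module to detect overlap; the helper name is illustrative:

```python
import ipaddress

def check_no_overlap(prefixes):
    """Reject configs where two peers claim overlapping IPv4 ranges."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                raise ValueError(f"overlapping prefixes: {a} and {b}")

# The showcase prefixes are disjoint, so this passes.
check_no_overlap(["10.42.0.0/24", "10.42.1.0/24", "10.42.254.0/24"])
```

Doing this check once at startup keeps forwarding deterministic: every overlay address maps to at most one owning peer.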
Control Plane
libvine does not build an independent reachability control plane for the MVP.
What libmesh Owns
- peer discovery
- signaling exchange
- path selection
- relay fallback decisions
- session orchestration hooks
What libvine Adds
- overlay network identity checks
- virtual prefix advertisement semantics
- membership policy
- route ownership updates specific to overlay forwarding
libvine control messages should be small, versioned, and designed to travel through libmesh signaling and setup flows rather than bypassing them.
Data Plane
The MVP data plane carries raw IPv4 packets between Linux TUN interfaces over sessions established through the lower stack.
Packet Flow
- a local packet arrives from the TUN device
- libvine maps the destination address to an owned overlay prefix
- libvine selects the preferred reachable peer session
- the packet is framed and sent over the active transport path
- the remote node decapsulates the payload and injects it into its TUN device
Scope
- raw IPv4 payload carriage only for the MVP
- explicit framing for control messages versus packet data
- bounded payload sizes derived from transport and relay constraints
- no hidden control-plane reinvention in the packet path
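To make "explicit framing" and "bounded payload sizes" concrete, here is an illustrative framing shape with a version byte, a kind byte, and a length-checked payload. This is not the libvine wire format, only a sketch of the properties the scope list demands:

```python
import struct

VERSION = 1
KIND_CONTROL, KIND_PACKET = 0, 1
MAX_PAYLOAD = 1400  # bounded by transport and relay constraints (illustrative)

def frame(kind, payload: bytes) -> bytes:
    """Prefix a payload with version, kind, and length fields."""
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("payload exceeds bounded size")
    return struct.pack("!BBH", VERSION, kind, len(payload)) + payload

def unframe(data: bytes):
    """Recover (kind, payload), rejecting unknown versions."""
    version, kind, length = struct.unpack("!BBH", data[:4])
    if version != VERSION:
        raise ValueError("unknown frame version")
    return kind, data[4:4 + length]

kind, body = unframe(frame(KIND_PACKET, b"\x45\x00"))
assert kind == KIND_PACKET and body == b"\x45\x00"
```

The version byte lets control messages evolve without ambiguity, and the kind byte keeps control traffic and packet data explicitly separated on the wire.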
Linux
The MVP assumes a Linux host with access to /dev/net/tun and sufficient privileges to create and configure a TUN interface.
Expectations
- libvine opens and manages a TUN device in userspace
- the host must allow interface creation and address assignment
- local route installation is expected for remote overlay prefixes
- one process owns one overlay instance and its associated TUN interface
Operational Notes
- interface and route management should stay narrow and explicit
- shell fallbacks, if used at all, should be tightly scoped and easy to audit
- privilege requirements and failure modes should be surfaced clearly to callers and operators
libvine Integration Contract
Audience
This document is for downstream consumers such as libboid that want to embed libvine
as an overlay runtime instead of talking to libmesh, libfast, and Linux TUN details directly.
Public Entry Points
- import libvine from the package root
- configure a node with lib/api/config.zig
- create and run a node with lib/api/node.zig
- use the examples in examples/ as the baseline integration shape
The most complete topology walkthrough in the repo is examples/multi_node_relay_demo.zig,
which models multiple peers plus a relay and shows direct, signaling-assisted, and relay fallback traffic choices.
Required Consumer Inputs
A consumer is expected to provide:
- one NetworkId shared by all peers in the overlay
- one local TUN address and prefix length
- an allowlist of accepted peers when strict admission is desired
- either static bootstrap peers or published seed records
- an identity source, either generated or deterministic seed-backed
Runtime Contract
Node.init wires together local identity, membership state, TUN state, route state, session state,
and the libmesh adapter boundary. Consumers should treat the node as the sole owner of those runtime slices
for the lifetime of the process.
Node.start marks the node active and advertises local membership.
Node.bootstrap chooses discovery inputs in this order:
- static bootstrap peers
- seed records
Node.sendPacket, Node.receivePacket, and Node.cleanupStaleSession expose the data-plane and
fallback-sensitive runtime hooks used by the current tests and examples.
Diagnostics And Debugging
Consumers can attach an event callback for:
- lifecycle logs
- bootstrap diagnostics
- topology changes
Consumers can also read:
- node.diagnostics for counters
- node.debugSnapshot() for structured runtime inspection
Integration Rules For libboid
libboid should:
- construct NodeConfig once per overlay process
- keep peer identity and allowlist policy outside libvine, then feed approved peers in
- treat libmesh reachability as authoritative instead of second-guessing it in application code
- use the event callback and debug snapshot APIs for observability
- prefer the example programs as the starting point for host-side orchestration
libboid should not:
- bypass Node and mutate route/session slices concurrently while the runtime is active
- invent its own overlay packet framing on top of live libvine sessions
- assume relay fallback is equivalent to direct-path latency or throughput
libvine Testing Guide
Core Commands
- make build
- make test
- zig build examples
The development loop used for the MVP slices is:
- apply one small change
- run make build
- run make test
- commit only after both pass
What Is Covered
The current test suite covers:
- core type parsing and validation
- route-table precedence, stale update rejection, withdrawals, and stress cases
- session promotion, churn, and relay fallback behavior
- control/data codec parsing, including mutation-style malformed input tests
- session send/receive framing
- fake Linux TUN lifecycle behavior
- node lifecycle, bootstrap, membership refresh, event callbacks, diagnostics, and debug snapshots
- direct, signaling-assisted, and relay-backed integration-style flows
What Is Not Covered
The MVP tests do not currently provide:
- real multi-process Linux network namespace coverage
- live libmesh network interoperability beyond the adapter-level contract
- throughput or latency benchmarking
- privileged end-to-end route programming on a real host
Recommended Downstream Practice
Downstream consumers should:
- reuse lib/testing/fixtures.zig for shared test setup
- keep integration tests deterministic and in-memory where possible
- add host-specific tests outside libvine when validating deployment environments
Troubleshooting
Build Fails
If make build fails:
- confirm the Nix shell is active and Zig 0.15.2 is the toolchain in use
- check that the local path dependencies libself, libmesh, libdice, and libfast are present
- remember that example programs are compiled as part of the build
Test Fails
If make test fails:
- start with the failing module instead of chasing the full stack
- check whether a new test exposed a stale assumption in route/session fallback behavior
- confirm changes under lib/data/ were force-added if Git ignore rules apply there
Peer Mismatch
Symptoms:
- a remote node appears to exist but never becomes an accepted peer
- membership updates are rejected
- the advertised prefix never becomes a usable route
Checks:
- confirm the remote libself identity was exported from the correct machine
- confirm the configured peer_id matches that identity exactly
- confirm every machine updated its allowlist before restart or reload
vine should trust the authenticated peer ID first and only then accept the owned prefix.
Route Mismatch
Symptoms:
- vine peers looks correct but vine routes is missing a prefix
- traffic to an overlay subnet is dropped as unknown
Checks:
- confirm the configured prefix matches the node that is actually advertising it
- confirm no two peers claim overlapping prefixes
- confirm the local config still points at the intended network_id
If prefix ownership is wrong, the correct outcome is rejection rather than silent route guessing.
TUN Failure
Symptoms:
- control-plane commands look healthy but packets do not enter the overlay
- interface creation or route install fails at startup
Checks:
- run vine doctor -c /etc/libvine/vine.toml
- confirm /dev/net/tun exists and is accessible to the runtime user
- confirm the process has the privileges required to assign the interface and routes
- confirm the configured TUN name is valid for Linux
Relay Overuse
Symptoms:
- traffic works, but vine sessions shows relay more often than expected
- relay_usage and fallback_transitions climb steadily
Checks:
- inspect whether the direct path is actually reachable between the two peers
- confirm bootstrap and relay placement are not masking a broken direct-session setup
- treat relay as a resilience mechanism, not as the default steady-state path
Debugging Tools
Use:
- vine doctor
- vine status
- vine peers
- vine routes
- vine sessions
- vine counters
- vine snapshot