Incus Clustering
Incus clustering joins multiple Incus servers into a single logical unit with a shared distributed database, providing high availability, load distribution, and unified management across physical hosts. Virtual machines can be live-migrated between cluster members; containers are moved with a brief stop/start (CRIU-based container live migration is still experimental).
[Diagram: a 3-node Incus cluster]
Cluster Features
Distributed Database
Raft-based consensus (via the Cowsql distributed database) replicates configuration across all nodes.
- Automatic failover
- Consistent state
- No single point of failure
Live Migration
Move running instances between nodes; see the example below.
- VMs: stateful live migration
- Containers: quick stop/start move by default (CRIU live migration is experimental)
- Manual, or automatic during evacuation
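A minimal sketch of a manual stateful VM migration, assuming a VM named vm1 running on node1 (all names are placeholders):
incus config set vm1 migration.stateful true
# Takes effect on the VM's next start
incus move vm1 --target node2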
Load Balancing
Distribute instances across nodes based on resources.
- CPU and memory aware
- Storage capacity
- Custom placement via cluster groups (see the example below)
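As an illustration of custom placement, members can be grouped and instances targeted at a group; the group name gpu below is hypothetical:
incus cluster group create gpu
incus cluster group assign node2 default,gpu
# "assign" replaces the member's full group list, so "default" is kept
incus launch images:ubuntu/22.04 c1 --target=@gpu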
High Availability
Automatic recovery from node failures; see the sketch below.
- Instance evacuation
- Automatic restart
- Health monitoring
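A sketch of enabling automatic self-healing, assuming the cluster-wide cluster.healing_threshold option (seconds a member may be offline before its instances are recovered; 0 disables it):
incus config set cluster.healing_threshold 30
# Only instances on remote storage (e.g. Ceph) can be restarted on another member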
Setting Up a Cluster
Step 1: Initialize First Node
incus config set core.https_address :8443
# The server must listen on the network before clustering can be enabled
incus cluster enable node1
# The argument is this member's name within the cluster
incus cluster list
Step 2: Generate Join Token
incus cluster add node2
# Run on an existing member; prints a one-time join token for node2
Step 3: Join Additional Nodes
incus admin init
# Select "yes" to join existing cluster
# Paste the token from step 2
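For unattended deployments, the join can also be scripted with a preseed file; this is a sketch, with server_address and the token as placeholders:
cat <<EOF | incus admin init --preseed
cluster:
  enabled: true
  server_address: 10.0.0.2:8443
  cluster_token: <token from step 2>
EOF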
Cluster Management Commands
incus cluster list                    # list members, roles, and status
incus cluster show node1              # show one member's details
incus cluster evacuate node2          # move node2's instances away for maintenance
incus cluster restore node2           # bring node2's instances back
incus cluster remove node2            # remove a member from the cluster
incus move my-instance --target node3                          # move an instance to another member
incus launch images:ubuntu/22.04 my-container --target node2   # launch on a specific member
Storage in Clusters
| Storage Type | Live Migration | Notes |
|---|---|---|
| Local (ZFS/Btrfs/LVM) | Slower (data copy) | Works without shared storage; volumes are transferred over the network |
| Ceph RBD | Near-instant | Recommended for clusters; natively supported driver |
| CephFS | n/a (shared) | Shared filesystem for custom storage volumes only, not instance disks |
| NFS/iSCSI | Fast | Requires an external storage server; not native Incus drivers (shared block devices can back an lvmcluster pool) |
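As an example, a clustered Ceph RBD pool is created in two phases: staged per member, then finalized cluster-wide. This sketch assumes a reachable Ceph cluster; the pool name remote and OSD pool incus-pool are placeholders:
incus storage create remote ceph --target node1
incus storage create remote ceph --target node2
incus storage create remote ceph --target node3
# Finalize with the driver-level configuration
incus storage create remote ceph ceph.osd.pool_name=incus-pool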
Networking in Clusters
Cluster Networking Requirements
- Management Network: All nodes must be able to communicate (port 8443)
- Cluster Traffic: Database replication, member communication
- Instance Networks: Can be node-local or shared (OVN, VXLAN)
- Migration Traffic: Dedicated network recommended for live migrations
OVN Networking
Open Virtual Network (OVN) provides advanced software-defined networking for Incus clusters (see the example after this list):
- Layer 2 and Layer 3 virtualization
- Distributed virtual routers and load balancers
- Automatic instance connectivity across nodes
- Network isolation between projects
- ACLs and security groups
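A sketch of creating a cluster-wide OVN network on top of a physical uplink; it assumes OVN is already installed and Incus is pointed at the OVN northbound database (the parent interface eth1 and the addresses are placeholders):
# Stage the uplink on each member
incus network create UPLINK --type=physical parent=eth1 --target node1
incus network create UPLINK --type=physical parent=eth1 --target node2
incus network create UPLINK --type=physical parent=eth1 --target node3
# Finalize the uplink with a gateway and the range OVN may allocate from
incus network create UPLINK --type=physical ipv4.gateway=192.0.2.1/24 ipv4.ovn.ranges=192.0.2.100-192.0.2.254
# Create the OVN network itself
incus network create my-ovn --type=ovn network=UPLINK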
Best Practices
Production Cluster Recommendations
- Minimum 3 nodes: Required for database quorum and HA
- Odd number of nodes: 3, 5, or 7 for proper quorum
- Dedicated network: Separate management and migration networks
- Shared storage: Ceph or similar for best live migration performance
- Monitoring: Track node health, resource usage, migration status
- Backup strategy: Regular database backups, test restore procedures
- Update procedure: Rolling updates, one node at a time (see the sketch below)
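A sketch of one rolling-update iteration, assuming Debian-style packaging on node2:
incus cluster evacuate node2
# On node2 itself:
apt update && apt install --only-upgrade incus
incus cluster restore node2
# Repeat per member; the cluster waits until all members run the same version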
Troubleshooting
Common Issues
- Split brain: Network partitions can leave the cluster without quorum; keep a majority of database members reachable and fence shared storage where needed
- Database sync: If members show inconsistent state, check database replication and member roles (incus cluster list)
- Migration failures: Check network connectivity between members and storage availability on the target
- Node offline: Evacuate instances before maintenance, or force-remove dead members (see below)
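For a member that is permanently gone, forced removal is the last resort (node2 is a placeholder):
incus cluster remove node2 --force
# Destructive: only for members that will never rejoin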
Incus clustering transforms standalone servers into a unified, highly available infrastructure platform capable of supporting production workloads with automatic failover and seamless scalability.