Maglev is the codename of Google's Layer 4 network load balancer, which is referred to in GCP as External TCP/UDP Network Load Balancing. I read the 2016 Maglev paper to better understand various implementation details of Maglev, with an emphasis on security (in particular, as it affects availability).

Maglev uses a scale-out approach: clusters built from commodity hardware achieve N+1 redundancy, providing greater tolerance to failure than traditional hardware load balancers deployed in pairs (only 1+1 redundancy). The Maglev machines run in an active-active setup, with the router spreading traffic across them via Equal Cost Multipath (ECMP) routing. This permits greater hardware utilization than an active-passive approach: with eight machines, for example, an N+1 deployment keeps seven eighths of the capacity serving traffic, versus half for active-passive pairs.

The paper discusses in detail the techniques used to increase performance: userspace networking that bypasses the kernel's networking stack, 5-tuple hashing of packets so that connection state need not be shared across threads, and pinning each packet-processing thread to a dedicated CPU.
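To make the 5-tuple hashing idea concrete, here is a minimal sketch (my illustration, not the paper's code): hashing the connection 5-tuple deterministically steers every packet of a connection to the same worker thread, so per-connection state stays thread-local. The `FiveTuple` struct and the FNV hash are illustrative assumptions.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// FiveTuple identifies a connection (IPv4 only, for brevity).
type FiveTuple struct {
	SrcIP, DstIP     [4]byte
	SrcPort, DstPort uint16
	Proto            uint8
}

// threadFor hashes the 5-tuple and maps it onto one of n worker threads.
// Packets belonging to the same connection always land on the same thread.
func threadFor(t FiveTuple, n int) int {
	h := fnv.New32a()
	h.Write(t.SrcIP[:])
	h.Write(t.DstIP[:])
	b := make([]byte, 5)
	binary.BigEndian.PutUint16(b[0:], t.SrcPort)
	binary.BigEndian.PutUint16(b[2:], t.DstPort)
	b[4] = t.Proto
	h.Write(b)
	return int(h.Sum32()) % n
}

func main() {
	t := FiveTuple{SrcIP: [4]byte{10, 0, 0, 1}, DstIP: [4]byte{10, 0, 0, 2},
		SrcPort: 12345, DstPort: 80, Proto: 6}
	fmt.Println(threadFor(t, 8)) // same tuple -> same thread, every time
}
```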

A local connection tracking table is used to ensure packets in the same connection are consistently routed to the same backend endpoint. However, two failure cases are discussed: during upgrades or failures the set of Maglev machines may change (so ECMP re-routes a connection's packets to a Maglev with no entry for it), or the connection table may simply run out of space.
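The shape of that fallback path might look something like the sketch below (my assumption, not the paper's code): on a table hit, reuse the recorded backend; on a miss, whether a new connection or one whose entry was lost, fall back to a consistent-hash lookup and record the result while there is room.

```go
package main

import "fmt"

const maxEntries = 1 << 16 // illustrative cap; real tables are sized per machine

type connTrack struct {
	entries map[uint32]int   // connection hash -> chosen backend index
	lookup  func(uint32) int // consistent-hash fallback (e.g. Maglev hashing)
}

func (c *connTrack) backendFor(connHash uint32) int {
	if b, ok := c.entries[connHash]; ok {
		return b // existing connection: keep routing to the same backend
	}
	b := c.lookup(connHash) // new (or lost) connection: consult the hash
	if len(c.entries) < maxEntries {
		c.entries[connHash] = b // remember the choice while there is room
	}
	return b
}

func main() {
	ct := &connTrack{
		entries: map[uint32]int{},
		lookup:  func(h uint32) int { return int(h % 3) }, // stand-in for a 3-backend hash
	}
	fmt.Println(ct.backendFor(42), ct.backendFor(42)) // same hash -> same backend
}
```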

The impact of these cases is reduced by Maglev hashing, a novel consistent hashing scheme that optimizes for evenly balancing load across backends rather than guaranteeing that connections never shift to a different backend. The paper includes experimental results showing that the typical impact of these failure cases is low.
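The paper gives pseudocode for populating the Maglev hashing lookup table; below is a minimal Go sketch of that algorithm. The table size, backend names, and the FNV-based stand-ins for the paper's two hash functions are illustrative assumptions, but the population loop follows the paper: each backend has its own permutation of table slots, and backends take turns claiming their next unclaimed preferred slot until the table is full, which keeps load nearly even while limiting how many slots move when the backend set changes.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

const M = 65537 // table size; the paper requires M to be prime

// hash64 is an illustrative stand-in for the paper's two hash functions,
// distinguished here by a seed byte.
func hash64(s string, seed byte) uint64 {
	h := fnv.New64a()
	h.Write([]byte{seed})
	h.Write([]byte(s))
	return h.Sum64()
}

// populate builds the lookup table mapping each of the M slots to a backend.
func populate(backends []string) []int {
	n := len(backends)
	// Each backend's preference list is the permutation
	// (offset + j*skip) mod M for j = 0..M-1; M prime makes it cover all slots.
	offset := make([]uint64, n)
	skip := make([]uint64, n)
	for i, b := range backends {
		offset[i] = hash64(b, 0) % M
		skip[i] = hash64(b, 1)%(M-1) + 1
	}
	next := make([]uint64, n) // how far each backend has walked its list
	entry := make([]int, M)
	for j := range entry {
		entry[j] = -1 // slot not yet claimed
	}
	// Round-robin: each backend claims its next still-empty preferred slot.
	for filled := 0; ; {
		for i := 0; i < n; i++ {
			c := (offset[i] + next[i]*skip[i]) % M
			for entry[c] >= 0 {
				next[i]++
				c = (offset[i] + next[i]*skip[i]) % M
			}
			entry[c] = i
			next[i]++
			filled++
			if filled == M {
				return entry
			}
		}
	}
}

func main() {
	table := populate([]string{"backend-a", "backend-b", "backend-c"})
	counts := make([]int, 3)
	for _, b := range table {
		counts[b]++
	}
	fmt.Println(counts) // each backend owns roughly M/3 slots
}
```

Because every backend claims slots at the same rate, no backend ends up with more than a small fraction over its fair share, which is the trade-off the paper makes: near-perfect balance at the cost of some slots being remapped when the backend set changes.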

This was an interesting read, giving insight into how modern highly available load balancing works. The approach from this paper has since been adopted by other implementations: for example, it is used in the open-source project Cilium, which is part of GKE Dataplane v2.