Load Balancer Simulator
Visualize how traffic is distributed across multiple servers using various load balancing algorithms. Experiment with Round Robin, Least Connections, and more.
Round Robin Algorithm: Round Robin distributes client requests to servers sequentially. It ensures an equal distribution of requests but doesn't account for the current load or processing capacity of each server.
This interactive simulator demonstrates how a Load Balancer distributes network traffic across multiple servers. You can experiment with different algorithms like Round Robin, Random, and Least Connections. Adjust the traffic intensity and add or remove servers to see how the system behaves under load.
Visual Guide
Request Packet: Represents a single user request traversing the system.
Server Load: Green indicates healthy load. Red indicates overload (>80%).
How to use
1. Click Start Traffic to begin the simulation.
2. Switch algorithms to see how the distribution logic changes.
3. Increase Traffic Intensity to stress-test your configuration.
4. Add or remove nodes to scale your backend dynamically.
Quick Guide: Load Balancing
Understanding the basics in 30 seconds
How It Works
Client sends request to Load Balancer IP
LB selects backend server using algorithm
Request forwarded to chosen server
Response returns through the LB
Health checks remove unhealthy nodes
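The request path above can be sketched as a toy balancer. This is a minimal illustration, not a real proxy: the class name, server names, and round-robin selection are assumptions made for the example.

```python
# Toy sketch of the request path: the balancer selects a backend
# (round robin here), "forwards" the request, and returns the response.
class LoadBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.counter = 0  # tracks position in the rotation

    def handle(self, request):
        # Select a backend sequentially, then forward the request to it.
        server = self.servers[self.counter % len(self.servers)]
        self.counter += 1
        return f"{server} handled {request}"

lb = LoadBalancer(["server-a", "server-b"])
responses = [lb.handle(f"req-{n}") for n in range(3)]
# Requests alternate between server-a and server-b.
```

A production balancer would additionally consult its health-check results before selecting a backend, skipping any node marked unhealthy.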
Key Benefits
High availability with redundancy
Horizontal scaling (add more servers)
No single point of failure
SSL termination offloading
Sticky sessions for stateful apps
Real-World Uses
NGINX, HAProxy: Web traffic
AWS ELB, GCP Load Balancer: Cloud
Kubernetes Ingress: Container traffic
Database read replicas: Query distribution
API Gateways: Microservices routing
Load Balancing in Production
How real-world systems distribute millions of requests across server clusters.
Session Persistence
Sticky sessions ensure a user's requests always go to the same server - critical for shopping carts and login states.
Cookie-based affinity: Server ID stored client-side
IP-based: Same IP → same server (breaks when many clients share one IP behind NAT)
Trade-off: Even distribution vs session consistency
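IP-based affinity can be sketched by hashing the client address into the server pool. This is a simplified illustration (the hash choice and server names are assumptions); note that the mapping changes if the pool size changes, which real systems mitigate with consistent hashing.

```python
import hashlib

servers = ["server-a", "server-b", "server-c"]

def pick_server(client_ip, servers):
    """Map a client IP deterministically onto the server pool."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

# The same client IP always lands on the same server.
first = pick_server("203.0.113.7", servers)
second = pick_server("203.0.113.7", servers)
```

This illustrates the trade-off in the list above: affinity is perfect, but distribution depends entirely on how client IPs happen to hash.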
Health Checks
Load balancers continuously probe backends. Unhealthy servers are removed from the pool automatically.
Active: Periodic HTTP/TCP probes to /health
Passive: Track failures on real traffic, mark node dead
Circuit breaker: Prevent cascade failures
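Pool maintenance can be reduced to one idea: keep only the servers whose probe succeeds. Here is a minimal sketch with a stubbed probe standing in for a real HTTP/TCP check against /health (the probe and server names are assumptions).

```python
# Sketch of active health checking: filter the pool down to servers
# whose probe succeeds. In practice probe() would issue a periodic
# HTTP or TCP request to each backend's /health endpoint.
def healthy_pool(servers, probe):
    return [s for s in servers if probe(s)]

# Stubbed probe results: server-b is failing its health check.
status = {"server-a": True, "server-b": False, "server-c": True}
pool = healthy_pool(list(status), lambda s: status[s])
# server-b is removed from rotation until its probe succeeds again.
```

Real balancers also apply thresholds (e.g. N consecutive failures before removal, M consecutive successes before reinstatement) to avoid flapping.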
AWS ELB vs NGINX vs HAProxy
AWS ELB is managed but expensive at scale. NGINX handles 10K+ concurrent connections with minimal memory. HAProxy excels at TCP-level balancing for databases. Netflix, Airbnb, and GitHub all use custom combinations of these technologies.
Load Balancing Algorithms Explained
Round Robin
The simplest algorithm: requests are distributed sequentially across all servers. Server A → B → C → A → B → C...
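The A → B → C rotation is exactly what `itertools.cycle` gives you. A minimal sketch, with illustrative server names:

```python
from itertools import cycle

# Round robin is just a repeating sequential rotation over the pool.
servers = ["server-a", "server-b", "server-c"]
rotation = cycle(servers)

picks = [next(rotation) for _ in range(6)]
# Wraps around after the last server: a, b, c, a, b, c
```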
When to Use
All servers have equal capacity
Requests have similar processing time
Stateless applications (no sessions)
Least Connections
Routes traffic to the server with fewest active connections. Smarter than Round Robin when requests have varying durations.
✓ Ideal For
Long-running connections (WebSockets), APIs with variable response times, mixed workloads.
✗ Avoid For
Quick, uniform requests where Round Robin performs equally well with less overhead.
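Least Connections reduces to tracking an active-connection count per server and picking the minimum. A minimal sketch with illustrative counts:

```python
# Active connection counts per server (illustrative values).
active = {"server-a": 4, "server-b": 1, "server-c": 3}

def least_connections(active):
    """Pick the server currently holding the fewest active connections."""
    return min(active, key=active.get)

target = least_connections(active)
active[target] += 1  # the new request now occupies a connection slot
```

The per-request bookkeeping (increment on dispatch, decrement on completion) is the "overhead" that makes plain Round Robin preferable for quick, uniform requests.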
Weighted Algorithms
Assign weights to servers based on capacity. A server with weight 3 gets 3x more traffic than one with weight 1.
Weighted Round Robin: Distributes proportionally by weight
Weighted Least Connections: Combines weight + active connections
IP Hash: Same client always hits same server (sticky)
💡 Pro Tip: Combine with health checks. A dead server with weight 10 is still useless!
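The simplest way to picture Weighted Round Robin is to repeat each server in the rotation according to its weight. This naive expansion is only a sketch (weights and names are illustrative); production balancers like NGINX use a "smooth" variant that interleaves picks instead of sending weight-3 traffic in a burst.

```python
# Naive weighted round robin: a weight-3 server appears 3 times per cycle.
weights = {"server-a": 3, "server-b": 1}

def weighted_rotation(weights):
    """Expand the pool so each server appears `weight` times per cycle."""
    return [s for s, w in weights.items() for _ in range(w)]

rotation = weighted_rotation(weights)
# server-a receives 3x the traffic of server-b over each full cycle.
```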