Load Balancing with Docker Compose and Nginx
This document describes the setup and verification of a load balancing solution for the AIV application using:
- Docker Compose
- A custom Nginx reverse proxy
Purpose
Load balancing distributes incoming application requests across multiple backend servers (instances of the AIV application). This approach enhances:
- Availability: If one instance fails, the others continue to handle traffic.
- Scalability: More instances can be added to handle higher loads.
Components:
- Docker Compose orchestrates the multi-container environment (database, application instances, load balancer).
- Nginx acts as the reverse proxy and load balancer, receiving all incoming traffic and forwarding it to available AIV application instances.
System Components
Docker Compose Services:
- aiv_1 & aiv_2: Two AIV application instances running the same application, providing redundancy and sharing the load.
- aiv-ai: Related AI service.
- proxy (Nginx Load Balancer): A container running Nginx, acting as the load balancer and reverse proxy for the application.
How Load Balancing Works
- Incoming Requests: Users or automated systems send HTTP requests to the public entry point (http://your_server_address/ or http://localhost/).
- Nginx Proxy: Nginx receives the requests and directs them to one of the available backend servers (aiv_1 or aiv_2) using a round-robin load-balancing strategy.
- Processing: The chosen AIV instance processes the request and interacts with the database if necessary.
- Response: The AIV instance sends the response back to Nginx, which forwards it to the client.
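The flow above hinges on round-robin rotation: nginx's default upstream strategy hands consecutive requests to each backend in turn. A toy shell sketch of that rotation (the backend names come from this setup; the loop itself is purely illustrative, not how nginx is implemented):

```shell
#!/bin/sh
# Illustrative only: round-robin cycles consecutive requests through the
# backend list, the way nginx's default strategy alternates aiv_1 and aiv_2.
backends="aiv_1:8080 aiv_2:8080"
i=0
for req in 1 2 3 4; do
  # pick field (i mod 2) + 1 from the space-separated backend list
  n=$(( (i % 2) + 1 ))
  target=$(echo "$backends" | cut -d' ' -f"$n")
  echo "request $req -> $target"
  i=$((i + 1))
done
```

Running it prints `request 1 -> aiv_1:8080`, `request 2 -> aiv_2:8080`, and so on, alternating between the two instances.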
Configuration Details
docker-compose.yml Highlights
```yaml
version: '3.8'

services:
  db:
    image: postgres:17
    container_name: postgres
    restart: always
    expose:
      - "5432"
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: ${AIV_DB_USER}
      POSTGRES_PASSWORD: ${AIV_DB_PASSWORD}
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${AIV_DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD}
    ports:
      - "${PGADMIN_PORT}:80"
    depends_on:
      db:
        condition: service_healthy

  aiv_1:
    container_name: aiv_1
    image: jits023/aiv:6.1.0
    command: > [ ... ]
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment: [ ... ]
    volumes: [ ... ]

  aiv_2:
    container_name: aiv_2
    image: jits023/aiv:6.1.0
    command: > [ ... ]
    ports:
      - "8081:8080"
    depends_on:
      - db
    environment: [ ... ]
    volumes: [ ... ]

  aiv-ai:
    container_name: aiv-ai
    image: jits023/aiv-ai:6.1.0
    ports:
      - "8001:8001"
    environment:
      - AIV_JUPYTER=aiv-jupyter
      - AIV_TOKEN=aivhub
    volumes:
      - ./dataloc:/usr/local/temp:rw
    restart: always

  proxy:
    container_name: aiv-proxy
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx_logs:/var/log/nginx:rw
    ports:
      - "80:80"
    depends_on:
      - aiv_1
      - aiv_2
```
nginx.conf Highlights
```nginx
# Log format for tracking which upstream handled each request
log_format upstream_log '$time_iso8601 | $remote_addr | "$request" | status $status | upstream $upstream_addr';

upstream aiv_backend {
    server aiv_1:8080;
    server aiv_2:8080;
}

server {
    listen 80;

    # Log requests using the 'upstream_log' format
    access_log /var/log/nginx/access.log upstream_log;

    server_name aiv.local;

    location / {
        proxy_pass http://aiv_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        add_header X-Upstream $upstream_addr;
    }
}
```
Running the Setup
Prerequisites:
- Docker and Docker Compose installed.
- A .env file with the necessary environment variables (e.g., AIV_DB_USER, AIV_DB_PASSWORD, PGADMIN_DEFAULT_EMAIL, etc.).
- Host directories: ./config, ./repository, ./logs, ./dataloc, ./nginx_logs.
- Configuration files: ./repository/econfig/application.yml, nginx.conf.
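The host directories can be created in one step before the first run. A sketch using the directory names from the compose file; it uses a temporary working directory here so it is self-contained, but in practice you would run the `mkdir -p` line from the directory containing docker-compose.yml:

```shell
#!/bin/sh
# Create the host directories that the compose file bind-mounts.
# mktemp stands in for the real project directory in this sketch.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p ./config ./repository/econfig ./logs ./dataloc ./nginx_logs ./pg_data
ls -d ./config ./repository/econfig ./nginx_logs
```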
Steps:
Start the services:

```shell
docker-compose up -d
```

Verify that all containers are running:

```shell
docker-compose ps
```
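The `docker-compose ps` output can be checked programmatically for every expected container. In the sketch below, a canned sample stands in for the real command output so the snippet is self-contained; in practice you would set `status=$(docker-compose ps)` instead:

```shell
#!/bin/sh
# Verify that each expected container name appears in the ps output.
# Canned sample output (states are illustrative):
status='NAME        STATE
postgres    Up
pgadmin     Up
aiv_1       Up
aiv_2       Up
aiv-ai      Up
aiv-proxy   Up'

missing=0
for name in postgres pgadmin aiv_1 aiv_2 aiv-ai aiv-proxy; do
  printf '%s\n' "$status" | grep -q "^$name " || { echo "missing: $name"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all services running"
```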
Testing and Verification
Method 1: Using the X-Upstream Header (Interactive)
Use curl to inspect the X-Upstream header:

```shell
curl -I http://localhost/aiv
```

Repeat the command and verify that the X-Upstream header alternates between the addresses of aiv_1 and aiv_2.
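The header extraction can be scripted. The X-Upstream header name comes from nginx.conf above; the canned response (with a hypothetical container IP) makes the sketch self-contained, and in practice you would pipe `curl -sI http://localhost/aiv` into the function instead:

```shell
#!/bin/sh
# Extract the X-Upstream header value from a curl -I style response.
extract_upstream() {
  # case-insensitive header match; strip the name and any trailing CR
  grep -i '^x-upstream:' | cut -d' ' -f2 | tr -d '\r'
}

# Canned sample response (IP is hypothetical):
sample='HTTP/1.1 200 OK
Server: nginx
X-Upstream: 172.18.0.4:8080'

printf '%s\n' "$sample" | extract_upstream
```

Calling this in a loop and collecting the printed addresses shows the alternation directly.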
Method 2: Monitoring Nginx Logs (Persistent Tracking)
Monitor the live logs to see which instance handles each request (PowerShell shown; on Linux/macOS use `tail -f ./nginx_logs/access.log`):

```shell
Get-Content -Path ./nginx_logs/access.log -Wait
```

Access the application via a browser or curl, and watch the logs for the upstream information.
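Over a longer run, the access log can be summarized to confirm an even split across the two backends. A sketch against the upstream_log format defined above; the sample lines (with fabricated timestamps and IPs) stand in for ./nginx_logs/access.log, which is what you would point the pipeline at in practice:

```shell
#!/bin/sh
# Count requests per upstream from lines in the upstream_log format:
#   $time_iso8601 | $remote_addr | "$request" | status $status | upstream $upstream_addr
log='2024-01-01T10:00:00 | 10.0.0.1 | "GET /aiv HTTP/1.1" | status 200 | upstream 172.18.0.4:8080
2024-01-01T10:00:01 | 10.0.0.1 | "GET /aiv HTTP/1.1" | status 200 | upstream 172.18.0.5:8080
2024-01-01T10:00:02 | 10.0.0.1 | "GET /aiv HTTP/1.1" | status 200 | upstream 172.18.0.4:8080'

# Split each line on the literal "upstream " and tally the addresses.
summary=$(printf '%s\n' "$log" | awk -F'upstream ' '{print $2}' | sort | uniq -c)
printf '%s\n' "$summary"
```

With balanced round-robin traffic, the per-upstream counts should be roughly equal.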
Alternative Load Balancing Solutions
- Docker Swarm Mode: Docker's native orchestration system, which includes built-in load balancing.
- Kubernetes: A more advanced orchestration platform, offering load balancing via Ingress controllers and Services.
- Cloud Provider Load Balancers: Managed services such as AWS ELB, Azure Load Balancer, or Google Cloud Load Balancing can front containerized applications.
- Other Load Balancers: Tools such as HAProxy and Traefik can also be used for load balancing.
Nginx Reference
For more details on Nginx configuration: