502 Bad Gateway: Complete Guide to Reverse Proxy Issues and Solutions
A 502 Bad Gateway error is one of the most common HTTP status codes you'll encounter when working with web applications. It occurs when a reverse proxy or load balancer (like NGINX, HAProxy, or AWS ELB) receives an invalid response from an upstream server or cannot reach the backend service at all. Understanding how to diagnose and fix 502 errors is essential for maintaining reliable web services.
Understanding 502 Bad Gateway
What is a 502 Error?
A 502 Bad Gateway is an HTTP status code that indicates a server acting as a gateway or proxy received an invalid response from an upstream server. This typically happens in scenarios where:
- A reverse proxy (like NGINX) cannot reach the backend application
- The backend application is down or unresponsive
- The backend returns a malformed response
- There's a configuration mismatch between proxy and backend
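To see the mechanism in miniature, here is a minimal sketch (standard library only; localhost:5000 stands in for the backend and 8080 for the proxy, both placeholders) of a toy proxy that answers 502 whenever it cannot get a valid response from its upstream:
# toy_proxy.py: a toy reverse proxy that answers 502 when its upstream fails.
# Standard library only; hosts and ports are placeholders for illustration.
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM_HOST, UPSTREAM_PORT = "localhost", 5000  # placeholder backend

class ToyProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            # Forward the request to the upstream server
            conn = http.client.HTTPConnection(UPSTREAM_HOST, UPSTREAM_PORT, timeout=5)
            conn.request("GET", self.path)
            resp = conn.getresponse()
            body = resp.read()
            # Relay the upstream's status and body back to the client
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        except (OSError, http.client.HTTPException):
            # Upstream unreachable, refused, timed out, or malformed:
            # the proxy itself generates the 502
            self.send_error(502, "Bad Gateway")

HTTPServer(("localhost", 8080), ToyProxy).serve_forever()
Run it with the backend stopped and any request through port 8080 comes back as 502, which is exactly the situation NGINX or an ALB is reporting.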
HTTP Status Code Hierarchy
5xx Server Errors:
├── 500 Internal Server Error - Generic server error
├── 502 Bad Gateway - Invalid response from upstream
├── 503 Service Unavailable - Server temporarily unavailable
├── 504 Gateway Timeout - Upstream server timeout
└── 505 HTTP Version Not Supported - Requested HTTP version rejected
Common Architecture Patterns
Reverse Proxy Setup
Client → NGINX/HAProxy → Backend App
         (the proxy returns 502 when the proxy-to-backend hop fails)
Load Balancer Setup
Client → AWS ELB/ALB → EC2 Instances
         (the load balancer returns 502 when the balancer-to-instance hop fails)
Root Causes of 502 Bad Gateway
1. Backend Service is Down
Symptoms
- 502 error appears suddenly
- No response from backend application
- Service status shows as stopped
Diagnosis
# Check service status
systemctl status your-app
systemctl status nginx
# Check if process is running
ps aux | grep your-app
pgrep -f your-app
# Check port binding
netstat -tuln | grep :5000
ss -tuln | grep :5000
Solution
# Start the service
systemctl start your-app
systemctl enable your-app
# Or restart if already running
systemctl restart your-app
# Check logs for startup errors
journalctl -u your-app -f
2. Wrong Upstream Configuration
Symptoms
- 502 error persists after service restart
- NGINX logs show connection refused
- Backend service is running but on different port
Diagnosis
# Check NGINX configuration
nginx -t
cat /etc/nginx/sites-available/your-site
# Check what port the app is actually running on
netstat -tuln | grep LISTEN
ss -tuln | grep LISTEN
Common Configuration Issues
# Wrong port in proxy_pass
location / {
    proxy_pass http://localhost:5000;  # app might actually be listening on 3000
}

# Host resolution mismatch
location / {
    proxy_pass http://localhost:5000;  # "localhost" may resolve to ::1 while the app binds only 127.0.0.1 (or the reverse)
}

# proxy_pass referencing an upstream name that was never defined
upstream backend {
    server localhost:5000;  # this block must exist for the name "backend" to resolve
}

location / {
    proxy_pass http://backend;
}
Solution
# Fix the configuration
sudo nano /etc/nginx/sites-available/your-site
# Test configuration
sudo nginx -t
# Reload NGINX
sudo systemctl reload nginx
3. Backend Timeout Issues
Symptoms
- 502 (or, with some proxies, 504) error after a delay
- Backend responds slowly
- High response times in logs
Diagnosis
# Check NGINX timeout settings
grep -r "timeout" /etc/nginx/
# Test backend response time
time curl http://localhost:5000/health
# Check backend logs for slow queries
tail -f /var/log/your-app/application.log
Solution
# Increase timeout values
location / {
    proxy_pass http://localhost:5000;
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
    proxy_buffering off;
}
4. Firewall or Security Group Blocking
Symptoms
- 502 error from specific locations
- Connection refused errors
- Works locally but not remotely
Diagnosis
# Test local connectivity
telnet localhost 5000
nc -zv localhost 5000
# Test from different machine
telnet your-server-ip 5000
# Check firewall rules
sudo iptables -L -n
sudo ufw status
sudo firewall-cmd --list-all
Solution
# Allow port through firewall
sudo ufw allow 5000
sudo firewall-cmd --add-port=5000/tcp --permanent
sudo firewall-cmd --reload
# Check security groups (AWS)
aws ec2 describe-security-groups --group-ids sg-xxxxxxxxx
5. SSL/TLS Configuration Mismatch
Symptoms
- 502 error with HTTPS backends
- SSL handshake failures
- Certificate errors in logs
Diagnosis
# Check SSL configuration
openssl s_client -connect localhost:5000
# Test HTTP vs HTTPS
curl -k https://localhost:5000/health
curl http://localhost:5000/health
Solution
# Fix protocol mismatch
location / {
    # If the backend speaks plain HTTP:
    proxy_pass http://localhost:5000;

    # If the backend speaks HTTPS, use this instead (only one proxy_pass per location):
    # proxy_pass https://localhost:5000;
    # proxy_ssl_verify off;  # development only: skips upstream certificate checks
}
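To confirm which protocol the backend actually speaks, a small probe that tries both is handy. A minimal sketch, assuming localhost:5000 as a placeholder and deliberately skipping certificate checks because this is a diagnostic, not production code:
# protocol_probe.py: check whether the backend speaks plain HTTP or HTTPS.
# localhost:5000 is a placeholder; certificate checks are skipped on purpose.
import http.client
import ssl

HOST, PORT = "localhost", 5000

try:
    conn = http.client.HTTPConnection(HOST, PORT, timeout=5)
    conn.request("GET", "/")
    print("plain HTTP answered:", conn.getresponse().status)
except (OSError, http.client.HTTPException) as exc:
    print("plain HTTP failed:", exc)

try:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # diagnostic only
    ctx.verify_mode = ssl.CERT_NONE  # diagnostic only
    conn = http.client.HTTPSConnection(HOST, PORT, timeout=5, context=ctx)
    conn.request("GET", "/")
    print("HTTPS answered:", conn.getresponse().status)
except (OSError, http.client.HTTPException) as exc:
    print("HTTPS failed:", exc)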
6. Application Crashes or Out of Memory
Symptoms
- 502 error after application starts
- Application logs show crashes
- Memory usage spikes
Diagnosis
# Check application logs
journalctl -u your-app -f
docker logs your-container
# Check system resources
free -h
df -h
top
# Check for OOM kills
dmesg | grep -i "killed process"
journalctl | grep -i "out of memory"
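If you suspect a slow leak rather than a sudden crash, logging the process's resident memory over time can confirm it before the OOM killer fires. A minimal sketch, assuming Linux (/proc) and a PID you supply on the command line; rss_watch.py is a hypothetical name:
# rss_watch.py: log a process's resident memory over time (Linux /proc).
# Usage: python rss_watch.py <pid>
import sys
import time

def rss_kb(pid: int) -> int:
    # VmRSS in /proc/<pid>/status is the resident set size in kB
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

pid = int(sys.argv[1])
while True:
    print(f"pid {pid} RSS: {rss_kb(pid)} kB")
    time.sleep(10)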
Solution
# Restart the application
systemctl restart your-app
docker restart your-container
# Increase memory limits
# Docker
docker run -m 2g your-app
# Kubernetes
resources:
  limits:
    memory: "2Gi"
  requests:
    memory: "1Gi"
Troubleshooting Methodology
Step 1: Check Service Status
# Check if backend service is running
systemctl status your-app
ps aux | grep your-app
# Check if proxy service is running
systemctl status nginx
systemctl status haproxy
Step 2: Test Backend Connectivity
# Test direct backend access
curl http://localhost:5000/health
curl http://localhost:5000/
# Test with different tools
wget http://localhost:5000/health
telnet localhost 5000
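These checks can be combined into one script. A minimal sketch, assuming the backend should be on localhost:5000 with a /health route (both placeholders):
# probe_backend.py: TCP connect test plus an HTTP GET against the backend.
# localhost:5000 and /health are placeholders for your service.
import http.client
import socket

HOST, PORT = "localhost", 5000

# TCP level: can a connection be opened at all?
try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print("TCP connect: ok")
except OSError as exc:
    print("TCP connect failed:", exc)  # service down, wrong port, or firewall

# HTTP level: does the service answer sensibly?
try:
    conn = http.client.HTTPConnection(HOST, PORT, timeout=5)
    conn.request("GET", "/health")
    print("HTTP /health status:", conn.getresponse().status)
except (OSError, http.client.HTTPException) as exc:
    print("HTTP request failed:", exc)  # port open but not speaking HTTP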
Step 3: Check Proxy Logs
# NGINX error logs
tail -f /var/log/nginx/error.log
# HAProxy logs
tail -f /var/log/haproxy.log
# Application logs
journalctl -u your-app -f
Step 4: Verify Configuration
# Test NGINX configuration
nginx -t
# Check HAProxy configuration
haproxy -c -f /etc/haproxy/haproxy.cfg
# Verify upstream servers
nginx -T | grep upstream
Step 5: Test Network Connectivity
# Check port binding
netstat -tuln | grep :5000
ss -tuln | grep :5000
# Test firewall rules
sudo iptables -L -n | grep 5000
Common Scenarios and Solutions
Scenario 1: Docker Container Backend
Problem: Container not accessible from host
# Check container status
docker ps
docker logs your-container
# Check port mapping
docker port your-container
# Test container connectivity
docker exec your-container curl localhost:5000/health
Solution
# Restart container
docker restart your-container
# Recreate the container with the correct port mapping
docker run -p 5000:5000 your-app
# Use host network
docker run --network host your-app
Scenario 2: Kubernetes Pod Backend
Problem: Pod not accessible from service
# Check pod status
kubectl get pods
kubectl describe pod your-pod
# Check service configuration
kubectl get svc
kubectl describe svc your-service
# Test pod connectivity
kubectl exec your-pod -- curl localhost:5000/health
Solution
# Fix service configuration
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  selector:
    app: your-app
  ports:
    - port: 80
      targetPort: 5000
  type: ClusterIP
Scenario 3: AWS Load Balancer
Problem: ALB returns 502
# Check target group health
aws elbv2 describe-target-health --target-group-arn arn:aws:elasticloadbalancing:...
# Check security groups
aws ec2 describe-security-groups --group-ids sg-xxxxxxxxx
# Check instance status
aws ec2 describe-instances --instance-ids i-xxxxxxxxx
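The target-health check can also be scripted. A minimal sketch, assuming boto3 is installed and AWS credentials are configured; the target group ARN below is a placeholder to fill in:
# target_health.py: list unhealthy ALB targets and the reason the ALB gives.
# Assumes boto3 is installed and credentials are configured.
import boto3

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:..."  # fill in your ARN

client = boto3.client("elbv2")
response = client.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for desc in response["TargetHealthDescriptions"]:
    health = desc["TargetHealth"]
    if health["State"] != "healthy":
        # Reason and Description explain why the target fails its checks
        print(desc["Target"]["Id"], health["State"],
              health.get("Reason", ""), health.get("Description", ""))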
Solution
# Fix security group rules
# (0.0.0.0/0 opens the port to everyone; in production, scope the rule
# to the load balancer's security group instead)
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxxx \
    --protocol tcp \
    --port 5000 \
    --cidr 0.0.0.0/0
# Restart instances
aws ec2 reboot-instances --instance-ids i-xxxxxxxxx
Prevention and Best Practices
1. Health Checks
Implement Health Endpoints
# Flask example
@app.route('/health')
def health_check():
    return {'status': 'healthy', 'timestamp': time.time()}, 200

# Express.js example
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'healthy', timestamp: Date.now() });
});
Configure Health Checks
# NGINX health check
location /health {
    proxy_pass http://localhost:5000/health;
    access_log off;
}
2. Monitoring and Alerting
Set Up Monitoring
# Prometheus monitoring
- job_name: 'your-app'
  static_configs:
    - targets: ['localhost:5000']
  metrics_path: /metrics
  scrape_interval: 15s
Configure Alerts
# Alert for 502 errors
- alert: High502ErrorRate
  expr: rate(nginx_http_requests_total{status="502"}[5m]) > 0.1
  for: 2m
  labels:
    severity: critical
  annotations:
    summary: "High 502 error rate detected"
3. Graceful Shutdowns
Implement Graceful Shutdown
# Python example
import signal
import sys

def signal_handler(sig, frame):
    print('Gracefully shutting down...')
    # Clean up resources
    sys.exit(0)

signal.signal(signal.SIGTERM, signal_handler)
signal.signal(signal.SIGINT, signal_handler)
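The payoff of graceful shutdown is that in-flight requests finish instead of being cut off, which would otherwise surface as 502s at the proxy during every deploy. A minimal sketch of that idea with the standard-library HTTP server (the port is a placeholder):
# graceful_server.py: stop accepting new connections on SIGTERM,
# but let in-flight requests finish before exiting.
import signal
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

server = ThreadingHTTPServer(("localhost", 5000), Handler)
server.daemon_threads = False  # wait for handler threads instead of killing them

def drain(signum, frame):
    # shutdown() blocks until serve_forever() returns, so call it
    # from another thread rather than inside the signal handler
    threading.Thread(target=server.shutdown).start()

signal.signal(signal.SIGTERM, drain)
signal.signal(signal.SIGINT, drain)
server.serve_forever()  # returns once shutdown() completes
print("drained, exiting cleanly")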
4. Load Balancing
Configure Multiple Backends
upstream backend {
    # max_fails / fail_timeout make NGINX mark a failed server and retry it later
    server localhost:5000 max_fails=3 fail_timeout=30s;
    server localhost:5001 max_fails=3 fail_timeout=30s;
    server localhost:5002 max_fails=3 fail_timeout=30s;
}

location / {
    proxy_pass http://backend;
}
Testing 502 Errors
Simulate 502 for Testing
Stop Backend Service
# Stop the backend service
sudo systemctl stop your-app
# Test the 502 error
curl -I http://your-domain.com
# Should return: HTTP/1.1 502 Bad Gateway
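The same check can be automated as a small assertion, handy in a smoke-test suite. A minimal sketch, assuming your-domain.com is a placeholder for the host under test:
# expect_502.py: assert that the proxy now answers 502.
import http.client

conn = http.client.HTTPConnection("your-domain.com", 80, timeout=10)
conn.request("HEAD", "/")
status = conn.getresponse().status
assert status == 502, f"expected 502, got {status}"
print("proxy correctly returns 502 while the backend is down")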
Block Port with Firewall
# Block the backend port
sudo iptables -A INPUT -p tcp --dport 5000 -j DROP

# Test the 502 error
curl -I http://your-domain.com

# Remove the rule when finished testing
sudo iptables -D INPUT -p tcp --dport 5000 -j DROP
Misconfigure Proxy
# Wrong port in configuration
location / {
    proxy_pass http://localhost:9999;  # non-existent port
}
Conclusion
502 Bad Gateway errors are common in web applications using reverse proxies or load balancers. The key to resolving them is systematic troubleshooting:
- Check service status - Ensure backend services are running
- Test connectivity - Verify the proxy can reach the backend
- Review configuration - Check for port mismatches and protocol issues
- Monitor logs - Look for error patterns and timeout issues
- Test network - Verify firewall rules and security groups
Key takeaways:
- 502 errors are proxy-side issues - The problem is between your proxy and backend
- Check backend first - Ensure your application is running and accessible
- Verify configuration - Port mismatches are common causes
- Monitor continuously - Set up health checks and alerting
- Test systematically - Use a step-by-step approach to isolate the issue