Scale WebSocket using Redis and HAProxy

Vipul Vyas
6 min read · Jan 29, 2023

WebSockets are an efficient and powerful technology for real-time communication over the web, but they can be challenging to scale as the number of concurrent connections grows. One solution is to combine Redis and HAProxy: Redis, a high-performance in-memory data store, provides the publish/subscribe mechanism that relays messages between WebSocket servers, while HAProxy, a high-performance load balancer, distributes incoming connections across those servers. Together, these tools let you build a robust, horizontally scalable WebSocket infrastructure that can handle large numbers of concurrent connections. [GitHub: https://github.com/vipulvyas/Socket/tree/main/Ws-scalability-haproxy-redis]

Let’s suppose we have 4 WebSocket servers (scaling horizontally) and we want to use all 4 of them for a chat application. All 4 servers sit behind a load balancer that divides the load using a round-robin algorithm. The problem is: when one server receives a message from a client, how do we tell the other servers about it? For that, we will use Redis’s publisher/subscriber (pub/sub) mechanism.
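Before wiring up Redis itself, the fan-out idea can be sketched in plain Node, with an EventEmitter standing in for the Redis channel (the EventEmitter, makeServer, and the inbox arrays here are illustrative stand-ins, not part of the real setup):

```javascript
import { EventEmitter } from "events";

// stand-in for the Redis pub/sub channel shared by all servers
const channel = new EventEmitter();

// each simulated server keeps its own client list and relays every
// message it hears on the channel to those clients, tagged with its id
function makeServer(appId, clients) {
  channel.on("livechat", msg => {
    clients.forEach(inbox => inbox.push(`${appId}:${msg}`));
  });
  return {
    // a message from a client is published to the shared channel,
    // never sent to the other clients directly
    receiveFromClient: msg => channel.emit("livechat", msg),
  };
}

const inboxA = []; // a client connected to server 1111
const inboxB = []; // a client connected to server 2222
const server1 = makeServer("1111", [inboxA]);
makeServer("2222", [inboxB]);

// a message arriving at server 1111 reaches the client on server 2222 too
server1.receiveFromClient("hello");
console.log(inboxA); // [ '1111:hello' ]
console.log(inboxB); // [ '2222:hello' ]
```

Because each server publishes instead of sending directly, it makes no difference which server the load balancer picked for a given client: everyone subscribed to the channel sees the message.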

Let’s create 4 WebSocket servers, a Redis server, and an HAProxy load balancer using Docker.

First, let’s create a WebSocket server. We will use Node.js for this.

index.mjs

import http from "http";
import ws from "websocket";
import redis from "redis";

// get the APPID from the environment variable
const APPID = process.env.APPID;

// array to store all the current connections
let connections = [];

const WebSocketServer = ws.server;

// create a dedicated redis client for subscribing to messages
// (a client in subscriber mode cannot publish, so we need two clients)
const subscriber = redis.createClient({
  port: 6379,
  host: "rds"
});

// create a second redis client for publishing messages
const publisher = redis.createClient({
  port: 6379,
  host: "rds"
});

// when the subscriber has successfully subscribed to a channel
subscriber.on("subscribe", function (channel, count) {
  console.log(`Server ${APPID} subscribed successfully to livechat`);
  publisher.publish("livechat", "a message");
});

// when a message is received on the channel, fan it out to every
// client connected to *this* server
subscriber.on("message", function (channel, message) {
  try {
    console.log(`Server ${APPID} received message in channel ${channel} msg: ${message}`);
    connections.forEach(c => c.send(APPID + ":" + message));
  } catch (ex) {
    console.log("ERR::" + ex);
  }
});

// subscribe to the 'livechat' channel
subscriber.subscribe("livechat");

// create a raw http server (this gives us the TCP listener that the
// websocket library will upgrade)
const httpserver = http.createServer();

// hand the http server to the WebSocketServer library; it takes over
// the upgrade handshake
const websocket = new WebSocketServer({
  httpServer: httpserver
});

// listen on port 8080
httpserver.listen(8080, () => console.log("My server is listening on port 8080"));

// when a legit websocket request comes in, accept it and keep the connection
websocket.on("request", request => {

  // accept the websocket connection
  const con = request.accept(null, request.origin);

  // when the connection closes, drop it from the connections array
  // so we don't try to send to a dead socket
  con.on("close", () => {
    console.log("CLOSED!!!");
    connections = connections.filter(c => c !== con);
  });

  // when a message is received, publish it to the 'livechat' channel
  // in redis so every server (including this one) can broadcast it
  con.on("message", message => {
    console.log(`${APPID} Received message ${message.utf8Data}`);
    publisher.publish("livechat", message.utf8Data);
  });

  // send a greeting to the client after 5 seconds
  setTimeout(() => con.send(`Connected successfully to server ${APPID}`), 5000);

  // add the connection to the connections array
  connections.push(con);
});

Here is the directory structure.
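The original post shows this as a screenshot; based on the Dockerfile and docker-compose file that follow, the layout is assumed to be roughly:

```
.
├── app/
│   ├── index.mjs
│   └── package.json
├── haproxy/
│   └── haproxy.cfg
├── dockerfile
└── docker-compose.yml
```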

Let’s create a server docker image.

dockerfile

FROM node:14
WORKDIR /home/node/app
COPY app /home/node/app
RUN npm install
CMD npm run app
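The `npm install` and `npm run app` steps assume that app/ contains a package.json declaring the two dependencies and an `app` script; a minimal sketch (the exact version ranges are assumptions, though a redis ^3 line matters because the code above uses the v3 callback API):

```json
{
  "name": "wsapp",
  "version": "1.0.0",
  "scripts": {
    "app": "node index.mjs"
  },
  "dependencies": {
    "redis": "^3.1.2",
    "websocket": "^1.0.34"
  }
}
```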

docker build -t wsapp .


The command `docker build -t wsapp .` builds a Docker image from the Dockerfile in the current directory. The `-t` flag sets the image name (and optionally a tag in `name:tag` format), and the final `.` sets the build context to the current directory. Since no tag is given, Docker assigns the default tag `latest`, so the resulting image is `wsapp:latest`.

Now that the Docker image is created, let’s spin up multiple servers from it, plus HAProxy and Redis, using docker-compose.

version: '3'

services:
  lb:
    image: haproxy
    ports:
      - "8080:8080"
    volumes:
      - ./haproxy:/usr/local/etc/haproxy
  ws1:
    image: wsapp
    environment:
      - APPID=1111
  ws2:
    image: wsapp
    environment:
      - APPID=2222
  ws3:
    image: wsapp
    environment:
      - APPID=3333
  ws4:
    image: wsapp
    environment:
      - APPID=4444
  rds:
    image: redis

This Docker Compose file sets up a web application that uses a combination of Redis and HAProxy to handle WebSocket connections. The file defines several services that are used in the application:

  • lb: This service is for HAProxy, which is a high-performance load balancer. It maps port 8080 on the host to port 8080 in the container. The file also maps a local directory called haproxy to the /usr/local/etc/haproxy directory in the container, which is where HAProxy's configuration files are stored.
  • ws1, ws2, ws3, ws4: These services run the WebSocket application. Each uses the custom wsapp image built above, and each sets an APPID environment variable to a different value, which the application uses to identify which server handled a given connection.
  • rds: This service is for Redis, which is an in-memory data store. It uses the official Redis image from Docker Hub.

Let’s create an HAProxy config file.

haproxy.cfg

frontend http
    bind *:8080
    mode http
    timeout client 1000s
    use_backend all

backend all
    mode http
    timeout server 1000s
    timeout connect 1000s
    server s1 ws1:8080
    server s2 ws2:8080
    server s3 ws3:8080
    server s4 ws4:8080
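HAProxy balances round-robin by default, which is what the setup described earlier relies on; if you prefer to make that explicit and have HAProxy health-check the backends, the backend section could be written like this (optional, a sketch):

```
backend all
    mode http
    balance roundrobin
    timeout server 1000s
    timeout connect 1000s
    server s1 ws1:8080 check
    server s2 ws2:8080 check
    server s3 ws3:8080 check
    server s4 ws4:8080 check
```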

That is everything. Let's run docker-compose.

docker-compose up


Our servers are up and running. Let’s test them from the browser’s developer console.

let ws = new WebSocket("ws://localhost:8080");
ws.onmessage = message => console.log(`Received: ${message.data}`);
ws.onopen = () => ws.send("Hello! I'm client 1");

Note that calling send() immediately after the constructor can fail, because the WebSocket handshake takes a moment to complete; sending from the onopen handler (or pasting the lines one at a time) avoids this.

After each connection, the greeting message tells you which server the client landed on:

client 1: connected on the server which has AppID 1111
client 2: connected on the server which has AppID 2222
client 3: connected on the server which has AppID 3333
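You can also verify the Redis fan-out directly: publishing on the livechat channel from inside the Redis container should make every connected browser client log the message (a sketch; the service name rds comes from the compose file):

```shell
# open redis-cli inside the running redis container and publish a message
docker-compose exec rds redis-cli publish livechat "hello from redis-cli"
```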

In conclusion, using Redis and HAProxy to scale WebSockets is an effective way to handle high traffic and provide reliable real-time communication. Redis’s publish/subscribe functionality lets messages be broadcast to multiple WebSocket servers, enabling horizontal scaling, while HAProxy distributes incoming traffic across those servers, further increasing capacity and reliability. Together, these tools make it possible to handle a large number of concurrent connections and provide a robust real-time experience for users.

code: https://github.com/vipulvyas/Socket/tree/main/Ws-scalability-haproxy-redis
