Catching disconnects with Apache load balancer and node/socket.io backend

NOTE: I have solved most of the problem, but am still encountering an issue with catching the disconnects as noted towards the bottom of this post in the Update section.

NOTE 2: As requested I have posted a more complete view of my setup. See the heading at the bottom of this post.

I am trying to set up a load balancer in Apache but it is not working for socket.io. My Apache code looks like this:

<VirtualHost *:80>
        ServerAdmin webmaster@example.com
        ServerName jpl.example.com

        ProxyRequests off

        Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
        <Proxy "balancer://mycluster">
                BalancerMember "http://203.0.113.22:3000" route=1
                BalancerMember "http://203.0.113.23:3000" route=2
        </Proxy>

        ProxyPass "/test/" "balancer://mycluster/"
        ProxyPassReverse "/test/" "balancer://mycluster/"    

</VirtualHost>

Problems with socket.io

The issue I am facing is that on the backend I have a node.js server that uses socket.io connections for long polling in both subdir1/index.html and subdir2/index.html. Unfortunately, socket.io expects to be served only from the root path:

http://203.0.113.22:3000/socket.io/

It cannot be found if I try to reach it from:

http://jpl.example.com/test/socket.io

The start of my index.js file on the server looks like this:

// Setup basic express server
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);
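
When socket.io is created this way with no path option, it mounts itself at /socket.io/ on that HTTP server, which is why the client bundle only exists at the root and not under /test/. A minimal sketch of the two forms (hedged: this assumes the socket.io 1.x/2.x API):

// Default vs. custom mount path for socket.io (assumption: socket.io 1.x/2.x).
var http = require('http');
var server = http.createServer();

var io = require('socket.io')(server);                                  // serves /socket.io/socket.io.js
// var io = require('socket.io')(server, { path: '/test/socket.io' });  // would serve /test/socket.io/socket.io.js

server.listen(3000);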

Part of my /subdir1/index.html (also being loaded from the server) originally looked like this:

<script src="/socket.io/socket.io.js"></script>
<script>
    var socket = io.connect();
    socket.on('notification', function (data) {

But I was now getting an error when accessing it through the proxy. The error was:

http://jpl.example.com/socket.io/socket.io.js 404 (Not Found)

I have tried changing it to this:

<script src="/test/socket.io/socket.io.js"></script>
<script src="http://code.jquery.com/jquery-latest.min.js"></script>
<script>
    var refresh_value = 0;
    var refresh_time = 0;
    //var socket = io.connect();
    var socket = io.connect('http://example.com/', {path: "/test/"});
    socket.on('notification', function (data) {

It no longer gives me an error, but there is no indication that it is communicating with the socket.

What am I doing wrong here and how can I get this to work?

Update

I have now mostly solved the problem with using:

var socket = io.connect('http://example.com/', {path: "/test/socket.io"});

instead of:

var socket = io.connect('http://example.com/', {path: "/test/"});
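
To check that the client is really communicating through the proxy (and not just loading the script), the connect and disconnect events can be logged on the client. A minimal sketch using the corrected path (the transport lookup assumes socket.io-client 1.x/2.x internals):

// Sketch: verifying the proxied socket actually connects.
var socket = io.connect('http://example.com/', { path: '/test/socket.io' });

socket.on('connect', function () {
    // socket.io.engine is the underlying engine.io socket in socket.io-client 1.x/2.x
    console.log('connected, id=' + socket.id + ', transport=' + socket.io.engine.transport.name);
});

socket.on('disconnect', function (reason) {
    console.log('disconnected: ' + reason);
});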

Final problem:

Things are now working but I am still experiencing the following issue:

It takes about a minute before the server detects that a client has actually closed a page. Without the Apache proxy/load balancer in front, I do not have this issue. I have tried various things, such as setting KeepAlive to "Off" and modifying the VirtualHost at the top of this post with the following:

        <Proxy "balancer://mycluster">
                BalancerMember "http://203.0.113.22:3000" route=1 max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
                BalancerMember "http://203.0.113.23:3000" route=2 max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
        </Proxy>

But it still takes about a minute before it recognizes that a client has left the page. What can I do to solve this problem?

A more complete view of my setup

As requested, to help diagnose the problem I am posting a more complete view of my setup. I have trimmed it down as much as I could while still providing enough detail:

My current Apache file:

<VirtualHost *:80>
    # Admin email, Server Name (domain name), and any aliases
    ServerAdmin webmaster@example.com
    ServerName jpl.example.com

    ProxyRequests off

    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
    <Proxy "balancer://mycluster">
        BalancerMember "http://203.0.113.22:3000" route=1 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        BalancerMember "http://203.0.113.23:3000" route=2 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        ProxySet stickysession=ROUTEID
    </Proxy>

    <Proxy "balancer://myws">
        BalancerMember "ws://203.0.113.22:3000" route=1 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        BalancerMember "ws://203.0.113.23:3000" route=2 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        ProxySet stickysession=ROUTEID
    </Proxy>

    RewriteEngine On
    RewriteCond %{REQUEST_URI}  ^/test/socket.io                [NC]
    RewriteCond %{QUERY_STRING} transport=websocket        [NC]
    RewriteRule /(.*)           balancer://myws/$1 [P,L]

    ProxyPass "/test/" "balancer://mycluster/"
    ProxyPassReverse "/test/" "balancer://mycluster/"

</VirtualHost>

On each of those servers I have a node installation. The main index.js looks like this:

/************ Set Variables ******************/

// Setup basic express server
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);
var port = process.env.PORT || 3000;
var fs                  = require('fs'),
    mysql               = require('mysql'),
    connectionsArray    = [],
    connection          = mysql.createConnection({
        host        : 'localhost',
        user        : 'myusername',
        password    : 'mypassword',
        database    : 'mydatabase',
        port        : 3306
    }),
    POLLING_INTERVAL = 5000;


server.listen(port, function () {
        console.log("-----------------------------------");
        console.log('Server listening at port %d', port);
});

// Routing
app.use(express.static(__dirname + "/public"));


/*********  Connect to DB ******************/
connection.connect(function(err) {
        if (err == null){
                console.log("Connected to Database!");
        }
        else {
                console.log( err );
                process.exit();
        }
});



/***********************  Looping *********************/

var pollingLoop = function () {

        var query = connection.query('SELECT * FROM spec_table'),
        specs = [];

        query
        .on('error', function(err) {
                console.log( err );
                updateSockets( err );
        })

        .on('result', function( spec ) {
                specs.push( spec );
        })

        .on('end',function(){
                pollingLoop2(specs);
        });

};

var pollingLoop2 = function (specs) {

        // Make the database query
        var query = connection.query('SELECT * FROM info_table'),
        infos = [];

        // set up the query listeners
        query
        .on('error', function(err) {
                console.log( err );
                updateSockets( err );
        })

        .on('result', function( info ) {
                infos.push( info );
        })

        .on('end',function(){
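                // keep polling only while at least one client is still connected, then push the fresh data to them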
                if(connectionsArray.length) {
                        setTimeout( pollingLoop, POLLING_INTERVAL );
                        updateSockets({specs:specs, infos:infos});
                }
        });

};

/***************  Create new websocket ****************/
//This is where I can tell who connected and who disconnected.

io.sockets.on('connection', function ( socket ) {

        var socketId = socket.id;

        var clientIp = socket.request.connection.remoteAddress;
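        // Note: behind the Apache proxy this will be the proxy's address rather than the browser's;
        // the real client IP would have to come from the X-Forwarded-For header that mod_proxy adds.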

        var time = new Date();
        console.log(time);
        console.log("33[32mJOINED33[0m: "+ clientIp + " (Socket ID: " + socketId + ")");

        // start the polling loop only if at least there is one user connected
        if (!connectionsArray.length) {
                pollingLoop();
        }

        socket.on('disconnect', function () {
                var socketIndex = connectionsArray.indexOf( socket );

                var time = new Date();
                console.log(time);
                console.log("33[31mLEFT33[0m: "+ clientIp + " (Socket ID: " + socketId + ")");

                if (socketIndex >= 0) {
                        connectionsArray.splice( socketIndex, 1 );
                }
                console.log('    Number of connections: ' + connectionsArray.length);
        });

        connectionsArray.push( socket );
        console.log('    Number of connections: ' + connectionsArray.length);

});


/********* Function updateSockets ************/

var updateSockets = function ( data ) {

        connectionsArray.forEach(function( tmpSocket ){
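                // volatile: if a client's connection is not ready to receive, the notification is simply dropped for that client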
                tmpSocket.volatile.emit( 'notification' , data );
        });

};

Finally, in my public/dir1/index.html file I have something that looks like this:

//HTML code here
<script src="/test/socket.io/socket.io.js"></script>
<script>
    var socket = io.connect('', {path: "/test/socket.io"});
    socket.on('notification', function (data) {
            $.each(data.specs,function(index,spec){
                    //Other js code here
            })
    })
</script>
//More HTML code here

With this particular setup the connection works, but it takes over a minute before I can detect that a page has been closed. Also, with this setup there is an error logged to the console:

WebSocket connection to 'ws://jpl.example.com/test/socket.io/?EIO=3&transport=websocket&sid=QE5aCExz3nAGBYcZAAAA' failed: Connection closed before receiving a handshake response
ws @ socket.io.js:5325

What am I doing wrong and how can I fix my code so that I can detect disconnects the moment they occur?

Note: It works just fine if I do not use a subdirectory /test/.

Please also note: this is only a subdirectory appearing in the URL. It does not exist in the file system anywhere.

Also, I am open to tips and suggestions if you notice areas in my code that I could be writing better.


So after some trial and error, I was able to get a config which works fine. The changes required:

Base Path on Server

You need to use the base path on the server as well to make this work smoothly:

var io = require('socket.io')(server, { path: '/test/socket.io'});
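
The path has to match on the client side as well; the connect call already used in the question pairs with this server-side change:

var socket = io.connect('', { path: '/test/socket.io' });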

And then below is the updated Apache config I used

<VirtualHost *:8090>
    # Admin email, Server Name (domain name), and any aliases
    ServerAdmin webmaster@example.com

    ProxyRequests off

   #Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
   Header add Set-Cookie "SERVERID=sticky.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

    <Proxy "balancer://mycluster">
        BalancerMember "http://127.0.0.1:3001" route=1 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        BalancerMember "http://127.0.0.1:3000" route=2 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        ProxySet stickysession=SERVERID
    </Proxy>

    <Proxy "balancer://myws">
        BalancerMember "ws://127.0.0.1:3001" route=1 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        BalancerMember "ws://127.0.0.1:3000" route=2 keepalive=On smax=1 connectiontimeout=10 retry=600 timeout=900 ttl=900
        ProxySet stickysession=SERVERID
    </Proxy>

    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule /(.*) balancer://myws/$1 [P,L]

    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule /(.*)                balancer://mycluster/$1 [P,L]
    ProxyTimeout 3
</VirtualHost>

And now the disconnects are detected immediately.



I'm not that familiar with Apache's mod_proxy, but I think your issue is related to your paths.

I set up a little test to see if I could help (and have a play). In my test I proxy both HTTP and WS traffic to a single backend, which is what you are doing, plus websockets.

Servers (LXD Containers):

  • 10.158.250.99 is the proxy.
  • 10.158.250.137 is the node.
First, enable the Apache mods on the proxy:

    sudo a2enmod proxy
    sudo a2enmod proxy_http
    sudo a2enmod proxy_wstunnel
    sudo a2enmod proxy_balancer
    sudo a2enmod lbmethod_byrequests
    

Then change 000-default.conf:

    sudo nano /etc/apache2/sites-available/000-default.conf
    

This is what I used after clearing out the comments:

    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html
    
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    
        <Proxy balancer://mycluster>
            BalancerMember http://10.158.250.137:7779
        </Proxy> 
    
        ProxyPreserveHost On
    
        # web proxy - forwards to mycluser nodes
        ProxyPass /test/ balancer://mycluster/
        ProxyPassReverse /test/ balancer://mycluster/
    
        # ws proxy - forwards to web socket server
        ProxyPass /ws/  "ws://10.158.250.137:7778"
    
    </VirtualHost>
    

What the above config is doing:

  • Visit the proxy at http://10.158.250.99 and it will show the default Apache page.
  • Visit the proxy at http://10.158.250.99/test/ and it will forward the HTTP request to http://10.158.250.137:7779.
  • Visit the proxy at http://10.158.250.99/ws and it will make a websocket connection to ws://10.158.250.137:7778 and tunnel it through.
  • So for my app I am using phptty, since it uses both HTTP and WS: its xterm.js frontend connects to the websocket at http://10.158.250.99/ws to give a tty in the browser (a minimal Node stand-in for such a backend is sketched right after this list).
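
If you want to reproduce this without phptty, a minimal Node stand-in for the backend would do. This is just a sketch under my own assumptions: it uses the ws npm package (not part of your setup) and the same two ports as the config above.

// Stand-in backend for the proxy test (assumption: "npm install ws" has been run).
var http = require('http');
var WebSocket = require('ws');

// Plain HTTP on 7779 -- reached through the proxy as http://10.158.250.99/test/
http.createServer(function (req, res) {
    res.end('hello from the node backend\n');
}).listen(7779);

// WebSocket server on 7778 -- reached through the proxy as ws://10.158.250.99/ws/
var wss = new WebSocket.Server({ port: 7778 });
wss.on('connection', function (ws) {
    ws.on('message', function (msg) {
        // Echo whatever arrives so the tunnel is easy to verify from the browser console.
        ws.send('echo: ' + msg);
    });
});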

Here is a screenshot of it all working, using my LXDui electron app to control it all.

So check your settings against what I have tried and see what is different; it's always good to experiment a bit to see how things work before trying to apply them to your own setup.

Hope it helps.


I think the delay in detecting that the client has closed the page comes from the default kernel TCP keepalive configuration of your Apache proxy node. If you check the value of net.ipv4.tcp_keepalive_time on that system, you may find 60, meaning 60 seconds are waited before the first keepalive packet is sent to detect whether the client has closed the connection. From the details of your problem, mod_proxy looks to have an issue in that it does not forward the RST packet, which you handle correctly without mod_proxy in the path. Without solving that RST forwarding issue in mod_proxy, you may only be able to reduce the delay by decreasing tcp_keepalive_time, for example to 5, so that the kernel starts checking the connection after 5 seconds. Also check the number of failed keepalive probes required before the connection is declared closed (tcp_keepalive_probes), since it also affects the total delay.
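
Independently of the kernel settings, socket.io's own heartbeat also bounds how quickly a silently dead client is noticed; with the defaults commonly shipped in the 1.x/2.x line (pingInterval 25000 ms, pingTimeout 60000 ms) that is roughly a minute, which matches the delay described. A hedged sketch of lowering those values when creating the server (option names from socket.io 1.x/2.x; the numbers are only an example, not a recommendation):

// Sketch: tightening socket.io's heartbeat so dead clients are noticed sooner.
var server = require('http').createServer();
var io = require('socket.io')(server, {
    path: '/test/socket.io',  // same base path as the rest of this setup
    pingInterval: 10000,      // ping every connected client every 10 s
    pingTimeout: 5000         // drop a client that has not answered within 5 s of a ping
});
server.listen(3000);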
