So the problem I'm trying to solve is that I have a cluster of nodes. Due to existing (difficult-to-change) technology choices, those nodes all communicate over WebSockets. There's a tiny HTTP server to handle the upgrade request, but other than that, everything is socket- and command-based.

The trouble is, the normal proxy tools (HAProxy, NGINX) essentially tunnel the connection for you. The way this app works, certain customers are pinned to certain nodes (clusters, really). Yes, it's a horrible scaling strategy; you're preaching to the choir :)

So I've got to deliver frontend users to the server that's able to handle a given customer. However, if one of those backend servers goes down temporarily, I don't want the frontend users to know that anything is going on. We know it'll be available again in a few seconds. I don't want the frontend users to even go through a WebSocket disconnect/reconnect cycle.

To that end, I've written a little app with node-websocket that creates a server to handle the connections from the frontend clients. A small piece of logic picks the right backend and sends them to it. When that connection to the backend gets closed, dies, or otherwise stops working, messages get queued into a simple array (with push/unshift semantics). When we're able to reconnect to the server, the queued messages get replayed first, then new messages flow as normal.

I'm basically wondering if a library for this already exists. I've written it, but it's kind of ugly and I'm a bit hesitant about the production-readiness of the code. I've tried Nodejitsu's http-proxy, but the patterns in that library make it almost impossible to achieve what I need. Running my own server and doing my own multiplexing is functioning, but I feel like I must be rebuilding a wheel. Any pointers?
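For concreteness, here's roughly the shape of what I've built: a minimal sketch of the queue-and-replay idea, written against the ws package rather than my actual node-websocket code. pickBackend(), the backend URL, the port, and the reconnect delay are all placeholders, not the real routing logic:

```js
const WebSocket = require('ws');

const RECONNECT_DELAY_MS = 1000; // assumption: backend is back "in a few seconds"

// Hypothetical routing stub -- stands in for the real customer-to-cluster logic.
function pickBackend(client) {
  return 'ws://backend-1.internal:9000'; // placeholder URL
}

// Maintains one backend connection; buffers messages while it's down and
// replays them in order once it comes back, so the frontend never notices.
function createBackendLink(url, client) {
  const queue = [];
  let socket = null;

  function connect() {
    socket = new WebSocket(url);
    socket.on('open', () => {
      while (queue.length > 0) socket.send(queue.shift()); // replay queued first
    });
    socket.on('message', (msg) => client.send(msg)); // backend -> frontend
    socket.on('close', () => {
      socket = null;
      setTimeout(connect, RECONNECT_DELAY_MS); // quiet retry, frontend stays up
    });
    socket.on('error', () => {}); // 'close' fires after this; reconnect handles it
  }

  connect();

  return {
    send(msg) {
      if (socket && socket.readyState === WebSocket.OPEN) {
        socket.send(msg);
      } else {
        queue.push(msg); // backend down: buffer until reconnect
      }
    },
  };
}

// Frontend-facing server: each client gets pinned to a backend link.
const server = new WebSocket.Server({ port: 8080 });
server.on('connection', (client) => {
  const backend = createBackendLink(pickBackend(client), client);
  client.on('message', (msg) => backend.send(msg));
});
```

The key point is that the frontend-facing socket never closes; only the backend link churns, and the array absorbs whatever arrives in the gap.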
Submitted March 04, 2019 at 09:47PM by librul-snowflake