How to properly loadbalance separate fastcgi servers

This post covers a question about load balancing separate FastCGI servers, touching on apache-2.2, load-balancing, django, fastcgi, and scaling.

We are running a fairly large deployment with multiple servers running Django applications under Apache 2 and mod_wsgi.

We are considering switching to Apache 2 + FastCGI and moving the FastCGI processes to a separate application-server “layer”.

My question is: how do we do proper load balancing between those multiple backend servers? I am most concerned about the ability to add and remove servers on the fly.

Solution:

What you’re proposing is workable; it sounds like you’re essentially recreating the architecture that eins.de used for their Ruby on Rails installation for a while.

But, that said, it would not be my first choice. Why bother with load balancing and adding/removing servers over FastCGI when HTTP is so ubiquitous? What benefit do you expect to gain from using FastCGI instead of HTTP?

I would just use an HTTP/HTTPS load balancer as the first server and speak plain HTTP to the application-server layer. The application servers could run Apache 2 + mod_wsgi + Django or gunicorn + Django, depending on your preference.
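As a rough sketch of that setup, here is what an nginx front end proxying HTTP to two Django application servers could look like. The hostnames and port are assumptions, not anything from the original question:

```nginx
# Minimal nginx reverse-proxy sketch; app1/app2 are hypothetical
# application servers running Apache2+mod_wsgi or gunicorn+Django.
upstream django_backends {
    server app1.internal:8000;
    server app2.internal:8000;
    # To take a backend out of rotation, mark it "down" and reload:
    # server app2.internal:8000 down;
}

server {
    listen 80;

    location / {
        proxy_pass http://django_backends;
        # Pass the original host and client IP through to Django
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Adding or removing an application server is then just editing the `upstream` block and reloading nginx; no backend-specific FastCGI wiring is needed.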

There are many good HTTP load balancers available; search this site for comparisons. Some common open-source choices are Perlbal, nginx, HAProxy, and Apache 2.2 (in no particular order). There are also commercial appliances from Coyote Point and Loadbalancer.org, and services like Amazon ELB for EC2.

I am most concerned about the ability to add and remove servers on the fly.

That’s a good point, both about taking app servers out of service and about reloading the load balancer configuration on the fly. Off the top of my head, Perlbal, nginx, and HAProxy can all reload their configuration while running, but I haven’t verified this just now.
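For what it’s worth, the graceful-reload commands for the load balancers mentioned above look roughly like the following. The config path and PID file location are assumptions that will vary per distribution:

```shell
# nginx: validate the config, then reload without dropping connections
nginx -t && nginx -s reload

# HAProxy: start a new process; -sf tells the old process (by PID)
# to finish its existing connections and then exit
haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)

# Apache: graceful restart re-reads the config while serving
# in-flight requests to completion
apachectl graceful
```

In each case existing client connections are drained rather than cut, which is what makes on-the-fly addition and removal of backends practical.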
