Apache Django OOM on server with 4GB RAM

Tags: apache-2.2, django, linode, oom
I’m running a production Django site on an Ubuntu Linode with 4GB RAM. The major processes are Apache2, MongoDB, Memcached, PostgreSQL, Tomcat6 and Redis. Apache gets OOM-killed about 10 times a day. I’ve tweaked values in apache2.conf many times and seen no effect. There is no obvious correlation between the number of requests and the memory spikes, nor between the path of the requests and the spikes. I say ‘spikes’ because normally Apache consumes very little memory, then suddenly, within one second, it jumps to 3.5GB and gets killed by the kernel. I’ve not been able to trigger the spikes artificially with JMeter (load-testing software); memory consumption under load is normally quite low and stable.
[24 hour graph of memory usage (from Linode Longview); image not shown]
It also looks like memory usage is slowly climbing over time.
kernel: apache2 invoked oom-killer: ...
kernel: 11705 total pagecache pages
kernel: 5472 pages in swap cache
kernel: Swap cache stats: add 76719087, delete 76713615, find 92563708/94246314
kernel: Free swap = 0kB
kernel: Total swap = 2097148kB
kernel: 1050623 pages RAM
kernel: 43278 pages reserved
kernel: 788996 pages shared
kernel: 999768 pages non-shared
...
kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
kernel: [ 3709]  1000  3709  3706586   889237    7117   464598             0 apache2
...
kernel: Killed process 3709 (apache2) total-vm:14826344kB
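For scale, the OOM report counts total_vm, rss and swapents in 4 KB pages: the killed child had total_vm = 3706586 × 4 KB = 14826344 kB (about 14 GB of virtual address space, matching the total-vm figure on the last line), rss = 889237 × 4 KB ≈ 3.4 GB resident, and swapents = 464598 × 4 KB ≈ 1.8 GB in swap, which on its own nearly exhausts the 2 GB swap partition (hence Free swap = 0kB). In other words, a single Apache child is ballooning, not the worker pool as a whole.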
Timeout 30
KeepAlive Off

<IfModule mpm_prefork_module>
    StartServers          3
    MinSpareServers       2
    MaxSpareServers       5
    MaxClients           10
    MaxRequestsPerChild 1000
</IfModule>
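Note what these limits do and don’t bound: with MaxClients 10, even a generous (assumed) 100 MB per child would put the whole pool around 1 GB, yet the OOM log shows one child at ~3.4 GB by itself. Prefork caps how many children exist and how long they live (MaxRequestsPerChild 1000), but nothing here caps how large a single child can grow, which is consistent with tuning these values having no effect on the spikes.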
Switching to Nginx is not an option. Most of the time an OOM kill doesn’t take the whole system down, but every couple of weeks one does and the server requires a restart.

A: What might be causing this?
B: What steps have I not yet taken to diagnose the true cause?
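One step I haven’t tried is logging memory from inside Django on every request, so a spike can be tied to whatever request the child was serving when it died. Below is a minimal sketch using only the standard library; the module and class names are placeholders, and it assumes the old-style middleware that goes in MIDDLEWARE_CLASSES:

# memlog.py - log the worker's RSS around every request.
# Standard logging FileHandlers flush on each record, so even if the OOM
# killer takes the child mid-request, the last 'enter' line identifies the
# request that was in flight.
import logging
import resource

logger = logging.getLogger('memlog')

def _rss_kb():
    # On Linux, ru_maxrss is the process's peak RSS in kilobytes.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

class MemoryLogMiddleware(object):
    def process_request(self, request):
        request._rss_on_entry = _rss_kb()
        logger.info('enter %s %s rss=%dkB', request.method, request.path,
                    request._rss_on_entry)

    def process_response(self, request, response):
        entry = getattr(request, '_rss_on_entry', None)
        if entry is not None:
            # ru_maxrss is a high-water mark, so a positive delta means this
            # request pushed the worker to a new peak - exactly the kind of
            # spike being hunted here.
            delta = _rss_kb() - entry
            if delta > 51200:  # flag growth of more than ~50 MB
                logger.warning('exit %s grew rss by %dkB', request.path, delta)
        return response

With memlog.MemoryLogMiddleware first in MIDDLEWARE_CLASSES, the last ‘enter’ line written before an oom-killer entry in syslog points at the request (if any) that triggered the spike.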
You have Django apps running? It’s one of those.
Not knowing exactly how you have it set up, I’d wager the Python/Django runtime is sharing memory space with Apache (i.e. mod_wsgi or mod_python running embedded inside the Apache children), so everything the Django app allocates shows up as apache2 memory. The two are being conflated.
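If that’s the setup, one common remedy (a sketch, assuming mod_wsgi; the process-group name and paths are placeholders) is moving Django out of the Apache children into mod_wsgi daemon mode:

WSGIDaemonProcess mysite processes=2 threads=15 maximum-requests=1000 display-name=%{GROUP}
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/mysite/wsgi.py

This gives the app its own processes, so its memory shows up separately in ps (display-name makes the daemons easy to spot), and maximum-requests recycles each daemon process after 1000 requests, which also puts a ceiling on slow leaks.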