More musing on memory upgrades

As we have been exploring the issues of server configuration for faster forums, memory is always the ruling consideration.
One thing spells death for a running server, and that is when it starts to "page". Paging occurs when the server is running so many processes that it runs out of actual memory and starts to use the disk as "swap space".
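On Linux, one crude way to spot the onset of paging is to watch the swap-in/swap-out counters the kernel publishes in /proc/vmstat. A minimal sketch (the helper names and the threshold are illustrative, not part of any particular monitoring tool):

```python
# Detect sustained swapping by sampling the kernel's swap counters.
# On Linux these appear in /proc/vmstat as "pswpin" / "pswpout" lines.

def parse_swap_counters(vmstat_text):
    """Return (pages swapped in, pages swapped out) from /proc/vmstat text."""
    counters = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name in ("pswpin", "pswpout"):
            counters[name] = int(value)
    return counters.get("pswpin", 0), counters.get("pswpout", 0)

def is_paging(before, after, threshold=100):
    """True if more than `threshold` pages were swapped between two samples."""
    delta = (after[0] - before[0]) + (after[1] - before[1])
    return delta > threshold
```

Sample the file twice a few seconds apart and compare: a handful of swapped pages is normal, but counters climbing by hundreds per interval means the death spiral may already be underway.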
At this point things start to run very slowly, everyone accessing the server starts to retry their requests, piling on even more work, and the death spiral becomes unstoppable.
The key way to limit this is to ensure that there are limits on the number of web processes, and on the database processes that are needed to service some of those web processes. It is better that someone's page request is simply queued or even refused than for the server to self-destruct trying to do more than it can actually cope with.
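The idea of a hard cap, with excess requests queued or refused, can be sketched with a bounded semaphore (the worker limit of 8 and the function names are illustrative only; a real web or database server sets this through its own configuration):

```python
import threading

# A hard cap on concurrent workers: requests beyond the limit are refused
# (or could be queued) instead of forcing the server into paging.
# MAX_WORKERS = 8 is illustrative; real servers tune this to fit in RAM.
MAX_WORKERS = 8
slots = threading.BoundedSemaphore(MAX_WORKERS)

def handle_request(render_page, refuse):
    # Non-blocking acquire: refuse immediately when all slots are busy.
    if not slots.acquire(blocking=False):
        return refuse()  # better a polite "try again" than a death spiral
    try:
        return render_page()
    finally:
        slots.release()
```

With a blocking acquire instead, the same cap turns refusal into queueing: the request simply waits for a free slot rather than adding yet another memory-hungry process to the pile.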
But of course no one wants to see any request to a server fail, and so the key is to configure the server to avoid that happening.
A way to cram as much as possible onto a server is to find the best set of parameters for each database process, parameters that will give the best memory/performance ratio. If you starve a process so much that, despite using half the memory, it takes more than twice as long to complete, then your overall memory usage actually increases whilst your customers wait more than twice as long.
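That trade-off can be made concrete: the true cost of a request on RAM is roughly its resident memory multiplied by how long it holds that memory. A tiny worked example with made-up numbers:

```python
# Illustrative numbers only: the RAM "cost" of a request is roughly
# resident memory multiplied by how long the process holds it.
def memory_seconds(resident_mb, duration_s):
    return resident_mb * duration_s

generous = memory_seconds(resident_mb=100, duration_s=1.0)  # 100 MB-seconds
starved  = memory_seconds(resident_mb=50,  duration_s=2.5)  # 125 MB-seconds

# Halving the memory but taking 2.5x as long costs MORE total memory
# over time, and the visitor waits 2.5x as long into the bargain.
```

So a starved process can occupy more memory on average than a well-fed one, which is exactly the point: tune for the lowest memory-seconds per request, not the smallest per-process footprint.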
But conversely, memory is a valuable asset: use too much on a process and there is less left on the system to cache files. With each new database request a little more of your disk cache gets lost, and each process starts to take more and more time to complete. You may have servers that zip along at quiet times, but really feel the strain at the peak times of the day.
With our current configuration we tend to hover at peak times at the point where the cache has expanded to pretty much fill memory, but is what you might call flabby; a load more database processes would trim the cache down, but it would really take a deluge of requests for a spiral of performance decline to occur.
What is quite clear, though, is how easy it would be to simply throttle back the settings and choose to deal with probably twice the server load, at a slightly slower performance level. Financially it is easy to see how attractive that route is, and many servers on the web do seem to follow it. But somewhere along the line the rubber band, stretched for a lot more load at a little less speed, reaches its limit, and the server suddenly hits a wall where it simply cannot cope.