When building Apache, you must choose an MPM. For general Unix-type systems, there are several MPMs to choose from, and the choice can affect the speed and scalability of httpd. Nowadays most people install Apache 2, and a server that uses threads to serve requests can handle a large number of requests with fewer system resources than a process-based server.
The most important directives used to control this MPM are ThreadsPerChild, which controls the number of threads deployed by each child process, and MaxClients, which controls the maximum total number of threads that may be launched.
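For a threaded MPM such as worker, a minimal sketch of these directives might look like the following; the values are illustrative placeholders, not recommendations:

```apache
# Hypothetical worker MPM settings -- tune to your own hardware.
<IfModule mpm_worker_module>
    StartServers          4    # child processes created at startup
    ThreadsPerChild      25    # threads deployed by each child process
    MaxClients          150    # maximum total number of threads (150 / 25 = 6 children)
    MaxRequestsPerChild   0    # never recycle children based on request count
</IfModule>
```

Note that MaxClients here is an even multiple of ThreadsPerChild, so Apache can run exactly six child processes at full load.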
Usually I want my Apache server to handle many concurrent users, so I want to increase this number. Keep in mind that the more threads and processes you have, the more memory is consumed. Refer to the Apache documentation for more information on these directives.
With this number of workers instantiated, Apache can handle its observed request rate without spawning additional workers. However, the default configuration would allow about three times as many workers as Apache actually instantiated in our example. This is too high, so to be on the safe side you should decrease the MaxRequestWorkers setting to roughly the observed worker count. The actual impact on CPU may be higher depending on the nature of the requests.
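Note that MaxClients was renamed MaxRequestWorkers in Apache 2.4, with the old name kept as an alias. A sketch of such a cap, with a placeholder value standing in for whatever worker count you actually observed on your own server:

```apache
# Illustrative value only -- substitute the worker count you observed.
MaxRequestWorkers 64    # known as MaxClients before Apache 2.4
```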
The service consuming the most CPU handles a mix of dynamic and static requests every minute. With respect to CPU, the most potential for optimization lies within the algorithms that serve the dynamic requests. Various modules dedicated to caching commonly requested content exist in order to make subsequent requests faster; these are particularly helpful in making static requests faster. To save memory, review the list of modules that are loaded by default with your server processes and remove unnecessary modules.
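As a sketch, assuming the Apache 2.4 module names and an arbitrary cache directory, disk caching for commonly requested content can be enabled like this, while unused modules are commented out to save memory:

```apache
# Hypothetical example: cache frequently requested content on disk
# (mod_cache + mod_cache_disk, Apache 2.4 module names).
LoadModule cache_module      modules/mod_cache.so
LoadModule cache_disk_module modules/mod_cache_disk.so

CacheRoot   "/var/cache/apache2"   # assumed cache directory; pick your own
CacheEnable disk /

# ...and trim modules you do not need by commenting them out:
# LoadModule status_module   modules/mod_status.so
```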
To save more CPU time and optimize response time further, review the list of modules that are consulted for each request. Hardening the server against DDoS attacks helps here as well. Feel free to advise, recommend, or criticize me on Twitter @BaraSec or in the comments section below.
The prefork MPM has two related directives, MinSpareServers and MaxSpareServers, which specify the number of workers Apache keeps waiting in the wings, ready to serve requests.
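The interplay of these spare-server directives for the prefork MPM can be sketched as follows; the numbers are illustrative, not recommendations:

```apache
# Hypothetical prefork MPM settings -- tune to your own workload.
<IfModule mpm_prefork_module>
    StartServers      5    # processes created at startup
    MinSpareServers   5    # keep at least this many idle workers waiting
    MaxSpareServers  10    # terminate idle workers beyond this number
    MaxClients      150    # upper bound on simultaneous worker processes
</IfModule>
```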
The absolute maximum number of processes is configurable through the ServerLimit directive. For the prefork MPM, the above directives are all there is to determining the process limit. However, if you are running a threaded MPM the situation is a little more complicated: MaxClients must be an even multiple of ThreadsPerChild, because Apache derives the number of child processes by dividing the former by the latter. If you set either directive to a number that doesn't meet this requirement, Apache will send a message of complaint to the error log and adjust the ThreadsPerChild value downwards until it is an even factor of MaxClients.
Optimally, the maximum number of processes should be set so that all the memory on your system is used, but no more. If your system gets so overloaded that it needs to heavily swap core memory out to disk, performance will degrade quickly. The formula for determining MaxClients is fairly simple:

MaxClients = (total RAM - RAM for OS - RAM for external programs) / RAM per httpd process

The various amounts of memory allocated for the OS, external programs and the httpd processes are best determined by observation: use the top and free commands described above to determine the memory footprint of the OS without the web server running.
You can also determine the footprint of a typical web server process from top: most top implementations have a Resident Size (RSS) column and a Shared Memory column. The difference between these two is the amount of memory each process uses on its own.
The shared segment really exists only once and is used for the code and libraries loaded and the dynamic inter-process tally, or 'scoreboard,' that Apache keeps. How much memory each process takes for itself depends heavily on the number and kind of modules you use. The best approach to use in determining this need is to generate a typical test load against your web site and see how large the httpd processes become.
The RAM for external programs parameter is intended mostly for CGI programs and scripts that run outside the web server process. However, if you have a Java virtual machine running Tomcat on the same box, it will need a significant amount of memory as well. The above assessment should give you an idea of how far you can push MaxClients, but it is not an exact science. When in doubt, be conservative and use a low MaxClients value.
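A worked example of the formula above, using made-up measurements: 2 GB of total RAM, 512 MB reserved for the OS and external programs, and a 12 MB per-process footprint (RSS minus shared memory) for each httpd child:

```shell
# All figures below are illustrative measurements, not recommendations.
total_ram_mb=2048         # total system RAM
os_and_external_mb=512    # OS plus external programs (CGI, JVM, ...)
per_process_mb=12         # RSS minus shared memory per httpd child

# MaxClients = (total RAM - OS/external RAM) / RAM per httpd process
max_clients=$(( (total_ram_mb - os_and_external_mb) / per_process_mb ))
echo "MaxClients $max_clients"
```

With these numbers the result is 128, which you would then round down further if you want a safety margin.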
The Linux kernel will put extra memory to good use for caching disk access. On Solaris, you need enough available real memory to create any process. If no real memory is available, httpd will start writing 'No space left on device' messages to the error log and will be unable to fork additional child processes, so setting MaxClients too high may actually be a disadvantage.
The prime reason for selecting a threaded MPM is that threads consume fewer system resources than processes, and it takes less effort for the system to switch between threads. This is more true for some operating systems than for others. On systems like Solaris and AIX, manipulating processes is relatively expensive in terms of system resources.
On these systems, running a threaded MPM makes sense. On Linux, the threading implementation actually uses one process for each thread. Linux processes are relatively lightweight, but this means that a threaded MPM offers less of a performance advantage there than in other environments. Running a threaded MPM can also cause stability problems in some situations. For instance, should a child process of a preforked MPM crash, at most one client connection is affected.
However, if a threaded child crashes, all the threads in that process disappear, which means all the clients currently being served by that process will see their connection aborted. Additionally, there may be so-called "thread-safety" issues, especially with third-party libraries.
In threaded applications, threads may access the same variables indiscriminately, not knowing whether a variable may have been changed by another thread. This has been a sore point within the PHP community. The PHP processor heavily relies on third-party libraries and cannot guarantee that all of these are thread-safe. The good news is that if you are running Apache on Linux, you can run PHP in the preforked MPM without fear of losing too much performance relative to the threaded option.
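Since Apache 2.4, the MPM is itself a loadable module, so choosing prefork for use with mod_php is a configuration-time decision. A sketch, noting that the exact module file names vary by distribution and PHP version:

```apache
# Pick exactly one MPM; prefork avoids thread-safety issues with mod_php.
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
# LoadModule mpm_worker_module  modules/mod_mpm_worker.so   # threaded alternative

# PHP module file name is an assumption -- it differs per PHP version/distro.
LoadModule php_module         modules/libphp.so
```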
Apache httpd maintains an inter-process lock around its network listener. For all practical purposes, this means that only one httpd child process can receive a request at any given time. The other processes are either servicing requests already received or are 'camping out' on the lock, waiting for the network listener to become available. This process is best visualized as a revolving door, with only one process allowed in the door at any time.
On a heavily loaded web server with requests arriving constantly, the door spins quickly and requests are accepted at a steady rate. On a lightly loaded web server, the process that currently "holds" the lock may have to stay in the door for a while, during which all the other processes sit idle, waiting to acquire the lock. At this time, the parent process may decide to terminate some children based on its MaxSpareServers directive.
The function of the 'accept mutex,' as this inter-process lock is called, is to keep request reception moving along in an orderly fashion. If the lock is absent, the server may exhibit the Thundering Herd syndrome. Consider an American football team poised on the line of scrimmage. If the football players were Apache processes, all team members would go for the ball simultaneously at the snap. One process would get it, and all the others would have to lumber back to the line for the next snap.
In this metaphor, the accept mutex acts as the quarterback, delivering the connection "ball" to the appropriate player process. Moving this much information around is obviously a lot of work, and, like a smart person, a smart web server tries to avoid it whenever possible.
Hence the revolving door construction. In recent years, many operating systems, including Linux and Solaris, have put code in place to prevent the Thundering Herd syndrome. Apache recognizes this, and if you run with just one network listener (meaning one virtual host, or just the main server), it will refrain from using an accept mutex.
If you run with multiple listeners (for instance, because you have a virtual host serving SSL requests), it will activate the accept mutex to avoid internal conflicts. You can manipulate the accept mutex with the AcceptMutex directive. Besides turning the accept mutex off, you can select the locking mechanism. Common locking mechanisms include fcntl, System V semaphores, and pthread locking. Not all are available on every platform, and their availability also depends on compile-time settings.
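For example, using the Apache 2.0/2.2 syntax (in 2.4, AcceptMutex was folded into the more general Mutex directive):

```apache
# Select fcntl-based locking for the accept mutex.
# Availability of each mechanism is platform- and build-dependent.
AcceptMutex fcntl

# Apache 2.4 equivalent, scoped to the accept mutex specifically:
# Mutex fcntl mpm-accept
```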
The various locking mechanisms may place specific demands on system resources: manipulate them with care. There is no compelling reason to disable the accept mutex. Apache automatically recognizes the single listener situation described above and knows if it is safe to run without mutex on your platform. People often look for the 'magic tune-up' that will make their system perform four times as fast by tweaking just one little setting.
The truth is, present-day UNIX derivatives are pretty well adjusted straight out of the box and there is not a lot that needs to be done to make them perform optimally. However, there are a few things that an administrator can do to improve performance. The usual mantra regarding RAM is "more is better".
As discussed above, unused RAM is put to good use as file system cache. A large configuration file, with many virtual hosts, also tends to inflate the process footprint. Having ample RAM allows you to run Apache with more child processes, which allows the server to process more concurrent requests. While the various platforms treat their virtual memory in different ways, it is never a good idea to run with less disk-based swap space than RAM.
The virtual memory system is designed to provide a fallback for RAM, but when you don't have disk space available and run out of swappable memory, your machine grinds to a halt. This can crash your box, requiring a physical reboot for which your hosting facility may charge you. Also, such an outage naturally occurs when you least want it: when the world has found your website and is beating a path to your door. If you have enough disk-based swap space available and the machine gets overloaded, it may get very, very slow as the system needs to swap memory pages to disk and back, but when the load decreases the system should recover.