Get a Handle on PHP Handlers
You can deploy PHP under several different handlers, and the one you choose has a direct effect on your application's performance, security, and memory footprint.
PHP Handlers
Under the hood of every PHP application is a handler: the piece of software responsible for actually executing your PHP code on behalf of the web server.
An easy way to check which handler you are running is to create a dummy file on your server (for example, info.php) containing a call to phpinfo(). Load that page up in your browser and look for the Server API entry near the top of the output.
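As a concrete sketch (the file name and the idea of creating it from a shell are illustrative, not mandated by the article):

```shell
# Create a throwaway info page; copy it into your web root afterwards.
# The file name "info.php" is just an example.
printf '<?php phpinfo(); ?>' > info.php
cat info.php
```

Load the page through the web server rather than the PHP CLI: the Server API line reports the SAPI that actually handled the request (for example "Apache 2.0 Handler" under mod_php, or "FPM/FastCGI" under PHP-FPM). Delete the file when you are done, since phpinfo() exposes configuration details.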
CGI – mod_cgi
mod_cgi is still a common option to this day, mainly in shared hosting environments or (extremely) legacy applications that haven’t upgraded in almost a decade. It is old, outdated, and not recommended for any modern PHP deployment.
Running PHP as a CGI means the web server spawns a brand-new PHP interpreter process for every single request, executes the script, and then tears the process down again.
Unfortunately, the bad outweighs the good with this method; because of its inefficiency, poor performance, and heavy taxation of server resources, it quickly fell out of favor for serious PHP applications.
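For illustration, a minimal Apache setup routing PHP through CGI might look like the following (the module and php-cgi binary paths are assumptions that vary by distribution; the Action directive requires mod_actions):

```apache
# Load CGI support (mod_cgi is the CGI module used with the prefork MPM)
LoadModule cgi_module modules/mod_cgi.so
LoadModule actions_module modules/mod_actions.so

# Expose the php-cgi binary and route .php files through it
ScriptAlias /php-bin/ "/usr/lib/cgi-bin/"
Action application/x-httpd-php "/php-bin/php-cgi"
AddHandler application/x-httpd-php .php
```

Every request matching .php here pays the full cost of forking and initializing a fresh php-cgi process.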
suPHP – mod_suphp
Similar to CGI, suPHP spawns a new PHP process for every request. Its distinguishing feature is that it executes each script as the user who owns the file, rather than as the shared web server user, which makes it popular in shared hosting environments for security and accountability.
With suPHP you still pay the full per-request process startup cost of CGI, and because each PHP process exits after serving its request, an opcode cache cannot persist between requests.
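A sketch of the corresponding Apache configuration, using mod_suphp's directives (module path is an assumption):

```apache
LoadModule suphp_module modules/mod_suphp.so

suPHP_Engine on
AddHandler application/x-httpd-suphp .php
suPHP_AddHandler application/x-httpd-suphp
```

The ownership of each script file then determines the user the interpreter runs as.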
DSO – mod_php
Considered the de facto standard of PHP handlers.
The DSO handler is perhaps one of the oldest and fastest handlers available. The mod_php module is loaded into Apache as a Dynamic Shared Object, embedding the PHP interpreter directly inside each Apache worker process, so no separate process needs to be spawned per request.
One of the greatest advantages of mod_php is raw speed: because the interpreter is already resident in the Apache process, there is no per-request startup cost, and opcode caches such as APC can persist across requests.
A disadvantage of mod_php is that all PHP scripts run as the Apache user, which complicates file ownership and security in shared environments. It also inflates memory usage: every Apache worker carries the full PHP interpreter, even when serving purely static content.
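Loading PHP as a DSO is typically a two-directive affair in the Apache configuration (the module file name varies by PHP version and distribution, e.g. libphp.so or libphp7.so):

```apache
# Embed the PHP interpreter into every Apache worker process
LoadModule php_module modules/libphp.so

# Hand all .php files to the embedded interpreter
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
```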
FastCGI – mod_fcgid
FastCGI is what its name implies: a fast CGI implementation. Instead of spawning a new PHP process per request, it keeps a pool of persistent PHP processes alive and reuses them, eliminating most of plain CGI’s startup overhead while still allowing scripts to run as their owning user.
A major disadvantage of FastCGI is that it is taxing on your memory consumption (although less so than mod_php), because its PHP processes stay resident between requests instead of exiting.
FastCGI is a good modern alternative to suPHP: you retain the security benefit of running scripts as the file owner while regaining persistent processes and opcode caching.
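As an illustrative sketch, a mod_fcgid setup usually routes .php files through a small wrapper script that execs the php-cgi binary (the wrapper path and the limits below are assumptions to tune for your hardware):

```apache
LoadModule fcgid_module modules/mod_fcgid.so

AddHandler fcgid-script .php
# Wrapper script that ultimately execs the php-cgi binary
FcgidWrapper /usr/local/bin/php-wrapper .php

# Cap the persistent PHP process pool to bound memory usage
FcgidMaxProcesses 100
FcgidMaxRequestsPerProcess 500
```

Capping the pool and recycling processes after a number of requests is how you keep the resident-memory cost under control.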
PHP-FPM (FastCGI Process Manager)
While technically not a handler per se, PHP-FPM is an alternative FastCGI implementation that has been bundled with PHP core since version 5.3.3. It manages pools of PHP worker processes and adds features such as adaptive process spawning, graceful reloads, and logging of slow requests.
FPM also supports opcode caching and allows you to share your APC cache among all of the worker processes in a pool.
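A minimal PHP-FPM pool definition showing the process-manager knobs described above (the values are illustrative, and the file typically lives at something like /etc/php-fpm.d/www.conf):

```ini
[www]
user = www-data
group = www-data
listen = /run/php-fpm.sock

; Adaptive process spawning: the pool grows and shrinks
; between the spare-server limits as load changes
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10

; Log any request that takes longer than 5s, with a stack trace
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 5s
```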
Web Servers
We’ve been talking a lot about handlers, but to reach an optimal environment you need to pair your handler with the right web server for the level of scale you are trying to achieve with your PHP application.
Apache
Being the most common web server in the world, Apache also has its limitations. As user traffic comes in, Apache spawns a new worker to handle each connection: the prefork MPM creates a new process per client connection, while the worker MPM can handle more requests by serving multiple connections per process using threads. Nevertheless, as your traffic begins to increase you’ll notice your memory begin to exhaust as connections consume RAM and compete for CPU, and you’ll find yourself easily maxing out your connection pool if you run a heavy-traffic environment. You can configure Apache to optimize the maximum number of processes and concurrent connections for your hardware resources, but you’ll still find processes competing, and as traffic increases so does your need to scale.
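That tuning happens in the MPM configuration. A conservative sketch for the prefork MPM (Apache 2.4 directive names; the numbers are assumptions to size against your per-process memory footprint):

```apache
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    # Hard cap on concurrent connections (one process each under prefork)
    MaxRequestWorkers      150
    # Recycle workers periodically to contain memory leaks
    MaxConnectionsPerChild 1000
</IfModule>
```

Under mod_php, each of those workers carries the PHP interpreter, so MaxRequestWorkers multiplied by per-process memory must fit in RAM.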
Nginx
Nginx uses an event-driven, non-blocking, asynchronous architectural model. If Node.js has taught us anything, it is that this design creates an environment conducive to serving many requests at the same time. Because it is non-blocking, Nginx can continue to accept additional events (user connections) without having to wait, and because it is asynchronous, it can handle many concurrent users at once. The beauty of this model has quickly made Nginx a leading contender for handling massive amounts of traffic on the same server setup. With Nginx, you can even spawn one worker process per core on your machine, increasing your ability to handle more requests per server.
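The per-core worker model described above maps to two lines of nginx configuration (1024 connections per worker is the common default, shown here explicitly):

```nginx
# One worker process per CPU core
worker_processes auto;

events {
    # Maximum simultaneous connections handled by each worker
    worker_connections 1024;
}
```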
Conclusion
Clearly, you have options. Regardless of whether your primary concern is security or speed, you’ll need to consider your architecture carefully before going down the path of building an ecosystem in which to deploy your PHP application.
In high-traffic environments, coupling the event-driven architectural model of Nginx with PHP-FPM is a proven combination for serving massive numbers of concurrent users.
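Wiring the two together is a matter of proxying .php requests to the FPM pool’s socket; a minimal sketch (the socket path is an assumption that must match your FPM pool’s listen directive):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Must match the "listen" setting of your PHP-FPM pool
    fastcgi_pass unix:/run/php-fpm.sock;
}
```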
Source: https://blog.appdynamics.com/