A few days ago I had a chance to work with Nginx in a slightly different way: what needed to be done was to determine whether a user had successfully downloaded a file. This was needed because the file should be erased from the server after download. PHP would be a bad solution here, and they also told me they didn't want to use any additional applications, e.g. programming languages.
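One way to sketch this in pure nginx is the `post_action` directive (a long-standing but undocumented directive) combined with the `$request_completion` variable, which is set to "OK" only when a request finished fully. The backend address and the `/after_download` location name below are my own placeholders, so treat this as a sketch rather than a production recipe:

```nginx
location /downloads/ {
    # post_action fires a subrequest after the main request ends
    post_action /after_download;
}

location = /after_download {
    internal;
    # $request_completion is "OK" only if the client received the whole file
    if ($request_completion != OK) {
        return 200;
    }
    # hand off to something that deletes the file (hypothetical backend)
    proxy_pass http://127.0.0.1:8080/delete?file=$request_uri;
}
```

Because `post_action` is undocumented, its behavior can change between versions; checking `$request_completion` in the access log (and deleting files from a log watcher) is a more conservative alternative.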
PHP-FPM (FastCGI Process Manager) is an alternative FastCGI implementation with some additional features useful for websites of any size, especially high-load websites. It makes it particularly easy to run PHP on Nginx.
Included features, from the original website:
- Adaptive process spawning
- Basic statistics
- Advanced process management with graceful stop/start
- Ability to start workers with different uid/gid/chroot/environment and different php.ini
- Stdout & stderr logging
- Emergency restart in case of accidental opcode cache destruction
- Accelerated upload support
- Support for a “slowlog”
- Enhancements to FastCGI, such as fastcgi_finish_request() – a special function to finish request & flush all data while continuing to do something time-consuming
..and much more..
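Hooking PHP-FPM up to nginx comes down to passing `.php` requests over FastCGI. A minimal sketch, assuming PHP-FPM listens on the common Unix socket path `/var/run/php-fpm.sock` (adjust to your distribution):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # tell PHP-FPM which script to execute
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # socket path is an assumption; a TCP address like 127.0.0.1:9000 also works
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```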
From Wikipedia:
Hotlinking is a term used on the Internet that refers to the practice of displaying an image on a website by linking to the same image on another website, rather than saving a copy of it on the website on which the image will be shown.
Hotlinking can be a major bandwidth-leeching issue for some sites. Here is a small config snippet you can add to prevent it.
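A typical approach uses the `valid_referers` directive from nginx's referer module; `example.com` below is a placeholder for your own domain:

```nginx
location ~* \.(gif|jpe?g|png)$ {
    # allow empty referers, referers stripped by firewalls, and our own site
    valid_referers none blocked example.com *.example.com;
    if ($invalid_referer) {
        return 403;
    }
}
```

Note that the `Referer` header is trivially forged, so this stops casual hotlinking, not determined leechers.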
Everyone got excited about the new feature in nginx: from version 1.3.13, nginx has native support for proxying WebSockets. I can't find a use for it on any project of mine, though; please correct me if I'm wrong, and feel free to suggest ideas in the comments. I just wanted to try it, but I spent 2-3 hours and could hardly get it to work.
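The key detail that makes WebSocket proxying work is hop-by-hop header handling: nginx must speak HTTP/1.1 to the upstream and forward the `Upgrade`/`Connection` headers explicitly. A minimal sketch, where `backend` is an assumed upstream name:

```nginx
location /ws/ {
    proxy_pass http://backend;
    # WebSocket handshake requires HTTP/1.1
    proxy_http_version 1.1;
    # forward the hop-by-hop upgrade headers
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

Forgetting `proxy_http_version 1.1` or the `Connection "upgrade"` header is the usual reason handshakes silently fail, which may explain those 2-3 lost hours.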
For this configuration you can use whatever web server you like; I decided to use nginx because it's what I work with most.
Generally, a properly configured nginx can handle up to 400,000 to 500,000 requests per second (clustered); the most I have seen is 50,000 to 80,000 requests per second (non-clustered) at 30% CPU load. Of course, that was on 2x Intel Xeon with HT enabled, but it can work without problems on slower machines.
You must understand that this config is used in a testing environment, not in production, so you will need to find the best way to implement most of these features on your own servers.
Nginx timeout errors are not uncommon.
While running maintenance on a few nginx servers today I saw an error like this one; actually, I have seen it many times, just not with my configs. After digging around for a while, I found out that the upstream server could not respond within 60 seconds, which is the default. Nginx has a directive for the read timeout called proxy_read_timeout; it determines how long nginx will wait to receive the response to a request. This is not a permanent solution, but it's a quick fix.
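The quick fix amounts to raising that timeout in the affected `location`; the value and the `backend` upstream name below are illustrative:

```nginx
location / {
    proxy_pass http://backend;
    # default is 60s; raise it while the slow upstream is investigated
    proxy_read_timeout 300s;
}
```

Note this only masks the symptom: if the upstream regularly takes minutes to respond, that is the thing to fix.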
Time To First Byte has a number of components:
- DNS Lookup: Find the IP address of the domain (possible improvement: more numerous/distributed/responsive DNS servers)
- Connection time: Open a socket to the server, negotiate the connection (typical value should be around ‘ping’ time – a round trip is usually necessary – keepalive should help for subsequent requests)
- Waiting: initial processing required before the first byte can be sent (this is where your improvement should be; it will be most significant for dynamic content)
We can test TTFB with ApacheBench; in its output, the processing time is the sum of waiting plus the complete transfer of the content. If the transfer time is significantly longer than what would be expected for the quantity of data received, further processing after TTFB is occurring (e.g. the page is flushing content as it becomes available).
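As a sketch of such a measurement (the URL is a placeholder), ApacheBench gives the aggregate picture, while curl's `--write-out` timing variables break one request into the components listed above:

```
# ApacheBench: 100 requests, concurrency 10; see "Waiting" and "Processing" in the output
ab -n 100 -c 10 https://example.com/

# curl: per-phase breakdown of a single request
curl -o /dev/null -s -w 'dns:     %{time_namelookup}\nconnect: %{time_connect}\nttfb:    %{time_starttransfer}\ntotal:   %{time_total}\n' https://example.com/
```

`time_starttransfer` is curl's TTFB; subtracting `time_connect` from it isolates the server-side waiting component.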