Implements fastcgi_cache and limit_req #7

@ghost ghost commented May 2, 2017

Implements NGINX limit_req for security

A dynamic website without limit_req (or something similar) can be taken down
simply by holding the F5 key in a web browser, overloading the server's resources.

limit_req sets a maximum number of requests per second per client IP address,
significantly improving resilience against denial-of-service attempts.

Once a client reaches its limit, the server responds with a "429 Too Many Requests" status code (RFC 6585, section 4).

Only two role parameters are required to set up limit_req:

```yaml
php_limitreq_enabled: true
php_limitreq_per_second: 10
```
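Under the hood this relies on NGINX's `limit_req` module. A minimal sketch of the kind of directives such a setup could render (the zone name, zone size, and location match are illustrative assumptions, not the role's actual template output):

```nginx
# Hypothetical sketch of a limit_req setup; names and sizes are assumptions.
# http context: one shared-memory zone keyed by client IP, with the rate
# taken from php_limitreq_per_second (here 10 requests per second).
limit_req_zone $binary_remote_addr zone=php_limitreq:10m rate=10r/s;

server {
    location ~ \.php$ {
        # Apply the limit and answer 429 instead of the default 503.
        limit_req zone=php_limitreq;
        limit_req_status 429;
    }
}
```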

How to test

Note that this behavior applies per individual IP address.

  1. Add the parameters to your playbook to enable it, then run provision
  2. Go to a web page
  3. Press F5 repeatedly and quickly
  4. The server should respond with a 429 status code if you press F5 faster than the specified "per_second" rate
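The steps above can also be scripted with curl instead of F5; a sketch, assuming a placeholder host name (replace `example.local` with your own site):

```shell
#!/bin/sh
# Fire 20 requests in a tight loop and print the HTTP status codes.
# With php_limitreq_per_second: 10, the later requests should return 429.
# "example.local" is a placeholder host, not something the role creates.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{http_code}\n' "http://example.local/"
done
```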

Implementing FastCGI cache to NGINX virtual host

Benchmarking shows a significant improvement in server response time, from roughly 100 ms down to 10 ms, with FastCGI Cache enabled.

FastCGI Cache makes NGINX cache a page as static content once it has been generated, so subsequent requests are served much faster.

You can enable FastCGI Cache by setting `php_fastcgi_cache_enabled` to `true`.
To clear the cache, call the following URL from the same machine:
`http://[[php_base_name]].purge.cache.fastcgi.nginx.local/`

More information about the caching features can be found in `defaults/main.yml`.
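For reference, a hedged sketch of the kind of NGINX configuration this feature relies on (the cache path, zone name, and validity times are illustrative assumptions, not the role's actual template):

```nginx
# Hypothetical FastCGI cache setup; names and durations are assumptions.
# http context: on-disk cache storage backed by a shared-memory key zone.
fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2
                   keys_zone=php_cache:10m inactive=60m;

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        # Serve generated pages from the cache for 10 minutes.
        fastcgi_cache php_cache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;
    }
}
```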

How to test

  1. Add the parameters to your playbook to enable it, then run provision
  2. Go to a web page and note the approximate response time
  3. Refresh the page and note the response time
  4. Refresh the page once more and note the response time
  5. The page loads at steps 3 and 4 should be much faster than at step 2
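Instead of eyeballing, curl's built-in timer can measure the response times; a sketch, assuming a placeholder host name:

```shell
#!/bin/sh
# Request the same page three times and print the total transfer time.
# The second and third runs should be markedly faster once the cache is warm.
# "example.local" is a placeholder host, not something the role creates.
for i in 1 2 3; do
  curl -s -o /dev/null -w 'total: %{time_total}s\n' "http://example.local/"
done
```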

David Côté-Tremblay added 2 commits May 2, 2017 10:48
@ghost ghost requested a review from fpeyre May 2, 2017 15:49
ghost commented May 2, 2017

@fpeyre Ready to review

@ghost ghost changed the title Implements fastcgi cache and limit req Implements fastcgi_cache and limit_req May 2, 2017
@ghost ghost requested review from ellmetha and samherve May 4, 2017 19:40