NGINX, Inc. 2017
Using NGINX as an Effective
and Highly Available Content
Cache
Kevin Jones
Technical Solutions Architect
@webopsx
• Quick intro to…
• NGINX
• Content Caching
• Caching with NGINX
• How caching functionality works
• How to enable basic caching
• Advanced caching with NGINX
• How to increase availability using caching
• When and how to enable micro-caching
• How to fine tune the cache
• How to architect for high availability
• Various configuration tips and tricks!
• Various examples!
2
Agenda
Solves complexity…
Web Server | Reverse Proxy | Load Balancer | Content Cache | Streaming Media Server
350 million total sites and counting… running on NGINX
53% of the Top 10,000 most visited websites
36% of all instances on Amazon Web Services
Source: W3Techs December 2013 Web Server Survey
Contexts, directives and parameters… oh my.
9
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
upstream api-backends {
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
server {
listen 10.0.1.10:80;
server_name example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://api-backends;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/dirindex.html
http context
server context
events context
main context
stream context (not shown…)
upstream context
location context
10
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
upstream api-backends {
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
server {
listen 10.0.1.10:80;
server_name example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://api-backends;
}
}
include /path/to/more/virtual_servers/*.conf;
}
server directive
location directive
upstream directive
events directive
main directive
nginx.org/en/docs/dirindex.html
11
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
upstream api-backends {
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
server {
listen 10.0.1.10:80;
server_name example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://api-backends;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/dirindex.html
parameter
parameter
parameter
parameter
Variables
13
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
upstream api-backends {
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
server {
listen 10.0.1.10:80;
server_name example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://api-backends;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/varindex.html
variables
14
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
map $http_user_agent $dynamic {
"~*Mobile" mobile.example.com;
default desktop.example.com;
}
server {
listen 10.0.1.10:80;
server_name example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://$dynamic;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/varindex.html
variable
map (dynamic)
Server Name Indication (SNI)
(i.e. hostname routing)
16
http {
...
server {
listen 10.0.1.10:80;
server_name example.com *.website.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://api-backends;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/varindex.html
SNI
Layer 7 Request Routing
18
http {
...
server {
if ( $blocked ) {
return 444;
}
listen 10.0.1.10:80;
server_name website.com *.example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://api-backends;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/varindex.html
Regex location matching
Conditional Routing
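The $blocked variable referenced in the example above is not defined on this slide. One way it might be populated is with a geo block in the http context, so that the if ( $blocked ) { return 444; } check silently drops connections from listed networks. A minimal sketch; the variable name matches the slide, but the address ranges are placeholders:

geo $blocked {
    default        0;   # allow by default
    192.0.2.0/24   1;   # block this (documentation) network
    198.51.100.7   1;   # block a single address
}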
19
The Basics of Content Caching
20
Client initiates a request (e.g. GET /file).
Proxy Cache determines if the response is already cached; if not, NGINX fetches it from the origin server.
Origin Server serves the response along with all cache control headers (e.g. Cache-Control, ETag, etc.).
Proxy Cache caches the response and serves it to the client.
21
Cache Headers
• Cache-Control - used to specify directives for caching mechanisms in both requests and responses. (e.g. Cache-Control: max-age=600 or Cache-Control: no-cache)
• Expires - contains the date/time after which the response is considered stale. If there is a Cache-Control header with the "max-age" or "s-maxage" directive in the response, the Expires header is ignored. (e.g. Expires: Wed, 21 Oct 2015 07:28:00 GMT)
• Last-Modified - contains the date and time at which the origin server believes the resource was last modified. HTTP dates are always expressed in GMT, never in local time. Less accurate than the ETag header. (e.g. Last-Modified: Wed, 21 Oct 2015 07:28:00 GMT)
• ETag - an identifier (or fingerprint) for a specific version of a resource. (e.g. ETag: "58efdcd0-268")
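For reference, when NGINX is itself the origin for static content it can emit these headers; a small sketch using the expires and add_header directives (the path and times are illustrative):

location /static/ {
    root    /usr/share/nginx/html;
    expires 10m;                        # adds Expires and Cache-Control: max-age=600
    add_header Cache-Control "public";  # adds an additional Cache-Control header
    # Last-Modified and ETag are emitted automatically for static files
}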
22
Content caching with NGINX is simple.
23
proxy_cache_path
Syntax: proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on|off] [purger_files=number] [purger_sleep=time] [purger_threshold=time];
Default: -
Context: http
Documentation
http {
proxy_cache_path /tmp/nginx/micro_cache/ levels=1:2 keys_zone=large_cache:10m
max_size=300g inactive=14d;
...
}
Definition: Sets the path and other parameters of a cache. Cache data are stored in files. The file name in a cache is
a result of applying the MD5 function to the cache key.
24
proxy_cache_key
Documentation
server {
proxy_cache_key $scheme$proxy_host$request_uri$cookie_userid$http_user_agent;
...
}
Syntax: proxy_cache_key string;
Default: proxy_cache_key $scheme$proxy_host$request_uri;
Context: http, server, location
Definition: Defines a key for caching. The MD5 hash of this key determines the file name under the proxy_cache_path directory.
25
proxy_cache
Documentation
location ^~ /api {
...
proxy_cache large_cache;
}
Syntax: proxy_cache zone | off;
Default: proxy_cache off;
Context: http, server, location
Definition: Defines a shared memory zone used for caching. The same zone can be used in several places.
26
proxy_cache_valid
Documentation
location ~* \.(jpg|png|gif|ico)$ {
...
proxy_cache_valid any 1d;
}
Syntax: proxy_cache_valid [code ...] time;
Default: -
Context: http, server, location
Definition: Sets caching time for different response codes.
27
http {
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=cache:10m
max_size=100g inactive=7d use_temp_path=off;
...
server {
...
location / {
...
proxy_pass http://backend.com;
}
location ^~ /images {
...
proxy_cache cache;
proxy_cache_valid 200 301 302 12h;
proxy_pass http://images.origin.com;
}
}
}
Basic Caching
28
Client
NGINX Cache
Origin Server
Cache Memory Zone
(Shared across workers)
1. HTTP Request:
GET /images/hawaii.jpg
Cache Key: http://origin/images/hawaii.jpg
md5 hash: 51b740d1ab03f287d46da45202c84945
2. NGINX checks if the hash exists in memory. If it does not, the request is passed to the origin server.
3. Origin server
responds
4. NGINX caches the response to disk
and places the hash in memory
5. Response is served to client
29
NGINX Processes
# ps aux | grep nginx
root 14559 0.0 0.1 53308 3360 ? Ss Apr12 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 27880 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process
nginx 27881 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process
nginx 27882 0.0 0.1 53472 2876 ? S 00:06 0:00 nginx: cache manager process
nginx 27883 0.0 0.1 53472 2552 ? S 00:06 0:00 nginx: cache loader process
• Cache Manager - activated periodically to check the state of the cache. If the cache
size exceeds the limit set by the max_size parameter to the proxy_cache_path directive,
the cache manager removes the data that was accessed least recently
• Cache Loader - runs only once, right after NGINX starts. It loads metadata about
previously cached data into the shared memory zone.
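The behavior of both processes can be tuned through proxy_cache_path parameters. A hedged sketch; all values below are illustrative, not recommendations:

http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=tuned_cache:10m
                     max_size=50g inactive=7d
                     manager_files=200 manager_sleep=100ms manager_threshold=300ms   # cache manager batch size and pacing
                     loader_files=300 loader_sleep=50ms loader_threshold=200ms;      # cache loader batch size and pacing
}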
30
Caching is not just for HTTP
HTTP
FastCGI
UWSGI
SCGI
Memcache
Tip: NGINX can also be used to cache other backends using their unique cache directives. (e.g. fastcgi_cache,
uwsgi_cache and scgi_cache)
Alternatively, NGINX can also be used to retrieve content directly from a memcached server.
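As an illustration of the FastCGI variant mentioned in the tip, a minimal fastcgi_cache sketch for a PHP-FPM backend; the socket path, zone name and times are assumptions:

http {
    fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=php_cache:10m inactive=60m;

    server {
        location ~ \.php$ {
            fastcgi_cache       php_cache;
            fastcgi_cache_key   $scheme$request_method$host$request_uri;  # fastcgi_cache_key has no default, so define one
            fastcgi_cache_valid 200 10m;
            include             fastcgi_params;
            fastcgi_pass        unix:/run/php-fpm.sock;
        }
    }
}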
31
Initial… Tips and Tricks!
32
log_format main 'rid="$request_id" pck="$scheme://$proxy_host$request_uri" '
'ucs="$upstream_cache_status" '
'site="$server_name" server="$host” dest_port="$server_port" '
'dest_ip="$server_addr" src="$remote_addr" src_ip="$realip_remote_addr" '
'user="$remote_user" time_local="$time_local" protocol="$server_protocol" '
'status="$status" bytes_out="$bytes_sent" '
'bytes_in="$upstream_bytes_received" http_referer="$http_referer" '
'http_user_agent="$http_user_agent" nginx_version="$nginx_version" '
'http_x_forwarded_for="$http_x_forwarded_for" '
'http_x_header="$http_x_header" uri_query="$query_string" uri_path="$uri" '
'http_method="$request_method" response_time="$upstream_response_time" '
'cookie="$http_cookie" request_time="$request_time" ';
Logging is your friend…
Tip: The more relevant information in your log the better. When troubleshooting you can easily add the proxy
cache KEY to the log_format for debugging. For a list of all variables see the “Alphabetical index of
variables” on nginx.org.
33
server {
...
# add HTTP response headers
add_header CC-X-Request-ID $request_id;
add_header X-Cache-Status $upstream_cache_status;
}
Adding response headers…
Tip: Using the add_header directive you can add useful HTTP response headers allowing you to debug
your NGINX deployment rather easily.
34
Cache Status
• MISS – The response was not found in the cache and so was fetched from an origin server. The response
might then have been cached.
• BYPASS – The response was fetched from the origin server instead of served from the cache because the
request matched a proxy_cache_bypass directive. The response might then have been cached.
• EXPIRED – The entry in the cache has expired. The response contains fresh content from the origin
server.
• STALE – The content is stale because the origin server is not responding correctly, and
proxy_cache_use_stale was configured.
• UPDATING – The content is stale because the entry is currently being updated in response to a previous
request, and proxy_cache_use_stale updating is configured.
• REVALIDATED – The proxy_cache_revalidate directive was enabled and NGINX verified that the current
cached content was still valid (ETag, If‑Modified‑Since or If‑None‑Match).
• HIT – The response contains valid, fresh content direct from the cache.
35
# curl -I 127.0.0.1/images/hawaii.jpg
HTTP/1.1 200 OK
Server: nginx/1.11.10
Date: Wed, 19 Apr 2017 22:20:53 GMT
Content-Type: image/jpeg
Content-Length: 21542868
Connection: keep-alive
Last-Modified: Thu, 13 Apr 2017 20:55:07 GMT
ETag: "58efe5ab-148b7d4"
OS-X-Request-ID: 1e7ae2cf83732e8859bc3e38df912ed1
CC-X-Request-ID: d4a5f7a8d25544b1409c351a22f42960
X-Cache-Status: HIT
Accept-Ranges: bytes
Using cURL to Debug…
Tip: Use cURL or Chrome developer tools to grab the request ID or other various headers useful for
debugging.
36
# grep -ri d4a5f7a8d25544b1409c351a22f42960 /var/log/nginx/adv_access.log
rid="d4a5f7a8d25544b1409c351a22f42960" pck="http://origin/images/hawaii.jpg"
site="webopsx.com" server="localhost” dest_port="80" dest_ip=“127.0.0.1" ...
# echo -n "http://origin/images/hawaii.jpg" | md5sum
51b740d1ab03f287d46da45202c84945 -
# tree /tmp/nginx/micro_cache/5/94/
/tmp/nginx/micro_cache/5/94/
!"" 51b740d1ab03f287d46da45202c84945
0 directories, 1 file
Troubleshooting the Proxy Cache
Tip: A quick and easy way to determine the hash of your cache key can be accomplished using echo, pipe and
md5sum
37
# head -n 14 /tmp/nginx/micro_cache/5/94/51b740d1ab03f287d46da45202c84945
??X?X??Xb?!bv?"58efe5ab-148b7d4"
KEY: http://origin/images/hawaii.jpg
HTTP/1.1 200 OK
Server: nginx/1.11.10
Date: Wed, 19 Apr 2017 23:51:38 GMT
Content-Type: image/jpeg
Content-Length: 21542868
Last-Modified: Thu, 13 Apr 2017 20:55:07 GMT
Connection: keep-alive
ETag: "58efe5ab-148b7d4"
OS-X-Request-ID: 1e7ae2cf83732e8859bc3e38df912ed1
Accept-Ranges: bytes
?wExifII>(i?Nl?0230??HH??
Cache Contents
38
Micro-Caching
“Size matters not.”
39
Static Content (Easy to cache)
• Images
• CSS
• Simple HTML
Dynamic Content (Micro-cacheable!)
• Blog Posts
• Status
• API Data (Maybe?)
User Content (Cannot Cache)
• Shopping Cart
• Unique Data
• Account Data
Types of Content
Documentation
40
http {
upstream backend {
keepalive 20;
server 127.0.0.1:8080;
}
proxy_cache_path /var/nginx/micro_cache levels=1:2 keys_zone=micro_cache:10m
max_size=100m inactive=600s;
...
server {
listen 80;
...
proxy_cache micro_cache;
proxy_cache_valid any 1s;
location / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Accept-Encoding "";
proxy_pass http://backend;
}
}
}
Enable keepalives on upstream
Set proxy_cache_valid to any
status with a 1 second value
Set required HTTP version and
pass HTTP headers for keepalives
Set short inactive parameter
41
proxy_cache_lock
Documentation
Syntax: proxy_cache_lock on | off;
Default: proxy_cache_lock off;
Context: http, server, location
Definition: When enabled, only one request at a time will be allowed to populate a new cache element identified
according to the proxy_cache_key directive by passing a request to a proxied server.
Other requests of the same cache element will either wait for a response to appear in the cache or the
cache lock for this element to be released, up to the time set by the proxy_cache_lock_timeout directive.
Related: See the following for tuning…
• proxy_cache_lock_age,
• proxy_cache_lock_timeout
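A short sketch of how the lock and its related timers might be combined, reusing the large_cache zone and api-backends upstream from earlier slides; the timeout values are illustrative:

location /reports {
    proxy_cache              large_cache;
    proxy_cache_lock         on;    # only one request at a time populates a given cache element
    proxy_cache_lock_age     10s;   # after 10s, one further request may be passed to the origin
    proxy_cache_lock_timeout 5s;    # waiting requests give up after 5s and go to the origin (response not cached)
    proxy_pass               http://api-backends;
}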
42
proxy_cache_use_stale
Documentation
location /contact-us {
...
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
}
Syntax: proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | http_429 | off ...;
Default: proxy_cache_use_stale off;
Context: http, server, location
Definition: Determines in which cases a stale cached response can be used during communication with the proxied
server.
43
http {
upstream backend {
keepalive 20;
server 127.0.0.1:8080;
}
proxy_cache_path /var/nginx/micro_cache levels=1:2 keys_zone=micro_cache:10m
max_size=100m inactive=600s;
...
server {
listen 80;
...
proxy_cache micro_cache;
proxy_cache_valid any 1s;
proxy_cache_lock on;
proxy_cache_use_stale updating;
location / {
...
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Accept-Encoding "";
proxy_pass http://backend;
}
}
}
Final optimization
44
Further Tuning and Optimization
45
proxy_cache_revalidate
Documentation
Syntax: proxy_cache_revalidate on | off;
Default: proxy_cache_revalidate off;
Context: http, server, location
Definition: Enables revalidation of expired cache items using conditional GET requests with the “If-Modified-Since”
and “If-None-Match” header fields.
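A minimal sketch enabling revalidation, assuming the micro_cache zone and backend upstream from the earlier micro-caching example:

location / {
    proxy_cache            micro_cache;
    proxy_cache_revalidate on;   # refresh expired items with conditional GETs (If-Modified-Since / If-None-Match)
    proxy_pass             http://backend;
}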
46
proxy_cache_min_uses
Documentation
location ~* /legacy {
...
proxy_cache_min_uses 5;
}
Syntax: proxy_cache_min_uses number;
Default: proxy_cache_min_uses 1;
Context: http, server, location
Definition: Sets the number of requests after which the response will be cached. This will help with disk utilization and
hit ratio of your cache.
47
proxy_cache_methods
Documentation
location ~* /data {
...
proxy_cache_methods GET HEAD POST;
}
Syntax: proxy_cache_methods GET | HEAD | POST …;
Default: proxy_cache_methods GET HEAD;
Context: http, server, location
Definition: NGINX only caches GET and HEAD request methods by default. Using this directive you can add
additional methods.
If you plan to add additional methods consider updating the cache key to include the $request_method
variable if the response will be different depending on the request method.
48
proxy_buffering
Documentation
Syntax: proxy_buffering on | off;
Default: proxy_buffering on;
Context: http, server, location
Definition: Enables or disables buffering of responses from the proxied server.
When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving
it into the buffers set by the proxy_buffer_size and proxy_buffers directives. If the whole response does not
fit into memory, a part of it can be saved to a temporary file on the disk.
When buffering is disabled, the response is passed to a client synchronously, immediately as it is received.
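A hedged example of buffering-related tuning, reusing the api-backends upstream from earlier slides; note that responses are only cached while buffering is enabled, and the buffer sizes below are illustrative only:

location /api {
    proxy_buffering         on;      # must remain on for responses to be cached
    proxy_buffer_size       8k;      # buffer for the first part of the response (headers)
    proxy_buffers           16 8k;   # number and size of buffers per connection
    proxy_busy_buffers_size 16k;     # limit on buffers busy sending to the client
    proxy_pass              http://api-backends;
}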
49
location ^~ /wordpress {
...
proxy_cache cache;
proxy_ignore_headers Cache-Control;
}
Override Cache-Control headers
Tip: By default NGINX will honor all Cache-Control headers from the origin server, in turn not caching
responses with Cache-Control set to Private, No-Cache, No-Store or with Set-Cookie in the response
header.
Using proxy_ignore_headers you can disable processing of certain response header fields from the
proxied server.
50
location / {
...
proxy_cache cache;
proxy_cache_bypass $cookie_nocache $arg_nocache $http_cache_bypass;
}
Can I Punch Through the Cache?
Tip: If you want to disregard the cache and go straight to the origin for a response, you can use the proxy_cache_bypass directive.
51
proxy_cache_purge
Documentation
Syntax: proxy_cache_purge string ...;
Default: -
Context: http, server, location
Definition: Defines conditions under which the request will be considered a cache purge request. If at least one value
of the string parameters is not empty and is not equal to “0” then the cache entry with a corresponding
cache key is removed.
The result of successful operation is indicated by returning the 204 (No Content) response.
Note: NGINX Plus only feature
52
proxy_cache_path /tmp/cache keys_zone=mycache:10m levels=1:2 inactive=60s;
map $request_method $purge_method {
PURGE 1;
default 0;
}
server {
listen 80;
server_name www.example.com;
location / {
proxy_pass http://localhost:8002;
proxy_cache mycache;
proxy_cache_purge $purge_method;
}
}
Example Cache Purge Configuration
Tip: Using NGINX Plus, you can issue unique request methods to invalidate the cache
53
Useful Examples
54
http {
proxy_cache_path /path/to/hdd1 levels=1:2 keys_zone=my_cache_hdd1:10m
max_size=10g inactive=60m use_temp_path=off;
proxy_cache_path /path/to/hdd2 levels=1:2 keys_zone=my_cache_hdd2:10m
max_size=10g inactive=60m use_temp_path=off;
split_clients $request_uri $my_cache {
50% "my_cache_hdd1";
50% "my_cache_hdd2";
}
server {
...
location / {
proxy_cache $my_cache;
proxy_pass http://my_upstream;
}
}
}
Split the Cache Across HDDs
Tip: Using the split_clients directive, NGINX performs a hash function on a variable of your choice and, based on that hash, dynamically sets a new variable that can be used elsewhere in the configuration.
55
Using NGINX for Byte Range Caching
56
http {
proxy_cache_path /tmp/mycache keys_zone=mycache:10m;
server {
listen 80;
proxy_cache mycache;
slice 1m;
proxy_cache_key $host$uri$is_args$args$slice_range;
proxy_set_header Range $slice_range;
proxy_http_version 1.1;
proxy_cache_valid 200 206 1h;
location / {
proxy_pass http://origin.example.com;
}
}
}
Byte Range Caching
Tip: With the slice directive, NGINX requests large files from the origin in fixed-size byte ranges (1 MB segments here) and caches each segment independently, because $slice_range is both part of the cache key and passed to the origin as the Range header.
57
Architecting for High Availability
58
Two Approaches
• Sharded (High Capacity)
• Shared (Replicated)
59
Shared Cache Clustering
Tip: If your primary goal is to achieve high availability while minimizing load on the origin servers, this scenario
provides a highly available shared cache.
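A sketch of one way this can be wired up, loosely based on the shared-cache-cluster pattern described in the NGINX blog posts linked at the end: each secondary cache server proxies cache misses to the primary cache server, and reaches the origin only if the primary is unavailable, so both servers end up holding the same content. All hostnames and the zone name are placeholders.

# configuration on the secondary cache server
upstream cache_primary {
    server primary-cache.example.com;
    server origin.example.com backup;   # used only when the primary cache server is down
}

proxy_cache_path /var/cache/nginx keys_zone=shared_cache:10m max_size=10g;

server {
    listen 80;
    proxy_cache shared_cache;

    location / {
        proxy_pass http://cache_primary;
    }
}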
60
And Failover…
Tip: In the event of a failover there is no loss of cached content and the origin does not receive unneeded proxy requests.
61
Sharding your Cache
Tip: If your primary goal is to create a very high-capacity cache, shard (partition) your cache across multiple servers. This maximizes the resources you have while minimizing the impact on your origin servers, depending on the number of cache servers in your cache tier.
62
upstream cache_servers {
hash $scheme$proxy_host$request_uri consistent;
server cache1.example.com;
server cache2.example.com;
server cache3.example.com;
server cache4.example.com;
}
Hash Load Balancing
Tip: Using the hash load-balancing algorithm with the same string as the proxy cache key, each resource is routed to, and therefore cached on, only one cache server.
63
Combined Load Balancer and Cache
Tip: Alternatively, it is possible to consolidate the load balancer and cache tier into one with the use of various NGINX directives and parameters.
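One hedged sketch of the combined approach: the same NGINX instance both load balances across the origins and caches locally; the zone name and origin hostnames are placeholders.

# each combined load balancer/cache node
proxy_cache_path /var/cache/nginx keys_zone=edge_cache:10m max_size=10g;

upstream origin_servers {
    server origin1.example.com;
    server origin2.example.com;
}

server {
    listen 80;

    location / {
        proxy_cache edge_cache;              # cache on the load balancer itself
        proxy_pass  http://origin_servers;   # balance cache misses across the origins
    }
}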
64
Multi-Tier with “Hot Cache”
Tip: If needed, a "Hot Cache Tier" can be enabled on the load-balancer layer, which gives you the same high-capacity cache and provides high availability for specific cached resources.
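A hedged sketch of a hot cache on the load-balancer tier: a small, short-lived cache sits in front of the sharded cache tier from the earlier example (the cache_servers upstream), so the most requested objects are served without crossing to the cache tier at all. Sizes and times are illustrative.

# load-balancer tier: small "hot" cache in front of the sharded cache servers
proxy_cache_path /var/cache/nginx/hot keys_zone=hot_cache:10m max_size=1g inactive=1m;

server {
    listen 80;

    location / {
        proxy_cache       hot_cache;
        proxy_cache_valid 200 301 302 30s;       # keep hot objects only briefly on the LB tier
        proxy_pass        http://cache_servers;  # consistent-hash upstream defined earlier
    }
}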
Documentation
• https://nginx.org
• https://nginx.com
Blog
• https://www.nginx.com/blog/nginx-caching-guide/
• https://www.nginx.com/blog/benefits-of-microcaching-nginx/
• https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-1/
• https://www.nginx.com/blog/shared-caches-nginx-plus-cache-clusters-part-2/
• https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/
Webinar
• https://www.nginx.com/resources/webinars/content-caching-nginx-plus/
65
Links
Thank You
66
https://www.nginx.com/blog/author/kjones/
@webopsx
Kevin Jones
Technical Solutions Architect
NGINX Inc.
kevin@nginx.com
https://www.slideshare.net/KevinJones62
https://www.linkedin.com/in/kevin-jones-19b17b47/
