When people hear the name NGINX, they usually associate the open source platform with its popular roles as an HTTP web server or load balancer. What many people don't know is that the platform contains a wealth of powerful features for building an HTTP caching layer, which is why NGINX is often used as a framework for building powerful, scalable, and highly available content delivery networks. In this talk we will dive into each unique NGINX caching directive and its available configuration options. We will show different architectural approaches for building a highly available HTTP content cache layer, along with other NGINX configurations that can be critical to your deployment. Walking away from this presentation, attendees will have the knowledge required to configure basic and advanced caching on their NGINX servers.
Learn how to load balance your applications following best practices with NGINX and NGINX Plus.
Join this webinar to learn:
- How to configure basic HTTP load balancing features
- The essential elements of load balancing: session persistence, health checks, and SSL termination
- How to load balance MySQL, DNS, and other common TCP/UDP applications
- How to have NGINX Plus automatically discover new service instances in an auto-scaling or microservices environment
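As a minimal sketch of the load-balancing basics listed above, a configuration along these lines covers an upstream group, session persistence, SSL termination, and health checks. The hostnames, ports, and certificate paths are placeholders, and the `sticky` and `health_check` directives are NGINX Plus features:

```nginx
# Minimal HTTP load-balancing sketch (hostnames and paths are examples).
upstream backend {
    zone backend 64k;                # shared memory, required for health checks
    least_conn;                      # send each request to the least-busy server
    server app1.example.com:8080;
    server app2.example.com:8080;
    sticky cookie srv_id expires=1h; # session persistence (NGINX Plus only)
}

server {
    listen 443 ssl;                  # SSL termination at the load balancer
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        proxy_pass http://backend;
        health_check;                # active health checks (NGINX Plus only)
    }
}
```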
NGINX is a well-kept secret of high-performance web service. Many people know NGINX as an open source web server that delivers static content blazingly fast. But it has many more features to help accelerate delivery of bits to your end users, even in more complicated application environments. In this talk we'll cover several things that most developers or administrators could implement to further delight their end users.
On-demand recording: nginx.com/resources/webinars/whats-new-nginx-plus-r12
NGINX Plus Release 12 (R12) is a significant release of the high-performance software application delivery platform, including award-winning customer support, a load balancer, content cache, and web server.
R12 adds improved configuration sharing, additional monitoring statistics, enhanced caching, improved health checks, and the general availability (GA) release of nginScript, which increases dynamic configuration capabilities for NGINX and NGINX Plus.
Join Liam Crilly, Director of Product Management for NGINX and NGINX Plus, to learn:
* How to use a new and improved method for synchronizing configuration across a cluster of servers
* What new features have been added to nginScript, the unique JavaScript implementation for NGINX and NGINX Plus
* Which new statistics have been added to NGINX Plus monitoring, such as response time for upstream servers, response codes for TCP/UDP upstreams, and upstream hostnames
* How improved health checks can help you maximize server uptime
This document provides an overview of installing and configuring the NGINX web server. It discusses installing NGINX from official repositories or from source on Linux systems like Ubuntu, Debian, CentOS and Red Hat. It also covers verifying the installation, basic configurations for web serving, reverse proxying, load balancing and caching. The document discusses modifications that can be made to the main nginx.conf file to improve performance and reliability. It also covers monitoring NGINX using status pages and logs, and summarizes key documentation resources.
Nginx is a lightweight web server that was created in 2002 to address the C10K problem of scaling to 10,000 concurrent connections. It uses an asynchronous event-driven architecture that uses less memory and CPU than traditional multi-threaded models. Key features include acting as a reverse proxy, load balancer, HTTP cache, and web server. Nginx has grown in popularity due to its high performance, low memory usage, simple configuration, and rich feature set including modules for streaming, caching, and dynamic content.
On-demand recording: https://www.nginx.com/resources/webinars/rate-limiting-nginx/
Learn how to mitigate DDoS and password-guessing attacks by limiting the number of HTTP requests a user can make in a given period of time.
In this webinar you will learn:
* How to protect application servers from being overwhelmed with request limits
* About the burst and no‑delay features for minimizing delay while handling large bursts of user requests
* How to use the map and geo blocks to impose different rate limits on different HTTP user requests
* About using the limit_req_log_level directive to set logging levels for rate‑limiting events
About the webinar
A delay of even a few seconds for a screen to render is interpreted by many users as a breakdown in the experience. There are many causes of these breakdowns, one of which is DDoS attacks that tie up your system’s resources.
Rate limiting is a powerful feature of NGINX that can mitigate DDoS attacks, which would otherwise overload your servers and hinder application performance. In this webinar, we’ll cover basic concepts as well as advanced configuration. We will finish with a live demo that shows NGINX rate limiting in action.
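The concepts mentioned above can be sketched in configuration. This is an illustrative fragment, not the webinar's exact demo; the rate, burst size, address range, and upstream name are example values. It combines `geo` and `map` to exempt certain clients, `limit_req` with `burst` and `nodelay`, and `limit_req_log_level`:

```nginx
# Rate-limiting sketch: 10 requests/second per client IP (values are examples).
geo $limit {
    default    1;
    10.0.0.0/8 0;                    # internal traffic: not rate limited
}
map $limit $limit_key {
    0 "";                            # an empty key disables the limit
    1 $binary_remote_addr;
}
limit_req_zone $limit_key zone=perip:10m rate=10r/s;
limit_req_log_level warn;            # log rejected requests at 'warn'

server {
    location /login/ {
        # Allow short bursts of up to 20 requests with no added delay.
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://app_backend;   # placeholder upstream
    }
}
```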
On demand recording: https://www.nginx.com/resources/webinars/nginx-http2-server-push-grpc/
We discuss new NGINX support for HTTP/2 server push and proxying gRPC traffic.
Check out this webinar to learn:
- About NGINX HTTP/2 support
- How to use HTTP/2 server push with NGINX
- How to proxy gRPC traffic using NGINX
- How to configure both features, with live demonstrations
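Both features above map to a handful of directives. As a hedged sketch (certificate paths, asset names, and the gRPC service address are examples), HTTP/2 server push uses `http2_push` and gRPC proxying uses `grpc_pass`:

```nginx
# HTTP/2 server push and gRPC proxying sketch (paths and ports are examples).
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/cert.key;

    location = /index.html {
        http2_push /style.css;             # push assets alongside the page
        http2_push /app.js;
    }

    location /helloworld.Greeter/ {
        grpc_pass grpc://127.0.0.1:50051;  # plaintext gRPC to a local service
    }
}
```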
Nginx is a popular tool for load balancing and caching. It offers high performance, reliability and flexibility for load balancing through features like upstream modules, health checks, and request distribution methods. It can also improve response times and handle traffic spikes through caching static content and supporting techniques like stale caching.
View full webinar on demand at http://bit.ly/nginxbenchmarking
Whether you’re doing performance testing or planning for infrastructure needs, benchmarking can be a big deal. Join us for this webinar where we cover NGINX benchmarking best practices, including:
- the test environment
- configuring NGINX
- using benchmarking tools
- and more!
You’ll learn how to approach benchmarking so that your results are more accurate, better understood, and better aligned with the needs of your project.
Webinar slides: MySQL & MariaDB load balancing with ProxySQL & ClusterControl...
Proxies are building blocks of HA setups for MySQL & MariaDB. They can detect failed nodes and route queries to hosts that are still available. If your master failed and you had to promote one of your slaves, proxies will detect such topology changes and route your traffic accordingly. More advanced proxies can do much more: route traffic based on precise query rules, cache queries, or mirror them. They can even be used to implement different types of sharding.
Introducing ProxySQL!
In this joint webinar with ProxySQL’s creator, René Cannaò, we discuss this new proxy and its key features. We show you how you can deploy ProxySQL using ClusterControl. And we give you an early walk-through of some of the exciting ClusterControl features for ProxySQL that we have planned for its next release.
AGENDA
1. Introduction
2. ProxySQL concepts (René Cannaò)
- Hostgroups
- Query rules
- Connection multiplexing
- Configuration management
3. Demo of ProxySQL setup in ClusterControl (Krzysztof Książek)
4. Upcoming ClusterControl features for ProxySQL
SPEAKERS
René Cannaò, Creator & Founder, ProxySQL. René has 10 years of experience as a System, Network and Database Administrator, mainly on Linux/Unix platforms. Over the last 4-5 years his experience has focused mainly on MySQL, working as a Senior MySQL Support Engineer at Sun/Oracle and then as a Senior Operational DBA at Blackbird (formerly PalominoDB). In that period he built an analytical and problem-solving mindset, and he is always eager to take on new challenges, especially those related to high performance. And then he created ProxySQL…
Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.
Key external invitees will each give a 10-minute lightning talk about their company, their interest in ARM servers, and any requirements to port their software solutions to ARM 64-bit platforms.
Video: https://www.youtube.com/watch?v=XWxrVM1i7gA&list=UUIVqQKxCyQLJS6xvSmfndLA
Mitigating Security Threats with Fastly - Joe Williams at Fastly Altitude 2015
Fastly Altitude - June 25, 2015. Joe Williams, Computer Operator at GitHub, discusses using a CDN to mitigate security threats.
Video of the talk: http://fastly.us/Altitude2015_Mitigating-Security-Threats-2
Joe's bio: Joe Williams is a Computer Operator at GitHub, and joined their infrastructure team in August 2013. Joe's passion for distributed systems, queuing theory and automation help keep the lights on. When not behind a computer you can generally find him riding a bicycle around Marin, CA.
This document discusses caching strategies for Rails applications, including:
1. Using Rails caching for queries, pages, assets, and fragments to improve performance.
2. Configuring Cache-Control headers, compression, and CDNs like Fastly for efficient caching.
3. Techniques for caching dynamic content at the edge using surrogate keys and purging cached responses.
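The header-driven ideas above can be sketched at the web-server layer. This is an illustrative NGINX fragment, not the document's own configuration: the URL patterns and upstream name are hypothetical, and `Surrogate-Key` is a CDN convention (used by Fastly, for example) rather than a standard NGINX feature:

```nginx
# Illustrative caching headers for static assets and purgeable dynamic pages.
location ~* \.(css|js|png|jpg)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}

location /articles/ {
    # Tag responses so the CDN can purge all pages for one article at once.
    add_header Surrogate-Key "article-$arg_id";   # example key scheme
    add_header Cache-Control "s-maxage=3600";     # cache at the edge for 1 hour
    proxy_pass http://rails_app;                  # placeholder upstream
}
```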
In this webinar we discuss new features in NGINX Plus R15, which includes support for gRPC, HTTP/2 Server Push, enhanced clustering, and OpenID Connect SSO integration.
Watch this webinar to learn:
- About new HTTP/2 enhancements: gRPC and HTTP/2 server push support
- About new state sharing and clustering support in NGINX Plus, with support for Sticky Learn session persistence
- How to integrate with Okta, OneLogin, and other identity providers to provide single sign on (SSO) for your applications
- How to initiate subrequests with the NGINX JavaScript module, new variables, and other great new enhancements in this release
https://www.nginx.com/resources/webinars/whats-new-nginx-plus-r15/
These slides show how to reduce latency on websites and reduce bandwidth for improved user experience.
Covering network, compression, caching, etags, application optimisation, sphinxsearch, memcache, db optimisation
Load Balancing Applications with NGINX in a CoreOS Cluster
The document discusses load balancing applications with NGINX in a CoreOS cluster. It provides an overview of using CoreOS, etcd, and fleet to deploy and manage containers across a cluster. Etcd is used for service discovery to track dynamic IP addresses and endpoints, while fleet is used as an application scheduler to deploy units and rebalance loads. NGINX can then be used as a software load balancer to distribute traffic to the backend services. The document demonstrates setting up this environment with CoreOS, etcd, fleet and NGINX to provide load balancing in a clustered deployment.
Less and faster – Cache tips for WordPress developers
Otto Kekäläinen, the code-loving CEO of Seravo, held a webinar on May 12, 2020, that focused on caching: what should a WordPress developer know, and which best practices should they follow?
The document provides tips and tricks for optimizing website performance. It discusses using PHP-FPM or HHVM as faster alternatives to running PHP as an Apache module. Nginx is recommended as a lightweight web server that can serve static files and pass dynamic requests to PHP faster. Caching with Nginx, Memcached, and browser caching can significantly improve performance. Load balancing upstream servers and monitoring tools are also discussed.
ITB2019 NGINX Overview and Technical Aspects - Kevin Jones
I will be giving a brief overview of the history of NGINX along with a look at the features and functionality in the project as it stands today. I will give some real use-case examples of how NGINX can be used to solve problems and eliminate complexity within infrastructure. I will then dive into the future of the modern web and how NGINX is monitoring and leveraging industry changes to enhance the product for individuals and companies in the industry.
Test rate limits in dry-run mode and monitor NGINX Plus using advanced metrics with NGINX Plus R19.
On-Demand Link:
https://www.nginx.com/resources/webinars/whats-new-nginx-plus-r19/
Watch this webinar to learn:
- How to monitor your NGINX Plus ecosystem with fine-grained insights using advanced metrics
- About dynamically blacklisting IP address ranges in the key-value store
- How to apply different bandwidth limits based on attributes of incoming traffic
- About testing rate limits in dry-run mode
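The R19 features listed above can be sketched as follows. This is an illustrative fragment (zone names, rates, and paths are examples, not the webinar's configuration): `limit_req_dry_run` logs would-be rejections without enforcing them, and `limit_rate` accepts a variable (NGINX 1.17.0+) so bandwidth limits can vary by traffic attributes:

```nginx
# Dry-run rate limiting and attribute-based bandwidth limits (example values).
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
limit_req_dry_run on;                # log what would be rejected, enforce nothing

map $ssl_protocol $download_rate {   # slower downloads for older TLS clients
    "TLSv1.3" 1m;
    default   500k;
}

server {
    location /downloads/ {
        limit_req  zone=perip burst=20;
        limit_rate $download_rate;   # variable support requires NGINX 1.17.0+
    }
}
```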
The document discusses the internals and architecture of the Nginx web server. It covers Nginx's event-driven and non-blocking architecture, its use of memory pools and data structures like radix trees, how it processes HTTP requests through different phases, and how modules and extensions can be developed for Nginx. The document also provides an overview of Nginx's configuration, caching, and load balancing capabilities.
How to make a high-quality Node.js app, Nikita Galkin
This document discusses how to build high quality Node.js applications. It covers attributes of quality like understandability, modifiability, portability, reliability, efficiency, usability, and testability. For each attribute, it provides examples of what could go wrong and best practices to achieve that attribute, such as using dependency injection for modifiability, environment variables for portability, and graceful shutdown for reliability. It also discusses Node.js programming paradigms like callbacks, promises, and async/await and recommends best practices for testing Node.js applications.
This document discusses using NGINX to deliver high performance applications through efficient caching. It explains that NGINX can be used as a web server, load balancer, and high availability content cache to provide low latency, scalability, availability and reduced costs. Specific NGINX caching configurations like proxy_cache, proxy_cache_valid and proxy_cache_background_update are described. Microcaching optimizations with NGINX are also covered, showing significant performance improvements over Apache+WordPress and a reverse proxy only setup.
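A minimal microcaching sketch using the directives named above might look like this (the cache path, zone size, and upstream name are placeholders). Even a one-second cache lifetime can absorb large traffic spikes on mostly-identical dynamic pages:

```nginx
# Microcaching sketch: cache dynamic responses for one second (paths are examples).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:10m max_size=1g;

server {
    listen 80;
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;              # cache successful responses briefly
        proxy_cache_use_stale updating;        # serve stale while refreshing
        proxy_cache_background_update on;      # refresh the entry in the background
        proxy_cache_lock on;                   # collapse concurrent cache misses
        proxy_pass http://wordpress_backend;   # placeholder upstream
    }
}
```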
Where is my cache architectural patterns for caching microservices by example
The document discusses various architectural patterns for caching microservices, including embedded caching, embedded distributed caching, client-server caching, cloud caching, sidecar caching, reverse proxy caching, and reverse proxy sidecar caching. It provides examples and descriptions of each pattern, discussing pros and cons. The presentation concludes with a summary matrix comparing the different caching patterns based on factors like whether they are application-aware, support containers, are language-agnostic, support large amounts of data, have security restrictions, and can be deployed to the cloud.
The document discusses configuring Nginx and PHP-FPM for high performance websites. Some key points:
- Nginx is a lightweight and fast HTTP server that is well-suited for high traffic loads. It can be used as a web server, reverse proxy, load balancer, and more.
- PHP-FPM (PHP FastCGI Process Manager) runs PHP processes as a pool that is separate from the web server for better isolation and performance. Nginx communicates with PHP-FPM via FastCGI.
- Benchmark results show Nginx performing better than Apache, especially under high concurrency loads. Caching with Nginx and Memcached can further improve performance.
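The Nginx-to-PHP-FPM hookup described above can be sketched as a server block along these lines (the socket path and web root are examples; the socket must match the `listen` setting in the PHP-FPM pool configuration):

```nginx
# Passing PHP requests to a PHP-FPM pool over FastCGI (paths are examples).
server {
    listen 80;
    root /var/www/html;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;   # front-controller pattern
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock; # PHP-FPM pool socket
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```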
NGINX is used by more than 130 million websites as a lightweight way to serve web content. Use it to decrease costs, improve performance and open up bottlenecks in web and application server environments without a major architectural overhaul. In this talk, we'll cover the three most basic use cases of static content delivery, application load balancing, and web proxying with caching; and touch on the NGINX maintained Docker container.
Andrew Betts Web Developer, The Financial Times at Fastly Altitude 2016
Running custom code at the Edge using a standard language is one of the biggest advantages of working with Fastly’s CDN. Andrew gives you a tour of all the problems the Financial Times and Nikkei solve in VCL and how their solutions work.
In this webinar we help you get started using NGINX, the de facto web server for building modern applications. We cover best practices for installing, configuring, and troubleshooting both NGINX Open Source and the enterprise-grade NGINX Plus.
https://www.nginx.com/resources/webinars/nginx-basics-best-practices-emea-2/
Learn how to load balance your applications following best practices with NGINX and NGINX Plus.
On-Demand Recording: https://www.nginx.com/resources/webinars/high-performance-load-balancing/
Join this webinar to learn:
* How to configure basic HTTP load balancing features
* The essential elements of load balancing: session persistence, health checks, and SSL termination
* How to load balance MySQL, DNS, and other common TCP/UDP applications
* How to have NGINX Plus automatically discover new service instances in an auto-scaling or microservices environment
About the webinar
You’ve built a great application and it’s gaining in popularity. Or maybe you already have a hardware load balancer and you’re looking to replace it with a software solution. In this webinar we’ll share the latest information on how to scale out and load balance your applications with NGINX and NGINX Plus.
You’re ready to make your applications more responsive, scalable, fast and secure. Then it’s time to get started with NGINX. In this webinar, you will learn how to install NGINX from a package or from source onto a Linux host. We’ll then look at some common operating system tunings you could make to ensure your NGINX install is ready for prime time.
View full webinar on demand at http://nginx.com/resources/webinars/installing-tuning-nginx/
This document provides an overview of ProxySQL, a high performance proxy for MySQL. It discusses ProxySQL's main features such as query routing, caching, load balancing, and high availability capabilities including seamless failover. The document also describes ProxySQL's internal architecture including modules for queries processing, user authentication, hostgroup management, and more. Examples are given showing how hostgroups can be used for read/write splitting and replication topologies.
This document discusses socket programming and network programming concepts like TCP and UDP. It provides examples of using Netcat and Python for sockets. It also summarizes the architecture of Nginx and Openresty, a framework that embeds Lua in Nginx allowing full web applications to run within the Nginx process for high performance and scalability. Openresty allows accessing and modifying requests and responses with Lua scripts.
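As a small illustration of the OpenResty model described above, a Lua handler can be embedded directly in the NGINX configuration. This sketch assumes OpenResty (or the lua-nginx-module) is installed; the location path and argument name are examples:

```nginx
# Minimal OpenResty sketch: a Lua handler running inside the NGINX worker.
location /hello {
    default_type text/plain;
    content_by_lua_block {
        -- read a query argument and respond without leaving the worker process
        local name = ngx.var.arg_name or "world"
        ngx.say("Hello, " .. name)
    }
}
```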
Sami provided a beginner-friendly introduction to Amazon Web Services (AWS), covering essential terms, products, and services for cloud deployment. Participants explored AWS' latest Gen AI offerings, making it accessible for those starting their cloud journey or integrating AI into coding practices.
Explore the rapid development journey of TryBoxLang, completed in just 48 hours. This session delves into the innovative process behind creating TryBoxLang, a platform designed to showcase the capabilities of BoxLang by Ortus Solutions. Discover the challenges, strategies, and outcomes of this accelerated development effort, highlighting how TryBoxLang provides a practical introduction to BoxLang's features and benefits.
Are you wondering how to migrate to the Cloud? At the ITB session, we addressed the challenge of managing multiple ColdFusion licenses and AWS EC2 instances. Discover how you can consolidate with just one EC2 instance capable of running over 50 apps using CommandBox ColdFusion. This solution supports both ColdFusion flavors and includes cb-websites, a GoLang binary for managing CommandBox websites.
BoxLang Developer Tooling: VSCode Extension and Debugger
Discover BoxLang, the innovative JVM programming language developed by Ortus Solutions. Designed to harness the power of the Java Virtual Machine, BoxLang offers a modern approach to application development with robust performance and scalability. Join us as we explore the capabilities of BoxLang, its syntax, and how it enhances productivity in software development.
How to debug ColdFusion Applications using “ColdFusion Builder extension for ...
Unlock the secrets of seamless ColdFusion error troubleshooting! Join us to explore the potent capabilities of Visual Studio Code (VS Code) and ColdFusion Builder (CF Builder) in debugging. This hands-on session guides you through practical techniques tailored for local setups, ensuring a smooth and efficient development experience.
CommandBox was highlighted as a powerful web hosting solution, perfect for developers and businesses alike. Featuring a built-in server and command-line interface, CommandBox simplified web application management. Developers could deploy multiple application instances simultaneously, optimizing development workflows. CommandBox's efficient deployment processes ensured reliable web hosting, seamlessly integrating into existing workflows for scalability and feature enhancements.
Join me for an insightful journey into task scheduling within the ColdBox framework. In this session, we explored how to effortlessly create and manage scheduled tasks directly in your code, enhancing control and efficiency in applications and modules. Attendees experienced a user-friendly dashboard for seamless task management and monitoring. Whether you're experienced with ColdBox or new to it, this session provided practical knowledge and tips to streamline your development workflow.
Disk to Cloud: Abstract your File Operations with CBFS
In this session, we explored how the cbfs module empowers developers to abstract and manage file systems seamlessly across their lifecycle. From local development to S3 deployment and customized media providers requiring authentication, cbfs offers flexible solutions. We discussed how cbfs simplifies file handling with enhanced workflow efficiency compared to native methods, along with practical tips to accelerate complex file operations in your projects.
In this session, we explored setting up Playwright, an end-to-end testing tool for simulating browser interactions and running TestBox tests. Participants learned to configure Playwright for applications, simulate user interactions to stress-test forms, and handle scenarios like taking screenshots, recording sessions, capturing Chrome dev tools traces, testing login failures, and managing broken JavaScript. The session also covered using Playwright with non-ColdBox sites, providing practical insights into enhancing testing capabilities.
Securing Your Application with Passkeys and cbSecurity
Discover Passkeys, the next evolution in secure login methods that eliminate traditional password vulnerabilities. Learn about the CBSecurity Passkeys module's installation, configuration, and integration into your application to enhance security.
Schrodinger’s Backup: Is Your Backup Really a Backup?
In this session, we discussed the critical need for comprehensive backups across all aspects of our industry—from code and databases to webservers, file servers, and network configurations. Emphasizing the importance of proactive measures, attendees were urged to ensure their backup systems were tested through restoration processes. The session underscored the risk of discovering backup issues only during crises, highlighting the necessity of verifying backup integrity through restoration tests.
Reverse proxy & web cache with NGINX, HAProxy and Varnish - El Mahdi Benzekri
Discover the very wide world of web servers. In addition to basic web delivery functionality, we will cover reverse proxying, resource caching, and load balancing.
Nginx and Apache HTTPD will be used as web servers and reverse proxies, and to illustrate some caching features we will also present Varnish, a powerful caching server.
To introduce load balancers, we will compare Nginx and HAProxy.
Delivering High Performance Websites with NGINX - NGINX, Inc.
NGINX Plus is an easy-to-install, proven software solution to deliver your sites and applications through state-of-the-art intelligent load balancing and high performance acceleration. Improve your servers’ performance, scalability, and reliability with application delivery from NGINX Plus.
NGINX Plus significantly increases application performance during periods of high load with its caching, HTTP connection processing, and efficient offloading of traffic from slow networks. NGINX Plus offers enterprise application load balancing, sophisticated health checks, and more, to balance workloads and avoid user-visible errors.
Check out this webinar to:
* Learn why web performance matters more than ever, in the face of growing application complexity and traffic volumes
* Get the lowdown on the performance challenges of HTTP, and why the real world is so different to a development environment
* Understand why NGINX and NGINX Plus are such popular solutions for mitigating these problems and restoring peak performance
* Look at some real-world deployment examples of accelerating traffic in complex scenarios
Basic concepts of NGINX, Apache vs. NGINX, NGINX as a load balancer, NGINX as a reverse proxy, and configuration of NGINX as a load balancer and reverse proxy.
5 things you didn't know nginx could dosarahnovotny
NGINX is a well kept secret of high performance web service. Many people know NGINX as an Open Source web server that delivers static content blazingly fast. But, it has many more features to help accelerate delivery of bits to your end users even in more complicated application environments. In this talk we'll cover several things that most developers or administrators could implement to further delight their end users.
On-demand recording: nginx.com/resources/webinars/whats-new-nginx-plus-r12
NGINX Plus Release 12 (R12) is a significant release of the high-performance software application delivery platform, including award-winning customer support, a load balancer, content cache, and web server.
R12 adds improved configuration sharing, additional monitoring statistics, enhanced caching, improved health checks, and the general availability (GA) release of nginScript, which increases dynamic configuration capabilities for NGINX and NGINX Plus.
Join Liam Crilly, Director of Product Management for NGINX and NGINX Plus, to learn:
* How to use a new and improved method for synchronizing configuration across a cluster of servers
* What new features have been added to nginScript, the unique JavaScript implementation for NGINX and NGINX Plus
* Which new statistics have been added to NGINX Plus monitoring, such as response time for upstream servers, response codes for TCP/UDP upstreams, and upstream hostnames
* How improved health checks can help you maximize server uptime
NGINX: Basics & Best Practices - EMEA BroadcastNGINX, Inc.
This document provides an overview of installing and configuring the NGINX web server. It discusses installing NGINX from official repositories or from source on Linux systems like Ubuntu, Debian, CentOS and Red Hat. It also covers verifying the installation, basic configurations for web serving, reverse proxying, load balancing and caching. The document discusses modifications that can be made to the main nginx.conf file to improve performance and reliability. It also covers monitoring NGINX using status pages and logs, and summarizes key documentation resources.
Nginx is a lightweight web server that was created in 2002 to address the C10K problem of scaling to 10,000 concurrent connections. It uses an asynchronous event-driven architecture that uses less memory and CPU than traditional multi-threaded models. Key features include acting as a reverse proxy, load balancer, HTTP cache, and web server. Nginx has grown in popularity due to its high performance, low memory usage, simple configuration, and rich feature set including modules for streaming, caching, and dynamic content.
Rate Limiting with NGINX and NGINX PlusNGINX, Inc.
On-demand recording: https://www.nginx.com/resources/webinars/rate-limiting-nginx/
Learn how to mitigate DDoS and password-guessing attacks by limiting the number of HTTP requests a user can make in a given period of time.
This webinar will teach you how to:
* How to protect application servers from being overwhelmed with request limits
* About the burst and no‑delay features for minimizing delay while handling large bursts of user requests
* How to use the map and geo blocks to impose different rate limits on different HTTP user requests
* About using the limit_req_log_level directive to set logging levels for rate‑limiting events
About the webinar
A delay of even a few seconds for a screen to render is interpreted by many users as a breakdown in the experience. There are many reasons for these breakdowns in the user experience, one of which is DDoS attacks which tie up your system’s resources.
Rate limiting is a powerful feature of NGINX that can mitigate DDoS attacks, which would otherwise overload your servers and hinder application performance. In this webinar, we’ll cover basic concepts as well as advanced configuration. We will finish with a live demo that shows NGINX rate limiting in action.
On demand recording: https://www.nginx.com/resources/webinars/nginx-http2-server-push-grpc/
We discuss new NGINX support for HTTP/2 server push and proxying gRPC traffic.
Check out this webinar to learn:
- About NGINX HTTP/2 support
- How to use HTTP/2 server push with NGINX
- How to proxy gRPC traffic using NGINX
- How to configure both features, with live demonstrations
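Both features above can be enabled with a few directives. The following is an illustrative sketch, not the webinar's demo configuration; certificate paths and ports are assumptions:

```nginx
# HTTP/2 server push: proactively send an asset with the page response.
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/cert.pem;   # assumed paths
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        http2_push /assets/style.css;   # pushed alongside the HTML
        root /usr/share/nginx/html;
    }
}

# gRPC proxying: terminate HTTP/2 and forward to a backend gRPC server.
server {
    listen 50051 http2;

    location / {
        grpc_pass grpc://127.0.0.1:50052;
    }
}
```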
Nginx is a popular tool for load balancing and caching. It offers high performance, reliability, and flexibility for load balancing through features like upstream modules, health checks, and request-distribution methods. It can also improve response times and absorb traffic spikes by caching static content and serving stale content when backends are unavailable.
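The combination of an upstream group, passive health checks, and stale-content fallback described above might look like this (addresses, zone sizes, and timings are illustrative assumptions):

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

upstream app_servers {
    least_conn;                                           # pick the least-busy server
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;   # passive health check
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;
        # Serve stale cached responses if the backend errors, times out,
        # or is being refreshed.
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://app_servers;
    }
}
```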
Benchmarking NGINX for Accuracy and Results (NGINX, Inc.)
View full webinar on demand at http://bit.ly/nginxbenchmarking
Whether you’re doing performance testing or planning for infrastructure needs, benchmarking matters. Join us for this webinar where we cover NGINX benchmarking best practices, including:
- the test environment
- configuring NGINX
- using benchmarking tools
- and more!
You’ll learn how to approach benchmarking so that your results are more accurate, better understood, and better aligned with the needs of your project.
5 things you didn't know nginx could do (Velocity; Sarah Novotny)
NGINX is a well kept secret of high performance web service. Many people know NGINX as an Open Source web server that delivers static content blazingly fast. But, it has many more features to help accelerate delivery of bits to your end users even in more complicated application environments. In this talk we’ll cover several things that most developers or administrators could implement to further delight their end users.
Webinar slides: MySQL & MariaDB load balancing with ProxySQL & ClusterControl… (Severalnines)
Proxies are building blocks of HA setups for MySQL & MariaDB. They can detect failed nodes and route queries to hosts that are still available. If your master fails and you have to promote one of your slaves, proxies will detect the topology change and route your traffic accordingly. More advanced proxies can do much more: route traffic based on precise query rules, cache queries, or mirror them. They can even be used to implement different types of sharding.
Introducing ProxySQL!
In this joint webinar with ProxySQL’s creator, René Cannaò, we discuss this new proxy and its key features. We show you how you can deploy ProxySQL using ClusterControl. And we give you an early walk-through of some of the exciting ClusterControl features for ProxySQL that we have planned for its next release.
AGENDA
1. Introduction
2. ProxySQL concepts (René Cannaò)
- Hostgroups
- Query rules
- Connection multiplexing
- Configuration management
3. Demo of ProxySQL setup in ClusterControl (Krzysztof Książek)
4. Upcoming ClusterControl features for ProxySQL
SPEAKERS
René Cannaò, Creator & Founder, ProxySQL. René has 10 years of experience as a System, Network, and Database Administrator, mainly on Linux/Unix platforms. In the last 4-5 years his experience has focused mainly on MySQL, working as a Senior MySQL Support Engineer at Sun/Oracle and then as a Senior Operational DBA at Blackbird (formerly PalominoDB). In this period he built an analytical, problem-solving mindset, and he is always eager to take on new challenges, especially those related to high performance. And then he created ProxySQL…
Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.
Key external invitees will each give a 10-minute lightning talk about their company, their interest in ARM servers, and any requirements for porting their software solutions to 64-bit ARM platforms.
Video: https://www.youtube.com/watch?v=XWxrVM1i7gA&list=UUIVqQKxCyQLJS6xvSmfndLA
Mitigating Security Threats with Fastly - Joe Williams at Fastly Altitude 2015 (Fastly)
Fastly Altitude - June 25, 2015. Joe Williams, Computer Operator at GitHub discusses using a CDN to mitigate security threats.
Video of the talk: http://fastly.us/Altitude2015_Mitigating-Security-Threats-2
Joe's bio: Joe Williams is a Computer Operator at GitHub, and joined their infrastructure team in August 2013. Joe's passion for distributed systems, queuing theory and automation help keep the lights on. When not behind a computer you can generally find him riding a bicycle around Marin, CA.
This document discusses caching strategies for Rails applications, including:
1. Using Rails caching for queries, pages, assets, and fragments to improve performance.
2. Configuring Cache-Control headers, compression, and CDNs like Fastly for efficient caching.
3. Techniques for caching dynamic content at the edge using surrogate keys and purging cached responses.
In this webinar we discuss new features in NGINX Plus R15, which includes support for gRPC, HTTP/2 Server Push, enhanced clustering, and OpenID Connect SSO integration.
Watch this webinar to learn:
- About new HTTP/2 enhancements: gRPC and HTTP/2 server push support
- About new state sharing and clustering support in NGINX Plus, with support for Sticky Learn session persistence
- How to integrate with Okta, OneLogin, and other identity providers to provide single sign on (SSO) for your applications
- How to initiate subrequests with the NGINX JavaScript module, new variables, and other great new enhancements in this release
https://www.nginx.com/resources/webinars/whats-new-nginx-plus-r15/
These slides show how to reduce website latency and bandwidth use for an improved user experience, covering networking, compression, caching, ETags, application optimisation, Sphinx search, Memcached, and database optimisation.
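Compression, browser caching, and ETags can all be enabled from a short server block. This is a generic sketch, not the configuration from the slides; the document root and asset types are assumptions:

```nginx
# Compress text responses over a minimum size.
gzip on;
gzip_types text/css application/javascript application/json;
gzip_min_length 1024;          # skip tiny responses where gzip overhead dominates

server {
    listen 80;
    root /var/www/html;

    # Let browsers cache static assets and revalidate with If-None-Match.
    location ~* \.(css|js|png|jpg|svg)$ {
        expires 30d;           # sets Cache-Control: max-age and Expires
        etag on;               # enables conditional-request revalidation
    }
}
```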
Load Balancing Applications with NGINX in a CoreOS Cluster (Kevin Jones)
The document discusses load balancing applications with NGINX in a CoreOS cluster. It provides an overview of using CoreOS, etcd, and fleet to deploy and manage containers across a cluster: etcd handles service discovery, tracking dynamic IP addresses and endpoints, while fleet acts as an application scheduler, deploying units and rebalancing load. NGINX then serves as a software load balancer distributing traffic to the backend services. The document demonstrates setting up this environment end to end with CoreOS, etcd, fleet, and NGINX.
Less and faster – Cache tips for WordPress developers (Seravo)
Otto Kekäläinen, the code-loving CEO of Seravo, held a webinar on May 12, 2020, focused on caching: what should a WordPress developer know, and which best practices should they follow?
The document provides tips and tricks for optimizing website performance. It discusses using PHP-FPM or HHVM as faster alternatives to running PHP as an Apache module. Nginx is recommended as a lightweight web server that can serve static files and pass dynamic requests to PHP faster. Caching with Nginx, Memcached, and browser caching can significantly improve performance. Load balancing upstream servers and monitoring tools are also discussed.
I will be giving a brief overview of the history of NGINX along with an overview of the features and functionality in the project as it stands today. I will give some real use case of example of how NGINX can be used to solve problems and eliminate complexity within infrastructure. I will then dive into the future of the modern web and how NGINX is monitoring and leveraging industry changes to enhance the product for individuals and companies in the industry.
Test rate limits in dry-run mode and monitor NGINX Plus using advanced metrics with NGINX Plus R19.
On-Demand Link:
https://www.nginx.com/resources/webinars/whats-new-nginx-plus-r19/
Watch this webinar to learn:
- How to monitor your NGINX Plus ecosystem with fine-grained insights using advanced metrics
- About dynamically blacklisting IP address ranges in the key-value store
- How to apply different bandwidth limits based on attributes of incoming traffic
- About testing rate limits in dry-run mode
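The dry-run and bandwidth-limit features listed above can be sketched as follows. This is an illustrative fragment (the zone, the media-file pattern, and the backend address are assumptions); limit_req_dry_run and variable support in limit_rate require NGINX 1.17.1 and 1.17.0 or later respectively:

```nginx
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=5r/s;

# Map an attribute of the request (here, the URI) to a bandwidth limit.
map $uri $rate_limit {
    ~*\.(mp4|mkv)$  500k;   # throttle large media downloads
    default         0;      # 0 = unlimited
}

server {
    listen 80;

    location / {
        limit_req zone=per_ip burst=10;
        limit_req_dry_run on;      # log violations without rejecting requests
        limit_rate $rate_limit;    # per-connection bandwidth limit
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Dry-run mode lets you observe in the logs which clients would be limited before enforcing the limit in production.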
The document discusses the internals and architecture of the Nginx web server. It covers Nginx's event-driven and non-blocking architecture, its use of memory pools and data structures like radix trees, how it processes HTTP requests through different phases, and how modules and extensions can be developed for Nginx. The document also provides an overview of Nginx's configuration, caching, and load balancing capabilities.
How to make a high-quality Node.js app, Nikita Galkin (Sigma Software)
This document discusses how to build high quality Node.js applications. It covers attributes of quality like understandability, modifiability, portability, reliability, efficiency, usability, and testability. For each attribute, it provides examples of what could go wrong and best practices to achieve that attribute, such as using dependency injection for modifiability, environment variables for portability, and graceful shutdown for reliability. It also discusses Node.js programming paradigms like callbacks, promises, and async/await and recommends best practices for testing Node.js applications.
This document discusses using NGINX to deliver high performance applications through efficient caching. It explains that NGINX can be used as a web server, load balancer, and high availability content cache to provide low latency, scalability, availability and reduced costs. Specific NGINX caching configurations like proxy_cache, proxy_cache_valid and proxy_cache_background_update are described. Microcaching optimizations with NGINX are also covered, showing significant performance improvements over Apache+WordPress and a reverse proxy only setup.
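The microcaching setup described above caches responses for a very short time so that repeated requests within that window never reach the backend. This is a generic sketch of the directives named in the summary, with assumed paths and timings:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=micro:10m;

server {
    listen 80;

    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;           # "micro" TTL: cache for one second
        proxy_cache_use_stale updating;     # serve stale while a refresh runs
        proxy_cache_background_update on;   # refresh via a background subrequest
        proxy_cache_lock on;                # collapse concurrent cache misses
        proxy_pass http://127.0.0.1:8080;   # assumed application backend
    }
}
```

Even a one-second TTL can absorb almost all traffic for a hot page under load, which is where the large gains over an uncached Apache+WordPress setup come from.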
Where is my cache? Architectural patterns for caching microservices by example (Rafał Leszko)
The document discusses various architectural patterns for caching microservices, including embedded caching, embedded distributed caching, client-server caching, cloud caching, sidecar caching, reverse proxy caching, and reverse proxy sidecar caching. It provides examples and descriptions of each pattern, discussing pros and cons. The presentation concludes with a summary matrix comparing the different caching patterns based on factors like whether they are application-aware, support containers, are language-agnostic, support large amounts of data, have security restrictions, and can be deployed to the cloud.
The document discusses configuring Nginx and PHP-FPM for high performance websites. Some key points:
- Nginx is a lightweight and fast HTTP server that is well-suited for high traffic loads. It can be used as a web server, reverse proxy, load balancer, and more.
- PHP-FPM (PHP FastCGI Process Manager) runs PHP processes as a pool that is separate from the web server for better isolation and performance. Nginx communicates with PHP-FPM via FastCGI.
- Benchmark results show Nginx performing better than Apache, especially under high concurrency loads. Caching with Nginx and Memcached can further improve performance.
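The Nginx-to-PHP-FPM handoff over FastCGI described above can be sketched in a minimal server block. The document root and socket path are assumptions; adjust them to your PHP-FPM pool configuration:

```nginx
server {
    listen 80;
    root  /var/www/html;
    index index.php;

    # Serve static files directly; fall back to the front controller.
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Hand .php requests to the PHP-FPM pool over FastCGI.
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  unix:/run/php/php-fpm.sock;   # assumed socket path
    }
}
```

Keeping PHP in a separate process pool means a slow script cannot tie up Nginx's event loop, which is what preserves static-file performance under load.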
NGINX is used by more than 130 million websites as a lightweight way to serve web content. Use it to decrease costs, improve performance and open up bottlenecks in web and application server environments without a major architectural overhaul. In this talk, we'll cover the three most basic use cases of static content delivery, application load balancing, and web proxying with caching; and touch on the NGINX maintained Docker container.
Andrew Betts, Web Developer at The Financial Times, at Fastly Altitude 2016
Running custom code at the Edge using a standard language is one of the biggest advantages of working with Fastly’s CDN. Andrew gives you a tour of all the problems the Financial Times and Nikkei solve in VCL and how their solutions work.
In this webinar we help you get started using NGINX, the de facto web server for building modern applications. We cover best practices for installing, configuring, and troubleshooting both NGINX Open Source and the enterprise-grade NGINX Plus.
https://www.nginx.com/resources/webinars/nginx-basics-best-practices-emea-2/
Learn how to load balance your applications following best practices with NGINX and NGINX Plus.
On-Demand Recording: https://www.nginx.com/resources/webinars/high-performance-load-balancing/
Join this webinar to learn:
* How to configure basic HTTP load balancing features
* The essential elements of load balancing: session persistence, health checks, and SSL termination
* How to load balance MySQL, DNS, and other common TCP/UDP applications
* How to have NGINX Plus automatically discover new service instances in an auto-scaling or microservices environment
About the webinar
You’ve built a great application and it’s gaining in popularity. Or maybe you already have a hardware load balancer and you’re looking to replace it with a software solution. In this webinar we’ll share the latest information on how to scale out and load balance your applications with NGINX and NGINX Plus.
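The essentials listed above (session persistence, SSL termination, and TCP load balancing) can be sketched as two fragments. These are illustrative, with assumed addresses and certificate paths; the stream block lives at the top level of nginx.conf, outside the http context:

```nginx
upstream app {
    ip_hash;                        # simple session persistence by client IP
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;                 # SSL termination at the load balancer
    ssl_certificate     /etc/nginx/cert.pem;   # assumed paths
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        proxy_pass http://app;      # plain HTTP to the backends
    }
}

# TCP load balancing (e.g. MySQL) uses the stream module, top-level context.
stream {
    upstream mysql {
        server 10.0.0.21:3306;
        server 10.0.0.22:3306;
    }
    server {
        listen 3306;
        proxy_pass mysql;
    }
}
```

Active health checks and service discovery via DNS or an API are NGINX Plus features; the passive max_fails checks shown here are available in open source NGINX.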
You’re ready to make your applications more responsive, scalable, fast and secure. Then it’s time to get started with NGINX. In this webinar, you will learn how to install NGINX from a package or from source onto a Linux host. We’ll then look at some common operating system tunings you could make to ensure your NGINX install is ready for prime time.
View full webinar on demand at http://nginx.com/resources/webinars/installing-tuning-nginx/
This document provides an overview of ProxySQL, a high performance proxy for MySQL. It discusses ProxySQL's main features such as query routing, caching, load balancing, and high availability capabilities including seamless failover. The document also describes ProxySQL's internal architecture including modules for queries processing, user authentication, hostgroup management, and more. Examples are given showing how hostgroups can be used for read/write splitting and replication topologies.
This document discusses socket programming and network programming concepts like TCP and UDP. It provides examples of using Netcat and Python for sockets. It also summarizes the architecture of Nginx and Openresty, a framework that embeds Lua in Nginx allowing full web applications to run within the Nginx process for high performance and scalability. Openresty allows accessing and modifying requests and responses with Lua scripts.
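Embedding Lua in Nginx as the summary describes can be sketched with OpenResty's *_by_lua_block directives. This is a minimal illustrative fragment; the ports, paths, and header name are assumptions:

```nginx
server {
    listen 8080;

    # Generate a full response from Lua, inside the Nginx worker process.
    location /hello {
        content_by_lua_block {
            ngx.say("Hello from Lua inside NGINX")
        }
    }

    # Inspect and modify the request before proxying it upstream.
    location /api {
        rewrite_by_lua_block {
            ngx.req.set_header("X-Request-Start", tostring(ngx.now()))
        }
        proxy_pass http://127.0.0.1:9090;
    }
}
```

Because the Lua code runs on Nginx's non-blocking event loop, it can handle high concurrency without spawning threads or external processes.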
ITB2017 - Nginx Effective High Availability Content Caching
1. NGINX, Inc. 2017
Using NGINX as an Effective and Highly Available Content Cache
Kevin Jones
Technical Solutions Architect
@webopsx
2. • Quick intro to…
• NGINX
• Content Caching
• Caching with NGINX
• How caching functionality works
• How to enable basic caching
• Advanced caching with NGINX
• How to increase availability using caching
• When and how to enable micro-caching
• How to fine tune the cache
• How to architect for high availability
• Various configuration tips and tricks!
• Various examples!
2
Agenda
4. MORE INFORMATION AT NGINX.COM
Solves complexity…
Web Server • Reverse Proxy • Load Balancer • Content Cache • Streaming Media Server
20. 20
1. Client initiates request (e.g. GET /file)
2. Proxy Cache determines if the response is already cached; if not, NGINX fetches it from the origin server
3. Origin Server serves the response along with all cache control headers (e.g. Cache-Control, ETag, etc.)
4. Proxy Cache caches the response and serves it to the client
21. 21
Cache Headers
• Cache-Control - used to specify directives for caching mechanisms in both requests and responses. (e.g. Cache-Control: max-age=600 or Cache-Control: no-cache)
• Expires - contains the date/time after which the response is considered stale. If there is a Cache-Control header with the "max-age" or "s-maxage" directive in the response, the Expires header is ignored. (e.g. Expires: Wed, 21 Oct 2015 07:28:00 GMT)
• Last-Modified - contains the date and time at which the origin server believes the resource was last modified. HTTP dates are always expressed in GMT, never in local time. Less accurate than the ETag header. (e.g. Last-Modified: Wed, 21 Oct 2015 07:28:00 GMT)
• ETag - an identifier (or fingerprint) for a specific version of a resource. (e.g. ETag: "58efdcd0-268")
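As an illustrative sketch (not from the deck), an origin server running NGINX could emit these headers itself; the location and lifetime below are assumptions:

```nginx
# Hypothetical origin-side configuration emitting cache headers
location /images/ {
    # Sets both an Expires header and Cache-Control: max-age=600
    expires 600s;
    # ETag (and Last-Modified) are emitted automatically for static files
    etag on;
}
```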
23. 23
proxy_cache_path
Syntax: proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on|off] [purger_files=number] [purger_sleep=time] [purger_threshold=time];
Default: -
Context: http
Documentation
http {
  proxy_cache_path /tmp/nginx/micro_cache/ levels=1:2 keys_zone=large_cache:10m max_size=300g inactive=14d;
  ...
}
Definition: Sets the path and other parameters of a cache. Cache data are stored in files. The file name in a cache is the result of applying the MD5 function to the cache key.
25. 25
proxy_cache
Syntax: proxy_cache zone | off;
Default: proxy_cache off;
Context: http, server, location
Documentation
location ^~ /api {
  ...
  proxy_cache large_cache;
}
Definition: Defines a shared memory zone used for caching. The same zone can be used in several places.
28. 28
Client → NGINX Cache → Origin Server
Cache Memory Zone (shared across workers)
1. HTTP Request: GET /images/hawaii.jpg
Cache Key: http://origin/images/hawaii.jpg
md5 hash: 51b740d1ab03f287d46da45202c84945
2. NGINX checks if the hash exists in memory. If it does not, the request is passed to the origin server.
3. Origin server responds.
4. NGINX caches the response to disk and places the hash in memory.
5. Response is served to the client.
29. 29
NGINX Processes
# ps aux | grep nginx
root 14559 0.0 0.1 53308 3360 ? Ss Apr12 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 27880 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process
nginx 27881 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process
nginx 27882 0.0 0.1 53472 2876 ? S 00:06 0:00 nginx: cache manager process
nginx 27883 0.0 0.1 53472 2552 ? S 00:06 0:00 nginx: cache loader process
• Cache Manager - activated periodically to check the state of the cache. If the cache size exceeds the limit set by the max_size parameter of the proxy_cache_path directive, the cache manager removes the data that was accessed least recently.
• Cache Loader - runs only once, right after NGINX starts. It loads metadata about
previously cached data into the shared memory zone.
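The cache manager's behavior can be tuned through the manager_* parameters of proxy_cache_path. The values below are illustrative assumptions, not recommendations from the deck:

```nginx
http {
    # Hypothetical tuning: delete at most 200 files per iteration,
    # pause 100ms between iterations, and cap each iteration at 50ms
    proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=tuned_cache:10m
                     max_size=10g inactive=7d
                     manager_files=200 manager_sleep=100ms manager_threshold=50ms;
}
```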
30. 30
Caching is not just for HTTP
HTTP
FastCGI
UWSGI
SCGI
Memcache
Tip: NGINX can also be used to cache other backends using their unique cache directives. (e.g. fastcgi_cache,
uwsgi_cache and scgi_cache)
Alternatively, NGINX can also be used to retrieve content directly from a memcached server.
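A hedged sketch of the same caching pattern for a FastCGI backend (zone names, paths, and validity times here are assumptions):

```nginx
http {
    # Analogous to proxy_cache_path, but for FastCGI responses
    fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2 keys_zone=fcgi_cache:10m;

    server {
        listen 80;
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            # Cache PHP responses just like proxy_cache would
            fastcgi_cache fcgi_cache;
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 10m;
        }
    }
}
```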
32. 32
log_format main 'rid="$request_id" pck="$scheme://$proxy_host$request_uri" '
'ucs="$upstream_cache_status" '
'site="$server_name" server="$host" dest_port="$server_port" '
'dest_ip="$server_addr" src="$remote_addr" src_ip="$realip_remote_addr" '
'user="$remote_user" time_local="$time_local" protocol="$server_protocol" '
'status="$status" bytes_out="$bytes_sent" '
'bytes_in="$upstream_bytes_received" http_referer="$http_referer" '
'http_user_agent="$http_user_agent" nginx_version="$nginx_version" '
'http_x_forwarded_for="$http_x_forwarded_for" '
'http_x_header="$http_x_header" uri_query="$query_string" uri_path="$uri" '
'http_method="$request_method" response_time="$upstream_response_time" '
'cookie="$http_cookie" request_time="$request_time" ';
Logging is your friend…
Tip: The more relevant information in your log, the better. When troubleshooting you can easily add the proxy cache key to the log_format for debugging. For a list of all variables see the "Alphabetical index of variables" on nginx.org.
33. 33
server {
...
# add HTTP response headers
add_header CC-X-Request-ID $request_id;
add_header X-Cache-Status $upstream_cache_status;
}
Adding response headers…
Tip: Using the add_header directive you can add useful HTTP response headers allowing you to debug
your NGINX deployment rather easily.
34. 34
Cache Status
• MISS – The response was not found in the cache and so was fetched from an origin server. The response
might then have been cached.
• BYPASS – The response was fetched from the origin server instead of served from the cache because the
request matched a proxy_cache_bypass directive. The response might then have been cached.
• EXPIRED – The entry in the cache has expired. The response contains fresh content from the origin
server.
• STALE – The content is stale because the origin server is not responding correctly, and
proxy_cache_use_stale was configured.
• UPDATING – The content is stale because the entry is currently being updated in response to a previous
request, and proxy_cache_use_stale updating is configured.
• REVALIDATED – The proxy_cache_revalidate directive was enabled and NGINX verified that the current
cached content was still valid (ETag, If‑Modified‑Since or If‑None‑Match).
• HIT – The response contains valid, fresh content direct from the cache.
35. 35
# curl -I 127.0.0.1/images/hawaii.jpg
HTTP/1.1 200 OK
Server: nginx/1.11.10
Date: Wed, 19 Apr 2017 22:20:53 GMT
Content-Type: image/jpeg
Content-Length: 21542868
Connection: keep-alive
Last-Modified: Thu, 13 Apr 2017 20:55:07 GMT
ETag: "58efe5ab-148b7d4"
OS-X-Request-ID: 1e7ae2cf83732e8859bc3e38df912ed1
CC-X-Request-ID: d4a5f7a8d25544b1409c351a22f42960
X-Cache-Status: HIT
Accept-Ranges: bytes
Using cURL to Debug…
Tip: Use cURL or Chrome developer tools to grab the request ID or other various headers useful for
debugging.
36. 36
# grep -ri d4a5f7a8d25544b1409c351a22f42960 /var/log/nginx/adv_access.log
rid="d4a5f7a8d25544b1409c351a22f42960" pck="http://origin/images/hawaii.jpg"
site="webopsx.com" server="localhost” dest_port="80" dest_ip=“127.0.0.1" ...
# echo -n "http://origin/images/hawaii.jpg" | md5sum
51b740d1ab03f287d46da45202c84945 -
# tree /tmp/nginx/micro_cache/5/94/
/tmp/nginx/micro_cache/5/94/
└── 51b740d1ab03f287d46da45202c84945
0 directories, 1 file
Troubleshooting the Proxy Cache
Tip: A quick and easy way to determine the hash of your cache key is to pipe the key through md5sum using echo -n.
39. 39
Types of Content
Static Content (easy to cache)
• Images
• CSS
• Simple HTML
Dynamic Content (micro-cacheable!)
• Blog Posts
• Status
• API Data (Maybe?)
User Content (cannot cache)
• Shopping Cart
• Unique Data
• Account Data
Documentation
40. 40
http {
upstream backend {
keepalive 20;
server 127.0.0.1:8080;
}
proxy_cache_path /var/nginx/micro_cache levels=1:2 keys_zone=micro_cache:10m
max_size=100m inactive=600s;
...
server {
listen 80;
...
proxy_cache micro_cache;
proxy_cache_valid any 1s;
location / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Accept-Encoding "";
proxy_pass http://backend;
}
}
}
Enable keepalives on upstream
Set proxy_cache_valid to any
status with a 1 second value
Set required HTTP version and
pass HTTP headers for keepalives
Set short inactive parameter
41. 41
proxy_cache_lock
Documentation
Syntax: proxy_cache_lock on | off;
Default: proxy_cache_lock off;
Context: http, server, location
Definition: When enabled, only one request at a time will be allowed to populate a new cache element identified according to the proxy_cache_key directive by passing a request to a proxied server. Other requests of the same cache element will either wait for a response to appear in the cache or the cache lock for this element to be released, up to the time set by the proxy_cache_lock_timeout directive.
Related: See the following for tuning…
• proxy_cache_lock_age
• proxy_cache_lock_timeout
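A minimal sketch of the lock with its tuning directives (the location, zone name, and timeouts are assumptions):

```nginx
location /reports {
    proxy_cache large_cache;
    # Only one request at a time populates a missing cache entry
    proxy_cache_lock on;
    # Waiting requests give up and go to the origin after 5s
    proxy_cache_lock_timeout 5s;
    # If the populating request hasn't finished in 10s,
    # one more request may be passed to the origin
    proxy_cache_lock_age 10s;
    proxy_pass http://backend;
}
```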
42. 42
proxy_cache_use_stale
Documentation
location /contact-us {
...
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
}
Syntax: proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | http_429 | off ...;
Default: proxy_cache_use_stale off;
Context: http, server, location
Definition: Determines in which cases a stale cached response can be used during communication with the proxied server.
45. 45
proxy_cache_revalidate
Documentation
Syntax: proxy_cache_revalidate on | off;
Default: proxy_cache_revalidate off;
Context: http, server, location
Definition: Enables revalidation of expired cache items using conditional GET requests with the "If-Modified-Since" and "If-None-Match" header fields.
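A hedged sketch combining revalidation with serving stale content while an entry is being refreshed (location and zone names are assumptions):

```nginx
location /static {
    proxy_cache large_cache;
    # Re-fetch expired items with conditional GETs; a 304 from the
    # origin refreshes the entry without re-downloading the body
    proxy_cache_revalidate on;
    # Serve stale content while an update for the entry is in flight
    proxy_cache_use_stale updating;
    proxy_pass http://backend;
}
```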
46. 46
proxy_cache_min_uses
Documentation
location ~* /legacy {
...
proxy_cache_min_uses 5;
}
Syntax: proxy_cache_min_uses number;
Default: proxy_cache_min_uses 1;
Context: http, server, location
Definition: Sets the number of requests after which the response will be cached. This helps with disk utilization and the hit ratio of your cache.
47. 47
proxy_cache_methods
Documentation
location ~* /data {
...
proxy_cache_methods GET HEAD POST;
}
Syntax: proxy_cache_methods GET | HEAD | POST …;
Default: proxy_cache_methods GET HEAD;
Context: http, server, location
Definition: NGINX only caches GET and HEAD request methods by default. Using this directive you can add additional methods. If you plan to add additional methods, consider updating the cache key to include the $request_method variable if the response will differ depending on the request method.
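The cache-key adjustment mentioned above might look like this sketch (the key composition and zone name are assumptions):

```nginx
location ~* /data {
    proxy_cache large_cache;
    proxy_cache_methods GET HEAD POST;
    # Include the method so POST and GET responses for the same URI
    # are stored as separate cache entries
    proxy_cache_key "$request_method$scheme$proxy_host$request_uri";
    proxy_pass http://backend;
}
```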
48. 48
proxy_buffering
Documentation
Syntax: proxy_buffering on | off;
Default: proxy_buffering on;
Context: http, server, location
Definition: Enables or disables buffering of responses from the proxied server. When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives. If the whole response does not fit into memory, a part of it can be saved to a temporary file on disk. When buffering is disabled, the response is passed to a client synchronously, immediately as it is received.
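Note that the proxy cache only functions when buffering is enabled. A minimal sizing sketch (all sizes below are illustrative assumptions, not tuning advice from the deck):

```nginx
location / {
    proxy_buffering on;
    # Buffer for the first part of the response (response headers)
    proxy_buffer_size 8k;
    # Up to 16 buffers of 8k each for the response body
    proxy_buffers 16 8k;
    proxy_pass http://backend;
}
```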
49. 49
location ^~ /wordpress {
...
proxy_cache cache;
proxy_ignore_headers Cache-Control;
}
Override Cache-Control headers
Tip: By default NGINX honors all Cache-Control headers from the origin server, and in turn does not cache responses with Cache-Control set to private, no-cache, or no-store, or with Set-Cookie in the response header.
Using proxy_ignore_headers you can disable processing of certain response header fields from the proxied server.
50. 50
location / {
...
proxy_cache cache;
proxy_cache_bypass $cookie_nocache $arg_nocache $http_cache_bypass;
}
Can I Punch Through the Cache?
Tip: If you want to disregard the cache and go straight to the origin for a response, you can use the proxy_cache_bypass directive.
51. 51
proxy_cache_purge
Documentation
Syntax: proxy_cache_purge string ...;
Default: -
Context: http, server, location
Definition: Defines conditions under which the request will be considered a cache purge request. If at least one value of the string parameters is not empty and is not equal to "0" then the cache entry with a corresponding cache key is removed. The result of a successful operation is indicated by returning the 204 (No Content) response.
Note: NGINX Plus only feature
52. 52
proxy_cache_path /tmp/cache keys_zone=mycache:10m levels=1:2 inactive=60s;
map $request_method $purge_method {
PURGE 1;
default 0;
}
server {
listen 80;
server_name www.example.com;
location / {
proxy_pass http://localhost:8002;
proxy_cache mycache;
proxy_cache_purge $purge_method;
}
}
Example Cache Purge Configuration
Tip: Using NGINX Plus, you can issue unique request methods to invalidate the cache.
56. 56
http {
proxy_cache_path /tmp/mycache keys_zone=mycache:10m;
server {
listen 80;
proxy_cache mycache;
slice 1m;
proxy_cache_key $host$uri$is_args$args$slice_range;
proxy_set_header Range $slice_range;
proxy_http_version 1.1;
proxy_cache_valid 200 206 1h;
location / {
proxy_pass http://origin.example.com;
}
}
}
Byte-Range Caching with slice
Tip: The configuration above uses the slice directive to cache large files in byte-range segments, each stored independently under a key that includes $slice_range. To split the cache across HDDs instead, the split_clients directive can hash a variable of your choice and, based on that hash, dynamically set a new variable that can be used elsewhere in the configuration.
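A hedged sketch of splitting the cache across two disks with split_clients (paths, percentages, and zone names are assumptions):

```nginx
# Hash the request URI; half of all keys map to each cache zone
split_clients $request_uri $cache_zone {
    50% "disk1_cache";
    *   "disk2_cache";
}

# One cache path per physical disk
proxy_cache_path /mnt/disk1/cache keys_zone=disk1_cache:10m levels=1:2;
proxy_cache_path /mnt/disk2/cache keys_zone=disk2_cache:10m levels=1:2;

server {
    listen 80;
    location / {
        # proxy_cache accepts a variable (NGINX 1.7.9+)
        proxy_cache $cache_zone;
        proxy_pass http://origin.example.com;
    }
}
```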
59. 59
Shared Cache Clustering
Tip: If your primary goal is to achieve high availability while minimizing load on the origin servers, this scenario
provides a highly available shared cache.
60. 60
And Failover…
Tip: In the event of a failover there is no loss in cache and the origin does not suffer unneeded proxy requests.
61. 61
Sharding your Cache
Tip: If your primary goal is to create a very high‑capacity cache, shard (partition) your cache across multiple servers. This maximizes the resources you have while minimizing the impact on your origin servers, in proportion to the number of cache servers in your cache tier.
62. 62
upstream cache_servers {
hash $scheme$proxy_host$request_uri consistent;
server cache1.example.com;
server cache2.example.com;
server cache3.example.com;
server cache4.example.com;
}
Hash Load Balancing
Tip: Using the hash load balancing algorithm, we can specify the proxy cache key. This allows each resource to
be cached on only one backend server.
63. 63
Combined Load Balancer and Cache
Tip: Alternatively, it is possible to consolidate the load balancer and cache tier into one using various NGINX directives and parameters.
64. 64
Multi-Tier with “Hot Cache”
Tip: If needed, a “Hot Cache Tier” can be enabled on the load balancer layer, giving you the same high-capacity cache while providing high availability for specific cached resources.
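A hedged sketch of a first-tier load balancer that keeps its own small, short-lived "hot" cache in front of a sharded cache tier (all names, sizes, and times are assumptions):

```nginx
# Small, short-lived cache on the load balancer absorbs hot keys
proxy_cache_path /tmp/hot_cache keys_zone=hot_cache:10m max_size=1g inactive=30s;

upstream cache_tier {
    # Consistent hashing keeps each resource on one cache server
    hash $scheme$proxy_host$request_uri consistent;
    server cache1.example.com;
    server cache2.example.com;
}

server {
    listen 80;
    location / {
        proxy_cache hot_cache;
        # Very short validity: the sharded tier remains authoritative
        proxy_cache_valid 200 10s;
        proxy_pass http://cache_tier;
    }
}
```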