The document discusses key aspects of resource loading and prioritization on the web, including: 1. The HTML parser halts at non-async scripts until any preceding CSS has downloaded and the script itself has been fetched, parsed, and executed; it does not pause for CSS or images on their own. 2. Resources can only be loaded once they have been discovered by the parser or by layout; optimal ordering gives render-blocking and parser-blocking resources the full bandwidth first. 3. HTTP/2 allows resources from a single domain to be prioritized against each other, while priority hints and preloading help prioritize cross-domain assets.
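A minimal sketch of the preload and priority hints mentioned in point 3, assuming an invented asset URL and selector:

```ts
// Preload lets the browser discover a critical asset before the parser does.
// The URL is illustrative.
const hint = document.createElement("link");
hint.rel = "preload";
hint.as = "image";
hint.href = "https://cdn.example.com/hero.jpg";
document.head.appendChild(hint);

// Priority hints nudge scheduling for already-discovered resources
// (supported in Chromium-based browsers; other engines ignore the attribute).
document.querySelector("img.hero")?.setAttribute("fetchpriority", "high");
```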
Optimizing web page performance involves minimizing round trips, request sizes, and payload sizes. This includes leveraging browser caching, combining and minifying assets, applying gzip compression, and optimizing images. Developer tools can identify optimization opportunities like unused resources and suggest techniques for faster loading and rendering.
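A minimal Node/TypeScript sketch of two of these techniques, gzip compression and long-lived browser caching; the path and max-age are illustrative choices, and a real server would check the Accept-Encoding request header first:

```ts
import { createServer } from "node:http";
import { createGzip } from "node:zlib";
import { createReadStream } from "node:fs";

createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "application/javascript",
    "Content-Encoding": "gzip", // smaller payload per round trip
    "Cache-Control": "public, max-age=31536000, immutable", // browser caching
  });
  createReadStream("./dist/app.min.js").pipe(createGzip()).pipe(res);
}).listen(8080);
```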
The document summarizes load balancing at Tuenti using HAProxy. It describes how Tuenti moved from Linux boxes running LVS and ldirectord to HAProxy for its improved layer 7 capabilities. The new setup terminates SSL on four HAProxy load balancers, behind which sit more than 500 frontend servers. HAProxy provides health checks, persistence, content routing, monitoring, and other advanced load balancing features.
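The document's actual HAProxy configuration is not shown; as a toy stand-in, here is the same idea of active health checks plus round-robin over healthy backends, sketched in Node/TypeScript with invented backend addresses and a hypothetical /health path:

```ts
import http from "node:http";

const backends = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"];
const healthy = new Set(backends);
let rr = 0;

// Active health check, roughly what HAProxy's httpchk option does.
setInterval(() => {
  for (const b of backends) {
    http.get(`${b}/health`, (res) => {
      res.statusCode === 200 ? healthy.add(b) : healthy.delete(b);
      res.resume(); // discard the body, free the socket
    }).on("error", () => healthy.delete(b));
  }
}, 2000);

http.createServer((req, res) => {
  const pool = [...healthy];
  if (pool.length === 0) {
    res.writeHead(503).end("no healthy backends");
    return;
  }
  const target = pool[rr++ % pool.length]; // round-robin over healthy backends
  const upstream = http.request(
    target + (req.url ?? "/"),
    { method: req.method, headers: req.headers },
    (ur) => {
      res.writeHead(ur.statusCode ?? 502, ur.headers);
      ur.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(80);
```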
The need to scale is in high demand in an age where everything is moving to the cloud. A standard Apache configuration can handle a website with moderate traffic, but the minute it gets slashdotted or tweeted a few times, that could spell an embarrassing crash landing! If you are the administrator of such a website, good luck finding another job! If, on the other hand, you value high availability in the midst of popularity, read on. In this one-day workshop, we will show you how to scale your website and web apps to handle thousands of simultaneous sessions the right way. The topics covered will include: - Setting up Apache and NGiNX - Setting up a sample LAMP web app - Benchmarking Apache performance - Fine-tuning Apache to improve performance - Fine-tuning NGiNX to improve performance - Discussion of code-level improvements when developing custom web apps in PHP
AJAX allows web pages to be updated asynchronously by exchanging data with a web server behind the scenes, so that parts of a page can change without reloading the entire page. Tuenti uses AJAX extensively to update parts of their single-page application, caching content on both client and server sides for scalability. They route requests to different server farms based on client location and cache content to improve performance. Tuenti serves billions of images per day using multiple CDNs and pre-fetches content to minimize load times.
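The core AJAX pattern described here, fetching a fragment and swapping it into the page without a full reload, in a minimal sketch; the endpoint and element ID are illustrative, not Tuenti's actual API:

```ts
async function refreshPanel(): Promise<void> {
  const res = await fetch("/api/feed?partial=1", {
    headers: { Accept: "text/html" },
  });
  const html = await res.text();
  // Update only this region; the rest of the page stays untouched.
  const panel = document.querySelector("#feed-panel");
  if (panel) panel.innerHTML = html;
}

void refreshPanel();
```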
My talk at ScaleConf 2017 in Cape Town on some tips and tactics for scaling WordPress, with reference to WordPress.com and the container-based VIP Go platform. Video of my talk is here: https://www.youtube.com/watch?v=cs0DcY80spw
This document discusses caching strategies and techniques. It covers when and what to cache, including entire pages, page fragments, and data. It also discusses different caching mechanisms like file system, database, and in-memory caching and their pros and cons. It provides guidance on managing cache expiration policies and invalidating cached content.
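A minimal in-memory cache with the expiration and invalidation policies the document discusses; the key names and TTL are illustrative:

```ts
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  set(key: string, value: V, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: treat as a miss
      return undefined;
    }
    return entry.value;
  }

  invalidate(key: string): void {
    this.store.delete(key); // e.g. called when the underlying data changes
  }
}

const cache = new TtlCache<string>();
cache.set("fragment:/home", "<div>…</div>", 60_000); // cache a page fragment for 1 minute
```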
The document discusses various database-related performance problems and their solutions. It describes issues like lock contention, missing indexes, slow queries, and the "SELECT N+1" problem. It provides examples of how to reduce lock contention using algorithms like Hi/Lo and updating asynchronously. It also discusses database connection management and transaction isolation levels. Payment and URL shortener systems are used as examples to illustrate strategies for improving database performance.
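A sketch of the Hi/Lo idea mentioned above: reserve a block of IDs with one database round trip, then hand out IDs locally, avoiding contention on a hot sequence row. `fetchNextHi` is a hypothetical stand-in for the real query:

```ts
const BLOCK_SIZE = 100;

async function fetchNextHi(): Promise<number> {
  // Stand-in for something like:
  //   UPDATE hi_value SET next_hi = next_hi + 1 RETURNING next_hi;
  return 42; // placeholder for the real database call
}

function makeIdGenerator() {
  let hi = -1;
  let lo = BLOCK_SIZE; // force a fetch on first use
  return async function nextId(): Promise<number> {
    if (lo >= BLOCK_SIZE) {
      hi = await fetchNextHi(); // one round trip per 100 IDs, not per row
      lo = 0;
    }
    return hi * BLOCK_SIZE + lo++;
  };
}

const nextId = makeIdGenerator();
```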
This document discusses synchronous and asynchronous execution in web servers. Synchronous execution means processes wait for one another to complete before starting the next task, while asynchronous processes can occur simultaneously without dependencies. The document then covers Apache and Nginx web servers. Apache uses multiple processing modules (MPMs) that allow synchronous or hybrid processing models. Nginx uses an asynchronous and non-blocking event-driven model for high performance and scalability. Key differences between the two include how they handle modules, static/dynamic content, and client connections.
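A minimal illustration of the two execution models in Node/TypeScript; the file names are invented:

```ts
import { readFileSync } from "node:fs";
import { readFile } from "node:fs/promises";

async function main(): Promise<void> {
  // Synchronous: the process blocks until this read completes and can
  // serve nothing else in the meantime (roughly Apache's prefork model).
  const config = readFileSync("config.json", "utf8");

  // Asynchronous: both reads are in flight at once while the event loop
  // stays free, the model Nginx (and Node) use to multiplex many clients.
  const [a, b] = await Promise.all([
    readFile("a.txt", "utf8"),
    readFile("b.txt", "utf8"),
  ]);
  console.log(config.length, a.length, b.length);
}

main();
```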
Integrating content delivery networks into your application infrastructure can offer many benefits, including major performance improvements for your applications. So understanding how CDNs perform — especially for your specific use cases — is vital. However, testing and measurement are complicated and nuanced, and often end in metric overload and confusion. It's becoming increasingly important to understand measurement techniques, what they're telling you, and how to apply them to your actual content. In this session, we'll examine the challenges around measuring CDN performance and focus on the different methods for measurement. We'll discuss what to measure, important metrics to focus on, and different ways that numbers may mislead you. More specifically, we'll cover:
- Different techniques for measuring CDN performance
- Differentiating between network footprint and object delivery performance
- Choosing the right content to test
- Core metrics to focus on and how each impacts real traffic
- Understanding cache hit ratio, why it can be misleading, and how to measure for it
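On the last bullet, a hedged sketch of estimating hit ratio by sampling responses and reading a cache-status header. The "x-cache" header and its values are a common convention, not a standard; the exact header varies by CDN, which is one of the ways this metric can mislead. Assumes Node 18+ for the global fetch:

```ts
async function sampleHitRatio(url: string, samples: number): Promise<number> {
  let hits = 0;
  for (let i = 0; i < samples; i++) {
    const res = await fetch(url, { cache: "no-store" });
    if (/hit/i.test(res.headers.get("x-cache") ?? "")) hits++;
  }
  return hits / samples;
}

sampleHitRatio("https://cdn.example.com/asset.js", 20).then((r) =>
  console.log(`hit ratio: ${(r * 100).toFixed(0)}%`)
);
```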
Periodic refresh and multi-stage download are design patterns for updating content. Periodic refresh checks the server at regular intervals for new information and notifies users. Multi-stage download loads basic functionality initially and additional components in the background over time to improve the user experience for both fast and slow connections. Examples include ESPN scoreboards, Gmail notifications, and Microsoft Start.com.
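The two patterns in a minimal sketch; the endpoint, interval, and module path are illustrative, and `render` stands in for application code:

```ts
function render(data: unknown): void {
  console.log("scores:", data); // stand-in for real UI updates
}

// Periodic refresh: poll the server on a fixed interval and update the UI.
setInterval(async () => {
  const res = await fetch("/api/scores");
  if (res.ok) render(await res.json());
}, 30_000);

// Multi-stage download: ship the basics first, pull heavier modules later,
// in the background after the page has loaded.
window.addEventListener("load", () => {
  void import("./heavy-widgets.js");
});
```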
This document provides an overview of Windows Azure Service Bus including:
- How it provides brokered messaging with queues and topics as well as relays for synchronous communication.
- How it uses AMQP 1.0 as a messaging protocol and supports multiple languages and platforms.
- How Notification Hubs can be used to send push notifications to multiple client platforms.
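A hedged sketch of sending one brokered message to a queue, using the current JavaScript SDK (which postdates the AMQP-era document above); the connection string and queue name are placeholders:

```ts
import { ServiceBusClient } from "@azure/service-bus";

async function main(): Promise<void> {
  const client = new ServiceBusClient("<connection-string>");
  const sender = client.createSender("orders"); // a queue or topic name

  await sender.sendMessages({ body: { orderId: 123 } }); // one brokered message

  await sender.close();
  await client.close();
}

main();
```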
Siddharth Vijayakrishnan discusses how web servers work and compares Apache to other web servers like Lighttpd. He explains that while Apache is popular, its multi-process model does not scale well under heavy loads. Lighttpd uses an event-driven model and single process design that allows it to outperform Apache in benchmarks. It has gained popularity as a faster alternative to Apache for serving dynamic content. The document also outlines future areas of improvement for Lighttpd.
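Lighttpd itself is written in C; as a stand-in, here is the same single-process, event-driven model sketched with Node/TypeScript, where one process multiplexes all connections instead of dedicating a process to each client as Apache's prefork model does:

```ts
import { createServer } from "node:http";

createServer((req, res) => {
  res.end("hello\n"); // each request is a short event, never a blocked process
}).listen(8080);
```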
This document provides an overview of techniques for building scalable and high performance websites, including definitions of scalability, approaches to avoiding failure, load balancing, caching, and tools for analyzing website speed such as YSlow and PageSpeed. Specific techniques discussed include horizontal and vertical scalability, monitoring, release cycles, fault tolerance, static content delivery, memcached, and APC caching.
Apache and Nginx are the two most popular open source web servers. While they share many qualities, they have key differences that make each better suited for certain situations. Apache excels at running PHP applications without external software. It also works well in shared hosting environments. However, Nginx is more efficient at serving static content and scaling to handle high concurrency loads. Many choose to run Nginx as a reverse proxy in front of Apache to take advantage of both servers' strengths.
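A toy version of that reverse-proxy split in Node/TypeScript: static assets served directly, everything else proxied to a dynamic upstream standing in for Apache. The ports, paths, and extension list are illustrative:

```ts
import { createServer, request } from "node:http";
import { createReadStream } from "node:fs";
import { join } from "node:path";

createServer((req, res) => {
  const url = req.url ?? "/";
  if (/\.(css|js|png|jpe?g|svg)$/.test(url)) {
    // Static: handled here, cheaply.
    createReadStream(join("./static", url))
      .on("error", () => res.writeHead(404).end())
      .pipe(res);
  } else {
    // Dynamic: passed through to the app server on port 8081.
    const upstream = request(
      { port: 8081, path: url, method: req.method, headers: req.headers },
      (ur) => {
        res.writeHead(ur.statusCode ?? 502, ur.headers);
        ur.pipe(res);
      }
    );
    req.pipe(upstream);
  }
}).listen(8080);
```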
Puru Hemnani - ICF Interactive. The session will go over the advantages of CDNs in general and Akamai caching in particular. Akamai is one of the most commonly used caching options with AEM, and several clients use it. There are several features and Akamai tuning options, such as error caching, GeoRouting, ESI, Site Shield, and WAF, that can help developers and system engineers make sites faster and more secure. Configuring it correctly can also reduce AEM licensing requirements as well as infrastructure costs, since you can serve a much higher volume of traffic with fewer origin servers.
According to the specification of HTTP, which is at the heart of all things web, a client must first request or “pull” information from the server, and the server can only issue responses. It is never the other way around, with the server initiating the communication and “pushing” the data as it becomes available. Overcoming this limitation, actually an old and historical problem, would have remarkable applications, benefiting almost every page on the web to various degrees, and significantly enhancing the user experience. And the best part is: you can do it all right now, on any average server environment, and have it work on any standard browser! The modern, Web 2.0 -inspired collection of these solutions, design principles, and techniques for this “server push technology” is sometimes referred to as “Comet.” I will discuss in detail: the numerous uses and benefits of Comet; the problems and difficulties that developers have to face; the variously accepted solution strategies that exist today, including polling, long polling, and streaming; their subcategories and specific implementations, advantages, disadvantages, and compatibility nuances; how HTML5 offers to address the issue; as well as an outline of some original research on the topic. Finally, I will illustrate these concepts and ideas through the live coding of a simple, Comet-based application using the help of a PHP framework with rich Comet support.
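Long polling, one of the Comet strategies listed, in a minimal client-side sketch: each request is held open by the server until data exists, then immediately reissued. The endpoint is illustrative and `handleEvent` stands in for application logic:

```ts
function handleEvent(e: unknown): void {
  console.log("event:", e); // stand-in for real handling
}

async function longPoll(): Promise<void> {
  for (;;) {
    try {
      const res = await fetch("/events"); // server holds this open until data arrives
      if (res.ok) handleEvent(await res.json());
    } catch {
      await new Promise((r) => setTimeout(r, 1000)); // back off on failure
    }
  }
}

void longPoll();
```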
Fastly VP of Technology Hooman Beheshti gives a keynote on The Future of CDNs at Software Practice Advancement Conference 2015. More resources: http://spaconference.org/spa2015/uploads/resources/SPA%202015%20KEYNOTE%20AND%20DIVERSIONS.pdf
An alphabetical tour of digital media landscape terminology, covering concepts from Ajax to Usability. Designed for training of journalists entering the digital media landscape.
The document discusses optimizing the critical rendering path (CRP) of a web page. The CRP refers to the steps between receiving HTML, CSS, and JavaScript and rendering pixels on the screen. These steps include parsing HTML to build the DOM tree, parsing CSS to build the CSSOM tree, combining them into a render tree, running layout to compute geometry, and painting to the screen. Optimizing the CRP means minimizing the time spent in these steps. Some tips include getting CSS to the client fast, eliminating blocking JavaScript from the CRP, and focusing on above-the-fold content. Tools like critical CSS extraction can help optimize the CRP.
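One way to see where CRP time actually goes is the standard Paint Timing API, observed from page code; a minimal sketch:

```ts
// Logs entries such as "first-paint" and "first-contentful-paint" with
// their timestamps, which reflect how quickly the CRP completed.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, Math.round(entry.startTime), "ms");
  }
}).observe({ type: "paint", buffered: true });
```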
This document provides an overview of front-end web development. It discusses how the internet works using a client-server model and how websites are structured using HTML, CSS, and JavaScript. HTML provides structure, CSS handles styling, and JavaScript adds interactivity. The document also covers HTML tags, CSS selectors and properties, and using <div> and <span> tags. It concludes with mentioning a portfolio website project and learnings.
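The division of labor in one tiny example: the HTML supplies the elements, CSS styles them, and this script adds the interactivity. The element IDs are invented for illustration:

```ts
const button = document.querySelector<HTMLButtonElement>("#greet");
button?.addEventListener("click", () => {
  const output = document.querySelector("#output");
  if (output) output.textContent = "Hello from JavaScript!";
});
```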
Session at ConFoo Montreal 2019 on the latest tips and tricks for achieving the best Web Performance on sites and apps.
My talking points for the presentation on optimization of modern web applications. It is a huge topic, and I concentrated mostly on technical aspects of it.
This document discusses how free and open-source tools can be used to improve metadata quality and workflow efficiency within a digital asset management (DAM) system. It provides examples of tools like Exiftool, ffmpeg, and ImageMagick that can be used for tasks like metadata extraction, validation, automation, and preflighting. Scripting is presented as a way to integrate these tools with most DAMs. Customizing metadata workflows in Adobe Creative Suite applications is also covered.
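A sketch of scripting one of these tools into a DAM workflow: exiftool's real -json flag emits machine-readable metadata that a script can preflight before ingest. The file path and the required field are illustrative choices:

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function preflight(path: string): Promise<void> {
  const { stdout } = await run("exiftool", ["-json", path]);
  const [meta] = JSON.parse(stdout); // exiftool emits an array of objects
  if (!meta.Creator) {
    console.warn(`${path}: preflight failed, missing Creator`); // flag the asset
  }
}

preflight("asset.jpg");
```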
Topics covered:
1. Generating a new Remix project
2. Conventional files
3. Routes (including the nested variety)
4. Styling
5. Database interactions (via SQLite and Prisma)
6. Mutations, Validation, and Authentication
7. Error handling
8. SEO with Meta Tags
and much more
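A hedged sketch of the route and loader conventions listed above, based on the public Remix v2 API; the route, model, and fields are invented:

```tsx
// app/routes/jokes.$jokeId.tsx (hypothetical route module)
import { json, type LoaderFunctionArgs } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";

export async function loader({ params }: LoaderFunctionArgs) {
  // In a real app this would come from the database, e.g. via Prisma.
  return json({ title: `Joke #${params.jokeId}` });
}

export default function JokeRoute() {
  const { title } = useLoaderData<typeof loader>();
  return <h1>{title}</h1>;
}
```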
The document discusses techniques for high resolution images on the web, including adaptive images, the srcset attribute, the <picture> element, and browser scaling. It provides examples of client-side and server-side solutions for serving adaptive images, such as libraries and services. Guidelines are given for when to use techniques like SVG, icon fonts, and media queries to control images. The document concludes that bandwidth will limit downloading of high resolution images over slower networks, and advises trusting cellular optimizations.
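Building the srcset/sizes markup described above from one list of widths, so the breakpoints stay in one place; the URLs and sizes are illustrative:

```ts
const widths = [480, 960, 1920];
const srcset = widths.map((w) => `/img/photo-${w}.jpg ${w}w`).join(", ");
const markup = `<img src="/img/photo-960.jpg"
  srcset="${srcset}"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="High-resolution example">`;
document.querySelector("#gallery")?.insertAdjacentHTML("beforeend", markup);
```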
The O'Reilly Velocity Conference Europe was held in London from 13th to 15th November 2013. A few days later I shared my notes with my fellow webspeeders at the Web Performance Barcelona Meetup. These are the slides I used.
This document provides practical strategies for improving front-end performance of websites. It discusses specific techniques like making fewer HTTP requests by combining files, leveraging browser caching with far-future expires headers, gzipping components, using CSS sprites, and deploying assets on a content delivery network. It also summarizes key rules from tools like YSlow and PageSpeed for optimizing front-end performance.
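"Make fewer HTTP requests" in its simplest form: concatenate scripts into one bundle at build time. The file names are illustrative, and a real build would also minify the result:

```ts
import { readFileSync, writeFileSync } from "node:fs";

const files = ["jquery.js", "plugins.js", "app.js"];
const bundle = files
  .map((f) => readFileSync(`src/${f}`, "utf8"))
  .join(";\n"); // the semicolon guards against files lacking a trailing one
writeFileSync("dist/bundle.js", bundle);
```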
Progressive downloads and rendering allow content to be delivered and displayed to the user incrementally to improve perceived performance. JavaScript should be placed at the bottom of the page to avoid blocking. CSS can block rendering so should also be delivered non-blocking when possible. Techniques like flushing output, non-blocking scripts, and data URIs can help deliver content progressively. MHTML and preloading can help optimize delivery across multiple HTTP requests. The overall goal is to start displaying content as soon as possible while content continues downloading in the background.
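The flushing technique mentioned above, in a minimal Node/TypeScript sketch: send the head of the page immediately so the browser can start fetching CSS while the slow part of the response is still being produced (simulated here with a timer):

```ts
import { createServer } from "node:http";

createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });
  // Flush early: the browser can begin downloading app.css right away.
  res.write('<html><head><link rel="stylesheet" href="/app.css"></head><body>');
  setTimeout(() => {
    res.end("<p>Slow content arrives later.</p></body></html>");
  }, 500);
}).listen(8080);
```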
This document discusses techniques for progressively downloading and rendering web pages to improve performance and user experience. It covers topics like preventing blocking JavaScript and CSS downloads, using techniques like deferred and async scripts, inline CSS, and flushing to start rendering sooner. It also discusses using data URIs to reduce HTTP requests by inlining images and other assets. Formats like MHTML and chunked encoding are presented as ways to progressively deliver content across browsers. The goal is to start outputting content as fast as possible while downloading remaining assets in the background.
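Inlining a small image as a data URI to save one HTTP request, as described above; the file name is illustrative. This trades cacheability for fewer round trips, so it suits small assets best:

```ts
import { readFileSync } from "node:fs";

const png = readFileSync("icon.png");
const dataUri = `data:image/png;base64,${png.toString("base64")}`;
const tag = `<img src="${dataUri}" alt="icon">`; // no separate request for the image
console.log(tag.slice(0, 60), "…");
```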
This document discusses using Chrome DevTools to debug web applications. It provides an overview of the DevTools interface and highlights some of its key features, including the Elements, Network, Sources, Timeline, Profiles, Resources, Audits, and Console panels. It demonstrates how to use these panels to debug issues like performance problems. The document also shares some cool DevTools tricks and discusses unexpected Chrome behaviors developers should be aware of.
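Two Console-panel tricks of the kind such decks typically demonstrate, shown as a small sketch with a stand-in function:

```ts
function renderEverything(): void {
  /* stand-in for real work */
}

console.time("render"); // pairs with console.timeEnd to measure a span
renderEverything();
console.timeEnd("render"); // logs something like: render: 12.3 ms

// console.table renders arrays of objects as a sortable table:
console.table([
  { route: "/", ms: 120 },
  { route: "/feed", ms: 340 },
]);
```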