This document provides an overview of how to build a scriptable cache for mobile and desktop applications. It discusses:
- The benefits of a scriptable cache, such as improved performance and the ability to implement advanced optimizations.
- A six-step process for building a basic scriptable cache using localStorage and dynamically loading and storing resources.
- Additional techniques like handling errors, tracking cache state and size, and implementing an LRU cache.
The document is intended to introduce the concept of a scriptable cache but notes that implementing one is not trivial and requires modifications to HTML and resources. Pseudocode is provided but may have errors and not cover all cases.
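The LRU technique mentioned above is easiest to see in a small sketch. The original deck's pseudocode targets JavaScript and localStorage; here is a minimal, illustrative LRU cache in Python using `collections.OrderedDict`, capacity counted in entries rather than bytes:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache sketch."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)      # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # touch "a" so "b" becomes least recently used
cache.put("c", 3)     # over capacity: evicts "b"
```

A real scriptable cache would also track entry sizes against the storage quota, but the eviction order shown here is the core of the technique.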
Building Hybrid data cluster using PostgreSQL and MongoDB
This document describes building a hybrid data cluster with MongoDB and PostgreSQL. It discusses using PostgreSQL's Foreign Data Wrapper (FDW) to allow PostgreSQL to query and join data stored in MongoDB collections. The document provides steps to set up a sharded MongoDB cluster, install the MongoDB FDW extension in PostgreSQL, and create foreign tables in PostgreSQL that map to MongoDB collections to allow complex SQL queries on MongoDB data. Live demonstrations are provided of inserting, updating, querying data across the hybrid cluster.
Memcached is a high-performance, distributed memory caching system that is used to speed up dynamic web applications by caching objects in memory to reduce database load. It works by storing objects in memory to allow for fast retrieval, improving response times significantly. Major companies that use memcached include Facebook, Yahoo, Amazon, and LiveJournal. It provides features like consistent hashing for object distribution, multithreading, and replication.
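The consistent-hashing feature mentioned above maps each key deterministically to one server, so that adding or removing a server remaps only a small share of keys. A minimal sketch (server names are hypothetical; real memcached clients use tuned ring implementations):

```python
import hashlib
from bisect import bisect

class HashRing:
    """Toy consistent-hash ring: each server owns many points on a circle,
    and a key belongs to the first server point at or after the key's hash."""

    def __init__(self, servers, replicas=100):
        # Multiple points ("virtual nodes") per server smooth the distribution.
        self.ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(replicas)
        )
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        # Wrap around the circle with the modulo.
        idx = bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

servers = ["cache-1:11211", "cache-2:11211", "cache-3:11211"]
ring = HashRing(servers)
owner = ring.server_for("user:42")  # deterministic placement
```

With a plain `hash(key) % n` scheme, removing one server would remap almost every key; the ring limits the churn to the keys the departed server owned.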
This document discusses how bookmarklets can function as applications by interacting with web pages in a secure manner. It describes how the bookmarklet uses elementFromPoint for fast hit detection, resets CSS to robustly render its UI, and transmits data to a server through signed cross-domain POST messages for security. Examples of embedding the bookmarklet code on a page and customizing its appearance are also provided.
PostgreSQL connections at scale was the presentation given by our external speaker at our 8th open-source database meetup. The presentation helps you understand the cost of database connections, gauge the need for a connection pooler, and get an overview of PgBouncer with its features, monitoring, and deployment best practices.
These slides show how to reduce latency on websites and reduce bandwidth for improved user experience.
Covering network, compression, caching, etags, application optimisation, sphinxsearch, memcache, db optimisation
The document discusses various techniques for optimizing web performance, ranging from beginner to advanced levels. At the beginner level, it recommends avoiding redirects, enabling client-side caching, and reducing DOM elements. At the medium level, it suggests minifying JavaScript and CSS. More advanced techniques include image compression, combining files, and server-side gzip compression. The document also provides optimization tips for databases like MongoDB and recommends using asynchronous and non-blocking I/O for costly operations. It advocates for client-side templating to reduce bandwidth usage and improve cacheability.
Building Lightning Fast Websites (for Twin Cities .NET User Group)
1. A website is loaded by a browser through a multi-step process involving DNS lookups, TCP connections, downloading resources like HTML, CSS, JS, and images. This process can be slow due to the number of individual requests and dependencies between resources.
2. Ways to optimize the loading process include making the server fast, inlining critical resources, gzip compression, an optimized caching strategy, optimizing file delivery through techniques like CDNs and HTTP/2, bundling resources, optimizing images, avoiding unnecessary domains, minimizing web fonts, and JavaScript techniques like PJAX. Minifying assets can also speed up loading.
This document discusses web performance optimization and provides guidance on ensuring high performance web applications. It covers why performance is important, key performance metrics to measure, common areas to profile like client and server-side processing, requirements for performance testing like goals and load thresholds, and tools for performance testing and profiling like JMeter, dotTrace and SQL Server Profiler. The document also outlines best practices for integrating performance testing into the development workflow when issues are found or time allows before a release.
This document discusses various techniques for optimizing frontend performance, including:
1. Using hardware, backend, and frontend optimizations like combined and minified files, CSS sprites, browser caching headers, and content delivery networks.
2. Analyzing performance with tools like Firebug, YSlow, and Google Page Speed to identify opportunities.
3. Specific techniques like gzipping, avoiding redirects, placing scripts at the bottom, and making Ajax cacheable can improve performance.
MongoDB stores data in files on disk that are broken into variable-sized extents containing documents. These extents, as well as separate index structures, are memory-mapped by the operating system for efficient reads and writes. A write-ahead journal provides durability and prevents data corruption after crashes by logging operations before they are written to the data files. Journaling reduces write performance by roughly 5-30%, an overhead that can be mitigated by placing the journal on a separate drive. Data fragmentation over time can be addressed with the compact command or by adjusting the schema.
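The write-ahead discipline described here is generic and worth seeing in miniature. A toy Python sketch (the file path and record format are illustrative, not MongoDB's actual on-disk format): every operation is made durable in the journal before the live data is touched, so a crash between the two steps is repaired by replay.

```python
import json
import os
import tempfile

class TinyStore:
    """Toy key-value store with a write-ahead journal."""

    def __init__(self, journal_path):
        self.journal_path = journal_path
        self.data = {}
        self._replay()

    def _replay(self):
        # Recover state by re-applying every journaled operation in order.
        if os.path.exists(self.journal_path):
            with open(self.journal_path) as f:
                for line in f:
                    op = json.loads(line)
                    self.data[op["k"]] = op["v"]

    def put(self, key, value):
        # 1) Durably journal the operation ...
        with open(self.journal_path, "a") as f:
            f.write(json.dumps({"k": key, "v": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2) ... only then apply it to the live data.
        self.data[key] = value

path = os.path.join(tempfile.mkdtemp(), "journal.log")
s1 = TinyStore(path)
s1.put("x", 1)
s2 = TinyStore(path)  # simulated restart: state rebuilt from the journal
```

The `fsync` on the journal is what makes the journal, not the data file, the source of truth; putting that file on a separate drive is exactly the optimization the summary mentions.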
The document discusses methods for collecting multi-channel data across different domains and platforms. It describes how cross-domain tracking works using shared cookies to pass visitor IDs between sites. It also covers using iframes to collect data across domains by adding tracking code or using postMessages. Additionally, it discusses collecting event-based data using selectors and JavaScript events, and capturing video playback events with HTML5 event listeners.
Nginx is a web server that is faster, uses less memory and is more stable than Apache under load. It is better suited for Rails applications and cloud computing. Nginx acts as a proxy, routing requests to application servers. It can perform request filtering, like caching requests, and authentication checks without modifying Rails application code using custom Nginx modules. This allows separating infrastructure concerns from application logic.
Reverse proxy & web cache with NGINX, HAProxy and Varnish
Discover the wide world of web servers: beyond basic content delivery, we will cover reverse proxying, resource caching, and load balancing.
Nginx and Apache HTTPD will be used as web servers and reverse proxies, and to illustrate some caching features we will also present Varnish, a powerful caching server.
To introduce load balancers, we will compare Nginx and HAProxy.
This document provides tips and tricks for optimizing SSIS packages, including documenting code, establishing naming conventions, leveraging community tasks and components, configuring Visual Studio settings, designing data flows, handling errors, executing tasks in parallel, tuning data flows and queries, optimizing bulk inserts, managing buffer sizes, and monitoring packages. Key recommendations include breaking solutions into logical units, selecting the right SQL technologies, determining data volumes and locations, reusing code through templates, and dropping indexes or batching updates to improve performance.
Building the Enterprise infrastructure with PostgreSQL as the basis for stori...
In my talk, I will describe how we built a geographically distributed system for personal data storage based on Open Source software and PostgreSQL. The concept of the inCountry business is to provide customers with a ready-to-use infrastructure for personal data storage. Our business customers are assured that their customers' personal data is securely stored within their country's borders. We wrote an API and SDK and built a variety of services. Our system complies with generally accepted security standards (SOC Type 1, Type 2, PCI DSS, etc.). We built our infrastructure with Consul, Nomad, and Vault; used PostgreSQL and ElasticSearch as storage systems; and used Nginx, Jenkins, Artifactory, and other tools to automate management and deployment. We have assembled our development and management teams: DevOps, Security, Monitoring, and DBA. We use both cloud providers and bare-metal servers located in different regions of the world. Developing the system architecture and ensuring the stability of the infrastructure and the consistent, secure operation of all its components is the main task facing our teams.
This document provides a practical guide to caching data with Zend Server. It introduces the Zend Data Cache API and shows how to cache the results of a function that retrieves recent blog posts from a database. The function is modified to first check the cache for the results before querying the database. If no results are found in the cache, it queries the database and stores the results in the cache. By caching frequently accessed data, significant performance improvements can be achieved by reducing database queries. The document also discusses best practices for caching, such as profiling applications to identify bottlenecks and determining appropriate cache lifetimes based on how often data changes.
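The check-then-store pattern described above (often called cache-aside) is independent of Zend Server. A minimal Python sketch, where `fetch_recent_posts` is a hypothetical stand-in for the real database query:

```python
import time

_cache = {}  # key -> (expiry_timestamp, value)

def cached(key, ttl, compute):
    """Cache-aside: return the cached value if present and fresh,
    otherwise compute it, store it with a lifetime, and return it."""
    now = time.time()
    hit = _cache.get(key)
    if hit is not None and hit[0] > now:
        return hit[1]               # cache hit: no backend query
    value = compute()               # cache miss: hit the backend ...
    _cache[key] = (now + ttl, value)  # ... and remember the result
    return value

calls = []
def fetch_recent_posts():           # stands in for the database query
    calls.append(1)
    return ["post-1", "post-2"]

posts = cached("recent_posts", ttl=300, compute=fetch_recent_posts)
again = cached("recent_posts", ttl=300, compute=fetch_recent_posts)
```

The TTL is where the "how often does this data change?" judgment from the document lands: a 300-second lifetime means at most one database query per key per five minutes.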
Performance Optimization using Caching | Swatantra Kumar
This document discusses various caching techniques that can be used to improve performance optimization. It defines caching as temporarily storing frequently accessed data for rapid access. The main reasons for using caching are to reduce database queries, external service requests, computation time, and filesystem access in order to lighten server load and send less data. Techniques covered include full page, partial page, SQL query, processing result, pre-generation, web service response, and browser caching. The document also discusses different storage options for caching like MySQL query cache, disk storage, Memcache, and Redis and emphasizes the importance of defining unique cache keys.
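On the point about unique cache keys: a common approach is to derive the key from a namespace, the operation name, and a digest of the arguments, so that distinct calls can never collide. An illustrative sketch (the naming scheme is an assumption, not from the deck):

```python
import hashlib
import json

def make_cache_key(namespace, func_name, args, kwargs):
    """Build a unique, deterministic cache key from a call signature."""
    # sort_keys makes the digest stable regardless of kwarg order.
    payload = json.dumps([args, kwargs], sort_keys=True, default=str)
    digest = hashlib.sha256(payload.encode()).hexdigest()[:16]
    return f"{namespace}:{func_name}:{digest}"

key1 = make_cache_key("app", "get_user", (42,), {"fields": ["name"]})
key2 = make_cache_key("app", "get_user", (42,), {"fields": ["name"]})
key3 = make_cache_key("app", "get_user", (43,), {"fields": ["name"]})
```

Hashing the arguments also keeps keys within the length limits of stores like Memcached, whatever the argument values are.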
This document discusses different types of caching in ASP.NET, including output caching, data caching, object caching, class caching, and configuration caching. Output caching stores rendered HTML pages in memory to return cached copies to subsequent requests rather than regenerating pages. Data caching stores data from data sources in memory to fulfill future requests from the cache rather than accessing the data source again. Object caching stores objects like data-bound controls in server memory. Class caching caches compiled web pages or services in server memory. Configuration caching stores application configuration information in server memory.
This document discusses approaches for storing data on the client side beyond a page refresh without transmitting it to the server. It reviews the history of cookies, Flash cookies, Gears, and other approaches. It then summarizes modern approaches like Application Cache, Web Storage, Web SQL Database, IndexedDB, and the File API which allow persistent local storage on the client. It concludes with tips for using these storage options and libraries to help manage offline data.
In today’s systems, the time it takes to bring data to the end user can be very long, especially under heavy load. An application can often increase performance by using an appropriate caching system. There are many caching levels you can use in an application today: CDN, in-memory/local cache, distributed cache, output cache, browser cache, and HTML cache.
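These levels are typically layered, with the fastest tier checked first. A minimal sketch of a two-tier lookup, with plain dicts standing in for a per-process cache and a distributed store such as Redis or Memcached:

```python
class TwoTierCache:
    """Check a fast local (in-process) tier first, then the slower
    'distributed' tier; promote distributed hits into the local tier."""

    def __init__(self, local, distributed):
        self.local = local              # e.g. a per-process dict
        self.distributed = distributed  # stands in for Redis/Memcached

    def get(self, key):
        if key in self.local:
            return self.local[key]
        if key in self.distributed:
            value = self.distributed[key]
            self.local[key] = value     # promote for the next lookup
            return value
        return None

    def put(self, key, value):
        self.distributed[key] = value
        self.local[key] = value

local, shared = {}, {}
cache = TwoTierCache(local, shared)
shared["greeting"] = "hello"   # value already in the distributed tier
v = cache.get("greeting")      # miss locally, hit remotely, promote
```

A real implementation would add TTLs per tier and an invalidation path, since the local tier can otherwise serve stale data after the distributed tier changes.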
Caching is an important technique for improving performance on the web. It allows frequently requested resources like documents, images, and scripts to be stored locally for faster retrieval. Caching can occur at various levels including in the browser, network through CDNs, on servers through tools like Nginx, and within applications using memoization and cache stores. Defining appropriate caching policies and strategies using HTTP headers is key to an efficient caching implementation.
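The HTTP-header side of such a policy can be sketched as a conditional-GET check: the server tags each response with an ETag and answers 304 Not Modified when the client's copy is still current. An illustrative sketch (the ETag derivation here is an assumption, not a standard format):

```python
import hashlib

def etag_for(body: bytes) -> str:
    # Content-derived validator; any stable fingerprint works.
    return '"' + hashlib.sha1(body).hexdigest()[:16] + '"'

def respond(body: bytes, if_none_match=None, max_age=3600):
    """Return (status, headers, body) for a cacheable resource,
    honoring the client's If-None-Match revalidation header."""
    tag = etag_for(body)
    headers = {"ETag": tag, "Cache-Control": f"public, max-age={max_age}"}
    if if_none_match == tag:
        return 304, headers, b""   # client may reuse its cached copy
    return 200, headers, body

status1, headers1, body1 = respond(b"<html>hi</html>")
# Client revalidates with the ETag it received earlier:
status2, _, body2 = respond(b"<html>hi</html>",
                            if_none_match=headers1["ETag"])
```

`max-age` controls how long the browser skips the request entirely; the ETag makes the eventual revalidation cheap because an unchanged body costs no transfer.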
Abhishek Sinha is a senior product manager at Amazon for Amazon EMR. Amazon EMR allows customers to easily run data frameworks like Hadoop, Spark, and Presto on AWS. It provides a managed platform and tools to launch clusters in minutes that leverage the elasticity of AWS. Customers can customize clusters and choose from different applications, instance types, and access methods. Amazon EMR allows separating compute and storage, where low-cost S3 can be used for persistent storage while clusters are dynamically scaled based on workload.
The Proto-Burst Buffer: Experience with the flash-based file system on SDSC's...
Glenn K. Lockwood's document summarizes his professional background and experience with data-intensive computing systems. It then discusses the Gordon supercomputer deployed at SDSC in 2012, which was one of the world's first systems to use flash storage. The document analyzes Gordon's architecture using burst buffers and SSDs, experiences using the flash file system, and lessons learned. It also compares Gordon's proto-burst buffer approach to the dedicated burst buffer nodes on the Cori supercomputer.
10 Things I Wish I'd Known Before Using Spark in Production
Have you recently started working with Spark, and do your jobs take forever to finish? This presentation is for you.
Himanshu Arora and Nitya Nand YADAV have gathered numerous best practices, optimizations, and adjustments that they have applied over years of production use to make their jobs faster and less resource-hungry.
In this presentation, they teach us advanced Spark optimization techniques, data serialization formats, storage formats, hardware optimizations, control over parallelism, resource manager settings, better data locality, GC optimization, and more.
They also show us the appropriate use of RDD, DataFrame, and Dataset so as to benefit fully from Spark's internal optimizations.
This document discusses distributed caching and its benefits. It provides examples of how caching is used in browsers and for user login requests. It then discusses how companies like Facebook and Naver implement distributed caching at large scale using Memcached to improve performance and scalability. The key points are:
1) Caching stores data to serve future requests faster. It is commonly used in browsers, databases for login requests, and at companies like Facebook and Naver.
2) Facebook used Memcached to cache content and improved performance, serving over 1 billion users per day with 800 Memcached servers and 1,800 MySQL servers.
3) Naver uses a distributed file system with multiple cheap servers and disks instead of a single expensive storage server.
Introduction to memcached, a caching service designed for optimizing performance and scaling in the web stack, seen from the perspective of MySQL/PHP users. Given to second-year students of the professional bachelor in ICT at Kaho St. Lieven, Gent.
Did you know that 80% to 90% of the user's page-load time comes from components outside the firewall? Optimizing performance on the front end (i.e., on the client side) can enhance the user experience by reducing the response times of your web pages and making them load and render much faster.
phptek13 - Caching and tuning fun tutorial (Wim Godden)
This document discusses caching and tuning techniques to improve scalability for web applications. It begins with an introduction and background on caching. It then covers different caching techniques, including caching entire pages, parts of pages, SQL queries, and complex PHP results. It discusses various caching storage options, such as the MySQL query cache, memory tables, opcode caching with APC, disk, memory disk, and Memcache, with notes on each. The document provides code examples for using Memcache and discusses caching strategies such as updating cached data, cache stampedes, and cache-warming scripts. It also covers performance benchmarks and moving to Nginx with PHP-FPM. The overall goal of the techniques discussed is to increase the reliability, performance, and scalability of a web application.
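The cache-stampede problem the deck mentions (many workers recomputing the same expired entry at once) is often handled with a per-key lock plus serving stale data to the callers who lose the race. A simplified single-process sketch of that idea:

```python
import threading
import time

_cache = {}        # key -> (expiry_timestamp, value)
_locks = {}        # key -> lock that serializes recomputation
_locks_guard = threading.Lock()

def get_with_stampede_guard(key, ttl, compute):
    """On expiry, only one caller recomputes; concurrent callers that
    lose the race serve the stale value instead of hitting the backend."""
    now = time.time()
    entry = _cache.get(key)
    if entry and entry[0] > now:
        return entry[1]                    # fresh hit
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    if lock.acquire(blocking=False):       # winner recomputes
        try:
            value = compute()
            _cache[key] = (time.time() + ttl, value)
            return value
        finally:
            lock.release()
    if entry:                              # loser: serve stale data
        return entry[1]
    with lock:                             # nothing stale: wait for winner
        return _cache[key][1]

calls = []
def expensive():                           # stands in for the slow backend
    calls.append(1)
    return "result"

a = get_with_stampede_guard("k", 60, expensive)
b = get_with_stampede_guard("k", 60, expensive)  # served from cache
```

Distributed variants use the same shape with a lock key in Memcached or Redis instead of a thread lock; cache-warming scripts attack the same problem from the other end by refilling entries before they expire.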
High Availability Content Caching with NGINX (NGINX, Inc.)
On-Demand Recording:
https://www.nginx.com/resources/webinars/high-availability-content-caching-nginx/
You trust NGINX to be your web server, but did you know it’s also a high-performance content cache? In fact, the world’s most popular CDNs – CloudFlare, MaxCDN, and Level 3 among them – are built on top of the open source NGINX software.
NGINX content caching can drastically improve the performance of your applications. We’ll start with basic configuration, then move on to advanced concepts and best practices for architecting high availability and capacity in your application infrastructure.
Join this webinar to:
* Enable content caching with the key configuration directives
* Use micro caching with NGINX Plus to cache dynamic content while maintaining low CPU utilization
* Partition your cache across multiple servers for high availability and increased capacity
* Log transactions and troubleshoot your NGINX content cache
This document provides an overview of the Apache Spark framework. It discusses how Spark allows distributed processing of large datasets across computer clusters using simple programming models. It also describes how Spark can scale from single servers to thousands of machines. Spark is designed to provide high availability by detecting and handling failures at the application layer. The document also summarizes Resilient Distributed Datasets (RDDs), which are Spark's fundamental data abstraction, and transformations and actions that can be performed on RDDs.
Freezer is an OpenStack backup and restore service that allows users to automate backup processes. It includes components like an API, scheduler, and agent. The scheduler retrieves backup jobs from the API and executes them via the agent. Freezer supports different backup types including file system, Cinder volume, and MySQL backups. It can store backups in OpenStack Swift, locally, or remotely using SSH. Backups can be restored locally or by recreating volumes/instances in Cinder/Nova.
CHI provides a standard interface and implementation for caching in Perl modules. It aims to improve on existing solutions like Cache::Cache by offering better performance and extensibility. CHI allows modules to easily implement caching by requesting a handle to any backend cache. It also provides a common place to implement generic caching features. Current supported backends include memory, file, memcached, and BerkeleyDB caches. Driver development is simplified through a skeleton interface.
Caching and tuning fun for high scalabilityWim Godden
Caching has been a 'hot' topic for a few years. But caching takes more than merely taking data and putting it in a cache : the right caching techniques can improve performance and reduce load significantly. But we'll also look at some major pitfalls, showing that caching the wrong way can bring down your site. If you're looking for a clear explanation about various caching techniques and tools like Memcached, Nginx and Varnish, as well as ways to deploy them in an efficient way, this talk is for you.
Matteo Moretti discusses scaling PHP applications. He covers scaling the web server, sessions, database, filesystem, asynchronous tasks, and logging. The key aspects are decoupling services, using caching, moving to external services like Redis, S3, and RabbitMQ, and allowing those services to scale automatically using techniques like auto-scaling. Sharding the database is difficult to implement and should only be done if really needed.
Similar to Mobile & Desktop Cache 2.0: How To Create A Scriptable Cache (20)
Details of description part II: Describing images in practice - Tech Forum 2024BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Quantum Communications Q&A with Gemini LLM. These are based on Shannon's Noisy channel Theorem and offers how the classical theory applies to the quantum world.
Understanding Insider Security Threats: Types, Examples, Effects, and Mitigat...Bert Blevins
Today’s digitally connected world presents a wide range of security challenges for enterprises. Insider security threats are particularly noteworthy because they have the potential to cause significant harm. Unlike external threats, insider risks originate from within the company, making them more subtle and challenging to identify. This blog aims to provide a comprehensive understanding of insider security threats, including their types, examples, effects, and mitigation techniques.
RPA In Healthcare Benefits, Use Case, Trend And Challenges 2024.pptxSynapseIndia
Your comprehensive guide to RPA in healthcare for 2024. Explore the benefits, use cases, and emerging trends of robotic process automation. Understand the challenges and prepare for the future of healthcare automation
How RPA Help in the Transportation and Logistics Industry.pptxSynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-InTrustArc
Six months into 2024, and it is clear the privacy ecosystem takes no days off!! Regulators continue to implement and enforce new regulations, businesses strive to meet requirements, and technology advances like AI have privacy professionals scratching their heads about managing risk.
What can we learn about the first six months of data privacy trends and events in 2024? How should this inform your privacy program management for the rest of the year?
Join TrustArc, Goodwin, and Snyk privacy experts as they discuss the changes we’ve seen in the first half of 2024 and gain insight into the concrete, actionable steps you can take to up-level your privacy program in the second half of the year.
This webinar will review:
- Key changes to privacy regulations in 2024
- Key themes in privacy and data governance in 2024
- How to maximize your privacy program in the second half of 2024
Sustainability requires ingenuity and stewardship. Did you know Pigging Solutions pigging systems help you achieve your sustainable manufacturing goals AND provide rapid return on investment.
How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush-waste, means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
論文紹介:A Systematic Survey of Prompt Engineering on Vision-Language Foundation ...Toru Tamaki
Jindong Gu, Zhen Han, Shuo Chen, Ahmad Beirami, Bailan He, Gengyuan Zhang, Ruotong Liao, Yao Qin, Volker Tresp, Philip Torr "A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models" arXiv2023
https://arxiv.org/abs/2307.12980
The DealBook is our annual overview of the Ukrainian tech investment industry. This edition comprehensively covers the full year 2023 and the first deals of 2024.
INDIAN AIR FORCE FIGHTER PLANES LIST.pdfjackson110191
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo...Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
An invited talk given by Mark Billinghurst on Research Directions for Cross Reality Interfaces. This was given on July 2nd 2024 as part of the 2024 Summer School on Cross Reality in Hagenberg, Austria (July 1st - 7th)
Implementations of Fused Deposition Modeling in real worldEmerging Tech
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
BT & Neo4j: Knowledge Graphs for Critical Enterprise Systems.pptx.pdfNeo4j
Presented at Gartner Data & Analytics, London Maty 2024. BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last 6 years. By re-imagining their operational support systems to adopt self-serve and data lead principles they have substantially reduced the number of applications and complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Join this session to hear their story, the lessons they learned along the way and how their future innovation plans include the exploration of uses of EKG + Generative AI.
2. Agenda
• Caching 101
• Mobile & Desktop Scriptable Cache
– Concept
– 6 Steps to Building a Scriptable Cache
– Advanced Optimizations
• Q&A
3. The Value of a Scriptable Cache
• A dedicated cache, not affected by other sites
• A robust cache, not cleared by power cycles
• Better file consolidation
– Works in more cases
– Cache Friendly
– Fewer requests without more bytes
• Enable Advanced Optimizations
– Robust Prefetching, Async CSS/JS…
• The Secret to Eternal Youth
4. Not For The Faint of Heart!
• DIY Scriptable Cache isn’t simple
– No magic 3 lines of code
• Requires HTML & Resource modifications
– Some of each
• The code samples are pseudo-code
– They don’t cover all edge cases
– They’re not optimized
– They probably have syntax errors
6. What is a Cache?
• Storage of previously seen data
• Reduces costs
• Accelerates results
• Sample savings:
– Computation costs (avoid regenerating content)
– Network costs (avoid retransmitting content)
7. Cache Types
Gateway
-‐
Server
resources
from
the
faster
intranet
-‐
Shared
per
organizaHon
Browser
CDN
Edge
-‐
Eliminates
network
Hme
-‐
reduces
roundtrip
Hme
–
latency
-‐
Shared
by
one
user
-‐
Shared
by
all
users
Server-‐Side
-‐
Reduces
server
load
-‐
Faster
turnaround
for
response
-‐
Shared
by
all
users
7
8. Caching - Expiry
• Cache Expiry Controlled by Headers
– HTTP/1.0: Expires
– HTTP/1.1: Cache-Control
• ETAG/Last-Modified Enables Conditional GET
– Fetch Resource “If-Modified-Since”
• CDN/Server Cache can be manually purged
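As a rough sketch of how those headers drive expiry (our own helper, not part of the deck's sCache; the parsing is deliberately minimal and ignores most Cache-Control directives):

```javascript
// Decide whether a cached response is still fresh, using the
// HTTP/1.1 Cache-Control max-age directive and falling back to the
// HTTP/1.0 Expires header. `headers` uses lowercase header names.
function isFresh(headers, storedAtMs, nowMs) {
  var cacheControl = headers['cache-control'] || '';
  var match = cacheControl.match(/max-age=(\d+)/);
  if (match) {
    // max-age is relative to when the response was stored, in seconds
    return (nowMs - storedAtMs) < parseInt(match[1], 10) * 1000;
  }
  if (headers['expires']) {
    // Expires is an absolute date
    return nowMs < Date.parse(headers['expires']);
  }
  // No freshness info: treat as stale (a conditional GET is needed)
  return false;
}
```

When the entry is stale but carries an ETag or Last-Modified, a conditional GET (If-None-Match / If-Modified-Since) can revalidate it without retransferring the body.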
9. Stale Cache
• Outdated data in cache
– Affects Browser Cache the most
• Versioning
– Add a version number to the filename
– Change the version when the file changes
– Unique filename = long caching – stale cache
file.v1.js: var today = "11/10/26"
file.v2.js: var today = "11/10/27"
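The versioning convention above (file.v1.js, file.v2.js) can be captured in a one-line helper; the function name is our own, not from the deck:

```javascript
// Embed a version token in a filename so a changed file gets a new,
// uniquely cacheable URL: versionedName('file.js', 2) -> 'file.v2.js'
function versionedName(file, version) {
  var dot = file.lastIndexOf('.');
  return file.slice(0, dot) + '.v' + version + file.slice(dot);
}
```

A content hash works equally well as the version token, which is the "signature on content" variant mentioned in Step 4 later on.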
10. Cache Sizes - Desktop
• Ranges from 75MB to 250MB
• Fits about 90-300 pages
– Average desktop page size is ~800 KB
• Cycles fully every 1-4 days
– Average user browses 88 pages/day
11. Cache Sizes - Mobile
• Ranges from 0 MB to 25MB
• Fits about 0-60 pages (Average size ~400KB)
• Memory Cache is a bit bigger, but volatile
12. Conclusion
• Caching is useful and important
• Cache sizes are too small
– Especially on Mobile
• Cache hasn’t evolved with the times
– Stopped evolving with HTTP/1.1 in 2004
• Browser Cache evolved least of all
– Browsers adding smart eviction only now
– Still no script interfaces for smart caching
14. Scriptable Browser Cache - Concept
• A cache accessible via JavaScript
– Get/Put/Delete Actions
• What is it good for?
– Cache parts of a page/resource
– Adapt to cache state
– Load resources in different ways
• Why don’t browsers support it today?
– Most likely never saw the need
– Useful only for advanced websites
– Not due to security concerns (at least not good ones)
15. Intro to HTML5 localStorage
• Dedicated Client-Side Storage
– HTML5 standard
– Replaces hacky past solutions
• Primarily used for logical data
– Game high-score, webmail drafts…
• Usually limited to 5 MB
• Enables simple get/put/remove commands
• Supported by all modern browsers
– Desktop: IE8+, Firefox, Safari, Chrome, Opera
– BB 6.0+, most others (http://mobilehtml5.org/)
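The get/put/remove surface looks like this in practice (a minimal sketch of our own; the `storage` parameter lets it run outside a browser, where you would simply pass window.localStorage):

```javascript
// Thin wrapper over the localStorage API surface used in this deck.
// try/catch guards against quota errors and disabled storage.
function makeStore(storage) {
  return {
    get: function (key) {
      try { return storage.getItem(key); } catch (e) { return null; }
    },
    put: function (key, value) {
      try { storage.setItem(key, value); return true; }
      catch (e) { return false; } // e.g. quota exceeded
    },
    remove: function (key) {
      try { storage.removeItem(key); } catch (e) { }
    }
  };
}
```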
16. Step 0: Utilities
var sCache = {
  …
  // Short name for localStorage
  db: localStorage,

  // Method for fetching a URL synchronously
  getUrlSync: function (url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, false);
    xhr.send(null);
    if (xhr.status == 200) {
      return xhr.responseText;
    } else {
      return null;
    }
  }
  …
}
17. Step 1: Store & Load Resources
var sCache = {
  …
  // Method for running an external script
  runExtScript: function (url) {
    // Check if the data is in localStorage
    var data = db.getItem(url);
    if (!data) {
      // If not, fetch it
      data = getUrlSync(url);
      // Store it for later use
      db.setItem(url, data);
    }
    // Run the script dynamically
    addScriptElement(data);
  }
  …
}
18. Step 2: Recover on error
var sCache = {
  …
  runExtScript: function (url) {
    // Check if the data is in localStorage
    var data = db && db.getItem(url);
    if (!data) {
      // If not, fetch it
      data = getUrlSync(url);
      // Store it for later use
      try { db.setItem(url, data); } catch (e) { }
    }
    // Run the script dynamically
    addScriptElement(data);
  }
  …
}
19. Step 3: LRU Cache – Cache State
var sCache = {
  …
  // Meta-data about the cache capacity and state
  dat: { size: 0, capacity: 2*1024*1024, items: [] },

  // Load the cache state and items from localStorage
  load: function () {
    var str = db && db.getItem("cacheData");
    if (str) {
      dat = JSON.parse(str);
    }
  },

  // Persist an updated state to localStorage
  save: function () {
    var str = JSON.stringify(dat);
    try { db.setItem("cacheData", str); } catch (e) { }
  },
  …
}
20. Step 3: LRU Cache – Storing items
var sCache = {
  …
  storeItem: function (name, data) {
    // Do nothing if the single item is greater than our capacity
    if (data.length > dat.capacity) return;
    // Make room for the object
    while (dat.items.length && (dat.size + data.length) > dat.capacity) {
      // Remove the least recently used element
      var elem = dat.items.pop();
      try { db.removeItem(elem.name); } catch (e) { }
      dat.size -= elem.size;
    }
    // Store the new element in localStorage and at the top
    // (most recently used) of the cache
    try {
      db.setItem(name, data);
      dat.size += data.length;
      dat.items.unshift({ name: name, size: data.length });
    } catch (e) { }
  }
  …
}
21. Step 3: LRU Cache – Getting items
var sCache = {
  …
  getItem: function (name) {
    // Try to get the item
    var data = db && db.getItem(name);
    if (!data) return null;
    // Move the element to the top of the array, marking it as used
    for (var i = 0; i < dat.items.length; i++) {
      if (dat.items[i].name === name) {
        dat.items.unshift(dat.items.splice(i, 1)[0]);
        break;
      }
    }
    return data;
  }
  …
}
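The LRU bookkeeping from Step 3 can be followed (and run) in isolation with a plain in-memory sketch of our own, using the same MRU-first array, pop-from-the-back eviction, and move-to-front on access:

```javascript
// Standalone in-memory LRU cache mirroring the storeItem/getItem
// logic above, without the localStorage persistence.
function makeLru(capacity) {
  var size = 0, items = []; // [{name, data}], most recently used first

  return {
    store: function (name, data) {
      if (data.length > capacity) return;       // single item too big
      while (items.length && size + data.length > capacity) {
        size -= items.pop().data.length;        // evict the LRU entry
      }
      items.unshift({ name: name, data: data }); // new item is MRU
      size += data.length;
    },
    get: function (name) {
      for (var i = 0; i < items.length; i++) {
        if (items[i].name === name) {
          var e = items.splice(i, 1)[0];        // move to MRU position
          items.unshift(e);
          return e.data;
        }
      }
      return null;
    },
    names: function () {
      return items.map(function (e) { return e.name; });
    }
  };
}
```

With a 10-character capacity, storing 'a' (5 chars) and 'b' (5 chars), touching 'a', then storing 'c' (3 chars) evicts 'b', since 'b' is the least recently used entry.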
22. Post Step 3: Revised Run Script
var sCache = {
  …
  runExtScript: function (url) {
    // Check if the data is in the cache
    var data = getItem(url);
    if (!data) {
      // If not, fetch it
      data = getUrlSync(url);
      // Store it for later use
      storeItem(url, data);
    }
    // Run the script
    addScriptElement(data);
  }
  …
}
23. Step 4: Versioning
// Today: file version 1
sCache.load();
sCache.runExtScript('res.v1.js');
sCache.save();

// Tomorrow: file version 2
sCache.load();
sCache.runExtScript('res.v2.js');
sCache.save();

// Old files will implicitly be pushed out of the cache
// Versioning also works using a signature on the content
24. What Have We Created So Far?
• Scriptable LRU Cache
– Enforces size limits
– Recovers from errors
• Dedicated Cache
– Not affected by browsing other sites
• Robust Cache
– Not affected by Mobile Cache Sizes
– Survives Power Cycle and Process Reset
• Still Has Limitations:
– Only works on same domain
– Resources fetched sequentially
25. Step 5: Cross-Domain Resources
• Why Cross Domain?
– Enables Domain Sharding
– Various Architecture Reasons
• Solution: Self-Registering Scripts
– Scripts load themselves into the cache
– Added to the page as standard scripts
– Note that one URL stores data as another URL
http://1.foo.com/res.v1.js:
  alert(1);

http://1.foo.com/store.res.v1.js:
  sCache.storeItem('http://1.foo.com/res.v1.js', 'alert(1)');
26. Step 6: Fetching Resources In Parallel
<script>sCache.load()</script>
<script>
// Resources downloaded in parallel
document.write("<scr"+"ipt src='http://foo.com/store.foo.v1.js'></scr"+"ipt>");
document.write("<scr"+"ipt src='http://bar.com/store.bar.v1.js'></scr"+"ipt>");
</script>
<!-- Scripts won't run until previous ones complete, and data is cached -->
<script>sCache.runExtScript('http://foo.com/foo.v1.js');</script>
<script>sCache.runExtScript('http://bar.com/bar.v1.js');</script>
<!-- Note the different URLs! -->
<script>sCache.save();</script>
27. Step 6: Parallel Resources + Cache Check
var sCache = {
  …
  loadResourceViaWrite: function (path, file) {
    // Check if the data is in the cache
    var data = getItem(path + file);
    if (!data) {
      // If not, document.write the store URL,
      // adding the "store." prefix
      document.write("<scr"+"ipt src='" + path + "store." + file +
        "'></scr"+"ipt>");
    }
  }
  …
}
28. Step 6: Parallel Downloads, with Cache
<script>sCache.load()</script>
<script>
// Resources downloaded in parallel, only if needed
sCache.loadResourceViaWrite("http://foo.com/", "foo.v1.js");
sCache.loadResourceViaWrite("http://bar.com/", "bar.v1.js");
</script>
<!-- Scripts won't run until previous ones complete, and data is cached -->
<script>sCache.runExtScript('http://foo.com/foo.v1.js');</script>
<script>sCache.runExtScript('http://bar.com/bar.v1.js');</script>
<!-- Note the different URLs! -->
<script>sCache.save();</script>
29. What Have We Created?
• Scriptable LRU Cache
– Enforces size limits
– Recovers from errors
• Dedicated Cache
– Not affected by browsing other sites
• Robust Cache
– Not affected by Mobile Cache Sizes
– Survives Power Cycle and Process Reset
• Works across domains
• Resources downloaded in parallel
30. Understanding localStorage Quota
• Many browsers use UTF-16 for characters
– Effectively halves the storage space
– Safest to limit capacity to 2 MB
• Best value: Cache CSS & JavaScript
– Biggest byte-for-byte impact on page load
– Lowest variation allows for longest caching
– Images are borderline too big for capacity
• Remember: Quotas are per top-level-domain
– *.foo.com share the same quota
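The quota arithmetic above reduces to a two-line sketch (our own helpers): with UTF-16, each character of a stored string costs two bytes, so a nominal 5 MB quota holds roughly 2.5 MB of data, and the 2 MB capacity used in Step 3 leaves headroom for metadata and other keys on the same top-level domain.

```javascript
// Estimate the storage cost of a string under UTF-16 encoding.
function utf16Bytes(str) {
  return str.length * 2; // 2 bytes per UTF-16 code unit
}

// Check a string against a byte budget before caching it.
function fitsInBudget(str, budgetBytes) {
  return utf16Bytes(str) <= budgetBytes;
}
```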
32. Adaptive Consolidation
• Fetch Several Resources with One Request
– Store them as Fragments
• Adapt to Browser Cache State
– If resources aren’t in cache, fetch them as one file
– If some resources are in cache, fetch separate files
– Optionally consolidate missing pieces
http://1.foo.com/foo.v1.js:
  alert(1);

http://1.foo.com/bar.v1.js:
  alert(2);

http://1.foo.com/store.res.v1.js:
  sCache.storeItem('/foo.v1.js', 'alert(1)');
  sCache.storeItem('/bar.v1.js', 'alert(2)');
33. Adaptive vs. Simple Consolidation - #1
• User browses Page A, then Page B
– Assume each JS file is 20KB in size

Page A:                        Page B:
<script src="a.js"></script>   <script src="a.js"></script>
<script src="b.js"></script>   <script src="b.js"></script>
<script src="c.js"></script>

Optimization              Total JS Requests   Total JS Bytes
None                      3                   60KB
Simple Consolidation      2                   100KB
Adaptive Consolidation    1                   60KB
34. Adaptive vs. Simple Consolidation - #2
• User browses Page A, then Page B
– Assume each JS file is 20KB in size

Page A:                        Page B:
<script src="a.js"></script>   <script src="a.js"></script>
<script src="b.js"></script>   <script src="b.js"></script>
<script src="c.js"></script>   <script src="c.js"></script>
                               <script src="d.js"></script>

Optimization              Total JS Requests   Total JS Bytes
None                      4                   80KB
Simple Consolidation      2                   140KB
Adaptive Consolidation    2                   80KB
35. Adaptive vs. Simple Consolidation - #3
• External & Inline Scripts are often related
• Breaks Simple Consolidation
• Doesn’t break Adaptive Consolidation
a.js:
  var mode=1;

b.js:
  alert(userType);

StoreAll.js:
  sCache.storeItem('a.js', 'var mode=1;')
  sCache.storeItem('b.js', 'alert(userType);')

Page:
<script src="a.js"></script>
<script>
  var userType = "user";
  if (mode==1) userType = "admin";
</script>
<script src="b.js"></script>

Optimized Page:
<script>sCache.runExtScript('a.js')</script>
<script>
  var userType = "user";
  if (mode==1) userType = "admin";
</script>
<script>sCache.runExtScript('b.js')</script>
36. Robust Prefetching
• In-Page Prefetching
– Fetch CSS/JS resources at top of page, to be used later
• Next-Page Prefetching
– Fetch resources for future pages
• Robust and Predictable
– Not invalidated due to content type change in FF
– Not invalidated by cookies set in IE
– Not reloaded when entering same URL in Safari
– …
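A minimal sketch of next-page prefetching on top of the cache (our own, with `fetchFn` and `cache` injected; in a page these would be getUrlSync and the sCache store/get methods): fetch only the resources not yet cached, so the next navigation runs them straight from localStorage.

```javascript
// Prefetch a list of URLs into the cache during idle time.
function prefetch(urls, fetchFn, cache) {
  urls.forEach(function (url) {
    if (cache.get(url) === null) {        // only fetch what's missing
      var data = fetchFn(url);
      if (data !== null) cache.put(url, data);
    }
  });
}
```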
37. Async JS/CSS
• Async JS: Run scripts without blocking page
– Doable without Scriptable Cache
– Scriptable Cache allows script prefetching
– Eliminates the need to make script fetches block
• Async CSS: Download CSS without blocking
– CSS ordinarily delays resource download & render
– You can't always know when a CSS file has loaded
– Scriptable Cache enables an "onload" event
– Can still block rendering if desired
38. Summary
• Caching is good – you should use it!
• Scriptable Cache is better
– More robust
– More reasonably sized on Mobile
– Enables important optimizations
• The two aren’t mutually exclusive
– “store” files should be cacheable
– Images should likely keep using regular cache
39. Or… Use the Blaze Scriptable Cache!
• Blaze automates Front-End Optimization
– No Software, Hardware or Code Changes needed
– All the pitfalls and complexities taken care of
• Blaze optimizes Mobile & Desktop Websites
– Applying the right optimizations for each client
See how much faster Blaze can make your site with our Free Report: www.blaze.io
Contact Us: contact@blaze.io