This document discusses front-end performance measurement. It recommends measuring performance at every stage of a project's lifecycle using both synthetic and real user monitoring tools. Key metrics to measure include time to first byte, speed index, and user timings. Both types of tools provide valuable but different insights and should be used together. Performance data should be reported visually through dashboards to make it relevant and actionable. The goal is to establish a "culture of performance" and catch problems early.
GatsbyJS is a site generator that allows you to build modern, fast and secure apps and websites using React, GraphQL, and other tools. It focuses on developer experience with batteries included and features like hot reloading. Popular sites using Gatsby include reactjs.org, airbnb.io, and figma.com. Gatsby gets data from various sources and delivers sites via services like S3, Netlify, and GitHub Pages. Developers can install Gatsby globally, generate a new Gatsby site, and develop locally while previewing changes in real time.
Migration Best-Practices: Successfully re-launching your website - SMX New Yo... (Bastian Grimm)
This document provides best practices for successfully migrating a website from HTTP to HTTPS. It recommends a granular, multi-step approach including thorough planning, documentation, testing, and preparation work. Key steps include updating internal and external links, XML sitemaps, structured data, headers, and more to reference the new HTTPS URLs. It also covers monitoring rankings, using search console tools, and redirecting URLs with 301 redirects after all changes are made before the official migration go-live. The goal is to minimize any potential negative SEO impacts from the migration.
Measuring the visual experience of website performance (Patrick Meenan)
This document discusses different methods for measuring website performance from both a synthetic and real-user perspective. It introduces the Speed Index metric for quantifying visual progress and compares the Speed Index of Amazon and Twitter. It also covers the Chrome resource prioritization and different challenges around visual performance metrics.
Raiders of the Fast Start: Frontend Performance Archaeology - Performance.now... (Katie Sylor-Miller)
Raiders of the Fast Start: Frontend Performance Archeology
There are a lot of books, articles, and online tutorials out there with fantastic advice on how to make your websites performant. It all seems easy in theory, but applying best practices to real-world code is anything but straightforward. Diagnosing and fixing frontend performance issues on a large legacy codebase is like being an archaeologist excavating the remains of a lost civilization. You don’t know what you will find until you start digging!
Pick up your trowels and come along with Etsy’s Frontend Systems team as we become archaeologists digging into frontend performance on our large, legacy mobile codebase. I’ll share real-life lessons you can use to guide your own excavations into legacy code:
What tools and metrics we used to diagnose issues and track progress.
How we went beyond server-driven best practices to focus on the client.
Which fixes successfully increased conversion, and which didn’t.
Our work, like all good archaeology, went beyond artifacts and unearthed new insights into our culture. We at Etsy pride ourselves on our culture of performance, but, like all cultures, it needs to adapt and reinvent itself to account for changes to the landscape. Based on what we’ve learned, we are making the case for a new, organization-wide, frontend-focused performance culture that will solve the problems we face today.
10 things you can do to speed up your web app today 2016 (Chris Love)
Web sites are too slow, and this is costing businesses money. Most performance issues are easy to fix. In this session we review why web performance is important and 10 simple things you can do to create a faster user experience.
Technical SEO vs. User Experience (Bastian Grimm, Peak Ace AG)
My kick-off talk for a webinar titled "Technical SEO vs. UI/UX" which featured a panel of speakers discussing if and how SEO should work (more closely) together with UX. Enjoy!
Super speed around the globe - SearchLeeds 2018 (Bastian Grimm)
My talk covering some of the very latest in web performance optimisation (paint timings, critical rendering path, custom web fonts, etc.) for technical marketers & SEOs from SearchLeeds 2018.
Welcome to a new reality - DeepCrawl Webinar 2018 (Bastian Grimm)
My webinar with DeepCrawl talking about mobile-friendliness, assessing keyword targeting on mobile, finding content inconsistencies across devices and much, much more!
Become an artisan web analytics practitioner by building your own analytics QA tool. The example targets Adobe Analytics, but you could do the same with Google Analytics, A/B testing, tag management, VOC tools, and many other analytics tools.
This document summarizes a company's transition from using Angular to React for their frontend framework. It discusses why they wanted to change frameworks, the research they did into Angular 2, VueJS and React, and why they ultimately chose React. It also addresses some immediate roadblocks they faced and improvements they noticed. While the transition was time-consuming, they believe it was worth it to use a framework that is faster, simpler and has a larger community in React.
The technology landscape is changing with every passing year. More people than ever before are now online, and the ways that people access the web all over the world are changing, too.
In this talk, I cover different techniques, coupled with a few case studies, for improving front-end performance.
Web 3.0 extends Web 2.0 with technologies like machine-to-machine communication, IPv6, artificial intelligence, 5G networks, fiber internet connections at home, and a peer-to-peer internet. It is predicted that every home will have computing power equivalent to Google today and people will have personalized web experiences. Google may go bankrupt as people gain this power and the company needs to explore new business models. Technologies like smart pencils, real-time handwriting recognition, and plagiarism detection could enable new forms of education and learning tracking.
Web Components at Scale, HTML5DevConf 2014-10-21 (Chris Danford)
At Pinterest, we've begun experimenting in production with Web Components. This talk will discuss some challenges of implementing Web Components in a large scale production environment such as SEO concerns, reasonable fallbacks for browsers not supported by Platform.js, migrating a large code base component-by-component to mitigate risk, and optimizing page load and scroll performance.
This document discusses ways to improve web performance for mobile users. It outlines goals like achieving a speed index between 1,100-2,500 and first meaningful paint within 1-3 seconds. Various techniques are presented for hacking first load times, data transfer, resource loading, images and user experience. These include avoiding redirects, using HTTP/2 and service workers, modern cache controls, responsive images, preloading resources, and ensuring consistent frame rates. The overall message is that mobile performance needs more attention given average load times and high bounce rates on slow mobile sites.
Metrics are everywhere! We’ve done a great job of keeping pace with measuring the output of our applications, but how are we doing with measuring what really matters? This talk will explore the various metrics available to application owners today, highlight what’s coming tomorrow and level-set on the relative importance as it relates to the user experience.
Selecting and deploying automated optimization solutions (Patrick Meenan)
This document discusses various methods for automating front-end optimization. It describes how HTML rewriting solutions can optimize HTML through proxies or in-app plugins. It also discusses when certain optimizations are best done by machines versus humans. The document outlines different architectures for front-end optimization solutions, including cloud-based and on-premises options, and considers when each is most appropriate. It emphasizes the importance of testing solutions before deploying and of monitoring performance after deployment.
When third parties stop being polite... and start getting real (Charles Vazac)
By Nic Jansma and Charles Vazac (Akamai)
Fluent 2018
http://www.youtube.com/watch?v=L3LKtFh1HkQ
Would you give the Amazon Prime delivery robot the key to your house, just because it stops by to deliver delicious packages every day? Even if you would, do you still have 100% confidence that it wouldn’t accidentally drag in some mud, let the neighbor in, steal your things, or burn your house down? Worst-case scenarios such as these are what you should be planning for when deciding whether or not to include third-party libraries and services on your website. While most libraries have good intentions, by including them on your site, you have given them complete control over the kingdom. Once on your site, they can provide all of the great services you want—or they can destroy everything you’ve worked so hard to build.
It’s prudent to be cautious: we’ve all heard stories about how third-party libraries have caused slowdowns, broken websites, and even led to downtime. But how do you evaluate the actual costs and potential risks of a third-party library so you can balance that against the service it provides? Every library requires nonzero overhead to provide the service it claims. In many cases, the overhead is minimal and justified, but we should quantify it to understand the real cost. In addition, libraries need to be carefully crafted so they can avoid causing additional pain when the stars don’t align and things go wrong.
Nic Jansma and Charles Vazac perform an honest audit of several popular third-party libraries to understand their true cost to your site, exploring loading patterns, SPOF avoidance, JavaScript parsing, long tasks, runtime overhead, polyfill headaches, security and privacy concerns, and more. From how the library is loaded, to the moment it phones home, you’ll see how third-parties can affect the host page and discover best practices you can follow to ensure they do the least potential harm.
With all of the great performance tools available to developers today, we’ve gained a lot of insight into just how much third-party libraries are impacting our websites. Nic and Charles detail tools to help you decide if a library’s risks and unseen costs are worth it. While you may not have the time to perform a deep dive into every third-party library you want to include on your site, you’ll leave with a checklist of the most important best practices third-parties should be following for you to have confidence in them.
Scraping the web with Laravel, Dusk, Docker, and PHP (Paul Redmond)
Jumpstart your web scraping automation in the cloud with Laravel Dusk, Docker, and friends. We will discuss the types of web scraping tools, the best tools for the job, and how to deal with running Selenium in Docker.
Code examples @ https://github.com/paulredmond/scraping-with-laravel-dusk
Check Yourself Before You Wreck Yourself: Auditing and Improving the Performa... (Nicholas Jansma)
Boomerang is an open-source Real User Monitoring (RUM) JavaScript library used by thousands of websites to measure their visitor's experiences.
Boomerang runs on billions of page loads a day, either via the open-source library or as part of Akamai's mPulse RUM service. The developers behind Boomerang take pride in building a reliable and performant third-party library that everyone can use without being concerned about its measurements affecting their site.
Recently, we performed and shared an audit of Boomerang's performance, to help communicate the "cost of doing business" of including Boomerang on a page while it takes its measurements. In doing the audit, we found several areas of code that we wanted to improve and have been making continuous improvements ever since. We've taken ideas and contributions from the OSS community, and have built a Performance Lab that helps "lock in" our improvements by continuously measuring the metrics that are important to us.
We'll discuss how we performed the audit, some of the improvements we've made, how we're testing and validating our changes, and the real-time telemetry we capture on our library to ensure we're having as little of an impact as possible on the sites we're included on.
The document discusses effective strategies for monitoring client-side web performance. It recommends collecting both real user monitoring metrics from actual users as well as synthetic metrics from automated tests. It describes tools like Navigation Timing API, paint metrics, custom metrics, and open-source libraries that can capture metrics. It also discusses storing and visualizing metrics with tools like Graphite and Grafana and how to reduce noise and account for environment differences when analyzing performance data. The overall goal is to utilize performance metrics to inform decisions that improve the user experience.
This document summarizes a presentation on performance optimization on a budget. It discusses measuring and improving performance at the front-end through asset optimization, latency reduction, and client-side rendering. It also discusses measuring and optimizing performance at the backend through caching, databases, and server-side architecture. The document lists several free and paid tools for profiling, testing, and analyzing performance. It concludes with best practices for performance including establishing goals, architecture, testing, and an SDLC approach.
Unlocking the Power of ChatGPT and AI in Testing - NextSteps, presented by Ap... (Applitools)
The document discusses AI tools for software testing such as ChatGPT, Github Copilot, and Applitools Visual AI. It provides an overview of each tool and how they can help with testing tasks like test automation, debugging, and handling dynamic content. The document also covers potential challenges with AI like data privacy issues and tools having superficial knowledge. It emphasizes that AI should be used as an assistance to humans rather than replacing them and that finding the right balance and application of tools is important.
Using Modern Browser APIs to Improve the Performance of Your Web Applications (Nicholas Jansma)
This document discusses modern browser APIs that can improve web application performance. It covers Navigation Timing, Resource Timing, and User Timing which provide standardized ways to measure page load times, resource load times, and custom events. Other APIs discussed include the Performance Timeline, Page Visibility, requestAnimationFrame for script animations, High Resolution Time for more precise timestamps, and setImmediate for more efficient script yielding than setTimeout. These browser APIs give developers tools to assess and optimize the performance of their applications.
Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. It allows JavaScript to be run on the server side and is well suited to real-time, event-driven applications thanks to its asynchronous, non-blocking I/O model. It was created in 2009 by Ryan Dahl, who was frustrated by the limitations of existing server-side platforms. Node.js uses an event loop that handles asynchronous callbacks and a single-threaded model to achieve scalable performance. Many large companies like Uber, LinkedIn, and Netflix use Node.js for applications that require real-time features or high throughput.
In the realm of real-time applications, Large Language Models (LLMs) have long dominated language-centric tasks, while tools like OpenCV have excelled in the visual domain. However, the future (maybe) lies in the fusion of LLMs and deep learning, giving birth to the revolutionary concept of Large Action Models (LAMs).
Imagine a world where AI not only comprehends language but mimics human actions on technology interfaces. For example, the Rabbit r1 device presented at CES 2024, driven by an AI operating system and LAM, brings this vision to life. It executes complex commands, leveraging GUIs with unprecedented ease.
In this presentation, join me on a journey as a software engineer tinkering with WebRTC, Janus, and LLM/LAMs. Together, we’ll evaluate the current state of these AI technologies, unraveling the potential they hold for shaping the future of real-time applications.
The document discusses Performance as Code (PAC). Key points include:
- PAC aims to treat performance testing like code through tools like PerfDriver that allow defining performance tests as code.
- The concept of Minimum Viable Performance (MVPx) seeks to establish basic performance testing capabilities.
- Continuous Performance (CPx) and microcontainerization with tools like Kubernetes allow ongoing performance testing.
- The Digital Performance Lifecycle (DPL) aims to systematize performance testing approaches.
- Test Data as Code (TDaC) and Robotic Process Automation (RPA) aim to automate data creation and parameterization.
- Lifecycle Virtualization aims to apply AI to further automate
SF JUG - GWT Can Help You Create Amazing Apps - 2009-10-13 (Fred Sauer)
This document summarizes a presentation about Google Web Toolkit (GWT). It discusses how GWT can help developers create apps by allowing them to use Java to build AJAX apps that run on any modern browser, highlights of GWT features like widgets, libraries, compiler optimizations for performance and code size, and resources for learning more about GWT.
Technical Tips: Visual Regression Testing and Environment Comparison with Bac... (Building Blocks)
As a Front End Web Developer, experimenting with new tools to add to your workflow (and going down the rabbit hole with them!) is all part and parcel of refining your craft. Chris Eccles, Technical Manager at Building Blocks has been doing just this and has some invaluable insight into CSS Visual Regression using Backstop.JS.
CSS Visual Regression testing is the process of running automated visual test comparisons on pages or elements in your projects. Using Backstop.JS, Chris has discovered that this tool is intuitive, with quick configuration that gets you up and running fast.
Backstop.JS serves your tests via a webpage which gives you the visual feedback needed for targeting bugs caused from CSS related issues. These comparisons can uncover bugs you’d otherwise not learn about until it’s too late. A very useful tool to have in your Front End arsenal, wouldn’t you agree?
Chris has been sharing his insights with the BB team and wanted to share with our blog readers also. So, sit back and enjoy the ride through the wonderful world of Backstop.JS.
The document discusses shifting performance testing left in the development process. It argues that with increased software complexity, testing needs to start earlier to avoid delays. Single user performance testing can be run by developers as part of their normal testing to gain immediate feedback. This involves measuring responsiveness, network traffic, and device vitals under different conditions. While load testing still has value, splitting it up and combining it with functional and responsiveness testing allows more testing to be done earlier in development.
Improving the performance of Rails web applications (John McCaffrey)
This presentation is the first in a series on Improving Rails application performance. This session covers the basic motivations and goals for improving performance, the best way to approach a performance assessment, and a review of the tools and techniques that will yield the best results. Tools covered include: Firebug, yslow, page speed, speed tracer, dom monster, request log analyzer, oink, rack bug, new relic rpm, rails metrics, showslow.org, msfast, webpagetest.org and gtmetrix.org.
The upcoming sessions will focus on:
Improving sql queries, and active record use
Improving general rails/ruby code
Improving the front-end
And a final presentation will cover how to be a more efficient and effective developer!
This series will be compressed into a best-of session for the 2010 http://windycityRails.org conference.
Client Side Performance for Back End Developers - Camb Expert Talks, Nov 2016 (Bart Read)
Slides for a new talk - honestly, an alpha version (thanks to everyone who came for playing guinea pig) - of my client side performance talk. This is very much aimed towards back-end, or full stack developers more used to working behind the scenes, who may be less comfortable with JavaScript and other front-end performance concerns.
The document provides an overview of developing high performance web applications, focusing on optimizing front-end performance. It discusses why front-end performance matters, and provides best practices for optimizing page load time, developing responsive interfaces, and efficiently loading and executing JavaScript. The document also covers DOM scripting techniques, tools for profiling and analyzing performance, and how the performance monitoring service Gomez can be extended to better measure client-side metrics.
The document summarizes several emerging tech trends for 2018-2019 including:
- Micro-frontends, which separate large monolithic applications into independent and modular frontends.
- Polly, which records and replays HTTP interactions for deterministic, accurate tests.
- HTTP/3, the official new name for HTTP-over-QUIC.
- Architecture Decision Records (ADR) which document architectural choices in a standard format.
- Chaos engineering which experiments on distributed systems to build confidence in withstanding turbulent conditions.
- Blazor which allows building web UIs using C#/Razor components running natively in the browser via WebAssembly.
- Nullable reference types coming to C# 8.
Slides from my 4-hour workshop on Client-Side Performance Testing conducted at Phoenix, AZ in STPCon 2017 (March).
Workshop Takeaways:
Understand the difference between Performance Testing and Performance Engineering.
Hands-on experience with some open-source tools to monitor, measure, and automate client-side performance testing.
Examples / code walk-through of some ways to automate Client-side Performance Testing.
See blog for more details - https://essenceoftesting.blogspot.com/2017/03/workshop-client-side-performance.html
9. In the browser
function myTimings() {
  performance.mark("startTask1");
  doTask1(); // Some developer code
  performance.mark("endTask1");

  performance.mark("startTask2");
  doTask2(); // Some developer code
  performance.mark("endTask2");
}
http://www.w3.org/TR/user-timing/
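The slide stops at setting marks; as a minimal follow-on sketch (not from the deck), those marks can be turned into named durations with performance.measure() and read back:

function reportTimings() {
  // Compute durations between the marks set in myTimings() above
  performance.measure("task1", "startTask1", "endTask1");
  performance.measure("task2", "startTask2", "endTask2");

  performance.getEntriesByType("measure").forEach(function (m) {
    console.log(m.name + ": " + m.duration.toFixed(1) + "ms");
  });
}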
12. What?
• Response End / TTFB: how quickly has my server served the base page?
• DOM Content Loaded: a good analogy for "page is usable"
• Render Start / First Paint: gives us an indication of when the user actually sees something
• Total Page Load: although this includes all 3rd-party and deferred content, it can help get a "feel" for how well everything is working
• User Timings: a little more work, but lets you instrument the areas important to you
• Speed Index: a great single metric to give a pretty good idea of overall user experience
16. Performance Budgets
• Defines tangible numbers or metrics
• May be defined by an aspiration or industry standards
• Enforces the performance standards
• Instills a “culture of performance” in the project team
• Gives a mark to measure by
• You probably already have one!
• Start vague, but define early
• “Performance is everyone’s problem”
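To make the budget's "tangible numbers" concrete, here is a hypothetical sketch of a budget check that could run as a CI step; the threshold values and the metrics object passed in are illustrative, not from the talk:

// Hypothetical budget: thresholds in ms (KB for pageWeight)
var budget = { ttfb: 500, speedIndex: 3000, pageWeight: 1500 };

function checkBudget(metrics) {
  var failures = Object.keys(budget).filter(function (key) {
    return metrics[key] > budget[key];
  });
  if (failures.length > 0) {
    console.error("Performance budget exceeded: " + failures.join(", "));
    process.exitCode = 1; // fail the CI build
  }
}

checkBudget({ ttfb: 620, speedIndex: 2800, pageWeight: 1400 }); // ttfb over budget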
24. Other Tools
• sitespeed.io: uses WPT & PhantomJS to run performance audits on a site; can be used internally (CLI tool)
• PerfBar (http://wpotools.github.io/perfBar/): surfaces NavTiming data in the browser; useful on UAT-type environments
• CI plugins: test for performance as part of the CI process
28. How?
• Synthetic: external, controlled testing
• Real User Monitoring: browser-based reporting of real users' experience
• Don't choose! Both synthetic and RUM provide valuable insight into performance and should be seen as complementary; either alone gives a narrow view
• Report: display data on dashboards, make it visible and relevant
29. Summary
• What: Decide what metrics are relevant to User Experience
• When: At every stage of the lifecycle
• How: Using tools and reports to make the data relevant and actionable
Start with the what…?
What shall we measure?
more questions:
meaningful?
how our pages are performing?
user experience?
what do users *mean*?
The page is usable?
All objects are loaded?
When the browser wheel stops spinning?
Can’t answer
Can help to find out
Know what you can measure to ensure you are meeting your users' expectations.
Let’s start with the basics…
request an object over HTTP
basic steps to deliver an object over HTTP,
measure all of these
indication of the page delivery performance
bundle the back-end metrics into TTFB
HTML page has been downloaded
how the rest of the page gets built and displayed
DOM = document object model
very simplistic model
partial render tree
render start may happen before DCL
elephant in the room: JavaScript!
blocks DOM construction
CSSOM construction blocks JavaScript execution!
maybe DCL is a useful metric…?
Once in the browser, there are APIs we can use to collect these, and other metrics…
The NavTiming API…
Lots of metrics covering navigation, page load
+ browser events like DCL
http://www.w3.org/TR/navigation-timing/
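As a minimal sketch (not from the deck) of reading some of those metrics with the Level 1 API the slide links to:

window.addEventListener("load", function () {
  // loadEventEnd is only populated after the load handler completes,
  // hence the setTimeout
  setTimeout(function () {
    var t = performance.timing;
    console.log("TTFB:", t.responseStart - t.navigationStart, "ms");
    console.log("DCL:", t.domContentLoadedEventStart - t.navigationStart, "ms");
    console.log("Load:", t.loadEventEnd - t.navigationStart, "ms");
  }, 0);
});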
The ResourceTiming API…
Performance metrics for page objects / resources
NB: subject to CORS
Cross-origin responses must include a Timing-Allow-Origin header to expose full timing detail
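A small illustrative sketch: listing slow page resources (the 500 ms threshold is arbitrary). Cross-origin entries only expose full timing detail when the response carries the Timing-Allow-Origin header noted above:

performance.getEntriesByType("resource")
  .filter(function (r) { return r.duration > 500; })
  .forEach(function (r) {
    console.log(r.initiatorType, Math.round(r.duration) + "ms", r.name);
  });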
ultimate flexibility, the UserTimings API
own timing marks in JS
Guardian 1st party JS app instrumented
Measure of how quickly the visible portions of the page are drawn
Visual completeness during page load
An index of how long the page spends incomplete
Example: pages A and B start and end rendering at the same time
Graphing completeness over time gives…
Can see that A is more complete more quickly
B is incomplete longer = worse UX
Index calc'd from the area above the completeness curve
Larger area = larger index = worse UX
More detail on the formula online
Used in synthetic testing
Can be calc'd from browser paint events, but that's unreliable & not used commercially
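For reference (not shown on the slide), the formula as documented for WebPageTest, where VC(t) is the measured visual completeness, as a percentage, at time t:

SpeedIndex = \int_0^{t_{end}} \left(1 - \frac{VC(t)}{100}\right) dt

i.e. the area above the visual-completeness curve, in milliseconds; the smaller the area, the sooner content appeared.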
So let’s return to the “What?”…
Huge number of metrics
What can we use to represent UX?
Depends
a starting point of what I use…
Response End: how quickly has my server served the base page
DOM Content Loaded: a good analogy for "page is usable"
Render Start / First Paint: gives us an indication of when the user actually sees something
Total Page Load: although this includes all 3rd-party and deferred content, it can help get a "feel" for how well everything is working
User Timings: a little more work, but allows the ability to instrument the areas important to you
Speed Index: a great single metric to give a pretty good idea of overall user experience
let’s look at the “When?”
when to test
develop then test and hope?
Example of a waterfall methodology
when to measure performance?
It goes without saying: do it in test,
and probably in development, too.
What about requirements?
Performance should be a NFR
And monitoring performance after release
editors add content, marketing add tags
ensure that users are still getting the optimal experience.
what about during Design…?
Brad Frost tells us…
Good performance is good design
Many articles on designing for performance
book on it by Lara Hogan.
Designing to be fast from the beginning, rather than trying to optimise, will always give a better experience for end-users, saves time in development & test, and makes the life of a developer a heck of a lot easier!
A key way to achieve this is to set Performance Budgets;
dev & design collaborating on designing a fast site
PERFORMANCE IS EVERYONE’S PROBLEM
So… we come back to the question of When?
At every stage of the lifecycle
But how do we do that?
We know:
what to measure,
when to measure.
But how?
I’ll walk you through some of the options, using examples of tools on the way.
Synthetic, often referred to as “robots”.
many forms
simple curl-type requests measuring the HTTP req
also commonly used for availability monitoring
doesn’t tell us a lot about UX
easy, and often free or very cheap.
Better to test using "real" browsers:
an emulated browser loads the page at regular intervals,
(generally) from external locations
Methodologies vary
Test under (relatively) consistent conditions.
Emulated browsers for control
Graph-based portals, waterfall charts
Also test from real VMs running desktop (or mobile in some cases) browsers.
Some will use these for regular testing, as well as for single tests.
Also screenshots, filmstrips, videos
Key is consistency: stable bandwidth and latency
Can't compare otherwise
Webpagetest is a fantastic resource
it’s free,
test from real browsers all over the world.
Can build scripts to do things like authentication, click-paths.
API to run tests, and get results,
plenty of tools use this to automate measurement
build pipelines - other reporting suites (sitespeed.io)
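A hedged sketch (not from the deck) of driving that API from Node 18+ (global fetch). The runtest.php / jsonResult.php endpoints are WebPageTest's public ones, but the API-key placeholder, polling interval, and exact response fields are assumptions to verify against the current docs:

var WPT = "https://www.webpagetest.org";
var API_KEY = "YOUR_API_KEY"; // placeholder

async function runTest(url) {
  var submit = await fetch(
    WPT + "/runtest.php?url=" + encodeURIComponent(url) +
    "&k=" + API_KEY + "&f=json"
  ).then(function (r) { return r.json(); });

  var testId = submit.data.testId;
  for (;;) {
    // statusCode 200 = test complete; 1xx = still queued or running
    var result = await fetch(WPT + "/jsonResult.php?test=" + testId)
      .then(function (r) { return r.json(); });
    if (result.statusCode === 200) {
      return result.data.runs["1"].firstView;
    }
    await new Promise(function (resolve) { setTimeout(resolve, 10000); });
  }
}

runTest("https://example.com").then(function (fv) {
  console.log("TTFB:", fv.TTFB, "SpeedIndex:", fv.SpeedIndex);
});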
Real mobile devices (Android and iOS) - US based.
Open source, available on github…
Run your own private instance on your own network,
or spin one up in a few minutes on AWS (pre-built AMIs).
Great tool for testing before production as you can put it anywhere you need!
To know how our site is really performing for end users, we need to get metrics from them.
We know browsers provide a mechanism for getting data
how do we get the millions(?) of data points and make sense of them?
Start on the rum… no, "Real User Monitoring"
Typically, a small JS tag
collects the metrics and beacons them to a portal
Some analytics tools (like GA) will also collect basic performance data,
but it's usually very basic and heavily sampled
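A minimal sketch of that "small JS tag" pattern; the /beacon endpoint and payload shape are placeholders:

window.addEventListener("load", function () {
  setTimeout(function () {
    var t = performance.timing;
    var payload = JSON.stringify({
      page: location.pathname,
      ttfb: t.responseStart - t.navigationStart,
      dcl: t.domContentLoadedEventStart - t.navigationStart,
      load: t.loadEventEnd - t.navigationStart
    });
    // sendBeacon queues the POST so it survives navigation/unload
    navigator.sendBeacon("/beacon", payload);
  }, 0);
});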
But how do we make sense of all this data?
Eternal question for RUM data.
Portals allow you to analyse the avalanche of data
Often averages, percentiles or aggregations
Valuable, but takes work
Allows visibility into UX
Further investigation before conclusions can be drawn
Poor performance in Mexico could be
poor CDN performance in the region
A local connectivity issue
Even a single data point from a user on dial-up ;)
HISTOGRAMS
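Since the notes favour percentiles and histograms over averages, a tiny illustrative helper (not from the deck):

function percentile(values, p) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var index = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[index];
}

// percentile(loadTimes, 50) = the median experience;
// percentile(loadTimes, 95) = what the slowest 5% of users see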
Other Tools
Sitespeed - CLI - PhantomJS - WPT runner - internal
Perfbar in UAT or for internal users
CI plugins - fail the build on broken budgets
So what are we going to do with all this data we’re collecting?
Use to optimise the site
Find areas to improve,
RUM data might show situational optimisations
But what else can we do with it?
Speedcurve offers a number of high-level visualisations.
Example here shows number of images on homepage
Marked performance budget
Markers to show deployments.
Can be used to “publicise” site performance
I know teams that display these dashboards around their offices
Make sure everyone knows what’s going on.
“performance is everyones problem”.
API access to the data,
Build custom dashboards
Graphite (with Grafana as a front-end) on the left,
and Splunk on the right.
Flexibility to integrate performance data
business needs,
other data sources like analytics,
combining synthetic and RUM data,
Build your own story
Display data in a way that’s meaningful to everyone.
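As an illustrative sketch of that kind of integration (the host and metric path are placeholders): pushing a metric into Graphite's plaintext listener on port 2003 from Node, ready to graph in Grafana:

var net = require("net");

function sendToGraphite(path, value) {
  var socket = net.createConnection(2003, "graphite.example.com", function () {
    var ts = Math.floor(Date.now() / 1000);
    socket.end(path + " " + value + " " + ts + "\n"); // plaintext protocol
  });
  socket.on("error", function (err) {
    console.error("graphite:", err.message);
  });
}

sendToGraphite("rum.homepage.load.p95", 2340);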
So How…
I’ll leave you with this quote from a 2011 blog entry from Ian Malpass at Etsy… this is their philosophy.
However it’s important to remember to focus on what’s important to you, while collecting all the data you possibly can - you never know when it may be useful!
https://codeascraft.com/2011/02/15/measure-anything-measure-everything/