This document discusses using JavaScript to analyze network performance. It covers measuring latency, TCP handshake time, DNS lookup time, network throughput, and IPv6 support. The document provides code examples for measuring each of these metrics using JavaScript and analyzing image load times. It notes that network conditions vary and accurate measurements require statistical analysis over many samples.
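The closing point about statistics is worth making concrete. A minimal sketch (Python here for brevity, where the original talk uses JavaScript; the function name and percentile choices are mine) of reducing many noisy latency samples to robust summary numbers:

```python
import statistics

def summarize_latency(samples_ms):
    """Reduce repeated latency samples to robust summary statistics.

    A single measurement is dominated by noise (scheduling, congestion,
    cache state), so the median and interquartile range over many samples
    are far more trustworthy than any one number.
    """
    s = sorted(samples_ms)
    n = len(s)
    return {
        "median": statistics.median(s),
        "p95": s[min(n - 1, int(0.95 * n))],   # crude 95th percentile
        "iqr": s[(3 * n) // 4] - s[n // 4],    # spread, robust to outliers
    }

# One 300 ms outlier barely moves the median, but would wreck a mean.
print(summarize_latency([42, 40, 41, 300, 39, 43, 44, 40, 41, 42]))
```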
An introduction to the H2O HTTP/2 server, with a discussion of the improvements in first-paint time over previous-generation protocols.

This document discusses reorganizing website architecture for HTTP/2 and beyond. It summarizes some issues with HTTP/2 including errors in prioritization where some browsers fail to specify resource priority properly. It also discusses the problem of TCP head-of-line blocking where pending data in TCP buffers can delay higher priority resources. The document proposes solutions to these issues such as prioritizing resources on the server-side and writing only what can be sent immediately to avoid buffer blocking. It also examines the mixed success of HTTP/2 push and argues the server should not push already cached resources.
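The two server-side fixes (prioritize on the server, and never write more than the socket can send immediately) can be sketched as a toy scheduler. This is an illustration of the idea only, not H2O's actual code; the function name and data shapes are mine:

```python
def schedule_writes(streams, budget):
    """Toy model of server-side prioritization with bounded writes.

    streams: list of (priority, pending_bytes) tuples; lower number
             means higher priority.
    budget:  bytes the kernel will accept right now (i.e. free room in
             the socket send buffer).

    Draining the highest-priority stream first, and never queueing more
    than the budget, means a newly discovered high-priority resource is
    never stuck behind low-priority bytes already sitting in TCP buffers.
    """
    sent = []
    for prio, pending in sorted(streams):
        if budget <= 0:
            break
        chunk = min(pending, budget)
        budget -= chunk
        sent.append((prio, chunk))
    return sent

# Priority-1 stream fully drained first; priority-3 gets the remainder.
print(schedule_writes([(3, 10000), (1, 4000)], budget=6000))
```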
This document discusses HTTP/2 server push and how it can be used to improve web performance. It begins with an overview of existing techniques for pushing content like polling, long polling, pushlets, and server-sent events. It then provides details on how HTTP/2 server push works, including the new PUSH_PROMISE frame that allows the server to push associated resources to the client. It examines the benefits of HTTP/2 push like reduced latency and improved caching as well as challenges around flexibility and complexity compared to other push techniques.
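Of the pre-HTTP/2 techniques listed, server-sent events are the simplest to show concretely on the wire. A minimal sketch of the EventSource message framing (the helper name is mine): optional `event:`/`id:` fields, one `data:` line per payload line, and a blank-line terminator.

```python
def sse_message(data, event=None, event_id=None):
    """Format one server-sent event for an EventSource client.

    Each field is 'name: value' on its own line; multi-line payloads
    become multiple 'data:' lines; a blank line ends the event.
    """
    lines = []
    if event:
        lines.append(f"event: {event}")
    if event_id:
        lines.append(f"id: {event_id}")
    for part in str(data).split("\n"):
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"

print(sse_message("hello", event="greet"), end="")
```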
Slides from my #JavaDay2016 talk "Enabling Googley microservices with HTTP/2 and gRPC". gRPC is a high-performance, open-source, HTTP/2-based RPC framework.
This document discusses open source tools for machine learning workflows. It introduces MLflow for tracking metrics and versions of models, Git LFS for versioning large datasets, and DVC for versioning datasets and models and connecting data and code. DVC allows for reproducible experiments, tracking basic metrics, and managing ML pipelines with commands like 'dvc repro' and 'dvc pipeline show'. The document argues that new tools are needed to address the specific needs of machine learning compared to traditional software development.
HTTP/2 for Developers: How It Changes Developer's Life? by Svetlin Nakov (SoftUni) - http://www.nakov.com
jProfessionals Conference - Sofia, 22-Nov-2015
Key new features in HTTP/2:
- Multiplexing: multiple streams over a single connection
- Header compression: reuse headers from previous requests
- Server push: multiple parallel responses for a single request
- Prioritization and flow control: resources have priorities
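The header-compression feature is the least intuitive of the four. A toy sketch of the reuse idea (purely illustrative, not real HPACK: RFC 7541 adds a static table, Huffman coding, and eviction, none of which are modeled here):

```python
class ToyHeaderTable:
    """Toy model of HTTP/2-style header reuse on one connection.

    A (name, value) pair already sent on this connection is replaced
    by a small integer index instead of repeating the literal bytes.
    """

    def __init__(self):
        self.table = {}

    def encode(self, headers):
        out = []
        for header in headers:                    # header = (name, value)
            if header in self.table:
                out.append(self.table[header])    # seen before: index only
            else:
                self.table[header] = len(self.table)
                out.append(header)                # first time: literal
        return out

enc = ToyHeaderTable()
print(enc.encode([(":method", "GET"), ("user-agent", "demo")]))  # literals
print(enc.encode([(":method", "GET"), ("user-agent", "demo")]))  # [0, 1]
```

Repeated requests on the same connection shrink to a handful of small integers, which is why header compression matters so much for pages making dozens of similar requests.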
The document proposes a Bulk-n-Pick method for one-to-many data transfer in dense wireless spaces. It begins by outlining problems with congested WiFi networks and inefficient HTTP requests. The solution involves pushing data to clients using circuits rather than individual packet pulls, which reduces overhead. The Bulk-n-Pick method bulk transfers data then allows clients to pick relevant portions, improving throughput. Modeling suggests it completes transfers much faster than traditional methods, especially with multiple parallel sessions. The approach reintroduces benefits of circuits for bulk data transfer over wireless networks.
This document summarizes a presentation given by Alex Borysov on enabling microservices with gRPC. gRPC is an open-source, high performance RPC framework that is based on HTTP/2. Borysov discusses what gRPC is, why it provides advantages over JSON/HTTP especially for high throughput services, how to define services with protocol buffers, implement gRPC servers and clients, and develop microservices using gRPC. He provides examples of unary, asynchronous and streaming calls between services.
The document provides an analysis of a "Megalodon Challenge" network issue where page requests were going unanswered at high loads. The analysis found:
1. Packet losses occurred regularly between the capture points and the content server.
2. There were two types of packet loss: one consistent throughout, and another occurring only at high loads, causing failed connections.
3. At high loads in the fourth test, the content server terminated 11 connections and ignored 47, resulting in failed transactions from the web server.
RFC 7540 was ratified over 2 years ago and, today, all major browsers, servers, and CDNs support the next generation of HTTP. Just over a year ago, at Velocity (https://www.slideshare.net/Fastly/http2-what-no-one-is-telling-you), we discussed the protocol, looked at some real world implications of its deployment and use, and what realistic expectations we should have from its use. Now that adoption has ramped up and the protocol is being regularly used on the Internet, it's a good time to revisit the protocol and its deployment. Has it evolved? Have we learned anything? Are all the features providing the benefits we were expecting? What's next? In this session, we'll review protocol basics and try to answer some of these questions based on real-world use of the protocol. We'll dig into the core features like interaction with TCP, server push, priorities and dependencies, and HPACK. We'll look at these features through the lens of experience and see if good practice patterns have emerged. We'll also review available tools and discuss what protocol enhancements are on the near and not-so-near horizon.
Presentation material for TokyoRubyKaigi11. Describes techniques used by H2O, including techniques to optimize TCP for responsiveness, server push, and cache digests.
Introduction to H2O, an optimized HTTP server / library implementation with support for HTTP/1, HTTP/2, and WebSocket.
The document is a presentation about abusing JavaScript to measure web performance. It discusses using JavaScript to measure network latency, TCP handshake time, network throughput, DNS lookup time, IPv6 support and latency, and other performance metrics. It provides code examples for measuring each metric in JavaScript and notes challenges to consider. The presentation encourages the use of the open source Boomerang library for accurate performance measurement.
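One of the listed metrics, TCP handshake time, is straightforward to approximate outside the browser by timing a blocking connect. A standalone sketch (function name is mine; a browser script has to infer the same number indirectly from timing phases):

```python
import socket
import threading
import time

def tcp_handshake_ms(host, port, timeout=2.0):
    """Approximate TCP handshake time by timing a blocking connect().

    connect() returns once the three-way handshake completes, so the
    elapsed time is roughly one network round trip plus overhead.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

# Demo against a throwaway local listener; a real measurement would
# target the server under test and average many samples.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=listener.accept, daemon=True).start()
print(f"handshake: {tcp_handshake_ms('127.0.0.1', listener.getsockname()[1]):.2f} ms")
```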
This document discusses extending the lifespan of IoT devices through firmware updates and outlines some challenges and solutions. It proposes a standardized approach using multicast transmissions, forward error correction, and an update server to efficiently deliver firmware over constrained low-power wide area networks. An open-source reference implementation is available to demonstrate feasibility on current hardware within radio regulations.
The goal of the "An optic's life" project is to predict when an optical transceiver will reach its real end of life, based on the actual setup in the datacenter / colocation.
This document provides an overview and agenda for a Janet Tech 2 Tech session on network performance. It discusses challenges in achieving optimal network performance, tools for troubleshooting issues like congestion and packet loss, best practices like implementing a Science DMZ, and Janet-hosted test tools including perfSONAR, iperf3, and a data transfer node for file transfers. The session aims to help members make the most of their Janet network connection and minimize data shipped by hard disk.
This document summarizes a presentation on high performance mobile web. The presentation covers:
- Delivering fast mobile experiences by making fewer HTTP requests, using CDNs, browser prefetching, and other techniques.
- Measuring web performance using Navigation Timing, Resource Timing, custom timing marks, and tools like WebPagetest and Google Analytics.
- Typical mobile network performance statistics like average latency, download speeds, and how these numbers impact page load times.
The problem of over-the-network indexing has been raised in recent literature. Indexing is traditionally done on a local filesystem; when processing/access and storage are separated by a network, traditional methods perform poorly, even if rewritten with over-the-network logic. The Stringex engine was proposed with over-the-network efficiency in mind; however, although the method optimizes blocksize, the blocksize is fixed for the entire index. This paper looks into a way to allow dynamic blocksize. The problem is formulated as dynamic packing of unit blocks for optimal over-the-network access. The new method also takes into account the atomicity of operations in multiuser environments, where each user can experience drastically different performance on end-to-end network paths.
This document provides instructions for creating scenarios in the GloMoSim network simulator. It discusses the key input and output files used, including the scenario configuration file, node placement file, and application configuration file. It also describes how to design both wired and wireless networks as scenarios in GloMoSim, including defining the network topology and components, configuring applications and traffic, and analyzing output statistics files.
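As a concrete illustration, the GloMoSim scenario configuration file (config.in) is a flat list of upper-case parameters. The fragment below is a sketch of typical fields for a small wireless scenario; the values are illustrative, and the exact parameter set should be checked against the GloMoSim manual for your version:

```text
# config.in -- sketch of a small wireless scenario (values illustrative)
SIMULATION-TIME        15M              # run for 15 simulated minutes
TERRAIN-DIMENSIONS     (2000, 2000)     # terrain size in metres
NUMBER-OF-NODES        30
NODE-PLACEMENT         UNIFORM          # or FILE with NODE-PLACEMENT-FILE
MOBILITY               NONE
MAC-PROTOCOL           802.11
ROUTING-PROTOCOL       AODV
APP-CONFIG-FILE        ./app.conf       # application/traffic definitions
# Per-layer statistics are written to the output statistics file
# (glomo.stat by default) for analysis after the run.
```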
WebSockets and browser-based real-time communications allow for two-way communication between client-side code and remote servers. This enables web applications to maintain bidirectional communications using a simple API. While other options like AJAX exist, WebSockets provide more efficient bidirectional communications by keeping the connection open. The technology has evolved from static web pages to enable rich applications through standards like WebSockets and WebRTC.
Here is a draft proposal for migrating the Windows XP machines in the new LSDG research group to Linux:

Proposal to Migrate LSDG Desktops from Windows XP to Linux

Introduction
The new LSDG research group at Linx LLC will be using desktop operating systems. Currently, some machines in the larger Linx LLC organization run Windows XP and Windows 7. As LSDG will be a separate research group, we need to consider the best desktop OS choice for their needs and the longevity of the machines.

Analysis
Windows XP is no longer supported by Microsoft, so continuing to use it poses major security risks. Without updates and patches, XP machines are vulnerable to exploits. Support for Windows 7 will also end.
This document summarizes a joint research project between JPRS and several Japanese ISPs to enhance DNS resiliency. The goals were to install DNS servers in multiple regions of Japan to distribute query load and ensure continuity of DNS services during natural disasters. ISPs configured their networks to direct queries to local DNS nodes hosted by JPRS within their networks. Evaluation found queries shifted towards local nodes, response times improved, and Internet services remained available within ISP networks even when other DNS sites were unreachable, demonstrating increased DNS resiliency.
Jose Saldana, Julian Fernandez-Navajas, Jose Ruiz-Mas, Eduardo Viruete Navarro, Luis Casadesus, "The Effect of Router Buffer Size on Subjective Gaming Quality Estimators based on Delay and Jitter," in Proc. CCNC 2012 - 4th IEEE International Workshop on Digital Entertainment, Networked Virtual Environments, and Creative Technology (DENVECT), pp. 502-506, Las Vegas, Jan 2012. ISBN 9781457720697.
This document discusses several challenges with integrating YANG push data into a data mesh architecture, and proposes solutions to address those challenges. Specifically, it discusses:
1. The need to unify observations from network events that occur at different times into single alerts.
2. The lack of standardization around aspects of YANG push like transport protocols, encodings, subscriptions, metadata, and versioning.
3. A proposal to integrate YANG push into a data mesh to produce standardized metrics with timestamps, and to control semantic changes end-to-end.
In the textile industry, fabric defect detection traditionally relies on human inspection, which is inaccurate, inconsistent, inefficient, and expensive. Automatic systems have been developed that detect defects by identifying faults in the fabric surface using image and video processing techniques. However, existing solutions fall short in defect data sharing, backhaul interconnect, maintenance, and so on. By evolving to an edge-optimized architecture, we can help the textile industry improve fabric quality, reduce operating cost, and increase production efficiency. In this session, I'll share:
- What edge computing is and why it's important to intelligent manufacturing
- The characteristics, strengths, and weaknesses of traditional fabric defect detection methods
- Why the textile industry can benefit from edge computing infrastructure
- How to design and implement an edge-enabled application for real-time fabric defect detection
- Insights, synergies, and future research directions
The document discusses plans to set up game servers for an action RPG game for beta testing in North America and Europe. It describes analyzing latency data from previous closed betas to select optimal server locations that can provide latency below 200ms for over 75% of players. The analysis identified US East and Frankfurt as locations that met the criteria. The open beta will launch with servers in Las Vegas and Frankfurt based on the latency analyses.
The document discusses four OpenSolaris projects - Network Auto-Magic, Clearview, Brussels, and Crossbow - that aim to simplify and enhance network administration on the Solaris platform. Network Auto-Magic seeks to automate basic network configuration. Clearview aims to unify and enhance features across different network interfaces. Brussels looks to simplify network interface configuration and tuning. Crossbow integrates network interface virtualization and resource management.
How do you tackle real-world web platform performance problems in modern websites and apps? This session starts with a basic understanding of the web platform and then explores a set of problem/solution pairs built from industry-standard performance guidance. In the talk, we will demonstrate performance tips and tricks that will help you improve the performance of your apps and sites today. We will discuss the following: responding to network requests, speed and responsiveness, optimizing media usage, and writing fast JavaScript. These performance tips and tricks apply equally to web sites that run in standards-based web browsers and to modern apps.
How can infrastructure engineers empower their product developers with easy-to-use systems and processes that abstract the complexity of core infrastructure? This talk focuses on Envoy configuration management, and how the networking team at Lyft builds on top of Envoy to allow Lyft engineers to focus on business logic.
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
The document describes two experiments conducted using the OPNET simulation tool. Experiment 1 involves simulating a TCP network using different congestion control mechanisms and analyzing OSPF routing. Experiment 2 compares the bus and star network topologies by creating networks with each in OPNET and collecting statistics on traffic and delay. The objectives are to get familiar with OPNET, study TCP algorithms, simulate OSPF routing, and understand the pros and cons of different topologies. Tasks for each experiment are described in detail, including how to set up the simulations, configure nodes and links, select statistics, and run the simulations.
The document proposes a new WebML API to optimize machine learning workloads on the web by integrating them with OS-level ML APIs and hardware accelerators. It provides an overview of existing web ML frameworks and limitations. The WebML API would standardize ML inference on the web and allow web apps to fully utilize CPU, GPU and dedicated ML accelerators for near-native performance. The document includes a prototype WebML API implementation and initial performance results showing significant speedups compared to existing web APIs.
This document discusses techniques for improving the performance of D3 visualizations. It begins with an overview of D3 and some basic tutorials. It then describes issues with performance for force-directed layouts and edge-bundled layouts as the number of nodes and links increases. Solutions proposed include using canvas instead of SVG for rendering, reducing unnecessary calculations, and caching repeated drawing states. The document concludes that the number of DOM nodes has major performance implications and techniques like canvas can help when exact mouse interactions are not required.
There’s no such thing as fast enough. You can always make your website faster. This talk will show you how. The very first requirement of a great user experience is actually getting the bytes of that experience to the user before they get tired and leave. In this talk we’ll start with the basics and get progressively insane. We’ll go over several frontend performance best practices, a few anti-patterns, the reasoning behind the rules, and how they’ve changed over the years. We’ll also look at some great tools to help you.
Frontend Performance Beginner to Expert to Crazy Person. The very first requirement of a great user experience is actually getting the bytes of that experience to the user before they get tired and leave. In this talk we'll start with the basics and get progressively insane. We'll go over several frontend performance best practices, a few anti-patterns, the reasoning behind the rules, and how they've changed over the years. We'll also look at some great tools to help you.
The document outlines steps for front-end performance optimization, beginning with basic techniques like caching, compression, and domain sharding, and progressing to more advanced strategies involving preloading, parallel downloads, and predicting response times. It was presented by Philip Tellis at WebPerfDays New York and includes references for further reading on topics like CDNs, TCP tuning, and the page visibility API.
RUM isn’t just for page level metrics anymore. Thanks to modern browser updates and new techniques we can collect real user data at the object level, finding slow page components and keeping third parties honest. In this talk we will show you how to use Resource Timing, User Timing, and other browser tricks to time the most important components in your page. We’ll also share recipes for several of the web’s most popular third parties. This will give you a head start on measuring object level performance on your own site.
The document outlines steps web performance experts take to optimize frontend performance, moving from beginner to advanced techniques. It starts with basic optimizations like enabling gzip, caching, and image optimization. It then discusses more advanced strategies like using a CDN, splitting JavaScript, auditing CSS, and parallelizing downloads. Finally it discusses very advanced techniques like pre-loading assets, detecting broken Accept-Encoding headers, and understanding how to optimize for HTTP/2. The document provides references for further information on each topic.
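The first item, enabling gzip, is easy to sanity-check offline. A small sketch (the function name is mine) showing why the usual advice restricts compression to text content types:

```python
import gzip

def gzip_savings(payload: bytes) -> float:
    """Fraction of bytes saved by gzip-compressing a response body.

    Text assets (HTML, CSS, JS) are highly repetitive and typically
    compress very well; already-compressed assets (JPEG, WOFF2) save
    little or even grow, so serving them gzipped wastes CPU.
    """
    return 1.0 - len(gzip.compress(payload)) / len(payload)

# Repetitive markup compresses dramatically.
html = b"<ul>" + b"<li>item</li>" * 500 + b"</ul>"
print(f"html: {gzip_savings(html):.0%} saved")

# Compressing already-compressed bytes barely helps (and can hurt).
packed = gzip.compress(b"some already compressed payload " * 50)
print(f"gzipped data: {gzip_savings(packed):.0%} saved")
```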
The document discusses front-end web performance optimization from beginner to expert levels. At the beginner level, it recommends starting with basic optimizations like measuring performance, enabling gzip compression, optimizing images, and caching. At the expert level, it discusses more advanced techniques like using a CDN, splitting JavaScript files, auditing CSS, and flushing content early. Finally, it outlines "crazy" optimizations like pre-loading assets, post-load fetching, and understanding round-trip network latency.