Joerek van Gaalen discussed his experience conducting large-scale performance tests, including a test with 2 million virtual users. He explained how to simulate that level of load using 800 load generators across different cloud providers. He emphasized optimizing tests by tuning scripts, controllers, and agents to minimize resources. Van Gaalen also stressed the importance of making tests realistic by mimicking production usage patterns and balancing load. Specific issues like CDN performance limitations and uneven load distribution were also addressed.
The document proposes a replicated Siamese LSTM model for semantic textual similarity (STS) and information retrieval (IR) in an industrial diagnostic ticketing system. The system aims to retrieve relevant solutions from a knowledge base of tickets given a query. However, the text pairs in the system are often asymmetric in length and content. The proposed model addresses this by learning complementary representations of text pairs in a highly structured latent space using a replicated Siamese LSTM architecture and multi-channel Manhattan metric. It aims to capture similarity at both coarse-grained topic and fine-grained semantic levels to better handle asymmetric texts. The model is evaluated on STS and IR tasks for the industrial ticketing system.
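As a rough illustration of the core mechanism, here is a minimal PyTorch sketch (all names hypothetical) of a shared-weight Siamese LSTM scored with a Manhattan-distance similarity; the paper's replicated, multi-channel variant adds further encoders and channel-wise metrics not shown here.

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    """Minimal Siamese LSTM: both texts are encoded with the same weights."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids):
        # Use the final hidden state as the sentence representation.
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return h_n[-1]                       # shape: (batch, hidden_dim)

    def forward(self, query_ids, ticket_ids):
        h_q = self.encode(query_ids)
        h_t = self.encode(ticket_ids)
        # Manhattan (L1) similarity squashed into (0, 1]:
        #   sim = exp(-||h_q - h_t||_1)
        return torch.exp(-torch.sum(torch.abs(h_q - h_t), dim=1))
```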
The FDA advises using data standards as early as possible in the study lifecycle. As a result, data management centers are using the Study Data Tabulation Model (SDTM) to drive operations from First Patient In to Database Lock. Many tools on the market allow the creation of SDTM datasets via intuitive user interfaces. However, targeted tools are needed to manage the nightly jobs that take care of data source downloads (eCRF, ePRO, Lab, etc.), data uploads into a staging database, conversion to SDTM, and edit checks, all before the Clinical Data Manager arrives in the morning.
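As a sketch of what such a nightly job might look like, here is a hypothetical Python orchestrator; the step names and logic are purely illustrative and not taken from any specific tool.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("nightly-sdtm")

# Hypothetical step functions; in practice each would call a vendor API,
# a database loader, or a conversion job.
def download_sources():
    log.info("Downloading eCRF, ePRO, and lab extracts")

def load_staging():
    log.info("Uploading extracts into the staging database")

def convert_to_sdtm():
    log.info("Converting staged data to SDTM domains")

def run_edit_checks():
    log.info("Running edit checks against the SDTM datasets")

if __name__ == "__main__":
    # Run the steps in order; an exception aborts the chain so the
    # Data Manager never sees half-converted data in the morning.
    for step in (download_sources, load_staging,
                 convert_to_sdtm, run_edit_checks):
        step()
```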
These are the slides of my JavaOne presentation. The abstract goes like this: How do companies developing business-critical Java enterprise web applications increase releases from 40 to 300 per year and still remain confident in the face of an 1,800 percent traffic spike during key events such as Super Bowl Sunday or Cyber Monday? It takes a fundamental change in culture. Although DevOps is often seen as a mechanism for taming the chaos, adopting an agile methodology across all teams is only the first step. This session explores best practices for continuous delivery with higher quality, for improving collaboration between teams by consolidating tools, and for reducing the overhead of fixing issues. It shows how to build a performance-focused culture with tools such as Hudson, Jenkins, Chef, Puppet, Selenium, and Compuware APM/dynaTrace.
I gave this presentation at the Sydney Continuous Delivery Meetup Group. The main goal was to talk about the performance metrics you should monitor along the pipeline. I give examples from four different areas where deployments failed and show how metrics would have helped prevent these problems.
The document discusses the 12-factor app methodology for building scalable software-as-a-service applications. It begins with an introduction to 12-factor apps and their focus on principles such as codebase, dependencies, configuration, backing services, build-release-run processes, port binding, concurrency, and more. The rest of the document delves into each of the 12 factors in more detail, explaining their importance and providing examples.
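For instance, the configuration factor (III) says deploy-specific settings belong in the environment, not the codebase. A minimal Python sketch, where the variable names are conventional examples rather than anything mandated by the methodology:

```python
import os

# Factor III (config): read deploy-specific settings from the environment
# instead of hard-coding them or checking them into the codebase.
DATABASE_URL = os.environ["DATABASE_URL"]        # backing service (factor IV);
                                                 # raises KeyError if unset
PORT = int(os.environ.get("PORT", "8000"))       # port binding (factor VII)
WORKERS = int(os.environ.get("WEB_CONCURRENCY", "4"))  # concurrency (factor VIII)
```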
This presentation gives a lot of insights into Jimdo's infrastructure, which hosts 20 million websites. To enable our application developers to quickly launch and improve their services, we've created a platform called Wonderland that does all the infrastructure work for them. In this talk, I present the parts of Wonderland related to monitoring and logging. You can learn about our Prometheus setup as well as how we stream log messages from Docker to Logstash.
Ankur Jain presented on using the User Timing API to measure user perceived performance of single-page applications without an APM tool. The User Timing API allows developers to mark milestones and measure the time between them to understand performance. While it requires code changes, it provides accurate, real-user monitoring of applications across browsers. Some limitations are that it requires knowledge of the application's user flow and code access to implement the markers.
Why we built a data ingestion & processing pipeline with Spark & Airflow @Datlinq, and all the parts needed to bring it together in our big data system. Presented 2017-02-10 at the Data Driven Rijnmond Meetup: https://www.meetup.com/nl-NL/Data-Driven-Rijnmond/events/236256531/
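As an illustration of how Spark jobs are typically wired together under Airflow, here is a hypothetical DAG sketch in the Airflow 2 style; the DAG id, file names, and schedule are assumptions, not Datlinq's actual pipeline.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical DAG: a daily Spark ingestion job followed by a processing job.
with DAG(
    dag_id="ingest_and_process",
    start_date=datetime(2017, 2, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command="spark-submit --master yarn ingest.py {{ ds }}",
    )
    process = BashOperator(
        task_id="spark_process",
        bash_command="spark-submit --master yarn process.py {{ ds }}",
    )
    ingest >> process  # process runs only after ingestion succeeds
```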
The document discusses how to automate performance testing in DevOps. It outlines an automated analysis workflow involving defining metrics, comparing metrics to thresholds and baselines, pattern analysis, and test results. It also discusses script automation, reducing false positives, and integrating different types of performance tests like load, stress, and spike tests. The goal is to automate performance testing to support the rapid delivery cycles of DevOps.
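A minimal sketch of such an automated-analysis step, with hypothetical metric names and limits, might compare a run's metrics against fixed thresholds and a baseline run, and fail the build on regression:

```python
# Hypothetical thresholds; real pipelines would load these from config.
THRESHOLDS = {"p95_response_ms": 800, "error_rate": 0.01}

def evaluate(run, baseline, tolerance=0.15):
    """Return a list of failure messages; empty means the run passes."""
    failures = []
    # 1. Absolute thresholds.
    for metric, limit in THRESHOLDS.items():
        if run[metric] > limit:
            failures.append(f"{metric}={run[metric]} exceeds threshold {limit}")
    # 2. Relative comparison against the baseline run.
    for metric, base in baseline.items():
        if metric in run and run[metric] > base * (1 + tolerance):
            failures.append(f"{metric} regressed >{tolerance:.0%} vs baseline")
    return failures

if __name__ == "__main__":
    run = {"p95_response_ms": 850, "error_rate": 0.002}
    baseline = {"p95_response_ms": 600, "error_rate": 0.002}
    for failure in evaluate(run, baseline):
        print("FAIL:", failure)
```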
MongoDB can be used in the Nuxeo Platform as a replacement for more traditional SQL databases. Nuxeo's content repository, which is the cornerstone of this open source enterprise content management platform, integrates completely with MongoDB for data storage. This presentation will explain the motivation for using MongoDB and will emphasize the different implementation choices driven by the very nature of a NoSQL datastore like MongoDB. Learn how Nuxeo integrated MongoDB into the platform which resulted in increased performance (including actual benchmarks) and better response to some use cases.
YouTube: https://www.youtube.com/watch?v=H5F0D55nKX4&index=11&list=PLnKL6-WWWE_WNYmP_P5x2SfzJ7jeJNzfp
Tomasz Kowalczewski. Language: English.
Hardware fails, applications fail, our code... well, it fails too (at least mine). To prevent software failure we test. Hardware failures are inevitable, so we write code that tolerates them, then we test. From tests we gather metrics and act upon them by improving parts that perform inadequately. Measuring the right things at the right places in an application is as much about good engineering practices and maintaining SLAs as it is about end-user experience, and may differentiate a successful product from a failure. In order to act on performance metrics such as max latency and consistent response times, we need to know their accurate values. The problem with such metrics is that when using popular tools we get results that are not only inaccurate but also too optimistic. During my presentation I will simulate services that require monitoring and show how the gathered metrics differ from the real numbers, all while using what currently seems to be the most popular metrics pipeline, Graphite together with the com.codahale metrics library, and getting completely false results. We will learn to tune it and get much better accuracy. We will use JMeter to measure latency and observe how falsely reassuring the results are. We will check how Graphite averages data, just to helplessly watch important latency spikes disappear. Finally I will show how HdrHistogram helps in gathering reliable metrics. We will also run tests measuring the performance of different metric classes.
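For reference, recording and querying latencies with HdrHistogram might look like the sketch below; the talk uses the Java library, so this uses the hdrh Python port (package choice and sample values are assumptions) with its mirror of the Java API.

```python
from hdrh.histogram import HdrHistogram  # Python port of HdrHistogram

# Track latencies from 1 us up to 1 hour with 3 significant digits.
hist = HdrHistogram(1, 3_600_000_000, 3)

# Record some sample measurements (microseconds), including one spike.
for latency_us in (500, 520, 480, 90_000, 510):
    hist.record_value(latency_us)

# Unlike averaged Graphite series, percentiles keep the spike visible.
print("p50:  ", hist.get_value_at_percentile(50.0), "us")
print("p99.9:", hist.get_value_at_percentile(99.9), "us")
print("max:  ", hist.get_value_at_percentile(100.0), "us")
```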
Hardware fails, applications fail, our code... well, it fails too (at least mine). To prevent software failure we test. Hardware failures are inevitable, so we write code that tolerates them, then we test. From tests we gather metrics and act upon them by improving parts that perform inadequately. Measuring the right things at the right places in an application is as much about good engineering practices and maintaining SLAs as it is about end-user experience, and may differentiate a successful product from a failure. In order to act on performance metrics such as max latency and consistent response times, we need to know their accurate values. The problem with such metrics is that when using popular tools we get results that are not only inaccurate but also too optimistic. During my presentation I will simulate services that require monitoring and show how the gathered metrics differ from the real numbers, all while using what currently seems to be the most popular metrics pipeline, Graphite together with the metrics.dropwizard.io library, and getting completely false results. We will learn to tune it and get much better accuracy. We will use JMeter to measure latency and observe how falsely reassuring the results are. Finally I will show how HdrHistogram helps in gathering reliable metrics. We will also run tests measuring the performance of different metric classes.
As we build solutions for the future, we have a responsibility to make them sustainable, so that we leave behind not only great tech solutions but also a habitable planet for future generations.
https://github.com/alvarowolfx/react-native-shakeit-demo An introduction to React Native. A little history of React on the web, a comparison with the state of the art in hybrid mobile development, and a demo for the local community.