This talk was presented at the Apache Big Data 2016, North America conference that was held in Vancouver, CA (http://events.linuxfoundation.org/events/archive/2016/apache-big-data-north-america/program/schedule)
Webinar: Deep Dive on Apache Flink State - Seth Wiesman
Apache Flink is a world-class stateful stream processor that presents a huge variety of optional features and configuration choices to the user. Determining the optimal choice for any production environment and use case can be challenging. In this talk, we will explore and discuss the universe of Flink configuration with respect to state and state backends.
We will start with a closer look under the hood, at core data structures and algorithms, to build the foundation for understanding the impact of tuning parameters and the costs-benefit-tradeoffs that come with certain features and options. In particular, we will focus on state backend choices (Heap vs RocksDB), tuning checkpointing (incremental checkpoints, ...) and recovery (local recovery), serializers and Apache Flink's new state migration capabilities.
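To make the cost-benefit tradeoff behind incremental checkpoints concrete, here is a minimal plain-Python sketch of the underlying idea (Flink's real implementation builds on RocksDB SST files; the class and method names below are illustrative assumptions, not Flink's API): each checkpoint persists only the entries modified since the previous one, and recovery replays the base plus all deltas.

```python
class IncrementalCheckpointer:
    """Toy model of incremental checkpointing: persist only changed keys."""

    def __init__(self):
        self.state = {}
        self.dirty = set()     # keys modified since the last checkpoint
        self.checkpoints = []  # ordered list of {key: value} deltas

    def put(self, key, value):
        self.state[key] = value
        self.dirty.add(key)

    def checkpoint(self):
        # Persist only the dirty keys as a delta, not the full state.
        delta = {k: self.state[k] for k in self.dirty}
        self.checkpoints.append(delta)
        self.dirty.clear()
        return delta

    def restore(self):
        # Rebuild state by applying all deltas in order (newest wins).
        restored = {}
        for delta in self.checkpoints:
            restored.update(delta)
        return restored
```

The tradeoff the talk discusses falls out directly: checkpointing is cheaper (only the delta is written), but recovery has to replay a chain of deltas instead of loading one full snapshot.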
hbaseconasia2017: Building online HBase cluster of Zhihu based on Kubernetes
Zhiyong Bai
As a high-performance, scalable key-value database, HBase is used at Zhihu to provide an online data store alongside MySQL and Redis. Zhihu's platform team had accumulated experience with container technology, and on that foundation we built a flexible platform for online HBase based on Kubernetes: multiple logically isolated HBase clusters can be created rapidly on a shared physical cluster, with customized service for different business needs. Combined with Consul and a DNS server, we implemented highly available access to HBase using clients written mainly in Python. This presentation shares the architecture of the online HBase platform at Zhihu and practical experience from the production environment.
We’ll present details about Argus, a time-series monitoring and alerting platform developed at Salesforce to provide insight into the health of infrastructure as an alternative to systems such as Graphite and Seyren.
Rolling Out Apache HBase for Mobile Offerings at Visa
Partha Saha and CW Chung (Visa)
Visa has embarked on an ambitious multi-year redesign of its entire data platform that powers its business. As part of this plan, the Apache Hadoop ecosystem, including HBase, will now become a staple in many of its solutions. Here, we will describe our journey in rolling out a high-availability NoSQL solution based on HBase behind some of our prominent mobile offerings.
Tapad's data pipeline is an elastic combination of technologies (Kafka, Hadoop, Avro, Scalding) that forms a reliable system for analytics, realtime and batch graph-building, and logging. In this talk, I will speak about the creation and evolution of the pipeline, and a concrete example – a day in the life of an event tracking pixel. We'll also talk about common challenges that we've overcome such as integrating different pieces of the system, schema evolution, queuing, and data retention policies.
Kafka Streams is a lightweight stream processing library included in Apache Kafka since version 0.10. It provides a simple yet powerful API for building stream processing applications. The API uses a domain-specific language that allows developers to define stream processing topologies where data from Kafka topics acts as input streams and can be transformed before writing the results to output topics. The library handles common stream processing tasks like state management, windowing, and fault tolerance using Kafka's distributed and fault-tolerant architecture.
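As a rough illustration of that DSL style, here is a plain-Python sketch (not the real Kafka Streams Java API; the class and method names are made up to mirror `mapValues`, `filter`, and `groupByKey().count()`): records flow from an input topic through transformations into a stateful aggregation.

```python
class Stream:
    """Toy topology: a list of (key, value) records with chainable ops."""

    def __init__(self, records):
        self.records = records  # list of (key, value) pairs

    def map_values(self, fn):
        # Transform each value, keeping the key (cf. mapValues).
        return Stream([(k, fn(v)) for k, v in self.records])

    def filter(self, pred):
        # Keep only records matching the predicate (cf. filter).
        return Stream([(k, v) for k, v in self.records if pred(k, v)])

    def group_count(self):
        # A small stateful aggregation (cf. groupByKey().count()).
        counts = {}
        for k, _ in self.records:
            counts[k] = counts.get(k, 0) + 1
        return counts


input_topic = [("user1", "click"), ("user2", "view"), ("user1", "click")]
out = (Stream(input_topic)
       .map_values(str.upper)
       .filter(lambda k, v: v == "CLICK")
       .group_count())
```

In the real library the state behind `count()` is backed by a fault-tolerant store and a changelog topic rather than an in-memory dict.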
Performance Tuning RocksDB for Kafka Streams' State Stores (Dhruba Borthakur,...
RocksDB is the default state store for Kafka Streams. In this talk, we will discuss how to improve single node performance of the state store by tuning RocksDB and how to efficiently identify issues in the setup. We start with a short description of the RocksDB architecture. We discuss how Kafka Streams restores the state stores from Kafka by leveraging RocksDB features for bulk loading of data. We give examples of hand-tuning the RocksDB state stores based on Kafka Streams metrics and RocksDB’s metrics. At the end, we dive into a few RocksDB command line utilities that allow you to debug your setup and dump data from a state store. We illustrate the usage of the utilities with a few real-life use cases. The key takeaway from the session is the ability to understand the internal details of the default state store in Kafka Streams so that engineers can fine-tune their performance for different varieties of workloads and operate the state stores in a more robust manner.
ScyllaDB: What could you do with Cassandra compatibility at 1.8 million reque...
Scylla is a new, open-source NoSQL data store with a novel design optimized for modern hardware, capable of 1.8 million requests per second per node, while providing Apache Cassandra compatibility and scaling properties. While conventional NoSQL databases suffer from latency hiccups, expensive locking, and low throughput due to low processor utilization, the Scylla design is based on a modern shared-nothing approach. Scylla runs multiple engines, one per core, each with its own memory, CPU and multi-queue NIC. The result is a NoSQL database that delivers an order of magnitude more performance, with less performance tuning needed from the administrator.
With extra performance to work with, NoSQL projects can have more flexibility to focus on other concerns, such as functionality and time to market. Come for the tech details on what Scylla does under the hood, and leave with some ideas on how to do more with NoSQL, faster.
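The shard-per-core design described above can be sketched in a few lines of Python (a simplification under assumed names; Scylla's actual sharding algorithm and seastar-based engine differ): every key is owned by exactly one shard, each shard has private memory, and requests never cross shards, so no locking is needed.

```python
import zlib

NUM_SHARDS = 8  # e.g. one shard per CPU core


def shard_for_key(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically route a key to the shard (core) that owns it."""
    # crc32 is a stable stand-in hash; Scylla uses its own token hashing.
    return zlib.crc32(key.encode()) % num_shards


class ShardedStore:
    """Toy shared-nothing store: one private dict per shard, no locks."""

    def __init__(self, num_shards: int = NUM_SHARDS):
        self.shards = [{} for _ in range(num_shards)]

    def put(self, key, value):
        self.shards[shard_for_key(key)][key] = value

    def get(self, key):
        return self.shards[shard_for_key(key)].get(key)
```

Because routing is deterministic, reads and writes for a given key always land on the same core, which is what eliminates cross-core coordination in the real system.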
Speaker bio
Don Marti is technical marketing manager for ScyllaDB. He has written for Linux Weekly News, Linux Journal, and other publications. He co-founded the Linux consulting firm Electric Lichen. Don is a strategic advisor for Mozilla, and has previously served as president and vice president of the Silicon Valley Linux Users Group and on the program committees for Uselinux, Codecon, and LinuxWorld Conference and Expo.
Essential ingredients for real-time stream processing @Scale by Kartik Param...
This document discusses stream processing at scale. It begins with an introduction and agenda. It then discusses scenarios for stream processing like newsfeeds, cybersecurity, and IoT. It presents the canonical stream processing architecture with data buses, real-time and batch processing, and ingestion/serving tiers. The document dives into the essential ingredients for stream processing: scale, reprocessing, accuracy of results, and easy programmability. It provides examples and strategies for each of these essential ingredients to achieve efficient and accurate stream processing at large scales.
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Recently, Apache Phoenix has been integrated with the Apache Omid (incubating) transaction processing service to provide ultra-high system throughput with ultra-low latency overhead. Phoenix has been shown to scale beyond 0.5M transactions per second with sub-5ms latency for short transactions on industry-standard hardware. On the other hand, Omid has been extended to support secondary indexes, multi-snapshot SQL queries, and massive-write transactions.
These innovative features make Phoenix an excellent choice for translytics applications, which allow converged transaction processing and analytics. We share the story of building the next-gen data tier for advertising platforms at Verizon Media that exploits Phoenix and Omid to support multi-feed real-time ingestion and AI pipelines in one place, and discuss the lessons learned.
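The snapshot-isolation style of transaction that Omid layers on top of HBase can be sketched very roughly in plain Python (heavily simplified and hypothetical: conflict detection, persistence, and Omid's actual commit table are all omitted; the names are illustrative): a central oracle hands out monotonically increasing timestamps, and a read sees only versions committed at or before its snapshot timestamp.

```python
import itertools


class TimestampOracle:
    """Hands out monotonically increasing timestamps."""

    def __init__(self):
        self._counter = itertools.count(1)

    def next_ts(self):
        return next(self._counter)


class MVCCStore:
    """Toy multi-version store with snapshot reads."""

    def __init__(self, oracle):
        self.oracle = oracle
        self.versions = {}  # key -> list of (commit_ts, value)

    def begin(self):
        # A transaction's snapshot is fixed by its begin timestamp.
        return self.oracle.next_ts()

    def commit(self, writes):
        commit_ts = self.oracle.next_ts()
        for key, value in writes.items():
            self.versions.setdefault(key, []).append((commit_ts, value))
        return commit_ts

    def read(self, key, snapshot_ts):
        # Newest version committed at or before the snapshot timestamp.
        visible = [(ts, v) for ts, v in self.versions.get(key, [])
                   if ts <= snapshot_ts]
        return max(visible)[1] if visible else None
```

The key property, visible in the sketch, is that a transaction reading under an older snapshot keeps seeing the old value even after a newer commit lands.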
Bellevue Big Data meetup: Dive Deep into Spark Streaming
Discuss the code and architecture about building realtime streaming application using Spark and Kafka. This demo presents some use cases and patterns of different streaming frameworks.
(Berkeley CS186 guest lecture)
Big Data Analytics Systems: What Goes Around Comes Around
Introduction to MapReduce, GFS, HDFS, Spark, and differences between "Big Data" and database systems.
High cardinality time series search: A new level of scale - Data Day Texas 2016
Modern search systems provide incredible feature sets, developer-friendly APIs, and low latency indexing and query response. By some measures, these systems operate "at scale," but rarely is that quantified. Customers of Rocana typically look to push ingest rates in excess of 1 million events per second, retaining years of data online for query, with the expectation of sub-second response times for any reasonably sized subset of data.
We quickly found that the tradeoffs made by general purpose search systems, while right for common use cases, were less appropriate for these high cardinality, large scale use cases.
This session details the architecture, tradeoffs, and interesting implementation decisions made in building a new time series optimized distributed search system using Apache Lucene, Kafka, and HDFS. Data ingestion and durability, index and metadata organization, storage, query scheduling and optimization, and failure modes will be covered. Finally, a summary of the results achieved will be shown.
Intro to Apache Apex - Next Gen Platform for Ingest and Transform
Introduction to Apache Apex - The next generation native Hadoop platform. This talk will cover details about how Apache Apex can be used as a powerful and versatile platform for big data processing. Common usage of Apache Apex includes big data ingestion, streaming analytics, ETL, fast batch alerts, real-time actions, threat detection, etc.
Bio:
Pramod Immaneni is Apache Apex PMC member and senior architect at DataTorrent, where he works on Apache Apex and specializes in big data platform and applications. Prior to DataTorrent, he was a co-founder and CTO of Leaf Networks LLC, eventually acquired by Netgear Inc, where he built products in core networking space and was granted patents in peer-to-peer VPNs.
Maheedhar Gunturu presented on connecting Kafka message systems with Scylla. He discussed the benefits of message queues like Kafka, including centralized infrastructure, buffering capabilities, and streaming data transformations. He then explained Kafka Connect, which provides a standardized framework for building distributed, scalable connectors. Scylla and Cassandra connectors are available today, with a Scylla shard-aware connector in development.
From Batch to Streaming with Apache Apex - Dataworks Summit 2017
This document discusses transitioning from batch to streaming data processing using Apache Apex. It provides an overview of Apex and how it can be used to build real-time streaming applications. Examples are given of how to build an application that processes Twitter data streams and visualizes results. The document also outlines Apex's capabilities for scalable stream processing, queryable state, and its growing library of connectors and transformations.
Data Pipeline with Kafka. This slide deck includes: Kafka introduction, topics/partitions, producers/consumers, quick start, offset monitoring, example code, and Camus.
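The arithmetic behind offset monitoring is simple and worth stating: a consumer's lag on a partition is the gap between the log-end offset (latest produced message) and the consumer's committed offset. A minimal sketch (the dict-based inputs here are an assumption; real tools fetch these values from the broker):

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Per-partition lag = log-end offset - committed offset.

    A partition with no committed offset is treated as lagging from 0.
    """
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}


# Example: partition 0 is 100 messages behind, partition 1 is caught up.
lag = consumer_lag({0: 1500, 1: 900}, {0: 1400, 1: 900})
```

Monitoring this number over time (rather than a single reading) is what distinguishes a consumer that is slow but keeping up from one that is falling progressively behind.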
Lambda-less Stream Processing @Scale in LinkedIn
The document discusses challenges with stream processing including data accuracy and reprocessing. It proposes a "lambda-less" approach using windowed computations and handling late and out-of-order events to produce eventually correct results. Samza is used in LinkedIn's implementation to store streaming data locally using RocksDB for processing within configurable windows. The approach avoids code duplication compared to traditional lambda architectures while still supporting reprocessing through resetting offsets. Challenges remain in merging online and reprocessed results at large scale.
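The windowed, late-event-tolerant computation described above can be sketched in plain Python (this is an illustration of the technique, not Samza's API; the window size and grace period are assumed values): events are bucketed into event-time windows, and a window keeps accepting out-of-order arrivals until the watermark passes its end plus an allowed-lateness grace period.

```python
WINDOW_MS = 60_000            # 1-minute event-time windows (assumed)
ALLOWED_LATENESS_MS = 30_000  # grace period for late events (assumed)


class WindowedCounter:
    """Counts events per event-time window, tolerating late arrivals."""

    def __init__(self):
        self.windows = {}   # window start (ms) -> event count
        self.watermark = 0  # max event time seen so far

    def on_event(self, event_time_ms):
        start = (event_time_ms // WINDOW_MS) * WINDOW_MS
        # A window is finalized once the watermark passes its end plus
        # the grace period; events for finalized windows are dropped.
        if start + WINDOW_MS + ALLOWED_LATENESS_MS <= self.watermark:
            return False
        self.windows[start] = self.windows.get(start, 0) + 1
        self.watermark = max(self.watermark, event_time_ms)
        return True
```

This is the "eventually correct" tradeoff in miniature: results within the grace period are revised as stragglers arrive, while arbitrarily old events are rejected and would have to go through reprocessing instead.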
Dataflow - A Unified Model for Batch and Streaming Data Processing
Batch and Streaming Data Processing, from the "Visualize 300Tb in 5 Seconds" meetup on April 18th, 2016 (http://www.meetup.com/Big-things-are-happening-here/events/229532500)
Benchmarking Apache Samza: 1.2 million messages per sec per node
This document summarizes benchmarking tests of Apache Samza's performance processing streaming data. The tests measured Samza's performance on different processing tasks: message passing achieved 1.2 million messages per second per node; key counting with an in-memory store achieved 1 million messages per second; key counting with RocksDB storage was 443k messages per second; and key counting with RocksDB storage and changelog was 300k messages per second. The benchmarks provide a foundation for developing a capacity model for Samza's performance on high-volume streaming data applications.
Ehtsham Elahi, Senior Research Engineer, Personalization Science and Engineer...
Spark and GraphX in the Netflix Recommender System: We at Netflix strive to deliver maximum enjoyment and entertainment to our millions of members across the world. We do so by having great content and by constantly innovating on our product. A key strategy to optimize both is to follow a data-driven method. Data allows us to find optimal approaches to applications such as content buying or our renowned personalization algorithms. But, in order to learn from this data, we need to be smart about the algorithms we use, how we apply them, and how we can scale them to our volume of data (over 50 million members and 5 billion hours streamed over three months). In this talk we describe how Spark and GraphX can be leveraged to address some of our scale challenges. In particular, we share insights and lessons learned on how to run large probabilistic clustering and graph diffusion algorithms on top of GraphX, making it possible to apply them at Netflix scale.
High Performance Spatial-Temporal Trajectory Analysis with Spark
This document discusses high performance spatial-temporal trajectory analysis using Spark. It covers the background of analyzing mobile signaling data to enable smarter urban planning. The solution architecture includes data sources, distributed file system, computation engine, and visualization. Technical designs address the big data platform, data governance, algorithm models, and Spark spatial computing. Example scenarios are presented for population heatmaps, commute routes, and office-residence imbalance analysis.
Developing Realtime Data Pipelines With Apache Kafka
Developing Realtime Data Pipelines With Apache Kafka. Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients. Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of co-ordinated consumers. Messages are persisted on disk and replicated within the cluster to prevent data loss. Each broker can handle terabytes of messages without performance impact. Kafka has a modern cluster-centric design that offers strong durability and fault-tolerance guarantees.
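The core abstraction behind all of this, the partitioned, append-only commit log, can be sketched in a few lines of Python (a conceptual model, not Kafka's wire protocol or storage format; the hash choice is an assumption): each partition is an ordered sequence of messages addressed by offset, keys hash to partitions so per-key ordering is preserved, and consumers read sequentially from offsets they control.

```python
import zlib


class PartitionedLog:
    """Toy model of a Kafka topic: N append-only, offset-addressed logs."""

    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def append(self, key, value):
        # Same key -> same partition, so per-key order is preserved.
        p = zlib.crc32(key.encode()) % len(self.partitions)
        self.partitions[p].append(value)
        return p, len(self.partitions[p]) - 1  # (partition, offset)

    def read(self, partition, offset):
        # Consumers poll sequentially from an offset they track themselves.
        return self.partitions[partition][offset:]
```

Because the broker never mutates or deletes-in-place, replication and consumer fan-out reduce to copying and re-reading contiguous log segments, which is what makes the sequential-disk throughput figures above possible.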
codecentric AG: CQRS and Event Sourcing Applications with Cassandra
CQRS (Command Query Responsibility Segregation) is a pattern which separates the processes of querying and updating data: while a query only returns data without any side effects, a command is designed to change data. CQRS is often combined with Event Sourcing, an architecture in which all changes to application state are stored as a sequence of events.
Because of its great capability to store time-series data, Cassandra is a perfect fit for implementing the event store. But there are still a lot of open questions: What about the data modeling? What techniques will be used to process and store data in the Cassandra database? How can the current state of the application be accessed without replaying every event? And what about failure handling?
In this talk, I will give a brief introduction to CQRS and the Event Sourcing pattern and will then answer the questions above using a real life example of a data store for customer data.
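The shape of the event store, and the standard answer to the "how do we avoid replaying every event?" question, can be sketched in plain Python (a toy model under assumed names, not the talk's Cassandra schema): commands append immutable events, the current state is a left fold over the event log, and a periodic snapshot bounds how much history a read must replay.

```python
class CustomerEventStore:
    """Toy event-sourced store with snapshotting."""

    def __init__(self):
        self.events = []          # append-only event log
        self.snapshot = ({}, 0)   # (state, number of events folded in)

    def append(self, event):
        self.events.append(event)

    @staticmethod
    def apply(state, event):
        # Each event describes a change; applying it yields the new state.
        new_state = dict(state)
        new_state.update(event)
        return new_state

    def current_state(self):
        # Start from the snapshot and replay only the events after it.
        state, n = self.snapshot
        for event in self.events[n:]:
            state = self.apply(state, event)
        return state

    def take_snapshot(self):
        self.snapshot = (self.current_state(), len(self.events))
```

Note that the snapshot is a pure optimization: the event log remains the source of truth, so a lost or stale snapshot can always be rebuilt by replaying from the beginning.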
Till Rohrmann - Dynamic Scaling - How Apache Flink adapts to changing workloads
http://flink-forward.org/kb_sessions/dynamic-scaling-how-apache-flink-adapts-to-changing-workloads/
Modern stream processing engines not only have to process millions of events per second at sub-second latency but also have to cope with constantly changing workloads. Due to the dynamic nature of stream applications, where the number of incoming events can strongly vary with time, systems cannot reliably predetermine the amount of required resources. In order to meet guaranteed SLAs as well as utilize system resources as efficiently as possible, frameworks like Apache Flink have to adapt their resource consumption dynamically. In this talk, we will take a look under the hood and explain how Flink scales stateful applications in and out. Starting with the concept of key groups and partitionable state, we will cover ways to detect bottlenecks in streaming jobs and discuss efficient strategies for scaling operators with minimal downtime.
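The key-group idea can be sketched in a few lines of Python (the hash function here is a stand-in; Flink actually uses murmur-based hashing, and the constant mirrors its configurable maximum parallelism): keys are first hashed into a fixed number of key groups, and each parallel operator instance owns a contiguous range of key groups, so rescaling only reassigns whole groups, never individual keys.

```python
import zlib

MAX_PARALLELISM = 128  # number of key groups, fixed for the job's lifetime


def key_group(key: str) -> int:
    """Hash a key into its key group; this never changes when rescaling."""
    return zlib.crc32(key.encode()) % MAX_PARALLELISM


def operator_for_group(group: int, parallelism: int) -> int:
    """Assign contiguous key-group ranges to parallel operator instances."""
    return group * parallelism // MAX_PARALLELISM


def operator_for_key(key: str, parallelism: int) -> int:
    return operator_for_group(key_group(key), parallelism)
```

When a job rescales from, say, parallelism 2 to 4, each new instance takes over a contiguous range of key groups, so state can be moved in coarse, sequentially readable chunks instead of being reshuffled key by key.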
Latency-aware Elastic Scaling for Distributed Data Stream Processing Systems
Elastic scaling allows a data stream processing system to react to a dynamically changing query or event workload by automatically scaling in or out. This way, both unpredictable load peaks and underload situations can be handled. However, each scaling decision comes with a latency penalty due to the required operator movements. Therefore, in practice an elastic system might improve system utilization, but it cannot provide latency guarantees defined by a service-level agreement (SLA). In this paper we introduce an elastic scaling system which optimizes utilization under latency constraints defined by an SLA. Specifically, we present a model which estimates the latency spike created by a set of operator movements. We use this model to build a latency-aware elastic operator placement algorithm which minimizes the number of latency violations. We show that our solution is able to reduce the 90th percentile of the end-to-end latency by up to 30% and the number of latency violations by 50%. The achieved system utilization is comparable to that of a scaling strategy which does not use latency as an optimization target.
Auto-scaling Techniques for Elastic Data Stream Processing
An elastic data stream processing system is able to handle changes in workload by dynamically scaling out and scaling in. This allows it to handle unexpected load spikes without constant overprovisioning. One of the major challenges for an elastic system is to find the right point in time to scale in or out. Finding such a point is difficult, as it depends on constantly changing workload and system characteristics. In this paper we investigate the application of different auto-scaling techniques to this problem. Specifically: (1) we formulate basic requirements for an auto-scaling technique used in an elastic data stream processing system, (2) we use these requirements to select the best auto-scaling techniques, and (3) we evaluate the selected techniques using real-world data. Our experiments show that the auto-scaling techniques used in existing elastic data stream processing systems perform worse than the strategies selected in our work.
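The baseline that most existing elastic systems use, and that the paper compares against, is threshold-based scaling. A minimal sketch (thresholds and instance limits are assumed values; real systems also add cooldown periods between decisions): scale out when utilization crosses an upper bound, scale in below a lower bound, with a gap between the two to avoid oscillation.

```python
SCALE_OUT_THRESHOLD = 0.8  # assumed upper utilization bound
SCALE_IN_THRESHOLD = 0.3   # assumed lower utilization bound


def scaling_decision(utilization, current_instances, max_instances=16):
    """Return the new instance count for a simple threshold-based scaler.

    The gap between the two thresholds provides hysteresis, so a workload
    hovering near one boundary does not trigger constant resizing.
    """
    if utilization > SCALE_OUT_THRESHOLD and current_instances < max_instances:
        return current_instances + 1
    if utilization < SCALE_IN_THRESHOLD and current_instances > 1:
        return current_instances - 1
    return current_instances
```

The weakness the paper targets is visible here: this policy is purely reactive, acting only after utilization has already crossed a bound, whereas predictive auto-scaling techniques try to act before the spike arrives.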
Adaptive Replication for Elastic Data Stream Processing
A major challenge for cloud-based systems is to be fault tolerant so as to cope with an increasing probability of faults in cloud environments. This is especially true for in-memory computing solutions like data stream processing systems, where a single host failure might result in an unrecoverable information loss.
In state-of-the-art data streaming systems, either active replication or upstream backup is applied to ensure fault tolerance; these approaches come with a high resource overhead or a high recovery time, respectively. This paper combines the two fault tolerance mechanisms in one system to minimize the number of violations of a user-defined recovery time threshold and to reduce overall resource consumption compared to active replication. The system dynamically switches individual operators between both replication techniques based on the current workload characteristics. Our approach is implemented as an extension of an elastic data stream processing engine and is able to reduce the number of used hosts due to the smaller replication overhead. Based on a real-world evaluation we show that our system reduces resource usage by up to 19% compared to an active replication scheme.
This lecture covers the principles and the architectures of modern cluster schedulers, including Apache Mesos, Apache Yarn, Google Borg and K8s, and some notes on Omega
We believe that security *IS* a shared responsibility: when we give developers the power to create infrastructure, security becomes their responsibility, too.
During this meetup, we'd like to share our experience implementing security best practices that development teams can apply directly to build more robust and secure cloud environments. Make cloud security your team's sport!
Flexible and Real-Time Stream Processing with Apache Flink
This document provides an overview of stream processing with Apache Flink. It discusses the rise of stream processing and how it enables low-latency applications and real-time analysis. It then describes Flink's stream processing capabilities, including pipelining of data, fault tolerance through checkpointing and recovery, and integration with batch processing. The document also summarizes Flink's programming model, state management, and roadmap for further development.
NCache is an in-memory caching solution by Alachisoft that improves application scalability and performance by reducing database trips and serving frequently accessed data from memory. It is also used to cache session data in web farms.
Realtime infrastructure powers critical pieces of Uber. This talk will discuss the architecture, technical challenges, learnings and how a blend of open source infrastructure (Apache Kafka/Flink/Pinot) and in-house technologies have helped Uber scale and enabled SQL to power realtime decision making for city ops, data scientists, data analysts and engineers.
Unified Batch & Stream Processing with Apache Samza
The traditional lambda architecture has been a popular solution for joining offline batch operations with real time operations. This setup incurs a lot of developer and operational overhead since it involves maintaining code that produces the same result in two, potentially different distributed systems. In order to alleviate these problems, we need a unified framework for processing and building data pipelines across batch and stream data sources.
Based on our experiences running and developing Apache Samza at LinkedIn, we have enhanced the framework to support: a) Pluggable data sources and sinks; b) A deployment model supporting different execution environments such as Yarn or VMs; c) A unified processing API for developers to work seamlessly with batch and stream data. In this talk, we will cover how these design choices in Apache Samza help tackle the overhead of lambda architecture. We will use some real production use-cases to elaborate how LinkedIn leverages Apache Samza to build unified data processing pipelines.
Speaker
Navina Ramesh, Sr. Software Engineer, LinkedIn
This document provides an overview of Apache Kafka including its main components, architecture, and ecosystem. It describes how LinkedIn used Kafka to solve their data pipeline problem by decoupling systems and allowing for horizontal scaling. The key elements of Kafka are producers that publish data to topics, the Kafka cluster that stores streams of records in a distributed, replicated commit log, and consumers that subscribe to topics. Kafka Connect and the Schema Registry are also introduced as part of the Kafka ecosystem.
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance...
Discover how to avoid common pitfalls when shifting to an event-driven architecture (EDA) in order to boost system recovery and scalability. We cover Kafka Schema Registry, in-broker transformations, event sourcing, and more.
Disaster Recovery Experience at CACIB: Hardening Hadoop for Critical Financia...
Hadoop is becoming a standard platform for building critical financial applications such as risk reporting, trading and fraud detection. These applications require high level of SLAs (service-level agreement) in terms of RPO (Recovery Point Objective) and RTO (Recovery Time Objective). To achieve these SLAs, organizations need to build a disaster recovery plan that cover several layers ranging from the infrastructure to the clients going through the platform and the applications. In this talk, we will present the different architecture blueprints for disaster recovery as well as their corresponding SLA objectives. Then, we will focus on the stretch cluster solution that Crédit Agricole CIB is using in production. We will discuss the solution’s advantages, drawbacks and the impact of this approach on the global architecture. Finally, we will explain in detail how to configure and deploy this solution and how to integrate each layer (storage layer, processing layer...) into the architecture.
HA and DR Architecture for HANA on Power Deck - 2022-Nov-21.PPTX
This document discusses high availability (HA) and disaster recovery (DR) architectures for SAP HANA on IBM Power Systems. It provides an overview of typical HA/DR configurations including host auto-failover, SAP HANA system replication in performance-optimized and cost-optimized modes, and the roles of cluster managers like Pacemaker in automating failover. Key aspects covered are recovery point objectives (RPOs), recovery time objectives (RTOs), synchronous vs. asynchronous replication modes, and multi-tier DR landscapes.
An adaptive and eventually self healing framework for geo-distributed real-ti...
This document discusses an adaptive and self-healing framework for real-time data ingestion across geographically distributed data centers. It describes the problem domain of ingesting 15 billion events per day across multiple schemas and data types from various sources. The proposed architecture includes an ingestion layer using technologies like Storm, Kafka and HDFS to ingest, transform and replicate streaming and batch data. It also includes a serving layer using Aerospike to provide low-latency aggregated user views. Issues encountered with technologies like Storm and Kafka are discussed, as well as features still under development.
Apache Big Data EU 2016: Next Gen Big Data Analytics with Apache Apex
Stream data processing is becoming increasingly important to support business needs for faster time to insight and action, as volumes of information grow and sources multiply. Apache Apex (http://apex.apache.org/) is a unified platform for processing big data in motion in the Apache Hadoop ecosystem. Apex supports demanding use cases with:
* Architecture for high throughput, low latency and exactly-once processing semantics.
* Comprehensive library of building blocks including connectors for Kafka, Files, Cassandra, HBase and many more
* Java based with unobtrusive API to build real-time and batch applications and implement custom business logic.
* Advanced engine features for auto-scaling, dynamic changes, compute locality.
Apex has been in development since 2012 and is used in production in industries such as online advertising, the Internet of Things (IoT), and financial services.
Stephan Ewen - Experiences running Flink at Very Large Scale
This talk shares experiences from deploying and tuning Flink stream processing applications at very large scale. We share lessons learned from users, contributors, and our own experiments about running demanding streaming jobs at scale. The talk will explain what aspects currently render a job particularly demanding, show how to configure and tune a large-scale Flink job, and outline what the Flink community is working on to make the out-of-the-box experience as smooth as possible. We will, for example, dive into:
* analyzing and tuning checkpointing
* selecting and configuring state backends
* understanding common bottlenecks
* understanding and configuring network parameters
This document provides an overview of Apache Flink, an open-source stream processing framework. It discusses the rise of stream processing and how Flink enables low-latency applications through features like pipelining, operator state, fault tolerance using distributed snapshots, and integration with batch processing. The document also outlines Flink's roadmap, which includes graduating its DataStream API, fully managing windowing and state, and unifying batch and stream processing.
This document provides an overview of Apache Flink, an open-source platform for distributed stream and batch data processing. Flink allows for unified batch and stream processing with a simple yet powerful programming model. It features native stream processing, exactly-once fault tolerance based on consistent snapshots, and high performance optimized for streaming workloads. The document outlines Flink's APIs, state management, fault tolerance approach, and roadmap for continued improvements in 2015.
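The consistent-snapshot mechanism mentioned above, a variant of the Chandy-Lamport algorithm based on checkpoint barriers, can be sketched in plain Python (a toy model under assumed names, not Flink's implementation): a barrier flows with the data, and an operator snapshots its state once barriers from all input channels have arrived, buffering records from channels whose barrier came early.

```python
BARRIER = object()  # sentinel marking a checkpoint barrier in the stream


class AligningOperator:
    """Toy two-phase barrier alignment for a counting operator."""

    def __init__(self, num_inputs):
        self.num_inputs = num_inputs
        self.aligned = set()   # channels whose barrier has arrived
        self.buffered = []     # records held back during alignment
        self.count = 0         # the operator's state: a simple counter
        self.snapshots = []    # consistent snapshots taken so far

    def receive(self, channel, record):
        if record is BARRIER:
            self.aligned.add(channel)
            if len(self.aligned) == self.num_inputs:
                # All barriers arrived: the state is consistent. Snapshot,
                # then resume processing the records held back meanwhile.
                self.snapshots.append(self.count)
                self.aligned.clear()
                self.count += len(self.buffered)
                self.buffered.clear()
        elif channel in self.aligned:
            # This channel is past the barrier; hold its records back so
            # post-barrier data never leaks into the snapshot.
            self.buffered.append(record)
        else:
            self.count += 1
```

The buffering step is exactly what guarantees the snapshot reflects all pre-barrier records and no post-barrier ones, which is the basis of Flink's exactly-once state semantics.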
The document discusses troubleshooting performance issues for SQL Server. It begins with an introduction and case study on the MS Society of Canada's website. It then discusses optimizing the environment, using Performance Monitor (PerfMon) to monitor performance, and concludes with recommendations to address issues like high CPU usage, slow disk speeds, and insufficient memory.
Modern Stream Processing With Apache Flink @ GOTO Berlin 2017
In our fast-moving world it becomes more and more important for companies to gain near real-time insights from their data to make faster decisions. These insights not only provide a competitive edge over rivals but also enable a company to create completely new services and products. Amongst others, predictive user interfaces and online recommendations can be implemented when one is able to process large amounts of data in real-time.
Apache Flink, one of the most advanced open source distributed stream processing platforms, allows you to extract business intelligence from your data in near real-time. With Apache Flink it is possible to process billions of messages with milliseconds latency. Moreover, its expressive APIs allow you to quickly solve your problems, ranging from classical analytical workloads to distributed event-driven applications.
In this talk, I will introduce Apache Flink and explain how it enables users to develop distributed applications and process analytical workloads alike. Starting with Flink’s basic concepts of fault-tolerance, statefulness and event-time aware processing, we will take a look at the different APIs and what they allow us to do. The talk will be concluded by demonstrating how we can use Flink’s higher level abstractions such as FlinkCEP and StreamSQL to do declarative stream processing.
Big Data Berlin v8.0 Stream Processing with Apache Apex
This document discusses Apache Apex, an open source stream processing framework. It provides an overview of stream data processing and common use cases. It then describes key Apache Apex capabilities like in-memory distributed processing, scalability, fault tolerance, and state management. The document also highlights several customer use cases from companies like PubMatic, GE, and Silver Spring Networks that use Apache Apex for real-time analytics on data from sources like IoT sensors, ad networks, and smart grids.
Software Engineering and Project Management - Introduction to Project Management
Introduction to Project Management: Introduction, Project and Importance of Project Management, Contract Management, Activities Covered by Software Project Management, Plans, Methods and Methodologies, some ways of categorizing Software Projects, Stakeholders, Setting Objectives, Business Case, Project Success and Failure, Management and Management Control, Project Management life cycle, Traditional versus Modern Project Management Practices.
Bravo Six, Going Realtime. Transitioning Activision Data Pipeline to Streaming - Yaroslav Tkachenko
Activision Data team has been running a data pipeline for a variety of Activision games for many years. Historically we used a mix of micro-batch microservices coupled with classic Big Data tools like Hadoop and Hive for ETL. As a result, it could take up to 4-6 hours for data to be available to the end customers.
In the last few years, the adoption of data in the organization skyrocketed. We needed to de-legacy our data pipeline and provide near-realtime access to data in order to improve reporting, gather insights faster, and power web and mobile applications. I want to tell a story about heavily leveraging Kafka Streams and Kafka Connect to reduce the end-to-end latency to minutes, while at the same time making the pipeline easier and cheaper to run. We were able to successfully validate the new data pipeline by launching two massive games just 4 weeks apart.
Large-Scale Stream Processing in the Hadoop Ecosystem - Gyula Fóra
Distributed stream processing is one of the hot topics in big data analytics today. An increasing number of applications are shifting from traditional static data sources to processing the incoming data in real-time. Performing large scale stream processing or analysis requires specialized tools and techniques which have become publicly available in the last couple of years.
This talk will give a deep, technical overview of the top-level Apache stream processing landscape. We compare several frameworks including Spark, Storm, Samza and Flink. Our goal is to highlight the strengths and weaknesses of the individual systems in a project-neutral manner to help select the best tools for specific applications. We will touch on the topics of API expressivity, runtime architecture, performance, fault-tolerance and strong use-cases for the individual frameworks.
In order to effectively predict and prevent online fraud in real time, Sift Science stores hundreds of terabytes of data in HBase—and needs it to be always available. This talk will cover how we used circuit-breaking, cluster failover, monitoring, and automated recovery procedures to improve our HBase uptime from 99.7% to 99.99% on top of unreliable cloud hardware and networks.
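The talk itself ships no code, but the circuit-breaking pattern it describes can be sketched in a few lines. This is an illustrative sketch only; all class and function names are invented here, not Sift Science's implementation:

```python
import time

class CircuitBreaker:
    """Trips open after `max_failures` consecutive errors; after
    `reset_timeout` seconds it allows a single probe call (half-open)."""
    def __init__(self, max_failures=3, reset_timeout=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None => closed (calls flow normally)

    def allow(self):
        if self.opened_at is None:
            return True
        # Half-open: permit a probe once the timeout has elapsed.
        return self.clock() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = self.clock()

def call_with_breaker(breaker, fn, fallback):
    """Route to `fallback` (e.g. a standby cluster) when tripped."""
    if not breaker.allow():
        return fallback()
    try:
        result = fn()
    except Exception:
        breaker.record_failure()
        return fallback()
    breaker.record_success()
    return result
```

Combined with cluster failover, the fallback callable would issue the same read against a replica cluster instead of failing the request.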
Webinar: Deep Dive on Apache Flink State - Seth Wiesman - Ververica
Apache Flink is a world-class stateful stream processor that presents a huge variety of optional features and configuration choices to the user. Determining the optimal choice for any production environment and use case can be challenging. In this talk, we will explore and discuss the universe of Flink configuration with respect to state and state backends.
We will start with a closer look under the hood, at core data structures and algorithms, to build the foundation for understanding the impact of tuning parameters and the cost-benefit tradeoffs that come with certain features and options. In particular, we will focus on state backend choices (Heap vs RocksDB), tuning checkpointing (incremental checkpoints, ...) and recovery (local recovery), serializers, and Apache Flink's new state migration capabilities.
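To make the incremental-checkpoint idea concrete, here is a minimal sketch of a keyed store that checkpoints only the keys modified since the last checkpoint. This is a toy model, not Flink's actual mechanism (Flink/RocksDB tracks changes at the SST-file level rather than per key):

```python
class IncrementalStore:
    """Toy keyed state store: each checkpoint uploads only the delta
    (keys changed since the previous checkpoint); recovery replays
    the deltas in order."""
    def __init__(self):
        self.state = {}
        self.dirty = set()        # keys changed since last checkpoint
        self.checkpoints = []     # list of {key: value} deltas

    def put(self, key, value):
        self.state[key] = value
        self.dirty.add(key)

    def checkpoint(self):
        delta = {k: self.state[k] for k in self.dirty}
        self.checkpoints.append(delta)
        self.dirty.clear()
        return delta

    def restore(self):
        restored = {}
        for delta in self.checkpoints:  # replay deltas oldest-first
            restored.update(delta)
        return restored
```

The tradeoff the talk discusses falls out directly: checkpoints are small and fast when little state changes, but recovery has to read a chain of deltas rather than one full snapshot.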
hbaseconasia2017: Building online HBase cluster of Zhihu based on Kubernetes - HBaseCon
Zhiyong Bai
As a high-performance, scalable key-value database, HBase is used at Zhihu to provide an online data store alongside MySQL and Redis. Zhihu's platform team had accumulated experience with container technology, and this time, based on Kubernetes, we built a flexible platform for online HBase: we rapidly create multiple logically isolated HBase clusters on a shared physical cluster and provide customized service for different business needs. Combined with Consul and a DNS server, we implemented highly available access to HBase using clients written mainly in Python. This presentation shares the architecture of the online HBase platform at Zhihu and some practical experience from production environments.
We’ll present details about Argus, a time-series monitoring and alerting platform developed at Salesforce to provide insight into the health of infrastructure as an alternative to systems such as Graphite and Seyren.
Rolling Out Apache HBase for Mobile Offerings at Visa - HBaseCon
Partha Saha and CW Chung (Visa)
Visa has embarked on an ambitious multi-year redesign of its entire data platform that powers its business. As part of this plan, the Apache Hadoop ecosystem, including HBase, will now become a staple in many of its solutions. Here, we will describe our journey in rolling out a high-availability NoSQL solution based on HBase behind some of our prominent mobile offerings.
Tapad's data pipeline is an elastic combination of technologies (Kafka, Hadoop, Avro, Scalding) that forms a reliable system for analytics, realtime and batch graph-building, and logging. In this talk, I will speak about the creation and evolution of the pipeline, and a concrete example – a day in the life of an event tracking pixel. We'll also talk about common challenges that we've overcome such as integrating different pieces of the system, schema evolution, queuing, and data retention policies.
Kafka Streams is a lightweight stream processing library included in Apache Kafka since version 0.10. It provides a simple yet powerful API for building stream processing applications. The API uses a domain-specific language that allows developers to define stream processing topologies where data from Kafka topics acts as input streams and can be transformed before writing the results to output topics. The library handles common stream processing tasks like state management, windowing, and fault tolerance using Kafka's distributed and fault-tolerant architecture.
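Kafka Streams' DSL itself is Java; as a language-neutral illustration of the topology idea described above, here is a toy pipeline in which each step either transforms a record (map) or drops it (filter). All names are invented for this sketch and are not part of the Kafka Streams API:

```python
def build_topology(*steps):
    """Compose per-record transformations into a pipeline. A step
    returning None drops the record (filter); otherwise the returned
    value flows to the next step (map)."""
    def run(records):
        out = []
        for rec in records:
            for step in steps:
                rec = step(rec)
                if rec is None:
                    break          # record filtered out
            else:
                out.append(rec)    # survived every step
        return out
    return run

# Example: uppercase values, then keep only those starting with "A".
topology = build_topology(
    lambda kv: (kv[0], kv[1].upper()),
    lambda kv: kv if kv[1].startswith("A") else None,
)
```

In the real library, the input would be a Kafka topic and the output list would instead be written to an output topic, with state stores and windowing handled by the runtime.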
Performance Tuning RocksDB for Kafka Streams' State Stores (Dhruba Borthakur, ...) - Confluent
RocksDB is the default state store for Kafka Streams. In this talk, we will discuss how to improve single node performance of the state store by tuning RocksDB and how to efficiently identify issues in the setup. We start with a short description of the RocksDB architecture. We discuss how Kafka Streams restores the state stores from Kafka by leveraging RocksDB features for bulk loading of data. We give examples of hand-tuning the RocksDB state stores based on Kafka Streams metrics and RocksDB’s metrics. At the end, we dive into a few RocksDB command line utilities that allow you to debug your setup and dump data from a state store. We illustrate the usage of the utilities with a few real-life use cases. The key takeaway from the session is the ability to understand the internal details of the default state store in Kafka Streams so that engineers can fine-tune their performance for different varieties of workloads and operate the state stores in a more robust manner.
ScyllaDB: What could you do with Cassandra compatibility at 1.8 million reque... - Data Con LA
Scylla is a new, open-source NoSQL data store with a novel design optimized for modern hardware, capable of 1.8 million requests per second per node, while providing Apache Cassandra compatibility and scaling properties. While conventional NoSQL databases suffer from latency hiccups, expensive locking, and low throughput due to low processor utilization, the Scylla design is based on a modern shared-nothing approach. Scylla runs multiple engines, one per core, each with its own memory, CPU and multi-queue NIC. The result is a NoSQL database that delivers an order of magnitude more performance, with less performance tuning needed from the administrator.
With extra performance to work with, NoSQL projects can have more flexibility to focus on other concerns, such as functionality and time to market. Come for the tech details on what Scylla does under the hood, and leave with some ideas on how to do more with NoSQL, faster.
Speaker bio
Don Marti is technical marketing manager for ScyllaDB. He has written for Linux Weekly News, Linux Journal, and other publications. He co-founded the Linux consulting firm Electric Lichen. Don is a strategic advisor for Mozilla, and has previously served as president and vice president of the Silicon Valley Linux Users Group and on the program committees for Uselinux, Codecon, and LinuxWorld Conference and Expo.
Essential ingredients for real time stream processing @Scale by Kartik Param... - Big Data Spain
This document discusses stream processing at scale. It begins with an introduction and agenda. It then discusses scenarios for stream processing like newsfeeds, cybersecurity, and IoT. It presents the canonical stream processing architecture with data buses, real-time and batch processing, and ingestion/serving tiers. The document dives into the essential ingredients for stream processing: scale, reprocessing, accuracy of results, and easy programmability. It provides examples and strategies for each of these essential ingredients to achieve efficient and accurate stream processing at large scales.
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix - DataWorks Summit
Recently, Apache Phoenix has been integrated with Apache (incubator) Omid transaction processing service, to provide ultra-high system throughput with ultra-low latency overhead. Phoenix has been shown to scale beyond 0.5M transactions per second with sub-5ms latency for short transactions on industry-standard hardware. On the other hand, Omid has been extended to support secondary indexes, multi-snapshot SQL queries, and massive-write transactions.
These innovative features make Phoenix an excellent choice for translytics applications, which allow converged transaction processing and analytics. We share the story of building the next-gen data tier for advertising platforms at Verizon Media that exploits Phoenix and Omid to support multi-feed real-time ingestion and AI pipelines in one place, and discuss the lessons learned.
Bellevue Big Data meetup: Dive Deep into Spark Streaming - Santosh Sahoo
A discussion of the code and architecture for building a realtime streaming application using Spark and Kafka. This demo presents some use cases and patterns of different streaming frameworks.
(Berkeley CS186 guest lecture) Big Data Analytics Systems: What Goes Around Comes Around - Reynold Xin
Introduction to MapReduce, GFS, HDFS, Spark, and differences between "Big Data" and database systems.
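Since the lecture introduces MapReduce, a minimal single-process word-count sketch of the map, shuffle, and reduce phases may help. This is a stand-in for the distributed version (no GFS/HDFS, no partitioned reducers):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    """Map: emit (word, 1) for every word in an input line."""
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework would
    route them to reducers."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: aggregate all values for one key."""
    return key, sum(values)

def word_count(lines):
    pairs = chain.from_iterable(map_phase(line) for line in lines)
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
```

The "what goes around comes around" point of the lecture is visible even here: map/shuffle/reduce is relational group-by-aggregate in disguise.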
High cardinality time series search: A new level of scale - Data Day Texas 2016 - Eric Sammer
Modern search systems provide incredible feature sets, developer-friendly APIs, and low latency indexing and query response. By some measures, these systems operate "at scale," but rarely is that quantified. Customers of Rocana typically look to push ingest rates in excess of 1 million events per second, retaining years of data online for query, with the expectation of sub-second response times for any reasonably sized subset of data.
We quickly found that the tradeoffs made by general purpose search systems, while right for common use cases, were less appropriate for these high cardinality, large scale use cases.
This session details the architecture, tradeoffs, and interesting implementation decisions made in building a new time series optimized distributed search system using Apache Lucene, Kafka, and HDFS. Data ingestion and durability, index and metadata organization, storage, query scheduling and optimization, and failure modes will be covered. Finally, a summary of the results achieved will be shown.
Intro to Apache Apex - Next Gen Platform for Ingest and Transform - Apache Apex
Introduction to Apache Apex - The next generation native Hadoop platform. This talk will cover details about how Apache Apex can be used as a powerful and versatile platform for big data processing. Common usage of Apache Apex includes big data ingestion, streaming analytics, ETL, fast batch alerts, real-time actions, threat detection, etc.
Bio:
Pramod Immaneni is Apache Apex PMC member and senior architect at DataTorrent, where he works on Apache Apex and specializes in big data platform and applications. Prior to DataTorrent, he was a co-founder and CTO of Leaf Networks LLC, eventually acquired by Netgear Inc, where he built products in core networking space and was granted patents in peer-to-peer VPNs.
Maheedhar Gunturu presented on connecting Kafka message systems with Scylla. He discussed the benefits of message queues like Kafka, including centralized infrastructure, buffering capabilities, and streaming data transformations. He then explained Kafka Connect, which provides a standardized framework for building distributed, scalable connectors. Scylla and Cassandra connectors are available today, with a Scylla shard-aware connector in development.
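The shard-aware idea, routing each partition key's token to the specific CPU core (shard) that owns it, can be sketched as follows. The real driver uses Murmur3 tokens and Scylla's own token-to-shard scheme, so the hash function and the plain modulo mapping below are simplifying assumptions:

```python
import hashlib

def token_of(partition_key: bytes) -> int:
    """Stand-in for Scylla's Murmur3 partition token (md5 here,
    purely for a deterministic, well-spread integer)."""
    return int.from_bytes(hashlib.md5(partition_key).digest()[:8], "big")

def shard_of(token: int, shard_count: int) -> int:
    """Simplified token -> shard mapping; real Scylla uses a
    biased-token scheme rather than a plain modulo."""
    return token % shard_count

def route(partition_key: bytes, shard_count: int = 8) -> int:
    """Pick the shard a shard-aware client would connect to."""
    return shard_of(token_of(partition_key), shard_count)
```

The payoff of shard-awareness is that a write lands directly on the owning core's connection, avoiding a cross-core hop inside the server.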
From Batch to Streaming with Apache Apex - Dataworks Summit 2017 - Apache Apex
This document discusses transitioning from batch to streaming data processing using Apache Apex. It provides an overview of Apex and how it can be used to build real-time streaming applications. Examples are given of how to build an application that processes Twitter data streams and visualizes results. The document also outlines Apex's capabilities for scalable stream processing, queryable state, and its growing library of connectors and transformations.
Data Pipeline with Kafka. This slide deck includes:
Kafka introduction, topics/partitions, producers/consumers, quick start, offset monitoring, example code, and Camus
Lambda-less Stream Processing @Scale in LinkedIn
The document discusses challenges with stream processing including data accuracy and reprocessing. It proposes a "lambda-less" approach using windowed computations and handling late and out-of-order events to produce eventually correct results. Samza is used in LinkedIn's implementation to store streaming data locally using RocksDB for processing within configurable windows. The approach avoids code duplication compared to traditional lambda architectures while still supporting reprocessing through resetting offsets. Challenges remain in merging online and reprocessed results at large scale.
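The windowed, late-event-tolerant computation described above can be sketched with a toy tumbling-window counter. This is an illustration of the "eventually correct" idea, not LinkedIn's Samza code; the lateness bound and in-memory dict stand in for configurable windows backed by RocksDB:

```python
from collections import defaultdict

class WindowedCounter:
    """Tumbling-window event counter that accepts out-of-order events
    up to `allowed_lateness` behind the highest timestamp seen, then
    treats windows as finalized (eventually correct results)."""
    def __init__(self, window_size, allowed_lateness):
        self.window_size = window_size
        self.allowed_lateness = allowed_lateness
        self.counts = defaultdict(int)   # window start -> count
        self.max_ts = 0

    def on_event(self, ts):
        self.max_ts = max(self.max_ts, ts)
        window = ts - ts % self.window_size
        if ts >= self.max_ts - self.allowed_lateness:
            self.counts[window] += 1
            return True
        return False                     # dropped: arrived too late

    def finalized(self):
        """Windows whose close time plus lateness has passed; these
        results will no longer change."""
        return {w: c for w, c in self.counts.items()
                if w + self.window_size + self.allowed_lateness <= self.max_ts}
```

Reprocessing in this model is just resetting the input offset and rebuilding the windows, avoiding a second batch codepath.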
Dataflow - A Unified Model for Batch and Streaming Data Processing - DoiT International
Batch and Streaming Data Processing and Vizualize 300Tb in 5 Seconds meetup on April 18th, 2016 (http://www.meetup.com/Big-things-are-happening-here/events/229532500)
Benchmarking Apache Samza: 1.2 million messages per sec per node - Tao Feng
This document summarizes benchmarking tests of Apache Samza's performance processing streaming data. The tests measured Samza's performance on different processing tasks: message passing achieved 1.2 million messages per second per node; key counting with an in-memory store achieved 1 million messages per second; key counting with RocksDB storage was 443k messages per second; and key counting with RocksDB storage and changelog was 300k messages per second. The benchmarks provide a foundation for developing a capacity model for Samza's performance on high-volume streaming data applications.
Ehtsham Elahi, Senior Research Engineer, Personalization Science and Engineer... - MLconf
Spark and GraphX in the Netflix Recommender System: We at Netflix strive to deliver maximum enjoyment and entertainment to our millions of members across the world. We do so by having great content and by constantly innovating on our product. A key strategy to optimize both is to follow a data-driven method. Data allows us to find optimal approaches to applications such as content buying or our renowned personalization algorithms. But, in order to learn from this data, we need to be smart about the algorithms we use, how we apply them, and how we can scale them to our volume of data (over 50 million members and 5 billion hours streamed over three months). In this talk we describe how Spark and GraphX can be leveraged to address some of our scale challenges. In particular, we share insights and lessons learned on how to run large probabilistic clustering and graph diffusion algorithms on top of GraphX, making it possible to apply them at Netflix scale.
This document discusses high performance spatial-temporal trajectory analysis using Spark. It covers the background of analyzing mobile signaling data to enable smarter urban planning. The solution architecture includes data sources, distributed file system, computation engine, and visualization. Technical designs address the big data platform, data governance, algorithm models, and Spark spatial computing. Example scenarios are presented for population heatmaps, commute routes, and office-residence imbalance analysis.
Developing Realtime Data Pipelines With Apache Kafka - Joe Stein
Developing Realtime Data Pipelines With Apache Kafka. Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients. Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of co-ordinated consumers. Messages are persisted on disk and replicated within the cluster to prevent data loss. Each broker can handle terabytes of messages without performance impact. Kafka has a modern cluster-centric design that offers strong durability and fault-tolerance guarantees.
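The key-to-partition mapping that spreads a stream across the cluster, while keeping all records for one key in order on one partition, can be sketched as follows. Kafka's default partitioner hashes keys with murmur2; crc32 below is a stand-in, and `PartitionedLog` is a toy, not the broker:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Sticky key -> partition mapping in the spirit of Kafka's
    default partitioner (which uses murmur2 rather than crc32)."""
    return zlib.crc32(key) % num_partitions

class PartitionedLog:
    """Append-only log split across partitions; ordering is guaranteed
    only within a partition, which is why keyed records must always
    hash to the same one."""
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def append(self, key: bytes, value):
        p = partition_for(key, len(self.partitions))
        self.partitions[p].append((key, value))
        return p, len(self.partitions[p]) - 1   # (partition, offset)
```

Replication and disk persistence, which the abstract highlights, sit below this layer: each partition's list would be a replicated on-disk segment file.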
codecentric AG: CQRS and Event Sourcing Applications with Cassandra - DataStax Academy
CQRS (Command Query Responsibility Segregation) is a pattern that separates the process of querying data from updating it. While a query only returns data without any side effects, a command is designed to change data. CQRS is often combined with Event Sourcing, an architecture in which all changes to application state are stored as a sequence of events.
Because of its great capability for storing time-series data, Cassandra is a perfect fit for implementing the event store. But there are still a lot of open questions: What about the data modeling? Which techniques will be used to process and store data in the Cassandra database? How can the current state of the application be accessed without replaying every event? And what about failure handling?
In this talk, I will give a brief introduction to CQRS and the Event Sourcing pattern and will then answer the questions above using a real life example of a data store for customer data.
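One of the questions above, accessing current state without replaying every event, is usually answered with snapshots. A minimal in-memory sketch (illustrative only; a real implementation would persist events and snapshots in Cassandra tables, keyed by aggregate id):

```python
class EventStore:
    """Append-only per-aggregate event log with periodic snapshots,
    so current state is rebuilt from the latest snapshot plus only
    the events recorded after it."""
    def __init__(self, snapshot_every=100):
        self.events = {}        # aggregate id -> [event, ...]
        self.snapshots = {}     # aggregate id -> (version, state)
        self.snapshot_every = snapshot_every

    def append(self, agg_id, event, apply_fn, initial):
        log = self.events.setdefault(agg_id, [])
        log.append(event)
        if len(log) % self.snapshot_every == 0:
            # Materialize a snapshot at this version.
            self.snapshots[agg_id] = (len(log),
                                      self.load(agg_id, apply_fn, initial))

    def load(self, agg_id, apply_fn, initial):
        version, state = self.snapshots.get(agg_id, (0, initial))
        for event in self.events.get(agg_id, [])[version:]:
            state = apply_fn(state, event)   # fold remaining events
        return state
```

For a customer-data store, `apply_fn` would fold events like "address changed" into the customer record; reads after a snapshot touch only the tail of the log.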
Till Rohrmann - Dynamic Scaling - How Apache Flink adapts to changing workloads - Flink Forward
http://flink-forward.org/kb_sessions/dynamic-scaling-how-apache-flink-adapts-to-changing-workloads/
Modern stream processing engines not only have to process millions of events per second at sub-second latency but also have to cope with constantly changing workloads. Due to the dynamic nature of stream applications, where the number of incoming events can vary strongly over time, systems cannot reliably predetermine the amount of required resources. In order to meet guaranteed SLAs while utilizing system resources as efficiently as possible, frameworks like Apache Flink have to adapt their resource consumption dynamically. In this talk, we will take a look under the hood and explain how Flink scales stateful applications in and out. Starting with the concept of key groups and partitionable state, we will cover ways to detect bottlenecks in streaming jobs and discuss efficient strategies for scaling operators out with minimal downtime.
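The key-group mechanism can be illustrated with the two mappings Flink documents: a key hashes to one of `maxParallelism` key groups, and contiguous ranges of key groups are assigned to operator subtasks, so rescaling moves whole ranges of state rather than individual keys. A simplified sketch (a plain integer hash stands in for Flink's key hashing):

```python
def key_group_for(key_hash: int, max_parallelism: int) -> int:
    """Key -> key group. Fixed for the lifetime of the job, so state
    is always checkpointed per key group."""
    return key_hash % max_parallelism

def operator_for(key_group: int, max_parallelism: int,
                 parallelism: int) -> int:
    """Key group -> operator subtask: contiguous key-group ranges per
    subtask, in the spirit of Flink's KeyGroupRangeAssignment."""
    return key_group * parallelism // max_parallelism
```

Rescaling from parallelism 2 to 4 then only reassigns key-group ranges between subtasks; no key ever changes its key group, which is what makes state redistribution cheap.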
Latency-aware Elastic Scaling for Distributed Data Stream Processing Systems - Zbigniew Jerzak
Elastic scaling allows a data stream processing system to react to a dynamically changing query or event workload by automatically scaling in or out, so that both unpredictable load peaks and underload situations can be handled. However, each scaling decision comes with a latency penalty due to the required operator movements. Therefore, in practice an elastic system may improve system utilization, but it cannot provide the latency guarantees defined by a service level agreement (SLA). In this paper we introduce an elastic scaling system which optimizes utilization under latency constraints defined by an SLA. Specifically, we present a model that estimates the latency spike created by a set of operator movements. We use this model to build a latency-aware elastic operator placement algorithm which minimizes the number of latency violations. We show that our solution is able to reduce the 90th percentile of the end-to-end latency by up to 30% and reduce the number of latency violations by 50%. The achieved system utilization for our approach is comparable to a scaling strategy that does not use latency as an optimization target.
Auto-scaling Techniques for Elastic Data Stream Processing - Zbigniew Jerzak
An elastic data stream processing system is able to handle changes in workload by dynamically scaling out and scaling in. This allows unexpected load spikes to be handled without the need for constant overprovisioning. One of the major challenges for an elastic system is to find the right point in time to scale in or to scale out. Finding such a point is difficult, as it depends on constantly changing workload and system characteristics. In this paper we investigate the application of different auto-scaling techniques for solving this problem. Specifically: (1) we formulate basic requirements for an auto-scaling technique used in an elastic data stream processing system, (2) we use the formulated requirements to select the best auto-scaling techniques, and (3) we evaluate the selected auto-scaling techniques using real-world data. Our experiments show that the auto-scaling techniques used in existing elastic data stream processing systems perform worse than the strategies used in our work.
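The simplest family of auto-scaling techniques the paper's requirements speak to is a reactive threshold policy with hysteresis. A toy sketch (thresholds, the utilization metric, and the patience value are all arbitrary choices for illustration, not the paper's selected techniques):

```python
class ThresholdScaler:
    """Reactive threshold auto-scaler with hysteresis: it acts only
    after `patience` consecutive samples beyond a threshold, which
    dampens flapping on noisy workloads."""
    def __init__(self, high=0.8, low=0.3, patience=3):
        self.high, self.low, self.patience = high, low, patience
        self.over = self.under = 0   # consecutive-sample counters

    def observe(self, utilization):
        self.over = self.over + 1 if utilization > self.high else 0
        self.under = self.under + 1 if utilization < self.low else 0
        if self.over >= self.patience:
            self.over = 0
            return "scale-out"
        if self.under >= self.patience:
            self.under = 0
            return "scale-in"
        return "hold"
```

The paper's point is precisely that such purely reactive rules lag behind workload changes; predictive techniques can trigger the decision before the overload materializes.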
Adaptive Replication for Elastic Data Stream Processing - Zbigniew Jerzak
A major challenge for cloud-based systems is to be fault tolerant so as to cope with an increasing probability of faults in cloud environments. This is especially true for in-memory computing solutions like data stream processing systems, where a single host failure might result in an unrecoverable information loss.
In state-of-the-art data streaming systems, either active replication or upstream backup is applied to ensure fault tolerance; these have a high resource overhead or a high recovery time, respectively. This paper combines the two fault tolerance mechanisms in one system to minimize the number of violations of a user-defined recovery time threshold and to reduce the overall resource consumption compared to active replication. The system dynamically switches individual operators between both replication techniques based on the current workload characteristics. Our approach is implemented as an extension of an elastic data stream processing engine, which is able to reduce the number of used hosts due to the smaller replication overhead. Based on a real-world evaluation we show that our system is able to reduce resource usage by up to 19% compared to an active replication scheme.
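The per-operator switch between the two mechanisms can be caricatured as a policy function. The linear recovery-time model and the replay rate below are illustrative assumptions for the sketch, not the paper's actual model:

```python
def choose_replication(state_size_mb: float,
                       recovery_threshold_s: float,
                       replay_rate_mb_s: float = 50.0) -> str:
    """Pick a fault-tolerance mode for one operator. Upstream backup
    is cheap, but recovery time grows with the state to replay;
    switch to active replication when the estimated replay would
    violate the user-defined recovery threshold."""
    estimated_recovery_s = state_size_mb / replay_rate_mb_s
    if estimated_recovery_s > recovery_threshold_s:
        return "active-replication"   # near-instant failover, ~2x resources
    return "upstream-backup"          # low overhead, slower recovery
```

Re-evaluating this decision as workload characteristics change is what makes the scheme adaptive: lightweight operators stay on upstream backup, while state-heavy ones get a hot replica.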
This lecture covers the principles and architectures of modern cluster schedulers, including Apache Mesos, Apache YARN, Google Borg and Kubernetes, with some notes on Omega.
We believe that security *IS* a shared responsibility: when we give developers the power to create infrastructure, security becomes their responsibility, too.
During this meetup, we'd like to share our experience with security best practices that development teams can apply directly to build more robust and secure cloud environments. Make cloud security your team's sport!
Flexible and Real-Time Stream Processing with Apache Flink - DataWorks Summit
This document provides an overview of stream processing with Apache Flink. It discusses the rise of stream processing and how it enables low-latency applications and real-time analysis. It then describes Flink's stream processing capabilities, including pipelining of data, fault tolerance through checkpointing and recovery, and integration with batch processing. The document also summarizes Flink's programming model, state management, and roadmap for further development.
Application Scalability in Server Farms - NCache - Alachisoft
NCache is an in-memory caching solution by Alachisoft that improves application scalability and performance by storing frequently accessed data in memory and reducing database trips. It is also used to cache session data in web farms.
Scaling up Uber's real time data analytics - Xiang Fu
Realtime infrastructure powers critical pieces of Uber. This talk will discuss the architecture, technical challenges, learnings and how a blend of open source infrastructure (Apache Kafka/Flink/Pinot) and in-house technologies have helped Uber scale and enabled SQL to power realtime decision making for city ops, data scientists, data analysts and engineers.
Unified Batch & Stream Processing with Apache Samza - DataWorks Summit
The traditional lambda architecture has been a popular solution for joining offline batch operations with real time operations. This setup incurs a lot of developer and operational overhead since it involves maintaining code that produces the same result in two, potentially different distributed systems. In order to alleviate these problems, we need a unified framework for processing and building data pipelines across batch and stream data sources.
Based on our experiences running and developing Apache Samza at LinkedIn, we have enhanced the framework to support: a) Pluggable data sources and sinks; b) A deployment model supporting different execution environments such as Yarn or VMs; c) A unified processing API for developers to work seamlessly with batch and stream data. In this talk, we will cover how these design choices in Apache Samza help tackle the overhead of lambda architecture. We will use some real production use-cases to elaborate how LinkedIn leverages Apache Samza to build unified data processing pipelines.
Speaker
Navina Ramesh, Sr. Software Engineer, LinkedIn
This document provides an overview of Apache Kafka including its main components, architecture, and ecosystem. It describes how LinkedIn used Kafka to solve their data pipeline problem by decoupling systems and allowing for horizontal scaling. The key elements of Kafka are producers that publish data to topics, the Kafka cluster that stores streams of records in a distributed, replicated commit log, and consumers that subscribe to topics. Kafka Connect and the Schema Registry are also introduced as part of the Kafka ecosystem.
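The consumer side of the commit-log model, where each consumer group tracks its own offset so a topic can be re-read from any committed position, can be sketched like this. It is a toy single-partition stand-in, not the Kafka client API:

```python
class ConsumerGroup:
    """Consumers track their own position (offset) into a partition's
    log; the broker never deletes records on read, so independent
    groups and replays are cheap (Kafka-style)."""
    def __init__(self, log):
        self.log = log          # list of records in one partition
        self.committed = 0      # offset of the next record to read

    def poll(self, max_records=10):
        return self.log[self.committed:self.committed + max_records]

    def commit(self, n):
        """Advance the committed offset past n consumed records."""
        self.committed += n

    def seek(self, offset):
        """Rewind (or fast-forward) for reprocessing."""
        self.committed = offset
```

Decoupling producers from consumers this way is exactly how the LinkedIn pipeline problem described above is solved: many downstream systems read the same log at their own pace.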
Event-Driven Architecture Masterclass: Engineering a Robust, High-performance... - ScyllaDB
Discover how to avoid common pitfalls when shifting to an event-driven architecture (EDA) in order to boost system recovery and scalability. We cover Kafka Schema Registry, in-broker transformations, event sourcing, and more.
Disaster Recovery Experience at CACIB: Hardening Hadoop for Critical Financia... - DataWorks Summit
Hadoop is becoming a standard platform for building critical financial applications such as risk reporting, trading and fraud detection. These applications require a high level of SLAs (service-level agreements) in terms of RPO (Recovery Point Objective) and RTO (Recovery Time Objective). To achieve these SLAs, organizations need to build a disaster recovery plan that covers several layers, ranging from the infrastructure to the clients, going through the platform and the applications. In this talk, we will present the different architecture blueprints for disaster recovery as well as their corresponding SLA objectives. Then, we will focus on the stretch cluster solution that Crédit Agricole CIB is using in production. We will discuss the solution’s advantages, drawbacks and the impact of this approach on the global architecture. Finally, we will explain in detail how to configure and deploy this solution and how to integrate each layer (storage layer, processing layer...) into the architecture.
HA and DR Architecture for HANA on Power Deck - 2022-Nov-21.PPTX - ThinL389917
This document discusses high availability (HA) and disaster recovery (DR) architectures for SAP HANA on IBM Power Systems. It provides an overview of typical HA/DR configurations including host auto-failover, SAP HANA system replication in performance-optimized and cost-optimized modes, and the roles of cluster managers like Pacemaker in automating failover. Key aspects covered are recovery point objectives (RPOs), recovery time objectives (RTOs), synchronous vs. asynchronous replication modes, and multi-tier DR landscapes.
An adaptive and eventually self healing framework for geo-distributed real-ti... - Angad Singh
This document discusses an adaptive and self-healing framework for real-time data ingestion across geographically distributed data centers. It describes the problem domain of ingesting 15 billion events per day across multiple schemas and data types from various sources. The proposed architecture includes an ingestion layer using technologies like Storm, Kafka and HDFS to ingest, transform and replicate streaming and batch data. It also includes a serving layer using Aerospike to provide low-latency aggregated user views. Issues encountered with technologies like Storm and Kafka are discussed, as well as features still under development.
Apache Big Data EU 2016: Next Gen Big Data Analytics with Apache Apex - Apache Apex
Stream data processing is becoming increasingly important to support business needs for faster time to insight and action with growing volume of information from more sources. Apache Apex (http://apex.apache.org/) is a unified big data in motion processing platform for the Apache Hadoop ecosystem. Apex supports demanding use cases with:
* Architecture for high throughput, low latency and exactly-once processing semantics.
* Comprehensive library of building blocks including connectors for Kafka, Files, Cassandra, HBase and many more
* Java based with unobtrusive API to build real-time and batch applications and implement custom business logic.
* Advanced engine features for auto-scaling, dynamic changes, compute locality.
Apex has been under development since 2012 and is used in production in various industries such as online advertising, the Internet of Things (IoT) and financial services.
Stephan Ewen - Experiences running Flink at Very Large Scale - Ververica
This talk shares experiences from deploying and tuning Flink stream processing applications at very large scale. We share lessons learned from users, contributors, and our own experiments about running demanding streaming jobs at scale. The talk will explain which aspects currently render a job particularly demanding, show how to configure and tune a large-scale Flink job, and outline what the Flink community is working on to make the out-of-the-box experience as smooth as possible. We will, for example, dive into:
* analyzing and tuning checkpointing
* selecting and configuring state backends
* understanding common bottlenecks
* understanding and configuring network parameters
This document provides an overview of Apache Flink, an open-source stream processing framework. It discusses the rise of stream processing and how Flink enables low-latency applications through features like pipelining, operator state, fault tolerance using distributed snapshots, and integration with batch processing. The document also outlines Flink's roadmap, which includes graduating its DataStream API, fully managing windowing and state, and unifying batch and stream processing.
This document provides an overview of Apache Flink, an open-source platform for distributed stream and batch data processing. Flink allows for unified batch and stream processing with a simple yet powerful programming model. It features native stream processing, exactly-once fault tolerance based on consistent snapshots, and high performance optimized for streaming workloads. The document outlines Flink's APIs, state management, fault tolerance approach, and roadmap for continued improvements in 2015.
The document discusses troubleshooting performance issues for SQL Server. It begins with an introduction and case study on the MS Society of Canada's website. It then discusses optimizing the environment, using Performance Monitor (PerfMon) to monitor performance, and concludes with recommendations to address issues like high CPU usage, slow disk speeds, and insufficient memory.
Modern Stream Processing With Apache Flink @ GOTO Berlin 2017Till Rohrmann
In our fast moving world it becomes more and more important for companies to gain near real-time insights from their data to make faster decisions. These insights do not only provide a competitve edge over ones rivals but also enable a company to create completely new services and products. Amongst others, predictive user interfaces and online recommendation can be implemented when being able to process large amounts of data in real-time.
Apache Flink, one of the most advanced open source distributed stream processing platforms, allows you to extract business intelligence from your data in near real-time. With Apache Flink it is possible to process billions of messages with milliseconds latency. Moreover, its expressive APIs allow you to quickly solve your problems, ranging from classical analytical workloads to distributed event-driven applications.
In this talk, I will introduce Apache Flink and explain how it enables users to develop distributed applications and process analytical workloads alike. Starting with Flink’s basic concepts of fault-tolerance, statefulness and event-time aware processing, we will take a look at the different APIs and what they allow us to do. The talk will be concluded by demonstrating how we can use Flink’s higher level abstractions such as FlinkCEP and StreamSQL to do declarative stream processing.
Big Data Berlin v8.0 Stream Processing with Apache Apex Apache Apex
This document discusses Apache Apex, an open source stream processing framework. It provides an overview of stream data processing and common use cases. It then describes key Apache Apex capabilities like in-memory distributed processing, scalability, fault tolerance, and state management. The document also highlights several customer use cases from companies like PubMatic, GE, and Silver Spring Networks that use Apache Apex for real-time analytics on data from sources like IoT sensors, ad networks, and smart grids.
Similar to Will it Scale? The Secrets behind Scaling Stream Processing Applications (20)
Software Engineering and Project Management - Introduction to Project ManagementPrakhyath Rai
Introduction to Project Management: Introduction, Project and Importance of Project Management, Contract Management, Activities Covered by Software Project Management, Plans, Methods and Methodologies, some ways of categorizing Software Projects, Stakeholders, Setting Objectives, Business Case, Project Success and Failure, Management and Management Control, Project Management life cycle, Traditional versus Modern Project Management Practices.
OCS Training Institute is pleased to co-operate with
a Global provider of Rig Inspection/Audits,
Commission-ing, Compliance & Acceptance as well as
& Engineering for Offshore Drilling Rigs, to deliver
Drilling Rig Inspec-tion Workshops (RIW) which
teaches the inspection & maintenance procedures
required to ensure equipment integrity. Candidates
learn to implement the relevant standards &
understand industry requirements so that they can
verify the condition of a rig’s equipment & improve
safety, thus reducing the number of accidents and
protecting the asset.
In May 2024, globally renowned natural diamond crafting company Shree Ramkrishna Exports Pvt. Ltd. (SRK) became the first company in the world to achieve GNFZ’s final net zero certification for existing buildings, for its two two flagship crafting facilities SRK House and SRK Empire. Initially targeting 2030 to reach net zero, SRK joined forces with the Global Network for Zero (GNFZ) to accelerate its target to 2024 — a trailblazing achievement toward emissions elimination.
Understanding Cybersecurity Breaches: Causes, Consequences, and PreventionBert Blevins
Cybersecurity breaches are a growing threat in today’s interconnected digital landscape, affecting individuals, businesses, and governments alike. These breaches compromise sensitive information and erode trust in online services and systems. Understanding the causes, consequences, and prevention strategies of cybersecurity breaches is crucial to protect against these pervasive risks.
Cybersecurity breaches refer to unauthorized access, manipulation, or destruction of digital information or systems. They can occur through various means such as malware, phishing attacks, insider threats, and vulnerabilities in software or hardware. Once a breach happens, cybercriminals can exploit the compromised data for financial gain, espionage, or sabotage. Causes of breaches include software and hardware vulnerabilities, phishing attacks, insider threats, weak passwords, and a lack of security awareness.
The consequences of cybersecurity breaches are severe. Financial loss is a significant impact, as organizations face theft of funds, legal fees, and repair costs. Breaches also damage reputations, leading to a loss of trust among customers, partners, and stakeholders. Regulatory penalties are another consequence, with hefty fines imposed for non-compliance with data protection regulations. Intellectual property theft undermines innovation and competitiveness, while disruptions of critical services like healthcare and utilities impact public safety and well-being.
Conservation of Taksar through Economic RegenerationPriyankaKarn3
This was our 9th Sem Design Studio Project, introduced as Conservation of Taksar Bazar, Bhojpur, an ancient city famous for Taksar- Making Coins. Taksar Bazaar has a civilization of Newars shifted from Patan, with huge socio-economic and cultural significance having a settlement of about 300 years. But in the present scenario, Taksar Bazar has lost its charm and importance, due to various reasons like, migration, unemployment, shift of economic activities to Bhojpur and many more. The scenario was so pityful that when we went to make inventories, take survey and study the site, the people and the context, we barely found any youth of our age! Many houses were vacant, the earthquake devasted and ruined heritages.
Conservation of those heritages, ancient marvels,a nd history was in dire need, so we proposed the Conservation of Taksar through economic regeneration because the lack of economy was the main reason for the people to leave the settlement and the reason for the overall declination.
A vernier caliper is a precision instrument used to measure dimensions with high accuracy. It can measure internal and external dimensions, as well as depths.
Here is a detailed description of its parts and how to use it.
Unblocking The Main Thread - Solving ANRs and Frozen FramesSinan KOZAK
In the realm of Android development, the main thread is our stage, but too often, it becomes a battleground where performance issues arise, leading to ANRS, frozen frames, and sluggish Uls. As we strive for excellence in user experience, understanding and optimizing the main thread becomes essential to prevent these common perforrmance bottlenecks. We have strategies and best practices for keeping the main thread uncluttered. We'll examine the root causes of performance issues and techniques for monitoring and improving main thread health as wel as app performance. In this talk, participants will walk away with practical knowledge on enhancing app performance by mastering the main thread. We'll share proven approaches to eliminate real-life ANRS and frozen frames to build apps that deliver butter smooth experience.
20CDE09- INFORMATION DESIGN
UNIT I INCEPTION OF INFORMATION DESIGN
Introduction and Definition
History of Information Design
Need of Information Design
Types of Information Design
Identifying audience
Defining the audience and their needs
Inclusivity and Visual impairment
Case study.
A brief introduction to quadcopter (drone) working. It provides an overview of flight stability, dynamics, general control system block diagram, and the electronic hardware.
Will it Scale? The Secrets behind Scaling Stream Processing Applications
1. Will it Scale?
The Secrets behind Scaling Stream Processing
Applications
Navina Ramesh
Software Engineer, LinkedIn
Apache Samza, Committer & PMC
navina@apache.org
2. What is this talk about?
● Understand the architectural choices in stream processing systems that may
impact performance/scalability of stream processing applications
● Have a high level comparison of two streaming engines (Flink/Samza) with a
focus on scalability of the stream-processing application
3. What this talk is not about?
● Not a feature-by-feature comparison of existing stream processing systems
(such as Flink, Storm, Samza etc)
4. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
9. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
10. Typical Data Pipeline - Batch
Ingestion
Service
HDFS
Mappers Reducers
HDFS/
HBase
Query
14. Parallels in Streaming
Ingestion
Service
HDFS
Mappers Reducers
HDFS/
HBase
Processors Processors
HDFS
KV
Store
Partition 0
Partition 1
Partition N
...
Data
Ingestion
Data
Processing
Result Storage /
Serving
Query
Query
16. Batch Streaming
● Data Processing on bounded data
● Acceptable Latency - order of hours
● Processing occurs at regular intervals
● Throughput trumps latency
● Horizontal scaling to improve processing
time
● Data processing on unbounded data
● Low latency - order of sub-seconds
● Processing is continuous
● Horizontal scaling is not straightforward
(stateful applications)
● Need tools to reason about time (esp.
when re-processing stream)
17. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
18. Typical Data Ingestion
Producers
Partition 0
Partition 1
Partition 3
key=0
key=3
key=23
Stream A
Consumer
(host A)
Consumer
(host B)
Partition 2
- Typically, streams are
partitioned
- Messages sent to partitions
based on “Partition Key”
- Time-based message
retention
key=10
Kafka Kinesis
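The key-based routing on this slide can be illustrated with a minimal sketch. It is an assumption for illustration: real clients (e.g. Kafka's default partitioner) use murmur2 hashing; a stable CRC32 stands in here.

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    """Route a message to a partition from its partition key.

    A stable hash sends every message with the same key to the same
    partition, which preserves per-key ordering within the stream.
    """
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Messages with key "23" always land in the same partition.
assert partition_for("23", 4) == partition_for("23", 4)
```

Because routing depends only on the key and the partition count, ordering holds per key, not across the whole stream.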
19. Scaling Data Ingestion
Producers
Partition 0
Partition 1
Partition 3
Stream A
Consumer
(host A)
Consumer
(host B)
Partition 2
- Scaling “up” -> Increasing
partitions
- Changing partitioning logic
re-distributes* the keys
across the partitions
Partition 4
key=0
key=10
key=23
key=3
Kafka Kinesis
20. Scaling Data Ingestion
Producers
Partition 0
Partition 1
Partition 3
Stream A
Consumer
(host A)
Consumer
(host B)
Partition 2
- Scaling “up” -> Increasing
partitions
- Changing partitioning logic
re-distributes* the keys
across the partitions
- Consuming clients (includes
stream processors) should be
able to re-adjust!
- Impact -> Over-provisioning
of partitions in order to handle
changes in load
Partition 4
key=0
key=10
key=23
key=3
Kafka Kinesis
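The re-distribution caveat above can be made concrete. With simple modulo hashing (an assumption for illustration; actual client behavior varies), growing the partition count moves most keys, which is why consuming clients and any key-to-partition state mapping must re-adjust:

```python
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    return zlib.crc32(key.encode("utf-8")) % num_partitions

keys = [str(k) for k in range(1000)]
# Count keys whose partition changes when scaling 4 -> 5 partitions.
moved = sum(partition_for(k, 4) != partition_for(k, 5) for k in keys)
print(f"{moved}/1000 keys changed partition")  # the large majority move
```

This churn is one reason teams over-provision partitions up front rather than resize under load.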
21. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
23. Scaling Data Processing
● Increase the number of processing units → Horizontal Scaling
But more machines mean more $$$
● The impact is NOT only on CPU cores: "large" (order of TBs) stateful
applications also stress the network and disk!!
24. Key Bottleneck in Scaling Data Processing
● Accessing State
○ Operator state
■ Read/Write state that is maintained during stream processing
■ Eg: windowed aggregation, windowed join
○ Adjunct state
■ To process events, applications might need to lookup related or ‘adjunct’ data.
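To make "operator state" concrete, here is a toy windowed aggregation (an illustrative sketch, not any engine's API): the per-key counts buffered between window boundaries are exactly the state a stream engine must checkpoint.

```python
from collections import defaultdict

class WindowedCounter:
    """Toy fixed-window count aggregation over a keyed stream."""

    def __init__(self, window_ms: int):
        self.window_ms = window_ms
        self.window_start = 0
        self.counts = defaultdict(int)   # operator state to checkpoint

    def on_event(self, key: str, ts_ms: int):
        emitted = None
        if ts_ms - self.window_start >= self.window_ms:
            emitted = dict(self.counts)  # window closed: emit the result
            self.counts.clear()
            self.window_start = ts_ms - ts_ms % self.window_ms
        self.counts[key] += 1            # buffer into operator state
        return emitted

w = WindowedCounter(window_ms=1000)
w.on_event("click", 100)
w.on_event("click", 200)
result = w.on_event("click", 1500)       # first event past the window
assert result == {"click": 2}
```

With many keys and long windows, this buffered state easily grows large, which is the scaling bottleneck the deck describes.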
28. Accessing Operator State: Push Notifications
B2
Online
Apps
Relevance
Score
User
Action Data
Task
(Generate active notifications -
filtering, windowed-aggregation,
external calls etc)
Notification System
(Scheduler)
29. Accessing Operator State: Push Notifications
B2
Online
Apps
Relevance
Score
User
Action Data
Task
(Generate active notifications -
filtering, windowed-aggregation,
external calls etc)
Notification System
(Scheduler)
- Stream processing tasks
consume from multiple sources
- offline/online
- Performs multiple operations
- Filters information and
buffers data for window of
time
- Aggregates / Joins
buffered data
- Total operator state per
instance can easily grow to
multiple GBs per Task
30. Accessing Adjunct Data: AdQuality Updates
Task
AdClicks AdQuality Update
Read Member Data
Member Info
Stream-to-Table Join
(Look-up memberId & generate
AdQuality improvements for the
User)
31. Accessing Adjunct Data: AdQuality Updates
Task
AdClicks AdQuality Update
Read Member Data
Member Info
Stream-to-Table Join
(Look-up memberId & generate
AdQuality improvements for the
User)
Concerns:
- Remote look-up Latency is
high!
- DDoS on shared store -
MemberInfo
32. Accessing Adjunct Data using Cache: AdQuality Updates
Task
AdClicks AdQuality Update
Read Member Data
Member Info
Stream-to-Table Join
(Maintain a cache of
member Info & do local
lookup)
33. Accessing Adjunct Data using Cache: AdQuality Updates
Task
AdClicks AdQuality Update
Read Member Data
Member Info
Stream-to-Table Join
(Maintain a cache of
member Info & do local
lookup)
Concerns:
- Overhead of maintaining cache
consistency based on the source of
truth (MemberInfo)
- Warming up the cache after the job’s
downtime can cause temporary spike
in QPS on the shared store
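A common shape for such a cache is a TTL-bounded local lookup in front of the shared store. The sketch below is illustrative, with `fetch` standing in for the remote MemberInfo lookup; a TTL bounds staleness rather than guaranteeing consistency with the source of truth.

```python
import time

class TtlCache:
    """Local adjunct-data cache with a time-to-live per entry."""

    def __init__(self, fetch, ttl_s: float):
        self.fetch = fetch               # remote lookup (e.g. MemberInfo)
        self.ttl_s = ttl_s
        self.entries = {}                # key -> (value, fetched_at)
        self.misses = 0

    def get(self, key):
        hit = self.entries.get(key)
        if hit is not None and time.monotonic() - hit[1] < self.ttl_s:
            return hit[0]                # fresh entry: no remote call
        self.misses += 1
        value = self.fetch(key)          # miss or expired: go remote
        self.entries[key] = (value, time.monotonic())
        return value

cache = TtlCache(fetch=lambda member_id: {"id": member_id}, ttl_s=60.0)
cache.get("m1")
cache.get("m1")                          # served locally this time
assert cache.misses == 1
```

Note that the cold-cache spike after a restart remains: the first access per key still hits the shared store.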
34. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
36. Apache Flink: Processing
● Dataflows with streams and
transformation operators
● Starts with one or more source and
ends in one or more sinks
37. Actor System
Scheduler
Checkpoint
Coordinator
Job Manager
Task
Slot
Task
Slot
Task Manager
Task
Slot
Actor System
Network Manager
Memory & I/O Manager
Task
Slot
Task
Slot
Task Manager
Task
Slot
Actor System
Network Manager
Memory & I/O Manager
Stream
Task
Slot
● JobManager (Master) coordinates
distributed execution such as,
checkpoint, recovery management,
schedule tasks etc.
● TaskManager (JVM Process)
execute the subtasks of the dataflow,
and buffer and exchange data
streams
● Each Task Slot may execute multiple
subtasks, each running in a separate
thread.
Apache Flink: Processing
38. Apache Flink: State Management
● Lightweight Asynchronous Barrier Snapshots
● Master triggers checkpoint and source inserts barrier
● On receiving barrier from all input sources, each operator stores the entire state, acks the
checkpoint to the master and emits snapshot barrier in the output
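The barrier rule above can be sketched for one operator with multiple input channels. This is a simplified sketch: the alignment buffering of post-barrier records is omitted.

```python
class Operator:
    """Toy participant in asynchronous barrier snapshotting."""

    def __init__(self, num_inputs: int):
        self.num_inputs = num_inputs
        self.seen = {}        # checkpoint_id -> input channels seen
        self.state = 0
        self.snapshots = {}   # checkpoint_id -> snapshotted state

    def on_record(self, value: int):
        self.state += value   # normal processing mutates state

    def on_barrier(self, checkpoint_id: int, channel: int) -> bool:
        """True when the barrier is forwarded downstream."""
        chans = self.seen.setdefault(checkpoint_id, set())
        chans.add(channel)
        if len(chans) < self.num_inputs:
            return False      # still waiting on other inputs
        self.snapshots[checkpoint_id] = self.state  # store + ack master
        return True           # emit the barrier on the output

op = Operator(num_inputs=2)
op.on_record(5)                                # pre-barrier record
assert op.on_barrier(1, channel=0) is False    # channel 1 not seen yet
op.on_record(3)                                # pre-barrier on channel 1
assert op.on_barrier(1, channel=1) is True     # snapshot taken, forwarded
assert op.snapshots[1] == 8
```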
39. Apache Flink: State Management
Job
Manager
Task
Manager
HDFS
Snapshot Store
Task
Manager
Task
Manager
● Lightweight Asynchronous Barrier
Snapshots
● Periodically snapshot the entire state
to snapshot store
● Checkpoint mapping is stored in Job
Manager
● Snapshot Store (typically, HDFS)
○ operator state
(windows/aggregation)
○ user-defined state
(checkpointed)
40. Apache Flink: State Management
● Operator state is primarily
stored In-Memory or local File
System
● Recently added RocksDB
● Allows user-defined operators
to define state that should be
checkpointed
Job
Manager
Task
Manager
HDFS
Snapshot Store
Task
Manager
Task
Manager
41. Apache Flink: Fault Tolerance of State
Job
Manager
Task
Manager
Snapshot Store
Task
Manager
Task
Manager
Task Failure
42. Apache Flink: Fault Tolerance of State
Job
Manager
Task
Manager
HDFS
Task
Manager
Task
Manager
● Full restore of snapshot from last
completed checkpointed state
● Continues processing after restoring
from the latest snapshot from the
store
Full Restore
43. Apache Flink: Summary
● State Management Primitives:
○ Within task, local state info is stored primarily in-memory (recently, RocksDB)
○ Periodic snapshot (checkpoints + user-defined state + operator state) written to Snapshot
Store
● Fault-Tolerance of State
○ Full state restored from Snapshot Store
44. Apache Flink: Observations
● Full snapshots are expensive for large states
● Frequent snapshots can quickly saturate the network
● Applications must trade off snapshot frequency against how large a
state can be built within a task
47. Apache Samza: Processing
● Samza Master handles container life-
cycle and failure handling
● Each container (JVM process)
contains more than one task to
process the input stream partitions
Samza
Master
Task Task
Container
Task Task
Container
48. Apache Samza: State Management
● Tasks checkpoint periodically to a
checkpoint stream
● Checkpoint indicates which position
in the input from which processing
has to continue in case of a container
restart
Samza
Master
Task Task
Container
Task Task
Container
Checkpoint Stream
49. Apache Samza: State Management
● State store is local to the task -
typically RocksDB (off-heap) and In-
Memory (backed by a map)
● State store contains any operator
state or adjunct state
● Allows application to define state
through a Key Value interface
Samza
Master
Task Task
Container
Task Task
Container
Checkpoint Stream
50. Apache Samza: State Management
● State store is continuously replicated
to a changelog stream
● Each store partition is mapped to a
specific changelog partition
Samza
Master
Task Task
Container
Task Task
Container
Changelog Stream
Checkpoint Stream
51. Apache Samza: Fault Tolerance of State
Samza
Master
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
Container Failure
Machine A Machine B
52. Samza
Master
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
Re-allocated on
different host!
Machine A Machine X
● When container is recovered in a
different host, there is no state
available locally
Apache Samza: Fault Tolerance of State
53. Samza
Master
Task Task
Container
Task
Checkpoint Stream
Re-allocated on
different host!
Machine A Machine X
● When container comes up in a
different host, there is no state
available locally
● Restores from the beginning of the
changelog stream -> Full restore!
Task Task
Container
Apache Samza: Fault Tolerance of State
54. Samza
Master
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
Container Failure
● State store is persisted to local disk
on the machine, along with info on
which offset to begin restoring the
state from changelog
Machine A Machine B
Apache Samza: Fault Tolerance of State
55. Samza
Master
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
Re-allocated on same host!
Machine A Machine B
● Samza Master tries to re-allocate the
container on the same host
● The feature where the Samza Master
attempts to co-locate the task with
their built-up state stores (where they
were previously running) is called
Host-affinity.
Apache Samza: Fault Tolerance of State
56. Samza
Master
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
Machine A Machine B
Re-allocated on same host!
● Samza Master tries to re-allocate the
container on the same host
● The feature where the Samza Master
attempts to co-locate the task with
their built-up state stores (where they
were previously running) is called
Host-affinity.
● If container is re-allocated on the
same host, state store is partially
restored from changelog stream
(delta restore)
Apache Samza: Fault Tolerance of State
57. Samza
AppMaster
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
● Once state is restored, checkpoint
stream contains the correct offset for
each task to begin processing
Machine A Machine B
Re-allocated on same host!
Apache Samza: Fault Tolerance of State
58. ● Persisting state on local disk + host-
affinity effectively reduces the time-
to-recover state from failure (or)
upgrades and continue with
processing
Samza
AppMaster
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
Apache Samza: Fault Tolerance of State
59. ● Persisting state on local disk + host-
affinity effectively reduces the time-
to-recover state from failure (or)
upgrades and continue with
processing
● Only a subset of tasks may require
full restore, thereby, reducing the
time to recover from failure or time to
restart processing upon upgrades!
Samza
AppMaster
Task Task
Container
Task Task
Container
Checkpoint Stream
Changelog Stream
Apache Samza: Fault Tolerance of State
60. Apache Samza: Summary
● State Management Primitives
○ Within task, data is stored in-memory or on-disk using RocksDB
○ Checkpoint state stored in checkpoint-stream
○ User-defined and operator state continuously replicated in a changelog stream
● Fault-Tolerance of State
○ Full state restored by consuming changelog stream, if user-defined state not persisted on
task’s machine
○ If locally persisted, only partial restore
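The full-vs-partial restore difference can be sketched as replaying a changelog into a KV store, either from offset 0 (new host) or from the offset persisted alongside the local store (same host). This is an illustrative sketch, not the Samza API.

```python
def restore_store(changelog, start_offset=0, local_store=None):
    """Rebuild task state by replaying changelog entries.

    On a new host there is no local store: replay from offset 0
    (full restore). With host-affinity the on-disk store survives,
    so only the tail after `start_offset` is replayed (delta restore).
    """
    store = {} if local_store is None else dict(local_store)
    for key, value in changelog[start_offset:]:
        store[key] = value
    return store

changelog = [("a", 1), ("b", 2), ("a", 3)]
full = restore_store(changelog)                # new host: replay all
on_disk = {"a": 1, "b": 2}                     # persisted at offset 2
delta = restore_store(changelog, 2, on_disk)   # same host: tail only
assert full == delta == {"a": 3, "b": 2}
```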
61. Apache Samza: Observations
● State recovery from changelog can be time-consuming. It could potentially
saturate Kafka clusters. Hence, partial restore is necessary.
● Host-affinity allows for faster failure recovery of task states, and faster job
upgrades, even for large stateful jobs
● Since checkpoints are written to a stream and state is continuously replicated
in changelog, frequent checkpoints are possible.
62. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
63. Comparison of State & Fault-tolerance
                      | Apache Samza                     | Apache Flink
Durable State         | RocksDB                          | FileSystem (recently added: RocksDB)
State Fault Tolerance | Kafka-based changelog stream     | HDFS
State Update Unit     | Delta changes                    | Full snapshot
State Recovery Unit   | Full restore + improved recovery | Full restore
                      | with host-affinity               |
64. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
65. Challenges in Scaling Result Storage / Serving
● Even a fast KV store can handle only on the order of thousands of QPS,
while stream processing output rates can reach the order of millions
● This high-throughput output can effectively DoS the result store
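One mitigation (an assumption added here, not discussed in the deck) is to coalesce output updates per key before writing, so the store sees at most one write per key per flush interval:

```python
def coalesce(updates):
    """Collapse a burst of keyed updates to the last value per key.

    Turns a high raw output rate into at most one store write per
    distinct key per flush, at the cost of intermediate values
    never reaching the store.
    """
    latest = {}
    for key, value in updates:
        latest[key] = value
    return latest

burst = [("k1", 1), ("k2", 7), ("k1", 2), ("k1", 3)]
writes = coalesce(burst)
assert writes == {"k1": 3, "k2": 7}   # 4 outputs -> 2 store writes
```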
68. Agenda
● Use cases in Stream Processing
● Typical Data Pipelines
● Scaling Data Ingestion
● Scaling Data Processing
○ Challenges in Scaling Data Processing
○ Walk-through of Apache Flink & Apache Samza
○ Observations on state & fault-tolerance
● Challenges in Scaling Result Storage
● Conclusion
69. Conclusion
● Ingest, Process, and Serve must all be holistically scalable to successfully
scale stream processing applications
● The notion of a "locally" accessible state is great for scaling stream
processing applications for performance, but it brings the additional cost of
making that state fault-tolerant