Modern ETL Pipelines with Change Data Capture - Databricks
In this talk we’ll present how at GetYourGuide we’ve built from scratch a completely new ETL pipeline using Debezium, Kafka, Spark and Airflow, which can automatically handle schema changes. Our starting point was an error-prone legacy system that ran daily and was vulnerable to breaking schema changes, which caused many sleepless on-call nights. Like most companies, we have traditional SQL databases that we need to connect to in order to extract relevant data.
This is usually done through either full or partial copies of the data with tools such as Sqoop. However, another approach that has become quite popular lately is to use Debezium as the change data capture layer, which reads database binlogs and streams these changes directly to Kafka. As having data once a day is no longer enough for our business, and we wanted our pipelines to be resilient to upstream schema changes, we decided to rebuild our ETL using Debezium.
We’ll walk the audience through the steps we followed to architect and develop such a solution using Databricks to reduce operation time. By building this new pipeline we are now able to refresh our data lake multiple times a day, giving our users fresh data and protecting our nights of sleep.
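As a rough illustration of the CDC layer described above, a Debezium MySQL source connector is typically registered with Kafka Connect through its REST API. The sketch below shows that step from Python; the host names, credentials, database and topic names are placeholders, and exact property names vary by Debezium version.

```python
import json
import requests

# Hypothetical Kafka Connect endpoint and database details; adjust for your environment.
connect_url = "http://kafka-connect:8083/connectors"

connector = {
    "name": "inventory-cdc",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "secret",
        "database.server.id": "184054",
        # Logical name used as the prefix of the Kafka topics Debezium writes to.
        "database.server.name": "inventory",
        "database.include.list": "inventory",
        # Debezium keeps DDL history in its own Kafka topic, which is what lets
        # downstream consumers follow schema changes.
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
    },
}

resp = requests.post(connect_url, data=json.dumps(connector),
                     headers={"Content-Type": "application/json"})
resp.raise_for_status()
print("Connector registered:", resp.json()["name"])
```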
Stream data processing is increasingly required to support business needs for faster actionable insight with growing volume of information from more sources. Apache Apex is a true stream processing framework for low-latency, high-throughput and reliable processing of complex analytics pipelines on clusters. Apex is designed for quick time-to-production, and is used in production by large companies for real-time and batch processing at scale.
This session will use an Apex production use case to walk through the incremental transition from a batch pipeline with hours of latency to an end-to-end streaming architecture with billions of events per day which are processed to deliver real-time analytical reports. The example is representative for many similar extract-transform-load (ETL) use cases with other data sets that can use a common library of building blocks. The transform (or analytics) piece of such pipelines varies in complexity and often involves business logic specific, custom components.
Topics include:
* Pipeline functionality from event source through queryable state for real-time insights.
* API for application development and development process.
* Library of building blocks including connectors for sources and sinks such as Kafka, JMS, Cassandra, HBase, JDBC and how they enable end-to-end exactly-once results.
* Stateful processing with event time windowing.
* Fault tolerance with exactly-once result semantics, checkpointing, incremental recovery
* Scalability and low-latency, high-throughput processing with advanced engine features for auto-scaling, dynamic changes, compute locality.
* Who is using Apex in production, and roadmap.
Following the session attendees will have a high level understanding of Apex and how it can be applied to use cases at their own organizations.
Kafka Summit SF 2017 - Keynote - Go Against the Flow: Databases and Stream Pr... - Confluent
The document discusses how businesses can adopt a streaming-first approach using stream processing tools like Apache Kafka and KSQL. It argues that databases are no longer suitable for analyzing real-time streaming data and processing events. KSQL is presented as an open source tool that allows users to write SQL queries against streaming data in Kafka. It supports features like continuous queries, stream-table joins, and streaming materialized views. The document also provides a demo of how KSQL can be used for real-time anomaly detection on streaming web user data.
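A rough sketch of the kind of continuous query described above, submitted to a KSQL/ksqlDB server over its REST interface from Python; the server address, stream name and anomaly rule are made-up placeholders, not taken from the demo itself.

```python
import requests

# Assumed ksqlDB server address; the default REST port is 8088.
KSQL_URL = "http://ksqldb-server:8088/ksql"

# A continuous query that materializes a streaming view of suspicious activity,
# e.g. users generating an unusually high number of page views per 30-second window.
statement = """
CREATE TABLE possible_anomalies AS
  SELECT user_id, COUNT(*) AS views
  FROM pageviews
  WINDOW TUMBLING (SIZE 30 SECONDS)
  GROUP BY user_id
  HAVING COUNT(*) > 100;
"""

resp = requests.post(
    KSQL_URL,
    headers={"Content-Type": "application/vnd.ksql.v1+json"},
    json={"ksql": statement, "streamsProperties": {}},
)
resp.raise_for_status()
print(resp.json())
```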
This document summarizes a presentation on the Elastic Stack. It discusses the main components - Elasticsearch for storing and searching data, Logstash for ingesting data, Kibana for visualizing data. It provides examples of using Elasticsearch for search, analytics, and aggregations. It also briefly mentions new features across the Elastic Stack like update by query, ingest nodes, pipeline improvements, and APIs for management and metrics.
This document provides an overview of SK Telecom's use of big data analytics and Spark. Some key points:
- SKT collects around 250 TB of data per day which is stored and analyzed using a Hadoop cluster of over 1400 nodes.
- Spark is used for both batch and real-time processing due to its performance benefits over other frameworks. Two main use cases are described: real-time network analytics and a network enterprise data warehouse (DW) built on Spark SQL.
- The network DW consolidates data from over 130 legacy databases to enable thorough analysis of the entire network. Spark SQL, dynamic resource allocation in YARN, and integration with BI tools help meet requirements for timely processing and quick
More Data, More Problems: Scaling Kafka-Mirroring Pipelines at LinkedIn - Confluent
(Celia Kung, LinkedIn) Kafka Summit SF 2018
For several years, LinkedIn has been using Kafka MirrorMaker as the mirroring solution for copying data between Kafka clusters across data centers. However, as LinkedIn data continued to grow, mirroring trillions of Kafka messages per day across data centers uncovered the scale limitations and operability challenges of Kafka MirrorMaker. To address such issues, we have developed a new mirroring solution, built on top of our stream ingestion service, Brooklin. Brooklin MirrorMaker aims to provide improved performance and stability, while facilitating better management through finer control of data pipelines. Through flushless Kafka produce, dynamic management of data pipelines, per-partition error handling and flow control, we are able to increase throughput, better withstand consume and produce failures, and reduce overall operating costs. As a result, we have eliminated the major pain points of Kafka MirrorMaker. In this talk, we will dive deeper into the challenges LinkedIn has faced with Kafka MirrorMaker, how we tackled them with Brooklin MirrorMaker and our plans for iterating further on this new mirroring solution.
The story of one project's architecture evolution from zero to a Lambda Architecture. Also includes information on how we scaled the cluster once the architecture was in place.
Contains nice performance charts after every architecture change.
Low-latency data applications with Kafka and Agg indexes | Tino Tereshko, Fir... - HostedbyConfluent
If a real-time dashboard takes 5 minutes to refresh, it’s not real-time. With data lakes increasingly enabling massive amounts of unprocessed data sets, delivering low-latency analytics is not for the faint-hearted. Learn how to stream massive amounts of data which used to be impossible to handle from Kafka, to serve real-time applications using lake-scale optimized approaches to storage and indexing.
Streaming Data Lakes using Kafka Connect + Apache Hudi | Vinoth Chandar, Apac... - HostedbyConfluent
Apache Hudi is a data lake platform that provides streaming primitives (upserts/deletes/change streams) on top of data lake storage. Hudi powers very large data lakes at Uber, Robinhood and other companies, while being pre-installed on four major cloud platforms.
Hudi supports exactly-once, near real-time data ingestion from Apache Kafka to cloud storage, and is typically used in place of an S3/HDFS sink connector to gain transactions and mutability. While this approach is scalable and battle-tested, it can only ingest data in mini batches, leading to lower data freshness. In this talk, we introduce a Kafka Connect Sink Connector for Apache Hudi, which writes data straight into Hudi's log format, making the data immediately queryable, while Hudi's table services like indexing, compaction and clustering work behind the scenes to further re-organize the data for better query performance.
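For context, writing a micro-batch into a Hudi table from Spark looks roughly like the sketch below; the table path, key fields and the tiny in-memory DataFrame are illustrative, and the Connect sink described in the talk writes to Hudi's log format directly rather than going through Spark.

```python
from pyspark.sql import SparkSession

# Assumes the Hudi Spark bundle is on the classpath (e.g. via --packages).
spark = SparkSession.builder.appName("hudi-upsert-sketch").getOrCreate()

# Illustrative micro-batch; in practice this would come from Kafka or another source.
df = spark.createDataFrame(
    [("o-1", "2021-05-01 10:00:00", 42.0), ("o-2", "2021-05-01 10:01:00", 13.5)],
    ["order_id", "ts", "amount"],
)

hudi_options = {
    "hoodie.table.name": "orders",
    "hoodie.datasource.write.recordkey.field": "order_id",
    "hoodie.datasource.write.precombine.field": "ts",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert into the Hudi table; table services (compaction, clustering, indexing)
# reorganize the data asynchronously.
(df.write.format("hudi")
   .options(**hudi_options)
   .mode("append")
   .save("s3a://my-bucket/lake/orders"))
```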
Insights Without Tradeoffs Using Structured Streaming keynote by Michael Armb... - Spark Summit
In Spark 2.0, we introduced Structured Streaming, which lets you continually and incrementally update your view of the world as new data arrives, while still using the same familiar Spark SQL abstractions. I talk about the progress we’ve made since then on robustness, latency, expressiveness and observability, using examples of production end-to-end continuous applications.
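A minimal Structured Streaming sketch in the spirit of that talk: an incrementally updated aggregation over a Kafka source, expressed with the same DataFrame API used for batch queries. The broker address, topic and checkpoint path are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("structured-streaming-sketch").getOrCreate()

# Read an unbounded stream of events from Kafka.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "kafka:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS value", "timestamp"))

# Continuously maintained view: event counts per 1-minute window.
counts = (events
          .withWatermark("timestamp", "10 minutes")
          .groupBy(window(col("timestamp"), "1 minute"))
          .count())

# Incrementally update the result as new data arrives.
query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events-counts")
         .start())

query.awaitTermination()
```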
August 2016 HUG: Open Source Big Data Ingest with StreamSets Data Collector - Yahoo Developer Network
Big data tools such as Hadoop and Spark allow you to process data at unprecedented scale, but keeping your processing engine fed can be a challenge. Upstream data sources can 'drift' due to infrastructure, OS and application changes, causing ETL tools and hand-coded solutions to fail. StreamSets Data Collector (SDC) is an open source platform for building big data ingest pipelines that allows you to design, execute and monitor robust data flows. In this session we'll look at how SDC's "intent-driven" approach keeps the data flowing, whether you're processing data 'off-cluster', in Spark, or in MapReduce.
StreamSets software delivers performance management for data flows that feed the next generation of big data applications. Its mission is to bring operational excellence to the management of data in motion, so that data arrives on time and with quality, accelerating analysis and decision making. StreamSets Data Collector is in use at hundreds of companies where it brings unprecedented visibility into and control over data as it moves between an expanding variety of sources and destinations.
Speakers:
Pat Patterson has been working with Internet technologies since 1997, building software and working with communities at Sun Microsystems, Huawei, Salesforce and StreamSets. At Sun, Pat was the community lead for the OpenSSO open source project, while at Huawei he developed cloud storage infrastructure software. Part of the developer evangelism team at Salesforce, Pat focused on identity, integration and the Internet of Things. Now community champion at StreamSets, Pat is responsible for the care and feeding of the StreamSets open source community.
Lambda architecture is a popular technique where records are processed by a batch system and streaming system in parallel. The results are then combined during query time to provide a complete answer. Strict latency requirements to process old and recently generated events made this architecture popular. The key downside to this architecture is the development and operational overhead of managing two different systems.
There have been attempts to unify batch and streaming into a single system in the past, but organizations have not been very successful in those attempts. With the advent of Delta Lake, however, we are seeing a lot of engineers adopting a simple continuous data flow model to process data as it arrives. We call this architecture the Delta Architecture.
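A minimal sketch of what that looks like in practice: a streaming job appends to a Delta table while batch consumers read the very same table, so there is no separate speed and batch layer to reconcile. Paths and topic names are placeholders, and the Delta Lake package is assumed to be available.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-architecture-sketch").getOrCreate()

# Continuous ingestion: append a Kafka stream into a Delta table.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "kafka:9092")
          .option("subscribe", "clicks")
          .load()
          .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value"))

(stream.writeStream.format("delta")
       .option("checkpointLocation", "/data/checkpoints/clicks")
       .start("/data/delta/clicks"))

# Batch / interactive consumers query the same table with ACID guarantees,
# instead of maintaining a separate batch pipeline.
daily_counts = (spark.read.format("delta").load("/data/delta/clicks")
                .groupBy("key").count())
daily_counts.show()
```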
Data platform architecture principles - IEEE Infrastructure 2020 - Julien Le Dem
This document discusses principles for building a healthy data platform, including:
1. Establishing explicit contracts between teams to define dependencies and service level agreements.
2. Abstracting the data platform into services for ingesting, storing, and processing data in motion and at rest.
3. Enabling observability of data pipelines through metadata collection and integration with tools like Marquez to provide lineage, availability, and change management visibility.
Real-Time Data Pipelines with Kafka, Spark, and Operational Databases - SingleStore
Eric Frenkiel, MemSQL CEO and co-founder and Gartner Catalyst. August 11, 2015, San Diego, CA. Watch the Pinterest Demo Video here: https://youtu.be/KXelkQFVz4E
Tangram: Distributed Scheduling Framework for Apache Spark at Facebook - Databricks
Tangram is a state-of-the-art resource allocator and distributed scheduling framework for Spark at Facebook, with hierarchical queues and a resource-based container abstraction. We support scheduling and resource management for a significant portion of Facebook's data warehouse and machine learning workloads, which equates to running millions of jobs across several clusters with tens of thousands of machines. In this talk, we will describe Tangram's architecture, discuss Facebook's need for a custom scheduler, and explain how Tangram schedules Spark workloads at scale. We will specifically focus on several important features around improving Spark's efficiency, usability and reliability: 1. IO-rebalancer (Tetris) support 2. User-fairness queueing 3. Heuristic-based backfill scheduling optimizations.
Kappa Architecture on Apache Kafka and Querona: datamass.io - Piotr Czarnas
This document discusses Kappa Architecture, an alternative to Lambda Architecture for event processing. Kappa Architecture uses a single stream of events from Apache Kafka as the input, rather than separating batch and stream processing. It reads all events from Kafka and runs analytics on the full data set to enable both learning from historical events and reacting to new events. The document outlines how Kappa Architecture provides benefits like avoiding duplicate processing logic and making actionable analytics easier. It also describes how to read bounded batches of events from Kafka for analytics using tools like Apache Spark.
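The point about reading bounded batches of events from Kafka can be sketched with Spark's batch Kafka source, which takes explicit starting and ending offsets; the broker, topic and offset choices below are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kappa-bounded-read-sketch").getOrCreate()

# Batch (bounded) read of a slice of the event log: everything between the
# chosen starting and ending offsets, here the full retained history.
batch = (spark.read.format("kafka")
         .option("kafka.bootstrap.servers", "kafka:9092")
         .option("subscribe", "orders")
         .option("startingOffsets", "earliest")
         .option("endingOffsets", "latest")
         .load()
         .selectExpr("CAST(value AS STRING) AS value"))

# The same analytics logic can then run over historical events or, with
# readStream, over new events as they arrive -- the core idea of Kappa.
print(batch.count())
```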
Interactive Visualization of Streaming Data Powered by Spark by Ruhollah Farc... - Spark Summit
This document discusses how to visualize streaming data using Spark. It describes how Spark Streaming can be used to process streaming data in real-time and integrate it with visualization tools. Key points include:
- Spark Streaming receives streaming data from sources like Kafka and processes it using in-memory computations in a single JVM cluster.
- The processed data can be stored in buffers like MongoDB or output to systems like MemSQL, Solr to enable interactive visualizations that update in real-time.
- A demo is shown of Twitter data being streamed and analyzed using Spark Streaming with results stored in MemSQL and Solr for visualization.
- Benefits of this approach include being able to work with streaming data
Delta Lake is an open-source innovation that brings new capabilities for transactions, version control and indexing to your data lakes. We uncover how Delta Lake benefits you and why it matters. Through this session, we showcase some of its benefits and how they can improve your modern data engineering pipelines. Delta Lake provides snapshot isolation, which helps concurrent read/write operations and enables efficient insert, update, delete, and rollback capabilities. It allows background file optimization through compaction and z-order partitioning, achieving better performance. In this presentation, we will learn about the Delta Lake benefits, how it solves common data lake challenges, and most importantly the new Delta Time Travel capability.
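To make the Time Travel capability concrete, here is a small sketch of reading an older snapshot of a Delta table, either by version number or by timestamp; the path, version and timestamp are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-time-travel-sketch").getOrCreate()

path = "/data/delta/events"  # illustrative table location

# Latest snapshot.
current = spark.read.format("delta").load(path)

# Snapshot as of an earlier version (e.g. before a bad write), for audits or rollback.
as_of_version = (spark.read.format("delta")
                 .option("versionAsOf", 12)
                 .load(path))

# Snapshot as of a point in time.
as_of_time = (spark.read.format("delta")
              .option("timestampAsOf", "2021-06-01 00:00:00")
              .load(path))

print(current.count(), as_of_version.count(), as_of_time.count())
```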
What to Expect for Big Data and Apache Spark in 2017 - Databricks
Big data remains a rapidly evolving field with new applications and infrastructure appearing every year. In this talk, Matei Zaharia will cover new trends in 2016 / 2017 and how Apache Spark is moving to meet them. In particular, he will talk about work Databricks is doing to make Apache Spark interact better with native code (e.g. deep learning libraries), support heterogeneous hardware, and simplify production data pipelines in both streaming and batch settings through Structured Streaming.
Speaker: Matei Zaharia
Video: http://go.databricks.com/videos/spark-summit-east-2017/what-to-expect-big-data-apache-spark-2017
This talk was originally presented at Spark Summit East 2017.
Kappa Architecture is an alternative to Lambda Architecture that simplifies real-time data processing. It uses a distributed log like Kafka to store all input data immutably to allow reprocessing from the beginning if the processing code changes. This avoids having to maintain separate batch and real-time processing systems. The ASPgems team has implemented Kappa Architecture for several clients using Kafka, Spark Streaming, and Cassandra to provide real-time analytics and metrics in sectors like telecommunications, IoT, insurance, and energy.
This document discusses monitoring Apache Kafka clusters and applications with Prometheus. It provides an overview of the architecture used, including deploying Prometheus servers, Kafka and HBase exporters, and a JSON exporter for YARN applications. Specific exporters are discussed for Kafka brokers using JMX, Kafka clients using the Prometheus Java library, and exposing application metrics via HTTP. Important Prometheus configurations and query functions are also covered. The summary highlights the key components of the monitoring architecture and some of the exporters and techniques discussed.
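The "exposing application metrics via HTTP" part can be sketched with the official Prometheus Python client, the same idea as the Java library mentioned for Kafka clients; metric names and the port below are illustrative.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Application-level metrics, scraped by the Prometheus server over HTTP.
MESSAGES_PROCESSED = Counter(
    "messages_processed_total", "Messages consumed from Kafka and processed")
PROCESSING_SECONDS = Histogram(
    "message_processing_seconds", "Time spent processing a message")

def process_message():
    with PROCESSING_SECONDS.time():
        time.sleep(random.uniform(0.001, 0.01))  # stand-in for real work
    MESSAGES_PROCESSED.inc()

if __name__ == "__main__":
    # Expose /metrics on port 8000 for Prometheus to scrape.
    start_http_server(8000)
    while True:
        process_message()
```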
The key findings of the survey of 314 big data professionals are:
- 87% said 'bad data' pollutes their data stores and 74% said 'bad data' is currently in their stores. Ensuring data quality was the top challenge cited.
- 72% build data flows through hand coding while 53% change pipelines several times per month.
- Only 12% rated their ability to detect issues like stopped pipelines or degraded performance as 'good' or 'excellent'.
- There are significant gaps between the real-time visibility needed and what current tools provide across metrics like error rates, divergent data, and privacy detection.
- 81% said upgrading big data components has significant operational impact.
Logging infrastructure for Microservices using StreamSets Data Collector - Cask Data
This document discusses using StreamSets Data Collector (SDC) to build a logging infrastructure for microservices. SDC can ingest logs from microservices running in containers and handle issues like schema changes and new log formats. It processes and transforms the logs, sending them to destinations like Kafka. SDC pipelines can run on Spark clusters on Yarn and Mesos to handle large volumes of log data and load it into systems like HDFS, HBase and Elasticsearch for analysis.
Adaptive Data Cleansing with StreamSets and Cassandra (Pat Patterson, StreamS... - DataStax
Cassandra is a perfect fit for consuming high volumes of time-series data directly from users, devices, and sensors. Sometimes, though, when we consume data from the real world, systematic and random errors creep in. In this session, we'll see how to use open source tools like RabbitMQ and StreamSets Data Collector with Cassandra features such as User Defined Aggregates to collect, cleanse and ingest variable quality data at scale. Discover how to combine the power of Cassandra with the flexibility of StreamSets to implement adaptive data cleansing.
About the Speaker
Pat Patterson Community Champion, StreamSets
Pat Patterson has been working with Internet technologies since 1997, building software and working with communities at Sun Microsystems, Huawei, Salesforce and StreamSets. At Sun, Pat was the community lead for OpenSSO, while at Huawei he developed cloud storage infrastructure software. A developer evangelist at Salesforce, Pat focused on identity, integration and IoT. Now community champion at StreamSets, Pat is responsible for the care and feeding of the StreamSets open source community.
Muvr is a real-time personal trainer system. It must be highly available, resilient and responsive, and so it relies heavily on Spark, Mesos, Akka, Cassandra, and Kafka, the quintuple also known as the SMACK stack. In this talk, we are going to explore the architecture of the entire muvr system, exploring in particular the challenges of ingesting a very large volume of data, applying trained models on the data to provide real-time advice to our users, and training and evaluating new models using the collected data. We will specifically emphasize how we have used Cassandra for consuming lots of fast incoming biometric data from devices and sensors, and how to securely access the big data sets from Cassandra in Spark to compute the models.
We will finish by showing the mechanics of deploying such a distributed application. You will get a clear understanding of how Mesos and Marathon, in conjunction with Docker, are used to build an immutable infrastructure that allows us to provide reliable service to our users and a great environment for our engineers.
Demystifying Salesforce for developers - Heitor Souza
This document provides an overview of Salesforce for developers. It defines CRM and Salesforce, explains Salesforce's multi-tenant architecture and how it provides reliability, customizability and security. It also covers the Salesforce platform, development tools like Force.com, Apex and SOQL, and governor limits for developers to be aware of. The presentation includes an agenda, definitions of key terms, and a demo of developing on the platform.
Kafka Lambda architecture with mirroring - Anant Rustagi
This document outlines a master plan for a lambda architecture that involves mirroring data from multiple Kafka clusters into a Hadoop cluster for batch processing and analytics, as well as real-time processing using Storm/Spark on the mirrored data in the Kafka clusters, with data from various sources integrated into the Kafka clusters with the topic name "Data".
Following best practices can help ensure your success. This is especially true for Force.com applications or large Salesforce orgs that have the potential to push platform limits.
Salesforce allows you to easily scale up from small to large amounts of data. Mostly this is seamless, but as data sets get larger, the time required for certain operations may grow too. Join us to learn different ways of designing and configuring data structures and planning a deployment process to significantly reduce deployment times and achieve operational efficiency.
Watch this webinar to:
:: Explore best practices for the design, implementation, and maintenance phases of your app's lifecycle.
:: Learn how seemingly unrelated components can affect one another and determine the ultimate scalability of your app.
:: See live demos that illustrate innovative solutions to tough challenges, including the integration of an external data warehouse using Force.com Canvas.
:: Walk away with practical tips for putting best practices into action.
Intended Audience
This webinar is perfect for Salesforce or Force.com architects and developers that want to better understand data management best practices to ensure both short and long-term implementation success. Although many topics focus on large data volumes, the recommendations in this presentation are equally relevant to smaller orgs.
How Apache Kafka is transforming Hadoop, Spark and Storm - Edureka!
This document provides an overview of Apache Kafka and how it is transforming Hadoop, Spark, and Storm. It begins with explaining why Kafka is needed, then defines what Kafka is and describes its architecture. Key components of Kafka like topics, producers, consumers and brokers are explained. The document also shows how Kafka can be used with Hadoop, Spark, and Storm for stream processing. It lists some companies that use Kafka and concludes by advertising an Edureka course on Apache Kafka.
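As a minimal illustration of the producer and consumer concepts listed above, the sketch below uses the confluent-kafka Python client; the broker address, topic and group id are placeholders.

```python
from confluent_kafka import Consumer, Producer

BROKERS = "localhost:9092"   # placeholder broker list
TOPIC = "page-views"         # placeholder topic

# Producer: publish a few messages to the topic.
producer = Producer({"bootstrap.servers": BROKERS})
for i in range(3):
    producer.produce(TOPIC, key=str(i), value=f"view-{i}")
producer.flush()

# Consumer: subscribe to the topic as part of a consumer group and poll for records.
consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "demo-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

for _ in range(3):
    msg = consumer.poll(timeout=5.0)
    if msg is None or msg.error():
        continue
    print(msg.key(), msg.value())

consumer.close()
```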
Large volume data analysis on the Typesafe Reactive Platform - Big Data Scala... - Martin Zapletal
The document discusses distributed machine learning and data processing. It covers several topics including reasons for using distributed machine learning, different distributed computing architectures and primitives, distributed data stores and analytics tools like Spark, streaming architectures like Lambda and Kappa, and challenges around distributed state management and fault tolerance. It provides examples of failures in distributed databases and suggestions to choose the appropriate tools based on the use case and understand their internals.
Cassandra Day Atlanta 2016 - Monitoring Cassandra - aaronmorton
This document discusses monitoring Apache Cassandra using metrics. It describes various metric types like gauges, histograms, meters, and timers that can be used to monitor Cassandra. It provides examples of specific metrics that can be monitored for requests, latencies, memory usage, clients, errors, inconsistencies, compactions, thread pools, commit logs, and more. Reporting mechanisms like Graphite and JMX are also covered. The presentation aims to help understand how to gain insights into a Cassandra cluster through effective metric collection and monitoring.
Towards Benchmarking Modern Distributed Systems (Grace Huang, Intel) - Spark Summit
This document discusses StreamingBench, a benchmarking tool for streaming systems. It aims to help users understand and select streaming platforms, identify factors that impact performance, and provide guidance on optimizing resources. The document outlines StreamingBench workloads and scoring metrics, compares the performance of Spark Streaming, Storm, Trident and Samza, and analyzes how configuration choices like serialization, partitions, and acknowledgements affect throughput and latency.
Prometheus lightning talk (Devops Dublin March 2015) - Brian Brazil
This document introduces Prometheus, an open-source monitoring system that allows instrumentation of everything including RPCs, interfaces, business logic, and logs. It provides client libraries that make instrumentation easy across many languages. The Prometheus server can handle over a million time series in one instance with no dependencies. It offers dashboards, expression queries, alerts and integrates with many systems. Time series have structured labels allowing flexible aggregation and complex math for rules and alerts. Prometheus costs less than $.001 per time series per month and is developed by SoundCloud, Boxever and Docker with an active community.
Big Data Day LA 2015 - Event Driven Architecture for Web Analytics by Peyman ... - Data Con LA
As integrated web analytics evolves toward both a service-oriented and an event-based model, there will be a greater emphasis on moving toward event-based analytics. Business analytics is moving from pure counts to time-series, relationship and usage analytics. Examples of web analytics that can take advantage of this architecture are conversion analytics and cross-channel marketing.
The advantage of storing raw event data is that you have maximum flexibility for analysis. For example, you can trace the sequence of pages that one person visited over the course of their session. You can’t do that if you’ve squashed all the events into e.g. counters. That sort of analysis is really important for some offline processing tasks, such as training a recommender system (“people who bought X also bought Y”, that sort of thing). For such use cases, it’s best to simply keep all the raw events, so that you can later feed them all into your shiny new machine learning system.
In this session we are going to elaborate on using Kafka, an Event Processing framework (e.g. Storm or Spark Streaming) and either Hadoop or EDW for building an Event Driven Architecture.
This document discusses scaling machine learning using Apache Spark. It covers several key topics:
1) Parallelizing machine learning algorithms and neural networks to distribute computation across clusters. This includes data, model, and parameter server parallelism.
2) Apache Spark's Resilient Distributed Datasets (RDDs) programming model which allows distributing data and computation across a cluster in a fault-tolerant manner.
3) Examples of very large neural networks trained on clusters, such as a Google face detection model using 1,000 servers and an IBM brain-inspired chip model using 262,144 CPUs.
A talk about the Salesforce REST API: how to perform query, search, or single-record CRUD operations; how to retrieve API versions, the list of custom objects, object metadata and field metadata; plus a presentation of a demo page performing these requests.
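A rough sketch of the query part using the Salesforce REST API from Python; the instance URL, access token and API version are placeholders, and authentication (e.g. OAuth) is assumed to have happened already.

```python
import requests

INSTANCE_URL = "https://yourInstance.salesforce.com"  # placeholder
ACCESS_TOKEN = "00D...session_token"                  # placeholder, obtained via OAuth
API_VERSION = "v52.0"                                 # placeholder API version

headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# SOQL query for a handful of Account records.
resp = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/query",
    headers=headers,
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
)
resp.raise_for_status()

for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```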
Centralized log management with Elastic Stack - Rich Lee
Centralized log management is implemented using the Elastic Stack including Filebeat, Logstash, Elasticsearch, and Kibana. Filebeat ships logs to Logstash which transforms and indexes the data into Elasticsearch. Logs can then be queried and visualized in Kibana. For large volumes of logs, Kafka may be used as a buffer between the shipper and indexer. Backups are performed using Elasticsearch snapshots to a shared file system or cloud storage. Logs are indexed into time-based indices and a cron job deletes old indices to control storage usage.
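The time-based index housekeeping mentioned above (a cron job deleting old indices) can be sketched with the elasticsearch Python client; the index name pattern, date format and retention window are assumptions.

```python
from datetime import datetime, timedelta

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder cluster address

RETENTION_DAYS = 30
PREFIX = "logstash-"  # assumed daily indices like logstash-2021.06.01
cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)

# List indices matching the pattern and drop those older than the cutoff.
for name in es.indices.get(index=f"{PREFIX}*"):
    try:
        day = datetime.strptime(name[len(PREFIX):], "%Y.%m.%d")
    except ValueError:
        continue  # skip indices that don't follow the daily naming scheme
    if day < cutoff:
        es.indices.delete(index=name)
        print("deleted", name)
```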
BDA402 Deep Dive: Log Analytics with Amazon Elasticsearch Service - Amazon Web Services
Everything generates logs. Applications, infrastructure, security ... everything. Keeping track of the flood of log data is a big challenge, yet critical to your ability to understand your systems and troubleshoot (or prevent) issues. In this session, we will use both Amazon CloudWatch and application logs to show you how to build an end-to-end log analytics solution. First, we cover how to configure an Amazon Elasticsearch Service domain and ingest data into it using Amazon Kinesis Firehose, demonstrating how easy it is to transform data with Firehose. We look at best practices for choosing instance types, storage options, shard counts, and index rotations based on the throughput of incoming data and configure a secure analytics environment. We demonstrate how to set up a Kibana dashboard and build custom dashboard widgets. Finally, we dive deep into the Elasticsearch query DSL and review approaches for generating custom, ad-hoc reports.
IBM Cloud Native Day April 2021: Serverless Data Lake - Torsten Steinbach
- The document discusses serverless data analytics using IBM's cloud services, including a serverless data lake built on cloud object storage, serverless SQL queries using Spark, and serverless data processing functions.
- It provides an example of a COVID-19 data lake built on IBM Cloud that collects and integrates data from various sources, prepares and transforms the data, and makes it available for analytics and dashboards through serverless SQL queries.
Data Analytics Week at the San Francisco Loft
Using Data Lakes
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
John Mallory - Principal Business Development Manager Storage (Object), AWS
Hemant Borole - Sr. Big Data Consultant, AWS
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Level: Intermediate
Speakers:
Tony Nguyen - Senior Consultant, ProServe, AWS
Hannah Marlowe - Consultant - Federal, AWS
Serverless SQL provides a serverless analytics platform that allows users to analyze data stored in object storage without having to manage infrastructure. Key features include seamless elasticity, pay-per-query consumption, and the ability to analyze data directly in object storage without having to move it. The platform includes serverless storage, data ingest, data transformation, analytics, and automation capabilities. It aims to create a sharing economy for analytics by allowing various users like developers, data engineers, and analysts flexible access to data and analytics.
Modernizing upstream workflows with AWS storage - John Mallory - Amazon Web Services
Modernizing Upstream Workflows with AWS Storage
Accelerating seismic data retrieval, getting better data protection and reliability, and providing a common AWS data platform for compute and graphic intensive processing, simulation and visualization workloads.
Modernizing and transforming exploration and production workflows with AWS Storage services
Accelerating seismic data retrieval, getting better data protection and reliability, and providing a common AWS data platform for compute and graphic intensive processing, simulation and visualization workloads.
Capturing and processing streaming sensor data from remote oil rigs with Snowball Edge
Providing a Data Lake foundation for a next generation Digital Oilfield IoT analytics platform with Amazon S3
Speaker: John Mallory - AWS Storage Business Development Manager
Centralized Logging System Using ELK Stack - Rohit Sharma
Centralized Logging System using ELK Stack
The document discusses setting up a centralized logging system (CLS) using the ELK stack. The ELK stack consists of Logstash to capture and filter logs, Elasticsearch to index and store logs, and Kibana to visualize logs. Logstash agents on each server ship logs to Logstash, which filters and sends logs to Elasticsearch for indexing. Kibana queries Elasticsearch and presents logs through interactive dashboards. A CLS provides benefits like log analysis, auditing, compliance, and a single point of control. The ELK stack is an open-source solution that is scalable, customizable, and integrates with other tools.
A data lake can be used as a source for both structured and unstructured data - but how? We'll look at using open standards including Spark and Presto with Amazon EMR, Amazon Redshift Spectrum and Amazon Athena to process and understand data.
Speakers:
Neel Mitra - Solutions Architect, AWS
Roger Dahlstrom - Solutions Architect, AWS
This document summarizes an IBM Cloud Day 2021 presentation on IBM Cloud Data Lakes. It describes the architecture of IBM Cloud Data Lakes including data skipping capabilities, serverless analytics, and metadata management. It then discusses an example COVID-19 data lake built on IBM Cloud to provide trusted COVID-19 data to analytics applications. Key aspects included landing, preparation, and integration zones; serverless pipelines for data ingestion and transformation; and a data mart for querying and reporting.
Serverless Analytics with Amazon Redshift Spectrum, AWS Glue, and Amazon Quic... - Amazon Web Services
Learning Objectives:
- Understand how to build a serverless big data solution quickly and easily
- Learn how to discover and prepare all your data for analytics
- Learn how to query and visualize analytics on all your data to create actionable insights
We're talking about serious log crunching and intelligence gathering with Elastic, Logstash, and Kibana.
ELK is an end-to-end stack for gathering structured and unstructured data from servers. It delivers insights in real time using the Kibana dashboard giving unprecedented horizontal visibility. The visualization and search tools will make your day-to-day hunting a breeze.
During this brief walkthrough of the setup, configuration, and use of the toolset, we will show you how to find the trees from the forest in today's modern cloud environments and beyond.
The document provides an overview of the Databricks platform, which offers a unified environment for data engineering, analytics, and AI. It describes how Databricks addresses the complexity of managing data across siloed systems by providing a single "data lakehouse" platform where all data and analytics workloads can be run. Key features highlighted include Delta Lake for ACID transactions on data lakes, auto loader for streaming data ingestion, notebooks for interactive coding, and governance tools to securely share and catalog data and models.
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly - SolarWinds Loggly
This document summarizes Loggly's transition from their first generation log management infrastructure to their second generation infrastructure built on Apache Kafka, Twitter Storm, and ElasticSearch on AWS. The first generation faced challenges around tightly coupling event ingestion and indexing. The new system uses Kafka as a persistent queue, Storm for real-time event processing, and ElasticSearch for search and storage. This architecture leverages AWS services like auto-scaling and provisioned IOPS for high availability and scale. The new system provides improved elasticity, multi-tenancy, and a pre-production staging environment.
DEVNET-1140 InterCloud Mapreduce and Spark Workload Migration and Sharing: Fi... - Cisco DevNet
Data gravity is a reality when dealing with massive amounts of data in globally distributed systems. Processing this data requires distributed analytics processing across the InterCloud. In this presentation we will share our real-world experience with storing, routing, and processing big data workloads on Cisco Cloud Services and Amazon Web Services clouds.
Instrumenting and Scaling Databases with Envoy - Daniel Hochman
Every request to a database at Lyft is proxied by Envoy, providing complete visibility into the L3/L4 aspects of database interactions. This allows engineers to easily visualize changes to a database's load profile and pinpoint the root cause if necessary. Lyft has also open-sourced codecs for MongoDB, DynamoDB, and Redis. Protocol codecs in combination with custom filters yield benefits ranging from operation-level observability to horizontal scalability via sharding. Using Envoy for this purpose means that enhancements are implemented once and usable across a polyglot stack. The talk demonstrates Envoy's utility beyond traditional RPC service interactions in the network.
This document discusses Typesafe's Reactive Platform and Apache Spark. It describes Typesafe's Fast Data strategy of using a microservices architecture with Spark, Kafka, HDFS and databases. It outlines contributions Typesafe has made to Spark, including backpressure support, dynamic resource allocation in Mesos, and integration tests. The document also discusses Typesafe's customer support and roadmap, including plans to introduce Kerberos security and evaluate Tachyon.
Enabling Microservices Frameworks to Solve Business Problems - Ken Owens
Opening keynote at Mesoscon 2015 with announcements on creating an ecosystem for developing solutions to business problems leveraging Mesos, Mantl.io, Mesosphere Infinity, ZoomData, and Project Calico to create Fog nodes for IoE use cases.
Similar to Case Study: Elasticsearch Ingest Using StreamSets @ Cisco Intercloud
An LLM-powered contract compliance application that uses the advanced RAG method Self-RAG and a Knowledge Graph together for the first time.
It provides the highest accuracy for contract compliance recorded so far in the oil and gas industry.
How we implemented "exactly once" semantics in our database ... - javier ramirez
Distributed systems are hard. High-performance distributed systems, even more so. Network latencies, unacknowledged messages, server restarts, hardware failures, software bugs, problematic releases, timeouts... there are plenty of reasons why it is very hard to know whether a message you sent was received and processed correctly at its destination. So, to be safe, you send the message again... and again... and cross your fingers hoping the system on the other side tolerates duplicates.
QuestDB is an open-source database designed for high performance. We wanted to make sure we could offer "exactly once" guarantees by deduplicating messages at ingestion time. In this talk, I explain how we designed and implemented the DEDUP keyword in QuestDB, deduplicating data and also allowing upserts on real-time data, while adding only 8% of processing time, even on streams with millions of inserts per second.
I will also explain our parallel, multithreaded write-ahead log (WAL) architecture. Of course, all of this comes with demos, so you can see how it works in practice.
Airline Satisfaction Project using Azure
This presentation was created as a foundation for understanding and comparing data science/machine learning solutions built in Python notebooks locally and on the Azure cloud, as part of Course DP-100 - Designing and Implementing a Data Science Solution on Azure.
2. Agenda
• Express Overview of StreamSets Data Collector
Kirit Basu, Product Management, StreamSets
• Introduction to Elastic
Catherine Johnson, Solutions Architect, Elastic
• Implementing Shipped Analytics Using StreamSets and Elasticsearch
Dmitri Chtchourov, Innovation Architect, Cloud Solutions CTO Group
8. Software that makes massive amounts of structured and unstructured data usable for search, logging, analytics, and more in mission-critical systems and applications
9. Examples: Elastic Stack Use Cases
• Logging: IT Operations, Application Management, Security Analytics
• Analytics: Marketing Insights, Business Development, Customer Sentiment
• Search: Website Search, Internal/Intranet Search, URL Search
Spanning internal and external systems/applications, serving developers, IT/Ops, and business users
10. Elastic Solves Many Developer Use Cases
• Handles complex & diverse data: social, location, user activity, machine (log files), documents
• Meets today's core developer requirements: many users/use cases, fast data processing, large data volumes, data quality & integrity, cross-source insights
• Solves critical use cases: application search, embedded search, logging, security analytics, operational analytics, and more
11. The Elastic Stack
• Ingest
• Store, Index, & Analyze
• User Interface
• Plugins: Monitoring, Security, Alerting
• Elastic Cloud: Hosted Elasticsearch
13. Implementing Shipped Analytics Using StreamSets and Elasticsearch
Dmitri Chtchourov, Innovation Architect, Cloud Solutions CTO Group
Tymofii Polekhin, Software Engineer
14. Agenda
• MANTL & Shipped
• Shipped Analytics for Shipped
• Why do we need Shipped Analytics?
• Architecture and Data Flow
• StreamSets Pipelines
• End-to-end dataflow and performance with Elasticsearch
• Benefits of StreamSets
• Demo
15. Platform architecture (diagram):
• Microservices managed and scaled separately; microservices managed by Mesos in a single platform; microservices architecture for Mesos frameworks and other components
• Infrastructure (CIS/AWS/Metastack/vSphere/UCS…) provisioned with Terraform across VM or bare-metal nodes (VM1/BM1 … VM5/BM5)
• Frameworks running in Docker: Spark (scheduler plus executors 1…N), Kafka (scheduler plus brokers 1…N)
• Traefik in front of the microservices; REST APIs for scripted and direct provisioning; policy and auto-scaling
19. Infrastructure Layer (diagram): Zookeeper cluster, Consul cluster, Mesos cluster, Marathon framework, Kafka cluster, with Beats shippers (topbeat, filebeat, journalbeat, dockerbeat)
• Experimenting with Elastic Beats (unified architecture, closer to the micro-services model)
• Elastic Beats to replace collectd plugins and cAdvisor for containers
20. Shipping metrics and logs (diagram): <file | top | *>beat and collectd send to Logstash, discovered via DNS SRV records (beats.logstash.service.consul and collectd.logstash.service.consul); Logstash performs data normalization, tagging, and cluster-name decoration.
Logstash is a single process per cluster, discoverable with a standard inter-cluster discovery mechanism, which gets metrics from collectd on every slave and logs from filebeat on every slave, normalizes the data, and sends it to the desired output.
NOTE: currently Logstash runs in a Docker container on every node; we will be moving to Filebeat and a Logstash Mesos framework soon.
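The DNS SRV based discovery mentioned on this slide (Consul service records such as beats.logstash.service.consul) can be sketched with the dnspython library; the resolver address and port are assumptions about a typical Consul agent setup, not taken from the deck.

```python
import dns.resolver

# Consul typically answers DNS queries on port 8600 of the local agent;
# this address is an assumption about the environment.
resolver = dns.resolver.Resolver()
resolver.nameservers = ["127.0.0.1"]
resolver.port = 8600

# Look up the Logstash endpoints that Beats shippers should send to.
answers = resolver.resolve("beats.logstash.service.consul", "SRV")
for srv in answers:
    print(f"send beats traffic to {srv.target.to_text().rstrip('.')}:{srv.port}")
```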
21. Sending data to the central Shipped Analytics cluster for long-term analytics (diagram: Shipped cluster Logstash, SSL-encrypted link over the WAN, Shipped Analytics Kafka).
Kafka 0.9.0.0 supports SSL authentication and data encryption for producers. This is must-have security when sending data to an external destination through the WAN.
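A sketch of the SSL-secured producer configuration described on this slide, using the confluent-kafka Python client; the broker address, certificate paths, topic and payload are placeholders.

```python
from confluent_kafka import Producer

# TLS-encrypted, certificate-authenticated connection to the central
# Shipped Analytics Kafka cluster across the WAN (all values are placeholders).
producer = Producer({
    "bootstrap.servers": "analytics-kafka.example.com:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "/etc/kafka/certs/ca.pem",
    "ssl.certificate.location": "/etc/kafka/certs/client.pem",
    "ssl.key.location": "/etc/kafka/certs/client.key",
})

producer.produce("logstash", value=b'{"cluster": "shipped-texas-3", "msg": "hello"}')
producer.flush()
```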
22. StreamSets running in Mesos, in Spark cluster mode, processing data from multiple source Shipped clusters and storing it in an Elasticsearch cluster (diagram: Kafka feeding a StreamSets Spark Streaming cluster with a master instance and Spark jobs, writing to Elasticsearch).
23. Lambda Reference Architecture (diagram): local monitoring/analytics clusters (Texas-3, Amsterdam-1, London-1) feed a global monitoring/analytics cluster (Texas-1) over SSL-secured Kafka and MQTT links.
• Local components and deployment are the same as global, just smaller
• Real-time and batch processing (Lambda), anomaly detection, visualization
24. Divide nodes by role for more stable cluster operation and ease of scalability (diagram):
• 3 master/search nodes
• 5 live data nodes (live indices: shards=5, replicas=4)
• 3 archive data nodes (archive indices: shards=5, replicas=1)
• Node spec: 4 CPU, 30 GB RAM, 4 TB HDD
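The shard and replica layout on this slide could be applied when creating the live and archive indices, roughly as sketched below with the elasticsearch Python client; the index names and cluster address are placeholders.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder cluster address

# Live index: 5 primary shards, 4 replicas (as on the slide).
es.indices.create(
    index="metrics-live",
    body={"settings": {"number_of_shards": 5, "number_of_replicas": 4}},
)

# Archive index: 5 primary shards, 1 replica.
es.indices.create(
    index="metrics-archive",
    body={"settings": {"number_of_shards": 5, "number_of_replicas": 1}},
)
```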
25. StreamSets pipelines process incoming messages and transform them according to business-logic requirements, normalizing metrics and parsing log lines, and surfacing important information using GROK filters or scripts.
Pipeline stages (diagram): Cluster Name Decorator, Fields Type Normalization, Metrics/Logs Stream Splitter, General GROK Filters, Shipped GROK Logic, Float Value Truncate, ES Logs Output, ES Metrics Output.
26. Marathon
• StreamSets instances running in Docker containers in Marathon
o Easy deployment and scaling
o Fast upgrade to newer versions
• Issues we faced with this approach:
o Containers were killed by Marathon
o Needed to re-import the pipeline every time we launched a container
27. Marathon
• Working with StreamSets to resolve the OOM issue, we increased container memory and SDC heap size
• At first all looked normal and we thought it was just starving for resources, but several days later SDC was killed again
• We increased MEM and HEAP even more, to 16G, but that bought us just another day or two before it was killed again
• It looked like the SDC heap was constantly filling with data that didn't go away, eventually killing the container
• GC was also working hard and we sometimes saw freezes of up to 60 seconds
• We decided to move away from Docker
28. Marathon
• StreamSets reading JSON messages from the Kafka cluster and outputting to the Elasticsearch cluster
o De-serializing and serializing JSON was very slow with a single-threaded process
o Kafka consumption performance tests showed:
JSON format: 5k records/sec avg
Text format: 50k records/sec avg
Binary format: 250k records/sec avg
• The StreamSets team was very proactive with these issues and within 2 days we received a fix for multi-threaded JSON parsing
o New testing showed:
JSON format: 66k records/sec avg
29. Marathon
• StreamSets never failed because of any internal logic bugs, but we kept seeing the oom-killer popping up, and recovery was not automated
• We decided to leave Docker and run SDC natively on the host, still using Marathon for scaling and failover
• Without Docker, we can now upload our pipeline on SDC startup, and it starts working as soon as the instance has loaded
• We can freely scale up/down whenever we need
• We also got rid of the oom-killer issue
30. Each one of our 3 SDC instances has already processed ~3B messages, with no issues!
31. • StreamSets pipelines consume metrics gathered by collectd and logs gathered by Logstash from 4 different clusters (including itself), transform and decorate them, and send them to Elasticsearch for storage and analytics.
• First of all, we consume messages from the Kafka topic at an average of 5,000 messages per second. The consumer itself parses the JSON format and passes the records onward.
• The next stage is a JavaScript script that decorates messages with the cluster name, based on the instance hostname in each message.
• Finally, we exclude Marathon events from the stream, sending them directly to ES.
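A plain-Python sketch of the decoration and filtering steps described on this slide; the presentation uses a JavaScript evaluator inside StreamSets, and the hostname-to-cluster mapping and field names here are invented purely for illustration.

```python
import json

# Invented mapping from instance hostname prefixes to cluster names.
HOST_PREFIX_TO_CLUSTER = {
    "tx3-": "shipped-texas-3",
    "ams1-": "shipped-amsterdam-1",
    "lon1-": "shipped-london-1",
}

def decorate(raw_message: str):
    """Parse a JSON record, add a cluster name, and divert Marathon events."""
    record = json.loads(raw_message)

    # Cluster-name decoration based on the instance hostname in the message.
    hostname = record.get("host", "")
    record["cluster"] = next(
        (cluster for prefix, cluster in HOST_PREFIX_TO_CLUSTER.items()
         if hostname.startswith(prefix)),
        "unknown",
    )

    # Marathon events bypass the rest of the pipeline and go straight to ES.
    if record.get("type") == "marathon_event":
        return None, record
    return record, None

msg = '{"host": "tx3-agent-07", "type": "log", "message": "container started"}'
to_pipeline, to_es_direct = decorate(msg)
print(to_pipeline, to_es_direct)
```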
32. • The next stage splits the stream into 2 parts: logs and metrics
• Metrics are sent straight to ES without any transformation
• Logs are the most interesting part:
o We pop Docker container logs from the stream, delete the "time" field (a duplicate timestamp), and send them to ES
o We separate logs from specific clusters, because we need to apply special logic to them
o Separation is done through mapping IPs to clusters in the pipeline in real time
33. • We collect data from several Mesos clusters and need to correlate container metrics with their logs
• We use appID, taskID and runID to identify a specific container's logs
• Container logs themselves have all three of these, while mesos-master and mesos-agent logs lack the runID
• All unidentified data is discarded
34. Current Shipped Analytics prod cluster configuration:
Kafka cluster: 7 brokers with 4 CPU and 16 GB RAM each
• Logstash topic for all incoming messages, with 7 partitions and 2 replicas
• Current data flow averages 5,000 messages/sec to Kafka
• Current data size averages 1.2 MB/sec to Kafka
StreamSets: 3 instances with identical pipeline configuration reading from the Kafka cluster
• The 7 partitions are split between the 3 instances as 3/2/2
• All 3 instances run natively on the host (non-Docker) with Marathon
• Marathon restarts a failed instance with automatic pipeline upload and start
Elasticsearch: 7 nodes with 4 CPU, 16 GB RAM and 2 TB storage each
• Each metric is written to its own index, 15 indexes in total
• Each index has 5 primary shards and 5 replica shards
• Total doc count: 17.5B; total doc size: 3.84 TB
• 1-day rate count: ~500M; 1-day rate size: ~120 GB
35. StreamSets is a great product to work with; the team is also super helpful and works fast
• Lots of input and output connectors, huge processing capabilities
• Very intuitive and rich user interface
• Easy to create pipelines visually, instead of writing code
• Clear data flow paths
• Small resource consumption compared to performance
• Can easily handle up to 10k records/sec to Elasticsearch with 1 CPU and 2 GB RAM
• Simple configuration and deployment process
• Open source(!)
• Fast logic changes with minimum downtime
• Preview mode(!): check every stage before throwing all your data at it
• Rich data transformation possibilities
• GROK filters: easy to migrate from Logstash
• Smart error handling
• Reliable: not once did StreamSets crash by itself; only Docker, Marathon, and Mesos issues