Apache Spark in Depth
core concepts, architecture & internals
Anton Kirillov Ooyala, Mar 2016
Roadmap
● RDDs
○ Definition
○ Operations
● Execution workflow
○ DAG
○ Stages and tasks
○ Shuffle
● Architecture
○ Components
○ Memory model
● Coding
○ spark-shell
○ building and submitting Spark applications to YARN
Meet Spark
● Generalized framework for distributed data processing (batch, graph, ML)
● Functional API in the spirit of Scala collections for manipulating data at scale
● In-memory data caching and reuse across computations
● Applies a set of coarse-grained transformations over partitioned data
● Failure recovery relies on lineage to recompute failed tasks
● Supports the majority of input formats and integrates with Mesos / YARN
Spark makes data engineers happy
Backup/restore of Cassandra tables in Parquet
def backup(config: Config) {
sc.cassandraTable(config.keyspace, config.table).map(_.toEvent).toDF()
.write.parquet(config.path)
}
def restore(config: Config) {
sqlContext.read.parquet(config.path)
.map(_.toEvent).saveToCassandra(config.keyspace, config.table)
}
Query different data sources to identify discrepancies
sqlContext.sql {
"""
SELECT count(*)
FROM cassandra_event_rollups
JOIN mongo_event_rollups
ON cassandra_event_rollups.uuid = mongo_event_rollups.uuid
WHERE cassandra_event_rollups.value != mongo_event_rollups.value
""".stripMargin
}

Core Concepts
RDD: Resilient Distributed Dataset
● A fault-tolerant, immutable, parallel data structure
● Provides API for
○ manipulating the collection of elements (transformations and materialization)
○ persisting intermediate results in memory for later reuse
○ controlling partitioning to optimize data placement
● Can be created through deterministic operations
○ from storage (distributed file system, database, plain file)
○ from another RDD
● Stores information about parent RDDs
○ for execution optimization and operations pipelining
○ to recompute the data in case of failure
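As a minimal sketch of the two creation paths (assuming a live SparkContext sc; the HDFS path is hypothetical):

// from storage: a deterministic read of a distributed file
val lines = sc.textFile("hdfs://namenode/events/input.log")
// from another RDD: the child keeps a reference to its parent for lineage
val events = lines.filter(_.nonEmpty).map(_.split(","))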
RDD: a developer’s view
● Distributed immutable data + lazily evaluated operations
○ partitioned data + iterator
○ transformations & actions
● An interface defining 5 main properties
a list of partitions (e.g. splits in Hadoop)
def getPartitions: Array[Partition]
a list of dependencies on other RDDs
def getDependencies: Seq[Dependency[_]]
a function for computing each split
def compute(split: Partition, context: TaskContext): Iterator[T]
(optional) a list of preferred locations to compute each split on
def getPreferredLocations(split: Partition): Seq[String] = Nil
(optional) a partitioner for key-value RDDs
val partitioner: Option[Partitioner] = None
The slide annotates the first three properties as the RDD's lineage and the two optional ones as hints for execution optimization.
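A toy custom RDD makes the interface concrete (a sketch, not production code; the class name and ranges are made up):

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// n partitions, each computing a range of 100 integers
class RangeRDD(sc: SparkContext, n: Int) extends RDD[Int](sc, Nil /* no dependencies */) {
  override def getPartitions: Array[Partition] =
    Array.tabulate(n)(i => new Partition { override def index: Int = i })
  override def compute(split: Partition, context: TaskContext): Iterator[Int] =
    (split.index * 100 until (split.index + 1) * 100).iterator
  // getDependencies, getPreferredLocations and partitioner keep their defaults
}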
RDDs Example
● HadoopRDD
○ getPartitions = HDFS blocks
○ getDependencies = None
○ compute = load block in memory
○ getPreferredLocations = HDFS block locations
○ partitioner = None
● MapPartitionsRDD
○ getPartitions = same as parent
○ getDependencies = parent RDD
○ compute = compute parent and apply map()
○ getPreferredLocations = same as parent
○ partitioner = None
sparkContext.textFile("hdfs://...")
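So a plain textFile call already yields a two-RDD lineage, which toDebugString can print (output shape approximate):

scala> sc.textFile("hdfs://...").toDebugString
(2) MapPartitionsRDD[1] at textFile at <console>:25 []
 |  hdfs://... HadoopRDD[0] at textFile at <console>:25 []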

RDD Operations
● Transformations
○ apply user function to every element in a partition (or to the whole partition)
○ apply aggregation function to the whole dataset (groupBy, sortBy)
○ introduce dependencies between RDDs to form DAG
○ provide functionality for repartitioning (repartition, partitionBy)
● Actions
○ trigger job execution
○ used to materialize computation results
● Extra: persistence
○ explicitly store RDDs in memory, on disk or off-heap (cache, persist)
○ checkpointing for truncating RDD lineage
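A small sketch of how the three groups combine (laziness is the key point):

val numbers = sc.parallelize(1 to 1000000)
// transformations are lazy: nothing executes yet
val even = numbers.filter(_ % 2 == 0)
// persistence only marks the RDD; data is cached on first materialization
even.cache()
// actions trigger job execution
val count = even.count() // first job: computes and caches the partitions
val top = even.take(5)   // second job: served from the cache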
Execution workflow
rdd1.join(rdd2)
.groupBy(...)
.filter(...)
The slide's diagram traces a job through the scheduling components: the DAGScheduler splits the operator graph into stages of tasks and submits each stage as ready; the TaskScheduler launches tasks via the cluster manager and retries failed or struggling tasks; the workers execute tasks and store and serve blocks.
Code sample: joining aggregated and raw data
//aggregate events after specific date for given campaign
val events = sc.cassandraTable("demo", "event")
.map(_.toEvent)
.filter(event => event.campaignId == campaignId && event.time.isAfter(watermark))
.keyBy(_.eventType)
.reduceByKey(_ + _)
.cache()
//aggregate campaigns by type
val campaigns = sc.cassandraTable("demo", "campaign")
.map(_.toCampaign)
.filter(campaign => campaign.id == campaignId && campaign.time.isBefore(watermark))
.keyBy(_.eventType)
.reduceByKey(_ + _)
.cache()
//joined rollups and raw events
val joinedTotals = campaigns.join(events)
.map { case (key, (campaign, event)) => CampaignTotals(campaign, event) }
.collect()
//count totals separately
val eventTotals = events.map{ case (t, e) => s"$t -> ${e.value}" }.collect()
val campaignTotals = campaigns.map{ case (t, e) => s"$t -> ${e.value}" }.collect()
DAG

Dependency types
● Narrow (pipelineable)
○ each partition of the parent RDD is used by at most
one partition of the child RDD
○ allow for pipelined execution on one cluster node
○ failure recovery is more efficient as only lost parent
partitions need to be recomputed
● Wide (shuffle)
○ multiple child partitions may depend on one parent
partition
○ require data from all parent partitions to be available
and to be shuffled across the nodes
○ if a partition is lost from all of its ancestors, a
complete recomputation is needed
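For instance (mapValues preserves partitioning, groupByKey does not):

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
// narrow: each child partition depends on exactly one parent partition
val incremented = pairs.mapValues(_ + 1)
// wide: a child partition needs records with its keys from all parent
// partitions, so a ShuffleDependency is introduced
val grouped = pairs.groupByKey()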
Stages and Tasks
● Stages breakdown strategy
○ walk backwards from the final RDD
○ add each “narrow” dependency to
the current stage
○ create new stage when there’s a
shuffle dependency
● Tasks
○ ShuffleMapTask partitions its
input for shuffle
○ ResultTask sends its output to
the driver
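The breakdown can be observed on a classic word count (a sketch; in the real toDebugString output an indentation shift marks the shuffle boundary between the two stages):

val counts = sc.textFile("hdfs://...")
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _) // shuffle dependency => stage boundary
println(counts.toDebugString)
// textFile/flatMap/map pipeline into one stage of ShuffleMapTasks;
// the post-shuffle side of reduceByKey runs as ResultTasks on an action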
Shuffle
● Shuffle Write
○ redistributes data among partitions
and writes files to disk
○ each hash shuffle task creates one
file per “reduce” task (total = MxR)
○ sort shuffle task creates one file
with regions assigned to reducer
○ sort shuffle uses in-memory sorting
with spillover to disk to get final
result
● Shuffle Read
○ fetches the files and applies
reduce() logic
○ if data ordering is needed then it is
sorted on “reducer” side for any
type of shuffle (SPARK-2926)
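In Spark 1.6 the shuffle implementation is still selectable via configuration (a sketch; "sort" has been the default since 1.2, "hash" is the legacy option):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("shuffle-demo")
  .set("spark.shuffle.manager", "sort")   // or "hash"
  .set("spark.shuffle.compress", "true")  // compress map outputs
val sc = new SparkContext(conf)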
Sort Shuffle
● Incoming records are accumulated
and sorted in memory according
to their target partition ids
● Sorted records are written to a
single file, or to multiple files that
are merged if the sorter spills
● An index file stores the offsets of
the data blocks in the data file
● Sorting without deserialization is
possible under certain conditions
(SPARK-7081)
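Paraphrasing SPARK-7081 (so treat the details as approximate), the serialized sort path requires a serializer that supports relocation of serialized data, no map-side aggregation, and an output partition count that fits in 24 bits:

// Kryo supports relocation of serialized records, enabling the
// serialized ("Tungsten") sort path when there is no map-side
// aggregation and fewer than ~16.7M (2^24) output partitions
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")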

Architecture Recap
● Spark Driver
○ separate process to execute user
applications
○ creates SparkContext to schedule
jobs execution and negotiate with
cluster manager
● Executors
○ run tasks scheduled by driver
○ store computation results in
memory, on disk or off-heap
○ interact with storage systems
● Cluster Manager
○ Mesos
○ YARN
○ Spark Standalone
Spark Components
● SparkContext
○ represents the connection to a Spark cluster, and can be used to create RDDs, accumulators and
broadcast variables on that cluster
● DAGScheduler
○ computes a DAG of stages for each job and submits them to TaskScheduler
○ determines preferred locations for tasks (based on cache status or shuffle files locations) and finds
minimum schedule to run the jobs
● TaskScheduler
○ responsible for sending tasks to the cluster, running them, retrying if there are failures, and mitigating
stragglers
● SchedulerBackend
○ backend interface for scheduling systems that allows plugging in different implementations (Mesos,
YARN, Standalone, local)
● BlockManager
○ provides interfaces for putting and retrieving blocks both locally and remotely into various stores
(memory, disk, and off-heap)
Memory Management in Spark 1.6
● Execution Memory
○ storage for data needed during task execution
○ shuffle-related data
● Storage Memory
○ storage of cached RDDs and broadcast variables
○ possible to borrow from execution memory
(spill otherwise)
○ cached blocks within the safeguard value (0.5 of Spark
memory by default) are immune to eviction
● User Memory
○ user data structures and internal metadata in Spark
○ safeguarding against OOM
● Reserved memory
○ memory needed for running executor itself and not
strictly related to Spark
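A back-of-the-envelope sizing under the 1.6 defaults (spark.memory.fraction = 0.75, spark.memory.storageFraction = 0.5, 300 MB reserved) for a hypothetical 4 GB executor heap:

val heap        = 4096L * 1024 * 1024    // -Xmx4g
val reserved    = 300L * 1024 * 1024     // fixed reserved memory
val usable      = heap - reserved        // ~3796 MB
val sparkMemory = (usable * 0.75).toLong // ~2847 MB execution + storage
val storage     = (sparkMemory * 0.5).toLong // ~1423 MB eviction-safe cache
val userMemory  = usable - sparkMemory   // ~949 MB user data structures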

Workshop
code available @ github.com/datastrophic/spark-workshop
Execution Modes
● spark-shell --master [ local | spark | yarn-client | mesos ]
○ launches REPL connected to specified cluster manager
○ always runs in client mode
● spark-submit --master [ local | spark:// | mesos:// | yarn ] spark-job.jar
○ launches assembly jar on the cluster
● Masters
○ local[K] - run Spark locally with K worker threads
○ spark - launches driver app on Spark Standalone installation
○ mesos - driver will spawn executors on Mesos cluster (deploy-mode: client | cluster)
○ yarn - same idea as with Mesos (deploy-mode: client | cluster)
● Deploy Modes
○ client - driver executed as a separate process on the machine where it has been launched and
spawns executors
○ cluster - driver launched as a container using underlying cluster manager
Invocation examples
spark-shell \
--master yarn \
--deploy-mode client \
--executor-cores 1 \
--num-executors 2 \
--jars /target/spark-workshop.jar \
--conf spark.cassandra.connection.host=cassandra

spark-submit --class io.datastrophic.spark.workshop.ParametrizedApplicationExample \
--master yarn \
--deploy-mode cluster \
--num-executors 2 \
--driver-memory 1g \
--executor-memory 1g \
/target/spark-workshop.jar \
--cassandra-host cassandra \
--keyspace demo \
--table event \
--target-dir /workshop/dumps
Live Demo
● spark-shell
● Spark UI
● creating an app with Typesafe Activator
● Spark SQL and DataFrames API
● coding

Coding ideas
● get familiar with API through sample project
○ join data from different storage systems
○ aggregate data with breakdown by date
● play with caching and persistence
● check out join behavior applying different partitioning
● familiarize with Spark UI
● experiment with the new Dataset API (since 1.6) - a sketch follows this list
● [ your awesome idea here ]
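For the Dataset idea, a minimal 1.6-style sketch (the parquet path and Event fields are made up):

import sqlContext.implicits._

case class Event(id: String, value: Long)

// Datasets layer typed, lambda-friendly operations over DataFrames
val events = sqlContext.read.parquet("/workshop/dumps/events").as[Event]
val positive = events.filter(_.value > 0).count()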
Questions
@antonkirillov datastrophic.io
