Unifying Stream SQL and CEP for Declarative Stream Processing with Apache Flink

Till Rohrmann
trohrmann@apache.org
@stsffap
Original creators of Apache Flink®
Providers of the dA Platform, a supported Flink distribution
Streams are Everywhere

• Most data is continuously produced as streams
• Processing data as it arrives is becoming increasingly popular
• Many diverse applications and use cases
Batch Analytics

• The batch approach to data analytics
Streaming Analytics

• Online aggregation of streams
  - No delay: continuous results
• Stream analytics subsumes batch analytics
  - A batch is just a finite stream
• Demanding requirements on the stream processor
  - High throughput
  - Exactly-once semantics & event-time support
  - Advanced window support
Complex Event Processing

• Analyzing a stream of events and drawing conclusions
  - Detect patterns and assemble new events
• Applications
  - Network intrusion detection
  - Process monitoring
  - Algorithmic trading
• Demanding requirements on the stream processor
  - Low latency!
  - Exactly-once semantics & event-time support
Apache Flink®

• Platform for scalable stream processing
• Meets the requirements of CEP and stream analytics
  - Low latency and high throughput
  - Exactly-once semantics
  - Event-time support
  - Advanced windowing
• Core DataStream API available for Java & Scala (see the sketch below)
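To give a feel for the DataStream API, here is a minimal, self-contained Scala sketch. The word-count example and all names in it are illustrative and not taken from the talk.

import org.apache.flink.streaming.api.scala._

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Count word occurrences on a (here: tiny) stream of words.
    env.fromElements("flink", "streams", "flink")
      .map(word => (word, 1))
      .keyBy(_._1)
      .sum(1)
      .print()

    env.execute("Streaming word count")
  }
}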
Use Case: Tracking an Order Process
Order Process
[Figure: an order moves through the stages "received", "shipped", and "delivered"]
Order Events

• The process is reflected in a stream of order events (a possible Scala encoding follows):
  - Order(orderId, tStamp, "received")
  - Shipment(orderId, tStamp, "shipped")
  - Delivery(orderId, tStamp, "delivered")
• orderId: identifies the order
• tStamp: time at which the event happened
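To make the later code snippets self-contained, here is one possible Scala encoding of these events. The shared Event trait and the field types are assumptions; the slides only show the event names and their fields.

// Assumed event hierarchy: the CEP snippet later keys by "orderId" and
// inspects "status", so every event exposes those fields.
sealed trait Event {
  def orderId: Long
  def tStamp: Long
  def status: String
}

case class Order(orderId: Long, tStamp: Long, status: String = "received") extends Event
case class Shipment(orderId: Long, tStamp: Long, status: String = "shipped") extends Event
case class Delivery(orderId: Long, tStamp: Long, status: String = "delivered") extends Event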
Stream Analytics: Aggregating Massive Streams
Stream Analytics

• Traditional batch analytics
  - Repeated queries on finite and changing data sets
  - Queries join and aggregate large data sets
• Stream analytics
  - A "standing" query produces continuous results from an infinite input stream
  - The query computes aggregates on high-volume streams
• How do we compute aggregates on infinite streams?
Compute Aggregates on Streams

• Split the infinite stream into finite "windows" (both window types are sketched below)
  - Usually split by time
• Tumbling windows
  - Fixed size & consecutive
• Sliding windows
  - Fixed size & may overlap
• Event time is mandatory for correct & consistent results!
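A sketch of both window types with the DataStream API, assuming the Event type sketched earlier and that event-time timestamps and watermarks have already been assigned; the window sizes are illustrative.

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.{SlidingEventTimeWindows, TumblingEventTimeWindows}
import org.apache.flink.streaming.api.windowing.time.Time

// Tumbling: fixed-size, consecutive, non-overlapping 1-hour windows.
def tumblingCounts(events: DataStream[Event]): DataStream[(String, Int)] =
  events
    .map(e => (e.status, 1))
    .keyBy(_._1)
    .window(TumblingEventTimeWindows.of(Time.hours(1)))
    .sum(1)

// Sliding: fixed-size 1-hour windows that overlap, advancing every 10 minutes.
def slidingCounts(events: DataStream[Event]): DataStream[(String, Int)] =
  events
    .map(e => (e.status, 1))
    .keyBy(_._1)
    .window(SlidingEventTimeWindows.of(Time.hours(1), Time.minutes(10)))
    .sum(1)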
Example: Count Orders by Hour
SELECT
  TUMBLE_START(tStamp, INTERVAL '1' HOUR) AS hour,
  COUNT(*) AS cnt
FROM events
WHERE
  status = 'received'
GROUP BY
  TUMBLE(tStamp, INTERVAL '1' HOUR)
Stream SQL Architecture

• Flink features SQL on static and streaming tables
• Parsing and optimization by Apache Calcite
• SQL queries are translated into native Flink programs (see the sketch below)
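A minimal sketch of that round-trip, using the Flink 1.3-era Table API that the later slides also use; the sample data and the query are illustrative.

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val tableEnv = TableEnvironment.getTableEnvironment(env)

// Register a stream as a table; Calcite parses and optimizes the SQL,
// and the resulting plan runs as an ordinary DataStream program.
tableEnv.registerDataStream("events", env.fromElements(
  Order(1L, 1000L), Order(2L, 2000L)))

val received = tableEnv.sql(
  "SELECT orderId, tStamp FROM events WHERE status = 'received'")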
Complex Event Processing: Pattern Matching on Streams
Real-time Warnings
CEP to the Rescue

• Define processing and delivery intervals (SLAs)
• ProcessSucc(orderId, tStamp, duration)
• ProcessWarn(orderId, tStamp)
• DeliverySucc(orderId, tStamp, duration)
• DeliveryWarn(orderId, tStamp)
• orderId: identifies the order
• tStamp: time when the event happened
• duration: duration of the processing/delivery
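Matching Scala definitions for these result events, so that the following snippets are self-contained; the field types are assumptions.

case class ProcessSucc(orderId: Long, tStamp: Long, duration: Long)
case class ProcessWarn(orderId: Long, tStamp: Long)
case class DeliverySucc(orderId: Long, tStamp: Long, duration: Long)
case class DeliveryWarn(orderId: Long, tStamp: Long)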
CEP Example
Processing: Order → Shipment
import org.apache.flink.cep.scala.CEP
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

// Match an Order ("received") that is followed by a Shipment within one hour.
val processingPattern = Pattern
  .begin[Event]("received").subtype(classOf[Order])
  .followedBy("shipped").where(_.status == "shipped")
  .within(Time.hours(1))

// input is the DataStream[Event] of order events.
val processingPatternStream = CEP.pattern(
  input.keyBy("orderId"),
  processingPattern)

val procResult: DataStream[Either[ProcessWarn, ProcessSucc]] =
  processingPatternStream.select {
    (pP, timestamp) => // Timeout handler: no Shipment arrived within one hour
      ProcessWarn(pP("received").orderId, timestamp)
  } {
    fP => // Select function: the complete pattern matched in time
      ProcessSucc(
        fP("received").orderId, fP("shipped").tStamp,
        fP("shipped").tStamp - fP("received").tStamp)
  }
Integrated Stream Analytics with CEP: ... and both at the same time!
Count Delayed Shipments
Compute Avg Processing Time
CEP + Stream SQL
// Complex event processing result (definition elided on the slide;
// a hypothetical sketch follows below)
val delResult: DataStream[Either[DeliveryWarn, DeliverySucc]] = ...

// Keep only the warnings (the Left side of the Either)
val delWarn: DataStream[DeliveryWarn] = delResult.flatMap(_.left.toOption)

// Convert the warning stream into a table and register it for SQL queries
val deliveryWarningTable: Table = delWarn.toTable(tableEnv)
tableEnv.registerTable("deliveryWarnings", deliveryWarningTable)

// Calculate the delayed deliveries per day
val delayedDeliveriesPerDay = tableEnv.sql(
  """SELECT
    |  TUMBLE_START(tStamp, INTERVAL '1' DAY) AS day,
    |  COUNT(*) AS cnt
    |FROM deliveryWarnings
    |GROUP BY TUMBLE(tStamp, INTERVAL '1' DAY)""".stripMargin)
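The slide elides how delResult is produced. By analogy with the processing pattern shown earlier, here is a hypothetical sketch; the 24-hour delivery SLA and all names are assumptions, not from the talk.

// Hypothetical delivery pattern (assumed 24-hour SLA): a Shipment should be
// followed by a Delivery within 24 hours, otherwise a warning is emitted.
val deliveryPattern = Pattern
  .begin[Event]("shipped").where(_.status == "shipped")
  .followedBy("delivered").where(_.status == "delivered")
  .within(Time.hours(24))

val deliveryPatternStream = CEP.pattern(input.keyBy("orderId"), deliveryPattern)

val delResultSketch: DataStream[Either[DeliveryWarn, DeliverySucc]] =
  deliveryPatternStream.select {
    (partial, timestamp) => // Timeout handler: delivery SLA violated
      DeliveryWarn(partial("shipped").orderId, timestamp)
  } {
    full => // Select function: delivered in time
      DeliverySucc(
        full("shipped").orderId, full("delivered").tStamp,
        full("delivered").tStamp - full("shipped").tStamp)
  }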
CEP-enriched Stream SQL
SELECT
  TUMBLE_START(tStamp, INTERVAL '1' DAY) AS day,
  AVG(duration) AS avgDuration
FROM (
  -- CEP pattern
  SELECT duration, tStamp
  FROM inputs MATCH_RECOGNIZE (
    PARTITION BY orderId ORDER BY tStamp
    MEASURES
      END.tStamp - START.tStamp AS duration,
      END.tStamp AS tStamp
    PATTERN (START OTHER* END) WITHIN INTERVAL '1' HOUR
    DEFINE
      START AS START.status = 'received',
      END AS END.status = 'shipped'
  )
)
GROUP BY
  TUMBLE(tStamp, INTERVAL '1' DAY)
Conclusion

• Apache Flink handles both CEP and analytical workloads
• Apache Flink offers intuitive APIs
• The integration of CEP and Stream SQL enables a new class of applications
Thank you!

@stsffap
@ApacheFlink
@dataArtisans
Flink Forward is coming back to Berlin
September 11-13, 2017
berlin.flink-forward.org
We are hiring!
data-artisans.com/careers