This document introduces Fluvio, an open-source data streaming platform created by the team behind NGINX's open-source service mesh. It provides a programmable platform for data in motion that can be used to build analytics pipelines, track user behavior and sensor data, and enable fraud detection. Fluvio is positioned as offering better performance and lower costs than Kafka. The roadmap details ongoing development of Fluvio and its cloud offering from InfinyOn, including the addition of SmartModules, connectors, and pipelines.
Apache Kafka users who want to leverage Google Cloud Platform's (GCP's) data analytics and open-source hosting capabilities can bridge their existing Kafka infrastructure, whether on-premises or in other clouds, to GCP using Confluent's Replicator tool and Confluent's managed Kafka service on GCP. Using actual customer examples and a reference architecture, we'll showcase how existing Kafka users can stream data to GCP and use it in popular tools such as Apache Beam on Dataflow, BigQuery, Google Cloud Storage (GCS), Spark on Dataproc, and TensorFlow for data warehousing, data processing, data storage, and advanced analytics with AI and ML.
In this interactive session, you'll access a lab environment that shows you how to build streaming applications on top of Kafka, leveraging Confluent's modern tooling. This is your exclusive opportunity to hear from the thought leaders behind Apache Kafka on how event streaming enables real-time data processing, with an easy-to-use yet powerful interactive interface for stream processing, without the need to write code.
A new generation of technologies is needed to consume and exploit today's real time, fast moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies. This webinar explores the use-cases and architecture for Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.
This document discusses real-time processing of large amounts of data using a streaming platform. It begins with an agenda for the presentation, then discusses how a streaming platform can serve as a central nervous system for the enterprise. Several use cases are presented, including using Apache Kafka and the Confluent Platform for applications such as fraud detection and customer analytics, and migrating from batch to stream-based data processing. The rest of the document goes into detail on Kafka, the Confluent Platform, and how they can be used to build stream processing applications.