Stream Processing Beyond Streaming with Apache Flink
In this talk, we will take a look at the latest developments in stream processing from the perspective of Apache Flink, and discuss how stream processing is outgrowing its original niche of real-time data processing. Stream processing is rapidly maturing into a technology that enables new approaches to general data processing, with use cases that include batch processing, real-time applications, and distributed transactions.
We will discuss how Flink implements this approach of "stream processing beyond streaming" and illustrate it by example. We will demonstrate streaming SQL in Apache Flink, showing how ANSI SQL works across both batch and streaming use cases, including features such as temporal joins and complex event processing that make typical applications easier to model. We will also present a new approach to distributed transactions that makes it possible to use stream processing for fast cross-shard transactions with ACID serializability.
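As a flavor of what such a demonstration might look like, here is a minimal sketch of a Flink SQL temporal join. The table names (Orders, CurrencyRates) and columns are hypothetical; the FOR SYSTEM_TIME AS OF syntax is Flink's standard way to join a stream against the version of a table that was valid at each row's event time:

```sql
-- Hypothetical tables: Orders (an append-only stream) and
-- CurrencyRates (a versioned table of exchange rates).
-- The temporal join picks, for every order, the rate that was
-- valid at that order's event time.
SELECT
  o.order_id,
  o.price * r.rate AS converted_price
FROM Orders AS o
JOIN CurrencyRates FOR SYSTEM_TIME AS OF o.order_time AS r
  ON o.currency = r.currency;
```

The same query can run over a bounded historical dataset in batch mode or over a live stream, which is the unification the talk describes.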
We will also discuss real-world use cases from companies that are applying this broader streaming paradigm in practice.
Finally, we will discuss the latest features of Apache Flink, as well as the current roadmap and upcoming work such as a unified batch/streaming machine learning library. We will also cover how Apache Flink is building the world's first true streaming runtime that can compete at batch processing with the best dedicated batch processors.
Stephan Ewen is an Apache Flink PMC member, one of the original creators of Apache Flink, and Co-Founder of Ververica (formerly data Artisans).
Within the Apache Flink community, Stephan works on the overall vision of a unified approach to batch processing, stream processing, and event-driven applications, including runtime, APIs, and new approaches to fault tolerance and transactional consistency.
Previously, he worked on data processing technologies at IBM and Microsoft. Stephan holds a Ph.D. from the Berlin University of Technology.