Apache Flink® - Stateful Computations over Data Streams



All streaming use cases
  • Event-driven applications
  • Stream & batch analytics
  • Data pipelines & ETL
Learn more
Guaranteed correctness
  • Exactly-once state consistency
  • Event-time processing
  • Sophisticated late data handling (see the sketch below)
Learn more
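
As an illustration of event-time processing and late data handling, here is a minimal DataStream sketch (the class name and sensor data are made up for this example) that assigns timestamps and watermarks with a 5-second out-of-orderness bound, windows by event time, and keeps windows open for late events:

    import org.apache.flink.api.java.tuple.Tuple3;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class EventTimeExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Use the timestamps carried by the events themselves.
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            env
                // Example readings: (sensorId, value, eventTimestampMillis), one out of order.
                .fromElements(
                    Tuple3.of("sensor-1", 21.5, 1_000L),
                    Tuple3.of("sensor-1", 22.0, 7_000L),
                    Tuple3.of("sensor-1", 21.8, 3_000L))
                // Extract event timestamps and emit watermarks that tolerate
                // events arriving up to 5 seconds out of order.
                .assignTimestampsAndWatermarks(
                    new BoundedOutOfOrdernessTimestampExtractor<Tuple3<String, Double, Long>>(Time.seconds(5)) {
                        @Override
                        public long extractTimestamp(Tuple3<String, Double, Long> reading) {
                            return reading.f2;
                        }
                    })
                .keyBy(0)
                // One-minute tumbling windows based on event time.
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                // Keep each window around for another minute so late events still update it.
                .allowedLateness(Time.minutes(1))
                .sum(1)
                .print();

            env.execute("Event-time windows with late data handling");
        }
    }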
Layered APIs
  • SQL on Stream & Batch Data
  • DataStream API & DataSet API
  • ProcessFunction (Time & State), sketched below
Learn more
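
To give a feel for the lowest API layer, here is a small, hypothetical KeyedProcessFunction (the class name and semantics are made up for illustration) that combines keyed state with processing-time timers, the mix of time and state the bullet above refers to:

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // Counts events per key and emits the running count 60 seconds after each event arrives.
    public class CountWithTimeout extends KeyedProcessFunction<String, String, Long> {

        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
        }

        @Override
        public void processElement(String event, Context ctx, Collector<Long> out) throws Exception {
            // Keyed state: the count is scoped to the current key automatically.
            Long current = count.value();
            count.update(current == null ? 1L : current + 1);
            // Time: ask for a callback 60 seconds from now.
            ctx.timerService().registerProcessingTimeTimer(
                ctx.timerService().currentProcessingTime() + 60_000);
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
            Long current = count.value();
            if (current != null) {
                out.collect(current);
            }
        }
    }

It would be applied to a keyed stream with something like stream.keyBy(...).process(new CountWithTimeout()).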
Focus on operations
  • Flexible deployment
  • High-availability setup
  • Savepoints
Learn more
Massive scale
  • Scale-out architecture
  • Support for very large state
  • Incremental checkpointing (see the sketch below)
Learn more
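
As a sketch of how incremental checkpointing is typically enabled (the checkpoint interval and path are examples; the RocksDB state backend comes from the separate flink-statebackend-rocksdb dependency):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class IncrementalCheckpointingSetup {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Take a checkpoint every 10 seconds.
            env.enableCheckpointing(10_000);

            // RocksDB keeps state on local disk, so jobs can hold state far larger than memory.
            // The second constructor argument enables incremental checkpoints: only the files
            // that changed since the last checkpoint are uploaded to the checkpoint location.
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));

            // ... define and execute the job ...
        }
    }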
Excellent performance
  • Low latency
  • High throughput
  • In-memory computing
Learn more

A Deep-Dive into Flink's Network Stack
Flink's network stack is one of the core components of Apache Flink's runtime and sits at the heart of every Flink job. In this post, the first in a series about the network stack, we look at the abstractions exposed to the stream operators and detail their physical implementation and various optimisations in Apache Flink.
State TTL in Flink 1.8.0: How to Automatically Cleanup Application State in Apache Flink
A common requirement for many stateful streaming applications is to automatically clean up application state, both to keep the state size manageable and to control how long the application state can be accessed. State TTL enables application state cleanup and efficient state size management in Apache Flink.
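
A minimal sketch of configuring State TTL on a state descriptor (the TTL value, descriptor name, and cleanup strategy are just examples):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    public class StateTtlExample {
        public static void main(String[] args) {
            // Expire entries 7 days after they were created or last written.
            StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.days(7))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                // Never hand expired values back to the application.
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                // Drop expired entries whenever a full snapshot is taken.
                .cleanupFullSnapshot()
                .build();

            ValueStateDescriptor<Long> lastLogin =
                new ValueStateDescriptor<>("lastLogin", Long.class);
            lastLogin.enableTimeToLive(ttlConfig);
            // The descriptor is then used as usual inside a rich or process function.
        }
    }

Flink 1.8.0 adds further cleanup strategies that continuously remove expired entries in the background for both the heap and RocksDB state backends.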
Flux capacitor, huh? Temporal Tables and Joins in Streaming SQL
Apache Flink has natively supported temporal table joins since the 1.7 release, making temporal data handling straightforward. In this blog post, we give an overview of how this concept can be leveraged for effective point-in-time analysis in streaming scenarios.
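
A minimal sketch of a processing-time temporal table join through the Table API (table and field names are invented; package names and registration calls differ somewhat between Flink releases):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.table.functions.TemporalTableFunction;
    import org.apache.flink.types.Row;

    public class TemporalJoinExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

            // A changing table of exchange rates: (currency, rate), with a processing-time attribute.
            DataStream<Tuple2<String, Double>> ratesStream = env.fromElements(
                Tuple2.of("EUR", 1.12), Tuple2.of("YEN", 0.0091));
            Table ratesHistory = tEnv.fromDataStream(
                ratesStream, "r_currency, r_rate, r_proctime.proctime");

            // Incoming orders: (amount, currency).
            DataStream<Tuple2<Long, String>> ordersStream = env.fromElements(
                Tuple2.of(2L, "EUR"), Tuple2.of(50L, "YEN"));
            tEnv.registerTable("Orders", tEnv.fromDataStream(
                ordersStream, "o_amount, o_currency, o_proctime.proctime"));

            // Version the rates table by processing time, keyed by currency.
            TemporalTableFunction rates =
                ratesHistory.createTemporalTableFunction("r_proctime", "r_currency");
            tEnv.registerFunction("Rates", rates);

            // For each order, join against the rate that is valid at the order's time.
            Table result = tEnv.sqlQuery(
                "SELECT o.o_amount * r.r_rate AS converted_amount " +
                "FROM Orders AS o, LATERAL TABLE (Rates(o.o_proctime)) AS r " +
                "WHERE o.o_currency = r.r_currency");

            tEnv.toAppendStream(result, Row.class).print();
            env.execute("Temporal table join");
        }
    }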
When Flink & Pulsar Come Together
Apache Flink and Apache Pulsar are distributed data processing systems. When combined, they offer elastic data processing at large scale. This post describes how Pulsar and Flink can work together to provide a seamless developer experience.
Apache Flink's Application to Season of Docs

The Apache Flink community is happy to announce its application to the first edition of Season of Docs by Google. The program brings together open source projects and technical writers to raise awareness of, and improve, open source documentation. While the community is continuously looking for new contributors to collaborate on our documentation, we would like to take this opportunity to work with one or two technical writers to extend and restructure parts of our documentation (details below).