The Apache Flink community is pleased to announce the preview release of the Apache Flink Kubernetes Operator (0.1.0).
The Flink Kubernetes Operator allows users to easily manage their Flink deployment lifecycle using native Kubernetes tooling.
The operator takes care of submitting, savepointing, upgrading and generally managing Flink jobs using the built-in Flink Kubernetes integration. This way, users do not have to use the Flink clients (e.g. the CLI) or interact with their Flink jobs manually; they only declare the desired deployment specification and the operator takes care of the rest. It also makes it easier to integrate Flink job management with CI/CD tooling.
The operator's core features include:
- Deploy and monitor Flink Application and Session deployments
- Upgrade, suspend and delete Flink deployments
- Full logging and metrics integration
Getting started
For a detailed getting-started guide, please check the documentation site.
FlinkDeployment CR overview
When using the operator, users create FlinkDeployment objects to describe their Flink application and session cluster deployments.
A minimal application deployment yaml would look like this:
```yaml
apiVersion: flink.apache.org/v1alpha1
kind: FlinkDeployment
metadata:
  namespace: default
  name: basic-example
spec:
  image: flink:1.14
  flinkVersion: v1_14
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    replicas: 1
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless
```
Once applied to the cluster using `kubectl apply -f your-deployment.yaml`, the operator will spin up the application cluster for you.
If you would like to upgrade or make changes to your application, simply modify the yaml and submit it again; the operator will execute the necessary steps (savepoint, shutdown, redeploy, etc.) to upgrade your application.
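For example, scaling the job from the minimal deployment above is just a one-field change to the spec before re-applying it. This is an illustrative sketch, not an exhaustive spec; only the relevant `job` section is shown, and the new parallelism value is arbitrary:

```yaml
# Excerpt of the modified FlinkDeployment spec (illustrative values):
spec:
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 4        # changed from 2; the operator handles the redeploy
    upgradeMode: stateless
```

Running `kubectl apply -f your-deployment.yaml` with the updated manifest triggers the upgrade.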
To stop and delete your application cluster, you can simply run `kubectl delete -f your-deployment.yaml`.
You can read more about the job management features on the documentation site.
What’s Next?
The community is currently working on hardening the core operator logic, stabilizing the APIs and adding the remaining pieces needed to make the Flink Kubernetes Operator production-ready.
In the upcoming 1.0.0 release you can expect (at least) the following additional features:
- Support for Session Job deployments
- Job upgrade rollback strategies
- Pluggable validation logic
- Operator deployment customization
- Improvements based on feedback from the preview release
In the medium term you can also expect:
- Support for standalone / reactive deployment modes
- Support for other job types such as SQL or Python
Please give the preview release a try, share your feedback on the Flink mailing list and contribute to the project!
Release Resources
The source artifacts and the Helm chart are now available on the updated Downloads page of the Flink website.
The official 0.1.0 release archive doubles as a Helm repository that you can easily register locally:
```sh
$ helm repo add flink-kubernetes-operator-0.1.0 https://archive.apache.org/dist/flink/flink-kubernetes-operator-0.1.0/
$ helm install flink-kubernetes-operator flink-kubernetes-operator-0.1.0/flink-kubernetes-operator --set webhook.create=false
```
You can also find official Kubernetes Operator Docker images for the new version on Docker Hub.
List of Contributors
The Apache Flink community would like to thank each and every one of the contributors who made this release possible:
Aitozi, Biao Geng, Gyula Fora, Hao Xin, Jaegu Kim, Jaganathan Asokan, Junfan Zhang, Marton Balassi, Matyas Orhidi, Nicholas Jiang, Sandor Kelemen, Thomas Weise, Yang Wang, 愚鲤