Create 'publisher' and 'consumer' applications with Kafka
Objective
Public Cloud Databases for Kafka allow you to focus on building and deploying cloud applications while OVHcloud takes care of the Kafka infrastructure and maintenance in operational conditions.
Kafka is a platform used for processing streams. It is fundamentally a massively scalable pub/sub message queue.
The purpose of this tutorial is to guide you through building your first Python applications that use Kafka.
One application will subscribe to a topic and consume messages; the other will produce and publish messages to a topic.
You will end up with all the basics to develop your own solution using Kafka.
Requirements
- Access to the OVHcloud Control Panel.
- A Public Cloud project in your OVHcloud account.
- A Public Cloud Databases for Kafka service running and configured. This guide can help you to meet this requirement.
- Following the Getting Started guide, save all certificates in a dedicated folder:
  - the server certificate as ca.pem
  - the user certificate as service.cert
  - the user access key as service.key
- A Python environment with a stable version and public network connectivity (Internet). This guide was made using Python 3.12.2.
Instructions
All source code is available on the GitHub repository public-cloud-examples.
Step 1 - Consume messages
One of the applications will subscribe to a topic of your Kafka service and wait for incoming messages to consume.
As you can see, the first lines of code define the configuration to be used to subscribe to your Kafka service.
Do not forget to set an environment variable called KAFKA_SERVICE_URI that will point to your service.
The library confluent-kafka provides a class called Consumer that will represent your connection to your Kafka service.
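As a sketch, the configuration and Consumer creation could look like the following. The group id, the topic name, and the reuse of the certificate file names from the requirements section are assumptions, not the tutorial's exact code:

```python
import os

def kafka_config():
    # TLS connection settings; the certificate file names match those
    # saved in the dedicated folder from the requirements section.
    return {
        "bootstrap.servers": os.environ["KAFKA_SERVICE_URI"],
        "security.protocol": "SSL",
        "ssl.ca.location": "ca.pem",
        "ssl.certificate.location": "service.cert",
        "ssl.key.location": "service.key",
    }

def make_consumer(topic):
    # Imported here so kafka_config() can be inspected without requiring
    # the confluent-kafka package at import time.
    from confluent_kafka import Consumer

    conf = kafka_config()
    conf.update({
        "group.id": "my-consumer-group",  # assumed consumer group name
        "auto.offset.reset": "earliest",  # read the topic from the start
    })
    consumer = Consumer(conf)
    consumer.subscribe([topic])
    return consumer
```

Calling make_consumer("my-topic") returns a Consumer already subscribed to that topic and ready to poll.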
You then prepare the Console object that will help you display messages in a readable way.
All you need to do at that point is use the Consumer object to subscribe to a topic of your Kafka service.
The final piece of code will wait for incoming messages through the poll function.
You will be able to use the Console object to show the content of the messages.
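A minimal polling loop might look like the following sketch. The handle callback and the stop-after-idle condition are assumptions for illustration; the tutorial's own code prints messages with the Console object and typically loops forever:

```python
def consume_messages(consumer, handle, max_empty_polls=5):
    """Poll the Kafka service and pass each message to `handle`.

    Stops after `max_empty_polls` consecutive polls without a message,
    so this sketch terminates; a real consumer would usually run until
    interrupted.
    """
    empty = 0
    while empty < max_empty_polls:
        msg = consumer.poll(1.0)  # wait up to 1 second for a message
        if msg is None:
            empty += 1
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        empty = 0
        handle(msg.key(), msg.value())
    consumer.close()
```

With confluent-kafka, poll returns None on timeout, a message with an error set, or a valid message whose key() and value() are raw bytes that your handler decodes.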
Step 2 - Publish messages
Now that you have an application waiting for messages, let's create one to produce and publish them.
This is done in a very similar way as for your Consumer, and this time you will use a Producer object.
It is now time to prepare the elements used to publish a message.
The delivery_callback lets you control what happens once your Producer has published the message.
The publishing action is in fact done in two steps:
- First prepare the message in the format needed by Kafka and set the callback function.
- Then, use flush to send the queued message through your connection to Kafka.
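The two steps above can be sketched as follows; the JSON serialization of the payload is an assumption, not a Kafka requirement:

```python
import json

def publish(producer, topic, payload, on_delivery):
    # Step 1: serialize the message and enqueue it locally, registering
    # the callback that will fire once the broker acknowledges it.
    producer.produce(
        topic,
        value=json.dumps(payload).encode("utf-8"),
        callback=on_delivery,
    )
    # Step 2: flush() blocks until every queued message has been sent
    # and all delivery callbacks have run.
    producer.flush()
```

Note that produce only enqueues the message in a local buffer; without the flush (or periodic poll) call, a short-lived script may exit before anything reaches Kafka.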
Go further
Confluent Kafka Python library
Join our community of users.