We’ve already explored the Producer API that Confluent.Kafka provides in the previous article. As the next logical step, let’s see how to configure message consumption from a Kafka topic. Again, it is as simple as shown in the code snippet below:
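The full snippet lives in the repository, but a minimal consumer sketch looks roughly like this (the broker address, group id, and topic name below are placeholder assumptions, not values from the article):

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",   // placeholder broker address
    GroupId = "demo-consumer-group",       // placeholder group id
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("demo-topic");          // placeholder topic name

try
{
    while (true)
    {
        // Consume blocks until a message is available (or the token is cancelled).
        var result = consumer.Consume(CancellationToken.None);
        Console.WriteLine($"Received '{result.Message.Value}' at {result.TopicPartitionOffset}");
    }
}
finally
{
    // Close commits final offsets and leaves the consumer group cleanly.
    consumer.Close();
}
```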
The whole code base can be found in my GitHub repository.
There are two points I want to emphasize about consuming from Kafka.
The first thing is the delivery pattern. Kafka uses the pull model: a consumer requests new messages starting from a specified offset. This brings two benefits:
- The consumption rate is defined by the consumers themselves, so they don’t get overwhelmed when there are lots of messages in a topic to process
- The Kafka broker can reduce the number of network requests by batching messages efficiently, since it can send all suitable messages within a single response
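The batching behavior above can be tuned from the consumer side. As a sketch (values are purely illustrative, and the config values are placeholders), Confluent.Kafka exposes the underlying fetch settings on `ConsumerConfig`:

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",   // placeholder broker address
    GroupId = "demo-consumer-group",       // placeholder group id

    // The broker delays its response until at least this many bytes
    // of messages have accumulated...
    FetchMinBytes = 1024,

    // ...or until this many milliseconds have passed, whichever comes first.
    FetchWaitMaxMs = 100
};
```

Raising these values trades a little latency for fewer, larger fetch responses.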
The only precaution here is that the calling thread gets blocked until there are suitable messages in the topic. Generally that’s not a problem, since one process should use a single consumer, but it may turn into an issue when, for example, the blocking prevents a host from starting.
This particular issue can be solved by calling Task.Yield() in the ExecuteAsync method so that the calling thread is not blocked.
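Here is a hedged sketch of what that looks like inside a BackgroundService (class name, config values, and topic are assumptions for illustration):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Confluent.Kafka;
using Microsoft.Extensions.Hosting;

public class KafkaConsumerService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Yield immediately so host startup isn't blocked
        // by the synchronous consume loop below.
        await Task.Yield();

        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",   // placeholder
            GroupId = "demo-consumer-group"        // placeholder
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("demo-topic");          // placeholder

        try
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                // Blocking call; cancelled when the host shuts down.
                var result = consumer.Consume(stoppingToken);
                Console.WriteLine($"Handled message at {result.TopicPartitionOffset}");
            }
        }
        catch (OperationCanceledException)
        {
            // Host is shutting down; nothing to do.
        }
        finally
        {
            consumer.Close();
        }
    }
}
```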
Committing an offset
The second point is committing an offset. Kafka brokers remember the offset at which a consumer stopped reading messages, but it’s the consumer’s responsibility to commit that offset. The Confluent.Kafka API allows committing offsets either automatically or manually. Generally you should not commit after every read because of the performance impact.
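A minimal sketch of manual committing, assuming placeholder config values and a commit interval of every 100 messages (the interval is an arbitrary illustration, not a recommendation):

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",   // placeholder broker address
    GroupId = "demo-consumer-group",       // placeholder group id
    EnableAutoCommit = false               // take over commit responsibility
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("demo-topic");          // placeholder topic

var processedSinceCommit = 0;
while (true)
{
    var result = consumer.Consume(CancellationToken.None);
    // ...process the message here...

    // Commit every N messages instead of on each read to limit the overhead.
    if (++processedSinceCommit >= 100)
    {
        consumer.Commit(result);
        processedSinceCommit = 0;
    }
}
```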
So we’ve learnt how to consume messages from Kafka and dug deeper to understand the delivery pattern and the offset-committing strategy.
There are also some articles that may be interesting:
If you have any suggestions or questions, feel free to reach out in the comments! :)