Kafka · DevOps · ~5 mins

Event Streaming in Kafka - Commands & Configuration

Introduction
Event streaming lets systems send and receive data continuously as events happen, so apps can react to new information immediately instead of waiting for batch updates.

Typical use cases:
Tracking user actions on a website in real time to personalize content.
Processing sensor data from machines immediately to detect faults.
Updating inventory levels instantly as sales happen in an online store.
Building a chat app that shows messages instantly to all users.
Collecting logs and metrics from many servers continuously for monitoring.
Config File - server.properties
server.properties
broker.id=1
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000

This file configures a Kafka broker, which is a server that stores and manages event streams.

broker.id: Unique ID for this Kafka server.

listeners: Network address and port Kafka listens on.

log.dirs: Where Kafka stores event data on disk.

zookeeper.connect: Address of Zookeeper, which helps Kafka manage cluster state.

Commands
Starts the Kafka broker using the configuration file to begin accepting event streams.
Terminal
kafka-server-start.sh server.properties
Expected Output
[2024-06-01 12:00:00,000] INFO Kafka startTimeMs set to 1685611200000 (kafka.server.KafkaServer)
[2024-06-01 12:00:01,000] INFO started (kafka.server.KafkaServer)
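Once the broker is up, a quick way to confirm it is reachable is to query it with another script that ships with the Kafka distribution (a sketch; assumes the broker above is running on the default port):

```shell
# Ask the broker at localhost:9092 which API versions it supports.
# A response means the broker is running and accepting connections.
kafka-broker-api-versions.sh --bootstrap-server localhost:9092
```

If the broker is down, this command errors out instead of listing API versions, which makes it a handy first check before creating topics.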
Creates a new event stream topic named 'user-actions' where events like user clicks will be sent.
Terminal
kafka-topics.sh --create --topic user-actions --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
Expected Output
Created topic user-actions.
--topic - Name of the event stream topic
--partitions - Number of partitions for parallelism
--replication-factor - Number of copies for fault tolerance
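To confirm the topic was created with the intended settings, you can list and describe it (assumes the broker from the previous step is still running):

```shell
# List all topics on the broker
kafka-topics.sh --list --bootstrap-server localhost:9092

# Show the partition count, replication factor, and leader for one topic
kafka-topics.sh --describe --topic user-actions --bootstrap-server localhost:9092
```

Describing a topic is the fastest way to catch the common mistake of accepting default partition or replication settings.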
Starts a command line tool to send events (messages) to the 'user-actions' event stream.
Terminal
kafka-console-producer.sh --topic user-actions --bootstrap-server localhost:9092
Expected Output
No output at first; a > prompt appears, and each line you type is sent as one event.
--topic - Topic to send events to
--bootstrap-server - Kafka server address
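The producer can also be fed non-interactively, which is useful for scripting; a minimal sketch with made-up event names:

```shell
# Each line of stdin becomes one event on the topic.
# Sends three events and exits, no interactive prompt needed.
printf 'page_view\nadd_to_cart\ncheckout\n' | \
  kafka-console-producer.sh --topic user-actions --bootstrap-server localhost:9092
```

This pattern is handy for seeding a topic with test data before wiring up a real producer application.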
Starts a command line tool to read all events from the 'user-actions' stream from the start.
Terminal
kafka-console-consumer.sh --topic user-actions --bootstrap-server localhost:9092 --from-beginning
Expected Output
Prints every event already in the topic, then waits for new ones (press Ctrl+C to stop).
--from-beginning - Read all past events, not just new ones
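If your events carry keys, the console consumer can print them alongside the values; a sketch using its --property and --max-messages options:

```shell
# Read events from the start, show keys next to values,
# and exit automatically after 10 messages instead of waiting forever.
kafka-console-consumer.sh --topic user-actions \
  --bootstrap-server localhost:9092 \
  --from-beginning \
  --property print.key=true \
  --property key.separator=: \
  --max-messages 10
```

The --max-messages flag is convenient in scripts, since the consumer otherwise blocks until you interrupt it.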
Key Concept

If you remember nothing else from this lesson: event streaming lets apps send and receive data continuously, the instant things happen.

Common Mistakes
Not starting the Kafka broker before creating topics or sending events.
Commands fail because the Kafka server is not running to handle requests.
Always start the Kafka broker first with kafka-server-start.sh and confirm it is running.
Creating topics without specifying replication factor or partitions.
Kafka may create topics with default settings that do not meet fault tolerance or performance needs.
Specify --partitions and --replication-factor explicitly when creating topics.
Using the consumer without --from-beginning when you want to see all past events.
Consumer only shows new events arriving after it starts, missing earlier data.
Add --from-beginning flag to see all events from the start of the stream.
Summary
Start the Kafka broker to run the event streaming server.
Create a topic to organize and store related events.
Use the producer command to send events to the topic.
Use the consumer command to read events from the topic, optionally from the beginning.
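The summary steps can be combined into a quick end-to-end smoke test (a sketch; assumes the broker is already running on localhost:9092 and uses a throwaway topic name):

```shell
# 1. Create a throwaway topic
kafka-topics.sh --create --topic smoke-test --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1

# 2. Send one event
echo "hello" | kafka-console-producer.sh --topic smoke-test --bootstrap-server localhost:9092

# 3. Read it back; exits after one message
kafka-console-consumer.sh --topic smoke-test --bootstrap-server localhost:9092 \
  --from-beginning --max-messages 1

# 4. Clean up
kafka-topics.sh --delete --topic smoke-test --bootstrap-server localhost:9092
```

If step 3 prints "hello", the whole pipeline (broker, topic, producer, consumer) is working.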