Kafka · devops · ~10 mins

Testing stream topologies in Kafka - Step-by-Step Execution

Process Flow - Testing stream topologies
Define Topology
Build Topology
Create Test Driver
Pipe Input Records
Read Output Records
Assert Expected Results
Close Test Driver
This flow shows how to define, build, and test a Kafka Streams topology step-by-step using a test driver.
Execution Sample
Kafka
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;

Topology topology = new Topology();
topology.addSource("source", "input-topic")
        .addProcessor("process", MyProcessor::new, "source")
        .addSink("sink", "output-topic", "process");

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test");
// The source node uses the default serdes, so configure them explicitly
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

TopologyTestDriver testDriver = new TopologyTestDriver(topology, props);

// Pipe an input record into the source topic
TestInputTopic<String, String> input =
        testDriver.createInputTopic("input-topic", new StringSerializer(), new StringSerializer());
input.pipeInput("key1", "value1");

// Read the output record from the sink topic
TestOutputTopic<String, String> output =
        testDriver.createOutputTopic("output-topic", new StringDeserializer(), new StringDeserializer());
KeyValue<String, String> result = output.readKeyValue();

// Release state stores and other resources when done
testDriver.close();
This code builds a simple Kafka Streams topology and tests it by sending an input record and reading the output.
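The sample references MyProcessor without defining it. Below is a minimal sketch using the Kafka Streams Processor API (org.apache.kafka.streams.processor.api, available since Kafka 2.7); the transform logic is hypothetical, chosen only so that "value1" becomes the "processedValue1" shown in the tables that follow:

```java
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class MyProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(Record<String, String> record) {
        // Forward the transformed record downstream to the sink node
        context.forward(record.withValue(transform(record.value())));
    }

    // Hypothetical transformation: "value1" -> "processedValue1"
    static String transform(String value) {
        return "processed" + Character.toUpperCase(value.charAt(0)) + value.substring(1);
    }
}
```

Because the processor is stateless here, no state store is needed; a stateful processor would register a store with the topology and retrieve it in init.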
Process Table
| Step | Action | Input Record | Processor State | Output Record | Notes |
|------|--------|--------------|-----------------|---------------|-------|
| 1 | Define topology with source, processor, sink | None | Topology built | None | Topology structure created |
| 2 | Create TopologyTestDriver | None | Test driver ready | None | Test environment setup |
| 3 | Pipe input record (key1, value1) | (key1, value1) | Processor processes record | None | Input record sent to source topic |
| 4 | Processor logic runs | (key1, value1) | Processed data stored | (key1, processedValue1) | Processor transforms input |
| 5 | Read output record | None | No change | (key1, processedValue1) | Output record read from sink topic |
| 6 | Assert output matches expected | (key1, value1) | No change | (key1, processedValue1) | Test passes if output correct |
| 7 | Close test driver | None | Resources released | None | Test environment cleaned up |
💡 Test driver closed after assertions; topology test complete
Status Tracker
| Variable | Start | After Step 3 | After Step 4 | After Step 5 | Final |
|----------|-------|--------------|--------------|--------------|-------|
| inputRecord | None | (key1, value1) | (key1, value1) | (key1, value1) | None |
| processorState | Empty | Processing (key1, value1) | Processed (key1, processedValue1) | Processed (key1, processedValue1) | Released |
| outputRecord | None | None | (key1, processedValue1) | (key1, processedValue1) | Read and verified |
Key Moments - 3 Insights
Why do we need to create a TopologyTestDriver before piping input?
The test driver simulates the Kafka Streams runtime environment; without it, the topology cannot process input records (see execution table steps 2 and 3).
What happens if the output record does not match the expected result?
The assertion in step 6 will fail, indicating the processor logic or topology is incorrect and needs fixing.
Why must we close the test driver at the end?
Closing releases resources and ensures no side effects remain after the test (see execution table step 7).
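The three key moments above come together naturally in a single JUnit 5 test. The sketch below assumes JUnit 5 and kafka-streams-test-utils on the classpath, plus a MyProcessor that emits processedValue1 for value1. Since TopologyTestDriver implements AutoCloseable, try-with-resources guarantees the close step runs even when an assertion fails:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class TopologyTest {

    @Test
    void outputsProcessedRecord() {
        // Steps 1-2: define the topology and configure the driver
        Topology topology = new Topology();
        topology.addSource("source", "input-topic")
                .addProcessor("process", MyProcessor::new, "source")
                .addSink("sink", "output-topic", "process");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // try-with-resources performs step 7 (close) even if an assertion fails
        try (TopologyTestDriver testDriver = new TopologyTestDriver(topology, props)) {
            TestInputTopic<String, String> input = testDriver.createInputTopic(
                    "input-topic", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, String> output = testDriver.createOutputTopic(
                    "output-topic", new StringDeserializer(), new StringDeserializer());

            // Step 3: pipe the input record
            input.pipeInput("key1", "value1");

            // Steps 5-6: read and assert; a mismatch fails here,
            // pointing at the processor logic or topology wiring
            assertEquals(new KeyValue<>("key1", "processedValue1"), output.readKeyValue());
            assertTrue(output.isEmpty());
        }
    }
}
```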
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the processor state after step 4?
A. Processed (key1, processedValue1)
B. Processing (key1, value1)
C. Empty
D. Released
💡 Hint
Check the 'Processor State' column at step 4 in the execution table
At which step is the input record first sent to the topology?
A. Step 2
B. Step 3
C. Step 5
D. Step 6
💡 Hint
Look at the 'Action' and 'Input Record' columns in the execution table
If the processor logic changes to output a different value, which step's output record will change?
A. Step 3
B. Step 4
C. Step 5
D. Step 7
💡 Hint
Output record is produced and read at steps 4 and 5; step 5 shows the final output read
Concept Snapshot
Testing Kafka Streams topology:
- Define topology with sources, processors, sinks
- Use TopologyTestDriver to simulate runtime
- Pipe input records to source topics
- Read output records from sink topics
- Assert outputs match expected results
- Close test driver to release resources
Full Transcript
This visual execution shows how to test Kafka Streams topologies step-by-step. First, you define the topology with sources, processors, and sinks. Then you create a TopologyTestDriver to simulate the Kafka Streams environment. You pipe input records into the source topic, which the processor processes and sends to the sink topic. You read the output records from the sink topic and assert they match expected results. Finally, you close the test driver to clean up resources. The execution table tracks each step's action, input, processor state, and output. The variable tracker shows how input records, processor state, and output records change during the test. Key moments clarify why the test driver is needed, what happens if outputs don't match, and why closing the driver is important. The quiz questions help reinforce understanding of the processor state, input timing, and output changes.