Kafka · DevOps · ~20 mins

Common connectors (JDBC, S3, Elasticsearch) in Kafka - Practice Problems & Coding Challenges

Challenge - 5 Problems
Kafka Connect Mastery: Common Connectors
Predict Output · intermediate
Kafka Connect JDBC Source Connector Configuration Output
What will be the output when the following Kafka Connect JDBC Source connector configuration is used to import data from a MySQL database table named users?
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:mysql://localhost:3306/mydb
connection.user=root
connection.password=password
mode=incrementing
incrementing.column.name=id
topic.prefix=mysql-
table.whitelist=users
A. Kafka topic 'mysql-users' will receive data, but only for columns excluding 'id'
B. Kafka topic 'users' will receive all rows from the 'users' table in bulk mode
C. Kafka topic 'mysql-users' will receive only the latest snapshot of the 'users' table, without incremental updates
D. Kafka topic 'mysql-users' will receive all rows from the 'users' table incrementally, based on the 'id' column
💡 Hint
Look at the mode and incrementing.column.name properties to understand how data is imported.
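To make the incrementing-mode behavior concrete, here is a minimal Python sketch (not the connector's actual code) of how the JDBC source tracks an incrementing column: rows with an `id` greater than the last committed offset are emitted to the `mysql-users` topic, and the offset advances. The function and data names are illustrative.

```python
# Hypothetical simulation of JDBC source "incrementing" mode: on each
# poll, only rows whose id exceeds the stored offset are published to
# the topic "<topic.prefix><table>" (here "mysql-users").

def poll_incrementing(rows, last_seen_id):
    """Return (new_records, new_offset) for rows with id > last_seen_id."""
    new = [r for r in rows if r["id"] > last_seen_id]
    offset = max((r["id"] for r in new), default=last_seen_id)
    return new, offset

table = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
records, offset = poll_incrementing(table, last_seen_id=0)
# First poll: both existing rows are emitted; offset is now 2.

table.append({"id": 3, "name": "carol"})
new_records, offset = poll_incrementing(table, offset)
# Next poll: only the newly inserted row (id=3) is emitted.
```

This is why option D fits the configuration: every row eventually reaches `mysql-users`, but later polls pick up only rows with a higher `id`.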
Predict Output · intermediate
Kafka Connect S3 Sink Connector Behavior
Given the following Kafka Connect S3 Sink connector configuration, what will be the result in the S3 bucket after running the connector?
connector.class=io.confluent.connect.s3.S3SinkConnector
s3.bucket.name=my-kafka-bucket
topics=my-topic
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
A. Data from 'my-topic' will be written to S3 in CSV format, with 3 records per file
B. Data from 'my-topic' will be written to S3 as a single large JSON file containing all records
C. Data from 'my-topic' will be written to S3 as JSON files, each containing 3 records
D. Data from 'my-topic' will not be written to S3 because flush.size is too small
💡 Hint
Check the flush.size and format.class properties to understand file creation.
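The effect of `flush.size` can be sketched in a few lines of Python (an illustration, not the sink's real buffering code): the connector buffers records per topic partition and commits a new S3 object each time `flush.size` records have accumulated.

```python
# Hypothetical sketch of flush.size semantics: records are chunked into
# file-sized groups; each group becomes one S3 object (JSON, given
# format.class=...json.JsonFormat).

def partition_into_files(records, flush_size):
    """Group records into chunks of flush_size, one chunk per S3 object."""
    return [records[i:i + flush_size] for i in range(0, len(records), flush_size)]

records = [{"offset": i, "value": f"msg-{i}"} for i in range(9)]
files = partition_into_files(records, flush_size=3)
# 9 records with flush.size=3 -> 3 JSON files of 3 records each.
```

So with the configuration shown, option C matches: many small JSON objects, each holding 3 records, rather than one large file.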
Predict Output · advanced
Kafka Connect Elasticsearch Sink Connector Document ID Behavior
What will be the document ID in Elasticsearch for the following Kafka Connect Elasticsearch Sink connector configuration when writing records from topic 'orders'?
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
connection.url=http://localhost:9200
topics=orders
key.ignore=false
schema.ignore=true
behavior.on.null.values=delete
A. The document ID will be the Kafka record key from topic 'orders'
B. The document ID will be auto-generated by Elasticsearch, ignoring Kafka keys
C. The connector will fail because key.ignore is false but keys are missing
D. The document ID will be the Kafka record value converted to a string
💡 Hint
key.ignore=false means keys are used as document IDs.
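A small Python sketch of the `key.ignore` semantics (illustrative only; the record shapes and names are made up): with `key.ignore=false`, the Kafka record key becomes the Elasticsearch document `_id`, so records sharing a key upsert the same document instead of creating duplicates.

```python
# Hypothetical simulation of document-ID selection in the Elasticsearch
# sink: key.ignore=false -> the Kafka key is the _id; key.ignore=true ->
# an auto-generated id, so every record becomes a new document.

def index_records(records, key_ignore=False):
    """Build an _id -> document map the way the sink conceptually would."""
    docs = {}
    for i, rec in enumerate(records):
        doc_id = f"auto-{i}" if key_ignore else str(rec["key"])
        docs[doc_id] = rec["value"]
    return docs

records = [
    {"key": "order-1", "value": {"total": 10}},
    {"key": "order-1", "value": {"total": 15}},  # same key: overwrites order-1
    {"key": "order-2", "value": {"total": 7}},
]
docs = index_records(records, key_ignore=False)
# Two documents remain: 'order-1' (latest value) and 'order-2'.
```

This is the behavior behind option A, and it is also what makes `behavior.on.null.values=delete` useful: a null value for an existing key deletes that document.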
🧠 Conceptual · advanced
Error Handling in Kafka Connect S3 Sink Connector
Which of the following best describes how Kafka Connect S3 Sink connector handles errors when writing data to S3?
A. It retries failed writes a configurable number of times and then fails the task if unsuccessful
B. It drops failed records silently and continues processing without alerting
C. It retries failed writes indefinitely until success, blocking all other processing
D. It writes failed records to a local file and skips them in S3
💡 Hint
Consider how Kafka Connect tasks handle transient errors and retries.
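The bounded-retry pattern behind the correct answer can be sketched in Python. This is a generic illustration of retry-then-fail semantics, not the connector's actual implementation; in the real S3 sink, settings such as `s3.part.retries` and `retry.backoff.ms` bound the retries, after which the task transitions to FAILED rather than dropping data.

```python
# Hypothetical sketch of bounded retry: retry a transient failure up to
# max_retries times, then raise so the task is marked failed instead of
# silently losing records.

class RetriesExhaustedError(Exception):
    pass

def write_with_retries(write_fn, max_retries=3):
    last_exc = None
    for _attempt in range(max_retries + 1):
        try:
            return write_fn()
        except IOError as exc:
            last_exc = exc  # transient error: try again
    raise RetriesExhaustedError(f"gave up after {max_retries} retries") from last_exc

# Usage: a writer that fails twice with a transient error, then succeeds.
state = {"attempts": 0}

def flaky_write():
    state["attempts"] += 1
    if state["attempts"] <= 2:
        raise IOError("503 Slow Down")
    return "uploaded"

result = write_with_retries(flaky_write, max_retries=3)
# Succeeds on the third attempt; result == "uploaded".
```

Failing the task (rather than dropping or blocking forever) is what surfaces the error to operators via the Connect REST API and task status.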
Predict Output · expert
Kafka Connect Elasticsearch Sink Connector Bulk Indexing Behavior
Given this Kafka Connect Elasticsearch Sink connector configuration, what is the expected behavior regarding bulk indexing and document updates?
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
connection.url=http://localhost:9200
topics=products
batch.size=200
max.in.flight.requests=5
key.ignore=false
behavior.on.null.values=ignore
A. The connector sends one record at a time to Elasticsearch, ignoring keys and creating new documents only
B. The connector sends batches of up to 200 records concurrently (up to 5 batches in flight), using Kafka keys as document IDs and updating existing documents
C. The connector sends batches of 200 records sequentially, ignoring keys and overwriting documents randomly
D. The connector fails because max.in.flight.requests cannot be greater than 1 when batch.size is set
💡 Hint
Check batch.size, max.in.flight.requests, and key.ignore settings for concurrency and document ID usage.
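The interplay of `batch.size` and `max.in.flight.requests` can be sketched as a simple batching plan in Python (an illustration of the concurrency model, not the sink's real request pipeline): records are grouped into bulk requests of up to `batch.size`, and up to `max.in.flight.requests` bulk requests may be outstanding at once.

```python
# Hypothetical sketch: group records into bulk requests of batch_size,
# then group those requests into "waves" of at most max_in_flight
# requests that could be in flight concurrently.

def plan_bulk_requests(num_records, batch_size, max_in_flight):
    """Return a list of waves; each wave lists the sizes of the bulk
    requests that may be sent concurrently."""
    batches = [min(batch_size, num_records - i)
               for i in range(0, num_records, batch_size)]
    return [batches[i:i + max_in_flight]
            for i in range(0, len(batches), max_in_flight)]

waves = plan_bulk_requests(num_records=1000, batch_size=200, max_in_flight=5)
# 1000 records -> 5 bulk requests of 200, all eligible to be in flight
# together in a single wave.
```

Combined with `key.ignore=false` (Kafka keys as document IDs, so repeated keys update existing documents) and `behavior.on.null.values=ignore` (tombstones are skipped), this is the behavior described by option B.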