Challenge - 5 Problems
Kafka Connect Mastery: Common Connectors
❓ Predict Output
Difficulty: intermediate
Kafka Connect JDBC Source Connector Configuration Output
What will be the output when the following Kafka Connect JDBC Source connector configuration is used to import data from a MySQL database table named users?

```
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:mysql://localhost:3306/mydb
connection.user=root
connection.password=password
mode=incrementing
incrementing.column.name=id
topic.prefix=mysql-
table.whitelist=users
```
💡 Hint
Look at the mode and incrementing.column.name properties to understand how data is imported.
✅ Answer
The configuration uses 'incrementing' mode with 'id' column, so Kafka Connect imports new rows incrementally into topic 'mysql-users'.
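The polling behavior can be sketched in plain Python. This is an illustrative in-memory model, not the real connector code: `poll_incrementing` and the sample rows are hypothetical names, standing in for the connector's offset tracking and its incremental query against the `id` column.

```python
# Toy model of 'incrementing' mode: the connector remembers the largest
# 'id' it has seen and, on each poll, exports only rows with a strictly
# greater id. All names here are illustrative, not the connector's API.

def poll_incrementing(table_rows, last_seen_id):
    """Return rows with id > last_seen_id, plus the new stored offset."""
    new_rows = [r for r in table_rows if r["id"] > last_seen_id]
    new_offset = max((r["id"] for r in new_rows), default=last_seen_id)
    return new_rows, new_offset

users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
batch1, offset = poll_incrementing(users, last_seen_id=0)  # both rows exported

users.append({"id": 3, "name": "carol"})          # new insert in MySQL
batch2, offset = poll_incrementing(users, offset)  # only the id=3 row exported

topic = "mysql-" + "users"  # topic.prefix + table name -> "mysql-users"
```

Rows already exported are never re-read, which is why `incrementing` mode captures inserts but not updates to existing rows.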
❓ Predict Output
Difficulty: intermediate
Kafka Connect S3 Sink Connector Behavior
Given the following Kafka Connect S3 Sink connector configuration, what will be the result in the S3 bucket after running the connector?

```
connector.class=io.confluent.connect.s3.S3SinkConnector
s3.bucket.name=my-kafka-bucket
topics=my-topic
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
```
💡 Hint
Check the flush.size and format.class properties to understand file creation.
✅ Answer
The connector writes data in JSON format to S3, creating a new file every 3 records as specified by flush.size.
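The `flush.size` rolling behavior can be modeled in a few lines. This is a sketch under the assumption that each committed "object" is a file of newline-delimited JSON records; `sink_to_objects` is a hypothetical helper, not the connector's API.

```python
# Illustrative sketch (not the real S3 sink code) of flush.size rolling:
# records are buffered and a new S3 object is committed every 3 records.
import json

def sink_to_objects(records, flush_size=3):
    """Group records into S3-object-sized chunks of flush_size JSON lines."""
    objects, buffer = [], []
    for rec in records:
        buffer.append(json.dumps(rec))
        if len(buffer) == flush_size:          # flush.size reached
            objects.append("\n".join(buffer))  # commit one object to S3
            buffer = []
    return objects, buffer  # committed objects, plus any uncommitted tail

records = [{"n": i} for i in range(7)]
objects, pending = sink_to_objects(records)
# 7 records with flush.size=3 -> 2 committed objects, 1 record still buffered
```

The leftover record stays buffered until more records arrive (or, in the real connector, until a rotation interval triggers a flush, if one is configured).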
❓ Predict Output
Difficulty: advanced
Kafka Connect Elasticsearch Sink Connector Document ID Behavior
What will be the document ID in Elasticsearch for the following Kafka Connect Elasticsearch Sink connector configuration when writing records from topic 'orders'?

```
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
connection.url=http://localhost:9200
topics=orders
key.ignore=false
schema.ignore=true
behavior.on.null.values=delete
```
💡 Hint
key.ignore=false means keys are used as document IDs.
✅ Answer
With key.ignore=false, the connector uses Kafka record keys as Elasticsearch document IDs, ensuring updates overwrite existing documents.
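The upsert-by-key semantics can be sketched with a plain dict standing in for the Elasticsearch index. This is a toy model, not the real connector or an Elasticsearch client; `index_records` is a hypothetical name.

```python
# Sketch of key.ignore=false: the Kafka record key becomes the Elasticsearch
# document _id, so a later record with the same key overwrites the earlier
# document instead of creating a duplicate. A dict stands in for the index.

def index_records(index, records, delete_on_null=True):
    for key, value in records:
        if value is None and delete_on_null:  # behavior.on.null.values=delete
            index.pop(key, None)              # tombstone removes the document
        else:
            index[key] = value                # upsert by document _id (= key)

index = {}
index_records(index, [("order-1", {"total": 10}),
                      ("order-1", {"total": 15}),  # same key: overwrite
                      ("order-2", {"total": 7}),
                      ("order-2", None)])          # tombstone: delete
```

After processing, only `order-1` remains, holding the latest value, which is exactly the idempotent update behavior the answer describes.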
🧠 Conceptual
Difficulty: advanced
Error Handling in Kafka Connect S3 Sink Connector
Which of the following best describes how the Kafka Connect S3 Sink connector handles errors when writing data to S3?
💡 Hint
Consider how Kafka Connect tasks handle transient errors and retries.
✅ Answer
Kafka Connect retries failed writes based on configuration and fails the task if retries are exhausted, allowing for error visibility and recovery.
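The retry-then-fail pattern can be sketched as follows. This is a simplified model, not Connect source code: `write_with_retries`, `max_retries`, and `flaky_write` are hypothetical names, and the `RuntimeError` stands in for the task transitioning to the FAILED state.

```python
# Toy model of retry-then-fail: transient write errors are retried up to
# max_retries with a backoff between attempts; once retries are exhausted
# the task raises, which in real Connect surfaces as a FAILED task.
import time

def write_with_retries(write, max_retries=3, backoff_s=0.0):
    attempts = 0
    while True:
        try:
            return write()
        except IOError:
            attempts += 1
            if attempts > max_retries:
                raise RuntimeError("task FAILED: retries exhausted")
            time.sleep(backoff_s)  # analogue of a retry backoff setting

calls = {"n": 0}
def flaky_write():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient S3 error")  # first two attempts fail
    return "ok"

result = write_with_retries(flaky_write)  # succeeds on the third attempt
```

Because a failed task is visible in the Connect REST API and can be restarted, this fail-fast-after-retries design gives operators both error visibility and a recovery path.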
❓ Predict Output
Difficulty: expert
Kafka Connect Elasticsearch Sink Connector Bulk Indexing Behavior
Given this Kafka Connect Elasticsearch Sink connector configuration, what is the expected behavior regarding bulk indexing and document updates?

```
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
connection.url=http://localhost:9200
topics=products
batch.size=200
max.in.flight.requests=5
key.ignore=false
behavior.on.null.values=ignore
```
💡 Hint
Check batch.size, max.in.flight.requests, and key.ignore settings for concurrency and document ID usage.
✅ Answer
The connector batches records up to batch.size and sends multiple batches concurrently up to max.in.flight.requests, using keys as document IDs to update documents.
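The batching and concurrency claim can be sketched numerically. This is an illustrative plan, not the connector's implementation: `plan_bulk_requests` is a hypothetical helper, and the "waves" grouping is a simplification of how at most `max.in.flight.requests` bulk requests are outstanding at once.

```python
# Sketch of the batching/concurrency model: records are split into batches
# of batch.size, and at most max.in.flight.requests batches are in flight
# concurrently. Simple list slicing stands in for async bulk HTTP requests.

def plan_bulk_requests(records, batch_size=200, max_in_flight=5):
    batches = [records[i:i + batch_size]
               for i in range(0, len(records), batch_size)]
    # Group batches into "waves": each wave holds the bulk requests that
    # may be outstanding at the same time.
    waves = [batches[i:i + max_in_flight]
             for i in range(0, len(batches), max_in_flight)]
    return batches, waves

records = list(range(1000))  # 1000 records to sink
batches, waves = plan_bulk_requests(records)
# 1000 / 200 -> 5 batches, all fitting in one wave of 5 in-flight requests
```

With `key.ignore=false`, each record in those bulk requests is indexed under its Kafka key as the document `_id`, so re-delivered or updated records overwrite rather than duplicate documents.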