
ExamsBrite Dumps

Confluent Certified Developer for Apache Kafka (CCDAK) Certification Examination: Questions and Answers

Last Update Feb 28, 2026
Total Questions: 90


Question 1

You create a topic with five partitions.

What can you assume about messages read from that topic by a single consumer group?

Options:

A.  

Messages can be consumed by a maximum of five consumers in the same consumer group.

B.  

The consumer group can only read the same number of messages from all the partitions.

C.  

All messages will be read from exactly one broker by the consumer group.

D.  

Messages from one partition can be consumed by any of the consumers in a group for faster processing.
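
As a study note: within one consumer group, each partition is assigned to exactly one consumer, so a topic with five partitions can keep at most five group members busy. A minimal sketch of that assignment logic (illustrative only, not the actual client API):

```python
def assign_partitions(partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Round-robin style sketch: each partition goes to exactly one consumer
    in the group; consumers beyond the partition count receive nothing."""
    assignment = {c: [] for c in consumers}
    for p in range(partitions):
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

# Five partitions, six consumers: one consumer is left idle.
a = assign_partitions(5, [f"c{i}" for i in range(6)])
active = [c for c, ps in a.items() if ps]
print(len(active))  # 5
```

With six consumers in the group, the sixth receives no partitions and sits idle until a rebalance gives it work.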

Question 2

You want to read messages from all partitions of a topic in every consumer instance of your application.

How do you do this?

Options:

A.  

Use the assign() method with all topic-partitions of the topic as the argument.

B.  

Use the assign() method with the topic name as argument.

C.  

Use the subscribe() method with a regular expression argument.

D.  

Use the subscribe() method with an empty consumer group name configuration.

Question 3

You are sending messages to a Kafka cluster in JSON format and want to add more information related to each message:

Format of the message payload

Message creation time

A globally unique identifier that allows the message to be traced through the system

Where should this additional information be set?

Options:

A.  

Header

B.  

Key

C.  

Value

D.  

Broker

Question 4

You have a consumer group with default configuration settings reading messages from your Kafka cluster.

You need to optimize throughput so the consumer group processes more messages in the same amount of time.

Which change should you make?

Options:

A.  

Remove some consumers from the consumer group.

B.  

Increase the number of bytes the consumers read with each fetch request.

C.  

Disable auto commit and have the consumers manually commit offsets.

D.  

Decrease the session timeout of each consumer.

Question 5

You need to correctly join data from two Kafka topics.

Which two scenarios will allow for co-partitioning?

(Select two.)

Options:

A.  

Both topics have the same number of partitions.

B.  

Both topics have the same key and partitioning strategy.

C.  

Both topics have the same value schema.

D.  

Both topics have the same retention time.
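
As a study note, co-partitioning for joins requires the conditions in options A and B together: matching partition counts and the same key/partitioning strategy, so that matching keys land in matching partition numbers. A small illustrative check (the topic descriptors below are made up for this example):

```python
def co_partitioned(topic_a: dict, topic_b: dict) -> bool:
    """Two topics are co-partitioned when records with the same key land in
    the same partition number in both: same partition count and the same
    key/partitioning strategy. Value schema and retention are irrelevant."""
    return (topic_a["partitions"] == topic_b["partitions"]
            and topic_a["partitioner"] == topic_b["partitioner"])

orders    = {"partitions": 6,  "partitioner": "default-hash-on-key"}
customers = {"partitions": 6,  "partitioner": "default-hash-on-key"}
clicks    = {"partitions": 12, "partitioner": "default-hash-on-key"}

print(co_partitioned(orders, customers))  # True
print(co_partitioned(orders, clicks))     # False
```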

Question 6

You are composing a REST request to create a new connector in a running Connect cluster. You invoke POST /connectors with a configuration and receive a 409 (Conflict) response.

What are two reasons for this response? (Select two.)

Options:

A.  

The connector configuration was invalid, and the response body will expand on the configuration error.

B.  

The Connect cluster has reached capacity, and new connectors cannot be created without expanding the cluster.

C.  

The connector already exists in the cluster.

D.  

The Connect cluster is in the process of rebalancing.

Question 7

You want to enrich the content of a topic by joining it with keyed records from a second topic.

The two topics have a different number of partitions.

Which two solutions can you use?

(Select two.)

Options:

A.  

Use a GlobalKTable for one of the topics where data does not change frequently and use a KStream–GlobalKTable join.

B.  

Repartition one topic to a new topic with the same number of partitions as the other topic (co-partitioning constraint) and use a KStream–KTable join.

C.  

Create as many Kafka Streams application instances as the maximum number of partitions of the two topics and use a KStream–KTable join.

D.  

Use a KStream–KTable join; Kafka Streams will automatically repartition the topics to satisfy the co-partitioning constraint.

Question 8

You are writing a producer application and need to ensure proper delivery. You configure the producer with acks=all.

Which two actions should you take to ensure proper error handling?

(Select two.)

Options:

A.  

Use a callback argument in producer.send() where you check delivery status.

B.  

Check that producer.send() returned a RecordMetadata object and is not null.

C.  

Surround the call of producer.send() with a try/catch block to catch KafkaException.

D.  

Check the value of ProducerRecord.status().
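
The two recommended patterns (a delivery callback, plus a try/catch around send()) can be sketched with a hypothetical in-memory producer. `StubProducer`, `KafkaError`, and `on_delivery` are stand-ins invented for this sketch, not the real client API, whose send() is asynchronous and returns a Future:

```python
class KafkaError(Exception):
    """Stand-in for the client's KafkaException."""

class StubProducer:
    """Hypothetical in-memory producer used only to illustrate the
    callback + try/except error-handling pattern."""
    def send(self, topic, value, on_delivery=None):
        if topic is None:
            raise KafkaError("topic must not be None")  # synchronous failure
        metadata = {"topic": topic, "offset": 0}
        if on_delivery:
            on_delivery(metadata, None)  # delivery report: no error
        return metadata

delivered = []

def report(metadata, error):
    # Pattern 1: inspect delivery status in the callback.
    if error is None:
        delivered.append(metadata)

producer = StubProducer()
try:
    # Pattern 2: synchronous errors surface as exceptions from send().
    producer.send("payments", b"txn-1", on_delivery=report)
except KafkaError as e:
    print("send failed:", e)

print(len(delivered))  # 1
```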

Question 9

You are designing a stream pipeline to monitor the real-time location of GPS trackers, where historical location data is not required.

Each event has:

• Key: trackerId

• Value: latitude, longitude

You need to ensure that the latest location for each tracker is always retained in the Kafka topic.

Which topic configuration parameter should you set?

Options:

A.  

cleanup.policy=compact

B.  

retention.ms=infinite

C.  

min.cleanable.dirty.ratio=-1

D.  

retention.ms=0
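
As a study note, a compacted topic keeps at least the latest record per key, which matches the "current location per trackerId" requirement here. The topic-level setting:

```properties
# Log compaction retains at least the most recent record for each key
# (here, each trackerId), so the newest location always survives cleanup.
cleanup.policy=compact
```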

Question 10

Your company has three Kafka clusters: Development, Testing, and Production.

The Production cluster is running out of storage, so you add a new node.

Which two statements about the new node are true?

(Select two.)

Options:

A.  

A node ID will be assigned to the new node automatically.

B.  

A newly added node will have the KRaft controller role by default.

C.  

A new node will not have any partitions assigned to it unless a new topic is created or reassignment occurs.

D.  

A new node can be added without stopping existing cluster nodes.

Question 11

You have a Kafka consumer in production actively reading from a critical topic.

You need to update the offset of your consumer to start reading from the beginning of the topic.

Which action should you take?

Options:

A.  

Temporarily configure the topic’s retention.ms parameter to 0 to empty the topic.

B.  

Start a new consumer application with the same consumer group id.

C.  

Update the consumer configuration by setting auto.offset.reset=earliest.

D.  

Update the consumer group’s offset to the earliest position using the kafka-consumer-groups CLI tool.
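
For reference, offset resets are performed out-of-band with the CLI tool. An illustrative invocation (the group name, topic name, and bootstrap address are placeholders; the reset only succeeds once the group has no active members, so the consumer must be stopped first):

```shell
kafka-consumer-groups --bootstrap-server localhost:9092 \
  --group my-critical-group --topic critical-topic \
  --reset-offsets --to-earliest --execute
```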

Question 12

A stream processing application is consuming from a topic with five partitions. You run three instances of the application. Each instance has num.stream.threads=5.

You need to identify the number of stream tasks that will be created and how many will actively consume messages from the input topic.

Options:

A.  

5 created, 1 actively consuming

B.  

5 created, 5 actively consuming

C.  

15 created, 5 actively consuming

D.  

15 created, 15 actively consuming
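
The arithmetic behind the scenario: the task count is fixed by the topology's input partitions, not by the thread count (assuming a simple topology with a single sub-topology). A sketch:

```python
# Kafka Streams creates one task per input partition for a single
# sub-topology; threads beyond the task count sit idle.
input_partitions = 5
instances = 3
threads_per_instance = 5

tasks_created = input_partitions                        # independent of threads
total_threads = instances * threads_per_instance        # 15 threads available
actively_consuming = min(tasks_created, total_threads)  # capped by task count

print(tasks_created, actively_consuming)  # 5 5
```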

Question 13

Which configuration determines how many bytes of data are collected before sending messages to the Kafka broker?

Options:

A.  

batch.size

B.  

max.block.size

C.  

buffer.memory

D.  

send.buffer.bytes
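
As a study note, `batch.size` is the per-partition byte threshold for accumulating records before a send, and it is usually tuned together with `linger.ms`. An illustrative producer snippet (the values are examples, not recommendations):

```properties
# batch.size bounds how many bytes are collected per partition before a
# batch is sent; linger.ms adds a short wait so batches can fill up.
batch.size=32768
linger.ms=5
```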

Question 14

You are developing a Java application that includes a Kafka consumer.

You need to integrate Kafka client logs with your own application logs.

Your application is using the Log4j2 logging framework.

Which Java library dependency must you include in your project?

Options:

A.  

SLF4J implementation for Log4j2 (org.apache.logging.log4j:log4j-slf4j-impl)

B.  

SLF4J implementation for Log4j 1.2 (org.slf4j:slf4j-log4j12)

C.  

Just the Log4j2 dependency of the application

D.  

None, the correct dependency will be added transitively by the Kafka client
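
For reference, the SLF4J-to-Log4j2 binding named in option A is pulled in with a Maven dependency along these lines (the version shown is illustrative; match it to your Log4j2 version):

```xml
<!-- Routes the Kafka client's SLF4J logging calls to Log4j2. -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.20.0</version>
</dependency>
```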

Question 15

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

Topic name: DLQ-Topic

Headers containing error context must be added to the messages.

Which three configuration parameters are necessary?

(Select three.)

Options:

A.  

errors.tolerance=all

B.  

errors.deadletterqueue.topic.name=DLQ-Topic

C.  

errors.deadletterqueue.context.headers.enable=true

D.  

errors.tolerance=none

E.  

errors.log.enable=true

F.  

errors.log.include.messages=true
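
As a study note, the three settings from options A, B, and C combine as follows in a sink connector configuration (the topic name comes from the question's requirements):

```properties
# Tolerate record-level failures instead of stopping the task,
# route the failed records to DLQ-Topic, and attach error context
# (source topic, partition, offset, exception) as message headers.
errors.tolerance=all
errors.deadletterqueue.topic.name=DLQ-Topic
errors.deadletterqueue.context.headers.enable=true
```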

Question 16

Your application is consuming from a topic configured with a deserializer.

It needs to be resilient to badly formatted records ("poison pills"). You surround the poll() call with a try/catch for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing.

Which action should you take in the catch block?

Options:

A.  

Log the bad record, no other action needed.

B.  

Log the bad record and seek the consumer to the offset of the next record.

C.  

Log the bad record and call the consumer.skip() method.

D.  

Throw a runtime exception to trigger a restart of the application.
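
The log-and-seek pattern can be sketched against a hypothetical in-memory consumer. `StubConsumer` and this simplified exception are invented for the illustration; the real `RecordDeserializationException` does expose the failed record's topic-partition and offset, and there is no `consumer.skip()` in the real API:

```python
class RecordDeserializationException(Exception):
    """Simplified stand-in carrying the failed record's position."""
    def __init__(self, partition, offset):
        super().__init__(f"bad record at {partition}@{offset}")
        self.partition = partition
        self.offset = offset

class StubConsumer:
    """Hypothetical consumer over an in-memory log, for illustration only."""
    def __init__(self, records):
        self.records = records
        self.position = 0
    def poll(self):
        if self.position >= len(self.records):
            return None
        rec = self.records[self.position]
        if rec == "poison":
            raise RecordDeserializationException("t-0", self.position)
        self.position += 1
        return rec
    def seek(self, partition, offset):
        self.position = offset

consumer = StubConsumer(["a", "poison", "b"])
processed = []
while True:
    try:
        rec = consumer.poll()
    except RecordDeserializationException as e:
        # Log the bad record, seek past it, and keep consuming.
        print("skipping:", e)
        consumer.seek(e.partition, e.offset + 1)
        continue
    if rec is None:
        break
    processed.append(rec)
print(processed)  # ['a', 'b']
```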

Question 17

You are experiencing low throughput from a Java producer.

Metrics show a low I/O thread ratio and a low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.  

Compression is enabled.

B.  

The producer is sending large batches of messages.

C.  

There is a bad data link layer (layer 2) connection from the producer to the cluster.

D.  

The producer code has an expensive callback function.

Question 18

You are implementing a Kafka Streams application to process financial transactions.

Each transaction must be processed exactly once to ensure accuracy.

The application reads from an input topic, performs computations, and writes results to an output topic.

During testing, you notice duplicate entries in the output topic, which violates the exactly-once processing requirement.

You need to ensure exactly-once semantics (EOS) for this Kafka Streams application.

Which step should you take?

Options:

A.  

Enable compaction on the output topic to handle duplicates.

B.  

Set enable.idempotence=true in the internal producer configuration of the Kafka Streams application.

C.  

Set enable.exactly_once=true in the Kafka Streams configuration.

D.  

Set processing.guarantee=exactly_once_v2 in the Kafka Streams configuration.
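
As a study note, exactly-once processing in Kafka Streams is switched on with a single Streams configuration entry, which internally enables idempotent, transactional writes (it requires brokers that support the v2 EOS protocol):

```properties
processing.guarantee=exactly_once_v2
```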

Question 19

You are experiencing low throughput from a Java producer.

Kafka producer metrics show a low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.  

The producer is sending large batches of messages.

B.  

There is a bad data link layer (Layer 2) connection from the producer to the cluster.

C.  

The producer code has an expensive callback function.

D.  

Compression is enabled.

Question 20

You are configuring a source connector that writes records to an Orders topic.

You need to send some of the records to a different topic.

Which Single Message Transform (SMT) is best suited for this requirement?

Options:

A.  

RegexRouter

B.  

InsertField

C.  

TombstoneHandler

D.  

HeaderFrom
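
As a study note, an illustrative RegexRouter configuration (the transform alias, pattern, and target topic are made-up examples): records whose topic name matches the regex are rewritten to the replacement topic before being produced.

```properties
transforms=reroute
transforms.reroute.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.reroute.regex=Orders
transforms.reroute.replacement=Orders-EU
```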

Question 21

You are creating a Kafka Streams application to process retail data.

Match the input data streams with the appropriate Kafka Streams object.

Options:

Question 22

An application is consuming messages from Kafka.

The application logs show that partitions are frequently being reassigned within the consumer group.

Which two factors may be contributing to this?

(Select two.)

Options:

A.  

The consumer application is slow at processing messages.

B.  

The number of partitions does not match the number of application instances.

C.  

There is a storage issue on the broker.

D.  

An instance of the application is crashing and being restarted.

Question 23

What are two stateless operations in the Kafka Streams API?

(Select two.)

Options:

A.  

Reduce

B.  

Join

C.  

Filter

D.  

GroupBy

Question 24

You are building real-time streaming applications using Kafka Streams.

Your application has a custom transformation.

You need to define custom processors in Kafka Streams.

Which tool should you use?

Options:

A.  

TopologyTestDriver

B.  

Processor API

C.  

Kafka Streams Domain Specific Language (DSL)

D.  

Kafka Streams Custom Transformation Language

Question 25

Which two statements about Kafka Connect Single Message Transforms (SMTs) are correct?

(Select two.)

Options:

A.  

Multiple SMTs can be chained together and act on source or sink messages.

B.  

SMTs are often used to join multiple records from a source data system into a single Kafka record.

C.  

Masking data is a good example of an SMT.

D.  

SMT functionality is included within Kafka Connect converters.

Question 26

You create a producer that writes messages about bank account transactions from tens of thousands of different customers into a topic.

Your consumers must process these messages with low latency and minimize consumer lag.

Processing takes ~6x longer than producing.

Transactions for each bank account must be processed in order.

Which strategy should you use?

Options:

A.  

Use the timestamp of the message's arrival as its key.

B.  

Use the bank account number found in the message as the message key.

C.  

Use a combination of the bank account number and the transaction timestamp as the message key.

D.  

Use a unique identifier such as a universally unique identifier (UUID) as the message key.
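
As a study note: keying by account number sends all of one account's transactions to the same partition, preserving per-account order, while tens of thousands of accounts spread across partitions for parallel consumption. A sketch of key-hash partitioning (the real default partitioner uses murmur2; md5 here is only a deterministic stand-in for illustration):

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Hash the key and map it to a partition, as key-based partitioners do."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same account number always maps to the same partition,
# so that account's transactions are consumed in order.
p1 = partition_for("acct-42", 6)
p2 = partition_for("acct-42", 6)
print(p1 == p2)  # True
```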

Question 27

Match the topic configuration setting with the reason the setting affects topic durability.

(You are given settings such as unclean.leader.election.enable=false, replication.factor, and min.insync.replicas=2.)

Options:
