
ExamsBrite Dumps

Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers


Last Update Oct 15, 2025
Total Questions : 61

We are offering free CCDAK Confluent exam questions. All you need to do is sign up with your details, practice the free CCDAK exam questions, and then move on to the complete pool of Confluent Certified Developer for Apache Kafka Certification Examination test questions.

Questions 1

You are sending messages to a Kafka cluster in JSON format and want to add more information related to each message:

    Format of the message payload

    Message creation time

    A globally unique identifier that allows the message to be traced through the system

Where should this additional information be set?

Options:

A. Header
B. Key
C. Value
D. Broker
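For context, Kafka record headers are key/value metadata carried alongside the payload, leaving key and value untouched. A minimal sketch in pure Python, modeling a record rather than using a real client (the header names are illustrative assumptions, not a Kafka standard):

```python
import json
import time
import uuid

def build_record(payload: dict) -> dict:
    """Model a Kafka record whose metadata travels in headers."""
    return {
        "key": None,
        "value": json.dumps(payload),
        # Header names below are illustrative choices, not mandated by Kafka.
        "headers": [
            ("content-type", b"application/json"),                  # payload format
            ("created-at", str(int(time.time() * 1000)).encode()),  # creation time
            ("trace-id", uuid.uuid4().hex.encode()),                # globally unique id
        ],
    }

record = build_record({"order_id": 42})
header_keys = [k for k, _ in record["headers"]]
```

The payload itself stays unchanged; all tracing metadata rides in the headers.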

Questions 2

Match the testing tool with the type of test it is typically used to perform.

Options:

Questions 3

Which two statements are correct when assigning partitions to the consumers in a consumer group using the assign() API?

(Select two.)

Options:

A. It is mandatory to subscribe to a topic before calling assign() to assign partitions.
B. The consumer chooses which partition to read without any assignment from brokers.
C. The consumer group will not be rebalanced if a consumer leaves the group.
D. All topics must have the same number of partitions to use assign() API.

Questions 4

You want to connect with username and password to a secured Kafka cluster that has SSL encryption.

Which properties must your client include?

Options:

A. security.protocol=SASL_SSL
   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';
B. security.protocol=SSL
   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';
C. security.protocol=SASL_PLAINTEXT
   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';
D. security.protocol=PLAINTEXT
   sasl.jaas.config=org.apache.kafka.common.security.ssl.TlsLoginModule required username='myUser' password='myPassword';
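As a reference point, username/password authentication over an encrypted connection combines SASL (with the PLAIN mechanism) inside TLS, i.e. the SASL_SSL protocol. A sketch of such a configuration as a Python dict (the property names follow the Java client's configuration; the broker address and credentials are placeholders):

```python
# Kafka Java-client-style properties for SASL/PLAIN over TLS.
# Broker address and credentials are placeholders for illustration only.
client_config = {
    "bootstrap.servers": "broker1:9093",
    "security.protocol": "SASL_SSL",  # SASL authentication inside a TLS connection
    "sasl.mechanism": "PLAIN",
    "sasl.jaas.config": (
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        "username='myUser' password='myPassword';"
    ),
}
```

Note that SSL alone gives encryption without SASL credentials, and SASL_PLAINTEXT gives credentials without encryption.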

Questions 5

What are two examples of performance metrics?

(Select two.)

Options:

A. fetch-rate
B. Number of active users
C. total-login-attempts
D. incoming-byte-rate
E. Number of active user sessions
F. Time of last failed login

Questions 6

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

    Topic name: DLQ-Topic

    Headers containing error context must be added to the messages

Which three configuration parameters are necessary?

(Select three.)

Options:

A.  

errors.tolerance=all

B.  

errors.deadletterqueue.topic.name=DLQ-Topic

C.  

errors.deadletterqueue.context.headers.enable=true

D.  

errors.tolerance=none

E.  

errors.log.enable=true

F.  

errors.log.include.messages=true
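For background, Kafka Connect's dead letter queue for sink connectors is driven by the errors.* properties: the task must tolerate failures, name the DLQ topic, and opt in to error-context headers. A sketch of those properties as a Python dict for illustration (values shown as strings, as in a connector config):

```python
# Sink-connector error-handling properties (Kafka Connect), shown as a dict.
dlq_config = {
    "errors.tolerance": "all",                            # keep the task running on bad records
    "errors.deadletterqueue.topic.name": "DLQ-Topic",     # where failed records are routed
    "errors.deadletterqueue.context.headers.enable": "true",  # attach error-context headers
}
```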

Questions 7

Which statement describes the storage location for a sink connector’s offsets?

Options:

A. The __consumer_offsets topic, like any other consumer
B. The topic specified in the offsets.storage.topic configuration parameter
C. In a file specified by the offset.storage.file.filename configuration parameter
D. In memory which is then periodically flushed to a RocksDB instance

Questions 8

An application is consuming messages from Kafka.

The application logs show that partitions are frequently being reassigned within the consumer group.

Which two factors may be contributing to this?

(Select two.)

Options:

A. There is a slow consumer processing application.
B. The number of partitions does not match the number of application instances.
C. There is a storage issue on the broker.
D. An instance of the application is crashing and being restarted.

Questions 9

This schema excerpt is an example of which schema format?

package com.mycorp.mynamespace;

message SampleRecord {
  int32 Stock = 1;
  double Price = 2;
  string Product_Name = 3;
}

Options:

A. Avro
B. Protobuf
C. JSON Schema
D. YAML

Questions 10

A stream processing application is consuming from a topic with five partitions. You run three instances of the application. Each instance has num.stream.threads=5.

You need to identify the number of stream tasks that will be created and how many will actively consume messages from the input topic.

Options:

A. 5 created, 1 actively consuming
B. 5 created, 5 actively consuming
C. 15 created, 5 actively consuming
D. 15 created, 15 actively consuming
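As background for reasoning about this question: in Kafka Streams, a simple (single sub-topology) application creates one task per input partition, regardless of how many threads are configured; each task runs on exactly one thread, and threads beyond the task count sit idle. A sketch of that arithmetic, assuming a single sub-topology:

```python
partitions = 5
instances = 3
threads_per_instance = 5

# Tasks are created per input partition (single sub-topology assumed),
# not per thread.
tasks_created = partitions
total_threads = instances * threads_per_instance

# Each task is assigned to exactly one thread; surplus threads stay idle.
active_tasks = min(tasks_created, total_threads)
idle_threads = total_threads - active_tasks
```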

Questions 11

You create a topic named IoT-Data with 10 partitions and a replication factor of three.

A producer sends 1 MB messages compressed with Gzip.

Which two statements are true in this scenario?

(Select two.)

Options:

A. Compression type will be stored in batch attributes.
B. By default, compression is the producer’s responsibility.
C. The message is already compressed so it will not be serialized.
D. All compressed messages will be stored in the same topic partition.

Questions 12

You are working on a Kafka cluster with three nodes. You create a topic named orders with:

    replication.factor = 3

    min.insync.replicas = 2

    acks = all

What exception will be generated if two brokers are down due to network delay?

Options:

A. NotEnoughReplicasException
B. NetworkException
C. NotCoordinatorException
D. NotLeaderForPartitionException

Questions 13

You are writing to a topic with acks=all.

The producer receives acknowledgments but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

Options:

A. enable.auto.commit=true
B. retries=2147483647
   max.in.flight.requests.per.connection=5
   enable.idempotence=true
C. retries=0
   max.in.flight.requests.per.connection=5
   enable.idempotence=true
D. retries=2147483647
   max.in.flight.requests.per.connection=1
   enable.idempotence=false
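For reference, the idempotent producer assigns each record batch a sequence number so the broker can discard duplicate resends caused by retried requests. A sketch of the relevant settings as a Python dict (standard Java-client property names; values shown as strings for illustration):

```python
# Producer settings for duplicate-free retries (idempotent producer).
producer_config = {
    "enable.idempotence": "true",  # broker deduplicates resent batches by sequence number
    "acks": "all",                 # required for idempotence
    "retries": "2147483647",       # retry aggressively; duplicates are filtered broker-side
    "max.in.flight.requests.per.connection": "5",  # must be <= 5 with idempotence enabled
}
```

Setting retries=0 avoids duplicates only by giving up on delivery, which is why retries are kept high and deduplication is delegated to the broker.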

Questions 14

You use Kafka Connect with the JDBC source connector to extract data from a large database and push it into Kafka.

The database contains tens of tables, and the current connector is unable to process the data fast enough.

You add more Kafka Connect workers, but throughput doesn't improve.

What should you do next?

Options:

A. Increase the number of Kafka partitions for the topics.
B. Increase the value of the connector's property tasks.max.
C. Add more Kafka brokers to the cluster.
D. Modify the database schemas to enable horizontal sharding.

Questions 15

What is a consequence of increasing the number of partitions in an existing Kafka topic?

Options:

A. Existing data will be redistributed across the new number of partitions temporarily increasing cluster load.
B. Records with the same key could be located in different partitions.
C. Consumers will need to process data from more partitions which will significantly increase consumer lag.
D. The acknowledgment process will increase latency for producers using acks=all.

Questions 16

You are writing a producer application and need to ensure proper delivery. You configure the producer with acks=all.

Which two actions should you take to ensure proper error handling?

(Select two.)

Options:

A. Use a callback argument in producer.send() where you check delivery status.
B. Check that producer.send() returned a RecordMetadata object and is not null.
C. Surround the call of producer.send() with a try/catch block to catch KafkaException.
D. Check the value of ProducerRecord.status().
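The two error paths can be sketched without a real broker: synchronous exceptions from send() are caught with try/except, while per-record delivery results surface through the completion callback. A pure-Python mock (FakeProducer and its behavior are assumptions purely for illustration, not a real client API):

```python
class FakeProducer:
    """Stand-in for a Kafka producer; fails synchronously when value is None."""

    def send(self, value, callback):
        if value is None:
            raise ValueError("serialization failed")  # synchronous failure path
        metadata = {"offset": 0}                      # would come from the broker
        callback(metadata, None)                      # asynchronous completion path

results = []

def on_completion(metadata, exception):
    # Check the exception first; metadata is only meaningful on success.
    results.append(("error", exception) if exception else ("ok", metadata))

producer = FakeProducer()
try:
    producer.send({"id": 1}, on_completion)  # delivered; callback sees metadata
    producer.send(None, on_completion)       # raises before the record is sent
except ValueError:
    results.append(("caught", None))
```

The same two-pronged pattern (callback plus try/catch) applies with a real client, where the callback receives the broker-assigned RecordMetadata.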

Questions 17

The producer code below features a Callback class with a method called onCompletion().

In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?

Options:

A. The sequential ID of the message committed into a partition
B. Its position in the producer’s batch of messages
C. The number of bytes that overflowed beyond a producer batch of messages
D. The ID of the partition to which the message was committed

Questions 18

Which configuration allows more time for the consumer poll to process records?

Options:

A. session.timeout.ms
B. heartbeat.interval.ms
C. max.poll.interval.ms
D. fetch.max.wait.ms
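For orientation: max.poll.interval.ms bounds the time allowed between successive poll() calls (i.e., record-processing time), whereas session.timeout.ms and heartbeat.interval.ms govern liveness via the background heartbeat thread. A sketch of the distinction as a Python dict (property names are standard; the values are illustrative assumptions):

```python
# Consumer timing properties; values below are illustrative, not defaults.
consumer_config = {
    "max.poll.interval.ms": "600000",  # up to 10 min to process one poll() batch
    "session.timeout.ms": "45000",     # heartbeat-based liveness, separate from processing
    "heartbeat.interval.ms": "3000",   # how often the background thread heartbeats
}
```

Raising max.poll.interval.ms is what buys a slow record-processing loop more time before the consumer is evicted from the group.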
