
Confluent Certified Administrator for Apache Kafka Questions and Answers

Last Update Oct 15, 2025
Total Questions: 54

We are offering free CCAAK Confluent exam questions. Just sign up with your details, work through the free CCAAK exam questions, and then move on to the complete pool of Confluent Certified Administrator for Apache Kafka test questions.

Questions 1

A topic 'recurring payments' is created on a Kafka cluster with three brokers (broker ids '0', '1', '2') and nine partitions. The topic's min.insync.replicas is set to three, and the producer is set with acks=all. The Kafka broker with id '0' is down.

Which statement is correct?

Options:

A.  

Consumers can read committed messages from partitions on broker id 1, 2

B.  

Producers can write messages to all the partitions, because new leaders for the partitions will be elected.

C.  

Producers and consumers will have no impact on six of the nine partitions.

D.  

Producers will only be able to write messages to the topic where the Leader for the partition is on Broker id 1.
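
For reference, a setup like the one described here can be reproduced and inspected with the standard Kafka topic tooling. A minimal sketch, assuming a replication factor of three (not stated in the question) and an illustrative topic name and bootstrap address:

# Create a comparable topic: 9 partitions, replication factor 3, min.insync.replicas=3
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic recurring-payments --partitions 9 --replication-factor 3 \
  --config min.insync.replicas=3

# With broker 0 down, --describe shows each partition's current leader and its shrunken ISR list
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic recurring-payments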

Questions 2

If the Controller detects the failure of a broker that was the leader for some partitions, which actions will be taken? (Choose two.)

Options:

A.  

The Controller waits for a new leader to be nominated by ZooKeeper.

B.  

The Controller persists the new leader and ISR list to ZooKeeper.

C.  

The Controller sends the new leader and ISR list changes to all brokers.

D.  

The Controller sends the new leader and ISR list changes to all producers and consumers.

Questions 3

A developer is working for a company whose internal best practices dictate that there must be no single point of failure for any stored data.

What is the best approach to make sure the developer is complying with this best practice when creating Kafka topics?

Options:

A.  

Set 'min.insync.replicas' to 1.

B.  

Use the parameter --partitions=3 when creating the topic.

C.  

Make sure the topics are created with linger.ms=0 so data is written immediately and not held in batch.

D.  

Set the topic replication factor to 3.
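
For context, the replication factor can be set per topic at creation time or as a broker-wide default; a minimal sketch with illustrative names and values (the cluster must have at least as many brokers as the requested replication factor):

# Per-topic: set the replication factor explicitly when creating the topic
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic payments-audit --partitions 3 --replication-factor 3

# Broker-wide default applied to topics created without an explicit value (server.properties)
default.replication.factor=3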

Questions 4

Which options are valid Kafka topic cleanup policies? (Choose two.)

Options:

A.  

delete

B.  

default

C.  

compact

D.  

cleanup
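
For context, a topic's cleanup policy can be set when the topic is created or changed afterwards with kafka-configs.sh; a sketch with an illustrative topic name and bootstrap address:

# Change an existing topic's cleanup policy (cleanup.policy accepts delete, compact, or both)
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name clickstream \
  --add-config cleanup.policy=compact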

Questions 5

By default, what do Kafka broker network connections have?

Options:

A.  

No encryption, no authentication and no authorization

B.  

Encryption, but no authentication or authorization

C.  

No encryption, no authorization, but have authentication

D.  

Encryption and authentication, but no authorization
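
As a point of reference, this is what a stock listener definition looks like in server.properties; the port is illustrative:

# Out-of-the-box listener: PLAINTEXT means no TLS encryption and no client authentication
listeners=PLAINTEXT://:9092
# authorizer.class.name is unset by default, so no authorization is enforced either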

Questions 6

You have an existing topic t1 that you want to delete because there are no more producers writing to it or consumers reading from it.

What is the recommended way to delete the topic?

Options:

A.  

If topic deletion is enabled on the brokers, delete the topic using Kafka command line tools.

B.  

The consumer should send a message with a 'null' key.

C.  

Delete the log files and their corresponding index files from the leader broker.

D.  

Delete the offsets for that topic from the consumer offsets topic.
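
For reference, a sketch of the command-line deletion path, with an illustrative bootstrap address:

# Broker-side prerequisite in server.properties (enabled by default in recent Kafka versions)
delete.topic.enable=true

# Delete the topic using the Kafka command line tools
bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic t1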

Questions 7

Which ksqlDB statement produces data that is persisted into a Kafka topic?

Options:

A.  

SELECT (Pull Query)

B.  

SELECT (Push Query)

C.  

INSERT VALUES

D.  

CREATE TABLE

Questions 8

A Kafka broker supports which Simple Authentication and Security Layer (SASL) mechanisms for authentication? (Choose three.)

Options:

A.  

SASL/PLAIN

B.  

SASL/SAML20

C.  

SASL/GSSAPI (Kerberos)

D.  

SASL/OAUTHBEARER

E.  

SASL/OTP
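
For context, SASL mechanisms are enabled per broker in server.properties; a minimal sketch, with the listener name and port chosen for illustration:

# Several SASL mechanisms can be enabled on the same broker at once
listeners=SASL_SSL://:9093
sasl.enabled.mechanisms=PLAIN,GSSAPI,OAUTHBEARER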

Questions 9

Which statements are correct about partitions? (Choose two.)

Options:

A.  

A partition in Kafka will be represented by a single segment on a disk.

B.  

A partition is comprised of one or more segments on a disk.

C.  

All partition segments reside in a single directory on a broker disk.

D.  

A partition size is determined after the largest segment on a disk.
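
For context, each partition maps to one directory on a broker's disk, named <topic>-<partition> under log.dirs, which holds the partition's segment files and their indexes. A sketch with an illustrative log directory and topic:

ls /var/lib/kafka/data/orders-0/
# 00000000000000000000.log   00000000000000000000.index   00000000000000000000.timeindex
# 00000000000001245070.log   00000000000001245070.index   00000000000001245070.timeindex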

Questions 10

An employee in the reporting department needs assistance because their data feed is slowing down. You start by quickly checking the consumer lag for the clients on the data stream.

Which command will allow you to quickly check for lag on the consumers?

Options:

A.  

bin/kafka-consumer-lag.sh

B.  

bin/kafka-consumer-groups.sh

C.  

bin/kafka-consumer-group-throughput.sh

D.  

bin/kafka-reassign-partitions.sh
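
A sketch of the lag check, with an illustrative group name and bootstrap address; the LAG column of the output is the gap between each partition's log end offset and the group's committed offset:

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group reporting-feed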

Questions 11

If a broker's JVM garbage collection takes too long, what can occur?

Options:

A.  

There will be a trigger of the broker's log cleaner thread.

B.  

ZooKeeper believes the broker to be dead.

C.  

There is backpressure to, and pausing of, Kafka clients.

D.  

Log files written to disk are loaded into the page cache.
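
For context, it is the broker's ZooKeeper session that a long garbage-collection pause can jeopardize; a server.properties sketch with an illustrative timeout value:

# If a GC pause outlasts this session timeout, ZooKeeper expires the broker's session
# and the broker is considered dead until it re-registers
zookeeper.session.timeout.ms=18000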

Questions 12

Which model does Kafka use for consumers?

Options:

A.  

Push

B.  

Publish

C.  

Pull

D.  

Enrollment

Questions 13

Your organization has a mission-critical Kafka cluster that must be highly available. A Disaster Recovery (DR) cluster has been set up using Replicator, and data is continuously being replicated from the source cluster to the DR cluster. However, you notice that the message at offset 1002 on the source cluster does not seem to match the message at offset 1002 on the destination DR cluster.

Which statement is correct?

Options:

A.  

The DR cluster is lagging behind updates; once the DR cluster catches up, the messages will match.

B.  

The message on the DR cluster was accidentally overwritten by another application.

C.  

The offsets for the messages on the source and destination clusters may not match.

D.  

The message was updated on the source cluster, but the update did not flow to the destination DR cluster and errored.

Questions 14

In certain scenarios, it is necessary to weigh the trade-off between latency and throughput. One method to increase throughput is to configure batching of messages.

In addition to batch.size, what other producer property can be used to accomplish this?

Options:

A.  

send.buffer.bytes

B.  

linger.ms

C.  

compression

D.  

delivery.timeout.ms
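
A producer configuration sketch with illustrative values; a batch is sent when either batch.size bytes have accumulated or linger.ms milliseconds have elapsed, whichever happens first:

# producer.properties
batch.size=65536
linger.ms=20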

Questions 15

Kafka Connect is running on a two-node cluster in distributed mode. The connector is a source connector that pulls data from Postgres tables (users/payment/orders) and writes to topics with two partitions and a replication factor of two. The development team notices that the data is lagging behind.

What should be done to reduce the data lag?

The Connector definition is listed below:

{
  "name": "confluent-postgresql-source",
  "connector.class": "PostgresSource",
  "topic.prefix": "postgresql_",
  …
  "db.name": "postgres",
  "table.whitelist": "users.payment.orders",
  "timestamp.column.name": "created_at",
  "output.data.format": "JSON",
  "db.timezone": "UTC",
  "tasks.max": "1"
}

Options:

A.  

Increase the number of Connect Nodes.

B.  

Increase the number of Connect Tasks (tasks.max value).

C.  

Increase the number of partitions.

D.  

Increase the replication factor and increase the number of Connect Tasks.
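
For context, connector configuration changes such as raising tasks.max are normally applied through the Kafka Connect REST API. A sketch with an illustrative Connect host and a hypothetical postgres-source-config.json file assumed to contain the configuration shown above with a higher "tasks.max":

# PUT /connectors/<name>/config replaces the connector's configuration and restarts its tasks
curl -X PUT -H "Content-Type: application/json" \
     --data @postgres-source-config.json \
     http://localhost:8083/connectors/confluent-postgresql-source/config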

Questions 16

How can authentication for both internal component traffic and external client traffic be accomplished?

Options:

A.  

Configure multiple brokers.

B.  

Configure multiple listeners on the broker.

C.  

Configure multiple security protocols on the same listener.

D.  

Configure LoadBalancer.
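
A server.properties sketch of separate listeners for internal (inter-broker) and external (client) traffic, each with its own security protocol; the listener names, hosts, and ports are illustrative:

listeners=INTERNAL://:9092,EXTERNAL://:9093
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL
inter.broker.listener.name=INTERNAL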
