How to Set Up Authentication in a Kafka Cluster

In the previous article, we set up the Zookeeper and Kafka cluster, and we can now produce and consume messages.

In this article, we will configure authentication for Kafka and Zookeeper, so that anyone who wants to connect to our cluster must provide credentials.

We have three virtual machines running as Amazon EC2 instances, and each machine runs both Kafka and Zookeeper.

Here are the authentication mechanisms Kafka provides:

  1. Authentication using SSL.
  2. Authentication using SASL.

In this article, we will use SASL authentication. With SASL, we can use one of the following mechanisms:

  • GSSAPI (Kerberos)
  • PLAIN
  • SCRAM-SHA-256
  • SCRAM-SHA-512
  • OAUTHBEARER

For the sake of simplicity, we will use the PLAIN mechanism. However, for production it is recommended to use SASL with SSL, because with SASL_PLAINTEXT credentials and data travel over the network unencrypted.
Here is what we are going to do:

  • Zookeeper authentication.
  • Broker authentication.

We will secure our Zookeeper servers so that brokers must authenticate to connect to them, and we will enable broker authentication so that clients must authenticate to connect to Kafka.

Let’s begin the configuration.

We will configure Zookeeper authentication first. On each server running Zookeeper, create a file named zookeeper_jaas.conf in the config directory.

Log in to the server and switch to the Kafka directory.

$ vi config/zookeeper_jaas.conf

Then add the following config values.

Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="12345"
  user_admin="12345";
};

Change the values based on your needs. The user_admin="12345" entry defines a user named admin (with password 12345) that connecting clients, in our case the Kafka brokers, will use to authenticate with Zookeeper.

Open the config/zookeeper.properties file and add the following values.

#auth
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

After saving the file, run this command so that the JAAS configuration is picked up when Zookeeper starts:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/ubuntu/kafka_2.11-2.1.0/config/zookeeper_jaas.conf"
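
Note that export only applies to the current shell session. A common alternative, which is optional and not part of the original steps, is to add the same export near the top of bin/zookeeper-server-start.sh so the JAAS file is always picked up when Zookeeper starts:

# bin/zookeeper-server-start.sh (add near the top of the script)
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/ubuntu/kafka_2.11-2.1.0/config/zookeeper_jaas.conf"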

Repeat the same steps on each server running Zookeeper.

Now let's configure Kafka broker authentication.

Log in to each server running Kafka and switch to the Kafka directory.

Create a file named kafka_server_jaas.conf in the config directory.

$ vi config/kafka_server_jaas.conf

Add the following values.

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="12345"
  user_admin="12345";
};

Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="12345";
};
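
The KafkaServer section defines the credentials the broker itself uses for inter-broker connections (username and password) and the users it accepts from connecting clients (the user_admin entry). The Client section holds the credentials the broker uses to connect to the secured Zookeeper, matching the user we defined in zookeeper_jaas.conf. If you ever need more than one client user, you can add extra user_<name> entries to the KafkaServer section. A small sketch, where alice is only an illustrative name and not part of this setup:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="12345"
  user_admin="12345"
  user_alice="alice-secret";
};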

After saving the file, we need to edit the Kafka server properties.

Add the following values in the config/server.properties file.

# AUTH

security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://:9092

After saving server.properties, run this command so that the JAAS configuration is picked up when Kafka starts:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/ubuntu/kafka_2.11-2.1.0/config/kafka_server_jaas.conf"
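
As with Zookeeper, this export only lives in the current shell session. Optionally, you can add it near the top of bin/kafka-server-start.sh instead, so it is always set when the broker starts:

# bin/kafka-server-start.sh (add near the top of the script)
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/ubuntu/kafka_2.11-2.1.0/config/kafka_server_jaas.conf"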

Repeat the same steps on each server running Kafka.

Restarting the cluster

Restart Zookeeper on each server. Make sure KAFKA_OPTS points at zookeeper_jaas.conf in the shell (or start script) you start it from, since Zookeeper and Kafka read the same variable.

$ bin/zookeeper-server-start.sh config/zookeeper.properties

Then restart Kafka on each server, this time with KAFKA_OPTS pointing at kafka_server_jaas.conf.

$ bin/kafka-server-start.sh config/server.properties
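
Before moving to application code, you can verify the setup with the console tools that ship with Kafka. Create a small client properties file, for example config/client_sasl.properties (the file name is just a suggestion):

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="12345";

Then produce and consume a few messages from the test topic (run this on one of the brokers, or replace localhost with a broker address):

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/client_sasl.properties
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/client_sasl.properties

Without the properties file, the same commands should fail to connect, which confirms that authentication is being enforced.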

Your Kafka cluster is now secured with SASL/PLAIN. Refer to the code below to connect from a client.

Connecting to Kafka using SASL Auth

Refer to this Node.js code, which uses the kafka-node package, to connect to Kafka with SASL authentication.

var kafka = require('kafka-node');
var Consumer = kafka.Consumer;

// Connect to the brokers using the SASL/PLAIN credentials configured above
var client = new kafka.KafkaClient({
  kafkaHost: ':9092,:9093,:9094', // add your broker hostnames here
  sasl: { mechanism: 'plain', username: 'admin', password: '12345' }
});

// Consume from partition 0 of the test topic
var consumer = new Consumer(
  client,
  [{ topic: 'test', partition: 0 }],
  { autoCommit: false }
);

consumer.on('message', function (message) {
  console.log(message);
});

When you run this code, any existing messages from the test topic will be printed to your console.
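
Producing works the same way: the kafka-node Producer reuses the authenticated client. Here is a minimal sketch; the broker addresses and message text are placeholders:

var kafka = require('kafka-node');

var client = new kafka.KafkaClient({
  kafkaHost: ':9092,:9093,:9094', // add your broker hostnames here
  sasl: { mechanism: 'plain', username: 'admin', password: '12345' }
});

var producer = new kafka.Producer(client);

producer.on('ready', function () {
  // Send a single message to the test topic
  producer.send([{ topic: 'test', messages: 'hello from an authenticated producer' }], function (err, result) {
    if (err) console.error(err);
    else console.log(result);
  });
});

producer.on('error', function (err) {
  console.error(err);
});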

That's it. We have set up SASL authentication for our Kafka cluster.

This article is part of a series; check out the other articles here:

1: What is Kafka
2: Setting Up Zookeeper Cluster for Kafka in AWS EC2
3: Setting up Multi-Broker Kafka in AWS EC2
4: Setting up Authentication in Multi-broker Kafka cluster in AWS EC2
5: Setting up Kafka management for Kafka cluster
6: Capacity Estimation for Kafka Cluster in production
7: Performance testing Kafka cluster
