In the previous article, we set up the Zookeeper and Kafka cluster and verified that we can produce and consume messages.
In this article, we will set up authentication for Kafka and Zookeeper so that anyone who wants to connect to our cluster must provide credentials.
We have 3 virtual machines running on Amazon EC2 instances, and each machine is running Kafka and Zookeeper.
Here are the authentication mechanisms Kafka provides:
- Authentication using SSL.
- Authentication using SASL.
In this article, we will use authentication with SASL. SASL supports the following mechanisms:
- GSSAPI (Kerberos)
- PLAIN
- SCRAM-SHA-256
- SCRAM-SHA-512
- OAUTHBEARER
For the sake of simplicity, we will use the PLAIN authentication mechanism. However, for production it is recommended to use SASL with SSL to avoid exposing sensitive data over the network.
Here is what we are going to do:
- Zookeeper authentication.
- Broker authentication.
We will secure our Zookeeper servers so that the brokers can connect to them securely. We will also set up broker authentication for our clients.
Let’s begin the configuration.
We will do the Zookeeper authentication first. On each server running Zookeeper, create a file named zookeeper_jaas.conf in the config directory.
Log in to the server and switch to the Kafka directory.
Then add the following config values.
Server {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="12345"
    user_admin="12345";
};
Change the values based on your needs.
Open the zookeeper.properties file and add the following values.
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
After saving the file, run the following command.
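The exact command is not shown above; a likely form, assuming Zookeeper is started via Kafka's bundled scripts and that you are in the Kafka installation directory (the file path is an assumption, adjust it to your setup):

```shell
# Point the JVM at the JAAS file so Zookeeper picks up the Server section.
# Path assumes the current directory is the Kafka installation root.
export KAFKA_OPTS="-Djava.security.auth.login.config=$(pwd)/config/zookeeper_jaas.conf"
```

This must be exported in the same shell session that starts Zookeeper, otherwise the JAAS file will not be loaded.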
Repeat the same steps on each server running Zookeeper.
Now let’s do the Kafka authentication.
Log in to each server running Kafka and switch to the Kafka directory.
Create a file named kafka_server_jaas.conf in the config directory.
Add the following values.
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="12345"
    user_admin="12345";
};
Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="12345";
};
After saving the file, we need to edit the Kafka server properties.
Add the following values in the config/server.properties file.
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://:9092
Then run the following command.
Repeat the same steps on each server running Kafka.
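As with Zookeeper, the command itself is not shown; a likely form, assuming you are in the Kafka installation directory (the path is an assumption):

```shell
# Point the JVM at the broker JAAS file before starting Kafka.
# Path assumes the current directory is the Kafka installation root.
export KAFKA_OPTS="-Djava.security.auth.login.config=$(pwd)/config/kafka_server_jaas.conf"
```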
Restarting the cluster
Restart Zookeeper on each server.
Restart Kafka on each server.
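The exact restart commands depend on how you run the services; a sketch using Kafka's bundled scripts, assuming you are in the Kafka installation directory and KAFKA_OPTS is exported in the same shell:

```shell
# Restart Zookeeper first, using Kafka's bundled scripts.
bin/zookeeper-server-stop.sh
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

# Then restart the Kafka broker.
bin/kafka-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties
```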
Your Kafka cluster is now secured. Refer to the code below.
Connecting to Kafka using SASL Auth
Refer to this Node.js code, which uses the kafka-node library, to connect to Kafka using SASL auth.
const kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    // kafkaHost: fill in your broker hostnames/IPs before each port
    client = new kafka.KafkaClient({
        kafkaHost: ':9092,:9093,:9094',
        sasl: { mechanism: 'plain', username: 'admin', password: '12345' }
    }),
    consumer = new Consumer(
        client,
        [{ topic: 'test', partition: 0 }],
        {
            autoCommit: false
        }
    );
consumer.on('message', function (message) {
    console.log(message);
});
When you run this code, any messages that exist on the test topic will be printed to your console.
That’s it. We have completed Kafka cluster authentication using SASL.
This article is a part of a series, check out other articles here:
1: What is Kafka
2: Setting Up Zookeeper Cluster for Kafka in AWS EC2
3: Setting up Multi-Broker Kafka in AWS EC2
4: Setting up Authentication in Multi-broker Kafka cluster in AWS EC2
5: Setting up Kafka management for Kafka cluster
6: Capacity Estimation for Kafka Cluster in production
7: Performance testing Kafka cluster