
How to use PEM certificates with Apache Kafka

It’s been a long wait, but it’s finally here: starting with Apache Kafka 2.7, it is possible to use TLS certificates in PEM format with brokers and Java clients. You might wonder: why does this matter?

PEM is a scheme for encoding X.509 certificates and private keys as Base64-encoded ASCII text. This makes the certificates easier to handle: you can simply provide keys and certificates to your application as string parameters (e.g. through environment variables). This is especially useful if your applications run in containers, where mounting files into containers makes the deployment pipeline a bit more complex. In this post, I want to show you two ways to use PEM certificates with Kafka.

Providing certificates as strings

Brokers and CLI tools

You can add certificates directly to the configuration file of your clients or brokers. Since the properties expect single-line values, you must collapse the original multiline PEM into a single line, adding the line feed characters ( \n ) at the end of each original line. Here’s how the SSL section of the properties file should look:

security.protocol=SSL
ssl.keystore.type=PEM
ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE-----\nMIIDZjC...\n-----END CERTIFICATE-----
ssl.keystore.key=-----BEGIN ENCRYPTED PRIVATE KEY-----\n...\n-----END ENCRYPTED PRIVATE KEY-----
ssl.key.password=<private_key_password>
ssl.truststore.type=PEM
ssl.truststore.certificates=-----BEGIN CERTIFICATE-----\nMICC...\n-----END CERTIFICATE-----
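
If you don’t want to do this transformation by hand, a one-liner like the following can produce the single-line form. This is just a sketch, assuming a Unix shell with awk and a certificate stored in cert.pem; adjust the file name to your setup:

awk 'NF {sub(/\r/, ""); printf "%s\\n", $0}' cert.pem

Each line of the file is printed with a literal \n appended, so the output ends up on a single physical line, ready to be pasted into the properties file.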

Note that ssl.keystore.certificate.chain needs to contain your signed certificate as well as all the intermediary CA certificates. For more details on this see the Common gotchas section below.
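
For illustration, a chain with one intermediary CA might look like this as a single-line string (the certificate contents are placeholders):

ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE-----\n<your signed certificate>\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n<intermediary CA certificate>\n-----END CERTIFICATE-----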

Your private key goes into the ssl.keystore.key field, while the password for the private key (if you use one) goes into the ssl.key.password field.

Java clients

Java clients use exactly the same properties; the constants provided by the client library just make the configuration more readable:

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// ...omitted other producer configs...
//SSL configs
properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
properties.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "<certificate_chain_string>");
properties.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "<private_key_string>");
// key password is needed if the private key is encrypted
properties.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "<private_key_password>");
properties.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, "<trusted_certificate>");

KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
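
Since the certificates are just strings, a containerized deployment could pull them from environment variables instead of hard-coding them. Here’s a minimal sketch of the SSL part of the configuration, assuming variables named KAFKA_SSL_CERT_CHAIN, KAFKA_SSL_KEY, KAFKA_SSL_KEY_PASSWORD and KAFKA_SSL_CA_CERT (the names are my own, pick whatever fits your deployment), used in place of the hard-coded values above before the producer is created:

//SSL configs, read from the environment (variable names are just an example)
properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
properties.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, System.getenv("KAFKA_SSL_CERT_CHAIN"));
properties.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, System.getenv("KAFKA_SSL_KEY"));
properties.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, System.getenv("KAFKA_SSL_KEY_PASSWORD"));
properties.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, System.getenv("KAFKA_SSL_CA_CERT"));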

Providing certificates as files

If you already use mTLS authentication with Kafka, the easiest way to transition to PEM certificates is to use them as files, replacing the Java keystore and truststore you use today. This makes the move from PKCS12 files to PEM files straightforward.

Brokers and CLI tools

Here’s how the SSL section of the properties file should look:

security.protocol=SSL
ssl.keystore.type=PEM
ssl.keystore.location=/path/to/file/containing/certificate/chain
ssl.key.password=<private_key_password>
ssl.truststore.type=PEM
ssl.truststore.location=/path/to/truststore/certificate

The ssl.keystore.type and ssl.truststore.type properties tell Kafka in which format we are providing the keystore and the truststore.

Next, ssl.keystore.location points to a file that should contain the following:

  • your private key
  • your signed certificate
  • any intermediary CA certificates

For more details about the certificate chain see the Common gotchas section below.
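
Put together, the keystore file could look something like this (the contents are placeholders):

-----BEGIN ENCRYPTED PRIVATE KEY-----
...
-----END ENCRYPTED PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
<your signed certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<intermediary CA certificate>
-----END CERTIFICATE-----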

You will need to set the ssl.key.password property if your private key is encrypted (which I hope it is). Make sure not to provide ssl.keystore.password, otherwise you’ll get an error.

Java clients

Again, Java clients use the same properties, but here we’re using the constants provided by the Kafka client library:

Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// ...omitted other producer configs...
//SSL configs
properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
properties.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/file/containing/certificate/chain");
// key password is needed if the private key is encrypted
properties.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "<private_key_password>");
properties.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore/certificate");

KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

Common gotchas when setting up a certificate chain

  1. If your private key is encrypted (which it should always be), you need to convert it from PKCS#1 to PKCS#8 format for Java/Kafka to be able to read it properly (see the example after this list).
  2. If you want to provide the PEM certificate as a one-line string, make sure to add the line feed characters ( \n ) at the end of each line. Otherwise, the certificate will be considered invalid.
  3. The certificate chain has to include your certificate together with all the intermediary CA certificates that signed it, in that order. So for example, if your certificate was signed by certificate A which was signed by cert B which was signed by the root certificate, your certificate chain has to include: your certificate, certificate A and certificate B, in that order. Do note that the root certificate should not be in the chain.
  4. Certificate order in your certificate chain is important (see point 3)
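
For point 1, the conversion to PKCS#8 can be done with OpenSSL, for example (assuming your key is stored in private-key.pem; you will be prompted for the passphrases):

openssl pkcs8 -topk8 -inform PEM -outform PEM -in private-key.pem -out private-key-pkcs8.pem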

Example of Kafka SSL setup with PEM certificates

Testing the SSL setup of your clients is not simple, because setting up a Kafka cluster with SSL authentication is not a straightforward process. This is why I created a docker-compose project with a single zookeeper and broker, with SSL authentication enabled. The project borrows many ideas from the excellent cp-demo project by Confluent.

To use the project, clone the docker-compose repository, and navigate to the kafka-ssl folder.

git clone https://github.com/codingharbour/kafka-docker-compose.git
cd kafka-docker-compose/kafka-ssl

Running the start-cluster.sh script will generate a self-signed root certificate, which the script then uses to sign all other certificates: the certificates for the broker and zookeeper, as well as certificates for one producer and one consumer. After this, the script starts the cluster using docker-compose.

Don’t have docker-compose? Check: how to install docker-compose

In addition, the startup script will generate producer.properties and consumer.properties files you can use with kafka-console-* tools.

The consumer.properties file is an example of how to use PEM certificates as strings. The producer.properties, on the other hand, uses certificates stored in PEM files. This way you can see and test both approaches described in this blog post.
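
For instance, to try the file-based setup you could run the console producer with the generated file. The broker address and topic name below are placeholders; use the ones from the docker-compose project:

kafka-console-producer --bootstrap-server localhost:9093 --topic test-topic --producer.config producer.properties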


Would you like to learn more about Kafka?

I have created a Kafka mini-course that you can get absolutely free. Sign up below and I will send you lessons directly to your inbox.
