The User Operator in Red Hat AMQ Streams supports the following authentication mechanisms:
- TLS Client Authentication
- SCRAM-SHA-512 Authentication
This blog covers configuring TLS client authentication, generating a truststore and keystore from the user's certificate and private key in OpenShift, and finally configuring Java producer and consumer clients with the generated truststore and keystore.
Please note that this blog is not an introduction to OpenShift, so the OpenShift components used here are not explained in detail.
Prerequisites
The AMQ Streams Operator is already installed in the OpenShift cluster. The latest release, version 2.0.0, is used in this blog. Red Hat AMQ Streams 2.0.0 is based on Apache Kafka 3.0.0.
To install the Red Hat AMQ Streams Operator, please refer to this link.
Configure Kafka Cluster with Topic and User
- Create a new Kafka cluster with the configuration circled in red below. This enables authentication and authorization. If it is not configured, you will get the "Authorization needs to be enabled in the Kafka custom resource" error when creating the Kafka user below. Please refer to Support Case #5440781 for more details about this error.
- With this Kafka cluster created, a service named <kafka cluster name>-kafka-<TLS listener name>-bootstrap is created automatically. Based on the configuration above, the TLS listener name is tls. This service exposes the external load balancer used to connect to the Kafka cluster.
- Create a topic in this Kafka cluster.
- Create a user in this Kafka cluster. When the user is created by the User Operator, a new secret with the same name as the KafkaUser resource is created. The secret contains the public and private keys to be used for TLS client authentication.
- In the OpenShift Console, navigate to Workloads -> Secrets. Open my-user and switch to the YAML tab. user.p12 and user.password will be used to generate the keystore for the Java client.
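For reference, the relevant parts of the Kafka and KafkaUser custom resources look roughly like the sketch below. This is an illustration only; the cluster name my-cluster, user name my-user, and listener name tls are example values matching this blog's commands, not a definitive configuration.

```yaml
# Sketch of the relevant custom resource fragments (names are examples).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      - name: tls                 # TLS listener name referenced by the bootstrap service
        port: 9094
        type: loadbalancer        # exposes an external load balancer
        tls: true
        authentication:
          type: tls               # enables TLS client authentication
    authorization:
      type: simple                # required, otherwise KafkaUser creation fails
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls                     # the User Operator then creates the my-user secret
```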
Generate truststore and keystore for Java Client
Use the oc CLI to get the URL of the external load balancer, the cluster CA certificate, and the user keystore information needed to generate the truststore and keystore for the Java client.
- Get the URL of the external load balancer: oc get service <kafka cluster name>-kafka-<TLS listener name>-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'
- Get the current certificate for the cluster CA: oc get secret <kafka cluster name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
- Get the PKCS #12 archive file storing the user's certificate and key: oc get secret my-user -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
- Get the password protecting the PKCS #12 archive file: oc get secret my-user -o jsonpath='{.data.user\.password}' | base64 -d > user.password
- Create the truststore containing the cluster CA certificate (welcome1 is an example password): keytool -keystore truststore.p12 -storepass welcome1 -noprompt -alias ca -import -file ca.crt -storetype PKCS12
- Create the keystore from the user's PKCS #12 archive: keytool -importkeystore -destkeystore keystore.p12 -srckeystore user.p12 -srcstorepass $(cat user.password) -srcstoretype PKCS12 -deststoretype PKCS12 -destkeypass welcome1 -deststorepass welcome1
Alternatively, refer to 1.generateCert.sh in https://github.com/likhia/tlsclientkafka.git.
Create Java Client for Producer and Consumer
The following libraries are required to compile and run the Producer and Consumer classes. The library versions depend on the Apache Kafka version that the Red Hat AMQ Streams Operator is based on.
- kafka-clients-3.0.0.jar
- lz4-java-1.7.1.jar
- snappy-java-1.1.8.1.jar
- zstd-jni-1.5.0-2.jar
- slf4j-api-1.7.30.jar
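If the client is built with Maven rather than by placing the jars on the classpath manually, a single dependency matching the list above might look like the sketch below; kafka-clients 3.0.0 pulls in lz4-java, snappy-java, zstd-jni, and slf4j-api transitively.

```xml
<!-- Sketch of an equivalent Maven dependency for the jar list above. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>3.0.0</version>
</dependency>
```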
Include the following properties in the Producer and Consumer classes. Below is an example for the Consumer class. The URL of the external load balancer is passed in as an argument (args[0]).
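A minimal sketch of the TLS-related client properties follows. It assumes the truststore.p12 and keystore.p12 files generated above sit in the working directory, the welcome1 password from the keytool commands, the default external listener port 9094, and an example group id; the class and method names are illustrative, not from the sample repository.

```java
import java.util.Properties;

// Sketch: builds the consumer properties for mutual TLS against the
// AMQ Streams external listener (assumed port 9094).
public class TlsClientProps {

    // bootstrapHost is the external load balancer hostname (args[0]).
    public static Properties buildConsumerProps(String bootstrapHost) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", bootstrapHost + ":9094");
        props.setProperty("group.id", "my-group"); // example group id
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // TLS client authentication using the stores generated earlier.
        props.setProperty("security.protocol", "SSL");
        props.setProperty("ssl.truststore.location", "truststore.p12");
        props.setProperty("ssl.truststore.type", "PKCS12");
        props.setProperty("ssl.truststore.password", "welcome1"); // from keytool step
        props.setProperty("ssl.keystore.location", "keystore.p12");
        props.setProperty("ssl.keystore.type", "PKCS12");
        props.setProperty("ssl.keystore.password", "welcome1");   // from keytool step
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConsumerProps(args[0]);
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```

The same SSL properties apply to the Producer class; only the serializer/deserializer and group-related settings differ.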
Please refer to https://github.com/likhia/tlsclientkafka.git for the sample code.