Client Authentication with AMQ Streams Operator #2

Li Khia
3 min read · Mar 22, 2022

The User Operator in Red Hat AMQ Streams supports the following authentication mechanisms:

  1. TLS Client Authentication
  2. SCRAM-SHA-512 Authentication

This blog covers configuring SCRAM-SHA-512 authentication, generating a truststore from the cluster CA certificate in OpenShift, retrieving the user's sasl.jaas.config from OpenShift, and finally configuring the Java consumer and producer clients with the generated truststore and sasl.jaas.config.

Please note that this blog is not meant to introduce OpenShift itself, so the OpenShift components used here are not explained in detail.

Prerequisites

The AMQ Streams Operator must already be installed in the OpenShift cluster. The latest release, version 2.0.0, is used in this blog. Red Hat AMQ Streams Operator 2.0.0 is based on Apache Kafka 3.0.0.

To install the Red Hat AMQ Streams Operator, please refer to [1].

Configure Kafka Cluster with Topic and User

  • Create a new Kafka cluster with the configuration circled in red below, which enables authentication and authorization. If these are not configured, you will get the “Authorization needs to be enabled in the Kafka custom resource” error when creating the Kafka user below. Please refer to Support Case #5440781 (https://access.redhat.com/solutions/5440781) for more details about this error.
  • Once the Kafka cluster is created, a service named <kafka cluster name>-kafka-<TLS listener name>-bootstrap is created automatically. Based on the configuration above, the TLS listener name is tls. This service exposes the external load balancer used to connect to the Kafka cluster.
  • Create a topic in this Kafka cluster.
  • Create a user in this Kafka cluster. When the user is created by the User Operator, a new secret with the same name as the KafkaUser resource is created. With TLS client authentication the secret would contain a public and private key; with SCRAM-SHA-512 it contains the generated password and the sasl.jaas.config.
  • In the OpenShift Console, navigate to Workloads -> Secrets. Open user2 and switch to the YAML tab. sasl.jaas.config contains the username and password that will be used by the Java client for authentication.
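In case the screenshots do not render, the custom resources created above look roughly like the following sketch. The names my-cluster and my-topic, the replica counts, and the ephemeral storage are illustrative assumptions; only the scram-sha-512 listener authentication, the simple authorization block, and the user name user2 are taken from the steps above:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.0.0
    replicas: 3
    listeners:
      # External TLS listener; clients authenticate with SCRAM-SHA-512
      - name: tls
        port: 9094
        type: loadbalancer
        tls: true
        authentication:
          type: scram-sha-512
    # Required, or creating the Kafka user fails with the
    # "Authorization needs to be enabled" error mentioned above
    authorization:
      type: simple
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: user2
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
```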

Generate truststore for Java Client

Use the oc CLI to get the URL of the external load balancer, the cluster CA certificate, and the user credentials needed to generate the truststore for the Java client. (With SCRAM-SHA-512 authentication only a truststore is needed; a client keystore is only required for TLS client authentication.)

  1. Get the URL of the external load balancer: oc get service <kafka cluster name>-kafka-<TLS listener name>-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'
  2. Get the current certificate for the cluster CA: oc get secret <kafka cluster name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
  3. Get the sasl.jaas.config for the user: oc get secret $USER -o jsonpath='{.data.sasl\.jaas\.config}' | base64 -d > user.auth
  4. Import the CA certificate into a PKCS12 truststore: keytool -keystore truststore.p12 -storepass welcome1 -noprompt -alias ca -import -file ca.crt -storetype PKCS12

Alternatively, you can refer to 1.generateCert.sh in https://github.com/likhia/scramclientkafka.git.

Create Java Client for Producer and Consumer

The following libraries are required to compile and run the Producer and Consumer classes. The library versions depend on the Apache Kafka version that the Red Hat AMQ Streams Operator is based on.

  • kafka-clients-3.0.0.jar
  • lz4-java-1.7.1.jar
  • snappy-java-1.1.8.1.jar
  • zstd-jni-1.5.0-2.jar
  • slf4j-api-1.7.30.jar

Include the following properties in the Producer and Consumer classes. Below is an example for the Consumer class.

The URL of the external load balancer is passed in as the 1st argument (args[0]).

The sasl.jaas.config is passed in as the 2nd argument (args[1]).
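In case the screenshot of the Consumer class does not render, here is a minimal sketch of the properties it sets, assuming the truststore.p12 and welcome1 password from step 4 above. The class and method names are illustrative, and plain string property keys are used so the snippet compiles without the Kafka jars on the classpath:

```java
import java.util.Properties;

public class ConsumerConfigExample {

    // Builds the configuration for a SCRAM-SHA-512 consumer.
    // bootstrap is the external load balancer URL (args[0]);
    // jaasConfig is the content of user.auth (args[1]).
    static Properties buildConsumerProps(String bootstrap, String jaasConfig) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        // TLS to the broker plus SASL for client authentication
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config", jaasConfig);
        // Truststore generated in step 4 (path and password are the
        // ones used with keytool; adjust to your environment)
        props.put("ssl.truststore.location", "truststore.p12");
        props.put("ssl.truststore.password", "welcome1");
        props.put("ssl.truststore.type", "PKCS12");
        // Ordinary consumer settings (group id is an example)
        props.put("group.id", "demo-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        String bootstrap = args.length > 0 ? args[0] : "localhost:9094";
        String jaas = args.length > 1 ? args[1] : "";
        Properties props = buildConsumerProps(bootstrap, jaas);
        // The Properties object would then be passed to new KafkaConsumer<>(props)
        System.out.println(props.getProperty("security.protocol"));
    }
}
```

A Producer would reuse the same security properties, swapping the deserializers for key.serializer and value.serializer.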

Please refer to https://github.com/likhia/scramclientkafka.git for the sample code.

Please refer to the links below.

[1]: https://access.redhat.com/documentation/en-us/red_hat_amq/2021.q3/html-single/deploying_and_upgrading_amq_streams_on_openshift/index#proc-deploying-cluster-operator-hub-str
