diff --git a/docs/tutorials/clustered-deployment-sasl.rst b/docs/tutorials/clustered-deployment-sasl.rst
new file mode 100644
index 00000000..51b42dd2
--- /dev/null
+++ b/docs/tutorials/clustered-deployment-sasl.rst
@@ -0,0 +1,545 @@

.. _clustered_deployment_sasl:

Clustered Deployment Using SASL and SSL
----------------------------------------

In this section, we provide a tutorial for running a secure three-node Kafka cluster and Zookeeper ensemble with SASL. By the end of this tutorial, you will have successfully installed and run a simple deployment with security enabled on Docker. If you're looking for a simpler tutorial, please `refer to our quickstart guide `_, which is limited to a single-node Kafka cluster.

.. note::

   It is worth noting that we will be configuring Kafka and Zookeeper to store data locally in the Docker containers. For production deployments (or generally whenever you care about not losing data), you should use mounted volumes for persisting data in the event that a container stops running or is restarted. This is important when running a system like Kafka on Docker, as it relies heavily on the filesystem for storing and caching messages. Refer to our `documentation on Docker external volumes `_ for an example of how to add mounted volumes to the host machine.

Installing & Running Docker
~~~~~~~~~~~~~~~~~~~~~~~~~~~

For this tutorial, we'll run Docker using the Docker client. If you are interested in information on using Docker Compose to run the images, `skip to the bottom of this guide `_.

To get started, you'll need to first `install Docker and get it running `_. The CP Docker Images require Docker version 1.11 or greater.

Docker Client: Setting Up a Three Node Kafka Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you're running on Windows or Mac OS X, you'll need to use `Docker Machine `_ to start the Docker host. Docker runs natively on Linux, so the Docker host will be your local machine if you go that route. If you are running on Mac or Windows, be sure to allocate at least 4 GB of RAM to the Docker Machine.

Now that we have all of the Docker dependencies installed, we can create a Docker machine and begin starting up Confluent Platform.

.. note::

   In the following steps we'll be running each Docker container in detached mode. However, we'll also demonstrate how to access the logs for a running container. If you prefer to run the containers in the foreground, you can do so by replacing the ``-d`` flags with ``-it``.

1. Create and configure the Docker machine. If you are running a docker-machine VM in the cloud like AWS, then you will need to SSH into the VM and run these commands. You may need to run them as root.

   .. sourcecode:: bash

      docker-machine create --driver virtualbox --virtualbox-memory 6000 confluent

   Next, configure your terminal window to attach it to your new Docker Machine:

   .. sourcecode:: bash

      eval $(docker-machine env confluent)

2. Clone the git repository:

   .. sourcecode:: bash

      git clone https://github.com/confluentinc/cp-docker-images
      cd cp-docker-images

3. Generate Credentials

   You will need to generate CA certificates (or use your own if you already have one) and then generate a keystore and truststore for brokers and clients. You can use the ``create-certs.sh`` script in ``examples/kafka-cluster-sasl/secrets`` to generate them.
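   If you're curious what the script does, the following is a condensed sketch of the usual flow: create a throwaway CA, then a CA-signed keystore plus a truststore for each broker and client. The filenames and the ``confluent`` password follow the conventions used by the secrets in this repository, but treat the exact flags as illustrative rather than a drop-in replacement for the script.

   .. sourcecode:: bash

      # Create a self-signed CA (certificate + encrypted private key)
      openssl req -new -x509 -keyout snakeoil-ca-1.key -out snakeoil-ca-1.crt -days 365 \
        -subj '/CN=ca1.test.confluent.io/OU=TEST/O=CONFLUENT/L=PaloAlto/C=US' -passout pass:confluent

      # For each broker/client: generate a key pair in a keystore, create a CSR,
      # sign it with the CA, then import the CA cert and the signed cert
      keytool -genkey -noprompt -alias broker1 \
        -dname "CN=quickstart.confluent.io,OU=TEST,O=CONFLUENT,L=PaloAlto,C=US" \
        -keystore kafka.broker1.keystore.jks -keyalg RSA -storepass confluent -keypass confluent
      keytool -keystore kafka.broker1.keystore.jks -alias broker1 -certreq -file broker1.csr -storepass confluent
      openssl x509 -req -CA snakeoil-ca-1.crt -CAkey snakeoil-ca-1.key -in broker1.csr \
        -out broker1-ca1-signed.crt -days 365 -CAcreateserial -passin pass:confluent
      keytool -keystore kafka.broker1.keystore.jks -alias CARoot -import -file snakeoil-ca-1.crt -storepass confluent
      keytool -keystore kafka.broker1.keystore.jks -alias broker1 -import -file broker1-ca1-signed.crt -storepass confluent

      # A truststore that trusts anything signed by the CA
      keytool -keystore kafka.broker1.truststore.jks -alias CARoot -import -file snakeoil-ca-1.crt -storepass confluent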
   For production, please use these scripts for generating certificates: https://github.com/confluentinc/confluent-platform-security-tools

   For this example, we will use the ``create-certs.sh`` available in the ``examples/kafka-cluster-sasl/secrets`` directory in cp-docker-images. See the "security" section for more details on security. Make sure that you have OpenSSL and the JDK installed.

   .. sourcecode:: bash

      cd $(pwd)/examples/kafka-cluster-sasl/secrets
      ./create-certs.sh
      (Type yes for all "Trust this certificate? [no]:" prompts.)
      cd -

   Set the environment variable for the secrets directory. We will use this later in our commands. Make sure you are in the ``cp-docker-images`` directory.

   .. sourcecode:: bash

      export KAFKA_SASL_SECRETS_DIR=$(pwd)/examples/kafka-cluster-sasl/secrets

4. Build and run the Kerberos image.

   ::

      cd test/images/kerberos
      docker build -t confluentinc/cp-kerberos:3.0.1 .

      docker run -d \
        --name=kerberos \
        --net=host \
        -v ${KAFKA_SASL_SECRETS_DIR}:/tmp/keytab \
        -v /dev/urandom:/dev/random \
        confluentinc/cp-kerberos:3.0.1

5. Create the principals and keytabs.

   i. To configure SASL, all your nodes will need to have a proper hostname. It is not advisable to use ``localhost`` as the hostname.

      We will now create an entry in ``/etc/hosts`` with hostname ``quickstart.confluent.io`` that points to the ``eth0`` IP.

      ::

         export ETH0_IP=$(ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')

         echo ${ETH0_IP} quickstart.confluent.io >> /etc/hosts

   ii. Now, let's create all the principals and their keytabs on Kerberos.

      ::

         for principal in zookeeper1 zookeeper2 zookeeper3
         do
           docker exec -it kerberos kadmin.local -q "addprinc -randkey zookeeper/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker exec -it kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab zookeeper/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done

      ::

         for principal in zkclient1 zkclient2 zkclient3
         do
           docker exec -it kerberos kadmin.local -q "addprinc -randkey zkclient/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker exec -it kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab zkclient/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done

      For Kafka brokers, the principal should be called ``kafka``.

      ::

         for principal in broker1 broker2 broker3
         do
           docker exec -it kerberos kadmin.local -q "addprinc -randkey kafka/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker exec -it kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab kafka/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done

      ::

         for principal in saslproducer saslconsumer
         do
           docker exec -it kerberos kadmin.local -q "addprinc -randkey ${principal}/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker exec -it kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab ${principal}/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done
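      Before wiring these keytabs into Zookeeper and Kafka, it is worth a quick sanity check that the principals and keytabs actually exist. ``listprincs`` is a standard ``kadmin.local`` query; the ``klist`` call assumes that tool is present in the Kerberos image:

      ::

         # List every principal registered with the KDC
         docker exec -it kerberos kadmin.local -q "listprincs"

         # Show the entries stored in one of the generated keytabs
         docker exec -it kerberos klist -k /tmp/keytab/broker1.keytab

6. Run a 3-node Zookeeper ensemble with SASL enabled.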
   .. sourcecode:: bash

      docker run -d \
        --net=host \
        --name=zk-sasl-1 \
        -e ZOOKEEPER_SERVER_ID=1 \
        -e ZOOKEEPER_CLIENT_PORT=22181 \
        -e ZOOKEEPER_TICK_TIME=2000 \
        -e ZOOKEEPER_INIT_LIMIT=5 \
        -e ZOOKEEPER_SYNC_LIMIT=2 \
        -e ZOOKEEPER_SERVERS="quickstart.confluent.io:22888:23888;quickstart.confluent.io:32888:33888;quickstart.confluent.io:42888:43888" \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_1_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dsun.security.krb5.debug=true" \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        confluentinc/cp-zookeeper:3.0.1

   .. sourcecode:: bash

      docker run -d \
        --net=host \
        --name=zk-sasl-2 \
        -e ZOOKEEPER_SERVER_ID=2 \
        -e ZOOKEEPER_CLIENT_PORT=32181 \
        -e ZOOKEEPER_TICK_TIME=2000 \
        -e ZOOKEEPER_INIT_LIMIT=5 \
        -e ZOOKEEPER_SYNC_LIMIT=2 \
        -e ZOOKEEPER_SERVERS="quickstart.confluent.io:22888:23888;quickstart.confluent.io:32888:33888;quickstart.confluent.io:42888:43888" \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_2_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dsun.security.krb5.debug=true" \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        confluentinc/cp-zookeeper:3.0.1

   .. sourcecode:: bash

      docker run -d \
        --net=host \
        --name=zk-sasl-3 \
        -e ZOOKEEPER_SERVER_ID=3 \
        -e ZOOKEEPER_CLIENT_PORT=42181 \
        -e ZOOKEEPER_TICK_TIME=2000 \
        -e ZOOKEEPER_INIT_LIMIT=5 \
        -e ZOOKEEPER_SYNC_LIMIT=2 \
        -e ZOOKEEPER_SERVERS="quickstart.confluent.io:22888:23888;quickstart.confluent.io:32888:33888;quickstart.confluent.io:42888:43888" \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_3_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider -Dsun.security.krb5.debug=true" \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        confluentinc/cp-zookeeper:3.0.1

   Check the logs to see that the Zookeeper server has booted up successfully:

   .. sourcecode:: bash

      docker logs zk-sasl-1

   You should see messages like this at the end of the log output:

   .. sourcecode:: bash

      [2016-07-24 07:17:50,960] INFO Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
      [2016-07-24 07:17:50,961] INFO FOLLOWING - LEADER ELECTION TOOK - 21823 (org.apache.zookeeper.server.quorum.Learner)
      [2016-07-24 07:17:50,983] INFO Getting a diff from the leader 0x0 (org.apache.zookeeper.server.quorum.Learner)
      [2016-07-24 07:17:50,986] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
      [2016-07-24 07:17:52,803] INFO Received connection request /127.0.0.1:50056 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
      [2016-07-24 07:17:52,806] INFO Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) FOLLOWING (my state) (org.apache.zookeeper.server.quorum.FastLeaderElection)

   You can repeat the command for the two other Zookeeper nodes.
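   If a node fails to start or logs SASL errors, a common culprit is the secrets mount: the ``KAFKA_OPTS`` above point at JAAS and krb5 files inside the container, so first confirm they landed where expected. For example:

   .. sourcecode:: bash

      # Verify the mounted secrets are visible inside the container
      docker exec zk-sasl-1 ls /etc/kafka/secrets
      docker exec zk-sasl-1 cat /etc/kafka/secrets/zookeeper_1_jaas.conf

   Next, you should verify that the ZK ensemble is ready: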
   .. sourcecode:: bash

      for i in 22181 32181 42181; do
        docker run --net=host --rm confluentinc/cp-zookeeper:3.0.1 bash -c "echo stat | nc quickstart.confluent.io $i | grep Mode"
      done

   You should see one ``leader`` and two ``follower`` instances.

   .. sourcecode:: bash

      Mode: follower
      Mode: leader
      Mode: follower

7. Now that Zookeeper is up and running, we can fire up a three-node Kafka cluster.

   .. sourcecode:: bash

      docker run -d \
        --net=host \
        --name=kafka-sasl-1 \
        -e KAFKA_ZOOKEEPER_CONNECT="quickstart.confluent.io:22181,quickstart.confluent.io:32181,quickstart.confluent.io:42181" \
        -e KAFKA_ADVERTISED_LISTENERS=SASL_SSL://quickstart.confluent.io:29094 \
        -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker1.keystore.jks \
        -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker1_keystore_creds \
        -e KAFKA_SSL_KEY_CREDENTIALS=broker1_sslkey_creds \
        -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker1.truststore.jks \
        -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker1_truststore_creds \
        -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_SSL \
        -e KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=GSSAPI \
        -e KAFKA_SASL_ENABLED_MECHANISMS=GSSAPI \
        -e KAFKA_SASL_KERBEROS_SERVICE_NAME=kafka \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/broker1_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf -Dsun.security.krb5.debug=true" \
        confluentinc/cp-kafka:3.0.1

   .. sourcecode:: bash

      docker run -d \
        --net=host \
        --name=kafka-sasl-2 \
        -e KAFKA_ZOOKEEPER_CONNECT=quickstart.confluent.io:22181,quickstart.confluent.io:32181,quickstart.confluent.io:42181 \
        -e KAFKA_ADVERTISED_LISTENERS=SASL_SSL://quickstart.confluent.io:39094 \
        -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker2.keystore.jks \
        -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker2_keystore_creds \
        -e KAFKA_SSL_KEY_CREDENTIALS=broker2_sslkey_creds \
        -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker2.truststore.jks \
        -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker2_truststore_creds \
        -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_SSL \
        -e KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=GSSAPI \
        -e KAFKA_SASL_ENABLED_MECHANISMS=GSSAPI \
        -e KAFKA_SASL_KERBEROS_SERVICE_NAME=kafka \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/broker2_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf -Dsun.security.krb5.debug=true" \
        confluentinc/cp-kafka:3.0.1
   .. sourcecode:: bash

      docker run -d \
        --net=host \
        --name=kafka-sasl-3 \
        -e KAFKA_ZOOKEEPER_CONNECT=quickstart.confluent.io:22181,quickstart.confluent.io:32181,quickstart.confluent.io:42181 \
        -e KAFKA_ADVERTISED_LISTENERS=SASL_SSL://quickstart.confluent.io:49094 \
        -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker3.keystore.jks \
        -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker3_keystore_creds \
        -e KAFKA_SSL_KEY_CREDENTIALS=broker3_sslkey_creds \
        -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker3.truststore.jks \
        -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker3_truststore_creds \
        -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_SSL \
        -e KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=GSSAPI \
        -e KAFKA_SASL_ENABLED_MECHANISMS=GSSAPI \
        -e KAFKA_SASL_KERBEROS_SERVICE_NAME=kafka \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/broker3_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf -Dsun.security.krb5.debug=true" \
        confluentinc/cp-kafka:3.0.1

   Check the logs to see that the brokers have booted up successfully:

   .. sourcecode:: bash

      docker logs kafka-sasl-1
      docker logs kafka-sasl-2
      docker logs kafka-sasl-3

   You should start to see bootup messages. For example, ``docker logs kafka-sasl-3 | grep started`` should show the following:

   .. sourcecode:: bash

      [2016-07-24 07:29:20,258] INFO [Kafka Server 1003], started (kafka.server.KafkaServer)
      [2016-07-24 07:29:20,258] INFO [Kafka Server 1003], started (kafka.server.KafkaServer)

   You should see messages like the following on the broker acting as controller:

   .. sourcecode:: bash

      [2016-07-24 07:29:20,283] TRACE Controller 1001 epoch 1 received response {error_code=0} for a request sent to broker localhost:29092 (id: 1001 rack: null) (state.change.logger)
      [2016-07-24 07:29:20,283] TRACE Controller 1001 epoch 1 received response {error_code=0} for a request sent to broker localhost:29092 (id: 1001 rack: null) (state.change.logger)
      [2016-07-24 07:29:20,286] INFO [Controller-1001-to-broker-1003-send-thread], Starting (kafka.controller.RequestSendThread)
      [2016-07-24 07:29:20,286] INFO [Controller-1001-to-broker-1003-send-thread], Starting (kafka.controller.RequestSendThread)
      [2016-07-24 07:29:20,286] INFO [Controller-1001-to-broker-1003-send-thread], Starting (kafka.controller.RequestSendThread)
      [2016-07-24 07:29:20,287] INFO [Controller-1001-to-broker-1003-send-thread], Controller 1001 connected to localhost:49092 (id: 1003 rack: null) for sending state change requests (kafka.controller.RequestSendThread)

8. Test that the broker is working as expected.

   Now that the brokers are up, we'll test that they're working as expected by creating a topic.

   .. sourcecode:: bash

      docker run \
        --net=host \
        --rm \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/broker1_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf" \
        confluentinc/cp-kafka:3.0.1 \
        kafka-topics --create --topic bar --partitions 3 --replication-factor 3 --if-not-exists --zookeeper quickstart.confluent.io:32181

   You should see the following output:

   .. sourcecode:: bash

      Created topic "bar".
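   As an additional check, you can list every topic the cluster knows about with the same client settings (``--list`` is a standard ``kafka-topics`` flag); this reuses the JAAS and krb5 files from above:

   .. sourcecode:: bash

      docker run \
        --net=host \
        --rm \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/broker1_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf" \
        confluentinc/cp-kafka:3.0.1 \
        kafka-topics --list --zookeeper quickstart.confluent.io:32181

   Now verify that the topic is created successfully by describing the topic.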
   .. sourcecode:: bash

      docker run \
        --net=host \
        --rm \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/broker3_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf" \
        confluentinc/cp-kafka:3.0.1 \
        kafka-topics --describe --topic bar --zookeeper quickstart.confluent.io:32181

   You should see the following message in your terminal window:

   .. sourcecode:: bash

      Topic:bar   PartitionCount:3   ReplicationFactor:3   Configs:
      Topic: bar  Partition: 0  Leader: 1003  Replicas: 1003,1002,1001  Isr: 1003,1002,1001
      Topic: bar  Partition: 1  Leader: 1001  Replicas: 1001,1003,1002  Isr: 1001,1003,1002
      Topic: bar  Partition: 2  Leader: 1002  Replicas: 1002,1001,1003  Isr: 1002,1001,1003

   Next, we'll try producing some data to the ``bar`` topic we just created.

   .. sourcecode:: bash

      docker run \
        --net=host \
        --rm \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/producer_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf" \
        confluentinc/cp-kafka:3.0.1 \
        bash -c "seq 42 | kafka-console-producer --broker-list quickstart.confluent.io:29094 --topic bar --producer.config /etc/kafka/secrets/host.producer.ssl.sasl.config && echo 'Produced 42 messages.'"

   The command above will pass 42 integers using the Console Producer that is shipped with Kafka. As a result, you should see something like this in your terminal:

   .. sourcecode:: bash

      Produced 42 messages.

   It looks like things were successfully written, but let's try reading the messages back using the Console Consumer and make sure they're all accounted for.

   .. sourcecode:: bash

      docker run \
        --net=host \
        --rm \
        -v ${KAFKA_SASL_SECRETS_DIR}:/etc/kafka/secrets \
        -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/consumer_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf" \
        confluentinc/cp-kafka:3.0.1 \
        kafka-console-consumer --bootstrap-server quickstart.confluent.io:29094 --topic bar --new-consumer --from-beginning --max-messages 42 --consumer.config /etc/kafka/secrets/host.consumer.ssl.sasl.config

   You should see the following (it might take some time for this command to return data; Kafka has to create the ``__consumer_offsets`` topic behind the scenes when you consume data for the first time):

   .. sourcecode:: bash

      1
      4
      7
      10
      13
      16
      ....
      41
      Processed a total of 42 messages
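   That completes the Docker-client walkthrough. Because the Docker Compose deployment below binds the same host ports, you may want to remove the containers started above before continuing. One way to do that, using the container names from this guide:

   .. sourcecode:: bash

      # Force-remove the tutorial containers (this discards their local data)
      docker rm -f zk-sasl-1 zk-sasl-2 zk-sasl-3 kafka-sasl-1 kafka-sasl-2 kafka-sasl-3 kerberos

.. _clustered_quickstart_compose_sasl:

Docker Compose: Setting Up a Three Node CP Cluster with SASL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before you get started, you will first need to install `Docker `_ and `Docker Compose `_. Once you've done that, you can follow the steps below to start up the Confluent Platform services.

1. Clone the CP Docker Images Github Repository.

   .. sourcecode:: bash

      git clone https://github.com/confluentinc/cp-docker-images
      cd cp-docker-images/examples/kafka-cluster-sasl

   Follow step 3 on generating credentials in the "Docker Client" section above to create the SSL credentials.

   Set the environment variable for the secrets directory. This is used in the compose file.

   .. sourcecode:: bash

      export KAFKA_SASL_SECRETS_DIR=$(pwd)/secrets

2. Start Kerberos

   .. sourcecode:: bash

      docker-compose create kerberos
      docker-compose start kerberos

3. Create keytabs and principals.

   i. Follow step 5.i above to make sure ``quickstart.confluent.io`` is resolvable.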
   ii. Now, let's create all the principals and their keytabs on Kerberos.

      ::

         for principal in zookeeper1 zookeeper2 zookeeper3
         do
           docker-compose exec kerberos kadmin.local -q "addprinc -randkey zookeeper/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker-compose exec kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab zookeeper/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done

      ::

         for principal in zkclient1 zkclient2 zkclient3
         do
           docker-compose exec kerberos kadmin.local -q "addprinc -randkey zkclient/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker-compose exec kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab zkclient/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done

      For Kafka brokers, the principal should be called ``kafka``.

      ::

         for principal in broker1 broker2 broker3
         do
           docker-compose exec kerberos kadmin.local -q "addprinc -randkey kafka/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker-compose exec kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab kafka/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done

      ::

         for principal in saslproducer saslconsumer
         do
           docker-compose exec kerberos kadmin.local -q "addprinc -randkey ${principal}/quickstart.confluent.io@TEST.CONFLUENT.IO"
           docker-compose exec kerberos kadmin.local -q "ktadd -norandkey -k /tmp/keytab/${principal}.keytab ${principal}/quickstart.confluent.io@TEST.CONFLUENT.IO"
         done

4. Start Zookeeper and Kafka

   .. sourcecode:: bash

      docker-compose create
      docker-compose start

   Before we move on, let's make sure the services are up and running:

   .. sourcecode:: bash

      docker-compose ps

   You should see the following:

   .. sourcecode:: bash

                     Name                            Command           State   Ports
      -------------------------------------------------------------------------------
      kafkaclustersasl_kafka-sasl-1_1       /etc/confluent/docker/run   Up
      kafkaclustersasl_kafka-sasl-2_1       /etc/confluent/docker/run   Up
      kafkaclustersasl_kafka-sasl-3_1       /etc/confluent/docker/run   Up
      kafkaclustersasl_kerberos_1           /config.sh                  Up
      kafkaclustersasl_zookeeper-sasl-1_1   /etc/confluent/docker/run   Up
      kafkaclustersasl_zookeeper-sasl-2_1   /etc/confluent/docker/run   Up
      kafkaclustersasl_zookeeper-sasl-3_1   /etc/confluent/docker/run   Up

   Check the Zookeeper logs to verify that Zookeeper is healthy. For example, for the service ``zookeeper-sasl-1``:

   .. sourcecode:: bash

      docker-compose logs zookeeper-sasl-1

   You should see messages like the following:

   .. sourcecode:: bash

      zookeeper-sasl-1_1 | [2016-07-25 04:58:12,901] INFO Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
      zookeeper-sasl-1_1 | [2016-07-25 04:58:12,902] INFO FOLLOWING - LEADER ELECTION TOOK - 235 (org.apache.zookeeper.server.quorum.Learner)

   Verify that the ZK ensemble is ready:

   .. sourcecode:: bash

      for i in 22181 32181 42181; do
        docker run --net=host --rm confluentinc/cp-zookeeper:3.0.1 bash -c "echo stat | nc quickstart.confluent.io $i | grep Mode"
      done

   You should see one ``leader`` and two ``follower`` instances:

   .. sourcecode:: bash

      Mode: follower
      Mode: leader
      Mode: follower

   Check the logs to see that the brokers have booted up successfully:
   .. sourcecode:: bash

      docker-compose logs kafka-sasl-1
      docker-compose logs kafka-sasl-2
      docker-compose logs kafka-sasl-3

   You should start to see bootup messages. For example, ``docker-compose logs kafka-sasl-3 | grep started`` shows the following:

   .. sourcecode:: bash

      kafka-sasl-3_1 | [2016-07-25 04:58:15,189] INFO [Kafka Server 3], started (kafka.server.KafkaServer)
      kafka-sasl-3_1 | [2016-07-25 04:58:15,189] INFO [Kafka Server 3], started (kafka.server.KafkaServer)

   You should see messages like the following on the broker acting as controller. (Tip: ``docker-compose logs | grep controller`` makes it easy to grep through the logs for all services.)

   .. sourcecode:: bash

      kafka-sasl-1_1 | [2016-09-01 08:48:42,585] INFO [Controller-1-to-broker-2-send-thread], Starting (kafka.controller.RequestSendThread)
      kafka-sasl-2_1 | [2016-09-01 08:48:41,716] INFO [Controller 2]: Controller startup complete (kafka.controller.KafkaController)
      kafka-sasl-1_1 | [2016-09-01 08:48:42,585] INFO [Controller-1-to-broker-2-send-thread], Starting (kafka.controller.RequestSendThread)
      kafka-sasl-2_1 | [2016-09-01 08:48:41,716] INFO [Controller 2]: Controller startup complete (kafka.controller.KafkaController)
      kafka-sasl-2_1 | [2016-09-01 08:48:41,716] INFO [Controller 2]: Controller startup complete (kafka.controller.KafkaController)

5. Follow step 8 in the "Docker Client" section above to test that your brokers are functioning as expected.

6. To stop the cluster, first stop the Kafka nodes one by one and then stop the Zookeeper cluster.

   .. sourcecode:: bash

      docker-compose stop kafka-sasl-1
      docker-compose stop kafka-sasl-2
      docker-compose stop kafka-sasl-3
      docker-compose stop
      docker-compose rm

diff --git a/docs/tutorials/clustered-deployment-ssl.rst b/docs/tutorials/clustered-deployment-ssl.rst
index 3bc6a5fa..4071cc68 100644
--- a/docs/tutorials/clustered-deployment-ssl.rst
+++ b/docs/tutorials/clustered-deployment-ssl.rst
@@ -4,17 +4,16 @@
 Clustered Deployment Using SSL
 -------------------------------

 In this section, we provide a tutorial for running a secure three-node Kafka cluster and Zookeeper ensemble with SSL. By the end of this tutorial, you will have successfully installed and run a simple deployment with security enabled on Docker. If you're looking for a simpler tutorial, please `refer to our quickstart guide `_, which is limited to a single node Kafka cluster.
->>>>>>> Stashed changes:docs/tutorials/clustered-deployment-ssl.rst

 .. note::

-    It is worth noting that we will be configuring Kafka and Zookeeper to store data locally in the Docker containers. For production deployments (or generally whenever you care about not losing data), you should use mounted volumes for persisting data in the event that a container stops running or is restarted. This is important when running a system like Kafka on Docker, as it relies heavily on the filesystem for storing and caching messages. Refer to our `documentation on Docker external volumes `_ for an example of how to add mounted volumes to the host machine.
+    It is worth noting that we will be configuring Kafka and Zookeeper to store data locally in the Docker containers. For production deployments (or generally whenever you care about not losing data), you should use mounted volumes for persisting data in the event that a container stops running or is restarted. This is important when running a system like Kafka on Docker, as it relies heavily on the filesystem for storing and caching messages. Refer to our `documentation on Docker external volumes `_ for an example of how to add mounted volumes to the host machine.
 Installing & Running Docker
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 For this tutorial, we'll run docker using the Docker client. If you are interested in information on using Docker Compose to run the images, `skip to the bottom of this guide `_.

-To get started, you'll need to first `install Docker and get it running `_. The CP Docker Images require Docker version 1.11 or greater.
+To get started, you'll need to first `install Docker and get it running `_. The CP Docker Images require Docker version 1.11 or greater.

 Docker Client: Setting Up a Three Node Kafka Cluster
@@ -26,7 +25,7 @@ Now that we have all of the Docker dependencies installed, we can create a Docke

 .. note::

-    In the following steps we'll be running each Docker container in detached mode. However, we'll also demonstrate how access the logs for a running container. If you prefer to run the containers in the foreground, you can do so by replacing the ``-d`` flags with ``--it``.
+    In the following steps we'll be running each Docker container in detached mode. However, we'll also demonstrate how to access the logs for a running container. If you prefer to run the containers in the foreground, you can do so by replacing the ``-d`` flags with ``-it``.

 1. Create and configure the Docker machine.
@@ -49,15 +48,23 @@ Now that we have all of the Docker dependencies installed, we can create a Docke

 3. Generate Credentials

-   You will need to generate CA certificates (or use yours if you already have one) and then generate keystore and truststore for brokers and clients. You can use this ``create-certs.sh`` in ``examples/secrets`` to generate them. For production, please use these scripts for generating certificates : https://github.com/confluentinc/confluent-platform-security-tools
+   You will need to generate CA certificates (or use your own if you already have one) and then generate a keystore and truststore for brokers and clients. You can use the ``create-certs.sh`` script in ``examples/kafka-cluster-ssl/secrets`` to generate them. For production, please use these scripts for generating certificates: https://github.com/confluentinc/confluent-platform-security-tools

-   For this example, we will use the credentials available in the examples/secrets directory in cp-docker-images. See "security" section for more details on security.
+   For this example, we will use the ``create-certs.sh`` available in the ``examples/kafka-cluster-ssl/secrets`` directory in cp-docker-images. See the "security" section for more details on security. Make sure that you have OpenSSL and the JDK installed.

-   Set the environment variable for secrets directory. We will use this later in our commands.
+   .. sourcecode:: bash
+
+      cd $(pwd)/examples/kafka-cluster-ssl/secrets
+      ./create-certs.sh
+      (Type yes for all "Trust this certificate? [no]:" prompts.)
+      cd -
+
+   Set the environment variable for the secrets directory. We will use this later in our commands. Make sure you are in the ``cp-docker-images`` directory.
+
+   .. sourcecode:: bash

-   .. sourcecode:: bash
+      export KAFKA_SSL_SECRETS_DIR=$(pwd)/examples/kafka-cluster-ssl/secrets

-      export KAFKA_SSL_SECRETS_DIR=$(pwd)/examples/kafka-cluster-ssl/secrets

 4. Start Up a 3-node Zookeeper Ensemble by running the three commands below.
@@ -74,6 +81,8 @@ Now that we have all of the Docker dependencies installed, we can create a Docke -e ZOOKEEPER_SERVERS="localhost:22888:23888;localhost:32888:33888;localhost:42888:43888" \ confluentinc/cp-zookeeper:3.0.1 + .. sourcecode:: bash + docker run -d \ --net=host \ --name=zk-2 \ @@ -85,6 +94,8 @@ Now that we have all of the Docker dependencies installed, we can create a Docke -e ZOOKEEPER_SERVERS="localhost:22888:23888;localhost:32888:33888;localhost:42888:43888" \ confluentinc/cp-zookeeper:3.0.1 + .. sourcecode:: bash + docker run -d \ --net=host \ --name=zk-3 \ @@ -129,51 +140,55 @@ Now that we have all of the Docker dependencies installed, we can create a Docke Mode: leader Mode: follower -4. Now that Zookeeper is up and running, we can fire up a three node Kafka cluster. +4. Now that Zookeeper is up and running, we can fire up a three node Kafka cluster. + + .. sourcecode:: bash - .. sourcecode:: bash + docker run -d \ + --net=host \ + --name=kafka-ssl-1 \ + -e KAFKA_ZOOKEEPER_CONNECT=localhost:22181,localhost:32181,localhost:42181 \ + -e KAFKA_ADVERTISED_LISTENERS=SSL://localhost:29092 \ + -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker1.keystore.jks \ + -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker1_keystore_creds \ + -e KAFKA_SSL_KEY_CREDENTIALS=broker1_sslkey_creds \ + -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker1.truststore.jks \ + -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker1_truststore_creds \ + -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL \ + -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \ + confluentinc/cp-kafka:3.0.1 - docker run -d \ - --net=host \ - --name=kafka-ssl-1 \ - -e KAFKA_ZOOKEEPER_CONNECT=localhost:22181,localhost:32181,localhost:42181 \ - -e KAFKA_ADVERTISED_LISTENERS=SSL://localhost:29092 \ - -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker1.keystore.jks \ - -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker1_keystore_creds \ - -e KAFKA_SSL_KEY_CREDENTIALS=broker1_sslkey_creds \ - -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker1.truststore.jks \ - -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker1_truststore_creds \ - -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL \ - -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \ - confluentinc/cp-kafka:3.0.1 + .. 
sourcecode:: bash - docker run -d \ - --net=host \ - --name=kafka-ssl-2 \ - -e KAFKA_ZOOKEEPER_CONNECT=localhost:22181,localhost:32181,localhost:42181 \ - -e KAFKA_ADVERTISED_LISTENERS=SSL://localhost:39092 \ - -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker2.keystore.jks \ - -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker2_keystore_creds \ - -e KAFKA_SSL_KEY_CREDENTIALS=broker2_sslkey_creds \ - -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker2.truststore.jks \ - -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker2_truststore_creds \ - -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL \ - -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \ - confluentinc/cp-kafka:3.0.1 + docker run -d \ + --net=host \ + --name=kafka-ssl-2 \ + -e KAFKA_ZOOKEEPER_CONNECT=localhost:22181,localhost:32181,localhost:42181 \ + -e KAFKA_ADVERTISED_LISTENERS=SSL://localhost:39092 \ + -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker2.keystore.jks \ + -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker2_keystore_creds \ + -e KAFKA_SSL_KEY_CREDENTIALS=broker2_sslkey_creds \ + -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker2.truststore.jks \ + -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker2_truststore_creds \ + -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL \ + -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \ + confluentinc/cp-kafka:3.0.1 - docker run -d \ - --net=host \ - --name=kafka-ssl-3 \ - -e KAFKA_ZOOKEEPER_CONNECT=localhost:22181,localhost:32181,localhost:42181 \ - -e KAFKA_ADVERTISED_LISTENERS=SSL://localhost:49092 \ - -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker3.keystore.jks \ - -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker3_keystore_creds \ - -e KAFKA_SSL_KEY_CREDENTIALS=broker3_sslkey_creds \ - -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker3.truststore.jks \ - -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker3_truststore_creds \ - -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL \ - -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \ - confluentinc/cp-kafka:3.0.1 + .. sourcecode:: bash + + docker run -d \ + --net=host \ + --name=kafka-ssl-3 \ + -e KAFKA_ZOOKEEPER_CONNECT=localhost:22181,localhost:32181,localhost:42181 \ + -e KAFKA_ADVERTISED_LISTENERS=SSL://localhost:49092 \ + -e KAFKA_SSL_KEYSTORE_FILENAME=kafka.broker3.keystore.jks \ + -e KAFKA_SSL_KEYSTORE_CREDENTIALS=broker3_keystore_creds \ + -e KAFKA_SSL_KEY_CREDENTIALS=broker3_sslkey_creds \ + -e KAFKA_SSL_TRUSTSTORE_FILENAME=kafka.broker3.truststore.jks \ + -e KAFKA_SSL_TRUSTSTORE_CREDENTIALS=broker3_truststore_creds \ + -e KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SSL \ + -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \ + confluentinc/cp-kafka:3.0.1 Check the logs to see the broker has booted up successfully: @@ -264,7 +279,7 @@ Now that we have all of the Docker dependencies installed, we can create a Docke --rm \ -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \ confluentinc/cp-kafka:3.0.1 \ - kafka-console-consumer --bootstrap-server localhost:29092 --topic bar --new-consumer --from-beginning --max-messages 42 --consumer.config /etc/kafka/secrets/host.consumer.ssl.config + kafka-console-consumer --bootstrap-server localhost:29092 --topic bar --new-consumer --from-beginning --max-messages 10 --consumer.config /etc/kafka/secrets/host.consumer.ssl.config You should see the following (it might take some time for this command to return data. Kafka has to create the ``__consumers_offset`` topic behind the scenes when you consume data for the first time and this may take some time): @@ -278,7 +293,7 @@ Now that we have all of the Docker dependencies installed, we can create a Docke 16 .... 
41 - Processed a total of 42 messages + Processed a total of 10 messages .. _clustered_quickstart_compose_ssl : @@ -294,6 +309,7 @@ Before you get started, you will first need to install `Docker ${i}_sslkey_creds + echo "confluent" > ${i}_keystore_creds + echo "confluent" > ${i}_truststore_creds +done diff --git a/examples/kafka-cluster-sasl/secrets/host.consumer.ssl.config b/examples/kafka-cluster-sasl/secrets/host.consumer.ssl.config new file mode 100644 index 00000000..72c33e5d --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/host.consumer.ssl.config @@ -0,0 +1,9 @@ +group.id=ssl-host +ssl.truststore.location=/etc/kafka/secrets/kafka.consumer.truststore.jks +ssl.truststore.password=confluent + +ssl.keystore.location=/etc/kafka/secrets/kafka.consumer.keystore.jks +ssl.keystore.password=confluent +ssl.key.password=confluent + +security.protocol=SSL diff --git a/examples/kafka-cluster-sasl/secrets/host.consumer.ssl.sasl.config b/examples/kafka-cluster-sasl/secrets/host.consumer.ssl.sasl.config new file mode 100644 index 00000000..9314b16a --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/host.consumer.ssl.sasl.config @@ -0,0 +1,11 @@ +group.id=ssl-sasl-host +ssl.truststore.location=/etc/kafka/secrets/kafka.consumer.truststore.jks +ssl.truststore.password=confluent + +ssl.keystore.location=/etc/kafka/secrets/kafka.consumer.keystore.jks +ssl.keystore.password=confluent +ssl.key.password=confluent + +security.protocol=SASL_SSL +sasl.mechanism=GSSAPI +sasl.kerberos.service.name=kafka diff --git a/examples/kafka-cluster-sasl/secrets/host.producer.ssl.config b/examples/kafka-cluster-sasl/secrets/host.producer.ssl.config new file mode 100644 index 00000000..533c24f1 --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/host.producer.ssl.config @@ -0,0 +1,8 @@ +ssl.truststore.location=/etc/kafka/secrets/kafka.producer.truststore.jks +ssl.truststore.password=confluent + +ssl.keystore.location=/etc/kafka/secrets/kafka.producer.keystore.jks +ssl.keystore.password=confluent +ssl.key.password=confluent + +security.protocol=SSL diff --git a/examples/kafka-cluster-sasl/secrets/host.producer.ssl.sasl.config b/examples/kafka-cluster-sasl/secrets/host.producer.ssl.sasl.config new file mode 100644 index 00000000..e7bd5b11 --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/host.producer.ssl.sasl.config @@ -0,0 +1,10 @@ +ssl.truststore.location=/etc/kafka/secrets/kafka.producer.truststore.jks +ssl.truststore.password=confluent + +ssl.keystore.location=/etc/kafka/secrets/kafka.producer.keystore.jks +ssl.keystore.password=confluent +ssl.key.password=confluent + +security.protocol=SASL_SSL +sasl.mechanism=GSSAPI +sasl.kerberos.service.name=kafka diff --git a/examples/kafka-cluster-sasl/secrets/krb.conf b/examples/kafka-cluster-sasl/secrets/krb.conf new file mode 100644 index 00000000..c0be4f85 --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/krb.conf @@ -0,0 +1,22 @@ +[logging] + default = FILE:/var/log/kerberos/krb5libs.log + kdc = FILE:/var/log/kerberos/krb5kdc.log + admin_server = FILE:/var/log/kerberos/kadmind.log + +[libdefaults] + default_realm = TEST.CONFLUENT.IO + dns_lookup_realm = false + dns_lookup_kdc = false + ticket_lifetime = 24h + renew_lifetime = 7d + forwardable = true + +[realms] + TEST.CONFLUENT.IO = { + kdc = quickstart.confluent.io + admin_server = quickstart.confluent.io + } + +[domain_realm] + .TEST.CONFLUENT.IO = TEST.CONFLUENT.IO + TEST.CONFLUENT.IO = TEST.CONFLUENT.IO diff --git a/examples/kafka-cluster-sasl/secrets/producer_jaas.conf 
b/examples/kafka-cluster-sasl/secrets/producer_jaas.conf new file mode 100644 index 00000000..24df106f --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/producer_jaas.conf @@ -0,0 +1,7 @@ +KafkaClient { + com.sun.security.auth.module.Krb5LoginModule required + useKeyTab=true + storeKey=true + keyTab="/etc/kafka/secrets/saslproducer.keytab" + principal="saslproducer/quickstart.confluent.io@TEST.CONFLUENT.IO"; +}; diff --git a/examples/kafka-cluster-sasl/secrets/zookeeper_1_jaas.conf b/examples/kafka-cluster-sasl/secrets/zookeeper_1_jaas.conf new file mode 100644 index 00000000..d08af207 --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/zookeeper_1_jaas.conf @@ -0,0 +1,14 @@ +Server { + com.sun.security.auth.module.Krb5LoginModule required + useKeyTab=true + storeKey=true + keyTab="/etc/kafka/secrets/zookeeper1.keytab" + principal="zookeeper/quickstart.confluent.io@TEST.CONFLUENT.IO"; +}; +Client { + com.sun.security.auth.module.Krb5LoginModule required + useKeyTab=true + storeKey=true + keyTab="/etc/kafka/secrets/zkclient1.keytab" + principal="zkclient/quickstart.confluent.io@TEST.CONFLUENT.IO"; +}; diff --git a/examples/kafka-cluster-sasl/secrets/zookeeper_2_jaas.conf b/examples/kafka-cluster-sasl/secrets/zookeeper_2_jaas.conf new file mode 100644 index 00000000..c63b475a --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/zookeeper_2_jaas.conf @@ -0,0 +1,14 @@ +Server { + com.sun.security.auth.module.Krb5LoginModule required + useKeyTab=true + storeKey=true + keyTab="/etc/kafka/secrets/zookeeper2.keytab" + principal="zookeeper/quickstart.confluent.io@TEST.CONFLUENT.IO"; +}; +Client { + com.sun.security.auth.module.Krb5LoginModule required + useKeyTab=true + storeKey=true + keyTab="/etc/kafka/secrets/zkclient2.keytab" + principal="zkclient/quickstart.confluent.io@TEST.CONFLUENT.IO"; +}; diff --git a/examples/kafka-cluster-sasl/secrets/zookeeper_3_jaas.conf b/examples/kafka-cluster-sasl/secrets/zookeeper_3_jaas.conf new file mode 100644 index 00000000..d2b717a5 --- /dev/null +++ b/examples/kafka-cluster-sasl/secrets/zookeeper_3_jaas.conf @@ -0,0 +1,14 @@ +Server { + com.sun.security.auth.module.Krb5LoginModule required + useKeyTab=true + storeKey=true + keyTab="/etc/kafka/secrets/zookeeper3.keytab" + principal="zookeeper/quickstart.confluent.io@TEST.CONFLUENT.IO"; +}; +Client { + com.sun.security.auth.module.Krb5LoginModule required + useKeyTab=true + storeKey=true + keyTab="/etc/kafka/secrets/zkclient3.keytab" + principal="zkclient/quickstart.confluent.io@TEST.CONFLUENT.IO"; +}; diff --git a/examples/kafka-cluster-ssl/secrets/broker1_keystore_creds b/examples/kafka-cluster-ssl/secrets/broker1_keystore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker1_keystore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker1_sslkey_creds b/examples/kafka-cluster-ssl/secrets/broker1_sslkey_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker1_sslkey_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker1_truststore_creds b/examples/kafka-cluster-ssl/secrets/broker1_truststore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker1_truststore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker2_keystore_creds b/examples/kafka-cluster-ssl/secrets/broker2_keystore_creds deleted file mode 100644 index 
23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker2_keystore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker2_sslkey_creds b/examples/kafka-cluster-ssl/secrets/broker2_sslkey_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker2_sslkey_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker2_truststore_creds b/examples/kafka-cluster-ssl/secrets/broker2_truststore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker2_truststore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker3_keystore_creds b/examples/kafka-cluster-ssl/secrets/broker3_keystore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker3_keystore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker3_sslkey_creds b/examples/kafka-cluster-ssl/secrets/broker3_sslkey_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker3_sslkey_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/broker3_truststore_creds b/examples/kafka-cluster-ssl/secrets/broker3_truststore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/broker3_truststore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/consumer_keystore_creds b/examples/kafka-cluster-ssl/secrets/consumer_keystore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/consumer_keystore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/consumer_sslkey_creds b/examples/kafka-cluster-ssl/secrets/consumer_sslkey_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/consumer_sslkey_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/consumer_truststore_creds b/examples/kafka-cluster-ssl/secrets/consumer_truststore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/consumer_truststore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/kafka.broker1.keystore.jks b/examples/kafka-cluster-ssl/secrets/kafka.broker1.keystore.jks deleted file mode 100644 index 168613e0..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.broker1.keystore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.broker1.truststore.jks b/examples/kafka-cluster-ssl/secrets/kafka.broker1.truststore.jks deleted file mode 100644 index a5294ddc..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.broker1.truststore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.broker2.keystore.jks b/examples/kafka-cluster-ssl/secrets/kafka.broker2.keystore.jks deleted file mode 100644 index d5e1222f..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.broker2.keystore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.broker2.truststore.jks b/examples/kafka-cluster-ssl/secrets/kafka.broker2.truststore.jks deleted file mode 100644 index a289d82d..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.broker2.truststore.jks and 
/dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.broker3.keystore.jks b/examples/kafka-cluster-ssl/secrets/kafka.broker3.keystore.jks deleted file mode 100644 index 0e0110f3..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.broker3.keystore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.broker3.truststore.jks b/examples/kafka-cluster-ssl/secrets/kafka.broker3.truststore.jks deleted file mode 100644 index e0c4bb6b..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.broker3.truststore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.consumer.keystore.jks b/examples/kafka-cluster-ssl/secrets/kafka.consumer.keystore.jks deleted file mode 100644 index 3bbcabce..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.consumer.keystore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.consumer.truststore.jks b/examples/kafka-cluster-ssl/secrets/kafka.consumer.truststore.jks deleted file mode 100644 index 01792a31..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.consumer.truststore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.producer.keystore.jks b/examples/kafka-cluster-ssl/secrets/kafka.producer.keystore.jks deleted file mode 100644 index 63cd770c..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.producer.keystore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafka.producer.truststore.jks b/examples/kafka-cluster-ssl/secrets/kafka.producer.truststore.jks deleted file mode 100644 index 31f657ad..00000000 Binary files a/examples/kafka-cluster-ssl/secrets/kafka.producer.truststore.jks and /dev/null differ diff --git a/examples/kafka-cluster-ssl/secrets/kafkacat-ca1-signed.pem b/examples/kafka-cluster-ssl/secrets/kafkacat-ca1-signed.pem deleted file mode 100644 index 1faa841d..00000000 --- a/examples/kafka-cluster-ssl/secrets/kafkacat-ca1-signed.pem +++ /dev/null @@ -1,15 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICQjCCAasCCQDwEiEZn8ndIDANBgkqhkiG9w0BAQUFADBjMR4wHAYDVQQDExVj -YTEudGVzdC5jb25mbHVlbnQuaW8xDTALBgNVBAsTBFRFU1QxEjAQBgNVBAoTCUNP -TkZMVUVOVDERMA8GA1UEBxMIUGFsb0FsdG8xCzAJBgNVBAYTAlVTMB4XDTE2MDgy -NDIxMzEzM1oXDTQ0MDEwOTIxMzEzM1owaDEjMCEGA1UEAxMaa2Fma2FjYXQudGVz -dC5jb25mbHVlbnQuaW8xDTALBgNVBAsTBFRFU1QxEjAQBgNVBAoTCUNPTkZMVUVO -VDERMA8GA1UEBxMIUGFsb0FsdG8xCzAJBgNVBAYTAlVTMIGfMA0GCSqGSIb3DQEB -AQUAA4GNADCBiQKBgQDVhLkQTgfgN1UwggieYMUY0mQY2UBlDGuL6Ym+b3MKMazZ -vkXn4YF+fM3Yt+KfYCwCkPQuLm1elsYy8at0vVQ1OGPWHDwtQw1Qeww4mSYIWMNC -Mc2V+0XkLkjjWG0wpm0vfcPc50DkFt1i4jQvYBNPG6/rKKi+Y2CxPlO0LX/a/wID -AQABMA0GCSqGSIb3DQEBBQUAA4GBAIqsLW/KRHz7gM5uS4+YH1yW5wYkEOys2SbD -YqfBQkX9IOStHg3/OUCBoFSKpWYULjJRwqB087djV9gRtpIlRevkshJTMkPi5WhC -778dnRxdQ0tl7igQ/5LoC+blKQm4WXVeGRhmwlBXW9CdWHF7Dm35q02qbiNEx1BE -HY7EP8M7 ------END CERTIFICATE----- diff --git a/examples/kafka-cluster-ssl/secrets/kafkacat.client.key b/examples/kafka-cluster-ssl/secrets/kafkacat.client.key deleted file mode 100644 index 95fb8dce..00000000 --- a/examples/kafka-cluster-ssl/secrets/kafkacat.client.key +++ /dev/null @@ -1,18 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- -Proc-Type: 4,ENCRYPTED -DEK-Info: DES-EDE3-CBC,C82985E62E1B13D7 - -jaB63dFNqBh2RF6DbML4j8Y0QDzfB8mcTfafUOR2rXQurFhcQqaAaxiGfaBHo92s -uV2spaYjdOAcKdcDMs2DMxY9gGm9arFhDFMLAixqVyiUSRmKBF+CS1+pu/VNB9wz -4aoXlcHYWBLRm9e8EA7vaee9DYgUTbRtZtCm74K1CIqZR4apIKVNbJAI3Mc0YfWh -RdOXpCGYz5BWp9Lgh1TXW8Y3JUymU3LcaHMV3LxnYs+qqo07llumQAKmko0BSCBQ 
-/QUCoblED1y+/lMS3UPBhApKEqYyW/NQtFyhRfr+6YtD92OQdFeMmliVOTgW6KFJ -bfiiHMc99ExpYM9Wc1Rw5q/Pkxvx9doDaznlstcvDX4l1XNImFSr2YfOWxdblHsa -RpDDsvnwDnIsPPzeeiVU4V6X/UrtsC7zwIVKbVGYLfcvjOKIuQknJqDyxLDLsSKE -x4Sz1V2c6+eUXWlHWXNhyjnsyJ0HPV8QNlS7KoBVYJG+bZy8US8y9Zcl6lrm7JMZ -fW2GtD5X7XXEqML8y3ewOQHps7kOtTGgD/t8dmIt54oKpKEp0BOEhqPIvzMxWNjf -Q6pnVh91PanuiGqhgkMqHemxKkZfbWYRy4u9QeS9v2mj85Az1cQ5V12yoERZPjxA -mjYJ607n5d6QJMuFzOJTf0jAjVd6zU5uRGBQwQsn66c+x2jUP+OtokY2MH2PjcDi -bPV3Kqlz4lIoO9QGO2CbRqCHe+kKEB1AmXc1KVL/r737MwVxA4QjvIhsOX3ZluTR -nhpSlENA0nz0/5vgd++FZEPROgWc3ZgZJbnfhoFplSpy2WjcWeOCqQ== ------END RSA PRIVATE KEY----- diff --git a/examples/kafka-cluster-ssl/secrets/producer_keystore_creds b/examples/kafka-cluster-ssl/secrets/producer_keystore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/producer_keystore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/producer_sslkey_creds b/examples/kafka-cluster-ssl/secrets/producer_sslkey_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/producer_sslkey_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/producer_truststore_creds b/examples/kafka-cluster-ssl/secrets/producer_truststore_creds deleted file mode 100644 index 23212273..00000000 --- a/examples/kafka-cluster-ssl/secrets/producer_truststore_creds +++ /dev/null @@ -1 +0,0 @@ -confluent diff --git a/examples/kafka-cluster-ssl/secrets/snakeoil-ca-1.key b/examples/kafka-cluster-ssl/secrets/snakeoil-ca-1.key deleted file mode 100644 index 9fca0516..00000000 --- a/examples/kafka-cluster-ssl/secrets/snakeoil-ca-1.key +++ /dev/null @@ -1,18 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- -Proc-Type: 4,ENCRYPTED -DEK-Info: DES-EDE3-CBC,6578086B898FE22A - -CCvBL5dE5iNb0ueQYt0WtO17hggOCMEMqcvienpWN+VCi+J1ZgGFR4QjGuM+bLG7 -FJ+JvDeZW2iW2KXfWEmvJbmipIgZUquE7Dk+JmGQiJx1eum58qRL7Z49r/SzlOiL -D+ok6MlRktiBSdXYRMZVugY3SzTx0BUFKF8lZlmXFFv/Tqd45mcQn8OlddM7ldZ/ -erD4aMum75CFsz8dY3vA3FMPceCDclYgpiwsbS5c2jOF81gn/dmkd51TPlq6IBlK -D9Hs6zx/hDPNeBzyFvfFjfcUk3IWCECtvneMPrHK5LXU2n5hFEtiJeIixe+60GMo -pgdjTmVPprzJipvbQwOA6ZLKN4yFf1cmEjryP/HcNq1DSIOZE7m2pLwuwcNQLDEq -EmB6dif8biW+nSy33FdafwW6FXwq7HN99v//4aUE4gFrM0NdHz5s8KMAbfacsY5M -NHVxFohVEEqyLo9dFn1q54gZG+oBvYaGGZMC4Q8uZm8Le+EJQDM7FNq/9hg58+XS -yE+WpymPBF6fd+VPnILGwArRhSdbZG40NBlfk/qYDWEIYfzVHAyF4HRUwB0fdSbY -A/SR9J+jU0k1gV9kJ6EBwzPiUZ2wvqVh4Y/v6k5P9h4GKFErfQbnWAEet/FRh0sl -BT01e0Oo0CUl6qH2hY7pxEoKLgdJZrwJTXyQeYcb9hSr+QFH1m5JiPKRt9dSU3P0 -ObV8UdzY+jSB9TM6k0EBwe4chssARFQRLu14ohl7O6p4i6612mRIonS8HVQC+3EO -1Ykcl5DGu8wJq1n3inzO+qtzS1xLVhoX8UkYrYLXPhocS8OzNXTUPw== ------END RSA PRIVATE KEY-----