TLS-enabled communication between two Strimzi Kafka clusters on separate machines using Skupper and MirrorMaker2 #1895

Open
SaifulHasan3000 opened this issue Jan 20, 2025 · 0 comments


Versions used:
Skupper:
client version 1.8.2
transport version quay.io/skupper/skupper-router:2.7.2 (sha256:ef8d44f5c182)
controller version quay.io/skupper/service-controller:1.8.2 (sha256:ed5109ebebc0)
config-sync version quay.io/skupper/config-sync:1.8.2 (sha256:0b5649c55a4e)
flow-collector version not-found

k8s
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.30.6+k3s1

I want to set up TLS-enabled communication between two Strimzi Kafka clusters deployed on two different machines, each inside its own k3d cluster. For now I am trying the non-TLS version first.
I first deployed the Strimzi Kafka operator using its Helm chart (a rough sketch of that install is shown just below), and then deployed the Kafka clusters on my two machines using the files that follow.
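
For reference, the operator install looked roughly like this (the repo URL is the standard Strimzi Helm repository; the release name and namespace are assumptions, the exact invocation may have differed):

helm repo add strimzi https://strimzi.io/charts/
helm repo update
# release name and target namespace assumed
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator -n skupper-demo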

Source.yaml

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: source
spec:
  kafka:
    version: 3.9.0
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.9"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 30Gi
        deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 30Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

Note: Target.yaml is the same as above, with ‘source’ replaced by ‘target’.
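
After applying the manifests on each machine, I waited for the clusters to report Ready, along these lines (the skupper-demo namespace is assumed; adjust if the Kafka CRs live elsewhere):

kubectl apply -f Source.yaml -n skupper-demo   # Target.yaml on machine 2
kubectl wait kafka/source --for=condition=Ready --timeout=300s -n skupper-demo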

Then I established the Skupper connection between the machines, first by running the following on the source cluster's machine:

skupper init --ingress nodeport --ingress-host <IP of k3d node of machine 1>

And similarly on the other machine (the target cluster's machine):

skupper init --ingress nodeport --ingress-host <IP of k3d node of machine 2>

Then I generated a token on the source machine and, using that token, linked the two sites.
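
Concretely, those steps were roughly the following (the token file path is just a placeholder). On machine 1 (the source site):

skupper token create ~/skupper-token.yaml

Then, after copying the token file over to machine 2 (the target site):

skupper link create ~/skupper-token.yaml
skupper link status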

After that, the response of skupper status on both machine 1 and machine 2 is:

Skupper is enabled for namespace "skupper-demo". It is connected to 1 other site. It has no exposed services.

The bootstrap service of each deployed Kafka cluster looks like this:

source-kafka-bootstrap              ClusterIP      10.43.171.227                      tcp-replication:9091►0 tcp-clients:9092►0 tcp-clientstls:9093►0

So I exposed the bootstrap service of the source cluster (on machine 1) through Skupper. Since I wanted to expose all three ports (9091, 9092, 9093), I first created a Skupper service that carries all of them and then bound it to the bootstrap service; this was the only way I knew to do it, and I'm not confident it is correct.
Creating the Skupper service:

skupper service create source-kafka-bootstrap-skupper 9091 9092 9093

Binding the Skupper service to the original bootstrap service:

skupper service bind source-kafka-bootstrap-skupper service source-kafka-bootstrap

After running the above commands, a new service named source-kafka-bootstrap-skupper appears in the skupper-demo namespace of both clusters:

source-kafka-bootstrap-skupper      ClusterIP      10.43.104.55                       port9092:9092►0 port9093:9093►0 port9091:9091►0 
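
For completeness, the exposed service and its binding can also be checked from either site with the Skupper CLI:

skupper service status   # lists the services in the service network and their bound targets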

Then I deployed MirrorMaker 2 in the same namespace as the target cluster (i.e. on machine 2).

Mirrormaker2.yaml:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
  namespace: skupper-demo
spec:
  version: 3.9.0
  replicas: 1
  connectCluster: "target" # Must be the target cluster
  clusters:
  - alias: "source" # Source cluster
    bootstrapServers: source-kafka-bootstrap-skupper:9092
  - alias: "target" # Target cluster
    bootstrapServers: target-kafka-bootstrap:9092
    config:
      # -1 means it will use the default replication factor configured in the broker
      config.storage.replication.factor: 1
      offset.storage.replication.factor: 1
      status.storage.replication.factor: 1
  mirrors:
  - sourceCluster: "source"
    targetCluster: "target"
    sourceConnector:
      config:
        replication.factor: 1
        offset-syncs.topic.replication.factor: 1
        sync.topic.acls.enabled: "false"
        replication.policy.separator: ""
        replication.policy.class: "io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy"
    heartbeatConnector:
      config:
        heartbeats.topic.replication.factor: 1
    checkpointConnector:
      config:
        checkpoints.topic.replication.factor: 1
        replication.policy.separator: ""
        replication.policy.class: "io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy"
    topicsPattern: "topic4"
    groupsPattern: " .* "
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "INFO"
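
I applied this with kubectl apply. For debugging, the MirrorMaker2 status and the pod logs can be pulled with something like the following (the strimzi.io/cluster label selector is assumed from Strimzi's standard labelling):

kubectl apply -f Mirrormaker2.yaml -n skupper-demo
# check .status.conditions and .status.connectors on the custom resource
kubectl get kafkamirrormaker2 my-mirror-maker-2 -n skupper-demo -o yaml
# MirrorMaker 2 pod logs
kubectl logs -l strimzi.io/cluster=my-mirror-maker-2 -n skupper-demo --tail=200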

After completing this setup, I started a producer on the source cluster (on machine 1) with:

kubectl run kafka-producer -ti --image=strimzi/kafka:0.20.0-rc1-kafka-2.6.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list source-kafka-bootstrap-skupper:9092 --topic topic4

And a consumer on the target cluster (on machine 2) with:

kubectl run kafka-consumer -ti --image=strimzi/kafka:0.20.0-rc1-kafka-2.6.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server target-kafka-bootstrap:9092 --topic topic4 --from-beginning

Now, when I send messages from the producer above (on the source cluster, machine 1), nothing is received by the consumer running on the target cluster (machine 2).
Can someone please help? This is really important for me.
