
Consumer reset offset after rebalance - commit with explicit consumer result used #1792

Closed
3schwartz opened this issue Apr 7, 2022 · 10 comments
Labels
investigate further we want to further investigate to understand properly

Comments


3schwartz commented Apr 7, 2022

Description

We have experienced some issues with our consumers in our Kafka environment. When a consumer is assigned some partitions, it starts from the first message rather than the latest committed offset - hence the offset gets reset on the given partitions.

We are hosting our cluster in Kubernetes, and one consumer group exists across many different pods and gets assigned to many different topics. These topics often have many partitions, so we end up with many members (~500) in the consumer group.

We have experienced the issue in several scenarios, but always around rebalancing - either following rolling updates or scaling. When pods start, some partitions have their offset reset.

We can see that Kafka also regards the initial offset as unset - the metric kafka_consumergroup_current_offset is -1 just after the issue happens.

(screenshot: kafka_consumergroup_current_offset dropping to -1)
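For reference, the broker-side committed offsets can also be queried directly from the client. A minimal sketch, assuming the Confluent.Kafka `Committed` API; the helper name and timeout are illustrative:

```csharp
using Confluent.Kafka;
using System;

// Hypothetical helper: query the broker for the committed offsets of the
// current assignment. Partitions without a committed offset come back with
// an unset sentinel value, matching the -1 we see in the metric.
static void LogCommittedOffsets(IConsumer<string, string> consumer)
{
    var committed = consumer.Committed(consumer.Assignment, TimeSpan.FromSeconds(10));
    foreach (var tpo in committed)
    {
        Console.WriteLine($"{tpo.TopicPartition}: {tpo.Offset}");
    }
}
```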

How to reproduce

We are currently unable to reproduce the issue. A simplified flow of our consumer follows that of the example

public static void Run_Consume(string brokerList, List<string> topics, CancellationToken cancellationToken)
without the modulo - hence we commit on every message.

            var config = new ConsumerConfig
            {
                BootstrapServers = "kafka:9092",
                GroupId = "consumer",
                ClientId = "some",
                EnableAutoCommit = false,
                AutoOffsetReset = AutoOffsetReset.Earliest,
            };

            using (var consumer = new ConsumerBuilder<string, string>(config)
                       .Build())
            {
                consumer.Subscribe(topics);

                try
                {
                    while (true)
                    {
                        try
                        {
                            var consumeResult = consumer.Consume(cancellationToken);

                            // Process

                            consumer.Commit(consumeResult);
                        }
                        catch (ConsumeException e)
                        {
                            Console.WriteLine($"Consume error: {e.Error.Reason}");
                        }
                    }
                }
                catch (OperationCanceledException)
                {
                    Console.WriteLine("Closing consumer.");
                    consumer.Close();
                }
            }

We have followed issue #1672; however, that issue was resolved by a fix which flushes the local offset storage between assignments.

There are comments on that issue from others who have the same setup (EnableAutoCommit=false, committing with an explicit consume result) and report the same behavior we see - I would expect they still have the issue after the change to flush offset storage:

We are not disabling auto offset storage - hence on every poll the storage is populated. However, we explicitly pass the consume result at commit - and from a short chat with @edenhill on the other issue, explicit commits are unfortunately not related to that fix.

I would only expect the other issue and its fix to be related to this issue if committing with a consume result had a side effect that commits both the explicit consume result and, at the same time, whatever is in offset storage.
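To make the distinction concrete - a minimal sketch of the two commit paths, assuming the Confluent.Kafka API (the helper name is hypothetical):

```csharp
using Confluent.Kafka;

// Hypothetical helper showing the two commit paths being discussed.
static void CommitPaths(IConsumer<string, string> consumer,
                        ConsumeResult<string, string> result)
{
    // Explicit commit (our setup): commits this result's offset + 1
    // for that single partition only.
    consumer.Commit(result);

    // Stored-offset path (what the #1672 fix targets): with
    // EnableAutoOffsetStore = true (the default), each Consume() call
    // stores the message's offset locally; a parameterless Commit()
    // then commits the current offsets for the whole assignment.
    consumer.Commit();
}
```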

Configurations

auto.offset.reset: earliest
enable.auto.commit: false
partition.assignment.strategy: range (default)

The default.replication.factor is 3, which means all topics are replicated to three brokers.

If the leader doesn't hear from a follower within replica.lag.time.max.ms, the follower is removed from the ISR. We would expect to see this in the metric kafka_topic_partition_under_replicated_partition.

Just before the offset reset events happen we don't see any out-of-sync replicas - hence I would expect everything to be replicated.

Versions
Kafka: v2.8
.NET Confluent Kafka: 1.8.2


edenhill commented Apr 7, 2022

Are there any error logs on any of the consumer pods when this happens? Perhaps something about offset resets?


3schwartz commented Apr 7, 2022

Are there any error logs on any of the consumer pods when this happens? Perhaps something about offset resets?

Our log handlers are constructed like this

            var cb = new ConsumerBuilder<string, string>(consumerConfig)
                .SetLogHandler((consumer, message) =>
                {
                    // log with correct log level
                })
                .SetPartitionsRevokedHandler((consumer, partitions) =>
                {
                    // log partitions, topic and consumer
                })
                .SetPartitionsLostHandler((consumer, partitions) =>
                {
                    // log partitions, topic and consumer
                })
                .SetErrorHandler((consumer, error) =>
                {
                    // log error
                })
                .SetPartitionsAssignedHandler((consumer, partitions) =>
                {
                    // log partitions, topic and consumer
                });

Some logs we see are

(screenshot: consumer log output - topic and consumer names overwritten)

I can't see any logs from the client library about offset resets - is there any log handler we are missing where this information could be gathered?

From Kafka we see logs like this

(screenshot: Kafka broker logs)

The groups transitioning to Dead are one-time consumers - these are generated because the client doesn't support subscribing without a group id. Hence no alarm there.
Likewise, I would expect the expired offsets to be related to these one-time consumer groups.


edenhill commented Apr 7, 2022

"Removed 79 expired offsets .."

What is your offset commit retention time on the broker? E.g. offsets.retention.minutes et al.


3schwartz commented Apr 7, 2022

offsets.retention.minutes

offsets.retention.minutes is 7 days.
Hence, from this KIP, I would not expect any issues during rolling deployments:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets#KIP211:ReviseExpirationSemanticsofConsumerGroupOffsets-TransitioningtoEmptyState

offsets.retention.check.interval.ms is 10 minutes.

The delete.retention.ms on __consumer_offsets is set to 1 day.

@3schwartz
Contributor Author

I have added links to two comments from issue #1672 which have the same setup as this issue and which I wouldn't expect to be mitigated by flushing the offset storage:

@3schwartz
Contributor Author

Hi @edenhill

I have been able to catch the offset shift with full debug logging from the client library (consumer,cgrp,topic,fetch).

The log statements below concern the affected topic TOPIC-XXX and partition [11]. All logs below are output from .SetLogHandler; all messages other than heartbeats are logged. The messages are filtered with the requirement that the log.KafkaMessage field contains 11 (the partition) and the topic name (TOPIC-XXX).

log.Timestamp	log.KafkaMessage	log.MessageTemplate	log.Level	pod
May 17, 2022 @ 21:49:11.716	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:11.502	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:11.295	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:11.206	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:11.068	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:10.829	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:10.687	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:10.598	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:10.339	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:10.181	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:10.131	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.905	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.709	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.679	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.470	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.169	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.133	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.130	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.088	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:09.032	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:09.021	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.854	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.646	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.629	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Enqueue 4 message(s) (2756 bytes, 4 ops) on TOPIC-XXX [11] fetch queue (qlen 23073, v4, last_offset 1895, 0 ctrl msgs, zstd)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.629	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.430	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.259	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [98] at offset 11 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:08.234	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.109	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1892 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:08.106	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Enqueue 1 message(s) (635 bytes, 1 ops) on TOPIC-XXX [11] fetch queue (qlen 21639, v4, last_offset 1891, 0 ctrl msgs, zstd)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.916	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.729	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-hqhcj
May 17, 2022 @ 21:49:07.711	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1891 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.677	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Enqueue 335 message(s) (212725 bytes, 335 ops) on TOPIC-XXX [11] fetch queue (qlen 11340, v4, last_offset 1890, 0 ctrl msgs, zstd)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.674	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.430	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.188	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.031	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1556 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:07.007	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Enqueue 1550 message(s) (984250 bytes, 1550 ops) on TOPIC-XXX [11] fetch queue (qlen 6692, v4, last_offset 1555, 0 ctrl msgs, zstd)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:06.792	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:06.766	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:06.749	[thrd:kafka-westeurope-0.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-0.kafka-westeurope-headless.test3.svc.cluster.local:9092/0: Fetch topic TOPIC-XXX [78] at offset 11 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:49:06.573	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:06.261	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:06.144	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 6 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:06.142	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Enqueue 2 message(s) (1270 bytes, 2 ops) on TOPIC-XXX [11] fetch queue (qlen 1619, v4, last_offset 5, 0 ctrl msgs, zstd)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:05.837	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.960	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 4 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.959	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Enqueue 4 message(s) (2540 bytes, 4 ops) on TOPIC-XXX [11] fetch queue (qlen 1, v4, last_offset 3, 0 ctrl msgs, zstd)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.868	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 0 (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.368	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11] in state active at offset 0 (0/100000 msgs, 0/65536 kb queued, opv 4) is fetchable	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.368	[thrd:main]: Partition TOPIC-XXX [11] start fetching at offset 0	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.368	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: Topic TOPIC-XXX [11]: fetch decide: updating to version 4 (was 0) at offset 0 (was 0)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.368	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Added TOPIC-XXX [11] to fetch list (4 entries, opv 4, 0 messages queued): fetchable	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.367	[thrd:main]: Offset BEGINNING request for TOPIC-XXX [11] returned offset 0 (0)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.367	[thrd:main]: Partition TOPIC-XXX [11] changed fetch state offset-wait -> active	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.367	[thrd:main]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Offset reply for topic TOPIC-XXX [11] (v4 vs v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:04.295	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:49:03.943	[thrd:main]: Topic TOPIC-XXX [11]: timed offset query for BEGINNING in state offset-query	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.943	[thrd:main]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Partition TOPIC-XXX [11]: querying for logical offset BEGINNING (opv 4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.943	[thrd:main]: Partition TOPIC-XXX [11] changed fetch state offset-query -> offset-wait	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.850	[thrd:main]: Group "CONSUMER-GROUP-XXX" received op PARTITION_JOIN in state up (join-state steady) for TOPIC-XXX [11]	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.850	[thrd:main]: Group "CONSUMER-GROUP-XXX": add TOPIC-XXX [11]	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.842	[thrd:main]: Not resuming TOPIC-XXX [11]: partition is not paused by library	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.842	[thrd:main]: Start fetch for TOPIC-XXX [11] in state none at offset INVALID (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.842	[thrd:main]: TOPIC-XXX [11]: offset reset (at offset INVALID) to BEGINNING: no previously committed offset available: Local: No offset stored	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.842	[thrd:main]: TOPIC-XXX [11] received op PAUSE (v3) in fetch-state none (opv2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.842	[thrd:main]: TOPIC-XXX [11] received op FETCH_START (v4) in fetch-state none (opv3)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.842	[thrd:main]: Partition TOPIC-XXX [11] changed fetch state none -> offset-query	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.842	[thrd:main]: TOPIC-XXX [11]: backoff: (re)starting offset query timer for offset BEGINNING	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [11] offset INVALID	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [11] at offset INVALID	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [11] (v3)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [11]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [11] at offset INVALID (v4)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [11]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [11] offset -1, metadata 0 byte(s): NO_ERROR	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [11] back to pending list with offset INVALID	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [11] received op PAUSE (v2) in fetch-state none (opv1)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [11]: partition is not paused by library	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [11] (v2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [11] offset STORED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [11]	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [11]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.811	[thrd:main]: TOPIC-XXX [11]: marking as DESIRED	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.778	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11]: joining broker (rktp 0x7f5d3c012a00, 0 message(s) queued)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]: TOPIC-XXX [11]: delegating to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 for partition with 0 messages (0 bytes) queued	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]: Migrating topic TOPIC-XXX [11] 0x7f5d3c012a00 from (none) to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (sending PARTITION_JOIN to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]: TOPIC-XXX [11]: delegate to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (rktp 0x7f5d3c012a00, term 0, ref 2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.772	[thrd:main]: NEW TOPIC-XXX [11] 0x7f5d3c012a00 refcnt 0x7f5d3c012a90 (at rd_kafka_topic_partition_cnt_update:798)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.758	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.758	[thrd:main]: TOPIC-XXX [11]: delegate to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (rktp 0x7f13f4012ba0, term 0, ref 2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.758	[thrd:main]: TOPIC-XXX [11]: delegating to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 for partition with 0 messages (0 bytes) queued	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.758	[thrd:main]: Migrating topic TOPIC-XXX [11] 0x7f13f4012ba0 from (none) to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (sending PARTITION_JOIN to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.758	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11]: joining broker (rktp 0x7f13f4012ba0, 0 message(s) queued)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.748	[thrd:main]: NEW TOPIC-XXX [11] 0x7f13f4012ba0 refcnt 0x7f13f4012c30 (at rd_kafka_topic_partition_cnt_update:798)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.746	[thrd:main]:  TOPIC-XXX [11] offset INVALID	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.725	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11]: joining broker (rktp 0x7fdb280129a0, 0 message(s) queued)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-hqhcj
May 17, 2022 @ 21:49:03.724	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-hqhcj
May 17, 2022 @ 21:49:03.724	[thrd:main]: TOPIC-XXX [11]: delegate to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (rktp 0x7fdb280129a0, term 0, ref 2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-hqhcj
May 17, 2022 @ 21:49:03.724	[thrd:main]: TOPIC-XXX [11]: delegating to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 for partition with 0 messages (0 bytes) queued	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-hqhcj
May 17, 2022 @ 21:49:03.724	[thrd:main]: Migrating topic TOPIC-XXX [11] 0x7fdb280129a0 from (none) to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (sending PARTITION_JOIN to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-hqhcj
May 17, 2022 @ 21:49:03.722	[thrd:main]: NEW TOPIC-XXX [11] 0x7fdb280129a0 refcnt 0x7fdb28012a30 (at rd_kafka_topic_partition_cnt_update:798)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-hqhcj
May 17, 2022 @ 21:49:03.718	[thrd:main]:  TOPIC-XXX [11] offset INVALID	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.718	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:03.718	[thrd:main]: TOPIC-XXX [11]: delegate to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (rktp 0x7f337c012a00, term 0, ref 2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:03.718	[thrd:main]: TOPIC-XXX [11]: delegating to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 for partition with 0 messages (0 bytes) queued	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:03.718	[thrd:main]: Migrating topic TOPIC-XXX [11] 0x7f337c012a00 from (none) to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (sending PARTITION_JOIN to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:03.718	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11]: joining broker (rktp 0x7f337c012a00, 0 message(s) queued)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:03.715	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:49:03.715	[thrd:main]: TOPIC-XXX [11]: delegate to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (rktp 0x7f5540012dd0, term 0, ref 2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:49:03.715	[thrd:main]: TOPIC-XXX [11]: delegating to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 for partition with 0 messages (0 bytes) queued	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:49:03.715	[thrd:main]: Migrating topic TOPIC-XXX [11] 0x7f5540012dd0 from (none) to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (sending PARTITION_JOIN to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:49:03.715	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11]: joining broker (rktp 0x7f5540012dd0, 0 message(s) queued)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:49:03.715	[thrd:main]: NEW TOPIC-XXX [11] 0x7f337c012a00 refcnt 0x7f337c012a90 (at rd_kafka_topic_partition_cnt_update:798)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-wftk8
May 17, 2022 @ 21:49:03.714	[thrd:main]: NEW TOPIC-XXX [11] 0x7f5540012dd0 refcnt 0x7f5540012e60 (at rd_kafka_topic_partition_cnt_update:798)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-rc5jw
May 17, 2022 @ 21:40:58.869	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v8)	Kafka consumer log: {KafkaMessage}	Debug	POD-5bb76b5474-xt7g2
May 17, 2022 @ 21:40:58.367	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v8)	Kafka consumer log: {KafkaMessage}	Debug	POD-5bb76b5474-xt7g2
May 17, 2022 @ 21:40:57.865	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v8)	Kafka consumer log: {KafkaMessage}	Debug	POD-5bb76b5474-xt7g2
May 17, 2022 @ 21:40:57.364	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v8)	Kafka consumer log: {KafkaMessage}	Debug	POD-5bb76b5474-xt7g2

One important log statement is the following:

  May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [11] offset -1, metadata 0 byte(s): NO_ERROR	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk

I can see that the OffsetFetchRequest from the Kafka protocol actually receives an offset of -1, even though the latest fetch before the deployment rollout was at the correct offset:

  May 17, 2022 @ 21:40:58.869	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Fetch topic TOPIC-XXX [11] at offset 1896 (v8)	Kafka consumer log: {KafkaMessage}	Debug	POD-5bb76b5474-xt7g2

At timestamp 21:49:03.748, the partitions are assigned via .SetPartitionsAssignedHandler. Here TOPIC-XXX [[11]] is assigned to pod POD-85d549d46d-7j5zk.

@timestamp	log.AssignedPartitions	log.MessageTemplate	pod
17-05-2022  21:49:03.748	TOPIC-XXX [[0]], TOPIC-XXX [[1]], TOPIC-XXX [[2]], TOPIC-XXX [[3]], TOPIC-XXX [[4]], TOPIC-XXX [[5]], TOPIC-XXX [[6]], TOPIC-XXX [[7]], TOPIC-XXX [[8]], TOPIC-XXX [[9]], TOPIC-XXX [[10]], TOPIC-XXX [[11]], TOPIC-XXX [[12]], TOPIC-XXX [[13]], TOPIC-XXX [[14]], TOPIC-XXX [[15]], TOPIC-XXX [[16]], TOPIC-XXX [[17]], TOPIC-XXX [[18]], TOPIC-XXX [[19]]	Partitions {@AssignedPartitions}  assigned for {Consumer}  with message {MessageType}	POD-85d549d46d-7j5zk
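For reference, a minimal sketch of how such an assignment handler can be wired up with Confluent.Kafka - the broker address, group id, and log format here are placeholders, not our actual configuration:

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "kafka:9092",   // placeholder
    GroupId = "consumer",              // placeholder
    Debug = "cgrp,topic,fetch",        // emits librdkafka debug lines like those above
};

using var consumer = new ConsumerBuilder<Ignore, string>(config)
    .SetPartitionsAssignedHandler((c, partitions) =>
        // Fires at assignment time (21:49:03.748 in the logs above). Note the
        // handler only receives the TopicPartition list; the committed offsets
        // are fetched afterwards via OffsetFetchRequest, which is where the -1
        // shows up.
        Console.WriteLine($"Partitions assigned: {string.Join(", ", partitions)}"))
    .SetLogHandler((c, log) =>
        Console.WriteLine($"{log.Facility} {log.Message}"))
    .Build();
```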

The first log message after the assignment, in which the partition asks to join a broker, is the statement below. Importantly, I can see that it is assigned to the same broker as before the deployment rollout. Hence I would expect Kafka to be fully in sync, and I find it quite strange that -1 gets returned. Could there be an error in the byte-level decoding of the protocol response received from Kafka - e.g. the TAG_BUFFER, COMPACT_NULLABLE_STRING or COMPACT_STRING fields? Apache Kafka

  May 17, 2022 @ 21:49:03.758	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11]: joining broker (rktp 0x7f13f4012ba0, 0 message(s) queued)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-7j5zk

I have gone through our Kafka broker logs over the rebalancing period. There is nothing which indicates offsets being removed for CONSUMER-GROUP-XXX and TOPIC-XXX, and no metrics that would support this either.

I find the statements below a bit strange - they are from a pod which isn't the one that has the partition assigned after the assignment statement. Maybe it is because of some shared state between the consumers?

May 17, 2022 @ 21:49:03.778	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [11]: joining broker (rktp 0x7f5d3c012a00, 0 message(s) queued)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]: TOPIC-XXX [11]: delegating to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 for partition with 0 messages (0 bytes) queued	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]: Migrating topic TOPIC-XXX [11] 0x7f5d3c012a00 from (none) to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (sending PARTITION_JOIN to kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]:   Topic TOPIC-XXX partition 11 Leader 1	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.773	[thrd:main]: TOPIC-XXX [11]: delegate to broker kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1 (rktp 0x7f5d3c012a00, term 0, ref 2)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng
May 17, 2022 @ 21:49:03.772	[thrd:main]: NEW TOPIC-XXX [11] 0x7f5d3c012a00 refcnt 0x7f5d3c012a90 (at rd_kafka_topic_partition_cnt_update:798)	Kafka consumer log: {KafkaMessage}	Debug	POD-85d549d46d-8npng

I have also looked at the logs around the OffsetFetchResponse (~150 lines before and after, filtered to only the pod which ends up with the partition assignment). They are given below. I can see the OffsetFetchResponse was part of a batch of 20 partitions - all responses returned with error code NO_ERROR.

log.Timestamp	log.KafkaMessage	pod
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [2] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [3] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [4] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [5] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [6] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [7] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [8] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [9] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [10] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [11] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [12] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [13] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [14] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [15] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [16] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [17] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [18] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [19] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: List with 20 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [0] offset 1928	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [4] offset 1953	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [18] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [10] offset 1929	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [13] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [9] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [3] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [17] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [2] offset 1932	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [6] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [16] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [8] offset 2003	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [12] offset 1930	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [14] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [7] offset 1903	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [11] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [5] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [1] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [19] offset 1956	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]:  TOPIC-XXX [15] offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: List with 0 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: List with 0 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [15] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [15]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [15] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [15]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [15] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [19] at offset 1956	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [19]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [19] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [19]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [19] at offset 1956 (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [1] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [1]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [1] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [1]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [1] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [5] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [5]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [5] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [5]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [5] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [11] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [11]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [11] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [11]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [11] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [7] at offset 1903	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [7]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [7] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [7]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [7] at offset 1903 (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [14] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [14]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [14] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [14]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [14] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [12] at offset 1930	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [12]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [12] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [12]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [12] at offset 1930 (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [8] at offset 2003	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [8]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [8] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [8]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [8] at offset 2003 (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [16] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [16]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [16] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [16]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [16] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [6] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [6]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [6] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [6]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [6] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [2] at offset 1932	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [2]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [2] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [2]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [2] at offset 1932 (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [17] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [17]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [17] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [17]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [17] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [3] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [3]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [3] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [3]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [3] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [9] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [9]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [9] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [9]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [9] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [13] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [13]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [13] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [13]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [13] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [10] at offset 1929	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [10]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [10] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [10]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [10] at offset 1929 (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Starting pending assigned partition TOPIC-XXX [18] at offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [18]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v3	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Resume TOPIC-XXX [18] (v3)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: TOPIC-XXX [18]: rd_kafka_toppar_op_fetch_start:2363: new version barrier v4	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.839	[thrd:main]: Start consuming TOPIC-XXX [18] at offset INVALID (v4)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.838	[thrd:main]:  TOPIC-XXX [0] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.838	[thrd:main]:  TOPIC-XXX [1] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [10] offset 1929, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [13] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [9] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [3] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [17] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [2] offset 1932, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [6] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [16] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [8] offset 2003, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [12] offset 1930, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [14] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [7] offset 1903, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [11] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [5] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [1] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [19] offset 1956, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [15] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: GroupCoordinator/1: OffsetFetch for 20/20 partition(s) (0 unstable partition(s)) returned Success	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [0] back to pending list with offset 1928	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [4] back to pending list with offset 1953	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [18] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [10] back to pending list with offset 1929	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [13] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [9] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [3] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [17] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [2] back to pending list with offset 1932	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [6] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [16] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [8] back to pending list with offset 2003	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [12] back to pending list with offset 1930	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [14] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [7] back to pending list with offset 1903	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [11] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [5] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [1] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [19] back to pending list with offset 1956	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Adding TOPIC-XXX [15] back to pending list with offset INVALID	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: Assignment dump (started_cnt=0, wait_stop_cnt=0)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.829	[thrd:main]: List with 20 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.823	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [0] offset 1928, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.823	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [4] offset 1953, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.823	[thrd:main]: GroupCoordinator/1: OffsetFetchResponse: TOPIC-XXX [18] offset -1, metadata 0 byte(s): NO_ERROR	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [7]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [6]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [5]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [4]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [3]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [2]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [1]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [0]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Fetching committed offsets for 20 pending partition(s) in assignment	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: GroupCoordinator/1: OffsetFetchRequest(v7) for 20/20 partition(s)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: GroupCoordinator/1: Fetch committed offsets for 20/20 partition(s)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Current assignment of 20 partition(s) with 20 pending adds, 20 offset queries, 0 partitions awaiting stop and 0 offset commits in progress	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [0] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [0]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [1] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [1]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [2] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [2]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [3] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [3]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [4] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [4]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [5] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [5]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [6] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [6]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [7] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [92]: joining broker (rktp 0x7f13f402dc00, 0 message(s) queued)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [7]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [8] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [8]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [95]: joining broker (rktp 0x7f13f402ec20, 0 message(s) queued)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [9] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [9]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [10] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.]: kafka-westeurope-1.kafka-westeurope-headless.test3.svc.cluster.local:9092/1: Topic TOPIC-XXX [98]: joining broker (rktp 0x7f13f402fc40, 0 message(s) queued)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [10]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [11] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [11]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [12] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [12]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [13] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [13]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [14] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [14]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [15] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [15]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [16] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [16]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [17] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [17]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [18] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [18]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: TOPIC-XXX [19] received op PAUSE (v2) in fetch-state none (opv1)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.814	[thrd:main]: Not resuming TOPIC-XXX [19]: partition is not paused by library	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [11]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [11] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [12]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [12] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [13]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [13] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [14]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [14] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [15]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [15] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [16]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [16] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [17]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [17] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [18]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [18] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: TOPIC-XXX [19]: rd_kafka_toppar_op_pause_resume:2449: new version barrier v2	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Resume TOPIC-XXX [19] (v2)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Group "CONSUMER-GROUP-XXX" changed join state wait-assign-call -> steady (state up)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Assignment dump (started_cnt=0, wait_stop_cnt=0)	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: List with 20 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [0] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [1] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [2] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [3] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [4] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [5] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [6] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [7] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [8] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [9] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [10] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [11] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [12] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [13] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [14] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [15] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [16] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [17] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [18] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [19] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: List with 20 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [0] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [1] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [2] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [3] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [4] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [5] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [6] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [7] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [8] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [9] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [10] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [11] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [12] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [13] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [14] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [15] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [16] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [17] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [18] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]:  TOPIC-XXX [19] offset STORED	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: List with 0 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: List with 0 partition(s):	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [19]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [18]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [17]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [16]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [15]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [14]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [13]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [12]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [11]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [10]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [9]	POD-85d549d46d-7j5zk
May 17, 2022 @ 21:49:03.813	[thrd:main]: Querying committed offset for pending assigned partition TOPIC-XXX [8]	POD-85d549d46d-7j5zk

To sum up, I get an offset of -1 in an OffsetFetchResponse from Kafka, which seems really strange given the successful fetch just before the former application was closed, and given that there is nothing in the Kafka logs.

I find it a bit strange that pod POD-85d549d46d-8npng performs some actions related to TOPIC-XXX partition 11 after the assignment has been given to POD-85d549d46d-7j5zk. I don't know whether this could cause concurrency issues. Other than that, maybe the translation of the protocol could be an issue in some corner cases?
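To make the broker-side view visible at assignment time, one option (a sketch, not from the original report, assuming the Confluent.Kafka 1.x API) is to log the committed offsets the broker reports for each newly assigned partition from the partitions-assigned handler. `Offset.Unset` there corresponds to the broker returning -1 in the OffsetFetchResponse:

```csharp
using System;
using Confluent.Kafka;

// Hypothetical diagnostic: log what the broker reports as committed
// for each newly assigned partition, before fetching starts.
// `config` is assumed to be the ConsumerConfig shown earlier.
var consumer = new ConsumerBuilder<Ignore, string>(config)
    .SetPartitionsAssignedHandler((c, partitions) =>
    {
        var committed = c.Committed(partitions, TimeSpan.FromSeconds(10));
        foreach (var tpo in committed)
        {
            // Offset.Unset means the broker found no committed offset
            // (i.e. -1 in the OffsetFetchResponse).
            Console.WriteLine(
                $"assigned {tpo.TopicPartition}: committed={tpo.Offset}");
        }
    })
    .Build();
```

This does not fix anything, but it timestamps exactly when the broker starts answering -1 for a partition, independent of the librdkafka debug logs.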

I have tried to provide as extensive logs as possible. We have a lot more, without any filter. If it helps, I would gladly arrange some time to share what we have.

Kind Regards,
@3schwartz

3schwartz commented May 31, 2022

Some more info: lately we experienced it during a deployment in the morning, where the evening before we had seen Kafka deleting a lot of old log segments.
When the consumers (pods) were re-deployed, their offsets were shifted to -1, and they then went to the earliest offset stored.

There are no logs from Kafka indicating that consumer group offsets were removed. I would expect Kafka to return this earliest offset and not -1.
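One broker-side cause worth ruling out here (an assumption, not confirmed by these logs): committed offsets are expired by the broker once a group has been empty for longer than the offset retention window, after which OffsetFetch returns -1 for those partitions. The relevant broker setting is:

```properties
# server.properties (broker side): how long committed offsets are kept
# after a consumer group becomes empty. Default is 10080 (7 days)
# since Kafka 2.0; it was 1440 (24 hours) in older versions.
offsets.retention.minutes=10080
```

If the log-segment cleanup observed the evening before coincided with the group being (briefly) empty past this window, that could explain the -1 without any explicit "offset removed" log line.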

@mhowlett mhowlett added the investigate further we want to further investigate to understand properly label Jun 7, 2022
alfhv commented Aug 9, 2022

More info from my side related to this issue ("offset reset"):
Confluent Kafka .NET version 1.9
using StoreOffset() or Commit()
PartitionAssignment = CooperativeSticky

Scenario:

  • start 2 consumers
  • send a batch of messages
  • wait until all messages are consumed
  • stop one consumer
  • wait for Kafka to start rebalancing
  • start the consumer again

Result: the first consumer (the one not restarted) starts consuming duplicate messages.

In the same scenario but without PartitionAssignment = CooperativeSticky we don't see the offset reset.

It seems that a hard (eager) rebalance revoking all partitions avoids the offset reset, at least in our environment.

In issue #1672 (comment) it is mentioned that the offset reset is only related to the use of stored offsets; unfortunately, we have seen the issue happening with an explicit commit like consumer.Commit(cr) as well, always when PartitionAssignment = CooperativeSticky is used.
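The setup described above can be sketched as follows (a minimal reconstruction, not the original code; broker address, group id, and topic name are placeholders):

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "kafka:9092",   // placeholder
    GroupId = "consumer",              // placeholder
    PartitionAssignmentStrategy =
        PartitionAssignmentStrategy.CooperativeSticky,
    EnableAutoCommit = false,          // offsets committed explicitly below
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("TOPIC-XXX");       // placeholder topic

while (true)
{
    var cr = consumer.Consume(CancellationToken.None);
    // ... process cr.Message ...
    consumer.Commit(cr);               // explicit commit with the consume result
}
```

Switching `PartitionAssignmentStrategy` back to the default (range/eager) in this same sketch is the variant in which we do not observe the reset.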

@Madhurdev007

Hi,

I'm also facing the same issue; can someone kindly suggest a solution?

@3schwartz 3schwartz closed this as not planned Won't fix, can't repro, duplicate, stale Apr 28, 2023