FluentBit is unable to recover from too many accumulated buffers with filesystem storage type #882
Comments
Can someone support on this case? It is a bit urgent.
From only the information provided, I'm not sure what is happening in your application. Can you follow all of the recommendations below, or let us know if any of them end up solving your issue?
Hi @swapneils,
Sample multiline log --> Please note that Fluent Bit is running as a DaemonSet and over 200 pods are running inside a worker node.
Describe the question/issue
Fluent Bit is sending data to S3. It works fine for some time, then gets stuck and stops sending data to S3.
Configuration
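The configuration itself is not included in this excerpt. Purely as a point of reference, a minimal sketch of the kind of setup described (a tail input with filesystem buffering feeding an S3 output) could look like the following; the paths, tag, bucket, region, and limits are placeholders, not the reporter's actual values.

[SERVICE]
    Flush                     5
    storage.path              /var/fluent-bit/state/flb-storage/
    storage.sync              normal
    storage.backlog.mem_limit 5M
    storage.metrics           on

[INPUT]
    Name              tail
    Tag               application.*
    Path              /var/log/containers/*.log
    multiline.parser  cri
    Mem_Buf_Limit     50MB
    Skip_Long_Lines   On
    storage.type      filesystem

[OUTPUT]
    Name             s3
    Match            application.*
    bucket           example-log-bucket
    region           us-east-1
    total_file_size  50M
    upload_timeout   10m
    store_dir        /var/fluent-bit/s3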
Fluent Bit Log Output
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160287.631424024.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160331.698451330.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160364.698626403.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160430.908663992.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160462.846875919.flb
[2024/12/18 12:33:43] [ info] [input:storage_backlog:storage_backlog.8] queueing tail.0:1-1734160468.165929808.flb
Chunks are pending in the locations below.
Fluent Bit Version Info
Fluent Bit is running as a DaemonSet
public.ecr.aws/aws-observability/aws-for-fluent-bit:2.32.2.20241008
[fluent bit] version=1.9.10, commit=eba89f4660
EKS 1.30
Application Details
Fluent Bit should concatenate stack traces and other application logs that are printed across multiple lines.
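The emitter_for_multiline.0 instance in the metrics below suggests the multiline filter is handling this concatenation. A sketch of such a filter, assuming the built-in java parser and a record key named log (both assumptions, not confirmed by the report), could look like:

[FILTER]
    Name                  multiline
    Match                 application.*
    multiline.key_content log
    multiline.parser      java
    buffer                on
    emitter_storage.type  filesystem
    emitter_mem_buf_limit 10M

The 9.5M mem_limit reported for emitter_for_multiline.0 appears consistent with the default emitter_mem_buf_limit of 10M.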
Metrics
sh-4.2$ curl -s http://127.0.0.1:2020/api/v1/storage | jq
{
"storage_layer": {
"chunks": {
"total_chunks": 12510,
"mem_chunks": 9,
"fs_chunks": 12501,
"fs_chunks_up": 3158,
"fs_chunks_down": 9343
}
},
"input_chunks": {
"tail.0": {
"status": {
"overlimit": true,
"mem_size": "5.7G",
"mem_limit": "572.2M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"tail.1": {
"status": {
"overlimit": false,
"mem_size": "38.5K",
"mem_limit": "4.8M"
},
"chunks": {
"total": 1,
"up": 1,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"tail.2": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "4.8M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"systemd.3": {
"status": {
"overlimit": false,
"mem_size": "64.6K",
"mem_limit": "0b"
},
"chunks": {
"total": 5,
"up": 5,
"down": 0,
"busy": 5,
"busy_size": "64.6K"
}
},
"tail.4": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "47.7M"
},
"chunks": {
"total": 1,
"up": 0,
"down": 1,
"busy": 0,
"busy_size": "0b"
}
},
"tail.5": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "4.8M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"tail.6": {
"status": {
"overlimit": false,
"mem_size": "111.3K",
"mem_limit": "4.8M"
},
"chunks": {
"total": 3,
"up": 3,
"down": 0,
"busy": 2,
"busy_size": "75.0K"
}
},
"tail.7": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "4.8M"
},
"chunks": {
"total": 0,
"up": 0,
"down": 0,
"busy": 0,
"busy_size": "0b"
}
},
"storage_backlog.8": {
"status": {
"overlimit": false,
"mem_size": "0b",
"mem_limit": "0b"
},
"chunks": {
"total": 3008,
"up": 3008,
"down": 0,
"busy": 1890,
"busy_size": "3.6G"
}
},
"emitter_for_multiline.0": {
"status": {
"overlimit": false,
"mem_size": "7.9M",
"mem_limit": "9.5M"
},
"chunks": {
"total": 200,
"up": 150,
"down": 50,
"busy": 150,
"busy_size": "7.9M"
}
}
}
}
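Reading these metrics: tail.0 is over its memory limit (5.7G used against a 572.2M limit), 12501 of the 12510 chunks sit on the filesystem with only 3158 of them loaded up, and storage_backlog.8 alone holds 1890 busy chunks totalling 3.6G, which matches the symptom of the pipeline stalling once too many buffers accumulate. One common mitigation, sketched below with placeholder values rather than recommended ones, is to bound how many filesystem chunks are brought into memory at once and how much memory the backlog replay may use:

[SERVICE]
    storage.max_chunks_up     64
    storage.backlog.mem_limit 16M

Both keys are documented for the 1.9 series; lowering them trades some S3 upload throughput for a bounded memory footprint while the backlog drains.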