tpcc and ycsb workloads are not generating any read IOs in the PVCs #3
Comments
Hi Tim, any updates or suggestions on this issue?
Hi @vineethac, what are you using to monitor IO?
Hi Tim, I am using VxFlex OS (ScaleIO) to provision persistent volumes to CockroachDB. VxFlex OS has its own GUI where I can monitor the read/write IOs happening in the volumes.
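For a second data point independent of the storage-array GUI, the kernel's per-device IO counters in `/proc/diskstats` can be read from inside the pod and compared before/after a benchmark run. A minimal sketch (the sample line and device name below are made up for illustration):

```python
# Hypothetical helper: parse one /proc/diskstats line to cross-check
# cumulative read/write IO counts from inside the pod.

def parse_diskstats(line):
    """Return (device, reads_completed, writes_completed) for one line."""
    fields = line.split()
    # Per the Linux /proc/diskstats format: fields[2] is the device name,
    # fields[3] is reads completed, fields[7] is writes completed
    # (both are cumulative counters since boot).
    return fields[2], int(fields[3]), int(fields[7])

# Made-up sample line for demonstration:
sample = "259 0 nvme0n1 12345 10 456789 300 67890 20 987654 800 0 500 1100"
dev, reads, writes = parse_diskstats(sample)
print(dev, reads, writes)  # nvme0n1 12345 67890
```

Sampling these counters at the start and end of a run gives the number of read and write IOs issued to the device in between, independent of any vendor tooling.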
Hi Tim, hope you are doing well. Have you had a chance to look into this issue? Describe pod shows the following: Thanks
Hey Vineeth... we've had issues with the helm chart repo, etc., which prevented me from easily upgrading the CRDB version. Now that those issues have been fixed, I'm going to test this against
Hi Tim, thanks for your response. It would be really great if you could conduct a quick test and let me know. I will also try to deploy another CRDB instance on a vanilla K8s cluster and see the behavior. I will let you know. And once CRDB is up on my TKG setup, I am even planning to write a blog post! Thanks
So I tried again with 20.1 and am again observing no read activity (in our admin UI). I've talked briefly with our team, and this could be a result of what is essentially a caching layer. Unfortunately we don't expose metrics for the caching layer that would surface hit/miss details. The assumption is that writes are written to disk, but reads are served from the "cache" and never hit the underlying disk. The team is looking into this from two angles: 1) verify that the cache is in play, and 2) improve the metrics/UI to surface cache interactions to the end user. As for the "Readiness probe failed" issue, I don't yet have an update. We have seen this in other K8s environments but are still trying to determine the cause/resolution.
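The cache hypothesis above can be illustrated with a toy model (this is not CRDB's actual block cache, just a sketch): an LRU cache sits in front of the "disk"; writes always go to disk, while reads are served from the cache when they hit. When the working set fits entirely in the cache, the disk sees zero read IO even for a 50/50 read/write workload:

```python
import random
from collections import OrderedDict

class CachedStore:
    """Toy LRU block cache in front of a 'disk' with IO counters."""
    def __init__(self, cache_blocks):
        self.cache = OrderedDict()
        self.cache_blocks = cache_blocks
        self.disk_reads = 0
        self.disk_writes = 0

    def _evict(self):
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # drop least recently used

    def write(self, key, value):
        self.disk_writes += 1               # writes always hit the disk
        self.cache[key] = value
        self.cache.move_to_end(key)
        self._evict()

    def read(self, key):
        if key in self.cache:               # cache hit: no disk IO at all
            self.cache.move_to_end(key)
            return self.cache[key]
        self.disk_reads += 1                # cache miss: one disk read
        self.cache[key] = "from-disk"
        self._evict()
        return self.cache[key]

store = CachedStore(cache_blocks=1000)
for k in range(1000):                       # load phase: all keys fit in cache
    store.write(k, "v")
random.seed(0)
for _ in range(10_000):                     # 50/50 read/write over cached keys
    k = random.randrange(1000)
    if random.random() < 0.5:
        store.read(k)
    else:
        store.write(k, "v2")
print(store.disk_reads, store.disk_writes)  # disk_reads stays 0
```

Shrinking `cache_blocks` below the key count in this sketch makes `disk_reads` climb, which is exactly the behavior a larger-than-cache dataset should produce at the storage layer.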
Thank you Tim for testing and confirming this issue. For a large dataset, not everything can be cached, right? There will definitely be cache misses, and those reads would have to come from the underlying storage. Any recommendations on the dataset size? Should I try some benchmarking tools other than the built-in load generator? Any suggestions? Also, I was able to deploy CRDB on a Tanzu K8s workload cluster! Not really sure what changed, but this time it got deployed. I will destroy the current K8s cluster, try to deploy on a new one, and see the behavior.
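On sizing: one rough rule of thumb is to load a dataset well past the aggregate cache capacity so that reads are forced to miss. A back-of-the-envelope sketch with assumed numbers (the 8 GiB per-node cache, 3 nodes, and 4x safety factor are illustrative assumptions, not CRDB recommendations):

```python
# Back-of-the-envelope sizing to force cache misses: the hot dataset
# must exceed the total cache across all nodes by a comfortable margin.
# All three inputs below are assumptions for illustration.

cache_gib_per_node = 8   # assumed block-cache size configured per node
nodes = 3
safety_factor = 4        # oversize well past the aggregate cache

min_dataset_gib = cache_gib_per_node * nodes * safety_factor
print(min_dataset_gib)   # 96 -> load roughly 96 GiB+ so reads must hit disk
```

With a dataset in that range, a read-heavy workload should produce visible read IO on the PVCs if the cache is indeed what is absorbing reads today.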
Hi Tim, I tried running some ycsb (workload A) and tpcc workloads using the built-in load generators. I am able to load the data and run the benchmarks, but for some reason I cannot see any read IOs on the PVCs; all I can see is write IOs while running the tests. Workload A for ycsb is supposed to generate 50% reads and 50% writes, right? Yet I can see only write IOs coming in on those PVCs at the storage layer. Could you please share your thoughts? Please let me know if you need additional details. Thanks.
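For reference, YCSB workload A's 50/50 read/update mix amounts to a coin flip per operation, which this small sketch reproduces. The point is that the workload itself does issue reads in roughly equal proportion to writes, so their absence at the storage layer points at a layer above the disk (such as a cache), not at the workload definition:

```python
import random

# Sketch of YCSB workload A's operation mix: 50% reads, 50% updates.
random.seed(42)
ops = [random.choice(["read", "update"]) for _ in range(10_000)]
reads = ops.count("read")
updates = len(ops) - reads
print(reads, updates)  # roughly 5000 / 5000
```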