
K8s pod deletes data very slowly #5130

Open
xiaoli007 opened this issue Feb 7, 2025 · 2 comments
Labels
- component/deployment: Helm chart, kubernetes templates and configuration Issues/PRs
- component/rbd: Issues related to RBD
- dependency/ceph: depends on core Ceph functionality
- question: Further information is requested

Comments

@xiaoli007

On Ubuntu 24.04, with Ceph 19.2.0 and Ceph-CSI 3.12.3, K8s mounts PVCs via Ceph-CSI. Running `rm -rf` on 200GB of data inside a pod takes about an hour, while deleting 200GB on a standalone server without Ceph takes around 10 minutes. The disks are SATA HDDs. How can I optimize this?

@xiaoli007
Author

The Ceph pool uses RBD mode.

@nixpanic
Member

nixpanic commented Feb 7, 2025

Ceph-CSI is not in the I/O path; it only creates and mounts RBD images as volumes. There might be a configuration issue of some kind, but you would need to check with others who understand the performance characteristics of Ceph better. The easiest option is to reach out on their Slack or IRC and explain/ask there.
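Since Ceph-CSI only acts at provisioning/attach time, a useful first step is to reproduce the delete workload at a small scale both inside the pod (on the PVC mount) and on a plain host, and compare the timings. The sketch below is a minimal, hypothetical benchmark; the directory path and file count are placeholder assumptions, not values from this issue:

```shell
#!/bin/sh
# Small-scale reproduction of the bulk-delete workload.
# Run once inside the pod with DIR pointing at the PVC mount,
# and once on a standalone host, then compare the reported times.
set -e

DIR="${1:-./rm-bench}"   # scratch directory (placeholder; pass the PVC mountpoint in the pod)
N=500                    # number of 1 KiB files (placeholder; scale up for a realistic test)

mkdir -p "$DIR"
i=0
while [ "$i" -lt "$N" ]; do
  # Create many small files: bulk deletes are dominated by per-file
  # metadata operations, so file count matters more than file size.
  dd if=/dev/zero of="$DIR/f$i" bs=1k count=1 2>/dev/null
  i=$((i + 1))
done

# Portable wall-clock timing of the delete itself.
start=$(date +%s)
rm -rf "$DIR"
end=$(date +%s)
echo "deleted $N files in $((end - start))s"
```

If the pod-side run is dramatically slower, the bottleneck is likely in the Ceph data path (for example, metadata write latency on HDD-backed OSDs or a `discard` mount option on the RBD-backed filesystem), not in Ceph-CSI itself.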

@nixpanic nixpanic added question Further information is requested component/rbd Issues related to RBD component/deployment Helm chart, kubernetes templates and configuration Issues/PRs dependency/ceph depends on core Ceph functionality labels Feb 7, 2025