Support for Ceph's namespaces (not kubernetes namespaces) #798
Comments
So, are there any plans to implement this?
If no one's going to implement it, I'd be glad to do it. @Madhu-1 Should I start working on it?
We can help in testing the solution.
@mehdy yes, please go ahead.
@mehdy assigned this issue to you.
@Madhu-1 Good. Thanks.
@Madhu-1 I've been working on this lately and I found out that it's necessary to detect the … Is there a way we could know the …?
@mehdy Are you planning to allow the namespace to be different for each volume? I would have thought that it should be part of the csi plugin configuration, fixed for all volumes in a storage class.
@simonpie No, I want it to be a storageclass configuration as you said. The problem is like this: …
Understood, thank you for the clarification. Can it be added in labels? And used, of course.
There are 3 ways to provide this configuration:
1. Via the StorageClass, as being discussed above. Issue: I would suggest we not go this way, as this is not a StorageClass requirement, but rather a per-kubernetes (or, Container Orchestrator (CO)) cluster requirement.
Considering this to be a CO per-cluster level requirement opens up 2 other options, as follows, to configure the namespace:
2. Pass this in as a CLI option to the CSI plugins, say …
3. …
For the use-case provided, I'd vote for option (3). However, I think there is an alternate use-case where k8s tenants map to RBD namespaces. That helps to support isolation between tenants; different defaults can be configured, potentially different Ceph user caps, etc.
I personally think if somehow it was possible to do it via …
We need to get what the exact requirement is here. Is it sharing the same ceph cluster within a single kubernetes cluster, or sharing it with different kubernetes clusters? If we want to use different ceph namespaces within the same kubernetes cluster (with different kubernetes namespaces), the above option 3 will work, but you have to create a different clusterID with the same monitor info. We can keep the default ceph namespace name in the cluster info in the configmap, but we can provide an option to override it via the storageclass. IMO, if we implement it via the storageclass it would minimize the duplication of monitor information, and we can also have a single cluster configuration in the configmap. What are the challenges if we want to implement this with the storageclass? I think we only need to think about where to store this namespace name. @ShyamsundarR are there any other challenges?
Would like to hear from folks following this issue. From the issue description I would state it is the latter, IOW share a ceph cluster/rbd-pool across different kubernetes clusters.
Agree, my thoughts as well.
StorageClass is a cluster-wide resource, so I am not sure its access/use can be restricted to certain namespaces; a per-tenant StorageClass, as of now, seems to be a no-go. As long as the intention of mentioning this in the StorageClass is to make it evident or easier to configure, and not to restrict its use to certain tenants, thinking in this direction has its merits.
The one discussed challenge, as above, is the need to encode/represent the namespace in the …
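For concreteness, here is a minimal sketch of what option (3), a per-cluster configuration entry carrying the namespace, might look like. The radosNamespace key and all names below are assumptions made for illustration, not an interface agreed on in this thread; the shape of the entries simply mirrors the existing clusterID/monitors layout of the ceph-csi config ConfigMap.

```sh
# Illustrative sketch only: two per-cluster entries sharing the same monitors
# but scoped to different RBD namespaces via a hypothetical "radosNamespace" key.
cat <<'EOF' > csi-config.json
[
  {
    "clusterID": "ceph-prod-tenant-a",
    "radosNamespace": "tenant-a",
    "monitors": ["10.0.0.1:6789", "10.0.0.2:6789", "10.0.0.3:6789"]
  },
  {
    "clusterID": "ceph-prod-tenant-b",
    "radosNamespace": "tenant-b",
    "monitors": ["10.0.0.1:6789", "10.0.0.2:6789", "10.0.0.3:6789"]
  }
]
EOF

# Publish it as the ConfigMap the CSI plugins read their cluster list from.
kubectl create configmap ceph-csi-config --from-file=config.json=csi-config.json
```

This also makes the trade-off discussed above visible: the monitor list ends up duplicated once per namespace-scoped clusterID.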
Joining the discussion because I was working on #931 and running into the same problem 🙂 Our requirement is using one Ceph cluster with multiple Kubernetes clusters. I'd vote for option (3) for clarity. Yes, possibly duplicating monitor configurations is not optimal, but there are two reasons why this would make sense to me: …
Because of this, it would not really help to specify the namespace in the StorageClass. Tenant separation is not possible with multiple StorageClasses on Kubernetes.
Hi, is there any way to reach a decision here? This blocks #931. We'd really like to deploy the provisioner on our clusters but need this feature first...
I think there is interest in the feature and option 3 seems the best way forward to address the use-cases. I would state we have waited enough for alternatives or preference voting, and hence should proceed with the same. @madddi do you want to pick this up as part of #931? I am tied up on certain priorities as of this week (and a little into the next).
Just adding my two cents. My original requirement was to have multiple kubernetes clusters use one ceph cluster. But I could definitely use having multiple users/tenants/namespaces in a single kubernetes cluster use ceph independently, that is, each kubernetes namespace having volumes in a different ceph namespace. I did not realize at the beginning that storage class usage could not be limited to specific kubernetes namespaces.
@ShyamsundarR Thanks! I'll start working on #931 then. Regarding the namespace topic: I wouldn't like to do this as part of #931, to keep the PR small. Additionally, I don't have that much time to spare right now either... Maybe someone else who already offered to start working on this can pick it up?
Just to be verbose, I'll continue working on this issue with the third option (…).
Make sure to operate within the namespace, if any is given, when dealing with rbd images and snapshots and the volume and snapshot journals. Re-run the entire e2e tests one more time using a namespace. Closes: ceph#798 Signed-off-by: Mehdy Khoshnoody <[email protected]>
@Madhu-1 Do you have any release date for this feature? It's really needed to prevent creating many pools!! :(
If everything goes well, this will be part of the 3.0.0 release. There is already an issue created to track the release status.
@Madhu-1 Can you please share more details about the things (issues) that should be done before this one, and a due date for the 3.0.0 release?
cc @humblec
@clwluvw this RFE or enhancement was not tracked for …
@humblec I thought it could be merged and wouldn't impact anything, like other cleanup PRs! :)
What are the ETAs for version 3.0.0 and for 3.1? Or where could I find that information?
The 3.0 release issue is here: #865
Hello, am I to understand from this that ceph's namespace support will not be in v3.1.0?
The feature is supposed to provide multi-tenancy, which means multiple k8s clusters will be able to use a single ceph cluster. What about usage limits on a per-user basis? A user can create an rbd image of any size within its namespace, as there is no quota or restriction on either a per-user or per-namespace basis to keep each individual user in check.
@pawanthegemini ceph needs to provide quota support at the rbd namespace level (which is outside of cephcsi). Please open an issue with ceph to request it.
Sure, thanks for the update. Created a request with ceph.
Describe the feature you'd like to have
Support for Ceph's namespace as described here.
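For readers unfamiliar with the Ceph side of this, a minimal sketch of how RBD namespaces behave with the standard rbd CLI (the pool and namespace names are made up for illustration):

```sh
# Create a namespace inside an existing pool (supported since Ceph Nautilus).
rbd namespace create --pool kube-pool --namespace cluster-a

# Images created with --namespace land inside that namespace.
rbd create --pool kube-pool --namespace cluster-a --size 1G pvc-demo

# Listing is scoped per namespace: this shows pvc-demo ...
rbd ls --pool kube-pool --namespace cluster-a
# ... while listing the pool's default namespace does not.
rbd ls --pool kube-pool
```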
What is the value to the end user? (why is it a priority?)
This would allow multiple kubernetes clusters to share one unique ceph cluster without creating a pool per kubernetes cluster. Pools in ceph can be computationally expensive, as stated in the documentation, and namespaces are the recommended way to segregate tenants/users.
How will we know we have a good solution? (acceptance criteria)
The kubernetes admin can specify a ceph namespace to use in the helm chart, and all ceph operations done in the backend automatically use --namespace=$mynamespace. Hence, all rbd operations go to the specified namespace. A different kubernetes cluster, using a different ceph client, could be assigned a different ceph namespace and would not be able to see the images created by the first cluster.
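As a sketch of the isolation described in these acceptance criteria, each kubernetes cluster could be handed its own Ceph client whose caps are confined to one namespace. The client, pool, and namespace names below are placeholders; this only illustrates standard Ceph auth caps, not the plugin's own provisioning flow:

```sh
# One restricted client per kubernetes cluster, confined to its own RBD namespace.
ceph auth get-or-create client.k8s-cluster-a \
  mon 'profile rbd' \
  osd 'profile rbd pool=kube-pool namespace=cluster-a'

ceph auth get-or-create client.k8s-cluster-b \
  mon 'profile rbd' \
  osd 'profile rbd pool=kube-pool namespace=cluster-b'

# cluster-b's client cannot see or open images living in cluster-a's namespace.
rbd --id k8s-cluster-a ls --pool kube-pool --namespace cluster-a   # lists its own images
rbd --id k8s-cluster-b ls --pool kube-pool --namespace cluster-a   # fails: insufficient caps
```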