
Release v2.0.0. of Ceph CSI driver. #557

Closed · 11 of 12 tasks
humblec opened this issue Aug 16, 2019 · 23 comments

Labels: release-2.0.0 (v2.0.0 release), Repo activity (Process/activities on ceph-csi repo)

Comments

humblec (Collaborator) commented Aug 16, 2019

dillaman commented Oct 9, 2019

Tracking here as well that RBD locking support will not land in the v2.0 due to missing external dependencies.

ajarr (Contributor) commented Oct 11, 2019

@humblec @Madhu-1, I don't see the clone of FS subvolume snapshots feature being implemented in Ceph master and backported to nautilus in time for CSI to use it by Nov 5.

Yes, we already have fs subvolume snapshot create/rm commands. Hence, we can easily support CephFS CSI volume snapshots. Is there a ceph-csi issue for RBD/CephFS snapshots?

The fs subvolume resize command is close to being done in Ceph. With this command you can easily implement the resize feature for CSI v2.0.0.
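
For illustration only, a minimal Go sketch of how a driver could shell out to these mgr/volumes commands (the volume/subvolume/snapshot names and the 2 GiB size are placeholders, and the flags assume the nautilus-era ceph fs subvolume CLI):

```go
package main

import (
	"fmt"
	"os/exec"
)

// cephFS runs a "ceph fs subvolume ..." command and returns its combined output.
func cephFS(args ...string) (string, error) {
	out, err := exec.Command("ceph", append([]string{"fs", "subvolume"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Snapshot primitives that are already available (create/rm).
	if _, err := cephFS("snapshot", "create", "myfs", "csi-vol-0001", "snap-0001"); err != nil {
		fmt.Println("snapshot create failed:", err)
	}
	if _, err := cephFS("snapshot", "rm", "myfs", "csi-vol-0001", "snap-0001"); err != nil {
		fmt.Println("snapshot rm failed:", err)
	}

	// Resize the subvolume quota to 2 GiB (the new size is given in bytes).
	if _, err := cephFS("resize", "myfs", "csi-vol-0001", "2147483648"); err != nil {
		fmt.Println("resize failed:", err)
	}
}
```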

The following correctness/performance issues need to be fixed for v2.0.0:
#539
#360

ajarr (Contributor) commented Oct 14, 2019

@humblec @Madhu-1 ^^
Can you clarify? I don't see FS clone getting implemented in the mgr/volumes module in time for the v2.0.0 dev freeze. So can we remove it?

For now, can we stick to FS resize and snapshot, which can be implemented in CSI in time?

Madhu-1 (Collaborator) commented Oct 14, 2019

Moving FS clone out of the v2.0.0 release, as it is not getting implemented in Ceph before ceph-csi v2.0.0.

Madhu-1 (Collaborator) commented Oct 16, 2019

Had a discussion with @humblec regarding the CephFS cloning requirement. As this is a required feature for the CSI v2.0.0 release, and the code freeze date is the concern for implementing the CephFS cloning functionality, we moved the dev freeze date to 20 Nov.

Cloning covers the below 2 things:

  • creating a PVC from a snapshot
  • creating a PVC from an existing PVC

@ajarr, is this viable? If more time is required, we can move the code freeze date accordingly; please let me know your thoughts.

ajarr (Contributor) commented Oct 29, 2019

> Had a discussion with @humblec regarding the CephFS cloning requirement. As this is a required feature for the CSI v2.0.0 release, and the code freeze date is the concern for implementing the CephFS cloning functionality, we moved the dev freeze date to 20 Nov.
>
> Cloning covers the below 2 things:
>
> • creating a PVC from a snapshot
> • creating a PVC from an existing PVC

Looking at the CSI documentation, cloning covers only creating a PVC from an existing PVC:
https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/
https://kubernetes.io/blog/2019/06/21/introducing-volume-cloning-alpha-for-kubernetes/
"While cloning is similar in behavior to creating a snapshot of a volume, then creating a volume from the snapshot, a clone operation is more streamlined and is more efficient for many backend devices."

Creating a PVC from a snapshot seems to be the Restore Volume from Snapshot feature:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-snapshot-and-restore-volume-from-snapshot-support
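
As a rough illustration (not code from ceph-csi), this is how the two cases show up to a CSI driver in CreateVolume, using the CSI spec's Go bindings; the request names and IDs below are made-up placeholders:

```go
package main

import (
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// describeSource distinguishes "restore from snapshot" from "clone of an
// existing volume" by inspecting the request's VolumeContentSource.
func describeSource(req *csi.CreateVolumeRequest) string {
	src := req.GetVolumeContentSource()
	switch {
	case src == nil:
		return "plain provisioning: create an empty volume"
	case src.GetSnapshot() != nil:
		return "restore from snapshot " + src.GetSnapshot().GetSnapshotId()
	case src.GetVolume() != nil:
		return "clone of volume " + src.GetVolume().GetVolumeId()
	default:
		return "unsupported content source"
	}
}

func main() {
	restore := &csi.CreateVolumeRequest{
		Name: "pvc-restore",
		VolumeContentSource: &csi.VolumeContentSource{
			Type: &csi.VolumeContentSource_Snapshot{
				Snapshot: &csi.VolumeContentSource_SnapshotSource{SnapshotId: "snap-0001"},
			},
		},
	}
	clone := &csi.CreateVolumeRequest{
		Name: "pvc-clone",
		VolumeContentSource: &csi.VolumeContentSource{
			Type: &csi.VolumeContentSource_Volume{
				Volume: &csi.VolumeContentSource_VolumeSource{VolumeId: "csi-vol-0001"},
			},
		},
	}
	fmt.Println(describeSource(restore))
	fmt.Println(describeSource(clone))
}
```

On the Kubernetes side, a PVC dataSource of kind VolumeSnapshot maps to the first case and a dataSource of kind PersistentVolumeClaim maps to the second.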

Wouldn't it be better to split #411 into two feature requests as follows?

  • CephFS: Restore Volume from Snapshot
  • CephFS: Volume cloning

@joscollin please follow up on the above.

Regarding creating a PVC from a snapshot:
@vshankar will start working on the Ceph side to allow creating FS subvolumes from snapshots. You can track the progress here:
https://tracker.ceph.com/issues/24880

No plans yet on creating a PVC from an existing PVC. Maybe it will just end up being a volume snapshot followed by creating a volume from the snapshot?

> @ajarr, is this viable? If more time is required, we can move the code freeze date accordingly; please let me know your thoughts.

Not sure yet. @joscollin will later update on the time required to implement the "create FS subvolumes from snapshots" feature in Ceph.

Madhu-1 (Collaborator) commented Oct 30, 2019

> No plans yet on creating a PVC from an existing PVC. Maybe it will just end up being a volume snapshot followed by creating a volume from the snapshot?

Yes, you are correct. If the same logic is backed by a single cephfs command to create a volume from a volume, that will be even more helpful here.

> Not sure yet. @joscollin will later update on the time required to implement the "create FS subvolumes from snapshots" feature in Ceph.

@ajarr Thanks for the update.

ajarr (Contributor) commented Oct 30, 2019

> Wouldn't it be better to split #411 into two feature requests as follows?
>
> • CephFS: Restore Volume from Snapshot
> • CephFS: Volume cloning

Done. It should add more clarity.

Madhu-1 (Collaborator) commented Nov 6, 2019

As CephFS cloning cannot be completed in Ceph core before the v2.0.0 release, moving it out.

CC @ajarr @humblec

sergeimonakhov commented:

@humblec When is this release expected?

humblec (Collaborator, Author) commented Dec 9, 2019

> @humblec When is this release expected?

@D1abloRUS We are planning to release at the end of this year; however, there are concerns about the completion of some features, which gives us two choices at the moment: either push some of the incomplete features out of 2.0.0 and release 2.1.0 with those, or delay the release and include as many features as possible in this release. We are waiting a couple more weeks to take the call, which should also help us validate the state better. I believe you are specifically interested in the resize functionality, is that the case? Is it something you can use from the canary image of master?

sergeimonakhov commented:

@humblec We can pick it up from master; in any case, the request #331 is still open.

humblec (Collaborator, Author) commented Dec 17, 2019

> @humblec We can pick it up from master; in any case, the request #331 is still open.

It was merged last week. You should be able to consume it from master. 👍

humblec (Collaborator, Author) commented Dec 17, 2019

[Release call update]

Features completed:

  • CephFS resize
  • RBD resize
  • Encryption support on RBD volumes
  • RBD NBD support -> Available. Known issue: the client mount will be disturbed (same as the FUSE daemon) if the node plugin is restarted.

Waiting for completion:

  • Multi-arch support of CSI drivers
  • Native go client support

Deferred from v2.0.0 to 2.1.0:

  • Topology-aware provisioning
  • Snapshot/Clone support for RBD/CephFS

New Release Dates
Code Freeze: 7-Jan-2020
RC: 15-Jan-2020
GA: 20-Jan-2020

nixpanic (Member) commented:

Moving out the native go-ceph implementation for now. Expect it to be ready for release-2.1.0. Performance improvement is currently minimal, as there are still a few calls to the rbd executable while creating volumes.

go-ceph will also mature more, and will be better tested by the time 2.1.0 is ready.
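
For anyone unfamiliar with what the native client buys us, a hedged sketch of the idea (go-ceph API names as assumed from the pre-2.1.0 versions; the pool, image name, and size are placeholders):

```go
package main

import (
	"fmt"

	"github.com/ceph/go-ceph/rados"
	"github.com/ceph/go-ceph/rbd"
)

func main() {
	// Connect to the cluster with the default ceph.conf and keyring.
	conn, err := rados.NewConn()
	if err != nil {
		panic(err)
	}
	if err := conn.ReadDefaultConfigFile(); err != nil {
		panic(err)
	}
	if err := conn.Connect(); err != nil {
		panic(err)
	}
	defer conn.Shutdown()

	ioctx, err := conn.OpenIOContext("rbd")
	if err != nil {
		panic(err)
	}
	defer ioctx.Destroy()

	// Create a 1 GiB image with 4 MiB objects (order 22) through the librbd
	// bindings instead of shelling out to the rbd executable.
	if _, err := rbd.Create(ioctx, "csi-vol-demo", 1<<30, 22); err != nil {
		panic(err)
	}
	fmt.Println("image created without invoking the rbd CLI")
}
```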

humblec (Collaborator, Author) commented Jan 17, 2020

[Release call update]

Of the 2 pending features:

  • Multi arch support of CSI drivers
  • Native go client support

Native go client support has been deferred from release 2.0.0; please see comment #557 (comment).
Multi-arch support has been merged/completed!

We are at the final stage of building 2.0.0. It should be available any time now :)

humblec (Collaborator, Author) commented Jan 17, 2020

Waiting for #764 to be merged. It should be the last PR for release 2.0.0.

oguzkilcan (Contributor) commented:

Is it possible for you to also consider #785 for 2.0.0 after #764 is merged?

humblec (Collaborator, Author) commented Jan 17, 2020

> Is it possible for you to also consider #785 for 2.0.0 after #764 is merged?

Sure, let me check.

humblec (Collaborator, Author) commented Jan 17, 2020

> Is it possible for you to also consider #785 for 2.0.0 after #764 is merged?
>
> Sure, let me check.

ACK. Tagged for release-2.0.0.

humblec (Collaborator, Author) commented Jan 20, 2020

All the required PRs are merged. We are at the final stage of the 2.0.0 release and build! Thanks, all!

Madhu-1 (Collaborator) commented Jan 20, 2020

Closing this, as the ceph-csi v2.0.0 release is out: https://github.com/ceph/ceph-csi/releases/tag/v2.0.0

Madhu-1 closed this as completed Jan 20, 2020

humblec (Collaborator, Author) commented Jan 20, 2020

Thanks to the Ceph CSI community for our next major release (v2.0.0)! Many features, bug fixes, and documentation updates are part of this release! It was a massive help from our community to roll out this big release. Let's march toward our next big release, v2.1.0!

@vasyl-purchel @cyb70289 @wilmardo @hswong3i @nixpanic @sophalHong @woohhan @Madhu-1 .... 👍 💯
