This repository has been archived by the owner on Dec 19, 2024. It is now read-only.
Hi, I'm planning to upgrade my cluster from Mimic to Nautilus and have been studying the "ceph-volume" deprecation since last month, but I still cannot find a proper way to deal with it. As in issue #1324, ceph-disk would fail, so I have to edit the entrypoint to use ceph-volume to prepare and activate the OSD, as in #1713.
But that seems to be for brand-new OSDs rather than existing OSDs created previously under Mimic. And according to the docs (https://docs.ceph.com/docs/master/releases/nautilus/#upgrading-from-mimic-or-luminous): "We recommend you avoid adding or replacing any OSDs while the upgrade is in progress." That means I cannot replace the existing OSDs with lvm-style OSDs during the upgrade process.
So what should I do to upgrade?
Which entrypoint should I use to start the Ceph OSD on Nautilus?
Would it run OK just using the old way, like:
I assume it will not start properly, because this code is still using the deprecated ceph-disk command.
The Ceph documentation also mentions that you need ceph-volume simple scan and ceph-volume simple activate --all if you want to keep the ceph-disk OSDs as they are. I assume that means I need to run these inside the running Mimic OSD container.
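If that reading is right, the migration commands would look roughly like this (a sketch only; the container name, OSD id, and data path are placeholders, not taken from my actual cluster):

```shell
# Run inside (or via docker exec against) the existing ceph-disk based OSD
# container. "ceph-osd-0" and the osd.0 data path are placeholder names.

# Scan the OSD's data directory and persist its metadata as JSON under
# /etc/ceph/osd/, so ceph-volume can later activate it without ceph-disk.
docker exec ceph-osd-0 ceph-volume simple scan /var/lib/ceph/osd/ceph-0

# Activate every OSD that has been scanned, using the saved JSON metadata.
docker exec ceph-osd-0 ceph-volume simple activate --all
```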
And another question: is the OSD id required to start an OSD daemon? In Mimic, the only parameter I would use is the raw device path. Is there a way to start the OSD daemon the old way, passing only the device path and having everything work? Is #1681 related to this question?
Environment:
OS (e.g. from /etc/os-release): gentoo
Kernel (e.g. uname -a):
Docker version (e.g. docker version):
Ceph version (e.g. ceph -v): mimic
Using the OSD_CEPH_VOLUME_ACTIVATE entrypoint with a ceph-disk based OSD will work, because this entrypoint is able to manage both ceph-volume and ceph-disk based OSDs.
When used with a ceph-disk based OSD, it will run the ceph-volume simple scan/activate commands. [1][2]
Ahhh, I finished my upgrade on Tuesday; thanks for your reply.
For upgrading from Mimic, I first switched from ceph-disk to ceph-volume by using the osd_ceph_volume_activate entrypoint and setting OSD_TYPE=simple and OSD_ID=0.
The final docker run command looks like the one provided by @dsavineau.
Then you can follow the upgrade manual to upgrade all Ceph components. Just replace the images and everything works.
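For anyone else hitting this, the invocation would look something like the sketch below. This is an assumption pieced together from the environment variables discussed above; the image tag, volume mounts, and OSD id are placeholders to adapt to your own setup:

```shell
# Sketch: starting an existing ceph-disk based OSD through the
# osd_ceph_volume_activate entrypoint (paths and image tag are placeholders).
docker run -d --privileged=true --pid=host --net=host \
  -v /dev:/dev \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e OSD_TYPE=simple \
  -e OSD_ID=0 \
  ceph/daemon:latest-nautilus
```

With OSD_TYPE=simple, the entrypoint should take the ceph-volume simple scan/activate path for the ceph-disk OSD rather than expecting an lvm-backed one.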