/opt/ceph-container/bin/osd_disk_prepare.sh: line 46: ceph-disk: command not found #1713
Using the Luminous release (ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)).
That's because the OSD entrypoint is still using the ceph-disk command, which doesn't exist anymore starting with Nautilus.
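For reference, a quick way to confirm which of the two tools actually ships in a given image (just a sketch, assuming the latest-octopus tag):

$ docker run --rm --entrypoint=bash docker.io/ceph/daemon:latest-octopus \
  -c 'command -v ceph-disk || echo "ceph-disk: not present"; command -v ceph-volume'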
2020-07-28 19:52:26 /opt/ceph-container/bin/entrypoint.sh: Valid values for the daemon parameter are populate_kvstore mon osd osd_directory osd_directory_single osd_ceph_disk osd_ceph_disk_prepare osd_ceph_disk_activate osd_ceph_activate_journal mds rgw rgw_user nfs zap_device mon_health mgr disk_introspection demo disk_list tcmu_runner rbd_target_api rbd_target_gw
osd_ceph_volume_activate doesn't seem to exist anymore in ceph/daemon:latest-octopus. I tried deleting and re-adding the OSD according to https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/, with no success. When I upgrade from Luminous > Nautilus > Octopus I get "tick checking mon for new map" when running an Octopus OSD on a previously working OSD. The mon is running in both v1 and v2 mode. So both ways, upgrade and manual creation, fail.
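Not a fix for the above, but a quick way to double-check that the mon really advertises both msgr protocols (a sketch, run on a node with an admin keyring; the addresses shown are examples):

$ ceph mon dump
# each mon should list an addrvec like [v2:10.0.0.1:3300/0,v1:10.0.0.1:6789/0]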
You have to use ceph-volume instead of ceph-disk.
Still no luck: OSD 0 destroyed, partition zapped, ceph-volume lvm prepare --osd-id {id} --data /dev/sdX, then docker run -d --name ceph-osd. The OSD is stuck on the same error as when I do the migration from Luminous > Nautilus > Octopus. Is there a step-by-step manual on how to successfully create an OSD (BlueStore) under Octopus? I will downgrade to Nautilus meanwhile.
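One thing worth checking at this point is whether the prepare step actually created and tagged an LVM volume for the OSD; a sketch, reusing the image and mounts from the suggestion below:

$ docker run --rm --privileged --net=host --ipc=host \
  -v /dev:/dev -v /etc/ceph:/etc/ceph:z \
  -v /run/lvm/:/run/lvm/ \
  -v /var/lib/ceph/:/var/lib/ceph/:z \
  --entrypoint=ceph-volume \
  docker.io/ceph/daemon:latest-octopus \
  lvm list
# should print the prepared osd.0 with its block device and osd fsid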
Using Ceph Octopus' cephadm, OSD creation works as expected.
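For comparison, the cephadm path boils down to something like this (a sketch; host1 and /dev/sdb are placeholder names, and the cluster must already have been bootstrapped with cephadm):

$ ceph orch daemon add osd host1:/dev/sdb
# or let the orchestrator consume every unused disk it finds:
$ ceph orch apply osd --all-available-devices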
$ docker run --rm --privileged --net=host --ipc=host \
-v /run/lock/lvm:/run/lock/lvm:z \
-v /var/run/udev/:/var/run/udev/:z \
-v /dev:/dev -v /etc/ceph:/etc/ceph:z \
-v /run/lvm/:/run/lvm/ \
-v /var/lib/ceph/:/var/lib/ceph/:z \
-v /var/log/ceph/:/var/log/ceph/:z \
--entrypoint=ceph-volume \
docker.io/ceph/daemon:latest-octopus \
--cluster ceph lvm prepare --bluestore --data /dev/xxxxxx
# assuming the OSD id created is 0
$ docker run --rm --privileged --net=host --pid=host --ipc=host \
-v /dev:/dev \
-v /etc/localtime:/etc/localtime:ro \
-v /var/lib/ceph:/var/lib/ceph:z \
-v /etc/ceph:/etc/ceph:z \
-v /var/run/ceph:/var/run/ceph:z \
-v /var/run/udev/:/var/run/udev/ \
-v /var/log/ceph:/var/log/ceph:z \
-v /run/lvm/:/run/lvm/ \
-e CLUSTER=ceph \
-e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
-e CONTAINER_IMAGE=docker.io/ceph/daemon:latest-octopus \
-e OSD_ID=0 \
--name=ceph-osd-0 \
docker.io/ceph/daemon:latest-octopus
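If the activate container comes up, something along these lines should confirm the OSD joined the cluster (a sketch; osd.0 matches the id assumed above, and ceph osd tree needs an admin keyring on the host):

$ docker logs -f ceph-osd-0   # watch the OSD start up and stop logging errors
$ ceph osd tree               # osd.0 should appear and eventually be marked up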
Use ceph-volume instead of ceph-disk; ceph-disk is deprecated. Have a try:
docker run -d --name ceph-osd \
--net=host \
--restart unless-stopped \
--privileged=true \
--pid=host \
-v /opt/ceph/etc/:/etc/ceph \
-v /opt/ceph/var/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-v /run/udev/:/run/udev/ \
-e OSD_DEVICE=/dev/sdb \
-e OSD_TYPE=disk \
-e OSD_BLUESTORE=1 \
cucker/ceph_daemon osd
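If the device was already touched by an earlier failed attempt, it may need to be wiped before retrying; a minimal sketch using the containerized ceph-volume (assuming /dev/sdb is the disk to reuse; --destroy also removes the LVM volumes created on it):

$ docker run --rm --privileged \
  -v /dev:/dev -v /run/lvm/:/run/lvm/ \
  --entrypoint=ceph-volume \
  docker.io/ceph/daemon:latest-octopus \
  lvm zap /dev/sdb --destroy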
Is this a bug report or feature request?
Bug Report
What happened:
Trying to start a new OSD:
docker run -d --name ceph-osd \
--net=host \
--restart unless-stopped \
--privileged=true \
--pid=host \
-v /opt/ceph/etc/:/etc/ceph \
-v /opt/ceph/var/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-v /run/udev/:/run/udev/ \
-e OSD_DEVICE=/dev/sdb \
-e OSD_TYPE=disk \
-e OSD_BLUESTORE=1 \
ceph/daemon osd
The container keeps restarting; logs:
/opt/ceph-container/bin/osd_disk_prepare.sh: line 46: ceph-disk: command not found
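To confirm the failing line really calls ceph-disk, the script can be inspected straight from the image (a sketch, assuming ceph/daemon resolves to the Octopus build that produced the log above):

$ docker run --rm --entrypoint=grep ceph/daemon \
  -n ceph-disk /opt/ceph-container/bin/osd_disk_prepare.sh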
How to reproduce it (minimal and precise):
Start a fresh ceph/daemon OSD - the error might not occur on previously working OSDs. Perhaps an old bug: ceph-disk needs to be renamed to ceph-volume in the scripts.
Environment:
Kernel (uname -a): 4.18.0-193.6.3.el8_2.x86_64
Docker version: CE 19.03.12
Ceph version (ceph -v): ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
As a workaround I will try to copy and fix osd_disk_prepare.sh locally, but the image should be fixed by someone who knows what he or she is doing.
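In case it helps anyone trying the same workaround: one way to do it without rebuilding the image is to pull the script out, patch it, and bind-mount it over the original (a sketch only; the edit itself, swapping the ceph-disk calls for ceph-volume equivalents, still has to be done by hand):

$ docker run --rm --entrypoint=cat ceph/daemon \
  /opt/ceph-container/bin/osd_disk_prepare.sh > osd_disk_prepare.sh
# edit osd_disk_prepare.sh locally, then start the OSD with the patched copy mounted read-only
$ docker run -d --name ceph-osd \
--net=host \
--restart unless-stopped \
--privileged=true \
--pid=host \
-v /opt/ceph/etc/:/etc/ceph \
-v /opt/ceph/var/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-v /run/udev/:/run/udev/ \
-v "$PWD/osd_disk_prepare.sh":/opt/ceph-container/bin/osd_disk_prepare.sh:ro \
-e OSD_DEVICE=/dev/sdb \
-e OSD_TYPE=disk \
-e OSD_BLUESTORE=1 \
ceph/daemon osd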