This repository has been archived by the owner on Dec 19, 2024. It is now read-only.

/opt/ceph-container/bin/osd_disk_prepare.sh: line 46: ceph-disk: command not found #1713

Closed
pdbs opened this issue Jul 26, 2020 · 8 comments

Comments


pdbs commented Jul 26, 2020

Is this a bug report or feature request?

  • Bug Report

Bug Report

What happened:

Trying to start a new OSD:

docker run -d --name ceph-osd \
--net=host \
--restart unless-stopped \
--privileged=true \
--pid=host \
-v /opt/ceph/etc/:/etc/ceph \
-v /opt/ceph/var/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-v /run/udev/:/run/udev/ \
-e OSD_DEVICE=/dev/sdb \
-e OSD_TYPE=disk \
-e OSD_BLUESTORE=1 \
ceph/daemon osd

The container keeps restarting; the logs show:
/opt/ceph-container/bin/osd_disk_prepare.sh: line 46: ceph-disk: command not found

How to reproduce it (minimal and precise):
Start a fresh ceph/daemon OSD; the error might not occur on previously working OSDs. This is perhaps an old bug: ceph-disk needs to be replaced with ceph-volume in the scripts.
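
For reference, the rough replacement looks like this (a sketch only; /dev/sdb is an example device, and the exact flags used by the script may differ):

# old ceph-disk style, which the script still tries to call
ceph-disk prepare --bluestore /dev/sdb

# ceph-volume equivalent used from Nautilus onward
ceph-volume lvm prepare --bluestore --data /dev/sdb
ceph-volume lvm activate --all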

Environment:

  • OS (e.g. from /etc/os-release): Red Hat (unfortunately)
  • Kernel (e.g. uname -a): 4.18.0-193.6.3.el8_2.x86_64
  • Docker version (e.g. docker version): CE 19.03.12
  • Ceph version (e.g. ceph -v): ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

As a workaround, I will try to copy and fix osd_disk_prepare.sh locally, but the image should be fixed by someone who knows what they are doing.
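
If that works, the override boils down to bind-mounting the patched copy over the original, e.g. by adding one more volume to the docker run command above (a sketch; /opt/ceph/fixed/osd_disk_prepare.sh is a hypothetical path for the patched copy):

# copy the script out of the image, then patch the ceph-disk calls to ceph-volume
docker run --rm --entrypoint cat ceph/daemon /opt/ceph-container/bin/osd_disk_prepare.sh > /opt/ceph/fixed/osd_disk_prepare.sh

# extra volume for the OSD container
-v /opt/ceph/fixed/osd_disk_prepare.sh:/opt/ceph-container/bin/osd_disk_prepare.sh:ro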


pdbs commented Jul 26, 2020

Using the Luminous Docker container (ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)), OSD creation works as documented. It looks like nobody has created a fresh OSD since 12.2, because the scripts are broken.

@dsavineau (Contributor)

That's because the osd entrypoint is still using the ceph-disk command, which no longer exists starting with Nautilus.
You should use ceph-volume instead.
There's no ceph-volume entrypoint available for creating the OSD; you have to do it on your own. Then you can use the osd_ceph_volume_active entrypoint [1] to start the OSD.

[1] https://github.com/ceph/ceph-container/blob/master/src/daemon/entrypoint.sh.in#L102-L108


pdbs commented Jul 28, 2020

2020-07-28 19:52:26 /opt/ceph-container/bin/entrypoint.sh: Valid values for the daemon parameter are populate_kvstore mon osd osd_directory osd_directory_single osd_ceph_disk osd_ceph_disk_prepare osd_ceph_disk_activate osd_ceph_activate_journal mds rgw rgw_user nfs zap_device mon_health mgr disk_introspection demo disk_list tcmu_runner rbd_target_api rbd_target_gw

osd_ceph_volume_active doesn't seem to exist anymore in ceph/daemon:latest-octopus.

I tried deleting and re-adding the OSD according to https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/ - no success.

When I upgrade from luminous > nautilus > octopus, I get "tick checking mon for new map" when running an Octopus OSD on a previously working OSD. The mon is running in v1 and v2 mode. So both ways - upgrade or manual creation - fail.
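
For reference, the v1/v2 setting can be confirmed with ceph mon dump, which lists both a v2 and a v1 address per monitor when both protocols are enabled:

ceph mon dump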

@dsavineau (Contributor)

You have to use -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE with your container engine CLI.


pdbs commented Jul 29, 2020

Still no luck: OSD 0 destroyed, partition zapped, ceph-volume lvm prepare --osd-id {id} --data /dev/sdX (see the sketch at the end of this comment), then:

docker run -d --name ceph-osd \
--net=host \
--restart unless-stopped \
--privileged=true \
--pid=host \
-v /opt/ceph/etc/:/etc/ceph \
-v /opt/ceph/var/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-v /run/udev/:/run/udev/ \
-e OSD_DEVICE=/dev/sdb \
-e OSD_TYPE=disk \
-e OSD_BLUESTORE=1 \
-e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
-e OSD_ID=0 \
ceph/daemon:latest-octopus osd

The OSD is stuck on:

2020-07-29T17:20:14.451+0000 7f0f28188700 1 osd.0 121 tick checking mon for new map

The same error as when I do the migration from luminous > nautilus > octopus.

Is there a step-by-step manual on how to successfully create an OSD (BlueStore) under Octopus?

I will downgrade to nautilus meanwhile.
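
For reference, the destroy/zap/prepare sequence mentioned above was roughly the following (a sketch; the OSD id and /dev/sdX are placeholders):

ceph osd destroy 0 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm prepare --bluestore --osd-id 0 --data /dev/sdX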


pdbs commented Jul 30, 2020

Using Ceph Octopus' cephadm, OSD creation works as expected.
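
For completeness, the cephadm path is roughly the following (a sketch, assuming the host has already been added to the cluster; host and device names are placeholders):

# add a single device as an OSD
ceph orch daemon add osd <host>:/dev/sdb
# or let the orchestrator consume every eligible device
ceph orch apply osd --all-available-devices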

pdbs closed this as completed Jul 30, 2020
@dsavineau (Contributor)

Is there a step-by-step manual on how to successfully create an OSD (BlueStore) under Octopus?

$ docker run --rm --privileged --net=host --ipc=host \
                    -v /run/lock/lvm:/run/lock/lvm:z \
                    -v /var/run/udev/:/var/run/udev/:z \
                    -v /dev:/dev -v /etc/ceph:/etc/ceph:z \
                    -v /run/lvm/:/run/lvm/ \
                    -v /var/lib/ceph/:/var/lib/ceph/:z \
                    -v /var/log/ceph/:/var/log/ceph/:z \
                    --entrypoint=ceph-volume \
                    docker.io/ceph/daemon:latest-octopus \
                    --cluster ceph lvm prepare --bluestore --data /dev/xxxxxx
# assuming the OSD id created is 0
$ docker run --rm --privileged --net=host --pid=host --ipc=host \
                    -v /dev:/dev \
                    -v /etc/localtime:/etc/localtime:ro \
                    -v /var/lib/ceph:/var/lib/ceph:z \
                    -v /etc/ceph:/etc/ceph:z \
                    -v /var/run/ceph:/var/run/ceph:z \
                    -v /var/run/udev/:/var/run/udev/ \
                    -v /var/log/ceph:/var/log/ceph:z \
                    -v /run/lvm/:/run/lvm/ \
                    -e CLUSTER=ceph \
                    -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
                    -e CONTAINER_IMAGE=docker.io/ceph/daemon:latest-octopus \
                    -e OSD_ID=0 \
                    --name=ceph-osd-0 \
                    docker.io/ceph/daemon:latest-octopus
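
Once the activate container is up, the OSD can be checked, for example, with:

docker logs ceph-osd-0
ceph osd tree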

@cucker0

cucker0 commented Aug 8, 2022

Use ceph-volume instead of ceph-disk; ceph-disk is deprecated.

Give the cucker/ceph_daemon image a try:
https://hub.docker.com/repository/docker/cucker/ceph_daemon

docker run -d --name ceph-osd \
--net=host \
--restart unless-stopped \
--privileged=true \
--pid=host \
-v /opt/ceph/etc/:/etc/ceph \
-v /opt/ceph/var/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-v /run/udev/:/run/udev/ \
-e OSD_DEVICE=/dev/sdb \
-e OSD_TYPE=disk \
-e OSD_BLUESTORE=1 \
cucker/ceph_daemon osd
