When calling NodeStageVolume, a modprobe error occurs #4610
Comments
Are you able to load the ceph module manually from the node?
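For example, something like this on the node itself (a minimal check; the commands assume a typical Linux host with shell access to the node):

```sh
# Check whether the ceph filesystem is already available (module loaded or built in)
lsmod | grep ceph
grep -w ceph /proc/filesystems

# Try loading the module manually and watch the kernel log for errors
sudo modprobe ceph
dmesg | tail -n 20
```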
@Madhu-1 I don't think so. The volume mounts fine on another k8s cluster running the same version of kubelet.
@gotoweb, the Ceph-CSI driver loads the kernel module that is provided by a host-path volume. If the module is already loaded (or built-in), it should not try to load it again. Commit ab87045 checks for support of the cephfs filesystem; it is included in Ceph-CSI 3.11 and was backported to 3.10 with #4381.
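The check is roughly equivalent to the following (a sketch only, not the actual code from ab87045):

```sh
# If "ceph" already appears in /proc/filesystems, the filesystem is supported
# (module loaded or built into the kernel) and modprobe can be skipped
if grep -wq ceph /proc/filesystems; then
    echo "cephfs support already present, skipping modprobe"
else
    modprobe ceph
fi
```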
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
Describe the bug
In response to a gRPC `/csi.v1.Node/NodeStageVolume` request from the CSI plugin, the volume mount fails with the following error. When I checked dmesg, I found the log `Invalid ELF header magic: != \x7fELF`.
I hope this isn't a bug, but it seems to be out of my control.
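That dmesg message usually means the file handed to the kernel's module loader did not start with the ELF magic bytes, i.e. the ceph module file on the host (or in the host-path volume the plugin uses) is truncated, corrupted, or not an ELF object at all. One way to inspect the module file on the node (module name and paths are assumptions and differ per distro):

```sh
# Locate the ceph module file for the running kernel and check what it actually is
# (modules may also be shipped compressed, e.g. ceph.ko.xz or ceph.ko.zst)
modinfo -n ceph
file "$(modinfo -n ceph)"
```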
Environment details
Image/version of Ceph CSI driver: csi-node-driver-registrar:v2.9.3, cephcsi:canary
Mounter used for mounting PVC (for cephfs it's fuse or kernel; for rbd it's krbd or rbd-nbd): kernel

Steps to reproduce
Provision and mount a volume using the cephfs.csi.ceph.com provisioner.

Actual results
Pods attempting to mount the volume received the following error message from the kubelet:
I found the following message from the cephfs plugin pod.
The dmesg log looks like this:
Expected behavior
Interestingly, it mounts successfully on other k8s clusters that use the same Ceph cluster, and I was able to check the logs from the cephfs plugin there.
The storageclass and configmap (config.json) of the k8s cluster where the error occurs and of the k8s cluster that works correctly are identical.
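One way to compare the two side by side (kubectl contexts, namespace and resource names below are placeholders):

```sh
# Diff the storageclass and the ceph-csi configmap (config.json) between the
# working cluster and the failing one
diff <(kubectl --context working get sc cephfs-sc -o yaml) \
     <(kubectl --context failing get sc cephfs-sc -o yaml)
diff <(kubectl --context working -n ceph-csi-cephfs get cm ceph-csi-config -o jsonpath='{.data.config\.json}') \
     <(kubectl --context failing -n ceph-csi-cephfs get cm ceph-csi-config -o jsonpath='{.data.config\.json}')
```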