Error when running the classification example #312
Comments
The code runs perfectly fine on the latest MinkowskiEngine.
Command list: see the description below. However, it returns the same report: Traceback (most recent call last): ...
I tried to use the same setup with PyTorch 1.7.1 with CUDA 11.1, but still, the code runs without a problem. Can you post the output of the diagnostics (the ==========System========== and ==========MinkowskiEngine========== sections)?
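For reference, the environment details requested above can be gathered with a short script along these lines (a sketch, not from the thread; it only assumes `ME.__version__` and standard torch/platform attributes, while the repository's own diagnostics.py script prints the same section headers with more detail):

```python
# Sketch: print the environment details requested above.
# Assumes only standard platform/torch attributes and ME.__version__.
import platform
import torch
import MinkowskiEngine as ME

print("==========System==========")
print(platform.platform(), "- Python", platform.python_version())

print("==========Pytorch==========")
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda, "| cuda available:", torch.cuda.is_available())

print("==========MinkowskiEngine==========")
print("MinkowskiEngine:", ME.__version__)
```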
I used the same CUDA versions but could not reproduce the error. I also tried pip and source installs but couldn't reproduce the error either. Are you using Docker or any special setup?
Thanks. I downloaded the installer from https://developer.download.nvidia.com/compute/cuda/11.2.0/local_installers/cuda_11.2.0_460.27.04_linux.run (similar to CUDA 11.1).
Closing the issue. The related issue #308 has been resolved on the latest master. Please feel free to reopen if this issue reappears.
Environment:
Driver Version: 460.32.03
CUDA Version: 11.1.105
PyTorch Version: 1.7.1
Installed with: pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"
Command: python -m examples.modelnet40 (https://github.com/NVIDIA/MinkowskiEngine/blob/master/examples/modelnet40.py)
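As a quick sanity check before the full example, something like the following sketch (not part of the original report; the toy coordinates, channel sizes, and layer choices are purely illustrative) can confirm that a sparse convolution plus global pooling runs on the GPU; the global pooling step is, as far as I can tell, one of the operations that can exercise the cuSPARSE spmm path shown in the traceback below:

```python
# Sketch: minimal forward pass through MinkowskiEngine on the GPU.
# Toy data and layer sizes are illustrative, not from the issue.
import torch
import MinkowskiEngine as ME

# Four toy points in 3D; columns are (batch_idx, x, y, z).
coords = torch.IntTensor([[0, 0, 0, 0],
                          [0, 0, 0, 1],
                          [0, 1, 0, 1],
                          [0, 1, 1, 1]])
feats = torch.rand(4, 3)  # 3-channel features, one row per point

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = ME.SparseTensor(features=feats.to(device), coordinates=coords.to(device))

conv = ME.MinkowskiConvolution(in_channels=3, out_channels=8,
                               kernel_size=3, dimension=3).to(device)
pool = ME.MinkowskiGlobalPooling()

# Global pooling may hit the sparse matrix-matrix (spmm) CUDA code path.
y = pool(conv(x))
print(y.F.shape)  # expected: torch.Size([1, 8])
```

If this small forward pass already raises CUSPARSE_STATUS_INVALID_VALUE, the problem is likely in the installation or CUDA setup rather than in the ModelNet40 example itself.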
Error report:
Warning: This process will cache the entire voxelized ModelNet40 dataset, which will take up ~10G of memory.
INFO - 2021-02-05 08:57:28,495 - modelnet40 - Loading the subset train from ./ModelNet40 with 8871 files
INFO - 2021-02-05 08:57:28,496 - modelnet40 - Loading the subset val from ./ModelNet40 with 966 files
warnings.warn("To get the last learning rate computed by the scheduler, "
INFO - 2021-02-05 08:57:28,529 - modelnet40 - LR: [0.01]
** On entry to cusparseSpMM_bufferSize() parameter number 1 (handle) had an illegal value: bad initialization or already destroyed
RuntimeError: CUSPARSE_STATUS_INVALID_VALUE at /tmp/pip-req-build-4vnh0cz8/src/spmm.cu:249