CUDA runtime error: an illegal memory access was encountered (700) in apply_lu_factor_batched_magma at /opt/conda/conda-bld/pytorch_1656352464346/work/aten/src/ATen/native/cuda/linalg/BatchLinearAlgebra.cpp:1961
CUDA runtime error: an illegal memory access was encountered (700) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda113_1619629459349/work/interface_cuda/interface.cpp:944
CUDA runtime error: an illegal memory access was encountered (700) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda113_1619629459349/work/interface_cuda/interface.cpp:945
CUDA runtime error: an illegal memory access was encountered (700) in magma_queue_destroy_internal at /opt/conda/conda-bld/magma-cuda113_1619629459349/work/interface_cuda/interface.cpp:946
epochs: 2%|█▌ | 1/50 [31:39<25:51:18, 1899.55s/it, loss=15]
Traceback (most recent call last):
File "train.py", line 300, in <module>
main(0, 1)
File "train.py", line 199, in main
multigpu=multigpu
File "train.py", line 246, in train
loss, metrics = model(batch)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/code/NCLR/bevlab/models.py", line 453, in forward
x_cam, R, t, err_2d, err_3d = efficient_pnp(keypoints_coordinates, y_uncalibrated)
File "/code/NCLR/bevlab/pnp.py", line 632, in efficient_pnp
for c_cam in c_cam_variants
File "/code/NCLR/bevlab/pnp.py", line 632, in <listcomp>
for c_cam in c_cam_variants
File "/code/NCLR/bevlab/pnp.py", line 396, in _compute_norm_sign_scaling_factor
x_world, x_cam, weight, estimate_scale=True
File "/code/NCLR/bevlab/pnp.py", line 193, in corresponding_points_alignment
E[:, -1, -1] = torch.det(R_test)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Great work!
I am trying to reproduce the object detection task and ran into the issues below; any advice would be much appreciated!
Q1. Object detection pre-training
The paper describes pre-training on SemanticKITTI, but the code only provides a nuScenes config (cfgs/pretrain_ns_spconv.yaml). I tried adapting it and wrote a SemanticKITTI config, pretrain_sk_spconv.zip (attached). However, across multiple runs it crashes during the second epoch with the error pasted at the top of this issue.
For comparison, I also tried the nuScenes setup, and pretrain_ns_spconv.yaml runs fine.
Since the object detection downstream task runs on KITTI, it seems the pre-training should indeed be done on SemanticKITTI. If that is the case, could you please take a look at my config file?
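In case it helps with triage, here is a minimal sketch of how I am trying to localize the illegal memory access, assuming it originates in the batched SVD/determinant inside efficient_pnp (the check_pnp_inputs helper below is mine, not part of bevlab):

```python
import os
# Force synchronous kernel launches so the Python stack trace points at the
# actual failing op rather than a later API call, as the error message suggests.
# Must be set before the first CUDA call, so it goes before importing torch.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

def check_pnp_inputs(keypoints_coordinates, y_uncalibrated):
    """Sanity-check the tensors fed to efficient_pnp before the batched
    SVD / torch.det runs on them (hypothetical helper, not from the repo)."""
    for name, t in [("keypoints_coordinates", keypoints_coordinates),
                    ("y_uncalibrated", y_uncalibrated)]:
        assert torch.isfinite(t).all(), f"{name} contains NaN/Inf"
        print(name, tuple(t.shape), t.dtype, t.device)

# Usage in bevlab/models.py, right before the efficient_pnp call:
# check_pnp_inputs(keypoints_coordinates, y_uncalibrated)
```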
Q2. Object detection downstream task
How are the 10%, 20%, and 50% data ratios set up? Is it simply a matter of subsampling the train split, e.g. as in the sketch below? If convenient, please share the relevant code or configs. Thanks!
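For reference, this is the kind of subsampling I have in mind (a sketch assuming an OpenPCDet-style ImageSets/train.txt split file; the paths and helper name are placeholders, not from this repo):

```python
import random
from pathlib import Path

def make_subset_split(split_file, ratio, seed=42):
    """Write e.g. train_0.1.txt next to train.txt, keeping `ratio` of the frames."""
    split_file = Path(split_file)
    frames = split_file.read_text().split()
    random.Random(seed).shuffle(frames)          # fixed seed for reproducibility
    keep = sorted(frames[: max(1, int(len(frames) * ratio))])
    out = split_file.with_name(f"{split_file.stem}_{ratio}.txt")
    out.write_text("\n".join(keep) + "\n")
    return out

# e.g. make_subset_split("data/kitti/ImageSets/train.txt", 0.1)
```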
Thanks again!