You can install the Minkowski Engine with `pip`, with anaconda, or directly on the system. If you experience issues installing the package, please check out [the installation wiki page](https://github.com/NVIDIA/MinkowskiEngine/wiki/Installation).
If you cannot find a relevant issue, please report it on [the GitHub issue page](https://github.com/NVIDIA/MinkowskiEngine/issues).
### Too much GPU memory usage or frequent out-of-memory errors
However, PyTorch is implemented assuming that the number of points, or the size of the activations, does not change at every iteration.
Specifically, PyTorch caches chunks of memory to speed up the allocation done for every tensor creation. If it fails to find a suitable cached block, it splits an existing cached block, or allocates new memory if no cached block is large enough for the requested size. Thus, every time the number of points (the number of non-zero elements) changes, PyTorch either splits an existing cached block or reserves new memory. If the cache becomes too fragmented and all GPU memory has been allocated, it raises an out-of-memory error.
**To prevent this, you must clear the cache at regular intervals with `torch.cuda.empty_cache()`.**
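A minimal sketch of that cadence, assuming a generic training loop: the `train_steps` helper, its arguments, and the `clear_every=50` interval are all placeholders for illustration; only `torch.cuda.empty_cache()` comes from the source. The callback is injected so the cadence logic is testable without a GPU.

```python
def train_steps(num_steps, step_fn, empty_cache, clear_every=50):
    """Run `num_steps` training iterations, clearing the allocator cache
    every `clear_every` steps.

    In a real training loop, pass `torch.cuda.empty_cache` as `empty_cache`;
    `step_fn` stands in for the forward/backward/optimizer-step body.
    Returns how many times the cache was cleared.
    """
    cleared = 0
    for step in range(1, num_steps + 1):
        step_fn(step)                # placeholder: one training iteration
        if step % clear_every == 0:
            empty_cache()            # e.g. torch.cuda.empty_cache()
            cleared += 1
    return cleared
```

Clearing too often slows training (the allocator has to request memory from the driver again), so tune the interval to the largest value that avoids out-of-memory errors on your workload.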
### Running the MinkowskiEngine on nodes with a large number of CPUs
The MinkowskiEngine uses OpenMP to parallelize kernel map generation. However, when the number of threads used for parallelization is too large (e.g., `OMP_NUM_THREADS=80`), efficiency drops rapidly because the threads spend most of their time waiting for locks to be released.
In such cases, limit the number of threads used by OpenMP. Usually, any number below 24 works well, but search for the optimal setting on your system.
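For example, the thread count can be capped through the standard `OMP_NUM_THREADS` environment variable before launching training (the value 12 is only a starting point, not a recommendation from the source):

```shell
# Cap OpenMP parallelism for the current shell session;
# launch your training script afterwards in the same shell.
export OMP_NUM_THREADS=12
```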