Commit 4afee29

cache clear update
1 parent 966a3e3 commit 4afee29

2 files changed: +3 -6 lines changed

README.md (+2 -1)
@@ -5,8 +5,9 @@
 
 This repository contains the accompanying code for [4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks, CVPR'19](https://arxiv.org/abs/1904.08755).
 
-## Change Logs
+## Change Log
 
+- 2020-05-19: Since commit [be5c3](https://github.com/StanfordVL/MinkowskiEngine/commit/be5c3c18b26d6a62380d613533b7a939a5458705), the latest Minkowski Engine no longer requires an explicit cache clear and uses GPU memory more efficiently.
 - 2020-05-04: As pointed out by Thomas Chaton on [Issue#30](https://github.com/chrischoy/SpatioTemporalSegmentation/issues/30), I also found that the training script contains bugs that prevent models from reaching the target performance described in the Model Zoo with the latest MinkowskiEngine. I am in the process of debugging, but I am having some difficulty finding the bugs, so I created another repository, [SpatioTemporalSegmentation-ScanNet](https://github.com/chrischoy/SpatioTemporalSegmentation-ScanNet), from my other private repo that reaches the target performance. Please refer to [SpatioTemporalSegmentation-ScanNet](https://github.com/chrischoy/SpatioTemporalSegmentation-ScanNet) for ScanNet training. I'll update this repo once I find the bugs and merge SpatioTemporalSegmentation-ScanNet back into this repo. Sorry for the trouble.
 
 ## Requirements
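
For context on the 2020-05-19 entry above: this repo's training loop used to call `torch.cuda.empty_cache()` every `config.empty_cache_freq` iterations as a workaround, and that is exactly the block removed from lib/train.py below. The sketch that follows only illustrates the old pattern and why it can be dropped; the loop structure, function, and variable names are assumptions, not the repo's actual code.

```python
# Rough sketch of the old workaround (assumed names and loop structure).
import torch


def train_iterations(data_loader, step_fn, empty_cache_freq=None):
    curr_iter = 0
    for batch in data_loader:
        step_fn(batch)  # forward / backward / optimizer step

        # Old workaround: periodically release PyTorch's unused cached GPU memory.
        # With MinkowskiEngine at or after commit be5c3 this explicit clear is
        # reported to be unnecessary, so the whole block can be dropped.
        if empty_cache_freq is not None and curr_iter % empty_cache_freq == 0:
            torch.cuda.empty_cache()

        curr_iter += 1
```

With the newer engine the entire `if` block can simply be deleted, which is what this commit does in lib/train.py.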

lib/train.py (+1 -5)
@@ -80,7 +80,7 @@ def train(model, data_loader, val_data_loader, config, transform_data_fn=None):
       # Preprocess input
       color = input[:, :3].int()
       if config.normalize_color:
-        input[:, 1:] = input[:, 1:] / 255. - 0.5
+        input[:, :3] = input[:, :3] / 255. - 0.5
       sinput = SparseTensor(input, coords).to(device)
 
       data_time += data_timer.toc(False)
@@ -150,10 +150,6 @@ def train(model, data_loader, val_data_loader, config, transform_data_fn=None):
         # Recover back
         model.train()
 
-      if curr_iter % config.empty_cache_freq == 0:
-        # Clear cache
-        torch.cuda.empty_cache()
-
       # End of iteration
       curr_iter += 1
 
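
The one-line change in the first hunk fixes which columns are normalized: `color = input[:, :3].int()` shows that the first three feature columns are RGB, so the normalization should cover columns 0, 1, and 2 (`input[:, :3]`) rather than columns 1 onward. Below is a minimal, self-contained sketch of the corrected behavior; the `(N, C)` feature layout and the helper name are assumptions for illustration, not code from this repo.

```python
# Minimal sketch of the corrected color normalization (assumed feature layout).
import torch


def normalize_color_features(feats: torch.Tensor) -> torch.Tensor:
    """Map the first three columns (RGB in [0, 255]) to [-0.5, 0.5]."""
    feats = feats.clone()
    # Mirrors the fixed line in lib/train.py: input[:, :3] = input[:, :3] / 255. - 0.5
    feats[:, :3] = feats[:, :3] / 255. - 0.5
    return feats


# Example: four points whose features are pure RGB.
feats = torch.tensor([[0., 128., 255.]] * 4)
print(normalize_color_features(feats))  # each value now lies in [-0.5, 0.5]
```

Before the fix, `input[:, 1:]` skipped the red channel and also rescaled any non-color feature columns that follow the RGB values.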
