Currently, `rf.set_default_device` is only called after the model has been created. Specifically, we only use `rf.set_default_device_ctx(self._device)` around the run step, and not otherwise. Model creation happens on CPU, and then we call `model.to(device)`.
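To illustrate, here is a minimal sketch of the current flow (simplified; the actual `Engine` code differs in detail, and `run_step` is just a placeholder):

```python
import torch
import returnn.frontend as rf


def run_step(model: torch.nn.Module):
    ...  # placeholder for the actual train/forward step


def setup_and_run(model_def, device: str):
    model = model_def()  # parameters are allocated on CPU first
    model.to(device)  # then copied over to the target device
    # The RF default device is only active around the run step:
    with rf.set_default_device_ctx(device):
        run_step(model)
```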
I'm not 100% sure anymore about the reasoning for this. I think I did it this way to make sure the initialization stays mostly deterministic and independent of which device you use (e.g. on CUDA, the RNG behaves differently, maybe even across CUDA versions). There could also be other issues.
Note that somewhat related to `rf.set_default_device` is also `torch.set_default_device`.
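For reference, `torch.set_default_device` (available since PyTorch 2.0) changes where factory functions allocate tensors by default, which also affects parameter creation in module constructors:

```python
import torch

torch.set_default_device("cuda")  # assumes a CUDA device is available
x = torch.zeros(2, 3)  # allocated on cuda:0, no explicit .to() needed
layer = torch.nn.Linear(3, 4)  # parameters are also created on cuda:0
torch.set_default_device("cpu")  # restore the default
```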
Changing this could potentially break some existing code, and the concern about deterministic init remains valid. But I still want to discuss this: for bigger models, creating the parameters on CPU first and then copying them over can be quite some unnecessary overhead at startup.
Maybe we could add an option for this, like `global_set_default_device: bool`, which, if True, would call `rf.set_default_device` (and maybe also `torch.set_default_device`) right in `Engine.__init__`?
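A rough sketch of what that could look like (`global_set_default_device` is just the suggested name, not an existing config option, and the `Engine.__init__` internals are simplified):

```python
import torch
import returnn.frontend as rf


class Engine:
    def __init__(self, config):
        self._device = config.value("device", "cpu")
        if config.bool("global_set_default_device", False):
            # Set the default device once, globally, at startup,
            # so the model is created directly on the target device
            # and the extra CPU init + copy is avoided.
            rf.set_default_device(self._device)
            torch.set_default_device(self._device)  # maybe also this
```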
And then there is also the question: should we introduce a new behavior version and make this enabled by default there?
(cc @NeoLegends)