Committed by Michael Suo
Now, when a ScriptModule is initialized during the torch.jit.load() process, a cpp module already backs it. That means setting `training` would overwrite whatever state the initialized ScriptModule already had. This PR splits out the common "set up internal state" part of the Module __init__ and calls it from both ScriptModule.__init__ and Module.__init__, leaving the "nn.Module-specific" part (setting `self.training`) to the nn.Module __init__.

ghstack-source-id: 9b2ba8a15c43cf230363e4cd10ba4ad3ac4931f7
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23680
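A minimal sketch of the shape of this refactor. This is not the actual PyTorch code; the class bodies and the helper name `_construct` are assumptions used only to illustrate splitting shared setup away from nn.Module-specific state:

```python
# Hypothetical illustration of the refactor: shared internal-state
# setup lives in _construct(), which both __init__s call, while the
# nn.Module-specific `training` flag is set only by Module.__init__.

class Module:
    def __init__(self):
        self._construct()       # shared "set up internal state" part
        self.training = True    # nn.Module-specific part

    def _construct(self):
        # Common setup that is safe to run even when a backing object
        # already exists and must not have its state clobbered.
        self._parameters = {}
        self._buffers = {}
        self._modules = {}


class ScriptModule(Module):
    def __init__(self):
        # During torch.jit.load(), a cpp module already backs this
        # object, so we run only the shared setup and deliberately
        # skip Module.__init__, which would overwrite `training`.
        self._construct()
```

With this split, `ScriptModule.__init__` no longer touches `training`, so whatever value the backing cpp module carries survives loading.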
7c404fa5