- 02 Aug 2019, 3 commits
-
-
Committed by Michael Suo
Now when initializing a ScriptModule during the torch.jit.load() process, there is already a C++ module backing it, so setting `training` would overwrite whatever value the initialized ScriptModule had. This PR splits the common "set up internal state" part out of the Module __init__ and calls it from both ScriptModule.__init__ and Module.__init__, leaving the nn.Module-specific part (setting `self.training`) to the nn.Module __init__. ghstack-source-id: 9b2ba8a15c43cf230363e4cd10ba4ad3ac4931f7 Pull Request resolved: https://github.com/pytorch/pytorch/pull/23680
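A minimal pure-Python sketch of the refactor described above. The class and method names (`_construct`, the `loaded_training` parameter) are illustrative stand-ins for the idea in the PR, not the actual torch.nn source:

```python
class Module:
    def __init__(self):
        self._construct()
        # nn.Module-specific state; running this during torch.jit.load()
        # would clobber the value the loaded module already has.
        self.training = True

    def _construct(self):
        # Common "set up internal state" shared by both init paths.
        self._parameters = {}
        self._buffers = {}
        self._modules = {}


class ScriptModule(Module):
    def __init__(self, loaded_training=False):
        # The loader already knows the right `training` flag from the
        # backing C++ module, so only run the common setup and keep it.
        self._construct()
        self.training = loaded_training


m = Module()
s = ScriptModule(loaded_training=False)
```

The point of the split is that `ScriptModule.__init__` can reuse the shared setup without inheriting the side effect of resetting `training`.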
-
Committed by Soumith Chintala
-
Committed by Tongzhou Wang
* Slightly improve dataloader docs on when auto-batching is disabled
* Fix a typo
-
- 01 Aug 2019, 6 commits
-
-
Committed by Tongzhou Wang
-
Committed by Richard Zou
-
Committed by Vishwak Srinivasan
Summary: Changelog: - Use `narrow` instead of `narrow_copy` when returning the result Pull Request resolved: https://github.com/pytorch/pytorch/pull/23591 Test Plan: - All tests should pass to ensure that the change is correct Fixes https://github.com/pytorch/pytorch/issues/23580 Differential Revision: D16581174 Pulled By: ezyang fbshipit-source-id: 1b6bf7d338ddd138ea4c6aa6901834dd202ec79c
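The distinction matters because `narrow` returns a view that shares the original storage, while `narrow_copy` allocates new memory. A rough pure-Python analogy (not PyTorch code) using `memoryview` versus an explicit copy:

```python
buf = bytearray(b"abcdef")

view = memoryview(buf)[1:4]  # like narrow: shares the underlying storage
copy = bytes(buf[1:4])       # like narrow_copy: independent allocation

buf[1] = ord("Z")            # mutate the underlying storage

# The view reflects the change; the copy does not.
assert view.tobytes() == b"Zcd"
assert copy == b"bcd"
```

Returning the view avoids an allocation and a data copy on every call, which is why the changelog swaps one for the other.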
-
Committed by Jerry Zhang
Summary: This accidentally calls clone; what we want is to create an empty tensor and set its storage. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23452 ghstack-source-id: 87438096 Differential Revision: D16442756 fbshipit-source-id: 6d5663f82c9bd4e9de8fc846c52992477843af6a
-
Committed by Richard Zou
This sets up the docs build in dry-run mode. If everything looks okay I will enable it.
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23556 Test Plan: - Run ci Imported from OSS Differential Revision: D16563747 Pulled By: zou3519 fbshipit-source-id: 104371b3712c00b073a82e5145090e7bd6fd2d53
-
- 31 Jul 2019, 2 commits
-
-
Committed by Soumith Chintala
-
Committed by vishwakftw
Summary: Changelog: - Rename `gels` to `lstsq` - Fix all callsites - Rename all tests - Create a tentative alias for `lstsq` under the name `gels` and add a deprecation warning to discourage its usage. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23460 Test Plan: - All tests should pass to confirm that the patch is correct Differential Revision: D16547834 Pulled By: colesbury fbshipit-source-id: b3bdb8f4c5d14c7716c3d9528e40324cc544e496
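The "tentative alias plus deprecation warning" pattern can be sketched in plain Python with the standard `warnings` module (the function bodies here are placeholders, not the real `lstsq` implementation):

```python
import warnings


def lstsq(a, b):
    # Placeholder for the real least-squares routine.
    return ("solution", a, b)


def gels(a, b):
    # Tentative alias kept for backward compatibility; warns on every use
    # so callers migrate to the new name without breaking immediately.
    warnings.warn(
        "gels is deprecated in favour of lstsq",
        DeprecationWarning,
        stacklevel=2,
    )
    return lstsq(a, b)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = gels(1, 2)
```

Routing the alias through the new function (rather than duplicating the body) guarantees the two names can never drift apart during the deprecation window.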
-
- 30 Jul 2019, 29 commits
-
-
Committed by Hong Xu
Summary: After https://github.com/pytorch/pytorch/issues/23455, there is no need for this preprocessing in the Python scripts; the flags are automatically processed by CMake (also, CPPFLAGS here was probably meant to be CXXFLAGS). Reference: - https://cmake.org/cmake/help/v3.15/envvar/CFLAGS.html - https://cmake.org/cmake/help/v3.15/envvar/CXXFLAGS.html - https://cmake.org/cmake/help/v3.15/envvar/LDFLAGS.html Pull Request resolved: https://github.com/pytorch/pytorch/pull/23528 Differential Revision: D16561561 Pulled By: ezyang fbshipit-source-id: 962a27a2b0a18db0f95477ad067a2611e4128187
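In practice this means the flags are simply passed through the standard environment variables that CMake itself reads (per the docs linked above), with no massaging by the Python build scripts. The flag values below are illustrative only:

```shell
# CMake picks up CFLAGS/CXXFLAGS/LDFLAGS from the environment on the
# first configure, so they can be set directly on the build invocation.
CFLAGS="-O2" \
CXXFLAGS="-O2 -fno-omit-frame-pointer" \
LDFLAGS="-L/usr/local/lib" \
    python setup.py build
```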
-
Committed by Pavel Belevich
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22917 Differential Revision: D16521429 Pulled By: pbelevich fbshipit-source-id: 80ae583c6486d6948431b79e1452902bdf2cfbc3
-
Committed by Johannes M Dieterich
Summary: Only check for CMake dependencies we directly depend on (e.g., hipsparse but not rocsparse), and use CMake targets for ROCm where possible. While at it, update the docker CI build infrastructure to only pull in packages by name that we directly depend on (anticipating the demise of, e.g., miopengemm). I do not anticipate a docker rebuild being necessary at this stage, as the changes are somewhat cosmetic. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23527 Differential Revision: D16561010 Pulled By: ezyang fbshipit-source-id: 87cd9d8a15a74caf9baca85a3e840e9d19ad5d9f
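The "use CMake targets where possible" part can be sketched as a config fragment. The target and project names here are illustrative assumptions, not the actual PyTorch CMake source:

```cmake
# Sketch: only find packages we depend on directly, and link against the
# imported targets they export instead of raw include/library paths.
find_package(hipsparse REQUIRED)
# No find_package(rocsparse): it is an indirect dependency of hipsparse
# and its own packaging may change underneath us.

target_link_libraries(my_hip_library PRIVATE roc::hipsparse)
```

Imported targets carry their include directories and transitive link requirements with them, which is what makes it safe to stop enumerating indirect dependencies by hand.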
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23229 Test Plan: Imported from OSS Differential Revision: D16494413 Pulled By: zou3519 fbshipit-source-id: 4acb85e5a4ad09bf5f7cbb84cc8d4ceac0cd9967
-
Committed by Gabriele Mambrini
Summary: Syncing worker requirement mismatches to improve remote build time. Created actions: LARGE: 66, MEDIUM: 649, XLARGE: 1. Updated actions: from LARGE to MEDIUM: 18, from LARGE to XLARGE: 2, from MEDIUM to LARGE: 20, from XLARGE to LARGE: 1. Differential Revision: D16559356 fbshipit-source-id: a51ef034265649314661ab0e283089a069a20437
-
Committed by Ailing Zhang
Summary: Fixes https://github.com/pytorch/pytorch/issues/21406 Pull Request resolved: https://github.com/pytorch/pytorch/pull/23433 Differential Revision: D16524135 Pulled By: ailzhang fbshipit-source-id: e7684fec60c9b9db9a09f8ac157b13c8dde1bdd2
-
Committed by Will Feng
Summary: When a user tries to change metadata of a tensor created from `.data` or `.detach()`, we currently show the error message "<function_name> is not allowed on Tensor created from .data or .detach()". However, this error message doesn't suggest what the right fix should look like. This PR improves the error message. Closes https://github.com/pytorch/pytorch/issues/23393. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23504 Differential Revision: D16547415 Pulled By: yf225 fbshipit-source-id: 37f4a0385442e2b0966386fb14d3d938ecf4230c
-
Committed by Owen Anderson
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23524 Differential Revision: D16549562 fbshipit-source-id: 58351fc2858d495b135023626116f6f565c8e9b1
-
Committed by Sebastian Messmer
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23270 ghstack-source-id: 87389530 Differential Revision: D16448942 fbshipit-source-id: e6b578f0e97776112259d7ea38e143e4716ec273
-
Committed by Michael Suo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23542 Test Plan: Imported from OSS Differential Revision: D16557122 Pulled By: suo fbshipit-source-id: c86578aa2c55f44ed5d573d33874a82244df3d09
-
Committed by Michael Suo
Differential Revision: D16526027 Original commit changeset: 109f2968430d fbshipit-source-id: c27252540ec6b7da60739eb7dcc8b1650672c226
-
Committed by Michael Suo
Differential Revision: D16554694 Original commit changeset: 0fae4458f18c fbshipit-source-id: 08aa0c292fa5b2dbdd0d1f0e59f531416edef760
-
Committed by Michael Suo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23535 Test Plan: Imported from OSS Differential Revision: D16554694 Pulled By: suo fbshipit-source-id: 0fae4458f18c06ffbd484905ad7836dce9ce69cc
-
Committed by davidriazati
Summary: Previously these were left out, which would lead to confusing messages; now it looks something like: ``` torch.jit.frontend.UnsupportedNodeError: import statements aren't supported : at ../test.py:13:9 def bad_fn(self): import pdb ~~~~~~ <--- HERE '__torch__.X' is being compiled since it was called from 'fn' at ../test.py:16:12 def fn(x): return X(10) ~~~~ <--- HERE ``` Fixes #23453 Pull Request resolved: https://github.com/pytorch/pytorch/pull/23454 Pulled By: driazati Differential Revision: D16526027 fbshipit-source-id: 109f2968430dbf51ee91b1b3409badfd557d19a4
-
Committed by davidriazati
Summary: Use the recursive script API in the existing docs. TODO: * Migration guide for 1.1 -> 1.2 Pull Request resolved: https://github.com/pytorch/pytorch/pull/21612 Pulled By: driazati Differential Revision: D16553734 fbshipit-source-id: fb6be81a950224390bd5d19b9b3de2d97b3dc515
-
Committed by Daya Khudia
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23521 The non-fbgemm path should have the same arguments as the fbgemm path. Reviewed By: jianyuh Differential Revision: D16547637 fbshipit-source-id: bb00d725fb968cbee32defb8facd2799a7e79bb4
-
Committed by Michael Suo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23154 Test Plan: Imported from OSS Differential Revision: D16441913 Pulled By: suo fbshipit-source-id: a79f2c3e06a33cbd79b2e3333f16c069f356f451
-
Committed by Hong Xu
Summary: This resolves two issues in one shot: - `sub` shouldn't be available for the bool type. - When `sub` is applied to an unsupported type, the current error message shows "add_cpu/add_cuda is not implemented for [type]"; it should say "sub_cpu/sub_cuda" instead. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23519 Differential Revision: D16548770 Pulled By: izdeby fbshipit-source-id: fe404a2a97b8d11bd180ec41364bf8e68414fb15
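A pure-Python sketch of the behavior after the fix. The names (`SUPPORTED`, `binary_op`, the `*_cpu` message format) are illustrative, not the actual PyTorch dispatcher:

```python
# Each op declares its own supported input types; sub excludes bool.
SUPPORTED = {"add": {int, float, bool}, "sub": {int, float}}


def binary_op(name, a, b):
    # Raise using the operator's *own* name, rather than a message
    # copy-pasted from add, so "sub" failures report sub_cpu.
    if type(a) not in SUPPORTED[name] or type(b) not in SUPPORTED[name]:
        raise TypeError(
            f"{name}_cpu is not implemented for '{type(a).__name__}'"
        )
    return a - b if name == "sub" else a + b


try:
    binary_op("sub", True, False)
except TypeError as e:
    msg = str(e)
```

Both halves of the fix show up here: bool inputs to `sub` are rejected, and the rejection message carries the right op name.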
-
Committed by Mingzhe Li
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23276 This diff introduces a new feature to simplify benchmarking the backward path of ops. Here is an example: ``` ... self.input_one = torch.rand(M, N, K, requires_grad=self.auto_set()) self.input_two = torch.rand(M, N, K, requires_grad=self.auto_set()) ... ``` In this way, the benchmark will generate three different test cases. 1. input_one requires grad 2. input_two requires grad 3. both inputs require grad Here is a sample output: ``` # Benchmarking PyTorch: add # Mode: Eager # Name: add_M1_N8_K8_bwdall # Input: M: 1, N: 8, K: 8 Backward Execution Time (us) : 863.744 # Benchmarking PyTorch: add # Mode: Eager # Name: add_M1_N8_K8_bwd1 # Input: M: 1, N: 8, K: 8 Backward Execution Time (us) : 727.915 # Benchmarking PyTorch: add # Mode: Eager # Name: add_M1_N8_K8_bwd2 # Input: M: 1, N: 8, K: 8 Backward Execution Time (us) : 687.626 ``` Reviewed By: zheng-xq Differential Revision: D16450355 fbshipit-source-id: 50ae0916e81c3ff9f0c482ed6d386319eb15b305
-
Committed by Ilia Cherniavskii
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23417 Test Plan: cd docs; make html Imported from OSS Differential Revision: D16523781 Pulled By: ilia-cher fbshipit-source-id: d6c09e8a85d39e6185bbdc4b312fea44fcdfff06
-
Committed by Wanchao Liang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23249 Test Plan: Imported from OSS Differential Revision: D16466587 Pulled By: wanchaol fbshipit-source-id: a721da01b2da0ef90cac80b77f1285102e3b1118
-
Committed by Wanchao Liang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23248 Test Plan: Imported from OSS Differential Revision: D16466588 Pulled By: wanchaol fbshipit-source-id: 3c3d5dec2cea2f9cb080eadaef457cc62ac3fbe0
-
Committed by BowenBao
Summary: No real change on the CI since currently the default latest is 0.4.0. houseroad bddppq Pull Request resolved: https://github.com/pytorch/pytorch/pull/23517 Differential Revision: D16550375 Pulled By: bddppq fbshipit-source-id: a669b8af678c79c4d6909300b28458fe6b7cd30c
-
Committed by Wanchao Liang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23247 Test Plan: Imported from OSS Differential Revision: D16466590 Pulled By: wanchaol fbshipit-source-id: cf52721eacd177d9040564790382db13a9fcc2fe
-
Committed by Edward Thomson
Summary: Install clang-tidy (from LLVM 8) for the `clang_tidy` job. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23518 Differential Revision: D16549621 Pulled By: ezyang fbshipit-source-id: b1d20641380cdfdb0589249770b98163528fa69f
-
Committed by Hong Xu
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22926 Differential Revision: D16546369 Pulled By: colesbury fbshipit-source-id: 56f7ef4476e586dee19366fdb720085d1c2f2027
-
Committed by dongfangduoshou123
Summary: Avoid including the same header file twice. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23418 Differential Revision: D16546422 Pulled By: colesbury fbshipit-source-id: 5cd868cce73d9199ced9b6f2f6f57bf42e5a5d5b
-
Committed by Elias Ellison
Summary: There is an internal fbcode assert that fails if I do not add these checks. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23511 Differential Revision: D16545606 Pulled By: eellison fbshipit-source-id: cd3a799850bae8f052f9d81c1e4a2678fda19317
-
Committed by Yaxun (Sam) Liu
Summary: The PyTorch test sets a policy() method to assertLeaksNoCudaTensors. Whenever a test is run, assertLeaksNoCudaTensors is called, which in turn calls CudaMemoryLeakCheck, which in turn calls initialize_cuda_context_rng, which executes torch.randn on each device, launching a kernel on each device. Since the kernel may not have finished on device 1, the assertion self.assertTrue(s1.query()) fails. The fix is to insert torch.cuda.synchronize(d0) and torch.cuda.synchronize(d1) at the beginning of the test so that previously launched kernels finish before the real test begins. Pull Request resolved: https://github.com/pytorch/pytorch/pull/23520 Differential Revision: D16547701 Pulled By: soumith fbshipit-source-id: 42ad369f909d534e15555493d08e9bb99dd64b6a
-