- 08 Jun, 2019: 13 commits
-
-
Committed by Michael Suo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21531 ghimport-source-id: 711867e19cc3948a5e2a6aa8c4f2cd631abb04d2 Reviewed By: zdevito Differential Revision: D15719260 Pulled By: suo fbshipit-source-id: e88c5d3e14e6ecc956ce30ab0246ed606f4b0a38
-
Committed by Richard Zou
Summary: namedtensor build + test should run on PRs only if the commit message includes [namedtensor ci]. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21520 Differential Revision: D15718404 Pulled By: zou3519 fbshipit-source-id: ce8b5df2682e795e64958a9d49e2e3c091599b33
-
Committed by Rui Zhu
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21484 cast<int32_t*> => cast<int32_t> Also fixed a reserve problem that might have caused an incorrect pointer. Reviewed By: yinghai Differential Revision: D15699866 fbshipit-source-id: 374418476bddd60f5c5306c8c57319ccf28b9990
-
Committed by Michael Suo
Differential Revision: D15717337 Original commit changeset: 57e65a679a8f fbshipit-source-id: f73794087a23d56d03497b29d9a9e4e7d54deaad
-
Committed by Michael Suo
Summary: This should further reduce noise by only clang-formatting the lines you actually touched in the precommit hook. Pull Request resolved: https://github.com/pytorch/pytorch/pull/15657 Differential Revision: D15717337 Pulled By: suo fbshipit-source-id: 57e65a679a8fdee5c3ff28e241c74ced9398eb0c
-
Committed by Gregory Chanan
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21497 ghimport-source-id: bc03f274 Differential Revision: D15715478 Pulled By: gchanan fbshipit-source-id: 90e1b65249b4b12f936ee8877cc0bc5a972d9ceb
-
Committed by Elias Ellison
Summary: The lower-tuples pass didn't check bounds for tuple indexing. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21521 Differential Revision: D15716813 Pulled By: eellison fbshipit-source-id: 8eead98c2c63118e7d24a8c8bf6184b02afb7dcd
-
Committed by Tim Khatkevich
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21343 Needed to binarise features Reviewed By: yinghai Differential Revision: D15625653 fbshipit-source-id: 52f48259a040dac35a7000bb1eea9feb5c7ef1ab
-
Committed by svcscm
Reviewed By: yns88 fbshipit-source-id: 5778cdb5173fc16e5d5474fefa2ea89264101184
-
Committed by Tzu-Wei Huang
Summary: The new tracing implementation supports more modules, so much of the error-handling code can be removed by replacing the old one (LegacyTracedModule). cc orionr Pull Request resolved: https://github.com/pytorch/pytorch/pull/21339 Reviewed By: natalialunova Differential Revision: D15695154 Pulled By: orionr fbshipit-source-id: af7d35754e9f34bd1a0ad7b72a9ebe276ff8ab98
-
Committed by huba
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21474 ghimport-source-id: b2477765362248a80557d1a20db02a1290bdcde3 Differential Revision: D15699700 Pulled By: fbhuba fbshipit-source-id: 1aa4309fec0982c8477cfab29ca5f42d2b171f97
-
Committed by Gregory Chanan
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21496 ghimport-source-id: d7197bcc Differential Revision: D15715479 Pulled By: gchanan fbshipit-source-id: fa59eb808d26119b33eb97bb90ef70e95e58458d
-
Committed by Lara
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20533 Reviewed By: zrphercule Differential Revision: D15579713 Pulled By: houseroad fbshipit-source-id: 91f3ac0cb14ef226f980362b0013b6b92cb8b8da
-
- 07 Jun, 2019: 27 commits
-
-
Committed by Edward Yang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21476 ghimport-source-id: adfd08b8 Differential Revision: D15715270 Pulled By: ezyang fbshipit-source-id: dde02579d9960ac960306d0a024b8e17846ae0ff
-
Committed by Yiming Wu
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21488 Differential Revision: D15715264 Pulled By: ezyang fbshipit-source-id: 86978f294720e0ce6f60b748a71f0604d6cfa00c
-
Committed by Edward Yang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21466 ghimport-source-id: 0a235c8b Differential Revision: D15698096 Pulled By: ezyang fbshipit-source-id: 1759c2681071e9c7e83de3de86daf4333c5f8f3a
-
Committed by Kaixhin
Summary: Fixes #12259, needs to make sure tests (see #13766) don't break due to numerical precision issues. Not sure what would need to be adjusted here... Pull Request resolved: https://github.com/pytorch/pytorch/pull/13774 Differential Revision: D15715021 Pulled By: ezyang fbshipit-source-id: 20ce2beee1b39ebe9f023c5f2b25be53acccb5f3
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21281 ghimport-source-id: 4b241d54 Differential Revision: D15699063 Pulled By: zou3519 fbshipit-source-id: c0f00c370d266a4ea5211aae943041fd899e960a
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21280 ghimport-source-id: 92184832 Differential Revision: D15698516 Pulled By: zou3519 fbshipit-source-id: 502b9b019d51dd46327e6caf2af69aa520c70cb6
-
Committed by Edward Yang
Revert "Revert D15632268: [pytorch][PR] Continuation of Port max_unpool1d, max_unpool2d and max_unpool3d to ATen" (#21427) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21427 ghimport-source-id: 930c2fb2 Differential Revision: D15698423 Pulled By: ezyang fbshipit-source-id: 891c94c24b6d377cd6dd94d86cc66465b582359f
-
Committed by Edward Yang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21479 ghimport-source-id: 60fa97fb Differential Revision: D15713609 Pulled By: ezyang fbshipit-source-id: a3d9c49e2db985f4373508cd44e94d43ae6e24da
-
Committed by Peng Gong
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21492 If one async operator fails, the async_scheduling net currently only marks all scheduled async operators as finished without cancelling their callbacks. The new behavior is to cancel the callbacks first, then set the event status to finished. Reviewed By: ilia-cher Differential Revision: D15702475 fbshipit-source-id: 55a1774d768b2e238bab859b83332f1877a001ca
-
Committed by David Zhang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21423 - add gradient for boolean mask - add test for gradient checking Reviewed By: BIT-silence Differential Revision: D15640036 fbshipit-source-id: 79f40c6901e805bf1b8e9b01b57903e30b00f654
-
Committed by svcscm
Reviewed By: yns88 fbshipit-source-id: af5812e3d071e66f9d0272c36bf639eb04bde7e4
-
Committed by Owen Anderson
Summary: This saves ~7% DenseNet load time (4.3s -> 4.0s) on my laptop. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21429 Differential Revision: D15681374 fbshipit-source-id: 9925a6154d51f2d592e26cb5ff8bf7ab3ee2519b
-
Committed by Alex Şuhan
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21444 Differential Revision: D15701786 Pulled By: ezyang fbshipit-source-id: 8348e08f9b8f3047b30736f9a944786ab84e6b68
-
Committed by ptrblck
Summary: Turing GPUs (compute capability 7.5) require CUDA 10 to work properly. We've seen some issues with these GPUs when using PyTorch binaries built with CUDA 9 or older: [Discussion Board #1](https://discuss.pytorch.org/t/cudnn-status-execution-failed-error/38575) [Discussion Board #2](https://discuss.pytorch.org/t/cublas-runtime-error-on-gpu-running-but-works-on-cpu/46545/6) Tested using CUDA 9 with an RTX 2080Ti. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21468 Differential Revision: D15696170 Pulled By: ezyang fbshipit-source-id: ed43f4e4948d3f97ec8e7d7952110cbbfeafef2a
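The compatibility rule above can be sketched as a small check. This is an illustrative helper, not the PR's actual code; the function name is made up and the version table covers only the Turing case described here:

```python
def cuda_supports_device(cuda_major, capability):
    """Return False when a binary's CUDA toolkit is too old for the GPU.

    Turing parts report compute capability (7, 5) and need CUDA 10+.
    The table is deliberately minimal: older capabilities pass through.
    """
    major, minor = capability
    if (major, minor) >= (7, 5):
        return cuda_major >= 10
    return True

# An RTX 2080 Ti (capability 7.5) with a CUDA 9 binary is the failing case:
assert not cuda_supports_device(9, (7, 5))
assert cuda_supports_device(10, (7, 5))
assert cuda_supports_device(9, (6, 1))  # pre-Turing GPUs are fine on CUDA 9
```

In PyTorch itself, the two inputs would come from `torch.version.cuda` and `torch.cuda.get_device_capability()`.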
-
Committed by Edward Yang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21470 ghimport-source-id: 69800c1ce1187591b7bcdb8a63973b4fd8d0e326 Differential Revision: D15696930 Pulled By: ezyang fbshipit-source-id: fafbcba38d9572a23ee9c1d81cdcce3a154ae4c6
-
Committed by Huamin Li
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21502 In BenchResult, we keep name, avg_fwd, std_fwd, avg_bwd, and std_bwd. There is no information about the number of each iteration. In this diff, I am adding more info to BenchResult to include the number reported from each iteration. Reviewed By: wanchaol Differential Revision: D15706306 fbshipit-source-id: 3f14be4ba91f1f6da473995783bd7af1d067938d
-
Committed by Edward Yang
Differential Revision: D15629687 Original commit changeset: 2f87f18be655 fbshipit-source-id: a142c22be3fdf14a2b3c29b8766b218fb0883927
-
Committed by davidriazati
Summary: This moves `JitTestCase` to its own file so that we can have other jit test files (ex. `test_jit_py3.py`) There aren't any code changes, just a move and cleaning up the imports Pull Request resolved: https://github.com/pytorch/pytorch/pull/21491 Pulled By: driazati Differential Revision: D15703060 fbshipit-source-id: 6082e8b482100bb7b0cd9ae69738f1273e626171
-
Committed by Sebastian Messmer
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21426 - Differential Revision: D15679789 fbshipit-source-id: 5fd448e66af159fd79883aa874065424ec9694ad
-
Committed by Sebastian Messmer
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21368 - Differential Revision: D15629687 fbshipit-source-id: 2f87f18be65552f3eb3f4c945d7f19ba4bae0eb8
-
Committed by David Zhang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21230 tsia; with this diff we support empty tensors for the reshape operator Reviewed By: jerryzh168 Differential Revision: D15583356 fbshipit-source-id: 6d44c04e95ca3546509bfb12102e29c878f9a7c7
-
Committed by Aapo Kyrola
Summary: Modify the MKLDNN pooling operation to support ceil mode by adjusting the right/bottom padding accordingly. This is done similarly to Caffe (see discussion https://github.com/pytorch/pytorch/pull/19205#discussion_r276903751). To make this possible, I split the padding into left and right (top/bottom). This naming is confusing but actually follows mkldnn's own naming for pooling::compute(). We increase the right paddings so that the output matches the ceil-mode expected output size. Strengthened the test case. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21310 Reviewed By: bddppq Differential Revision: D15611664 Pulled By: akyrola fbshipit-source-id: 46b40015dafef69a8fd5e7b2c261d8dbf448cd20
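The padding adjustment described above follows from the standard pooling output-size formula. A minimal sketch of the arithmetic (the function name and signature are illustrative, not the Caffe2 code):

```python
import math

def extra_right_padding(in_size, kernel, stride, pad_l, pad_r):
    """Extra right/bottom padding needed so that floor-mode pooling
    arithmetic produces the same output size as ceil mode.

    floor mode: out = floor((in + pad_l + pad_r - kernel) / stride) + 1
    ceil mode:  out = ceil((in + pad_l + pad_r - kernel) / stride) + 1
    """
    out_ceil = math.ceil((in_size + pad_l + pad_r - kernel) / stride) + 1
    extra = (out_ceil - 1) * stride + kernel - (in_size + pad_l + pad_r)
    return max(extra, 0)

# in=5, k=2, s=2, no padding: ceil mode gives 3 outputs, floor mode only 2,
# so one extra element of right padding is needed.
assert extra_right_padding(5, 2, 2, 0, 0) == 1
# When the window tiles the input exactly, no extra padding is required.
assert extra_right_padding(4, 2, 2, 0, 0) == 0
```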
-
Committed by Daya Khudia
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21393 Result of splitting the base diff. We moved a header from src/* to include/fbgemm/* Reviewed By: jianyuh Differential Revision: D15635188 fbshipit-source-id: ad7d0ddba964ff1cb8b2e33f5f98e457a4d2eac9
-
Committed by Brian Vaughan
Summary: https://github.com/pytorch/pytorch/issues/18111 Pull Request resolved: https://github.com/pytorch/pytorch/pull/20458 Differential Revision: D15699732 Pulled By: nairbv fbshipit-source-id: f7a5424c1f1d3b0e4eba0d503d75ae8a18ef7ff4
-
Committed by ThisIsIsaac
Summary: Changed `UpsampleBilinearKernel` such that throughput increased 40~50%. I tested locally with my own test code -- **not pytorch's provided test code** -- because I am having a build problem (which I made an issue about [here](https://github.com/pytorch/pytorch/issues/19184)). I tested with various tensor sizes, and across all the sizes it showed a significant increase in throughput. 1. added `__restrict__` 2. instead of launching as many threads as there are output elements, I launched only `output_height * output_width` many threads and had each thread iterate through the channel and batch dimensions. Pull Request resolved: https://github.com/pytorch/pytorch/pull/19306 Differential Revision: D15701840 Pulled By: ezyang fbshipit-source-id: 53c54d4f4e4a28b58ecc7d7ae6b864cbfc760e27
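The thread-mapping change in point 2 can be illustrated in plain Python. This is only a sketch of the iteration order, not the CUDA kernel itself; the function name is made up:

```python
def upsample_work_items(batch, channels, out_h, out_w):
    """Yield output coordinates in the order described above: one
    'thread' per spatial position (h, w), each iterating over the
    batch and channel dimensions, rather than one thread per element."""
    for tid in range(out_h * out_w):      # one thread per (h, w) position
        h, w = divmod(tid, out_w)
        for n in range(batch):            # each thread walks n and c
            for c in range(channels):
                yield (n, c, h, w)

items = list(upsample_work_items(2, 3, 4, 4))
assert len(items) == 2 * 3 * 4 * 4        # every output element covered
assert len(set(items)) == len(items)      # each exactly once
```

The payoff on a GPU is that the interpolation weights for a given (h, w) can be computed once per thread and reused across all batch and channel slices.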
-
Committed by Xingdong Zuo
Summary: Fix #18254, the numerical instability of `SigmoidTransform`. Pull Request resolved: https://github.com/pytorch/pytorch/pull/19802 Differential Revision: D15701837 Pulled By: ezyang fbshipit-source-id: fe6c755c523487c8bbdcc3bfb8455801617c70a4
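A standard way to stabilize a sigmoid transform's log-det-Jacobian (shown here as a sketch of the general technique, not necessarily this PR's exact change) is to route it through softplus instead of computing `log(y * (1 - y))` directly:

```python
import math

def softplus(x):
    # Stable log(1 + exp(x)): never exponentiates a large positive value.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def sigmoid_log_abs_det_jacobian(x):
    # With y = sigmoid(x): log|dy/dx| = log(y) + log(1 - y)
    #                                 = -softplus(-x) - softplus(x)
    return -softplus(-x) - softplus(x)

# Agrees with the naive formula where that formula is well-behaved...
y = 1.0 / (1.0 + math.exp(-2.0))
assert abs(sigmoid_log_abs_det_jacobian(2.0) - math.log(y * (1.0 - y))) < 1e-9

# ...and stays finite where 1 - y underflows to 0 and log(0) would blow up.
assert math.isfinite(sigmoid_log_abs_det_jacobian(100.0))
```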
-
Committed by fehiepsi
Summary: Currently, when the input of MVN is a precision matrix, we take the inverse to convert it to a covariance matrix. This, however, easily makes the covariance matrix not positive definite, which triggers a Cholesky error. For example,

```python
import torch
torch.manual_seed(0)
x = torch.randn(10)
P = torch.exp(-(x - x.unsqueeze(-1)) ** 2)
torch.distributions.MultivariateNormal(loc=torch.ones(10), precision_matrix=P)
```

triggers `RuntimeError: cholesky_cpu: U(8,8) is zero, singular U.` This PR uses some math tricks ([ref](https://nbviewer.jupyter.org/gist/fehiepsi/5ef8e09e61604f10607380467eb82006#Precision-to-scale_tril)) to only take the inverse of a triangular matrix, which increases stability. cc fritzo, neerajprad, SsnL Pull Request resolved: https://github.com/pytorch/pytorch/pull/21366 Differential Revision: D15696972 Pulled By: ezyang fbshipit-source-id: cec13f7dfdbd06dee94b8bed8ff0b3e720c7a188
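The linked precision-to-scale_tril trick never forms the full inverse of the precision matrix: it takes one Cholesky of the index-reversed matrix and then inverts only a triangular factor. A NumPy sketch of the same idea (the helper name is illustrative):

```python
import numpy as np

def precision_to_scale_tril(P):
    """Return lower-triangular L with L @ L.T == inv(P).

    Flipping both axes (J P J with the exchange matrix J) turns the
    Cholesky factor into an upper-triangular factor U of P = U @ U.T;
    then scale_tril is inv(U.T), the inverse of a triangular matrix.
    """
    Lf = np.linalg.cholesky(P[::-1, ::-1])   # Cholesky of the flipped matrix
    L_inv = Lf[::-1, ::-1].T                 # un-flip: lower-triangular U.T
    return np.linalg.inv(L_inv)              # only a triangular inverse

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
P = A @ A.T + 6 * np.eye(6)                  # a positive-definite precision
L = precision_to_scale_tril(P)
assert np.allclose(L @ L.T, np.linalg.inv(P))
assert np.allclose(L, np.tril(L))            # L really is lower-triangular
```

In production code the final step would use a triangular solve rather than `np.linalg.inv`, which is where the stability gain comes from.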
-