- 14 June 2019, 26 commits
-
Committed by Benny Chen
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20924 I found a Python 3 bug when deserializing caffe2 code: the exception thrown is a Unicode-related error rather than just a decode error, and we need to catch that as well. Reviewed By: ipiszy Differential Revision: D15293221 fbshipit-source-id: 29820800d1b4cbe5bf3f5a189fe2023e655d0508
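A minimal sketch of the pattern (hypothetical helper, not the actual caffe2 code): on Python 3 a byte-string decode can raise `UnicodeDecodeError`, which a handler written only for the library's decode error would not catch.
```python
def try_decode(raw_bytes):
    # Hypothetical helper illustrating the fix: on Python 3, decoding can
    # raise UnicodeDecodeError, which must be handled explicitly in addition
    # to the library's own decode error.
    try:
        return raw_bytes.decode("utf-8")
    except UnicodeDecodeError:
        return None
```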
-
Committed by Aapo Kyrola
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21763 Custom __getattr__ functions may only raise AttributeError. This code threw NotImplementedError, which caused trouble upstream when hasattr() was called. Differential Revision: D15815176 fbshipit-source-id: 0982e2382de4578d3fc05c5d2a63f624d6b4765e
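A standalone illustration of why this matters (a sketch, not the patched code): on Python 3, `hasattr()` only swallows `AttributeError`, so any other exception escaping `__getattr__` propagates to the caller.
```python
class Broken:
    def __getattr__(self, name):
        raise NotImplementedError(name)  # escapes hasattr() on Python 3

class Fixed:
    def __getattr__(self, name):
        raise AttributeError(name)       # correctly means "no such attribute"

print(hasattr(Fixed(), "x"))   # False
print(hasattr(Broken(), "x"))  # raises NotImplementedError
```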
-
Committed by Anshul Jain (B*8)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21652 This diff fixes an issue with empty ROIs for convTranspose. Issue stack trace: P65374505 Reviewed By: jerryzh168 Differential Revision: D15766739 fbshipit-source-id: 39cf8feca66b6aae22ff4ec5c1b6a4e3f20f378d
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21647 ghimport-source-id: 1db4ec31f047f7854a39c28e2b38918dc6b44f42 Differential Revision: D15804425 Pulled By: zou3519 fbshipit-source-id: 575cc3de09287efe75e7052df129626748208d0d
-
Committed by Edward Yang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21750 ghimport-source-id: 4792aa5ccab7e4b54c21f23d0b78802f85bbeb8d Differential Revision: D15819367 Pulled By: ezyang fbshipit-source-id: db91ee727c66469ac78e59b3662f29db53a916bc
-
Committed by Ailing Zhang
Summary: fixes https://github.com/pytorch/hub/issues/29 Pull Request resolved: https://github.com/pytorch/pytorch/pull/21685 Differential Revision: D15817774 Pulled By: ailzhang fbshipit-source-id: d2f615e5d431186d45a21d8300fb9ba3c37b246c
-
Committed by Sherman Wong
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21446 This makes it easier to trace the iteration id when looking at the trace diagram. Reviewed By: ilia-cher Differential Revision: D15628950 fbshipit-source-id: ee75b3bdb14a36abc18c7bddc49d8ec9789b724d
-
Committed by Mikhail Zolotukhin
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21141 Differential Revision: D15808354 Pulled By: ZolotukhinM fbshipit-source-id: 16d938fd5acafb445a0c433cabc9a55cab563165
-
Committed by Sam Gross
Summary: ``` The stride calculation using OffsetCalculator performs poorly with MAX_DIMS=25. This reduces MAX_DIMS (after coalescing) to 16 on ROCm. I think it's unlikely that anyone will exceed this limit. If they do, we can add additional specializations for ROCm with more dimensions. ``` I'm not sure about the underlying cause. With MAX_DIM=25, the add kernel's params are ~648 bytes vs. ~424 bytes with MAX_DIM=16. The kernel's instruction footprint is bigger too, but most of these instructions are never executed and most kernel parameters are never loaded, because the typical dimensionality is much smaller. Mini benchmark here: https://gist.github.com/colesbury/1e917ae6a0ca9d24712121b92fed4c8f (broadcasting operations are much faster) cc iotamudelta Pull Request resolved: https://github.com/pytorch/pytorch/pull/21754 Reviewed By: bddppq Differential Revision: D15811906 Pulled By: colesbury fbshipit-source-id: 063f92c083d26e2ef2edc98df7ff0400f9432b9d
-
Committed by Sungmann Cho
Summary: Fix typos: alloctor -> allocator, excutable -> executable, excution -> execution, foward -> forward, initiaize -> initialize, paralell -> parallel, preprocesor -> preprocessor, tranpose -> transpose. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21665 Differential Revision: D15806155 Pulled By: soumith fbshipit-source-id: d92b21ec8650a2b32f05faf9af0b7d2b073e992c
-
Committed by Natalia Gimelshein
Summary: Currently multi-head attention for the half type is broken ``` File "/home/ngimel/pytorch/torch/nn/functional.py", line 3279, in multi_head_attention_forward attn_output = torch.bmm(attn_output_weights, v) RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' ``` because softmax converts half inputs into fp32 inputs. This is unnecessary: all the computation in softmax is done in fp32 anyway, and the results need to be converted back to fp16 for the subsequent batch matrix multiply, so nothing is gained by writing them out in fp32. This PR gets rid of the type casting in softmax, so that half works. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21658 Differential Revision: D15807487 Pulled By: zhangguanheng66 fbshipit-source-id: 4709ec71a36383d0d35a8f01021e12e22b94992d
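The dtype behavior being fixed, as a standalone sketch (shapes and names are illustrative, not from the patch): softmax can accumulate in fp32 internally while still producing a half output that `torch.bmm` accepts.
```python
import torch

# Illustrative shapes: (batch*heads, tgt_len, src_len) attention weights
# and (batch*heads, src_len, head_dim) values.
scores = torch.randn(8, 5, 5, dtype=torch.half, device="cuda")
v = torch.randn(8, 5, 4, dtype=torch.half, device="cuda")

attn = torch.softmax(scores, dim=-1)  # stays half after the fix
out = torch.bmm(attn, v)              # both operands half, so bmm succeeds

# Before the fix, softmax effectively returned fp32 for half inputs, as if
# calling torch.softmax(scores, dim=-1, dtype=torch.float), and bmm then
# failed on the Float/Half operand mismatch shown in the traceback above.
```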
-
Committed by Will Feng
Summary: In this PR, we use `expect` to fill in the token for pytorchbot when doing `git push`, so that we don't need to save the token in the git remote URL. Pull Request resolved: https://github.com/pytorch/pytorch/pull/20459 Differential Revision: D15811676 Pulled By: yf225 fbshipit-source-id: cd3b780da05d202305f76878e55c3435590f15a8
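A hedged sketch of the approach in Python using `pexpect` (the actual script uses expect(1); the prompt wording and the environment variable name below are assumptions):
```python
import os
import pexpect  # assumption: pexpect stands in for the expect(1) tool the script uses

child = pexpect.spawn("git push origin HEAD")
child.expect("Password for .*:")                # HTTPS credential prompt (assumed wording)
child.sendline(os.environ["PYTORCHBOT_TOKEN"])  # hypothetical env var holding the token
child.expect(pexpect.EOF)
```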
-
Committed by Edward Yang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21755 ghimport-source-id: dfb53759024d9ba9d104fdb2a8151ab996e55234 Differential Revision: D15811172 Pulled By: ezyang fbshipit-source-id: c8c7c1c15277d8fe8cc513e20af449257d7ff15c
-
Committed by BowenBao
Summary: Fix an obvious bug. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21669 Reviewed By: zrphercule Differential Revision: D15806614 Pulled By: houseroad fbshipit-source-id: d0f6e934252e0057f3dbcc7f160236ee6f4497ac
-
Committed by Iurii Zdebskyi
Summary: Fixes the reported [bug](https://github.com/pytorch/pytorch/issues/20322). The issue was caused by not checking the dimensions of the source vs. destination tensors. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21617 Differential Revision: D15749963 Pulled By: izdeby fbshipit-source-id: acff114c729fd9c0a9a51325e0ebd8b42e1f2fc1
-
Committed by Aapo Kyrola
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21742 Add an error message to NotImplementedError so we know which function it is about. Reviewed By: bddppq Differential Revision: D15806379 fbshipit-source-id: 14eab9d03aa5b44ab95c5caeadc0e01d51f22188
-
Committed by Guanheng Zhang
Summary: Add docs for TransformerEncoder and TransformerDecoder, plus minor edits. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21746 Differential Revision: D15807498 Pulled By: zhangguanheng66 fbshipit-source-id: 388efb5821c4c3d25865cecea70902e9b2bf5d15
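A minimal usage sketch of the modules these docs cover (hyperparameters are illustrative):
```python
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

src = torch.rand(10, 32, 512)  # (seq_len, batch, d_model)
tgt = torch.rand(20, 32, 512)

memory = encoder(src)          # (10, 32, 512)
out = decoder(tgt, memory)     # (20, 32, 512)
```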
-
Committed by davidriazati
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21751 Pulled By: driazati Differential Revision: D15809091 fbshipit-source-id: 3cc96e632a7b89b4d86d68d2a76021d971447e12
-
Committed by Tongzhou Wang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21747 Differential Revision: D15808189 Pulled By: ezyang fbshipit-source-id: 5413eaaa901be098c6bad135f702ba103bc79d6c
-
Committed by Natalia Lunova
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21739 Added event and event_counter columns for PyTorch/Caffe2 API usage metrics Reviewed By: dzhulgakov Differential Revision: D15119119 fbshipit-source-id: a71010bd659109a8e4f3a8bad84b22c1d15dc528
-
Committed by Kevin Chen
Summary: When converting pixel_shuffle to reshape + transpose + reshape, the first reshape should be [N, C * r^2, H, W] => [N, C, r, r, H, W] in order to match PyTorch's implementation (see ATen's PixelShuffle.cpp). This previously wasn't caught by the test case, since it used C = r = 4. Updated the test case to use C = 2, r = 4. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21486 Reviewed By: houseroad Differential Revision: D15700945 Pulled By: houseroad fbshipit-source-id: 47019691fdc20e152e867c7f6fd57da104a12948
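The decomposition in question, as a standalone sketch (shapes chosen so C != r, the case that exposed the bug):
```python
import torch
import torch.nn.functional as F

N, C, r, H, W = 1, 2, 4, 3, 3
x = torch.randn(N, C * r * r, H, W)

# pixel_shuffle as reshape + transpose + reshape: the first reshape must
# split the channel dim into [C, r, r], not [r, r, C].
y = (x.view(N, C, r, r, H, W)        # [N, C*r^2, H, W] -> [N, C, r, r, H, W]
      .permute(0, 1, 4, 2, 5, 3)     # -> [N, C, H, r, W, r]
      .reshape(N, C, H * r, W * r))  # -> [N, C, H*r, W*r]

assert torch.equal(y, F.pixel_shuffle(x, r))
```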
-
Committed by Ilia Cherniavskii
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21581 ghimport-source-id: 6a65d73694b17611d6ad45db0b39b86c318a68c7 Differential Revision: D15736495 Pulled By: ilia-cher fbshipit-source-id: 6b9109ad3611ff3c8b1a37796e9149bef0c2ad36
-
Committed by davidriazati
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21725 Pulled By: driazati Differential Revision: D15800009 fbshipit-source-id: 5409c213161e3f2031710933897b85872aad2a83
-
Committed by Ilia Cherniavskii
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20480 ghimport-source-id: c710f897c4c9b9616fc3dd76d80b4845aea43a1f Differential Revision: D15333692 Pulled By: ilia-cher fbshipit-source-id: 61e476dd5c737fe144e3aec000d8ebb11fbc0547
-
Committed by Stefan Krah
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21635 Differential Revision: D15768487 Pulled By: ezyang fbshipit-source-id: 85e1d883aded0f4d3ac5100719df335f5a337fc5
-
Committed by Xiaodong Wang
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21718 Adds a way to detect whether the package is built for AMD. Reviewed By: bddppq Differential Revision: D15795893 fbshipit-source-id: 91a21ee76b2273b1032507bdebe57e016717181d
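One way such a check looks from the user side (a sketch; `torch.version.hip` exists in current builds, but whether it is the exact mechanism this diff added is an assumption):
```python
import torch

# torch.version.hip is a version string on ROCm (AMD) builds and None otherwise.
is_rocm_build = torch.version.hip is not None
print("Built for AMD/ROCm:", is_rocm_build)
```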
-
- 13 June 2019, 14 commits
-
Committed by Junjie Bai
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21724 Differential Revision: D15799149 Pulled By: bddppq fbshipit-source-id: c72689e73470f2ca145556a2ac8cb34e36e341ef
-
Committed by James Malcolm
Summary: **Closes:** Confusing documentation in distributions.Categorical about logits https://github.com/pytorch/pytorch/issues/16291 **Solution:** Changes the documentation on the Categorical distribution from `log probabilities` to `event log-odds`. This should reduce the confusion raised in the issue, and is consistent with other distributions such as `torch.Binomial`. More than happy to make any other changes if they fit :). Pull Request resolved: https://github.com/pytorch/pytorch/pull/21707 Differential Revision: D15799181 Pulled By: soumith fbshipit-source-id: f11acca7a5c130102a3ff6674640235ee5aa69bf
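What the clarified wording means in practice (a standalone sketch): `logits` are unnormalized event log-odds, normalized internally, so passing softmaxed probabilities via `probs` is equivalent.
```python
import torch
from torch.distributions import Categorical

logits = torch.tensor([1.0, 2.0, 3.0])  # event log-odds; need not be normalized
d_logits = Categorical(logits=logits)
d_probs = Categorical(probs=logits.softmax(dim=-1))

assert torch.allclose(d_logits.probs, d_probs.probs)
```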
-
Committed by BowenBao
Summary: - [x] Add tests after https://github.com/pytorch/pytorch/pull/20256 is merged - Support exporting ScriptModule with inputs/outputs of arbitrarily constructed tuples. - Moved the assigning of output shapes to after graph conversion to ONNX is completed. By then all tuples in the IR have already been lowered by the pass ```_jit_pass_lower_all_tuples```. If assigning output shapes were required to happen before that, we'd need to hand-parse the tuple structures in the graph and repeat the same logic as ```_jit_pass_lower_all_tuples```. Handling inputs is easier because all tuple information is encoded within the input tensor type. - Swap the order of ```_jit_pass_lower_all_tuples``` and ```_jit_pass_erase_number_types```. Ops like ```prim::TupleIndex``` rely on the index being a scalar; ```_jit_pass_erase_number_types``` converts these kinds of scalars to tensors. Pull Request resolved: https://github.com/pytorch/pytorch/pull/20784 Reviewed By: zrphercule Differential Revision: D15484171 Pulled By: houseroad fbshipit-source-id: 4767a84038244c929f5662758047af6cb92228d3
-
Committed by vishwakftw
Summary: …ngular_solve. Changelog: - Iterate over mini-batches of 65535 matrices (maximum). Pull Request resolved: https://github.com/pytorch/pytorch/pull/21689 Differential Revision: D15800254 Pulled By: soumith fbshipit-source-id: c743ff13f1ba25d26874429d44e41a3c0ed21d6a
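A sketch of the chunking pattern the changelog describes (not the ATen implementation; the 65535 cap is taken from the changelog above, and the helper name is hypothetical):
```python
import torch

MAX_BATCH = 65535  # maximum batch size per backend call, per the changelog

def batched_triangular_solve(b, A):
    # Split an oversized batch into chunks the backend can handle,
    # then re-concatenate the per-chunk solutions.
    chunks = [torch.triangular_solve(b[i:i + MAX_BATCH], A[i:i + MAX_BATCH]).solution
              for i in range(0, b.shape[0], MAX_BATCH)]
    return torch.cat(chunks, dim=0)
```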
-
Committed by xiaobing.zhang
Summary: ### mkldnn backward ops list:
- [ ] (https://github.com/pytorch/pytorch/pull/20567) Add aten mkldnn conv2d backward operator
- [ ] (https://github.com/pytorch/pytorch/pull/20570) Add aten mkldnn backward ops: relu, linear and reshape
- [ ] (https://github.com/pytorch/pytorch/pull/20571) Add aten mkldnn backward ops: max_pool2d, avg_pool2d and adaptive_avg_pool2d
- [ ] (https://github.com/pytorch/pytorch/pull/20572) Add aten mkldnn batchnorm backward operator
- [ ] (https://github.com/pytorch/pytorch/pull/20573) Add aten mkldnn zero_ operator
- [ ] (https://github.com/pytorch/pytorch/pull/20575) Add mkldnn mul operator

Pull Request resolved: https://github.com/pytorch/pytorch/pull/20575 Differential Revision: D15799529 Pulled By: bddppq fbshipit-source-id: 4887d8ef1a0e316ad9db199b657d9481fc13e486
-
Committed by Will Feng
Differential Revision: D15769066 Original commit changeset: 5853e0360581 fbshipit-source-id: ac6fa8429136abf4c7835919009f936eea11ea7b
-
Committed by Karl Ostmo
Summary: This renames the CMake `caffe2` target to `torch`, as well as renaming `caffe2_gpu` to `torch_gpu` (and likewise for other gpu target variants). Many intermediate variables that don't manifest as artifacts of the build remain for now with the "caffe2" name; a complete purge of `caffe2` from CMake variable names is beyond the scope of this PR. The shell `libtorch` library that had been introduced as a stopgap in https://github.com/pytorch/pytorch/issues/17783 is again flattened in this PR. Pull Request resolved: https://github.com/pytorch/pytorch/pull/20774 Differential Revision: D15769965 Pulled By: kostmo fbshipit-source-id: b86e8c410099f90be0468e30176207d3ad40c821
-
Committed by Nikolay Korovaiko
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21719 Differential Revision: D15797628 Pulled By: Krovatkin fbshipit-source-id: 87742bdde0b05aff4341ababb1f55c51991768ec
-
Committed by Nick Korovaiko
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21695 Differential Revision: D15795716 Pulled By: Krovatkin fbshipit-source-id: e14a44210ea4312a247157a6681fce449e40f779
-
Committed by Nikolay Korovaiko
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21615 Differential Revision: D15793434 Pulled By: Krovatkin fbshipit-source-id: d89f1bf61ea57a1e3b75f8e2b200c27beb8b46cf
-
Committed by Zachary DeVito
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21565 ghimport-source-id: d1fe735fb7821eadc59116fb921d8fe39a49f818 Reviewed By: driazati Differential Revision: D15729503 Pulled By: zdevito fbshipit-source-id: fabb678f040d21fae7545e3b2be1d098e24c544e
-
Committed by Zachary DeVito
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21660 ghimport-source-id: f9a11b2748f49042ee636755358d79c547aa249e Reviewed By: suo Differential Revision: D15770237 Pulled By: zdevito fbshipit-source-id: 41fa8577028eef247bc545635cd93192a0b19db4
-
Committed by Zachary DeVito
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21515 ghimport-source-id: 7898a68791db2b5050164ab01d6ca6991e05746d Reviewed By: suo Differential Revision: D15719981 Pulled By: zdevito fbshipit-source-id: 42cf26cf6541bcdf95f1343da3b9228fe2c229da
-
Committed by davidriazati
Summary: Class member annotations can be marked with `Final[T]` instead of adding them to `__constants__`. `Final` comes from the `typing_extensions` module (which will be used if it is present). If not, the polyfill from `_jit_internal` is exposed as `torch.jit.Final` for users who don't want to install `typing_extensions`. This keeps `__constants__` around, since a lot of code still uses it, but in documentation follow-ups we should change the examples to all use `Final`. TODO: install typing_extensions on CI, move tests to a Python3-only file when #21489 lands. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21603 Pulled By: driazati Differential Revision: D15746274 fbshipit-source-id: d2c9b5643b4abba069b130c26fd42714c906ffac
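A minimal sketch of the pattern this enables (module and attribute names are illustrative; `Final` comes from `typing_extensions` when installed, otherwise `torch.jit.Final`):
```python
import torch
from typing_extensions import Final  # or: from torch.jit import Final

class MyModule(torch.nn.Module):
    # Equivalent to the old-style declaration: __constants__ = ['num_layers']
    num_layers: Final[int]

    def __init__(self, num_layers):
        super(MyModule, self).__init__()
        self.num_layers = num_layers

    def forward(self, x):
        # num_layers is a compile-time constant, so TorchScript can unroll the loop
        for _ in range(self.num_layers):
            x = torch.relu(x)
        return x

scripted = torch.jit.script(MyModule(3))
```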
-