- 17 Jun 2019, 2 commits
-
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
- 14 Jun 2019, 4 commits
-
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
- 13 Jun 2019, 34 commits
-
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21364 gh/syed-ahmed/14/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21364 gh/syed-ahmed/14/head
-
Committed by David Riazati
Summary: This adds support for PEP 526 style annotations on assignments in place of `torch.jit.annotate()`, so

```python
a = torch.jit.annotate(List[int], [])
```

turns into

```python
a : List[int] = []
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/21390
Differential Revision: D15790937
Pulled By: driazati
fbshipit-source-id: 0cc204f7209a79839d330663cc6ba8320d3a4120
-
Committed by Brennan Vincent
Summary: This is intended to match [numpy.trapz](https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html): numerical integration based on the trapezoid rule. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21610 Differential Revision: D15747618 Pulled By: umanwizard fbshipit-source-id: 8eadb2e75c9877b07592d875ca0b2cca6cb72297
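The trapezoid rule being matched is simple enough to sketch in plain Python (a hypothetical `trapz` helper for illustration, not the actual torch implementation):

```python
def trapz(y, x):
    """Approximate the integral of sampled y over x with the trapezoid rule."""
    total = 0.0
    for i in range(1, len(y)):
        # Each adjacent pair of samples contributes the area of one trapezoid.
        total += 0.5 * (y[i] + y[i - 1]) * (x[i] - x[i - 1])
    return total

# Integrate y = x over [0, 1]; the rule is exact for straight lines,
# even with unevenly spaced sample points.
xs = [0.0, 0.25, 0.5, 1.0]
print(trapz(xs, xs))  # → 0.5
```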
-
Committed by Sebastian Messmer
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21177 - Integrate c10::ListPtr into IValue and the c10 dispatcher. - Streamline conversion to/from IValue. Before, we had IValue::to<> and kernel_functor.h had its own ivalue_to_arg_type and return_type_to_ivalue. They are now unified. Also, this means that nested types like Dicts of Lists of Optional of Dict of ... do work as expected now Differential Revision: D15476433 fbshipit-source-id: bde9df80df20091aa8e6ae17ba7e90abd149b954
-
Committed by Syed Tousif Ahmed
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21364 ghimport-source-id: ca7d37e10190ba46dc8512f437404ca9216d3369 Differential Revision: D15696497 Pulled By: ezyang fbshipit-source-id: 2e713b8566ae915e175b5a79ac1dd9b86cc2a23d
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21555 gh/syed-ahmed/16/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21364 gh/syed-ahmed/14/head
-
Committed by Syed Tousif Ahmed
[CPU] Refactor Random Number Generators in ATen gh-metadata: pytorch pytorch 21364 gh/syed-ahmed/14/head
-
Committed by Mikhail Zolotukhin
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21141 Differential Revision: D15769066 Pulled By: ZolotukhinM fbshipit-source-id: 5853e0360581c44e42b068add3bf2bc68e671b2b
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21636 ghimport-source-id: 5eff5744cd3c80f75bdb02576be1407a64e0434d Differential Revision: D15780269 Pulled By: zou3519 fbshipit-source-id: 87ff40ffbe0ebd5fc4d105709c9f6f8dda5f9952
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21632 ghimport-source-id: 6a8da97ce153c6d279017af920edd0d20765c32c Differential Revision: D15760331 Pulled By: zou3519 fbshipit-source-id: b2f4c65df5f6f9322d47da995c76851387e5df47
-
Committed by Richard Zou
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21633 ghimport-source-id: 6cdf0b1559e696a19e282ff6d5ba79c6b119e8c0 Differential Revision: D15760589 Pulled By: zou3519 fbshipit-source-id: 537882c05ab7b19889a31c648c5efeb1949831a8
-
Committed by Guanheng Zhang
Summary: Accidentally rebased the old PR and made it too messy; find it here: https://github.com/pytorch/pytorch/pull/19274. Created this PR for comments. The model is still WIP, but I want some feedback before moving too far. The transformer model depends on several modules, like MultiheadAttention (landed). Transformer is implemented based on the paper (https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf). Users have the flexibility to build a transformer with self-defined and/or built-in components (i.e. encoder, decoder, encoder_layer, decoder_layer). Users can use the Transformer class to build a standard transformer model and modify sub-layers as needed.

Adds a few unit tests for the transformer module:
- TestNN.test_Transformer_cell
- TestNN.test_transformerencoderlayer
- TestNN.test_transformerdecoderlayer
- TestNN.test_transformer_args_check
- TestScript.test_scriptmodule_transformer_cuda

There is another demonstration example applying the transformer module to the word language modeling problem: https://github.com/pytorch/examples/pull/555

Pull Request resolved: https://github.com/pytorch/pytorch/pull/20170
Differential Revision: D15417983
Pulled By: zhangguanheng66
fbshipit-source-id: 7ce771a7e27715acd9a23d60bf44917a90d1d572
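As background rather than this PR's code, the scaled dot-product attention that the transformer layers are built on (per the linked paper) can be sketched for a single query in plain Python:

```python
import math

def attention(q, keys, values):
    """Toy single-query scaled dot-product attention over plain lists."""
    d = len(q)
    # Similarity of the query with each key, scaled by sqrt(d) as in the paper.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    # Softmax over the scores (subtract the max for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # The output is the attention-weighted blend of the value vectors.
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

# The query matches the first key more strongly, so the output leans
# toward the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
print(out)
```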
-
Committed by svcscm
Reviewed By: cdelahousse fbshipit-source-id: 5cbf562652b9d7cf3877b5f819141f88c9b857d3
-
Committed by Will Feng
Summary: Currently we don't have any Linux libtorch binary build in the PR CI, which led to nightly build failures such as https://circleci.com/gh/pytorch/pytorch/1939687. This PR adds a Linux libtorch CPU binary build to prevent such breakage from happening in the future. Pull Request resolved: https://github.com/pytorch/pytorch/pull/21671 Differential Revision: D15785003 Pulled By: yf225 fbshipit-source-id: d1f2e4235e48296ddecb3367f8e5a0df16f4ea49
-
Committed by Shen Li
Summary: Fix https://github.com/pytorch/pytorch/issues/20421

`ProcessGroupGloo` only requires input/output tensors to be contiguous. Contiguous tensors might not start at the beginning of the underlying storage, e.g., `chunk(..., dim=0)[1]`. The current implementation passes the `tensor.storage().data()` ptr to the gloo buffer, which leads to wrong results if the tensor has a non-zero storage offset. The proposed solution is to use `tensor.data_ptr()` instead. Let's see if this breaks any tests.

cc qijianan777

Pull Request resolved: https://github.com/pytorch/pytorch/pull/21490
Differential Revision: D15768907
Pulled By: mrshenli
fbshipit-source-id: 9d7d1e9baf0461b31187c7d21a4a53b1fbb07397
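The offset bug can be modeled without torch at all; below is a hypothetical `View` class standing in for a contiguous tensor view over a shared storage buffer:

```python
# Hypothetical model of a contiguous view into a shared storage buffer.
class View:
    def __init__(self, storage, offset, length):
        self.storage = storage   # the full underlying buffer
        self.offset = offset     # where this view's data actually starts
        self.length = length

    def storage_data(self):
        # Analogous to tensor.storage().data(): ignores the offset, which is
        # wrong for views that don't start at the beginning of the storage.
        return self.storage[:self.length]

    def data_ptr(self):
        # Analogous to tensor.data_ptr(): honors the storage offset.
        return self.storage[self.offset:self.offset + self.length]

storage = [10, 20, 30, 40]
# Like chunk(..., dim=0)[1]: contiguous, but offset into the storage.
second_half = View(storage, offset=2, length=2)
print(second_half.storage_data())  # → [10, 20]  (wrong data)
print(second_half.data_ptr())      # → [30, 40]  (correct data)
```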
-
Committed by fehiepsi
Summary: This pull request adds a line search for LBFGS. "Strong Wolfe" is the default line search method in [minFunc](https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html) and it is also recommended in the [Numerical Optimization](https://www.springer.com/gp/book/9780387303031) book. The implementation is based on four sources:

+ https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html
+ https://www.springer.com/gp/book/9780387303031 (Algorithms 3.5, 3.6, formula 3.59)
+ https://github.com/torch/optim/blob/master/lswolfe.lua
+ https://github.com/torch/optim/blob/master/polyinterp.lua

The Lua version is based on an old version of `minFunc`, which was updated in 2012. I made a couple of small changes based on the updated version. Because of that, the test comparing against the `.lua` version is not consistent (which is why I changed a learning rate in the test).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/8824
Differential Revision: D15783067
Pulled By: vincentqb
fbshipit-source-id: 5316d9088233981120376d79c7869d5f97e51b69
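The two "strong Wolfe" conditions such a line search enforces can be sketched for a 1-D objective (a toy check of a candidate step size, not this PR's implementation):

```python
def strong_wolfe(f, g, x, d, t, c1=1e-4, c2=0.9):
    """Check the strong Wolfe conditions for step size t along direction d.

    f, g: the objective and its derivative; x: current point;
    d: descent direction; c1, c2: conventional constants.
    """
    f0, g0 = f(x), g(x)
    ft, gt = f(x + t * d), g(x + t * d)
    # Sufficient decrease (Armijo): the step must lower f enough.
    sufficient_decrease = ft <= f0 + c1 * t * g0 * d
    # Strong curvature: the directional derivative must shrink in magnitude,
    # ruling out steps that are too small.
    curvature = abs(gt * d) <= c2 * abs(g0 * d)
    return sufficient_decrease and curvature

f = lambda x: x * x   # minimize f(x) = x^2
g = lambda x: 2 * x
# From x=1 along d=-1, the exact step t=1 lands on the minimum, where the
# gradient vanishes, so both conditions hold; a tiny step fails curvature.
print(strong_wolfe(f, g, x=1.0, d=-1.0, t=1.0))   # → True
print(strong_wolfe(f, g, x=1.0, d=-1.0, t=0.01))  # → False
```

A full line search would interpolate between candidate step sizes until a `t` passing both checks is found, which is the role the bracketing/zoom logic from the sources above plays.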
-