1. 11 Jun 2019, 20 commits
    • Update on "Fix namedtensor build" · 176bd7b1
      Committed by Richard Zou
      Fix namedtensor build
      
      test_cpp_extensions compiles c++ extensions. When it does this, it also
      needs USE_NAMEDTENSOR=1 to match the pytorch build.
      
      The fix is to thread USE_NAMEDTENSOR=1 to the testing environment as
      well.
      
      Test Plan
      - [namedtensor ci]
      
      gh-metadata: pytorch pytorch 21609 gh/zou3519/43/head
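The env-threading step described above can be sketched as follows (a minimal sketch with a hypothetical helper name, not the actual test-harness code):

```python
import os
import subprocess
import sys

def run_with_namedtensor(cmd):
    """Run `cmd` in a child process whose environment carries
    USE_NAMEDTENSOR=1, matching the flag used for the pytorch build.
    Hypothetical helper, not the actual test-suite code."""
    env = os.environ.copy()
    env["USE_NAMEDTENSOR"] = "1"
    return subprocess.call(cmd, env=env)
```

A C++ extension compiled inside the child process then sees the same flag as the main build, e.g. `run_with_namedtensor([sys.executable, "test_cpp_extensions.py"])`.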
    • Update base for Update on "Fix namedtensor build" · 7a667419
      Committed by Richard Zou
      Fix namedtensor build
      
      test_cpp_extensions compiles c++ extensions. When it does this, it also
      needs USE_NAMEDTENSOR=1 to match the pytorch build.
      
      The fix is to thread USE_NAMEDTENSOR=1 to the testing environment as
      well.
      
      Test Plan
      - [namedtensor ci]
      
      gh-metadata: pytorch pytorch 21609 gh/zou3519/43/head
    • Fix namedtensor build · 922d73f1
      Committed by Richard Zou
      test_cpp_extensions compiles c++ extensions. When it does this, it also
      needs USE_NAMEDTENSOR=1 to match the pytorch build.
      
      The fix is to thread USE_NAMEDTENSOR=1 to the testing environment as
      well.
      
      Test Plan
      - [namedtensor ci]
    • Use schema string specification in derivatives.yaml. (#20916) · dd0ffd68
      Committed by Gregory Chanan
      Summary:
      For consistency, derivatives.yaml now uses the same schema specification as native_functions.yaml.
      
Note that there are some small downsides, e.g. changing default values or return parameter names in native_functions.yaml now requires updating derivatives.yaml as well. But this has a few nice properties:
      1) Able to copy-paste definitions from native_functions to derivatives.
      2) Makes it impossible to write derivatives for operators without schemas (e.g. old TH operators).
      3) Moves us closer to the ideal situation of co-locating forward and backwards declarations.
      
      Note that this doesn't change any generated code; in particular, this has the same behavior of mapping in-place and out-of-place definitions together.
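For illustration, an entry in derivatives.yaml would then carry the full schema string in its `name` field rather than a bare operator name, so it can be copy-pasted from native_functions.yaml (hypothetical example, not copied from the diff):

```yaml
# Before this change the entry named only the operator, e.g. `- name: add`.
# After it, the full schema string matches native_functions.yaml:
- name: add(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
  self: grad
  other: maybe_multiply(grad, alpha)
```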
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/20916
      
      Differential Revision: D15497800
      
      Pulled By: gchanan
      
      fbshipit-source-id: baee5caf56b675ce78dda4aaf6ce6a34575a6432
    • Allow tensors with requires_grad=True in c10 ops (#21599) · 5f25a252
      Committed by Sebastian Messmer
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21599
      
We prevented this because c10 ops can't have a backward yet, and calling them with requires_grad=True would do the wrong thing if the c10 op is not purely implemented by calling other autograd-able ops.
      
      However, it is a valid use case to have c10 ops that just call other autograd-aware ops, and these ops should be callable with requires_grad=True.
      
      This should fix https://github.com/pytorch/pytorch/issues/21584.
      
      Differential Revision: D15744692
      
      fbshipit-source-id: ba665365c850ef63fc9c51498fd69afe49e5d7ec
    • Revert D15717575: [pytorch][PR] Fix bug in multinomial_alias_draw · 5a48642f
      Committed by Sam Gross
      Differential Revision:
      D15717575
      
      Original commit changeset: b1154e226d42
      
      fbshipit-source-id: 305ca010bfda88c9295c52e0626d867452c72f84
    • fix optional type promotion for classes (#21593) · 4fb302eb
      Committed by Michael Suo
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21593
      ghimport-source-id: f68730618bccf2326218e08d0a2a70171fdd8921
      
      Differential Revision: D15741471
      
      Pulled By: suo
      
      fbshipit-source-id: 7ac1a0f6d9d2ff4bc819caff43a7a5b6d37cbc98
    • Consider contained types in alias analysis (#21431) · a436822c
      Committed by Michael Suo
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21431
      ghimport-source-id: d86ce974a065ec572e71cfa14a8f6bdf48216da7
      
      Reviewed By: jamesr66a
      
      Differential Revision: D15718560
      
      Pulled By: suo
      
      fbshipit-source-id: a36ce907ab26be22f12bab6175797fe8b34721f1
    • cleanups to memory_dag (#21430) · bb4aff26
      Committed by Michael Suo
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21430
      ghimport-source-id: 2dc5a0df8512e796c12d65d3ecc5981638122ce6
      
      Reviewed By: jamesr66a
      
      Differential Revision: D15718561
      
      Pulled By: suo
      
      fbshipit-source-id: 1ef31c08c8a757b632451eb07a47a8227e76c67f
    • cleanups to alias analysis interfaces (#21397) · ae144032
      Committed by Michael Suo
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21397
      ghimport-source-id: 8733e1af2fe66a3f4494a2c24c82a039375a982e
      
      Reviewed By: jamesr66a
      
      Differential Revision: D15642662
      
      Pulled By: suo
      
      fbshipit-source-id: ae66b7b4f19f255d6fe0e7e804bd0df6d86cb8d1
    • avoid calling front() on empty working set (#21396) · ddac8da8
      Committed by Michael Suo
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21396
      ghimport-source-id: 7e57282099d2fd57c58c990b51ae933e427aecb2
      
      Reviewed By: jamesr66a
      
      Differential Revision: D15642663
      
      Pulled By: suo
      
      fbshipit-source-id: f9b467ba53f03438879bf3929da522aabaff2343
    • Fix bug in multinomial_alias_draw (#21324) · bb1dbdb9
      Committed by vishwakftw
      Summary:
An incorrect increment/decrement caused the samples not to be drawn from the intended multinomial distribution.
      
      Changelog:
      - Remove the incorrect increment / decrement operation
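For context, the kernel being fixed implements alias-method sampling. A correct draw can be sketched in pure Python as follows (Vose's variant of the alias method, with hypothetical names; this is an illustration of the technique, not the actual CPU/CUDA kernel):

```python
import random

def build_alias_table(probs):
    """Build the O(1)-sampling alias table for a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob = [0.0] * n   # probability of keeping column i
    alias = [0] * n    # fallback column for i
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        # Donate the leftover mass of column l to fill column s.
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:   # columns left exactly full
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias):
    """One multinomial sample: uniform column, then a biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

Each draw costs O(1) after the O(n) table build, which is the point of the alias method.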
      
      Fixes #21257, fixes #21508
      
      cc: LeviViana neerajprad
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21324
      
      Differential Revision: D15717575
      
      Pulled By: ezyang
      
      fbshipit-source-id: b1154e226d426c0d412d360c15f7c64aec95d101
    • BailOut Graphs · 30d69330
      Committed by Nikolay Korovaiko
      Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21381
      
      Differential Revision: D15724412
      
      Pulled By: Krovatkin
      
      fbshipit-source-id: 18e4a1916c7cd1baea76953d0087d6257e58c55b
    • Skip triangular_solve CUDA test on non-default stream · 3df5a46a
      Committed by Vishwak Srinivasan
      Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21590
      
      Differential Revision: D15742549
      
      Pulled By: ezyang
      
      fbshipit-source-id: fd5b2cbce86e5f229c2ffba114ef362934296d07
    • fix test (#21594) · 6f99bcda
      Committed by Elias Ellison
      Summary:
Fixes a test that wasn't run on CI but is tested internally.
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21594
      
      Differential Revision: D15742157
      
      Pulled By: eellison
      
      fbshipit-source-id: 11fc82d1fc0281ffedd674ed96100e0c783c0599
    • clip sigmoid to prevent transforms return inf/nan values (#20288) · 91ea2cd5
      Committed by fehiepsi
      Summary:
      This PR addresses some numerical issues of Sigmoid/StickBreakingTransform, where these transforms give +-inf when the unconstrained values move to +-20 areas.
      
      For example, with
      ```
      t = torch.distributions.SigmoidTransform()
      x = torch.tensor(20.)
      t.inv(t(x)), t.log_abs_det_jacobian(x, t(x))
      ```
the current behaviour is that the inverse returns `inf` and the logdet returns `-inf`, while this PR makes them `15.9424` and `-15.9424`.
      
      And for
      ```
      t = torch.distributions.StickBreakingTransform()
      x = torch.tensor([20., 20.])
      t.inv(t(x)), t.log_abs_det_jacobian(x, t(x))
      ```
the current value is `(inf, nan)` with `-inf` for the logdet, while this PR makes them `[16.6355, 71.3942]` and `-47.8272`.
      
Although these finite values are wrong and this seems unavoidable, returning them is better than returning `inf` or `nan` in my opinion. This is useful in HMC: even though the gradient will be zero when the unconstrained parameter moves into the unstable area (due to clipping), the velocity variable will force the parameter to move elsewhere, which by chance can move it out of the unstable area. On the other hand, inf/nan can be useful for stopping inference early, so the changes in this PR might be inappropriate.
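The clipping idea can be sketched in plain Python (illustrative only; the actual PR operates on torch tensors, and the clip bound here is an assumption chosen so the numbers line up with the example above):

```python
import math

FLOAT32_EPS = 1.1920929e-07  # float32 machine epsilon; assumed clip bound

def clipped_sigmoid(x, eps=FLOAT32_EPS):
    # Clamp the sigmoid output away from 0 and 1 so that the inverse
    # (logit) and the log-abs-det-Jacobian stay finite for large |x|.
    y = 1.0 / (1.0 + math.exp(-x))
    return min(max(y, eps), 1.0 - eps)

def logit(y):
    # Inverse of sigmoid; finite whenever 0 < y < 1.
    return math.log(y) - math.log1p(-y)
```

With this eps, `logit(clipped_sigmoid(20.0))` comes out near 15.94 instead of diverging as the sigmoid saturates toward 1.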
      
I also fix some small issues in the `_Simplex` and `_RealVector` constraints, where the batch shape of the input was not respected during validation checks.
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/20288
      
      Differential Revision: D15742047
      
      Pulled By: ezyang
      
      fbshipit-source-id: b427ed1752c41327abb3957f98d4b289307a7d17
    • Add python binding to deserialize blob (#21532) · 4bdbd30b
      Committed by Haixin Liu
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21532
      
      Add python binding to deserialize blob
      
      Reviewed By: yinghai
      
      Differential Revision: D15706816
      
      fbshipit-source-id: f498c7e0f7392f055b13810bbf81cba59f25e1d2
    • Change compiler to use Load/Stores, then transform to SSA (#21101) · e4fae884
      Committed by Elias Ellison
      Summary:
This changes our compiler so it first emits Loads & Stores, and then transforms the graph to SSA in a follow-up pass. When a variable is set, we emit a prim::Store, and when a variable is referenced, we emit a prim::Load.
      ```
      a = 1
      print(a)
      ```
      becomes:
      ```
      %a.1 : int = prim::Constant[value=1]()
      prim::Store[name="a"](%a.1)
      %a : int = prim::Load[name="a"]()
      prim::Print(%a)
      ```
In the follow-up pass, convertToSSA, the values are turned into SSA form with the Loads & Stores removed. This change will enable breaks and continues because you can transform the graph with the variable naming information still intact.
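As a toy illustration of that second pass (a sketch of the idea, not the actual convertToSSA implementation, which must also handle control flow with block outputs), resolving each Load against the most recent Store over a straight-line instruction list looks like:

```python
def convert_to_ssa(instrs):
    """Resolve Load/Store pseudo-instructions over straight-line code.

    Each instruction is a tuple: ("store", name, value),
    ("load", name, dest), or any other op passed through unchanged.
    Loads become plain assignments from the value most recently
    stored under that variable name; Stores disappear.
    """
    env = {}   # variable name -> SSA value currently bound to it
    out = []
    for op, *args in instrs:
        if op == "store":
            name, value = args
            env[name] = value          # remember the binding, emit nothing
        elif op == "load":
            name, dest = args
            out.append(("assign", dest, env[name]))
        else:
            out.append((op, *args))
    return out
```

On the `a = 1; print(a)` example above, the Store vanishes and the Load collapses into a direct use of `%a.1`.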
      
There are still some remaining jitter and edge-case issues that I have to look through, but I think it is still ready for review.
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21101
      
      Differential Revision: D15723353
      
      Pulled By: eellison
      
      fbshipit-source-id: 3269934d4bc24ddaf3a87fdd20620b0f954d83d0
    • update hub doc (#21568) · 1e6c99a6
      Committed by Ailing Zhang
      Summary:
      update doc as pointed out in https://github.com/pytorch/hub/pull/22
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21568
      
      Differential Revision: D15732927
      
      Pulled By: ailzhang
      
      fbshipit-source-id: 78ab026539e5ee59e7c3a8144e2c9fcbbc225733
    • Don't leak threads on exit (#21438) · f308b07e
      Committed by mal
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21438
      ghimport-source-id: 33f145f5b3508163365442c22a223c4a44e677d8
      
      Differential Revision: D15738856
      
      fbshipit-source-id: 656e8d0e3d0d22f116e3ab66bf0282608d6f1a76
  2. 10 Jun 2019, 18 commits
  3. 08 Jun 2019, 2 commits