1. 11 Jun, 2019 · 8 commits
    • BailOut Graphs · 30d69330
      Committed by Nikolay Korovaiko
      Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21381
      
      Differential Revision: D15724412
      
      Pulled By: Krovatkin
      
      fbshipit-source-id: 18e4a1916c7cd1baea76953d0087d6257e58c55b
    • Skip triangular_solve CUDA test on non-default stream · 3df5a46a
      Committed by Vishwak Srinivasan
      Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21590
      
      Differential Revision: D15742549
      
      Pulled By: ezyang
      
      fbshipit-source-id: fd5b2cbce86e5f229c2ffba114ef362934296d07
    • fix test (#21594) · 6f99bcda
      Committed by Elias Ellison
      Summary:
Fix a test that wasn't run on CI but is tested internally.
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21594
      
      Differential Revision: D15742157
      
      Pulled By: eellison
      
      fbshipit-source-id: 11fc82d1fc0281ffedd674ed96100e0c783c0599
    • clip sigmoid to prevent transforms from returning inf/nan values (#20288) · 91ea2cd5
      Committed by fehiepsi
      Summary:
This PR addresses some numerical issues of SigmoidTransform/StickBreakingTransform, where these transforms return ±inf when the unconstrained values move into the ±20 range.
      
      For example, with
      ```
      t = torch.distributions.SigmoidTransform()
      x = torch.tensor(20.)
      t.inv(t(x)), t.log_abs_det_jacobian(x, t(x))
      ```
the current behaviour is that the inverse returns `inf` and the logdet returns `-inf`, while with this PR they become `15.9424` and `-15.9424`.
      
      And for
      ```
      t = torch.distributions.StickBreakingTransform()
      x = torch.tensor([20., 20.])
      t.inv(t(x)), t.log_abs_det_jacobian(x, t(x))
      ```
the current inverse is `(inf, nan)` and the logdet is `-inf`, while with this PR they become `[16.6355, 71.3942]` and `-47.8272`.
      
Although these finite values are wrong and that seems unavoidable, returning them is, in my opinion, better than returning `inf` or `nan`. This is useful in HMC: even though the gradient is zero when the unconstrained parameter moves into the unstable area (due to clipping), the velocity variable will push the parameter to another area, which by chance can move it out of the unstable region. On the other hand, inf/nan can be useful for stopping inference early, so the changes in this PR might be inappropriate.
      
I also fix some small issues in the `_Simplex` and `_RealVector` constraints, where the batch shape of the input was not respected during validation checks.
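The clipping idea behind this fix can be sketched in plain Python. This is an illustrative approximation, not the PR's actual code: `safe_logit` and the choice of float32 machine epsilon as the clip bound are assumptions made here for demonstration.

```python
import math

def safe_logit(y, eps=2**-23):
    # eps ~ float32 machine epsilon. Clip y away from {0, 1} so the
    # inverse sigmoid (logit) stays finite instead of returning +-inf.
    y = min(max(y, eps), 1.0 - eps)
    return math.log(y) - math.log1p(-y)

# sigmoid(20.) rounds to exactly 1.0 in float32, so the naive inverse
# would be log(1/0) = inf; with clipping we get a finite value instead.
y = 1.0  # float32 value of sigmoid(20.)
print(round(safe_logit(y), 4))  # 15.9424
```

The recovered value `15.9424` is exactly `23 * ln(2)`, i.e. the logit of `1 - eps` for float32, which matches the finite-but-wrong round-trip value described above.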
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/20288
      
      Differential Revision: D15742047
      
      Pulled By: ezyang
      
      fbshipit-source-id: b427ed1752c41327abb3957f98d4b289307a7d17
    • Add python binding to deserialize blob (#21532) · 4bdbd30b
      Committed by Haixin Liu
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21532
      
      Add python binding to deserialize blob
      
      Reviewed By: yinghai
      
      Differential Revision: D15706816
      
      fbshipit-source-id: f498c7e0f7392f055b13810bbf81cba59f25e1d2
    • Change compiler to use Load/Stores, then transform to SSA (#21101) · e4fae884
      Committed by Elias Ellison
      Summary:
      This changes our compiler so it first emits Loads & Stores, and then transforms the graph to SSA in a follow up pass. When a variable is set, we emit a prim::Store, and when a variable is referenced, we emit a prim::Load.
      ```
      a = 1
      print(a)
      ```
      becomes:
      ```
      %a.1 : int = prim::Constant[value=1]()
      prim::Store[name="a"](%a.1)
      %a : int = prim::Load[name="a"]()
      prim::Print(%a)
      ```
      In the follow up pass, convertToSSA, the values are turned into SSA form with the Loads & Stores removed. This change will enable breaks and continues because you can transform the graph with the variable naming information still intact.
      
There are still some remaining jitter and edge-case issues that I have to look through, but I think it is still ready for review.
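For straight-line code like the example above, the Load/Store-to-SSA conversion can be sketched as a toy pass. This is a hypothetical Python model, not the actual C++ convertToSSA pass: the instruction encoding is invented here, and control flow (which requires block arguments or phi nodes) is omitted.

```python
# Toy sketch of the convertToSSA idea: replace each Load[name] with the
# value bound by the most recent Store[name], then drop Loads and Stores.

def convert_to_ssa(instrs):
    env = {}   # variable name -> SSA value currently bound to it
    out = []
    for op, *args in instrs:
        if op == "store":            # ("store", name, value)
            name, value = args
            env[name] = value        # record the binding; emit nothing
        elif op == "load":           # ("load", name, dest)
            name, dest = args
            env[dest] = env[name]    # alias dest directly to the stored value
        else:                        # ordinary op: substitute known values
            out.append((op, *[env.get(a, a) for a in args]))
    return out

# `a = 1; print(a)` in Load/Store form, mirroring the IR above:
prog = [
    ("constant", "a.1"),   # %a.1 : int = prim::Constant[value=1]()
    ("store", "a", "a.1"), # prim::Store[name="a"](%a.1)
    ("load", "a", "a0"),   # %a : int = prim::Load[name="a"]()
    ("print", "a0"),       # prim::Print(%a)
]
print(convert_to_ssa(prog))  # [('constant', 'a.1'), ('print', 'a.1')]
```

After the pass, the print consumes `%a.1` directly and the Store/Load pair disappears, which is exactly the simplification the follow-up pass performs on the real graph.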
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21101
      
      Differential Revision: D15723353
      
      Pulled By: eellison
      
      fbshipit-source-id: 3269934d4bc24ddaf3a87fdd20620b0f954d83d0
    • update hub doc (#21568) · 1e6c99a6
      Committed by Ailing Zhang
      Summary:
Update the doc as pointed out in https://github.com/pytorch/hub/pull/22
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21568
      
      Differential Revision: D15732927
      
      Pulled By: ailzhang
      
      fbshipit-source-id: 78ab026539e5ee59e7c3a8144e2c9fcbbc225733
    • Don't leak threads on exit (#21438) · f308b07e
      Committed by mal
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/21438
      ghimport-source-id: 33f145f5b3508163365442c22a223c4a44e677d8
      
      Differential Revision: D15738856
      
      fbshipit-source-id: 656e8d0e3d0d22f116e3ab66bf0282608d6f1a76
  2. 10 Jun, 2019 · 18 commits
  3. 08 Jun, 2019 · 14 commits