functorch

Why functorch? | Install guide | Transformations | Documentation | Future Plans

This library is currently under heavy development - if you have suggestions on the API or use cases you'd like to be covered, please open a GitHub issue or reach out. We'd love to hear about how you're using the library.

functorch is a prototype of JAX-like composable function transforms for PyTorch.

It aims to provide composable vmap and grad transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance. Because this project requires some investment, we'd love to hear from and work with early adopters to shape the design. Please reach out on the issue tracker if you're interested in using this for your project.

In addition, there is experimental functionality to trace through these transformations using FX in order to capture the results of these transforms ahead of time. This would allow us to compile the results of vmap or grad to improve performance.

Why composable function transforms?

There are a number of use cases that are tricky to do in PyTorch today:

  • computing per-sample-gradients (or other per-sample quantities)
  • running ensembles of models on a single machine
  • efficiently batching together tasks in the inner-loop of MAML
  • efficiently computing Jacobians and Hessians
  • efficiently computing batched Jacobians and Hessians

Composing vmap, grad, and vjp transforms allows us to express the above without designing a separate subsystem for each. This idea of composable function transforms comes from the JAX framework.

Install

There are two ways to install functorch:

  1. functorch main
  2. functorch preview with PyTorch 1.10

We recommend installing the functorch main development branch for the latest and greatest. This requires an installation of the latest PyTorch nightly.

If you're looking for an older version of functorch that works with a stable version of PyTorch (1.10), please install the functorch preview. More stable releases of functorch with future versions of PyTorch are on the roadmap.

Installing functorch main


Using Colab

Follow the instructions in this Colab notebook

Locally

First, set up an environment. We will be installing a nightly PyTorch binary as well as functorch. If you're using conda, create a conda environment:

conda create --name functorch
conda activate functorch

If you wish to use venv instead:

python -m venv functorch-env
source functorch-env/bin/activate

Next, install one of the following PyTorch nightly binaries.

# For CUDA 10.2
pip install --pre torch -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html --upgrade
# For CUDA 11.1
pip install --pre torch -f https://download.pytorch.org/whl/nightly/cu111/torch_nightly.html --upgrade
# For CPU-only build
pip install --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html --upgrade

If you already have a nightly of PyTorch installed and want to upgrade it (recommended!), append --upgrade to one of those commands.

Install functorch:

pip install ninja  # Makes the build go faster
pip install --user "git+https://github.com/pytorch/functorch.git"

Run a quick sanity check in python:

>>> import torch
>>> from functorch import vmap
>>> x = torch.randn(3)
>>> y = vmap(torch.sin)(x)
>>> assert torch.allclose(y, x.sin())

From Source

functorch is a PyTorch C++ Extension module. To install,

  • Install PyTorch from source. functorch usually runs on the latest development version of PyTorch.
  • Run python setup.py install. You can use DEBUG=1 to compile in debug mode.

Then, try to run some tests to make sure all is OK:

pytest test/test_vmap.py -v
pytest test/test_eager_transforms.py -v

Installing functorch preview with PyTorch 1.10


Using Colab

Follow the instructions here

Locally

Prerequisite: Install PyTorch 1.10

Next, run the following.

pip install ninja  # Makes the build go faster
pip install --user "git+https://github.com/pytorch/[email protected]/torch_1.10_preview"

Finally, run a quick sanity check in python:

>>> import torch
>>> from functorch import vmap
>>> x = torch.randn(3)
>>> y = vmap(torch.sin)(x)
>>> assert torch.allclose(y, x.sin())

What are the transforms?

Right now, we support the following transforms:

  • grad, vjp, jacrev
  • vmap

Furthermore, we have some utilities for working with PyTorch modules:

  • make_functional(model)
  • make_functional_with_buffers(model)

vmap

Note: vmap imposes restrictions on the code that it can be used on. For more details, please read its docstring.

vmap(func)(*inputs) is a transform that adds a dimension to all Tensor operations in func. vmap(func) returns a new function that maps func over some dimension (default: 0) of each Tensor in inputs.

vmap is useful for hiding batch dimensions: one can write a function func that runs on examples and then lift it to a function that can take batches of examples with vmap(func), leading to a simpler modeling experience:

>>> from functorch import vmap
>>> batch_size, feature_size = 3, 5
>>> weights = torch.randn(feature_size, requires_grad=True)
>>>
>>> def model(feature_vec):
>>>     # Very simple linear model with activation
>>>     assert feature_vec.dim() == 1
>>>     return feature_vec.dot(weights).relu()
>>>
>>> examples = torch.randn(batch_size, feature_size)
>>> result = vmap(model)(examples)

grad

grad(func)(*inputs) assumes func returns a single-element Tensor. It computes the gradient of the output of func with respect to inputs[0].

>>> from functorch import grad
>>> x = torch.randn([])
>>> cos_x = grad(lambda x: torch.sin(x))(x)
>>> assert torch.allclose(cos_x, x.cos())
>>>
>>> # Second-order gradients
>>> neg_sin_x = grad(grad(lambda x: torch.sin(x)))(x)
>>> assert torch.allclose(neg_sin_x, -x.sin())

When composed with vmap, grad can be used to compute per-sample-gradients:

>>> from functorch import vmap
>>> batch_size, feature_size = 3, 5
>>>
>>> def model(weights, feature_vec):
>>>     # Very simple linear model with activation
>>>     assert feature_vec.dim() == 1
>>>     return feature_vec.dot(weights).relu()
>>>
>>> def compute_loss(weights, example, target):
>>>     y = model(weights, example)
>>>     return ((y - target) ** 2).mean()  # MSELoss
>>>
>>> weights = torch.randn(feature_size, requires_grad=True)
>>> examples = torch.randn(batch_size, feature_size)
>>> targets = torch.randn(batch_size)
>>> inputs = (weights, examples, targets)
>>> grad_weight_per_example = vmap(grad(compute_loss), in_dims=(None, 0, 0))(*inputs)

vjp and jacrev

The vjp transform applies func to inputs and returns a new function that computes vector-Jacobian products (vjps) given some cotangent Tensors.

>>> from functorch import vjp
>>> outputs, vjp_fn = vjp(func, inputs); vjps = vjp_fn(*cotangents)
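
As a concrete illustration, here is a minimal sketch for a single-input, single-output function; it assumes vjp_fn returns one gradient per primal input:

>>> from functorch import vjp
>>> x = torch.randn(5)
>>> output, vjp_fn = vjp(torch.sin, x)
>>> cotangent = torch.ones_like(output)
>>> (grad_x,) = vjp_fn(cotangent)
>>> assert torch.allclose(grad_x, cotangent * x.cos())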

The jacrev transform returns a new function that takes in x and returns the Jacobian of torch.sin with respect to x:

>>> from functorch import jacrev
>>> x = torch.randn(5)
>>> jacobian = jacrev(torch.sin)(x)
>>> expected = torch.diag(torch.cos(x))
>>> assert torch.allclose(jacobian, expected)

jacrev can be composed with vmap to produce batched Jacobians:

>>> x = torch.randn(64, 5)
>>> jacobian = vmap(jacrev(torch.sin))(x)
>>> assert jacobian.shape == (64, 5, 5)

jacrev can be composed with itself to produce Hessians:

>>> def f(x):
>>>   return x.sin().sum()
>>>
>>> x = torch.randn(5)
>>> hessian = jacrev(jacrev(f))(x)

Tracing through the transformations

We can also trace through these transformations in order to capture the results as new code using make_fx. There is also experimental integration with the NNC compiler (only works on CPU for now!).

>>> from functorch import make_fx, grad
>>> def f(x):
>>>     return torch.sin(x).sum()
>>> x = torch.randn(100)
>>> grad_f = make_fx(grad(f))(x)
>>> print(grad_f.code)

def forward(self, x_1):
    sin = torch.ops.aten.sin(x_1)
    sum_1 = torch.ops.aten.sum(sin, None);  sin = None
    cos = torch.ops.aten.cos(x_1);  x_1 = None
    _tensor_constant0 = self._tensor_constant0
    mul = torch.ops.aten.mul(_tensor_constant0, cos);  _tensor_constant0 = cos = None
    return mul
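
The NNC path mentioned above is not shown in this example. As a rough sketch (using functorch.compile.nnc_jit, which also appears in the issues below), compiling a simple function might look like this; whether it works depends on the experimental CPU-only NNC integration:

>>> from functorch.compile import nnc_jit
>>> def f(x):
>>>     return torch.sin(x).sum()
>>> f_nnc = nnc_jit(f)
>>> x = torch.randn(100)
>>> assert torch.allclose(f_nnc(x), f(x))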

Working with NN modules: make_functional and friends

Sometimes you may want to perform a transform with respect to the parameters and/or buffers of an nn.Module. This can happen for example in:

  • model ensembling, where all of your weights and buffers have an additional dimension
  • per-sample-gradient computation where you want to compute per-sample-grads of the loss with respect to the model parameters

Our solution to this right now is an API that, given an nn.Module, creates a stateless version of it that can be called like a function.

  • make_functional(model) returns a functional version of model and the model.parameters()
  • make_functional_with_buffers(model) returns a functional version of model along with the model.parameters() and model.buffers().

Here's an example where we compute per-sample-gradients using an nn.Linear layer:

import torch
from functorch import make_functional, vmap, grad

model = torch.nn.Linear(3, 3)
data = torch.randn(64, 3)
targets = torch.randn(64, 3)

func_model, params = make_functional(model)

def compute_loss(params, data, targets):
    preds = func_model(params, data)
    return torch.mean((preds - targets) ** 2)

per_sample_grads = vmap(grad(compute_loss), (None, 0, 0))(params, data, targets)

If you're making an ensemble of models, you may find combine_state_for_ensemble useful.
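
As a minimal sketch (the layer sizes and in_dims below are illustrative), combine_state_for_ensemble stacks the parameters and buffers of several same-architecture models so that vmap can evaluate them together on the same minibatch:

import torch
from functorch import combine_state_for_ensemble, vmap

models = [torch.nn.Linear(3, 3) for _ in range(5)]
fmodel, params, buffers = combine_state_for_ensemble(models)

x = torch.randn(64, 3)
# Map over the stacked parameters/buffers (dim 0) and broadcast the same input to every model
predictions = vmap(fmodel, in_dims=(0, 0, None))(params, buffers, x)
assert predictions.shape == (5, 64, 3)

Each tensor in params gains a leading dimension equal to the number of models, which is what in_dims=(0, 0, None) maps over.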

Documentation

For more documentation, see our docs website.

Debugging

  • functorch._C.dump_tensor: Dumps dispatch keys on stack
  • functorch._C._set_vmap_fallback_warning_enabled(False) if the vmap warning spam bothers you.

Future Plans

In the end state, we'd like to upstream this into PyTorch once we iron out the design details. To figure out the details, we need your help -- please send us your use cases by starting a conversation in the issue tracker or try out the prototype.

License

Functorch has a BSD-style license, as found in the LICENSE file.

Citing functorch

If you use functorch in your publication, please cite it by using the following BibTeX entry.

@Misc{functorch2021,
  author =       {Horace He and Richard Zou},
  title =        {functorch: JAX-like composable function transforms for PyTorch},
  howpublished = {\url{https://github.com/pytorch/functorch}},
  year =         {2021}
}
Issues
  • ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl16sym_sizes_customEv

    Hi All,

I was running an older version of PyTorch (built from source) with FuncTorch (built from source), and somehow I've broken the older version of functorch. When I import functorch I get the following error:

    import functorch
    #returns ImportError: ~/.local/lib/python3.9/site-packages/functorch/_C.so: undefined symbol: _ZNK3c1010TensorImpl16sym_sizes_customEv
    

The version of functorch I had was 0.2.0a0+9d6ee76; is there a way to perhaps re-install to fix this ImportError? I do have the latest versions of PyTorch/FuncTorch in a separate conda environment, but I wanted to check how it compares to the older version. In this 'older' conda environment, PyTorch and FuncTorch were versions 1.12.0a0+git7c2103a and 0.2.0a0+9d6ee76, respectively.

    Is there a way to download a specific version of functorch with https://github.com/pytorch/functorch.git ? Or another way to fix this issue?

    opened by AlphaBetaGamma96 24
  • Hessian (w.r.t inputs) calculation in PyTorch differs from FuncTorch

    Hi All,

I've been trying to calculate the Hessian of the output of my network with respect to its inputs within FuncTorch. I had a version within PyTorch that supports batches; however, the two seem to disagree with each other and I have no idea why they don't give the same results. Something is clearly wrong: I know my PyTorch version is right, so either there's an issue in my version of FuncTorch or I've implemented it wrong in FuncTorch.

    Also, how can I use the has_aux flag in jacrev to return the jacobian from the first jacrev so I don't have to repeat the jacobian calculation?

The only problem with my example is that it uses torch.linalg.slogdet, and from what I remember FuncTorch can't vmap over .item(). I do have my own fork of pytorch where I edited the backward to remove the .item() call so it works with vmap. Although it's not the greatest implementation, as I just set it to the default nonsingular_case_backward like so:

    Tensor slogdet_backward(const Tensor& grad_logabsdet,
                            const Tensor& self,
                            const Tensor& signdet, const Tensor& logabsdet) {
      auto singular_case_backward = [&](const Tensor& grad_logabsdet, const Tensor& self) -> Tensor {
        Tensor u, sigma, vh;
        std::tie(u, sigma, vh) = at::linalg_svd(self, false);
        Tensor v = vh.mH();
        // sigma has all non-negative entries (also with at least one zero entry)
        // so logabsdet = \sum log(abs(sigma))
        // but det = 0, so backward logabsdet = \sum log(sigma)
        auto gsigma = grad_logabsdet.unsqueeze(-1).div(sigma);
        return svd_backward({}, gsigma, {}, u, sigma, vh);
      };
    
      auto nonsingular_case_backward = [&](const Tensor& grad_logabsdet, const Tensor& self) -> Tensor {
        // TODO: replace self.inverse with linalg_inverse
        return unsqueeze_multiple(grad_logabsdet, {-1, -2}, self.dim()) * self.inverse().mH();
      };
    
      auto nonsingular = nonsingular_case_backward(grad_logabsdet, self);
      return nonsingular;
    }
    

    My 'minimal' reproducible script is below with the output shown below that. It computes the Laplacian via a PyTorch method and via FuncTorch for a single sample of size [A,1] where A is the number of input nodes to the network.

    import torch
    import torch.nn as nn
    from torch import Tensor
    import functorch
    from functorch import jacrev, jacfwd, hessian, make_functional, vmap
    import time 
    
    _ = torch.manual_seed(0)
    
    print("PyTorch version:   ", torch.__version__)
    print("CUDA version:      ", torch.version.cuda)
    print("FuncTorch version: ", functorch.__version__)
    
    def sync_time() -> float:
      torch.cuda.synchronize()
      return time.perf_counter()
    
    B=1 #batch
    A=3 #input nodes
    
    device=torch.device("cuda")
    
    class model(nn.Module):
    
      def __init__(self, num_inputs, num_hidden):
        super(model, self).__init__()
        
        self.num_inputs=num_inputs
        self.func = nn.Tanh()
        
        self.fc1 = nn.Linear(2, num_hidden)
        self.fc2 = nn.Linear(num_hidden, num_inputs)
      
      def forward(self, x):
        """
        Takes x in [B,A,1] and maps it to sign/logabsdet value in Tuple([B,], [B,])
        """
        
        idx=len(x.shape)
        rep=[1 for _ in range(idx)]
        rep[-2] = self.num_inputs
        g = x.mean(dim=(idx-2), keepdim=True).repeat(*rep)
        f = torch.cat((x,g), dim=-1)
    
        h = self.func(self.fc1(f))
        
        mat = self.fc2(h)
        sgn, logabs = torch.linalg.slogdet(mat)
        return sgn, logabs
    
    net = model(A, 64)
    net = net.to(device)
    
    fnet, params = make_functional(net)
    
    def logabs(params, x):
      _, logabs = fnet(params, x)
      #print("functorch logabs: ",logabs)
      return logabs
    
    
    def kinetic_pytorch(xs: Tensor) -> Tensor:
      """Method to calculate the local kinetic energy values of a netork function, f, for samples, x.
      The values calculated here are 1/f d2f/dx2 which is equivalent to d2log(|f|)/dx2 + (dlog(|f|)/dx)^2
      within the log-domain (rather than the linear-domain).
    
      :param xs: The input positions of the many-body particles
      :type xs: class: `torch.Tensor`
      """
      xis = [xi.requires_grad_() for xi in xs.flatten(start_dim=1).t()]
      xs_flat = torch.stack(xis, dim=1)
    
      _, ys = net(xs_flat.view_as(xs))
      #print("pytorch logabs: ",ys)
      ones = torch.ones_like(ys)
    
      #df_dx calculation
      (dy_dxs, ) = torch.autograd.grad(ys, xs_flat, ones, retain_graph=True, create_graph=True)
    
    
      #d2f_dx2 calculation (diagonal only)
      lay_ys = sum(torch.autograd.grad(dy_dxi, xi, ones, retain_graph=True, create_graph=False)[0] \
                    for xi, dy_dxi in zip(xis, (dy_dxs[..., i] for i in range(len(xis))))
      )
      #print("(PyTorch): ",lay_ys, dy_dxs)
      
      ek_local_per_walker = -0.5 * (lay_ys + dy_dxs.pow(2).sum(-1)) #move const out of loop?
      return ek_local_per_walker
      
    jacjaclogabs = jacrev(jacrev(logabs, argnums=1), argnums=1)
    jaclogabs = jacrev(logabs, argnums=1)
      
    def kinetic_functorch(params, x):
      d2f_dx2 = vmap(jacjaclogabs, in_dims=(None, 0))(params, x)
      df_dx = vmap(jaclogabs, in_dims=(None, 0))(params, x)
      #print("(FuncTorch): ", d2f_dx2.squeeze(-3).squeeze(-1).diagonal(-2,-1).sum(-1), df_dx)
      #remove the trailing 1's so it's an A by A matrix 
      return -0.5 * d2f_dx2.squeeze(-3).squeeze(-1).diagonal(-2,-1).sum(-1) + df_dx.squeeze(-1).pow(2).sum(-1)
    
    x = torch.randn(B,A,1,device=device) #input Tensor 
    
    print("\nd2f/dx2, df/dx: ")
    t1=sync_time()
    kin_pt = kinetic_pytorch(x)
    t2=sync_time()
    t3=sync_time()
    kin_ft = kinetic_functorch(params, x)
    t4=sync_time()
    
    print("\nWalltime: ")
    print("PyTorch:   ",t2-t1)
    print("FuncTorch: ",t4-t3, "\n")
    
    print("Results: ")
    print("PyTorch: ",kin_pt)
    print("FuncTorch: ",kin_ft)
    

    This script returns

    PyTorch version:    1.12.0a0+git7c2103a
    CUDA version:       11.6
    FuncTorch version:  0.2.0a0+9d6ee76
    
    d2f/dx2, df/dx: 
    
    Walltime: 
    PyTorch:    0.4822753759999614
    FuncTorch:  0.004898710998531897 
    
    Results: 
    PyTorch:  tensor([1.3737], device='cuda:0', grad_fn=<MulBackward0>)    # should be the same values
    FuncTorch:  tensor([7.8411], device='cuda:0', grad_fn=<AddBackward0>) # the jacobian matches, but hessian does not
    

    Thanks for the help in advance! :)

    opened by AlphaBetaGamma96 18
  • add batching rule for block_diag, kill DECOMPOSE_FUNCTIONAL

    Companion core PR: https://github.com/pytorch/pytorch/pull/77716

    The above PR makes block_diag composite compliant, and this PR adds a batching rule for it.

    Those two changes together should let us fully remove the DECOMPOSE_FUNCTIONAL macro, which was preventing me from moving the Functionalize dispatch key below FuncTorchBatched (which I want to do as part of XX, in order to properly get functionalization working with LTC/XLA).

    cla signed 
    opened by bdhirsh 13
  • svd-related op regression in functorch

    https://github.com/pytorch/pytorch/pull/69827 and https://github.com/pytorch/pytorch/pull/70253 caused svd-related tests in functorch to fail:

    • https://app.circleci.com/pipelines/github/pytorch/functorch/1277/workflows/5aaf2c43-6c6a-4ab1-94f7-e0493b8049ff/jobs/7659

    The main problem seems to be that the backward pass uses in-place operations that are incompatible with vmap (aka Composite Compliance problems). There are some other failures that seem to be because some other operations are not Composite Compliant but somehow these weren't a problem previously.

    opened by zou3519 12
  • functorch doesn't work in debug mode

    It's that autograd assert that we run into often:

    import torch
    from functorch import make_fx
    from functorch.compile import nnc_jit
    
    
    def f(x, y):
        return torch.broadcast_tensors(x, y)
    
    
    inp1 = torch.rand(())
    inp2 = torch.rand(3)
    
    print(f(inp1, inp2))  # without nnc compile everything works fine
    
    print(make_fx(f)(inp1, inp2))  # fails
    print(nnc_jit(f)(inp1, inp2))
    # RuntimeError: self__storage_saved.value().is_alias_of(result.storage())INTERNAL ASSERT FAILED at "autograd/generated/VariableType_3.cpp":3899, please report a bug to PyTorch.
    

    cc @albanD @soulitzer what's the chance we can add an option to turn these off? They've been more harmful (e.g. prevent debugging in debug mode) than useful for us.

    opened by zou3519 11
  • Index put vmap internal assert

    import torch
    from functorch import vmap
    self = torch.randn(4, 1, 1).cuda()
    idx = (torch.tensor([0]).cuda(),)
    value = torch.randn(1, 1).cuda()
    
    def foo(x):
        return x.index_put_(idx, value, accumulate=True)
    
    vmap(foo)(self)
    
    RuntimeError: linearIndex.numel()*sliceSize*nElemBefore == value.numel()INTERNAL ASSERT FAILED at "/raid/rzou/pt/debug-cuda/aten/src/ATen/native/cuda/Indexing.cu":249, please report a bug to PyTorch. number of flattened indices did not match number of elements in the value tensor41
    
    actionable 
    opened by zou3519 11
  • Batching rule not implemented for aten::item.

    Hey, I would like to use functorch.vmap in a custom PyTorch activation function (the gradients are not needed, because the backward-pass is calculated differently). During the computation of the activation function, I do a lookup in a tensor X using a tensor Y.item() call, similar to the small dummy code below.

    Unfortunately I get the error message: RuntimeError: Batching rule not implemented for aten::item. We could not generate a fallback.

    Is it not possible to do an item() call in a vmap function or is something else wrong? Thanks a lot!

    import torch
    from functorch import vmap
    
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    
    sum = torch.zeros([100, 10], dtype=torch.int32).to(device)
    lookup = torch.randint(100, (20, 1000, 10)).to(device)
    input_tensor = torch.randint(1000, (100, 20)).to(device)
    
    def test_fun(sum, input_tensor):
      for j in range(20):
        for i in range(10):
          sum[i] += lookup[j, input_tensor[j].item(), i]
      return sum
    
    # non-vectorized version
    for i in range(100):
      test_fun(sum[i], input_tensor[i])
    
    # vectorized version throws error
    test_fun_vec = vmap(test_fun)
    test_fun_vec(sum, input_tensor)
    
    opened by hallojs 10
  • torch.atleast_1d batching rule implementation

    Hi functorch devs! I'm filing this issue because my code prints the following warning:

    UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::atleast_1d. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at  /tmp/pip-req-build-ytawxmfk/functorch/csrc/BatchedFallback.cpp:106.)
    

    Why Am I Using atleast_1d ?

I'm subclassing torch.Tensor because my code needs to attach some extra data to that class, an attribute named _block_variable (I'm integrating PyTorch's AD system with another AD system to be able to call torch functions from inside a PDE solve, which is why I also inherit from a class called OverloadedType); e.g. the subclass looks like

    class MyTensor(torch.Tensor, OverloadedType):
        _block_variable = None
    
        @staticmethod
        def __new__(cls, x, *args, **kwargs):
            return super().__new__(cls, x, *args, **kwargs)
    
        def __init__(self, x, block_var=None):
            super(OverloadedType, self).__init__()
            self._block_variable = block_var or BlockVariable(self)
            
    
        def to(self, *args, **kwargs):
            new = Tensor([])
            tmp = super(torch.Tensor, self).to(*args, **kwargs)
            new.data = tmp.data
            new.requires_grad = tmp.requires_grad
            new._block_variable = self._block_variable
            return new
    
         ... #some subclass-specific methods etc
    

This causes problems when I have code that does stuff like torch.tensor([torch.trace(x), torch.trace(x @ x)]) where x is a square MyTensor; the torch.tensor() call raises an exception related to taking the __len__ of a 0-dimensional tensor (the scalar traces). So instead, I do torch.cat([torch.atleast_1d(torch.trace(x)), torch.atleast_1d(torch.trace(x @ x))]), which works. However, this function is functorch.vmap-ed, which triggers the performance warning. It would be great if I could either get the naive implementation (using torch.tensor instead of torch.cat) to work, or if a batching rule for atleast_1d() were to be implemented.

    Thank you for any help you can provide!

    opened by DiffeoInvariant 10
  • Top 25 OpInfos for functorch

    We'd love help on these.

The check box indicates whether the OpInfo has been added to PyTorch core. The ultimate goal is for all of these OpInfos to exist in PyTorch core. The OpInfo is bolded if we have a poor man's version* of the OpInfo in the functorch repo (see https://github.com/facebookresearch/functorch/blob/main/test/functorch_additional_op_db.py).

    Exists

    • [x] torch.nn.functional.softmax (https://github.com/pytorch/pytorch/pull/62077)
    • [x] torch.nn.functional.relu (https://github.com/pytorch/pytorch/pull/62076)
    • [x] torch.nn.functional.interpolate (https://github.com/pytorch/pytorch/pull/61956)
    • [x] torch.nn.functional.pad (https://github.com/pytorch/pytorch/pull/62814)
    • [x] torch.nn.functional.normalize (https://github.com/pytorch/pytorch/pull/62635)
    • [x] torch.nn.functional.cross_entropy (https://github.com/pytorch/pytorch/pull/63547)
    • [x] torch.nn.functional.grid_sample (https://github.com/pytorch/pytorch/pull/62311)
    • [x] torch.nn.functional.one_hot (https://github.com/pytorch/pytorch/pull/62253)
    • [x] torch.nn.functional.mse_loss
    • [x] torch.nn.functional.conv2d (https://github.com/pytorch/pytorch/pull/63517)
    • [x] torch.nn.functional.dropout (https://github.com/pytorch/pytorch/pull/62315)
    • [x] torch.nn.functional.softplus (https://github.com/pytorch/pytorch/pull/62317)
    • [x] torch.nn.functional.linear (https://github.com/pytorch/pytorch/pull/61971)
    • [x] torch.nn.functional.avg_pool2d (https://github.com/pytorch/pytorch/pull/62455)
    • [x] torch.nn.functional.max_pool2d (https://github.com/pytorch/pytorch/pull/63530)
    • [x] torch.nn.functional.nll_loss (https://github.com/pytorch/pytorch/pull/64203)
    • [x] torch.nn.functional.embedding (https://github.com/pytorch/pytorch/pull/63633)
    • [x] torch.nn.functional.adaptive_avg_pool2d (https://github.com/pytorch/pytorch/pull/62704)
    • [x] torch.nn.functional.cosine_similarity (https://github.com/pytorch/pytorch/pull/62959)
    • [x] torch.nn.functional.unfold https://github.com/pytorch/pytorch/pull/62705
    • [x] torch.nn.functional.batch_norm (https://github.com/pytorch/pytorch/pull/63218)
    • [x] torch.nn.functional.conv_transpose2d https://github.com/pytorch/pytorch/pull/62882
    • [x] torch.nn.functional.layer_norm https://github.com/pytorch/pytorch/pull/63276

*Why do we have a poor man's version of these OpInfos? It's because right now we only care about float32 sample inputs on CPU and CUDA, and OpInfos have a lot of flags that take some time to tweak.

    opened by zou3519 10
  • Memory Leak

    Hello! I am thrilled with the functorch package, and have been playing with it lately.

    With @soumik12345 we found a memory leak after training a NN. We documented our findings here:

    http://wandb.me/functorch-intro

    We are probably doing something wrong, but the memory increases after each epoch.


As the GPU is pretty monstrous we didn't notice this straight away, but it clearly fills up progressively. The stateful PyTorch training loop does not produce this.

    high priority 
    opened by tcapelle 9
  • Use fake tensor for primal computation in AOTAutograd

    This prevents AOTAutograd from mutating inputs multiple times when the internal function mutates its inputs.

    Signed-off-by: Edward Z. Yang [email protected]

    cla signed 
    opened by ezyang 8
  • Cuda 11.7 support

    I'm trying to use functorch.compile.memory_efficient_fusion inside an Nvidia-pytorch docker image that runs Cuda 11.7. When I try to use pip install functorch, I get the following error.

    RuntimeError: We've detected an installation of PyTorch 1.12 with CUDA 11.7 support.
    

    When I try to build from source using:

    BUILD_VERSION=$PYTORCH_BUILD_VERSION pip install git+https://github.com/pytorch/functorch.git
    

    I get: RuntimeError: Error compiling objects for extension — the stack trace is quite long, but I'd be happy to post it if it would be helpful.

    Versions

    PyTorch version: 1.13.0a0+08820cb
    Is debug build: False
    CUDA used to build PyTorch: 11.7
    ROCM used to build PyTorch: N/A
    
    OS: Ubuntu 20.04.4 LTS (x86_64)
    GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
    Clang version: Could not collect
    CMake version: version 3.23.2
    Libc version: glibc-2.31
    
    Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10)  [GCC 10.3.0] (64-bit runtime)
    Python platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.10
    Is CUDA available: True
    CUDA runtime version: 11.7.99
    GPU models and configuration: 
    GPU 0: A100-SXM-80GB
    GPU 1: A100-SXM-80GB
    
    Nvidia driver version: 450.172.01
    cuDNN version: Probably one of the following:
    /usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.1
    /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.1
    /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.1
    /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.1
    /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.1
    /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.1
    /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.1
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    Is XNNPACK available: True
    
    Versions of relevant libraries:
    [pip3] numpy==1.22.4
    [pip3] nvidia-dlprof-pytorch-nvtx==1.8.0
    [pip3] torch==1.13.0a0+08820cb
    [pip3] torchmetrics==0.9.2
    [pip3] torchvision==0.14.0a0
    [conda] Could not collect
    
    opened by schmidt-jake 0
  • Add an Ensemble Module that is constructed from a list of Modules and encapsulates the necessary state

Most of the examples I've seen use vmap at the top level, to create an 'outer' ensemble of models, or to factor out the batch dimension. However, my use case is 'inner' ensembles of modules within a larger model. This means I have to register the parameters and buffers from combine_state_for_ensemble with the parent module, which is annoying and messy.

    An obvious solution is to create an Ensemble module which internally calls combine_state_for_ensemble and vmap along with storing the necessary state:

    self.ens = Ensemble(my_modules, in_dims=(0, 0, 2), out_dims=(0, 0, 2))
    ...
    x = self.ens(x)
    

    Even if registering the state weren't an issue, I still think this would be a popular feature. It's more intuitive than the current method of creating ensembles.

    opened by sinking-point 6
  • 25% Performance regression from v0.1.1 to 0.2.0 when calculating hessian

    Hi developers,

After I upgraded functorch from v0.1.1 to 0.2.0, I noticed a 25% performance regression when calculating the Hessian; please check the following benchmark results and the attached benchmark script.

    Please let me know if I did anything wrong, and also whether the perf regression could be fixed. Thanks!

    Benchmark result

    Benchmark result on NVIDIA A100

    # torch 111 and functorch 0.1.1
    ===== benchmark without backward =====
    max pred       error: functorch: 0.00e+00
    max hessian    error: functorch: 0.00e+00
    reference_hessian: 61.837 ms
    functorch_hessian: 29.474 ms
    
    # torch 112 and functorch 0.2.0
    ===== benchmark without backward =====
    max pred       error: functorch: 1.49e-08
    max hessian    error: functorch: 0.00e+00
    reference_hessian: 62.519 ms
    functorch_hessian: 39.666 ms  (0.75 X)
    

    Benchmark result on NVIDIA A6000

    # torch 111 and functorch 0.1.1
    ===== benchmark without backward =====
    max pred       error: functorch: 1.49e-08
    max hessian    error: functorch: 0.00e+00
    reference_hessian: 65.984 ms
    functorch_hessian: 33.662 ms
    
    # torch 112 and functorch 0.2.0
    ===== benchmark without backward =====
    max pred       error: functorch: 1.86e-08
    max hessian    error: functorch: 0.00e+00
    reference_hessian: 67.285 ms
    functorch_hessian: 49.723 ms (0.68 X)
    

    benchmark script

    benchmark.py

    import time
    import argparse
    from functorch import vmap, jacrev, jacfwd
    import torch
    import torch.nn as nn
    
    torch.backends.cuda.matmul.allow_tf32 = False
    
    
    _ = torch.manual_seed(0)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    D1 = 2  # x, y
    D2 = 3  # u, v, p
    B = 10000
    x = torch.randn(B, D1).to(device)
    run_backward = False
    
    model = nn.Sequential(
        nn.Linear(D1, 512),
        nn.ReLU(),
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, D2),
    ).to(device)
    
    
    def predict(x):
        torch.cuda.nvtx.range_push("forward")
        out = model(x)
        torch.cuda.nvtx.range_pop()
        return out, out  # returning two outputs is needed for the jacrev auxiliary object
    
    
    def reference_hessian():
        x_ = x.clone().requires_grad_()
        ones = torch.ones(B, device=x.device)
        pred, _ = predict(x_)
        jacobian_rows = [None] * D2
        hessian_rows = [None] * (D2 * D1)
        for i in range(D2):
            torch.cuda.nvtx.range_push("autograd jacobian")
            jacobian_rows[i] = torch.autograd.grad(pred[:, i], x_, ones, create_graph=True)[
                0
            ]
            torch.cuda.nvtx.range_pop()
    
        for i in range(D2):
            for j in range(D1):
                torch.cuda.nvtx.range_push("autograd hesian")
                hessian_rows[i * D1 + j] = torch.autograd.grad(
                    jacobian_rows[i][:, j], x_, ones, create_graph=True
                )[0]
                torch.cuda.nvtx.range_pop()
    
        jacobian = torch.stack(jacobian_rows)  # [D2, B, D1]
        hessian = torch.stack(hessian_rows)  # [D2 * D1, B, D1]
        if run_backward:
            l = hessian.sum()
            l.backward()
        return hessian.transpose(0, 1), pred
    
    
    def functorch_hessian():
        x_ = x.clone().requires_grad_()
        hessian, pred = vmap(
            jacfwd(jacrev(predict, argnums=0, has_aux=True), argnums=0, has_aux=True),
            in_dims=0,
        )(
            x_
        )  # [B, D2, D1, D1]
        if run_backward:
            l = hessian.sum()
            l.backward()
        return hessian, pred
    
    
    def validate_result():
        # test functorch result
        ref_hes, ref_pred = reference_hessian()
        ft_hes, ft_pred = functorch_hessian()
        ref_hes = ref_hes.view_as(ft_hes)
        print(f"max pred       error: functorch: {(ref_pred - ft_pred).max():.2e}")
        print(f"max hessian    error: functorch: {(ref_hes - ft_hes).max():.2e}")
    
    
    def benchmark(func):
        N = 20
    
        torch.cuda.synchronize()
        start = time.time()
    
        for i in range(N):
            torch.cuda.nvtx.range_push(func.__name__)
            _ = func()
            torch.cuda.nvtx.range_pop()
    
        torch.cuda.synchronize()
        time_ms = ((time.time() - start) / N) * 1000
        print(f"{func.__name__}: {time_ms:.3f} ms")
    
    
    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("-b", "--backward", default=False, action="store_true")
        args = parser.parse_args()
        if args.backward:
            run_backward = True
            print("===== benchmark with backward =====")
        else:
            print("===== benchmark without backward =====")
    
        validate_result()
    
        # warm up
        for i in range(10):
            reference_hessian()
            functorch_hessian()
    
        # benchmark hessian
        benchmark(reference_hessian)
        benchmark(functorch_hessian)
    
    high priority 
    opened by yueyericardo 31
  • Excise dependency on networkx

    We want to merge functorch's build system into pytorch so we can package functorch and pytorch together. In order to do that, we want to make sure we don't take additional dependencies on other projects to preserve pytorch's portability.

    The time has come to revisit our networkx dependency. cc @Chillee

    opened by zou3519 2
  • Having BatchNorm2D raises in-place operation error

    I am working on a project which requires me to calculate the trace of the Hessian of standard ResNet architectures. To this end I am using the Hutchinson method, which requires me to form the Hessian vector product. I am currently using ResNet18 as implemented in torchvision. This entails BatchNorm2D operations with track_running_stats=True. If I set track_running_stats=False I can execute the following code without any problems:

    import torch
    from functorch import make_functional_with_buffers
    from functorch import grad, jvp, vjp
    
    
    
    criterion = torch.nn.CrossEntropyLoss()
    
    def rademacher(shape, dtype=torch.float32, device='cuda'):
        rand = ((torch.rand(shape) < 0.5)) * 2 - 1
        return rand.to(dtype).to(device)
    
    def loss(params, batch, fn, buffers):
        x,y = batch
        out = fn(params, buffers, x)
        loss = criterion(out,y)
        return loss
    
    def hvp(params, batch, v, fn, buffers):
        loss_fn = lambda x: loss(x, batch, fn, buffers)
        _, vjp_fn = vjp(grad(loss_fn), params)
        return  vjp_fn(v)[0]
    
    def hutchinson(net, x, y, iterations, device='cuda'):
        
        fn , params, buffers = make_functional_with_buffers(net)
        params = [p.data for p in params]
    
        trace = 0
        V = iterations
        for _ in range(V):
            v = [rademacher(p.shape, device=device) for p in params]
            Hv = hvp(params, (x,y), v, fn, buffers)
    
            for v, Hv in zip(v, Hv):
                vHv = torch.einsum("i,i->", v.flatten(), Hv.flatten())
                trace += vHv / V
        return trace
    

where net is my ResNet18 and x and y are my images and labels, respectively. However, if I set track_running_stats=True I get the following error:

    RuntimeError: During a grad (vjp, jvp, grad, etc) transform, the function provided attempted to call in-place operation (aten::add_.Tensor) that would mutate a captured Tensor. This is not supported; please rewrite the function being transformed to explicitly accept the mutated Tensor(s) as inputs.

I have encountered the same problem when computing the NTK using the example given in the functorch documentation. Is there a quick workaround to this problem?

    Thanks in advance.

    opened by MaxH1996 6