
    What PyTorch Really Means by a Leaf Tensor and Its Grad



    This isn’t yet another explanation of the chain rule. It’s a tour through the weird side of autograd, where gradients serve physics, not just weights.

    I originally wrote this tutorial for myself during the first year of my PhD, while navigating the intricacies of gradient calculations in PyTorch. Most of autograd is clearly designed with standard backpropagation in mind, and that’s fine, since that’s what most people need.

    But a Physics-Informed Neural Network (PINN) is a moody beast, and it needs a different kind of gradient logic. I spent some time feeding it, and I figured it might be worth sharing the findings with the community, especially with fellow PINN practitioners; maybe it will save someone a few headaches. But if you have never heard of PINNs, don’t worry! This post is still for you, especially if you’re into things like gradients of gradients and all that fun stuff.

    Basic terms

    In the computing world, a tensor is simply a multidimensional array, i.e. a bunch of numbers indexed by one or more integers. To be precise, there also exist zero-dimensional tensors, which are just single numbers. Some people say that tensors are a generalization of matrices to more than two dimensions.

    If you have studied general relativity before, you may have heard that mathematical tensors have things like covariant and contravariant indices. But forget about that: in PyTorch, tensors are just multidimensional arrays. No finesse here.
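
    To make the definition concrete, here is a tiny illustration (my own snippet, not from the original post) of tensors of different dimensionality and their ndim:

    import torch

    scalar = torch.tensor(3.14)          # zero-dimensional tensor: a single number
    vector = torch.tensor([1., 2., 3.])  # one-dimensional tensor
    matrix = torch.zeros(2, 3)           # two-dimensional tensor

    assert scalar.ndim == 0 and vector.ndim == 1 and matrix.ndim == 2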

    A leaf tensor is a tensor that is a leaf (in the graph-theory sense) of a computation graph. We will look at these below, so this definition will make a bit more sense.

    The requires_grad property of a tensor tells PyTorch whether it should remember how this tensor is used in further computations. For now, think of tensors with requires_grad=True as variables, and tensors with requires_grad=False as constants.

    Leaf tensors

    Let’s start by creating a few tensors and checking their properties requires_grad and is_leaf.

    import torch
    
    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    
    c = torch.tensor([5.])
    d = c * c
    
    assert a.requires_grad is True and a.is_leaf is True
    assert b.requires_grad is True and b.is_leaf is False
    assert c.requires_grad is False and c.is_leaf is True
    assert d.requires_grad is False and d.is_leaf is True  # sic!
    del a, b, c, d

    a is a leaf as expected, and b is not, because it is the result of a multiplication. a is set to require grad, so naturally b inherits this property.

    c is obviously a leaf, but why is d a leaf? The reason d.is_leaf is True stems from a specific convention: all tensors with requires_grad set to False are considered leaf tensors, as per PyTorch’s documentation:

    All Tensors that have requires_grad which is False will be leaf Tensors by convention.

    While mathematically d is not a leaf (since it results from another operation, c * c), gradient computation will never extend beyond it. In other words, there won’t be any derivative with respect to c. This allows d to be treated as a leaf.

    In a nutshell, in PyTorch, leaf tensors are either (see the sketch after this list):

    • Directly inputted (i.e. not calculated from other tensors) and with requires_grad=True. Example: neural network weights that are randomly initialized.
    • Tensors that do not require gradients at all, regardless of whether they are directly inputted or computed. In the eyes of autograd, these are just constants. Examples:
      • any neural network input data,
      • an input image after mean removal or other operations that involve only non-gradient-requiring tensors.
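
    A minimal sketch of both cases (my own illustration, assuming torch.nn is available):

    import torch
    from torch import nn

    layer = nn.Linear(4, 2)         # randomly initialized weights: trainable leaves
    x = torch.randn(8, 4)           # input data: requires_grad=False by default
    x_centered = x - x.mean(dim=0)  # computed, but only from non-gradient-requiring tensors

    assert layer.weight.is_leaf and layer.weight.requires_grad
    assert x_centered.is_leaf and not x_centered.requires_grad  # a "constant", hence a leaf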

    A small remark for those who want to know more. The requires_grad property is inherited as illustrated here:

    a = torch.tensor([5.], requires_grad=True)
    b = torch.tensor([5.], requires_grad=True)
    c = torch.tensor([5.], requires_grad=False)
    
    d = torch.sin(a * b * c)
    
    assert d.requires_grad == any((x.requires_grad for x in (a, b, c)))

    Code remark: all code snippets should be self-contained, apart from imports, which I include only when they appear for the first time. I drop them in order to minimize boilerplate code. I trust that the reader will be able to take care of those easily.

    Grad retention

    A separate issue is gradient retention. All nodes in the computation graph, meaning all tensors used, have gradients computed if they require grad. However, only leaf tensors retain these gradients. This makes sense because gradients are typically used to update tensors, and only leaf tensors are subject to updates during training. Non-leaf tensors, like b in the first example, are not directly updated; they change as a result of changes in a, so their gradients can be discarded. However, there are scenarios, particularly in Physics-Informed Neural Networks (PINNs), where you might want to retain the gradients of these intermediate tensors. In such cases, you will need to explicitly mark non-leaf tensors to retain their gradients. Let’s see:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.backward()
    
    assert a.grad is not None
    assert b.grad is None  # generates a warning

    You have probably just seen a warning:

    UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being
    accessed. Its .grad attribute won't be populated during autograd.backward().
    If you indeed want the .grad field to be populated for a non-leaf Tensor, use
    .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by
    mistake, make sure you access the leaf Tensor instead.
    See github.com/pytorch/pytorch/pull/30531 for more informations.
    (Triggered internally at aten\src\ATen/core/TensorBody.h:491.)

    So let’s fix it by forcing b to retain its gradient:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()  # <- the difference
    b.backward()
    
    assert a.grad is not None
    assert b.grad is not None

    Mysteries of grad

    Now let’s take a look at the famous grad itself. What is it? Is it a tensor? If so, is it a leaf tensor? Does it require or retain grad?

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward()
    
    assert isinstance(a.grad, torch.Tensor)
    assert a.grad.requires_grad is False and a.grad.retains_grad is False and a.grad.is_leaf is True
    assert b.grad.requires_grad is False and b.grad.retains_grad is False and b.grad.is_leaf is True

    Apparently:

    – grad itself is a tensor,
    – grad is a leaf tensor,
    – grad doesn’t require grad.

    Does it retain grad? This question doesn’t make sense, because it doesn’t require grad in the first place. We will come back to the question of grad being a leaf tensor in a moment, but first we’ll test a few things.

    Multiple backwards and retain_graph

    What will happen when we calculate the same grad twice?

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward()
    try:
        b.backward()
    except RuntimeError:
        """
        RuntimeError: Trying to backward through the graph a second time (or
        directly access saved tensors after they have already been freed). Saved
        intermediate values of the graph are freed when you call .backward() or
        autograd.grad(). Specify retain_graph=True if you need to backward through
        the graph a second time or if you need to access saved tensors after
        calling backward.
        """

    The error message explains it all. This will work:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    
    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([6.])
    
    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([12.])
    
    b.backward(retain_graph=False)
    print(a.grad)  # prints tensor([18.])
    
    # b.backward(retain_graph=False)  # <- here we would get an error, because in
    # the previous call we did not retain the graph.

    Side (but important) note: you can also observe how the gradient accumulates in a: with every iteration it is added.
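
    If you want each backward call to start from a clean slate, reset the gradient in between. A minimal sketch of my own (resetting to None is, in spirit, what optimizer.zero_grad() does for model parameters):

    a = torch.tensor([3.], requires_grad=True)
    b = a * a

    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([6.])

    a.grad = None  # drop the accumulated gradient before the next backward
    b.backward()
    print(a.grad)  # prints tensor([6.]) again, instead of tensor([12.])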

    The powerful create_graph argument

    How to make grad require grad?

    a = torch.tensor([5.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward(create_graph=True)
    
    # Here an interesting thing happens: now a.grad requires grad!
    assert a.grad.requires_grad is True
    assert a.grad.is_leaf is False
    
    # On the other hand, the grad of b does not require grad, as before.
    assert b.grad.requires_grad is False
    assert b.grad.is_leaf is True

    The above is very useful: a.grad, which mathematically is \( \frac{\partial b}{\partial a} \), is not a constant (leaf) anymore, but a regular member of the computation graph that can be used further. We will use that fact in Part 2.

    Why doesn’t b.grad require grad? Because the derivative of b with respect to b is simply 1.
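
    To see that a.grad really is a usable node of the graph now, we can differentiate it once more (a small sketch of my own, not from the original):

    a = torch.tensor([5.], requires_grad=True)
    b = a * a
    b.backward(create_graph=True)

    first = a.grad   # 2*a, i.e. tensor([10.]), and it requires grad
    a.grad = None    # reset, so the next backward stores only the new gradient
    first.backward()
    print(a.grad)    # prints tensor([2.]), i.e. d(2a)/da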

    If backward feels counterintuitive to you now, don’t worry. We will soon switch to another method, called nomen omen grad, that allows us to precisely pick the ingredients of the derivatives. Before that, two side notes:

    Side note 1: If you set create_graph to True, it also sets retain_graph to True (if not explicitly set). In the PyTorch code it looks exactly like this:

        if retain_graph is None:
            retain_graph = create_graph
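
    A quick way to see this default in action (my own sketch): a second backward succeeds without passing retain_graph explicitly, because the first call already kept the graph.

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.backward(create_graph=True)  # implies retain_graph=True
    b.backward()                   # so a second backward works without retain_graph
    print(a.grad)                  # the two calls accumulate: 6 + 6 = 12
    a.grad = None                  # reset, also breaking the reference cycle mentioned below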

    Side note 2: You probably saw a warning like this:

        UserWarning: Using backward() with create_graph=True will create a reference
        cycle between the parameter and its gradient which can cause a memory leak.
        We recommend using autograd.grad when creating the graph to avoid this. If
        you have to use this function, make sure to reset the .grad fields of your
        parameters to None after use to break the cycle and avoid the leak.
        (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\engine.cpp:1156.)
          Variable._execution_engine.run_backward(  # Calls into the C++ engine to
        run the backward pass

    And we will follow the advice and use autograd.grad now.

    Taking derivatives with the autograd.grad function

    Now let’s move from the somewhat high-level .backward() method to the lower-level grad function, which explicitly calculates the derivative of one tensor with respect to another.

    from torch.autograd import grad
    
    a = torch.tensor([3.], requires_grad=True)
    b = a * a * a
    db_da = grad(b, a, create_graph=True)[0]
    assert db_da.requires_grad is True

    Similarly as with backward, the derivative of b with respect to a can be treated as a function and differentiated further. In other words, the create_graph flag can be understood as: when calculating gradients, keep the history of how they were calculated, so that we can treat them as non-leaf tensors that require grad, and use them further.

    In particular, we can calculate the second-order derivative:

    d2b_da2 = grad(db_da, a, create_graph=True)[0]
    # Side note: the grad function returns a tuple, and its first element is what we need.
    assert d2b_da2.item() == 18
    assert d2b_da2.requires_grad is True

    As mentioned before: this is actually the key property that allows us to do PINNs with PyTorch.
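
    To give a taste of what that looks like in practice, here is a minimal PINN-flavoured sketch (my own illustration, not code from this article): we differentiate a network’s output with respect to its input, twice, the way one would build a PDE residual. The summing trick works here because each output depends only on its own input point; non-scalar outputs get a proper treatment in Part 2.

    import torch
    from torch import nn
    from torch.autograd import grad

    net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    x = torch.linspace(0., 1., 50).reshape(-1, 1).requires_grad_(True)  # inputs need grad here
    u = net(x)

    # du/dx: summing is fine because u[i] depends only on x[i]
    u_x = grad(u.sum(), x, create_graph=True)[0]
    # d2u/dx2: possible only because u_x itself is part of the graph
    u_xx = grad(u_x.sum(), x, create_graph=True)[0]

    # e.g. a residual for u'' = -pi^2 sin(pi x); the loss is differentiable w.r.t. the weights
    residual = u_xx + torch.pi ** 2 * torch.sin(torch.pi * x)
    loss = (residual ** 2).mean()
    loss.backward()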

    Wrapping up

    Most tutorials about PyTorch gradients focus on backpropagation in classical supervised learning. This one explored a different perspective, one shaped by the needs of PINNs and other gradient-hungry beasts.

    We learnt what leaves are in the PyTorch jungle, why gradients are retained by default only for leaf nodes, and how to retain them when needed for other tensors. We saw how create_graph turns gradients into differentiable citizens of the autograd world.

    But there are still many things to uncover, in particular why gradients of non-scalar functions require extra care, how to compute second-order derivatives without using up all your RAM, and why slicing your input tensor is a bad idea when you need an elementwise gradient.

    So let’s meet in Part 2, where we’ll take a closer look at grad 👋


