
    AI in Multiple GPUs: Understanding the Host and Device Paradigm

By ProfitlyAI | February 12, 2026


This post is part of a series about distributed AI across multiple GPUs:

    • Part 1: Understanding the Host and Device Paradigm (this article)
    • Part 2: Point-to-Point and Collective Operations (coming soon)
    • Part 3: How GPUs Communicate (coming soon)
    • Part 4: Gradient Accumulation & Distributed Data Parallelism (DDP) (coming soon)
    • Part 5: ZeRO (coming soon)
    • Part 6: Tensor Parallelism (coming soon)

    Introduction

This guide explains the foundational concepts of how a CPU and a discrete graphics card (GPU) work together. It's a high-level introduction designed to help you build a mental model of the host-device paradigm. We'll focus specifically on NVIDIA GPUs, which are the most commonly used for AI workloads.

For integrated GPUs, such as those found in Apple Silicon chips, the architecture is slightly different, and it won't be covered in this post.

The Big Picture: The Host and The Device

The most important concept to grasp is the relationship between the Host and the Device.

    • The Host: This is your CPU. It runs the operating system and executes your Python script line by line. The Host is the commander; it's responsible for the overall logic and tells the Device what to do.
    • The Device: This is your GPU. It's a powerful but specialized coprocessor designed for massively parallel computations. The Device is the accelerator; it doesn't do anything until the Host gives it a job.

Your program always begins on the CPU. When you want the GPU to perform a task, like multiplying two large matrices, the CPU sends the instructions and the data over to the GPU.
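
A minimal sketch of this flow in PyTorch (assuming a CUDA-capable GPU is available):

import torch

# The Host (CPU) creates two matrices in its own RAM
a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)

# The Host sends the data over to the Device (GPU VRAM)
a_gpu = a.to('cuda')
b_gpu = b.to('cuda')

# The Host issues the instruction; the Device performs the multiplication
c_gpu = a_gpu @ b_gpu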

The CPU-GPU Interaction

The Host talks to the Device through a queuing system.

    1. CPU Issues Commands: Your script, running on the CPU, encounters a line of code meant for the GPU (e.g., tensor.to('cuda')).
    2. Commands Are Queued: The CPU doesn't wait. It simply places this command onto a special to-do list for the GPU called a CUDA Stream (more on this in the next section).
    3. Asynchronous Execution: The CPU doesn't wait for the actual operation to be completed by the GPU; the host moves on to the next line of your script. This is called asynchronous execution, and it's key to achieving high performance. While the GPU is busy crunching numbers, the CPU can work on other tasks, like preparing the next batch of data (see the timing sketch after this list).
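
You can observe this asynchrony yourself by timing a kernel launch with and without an explicit wait; a small sketch, assuming a CUDA device is available:

import time
import torch

x = torch.randn(4096, 4096, device='cuda')

start = time.perf_counter()
y = x @ x  # enqueued on the stream; this call returns almost immediately
print(f"launch returned after {(time.perf_counter() - start) * 1e3:.2f} ms")

torch.cuda.synchronize()  # block the CPU until the GPU actually finishes
print(f"GPU finished after {(time.perf_counter() - start) * 1e3:.2f} ms")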

    CUDA Streams

A CUDA Stream is an ordered queue of GPU operations. Operations submitted to a single stream execute in order, one after another. However, operations across different streams can execute concurrently: the GPU can juggle multiple independent workloads at the same time.

By default, every PyTorch GPU operation is enqueued on the current active stream (usually the default stream, which is created automatically). This is simple and predictable: each operation waits for the previous one to finish before starting. For most code, you never notice this. But it leaves performance on the table when you have work that could overlap.

Multiple Streams: Concurrency

    The classic use case for multiple streams is overlapping computation with data transfers. While the GPU processes batch N, you can simultaneously copy batch N+1 from CPU RAM to GPU VRAM:

    Stream 0 (compute): [process batch 0]────[process batch 1]───
    Stream 1 (data):    ────[copy batch 1]────[copy batch 2]───

    This pipeline is possible because compute and data transfer happen on separate hardware units inside the GPU, enabling true parallelism. In PyTorch, you create streams and schedule work onto them with context managers:

    import torch

    # model, current_batch, and next_batch_cpu are assumed to be defined elsewhere
    compute_stream = torch.cuda.Stream()
    transfer_stream = torch.cuda.Stream()

    with torch.cuda.stream(transfer_stream):
        # Enqueue the transfer on transfer_stream
        next_batch = next_batch_cpu.to('cuda', non_blocking=True)

    with torch.cuda.stream(compute_stream):
        # This runs concurrently with the transfer above
        output = model(current_batch)

Note the non_blocking=True flag on .to(). Without it, the transfer would still block the CPU thread even when you intend it to run asynchronously. (For the copy to be truly asynchronous, the source CPU tensor must also live in pinned memory, e.g. via .pin_memory().)

    Synchronization Between Streams

    Since streams are independent, you need to explicitly signal when one depends on another. The blunt tool is:

    torch.cuda.synchronize()  # waits for ALL streams on the device to finish

    A more surgical approach uses CUDA Events. An event marks a specific point in a stream, and another stream can wait on it without halting the CPU thread:

    event = torch.cuda.Event()
    
    with torch.cuda.stream(transfer_stream):
        next_batch = next_batch_cpu.to('cuda', non_blocking=True)
        event.record()  # mark: transfer is done
    
    with torch.cuda.stream(compute_stream):
        compute_stream.wait_event(event)  # don't start until transfer completes
        output = model(next_batch)

    This is more efficient than stream.synchronize() because it only stalls the dependent stream on the GPU side — the CPU thread stays free to keep queuing work.

    For day-to-day PyTorch training code you won’t need to manage streams manually. But features like DataLoader(pin_memory=True) and prefetching rely heavily on this mechanism under the hood. Understanding streams helps you recognize why those settings exist and gives you the tools to diagnose subtle performance bottlenecks when they appear.
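
As an illustration of how those settings connect, a typical loader setup might look like this (a sketch; dataset stands in for any standard PyTorch dataset):

from torch.utils.data import DataLoader

loader = DataLoader(
    dataset,
    batch_size=64,
    num_workers=4,    # CPU workers prepare batches in the background
    pin_memory=True,  # page-locked host memory enables async host-to-device copies
)

for batch in loader:
    # non_blocking=True lets this copy overlap with ongoing GPU compute
    batch = batch.to('cuda', non_blocking=True)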

    PyTorch Tensors

    PyTorch is a powerful framework that abstracts away many details, but this abstraction can sometimes obscure what is happening under the hood.

When you create a PyTorch tensor, it has two parts: metadata (like its shape and data type) and the actual numerical data. So when you run something like t = torch.randn(100, 100, device=device), the tensor's metadata is stored in the host's RAM, while its data is stored in the GPU's VRAM.
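
For example (a small sketch, assuming device = 'cuda'):

import torch

device = 'cuda'
t = torch.randn(100, 100, device=device)

print(t.shape)   # metadata: answered instantly from host RAM
print(t.device)  # metadata: reports cuda:0
# the 10,000 float32 values themselves live in GPU VRAM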

    This distinction is important. When you run print(t.shape), the CPU can immediately access this information because the metadata is already in its own RAM. But what happens if you run print(t), which requires the actual data living in VRAM?

    Host-Device Synchronization

    Accessing GPU data from the CPU can trigger a Host-Device Synchronization, a common performance bottleneck. This occurs whenever the CPU needs a result from the GPU that isn’t yet available in the CPU’s RAM.

    For example, consider the line print(gpu_tensor) which prints a tensor that is still being computed by the GPU. The CPU cannot print the tensor’s values until the GPU has finished all the calculations to obtain the final result. When the script reaches this line, the CPU is forced to block, i.e. it stops and waits for the GPU to finish. Only after the GPU completes its work and copies the data from its VRAM to the CPU’s RAM can the CPU proceed.
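
You can observe this stall directly; a small sketch, again assuming a CUDA device:

import time
import torch

x = torch.randn(8192, 8192, device='cuda')

t0 = time.perf_counter()
y = x @ x  # enqueued asynchronously; the CPU moves on immediately
print(f"enqueue took {(time.perf_counter() - t0) * 1e3:.2f} ms")

t0 = time.perf_counter()
print(y)  # forces the CPU to wait for the result and copy it from VRAM
print(f"print(y) stalled the CPU for {(time.perf_counter() - t0) * 1e3:.2f} ms")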

    As another example, what’s the difference between torch.randn(100, 100).to(device) and torch.randn(100, 100, device=device)? The first method is less efficient because it creates the data on the CPU and then transfers it to the GPU. The second method is more efficient because it creates the tensor directly on the GPU; the CPU only sends the creation command.
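
A quick way to compare the two (a benchmark sketch; absolute timings will vary with hardware):

import time
import torch

def avg_time(fn, n=100):
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / n * 1e3  # ms per call

print(avg_time(lambda: torch.randn(1000, 1000).to('cuda')))      # create on CPU, then copy
print(avg_time(lambda: torch.randn(1000, 1000, device='cuda')))  # create directly on GPU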

    These synchronization points can severely impact performance. Effective GPU programming involves minimizing them to ensure both the Host and Device stay as busy as possible. After all, you want your GPUs to go brrrrr.

    Image by author: generated with ChatGPT

    Scaling Up: Distributed Computing and Ranks

    Training large models, such as Large Language Models (LLMs), often requires more compute power than a single GPU can offer. Coordinating work across multiple GPUs brings you into the world of distributed computing.

    In this context, a new and important concept emerges: the Rank.

    • Each rank is a CPU process which gets assigned a single device (GPU) and a unique ID. If you launch a training script across two GPUs, you will create two processes: one with rank=0 and another with rank=1.

This means you are launching two separate instances of your Python script. On a single machine with multiple GPUs (a single node), these processes share the same CPU but remain independent, without sharing memory or state. Rank 0 commands its assigned GPU (cuda:0), while Rank 1 commands another GPU (cuda:1). Although both ranks run the same code, you can use a variable that holds the rank ID to assign different tasks to each GPU, like having each one process a different portion of the data (we'll see examples of this in the next blog post of this series).
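
A minimal sketch of how each rank binds to its own GPU, assuming the script is launched with torchrun --nproc_per_node=2:

import os
import torch
import torch.distributed as dist

# torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process
dist.init_process_group(backend='nccl')
local_rank = int(os.environ['LOCAL_RANK'])
torch.cuda.set_device(local_rank)  # rank 0 -> cuda:0, rank 1 -> cuda:1

rank = dist.get_rank()
print(f"Rank {rank} is using GPU {torch.cuda.current_device()}")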

    Conclusion

Congratulations on reading all the way to the end! In this post, you learned about:

    • The Host/Device relationship
    • Asynchronous execution
    • CUDA Streams and how they enable concurrent GPU work
    • Host-Device synchronization

    In the next blog post, we will dive deeper into Point-to-Point and Collective Operations, which enable multiple GPUs to coordinate complex workflows such as distributed neural network training.


