    AI in Multiple GPUs: How GPUs Communicate

By ProfitlyAI · February 19, 2026


This article is part of a series about distributed AI across multiple GPUs.

    Introduction

Before diving into advanced parallelism techniques, we need to understand the key technologies that enable GPUs to communicate with one another.

But why do GPUs need to communicate in the first place? When training AI models across multiple GPUs, each GPU processes different data batches, but they all need to stay synchronized by sharing gradients during backpropagation or exchanging model weights. The specifics of what gets communicated and when depend on your parallelism strategy, which we'll explore in depth in the next blog posts. For now, just know that modern AI training is communication-intensive, making efficient GPU-to-GPU data transfer critical for performance.
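To make that concrete, here is a minimal sketch of the gradient-averaging step that data-parallel training performs after each backward pass. It assumes PyTorch with an already-initialized `torch.distributed` process group; the function is illustrative, not a full training loop (frameworks like DDP do this for you).

```python
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module) -> None:
    """Average every parameter's gradient across all participating GPUs.

    Assumes dist.init_process_group("nccl", ...) has already been called
    and that each rank has run a backward pass on its own data batch.
    """
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum this gradient tensor across every rank, then divide so
            # that all ranks end up with the same averaged gradient.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
```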

    The Communication Stack

    PCIe

    PCIe (Peripheral Component Interconnect Express) connects expansion cards like GPUs to the motherboard using independent point-to-point serial lanes. Here’s what different PCIe generations offer for a GPU using 16 lanes:

• Gen4 x16: ~32 GB/s in each direction
• Gen5 x16: ~64 GB/s in each direction
• Gen6 x16: ~128 GB/s in each direction (16 lanes × 8 GB/s/lane = 128 GB/s)

    High-end server CPUs typically offer 128 PCIe lanes, and modern GPUs need 16 lanes for optimal bandwidth. This is why you usually see 8 GPUs per server (128 = 16 x 8). Power consumption and physical space in server chassis also make it impractical to go beyond 8 GPUs in a single node.
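For intuition, a quick back-of-the-envelope calculation with the approximate x16 bandwidths above shows how long an ideal host-to-GPU transfer of a hypothetical 2 GB training batch would take per generation (ignoring protocol overhead and latency):

```python
# Approximate PCIe x16 bandwidths from the list above, in GB/s.
PCIE_BANDWIDTH_GB_S = {"Gen4 x16": 32, "Gen5 x16": 64, "Gen6 x16": 128}

batch_size_gb = 2.0  # hypothetical 2 GB batch of training data

for gen, bw in PCIE_BANDWIDTH_GB_S.items():
    # Ideal transfer time: size / bandwidth, converted to milliseconds.
    print(f"{gen}: {batch_size_gb / bw * 1000:.1f} ms per {batch_size_gb} GB batch")
```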

    NVLink

    NVLink enables direct GPU-to-GPU communication within the same server (node), bypassing the CPU entirely. This NVIDIA-proprietary interconnect creates a direct memory-to-memory pathway between GPUs with huge bandwidth:

    • NVLink 3 (A100): ~600 GB/s per GPU
    • NVLink 4 (H100): ~900 GB/s per GPU
    • NVLink 5 (Blackwell): Up to 1.8 TB/s per GPU

A note on NVLink for CPU-GPU communication

Certain CPU architectures support NVLink as a PCIe replacement, dramatically accelerating CPU-GPU communication by removing the PCIe bottleneck in data transfers such as moving training batches from CPU to GPU. This CPU-GPU NVLink capability makes CPU offloading (a technique that saves VRAM by storing data in RAM instead) practical for real-world AI applications. Since scaling RAM is usually cheaper than scaling VRAM, this approach offers significant economic advantages.

CPUs with NVLink support include IBM POWER8, POWER9, and NVIDIA Grace.
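Regardless of what your CPUs support, you can quickly check which GPU pairs on a node have a direct peer-to-peer path to each other's memory. A minimal sketch with PyTorch, assuming a multi-GPU node; whether the path runs over NVLink or PCIe depends on the node's topology (`nvidia-smi topo -m` shows the actual link types):

```python
import torch

# Check which GPU pairs can access each other's memory directly (P2P).
n = torch.cuda.device_count()
for src in range(n):
    for dst in range(n):
        if src != dst:
            ok = torch.cuda.can_device_access_peer(src, dst)
            print(f"GPU {src} -> GPU {dst}: peer access {'yes' if ok else 'no'}")
```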

However, there's a catch. In a server with 8x H100s, each GPU needs to communicate with 7 others, splitting that 900 GB/s into seven point-to-point connections of about 128 GB/s each. That's where NVSwitch comes in.

    NVSwitch

    NVSwitch acts as a central hub for GPU communication, dynamically routing (switching if you will) data between GPUs as needed. With NVSwitch, every Hopper GPU can communicate at 900 GB/s with all other Hopper GPUs simultaneously, i.e. peak bandwidth doesn’t depend on how many GPUs are communicating. This is what makes NVSwitch “non-blocking”. Each GPU connects to several NVSwitch chips via multiple NVLink connections, ensuring maximum bandwidth.

    While NVSwitch started as an intra-node solution, it’s been extended to interconnect multiple nodes, creating GPU clusters that support up to 256 GPUs with all-to-all communication at near-local NVLink speeds.
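To see what the fabric on your own node actually delivers, a rough micro-benchmark of a direct device-to-device copy is often enough. A minimal PyTorch sketch, assuming at least two GPUs; real measurements need more care with warm-up and repeated runs:

```python
import time
import torch

# Rough micro-benchmark of direct GPU0 -> GPU1 copy bandwidth.
# On an NVLink/NVSwitch system this should land far above PCIe rates.
size_bytes = 1 << 30  # 1 GiB
src = torch.empty(size_bytes // 4, dtype=torch.float32, device="cuda:0")
dst = torch.empty(size_bytes // 4, dtype=torch.float32, device="cuda:1")

dst.copy_(src)  # warm-up copy (sets up the peer mapping)
torch.cuda.synchronize(0)
torch.cuda.synchronize(1)

t0 = time.perf_counter()
dst.copy_(src)
torch.cuda.synchronize(0)
torch.cuda.synchronize(1)
elapsed = time.perf_counter() - t0
print(f"~{size_bytes / 1e9 / elapsed:.0f} GB/s GPU0 -> GPU1")
```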

    The generations of NVSwitch are:

    • First-Generation: Supports up to 16 GPUs per server (compatible with Tesla V100)
    • Second-Generation: Also supports up to 16 GPUs with improved bandwidth and lower latency
    • Third-Generation: Designed for H100 GPUs, supports up to 256 GPUs

    InfiniBand

    InfiniBand handles inter-node communication. While much slower (and cheaper) than NVSwitch, it’s commonly used in datacenters to scale to thousands of GPUs. Modern InfiniBand supports NVIDIA GPUDirect® RDMA (Remote Direct Memory Access), letting network adapters access GPU memory directly without CPU involvement (no expensive copying to host RAM).

    Current InfiniBand speeds include:

• HDR (200 Gb/s): ~25 GB/s per port
• NDR (400 Gb/s): ~50 GB/s per port
• XDR (800 Gb/s): ~100 GB/s per port

    These speeds are significantly slower than intra-node NVLink due to network protocol overhead and the need for two PCIe traversals (one at the sender and one at the receiver).

    Key Design Principles

    Understanding Linear Scaling

    Linear scaling is the holy grail of distributed computing. In simple terms, it means doubling your GPUs should double your throughput and halve your training time. This happens when communication overhead is minimal compared to computation time, allowing each GPU to operate at full capacity. However, perfect linear scaling is rare in AI workloads because communication requirements grow with the number of devices, and it’s usually impossible to achieve perfect compute-communication overlap (explained next).
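A simple way to quantify how close you are to linear scaling is scaling efficiency: the observed speedup divided by the number of GPUs. A quick sketch with made-up timings:

```python
def scaling_efficiency(time_1_gpu: float, time_n_gpus: float, n_gpus: int) -> float:
    """Observed speedup relative to ideal (linear) speedup, as a fraction."""
    speedup = time_1_gpu / time_n_gpus
    return speedup / n_gpus

# Hypothetical numbers: 1 GPU takes 100 min/epoch, 8 GPUs take 14 min/epoch.
print(f"{scaling_efficiency(100.0, 14.0, 8):.0%}")  # ~89% of linear scaling
```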

    The Importance of Compute-Communication Overlap

    When a GPU sits idle waiting for data to be transferred before it can be processed, you’re wasting resources. Communication operations should overlap with computation as much as possible. When that’s not possible, we call that communication an “exposed operation”.
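In practice, overlap usually means launching communication asynchronously and continuing to compute while it is in flight. A minimal sketch using PyTorch's asynchronous collectives; the tensors and the surrounding training step are hypothetical, and a process group is assumed to be initialized:

```python
import torch
import torch.distributed as dist

def overlapped_step(grads: torch.Tensor, next_layer_input: torch.Tensor) -> torch.Tensor:
    # Kick off the gradient all-reduce without blocking: async_op=True
    # returns a work handle instead of waiting for the collective to finish.
    handle = dist.all_reduce(grads, op=dist.ReduceOp.SUM, async_op=True)

    # Do useful compute while the communication is in flight.
    activations = torch.relu(next_layer_input)

    # Only block when the reduced gradients are actually needed.
    handle.wait()
    return activations
```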

    Intra-Node vs. Inter-Node: The Performance Cliff

    Modern server-grade motherboards support up to 8 GPUs. Within this range, you can often achieve near-linear scaling thanks to high-bandwidth, low-latency intra-node communication.

    Once you scale beyond 8 GPUs and start using multiple nodes connected via InfiniBand, you’ll see a large performance degradation. Inter-node communication is much slower than intra-node NVLink, introducing network protocol overhead, higher latency, and bandwidth limitations. As you add more GPUs, each GPU must coordinate with more peers, spending more time idle waiting for data transfers to complete.
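A back-of-the-envelope comparison makes the cliff concrete. The bandwidths are the approximate figures from earlier in the post, and the 14 GB gradient payload (roughly a 7B-parameter model in fp16) is a hypothetical example:

```python
# Ideal time to move 14 GB of gradients, ignoring latency and protocol overhead.
gradients_gb = 14.0  # ~7B parameters in fp16 (hypothetical)

links = {
    "NVLink 4 (intra-node)": 900,       # GB/s per GPU
    "InfiniBand NDR (inter-node)": 50,  # GB/s per port
}
for name, bw in links.items():
    print(f"{name}: {gradients_gb / bw * 1000:.0f} ms")
```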

    Conclusion

    Follow me on X for more free AI content @l_cesconetto

Congratulations on making it to the end! In this post you learned about:

    • The CPU-GPU and GPU-GPU communication fundamentals:
      • PCIe, NVLink, NVSwitch, and InfiniBand
    • Key design ideas for distributed GPU computing
• You're now able to make much more informed decisions when designing your AI workloads

In the next blog post, we'll dive into our first parallelism technique, Distributed Data Parallelism.

    1. NVIDIA Blog
    2. GPU Direct



