    Optimizing Data Transfer in Batched AI/ML Inference Workloads

    By ProfitlyAI | January 12, 2026 | 14 min read


    This post is a sequel to Optimizing Data Transfer in AI/ML Workloads, where we demonstrated the use of NVIDIA Nsight™ Systems (nsys) to identify and fix a common data-loading bottleneck — occurrences where the GPU idles while it waits for input data from the CPU. In this post we focus our attention on data traveling in the opposite direction, from the GPU device to the CPU host. More specifically, we address AI/ML inference workloads where the size of the output returned by the model is relatively large. Common examples include: 1) running a scene-segmentation (per-pixel labeling) model on batches of high-resolution images and 2) capturing high-dimensional feature embeddings of input sequences using an encoder model (e.g., to build a vector database). Both examples involve executing a model on an input batch and then copying the output tensor from the GPU to the CPU for further processing, storage, and/or over-the-network communication.

    GPU-to-CPU memory copies of the model output typically receive much less attention in optimization tutorials than the CPU-to-GPU copies that feed the model (e.g., see here). But their potential impact on model efficiency and execution costs can be just as detrimental. Moreover, while optimizations to CPU-to-GPU data loading are well documented and easy to implement, optimizing data copies in the opposite direction requires a bit more manual labor.

    In this post we will apply the same strategy we used in our previous post: we will define a toy model and use the nsys profiler to identify and solve performance bottlenecks. We will run our experiments on an Amazon EC2 g6e.2xlarge instance (with an NVIDIA L40S GPU) running an AWS Deep Learning (Ubuntu 24.04) AMI with PyTorch (2.8), the nsys-cli profiler (version 2025.6.1), and the NVIDIA Tools Extension (NVTX) library.
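    Before profiling, it can be useful to confirm that the runtime environment matches the versions above. A minimal sketch (assuming only that PyTorch is installed; on a machine without a GPU, the CUDA check simply reports False):

```python
import torch

# Report the PyTorch build and CUDA availability of the current environment
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    # Name of the visible GPU (e.g., an NVIDIA L40S on a g6e.2xlarge instance)
    print(f"Device:          {torch.cuda.get_device_name(0)}")
```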

    Disclaimers

    The code we share is intended for demonstrative purposes; please do not rely on its correctness or optimality. Please do not interpret our use of any library, tool, or platform as an endorsement of its use. The impact of the optimizations we cover can vary greatly based on the details of the model and the runtime environment. Please be sure to assess their effect on your own use case before adopting them.

    Many thanks to Yitzhak Levi and Gilad Wasserman for their contributions to this post.

    A Toy PyTorch Model

    We introduce a batched inference script that performs image segmentation on a synthetic dataset using a DeepLabV3 model with a ResNet-50 backbone. The model outputs are copied to the CPU for post-processing and storage. We wrap the different components of the inference step with color-coded NVTX annotations:

    import time, torch, nvtx
    from torch.utils.data import Dataset, DataLoader
    from torch.cuda import profiler
    from torchvision.models.segmentation import deeplabv3_resnet50
    
    DEVICE = "cuda"
    WARMUP_STEPS = 10
    PROFILE_STEPS = 3
    COOLDOWN_STEPS = 1
    TOTAL_STEPS = WARMUP_STEPS + PROFILE_STEPS + COOLDOWN_STEPS
    BATCH_SIZE = 64
    TOTAL_SAMPLES = TOTAL_STEPS * BATCH_SIZE
    IMG_SIZE = 512
    N_CLASSES = 21
    NUM_WORKERS = 8
    ASYNC_DATALOAD = True
    
    
    # A synthetic Dataset with random images
    class FakeDataset(Dataset):
    
        def __len__(self):
            return TOTAL_SAMPLES
    
        def __getitem__(self, index):
            img = torch.randn((3, IMG_SIZE, IMG_SIZE))
            return img
    
    # utility class for prefetching data to the GPU
    class DataPrefetcher:
        def __init__(self, loader):
            self.loader = iter(loader)
            self.stream = torch.cuda.Stream()
            self.next_batch = None
            self.preload()
    
        def preload(self):
            try:
                data = next(self.loader)
                with torch.cuda.stream(self.stream):
                    next_data = data.to(DEVICE, non_blocking=ASYNC_DATALOAD)
                self.next_batch = next_data
            except StopIteration:
                self.next_batch = None
    
        def __iter__(self):
            return self
    
        def __next__(self):
            torch.cuda.current_stream().wait_stream(self.stream)
            data = self.next_batch
            self.preload()
            return data
    
    model = deeplabv3_resnet50(weights_backbone=None).to(DEVICE).eval()
    
    data_loader = DataLoader(
        FakeDataset(),
        batch_size=BATCH_SIZE,
        num_workers=NUM_WORKERS,
        pin_memory=ASYNC_DATALOAD
    )
    
    data_iter = DataPrefetcher(data_loader)
    
    def synchronize_all():
        torch.cuda.synchronize() 
    
    def to_cpu(output):
        return output.cpu()
    
    def process_output(batch_id, logits):
        # do some post-processing on the output
        with open('/dev/null', 'wb') as f:
            f.write(logits.numpy().tobytes())
    
    with torch.inference_mode():
        for i in range(TOTAL_STEPS):
            if i == WARMUP_STEPS:
                synchronize_all()
                start_time = time.perf_counter()
                profiler.start()
            elif i == WARMUP_STEPS + PROFILE_STEPS:
                synchronize_all()
                profiler.stop()
                end_time = time.perf_counter()
    
            with nvtx.annotate(f"Batch {i}", color="blue"):
                with nvtx.annotate("get batch", color="red"):
                    batch = next(data_iter)
                with nvtx.annotate("compute", color="green"):
                    output = model(batch)
                with nvtx.annotate("copy to CPU", color="yellow"):
                    output_cpu = to_cpu(output['out'])
                with nvtx.annotate("process output", color="cyan"):
                    process_output(i, output_cpu)
    
    total_time = end_time - start_time
    throughput = PROFILE_STEPS / total_time
    print(f"Throughput: {throughput:.2f} steps/sec")

    Note the inclusion of all of the CPU-to-GPU data-loading optimizations discussed in our previous post.

    We run the following command to capture an nsys profile trace:

    nsys profile \
      --capture-range=cudaProfilerApi \
      --trace=cuda,nvtx,osrt \
      --output=baseline \
      python batch_infer.py

    This results in a baseline.nsys-rep trace file that we copy over to our development machine for analysis.

    To measure the inference throughput, we increase the number of steps to 100. The average throughput of our baseline experiment is 0.45 steps per second. In the following sections we will use the nsys profile traces to incrementally improve this result.

    Baseline Performance Analysis

    The image below shows the nsys profile trace of our baseline experiment:

    Baseline Nsight Systems Profiler Trace (by Author)

    In the GPU section we see the following recurring pattern:

    1. A block of kernel compute (in light blue) that runs for ~520 milliseconds.
    2. A small block of host-to-device memory copy (in green) that runs in parallel with the kernel compute. This concurrency was achieved using the optimizations discussed in our previous post.
    3. A block of device-to-host memory copy (in red) that runs for ~750 milliseconds.
    4. A long period (~940 milliseconds) of GPU idle time (white space) between every two steps.

    Looking at the NVTX bar of the CPU section, we can see that the whitespace aligns perfectly with the “process output” block (in cyan). In our initial implementation, both the model execution and the output-storage function run in the same single process in a sequential manner. This leads to significant idle time on the GPU as the CPU waits for the storage function to return before feeding the GPU the next batch.

    Optimization 1: Multi-Worker Output Processing

    The first step we take is to run the output-storage function in parallel worker processes. We took a similar step in our previous post when we moved the input batch preparation sequence to dedicated workers. However, while there we were able to automate multi-process data loading by simply setting the num_workers argument of the DataLoader class to a non-zero value, applying multi-worker output processing requires a manual implementation. Here we choose a simple solution for demonstrative purposes. This should be customized per your needs and design preferences.

    PyTorch Multiprocessing

    We implement a producer-consumer strategy using PyTorch’s built-in multiprocessing package, torch.multiprocessing. We define a queue for storing output batches and multiple consumer workers that process the batches on the queue. We modify our inference loop to place the output buffers in the output queue. We also update the synchronize_all() utility to drain the queue, and append a cleanup sequence at the end of the script.

    The following block of code contains our initial implementation. As we will see in the next sections, this will require some tuning in order to reach maximum performance.

    import torch.multiprocessing as mp
    
    POSTPROC_WORKERS = 8 # tune for optimal throughput
    
    output_queue = mp.JoinableQueue(maxsize=POSTPROC_WORKERS)
    
    def output_worker(in_q):
        while True:
            item = in_q.get()
            if item is None: break  # signal to shut down
            batch_id, batch_preds = item
            process_output(batch_id, batch_preds)
            in_q.task_done()
    
    processes = []
    for _ in range(POSTPROC_WORKERS):
        p = mp.Process(target=output_worker, args=(output_queue,))
        p.start()
        processes.append(p)
    
    def synchronize_all():
        torch.cuda.synchronize() 
        output_queue.join() # drain queue
    
    
    with torch.inference_mode():
        for i in range(TOTAL_STEPS):
            if i == WARMUP_STEPS:
                synchronize_all()
                start_time = time.perf_counter()
                profiler.start()
            elif i == WARMUP_STEPS + PROFILE_STEPS:
                synchronize_all()
                profiler.stop()
                end_time = time.perf_counter()
    
            with nvtx.annotate(f"Batch {i}", color="blue"):
                with nvtx.annotate("get batch", color="red"):
                    batch = next(data_iter)
                with nvtx.annotate("compute", color="green"):
                    output = model(batch)
                with nvtx.annotate("copy to CPU", color="yellow"):
                    output_cpu = to_cpu(output['out'])
                with nvtx.annotate("queue output", color="cyan"):
                    output_queue.put((i, output_cpu))
    
    
    total_time = end_time - start_time
    throughput = PROFILE_STEPS / total_time
    print(f"Throughput: {throughput:.2f} steps/sec")
    # cleanup
    for _ in range(POSTPROC_WORKERS):
        output_queue.put(None)

    The multi-worker output-processing optimization results in a throughput of 0.71 steps per second — a 58% increase over our baseline result.

    Rerunning the nsys command results in the following profile trace:

    Multi-Worker Nsight Systems Profiler Timeline (by Author)

    We can see that the size of the block of whitespace has dropped considerably (from ~940 milliseconds to ~50). Were we to zoom in on the remaining whitespace, we would find it aligned with an “munmap” operation. In our previous post, the same finding informed our asynchronous data-copy optimization. But this time we take an intermediate memory-optimization step in the form of a pre-allocated pool of buffers.

    Optimization 2: Buffer Pool Pre-allocation

    In order to reduce the overhead of allocating and managing a new CPU tensor on every iteration, we initialize a pool of tensors pre-allocated in shared memory and define a second queue to manage their use.

    Our updated code appears below:

    shape = (BATCH_SIZE, N_CLASSES, IMG_SIZE, IMG_SIZE)
    buffer_pool = [torch.empty(shape).share_memory_() 
                   for _ in range(POSTPROC_WORKERS)]
    
    buf_queue = mp.Queue()
    for i in range(POSTPROC_WORKERS):
        buf_queue.put(i)
    
    def output_worker(buffer_pool, in_q, buf_q):
        while True:
            item = in_q.get()
            if item is None: break  # signal to shut down
            batch_id, buf_id = item
            process_output(batch_id, buffer_pool[buf_id])
            buf_q.put(buf_id)
            in_q.task_done()
    
    processes = []
    for _ in range(POSTPROC_WORKERS):
        p = mp.Process(target=output_worker,
                       args=(buffer_pool, output_queue, buf_queue))
        p.start()
        processes.append(p)
    
    def to_cpu(output):
        buf_id = buf_queue.get()
        output_cpu = buffer_pool[buf_id]
        output_cpu.copy_(output)
        return output_cpu, buf_id
    
    with torch.inference_mode():
        for i in range(TOTAL_STEPS):
            if i == WARMUP_STEPS:
                synchronize_all()
                start_time = time.perf_counter()
                profiler.start()
            elif i == WARMUP_STEPS + PROFILE_STEPS:
                synchronize_all()
                profiler.stop()
                end_time = time.perf_counter()
    
            with nvtx.annotate(f"Batch {i}", color="blue"):
                with nvtx.annotate("get batch", color="red"):
                    batch = next(data_iter)
                with nvtx.annotate("compute", color="green"):
                    output = model(batch)
                with nvtx.annotate("copy to CPU", color="yellow"):
                    output_cpu, buf_id = to_cpu(output['out'])
                with nvtx.annotate("queue output", color="cyan"):
                    output_queue.put((i, buf_id))

    Following these changes, the inference throughput jumps to 1.51 — a more than 2X speed-up over our previous result.

    The new profile trace appears below:

    Buffer Pool Nsight Systems Profiler Timeline (by Author)

    Not only has the whitespace all but disappeared, but the CUDA DtoH memory operation (in red) has dropped from ~750 milliseconds to ~110. Presumably, the large GPU-to-CPU data copy involved quite a bit of memory-management overhead that we have now eliminated by introducing a dedicated buffer pool.
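    A quick back-of-the-envelope calculation, using the output shape and the copy times reported above, shows the effective device-to-host bandwidth implied by each measurement:

```python
# Size of one output batch: (BATCH_SIZE, N_CLASSES, IMG_SIZE, IMG_SIZE) float32
batch_bytes = 64 * 21 * 512 * 512 * 4
batch_gb = batch_bytes / 1e9

# Effective device-to-host bandwidth before and after the buffer-pool change,
# using the ~750 ms and ~110 ms copy times measured from the nsys traces
bw_before = batch_gb / 0.750  # GB/s
bw_after = batch_gb / 0.110   # GB/s

print(f"batch size: {batch_gb:.2f} GB")
print(f"bandwidth before: {bw_before:.1f} GB/s, after: {bw_after:.1f} GB/s")
```

    Roughly 12.8 GB/s after the change versus under 2 GB/s before — consistent with the claim that most of the original copy time was overhead rather than raw transfer.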

    Despite the considerable improvement, if we zoom in we will find that there remains around ~0.5 milliseconds of whitespace that is attributable to the synchronicity of the GPU-to-CPU copy command — as long as the copy has not completed, the CPU does not trigger the kernel computation of the next batch.

    Optimization 3: Asynchronous Data Copy

    Our third optimization is to change the device-to-host copy to be asynchronous. As before, we will find that implementing this change is harder than in the CPU-to-GPU direction.

    The first step is to pass non_blocking=True to the GPU-to-CPU copy command.

    def to_cpu(output):
        buf_id = buf_queue.get()
        output_cpu = buffer_pool[buf_id]
        output_cpu.copy_(output, non_blocking=True)
        return output_cpu, buf_id

    However, as we saw in our previous post, this change will not have a meaningful impact unless we modify our tensors to use pinned memory:

    shape = (BATCH_SIZE, N_CLASSES, IMG_SIZE, IMG_SIZE)
    buffer_pool = [torch.empty(shape, pin_memory=True).share_memory_() 
                   for _ in range(POSTPROC_WORKERS)]

    Crucially, if we apply only these two changes to our script, the throughput will increase but the output may be corrupted (e.g., see here). We need an event-based mechanism for determining when each GPU-to-CPU copy has completed so that we can proceed with the output data processing. (Note that this was not required when making the CPU-to-GPU copy asynchronous. Because a single GPU stream processes commands sequentially, the kernel computation only begins once the copy has completed. Synchronization was only required when introducing a second stream.)

    To implement the notification mechanism, we define a pool of CUDA events and an additional queue for managing their use. We further define a listener thread that monitors the state of the events on the queue and populates the output queue once the copies are complete.

    import threading, queue
    
    event_pool = [torch.cuda.Event() for _ in range(POSTPROC_WORKERS)]
    event_queue = queue.Queue()
    
    def event_monitor(event_pool, event_queue, output_queue):
        while True:
            item = event_queue.get()
            if item is None: break
            batch_id, buf_idx = item
            event_pool[buf_idx].synchronize()
            output_queue.put((batch_id, buf_idx))
            event_queue.task_done()
    
    monitor = threading.Thread(target=event_monitor,
                               args=(event_pool, event_queue, output_queue))
    monitor.start()

    The updated inference sequence consists of the following steps:

    1. Get an input batch that was prefetched to the GPU.
    2. Execute the model on the input batch to get an output tensor on the GPU.
    3. Request a vacant CPU buffer from the buffer queue and use it to trigger an asynchronous data copy. Configure an event to fire when the copy is complete and push the event to the event queue.
    4. The monitor thread waits for the event to fire and then pushes the output tensor to the output queue for processing.
    5. A worker process pulls the output tensor from the queue and saves it to disk. It then releases the buffer back to the buffer queue.

    The updated code appears below.

    def synchronize_all():
        torch.cuda.synchronize()
        event_queue.join()
        output_queue.join()
    
    
    with torch.inference_mode():
        for i in range(TOTAL_STEPS):
            if i == WARMUP_STEPS:
                synchronize_all()
                start_time = time.perf_counter()
                profiler.start()
            elif i == WARMUP_STEPS + PROFILE_STEPS:
                synchronize_all()
                profiler.stop()
                end_time = time.perf_counter()
    
            with nvtx.annotate(f"Batch {i}", color="blue"):
                with nvtx.annotate("get batch", color="red"):
                    batch = next(data_iter)
                with nvtx.annotate("compute", color="green"):
                    output = model(batch)
                with nvtx.annotate("copy to CPU", color="yellow"):
                    output_cpu, buf_id = to_cpu(output['out'])
                with nvtx.annotate("queue CUDA event", color="cyan"):
                    event_pool[buf_id].record()
                    event_queue.put((i, buf_id))
    
    total_time = end_time - start_time
    throughput = PROFILE_STEPS / total_time
    print(f"Throughput: {throughput:.2f} steps/sec")
    # cleanup
    event_queue.put(None)
    for _ in range(POSTPROC_WORKERS):
        output_queue.put(None)

    The resultant throughput is 1.55 steps per second.

    The new profile trace appears below:

    Async Data Transfer Nsight Systems Profiler Timeline (by Author)

    In the NVTX row of the CPU section we can see all of the operations in the inference loop bunched together on the left side — implying that they all ran immediately and asynchronously. We also see the event synchronization calls (in light green) running on the dedicated monitor thread. In the GPU section we see that the kernel computation begins immediately after the device-to-host copy has completed.

    Our final optimization will focus on improving the parallelization of the kernel and memory operations on the GPU.

    Optimization 4: Pipelining Using CUDA Streams

    As in our previous post, we would like to take advantage of the independent engines for memory copying (the DMA) and kernel compute (the SMs). We do this by assigning the memory copy to a dedicated CUDA stream:

    egress_stream = torch.cuda.Stream()
    
    with torch.inference_mode():
        for i in range(TOTAL_STEPS):
            if i == WARMUP_STEPS:
                synchronize_all()
                start_time = time.perf_counter()
                profiler.start()
            elif i == WARMUP_STEPS + PROFILE_STEPS:
                synchronize_all()
                profiler.stop()
                end_time = time.perf_counter()
    
            with nvtx.annotate(f"Batch {i}", color="blue"):
                with nvtx.annotate("get batch", color="red"):
                    batch = next(data_iter)
                with nvtx.annotate("compute", color="green"):
                    output = model(batch)
                
                # on a separate stream
                with torch.cuda.stream(egress_stream):
                    # wait for the default stream to finish the compute
                    egress_stream.wait_stream(torch.cuda.default_stream())
                    with nvtx.annotate("copy to CPU", color="yellow"):
                        output_cpu, buf_id = to_cpu(output['out'])
                    with nvtx.annotate("queue CUDA event", color="cyan"):
                        event_pool[buf_id].record(egress_stream)
                        event_queue.put((i, buf_id))

    This results in a throughput of 1.85 steps per second — an additional 19.3% improvement over our previous experiment.

    The final profile trace appears below:

    Pipelined Nsight Systems Profiler Timeline (by Author)

    In the GPU section we see a continuous block of kernel compute (in light blue) with both the host-to-device (in light green) and device-to-host (in red) copies running in parallel. Our inference loop is now compute-bound, implying that we have exhausted all practical opportunities for data-transfer optimization.

    Results

    We summarize our results in the following table:

    Experiment                        Throughput (steps/sec)  Speed-up vs. baseline
    --------------------------------  ----------------------  ---------------------
    Baseline                          0.45                    1.0X
    Multi-worker output processing    0.71                    1.6X
    Buffer pool pre-allocation        1.51                    3.4X
    Asynchronous data copy            1.55                    3.4X
    CUDA stream pipelining            1.85                    4.1X

    Experiment Results (by Author)

    Through the use of the nsys profiler we were able to increase efficiency by over 4X. Naturally, the impact of the optimizations we discussed will vary based on the details of the model and the runtime environment.
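    For reference, the speed-up factors quoted throughout this post follow directly from the measured throughputs:

```python
# Throughputs (steps/sec) reported for each experiment in this post
throughputs = {
    "baseline": 0.45,
    "multi-worker output processing": 0.71,
    "buffer pool pre-allocation": 1.51,
    "asynchronous data copy": 1.55,
    "CUDA stream pipelining": 1.85,
}

# Speed-up of each experiment relative to the baseline
baseline = throughputs["baseline"]
speedups = {name: tp / baseline for name, tp in throughputs.items()}

for name, s in speedups.items():
    print(f"{name}: {s:.2f}X")
```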

    Summary

    This concludes the second part of our series of posts on the topic of optimizing data transfer in AI/ML workloads. Part one focused on host-to-device copies and part two on device-to-host copies. When implemented naively, data transfer in either direction can lead to significant performance bottlenecks, resulting in GPU starvation and increased runtime costs. Using the Nsight Systems profiler, we demonstrated how to identify and resolve these bottlenecks and increase runtime efficiency.

    Although the optimization of the two directions involved similar steps, the implementation details were very different. While optimizing CPU-to-GPU data transfer is well supported by PyTorch’s data-loading APIs and required relatively small changes to the execution loop, optimizing the GPU-to-CPU direction required a bit more software engineering. Importantly, the solutions we put forth in this post were chosen for demonstrative purposes. Your own solution may differ considerably based on your project needs and design preferences.

    Having covered both CPU-to-GPU and GPU-to-CPU data copies, we turn our attention to GPU-to-GPU transactions: stay tuned for a future post on the topic of optimizing data transfer between GPUs in distributed training workloads.


