A challenge in calculating an application's performance is that real-world performance and theoretical performance can differ. With a growing ecosystem of products with high performance needs, such as High Performance Computing (HPC), gaming, or, in the current landscape, Large Language Models (LLMs), it is essential to measure an application's performance accurately.
Simply quoting theoretical peak GFLOP/s (billions of floating-point operations per second) is not enough, as applications rarely reach these maximums in the real world. This is where the Roofline Model comes in, offering a clear visual way to estimate an application's performance and highlighting the crucial role of hardware-specific optimizations.
Why simple metrics aren't enough
When we think about measuring performance, a few metrics come to mind:
- Execution time: tells you how long a task took, but provides no insight into why.
- Cycles per Instruction (CPI): only measures the processor's compute efficiency.
- Serial vs. parallel execution: measures compute performance while overlooking any hardware optimizations.
- Floating-point operations per second (FLOP/s): represents a theoretical maximum that is rarely achievable in a real-world scenario.
While these are useful metrics, they often don't provide enough information on their own. Peak FLOP/s, for instance, is a theoretical limit that is seldom reached, so relying on it as the only metric is not enough: it ignores a common performance limiter – data movement.
Roofline Modeling
The Roofline Model is a powerful tool that visually maps an application's performance against the capabilities of a specific hardware architecture, such as a CPU or GPU. The model gets its name from the shape of the graph it produces, which features a “roof” composed of a slanted line and a flat, horizontal line. This shape represents the ultimate performance limits imposed by the hardware.
This modeling technique uses two quantities that define the limits achievable on the hardware:
- Data movement: the time it takes to move data, calculated as the total data size divided by the system's peak memory bandwidth.
- Computation: the time required for the calculations, determined by dividing the total number of floating-point operations by the system's peak compute performance (commonly measured in GFLOP/s).
The total execution time of an application is determined by the larger of these two values:
[T_total = max(T_data_movement, T_computation)]
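As a quick illustration, here is a minimal Python sketch of this bound. The peak bandwidth and peak compute numbers are made-up placeholders, not the specifications of any particular device:

# Assumed hardware ceilings (placeholder values, not real device specs).
PEAK_BANDWIDTH = 900e9  # peak memory bandwidth in bytes/s
PEAK_COMPUTE = 15e12    # peak compute throughput in FLOP/s

def roofline_time(total_bytes: float, total_flops: float) -> float:
    t_data = total_bytes / PEAK_BANDWIDTH   # time to move the data
    t_compute = total_flops / PEAK_COMPUTE  # time to do the arithmetic
    return max(t_data, t_compute)           # the slower of the two dominates

print(roofline_time(total_bytes=1e9, total_flops=1e12))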
Even when the hardware has high compute performance, data movement can often become the bottleneck. To reason about this, Roofline Modeling introduces the concept of Arithmetic Intensity (AI): the ratio of floating-point operations performed to bytes of data moved from memory.
- An algorithm with high Arithmetic Intensity is compute-bound: its performance is limited by how quickly calculations can be performed.
- An algorithm with low Arithmetic Intensity is memory-bound: its performance is limited by how quickly data can be moved.
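For example, adding two large vectors of 32-bit floats performs one floating-point operation per element while moving 12 bytes (two reads and one write), an Arithmetic Intensity of roughly 0.08 FLOP/byte – firmly memory-bound. A large dense matrix multiplication, by contrast, performs on the order of N³ operations on N² data, so its intensity grows with problem size and it can become compute-bound.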
Understanding the graph
Figure: a typical Roofline graph (image licensed under Creative Commons Attribution-Share Alike 4.0 International).
A Roofline graph plots Attainable FLOP/s (y-axis) against Arithmetic Intensity (x-axis). The “roof” itself shows the hardware's limits: the slanted part represents peak memory bandwidth (in GB/s), while the flat part represents peak compute performance (in GFLOP/s). Note that both axes are on a logarithmic scale.
- Points below the roof: indicate suboptimal performance and room for improvement.
- Points on the slanted line: a memory-bound application; its performance is limited by memory bandwidth.
- Points on the flat line: a compute-bound application; it is using the full computational power of the processor.
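To make the picture concrete, the short matplotlib sketch below draws a naive roofline from two assumed ceilings (the same placeholder peak numbers as before, not real device specifications) and places one hypothetical measured kernel on it:

import numpy as np
import matplotlib.pyplot as plt

PEAK_BANDWIDTH = 900e9  # bytes/s, assumed
PEAK_COMPUTE = 15e12    # FLOP/s, assumed

ai = np.logspace(-2, 3, 200)                          # Arithmetic Intensity (FLOP/byte)
roof = np.minimum(ai * PEAK_BANDWIDTH, PEAK_COMPUTE)  # attainable FLOP/s at each AI

plt.loglog(ai, roof, label="roofline")
plt.loglog([2.0], [1.2e12], "o", label="measured kernel (hypothetical)")
plt.xlabel("Arithmetic Intensity (FLOP/byte)")
plt.ylabel("Attainable FLOP/s")
plt.legend()
plt.show()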
Why is Roofline Modeling important?
Roofline Modeling provides a visual, intuitive way to understand application performance, showing key characteristics such as operational intensity, hardware capability, and attainable FLOP/s. This kind of modeling helps the programmer make targeted, hardware-aware optimizations to their application.
- Bottleneck analysis: a visual aid makes it easy for the developer to identify whether the bottleneck is memory or compute. If the application is memory-bound, the developer can focus on improving data locality with techniques like caching or loop tiling (see the sketch after this list). If it is compute-bound, the focus can shift to exposing more parallel computation or leveraging compiler optimizations.
- Hardware and software design: software engineers shouldn't fear the underlying hardware. Instead, they can use the insights from Roofline Modeling to embrace the hardware design and optimize for the specific architecture they are using.
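As a minimal illustration of loop tiling (a sketch in plain Python/NumPy, not tuned code): multiplying matrices block by block keeps each sub-block cache-resident while it is reused, improving data locality without changing the arithmetic.

import numpy as np

def tiled_matmul(A: np.ndarray, B: np.ndarray, tile: int = 64) -> np.ndarray:
    # Blocked (tiled) matrix multiply: operate on tile x tile sub-blocks so
    # the operands stay in cache while they are reused.
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C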
Roofline Modeling in Action
To perform Roofline Modeling, we need to profile the application to understand its performance. Profiling yields metrics such as floating-point operation counts (FLOPs) and memory bandwidth utilization, both of which are required for Roofline Modeling. This article explores two such tools – NVIDIA's ncu, the Nsight Compute CLI for GPU analysis, and PyTorch's profiler, specifically for applications using PyTorch.
For detailed CUDA kernel optimization and precise FLOP/byte calculations, ncu provides direct GPU hardware counter information. In contrast, torch.profiler.profile offers a higher-level perspective within PyTorch, helping with understanding operator-level performance, tensor memory usage, and overall application behavior across both CPU and GPU activity.
Profiling with ncu
ncu is the command-line interface used for profiling CUDA kernels [2]. It can display results directly in the terminal or save them to a log file for later analysis. To build a Roofline model, we need to capture the specific metrics that allow us to calculate Arithmetic Intensity.
We'll use the PyTorch ImageNet repository [3] as our example. It's a good choice because it's easy to understand, well documented by PyTorch, and works with their profiler, so we can really dig into the performance.
Step 1: Run the ncu command to gather metrics
The first step is to run the application through ncu to collect the necessary hardware-level data. The command looks like this:
ncu --log-file <log_file_name> \
    --metrics <comma_separated_list_of_metrics> \
    --target-processes all \
    python3 <your_application.py application_arguments>
- --log-file: the log file in which we want to store the results.
- --metrics: the most important parameter; it lists the metrics we want to capture. For calculating Arithmetic Intensity we collect:
  - dram__sectors_write.sum: number of DRAM sectors written
  - dram__sectors_read.sum: number of DRAM sectors read
  - smsp__sass_thread_inst_executed_op_fadd_pred_on.sum: number of floating-point additions
  - smsp__sass_thread_inst_executed_op_fmul_pred_on.sum: number of floating-point multiplications
  - smsp__sass_thread_inst_executed_op_ffma_pred_on.sum: number of fused multiply-add operations
- --target-processes all: ensures that we profile the entire application, including any child processes.
With these metrics, our ncu command becomes:
ncu --log-file logs_example \
    --metrics dram__sectors_write.sum,\
dram__sectors_read.sum,\
smsp__sass_thread_inst_executed_op_fadd_pred_on.sum,\
smsp__sass_thread_inst_executed_op_fmul_pred_on.sum,\
smsp__sass_thread_inst_executed_op_ffma_pred_on.sum \
    --target-processes all \
    python3 main.py /imagenet --arch resnet50 --epochs 1 --batch-size 10 \
        --print-freq 10 --seed 42
Step 2: Calculate FLOPs from the metrics
Once the profiler has run, we can aggregate the collected metrics to calculate the total number of floating-point operations. The formula is:
[FLOPs = 2 * FMA_count + FADD_count + FMUL_count]
- FLOPs: the count of floating-point operations.
- FMA_count: fused multiply-add (FMA) operations typically count as 2 FLOPs (one multiplication and one addition). Represented by the smsp__sass_thread_inst_executed_op_ffma_pred_on.sum metric.
- FADD_count: represented by the smsp__sass_thread_inst_executed_op_fadd_pred_on.sum metric.
- FMUL_count: represented by the smsp__sass_thread_inst_executed_op_fmul_pred_on.sum metric.
Step 3: Calculate the bytes transferred
Next, we calculate the total data transferred to and from DRAM. The ncu metrics report the number of DRAM sectors read and written. Assuming a common sector size of 32 bytes for modern GPUs:
[Total_DRAM_bytes = (dram__sectors_read.sum + dram__sectors_write.sum) * 32]
Step 4: Calculate the Arithmetic Intensity
With the FLOPs and the total bytes, we can now calculate the Arithmetic Intensity:
[AI = FLOPs / Total_DRAM_bytes]
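Pulling Steps 2–4 together, the arithmetic looks like the short Python sketch below. The metric values are hypothetical numbers read out of the ncu log, not real measurements:

# Hypothetical totals taken from the ncu log file.
fma = 5.2e9            # smsp__sass_thread_inst_executed_op_ffma_pred_on.sum
fadd = 1.1e9           # smsp__sass_thread_inst_executed_op_fadd_pred_on.sum
fmul = 0.9e9           # smsp__sass_thread_inst_executed_op_fmul_pred_on.sum
sectors_read = 4.0e8   # dram__sectors_read.sum
sectors_write = 1.5e8  # dram__sectors_write.sum

flops = 2 * fma + fadd + fmul                           # an FMA counts as 2 FLOPs
total_dram_bytes = (sectors_read + sectors_write) * 32  # 32 bytes per sector
ai = flops / total_dram_bytes                           # Arithmetic Intensity (FLOP/byte)
print(f"FLOPs={flops:.3e}, bytes={total_dram_bytes:.3e}, AI={ai:.2f}")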
Step 5: Calculate execution time
To find the application's performance in FLOP/s, we also need the execution time. For this, we can use NVIDIA Nsight Systems (nsys), a system-wide profiler that can accurately measure the runtime of application segments. We run our application again, this time under nsys, to generate a time-based report. From this report, we can extract the total GPU running time.
nsys profile -f true -o <your_nsys_output_file.qdrep> \
    python3 <your_application.py application_arguments>
Our nsys command becomes:
nsys profile -f true -o time.qdrep \
    python3 main.py /imagenet --arch resnet50 --epochs 1 --batch-size 10 \
        --print-freq 10 --seed 42
After running this command, we can extract the GPU_RUNNING_TIME from the report.
Step 6: Calculate the application performance
Finally, we calculate the achieved performance in FLOP/s by dividing the total FLOPs by the execution time:
[FLOP/s = FLOPs / GPU_RUNNING_TIME]
This value gives us the “attainable FLOP/s” that we can plot on our Roofline graph against the Arithmetic Intensity from Step 4.
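As a final sketch (hypothetical numbers again, with the same assumed peak values as earlier), comparing the achieved figure against the roofline bound shows how much headroom is left:

GPU_RUNNING_TIME = 12.5  # seconds, hypothetical value extracted from nsys
flops = 1.24e10          # total FLOPs from Step 2 (hypothetical)
ai = 0.70                # Arithmetic Intensity from Step 4 (hypothetical)

achieved = flops / GPU_RUNNING_TIME  # achieved FLOP/s
bound = min(15e12, ai * 900e9)       # roofline bound at this AI (assumed peaks)
print(f"achieved {achieved:.3e} FLOP/s of {bound:.3e} attainable")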
Profiling with torch
For applications written in PyTorch, the built-in torch.profiler.profile provides a user-friendly way to gather performance data. Developers have two options:
- Use the profiler context manager
- Targeted profiling of specific neural network layers
Profiler Context Manager
The part of the code that we want to profile can be wrapped within the with torch.profiler.profile() context manager. In the with statement, you can define the activities to trace (CPU, CUDA, or both), set a schedule to profile specific training steps, and choose whether to record tensor shapes, memory usage, or FLOPs. Once inside the context, you should call prof.step() at the end of each iteration to signal the profiler to advance, especially when a schedule is used.
with profile(
    activities=<arguments>,
    schedule=torch.profiler.schedule(<arguments>),
    record_shapes=<True|False>,
    profile_memory=<True|False>,
    with_flops=<True|False>
) as prof:
    ...
    prof.step()
- activities: specify whether to profile the CPU, CUDA, or both.
- schedule: useful for profiling multiple steps in the training loop. If the schedule parameter is used, the code must call prof.step() to move to the next step.
- record_shapes: whether to record the shapes of the tensors.
- profile_memory: whether to capture memory usage.
- with_flops: experimental; used to estimate FLOPs for supported operators.
Our profiler setup then becomes:
with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=torch.profiler.schedule(wait=1, warmup=1, active=3, repeat=2),
    record_shapes=True,
    profile_memory=True,
    with_flops=True
) as prof:
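Once the profiled steps have run, the aggregated statistics can be printed; for instance (a short sketch using the standard torch.profiler reporting API):

# Print the per-operator summary; with with_flops=True the table also
# includes estimated FLOPs for supported operators.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))

These operator-level FLOPs and memory figures can then feed the same Arithmetic Intensity calculation we performed with ncu.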
Targeted Profiling of Specific Neural Network Layers
The profiler can also be used in a more targeted manner to analyze specific layers of a neural network. This is useful for checking whether a specific layer contributes more to the runtime than the other layers, giving the developer the option of modifying specific layers. While this approach is easy to use, the first option usually works better. The PyTorch profiler results can also be exported and visualized in TensorBoard.
profiler.start()
self.conv2(x)
profiler.stop()
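Another way to single out a layer (a self-contained sketch assuming the standard torch.profiler API; the layer and input shapes are made up) is to wrap it in record_function so it appears as a named region in the trace:

import torch
from torch.profiler import profile, record_function, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
layer = torch.nn.Conv2d(3, 16, kernel_size=3).to(device)  # hypothetical layer
x = torch.randn(8, 3, 224, 224, device=device)

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, with_flops=True) as prof:
    with record_function("conv2_region"):  # named region for this layer
        layer(x)

print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=5))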
LLMs and Roofline Modeling
Coming to the topic everyone has been waiting for – does Roofline Modeling help with LLM performance analysis? The short answer is yes.
LLMs are complex neural network architectures with billions of parameters, processing massive datasets. While training is a very resource-intensive task, inference and fine-tuning also need to be efficient.
- Bottlenecks: during inference, LLMs can suffer from bottlenecks due to the sheer number of parameters they work with. These parameters are the model's weights, and moving them through memory creates bandwidth pressure. Using Roofline Modeling, the individual layers can be profiled to locate the bottlenecks (a back-of-the-envelope estimate follows this list).
- Hardware selection: as most organizations fine-tune existing models rather than training them from scratch, choosing the right infrastructure is crucial for managing costs. For example, choosing hardware that matches your LLM architecture, or optimizing your model to run on a specific architecture, can reduce training and inference costs.
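As a rough, illustrative estimate (round numbers, not measurements): generating one token with a 7-billion-parameter model in 16-bit precision at batch size 1 streams roughly 14 GB of weights from memory while performing on the order of 2 × 7 × 10⁹ ≈ 1.4 × 10¹⁰ FLOPs, an Arithmetic Intensity of about 1 FLOP/byte. That is far below the balance point of modern accelerators (tens to hundreds of FLOP/byte), which is why single-stream LLM inference typically lands on the slanted, memory-bound part of the roof.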
Conclusion
The Roofline Model provides a powerful visual approach to application performance optimization. By visualizing application performance against memory and compute limits, it offers clear guidance on the best way to approach optimizations. While this article only considered naive Roofline Models, there are more advanced techniques, such as hierarchical Roofline Models or adding ceilings for specific compute optimizations.
References
[1] https://docs.nersc.gov/tools/performance/roofline/
[2] https://docs.nvidia.com/nsight-compute/NsightComputeCli/index.html
[3] https://github.com/pytorch/examples/tree/main/imagenet