The release of the DeepSeek-R1 model sent ripples across the global AI community. It delivered breakthroughs on par with the reasoning models from Meta and OpenAI, achieving this in a fraction of the time and at a significantly lower cost.
Beyond the headlines and online buzz, how can we assess the model's reasoning abilities using recognized benchmarks?
DeepSeek's user interface makes it easy to explore its capabilities, but using it programmatically offers deeper insights and more seamless integration into real-world applications. Understanding how to run such models locally also provides greater control and offline access.
In this article, we explore how to use Ollama and OpenAI's simple-evals to evaluate the reasoning capabilities of DeepSeek-R1's distilled models, based on the well-known GPQA-Diamond benchmark.
Contents
(1) What are Reasoning Models?
(2) What is DeepSeek-R1?
(3) Understanding Distillation and DeepSeek-R1 Distilled Models
(4) Selection of Distilled Model
(5) Benchmarks for Evaluating Reasoning
(6) Tools Used
(7) Results of Evaluation
(8) Step-by-Step Walkthrough
Here is the link to the accompanying GitHub repo for this article.
(1) What are Reasoning Models?
Reasoning models, such as DeepSeek-R1 and OpenAI's o-series models (e.g., o1, o3), are large language models (LLMs) trained using reinforcement learning to perform reasoning.
Reasoning models think before they answer, producing a long internal chain of thought before responding. They excel at complex problem-solving, coding, scientific reasoning, and multi-step planning for agentic workflows.
(2) What is DeepSeek-R1?
DeepSeek-R1 is a state-of-the-art open-source LLM designed for advanced reasoning, released in January 2025 in the paper “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning”.
The model is a 671-billion-parameter LLM trained with extensive use of reinforcement learning (RL), based on this pipeline:
- Two reinforcement learning stages aimed at discovering improved reasoning patterns and aligning with human preferences
- Two supervised fine-tuning stages serving as the seed for the model's reasoning and non-reasoning capabilities.
To be precise, DeepSeek trained two models:
- The first model, DeepSeek-R1-Zero, a reasoning model trained with reinforcement learning, generates data for training the second model, DeepSeek-R1.
- It achieves this by producing reasoning traces, from which only high-quality outputs are retained based on their final results.
- This means that, unlike most models, the RL examples in this training pipeline are not curated by humans but generated by the model.
The result is that the model achieved performance comparable to leading models like OpenAI's o1 across tasks such as mathematics, coding, and complex reasoning.
(3) Understanding Distillation and DeepSeek-R1's Distilled Models
Alongside the full model, the DeepSeek team also open-sourced six smaller dense models (also named DeepSeek-R1) of varying sizes (1.5B, 7B, 8B, 14B, 32B, 70B), distilled from DeepSeek-R1 using Qwen or Llama as the base model.
Distillation is a technique where a smaller model (the “student”) is trained to replicate the performance of a larger, more powerful pre-trained model (the “teacher”).
In this case, the teacher is the 671B DeepSeek-R1 model, and the students are the six models distilled from open-source base models.
DeepSeek-R1 was used as the teacher model to generate 800,000 training samples, a mix of reasoning and non-reasoning samples, for distillation via supervised fine-tuning of the base models (1.5B, 7B, 8B, 14B, 32B, and 70B).
So why do we perform distillation in the first place?
The goal is to transfer the reasoning abilities of larger models, such as DeepSeek-R1 671B, into smaller, more efficient models. This empowers the smaller models to handle complex reasoning tasks while being faster and more resource-efficient.
Moreover, DeepSeek-R1 has an enormous number of parameters (671 billion), making it challenging to run on most consumer-grade machines.
Even the most powerful MacBook Pro, with a maximum of 128GB of unified memory, is insufficient to run a 671-billion-parameter model.
As such, distilled models open up the possibility of deployment on devices with limited computational resources.
Unsloth achieved an impressive feat by quantizing the original 671B-parameter DeepSeek-R1 model down to just 131GB, a remarkable 80% reduction in size. However, a 131GB VRAM requirement remains a significant hurdle.
(4) Selection of Distilled Model
With six distilled model sizes to choose from, selecting the right one depends largely on the capabilities of the local device hardware.
For those with high-performance GPUs or CPUs and a need for maximum performance, the larger DeepSeek-R1 models (32B and up) are ideal; even the quantized 671B version is viable.
However, if you have limited resources or prefer quicker generation times (as I do), the smaller distilled variants, such as 8B or 14B, are a better fit.
For this project, I will be using the DeepSeek-R1 distilled Qwen-14B model, which aligns with the hardware constraints I faced.
(5) Benchmarks for Evaluating Reasoning
LLMs are typically evaluated using standardized benchmarks that assess their performance across various tasks, including language understanding, code generation, instruction following, and question answering. Common examples include MMLU, HumanEval, and MGSM.
To measure an LLM's capacity for reasoning, we need harder, reasoning-heavy benchmarks that go beyond surface-level tasks. Here are some popular examples focused on evaluating advanced reasoning capabilities:
(i) AIME 2024 — Competition Math
- The American Invitational Mathematics Examination (AIME) 2024 serves as a strong benchmark for evaluating an LLM's mathematical reasoning capabilities.
- It is a challenging math competition with complex, multi-step problems that test an LLM's ability to interpret intricate questions, apply advanced reasoning, and perform precise symbolic manipulation.
(ii) Codeforces — Competition Code
- The Codeforces benchmark evaluates an LLM's reasoning ability using real competitive programming problems from Codeforces, a platform known for algorithmic challenges.
- These problems test an LLM's capacity to comprehend complex instructions, perform logical and mathematical reasoning, plan multi-step solutions, and generate correct, efficient code.
(iii) GPQA Diamond — PhD-Level Science Questions
- GPQA-Diamond is a curated subset of the most difficult questions from the broader GPQA (Graduate-Level Google-Proof Q&A) benchmark, specifically designed to push the boundaries of LLM reasoning on advanced PhD-level topics.
- While GPQA includes a range of conceptual and calculation-heavy graduate-level questions, GPQA-Diamond isolates only the most challenging and reasoning-intensive ones.
- It is considered Google-proof, meaning the questions are difficult to answer even with unrestricted web access.
In this project, we use GPQA-Diamond as the reasoning benchmark, as both OpenAI and DeepSeek used it to evaluate their reasoning models.
(6) Tools Used
For this project, we primarily use Ollama and OpenAI's simple-evals.
(i) Ollama
Ollama is an open-source tool that simplifies running LLMs on our own computer or a local server.
It acts as a manager and runtime, handling tasks such as downloads and environment setup. This allows users to interact with these models without requiring a constant internet connection or relying on cloud services.
It supports many open-source LLMs, including DeepSeek-R1, and is cross-platform compatible with macOS, Windows, and Linux. Additionally, it offers a straightforward setup with minimal fuss and efficient resource utilization.
Important: Ensure your local device has GPU access for Ollama, as this dramatically accelerates performance and makes the subsequent benchmarking exercises far more efficient compared to running on CPU. Run nvidia-smi in the terminal to check whether a GPU is detected.
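Once Ollama is running, it exposes a local REST API on port 11434 by default. As a quick sanity check that the server is reachable, you can send it a request from Python; the snippet below is a minimal sketch, assuming the deepseek-r1:14b model has already been pulled (covered later in the walkthrough).

```python
import requests

# Minimal sanity check against Ollama's local REST API (default port 11434).
# Assumes the Ollama server is running and deepseek-r1:14b has been pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:14b",
        "prompt": "What is 2 + 2?",
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(resp.json()["response"])
```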
(ii) OpenAI simple-evals
simple-evals is a lightweight library designed to evaluate language models using a zero-shot, chain-of-thought prompting approach. It includes well-known benchmarks like MMLU, MATH, GPQA, MGSM, and HumanEval, and aims to reflect realistic usage scenarios.
Some of you may know about OpenAI's more well-known and comprehensive evaluation library called Evals, which is distinct from simple-evals.
In fact, the README of simple-evals specifically states that it is not intended to replace the Evals library.
So why are we using simple-evals?
The simple answer is that simple-evals comes with built-in evaluation scripts for the reasoning benchmarks we are targeting (such as GPQA), which are missing from Evals.
Additionally, I did not find any other tools or platforms, apart from simple-evals, that provide a straightforward, Python-native way to run key benchmarks such as GPQA, particularly when working with Ollama.
(7) Results of Evaluation
As part of the evaluation, I selected 20 random questions from the 198-question GPQA-Diamond set for the 14B distilled model to work on. The total time taken was 216 minutes, roughly 11 minutes per question.
The outcome was admittedly disappointing, as the model scored only 10%, far below the reported 73.3% score of the 671B DeepSeek-R1 model.
The main issue I noticed is that during its extensive internal reasoning, the model often either failed to produce any answer (e.g., returning reasoning tokens as the final lines of output) or provided a response that did not match the expected multiple-choice format (e.g., Answer: A).

Many of the outputs ended up as None because the regex logic in simple-evals could not detect the expected answer pattern in the LLM response.
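For context, simple-evals scores multiple-choice benchmarks by regex-matching a final line like “Answer: A” in the model output. The snippet below is an illustrative sketch of that kind of extraction logic, not the library's exact pattern:

```python
import re

# Illustrative sketch of answer extraction (not simple-evals' exact regex):
# look for a line such as "Answer: C" and capture the choice letter.
ANSWER_PATTERN = re.compile(r"Answer\s*:\s*([ABCD])", re.IGNORECASE)

def extract_choice(response_text: str) -> str | None:
    match = ANSWER_PATTERN.search(response_text)
    return match.group(1).upper() if match else None  # None if no match is found

print(extract_choice("...long reasoning...\nAnswer: C"))  # C
print(extract_choice("...reasoning that never concludes..."))  # None
```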
While the human-like reasoning logic was interesting to observe, I had expected stronger performance in terms of question-answering accuracy.
I have also seen online users mention that even the larger 32B model does not perform as well as o1. This raises doubts about the utility of distilled reasoning models, especially when they struggle to give correct answers despite generating lengthy reasoning.
That said, GPQA-Diamond is a highly challenging benchmark, so these models may still be useful for simpler reasoning tasks. Their lower computational demands also make them more accessible.
Additionally, the DeepSeek team recommended conducting multiple tests and averaging the results as part of the benchmarking process, something I omitted due to time constraints.
(8) Step-by-Step Walkthrough
At this point, we have covered the core concepts and key takeaways.
If you are ready for a hands-on, technical walkthrough, this section provides a deep dive into the inner workings and step-by-step implementation.
Check out (or clone) the accompanying GitHub repo to follow along. The requirements for the virtual environment setup can be found here.
(i) Initial Setup — Ollama
We begin by downloading Ollama. Visit the Ollama download page, select your operating system, and follow the corresponding installation instructions.
Once installation is complete, launch Ollama by double-clicking the Ollama app (for Windows and macOS) or by running ollama serve in the terminal.
(ii) Initial Setup — OpenAI simple-evals
The setup of simple-evals is somewhat unusual.
While simple-evals presents itself as a library, the absence of __init__.py files in the repository means it is not structured as a proper Python package, leading to import errors after cloning the repo locally. Since it is also not published to PyPI and lacks standard packaging files like setup.py or pyproject.toml, it cannot be installed via pip.
Fortunately, we can use Git submodules as a straightforward workaround.
A Git submodule lets us include the contents of another Git repository within our own project. It pulls the files from an external repo (e.g., simple-evals) while keeping its history separate.
You can choose one of two methods (A or B) to pull the simple-evals contents:
(A) If You Cloned My Project Repo
My project repo already includes simple-evals as a submodule, so you can simply run:
git submodule update --init --recursive
(B) If You're Adding It to a Newly Created Project
To manually add simple-evals as a submodule, run this:
git submodule add https://github.com/openai/simple-evals.git simple_evals
Note: The simple_evals at the end of the command (with an underscore) is crucial. It sets the folder name, and using a hyphen instead (i.e., simple-evals) can lead to import issues later.
Final Step (For Both Methods)
After pulling the repo contents, you must create an empty __init__.py file in the newly created simple_evals folder so that it is importable as a module. You can create it manually, or use the following command:
touch simple_evals/__init__.py
(iii) Pull the DeepSeek-R1 model via Ollama
The next step is to locally download the distilled model of your choice (e.g., 14B) using this command:
ollama pull deepseek-r1:14b
The list of DeepSeek-R1 models available on Ollama can be found here.
(iv) Define configuration
We define the parameters in a configuration YAML file, as shown below:
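(The exact file lives in the accompanying repo; the following is a minimal sketch of its contents, where every key name other than EVAL_N_EXAMPLES is illustrative.)

```yaml
# config.yaml (illustrative sketch; see the repo for the actual file)
MODEL_NAME: "deepseek-r1:14b"  # Ollama tag of the distilled model to evaluate
TEMPERATURE: 0.6               # per DeepSeek's usage recommendations
EVAL_NAME: "gpqa"              # benchmark to run via simple-evals
EVAL_N_EXAMPLES: 20            # number of questions from the 198-question set
```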
The model temperature is set to 0.6 (as opposed to the typical default value of 0). This follows DeepSeek's usage recommendations, which suggest a temperature range of 0.5 to 0.7 (0.6 recommended) to prevent endless repetitions or incoherent outputs.
Do check out the curiously unique DeepSeek-R1 usage recommendations, especially for benchmarking, to ensure optimal performance when using DeepSeek-R1 models.
EVAL_N_EXAMPLES is the parameter for setting the number of questions from the full 198-question set to use for evaluation.
(v) Set up the Sampler code
To support Ollama-based language models within the simple-evals framework, we create a custom wrapper class named OllamaSampler, saved inside utils/samplers/ollama_sampler.py.
In this context, a sampler is a Python class that generates outputs from a language model based on a given prompt.
Since the existing samplers in simple-evals only cover providers like OpenAI and Claude, we need a sampler class that provides a compatible interface for Ollama.
The OllamaSampler extracts the GPQA question prompt, sends it to the model with a specified temperature, and returns the plain text response.
The _pack_message method is included to ensure the output format matches what the evaluation scripts in simple-evals expect.
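The full class lives in the repo; the sketch below captures its essential shape under stated assumptions (the constructor defaults and the use of Ollama's /api/chat endpoint are illustrative rather than the repo's exact code):

```python
import requests

class OllamaSampler:
    """Minimal sketch of a simple-evals-compatible sampler backed by Ollama."""

    def __init__(self, model: str = "deepseek-r1:14b", temperature: float = 0.6):
        self.model = model
        self.temperature = temperature

    def _pack_message(self, content: str, role: str = "assistant") -> dict:
        # simple-evals expects messages as {"content": ..., "role": ...} dicts
        return {"content": content, "role": role}

    def __call__(self, message_list: list[dict]) -> str:
        # Forward the prompt messages to Ollama's chat endpoint and
        # return the plain-text content of the model's reply
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": self.model,
                "messages": message_list,
                "stream": False,
                "options": {"temperature": self.temperature},
            },
        )
        return resp.json()["message"]["content"]
```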
(vi) Create the evaluation run script
The following code sets up the evaluation execution in main.py, including the use of the GPQAEval class from simple-evals to run the GPQA benchmarking.
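(What follows is a condensed sketch rather than the repo's exact script; the config keys match the illustrative YAML above, and the GPQAEval constructor arguments are an assumption based on the simple-evals source.)

```python
import yaml

from simple_evals.gpqa_eval import GPQAEval
from utils.samplers.ollama_sampler import OllamaSampler

def run_eval(config_path: str = "config.yaml") -> None:
    # Load the evaluation settings from the YAML config file
    with open(config_path) as f:
        config = yaml.safe_load(f)

    # Wrap the Ollama-served model in the simple-evals-compatible sampler
    sampler = OllamaSampler(
        model=config["MODEL_NAME"],
        temperature=config["TEMPERATURE"],
    )

    # GPQAEval downloads the GPQA-Diamond question set and scores the sampler;
    # n_repeats=1 runs a single pass over the sampled questions
    gpqa_eval = GPQAEval(n_repeats=1, num_examples=config["EVAL_N_EXAMPLES"])
    result = gpqa_eval(sampler)
    print(f"GPQA-Diamond score: {result.score}")

if __name__ == "__main__":
    run_eval()
```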
The run_eval() function is a configurable evaluation runner that tests LLMs served through Ollama on benchmarks like GPQA. It loads settings from the config file, sets up the appropriate evaluation class from simple-evals, and runs the model through a standardized evaluation process. It is saved in main.py, which can be executed with python main.py.
Following the steps above, we have successfully set up and executed GPQA-Diamond benchmarking on a DeepSeek-R1 distilled model.
Wrapping It Up
In this article, we showcased how we can combine tools like Ollama and OpenAI's simple-evals to explore and benchmark DeepSeek-R1's distilled models.
The distilled models may not yet rival the original 671B-parameter model on challenging reasoning benchmarks like GPQA-Diamond. Nonetheless, they demonstrate how distillation can broaden access to LLM reasoning capabilities.
Despite subpar scores on complex PhD-level tasks, these smaller variants may remain viable for less demanding scenarios, paving the way for efficient local deployment on a wider range of hardware.
Before you go
I welcome you to follow my GitHub and LinkedIn pages to stay updated with more engaging and practical content. Meanwhile, have fun benchmarking LLMs with Ollama and simple-evals!