For some time now, it has been impossible to deny that there has been a rise in the hype level around AI, especially with the rise of generative AI and agentic AI. As a data scientist working in a consulting firm, I've noted a considerable growth in the number of enquiries about how we can leverage these new technologies to make processes more efficient or automated. And while this interest might flatter us data scientists, it sometimes feels like people expect magic from AI models, as if they could solve every problem with nothing more than a prompt. On the other hand, while I personally believe generative and agentic AI has changed (and will continue to change) how we work and live, when we design business-process changes we must consider its limitations and challenges and see where it proves to be a good tool (just as we wouldn't use a fork, for example, to cut food).
Since I'm a nerd and understand how LLMs work, I wanted to test their performance in a logic game like the Spanish version of Wordle against a logic I had built in a couple of hours some years ago (more details on that can be found here; a rough sketch of the general idea follows the questions below). Specifically, I had the following questions:
- Will my algorithm be better than the LLM models?
- How will reasoning capabilities in LLM models affect their performance?
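To give a sense of how little code such a quick tailored baseline needs, here is a rough, hypothetical sketch of a constraint-filtering Wordle solver. It is not the actual algorithm from the linked post, just an illustration of the general idea (the gray rule below is a simplification that ignores the repeated-letter edge case):

```python
def filter_candidates(words: list[str], guess: str, colors: list[str]) -> list[str]:
    """Keep only the words still compatible with one guess's feedback."""
    def ok(word: str) -> bool:
        for i, (g, c) in enumerate(zip(guess, colors)):
            if c == "green" and word[i] != g:
                return False  # green letters must stay in place
            if c == "yellow" and (g not in word or word[i] == g):
                return False  # yellow letters must appear, but elsewhere
            if c == "gray" and g in word:
                return False  # simplification: gray letters are excluded outright
        return True
    return [w for w in words if ok(w)]

# e.g. after guessing "PANEL" against a hidden "PIZZA":
print(filter_candidates(["PIZZA", "PALCO", "PRISA"], "PANEL",
                        ["green", "yellow", "gray", "gray", "gray"]))
# ['PIZZA', 'PRISA']
```

Filtering a dictionary this way after each attempt, and picking the next guess from whatever remains, already gets you surprisingly far.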
Building an LLM-based solution
To get a solution from the LLM model, I built three main prompts. The first one was targeted at getting an initial guess:
Let's suppose I'm playing WORDLE, but in Spanish. It's a game where you have to guess a 5-letter word, and only 5 letters, in 6 attempts. Also, a letter can be repeated in the final word.
First, let's review the rules of the game: Each day the game chooses a five-letter word that players try to guess within six attempts. After the player enters the word they think it is, each letter is marked in green, yellow, or gray: green means the letter is correct and in the correct position; yellow means the letter is in the hidden word but not in the correct position; while gray means the letter is not in the hidden word.
But if you place a letter twice and one shows up green and the other yellow, it means the letter appears twice: once in the green position, and once in another position that isn't the yellow one.
Example: If the hidden word is "PIZZA", and your first attempt is "PANEL", the response would look like this: the "P" would be green, the "A" yellow, and the "N", "E", and "L" gray.
Since for now we don't know anything about the target word, give me a good starting word, one that you think will provide useful information to help us figure out the final word.
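As an aside, the feedback rules described in this first prompt, including the repeated-letter case, are easy to pin down in code. This is a minimal sketch of my own (not taken from the original solver) of how the coloring works:

```python
from collections import Counter

def wordle_feedback(hidden: str, guess: str) -> list[str]:
    """Return 'green' / 'yellow' / 'gray' per guessed letter,
    handling repeated letters the way the prompt describes."""
    feedback = ["gray"] * len(guess)
    # Letters of the hidden word that are not exact matches are the
    # only ones that can still turn a guessed letter yellow.
    remaining = Counter(h for h, g in zip(hidden, guess) if h != g)
    for i, (h, g) in enumerate(zip(hidden, guess)):
        if h == g:
            feedback[i] = "green"
    for i, g in enumerate(guess):
        if feedback[i] == "gray" and remaining[g] > 0:
            feedback[i] = "yellow"  # consume one unit of that letter's budget
            remaining[g] -= 1
    return feedback

# The example from the prompt: hidden "PIZZA", attempt "PANEL"
print(wordle_feedback("PIZZA", "PANEL"))
# ['green', 'yellow', 'gray', 'gray', 'gray']
```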
Then, a second prompt would be used to lay out all the game rules (the prompt is not shown in full here due to space, but the full version also had example games and example reasonings):
Now, the idea is that we review the game strategy. I'll be giving you the game results. The idea is that, given this result, you suggest a new 5-letter word. Remember also that there are only 6 total attempts. I'll give you the result in the following format:
LETTER -> COLOR
For example, if the hidden word is PIZZA, and the attempt is PANEL, I'll give the result in this format:
P -> GREEN (it's the first letter of the final word)
A -> YELLOW (it's in the word, but not in the second position; instead it's in the last one)
N -> GRAY (it's not in the word)
E -> GRAY (it's not in the word)
L -> GRAY (it's not in the word)
Let's remember the rules. If a letter is green, it means it's in the position where it was placed. If it's yellow, it means the letter is in the word, but not in that position. If it's gray, it means it's not in the word.
If you place a letter twice and one shows green and the other gray, it means the letter only appears once in the word. But if you place a letter twice and one shows green and the other yellow, it means the letter appears twice: once in the green position, and another time in a different position (not the yellow one).
All the information I give you must be used to build your suggestion. At the end of the day, we want to "turn" all the letters green, since that means we guessed the word.
Your final answer must only contain the word suggestion, not your reasoning.
The final prompt was used to get a new suggestion after receiving the result of our attempt:
Here's the result. Remember that the word must have 5 letters, that you must use the rules and all the information of the game, and that the goal is to "turn" all the letters green, with no more than 6 attempts to guess the word. Take your time to think through your answer; I don't need a fast response. Don't give me your reasoning, only your final result.
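Putting the three prompts together, one game looks roughly like the sketch below. The `openai` client, the model name, and the prompt placeholders are assumptions for illustration only; the actual games were played through each provider's chat interface, with me relaying the results by hand. It reuses the `wordle_feedback` sketch above to simulate the game locally:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FIRST_PROMPT = "..."   # the initial-guess prompt shown above
RULES_PROMPT = "..."   # the strategy/rules prompt shown above
RESULT_PROMPT = "..."  # the "here's the result" prompt shown above

def ask(messages: list[dict]) -> str:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content.strip().upper()

def format_feedback(guess: str, colors: list[str]) -> str:
    return "\n".join(f"{letter} -> {color.upper()}"
                     for letter, color in zip(guess, colors))

hidden = "PIZZA"  # unknown in the real game, fixed here for the simulation
messages = [{"role": "user", "content": FIRST_PROMPT}]
guess = ask(messages)

for attempt in range(1, 7):
    colors = wordle_feedback(hidden, guess)  # from the earlier sketch
    if colors == ["green"] * 5:
        print(f"Solved in {attempt} attempts: {guess}")
        break
    prompt = RULES_PROMPT if attempt == 1 else RESULT_PROMPT
    messages += [
        {"role": "assistant", "content": guess},
        {"role": "user", "content": f"{prompt}\n{format_feedback(guess, colors)}"},
    ]
    guess = ask(messages)
```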
Something important here is that I never tried to guide the LLMs or point out errors or mistakes in their logic. I wanted a pure LLM-based result and didn't want to bias the solution in any shape or form.
Initial experiments
My initial hypothesis was that, while I expected my algorithm to be better than the LLMs, the generative AI-based solution was going to do a pretty good job without much help. After some days, though, I noticed some "funny" behaviors, like the one below:
The answer was quite obvious: it only had to swap two letters. However, ChatGPT answered with the same guess as before.
After seeing these kinds of errors, I started asking about them at the end of games, and the LLMs mostly acknowledged their mistakes but didn't provide a clear explanation for their answers:

While these are just two examples, this kind of behavior was common when generating the pure LLM solution, showcasing some potential limitations in the reasoning of base models.
Results Analysis
With all this in mind, I ran an experiment for 30 days. For the first 15 days, I compared my algorithm against 3 base LLM models:
- ChatGPT's 4o/5 model (after OpenAI released GPT-5, I couldn't toggle between models on the free-tier version of ChatGPT)
- Gemini's 2.5 Flash model
- Meta's Llama 4 model
Here, I compared two main metrics: the percentage of wins and a points-system metric (any green letter in the final guess awarded 3 points, yellow letters awarded 1 point, and gray letters awarded 0 points):

As can be seen, my algorithm (while specific to this use case, it only took me a day or so to build) is the only approach that wins every day. Among the LLM models, Gemini provides the worst performance, while ChatGPT and Meta's Llama show similar numbers. However, as the figure on the right shows, there is great variability in each model's performance, and consistency is not something these alternatives display for this particular use case.
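For reference, the points metric described above is trivial to reproduce; a minimal sketch:

```python
# 3 points per green, 1 per yellow, 0 per gray in the final guess.
POINTS = {"green": 3, "yellow": 1, "gray": 0}

def score(colors: list[str]) -> int:
    return sum(POINTS[c] for c in colors)

print(score(["green", "yellow", "gray", "gray", "gray"]))  # 4
print(score(["green"] * 5))  # 15, i.e. a solved game
```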
However, these results wouldn't be complete if we didn't analyze a reasoning LLM model against my algorithm (and against a base LLM model). So, for the following 15 days, I also compared the following models:
- ChatGPT's 4o/5 model using the reasoning capability
- Gemini's 2.5 Flash model (same model as before)
- Meta's Llama 4 model (same model as before)
Some important comments here: initially, I planned to use Grok as well, but after Grok 4 was released, the reasoning toggle for Grok 3 disappeared, which made comparisons difficult. I also tried to use Gemini's 2.5 Pro, but in contrast with ChatGPT's reasoning option, it is not a toggle but a different model, which only allowed me to send 5 prompts per day, not enough to complete a full game. With this in mind, here are the results for the following 15 days:

The reasoning capability behind LLMs provides a huge boost to performance on this task, which requires understanding which letters can be used in each position, which ones have been evaluated, remembering all results, and understanding all combinations. Not only are the average results better, but performance is also more consistent: in the two games that weren't won, only one letter was missed. Despite this improvement, the specific algorithm I built is still slightly better in terms of performance, though, as I mentioned earlier, it was built for this specific task. Something interesting is that for these 15 games, the base LLM models (Gemini 2.5 Flash and Llama 4) didn't win once, and their performance was worse than in the other set, which makes me wonder whether the wins achieved before were just lucky.
Final Remarks
The intention of this exercise has been to test the performance of LLMs against a specifically built algorithm on a task that requires applying logic rules to generate a winning result. We've seen that base models don't perform well, but that the reasoning capabilities of LLM solutions provide an important boost, producing performance comparable to that of the tailored algorithm I had built. One important thing to keep in mind is that while this improvement is real, in real-world applications and production systems we also have to consider response time (reasoning LLM models take more time to generate an answer than base models or, in this case, the logic I built) and cost (according to the Azure OpenAI pricing page, as of August 30th, 2025, the price of 1M input tokens for the general-purpose GPT-4o-mini model is around $0.15, while for the o4-mini reasoning model, the cost of 1M input tokens is $1.10). While I firmly believe that LLMs and generative AI will continue to evolve the way we work, we can't treat them as a Swiss Army knife that solves everything, without considering their limitations and without evaluating easy-to-build tailored solutions.
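To make that cost gap concrete, here is a quick back-of-the-envelope comparison using the input-token prices quoted above; the monthly token volume is a made-up number purely for illustration:

```python
# USD per 1M input tokens, as quoted above (as of 2025-08-30)
PRICE_PER_M = {"gpt-4o-mini": 0.15, "o4-mini": 1.10}
monthly_input_tokens = 50_000_000  # hypothetical workload

for model, price in PRICE_PER_M.items():
    cost = monthly_input_tokens / 1_000_000 * price
    print(f"{model}: ${cost:.2f}/month in input tokens")
# gpt-4o-mini: $7.50/month in input tokens
# o4-mini: $55.00/month in input tokens
```

At these rates, the reasoning model is roughly seven times more expensive per input token, before even accounting for the typically much larger output of reasoning models.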