A Hands-On Guide to Anthropic’s New Structured Output Capabilities

Anthropic recently introduced Structured Outputs for its top models in its API, a new feature designed to ensure that model-generated outputs exactly match the JSON Schemas provided by developers.

This solves a problem many developers face when a system or process consumes an LLM’s output for further processing. It’s important for that system to “know” what to expect as its input so it can process it accordingly.

Similarly, when displaying model output to a user, you want it to be in the same format every time.

In the past, it has been a pain to ensure consistent output formats from Anthropic models. However, it appears that Anthropic has now solved this problem, for its top models at least. From their announcement (linked at the end of the article), they say,

The Claude Developer Platform now supports structured outputs for Claude Sonnet 4.5 and Opus 4.1. Available in public beta, this feature ensures API responses always match your specified JSON schemas or tool definitions.

Now, one thing to remember before we look at some example code is that Anthropic guarantees that the model’s output will adhere to a specified format, not that any output will be 100% accurate. The models can and will occasionally hallucinate.

So you can get perfectly formatted incorrect answers!

Setting up our dev environment

Before we look at some sample Python code, it’s best practice to create a separate development environment where you can install any necessary software and experiment with coding. Anything you do in this environment will be siloed and won’t impact any of your other projects.

I’ll be using Miniconda for this, but you can use whatever method you’re most familiar with.

If you want to go down the Miniconda route and don’t already have it, you must install it first. Get it using this link:

https://docs.anaconda.com/miniconda/

To follow along with my examples, you’ll also need an Anthropic API key and some credit in your account. For reference, I used 12 cents to run the code in this article. If you already have an Anthropic account, you can get an API key using the Anthropic console at https://console.anthropic.com/settings/keys.

1/ Create our new dev environment and install the required libraries

I’m doing this on WSL2 Ubuntu for Windows.

(base) $ conda create -n anth_test python=3.13 -y
(base) $ conda activate anth_test
(anth_test) $ pip install anthropic beautifulsoup4 requests
(anth_test) $ pip install httpx jupyter

2/ Start Jupyter

Now type ‘jupyter notebook’ into your command prompt. You should see a Jupyter notebook open in your browser. If that doesn’t happen automatically, you’ll likely see a screenful of information after the command. Near the bottom, you’ll find a URL to copy and paste into your browser. It’ll look similar to this:

    http://127.0.0.1:8888/tree?token=3b9f7bd07b6966b41b68e2350721b2d0b6f388d248cc69

    Code Examples

In our two coding examples, we will use the new output_format parameter available in the beta API. When specifying the structured output, we can use two different styles.

1. Raw JSON Schema.

As the name suggests, the structure is defined by a JSON schema block passed directly to the output format definition.

2. A Pydantic model class.

This is a regular Python class using Pydantic’s BaseModel that specifies the data we want the model to output. It’s a much more compact way to define a structure than a JSON schema.
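
To make that contrast concrete, here is a minimal sketch of my own (using a hypothetical Movie structure, not anything from the examples below). A few lines of Pydantic generate the same kind of raw JSON schema you would otherwise write by hand:

from pydantic import BaseModel, ConfigDict

class Movie(BaseModel):
    # Forbidding extra keys makes Pydantic emit "additionalProperties": false
    model_config = ConfigDict(extra="forbid")

    title: str
    year: int

# Pydantic produces the equivalent raw JSON schema for you
print(Movie.model_json_schema())
# -> {'additionalProperties': False, 'properties': {'title': ..., 'year': ...},
#     'required': ['title', 'year'], 'title': 'Movie', 'type': 'object'}

Either form can be passed to the API; which you prefer is largely a matter of taste and how complex your structure is.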

    Instance code 1 — Textual content summarisation 

    That is helpful you probably have a bunch of various texts you wish to summarise, however need the summaries to have the identical construction. On this instance, we’ll course of the Wikipedia entries for some well-known scientists and retrieve particular key details about them in a extremely organised approach.

    In our abstract, we wish to output the next construction for every scientist,

• The name of the scientist
• When and where they were born
• Their main claim to fame
• The year they won the Nobel Prize
• When and where they died
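
Concretely, for each scientist we’re aiming at a record shaped like this (a hand-written illustration, not real model output):

# Hypothetical example of the target record
example_summary = {
    "name": "Albert Einstein",
    "born": "14 March 1879 in Ulm, German Empire",
    "fame": "Theory of relativity and the photoelectric effect",
    "prize": 1921,   # 0 if they never won
    "death": "18 April 1955 in Princeton, New Jersey",
}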

Note: Most text in Wikipedia, excluding quotations, has been released under the Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA) and the GNU Free Documentation License (GFDL). In short, this means that you are free:

to Share: copy and redistribute the material in any medium or format

to Adapt: remix, transform, and build upon the material

for any purpose, even commercially.

Let’s break the code into manageable sections, each with an explanation.

First, we import the required third-party libraries and set up a connection to Anthropic using our API key.

import anthropic
import httpx
import requests
import json
import os
from bs4 import BeautifulSoup

# Create an HTTP client and an authenticated Anthropic client
http_client = httpx.Client()
api_key = 'YOUR_API_KEY'
client = anthropic.Anthropic(
    api_key=api_key,
    http_client=http_client
)
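
Hardcoding the key is fine for a quick experiment, but it’s safer to read it from an environment variable. A minimal sketch, assuming you have exported ANTHROPIC_API_KEY in your shell beforehand:

import os
import anthropic

# Assumes: export ANTHROPIC_API_KEY="sk-ant-..." was run before starting Python
api_key = os.environ.get("ANTHROPIC_API_KEY")
if not api_key:
    raise RuntimeError("Set the ANTHROPIC_API_KEY environment variable first.")

client = anthropic.Anthropic(api_key=api_key)

In fact, the SDK reads ANTHROPIC_API_KEY automatically if you construct anthropic.Anthropic() with no api_key argument at all.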

This is the function that will scrape Wikipedia for us.

def get_article_content(url):
    try:
        # Wikipedia expects a browser-like User-Agent header
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
        response = requests.get(url, headers=headers)
        soup = BeautifulSoup(response.content, "html.parser")
        article = soup.find("div", class_="mw-body-content")
        if article:
            # Join all paragraph text, capped to limit token usage
            content = "\n".join(p.text for p in article.find_all("p"))
            return content[:15000]
        else:
            return ""
    except Exception as e:
        print(f"Error scraping {url}: {e}")
        return ""

Next, we define our JSON schema, which specifies the exact format for the model’s output.

summary_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The name of the scientist"},
        "born": {"type": "string", "description": "When and where the scientist was born"},
        "fame": {"type": "string", "description": "A summary of what their main claim to fame is"},
        "prize": {"type": "integer", "description": "The year they won the Nobel Prize. 0 if none."},
        "death": {"type": "string", "description": "When and where they died. 'Still alive' if living."}
    },
    "required": ["name", "born", "fame", "prize", "death"],
    "additionalProperties": False
}
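
If you want to sanity-check a schema like this locally before sending it to the API, the third-party jsonschema package can validate a sample record against it. This is purely optional and not one of the libraries we installed earlier, so treat it as a minimal sketch:

# pip install jsonschema   (not part of this article's environment by default)
from jsonschema import validate, ValidationError

sample = {
    "name": "Marie Curie",
    "born": "7 November 1867 in Warsaw, Poland",
    "fame": "Pioneering research on radioactivity",
    "prize": 1903,
    "death": "4 July 1934 in Passy, France",
}

try:
    # Raises ValidationError if the sample doesn't match the schema
    validate(instance=sample, schema=summary_schema)
    print("Sample conforms to summary_schema")
except ValidationError as e:
    print(f"Schema violation: {e.message}")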

This function serves as the interface between our Python script and the Anthropic API. Its primary goal is to take unstructured text (an article) and force the AI to return a structured data object (JSON) containing specific fields, such as the scientist’s name, birth date, and Nobel Prize details.

The function calls client.messages.create to send a request to the model. It sets the temperature to 0.2, which lowers the model’s creativity to ensure the extracted data is factual and precise. The extra_headers parameter enables a specific beta feature that is not yet standard. By passing the anthropic-beta header with the value structured-outputs-2025-11-13, the code tells the API to activate the Structured Outputs logic for this specific request, forcing it to produce valid JSON that matches your defined structure.

Because the output_format parameter is used, the model returns a raw string that is guaranteed to be valid JSON. The line json.loads(response.content[0].text) parses this string into a native Python dictionary, making the data immediately ready for programmatic use.

def get_article_summary(text: str):
    if not text:
        return None

    try:
        response = client.messages.create(
            model="claude-sonnet-4-5",  # Use the latest available model
            max_tokens=1024,
            temperature=0.2,
            messages=[
                {"role": "user", "content": f"Summarize this article:\n\n{text}"}
            ],
            # Enable the beta feature
            extra_headers={
                "anthropic-beta": "structured-outputs-2025-11-13"
            },
            # Pass the new parameter here
            extra_body={
                "output_format": {
                    "type": "json_schema",
                    "schema": summary_schema
                }
            }
        )

        # The API returns the JSON directly in the text content
        return json.loads(response.content[0].text)

    except anthropic.BadRequestError as e:
        print(f"API Error: {e}")
        return None
    except Exception as e:
        print(f"Error: {e}")
        return None

This is where we pull everything together. The various URLs we want to scrape are defined, and their contents are passed to the model for processing before the end results are displayed.

urls = [
    "https://en.wikipedia.org/wiki/Albert_Einstein",
    "https://en.wikipedia.org/wiki/Richard_Feynman",
    "https://en.wikipedia.org/wiki/James_Clerk_Maxwell",
    "https://en.wikipedia.org/wiki/Alan_Guth"
]

print("Scraping and analyzing articles...")

for i, url in enumerate(urls):
    print(f"\n--- Processing Article {i+1} ---")
    content = get_article_content(url)

    if content:
        summary = get_article_summary(content)
        if summary:
            print(f"Scientist: {summary.get('name')}")
            print(f"Born:      {summary.get('born')}")
            print(f"Fame:      {summary.get('fame')}")
            print(f"Nobel:     {summary.get('prize')}")
            print(f"Died:      {summary.get('death')}")
        else:
            print("Failed to generate summary.")
    else:
        print("Skipping (No content)")

print("\nDone.")

When I ran the above code, I got this output.

Scraping and analyzing articles...

--- Processing Article 1 ---
Scientist: Albert Einstein
Born:      14 March 1879 in Ulm, Kingdom of Württemberg, German Empire
Fame:      Developing the theory of relativity and the mass-energy equivalence formula E = mc2, plus contributions to quantum theory including the photoelectric effect
Nobel:     1921
Died:      18 April 1955

--- Processing Article 2 ---
Scientist: Richard Phillips Feynman
Born:      May 11, 1918, in New York City
Fame:      Path integral formulation of quantum mechanics, quantum electrodynamics, Feynman diagrams, and contributions to particle physics including the parton model
Nobel:     1965
Died:      February 15, 1988

--- Processing Article 3 ---
Scientist: James Clerk Maxwell
Born:      13 June 1831 in Edinburgh, Scotland
Fame:      Developed the classical theory of electromagnetic radiation, unifying electricity, magnetism, and light through Maxwell's equations. Also made key contributions to statistical mechanics, colour theory, and numerous other fields of physics and mathematics.
Nobel:     0
Died:      5 November 1879

--- Processing Article 4 ---
Scientist: Alan Harvey Guth
Born:      February 27, 1947 in New Brunswick, New Jersey
Fame:      Pioneering the theory of cosmic inflation, which proposes that the early universe underwent a phase of exponential expansion driven by positive vacuum energy density
Nobel:     0
Died:      Still alive

Done.

Not too shabby! Alan Guth will be delighted that he’s still alive, but alas, he hasn’t yet won a Nobel Prize. Also, note that James Clerk Maxwell died before the Nobel Prize was in operation.

Example code 2: An Automated Code Security & Refactoring Agent

Here is a completely different use case and a very practical example for software engineering. Usually, when you ask an LLM to “fix code,” it gives you a conversational response mixed with code blocks. That makes it hard to integrate into a CI/CD pipeline or an IDE plugin.

By using Structured Outputs, we can force the model to return the clean code, a list of specific bugs found, and a security risk assessment in a single, machine-readable JSON object.

The Scenario

We’ll feed the model a Python function containing a dangerous SQL injection vulnerability and poor coding practices. The model must identify the exact flaws and rewrite the code securely.

import anthropic
import httpx
import os
import json
from pydantic import BaseModel, Field, ConfigDict
from typing import List, Literal

# --- SETUP ---
http_client = httpx.Client()
api_key = 'YOUR_API_KEY'
client = anthropic.Anthropic(api_key=api_key, http_client=http_client)

# Deliberately bad code
bad_code_snippet = """
import sqlite3

def get_user(u):
    conn = sqlite3.connect('app.db')
    c = conn.cursor()
    # DANGER: Direct string concatenation
    query = "SELECT * FROM users WHERE username = '" + u + "'"
    c.execute(query)
    return c.fetchall()
"""

# --- DEFINE SCHEMA WITH STRICT CONFIG ---
# We add model_config = ConfigDict(extra="forbid") to ensure
# "additionalProperties": false is generated in the schema.

class BugReport(BaseModel):
    model_config = ConfigDict(extra="forbid")

    severity: Literal["Low", "Medium", "High", "Critical"]
    line_number_approx: int = Field(description="The approximate line number where the issue exists.")
    issue_type: str = Field(description="e.g., 'Security', 'Performance', 'Style'")
    description: str = Field(description="Short explanation of the bug.")

class CodeReviewResult(BaseModel):
    model_config = ConfigDict(extra="forbid")

    is_safe_to_run: bool = Field(description="True only if no Critical/High security risks exist.")
    detected_bugs: List[BugReport]
    refactored_code: str = Field(description="The complete, fixed Python code string.")
    explanation: str = Field(description="A brief summary of changes made.")

# --- API CALL ---
try:
    print("Analyzing code for security vulnerabilities...\n")

    response = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=2048,
        temperature=0.0,
        messages=[
            {
                "role": "user",
                "content": f"Review and refactor this Python code:\n\n{bad_code_snippet}"
            }
        ],
        extra_headers={
            "anthropic-beta": "structured-outputs-2025-11-13"
        },
        extra_body={
            "output_format": {
                "type": "json_schema",
                "schema": CodeReviewResult.model_json_schema()
            }
        }
    )

    # Parse result
    result = json.loads(response.content[0].text)

    # --- DISPLAY OUTPUT ---
    print(f"Safe to Run: {result['is_safe_to_run']}")
    print("-" * 40)

    print("BUGS DETECTED:")
    for bug in result['detected_bugs']:
        # Colour-code the severity (red for Critical/High)
        prefix = "🔴" if bug['severity'] in ["Critical", "High"] else "🟡"
        print(f"{prefix} [{bug['severity']}] Line {bug['line_number_approx']}: {bug['description']}")

    print("-" * 40)
    print("REFACTORED CODE:")
    print(result['refactored_code'])

except anthropic.BadRequestError as e:
    print(f"API Schema Error: {e}")
except Exception as e:
    print(f"Error: {e}")

This code acts as an automated security auditor. Instead of asking the AI to “chat” about code, it forces the AI to fill out a strict digital form containing specific details about bugs and security risks.

Here is how it works in three simple steps.

1. First, the code defines exactly what the answer must look like using Python classes with Pydantic. It tells the AI: “Give me a JSON object containing a list of bugs, a severity rating (like ‘Critical’ or ‘Low’) for each, and the fixed code string.”
2. When sending the vulnerable code to the API, it passes the Pydantic blueprint using the output_format parameter. This strictly constrains the model, preventing it from hallucinating or adding conversational filler. It must return valid data matching your blueprint.
3. The script receives the AI’s response, which is guaranteed to be machine-readable JSON. It then automatically parses this data to display a clean report, flagging the SQL injection as a “Critical” issue, for example, and printing the secure, refactored version of the code (a typed variant of this parsing step is sketched after this list).
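
Since we already have the CodeReviewResult Pydantic class, one optional refinement (my addition, not part of the original script) is to run the parsed dictionary back through the model class. That way, a typo in a field name becomes a validation error at parse time rather than a KeyError later:

# Optional: turn the raw dict into a typed object for safer access
review = CodeReviewResult.model_validate(result)

print(review.is_safe_to_run)        # attribute access, checked by Pydantic
for bug in review.detected_bugs:    # each item is a BugReport instance
    print(bug.severity, bug.description)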

Here is the output I got after running the code.

Analyzing code for security vulnerabilities...

Safe to Run: False
----------------------------------------
BUGS DETECTED:
🔴 [Critical] Line 7: SQL injection vulnerability due to direct string concatenation in query construction. An attacker can inject malicious SQL code through the username parameter.
🟡 [Medium] Line 4: Database connection and cursor are not properly closed, leading to potential resource leaks.
🟡 [Low] Line 1: Function parameter name 'u' is not descriptive. Should use meaningful variable names.
----------------------------------------
REFACTORED CODE:
import sqlite3
from contextlib import closing

def get_user(username):
    """
    Retrieve user information from the database by username.

    Args:
        username (str): The username to search for

    Returns:
        list: List of tuples containing user data, or an empty list if not found
    """
    with sqlite3.connect('app.db') as conn:
        with closing(conn.cursor()) as cursor:
            # Use a parameterized query to prevent SQL injection
            query = "SELECT * FROM users WHERE username = ?"
            cursor.execute(query, (username,))
            return cursor.fetchall()

Why is this powerful?

Integration-ready. You can run this script in a GitHub Action. If is_safe_to_run is False, you can automatically block a Pull Request (a minimal sketch of this follows these points).

Separation of concerns. You get the metadata (bugs, severity) separate from the content (the code). You don’t have to use regex to strip “Here is your fixed code” text out of the response.

Strict typing. The severity field is constrained to specific Enum values (Critical, High, etc.), guaranteeing your downstream logic doesn’t break when the model returns “Severe” instead of “Critical”, for example.
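
Here is a minimal sketch of that CI idea, assuming the review result has been saved to review.json by the script above (the filename is my choice, not something the article’s code does). A non-zero exit code is what makes a GitHub Actions step fail and block the Pull Request:

import json
import sys

# Load the structured review produced earlier (hypothetical filename)
with open("review.json") as f:
    review = json.load(f)

if not review["is_safe_to_run"]:
    print("Blocking merge: critical or high-severity issues detected.")
    sys.exit(1)  # non-zero exit fails the CI step

print("Code review passed.")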

Summary

Anthropic’s release of native Structured Outputs is a game-changer for developers who need reliability, not just conversation. By enforcing strict JSON schemas, we can now treat Large Language Models less like chatbots and more like deterministic software components.

In this article, I demonstrated how to use this new beta feature to streamline data extraction and output, and to build automated workflows that integrate seamlessly with Python code. If you’re a user of Anthropic’s API, the days of writing fragile regex to parse AI responses are finally over.

For more information about this new beta feature, click the link below to visit Anthropic’s official documentation page.

https://platform.claude.com/docs/en/build-with-claude/structured-outputs


