    JSON Parsing for Large Payloads: Balancing Speed, Memory, and Scalability

By ProfitlyAI · December 2, 2025 · 12 min read


    Introduction

The marketing campaign you set up for Black Friday was a huge success, and customers start pouring into your website. Your Mixpanel setup, which would normally see around 1,000 customer events an hour, ends up receiving millions of customer events within an hour. Your data pipeline is now tasked with parsing huge amounts of JSON data and storing it in your database. You see that your standard JSON parsing library is not able to scale up to the sudden data growth, and your near real-time analytics reports fall behind. That is when you realize the importance of an efficient JSON parsing library. In addition to handling large payloads, JSON parsing libraries should be able to serialize and deserialize highly nested JSON payloads.

In this article, we explore Python parsing libraries for large payloads. We specifically look at the capabilities of ujson, orjson, and ijson. We then benchmark the standard JSON library (stdlib json), ujson, and orjson for serialization and deserialization performance. Since we use the terms serialization and deserialization throughout the article, here is a refresher on the concepts: serialization converts your Python objects into a JSON string, while deserialization rebuilds Python data structures from a JSON string.
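As a quick, minimal illustration of the two directions with stdlib json (the keys and values here are made up purely for the example):

import json

payload = {"event": "purchase", "amount": 49.99}

json_string = json.dumps(payload)    # serialization: Python object -> JSON string
roundtrip = json.loads(json_string)  # deserialization: JSON string -> Python object

assert roundtrip == payload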

As we progress through the article, you will find a decision flow diagram to help you decide which parser to use based on your workflow and unique parsing needs. In addition to this, we also explore NDJSON and libraries to parse NDJSON payloads. Let's get started.

    Stdlib JSON

Stdlib json supports serialization for all basic Python data types, including dicts, lists, and tuples. When json.load() (or json.loads() on a string) is called, it loads the entire JSON into memory at once. That is fine for smaller payloads, but for larger payloads it can cause significant performance issues such as out-of-memory errors and choking of downstream workflows.

import json

with open("large_payload.json", "r") as f:
    json_data = json.load(f)   # loads the entire file into memory, all tokens at once

    ijson

For payloads that are on the order of hundreds of MBs, it is advisable to use ijson. ijson, short for "iterative JSON", reads data one token at a time without the memory overhead. In the code below, we compare json and ijson.

# The ijson library reads data one token at a time
import ijson

with open("json_data.json", "r") as f:
    for record in ijson.items(f, "items.item"):  # fetch one dict at a time from the "items" array
        process(record)                          # process() is the caller's handler (see sketch below)

As you can see, ijson fetches one element at a time from the JSON and loads it into a Python dict object. This is then fed to the calling function, in this case, process(record). The overall working of ijson is illustrated below, followed by a possible sketch of such a callback.

A high-level illustration of ijson (Image by the Author)
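The article leaves process() undefined; purely as an illustration, a hypothetical callback could aggregate a field from each streamed record (the "city" key is assumed, not taken from the article):

from collections import Counter

city_counts = Counter()

def process(record):
    # record is a single dict streamed out of the array by ijson
    city_counts[record.get("city", "unknown")] += 1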

    ujson

ujson – Under the Hood (Image by the Author)

ujson has been a widely used library in many applications involving large JSON payloads, since it was designed as a faster alternative to the stdlib json in Python. Parsing is fast because the underlying code of ujson is written in C, with Python bindings that connect to the Python interface. The areas that needed improvement in the standard JSON library were optimized in ujson for speed and performance. However, ujson is no longer used in newer projects, as the maintainers themselves have mentioned on PyPI that the library has been placed in maintenance-only mode. The illustration above gives a high-level view of ujson's internals.

import ujson

taxonomy_data = '{"id": 1, "genus": "Thylacinus", "species": "cynocephalus", "extinct": true}'
data_dict = ujson.loads(taxonomy_data)      # Deserialize from a JSON string

with open("taxonomy_data.json", "w") as fh: # Serialize to a file
    ujson.dump(data_dict, fh)

with open("taxonomy_data.json", "r") as fh: # Deserialize from a file
    data = ujson.load(fh)
    print(data)

We move on to the next candidate library, orjson.

    orjson

Since orjson is written in Rust, it is optimized not just for speed but also has memory-safe mechanisms to prevent the buffer overflows that developers face while using C-based JSON libraries like ujson. Moreover, orjson supports serialization of several additional datatypes beyond the standard Python datatypes, including dataclass and datetime objects. Another key difference between orjson and the other libraries is that orjson's dumps() function returns a bytes object, while the others return a string. Returning the data as a bytes object is one of the main reasons for orjson's fast throughput.

import orjson

book_payload = '{"id": 1, "name": "The Great Gatsby", "author": "F. Scott Fitzgerald", "Publishing House": "Charles Scribner\'s Sons"}'
data_dict = orjson.loads(book_payload)  # Deserialize from a JSON string
print(data_dict)

with open("book_data.json", "wb") as f: # Serialize
    f.write(orjson.dumps(data_dict))    # dumps() returns a bytes object

with open("book_data.json", "rb") as f: # Deserialize
    book_data = orjson.loads(f.read())
    print(book_data)

Now that we've explored some JSON parsing libraries, let's test their serialization capabilities.

    Testing Serialization Capabilities of JSON, ujson and orjson

We create a sample dataclass object with an integer, a string, and a datetime field.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Person:
    id: int
    name: str
    created: datetime

u = Person(id=1, name="Thomas", created=datetime.now())

We then pass it to each of the libraries to see what happens. We begin with the stdlib json.

import json

try:
    print("json:", json.dumps(u))
except TypeError as e:
    print("json error:", e)

As expected, we get the following error. (The standard JSON library does not support serialization of dataclass objects or datetime objects.)
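As an aside (not part of the original test), a minimal sketch of a workaround for stdlib json is to pass a custom default= callable; the helper below is hypothetical:

from dataclasses import asdict, is_dataclass
from datetime import datetime
import json

def to_jsonable(obj):
    # Hypothetical helper: tell stdlib json how to handle dataclasses and datetimes
    if is_dataclass(obj):
        return asdict(obj)
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Not serializable: {type(obj)}")

print(json.dumps(u, default=to_jsonable))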

Next, we test the same with the ujson library.

import ujson

try:
    print("ujson:", ujson.dumps(u))
except TypeError as e:
    print("ujson error:", e)

As we see above, ujson is not able to serialize the dataclass object or the datetime datatype. Lastly, we use the orjson library for serialization.

import orjson

try:
    print("orjson:", orjson.dumps(u))
except TypeError as e:
    print("orjson error:", e)

We see that orjson is able to serialize both the dataclass and the datetime datatypes.
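If you want the result as readable text, the bytes returned by orjson.dumps() can be decoded; as a small optional aside, orjson's OPT_INDENT_2 option pretty-prints the output:

import orjson

raw = orjson.dumps(u, option=orjson.OPT_INDENT_2)  # bytes, pretty-printed with a 2-space indent
print(type(raw))            # <class 'bytes'>
print(raw.decode("utf-8"))  # human-readable JSON string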

Working with NDJSON (A Special Mention)

We've seen the libraries for JSON parsing, but what about NDJSON? NDJSON (Newline Delimited JSON), as you may know, is a format in which each line is a JSON object. In other words, the delimiter is not a comma but a newline character. For example, this is what NDJSON looks like.

    {"id": "A13434", "identify": "Ella"}
    {"id": "A13455", "identify": "Charmont"}
    {"id": "B32434", "identify": "Areida"}

NDJSON is commonly used for logs and streaming data, and hence NDJSON payloads are excellent candidates for being parsed with the ijson library. For small to moderate NDJSON payloads, it is recommended to use the stdlib json. Apart from ijson and stdlib json, there is a dedicated ndjson library. Below are code snippets showing each approach.

NDJSON using stdlib JSON & ijson

As NDJSON is not delimited by commas, it does not qualify for a bulk load, because stdlib json expects to see a list of dicts. In other words, stdlib json's parser looks for a single valid JSON element, but is instead given multiple JSON elements in the payload file. Therefore, the file has to be parsed iteratively, line by line, and sent to the caller function for further processing.

import json

ndjson_payload = """{"id": "A13434", "name": "Ella"}
{"id": "A13455", "name": "Charmont"}
{"id": "B32434", "name": "Areida"}"""

# Writing the NDJSON file
with open("json_lib.ndjson", "w", encoding="utf-8") as fh:
    for line in ndjson_payload.splitlines():  # split the string into individual JSON objects
        fh.write(line.strip() + "\n")         # write each JSON object on its own line

# Reading the NDJSON file using json.loads
with open("json_lib.ndjson", "r", encoding="utf-8") as fh:
    for line in fh:
        if line.strip():                      # skip empty lines
            item = json.loads(line)           # deserialize one line at a time
            print(item)                       # or send it to the caller function

With ijson, the parsing is done as shown below. With standard JSON, we have only one root element, which is either a dictionary (for a single JSON object) or an array (for a list of dicts). But with NDJSON, each line is its own root element. The argument "" in ijson.items() tells the ijson parser to look at each root element. Together, "" and multiple_values=True let the ijson parser know that there are multiple JSON root elements in the file, and to fetch one line (one JSON object) at a time.

import ijson

ndjson_payload = """{"id": "A13434", "name": "Ella"}
{"id": "A13455", "name": "Charmont"}
{"id": "B32434", "name": "Areida"}"""

# Writing the payload to a file to be processed by ijson
with open("ijson_lib.ndjson", "w", encoding="utf-8") as fh:
    fh.write(ndjson_payload)

with open("ijson_lib.ndjson", "r", encoding="utf-8") as fh:
    for item in ijson.items(fh, "", multiple_values=True):
        print(item)

Lastly, we have the dedicated ndjson library. It essentially converts the NDJSON format to standard JSON.

import ndjson

ndjson_payload = """{"id": "A13434", "name": "Ella"}
{"id": "A13455", "name": "Charmont"}
{"id": "B32434", "name": "Areida"}"""

# Writing the payload to a file to be processed by the ndjson library
with open("ndjson_lib.ndjson", "w", encoding="utf-8") as fh:
    fh.write(ndjson_payload)

with open("ndjson_lib.ndjson", "r", encoding="utf-8") as fh:
    ndjson_data = ndjson.load(fh)   # returns a list of dicts

As you may have noticed, NDJSON files can usually be parsed using stdlib json and ijson. For very large payloads, ijson is the best choice as it is memory-efficient. But if you are looking to generate NDJSON payloads from other Python objects, the ndjson library is the best choice. This is because the function ndjson.dumps() automatically converts Python objects to the NDJSON format without your having to iterate over those data structures, as shown below.
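Here is a minimal sketch of that direction, assuming the ndjson package is installed (reusing the sample records from above):

import ndjson

records = [
    {"id": "A13434", "name": "Ella"},
    {"id": "A13455", "name": "Charmont"},
    {"id": "B32434", "name": "Areida"},
]

ndjson_text = ndjson.dumps(records)  # one JSON object per line, newline-delimited
print(ndjson_text)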

Now that we've explored NDJSON, let's pivot back to benchmarking the libraries stdlib json, ujson, and orjson.

Why ijson Is Not Considered for Benchmarking

ijson, being a streaming parser, is very different from the bulk parsers we have looked at. If we benchmarked ijson alongside these bulk parsers, we would be comparing apples to oranges. We would also get the false impression that ijson is the slowest, when in fact it serves a different purpose altogether: ijson is optimized for memory efficiency and therefore has lower throughput than bulk parsers.
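To make the memory trade-off concrete, here is a rough sketch (not from the original benchmarks) that compares the peak traced memory of a bulk parse against a streaming parse, assuming the large array file generated in the next section:

import json
import tracemalloc

import ijson

# Bulk parse: the whole list of dicts is materialized in memory
tracemalloc.start()
with open("large_payload.json", "r") as fh:
    data = json.load(fh)
bulk_peak = tracemalloc.get_traced_memory()[1]
tracemalloc.stop()
del data

# Streaming parse: only one record lives in memory at a time
tracemalloc.start()
with open("large_payload.json", "rb") as fh:
    count = sum(1 for _ in ijson.items(fh, "item"))  # "item" = each element of the top-level array
stream_peak = tracemalloc.get_traced_memory()[1]
tracemalloc.stop()

print(f"json.load peak:   {bulk_peak / 1e6:.1f} MB")
print(f"ijson.items peak: {stream_peak / 1e6:.1f} MB")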

Generating a Synthetic JSON Payload for Benchmarking Purposes

We generate a large synthetic JSON payload containing 1 million records, using the library mimesis. This data will be used to benchmark the libraries. The code below can be used to create the payload if you wish to replicate the benchmark. The generated file should be between 100 MB and 150 MB in size, which I believe is large enough for benchmarking tests.

from mimesis import Person, Address
import json

person_name = Person("en")
complete_address = Address("en")

# streaming records to a file as one large JSON array
with open("large_payload.json", "w") as fh:
    fh.write("[")  # open the JSON array
    for i in range(1_000_000):
        payload = {
            "id": person_name.identifier(),
            "name": person_name.full_name(),
            "email": person_name.email(),
            "address": {
                "street": complete_address.street_name(),
                "city": complete_address.city(),
                "postal_code": complete_address.postal_code()
            }
        }
        json.dump(payload, fh)
        if i < 999_999:     # no comma after the last entry
            fh.write(",")
    fh.write("]")   # close the JSON array

Below is a sample of what the generated data looks like. As you can see, the address fields are nested to ensure that the JSON is not just large in size but also representative of real-world hierarchical JSONs.

[
  {
    "id": "8177",
    "name": "Willia Hays",
    "email": "[email protected]",
    "address": {
      "street": "Emerald Cove",
      "city": "Crown Point",
      "postal_code": "58293"
    }
  },
  {
    "id": "5931",
    "name": "Quinn Greer",
    "email": "[email protected]",
    "address": {
      "street": "Ohlone",
      "city": "Bridgeport",
      "postal_code": "92982"
    }
  }
]

Let's get started with the benchmarking.

    Benchmarking Pre-requisites

We use the read() function to store the JSON file as a string. We then use the loads() function from each of the libraries (json, ujson, and orjson) to deserialize the JSON string into a Python object. To begin with, we create the payload_str object from the raw JSON text.

    with open("large_payload1.json", "r") as fh:
        payload_str = fh.learn()   #uncooked JSON textual content

We then create a benchmarking function with two arguments. The first argument is the function being tested; in this case, the loads() function. The second argument is the payload_str built from the file above.

import time

def benchmark_load(func, payload_str):
    start = time.perf_counter()
    for _ in range(3):        # run three times to smooth out noise
        func(payload_str)
    end = time.perf_counter()
    return end - start

We use the above function to test both serialization and deserialization speeds.

Benchmarking Deserialization Speed

We load the three libraries being tested. We then run the function benchmark_load() against the loads() function of each of these libraries.

import json, ujson, orjson, time

results = {
    "json.loads": benchmark_load(json.loads, payload_str),
    "ujson.loads": benchmark_load(ujson.loads, payload_str),
    "orjson.loads": benchmark_load(orjson.loads, payload_str),
}

for lib, t in results.items():
    print(f"{lib}: {t:.4f} seconds")

As we can see, orjson takes the least amount of time for deserialization.

Benchmarking Serialization Speed

Next, we test the serialization speed of these libraries.

import json
import ujson
import orjson

# Deserialize once so that dumps() is benchmarked on Python objects, not on the raw string
payload_obj = json.loads(payload_str)

results = {
    "json.dumps": benchmark_load(json.dumps, payload_obj),
    "ujson.dumps": benchmark_load(ujson.dumps, payload_obj),
    "orjson.dumps": benchmark_load(orjson.dumps, payload_obj),
}

for lib, t in results.items():
    print(f"{lib}: {t:.4f} seconds")

On comparing run times, we see that orjson takes the least amount of time to serialize Python objects to JSON.

Choosing the Best JSON Library for Your Workflow

A guide to choosing the optimal JSON library (Image by the Author)

    Clipboard & Workflow Hacks for JSON

Let's suppose that you'd like to view your JSON in a text editor such as Notepad++ or share a snippet (from a large payload) on Slack with a teammate. You will quickly run into clipboard or text editor/IDE crashes. In such situations, one could use Pyperclip or Tkinter. Pyperclip works well for payloads within 50 MB, while Tkinter works well for medium-sized payloads. For large payloads, you can write the JSON to a file to view the data.
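As a small sketch of the Pyperclip route (assuming pyperclip is installed and the payload file from earlier exists), copying only a slice keeps the clipboard manageable:

import json
import pyperclip

with open("large_payload.json", "r") as fh:
    records = json.load(fh)

snippet = json.dumps(records[:5], indent=2)  # share only a small, pretty-printed slice
pyperclip.copy(snippet)                      # paste into Slack, Notepad++, etc.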

    Conclusion

JSON can seem simple, but the larger the payload and the deeper the nesting, the more quickly these payloads can turn into a performance bottleneck. This article aimed to highlight how each Python parsing library addresses this challenge. When picking a JSON parsing library, speed and throughput are not always the main criteria. It is the workflow that determines whether throughput, memory efficiency, or long-term scalability is required for parsing payloads. In short, JSON parsing is not a one-size-fits-all affair.


