    I Finally Built My First AI App (And It Wasn’t What I Expected)

By ProfitlyAI | March 12, 2026 | 15 min read


Ever feel like everybody's talking about AI apps, but nobody actually shows you what's happening behind the scenes? Yeah… that was me a few weeks ago, staring at my screen, wondering if I'd ever actually build something that talked back.

So, I decided to just dive in, figure it out, and share everything along the way. By the end of this post, you'll see exactly what happens when you build your first AI app, and you'll pick up a few real skills along the way: calling APIs, handling environment variables, and running your first script without breaking anything (hopefully).

Let's get into it. I promise it's easier than it looks.

What Are We Building (and Why It Actually Matters)

Okay, so before we start typing code like maniacs, let's pause for a second and talk about what we're actually building here. Spoiler: it's not some sci-fi-level AI that will take over your job (yet). It's something practical, real-world, and entirely doable in a single afternoon: an AI-powered article summarizer.

Here's the idea: you paste a chunk of text, maybe a news article, a research paper, or a super long blog post, and our little AI app spits out a short, easy-to-read summary. Think of it as your personal TL;DR machine.

Why this matters:

• It's immediately useful: anyone who reads a lot of content (so… basically all of us on TDS) will love having a tool that distills information instantly.
• It's simple, but powerful: we're only making one API call, but the result is a working AI app you can actually showcase.
• It's expandable: today it's a command-line script. Tomorrow, you could hook it up to Slack, a web interface, or batch-process hundreds of articles.

So yeah, we're not reinventing the wheel, but we are demystifying what actually happens behind the scenes when you build an AI app. And more importantly, we're doing it in public, learning as we go, and documenting every little step so that by the time you finish this post, you'll actually understand what's happening under the hood.

Next, we'll get our hands dirty with Python, install the OpenAI package, and set everything up so that our AI can start summarizing text. Don't worry, I'll explain every single line as we go.

Installing the OpenAI Package (and Making Sure Nothing Breaks)

Alright. This is the part where things usually feel "technical" and slightly intimidating.

But I promise: we're just installing a package and running a tiny script. That's it.

First, make sure you have Python installed. If you're not sure, open your terminal (or Command Prompt on Windows) and run:

python --version

If you see something like Python 3.x.x, you're good. If not… install Python first and come back.

Now let's install the OpenAI package. In your terminal:

pip install openai

That command basically tells Python: "Hey, go grab this library from the internet so I can use it in my project."

If everything goes well, you'll see a bunch of text scroll by and eventually something like:

Successfully installed openai

That's your first small win.

Quick Reality Check: What Did We Just Do?

When we ran pip install openai, we didn't "install AI." We installed a client library: a helper tool that lets our Python script talk to OpenAI's servers.

Think of it like this:

• Your computer = the messenger
• The OpenAI API = the brain in the cloud
• The openai package = the language translator between them

Without the package, your script wouldn't know how to properly format a request to the API.
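To make that concrete, here is a rough sketch of the HTTP request the openai package assembles for you behind the scenes. The endpoint URL and Authorization header shape match OpenAI's public HTTP API; the placeholder key and the exact message text are just illustrative:

```python
import json
import os

# The key would come from your environment; a placeholder keeps this runnable.
api_key = os.getenv("OPENAI_API_KEY", "sk-placeholder")

# What the library ultimately sends: a POST to this endpoint...
url = "https://api.openai.com/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",  # your key, sent as a bearer token
    "Content-Type": "application/json",
}
# ...with a JSON body describing the model and the conversation.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this text: ..."}],
}

body = json.dumps(payload)  # the client serializes and sends this for you
```

The client library adds retries, typed errors, and response parsing on top, which is why we use it instead of hand-rolling requests.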

Let's Test That It Works

Before we move forward, let's check that Python can actually see the package.

Run this:

python

Then, inside the Python shell:

import openai
print("It works!")

If you don't see any angry red error messages, congratulations: your environment is ready.

This may seem small, but this step teaches you something important:

• How to install external libraries
• How Python environments work
• How to verify that your setup is correct

These are foundational skills. Every real-world AI or data project starts exactly like this.

Next, we'll set up our API key securely using environment variables.

Setting Up Your API Key (Without Accidentally Leaking It)

Okay. This part is important.

To talk to the OpenAI API, we need something called an API key. Think of it as your personal password that says, "Hey, it's me. I'm allowed to use this service."

Now here's the mistake beginners (including past me) make:

They copy the API key and paste it directly into the Python file. Please don't do that.

If you ever upload that file to GitHub, share it publicly, or even send it to a friend, you've basically exposed your secret key to the internet. And yes, people and bots actively scan for that.

So instead, we're going to store it safely using environment variables.

Step 1: Get Your API Key

1. Create an account on OpenAI.
2. Generate an API key from your dashboard.
3. Copy it somewhere safe (for now).

Don't worry: we're not putting it into our code.

Step 2: Set the Environment Variable

On Windows (Command Prompt):

setx OPENAI_API_KEY "your_api_key_here"

On Mac/Linux:

export OPENAI_API_KEY="your_api_key_here"

On Windows, close and reopen your terminal so the change takes effect. On Mac/Linux, export only lasts for the current session; add that line to your shell profile (e.g. ~/.bashrc or ~/.zshrc) to make it permanent.

What we just did: we created a variable stored on your system that only your machine knows about.

Step 3: Access It in Python

Now let's check that Python can see it.

Open Python again:

python

Then type:

import os
api_key = os.getenv("OPENAI_API_KEY")
print(api_key[:4] + "..." if api_key else api_key)

If you see the first few characters of your key, everything worked.

And if None shows up? That just means the environment variable didn't register, which is usually fixed by restarting your terminal.

What's Actually Happening Behind the Scenes?

When we use os.getenv("OPENAI_API_KEY"), Python simply asks your operating system:

"Hey, do you have a variable stored under this name?"

If it exists, it returns the value. If not, it returns None.

This tiny step introduces some big real-world concepts:

• Secure configuration management
• Separating secrets from code
• Writing production-safe scripts

This is how real applications handle credentials. You're not just building a toy app anymore. You're following actual engineering best practices.
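One small habit worth adopting right away: fail loudly at startup if the key is missing, instead of letting a confusing error surface deep inside the API call. A minimal sketch (the helper name is mine):

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it's not set."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set. Set it in your shell, then restart the terminal."
        )
    return key
```

In your script you would then write something like client = OpenAI(api_key=require_api_key()) and get a clear message the moment something is off.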

Next, we'll finally make our first API call: the moment where your script sends text to the cloud… and something intelligent comes back.

Making Your First API Call (This Is the Magic Moment)

Alright. This is it.

This is the moment where your computer actually talks to the AI.

Up until now, we've just been preparing the environment. Installing packages. Setting keys. Doing the "responsible adult" setup work.

Now we finally send a request.

Create a new file called app.py and paste this in:

import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

text_to_summarize = """
Artificial intelligence is transforming industries by automating tasks,
improving decision-making, and enabling new products and services.
However, understanding how these systems work behind the scenes
remains a mystery to many beginners.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant that summarizes text clearly and concisely."},
        {"role": "user", "content": f"Summarize this text:\n{text_to_summarize}"}
    ]
)

print(response.choices[0].message.content)

Now go to your terminal and run:

python app.py

And if everything is set up correctly… you should see a clean summary printed in your terminal.

Pause for a second when that happens, because what just happened is kind of wild.

Let's Break Down What Just Happened

Let's walk through this slowly.

from openai import OpenAI

This imports the client library we installed earlier. It's the bridge between your script and the API.

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

Here, we create a client object and authenticate using the environment variable we set earlier.

If the key is wrong, the request fails.
If the key is correct, you're officially connected.

response = client.chat.completions.create(...)

This is the API call.

Your script sends:

• The model name
• A list of messages (structured like a conversation)

OpenAI's servers process the request, the model generates a response, and the server sends structured JSON back to your script.

Then we extract the actual text with:

response.choices[0].message.content

That's it.

Just a properly formatted HTTP request going to a cloud server and a structured response coming back.

Why This Is a Big Deal

You just learned how to:

• Authenticate with an external service
• Send structured data to an API
• Receive and parse structured output
• Execute a full AI-powered workflow in under 30 lines of code

This is the foundation of real AI applications.

Next, we'll dig into what that response object actually looks like, because understanding the structure is what separates copying code from actually knowing what's going on.

It Worked… After a Small (Very Real) Reality Check

Before we move on, I have to tell you what happened the first time I ran this.

• The code was correct.
• The API key was correct.
• The request structure was correct.

And then I got this:

openai.RateLimitError: 429
'insufficient_quota'

At first glance, that feels scary.

But here's what it actually meant:

My script successfully connected to the API. The authentication worked. The server received my request.

I just didn't have billing enabled. That's it.

Using the API isn't the same as using ChatGPT in your browser. The API is infrastructure. It runs on cloud resources. And those resources cost money.

So I added a small amount of credit to my account (nothing crazy, just enough to experiment), ran the exact same script again…

And it worked.

Clean summary printed to the terminal. No code changes.

That moment is important, because now we can sort beginner API issues into two main buckets:

• Code problems → your Python script is invalid.
• Infrastructure problems → authentication, quota, or billing issues.

Once you understand that distinction, AI development becomes way less mysterious.
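You can even encode that triage into the script. The openai v1 SDK raises typed exceptions such as openai.RateLimitError and openai.AuthenticationError for infrastructure-side failures; the sketch below classifies by exception class name so it runs even without the SDK installed, and classify_error is my own helper name:

```python
# Exception class names the openai SDK uses for infrastructure-side failures.
INFRA_ERRORS = {"RateLimitError", "AuthenticationError", "PermissionDeniedError"}

def classify_error(exc: Exception) -> str:
    """Rough triage: infrastructure problem (key/quota/billing) or code problem?"""
    return "infrastructure" if type(exc).__name__ in INFRA_ERRORS else "code"

# In app.py, you might wrap the API call like this (sketch):
#
# try:
#     response = client.chat.completions.create(...)
# except Exception as exc:
#     print(f"Looks like a {classify_error(exc)} problem: {exc}")
#     raise
```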

Now… What Does response Actually Look Like?

When your script works, response isn't just text. It's a structured object (basically JSON under the hood).

If you temporarily print the whole thing:

print(response)

You'll see something structured with fields like:

id
model
usage
choices

The actual summary lives inside:

response.choices[0].message.content

Let's unpack that:

choices → a list of generated outputs
[0] → we're grabbing the first one
message → the assistant's reply object
content → the actual text

This matters more than it seems.

Because in real-world applications, you might:

• Log token usage for cost monitoring
• Store responses in a database
• Handle multiple choices
• Add proper error handling
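Token logging, for example, reads straight off the response's usage field (a chat-completions response reports prompt_tokens, completion_tokens, and total_tokens). A sketch, demoed on an offline stub so it runs without an API call; usage_report is my own name:

```python
from types import SimpleNamespace

def usage_report(response) -> dict:
    """Pull token counts out of a chat-completions response, e.g. for cost logs."""
    u = response.usage
    return {
        "prompt_tokens": u.prompt_tokens,
        "completion_tokens": u.completion_tokens,
        "total_tokens": u.total_tokens,
    }

# Stub shaped like a real response, so this demo runs offline:
fake = SimpleNamespace(
    usage=SimpleNamespace(prompt_tokens=52, completion_tokens=31, total_tokens=83)
)
print(usage_report(fake))
```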

Right now we're just printing the content.

But structurally, you now understand how to navigate an API response.

And that's the difference between copying code… and actually knowing what's going on.

At this point, you've:

• Installed a production-grade client library
• Secured credentials properly
• Sent a structured API request
• Understood how billing and quota affect infrastructure
• Parsed structured output

That's a full AI workflow.

Next, we'll make this slightly more interactive: instead of hardcoding text, we'll let the user paste in their own article to summarize.

And that's when it really starts feeling like a real app.

Making It Interactive (Your TL;DR App, Finally!)

Up until now, we've been doing everything with a hardcoded chunk of text. That's fine for testing, but it's not very… you know… app-like.

We want to actually let a user paste in any article and get a summary.

Let's fix that.

Step 1: Get User Input

Python makes this super easy with the input() function. Open your app.py and replace your text_to_summarize variable with this:

text_to_summarize = input("Paste your article here:\n")

That's it. Now, when you run:

python app.py

the terminal will wait for you to paste something in. You hit Enter, and the AI does its thing. (One caveat: input() reads a single line, so paste the article as one block; pressing Enter partway through submits what you have so far.)

Step 2: Print the Summary Nicely

Instead of dumping raw text, let's make it a little prettier:

summary = response.choices[0].message.content
print("\nHere's your summary:\n")
print(summary)

See what we did there?

We store the output in a variable called summary, which is handy if we want to use it later.

We add a little heading to make it obvious what the AI returned.

This tiny touch makes your app feel more "finished" without actually being fancy.

Step 3: Test It Out

Run the script, paste in a paragraph from any article, and watch the magic happen:

python app.py

You should see your custom summary pop up in seconds.

This is why we started with a simple hardcoded string: now you can actually interact with the model like a real app user.

Step 4: Optional Extras (If You're Feeling Fancy)

If you want to take it one step further, you could:

• Loop until the user quits: let them summarize multiple articles without restarting the script.
• Save summaries to a file: handy for research or blog prep.
• Handle empty input: make sure the app doesn't crash if the user accidentally hits Enter.
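All three extras fit in one small loop. In this sketch the input and summarize functions are injected as parameters (my own design choice) so the loop can be tested offline; in app.py you'd pass the built-in input and a function that makes the API call:

```python
def summarize_loop(get_input, summarize, emit=print):
    """Keep summarizing until the user types 'quit'; skip empty input."""
    while True:
        text = get_input("Paste your article (or 'quit' to exit):\n").strip()
        if text.lower() == "quit":
            emit("Bye!")
            break
        if not text:
            emit("Nothing to summarize. Try again.")
            continue
        emit(summarize(text))

# Offline demo with canned input instead of the real API:
fake_inputs = iter(["", "Some long article text here.", "quit"])
outputs = []
summarize_loop(
    get_input=lambda prompt: next(fake_inputs),
    summarize=lambda text: f"Summary of {len(text.split())} words",
    emit=outputs.append,
)
print(outputs)
```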

Polishing the App for Longer Articles

Alright, by now our little AI summarizer works. You paste text, hit Enter, and get a summary.

But there's a small problem: what happens if somebody pastes a super long article, like a 2,000-word blog post?

If we send that directly to the API, one of two things usually happens:

• The model might truncate the input and only summarize part of it.
• The request might fail, depending on token limits.

Not ideal. So let's make our app smarter.

Step 1: Trim and Clean the Input

Even before worrying about length, we should tidy up the text.

Remove unnecessary whitespace, newlines, or invisible characters:

text_to_summarize = text_to_summarize.strip().replace("\n", " ")

strip() removes extra spaces at the start/end.
replace("\n", " ") turns line breaks into spaces so the model sees one continuous paragraph.

Small step, but it makes summaries cleaner.

Step 2: Chunk Long Text

Let's say we want to split articles into smaller chunks so the model can handle them comfortably. A simple approach is splitting by word count. Here's a quick example:

max_chunk_size = 500  # roughly 500 words per chunk
chunks = []
words = text_to_summarize.split()
for i in range(0, len(words), max_chunk_size):
    chunk = " ".join(words[i:i + max_chunk_size])
    chunks.append(chunk)

Now chunks is a list of manageable text pieces.

We can then loop through each chunk, summarize it, and combine the summaries at the end.

Step 3: Summarize Each Chunk

Here's how that might look:

final_summary = ""
for chunk in chunks:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant that summarizes text clearly and concisely."},
            {"role": "user", "content": f"Summarize this text:\n{chunk}"}
        ]
    )
    final_summary += response.choices[0].message.content + " "

Notice how small the change is? But now even super long articles can be summarized without breaking the app.

Step 4: Present a Clean Output

Finally, let's make the result easy to read:

print("\nHere's your final summary:\n")
print(final_summary.strip())

.strip() at the end ensures there are no extra spaces or trailing newlines.

The user sees one clean, continuous summary instead of several disjointed outputs.

From Idea to Real AI App

When I started this, it was just a simple idea:

"What if I could paste an article and instantly get a clean summary?"

That's pretty much it. No big startup vision or complicated architecture.

And step by step, here's what happened:

• I installed a real production library.
• I learned how APIs actually work.
• I handled billing errors and environment variables.
• I built a working CLI tool.
• Then I turned it into a web app anyone can use.

Somewhere along the way, this stopped feeling like a "toy script."
It became a real AI workflow:

Local machine → API call → cloud model → structured response → user interface.

And the best part? I understand every piece of it now.

The errors and warnings also helped, because building in public forces you to slow down, debug properly, and actually learn what's happening.

This is how real AI skills are built. Not by memorizing code, but by shipping small things, breaking them, fixing them, and understanding them.

So if this helped you, don't stop here.

Break it. Improve it.

Add file uploads. Deploy it. Turn it into a Chrome extension. Build the version you wish existed.

And when you do, write about it.

Because the fastest way to grow in AI right now isn't consuming content.

It's building in public.

And today, we shipped!

I also deployed the app so you can try it yourself here.

If you enjoyed this article, let me know. I'd love your comments and feedback.

