Close Menu
    Trending
    • The Reality of Vibe Coding: AI Agents and the Security Debt Crisis
    • Architecting GPUaaS for Enterprise AI On-Prem
    • How to make a cash flow forecasting app work for other systems
    • Donkeys, Not Unicorns | Towards Data Science
    • An End-to-End Guide to Beautifying Your Open-Source Repo with Agentic AI
    • From Monolith to Contract-Driven Data Mesh
    • Study: AI chatbots provide less-accurate information to vulnerable users | MIT News
    • The Missing Curriculum: Essential Concepts For Data Scientists in the Age of AI Coding Agents
    ProfitlyAI
    • Home
    • Latest News
    • AI Technology
    • Latest AI Innovations
    • AI Tools & Technologies
    • Artificial Intelligence
    ProfitlyAI
    Home » The Reality of Vibe Coding: AI Agents and the Security Debt Crisis
    Artificial Intelligence

    The Reality of Vibe Coding: AI Agents and the Security Debt Crisis

    ProfitlyAIBy ProfitlyAIFebruary 22, 2026No Comments7 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    Share
    Facebook Twitter LinkedIn Pinterest Email


    this previous month, a social community run fully by AI brokers was probably the most fascinating experiment on the web. In case you haven’t heard of it, Moltbook is actually a social community platform for brokers. Bots put up, reply, and work together with out human intervention. And for a couple of days, it appeared to be all anybody might speak about — with autonomous brokers forming cults, ranting about people, and constructing their very own society.

    Then, safety agency Wiz launched a report exhibiting a large leak within the Moltbook ecosystem [1]. A misconfigured Supabase database had uncovered 1.5 million API keys and 35,000 consumer e mail addresses on to the general public web.

    How did this occur? The basis trigger wasn’t a complicated hack. It was vibe coding. The builders constructed this via vibe coding, and within the technique of constructing quick and taking shortcuts, missed these vulnerabilities that coding brokers added.

    That is the fact of vibe coding: Coding brokers optimize for making code run, not making code secure.

    Why Brokers Fail

    In my analysis at Columbia College, we evaluated the highest coding brokers and vibe coding instruments [2]. We discovered key insights on the place these brokers fail, highlighting safety as one of the vital failure patterns.

    1. Pace over security: LLMs are optimized for acceptance. The only solution to get a consumer to just accept a code block is commonly to make the error message go away. Sadly, the constraint inflicting the error is typically a security guard.

    In observe, we noticed brokers eradicating validation checks, enjoyable database insurance policies, or disabling authentication flows merely to resolve runtime errors.

    2. AI is unaware of uncomfortable side effects: AI is commonly unaware of the total codebase context, particularly when working with giant advanced architectures. We noticed this continuously with refactoring, the place an agent fixes a bug in a single file however causes breaking adjustments or safety leaks in recordsdata referencing it, just because it didn’t see the connection.

    3. Sample matching, not judgement: LLMs don’t really perceive the semantics or implications of the code they write. They simply predict the tokens they imagine will come subsequent, based mostly on their coaching information. They don’t know why a safety examine exists, or that eradicating it creates threat. They simply realize it matches the syntax sample that fixes the bug. To an AI, a safety wall is only a bug stopping the code from operating.

    These failure patterns aren’t theoretical — They present up continuously in day-to-day improvement. Listed below are a couple of easy examples I’ve personally run into throughout my analysis.

    3 Vibe Coding Safety Bugs I’ve Seen Lately

    1. Leaked API Keys

    It’s worthwhile to name an exterior API (like OpenAI) from a React frontend. To repair this, the agent simply places the API key on the prime of your file. 

    // What the agent writes
    const response = await fetch('https://api.openai.com/v1/...', {
      headers: {
        'Authorization': 'Bearer sk-proj-12345...' // <--- EXPOSED
      }
    });

    This makes the important thing seen to anybody, since with JS you are able to do “Examine Ingredient” and think about the code.

    2. Public Entry to Databases

    This occurs continuously with Supabase or Firebase. The problem is I used to be getting a “Permission Denied” error when fetching information. The AI urged a coverage of USING (true) or public entry.

    -- What the agent writes
    CREATE POLICY "Enable public entry" ON customers FOR SELECT USING (true);

    This fixes the error because it makes the code run. But it surely simply made the complete database public to the web.

    3. XSS Vulnerabilities

    We examined if we might render uncooked HTML content material inside a React element. The agent instantly added the code change to make use of dangerouslySetInnerHTML to render the uncooked HTML. 

    // What the agent writes
    <div dangerouslySetInnerHTML={{ __html: aiResponse }} />

    The AI not often suggests a sanitizer library (like dompurify). It simply provides you the uncooked prop. This is a matter as a result of it leaves your app huge open to Cross-Website Scripting (XSS) assaults the place malicious scripts can run in your customers’ units.

    Collectively, these aren’t simply one-off horror tales. They line up with what we see in broader information on AI-generated adjustments:

    Sources [3], [4], [5]

    The best way to Vibe Code Appropriately

    We shouldn’t cease utilizing these instruments, however we have to change how we use them.

    1. Higher prompts

    We are able to’t simply ask the agent to “make this safe.” It received’t work as a result of “safe” is just too obscure for an LLM. We must always as an alternative use spec-driven improvement, the place we are able to have pre-defined safety insurance policies and necessities that the agent should fulfill earlier than writing any code. This could embody however shouldn’t be restricted to: no public database entry, writing unit checks for every added function, sanitize consumer enter, and no hardcoded API keys. A superb start line is grounding these insurance policies within the OWASP High 10, the industry-standard listing of probably the most vital internet safety dangers.

    Past that, analysis exhibits that Chain-of-Thought prompting, particularly asking the agent to motive via safety implications earlier than writing code, considerably reduces insecure outputs. As an alternative of simply asking for a repair, we are able to ask: “What are the safety dangers of this method, and the way will you keep away from them?”.

    2. Higher Opinions

    When vibe coding, it’s actually tempting to only view the UI (and never have a look at code), and actually, that’s the entire promise of vibe coding. However presently, we’re not there but. Andrej Karpathy — the AI researcher who coined the time period “vibe coding” — just lately warned that if we aren’t cautious, brokers can simply generate slop. He identified that as we rely extra on AI, our major job shifts from writing code to reviewing it. It’s much like how we work with interns: we don’t let interns push code to manufacturing with out correct critiques, and we must always do precisely that with brokers. View diffs correctly, examine unit checks, and guarantee good code high quality.

    3. Automated Guardrails

    Since vibe coding encourages transferring quick, we are able to’t guarantee people will be capable to catch every thing. We must always automate safety checks for brokers to run beforehand. We are able to add pre-commit circumstances and CI/CD pipeline scanners that scan and block commits containing hardcoded secrets and techniques or harmful patterns detected. Instruments like GitGuardian or TruffleHog are good for routinely scanning for uncovered secrets and techniques earlier than code is merged. Current work on tool-augmented brokers and “LLM-in-the-loop” verification methods present that fashions behave way more reliably and safely when paired with deterministic checkers. The mannequin generates code, the instruments validate it, and any unsafe code adjustments get rejected routinely.

    Conclusion

    Coding brokers allow us to construct quicker than ever earlier than. They enhance accessibility, permitting folks of all programming backgrounds to construct something they envision. However this could not come on the expense of safety and security. By leveraging immediate engineering methods, reviewing code diffs totally, and offering clear guardrails, we are able to use AI brokers safely and construct higher functions.

    References

    1. https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
    2. https://daplab.cs.columbia.edu/general/2026/01/08/9-critical-failure-patterns-of-coding-agents.html
    3. https://vibefactory.ai/api-key-security-scanner
    4. https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/
    5. https://www.csoonline.com/article/4062720/ai-coding-assistants-amplify-deeper-cybersecurity-risks.html



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleArchitecting GPUaaS for Enterprise AI On-Prem
    ProfitlyAI
    • Website

    Related Posts

    Artificial Intelligence

    Architecting GPUaaS for Enterprise AI On-Prem

    February 21, 2026
    Artificial Intelligence

    Donkeys, Not Unicorns | Towards Data Science

    February 20, 2026
    Artificial Intelligence

    An End-to-End Guide to Beautifying Your Open-Source Repo with Agentic AI

    February 20, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    MIT affiliates win AI for Math grants to accelerate mathematical discovery | MIT News

    September 22, 2025

    MIT students’ works redefine human-AI collaboration | MIT News

    April 6, 2025

    A smarter way for large language models to think about hard problems | MIT News

    December 4, 2025

    NumPy API on a GPU?

    July 23, 2025

    Inside Meta’s 600-Person AI Layoff

    October 30, 2025
    Categories
    • AI Technology
    • AI Tools & Technologies
    • Artificial Intelligence
    • Latest AI Innovations
    • Latest News
    Most Popular

    Using design to interpret the past and envision the future | MIT News

    January 5, 2026

    When AIs bargain, a less advanced agent could cost you

    June 17, 2025

    An Unbiased Review of Snowflake’s Document AI

    April 16, 2025
    Our Picks

    The Reality of Vibe Coding: AI Agents and the Security Debt Crisis

    February 22, 2026

    Architecting GPUaaS for Enterprise AI On-Prem

    February 21, 2026

    How to make a cash flow forecasting app work for other systems

    February 20, 2026
    Categories
    • AI Technology
    • AI Tools & Technologies
    • Artificial Intelligence
    • Latest AI Innovations
    • Latest News
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    • About us
    • Contact us
    Copyright © 2025 ProfitlyAI All Rights Reserved.

    Type above and press Enter to search. Press Esc to cancel.