Opinion

You Can't Prompt Your Way Past Incompetence

By Ziva.sh • March 30, 2026 • 7 min read
TL;DR / Key Takeaways
  • AI tools amplify existing skills. They don’t replace them. Non-experts using AI produce output faster, but the quality ceiling is set by the user’s domain knowledge, not the model’s capability.

  • A CHI 2025 study found expert composers used AI music tools to validate concepts they already had, while novices couldn’t evaluate or direct the output at all.

  • The METR study found developers using AI perceived they were 24% faster while actually being 19% slower, with the gap coming from debugging and re-prompting.

I tried to make music for my game using Suno. I’m a programmer, not a musician. The AI generated something in seconds. It sounded… fine. Like elevator music for a dungeon crawler. Something was off, but I couldn’t tell you what. Was the key wrong? The tempo? The instrumentation? I didn’t have the vocabulary to describe the problem, which meant I didn’t have the vocabulary to fix it.

I spent an hour re-rolling prompts. “Make it more epic.” “Add tension.” “Less generic.” Each generation was different but none were better, because I was navigating a space I had no map for.

This experience keeps repeating across the industry. AI tools are getting better every month, but they all share the same bottleneck: the person using them.

A CHI 2025 study on AI music generation made this concrete. Expert composers used AI tools to validate concepts they already had in mind, working across genres with clear creative goals. Novices “had difficulty analyzing AI-generated compositions in detail and making judgments about how to revise them.” The prompt-based interface was supposed to make music creation accessible, but musical vocabulary is built from years of listening and studying. You can’t type “make it feel like a Miyazaki film” if you don’t understand what makes those soundtracks work.

Three stories that prove the point

The non-programmer who got hacked in 48 hours. Leo Acevedo built an entire SaaS product using Cursor AI with “zero hand-written code.” Two days after launch, users were bypassing his paywall, API keys were exposed in frontend code, and the database was filling with garbage. He shut it down. The AI had produced code that appeared to work, but Acevedo didn’t know enough about authentication, rate limiting, or input validation to notice what was missing. He couldn’t evaluate what the AI gave him.
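The paywall-bypass failure mode is worth sketching, because it looks fine to someone who can’t read the code. Here is a minimal hypothetical illustration (the names, values, and endpoint are assumptions, not Acevedo’s actual code): an access check that trusts a client-supplied flag is no check at all, because any user can forge the flag. The fix is to look the entitlement up server-side.

```python
# Hypothetical sketch of a paywall check. PAID_USERS stands in for a
# real database lookup; handle_premium_request for a real endpoint.

PAID_USERS = {"user_42"}

def handle_premium_request(user_id: str, client_claims: dict) -> int:
    # WRONG (the vibe-coded version): the client controls this field,
    # so anyone can send {"isPaid": true} and walk past the paywall.
    #   if client_claims.get("isPaid"):
    #       return 200

    # RIGHT: the server decides, based on its own records.
    if user_id in PAID_USERS:
        return 200  # OK
    return 402      # Payment Required

print(handle_premium_request("user_42", {}))              # paying user: 200
print(handle_premium_request("guest", {"isPaid": True}))  # forged claim: 402
```

Both versions return 200 for a paying user in a demo, which is exactly why the difference is invisible to someone who can’t evaluate the code.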

The game designer who caught the AI lying. A senior game designer pseudonymously called “Ricky” was told by management to use ChatGPT for design reference. The AI confidently described game systems, items, and enemy behaviors that didn’t exist. Ricky caught it instantly because he knew those games. “It was a huge waste of time, and really felt like an affront to my own expertise.” A junior designer using the same output would have built on fabricated foundations.

The beginner whose financial calculations were off by 20%. Renjit Philip, a self-described beginner programmer, spent 30 days vibe coding a financial tool. Customer acquisition cost showed $47 when the real number was $58.75. Export crashed with datasets over 50 rows. A night mode attempt produced white text on a white background. Philip couldn’t evaluate the AI’s math because he didn’t understand the calculations himself.
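A $47-vs-$58.75 gap is the classic shape of a CAC bug: dividing only ad spend by customers instead of all acquisition costs. As a hypothetical sketch (these numbers are invented to match the gap, not taken from Philip’s tool):

```python
# Hypothetical CAC calculation. All figures are invented for illustration.
ad_spend = 4_700.00   # paid ads only
tooling = 600.00      # analytics and email tools
content = 575.00      # freelance content

customers = 100

# The naive version: looks plausible, silently omits two cost buckets.
naive_cac = ad_spend / customers                        # 47.00

# The correct version: every acquisition cost goes in the numerator.
true_cac = (ad_spend + tooling + content) / customers   # 58.75

print(f"naive: ${naive_cac:.2f}, true: ${true_cac:.2f}")
```

Nothing crashes, nothing warns, and $47 is a perfectly believable number. Only someone who already knows what belongs in a CAC numerator would notice it is 20% low.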

The vibe coding paradox

Google Chrome engineering lead Addy Osmani called this “the paradox of vibe coding”: it works best for those who do not need it.

Skilled developers using AI refactor code into smaller modules, add edge case handling, and strengthen type definitions. These decisions come from years of experience. The AI accelerates execution, but the human provides direction.

Non-experts get trapped in a loop. The AI produces something. It looks wrong. They re-prompt. It looks different but equally wrong. They can’t articulate what’s broken because they lack the mental model to diagnose it. The METR study measured this directly: developers using AI tools perceived they were 24% faster while actually being 19% slower, with the gap coming from time spent debugging and re-prompting output they couldn’t fully evaluate.

Jonathan Blow, creator of Braid and The Witness, put it bluntly: “For those who have made games, even just simple ones, getting stuff on the screen is not impressive, it’s easy; it’s making the game good that is hard.”

This applies to every creative domain

The pattern is identical across music, art, and code:

| Domain | What AI does well | What requires human expertise |
| --- | --- | --- |
| Music | Generate tracks matching a mood keyword | Harmonic progression, frequency separation, arrangement |
| Visual art | Generate images matching a style prompt | Composition, color theory, anatomical accuracy, style coherence |
| Code | Generate functions matching a description | Architecture, security, error handling, performance under load |
| Game design | Generate ideas and system outlines | Fun factor, balance, player psychology, pacing |

In every case, the AI produces output that looks right to non-experts. A Reddit discussion about Suno captured this perfectly: one user asked for a feature that would auto-generate prompts for them, essentially asking the AI to also decide what music to make. A commenter responded they didn’t understand why someone would “generate AI music if you are so creatively unimaginative that you can’t even articulate the kind of music you want to hear.”

That sounds harsh, but it’s the core issue. If you can’t describe what you want, you can’t evaluate what you get. And if you can’t evaluate what you get, no amount of re-prompting will converge on good.

Where this leaves us

AI tools are genuinely useful. But the users who get the most value are experienced developers who know what they want and can evaluate what the AI produces. They use AI to debug errors they already understand, to generate boilerplate they’ve written a hundred times, to explore variations on designs they’ve already sketched.

That’s why we build dev tools for devs, not for vibe coders. The GDC data shows only 5% of game studios use AI for player-facing creative output. The other 95% use it the way it actually works: as an accelerator for people who already know what they’re doing.

AI amplifies whatever skill level you bring to it. If you’re a senior developer, AI makes you faster. If you’re a beginner, AI gives you a faster way to produce code you can’t maintain, art you can’t direct, and music you can’t evaluate.

Expertise isn’t knowing how to use a tool. It’s knowing what to do with the output.