AI Is Not a Slop Machine. People Are Using It Like One.

AI is a force multiplier. And force multipliers do not care whether they are multiplying rigor or recklessness.

The dissonance around AI's efficacy is deafening right now.

Depending on who you ask, AI is either turning designers and engineers into unstoppable 10x operators, or it is mostly a bullshit narrative used to justify layoffs, flatter org charts, and cover for leadership failures that have nothing to do with actual productivity.

And to be honest with you, I understand why people are confused.

The discourse is full of people talking past each other. Some are talking about whether AI can materially improve what an individual can do. Others are talking about whether companies are overstating those gains. Others are reacting to seeing mountains of AI-generated garbage and concluding the whole thing is a joke.

These are not the same argument.

That distinction matters, because I think one thing has become very clear: AI is not necessarily a slop machine. It becomes one when people use it to avoid thinking.

That is the part a lot of critics get wrong.

When AI Fails, It Is Usually the User

When people use AI as a replacement for judgment, taste, verification, and accountability, of course the results are often garbage. In coding, that shows up as security issues, brittle code, hallucinated dependencies, low-inspection acceptance, and reliability problems. In design, it shows up as sameness, shallow thinking, generic outputs, bias, and a whole lot of work that looks impressive for five seconds and then collapses on contact with reality. Research is actually pretty consistent on this point: blind adoption and weak verification correlate with worse outcomes, especially outside the model's real capability frontier.

But that does not mean AI only produces garbage.

That conclusion is just as lazy as the worst AI use.

My Own Experience Says Otherwise

My own experience is the reason I can't take that claim seriously anymore. I have used AI to build a whole portfolio of polished apps and games that go far beyond my plain-vanilla coding ability. Not because I magically became a world-class engineer overnight. Not because I clicked "accept all" and prayed. And definitely not because I lowered my standards.

The opposite, actually.

AI let me go further than I could have gone on my own, but only because I stayed in the loop. I pushed, questioned, rewrote, tested, refined, art-directed, and polished. I used AI to expand my range, not to abdicate responsibility. That is a very different mode of use than "make the thing for me so I can think less or work less." And I think a lot of people pretending AI can only generate junk are reacting to the second mode while ignoring the first.

The Research Is More Nuanced Than the Takes

That distinction also lines up with the research better than the extreme takes do.

The strongest evidence does not say AI universally makes everything better. It also does not say it only creates slop. What it says is something more nuanced and, frankly, more useful: when people use AI inside its effective frontier and combine it with real verification and process discipline, they can get real gains in throughput and sometimes meaningful gains in quality. But when they overtrust it, under-check it, or let increased output outrun review and testing, quality and stability can degrade fast.

That is a much more believable picture of reality.

It also matches what many of us are seeing in practice. Some people are using AI to generate more garbage, faster. Others are using it to create work they simply could not have produced before, or could not have produced nearly as well, nearly as quickly, or nearly as ambitiously.

Those are both real.

The Binary Is the Problem

The problem is that the public conversation keeps trying to force them into a single binary. Either AI is magic or it is fake. Either it is transforming everything or it is all hype. Either the critics are luddites or the believers are grifters.

No. The reality is messier than that.

AI is a force multiplier. And force multipliers do not care whether they are multiplying rigor or recklessness.

If you already have taste, judgment, persistence, and standards, AI can be incredible. It can help you explore more directions, test more ideas, move faster through dead ends, and turn rough ambition into polished output. It can absolutely let one disciplined person punch above their historical weight. That is very real. I have lived it.

If you use it to skip thinking, skip checking, skip understanding, and skip ownership, it will happily help you produce polished-looking nonsense at industrial scale.

That is also real.

The Organizational Trap

And this is where I think a lot of organizations are getting themselves into trouble. The evidence suggests AI often increases speed and code volume, but verification and stability do not automatically keep up. In some cases, teams report feeling more productive while delivery quality or stability actually worsens. That is the trap. You can feel faster and still be building a worse system. You can ship more and still be creating more breakage. You can impress yourself with output and still be accumulating risk.

So no, I do not buy the claim that AI only results in crap.

I think that claim comes from seeing bad AI use and mistaking it for proof that the tool itself is useless.

But I also do not buy the opposite fantasy that AI automatically erases the tradeoff between speed and quality, or that every productivity gain at the individual level neatly translates into a rational argument for cutting teams.

That leap is intellectually sloppy.

Judgment Is the Bottleneck Now

The more honest takeaway is this: AI can dramatically expand what a capable, quality-conscious person is able to build. But it does not remove the need for craft. It does not remove the need for taste. It does not remove the need for testing, review, iteration, or accountability. If anything, it raises the importance of those things, because once output becomes cheap, judgment becomes the bottleneck.

That is the part I wish more people would say out loud.

AI did not help me make good work by letting me care less. It helped me make better work by letting me do more while still caring just as much. Maybe more.

Slop Is Not Inevitable

That is why the "AI equals slop" narrative feels so wrong to me.

Not because there is no slop. There is an unbelievable amount of slop.

But because slop is not the inevitable result of AI.

Slop is what happens when people confuse acceleration with exemption. When they think moving faster means they no longer have to think deeply, inspect carefully, or hold the line on quality.

Used that way, yes, AI will absolutely make things worse.

Used well, it can do something far more interesting.

It can help people make things they were not quite able to make before.

And that is exactly why this conversation matters.

Marco van Hylckama Vlieg

Builder shipping AI-native apps, games, and creative experiments.