The introduction of generative AI into content creation has done more than simply add another tool to our arsenal; it has fundamentally cleaved the industry into two distinct camps. Not by skill level, necessarily, but by philosophy. By approach. By what we're willing to accept as "good enough."
Let me be clear about what I mean when I use these terms.
The Tradesman approaches content creation as a transaction. The job is to fill the space, hit the word count, check the boxes. Get in, get it done, get out. There's no romance in it, no particular pride beyond completing the task. The tradesman sees a blank page as a problem to be solved as efficiently as possible, and if AI can solve it in thirty seconds instead of thirty minutes, the choice is obvious. The result? Serviceable. Functional. Forgettable.
The Craftsman approaches content creation as an act of creation itself. The blank page isn't a problem; it's a possibility. The craftsman obsesses over the right word, the compelling angle, the insight that hasn't been articulated quite that way before. They're not just filling space; they're building something that might actually matter to someone. The process itself has value. The struggle is part of the point.
The Tradesman's Bargain
When organizations hand their teams AI content tools and watch them churn out five blog posts before lunch, they think they're seeing productivity gains. What they're actually seeing is the industrialization of mediocrity. The tradesman has found their power tool, and they're thrilled with it.
These are the people who take ChatGPT's first draft and ship it. Who ask Claude for "10 blog post ideas about [topic]" and execute all ten without a single additional thought. Who have replaced research with prompts and editing with acceptance. They've made a bargain: less time, less effort, less thinking, but also less insight, less originality, less value.
And here's the uncomfortable truth: for a while, it works. The content gets published. The SEO boxes get checked. The content calendar stays green. Management sees the metrics and thinks the AI investment was brilliant.
But something else is happening beneath the surface. Trust is eroding. Not dramatically, not all at once, but steadily. Readers can feel it, even if they can't articulate it. The content is smooth but substanceless. Professional but pointless. It reads like it was written by someone who doesn't actually know anything, because increasingly, it was.
The tradesman's efficiency is real, but it's efficiency in service of nothing. They're building faster, but they're building slop.
The Craftsman's Dilemma
The craftsman, meanwhile, is having a more complicated relationship with AI.
Yes, they use it. Of course they use it. They're not Luddites. But they use it differently. They prompt, then they interrogate. They generate, then they verify. They ask AI to break their writer's block or offer a perspective they hadn't considered, and then they spend the next hour researching whether that perspective actually holds up.
Sometimes (often, actually) a piece takes them longer with AI in the workflow than it would have without it. Because they're not using AI to replace their thinking; they're using it to enhance that thinking. To challenge it. To argue with the model, essentially, until something genuinely worthwhile emerges.
The craftsman's relationship with AI is adversarial in the best sense. They treat it like a talented but unreliable intern: capable of interesting ideas, but requiring constant supervision. They fact-check every claim. They rewrite every bland phrase. They delete entire sections that feel too "AI-ish," that distinctive flavor of confident wrongness that large language models occasionally serve up.
The result? Content that took just as long to produce, or longer, but that actually says something. That reflects actual expertise. That might make someone think, or learn, or reconsider.
But here's their dilemma: they're competing with people who hit "generate" and move on. In a world that increasingly measures productivity by output rather than outcome, the craftsman looks inefficient. Slow. Expensive.
The Efficiency Question
So do craftsmen actually see efficiencies with AI?
Sometimes. In specific ways. AI is brilliant at breaking paralysis, at offering that initial sketch that gets you unstuck. It's useful for research starting points, for identifying angles you might not have considered, for summarizing dense material. These are real benefits, and the craftsman who learns to leverage them effectively can work smarter.
But the big efficiency gains everyone promised? Those mainly accrue to the tradesman, because those gains depend on accepting AI's output at face value. The craftsman's process (generate, interrogate, verify, rewrite, refine) is inherently time-intensive. The quality is higher, but the velocity isn't.
And the tradesman? Do they actually see value and ROI in their slop?
In the short term, absolutely. Content is cheap, publishing is easy, and metrics are malleable. You can point to post counts, to pages indexed, to "content velocity," and make a compelling case that you're winning. The C-suite sees the numbers and approves the next budget.
But the long-term ROI is built on sand. When every competitor is flooding the zone with AI-generated content, when Google's algorithms become even more desperate to surface genuine expertise over synthetic text, when audiences grow so fatigued with bland nothingness that they actively seek out depth again, what happens to the tradesman's content library then? What happens to the brand that built itself on efficient slop?
Is There a Middle Ground?
This is the question that keeps me up at night, because I genuinely don't know the answer.
Perhaps there's a middle way: the pragmatist who uses AI extensively but maintains standards. Who trains teams to prompt effectively but also to edit ruthlessly. Who accepts AI's efficiency for routine content but insists on human expertise for anything that matters.
But I'm skeptical. Because the pressure is always toward efficiency, toward scale, toward "more with less." The middle ground requires intention, discipline, and organizational commitment to quality even when cheaper options exist. How many companies can sustain that when Wall Street wants growth and competitors are publishing ten times the content?
Who Wins?
The pessimistic scenario is that tradesmen flood the content space so thoroughly that the signal-to-noise ratio becomes unbearable. We're already halfway there. Search results increasingly return confident-sounding nonsense. The web fills with plausible-looking articles that contain subtle hallucinations, unverified claims, and reasoning that collapses under even basic scrutiny.
Does it matter? Maybe not, if AI itself becomes the primary interface for information. If we're all just asking ChatGPT for answers instead of reading articles, perhaps content quality becomes irrelevant. The slop is just training data for the next model, and the cycle continues.
But the optimistic scenario (and I'll admit it requires some optimism) is that we're approaching a correction. That the AI bubble, like all bubbles, will pop. That the hype will give way to disillusionment when organizations realize their AI content investment produced quantity without quality, traffic without trust, posts without purpose.
When that happens, the craftsman wins. Because they'll have been building real value all along. They'll have the expertise, the credibility, the depth that audiences will crave once they're exhausted by the endless stream of synthetic mediocrity.
The Plateau
Here's what gives me hope for that optimistic scenario: AI has largely plateaued.
Yes, the models get incrementally better. Yes, they can handle longer contexts and more complex prompts. But we're not seeing the exponential improvements that defined 2022 through 2023. GPT-4 was remarkable; everything since has been refinement. The fundamental limitations (hallucinations, reasoning failures, lack of genuine understanding) remain.
And AGI? The true artificial general intelligence that was supposed to be just around the corner? The serious researchers have quietly revised their timelines. Twenty years away, maybe more. Perhaps never.
This plateau matters because it means AI isn't going to magically solve its current problems. The tradesmen are stuck with tools that generate plausible-sounding slop, and no amount of "prompt engineering" will turn slop into substance. The inherent limitations of the technology are becoming clear.
Meanwhile, the craftsman's skills (research, critical thinking, genuine expertise, the ability to synthesize and articulate original insights) don't plateau. They compound.
The Verdict
If I had to bet, here's what I think happens:
We're entering a period of maximum slop. The next two years will see unprecedented volumes of AI-generated content flooding every corner of the web. The tradesmen will have their moment. It will look like they're winning.
But underneath, trust will erode. Quality publications will differentiate themselves by loudly committing to human expertise. Audiences will develop better BS detectors. And organizations that built their content strategy on AI efficiency will wake up to discover they've spent two years damaging their credibility for the sake of publishing velocity.
When the correction comes (when the AI hype cycle completes its inevitable journey from "this changes everything" to "this is actually pretty limited") the craftsman will still be standing. Because they never stopped doing the work. They never mistook a tool for a replacement. They never accepted that good enough was actually good enough.
The tradesman will have filled the quota. The craftsman will have built something that lasts.
In the end, the question isn't whether we'll use AI. We will, of course. The question is whether we'll let it make us lazy, or whether we'll demand more from it and from ourselves. Whether we'll settle for slop, or insist on craft.
I know which side I'm on.