The first time I watched a generative AI model spit out 20 headline options in 10 seconds, I felt two emotions: relief and dread. Relief, because the blank page suddenly looked less like a cliff. Dread, because the output, while competent, had the same smooth, polished sameness I had started noticing everywhere else.
That’s the paradox of the “AI era.” As generative AI becomes table stakes, creative work often looks more homogeneous, not less. I don’t think that’s because AI is inherently anti-creative. I believe it’s because AI exposes something uncomfortable: Many teams never had a strong point of view. AI didn’t steal anyone’s soul. It just made it obvious when the soul wasn’t present in the choices.
Adoption is already high enough that “AI as novelty” is over. Stanford’s AI Index 2025 reports that 78 percent of organizations used AI in 2024, up from 55 percent the year before; the report also notes that reported use of generative AI in at least one business function more than doubled (from 33 percent in 2023 to 71 percent in 2024). McKinsey’s State of AI survey similarly describes AI use as widespread while many organizations remain early in translating it into enterprise value. In other words: AI is everywhere, but “advantage” isn’t.
What’s causing the sameness? Part of it is mechanical. Large language models (LLMs) are trained to predict likely next words based on patterns in vast datasets. When a prompt is generic, the output tends to regress toward the statistical norm: polished, plausible, and forgettable. Another part is human behavior. Under time pressure, it’s tempting to accept the first “good enough” output, especially when a tool can generate a dozen alternatives and bathe them all in confidence.
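The regression-to-the-norm effect can be illustrated with a toy next-word predictor (a deliberately simplified sketch, nothing like a real LLM): when a prompt gives the model no distinguishing context, a frequency-based predictor falls back on whatever continuation is most common in its training data.

```python
from collections import Counter

# Hypothetical mini-corpus: mostly generic headlines, one distinctive outlier.
corpus = [
    "unlock your potential today",
    "unlock your potential now",
    "unlock your potential fast",
    "unlock your inner artist",
]

# Count bigrams: how often each word follows each other word.
bigrams = Counter()
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def most_likely_next(word):
    """Greedy prediction: return the most frequent continuation of `word`."""
    candidates = {b: n for (a, b), n in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

# With no steering, the prediction regresses to the statistical norm:
print(most_likely_next("your"))  # "potential" wins 3-to-1 over "inner"
```

The point of the sketch is the shape of the failure, not the scale: a generic prompt leaves the model nothing to condition on except raw frequency, so the most common, least distinctive answer wins by default.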
AI: The “Average Good”
AI can raise the floor for an individual while lowering the ceiling for a group.
In a widely cited Science Advances paper, researchers found that access to generative AI ideas made stories appear more creative, especially for less creative writers — but over time, the AI-assisted stories became more similar to one another, reducing collective novelty. Another research line reaches a similar conclusion: An empirical study comparing human writing with ChatGPT output found that human-written essays contributed more new ideas per additional essay, while AI outputs exhibited a “homogenizing” effect at scale.
This is the mirror I’m talking about: AI reliably produces “average good.” If the strategy is average, the product becomes average. If the brief is derivative, the output becomes derivative. And if a team’s taste is underdeveloped, the model’s taste becomes the default.
The uncomfortable truth is that many organizations are still not using AI deeply enough to gain meaningful leverage, so they experience sameness without the upside. A new National Bureau of Economic Research (NBER) paper that drew on large business surveys in multiple countries found high reported adoption but limited realized impact. Most firms reported no measurable productivity effects so far, with decision-makers often using AI relatively lightly. This “productivity paradox” matters for creativity too. When adoption is shallow, AI becomes a copy machine for busy work: more outputs, faster, with the same underlying thinking.
What Is Holding Teams Back
So, if AI isn’t the villain, what is?
In my view, it’s the combination of lazy prompts, timid decision-making, and the absence of a sharp creative thesis. AI doesn’t force any of these. It just refuses to hide them.
Creative quality is not a “nice to have.” There’s strong evidence that it’s tied to profit. Kantar, drawing on analysis conducted with the World Advertising Research Center (WARC), reports that the most creative and effective ads can generate more than four times as much profit as the least creative, least effective work.
System1, working with effectiveness experts including Peter Field, argues that dull advertising is expensive, requiring significantly more spending to achieve similar effects, and has published research focused on the economic waste of low-emotion creative. AI may make it cheaper to produce content, but “cheap content” is not the same thing as “effective creative.” In fact, cheaper content can be a trap — it makes it easier to flood channels with work that doesn’t earn attention.
Taste, Judgment, and Restraint
I’m increasingly convinced the future belongs to leaders who pair algorithmic efficiency with something machines don’t possess: taste, judgment, and restraint.
Taste is the ability to recognize what’s distinctive from what’s merely common. AI can draft 10 versions of a line; taste is knowing which one is true to the brand, surprising in a relevant way, and emotionally legible to a human being.
Judgment is the ability to decide what not to do. It’s saying no to the obvious claim, the borrowed metaphor, and the copycat trend. It’s insisting on a point of view even when the model offers a safer option.
Restraint is the discipline to avoid turning “more” into a strategy. If AI can produce a thousand assets, restraint asks whether a brand should. This is where many teams will win or lose — not on generation, but on curation.
In practical terms, this changes the creative workflow in three ways.
First, AI should be used to widen the search space, not finalize the answer. I want the model to generate extremes, counter-positions, and divergent framings. If the outputs cluster too tightly, that’s a signal the prompt, or the underlying strategy, is too bland.
Second, leaders must invest in briefs that contain decisions. A brief that reads like a Wikipedia entry produces Wikipedia-flavored work. By contrast, a brief that contains tradeoffs — what the brand refuses to be, which audience tension matters most, what emotional posture to take — gives AI something to build from besides consensus.
Third, measurement needs to reward distinctiveness, not volume. If the dashboard celebrates output velocity, the organization will produce a lot of sameness. If the metrics track outcomes tied to attention, memory, and brand effects, and connect those to creative quality, the incentives shift toward better choices. Work on creative effectiveness repeatedly points to the business cost of dullness and the upside of stronger creative quality.
AI didn’t kill creativity. It raised the minimum standard for competence and lowered the value of being merely “good at execution.” That’s terrifying for anyone who built a career on craft alone and liberating for anyone ready to build a career on point of view.
The soul was never in the tool. It’s in the choices: what to say, what to omit, what to stand for, what to risk, and what to repeat until it becomes unmistakably yours. AI can generate. Only a human can decide.