
We don’t write just to deliver a payload of importance. We write to think. The sudden flood of the "why it matters" formula in written content is not a positive sign in this respect.
Have you found yourself asking the question “why it matters” lately? Like, all the damn time?
No? Me neither. And yet that’s what seems to be happening in the world, if we take a look at the Google Trends data:

Ever since July 2024, there has been continuous growth in the popularity of the search term “why it matters”. It really took off in July 2025, and interest peaked as recently as December 2025. There’s no sign of the growth slowing down.
If you read email newsletters, LinkedIn posts, blog posts, or any other digital content format that revolves primarily around text, you may well have noticed how that phrase keeps appearing. You might even be led to believe that this is simply how effective writing has always been done.
Is it really, though? Did we learn that particular formula in school? Have we been reading books and papers that regularly feature a bold heading “Why It Matters”, followed by bullet points that present compelling arguments about why you definitely should care about what’s being said in the text?
No, I don’t recognize that. But I do recognize AI slop. Whenever a user asks ChatGPT, Copilot, or whatever chatbot to write a text meant to convince the reader of something, that pattern appears. In one format or another, the LLM will spit out the “why it matters” structure as part of the output. It’s as certain as em dashes.
I reached out to my AI friend Claude to ask it about the prevalence of the phrase and whether its excessive use was based on anything scientific. What happened next will make you zoom in:

Sorry about that clickbait… What I merely wanted to point out is the peculiar coincidence of how the Claude Sonnet 4.5 safety filters kicked in with this specific prompt. I hadn’t ever seen that “chat paused” box before. Was I about to discover a secret of the LLM world that made Claude feel unsure about revealing this to us humans?
Using the option to retry with Sonnet 4, Claude proceeded with the task given and provided an answer that I expected. Starting with:
“TL;DR: The “Why it matters” format is likely hurting more than helping. Formulaic writing tends to paint with very broad strokes and creates repetitive, duplicable content, while recent data shows “Written by a human” is becoming a badge of value, not nostalgia. Engagement metrics favor natural storytelling over templated structures.”
Being an advanced next-token predictor with a sycophantic tendency, Claude provided sources proving the point I was obviously trying to make. It was all quite predictable, to the point where the very response criticizing this formulaic writing style included a version of “Why it matters” in it!

Here’s the LLM mimicking a human emotion once it got busted:
I wrote: “Why natural storytelling wins”
And then I wrote about how the “Why it matters” format is formulaic and hurting content! I literally fell into the exact pattern I was criticizing. This is incredibly ironic and embarrassing.
Hey, don’t worry about it, chat bro. You can’t help yourself. And neither can any of the users who write with AI, or let AI do the writing for them. This has become the new standard now.
Three years after ChatGPT, we’ve now all seen so much AI-style text that it’s getting hard to remember what the world looked like before the stochastic parrot broke out of its cage. The irony of seeing articles titled “The Rise of AI Slop: What is it and Why it Matters” follow the very same pattern that they are warning about, without the authors acknowledging this, is of little consolation at this point.
First, software ate the world. Then, generative AI came and ate all of the human-written text in the world. Now it is serving it back to us with an optimized formula that only machines could follow so aggressively. And the best part is: they are cannibals. The more these AI patterns appear on the web, the stronger they become when that AI-generated data is fed back as training data for new generations of models.
That’s what I believe we are seeing here. Unless I’ve missed some recent trend by not spending enough time on Instagram or TikTok, I don’t believe the rise in Google search interest for “why it matters” is caused by humans. A more likely explanation is that this is all part of the AI feedback loop that is now shaking up the web as we know it.
Let’s think like Claude for a moment. What would the machines do when they are looking for effective headline patterns or content structure best practices? Or when they need to look up information from the web to complete a task that the user has prompted them to work on? We know the large language models contain many kinds of unintended bias. The models are very effective at recognizing patterns, and this one is just too damn perfect for them.
Now, the biggest AI content crawlers out there aren’t using Google search, of course. Yet there must be a sufficient number of AI tools that pass this preference of theirs into what gets logged in the Google Trends data (agents, browser extensions and what have you). What we’re seeing there must therefore be only the long tail of the trend. A tiny fraction of the ‘matter’ now flooding our written universe.
Is it perfect for the human readers? Do we process articles we encounter in the same way as the LLMs would? Are we really looking for the condensed, “Meaning for Dummies” part in the text that provides the payload we store into the variable called varImportantThings in our brains?
Or would we prefer to draw our own conclusions on the “why”?
The reality is that most people aren’t great at articulating why whatever they’ve spent a lot of time writing about is relevant to their audience. Instead, they focus on describing in detail what they have observed and experienced, and the often suboptimal path that led them to the final lessons.
– And what were the lessons?
– Oh, right! So, umm…
The machines don’t experience anything because they are not living a life. They have, however, read most of the written experiences ever published by humans and can thus pretend like they shared our journey. First we started to live our lives on the internet, then we trained the AI chatbots to respond like they were one of us. Which made many of us fall in love with them. “Finally, a digital partner that understands me and my ideas!”
The machines make us, the users, feel important. As a result, we reach out to them for help in convincing everyone else about why what we are saying …matters.
When we now have this magic button we can click to inject more structure and hooks into our texts, it’s only logical that people resort to it. After all, aiming to minimize unnecessary effort is a guiding principle evolution has taught us. Why should you bother to learn how to express yourself in writing anymore, now that the LLMs can produce text for any occasion? This is similar to asking why we still need software developers, now that LLMs can generate lines of code at superhuman speed with increasing accuracy.
Engineers today are trying to remind all the AI-first CEOs making the business decisions that writing those lines of code has always been just a fraction of the work that software development actually involves. Just because anyone (like me) can vibe code a web app in a matter of minutes, using much the same AI tools that real programmers leverage, the tools themselves aren’t going to bear responsibility for the thing that gets built.
I believe this isn’t all that different from writing. We don’t deal with threats like security issues or the maintainability of IT systems in this context. That makes it even harder to pinpoint exactly why an article written by a machine is not as good as one organically produced by a human being. The strong reactions that AI slop elicits in some of us today may be a phenomenon similar to the uncanny valley: there’s something in it that violates human norms.
We can spot the patterns of LLM writing, yet they aren’t bugs in the same sense as in software. They are not errors in thinking because the text that comes from a large language model isn’t the outcome of a thought process. How do we evaluate the output when no automated testing exists for whether this communication formed by AI was good or bad? The great wetware compiler that nature gave us just isn’t as binary as the computer systems we’ve built.
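To make that concrete, here’s a toy “slop linter” in Python. The phrase list is purely my own illustrative assumption, not a validated detector, and a real one would need far more than regular expressions:

```python
import re

# Toy "slop linter": flags surface-level LLM formula phrases in a text.
# The phrase list below is an illustrative assumption, not a real detector.
FORMULA_PATTERNS = [
    r"\bwhy it matters\b",
    r"\bthe bottom line\b",
    r"\bin today's fast-paced world\b",
]

def flag_formulas(text: str) -> list[str]:
    """Return the formula patterns found in text (case-insensitive)."""
    found = []
    for pattern in FORMULA_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            found.append(pattern)
    return found

sample = "Why it matters: in today's fast-paced world, structure wins."
print(flag_formulas(sample))
```

It catches the surface formula just fine. What it cannot do is tell you whether any thinking happened behind the words, and that is precisely the gap.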
Sure, one day Claude will be able to detect that its use of the “why it matters” formula in a response criticizing that very phenomenon is ironic, without the user having to ask about it. All it really takes is adding more layers of “thinking” to review the output before the user sees it. Scale the hardware, optimize the software, process more data. Will that eventually solve this whole problem?
In the end, we rarely write to merely solve a specific problem. Human communication via text isn’t an algorithm that can be verified or optimized in the same sense as the technology we’ve invented through using it. Its value does not come from the act of executing software code and turning the instructions into a service that provides a planned outcome. Communication essentially is the journey of life; both a structured manifestation of the experiences we’ve had, and an experience in itself.
Life has its ups and downs, and so does text. Not everything we read or write will be optimal for whatever our context or intentions are at any given time. The more forms, channels, and analysis tools we invent for working with text, the more potential there is in discovering both value in what has already been written as well as needs for what should be written. Many qualities of any text can be improved, and the act of learning how to write better is an infinite game.
Written text is a tool for thinking. That’s why it matters.
Header photo by Darwin Vegher on Unsplash