Why AI Will Never Replace Content Writers
- Tarasekhar Padhy
- Oct 2
- 7 min read
(Disclaimer: Profanity.)
A few days ago, I was browsing Reddit (dumb move, I know). My eyes fell on a spectacularly dumb comment under a post that asked about the potential impact of LLMs on the careers of content writers, creators, and marketers.
The commenter suggested that within a couple of years, AI would come for such jobs and that this line of work has zero stability. A few other folks chimed in to reveal that they had changed fields because they wanted to do something AI won’t be able to do.
As a seasoned content writer and marketer who has been using AI tools professionally since they became accessible, that thread pissed me off like a motherfucker.
In this chapter, I will explain why the AI doom-thinkers have got pig shit for brains.
(Btw, no LLM can write an introduction like the one above. I dare you.)
A million constraints for one content piece
Every digital asset has to pass through a plethora of checks and criteria to get published for a general audience. Some of them include:
Purpose: Educational, entertaining, or commercial
Brand guidelines: Style, tone, and phrasing that determine the “vibe”
Logical flow: The rationale followed to explain a concept or suggest a product/service
Facts: Which truths will support the purpose best
Emotionality: How do you want the reader to feel
Perspective: Are you being a thought leader, an advisor, or a critic
Each of the above constraints can be described with one or multiple sentences. Regardless of the approach, you will only get suboptimal results.
If it’s one sentence, the AI tool may interpret it differently than what you might have intended. For instance, if you tell an LLM that the purpose is to “educate the audience about the challenges around vendor collaboration,” you are very likely to receive generic content.
The AI will take creative liberties and decide for itself “the degree to which the audience is to be educated” and “the challenges around vendor collaboration that the audience will give a damn about.”
On the other hand, if you use multiple sentences, say, a set of short bullets, you are setting yourself up for an ultra-robotic answer. The text predictor (yes, that’s what LLMs are) will provide an extremely bland output that will enrage you.
And that’s just the purpose of the piece.
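To make the ambiguity concrete, here is a rough sketch of how those constraints usually get flattened into a single prompt. The brief and the field values below are invented for illustration, not pulled from any real workflow of mine:

```python
# A hypothetical content brief flattened into one prompt (illustration only).
constraints = {
    "Purpose": "Educate the audience about the challenges around vendor collaboration",
    "Brand guidelines": "Conversational, first person, minimal jargon",
    "Logical flow": "Problem, why it persists, what to do about it",
    "Facts": "Use the two or three stats we verified internally",
    "Emotionality": "Reader should feel understood, not lectured",
    "Perspective": "Advisor, not thought leader",
}

prompt = "Write a 1,200-word article.\n" + "\n".join(
    f"- {name}: {value}" for name, value in constraints.items()
)

# Send `prompt` to whichever LLM you use. Every detail these short bullets
# leave unspecified gets filled with the statistically average choice,
# which is exactly where the generic, robotic output comes from.
print(prompt)
```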
Some AI fanatics might suggest that these tools’ output ought to be refined by a human writer before being published, which would accelerate the content production process. But it’s not that simple.
First, it takes a highly experienced content writer (like me) to effectively pick the sentences or paragraphs from AI’s answers that aren’t too shabby. Second, in many cases, an LLM’s output is merely a rough sketch, and the writer has to draft the piece from scratch anyway.
Either way, the time saved per content piece, especially if you want good returns, will be minimal. And if you have a seasoned writer on your team whom you are paying by the month, it’s much better to have them produce the content than polish AI slop.
So, that’s the first reason why AI will never replace content writers. There are simply too many constraints per piece that need to be uniquely defined. As these constraints will be described in natural language, their interpretation by machines (as well as humans) will be inconsistent.
Heterogeneous mixture of ingredients
Writers leverage a disproportionate mixture of empathy, logic, facts, opinions, speculation, and suggestion to produce a persuasive content piece. The mixture ebbs and flows from paragraph to paragraph, section to section.
Consider this very chapter, for example.
The introduction is a bit emotional with a touch of humor, while the section after that starts with facts followed by logical reasoning. Within that, I’ve remained empathetic to the reader by considering multiple “what ifs” to ensure I respect different views.
On top of everything, the title of this chapter might be interpreted as a speculation or an opinion!
To make things more complicated, there might be more ingredients on top of what I’ve mentioned in this section’s very first sentence.
You need to use each of them appropriately to ensure the resulting content piece performs the way you want. Whether it is validating your readers’ emotions or illustrating how a product works, only an experienced author can mix them properly.
There’s no way you can prompt LLMs or build a custom AI model to do it all. These tools simply mimic patterns, and that approach always backfires in creative projects. Look at AI-generated images! They are everywhere, and they still fail to make any kind of impact on the mind of the viewer.
No, the upcoming AI models won’t understand them either. They might pretend to understand them, like they already pretend to “get” your statements, but when it comes to absorbing the concepts and implementing them in practice, they will fail spectacularly. I’ve explained this further in the penultimate section of this chapter.
Out-of-the-box thinking, within constraints
One of the primary reasons why people think that content writers will be replaced by AI is the presence of dog-shit articles across the internet. Most articles on most topics are similar to one another because most marketing teams comprehensively copy others.
In fact, it’s a well-known practice in the B2B content marketing industry to create outlines that cover every aspect of a topic. Writers achieve that by reading articles with identical titles or on the same topic and assembling the unique sections into their “brand new” piece. Even I did that.
That practice is quite effective for a variety of mercantile reasons. Long-form articles can target a wide range of popular keywords, increasing visibility. They also produce multiple infographics, which can be distributed across social media. Finally, they retain readers for longer.
Unfortunately, when everyone caught up, the jig was up, and Google’s search results fetched nothing but keyword-stuffed generic crap that gave little value to the readers. Over time, search engines rolled out updates, like Google’s E-E-A-T, to curb this issue with some efficacy.
Honestly, I hated it as much as everyone else, but it worked. And as a professional content writer, my objective was to get my clients to the top half of the first page.
For the most part, those articles can be written effectively with AI because they provide surface-level insights on popular topics with several keywords sprinkled throughout. I’ve actually created a complete AI-powered content writing workflow that does just that.
But then, LLM-based search platforms like Perplexity, ChatGPT, and Google’s AI Mode hit the market. This allowed users to ask questions rather than query broad keywords. Consequently, most of the dog-shit articles got little to no traffic.
At the same time, it illuminated the content pieces that actually solved problems and answered complex questions. Finally, content writing regained some of its dignity.
Now more than ever, it’s pivotal that writers do more than use certain terms and phrases to capture their readers’ attention and earn their trust. They have to learn about the various ingredients I mentioned earlier and use them properly to achieve that goal.
Of course, they still have to respect the constraints of the piece while being creative because no one likes a plagiarized piece.
I’ve coined a term for this: directional creativity.
Kind of like how a Formula 1 team aims to design a faster car while adhering to the same set of regulations as their rivals.
There are boundaries you respect while using the tools and techniques at your disposal to develop the best possible racing machine (or content piece). AI sucks at that. The next section will explain why.
A few things about machine learning
“Machine learning” is a misnomer because machines never learn. They aren’t even capable of doing anything remotely related to learning.
Here’s what happens under the hood.
You take a dataset, then apply a statistical method to it to create a model. That model predicts. Let me explain this with a simple real estate example.
Assume that you have a large spreadsheet that contains property valuation data. Some columns can be location, city, number of bedrooms, square footage, and amenities. All of these affect the price.
Then, you use a statistical method, such as linear regression or decision trees, to create a predictive model. The statistical methods map a relationship between various parameters, such as location and square footage, to create a “formula” of sorts that predicts the pricing.
Now, you can simply enter the parameters, and the model will give you an estimate. Over time, to increase accuracy and reliability, you feed corrections back into the model, a process loosely called fine-tuning.
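If you want to see just how mechanical that is, here is a toy version of the property model. The numbers are made up, and I am using scikit-learn’s plain linear regression purely as an illustration:

```python
# Toy property-price model: nothing is "learned", a formula is fitted
# to whatever columns you hand it.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: bedrooms, square footage, amenity score (made-up data)
X = np.array([
    [2,  850, 3],
    [3, 1200, 5],
    [4, 1600, 4],
    [3, 1100, 2],
    [5, 2200, 8],
])
y = np.array([210_000, 340_000, 450_000, 295_000, 690_000])  # prices

model = LinearRegression().fit(X, y)

# "Prediction" is just plugging new numbers into the fitted formula.
print(model.predict([[3, 1400, 6]]))  # estimated price for a new listing
print(model.coef_, model.intercept_)  # the model's entire "knowledge": a few weights
```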
LLMs are just that, but for words.
They don’t understand or learn shit. A rich billionaire stole a plethora of text-based content from the internet and used a statistical method on top of it to develop a word predictor. Most, if not all, LLMs are auto-correct on steroids.
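Strip away the scale, and the mechanism looks roughly like this. It is a toy next-word counter, nowhere near the real architecture (actual LLMs run transformers over tokens), but the job description is the same:

```python
# Toy next-word predictor: count which word tends to follow which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return whatever most often followed this word in the training text.
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat", simply the most frequent continuation
```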
Now, let’s circle back to the content writing situation.
When you give the machine a bunch of real-world parameters, such as the purpose of the article and the preferences or expectations of the target audience, it doesn’t “understand” them in order to produce engaging drafts.
Rather, it predicts which blob of text will satisfy the user. Again, at best, it is a prediction, and at worst, a hallucination.
What most AI doom-thinkers fail to comprehend is that AI tools aren’t like traditional software.
Traditional software has hard-coded buttons that do what they are supposed to, 100% of the time. On the other hand, AI tools are a Pandora’s box. You press a button that promises a certain value, but you never know what you're gonna get.
Maybe it will be something partially useful, or maybe complete trash. It depends on how much feedback-based fine-tuning has been done. This is why LLMs get “better”: users constantly tell them when they fuck up, and then they just guess more effectively.
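Here is the difference in miniature. Both “buttons” below are invented for illustration; the point is the determinism, or the lack of it:

```python
# A hard-coded button versus a sampled one.
import random

def export_pdf_button():
    # Traditional software: same input, same output, every single time.
    return "report.pdf exported"

def generate_intro_button():
    # AI-style output: a draw from a probability distribution over options.
    options = ["a sharp intro", "a passable intro", "generic filler", "confident nonsense"]
    weights = [0.3, 0.4, 0.2, 0.1]  # made-up odds; fine-tuning mostly reshapes these
    return random.choices(options, weights=weights)[0]

print(export_pdf_button())      # always the same
print(generate_intro_button())  # you never know what you're gonna get
```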
Conclusion: No substitute for skills
AI is useful in many workflows, and content writing is one of them. This very book is proof of that.
At the same time, folks need to understand that to extract value from these workflows, you need experience running them and a thorough grasp of the principles, philosophies, and domain knowledge that govern them.
If you are a noob writer who thinks they can plan and execute campaigns straight out of ChatGPT, you are a moron. You need to go into the trenches to gain the necessary knowledge and develop the necessary skills.
Only then can you find ways to productively integrate AI into your process to make more money.
Until then, keep your head down.
—
Next Chapter: Content Writing With AI: Everything You Need to Know
Previous Chapter: Making Money With Gen AI: Can Anyone Do It
Index (with Prologue): Content Writing With AI: Principles, Strategies, and Prompts
