I’ve seen a lot of articles about how frequent AI use will slowly erode our ability to read, write, and think. Studies have found that when we don’t do the work, we don’t learn much. And if we keep that up long enough, we start to lose what we’ve already learned. That’s especially unsettling for writers, since reading, writing, and thinking are all we have. I’m not qualified to comment on the big questions – “Is it true?” “How bad is that?” “Where’s the nearest bunker?” – but it made me wonder:
If humans are still in charge here (at least for now), and if the problem is that AI makes everything easy, what if we went the other way? What if we used AI to make writing harder? Would more struggle equal more growth? Here are some of the experiments I’ve tried and how they turned out:
Adversarial Feedback
Imagine it’s time for your annual performance review, and when you sit down, your boss starts by reading you their latest poem and asks for your thoughts about it. That's basically the incentive structure an AI has when you ask for "honest" feedback. The AI might even be more sycophantic; it's not burdened by human frivolities like "artistic integrity." But you can position the AI on the other side of the desk to get less biased feedback.
Instead of asking it to help you write, tell the AI that you are a very busy magazine editor or movie studio executive from (insert your dream home for this project), and that its job is to help you read submissions. Then ask for its honest assessment of the material: what meets your usual standard, and what would need to improve before moving forward. The catch: you must tell the AI to only give feedback and never say whether it would recommend or pass. Partly because it’ll always say “recommend,” but mostly because you will really want it to.
In my experience, the resulting notes are noticeably better, but you’re still inviting criticism from the homogenized word-soup of an AI algorithm. The goal is to spark original human writing, so take it a step further:
Poison Pills
When you ask for feedback, tell the AI to generate a list of deliberately bad notes – suggestions that would water the story down or make it worse. Then tell it to mix those notes into the list of real ones so that it's impossible to tell them apart. Finally, instruct it to never reveal which are which (believe me, once you read the list, you will want to know).
Now you have to read every note and ask yourself: Do I actually agree with this? Will addressing this bring the writing closer to my vision for it? The only way to answer those questions is to get an even clearer idea of what that vision is.
Proof Battle
As a visually impaired guy with a busy spouse, AI proofreading has been a boon to our marriage. But I caught myself getting lax about spelling and grammar. So I started keeping score: How many proofs did the AI catch per 1,000 words?
In truth, the proofing isn't the real benefit here. Near the end of the writing process, it's easy to get so familiar with the words that you glaze over them. The spark of competition is just enough to keep me engaged with the text through the home stretch.
Bonus: The AI always overreaches a little and starts proposing more significant revisions, often "for clarity." So I also keep a side list of the proofs I reject and why. Getting a sense of where our tastes diverge is an easy way to shed light on my own.
Chaos Injection
If you feel like a section of your piece is not living up to its full potential and you can't figure out why, try this:
Feed that section into an AI and ask it to change every sentence without altering the overall meaning. Then go back through both versions, sentence by sentence. If there are any elements of the AI output that work better, great! You aren't allowed to use them. Come up with a new sentence that beats them both.
You can also do the same thing with larger sections by instructing the AI to make them "better". Then note what it did differently. Again, you’re not allowed to use any of it, but it can help you see your words in a new light.
This process often makes me more aware of what I liked about the original version, what drew me to that arrangement of words in the first place. And when it exposes a weakness, I'm already familiar enough with that stretch of the text to find a better path through it.
Idea Degeneration
Anyone who's tried AI for brainstorming knows you get a lot of lackluster duds. We usually scroll past them all on the hunt for that diamond in the rough. But consider stopping at each unfulfilling idea to ask why. Is it something you've seen before? If so, where? Is it awkward or inhuman? What about it doesn't ring true? Is it too broad? Too dark? Too sappy? Too much golf? What specifically makes you think that? The AI doesn't have to generate the perfect idea if it leads you to your own via negativa.
In Conclusion
Your results will vary based on the exact wording of your prompts, but don't worry too much about it. The point isn't to get the most out of the AI; it's to get the most out of yourself. I can’t say these tactics have revolutionized my writing voice, but like any exercise, they’ve certainly made me more aware of my choices on the page. In the end, the best and most exacting teacher is still the process of writing, revising, and sharing with other humans. But every once in a while, it's fun to play with the new toys.
Are you using AI in your writing? Is it making the work better, worse, easier, or harder? Let me know.
Humankind will prevail 😊