Authentic Authorship: Why we question the ethics of AI writing but not human ghostwriting
1. Full Transparency: The Axios Model
The Axios newsletter lands in my inbox every weekday morning. Through a US lens, it tells me about the international news of the day, and right up front it provides complete transparency about their AI-human collaboration.
A recent example: Smart Brevity™ count: 1,848 words ... 7 mins. Thanks to Erica Pandey for orchestrating. Edited by Lauren Floyd.
I personally love this approach. As someone experimenting with various models, I get a sense of which are the best tools for the job, and I love knowing which tools my counterparts are using to support this sort of copywriting.
2. Traditional Ghostwriting
A recent opinion piece we placed in a national newspaper for a large corporate client was entirely written by our team but published under our client's byline - with no acknowledgement of the actual human ghostwriter.
This opinion piece represented the 'old-fashioned', human-only approach to drafting that nobody questions. The media raised no questions when the piece was accepted under the byline of a person who had very little to do with its construction.
3. AI-Assisted Authorship
I spoke at a Vision for Wellington event recently: a ten-minute speech about social cohesion. The day before the event I realised I was missing an opportunity for my message to reach a wider audience, but without the time to draft an opinion piece from scratch, I used Anthropic’s Claude to recut my speech into a 600-word opinion piece for our national newspaper.
I provided direction on which key points to retain, which stories to cut, and how to frame the piece for the reduced word count. What would have taken me two to three hours of rework was completed in around 30 minutes in collaboration with AI, and the piece ran in The Post the following day.
So how do you meld your personal philosophy with AI writing?
Our workshop identified three philosophical considerations for professionals grappling with AI authorship:
1. The Training Question
How comfortable are you with how these AI tools were developed? While legal cases continue to roll out this week and the debates rage on locally, these tools exist and can amplify the work of organisations creating positive social impact. Here at Heft, we have helped NFPs with campaigns that would otherwise be beyond their budgets. We’ve helped Chief Executives execute comms plans that they otherwise wouldn’t have the resources to implement.
If you can reconcile the ethics of how these tools came into being with the opportunity for social good, then you can choose to use them responsibly. Or you can choose not to use them. It is ultimately up to you.
2. The Attribution Challenge
When does something constitute human versus AI authorship? And where is the line in human-to-human authorship in ghostwriting? As our examples show, the lines are blurry regardless of whether the ghostwriter is human or machine, and effectively impossible to draw when the writing is the product of human opinion and thinking. As we say here at Heft, ‘put the human into it’; that way, you get a human result (just faster).
Remember, it's humans driving AI. When the thought partnership is done skilfully, by people who are experts in their field, it should be impossible to tell where human thought ends and machine contribution begins.
3. Did you use AI?
Many writers I know are terrified of being asked whether their writing is ‘theirs’ or whether it is ‘AI’ generated.
This is the most controversial part: some people believe all initial drafting must be human-led, while others see human-AI collaboration as legitimate co-creation. Because there is no clear definition or agreement here, people take strong positions. Some argue they can claim total ownership; others feel they need to flag the use of AI in their process.
But the question that should be asked is: if you used AI, how did you use it?
What Matters
The level of transparency required should match the stakes involved. Social media copy or marketing materials may not require detailed attribution. However, significant thought leadership, policy positions, or opinion pieces need clear human ownership.
We trust human judgement and accountability far more than machine-generated positions. Ultimately, someone is accountable and responsible for the work.
Some Practical Principles
Rather than blanket rules, we need nuanced approaches:
Match transparency to impact: Higher-stakes content requires clearer human accountability. This work is generally heavily driven and influenced by human ideas and thinking; just as a client op-ed may be ghostwritten, the human spokesperson remains accountable for the opinions (not us as the PR firm!).
Acknowledge collaboration: When AI significantly contributes, consider transparent attribution, such as the Axios newsletter example.
Maintain human oversight: Ensure humans take responsibility for final positions and claims. You never want to be in the position of blaming poor work on AI.
This article was drafted from an eight-minute dictation by Emily using Otter.AI, copied into Anthropic’s Claude and reformatted into a draft article, then copyedited by Emily on a Saturday morning at home. Total time: 34 minutes (plus uncountable human hours of pondering these questions in the lead-up to last week’s ethics workshop).