AI-created media needs nutritional labels
The Quiet Rebellion of transparency and ethics around AI use for content creation
Dispatches From the Quiet Rebellion
Notes from the field. Reflections for Rebels navigating change with clarity, courage, and consciousness. These essays go beyond tips or tactics. They track what I’m noticing in myself, in my coaching work, and in the cultural moment we’re all moving through. Want to read more Dispatches? You can find the full series here: coachwithnicholas.substack.com/t/dispatches
The Reckoning
I’ve been sitting with something lately.
It started as a quiet discomfort. Something that felt off, but wasn’t being named. Especially in my corner of the world, where people talk about values, integrity, and conscious leadership… but rarely about how they’re using AI behind the scenes.
Then it became impossible to ignore.
It showed up in AI-generated posts and comments on social media, even from friends. Then in the AI-written newsletters filling up my inbox. Then in an email from a prospective business partner that was clearly not written in their voice.
Eventually, I found myself asking ChatGPT how to respond to a client, and then again how to respond to a friend. In both cases, I used AI as a sounding board. I didn’t use what it suggested… but I’d be lying if I said I didn’t consider it.
What’s been missing from the conversation about AI use is the honest nuance of the messy middle.
That was a line for me.
AI is among us Rebels, and it's not bringing out the best in us, even the most ethical and disciplined among us. I predict that the next "porn addiction" or "social media addiction" will be LLM addiction.
I believe we need to cultivate more conscious ways of using these tools while we still can, and I believe that starts with more transparency.
I get it. It’s easy to blindly use these tools without thinking about the longer-term repercussions, not only for our own creative and critical thinking skills, but also for our brand, the trust we’ve built, and the impact on our audience and clients.
We can ignore the impacts. Or shake fists at the AI overlords. Complain about em dashes. Let someone else deal with it.
Meanwhile, we keep using AI to reply to our posts, and to the people we care about.
But we all have the ability to make our own choices, and to be clear with others about what those choices are. As a Conscious Leadership and Human BE-ing coach, as someone who builds trust for a living and creates for a purpose, I care not just about what I create, but about how.
I care about what happens in the space between the idea and the final draft. I care about what tools I use, how they shape the outcome, and how they shape me. I believe people who trust me care too.
The Truth
So here’s my truth:
I use AI. Not all the time. Not for everything. But enough that I’ve had to ask myself: Where’s the line between support and authorship? Where does collaboration end and convenience take over?
That led me to this experiment: What I’ve started calling a kind of "nutritional label" for AI-generated content.
Not to perform transparency. Not to preempt criticism. But to stay aligned with what matters most to me: honesty, clarity, and trust.
Because naming how something is made is a way of respecting the person receiving it. And if we say we care about building trust, then transparency isn’t just a value, it’s a form of consent.
Letting people know how something was created gives them the chance to opt in more consciously, to feel more grounded in the exchange. Especially in a time when so much of what we encounter online feels anonymous, flattened, or automated.
That respect matters to me. And I want it to matter here too.

You can already see labels like this in a few areas.
LinkedIn, for example, has partnered with the Coalition for Content Provenance and Authenticity (C2PA) to add visible labels to AI-generated images. These tags appear in the top corner of an image and, when clicked, reveal the origin and generation process. It's subtle, but it’s a step toward informed consent.
Elsewhere, companies like Omnissa and initiatives like AI Nutrition Facts are starting to treat AI the way we treat food: with transparency. These labels detail what kind of model was used, whether customer data was involved, and how private or public that interaction is. Even model cards (used more in technical circles) aim to clarify how models were trained and where their blind spots might be.
But something is missing, and that's labels on the other end: not just the telltale signs of AI use that we all love to hate, but a framework we can adopt to signal to our readers how we've used AI, giving them the option of consent and letting them choose how they want to engage their attention... or not.
All of this points to a quiet shift in expectations: if we want trust, we need to show our work.
That's why I'm proposing AI Content Nutrition Labels, or:
The 5 Levels of AI Content Creation
Level 1 – Fully Human-Crafted
“Written start to finish by a human (that’s me).”
From idea to final edit, this was all me. Maybe I ran spellcheck, but no AI tools were used.
Level 2 – Human First, Light AI Assist
“Human-crafted with light AI support (spellcheck, structure, or polish).”
I wrote this from scratch. AI may have helped reorganize a paragraph, suggest better flow, or trim a few lines. Kind of like a smart, non-opinionated editor.
Level 3 – Human-AI Collaboration
“Assembled with AI support. Human voice, values, and editing throughout.”
This is where I give AI a prompt, concepts, sources, and frameworks and let it suggest ideas, sentences, or alternative framings. Then I edit, rewrite, and guide the piece into alignment with my voice and purpose.
Level 4 – AI First, Human Final Pass
"AI-generated draft. Human reviewed, edited, and made it make sense."
AI did most of the heavy lifting on the first draft. I then stepped in to clean up the structure, fix the tone, and make sure it wasn’t just a content blob. These are usually faster, less emotionally complex pieces.
Level 5 – AI-Generated, Light Human Touch
“Mostly AI-generated, lightly edited for tone and clarity.”
I used AI to draft, didn’t change much, and hit publish. Rarely used. Mostly for summaries or utility content.
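For anyone who publishes through a script or template and wants to append one of these labels automatically, here is a minimal sketch of the framework as a simple data structure. The level names and taglines come straight from the list above; the function name and footer format are just illustrative, not part of the framework itself.

```python
# A minimal sketch: the five levels as data, plus a helper that produces a
# one-line footer like the one at the end of this post. Names and taglines
# are from the framework above; everything else is illustrative.

AI_CONTENT_LEVELS = {
    1: ("Fully Human-Crafted",
        "Written start to finish by a human (that's me)."),
    2: ("Human First, Light AI Assist",
        "Human-crafted with light AI support (spellcheck, structure, or polish)."),
    3: ("Human-AI Collaboration",
        "Assembled with AI support. Human voice, values, and editing throughout."),
    4: ("AI First, Human Final Pass",
        "AI-generated draft. Human reviewed, edited, and made it make sense."),
    5: ("AI-Generated, Light Human Touch",
        "Mostly AI-generated, lightly edited for tone and clarity."),
}

def label_footer(level: int) -> str:
    """Return a one-line AI Content Nutrition Label for a given level."""
    name, tagline = AI_CONTENT_LEVELS[level]
    return f"AI Content Nutrition Label for this post: {name}. {tagline}"

print(label_footer(3))
# AI Content Nutrition Label for this post: Human-AI Collaboration.
# Assembled with AI support. Human voice, values, and editing throughout.
```

The point isn't the code; it's that the label is small and structured enough to travel with whatever you publish.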
This is a living framework. I'd love your feedback on it.
I’m not saying everyone needs to do this.
I’m not even sure I’ll do it every time.
But I wanted to name it, in case it helps others find their own clarity.
Because this isn’t just a conversation about AI.
It’s about trust. It’s about ownership. It’s about creativity.
It’s about what we’re reclaiming through our Quiet Rebellion.
It’s about staying awake to the tools we use and the subtle ways they shape the things we put into the world.
If you're a fellow creator, coach, or consumer of my content:
Would you find a label like this helpful or unnecessary?
Would you try using something similar?
What do you notice in yourself when you use tools like ChatGPT?
Where should this label go? At the top of a piece, at the bottom, or in the comments?
Let me know. Leave a comment. Let’s talk about this.
If this Dispatch landed with you, forward it to someone who’s been wrestling with the same questions and tell them why it’s important to you.
AI Content Nutrition Label for this post: Human-AI Collaboration. Assembled with AI support. Human voice, values, and editing throughout.
Be Transparent.
Be Rebellious.
In Solidarity ✊
Nicholas Whitaker
Human BE-ing and Conscious Leadership Coach
nicholaswhitaker.com | Co-founder @ Changing Work
I'm going to repost here what I wrote on Changing Work:
I love this dialogue, thank you for spearheading, Nick. I agree with your point of view here and appreciate the gradient system. It’s an important step in addressing the broader topic of the moral quandary surrounding AI-generated content.
Here’s what comes to mind for me…(As I’m writing this, I realize I should probably post a Substack of my own)
I think there are several “problems to be solved here,” or needs to be addressed, that are fundamentally moral quandaries. I’m referencing Jonathan Haidt's Moral Foundations Theory (MFT).
In his research, Haidt identified six universal and innate moral pillars. (Some groups and cultures value certain foundations more than others; see his excellent book, The Righteous Mind: Why Good People Are Divided by Politics and Religion.)
(Full transparency: the six descriptions below were copy-pasted directly from an AI summary :-))
Care/Harm:
This foundation relates to our concern for the well-being and suffering of others, and our inherent aversion to causing harm. It's linked to virtues like kindness, compassion, and nurturance.
Fairness/Cheating:
This foundation is rooted in the evolutionary process of reciprocal altruism, leading to our understanding of justice, rights, and autonomy.
Loyalty/Betrayal:
This foundation stems from our history as tribal creatures, emphasizing the importance of group loyalty, self-sacrifice, and vigilance against betrayal.
Authority/Subversion:
This foundation is shaped by our primate history of hierarchical social interactions, influencing our respect for legitimate authority, traditions, and obedience.
Sanctity/Degradation:
This foundation is linked to our psychological reactions to disgust and contamination, leading to religious notions of striving for a higher, more noble state.
Liberty/Oppression:
This foundation revolves around our feelings of reactance and resentment towards those who dominate or restrict our liberty.
I think the AI-generated content discussion arouses several of these moral foundations in each of us, some more than others:
Fairness - Writing quality/style: Is it (un)fair to attribute a style or quality of writing to someone for something they didn’t write? Is it inauthentic to present AI writing as your own? This speaks to fairness on several levels: 1) is it fair to have AI write something in general, and 2) is it fair that the result is so good with so little effort?
Trust & Betrayal - Content accuracy and trust: Can we trust something written by AI? And can we trust a person who uses AI and claims as their own? This harkens to fairness, but also betrayal/loyalty. Are we being lied to by someone we want to trust.
Liberty - General fear of AI taking over: This touches on the liberty foundation. Will this take over and dupe us all?
Authority: This relates in part to the speech-writing example of a politician who doesn't write his or her own speeches. The moral foundation of authority asks: is someone still an authority on a topic if they didn’t fully create the content? We want to KNOW to what extent we can attribute authority to them. AI mucks that up.
This has been a stream of consciousness for me, and if you’ve read this far, thanks for sticking with me! I’d love your thoughts!
Nick, I LOVE this. You've named something in a concrete way that I've had swimming around my brain as well. I love these 5 levels - they're simple and digestible.