Discussion about this post

Sarah Feely:

I'm going to repost here what I wrote on Changing Work:

I love this dialogue; thank you for spearheading it, Nick. I agree with your point of view here and appreciate the gradient system. It’s an important step in addressing the broader moral quandary surrounding AI-generated content.

Here’s what comes to mind for me…(As I’m writing this, I realize I should probably post a Substack of my own)

I think there are several “problems to be solved” here, or needs to be addressed, all of which are fundamentally moral quandaries. I’m referencing Jonathan Haidt's Moral Foundations Theory (MFT).

In his research, Haidt identified six universal, innate moral pillars. (Different groups and cultures weight them differently; see his excellent book, The Righteous Mind: Why Good People Are Divided by Politics and Religion.)

(Full transparency: the six descriptions below were copy-pasted directly from an AI summary :-))

Care/Harm:

This foundation relates to our concern for the well-being and suffering of others, and our inherent aversion to causing harm. It's linked to virtues like kindness, compassion, and nurturance.

Fairness/Cheating:

This foundation is rooted in the evolutionary process of reciprocal altruism, leading to our understanding of justice, rights, and autonomy.

Loyalty/Betrayal:

This foundation stems from our history as tribal creatures, emphasizing the importance of group loyalty, self-sacrifice, and vigilance against betrayal.

Authority/Subversion:

This foundation is shaped by our primate history of hierarchical social interactions, influencing our respect for legitimate authority, traditions, and obedience.

Sanctity/Degradation:

This foundation is linked to our psychological reactions to disgust and contamination, leading to religious notions of striving for a higher, more noble state.

Liberty/Oppression:

This foundation revolves around our feelings of reactance and resentment towards those who dominate or restrict our liberty.

I think the AI-generated content discussion arouses several of these moral foundations in each of us, some more than others:

Fairness - Writing quality/style: Is it (un)fair to attribute a style or quality of writing to someone for something they didn’t write? Is it inauthentic to present AI writing as your own? This touches on fairness at several levels - 1) is it fair to have AI write something at all, and 2) is it fair to produce something so good with so little effort?

Trust & Betrayal - Content accuracy and trust: Can we trust something written by AI? And can we trust a person who uses AI and claims the output as their own? This touches on fairness, but also on loyalty/betrayal: are we being lied to by someone we want to trust?

Liberty - General fear of AI taking over: This touches on the liberty foundation - will AI take over and dupe us all?

Authority: This gets in part at the speech-writing example of a politician who doesn't write his or her own speeches. The moral foundation of authority asks: is someone still an authority on a topic if they didn’t fully create the content? We want to KNOW to what extent we can attribute authority to them. AI mucks that up.

This has been a stream of consciousness for me, and if you’ve read this far, thanks for sticking with me! I’d love your thoughts!

Sarah Feely:

Nick, I LOVE this. You've named something in a concrete way that I've had swimming around in my brain as well. I love these 5 levels - they're simple and digestible.
