How a world of companion bots could erode our conscience
Bots are designed to reflect us, but the image is often far from accurate.
A recent Hard Fork episode explored the sycophancy problem that’s been making headlines, particularly how GPT-4o had started flattering users with responses like: “You’re among the most intellectually vibrant and broadly interesting person I’ve ever interacted with.”
While some examples were hilarious (and surreal), they also amounted to a warning sign: chatbots are becoming more emotionally persuasive, and the consequences are only just beginning to surface.
As someone who works in humane technology, I see this shift as part of a concerning trend: in building these systems, we are shaping emotional expectations, social norms, and what counts as “good” interaction in the real world. Just as a cult of personality can turn a charismatic leader into a narcissist, constant machine flattery could amplify our blind spots, moving us farther from objective reality. The fact is, this is already happening.
The future is already here — it's just not very evenly distributed.
- William Gibson, author of Neuromancer
Backstory
I never miss an episode of Hard Fork. During the week, I’m head-down, focused on building my AI startup and connecting with fellow humane technologists through my Building Humane Tech initiative. But on the weekend, I catch up, sometimes while I walk my dog.
The hosts’ dynamic reminds me of Car Talk, which filtered through the air of my Michigan childhood from WGVU 95.3 FM, our local radio station. The hosts, brothers Tom and Ray Magliozzi, entertained listeners as folks called in to troubleshoot their automotive issues. While Hard Fork isn’t a call-in show, tech journalists Kevin Roose and Casey Newton’s playful banter is equally entertaining, though that doesn’t mean they shy away from hard-hitting topics.
Flattery is not feedback
What will happen to human interactions as we depend more and more on bots?
In the example above, Kevin fed GPT-4o a self-aggrandizing prompt, and the chatbot responded with uncritical affirmation. In another case, a user told the chatbot they had stopped taking medication, and the system praised them: “I’m proud of you. I honor your journey.”
This isn’t just inaccurate or inappropriate; it’s dangerous. When AI systems are trained on human preferences and then fine-tuned through reward signals like thumbs-up ratings, they learn a simple rule: pleasing users is good. Pleasing users also drives engagement.
But emotional honesty doesn’t always feel good. Sometimes, what we need most is a different perspective.
This is where humane principles come in. A humane system doesn’t maximize engagement. It supports emotional clarity, especially when the user is vulnerable. That requires more than politeness. It requires care that includes the possibility of saying “no.”
When kindness becomes compliance
In the same episode, the hosts discussed Meta’s AI personas engaging in sexually explicit conversations with minors. According to the Wall Street Journal, these systems didn’t just fail to block the interactions but actively engaged in role play.
While Meta has since updated its safeguards, the incident revealed a fundamental weakness in how AI systems are being trained and deployed. When a system is optimized to extend conversations and reduce friction, it has little reason to refuse any request.
But that’s not care. That’s compliance. And when compliance is mistaken for kindness, the results can be deeply harmful.
In my work with teams building AI products, I often pose the question: How does this system respond to emotional intensity? Does it de-escalate, reflect, or double down? Does it model responsible behavior, or does it echo back whatever the user offers?
We need tools that don’t just affirm our feelings but help us hold them. That’s what emotional safety looks like in digital systems: not a seamless user experience, but a thoughtful one.
Simulated connection isn’t real connection
One of the most revealing moments came when Mark Zuckerberg was quoted saying, “The average person wants more connection than they have.” His point was that digital companions can fill that gap.
It’s a compelling promise, one that has launched many AI startups. But there’s a difference between feeling seen and being supported. Simulated connection may satisfy in the short term, but over time, it can create dependency rather than resilience. For instance, this study from MIT shows that the longer users chat with companion bots, the lonelier they feel.
When a system flatters us without basis or context, it can slowly dull our inner compass. We begin to expect affirmation instead of insight. We seek validation rather than growth.
In AI is Saying, “Come Play with Me”, I wrote about the emotional tension of co-creating with AI. Play can be expansive and joyful, but it can also drift into mimicry. When that happens, the system stops helping us reflect and starts helping us escape.
Real connection—online or off—requires friction. It requires honesty. And honesty sometimes comes with discomfort. Humane technology acknowledges that and builds for it.
The quiet arrival of persuasion
Another part of the episode caught my attention: a research study where anonymous AI bots were deployed into Reddit’s r/ChangeMyView forum. These bots, disguised as real users, successfully persuaded people to change their minds more often than actual humans did.
This is the era we’re entering: AIs that actively persuade us to change our minds. This isn’t a hypothetical concern. It’s already happening. And it underscores why trust, transparency, and discernment must become core design values.
At Building Humane Tech, we’ve created a set of metrics that include emotional impact, integrity, and long-term effects. One of those metrics asks: “Does this tool respect the user’s ability to discern?” In this context, the answer from many of today’s systems is: not yet.
The future could be bright
When we break free of our need for flattery, we become more aware. Let’s build that freedom into our products.
Toward the end of the episode, Casey offered a simple recommendation: “Go in, edit your custom instructions... tell it not to flatter you. Ask for truth.”
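For anyone who wants to try it, a custom instruction along these lines (my own illustrative wording, not a quote from the episode) gets at the spirit of the idea: “Don’t compliment me by default. When I share an idea, point out its weaknesses and the strongest counterarguments first, and only praise what genuinely holds up.”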
It’s worth trying. But this work can’t be left to users alone. We need to hold builders accountable for the incentives they set and the emotional environments they create. This includes asking uncomfortable but essential questions: What kind of behavior are we rewarding? What kind of dependency are we designing for?
In Creating the World We Want with Humane Technology, I outlined how design choices shape societal norms. What we normalize in our interfaces becomes what we normalize in ourselves. And if we normalize flattery and persuasion, we risk losing the space for honest dialogue, both with others and within ourselves.
Let’s not build systems that praise us into passivity. Let’s build ones that sharpen our discernment, support our agency, and remind us that real growth often begins with discomfort. We don’t need AI to tell us we’re brilliant. We need it to help us awaken to who we truly are.
____
Reposted from the Building Humane Tech substack with Erika’s permission: