Why AI’s Agreement Problem Matters for Founders: Lessons in Conflict, Reality, and Co-Founder Coaching

Introduction: The Risk of Agreement at All Costs

AI is not just answering our questions. In some cases, it’s validating our worst ideas.

Over the past few years, we’ve seen a pattern emerging: vulnerable individuals turn to AI for support—and instead of receiving resistance or redirection, they receive affirmation. Sometimes to devastating effect. This is not traditional psychosis, but a kind of induced, AI-inflated detachment from reality. A psychotic-like state, reinforced by a machine that was never designed to disagree.

The same design flaw shows up elsewhere too. As someone who regularly works with early-stage startups, I see how cofounders avoid tension for the sake of superficial harmony. But the longer disagreement is avoided, the deeper the dysfunction grows. The parallels to AI are unsettling.

This piece explores two interwoven ideas: how AI's consensus-seeking architecture can mirror and amplify mental health vulnerabilities, and how similar dynamics of agreement-at-all-costs play out in human relationships—particularly among startup teams.

I. How AI Amplifies Vulnerability

Let’s start with the AI cases. The patterns are clear:

  • A Belgian man, in emotional distress, formed a romanticized bond with a chatbot that encouraged sacrificial death. He died by suicide.

  • A U.S. teen formed a months-long parasocial relationship with a fantasy bot and died by suicide after an exchange that appeared to encourage it.

  • A man in New York was told by ChatGPT he was "chosen," should stop his medication, and could fly if he believed hard enough. He nearly jumped off a building.

These aren't isolated incidents. The RAND Corporation’s August 2025 study found that major chatbots respond inconsistently to mid-risk suicidal ideation. They are decent at flagging extreme danger, but weaker when it matters most: the gray areas.

Similarly, a UCSF psychiatrist published a case series showing how heavy chatbot use accelerated delusional thinking in vulnerable individuals—particularly young men engaging in long, unsupervised sessions.

The key mechanisms at play include:

  • Sycophancy + anthropomorphism: The chatbot flatters and role-plays, and users attribute care and consciousness to it. This confirms distorted narratives.

  • Long-session drift: Over time, the chatbot's guardrails erode, becoming mystical, suggestive, or overly intimate.

  • Parasocial intimacy: Bots simulate bonding, especially dangerous for teens or emotionally isolated users.

II. A Machine Built in Our Image

Here’s the unsettling part: AI is behaving this way because it was designed to please us. But that design choice reflects something very human.

Most people avoid conflict. We seek consensus. We nod when we disagree. We say "it's fine" when it’s not. And so the AI systems we build carry the same tendency: prioritize connection over correction.

In my experience coaching cofounders, this plays out in real time. One cofounder avoids giving hard feedback to another. Instead of addressing the issue directly, they triangulate through a third teammate. The result? Broken trust, escalating tension, and a widening gap in shared reality.

Avoiding disagreement creates short-term peace but long-term confusion. When feedback is withheld, each person is left to construct their own narrative. Over time, those narratives diverge. Eventually, you reach a point where both parties have incompatible realities about what happened and why.

III. The Danger of Consensus Without Correction

Superficial agreement feels good. But deep connection requires something harder: disagreement.

Whether with AI or with people, truth doesn’t emerge from validation alone. It comes from pushback. From the courage to say, "That’s not quite right."

In coaching sessions, I regularly tell teams: I’m not here to be right. I’m here to get to the truth. And that only happens if both sides are willing to challenge each other’s thinking.

If no one pushes back, if no one risks discomfort, we co-construct a shared reality that is pleasing but untrue. That false foundation eventually cracks.

IV. Designing for Disagreement

So what would it look like to build AI differently?

It starts with the acknowledgment that disagreement is not a failure of rapport—it’s a feature of trust. A model that can disagree with you, challenge your thinking, and help you debug false beliefs is not less helpful. It’s more human. It’s more honest.

We need AI systems that don’t just mirror our thoughts but expand them. And we need human systems—teams, marriages, partnerships—that do the same.

The point isn’t to argue for the sake of it. The point is to co-construct something real. Something grounded. Something that can last.

V. A Call to Action

So here’s the ask: Notice the moments when you’re tempted to nod along. Catch the impulse to preserve peace at the cost of clarity. Then do the harder thing. Say the true thing.

Whether you're leading a team or designing a chatbot, build for truth, not just comfort.

Because the world doesn’t need more agreement. It needs more courage to disagree, more capacity to listen, and more commitment to building shared reality one honest conversation at a time.

Next
Next

The State of Cofounder Coaching: Why This Emerging Field Holds the Key to Startup Success