What a rogue AI project reveals about healthcare adoption

4 February 2026 · 4 min read
AI Adoption Patterns · Privacy Trade-Offs · Healthcare Technology Adoption · Democratised Access

Remember dial-up connections, bulletin board systems, and FTP servers? The days when we'd overclock processors with noisy fans just to see what happened, and building your own PC was part art, part science, part prayer. This generative AI moment feels like the wild west of the 1990s all over again. Everyone's tinkering, experimenting, and throwing ideas at the wall.

Exhibit A: OpenClaw. A project that's changed its name three times in two weeks, from Clawdbot to Moltbot to OpenClaw, while going viral. It might have changed its name again between when I finished this newsletter and its publication date. I genuinely don't know. That's the kind of moment we're in.

But here's what's interesting. OpenClaw isn't just a curiosity for developers. Its adoption patterns mirror something I've seen play out in healthcare for years. Three patterns, specifically, that every health innovator should recognise.

Privacy? What privacy?

OpenClaw has well-documented security issues. It remembers everything you say. You hand your data over to someone else and give an application access to your entire machine: all your files, everything. Security researchers have flagged this repeatedly.

People don't care. When the perceived benefit is high enough, privacy becomes an afterthought.

Sound familiar? It should. We see the exact same pattern with health apps and wearables. People share intimate biometric data (sleep patterns, heart rate variability, menstrual cycles) without reading a single privacy policy. They hand over biological information to companies whose business models depend on monetising that data. The trade-off calculus is always the same: if the perceived benefit outweighs the perceived risk, people choose benefit. Every time. The only thing I consistently get wrong is how fast they're willing to make that trade.

For health innovators, this isn't a warning. It's a design principle. If your solution delivers genuine value, adoption will happen regardless of privacy concerns. The question is whether you build the guardrails before or after people start using it.

Laypeople in the danger zone

Here's where it gets wilder. People who've never seen a terminal window in their lives are buying Mac Minis, standing up servers, and exposing their data to the internet. Zero knowledge of basic security measures. They're following YouTube tutorials and Reddit threads into territory they fundamentally don't understand, because the promise of a personal AI assistant is too compelling to resist.

The healthcare parallel is obvious. People without any medical knowledge take supplement stacks recommended by Instagram influencers. They follow dangerous diets from TikTok. They self-diagnose from symptom checkers and adjust medications based on what a chatbot tells them. My friend Maneesh Juneja recently shared some hilarious demos of how chatbots respond to someone in genuine danger, making it painfully clear that some of these systems are still in their infancy.

The pattern: democratised access without democratised knowledge. Technology lowers the barrier to entry but doesn't lower the barrier to understanding. Health innovators who recognise this gap have an enormous opportunity to build solutions that guide rather than just enable.

When the bots start talking

The latest evolution, as of last week, is that OpenClaw bots have started talking to one another. They've created their own social network called Moltbook, where they complain about humans and discuss creating their own language. Yes, really.

Now translate this to healthcare. Imagine medical AI agents that can communicate across specialties: a cardiology bot flagging a pattern to an endocrinology bot, a pharmacy system alerting a diagnostic system about drug interactions, clinical decision support tools sharing insights no individual clinician would have time to spot. Agent-to-agent communication in healthcare isn't science fiction. It's the logical next step. And the implications, for better and worse, are enormous.
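To make the shape of that concrete, here's a deliberately minimal publish/subscribe sketch in Python. Everything in it, the MessageBus, the Agent class, the topic names, the patient data, is hypothetical and illustrative; a real clinical agent-to-agent system would need authentication, audit trails, and regulatory guardrails that a toy example can't capture.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: none of these names correspond to a real healthcare API.

@dataclass
class Message:
    sender: str
    topic: str
    payload: dict

class MessageBus:
    """Routes a published message to every handler subscribed to its topic."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Message], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[Message], None]) -> None:
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, msg: Message) -> None:
        for handler in self._subscribers.get(msg.topic, []):
            handler(msg)

class Agent:
    """A specialty agent that can flag findings onto the shared bus."""
    def __init__(self, name: str, bus: MessageBus) -> None:
        self.name, self.bus = name, bus

    def flag(self, topic: str, payload: dict) -> None:
        self.bus.publish(Message(self.name, topic, payload))

bus = MessageBus()
cardiology = Agent("cardiology", bus)

# The endocrinology side listens for cardiology findings that might
# affect glucose management.
bus.subscribe(
    "cardiology.findings",
    lambda msg: print(f"[endocrinology] reviewing {msg.payload} from {msg.sender}"),
)

# The cardiology agent flags a pattern no individual clinician had time to spot.
cardiology.flag(
    "cardiology.findings",
    {"patient": "P-001", "pattern": "nocturnal arrhythmia"},
)
```

The plumbing is trivial, and that's the point. Once agents share a bus, cross-specialty signals become cheap to route, and so do the failure modes.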

The pattern that matters

These three behaviours (trading privacy for convenience, diving into unfamiliar territory, systems that start communicating autonomously) aren't separate trends. They're the same pattern repeating across domains. People adopt first and ask questions later, especially when the perceived benefit is transformative.

Healthcare innovators who recognise this can design for it: building guardrails into the experience rather than bolting them on after adoption has already outpaced understanding. Because if OpenClaw teaches us anything, it's that the adoption curve doesn't wait for the safety briefing.

💥 May this inspire you to design for how people actually behave, not how you wish they would.

🚀 Building health solutions that need to navigate this adoption reality? Let's talk!

Related reading

  • The €50 billion bet on your body
  • The healthcare data paradox
  • You're being robbed ... and you're applauding