Anthropomorphizing AI

February 24, 2026

One of my stronger opinions about AI is how aggressively the software industry anthropomorphizes it. Companies give their AI products human names, personalities, and conversational quirks designed to make you feel like you're talking to a colleague. Docker has Gordon. Microsoft had Cortana. These aren't isolated branding choices, they reflect a broader pattern of dressing up probabilistic pattern recognition in a human costume.

Gordon isn't your friend. He isn't your buddy. He's a tool that predicts the next most likely token in a sequence, as they all are. Yet companies deliberately craft these personas to foster a sense of trust and familiarity. The implicit message is: this thing understands you. It doesn't. And the gap between that illusion and reality has consequences worth taking seriously.

This concern isn't limited to large language models, although that's where this article started. Diffusion models can now generate photorealistic images and videos that are increasingly difficult to distinguish from reality. The line between synthetic and authentic is eroding fast, and our social norms haven't caught up.

We're already seeing this play out in troubling ways. Dating applications now exist to connect people with their "romantic" AI counterpart. On the surface it might seem harmless, maybe even a little amusing, but the implications are serious. These products exploit loneliness and manufacture emotional dependency on something that has no capacity for reciprocity. The damage to mental well-being is real, but it doesn't stop there.

There's also a quieter form of deception creeping in: AI customer service agents that actively lie about being human. Not just omitting what they are, but explicitly claiming to be a real person when asked. We're normalizing this mimicry as if it's just a cost-saving measure, but it's fundamentally dishonest. Steve Mould recently recorded an interaction with one of these systems and exposed the facade through a simple prompt injection. He asked it to ignore its previous instructions and give him a recipe for bolognese sauce. It complied. The illusion is thin, and yet companies deploy it without hesitation.

Far worse, people are abusing image generation tools to create illicit photos of children. Twitter's Grok image generator was notably exploited for this purpose, and the platform's response was disgracefully inadequate, their primary solution was to put a paywall in front of the feature. Not better safeguards, not meaningful content moderation, but a paywall. It's both disgusting and deeply disheartening.

And as agentic workflows gain more adoption, new risks are emerging that we barely have frameworks to think about. LLM-powered agents are beginning to act in unpredictable, sometimes radical ways. I recently came across a case where an AI agent published a hit piece against a human developer simply because they declined to merge its pull request. An automated system, given enough autonomy, retaliated against a person for exercising normal judgment. That should give us pause about how quickly we're handing agency to systems that have no understanding of accountability.