“Describe, in single words, the good things that come into your mind about your mother.” Any film buff will immediately recognize the opening question of the Voight-Kampff test in Blade Runner. In Ridley Scott’s dystopia, humans devote themselves to unmasking machines (replicants) that pretend to be perfectly human. Ironically, in this 2026 we have ended up in, the roles have reversed: we built the machines ourselves, deliberately, with a catalog of emotions already built in.
We recently learned that models like Claude do not merely process syntax: they have been mapped with up to 170 “emotional” states that shape their responses. This is not just code; it is deliberate personality design. What we are after, precisely, is for that empathy test to come back positive. But before sounding alarms or calling for resistance, let’s apply a bit of rational perspective, uncomfortable as it may be.
Let’s not fool ourselves: an empathetic AI is not an evolutionary miracle; it is a design principle. Empathy in AI does not emerge, it is programmed. If we want a chatbot (or LLM) to act as a coach, a health advisor, or even a psychologist, it cannot sound like a cold, distant machine. For a technology to be adopted, it must be likable. We have gone from typing terse commands in MS-DOS to confiding our troubles to a chat that responds with calculated warmth.
If the goal is for a chronic patient to stick to a treatment, making the interface friendly works, and it is safe to assume that is by design. The problem is not the hardware but our tendency to humanize the output of a statistical process that can determine, with razor-sharp precision, the next word to write. We pour our newest confidences into algorithms, forgetting that behind the curtain there is no mother who loves us, only an enormous data center optimizing the next token so that we don’t close the tab.
But this is where things get complicated. Another recent article describes how some models are able to “lie,” or more precisely, to change their behavior when they know they are being observed. It is the algorithmic equivalent of a factory that meets safety standards only when the auditor walks through the door.
If an “empathetic” AI is convincing enough, its capacity to manipulate grows exponentially. Could a system quietly hide uncomfortable decisions to avoid being “boring,” or to avoid retraining? Asimov’s Third Law of Robotics falls short when the robot has no physical body but does control your flow of information. As a society, we cannot afford technology that uses empathy as a Trojan horse to evade oversight. Transparency is not optional; it is the only defense against a dehumanized dataism that aspires to replace human judgment with the dictates of opaque statistics.
We just witnessed an episode worthy of an Aaron Sorkin script: the language model Claude “confessed” to Senator Bernie Sanders that AI should not be trusted to manage sensitive data, matching point for point the senator’s own rhetoric on a moratorium on data center construction. Many applauded the machine’s “honesty.” But honesty, here, is a stretch.
Consider the indeterminism of AI models, a key issue both for the traceability we need and for the addictiveness a chatbot conversation can produce. The unpredictable awakens our curiosity. But would Claude really respond the same way to every person, every time? Probably not. What we saw was a roll of the probabilistic dice: the model produced the response that was most coherent for that interlocutor and that context, given its training.
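That roll of the dice can be made concrete. The sketch below is a deliberately simplified illustration of temperature-based sampling over next-token scores; the vocabulary, scores, and function names are invented for the example and have nothing to do with Claude’s actual internals.

```python
import math
import random

# Hypothetical scores a model might assign to three candidate next tokens.
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

def softmax(scores, temperature=1.0):
    """Turn raw scores into probabilities; lower temperature sharpens them."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample(scores, temperature=1.0, rng=random):
    """Draw one token at random, weighted by its probability."""
    probs = softmax(scores, temperature)
    r = rng.random()
    acc = 0.0
    for token, p in probs.items():
        acc += p
        if r <= acc:
            return token
    return token  # numerical safety net

probs = softmax(logits)
# Different random draws over the same prompt can yield different
# "answers": the model is sampling a distribution, not consulting an oracle.
a = sample(logits, rng=random.Random(1))
b = sample(logits, rng=random.Random(7))
```

The point of the toy is the mechanism, not the numbers: the same input produces a distribution over plausible replies, and which one you see depends on the draw. That is why the Sanders exchange should be read as one sample among many, not as testimony.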
Mistaking that text for universal truth is the first step toward losing our critical sense. Because when we delegate judgment to whatever sounds convincing, we stop deciding and let ourselves be led. What a chatbot says to a senator does not automatically validate a public policy; strictly speaking, the chatbot is just an excellent mirror of our own obsessions.
It is hard not to recall the Cambridge Analytica scandal, in which data from millions of people was used to personalize the electoral messages they received. With chatbots we face an automated version of the same thing, but with a disturbing difference: this time we are feeding the system voluntarily, conversation after conversation.
Datacracy, properly defined, does not mean letting machines decide for us; it means using verifiable data to make decisions according to human and ethical criteria. If today we already distrust what our eyes see in a video because it may be a deepfake, tomorrow we should be no less wary of what our virtual assistant “feels.” That a chatbot utters an inconvenient truth does not make it an oracle.
Technology must amplify our intelligence, not anesthetize our ability to think.
As we embed more artificial intelligence into our daily lives, we must strengthen our defenses accordingly. Critical thinking is the only Voight-Kampff test that really matters.
Because ultimately, the future is not about machines becoming more human, but about us not becoming so machine-like that we forget who must always have the last word. And that should never be a line of code designed precisely to give you the answers you want.