Belief in AI Consciousness May Have Real Consequences
June 20, 2025
What is consciousness? The concept is difficult to pin down, yet it is central to our current moment in tech. The BBC tells us about “The People Who Think AI Might Become Conscious.” Perhaps today’s computer science majors should consider a minor in philosophy. Or psychology.
Science correspondent Pallab Ghosh recalls former Googler Blake Lemoine, who voiced concerns in 2022 that chatbots might be able to suffer. Though Google fired the engineer for his very public assertions, he has not faded into the woodwork. And others believe he was on to something. Like everyone at Eleos AI, a nonprofit “dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems.” Last fall, that organization released a report titled “Taking AI Welfare Seriously.” One of the paper’s co-authors is Anthropic’s new “AI Welfare Officer,” Kyle Fish. Yes, that is a real position.
Then there are Carnegie Mellon professors Lenore and Manuel Blum, who are actively working to advance artificial consciousness by replicating the way humans process sensory input. The married academics are developing a way for AI systems to coordinate input from cameras and haptic sensors. (Using an LLM, naturally.) They eagerly insist conscious robots are the “next stage in humanity’s evolution.” Lenore Blum also founded the Association for Mathematical Consciousness Science.
In short, some folks are taking this very seriously. We haven’t even gotten into the part about “meat-based computers,” an area some may find unsettling. See the article for that explanation. Whatever one’s stance on algorithms’ rights, many are concerned that all of this will affect actual humans. Ghosh relates:
“The more immediate problem, though, could be how the illusion of machines being conscious affects us. In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. ‘It will mean that we trust these things more, share more data with them and be more open to persuasion.’ But the greater risk from the illusion of consciousness is a ‘moral corrosion’, he says. ‘It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives’ – meaning that we might have compassion for robots, but care less for other humans. And that could fundamentally alter us, according to Prof Shanahan.”
Yep. Stay alert, fellow humans. Whatever your AI philosophy. On the other hand, just accept the output.
Cynthia Murrell, June 20, 2025