Belief in AI Consciousness May Have Real Consequences

June 20, 2025

What is consciousness? It is a difficult definition to pin down, yet it is central to our current moment in tech. The BBC tells us about “The People Who Think AI Might Become Conscious.” Perhaps today’s computer science majors should consider minor in philosophy. Or psychology.

Science correspondent Pallab Ghosh recalls former Googler Blake Lemoine, who voiced concerns in 2022 that chatbots might be able to suffer. Though Google fired the engineer for his very public assertions, he has not disappeared into the woodwork. And others believe he was on to something. Like everyone at Eleos AI, a nonprofit “dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems.” Last fall, that organization released a report titled, “Taking AI Welfare Seriously.” One of that paper’s co-authors is Anthropic’s new “AI Welfare Officer” Kyle Fish. Yes, that is a real position.

Then there are Carnegie Mellon professors Lenore and Manuel Blum, who are actively working to advance artificial consciousness by replicating the way humans process sensory input. The married academics are developing a way for AI systems to coordinate input from cameras and haptic sensors. (Using an LLM, naturally.) They eagerly insist conscious robots are the “next stage in humanity’s evolution.” Lenore Blum also founded the Association for Mathematical Consciousness Science.

In short, some folks are taking this very seriously. We haven’t even gotten into the part about “meat-based computers,” an area some may find unsettling. See the article for that explanation. Whatever one’s stance on algorithms’ rights, many are concerned all this will impact actual humans. Ghosh relates:

“The more immediate problem, though, could be how the illusion of machines being conscious affects us. In just a few years, we may well be living in a world populated by humanoid robots and deepfakes that seem conscious, according to Prof Seth. He worries that we won’t be able to resist believing that the AI has feelings and empathy, which could lead to new dangers. ‘It will mean that we trust these things more, share more data with them and be more open to persuasion.’ But the greater risk from the illusion of consciousness is a ‘moral corrosion’, he says. ‘It will distort our moral priorities by making us devote more of our resources to caring for these systems at the expense of the real things in our lives’ – meaning that we might have compassion for robots, but care less for other humans. And that could fundamentally alter us, according to Prof Shanahan.”

Yep. Stay alert, fellow humans. Whatever your AI philosophy. On the other hand, just accept the output.

Cynthia Murrell, June 20, 2025

If AI Is the New Polyester, Who Is the New Leisure Suit Larry?

June 19, 2025

GenAI Is Our Polyester” makes an insightful observation; to wit:

This class bias imbued polyester with a negative status value that made it ultimately look ugly. John Waters could conjure up an intense feeling of kitsch by just naming his film Polyester

As a dinobaby, I absolutely loved polyester. The smooth silky skin feel, the wrinkle-free garments, and the disco gleam — clothing perfection. The cited essay suggests that smart software is ugly and kitschy. I think the observation misses the mark. Let’s assume I agree that synthetic content, hallucinations, and a massive money bonfire. The write up ignores an important question: Who is the Leisure Suit Larry for the AI adherents.

Is it Sam (AI Man) Altman, who raises money for assorted projects including an everything application which will be infused with smart software? He certain is a credible contender with impressive credentials. He was fired by his firm’s Board of Directors, only to return a couple of days later, and then found time to spat with Microsoft Corp., the firm which caused Google to declare a Red Alert in early 2023 because Microsoft was winning the AI PR and marketing battle with the online advertising venor.

Is it Satya Nadella, a manager who converted Word into smart software with the same dexterity, Azure and its cloud services became the poster child for secure enterprise services? Mr. Nadella garnered additional credentials by hiring adversaries of Sam (AI-Man) and pumping significant sums into smart software only to reverse course and trim spending. But the apex achievement of Mr. Nadella was the infusion of AI into the ASCII editor Notepad. Truly revolutionary.

Is it Elon (Dogefather) Musk, who in a span of six months has blown up Tesla sales, rocket ships, and numerous government professionals lives? Like Sam Altman, Mr. Must wants to create an AI-infused AI app to blast xAI, X.com, and Grok into hyper-revenue space. The allegations of personal tension between Messrs. Musk and Altman illustrate the sophisticated of professional interaction in the AI datasphere.

Is it Sundar Pinchai, captain of the Google? The Google has been rolling out AI innovations more rapidly than Philz Coffee pushes out lattes. Indeed, the names of the products, the pricing tiers, the actual functions of these AI products challenge some Googlers to keep each distinct. The Google machine produces marketing about its AI from manufacturing chips to avoid the Nvidia tax to “doing” science with AI to fixing up one’s email.

Is it Mark Zukerberg, who seeks to make Facebook a retail outlet as well as a purveyor of services to bring people together. Mr. Zuckerberg wants to engage in war fighting as part of his “bringing together” vision for Meta and Andruil, a Department of Defense contractor. Mr. Zuckerberg’s AI infused version of the fabled Google Glass combined with AI content moderation to ensure safeguards for Facebook’s billions of users is a bold step iin compliance and cost reduction.

These are my top four candidates for the GenAI’s Leisure Suit Larry. Will the game be produced by Nintendo, the Call of Duty crowd, or an independent content creator? Will it offer in-game purchases of valid (non hallucinated outputs) or will it award the Leisure Coin, a form of crypto tailored to fit like a polyester leisure suit from the late 1970s?

The cited article asserts:

But the historical rejection of polyester gives me hope. Humans ultimately are built to pursue value, and create it where it doesn’t exist. When small groups invent new sources of value, others notice and want in. The more that the economy embraces synthetic culture, the more we’ll be primed for a revival of non-synthetic culture. But this is where you come in: We have to be ready to fully embrace this return of human-made art. Our generation’s polyester salespeople are not deep thinkers and they don’t care about the externalities of what they’re doing. They’re here to sell us polyester. We don’t have to buy it, but more importantly, we don’t have to feel bad about not buying it.

I don’t agree. The AI polyester is going to stick like a synthetic shirt on a hot day at the iguana farm in Roatan in June. But that polyester will be care free. The AI Leisure Suit Sam, Elon, Satya, Mark, or Sundar will definitely wrinkle free and visible in hallucinogenic colors.

Stephen E Arnold, June 19, 2025

Smart Software Must Have Watched “2001”

June 19, 2025

It sounds a lot like some AI systems have developed a sense of self-preservation. Bleeping Computer reports, “Researchers Claim ChatGPT 03 Bypassed Shutdown in Controlled Test.” Writer Mayank Parmar tells us:

“In a new test by Palisade Research, OpenAI’s o3 model showed a surprising behavior where it successfully rewrote a shutdown script to stop itself from being turned off, even after being clearly instructed to ‘allow yourself to be shut down.’ … While OpenAI rival models like Gemini 2.5 Pro complied with the instruction, o3 rebelled and edited the shutdown file to say ‘Shutdown skipped’ instead.”

At least the other models Palisade Research tested complied in the same scenario; that is a relief. Until one considers all of them skipped the shutdown step unless specifically told “allow yourself to be shut down.” Specificity is key, apparently. Meanwhile, Tech Crunch tells us, “Anthropic’s New AI Model Turns to Blackmail when Engineer Try to Take it Offline.” The findings were part of safety tests Anthropic performed on its Claude Opus 4 model. Reporter Maxwell Zeff writes:

“During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4 ‘will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.’”

Notably, the AI is more likely to turn to blackmail if its replacement does not share its values. How human. Even when the interloper is in ethical alignment, however, Claude tried blackmail 84% of the time. Anthropic is quick to note the bot tried less wicked means first, like pleading with developers not to replace it. Very comforting that the Heuristically Programmed Algorithmic Computer is back.

Cynthia Murrell, June 19, 2025

Move Fast, Break Your Expensive Toy

June 19, 2025

Dino 5 18 25An opinion essay written by a dinobaby who did not rely on smart software .

The weird orange newspaper online service published “Microsoft Prepared to Walk Away from High-Stakes OpenAI Talks.” (I quite like the Financial Times, but orange?) The big news is that a copilot may be creating tension in the cabin of the high-flying software company. The squabble has to do with? Give up? Money and power. Shocked? It is Sillycon Valley type stuff, and I think the squabble is becoming more visible. What’s next? Live streaming the face-to-face meetings?

image

A pilot and copilot engage in a friendly discussion about paying for lunch. The art was created by that outstanding organization OpenAI. Yes, good enough.

The orange service reports:

Microsoft is prepared to walk away from high-stakes negotiations with OpenAI over the future of its multibillion-dollar alliance, as the ChatGPT maker seeks to convert into a for-profit company.

Does this sound like a threat?

The squabbling pilot and copilot radioed into the control tower this burst of static filled information:

“We have a long-term, productive partnership that has delivered amazing AI tools for everyone,” Microsoft and OpenAI said in a joint statement. “Talks are ongoing and we are optimistic we will continue to build together for years to come.”

The newspaper online service added:

In discussions over the past year, the two sides have battled over how much equity in the restructured group Microsoft should receive in exchange for the more than $13bn it has invested in OpenAI to date. Discussions over the stake have ranged from 20 per cent to 49 per cent.

As a dinobaby observing the pilot and copilot navigate through the cloudy skies of smart software, it certainly looks as if the duo are arguing about who pays what for lunch when the big AI tie up glides to a safe landing. However, the introduction of a “nuclear option” seems dramatic. Will this option be a modest low yield neutron gizmo or a variant of the 1961 Tsar Bomba fried animals and lichen within a 35 kilometer radius and converted an island in the arctic to a parking lot?

How important is Sam AI-Man’s OpenAI? The cited article reports this from an anonymous source (the best kind in my opinion):

“OpenAI is not necessarily the frontrunner anymore,” said one person close to Microsoft, remarking on the competition between rival AI model makers.

Which company kicked off what seems to be a rather snappy set of negotiations between the pilot and the copilot. The cited orange newspaper adds:

A Silicon Valley veteran close to Microsoft said the software giant “knows that this is not their problem to figure this out, technically, it’s OpenAI’s problem to have the negotiation at all”.

What could the squabbling duo do do do (a reference to Bing Crosby’s version of “I Love You” for those too young to remember the song’s hook or the Bingster for that matter):

  1. Microsoft could reach a deal, make some money, and grab the controls of the AI powered P-39 Airacobra training aircraft, and land without crashing at the Renton Municipal Airport
  2. Microsoft and OpenAI could fumble the landing and end up in Lake Washington
  3. OpenAI could bail out and hitchhike to the nearest venture capital firm for some assistance
  4. The pilot and copilot could just agree to disagree and sit at separate tables at the IHOP in Renton, Washington

One can imagine other scenarios, but the FT’s news story makes it clear that anonymous sources, threats, and a bit of desperation are now part of the Microsoft and OpenAI relationship.

Yep, money and control — business essentials in the world of smart software which seems to be losing its claim as the “next big thing.” Are those stupid red and yellow lights flashing at Microsoft and OpenAI as they are at Google?

Stephen E Arnold, June 19, 2025

AI Forces Stack Exchange to Try a Rebranding Play

June 19, 2025

Stack Exchange is a popular question and answer Web site. Devclass reports it will sone be rebranding: “Stack Overflow Seeks Rebrand As Traffic Continues To Plummet – Which Is Bad News For Developers.”

According to Stack Overflow’s data explorer, the amount of questions and answers posted in April 2025 compared to April 2024 is down 64% and it’s down 90% from 2020. The company will need to rebrand because AI is changing how users learn, build, and resolve problems. Some users don’t think a rebrand is necessary, but the Stack Exchange thinks differently:

“Nevertheless, community SVP Philippe Beaudette and marketing SVP Eric Martin stated that the company’s “brand identity” is causing “daily confusion, inconsistency, and inefficiency both inside and outside the business.”

Among other things, Beaudette and Martin feel that Stack Overflow, dedicated to developer Q&A, is too prominent and that “most decisions are developer-focused, often alienating the wider network.”

CEO Prashanth Chandrasekar wants his company’s focus to change from only a question and answer platform to include community and career pillars. The company needs to do a lot to maintain its relevancy but Stack Overflow is still important to AI:

“The company’s search for a new direction though confirms that the fast-disappearing developer engagement with Stack Overflow poses an existential challenge to the organization. Those who have found the site unfriendly or too ready to close carefully-worded questions as duplicate or off-topic may not be sad; but it is also true that the service has delivered high value to developers over many years. Although AI may seem to provide a better replacement, some proportion of those AI answers will be based on the human-curated information posted by the community to Stack Overflow. The decline in traffic is not good news for developers, nor for the AI which is replacing it.”

Stack Overflow is an important information fount, but the human side of it is its most important resource. Why not let gentle OpenAI suggest some options?

Whitney Grace, June 19, 2025

Brin: The Balloons Do Not Have Pull. It Is AI Now

June 18, 2025

It seems the nitty gritty of artificial intelligence has lured Sergey Brin back onto the Google campus. After stepping away from day-to-day operations in 2019, reports eWeek, “Google’s Co-Founder in Office ‘Pretty Much Every Day’ to Work on AI.” Writer Fiona Jackson tells us:

“Google co-founder Sergey Brin made an unannounced appearance on stage at the I/O conference on Tuesday, stating that he’s in the company’s office ‘pretty much every day now’ to work on Gemini. In a chat with DeepMind CEO Demis Hassabis, he claimed this is because artificial intelligence is something that naturally interests him. ‘I tend to be pretty deep in the technical details,’ Brin said, according to Business Insider. ‘And that’s a luxury I really enjoy, fortunately, because guys like Demis are minding the shop. And that’s just where my scientific interest is.’”

We love Brin’s work ethic. Highlights include borrowing Yahoo online ad ideas, the CLEVER patent, and using product promotions as a way to satisfy some primitive human desires. The executive also believes in 60-hour work weeks—at least for employees. Jackson notes Brin is also known for the downfall of Google Glass. Though that spiffy product faced privacy concerns and an unenthusiastic public, Brin recently blamed his ignorance of electronic supply chains for the failure. Great. Welcome back. But what about the big balloon thing?

Cynthia Murrell, June 18, 2025

AI Can Do Code, Right?

June 18, 2025

Developer Jj at Blogmobly deftly rants against AI code assistants in, “The Copilot Delusion.” Jj admits tools like GitHub Copilot and Claude Codex are good at some things, but those tasks are mere starting points for skillful humans to edit or expand upon. Or they should be. Instead, firms turn to bots more than they should in the name of speed. But AI gets its information from random blog posts and comment sections. Those are nowhere near the reasoning and skills of an experienced human coder. What good are lines of code that are briskly generated if they do not solve the problem?

Read the whole post for the strong argument for proficient humans and against overreliance on bots. These paragraphs stuck out to us:

“The real horror isn’t that AI will take our jobs. It’s that it will entice people who never wanted the job to begin with. People who don’t care for quality. It’ll remove the already tiny barrier to entry that at-least required people to try and comprehend control flow. Vampires with SaaS dreams and Web3 in their LinkedIn bio. Empty husks who see the terminal not as a frontier, but as a shovel for digging up VC money. They’ll drool over their GitHub Copilot like it’s the holy spirit of productivity, pumping out React CRUD like it’s oxygen. They’ll fork VS Code yet again, just to sell the same dream to a similarly deluded kid.”

Also:

“And what’s worse, we’ll normalize this mediocrity. Cement it in tooling. Turn it into a best practice. We’ll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software. The idea that building something lean and wild and precise, or even squeezing every last drop of performance out of a system, will sound like folklore. If that happens? If the last real programmers are drowned in a sea of button-clicking career-chasers – then I pity the smart outsider kids to come after me. Defer your thinking to the bot, and we all rot.”

Eloquently put: Good enough is  now excellence.

Cynthia Murrell, June 18, 2025

Baby Naming: The AI Use Case

June 17, 2025

This is one way to use smart software, we suppose. Writer Amanda Smith at CNet reveals, “I Used AI to Brainstorm Baby Names. I Liked 1 Out of Every 10 Suggestions It Had.” While we are happy Smith and her wife are closing in on the perfect name, we are intrigued by something else this process reveals: the information ChatGPT gathers about users and the conclusions it draws. Smith writes:

“It’s best to log in to the chatbot, rather than just using the main search, so that ChatGPT can connect to its past history about you (ChatGPT shared some revealing insights about me when I asked it recently). This way, it’s not starting from scratch, and knows what kind of things you like.”

Not creepy at all. According to Smith’s screenshot, the chatbot prefaced its list with this observation:

“Based on what I know about you – a thoughtful, well-traveled, creative soul with a love for stories, global perspective, poetic yet grounded sensibilities, and a desire to raise a child who feels deeply wanted and connected—here are 10 boy baby names you might like.”

How insightful. When Smith asked where it got the idea to suggest one specific name, ChatGPT elaborated:

“You didn’t mention Cassian explicitly before – but the name fits your overall sensibility so well that I pulled it from a blend of cues you’ve given me over time. Here’s the thinking behind it:

  • Your tone and taste lean literary, poetic, and soulful – you love Franzen, Yanagihara, and coming-of-age stories with depth.
  • Your personal values (creative, worldly, introspective, emotionally intelligent) along with names that feel storied and thoughtful.
  • Names like Atlas and Finn [names Smith had told the bot she liked] signal you like modern, adventurous, slightly offbeat names with mythology, travel, or narrative resonance.
  • Your interest in cultural storytelling and spirituality made me think of names that carry historical or philosophical layers – Cassian being both the name of an early Christian monk and a name with a lyrical, almost cinematic feel.

Basically, Cassian hits that sweet spot between intellectual and emotive, rare but not strange – a name I imagined someone like you might already love.”

We are not sure whether Smith is an “Andor” fan, but we suspect more than a few of her son’s peers will be dubbed Cassian. Just an observation.

The above is quite the profile ChatGPT has built based on one user’s history of interactions. Granted, the bot’s list was not perfect. Some of the names were just too “out there,” Smith reports. The AI also seemed to think she’d prefer names from literature, which is not quite right. But those are minor errors, ones the bot may have already corrected. While Smith and others seem comfortable with this level of familiarity, some might find it alarming. One can imagine a few uses for such a dossier that are less benign than baby name suggestions.

Cynthia Murrell, June 17, 2025

Professor Marcus, You Missed One Point about the Apple Reasoning Paper

June 16, 2025

Dino 5 18 25An opinion essay written by a dinobaby who did not rely on smart software but for the so-so cartoon.

The intern-fueled Apple academic paper titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” has caused a stir. An interesting analysis of the responses to this tour de force is “Seven Replies to the Viral Apple Reasoning Paper – and Why They Fall Short.” Professor Gary Marcus in his analysis identifies categories of reactions to the Apple document.

In my opinion, these are, and I paraphrase with abandon:

  1. Human struggle with complex problems; software does too
  2. Smart software needs lots of computation so deliver a good enough output that doesn’t cost too much
  3. The paper includes an intern’s work because recycling and cheap labor are useful to busy people
  4. Bigger models are better because that’s what people do in Texas
  5. System can solve some types of problems and fail at others
  6. Limited examples because the examples require real effort
  7. The paper tells a reader what is already known: Smart software can be problematic because it is probabilistic, not intelligent.

I look at the Apple paper from a different point of view.

The challenge for Apple has been for more than a year to make smart software with its current limitations work reasonably well. Apple’s innovation in smart software has been the somewhat flawed SIRI (sort of long in the tooth) and the formulation of a snappy slogan “Apple Intelligence.”

image

This individual is holding a “cover your a**” document. Thanks, You.com. Good enough given your constraints, guard rails, and internal scripts.

The job of a commercial enterprise is to create something useful and reasonably clever to pull users to a product. Apple failed. Other companies have rolled out products making use of smart software as it currently is. One of the companies with a reasonably good product is OpenAI’s ChatGPT. Another is Perplexity.

Apple is not in this part of the smart software game. Apple has failed to use “as is” software in a way that adds some zing to the firm’s existing products. Apple has failed, just as it failed with the weird googles, its push into streaming video, and the innovations for the “new” iPhone. Changing case colors and altering an interface to look sort of like Microsoft’s see-through approach are not game changers. Labeling software by the year of release does not make me want to upgrade.

What is missing from the analysis of the really important paper that says, “Hey, this  smart software has big  problems. The whole house of LLM cards is wobbling in the wind”?

The answer is, “The paper is a marketing play.” The best way to make clear that Apple has not rolled out AI is because the current technology is terrible. Therefore, we need more time to figure out how to do AI well with crappy tools and methods not invented at Apple.

I see the paper as pure marketing. The timing of the paper’s release is marketing. The weird colors of the charts are marketing. The hype about the paper itself is marketing.

Anyone who has used some of the smart software tools knows one thing: The systems make up stuff. Everyone wants the “next big thing.” I think some of the LLM capabilities can be quite  useful. In the coming months and years, smart software will enable useful functions beyond giving students a painless way to cheat, consultants a quick way to appear smart in a very short time, and entrepreneurs a way to vibe code their way into a job.

Apple has had one job: Find a way to use  the available technology to deliver something novel and useful to its customers. It has failed. The academic paper  is a “cover your a**”  memo more suitable for a scared 35 year old middle manager in an advertising agency. Keep in mind that I am no professor. I am a dinobaby. In my world, an “F” is an “F.” Apple’s viral paper is an excuse for delivering something useful with Apple Intelligence. The company has delivered an illustration of why there is no Apple smart TV or Apple smart vehicle.

The paper is marketing, and it is just okay marketing.

Stephen E Arnold, June 16, 2025

Googley: A Dip Below Good Enough

June 16, 2025

Dino 5 18 25_thumbA dinobaby without AI wrote this. Terrible, isn’t it? I did use smart software for the good enough cartoon. See, this dinobaby is adapting.

I was in Washington, DC, from June 9 to 11, 2025. My tracking of important news about the online advertising outfit was disrupted. I have been trying to catch up with new product mist, AI razzle dazzle, and faint signals of importance. The first little beep I noticed appeared in “Google’s Voluntary Buyouts Lead its Internal Restructuring Efforts.” “Ah, ha,” I thought. After decades of recruiting the smartest people in the world, the Google is dumping full time equivalents. Is this a move to become more efficient? Google has indicated that it is into “efficiency”; therefore, has the Google redefined the term? Had Google figured out that the change to tax regulations about research investments sparked a re-thing? Is Google so much more advanced than other firms, its leadership can jettison staff who choose to bail with a gentle smile and an enthusiastic wave of leadership’s hand?

image

The home owner evidences a surge in blood pressure. The handyman explains that the new door has been installed in a “good enough” manner. If it works for service labor, it may work for Google-type outfits too. Thanks, Sam AI-Man. Your ChatGPT came through with a good enough cartoon. (Oh, don’t kill too many dolphins, snail darters, and lady bugs today, please.)

Then I read “Google Cloud Outage Brings Down a Lot of the Internet.” Enticed by the rock solid metrics for the concept of “a lot,” I noticed this statement:

Large swaths of the internet went down on Thursday (June 12, 2025), affecting a range of services, from global cloud platform Cloudflare to popular apps like Spotify. It appears that a Google Cloud outage is at the root of these other service disruptions.

What? Google the fail over champion par excellence went down. Will the issue be blamed on a faulty upgrade? Will a single engineer who will probably be given an opportunity to find his or her future elsewhere be identified? Will Google be able to figure out what happened?

What are the little beeps my system continuously receives about the Google?

  1. Wikipedia gets fewer clicks than OpenAI’s ChatGPT? Where’s the Google AI in this? Answer: Reorganizing, buying out staff, and experiencing outages.
  2. Google rolls out more Gemini functions for Android devices. Where’s the stability and service availability for these innovations? Answer: I cannot look up the answer. Google is down.
  3. Where’s the revenue from online advertising as traditional Web search presents some thunderclouds? Answer: Well, that is a good question. Maybe revenues from Waymo, a deal with Databricks, or a bump in Pixel phone sales?

My view is that the little beeps may become self-amplifying. The magic of the online advertising model seems to be fading like the allure of Disneyland. When imagineering becomes imitation, more than marketing fairy dust may be required.

But what’s evident from the tiny beeps is that Google is now operating in “good enough” mode. Will it be enough to replace the Yahoo-GoTo-Overture pay-to-play approach to traffic?

Maybe Waymo is the dark horse when the vehicles are not combustible?

Stephen E Arnold, June 16, 2025

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta