Copilot Disappointments: You Are to Blame

May 30, 2025

No AI, just a dinobaby and his itty bitty computer.

Another interesting Microsoft story from a pro-Microsoft online information service. Windows Central published “Microsoft Won’t Take Bigger Copilot Risks — Due to ‘a Post-Traumatic Stress Disorder from Embarrassments,’ Tracing Back to Clippy.” Why not invoke Bob, the US government’s suggestion that Microsoft security was needy, or the software of the Surface Duo?

The write up reports:

Microsoft claims Copilot and ChatGPT are synonymous, but three-quarters of its AI division pay out of pocket for OpenAI’s superior offering because the Redmond giant won’t allow them to expense it.

Is Microsoft saving money, or is Microsoft’s cultural momentum maintaining the velocity of Steve Ballmer taking an Apple iPhone from an employee and allegedly stomping on the device? That episode helped make Microsoft’s management approach clear to some observers.

The Windows Central article adds:

… a separate report suggested that the top complaint about Copilot to Microsoft’s AI division is that “Copilot isn’t as good as ChatGPT.” Microsoft dismissed the claim, attributing it to poor prompt engineering skills.

This statement suggests that Microsoft is blaming users for the alleged negative reaction to Copilot. Those pesky users again. Users, not Microsoft, are at fault. But what about the Microsoft employees who seem to prefer ChatGPT?

Windows Central stated:

According to some Microsoft insiders, the report details that Satya Nadella’s vision for Microsoft Copilot wasn’t clear. Following the hype surrounding ChatGPT’s launch, Microsoft wanted to hop on the AI train, too.

I thought the problem was the users and their flawed prompts. Could the issue be Microsoft’s management “vision”? I have an idea. Why not delegate product decisions to Copilot? That will show the users that Microsoft has the right approach to smart software: cutting back on data centers, acquiring other smart software and AI visionaries, and putting Copilot in Notepad.

Stephen E Arnold, May 30, 2025

AI Can Do Your Knowledge Work But You Will Not Lose Your Job. Never!

May 30, 2025

The dinobaby wrote this without smart software. How stupid is that?

Ravical says it is going to preserve jobs for knowledge workers. Nevertheless, the company’s AI may complete 80% of the work at expert firms. No bean counter on earth would figure out that reducing humanoid workers would cut costs, eliminate the useless vacation scam, and chop the totally unnecessary health care plan. None.

The write up “Belgian AI Startup Says It Can Automate 80% of Work at Expert Firms” reports:

Joris Van Der Gucht, Ravical’s CEO and co-founder, said the “virtual employees” could do 80% of the work in these firms.  “Ravical’s agents take on the repetitive, time-consuming tasks that slow experts down,” he told TNW, citing examples such as retrieving data from internal systems, checking the latest regulations, or reading long policies. Despite doing up to 80% of the work in these firms, Van Der Gucht downplayed concerns about the agents supplanting humans.

I believe this statement is 100 percent accurate. AI firms do not use excessive statements to explain their systems and methods. The article provides more concrete evidence that this replacement of humans is spot on:

Enrico Mellis, partner at Lakestar, the lead investor in the round, said he was excited to support the company in bringing its “proven” experience in automation to the booming agentic AI market. “Agentic AI is moving from buzzword to board-level priority,” Mellis said.

Several observations:

  1. Humans absolutely will be replaced, particularly those who cannot sell.
  2. Bean counters will be among the first to point out that software, as long as it is good enough, will reduce costs.
  3. Executives are judged on financial performance, not the quality of the work, as long as revenues and profits result.

Will Ravical become the go-to solution for outfits engaged in knowledge work? No, but it will become a company that other agentic AI firms will watch closely. As long as the AI is good enough, humanoids without the ability to close deals will have plenty of time to ponder opportunities in the world of good enough, hallucinating smart software.

Stephen E Arnold, May 30, 2025

It Takes a Village Idiot to Run an AI Outfit

May 29, 2025

The dinobaby wrote this without smart software. How stupid is that?

I liked the write up “The Era Of The Business Idiot.” I am not sure the term “idiot” is 100 percent accurate. According to the Oxford English Dictionary, the word “idiot” is a variant of the phrase “the village idget.” Good enough for me.

The AI marketing baloney is a big thick sausage indeed. Here’s a pretty good explanation of a high-technology company executive today:

We live in the era of the symbolic executive, when "being good at stuff" matters far less than the appearance of doing stuff, where "what’s useful" is dictated not by outputs or metrics that one can measure but rather the vibes passed between managers and executives that have worked their entire careers to escape the world of work. Our economy is run by people that don’t participate in it and our tech companies are directed by people that don’t experience the problems they allege to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.

The essay contains a number of observations which match well to my experiences as an officer in companies and as a consultant to a wide range of organizations. Here’s an example:

In simpler terms, modern business theory trains executives not to be good at something, or to make a company based on their particular skills, but to "find a market opportunity" and exploit it. The Chief Executive — who makes over 300 times more than their average worker — is no longer a leadership position, but a kind of figurehead measured on their ability to continually grow the market capitalization of their company. It is a position inherently defined by its lack of labor, the amorphousness of its purpose and its lack of any clear responsibility.

I urge you to read the complete write up.

I want to highlight some assertions (possibly factoids) which I found interesting. I shall, of course, offer a handful of observations.

First, I noted this statement:

When the leader of a company doesn’t participate in or respect the production of the goods that enriches them, it creates a culture that enables similarly vacuous leaders on all levels.

Second, this statement:

Management has, over the course of the past few decades, eroded the very fabric of corporate America, and I’d argue it’s done the same in multiple other western economies, too.

Third, this quote from a “legendary” marketer:

As the legendary advertiser Stanley Pollitt once said, “bullshit baffles brains.”

Fourth, this statement about large language models, the next big thing after quantum, of course:

A generative output is a kind of generic, soulless version of production, one that resembles exactly how a know-nothing executive or manager would summarise your work.

And, fifth, this comment:

By chasing out the people that actually build things in favour of the people that sell them, our economy is built on production puppetry — just like generative AI, and especially like ChatGPT.

More little nuggets nestle in the write up; it is about 13,000 words. (No, I did not ask Copilot to count the words. I am a good estimator of text length.) It is now time for my observations:

  1. I am not sure the leadership is vacuous. The leadership does what it learned, knows how to do, and obtained promotions for just being “authentic.” One leader at the blue chip consulting firm at which I learned to sell scope changes built pianos in his spare time. He knew how to do that: build a piano. He also knew how to sell scope changes. The process is one that requires a modicum of knowledge and skill.
  2. I am not sure management has eroded the “fabric.” My personal view is that accelerated flows of information have blasted certain vulnerable types of constructs. The result is leadership that does many of the things spelled out in the write up. With no buffer between thinking big thoughts and doing work, the construct erodes. Rebuilding is not possible.
  3. Mr. Pollitt was a marketer. He is correct, and that marketing mindset is in the catbird seat.
  4. Generative AI outputs what is probably an okay answer. Those who were happy with a “C” in school will find the LLM a wonderful invention. That alone may make further erosion take place more rapidly. If I am right about information flows, the future is easy to predict, and it is good for a few and quite unpleasant for many.
  5. Being able to sell is the top skill. Learn to embrace it.

Stephen E Arnold, May 29, 2025

A Grok Crock: That Dog Ate My Homework

May 29, 2025

Just the dinobaby operating without Copilot or its ilk.

I think I have heard Grok (a unit of xAI, I think) explain that outputs have been the result of a dog eating the code or whatever. I want to document these Grok Crocks. Perhaps I will put them in a Grok Pot and produce a list of recipes suitable for middle school and high school students.

The most recent example of “something just happened” appears in “Grok Says It’s ‘Skeptical’ about Holocaust Death Toll, Then Blames Programming Error.” Does this mean that smart software is programming Grok? If so, the explanation should be worded, “Grok hallucinates.” If a human wizard made a programming error, then the statement should be that quality control will become Job One. That worked for Microsoft until Copilot became the go-to task.

The cited article stated:

Grok said this response was “not intentional denial” and instead blamed it on “a May 14, 2025, programming error.” “An unauthorized change caused Grok to question mainstream narratives, including the Holocaust’s 6 million death toll, sparking controversy,” the chatbot said. Grok said it “now aligns with historical consensus” but continued to insist there was “academic debate on exact figures, which is true but was misinterpreted.” The “unauthorized change” that Grok referred to was presumably the one xAI had already blamed earlier in the week for the chatbot’s repeated insistence on mentioning “white genocide” (a conspiracy theory promoted by X and xAI owner Elon Musk), even when asked about completely unrelated subjects.

I am going to steer clear of the legality of these statements and the political shadows these Grok outputs cast. Instead, let me offer a few observations:

  1. I use a number of large language models. I have used Grok exactly twice. The outputs had nothing of interest for me. I asked, “Can you cite X.com messages?” The system said, “Nope.” I tried again after Grok 3 became available. Same answer. Hasta la vista, Grok.
  2. The training data, the fancy math, and the algorithms determine the output. Since current LLMs rely on Google’s big idea (a minimal sketch of that idea appears after this list), one would expect the outputs to be similar. Outlier outputs like these alleged Grokings are a bit of a surprise. Perhaps someone at Grok could explain exactly why these outputs are happening. I know dogs could eat homework. The event is highly unlikely in my experience, although I had a dog which threw up on the typewriter I used to write a thesis.
  3. I am a suspicious person. Grok makes me suspicious. I am not sure marketing and smarmy talk can reduce my anxiety about Grok providing outlier content to middle school, high school, college, and “I don’t care” adults. Weaponized information, in my opinion, is just that: a weapon. Dangerous stuff.
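For readers who wonder what “Google’s big idea” is: the transformer architecture from the 2017 Google paper “Attention Is All You Need,” whose core operation is scaled dot-product attention. The Python sketch below is a toy illustration of that operation only; it is not Grok’s code, and real models add multi-head projections, masking, positional encodings, and billions of parameters.

```python
# Minimal sketch of scaled dot-product attention, the core of the
# transformer architecture. Illustrative toy only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Blend value vectors V according to query/key similarity."""
    d_k = Q.shape[-1]                            # key dimension
    scores = Q @ K.T / np.sqrt(d_k)              # similarity scores
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: four tokens with eight-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```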

Net net: Is the dog eating homework one of the Tesla robots? If so, speak with the developers, please. An alternative would be to use Claude 3.7 or Gemini to double check Grok’s programming.

Stephen E Arnold, May 29, 2025

Telegram and xAI: Deal? What Deal?

May 29, 2025

Just a dinobaby and no AI: How horrible an approach?

What happens when two people with a penchant for spawning babies seem to sort of, mostly, well, generally want a deal? On May 28, 2025, one of the super humans suggested a deal existed between the Telegram company and the xAI outfit. Money and equity would change hands. The two parties were in sync. I woke this morning to an email that said, “No deal signed.”

The Kyiv Independent, a news outfit that pays close attention to Telegram because of the “special operation”, published “Durov Announces Telegram’s Partnership with Musk’s xAI, Who Says No Deal Signed Yet.” The story reports:

Telegram and Elon Musk’s xAI will enter a one-year partnership, integrating the Grok chatbot into the messaging app, Telegram CEO Pavel Durov announced on May 28. Musk, the world’s richest man who also owns Tesla and SpaceX, commented that "no deal has been signed," prompting Durov to clarify that the deal has been agreed in "principle" with "formalities pending." "This summer, Telegram users will gain access to the best AI technology on the market," Durov said.

The write up included an interesting item of information; to wit:

Durov has claimed he is a pariah and has been effectively exiled from Russia, but it was reported last year that he had visited Russia over 60 times since leaving the country, according to Kremlingram, a Ukrainian group that campaigns against the use of Telegram in Ukraine.

Mr. Musk, the mastermind behind a large exploding space vehicle, and Mr. Durov have much to gain from a linkage. Telegram, like Apple, is not known for its smart software. Third party bots have made AI services available to Telegram’s more enterprising users. xAI, which has made modest progress on its path to becoming the “everything” app, might benefit from getting front and center with the Telegram user base.

Both individuals are somewhat idiosyncratic. Both have interesting technology. Both present themselves as bright, engaging, and often extremely confident professionals.

What’s likely to happen? With two leaders with much in common, Grok or another smart software will make its way to the Telegram faithful. When that happens is unknown. The terms of the “deal” (if one exists) are marketing or jockeying as of May 29, 2025. The timeline for action is fuzzy. 

What’s obvious is that volatility and questionable information shine the spotlight on both forward-leaning companies. The Telegram information distracts a bit from the failed rocket. Good for Mr. Musk. The Grok deal distracts a bit from the French-styled dog collar around Mr. Durov’s neck. Good for Mr. Durov.

When elephants fight, grope, and deal, the grass may take a beating. When the dust settles, what are these elephants doing? The grass has been stomped upon, but the beasties?

Stephen E Arnold, May 29, 2025

xAI and Telegram: What Will the Durovs Do? The Clock Is Ticking

May 28, 2025

Just a dinobaby and no AI: How horrible an approach?

One of my colleagues called my attention to the Coindesk online service’s article “Telegram Signs $300M Deal with Elon Musk’s xAI to Integrate Grok into Its Messaging App, TON up 16%.” The subtitle is interesting:

Telegram Will Also Receive 50% of Revenue from xAI Subscriptions Sold via the App

If one views Telegram as a simple messaging app, Telegram itself has not done much to infuse its “mini app” with AI functions. However, Telegram bot developers have. Dozens of bots include AI features. The most popular smart software among bot developers is, based on my team’s research, a toss up between open source AI and ChatGPT. If our information is correct, Elon Musk now has a conduit to the Telegram user base. Depending on what source you select, Telegram has 900 million to one billion users. How many are humans with an actual mobile phone number? We don’t know, and I am not sure law enforcement knows until the investigators try to match a mobile number with a person, a company, or some mysterious offshore entity with offices in the Seychelles or a similarly flexible nation.
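How do bot developers wire smart software into Telegram? A minimal sketch follows, using two documented Telegram Bot API methods (getUpdates and sendMessage). The bot token and the ask_llm function are placeholders I invented for illustration; they are not details of any bot mentioned here.

```python
# Minimal sketch: a Telegram bot that forwards user messages to an AI
# model and returns the reply. Token and ask_llm() are hypothetical.
import time
import requests

BOT_TOKEN = "123456:replace-with-a-real-token"   # placeholder token
API = f"https://api.telegram.org/bot{BOT_TOKEN}"

def ask_llm(prompt: str) -> str:
    # Stand-in for a call to ChatGPT, Grok, or an open source model.
    return f"(model reply to: {prompt})"

def run_bot() -> None:
    offset = None
    while True:
        # Long-poll Telegram's standard getUpdates endpoint.
        updates = requests.get(f"{API}/getUpdates",
                               params={"timeout": 30, "offset": offset},
                               timeout=40).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1     # acknowledge the update
            message = update.get("message") or {}
            text = message.get("text")
            if text:
                requests.post(f"{API}/sendMessage",
                              json={"chat_id": message["chat"]["id"],
                                    "text": ask_llm(text)})
        time.sleep(1)

if __name__ == "__main__":
    run_bot()
```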

The write up says:

Telegram founder Pavel Durov, revealed on X, that the two companies agreed to a 1-year partnership that would see Telegram receive $300 million in cash and equity from xAI, in addition to 50% of revenues from xAI subscriptions sold via Telegram.

Let’s pull out the allegedly true factoids:

  1. The deal is a one-year partnership. In the world of the French judiciary, one year can be a generous amount of time to derail the Telegram operation. Mr. Durov’s criticism of France with regard to the Romanian elections and increasing criticism of the French government may add risk to the xAI deal. With Pavel Durov in France between August 2024 and March 2025, Telegram’s pace of innovation stalled on STARs token fiddling, not AI.
  2. Mr. Musk’s willingness to sign up a sales channel for Grok may be related to the prevalence of Sam Altman’s AI system in third-party bots for customer support and performing a steadily increasing range of Telegram-centric functions. Because Telegram’s approach to messaging allows bots to move across boundaries between blockchains as well as traditional Web services, Telegram’s bot ecosystem should deliver, Mr. Musk hopes, an alternative AI to bot developers and provide a new source of users to the Grok smart software.
  3. The “equity” angle is interesting. Equity in what? xAI or some other property? Perhaps — just perhaps — Mr. Musk becomes a stakeholder in Telegram. Mr. Musk wants to convert X.com into an “everything” service, a dream shared with Sam Altman. Mr. Altman is not a particularly enthusiastic supporter of Mr. Musk. Mr. Musk is equally disenchanted with Mr. Altman. The love triangle will be interesting to observe as the days tick toward the end of the one-year tie up between Telegram and xAI.

Another angle on the deal was offered by the online information service Watcher.Guru. “Elon Musk’s xAI Joins Telegram in $300M Grok Partnership” speculates:

This integration has addressed several critical pain points that crypto users face across multiple essential areas daily. Many people find blockchain technology overwhelming, and the complexity often prevents them from fully engaging with digital assets right now. By leveraging AI assistance directly within Telegram, users can get help with crypto-related questions, market analysis, and blockchain education without leaving their messaging app. The AI integration revolutionizes security by providing tools that identify crypto scams. This becomes valuable given how scams prevail on messaging platforms.

The cited paragraph makes clear that convergence is coming among smart software, social media services with hefty user counts, and crypto currency. However, the idea that smart software will prevent fraud causes me to chortle. Crypto is, in my opinion, a fraudulent enterprise. Mashing up the Telegram system with X.com binds a range of alleged criminal activities to a communications system that can be shaped to promote quite specific propaganda. Toss in crypto, and what do you get? Answer: More cyber crime.

Will this union create a happy, sunny user experience free from human trafficking, online gambling, and the sale of contraband? One can only hope, but this tie up has to prove that it delivers a positive, constructive user experience. When Sam Altman releases his everything app, will X.com be positioned to be a worthy competitor? Will Elon Musk purchase Telegram and compete with proven technology, a large user base, and a team of core engineers able to create a slam dunk product and service?

Good questions. Unlike Watcher.Guru’s observation that “AI integration revolutionizes security”, the disposition of the deal between Messrs. Durov and Musk is unknown. (How can AI integration revolutionize security when the services are not yet integrated?) Oh, well, close enough for horseshoes.

Stephen E Arnold, May 28, 2025

Traditional Publishers Hallucinate More Than AI Systems

May 28, 2025

Just the dinobaby operating without Copilot or its ilk.

I sincerely hope that the information presented in “Major Papers Publish AI-Hallucinated Summer Reading List Of Nonexistent Books” is accurate. The components of this “real” news story are:

  1. A big time newspaper syndicator
  2. A “real” journalist / writer allegedly named Marco Buscaglia
  3. Smart software bubbling with the type of AI goodness output by Google-type outfits desperate to make their big bets on smart software pay off
  4. Humans who check “facts” — real or hallucinated.

Blend these together in an information process like the one operated at the Sun-Times in the city with big shoulders, and what do you get?

In an embarrassing episode that will help aggravate society’s uneasy relationship with artificial intelligence, the Chicago Sun-Times, Philadelphia Inquirer and other newspapers around the country published a summer-reading list where most of the books were entirely made up by ChatGPT. The article was licensed content provided by King Features Syndicate, a subsidiary of Hearst Newspapers. Initial reporting of the bogus list focused on the Sun-Times, which two months earlier announced that 20% of its staff had accepted buyouts as the paper staggers under a dying business model. However, several other newspapers also ran the syndicated article, which was part of a package of summer-themed content called "Heat Index." 

What happened? The editorial process and the “real” journalist did their work. The editorial process involved using smart software to create a list of must-read books. The real journalist converted the raw list into a formatted presentation of books you, gentle reader, must consume whilst reclining in a beach lounger or crunched into a customer-first airplane seat.

The cited write up explains the slip twixt cup and lip or lips:

As the scandal quickly made waves across traditional and social media, the Sun-Times — which not-so-accurately bills itself as "The Hardest-Working Paper in America" — raced to apologize while also trying to distance itself from the work. “This is licensed content that was not created by, or approved by, the Sun-Times newsroom, but it is unacceptable for any content we provide to our readers to be inaccurate,” a spokesperson said. In a separate post to its website, the paper said, "This should be a learning moment for all of journalism.” Meanwhile, the Inquirer’s CEO Lisa Hughes told The Atlantic, "Using artificial intelligence to produce content, as was apparently the case with some of the Heat Index material, is a violation of our own internal policies and a serious breach.”

The kindergarten smush up inspires me to offer several observations:

  1. Editorial processes require editors who pay attention, know or check facts, and think about how to serve their readers
  2. Writers need to do old-fashioned work like read books, check with sources likely to be sort of correct, and invest time in their efforts
  3. Readers need to recognize that this type of information baloney can be weaponized. Shaped information will do far more harm than provide a good laugh.

Outstanding. My sources tell me that the “real” news about this hallucinating shirk off is mostly accurate.

Stephen E Arnold, May 28, 2025

China Slated To Overtake US In AI Development. How about Bypass?

May 28, 2025

China was scheduled to become the world’s top performing economy by now. This was predicted in the early 2000s, but the Middle Kingdom has experienced some roadblocks. Going through all of them would require an entire class on world history and economics. We don’t have time for that because SCMP says, “China To Harness Nation’s Resources To AI Self-Reliance Ambitions."

Winnie the Pooh a.k.a. President Xi Jinping told the Communist Party’s inner circle that he plans to stimulate AI theory and core technologies. Xi wants to leverage his country’s “new whole national system” to repair bottlenecks like high-end chips. The “new whole national system” is how the Communist Party describes directing resources towards national strategic goals.

Xi is desperate for China to overtake the US in AI development. This pipe dream was crushed when the US placed tariffs on Chinese goods. While the tariff war is on hiatus for a few months, the pause does not give China the leg up it desperately needs.

Xi said:

“‘We must acknowledge the technological gap, redouble our efforts to comprehensively push forward technological innovation, industrial development and applications, and the AI regulatory system,’ state news agency Xinhua quoted Xi as saying. ‘[China should] continue to strengthen basic research, and concentrate on conquering core technologies such as high-end chips and basic software, so as to build an independent, controllable, and collaborative AI basic software and hardware system. ‘[We should then] use AI to lead the paradigm shift in scientific research and accelerate scientific and technological innovation breakthroughs in various fields.’”

So said Winnie the Pooh. He’s searching for that irresistible pot of honey while dealing with US and Trump bumblebees. Maybe if he disguises himself as a little black raincloud instead of a “weather balloon,” he might advance further in AI. However, some tension in the military may lead to a bit of choppy weather in what is supposed to be a smooth, calm sea of agreement.

Let’s ask Deepseek.

Whitney Grace, May 28, 2025

SEO Dead? Nope, Just Wounded But Will Survive Unfortunately

May 27, 2025

SEO or search engine optimization is one of the forces that killed old fashioned precision and recall. Precision morphed from presenting on-point sources to smashing a client’s baloney content into a searcher’s face. Recall was once a metric indicating how thoroughly a query was passed across the available processed content. Now it means, “Buy, believe, and baloney information.”

The write up “The Future of SEO As the Future Google Search Rolls Out” explains:

“Google isn’t going to keep its search engine the way it was for the past two decades. Google knows it has to change, despite them making an absolute fortune from search ads. Google is worried about TikTok, worried about, ChatGPT, worried about searchers going to something new and better.”

These paragraphs make clear that SEO is not going to its grave without causing some harm to the grave diggers:

“There are a lot of concerned people in the search marketing industry right now. The bottom line is while many of us like to complain and we honestly have good reason to be upset, complaining won’t help. We need to adapt and change and experiment. Experiment with these new experiences, keep on top of these changes happening in Google and at other AI and search companies. Then try new things and keep testing. If you do not adapt, you will die. SEO won’t die, but you will become irrelevant. The good news, SEOs are some of the best at adapting, embracing change and testing new strategies out. So you are all ready and equipped for the future of search.”

Let me share some observations about this statement from the cited write up:

First, the SEO professionals are concerned. About relevance and returning precise, on-point information to the user? Are you kidding me? SEO professionals are worried about making money. After Google used SEOs as part of its push to sell ads, the SEO crowd is now wracked with uncertainty.

Second, adaptation is important. A failure to adapt means no money. Now the SEO professionals must embrace anxiety. Is stress good for SEO professionals? Probably not.

Third, SEO professionals with 20 years of experience must experiment. Are these individuals equipped to head to the innovation space and crank out new ways to generate money? A few will be able to play the old dog that learns to roll over on late night television. Most, well, will struggle to get up or die trying.

What’s my prediction for the future of SEO? Snake oil vendors are part of the data carnival. “Ladies and gentlemen, get your cure for no traffic here. Step right up.”

Stephen E Arnold, May 27, 2025

Coincidence or No Big Deal for the Google: User Data and Suicide

May 27, 2025

Just the dinobaby operating without Copilot or its ilk.

I have ignored most of the carnival noise about smart software. Google continues its bug spray approach to thwarting the equally publicity-crazed Microsoft and OpenAI. (Is Copilot useful? Is Sam Altman the heir to Steve Jobs?)

Two stories caught my attention. The first is almost routine. Armed with the Chrome Hoover, long-lived cookies, and the permission-hungry Android play, Google already holds the data. The Verge published “Google Has a Big AI Advantage: It Already Knows Everything about You.” Sigh. Another categorical affirmative: “everything.” Is that accurate, or is it just a scare tactic to draw readers? Old news.

But the subtitle is more interesting; to wit:

Google is slowly giving Gemini more and more access to user data to ‘personalize’ your responses.

Slowly. Really? More access? More than what? And “your responses?” Whose?

The write up says:

As an example, Google says if you’re chatting with a friend about road trip advice, Gemini can search through your emails and files, allowing it to find hotel reservations and an itinerary you put together. It can then suggest a response that incorporates relevant information. That, Google CEO Sundar Pichai said during the keynote, may even help you “be a better friend.” It seems Google plans on bringing personal context outside Gemini, too, as its blog post announcing the feature says, “You can imagine how helpful personal context will be across Search, Gemini and more.” Google said in March that it will eventually let users connect their YouTube history and Photos library to Gemini, too.

No kidding. How does one know that Google has not been processing personal data for decades? There’s a patent with a cute machine-generated profile of Michael Jackson. This report generated by Google appeared in the 2007 patent application US2007/0198481:


The machine generated bubble gum card about Michael Jackson, including last known address, nicknames, and other details. See US2007/0198481 A1, “Automatic Object Reference Identification and Linking in a Browsable Fact Repository.”

The inventors Andrew W. Hogue (Ho Ho Kus, NJ) and Jonathan T. Betz (Summit, NJ) appear on the “final” version of their invention. The name of the patent was the same, but there was an important difference between the patent application and the actual patent. The machine-generated personal profile was replaced with a much less informative screen capture; to wit:


From Google Patent 7774328, granted in 2010 as “Browsable Fact Repository.”

Google wasn’t done “inventing” enhancements to its profile engine capable of outputting bubble gum cards for either authorized users or Google systems. Check out Extension US9760570 B2, “Finding and Disambiguating References to Entities on Web Pages.” The idea is that items like “aliases” and similarly opaque factoids can be made concrete for linking to cross-correlated content objects.
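To make the disambiguation idea concrete, here is a tiny hypothetical Python sketch: mentions are mapped to canonical entities via an alias table, with context words breaking ties. The table, hint sets, and scoring are my own inventions for illustration; the method described in US9760570 B2 is far more elaborate.

```python
# Hypothetical sketch of alias disambiguation: map a mention to a
# canonical entity, using context overlap to resolve ambiguity.
from collections import Counter

ALIASES = {                       # invented alias table
    "king of pop": "Michael Jackson",
    "mj": "Michael Jackson",
    "michael jackson": "Michael Jackson",
    "mick jagger": "Mick Jagger",
}

CONTEXT_HINTS = {                 # invented context vocabularies
    "Michael Jackson": {"thriller", "neverland", "moonwalk"},
    "Mick Jagger": {"rolling", "stones", "satisfaction"},
}

def resolve(mention: str, context: str) -> str:
    """Return the canonical entity for a mention, given nearby text."""
    candidate = ALIASES.get(mention.lower())
    if candidate:
        return candidate
    # Unknown mention: score entities by overlap with context words.
    words = set(context.lower().split())
    scores = Counter({entity: len(words & hints)
                      for entity, hints in CONTEXT_HINTS.items()})
    entity, _ = scores.most_common(1)[0]
    return entity

print(resolve("the singer", "thriller and the moonwalk"))  # Michael Jackson
```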

Thus, the “everything” assertion, while a categorical affirmative, reveals a certain innocence on the part of the Verge “real news” story.

Now what about the information in “Google, AI Firm Must Face Lawsuit Filed by a Mother over Suicide of Son, US Court Says.” The write up is from the trusted outfit Thomson Reuters (I know it is trusted because it says so on the Web page). The write up dated May 21, 2025, reports:

The lawsuit is one of the first in the U.S. against an AI company for allegedly failing to protect children from psychological harms. It alleges that the teenager killed himself after becoming obsessed with an AI-powered chatbot. A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm." Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage Character.AI’s app or any component part of it."

Absent from the Reuters report and the allegedly accurate Google and semi-Google statements is any indication that the company takes steps to protect users, especially children. With the profiling and bubble gum card technology Google invented, does it not seem prudent for Google to identify a child, cross-correlate the child’s queries with the bubble gum card, and dynamically [a] flag an issue, [b] alert a parent or guardian, or [c] use the “everything” information to present suggestions for mental health support? I want to point out that if one searches for words on a stop list, the Dark Web search engine Ahmia.fi presents a page providing links to Clear Web resources to assist the person with counseling. Imagine: a Dark Web search engine performing a function specifically intended to help users.
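The stop-list intervention attributed to Ahmia.fi above is simple in principle: match the query against flagged terms and, on a hit, surface help resources instead of results. The terms and help text in this minimal Python sketch are illustrative assumptions, not Ahmia’s actual list or wording.

```python
# Minimal sketch of a stop-list intervention. Terms and notice text
# are illustrative placeholders.
SELF_HARM_TERMS = {"suicide", "self harm", "kill myself"}  # hypothetical

HELP_NOTICE = ("If you are struggling, free and confidential support "
               "is available. Consider contacting a local crisis line.")

def handle_query(query: str) -> str:
    q = query.lower()
    if any(term in q for term in SELF_HARM_TERMS):
        return HELP_NOTICE        # intervene before showing results
    return f"search results for: {query}"

print(handle_query("how to tie a bowline"))
print(handle_query("suicide methods"))
```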

Google, is Ahmia.fi more sophisticated than you and your quasi-Googles? Are the statements made about Google’s AI capabilities in line with reality? My hunch is that statements like “Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are ‘entirely separate’ and that Google ‘did not create, design, or manage Character.AI’s app or any component part of it’” made after the presentation of evidence were not compelling. (Compelling is a popular word in some AI generated content. Yeah, compelling: a kid’s death. Inventions by Googlers specifically designed to profile a user, disambiguate disparate content objects, and make available a bubble gum card. Yeah, compelling.)

I am optimistic that Google knowing “everything,” the death of a child, a Dark Web search engine that can intervene, and the semi-Google lawyers add up to comfort and support.

Yeah, compelling. Google’s been chugging along in the profiling vineyard since 2007. Let’s see: that works out to longer than the 14-year-old had been alive.

Compelling? Nah. Googley.

Stephen E Arnold, May 27, 2025
