Google Wears a Necklace and Sneakers with Flashing Blue LEDs. Snazzy.

April 15, 2025

No AI. Just an old dinobaby pointing out some exciting developments in the world “beyond search.”

I can still see the flashing blue light in Aisle 7. Yes, there goes the siren. K-Mart in Central Illinois was running a big sale on underwear. My mother loved those “blue light specials.” She would tell me as I covered my eyes and ears, “I don’t want to miss out.” Into the scrum she would go, emerging with two packages of purple boxer shorts for my father. He sat in the car while my mother shopped. I accompanied her because that’s what sons in Central Illinois do. I wonder if procurement officials are familiar with blue light specials. The sirens in DC wail 24×7.

[OpenAI-generated illustration]

Thanks, OpenAI. You produced a good enough illustration. A first!

I thought about K-Mart when I read “Google Slashes Business Software Prices for US Federal Agencies.” I see that flickering blue light as I type this short blog post. The trusted “real” news source reports:

Google will offer steep discounts to U.S. federal agencies for its business apps package as the company looks to capitalize on the Trump administration’s cost-cutting push and chip away at Microsoft’s longstanding grip on the government software market.

Yep, discounts. Now Microsoft has some traction in the US government. I cannot imagine what life would be like for aides to a senior Pentagon official if he did not have nifty PowerPoint presentations. Perhaps offering a deal will get some Microsoft aficionados to learn to live without Excel and Word? I don’t know, but Google is giving the “discount” method a whirl.

What’s up with Google? I think someone told me that Gemini 2.5 was free. Now a discount on GSA-listed services which could amount to $2 billion in savings … if — yes, that magic word — if the US government dumps the Softies’ outstanding products for the cloudy goodness of the Google’s way. Yep, “if.”

I have a cute anecdote about Google and the US government from the year 2000, but, alas, I cannot share it. Trust me. It is a knee slapper. And, no, it is not about Sergey wearing silver sparkle sneakers to meetings with US elected officials. Those were indeed eye catchers among shoes with toes that looked like potatoes.

Several observations:

  1. Google, like Amazon, is trying to obtain US government business. I think the flashing blue lights, if I were still working in the hallowed halls, would impair my vision. Price cutting seems to be the one true way right now.
  2. Will lower prices have an impact on US government procurement? I am not sure. The procurement process chugs along every day and in quite predictable ways. How long does it take to turn a battleship? Assuming the captain can pull off the maneuver without striking a small fishing boat, of course.
  3. Google seems to think that slashing prices for its “products” will boost sales. My understanding of Google is that its sale to government agencies pivots on several characteristics; for example, [a] listening and understanding what government professionals say, [b] providing a modicum of customer support or at the very least answering a phone call from a government professional, and [c] delivering products that the aides, assistants, and contractors understand and can use to crank out documents with numbered lines, dense charts, and bullet points that mostly stay in place after a graphic is inserted.

To sum up, I find the idea of price cuts interesting. My initial reaction is that price cuts and procurement are not necessarily lined up procedurally. But I am a dinobaby. But after 50 years of “government” work I have a keen desire to see if the Google can shine enough blue lights to bedazzle people involved in purchasing software to keep the admirals happy. (I speak from a little experience working with the late Admiral Craig Hosmer, R-Calif. whom I thank for his service.)

Stephen E Arnold, April 15, 2025

AI Horn Honking: Toot for Refact

April 10, 2025

What is one of the things we were taught in kindergarten? Oh, right. Humility. That, however, doesn’t apply when you’re in a job interview, selling a product, or writing a press release. A Dev.to post announces that their open source AI agent for programming in IDEs ranked first: “Our AI Agent + 3.7 Sonnet Ranked #1 On Aider’s Polyglot Bench — A 76.4% Score.”

As the title says, Dev.to’s open source AI programming agent scored 76.4%. The agent is called Refact.ai and was upgraded with 3.7 Sonnet. It outperformed other AI agents, including Claude, Deepseek, ChatGPT, GPT-4.5 Preview, and Aider.

Refact.ai does better than the others because it is an intuitive AI agent. It uses a feedback loop to create a self-learning, auto-correcting AI agent:

• “Writes code: The agent generates code based on the task description.

• Fixes errors: Runs automated checks for issues.

• Iterates: If problems are found, the agent corrects the code, fixes bugs, and re-tests until the task is successfully completed.

• Delivers the result, which will be correct most of the time!”
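The quoted write/check/iterate loop is a standard agent pattern and can be sketched in a few lines of Python. The three helpers below are toy stand-ins invented for illustration; they are not Refact.ai’s actual API:

```python
# Minimal sketch of the generate/check/fix loop described in the post.
# generate_code, run_checks, and fix_code are toy stand-ins for calls
# into an LLM and a test harness; they are illustrative assumptions.

def generate_code(task: str) -> str:
    # Pretend the model's first draft contains a bug.
    return f"# solution for: {task}\nBUG"

def run_checks(code: str) -> list[str]:
    # Automated checks return a list of issues (empty means success).
    return ["syntax error"] if "BUG" in code else []

def fix_code(code: str, issues: list[str]) -> str:
    # Pretend the model repairs the reported issue.
    return code.replace("BUG", "pass")

def agent_loop(task: str, max_iterations: int = 5) -> str:
    code = generate_code(task)          # 1. write code from the task description
    for _ in range(max_iterations):
        issues = run_checks(code)       # 2. run automated checks
        if not issues:
            break                       # 4. deliver the result
        code = fix_code(code, issues)   # 3. correct, then re-test
    return code
```

The iteration budget matters in practice: without a cap, an agent that never converges would loop forever.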

Dev.to has good reasons to pat itself on the back. Hopefully they will continue to develop and deliver high-performing AI agents.

Whitney Grace, April 10, 2025

AI Addicts Are Now a Thing

April 9, 2025

Hey, pal, can you spare a prompt?

Gee, who could have seen this coming? It seems one can become dependent on a chatbot, complete with addiction indicators like preoccupation, withdrawal symptoms, loss of control, and mood modification. "Something Bizarre Is Happening to People Who Use ChatGPT a Lot," reports The Byte. Writer Noor Al-Sibai cites a recent joint study by OpenAI and MIT Media Lab as she writes:

"To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also to study what kinds of ‘affective cues,’ which was defined in a joint summary of the research as ‘aspects of interactions that indicate empathy, affection, or support,’ they used when chatting with it. Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a ‘friend.’ The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too. Add it all up, and it’s not good. In this study as in other cases we’ve seen, people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI — and where that leads could end up being sad, scary, or somewhere entirely unpredictable."

No kidding. Interestingly, the study found those who use the bot as an emotional or psychological sounding board were less likely to become dependent than those who used it for "non-personal" tasks, like brainstorming. Perhaps because the former are well-adjusted enough to examine their emotions at all? (The privacy risks of sharing such personal details with a chatbot are another issue entirely.) Al-Sibai emphasizes the upshot of the research: The more time one spends using ChatGPT, the more likely one is to become emotionally dependent on it. We think parents, especially, should be aware of this finding.

How many AI outfits will offer free AI? You know. Just give folks a taste.

Cynthia Murrell, April 9, 2025

Bye-Bye Newsletters, Hello AI Marketing Emails

April 4, 2025

Adam Ryan takes aim at newsletters in the Work Week article, “Perpetual: The Major Shift of Media.” Ryan starts the article by saying we’re already in a changing media landscape, and if you’re not preparing, you will be left behind. He then dives into more detail, explaining that the latest trend setter is the email newsletter. From his work in advertising, Ryan has seen newsletters rise from the bottom of the food chain to million-dollar marketing tools.

He explains that newsletters becoming important marketing tools wasn’t an accident; it happened through a democratization process. By democratization Ryan means that newsletters became easier to make through the use of simplification software. He uses the example of Shopify streamlining e-commerce and Beehiiv doing the same for newsletters. Another example is Windows making PCs easier to use with its intuitive UI.

Continuing with the Shopify example, Ryan says that mass adoption of the e-commerce tool has flooded the marketplace. Top brands that used to dominate the market are now overshadowed by competition. In short, everyone and the kitchen sink is selling goods and services.

Ryan says that the newsletter trend is about to shift and people (operators) who solely focus on this trend will fall out of favor. He quotes Warren Buffet: “Be fearful when others are greedy, and be greedy when others are fearful.” Ryan continues that people are changing how they consume information and they want less of it, not more. Enter the AI tool:

“Here’s what that means:

• Email open rates will drop as people consume summaries instead of full emails.

• Ad clicks will collapse as fewer people see newsletter ads.

• The entire value of an “owned audience” declines if AI decides what gets surfaced.”

It’s not the end of the line for newsletters if you become indispensable: create content that can’t be summarized, build relationships beyond email, and don’t be a commodity:

“This shift is coming. AI will change how people engage with email. That means the era of high-growth newsletters is ending. The ones who survive will be the ones who own their audience relationships, create habit-driven content, and build businesses beyond the inbox.”

This is true of every major change, not just newsletters.

Whitney Grace, April 4, 2025

The AI Market: The Less-Educated

April 2, 2025

Writing is an essential function of education and communication. Writing is partly an innate skill and partly one that can be cultivated through dedicated practice. Digital writing tools, from spelling and grammar checkers to AI like Grammarly and ChatGPT, have influenced writing. Stanford University studied how AI writing tools have affected writing in professional industries. The researchers discovered that less-educated parts of the US rely heavily on AI. Ars Technica reviews the study in “Researchers Surprised To Find Less-Educated Areas Adopting AI Writing Tools Faster.”

Stanford’s AI study tracked LLM adoption from January 2022 to September 2024 with a dataset that included US Consumer Financial Protection Bureau consumer complaints, corporate press releases, job postings, and UN press releases. The researchers used a statistical detection system that tracked word usage patterns. The system found that 14-24% of these communications showed AI assistance. The study also found an interesting pattern:

“The study also found that while urban areas showed higher adoption overall (18.2 percent versus 10.9 percent in rural areas), regions with lower educational attainment used AI writing tools more frequently (19.9 percent compared to 17.4 percent in higher-education areas). The researchers note that this contradicts typical technology adoption patterns where more educated populations adopt new tools fastest.”

The researchers theorize that AI writing tools serve as equalizers for less-educated individuals. They also noted that AI writing tools are being adopted either because the market is saturated or because the LLMs are becoming more advanced. It will be difficult to distinguish between human- and machine-written text. They predict negative outcomes from this:

“ ‘The growing reliance on AI-generated content may introduce challenges in communication,’ the researchers write. ‘In sensitive categories, over-reliance on AI could result in messages that fail to address concerns or overall release less credible information externally. Over-reliance on AI could also introduce public mistrust in the authenticity of messages sent by firms.’”

It’s not good to blindly trust AI, especially with the current state of datasets. Can you imagine the critical thinking skills these future leaders and entrepreneurs will develop? On that thought, what will happen to imagination?
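The detection idea behind the study, tracking words that LLM output tends to overuse, can be sketched in a few lines. This is a toy illustration only: the marker words and threshold below are invented, and the Stanford team’s statistical model is far more sophisticated:

```python
# Toy illustration of frequency-based AI-text detection: check how
# often a document uses words LLM output is reputed to overuse.
# The word list and threshold are invented for this sketch; they are
# not the researchers' actual features or cutoffs.

AI_MARKER_WORDS = {"delve", "intricate", "tapestry", "pivotal", "showcase"}

def marker_rate(text: str) -> float:
    """Fraction of tokens that are suspected AI marker words."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,;:!?") in AI_MARKER_WORDS)
    return hits / len(words)

def looks_ai_assisted(text: str, threshold: float = 0.02) -> bool:
    # A real detector would model many signals, not one frequency.
    return marker_rate(text) >= threshold
```

Even this crude version hints at why the method only yields population-level estimates (the study’s 14 to 24 percent range) rather than verdicts on individual documents.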

Whitney Grace, April 2, 2025

Free AI Sites (Well, Mostly Free Sort of)

April 1, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

One of my team generated images of French bulldogs. After months of effort, he presented me with a picture of our French bulldog complete with one floppy ear. The image was not free. I pay for the service because free image generation systems work and then degrade because of the costs associated with doing smart software without oodles of cash.

Another person proudly emailed everyone a link to Best AI Websites and the page “Free AI Tools.” The interfaces, functionality, and the outputs vary. The linked Web page is a directory presented with some of that mobile interface zip.

There are more than 30 tools anyone can try. Here’s what the “directory” interface looks like:

[screenshot of the BestFreeAIWebsites directory interface]

The first click displays the BestFreeAIWebsites’ write up for each “service” or “tool.” Then a direct link to the free AI site is displayed. There is a “submit” button to allow those with a free AI tool to add theirs to the listing. The “add” function is a common feature of Telegram bot and Channel listings.

Here is a selection of the “free” services that are available as of March 28, 2025, in alphabetical order:

  1. HUUK.ai, a trip planner
  2. Metavoice at https://studio.themetavoice.xyz/, a “one click voice changer”
  3. Presentpicker.ai, a service to help a user choose a gift
  4. Remaker.ai, a face swap tool
  5. Yomii.app, a real estate investing assistant

ChatGPT appears numerous times in the list of “free” AI tools. Google shows up a couple of times with Bard and Gemini. The majority of the services “wrap” functionality around the big dogs in the LLM space.

Are these services “free”? Our view is that the “free” is a way to get people to give the services a try. If the experience is positive, upgrades are available.

As one of my team worked through the listings, he said, “Most of these services have been available as Telegram bots from other developers.” If he is correct, perhaps Telegram’s AI functions should be included in the listing?

Stephen E Arnold, April 1, 2025

The Chinese AI PR Keeps Flowing

March 27, 2025

Is China moving ahead in the AI race? Some seem to think so. Interesting Engineering reports, "‘World’s First’ Fully Autonomous AI Agent Unveiled in China, Handles Real-World Tasks." Writer Christopher McFadden tells us:

"A group of Chinese software engineers have developed what they have called the ‘world’s first’ fully autonomous artificial intelligence (AI) agent. Called ‘Manus,’ the AI agent can independently perform complex tasks without human guidance. Unlike AI chatbots like ChatGPT, Google’s Gemini, or Grok, which need human input to perform things, Manus can proactively make decisions and complete tasks independently. To this end, the AI agent doesn’t necessarily need to wait for instructions to do something. For example, if a human asks, ‘ Find me an apartment,’ Manus can conduct research, evaluate multiple factors (crime rates, weather, market trends), and provide tailored recommendations."

Apparently, Manus works like a contractor directing their subcontractors. We learn:

"Rather than using just one AI model, Manus operates like an executive managing multiple specialized sub-agents. This allows it to tackle complex, multi-step workflows seamlessly. Moreover, the AI agent can work asynchronously, meaning it completes tasks in the background and notifies users only when results are ready, without constant human supervision. This is a significant development; most AIs have relied heavily on humans to initiate tasks. Manus represents a shift toward fully independent AI, raising exciting possibilities and serious concerns about job displacement and responsibility."

A fully independent AI? Perhaps. If so, the escalated threat to human jobs may be real. Manus has some questioning whether the US is truly the unrivaled leader in the AI space. We shall see if the expectations pan out or are, once again, overblown.
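The “executive managing sub-agents” pattern the quote describes can be sketched with asyncio. The sub-agent names and subtasks below are invented for illustration; this is not Manus’s actual architecture or API:

```python
# Sketch of an executive that fans a request out to specialized
# sub-agents, runs them asynchronously in the background, and reports
# only once everything is done -- the pattern the article describes.

import asyncio

async def sub_agent(name: str, subtask: str) -> str:
    await asyncio.sleep(0)  # stand-in for real research work
    return f"{name} finished: {subtask}"

async def executive(request: str) -> list[str]:
    # Decompose the request into subtasks for specialized workers.
    subtasks = {
        "crime-agent": f"crime rates near {request}",
        "weather-agent": f"weather near {request}",
        "market-agent": f"market trends for {request}",
    }
    jobs = [sub_agent(name, task) for name, task in subtasks.items()]
    # gather() waits for all sub-agents, so the user is notified once,
    # at the end, rather than supervising each step.
    return await asyncio.gather(*jobs)

results = asyncio.run(executive("the requested apartment"))
```

The asynchronous fan-out is what distinguishes this from a chatbot turn loop: the human issues one request and the orchestration happens without further prompting.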

Cynthia Murrell, March 27, 2025

From $20 a Month to $20K a Month. Great Idea… or Not?

March 10, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

OpenAI was one of many smart software companies. If you meet the people on my team, you will learn that I dismissed most of the outfits as search-and-retrieval outfits looking for an edge. Search definitely needs an edge, but I was not confident that predictive generation of an “answer” was a solution. It was a nifty party trick, but then the money started flowing. In January 2023, Microsoft put Google’s cute sharp teeth on edge. Suddenly AI or smart software was the next big thing. The virtual reality thing did not ring the bell. The increasingly weird fiddling with mobile phones did not get the brass ring. And the idea of Apple becoming the next big thing in chips has left everyone confused. My M1 devices work pretty well, and unless I look at the label on the gizmos I cannot tell an M1 from an M3. Do I care? Nope.

But OpenAI became news. It squabbled with the mastermind of “renewable” satellites, definitely weird trucks, and digging tunnels in Las Vegas. (Yeah, nice idea, just not for anyone who does not want to get stalled in traffic.) When ChatGPT became available, one of those laboring in my digital vineyards signed me up. I fiddled with it and decided that I would run some of my research through the system. I learned that my research was not in the OpenAI “system.” I had it do some images. Those sucked. I will cancel this week.

I put in my AI folder this article “OpenAI Is Getting Ready to Release PhD Level AI Agents.” I was engaging in some winnowing and I scanned it. In early February 2025, Digital Marketing News wrote about PhD-level agents. I am not a PhD. I quit before I finished my dissertation to work in the really socially conscious nuclear unit of that lovable outfit Halliburton. You know the company. That’s the one that charged about $950.00 for a gallon of fuel during the Iraq war. You will also associate Dick Cheney, a fun person, with the company. So no PhD for me.

I was skeptical because of the dismal performance of ChatGPT 4, oh, whatever, trying to come up with the information I have assembled for my new book for law enforcement professionals. Then I read a Slashdot post with the title “OpenAI Plots Charging $20,000 a Month For PhD-Level Agents” shared from a publication I don’t know much about. I think it is like 404 or a for-fee Substack. The publication has great content, and you have to pay for it.

Be that as it may, the Slashdot post reports or recycles information suggesting the fee for a PhD-level version of OpenAI’s smart software will be a modest $20,000 a month. I think the service one of my team registered for costs $20.00 per month. What’s with the 20s? Twenty is a pronic number; that is, it can be slapped on a high school math test so students can say it is the product of two consecutive integers. In college I knew a person who was a numerologist. I recall that the meaning of 20 was cooperation.

The interesting part of the Slashdot post was the comments. I scanned them and concluded that some of the commenters saw the high-end service killing jobs for high-end programmers and consultants. Yeah, maybe. Somehow the notion that a code base which struggles with information related to a widely used messaging application is suddenly going to replicate the information I have obtained from my sources in Eastern Europe seems a bit of a stretch. Heck, ChatGPT could barely do English. Russian? Not a chance, but who knows. And for $20,000 a month it is not likely this dinobaby will take what seems like unappetizing bait.

One commenter allegedly named TheGreatEmu said:

I was about to make a similar comment, but the cost still doesn’t add up. I’m at a national lab with generally much higher overheads than most places, and a postdoc runs us $160k/year fully burdened. And of course the AI sure as h#ll can’t connect cables, turn knobs, solder, titrate, use a drill press, clean, chat with the machinist who doesn’t use email, sneaker net data out of the air-gapped lab, or understand napkin drawings over beer where all real science gets done. Or do anything useful with information that isn’t already present in the training data, and if you’re not pushing past existing knowledge boundaries, you’re not really doing science are you?

My hunch is that this is a PR or marketing play. Let’s face it. With Microsoft cutting off data center builds and Google floundering with cheese, the smart software revolution is muddling forward. The wins are targeted applications in quite specific domains. Yes, gentle reader, that’s why people pay for Chemical Abstracts online. The information is not on the public Internet. The American Chemical Society has information that the super-capable AI outfits have not captured, and the non-computational, organic, or inorganic chemist is unlikely to source it from a somewhat volatile outfit. Get something wrong in a nuclear lab and smart software won’t be too helpful if it hallucinates.

Net net: Is everything marketing? At age 80, my answer is, “Absolutely.” Sam AI-Thinks in terms of trillions. Is $20 trillion the next pricing level?

Stephen E Arnold, March 10, 2025

What Do Gamers Know about AI? Nothing, Nothing at All

February 20, 2025

Take-Two Games CEO says, "There’s no such thing" as AI.

Is the head of a major gaming publisher using semantics to downplay the role of generative AI in his industry? PC Gamer reports, "Take-Two CEO Strauss Zelnick Takes a Moment to Remind Us Once Again that ‘There’s No Such Thing’ as Artificial Intelligence." Writer Andy Chalk quotes Zelnick from a recent GamesIndustry interview:

"Artificial intelligence is an oxymoron, there’s no such thing. Machine learning, machines don’t learn. Those are convenient ways to explain to human beings what looks like magic. The bottom line is that these are digital tools and we’ve used digital tools forever. I have no doubt that what is considered AI today will help make our business more efficient and help us do better work, but it won’t reduce employment. To the contrary, the history of digital technology is that technology increases employment, increases productivity, increases GDP and I think that’s what’s going to happen with AI. I think the videogame business will probably be on the leading, if not bleeding, edge of using AI."

So AI, which does not exist, will actually create jobs instead of eliminating them? The write-up correctly notes the evidence points to the contrary. On the other hand, Zelnick seems clear-eyed on the topic of copyright violations. AI-on-AI violations, anyway. We learn:

"That’s a mess Zelnick seems eager to avoid. ‘In terms of [AI] guardrails, if you mean not infringing on other people’s intellectual property by poaching their LLMs, yeah, we’re not going to do that,’ he said. ‘Moreover, if we did, we couldn’t protect that, we wouldn’t be able to protect our own IP. So of course, we’re mindful of what technology we use to make sure that it respects others’ intellectual property and allows us to protect our own.’"

Perhaps Zelnick is on to something. It is true that generative AI is just another digital tool, albeit one that tends to put humans out of work. But as we know, hype is more important than reality for those chasing instant fame and riches.

Cynthia Murrell, February 20, 2025

LLMs Paired With AI Are Dangerous Propaganda Tools

February 13, 2025

AI chatbots are in their infancy. While they have been tested for a number of years, they are still prone to bias and other devastating mistakes. Big business and other organizations aren’t waiting for the technology to improve. Instead they’re incorporating chatbots and more AI into their infrastructures. Baldur Bjarnason warns about the dangers of AI, especially when it comes to LLMs and censorship:

“Poisoning For Propaganda: Rising Authoritarianism Makes LLMs More Dangerous.”

Large language models (LLMs) are the models that power AI chatbots. Bjarnason warns that using any LLM, even one run locally, is dangerous.

Why?

LLMs are language models trained within parameters chosen by humans. Those choices are prone to error, which is one reason AI algorithms are untrustworthy. The models can also be tuned to favor specific opinions, in other words, to act as propaganda machines. Bjarnason warns that LLMs are being used for a lawless takeover of the United States. He also says that corporations, in order to maintain their power, won’t hesitate to remove information from, or add it to, their LLMs if the US government asks them.

This is another type of censorship:

“The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place. That’s the job it does. You won’t notice when the censorship kicks in… The alternative approach to censorship, fine-tuning the model to return a specific response, is more costly than keyword blocking and more error-prone. And resorting to prompt manipulation or preambles is somewhat easily bypassed but, crucially, you need to know that there is something to bypass (or “jailbreak”) in the first place. A more concerning approach, in my view, is poisoning.”

Corporations paired with governments (it’s not just the United States) are “poisoning” LLMs with propagandized sentiments. It’s a subtle way of transforming perspectives without loud indoctrination campaigns. It is comparable to subliminal messages in commercials or teaching only one viewpoint.
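Bjarnason’s point that keyword blocking is the cheapest censorship mechanism is easy to see in code: one lookup per response, no model retraining required. A toy sketch with an invented blocklist:

```python
# Toy illustration of keyword-blocking censorship on model output:
# a single substring check per response, far cheaper than fine-tuning.
# The blocklist and refusal text are invented for this sketch.

BLOCKLIST = {"forbidden-topic", "banned-name"}
REFUSAL = "I can't help with that."

def filter_response(response: str) -> str:
    """Return the model's response, or a refusal if it trips the blocklist."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKLIST):
        return REFUSAL
    return response
```

As the quoted passage notes, this bluntness is also why it is detectable, and why the quieter approach of poisoning the training data is the more worrying one.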

Controls seem unlikely.

Whitney Grace, February 13, 2025
