Google Is Just Like Santa with Free Goodies: Get “High” Grades, of Course

April 18, 2025

No AI, just the dinobaby himself.

Google wants to be [a] viewed as the smartest quantumly supreme outfit in the world and [b] like Santa. The “smart” part is part of the company’s culture. The CLEVER approach worked in Web search. Now the company faces what might charitably be called headwinds. There are those pesky legal hassles in the US and some gaining strength in other countries. Also, the competitive world of smart software continues to bedevil the very company that “invented” the transformer. Google gave away some technology, and now everyone from the update champs in Redmond, Washington, to Sam AI-Man is blowing smoke about Google’s systems and methods.

What a state of affairs!

The fix is to give away access to Google’s most advanced smart software to college students. How Santa-like. “Google Is Gifting a Year of Gemini Advanced to Every College Student in the US” reports:

Google has announced today that it’s giving all US college students free access to Gemini Advanced, and not just for a month or two—the offer is good for a full year of service. With Gemini Advanced, you get access to the more capable Pro models, as well as unlimited use of the Deep Research tool based on it. Subscribers also get a smattering of other AI tools, like the Veo 2 video generator, NotebookLM, and Gemini Live. The offer is for the Google One AI Premium plan, so it includes more than premium AI models, like Gemini features in Google Drive and 2TB of Drive storage.

The approach is not new. LexisNexis was one of the first online services to make online legal research available to law school students. It worked. Lawyers are among the savviest of the work-fast, bill-more professionals. When did LexisNexis move this forward? I recall speaking to a LexisNexis professional named Don Wilson in 1980, and he was eager to tell me about this “new” approach.

I asked Mr. Wilson (who, as I recall, was a big wheel at LexisNexis then), “That’s a bit like drug dealers giving the curious a ‘taste’?”

He smiled and said, “Exactly.”

In the last 45 years, lawyers have embraced new technology with a passion. I am not going to go through the litany of search, analysis, summarization, and other tools that heralded the success of smart software for the legal folks. I recall the early days of LegalTech when the most common question was, “How?” My few conversations with the professionals laboring in the jungle of law, rules, and regulations have shifted to “which system” and “how much.”

The marketing professionals at Google have “invented” their own approach to hook college students on smart software. My instinct is that Google does not know much about Don Wilson’s big idea. (As an aside, I remember that one of Mr. Wilson’s technical colleagues sometimes sported a silver jumpsuit, which anticipated some of the fashion choices of Googlers by half a century.)

The write up says:

Google’s intention is to give students an entire school year of Gemini Advanced from now through finals next year. At the end of the term, you can bet Google will try to convert students to paying subscribers.

I am not sure I agree with this. If the program gets traction, Sam AI-Man and others will be standing by with special offers, deals, and free samples. The chemical structure of certain substances is similar to today’s many variants of smart software. Hey, whatever works, right? Whatever is free, right?

Several observations:

  1. Google’s originality is quantumly supreme
  2. Some people at the Google dress like Mr. Wilson’s technical wizard, jumpsuit and all
  3. The competition is going to do their own version of this “original” marketing idea; for example, didn’t Bing offer to pay people to use that outstanding Web search-and-retrieval system?

Net net: Hey, want a taste? It won’t hurt anything. Try it. You will be mentally sharper. You will be more informed. You will have more time to watch YouTube. Trust the Google.

Stephen E Arnold, April 18, 2025

Google Gemini 2.5: A Somewhat Interesting Content Marketing Write Up

April 18, 2025

Just a still-alive dinobaby. No smart software involved.

How about this headline: “Google’s Gemini 2.5 Pro Is the Smartest Model You’re Not Using – and 4 Reasons It Matters for Enterprise AI”?

OpenAI scroogled the Google again. First, it was the January 2023 starting gun for AI hype. Now it is the release of a Japanese cartoon style for ChatGPT. Who knew that Japanese cartoons could blast the Google Gemini 2.5 Pro launch out of the headlines more effectively than the detonation of a failed SpaceX rocket?

The write up pants:

Gemini 2.5 Pro marks a significant leap forward for Google in the foundational model race – not just in benchmarks, but in usability. Based on early experiments, benchmark data, and hands-on developer reactions, it’s a model worth serious attention from enterprise technical decision-makers, particularly those who’ve historically defaulted to OpenAI or Claude for production-grade reasoning.

Yeah, whatever.

Announcements about Google AI are about as satisfying as pizza with glued-on cheese or Apple’s AI fantasy PR about “intelligence.”

But I like this statement:

Bonus: It’s Just Useful

The headline and this “just useful” make it clear that none of Google’s previous AI efforts are winning the social media buzz game. Plus, the author points out that billions of Google dollars have not made the smart software speedy. And if you want smart software to write that history paper about Germany after World War II, stick with other models which feature “conversational smoothness.”

Quite an advertisement: a headline that says “no one is using this,” and copy that says it is sluggish and writes in a way that will get a student flagged for cheating.

Stick to ads maybe?

And what about the promised “why it matters for enterprise AI”? Yeah, nice omission.

Stephen E Arnold, April 18, 2025

Why Is Meta Experimenting With AI To Write Comments?

April 18, 2025

Who knows why Meta does anything original? Amazon uses AI to write snapshots of book series. Therefore, Meta is using AI to write comments. We were not surprised to read “Meta Is Experimenting With AI-Generated Comments, For Some Reason.”

Meta is using AI to write Instagram comments. It sounds like a very stupid idea, but Meta is doing it. Some Instagram accounts can see a new icon to the left of the text field after choosing to leave a comment. The icon is a pencil with a star. When the icon is tapped, a new Meta AI menu pops up and offers a selection of comment choices. These comments are presumably based on whatever content appears in the post being commented on.

It doesn’t take much effort to write a simple Instagram comment, and offloading the task appears to take more effort than completing it yourself. Plus, Instagram is already plagued with chatbot comments. Does it need more? Nope.

Here’s what the author Jake Peterson requests of his readers:

“Writing comments isn’t hard, and yet, someone at Meta thought there was a usefulness—a market—for AI-generated comments. They probably want more training data for their AI machine, which tracks, considering companies are running out of internet for models to learn from. But that doesn’t mean we should be okay with outsourcing all human tasks to AI.”

Mr. Peterson suggests that what bugs him the most is users happily allowing hallucinating software to perform cognitive tasks and make decisions for people like me. Right on, Mr. Peterson.

Whitney Grace, April 18, 2025

Trust: Zuck, Meta, and Llama 4

April 17, 2025

Sorry, no AI used to create this item.

CNET published a very nice article that says to me: “Hey, we don’t trust you.” Navigate to “Meta Llama 4 Benchmarking Confusion: How Good Are the New AI Models?” The write up is like a wimpy version of the old PC Perspective podcast with Ryan Shrout. Before the embrace of Intel’s intellectual blanket, the podcast would raise questions about video card benchmarks. Most of the questions addressed: “Is this video card really that fast?” In some cases, yes, the video card benchmarks were close to the real world. In other cases, video card manufacturers did what the butcher on Knoxville Avenue did in 1951: Mr. Wilson put his thumb on the scale. My grandmother closely watched friendly Mr. Wilson, who drove a new Buick in a very, very modest neighborhood. He did not smile as broadly when my grandmother and I would enter the store for a chicken.


Would someone subject an AI professional to this type of test? Of course not. But the idea has a certain charm. Plus, if the person dies, he was innocent. If the person survives, that individual is definitely a witch. This was a winning method for some enlightened leaders at one time.

The CNET story says about the Zuck’s most recent non-virtual reality investment:

Meta’s Llama 4 models Maverick and Scout are out now, but they might not be the best models on the market.

That’s a good way to say, “Liar, liar, pants on fire.”

The article adds:

the model that Meta actually submitted to the LMArena tests is not the model that is available for people to use now. The model submitted for testing is called “llama-4-maverick-03-26-experimental.” In a footnote on a chart on Llama’s website (not the announcement), in tiny font in the final bullet point, Meta clarifies that the model submitted to LMArena was “optimized for conversationality.”

Isn’t this a GenZ way to say, “You put your thumb on the scale, Mr. Wilson”?

Let’s review why one should think about the desire to make something better than it is:

  1. Meta’s decision is just marketing. Think about the self-driving Teslas. Consequences? Not for fibbing.
  2. The Meta engineers have to deliver good news. Who wants to tell the Zuck that the Llama innovations are like making the VR thing a big winner? Answer: No one who wants to get a bonus and curry favor.
  3. Meta does not have the ability to distinguish good from bad. The model swap is what Meta is going to do anyway. So why not just use it? No big deal. Is this a moral and ethical dead zone?

What’s interesting is that from my point of view, Meta and the Zuck have a standard operating procedure. I am not sure that aligns with what some people expect. But as long as the revenue flows and meaningful regulation of social media remains a windmill for today’s Don Quixotes, Meta is the best — until another AI leader puts out a quantumly supreme news release.

Stephen E Arnold, April 17, 2025

Google AI Search: A Wrench in SEO Methods

April 17, 2025

Does AI finally spell the end of SEO? Or will it supercharge the practice? Pymnts declares, “Google’s AI Search Switch Leaves Indie Websites Unmoored.” The brief write-up states:

“Google’s AI-generated search answers have reportedly not been good for independent websites. Those answers, along with Google’s alterations to its search algorithm in support of them, have caused traffic to those websites to plunge, Bloomberg News reported Monday (April 7), citing interviews with 25 publishers and people working with them. The changes, Bloomberg said, threaten a ‘delicate symbiotic relationship’ between businesses and Google: they generate good content, and the tech giant sends them traffic. According to the report, many publishers said they either need to shut down or revamp their distribution strategy. Experts say this effort could ultimately reduce the quality of information Google can access for its search results and AI answers.”

To add insult to injury, we are reminded, AI Search’s answers are often inaccurate. SEO pros are scrambling to adapt to this new reality. We learn:

“‘It’s important for businesses to think of more than just pure on-page SEO optimization,’ Ben Poulton, founder of the SEO agency Intellar, told PYMNTS. ‘AI overviews tend to try and showcase the whole experience. That means additional content, more FAQs answered, customer feedback addressed on the page, details about walking distance and return policies for brands with a brick-and-mortar, all need to be readily available, as that will give you the best shot of being featured,’ Poulton said.”

So it sounds like one thing has not changed: Second to buying Google ads, posting thoroughly good content is the best way to surface in search results. Or, now, to donate knowledge for the algorithm to spit out. Possibly with hallucinations mixed in.

Cynthia Murrell, April 17, 2025

Google AI: Invention Is the PR Game

April 17, 2025

Google was so excited to tout its AI’s great achievement: In under 48 hours, it solved a medical problem that vexed human researchers for a decade. Great! Just one hitch. As Pivot to AI tells us, "Google Co-Scientist AI Cracks Superbug Problem in Two Days!—Because It Had Been Fed the Team’s Previous Paper with the Answer In It." With that detail, the feat seems much less impressive. In fact, two days seems downright sluggish. Writer David Gerard reports:

"The hype cycle for Google’s fabulous new AI Co-Scientist tool, based on the Gemini LLM, includes a BBC headline about how José Penadés’ team at Imperial College asked the tool about a problem he’d been working on for years — and it solved it in less than 48 hours! [BBC; Google] Penadés works on the evolution of drug-resistant bacteria. Co-Scientist suggested the bacteria might be hijacking fragments of DNA from bacteriophages. The team said that if they’d had this hypothesis at the start, it would have saved years of work. Sounds almost too good to be true! Because it is. It turns out Co-Scientist had been fed a 2023 paper by Penadés’ team that included a version of the hypothesis. The BBC coverage failed to mention this bit. [New Scientist, archive]"

It seems this type of Googley AI over-brag is a pattern. Gerard notes the company claims Co-Scientist identified new drugs for liver fibrosis, but those drugs had already been studied for this use. By humans. He also reminds us of this bit of truth-stretching from 2023:

"Google loudly publicized how DeepMind had synthesized 43 ‘new materials’ — but studies in 2024 showed that none of the materials was actually new, and that only 3 of 58 syntheses were even successful. [APS; ChemrXiv]"

So the next time Google crows about an AI achievement, we have to keep in mind that AI often is a synonym for PR.

Cynthia Murrell, April 17, 2025

AI Impacts Jobs: But Just 40 Percent of Them

April 16, 2025

AI enthusiasts would have us believe workers have nothing to fear from the technology. In fact, they gush, AI will only make our jobs easier by taking over repetitive tasks and allowing time for our creative juices to flow. It is a nice vision. Far-fetched, but nice. Euronews reports, “AI Could Impact 40 Percent of Jobs Worldwide in the Next Decade, UN Agency Warns.” Writer Anna Desmarais cites a recent report as she tells us:

“Artificial intelligence (AI) may impact 40 per cent of jobs worldwide, which could mean overall productivity growth but many could lose their jobs, a new report from the United Nations Conference on Trade and Development (UNCTAD) has found. The report … says that AI could impact jobs in four main ways: either by replacing or complementing human work, deepening automation, and possibly creating new jobs, such as in AI research or development.”

So it sounds like we could possibly reach a sort of net-zero on jobs. However, it will take deliberate action to get there. And we are not currently pointed in the right direction:

“A handful of companies that control the world’s advancement in AI ‘often favour capital over labour,’ the report continues, which means there is a risk that AI ‘reduces the competitive advantage’ of low-cost labour from developing countries. Rebeca Grynspan, UNCTAD’s Secretary-General, said in a statement that there needs to be stronger international cooperation to shift the focus away ‘from technology to people’.”

Oh, is that all? Easy peasy. The post notes it is not just information workers under threat—when combined with other systems, AI can also perform physical production jobs. Desmarais concludes:

“The impact that AI is going to have on the labour force depends on how automation, augmentation, and new positions interact. The UNCTAD said developing countries need to invest in reliable internet connections, making high-quality data sets available to train AI systems and building education systems that give them necessary digital skills, the report added. To do this, UNCTAD recommends building a shared global facility that would share AI tools and computing power equitably between nations.”

Will big tech and agencies around the world pull together to make it happen?

Cynthia Murrell, April 16, 2025

Stanford AI Report: Credible or Just Marketing?

April 14, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

I am not sure I believe reports or much of anything from Stanford University. Let me explain my skepticism.

I think it was William James who said great things about Stanford University when he bumped into the distinguished outfit. If Billie were cranking out Substacks today, he would probably be quite careful about using words like “leadership,” “ethical behavior,” and the moral sanctity of big thinkers. Presidents don’t get hired like temporary workers in front of Home Depot. There is a process, and it is quite clear that the process, the people, and the culture at the university failed. Failed spectacularly.

Stanford hired and retained a cheater if the news reports are accurate.

Now let’s look at “The 2025 AI Index Report.”

The document’s tone is one of lofty pronouncements.

Stanford mixes comments about smart software with statements like this one:

Global AI optimism is rising—but deep regional divides remain.

Yep, I would submit that AI-equipped weapons are examples of “regional divides.”

I think this report is:

  1. Marketing for Stanford’s smart software activities
  2. A reminder that another country (China) is getting really capable in smart software and may zip right past the noodlers in the Gates Computer Science Building
  3. Stanford wants to be a thought leader, which helps the “image” of the school, the students, the faculty, and the wretches in fund raising who face a tough slog in the years ahead.

For me personally, I think the “report” should be viewed with skepticism. Why? A university which hires a cheater makes it quite clear that the silly notions of William James are irrelevant.

I am not sure they are.

Stephen E Arnold, April 14, 2025

Programming in an AI World: Spruiked Again Like We Were Last Summer

April 14, 2025

Software engineers are, reasonably, concerned about losing their jobs to AI. Australian blogger Clinton Boys asks, "How Will LLMs Take Our Jobs?" After reading several posts by programmers using LLMs for side projects, he believes such accounts suggest where we are headed. He writes:

"The consensus seems to be that rather than a side project being some sort of idea you have, then spend a couple of hours on, maybe learn a few things, but quickly get distracted by life or a new side project, you can now just chuck your idea into the model and after a couple of hours of iterating you have a working project. To me, this all seems to point to the fact that we are currently in the middle of a significant paradigm shift, akin to the transition from writing assembly to compiled programming languages. A potential future is unfolding before our eyes in which programmers don’t write in programming languages anymore, but write in natural language, and generative AI handles the gruntwork of actually writing the code, the same way a compiler translates your C code into machine instructions."

Perhaps. But then, he ponders, will the job even fit the title of "engineer"? Will the challenges and creative potential many love about this career vanish? And what would they do then? Boys suggests several routes one might take, with the caveat that a realistic path forward would probably blend several of these. He recognizes one could simply give up and choose a different career entirely. An understandable choice, if one can afford to start over. If not, one might join the AI cavalcade by learning how to create LLMs and/or derive value from them. It may also be wise to climb the corporate ladder—managers should be safer longer, Boys expects. Then again one might play ostrich:

"You could also cross your fingers and hope it pans out differently — particularly if, like me you find the vision of the future spruiked by the most bullish LLM proponents a little ghoulish and offensive to our collective humanity."

Always an option, we suppose. I had to look up the Australian term "spruik." According to Wordsmith.org, it means "to make an elaborate speech, especially to attract customers." Fitting. Finally, Boys says, one could bet on software connoisseurs of the future. Much as some now pay more for hand-made pastries or small-batch IPAs, some clients may be willing to shell out for software crafted the old-fashioned way. One can hope.

Cynthia Murrell, April 14, 2025

Meta a Great Company Lately?

April 10, 2025

Sorry, no AI used to create this item.

Despite Google’s attempt to flood the zone with AI this and AI that, Meta kept popping up in my newsfeed this morning (April 10, 2025). I pushed past the super confidential information from the US District Court for the Northern District of California (an amazing and typically incoherent extract) and focused on a non-fiction author.

The Zuck – NSO Group dust up does not make much of a factoid described in considerable detail in Wikipedia. That encyclopedia entry is “Onavo.” In a nutshell, Facebook acquired a company which used techniques not widely known to obtain information about users of an encrypted app. Facebook’s awareness of Onavo took place, according to Wikipedia, prior to 2013, when Facebook purchased Onavo. My thought is that someone in the Facebook organization learned about other Israeli specialized software firms. Due to the high profile NSO Group had as a result of its participation in certain intelligence-related conferences, and the relatively small community of specialized software developers in Israel, Facebook may have learned about the Big Kahuna, NSO Group. My personal view is that Facebook, and probably more than a couple of curious engineers, learned how specialized software purpose-built to cope with mobile phone data worked and were more than casually aware of its systems and methods. The Meta – NSO Group dust up is an interesting case. Perhaps someday someone will write up how the Zuck precipitated a trial which, to an outsider, looks like a confused government-centric firm facing a teenager with a grudge. Will this legal matter turn a playground-type argument about who is on whose team into an international kidney stone for the specialized software sector? For now, I want to pick up the Meta thread and talk about Washington, DC.

The Hill, an interesting publication about interesting institutions, published “Whistleblower Tells Senators That Meta Undermined U.S. Security, Interests.” The author of the testimony is a former Zucker who worked as the director of global public policy at Facebook. If memory serves me, she labored at the estimable firm when the Zuck was undergoing his political awakening.

The Hill reports:

Wynn-Williams told Hawley’s panel that during her time at Meta: “Company executives lied about what they were doing with the Chinese Communist Party to employees, shareholders, Congress and the American public,” according to a copy of her remarks. Her most explosive claim is that she witnessed Meta executives decide to provide the Chinese Communist Party with access to user data, including the data of Americans. And she says she has the “documents” to back up her accusations.

After the Zuck attempted to block, prevent, thwart, or delete Ms. Wynn-Williams’ book Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism from seeing the light of a Kindle, I purchased the book. Silicon Valley tell-alls are usually somewhat entertaining. It is a mark of distinction for Ms. Wynn-Williams that she crafted a non-fiction write up that made me downright uncomfortable. Too much information about body functions and allegations about sharing information with a country not getting likes from too many people in certain Washington circles made me queasy. Dinobabies are often sensitive creatures unless they grow up to be Googzillas.

The Hill says:

Wynn-Williams testified that Meta started briefing the Chinese Communist party as early as 2015, and provided information about critical emerging technologies and artificial intelligence. “There’s a straight line you can draw from these briefings to the recent revelations that China is developing AI models for military use,” she said.

“But isn’t open source AI software the future?” a voice in my head said.

What adds some zip to the appearance is this factoid from the article:

Wynn-Williams has filed a shareholder resolution asking the company’s board to investigate its activity in China and filed whistleblower complaints with the Securities and Exchange Commission and the Department of Justice.

I find it fascinating that on the West Coast, Facebook is unhappy with intelware being used on a Zuck-purchased service to obtain information about alleged persons of interest. At about the same time, on the East Coast, a former Zucker is asserting that the estimable social media company buddied up to a nation-state not particularly supportive of American interests.

Assuming that the Northern District court case is “real” and “actual factual” and that Ms. Wynn-Williams’ statements are “real” and “actual factual,” what can one hypothesize about the estimable Meta outfit? Here are my thoughts:

  1. Meta generates little windstorms of controversy. It doesn’t need to flood the zone with Google-style “look at us” revelations. Meta just stirs up storms.
  2. On the surface, Meta seems to have an interesting public posture. On one hand, the company wants to bring people together for good, etc. etc. On the other, the company could be seen as annoyed that a company used his acquired service to do data collection at odds with Meta’s own pristine approach to information.
  3. The tussles are not confined to tiny spaces. The West Coast matter concerns what I call intelware. When specialized software is no longer “secret,” the entire sector gets a bit of an uncomfortable feeling. Intelware is a global issue. Meta’s approach is in my opinion spilling outside the courtroom. The East Coast matter is another bigly problem. I suppose allegations of fraternization with a nation-state less than thrilled with the US approach to life could be seen as “small.” I think Ms. Wynn-Williams has a semi-large subject in focus.

Net net: [a] NSO Group cannot avoid publicity, which could have an impact on a specialized software sector that should have remained in a file cabinet labeled “Secret.” [b] Ms. Wynn-Williams could have avoided sharing what struck me as confidential company information and some personal stuff as well. The book is more than a tell-all; it is a summary of what could be alleged intentional anti-US activity. [c] Online seems to be the core of innovation, finance, politics, and big money. Just forty-five years ago, I wore bunny ears when I gave talks about the impact of online information. I called myself the Data Bunny and, believe it or not, wore white bunny rabbit ears for a cheap laugh and to make the technical information more approachable. Today many know online has impact. We have gone from a technical oddity used by fewer than 5,000 people to disruption of the specialized software sector by a much-loved organization chock full of Zuckers.

Stephen E Arnold, April 10, 2025
