Meta a Great Company Lately?
April 10, 2025
Sorry, no AI used to create this item.
Despite Google’s attempt to flood the zone with AI this and AI that, Meta kept popping up in my newsfeed this morning (April 10, 2025). I pushed past the super confidential information from the US District Court for the Northern District of California (an amazing and typically incoherent extract) and focused on a non-fiction author.
The Zuck – NSO Group dust up makes little of a factoid described in considerable detail in Wikipedia. That encyclopedia entry is “Onavo.” In a nutshell, Facebook acquired a company which used techniques not widely known to obtain information about users of an encrypted app. Facebook became aware of Onavo, according to Wikipedia, prior to 2013, when it purchased the company. My thought is that someone in the Facebook organization learned about other Israeli specialized software firms. Due to the high profile NSO Group had as a result of its participation in certain intelligence-related conferences and the relatively small community of specialized software developers in Israel, Facebook may have learned about the Big Kahuna, NSO Group. My personal view is that Facebook, and probably more than a couple of curious engineers, learned how specialized software purpose-built to cope with mobile phone data worked and were more than casually aware of its systems and methods. The Meta – NSO Group dust up is an interesting case. Perhaps someday someone will write up how the Zuck precipitated a trial which, to an outsider, looks like a confused government-centric firm facing a teenager with a grudge. Will this legal matter turn a playground-type argument about who is on whose team into an international kidney stone for the specialized software sector? For now, I want to pick up the Meta thread and talk about Washington, DC.
The Hill, an interesting publication about interesting institutions, published “Whistleblower Tells Senators That Meta Undermined U.S. Security, Interests.” The whistleblower is a former Zucker who worked as the director of global public policy at Facebook. If memory serves me, she labored at the estimable firm while Zuck was undergoing his political awakening.
The Hill reports:
Wynn-Williams told Hawley’s panel that during her time at Meta: “Company executives lied about what they were doing with the Chinese Communist Party to employees, shareholders, Congress and the American public,” according to a copy of her remarks. Her most explosive claim is that she witnessed Meta executives decide to provide the Chinese Communist Party with access to user data, including the data of Americans. And she says she has the “documents” to back up her accusations.
After the Zuck attempted to block, prevent, thwart, or delete Ms. Wynn-Williams’ book Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism from seeing the light of a Kindle, I purchased the book. Silicon Valley tell-alls are usually somewhat entertaining. It is a mark of distinction for Ms. Wynn-Williams that she crafted a non-fiction write up that made me downright uncomfortable. Too much information about bodily functions and allegations about sharing information with a country not getting likes from too many people in certain Washington circles made me queasy. Dinobabies are often sensitive creatures unless they grow up to be Googzillas.
The Hill says:
Wynn-Williams testified that Meta started briefing the Chinese Communist party as early as 2015, and provided information about critical emerging technologies and artificial intelligence. “There’s a straight line you can draw from these briefings to the recent revelations that China is developing AI models for military use,” she said.
“But isn’t open source AI software the future?” a voice in my head asked.
What adds some zip to the appearance is this factoid from the article:
Wynn-Williams has filed a shareholder resolution asking the company’s board to investigate its activity in China and filed whistleblower complaints with the Securities and Exchange Commission and the Department of Justice.
I find it fascinating that on the West Coast, Facebook is unhappy with intelware being used on a Zuck-purchased service to obtain information about alleged persons of interest. At about the same time, on the East Coast, a former Zucker is asserting that the estimable social media company buddied up to a nation-state not particularly supportive of American interests.
Assuming that the Northern District court case is “real” and “actual factual” and that Ms. Wynn-Williams’ statements are “real” and “actual factual,” what can one hypothesize about the estimable Meta outfit? Here are my thoughts:
- Meta generates little windstorms of controversy. It doesn’t need to flood the zone with Google-style “look at us” revelations. Meta just stirs up storms.
- On the surface, Meta seems to have an interesting public posture. On one hand, the company wants to bring people together for good, etc. etc. On the other, it could be seen as annoyed that another outfit used its acquired service to do data collection at odds with Meta’s own pristine approach to information.
- The tussles are not confined to tiny spaces. The West Coast matter concerns what I call intelware. When specialized software is no longer “secret,” the entire sector gets a bit of an uncomfortable feeling. Intelware is a global issue. Meta’s approach is in my opinion spilling outside the courtroom. The East Coast matter is another bigly problem. I suppose allegations of fraternization with a nation-state less than thrilled with the US approach to life could be seen as “small.” I think Ms. Wynn-Williams has a semi-large subject in focus.
Net net: [a] NSO Group cannot avoid publicity, which could have an impact on a specialized software sector that should have remained in a file cabinet labeled “Secret.” [b] Ms. Wynn-Williams could have avoided sharing what struck me as confidential company information and some personal stuff as well. The book is more than a tell-all; it is a summary of what could be alleged intentional anti-US activity. [c] Online seems to be the core of innovation, finance, politics, and big money. Just forty-five years ago, I wore bunny ears when I gave talks about the impact of online information. I called myself the Data Bunny and, believe it or not, wore white bunny rabbit ears for a cheap laugh and to make the technical information more approachable. Today many know online has impact. Online has gone from a technical oddity used by fewer than 5,000 people to a force disrupting the specialized software sector via a much-loved organization chock full of Zuckers.
Stephen E Arnold, April 10, 2025
AI Horn Honking: Toot for Refact
April 10, 2025
What is one of the things we were taught in kindergarten? Oh, right. Humility. That, however, doesn’t apply when you’re in a job interview, selling a product, or writing a press release. A Dev.to post serves as a press release announcing that an open source AI agent for programming in an IDE topped a benchmark: “Our AI Agent + 3.7 Sonnet Ranked #1 On Aider’s Polyglot Bench — A 76.4% Score.”
As the title says, the open source AI programming agent scored 76.4 percent. The agent is called Refact.ai and was paired with Claude 3.7 Sonnet. It outperformed other AI agents, including Claude, Deepseek, ChatGPT, GPT-4.5 Preview, and Aider.
Refact.ai does better than the others, the post says, because it is an intuitive AI agent. It uses a feedback loop to create a self-learning, auto-correcting agent (a minimal sketch of the loop follows the quoted list):
• “Writes code: The agent generates code based on the task description.
• Fixes errors: Runs automated checks for issues.
• Iterates: If problems are found, the agent corrects the code, fixes bugs, and re-tests until the task is successfully completed.
• Delivers the result, which will be correct most of the time!”
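The post does not document Refact.ai’s internals, but the quoted loop maps onto a familiar generate-check-repair pattern. Here is a minimal, self-contained Python sketch of that pattern; the three helper functions are hypothetical stand-ins, not the product’s actual API:

```python
# A toy write / check / iterate loop matching the quoted bullets.
# generate_code, run_checks, and apply_fixes are hypothetical stubs;
# Refact.ai's real internals are not described in the write up.
from typing import List

def generate_code(task: str) -> str:
    # Stand-in for the LLM call that drafts code from a task description.
    return f"def solve():\n    # TODO: {task}\n    pass"

def run_checks(code: str) -> List[str]:
    # Stand-in for automated checks (tests, linters, compilers).
    return ["task not implemented"] if "TODO" in code else []

def apply_fixes(code: str, problems: List[str]) -> str:
    # Stand-in for the LLM call that repairs whatever the checks flagged.
    return code.replace("# TODO:", "# done:").replace("pass", "return 42")

def agent_loop(task: str, max_rounds: int = 5) -> str:
    code = generate_code(task)               # 1. write code
    for _ in range(max_rounds):
        problems = run_checks(code)          # 2. run automated checks
        if not problems:
            break                            # 4. deliver the result
        code = apply_fixes(code, problems)   # 3. correct and re-test
    return code

print(agent_loop("add two numbers"))
```

The point of the loop is step 3: the agent keeps cycling until the checks pass or it runs out of rounds, which is why the quoted list hedges with “correct most of the time.”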
The Refact.ai team has good reason to pat itself on the back. Hopefully it will continue to develop and deliver high-performing AI agents.
Whitney Grace, April 10, 2025
China and AI: Moving Ahead?
April 10, 2025
There’s a longstanding rivalry between the United States and China. The rivalry extends to everything: government, the economy, GDP, and technology. There have been some recent technology developments in this heated East-West rivalry, says The Independent in the article, “Has China Just Built The World’s First Human-Level AI?”
Deepseek is an AI start-up whose models have drawn comparisons to OpenAI’s. The clincher is that Deepseek’s models are said to be more advanced than OpenAI’s because they perform better and use fewer resources. Another Chinese AI company claims it has made a further breakthrough, and it’s called “Manus.” Manus is supposedly the world’s first fully autonomous AI agent, one that can perform complex tasks without human guidance. These tasks include creating a podcast, buying property, or booking travel plans.
Yichao Ji is the head of Manus’s AI development. He said that Manus is the next AI evolution and the beginning of artificial general intelligence (AGI): AI that rivals or surpasses human intelligence. Yichao Ji said:
“ ‘This isn’t just another chatbot or workflow, it’s a truly autonomous agent that bridges the gap between conception and execution,’ he said in a video demonstrating the AI’s capabilities. ‘Where other AI stops at generating ideas, Manus delivers results. We see it as the next paradigm of human-machine collaboration.’”
Meanwhile, Dario Amodei, whose company Anthropic designed Claude, the ChatGPT rival, predicted that AGI could be available as soon as 2026. He wrote an essay in October 2024 with the following statement:
“ ‘It can engage in any actions, communications, or remote operations,’ he wrote, ‘including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with a skill exceeding that of the most capable humans in the world.’”
These are tasks that Manus can do, according to the AI’s Web site. However, when Manus was tested, users spotted it making mistakes that most humans would catch.
Manus’s team says it is grateful for the insight into its AI’s flaws and will work to deliver a better AGI. Experts are viewing Manus with a more critical eye because it is not delivering the same results as its American counterparts.
It appears that the US is still developing higher performing AI that will become the basis of AGI. Congratulations to the red, white, and blue!
Whitney Grace, April 10, 2025
AI: Job Harvesting
April 9, 2025
It is a question that keeps many of us up at night. Commonplace ponders, "Will AI Automate Away Your Job?" The answer: Probably, sooner or later. The when depends on the job. Some workers may be lucky enough to reach retirement age before that happens. Writer Jason Hausenloy explains:
"The key idea where the American worker is concerned is that your job is as automatable as the smallest, fully self-contained task is. For example, call center jobs might be (and are!) very vulnerable to automation, as they consist of a day of 10- to 20-minute or so tasks stacked back-to-back. Ditto for many forms of many types of freelancer services, or paralegals drafting contracts, or journalists rewriting articles. Compare this to a CEO who, even in a day broken up into similar 30-minute activities—a meeting, a decision, a public appearance—each required years of experiential context that a machine can’t yet simply replicate. … This pattern repeats across industries: the shorter the time horizon of your core tasks, the greater your automation risk."
See the post for a more detailed example that compares the jobs of a technical support specialist and an IT systems architect.
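To make the time-horizon heuristic concrete, here is a toy Python sketch. The jobs and task durations are illustrative guesses, not data from Hausenloy’s post:

```python
# Toy rendering of the heuristic: automation exposure rises as the time
# horizon of a job's core, self-contained tasks shrinks. All figures
# below are illustrative assumptions, not numbers from the article.
jobs = {
    "call center agent":      15,   # short, stacked, self-contained tasks
    "technical support":      30,
    "paralegal (contracts)":  45,
    "IT systems architect":  480,   # tasks lean on years of context
}

for job, minutes in sorted(jobs.items(), key=lambda kv: kv[1]):
    exposure = 60 / minutes         # crude score: shorter tasks => higher exposure
    print(f"{job:22s} core task ~{minutes:3d} min  exposure {exposure:.1f}")
```

The ranking, not the made-up numbers, is the point: the call center agent tops the exposure list and the architect sits at the bottom, exactly the pattern the quoted passage describes.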
Naturally, other factors complicate the matter. For example, Hausenloy notes, blue-collar jobs may be safer longer because physical robots are more complex to program than information software. Also, the more data there is on how to do a job, the better equipped algorithms are to mimic it. That is one reason many companies implement tracking software. Yes, it allows them to micromanage workers. It also gathers the data needed to teach an LLM how to do the job. With every keystroke and mouse click, many workers are actively training their replacements.
Ironically, it seems those responsible for unleashing AI on the world may be some of the most replaceable. Schadenfreude, anyone? The article notes:
"The most vulnerable jobs, then, are not those traditionally thought of as threatened by automation—like manufacturing workers or service staff—but the ‘knowledge workers’ once thought to be automation-proof. And most vulnerable of all? The same Silicon Valley engineers and programmers who are building these AI systems. Software engineers whose jobs are based on writing code as discrete, well-documented tasks (often following standardized updates to a central directory) are essentially creating the perfect training data for AI systems to replace them."
In a section titled "Rethinking Work," Hausenloy waxes philosophical on a world in which all of humanity has been fired. Is a universal basic income a viable option? What, besides income, do humans get out of their careers? In what new ways will we address those needs? See the write-up for those thought exercises. Meanwhile, if you do want to remain employed as long as possible, try to make your job depend less on simple, repetitive tasks and more on human connection, experience, and judgment. With luck, you may just reach retirement before AI renders you obsolete.
Cynthia Murrell, April 9, 2025
AI Addicts Are Now a Thing
April 9, 2025
Hey, pal, can you spare a prompt?
Gee, who could have seen this coming? It seems one can become dependent on a chatbot, complete with addiction indicators like preoccupation, withdrawal symptoms, loss of control, and mood modification. "Something Bizarre Is Happening to People Who Use ChatGPT a Lot," reports The Byte. Writer Noor Al-Sibai cites a recent joint study by OpenAI and MIT Media Lab as she writes:
"To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also to study what kinds of ‘affective cues,’ which was defined in a joint summary of the research as ‘aspects of interactions that indicate empathy, affection, or support,’ they used when chatting with it. Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a ‘friend.’ The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too. Add it all up, and it’s not good. In this study as in other cases we’ve seen, people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI — and where that leads could end up being sad, scary, or somewhere entirely unpredictable."
No kidding. Interestingly, the study found those who use the bot as an emotional or psychological sounding board were less likely to become dependent than those who used it for "non-personal" tasks, like brainstorming. Perhaps because the former are well-adjusted enough to examine their emotions at all? (The privacy risks of sharing such personal details with a chatbot are another issue entirely.) Al-Sibai emphasizes the upshot of the research: The more time one spends using ChatGPT, the more likely one is to become emotionally dependent on it. We think parents, especially, should be aware of this finding.
How many AI outfits will offer free AI? You know. Just give folks a taste.
Cynthia Murrell, April 9, 2025
Oh, Oh, a Technological Insight: Unstable, Degrading, Non-Reversible
April 9, 2025
Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”
“Building a House of Cards” has a subtitle which echoes other statements of “Oh, oh, this is not good”:
Beneath the glossy promises of artificial intelligence lies a ticking time bomb — and it’s not the one you’re expecting
Yep, another person who seems younger than I am has realized that flows of digital information erode not just social structures but other functions as well.
The author, who publishes in Mr. Plan B, states:
The real crisis isn’t Skynet-style robot overlords. It’s the quiet, systematic automation of human bias at scale.
The observation is excellent. The bias belongs to the engineers and coders who set thresholds, orchestrate algorithmic behavior, and use available data. The human bias is woven into the systems people use, believe, and depend upon.
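A toy example, not from the essay, shows how a single developer-chosen threshold can bake bias into a system that otherwise looks neutral:

```python
# Toy illustration (all numbers are hypothetical): one developer-chosen
# cutoff, applied to two groups whose model scores differ, produces very
# different approval rates. The threshold looks neutral; it isn't.
THRESHOLD = 0.6

def approve(score: float) -> bool:
    return score >= THRESHOLD

group_a = [0.55, 0.62, 0.71, 0.58, 0.66]   # scores cluster near the cutoff
group_b = [0.72, 0.81, 0.69, 0.77, 0.74]   # scores cluster above it

for name, scores in [("group A", group_a), ("group B", group_b)]:
    rate = sum(approve(s) for s in scores) / len(scores)
    print(f"{name}: approval rate {rate:.0%}")   # A: 60%, B: 100%
```

No line of that code mentions a group attribute, yet the outcome gap is real. That is the fossilized prejudice the author is pointing at.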
The essay asserts:
We’re not coding intelligence — we’re fossilizing prejudice.
That, in my opinion, is a good line.
The author, however, runs into a bit of a problem. The idea of a developers’ manifesto is interesting but flawed. Most devs, as some term this group, like creating stuff and solving problems. That’s the kick. Most of the devs with whom I have worked laugh when I tell them I majored in medieval religious poetry. One, a friend of mine, said, “I paid someone to write my freshman essay, and I never took any classes other than math and science.”
I like that: Ignorance and a good laugh at how I spent my college years. The one saving grace is that I got paid to help a professor index Latin sermons using the university’s one computer to output the word lists and microfilm locators. Hey, in 1962, this was voodoo.
Those who craft the systems are not compensated to think about whether Latin sermons were original or just passed around when a visiting monk exchanged some fair copies for a snort of monastery wine and a bit of roast pig. Let me tell you that most of those sermons were tediously similar and raised such thorny problems as the originality of the “author.”
The essay concludes with a factoid:
25 years in tech taught me one thing: Every “revolutionary” technology eventually faces its reckoning. AI’s is coming.
I am not sure that those engaged in the noble art and craft of engineering “smart” software accept, relate to, or care about the validity of the author’s statement.
The good news is that the essay’s author now understands that flows of digital information do not construct. The bits zipping around erode, just like the glass beads or corn cob abrasive in a body shop’s media blaster aimed at a rusted automobile frame.
The body shop “restores” the rusted part until it is as good as new. Even better, some mechanics say.
As long as it is “good enough,” the customer is happy. But those in the know realize that the frame will someday be unable to support the stress placed upon it.
See. Philosophy from a mechanical process. But the meaning speaks to a car nut. One may have to give up or start over.
Stephen E Arnold, April 9, 2025
Programmers? Just the Top Code Wizards Needed. Sorry.
April 8, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
Microsoft has some interesting ideas about smart software and writing “code.” To sum it up, consider another profession.
“Microsoft CTO Predicts AI Will Generate 95% of Code by 2030” reports:
Developers’ roles will shift toward orchestrating AI-driven workflows and solving complex problems.
I think this means that instead of figuring out how to make something happen, one will perform the higher level mental work. The “script” comes out of the smart software.
The write up says:
“It doesn’t mean that the AI is doing the software engineering job … authorship is still going to be human,” Scott explained. “It creates another layer of abstraction [as] we go from being an input master (programming languages) to a prompt master (AI orchestrator).” He doesn’t believe AI will replace developers, but it will fundamentally change their workflows. Instead of painstakingly writing every line of code, engineers will increasingly rely on AI tools to generate code based on prompts and instructions. In this new paradigm, developers will focus on guiding AI systems rather than programming computers manually. By articulating their needs through prompts, engineers will allow AI to handle much of the repetitive work, freeing them to concentrate on higher-level tasks like design and problem-solving.
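What might the move from “input master” to “prompt master” look like in practice? A minimal sketch, assuming a hypothetical llm() stand-in for whatever code-generation API one actually calls:

```python
# A sketch of the "prompt master" workflow: the human states intent,
# the model drafts the code, the human reviews and tests it.
# llm() is a hypothetical placeholder, not a real API.
def llm(prompt: str) -> str:
    # Placeholder: imagine a call to a code-generation model here.
    return "def dedupe(items):\n    return list(dict.fromkeys(items))"

spec = ("Write a Python function dedupe(items) that removes duplicates "
        "from a list while preserving order.")

draft = llm(spec)                    # the model authors the lines
ns: dict = {}
exec(draft, ns)                      # the human runs the draft...
assert ns["dedupe"]([3, 1, 3, 2, 1]) == [3, 1, 2]   # ...and checks it
```

In this framing the human never types the function body; the work shifts to writing the spec and deciding whether the draft passes muster.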
The idea is good. Does it imply that smart software has reached the end of its current trajectory and will not be able to:
- Recognize a problem
- Formulate appropriate questions
- Obtain a solution via research, experimentation, or Eureka! moments?
The observation by the Microsoft CTO does not seem to consider this question about a trolley line that can only follow its tracks.
The article heads off in another direction; specifically, what happens to the costs?
IBM CEO Arvind Krishna is quoted as saying:
“If you can produce 30 percent more code with the same number of people, are you going to get more code written or less?” Krishna rhetorically posed, suggesting that increased efficiency would stimulate innovation and market growth rather than job losses.
Where does this leave “coders”?
Several observations:
- Those in the top one percent of skills are in good shape. The other 99 percent may want to consider different paths to a bright, fulfilling future
- Money, not quality, is going to become more important
- Inexperienced “coders” may find themselves looking for ways to get skills at the same time unneeded “coders” are trying to reskill.
It is no surprise that CNET reported, “The public is particularly concerned about job losses. AI experts are more optimistic.”
Net net: Smart software, good or bad, is going to reshape work in a big chunk of the workforce. Are schools preparing students for this shift? Are there government programs in place to assist older workers? As a dinobaby, it seems the answer is not far to seek.
Stephen E Arnold, April 8, 2025
Amazon Takes the First Step Toward Moby Dickdom
April 7, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
This Engadget article does not predict the future. “Amazon Will Use AI to Generate Recaps for Book Series on the Kindle” reports:
Amazon’s new feature could make it easier to get into the latest release in a series, especially if it’s been some time since you’ve read the previous books. The new Recaps feature is part of the latest software update for the Kindle, and the company compares it to “Previously on…” segments you can watch for TV shows. Amazon announced Recaps in a blog post, where it said that you can get access to it once you receive the software update over the air or after you download and install it from Amazon’s website. Amazon didn’t talk about the technology behind the feature in its post, but a spokesperson has confirmed to TechCrunch that the recaps will be AI generated.
You may know a person who majored in American or English literature. Here’s a question you could pose:
Do those novels by a successful author follow a pattern; that is, repeatable elements and a formula?
My hunch is that authors who have written a series of books have a recipe. The idea is, “If it makes money, do it again.” In the event that you could ask Nora Roberts or commune with Billy Shakespeare, did their publishers ask, “Could you produce another one of those for us? We have a new advance policy.” When my Internet 2000: The Path to the Total Network made money in 1994, I used the approach, tone, and research method for my subsequent monographs. Why? People paid to read or flip through the collected information presented my way. I admit that I combined luck, what I learned at a blue chip consulting firm, and inputs from people who had written successful non-fiction “reports.” My new monograph — The Telegram Labyrinth — follows this blueprint. Just ask my son, and he will say, “My dad has a template and fills in the blanks.”
If a dinobaby can do it, what about flawed smart software?
Chase down a person who teaches creative writing, preferably in a pastoral setting. Ask that person, “Do successful authors of series follow a pattern?”
Here’s what I think is likely to happen at Amazon. Remember. I have zero knowledge about the inner workings of the Bezos bulldozer. I inhale its fumes like many other people. Also, Engadget doesn’t get near this idea. This is a dinobaby opinion.
Amazon will train its smart software to write summaries. Then someone at Amazon will ask the smart software to generate a 5,000 word short story in the style of Nora Roberts or some other money spinner. If the story is okay, then the Amazonian with a desire to shift gears says, “Can you take this short story and expand it to a 200,000 word novel, using the patterns, motifs, and rhetorical techniques of the series of novels by Nora, Mark, or whoever?”
Guess what?
Amazon now has an “original” novel which can be marketed as an Amazon test, a special to honor whomever, or an experiment. If Prime members or the curious click a lot, that Amazon employee has a new business to propose to the big bulldozer driver.
How likely is this scenario? My instinct is that there is a 99 percent probability that an individual at Amazon or the firm from which Amazon is licensing its smart software has or will do this.
How likely is it that Amazon will sell these books to the specific audience known to consume the confections of Nora and Mark or whoever? I think the likelihood is close to 80 percent. The barriers are:
- Bad optics among publishers, many of which are not pals of fume spouting bulldozers in the few remaining bookstores
- Legal issues because both publishers and authors will grouse and take legal action. The method mostly worked when Google was scanning everything from timetables of 19th century trains in England to books just unwrapped for the romance novel crowd
- Management disorganization. Yep, Amazon is suffering from organizational dysfunction syndrome just like other technology marvels
- The outputs lack the human touch. The project gets put on ice until OpenAI, Anthropic, or whatever comes along and does a better job and probably for fewer computing resources which means more profit.
What’s important is that this first step is now public and underway.
Engadget says, “Use it at your own risk.” Whose risk, may I ask?
Stephen E Arnold, April 7, 2025
AI May Fizzle and the New York Times Is Thrilled
April 7, 2025
Yep, a dinobaby blog post. No smart software required.
I read “The Tech Fantasy That Powers A.I. Is Running on Fumes.” Is this a gleeful headline or not? Not even 10 days after the Italian “all AI” newspaper found itself the butt of merciless humor, the NYT is going for the jugular.
The write up opines:
- “Midtech” — tech but not really
- “Silly” — Showing little thought or judgment
- “Academics” — Ivory tower dwellers, not real journalists and thinkers
Here’s a quote from a person who obviously does not like self checkouts:
The economists Daron Acemoglu and Pascual Restrepo call these kinds of technological fizzles “so-so” technologies. They change some jobs. They’re kind of nifty for a while. Eventually they become background noise or are flat-out annoying, say, when you’re bagging two weeks’ worth of your own groceries.
And now the finale:
But A.I. is a parasite. It attaches itself to a robust learning ecosystem and speeds up some parts of the decision process. The parasite and the host can peacefully coexist as long as the parasite does not starve its host. The political problem with A.I.’s hype is that its most compelling use case is starving the host — fewer teachers, fewer degrees, fewer workers, fewer healthy information environments.
My thought is that the “real” journalists at the NYT hope that AI fails. Most routine stories can be handled by smart software. Sure, there are errors. But looking at a couple of versions of the same event is close enough for horseshoes.
The writing is on the wall of the bean counters’ offices: Reduce costs. Translation: Some “real” journalists can try to get a job as a big time consultant. Oh, strike that. Outfits that sell brains are replacing flaky MBAs with smart software. Well, there is PR and marketing. Oh, oh, strike that too. Telegram’s little engines of user-controlled smart software can automate ads. Will other ad outfits follow Telegram’s lead? Absolutely.
Yikes. It won’t be long before some “real” journalists will have an opportunity to write their version of:
- Du côté de chez Swann
- À l’ombre des jeunes filles en fleurs
- Le Côté de Guermantes
- Sodome et Gomorrhe
- La Prisonnière
- Albertine disparue (also published as La Fugitive)
- Le Temps retrouvé
Which one will evoke the smell of the newsroom?
Stephen E Arnold, April 7, 2025
Free! Does Google Do Anything for Free?
April 7, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
What an inducement! Such a deal!
How excited was I to read this headline:
Gemini 2.5 Pro Is Google’s Most Powerful AI Model and It’s Already Free
The write up explains:
Google points to several benchmark tests that show the prowess of Gemini 2.5 Pro. At the time of writing it tops the LMArena leaderboard, where users give ratings on responses from dozens of AI chatbots. It also scores 18.8 percent on the Humanity’s Last Exam test—which measures human knowledge and reasoning—narrowly edging out rival models from OpenAI and Anthropic.
As a dinobaby, I understand this reveal is quantumly supreme. Google is not only the best. The “free” approach puts everyone on notice that Google is not interested in money. Google is interested in…. Well, frankly, I am not sure.
Thanks, You.com. Good enough. I have to pay to get this type of smart art.
Possible answers include: [a] publicity to deal with the PR tsunami the OpenAI Ghibli capability splashed across my newsfeeds, [b] a response to the Chinese open source alternatives from eCommerce outfits and mysterious venture capital firms, [c] Google’s tacit admission that its best card is the joker that allows free access to the game, [d] an unimaginative response to a competitive environment less and less Google centric each day.
Pick one.
The write up reports:
The frenetic pace of AI development shows no signs of slowing down anytime soon, and we can expect more Gemini 2.5 models to appear in the near future. “As always, we welcome feedback so we can continue to improve Gemini’s impressive new abilities at a rapid pace, all with the goal of making our AI more helpful,” says Koray Kavukcuoglu, from Google’s DeepMind AI lab.
The question is, “Have the low-hanging AI goodies been harvested?”
I find that models are becoming less distinctive. One of my team handed me two sheets of paper. On one was a paragraph from our locally installed Deepseek. The other was a sheet of paper of an answer from You.com’s “smart” option.
My response was, “So?” I could not tell which model produced what because the person whom I pay had removed the idiosyncratic formatting of the Deepseek output and the equally distinctive output from You.com’s Smart option.
My team member asked, “Which do you prefer?”
I said, “Get Whitney to create one write up and input our approach to the topic.”
Both were okay; neither was good enough to use as handed to me.
Good enough. The AI systems reached “good enough” last year. Since then, not much change except increasing similarity.
Free is about right. What’s next? Paying people to use Google, Bing-style?
Now to answer the headline question, “Does Google do anything for free?” My answer: Only when the walls are closing in.
Stephen E Arnold, April 7, 2025