Unified Data Across Governments? How Useful for a Non-Participating Country
February 18, 2025
A dinobaby post. No smart software involved.
I spoke with a person whom I have known for a long time. The individual lives and works in Washington, DC. He mentioned “disappeared data.” I did some poking around and, sure enough, certain US government public-facing information had been “disappeared.” Interesting. For a short period of time I made a few contributions to what was FirstGov.gov, now USA.gov.
For those who don’t remember or don’t know about President Clinton’s Year 2000 initiative, the idea was interesting. At that time, access to public-facing information on US government servers was via Web search engines. In order to locate a tax form, one would navigate to an available search system. On Google one would just slap in “IRS” or “IRS” plus the form number.
Most of the US government public-facing Web sites were reasonably straightforward. Others were fairly difficult to use. The US Marine Corps’ Web site had poor response times. I think it was hosted on something called Server Beach, and the would-be recruit would have to wait for the recruitment station data to appear. The Web page worked, but it was slow.
President Clinton, or someone in his administration, wanted the problem fixed with a search system for US government public-facing content. After a bit of work, the system went online in September 2000. The system morphed into a US government portal, a bit like the Yahoo.com portal model.
I thought about the information in “Oracle’s Ellison Calls for Governments to Unify Data to Feed AI.” The write up reports:
Oracle Corp.’s co-founder and chairman Larry Ellison said governments should consolidate all national data for consumption by artificial intelligence models, calling this step the “missing link” for them to take full advantage of the technology. Fragmented sets of data about a population’s health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models…
Several questions arise; for instance:
- What country or company provides the technology?
- Who manages what data are added and what data are deleted?
- What are the rules of access?
- What about public data which are no longer available for public access; for example, the “disappeared” data from US government Web sites?
- What happens to commercial or quasi-commercial government units which repackage public data and sell it at a hefty markup?
Based on my brief brush with the original Clinton project, I think the idea is interesting. But I have one other question in mind: What happens when non-participating countries get access to the aggregated public-facing data? Digital information is a tricky resource to secure. In fact, once data are digitized and connected to a network, they are fair game. Someone, somewhere will figure out how to access, obtain, exfiltrate, and benefit from aggregated data.
The idea is, in my opinion, a bit of grandstanding like Google’s quantum supremacy claims. But US high technology wizards are ready and willing to think big thoughts and take even bigger actions. We live in interesting times, but I am delighted that I am old.
Stephen E Arnold, February 18, 2025
Real AI News? Yes, with Fact Checking, Original Research, and Ethics Too
February 17, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
This is “real” news… if the story is based on fact checking, original research, and those journalistic ethics pontifications. Let’s assume that these conditions of old-fashioned journalism apply. This means that the story “New York Times Goes All-In on Internal AI Tools” pinpoints a small shift in how “real” news will be produced.
The write up asserts:
The New York Times is greenlighting the use of AI for its product and editorial staff, saying that internal tools could eventually write social copy, SEO headlines, and some code.
Yep, some. There’s ground truth (that’s an old-fashioned journalism concept) in blue-chip consulting. The big money maker is what’s called scope creep. Stated simply, one starts small, like a test or a trial. Then, if the sky does not fall as quickly as some companies’ revenue, the small gets a bit larger. You check to make sure the moon is in the sky and the revenues are not falling, hopefully as quickly as before. Then you expand. At each step there are meetings, presentations, analyses, and group reassurances from others in the deciders category. Then, like magic, the small project is the rough equivalent of a nuclear-powered aircraft carrier.
Ah, scope creep.
Understate what one is trying. Watch it. Scale it. End up with an aircraft-carrier-scale project. Yes, it is happening at an outfit like the New York Times, if the cited article is accurate.
What scope creep stage setting appears in the write up? Let’s look:
- Staff will be trained. Your job, one assumes, is safe. (Ho ho ho)
- AI will help uncover “the truth.” (Absolutely)
- More people will benefit. (Don’t forget the stakeholders, please.)
What’s the write up presenting as actual factual?
The world’s greatest newspaper will embrace hallucinating technology, but only a little bit.
Scope creep begins, and we are told it won’t change a thing. That claim will be revisited once the cost savings, revenue, and profit data become available at the speed of newspaper decision making.
Stephen E Arnold, February 17, 2025
Sam Altman: The Waffling Man
February 17, 2025
Another dinobaby commentary. No smart software required.
Chaos is good. Flexibility is good. AI is good. Sam Altman, whom I reference as “Sam AI-Man,” has some explaining to do. OpenAI is a consumer of cash. The Chinese PR push suggests that Deepseek has found a way to do OpenAI-type computing the way Shein and Temu do gym clothes.
I noted “Sam Altman Admits OpenAI Was On the Wrong Side of History in Open Source Debate.” The write up does not come out and state, “OpenAI was stupid when it embraced proprietary software’s approach” to meeting user needs. To be frank, Sam AI-Man was not particularly clear either.
The write up says that Sam AI-Man said:
“Yes, we are discussing [releasing model weights],” Altman wrote. “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.” He noted that not everyone at OpenAI shares his view and it isn’t the company’s current highest priority. The statement represents a remarkable departure from OpenAI’s increasingly proprietary approach in recent years, which has drawn criticism from some AI researchers and former allies, most notably Elon Musk, who is suing the company for allegedly betraying its original open source mission.
My view is that Sam AI-Man wants to emulate other super techno leaders and get whatever he wants. Not surprisingly, other super techno leaders have their own ideas. I would suggest that the objective of these AI jousts is power, control, and money.
“What about the users?” a faint voice asks. “And the investors?” another bold soul queries.
Who?
Stephen E Arnold, February 17, 2025
IBM Faces DOGE Questions?
February 17, 2025
Simon Willison reminded us of the famous IBM internal training document that reads: “A Computer Can Never Be Held Accountable.” The document is also relevant to AI algorithms. Unfortunately, the document has a mysterious history, and the IBM Corporate Archives don’t have a copy of the presentation. A Twitter user with the name @bumblebike posted the original image, saying he found it when he went through his father’s papers. The presentation with the legendary statement was destroyed in a 2019 flood.
The image appears to have been first shared online in a tweet by @bumblebike in February 2017, where he confirmed it was from 1979 internal training.
Here’s another tweet from @bumblebike from December 2021 about the flood:
“Unfortunately destroyed by flood in 2019 with most of my things. Inquired at the retirees club zoom last week, but there’s almost no one the right age left. Not sure where else to ask.”
We don’t need the actual IBM document to know that IBM hasn’t done well when it comes to search. IBM, like most firms, tried and sort of fizzled. (Remember Data Fountain or CLEVER?) IBM also moved into content management. Yep, the semi-Xerox, semi-information thing. But the good news is that a time-sharing solution called Watson is doing pretty well. It’s not winning Jeopardy!, but it is chugging along.
Now IBM professionals in DC have to answer the DOGE nerd squad’s questions? Why not give OpenAI a whirl? The old Jeopardy! winner is kicking back. DOGE wants to know.
Whitney Grace, February 17, 2025
Who Knew? AI Makes Learning Less Fun
February 14, 2025
Bill Gates was recently on the Jimmy Fallon show to promote his biography. In the interview, Gates shared his views on AI, stating that AI will replace a lot of jobs. Fallon hoped that TV show hosts wouldn’t be replaced, and he probably doesn’t have anything to worry about. Why? Because he’s entertaining and interesting.
Humans love to be entertained, but AI just doesn’t have the capability to pull it off. Media and Learning shared one teacher’s experience with AI-generated learning videos: “When AI Took Over My Teaching Videos, Students Enjoyed Them Less But Learned The Same.” Media and Learning conducted an experiment to see whether students would learn more from teacher-made or AI-generated videos. Here’s how the experiment went:
“We used generative AI tools to generate teaching videos on four different production management concepts and compared their effectiveness versus human-made videos on the same topics. While the human-made videos took several days to make, the analogous AI videos were completed in a few hours. Evidently, generative AI tools can speed up video production by an order of magnitude.”
The AI videos used ChatGPT-written scripts, MidJourney for illustrations, and HeyGen for teacher avatars. The teacher-made videos were made in the traditional manner: teachers writing scripts, recording themselves, and editing the video in Adobe Premiere.
When it came to students retaining and testing on the educational content, both video types yielded the same results. Students, however, preferred the teacher-made videos over the AI ones. Why?
“The reduced enjoyment of AI-generated videos may stem from the absence of a personal connection and the nuanced communication styles that human educators naturally incorporate. Such interpersonal elements may not directly impact test scores but contribute to student engagement and motivation, which are quintessential foundations for continued studying and learning.”
Media and Learning suggests that AI could be used to complement instruction time, freeing teachers to focus on personalized instruction. We’ll see what happens as AI becomes more competent, but for now we can rest easy knowing that human engagement is more interesting than algorithms. Or at least Jimmy Fallon can.
Whitney Grace, February 14, 2025
What Happens When Understanding Technology Is Shallow? Weakness
February 14, 2025
Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.
I like this question. Even more satisfying is that a big name seems to have answered it. I refer to an essay by Gary Marcus, “The Race for ‘AI Supremacy’ Is Over — at Least for Now.”
Here’s the key passage in my opinion:
China caught up so quickly for many reasons. One that deserves Congressional investigation was Meta’s decision to open source their LLMs. (The question that Congress should ask is, how pivotal was that decision in China’s ability to catch up? Would we still have a lead if they hadn’t done that? Deepseek reportedly got its start in LLMs retraining Meta’s Llama model.) Putting so many eggs in Altman’s basket, as the White House did last week and others have before, may also prove to be a mistake in hindsight. … The reporter Ryan Grim wrote yesterday about how the US government (with the notable exception of Lina Khan) has repeatedly screwed up by placating big companies and doing too little to foster independent innovation
The write up is quite good. What’s missing, in my opinion, is the linkage: a probe, released as a not-so-stealthy open source project, to determine how a technology innovation can affect the US financial markets. The result was satisfying to the Chinese planners.
Also, the write up does not put the probe or “foray” in a strategic context. China wants to make certain its simple message “China smart, US dumb” gets into the world’s communication channels. That worked quite well.
Finally, the write up does not point out that the US approach to AI has given China an opportunity to demonstrate that it can borrow and refine with aplomb.
Net net: I think China is doing a Shein and Temu in the AI and smart software sector.
Stephen E Arnold, February 14, 2025
Orchestration Is Not Music When AI Agents Work Together
February 13, 2025
Are multiple AIs better than one? Megaputer believes so. The data firm sent out a promotional email urging us to “Build Multi-Agent Gen-AI Systems.” With the help of its products, of course. We are told:
“Most business challenges are too complex for a single AI engine to solve. What is the way forward? Introducing Agent-Chain Systems: A novel groundbreaking approach leveraging the collaborative strengths of specialized AI models, each configured for distinct analytical tasks.
- Validate results through inter-agent verification mechanisms, minimizing hallucinations and inconsistencies.
- Dynamically adapt workflows by redistributing tasks among Gen-AI agents based on complexity, optimizing resource utilization and performance.
- Build AI applications in hours for tasks like automated taxonomy building and complex fact extraction, going beyond traditional AI limitations.”
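Megaputer does not spell out how its inter-agent verification works, so here is a minimal, hypothetical Python sketch of the general pattern: one stand-in “agent” produces an answer, and a second agent checks it before the result is accepted. The function names and the toy verification logic are illustrative assumptions, not Megaputer’s actual API.

```python
# A minimal sketch of inter-agent verification, not Megaputer's implementation.
# Each "agent" here is a stand-in function; in a real agent-chain system each
# call would go to a separately configured LLM.

def extraction_agent(document: str) -> dict:
    # Hypothetical specialist agent: extracts a claimed fact from the text.
    return {"claim": "revenue grew 12% in 2024", "source": document}

def verification_agent(result: dict) -> bool:
    # Hypothetical checker agent: accepts only claims it can find
    # support for in the cited source text.
    return result["claim"].lower() in result["source"].lower()

def agent_chain(document: str, max_retries: int = 2):
    # Run extraction, then verification; retry on failure, then give up.
    for _ in range(max_retries):
        result = extraction_agent(document)
        if verification_agent(result):
            return result  # verified answer
    return None  # unverified; flag for human review

print(agent_chain("Revenue grew 12% in 2024, per the annual report."))
```

The design point is the separation of duties: the generating agent never grades its own work, which is presumably how an agent chain would minimize hallucinations.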
If this approach really reduces AI hallucinations, there may be something to it. The firm invites readers to explore a few case studies it has put together: one is for an anonymous pharmaceutical company, one for a US regulatory agency, and a third for a large retail company. Snapshots of each project’s dashboard further illustrate the concept. Are cooperative AI agents the next big thing in generative AI? Megaputer, for one, is banking on it. Founded back in 1997, the small business is based in Bloomington, Indiana.
Cynthia Murrell, February 13, 2025
LLMs Paired With Authoritarianism Are Dangerous Propaganda Tools
February 13, 2025
AI chatbots are in their infancy. While they have been tested for a number of years, they are still prone to bias and other devastating mistakes. Big business and other organizations aren’t waiting for the technology to improve. Instead, they’re incorporating chatbots and more AI into their infrastructures. Baldur Bjarnason warns about the dangers of AI, especially when it comes to LLMs and censorship, in “Poisoning For Propaganda: Rising Authoritarianism Makes LLMs More Dangerous.”
Large language models (LLMs) are the models that power AI chatbots. Bjarnason warns that using any LLM, even one run locally, is dangerous.
Why?
LLMs are statistical language models shaped by human choices about training data and tuning parameters. Those choices are prone to error and bias, which is one reason AI algorithms are untrustworthy. The models can also be deliberately tilted toward specific opinions, in other words, turned into propaganda machines. Bjarnason warns that LLMs are being used for the lawless takeover of the United States. He also says that corporations, in order to maintain their power, won’t hesitate to remove information from, or add it to, their LLMs if the US government asks.
This is another type of censorship:
“The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place. That’s the job it does. You won’t notice when the censorship kicks in… The alternative approach to censorship, fine-tuning the model to return a specific response, is more costly than keyword blocking and more error-prone. And resorting to prompt manipulation or preambles is somewhat easily bypassed but, crucially, you need to know that there is something to bypass (or “jailbreak”) in the first place. A more concerning approach, in my view, is poisoning.”
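Bjarnason’s point about keyword blocking is easy to see in a toy example. The sketch below is my own illustration, not code from any real chatbot: a hypothetical wrapper screens a model’s output against a block list and substitutes a bland refusal, cheaply and invisibly.

```python
# A minimal sketch of output-side keyword blocking, assuming a hypothetical
# generate() stand-in for an actual LLM call. Real systems are more elaborate,
# but the principle is the same: the filter runs after generation, so the
# user never learns what was cut.

BLOCKED_TERMS = ["forbidden-topic"]  # placeholder block list

def generate(prompt: str) -> str:
    # Stand-in for a real text-generation call.
    return f"Here is a detailed answer about {prompt}."

def filtered_generate(prompt: str) -> str:
    reply = generate(prompt)
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        # The refusal reads like an ordinary model response, which is
        # why this kind of censorship is hard to spot from the outside.
        return "I can't help with that request."
    return reply

print(filtered_generate("forbidden-topic"))        # refusal
print(filtered_generate("production management"))  # normal answer
```

As Bjarnason notes, this approach is cheaper and less error-prone than fine-tuning a model to return a specific response.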
Corporations paired with governments (it’s not just the United States) are “poisoning” LLMs with propagandized sentiments. It’s a subtle way of transforming perspectives without loud indoctrination campaigns, comparable to subliminal messages in commercials or teaching only one viewpoint.
Controls seem unlikely.
Whitney Grace, February 13, 2025
Are These Googlers Flailing? (Yes, the Word Has “AI” in It Too)
February 12, 2025
Is the Byte write up on the money? I don’t know, but I enjoyed it. Navigate to “Google’s Finances Are in Chaos As the Company Flails at Unpopular AI.” Is the momentum of AI starting to wane? I am not sure AI is in its waning moment. Deepseek has ignited a fire under some outfits. But I am not going to critique the write up. I want to highlight some of its interesting information. Let’s go, as Anatoly the gym Meister says, just with an Eastern European accent.
Here’s the first statement in the article which caught my attention:
Google’s parent company Alphabet failed to hit sales targets, falling 0.1 percent short of Wall Street’s revenue expectations — a fraction of a point that’s seen the company’s stock slide almost eight percent today, in its worst performance since October 2023. It’s also a sign of the times: as the New York Times reports, the whiff was due to slower-than-expected growth of its cloud-computing division, which delivers its AI tools to other businesses.
Okay, 0.1 percent is something, but the “flail” metaphor deserved more play in the paragraph, which begs for “flog,” “thrash,” and “whip.”
I used Sam AI-Man’s AI software to produce a good enough image of Googlers flailing. Frankly, I don’t think Sam AI-Man’s system understood exactly what I wanted, but it is close enough for horseshoes in today’s world.
I noted this information and circled it. I love Gouda cheese. How can Google screw up cheese after its misstep with glue and cheese on pizza? Yo, Googlers. Check the cheese references.
Is Alphabet’s latest earnings result the canary in the coal mine? Should the AI industry brace for tougher days ahead as investors become increasingly skeptical of what the tech has to offer? Or are investors concerned over OpenAI’s ChatGPT overtaking Google’s search engine? Illustrating the drama, this week Google appears to have retroactively edited the YouTube video of a Super Bowl ad for its core AI model called Gemini, to remove an extremely obvious error the AI made about the popularity of gouda cheese.
Stalin revised history books. Google changes cheese references for its own advertising. But cheese?
The write up concludes with this, mostly from the Guardian, the UK newspaper that keeps a close eye on American high technology:
“Although it’s still well insulated, Google’s advantages in search hinge on its ubiquity and entrenched consumer behavior,” Emarketer senior analyst Evelyn Mitchell-Wolf told The Guardian. This year “could be the year those advantages meaningfully erode as antitrust enforcement and open-source AI models change the game,” she added. “And Cloud’s disappointing results suggest that AI-powered momentum might be beginning to wane just as Google’s closed model strategy is called into question by Deepseek.”
Does this justify the use of the word “flail”? Sure, but I like “thrash” a lot. And “wane” is good.
Stephen E Arnold, February 12, 2025
A New Spin on Insider Threats: Employees Secretly Use AI At Work
February 12, 2025
We’re afraid of AI replacing our jobs. Employers are blamed for wanting to replace humans with algorithms, but employees are already bringing AI into work. According to the BBC, employees are secretly using AI: “Why Employees Smuggle AI Into Work.” In IT departments across the United Kingdom (and probably the world), knowledge workers are using AI tools without permission from their leads.
Software AG conducted a survey of knowledge workers, and the results showed that half of them used personal AI tools. Knowledge workers are defined as people who primarily work at a desk or a computer. Some of them use the tools because their employer doesn’t provide any, and others said they wanted to choose their own tools.
Many of the workers are also not asking. They’re abiding by the mantra of, “It’s easier to ask forgiveness than permission.”
One worker uses ChatGPT as a mechanized coworker. ChatGPT allows the worker to consume information at faster rates, and it has increased his productivity. His company banned AI tools; he didn’t know why but assumes it is a control thing.
AI tools also pose security risks because the algorithms learn from user input. The tools store information, and that information can expose company secrets:
“Companies may be concerned about their trade secrets being exposed by the AI tool’s answers, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that’s unlikely. ‘It’s pretty hard to get the data straight out of these [AI tools],’ he says. However, firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.”
Using AI tools is like adopting any new technology: the tools need to be used and tested, then regulated. AI can’t replace experience, but it certainly helps get the job done.
Whitney Grace, February 12, 2025