AI May Be Discovering Kurt Gödel Just as Einstein and von Neumann Did

March 17, 2025

This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything.

AI re-thinking is becoming more widespread. I published a snippet of an essay about AI and its impact in socialist societies on March 10, 2025. I noticed “A Bear Case: My Predictions Regarding AI Progress.” The write up is interesting, and I think it represents thinking which is becoming more prevalent among individuals who have racked up what I call AI mileage.

The main theme of the write up is a modern day application of Kurt Gödel’s annoying incompleteness theorem. I am no mathematician like my great uncle Vladimir Arnold, who worked for years with the somewhat quirky Dr. Kolmogorov. (Family tip: Going winter camping with Dr. Kolmogorov was not a good idea unless… well, you know.)

The main idea is that a formal axiomatic system satisfying certain technical conditions cannot decide the truth value of all statements about natural numbers. In a nutshell, a system cannot step outside itself to settle every question it can pose. Smart software is not able to go outside of its training boundaries as far as I know.
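For the curious, a standard textbook formulation of the first incompleteness theorem (my paraphrase of the usual statement, not anything from the essay) runs like this:

```latex
% Gödel's first incompleteness theorem (standard formulation)
\textbf{Theorem (Gödel, 1931).} Let $T$ be a consistent, effectively
axiomatizable formal theory strong enough to express elementary
arithmetic. Then there is a sentence $G_T$ about the natural numbers
such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T .
\]
% That is, T can neither prove nor refute G_T: the system cannot
% settle every question expressible within it.
```

The “certain technical conditions” above are consistency, effective axiomatizability, and enough arithmetic; drop any one of them and the theorem no longer applies.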

Back to the essay: the author points out that AI delivers something useful:

There will be a ton of innovative applications of Deep Learning, perhaps chiefly in the field of biotech, see GPT-4b and Evo 2. Those are, I must stress, human-made innovative applications of the paradigm of automated continuous program search. Not AI models autonomously producing innovations.

The essay does contain a question I found interesting:

Because what else are they [AI companies and developers] to do? If they admit to themselves they’re not closing their fingers around godhood after all, what will they have left?

Let me offer several general thoughts. I admit that I am not able to answer the question, but some ideas crossed my mind when I was thinking about the sporty Kolmogorov, my uncle’s advice about camping in the winter, and this essay:

  1. Something else will come along. There is a myth that technology progresses. I think technology is like the fictional tribble on Star Trek. The products and services are destined to produce more products and services. Like the Santa Fe Institute crowd, order emerges. Will the next big thing be AI? Probably AI will be in the DNA of the next big thing. So one answer to the question is, “Something will emerge.” Money will flow and the next big thing cycle begins again.
  2. The innovators and the AI companies will pivot. This is a fancy way of saying, “Try to come up with something else.” Even in the age of monopolies and oligopolies, change is relentless. Some of the changes will be recognized as the next big thing or at least the thing a person can do to survive. Does this mean Sam AI-Man will manage the robots at the local McDonald’s? Probably not, but he will come up with something.
  3. The AI hot pot will cool. Life will regress to the mean, or at least to behavior that is not hell bent on becoming superhuman like the guy who gets transfusions from his kid, the wonky “have my baby” thinking of a couple of high profile technologists, or the money lust of some 25 year old financial geniuses on Wall Street. A digitized organization man living out the theory of the leisure class will return. (Tip: Buy a dark grey suit. Lose the T shirt.)

As an 80 year old dinobaby, I find the angst of AI interesting. If Kurt Gödel were alive, he might agree to comment, “Sonny, you can’t get outside the set.” My uncle would probably say, “Those costs. Are they crazy?”

Stephen E Arnold, March 17, 2025

What is the Difference Between Agentic and Generative AI? A Handy Chart

March 17, 2025

Agentic is the new AI buzzword. But what does it mean? Data-platform and AI firm Domo offers clarity in, "Agentic AI Explained: Definition, Benefits, and Use Cases." Writer Haziqa Sajid defines the term:

"Agentic AI is an advanced AI system that can act independently, make decisions, and adapt to changing situations. These AI systems can handle complex tasks such as strategic planning, multi-step automation, and dynamic problem-solving with minimal human oversight. This makes them more capable than traditional rule-based AI. … Agentic AI is designed to work like a human employee performing tasks that comprehend natural language input, set objectives, reason through a task, and modify actions based on updated input. It employs advanced machine learning, generative AI, and adaptive decision-making to learn from the data, refine its approach, and improve performance over time."

Wow, that sounds a lot like what we were promised with generative AI. Perhaps this version will meet expectations. AI Agents are still full of potential, poised on the edge of infiltrating real-world tools. The post describes what Domo sees as the tech’s advantages and gives the basics of how it works.

The most useful part is the handy chart comparing agentic and generative AI. For example, while the actual purpose of generative AI is mainly to generate text, image, and audio content, agentic AI is for executing tasks and making decisions in changing environments. The chart’s other measures of comparison include autonomy, interactivity, use cases, learning processes, and integration methods. See the post for that bookmark-worthy chart.

Founded back in 2010, Domo is based in Utah. The publicly traded firm boasts over 2,600 clients across diverse industries.

Cynthia Murrell, March 17, 2025

Ah, Apple, Struggling with AI like Amazon, Google, et al

March 14, 2025

This blog post is the work of a humanoid dino baby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you?

Yes, it is Friday, March 14, 2025. Everyone needs a moment of amusement. I found this luscious apple bit and thought I would share it. Dinobabies like knowing how the world and Apple treats other dinobabies. You, as a younger humanoid, probably don’t care. Someday you will.

“Grandmother Gets X-Rated Message after Apple AI Fail” reports:

A woman from Dunfermline has spoken of her shock after an Apple voice-to-text service mistakenly inserted a reference to sex – and an apparent insult – into a message left by a garage… An artificial intelligence (AI) powered service offered by Apple turned it into a text message which – to her surprise – asked if she had been "able to have sex" before calling her a "piece of ****".

Not surprisingly, Apple did not respond to the BBC request for a comment. Unperturbed, the Beeb made some phone calls. According to the article:

An expert has told the BBC the AI system may have struggled in part because of the caller’s Scottish accent, but far more likely factors were the background noise at the garage and the fact he was reading off a script.

One BBC expert offered these reasons for the fouled-up message:

Peter Bell, a professor of speech technology at the University of Edinburgh, listened to the message left for Mrs Littlejohn. He suggested it was at the "challenging end for speech-to-text engines to deal with". He believes there are a number of factors which could have resulted in rogue transcription:

  • The fact it is over the telephone and, therefore, harder to hear
  • There is some background noise in the call
  • The way the garage worker speaks is like he is reading a prepared script rather than speaking in a natural way

"All of those factors contribute to the system doing badly, " he added. "The bigger question is why it outputs that kind of content.

I have a much simpler explanation. As at Microsoft, marketing is much easier than delivering something that works for humans. I am tempted to make fun of Apple Intelligence, conveniently abbreviated AI. I am tempted to point out that real world performance differences across Apple’s computer line are not discernible when browsing Web pages or entering one’s iTunes password into the system several times a day.

Let’s be honest. Apple is big. Like Amazon (heaven help Alexa by the way), Google (the cheese fails are knee slappers, Sundar), and the kindergarten squabbling among Softies and OpenAI at Microsoft — Apple cannot “do” smart software at this time. Therefore, errors will occur.

On the other hand, perhaps the dinobaby who received the message is “a piece of ****"? Most dinobabies are.

Stephen E Arnold, March 14, 2025

Microsoft Leadership Will Be Replaced by AI… Yet

March 14, 2025

Whenever we hear the latest tech announcement, we assume it spells doom and gloom for humanity. While fire, the wheel, the Industrial Revolution, and computers have yet to dismantle humanity, the jury is still out on AI. However, Gizmodo reports that Microsoft’s Satya Nadella says we shouldn’t be worried about AI and that it’s time to stop glorifying it: “Microsoft’s Satya Nadella Pumps the Brakes on AI Hype.” Nadella placed a damper on AI hype with the following statement from a podcast: “Success will be measured through tangible, global economic growth rather than arbitrary benchmarks of how well AI programs can complete challenges like obscure math puzzles. Those are interesting in isolation but do not have practical utility.”

Nadella said that technology workers claim AI will replace humans, but that’s not the case. He calls that type of thinking a distraction; the tech industry needs to “get practical and just try and make money before investors get impatient.” OpenAI CEO Sam Altman, Nadella’s partner in the Microsoft-OpenAI tie up, is a prime example of AI fear mongering. He uses it as a tool to give himself power.

Nadella continued that if the tech industry and its investors want AI growth akin to the Industrial Revolution, then let’s concentrate on it. Proof of that type of growth would be something like 10% economic growth attributable to AI. Investing in AI can’t just happen on the supply side; there needs to be demand for AI-built products.

Nadella’s statements are like pouring a bucket of cold water on a sleeping person:

"On that sense, Nadella is trying to slap tech executives awake and tell them to cut out the hype. AI safety is somewhat of a concern—the models can be abused to create deepfakes or mass spam—but it exaggerates how powerful these systems are. Eventually, push will come to shove and the tech industry will have to prove that the world is willing to put down real money to use all these tools they are building. Right now, the use cases, like feeding product manuals into models to help customers search them faster, are marginal.”

Many well-known companies still plan on implementing AI despite their difficulties. Other companies have downsized their staffing to rely more on AI chatbots, but the bots prove to be inefficient and frustrating. Microsoft, however, is struggling with management issues related to OpenAI, its internal “experts,” and the Softies who think they can do better. (Did Microsoft ask Grok, “How do I manage this multibillion-dollar bonfire?”)

Let’s blame it on AI.

Whitney Grace, March 14, 2025

Keeping an Eye on AI? Here Are Fifteen People of Interest for Some

March 13, 2025

Underneath the hype, there are some things AI is actually good at. But besides the players who constantly make the news, who is really shaping the AI landscape? A piece at Silicon Republic introduces us to "15 Influential Players Driving the AI Revolution." Writer Jenny Darmody observes:

"As AI continues to dominate the headlines, we’re taking a closer look at some of the brightest minds and key influencers within the industry. Throughout the month of February, SiliconRepublic.com has been putting AI under the microscope for more of a deep dive, looking beyond the regular news to really explore what this technology could mean. From the challenges around social media advertising in the new AI world to the concerns around its effect on the creative industries, there were plenty of worrying trends to focus on. However, there were also positive sides to the technology, such as its ability to preserve minority languages like Irish and its potential to reduce burnout in cybersecurity. While exploring these topics, the AI news just kept rolling: Deepseek continued to ruffle industry feathers, Thomson Reuters won a partial victory in its AI copyright case and the Paris AI Summit brought further investments and debates around regulation. With so much going on in the industry, we thought it was important to draw your attention to some key influencers you should know within the AI space."

Ugh, another roster of tech bros? Not so fast. On this list, the women actually outnumber the men, eight to seven. In fact, the first entry is Ireland’s first AI Ambassador Patricia Scanlon, who has hopes for truly unbiased AI. Then there is the EU’s Lucilla Sioli, head of the European Commission’s AI Office. She is tasked with both coordinating Europe’s AI strategy and implementing the AI Act. We also happily note the inclusion of New York University’s Juliette Powell, who advises clients from gaming companies to banks on the responsible use of AI. See the write-up for the rest of the women and men who made the list.

Cynthia Murrell, March 13, 2025

AI Hiring Spoofs: A How To

March 12, 2025

Be aware. A dinobaby wrote this essay. No smart software involved.

The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, worked with me for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.

“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique that was not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.

The write up explains that a company wants to hire a professional. Everything hums along and then:

…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.

The cited article explains how to set up and operate this type of deep fake play. I am not going to present the “how to” in this blog post. If you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.

Several observations:

  1. Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
  2. The best way to avoid AI centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately the old-fashioned way is slow, expensive, and prone to social engineering tactics.
  3. As AI and bad actors take advantage of the increased capabilities of smart software, humans do not adapt quickly when those humans are not actively involved with AI capabilities. Personnel related matters are a pain point for many organizations.

To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write up was a cyber security outfit. Did the early alert, proactive, AI infused system prevent penetration?

Nope.

Stephen E Arnold, March 12, 2025

Survey: Kids and AI Tools

March 12, 2025

Our youngest children are growing up alongside AI. Or, perhaps, it would be more accurate to say increasingly intertwined with it. Axios tells us, "Study Zeroes in on AI’s Youngest Users." Writer Megan Morrone cites a recent survey from Common Sense Media that examined AI use by children under 8 years old. The researchers surveyed 1,578 parents last August. We learn:

"Even the youngest of children are experimenting with a rapidly changing technology that could reshape their learning and critical thinking skills in unknown ways. By the numbers: One in four parents of kids ages 0-8 told Common Sense their children are learning critical thinking skills from using AI.

  • 39% of parents said their kids use AI to ‘learn about school-related material,’ while only 8% said they use AI to ‘learn about AI.’
  • For older children (ages 5-8) nearly 40% of parents said their child has used an app or a device with AI to learn.
  • 24% of children use AI for ‘creative content,’ like writing short stories or making art, according to their parents."

It is too soon to know the long-term effects of growing up using AI tools. These kids are effectively subjects in a huge experiment. However, we already see indications that reliance on AI is bad for critical thinking skills. And that research is on adults, never mind kids whose base neural pathways are just forming. Parents, however, seem unconcerned. Morrone reports:

  • More than half (61%) of parents of kids ages 0-8 said their kids’ use of AI had no impact on their critical thinking skills.
  • 60% said there was no impact on their child’s well-being.
  • 20% said the impact on their child’s creativity was ‘mostly positive.’

Are these parents in denial? They cannot just be happy to offload parenting to algorithms. Right? Perhaps they just need more information. Morrone points us to EqualAI’s new AI Literacy Initiative but, again, that resource is focused on adults. The write-up emphasizes the stakes of this great experiment on our children:

‘Our youngest children are on the front lines of an unprecedented digital transformation,’ said James P. Steyer, founder and CEO of Common Sense.

‘Addressing the impact of AI on the next generation is one of the most pressing issues of our time,’ Miriam Vogel, CEO of EqualAI, told Axios in an email. ‘Yet we are insufficiently developing effective approaches to equip young people for a world where they are both using and profoundly affected by AI.’

What does this all mean for society’s future? Stay tuned.

Cynthia Murrell, March 12, 2025

AI and Jobs: Tell These Folks AI Will Not Impact Their Work

March 12, 2025

The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.

I have a friend who does some translation work. She’s chugging along because of her reputation for excellent work. However, a younger person who worked with me on a project requiring Russian language skills has not fared as well. The young person lacks the reputation and the contacts with a base of clients. The older person can be as busy as she wants to be.

What’s the future of translating from one language to another for money? For the established person, smart software appears to have had zero impact. The younger person seems to be finding that smart software is getting the translation work.

I will offer my take in a moment. First, let’s look at “Turkey’s Translators Are Training the AI Tools That Will Replace Them.”

I noted this statement in the cited article:

Turkey’s sophisticated translators are moonlighting as trainers of artificial intelligence models, even as their profession shrinks with the rise of machine translations. As the models improve, these training jobs, too, may disappear.

What’s interesting is that the skilled translators are providing information to AI models. These models are definitely going to replace the humans. The trajectory is easy to project. Machines will work faster and cheaper. The humans will abandon the discipline. Then prices will go up. Those requiring translations will find themselves spending more and having few options. Eventually the old hands will wither. Excellent translations which capture nuance will become a type of endangered species. The snow leopard of knowledge work is with us.

I noted this statement in the article:

Book publishing, too, is transforming. Turkish publisher Dedalus announced in 2023 that it had machine-translated nine books. In 2022, Agora Books, helmed by translator Osman Akınhay, released a Turkish edition of Jean-Dominique Brierre’s Milan Kundera, une vie d’écrivain, a biography of the Czech-French novelist Milan Kundera. Akınhay, who does not know French, used Google Translate to help him in the translation, to much criticism from the industry.

What’s this mean?

  1. Jobs will be lost, and the professionals with specialist skills are going to be the buggy whip makers in a world of automobiles.
  2. The downstream impact of smart software is going to kill off companies. The Chegg legal matter illustrates how a monopoly can mindlessly erode a company. This is like a speeding semi-truck smashing love bugs on a Florida highway. The bugs don’t know what hit them, and the semi-truck is unaware and the driver is uncaring. Dead bugs? So what? See “Chegg Sues Google for Hurting Traffic with AI As It Considers Strategic Alternatives.”
  3. Data from different sources suggesting that AI will just create jobs is either misleading, public relations, or dead wrong. The Bureau of Labor Statistics data are spawning articles like “AI and Its Impact on Software Development Jobs.”

Net net: What’s emerging is one of those classic failure scenarios. Nothing big seems to go wrong. Then a collapse occurs. That’s what’s beginning to appear. Just little changes. Heed the signals? Of course not. I can hear someone saying, “That won’t happen to me.” Of course not but cheaper and faster are good enough at this time.

Stephen E Arnold, March 12, 2025

Microsoft: Marketing Is One Thing, a Cost Black Hole Is Quite Another

March 11, 2025

Yep, another dinobaby original.

I read “Microsoft Cuts Data Centre Plans and Hikes Prices in Push to Make Users Carry AI Cost.” The headline meant one thing to me: The black hole of AI costs must be capped. For my part, I try to avoid MSFT AI. After testing the Redmoanians’ smart software for months, I decided, “Nope.”

The write up says:

Last week, Microsoft unceremoniously pulled back on some planned data centre leases. The move came after the company increased subscription prices for its flagship 365 software by up to 45%, and quietly released an ad-supported version of some products. The tech giant’s CEO, Satya Nadella, also recently suggested AI has so far not produced much value.

No kidding. I won’t go into the annoyances. AI in Notepad? Yeah, great thinking like that which delivered Bob to users who loved Clippy.

The essay notes:

Having sunk billions into generative AI, Microsoft is trying to find the business model that will make the technology profitable.

Maybe someday, but that day is not today or tomorrow. If anything, Microsoft is struggling with old-timey software as well. The Register, a UK online publication, reports:

Microsoft blames Outlook’s wobbly weekend on ‘problematic code change.’ And Monday’s not looking that steady, either.

Back to AI. The AI financial black hole exists, and it may not be easy to resolve. What’s the fix? Here’s how the cited write up sizes up the Microsoft data center plan as of March 2025:

As AI infrastructure costs rise and model development evolves, shifting the costs to consumers becomes an appealing strategy for AI companies. While big enterprises such as government departments and universities may manage these costs, many small businesses and individual consumers may struggle.

Several observations are warranted:

  1. What happens if Microsoft cannot get consumers to pay the AI bills?
  2. What happens if people like this old dinobaby don’t want smart software and just shift to work flows without Microsoft products?
  3. What happens if the marvel of the Tensor and OpenAI’s and others’ implementations continues to hallucinate, creating more headaches than the methods cure?

Net net: Marketing may have gotten ahead of reality, but the black hole of costs is very real and not a hallucination. Can Microsoft escape a black hole like this one?

Stephen E Arnold, March 11, 2025

Microsoft Sends a Signal: AI, AIn’t Working

March 11, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

The problems with Microsoft’s AI push were evident from its start in 2023. The company thought it had identified the next big thing and had the big fish on the line. Now the work was easy: just reel in the dough.

Has it worked out for Microsoft? We know that big companies often have difficulty innovating. The enervating white board sessions which seek to answer the question, “Do we build it or buy it?” usually give way to: [a] Let’s lock it up somehow or [b] Let’s steal it because it won’t take our folks too long to knock out a me-too.

Microsoft sent a fairly loud beep-beep-beep when it began to cut back on its dependence on OpenAI. Not long ago, Microsoft trimmed some of its crazy spending for AI. Now we have the allegedly accurate information in “Microsoft Is Reportedly Plotting a Future without OpenAI.”

The write up states:

Microsoft has poured over $13 billion into the AI firm since 2019, but now it wants more control over its own models and costs. Simple enough in theory—build in-house alternatives, cut expenses, and call the shots.

Is this a surprise? No, I think it is just one more beep added to the already emitted beep-beep-beep.

Here’s my take:

  1. Narrowly focused smart software adds some useful capabilities to what I would call workflow enhancement. The narrow focus for an AI system reduces some of the wonkiness of the output. Therefore, certain tasks benefit; for example, grinding through data for a chemistry application or providing a call center operation with a good enough solution to rising costs. Broad use cases are more problematic.
  2. Humans who rely on information for a living don’t want to be caught out. This means that using smart software is an assist or a supplement. This is like an older person using a cane when walking on a senior citizens’ adventure tour.
  3. Productizing a broad use case for smart software is expensive and prone to the sort of failure rate associated with a new product or service. A good example is a self-driving auto with collision avoidance. Would you stand in front of such a vehicle confident in the smart software’s ability to not run over you? I wouldn’t.

What’s happening at Microsoft is a reasonably predictable and understandable approach. The company wants to hedge its bets since big bucks are flowing out, not in. The firm thinks it has enough smarts to do a better job even though in my opinion this is unlikely. Remember Bob, Clippy, and Windows updates? I do.

Also, small teams believe their approach will be a winner. Big companies believe their people can row that boat faster than anyone else. I know from personal experience and observation that this is not true. But the appearance of effort and the illusion of high value work encourages the approach.

Plus, the idea that a “leadership team” can manage innovation is a powerful one. Microsoft’s leadership believes in its leadership. That’s why the company is a leader. (I love this logic.)

Net net: My hunch is that Microsoft’s AI push is a disappointment. Now the company can shift into SWAT team mode and overwhelm the problem: AI that does not pay for itself.

Will this approach work? Nope, but the outcome will be good enough. That is a bit more than one can say about Apple Intelligence: seriously out of step, even compared with the Softies.

Stephen E Arnold, March 11, 2025
