Cyber Security: Evidence That Performance Is Different from Marketing

August 20, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

In 2022, Google bought a cyber security outfit named Mandiant. The firm had been around since 2004, but when Google floated more than $5 billion for the company, it was time to sell.

If you don’t recall, Google operates a large cloud business and is trying diligently to sell to Microsoft customers in the commercial and government sector. A cyber security outfit would allow Google to argue that it would offer better security for its customers and their users.

Mandiant’s business was threat intelligence. The idea is that Mandiant would monitor forums, the Web, and any other online information about malware and other criminal cyber operations. As an added bonus, Mandiant would blend automated security functions with its technology. Wham, bam! Slam dunk, right?

I read “Google Confirms Major Security Breach After Hackers Linked To ShinyHunters Steal Sensitive Corporate Data, Including Business Contact Information, In Coordinated Cyberattack.” First, a disclaimer. I have no idea if this WCCF Tech story is 100 percent accurate. It could be one of those Microsoft “1,000 Russian programmers are attacking us” plays. On the other hand, it will be fun to assume that some of the information in the cited article is accurate.

With that as background, I noted this passage:

The tech giant has recently confirmed a data breach linked to the ShinyHunters ransomware group, which targeted Google’s corporate Salesforce database systems containing business contact information.

Okay. Google’s security did not work. A cloud customer’s data were compromised. The assertion that Google’s security is better than or equal to Microsoft’s is tough for me to swallow.

Here’s another passage:

As per Google’s Threat Intelligence Group (GTIG), the hackers used a voice phishing technique that involved calling employees while pretending to be members of the internal IT team, in order to have them install an altered version of Salesforce’s Data Loader. By using this technique, the attackers were able to access the database before their intrusion was detected.

A human fooled another human. The automated systems were flummoxed. The breach allegedly took place.
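
For what it is worth, the first-line control here is mundane. Below is a minimal sketch, assuming an administrator can export a list of connected apps to a CSV; the column names (“app_name”, “installed_by”) and the allowlist entries are my inventions for illustration, not Salesforce’s or Google’s actual tooling.

```python
import csv

# Hypothetical allowlist check over an export of connected apps.
# The CSV columns and the approved names below are illustrative
# assumptions, not any vendor's real export format.
APPROVED_APPS = {"Salesforce Data Loader", "Corporate SSO"}

def flag_unapproved(csv_path: str) -> list[dict]:
    """Return rows whose app_name is not on the allowlist."""
    with open(csv_path, newline="") as handle:
        return [row for row in csv.DictReader(handle)
                if row["app_name"] not in APPROVED_APPS]

if __name__ == "__main__":
    for row in flag_unapproved("connected_apps.csv"):
        print(f"review: {row['app_name']} installed by {row['installed_by']}")
```

Of course, an altered Data Loader would masquerade under an approved name, which is the point of the story: the weak link is the human who took the phone call, not the software doing the checking.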

Several observations are warranted:

  1. This is security until a breach occurs. I am not sure that customers expect this type of “footnote” to their cyber security licensing mumbo jumbo. The idea is that Google should deliver a secure service.
  2. Mandiant, like other threat intelligence services, allows the customer to assume that the systems and methods generally work. That’s true until they don’t.
  3. Bad actors have an advantage. Armed with smart software and tools that can emulate my dead grandfather, the humans remain a chink in the otherwise much-hyped armor of an outfit like Google.

This example, even if only partly accurate, makes it clear that cyber security marketing performs better than the systems some of the firms sell. Consider that the victim was Google. That company has touted its technical superiority for decades. Then Google buys extra security. The combo delivers what? Evidence that believing the cyber security marketing may do little to reduce the vulnerability of an organization. What’s notable is that the missteps were Google’s. Microsoft may enshrine this breach case and mount it on the wall of every cyber security employee’s cubicle.

I can imagine hearing a computer-generated voice emulating Bill Gates saying, “It wasn’t us this time.”

Stephen E Arnold, August 20, 2025

The Risks of Add-On AI: Apple, Telegram, Are You Paying Attention?

August 20, 2025

No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

Name three companies trying to glue AI onto existing online services. Here’s my answer:

  • Amazon
  • Apple
  • Telegram

There are others, but each of these has a big “tech rep” and commands respect from other wizards. We know that Tim Apple suggested that the giant firm had AI pinned to the mat and whimpering, “Let me be Siri.” Telegram mumbled about Nikolai working on AI. And Amazon? That company flirted with smart software in its SageMaker announcements years ago. Now it has upgraded Alexa, the device most used as a kitchen timer.

“Amazon’s Rocky Alexa+ Launch Might Justify Apple’s Slow Pace with Next-Gen Siri” ignores Telegram (of course; who really cares?) and uses Amazon’s misstep to apologize for Apple’s goofs. The write up says:

Apple has faced a similar technical challenge in its own next-generation Siri project. The company once aimed to merge Siri’s existing deterministic systems with a new generative AI layer but reportedly had to scrap the initial attempt and start over. … Apple’s decision to delay shipping may be frustrating for those of us eager for a more AI-powered Siri, but Amazon’s rocky launch is a reminder of the risks of rushing a replacement before it’s actually ready.

Why does this matter?

My view is that Apple’s and Amazon’s missteps make clear that bolting on, fitting in, and snapping on smart software is more difficult than it seemed. I also believe that the two firms over-estimated their technical professionals’ ability to just “do” AI. Plus, both US companies appear to be falling behind in the “AI race.”

But what about Telegram? That company is in the same boat. Its AI innovations are coming from third-party developers who have been using Telegram’s platform as, well, a platform. Telegram itself has missed opportunities to reduce the coding challenge for its developers with its focus on old-school programming languages, not AI-assisted coding.

I think that it is possible that these three firms will get their AI acts together. The problem is that AI-native solutions for the iPhone, the Telegram “community,” and Amazon’s own hardware products have yet to materialize. The fumbles illustrate a certain weakness in each firm. Left unaddressed, these can be debilitating in an uncertain economic environment.

But the mantra “go fast” or the jargon “accelerate” is not in line with the actions of these three companies.

Stephen E Arnold, August 20, 2025

Inc. Magazine May Find that Its MSFT Software No Longer Works

August 20, 2025

No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.

I am not sure if anyone else has noticed that one must be very careful about making comments. A Canadian technology dude found himself embroiled with another Canadian technology dude. To be frank, I did not understand why the Canadian tech dudes were squabbling, but the dust-up underscores the importance of the language, tone, rhetoric, and spin one puts on information.

An example of a sharp-toothed article which may bite Inc. Magazine on the ankle is the story “Welcome to the Weird New Empty World of LinkedIn: Just When Exactly Did the World’s Largest Business Platform Turn into an Endless Feed of AI-Generated Slop?” My teeny tiny experience as a rental at the world’s largest software firm taught me three lessons:

  1. Intelligence is defined many ways. I asked a group of about 75 listening to one of my lectures, “Who is familiar with Kolmogorov?” The answer for that particular sampling of Softies was exactly zero. Subjective impression: Rocket scientists? Not too many.
  2. Feistiness. The fellow who shall remain nameless dragged me to a weird mixer thing in one of the buildings on the “campus.” One person (whose name and honorifics I do not remember) said, “Let me introduce you to Mr. X. He is driving the Word project.” I replied with a smile. We walked to the fellow, were introduced, and I asked, “Will Word fix up its autonumbering?” The Word Softie turned red, asked the fellow who introduced me to him, “Who is this guy?” The Word Softie stomped away and shot deadly sniper eyes at me until we left after about 45 minutes of frivolity. Subjective impression: Thin skin. Very thin skin.
  3. Insecurity. At a lunch with a person whom I had met when I was a contractor at Bell Labs and several other Softies, the subject of enterprise search came up. I had written the Enterprise Search Report, and Microsoft had purchased copies. Furthermore, I wrote with Susan Rosen “Managing Electronic Information Projects.” Ms. Rosen was one of the senior librarians at Microsoft. While waiting for the rubber chicken, a Softie asked me about Fast Search & Transfer, which Microsoft had just purchased. The question posed to me was, “What do you think about Fast Search as a technology for SharePoint?” I said, “Fast Search was designed to index Web sites. The enterprise search functions were add-ons. My hunch is that getting the software to handle the data in SharePoint will be quite difficult.” The response was, “We can do it.” I said, “I think that BA Insight, Coveo, and a couple of other outfits in my Enterprise Search Report will be targeting SharePoint search quickly.” The person looked at me and said, “What do these companies do? How quickly do they move?” Subjective impression: Fire up ChatGPT and get some positive mental health support.

The cited write up stomps into a topic that will probably catch some Softies’ attention. I noted this passage:

The stark fact is that reach, impressions and engagement have dropped off a cliff for the majority of people posting dry (read business-focused) content as opposed to, say, influencer or lifestyle-type content.

The write up adds some data about usage of LinkedIn:

average platform reach had fallen by no less than 50 percent, while follower growth was down 60 percent. Engagement was, on average, down an eye-popping 75 percent.

The main point of the article in my opinion is that LinkedIn does not filter AI content. The use of AI content produces a positive for the emitter of the AI content. The effect is to convert a shameless marketing channel into a conduit for search engine optimized sales information.

The question “Why?” is easy to figure out:

  1. Clicks if the content is hot
  2. Engagement if the other LinkedIn users and bots become engaged or coupled
  3. More zip in what is essentially a one-dimensional, Web 1.0 service.

How will this write up play out? Again the answers strike me as obvious:

  1. LinkedIn may have some Softies who will carry a grudge toward Inc. Magazine
  2. Microsoft may be distracted with its Herculean efforts to make its AI “plays” sustainable as outfits like Amazon say, “Hey, use our cloud services. They are pretty much free.”
  3. Inc. may take a different approach to publishing stories with some barbs.

Will any of this matter? Nope. Weird and slop do that.

Stephen E Arnold, August 20, 2025

Smart Software Fix: Cash, Lots and Lots of Cash

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way. But I asked ChatGPT one question. Believe it or not.

If you have some spare money, Sam Altman aka Sam AI-Man wants to talk with you. It is well past two years since OpenAI forced the 20-year-old Google to go back to the high school lab. Now OpenAI is dealing with the reviews of ChatGPT 5. The big news in my opinion is that quite a few people are annoyed with the new smart software from the money burning Bessemer furnace at 3180 18th Street in San Francisco. (I have heard that a satellite equipped with an infra-red camera gets a snazzy image of the heat generated from the immolation of cash. There are also tiny red dots scattered around the SF financial district. Those, I believe, are the burning brain cells of the folks who have forked over dough to participate in Sam AI-Man’s next big thing.)

“As People Ridicule GPT-5, Sam Altman Says OpenAI Will Need ‘Trillions’ in Infrastructure” addresses the need for cash. The write up says:

Whether AI is a bubble or not, Altman still wants to spend a certifiably insane amount of money building out his company’s AI infrastructure. “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future,” Altman told reporters.

Trillions is a figure most people cannot relate to everyday life. Years ago, when I was an indentured servant at a consulting firm, I worked on a project that sought to figure out which types of decisions consumed the most time for the boards of directors of Fortune 1000 companies. The results surprised me then and still do.

Boards of directors spent the most time discussing relatively modest-scale projects; for example, expanding a parking lot or developing a list of companies for potential joint ventures. Really big deals, like spending large sums to acquire a company, were often handled in swift, decisive votes.

Why?

Boards of directors, like most average people, cannot relate to massive numbers. It is easier to think in terms of a couple hundred thousand dollars to lease a parking lot than to borrow billions and buy a giant, allegedly synergistic company.

When Mr. Altman uses the word “trillions,” I think he is unable to conceptualize the amount of money represented in his casual “you should expect OpenAI to spend trillions…”

Several observations:

  1. AI is useful in certain use cases. Will AI return the type of payoff that Google’s charge-’em-every-which-way-from-Sunday advertising model does?
  2. AI appears to produce incorrect outputs. I liked the report about oncology docs who said they lost diagnostic skills when relying on AI assistants.
  3. AI creates negative mental health effects. One old person, younger than I, believed a chat bot cared for him. On the way to meet his digital friend, he flopped over dead. Anticipatory anxiety or a use case for AI sparking nutso behavior?

What’s a trillion look like? Answer: 1,000,000,000,000.

How many railroad boxcars would it take to move $1 trillion from a collection point like Denver, Colorado, to downtown San Francisco? Answer from ChatGPT: “You would need 10,000 standard railroad boxcars. This calculation is based on the weight and volume of the bills, as well as the carrying capacity of a typical 50-foot boxcar. The train would stretch over 113.6 miles—about the distance from New York City to Philadelphia!”
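
Out of curiosity, here is a back-of-envelope check of that boxcar figure. The assumptions are mine, not ChatGPT’s: a US bill weighs about one gram and occupies about 1.1 cubic centimeters, and a 50-foot boxcar hauls roughly 90 metric tons or 150 cubic meters, whichever runs out first.

```python
# Back-of-envelope check of the $1 trillion boxcar claim.
# Assumptions (mine, not ChatGPT's): a US bill weighs ~1 gram and
# occupies ~1.1 cm^3; a 50-foot boxcar carries ~90 metric tons or
# ~150 m^3 of cargo, whichever limit is hit first.

TRILLION = 1_000_000_000_000
BILL_DENOMINATION = 100
BILL_WEIGHT_G = 1.0          # grams per bill
BILL_VOLUME_CM3 = 1.1        # cubic centimeters per bill
BOXCAR_CAPACITY_T = 90.0     # metric tons per 50-foot boxcar
BOXCAR_VOLUME_M3 = 150.0     # cubic meters per 50-foot boxcar

bills = TRILLION // BILL_DENOMINATION            # 10 billion notes
weight_t = bills * BILL_WEIGHT_G / 1_000_000     # grams -> metric tons
volume_m3 = bills * BILL_VOLUME_CM3 / 1_000_000  # cm^3 -> m^3

cars_by_weight = weight_t / BOXCAR_CAPACITY_T
cars_by_volume = volume_m3 / BOXCAR_VOLUME_M3

print(f"{bills:,} hundred-dollar bills")
print(f"{weight_t:,.0f} metric tons, {volume_m3:,.0f} cubic meters")
print(f"boxcars needed: {max(cars_by_weight, cars_by_volume):,.0f}")
```

Under those assumptions the tally comes out closer to a hundred boxcars than ten thousand. Either my numbers are off by two orders of magnitude, or observation number two above is doing some work.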

Let’s talk about expanding the parking lot.

Stephen E Arnold, August 19, 2025

The Bubbling Pot of Toxic Mediocrity? Microsoft LinkedIn. Who Knew?

August 19, 2025

No AI. Just a dinobaby working the old-fashioned way.

Microsoft has a magic touch. The company gets into Open Source; the founder “gits” out. Microsoft hires an engineer from Intel, asks some questions, and the new hire is whipped with a $34,000 fine and two years of mom looking in his drawers.

Now I read “Sunny Days Are Warm: Why LinkedIn Rewards Mediocrity.” The write up includes an outstanding metaphor in my opinion: Toxic Mediocrity. The write up says:

The vast majority of it falls into a category I would describe as Toxic Mediocrity. It’s soft, warm and hard to publicly call out but if you’re not deep in the bubble it reads like nonsense. Unlike it’s cousins ‘Toxic Positivity’ and ‘Toxic Masculinity’ it isn’t as immediately obvious. It’s content that spins itself as meaningful and insightful while providing very little of either. Underneath the one hundred and fifty words is, well, nothing. It’s a post that lets you know that sunny days are warm or its better not to be a total psychopath. What is anyone supposed to learn from that?

When I read a LinkedIn post it is usually referenced in an article I am reading. I like to follow these modern slippery footnotes. (If you want slippery, try finding interesting items about Pavel Durov in certain Russian sources.)

Here’s what I learn:

  1. A “member” makes clear that he or she has information of value. I must admit. Once in a while a useful post will turn up. Not often, but it has happened. I do know the person believes something about himself or herself. Try asking a GenAI about their personal “beliefs.” Let me know how that works.
  2. Members in a specific group with an active moderator often post items of interest. Instead of writing my unread blog, these individuals identify an item and use LinkedIn as a “digital bulletin board” for people who shop at the same sporting goods store in rural Kentucky. (One sells breakfast items and weapons.)
  3. I get a sense of the jargon people use to explain their expertise. I work alone. I am writing a book. I don’t travel to conferences or client locations now. I rely on LinkedIn as the equivalent of going to a conference mixer and listening to the conversations.

That’s useful. I have a person who interacts on LinkedIn for me. I suppose my “experience” is therefore different from someone who visits the site, posts, and follows the antics of LinkedIn’s marketers as they try to get the surrogate me to pay to do what I do. (Guess what? I don’t pay.)

I noted this statement in the essay:

Honestly, the best approach is to remember that LinkedIn is a website owned by Microsoft, trying to make money for Microsoft, based on time spent on the site. Nothing you post there is going to change your career. Doing work that matters might. Drawing attention to that might. Go for depth over frequency.

I know that many people rely on LinkedIn to boost their self-confidence. One of the people who worked for me moved to another city. I suggested that she give LinkedIn a whirl. She wrote interesting short items about her interests. She got good feedback. Her self-confidence ticked up, and she landed a good job. So there’s a use case for you.

You should be able to find on LinkedIn a short item noting that a new post has appeared on my blog. Write me, and my surrogate will write you back and give you instructions about how to contact me. Why don’t I conduct conversations on LinkedIn? Have you checked out the telemetry functions in Microsoft software?

Stephen E Arnold, August 19, 2025

A Baloney Blizzard: What Is Missing? Oh, Nothing, Just Security

August 19, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I do not know what a CVP is. I do know a baloney blizzard when I see one. How about these terms: Ambient, pervasive, and multi-modal. I interpret ambient as meaning temperature or music like the tunes honked in Manhattan elevators. Pervasive I view as surveillance; that is, one cannot escape the monitoring. What a clever idea. Who doesn’t want Microsoft Windows to be inescapable? And multi-modal sparks in me thoughts of a cave painting and a shaman. I like the idea of Windows intermediating for me.

Where did I get these three oddball words? I read “Microsoft’s Windows Lead Says the Next Version of Windows Will Be More Ambient, Pervasive, and Multi-Modal As AI Redefines the Desktop Interface.” The source of this write up is an organization that absolutely loves Microsoft products and services.

Here’s a passage I noted:

Davuluri confirms that in the wake of AI, Windows is going to change significantly. The OS is going to become more ambient and multi-modal, capable of understanding the content on your screen at all times to enable context-aware capabilities that previously weren’t possible. Davuluri continues, “you’ll be able to speak to your computer while you’re writing, inking, or interacting with another person. You should be able to have a computer semantically understand your intent to interact with it.”

Very sci-fi. However, I don’t want to speak to my computer. I work in silence. My office is set up so I don’t have people interrupting, chattering, or asking me to go get donuts. My view is, “Send me an email or a text. Don’t bother me.” Is that why people in many high-tech companies wear earbuds? It is. They don’t want to talk, interact, or discuss Netflix. These people want to “work” or what they think is “work.”

Does Microsoft care? Of course not. Here’s a reasonably clear statement of what Microsoft is going to try and force upon me:

It’s clear that whatever is coming next for Windows, it’s going to promote voice as a first class input method on the platform. In addition to mouse and keyboard, you will be able to ambiently talk to Windows using natural language while you work, and have the OS understand your intent based on what’s currently on your screen.
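
What might that look like in practice? Here is a toy sketch of a voice-plus-screen-context loop. Every function below is a stub I invented for illustration; Microsoft has published no API for this, so treat the names and the flow as assumptions, not as the forthcoming Windows interface.

```python
# Toy sketch of a voice-plus-screen-context intent pipeline.
# Every function is a stand-in invented for illustration; this is
# NOT Microsoft's design, which has not been published.

def transcribe(audio: bytes) -> str:
    """Stub: a real system would run a speech-to-text model here."""
    return "move that paragraph into the summary"

def screen_context() -> str:
    """Stub: a real system would capture and describe the screen."""
    return "document editor open, two paragraphs selected"

def classify_intent(utterance: str, context: str) -> str:
    """Stub: a real system would hand both strings to a language model."""
    return f"edit command {utterance!r} interpreted against [{context}]"

if __name__ == "__main__":
    # The OS would run this loop continuously, which is the point:
    # the microphone and the screen are being read all the time.
    print(classify_intent(transcribe(b""), screen_context()))
```

Note what even this toy requires: the microphone is hot and the screen is parsed continuously. Keep that in mind while reading the observations below.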

Several observations:

  1. AI is not reliable
  2. Microsoft is running a surveillance operation in my opinion
  3. This is the outfit which created Bob and Clippy.

But note the real message in this PR marketing essay: security is not mentioned. Does a secure operation want people talking about their work?

Stephen E Arnold, August 19, 2025

News Flash from the Past: Bad Actors Use New Technology and Adapt Quickly

August 18, 2025

No AI. Just a dinobaby working the old-fashioned way.

NBC News is on top of cyber security trends. I think someone spotted an Axios report that bad actors were using smart software to outfox cyber security professionals. I am not sure this is news, but what do I know?

“Criminals, Good Guys and Foreign Spies: Hackers Everywhere Are Using AI Now” reports this “hot off the press” information. I quote:

The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.

My goodness. Who knew that stealers have been zipping around for many years? Even more startling old information is:

LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents.  The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster.

Stunning. A free chunk of smart software, unemployed or intra-gig programmers, and juicy targets pushed out with a fairy land of vulnerabilities. Isn’t it insightful that bad actors would apply these tools to clueless employees, inherently vulnerable operating systems, and companies too busy outputting marketing collateral to do routine security updates?

The cat-and-mouse game works this way. Bad actors with access to useful scripting languages, programming expertise, and smart software want to generate revenue or wreak havoc. One individual, or perhaps a couple of people in a coffee shop, hits upon a better way to access a corporate network or obtain personally identifiable information from a hapless online user.

Then, after the problem has been noticed and reported, cyber security professionals will take a closer look. If these outfits have smart software running, a human will look more closely at logs and say, “I think I saw something.”

Okay, mice are in and swarming. Now the cats jump into action. The cats will find [a] a way to block the exploit, [b] rush to push the fix to paying customers, and [c] share the information in a blog post or a conference.

What happens? The bad actors notice their mice aren’t working or are being killed instantly. The bad actors go back to work. In most cases, the bad actors are unencumbered by bureaucracy or tough thought problems about whether something is legal or illegal. The bad actors launch more attacks. If one works, it’s gravy.

Now the cats jump back into the fray.

In the current cyber crime world, cyber security firms, investigators, and lawyers are in reactive mode. The bad actors play offense.

One quick example: Telegram has been enabling a range of questionable online activities since 2013. In 2024, after a decade of inaction, France said, “Enough.” Authorities in France arrested Pavel Durov. The problem from my point of view is that it took more than a decade to man up to the icon Pavel Durov.

What happens when a better Telegram comes along built with AI as part of its plumbing?

The answer is, “You can buy licenses to many cyber security systems. Will they work?”

There are some large, capable mice out there in cyber space.

Stephen E Arnold, August 18, 2025

If You Want to Work at Meta, You Must Say Yes, Boss, Yes Boss, Yes Boss

August 18, 2025

No AI. Just a dinobaby working the old-fashioned way.

These giant technology companies are not very good in some situations. One example which comes to mind is the Apple car. What was the estimate? About $10 billion blown. Meta pulled a similar trick with its variant of the Google Glass. Winners.

I read “Meta Faces Backlash over AI Policy That Lets Bots Have Sensual Conversations with Children.” My reaction was, “You are kidding, right?” Nope. Not a joke. Put aside common sense, a parental instinct for appropriateness, and the mounting evidence that interacting with smart software can be a problem. What are these? Lame complaints, apparently.

The write up says:

According to Meta’s 200-page internal policy seen by Reuters, titled “GenAI: Content Risk Standards”, the controversial rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist.

Okay, let’s stop the buggy right here, pilgrim.

A “chief ethicist”! A chief ethicist who thought that this was okay:

An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”.

What is an ethicist? First, it is a knowledge job, one I assume requiring knowledge of ethical thinking embodied in different big thinkers. Second, it is a profession which relies on context, because what was right for Belgium in the Congo may not be okay today. Third, the job is likely one that encourages flexible definitions of ethics. It may be tough to get another high-paying gig if one points out that the concept of sensual conversations with children is unethical.

The write up points out that an investigation is needed. Why? The chief ethicist should say, “Sorry. No way.”

Chief ethicist? A chief “yes, boss” person.

Stephen E Arnold, August 18, 2025


Google: Simplicity Is Not a Core Competency

August 18, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Telegram Messenger is a reasonably easy-to-use messaging application. People believe that it is bulletproof, but I want to ask, “Are you sure?” Then there is WhatsApp, now part of Darth Zuck’s empire. However, both of these outfits appear to be viewed as obtuse and problematic by Kremlin officials. The fix? Just ban these services. Banning online services is a popular way for a government to “control” information flow.

I read a Russian language article about an option some Russians may want to consider. The write up’s title is “How to Replace Calls on WhatsApp and Telegram. Review of the Google Meet Application for Android and iOS.”

I worked through the write up and noted this statement:

Due to the need to send invitation links Meet is not very convenient for regular calls— and most importantly it belongs to the American company Google, whose products, by definition, are under threat of blocking. Moreover, several months ago, Russian President Vladimir Putin himself called for «stifling» Western services operating in Russia, and instructed the Government to prepare a list of measures to limit them by September 1, 2025.

The bulk of the write up is a how to. In order to explain the process of placing a voice call via the Google system, PCNews presented:

  1. Nine screenshots
  2. Seven arrows across those screenshots
  3. One rectangular box in red to call attention to something. (I couldn’t figure out what, however.)
  4. Seven separate steps.

How does one “do” a voice call in Telegram Messenger? Here are the steps:

  1. I open the Telegram app and select the contact with whom I want to speak
  2. I tap on my contact’s name
  3. I look for the phone call icon and tap it
  4. I choose “Voice Call” from the options to start an audio call. If I want to make a video call instead, I select “Video Call”

One would think that when a big company wants to do a knockoff of a service, someone would check out what Telegram does. (The write up is aimed at a Russian audience because of the censorship in the country.) Then the savvy wizard would figure out how to make the process better and faster and easier. Instead the clever Googlers add steps. That’s the way of the Sundar & Prabhakar Comedy Show.

Stephen E Arnold, August 18, 2025

The Early Bird Often Sings Alone

August 17, 2025

Mathematicians, computer developers, science-fiction writers, and others smarter than the average human have known for decades that computers would outpace human intelligence. Computers have arguably been capable of this since the first machine printed its initial binary 01. AI algorithms are the next evolution of computers, and they can do research, explore science, and extrapolate formulas beyond the last recorded digit of pi.

Future House explains how its Robin AI system is designed to automate scientific discovery: “Demonstrating End-To-End Scientific Discovery With Robin: A Multi-Agent System.” Future House developed AI agents that automated different segments of the discovery process, but Robin is the first unified system that does everything. Robin’s inventors automated the scientific process and used the new system to make a discovery by combining the earlier AI agents.

The team reports:

“We applied Robin to identify ripasudil, a Rho-kinase (ROCK) inhibitor clinically used to treat glaucoma, as a novel therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide.”

Robin followed the scientific process: it made an initial hypothesis, mechanized the investigation instead of doing things the old-fashioned way, and then made a discovery. Everything was done by Robin the AI system:

“All hypotheses, experiment choices, data analyses, and main text figures in the manuscript describing this work were generated by Robin autonomously. Human researchers executed the physical experiments, but the intellectual framework was entirely AI-driven.”

Robin’s creators are happy with their progress:

“By automating hypothesis generation, experimental planning, and data analysis in an integrated system, Robin represents a powerful new paradigm for AI-driven scientific discovery. Although we first applied Robin to therapeutics, our agents are general-purpose and can be used for a wide variety of discoveries across diverse fields—from materials science to climate technology. “
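
Future House does not publish Robin’s internals in this announcement, but the loop it describes (hypothesize, plan an experiment, run it, analyze, repeat) can be caricatured in a few lines. Everything below is my own stub sketch under that assumption, not Future House’s code.

```python
# Caricature of a Robin-style discovery loop: a hypothesis agent,
# an experiment planner, and a data analyst taking turns. All of
# these stubs are invented for illustration; Future House has not
# published Robin's actual agents.

def hypothesize(prior_results: list[str]) -> str:
    return f"candidate mechanism, given {len(prior_results)} prior results"

def plan_experiment(hypothesis: str) -> str:
    return f"assay to test: {hypothesis}"

def run_experiment(plan: str) -> str:
    # In Robin's case, human researchers executed the physical work.
    return f"measurements from ({plan})"

def analyze(data: str) -> str:
    return f"conclusion drawn from ({data})"

results: list[str] = []
for _cycle in range(3):  # a few discovery cycles
    hypothesis = hypothesize(results)
    data = run_experiment(plan_experiment(hypothesis))
    results.append(analyze(data))

print(results[-1])
```

The notable design point, per the quoted passage, is that only the physical experiments involved humans; the intellectual loop ran end to end in software.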

Mathematicians are chugging away at AI development, including number theorists. Listen to Curt Jaimungal’s podcast episode, “The AI Math That Left Number Theorists Speechless,” and within the first five minutes you’ll have an understanding of how smart AI has become. Here’s the summary: it is beyond human comprehension.

Whitney Grace, August 17, 2025
