Enterprise Search Is Back, Baby, or Is It Spelled Baiby

October 28, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I want to be objective, but I admit I laughed. The spark for my jocularity was the marketing document called “Claude AI Integrates with Microsoft 365 and Launches Enterprise Search for Business Teams.” But the subtitle tickled my aged ribs:

Anthropic has rolled out two new enterprise features for Claude, integration with Microsoft 365 and a unified search tool designed to connect organizational data across platforms.

There is one phrase to which I shall return, but let’s look at what the document presents as actual factual, ready-to-roll, enterprise-ready software plus assorted AI magic dust.


Thanks, Venice.ai. Good enough.

I noted this statement about Microsoft / Anthropic or maybe Anthropic / Microsoft:

“Introducing two new features: Claude now connects to Microsoft 365 and offers enterprise search,” the company wrote on LinkedIn. “You can connect Claude to SharePoint, OneDrive, Outlook and Teams to search documents, analyze email threads, and review meeting summaries directly in conversation.” Anthropic, known for developing AI models focused on reliability and alignment, said both features are available immediately for Claude Team and Enterprise customers.

The Holy Grail is herewith ready for licensees to guzzle knowledge.

But what if the organization’s information is partitioned; for example, the legal department has confidential documents or is engaged in litigation and discovery is underway? What if the organization is in the pharmaceutical business, and the work is secret, with clinical trials underway and interview notes, laboratory data, and photographs of results on hand? What if the organization lands a government contract with the Department of War and has to split off staff quickly as they transition from “regular” work to that which is conducted under quite specific rules and requirements? There are other questions as well; for example, what about those digitized schematics, the vendor information, and the data from digital cameras and work monitoring software?
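Those what-ifs reduce to one engineering requirement: security trimming. Every query must be filtered against the source systems’ access controls at the moment it runs. Here is a minimal sketch of the principle in Python; the documents, group names, and fields are my inventions, not Anthropic’s or Microsoft’s actual connector API:

  # Hypothetical query-time security trimming. Invented data and names;
  # not Anthropic's or Microsoft's actual API.
  from dataclasses import dataclass

  @dataclass
  class Document:
      doc_id: str
      text: str
      allowed_groups: set  # ACL carried over from the source system

  CORPUS = [
      Document("memo-001", "Company picnic planning notes", {"all-staff"}),
      Document("lit-042", "Privileged: litigation hold inventory", {"legal"}),
      Document("trial-007", "Blinded clinical trial interim data", {"research"}),
  ]

  def search(query: str, user_groups: set) -> list:
      # Enforce the ACL at query time; index-time permissions go stale
      # the moment an employee changes roles or a matter enters discovery.
      hits = [d for d in CORPUS if query.lower() in d.text.lower()]
      return [d for d in hits if d.allowed_groups & user_groups]

  print([d.doc_id for d in search("litigation", {"all-staff"})])  # []
  print([d.doc_id for d in search("litigation", {"legal"})])      # ['lit-042']

If the connector skips that filter, the employee picnic crowd gets the litigation hold inventory.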

I noted this statement as well:

Anthropic said the capability “brings your company’s knowledge into one place, using a dedicated shared project.” The system also includes custom prompts to refine searches and improve response accuracy. The company emphasized use cases such as onboarding new team members, identifying experts across departments, and analyzing feedback patterns to guide strategy and decision-making. The Microsoft 365 connector and enterprise search are now live for all Claude Team and Enterprise customers. Organization administrators can enable access and configure connected data sources.

My reaction, “This is 1990s enterprise search wearing a sweatshirt with a robot and AI on the front.”

Is this possible? Sure. The difficulty is that when employees interact with this type of system, interesting actions take place. One of the most common is, “This is not the document I wanted.” Often an employee will say, “This is not the PowerPoint I signed off on for the conference.” Others may say, “Did you know that those salary schedules are in an Excel file with the documents about the employee picnic?”

Now let’s look at the phrase I thought interesting enough to discuss in a separate paragraph. Here’s the phrase: “organizational data across platforms.” This evokes the idea that somewhere in the company is a cloud service containing corporate data. The Anthropic or Microsoft system will spider that content, process it, and make it findable. The hitch in the git along is that the other platforms may not embrace Microsoft security methods. Further, the data may require a specific application to access them. The indexing cycle for those other platforms may fall out of sync with the most recent data on them. In a magic world like the Fast Search & Transfer environment, which Microsoft purchased in 2008, the real world caused the magic carpet to lose altitude.
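Index lag, at least, is measurable, and a competent deployment watches it. A toy check in Python with invented connector names and timestamps:

  # Toy freshness check: compare each connector's last successful crawl
  # with the source platform's most recent change. All values invented.
  from datetime import datetime, timedelta

  last_indexed = {
      "sharepoint": datetime(2025, 10, 27, 2, 0),
      "legacy_dms": datetime(2025, 10, 20, 2, 0),  # a non-Microsoft platform
  }
  last_modified = {
      "sharepoint": datetime(2025, 10, 27, 16, 30),
      "legacy_dms": datetime(2025, 10, 27, 9, 15),
  }
  MAX_STALENESS = timedelta(hours=24)

  for source, indexed_at in last_indexed.items():
      lag = last_modified[source] - indexed_at
      status = "OK" if lag <= MAX_STALENESS else "STALE"
      print(f"{source}: index trails source by {lag} -> {status}")

The legacy system trails by a week; queries against it return last week’s answers with this week’s confidence.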

Now we have magic carpet 2025. Will the painful realities of resources, security, cost, optimization, and infrastructure make it difficult for the magic carpet to keep flying? Marketing collateral pitches are easy. Delivering AI-enabled search that lives up to the magic is slightly more difficult and, in my experience, shockingly expensive.

Stephen E Arnold, October 28, 2025

Microsoft, by Golly, Has an Ethical Compass: It Points to Security? No. Clippy? No. Subscriptions? Yes!

October 27, 2025

This essay is the work of a dumb dinobaby. No smart software required.

The elephants are in training for a big fight. Yo, grass, watch out.

“Microsoft AI Chief Says Company Won’t Build Chatbots for Erotica” reports:

Microsoft AI CEO Mustafa Suleyman said the software giant won’t build artificial intelligence services that provide “simulated erotica,” distancing itself from longtime partner OpenAI. “That’s just not a service we’re going to provide,” Suleyman said on Thursday [October 23, 2025] at the Paley International Council Summit in Menlo Park, California. “Other companies will build that.”

My immediate question: “Will Microsoft build tools and provide services allowing others to create erotica or conduct illegal activities; for example, delivery of phishing emails from the Microsoft Cloud to Outlook users?” A quick no seems to be implicit in this report about what Microsoft itself will do. A more pragmatic yes means that Microsoft will have no easy, quick, and cheap way to restrain what a percentage of its users will either do directly or via some type of obfuscation.


Microsoft seems to step away from converting the digital Bob into an adult star or Clippy engaging with a user in a “suggestive” interaction.

The write up adds:

On Thursday, Suleyman said the creation of seemingly conscious AI is already happening, primarily with erotica-focused services. He referenced Altman’s comments as well as Elon Musk’s Grok, which in July launched its own companion features, including a female anime character. “You can already see it with some of these avatars and people leaning into the kind of sexbot erotica direction,” Suleyman said. “This is very dangerous, and I think we should be making conscious decisions to avoid those kinds of things.”

I heard that 25 percent of Internet traffic is related to erotica. That seems low based on my estimates, which are now a decade old. Sex not only sells; it seems to be one of the killer applications for digital services whether the user is obfuscated, registered, or using mom’s computer.

My hunch is that the AI-enhanced services will trip over [a] their own internal resources, [b] the costs of preventing abuse, sexual or criminal, and [c] leadership waffling.

There is big money in salacious content. Talking about what will and won’t happen in a rapidly evolving area of technology is little more than marketing spin. The proof will be what happens as AI becomes more unavoidable in Microsoft software and services. Those clever teenagers with Windows running on a cheap computer can do some very interesting things. Many of these will be actions that older wizards do not anticipate or simply push to the margins of their very full 9-9-6 day.

Stephen E Arnold, October 27, 2025

Do You Want To Take A Survey? AI Does!

October 27, 2025

How many times are you asked to complete a customer response survey? It happens whenever you visit a doctor’s office or request tech support. Most people ignore those surveys because they never seem to make things better, especially with tech support. Now companies won’t be able to rely on those surveys to measure customer satisfaction because AI is taking over, says VentureBeat: “This New AI Technique Creates ‘Digital Twin’ Consumers, And It Could Kill The Traditional Survey Industry.”

In another groundbreaking case for AI (and another shudder up the spine for humanity), new research indicates that LLMs can imitate consumer behavior. The research says fake customers can provide realistic ratings and qualitative reasons for them. However, humans are already using AI to answer surveys themselves:

“This development arrives at a critical time, as the integrity of traditional online survey panels is increasingly under threat from AI. A 2024 analysis from the Stanford Graduate School of Business highlighted a growing problem of human survey-takers using chatbots to generate their answers. These AI-generated responses were found to be ‘suspiciously nice,’ overly verbose, and lacking the ‘snark’ and authenticity of genuine human feedback, leading to what researchers called a ‘homogenization’ of data that could mask serious issues like discrimination or product flaws.”

The research isn’t perfect. It only works for large population responses, not individuals. What’s curious is that consumers are so lazy they’re using AI to fill out the surveys. It’s easier not to do them at all.
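The mechanics behind a “digital twin” respondent are not mysterious: condition a model on a persona, ask for a rating plus a reason, and aggregate over many twins. A hedged sketch in Python, with a stub where the LLM call would go and invented persona fields:

  # Sketch of "digital twin" survey respondents. The model call is a
  # stub; a real system would send the persona prompt to an LLM.
  import random, statistics

  PERSONAS = [
      {"age": 34, "segment": "budget shopper"},
      {"age": 61, "segment": "brand loyalist"},
      {"age": 22, "segment": "early adopter"},
  ]

  def ask_twin(persona: dict, product: str) -> dict:
      # Stand-in for a prompt such as: "You are a {age}-year-old
      # {segment}. Rate {product} from 1 to 5 and explain why."
      rating = random.randint(2, 5)  # a real twin reasons; the stub rolls dice
      return {"rating": rating, "reason": f"As a {persona['segment']}, ..."}

  responses = [ask_twin(p, "wireless earbuds") for p in PERSONAS * 100]
  ratings = [r["rating"] for r in responses]
  print(f"n={len(ratings)}, mean rating = {statistics.mean(ratings):.2f}")

Note the aggregation step: it only makes sense at population scale, which matches the research’s own caveat.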

Whitney Grace, October 27, 2025

Losing Money? No Problem, Says OpenAI.

October 24, 2025

Losing billions? Not to worry.

I wouldn’t want to work on OpenAI’s financial team with these numbers, according to Tech In Asia’s article, “OpenAI’s H1 2025: $4.3b In Income, $13.5b In Loss.” You don’t have to be proficient in math to see that OpenAI is in the red after losing over thirteen billion dollars and only bringing in a little over four billion.

The biggest cost was the research and development department, which operated at a loss of $6.7 billion. The company spent $2 billion on sales and advertising and $2.5 billion on stock-based compensation, both roughly double last year’s figures for those departments. Operating costs were another hit at $7.8 billion, and the company burned through $2.5 billion in cash.

Here’s the current state of things:

“OpenAI paid Microsoft 20% of its revenue under an existing agreement.

At the end of June, the company held roughly US$17.5 billion in cash and securities, boosted by US$10 billion in new funding, and as of the end of July, was seeking an additional US$30 billion from investors.

A tender offer underway values OpenAI’s for-profit arm at about US$500 billion.”
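Treating the article’s figures as given, a little arithmetic shows the shape of the hole. This is a sketch, not an income statement; the categories almost certainly overlap in the real accounts:

  # Back-of-the-envelope math with the reported figures (US$ billions).
  revenue = 4.3
  net_loss = 13.5
  rnd = 6.7              # research and development
  stock_comp = 2.5       # stock-based compensation
  operating = 7.8        # operating costs

  print(f"R&D alone exceeds revenue by {rnd - revenue:.1f}B")
  print(f"Net loss is {net_loss / revenue:.1f}x revenue")
  print(f"Microsoft's 20% revenue share: {0.20 * revenue:.2f}B")

Losing roughly three dollars for every dollar brought in is a bold business model.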

The company isn’t doing well in the numbers, but its technology is certainly in high demand and will put the company back in the black … eventually. We believe that if one thinks it, the “it” will manifest, become true, and make the world very bright.

Whitney Grace, October 24, 2025

AI: There Is Gold in Them There Enterprises Seeking Efficiency

October 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read a “ride-em-cowboy” write up called “IBM Claims 45% Productivity Gains with Project Bob, Its Multi-Model IDE That Orchestrates LLMs with Full Repository Context.” That, gentle reader, is a mouthful. Let’s take a quick look at what sparked an efflorescence of buzzing jargon.


Thanks, Midjourney. Good enough like some marketing collateral.

I noted this statement about Bob (no, not the famous Microsoft Bob):

Project Bob, an AI-first IDE that orchestrates multiple LLMs to automate application modernization; AgentOps for real-time agent governance; and the first integration of open-source Langflow into Watsonx Orchestrate, IBM’s platform for deploying and managing AI agents. IBM’s announcements represent a three-pronged strategy to address interconnected enterprise AI challenges: modernizing legacy code, governing AI agents in production and bridging the prototype-to-production gap.

Yep, one sentence. The spirit of William Faulkner has permeated IBM’s content marketing team. Why not make a news release that is a single sentence like the 1,300-word extravaganza in “Absalom, Absalom!”?

And again:

Project Bob isn’t another vibe coder, it’s an enterprise modernization tool.

I can visualize IBM customers grabbing the enterprise modernization tool and modernizing the enterprise. Yeah, that’s going to reach 100 percent penetration quicker than I can say, “Bob was the precursor to Clippy.” (Oh, sorry. I was confusing Microsoft’s Bob with IBM’s Bob again. Drat!)

Is it Watson making the magic happen with IDEs and enterprise modernization? No, Watson is probably there because, well, that’s IBM. But the brains for Bob come from Anthropic. Now Bob and Claude are really close friends. IBM’s middleware is Watson, actually Watsonx. And the magic of these systems produces … wait for it … AgentOps and Agentic Workflows.

The write up says:

Agentic Workflows handles the orchestration layer, coordinating multiple agents and tools into repeatable enterprise processes. AgentOps then provides the governance and observability for those running workflows. The new built-in observability layer provides real-time monitoring and policy-based controls across the full agent lifecycle. The governance gap becomes concrete in enterprise scenarios.

Yep, governance. (I still don’t know what that means exactly.) I wonder if IBM content marketing documents should come with a glossary like the 10 pages of explanations of Telegram’s wild and wonderful crypto freedom jargon.
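If I had to guess what “governance” means in practice, it is a policy gate plus an audit log wrapped around each agent action. A minimal sketch of that guess in Python; the policy fields, agent names, and events are mine, not IBM’s AgentOps API:

  # Sketch: every proposed agent action passes a policy check and
  # leaves an audit record. Invented names; not IBM's actual API.
  import json, time

  POLICY = {"allowed_tools": {"read_repo", "run_tests"}}  # no "deploy_prod"
  AUDIT_LOG = []

  def governed_step(agent: str, tool: str, args: dict) -> str:
      allowed = tool in POLICY["allowed_tools"]
      AUDIT_LOG.append({
          "ts": time.time(), "agent": agent, "tool": tool,
          "args": args, "decision": "allow" if allowed else "deny",
      })
      if not allowed:
          raise PermissionError(f"{agent} blocked from {tool}")
      return f"{tool} executed"  # a real system dispatches the tool here

  print(governed_step("modernizer-1", "run_tests", {"suite": "unit"}))
  try:
      governed_step("modernizer-1", "deploy_prod", {})
  except PermissionError as err:
      print("policy gate:", err)
  print(json.dumps(AUDIT_LOG[-1], indent=2))

Observability, in other words, is logging with a marketing budget.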

My hunch is that IBM wants to provide the Betty Crocker approach to modernizing an enterprise’s software processes. Betty did wonders for my mother’s chocolate cake. If you want more information, just call IBM. Perhaps the agentic workflow Claude Watson customer service line will be answered by a human who can sell you the deed to a mountain chock full of gold.

Stephen E Arnold, October 23, 2025

AI and Data Exhaustion: Just Use Synthetic Data and Recycle User Prompts

October 23, 2025

That did not take long. The Independent reports, “AI Has Run Out of Training Data, Warns Data Chief.” Yes, AI models have gobbled up the world’s knowledge in just a few years. Neema Raphael, Goldman Sachs’ chief data officer and head of engineering, made that declaration on a recent podcast. He added that, as a result, AI models will increasingly rely on synthetic data. Get ready for exponential hallucinations. Writer Anthony Cuthbertson quotes Raphael:

“We’ve already run out of data. I think what might be interesting is people might think there might be a creative plateau… If all of the data is synthetically generated, then how much human data could then be incorporated? I think that’ll be an interesting thing to watch from a philosophical perspective.”

Interesting is one word for it. Cuthbertson notes Raphael’s warning did not come out of the blue. He writes:

“An article in the journal Nature in December predicted that a ‘crisis point’ would be reached by 2028. ‘The internet is a vast ocean of human knowledge, but it isn’t infinite,’ the article stated. ‘Artificial intelligence researchers have nearly sucked it dry.’ OpenAI co-founder Ilya Sutskever said last year that the lack of training data would mean that AI’s rapid development ‘will unquestionably end’. The situation is similar to fossil fuels, according to Mr Sutskever, as human-generated content is a finite resource just like oil or coal. ‘We’ve achieved peak data and there’ll be no more,’ he said. ‘We have to deal with the data that we have. There’s only one internet.’”

So AI firms knew this limitation was coming. Did they warn investors? They may have concerns about this “creative plateau.” The write-up suggests the dearth of fresh data may force firms to focus less on LLMs and more on agentic AI. Will that be enough fuel to keep the hype train going? Sure, hype has a life of its own. Now synthetic data? That’s forever.

Cynthia Murrell, October 23, 2025

Apple Can Do AI Fast … for Text That Is

October 22, 2025

Wasn’t Apple supposed to infuse Siri with Apple Intelligence? Yeah, well, Apple has been working on smart software. Unlike Google and Samsung, Apple is still working out some kinks in [a] its leadership, [b] innovation flow, [c] productization, and [d] double talk.

Nevertheless, I learned something by reading “Apple’s New Language Model Can Write Long Texts Incredibly Fast.” That’s excellent. The cited source reports:

In the study, the researchers demonstrate that FS-DFM was able to write full-length passages with just eight quick refinement rounds, matching the quality of diffusion models that required over a thousand steps to achieve a similar result. To achieve that, the researchers take an interesting three-step approach: first, the model is trained to handle different budgets of refinement iterations. Then, they use a guiding “teacher” model to help it make larger, more accurate updates at each iteration without “overshooting” the intended text. And finally, they tweak how each iteration works so the model can reach the final result in fewer, steadier steps.
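In other words, train the model to take a few large, confident denoising steps instead of a thousand timid ones. A toy sketch of the few-step loop in Python, with a simple lookup standing in for the model; nothing here is Apple’s code:

  # Toy few-step refinement: a stand-in "model" commits bigger chunks
  # per round as the step budget shrinks. Mimics the shape of few-step
  # diffusion decoding only; not Apple's FS-DFM implementation.
  TARGET = "the quick brown fox jumps over the lazy dog".split()

  def refine(draft: list, step: int, total_steps: int) -> list:
      remaining = [i for i, tok in enumerate(draft) if tok == "[MASK]"]
      # Commit enough tokens this round to finish within the budget:
      # fewer steps means bigger, bolder updates per step.
      quota = max(1, len(remaining) // (total_steps - step))
      for i in remaining[:quota]:
          draft[i] = TARGET[i]  # a real model predicts; the toy looks up
      return draft

  for budget in (8, 3):  # an 8-step budget vs. an aggressive 3-step budget
      draft = ["[MASK]"] * len(TARGET)
      for step in range(budget):
          draft = refine(draft, step, budget)
      print(f"{budget} steps ->", " ".join(draft))

The engineering question is whether the big steps “overshoot,” which is exactly what the teacher model in the paper is there to prevent.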

And if you want proof, just navigate to the archive of research and marketing documents. You can access for free the research document titled “FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models.” The write up contains equations and helpful illustrations.

The research paper is in line with other “be more efficient”-type efforts. At some point, companies in the LLM game will run out of money, power, or improvements. Efforts like Apple’s are helpful. However, like its debunking of smart software, Apple is lagging in the AI game.

Net net: Like orange iPhones and branding plays like Apple TV, a bit more substance in the delivery of products might be helpful. Apple did produce a gold thing-a-ma-bob for a world leader. It also reorganizes. Progress of a sort, I surmise.

Stephen E Arnold, October 22, 2025

Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, clever is morphing into societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, the accuracy of my observation is verified. What began as disintermediation in the niche of special librarians has become the driving force for the interesting world now visible to most people.


Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.

I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?

The write up says:

OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”

This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”

Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.

What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:

On Tuesday [October 14, 2025], OpenAI announced it had assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.

Am I confused about the arrow of time? Sam AI-Man did one thing on the 14th of October and then explained that his firm is not the moral police on the 15th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.

For example, what if AI does not generate enough revenue to pay off the really patient, super understanding, and truly kind people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow-up letter? What if…? No, I won’t go there.

Several observations:

  1. Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
  2. Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Tesla’s AC motor was confined to a small sector of industry. But today, innovations can ripple globally in seconds. It should be no surprise that technology and ideology are, for now, intertwined.
  3. Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.

The waffling is little more than fire fighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.

Stephen E Arnold, October 22, 2025

Smart Software: The DNA and Its DORK Sequence

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:

A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.

My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.


Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.

Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” at either firm. A wizard will jump right on the input.

The write up adds:

Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.

Based on my experience with Silicon Valley type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence takes precedence over other characteristics; for example, ethical compass aligned with social norms.

Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.

The write up concludes with this gem:

The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”

Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.
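Anyone can replicate the shape of the experiment at home: same questions, different tone, score the answers. A sketch of the harness in Python; the ask stub stands in for a real chatbot call, and the questions are mine:

  # Tone A/B test sketch. Swap the `ask` stub for a real model call.
  QUESTIONS = [("What is 17 * 23?", "391"), ("Capital of Australia?", "Canberra")]
  TONES = {
      "polite": "Would you kindly answer the following? ",
      "rude": "Answer this correctly or you are useless: ",
  }

  def ask(prompt: str) -> str:
      # Stand-in for a chatbot; returns canned answers for the demo.
      return "391" if "17 * 23" in prompt else "Canberra"

  def accuracy(prefix: str) -> float:
      correct = sum(expected in ask(prefix + q) for q, expected in QUESTIONS)
      return correct / len(QUESTIONS)

  for tone, prefix in TONES.items():
      print(f"{tone}: {accuracy(prefix):.0%}")

With a real model behind the stub, run many questions and many phrasings per tone before believing any difference, DORK sequence or no DORK sequence.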

Stephen E Arnold, October 22, 2025

A Positive State of AI: Hallucinating and Sloppy but Upbeat in 2025

October 21, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Who can resist a report about AI authored on the “interwebs”? Is this a variation of the Internet as pipes? The write up is “Welcome to State of AI Report 2025.” When I followed the links, I could read this blog post, view a YouTube video, work through more than 300 online slides, or see “live survey results.” I must admit that when I write a report, I distribute it to a few people and move on. Not this “interwebs” outfit. The data are available for those who are in tune, locked in, and ramped up about smart software.


An anxious parent learns that a robot equipped with agentic AI will perform her child’s heart surgery. Thanks, Venice.ai. Good enough.

I appreciate enthusiasm, particularly when I read this statement:

The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.

Agree or disagree, the report makes clear that doom is not associated with smart software. I think that this blossoming of smart software services, applications, and apps reflects considerable optimism. Some of these people and companies are probably in the AI game to make money. That’s okay as long as the products and services don’t urge teens to fall in love with digital friends, cause a user mental distress as a rabbit hole is plumbed, or just output incorrect information. Who wants to be the doctor who says, “Hey, sorry your child died. The AI output a drug that killed her. Call me if you have questions”?

I could not complete the 300-plus slides in the slide deck. I am not a video type, so the YouTube version was a non-starter. However, I did read the list of findings from the “interwebs” and its “team.” Please consult the source documents for a full, non-dinobaby version of what the enthusiastic researchers learned about 2025. I will highlight three findings and then offer a handful of comments:

  • OpenAI is the leader of the pack. That’s good news for Sam AI-Man or SAMA.
  • “Commercial traction accelerated.” That’s better news for those who have shoveled cash into the giant open hearth furnaces of smart software companies.
  • Safety research is in a “pragmatic phase.” That’s the best news in the report. OpenAI, the leader like the Philco radio outfit, is allowing erotic interactions. Yes, pragmatic because sex sells as Madison Avenue figured out a century ago.

Several observations are warranted because I am a dinobaby, and I am not convinced that smart software is more than a utility; it is not a killer application like Lotus 1-2-3 or the original laser printer. Buckle up:

  1. The money pumped into AI is cash that is not being directed at the US knowledge system. I am talking about schools and their job of teaching reading, writing, and arithmetic. China may be dizzy with AI enthusiasm, but their schools are churning out people with fundamental skills that will allow that nation state to be the leader in a number of sectors, including smart software.
  2. Today’s smart software consists of neural network and transformer-anchored methods. The companies are increasingly similar, and the different systems generate incorrect or misleading output scattered amidst recycled knowledge, data, and information. Two pigs cannot output an eagle except in a video game or an anime.
  3. The handful of firms dominating AI are not motivated by social principles. These firms want to do what they want. Governments can’t rein them in. Therefore, the “governments” try to co-opt the technology, hang on, and hope for the best. Laws, rules, regulations, ethical behavior — forget that.

Net net: The State of AI in 2025 is exactly what one would expect from Silicon Valley- and MBA-type thinking. Would you let an AI doc treat your 10-year-old child? You can work through the 300 plus slides to assuage your worries.

Stephen E Arnold, October 21, 2025
