Marketers and AI: Killing Sales and Trust. Collateral Damage? Meh

November 11, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

“The Trust Collapse: Infinite AI Content Is Awful” is an impassioned howl in the arctic space of zeros and ones. The author is what one might call collateral damage. When a UPS airplane drops a wing on takeoff, the people in the path of the fuel-spewing aircraft are definitely not thrilled.

AI is a bit like a UPS-type aircraft arrowing toward an unhappy end, at least for the author of the cited article. The write up makes clear that smart software can vaporize a writer of marketing collateral. The cost for the AI is low. The penalty for the humanoid writer is collateral damage.

image

Let’s look at some of the points in the cited essay:

For the first time since, well, ever, the cost of creating content has dropped to essentially zero. Not “cheaper than before”, but like actually free. It’s so easy to generate a thousand blog posts or ten thousand “personalized” emails and it barely costs you anything (for now).

Yep, marketing content appears on some of the lists of AI-terminated jobs I have mentioned in this blog. Customer service professionals usually top the list, but advertising copywriters and email pitch writers typically appear in the top five.

The write up explains:

What they [a prospect] actually want to know is “why the hell would I buy it from you instead of the other hundred companies spamming my inbox with identical claims?” And because everything is AI slop now, answering that question became harder for them.

The idea is that expensive, slow, time-consuming relationship selling is eroding under a steady stream of low cost, high volume marketing collateral produced by … smart software. Yes, AI and lousy AI at that.

The write up provides an interesting example of how low cost, high volume AI content has altered the sales landscape:

Old World (…-2024):

  • Cost to produce credible, personalized outreach: $50/hour (human labor)
  • Volume of credible outreach a prospect receives: ~10/week
  • Prospect’s ability to evaluate authenticity: Pattern recognition works ~80% of time

New World (2025-…):

  • Cost to produce credible, personalized outreach: effectively 0
  • Volume of credible outreach a prospect receives: ~200/week
  • Prospect’s ability to evaluate authenticity: Pattern recognition works ~20% of time

The signal-to-noise ratio has hit a breaking point where the cost of verification exceeds the expected value of engagement.
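
That “breaking point” claim is easy to sanity-check with a little arithmetic. Below is a minimal back-of-the-envelope sketch that uses the volumes and pattern-recognition rates quoted above; the dollar values and the assumed share of genuine outreach are my own illustrative guesses, not figures from the cited essay.

```python
# Rough model of "verification cost exceeds expected value of engagement."
# Volumes and accuracy come from the quoted comparison; the dollar figures
# and the genuine-outreach share are illustrative assumptions only.

VALUE_OF_GENUINE_CONTACT = 500.0   # assumed payoff when a vetted message is real
COST_TO_VERIFY_ONE = 15.0          # assumed cost (mostly time) to vet one message
GENUINE_SHARE = 0.10               # assumed fraction of inbound outreach worth answering

def weekly_economics(volume: int, pattern_accuracy: float) -> tuple[float, float]:
    """Return (expected value of engaging, cost of verifying) for one week of inbound."""
    # Only messages that are genuine AND correctly recognized pay off.
    expected_value = volume * GENUINE_SHARE * pattern_accuracy * VALUE_OF_GENUINE_CONTACT
    verification_cost = volume * COST_TO_VERIFY_ONE
    return expected_value, verification_cost

for label, volume, accuracy in [("Old World", 10, 0.80), ("New World", 200, 0.20)]:
    ev, cost = weekly_economics(volume, accuracy)
    verdict = "still worth engaging" if ev > cost else "cheaper to ignore everything"
    print(f"{label}: expected value ${ev:,.0f} vs. verification cost ${cost:,.0f} -> {verdict}")
```

Under those assumptions, the old-world prospect still nets value by working the inbox; the new-world prospect is better off deleting everything unread. That is the trust collapse the author is describing.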

So what? The write up answers this question:

You’re representing a brand. And your brand must continuously earn that trust, even if you use a blend of AI-powered relevance. We still want that unmistakable human leadership.

Yep, that’s it. What’s the fix? What does a marketer do? What does a customer looking for a product or service to solve a problem do?

There’s no answer. That’s the way smart software from Big AI Tech (BAIT) is supposed to work. The question, however, remains: “What’s the fix? What’s your recommendation, dear author? Who do you suggest should solve this problem you present in a compelling way?”

Crickets.

AI systems and methods disintermediate as a normal function. The author of the essay does not offer a solution. Why? There isn’t one short of an AI implosion. At this time, too many big-time people want AI to be the next big thing. They only look for profits and growth, not collateral damage. Traditional work is being rubble-ized. The build-up of opportunity will take place where the cost of redevelopment is low. Some new business may be built on top of the remains of older operations, but moving a few miles down the road may be a more appealing option.

Stephen E Arnold, November 11, 2025

Agentic Software: Close Enough for Horseshoes

November 11, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

I read a document that I would describe as tortured. The lingo was trendy. The charts and graphs sported trendy colors. The data gathering seemed to be a mix of “interviews” and other people’s research. Plus the write up was a bit scattered. I prefer the rigidity of old-fashioned organization. Nevertheless, I did spot one chunk of information that I found interesting.

The title of the research report (sort of an MBA- or blue-chip consulting firm-type of document) is “State of Agentic AI: Founder’s Edition.” I think it was issued in March 2025, but with backdating popular, who knows. I had the research report in my files, and yesterday (November 3, 2025) I was gathering some background information for a talk I am giving on November 6, 2025. The document walked through data about the use of software to replace people. Actually, the smart software agents generally do several things, according to the agent vendors’ marketing collateral. The cited document restated these items this way (a minimal code sketch of the loop appears after the list):

  1. Agents are set up to reach specific goals
  2. Agents are used to reason, which means “break down their main goal … into smaller manageable tasks and think about the next best steps.”
  3. Agents operate without any humans in India or Pakistan operating invisibly and behind the scenes
  4. Agents can consult a “memory” of previous tasks, “experiences,” work, etc.
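
Read together, those four points describe a simple loop: take a goal, break it into tasks, execute them without a human in the chain, and remember what happened. Here is a minimal sketch of that loop; every name in it (the Agent class, the stubbed planner, the list-based memory) is hypothetical and for illustration only, not code from the cited report or any vendor.

```python
# Toy agent loop: goal -> decompose -> act -> remember.
# All names and behaviors here are illustrative stubs, not vendor code.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)   # item 4: "memory" of prior work

    def plan(self) -> list[str]:
        # Item 2: "reason" by breaking the main goal into smaller tasks.
        # A real agent would ask an LLM to do this; here it is a fixed stub.
        return [f"{self.goal}: step {n}" for n in (1, 2, 3)]

    def act(self, task: str) -> str:
        # Item 3: no human in the loop; the agent executes each step itself.
        result = f"completed '{task}'"
        self.memory.append(result)                      # record the "experience"
        return result

    def run(self) -> list[str]:
        # Item 1: work toward the specific goal until the plan is exhausted.
        return [self.act(task) for task in self.plan()]

agent = Agent(goal="summarize last quarter's support tickets")
print(agent.run())
print(agent.memory)
```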

Agents, when properly set up and trained, can perform about as well as a human. I came away from the tan and pink charts with a ballpark figure of 75 to 80 percent reliability. Close enough for horseshoes? Yep.

There is a rundown of pricing options. Pricing seems to be a challenge for the vendors, with API usage charges and traditional software licensing used by a third of the agentic vendors.

Now here’s the most important segment from the document:

We asked founders in our survey: “What are the biggest issues you have encountered when deploying AI Agents for your customers? Please rank them in order of magnitude (e.g. Rank 1 assigned to the biggest issue)” The results of the Top 3 issues were illuminating: we’ve frequently heard that integrating with legacy tech stacks and dealing with data quality issues are painful. These issues haven’t gone away; they’ve merely been eclipsed by other major problems. Namely:

  • Difficulties in integrating AI agents into existing customer/company workflows, and the human-agent interface (60% of respondents)
  • Employee resistance and non-technical factors (50% of respondents)
  • Data privacy and security (50% of respondents).

Here’s the chart tallying the results:

image

Several ideas crossed my mind as I worked through this research data:

  1. Getting the human-software interface right is a problem. I know from my work at places like the University of Michigan, the Modern Language Association, and Thomson-Reuters that people have idiosyncratic ways to do their jobs. Two people with similar jobs add the equivalent of extra dashboard lights and yard gnomes to the process. Agentic software at this time is not particularly skilled in the dashboard LED and concrete gnome facets of a work process. Maybe someday, but right now, that’s a common deal breaker. Employees say, “I want my concrete unicorn, thank you.”
  2. Humans say they are into mobile phones, smart in-car entertainment systems, and customer service systems that do not deliver any customer service whatsoever. As somebody from Harvard said in a lecture: “Change is hard.” Yeah, and it may not get any easier if the humanoid thinks he or she will be allowed to spend the near future pushing burritos at the El Nopal Restaurant.
  3. Agentic software vendors assume that licensees will allow their creations to suck up corporate data, keep company secrets, and avoid disappointing customers by presenting proprietary information to a competitor. Security in “regular” enterprise software is a bit of a challenge. Security in a new type of agentic software is likely to be the equivalent of a ride on a roller coaster which has tossed several middle school kids to their deaths and cut off the foot of a popular young woman. She survived, but now has a non-smart, non-human replacement.

Net net: Agentic software will be deployed. Most of its work will be good enough. Why will this be tolerated in personnel, customer service, loan approvals, and similar jobs? The answer is reduced headcounts. Humans cost money to manage. Humans want health care. Humans want raises. Software which is good enough seems to cost less. Therefore, welcome to the agentic future.

Stephen E Arnold, November 11, 2025

Sure, Sam. We Trust You with Our Data

November 11, 2025

OpenAI released a new AI service called “company knowledge” that collects and analyzes all information within an organization.  Why does this sound familiar?  Because malware does the same thing for nefarious purposes.  The story comes from Computer World and is entitled, “OpenAI’s Company Knowledge Wants Access To All Of Your Internal Data.”

A major problem is that OpenAI is still a relatively young company and organizations are reluctant to share all of their data with it.  AI is still an untested pool, and so much can go wrong when it comes to regulating security and privacy.  Here’s another clincher in the deal:

“Making granting that trust yet more difficult is the lack of clarity around the ultimate OpenAI business model. Specifically, how much OpenAI will leverage sensitive enterprise data in terms of selling it, even with varying degrees of anonymization, or using it to train future models.”

What does Jeff Pollard, vice-president and principal analyst at Forrester, say?

“ ‘The capabilities across all these solutions are similar, and benefits exist: Context and intelligence when using AI, more efficiency for employees, and better knowledge for management.’”

But there’s a big but that Pollard makes clear:

“ ‘Data privacy, security, regulatory, compliance, vendor lock-in, and, of course, AI accuracy and trust issues.  But for many organizations, the benefits of maximizing the value of AI outweighs the risks.’”

The current AI situation is that applications are transitioning from isolated to connected agents and agentic systems developed to maximize value for the users.  In other words, according to Pollard, “high risk and high reward.”  The rewards are tempting but the consequences are also alarming.

Experts say that companies won’t place all of their information and proprietary knowledge in the hands of a young company and untested technology.  They could, but there aren’t any regulations to protect them.

OpenAI should practice with its own company first, then see what happens.

Whitney Grace, November 11, 2025

Microsoft: Desperation or Inspiration? Copilot, Have We Lost an Engine?

November 10, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

Microsoft is an interesting member of the high-tech in-crowd. It is the oldest of the Big Players. It invented Bob and Clippy. It has not cracked the weirdness of Word’s numbering. Updates routinely kill services. I think it would be wonderful if Task Manager did not spawn multiple instances of itself.

Furthermore, Microsoft, the cloudy giant with oodles of cash, ignited the next-big-thing frenzy a couple of years ago with the announcement that Bob and Clippy would operate on AI steroids. Googzilla experienced the equivalent of a traumatic stress injury and blinked red, yellow, and orange for months. Crisis bells rang. Klaxons interrupted Foosball games. Napping in pods became difficult.

Imagine what’s happening at Microsoft now that this Sensor Tower chart is popping up in articles like “Microsoft Bets on Influencers Like Alix Earle to Close the Gap With ChatGPT.” Here’s the gasp inducer:

image

Source: The chart comes from Sensor Tower. It carries Bloomberg branding. But it appeared in an MSN.com article. Who crafted the data? How were the data assembled? What mathematical processes were used to produce such nice round numbers? I have no clue, but let’s assume those fat, juicy round numbers are “real,” not the weird imaginary “i” things electrical engineers enjoy each day.

The write up states:

Microsoft Corp., eager to boost downloads of its Copilot chatbot, has recruited some of the most popular influencers in America to push a message to young consumers that might be summed up as: Our AI assistant is as cool as ChatGPT. Microsoft could use the help. The company recently said its family of Copilot assistants attracts 150 million active users each month. But OpenAI’s ChatGPT claims 800 million weekly active users, and Google’s Gemini boasts 650 million a month. Microsoft has an edge with corporate customers, thanks to a long history of selling them software and cloud services. But it has struggled to crack the consumer market — especially people under 30.

Microsoft came up with a novel solution to its being fifth in the smart software league table. Is Microsoft developing useful AI-infused services for Notepad? Yes. Is Microsoft pushing Copilot and its hallucinatory functions into Excel? Yes. Is Microsoft using Copilot to help partners code widgets for their customers to use in Azure? Yeah, sort of, but I have heard that Anthropic Claude has some Certified Partners as fans.

The Microsoft professionals, the leadership, and the legions of consultants have broken new marketing ground. Microsoft is paying social media influencers to pitch Microsoft Copilot as the one true smart software. Forget that “God is my copilot” meme. It is now “Meme makers are Microsoft’s Copilot.”

The write up includes this statement about this stunningly creative marketing approach:

“We’re a challenger brand in this area, and we’re kind of up and coming,” said Consumer Chief Marketing Officer Yusuf Mehdi.

Excuse me, Microsoft was first when it announced its deal with OpenAI a couple of years ago. Microsoft was the only game in town. OpenAI was a Silicon Valley start-up with links to Sam AI-Man and Mr. Tesla. Now Microsoft, a giant outfit, is “up and coming.” No, I would suggest Microsoft is stalled and coming down.

A marketing professor from that university / consulting outfit New York University is quoted in the cited write up. Here is that snippet:

Anindya Ghose, a marketing professor at New York University’s Stern School of Business, expressed surprise that Microsoft is using lifestyle influencers to market Copilot. But he can see why the company would be attracted to their cult followings. “Even if the perceived credibility of the influencer is not very high but the familiarity with the influencers is high, there are some people who would be willing to bite on that apple,” Ghose said in an interview.

The article presents proof that the Microsoft creative light saber has delivered. Here’s that passage:

Mehdi cited a video Earle posted about the new Copilot Groups feature as evidence that the campaign is working. “We can see very much people say, ‘Oh, I’m gonna go try that,’ and we can see the usage it’s driving.” The video generated 1.9 million views on Earle’s Instagram account and 7 million on her TikTok. Earle declined to comment for this story.

Following my non-creative approach, here are several observations:

  1. From first to fifth. I am not sure social media influencers are likely to address the reason the firm associated with Clippy occupies this spot.
  2. I am not sure Microsoft knows how to fix the “problem.” My hunch is that the Softies see the issue as one that is the fault of the users. It’s either the Russian hackers or the users of Microsoft products and services. Yeah, the problem is not ours.
  3. Microsoft, like Apple and Telegram, is struggling to graft smart software onto aging platforms, software, and systems. Google is doing a better job, but it is in second place. Imagine that. Google in the “place” position in the AI Derby. But Google has its own issues to resolve, and it is thinking about putting data centers in space, keeping its allegedly booming Web search business cranking along at top speed, and sucking enough cash from online advertising to pay for its smart software ambitions. Those wizards are busy. But Googzilla is in second place and coping with an acute stress reaction.

Net net: The big players have put huge piles of casino chips in the AI poker game. Desperation takes many forms. The sport of extreme marketing is just one of the disorder’s manifestations. Watch it on TikTok-type services.

Stephen E Arnold, November 10, 2025

Train Models on Hostility Oriented Slop and You Get Happiness? Nope, Nastiness on Steroids

November 10, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Informed discourse, factual reports, and the type of rhetoric loved by Miss Spurling in 1959 can have a positive effect. I wasn’t sure at the time if I wanted to say “Whoa, Nelly!” to my particular style of writing and speaking. She did her best to convert me from a somewhat weird 13-year-old into a more civilized creature. She failed, I fear.

image

A young student is stunned by the criticism of his approach to a report. An “F”. Failure. The solution is not to listen. Some AI vendors take the same approach. Thanks, Venice.ai, good enough.

When I read “Study: AI Models Trained on Clickbait Slop Result In AI Brain Rot, Hostility,” I thought about Miss Spurling and her endless supply of red pencils. Algorithms, it seems, have some of the characteristics of an immature young person. Feed that young person some unusual content, and you get some wild and crazy outputs.

The write up reports:

To see how these [large language] models would “behave” after subsisting on a diet of clickbait sewage, the researchers cobbled together a sample of one million X posts and then trained four different LLMs on varying mixtures of control data (long form, good faith, real articles and content) and junk data (lazy, engagement chasing, superficial clickbait) to see how it would affect performance. Their conclusion isn’t too surprising; the more junk data that is fed into an AI model, the lower quality its outputs become and the more “hostile” and erratic the model is …

But here’s the interesting point:

They also found that after being fed a bunch of ex-Twitter slop, the models didn’t just get “dumber”, they were (shocking, I know) far more likely to take on many of the nastier “personality traits” that now dominate the right wing troll platform …

The write up makes a point about the wizards creating smart software; to wit:

The problem with AI generally is a decidedly human one: the terrible, unethical, and greedy people currently in charge of it’s implementation (again, see media, insurance, countless others) — folks who have cultivated some unrealistic delusions about AI competency and efficiency (see this recent Stanford study on how rushed AI adoption in the workforce often makes people less efficient).

I am not sure that the highly educated experts at Google-type AI companies would agree. I did not agree with Miss Spurling. On many points, she was correct. Adolescent thinking produces some unusual humans as well as interesting smart software. I particularly like some of the newer use cases; for instance, driving some people wacky or appealing to the underbelly of human behavior.

Net net: Scale up, shut up, and give up.

Stephen E Arnold, November 10, 2025

Mobile Hooking People: Digital Drugs

November 10, 2025

Most of us know that spending too much time on our phones is a bad idea, especially for young minds. We also know the companies on the other end profit from keeping us glued to the screen. The Conversation examines the ways “Smartphones Manipulate our Emotions and Trigger our Reflexes – No Wonder We’re Addicted.” Yes, try taking a 12-year-old’s mobile phone and let us know how that goes.

Of course, social media, AI chatbots, games, and other platforms have their own ways of capturing our attention. This article, however, focuses on ways the phones themselves manipulate users. Author Stephen Monteiro writes:

“As I argue in my newly published book, Needy Media: How Tech Gets Personal, our phones — and more recently, our watches — have become animated beings in our lives. These devices can build bonds with us by recognizing our presence and reacting to our bodies. Packed with a growing range of technical features that target our sensory and psychological soft spots, smartphones create comforting ties that keep us picking them up. The emotional cues designed into these objects and interfaces imply that they need our attention, while in actuality, the devices are soaking up our data.”

The write-up explores how phones’ responsive features, like facial recognition, geolocation, touchscreen interactions, vibrations and sounds, and motion and audio sensing, combine to build a potent emotional attachment. Meanwhile, devices have drastically increased how much information they collect and when. They constantly record data on everything we do on our phones and even in our environments. One chilling example: With those sensors, software can build a fairly accurate record of our sleep patterns. Combine that with health and wellness apps, and that gives app-makers a surprisingly comprehensive picture. Have you seen any eerily insightful ads for fitness, medical, or mindfulness products lately? Soon, they will even be able to gauge our emotions through analysis of our facial expressions. Just what we need.

Given a cell phone is pretty much required to navigate life these days, what are we to do? Monteiro suggests:

“We can access device settings and activate only those features we truly require, adjusting them now and again as our habits and lifestyles change. Turning on geolocation only when we need navigation support, for example, increases privacy and helps break the belief that a phone and a user are an inseparable pair. Limiting sound and haptic alerts can gain us some independence, while opting for a passcode over facial recognition locks reminds us the device is a machine and not a friend. This may also make it harder for others to access the device.”

If these measures do not suffice, one can go retro with a “dumb” phone. Apparently, that is a trend among Gen Z. Perhaps there is hope for humanity yet.

Cynthia Murrell, November 10, 2025

News Flash: Smart Software Can Be Truly Stupid about News

November 10, 2025

Do you receive your news via your favorite chatbot?  It doesn’t matter which one is your favorite, because much of the time you’re being served misinformation.  While the Trump and Biden administrations went crazy about the spreading of fake news, they weren’t entirely wrong.  ZDNet reports as much in “Get Your News From AI? Watch Out – It’s Wrong Almost Half The Time.”

The European Broadcasting Union (EBU) and the BBC discovered that popular chatbots are incorrectly reporting the news.  The two organizations had journalists in eighteen countries evaluate news responses in fourteen languages from Perplexity, Copilot, Gemini, and ChatGPT.  Here are the results:

“The researchers found that close to half (45%) of all of the responses generated by the four AI systems “had at least one significant issue,” according to the BBC, while many (20%) “contained major accuracy issues,” such as hallucination — i.e., fabricating information and presenting it as fact — or providing outdated information. Google’s Gemini had the worst performance of all, with 76% of its responses containing significant issues, especially regarding sourcing.”

The implications are obvious to the smart thinker: distorted information.  Thankfully, Reuters found that only 7% of adults receive all of their news from AI sources, although the figure rises to 15% for those under age twenty-five.  More than three-quarters of adults never turn to chatbots for their news.

Why is anyone surprised by this?  More importantly, why aren’t the big news outlets, including those on the far left and right, sharing this information?  I thought these companies were worried about going out of business because of chatbots.  Why aren’t they reporting on this story?

Whitney Grace, November 7, 2025

Cyber Security: Do the Children of Shoemakers Have Yeezies or Sandals?

November 7, 2025

Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.

When I attended conferences, I liked to stop at the exhibitor booths and listen to the sales pitches. I remember one event held in a truly shabby hotel in Tyson’s Corner. The vendor, whose name escapes me, explained that his firm’s technology could monitor employee actions, flag suspicious behaviors, and virtually eliminate insider threats. I stopped at the booth the next day and asked, “How can your monitoring technology identify individuals who might flip the color of their hat from white to black?” The answer was, “Patterns.” I found the response interesting because virtually every cyber security firm with which I have interacted over the years talks about patterns.

image

Thanks, OpenAI. Good enough.

The problem is that these are mostly brute-force methods. Flagging that employee A tried to access a Dark Web site known for selling malware works if the bad actor is clueless. But what happens if the bad actors were actually wearing white hats, riding white stallions, and saying, “Hi ho, Silver, away”?

Here’s the answer: “Prosecutors allege incident response pros used ALPHV/BlackCat to commit string of ransomware attacks.” The write up explains that “cybersecurity turncoats attacked at least five US companies while working for” cyber security firms. Here’s an interesting passage from the write up:

Ryan Clifford Goldberg, Kevin Tyler Martin and an unnamed co-conspirator — all U.S. nationals — began using ALPHV, also known as BlackCat, ransomware to attack companies in May 2023, according to indictments and other court documents in the U.S. District Court for the Southern District of Florida. At the time of the attacks, Goldberg was a manager of incident response at Sygnia, while Martin, a ransomware negotiator at DigitalMint, allegedly collaborated with Goldberg and another co-conspirator, who also worked at DigitalMint and allegedly obtained an affiliate account on ALPHV. The trio are accused of carrying out the conspiracy from May 2023 through April 2025, according to an affidavit.

How long did the malware attacks persist? Just from May 2023 until April 2025. 

Obviously the purpose of the bad behavior was money. But the key point is that, according to the article, “he was recruited by the unnamed co-conspirator.”

And that, gentle reader, is how bad actors operate. Money pressure, some social engineering probably at a cyber security conference, and a pooling of expertise. I am not sure that insider threat software can identify this type of behavior. The evidence is that multiple cyber security firms employed these alleged bad actors and the scam was afoot for more than 20 months. And what about the people who hired these individuals? That screening seems to be somewhat spotty, doesn’t it?

Several observations:

  1. Cyber security firms themselves are not able to operate in a secure manner
  2. Trust in Fancy Dan software may be misplaced. Managers and co-workers need to be alert and have a way to communicate suspicions in an appropriate way
  3. The vendors of insider threat detection software may want to provide some hard proof that their systems operate when hats change from white to black.

Everyone talks about the boom in smart software. But cyber security is undergoing a similar economic gold rush. This example, if it is indeed accurate, indicates that companies may develop, license, and use cyber security software. Does it work? I suggest you ask the “leadership” of the firms involved in this legal matter.

Stephen E Arnold, November 7, 2025

How Frisky Will AI Become? Users Like Frisky… a Lot

November 7, 2025

OpenAI promised to create technology that would benefit humanity, much like Google and other Big Tech companies. We know how that has gone. Much to the worry of its team, OpenAI released a TikTok-like app powered by AI. What could go wrong? Well, we’re still waiting to see the fallout, but TechCrunch shares the possibilities in the story: “OpenAI Staff Grapples With The Company’s Social Media Push.”

OpenAI is headed into social media because that is where the money is. The push comes from OpenAI’s bigwigs. The new TikTok-like app is called Sora 2, and it has an AI-based feed. Past and present employees are concerned about how Sora 2 will benefit humanity. They worry that Sora 2 will serve consumers more AI slop, the equivalent of digital brain junk food, instead of benefiting them. Even OpenAI’s CEO Sam Altman has had to address the amount of money allocated to AI social media projects:

“ ‘We do mostly need the capital for build [sic] AI that can do science, and for sure we are focused on AGI with almost all of our research effort,’ said Altman. ‘It is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need.’ ‘When we launched chatgpt there was a lot of ‘who needs this and where is AGI,’ Altman continued. ‘[R]eality is nuanced when it comes to optimal trajectories for a company.’”

Here’s another quote about the negative effects of AI:

‘One of the big mistakes of the social media era was [that] the feed algorithms had a bunch of unintended, negative consequences on society as a whole, and maybe even individual users. Although they were doing the thing that a user wanted — or someone thought users wanted — in the moment, which is [to] get them to, like, keep spending time on the site.’”

Let’s start taking bets about how long it will take the bad actors to transform Sora 2 into a quite frisky service.

Whitney Grace, November 7, 2025

Govini? Another Palantir Technologies?

November 7, 2025

Good news. Another Palantir. Just what we need. CNBC reports, “Govini, a Defense Tech Startup Taking on Palantir, Hits $100 Million in Annual Recurring Revenue.” Writer Samantha Subin tells us:

“Govini, a defense tech software startup taking on the likes of Palantir, has blown past $100 million in annual recurring revenue, the company announced Friday. ‘We’re growing faster than 100% in a three-year CAGR, and I expect that next year we’ll continue to do the same,’ CEO Tara Murphy Dougherty told CNBC’s Morgan Brennan in an interview. With how ‘big this market is, we can keep growing for a long, long time, and that’s really exciting.’ CAGR stands for compound annual growth rate, a measurement of the rate of return. The Arlington, Virginia-based company also announced a $150 million growth investment from Bain Capital. It plans to use the money to expand its team and product offering to satisfy growing security demands.”

A former business-development leader at Palantir, Dougherty says her current firm is aiming for a “vertical slice” of the defense tech field. We learn:

“The 14-year-old Govini has already secured a string of big wins in recent years, including an over $900-million U.S. government contract and deals with the Department of War. Govini is known for its flagship AI software Ark, which it says can help modernize the military’s defense tech supply chain by better managing product lifecycles as military needs grow more sophisticated.”

The CEO asserts China’s dominance in rare earths and processed minerals and its faster shipbuilding capacity are reasons to worry. Sounds familiar. However, she believes an efficient and effective procurement system like Ark can provide an advantage for the US. Perhaps. But does it come with sides of secrecy, surveillance, and influence a la Palantir? Stay tuned.

Cynthia Murrell, November 7, 2025
