What Does One Do When Innovation Falters? Do the Me-Too Bop

February 10, 2025

Another dinobaby commentary. No smart software required.

I found the TechRadar story “In Surprise Move Microsoft Announces Deepseek R1 Is Coming to CoPilot+ PCs – Here’s How to Get It” an excellent example of big tech innovation. The article states:

Microsoft has announced that, following the arrival of Deepseek R1 on Azure AI Foundry, you’ll soon be able to run an NPU-optimized version of Deepseek’s AI on your Copilot+ PC. This feature will roll out first to Qualcomm Snapdragon X machines, followed by Intel Core Ultra 200V laptops, and AMD AI chipsets.

Yep, me too, me too. The write up explains the ways in which one can use Deepseek, and I will leave taking that step to you. (On the other hand, navigate to Hugging Face and download it, or you could zip over to You.com and give it a try.)

The larger issue is not the speed with which Microsoft embraced the me too approach to innovation. For me, the decision illustrates the paucity of technical progress in one of the big technology giants. You know, Microsoft, the originator of Bob and the favorite software company of bad actors who advertise their malware on Telegram.

Several observations:

  1. It doesn’t matter how the Chinese start up nurtured by a venture capital firm got Deepseek to work. The Chinese outfit did it. Bang. The export controls and the myth of trillions of dollars to scale up disappeared. Poof.
  2. No US outfit — with or without US government support — was in the hockey rink when the Chinese team showed up and blasted a goal in the first few minutes of a global game. Buzz. 1 to zip. The question is, “Why not?” and “What’s happened since Microsoft triggered the crazy Code Red or whatever at the Google?” Answer: Burning money quickly.
  3. More pointedly, are the “innovations” in AI touted by Product Hunt and podcasters innovations? What if these are little more than wrappers with some snappy names? Answer: A reminder that technical training and some tactical kung fu can deliver a heck of a punch.

Net net: Deepseek was a tactical foray or probe. The data are in. Microsoft will install Chinese software in its global software empire. That’s interesting, and it underscores the problem of me too. Innovation takes more than raising prices and hiring a PR firm.

Stephen E Arnold, February 10, 2025

Deepseek: Details Surface Amid Soft Numbers

February 7, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I read “Research exposes Deepseek’s AI Training Cost Is Not $6M, It’s a Staggering $1.3B.” The assertions in the write up are interesting and closer to the actual cost of the Deepseek open source smart software. Let’s take a look at the allegedly accurate and verifiable information. Then I want to point out two costs not included in the estimated cost of Deepseek.

The article explains that the analysis for training was closer to $1.3 billion. I am not sure if this estimate is on the money, but a higher cost is certainly understandable based on the money burning activities of outfits like Microsoft, OpenAI, Facebook / Meta, and the Google, among others.

The article says:

In its latest report, SemiAnalysis, an independent research company, has spotlighted Deepseek, a rising player in the AI landscape. The SemiAnalysis challenges some of the prevailing narratives surrounding Deepseek’s costs and compares them to competing technologies in the market. One of the most prominent claims in circulation is that Deepseek V3 incurs a training cost of around $6 million.

One important point is that building and making available for free a smart software system incurs many costs. The consulting firm has narrowed its focus to training costs.

The write up reports:

The $6 million estimate primarily considers GPU pre-training expenses, neglecting the significant investments in research and development, infrastructure, and other essential costs accruing to the company. The report highlights that Deepseek’s total server capital expenditure (CapEx) amounts to an astonishing $1.3 billion. Much of this financial commitment is directed toward operating and maintaining its extensive GPU clusters, the backbone of its computational power.

But “astonishing.” Nope. Sam AI-Man tossed around numbers in the trillions. I am not sure we will ever know how much Amazon, Facebook, Google, and Microsoft — to name four outfits — have spent in the push to win the AI war, get a new monopoly, and control everything from baby cams to zebra protection in South Africa.

I do agree that the low ball number was low, but I think the pitch for this low ball was a tactic designed to see what a Chinese-backed AI product could do to the US financial markets.

There are some costs that neither the SemiAnalysis outfit nor the Interesting Engineering wordsmith considered.

First, if you take a look at the authors of the Deepseek ArXiv papers, you will see a lot of names. Most of these individuals are affiliated with Chinese universities. How were these costs handled? My hunch is that the costs were paid by the Chinese government, and the authors of the paper did what was necessary to figure out how to come up with a “do more for less” system. The idea is that China, hampered by US export restrictions, is better at AI than the mythological Silicon Valley. Okay, that’s a good intelligence operation: Test destabilization with a reasonably believable free software gilded with AI sparklies. But the costs? Staff, overhead, and whatever perks go with being a wizard at a Chinese university have to be counted, multiplied by the time required to get the system to work mostly, and then included in the statement of accounts. These steps have not been taken, but a company named Complete Analytics should do the work.

Second, what was the cost of the social media campaign that made Deepseek more visible than the head referee of the Kansas City Chiefs and Philadelphia Eagles game? That cost has not been considered. Someone should grind through the posts, count the authors or their handles, and produce an estimate. As far as I know, there is no information about who is a paid promoter of Deepseek.

Third, how much did the electricity cost to get Deepseek to do its tricks? We must not forget the power consumed at the universities, the research labs, and the laptops. Technology Review has some thoughts along this power line.

Finally, what’s the cost of the overhead? I am thinking about the planning time, the lunches, the meetings, and the back and forth needed to get Deepseek on track to coincide with the new president’s push to make China not so great again. We have nothing. We need a firm called SpeculativeAnalytics for this task, or maybe MasterCard can lend a hand?

Net net: The Deepseek operation worked. The recriminations, the allegations, and the explanations will begin. I am not sure they will have as much impact as this China smart, US dumb strategy. Plus, that SemiAnalysis name is a hoot.

Stephen E Arnold, February 7, 2025

China Smart, US Dumb: The Deepseek Foray into Destabilization of AI Investment

February 6, 2025

Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.

I have published a few blog posts about the Chinese information warfare directed at the US. Examples have included videos of a farm girl with primitive tools repairing complex machinery, the carpeting of ArXiv with papers about Deepseek’s AI innovations, and the stories in the South China Morning Post about assorted US technology issues.


Thanks You.com. Pretty good illustration.

Now the Deepseek foray is delivering tangible results. Numerous articles appeared on January 27, 2025, pegged to the impact of the Deepseek smart software on the US AI sector. A representative article is “China’s Deepseek Sparks AI Market Rout.”

The trusted real news outfit said:

Technology shares around the world slid on Monday as a surge in popularity of a Chinese discount artificial intelligence model shook investors’ faith in the AI sector’s voracious demand for high-tech chips. Startup Deepseek has rolled out a free assistant it says uses lower-cost chips and less data, seemingly challenging a widespread bet in financial markets that AI will drive demand along a supply chain from chipmakers to data centres.

Facebook ripped a page from the Google leadership team’s playbook. According to “Meta Scrambles After Chinese AI Equals Its Own, Upending Silicon Valley,” the Zuckerberg outfit assembled four “war rooms” to figure out how a Chinese open source AI could become such a big problem from out of the blue.

I find it difficult to believe that big US outfits were unaware of China’s interest in smart software. Furthermore, the Deepseek team made its bench strength quite clear by listing dozens upon dozens of AI experts who contributed to the Deepseek effort. But who in US AI land has time to cross correlate the names of the researchers in the ArXiv essays to ask, “What are these folks doing to output cheaper AI models?”

Several observations are warranted:

  1. The effect of this foray has been to cause an immediate and direct concern about US AI firms’ ability to reduce costs. China allegedly has rolled out a good model at a lower price. Price competition comes in many forms. In this case, China can use less modern components to produce more modern AI. If you want to see how this works for basic equipment navigate to “Genius Girl Builds Amazing Hydroelectric Power Station For An Elderly Living Alone in the Mountains.” Deepseek is this information warfare tactic in the smart software space.
  2. The mechanism for the foray was open source. I have heard many times from some very smart people that open source is the future. Maybe that’s true. We now have an example of open source creating a credibility problem for established US big technology outfits which use open source to publicize how smart and good they are, prove they can do great work, and appear to be “community” minded. Deepseek just posted software that showed a small venture firm was able to do what US big technology has done at a fraction of the cost. Chinese business understands price and cost centric methods. This is the cost angle driven through the heart of scaling up solutions. Like giant US trucks, the approach is expensive and at some point will collapse under its own bloated framework.
  3. The foray has been broken into four parts: [a] The arXiv thrust, [b] the free and open source software thrust which begs the question, “What’s next from this venture firm?”, [c] the social media play with posts ballooning on BlueSky, Telegram, and Twitter, [d] the real journalism outfits like Bloomberg and Reuters yapping about AI innovation. The four-part thrust is effective.

China’s made the US approach to smart software look incredibly stupid. I don’t believe that a small group of hard workers at a venture firm cooked up the Deepseek method. The number of authors on the arXiv Deepseek papers makes that clear.

With one deft, non kinetic, non militaristic foray, China has amplified doubt about US AI methods. The action has chopped big bucks from outfits like Nvidia. Plus China has combined its playbook for lower costs and better prices with information warfare. I am not sure that Silicon Valley type outfits have a response to roll out quickly. The foray has returned useful intelligence to China.

Net net: More AI will be coming to destabilize the Silicon Valley way.

Stephen E Arnold, February 6, 2025

Google and Job Security? What a Hoot

February 4, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

Yesterday (January 30, 2025), one of the group mentioned that Google employees were circulating a YAP. I was not familiar with the word “yap”, so I asked, “What’s a yap?” The answer: It is yet another petition.

Here’s what I learned and then verified by a source no less pristine than NBC News. About 1,000 employees want Google to assure the workers that they have “job security.” Yo, Googlers, when lawyers at the Department of Justice and other Federal workers lose their jobs between sips of their really lousy DoJ coffee, there is not much job security. Imagine professionals with sinecures now forced to offer some version of reality on LinkedIn. Get real.

The “real” news outfit reported:

Google employees have begun a petition for “job security” as they expect more layoffs by the company. The petition calls on Google CEO Sundar Pichai to offer buyouts before conducting layoffs and to guarantee severance to employees that do get laid off. The petition comes after new CFO Anat Ashkenazi said one of her top priorities would be to drive more cost cutting as Google expands its spending on artificial intelligence infrastructure in 2025.

I remember when Googlers talked about the rigorous screening process required to get a job. This was the unicorn-like Google Labs Aptitude Test or GLAT. At one point, years ago, someone in the know gave me the “test” before a meeting. Here’s the first page of the document. (I think I received this from a Googler in 2004 or 2005.)

image

If you can’t read this, here’s question 6:

On your first day at Google, you discover that your cubicle mate wrote the textbook you used as a primary resource in your first year of graduate school. Do you:

a) Fawn obsequiously and ask if you can have an autograph

b) Sit perfectly still and use only soft keystrokes to avoid disturbing her concentration

c) Leave her daily offerings of granola and English toffee from the food bins

d) Quote your favorite formula from the text book and explain how it’s now your mantra

e) Show her how example 17b could have been solved with 34 fewer lines of code?

I have the full GLAT if you want to see it. Just write benkent2020 at yahoo dot com and we will find a way to provide the allegedly real document to you.

The good old days of Googley fun and self confidence are, it seems, gone. As a proxy for the old Google, we now have employees’ words like these:

“We, the undersigned Google workers from offices across the US and Canada, are concerned about instability at Google that impacts our ability to do high quality, impactful work,” the petition says. “Ongoing rounds of layoffs make us feel insecure about our jobs. The company is clearly in a strong financial position, making the loss of so many valuable colleagues without explanation hurt even more.”

I would suggest that the petition won’t change Google’s RIF. The company faces several challenges. One of the major ones is the near impossibility of paying for [a] indexing and updating the wonderful Google index, [b] spending money in order to beat the pants off the outfits which used Google’s transformer tricks, and [c] buying, hiring, or coercing the really big time AI wizards to join the online advertising company instead of starting an outfit to create a wrapper for Deepseek and getting money from whoever will offer it.

Sorry, petitions are unlikely to move a former McKinsey big time blue chip consultant. Get real, Googler. By the way, you will soon be a proud Xoogler. Enjoy that distinction.

Stephen E Arnold, February 4, 2025

Google AI Product Names: Worse Than the Cheese Fixation

February 4, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

If you are Googley, you intuitively and instantly know what these products are:

Gemini Advanced 2.0 Flash

Gemini Advanced 2.0 Flash Thinking Experimental

2.0 Flash Thinking Experimental with apps

2.0 Pro Experimental

1.5 Pro

1.5 Flash

If you don’t get it, you write articles like this one: “You Only Need to See This Screenshot Once to Realize Why Gemini Needs to Follow ChatGPT in Making Its AI Products Less Confusing.” Follow ChatGPT, from the outfit OpenAI, which is “open source” and a non-profit, with a Chief Wizard who was fired and rehired more quickly than I can locate hallucinations in ChatGPT whatever. (With Google hallucinations, particularly in the cheese department, I know it is just a Sundar & Prabhakar joke.) With OpenAI, I am not quite sure of anything other than a successful (so far) end run around the masterful leader of X.com.

The write up says:

What we want is AI that just works, with simple naming conventions. If you look at the way Apple brands its products, it normally has up to three versions of a product with a simple name indicating the differences. It has two versions of its MacBook – the MacBook Air and MacBook Pro – and its latest iPhone – iPhone 16 and iPhone 16 Pro – that’s nice and simple.

Yeah, sure, Apple is the touchstone with indistinguishable iPhones, the M1, M2, M3, and M4 which are exactly understood as different by what percentage of the black turtleneck crowd?

Here’s a tip: These outfits are into marketing. Whether it is Apple designers influencing engineers or Google engineers influencing art history majors, neither company wants to do what courses in branding suggest; for example, consistency in naming and messaging and community engagement. I suppose confusion in social media and bafflement when trying to figure out what each black box large language model delivers other than acceptable high school essays and made up information is no big deal.

Word prediction is okay. Just a tip: Use the free services and read authoritative sources. Do some critical thinking. You may not be Googley, but you will be recognized as an individual who makes an honest effort to formulate useful outputs. Oh, you can label them experimental and flash to add some mystery to your old fashioned work, not “flash” work which is inconsistent, confusing, and sort of dumb in my opinion.

Stephen E Arnold, February 4, 2025

AI Smart, Humans Dumb When It Comes to Circuits

February 3, 2025

Anyone who knows much about machine learning knows we don’t really understand how AI comes to its conclusions. Nevertheless, computer scientists find algorithms do some things quite nicely. For example, ZME Science reports, "AI Designs Computer Chips We Can’t Understand—But They Work Really Well." A team from Princeton University and IIT Madras decided to flip the process of chip design. Traditionally, human engineers modify existing patterns to achieve desired results. The task is difficult and time-consuming. Instead, these researchers fed their AI the end requirements and told it to take it from there. They call this an "inverse design" method. The team says the resulting chips work great! They just don’t really know how or why. Writer Mihai Andrei explains:

"Whereas the previous method was bottom-up, the new approach is top-down. You start by thinking about what kind of properties you want and then figure out how you can do it. The researchers trained convolutional neural networks (CNNs) — a type of AI model — to understand the complex relationship between a circuit’s geometry and its electromagnetic behavior. These models can predict how a proposed design will perform, often operating on a completely different type of design than what we’re used to. … Perhaps the most exciting part is the new types of designs it came up with."

Yes, exciting. That is one word for it. Lead researcher Kaushik Sengupta notes:

"’We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance,’ says Sengupta. The designs were unintuitive and very different than those made by the human mind. Yet, they frequently offered significant improvements."
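The top-down flow the researchers describe — fix a target response first, then search for a geometry whose predicted behavior matches it — can be sketched with a toy stand-in. To be clear, everything below is a hypothetical illustration: the actual research trains convolutional neural networks on electromagnetic simulations, while this `surrogate` function, its made-up weights, and the random search are assumptions chosen only to show the shape of inverse design.

```python
import numpy as np

# Hypothetical stand-in for inverse design. In the real research a
# trained CNN predicts a circuit's electromagnetic behavior from its
# geometry; here a toy function plays that role.
rng = np.random.default_rng(0)
weights = np.array([0.5, -0.3, 0.8])  # made-up "physics"

def surrogate(geometry):
    # Pretend this is the trained model: geometry -> predicted response.
    return np.tanh(geometry @ weights)

target = 0.4  # desired performance, specified up front (top-down)

# Inverse design by search: propose candidate geometries and keep
# whichever one the surrogate scores closest to the target response.
best_geom, best_err = None, np.inf
for _ in range(2000):
    candidate = rng.uniform(-1, 1, size=3)
    err = abs(float(surrogate(candidate)) - target)
    if err < best_err:
        best_geom, best_err = candidate, err

print(best_err)  # the search lands very close to the target
```

The `best_geom` that falls out of such a search may look nothing like a parameter set a human would choose, which mirrors the researchers’ point: the optimizer cares only about predicted performance, not about whether the design is intuitive.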

But at what cost? We may never know. It is bad enough that health care systems already use opaque algorithms, with all their flaws, to render life-and-death decisions. Just wait until these chips we cannot understand underpin those calculations. New world, new trade-offs for a world with dumb humans.

Cynthia Murrell, February 3, 2025

Dumb Smart Software? This Is News?

January 31, 2025

A blog post written by a real and still-alive dinobaby. If there is art, there is AI in my workflow.

The prescient “real” journalists at the Guardian have a new insight: When algorithms are involved, humans get the old shaftola. I assume that Weapons of Math Destruction was not on some folks’ reading list. (O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016). That book did a reasonably good job of explaining how smart software’s math can create some excitement for mere humans. Its anecdotes about Amazon’s management of its team of hard-working delivery professionals brought to mind the survival tricks revealed by the wily Dane creating Survival Russia videos for YouTube.

(Yep, he took his kids to search for graves near a gulag.) “It’s a Nightmare: Couriers Mystified by the Algorithms That Control Their Jobs” explains that smart software raises some questions. The “real” journalist explains:

This week gig workers, trade unions and human rights groups launched a campaign for greater openness from Uber Eats, Just Eat and Deliveroo about the logic underpinning opaque algorithms that determine what work they do and what they are paid. The couriers wonder why someone who has only just logged on gets a gig while others waiting longer are overlooked. Why, when the restaurant is busy and crying out for couriers, does the app say there are none available?

Confusing? To some but to the senior managers of the organizations shifting to smart software, the cost savings are a big deal. Imagine. In Britain, a senior manager can spend a week or two in Nice, maybe Monaco? The write up reports:

The app companies say they do have rider support staffed by people and some information about the algorithms is available on their websites and when drivers are initially “onboarded”.

Of course the “app companies” say positive things. The issue is that management embraces smart software. A third-party firm is retained to advise the lawyers and accountants, and possibly one presentable information technology person is sent to a briefing. The options are considered and another third-party firm is retained to integrate the smart software. That third party retains a probably unpresentable IT person who can lash up some smart software to the bailing-wire-and-spit enterprise software system. Bingo! The algorithms perform their magic. Oh, whom does one blame for a flawed solution? I don’t know. Just call in the lawyers.

The article explains the impact on a worker who delivers for people who cannot walk to a restaurant or the grocery:

“Every worker should understand the basis on which they are paid,” Farrar [a delivery professional] said. “But you’re being gamed into deciding whether to accept a job or not. Will I get a better offer? It’s like gambling and it’s very distressing and stressful for people. You are completely in a vacuum about how best to do the job and because people often don’t understand how decisions are being made about their work, it encourages conspiracies.”

To whom should Mr. Farrar and others shafted by math complain? Perhaps the Guardian newspaper, which is slightly less popular than TikTok or X.com, Facebook or Red Book, or BlueSky or YouTube. My suggestion would be for the Guardian to use these channels and beg for pounds or dollars like other valiant social media professionals. The person doing deliveries might want to explore working for Amazon deliveries and avail himself of Survival Russia videos when on his generous Amazon breaks. And what about the people who call a restaurant and specify at home delivery? I would recommend getting out of that comfy lounge chair and walking to the restaurant in person. While you wait for your lovingly-crafted meal at the Indian takeaway, you can read Weapons of Math Destruction.

Stephen E Arnold, January 31, 2025

AI Innovation: Writing Checks Is the Google Solution

January 30, 2025

A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.

Wow. First, Jeff Dean gets the lateral arabesque. Then the Google shifts its smart software to the “I am a star” outfit DeepMind in the UK. Now, the cuddly Google has, according to Analytics India, pulled a fast one on the wizards laboring at selling advertising with another surprise. “Google Invests $1 Bn in Anthropic” reports:

This new investment is separate from the company’s earlier reported funding round of nearly $2 billion earlier this month, led by Lightspeed Venture Partners, to bump the company’s valuation to about $60 billion. In 2023, Google had invested $300 million in Anthropic, acquiring a 10% stake in the company. In November last, Amazon led Anthropic’s $4 billion fundraising effort, raising its overall funding to $8 billion for the company.

I thought Google was quantumly supreme. I thought Google reinvented protein stuff. I thought Google could do podcasts and fix up a person’s Gmail. I obviously was wildly off the mark. Perhaps Google’s “leadership” has taken time from writing scripts for the Sundar & Prabhakar Comedy Tour and had an epiphany. Did the sketch go like this:

Prabhakar: Did you see the slide deck for my last talk about artificial intelligence?

Sundar: Yes, I thought it was so so. Your final slide was a hoot. Did you think it up?

Prabhakar: No, I think little. I asked Anthropic Claude for a snappy joke. It worked.

Sundar: Did Jeff Dean help? Did Demis Hassabis contribute?

Prabhakar: No, just Claude Sonnet. He likes me, Sundar.

Sundar: The secret of life is honesty, fair dealing, and Code Yellow!

Prabhakar: I think Google intelligence may be a contradiction in terms. May I requisition another billion for Anthropic?

Sundar: Yes, we need to care about posterity. Otherwise, our posterity will be defined by a YouTube ad.

Prabhakar: We don’t want to take it in the posterity, do we?

Sundar: Well….

Anthropic allegedly will release a “virtual collaborator.” Google wants that, right Jeff and Demis? Are there anti-trust concerns? Are there potential conflicts of interest? Are there fears about revenues?

Of course not.

Will someone turn off those darned flashing red and yellow lights! Innovation is tough with the sirens, the lights, the quantumly supremeness of Googleness.

Stephen E Arnold, January 30, 2025

How Does Smart Software Interpret a School Test

January 29, 2025

A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.

I spotted an article titled “‘Is This Question Easy or Difficult to You?’: This LSAT Reading Comprehension Question Is Breaking Brains.” Click bait? Absolutely.

Here’s the text to figure out:

Physical education should teach people to pursue healthy, active lifestyles as they grow older. But the focus on competitive sports in most schools causes most of the less competitive students to turn away from sports. Having learned to think of themselves as unathletic, they do not exercise enough to stay healthy.

Imagine you are sitting in a hot, crowded examination room. No one wants to be there. You have to choose one of the following solutions.

(a) Physical education should include noncompetitive activities.

[b] Competition causes most students to turn away from sports.

[c] People who are talented at competitive physical endeavors exercise regularly.

[d] The mental aspects of exercise are as important as the physical ones.

[e] Children should be taught the dangers of a sedentary lifestyle.

Okay, what did you select?

Well, the “correct” answer is [a], Physical education should include noncompetitive activities.

Now how did some of the LLMs or smart software do?

ChatGPT o1 settled on [a].

Claude Sonnet 3.5 spit out a page of text but did conclude that the correct answer was [a].

Gemini 1.5 Pro concluded that [a] was correct.

Llama 3.2 90B output two sentences and the correct answer [a].

Will students use large language models for school work, tests, and real life?

Yep. Will students question or doubt the outputs? Nope.

Are the LLMs “good enough”?

Yep.

Stephen E Arnold, January 29, 2025

The Joust of the Month: Microsoft Versus Salesforce

January 29, 2025

These folks don’t seem to see eye to eye: Windows Central tells us, “Microsoft Claps Back at Salesforce—Claims ‘100,000 Organizations’ Had Used Copilot Studio to Create AI Agents by October 2024.” Microsoft’s assertion is in response to jabs from Salesforce CEO Marc Benioff, who declares, “Microsoft has disappointed everybody with how they’ve approached this AI world.” To support this allegation, Benioff points to lines from a recent MarketWatch post. A post which, coincidentally, also lauds his company’s success with AI agents. The smug CEO also insists he is receiving complaints about his giant competitor’s AI tools. Writer Kevin Okemwa elaborates:

“Benioff has shared interesting consumer feedback about Copilot’s user experience, claiming customers aren’t finding themselves transformed while leveraging the tool’s capabilities. He added that customers barely use the tool, ‘and that’s when they don’t have a ChatGPT license or something like that in front of them.’ Last year, Salesforce’s CEO claimed Microsoft’s AI efforts are a ‘tremendous disservice’ to the industry while referring to Copilot as the new Microsoft Clippy because it reportedly doesn’t work or deliver value. As the AI agent race becomes more fierce, Microsoft has seemingly positioned itself in a unique position to compete on a level playing field with key players like Salesforce Agentforce, especially after launching autonomous agents and integrating them into Copilot Studio. Microsoft claims over 100,000 organizations had used Copilot Studio to create agents by October 2024. However, Benioff claimed Microsoft’s Copilot agents illustrated panic mode, majorly due to the stiff competition in the category.”

One notable example, writes Okemwa, is Zuckerberg’s vision of replacing Meta’s software engineers with AI agents. Oh, goodie. This anti-human stance may have inspired Benioff, who is second-guessing plans to hire live software engineers in 2025. At least Microsoft still appears to be interested in hiring people. For now. Will that antiquated attitude hold the firm back, supporting Benioff’s accusations?

Mount your steeds. Fight!

Cynthia Murrell, January 29, 2025
