Microsoft and Job Loss Categories: AI Replaces Humans for Sure
July 31, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I read “Working with AI: Measuring the Occupational Implications of Generative AI.” This is quite a sporty academic-type write up. The people cranking out this 41 page Sociology 305 term paper work at Microsoft (for now).
The main point of the 41-page research summary is:
Lots of people will lose their jobs to AI.
Now this might be a surprise to many people, but I think the consensus among bean counters is that humans cost too much and require too much valuable senior manager time to manage correctly. Load up the AI, train the software, and create some workflows. Good enough and the cost savings are obvious even to those who failed their CPA examination.
The paper is chock full of jargon, explanations of the methodology which makes the project so darned important, and a wonky approach to presenting the findings.
Remember:
Lots of people will lose their jobs to AI.
The highlight of the paper in my opinion is the “list” of occupations likely to find that AI displaces humans at a healthy pace. The list is on page 12 of the report. I snapped an image of this chart “Top 40 Occupations with Highest AI Applicability Score.” The jargon means:
Lots of people will lose their jobs to AI.
Here’s the chart. (Yes, I know you cannot read it. Just navigate to the original document and read the list. I am not retyping 40 job categories. Also, I am not going to explain the MSFT “mean action score.” You can look at that capstone to academic wizardry yourself.)
What are the top 10 jobs likely to result in big time job losses? Microsoft says they are:
- People who translate from one language to another
- Historians, which I think means “history teachers” and writers of non-fiction books about the past
- Passenger attendants (think robots who bring you a for-fee vanilla cookie and an over-priced Coke with “real cane sugar”)
- People who sell services (yikes, that’s every consulting firm in the world. MBAs, be afraid)
- Writers (this category appears a number of times in the list of 40, but the “mean action score” knows best)
- Customer support people (companies want customers to never call. AI is the way to achieve this goal)
- CNC tool programmers (really? Someone has to write the code for the nifty Chip Foose wheel once I think. After that, who needs the programmer?)
- Telephone operators (there are still telephone operators. Maybe the “mean action score” system means receptionists at the urology doctors’ office?)
- Ticket agents (No big surprise)
- Broadcast announcers (no more Don Wilsons or Ken Carpenters. Sad.)
The remaining 30 are equally eclectic and repetitive. I think you get the idea. Service jobs and work that is repetitive — dinosaurs waiting to die.
Microsoft knows how to brighten the day for recent college graduates, people under 35, and those who are unemployed.
Oh, well, there is the Copilot system to speed information access about job hunting and how to keep a positive attitude. Thanks, Microsoft.
Stephen E Arnold, July 31, 2025
Guess Who Coded the Official Messaging App of Russia
July 30, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
The Bloomberg story title “Russia Builds a New Web Around Kremlin’s Handpicked Super App” caused me to poke around in the information my team and I have collected about “super apps,” encrypted messaging services, and ways the Kremlin wants to get access to any communication by Russian citizens and those living in the country and across the Russian Federation. The Bloomberg story is interesting, but I want to add some color to what seems to be a recent development.
If you answered the question “Guess who coded the official messaging app of Russia?” by saying, “Pavel and Nikolai Durov,” you are mostly correct. The official messaging app is a revamped version of VKontakte, the Facebook knock off coded by Pavel and Nikolai Durov. By 2011, Kremlin authorities figured out that access to the content on a real time social media service like VK was a great way to stamp out dissent.
The Durovs did not immediately roll over, but by 2013, Pavel Durov folded. He took some cash, left Nikolai at home with mom, and set off to find a place hospitable to his views of freedom, privacy, security, and living a life not involving a Siberian prison. Pavel Durov, however, has a way of attracting attention from government officials outside of Russia. He is awaiting trial in France for a number of alleged online crimes, including CSAM. (CSAM is in the news in the US recently as well.)
Ongoing discussions with VK and an “integrator” have been underway for years. The Kremlin contracted with Sber and today’s VK to create a mandatory digital service for Russian citizens and anyone in the country buying a mobile phone in Russia. The idea is that with a mandatory messaging app, the Kremlin could access the data that Pavel Durov refused to produce.
The official roll out of the “new” government-controlled VK service began in June 2025. On September 1, 2025, the new VK app must be pre-installed on any smartphone or tablet sold in the country. Early reports suggested that about one million users had jumped on the “new” messaging app MAX. MAX is the post-Durov version of VKontakte without the Pavel Durov obstinacy and yapping about privacy.
The Russian online service https://PCNews.ru published “Ministry of Digital: Reports That the MAX Messenger Will Be Mandatory for Signing Electronic Documents Are Not True.” The write up says that reports claiming the “official” messaging service MAX will be required for Russians signing electronic documents are not true.
Earlier this week (July 28, 2025):
… the [Russian] government of the Kemerovo region is officially switching to using the Russian MAX messenger for all work communications. Before this, the national messenger began to be implemented in St. Petersburg, as we have already reported, Novosibirsk and Tatarstan. Depending on the region, the platform is used both in government structures and in the field of education. In Russia they want to ensure free and secure transfer of user data from WhatsApp and Telegram instant messengers to the Russian MAX platform. From September 1, 2025, the Max messenger will have to be pre-installed on all smartphones and tablets sold in Russia. In late June 2025, the developers announced that over one million users had registered with Max.
This means that not everything the Kremlin requires will reside on the super app MAX. From a government security vantage point, the decision is a good one. The Kremlin, like other governments, has information it tries hard to keep secret. The approach works until something like Microsoft SharePoint is installed or an outstanding person like Edward Snowden hauls off some sensitive information.
The Russians appear to be quite enthusiastic about the new government-responsive super app. Here’s some data to illustrate the level of the survey sample’s enthusiasm.
“The Attitude of Russians Towards the National Messenger Has Become Known” reports:
- 55% of respondents admitted that they would like their data to be stored on Russian servers
- 85% communicate with loved ones using messaging apps
- 49% watch the news
- 47% of respondents use instant messengers for work or study
- 38% of respondents supported the idea of creating a Russian national messenger
- 26% answered that they rather support it
- 19% of respondents admitted that they were indifferent to this topic.
Other findings included:
- 36% of Russians named independence from the departure of foreign services among the advantages of creating a domestic messenger
- 33% appreciate popularization of Russian developments
- 32% see a positive from increasing data security
- 53% of respondents liked the idea when in one service you can not only communicate, but also use government services and order goods.
Will Russians be able to circumvent the mandatory use of MAX? Almost anything set up to cage online users can be circumvented. The Great Firewall of China, after years of chatter, does not seem to impede the actions of some people living in China from accessing certain online services. At this time, I can see some bright young people poking around online for tips and tricks related to modern proxy services, commodity virtual private networks, and possibly some fancy dancing with specialized hardware.
What about Telegram Messenger, allegedly the most popular encrypted messaging super app in Russia, the Russian Federation, and a chunk of Southeast Asia? My perception is that certain online habits, particularly if they facilitate adult content, contraband transactions, and money laundering are likely to persist. I don’t think it will take long for the “new” MAX super app to be viewed as inappropriate for certain types of online behavior. How long? Maybe five seconds?
Stephen E Arnold, July 30, 2025
Microsoft: Knee Jerk Management Enigma
July 29, 2025
This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.
I read “In New Memo, Microsoft CEO Addresses Enigma of Layoffs Amid Record Profits and AI Investments.” The write up says in a very NPR-like soft voice:
“This is the enigma of success in an industry that has no franchise value,” he wrote. “Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding. But it’s also a new opportunity for us to shape, lead through, and have greater impact than ever before.” The memo represents Nadella’s most direct attempt yet to reconcile the fundamental contradictions facing Microsoft and many other tech companies as they adjust to the AI economy. Microsoft, in particular, has been grappling with employee discontent and internal questions about its culture following multiple rounds of layoffs.
Discontent. Maybe the summer of discontent. No, it’s a reshaping or re-invention of a play by William Shakespeare (allegedly) which borrows from Chaucer’s Troilus and Criseyde with a bit more emphasis on pettiness and corruption to add spice to Boccaccio’s antecedent. Willie’s Troilus and Cressida makes the “love affair” more ironic.
Ah, the Microsoft drama. Let’s recap: [a] Troilus and Cressida’s two kids, Satya and Sam; [b] security woes of SharePoint (who knew? eh, everyone); [c] buying green credits (how much manure does a gondola rail car hold?); [d] Copilot (are the fuel switches on? Nope); and [e] layoffs.
What’s the description of these issues? An enigma. This is a word popping up frequently it seems. An enigma is, according to Venice, a smart software system:
The word “enigma” derives from the Greek “ainigma” (meaning “riddle” or “dark saying”), which itself stems from the verb “aigin” (“to speak darkly” or “to speak in riddles”). It entered Latin as “aenigma”, then evolved into Old French as “énigme” before being adopted into English in the 16th century. The term originally referred to a cryptic or allegorical statement requiring interpretation, later broadening to describe any mysterious, puzzling, or inexplicable person or thing. A notable modern example is the Enigma machine, a cipher device used in World War II, named for its perceived impenetrability. The shift from “riddle” to “mystery” reflects its linguistic journey through metaphorical extension.
Okay, let’s work through this definition.
- Troilus and Cressida or Satya and Sam. We have a tortured relationship. A bit of a war among the AI leaders, and a bit of the collapse of moral certainty. The play seems to be going nowhere. Okay, that fits.
- Security woes. Yep, the cipher device in World War II. Its security or lack of it contributed to a number of unpleasant outcomes for a certain nation state associated with beer and Rome’s failure to subjugate some folks.
- Manure. This seems to be a metaphorical extension. Paying “green” or money for excrement is a remarkable image. Enough said.
- Fuel switches and the subsequent crash, explosion, and death of some hapless PowerPoint users. This lines up with “puzzling.” How did those Word paragraphs just flip around? I didn’t do it. Does anyone know why? Of course not.
- Layoffs. Ah, an allegorical statement. Find your future elsewhere. There is a demand for life coaches, LinkedIn profile consultants, and lawn service workers.
Microsoft is indeed speaking darkly. The billions burned in the AI push have clouded the atmosphere in Softie Land. When the smoke clears, what will remain? My thought is that the items a to e mentioned above are going to leave some obvious environmental alterations. Yep, dark saying because knee jerk reactions are good enough.
Stephen E Arnold, July 29, 2025
Silicon Valley: The New Home of Unsportsmanlike Conduct
July 26, 2025
Sorry, no smart software involved. A dinobaby’s own emergent thoughts.
I read the Axios run down of Mark Zuckerberg’s hiring blitz. “Mark Zuckerberg Details Meta’s Superintelligence Plans” reports:
The company [Mark Zuckerberg’s very own Meta] is spending billions of dollars to hire key employees as it looks to jumpstart its effort and compete with Google, OpenAI and others.
Meta (formerly the estimable juicy brand Facebook) had some smart software people. (Does anyone remember Jerome Pesenti?) Then there was Llama, which, like the guanaco tamed and used to carry tourists to Peruvian sights, has been seen as a photo opp for parents wanting to document their kids’ visit to Cusco.
Is Mr. Zuckerberg creating a mini Bell Labs in order to take the lead in smart software? The Axios write up contains some names of people who may have some connection to the Middle Kingdom. The idea is to get smart people, put them in a two-story building in Silicon Valley, turn up the A/C, and inject snacks.
I interpret the hiring and the allegedly massive pay packets to a simpler, more direct idea: Move fast, break things.
What are the things Mr. Zuckerberg is breaking?
First, I worked in Silicon Valley (aka Plastic Fantastic) for a number of years. I lived in Berkeley and loved that commute to San Mateo, Foster City, and environs. Poaching employees was done in a more relaxed way. A chat at a conference, a small gathering after a softball game at the public fields not far from Stanford (yes, the school which had a president who made up information), or at some event like a talk at the Computer Museum or whatever it was called. That’s history. Mr. Zuckerberg shows up (virtually or in a T shirt), offers an alleged $100 million, and hires a big name. No muss. No fuss. No social conventions. Just money. Cash. (I almost wish I were 25 and working in Mountain View. Sigh.)
Second, Mr. Zuckerberg is targeting the sensitive private parts of big leadership people. No dancing. Just targeted castration of key talent. Ouch. The Axios write up provides the names of some of these individuals. What’s interesting is that these people come from the knowledge parts hidden from the journalistic spotlight. Those suffering life changing removals without anesthesia include Google, OpenAI, and similar firms. In the good old days, Silicon Valley firms competed with less of that Manhattan, Lower East Side vibe. No more.
Third, Mr. Zuckerberg is not announcing anything at conferences or with friendly emails. He is just taking action. Let the people at Apple, Safe Superintelligence, and similar outfits read the news in a resignation email. Mr. Zuckerberg knows that those NDAs and employment contracts can be used to wipe away tears when the loss of a valuable person is discovered.
What’s up?
Obviously Mr. Zuckerberg is not happy that his outfit is perceived as a loser in the AI game. Will this Bell Labs’ West approach work? Probably not. It will deliver one thing, however. Mr. Zuckerberg is sending a message that he will spend money to cripple, hobble, and derail AI innovation at firms beating his former LLM to death.
Move fast and break things has come to the folks who used the approach to take out swaths of established businesses. Now the technique is being used on companies next door. Welcome to the ungentrified neighborhood. Oh, expect more fist fights at those once friendly, co-ed softball games.
Stephen E Arnold, July 26, 2025
Will Apple Do AI in China? Subsidies, Investment, Saluting Too
July 25, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
Apple long ago vowed to use the latest tech to design its hardware. Now that means generative AI. Asia Financial reports, “Apple Keen to Use AI to Design Its Chips, Tech Executive Says.” That tidbit comes from a speech Apple VP Johny Srouji made as he accepted an award from tech R&D group Imec. We learn:
“In the speech, a recording of which was reviewed by Reuters, Srouji outlined Apple’s development of custom chips from the first A4 chip in an iPhone in 2010 to the most recent chips that power Mac desktop computers and the Vision Pro headset. He said one of the key lessons Apple learned was that it needed to use the most cutting-edge tools available to design its chips, including the latest chip design software from electronic design automation (EDA) firms. The two biggest players in that industry – Cadence Design Systems and Synopsys – have been racing to add artificial intelligence to their offerings. ‘EDA companies are super critical in supporting our chip design complexities,’ Srouji said in his remarks. ‘Generative AI techniques have a high potential in getting more design work in less time, and it can be a huge productivity boost.’”
Srouji also noted Apple is one to commit to its choices. The post notes:
“Srouji said another key lesson Apple learned in designing its own chips was to make big bets and not look back. When Apple transitioned its Mac computers – its oldest active product line – from Intel chips to its own chips in 2020, it made no contingency plans in case the switch did not work.”
Yes, that gamble paid off for the polished tech giant. Will this bet be equally advantageous?
Has Apple read “Apple in China”?
Cynthia Murrell, July 25, 2025
Lawyers Do What Lawyers Do: Revenues, AI, and Talk
July 22, 2025
A legal news service owned by LexisNexis now requires every article be auto-checked for appropriateness. So what’s appropriate? Beyond Search does not know. However, here’s a clue. Harvard’s Nieman Lab reports, “Law360 Mandates Reporters Use AI Bias Detection on All Stories.” LexisNexis mandated the policy in May 2025. One of the LexisNexis professionals allegedly asserted that bias surfaced in reporting about the US government. The headline cited by VP Teresa Harmon read: “DOGE officials arrive at SEC with unclear agenda.” Um, okay.
Journalist Andrew Deck shares examples of wording the “bias” detection tool flagged in an article. The piece was a breaking story on a federal judge’s June 12 ruling against the administration’s deployment of the National Guard in LA. We learn:
“Several sentences in the story were flagged as biased, including this one: ‘It’s the first time in 60 years that a president has mobilized a state’s National Guard without receiving a request to do so from the state’s governor.’ According to the bias indicator, this sentence is ‘framing the action as unprecedented in a way that might subtly critique the administration.’ It was best to give more context to ‘balance the tone.’ Another line was flagged for suggesting Judge Charles Breyer had ‘pushed back’ against the federal government in his ruling, an opinion which had called the president’s deployment of the National Guard the act of ‘a monarchist.’ Rather than ‘pushed back,’ the bias indicator suggested a milder word, like ‘disagreed.’”
Having it sound as though anyone challenges the administration is obviously a bridge too far. How dare they? Deck continues:
“Often the bias indicator suggests softening critical statements and tries to flatten language that describes real world conflict or debates. One of the most common problems is a failure to differentiate between quotes and straight news copy. It frequently flags statements from experts as biased and treats quotes as evidence of partiality. For a June 5 story covering the recent Supreme Court ruling on a workplace discrimination lawsuit, the bias indicator flagged a sentence describing experts who said the ruling came ‘at a key time in U.S. employment law.’ The problem was that this copy, ‘may suggest a perspective.’”
Some Law360 journalists are not happy with their “owners.” Law360’s reporters and editors may not be on the same wave length as certain LexisNexis / Reed Elsevier executives. In June 2025, unit chair Hailey Konnath sent a petition to management calling for use of the software to be made voluntary. At this time, Beyond Search thinks that “voluntary” has a different meaning in leadership’s lexicon.
Another assertion is that the software mandate appeared without clear guidelines. Was there a dash of surveillance and possible disciplinary action? To add zest to this publishing stew, the Law360 Union is negotiating with management to adopt clearer guidelines around the requirement.
What’s the software engine? Allegedly LexisNexis built the tool with OpenAI’s GPT 4.0 model. Deck notes LexisNexis is just one of many publishers now outsourcing questions of bias to smart software. (Smart software has been known for its own peculiarities, including hallucination or making stuff up.) For example, in March 2025, the LA Times launched a feature dubbed “Insights” that auto-assesses opinion stories’ political slants and spits out AI-generated counterpoints. What could go wrong? Who knew that the KKK had an upside?
What happens when a large publisher gives Grok a whirl? What if a journalist uses these tools and does not catch a “glue cheese on pizza moment”? Senior managers trained in accounting, MBA get-it-done recipes, and (dare I say it) law may struggle to reconcile cost, profit, fear, and smart software.
But what about facts?
Cynthia Murrell, July 22, 2025
Why Customer Trust of Chatbot Does Not Matter
July 22, 2025
Just a dinobaby working the old-fashioned way, no smart software.
The need for a winner is pile driving AI into consumer online interactions. But like the piles under the San Francisco Leaning Tower of Insurance Claims, the piles cannot stop the sag, the lean, and the sight of a giant edifice tilting.
I read an article in the “real” news service called Fox News. The story’s title is “Chatbots Are Losing Customer Trust Fast.” The write up is the work of the CyberGuy, so you know it is on the money. The write up states:
While companies are excited about the speed and efficiency of chatbots, many customers are not. A recent survey found that 71% of people would rather speak with a human agent. Even more concerning, 60% said chatbots often do not understand their issue. This is not just about getting the wrong answer. It comes down to trust. Most people are still unsure about artificial intelligence, especially when their time or money is on the line.
So what? Customers are essentially irrelevant. As long as the outfit hits its real or imaginary revenue goals, the needs of the customer are not germane. If you don’t believe me, navigate to a big online service like Amazon and try to find the phone number for customer service. Let me know how that works out.
Because managers cannot “fix” human centric systems, using AI is a way out. Let AI do it is a heck of lot easier than figuring out a work flow, working with humans, and responding to customer issues. The old excuse was that middle management was not needed when decisions were pushed down to the “workers.”
AI flips that. Managerial ranks have been reduced. AI decisions come from “leadership” or what I call carpetland. AI solves problems: Actually managing, cost reduction, and having good news for investor communications.
The customers don’t want to talk to software. The customer wants to talk to a human who can change a reservation without automatically billing for a service charge. The customer wants a person to adjust a double billing for a hotel doing business as Snap Commerce Holdings. The customer wants a fair shake.
AI does not do fair. AI does baloney, confusion, errors, and hallucinations. I tried a new service which put Google Gemini front and center. I asked one question and got an incomplete and erroneous answer. That’s AI today.
The CyberGuy’s article says:
If a company is investing in a chatbot system, it should track how well that system performs. Businesses should ask chatbot vendors to provide real-world data showing how their bots compare to human agents in terms of efficiency, accuracy and customer satisfaction. If the technology cannot meet a high standard, it may not be worth the investment.
This is simply not going to happen. Deployment equals cost savings. Only when the money goes away will someone in leadership take action. Why? AI has put many outfits in a precarious position. Big money has been spent. Much of that money comes from other people. Those “other people” want profits, not excuses.
I heard a sci-fi rumor that suggests Apple can buy OpenAI and catch up. Apple can pay OpenAI’s investors and make good on whatever promissory payments have been offered by that firm’s leadership. Will that solve the problem?
Nope. The AI firms talk about customers but don’t care. Dealing with customers abused by intentionally shady business practices cooked up by a committee that has to do something is too hard and too costly. Let AI do it.
If the CyberGuy’s write up is correct, some excitement is speeding down the information highway toward some well known smart software companies. A crash at one of the big boys’ junctions will cause quite a bit of collateral damage.
Whom do you trust? Humans or smart software?
Stephen E Arnold, July 22, 2025
What Did You Tay, Bob? Clippy Did What!
July 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
I was delighted to read “OpenAI Is Eating Microsoft’s Lunch.” I don’t care who or what wins the great AI war. So many dollars have been bet that hallucinating software is the next big thing. Most content flowing through my dinobaby information system is political. I think this food story is a refreshing change.
So what’s for lunch? The write up seems to suggest that Sam AI-Man has not only snagged a morsel from the Softies’ lunch pail but Sam AI-Man might be prepared to snap at those delicate lady fingers too. The write up says:
ChatGPT has managed to rack up about 10 times the downloads that Microsoft’s Copilot has received.
Are these data rock solid? Probably not, but the idea that two “partners” who forced Googzilla to spasm each time its Code Red lights flashed are not cooperating is fascinating. The write up points out that when Microsoft and OpenAI were deeply in love, Microsoft had the jump on the smart software contenders. The article adds:
Despite that [early lead], Copilot sits in fourth place when it comes to total installations. It trails not only ChatGPT, but Gemini and Deepseek.
Shades of Windows phone. Another next big thing muffed by the bunnies in Redmond. How could an innovation power house like Microsoft fail in the flaming maelstrom of burning cash that is AI? Microsoft’s long history of innovation adds a turbo boost to its AI initiatives. The Bob, Clippy, and Tay inspired Copilot is available to billions of Microsoft Windows users. It is … everywhere.
The write up explains the problem this way:
Copilot’s lagging popularity is a result of mismanagement on the part of Microsoft.
This is an amazing insight, isn’t it? Here’s the stunning wrap up to the article:
It seems no matter what, Microsoft just cannot make people love its products. Perhaps it could try making better ones and see how that goes.
To be blunt, the problem at Microsoft is evident in many organizations. For example, we could ask IBM Watson what Microsoft should do. We could fire up Deepseek and get some China-inspired insight. We could do a Google search. No, scratch that. We could do a Yandex.ru search and ask, “Microsoft AI strategy repair.”
I have a more obvious dinobaby suggestion, “Make Microsoft smaller.” And play well with others. Silly ideas I know.
Stephen E Arnold, July 21, 2025
Xooglers Reveal Googley Dreams with Nightmares
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Fortune Magazine published a business school analysis of a Googley dream and its nightmares titled “As Trump Pushes Apple to Make iPhones in the U.S., Google’s Brief Effort Building Smartphones in Texas 12 years Ago Offers Critical Lessons.” The author, Mr. Kopytoff, states:
Equivalent in size to nearly eight football fields, the plant began producing the Google Motorola phones in the summer of 2013.
Mr. Kopytoff notes:
Just a year later, it was all over. Google sold the Motorola phone business and pulled the plug on the U.S. manufacturing effort. It was the last time a major company tried to produce a U.S. made smartphone.
Yep, those Googlers know how to do moon shots. They also produce some digital rocket ships that explode on the launch pads, never achieving orbit.
What happened? You will have to read the pork loin write up, but the Fortune editors did include a summary of the main point:
Many of the former Google insiders described starting the effort with high hopes but quickly realized that some of the assumptions they went in with were flawed and that, for all the focus on manufacturing, sales simply weren’t strong enough to meet the company’s ambitious goals laid out by leadership.
My translation of Fortune-speak is: “Google was really smart. Therefore, the company could do anything. Then, when the genius leadership gets the bill, a knee jerk reaction kills the project, and everyone moves on as if nothing happened.”
Here’s a passage I found interesting:
One of the company’s big assumptions about the phone had turned out to be wrong. After betting big on U.S. assembly, and waving the red, white, and blue in its marketing, the company realized that most consumers didn’t care where the phone was made.
Is this statement applicable to people today? It seems that I hear more about costs than I did last year. At a 4th of July hoe down, I heard:
- “The prices at Kroger go up each week.”
- “I wanted to trade in my BMW but the prices were crazy. I will keep my car.”
- “I go to the Dollar Store once a week now.”
What’s this got to do with the Fortune tale of Google wizards’ leadership goof and Apple (if it actually tries to build an iPhone in Cleveland)?
Answer: Costs and expertise. Thinking one is smart and clever is not enough. One has to do more than spend big money, talk in a supercilious manner, and go silent when the crazy “moon shot” explodes before reaching orbit.
But the real moral of the story is that it is political. That may be more problematic than the Google fail and Apple’s bitter cider. It may be time to harvest the fruit of tech leaderships’ decisions.
Stephen E Arnold, July 18, 2025
Swallow Your AI Pill or Else
July 18, 2025
Just a dinobaby without smart software. I am sufficiently dull without help from smart software.
Annoyed at the next big thing? I find it amusing, but a fellow with the alias of “Honest Broker” (is that an oxymoron?) sure seems to be upset with smart software. Let me make clear my personal view of smart software; specifically, the outputs and the applications are a blend of the stupid, the semi useful, and the dangerous. My team and I have access to smart software, some running locally on one of my work stations, and some running in the “it isn’t cheap is it” cloud.
The write up is titled “The Force-Feeding of AI on an Unwilling Public: This Isn’t Innovation. It’s Tyranny.” The author, it seems, is bristling at how 21st century capitalism works. News flash: It doesn’t work for anyone except the stakeholders. When the stakeholders are employees and the big outfit fires some stakeholders, awareness dawns. Work for a giant outfit and get to the top of the executive pile. Alternatively, become an expert in smart software and earn lots of money, not a crappy car like we used to give certain high performers. This is cash, folks.
The argument in the polemic is that outfits like Amazon, Google, and Microsoft, et al, are forcing their customers to interact with systems infused with “artificial intelligence.” Here’s what the write up says:
“The AI business model would collapse overnight if they needed consumer opt-in. Just pass that law, and see how quickly the bots disappear.”
My hunch is that the smart software companies lobbied to get the US government to slow walk regulation of smart software. Not long ago, wizards circulated a petition which suggested a moratorium on certain types of smart software development. Those who advocate peace don’t want smart software in weapons. (News flash: Check out how Ukraine is using smart software to terminate with extreme prejudice individual Z troops in a latrine. Yep, smart software and a bit of image recognition.)
Let me offer several observations:
- For most people technology is getting money from an automatic teller machine and using a mobile phone. Smart software is just sci-fi magic. Full stop.
- The companies investing big money in smart software have to make it “work” well enough to recover their investment and (hopefully) railroad freight cars filled with cash or big crypto transfers. To make something work, deception will be required. Full stop.
- The products and services infused with smart software will accelerate the degradation of software. Today’s smart software is a recycler. Feed it garbage; it outputs garbage. Maybe a phase change innovation will take place. So far, we have more examples of modest success or outright disappointment. From my point of view, core software is not made better with black box smart software. Someday, but today is not the day.
I like the zestiness of the cited write up. Here’s another news flash: The big outfits pumping billions into smart software are relentless. If laws worked, the EU and other governments would not be taking these companies to court with remarkable regularity. Laws don’t seem to work when US technology companies are “innovating.”
Have you ever wondered if the film Terminator was sent to the present day by aliens? Forget the pyramid stuff. Terminator is a film used by an advanced intelligence to warn us humanoids about the dangers of smart software.
The author of the screed about smart software has accomplished one thing. If smart software turns on humanoids, I can identify a person who will be on a list for in-depth questioning.
I love smart software. I think the developers need some recognition for their good work. I believe the “leadership” of the big outfits investing billions are doing it for the good of humanity.
I also have a bridge in Brooklyn for sale… cheap. Oh, I would suggest that the analogy is similar to the medical device by which liquid is introduced into the user’s system, typically to stimulate evacuation of the wallet.
Stephen E Arnold, July 18, 2025