Why Stuff No Longer Works Very Well
December 28, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Own a Tesla? What about those Southwest flight delays? Been to a hospital emergency room in DC? Tried to get a plumber on a holiday? Yep, systems work … sometimes, sort of, or mostly. Have you ever wondered why teens working at a fruit market cannot make change, recognize a fifty-cent piece, or know zero about when the grapes were put on display?
I think I have found the answer to these and other questions about modern life. Navigate to “Become an Expert in Less Than an Hour.” The write up is a how-to for becoming superficially smart. Now, don’t get me wrong, superficiality is an important characteristic. People decide whether a person is okay or not in seconds, maybe less. Impressing a person to whom one is selling a used car relies on that instant charm feature of some people. The skill of superficial smartness is important to anyone who wants to pick up a person of interest in a bar, to a consultant at a blue chip firm, to a lawyer explaining his fees to a trust customer, and to political advisors who shift from art history to geopolitics over lunch.
The write up reduces superficial intelligence to a cookbook, and I think quite a few people will find the ideas in the essay of considerable value. Here’s an example:
“anthropologists frequently have to learn how to grok an entire subfield in under an hour. Yes, real expertise takes years of hard work, but identifying the key works and ideas that define a subfield can be done quickly if you know where to look.”
Perfect.
Stephen E Arnold, December 28, 2023
Quantum Management: The Google Method
December 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read a story (possibly sad or at least bittersweet) in Inc. Magazine. “Google Fired 12,000 Employees. A Year Later, the CEO Says It Was the Right Call, Just Done in the Wrong Way” raises an interesting question about a company which has triggered a number of employee-related actions. From protests to stochastic parrots, the Google struggles to tailor its management methods to the people it hires.
What happens when high school science club engineering is applied to modern tasks? Some projects fall down. Hello, San Francisco, do you have a problem with a certain big building? Thanks, MSFT Copilot. Good enough.
The story reports:
A few days ago, Google’s CEO Sundar Pichai openly acknowledged that the way Google managed the layoff of 12,000 employees, about 6 percent of its workforce, was not done right…. Initially, Google’s stance on the layoffs was presented as a strategic necessity, a move to streamline operations and focus on crucial business areas…. Pichai’s frank admission that the process could have been handled differently is a notable shift from the company’s earlier justifications.
What I think this means is that Google’s esteemed leader made a somewhat typical decision for a person imbued with some of the philosophy of a non-Western culture. In 2023, Google lurched from Red Alert to Red Alert. In January 2023, Microsoft seized the marketing initiative in the lucrative world of enterprise artificial intelligence. And what about some of Google’s AI demonstrations? Yeah, some were edited and tweaked to be more Googley. Then, after a couple of high-profile legal cases went against the company, Sundar Pichai allegedly admitted that he made some errors.
No kidding. Like the architects and engineers of the Florida high rise which collapsed to ruin the day of a number of people, mistakes were made. I suppose San Francisco’s Millennium Tower could topple over the holidays. That event would pull some eyeballs off the online advertising company.
The sad reality is that Google’s senior management is pushing buttons and getting poor results. The Inc. Magazine article ends this way:
The key questions moving forward are: Will Google face any repercussions for the way it handled the layoffs? What concrete actions will the company take to improve communication and support for its employees, both those who were let go and those who remain? And, importantly, how will this experience shape Google’s, and potentially other companies’, approach to workforce management in the future?
Questions, just not the right one. In my opinion, Google’s Board of Directors may want to ask:
Is it time to bid adieu to Sundar Pichai and his expensive hires? With the current team in place, Google’s core business model at risk from ChatGPT-type findability services, legal eagles hovering over the company, and now a public admission that firing 12,000 wizards by email was a mistake, I ask, “What’s next, Sundar?”
Net net: The company’s management method (which reminds me of how my high school science club solved problems) is showing signs of cracking and crumbling in my opinion.
Stephen E Arnold, December 27, 2023
AI Risk: Are We Watching Where We Are Going?
December 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
To brighten your New Year, navigate to “Why We Need to Fear the Risk of AI Model Collapse.” I love those words: Fear, risk, and collapse. I noted this passage in the write up:
When an AI lives off a diet of AI-flavored content, the quality and diversity is likely to decrease over time.
I think marrying one’s first cousin or training an AI model on AI-generated content is a bad idea. I don’t really know, but I find the idea interesting. The write up continues:
Is this model at risk of encountering a problem? Looks like it to me. Thanks, MSFT Copilot. Good enough. Falling off the I beam was a non-starter, so we have a more tame cartoon.
Model collapse happens when generative AI becomes unstable, wholly unreliable or simply ceases to function. This occurs when generative models are trained on AI-generated content – or “synthetic data” – instead of human-generated data. As time goes on, “models begin to lose information about the less common but still important aspects of the data, producing less diverse outputs.”
I think this passage echoes some of my team’s thoughts about the SAIL Snorkel method. Googzilla needs a snorkel when it does data dives in some situations. The company often deletes data until a legal proceeding reveals what’s under the company’s expensive, smooth, sleek, true blue, gold-trimmed kimonos.
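The collapse mechanism is easy to demonstrate with a toy example. Below is a minimal sketch (my own illustration, not from the write up) in which a “model” is nothing more than an empirical token-frequency table, and each generation trains on samples drawn from the previous generation’s output. Rare tokens that draw zero samples vanish permanently, so diversity can only shrink:

```python
import numpy as np

# Toy model-collapse simulation. Assumption: the "model" is just an
# empirical token-frequency table; real generative models are far more
# complex, but the tail-loss dynamic is similar in spirit.
rng = np.random.default_rng(0)

vocab = 1000
probs = 1.0 / np.arange(1, vocab + 1)  # long-tailed "human" distribution
probs /= probs.sum()

for generation in range(10):
    sample = rng.choice(vocab, size=5000, p=probs)  # synthetic training set
    counts = np.bincount(sample, minlength=vocab)
    probs = counts / counts.sum()                   # refit on synthetic data
    survivors = np.count_nonzero(probs)
    print(f"generation {generation}: {survivors} of {vocab} tokens survive")
```

Each pass loses another slice of the long tail, which matches the quoted point that models “lose information about the less common but still important aspects of the data.”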
The write up continues:
There have already been discussions and research on perceived problems with ChatGPT, particularly how its ability to write code may be getting worse rather than better. This could be down to the fact that the AI is trained on data from sources such as Stack Overflow, and users have been contributing to the programming forum using answers sourced in ChatGPT. Stack Overflow has now banned using generative AIs in questions and answers on its site.
The essay explains a couple of ways to remediate the problem. (I like fairy tales.) The first is to use data that comes from “reliable sources.” What’s the definition of reliable? Yeah, problem. Second, the smart software companies have to reveal what data were used to train a model. Yeah, techno feudalists totally embrace transparency. And, third, “ablate” or “remove” “particular data” from a model. Yeah, who defines “bad” or “particular” data? How about the techno feudalists, their contractors, or their former employees?
For now, let’s just use our mobile phone to access MSFT Copilot and fix our attention on the screen. What’s to worry about? The person in the cartoon put the humanoid form in the apparently risky and possibly dumb position. What could go wrong?
Stephen E Arnold, December 27, 2023
Google Gobbles Apple Alums
December 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Technology companies are notorious for poaching employees from one another. Stealing employees is so common that business experts have studied it for years. One of the more recent studies concentrates on the destination of ex-Apple associates as told by PC Magazine: “Apple Employees Leave For Google More Than Any Other Company.”
Switch on Business investigated LinkedIn data to determine which tech giants poach the industry’s best talent. All of the big names were surveyed: Uber, Intel, Adobe, Salesforce, Nvidia, Netflix, Oracle, Tesla, IBM, Microsoft, Meta, Apple, Amazon, and Google. The study mainly focused on employees working at the aforementioned names and whether they switched to another listed company.
Meta had the highest proportion of any of the tech giants, with 26.51% of employees having worked at a rival. Google had the most poached talent by volume at 24.15%. IBM took the fewest employees at 2.28%. Apple took 5.7% of its competitors’ talent, and that figure comes with some drama: Apple used to purchase Intel chips for its products, then decided to build its own chips and hired 2,000 people away from Intel.
The most interesting factoids are the patterns found in employee movements:
“Potentially surprising is the fact that Apple employees are twice as likely to make the move to Google from Apple than the next biggest post-Apple destination, Amazon. After Amazon, Apple employees make the move to Meta, followed by Microsoft, Tesla, Nvidia, Salesforce, Adobe, Intel, and Oracle.
As for where Apple employees come from, new Apple employees are most likely to enter the company from Intel, followed by Microsoft, Amazon, Google, IBM, Oracle, Tesla, Nvidia, Adobe, and Meta.
While Apple employees are most often headed to Google, Google employees are most often headed to Meta, Microsoft, and Amazon, with Apple only making it to fourth on the list.”
It sounds like a hiring game of ring-around-the-rosy. Unless the employees retire, they’ll eventually make it back to their first company.
Whitney Grace, December 27, 2023
AI and the Obvious: Hire Us and Pay Us to Tell You Not to Worry
December 26, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “Accenture Chief Says Most Companies Not Ready for AI Rollout.” The paywalled write up is an opinion from one of Captain Obvious’ closest advisors. The CEO of Accenture (a general purpose business expertise outfit) reveals some gems about artificial intelligence. Here are three which caught my attention.
#1 — “Sweet said executives were being “prudent” in rolling out the technology, amid concerns over how to protect proprietary information and customer data and questions about the accuracy of outputs from generative AI models.”
The secret to AI consulting success: Cost, fear of failure, and uncertainty or CFU. Thanks, MSFT Copilot. Good enough.
Arnold comment: Yes, caution is good because selling caution consulting generates juicy revenues. Implementing something that crashes and burns is a generally bad idea.
#2 — “Sweet said this corporate prudence should assuage fears that the development of AI is running ahead of human abilities to control it…”
Arnold comment: The threat, in my opinion, comes from a handful of large technology outfits and from the legions of smaller firms working overtime to apply AI to anything that strikes the fancy of the entrepreneurs. These outfits think about sizzle first, consequences maybe later. Much later.
#3 — “There are no clients saying to me that they want to spend less on tech,” she said. “Most CEOs today would spend more if they could. The macro is a serious challenge. There are not a lot of green shoots around the world. CEOs are not saying 2024 is going to look great. And so that’s going to continue to be a drag on the pace of spending.”
Arnold comment: Great opportunity to sell studies, advice, and recommendations when customers are “not saying 2024 is going to look great.” Hey, what’s “not going to look great” mean?
The obvious is — obvious.
Stephen E Arnold, December 26, 2023
AI Is Here to Help Blue Chip Consulting Firms: Consultants, Tighten Your Seat Belts
December 26, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “Deloitte Is Looking at AI to Help Avoid Mass Layoffs in Future.” The write up explains that blue chip consulting firms (“the giants of the consulting world”) have been allowing many Type A’s to find their future elsewhere. (That’s consulting speak for “You are surplus,” “You are not suited for another team,” or “Hasta la vista.”) The message Deloitte is sending strikes me as, “We are leaders in using AI to improve the efficiency of our business. You (potential customers) can hire us to implement AI strategies and tactics to deliver the same turbo boost to your firm.” Deloitte is not the only “giant” moving to use AI to improve “efficiency.” The big folks and the mid-tier players are too. But let’s look at the Deloitte premise in what I see as a PR piece.
Hey, MSFT Copilot. Good enough. Your colleagues do have experience with blue-chip consulting firms which obviously assisted you.
The news story explains that Deloitte wants to use AI to help figure out who can be billed at startling hourly fees for people whose pegs don’t fit into the available round holes. But the real point of the story is that the “giants” are looking at smart software to boost productivity and margins. How? My answer is that management consulting firms are “experts” in management. Therefore, if smart software can make management better, faster, and cheaper, the “giants” have to use best practices.
And what’s a best practice in the context of the “giants” and the “avoid mass layoffs” angle? My answer is, “Money.”
The big dollar items for the “giants” are people and their associated costs, travel, and administrative tasks. Smart software can replace some people. That’s a no-brainer. Dump some of the Type A’s who don’t sell big dollar work, winnow those who are not wedded to the “giant” firm, and move the administrivia to orchestrated processes with smart software watching and deciding 24×7.
Imagine the “giants” repackaging these “learnings” and then selling the how-to information and payoff data to less informed outfits. Once that is firmly in mind, the money for the senior partners who are not on the “hasta la vista” list goes up. The “giants” are not altruistic. The firms are built from the ground up to generate cash, leverage connections, and provide services to CEOs with imposter syndrome and other issues.
My reaction to the story is:
- Yep, marketing. Some will do the Harvard Business Review journey; others will pump out white papers; many will give talks to “preferred” contacts; and others will just imitate what’s working for the “giants.”
- Deloitte is redefining what expertise it will require to get hired by a “giant” like the accounting/consulting outfit.
- The senior partners involved in this push are planning what to do with their bonuses.
Are the other “giants” on the same path? Yep. Imagine: smart software enabled “giants” making decisions for the organizations able to pay for advice, insight, and the warm embrace of AI-enabled humanoids. What’s the probability of success? Close enough for horseshoes, and even bigger money for some blue chip professionals. Did Deloitte over hire during the pandemic?
Of course not; the tactic was part of the firm’s plan to put AI to a real world test. Sounds good. I cannot wait until the case studies become available.
Stephen E Arnold, December 26, 2023
Amazon and the US Government: Doing Just Fine, Thanks
December 26, 2023
This essay is the work of a dumb dinobaby. No smart software required.
OSHA was established to protect workers from unsafe conditions. Big technology barons like Jeff Bezos at Amazon don’t give a rat’s hind quarters about employee safety. They might project an image of caring and kindness, but that’s from Amazon’s PR department. Amazon is charged with innumerable workplace violations, ranging from micromanagement to poor compensation. The Washington Post details one of Amazon’s latest scandals, “A 20-Year-Old Amazon Employee Died At Work. Indiana Issued A $7000 Fine.”
Twenty-year-old Caes Gruesbeck was clearing a blockage on an overhead conveyor belt at the Amazon distribution center in Fort Wayne, Indiana. He needed to use an elevated lift to reach the blockage. His head collided with the conveyor and became trapped. Gruesbeck later died from blunt force trauma.
Indiana safety officials investigated for eleven weeks and found that Amazon failed to ensure a safe work environment. Amazon was cited and fined only $7,000. Amazon employees continue to be injured, and the country’s second largest private employer is constantly scrutinized, but state and federal safety regulators are failing to enforce policies. They are failing because Amazon is a powerful corporation with a hefty legal department.
“‘Seven thousand dollars for the death of a 20-year-old? What’s that going to do to Amazon?’ said Stephen Wagner, an Indiana attorney who has advocated for more worker-friendly laws in the state. ‘There’s no real financial incentive for an employer like Amazon to change their working environment to make it more safe.’”
Federal and state governments are trying to make Amazon take responsibility through the current system, but it’s slow. Safety regulators can’t inspect every Amazon complaint and building. They are instead working toward a sweeping, company-wide approach like the Family Dollar and Dollar Tree investigations into blocked fire exits. That effort took six years, resulting in $15 million in fines and a $1.35 million settlement.
Once companies are hit with large fines, it changes how they do business. Amazon probably will be brought to justice, but it will take a long time.
Whitney Grace, December 26, 2023
Quantum Supremacy in Management: A Google Incident
December 25, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I spotted an interesting story about an online advertising company which has figured out how to get great PR in respected journals. But this maneuver is a 100-yard touchdown run for visibility. “Hundreds Gather at Google’s San Francisco Office to Protest $1.2 Billion Contract with Israel” reports:
More than 400 protesters gathered at Google’s San Francisco office on Thursday to demand the tech company cut ties with Israel’s government.
Some managers and techno wizards envy companies which have the knack for attracting crowds and getting free publicity. Thanks, MSFT Copilot. Close enough for horseshoes.
The demonstration, according to the article, was a response to Google and its new BFF’s project for Israel. The SFGate article contains some interesting photographs. One is a pretend dead person wrapped in a shroud with the word “Genocide” in bright, cheerful Google logo colors. I wanted to reproduce it, but I am not interested in having copyright trolls descend on me like a convocation of legal eagles. The “Project Nimbus” — nimbus is a type of cloud which I learned about in the fifth or sixth grade — “provides the country with local data centers and cloud computing services.”
The article contains words which cause OpenAI’s art generators to become uncooperative. That banned word is “genocide.” The news story adds some color to the fact of the protest on December 14, 2023:
Multiple speakers mentioned an article from The Intercept, which reported that Nimbus delivered Israel the technology for “facial detection, automated image categorization, object tracking, and even sentiment analysis.” Others referred to an NPR investigation reporting that Israel says it is using artificial intelligence to identify targets in Gaza, though the news outlet did not link the practice to Google’s technology.
Ah, ha. Cloud services plus useful technologies. (I wonder if the facial recognition system allegedly becoming available to the UK government is included in the deal?) The story added a bit of spice too:
For most of Thursday’s protest, two dozen people lay wrapped in sheets — reading “Genocide” in Google’s signature rainbow lettering — in a “die-in” performance. At the end, they stood to raise up white kites, as a speaker read Refaat Alareer’s “If I must die,” written just over a month before the Palestinian poet was killed by an Israeli airstrike.
The article included a statement from a spokesperson, possibly from Google. This individual said:
“We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education,” she said. “Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”
Does this sound a bit like an annoyed fifth- or sixth-grade teacher interrupted by a student who said out loud, “Clouds are hot air”? While not technically accurate, the remark got the student sent to the principal’s office. What will happen in this situation?
Some organizations know how to capture users’ attention. Will the company be able to monetize it via a YouTube Short or a lengthier video? Google is quite skilled at making videos which purport to show reality as Google wants it to be. The “real” reality may be different. Revenue is important, particularly as regulatory scrutiny remains popular in the EU and the US.
Stephen E Arnold, December 25, 2023
A Grade School Food Fight Could Escalate: Apples Could Become Apple Sauce
December 25, 2023
This essay is the work of a dumb dinobaby. No smart software required.
A squabble is blowing up into a court fight. “Beeper vs Apple Battle Intensifies: Lawmakers Demand DOJ Investigation” reports:
US senators have urged the DOJ to probe Apple’s alleged anti-competitive conduct against Beeper.
Apple killed a messaging service in the name of protecting apple pie, mom, love, truth, justice, and the American way. Ooops, sorry. That’s something from the Superman comix.
“You squashed my apple. You ruined my lunch. You ruined my life. My mommy will call your mommy, and you will be in trouble,” says the older, more mature child. The principal appears and points out that screeching is not comely. Thanks, MSFT Copilot. Close enough for horseshoes.
The article said:
The letter to the DOJ is signed by Minnesota Senator Amy Klobuchar, Utah Senator Mike Lee, Congressman Jerry Nadler, and Congressman Ken Buck. They have urged the law enforcement body to investigate “whether Apple’s potentially anti-competitive conduct against Beeper violates US antitrust laws.” Apple has been constantly trying to block Beeper Mini and Beeper Cloud from accessing iMessage. The two Beeper messaging apps allow Android users to interact with iPhone users through iMessage — an interoperability Apple has been opposed to for a long time now.
As if law enforcement did not have enough to think about. Now an alleged monopolist is engaged in a grade school cafeteria spat with a younger, much smaller entity. By golly, that big outfit is threatened by the jejune, immature, and smaller service.
How will this play out?
- A payday for Beeper when Apple makes the owners of Beeper an offer that would be tough to refuse. Big piles of money can alter one’s desire to fritter away one’s time in court.
- The dust-up spirals upwards. What if the attitude toward Apple’s approach to its competitors becomes a crusade to encourage innovation in a tough environment for small companies? Containment may be difficult.
- The jury decision against Google may kindle more enthusiasm for another probe of Apple and its posture in some tricky political situations; for example, the iPhone in China, the non-repairability issues, and Apple’s mesh of inter-connected services which may be seen as digital barriers to user choice.
In 2024, Apple may find that some government agencies are interested in the fruit growing on the company’s many trees.
Stephen E Arnold, December 25, 2023
An Important, Easily Pooh-Poohed Insight
December 24, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Dinobaby here. I am on the regular highway, not the information highway. Nevertheless I want to highlight what I call an “easily pooh-poohed” factoid. The source of the item this morning is an interview titled “Google Cloud Exec: Enterprise AI Is Game-Changing, But Companies Need to Prepare Their Data.”
I am going to skip the PR baloney, the truisms about Google fumbling the AI ball, and the rah rah about AI changing everything. Let me go straight to the factoid which snagged my attention:
… at the other side of these projects, what we’re seeing is that organizations did not have their data house in order. For one, they had not appropriately connected all the disparate data sources that make up the most effective outputs in a model. Two, so many organizations had not cleansed their data, making certain that their data is as appropriate and high value as possible. And so we’ve heard this forever — garbage in, garbage out. You can have this great AI project that has all the tenets of success and everybody’s really excited. Then, it turns out that the data pipeline isn’t great and that the data isn’t streamlined — all of a sudden your predictions are not as accurate as they could or should have been.
Why are points about data significant?
First, investors, senior executives, developers, and the person standing in line with you at Starbucks dismiss data normalization as a solved problem. Sorry, getting the data boat to float is a work in progress. Few want to come to grips with the issue.
Second, fixing up data is expensive. Did you ever wonder why the Stanford president made up data, forcing his resignation? The answer is that the “cost of fixing up data is too high.” If the president of Stanford can’t do it, is the run-of-the-mill fast-talking AI guru different? Answer: Nope.
Third, knowledge of exception folders and non-conforming data is confined to a small number of people. Most will explain what is needed to make a content intake system work. However, many give up because the cloud of unknowing is unlikely to disperse.
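For what it is worth, the exception-folder idea is simple to sketch in code; what is hard is the human follow-through. Here is a minimal, hypothetical intake check (the column names and conformance rules are invented for illustration): rows that fail basic tests are split off for review instead of flowing silently into a training pipeline:

```python
import pandas as pd

# Hypothetical intake check sketching an "exception folder." Column
# names and rules are invented for illustration only.
def split_conforming(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    ok = (
        df["customer_id"].notna()                              # no missing keys
        & df["amount"].between(0, 1_000_000)                   # invented range rule
        & ~df.duplicated(subset=["customer_id", "timestamp"])  # no duplicates
    )
    return df[ok], df[~ok]  # (clean rows, exception-folder rows)

raw = pd.DataFrame({
    "customer_id": [1, 2, None, 2],
    "timestamp":   ["t1", "t2", "t3", "t2"],
    "amount":      [10.0, -5.0, 30.0, 99.0],
})
clean, exceptions = split_conforming(raw)
print(f"{len(clean)} rows pass; {len(exceptions)} go to the exception folder")
```

The code is the cheap part. Deciding the rules, staffing the review of the exception folder, and keeping both current is the expensive part, which is exactly where, per the Google executive, many organizations stall.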
The bottom line is that many data sets are not what senior executives, marketers, or those who use the data believe they are. The Google comment — despite Google’s sketchy track record in plain honest talk — is mostly correct.
So what?
- Outputs are often less useful than many anticipated. But if the user is uninformed or the downstream system uses whatever is pushed to it, no big deal.
- The thresholds and tweaks needed to make something semi useful are not shared, discussed, or explained. Keep the mushrooms in the dark and feed them manure. What do you get? Mushrooms.
- The graphic outputs are eye candy and distracting. Look here, not over there. Sizzle sells and selling is important.
Net net: Data are a problem. Data have been a problem due to time and cost issues. Data will remain a problem because one can sidestep a problem few recognize, and those who do recognize the pit find a short cut. What’s this mean for AI? Those smart systems will be super. What’s in your AI stocking this year?
Stephen E Arnold, December 24, 2023

