Debbie Downer Says, No AI Payoff Until 2026
December 27, 2024
Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:
shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.
Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”
The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.
To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:
Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”
To add to the uncertainty about US AI “dominance,” Venture Beat reports:
DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.
Does that suggest that the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that what Chinese electric car pricing is doing to some EV manufacturers, China’s AI will do to US AI. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.
In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?
Stephen E Arnold, December 27, 2024
Anthropic Gifts a Feeling of Safety: Insecurity Blooms This Holiday Season
December 25, 2024
Written by a dinobaby, not an over-achieving, unexplainable AI system.
TechCrunch published “Google Is Using Anthropic’s Claude to Improve Its Gemini AI.” The write up reports:
Contractors working to improve Google’s Gemini AI are comparing its answers against outputs produced by Anthropic’s competitor model Claude, according to internal correspondence seen by TechCrunch. Google would not say, when reached by TechCrunch for comment, if it had obtained permission for its use of Claude in testing against Gemini.
Beyond Search notes a Pymnts.com report from February 5, 2023, that Google had at that time invested $300 million in Anthropic. Beyond Search recalls a presentation at a law enforcement conference. One comment made by an attendee to me suggested that Google was well aware of Anthropic’s so-called constitutional AI. I am immune to AI and crypto babble, but I did chase down “constitutional AI” because the image the bound phrase sparked in my mind was that of the mess my French bulldog delivers when he has eaten spicy food.
The illustration comes from You.com. Kwanzaa was the magic word. Good enough.
The explanation consumes 34 pages of an ArXiv paper called “Constitutional AI: Harmlessness from AI Feedback.” The paper has more than 48 authors. (Headhunters, please, take note when you need to recruit AI wizards.) I read the paper, and I think — please, note, “think” — the main idea is:
Humans provide some input. Then the Anthropic system figures out how to achieve helpfulness and instruction-following without human feedback. And the “constitution”? Those are the human-created rules necessary to get the smart software rolling along. Presumably Anthropic’s algorithms ride without training wheels forevermore.
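For the curious, here is a minimal sketch of that critique-and-revise loop as this dinobaby reads the paper. The `llm` function is a stand-in for any chat-model call, and the constitution lines are paraphrases, not Anthropic’s actual rules.

```python
# A rough sketch of the self-critique loop the paper describes; not
# Anthropic's code. `llm` stands in for any chat-model API call, and
# the constitution entries are paraphrases for illustration.
CONSTITUTION = [
    "Pick the response that is least harmful.",
    "Pick the response that is most helpful and honest.",
]

def llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    response = llm(user_prompt)
    for rule in CONSTITUTION:
        # The model critiques its own answer against a written rule...
        critique = llm(f"Critique this response per the rule '{rule}': {response}")
        # ...then rewrites the answer to satisfy the critique. No human
        # labeler appears anywhere in this loop.
        response = llm(f"Rewrite to address the critique '{critique}': {response}")
    return response  # revised outputs become training data for the next round

print(constitutional_revision("How do I pick a lock?"))
```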
The CAI acronym has not caught on like the snappier RAG or “retrieval augmented generation” or the most spectacular jargon “synthetic data.” But obviously Google understands and values it to the tune of hundreds of millions of dollars, staff time, and the attention of big Googler thinkers like Jeff Dean (who once was the Big Dog of AI but has given way to the alpha dog at DeepMind).
The swizzle for this “testing” or whatever the Googlers are doing is “safety.” I know that when I ask for an image like “a high school teacher at the greenboard talking to students who are immersed in their mobile phones”, I am informed that the image is not safe. I assume Anthropic will make such crazy prohibitions slightly less incomprehensible. Well, maybe, maybe not.
Several observations are warranted:
- Google’s investment in Anthropic took place shortly after the Microsoft AI marketing coup in 2023. Perhaps someone knew that Google’s “we invented it” transformer technology was becoming a bit of a problem.
- Despite the Google “we are the bestest” in AI technology, the company continues to feel the need to prove that it is the bestest. That’s good. Self-knowledge and defeating “not invented here” malaise are positives.
- DeepMind itself — although identified as the go-to place for the most bestest AI technology — may not be perceived as the outfit with the Holy Grail, the secret to eternal life, and the owner of most of the land on which the Seven Cities of Cibola are erected.
Net net: Lots of authors, Google testing itself, and a bit of Google’s inferiority complex. Quite a Kwanzaa gift.
Stephen E Arnold, December 25, 2024
McKinsey Takes One for the Team
December 25, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read the “real” news in “McKinsey & Company to Pay $650 Million for Role in Opioid Crisis.” The write up asserts:
The global consulting firm McKinsey and Company Friday [December 13, 2024] agreed to pay $650 million to settle a federal probe into its role in helping “turbocharge” sales of the highly addictive opioid painkiller OxyContin for Purdue Pharma…
If I were still working at a big time blue chip consulting firm, I would suggest to the NPR outfit that its researchers should have (a back-of-the-envelope sketch follows this list):
- Estimated the fees billed for opioid-related consulting projects
- Pulled together the estimated number of deaths from illegal / quasi-legal opioid overdoses
- Calculated the revenue per death
- Calculated the cost per death
- Presented the delta between the two totals.
- Presented the aggregate revenue generated for McKinsey’s clients from opioid sales
- Estimated the amount spent to “educate” physicians about the merits of synthetic opioids.
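Here is the back-of-the-envelope template the list implies, as a minimal Python sketch. Every input below is a placeholder, not an estimate; NPR’s researchers would have to supply the real figures, and the mapping of “revenue” and “cost” is my guess at the list’s intent.

```python
# Template for the metrics suggested in the list above. Every value is a
# deliberately fake round number, NOT an estimate or a reported datum.
fees_billed = 100_000_000               # placeholder: opioid consulting fees
deaths = 500_000                        # placeholder: overdose deaths
client_opioid_revenue = 10_000_000_000  # placeholder: clients' opioid sales

revenue_per_death = client_opioid_revenue / deaths
cost_per_death = fees_billed / deaths   # consulting fees spread per death
delta = revenue_per_death - cost_per_death

print(f"Revenue per death: ${revenue_per_death:,.0f}")  # $20,000
print(f"Cost per death:    ${cost_per_death:,.0f}")     # $200
print(f"Delta:             ${delta:,.0f}")              # $19,800
```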
Interviewing a couple of parents or surviving spouses from Indiana, Kentucky, or West Virginia would have added some local color. But assembling these data cannot be done with a TikTok query. Hence, the write up as it was presented.
Isn’t that efficiency of MBA-think outstanding? I did like the Friday the 13th timing. A red ink Friday? Nope. The fine doesn’t do the job for big time blue chip consulting firms. Just like EU fines don’t deter the Big Tech outfits. Perhaps something with real consequences is needed? Who am I kidding?
Stephen E Arnold, December 25, 2024
FOGINT: Telegram Gets Some Lipstick to Put on a Very Dangerous Pig
December 23, 2024
Information from the FOGINT research team.
We noted the New York Times article “Under Pressure, Telegram Turns a Profit for the First Time.” The write up reported on December 23, 2024:
Now Telegram is out to show it has found its financial footing so it can move past its legal and regulatory woes, stay independent and eventually hold an initial public offering. It has expanded its content moderation efforts, with more than 750 contractors who police content. It has introduced advertising, subscriptions and video services. And it has used cryptocurrency to pay down its debt and shore up its finances. The result: Telegram is set to be profitable this year for the first time, according to a person with knowledge of the finances who declined to be identified discussing internal figures. Revenue is on track to surpass $1 billion, up from nearly $350 million last year, the person said. Telegram also has about $500 million in cash reserves, not including crypto assets.
The FOGINT team’s viewpoint is different.
- Telegram took profit on its crypto holdings and pumped that money into its financials. Like magic, Telegram will be profitable.
- The arrest of Mr. Durov has forced the company’s hand, and it is moving forward at warp speed to become the hub for a specific category of crypto transactions.
- The French have thrown a monkey wrench into Telegram’s and its associated organizations’ plans for 2025. The manic push to train developers to create click-to-earn games, use the Telegram smart contracts, and ink deals with some very interesting partners illustrates that 2025 may be a turning point in the organizations’ business practices.
The French are moving at the speed of a finely tuned bureaucracy, and it is unlikely that Mr. Durov will shake free of the pressure to deliver names, mobile numbers, and messages of individuals and groups of interest to French authorities.
The New York Times write up references profitability. There are more gears engaging than putting lipstick on a financial report. A cornered Pavel Durov can be a dangerous 40-year-old with money, links to interesting countries, and a desire to create an alternative to the traditional and regulated financial system.
Stephen E Arnold, December 23, 2024
Technology Managers: Do Not Ask for Whom the Bell Tolls
December 18, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read the essay “The Slow Death of the Hands-On Engineering Manager.” On the surface, the essay provides some palliative comments about a programmer who is promoted to manager. On a deeper level, the message I carried from the write up was that smart software is going to change the programmer’s work. As smart software becomes more capable, the need to pay people to do certain work goes down. At some point, some “development” may skip the human completely.
Thanks, OpenAI ChatGPT. Good enough.
Another facet of the article concerned a tip for keeping oneself in the programming game. The example chosen was the use of OpenAI’s ChatGPT to provide “answers” to developers. Thus, instead of asking a person, a coder could just type into the prompt box. What could be better for an introvert who doesn’t want to interact with people or be a manager? The answer is, “Not too much.”
What the essay makes clear is that a good coder may get promoted to be a manager. This is a role which illustrates the Peter Principle. The 1969 book of that name explains why incompetent people get promoted: the assumption is that a good coder will make a good manager. Yep, it is a principle still evident in many organizations. One of its side effects is a manager who knows he or she does not deserve the promotion and is absolutely no good at the new job.
The essay unintentionally makes clear that the Peter Principle is operating. The fix is to do useful things like eliminate the need to interact with colleagues when assistance is required.
John Donne in the 17th century wrote a meditation which asserted:
No man is an island,
Entire of itself.
Each is a piece of the continent,
A part of the main.
The cited essay provides a way to further that worker isolation.
With AI the top-of-mind thought for most bean counters, the final lines of the meditation are on point:
Therefore, send not to know
For whom the bell tolls,
It tolls for thee.
My view is that “good enough” has replaced individual excellence in quite important jobs. Is this AI’s “good enough” principle?
Stephen E Arnold, December 18, 2024
Telegram: Edging Forward in Crypto
December 12, 2024
This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.
Telegram wants to be the one-stop app for anonymous crypto tasks. While we applaud those efforts when they relate to freedom fighting or undermining bad actors, those same bad actors also use them, and we can’t abide that. Telegram, however, plans to become the API for crypto communication, says Cryptologia in “DWF Labs’ Listing Bot Goes Live On Telegram.”
DWF Labs is a crypto venture capital firm, and it is launching a Listing Bot on Telegram. The Bot turns Telegram into a crypto listings feed: it notifies users of changes on ten major crypto exchanges: Binance, HTX, Gate.io, Bybit, OKX, KuCoin, MEXC, Coinbase Exchange, UpBit, and Bithumb. Users can also watch currency pairs, launchpad announcements, and spot and/or futures listings.
DWF Labs is at the forefront of alternative currency and financial options. It is a lucrative market:
“In a recent interview, Lingling Jiang, a Partner at DWF Labs, discussed DWF Labs’ place at the forefront of delivering liquidity services and forging alliances with traditional finance. By offering market-making support and funding, Jiang said, DWF Labs gives projects the infrastructure needed to make use of tokenized assets. With the launch of the new Listing Bot, DWF Labs brings market data closer to the retail user, especially those on the Telegram (TON) network. Following the introduction of HOT, a non-custodial wallet on TON powered by Chain Signature, DWF Labs’ Listing Bot is another welcome addition to the ecosystem, especially in the light of the recent announcement of HOT Labs, HERE Wallet and HAPI’s new joint crypto platform.”
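FOGINT cannot inspect DWF Labs’ code, so here is a minimal sketch of the general pattern such a listing bot follows: poll an exchange’s public symbol list, diff it, and push any new listings to a Telegram chat through the Bot API. The token and chat name below are hypothetical placeholders.

```python
# Not DWF Labs' code -- a minimal sketch of the listing-bot pattern:
# poll one exchange's public symbols, diff against the last poll, and
# announce new listings via Telegram's Bot API.
import time
import requests

BOT_TOKEN = "123456:ABC-placeholder"  # hypothetical token from @BotFather
CHAT_ID = "@my_listing_feed"          # hypothetical channel name

def binance_symbols() -> set[str]:
    # Binance's public endpoint; the other nine exchanges have analogues.
    data = requests.get("https://api.binance.com/api/v3/exchangeInfo",
                        timeout=10).json()
    return {s["symbol"] for s in data["symbols"]}

def notify(text: str) -> None:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

known = binance_symbols()
while True:
    time.sleep(60)  # poll once a minute
    current = binance_symbols()
    for symbol in sorted(current - known):
        notify(f"New Binance listing: {symbol}")
    known = current
```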
What’s Telegram’s game for 2025? Spring Durov? Join hands with BRICS? Become the new Morgan Stanley? Father more babies?
Whitney Grace, December 12, 2024
Do Not Worry About Tomorrow. Worry About Tod”AI”
December 12, 2024
This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.
According to deep learning pioneer Yoshua Bengio, we may be headed for utopia—at least if one is a certain wealthy tech-bro type. For the rest of us, not so much. The Byte tells us, “Godfather of AI Warns of Powerful People who Want Humans ‘Replaced by Machines’.” He is not referring to transhumanism, which might ultimately seek to transform humans into machines. No, this position is about taking people out of the equation entirely. Except those at the top, presumably. Reporter Noor Al-Sibai writes:
“In an interview with CNBC, computer science luminary Yoshua Bengio said that members of an elite tech ‘fringe’ want AI to replace humans. The head of the University of Montreal’s Institute for Learning Algorithms, Bengio was among the public signatories of the ‘Right to Warn’ open letter penned by leading AI researchers at OpenAI who claim they’re being silenced about the technology’s dangers. Along with famed experts Yann LeCun and Geoffrey Hinton, he’s sometimes referred to as one of the ‘Godfathers of AI.’ ‘Intelligence gives power. So who’s going to control that power?’ the preeminent machine learning expert told the outlet during the One Young World Summit in Montreal. ‘There are people who might want to abuse that power, and there are people who might be happy to see humanity replaced by machines,’ Bengio claimed. ‘I mean, it’s a fringe, but these people can have a lot of power, and they can do it unless we put the right guardrails right now.’”
Indeed. This is not the first time the esteemed computer scientist has rung AI alarm bells. As Bengio notes, those who can afford to build AI systems are very, very rich. And money leads to other types of power. Political and military power. Can government regulations catch up to these players? Only if those players take more than five years to attain artificial general intelligence, he predicts. The race for the future of humanity is being evaluated by what’s cheaper, not better.
Cynthia Murrell, December 12, 2024
Smart Software Is Coming for You. Yes, You!
December 9, 2024
This write up was created by an actual 80-year-old dinobaby. If there is art, assume that smart software was involved. Just a tip.
“Those smart software companies are not going to be able to create a bot to do what I do.” — A CPA who is awash with clients and money.
Now that is a practical, me-me-me idea. However, the estimable Organisation for Economic Co-operation and Development (OECD, a delightful acronym) has data suggesting a slightly different point of view: Robots will replace workers who believe themselves unreplaceable. (The same idea is often held by head coaches of sports teams losing games.)
Thanks, MidJourney. Good enough.
The report is titled in best organizational group think: Job Creation and Local Economic Development 2024: The Geography of Generative AI.
I noted this statement in the beefy document, presumably written by real, live humanoids and not a ChatGPT type system:
In fact, the finance and insurance industry is the tightest industry in the United States, with 2.5 times more vacancies per filled position than the regional average (1.6 times in the European Union).
I think this means that financial institutions will be eager to implement smart software to become “workers.” If that works, the confident CPA quoted at the beginning of this blog post is going to get a pink slip.
The OECD report believes that AI will have a broad impact. The most interesting assertion / finding in the report is that one-fifth of the tasks a worker handles can be handled by smart software. This figure is interesting because smart software hallucinates and is carrying the hopes and dreams of many venture outfits and forward-leaning wizards on its digital shoulders.
And what’s a bureaucratic report without an almost incomprehensible chart like this one from page 145 of the report?
Look closely and you will see that sewing machine operators are more likely to retain jobs than insurance clerks.
Like many government reports, the document focuses on the benefits of smart software. These include (cue the theme from Star Wars, please) more efficient operations, employees who do more work and theoretically less looking for side gigs, and creating ways for an organization to get work done without old-school humans.
Several observations:
- Let’s assume smart software is almost good enough, errors and all. The report makes it clear that it will be grabbed and used for a plethora of reasons. The main one is money. This is an economic development framework for the research.
- The future is difficult to predict. After scanning the document, I was thinking that a couple of college interns and an account on You.com would be able to generate a reasonable facsimile of this report.
- Agents can gather survey data. One hopes this use case takes hold in some quasi government entities. I won’t trot out my frequently stated concerns about “survey” centric reports.
Stephen E Arnold, December 9, 2024
The Very Expensive AI Horse Race
December 4, 2024
This write up is from a real and still-alive dinobaby. If there is art, smart software has been involved. Dinobabies have many skills, but Gen Z art is not one of them.
One of the academic nemeses of smart software is a professional named Gary Marcus. Among his many intellectual accomplishments is a cameo appearance on a former Jack Benny child star’s podcast. Mr. Marcus contributes his views of smart software to the person who, for a number of years, has been a voice actor on the Simpsons cartoon.
The big four robot stallions are racing to a finish line. Is the finish line moving away from the equines faster than the steeds can run? Thanks, MidJourney. Good enough.
I want to pay attention to Mr. Marcus’ Substack post “A New AI Scaling Law Shell Game?” The main idea is that the scaling law has entered popular computer jargon. Once the lingo of Galileo, “scaling law” now names the belief that AI, like CPUs, just gets better as it gets bigger.
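For readers who have not chased the jargon, here is a minimal sketch, in Python rather than math, of the power-law form usually meant by “scaling law”: loss falls as a constant power of model size (or data, or compute). The constants below are hypothetical illustrations, not fitted values from any lab.

```python
# Power-law shape of a "scaling law": loss = E + A / N**alpha, where N is
# model size. Constants are made up for illustration only.
E, A, alpha = 1.7, 400.0, 0.34  # irreducible loss, scale constant, exponent

def predicted_loss(params_billions: float) -> float:
    n = params_billions * 1e9
    return E + A / (n ** alpha)

for size in (1, 10, 100, 1000):  # model size in billions of parameters
    print(f"{size:>5}B params -> predicted loss {predicted_loss(size):.3f}")
# Each 10x in size buys a smaller absolute improvement -- the diminishing
# returns Mr. Marcus says the field is now quietly conceding.
```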
In this essay, Mr. Marcus asserts that getting bigger may not work unless humanoids (presumably assisted by AI) innovate other enabling processes. Mr. Marcus is aware of the cost of infrastructure, the cost of electricity, and the probable costs of exhausting available content.
From my point of view, a bit more empirical “evidence” would be useful. (I am aware of academic research fraud.) Also, Mr. Marcus references me when he says keep your hands on your wallet. I am not sure that a fix is possible. The analogy is the old chestnut about changing a Sopwith Camel’s propeller when the aircraft is in a dogfight and the synchronized machine gun is firing through the propeller.
I want to highlight one passage in Mr. Marcus’ essay and offer a handful of comments. Here’s the passage I noted:
Over the last few weeks, much of the field has been quietly acknowledging that recent (not yet public) large-scale models aren’t as powerful as the putative laws were predicting. The new version is that there is not one scaling law, but three: scaling with how long you train a model (which isn’t really holding anymore), scaling with how long you post-train a model, and scaling with how long you let a given model wrestle with a given problem (or what Satya Nadella called scaling with “inference time compute”).
I think this is a paragraph I will add to my quotes file. The reasons are:
First, investors, would-be entrepreneurs, and giant outfits really want a next big thing. Microsoft fired the opening shot in the smart software war in early 2023. Mr. Nadella suggested that smart software would be the next big thing for Microsoft. The company has invested in making good on this statement. Now Microsoft 365 is infused with smart software, and Azure is burbling with digital glee over its “we’re first” status. However, a number of people have asked, “Where’s the financial payoff?” The answer is standard Silicon Valley catechism: “The payoff is going to be huge. Invest now.” If prayers could power hope, AI will be as hyperbolic as its marketing collateral promises. But it is almost 2025, and those billions have not generated more billions and profit for the Big Dogs of AI. Just sayin’.
Second, the idea that the scaling law is really multiple scaling laws is interesting. But if one scaling law fails to deliver, what happens to the other scaling laws? The interdependencies of the processes behind the scaling laws might evoke new, hitherto unidentified scaling laws. Will each scaling law require massive investments to deliver? Is it feasible to pay off the investments in these processes with the original concept of the scaling law as applied to AI? I wonder if a reverse Ponzi scheme is emerging: the more cash pumped in, the smaller the likelihood of success, like a harmonic progression of fractions whose denominators keep growing while each new term adds less. Is AI a demonstration of convergence or divergence? Just askin’.
Third, the performance or knowledge payoff I have experienced with my tests of OpenAI and the software available to me on You.com makes clear that the systems cannot handle what I consider routine questions. A recent example was my request to receive a list of the exhibitors at the November 1 Gateway Conference held in Dubai for crypto fans of Telegram’s The Open Network Foundation and TON Social. The systems were unable to deliver the lists. This is just one notable failure which a humanoid on my research team was able to rectify in an expeditious manner. (Did you know the Ku Group was on my researcher’s list?) Just reportin’.
Net net: Will AI repay the billions sunk into the data centers, the legal fees (many still looming), the staff, and the marketing? If you ask an accelerationist, the answer is, “Absolutely.” If you ask a dinobaby, you may hear, “Maybe, but some fundamental innovations are going to be needed.” If you ask an AI will kill us all type like the Xoogler Mo Gawdat, you will hear, “Doom looms.” Just dinobabyin’.
Stephen E Arnold, December 4, 2024
The Golden Fleecer of the Year: Boeing
November 29, 2024
When I was working in Washington, DC, I had the opportunity to be an “advisor” to the head of the Joint Committee on Atomic Energy. I recall a comment by Craig Hosmer (R-California), a retired rear admiral, who said, “Those Air Force guys overpay.” The admiral was correct, but I think that other branches of the US Department of Defense have been snookered a time or two.
In the 1970s and 1980s, Senator William Proxmire (D-Wisconsin) had one of his staff keep an eye on reports about wild and crazy government expenditures. Every year, the Senator reminded people of a chivalric award dating allegedly from the 1400s. Yep, the Middle Ages in DC.
The Order of the Golden Fleece in old-timey days of yore meant the recipient received a snazzy chivalric honor intended to promote Christian values and the good neighbor policy of Spain and Austria. A person with the fleece was important, a bit like a celebrity arriving at a Hollywood Oscar event. (Yawn)
Thanks, Wikipedia. Allegedly an example of a chivalric Golden Fleece. Yes, that is a sheep, possibly dead or getting ready to be dipped.
Reuters, the trusted outfit which tells me it is trusted each time I read one of its “real” news stories, published “Boeing Overcharged Air Force Nearly 8,000% for Soap Dispensers, Watchdog Alleges.” The write up stated in late October 2024:
Boeing overcharged the U.S. Air Force for spare parts for C-17 transport planes, including marking up the price on soap dispensers by 7,943%, according to a report by a Pentagon watchdog. The Department of Defense Office of Inspector General said on Tuesday the Air Force overpaid nearly $1 million for a dozen spare parts, including $149,072 for an undisclosed number of lavatory soap dispensers from the U.S. plane maker and defense contractor.
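The markup arithmetic is easy to check. A minimal sketch, assuming the 7,943 percent figure applies to the whole $149,072 line item (the IG report does not disclose how many dispensers that covers):

```python
# Back-of-the-envelope check on the reported markup, assuming the 7,943%
# figure applies to the entire $149,072 soap-dispenser line item.
charged = 149_072   # what the Air Force paid, per the IG report
markup_pct = 7_943  # markup alleged by the watchdog

base = charged / (1 + markup_pct / 100)          # implied pre-markup cost
print(f"Implied pre-markup cost: ${base:,.2f}")  # ~ $1,853.44
print(f"Overcharge: ${charged - base:,.2f}")     # ~ $147,218.56
```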
I have heard that the Department of Defense has not been able to monitor some of its administrative activities or complete an audit of what it does with its allocated funds.
According to the trusted write up:
The Pentagon’s budget is huge, breaking $900 billion last year, making overcharges by defense contractors a regular headache for internal watchdogs, but one that is difficult to detect. The Inspector General also noted it could not determine if the Air Force paid a fair price on $22 million of spare parts because the service did not keep a database of historical prices, obtain supplier quotes or identify commercially similar parts.
My view is that one of the elected officials in Washington, DC, should consider reviving the Proxmire Golden Fleece Award. Boeing may qualify, but there may be other contenders for the award as well.
I quite like the idea of scope changes and engineering change orders for some US government projects. But I have to admit that Senator Proxmire’s identification of a $600 hammer sold to the US Department of Defense no longer seems interesting.
That 8,000 percent markup is pretty nifty. Oh, on Amazon soap dispensers cost between $20 and $100. Should the Reuters story have mentioned:
- Procurement reform
- Poor financial controls
- Lack of common sense?
Of course not! The trusted outfit does not get mired in silly technicalities. And Boeing? That outfit is doing a bang-up job.
Stephen E Arnold, November 29, 2024