Just Train AI with AI Output: What Could Go Wrong?
January 9, 2026
AI is still dumb technology and needs training to improve. Unfortunately, AI training datasets are limited. Patronus AI claims it has a solution to the training problem, as VentureBeat reports in the article, “AI Agents Fail 63% Of The Time On Complex Tasks. Patronus AI Says Its New ‘Living’ Training Worlds Can Fix That.” Patronus AI is a new AI startup with backing from Datadog and Lightspeed Venture Partners.
The company’s newest project, called “Generative Simulators,” creates simulated environments that continuously generate new challenges against which AI algorithms can be evaluated. Patronus AI could potentially be a critical tool for the AI industry. Research discovered that AI agents with a 1% error rate per step compound to a 63% chance of failure over a long, multi-step task.
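That 63% figure is just per-step error compounding. Here is a minimal sketch of the arithmetic in Python, assuming the commonly cited case of a 100-step task with a 1% error rate at each step:

per_step_success = 0.99                    # a 1% error rate per step
steps = 100                                # assumed task length
task_success = per_step_success ** steps   # about 0.366
task_failure = 1 - task_success            # about 0.634, the ~63% figure
print(f"Chance the agent fails somewhere: {task_failure:.1%}")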
Patronus AI explains that traditional datasets and measurements are like standardized tests: “they measure specific capabilities at a fixed point in time but struggle to capture the messy, unpredictable nature of real work.” The new Generative Simulators produce environments and assignments that adapt based on how the algorithm responds:
“The technology builds on reinforcement learning — an approach where AI systems learn through trial and error, receiving rewards for correct actions and penalties for mistakes. Reinforcement learning is an approach where AI systems learn to make optimal decisions by receiving rewards or penalties for their actions, improving through trial and error. RL can help agents improve, but it typically requires developers to extensively rewrite their code. This discourages adoption, even though the data these agents generate could significantly boost performance through RL training.”
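The quote defines reinforcement learning twice, so here is the trial-and-error idea made concrete instead: a toy epsilon-greedy learner in Python, with payoff numbers invented for illustration and no connection to Patronus AI’s actual system:

import random
true_payoff = {"action_a": 0.3, "action_b": 0.7}   # hidden from the learner
value = {"action_a": 0.0, "action_b": 0.0}         # learner's running estimates
counts = {"action_a": 0, "action_b": 0}
epsilon = 0.1                                      # how often to explore
for _ in range(1000):
    if random.random() < epsilon:                  # explore occasionally
        action = random.choice(list(value))
    else:                                          # otherwise exploit best guess
        action = max(value, key=value.get)
    reward = 1 if random.random() < true_payoff[action] else -1   # reward or penalty
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]    # incremental mean
print(value)   # the estimate for action_b should come out higher

After enough trials, the learner’s estimates shift toward the action that pays off, which is the reward-and-penalty loop the quote describes.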
Patronus AI said that the training has improved AI algorithms’ task completion rates by 10-20%. The company also says that Big Tech can’t build all of its AI training tools in-house because the amount of specialized training needed for niche fields is effectively infinite. That leaves a natural opening for third-party companies like Patronus AI.
Patronus AI found its niche and is cashing in! But that failure rate? No problem.
Whitney Grace, January 9, 2026
Telegram Glitch Makes Some Russians Unhappy
January 9, 2026
According to the Russian publication PC News, Telegram experienced an unexpected hiccup: “A Glitch In Telegram Has Been Recorded In Russia: Users Are Complaining About Access Problems.” The problem occurred on the day after Christmas, also celebrated as Boxing Day. Down Detector, a service that monitors the status of popular services, reported the outage.
Here’s exactly what happened:
“As of 13:25 Moscow time, 387 people reported problems, and the total number of requests over the past 24 hours reached 846. The largest number of complaints came from Moscow, the Oryol region and St. Petersburg — each of these regions accounted for 4% of complaints. The failure also affected users in the Belgorod and Samara regions, where about 2% of complaints were received. Most often, users reported problems with the Telegram mobile application — 38% of requests indicated it. Another 33% of complaints concerned the unavailability of the web version of the service, 20% — incorrect operation of notifications.”
The percentages don’t lie. Something happened with Telegram around large Russian cities. Why did it happen? Was the Kremlin testing something? Did the Kremlin want Telegram out of service so no one could report on nefarious activities? Maybe the Kremlin was testing ways to disrupt Telegram? Or maybe it was just a hiccup in service.
Telegram is a point of interest and has been for more than a decade.
Whitney Grace, January 9, 2026
Gambling Is An Addiction & The Internet Starts ‘Em Young
January 8, 2026
Robert Custer was a psychiatrist who promoted the theory that gambling addiction was a mental disorder. His pioneering research is the basis for modern treatments of gambling disorder. Since Custer’s prime in the 1970s and 1980s, gambling has exploded, not just in brick-and-mortar casinos, but also through online gambling and the expansion of mobile sports betting. Science News discusses the rising tide of online gambling in the article, “As Gambling Addiction Spreads, One Scientist’s Work Reveals Timely Insights.”
Custer’s research is more relevant now than ever, especially as the behavior is nurtured in kids from the moment they can hold a mobile device. Custer fought to include the disorder in the DSM, and he succeeded:
“Custer argued that pathological gambling was not just a matter of an individual’s building and releasing tension. Rather, pathological gambling followed a progressive course from slightly unhealthy gambling behaviors to increasingly problematic wagering with tangible financial and social consequences. As a result, the committee incorporated the common consequences Custer saw in his clinical experience — such as defaulting on debts, borrowing money and struggling with family relationships — as diagnostic criteria to better identify those suffering. So, while pathological gambling remained alongside impulse control disorders in the DSM-III, its description and diagnostic criteria more closely mirrored the way the manual approached substance use disorders.”
Kids become addicted to online games that mimic the same dopamine release that gamblers experience. Social media giants are huge enablers of this behavior but so is Telegram. Telegram wants to hook the kids young so they’ll be addicted until the day they fall into a hole. It’s despicable and makes you want to toss a kid outside with a ball and stick. Go outside!
Whitney Grace, January 8, 2026
OpenCode: A Little Helper for Good Guys and the Obverse
January 7, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I gave a couple of talks late in 2025 to cyber investigators in one of those square, fly-over states. In those talks, I showed code examples. Some were crafted by the people on my team and a complete Telegram app was coded by Anthropic Claude. Yeah, the smart software needed some help. Telegram bots are not something lots of Claude users whip up. But the system worked.
Here I am reading the OpenCode “privacy” document. I look pretty good when Venice.ai converts me from an old dinobaby to a young dinobaby.
One of my team called my attention to OpenCode. The idea is to make it easier than ever for a good actor or a not-so-good actor to create scripts, bots, and functional applications. OpenCode is an example of what I call “meta ai”; that is, the OpenCode service is a wrapper that allows a lot of annoying large language model operations, library chasing, and just finding stuff to take place under one roof. If you are a more hip cat than I, you would probably say that OpenCode is a milestone on the way to a point and click dashboard to make writing goodware and malware easier and more reliable than previously possible. I will let you ponder the implications of this statement.
According to the organization:
OpenCode is an open source agent that helps you write and run code with any AI model. It’s available as a terminal-based interface, desktop app, or IDE extension.
That sounds as if an AI contributed to the sentences. But that’s to be expected. The organization is OpenCode.ai.
The organization says:
OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen*, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI etc. You can even connect your local models. [*Zen gives you access to a handpicked set of AI models that OpenCode has tested and benchmarked specifically for coding agents. No need to worry about inconsistent performance and quality across providers, use validated models that work.]
The system is open source. As of January 7, 2026, it is free to use. You will have to cough up money if you want to use OpenCode with the for-fee smart software.
The OpenCode Web site makes a big deal about privacy. You can find about 10,000 words explaining what the developers of the system bundle in their “privacy” write up. It is a good idea to read the statement. It includes some interesting features; for example:
- Accepting the privacy agreement allows home buyers to be data recipients
- Fuzzy and possibly contradictory statements about user data sales
- Continued use of the service means you accept terms which can be changed.
I won’t speculate on how a useful service needs a long and somewhat challenging “privacy” statement. But this is 2026. I still can’t figure out why home buyers are involved, but I am a clueless dinobaby. Remember there are good actors and the other type too.
Stephen E Arnold, January 7, 2026
Meta 2026: Grousing Baby Dinobabies and Paddling Furiously
January 7, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I am an 81-year-old dinobaby. I get a kick out of a baby dinobaby complaining about young wizards. Imagine how that looks to me. A pretend dinobaby with many years to rock and roll complaining about how those “young people” are behaving. What a hoot!
A dinobaby explains to a young, brilliant entrepreneur, “You are clueless.” My response: Yeah, but who has a job? Thanks, Qwen. Good enough.
Why am I thinking about age classification of dinobabies? I read “Former Meta Scientist Says Mark Zuckerberg’s New AI Chief Is ‘Young’ And ‘Inexperienced’—Warns ‘Lot Of People’ Who Haven’t Yet Left Meta ‘Will Leave’.” Now this is a weird newsy story or a story presented as a newsy release by an outfit called Benzinga. I don’t know much about Benzinga. I will assume it is a version of the estimable Wall Street Journal or the more estimable New York Times. With that nod to excellence in mind, what is this write up about?
Answer: A staff change and what I call departure grief. People may hate their job. However, when booted from that job, a problem looms. No matter how important a person’s family, no matter how many technical accolades a person has accrued, and no matter the sense of self-worth — the newly RIFed, terminated, departed, or fired person feels bad.
Many Xooglers fume online after losing their status as Googlers. These essays are amusing to me. When Mother Google kicks them out of the quiet pod, the beanbag, or the table tennis room, these people vent at length. I think that’s what this Benzinga “zinger” of a write up conveys.
Let’s take a quick look, shall we?
First, the write up reports that the French-born Yann LeCun is allegedly 65 years old. I noted a passage explaining that Alexandr [sic] Wang is the top dog in Meta’s Superintelligence Labs (MSL), or “MISSILE” I assume. That’s quite a metaphor. Missiles are directed or autonomous. Sometimes they work, and sometimes they explode at wedding parties in some countries. Whoops. Now what does the baby dinobaby Mr. LeCun say about the 28-year-old sprout Alexandr [sic] Wang, founder of Scale AI? Keep in mind that the genius Mark Zuckerberg paid $14 billion for this company in the middle of 2025.
Alexandr Wang is intelligent and learns quickly, but does not yet grasp what attracts — or alienates — top researchers.
Okay, we have a baby dinobaby complaining about the younger generation. Nothing new here except that Mr. Wang is still employed by the genius Mark Zuckerberg. Mr. LeCun is not as far as I know.
Second, the article notes:
According to LeCun, internal confidence eroded after Meta was criticized for allegedly overstating benchmark results tied to Llama 4. He said the controversy angered Zuckerberg and led him to sideline much of Meta’s existing generative AI organization.
And why? According to the zinger write up:
LLMs [are] a dead end.
But was Mr. LeCun involved in these LLMs and was he tainted by the failure that appears to have sparked the genius Mark Zuckerberg to pay $14 billion for an indexing and content-centric company? I would assume that the answer is, “Yep, Mr. LeCun was in his role for about 13 years.” And the result of that was a “dead end.”
I would suggest that the former Facebook and Meta employee was not able to get the good ship Facebook and its support boats Instagram and WhatsApp on course despite 156 months of navigation, charting, and inputting.
Several observations:
- Real dinobabies and pretend dinobabies complain. No problem. Are the complaints valid? One must know about the mental wiring of said dinobaby. Young dinobabies may be less mature complainers.
- Geniuses with a lot of money can be problematic. Mr. LeCun may not appreciate the wisdom of this dinobaby’s statement … yet.
- The genius Mr. Zuckerberg is going to spend his way back into contention in the AI race.
Net net: Meta (Facebook) appears to have floundered with the virtual worlds thing and now is paddling furiously as the flood of AI solutions rushes past it. Can geniuses paddle harder or just buy bigger and more powerful boats? Yep, zinger.
Stephen E Arnold, January 7, 2026
ChatGPT Channels Telegram
January 7, 2026
Just what everyone needs: Telegram-type apps on the Sam AI-Man platform. What will bad actors do? Obviously nothing. Apps will be useful, do good, and make the world a better place.
ChatGPT now connects to apps without leaving the AI interface. According to Mashable, “ChatGPT Launches Apps Beta: 8 Big Apps You Can Now Use In ChatGPT.” ChatGPT wants its users to easily access apps or take suggestions during conversations with the AI. The idea is that apps will augment ChatGPT and extend conversations.
App developers will also be able to use ChatGPT to build chat-native experiences to bring context and action directly into conversations.
The new app integration is described as:
“While some commentators have referred to the new Apps beta as a ChatGPT app store, at this time, it’s more of an app directory. However, in the “Looking Ahead” section of its announcement post, OpenAI does note that this tool could eventually ‘expand the ways developers can reach users and monetize their work.’”
The apps that are integrated into ChatGPT are Zillow, Target, Expedia, Tripadvisor, Instacart, DoorDash, Apple Music, and Spotify. This sounds similar to what Telegram did. Does this mean OpenAI is on the road to Telegram-like services?
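OpenAI has said the new apps framework builds on the open Model Context Protocol, so a “chat-native experience” is, under the hood, a tool server the chat client can call. Here is a minimal sketch using the MCP Python SDK; the server name and the search_listings tool are hypothetical stand-ins, not any real partner’s integration:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("listings-demo")   # hypothetical server name

@mcp.tool()
def search_listings(city: str, max_price: int) -> str:
    """Canned stand-in for a real listings backend."""
    return f"Demo: 3 listings under ${max_price:,} in {city}"

if __name__ == "__main__":
    mcp.run()   # serves the tool over stdio to a connected chat client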
Just doing good. Bad actors will pay no attention.
Whitney Grace, January 7, 2026
The Branding Genius of Cowpilot: New Coke and Jaguar Are No Longer the Champs
January 6, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
We are beavering away on our new Telegram Notes series. I opened one of my newsfeeds, and there it was: another gem of a story from PCGamer. As you may know, PCGamer inspired this bit of art work from an AI system. I thought I would have a “cow” when I saw it. Here’s the visual gag again:

Is that cow standing in its output? Could that be a metaphor for “cowpilot” output? I don’t know. Qwen, like other smart software, can hallucinate. Therefore, I see a semi-sacred bovine standing in a muddy hole. I do not see AI output. If you do, I am not sure you are prepared for the contents about which I shall comment; that is, the story from PCGamer called “In a Truly Galaxy-Brained Rebrand, Microsoft Office Is Now the Microsoft 365 Copilot App, but Copilot Is Also Still the Name of the AI Assistant.”
I thought New Coke was an MBA craziness winner. I thought the Jaguar rebrand was an even crazier MBA craziness winner. I thought the OpenAI smart software non-mobile-phone rebranding effort that looks like a 1950s dime store fountain pen was in the running for crazy. Nope. We have a candidate for the rebranding that tops the leader board.
Microsoft Office is now the M3CA or Microsoft 365 Copilot App.
The PCGamer write up says:
Copilot is the app for launching the other apps, but it’s also a chatbot inside the apps.
Yeah, I have a few thoughts. But what else does PCGamer say in this write up?

An MBA study group discusses the branding strategy behind Cowpilot. Thanks, Qwen. Nice consistent version of the heifer.
Here’s a statement I circled:
Copilot is, notably, a thing that already exists! But as part of the ongoing effort to juice AI assistant usage numbers by making it impossible to not use AI, Microsoft has decided to just call its whole productivity software suite Copilot, I guess.
Yep, a “guess.” That guess wording suggests that Microsoft is simply addled. Why name a product that causes a person to guess? Not even Jaguar made people “guess” about a weird square car painted some jazzy semi hip color. Even the Atlanta semi-behemoth slapped “new” Coke on something that did not have that old Coke vibe. Oh, both of these efforts were notable. I even remember when the brain trust at NBC dumped the peacock for a couple of geometric shapes. But forcing people to guess? That’s special.
Here’s another statement that caught my dinobaby brain:
Should Microsoft just go ahead and rebrand Windows, the only piece of its arsenal more famous than Office, as Copilot, too? I do actually think we’re not far off from that happening. Facebook rebranded itself “Meta” when it thought the metaverse would be the next big thing, so it seems just as plausible that Microsoft could name the next version of Windows something like “Windows with Copilot” or just “Windows AI.” I expect a lot of confusion around whatever Office is called now, and plenty more people laughing at how predictably silly this all is.
I don’t agree with this statement. I don’t think “silly” captures what Microsoft is attempting to do. In my experience, Microsoft is a company that bet big on the AI revolution. That turned into a cost sinkhole. Then AI just became generally available. Suddenly Microsoft has to flog its business customers to embrace not just Azure, Teams, and PowerPoint. Microsoft has to make it so users of these services have to do Copilot.
Take your medicine, Stevie. Just like my mother’s approach to giving me cough medicine: take your medicine or I will nag you to your grave. My mother haunting me for the rest of my life was a bummer of a thought. Now I have the Copilot thing. Yikes, I have to take my Copilot medicine whether I want to or not. That’s not “silly.” This is desperation. This is a threat. This is a signal that MBA think has given common sense a pink slip.
Stephen E Arnold, January 6, 2026
AlphaTON: Tactical Brilliance or Bumbling?
January 6, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
A few days ago, I published on my “Telegram Notes” page a story about some fancy dancing. I thought the insights into strategic thinking and tactical moves might be of interest to those who follow Beyond Search and its information on social media like the estimable LinkedIn.
What caught my attention? Telegram — again. Pavel Durov is headed for a trial in France. He’s facing about a baker’s dozen of charges related to admirable activities like ignoring legitimate requests for information about terrorism, a little bit of kiddie video excitement, and the more mundane allegations related to financial fancy dancing.
This means that although other nation states could send a diplomatic note to the French government, that is unlikely. Thus, some casual conversations may be held, but the wheels of French justice, which rolled like tumbrils during the French revolution, are grinding forward. The outcome is difficult to predict. If you have been in France, you know how some expected things become “quelle surprise.” And fast.
Not surprisingly, the activity related to Telegram has been stepped up. Nikolai Durov has apparently come out of his cocoon to allow Pavel, his GOAT (greatest of all time) brother, to say:
It happened. Our decentralized confidential compute network, Cocoon, is live. The first AI requests from users are now being processed by Cocoon with 100% confidentiality. GPU owners are already earning TON.
Then a number of interesting things happened.
First, there was joy among Telegram true believers, who wondered why the GOAT and his two-Ph.D.-toting brother were acting like Tim Apple and his company’s non-AI initiative. Telegram was doing AI as a distributed service. Yep, AI mining along with other types of digital mining. Instead of buying scarce and expensive hardware, Nikolai ideated a method that would use other people’s GPUs. Do some work for Telegram and get paid in TONcoin. Yep, that’s the currency completely and totally under the control of the independent TON Foundation. Yep, completely separate.
Second, there was some executive shuffling at the TON Foundation. You know that this outfit is totally, 100 percent separate from Telegram and now has responsibility for the so-called TON blockchain technology and the marketing of all things Telegram. Manuel (Manny) Stotz, a German financial wizard, left his job at the TON Foundation and became the president of TON Strategy Company. I think he convinced a person with a very low profile named Veronika Kapustina to help manage the new public company. TON Strategy has a tie up with Manny’s own Kingsway Capital and possibly with Manny’s other outfit Koenigsweg Holding. (Did you know that Koenigsweg means Kingsway in German?)
Is it possible for a pop up company to offer open market visitors a deal on hot TONcoins? I don’t know. I do know that Qwen did a good enough job on this illustration, however.
Third was the beep beep Road Runner acceleration of AlphaTON Capital. Like TON Strategy Company (which rose like a phoenix from the smoking shell of VERB Technology), AlphaTON popped into existence about four months ago. Like TON Strategy, it sported a NASDAQ listing under the ticker symbol ATON. Some Russians might remember the famous or infamous ATON Bank affair. (I wonder if someone was genuinely clueless about the ATON ticker symbol’s metaphorical association or just having a bit of fun at the expense of the clueless, tightly leashed “senior managers” of AlphaTON.) I thought I heard a hooray when AlphaTON was linked to the very interesting high frequency trading outfit DWF MaaS. No CFO or controller was needed when the company appeared like a pop up store in the Mall of America. A person with an interesting background would be in charge of AlphaTON’s money. For those eager to input one type of currency and trade it for another, the DWF MaaS outfit and its compatriots in Switzerland could do the job.
But what happened to these beep beep Road Runner moves?
Answer: TONcoin value went down. TON Strategy Company share price went down. AlphaTON share price cratered. On January 2, one AlphaTON share was about US$0.77. That is an 85 percent drop since the pop-up distributed mining company came into existence by morphing from a cancer fighting company to an AI mining outfit.
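For the arithmetic-minded, those two numbers imply a starting price around US$5; here is a back-of-envelope check in Python, assuming the 85 percent figure is accurate:

current_price = 0.77                          # US$ per share on January 2
drop = 0.85                                   # the reported decline
implied_start = current_price / (1 - drop)    # about US$5.13
print(f"Implied pre-drop share price: ${implied_start:.2f}")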
In the midst of these financial flameouts, Manny had to do some fast dancing for the US SEC because he did not do the required shareholder vote process before making some money moves. Then, a couple of weeks later, the skilled senior managers of AlphaTON announced a deal with the high flying private company Anduril. But there was one small problem. Anduril came out and said, “AlphaTON is not telling the truth.” The blue chip thinkers at AlphaTON had to issue a statement to the effect that it was acting like an AI system and hallucinating. There was no deal with Anduril.
Then the big news was AlphaTON’s cutting ties with DWF MaaS, distancing itself from the innovative owner of DWF MaaS, and paying DWF MaaS an alleged US$15 million to just go away pronto.
Where is AlphaTON now?
That’s a good question. I think 2026 will allow AlphaTON to demonstrate its business acumen. Personally, I hope that information becomes available to answer these questions:
- Was AlphaTON’s capitalization at US$420.69 million a joke like ATON as the ticker symbol?
- What’s the linkage among RSV Capital (Canada) and sources of money in Russia and Telegram’s TONcoin (which is totally under the control of the completely separate TON Foundation)?
- What does Enzo Villani know about mining artificial intelligence?
- What does Britany Kaiser know about mining artificial intelligence?
- What does Logan Ryan Golema, an egamer, know about building out AI data centers?
- How does AlphaTON plan to link financial services, cancer fighting, and AI mining into a profitable business in 2026?
If you want to read the post in Telegram Notes, click here.
Stephen E Arnold, January 6, 2026
Grok AI Hallucinates Less Than Any Other AI. Believe It or Not!
January 6, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I had to put down my Telegram Notes project to capture the factoids in “Elon Musk’s Grok Records Lowest Hallucination Rate in AI Reliability Study.” Of course, I believe everything I read on the Internet. I am a veritable treasure trove of factoids about Elon Musk’s technologies; for example, full self driving, the trade-in value of a used Tesla Model S, and the launch date for a trip to Mars. I am so ready.

The AI system tells the programmers, “Yes, you can use AI to predict which of the egame players are most likely to have access to a parent’s credit card. This card can be used to purchase tokens or crypto to enter higher-stake gambling games. Does that help?” Thanks, Qwen. Too bad you weren’t in the selection of AI systems tested by the wizards at Relum.
The write up presents information from gold-standard sources too; for instance, the casino games aggregation company Relum. And what did this estimable company’s research reveal? Consider these factoids:
- Grok’s hallucination “rate” is just eight percent. Out of 100 prompts / queries, Grok goes off the rails only eight times. This is outstanding. Everyone wants such a low rate of hallucination. Exceptions may apply for some nitpickers.
- The worst LLMs in the hallucination rate category are ChatGPT and Google Gemini. These outfits make up information more than one third of the time. That’s not too bad if you are planning on selling ads. The idea is “prompt relaxation.” The more relaxed, the wider the net for allegedly relevant ads. More ads yield more revenue. Maybe there is more to making up answers than meets the eye. I am okay with ChatGPT and Google competing for the most hallucinogenic crown. Have at it, folks.
- Deepseek, the Chinese freebie, hallucinates only 14 percent of the time. Way to go, Chinese political strategists. (Qwen’s hallucination rate was not reported in the article. By the way, that’s one of the models Pavel Durov’s Telegram will allegedly rely upon to translate Messenger content and perform other magical functions for the TONcoin pivot. Note the word “magical.” Two public companies listed on NASDAQ in six months. As I said, “Magic.”)
Here’s a quote from the gambling company’s chief product officer. Obviously this individual is an expert in the field of machine learning, neural networks, matrix transforms, and the other bits and bobs of building smart software. Here’s the statement:
About 65% of US companies now use AI chatbots in their daily work, and nearly 45% of employees admit they’ve shared sensitive company information with these tools. These numbers show well how important chatbots have become in everyday work.
Absolutely. When one runs Windows, the user is “using” smart software. When one uses Google, AI is there. These market winners are moving forward on wheels greased with fabricated output. Yeah, great.
Several observations:
- Grok seems unaware of messages posted on X.com. I wonder why.
- A bad actor has to sign up to access the Grok API and the X.com API in order to pull off some slick AI-based cyber activities. I wonder why.
- Grok’s appeal to online gaming companies is interesting. I wonder why.
I have no answers. Relum does. These data do not reassure me about Mr. Musk’s business tactics for building Grok’s market footprint.
Stephen E Arnold, January 6, 2026
A Revised AI Glossary for 2026
January 5, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I have a lot to do. I spotted this article: “ChatGPT Glossary: 61 AI Terms Everyone Should Know.” I read the list and the definitions. I have decided to invest a bit of time to de-obfuscate this selection of verbal gymnastics. My hunch is that few will be amused. I, however, find this type of exercise very entertaining. On the other hand, my reframing of this “everyone should know” stuff reflects on my role as an addled dinobaby limping around rural Kentucky.
Herewith my recasting of the “everyone should know” list. Remember. Everyone means everyone. That’s a categorical affirmative, and these assertions trigger me.
Artificial general intelligence. Sci-fi craziness from “we can rule the world” wizards
Agentive. You, human, don’t do this stuff anymore.
AI ethics. Anything goes, bros.
AI psychosis. It’s software and it’s driving you nuts.
AI safety. Sure, only if something does not make money or abrogate our control
Algorithms. Numerical recipes explained in classes normal people do not take
Alignment. Weaponizing models and information outputs
Anthropomorphism. Sure, fall in love with outputs. We don’t really care. Just click on the sponsored content.
Artificial intelligence. Code words for baloney and raising money
Autonomous agents. Stay home and make stuff to sell on Etsy
Bias. Our way is the only way
Chatbot. Talk to our model, pal
Claude. An example of tech bro-loney
Cognitive computing. A librarian consultant’s contribution to gibberish
Data augmentation. Indexing
Dataset. Anything an AI outfit can grab and process
Deep learning. Pretending to be smart
Diffusion. Moral dissipation and hot gas
Emergent behavior. Shameless rip off of the Santa Fe Institute and Stuart Kauffman
End-to-end learning. Update models instead of retraining them
Ethical considerations. Pontifical statements or “Those are my ethical principles, and if you don’t like them… well, I have others.”
Foom. GenZ’s spelling of the Road Runner’s cartoon beep beep
Generative adversarial network. Jargon fog for inputs along the way to an output
Generative AI. Reason to fire writers and PR people
Google Gemini. An example of tech bro-loney from an ad sales outfit
Guardrails. Stuff to minimize suicides, law suits, and the proliferation of chemical weapons
Hallucination. Errors
Inference. Guesses
LLM. Today’s version of LSMFT
Machine learning. Math from half century ago
Microsoft Bing. Beats the heck out of me
Multimodal AI. A fancy way to say words, sound, pix, and video to help un-employ humans who did this type of work
Natural language processing. Software that understands William Carlos Williams’ poetry
Neural network. Lots of probability and human-fiddled thresholds
Open weights. You can put your finger on the scale too
Overfitting. Baloney about hallucinations, being wrong, and helping kids commit de-living
Paperclips. Less sexy than The Terminator but loved by tech bros who like the 1999 film Office Space
Parameters. Where you put your finger on the scale to fiddle outputs
Perplexity. Another example of tech bro-loney
Prompt. A query
Prompt chaining. Related queries fed into the baloney machine
Prompt engineering. Hunting for words and phrases to output pornography, instructions for making poison gas, and ways to defraud elders online
Prompt injection. Pressing enter after prompt engineering
Quantization. Jargon to say, “We won’t need so much money now, Mr. Venture Bankman”
Slop. Outputs from smart software
Sora. Lights, camera, you’re fired. Cut.
Stochastic parrot. A bound phrase that allowed Google to give Timnit Gebru a chance to find her future elsewhere
Style transfer. You too can generate a sketch in the style of Max Ernst and a Batman comic book
Sycophancy. AI models emulate new hires at McKinsey & Company
Synthetic data. Hey, we just fabricate data. No copyright problems, right
Temperature. A fancy way to explain twiddling with parameters
Text-to-image generation. Artists. Who needs them?
Tokens. n-grams but to crypto dudes it’s value
Training data. Copyright protected information, personally identifiable information, and confidential inputs in any format, even the synthetic made up stuff
Transformer model. A reason for Google leadership to ask, “Why did we release this to open source?”
Turing test. Do you love me? Of course, the system does. Now read the sponsored content
Unsupervised learning. Automated theft of training data
Weak AI (narrow AI). A model trained on a bounded data set, not whatever the AI company can suck down
Zero-shot learning. A stepping stone to artificial intelligence able to do more than any miserable human
I love smart software.
Oh, the cited source leaves out OpenAI’s ChatGPT. This means “Titanic” after the iceberg.
Stephen E Arnold, January 5, 2026

