The Future: Autonomous Machines
October 7, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Does mass customization ring a bell? I cannot remember whether it was Joe Pine or Al Toffler who popularized the idea. The concept has become a trendlet. Like many high-technology trends, a new term is required to help communication the sizzle of “new.”
An organization is now an “autonomous machine.” The concept is spelled out in “This Is Why Your Company Is Transforming into an Autonomous Machine.” The write up asserts:
Industries are undergoing a profound transformation as products, factories, and companies adopt the autonomous machine design model, treating each element as an integrated system that can sense, understand, decide, and act (SUDA business operating system) independently or in coordination with other platforms.
I assume SUDA rhymes with OODA (Observe, Orient, Decide, Act), but who knows?
The inspiration for the autonomous machine may be Elon Musk, who allegedly said: “I’m really thinking of the factory like a product.” Gnomic stuff.
The write up adds:
The Tesla is a cyber-physical system that improves over time through software updates, learns from millions of other vehicles, and can predict maintenance needs before problems occur.
I think this is an interesting idea. There is a logical progression at work; specifically:
- An autonomous “factory”
- Autonomous “companies” but I think one could just think about organizations and not be limited to commercial enterprises
- Agentic enterprises.
The future appears to be like this:
The path to becoming an autonomous enterprise, using a hybrid workforce of humans and digital labor powered by AI agents, will require constant experimentation and learning. Go fast, but don’t hurry. A balanced approach, using your organization’s brains and hearts, will be key to success. Once you start, you will never go back. Adopt a beginner’s mindset and build. Companies that are built like autonomous machines no longer have to decide between high performance and stability. Thanks to AI integration, business leaders are no longer forced to compromise. AI agents and physical AI can help business leaders design companies like a stealth aircraft. The technology is ready, and the design principles are proven in products and production. The fittest companies are autonomous companies.
I am glad I am a dinobaby, a really old dinobaby. Mass customization alright. Oligopolies producing what they want for humans who are supposed to have a job to buy the products and services. Yeah.
Stephen E Arnold, October 7, 2025
AI May Be Like a Disneyland for Threat Actors
October 7, 2025
AI is supposed to revolutionize the world, but bad actors are the ones who are benefitting the most tight now. AI is the ideal happy place for bad actors, because there’s an easy hack using autonomous browser based agents that use them as a tool for their nefarious deeds. This alert cokes from Hacker Noon’s story: “Studies Show AI Agents And Browsers Are A Hacker’s Perfect Playground.”
Many companies are running on at least one AI enterprise agent, using it as a tool to fetch external data, etc. Security, however, is still viewed as an add-on for the developers in this industry. Zenity Labs, a leading Agentic AI security and governance company, discovered that 3000 publicly accessible MS Copilot agents.
The Copilot agents failed because they relied on soft boundaries:
“…i.e., fragile, surface-level protections (i.e., instructions to the AI about what it should and shouldn’t do, with no technical controls). Agents were instructed in their prompts to “only help legitimate customers,” yet such rules were easy to bypass. Prompt shields designed to filter malicious inputs proved ineffective, while system messages outlining “acceptable behavior” did little to stop crafted attacks. Critically, there was no technical validation of the input sources feeding the agents, leaving them open to manipulation. With no sandboxing layer separating the agent from live production data, attackers can exploit these weaknesses to access sensitive systems directly.”
White hat hackers also found other AI exploits that were demonstrated at Black Hat USA 2025. Here’s a key factoid: “The more autonomous the AI agent, the higher the security risk.”
Many AI agents are vulnerable to security exploits and it’s a scary thought information is freely available to bad actors. Hacker Noon suggests putting agents through stress tests to find weak points then adding the necessary security levels. But Oracle (the marketer of secure enterprise search) and Google (owner of the cyber security big dog Mandiant) have both turned on their klaxons for big league vulnerabilities. Is AI helping? It depends whom one asks.
Whitney Grace, October 7, 2025
Telegram and EU Regulatory Consolidation: Trouble Ahead
October 6, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Imagine you are Pavel Durov. The value of TONcoin is problematic. France asked you to curtail some content in a country unknown to the folks who hang out at the bar at the Harrod’s Creek Inn in rural Kentucky. Competitors are announcing plans to implement Telegram-type functions in messaging apps built with artificial intelligence as steel girders. How can the day become more joyful?
Thanks, Midjourney. Good enough pair of goats. One an actual goat and the other a “Greatest of All Time” goat.
The orange newspaper has an answer to that question. “EU Watchdog Prepares to Expand Oversight of Crypto and Exchanges” reports:
Stock exchanges, cryptocurrency companies and clearing houses operating in the EU are set to come under the supervision of the bloc’s markets watchdog…
Crypto currency and some online services (possibly Telegram) operate across jurisdictions. The fragmented rules and regulations allow organizations with sporty leadership to perform some remarkable financial operations. If you poke around, you will find the names of some outfits allied with industrious operators linked to a big country in Asia. Pull some threads, and you may find an unknown Russian space force professional beavering away in the shadows of decentralized financial activities.
The write up points out:
Maria Luís Albuquerque, EU commissioner for financial services, said in a speech last month that it was “considering a proposal to transfer supervisory powers to Esma for the most significant cross-border entities” including stock exchanges, crypto companies and central counterparties.
How could these rules impact Telegram? It is nominally based in the United Arab Emirates? Its totally independent do-good Open Network Foundation works tirelessly from a rented office in Zug, Switzerland. Telegram is home free, right?
No pesky big government rules can ensnare the Messenger crowd.
Possibly. There is that pesky situation with the annoying French judiciary. (Isn’t that country with many certified cheeses collapsing?) One glitch: Pavel Durov is a French citizen. He has been arrested, charged, and questioned about a dozen heinous crimes. He is on a leash and must check in with his grumpy judicial “mom” every couple of weeks. He allegedly refused to cooperate with a request from a French government security official. He is awaiting more thrilling bureaucracy from the French judicial system. How does he cope? He criticizes France, the legal processes, and French officials asking him to do for France what Mr. Durov did for Russia earlier this year.
Now these proposed regulations may intertwine with Mr. Durov’s personal legal situation. As the Big Dog of Telegram, the French affair is likely to have some repercussions for Telegram and its Silicon Valley big tech approach to rules and regulations. EU officials are indeed aware of Mr. Durov and his activities. From my perspective in nowheresville in rural Kentucky, the news in the Financial Times on October 6, 2025, is problematic for Mr. Durov. The GOAT of Messaging, his genius brother, and a close knit group of core engineers will have to do some hard thinking to figure out how to deal with these European matters. Can he do it? Does a GOAT eat what’s available?
Stephen E Arnold, October 6, 2025
Forget AI. The Real Game Is Control by Tech Wizards
October 6, 2025
This essay is the work of a dumb dinobaby. No smart software required.
The weird orange newspaper ran an opinion-news story titled “How Tech Lords and Populists Changed the Rules of Power.” The author is Giuliano da Empoli. Now he writes. He has worked in the Italian government. He was the Deputy Mayor for Culture in the city of Florence. Niccolò Machiavelli (1469-1527) lived in Florence. That Florentine’s ideas may have influenced Giuliano.
What are the tech bros doing? M. da Empoli writes:
The new technological elites, the Musks, Mark Zuckerbergs and Sam Altmans of this world, have nothing in common with the technocrats of Davos. Their philosophy of life is not based on the competent management of the existing order but, on the contrary, on an irrepressible desire to throw everything up in the air. Order, prudence and respect for the rules are anathema to those who have made a name for themselves by moving fast and breaking things, in accordance with Facebook’s famous first motto. In this context, Musk’s words are just the tip of the iceberg and reveal something much deeper: a battle between power elites for control of the future.
In the US, the current pride of tech lions have revealed their agenda and their battle steed, Donald J. Trump. The “governing elite” are on their collective back feet. M. da Empoli points the finger at social media and online services as the magic carpet the tech elites ride even though these look like private jets. In the online world, M. da Empoli says:
On the internet, a campaign of aggression or disinformation costs nothing, while defending against it is almost impossible. As a result, our republics, our large and small liberal democracies, risk being swept away like the tiny Italian republics of the early 16th century. And taking center stage are characters who seem to have stepped out of Machiavelli’s The Prince to follow his teachings. In a situation of uncertainty, when the legitimacy of power is precarious and can be called into question at any moment, those who fail to act can be certain that changes will occur to their disadvantage.
What’s the end game? M. da Empoli asserts:
Together, political predators and digital conquistadors have decided to wipe out the old elites and their rules. If they succeed in achieving this goal, it will not only be the parties of lawyers and technocrats that will be swept away, but also liberal democracy as we have known it until today.
Several observations:
- The tech elites are in a race which they have to win. Dumb phones and GenAI limiting their online activities are two indications that in the US some behavioral changes can be identified. Will the “spirit of log off” spread?
- The tech elites want AI to win. The reason is that control of information streams translates into power. With power comes opportunities to increase the wealth of those who manage the AI systems. A government cannot do this, but the tech elites can. If AI doesn’t work, lots of money evaporates. The tech elites do not want that to happen.
- Online tears down and leads inevitably to monopolistic or oligopolistic control of markets. The end game does not interest the tech elite. Power and money do.
Net net: What’s the fix? M. da Empoli does not say. He knows what’s coming is bad. What happens to those who deliver bad news? Clever people like Machiavelli write leadership how-to books.
Stephen E Arnold, October 6, 2025
AI Service Industry: Titan or Titanic?
October 6, 2025
Venture capitalists believe they have a new recipe for success: Buy up managed-services providers and replace most of the staff with AI agents. So far, it seems to be working. (For the VCs, of course, not the human workers.) However, asserts TechCrunch, “The AI Services Transformation May Be Harder than VCs Think.” Reporter Connie Loizos throws cold water on investors’ hopes:
“But early warning signs suggest this whole services-industry metamorphosis may be more complicated than VCs anticipate. A recent study by researchers at Stanford Social Media Lab and BetterUp Labs that surveyed 1,150 full-time employees across industries found that 40% of those employees are having to shoulder more work because of what the researchers call ‘workslop’ — AI-generated work that appears polished but lacks substance, creating more work (and headaches) for colleagues. The trend is taking a toll on the organizations. Employees involved in the survey say they’re spending an average of nearly two hours dealing with each instance of workslop, including to first decipher it, then decide whether or not to send it back, and oftentimes just to fix it themselves. Based on those participants’ estimates of time spent, along with their self-reported salaries, the authors of the survey estimate that workslop carries an invisible tax of $186 per month per person. ‘For an organization of 10,000 workers, given the estimated prevalence of workslop . . . this yields over $9 million per year in lost productivity,’ they write in a new Harvard Business Review article.”
Surprise: compounding baloney produces more baloney. If companies implement the plan as designed, “workslop” will expand even as the humans who might catch it are sacked. But if firms keep on enough people to fix AI mistakes, they will not realize the promised profits. In that case, what is the point of the whole endeavor? Rather than upending an entire industry for no reason, maybe we should just leave service jobs to the humans that need them.
Cynthia Murrell, October 6, 2025
Hey, No Gain without Pain. Very Googley
October 6, 2025
AI firms are forging ahead with their projects despite predictions, sometimes by their own leaders, that artificial intelligence could destroy humanity. Some citizens have had enough. The Telegraph reports, “Anti-AI Doom Prophets Launch Hunger Strike Outside Google.” The article points to hunger strikes at both Google DeepMind’s London headquarters and a separate protest in San Francisco. Writer Matthew Field observes:
“Tech leaders, including Sir Demis of DeepMind, have repeatedly stated that in the near future powerful AI tools could pose potential risks to mankind if misused or in the wrong hands. There are even fears in some circles that a self-improving, runaway superintelligence could choose to eliminate humanity of its own accord. Since the launch of ChatGPT in 2022, AI leaders have actively encouraged these fears. The DeepMind boss and Sam Altman, the founder of ChatGPT developer OpenAI, both signed a statement in 2023 warning that rogue AI could pose a ‘risk of extinction’. Yet they have simultaneously moved to invest hundreds of billions in new AI models, adding trillions of dollars to the value of their companies and prompting fears of a seismic tech bubble.”
Does this mean these tech leaders are actively courting death and destruction? Some believe so, including San Francisco hunger-striker Guido Reichstadter. He asserts simply, “In reality, they’re trying to kill you and your family.” He and his counterparts in London, Michaël Trazzi and Denys Sheremet, believe previous protests have not gone far enough. They are willing to endure hunger to bring attention to the issue.
But will AI really wipe us out? Experts are skeptical. However, there is no doubt that AI systems perpetuate some real harms. Like opaque biases, job losses, turbocharged cybercrime, mass surveillance, deepfakes, and damage to our critical thinking skills, to name a few. Perhaps those are the real issues that should inspire protests against AI firms.
Cynthia Murrell, October 6, 2025
What a Hoot? First, Snow White and Now This
October 3, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “Disney+ Cancellation Page Crashes As Customers Rush to Quit after Kimmel Suspension.” I don’t think too much about Disney, the cost of going to a theme park, or the allegedly chill Walt Disney. Now it is Disney, Disney, Disney. The chant is almost displacing Epstein, Epstein, Epstein.
Somehow the Disney company muffed the bunny with Snow White. I think the film hit my radar when certain short human actors were going to be in a remake of the 1930s’ cartoon “Snow White.” Then then I noted some stories about a new president and an old president who wanted to be the president again or whatever. Most recently, Disney hit the pause button for a late night comedy show. Some people were not happy.
The write up informed me:
With cancellations surging, many subscribers reported technical issues. On Reddit’s r/Fauxmoi, one post read, “The page to cancel your Hulu/Disney+ subscription keeps crashing.”
As a practical matter, the way to stop cancellations is to dial back the resources available to the Web site. Presto. No more cancellations until the server is slowly restored to functionality so it can fall over again.
I am pragmatic. I don’t like to think that information technology professionals (either full time “cast” or part-timers) can’t keep a Web site online. It is 2025. A phone call to a service provider can solve most reliability problems as quickly as the data can be copied to a different data center.
Let me step back. I see several signals in what I will call the cartoon collapse.
- The leadership of Disney cannot rely on the people in the company; for example, the new Snow White and the Web server fell over.
- The judgment of those involved in specific decisions seems to be out of sync with the customers and the stakeholders in the company. Walt had Mickey Mouse aligned with what movie goers wanted to see and what stakeholders expected the enterprise to deliver.
- The technical infrastructure seems flawed. Well, not “seems.” The cancellation server failed.
Disney is an example of what happens when “leadership” has not set up an organization to succeed. Furthermore, the Disney case raises this question, “How many other big, well-known companies will follow this Disney trajectory?” My thought is that the disconnect between “management” staff, customers, stakeholders, and technology is similar to Disney in a number of outfits.
What will be these firms’ Snow White and late night comedian moment?
Stephen E Arnold, October 3, 2025
PS. Disney appears to have raised prices and then offered my wife a $2.99 per month “deal.” Slick stuff.
Big Tech Group Think: Two Examples
October 3, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Do the US tech giants do group think? Let’s look at two recent examples of the behavior and then consider a few observations.
First, navigate to “EU Rejects Apple Demand to Scrap Landmark Tech Rules.” The thrust of the write up is that Apple is not happy with the European digital competition law. Why? The EU is not keen on Apple’s business practices. Sure, people in the EU use Apple products and services, but the data hoovering makes some of those devoted Apple lovers nervous. Apple’s position is that the EU is annoying.
Thanks, Midjourney. Good enough.
The write up says:
“Apple has simply contested every little bit of the DMA since its entry into application,” retorted EU digital affairs spokesman Thomas Regnier, who said the commission was “not surprised” by the tech giant’s move.
Apple wants to protect its revenue, its business models, and its scope of operation. Governments are annoying and should not interfere with a US company of Apple’s stature is my interpretation of the legal spat.
Second, take a look at the Verge story “Google Just Asked the Supreme Court to Save It from the Epic Ruling.” The idea is that the online store restricts what a software developer can do. Forget that the Google Play Store provides access to some sporty apps. A bit of spice is the difficulty one has posting reviews of certain Play Store apps. And refunds for apps that don’t work? Yeah, no problemo.
The write up says:
… [Google] finally elevated its Epic v. Google case, the one that might fracture its control over the entire Android app ecosystem, to the Supreme Court level. Google has now confirmed it will appeal its case to the Supreme Court, and in the meanwhile, it’s asking the Court to press pause one more time on the permanent injunction that would start taking away its control.
It is observation time:
- The two technology giants are not happy with legal processes designed to enforce rules, regulations, and laws. The fix is to take the approach of a five year old, “I won’t clean up my room.”
- The group think appears to operate on the premise that US outfits of a certain magnitude should not be hassled like Gulliver by Lilliputians wearing robes, blue suits, and maybe a powdered wig or hair extenders
- The approach of the two companies strikes me, a definite non lawyer, as identical.
Therefore, the mental processes of these two companies appear to be aligned. Is this part of the mythic Silicon Valley “way”? Is it a consequence of spending time on Highway 101 or the Foothills Expressway thinking big thoughts? Is the approach the petulance that goes with superior entities encountering those who cannot get with the program?
My view: After decades of doing whatever, some outfits believe that type of freedom is the path to enlightenment, control, and money. Reinforced behaviors lead to what sure looks like group think to me.
Stephen E Arnold, October 3, 2025
AI, Students, Studies, and Pizza
October 3, 2025
Google used to provide the best search results on the Web, because of accuracy and relevancy. Now Google search is chock full of ads, AI responses, and Web sites that manipulate the algorithm. Google searches, of course, don’t replace good, old-fashioned research. SSRN shares the paper: “Better than a Google Search? Effectiveness of Generative AI Chatbots as Information Seeking Tools in Law, Health Sciences, and Library and Information Sciences” by Erica Friesen & Angélique Roy.
The pair point out that students are using AI chatbots, claiming they help them do better research and improve their education. Sounds worse than the pathetic fallacy to me, right? Maybe if you’re only using the AI to help with writing or even a citation but Friesen and Roy decided to research if this conjecture was correct. Insert their abstract:
“is perceived trust in these tools speaks to the importance of the quality of the sources cited when they are used as an information retrieval system. This study investigates the source citation practices of five widely available chatbots-ChatGPT, Copilot, DeepSeek, Gemini, and Perplexity-across three academic disciplines-law, health sciences, and library and information sciences. Using 30 discipline-specific prompts grounded in the respective professional competency frameworks, the study evaluates source types, organizational affiliations, the accessibility of sources, and publication dates. Results reveal major differences between chatbots, which cite consistently different numbers of sources, with Perplexity and DeepSeek citing more and Copilot providing fewer, as well as between disciplines, where health sciences questions yield more scholarly source citations and law questions are more likely to yield blog and professional website citations. Paywalled sources and discipline-specific literature such as case law or systematic reviews are rarely retrieved. These findings highlight inconsistencies in chatbot citation practices and suggest discipline-specific limitations that challenge their reliability as academic search tools.”
I draw three conclusions from this:
- These AI chatbots are useful tools, but they need way more improvement, and shouldn’t be relied on 100%.
- Chatbooks are convenient. Students like convenience. Proof: How popular is carry-out pizza on a college campus.
- Paywalled data is valuable, but who is going to pay when the answers are free?
Will students use AI to complement old fashioned library research, writing, and memorizing? Sure they will. Do you want sausage or pepperoni on the pizza?
Whitney Grace, October 3, 2025
Hiring Problems: Yes But AI Is Not the Reason
October 2, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I read “AI Is Not Killing Jobs, Finds New US Study.” I love it when the “real” news professionals explain how hiring trends are unfolding. I am not sure how many recent computer science graduates, commercial artists, and online marketing executives are receiving this cheerful news.

The magic carpet of great jobs is flaming out. Will this professional land a new position or will the individual crash? Thanks, Midjourney. Good enough.
The write up states: “Research shows little evidence the cutting edge technology such as chatbots is putting people out of work.”
I noted this statement in the source article from the Financial Times:
Research from economists at the Yale University Budget Lab and the Brookings Institution think-tank indicates that, since OpenAI launched its popular chatbot in November 2022, generative AI has not had a more dramatic effect on employment than earlier technological breakthroughs. The research, based on an analysis of official data on the labor market and figures from the tech industry on usage and exposure to AI, also finds little evidence that the tools are putting people out of work.
That closes the doors on any pushback.
But some people are still getting terminated. Some are finding that jobs are not available. (Hey, those lucky computer science graduates are an anomaly. Try explaining that to the parents who paid for tuition, books, and a crash summer code academy session.)
“Companies Are Lying about AI Layoffs” provides a slightly different take on the jobs and hiring situation. This bit of research points out that there are terminations. The write up explains:
American employees are being replaced by cheaper H-1B visa workers.
If the assertions in this write up are accurate, AI is providing “cover” for what is dumping expensive workers and replacing them with lower cost workers. Cheap is good. Money savings… also good. Efficiency … the core process driving profit maximization. If you don’t grasp the imperative of this simply line of reasoning, ask an unemployed or recently terminated MBA from a blue chip consulting firm. You can locate these individuals in coffee shops in cities like New York and Chicago because the morose look, the high end laptop, and carefully aligned napkin, cup, and ink pen are little billboards saying, “Big time consultant.”
The “Companies Are Lying” article includes this quote:
“You can go on Blind, Fishbowl, any work related subreddit, etc. and hear the same story over and over and over – ‘My company replaced half my department with H1Bs or simply moved it to an offshore center in India, and then on the next earnings call announced that they had replaced all those jobs with AI’.”
Several observations:
- Like the Covid thing, AI and smart software provide logical ways to tell expensive employees hasta la vista
- Those who have lost their jobs can become contractors and figure out how to market their skills. That’s fun for engineers
- The individuals can “hunt” for jobs, prowl LinkedIn, and deal with the wild and crazy schemes fraudsters present to those desperate for work
- The unemployed can become entrepreneurs, life coaches, or Shopify store operators
- Mastering AI won’t be a magic carpet ride for some people.
Net net: The employment picture is those photographs of my great grandparents. There’s something there, but the substance seems to be fading.
Stephen E Arnold, October 2, 2025

