So Much AI and Now More Doom and Gloom
August 22, 2025
No AI. Just a dinobaby and a steam-powered computer in rural Kentucky.
Amidst the hype about OpenAI’s ChatGPT 5, I have found it difficult to identify some quiet but, to me, meaningful signals. One, in my opinion, appears in “Sam Altman Sounds Alarm on AI Crisis That Even He Finds Terrifying.” I was hoping that the article would provide some color on the present negotiations between Sam and Microsoft. For a moment, I envisioned Sam in a meeting with the principals of the five biggest backers of OpenAI. The agenda had one item: “When do we get our money back with a payoff, Mr. Altman?”
But no. The signal is that smart software will enable fast-moving, bureaucracy-free bad actors to apply it to online fraud. The write up says:
[Mr.] Altman fears that the current AI-fraud crisis will expand beyond voice cloning attacks, deepfake video call scams and phishing emails. He warns that in the future, FaceTime or video fakes may become indistinguishable from reality. The alarming abilities of current AI-technology in the hands of bad faith actors is already terrifying. Scammers can now use AI to create fake identification documents, explicit photos, and headshots for social media profiles.
Okay, he is on the money, but he overlooks one use case for smart software. A bad actor can use different smart software systems to equip existing malware with more interesting features. At some point, a clever bad actor will use AI to build a sophisticated money laundering mechanism that uses the numerous new cryptocurrencies and their attendant blockchain systems to make the wizards at Huione Guarantee look pretty pathetic.
Can this threat be neutralized? I don’t think it can be in the short term. The reason is that AI is here and has been available for more than a year. Code generation is getting easier. A skilled bad actor can, just like a Google-type engineer, become more productive. In the mid-term, the cyber security companies will roll out AI tools that, according to one outfit whose sales pitch I listened to last week, will “predict the future.” Yeah, sure. News flash: Once a breach has been discovered, the cyber security firms kick into action. If the predictive stuff were reliable, these outfits would be betting on horse races and investing in promising start-ups, not trying to build a cyber security company.
Mr. Altman captures significant media attention. His cyber fraud message is a faint signal amidst the cacophony of the AI marketing blasts. By the way, cyber fraud is booming, and our research into outfits like Telegram suggests that AI is a contributing factor.
With three new Telegram-type services in development at this time, the future for bad actors looks bright, and the future for cyber security firms looks increasingly reactive. For investors and those with retirement funds, the forecast is less cheery.
Stephen E Arnold, August 22, 2025