Deepfake Crime Surges With Scams
October 16, 2024
Just a humanoid processing information related to online services and information access.
Everyone with a brain knew that deepfakes (AI-generated images, videos, and audio) would be used for crime. According to Global Newswire, “Deepfake Fraud Doubles Down: 49% of Businesses Now Hit By Audio and Video Scams, Regula’s Survey Reveals.” Regula is a global developer of ID verification and forensic devices. The company released the survey “The Deepfake Trends 2024,” and it revealed some disturbing trends.
Regula’s survey discovered a 20% increase in deepfake video encounters since 2022. Meanwhile, 49% of fraud decision-makers across the globe report encountering deepfakes, and there is also a 12% rise in fake audio. What’s even more interesting is that bad actors are still using old methods for identity fraud scams:
“As Regula’s survey shows, 58% of businesses globally have experienced identity fraud in the form of fake or modified documents. This happens to be the top identity fraud method for Mexico (70%), the UAE (66%), the US (59%), and Germany (59%). This implies that not only do businesses have to adapt their verification methods to deal with new threats, but they also are forced to combat old threats that continue to pose a significant challenge.”
Deepfakes will only become more advanced and more damaging. Bad actors and their technology are like seasonal illnesses: they evolve every season with new ways to make people sick while still delivering the common cold.
Whitney Grace, October 16, 2024
Forget Surveillance Capitalism. Think Parasite Culture
October 15, 2024
Ted Gioia touts himself as The Honest Broker on his blog, and he recently posted about the current state of the economy: “Are We Now Living In A Parasite Culture?” In the opening, he provides examples of natural parasites before moving on to his experience working with parasite strategies.
Gioia says that when he consulted for Fortune 500 companies, he and others used parasite strategies as thought exercises. Here’s what a parasite strategy is:
1. “You allow (or convince) someone else to make big investments in developing a market—so they cover the cost of innovation, or advertising, or lobbying the government, or setting up distribution, or educating customers, or whatever. But…
2. You invest your energy instead on some way of cutting off these dutiful folks at the last moment—at the point of sale, for example. Hence…
3. You reap the benefits of an opportunity that you did nothing to create.”
On first reading, it doesn’t seem that our economy works that way, until he provides real examples: Facebook, Spotify, TikTok, and Google. All of these platforms are nothing more than central locations for people to post and share their content, or they aggregate content from the Internet. These platforms thrive off the creativity of their users, and their executive boards reap the benefits while the creators struggle to rub two cents together.
Smart influencers know to diversify their income streams through sponsorships, branding, merchandise, and more. Gioia points out that the Forbes list of billionaires includes people who used parasitical business strategies to get rich. He continues by saying that these parasites will keep guzzling their hosts’ lifeblood, even at the risk of killing said hosts.
It’s happening now in the creative economy with Big Tech’s investment in AI and how, despite lawsuits and laws, these companies are illegally training AI on creative works. He finishes with the obvious statement that politicians should be protecting people, but that they’re probably part of the problem. No duh.
Whitney Grace, October 15, 2024
AI Guru Says, “Yep, AI Doom Is Coming.” Have a Nice Day
October 15, 2024
Just a humanoid processing information related to online services and information access.
In science-fiction stories, it is a common storyline for the creation to turn against its creator. These stories serve as a warning to humanity of Titanic proportions: keep your ego in check. The “Godfather of AI,” Yoshua Bengio, advises much the same, though not in so many words, and he applies it to AI, as reported by Live Science: “Humanity Faces A ‘Catastrophic’ Future If We Don’t Regulate AI, ‘Godfather of AI’ Yoshua Bengio Says.”
Bengio is correct. He’s also a leading expert in artificial intelligence, a pioneer of artificial neural networks and deep learning algorithms, and a winner of the 2018 Turing Award. He is also the chair of the International Scientific Report on the Safety of Advanced AI, an advisory panel backed by the UN, the EU, and 30 nations. Bengio believes that AI, because it is being developed and adopted so quickly, will irrevocably harm human society.
He recently spoke at the HowTheLightGetsIn Festival in London about AI developing sentience and its associated risks. In his discussion, he says he backed off from his work because AI was moving too fast. He wanted to slow down AI development so humans would take more control of the technology.
He advises that governments enforce safety plans and regulations on AI. Bengio doesn’t want society to become too reliant on AI technology because, if there were a catastrophe, humans would be left to pick up the broken pieces. Big Tech companies are also using a lot more energy than they report, especially in their data centers. Big Tech companies are anything but green.
Thankfully, Big Tech is taking precautions against AI becoming a dangerous threat. He cites the AI Safety Institutes in the US and UK, which are working on testing models. Bengio wants AI to be developed, but not unregulated, and he wants nations to find common ground for the good of all:
“It’s not that we’re going to stop innovation, you can direct efforts in directions that build tools that will definitely help the economy and the well-being of people. So it’s a false argument.
We have regulation on almost everything, from your sandwich, to your car, to the planes you take. Before we had regulation we had orders of magnitude more accidents. It’s the same with pharmaceuticals. We can have technology that’s helpful and regulated, that is the thing that’s worked for us.
The second argument is that if the West slows down because we want to be cautious, then China is going to leap forward and use the technology against us. That’s a real concern, but the solution isn’t to just accelerate as well without caution, because that presents the problem of an arms race.
The solution is a middle ground, where we talk to the Chinese and we come to an understanding that’s in our mutual interest in avoiding major catastrophes. We sign treaties and we work on verification technologies so we can trust each other that we’re not doing anything dangerous. That’s what we need to do so we can both be cautious and move together for the well-being of the planet.”
Will this happen? Maybe.
The problem is countries don’t want to work together and each wants to be the most powerful in the world.
Whitney Grace, October 15, 2024
Flappy Bird Flutters to Life Thanks to the Power of the New Idol, Crypto
October 15, 2024
Just a humanoid processing information related to online services and information access.
Flappy Bird is coming out of retirement after a decade away. Launched in 2013, the original game was wildly popular and lucrative. However, less than a year later, its creator pulled it from app stores for being unintentionally addictive. Subsequently, players/addicts were willing to pay hundreds or thousands of dollars for devices that still had the game installed. Now it has reemerged as a Telegram crypto game. Much better. Decrypt reports, “What Is ‘Flappy Bird’ on Telegram? Iconic Game Returns with Crypto Twist.” Writer Ryan S. Gladwin tells us the game is basically the same as before, with a few additions just for crypto bros:
“Developed by the Flappy Bird Foundation, the Telegram game mixes in elements from other crypto games on the app, including the likes of Hamster Kombat, by allowing players to passively earn in-game points by obtaining upgrades. These are earned through a variety of ways, including watching ads and inviting friends.”
Naturally, a custom Flappy Bird token will be introduced. And, as with most of this year’s “tap-to-earn” games, it will reside on Telegram’s decentralized network, simply named The Open Network (TON). We learn:
"Yes, there will be a FLAP token launched in relation with the Telegram version of Flappy Bird. This has been confirmed in tweets from the official game account on Twitter (aka X), and the game will also offer staking rewards for the future token. Previously, The Flappy Bird Foundation said that it has plans to integrate The Open Network (TON)—the network that most tap-to-earn games launch tokens on. Notcoin, the tap-to-earn game that started the Telegram craze with the largest crypto gaming token launch of the year, is the ‘strategic publishing partner’ for Flappy Bird’s return. This partnership is set to help introduce The Open Network (TON) ecosystem to Flappy Bird with the game starting a ‘free mining event’ at launch called ‘Flap-a-TON.’ A mining event is usually a period of time in which players can make gameplay progress to get a cut of a future token airdrop.”
What a cutting-edge way to maximize engagement. If he was so upset about his game’s addictive qualities, why did creator Dong Nguyen sell it to an outfit that meant to crypto-tize it? In fact, he did not. After the game languished for four years, the trademark was deemed abandoned. A firm called Mobile Media Partners Inc. snapped it up and later sold it to one Gametech Holdings LLC, from whom the Flappy Bird Foundation bought it earlier this year. That must have been quite a surprise to the conscientious developer. Not only were Nguyen’s wishes for his game completely disregarded, but he is also receiving no compensation from the game’s reemergence. Classy.
Cynthia Murrell, October 15, 2024
FOGINT: UN Says Telegram Is a Dicey Outfit
October 14, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
One of my colleagues forwarded a dump truck of links to articles about a UN Report. Before commenting on the report, I want to provide a snapshot of the crappy Web search tools and the useless “search” function on the UN Web site.
First, the title of the October 2024 report is:
Transnational Organized Crime and the Convergence of Cyber-Enabled Fraud, Underground Banking and Technological Innovation in Southeast Asia: A Shifting Threat Landscape
I want to point out that providing a full title in an online article is helpful to some dinobabies like me.
Second, including an explicit link to a document is also appreciated by some people, most of whom are over 25 years in age, of above average intelligence, and interested in online crime. With that in mind, here is the explicit link to the document:
https://www.unodc.org/roseap/uploads/documents/Publications/2024/TOC_Convergence_Report_2024.pdf
Now let’s look briefly at what the 142-page report says:
Telegram is a dicey outfit.
Not bad: 142 pages compressed to five words. Let’s look at a few specifics, and then I encourage you to read the full report and draw your own conclusions about the quite clever outfit Telegram.
The first passage which caught my attention was a list of the specialized software and services firms paying attention to Telegram. The list is important because most of these outfits make their presence known to law enforcement and intelligence entities, not the TikTok-type crowd:
Bitrace
Chainalysis
Chainargos
Chainvestigate
ChongLuaDao (Viet Nam)
Coeus
Crystal Intelligence
CyberArmor
Flare Systems
Flashpoint
Group-IB
Hensoldt Analytics
Intel 471
Kela
Magnet Forensics
Resecurity
Sophos
SlowMist
Trend Micro
TRM Labs
Other firms played ball with the UN as well, but those companies may have suggested, “Don’t tell anyone we assisted.” That’s my view; yours may differ.
The second interesting passage in the document for me was:
Southeast Asia faces unprecedented challenges posed by transnational organized crime and illicit economies. The region is witnessing a major convergence of different crime types and criminal services fueled by rapid and shifting advancements in physical, technological, and digital infrastructure which have allowed organized crime networks to expand these operations.
Cyber crime is the hot ticket in Southeast Asia. I would suggest that the Russian oligarchs are likely to get a run for their money if these well-groomed financial wizards try to muscle in on what is a delightful mix of Triads, sleek MBAs, and testosterone-fueled crypto kiddies with motos, weapons, and programming expertise. The mix of languages, laws, rules, and special-purpose trade zones adds some zest to the run-of-the-mill brushing activities. I will not suggest that many individuals who visit or live in Southeast Asia have a betting gene, but the idea is one worthy of Stuart Kauffman and his colleagues at the Santa Fe Institute. Gambling emerges from chaos and good old greed.
A third passage which I circled addressed Telegram. By the way, “Telegram” appears more than 100 times in the document. Here’s the snippet:
Providing further indication of criminal activity, Kokang casinos and associated companies have developed a robust presence across so-called ‘grey and black business’ Telegram channels facilitating cross-border ‘blockchain’ gambling, underground banking, money laundering, and related recruitment in Myanmar, Cambodia, China, and several other countries in East and Southeast Asia.
The key point to me is that this is a workflow process with a system and method spanning countries. The obvious problem is, “Whom does law enforcement arrest?” Another issue: “Where is the Telegram server?” The answer to the first question is, “In France.” The second question is trickier and an issue the report does not address. This is a problematic omission. The answer to “Where is the Telegram server?” is, “In lots of places.” Telegram is into dApps, or decentralized applications. The servers outside of Moscow and St Petersburg are virtual. The providers or enablers of Telegram probably don’t know Telegram is a customer and have zero clue what’s going on in the virtual machines running Telegram’s beefy infrastructure.
The report is worth reading. If you are curious about Telegram’s plumbing, please, write benkent2020 at yahoo dot com. The FOGINT team has a lecture about the components of the Telegram architecture as well as some related information about the company’s most recent social plays.
Stephen E Arnold, October 14, 2024
An Emergent Behavior: The Big Tech DNA Proves It
October 14, 2024
Writer Mike Masnick at TechDirt makes quite the allegation: “Big Tech’s Promise Never to Block Access to Politically Embarrassing Content Apparently Only Applies to Democrats.” He contends:
“It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when that political figure is a Democrat. If it’s a Republican, then of course the content will be suppressed, and the GOP officials who demanded that big tech never ever again suppress such content will look the other way.”
The basis for Masnick’s charge of hypocrisy lies in a tale of two information leaks. Tech execs and members of Congress responded to each data breach very differently. Recently, representatives from both Meta and Google pledged to Senator Tom Cotton at a Senate Intelligence Committee hearing to never again “suppress” news as they supposedly did in 2020 with the Hunter Biden laptop story. At the time, those platforms were leery of circulating that story until it could be confirmed.
Less than two weeks after that hearing, journalist Ken Klippenstein published the Trump campaign’s internal vetting dossier on JD Vance, a document believed to have been hacked by Iran. That sounds like just the sort of newsworthy, if embarrassing, story that conservatives believe should never be suppressed, right? Not so fast: Trump mega-supporter Elon Musk immediately banned Klippenstein’s X account and blocked all links to his Substack. Similarly, Meta blocked links to the dossier across its platforms. That goes further than the company ever did with the Biden laptop story, the post reminds us. Finally, Google now prohibits users from storing the dossier on Google Drive. See the article for more of Masnick’s reasoning. He concludes:
“Of course, the hypocrisy will stand, because the GOP, which has spent years pointing to the Hunter Biden laptop story as their shining proof of ‘big tech bias’ (even though it was nothing of the sort), will immediately, and without any hint of shame or acknowledgment, insist that of course the Vance dossier must be blocked and it’s ludicrous to think otherwise. And thus, we see the real takeaway from all that working of the refs over the years: embarrassing stuff about Republicans must be suppressed, because it’s doxing or hacking or foreign interference. However, embarrassing stuff about Democrats must be shared, because any attempt to block it is election interference.”
Interesting. But not surprising.
Cynthia Murrell, October 14, 2024
Apple: The Company Follows Its Rules Consistently, Right Mr. Putin
October 14, 2024
Just a humanoid processing information related to online services and information access.
Russia is tightening regulations on Internet privacy and monitoring. In other words, it wants more control of the Internet and its citizens. Money Control reports the following: “Apple Removes Multiple VPN Apps From The App Store In Russia, Here’s Why.”
While Apple reduced its services in Russia because of the latter’s war on Ukraine, the App Store still runs. Other countries criticize Apple for continuing to offer its goods and services in Russia and for complying with its mandates. It is debatable why Apple did the following:
“The App Censorship Project found that more than 60 apps, including some of the best VPN (Virtual Private Network) services on the market, were silently removed by Apple between early July and September 18, 2024. Another investigation from anti-censorship advocacy group GreatFire reveals that Apple has barred over 20% of known VPN apps from being available in Russia.”
The number of VPN removals exceeds the 25 that Roskomnadzor reported on in June 2024. VPNs are still permissible in Russia, but the loss of so many privacy options demonstrates that the country wants to crack down on and control how its citizens use the Internet.
Apple probably removed the VPNs to comply with Russian authorities, like Google had to do last year with YouTube. Apple also complies with China’s VPN and messenger ban.
Authoritarian governments are horrible but they also present big business opportunities. Apple and other Big Tech companies are happy to comply with regulations as long as they continue to rake in money.
Whitney Grace, October 14, 2024
AI: New Atlas Sees AI Headed in a New Direction
October 11, 2024
I like the premise of “AI Begins Its Ominous Split Away from Human Thinking.” Neural nets trained by humans on human information are going in their own direction. Whom do we thank? The neural net researchers? The Googlers who conceived of “the transformer”? The online advertisers who have provided significant sums of money? The “invisible hand” tapping on a virtual keyboard? Maybe quantum entanglement? I don’t know.
I do know that New Atlas’ article states:
AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.
But isn’t that the point? The high school science club types beavering away in the smart software vineyards know the catchphrase:
Boldly go where no man has gone before!
The big outfits able to buy fancy chips and try to start mothballed nuclear plants have “boldly gone where no man has gone before.” Get in the way of one of these captains of the starship US AI, and you will be terminated, harassed, or forced to quit. If you are not boldly going, you are just not going.
The article says ChatGPT 4 whatever is:
… the first LLM that’s really starting to create that strange, but super-effective AlphaGo-style ‘understanding’ of problem spaces. In the domains where it’s now surpassing Ph.D.-level capabilities and knowledge, it got there essentially by trial and error, by chancing upon the correct answers over millions of self-generated attempts, and by building up its own theories of what’s a useful reasoning step and what’s not.
But, hey, it is pretty clear where AI is going from New Atlas’ perch:
OpenAI’s o1 model might not look like a quantum leap forward, sitting there in GPT’s drab textual clothing, looking like just another invisible terminal typist. But it really is a step-change in the development of AI – and a fleeting glimpse into exactly how these alien machines will eventually overtake humans in every conceivable way.
But if the AI goes its own way, how can a human “conceive” where the software is going?
Doom and fear work for the evening news (or what passes for the evening news). I think there is a cottage industry of AI doomsters working diligently to stop some people from fooling around with smart software. That is not going to work. Plus, the magical “transformer” thing is a culmination of years of prior work. It is simply one more step in the more than 50-year effort to process content.
This “stage” seems to have some utility, but more innovations will come. They have to. I am not sure how one stops people with money hunting for people who can say, “I have the next big thing in AI.”
Sorry, New Atlas, I am not convinced. Plus, I don’t watch movies or buy into most AI wackiness.
Stephen E Arnold, October 11, 2024
Cyber Criminals Rejoice: Quick Fraud Development Kit Announced
October 11, 2024
I am not sure the well-organized and managed OpenAI intended to make cyber criminals excited about their future prospects. Several Twitter enthusiasts pointed out that OpenAI makes it possible to develop an app in 30 seconds. Prashant posted:
App development is gonna change forever after today. OpenAI can build an iPhone app in 30 seconds with a single prompt. [emphasis added]
The expert demonstrating this programming capability was Romain Huet. The announcement of the capability débuted at OpenAI’s Dev Day.
A clueless dinobaby is not sure what this group of youngsters is talking about. An app? Pictures of a slumber party? Thanks, MSFT Copilot, good enough.
What does a single prompt mean? That’s not clear to me at the moment. Time is required to assemble the prompt, run it, check the outputs, and then fiddle with the prompt. Once the prompt is in hand, it is easy to pop it into o1 and marvel at the 30-second output. Instead of coding, one prompts. Zip up that text file and sell it on Telegram. Make big bucks or little STARS and TONcoins. With some cartwheels, it is sort of money.
Is this quicker than other methods of cooking up an app? For example, some folks can do some snappy app development with Telegram’s BotFather service.
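For readers who have not tried the prompt-instead-of-code workflow, here is a minimal sketch of what it looks like in practice. It assumes the standard openai Python client; the model name, the prompt, and the output file are illustrative stand-ins, not the demo OpenAI ran.

```python
# Minimal sketch of a prompt-driven "app in one prompt" workflow.
# Assumptions: the `openai` Python package (v1+) is installed and an API key
# is set in the OPENAI_API_KEY environment variable. The model name, prompt,
# and output filename are illustrative, not taken from OpenAI's Dev Day demo.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Write a single-file SwiftUI iPhone app that shows a tappable counter "
    "with a reset button. Return only the Swift source code."
)

response = client.chat.completions.create(
    model="o1-mini",  # hypothetical choice; any capable model slots in here
    messages=[{"role": "user", "content": prompt}],
)

# The "app" is whatever text comes back; a human still has to compile it,
# check the output, and revise the prompt, as described above.
generated_code = response.choices[0].message.content
with open("GeneratedApp.swift", "w", encoding="utf-8") as handle:
    handle.write(generated_code)
```

The point is the loop, not the model: assemble a prompt, run it, inspect the result, fiddle again. That loop is what makes the capability attractive to hobbyists and, as the observations below suggest, to bad actors.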
Let’s step back from the 30-second PR event.
Several observations are warranted.
First, programming certain types of software is becoming easier using smart software. That means that a bad actor may be able to craft a phishing play more quickly.
Second, specialized skills embedded in smart software open the door to scam automation. Scripts can generate other needed features of a scam. What once was a simple automated bogus email becomes an orchestrated series of actions.
Third, the increasing cross-model integration suggests that a bad actor will be able to add a video or audio delivering a personalized message. With some fiddling, a scam can use a phone call to a target and follow that up with an email. To cap off the scam, a machine-generated Zoom-type video call makes a case for the desired action.
The key point is that legitimate companies may want to have people they manage create a software application. However, is it possible that smart software vendors are injecting steroids into a market given little thought by most people? What is that market? I am thinking that bad actors are often among the earlier adopters of new, low cost, open source, powerful digital tools.
I like the gee whiz factor of the OpenAI announcement. But my enthusiasm is a fraction of that experienced by bad actors. Sometimes restraint and judgment may be more helpful than “wow, look at what we have created” show-and-tell presentations. Remember. I am a dinobaby and hopelessly out of step with modern notions of appropriateness. I like it that way.
Stephen E Arnold, October 11, 2024
The GoldenJackals Are Running Free
October 11, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
Remember the joke about security. Unplugged computer in a locked room. Ho ho ho. “Mind the (Air) Gap: GoldenJackal Gooses Government Guardrails” reports that security is getting more difficult. The write up says:
GoldenJackal used a custom toolset to target air-gapped systems at a South Asian embassy in Belarus since at least August 2019… These toolsets provide GoldenJackal a wide set of capabilities for compromising and persisting in targeted networks. Victimized systems are abused to collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems. The ultimate goal of GoldenJackal seems to be stealing confidential information, especially from high-profile machines that might not be connected to the internet.
What’s interesting is that the sporty folks at GoldenJackal can access the equivalent of the unplugged computer in a locked room. Not exactly, of course, but allegedly darned close.
Microsoft Copilot does a great job of presenting an easy to use cyber security system and console. Good work.
The cyber experts revealing this exploit learned of it in 2020. I think that is more than three years ago. I noted the story in October 2024. My initial question was, “What took so long to provide some information which is designed to spark fear and ESET sales?”
The write up does not tackle this question but the write up reveals that the vector of compromise was a USB drive (thumb drive). The write up provides some detail about how the exploit works, including a code snippet and screen shots. One of the interesting points in the write up is that Kaspersky, a recently banned vendor in the US, documented some of the tools a year earlier.
The conclusion of the article is interesting; to wit:
Managing to deploy two separate toolsets for breaching air-gapped networks in only five years shows that GoldenJackal is a sophisticated threat actor aware of network segmentation used by its targets.
Several observations come to mind:
- Repackaging and enhancing existing malware into tool bundles demonstrates the value of blending old and new methods.
- The 60-month time lag suggests that the GoldenJackal crowd is organized and willing to invest time in crafting a headache inducer for government cyber security professionals.
- With the plethora of cyber alert firms monitoring everything from secure “work use only” laptops to useful outputs from a range of devices, systems, and apps, why was only one company sufficiently alert or skilled to explain the droppings of the GoldenJackal?
I learn about new exploits every couple of days. What is now clear to me is that a cyber security firm which discovers something novel does so by accident. This leads me to formulate the hypothesis that most cyber security services are not particularly good at spotting what I would call “repackaged systems and methods.” With a bit of lipstick, bad actors are able to operate for what appears to be significant periods of time without detection.
If this hypothesis is correct, US government memoranda, cyber security white papers, and academic-type articles may be little more than puffery. “Puffery,” as we have learned, is no big deal. Perhaps that is what expensive cyber security systems and services are to bad actors: no big deal.
Stephen E Arnold, October 11, 2024