An Interesting Piece of Free Software: FreeVPN
August 28, 2025
No AI. Just a dinobaby working the old-fashioned way.
I often hear about the wonders of open source software. Even an esteemed technologist like Pavel Durov offers free and open source software. He wants to make certain aspects of Telegram transparent. “Transparent” is a popular word in some circles. China releases Qwen and it is free. The commercial variants are particularly stimulating. Download free and open source software. If you run into a problem, just fix it yourself. Alternatively you can pay for “commercial for fee” support. Choice! That’s the right stuff.
I read “Chrome VPN Extension with 100K Installs Screenshots All Sites Users Visit.” Note: By the time you read this, the Googlers may have blocked this extension or the people who rolled out this digital Trojan horse may have modified the extension’s behavior to something slightly less egregious.
Now back to the Trojan horse with a saddle blanket displaying the word “spyware.” I quote:
FreeVPN.One, a Chrome extension with over 100,000 installs and a verified badge on the Chrome Web Store, is exposed by researchers for taking screenshots of users’ screens and exfiltrating them to remote servers. A Koi Security investigation of the VPN tool reveals that it has been capturing full-page screenshots from users’ browsers, logging sensitive visual data like personal messages, financial dashboards, and private photos, and uploading it to aitd[.]one, a domain registered by the extension’s developer.
The explanation makes clear that one downloads and installs or activates a Chrome extension. Then the software sends data to the actor deploying the malware.
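The write up does not share the extension’s source, but the pattern is often visible from the outside. Here is a hypothetical sketch (not the FreeVPN.One code) of auditing a Chrome extension’s manifest.json for the permission combination that makes this kind of screen grabbing possible; the “risky” lists below are my own assumptions about what warrants a second look:

```python
import json

# Permissions that, in combination with broad host access, let an extension
# read or capture tab content. (Hypothetical audit list; adjust to taste.)
RISKY_PERMISSIONS = {"tabs", "activeTab", "desktopCapture", "webRequest"}
RISKY_HOSTS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def audit_manifest(manifest: dict) -> list[str]:
    """Return human-readable warnings for a Chrome extension manifest."""
    warnings = []
    perms = set(manifest.get("permissions", []))
    hosts = set(manifest.get("host_permissions", [])) | perms
    for p in sorted(perms & RISKY_PERMISSIONS):
        warnings.append(f"uses '{p}' permission (can read or capture tab content)")
    if hosts & RISKY_HOSTS:
        warnings.append("requests access to every site the user visits")
    return warnings

if __name__ == "__main__":
    # A made-up manifest for illustration only.
    sample = json.loads("""{
        "name": "SomeFreeVPN",
        "permissions": ["tabs", "storage"],
        "host_permissions": ["<all_urls>"]
    }""")
    for w in audit_manifest(sample):
        print("WARNING:", w)
```

A “tabs” permission plus “&lt;all_urls&gt;” host access is not proof of spying, but it is roughly the minimum an extension needs to capture and exfiltrate what a user sees.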
The developer says:
The extension’s developer claimed to Koi Security that the background screenshot functionality is part of a “security scan” intended to detect threats.
Whom does one believe? The threat detection outfit or the developer?
Can you recall a similar service? Hint: Capitalize the “r” in “Recall.”
Can the same stealth (clumsy stealth in some cases) exist in other free software? Does a jet aircraft stay aloft when its engines fail?
Stephen E Arnold, August 28, 2025
A Better Telegram: Max (imum) Surveillance
August 27, 2025
No AI. Just a dinobaby working the old-fashioned way.
The super duper everything apps include many interesting functions. But one can spice up a messaging app with a bit of old-fashioned ingenuity. The newest “player” in the “secret” messaging game is not some knock-off Silicon Valley service. The MAX app has arrived.
Reuters reported in “Russia Orders State-Backed MAX Messenger App, a WhatsApp Rival, Pre-Installed on Phones and Tablets.” (Did you notice the headline did not include Telegram?) The trusted news source says:
A Russian state-backed messenger application called MAX, a rival to WhatsApp that critics say could be used to track users, must be pre-installed on all mobile phones and tablets from next month, the Russian government said on Thursday. The decision to promote MAX comes as Moscow is seeking greater control over the internet space as it is locked in a standoff with the West over Ukraine, which it casts as part of an attempt to shape a new world order.
I like the inclusion of a reference to “a new world order.”
The trusted news source adds:
State media says accusations from Kremlin critics that MAX is a spying app are false and that it has fewer permissions to access user data than rivals WhatsApp and Telegram.
Yep, Telegram. Several questions:
- Are any of the companies supporting MAX providing services to Telegram?
- Were any of the technologists working on MAX associated with VKontakte or Telegram?
- Will other countries find the MAX mandated installation an interesting idea?
- How does MAX intersect with data captured from Russia-based telecom outfits and online service providers?
I can’t answer these questions, but I would think that a trusted news service would.
Stephen E Arnold, August 27, 2025
What Cyber Security Professionals “Fear”
August 21, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
My colleague Robert David Steele (now deceased) loved to attend Black Hat. He regaled me with the changing demographics of the conference, the reaction to his often excitement-inducing presentations, and the interesting potential “resources” he identified. I was content to stay in my underground office in rural Kentucky and avoid the hacking and posturing.
I still keep up (sort of but not too enthusiastically) with Black Hat events by reading articles like “Black Hat 2025: What Keeps Cyber Experts Up at Night?” The write up explains that:
“Machines move faster than humans.”
Okay, that makes sense. The write up then points out:
“Tools like generative AI are fueling faster, more convincing phishing and social engineering campaigns.”
I concluded that cyber security professionals fear fast computers and smart software. When these two things are combined, the write up states:
The speed of AI innovation is stretching security management to its limits.
My conclusion is that the wide availability of smart software is the big “fear.”
I interpret the information in the write up from a slightly different angle. Let me explain.
First, cyber security companies have to make money to stay in business. I could name one Russian outfit that gets state support, but I don’t want to create waves. Let’s agree that money is the driver of cyber security. In order to make money, the firms have to come up with fancy ways of explaining DNS analysis, some fancy math, or yet another spin on the Maltego graph software. I understand.
Second, cyber security companies are by definition reactive. So far the integration of smart software into the policeware and intelware systems I track adds some workflow enhancements; for example, grouping information and in some cases generating a brief paragraph, thus saving time. Proactive perimeter defense systems and cyber methods designed to spot insider attacks are in what I call “sort of helpful” mode. These systems can easily overwhelm the person monitoring the data signals. Firms respond by popping up a level with another layer of abstraction. Those using the systems are busy, of course, and it is not clear if more work gets done or if time is bled off to do busy-work. Cyber security firms, therefore, are usually not in proactive mode except for marketing.
Third, cyber security firms are consolidating. I think about outfits like Palo Alto Networks or the private equity roll ups. The result is that bureaucratic friction is added to the technology development these firms must do. Just figuring out how to snag data from the latest and greatest Dark Web secret forum and actually getting access to a Private Channel on Telegram disseminating content that is illegal in many jurisdictions takes time. With smart software, bad actors can experiment. The self-appointed gatekeepers do little to filter these malware activities because some bad actors are customers of the gatekeepers. (No, I won’t name firms. I don’t want to talk to lawyers or inflamed cyber security firms’ leadership.) My point is that consolidation creates bureaucratic work. That activity puts a foot on the fast-moving cyber firm’s brakes. Reaction time slows.
What does this mean?
I think the number one fear for cyber security professionals may be the awareness that bad actors with zero bureaucratic, technical, or financial limits can use AI to make old wine new again. Recently a major international law enforcement organization announced the shutdown of particular stealer software. Unfortunately that stealer is currently being disseminated via Web search systems with live links to the Telegram-centric vendor pumping the malware into thousands of unsuspecting Telegram users each month.
What happens when that “old school” stealer is given some new capabilities by one of the smart software tools? The answer is, “Cyber security firms may have to hype their capabilities to an even greater degree than they now do.” Behind the scenes, the stage is now set for developer burnout and churn.
The fear, then, is a nagging sense that bad guys may be getting a tool kit to punch holes in what looks like a slam dunk business. I am probably wrong because I am a dinobaby. I don’t go to many conferences. I don’t go to sales meetings. I don’t meet with private equity people. I just look at how AI makes asymmetric cyber warfare into a tough game. One should not take a squirt gun to a shoot-out with a bad actor who operates without bureaucratic and financial restraints and is armed with an AI system.
Stephen E Arnold, August 21, 2025
Cyber Security: Evidence That Performance Is Different from Marketing
August 20, 2025
This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.
In 2022, Google bought a cyber security outfit named Mandiant. The firm had been around since 2004, but when Google floated more than $5 billion for the company, it was time to sell.
If you don’t recall, Google operates a large cloud business and is trying diligently to sell to Microsoft customers in the commercial and government sectors. A cyber security outfit would allow Google to argue that it would offer better security for its customers and their users.
Mandiant’s business was threat intelligence. The idea is that Mandiant would monitor forums, the Web, and any other online information about malware and other criminal cyber operations. As an added bonus, Mandiant would blend automated security functions with its technology. Wham, bam! Slam dunk, right?
I read “Google Confirms Major Security Breach After Hackers Linked To ShinyHunters Steal Sensitive Corporate Data, Including Business Contact Information, In Coordinated Cyberattack.” First, a disclaimer. I have no idea if this WCCF Tech story is 100 percent accurate. It could be one of those Microsoft “1,000 Russian programmers are attacking us” plays. On the other hand, it will be fun to assume that some of the information in the cited article is accurate.
With that as background, I noted this passage:
The tech giant has recently confirmed a data breach linked to the ShinyHunters ransomware group, which targeted Google’s corporate Salesforce database systems containing business contact information.
Okay. Google’s security did not work. A cloud customer’s data were compromised. The assertion that Google’s security is better than or equal to Microsoft’s is tough for me to swallow.
Here’s another passage:
As per Google’s Threat Intelligence Group (GTIG), the hackers used a voice phishing technique that involved calling employees while pretending to be members of the internal IT team, in order to have them install an altered version of Salesforce’s Data Loader. By using this technique, the attackers were able to access the database before their intrusion was detected.
A human fooled another human. The automated systems were flummoxed. The breach allegedly took place.
Several observations are warranted:
- This is security until a breach occurs. I am not sure that customers expect this type of “footnote” to their cyber security licensing mumbo jumbo. The idea is that Google should deliver a secure service.
- Mandiant, like other threat intelligence services, allows the customer to assume that the systems and methods generally work. That’s true until they don’t.
- Bad actors have an advantage. Armed with smart software and tools that can emulate my dead grandfather, the humans remain a chink in the otherwise much-hyped armor of an outfit like Google.
What this example, even if only partly accurate, makes clear is that cyber security marketing performs better than the systems some of the firms sell. Consider that the victim was Google. That company has touted its technical superiority for decades. Then Google buys extra security. The combo delivers what? Evidence that believing the cyber security marketing may do little to reduce the vulnerability of an organization. What’s notable is that the missteps were Google’s. Microsoft may enshrine this breach case and mount it on the wall of every cyber security employee’s cubicle.
I can imagine hearing a computer-generated voice emulating Bill Gates’, saying, “It wasn’t us this time.”
Stephen E Arnold, August 20, 2025
News Flash from the Past: Bad Actors Use New Technology and Adapt Quickly
August 18, 2025
No AI. Just a dinobaby working the old-fashioned way.
NBC News is on top of cyber security trends. I think someone spotted an Axios report that bad actors were using smart software to outfox cyber security professionals. I am not sure this is news, but what do I know?
“Criminals, Good Guys and Foreign Spies: Hackers Everywhere Are Using AI Now” reports this “hot off the press” information. I quote:
The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.
My goodness. Who knew that stealers have been zipping around for many years? Even more startling old information is:
LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents. The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster.
Stunning. A free chunk of smart software, unemployed or intra-gig programmers, and juicy targets pushed out with a fairy land of vulnerabilities. Isn’t it insightful that bad actors would apply these tools to clueless employees, inherently vulnerable operating systems, and companies too busy outputting marketing collateral to do routine security updates?
The cat-and-mouse game works this way. Bad actors with access to useful scripting languages, programming expertise, and smart software want to generate revenue or wreak havoc. One individual or perhaps a couple of people in a coffee shop hit upon a better way to access a corporate network or obtain personally identifiable information from a hapless online user.
Then, after the problem has been noticed and reported, cyber security professionals will take a closer look. If these outfits have smart software running, a human will look more closely at logs and say, “I think I saw something.”
Okay, mice are in and swarming. Now the cats jump into action. The cats will find [a] a way to block the exploit, [b] rush to push the fix to paying customers, and [c] share the information in a blog post or a conference.
What happens? The bad actors notice their mice aren’t working or are being killed instantly. The bad actors go back to work. In most cases, the bad actors are unencumbered by bureaucracy or tough thought problems about whether something is legal or illegal. The bad actors launch more attacks. If one works, it’s gravy.
Now the cats jump back into the fray.
In the current cyber crime world, cyber security firms, investigators, and lawyers are in reactive mode. The bad actors play offense.
One quick example: Telegram has been enabling a range of questionable online activities since 2013. In 2024, after more than a decade of inaction, France said, “Enough.” Authorities in France arrested Pavel Durov. The problem from my point of view is that it took that long to man up to the icon Pavel Durov.
What happens when a better Telegram comes along built with AI as part of its plumbing?
The answer is, “You can buy licenses to many cyber security systems. Will they work?”
There are some large, capable mice out there in cyber space.
Stephen E Arnold, August 18, 2025
A Security Issue? What Security Issue? Security? It Is Just a Normal Business Process.
July 23, 2025
Just a dinobaby working the old-fashioned way, no smart software.
I zipped through a write up called “A Little-Known Microsoft Program Could Expose the Defense Department to Chinese Hackers.” The word program does not refer to Teams or Word, but to a business process. If you are into government procurement, contractor oversight, and the exciting world of inspectors general, you will want to read the 4,000-word-plus write up.
Here’s a passage I found interesting:
Microsoft is using engineers in China to help maintain the Defense Department’s computer systems — with minimal supervision by U.S. personnel — leaving some of the nation’s most sensitive data vulnerable to hacking from its leading cyber adversary…
The balance of the cited article explains what is going on with a business process implemented by Microsoft as part of a government contract. There are lots of quotes, insider jargon like “digital escort,” and suggestions that the whole approach is — how can I summarize it? — ill advised, maybe stupid.
Several observations:
- Someone should purchase a couple of hundred copies of Apple in China by Patrick McGee, make it required reading, and then hold some informal discussions. These can be modeled on what happens in the seventh grade; for example, “What did you learn about China’s approach to information gathering?”
- A hollowed out government creates a dependence on third parties. These vendors do not explain how outsourcing works. Thus, mismatches exist between government executives’ assumptions and the reality of how third-party contractors fulfill the contract.
- Weaknesses in procurement, oversight, and continuous monitoring by auditors encourage shortcuts. These are not issues that have arisen in the last day or so. These are institutional and vendor procedures that have existed for decades.
Net net: My view is that some problems are simply not easily resolved. It is interesting to read about security lapses caused by back office and legal processes.
Stephen E Arnold, July 23, 2025
Technology Firms: Children of Shoemakers Go Barefoot
July 7, 2025
If even the biggest of Big Tech firms are not safe from cyberattacks, who is? Investor news site Benzinga reveals, “Apple, Google and Facebook Among Services Exposed in Massive Leak of More than 16 Billion Login Records.” The trove represents one of the biggest exposures of personal data ever, writer Murtuza J. Merchant tells us. We learn:
“Cybersecurity researchers have uncovered 30 massive data collections this year alone, each containing tens of millions to over 3.5 billion user credentials, Cybernews reported. These previously unreported datasets were briefly accessible through misconfigured cloud storage or Elasticsearch instances, giving the researchers just enough time to detect them, though not enough to trace their origin. The findings paint a troubling picture of how widespread and organized credential leaks have become, with login information originating from malware known as infostealers. These malicious programs siphon usernames, passwords, and session data from infected machines, usually structured as a combination of a URL, username, and password.”
Ah, advanced infostealers. One of the many handy tools AI has made possible. The write-up continues:
“The leaked credentials span a wide range of services from tech giants like Apple, Facebook, and Google, to platforms such as GitHub, Telegram, and various government portals. Some datasets were explicitly labeled to suggest their source, such as ‘Telegram’ or a reference to the Russian Federation. … Researchers say these leaks are not just a case of old data resurfacing.”
Not only that, the data’s format is cybercriminal-friendly. Merchant writes:
“Many of the records appear recent and structured in ways that make them especially useful for cybercriminals looking to run phishing campaigns, hijack accounts, or compromise corporate systems lacking multi-factor authentication.”
But it is the scale of these datasets that has researchers most concerned. The average collection held 500 million records, while the largest had more than 3.5 billion. What are the chances your credentials are among them? The post suggests the usual, most basic security measures: complex and frequently changed passwords and regular malware scans. But surely our readers are already observing these best practices, right?
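One concrete way to act on that advice is the Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity: only the first five hex characters of the password’s SHA-1 hash are transmitted, so the password itself never leaves your machine. A minimal sketch (the api.pwnedpasswords.com endpoint is the public one; error handling and rate limiting are omitted):

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    sent to the API and the 35-character suffix that stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(password: str) -> int:
    """Return how many known breaches contain the password, per the
    Have I Been Pwned range API. Only the hash prefix is transmitted."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "SUFFIX:COUNT"; compare against our local suffix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Example usage (makes a network call):
# print(breach_count("password123"))
```

Any password that comes back with a nonzero count belongs in the retirement bin, however clever it once seemed.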
Cynthia Murrell, July 7, 2025
Sharp Words about US Government Security
May 22, 2025
No AI. Just a dinobaby who gets revved up with buzzwords and baloney.
On Monday (April 29, 2025), I am headed to the US National Cyber Crime Conference. I am 80, and I don’t do too many “in person” lectures. Heck, I don’t do many lectures anymore, period. A candidate for the rest home or an individual ready for a warehouse for the soon-to-die is a unicorn amidst the 25-to-50-year-old cyber fraud specialists, law enforcement professionals, and government investigators.
In my lectures, I steer clear of political topics. This year, I have been assigned a couple of topics which the NCCC organizers know attract a couple of people out of the thousand or so attendees. One topic concerns changes in the Dark Web. Since I wrote “Dark Web Notebook” years ago, my team and I keep track of what’s new and interesting in the world of the Dark Web. This year, I will highlight three or four services which caught our attention. The other topic is my current research project: Telegram. I am not sure how I became interested in this messaging service, but my team and I will make available to law enforcement, crime analysts, and cyber fraud investigators a monograph modeled on the format we used for the “Dark Web Notebook.”
I am in a security mindset before the conference. I am on the lookout for useful information which I can use as a point of reference or as background information. Despite my age, I want to appear semi-competent. Thus, I read “Signalgate Lessons Learned: If Creating a Culture of Security Is the Goal, America Is Screwed.” I think the source publication is British. The author may be an American journalist.
Several points in the write up caught my attention.
First, the write up makes a statement I found interesting:
And even if they are using Signal, which is considered the gold-standard for end-to-end chat encryption, there’s no guarantee their personal devices haven’t been compromised with some sort of super-spyware like Pegasus, which would allow attackers to read the messages once they land on their phones.
I did not know that Signal was “considered the gold standard for end-to-end chat encryption.” I wonder if there are some data to back this up.
Second, is NSO Group’s Pegasus “super spyware”? My information suggests that there are more modern methods. Some link to Israel, but others connect to other countries; for example, Spain, the Czech Republic, and others. I am not sure what “super” means, and the write up does not offer much other than a nebulous adjectival “super spyware.”
Third, these two references are fascinating:
“The Salt Typhoon and Volt Typhoon campaigns out of China demonstrate this ongoing threat to our telecom systems. Circumventing the Pentagon’s security protocol puts sensitive intelligence in jeopardy.”
The authority making the statement is a former US government official who went on to found a cyber security company. There were publicized breaches, but I am not sure they are comparable to a Pegasus-type data exfiltration method. “Insider threats” are different from lousy software from established companies with vulnerabilities as varied as Joseph’s multi-colored coat. An insider, of course, is an individual presumed to be “trusted.” That category includes the entity who sells information to someone who wants to compromise a system, the person who makes an error (honest or otherwise), and the victim of quite sophisticated malware delivered via targeted emails designed to obtain information that compromises that person or a system. In fact, the most sophisticated of these “phishing” attack systems are available for about $250 per month for the basic version, with higher fees for more robust crime-as-a-service vectors of compromise.
The opinion piece seems to focus on a single issue involving one of the US government’s units. I am okay with that; however, I think a slightly different angle would put the problem and challenge of “security” in a context less dependent on ad hominem rhetorical methods.
Stephen E Arnold, May 22, 2025
Employee Time App Leaks User Information
May 22, 2025
Oh boy! Security breaches are happening everywhere these days. It’s not scary unless your personal information is leaked, as happened in “Top Employee Monitoring App Leaks 21 Million Screenshots On Thousands Of Users,” reports TechRadar. The app in question is called WorkComposer, and it’s described as an “employee productivity monitoring tool.” Cybernews cybersecurity researchers discovered an archive of millions of WorkComposer-generated real-time screenshots. These screenshots showed what the employees worked on, which might include sensitive information.
The sensitive information could include intellectual property, passwords, login portals, emails, proprietary data, etc. These leaked images are a major privacy violation, meaning WorkComposer is in hot water. Privacy organizations and data watchdogs could get involved.
Here is more information about the leak:
“Cybernews said that WorkComposer exposed more than 21 million images in an unsecured Amazon S3 bucket. The company claims to have more than 200,000 active users. It could also spell trouble if it turns out that cybercriminals found the bucket in the past. At press time, there was no evidence that it did happen, and the company apparently locked the archive down in the meantime.”
WorkComposer was designed for companies to monitor the work of remote employees. It allows team leads to track their employees’ work, and it captures an image every twenty seconds.
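The “unsecured Amazon S3 bucket” failure mode is depressingly easy to detect: an anonymous HTTP request to the bucket URL tells the story. A hedged sketch of the check (run it only against buckets you own or are authorized to test; the status-code meanings follow standard S3 behavior):

```python
import urllib.error
import urllib.request

def bucket_probe_verdict(status: int) -> str:
    """Interpret the status of an anonymous GET against
    https://<bucket>.s3.amazonaws.com/. 200 means the bucket listing
    is public; 403 means it exists but denies anonymous access;
    404 means no such bucket."""
    if status == 200:
        return "public-listing"
    if status == 403:
        return "exists-private"
    if status == 404:
        return "not-found"
    return "unknown"

def check_bucket(name: str) -> str:
    """Probe a bucket by name (makes a network call)."""
    url = f"https://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url) as resp:
            return bucket_probe_verdict(resp.status)
    except urllib.error.HTTPError as err:
        return bucket_probe_verdict(err.code)

# Example usage (against a bucket you control):
# print(check_bucket("my-own-test-bucket"))
```

A “public-listing” verdict on a bucket full of employee screenshots is exactly the situation the researchers describe.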
It’s a useful monitoring application but a scary situation with the leaks. Why don’t the Cybernews people report the problem so it can be fixed? That’s a white hat trick.
Whitney Grace, May 22, 2025
Scamming: An Innovation Driver
May 19, 2025
Readers who caught the 2022 documentary “The Tinder Swindler” will recognize Pernilla Sjöholm as one of that conman’s marks. Since the film aired, Sjöholm has co-developed a tool to fend off such fraudsters. The Next Web reports, “Tinder Swindler Survivor Launches Identity Verifier to Fight Scams.” The platform, cofounded with developer Suejb Memeti, is called IDfier. Writer Thomas Macaulay writes:
“The platform promises a simple yet secure way to check who you’re interacting with. Users verify themselves by first scanning their passport, driver’s license, or ID card with their phone camera. If the document has an NFC (near-field communication), IDfier will also scan the chip for additional security. The user then completes a quick head movement to prove they’re a real person — rather than a photo, video, or deepfake. Once verified, they can send other people a request to do the same. Both of them can then choose which information to share, from their name and age to their contact number. All their data is encrypted and stored across disparate servers. IDfier was built to blend this security with precision. According to the platform, the tech is 99.9% accurate in detecting real users and blocking impersonation attempts. The team envisions the system securing endless online services, from e-commerce and email to social media and, of course, dating apps such as Tinder.”
For those who have not viewed the movie: In 2018 Sjöholm and Simon Leviev met on Tinder and formed what she thought was a close, in-person relationship. But Simon was not the Leviev he pretended to be. In the end, he cheated her out of tens of thousands of euros with a bogus sob story.
It is not just fellow humans’ savings Sjöholm aims to protect, but also our hearts. She emphasizes such tactics amount to emotional abuse as well as fraud. The trauma of betrayal is compounded by a common third-party reaction—many observers shame victims as stupid or incautious. Sjöholm figures that is because people want to believe it cannot happen to them. And it doesn’t. Until it does.
Since her ordeal, Sjöholm has been dismayed to see how convincing deepfakes have grown and how easy they now are to make. She is also appalled at how vulnerable our children are. Someday, she hopes to offer IDfier free for kids. We learn:
“Sjöholm’s plan partly stems from her experience giving talks in schools. She recalls one in which she asked the students how many of them interacted with strangers online. ‘Ninety-five percent of these kids raised their hands,’ she said. ‘And you could just see the teacher’s face drop. It’s a really scary situation.’”
We agree. Sjöholm states that between fifty and sixty percent of scams involve fake identities. And, according to The Global Anti-Scam Alliance, scams collectively rake in more than $1 trillion (with a “t”) annually. Romance fraud alone accounts for several billion dollars, according to the World Economic Forum. At just $2 per month, IDfier seems like a worthwhile precaution for those who engage with others online.
Cynthia Murrell, May 19, 2025

