Sharp Words about US Government Security
May 22, 2025
No AI. Just a dinobaby who gets revved up with buzzwords and baloney.
On Monday (April 29, 2025), I am headed to the US National Cyber Crime Conference. I am 80, and I don’t do too many “in person” lectures. Heck, I don’t do too many lectures anymore, period. A candidate for the rest home or an individual ready for a warehouse for the soon-to-die is a unicorn amidst the 25-to-50-year-old cyber fraud specialists, law enforcement professionals, and government investigators.
In my lectures, I steer clear of political topics. This year, I have been assigned a couple of topics which the NCCC organizers know attract a couple of people out of the thousand or so attendees. One topic concerns changes in the Dark Web. Since I wrote “Dark Web Notebook” years ago, my team and I have kept track of what’s new and interesting in the world of the Dark Web. This year, I will highlight three or four services which caught our attention. The other topic is my current research project: Telegram. I am not sure how I became interested in this messaging service, but my team and I will make available to law enforcement, crime analysts, and cyber fraud investigators a monograph modeled on the format we used for the “Dark Web Notebook.”
I am in a security mindset before the conference. I am on the lookout for useful information which I can use as a point of reference or as background information. Despite my age, I want to appear semi-competent. Thus, I read “Signalgate Lessons Learned: If Creating a Culture of Security Is the Goal, America Is Screwed.” I think the source publication is British. The author may be an American journalist.
Several points in the write up caught my attention.
First, the write up makes a statement I found interesting:
And even if they are using Signal, which is considered the gold-standard for end-to-end chat encryption, there’s no guarantee their personal devices haven’t been compromised with some sort of super-spyware like Pegasus, which would allow attackers to read the messages once they land on their phones.
I did not know that Signal was “considered the gold standard for end-to-end chat encryption.” I wonder if there are some data to back this up.
Second, is NSO Group’s Pegasus “super spyware”? My information suggests that there are more modern methods. Some link to Israel, but others connect to other countries; for example, Spain, the Czech Republic, and others. I am not sure what “super” means, and the write up does not offer much other than a nebulous adjectival “super spyware.”
Third, these two references are fascinating:
“The Salt Typhoon and Volt Typhoon campaigns out of China demonstrate this ongoing threat to our telecom systems. Circumventing the Pentagon’s security protocol puts sensitive intelligence in jeopardy.”
The authority making the statement is a former US government official who went on to found a cyber security company. There were publicized breaches, but I am not sure they are comparable to Pegasus-type data exfiltration methods. “Insider threats” are different from lousy software from established companies with vulnerabilities as varied as Joseph’s multi-colored coat. An insider, of course, is an individual presumed to be “trusted.” That trusted entity may sell information to someone who wants to compromise a system, make an error (honest or otherwise), or fall victim to quite sophisticated malware designed to craft targeted emails that harvest the information needed to compromise that person or a system. In fact, the most sophisticated of these “phishing” attack systems are available for about $250 per month for the basic version, with higher fees for more robust crime-as-a-service vectors of compromise.
The opinion piece seems to focus on a single issue involving one of the US government’s units. I am okay with that; however, I think a slightly different angle would put the problem and challenge of “security” in a context less focused on ad hominem rhetorical methods.
Stephen E Arnold, May 22, 2025
Employee Time App Leaks User Information
May 22, 2025
Oh boy! Security breaches are happening everywhere these days. It’s not scary unless your personal information is leaked, like what happened in the incident TechRadar reports: “Top Employee Monitoring App Leaks 21 Million Screenshots On Thousands Of Users.” The app in question is called WorkComposer, and it’s described as an “employee productivity monitoring tool.” Cybernews cybersecurity researchers discovered an archive of millions of WorkComposer-generated real-time screenshots. These screenshots showed what employees worked on, which might include sensitive information.
The sensitive information could include intellectual property, passwords, login portals, emails, proprietary data, etc. These leaked images are a major privacy violation, meaning WorkComposer is in hot water. Privacy organizations and data watchdogs could get involved.
Here is more information about the leak:
“Cybernews said that WorkComposer exposed more than 21 million images in an unsecured Amazon S3 bucket. The company claims to have more than 200,000 active users. It could also spell trouble if it turns out that cybercriminals found the bucket in the past. At press time, there was no evidence that it did happen, and the company apparently locked the archive down in the meantime.”
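For those curious how an “unsecured Amazon S3 bucket” happens: a bucket becomes world-readable when its public-access block flags are off and its policy or ACL grants access to everyone. Here is a minimal sketch of the decision logic involved; the data structures loosely mirror what AWS returns, and all values are hypothetical:

```python
# Sketch: decide whether an S3-style bucket configuration is publicly readable.
# The inputs are simplified stand-ins for AWS GetPublicAccessBlock and
# GetBucketPolicy results; every value here is hypothetical.

import json

def bucket_is_public(public_access_block: dict, bucket_policy_json: str) -> bool:
    """Return True if the configuration leaves the bucket world-readable."""
    # If every public-access block flag is enabled, the bucket cannot be public.
    if all(public_access_block.get(flag, False) for flag in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    )):
        return False
    # Otherwise, look for an Allow statement whose principal is everyone ("*").
    policy = json.loads(bucket_policy_json)
    for statement in policy.get("Statement", []):
        principal = statement.get("Principal")
        if statement.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            return True
    return False

# Example: no block flags plus a wildcard-principal policy -> public.
leaky_policy = json.dumps({
    "Statement": [{"Effect": "Allow", "Principal": "*",
                   "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example/*"}]
})
print(bucket_is_public({}, leaky_policy))  # True
```

In a real audit one would pull these settings with the AWS CLI or boto3; the point is that a single wildcard principal with the block flags off is all it takes to expose 21 million screenshots.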
WorkComposer was designed for companies to monitor the work of remote employees. It allows team leads to track their employees’ work, and it captures an image every twenty seconds.
It’s a useful monitoring application, but the leak is a scary situation. At least the Cybernews people reported the problem so it could be locked down. That’s a white hat trick.
Whitney Grace, May 22, 2025
Scamming: An Innovation Driver
May 19, 2025
Readers who caught the 2022 documentary “The Tinder Swindler” will recognize Pernilla Sjöholm as one of that conman’s marks. Since the film aired, Sjöholm has co-developed a tool to fend off such fraudsters. The Next Web reports, “Tinder Swindler Survivor Launches Identity Verifier to Fight Scams.” The platform, cofounded with developer Suejb Memeti, is called IDfier. Writer Thomas Macaulay writes:
“The platform promises a simple yet secure way to check who you’re interacting with. Users verify themselves by first scanning their passport, driver’s license, or ID card with their phone camera. If the document has an NFC (near-field communication), IDfier will also scan the chip for additional security. The user then completes a quick head movement to prove they’re a real person — rather than a photo, video, or deepfake. Once verified, they can send other people a request to do the same. Both of them can then choose which information to share, from their name and age to their contact number. All their data is encrypted and stored across disparate servers. IDfier was built to blend this security with precision. According to the platform, the tech is 99.9% accurate in detecting real users and blocking impersonation attempts. The team envisions the system securing endless online services, from e-commerce and email to social media and, of course, dating apps such as Tinder.”
For those who have not viewed the movie: In 2018 Sjöholm and Simon Leviev met on Tinder and formed what she thought was a close, in-person relationship. But Simon was not the Leviev he pretended to be. In the end, he cheated her out of tens of thousands of euros with a bogus sob story.
It is not just fellow humans’ savings Sjöholm aims to protect, but also our hearts. She emphasizes such tactics amount to emotional abuse as well as fraud. The trauma of betrayal is compounded by a common third-party reaction—many observers shame victims as stupid or incautious. Sjöholm figures that is because people want to believe it cannot happen to them. And it doesn’t. Until it does.
Since her ordeal, Sjöholm has been dismayed to see how convincing deepfakes have grown and how easy they now are to make. She is also appalled at how vulnerable our children are. Someday, she hopes to offer IDfier free for kids. We learn:
“Sjöholm’s plan partly stems from her experience giving talks in schools. She recalls one in which she asked the students how many of them interacted with strangers online. ‘Ninety-five percent of these kids raised their hands,’ she said. ‘And you could just see the teacher’s face drop. It’s a really scary situation.’”
We agree. Sjöholm states that between fifty and sixty percent of scams involve fake identities. And, according to The Global Anti-Scam Alliance, scams collectively rake in more than $1 trillion (with a “t”) annually. Romance fraud alone accounts for several billion dollars, according to the World Economic Forum. At just $2 per month, IDfier seems like a worthwhile precaution for those who engage with others online.
Cynthia Murrell, May 19, 2025
Alleged Oracle Misstep Leaves Hospitals Without EHR Access for Just Five Days
May 13, 2025
When I was young, hospitals were entirely run on paper records. It was a sight to behold. Recently, 45 hospitals involuntarily harkened back to those days, all because “Oracle Engineers Caused Dayslong Software Outage at U.S. Hospitals,” CNBC reports. Writer Ashley Capoot tells us:
“Oracle engineers mistakenly triggered a five-day software outage at a number of Community Health Systems hospitals, causing the facilities to temporarily return to paper-based patient records. CHS told CNBC that the outage involving Oracle Health, the company’s electronic health record (EHR) system, affected ‘several’ hospitals, leading them to activate ‘downtime procedures.’ Trade publication Becker’s Hospital Review reported that 45 hospitals were hit. The outage began on April 23, after engineers conducting maintenance work mistakenly deleted critical storage connected to a key database, a CHS spokesperson said in a statement. The outage was resolved on Monday, and was not related to a cyberattack or other security incident.”
That is a relief. Because gross incompetence is so much better than getting hacked. Oracle has only been operating the EHR system since 2022, when it bought Cerner. The acquisition made Oracle Health the second largest vendor in that market, after Epic Systems.
But perhaps Oracle is experiencing buyer’s remorse. This is just the latest in a string of stumbles the firm has made in this crucial role. In 2023, the US Department of Veterans Affairs paused deployment of its Oracle-based EHR platform over patient safety concerns. And just this March, the company’s federal EHR system experienced a nationwide outage. That snafu was resolved after six and a half hours, and all it took was a system reboot. Easy peasy. If only replacing deleted critical storage were so simple.
What healthcare system will be the next to go down due to an Oracle Health blunder? Cynthia Murrell, May 13, 2025
Secret Messaging: I Have a Bridge in Brooklyn to Sell You
May 5, 2025
No AI, just the dinobaby expressing his opinions to Zellenials.
I read “The Signal Clone the Trump Admin Uses Was Hacked.” I have no idea if this particular write up is 100 percent accurate. I do know that people want to believe that AI will revolutionize making oodles of money, that quantum computing will reinvent how next-generation systems will make oodles of money, and how new “secret” messaging apps will generate oodles of secret messages and maybe some money.
Here’s the main point of the article published by MicahFlee.com (Micah Lee’s site), an online information source:
TeleMessage, a company that makes a modified version of Signal that archives messages for government agencies, was hacked.
Due to the hack the “secret” messages were no longer secret; therefore, if someone believes the content to have value, those messages, metadata, user names, etc., etc. can be sold via certain channels. (No, I won’t name these, but, trust me, such channels exist, are findable, and generate some oodles of bucks in some situations.)
The Lee write up says:
A hacker has breached and stolen customer data from TeleMessage, an obscure Israeli company that sells modified versions of Signal and other messaging apps to the U.S. government to archive messages…
A snip from the write up on Reddit states:
The hack shows that an app gathering messages of the highest ranking officials in the government—Waltz’s chats on the app include recipients that appear to be Marco Rubio, Tulsi Gabbard, and JD Vance—contained serious vulnerabilities that allowed a hacker to trivially access the archived chats of some people who used the same tool. The hacker has not obtained the messages of cabinet members, Waltz, and people he spoke to, but the hack shows that the archived chat logs are not end-to-end encrypted between the modified version of the messaging app and the ultimate archive destination controlled by the TeleMessage customer. Data related to Customs and Border Protection (CBP), the cryptocurrency giant Coinbase, and other financial institutions are included in the hacked material…
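The flaw described in the snip is architectural rather than cryptographic: the Signal-to-Signal leg can remain end-to-end encrypted, but the moment a modified client keeps a readable copy for an archive server, that copy lives outside the encrypted envelope. A toy sketch of the trust model (this is not real cryptography, and every name is hypothetical; the point is who holds readable copies, not the cipher):

```python
# Toy model of why client-side archiving defeats end-to-end encryption.
# The "encryption" below is a placeholder, not a real cipher.

class EndToEndChannel:
    """In-transit leg: only sender and recipient can read messages."""
    def __init__(self):
        self.ciphertexts = []           # what a network eavesdropper sees

    def send(self, plaintext: str) -> str:
        ciphertext = plaintext[::-1]    # stand-in for real encryption
        self.ciphertexts.append(ciphertext)
        return ciphertext

class ArchivingClient:
    """A modified client that forwards plaintext to an archive it controls."""
    def __init__(self, channel: EndToEndChannel, archive: list):
        self.channel = channel
        self.archive = archive          # server-side store, outside the E2E envelope

    def send(self, plaintext: str):
        self.channel.send(plaintext)    # the end-to-end leg is intact...
        self.archive.append(plaintext)  # ...but the archive holds a readable copy

archive_server = []
client = ArchivingClient(EndToEndChannel(), archive_server)
client.send("meet at 0600")
# Whoever breaches the archive server reads the message without touching Signal.
print(archive_server)  # ['meet at 0600']
```

Hack the archive server and the “secure” messages are yours, exactly the pattern the write up describes.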
First, TeleMessage is not “obscure.” The outfit has been providing software for specialized services since the founders geared up to become entrepreneurs, which works out to about a quarter of a century. The “obscure” tells me more about the knowledge of the author of the allegedly accurate story than about the firm itself. Second, yes, companies producing specialized software headquartered in Israel have links to Israeli government entities. (Where do you think the ideas for specialized software services and tools originate? In a kindergarten in Tel Aviv?) Third, for those who don’t remember October 2023: one of my contacts, a day or two after the disastrous security breach resulting in the deaths of young people, labeled it “Israel’s 9/11.” The label fits, and the event makes crystal clear that Israel’s security systems, and cyber security systems developed elsewhere in the world, may not be secure. Is this a news flash? I don’t think so.
What does this allegedly true news story suggest? Here are a few observations:
- Most people make assumptions about “security” and believe fairy dust about “secure messaging.” Achieving security requires operational activities prior to selecting a system and sending messages or paying a service to back up Signal’s disappearing content. No correct operational procedures means no secure messaging.
- Cyber security software, created by humans, can be compromised. There are many ways. These include systemic failures, human error, believing in unicorns, and targeted penetrations. Therefore, security is a bit like the venture capitalists’ belief that the next big thing is their most recent investment colorfully described by a marketing professional with a degree in art history.
- Certain vendors do provide secure messaging services; however, these firms are not the ones bandied about in online discussion groups. There is such a firm providing secure messaging to the US government at this time. It is a US firm. Its system and method are novel. The question becomes, “Why not use the systems already operating, not a service half a world away, integrated with a free ‘secure’ messaging application, and made wonderful because some of its code is open source?”
Net net: Perhaps it is time to become more informed about cyber security and secure messaging apps?
PS. To the Reddit poster who said, “404 Media is the only one reporting this.” Check out the Israel Palestine News item from May 4, 2025.
Stephen E Arnold, May 5, 2025
Deep Fake Recognition: Google Has a Finger In
May 5, 2025
Sorry, no AI used to create this item.
I spotted this Newsweek story: “‘AI Imposter’ Candidate Discovered During Job Interview, Recruiter Warns.” The main idea is that a humanoid struggled to identify a deep fake. The deep fake was applying for a job.
The write up says:
Several weeks ago, Bettina Liporazzi, the recruiting lead at letsmake.com was contacted by a seemingly ordinary candidate who was looking for a job. Their initial message was clearly AI-generated, but Liporazzi told Newsweek that this “didn’t immediately raise any flags” because that’s increasingly commonplace.
Here’s the interesting point:
Each time the candidate joined the call, Liporazzi got a warning from Google to say the person wasn’t signed in and “might not be who they claim to be.”
This interaction seems to have taken place online.
The Newsweek story includes this statement:
As generative-AI becomes increasingly powerful, the line between what’s real and fake is becoming harder to decipher. Ben Colman, co-founder and CEO of Reality Defender, a deepfake detection company, tells Newsweek that AI impersonation in recruiting is “just the tip of the iceberg.”
The recruiter figured out something was amiss. However, earlier in the sequence, Google injected its warning.
Several questions:
- Does Google monitor this recruiter’s online interactions and analyze them?
- How does Google determine which online interactions it should simply monitor and in which it should interfere?
- What does Google do with the information about [a] the recruiter, [b] the job on offer itself, and [c] the deep fake system’s operator?
I wonder if Newsweek missed the more important angle in this allegedly actual factual story; that is, Google surveillance. Perhaps Google was just doing routine monitoring, the way it tells me that a message from a US law enforcement agency comes from a sender not in my list of contacts. How helpful, Google?
Will Google’s “monitoring” protect others from deep fakes? Those helpful YouTube notices seem to be part of this protective effort.
Stephen E Arnold, May 5, 2025
Oracle: Pricked by a Rose and Still Bleeding
April 15, 2025
How disappointing. DoublePulsar documents a senior tech giant’s duplicity in, “Oracle Attempt to Hide Serious Cybersecurity Incident from Customers in Oracle SaaS Service.” Blogger Kevin Beaumont cites reporting by Bleeping Computer as he tells us someone going by rose87168 announced in March they had breached certain Oracle services. The hacker offered to remove individual companies’ data for a price. They also invited Oracle to email them to discuss the matter. The company, however, immediately denied there had been a breach. It should know better by now.
Rose87168 responded by releasing evidence of the breach, piece by piece. For example, they shared a recording of an internal Oracle meeting, with details later verified by Bleeping Computer and Hudson Rock. They also shared the code for Oracle configuration files, which proved to be current. Beaumont writes:
“In data released to a journalist for validation, it has now become 100% clear to me that there has been cybersecurity incident at Oracle, involving systems which processed customer data. … All the systems impacted are directly managed by Oracle. Some of the data provided to journalists is current, too. This is a serious cybersecurity incident which impacts customers, in a platform managed by Oracle. Oracle are attempting to wordsmith statements around Oracle Cloud and use very specific words to avoid responsibility. This is not okay. Oracle need to clearly, openly and publicly communicate what happened, how it impacts customers, and what they’re doing about it. This is a matter of trust and responsibility. Step up, Oracle — or customers should start stepping off.”
In an update to the original post, Beaumont notes some linguistic sleight-of-hand employed by the company:
“Oracle rebadged old Oracle Cloud services to be Oracle Classic. Oracle Classic has the security incident. Oracle are denying it on ‘Oracle Cloud’ by using this scope — but it’s still Oracle cloud services that Oracle manage. That’s part of the wordplay.”
However, it seems the firm finally admitted the breach was real to at least some users. Just not in black and white. We learn:
“Multiple Oracle cloud customers have reached out to me to say Oracle have now confirmed a breach of their services. They are only doing so verbally, they will not write anything down, so they’re setting up meetings with large customers who query. This is similar behavior to the breach of medical PII in the ongoing breach at Oracle Health, where they will only provide details verbally and not in writing.”
So much for transparency. Beaumont pledges to keep investigating the breach and Oracle’s response to it. He invites us to follow his Mastodon account for updates.
Cynthia Murrell, April 15, 2025
Trapped in the Cyber Security Gym with Broken Gear?
April 11, 2025
As an IT worker, you can fall into more pitfalls than a road that needs repaving. Mac Chaffee shared a new trap, and how he handled it, on his blog, Mac’s Tech Blog: “Avoid Building A Security Treadmill.” Chaffee wrote that he received a ticket asking him to stop people from using a GPU service to mine cryptocurrencies. Chaffee used Falco, an eBPF-powered agent that runs on the Kubernetes cluster, to detect the abuse and shut down the digital mining.
Chaffee doesn’t mind the complexity of the solution. His biggest issue was with the “security treadmill” that he defines as:
“A security treadmill is a piece of software that, due to a weakness of design, requires constant patching to keep it secure. Isn’t that just all software? Honestly… kinda, yeah, but a true treadmill is self-inflicted. You bought it, assembled it, and put it in your spare bedroom; a device specifically designed to let you walk/run forever without making forward progress.”
One solution suggested to Chaffee was charging people to use the GPU. The idea was that if using the GPU cost more than the cryptocurrency was worth, the mining would stop. That idea wasn’t followed, for reasons Chaffee wasn’t told, so Falco was flown.
Unfortunately, Falco only detects network traffic to a host when the connection goes directly to its IP address. The security treadmill was in full swing because users were bypassing the Internet filter monitored by Falco. Falco’s rules need constant updating to catch new techniques, such as tunneling through a VPN or proxy.
Another way to block cryptocurrency mining is to block all outbound traffic except destinations on an allowlist. That would also hinder malware, command-and-control servers, and exfiltration attacks. Another point Chaffee noted is that these applications don’t need a full POSIX environment. To combat this he suggests:
“Perhaps free-tier users of these GPUs could have been restricted to running specific demos, or restrictive timeouts for GPU processing times, or denying disk write access to prevent downloading miners, or denying the ability to execute files outside of a read-only area.”
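The allowlist idea mentioned above amounts to a default-deny egress policy: every outbound destination is refused unless it matches an approved entry. A minimal sketch of the decision logic, with hypothetical hostnames and ranges (in practice this lives in a Kubernetes NetworkPolicy or firewall, not application code):

```python
# Sketch of default-deny egress filtering: outbound connections are refused
# unless the destination appears on an explicit allowlist. The hostnames and
# network ranges below are hypothetical examples.

from ipaddress import ip_address, ip_network

ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org"}   # e.g., package mirrors
ALLOWED_NETS = [ip_network("10.0.0.0/8")]                # e.g., internal services

def egress_allowed(destination: str) -> bool:
    """Default deny: permit only listed hostnames or internal network ranges."""
    try:
        addr = ip_address(destination)
    except ValueError:
        return destination in ALLOWED_HOSTS      # not an IP: treat as a hostname
    return any(addr in net for net in ALLOWED_NETS)

print(egress_allowed("pypi.org"))     # True  (on the allowlist)
print(egress_allowed("10.1.2.3"))     # True  (internal range)
print(egress_allowed("203.0.113.7"))  # False (unknown destination, denied)
```

Note the treadmill lurking even here: put a VPN or proxy endpoint on the allowlist and, as Chaffee found, the miners walk right through it.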
Chaffee declares it’s time to upgrade legacy applications or make them obsolete to avoid security treadmills. It sounds like there’s a niche for a startup there. What a thought: a Planet Fitness with one functioning treadmill.
Whitney Grace, April 11, 2025
No Joke: Real Secrecy and Paranoia Are Needed Again
April 1, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
In the US and the UK, secrecy and paranoia are chic again. The BBC reported “GCHQ Worker Admits Taking Top Secret Data Home.” Ah, a Booz Allen / Snowden type story? The BBC reports:
The court heard that Arshad took his work mobile into a top secret GCHQ area and connected it to work station. He then transferred sensitive data from a secure, top secret computer to the phone before taking it home, it was claimed. Arshad then transferred the data from the phone to a hard drive connected to his personal home computer.
Mr. Snowden used a USB drive. The question is, “What are the bosses doing? Who is watching the logs? Who is checking the video feeds? Who is hiring individuals with some inner need to steal classified information?”
But outside phones in a top secret meeting? That sounds like a great idea. I attended a meeting held by a local government agency, and phones and weapons were put in little steel boxes. This outfit was no GCHQ, but the security fellow (a former Marine) knew what he was doing for that local government agency.
A related story addresses paranoia, a mental characteristic which is getting more and more popular among some big dogs.
CNBC reported an interesting approach to staff trust. “Anthropic Announces Updates on Security Safeguards for Its AI Models” reports:
In an earlier version of its responsible scaling policy, Anthropic said it would begin sweeping physical offices for hidden devices as part of a ramped-up security effort.
The most recent update to the firm’s security safeguards adds:
updates to the “responsible scaling” policy for its AI, including defining which of its model safety levels are powerful enough to need additional security safeguards.
The actual explanation is a masterpiece of clarity. Here’s a snippet of what Anthropic actually said in its “Anthropic’s Responsible Scaling Policy” announcement:
The current iteration of our RSP (version 2.1) reflects minor updates clarifying which Capability Thresholds would require enhanced safeguards beyond our current ASL-3 standards.
The Anthropic methods, it seems to me, include “sweeps” and “compartmentalization.”
Thus, we have two examples of outstanding management:
First, the BBC report implies that personal computing devices can plug in and receive classified information.
And:
Second, CNBC explains that sweeps are not enough. Compartmentalization of systems and methods puts into “cells” who can do what and how.
Andy Grove’s observation popped into my mind. He allegedly rattled off this statement:
Success breeds complacency. Complacency breeds failure. Only the paranoid survive.
Net net: It is easier to “trust” and “assume” cyber security than to verify it. Real fixes edge into fear and paranoia.
Stephen E Arnold, April 9, 2025
FOGINT: Targets Draw Attention. Signal Is a Target
April 1, 2025
Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”
We have been plugging away on the “Telegram Overview: Notes for Analysts and Investigators.” We have not exactly ignored Signal or the dozens of other super secret, encrypted beyond belief messaging applications. We did compile a table of those we came across, and Signal was on that list.
I read “NSA Warned of Vulnerabilities in Signal App a Month Before Houthi Strike Chat.” I am not interested in the political facets of this incident. The important point for me is this statement:
The National Security Agency sent out an operational security special bulletin to its employees in February 2025 warning them of vulnerabilities in using the encrypted messaging application Signal
One of the big time cyber security companies spoke with me, and I mentioned that Signal might not be the cat’s pajamas. To the credit of that company and the former police chief with whom I spoke, the firm shifted to an end to end encrypted messaging app we had identified as slightly less wonky. Good for that company, and a pat on the back for the police chief who listened to me.
In my experience, operational bulletins are worth reading. When the bulletin is “special,” re-reading the message is generally helpful.
Signal, of course, defends itself vigorously. The coach who loses a basketball game says, “Our players put out a great effort. It just wasn’t enough.”
Presenting oneself as a super secret messaging app immediately makes that messaging app a target. I know firsthand that some whiz kid entrepreneurs believe that their EE2E solution is the best one ever. In fact, a year ago, such an entrepreneur told me, “We have developed a method that only a government agency can compromise.”
Yeah, that’s the point of the NSA bulletin.
Let me ask you a question: “How many computer science students in countries outside the United States are looking at EE2E messaging apps and trying to figure out how to compromise the data?” Years ago, I gave some lectures in Tallinn, Estonia. I visited a university computer science class. I asked the students about the projects each had selected. Several of them told me that they were trying to compromise messaging systems. A favorite target was Telegram, but Signal came up.
I know the wizards who cook up EE2E messaging apps and use the latest and greatest methods for delivering security with bells on are fooling themselves. Here are the reasons:
- Systems relying on open source methods are well documented. Exploits exist and we have noticed some CaaS offers to compromise these messages. Now the methods may be illegal in many countries, but they exist. (I won’t provide a checklist in a free blog post. Sorry.)
- Techniques to prevent compromise of secure messaging systems involve some patented systems and methods. Yes, the patents are publicly available, but the methods are simply not possible unless one has considerable resources for software, hardware, and deployment.
- A number of organizations turn EE2E messaging systems into happy eunuchs taking care of the sultan’s harem. I have poked fun at the blunders of the NSO Group and its Pegasus approach, and I have pointed out that the goodies of the Hacking Team escaped into the wild a long time ago. The point is that once the procedures for performing certain types of compromise are no longer secret, other humans can and will create a facsimile and use those emulations to suck down private messages, the metadata, and probably the pictures on the device too. Toss in some AI jazziness, and the speed of the process goes faster than my old 1962 Studebaker Lark.
Let me wrap up by reiterating that I am not addressing the incident involving Signal. I want to point out that I am not into the “information wants to be free.” Certain information is best managed when it is secret. Outfits like Signal and the dozens of other EE2E messaging apps are targets. Targets get hit. Why put neon lights on oneself and try to hide the fact that those young computer science students or their future employers will find a way to compromise the information.
Technical stealth, network fiddling, human bumbling: compromises will continue to occur. There were good reasons to enforce security. That’s why stringent procedures and hardened systems were developed. Today it’s marketing, and the possibility that non-open source, non-American methods may no longer deliver what the 23-year-old art history major with a job in marketing says they deliver.
Stephen E Arnold, April 1, 2025