Secret Messaging: I Have a Bridge in Brooklyn to Sell You

May 5, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

I read “The Signal Clone the Trump Admin Uses Was Hacked.” I have no idea if this particular write up is 100 percent accurate. I do know that people want to believe that AI will revolutionize making oodles of money, that quantum computing will reinvent how next-generation systems will make oodles of money, and how new “secret” messaging apps will generate oodles of secret messages and maybe some money.

Here’s the main point of the article published by MichaFlee.com, an online information source:

TeleMessage, a company that makes a modified version of Signal that archives messages for government agencies, was hacked.

Due to the hack, the “secret” messages were no longer secret; therefore, if someone believes the content to have value, those messages, metadata, user names, etc. can be sold via certain channels. (No, I won’t name these, but, trust me, such channels exist, are findable, and generate oodles of bucks in some situations.)

The Flee write up says:

A hacker has breached and stolen customer data from TeleMessage, an obscure Israeli company that sells modified versions of Signal and other messaging apps to the U.S. government to archive messages…

A snip from the write up on Reddit states:

The hack shows that an app gathering messages of the highest ranking officials in the government—Waltz’s chats on the app include recipients that appear to be Marco Rubio, Tulsi Gabbard, and JD Vance—contained serious vulnerabilities that allowed a hacker to trivially access the archived chats of some people who used the same tool. The hacker has not obtained the messages of cabinet members, Waltz, and people he spoke to, but the hack shows that the archived chat logs are not end-to-end encrypted between the modified version of the messaging app and the ultimate archive destination controlled by the TeleMessage customer. Data related to Customs and Border Protection (CBP), the cryptocurrency giant Coinbase, and other financial institutions are included in the hacked material…

First, TeleMessage is not “obscure.” The outfit has been providing software for specialized services since its founders geared up to become entrepreneurs, which works out to about a quarter of a century. The “obscure” tells me more about the knowledge of the author of the allegedly accurate story than about the firm itself. Second, yes, companies producing specialized software headquartered in Israel have links to Israeli government entities. (Where do you think the ideas for specialized software services and tools originate? In a kindergarten in Tel Aviv?) Third, for those who don’t remember October 2023: a day or two after the disastrous security breach resulting in the deaths of young people, one of my contacts labeled it “Israel’s 9/11.” That’s correct, and the event makes crystal clear that Israel’s security systems, and cyber security systems developed elsewhere in the world, may not be secure. Is this a news flash? I don’t think so.

What does this allegedly true news story suggest? Here are a few observations:

  1. Most people make assumptions about “security” and believe fairy dust about “secure messaging.” Achieving security requires operational activities prior to selecting a system and sending messages or paying a service to back up Signal’s disappearing content. No correct operational procedures means no secure messaging.
  2. Cyber security software, created by humans, can be compromised. There are many ways. These include systemic failures, human error, believing in unicorns, and targeted penetrations. Therefore, security is a bit like the venture capitalists’ belief that the next big thing is their most recent investment colorfully described by a marketing professional with a degree in art history.
  3. Certain vendors do provide secure messaging services; however, these firms are not the ones bandied about in online discussion groups. There is such a firm providing secure messaging to the US government at this time. It is a US firm. Its system and method are novel. The question becomes, “Why not use the systems already operating, instead of a service half a world away, integrated with a free ‘secure’ messaging application, and made wonderful because some of its code is open source?”

Net net: Perhaps it is time to become more informed about cyber security and secure messaging apps?

PS. To the Reddit poster who said, “404 Media is the only one reporting this.” Check out the Israel Palestine News item from May 4, 2025.

Stephen E Arnold, May 5, 2025

Deep Fake Recognition: Google Has a Finger In

May 5, 2025

Sorry, no AI used to create this item.

I spotted this Newsweek story: “‘AI Imposter’ Candidate Discovered During Job Interview, Recruiter Warns.” The main idea is that a humanoid struggled to identify a deep fake. The deep fake was applying for a job.

The write up says:

Several weeks ago, Bettina Liporazzi, the recruiting lead at letsmake.com was contacted by a seemingly ordinary candidate who was looking for a job. Their initial message was clearly AI-generated, but Liporazzi told Newsweek that this “didn’t immediately raise any flags” because that’s increasingly commonplace.

Here’s the interesting point:

Each time the candidate joined the call, Liporazzi got a warning from Google to say the person wasn’t signed in and “might not be who they claim to be.”

This interaction seems to have taken place online.

The Newsweek story includes this statement:

As generative-AI becomes increasingly powerful, the line between what’s real and fake is becoming harder to decipher. Ben Colman, co-founder and CEO of Reality Defender, a deepfake detection company, tells Newsweek that AI impersonation in recruiting is “just the tip of the iceberg.”

The recruiter figured out something was amiss. However, in the sequence, Google injected its warning.

Several questions:

  1. Does Google monitor this recruiter’s online interactions and analyze them?
  2. How does Google determine which online interactions it should simply monitor and in which it should interfere?
  3. What does Google do with the information about [a] the recruiter, [b] the job on offer itself, and [c] the deep fake system’s operator?

I wonder if Newsweek missed the more important angle in this allegedly actual factual story; that is, Google surveillance. Perhaps Google is just monitoring email, as when it tells me that a message from a US law enforcement agency is not in my list of contacts. How helpful, Google?

Will Google’s “monitoring” protect others from deep fakes? Those helpful YouTube notices, it seems, are part of this protection effort.

Stephen E Arnold, May 5, 2025

Oracle: Pricked by a Rose and Still Bleeding

April 15, 2025

How disappointing. DoublePulsar documents a senior tech giant’s duplicity in, “Oracle Attempt to Hide Serious Cybersecurity Incident from Customers in Oracle SaaS Service.” Blogger Kevin Beaumont cites reporting by Bleeping Computer as he tells us someone going by rose87168 announced in March they had breached certain Oracle services. The hacker offered to remove individual companies’ data for a price. They also invited Oracle to email them to discuss the matter. The company, however, immediately denied there had been a breach. It should know better by now.

Rose87168 responded by releasing evidence of the breach, piece by piece. For example, they shared a recording of an internal Oracle meeting, with details later verified by Bleeping Computer and Hudson Rock. They also shared the code for Oracle configuration files, which proved to be current. Beaumont writes:

“In data released to a journalist for validation, it has now become 100% clear to me that there has been cybersecurity incident at Oracle, involving systems which processed customer data. … All the systems impacted are directly managed by Oracle. Some of the data provided to journalists is current, too. This is a serious cybersecurity incident which impacts customers, in a platform managed by Oracle. Oracle are attempting to wordsmith statements around Oracle Cloud and use very specific words to avoid responsibility. This is not okay. Oracle need to clearly, openly and publicly communicate what happened, how it impacts customers, and what they’re doing about it. This is a matter of trust and responsibility. Step up, Oracle — or customers should start stepping off.”

In an update to the original post, Beaumont notes some linguistic sleight-of-hand employed by the company:

“Oracle rebadged old Oracle Cloud services to be Oracle Classic. Oracle Classic has the security incident. Oracle are denying it on ‘Oracle Cloud’ by using this scope — but it’s still Oracle cloud services that Oracle manage. That’s part of the wordplay.”

However, it seems the firm finally admitted the breach was real to at least some users. Just not in black and white. We learn:

“Multiple Oracle cloud customers have reached out to me to say Oracle have now confirmed a breach of their services. They are only doing so verbally, they will not write anything down, so they’re setting up meetings with large customers who query. This is similar behavior to the breach of medical PII in the ongoing breach at Oracle Health, where they will only provide details verbally and not in writing.”

So much for transparency. Beaumont pledges to keep investigating the breach and Oracle’s response to it. He invites us to follow his Mastodon account for updates.

Cynthia Murrell, April 15, 2025

Trapped in the Cyber Security Gym with Broken Gear?

April 11, 2025

As an IT worker, you can fall into more pitfalls than a road that needs repaving. Mac Chaffee shared a new trap, and how he handled it, on his blog, Mac’s Tech Blog: “Avoid Building A Security Treadmill.” Chaffee wrote that he received a ticket asking him to stop people from using a GPU service to mine cryptocurrencies. Chaffee used Falco, an eBPF-powered agent that runs on the Kubernetes cluster, to monitor the activity and deactivate the digital mining.

Chaffee doesn’t mind the complexity of the solution. His biggest issue was with the “security treadmill” that he defines as:

“A security treadmill is a piece of software that, due to a weakness of design, requires constant patching to keep it secure. Isn’t that just all software? Honestly… kinda, yeah, but a true treadmill is self-inflicted. You bought it, assembled it, and put it in your spare bedroom; a device specifically designed to let you walk/run forever without making forward progress.”

One solution suggested to Chaffee was charging people to use the GPU. The idea was that if using the GPU cost more than the cryptocurrency it yielded, the mining would stop. That idea wasn’t followed for reasons Chaffee wasn’t told, so Falco was flown.

Unfortunately, Falco only detects network traffic to a host when the connection goes directly to its IP. The security treadmill was in full swing because users were bypassing the Internet filter monitored by Falco; Falco needs to be upgraded to catch new techniques that involve a VPN or proxy.

Another way to block cryptocurrency mining is to block all outbound traffic except destinations on an allow list. That would also prevent malware downloads, command and control callbacks, and exfiltration attacks. Another point Chaffee noted is that many applications don’t need a full POSIX environment. To combat this he suggests:

“Perhaps free-tier users of these GPUs could have been restricted to running specific demos, or restrictive timeouts for GPU processing times, or denying disk write access to prevent downloading miners, or denying the ability to execute files outside of a read-only area.”
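The default-deny egress idea discussed above can be sketched in a few lines. This is a minimal illustration, not Chaffee’s implementation; the destination list and function name are hypothetical.

```python
# Minimal sketch of a default-deny egress policy: outbound
# connections pass only if the (host, port) pair is allow-listed.
# The destinations below are hypothetical examples.

ALLOWED_DESTINATIONS = {
    ("pypi.org", 443),
    ("internal-registry.example.com", 443),
}

def egress_permitted(host: str, port: int) -> bool:
    """Return True only for explicitly approved destinations."""
    return (host, port) in ALLOWED_DESTINATIONS

# A mining pool, a C2 server, or an exfiltration endpoint all fail
# the check because they were never approved.
print(egress_permitted("pypi.org", 443))               # allowed
print(egress_permitted("pool.minexmr.example", 3333))  # denied
```

The design point is that the treadmill disappears: new mining pools, proxies, and VPN endpoints are denied by default instead of being chased one by one.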

Chaffee declares it’s time to upgrade legacy applications or make them obsolete to avoid security treadmills. It sounds like there’s a niche for a startup there. What a thought: a Planet Fitness with one functioning treadmill.

Whitney Grace, April 11, 2025

No Joke: Real Secrecy and Paranoia Are Needed Again

April 1, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

In the US and the UK, secrecy and paranoia are chic again. The BBC reported “GCHQ Worker Admits Taking top Secret Data Home.” Ah, a Booz Allen / Snowden type story? The BBC reports:

The court heard that Arshad took his work mobile into a top secret GCHQ area and connected it to work station. He then transferred sensitive data from a secure, top secret computer to the phone before taking it home, it was claimed. Arshad then transferred the data from the phone to a hard drive connected to his personal home computer.

Mr. Snowden used a USB drive. The question is, “What are the bosses doing? Who is watching the logs? Who is checking the video feeds? Who is hiring individuals with some inner need to steal classified information?”

But outside phones in a top secret meeting? That sounds like a great idea. I attended a meeting held by a local government agency, and phones and weapons were put in little steel boxes. This outfit was no GCHQ, but the security fellow (a former Marine) knew what he was doing for that local government agency.

A related story addresses paranoia, a mental characteristic which is getting more and more popular among some big dogs.

CNBC reported an interesting approach to staff trust. “Anthropic Announces Updates on Security Safeguards for Its AI Models” reports:

In an earlier version of its responsible scaling policy, Anthropic said it would begin sweeping physical offices for hidden devices as part of a ramped-up security effort.

The most recent update to the firm’s security safeguards adds:

updates to the “responsible scaling” policy for its AI, including defining which of its model safety levels are powerful enough to need additional security safeguards.

The actual explanation is a masterpiece of clarity. Here’s a snippet of what Anthropic actually said in its “Anthropic’s Responsible Scaling Policy” announcement:

The current iteration of our RSP (version 2.1) reflects minor updates clarifying which Capability Thresholds would require enhanced safeguards beyond our current ASL-3 standards.

The Anthropic methods, it seems to me, include “sweeps” and “compartmentalization.”

Thus, we have two examples of outstanding management:

First, the BBC report implies that personal computing devices can plug in and receive classified information.

And:

Second, CNBC explains that sweeps are not enough. Compartmentalization of systems and methods puts who can do what, and how, into “cells.”

Andy Grove’s observation popped into my mind. He allegedly rattled off this statement:

Success breeds complacency. Complacency breeds failure. Only the paranoid survive.

Net net: It is easier to “trust” and “assume” cyber security than to fix it. Real fixes edge into fear and paranoia.

Stephen E Arnold, April 9, 2025

FOGINT: Targets Draw Attention. Signal Is a Target

April 1, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

We have been plugging away on the “Telegram Overview: Notes for Analysts and Investigators.” We have not exactly ignored Signal or the dozens of other super secret, encrypted beyond belief messaging applications. We did compile a table of those we came across, and Signal was on that list.

I read “NSA Warned of Vulnerabilities in Signal App a Month Before Houthi Strike Chat.” I am not interested in the political facets of this incident. The important point for me is this statement:

The National Security Agency sent out an operational security special bulletin to its employees in February 2025 warning them of vulnerabilities in using the encrypted messaging application Signal

One of the big time cyber security companies spoke with me, and I mentioned that Signal might not be the cat’s pajamas. To the credit of that company and the former police chief with whom I spoke, the firm shifted to an end to end encrypted messaging app we had identified as slightly less wonky. Good for that company, and a pat on the back for the police chief who listened to me.

In my experience, operational bulletins are worth reading. When the bulletin is “special,” re-reading the message is generally helpful.

Signal, of course, defends itself vigorously. The coach who loses a basketball game says, “Our players put out a great effort. It just wasn’t enough.”

Presenting oneself as a super secret messaging app immediately makes that messaging app a target. I know firsthand that some whiz kid entrepreneurs believe that their EE2E solution is the best one ever. In fact, a year ago, such an entrepreneur told me, “We have developed a method that only a government agency can compromise.”

Yeah, that’s the point of the NSA bulletin.

Let me ask you a question: “How many computer science students in countries outside the United States are looking at EE2E messaging apps and trying to figure out how to compromise the data?” Years ago, I gave some lectures in Tallinn, Estonia. I visited a university computer science class and asked the students about the projects each had selected. Several of them told me that they were trying to compromise messaging systems. A favorite target was Telegram, but Signal came up.

I know the wizards who cook up EE2E messaging apps and use the latest and greatest methods for delivering security with bells on are fooling themselves. Here are the reasons:

  1. Systems relying on open source methods are well documented. Exploits exist and we have noticed some CaaS offers to compromise these messages. Now the methods may be illegal in many countries, but they exist. (I won’t provide a checklist in a free blog post. Sorry.)
  2. Techniques to prevent compromise of secure messaging systems involve some patented systems and methods. Yes, the patents are publicly available, but the methods are simply not possible unless one has considerable resources for software, hardware, and deployment.
  3. A number of organizations turn EE2E messaging systems into happy eunuchs taking care of the sultan’s harem. I have poked fun at the blunders of the NSO Group and its Pegasus approach, and I have pointed out that the goodies of the Hacking Team escaped into the wild a long time ago. The point is that once the procedures for performing certain types of compromise are no longer secret, other humans can and will create a facsimile and use those emulations to suck down private messages, the metadata, and probably the pictures on the device too. Toss in some AI jazziness, and the speed of the process goes faster than my old 1962 Studebaker Lark.

Let me wrap up by reiterating that I am not addressing the incident involving Signal. I want to point out that I am not into “information wants to be free.” Certain information is best managed when it is secret. Outfits like Signal and the dozens of other EE2E messaging apps are targets. Targets get hit. Why put neon lights on oneself and try to hide the fact that those young computer science students or their future employers will find a way to compromise the information?

Technical stealth, network fiddling, human bumbling: compromises will continue to occur. There were good reasons to enforce security; that’s why stringent procedures and hardened systems were developed. Today it’s marketing, and the systems may no longer deliver what the 23-year-old art history major with a job in marketing says they do.

Stephen E Arnold, April 1, 2025

Cyber Attacks in Under a Minute

March 25, 2025

Cybercrime has evolved. VentureBeat reports, "51 Seconds to Breach: How CISOs Are Countering AI-Driven, Lightning-Fast Deepfake, Vishing and Social Engineering Attacks." Yes, according to cybersecurity firm CrowdStrike‘s Adam Meyers, the fastest breakout time he has seen is 51 seconds. No wonder bad actors have an advantage—it can take cyber defense weeks to months to determine a system has been compromised. In the interim, hackers can roam undetected.

Cybercrime methods have also changed. Where malware was once the biggest problem, hackers now favor AI-assisted phishing and vishing (voice-based phishing) campaigns. We learn:

"Vishing is out of control due in large part to attackers fine-turning their tradecraft with AI. CrowdStrike’s 2025 Global Threat Report found that vishing exploded by 442% in 2024. It’s the top initial access method attackers use to manipulate victims into revealing sensitive information, resetting credentials and granting remote access over the phone. ‘We saw a 442% increase in voice-based phishing in 2024. This is social engineering, and this is indicative of the fact that adversaries are finding new ways to gain access because…we’re kind of in this new world where adversaries have to work a little bit harder or differently to avoid modern endpoint security tools,’ Meyers said. Phishing, too, continues to be a threat. Meyers said, ‘We’ve seen that with phishing emails, they have a higher click-through rate when it’s AI-generated content, a 54% click-through rate, versus 12% when a human is behind it.’"

The write-up suggests three strategies to fight today’s breaches. Stop attackers at the authentication layer by shortening token lifetimes and implementing real-time revocation. Also, set things up so no one person can bypass security measures. No, not even the owner. Maybe especially not them. Next, we are advised, fight AI with AI: Machine-learning tools now exist to detect intrusions and immediately shut them down. Finally, stop lateral movement from the breach point with security that is unified across the system. See the write-up for more details on each of these.
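The first strategy, shorter token lifetimes plus real-time revocation, can be sketched as follows. This is an illustrative sketch under assumed names and an assumed 15-minute lifetime; it is not any particular vendor’s actual mechanism.

```python
# Sketch: a short lifetime shrinks the window in which a stolen token
# is useful, and a revocation set kills a token immediately on
# suspicion. Lifetime and names are illustrative assumptions.

TOKEN_LIFETIME_SECONDS = 15 * 60

revoked: set = set()

def issue_token(token_id: str, now: float) -> dict:
    """Issue a token that expires TOKEN_LIFETIME_SECONDS from `now`."""
    return {"id": token_id, "expires_at": now + TOKEN_LIFETIME_SECONDS}

def revoke(token_id: str) -> None:
    """Real-time revocation: takes effect on the next validity check."""
    revoked.add(token_id)

def is_valid(token: dict, now: float) -> bool:
    return token["id"] not in revoked and now < token["expires_at"]

# A token stolen at minute 0 dies at minute 15 even if the defender
# notices nothing; explicit revocation kills it sooner.
t = issue_token("sess-42", now=0.0)
print(is_valid(t, now=60.0))   # still inside the window
revoke("sess-42")
print(is_valid(t, now=61.0))   # dead immediately after revocation
```

Against a 51-second breakout, the lifetime alone does not save you; the point of pairing it with real-time revocation is that detection at any moment cuts access at that moment, not at the next expiry.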

Cynthia Murrell, March 25, 2025

Why Worry about TikTok?

March 21, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I hope this news item from WCCF Tech is wildly incorrect. I have a nagging thought that it might be on the money. “Deepseek’s Chatbot Was Being Used By Pentagon Employees For At Least Two Days Before The Service Was Pulled from the Network; Early Version Has Been Downloaded Since Fall 2024” is the headline I noted. I find this interesting.

The short article reports:

A more worrying discovery is that Deepseek mentions that it stores data on servers in China, possibly presenting a security risk when Pentagon employees started playing around with the chatbot.

And adds:

… employees were using the service for two days before this discovery was made, prompting swift action. Whether the Pentagon workers have been reprimanded for their recent act, they might want to exercise caution because Deepseek’s privacy policy clearly mentions that it stores user data on its Chinese servers.

Several observations:

  1. This is a nifty example of an insider threat. I thought cyber security services blocked this type of to and fro from government computers on a network connected to public servers.
  2. The reaction time is either months (since fall 2024) or 48 hours. My hunch is that it is the months-long usage of an early version of the Chinese service.
  3. Which “manager” is responsible? Sorting out which vendors’ software did not catch this and which individual’s unit dropped the ball will be interesting and probably unproductive. Is it in any authorized vendors’ interest to say, “Yeah, our system doesn’t look for phoning home to China but it will be in the next update if your license is paid up for that service.” Will a US government professional say, “Our bad.”

Net net: We have snow removal services that don’t remove snow. We have aircraft crashing in sight of government facilities. And we have Chinese smart software running on US government systems connected to the public Internet. Interesting.

Stephen E Arnold, March 21, 2025

AI Hiring Spoofs: A How To

March 12, 2025

Be aware. A dinobaby wrote this essay. No smart software involved.

The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, and I worked together for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.

“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique that was not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.

The write up explains that a company wants to hire a professional. Everything hums along and then:

…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.

The cited article explains how to set up and operate this type of deep fake play. I am not going to present the “how to” in this blog post; if you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.

Several observations:

  1. Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
  2. The best way to avoid AI centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately the old-fashioned way is slow, expensive, and prone to social engineering tactics.
  3. As AI advances and bad actors take advantage of the increased capabilities of smart software, humans who are not actively involved with AI do not adapt quickly. Personnel related matters are a pain point for many organizations.

To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write up was a cyber security outfit. Did the early alert, proactive, AI infused system prevent penetration?

Nope.

Stephen E Arnold, March 12, 2025

Encryption: Not the UK Way but Apple Is A-Okay

March 6, 2025

The UK is on a mission. It seems to be making progress. The BBC reports, "Apple Pulls Data Protection Tool After UK Government Security Row." Technology editor Zoe Kleinman explains:

"Apple is taking the unprecedented step of removing its highest level data security tool from customers in the UK, after the government demanded access to user data. Advanced Data Protection (ADP) means only account holders can view items such as photos or documents they have stored online through a process known as end-to-end encryption. But earlier this month the UK government asked for the right to see the data, which currently not even Apple can access. Apple did not comment at the time but has consistently opposed creating a ‘backdoor’ in its encryption service, arguing that if it did so, it would only be a matter of time before bad actors also found a way in. Now the tech giant has decided it will no longer be possible to activate ADP in the UK. It means eventually not all UK customer data stored on iCloud – Apple’s cloud storage service – will be fully encrypted."
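The property Kleinman describes, where the provider stores only ciphertext it cannot read, can be illustrated with a toy example. This is a teaching sketch only: a SHA-256 XOR keystream is not a real cipher, and this is emphatically not Apple’s ADP design.

```python
import hashlib

# Toy illustration of the end-to-end encryption property: the cloud
# provider holds only ciphertext, and without the account holder's
# key the stored bytes are unreadable. NOT production cryptography.

def _keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    ks = _keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt = encrypt  # XOR with the same keystream inverts itself

device_key = b"known only to the account holder"
stored_in_cloud = encrypt(device_key, b"holiday photo bytes")
# The provider (and anyone compelling the provider) sees only the
# ciphertext; only the key holder can invert the transformation.
```

This is why a “backdoor” cannot be bolted on quietly: the provider would have to hold, or be able to obtain, the user’s key, which changes the design for every user, not just the targets of a warrant.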

The UK’s Home Office refused to comment on the matter. Apple states it was "gravely disappointed" with this outcome. It emphasizes its longstanding refusal to build any kind of back door or master key. It is the principle of the thing. Instead, it is now removing the locks on the main entrance. Much better.

As of the publication of Kleinman’s article, new iCloud users who tried to opt into ADP received an error message. Apparently, protection for existing users will be stripped at a later date. Some worry Apple’s withdrawal of ADP from the UK sets a bad precedent in the face of similar demands in other countries. Of course, so would caving in to them. The real culprit here, some say, is the UK government that put its citizens’ privacy at risk. Will other governments follow its lead? Will tech firms develop some best practices in the face of such demands? We wonder what their priorities will be.

Cynthia Murrell, March 6, 2025
