Google and Personnel Vetting: Careless?
February 20, 2025
No smart software required. This dinobaby works the old fashioned way.
The Sundar & Prabhakar Comedy Show pulled another gag. This one did not delight audiences the way Prabhakar’s AI presentation did, nor does it outdo Google’s recent smart software gaffe. It is, however, a bit of a hoot for an outfit with money, smart people, and smart software.
I read the decidedly non-humorous news release from the Department of Justice titled “Superseding Indictment Charges Chinese National in Relation to Alleged Plan to Steal Proprietary AI Technology.” The write up states on February 4, 2025:
A federal grand jury returned a superseding indictment today charging Linwei Ding, also known as Leon Ding, 38, with seven counts of economic espionage and seven counts of theft of trade secrets in connection with an alleged plan to steal from Google LLC (Google) proprietary information related to AI technology. Ding was initially indicted in March 2024 on four counts of theft of trade secrets. The superseding indictment returned today describes seven categories of trade secrets stolen by Ding and charges Ding with seven counts of economic espionage and seven counts of theft of trade secrets.
Thanks, OpenAI, good enough.
Mr. Ding, obviously a Type A worker, appears to have been quite industrious at the Google. He was not working only for the online advertising giant; he was also working for another entity. The DoJ news release describes his set up this way:
While Ding was employed by Google, he secretly affiliated himself with two People’s Republic of China (PRC)-based technology companies. Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC. By May 2023, Ding had founded his own technology company focused on AI and machine learning in the PRC and was acting as the company’s CEO.
What technology caught Mr. Ding’s eye? The write up reports:
Ding intended to benefit the PRC government by stealing trade secrets from Google. Ding allegedly stole technology relating to the hardware infrastructure and software platform that allows Google’s supercomputing data center to train and serve large AI models. The trade secrets contain detailed information about the architecture and functionality of Google’s Tensor Processing Unit (TPU) chips and systems and Google’s Graphics Processing Unit (GPU) systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of training and executing cutting-edge AI workloads. The trade secrets also pertain to Google’s custom-designed SmartNIC, a type of network interface card used to enhance Google’s GPU, high performance, and cloud networking products.
At least Mr. Ding validated the importance of some of Google’s sprawling technical insights. That’s a plus, I assume.
One of the more colorful items in the DoJ news release concerned “evidence.” The DoJ says:
As alleged, Ding circulated a PowerPoint presentation to employees of his technology company citing PRC national policies encouraging the development of the domestic AI industry. He also created a PowerPoint presentation containing an application to a PRC talent program based in Shanghai. The superseding indictment describes how PRC-sponsored talent programs incentivize individuals engaged in research and development outside the PRC to transmit that knowledge and research to the PRC in exchange for salaries, research funds, lab space, or other incentives. Ding’s application for the talent program stated that his company’s product “will help China to have computing power infrastructure capabilities that are on par with the international level.”
Mr. Ding did not use Google’s cloud-based presentation program. I found the explicit desire to “help China” interesting. One wonders how Google’s Googley interview process, run by Googley people, failed to notice any indicators of Mr. Ding’s loyalties. Googlers are very confident of their Googliness, which obviously tolerates an insider threat who conveys data to a nation state known to be adversarial in its view of the United States.
I am a dinobaby, and I find this type of employee insider threat at Google remarkable. Google bought Mandiant. Google has internal security tools. Google has a very proactive stance about its security capabilities. However, in this case, I wonder if a Googler ever noticed that Mr. Ding used PowerPoint, not the Google-approved presentation program. No true Googler would use PowerPoint, an archaic third-party program Microsoft bought eons ago and has managed to pump full of steroids for decades.
Yep, the tell — Googlers who use Microsoft products. Sundar & Prabhakar will probably integrate a short bit into their act in the near future.
Stephen E Arnold, February 20, 2025
Hackers and AI: Of Course, No Hacker Would Use Smart Software
February 18, 2025
This blog post is the work of a real live dinobaby. Believe me, after reading the post, you know that smart software was not involved.
Hackers would never, ever use smart software. I mean those clever stealer distributors preying on get-rich-quick stolen credit card users. Nope. Those people using online games to lure kiddies and people with kiddie-level intelligence into providing their parents’ credit card data? Nope and double nope. Those people in computer science classes in Azerbaijan learning how to identify security vulnerabilities while working as contractors for criminals. Nope. Never. Are you crazy? These bad actors know that smart software is most appropriate for Mother Teresa-type activities and creating GoFundMe pages to help those harmed by natural disasters, bad luck, or not having a job except streaming.
I mean everyone knows that bad actors respect the firms providing smart software. It is common knowledge that bad actors play fair. Why would a criminal use smart software to create more efficacious malware payloads, compromise Web sites, or defeat security to trash the data on Data.gov? Ooops. Bad example. Data.gov has been changed.
I read “Google Says Hackers Abuse Gemini AI to Empower Their Attacks.” That’s the spirit. Bad actors are using smart software. The value of the systems is evident to criminals. The write up says:
Multiple state-sponsored groups are experimenting with the AI-powered Gemini assistant from Google to increase productivity and to conduct research on potential infrastructure for attacks or for reconnaissance on targets. Google’s Threat Intelligence Group (GTIG) detected government-linked advanced persistent threat (APT) groups using Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses. Threat actors have been trying to leverage AI tools for their attack purposes to various degrees of success as these utilities can at least shorten the preparation period. Google has identified Gemini activity associated with APT groups from more than 20 countries but the most prominent ones were from Iran and China.
Stop the real-time news stream! Who could have imagined that bad actors would be interested in systems and methods that make their behaviors more effective and efficient?
When Microsoft rolled out its marketing gut punch aimed squarely at Googzilla, the big online advertising beast responded. The Code Red and Code Yellow lights flashed. Senior managers held meetings after Foosball games and hanging out at Philz Coffee.
Did Google management envision the reality of bad actors using Gemini? No. It appears that the Google acquisition Mandiant figured it out. Eventually — it’s been two years and counting since Microsoft caused the AI tsunami — the Eureka! moment arrived.
The write up reports:
Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini or rephrasing their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.
Of course the attempts were unsuccessful. Do US banks tell their customers when check fraud or other cyber dishonesty relieves people of their funds? Sure they don’t. Therefore, it is only the schlubs who are unfortunate enough to have the breach disclosed. Then the cyber security outfits leap into action and issue fixes. Everything in the cyber security world is buttoned up and buttoned down. Absolutely.
Several observations:
- How has free access without any type of vetting worked out? The question is directed at the big tech outfits beavering away in this technology blast zone.
- What are the providers of free smart software doing to make certain that the method can only produce seventh grade students’ essays about the transcontinental railroad?
- What exactly is a user of free smart software supposed to do to rein in the actions of nation states with which most Americans are only somewhat familiar? I mean, there is a Chinese restaurant near Harrod’s Creek. Am I to discuss the matter with the waitress?
Why worry? That worked for Mad Magazine until it didn’t. Hey, Google, thanks for the information. Who could have known smart software can be used for nefarious purposes? (Obviously not Google.)
Stephen E Arnold, February 18, 2025
A Vulnerability Bigger Than SolarWinds? Yes.
February 18, 2025
No smart software. Just a dinobaby doing his thing.
I read an interesting article from WatchTowr Labs. (The spelling is what the company uses, so the url is labs.watchtowr.com.) On February 4, 2025, the company reported that it discovered what one can think of as orphaned or abandoned-but-still-alive Amazon S3 “buckets.” The discussion of the firm’s research and what it revealed is presented in “8 Million Requests Later, We Made The SolarWinds Supply Chain Attack Look Amateur.”
The company explains that it was curious if what it calls “abandoned infrastructure” on a cloud platform might yield interesting information relevant to security. We worked through the article and created what in the good old days would have been called an abstract for a database like ABI/INFORM. Here’s our summary:
The article from WatchTowr Labs describes a large-scale experiment in which researchers identified and took control of about 150 abandoned Amazon Web Services S3 buckets previously used by various organizations, including governments, militaries, and corporations. Over two months, these buckets received more than eight million requests for software updates, virtual machine images, and sensitive files, exposing a significant vulnerability. WatchTowr explains that bad actors could have injected malicious content and that abandoned infrastructure could be used for supply chain attacks like SolarWinds. Had this happened, the impact would have been significant.
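The probing technique is simple enough to approximate in a few lines. The sketch below is our illustration, not WatchTowr’s tooling, and the bucket names are hypothetical. It relies on documented S3 behavior: an unauthenticated HEAD request against a bucket’s endpoint returns 404 only when no bucket owns that name, which means anyone could register it and serve whatever stale clients request.

```python
# Sketch: classify S3 bucket names pulled from old configs or binaries.
# 404 from the endpoint means the name is unclaimed and re-registrable,
# which is the abandoned-infrastructure risk described in the article.
import urllib.request
import urllib.error

def classify(status: int) -> str:
    """Map an HTTP status from the bucket endpoint to a risk label."""
    if status == 404:
        return "UNCLAIMED: the name is free for anyone to register"
    if status in (200, 403):
        return "claimed (a 403 just means the bucket is private)"
    return f"indeterminate (HTTP {status})"

def probe(bucket: str) -> str:
    """Probe a (hypothetical) bucket name with an unsigned HEAD request."""
    req = urllib.request.Request(
        f"https://{bucket}.s3.amazonaws.com/", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code)

# Offline example of the classification logic:
print(classify(404))  # UNCLAIMED: the name is free for anyone to register
```

Run `probe()` against names harvested from stale firmware or deployment scripts to see how much “held-together-by-string” infrastructure is out there.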
Several observations are warranted:
- Does Amazon Web Services have administrative functions to identify orphaned “buckets” and take action to minimize the attack surface?
- With companies’ information technology teams abandoning infrastructure, how will these organizations determine whether other infrastructure vulnerabilities exist and remediate them?
- What can cyber security vendors’ software and systems do to identify and neutralize these “shoot yourself in the foot” vulnerabilities?
One of the most compelling statements in the WatchTowr article, in my opinion, is:
… we’d demonstrated just how held-together-by-string the Internet is and at the same time point out the reality that we as an industry seem so excited to demonstrate skills that would allow us to defend civilization from a Neo-from-the-Matrix-tier attacker – while a metaphorical drooling-kid-with-a-fork-tier attacker, in reality, has the power to undermine the world.
Is WatchTowr correct? With government and commercial organizations leaving S3 buckets available, perhaps WatchTowr should have included gum, duct tape, and grade-school white glue in its description of the Internet?
Stephen E Arnold, February 18, 2025
A New Spin on Insider Threats: Employees Secretly Use AI At Work
February 12, 2025
We’re afraid of AI replacing our jobs. Employers are blamed for wanting to replace humans with algorithms, but employees are already bringing AI into work. According to the BBC, employees are secretly using AI: “Why Employees Smuggle AI Into Work.” In IT departments across the United Kingdom (and probably the world), knowledge workers are using AI tools without permission from their leads.
Software AG conducted a survey of knowledge workers, and the results showed that half of them use personal AI tools. Knowledge workers are defined as people who primarily work at a desk or a computer. Some use the tools because their employers don’t provide any; others said they wanted to choose their own tools.
Many of the workers are also not asking. They’re abiding by the mantra of, “It’s easier to ask forgiveness than permission.”
One worker uses ChatGPT as a mechanized coworker. ChatGPT lets the worker consume information faster and has increased his productivity. His company banned AI tools; he didn’t know why but assumed it was a control thing.
AI tools also pose security risks because the algorithms learn from user input. The algorithms store that information, and it can expose company secrets:
“Companies may be concerned about their trade secrets being exposed by the AI tool’s answers, but Alastair Paterson, CEO and co-founder of Harmonic Security, thinks that’s unlikely. "It’s pretty hard to get the data straight out of these [AI tools]," he says.
However, firms will be concerned about their data being stored in AI services they have no control over, no awareness of, and which may be vulnerable to data breaches.”
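A security team resigned to smuggled-in AI could at least interpose a client-side redaction step before prompts leave the building. A minimal sketch of the idea follows; the patterns are illustrative placeholders, not a complete data-loss-prevention policy.

```python
# Sketch: scrub likely-sensitive tokens from a prompt before it is
# pasted into a public AI tool. Patterns here are examples only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with labeled placeholders, leaving context intact."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize the deal memo for jane.doe@example.com, "
             "key AKIAABCDEFGHIJKLMNOP"))
```

Such a filter does not make a third-party AI service trustworthy; it only reduces what the service gets to store.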
AI tools are like any new technology: they need to be used and tested, then regulated. AI can’t replace experience, but it certainly helps get the job done.
Whitney Grace, February 12, 2025
Acquiring AWS Credentials—Let Us Count the Ways
February 7, 2025
Will bad actors interested in poking around Amazon Web Services find Wiz’s write up interesting? The answer is at the end of this blog post.
Cloud security firm Wiz shares an informative blog post: "The Many Ways to Obtain Credentials in AWS." It is a write-up that helps everyone: customers, Amazon, developers, cybersecurity workers, and even bad actors. We have not seen a similar write up about Telegram, however. Why publish such a guide to gaining IAM role and other AWS credentials? Why, to help guard against would-be hackers who might use these methods, of course.
Writer Scott Piper describes several services and features one might use to gain access: Certain AWS SDK credential providers; the Default Host Management Configuration; Systems Manager hybrid activation; the Internet of Things credentials provider; IAM Roles Anywhere; Cognito’s API, GetCredentialsForIdentity; and good old Datasync. The post concludes:
"There are many ways that compute services on AWS obtain their credentials and there are many features and services that have special credentials. This can result in a single EC2 having multiple IAM principals accessible from it. In order to detect attackers, we need to know the various ways they might attempt to obtain these credentials. This article has shown how this is not a simple problem and requires defenders to have just as much, if not more, expertise as attackers in credential access."
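For readers unfamiliar with the best-known of these mechanisms, here is a minimal sketch of the documented IMDSv2 flow an EC2 instance (or an attacker with code execution on one) uses to obtain IAM role credentials. This is our illustration, not code from the Wiz post, and the live calls only work from inside an EC2 instance.

```python
# Sketch: the documented IMDSv2 two-step (session token, then role
# credentials). Defenders hunting credential theft should recognize
# this traffic to 169.254.169.254.
import json
import urllib.request

IMDS = "http://169.254.169.254/latest"

def parse_credentials(doc: str) -> dict:
    """Extract the three fields an SDK needs from the IMDS JSON reply."""
    body = json.loads(doc)
    return {
        "access_key": body["AccessKeyId"],
        "secret_key": body["SecretAccessKey"],
        "session_token": body["Token"],
    }

def fetch_role_credentials() -> dict:
    """Only works from inside an EC2 instance with an attached role."""
    # Step 1: IMDSv2 requires a session token obtained via PUT.
    tok_req = urllib.request.Request(
        f"{IMDS}/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
    token = urllib.request.urlopen(tok_req, timeout=2).read().decode()
    hdr = {"X-aws-ec2-metadata-token": token}
    # Step 2: discover the attached role name, then fetch its keys.
    base = f"{IMDS}/meta-data/iam/security-credentials/"
    role = urllib.request.urlopen(
        urllib.request.Request(base, headers=hdr), timeout=2).read().decode()
    doc = urllib.request.urlopen(
        urllib.request.Request(base + role, headers=hdr), timeout=2).read().decode()
    return parse_credentials(doc)

# Offline demonstration of the parsing step with a fabricated reply:
sample = '{"AccessKeyId": "ASIAEXAMPLE", "SecretAccessKey": "s", "Token": "t"}'
print(parse_credentials(sample))
```

Wiz’s point stands: this is only one of many credential paths, and each one is something a defender must be able to spot.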
So true. Especially with handy cheat sheets like this one available online. Based in New York, New York, Wiz was founded in 2020.
Will bad actors find Wiz’s post interesting? Answer: yes, but probably less interesting than the fashion sense of a certain companion of Mr. Bezos. But not by much.
Cynthia Murrell, February 7, 2025
Several Security Pitfalls to Avoid in Software Design
February 6, 2025
Developers concerned about security should check out "Seven Types of Security Issues in Software Design" at InsBug. The article does leave out a few points we would have included. Using Microsoft software, for example, or paying for cyber security solutions that don’t work as licensees believe. And don’t forget engineering for security rather than expediency and cost savings. Nevertheless, the post makes some good points. It begins:
"Software is gradually defining everything, and its forms are becoming increasingly diverse. Software is no longer limited to the applications or apps we see on computers or smartphones. It is now an integral part of hardware devices and many unseen areas, such as cars, televisions, airplanes, warehouses, cash registers, and more. Besides sensors and other electronic components, the actions and data of hardware often rely on software, whether in small amounts of code or in hidden or visible forms. Regardless of the type of software, the development process inevitably encounters bugs that need to be identified and fixed. While major bugs are often detected and resolved before release or deployment by developers or testers, security vulnerabilities don’t always receive the same attention."
Sad but true. The seven categories include: Misunderstanding of Security Protection Technologies; Component Integration and Hidden Security Designs; Ignoring Security in System Design; Security Risks from Poor Exception Handling; Discontinuous or Inconsistent Trust Relationships; Over-Reliance on Single-Point Security Measures; and Insufficient Assessment of Scenarios or Environments. See the write-up for details on each point. We note a common thread—a lack of foresight. The post concludes:
"To minimize security risks and vulnerabilities in software design and development, one must possess solid technical expertise and a robust background in security offense and defense. Developing secure software is akin to crafting fine art — it requires meticulous thought, constant consideration of potential threats, and thoughtful design solutions. This makes upfront security design critically important."
Security should not be an afterthought. But after a breach, it is going to be fixed. Oh, the check is in the mail.
Cynthia Murrell, February 6, 2025
FOGINT: Are Secure Communications Possible? DekkoSecure Says, “Yes”
January 15, 2025
Prepared by the FOGINT research team.
For a project in 2023 and 2024, the FOGINT team worked on secure communications. We discovered that most of the alleged end-to-end encrypted messaging systems were not secure. The firm commissioning our report seemed surprised when we identified common points of vulnerability in existing E2EE systems. The FOGINT team itself was impressed with a handful of organizations resolving secure messaging issues in well-engineered ways. We also noted that some of the most significant secure communication tools were drowned out by the consumer-centric solutions available, along with the notion that making certain software available as open source was proof that these tools were indeed secure.
A telling example is the perception of Telegram Messenger as an end-to-end encrypted solution. It is not. And what about Zoom, a service which exploded during the Covid panic? Presumably hiring a “security guru” solved its Zoom-bombing problems and delivered “total security.” That high-profile hiring delivered PR, not security.
FOGINT wants to provide some information about a secure communications service that does provide a secure way to share content in image, audio, video, or text form. The solution was developed by Dmytro Bablinyuk and Jay Haybatov. In 2015 DekkoSecure began marketing its system. In the last decade DekkoSecure has emerged as a reliable provider of secure communication and collaboration tools, with a specialization in encrypted solutions tailored for the law enforcement, legal, healthcare, defense and government sectors. Their comprehensive platform seamlessly integrates four main product lines, each designed to address critical security and usability needs.
The firm’s Digital Signature Software offers robust features such as audit trails for document tracking, mobile signature support, customizable templates, automated reminder systems, and strong authentication protocols. This software ensures that document signing processes are both secure and efficient, meeting the stringent requirements of various industries. Key features of the solution include:
- Secure File Sharing is another cornerstone of DekkoSecure’s platform, providing end-to-end encryption for files both in transit and at rest. It supports real-time collaboration, version control, integrated workflow management, and a user-friendly drag-and-drop interface. These features enable secure and efficient file management and collaboration across teams.
- The company’s Cloud Storage Service boasts granular access controls, cross-device synchronization, compliant archiving and retention, and version history management. This service ensures that sensitive data is stored securely, accessible when needed, and meets regulatory compliance standards. The firm’s Zero Trust/Zero Knowledge encryption is new to the U.S. law enforcement market and provides clients comfort that only authorized and authenticated users can access files, which includes DekkoSecure not having access to the files.
- Security Software — The company incorporates
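The Zero Trust/Zero Knowledge claim above boils down to one design choice: encryption happens on the client, and the key never reaches the provider. The toy sketch below illustrates the idea only; it is not DekkoSecure’s implementation, and a real system would use a vetted AEAD cipher such as AES-GCM rather than this hash-based keystream.

```python
# Toy sketch of zero-knowledge storage: the client encrypts before
# upload and keeps the key, so the provider only ever sees ciphertext.
# Illustration only -- do not use this cipher for real data.
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from key + nonce (toy cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> dict:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).hexdigest()
    return {"nonce": nonce, "ciphertext": ct, "tag": tag}  # all the server stores

def decrypt(key: bytes, blob: dict) -> bytes:
    expect = hmac.new(key, blob["nonce"] + blob["ciphertext"], hashlib.sha256).hexdigest()
    assert hmac.compare_digest(expect, blob["tag"]), "ciphertext was tampered with"
    return bytes(a ^ b for a, b in zip(
        blob["ciphertext"], keystream(key, blob["nonce"], len(blob["ciphertext"]))))

key = os.urandom(32)                # never leaves the client
blob = encrypt(key, b"case file 42")
print(decrypt(key, blob))           # -> b'case file 42'
```

The point of the sketch: even the storage operator, handed `blob`, can recover nothing without `key`.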
Key competitive advantages of DekkoSecure include its all-in-one platform integration, user-friendly interface, strong security focus, and a comprehensive feature set. These strengths make it an attractive option for various target markets, including small to medium-sized businesses, large enterprises, government agencies, and remote workforces.
However, DekkoSecure faces certain challenges. The system is tailored to the needs of law enforcement, courts, and healthcare. The company employs a data-usage pricing structure and does not limit the number of users in an organization. Also, although the system is easy to use, the firm’s engineers work with clients to ensure that the platform has the processes, look, and feel they require prior to implementation. Looking ahead to 2025, DekkoSecure will benefit from the US FBI’s suggestion that encrypted communications become the standard for organizations and individuals.
Net net: DekkoSecure’s focus on encryption and user experience, combined with its broad feature set, makes it particularly appealing to organizations handling sensitive data. Despite the platform’s complexity posing challenges for some users, its integrated approach to secure communication and collaboration offers significant value for businesses seeking to consolidate their security tools.
Stephen E Arnold, January 15, 2025
FOGINT: A Shocking Assertion about Israeli Intelligence Before the October 2023 Attack
January 13, 2025
One of my colleagues alerted me to a new story in the Jerusalem Post. The article is “IDF Could’ve Stopped Oct. 7 by Monitoring Hamas’s Telegram, Researchers Say.” The title makes clear that this is an “after action” analysis. Everyone knows that thinking about the whys and wherefores right of bang is a safe exercise. Nevertheless, let’s look at what the Jerusalem Post reported on January 5, 2025.
First, this statement:
“These [Telegram] channels were neither secret nor hidden — they were open and accessible to all.” — Lt.-Col. (res.) Jonathan Dahoah-Halevi
Telegram puts up some “silent” barriers to prevent some third parties from downloading active discussions in real time. I know of one Israeli cyber security firm which asserts that it monitors Telegram public channel messages. (I won’t ask, “Why didn’t analysts at that firm raise an alarm or contact their former Israeli government employers with that information?” Those are questions I will sidestep.)
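The researchers’ core point, that these channels were open to anyone, is easy to verify: public Telegram channels expose a web preview at t.me/s/<channel>, reachable with nothing more than the standard library. The sketch below is our illustration; the channel name is a placeholder, and the keywords echo phrases quoted in the article.

```python
# Sketch: scan a public Telegram channel's web preview for watched
# phrases. Channel name and keyword list are placeholders.
import re
import urllib.request

KEYWORDS = ("surprises", "loaded", "operation")

def flag_messages(html: str, keywords=KEYWORDS) -> list:
    """Return message snippets from the preview HTML containing a keyword."""
    texts = re.findall(
        r'<div class="tgme_widget_message_text[^"]*"[^>]*>(.*?)</div>',
        html, re.S)
    plain = [re.sub(r"<[^>]+>", " ", t) for t in texts]
    return [t.strip() for t in plain
            if any(k in t.lower() for k in keywords)]

# Live use (requires network):
#   html = urllib.request.urlopen("https://t.me/s/<channel>").read().decode()
#   print(flag_messages(html))

# Offline example with a synthetic preview snippet:
snippet = ('<div class="tgme_widget_message_text js-msg" dir="auto">'
           'There are many, many, many <b>surprises</b></div>')
print(flag_messages(snippet))
```

Nothing here requires an intelligence agency’s tooling, which is precisely the researchers’ complaint.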
Second, the article reports:
These channels [public Telegram channels like Military Tactics] were neither secret nor hidden — they were open and accessible to all. The “Military Tactics” Telegram channel even shared professional content showcasing the organization’s level of preparedness and operational capabilities. During the critical hours before the attack, beginning at 12:20 a.m. on October 7, the channel posted a series of detailed messages that should have raised red flags, including: “We say to the Zionist enemy, [the operation] coming your way has never been experienced by anyone,” “There are many, many, many surprises,” “We swear by Allah, we will humiliate you and utterly destroy you,” and “The pure rifles are loaded, and your heads are the target.”
Third, I circled this statement:
However, Dahoah-Halevi further asserted that the warning signs appeared much earlier. As early as September 17, a message from the Al-Qassam Brigades claimed, “Expect a major security event soon.” The following day, on September 18, a direct threat was issued to residents of the Gaza border communities, stating, “Before it’s too late, flee and leave […] nothing will help you except escape.”
The attack did occur, and it had terrible consequences for the young people killed and wounded and for the Israeli cyber security industry, which some believe is one of the best in the world. The attack suggested that marketing rather than effectiveness created an impression at odds with reality.
What are the lessons one can take from this report? The FOGINT team will leave that to you to answer.
Stephen E Arnold, January 13, 2025
Identifying Misinformation: A Task Not Yet Mastered
January 8, 2025
This is an official dinobaby post. No smart software involved in this blog post.
On New Year’s Eve, the US Department of the Treasury issued a news release about Russian interference in the recent US presidential election. Tucked into the document “Treasury Sanctions Entities in Iran and Russia That Attempted to Interfere in the U.S. 2024 Election” was this passage:
GRU-AFFILIATED ENTITY USES ARTIFICIAL INTELLIGENCE TOOLS TO INTERFERE IN THE U.S. 2024 ELECTION
The Moscow-based Center for Geopolitical Expertise (CGE), founded by OFAC-designated [Office of Foreign Asset Control — Editor] Aleksandr Dugin, directs and subsidizes the creation and publication of deepfakes and circulated disinformation about candidates in the U.S. 2024 general election. CGE personnel work directly with a GRU unit that oversees sabotage, political interference operations, and cyberwarfare targeting the West. Since at least 2024, a GRU officer and CGE affiliate directed CGE Director Valery Mikhaylovich Korovin (Korovin) and other CGE personnel to carry out various influence operations targeting the U.S. 2024 presidential election. At the direction of, and with financial support from, the GRU, CGE and its personnel used generative AI tools to quickly create disinformation that would be distributed across a massive network of websites designed to imitate legitimate news outlets to create false corroboration between the stories, as well as to obfuscate their Russian origin. CGE built a server that hosts the generative AI tools and associated AI-created content, in order to avoid foreign web-hosting services that would block their activity. The GRU provided CGE and a network of U.S.-based facilitators with financial support to: build and maintain its AI-support server; maintain a network of at least 100 websites used in its disinformation operations; and contribute to the rent cost of the apartment where the server is housed. Korovin played a key role in coordinating financial support from the GRU to his employees and U.S.-based facilitators. In addition to using generative AI to construct and disseminate disinformation targeting the U.S. electorate in the lead up to the U.S. 2024 general election, CGE also manipulated a video it used to produce baseless accusations concerning a 2024 vice presidential candidate in an effort to sow discord amongst the U.S. electorate. Today, OFAC is designating CGE and Korovin pursuant to E.O. 13848 for having directly or indirectly engaged in, sponsored, concealed, or otherwise been complicit in foreign malign influence in the 2024 U.S. election. Additionally, OFAC is designating CGE pursuant to E.O. 13694, as amended, E.O. 14024, and section 224 of the Countering America’s Adversaries Through Sanctions Act of 2017 (CAATSA) for being owned or controlled by, or having acted or purported to act for or on behalf of, directly or indirectly, the GRU, a person whose property and interests in property are blocked pursuant to E.O. 13694, as amended, E.O. 14024, and section 224 of CAATSA. OFAC is also designating Korovin pursuant to E.O. 14024 for being or having been a leader, official, senior executive officer, or member of the board of directors of CGE, a person whose property and interests in property are blocked pursuant to E.O. 14024.
Several questions arise:
- Was the smart software open source or commercial? What model or models powered the misinformation effort?
- What functions could intermediaries and service providers add to their existing systems to identify and block the actions of an adversary’s operative? (Obviously existing software to identify “fake” content does not work particularly well.)
- What safeguard standards can be used to prevent misuse of smart software? Are safeguard standards possible or too difficult to implement in a “run fast and break things” setting?
- What procedures and specialized software are required to provide security professionals with a reliable early warning system? This interference illustrates that the much-hyped cyber alert services are not accurate enough to deal with willful misinformation “factories.”
Stephen E Arnold, January 8, 2025