Researchers Reveal Vulnerabilities Across Pinyin Keyboard Apps
May 9, 2024
Conventional keyboards were designed for languages based on the Roman alphabet. Fortunately, apps exist to adapt them to script-based languages like Chinese, Japanese, and Korean. Unfortunately, such tools can pave the way for bad actors to capture sensitive information. Researchers at the Citizen Lab have found vulnerabilities in many pinyin keyboard apps, which romanize Chinese languages. Gee, how could those have gotten there? The post, “The Not-So-Silent Type,” presents their results. Writers Jeffrey Knockel, Mona Wang, and Zoë Reichert summarize the key findings:
- “We analyzed the security of cloud-based pinyin keyboard apps from nine vendors — Baidu, Honor, Huawei, iFlytek, OPPO, Samsung, Tencent, Vivo, and Xiaomi — and examined their transmission of users’ keystrokes for vulnerabilities.
- Our analysis revealed critical vulnerabilities in keyboard apps from eight out of the nine vendors in which we could exploit that vulnerability to completely reveal the contents of users’ keystrokes in transit. Most of the vulnerable apps can be exploited by an entirely passive network eavesdropper.
- Combining the vulnerabilities discovered in this and our previous report analyzing Sogou’s keyboard apps, we estimate that up to one billion users are affected by these vulnerabilities. Given the scope of these vulnerabilities, the sensitivity of what users type on their devices, the ease with which these vulnerabilities may have been discovered, and that the Five Eyes have previously exploited similar vulnerabilities in Chinese apps for surveillance, it is possible that such users’ keystrokes may have also been under mass surveillance.
- We reported these vulnerabilities to all nine vendors. Most vendors responded, took the issue seriously, and fixed the reported vulnerabilities, although some keyboard apps remain vulnerable.”
See the article for all the details. It describes the study’s methodology, gives specific findings for each of those app vendors, and discusses the ramifications of the findings. Some readers may want to skip to the very detailed Summary of Recommendations. It offers suggestions to fellow researchers, international standards bodies, developers, app store operators, device manufacturers, and, finally, keyboard users.
The interdisciplinary Citizen Lab is based at the Munk School of Global Affairs & Public Policy, University of Toronto. Its researchers study the intersection of information and communication technologies, human rights, and global security.
Cynthia Murrell, May 9, 2024
Which Came First? Cliffs Notes or Info Short Cuts
May 8, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The first online index I learned about was the Stanford Research Institute’s Online System. I think I was a sophomore in college working on a project for Dr. William Gillis. He wanted me to figure out how to index poems for a grant he had. The SRI system opened my eyes to what online indexes could do.
Later I learned that SRI was taking ideas from people like Valerius Maximus (30 CE) and letting a big, expensive, mostly hot group of machines do what a scribe would do in a room filled with rolled up papyri. My hunch is that other workers in similar “documents” figures out that some type of labeling and grouping system made sense. Sure, anyone could grab a roll, untie the string keeping it together, and check out its contents. “Hey,” someone said, “Put a label on it and make a list of the labels. Alphabetize the list while you are at it.”
An old-fashioned teacher struggles to get students to produce acceptable work. She cannot write TL;DR. The parents will find their scrolling adepts above such criticism. Thanks, MSFT Copilot. How’s the security work coming?
I thought about the common sense approach to keeping track of and finding information when I read “The Defensive Arrogance of TL;DR.” The essay or probably more accurately the polemic calls attention to the précis, abstract, or summary often included with a long online essay. The inclusion of what is now dubbed TL;DR is presented as meaning, “I did not read this long document. I think it is about this subject.”
On one hand, I agree with this statement:
We’re at a rolling boil, and there’s a lot of pressure to turn our work and the work we consume to steam. The steam analogy is worthwhile: a thirsty person can’t subsist on steam. And while there’s a lot of it, you’re unlikely to collect enough as a creator to produce much value.
The idea is that content is often hot air. The essay includes a chart called “The Rise of Dopamine Culture, created by Ted Gioia. Notice that the world of Valerius Maximus is not in the chart. The graphic begins with “slow traditional culture” and zips forward to the razz-ma-tazz datasphere in which we try to survive.
I would suggest that the march from bits of grass, animal skins, clay tablets, and pieces of tree bark to such examples of “slow traditional culture” like film and TV, albums, and newspapers ignores the following:
- Indexing and summarizing remained unchanged for centuries until the SRI demonstration
- In the last 61 years, manual access to content has been pushed aside by machine-centric methods
- Human inputs are less useful
As a result, the TL;DR tells us a number of important things:
- The person using the tag and the “bullets” referenced in the essay reveal that the perceived quality of the document is low or poor. I think of this TL;DR as a reverse Good Housekeeping Seal of Approval. We have a user assigned “Seal of Disapproval.” That’s useful.
- The tag makes it possible to either NOT out the content with a TL;DR tag or group documents by the author so tagged for review. It is possible an error has been made or the document is an aberration which provides useful information about the author.
- The person using the tag TL;DR creates a set of content which can be either processed by smart software or a human to learn about the tagger. An index term is a useful data point when creating a profile.
I think the speed with which electronic content has ripped through culture has caused a number of jarring effects. I won’t go into them in this brief post. Part of the “information problem” is that the old-fashioned processes of finding, reading, and writing about something took a long time. Now Amazon presents machine-generated books whipped up in a day or two, maybe less.
TL;DR may have more utility in today’s digital environment.
Stephen E Arnold, May 8, 2024
Google Trial: An Interesting Comment Amid the Yada Yada
May 8, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read “Google’s Antitrust Trial Spotlights Search Ads on the Final Day of Closing Arguments.” After decades of just collecting Google tchotchkes, US regulators appear to be making some progress. It is very difficult to determine if a company is a monopoly. It was much easier to count barrels of oil, billets of steel, and railroad cars than digital nothingness, wasn’t it?
A giant whose name is Googzilla has most of the toys. He is reminding those who want the toys about his true nature. I believe Googzilla. Do you? Thanks, Microsoft Copilot. Good enough.
One of the many reports of the Google monopoly legal activity finally provided to me a quite useful, clear statement. Here’s the passage which caught my eye:
a coalition of state attorneys said Google’s search advertising business has trapped advertisers into its ecosystem while higher ad prices haven’t led to higher returns.
I want to consider this assertion. Please, read the original write up on Digiday to get the “real” news report. I am not a journalist; I am a dinobaby, and I have some thoughts to capture.
First, the Google has been doing Googley things for about a quarter of a century. A bit longer if one counts the Backrub service in an estimable Stanford computer building. From my point of view, Google has been doing “clever.” That means to just apologize, not ask permission. That means seek inspiration from others; for example, the IBM Clever system, the Yahoo-Overture advertising system, and the use of free to gain access to certain content like books, and pretty much doing what it wants. After figuring out that Google had to make money, it “innovated” with advertising, paid a fine, and acquired people and technology to match ads to queries. Yep, Oingo (Applied Semantics) helped out. The current antitrust matter will be winding down in 2024 and probably drag through 2025. Appeals for a company with lots of money can go slowly. Meanwhile Google’s activity can go faster.
Second, the data about Google monopoly are not difficult to identify. There is the state of the search market. Well, Eric Schmidt said years ago, Qwant kept him awake at night. I am not sure that was a credible statement. If Mr. Schmidt were awake at night, it might be the result of thinking about serious matters like money. His money. When Google became widely available, there were other Web search engines. I posted a list on my Web site which had a couple of hundred entries. Now the hot new search engines just recycle Bing and open source indexes, tossing in a handful of “special” sources like my mother jazzing up potato salad. There is Google search. And because of the reach of Google search, Google can sell ads.
Third, the ads are not just for search. Any click on a Google service is a click. Due to cute tricks like Chrome and ubiquitous services like maps, Google can slap ads many place. Other outfits cannot unless they are Google “partners.” Those partners are Google’s sales force. SEO customers become buyers of Google ads because that’s the most effective way to get traffic. Does a small business owner expect a Web site to be “found” without Google Local and maybe some advertising juice. Nope. No one but OSINT experts can get Google search to deliver useful results. Google Dorks exists for a reason. Google search quality drives ad sales. And YouTube ads? Lots of ads. Want an alternative? Good luck with Facebook, TikTok, ok.ru, or some other service.
Where’s the trial now? Google has asserted that it does not understand its own technology. The judge says he is circling down the drain of the marketing funnel. But the US government depends on the Google. That may be a factor or just the shadow of Googzilla.
Stephen E Arnold, May 8, 2024
A Look at Several Cyber Busts of 2023
May 8, 2024
Curious about cybercrime and punishment? Darknet data firm DarkOwl gives us a good run down of selective take downs in its blog post, “Cybercriminal Arrests and Disruptions: 2023 Look Back.” The post asserts law enforcement is getting more proactive about finding and disrupting hackers. (Whether that improvement is keeping pace with the growth of hacking is another matter.) We are given seven high-profile examples.
First was the FBI’s takedown of New York State’s Conor Fitzpatrick, admin of the dark web trading post BreachForums. Unfortunately, the site was back up and running in no time under Fitzpatrick’s partner. The FBI seems to have had more success disrupting the Hive Ransomware group, seizing assets and delivering decryption keys to victims. Europol similarly disrupted the Ragnar Locker Ransomware group and even arrested two key individuals. Then there were a couple of kids from the Lapsus$ Gang. Literally, these hackers were UK teenagers responsible for millions of dollars worth of damage and leaked data. See the write-up for more details on these and three other 2023 cases. The post concludes:
“Only some of the law enforcement action that took place in 2023 are described in this blog. Law enforcement are becoming more and more successful in their operations against cybercriminals both in terms of arrests and seizure of infrastructure – including on the dark web. However, events this year (2024) have already shown that some law enforcement action is not enough to take down groups, particularly ransomware groups. Notable activity against BlackCat/ALPHV and LockBit have shown to only take the groups out for a matter of days, when no arrests take place. BlackCat are reported to have recently conducted an exit scam after a high-profile ransomware was paid, and Lockbit seem intent on revenge after their recent skirmish with the law. It is unlikely that law enforcement will be able to eradicate cybercrime and the game whack-a-mole will continue. However, the events of 2023 show that the law enforcement bodies globally are taking action and standing up to the criminals creating dire consequences for some, which will hopefully deter future threat actors.”
One can hope.
Cynthia Murrell, May 8, 2024
Google Stomps into the Threat Intelligence Sector: AI and More
May 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Before commenting on Google’s threat services news. I want to remind you of the link to the list of Google initiatives which did not survive. You can find the list at Killed by Google. I want to mention this resource because Google’s product innovation and management methods are interesting to say the least. Operating in Code Red or Yellow Alert or whatever the Google crisis buzzword is, generating sustainable revenue beyond online advertising has proven to be a bit of a challenge. Google is more comfortable using such methods as [a] buying and trying to scale it, [b] imitating another firm’s innovation, and [c] dumping big money into secret projects in the hopes that what comes out will not result in the firm’s getting its “glass” kicked to the curb.
Google makes a big entrance at the RSA Conference. Thanks, MSFT Copilot. Have you considerate purchasing Google’s threat intelligence service?
With that as background, Google has introduced an “unmatched” cyber security service. The information was described at the RSA security conference and in a quite Googley blog post “Introducing Google Threat Intelligence: Actionable threat intelligence at Google Scale.” Please, note the operative word “scale.” If the service does not make money, Google will “not put wood behind” the effort. People won’t work on the project, and it will be left to dangle in the wind or just shot like Cricket, a now famous example of animal husbandry. (Google’s Cricket was the Google Appliance. Remember that? Take over the enterprise search market. Nope. Bang, hasta la vista.)
Google’s new service aims squarely at the comparatively well-established and now maturing cyber security market. I have to check to see who owns what. Venture firms and others with money have been buying promising cyber security firms. Google owned a piece of Recorded Future. Now Recorded Future is owned by a third party outfit called Insight. Darktrace has been or will be purchased by Thoma Bravo. Consolidation is underway. Thus, it makes sense to Google to enter the threat intelligence market, using its Mandiant unit as a springboard, one of those home diving boards, not the cliff in Acapulco diving platform.
The write up says:
we are announcing Google Threat Intelligence, a new offering that combines the unmatched depth of our Mandiant frontline expertise, the global reach of the VirusTotal community, and the breadth of visibility only Google can deliver, based on billions of signals across devices and emails. Google Threat Intelligence includes Gemini in Threat Intelligence, our AI-powered agent that provides conversational search across our vast repository of threat intelligence, enabling customers to gain insights and protect themselves from threats faster than ever before.
Google to its credit did not trot out the “quantum supremacy” lingo, but the marketers did assert that the service offers “unmatched visibility in threats.” I like the “unmatched.” Not supreme, just unmatched. The graphic below illustrates the elements of the unmatchedness:
Credit to the Google 2024
But where is artificial intelligence in the diagram? Don’t worry. The blog explains that Gemini (Google’s AI “system”) delivers
AI-driven operationalization
But the foundation of the new service is Gemini, which does not appear in the diagram. That does not matter, the Code Red crowd explains:
Gemini 1.5 Pro offers the world’s longest context window, with support for up to 1 million tokens. It can dramatically simplify the technical and labor-intensive process of reverse engineering malware — one of the most advanced malware-analysis techniques available to cybersecurity professionals. In fact, it was able to process the entire decompiled code of the malware file for WannaCry in a single pass, taking 34 seconds to deliver its analysis and identify the kill switch. We also offer a Gemini-driven entity extraction tool to automate data fusion and enrichment. It can automatically crawl the web for relevant open source intelligence (OSINT), and classify online industry threat reporting. It then converts this information to knowledge collections, with corresponding hunting and response packs pulled from motivations, targets, tactics, techniques, and procedures (TTPs), actors, toolkits, and Indicators of Compromise (IoCs). Google Threat Intelligence can distill more than a decade of threat reports to produce comprehensive, custom summaries in seconds.
I like the “indicators of compromise.”
Several observations:
- Will this service be another Google Appliance-type play for the enterprise market? It is too soon to tell, but with the pressure mounting from regulators, staff management issues, competitors, and savvy marketers in Redmond “indicators” of success will be known in the next six to 12 months
- Is this a business or just another item on a punch list? The answer to the question may be provided by what the established players in the threat intelligence market do and what actions Amazon and Microsoft take. Is a new round of big money acquisitions going to begin?
- Will enterprise customers “just buy Google”? Chief security officers have demonstrated that buying multiple security systems is a “safe” approach to a job which is difficult: Protecting their employers from deeply flawed software and years of ignoring online security.
Net net: In a maturing market, three factors may signal how the big, new Google service will develop. These are [a] price, [b] perceived efficacy, and [c] avoidance of a major issue like the SolarWinds’ matter. I am rooting for Googzilla, but I still wonder why Google shifted from Recorded Future to acquisitions and me-too methods. Oh, well. I am a dinobaby and cannot be expected to understand.
Stephen E Arnold, May 7, 2024
Buffeting AI: A Dinobaby Is Nervous
May 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I am not sure the “go fast” folks are going to be thrilled with a dinobaby rich guy’s view of smart software. I read “Warren Buffett’s Warning about AI.” The write up included several interesting observations. The only problem is that smart software is out of the bag. Outfits like Meta are pushing the open source AI ball forward. Other outfits are pushing, but Meta has big bucks. Big bucks matter in AI Land.
Yes, dinobaby. You are on the right wavelength. Do you think anyone will listen? I don’t. Thanks, MSFT Copilot. Keep up the good work on security.
Let’s look at a handful of statements from the write up and do some observing while some in the Commonwealth of Kentucky recover from the Derby.
First, the oracle of Omaha allegedly said:
“When you think about the potential for scamming people… Scamming has always been part of the American scene. If I was interested in investing in scamming— it’s gonna be the growth industry of all time.”
Mr. Buffet has nailed the scamming angle. I particularly liked the “always.” Imagine a country built upon scamming. That makes one feel warm and fuzzy about America. Imagine how those who are hostile to US interests interpret the comment. Ill will toward the US can now be based on the premise that “scamming has always been part of the American scene.” Trust us? Just ignore the oracle of Omaha? Unlikely.
Second, the wise, frugal icon allegedly communicated that:
the technology would affect “anything that’s labor sensitive” and that for workers it could “create an enormous amount of leisure time.”
What will those individuals do with that “leisure time”? Gobbling down social media? Working on volunteer projects like picking up trash from streets and highways?
The final item I will cite is his 2018 statement:
“Cyber is uncharted territory. It’s going to get worse, not better.”
Is that a bit negative?
Stephen E Arnold, May 7, 2024
The Everything About AI Report
May 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read the Stanford Artificial Intelligence Report. If you have have not seen the 500 page document, click here. I spotted an interesting summary of the document. “Things Everyone Should Understand About the Stanford AI Index Report” is the work of Logan Thorneloe, an author previously unknown to me. I want to highlight three points I carried away from Mr. Thorneloe’s essay. These may make more sense after you have worked through the beefy Stanford document, which, due to its size, makes clear that Stanford wants to be linked to the the AI spaceship. (Does Stanford’s AI effort look like Mr. Musk’s or Mr. Bezos’ rocket? I am leaning toward the Bezos design.)
An amazed student absorbs information about the Stanford AI Index Report. Thanks, MSFT. Good enough.
The summary of the 500 page document makes clear that Stanford wants to track the progress of smart software, provide a policy document so that Stanford can obviously influence policy decisions made by people who are not AI experts, and then “highlight ethical considerations.” The assumption by Mr. Thorneloe and by the AI report itself is that Stanford is equipped to make ethical anything. The president of Stanford departed under a cloud for acting in an unethical manner. Plus some of the AI firms have a number of Stanford graduates on their AI teams. Are those teams responsible for depictions of inaccurate historical personages? Okay, that’s enough about ethics. My hunch is that Stanford wants to be perceived as a leader. Mr. Thorneloe seems to accept this idea as a-okay.
The second point for me in the summary is that Mr. Thorneloe goes along with the idea that the Stanford report is unbiased. Writing about AI is, in my opinion of course, inherently biased. That’s’ the reason there are AI cheerleaders and AI doomsayers. AI is probability. How the software gets smart is biased by [a] how the thresholds are rigged up when a smart system is built, [b] the humans who do the training of the system and then “fine tune” or “calibrate” the smart software to produce acceptable results, and [b] the information used to train the system. More recently, human developers have been creating wrappers which effectively prevent the smart software from generating pornography or other “improper” or “unacceptable” outputs. I think the “bias” angle needs some critical thinking. Stanford’s report wants to cover the AI waterfront as Stanford maps and presents the geography of AI.
The final point is the rundown of Mr. Thorneloe’s take-aways from the report. He presents ten. I think there may just be three. First, the AI work is very expensive. That leads to the conclusion that only certain firms can be in the AI game and expect to win and win big. To me, this means that Stanford wants the good old days of Silicon Valley to come back again. I am not sure that this approach to an important, yet immature technology, is a particularly good idea. One does not fix up problems with technology. Technology creates some problems, and like social media, what AI generates may have a dark side. With big money controlling the game, what’s that mean? That’s a tough question to answer. The US wants China and Russia to promise not to use AI in their nuclear weapons system. Yeah, that will work.
Another take-away which seems important is the assumption that workers will be more productive. This is an interesting assertion. I understand that one can use AI to eliminate call centers. However, has Stanford made a case that the benefits outweigh the drawbacks of AI? Mr. Thorneloe seems to be okay with the assumption underlying the good old consultant-type of magic.
The general take-away from the list of ten take-aways is that AI is fueled by “industry.” What happened the Stanford Artificial Intelligence Lab, synthetic data, and the high-confidence outputs? Nothing has happened. AI hallucinates. AI gets facts wrong. AI is a collection of technologies looking for problems to solve.
Net net: Mr. Thorneloe’s summary is useful. The Stanford report is useful. Some AI is useful. Writing 500 pages about a fast moving collection of technologies is interesting. I cannot wait for the 2024 edition. I assume “everyone” will understand AI PR.
Stephen E Arnold, May 7, 2024
Torrent Search Platform Tribler Works to Boost Decentralization with AI
May 7, 2024
Can AI be the key to a decentralized Internet? The group behind the BitTorrent-based search engine Tribler believe it can. TorrentFreak reports, “Researchers Showcase Decentralized AI-Powered Torrent Search Engine.” Even as the online world has mostly narrowed into commercially controlled platforms, researchers at the Netherlands’ Delft University of Technology have worked to decentralize and anonymize search. Their goal has always been to empower John Q. Public over governments and corporations. Now, the team has demonstrated the potential of AI to significantly boost those efforts. Writer Ernesto Van der Sal tells us:
“Tribler has just released a new paper and a proof of concept which they see as a turning point for decentralized AI implementations; one that has a direct BitTorrent link. The scientific paper proposes a new framework titled ‘De-DSI’, which stands for Decentralised Differentiable Search Index. Without going into technical details, this essentially combines decentralized large language models (LLMs), which can be stored by peers, with decentralized search. This means that people can use decentralized AI-powered search to find content in a pool of information that’s stored across peers. For example, one can ask ‘find a magnet link for the Pirate Bay documentary,’ which should return a magnet link for TPB-AFK, without mentioning it by name. This entire process relies on information shared by users. There are no central servers involved at all, making it impossible for outsiders to control.”
Van der Sal emphasizes De-DSI is still in its early stages—the demo was created with a limited dataset and starter AI capabilities. The write-up briefly summarizes the approach:
“In essence, De-DSI operates by sharing the workload of training large language models on lists of document identifiers. Every peer in the network specializes in a subset of data, which other peers in the network can retrieve to come up with the best search result.”
The team hopes to incorporate this tech into an experimental version of Tribler by the end of this year. Stay tuned.
Cynthia Murrell, May 7, 2024
Microsoft Security Messaging: Which Is What?
May 6, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I am a dinobaby. I am easily confused. I read two “real” news items and came away confused. The first story is “Microsoft Overhaul Treats Security As Top Priority after a Series of Failures.” The subtitle is interesting too because it links “security” to monetary compensation. That’s an incentive, but why isn’t security just part of work at an alleged monopoly’s products and services? I surmise the answer is, “Because security costs money, a lot of money.” That article asserts:
After a scathing report from the US Cyber Safety Review Board recently concluded that “Microsoft’s security culture was inadequate and requires an overhaul,” it’s doing just that by outlining a set of security principles and goals that are tied to compensation packages for Microsoft’s senior leadership team.
Okay. But security emerges from basic engineering decisions; for instance, does a developer spend time figuring out and resolving security when dependencies are unknown or documented only by a grousing user in a comment posted on a technical forum? Or, does the developer include a new feature and moves on to the next task, assuming that someone else or an automated process will make sure everything works without opening the door to the curious bad actor? I think that Microsoft assumes it deploys secure systems and that its customers have the responsibility to ensure their systems’ security.
The cyber racoons found the secure picnic basket was easily opened. The well-fed, previously content humans seem dismayed that their goodies were stolen. Thanks, MSFT Copilot. Definitely good enough.
The write up adds that Microsoft has three security principles and six security pillars. I won’t list these because the words chosen strike me like those produced by a lawyer, an MBA, and a large language model. Remember. I am a dinobaby. Six plus three is nine things. Some car executive said a long time ago, “Two objectives is no objective.” I would add nine generalizations are not a culture of security. Nine is like Microsoft Word features. No one can keep track of them because most users use Word to produce Words. The other stuff is usually confusing, in the way, or presented in a way that finding a specific feature is an exercise in frustration. Is Word secure? Sure, just download some nifty documents from a frisky Telegram group or the Dark Web.
The write up concludes with a weird statement. Let me quote it:
I reported last month that inside Microsoft there is concern that the recent security attacks could seriously undermine trust in the company. “Ultimately, Microsoft runs on trust and this trust must be earned and maintained,” says Bell. “As a global provider of software, infrastructure and cloud services, we feel a deep responsibility to do our part to keep the world safe and secure. Our promise is to continually improve and adapt to the evolving needs of cybersecurity. This is job #1 for us.”
First, there is the notion of trust. Perhaps Edge’s persistence and advertising in the start menu, SolarWinds, and the legions of Chinese and Russian bad actors undermine whatever trust exists. Most users are clueless about security issues baked into certain systems. They assume; they don’t trust. Cyber security professionals buy third party security solutions like shopping at a grocery store. Big companies’ senior executive don’t understand why the problem exists. Lawyers and accountants understand many things. Digital security is often not a core competency. “Let the cloud handle it,” sounds pretty good when the fourth IT manager or the third security officer quit this year.
Now the second write up. “Microsoft’s Responsible AI Chief Worries about the Open Web.” First, recall that Microsoft owns GitHub, a very convenient source for individuals looking to perform interesting tasks. Some are good tasks like snagging a script to perform a specific function for a church’s database. Other software does interesting things in order to help a user shore up security. Rapid 7 metasploit-framework is an interesting example. Almost anyone can find quite a bit of useful software on GitHub. When I lectured in a central European country’s main technical university, the students were familiar with GitHub. Oh, boy, were they.
In this second write up I learned that Microsoft has released a 39 page “report” which looks a lot like a PowerPoint presentation created by a blue-chip consulting firm. You can download the document at this link, at least you could as of May 6, 2024. “Security” appears 78 times in the document. There are “security reviews.” There is “cybersecurity development” and a reference to something called “Our Aether Security Engineering Guidance.” There is “red teaming” for biosecurity and cybersecurity. There is security in Azure AI. There are security reviews. There is the use of Copilot for security. There is something called PyRIT which “enables security professionals and machine learning engineers to proactively find risks in their generative applications.” There is partnering with MITRE for security guidance. And there are four footnotes to the document about security.
What strikes me is that security is definitely a popular concept in the document. But the principles and pillars apparently require AI context. As I worked through the PowerPoint, I formed the opinion that a committee worked with a small group of wordsmiths and crafted a rather elaborate word salad about going all in with Microsoft AI. Then the group added “security” the way my mother would chop up a red pepper and put it in a salad for color.
I want to offer several observations:
- Both documents suggest to me that Microsoft is now pushing “security” as Job One, a slogan used by the Ford Motor Co. (How are those Fords fairing in the reliability ratings?) Saying words and doing are two different things.
- The rhetoric of the two documents remind me of Gertrude’s statement, “The lady doth protest too much, methinks.” (Hamlet? Remember?)
- The US government, most large organizations, and many individuals “assume” that Microsoft has taken security seriously for decades. The jargon-and-blather PowerPoint make clear that Microsoft is trying to find a nice way to say, “We are saying we will do better already. Just listen, people.”
Net net: Bandying about the word trust or the word security puts everyone on notice that Microsoft knows it has a security problem. But the key point is that bad actors know it, exploit the security issues, and believe that Microsoft software and services will be a reliable source of opportunity of mischief. Ransomware? Absolutely. Exposed data? You bet your life. Free hacking tools? Let’s go. Does Microsoft have a security problem? The word form is incorrect. Does Microsoft have security problems? You know the answer. Aether.
Stephen E Arnold, May 6, 2024
Reflecting on the Value Loss from a Security Failure
May 6, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Right after the October 2023 security lapse in Israel, I commented to one of the founders of a next-generation Israeli intelware developer, “Quite a security failure.” The response was, “It is Israel’s 9/11.” One of the questions that kept coming to my mind was, “How could such sophisticated intelligence systems, software, and personnel have dropped the ball?” I have arrived at an answer: Belief in the infallibility of in situ systems. Now I am thinking about the cost of a large-scale security lapse.
It seems the young workers are surprised the security systems did not work. Thanks, MSFT Copilot. Good enough which may be similar to some firms’ security engineering.
Globes published “Big Tech 50 Reveals Sharp Falls in Israeli Startup Valuations.” The write up provides some insight into the business cost of security which did not live up to its marketing. The write up says:
The Israeli R&D partnership has reported to the TASE [Tel Aviv Stock Exchange] that 10 of the 14 startups in which it has invested have seen their valuations decline.
Interesting.
What strikes me is that the cost of a security lapse is obviously personal and financial. One of the downstream consequences is a loss of confidence or credibility. Israel’s hardware and software security companies have had, in my opinion, a visible presence at conferences addressing specialized systems and software. The marketing of the capabilities of these systems has been maturing and becoming more like Madison Avenue efforts.
I am not sure which is worse: The loss of “value” or the loss of “credibility.”
If we transport the question about the cost of a security lapse to large US high-technology company, I am not sure a Globes’ type of article captures the impact. Frankly, US companies suffer security issues on a regular basis. Only a few make headlines. And then the firms responsible for the hardware or software which are vulnerable because of poor security issue a news release, provide a software update, and move on.
Several observations:
- The glittering generalities about the security of widely used hardware and software is simply out of step with reality
- Vendors of specialized software such as intelware suggest that their systems provide “protection” or “warnings” about issues so that damage is minimized. I am not sure I can trust these statements.
- The customers, who may have made security configuration errors, have the responsibility to set up the systems, update, and have trained personnel operate them. That sounds great, but it is simply not going to happen. Customers are assuming what they purchase is secure.
Net net: The cost of security failure is enormous: Loss of life, financial disaster, and undermining the trust between vendor and customer. Perhaps some large outfits should take the security of the products and services they offer beyond a meeting with a PR firm, a crisis management company, or a go-go marketing firm? The “value” of security is high, but it is much more than a flashy booth, glib presentations at conferences, or a procurement team assuming what vendors present correlates with real world deployment.
Stephen E Arnold, May 6, 2024