Big Tech, Big Fakes, Bigger Money: What Will AI Kill?
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I don’t read The Hollywood Reporter. I did one job for a Hollywood big wheel. That was enough for me. I don’t drink. I don’t take drugs unless prescribed by my comic book addicted medical doctor in rural Kentucky. I don’t dress up and wear skin bronzers in the hope that my mobile will buzz. I don’t stay out late. I don’t fancy doing things which make my ethical compass buzz more angrily than my mobile phone. Therefore, The Hollywood Reporter does not speak to me.
One of my research team sent me a link to “The Rise of AI-Powered Stars: Big Money and Risks.” I scanned the write up and then went through it again. By golly, The Hollywood Reporter hit on an “AI will kill us” angle that is not getting as much publicity as Sam AI-Man’s minimal-substance interview.
Can a techno feudalist generate new content using what looks like “stars” or “well known” people? Probably. A payoff has to be within sight. Otherwise, move on to the next next big thing. Thanks, MSFT Copilot. Good enough cartoon.
Please, read the original and complete article in The Hollywood Reporter. Here’s the passage which rang the insight bell for me:
tech firms are using the power of celebrities to introduce the underlying technology to the masses. “There’s a huge possible business there and I think that’s what YouTube and the music companies see, for better or for worse.”
Let’s think about these statements.
First, the idea of consumerizing AI for the masses is interesting. However, I interpret the insight as having several force vectors:
- Become the plumbing for the next wave of user generated content (UGC)
- Get paid by users AND impose an advertising tax on the UGC
- Obtain real-time data about the efficacy of specific smart generation features so that resources can be directed to maintain a “moat” against would-be attackers
Second, by signing deals with people who are, to me, essentially unknown, the techno giants are digging some trenches and putting somewhat crude asparagus obstacles where the competitors are likely to drive their AI machines. The benefits include:
- First-hand experience with how the stars’ ego system responds
- The data regarding cost of signing up a star, payouts, and selling ads against the content
- Determining what pushback exists among [a] fans and [b] the historical middlemen who have just been put on notice that they can find their future elsewhere
Finally, the idea of the upside and the downside for particular entities and companies is interesting. There will be winners and losers. Right now, Hollywood is a loser. TikTok is a winner. The companies identified in The Hollywood Reporter want to be winners — big winners.
I may have to start paying more attention to this publication and its stories. Good stuff. What will AI kill? The cost of some human “talent”?
Stephen E Arnold, December 7, 2023
Will TikTok Go Slow in AI? Well, Sure
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The AI efforts of non-governmental organizations, government agencies, and international groups are interesting. Many resolutions, proclamations, blog polemics, etc. have been saying, “Slow down AI. Smart software will put people out of work. Destroy humans’ ability to think. Unleash the ‘I’ll be back’ guy.”
Getting those enthusiastic about smart software to slow down is a management problem. Thanks, MSFT Copilot. Good enough.
My stance in the midst of this fearmongering has been bemusement. I know that predicting the future perturbations of technology is as difficult as picking a Kentucky Derby winner and not picking a horse that will drop dead during the race. When groups issue proclamations and guidelines without an enforcement mechanism, not much is going to happen in the restraint department.
I submit as partial evidence for my bemusement the article “TikTok Owner ByteDance Joins Generative AI Frenzy with Service for Chatbot Development, Memo Says.” What seems clear, if the write up is mostly on the money, is that a company linked to China is joining “the race to offer AI model development as a service.”
Two quick points:
- Model development allows the provider to get a sneak peek at what the user of the system is trying to do. This means that information flows from customer to provider.
- The company in the “race” is one of some concern to certain governments and their representatives.
The write up says:
ByteDance, the Chinese owner of TikTok, is working on an open platform that will allow users to create their own chatbots, as the company races to catch up in generative artificial intelligence (AI) amid fierce competition that kicked off with last year’s launch of ChatGPT. The “bot development platform” will be launched as a public beta by the end of the month…
The cited article points out:
China’s most valuable unicorn has been known for using some form of AI behind the scenes from day one. Its recommendation algorithms are considered the “secret sauce” behind TikTok’s success. Now it is jumping into an emerging market for offering large language models (LLMs) as a service.
What other countries are beavering away on smart software? Will these drive in the slow lane or the fast lane?
Stephen E Arnold, December 7, 2023
Just for the Financially Irresponsible: Social Shopping
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Amazon likes to make it as easy as possible for consumers to fork over their hard-earned cash on a whim. More steps between seeing a product and checking out means more time to reconsider a spontaneous purchase, after all. That is why the company has been working to integrate purchases into social media platforms. Payment-platform news site PYMNTS reports on the latest linkage in, “Amazon Extends Social Shopping Efforts with Snapchat Deal.” Amazon’s partnership with Meta had already granted it quick access to eyeballs and wallets at Facebook and Instagram. Now users of all three platforms will be able to link those social media accounts to their Amazon accounts. We are told:
“It’s a partnership that lets both companies play to their strengths: Amazon gets to help merchants find customers who might not have actively sought out their products. And Meta’s discovery-based model lets users receive targeted ads without searching for them. Amazon also has a deal with Pinterest, signed in April, designed to create more shoppable content by enhancing the platform’s offering of relevant products and brands. These partnerships are happening at a moment when social media has become a crucial tool for consumers to find new products.”
That is one way to put it. Here is another: The deals let Amazon take advantage of users’ cognitive haze. Scrolling social media has been linked to information overload, shallow thinking, reduced attention spans, and fragmented thoughts. A recipe for perfect victims. I mean, customers. We wonder what Meta is getting in exchange for handing them over.
Cynthia Murrell, December 7, 2023
Gemini Twins: Which Is Good? Which Is Evil? Think Hard
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I received a link to a Google DeepMind marketing demonstration Web page called “Welcome to Gemini.” To me, Gemini means Castor and Pollux. Somewhere along the line, someone — maybe a wonky professor named Chapman — told my class that these two represented Zeus and Hades. Stated another way, one was a sort of “good” deity with a penchant for non-godlike behavior. The other was downright awful most of the time. I assume that Google knows about Gemini, its mythological baggage, and the duality: a Superman type doing the truth, justice, and the American way routine, and a twin inspiring a range of bad actors. Imagine. Something that is good and bad. That’s smart software, I assume. The good part sells ads; the bad part fails at marketing perhaps?
Two smart Googlers in New York City learn the difference between book learning for a PhD and street learning for a degree from the Institute of Hard Knocks. Thanks, MSFT Copilot. (Are you monitoring Google’s effort to dominate smart software by announcing breakthroughs very few people understand? Are you finding Google’s losses at the AI shell game entertaining?)
Google’s blog post states with rhetorical aplomb:
Gemini is built from the ground up for multimodality — reasoning seamlessly across text, images, video, audio, and code.
Well, any other AI using Google’s previous technology is officially behind the curve. That’s clear to me. I wonder if Sam AI-Man, Microsoft, and the users of ChatGPT are tuned to the Google wavelength? There’s a video (more accurately, more than a dozen of them), but I don’t like video, so I skipped them all. There are graphs with minimal data and some that appear to jiggle in “real” time. I skipped those too. There are tables. I did read some of the data and learned that Gemini can do basic arithmetic and “challenging” math like geometry. That is the 3, 4, 5 triangle stuff. I wonder how many people under the age of 18 know how to use a tape measure to determine if a corner is 90 degrees? (If you don’t, why not ask ChatGPT or MSFT Copilot.) I processed the fact that the twins come in three sizes. Do twins come in triples? Sigh. Anyway, one can use Gemini Ultra, Gemini Pro, and Gemini Nano. Okay, but I am hung up on the twins and the three sizes. Sorry. I am a dinobaby. There are more movies. I exited the site and navigated to YCombinator’s Hacker News. Didn’t Sam AI-Man have a brush with that outfit?
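For any dinobaby who skipped shop class: the tape-measure trick is just the Pythagorean theorem. Here is a minimal sketch (mine, not Google’s) of the 3, 4, 5 check:

```python
import math

def corner_is_square(side_a: float, side_b: float, diagonal: float) -> bool:
    """Pythagorean check: a corner is 90 degrees when a^2 + b^2 equals c^2."""
    return math.isclose(side_a ** 2 + side_b ** 2, diagonal ** 2, rel_tol=0.01)

# Measure 3 units along one wall and 4 along the other. If the diagonal
# between those marks is 5, the corner is square.
print(corner_is_square(3, 4, 5))    # True
print(corner_is_square(3, 4, 5.2))  # False: the corner is out of square
```

If Gemini can handle that reliably, good for the twins.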
You can find the comments about Gemini at this link. I want to highlight several quotations I found suggestive. Then I want to offer a few observations based on my conversation with my research team.
Here are some representative statements from the YCombinator’s forum:
- Jansan said: Yes, it [Google] is very successful in replacing useful results with links to shopping sites.
- FrustratedMonkey said: Well, deepmind was doing amazing stuff before OpenAI. AlphaGo, AlphaFold, AlphaStar. They were groundbreaking a long time ago. They just happened to miss the LLM surge.
- Wddkcs said: Googles best work is in the past, their current offerings are underwhelming, even if foundational to the progress of others.
- Foobar said: The whole things reeks of being desperate. Half the video is jerking themselves off that they’ve done AI longer than anyone and they “release” (not actually available in most countries) a model that is only marginally better than the current GPT4 in cherry-picked metrics after nearly a year of lead-time?
- Arson9416 said: Google is playing catchup while pretending that they’ve been at the forefront of this latest AI wave. This translates to a lot of talk and not a lot of action. OpenAI knew that just putting ChatGPT in peoples hands would ignite the internet more than a couple of over-produced marketing videos. Google needs to take a page from OpenAI’s playbook.
Please, work through the more than 600 comments about Gemini and reach your own conclusions. Here are mine:
- The Google is trying to market using rhetorical tricks and big-brain hot buttons. The effort comes across to me as similar to Ford’s marketing of the Edsel.
- Sam AI-Man remains the man in AI. Coups, tension, and chaos — irrelevant. The future for many means ChatGPT.
- The comment about timing is a killer. Google missed the train. The company wants to catch up, but it is not shipping products, nor is it associated with features grade school kids and harried marketers with degrees in art history can use now.
Sundar Pichai is not Sam AI-Man. The difference has become clear in the last year. If Sundar and Sam are twins, which represents what?
Stephen E Arnold, December 6, 2023
Forget Deep Fakes. Watch for Shallow Fakes
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
“A Tech Conference Listed Fake Speakers for Years: I Accidentally Noticed” revealed a factoid about which I knew absolutely zero. The write up reveals:
For 3 years straight, the DevTernity conference listed non-existent software engineers representing Coinbase and Meta as featured speakers. When were they added and what could have the motivation been?
The article identifies and includes what appear to be “real” pictures of a couple of these made-up speakers. What’s interesting is that only females seem to be made up. Is that perhaps because conference organizers like to take the easiest path, choosing people who are “in the news” or “friends”? In the technology world, I see more entities which appear to be male than non-male.
Shallow fakes. Deep fakes. What’s the problem? Thanks, MSFT Copilot. Nice art which you achieved exactly how? Oh, don’t answer that question. I don’t want to know.
But since I don’t attend many conferences, I am not in touch with demographics. Furthermore, I am not up to speed on fake people. To be honest, I am not too interested in people, real or fake. After a half century of work, I like my French bulldog.
The write up points out:
We’ve not seen anything of this kind of deceit in tech – a conference inventing speakers, including fake images – and the mainstream media covered this first-of-a-kind unethical approach to organizing a conference…
That’s good news.
I want to offer a handful of thoughts about creating “fake” people for conferences and other business efforts:
- Why not? The practice went unnoticed for years.
- Creating digital “fakes” is getting easier and the tools are becoming more effective at duplicating “reality” (whatever that is). It strikes me that people looking for a short cut for a diverse Board of Directors, speaker line up, or a LinkedIn reference might find the shortest, easiest path to shape reality for a purpose.
- The method used to create a fake speaker is more correctly termed a “shallow” fake. Why? As the author of the cited write up points out, disproving the reality of the fakes was easy and took little time.
Let me shift gears. Why would conference organizers find fake speakers appealing? Here are some hypotheses:
- Conferences fall into a “speaker rut”; that is, organizers become familiar with certain speakers and consciously or unconsciously slot them into the next program because they are good speakers (one hopes), friendly, or don’t make unwanted suggestions to the organizers
- Conference staff are overworked and understaffed. Applying some smart workflow magic to organizing and filling in the blank spaces on the program makes the use of fakery appealing, at least at one conference. Will others learn from this method?
- Conferences have become more dependent on exhibitors. Over the years, renting booth space has become a way for a company to be featured on the program. Yep, advertising, just advertising linked to “sponsors” of social gatherings or Platinum and Gold sponsors who get to put marketing collateral in a cheap nylon bag foisted on every registrant.
I applaud this write up. Not only will it give people ideas about how to use “fakes,” it will also inspire innovation in surprising ways. Why not “fake” consultants on a Zoom call? There’s an idea for you.
Stephen E Arnold, December 6, 2023
How about Fear and Paranoia to Advance an Agenda?
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I thought sex sells. I think I was wrong. Fear seems to be the barn burner at the end of 2023. And why not? We have the shadow of another global pandemic. We have wars galore. We have craziness on US airplanes. We have a Cybertruck which spells the end for anyone hit by the behemoth.
I read (but did not shake like the delightful female in the illustration) “AI and Mass Spying.” The author is a highly regarded “public interest technologist,” an internationally renowned security professional, and a security guru. For me, the key factoid is that he is a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a lecturer in public policy at the Harvard Kennedy School. Mr. Schneier is a board member of the Electronic Frontier Foundation and of the most, most interesting organization AccessNow.
Fear speaks clearly to those in retirement communities, elder care facilities, and those who are uninformed. Let’s say, “Grandma, you are going to be watched when you are in the bathroom.” Thanks, MSFT Copilot. I hope you are sending data back to Redmond today.
I don’t want to make too much of the Harvard University connection. I feel it is important to note that the esteemed educational institution got caught with its ethical pants around its ankles, not once, but twice in recent memory. The first misstep involved an ethics expert on the faculty who allegedly made up information. The second is the current hullabaloo about a whistleblower allegation. The AP slapped this headline on that report: “Harvard Muzzled Disinfo Team after $500 Million Zuckerberg Donation.” (I am tempted to mention the Harvard professor who is convinced he has discovered fungible proof of alien technology.)
So what?
The article “AI and Mass Spying” is a baffler to me. The main point of the write up strikes me as:
Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.
I interpret the passage to mean that smart software in the hands of law enforcement, intelligence operatives, investigators in one of the badge-and-gun agencies in the US, or a cyber lawyer is really, really bad news. Smart surveillance has arrived. Smart software can process masses of data. Plus the outputs may be wrong. I think this means the sky is falling. The fear one is supposed to feel is the way a chicken feels when it sees the Chick-fil-A butcher truck pull up to the barn.
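For the record, the mechanics Mr. Schneier describes are easy to wire up. Below is a minimal sketch, assuming the OpenAI Python SDK (v1.x); the model name and the transcript file are illustrative placeholders of mine, not anything from the essay:

```python
# A minimal sketch of machine summarization, assuming an API key is set in
# the environment. The model name and file name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model will do
    messages=[
        {"role": "system",
         "content": "Summarize this meeting in one page. Note who talked about what."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```

Point that loop at millions of intercepted conversations instead of one meeting and you have the mass-spying scenario. The hard parts, as I note below, are cost and verification.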
Several observations:
- Let’s assume that smart software grinds through whatever information is available to something like a spying large language model. Are those engaged in law enforcement unaware that smart software generates baloney along with the Kobe beef? Will investigators knock off the verification processes because a new system has been installed at a fusion center? The answer to these questions is, “Fear advances the agenda of using smart software for certain purposes; specifically, enforcement of rules, regulations, and laws.”
- I know that the idea that “all” information can be processed is a jazzy claim. Google made it, and those familiar with Google search results know that Google does not even come close to all. It can barely deliver useful results from the Railroad Retirement Board’s Web site. “All” covers a lot of ground, and it is unlikely that a policeware vendor will be able to do much more than process a specific collection of data believed to be related to an investigation. “All” is for fear, not illumination. Save the categorical affirmatives for the marketing collateral, please.
- The computational cost of applying smart software to large domains of data — for example, global intercepts of text messages — is fun to talk about over lunch. But the costs are quite real. The costs of the computational infrastructure have to be paid. Then come the costs of the downstream systems and the people who have to figure out if the smart software is hallucinating or delivering something useful. I would suggest that Israel’s surprise at the unhappy events from October 2023 to the present day unfolded despite the baloney about smart security software, a great intelligence apparatus, and the tons of marketing collateral handed out at law enforcement conferences. News flash: The stuff did not work.
In closing, I want to come back to fear. Exactly what is accomplished by using fear as the pointy end of the stick? Is it insecurity about smart software? Are there other messages framed in a different way to alert people to important issues?
Personally, I think fear is a low-level technique for getting one’s point across. But when fear is deployed by those affiliated with an outfit tangled in the ethics matter and now the payola approach to information, how about putting on the big boy pants and selecting a rhetorical trope that is unlikely to do anything except remind people that the Covid thing could have killed us all? Err. No. And what is the agenda fear advances?
So, strike the sex sells trope. Go with fear sells.
Stephen E Arnold, December 6, 2023
AI: Big Ideas Become Money Savers and Cost Cutters
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this week (November 28, 2023), the British newspaper The Guardian published “Sports Illustrated Accused of Publishing Articles Written by AI.” The main idea is that dependence on human writers became the focus of a bunch of bean counters. The magazine has a reasonably high profile among a demographic not focused on discerning the difference between machine output and sleek, intellectual, well-groomed New York “real” journalists. Some cared. I didn’t. It’s money ball in the news business.
The day before the Sports Illustrated slick business and PR move, I noted a Murdoch-infused publication’s revelation about smart software. Barron’s published “AI Will Create—and Destroy—Jobs. History Offers a Lesson.” Barron’s wrote about it; Sports Illustrated got snared doing it.
Barron’s said:
That AI technology will come for jobs is certain. The destruction and creation of jobs is a defining characteristic of the Industrial Revolution. Less certain is what kind of new jobs—and how many—will take their place.
Okay, the Industrial Revolution. Exactly how long did that take? What jobs were destroyed? What were the benefits at the beginning, the middle, and the end of the Industrial Revolution? What were the downsides of the disruption which unfolded over time? Decades, wasn’t it?
The AI “revolution” is perceived to be real. Investors, testosterone-charged venture capitalists, and some Type A students are going to make the AI Revolution a reality. Damn the regulators, the copyright complainers, and the dinobabies who want to read, think, and write themselves.
Barron’s noted:
A survey conducted by LinkedIn for the World Economic Forum offers hints about where job growth might come from. Of the five fastest-growing job areas between 2018 and 2022, all but one involve people skills: sales and customer engagement; human resources and talent acquisition; marketing and communications; partnerships and alliances. The other: technology and IT. Even the robots will need their human handlers.
I can think of some interesting jobs. Thanks, MSFT Copilot. You did ingest some 19th century illustrations, didn’t you, you digital delight.
Now those are rock solid sources: Microsoft’s LinkedIn and the charming McKinsey & Company. (I think of McKinsey as the opioid innovators, but that’s just my inexplicable predisposition toward an outstanding bastion of ethical behavior.)
My problem with the Sports Illustrated AI move and the Barron’s essay boils down to the bipolarism which surfaces when a new next big thing appears on the horizon. Predicting what will happen when a technology smashes into business billiard balls is fraught with challenges.
One thing is clear: The balls are rolling, and journalists, paralegals, consultants, and some knowledge workers are going to find themselves in the side pocket. The way out might be making TikToks or selling gadgets on eBay.
Some will say, “AI took our jobs, Billy. Now what?” Yes, now what?
Stephen E Arnold, December 6, 2023
Is Crypto the Funding Mechanism for Bad Actors?
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Allegations make news. The United States and its allies are donating monies and resources to Israel as it fights Hamas. As a rogue group, Hamas is not as well funded as Israel, and people speculate about how it finances its violent attacks. Marketplace explains how the Palestinian group receives some of its funding, and it is a very obvious answer: “Crypto Is One Way Hamas Gets Its Funding.” David Brancaccio, host of the Marketplace Morning Report, interviewed Ari Redbord, a former federal prosecutor and US Treasury Department official now at TRM Labs, a cryptocurrency compliance firm. Redbord and Brancaccio discuss how Hamas uses crypto.
Hamas is subject to sanctions from the US Treasury Department, so the group’s access to international banking is restricted. Cryptocurrency allows Hamas to circumvent those sanctions. Ironically, cryptocurrency might make it easier for authorities to track illegal use of money because the ledger cannot be forged. Crypto moves along networks of computers known as blockchains. The blockchains are public, and therefore traceable and transparent. Companies like TRM help law enforcement and other authorities track activity on blockchains.
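Why can’t the ledger be forged? Hash chaining. Each block commits to the hash of the block before it, so rewriting history breaks the links in public view. A toy sketch of the principle (no real chain’s format, just the idea):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a toy three-block ledger. Real chains add signatures and consensus.
ledger, prev = [], "0" * 64
for tx in ["A pays B 1.0", "B pays C 0.4", "C pays D 0.1"]:
    block = {"tx": tx, "prev": prev}
    ledger.append(block)
    prev = block_hash(block)

# Forge the first transaction...
ledger[0]["tx"] = "A pays B 100.0"

# ...and the link from block 0 to block 1 no longer verifies. Hiding the
# change would mean rewriting every later block, in full public view.
for i in range(1, len(ledger)):
    ok = block_hash(ledger[i - 1]) == ledger[i]["prev"]
    print(f"link {i - 1} -> {i}: {'intact' if ok else 'BROKEN'}")
```

That transparency is what lets a firm like TRM follow the money.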
The US Department of Justice, IRS-CI, and FBI seized 150 crypto wallets associated with Hamas in 2020. TRM Labs continuously tracks Hamas and its financial supporters, most of whom appear to be in Iran. Hamas doesn’t accept bitcoin donations anymore:
“Brancaccio: I think it was April of this year, Hamas announced it would no longer take donations in bitcoin. Perhaps it’s because of its traceability? Redbord: Yeah, really important point. And that’s essentially what Hamas itself said that, you know, law enforcement and other authorities have been coming down on their supporters because they’ve been able to trace and track these flows. And announced in April that they would not be soliciting donations in cryptocurrency. Now, whether that’s entirely true or not, it’s hard to say. We’re obviously seeing at least supporters of Hamas go out there raising funds in crypto.”
What will bad actors do to get money? Find options and use them.
Whitney Grace, December 7, 2023
Harvard University: Does Money Influence Academic Research?
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.
Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.
Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.
The write up asserts:
Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.
Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.
If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.
What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.
If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.
Stephen E Arnold, December 5, 2023
23andMe: Those Users and Their Passwords!
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Silicon Valley and health are a match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.
“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”
According to the peripatetic Lorenzo Franceschi-Bicchierai:
In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.
Users!
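To be fair to the users, checking whether a password has already leaked is cheap. Here is a minimal sketch using the Pwned Passwords k-anonymity range API; only the first five characters of the hash ever leave the machine:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora (0 = none)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

# A password reused across sites is exactly what got brute-forced here.
print(breach_count("password123"))  # a number large enough to alarm even a dinobaby
```

A login flow that ran a check like this, or simply required a second factor, would have blunted the attack.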
What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.
I find the leak estimate inflation interesting for three reasons:
- Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horse shoes” is a concept which is wearing out my patience. The difference between 14,000 and almost 7 million is not horse shoe scoring.
- The concept of “security” continues to suffer some setbacks. “Security,” one may ask?
- The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”
Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.
Stephen E Arnold, December 5, 2023

