Big Plays or Little Plays: The Key to AI Revenue

July 11, 2024

I keep thinking about the billions and trillions of dollars required to create a big AI win. A couple of snappy investment banks have edged toward the idea that AI might not pay off with tsunamis of money right away. The fix? Become a broker for GPU cycles, or “humble brag” about how more money is needed to fund the next big thing in what venture people want to be the next big thing. Yep, AI: A couple of winners and the rest losers, at least on a payoff scale whacked around like a hapless squash ball at the New York Athletic Club.

However, a radical idea struck me as I read a report from the news service that oozes “trust.” The Reuters story is “China Leads the World in Adoption of Generative AI, Survey Shows.” Do I trust surveys? Not really. Do I trust trusted “real” news outfits? Nope, not really. But the write up includes an interesting statement, and the report sparked what is for me a new idea.

First, here’s the passage I circled:

“Enterprise adoption of generative AI in China is expected to accelerate as a price war is likely to further reduce the cost of large language model services for businesses. The SAS report also said China led the world in continuous automated monitoring (CAM), which it described as “a controversial but widely-deployed use case for generative AI tools”.”

I interpreted this to mean:

  • Small and big uses of AI in somewhat mundane tasks
  • Lots of small uses with more big outfits getting with the AI program
  • AI allows nifty monitoring which is going to catch the attention of some Chinese government officials who may be able to repurpose these focused applications of smart software

With models available as open source, like the nifty Meta Facebook Zuck concoction, big technology is available. Furthermore, the idea of applying smart software to small problems makes sense. First, the approach avoids the Godzilla lumbering associated with some outfits; second, fast iteration with fast failures provides useful factoids for other developers.

The “real” news report does not provide numbers or much in the way of analysis. I think the idea of small-scale applications does not make sense when one is eating fancy food at a smart software briefing in midtown Manhattan. Small is not going to generate that big wave of money from AI. The money is needed to raise more money.

My thought is that the Chinese approach has value because it is surfing on open source and some proprietary information known to Chinese companies solving or trying to solve a narrow problem. Also, the crazy pace of try-fail, try-fail enables acceleration of what works. Failures translate to lessons about what lousy path to follow.

Therefore, my reaction to the “real” news about the survey is that China may be in a position to do better, faster, and cheaper AI applications than the Godzilla outfits. The chase for big money exists, but in the US without big money, who cares? In China, big money may not be as large as the pile of cash some VCs and entrepreneurs argue is absolutely necessary.

So what? The “let many flowers bloom” idea applies to AI. That’s a strength possibly neither appreciated nor desired by the US AI crowd. Combined with China’s patent surge, my new thought translates to “oh, oh.”

Stephen E Arnold, July 11, 2024

Common Sense from an AI-Centric Outfit: How Refreshing

July 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In the wild and wonderful world of smart software, common sense is often tucked beneath a stack of PowerPoint decks and vaporized by jargon-spouting experts in artificial intelligence. I want to highlight “Interview: Nvidia on AI Workloads and Their Impacts on Data Storage.” An Nvidia poohbah named Charlie Boyle output some information that is often ignored by quite a few of those riding the AI pony to the pot of gold at the end of the AI rainbow.


The King Arthur of senior executives is confident that in his domain he is the master of his information. By the way, this person has an MBA, a law degree, and a CPA certification. His name is Sir Walter Mitty of Dorksford, near Swindon. Thanks, MSFT Copilot.  Good enough.

Here’s the pivotal statement in the interview:

… a big part of AI for enterprise is understanding the data you have.

Yes, the dwellers in carpetland typically operate with some King Arthur type myths galloping around the castle walls; specifically:

Myth 1: We have excellent data

Myth 2: We have a great deal of data and more arriving every minute our systems are online

Myth 3: Our data are available and in just a few formats. Processing the information is going to be pretty easy.

Myth 4: Our IT team can handle most of the data work. We may not need any outside assistance for our AI project.

Will companies map these myths to their reality? Nope.

The Nvidia expert points out:

…there’s a ton of ready-made AI applications that you just need to add your data to.

“Ready made”: Just like a Betty Crocker cake mix my grandmother thought tasted fake, not as good as homemade. Granny’s comment could be applied to some of the AI tests my team has tracked; for example, the Big Apple’s chatbot outputting comments which violated city laws or the exciting McDonald’s smart ordering system. Sure, I like bacon on my on-again, off-again soft serve frozen dessert. Doesn’t everyone?

The Nvidia expert offers this comment about storage:

If it’s a large model you’re training from scratch you need very fast storage because a lot of the way AI training works is they all hit the same file at the same time because everything’s done in parallel. That requires very fast storage, very fast retrieval.

Is that a problem? Nope. Just crank up the cloud options. No big deal, except it is. There are costs and time to consider. But otherwise this is no big deal.
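The storage math behind that quote can be sketched in a few lines. This is a toy illustration, not Nvidia’s stack: in data-parallel training, every worker reads the same dataset each pass, so the throughput the storage layer must sustain scales with the number of workers. The shard size and worker count below are made-up numbers.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_shard(path: str) -> int:
    """One 'worker' reading the full dataset file; returns bytes read."""
    with open(path, "rb") as f:
        return len(f.read())

def aggregate_read_demand(path: str, num_workers: int) -> int:
    """Total bytes storage must serve when workers read in parallel."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return sum(pool.map(read_shard, [path] * num_workers))

if __name__ == "__main__":
    # Stand-in for a dataset shard: 1 MB of dummy bytes.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"x" * 1_000_000)
        shard = tmp.name
    try:
        # Eight workers hitting the same file means the storage system
        # serves eight times the dataset size for a single pass.
        print(aggregate_read_demand(shard, 8))  # 8000000
    finally:
        os.unlink(shard)
```

Eight workers against one small shard is trivial; thousands of GPUs against terabytes of training data is why “very fast storage, very fast retrieval” matters.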

The article contains one gem and then wanders into marketing “don’t worry” territory.

From my point of view, the data issue is the big deal. Bad, stale, incomplete, and oddball-format information — these exist in organizations now. Forty percent or more of the mass of data may never have been accessed. Other data are backups which contain versions of files with errors, copyright-protected data, and Boy Scout trip plans. (Yep, non-work information on “work” systems.)
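The “never accessed” pile is crudely measurable. As a rough sketch (not a data-governance tool, and the 365-day cutoff is an arbitrary assumption), one can walk a directory tree and flag files whose recorded access time is older than a cutoff. Many filesystems update access times lazily or not at all, so treat this as a first pass only.

```python
import os
import time

def stale_files(root: str, days: int = 365) -> list[str]:
    """Flag files whose last access time is older than the cutoff."""
    cutoff = time.time() - days * 86400
    stale = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
            except OSError:
                continue  # unreadable or vanished file; skip it
    return stale
```

Something like `stale_files("/shared/archive", days=730)` (a hypothetical path) would list candidates for that untouched 40 percent.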

Net net: The data issue is an important one to consider before getting into the “let’s deploy a customer support smart chatbot” project. Will carpetland dwellers focus on the first step? Not too often. That’s why some AI projects get lost or just succumb to rising, uncontrollable costs. Moving data? No problem. Bad data? No problem. Useful AI system? Hmmm. How much does storage cost anyway? Oh, not much.

Stephen E Arnold, July 11, 2024

Oxygen: Keep the Bait Alive for AI Revenue

July 10, 2024

Andreessen Horowitz published “Who Owns the Generative AI Platform?” in January 2023. The rah-rah appeared almost at the same time as the Microsoft OpenAI deal marketing coup.  In that essay, the venture firm and publishing firm stated this about AI: 

…there is enough early data to suggest massive transformation is taking place. What we don’t know, and what has now become the critical question, is: Where in this market will value accrue?

Now a partial answer is emerging. 

The Information, an online information service with a paywall revealed “Andreessen Horowitz Is Building a Stash of More Than 20,000 GPUs to Win AI Deals.” That report asserts:

The firm has secured thousands of AI chips, including Nvidia H100 graphics processing units, and is renting them to portfolio companies, according to a person who has discussed the initiative with the firm’s partners…. Andreessen Horowitz has told startup founders the initiative is called “oxygen.”

The initiative reflects what might be a way to hook promising AI outfits and plop them into the firm’s large foldable floating fish basket for live-caught, gill-bearing vertebrates, the sort sometimes called chum.

This factoid emerges shortly after a big Silicon Valley venture outfit raved about the oodles of opportunity AI represents. Plus, reports about Blue Chip consulting firms’ through-the-roof AI consulting revenue have encouraged a couple of the big outfits to offer AI services. In addition to opining and advising, the consulting firms are moving aggressively into the AI implementing and operating business.

The morphing of a venture firm into a broker of GPU cycles complements the thinking-for-money firms’ shifting gears to a more hands-on approach.

There are several implications from my point of view:

  • The fastest way to make money from the AI frenzy is to charge people so they can “do” AI
  • Without a clear revenue stream of sufficient magnitude to foot the bill for the rather hefty costs of “doing” AI with a chance of making cash, selling blue jeans to the miners makes sense. But changing business tactics can add an element of spice to an unfamiliar restaurant’s special of the day
  • The move from passive (thinking and waiting) to a more active (doing and charging for hardware and services) brings a different management challenge to the companies making the shift.

These factors suggest that the best way to cash in on AI is to provide what Andreessen Horowitz calls oxygen. It is a clear indication that the AI fish will die without some aggressive intervention. 

I am a dinobaby, sitting in my rocker on the porch of the rest home watching the youngsters scramble to make money from what was supposed to be a sure-fire winner. What we know from watching those lemonade stand operators is that success is often difficult to achieve. The grade school kids setting up shop in a subdivision where heat and fatigue take their toll give up and go inside where the air is cool and TikTok waits.

Net net: The Andreessen Horowitz revelation is one more indication that the costs of AI and the difficulty of generating sufficient revenue are starting to hit home. Therefore, advisors’ thoughts seem to be turning to actions designed to produce cash, magnetism, and success. Will the efforts produce the big payoffs? I wonder if these tactical plays are brilliant moves or just another neighborhood lemonade stand.

Stephen E Arnold, July 10, 2024

Market Research Shortcut: Fake Users Creating Fake Data

July 10, 2024

Market research can be complex and time consuming. It would save so much time if one could consolidate thousands of potential respondents into one model. A young AI firm offers exactly that, we learn from Nielsen Norman Group’s article, “Synthetic Users: If, When, and How to Use AI-Generated ‘Research.’”

But are the results accurate? Not so much, according to writers Maria Rosala and Kate Moran. The pair tested fake users from the young firm Synthetic Users and ones they created using ChatGPT. They compared responses to sample questions from both real and fake humans. Each group gave markedly different responses. The write-up notes:

“The large discrepancy between what real and synthetic users told us in these two examples is due to two factors:

  • Human behavior is complex and context-dependent. Synthetic users miss this complexity. The synthetic users generated across multiple studies seem one-dimensional. They feel like a flat approximation of the experiences of tens of thousands of people, because they are.
  • Responses are based on training data that you can’t control. Even though there may be proof that something is good for you, it doesn’t mean that you’ll use it. In the discussion-forum example, there’s a lot of academic literature on the benefits of discussion forums on online learning and it is possible that the AI has based its response on it. However, that does not make it an accurate representation of real humans who use those products.”

That seems obvious to us, but apparently some people need to be told. The lure of fast and easy results is strong. See the article for more observations. Here are a couple worth noting:

“Real people care about some things more than others. Synthetic users seem to care about everything. This is not helpful for feature prioritization or persona creation. In addition, the factors are too shallow to be useful.”

Also:

“Some UX [user experience] and product professionals are turning to synthetic users to validate product concepts or solution ideas. Synthetic Users offers the ability to run a concept test: you describe a potential solution and have your synthetic users respond to it. This is incredibly risky. (Validating concepts in this way is risky even with human participants, but even worse with AI.) Since AI loves to please, every idea is often seen as a good one.”

So as appealing as this shortcut may be, it is a fast track to incorrect results. Basing business decisions on “insights” from shallow, eager-to-please algorithms is unwise. The authors interviewed Synthetic Users’ cofounder Hugo Alves. He acknowledged the tools should only be used as a supplement to surveys of actual humans. However, the post points out, the company’s website seems to imply otherwise: it promises “User research. Without the users.” That is misleading, at best.

Cynthia Murrell, July 10, 2024

The AI Revealed: Look Inside That Kimono and Behind It. Eeew!

July 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Guardian article “AI scientist Ray Kurzweil: ‘We Are Going to Expand Intelligence a Millionfold by 2045’” is quite interesting for what it does not do: question the projections output by a Googler hired by Larry Page himself in 2012.


Putting toothpaste back in a tube is easier than dealing with the uneven consequences of new technology. What if rosy descriptions of the future are just marketing and making darned sure the top one percent remain in the top one percent? Thanks, ChatGPT-4o. Good enough illustration.

First, a bit of math. Humans have been doing big tech for centuries. And where are we? We are post-Covid. We have homelessness. We have numerous armed conflicts. We have income inequality in the US and a few other countries I have visited. We have a handful of big tech companies in the AI game which want to be God, to use Mark Zuckerberg’s quaint observation. We have processed food. We have TikTok. We have systems which delight and entertain each day because of bad actors’ malware, wild and crazy education, and hybrid work with the fascinating phenomenon of coffee badging; that is, going to the office, getting a coffee, and then heading to the gym.

Second, the distance in earth years between 2024 and 2045 is 21 years. In the humanoid world, a 20 year old today will be 41 when the prediction arrives. Is that a long time? Not for me. I am 80, and I hope I am out of here by then.

Third, let’s look at the assertions in the write up.

One of the notable statements in my opinion is this one:

I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.

I like the blend of modesty and humblebrag. Googlers excel at both.

Another statement I circled is:

The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one.

I like the idea that the energy consumption required to deliver this merging will be cheap and plentiful. Googlers do not worry about a power failure, the collapse of a dam due to the ministrations of the US Army Corps of Engineers and time, or dealing with the environmental consequences of producing and moving energy from Point A to Point B. If Google doesn’t worry, I don’t.

Here’s a quote from the article allegedly made by Mr. Singularity aka Ray Kurzweil:

I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing.

I wonder if the Asilomar AI Principles are embedded in the Google system that recommended a way to keep cheese from sliding off a pizza. Are the “go fast” AI crowd and the “go slow” group aware of the Asilomar AI Principles? If they are, perhaps the Principles are balderdash? Just asking, of course.

Okay, I think these points are sufficient for going back to my statements about processed food, wars, big companies in the AI game wanting to be “god” et al.

The trajectory of technology in the computer age has been a mixed bag of benefits and liabilities. In the next 21 years, will this report card with some As, some Bs, lots of Cs, some Ds, and the inevitable Fs be different? My view is that the winners with human expertise and the know-how to make money will benefit. I think that the other humanoids may be in for a world of hurt. The homelessness, the weak reading, writing, and arithmetic skills, and the consumption of chemicals or other “stuff” that parks the brain will persist.

The future of hooking the human to the cloud is perfect for some. Others may not have the resources to connect, a bit like farmers in North Dakota with no affordable or reliable Internet access. (Maybe Starlink-type services will rescue those with cash?)

Several observations are warranted:

  1. Technological “progress” has been and will continue to be a mixed bag. Sorry, Mr. Singularity. The top one percent surf on change. The other 99 percent are not slam dunk winners.
  2. The infrastructure issue is simply ignored, which is convenient. I mean if a person grew up with house servants, it is difficult to imagine not having people do what you tell them to do. (Could people without access find delight in becoming house servants to the one percent who thrive in 2045?)
  3. The extreme contention created by the deconstruction of shared values, norms, and conventions for social behavior is something that cannot be reconstructed with a cloud and human mind meld. Once toothpaste is out of the tube, one has a mess. One does not put the paste back in the tube. One blasts it away with a zap of Goo Gone. I wonder if that’s another omitted consequence of this super duper intelligence behavior: Get rid of those who don’t get with the program?

Net net: Googlers are a bit predictable when they predict the future. Oh, where’s the reference to online advertising?

Stephen E Arnold, July 9, 2024

A Signal That Money People Are Really Worried about AI Payoffs

July 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“AI’s $600B Question” is an interesting signal. The subtitle for the article is the pitch that sent my signal processor crazy: “The AI bubble is reaching a tipping point. Navigating what comes next will be essential.”


Executives on a thrill ride seem to be questioning the wisdom of hopping on the roller coaster. Thanks, MSFT Copilot. Good enough.

When money people output information that raises a question, something is happening. When the payoff is nailed, the financial types think about yachts, Bugattis, and getting quoted in the Financial Times. Doubts are raised because of these headline items: AI and $600 billion.

The write up says:

A huge amount of economic value is going to be created by AI. Company builders focused on delivering value to end users will be rewarded handsomely. We are living through what has the potential to be a generation-defining technology wave. Companies like Nvidia deserve enormous credit for the role they’ve played in enabling this transition, and are likely to play a critical role in the ecosystem for a long time to come. Speculative frenzies are part of technology, and so they are not something to be afraid of.

If I understand this money talk, a big-time outfit is directly addressing fears that AI won’t generate enough cash to pay its bills and make the investors a bundle of money. If the AI frenzy were on the Money Train Express, why raise questions and provide information about the tough-to-control costs of making AI knock off the hallucinations, the product recalls, the lawsuits, and the growing number of AI projects which just don’t work?

The fact of the article’s existence makes it clear to me that some folks are indeed worried. Does the write up reassure those with big bucks on the line? Does the write up encourage investors to pump more money into a new AI start up? Does the write up convert tests into long-term contracts with the big AI providers?

Nope, nope, and nope.

But here’s the unnerving part of the essay:

In reality, the road ahead is going to be a long one. It will have ups and downs. But almost certainly it will be worthwhile.

Translation: We will take your money and invest it. Just buckle up, buttercup. The ride on this roller coaster may end with the expensive cart hurtling from the track to the asphalt below. But don’t worry about us venture types. We will surf on churn and the flows of money. Others? Not so much.

Stephen E Arnold, July 8, 2024

Googzilla, Man Up, Please

July 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a couple of “real” news stories about Google and its green earth / save the whales policies in the age of smart software. The first write up is okay and not too exciting for a critical thinker wearing dinoskin. “The Morning After: Google’s Greenhouse Gas Emissions Climbed Nearly 50 Percent in Five Years Due to AI” reads like a PR-massaged write up. Consider this passage:

According to the report, Google said it expects its total greenhouse gas emissions to rise “before dropping toward our absolute emissions reduction target,” without explaining what would cause this drop.

Yep, no explanation. A PR win.

The BBC published “AI Drives 48% Increase in Google Emissions.” That write up states:

Google says about two thirds of its energy is derived from carbon-free sources.


Thanks, MSFT Copilot. Good enough.

Neither of these two articles nor the others I scanned focused on one key fact about Google’s green talk and its role in driving snail darters to their fate. Google’s leadership team did not plan its energy strategy. In fact, my hunch is that no one paid any attention to how much energy Google’s AI activities were sucking down. Once the company shifted into Code Red or whatever consulting-term craziness it used to label its frenetic response to the Microsoft OpenAI tie-up, absolutely zero attention was directed toward the few big-eyed tunas which might be taking their last dip.

Several observations:

  1. PR speak and green talk are like many assurances emitted by the Google. Talk is not action.
  2. The management processes at Google are disconnected from what happens when the wonky Code Red light flashes and the siren howls at midnight. Shouldn’t management be connected when the Tapanuli orangutan could soon be facing the Big Ape in the sky?
  3. The AI energy consumption is not a result of AI. The energy consumption is a result of Googlers who do what’s necessary to respond to smart software. Step on the gas. Yeah, go fast. Endanger the Amur leopard.

Net net: Hey, Google, stand up and say, “My leadership team is responsible for the energy we consume.” Don’t blame your up-in-flames “green” initiative on software you invented. How about less PR and more focus on engineering more efficient data center and cloud operations? I know PR talk is easier, but buckle up, buttercup.

Stephen E Arnold, July 8, 2024

AI: Hurtful and Unfair. Obviously, Yes

July 5, 2024

It will be years before AI is “smart” enough to entirely replace humans, but that future may arrive sooner than many expect. The problem with current AI is that it is stupid. It doesn’t know how to do anything unless it is trained on huge datasets. These datasets contain the hard, copyrighted, trademarked, proprietary, etc. work of individuals. These people don’t want their work used to train AI without their permission, much less to replace them. Futurism shares that even AI engineers are worried about their creations: “Video Shows OpenAI Admitting It’s ‘Deeply Unfair’ To ‘Build AI And Take Everyone’s Job Away.”

The interview with the AI software engineer’s admission of guilt originally appeared in The Atlantic, but his remorse is quickly covered by apathy. Brian Wu is the engineer in question. He feels guilty about making jobs obsolete, but he makes an observation that applies to progress and new technology generally: things change and that is inevitable:
“It won’t be all bad news, he suggests, because people will get to ‘think about what to do in a world where labor is obsolete.’

But as he goes on, Wu sounds more and more unconvinced by his own words, as if he’s already surrendered himself to the inevitability of this dystopian AI future.

‘I don’t know,’ he said. ‘Raise awareness, get governments to care, get other people to care.’ A long pause. ‘Yeah. Or join us and have one of the few remaining jobs. I don’t know. It’s rough.’”

Wu’s colleague Daniel Kokotajlo believes humans will invent an all-knowing artificial general intelligence (AGI). The AGI will create wealth, and even though that wealth won’t be distributed evenly, all humans will be rich. Kokotajlo then delves into the typical science-fiction story about a super AI becoming evil and turning against humanity. The AI engineers, however, aren’t concerned with the moral ambiguity of AI. They want to invent, continue building wealth, and are hellbent on doing it no matter the consequences. It’s pure motivation but also narcissism and entitlement.

Whitney Grace, July 5, 2024

Smart Software and Knowledge Skills: Nothing to Worry About. Nothing.

July 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read an article in Bang Premier (an estimable online publication of which I had no prior knowledge). It is now a “fave of the week.” The story “University Researchers Reveal They Fooled Professors by Submitting AI Exam Answers” was one of those experimental results which caused me to chuckle. I like to keep track of sources of entertaining AI information.


A doctor and his surgical team used smart software to ace their medical training. Now a patient learns that the AI system does not have the information needed to perform life-saving surgery. Thanks, MSFT Copilot. Good enough.

The Bang Premier article reports:

Researchers at the University of Reading have revealed they successfully fooled their professors by submitting AI-generated exam answers. Their responses went totally undetected and outperformed those of real students, a new study has shown.

Is anyone surprised?

The write up noted:

Dr Peter Scarfe, an associate professor at Reading’s school of psychology and clinical language sciences, said about the AI exams study: “Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. We won’t necessarily go back fully to handwritten exams, but the global education sector will need to evolve in the face of AI.”

But the knee slapper is this statement in the write up:

In the study’s endnotes, the authors suggested they might have used AI to prepare and write the research. They stated: “Would you consider it ‘cheating’? If you did consider it ‘cheating’ but we denied using GPT-4 (or any other AI), how would you attempt to prove we were lying?” A spokesperson for Reading confirmed to The Guardian the study was “definitely done by humans”.

The researchers may not have used AI to create their report, but is it possible that some of the researchers thought about this approach?

Generative AI software seems to have hit a plateau due to technology, financial, or training issues. Perhaps those who are trying to design smart systems to identify bogus images, machine-produced text, synthetic data, and nifty videos which often look like “real” TikTok-type creations will catch up? But if the AI innovators continue to refine their systems, the “AI identifier” software is locked in a game of cat and mouse. Reacting to smart software means that existing identifiers will be blind to the new systems’ outputs.

The goal is a noble one, but the advantage goes to the AI companies, particularly those who want to go fast and break things. Academics get some benefit. New studies will be needed to determine how much fakery goes undetected. Will a surgeon who used AI to get his or her degree be able to handle a tricky operation and get the post-op drugs right?

Sure. No worries. Some might not think this is a laughing matter. Hey, it’s AI. It is A-Okay.

Stephen E Arnold, July 5, 2024

Microsoft Recall Continues to Concern UK Regulators

July 4, 2024

A “feature” of the upcoming Microsoft Copilot+, dubbed Recall, looks like a giant, built-in security risk. Many devices already harbor software that can hunt through one’s files, photos, emails, and browsing history. Recall intrudes further by also taking and storing a screenshot every few seconds. Wait, what? That is what the British Information Commissioner’s Office (ICO) is asking. The BBC reports, “UK Watchdog Looking into Microsoft AI Taking Screenshots.”

Microsoft asserts users have control and that the data Recall snags is protected. But the company’s pretty words are not enough to convince the ICO. The agency is grilling Microsoft about the details and will presumably update us when it knows more. Meanwhile, journalist Imran Rahman-Jones asked experts about Recall’s ramifications. He writes:

“Jen Caltrider, who leads a privacy team at Mozilla, suggested the plans meant someone who knew your password could now access your history in more detail. ‘[This includes] law enforcement court orders, or even from Microsoft if they change their mind about keeping all this content local and not using it for targeted advertising or training their AIs down the line,’ she said. According to Microsoft, Recall will not moderate or remove information from screenshots which contain passwords or financial account information. ‘That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry,’ said Ms. Caltrider. ‘I wouldn’t want to use a computer running Recall to do anything I wouldn’t do in front of a busload of strangers. ‘That means no more logging into financial accounts, looking up sensitive health information, asking embarrassing questions, or even looking up information about a domestic violence shelter, reproductive health clinic, or immigration lawyer.’”

Calling Recall a privacy nightmare, AI and privacy adviser Dr Kris Shrishak notes that just knowing one’s device is constantly taking screenshots will have a chilling effect on users. Microsoft appears to have “pulled” the service. But data and privacy expert Daniel Tozer raised a couple more points: How will a company feel if a worker’s Copilot snaps a picture of its proprietary or confidential information? Will anyone whose likeness appears in a video chat or a photo be asked for consent before the screenshot is taken? Our guess: not unless Microsoft is forced to ask.
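Some back-of-the-envelope arithmetic shows why “a screenshot every few seconds” alarms privacy watchers. The interval, active hours, and per-snapshot size below are assumptions for illustration, not Microsoft’s published figures:

```python
def daily_snapshot_storage_mb(interval_s: int = 5,
                              active_hours: float = 8.0,
                              avg_snapshot_kb: float = 150.0) -> float:
    """Estimated megabytes of screenshots captured per working day."""
    snapshots = int(active_hours * 3600 / interval_s)  # 5760 at defaults
    return snapshots * avg_snapshot_kb / 1024

# Hundreds of megabytes per day of searchable screen history, which
# compounds into hundreds of gigabytes per year on one laptop.
print(daily_snapshot_storage_mb())  # 843.75
```

Even if compression or filtering cuts that figure substantially, the point stands: the archive is large, local, and a single password away from anyone who obtains it.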

Cynthia Murrell, July 4, 2024
