Microsoft: Not Deteriorating, Just Normal Behavior
June 26, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Gee, Microsoft, you are amazing. We just fired up a new Windows 11 Professional machine and guess what? Yep, the printers are not recognized. Nice work and consistent good enough quality.
Then I read “Microsoft Admits to Problems Upgrading Windows 11 Pro to Enterprise.” That write up says:
There are problems with Microsoft’s last few Windows 11 updates, leaving some users unable to make the move from Windows 11 Pro to Enterprise. Microsoft made the admission in an update to the "known issues" list for the June 11, 2024, update for Windows 11 22H2 and 23H2 – KB5039212. According to Microsoft, "After installing this update or later updates, you might face issues while upgrading from Windows Pro to a valid Windows Enterprise subscription."
Bad? Yes. But then I worked through this write up: “Microsoft Chose Profit Over Security and Left U.S. Government Vulnerable to Russian Hack, Whistleblower Says.” Is the information in the article on the money? I don’t know. I do know that bad actors find Windows the equivalent of an unlocked candy store. Goodies are there for greedy teens to cart off the chocolate-covered peanuts and gummy worms.
Everyone interested in entering the Microsoft Windows Theme Park wants to enjoy the thrills of a potentially lucrative experience. Thanks, MSFT Copilot. Why is everyone in your illustration the same?
This remarkable story of willful ignorance explains:
U.S. officials confirmed reports that a state-sponsored team of Russian hackers had carried out SolarWinds, one of the largest cyberattacks in U.S. history.
How did this happen? The write up asserts:
The federal government was preparing to make a massive investment in cloud computing, and Microsoft wanted the business. Acknowledging this security flaw could jeopardize the company’s chances, Harris [a former Microsoft security expert and whistleblower] recalled one product leader telling him. The financial consequences were enormous. Not only could Microsoft lose a multibillion-dollar deal, but it could also lose the race to dominate the market for cloud computing.
Bad things happened. The article includes this interesting item:
From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds.
Okay, that’s the main idea: Money.
Several observations are warranted:
- There seems to be an issue with procurement. The US government creates an incentive for Microsoft to go after big contracts and then does not require Microsoft products to work or be secure. I know generals love PowerPoint, but it seems that national security is at risk.
- Microsoft itself operates with a policy of doing what’s necessary to make as much money as possible and avoiding the cost of engineering products that deliver what the customer wants: Stable, secure software and services.
- Individual users have to figure out how to make the most basic functions work without stopping business operations. Printers should print; an operating system should be able to handle what my first personal computer could do in the early 1980s. After 40 years, printing is not a new thing.
Net net: In a consequence-free business environment, I am concerned that Microsoft will not improve its security or the most basic computer operations. I am not sure the company knows how to remediate what I think of as a Disneyland for bad actors. And I wanted the new Windows 11 Professional to work. How stupid of me?
Stephen E Arnold, June 26, 2024
Falling Apples: So Many to Harvest and Sell to Pay the EU
June 25, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
What goes up seems to come down. Apple is peeling back on the weird headset gizmo. The company’s AI response — despite the thrills Apple Intelligence produced in some acolytes — is “to be” AI or vaporware. China dependence remains a sticky wicket. And if the information in “Apple Has Very Serious Issues Under Sweeping EU Digital Rules, Competition Chief Says” is accurate, the happy giant in Cupertino will be writing some Jupiter-sized checks. Imagine. Pesky Europeans are asserting that Apple has a monopoly and has been acting less like Johnny Appleseed and more like Andrew Carnegie.
A powerful force causes Tim Apple to wonder why so many objects are falling on his head. Thanks, MSFT Copilot. Good enough.
The write up says:
… regulators are preparing charges against the iPhone maker. In March [2024], the European Commission, the EU’s executive arm, opened a probe into Apple, Alphabet and Meta, under the sweeping Digital Markets Act tech legislation that became applicable this year. The investigation featured several concerns about Apple, including whether the tech giant is blocking businesses from telling their users about cheaper options for products or about subscriptions outside of the App Store.
Would Apple, the flag bearer for almost-impossible-to-repair products and for software that just won’t charge laptop batteries no matter what the user does prior to a long airplane flight, prevent the free flow of information?
The EU nit pickers believe that Apple’s principles and policies are a “serious issue.”
How much money is possibly involved if the EU finds Apple — pardon the pun — a bad apple in a barrel of rotten US high technology companies? The write up says:
If it is found in breach of Digital Markets Act rules, Apple could face fines of up to 10% of the company’s total worldwide annual turnover.
Apple captured about $380 billion in FY2023; at the 10% cap, that works out to a potential payday for the EU of about US$38 billion and change.
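The arithmetic is simple enough to check in a few lines of Python. Both numbers below are the rough figures cited above, not audited ones:

```python
# Back-of-the-envelope math for the DMA fine cap. Both values are
# approximations taken from the write up, not official figures.
apple_fy2023_revenue = 380e9   # roughly US$380 billion
dma_fine_cap = 0.10            # up to 10% of worldwide annual turnover

max_fine = apple_fy2023_revenue * dma_fine_cap
print(f"Maximum fine: about ${max_fine / 1e9:.0f} billion")
# Maximum fine: about $38 billion
```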
Speaking of change, will a big fine cause those Apples to levitate? Nope.
Stephen E Arnold, June 25, 2024
Thomson Reuters: A Trust Report about Trust from an Outfit with Trust Principles
June 21, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Thomson Reuters is into trust. The company has a Web page called “Trust Principles.” Here’s a snippet:
The Trust Principles were created in 1941, in the midst of World War II, in agreement with The Newspaper Proprietors Association Limited and The Press Association Limited (being the Reuters shareholders at that time). The Trust Principles imposed obligations on Reuters and its employees to act at all times with integrity, independence, and freedom from bias. Reuters Directors and shareholders were determined to protect and preserve the Trust Principles when Reuters became a publicly traded company on the London Stock Exchange and Nasdaq. A unique structure was put in place to achieve this. A new company was formed and given the name ‘Reuters Founders Share Company Limited’, its purpose being to hold a ‘Founders Share’ in Reuters.
Trust nestles in some legalese and a bit of business history. The only reason I mention this anchoring in trust is that Thomson Reuters reported quarterly revenue of $1.88 billion in May 2024, up from $1.74 billion in May 2023. The financial crowd had expected $1.85 billion in the quarter, and Thomson Reuters beat that. Surplus funds make it possible to fund many important tasks; for example, a study of trust.
The ouroboros, according to some big thinkers, symbolizes the entity’s journey and the unity of all things; for example, defining trust, studying trust, and writing about trust as embodied in the symbol.
My conclusion is that trust as a marketing and business principle seems to be good for business. Therefore, I trust, and I am confident in the information in “Global Audiences Suspicious of AI-Powered Newsrooms, Report Finds.” The subject of the trusted news story is a report from the Reuters Institute for the Study of Journalism. The Thomson Reuters reporter presents in a trusted way this statement:
According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists’ work more efficient.
To make the point, a person working on the trusted outfit’s trusted report says, in what strikes me as a trustworthy way:
“It was surprising to see the level of suspicion,” said Nic Newman, senior research associate at the Reuters Institute and lead author of the Digital News Report. “People broadly had fears about what might happen to content reliability and trust.”
In case you have lost the thread, let me summarize. The trusted outfit Thomson Reuters funded a study about trust. The research was conducted by the trusted outfit’s own Reuters Institute for the Study of Journalism. The conclusion of the report, as presented by the trusted outfit, is that people want news they can trust. I think I have covered the post card with enough trust stickers.
I know I can trust the information. Here’s a factoid from the “real” news report:
Vitus “V” Spehar, a TikTok creator with 3.1 million followers, was one news personality cited by some of the survey respondents. Spehar has become known for their unique style of delivering the top headlines of the day while laying on the floor under their desk, which they previously told Reuters is intended to offer a more gentle perspective on current events and contrast with a traditional news anchor who sits at a desk.
How can one not trust a report that includes a need met by a TikTok creator? Would a Thomson Reuters’ professional write a news story from under his or her desk or cube or home office kitchen table?
I think self-funded research which finds that the funding entity’s approach to trust is exactly what those in search of “real” news need speaks for itself. Wikipedia includes some interesting information about Thomson Reuters in its discussion of the company in the section titled “Involvement in Surveillance.” Wikipedia alleges that Thomson Reuters licenses data to Palantir Technologies, an assertion which, if accurate, I find orthogonal to my interpretation of the word “trust.” But Wikipedia is not Thomson Reuters.
I will not ask questions about the methodology of the study. I trust the Thomson Reuters’ professionals. I will not ask questions about the link between revenue and digital information. I have the trust principles to assuage any doubt. I will not comment on the wonderful ouroboros-like quality of an enterprise embodying trust, funding a study of trust, and converting those data into a news story about itself. The symmetry is delicious and, of course, trustworthy. For information about Thomson Reuters’s trusted use of artificial intelligence see this Web page.
Stephen E Arnold, June 21, 2024
There Must Be a Fix? Sorry. Nope.
June 20, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I enjoy stories like “Microsoft Chose Profit Over Security and Left U.S. Government Vulnerable to Russian Hack, Whistleblower Says.” It combines a number of fascinating elements; for example, corporate greed, Russia, a whistleblower, and the security of the United States. Figuring out who did what to whom, when, and under what circumstances is not something a dinobaby at my pay grade of zero can do. However, I can highlight some of the moving parts asserted in the write up and pose a handful of questions. Will these make you feel warm and fuzzy? I hope not. I get a thrill capturing the ideas as they manifest in my very aged brain.
The capture officer proudly explains to the giant corporation, “You have won the money!” Can money buy security happiness? Answer: Nope. Thanks, MSFT Copilot. Good enough, the new standard of excellence.
First, what is the primum movens for this exposé? I think that for this story, one candidate is Microsoft. The company has to decide to do what slays the evil competitors, keeps the company the leader in all things smart, and generates what Wall Street and most stakeholders crave: Money. Security is neither sexy nor a massive revenue producer when measured in terms of fixing up the vulnerabilities in legacy code, the previous fixes, and the new vulnerabilities cranked out with gay abandon. Recall any recent MSFT service which may create a small security risk or two? Despite this somewhat questionable approach to security, Microsoft has convinced the US government that core software like PowerPoint definitely requires the full panoply of MSFT software, services, features, and apps. Unfortunately, articles like “Microsoft Chose Profit Over Security” convert the drudgery of cyber security into a snazzy story. A hard worker finds the MSFT flaw, reports it, and departs for a more salubrious work life. The write up says:
U.S. officials confirmed reports that a state-sponsored team of Russian hackers had carried out SolarWinds, one of the largest cyberattacks in U.S. history. They used the flaw Harris had identified to vacuum up sensitive data from a number of federal agencies, including, ProPublica has learned, the National Nuclear Security Administration, which maintains the United States’ nuclear weapons stockpile, and the National Institutes of Health, which at the time was engaged in COVID-19 research and vaccine distribution. The Russians also used the weakness to compromise dozens of email accounts in the Treasury Department, including those of its highest-ranking officials. One federal official described the breach as “an espionage campaign designed for long-term intelligence collection.”
Cute. SolarWinds, big-money deals, and hand-waving about security. What has changed? Nothing. A report criticized MSFT; the company issued appropriate slick-talking, lawyer-vetted, PR-crafted assurances that security is Job One. What has changed? Nothing.
The write up asserts about MSFT’s priorities:
the race to dominate the market for new and high-growth areas like the cloud drove the decisions of Microsoft’s product teams. “That is always like, ‘Do whatever it frickin’ takes to win because you have to win.’ Because if you don’t win, it’s much harder to win it back in the future. Customers tend to buy that product forever.”
I understand. I am not sure corporations and government agencies do. That PowerPoint software is the go-to tool for many agencies. One high-ranking military professional told me: “The PowerPoints have to be slick.” Yep, slick. But reports are written in PowerPoints. Congress is briefed with PowerPoints. Secret operations are mapped out in PowerPoints. Therefore, buy whatever it takes to make, save, and distribute the PowerPoints.
The appropriate response is, “Yes, sir.”
So what’s the fix? There is no fix. The Microsoft legacy security, cloud, AI “conglomeration” is entrenched. The Certified Partners will do patch ups. The whistleblowers will toot, but their tune will be drowned out in the post-contract-capture party at the Old Ebbitt Grill.
Observations:
- Third-party solutions are going to have to step up. Microsoft does not fix; it creates.
- More serious breaches are coming. Too many nation-states view the US as a problem and want to take it down and put it out.
- Existing staff in the government and at third-party specialist firms are in “knee jerk mode.” The idea of proactively getting ahead of the numerous bad actors is an interesting thought experiment. But like most thought experiments, it can end with becoming a BFF of Don Quixote and going after those windmills.
Net net: Folks, we have some cyber challenges on our hands, in our systems, and in the cloud. I wish reality were different, but it is what it is. (Didn’t President Clinton define “is”?)
Stephen E Arnold, June 20, 2024
The Gray Lady Tap Dances
June 17, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The collision of myth, double talk, technology, and money produces some fascinating tap dancing. Tip tap tip tap. Tap tap. That’s the sound of the folks involved with explaining that technology is no big deal. Drum roll. Then the coda. Tip tap tip tap. Tap tap tap. It is not money. Tip tap tip tap. Tap tap.
I think quite a few business decisions are about money; specifically, getting a bonus or a hefty raise because “efficiency” improves “quality.” One can dance around the dead horse, but at some point that horse needs to be relocated.
The “real” Mona Lisa. Can she be enhanced, managed, and be populated with metadata without a human art director? Yep. Thanks, MSFT Copilot. Good enough.
I read “New York Times Union Urges Management to Reconsider 9 Art Department Cuts as Paper Ramps Up AI Tools | Exclusive.” The write up weaves a number of themes together. There is the possibility of management waffling, a common practice these days. Recall, an incident, Microsoft? The ever-present next big thing makes an appearance. Plus, there is the Gray Lady, working hard to maintain its position as the newspaper for the USA today. (That sounds familiar, doesn’t it?)
The main point of the write up is that the NYT’s art department might lose staff. The culprit is not smart software. Money is not the issue. Quality will not suffer. Yada yada. The write up says:
The Times denies that the reductions are in any way related to the newspaper’s AI initiatives.
And the check is in the mail.
I also noted:
A spokesman for the Times said the affected employees are being offered a buyout, and have nothing to do with the use of AI. “Last month, The Times’s newsroom made the difficult decision to reduce the size of its art production team with workflow changes to make photo toning and color correction work more efficient,” Charlie Stadtlander told TheWrap. “On May 30th, we offered generous voluntary buyouts for 9 employees to accept. These changes involve the adoption of new workflows and the expanded use of industry-standard tools that have been in use for years — they are not related to The Times’s AI efforts.”
Nope. Never. Impossible. Unthinkable.
What is the smart software identified as a staff reducer? It is Claro, but that is not the name of the company. The current name of the service is Pixometry, which is a mashup of Claro and Elpical. So what does this controversial smart software do? The firm’s Web site says:
Pixometry is the latest evolution of Claro, the leading automated image enhancement platform for Publishers and Retailers around the globe. Combining exceptional software with outstanding layered AI services, Pixometry delivers a powerful image processing engine capable of creating stunning looking images, highly accurate cut-outs and automatic keywording in seconds. Reducing the demands upon the Photoshop teams, Pixometry integrates seamlessly with production systems and prepares images for use in printed and digital media.
The Pixometry software delivers:
Cloud based automatic image enhancement & visual asset management solutions for publishers & retail business.
Its functions include (a rough sketch of this kind of pipeline follows the list):
- Automatic image “correction” because “real” is better than real
- Automatic cut outs and key wording (I think a cut out is a background remover so a single image can be plucked from a “real” photo)
- Consistent, high quality results. None of that bleary art director eye contact.
- Multi-channel utilization. The software eliminates telling a Photoshop wizard I need a high-res image for the magazine and then a 96 dot-per-inch version for the Web. How long will that take? What? I need the images now.
- Applied AI image intelligence. Hey, no hallucinations here. This is “real” image enhancement and better than what those Cooper Union space cadets produce when they are not wandering around looking for inspiration or whatever.
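For the curious, here is a minimal sketch of what a one-click enhance-and-cut-out pipeline of this kind does. This is not Pixometry’s code or API (neither appears in the write up); it approximates the workflow with the open source Pillow and rembg libraries:

```python
# A toy stand-in for an automated "enhance and cut out" pipeline.
# Pillow supplies the crude auto-correction; rembg removes the background.
from PIL import Image, ImageOps
from rembg import remove  # open source background remover (pip install rembg)

def enhance_and_cut_out(path: str) -> None:
    img = Image.open(path)
    img = ImageOps.exif_transpose(img).convert("RGB")  # honor the camera's orientation tag
    corrected = ImageOps.autocontrast(img)  # stretch tonal range: crude "auto correction"
    corrected.save("enhanced.jpg")
    remove(corrected).save("cut_out.png")   # subject plucked out; PNG keeps transparency

enhance_and_cut_out("real_photo.jpg")
```

The point of products like Pixometry is that a loop of this sort runs unattended on every incoming image, which is exactly why the art department head count is in play.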
Does that sound like reality shaping or deep fake territory? Hmmm. That’s a question none of the hair-on-fire write ups addresses. But if you are a Photoshop and Lightroom wizard, the software means hasta la vista in my opinion. Smart software may suck at office parties but it does not require vacays, health care (just minor system updates), or unions. Software does not argue, wear political buttons, or sit around staring into space because of a late night at the “library.”
Pretty obscure unless you are a Photoshop wizard. The Pixometry Web site explains that it provides a searchable database of images and what looks like one click enhancement of images. Hey, every image needs a bit of help to be “real”, just like “real” news and “real” management explanations. The Pixometry Web site identifies some organizations as “loving” Pixometry; for example, the star-crossed BBC, News UK, El Mercurio, and the New York Times. Yes, love!
Let’s recap. Most of the reporting about this use of applied smart software gets the name of the system wrong. None of the write ups point out that art director functions in the hands of a latte guzzling professional are not quick, easy, or without numerous glitches. Furthermore, the humans in the “art” department must be managed.
The NYT is, it appears, trying to do the two-step around software that is better, faster, and cheaper than the human powered options. Other observations are:
- The fast-talking is not going to change the economic benefit of smart software
- The notion of a newspaper fixing up photos underscores that deep fakes have permeated institutions which operate as if it were 1923 skidoo time
- The skilled and semi-skilled workers in knowledge industries may taste blood when the titanium snake of AI bites them on the ankle. Some bites will be fatal.
Net net: Being up front may have some benefits. Skip the old soft shoe, please.
Stephen E Arnold, June 17, 2024
A Fancy Way of Saying AI May Involve Dragons
June 14, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The essay “What Apple’s AI Tells Us: Experimental Models” makes clear that pinning down artificial intelligence is proving to be more difficult than some anticipated in January 2023, the day when Google’s Code Red squawked and many people said, “AI is the silver bullet I want for my innovation cannon.”
Image source: https://www.geographyrealm.com/here-be-dragons/
Here’s a sentence I found important in the One Useful Thing essay:
What is worth paying attention to is how all the AI giants are trying many different approaches to see what works.
The write up explains different approaches to AI that the author has identified. These are:
- Apps
- Business models with subscription fees
The essay concludes with a specter “haunting AI.” The write up says:
I do not know if AGI [artificial general intelligence] is achievable, but I know that the mere idea of AGI being possible soon bends everything around it, resulting in wide differences in approach and philosophy in AI implementations.
Today’s smart software environment has an upside other than the money churn the craziness vortices generate:
Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.
Several observations are warranted.
First, the confessions of McKinsey’s AI team make it clear that smart outfits may not know what they are doing. The firms just plunge forward and then after months of work recycle the floundering into lessons. Presumably these lessons are “hire McKinsey.” See my write up “What Is McKinsey & Co. Telling Its Clients about AI?”
Second, another approach is to use AI in the hopes that staff costs can be reduced. I think this is the motivation of some AI enthusiasts. PwC (I am not sure if it is a consulting firm, an accounting firm, or some 21st century mutation) fell in lust with OpenAI. Not only did the firm kick OpenAI’s tires, PwC signed up to be what’s called an “enterprise reseller.” A client pays PwC to just make something work. In this case, PwC becomes the equivalent of a fix it shop with a classy address and workers with clean fingernails. The motivation, in my opinion, is cutting staff. “PwC Is Doing Quiet Layoffs. It’s a Brilliant Example of What Not to Do” says:
This is PwC in the U.K., and obviously, they operate under different laws than we do here in the United States. But in case you’re thinking about following this bad example, I asked employment attorney Jon Hyman for advice. He said, "This request would seem to fall under the umbrella of ‘protected concerted activity’ that the NLRB would take issue with. That said, the National Labor Relations Act does not apply to supervisors — defined as one with the authority to make personnel decisions using independent judgment. "Thus," he continues, "whether this specific PwC request runs afoul of the NLRA’s legal protections for employees to engage in protected concerted activity would depend on whether the laid-off employees were ‘supervisors’ under the Act."
I am a simpler person. The quiet layoffs complement the AI initiative. Quiet helps keep staff from making the connection I am suggesting. But consulting firms keep one eye on expenses and the other on partners’ profits. AI is a catalyst, not a technology.
Third, more AI fatigue write ups are appearing. One example, “The AI Fatigue: Are We Getting Tired of Artificial Intelligence?”, reports:
Hema Sridhar, Strategic Advisor for Technological Futures at the University of Auckland, says that there is a lot of “noise on the topic” so it is clear that “people are overwhelmed”. “Almost every company is using AI. Pretty much every app that you’re currently using on your phone has recently released some version with some kind of AI-feature or AI-enhanced features,” she adds. “Everyone’s using it and [it’s] going to be part of day-to-day life, so there are going to be some significant improvements in everything from how you search for your own content on your phone, to more improved directions or productivity tools that just fundamentally change the simple things you do every day that are repetitive.”
Let me reference Apple Intelligence to close this write up. Apple did not announce hardware. It talked about “to be” services. Instead of doing the Meta open source thing, the Google wrong answers with historically flawed images, or the MSFT on-again, off-again roll outs — Apple just did “to be.”
My hunch is that Apple is not cautious; its professionals know that AI products and services may be like those old maps which say, “Here be dragons.” Sailing close to the shore makes sense.
Stephen E Arnold, June 14, 2024
Will the Judge Notice? Will the Clients If Convicted?
June 12, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Law offices are eager to lighten their humans’ workload with generative AI. Perhaps too eager. Stanford University’s HAI reports, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.” Close enough for horseshoes, but for justice? And that statistic is with improved, law-specific software. We learn:
“In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.”
But that was before tailor-made retrieval-augmented generation tools. The article continues:
“Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim ‘avoid’ hallucinations and guarantee ‘hallucination-free’ legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined ‘hallucination,’ making it difficult to assess their real-world reliability.”
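For readers who have not met the acronym, here is a minimal sketch of the RAG pattern those marketing claims describe: retrieve plausibly relevant documents first, then force the model to answer from them. The toy corpus, the TF-IDF retriever, and the prompt wording below are illustrative assumptions, not any vendor’s implementation:

```python
# Toy RAG: find the closest "legal documents," then build a grounded prompt.
# A production system swaps in a vector database and a real LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Smith v. Jones (2010): an oral contract is enforceable when ...",
    "Doe v. Roe (2018): an electronic signature satisfies the statute when ...",
]

vectorizer = TfidfVectorizer().fit(corpus)
doc_vectors = vectorizer.transform(corpus)

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def grounded_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # Restricting the model to retrieved sources is the step that is
    # supposed to prevent hallucinated citations; the Stanford numbers
    # suggest it only reduces them.
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

print(grounded_prompt("Is an electronic signature binding?"))
```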
So the Stanford team tested three of the RAG systems for themselves: Lexis+ AI from LexisNexis, plus Westlaw AI-Assisted Research and Ask Practical Law AI from Thomson Reuters. The authors note they are not singling out LexisNexis or Thomson Reuters for opprobrium. On the contrary, these tools are less opaque than their competition and so more easily examined. They found that these systems are more accurate than general-purpose models like GPT-4. However, the authors write:
“But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”
These hallucinations come in two flavors. Many responses are flat out wrong. Others are misgrounded: they are correct about the law but cite irrelevant sources. The authors stress this second type of error is more dangerous than it may seem, for it may lure users into a false sense of security about the tool’s accuracy.
The post examines challenges particular to RAG-based legal AI systems and discusses responsible, transparent ways to use them, if one must. In short, it recommends public benchmarking and rigorous evaluations. Will law firms listen?
Cynthia Murrell, June 12, 2024
Will AI Kill Us All? No, But the Hype Can Be Damaging to Mental Health
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I missed the talk about how AI will kill us all. Planned? Nah, heavy traffic. From what I heard, none of the cyber investigators believed the person trying hard to frighten them. There are other, slightly more tangible, threats. One of the attendees whose name I did not bother to remember asked me, “What do you think about artificial intelligence?” My answer was, “Meh.”
A contrarian walks alone. Why? It is hard to make money being negative. At the conference I attended June 4, 5, and 6, attendees with whom I spoke just did not care. Thanks, MSFT Copilot. Good enough.
Why, you may ask? My method of handling the question is to refer to articles like this: “AI Appears to Be Rapidly Approaching a Brick Wall Where It Can’t Get Smarter.” This write up offers an opinion not popular among the AI cheerleaders:
Researchers are ringing the alarm bells, warning that companies like OpenAI and Google are rapidly running out of human-written training data for their AI models. And without new training data, it’s likely the models won’t be able to get any smarter, a point of reckoning for the burgeoning AI industry.
Like the argument that AI will change everything, this claim applies to systems based upon indexing human content. I am reasonably certain that more advanced smart software with different concepts will emerge. I am not holding my breath because much of the current AI hoo-hah has been gestating longer than a baby elephant.
So what’s with the doom pitch? Law enforcement apparently does not buy the idea. My team doesn’t. For the foreseeable future, applied smart software operating within some boundaries will allow some tasks to be completed quickly and with acceptable reliability. Robocop is not likely for a while.
One interesting question is why the polarization. First, it is easy. And, second, one can cash in. If one is a cheerleader, one can invest in a promising AI start up and make (in theory) oodles of money. By being a contrarian, one can tap into the segment of people who think the sky is falling. Being a contrarian is “different.” Plus, by predicting implosion and the end of life one can get attention. That’s okay. I try to avoid being the eccentric carrying a sign.
The current AI bubble relies in a significant way on a Google recipe: Indexing text. The approach reflects Google’s baked in biases. It indexes the Web; therefore, it should be able to answer questions by plucking factoids. Sorry, that doesn’t work. Glue cheese to pizza? Sure.
Hopefully new lines of investigation may reveal different approaches. I am skeptical about synthetic data (made up data that is probably correct). My fear is that we will require another 10, 20, or 30 years of research to move beyond shuffling content blocks around. There has to be a higher level of abstraction operating. But machines are machines, and wetware (human brains) is different.
Will life end? Probably, but not because of AI, unless someone turns over nuclear launches to “smart” software. In that case, the crazy eccentric could be on the beam.
Stephen E Arnold, June 11, 2024
AI May Not Be Magic: The Salesforce Signal
June 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Salesforce has been a steady outfit. However, the company suffered a revenue miss, its first in about a quarter century. The news reports cited broad economic factors like “macro headwinds.” According to the firm’s chief people officer, the company has been experimenting with AI for “over a decade.” But the magic of AI was not able to ameliorate the company’s dip or add some chrome trim to its revenue guidance.
John Milton’s god character from Paradise Lost watches the antics of super-sophisticated artificial intelligence algorithms. This character quickly realizes that zeros and ones crafted by humans and enhanced by smart machines are definitely not an omniscient, omnipresent, and omnipotent character who knows everything before it happens, no matter what the PR firms or company spokespersons assert. Thanks, MSFT Copilot. Good enough.
Piecing together fragments of information, it appears that AI has added to the company’s administrative friction. Consider these administrative process examples from a Fortune interview, recycled for MSN.com:
- The company has deployed 50 AI tools.
- Salesforce has an AI governance council.
- There is an Office of Ethical and Humane Use, started in 2019.
- Salesforce uses surveys to supplement its “robust listening strategies.”
- There are phone calls and meetings.
Some specific uses of AI appear to address inherent design constraints in Salesforce software; for example, AI has:
saved employees 50,000 hours within one business quarter, and the bot answered nearly 370,000 employee questions, according to the company. Merging into Project Basecamp, the company’s project management platform, has resolved 88,000 worker requests, speeding up issue resolution from an average of 48 hours to just 30 minutes.
What’s the payoff to the bottom line? That information is scant. What we know is that Salesforce may not be benefiting from the additional AI investment or from the friction AI’s bureaucratic processes impose on the company.
What’s this mean for those who predict that AI will change everything? I continue to think about the two ends of the spectrum: the go-fast-and-break-things crowd and the stop-AI contingent.
First, the type of AI that does high school essay writing is easy to understand. These systems work as long as the subject matter clumps into piles of factoids which limit the craziness of the algorithms’ outputs. The topic “How to make a taco” is nailed down. The topic “How to decrypt Telegram’s encryption system” is not. Big brains can explain why the taco question is relatively hallucination free but not why the Telegram question generates useless drivel. I have, therefore, concluded, “Limited, narrow domain questions are okay for AI.”
Second, the current systems are presented as super wonderful. An example is the steady flow of PR about Google DeepMind’s contributions to biological science. Yet Google’s search system generates baloney. I think the difference is that whacking away at proteins is a repetitive combinatorial problem. Calling the methods AI is like calling Daylight Chemical Information Systems a manifestation of the Oracle at Delphi: hogwash. PR erases important differences in critical lines of research. Does Google DeepMind feel shame? Let’s ask IBM Watson. That will be helpful. PR has a role; it is not AI.
Third, the desire for a silver bullet is deep-seated in many Peter Principle managers. These “leaders” of “leadership teams” don’t know what to do. Managing becomes figuring out risks. AI has legs, so let’s give that pony a chance to win the cart race. But pony cart races are trivial. The real races require winning three competitions. Few horses pull off that trick. I watch in wonder the launch, retreat, PR explanation, and next launch of some AI outfits. The focus seems to be on getting $20 per month. Degrading the service. Asking for more money. Then repeat.
The lack of AI innovation is becoming obvious. From the starter’s gun cracking in time with Microsoft’s AI announcement in January 2023, how much progress has been made?
We have the Salesforce financial report. We have the management craziness at OpenAI. We have Microsoft investing in or partnering with a number of technology outfits, including one in Paris. We have Google just doddering and fumbling. We have lawsuits. We have craziness like Adobe’s “owning” any image created with its software. We have start ups which bandy about the term “AI” like a shuttlecock in a high school badminton league in India. We have so many LinkedIn AI experts, I marvel that no one pins these baloney artists to a piece of white bread. We have the Dutch police emphasizing home-grown AI which helped make sense of the ANOM phone stings when the procedures are part of most policeware systems. Statistics, yes. AI, no. Clustering, yes. AI, no. Metadata assignment, yes. AI, no. The ANOM operation ran from about 2017 to its shutdown four years later. AI? Nope.
What does the lack of financial payoff and revenue generating AI solutions tell me? My answer to this question is:
- The cost of just using an AI system and letting prospects use it is high. Due to the lack of a Triple Crown contender, no company has the horse or can afford the costs of getting the nag ready to race and keeping the animal from keeling over dead.
- The tangible results are tough to express. Despite the talk about reducing the costs of customer service, the savings net of the AI system’s cost and the humans needed to ride herd on what the crazed cattle-like algorithms yield are not evident to me. The Salesforce experience is that AI cannot fix the Slack system or make it generate oodles of cost savings or revenues from new, happy customers.
- The AI systems, particularly the services promoted via Product Hunt, are impossible for me to differentiate. Some do images, but the functions are similar. Some AI systems do text things. Okay. But what’s new? Money is being spent to produce endless variations and me-too services. Fun for some. But boring and a waste of time to a dinobaby like me.
Net net: With economic problems growing in numerous sectors, those with money or a belief that garlic will kill Count Vampire, Baron of Revenue Loss are in for a surprise. Sorry. No software equivalent to Milton’s eternal, all-knowing, omnipotent God. I won’t tell the PR people. That Salesforce signal is meaningful.
Stephen E Arnold, June 10, 2024
Now Teachers Can Outsource Grading to AI
June 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
In a prime example of doublespeak, the “No Child Left Behind” act of 2002 ushered in today’s teach-to-the-test school environment. Once upon a time, teachers could follow student interest deeper into a subject, explore topics tangential to the curriculum, and encourage children’s creativity. Now it seems if it won’t be on the test, there is no time for it. Never mind evidence that standardized tests do not even accurately measure learning. Or the psychological toll they take on students. But education degradation is about to get worse.
Get ready for the next level in impersonal instruction. Graded.Pro is “AI Grading and Marking for Teachers and Educators.” Now teachers can hand the task of evaluating every classroom assignment off to AI. On the Graded.Pro website, one can view explanatory videos and see examples of AI-graded assignments. Math, science, history, English, even art. The test maker inputs the criteria for correct responses and the AI interprets how well answers adhere to those descriptions. This means students only get credit for that which an AI can measure. Sure, there is an opportunity for teachers to review the software’s decisions. And some teachers will do so closely. Others will merely glance at the results. Most will fall somewhere in between.
Here are the assignment and solution descriptions from the Art example (a sketch of the grading step follows the list): “Draw a lifelike skull with emphasis on shading to develop and demonstrate your skills in observational drawing.
Solutions:
- The skull dimensions and proportions are highly accurate.
- Exceptional attention to fine details and textures.
- Shading is skillfully applied to create a dynamic range of tones.
- Light and shadow are used effectively to create a realistic sense of volume and space.
- Drawing is well-composed with thoughtful consideration of the placement and use of space.”
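Mechanically, the grading step is simple enough to sketch. The snippet below assumes an OpenAI-style chat completion API as a stand-in for whatever Graded.Pro actually runs (its internals are not public in the write up); the model name and prompt wording are illustrative, not the product’s real configuration:

```python
# A hedged sketch of rubric-driven grading; not Graded.Pro's real code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def grade(rubric: str, student_answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice for illustration
        messages=[
            {"role": "system",
             "content": "You are a grader. Score the answer against the rubric "
                        "out of 10 and give one sentence of justification."},
            {"role": "user",
             "content": f"Rubric:\n{rubric}\n\nStudent answer:\n{student_answer}"},
        ],
    )
    return response.choices[0].message.content

print(grade("Shading creates a realistic sense of volume and space.",
            "My drawing uses cross-hatching to model the skull's curves."))
```

The sketch makes the limitation obvious: whatever the rubric cannot put into words, the model cannot reward.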
See the website for more examples as well as answers and grades. Sure, these are all relevant skills. But evaluation should not stop at the limits of an AI’s understanding. An insightful interpretation in a work of art? Brilliant analysis in an essay? A fresh take on an historical event? Qualities like those take a skilled human teacher to spot, encourage, and develop. But soon there may be no room for such niceties in education. Maybe, someday, no room for human teachers at all. After all, software is cheaper and does not form pesky unions.
Most important, however, is that grading is a bummer. Every child is exceptional. So argue with the robot that little Debbie got an F.
Cynthia Murrell, June 10, 2024