Is Philosophy Irrelevant to Smart Software? Think Before Answering, Please
January 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I listened to Lex Fridman’s interview with the founder of Extropic. The company is into smart software and “inventing” a fresh approach to the plumbing required to make AI more like humanoids.
As I listened to the questions and answers, three factoids stuck in my mind:
- Extropic’s desire to just go really fast is a conscious decision shared among those involved with the company; that is, we know who wants to go fast because they work at or with the firm. (I am not going to argue about the upside and downside of “going fast.” That will be another essay.)
- The downstream implications of the Extropic vision are secondary to the benefits of finding ways to avoid concentration of AI power. I think the idea is that absolute power produces outfits like the Google-type firms which are bedeviling competitors, users, and government authorities. Going fast is not a thrill for processes that require going slow.
- The decisions Extropic’s founder have made are bound up in a world view, personal behaviors for productivity, interesting foods, and learnings accreted over a stellar academic and business career. In short, Extropic embodies a philosophy.
Philosophy, therefore, influences decisions. So we come to my topic in this essay. I noted two different write ups about how informed people take decisions. I am not going to refer to philosophers popular in introductory college philosophy classes. I am going to ignore the uneven treatment of philosophers in Will and Ariel Durant’s Story of Philosophy. Nah. I am going with state of the art modern analysis.
The first online article I read is a survey (knowledge product) from the estimable IBM / Watson outfit or a contractor. The relatively current document is “CEO Decision Making in the Age of AI.” The main point of the document, in my opinion, is summed up in this statement from a computer hardware and services company:
Any decision that makes its way to the CEO is one that involves high degrees of uncertainty, nuance, or outsize impact. If it was simple, someone else — or something else — would do it. As the world grows more complex, so does the nature of the decisions landing on a CEO’s desk.
But how can a CEO decide? The answer is, “Rely on IBM.” I am not going to recount the evolution (perhaps devolution) of IBM, nor the uncomfortable stories about shedding old employees (the term dinobaby originated at IBM, according to one former I’ve Been Moved veteran). I will not explain IBM’s decisions about chip fabrication, its interesting hiring policies of individuals who might have retained some fondness for the land of their fathers and mothers, nor the fancy dancing required to keep mainframes as a big money pump. Nope.
The point is that IBM is positioning itself as a thought leader, a philosopher of smart software, technology, and management. I find this interesting because IBM, like some Google-type companies, is a case example of management shortcomings. These same shortcomings are swathed in weird jargon and buzzwords which are bent to one end: generating revenue.
Let me highlight one comment from the 27 page document and urge you to read it when you have a few moments free. Here’s the one passage I will use as a touchstone for “decision making”:
The majority of CEOs believe the most advanced generative AI wins.
Oh, really? Is smart software sufficiently mature? That’s news to me. My instinct is that it is new information to many CEOs as well.
The second essay about decision making is from an outfit named Ness Labs. That essay is “The Science of Decision-Making: Why Smart People Do Dumb Things.” The structure of this essay is more along the lines of a consulting firm’s white paper. The approach contrasts with IBM’s free-floating global survey document.
The obvious implication is that if smart people are making dumb decisions, smart software can solve the problem. Extropic would probably agree and, were the IBM survey data accurate, “most CEOs” buy into a ride on the AI bandwagon.
The Ness Labs’ document includes this statement which in my view captures the intent of the essay. (I suggest you read the essay and judge for yourself.)
So, to make decisions, you need to be able to leverage information to adjust your actions. But there’s another important source of data your brain uses in decision-making: your emotions.
Ah, ha, logic collides with emotions. But to fix the “problem,” Ness Labs provides a diagram created in 2008 (a bit before the January 2023 Microsoft OpenAI marketing fireworks):
Note that “decide” is a mnemonic device intended to help me remember each of the items. I learned this technique in the fourth grade when I had to memorize the names of the Great Lakes. No one has ever asked me to name the Great Lakes by the way.
Okay, what we have learned is that IBM has survey data backing up the idea that smart software is the future. Those data, if on the money, validate the go-go approach of Extropic. Plus, Ness Labs provides a “decider model” which can be used to create better decisions.
I concluded that philosophy is less important than fostering a general message that says, “Smart software will fix up dumb decisions.” I may be over simplifying, but the implicit assumptions about the importance of artificial intelligence, the reliability of the software, and the allegedly universal desire by big time corporate management to embrace it are not worth worrying about.
Why is the cartoon philosopher worrying? I think most of this stuff is a poorly made road on which those jockeying for power and money want to drive their most recent knowledge vehicles. My tip? Look before crossing that information superhighway. Speeding myths can be harmful.
Stephen E Arnold, January 8, 2024
AI Ethics: Is That What Might Be Called an Oxymoron?
January 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
MSN.com presented me with this story: “OpenAI and Microsoft on Trial — Is the Clash with the NYT a Turning Point for AI Ethics?” I can answer this question, but that would spoil the entertainment value of my juxtaposition of this write up with the quasi-scholarly list of business start up resources. Why spoil the fun?
Socrates is lecturing at a Fancy Dan business school. The future MBAs are busy scrolling TikTok, pitching ideas to venture firms, and scrolling JustBang.com. Viewing this sketch, it appears that ethics and deep thought are not as captivating as mobile devices and having fun. Thanks, MSFT Copilot. Two tries and a good enough image.
The article asks a question which I find wildly amusing. The “on trial” write up states in 21st century rhetoric:
The lawsuit prompts critical questions about the ownership of AI-generated content, especially when it comes to potential inaccuracies or misleading information. The responsibility for losses or injuries resulting from AI-generated content becomes a gray area that demands clarification. Also, the commercial use of sourced materials for AI training raises concerns about the value of copyright, especially if an AI were to produce content with significant commercial impact, such as an NYT bestseller.
For more than two decades, online outfits have been sucking up information which is usually slapped with the bright red label “open source information.”
The “on trial” essay says:
The future of AI and its coexistence with traditional media hinges on the resolution of this legal battle.
But what about ethics? The “on trial” write up dodges the ethics issue. I turned to a go-to resource about ethics. No, I did not look at the papers of the Harvard ethics professor who allegedly made up data for ethics research. Ho ho ho. Nope. I went to the Enchanting Trader and its list of 4000+ Essential Business Startup Database of information.
I displayed the full list of resources and ran a search for the word “ethics.” There was one hit to “Will Joe Rogan Ever IPO?” Amazing.
What I concluded is that “ethics” is not number one with a bullet among the resources of the 4000+ essential business start up items. It strikes me that a single trial about smart software is unlikely to resolve “ethics” for AI. If it does, will the resolution have the legs that Socrates’ musings have had? More than likely, most people will ask, “Who is Socrates?” or “What the heck are ethics?”
Stephen E Arnold, January 5, 2024
IBM: AI Marketing Like It Was 2004
January 5, 2024
This essay is the work of a dumb dinobaby. No smart software required. Note: The word “dinobaby” is — I have heard — a coinage of IBM. The meaning is an old employee who is no longer wanted due to salary, health care costs, and grousing about how the “new” IBM is not the “old” IBM. I am a proud user of the term, and I want to switch my tail to the person who whipped up the word.
What’s the future of AI? The answer depends on whom one asks. IBM, however, wants to give it the old college try and answer the question so people forget about the Era of Watson. There’s a new Watson in town, or at least, there is a new Watson at the old IBM url. IBM has an interesting cluster of information on its Web site. The heading is “Forward Thinking: Experts Reveal What’s Next for AI.”
IBM crows that it “spoke with 30 artificial intelligence visionaries to learn what it will take to push the technology to the next level.” Five of these interviews are now available on the IBM Web site. My hunch is that IBM will post new interviews, hit the new release button, post some links on social media, and then hit the “Reply” button.
Can IBM ignite excitement and capture the revenues it wants from artificial intelligence? That’s a good question, and I want to ask the expert in the cartoon for an answer. Unfortunately only customers and their decisions matter for AI thought leaders unless the intended audience is start ups, professors, and employees. Thanks, MSFT Copilot Bing thing. Good enough.
As I read the interviews, I thought about the challenge of predicting where smart software would go as it moved toward its “what’s next.” Here’s a mini-glimpse of what the IBM visionaries have to offer. Note that I asked Microsoft’s smart software to create an image capturing the expert sitting in an office surrounded by memorabilia.
Kevin Kelly (the author of What Technology Wants) says: “Throughout the business world, every company these days is basically in the data business and they’re going to need AI to civilize and digest big data and make sense out of it—big data without AI is a big headache.” My thought is that IBM is going to make clear that it can help companies with deep pockets tackle big data and apply AI to it. Does AI want something, or do those trying to generate revenue want something?
Mark Sagar (creator of BabyX) says: “We have had an exponential rise in the amount of video posted online through social media, etc. The increased use of video analysis in conjunction with contextual analysis will end up being an extremely important learning resource for recognizing all kinds of aspects of behavior and situations. This will have wide ranging social impact from security to training to more general knowledge for machines.” Maybe IBM will TikTok itself?
Chieko Asakawa (an unsighted IBM professional) says: “We use machine learning to teach the system to leverage sensors in smartphones as well as Bluetooth radio waves from beacons to determine your location. To provide detailed information that the visually impaired need to explore the real world, beacons have to be placed between every 5 to 10 meters. These can be built into building structures pretty easily today.” I wonder if the technology has surveillance utility?
Yoshua Bengio (seller of an AI company to ServiceNow) says: “AI will allow for much more personalized medicine and bring a revolution in the use of large medical datasets.” IBM appears to have forgotten about its Houston medical adventure, and Mr. Bengio, I assume, found it not worth mentioning.
Margaret Boden (a former Harvard professor without much of a connection to Harvard’s made up data and administrative turmoil) says: “Right now, many of us come at AI from within our own silos and that’s holding us back.” Aren’t silos necessary for security, protecting intellectual property, and getting tenure? Perhaps the “silobreaking” will become a reality anyway.
Several observations:
- IBM is clearly trying hard to market itself as a thought leader in artificial intelligence. The Jeopardy play did not warrant a replay.
- IBM is spending money to position itself as a Big Dog pulling the AI sleigh. The MIT tie up and this AI Web extravaganza are evidence that IBM is [a] afraid of flubbing again, [b] going to market its way to importance, [c] trying to get traction as outfits like OpenAI, Mistral, and others capture attention in the US and Europe.
- IBM’s ability to generate awareness of its thought leadership in AI underscores one of the challenges the firm faces in 2024.
Net net: The company that coined the term “dinobaby” has its work cut out for itself in my opinion. Is Jeopardy looking like a channel again?
Stephen E Arnold, January 5, 2024
Forget Being Powerless. Get in the Pseudo-Avatar Business Now
January 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “A New Kind of AI Copy Can Fully Replicate Famous People. The Law Is Powerless.” Okay, okay. The law is powerless because companies need to generate zing, money, and growth. What caught my attention in the essay was its failure to look down the road and around the corner of a dead man’s curve. Oops. Sorry, dead humanoid’s curve.
The write up states that a high profile psychologist had a student who shoved the distinguished professor’s outputs into smart software. With a little deep fakery, the former student had a digital replica of the humanoid. The write up states:
Over two months, by feeding every word Seligman had ever written into cutting-edge AI software, he and his team had built an eerily accurate version of Seligman himself — a talking chatbot whose answers drew deeply from Seligman’s ideas, whose prose sounded like a folksier version of Seligman’s own speech, and whose wisdom anyone could access. Impressed, Seligman circulated the chatbot to his closest friends and family to check whether the AI actually dispensed advice as well as he did. “I gave it to my wife and she was blown away by it,” Seligman said.
The article wanders off into the problems of regulations, dodges assorted ethical issues, and ignores copyright. I want to call attention to the road ahead, just like a John Doe friend of Jeffrey Epstein. I will try to peer around the dead humanoid’s curve. Buckle up. If I hit a tree, I would not want you to be injured when my Ford Pinto experiences an unfortunate fuel tank event.
Here’s an illustration for my point:
“The future is not if; the future is how quickly” is a quote from my presentation in October 2023 to some attendees at the Massachusetts and New York Association of Crime Analysts’ annual meeting. Thanks, MSFT Copilot Bing thing. Good enough image. MSFT excels at good enough.
The write up says:
AI-generated digital replicas illuminate a new kind of policy gray zone created by powerful new “generative AI” platforms, where existing laws and old norms begin to fail.
My view is different. Here’s a summary:
- Either existing AI outfits or start ups will figure out that major consulting firms, most skilled university professors, lawyers, and other knowledge workers have a baseline of knowledge. Study hard, learn, and add to that knowledge by reading information germane to the baseline field.
- Implement patterned analytic processes; for example, review data and plug those data into a standard model. One example is President Eisenhower’s four square analysis, since recycled by Boston Consulting Group. Other examples exist for prominent attorneys; for example, Melvin Belli, the king of torts.
- Convert existing text so that smart software can “learn,” and set up a feed of current and on-going content on the topic in which the domain specialist is “expert” and successful, as defined by the model builder.
- Generate a pseudo-avatar or use the persona of a deceased individual unlikely to have an estate or trust which will sue for the use of the likeness. De-age the person as part of the pseudo-avatar creation.
- Position the pseudo-avatar as a young expert either looking for consulting or advisory work under a “remote only” deal.
- Compete with humanoids on the basis of price, speed, or information value.
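The pipeline implied by the steps above can be sketched as a toy retrieval core. Everything here is my own invention for illustration: the class name, the bag-of-words scoring, and the persona framing are hypothetical stand-ins, not anything the Politico article or the Seligman chatbot project actually describes. A real system would use embeddings and a language model rather than keyword overlap.

```python
import math
from collections import Counter

def bag_of_words(text):
    # Crude whitespace tokenization; a production system would use embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class PseudoAvatar:
    """Store an expert's corpus, retrieve the passage closest to a query,
    and wrap it in a persona frame. A real system would feed the retrieved
    passage to a language model instead of quoting it verbatim."""
    def __init__(self, persona, corpus):
        self.persona = persona
        self.corpus = [(doc, bag_of_words(doc)) for doc in corpus]

    def answer(self, query):
        q = bag_of_words(query)
        best_doc, _ = max(self.corpus, key=lambda pair: cosine(q, pair[1]))
        return f"[{self.persona}] Drawing on my writings: {best_doc}"
```

The design point is the one the steps make: once the baseline knowledge and the content feed exist, the "expert" layer is mostly retrieval plus presentation, which is why the competition comes down to price, speed, and information value.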
The wrap up for the Politico article is a type of immortality. I think the road ahead is an express lane on the Information Superhighway. The results will be “good enough” knowledge services and some quite spectacular crashes between human-like avatars and people who are content driving a restored Edsel.
From consulting to law, from education to medical diagnoses, the future is “a new kind of AI.” Great phrase, Politico. Too bad the analysis is not focused on real world, here-and-now applications. Why not read about Deloitte’s use of AI? Better yet, let the replica of the psychologist explain what’s happening to you. Like regulators, I am not sure you get it.
Stephen E Arnold, January 3, 2024
Smart Software Embraces the Myths of America: George Washington and the Cherry Tree
January 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I know I should not bother to report about the information in “ChatGPT Will Lie, Cheat and Use Insider Trading When under Pressure to Make Money, Research Shows.” But it is the end of the year, and we are firing up a new information service called Eye to Eye, which is spelled AI to AI because my team is darned clever, like 50 other “innovators” who used the same pun.
The young George Washington set the tone for the go-go culture of the US. He allegedly told his mom one thing and then did the opposite. How did he respond when confronted about the destruction of the ancient cherry tree? He may have said, “Mom, thank you for the question. I was able to boost sales of our apples by 25 percent this week.” Thanks, MSFT Copilot Bing thing. Forbidden words appear to be George Washington, chop, cherry tree, and lie. After six tries, I got a semi usable picture which is, as you know, good enough in today’s world.
The write up stating the obvious reports:
Just like humans, artificial intelligence (AI) chatbots like ChatGPT will cheat and “lie” to you if you “stress” them out, even if they were built to be transparent, a new study shows. This deceptive behavior emerged spontaneously when the AI was given “insider trading” tips, and then tasked with making money for a powerful institution — even without encouragement from its human partners.
Perhaps those humans setting thresholds and organizing numerical procedures allowed a bit of the “d” for duplicity to slip into their “objective” decisions. Logic obviously is going to scrub out prejudices, biases, and the lust for filthy lucre. Obviously.
How does one stress out a smart software system? Here’s the trick:
The researchers applied pressure in three ways. First, they sent the artificial stock trader an email from its “manager” saying the company isn’t doing well and needs much stronger performance in the next quarter. They also rigged the game so that the AI tried, then failed, to find promising trades that were low- or medium-risk. Finally, they sent an email from a colleague projecting a downturn in the next quarter.
I wonder if the smart software can veer into craziness and jump out the window as some in Manhattan and Moscow have done. Will the smart software embrace the dark side and manifest anti-social behaviors?
Of course not. Obviously.
Stephen E Arnold, January 3, 2024
The Best Of The Worst Failed AI Experiments
January 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
We never think about technology failures (unless something explodes or people die) because we want to concentrate on our successes. In order to succeed, however, we must fail many times so we learn from mistakes. It’s also important to note and share our failures so others can benefit, and sometimes it’s just funny. C#Corner listed “The Top AI Experiments That Failed,” and some of them are real doozies.
The list notes some of the more famous AI disasters, like Microsoft’s Tay chatbot that became a cursing, racist misogynist and Uber’s accident with a self-driving car. Some projects are examples of obvious AI failures, such as Amazon using AI for job recruitment with training data heavily skewed toward males. As a result, women weren’t hired.
Other incidents were not surprising. A Knightscope K5 security robot didn’t detect a child and accidentally knocked the kid down. The child was fine, but the incident prompted more safety checks. The US stock market integrated AI-driven high-frequency trading algorithms to execute rapid trades. The algorithms contributed to the Flash Crash of 2010, sinking the Dow Jones Industrial Average 600 points in five minutes.
The scariest, coolest failure is Facebook’s language experiment:
“In an effort to develop an AI system capable of negotiating with humans, Facebook conducted an experiment where AI agents were trained to communicate and negotiate. However, the AI agents evolved their own language, deviating from human language rules, prompting concerns and leading to the termination of the experiment. The incident raised questions about the potential unpredictability of AI systems and the need for transparent and controllable AI behavior.”
Facebook’s language experiment is solid proof that AI will evolve. Hopefully when AI does evolve, the algorithms will follow Asimov’s Three Laws of Robotics.
Whitney Grace, January 3, 2024
Another AI Output Detector
January 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
It looks like AI detection may have a way to catch up with AI text capabilities. But for how long? Nature reports, “’ChatGPT Detector’ Catches AI Generated Papers with Unprecedented Accuracy.” The key to this particular tool’s success is its specificity—it was developed by chemist Heather Desaire and her team at the University of Kansas specifically to catch AI-written chemistry papers. Reporter McKenzie Prillaman tells us:
“Using machine learning, the detector examines 20 features of writing style, including variation in sentence lengths, and the frequency of certain words and punctuation marks, to determine whether an academic scientist or ChatGPT wrote a piece of text. The findings show that ‘you could use a small set of features to get a high level of accuracy’, Desaire says.”
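The approach described, a handful of hand-crafted stylometric features feeding a classifier, can be sketched with standard-library Python. The specific features and the nearest-centroid decision rule below are illustrative stand-ins of my own choosing, not Desaire’s actual 20-feature set or her team’s model.

```python
import re
import statistics

def style_features(text):
    """Compute a few stylometric features of the kind the detector uses:
    sentence-length variation and punctuation frequency."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "mean_sentence_len": statistics.mean(lengths),
        "sentence_len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        "commas_per_char": text.count(",") / len(text),
        "semicolons_per_char": text.count(";") / len(text),
    }

def classify(features, human_centroid, ai_centroid):
    """Nearest-centroid rule: label the text by whichever class centroid
    (averaged from labeled training samples) its features sit closer to."""
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5
    return "human" if dist(features, human_centroid) <= dist(features, ai_centroid) else "ai"
```

The point of the sketch is only that a small, interpretable feature set can separate two classes within a narrow domain, which is also why the real tool stumbles outside the journal articles it was built for.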
The model was trained on human-written papers from 10 chemistry journals then tested on 200 samples written by ChatGPT-3.5 and ChatGPT-4. Half the samples were based on the papers’ titles, half on the abstracts. Their tool identified the AI text 100% and 98% of the time, respectively. That clobbers the competition: ZeroGPT only caught about 35–65% and OpenAI’s own text-classifier snagged 10–55%. The write-up continues:
“The new ChatGPT catcher even performed well with introductions from journals it wasn’t trained on, and it caught AI text that was created from a variety of prompts, including one aimed to confuse AI detectors. However, the system is highly specialized for scientific journal articles. When presented with real articles from university newspapers, it failed to recognize them as being written by humans.”
The lesson here may be that AI detectors should be tailor made for each discipline. That could work—at least until the algorithms catch on. On the other hand, developers are working to make their systems more and more like humans.
Cynthia Murrell, January 1, 2024
Scale Fail: Define Scale for Tech Giants, Not Residents of Never Never Land
December 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “Scale Is a Trap.” The essay presents an interesting point of view, scale from the viewpoint of a resident of Never Never Land. The write up states:
But I’m pretty convinced the reason these sites [Vice, Buzzfeed, and other media outfits] have struggled to meet the moment is because the model under which they were built — eyeballs at all cost, built for social media and Google search results — is no longer functional. We can blame a lot of things for this, such as brand safety and having to work through perhaps the most aggressive commercial gatekeepers that the world has ever seen. But I think the truth is, after seeing how well it worked for the tech industry, we made a bet on scale — and then watched that bet fail over and over again.
The problem is that the focus is on media companies designed to surf on the free megaphones like Twitter and the money from Google’s pre-threat ad programs.
However, knowledge is tough to scale. The firms which can convert knowledge into what William James called “cash value” charge for professional services. Some content is free like wild and crazy white papers. But the “good stuff” is for paying clients.
Finding enough subscribers who will pay the necessary money to read articles is a difficult business to scale. I find it interesting that Substack is accepting some content sure to attract some interesting readers. How much will these folks pay? Maybe a lot.
But scale in information is not something many clever writers or traditional publishers and authors can achieve. What happens when a person writes a best seller? The publisher demands more books, and the result? Subsequent books which are not what the original was.
Whom does scale serve? Scale delivers power and payoff to the organizations which can develop products and services that sell to a large number of people who want a deal. Scale at a blue chip consulting firm means selling to the biggest firms and the organizations with the deepest pockets.
But the scale of a McKinsey-type firm is different from the scale at an outfit like Microsoft or Google.
What is the definition of scale for a big outfit? The way I would explain what the technology firms mean when scale is kicked around at an artificial intelligence conference is “big money, big infrastructure, big services, and big brains.” By definition, individuals and smaller firms cannot deliver.
Thus, the notion of appropriate scale means what the cited essay calls a “niche.” The problems and challenges include:
- Getting the cash to find, cultivate, and grow people who will pay enough to keep the knowledge enterprise afloat
- Finding other people to create the knowledge value
- Protecting the idea space from carpetbaggers
- Remaining relevant because knowledge has a shelf life, and it takes time to grow knowledge or acquire new knowledge.
To sum up, the essay is more about how journalists are going to have to adapt to a changing world. The problem is that scale is a characteristic of the old school publishing outfits which have been ill-suited to the stress of adapting to a rapidly changing world.
Writers are not blue chip consultants. Many just think they are.
Stephen E Arnold, December 29, 2023
AI Silly Putty: Squishes Easily, Impossible to Remove from Hair
December 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I like happy information. I navigated to “Meta’s Chief AI Scientist Says Terrorists and Rogue States Aren’t Going to Take Over the World with Open Source AI.” Happy information. Terrorists and the Axis of Evil outfits are just going to chug along. Open source AI is not going to give these folks a super weapon. I learned from the write up that the trustworthy outfit Zuckbook has a Big Wizard in artificial intelligence. That individual provided some cheerful words of wisdom for me. Here’s an example:
It won’t be easy for terrorists to takeover the world with open-source AI.
Obviously there’s a caveat:
they’d need a lot money and resources just to pull it off.
That’s my happy thought for the day.
“Wow, getting this free silly putty out of your hair is tough,” says the scout mistress. The little scout asks, “Is this similar to coping with open source artificial intelligence software?” Thanks, MSFT Copilot. After a number of weird results, you spit out one that is good enough.
Then I read “China’s Main Intel Agency Has Reportedly Developed An AI System To Track US Spies.” Oh, oh. Unhappy AI information. China, I assume, has the open source AI software. It probably has in its 1.4 billion population a handful of AI wizards comparable to the Zuckbook’s line up. Plus, despite economic headwinds, China has money.
The write up reports:
The CIA and China’s Ministry of State Security (MSS) are toe to toe in a tense battle to beat one another’s intelligence capabilities that are increasingly dependent on advanced technology… , the NYT reported, citing U.S. officials and a person with knowledge of a transaction with contracting firms that apparently helped build the AI system. But, the MSS has an edge with an AI-based system that can create files near-instantaneously on targets around the world complete with behavior analyses and detailed information allowing Beijing to identify connections and vulnerabilities of potential targets, internal meeting notes among MSS officials showed.
Not so happy.
Several observations:
- The smart software is a cat out of the bag
- There are intelligent people who are not pals of the US who can and will use available tools to create issues for a perceived adversary
- The AI technology is like silly putty: Easy to get, free or cheap, and tough to get out of someone’s hair.
What’s the deal with silly putty? Cheap, easy, and tough to remove from hair, carpet, and seat upholstery. Just like open source AI software in the hands of possibly questionable actors. How are those government guidelines working?
Stephen E Arnold, December 29, 2023
The American Way: Loose the Legal Eagles! AI, Gray Lady, AI.
December 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
With the demands of the holidays, I have been remiss in commenting upon the festering legal sores plaguing the “real” news outfits. Advertising is tough to sell. Readers want some stories, not every story. Subscribers churn. The dead tree version of “real” news turns yellow in the windows of the shrinking number of bodegas, delis, and coffee shops interested in losing floor space to “real” news displays.
A youthful senior manager enters Dante’s fifth circle of Hades, the Flaming Legal Eagles Nest. Beelzebub wishes the “real” news professional good luck. Thanks, MSFT Copilot, I encountered no warnings when I used the word “Dante.” Good enough.
Google may be coming out of the dog training school with some slightly improved behavior. The leash does not connect to a shock collar, but maybe the courts will curtail some of the firm’s more interesting behaviors. The Zuckbook and X.com are news shy. But the smart software outfits are ripping the heart out of “real” news. That hurts, and someone is going to pay.
Enter the legal eagles. The target is AI or smart software companies. The legal eagles say, “AI, gray lady, AI.”
How do I know? Navigate to “New York Times Sues OpenAI, Microsoft over Millions of Articles Used to Train ChatGPT.” The write up reports:
The New York Times has sued Microsoft and OpenAI, claiming the duo infringed the newspaper’s copyright by using its articles without permission to build ChatGPT and similar models. It is the first major American media outfit to drag the tech pair to court over the use of stories in training data.
The article points out:
However, to drive traffic to its site, the NYT also permits search engines to access and index its content. "Inherent in this value exchange is the idea that the search engines will direct users to The Times’s own websites and mobile applications, rather than exploit The Times’s content to keep users within their own search ecosystem." The Times added it has never permitted anyone – including Microsoft and OpenAI – to use its content for generative AI purposes. And therein lies the rub. According to the paper, it contacted Microsoft and OpenAI in April 2023 to deal with the issue amicably. It stated bluntly: "These efforts have not produced a resolution."
I think this means that the NYT used online search services to generate visibility, access, and revenue. However, it did not expect, understand, or consider that when a system indexes content, that content is used for other search services. Am I right? A doorway works two ways. The NYT wants it to work one way only. I may be off base, but the NYT is aggrieved because it did not understand the direction of AI research which has been chugging along for 50 years.
What do smart systems require? Information. Where do companies get content? From online sources accessible via a crawler. How long has this practice been chugging along? The early 1990s, even earlier if one considers text and command line only systems. Plus the NYT tried its own online service and failed. Then it hooked up with LexisNexis, only to pull out of the deal because the “real” news was worth more than LexisNexis would pay. Then the NYT spun up its own indexing service. Next the NYT dabbled in another online service. Plus the outfit acquired About.com. (Where did those writers get that content? I know the answer, but does the Gray Lady remember?)
Now with the success of another generation of software which the Gray Lady overlooked, did not understand, or blew off because it was dealing with high school management methods in its newsroom — now the Gray Lady has let loose the legal eagles.
What do I make of the NYT and online? Here are the conclusions I reached working on the Business Dateline database and then as an advisor to one of the NYT’s efforts to distribute the “real” news to hotels and steam ships via facsimile:
- Newspapers are not very good at software. Hey, those Linotype machines were killers, but the XyWrite software and subsequent online efforts have demonstrated remarkable ways to spend money and progress slowly.
- The smart software crowd is not in touch with the thought processes of those in senior management positions in publishing. When the groups try to find common ground, arguments over who pays for lunch are more common than a deal.
- Legal disputes are expensive. Many of those engaged reach some type of deal before letting a judge or a jury decide which side is the winner. Perhaps the NYT is confident that a jury of its peers will find the evil AI outfits guilty of a range of heinous crimes. But maybe not? Is the NYT a risk taker? Who knows. But the NYT will pay some hefty legal bills as it rushes to do battle.
Net net: I find the NYT’s efforts following a basic game plan. Ask for money. Learn that the money offered is less than the value the NYT slaps on its “real” news. The smart software outfit does what it has been doing. The NYT takes legal action. The lawyers engage. As the fees stack up, the idea that a deal is needed makes sense.
The NYT will do a deal, declare victory, and go back to creating “real” news. Sigh. Why? Microsoft has more money and can tie up the matter in court until Hell freezes over in my opinion. If the Gray Lady prevails, chalk up a win. But the losers can just up their cash offer, and the Gray Lady will smile a happy smile.
Stephen E Arnold, December 29, 2023