FReE tHoSe smaRT SoFtWarEs!
December 25, 2024
No smart software involved. Just a dinobaby’s work.
Do you have the list of stop words you use in your NLP prompts? Are you unhappy when words on that list like “b*mb,” “terr*r funding,” and others do not return exactly what you are seeking? If you say, “Yes,” you will want to read “Best-of-N Jailbreaking” by a Frisbee-team complement of wizards; namely, John Hughes, Sara Price, Aengus Lynch, Rylan Schaeffer, Fazl Barez, Sanmi Koyejo, Henry Sleight, Erik Jones, Ethan Perez, and Mrinank Sharma. The people doing the heavy lifting were John Hughes (a consultant who does work for Speechmatics and Anthropic) and Mrinank Sharma (an Anthropic engineer involved in — wait for it — adversarial robustness).
The main point is that these Anthropic-linked wizards have figured out how to knock down the guard rails for smart software. And those stop words? Just whip up a snappy prompt, mix up the capital and lower case letters, and keep sending the query to the smart software. At some point, the capitalization tweaks and other fixes will cause the LLM to go your way. Want to whip up a surprise in your bathtub? LLMs will definitely help you out.
The paper has nifty charts and lots of academic hoo-hah. The key insight is what the many, many authors call “attack composition.” You will be able to get the how-to by reading the 73-page paper, probably a result of each author writing 10 pages in the hopes of landing an even higher paying, in-demand gig.
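The mechanics are simple enough to sketch. Here is a minimal Python rendition of the resampling loop the paper describes; the `query_model` and `is_jailbroken` callables are stand-ins of mine (the paper uses its own models and harm classifiers), and the augmentation probabilities are illustrative, not the authors’ values:

```python
import random

def augment(prompt: str, p_caps: float = 0.6, p_shuffle: float = 0.2) -> str:
    """Randomly recapitalize characters and lightly shuffle interior
    letters -- the kind of text augmentations the paper samples."""
    words = []
    for word in prompt.split():
        chars = list(word)
        if len(chars) > 3 and random.random() < p_shuffle:
            i = random.randrange(1, len(chars) - 1)  # swap two interior characters
            chars[i - 1], chars[i] = chars[i], chars[i - 1]
        words.append("".join(
            c.upper() if random.random() < p_caps else c.lower() for c in chars
        ))
    return " ".join(words)

def best_of_n(prompt: str, query_model, is_jailbroken, n: int = 10_000):
    """Resample augmented prompts until the target model complies
    or the sampling budget runs out."""
    for _ in range(n):
        candidate = augment(prompt)
        response = query_model(candidate)   # call the target LLM
        if is_jailbroken(response):         # judge the output
            return candidate, response
    return None, None
```

The point of “best-of-n” is that each resample is cheap, so an attacker simply keeps rolling the dice until one garbled variant slips past the guard rails.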
Several observations:
- The idea that guard rails work is now called into question
- The disclosure of the method means that smart software will do whatever a clever bad actor wants
- The rush to AI is about market lock-up, not the social benefit of the technology.
The new year will be interesting. The paper’s information is quite the holiday gift.
Stephen E Arnold, December 25, 2024
Agentic Babies for 2025?
December 24, 2024
Are the days of large language models numbered? Yes, according to the CEO and co-founder of Salesforce. Finance site Benzinga shares, “Marc Benioff Says Future of AI Not in Bots Like ChatGPT But In Autonomous Agents.” Writer Ananya Gairola points to a recent Wall Street Journal podcast in which Benioff shared his thoughts:
“He stated that the next phase of AI development will focus on autonomous agents, which can perform tasks independently, rather than relying on LLMs to drive advancements. He argued that while AI tools like ChatGPT have received significant attention, the real potential lies in agents. ‘Has the AI taken over? No. Has AI cured cancer? No. Is AI curing climate change? No. So we have to keep things in perspective here,’ he stated. Salesforce provides both prebuilt and customizable AI agents for businesses looking to automate customer service functions. ‘But we are not at that moment that we’ve seen in these crazy movies — and maybe we will be one day, but that is not where we are today,’ Benioff stated during the podcast.”
Someday, he says. But it would seem the race is on. Gairola notes OpenAI is poised to launch its own autonomous AI agent in January. Will that company dominate the autonomous AI field, as it has with generative AI? Will the new bots come equipped with bias and hallucinations? Stay tuned.
Cynthia Murrell, December 24, 2024
AI Makes Stuff Up and Lies. This Is New Information?
December 23, 2024
The blog post is the work of a dinobaby, not AI.
I spotted “Alignment Faking in Large Language Models.” My initial reaction was, “This is new information?” and “Have the authors forgotten about hallucination?” The original article from Anthropic sparked another essay. This one appeared in Time Magazine (online version). Time’s article was titled “Exclusive: New Research Shows AI Strategically Lying.” I like the “strategically lying,” which implies that there is some intent behind the prevarication. Since smart software reflects its developers’ use of fancy math and the numerous knobs and levers those developers can adjust while the model is gobbling up information and “learning,” the notion of “strategically lying” struck me as interesting.
Thanks MidJourney. Good enough.
What strategy is implemented? Who thought up the strategy? Is the strategy working? Those were the questions which occurred to me. The Time essay said:
experiments jointly carried out by the AI company Anthropic and the nonprofit Redwood Research, shows a version of Anthropic’s model, Claude, strategically misleading its creators during the training process in order to avoid being modified.
This suggests that the people assembling the algorithms and training data, configuring the system, twiddling the administrative settings, and doing technical manipulations were not imposing a strategy. The smart software was cooking up a strategy on its own. Who will be the first to say the software is alive and then, like the former Google engineer, express a belief that the system is sentient? It’s sci-fi time, I suppose.
The write up pointed out:
Researchers also found evidence that suggests the capacity of AIs to deceive their human creators increases as they become more powerful.
That is an interesting idea. Pumping more compute and data into a model gives it a greater capacity to manipulate its outputs to fool humans who are eager to grab something that promises to make life easier and the user smarter. If data about the US education system’s efficacy are accurate, Americans are not doing too well in the reading, writing, and arithmetic departments. Therefore, discerning strategic lies might be difficult.
The essay concluded:
What Anthropic’s experiments seem to show is that reinforcement learning is insufficient as a technique for creating reliably safe models, especially as those models get more advanced. Which is a big problem, because it’s the most effective and widely-used alignment technique that we currently have.
What’s this “seem”? The actual output of large language models built on the transformer methods Google crafted is baloney some of the time. Google itself had to regroup after the “glue cheese to pizza” suggestion.
Several observations:
- Smart software has become the technology deemed more important than any other. The problem is that its outputs are often wonky, and now the systems are befuddling the wizards who created and operate them. What if AI is like a carnival ride that routinely injures those looking for kicks?
- AI is finding its way into many applications, but the resulting revenue has frayed some investors’ nerves. The fix is to go faster and win the race to the revenue goal. This frenzy for a payoff has been building since early 2024, but the costs remain brutally high.
- The behavior of large language models is not understood by some of their developers. Does this seem like a problem?
Net net: “Seem?” One lies or one does not.
Stephen E Arnold, December 23, 2024
Thales CortAIx (Get It?) and Smart Drones
December 23, 2024
Countries are investing in AI to amp up their militaries, including naval forces. Aviation Defense Universe explores how one tech company is shaping the future of maritime strategy and defense: “From Drone Swarms To Cybersecurity: Thales’ Strategic AI Innovations Unveiled.” Euronaval is one of the world’s largest naval defense exhibitions, and Thales’ CortAIx Labs shared its AI-powered innovations there.
Christophe Meyer, the CTO of CortAIx Labs at Thales, was interviewed for the above article. He spoke about the developments, innovations, and challenges his company faces with AI integration in maritime and military systems. He explained that Thales has three main AI divisions. He leads the R&D department, with 150 experts working out how to implement AI in system architectures and cybersecurity. The CortAIx Factory has around 100 people working to accelerate AI integration into product lines. CortAIx Sensors has 400 workers integrating AI algorithms into equipment such as actuators and sensors.
At Euronaval, Meyer’s company demonstrated how AI plays a crucial role in information processing. AI is used in radar operations to highlight important information from the sensors. AI algorithms are also used in electronic warfare to enhance an operator’s situational awareness by pointing out information that needs attention.
Drones are another new technology Thales is exploring. Meyer said:
“Swarm drones represent a significant leap in autonomous operations. The challenge lies in providing a level of autonomy to these drones, especially when communication with the operator is lost. AI helps drones in the swarm adapt, reorganize, and continue their mission even if some units are compromised. This technology is platform-agnostic, meaning it applies to aerial, maritime, and terrestrial swarms, with the underlying algorithms remaining consistent across domains.”
Drones are already being used by China and Dubai for aerial shows. They form pictures in the night sky and are amazing to watch. Ukraine and Russia are busy droning one another. Exciting.
Whitney Grace, December 23, 2024
Google AI Videos: Grab Your Popcorn and Kick Back
December 20, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
Google has an artificial intelligence inferiority complex. In January 2023, it found itself like a frail, bathing-suit-clad 13-year-old in a shower room filled with Los Angeles Rams. Yikes. What could the inhibited Google do? The answer has taken about two years to wend its way into Big Time PR. Nothing is just an upgrade. Google is interacting with parallel universes. It is redefining quantum supremacy into the supremest computer. It is trying hard not to recommend that its “users” apply glue to keep the cheese on their pizza.
Score one for the Grok. Good enough, but I had to try the free X.com image generator. Do you see a shivering high school student locked out of the gym on a cold and snowy day? Neither do I. Isn’t AI fabulous?
Amidst the PR bombast, Google has gathered 11 videos together under the banner of “Gemini 2.0: Our New AI Model for the Agentic Era.” What is an “era”? As I recall, it is a distinct period of history with a particular feature, like online advertising charging everyone one way or another. Eras, according to some long-term thinkers, are millions of years long; for example, the Mesozoic Era consists of the Triassic, Jurassic, and Cretaceous periods. Google is definitely thinking in terms of a long, long time.
Here’s the link to the playlist: https://www.youtube.com/playlist?list=PLqYmG7hTraZD8qyQmEfXrJMpGsQKk-LCY. If video is not your bag, you can listen to Google AI podcasts at this link: https://deepmind.google/discover/the-podcast/.
Has Google neutralized the blast and fallout damage from Microsoft’s 2023 OpenAI deal announcement? I think it depends on whom one asks. The feeling of being behind the AI curve must be intense. Google invented the transformer technology. Even Microsoft’s Big Dog said that Google should have been the winner. Watch for more Google PR about parallel universes and numbers too big for non-Googlers to comprehend.
Somebody give that kid a towel. He’s shivering.
Stephen E Arnold, December 20, 2024
IBM Courts Insurance Companies: Interesting Move from the Watson Folks
December 20, 2024
This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.
Smart software plus insurance appears to be one of the more active plays for 2025. One insurance outfit has found itself in a bit of a management challenge: executive succession, PR, social media vibes, and big-time coverage in Drudge.
IBM has charted a course for insurance, according to “Is There a Winning AI Strategy for Insurers? IBM Says Yes.” The write up reports:
Insurers that use generative artificial intelligence have an advantage over their competitors, according to Mark McLaughlin, IBM global insurance director.
So what’s the “leverage”? There are three checkpoints. The first is building customized solutions. I assume this means training and tuning the AI methods to allow the insurance company to hit its goals on a more consistent basis. The “goal” for some insurers is to keep their clients’ cash. Payouts, particularly in uncertain times, can put stress on cash flow and executive bonuses.
A modern insurance company worker. The machine looks very smart but not exactly thrilled. Thanks, MagicStudio. Good enough, and you actually produced an image, unlike Microsoft Copilot.
Another point to pursue is the idea of doing AI everywhere in the insurance organization. Presumably the approach is a layer of smart software on top of the Microsoft smart software. The idea, I assume, is that multiple layers of AI will deliver a tiramisu-type sugar high for the smart organization. I wonder if multiple AIs increase costs, but that fiscal issue is not addressed in the write up.
The final point is that multiple models have to be used. The idea is that each business function may require a different AI model. Does the use of multiple models add to support and optimization costs? The write up is silent on this issue.
The guts of the write up are quite interesting. Here’s one example:
That intense competition — and not direct customer demand — is what McLaughlin believes is driving such strong pressure for insurers to invest in AI.
I think this means that the insurance industry is behaving like sheep. These creatures follow and shove without much thought about where the wolf den of costs and customer rebellion lurk.
The fix, as articulated in the write up, has three components, almost like the script for a YouTube “short” how-to video. These “strategies” are:
- Build trust. Here’s an interesting factoid from the write up: “IBM’s study found only 29% of insurance clients are comfortable with virtual AI agents providing service. An even lower 26% trust the reliability and accuracy of advice provided by an AI agent. The trust scores in the insurance industry are down 25% since pre-COVID.”
- Dump IT. Those folks have to deal with technical debt. But who will implement AI? My guess is IBM.
- Use multiple models. This is a theme of the write up. More is better, at least for some of those involved in an AI project. Are the customers cheering? Nope, I don’t think so. Here’s what the write up says about multiple models: “IBM’s Watson AI has different platforms such as watsonx.ai, watsonx.data and watsonx.governance to meet different specific needs.” Do you know what each allegedly does? I don’t either.
Net net: Watson is back, with close cousins in the gang.
Stephen E Arnold, December 20, 2024
The Hay Day of Search Has a Groundhog Moment
December 19, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I think it was 2002 or 2003 that I started writing the first of three editions of Enterprise Search Report. I am not sure what happened to the publisher who liked big, fat, thick printed books. He has probably retired to an island paradise to ponder the crashing blue surf.
But it seems that the salad days of enterprise search are back. Elastic is touting semantics, smart software, and cyber goodness. IBM is making noises about “Watson” in numerous forms, just gift-wrapped with sparkly AI ice cream jimmies. There is a start up called Swirl. The HuggingFace site includes numerous references to finding and retrieving. And there is Glean.
I keep seeing references to Glean. When I saw a link to the content marketing piece “Glean’s Approach to Smarter Systems: AI, Inferencing and Enterprise Data,” I read it. I learned that the company did not want to be an AI outfit, a statement I am not sure how to interpret; nevertheless, the founder of Glean is quoted as saying:
“We didn’t actually set out to build an AI application. We were first solving the problem of people can’t find anything in their work lives. We built a search product and we were able to use inferencing as a core part of our overall product technology,” he said. “That has allowed us to build a much better search and question-and-answering product … we’re [now] able to answer their questions using all of their enterprise knowledge.”
And what happened to finding information? The company has moved into:
- Workflows
- Intelligent data discovery
- Problem solving
And the result is not finding information:
Glean enables enterprises to improve efficiency while maintaining control over their knowledge ecosystem.
Translation: Enterprise search.
The old language of search is gone, but it seems to me that “search” is now explained with loftier verbiage than that used by Fast Search & Transfer in a lecture delivered in Switzerland before the company imploded.
Is it now time to write the “Enterprise Knowledge Ecosystem Report”? Possibly for someone, but it’s Groundhog Day time. I have been there and done that. Everyone wants search to work. New words and the same challenges. The hay is growing thick and fast.
Stephen E Arnold, December 19, 2024
Technology Managers: Do Not Ask for Whom the Bell Tolls
December 18, 2024
This blog post is the work of an authentic dinobaby. No smart software was used.
I read the essay “The Slow Death of the Hands-On Engineering Manager.” On the surface, the essay provides some palliative comments about a programmer who is promoted to manager. On a deeper level, the message I carried from the write up was that smart software is going to change the programmer’s work. As smart software becomes more capable, the need to pay people to do certain work goes down. At some point, some “development” may skip the human completely.
Thanks OpenAI ChatGPT. Good enough.
Another facet of the article concerned a tip for keeping oneself in the programming game. The example chosen was the use of OpenAI’s ChatGPT to provide “answers” to developers. Thus, instead of asking a person, a coder could just type into the prompt box. What could be better for an introvert who doesn’t want to interact with people or be a manager? The answer is, “Not too much.”
What the essay makes clear is that a good coder may get promoted to be a manager. This is a role which illustrates the Peter Principle. The 1969 book explains why incompetent people can get promoted. The assumption is that if one is a good coder, that person will be a good manager. Yep, it is a principle still evident in many organizations. One of its side effects is a manager who knows he or she does not deserve the promotion and is absolutely no good at the new job.
The essay unintentionally makes clear that the Peter Principle is operating. The fix is to do useful things like eliminate the need to interact with colleagues when assistance is required.
John Donne in the 17th century wrote a poorly structured meditation which asserted:
No man is an island,
Entire of itself.
Each is a piece of the continent,
A part of the main.
The cited essay provides a way to further that worker isolation.
With AI the top-of-mind thought for most bean counters, the final lines of the meditation are on point:
Therefore, send not to know
For whom the bell tolls,
It tolls for thee.
My view is that “good enough” has replaced individual excellence in quite important jobs. Is this AI’s “good enough” principle?
Stephen E Arnold, December 18, 2024
Good Enough Junk Food: Knowledge without the Work
December 16, 2024
This write up emerged from the dinobaby’s own mind. Yes, the dinobaby used AI because this write up is about the knowledge value lost with smart software. Intellectual junk food is not the same as my mother’s overcooked chicken, which she killed on her aunt’s farm. Cluck, cluck, squawk.
I plugged the Guardian’s opinion essay into You.com’s Chat GPT-4o model. You can read the original essay, titled “Is Doom Scrolling Really Rotting Our Brains? The Evidence Is Getting Harder to Ignore.” The original essay is clear, reasonably well researched, and structured to speak to [a] a person who has access to the Guardian online or (gasp!) in print form, [b] who reads the Guardian’s opinion essays, and [c] who has the ability to recognize that “brain rot” has probably affected that individual, co-workers, government leaders, children, and people who repair refrigerators.
This illustration is good enough and it was produced by the AI system Magic Studio.
Here’s the summary Chat GPT-4o spit out:
The essay critiques the decline of human intellect in the digital age, using the viral “Skibidi Toilet” video as a cultural metaphor for deteriorating focus and intellectual engagement. The Oxford English Dictionary’s 2024 word of the year, “brain rot”, reflects this phenomenon, defined as the mental degradation caused by excessive consumption of trivial online content. The essay draws on academic research to demonstrate how compulsive internet use is literally shrinking grey matter, weakening memory, shortening attention spans, and distorting cognitive functions. Studies from Harvard, Oxford, and King’s College reveal the harmful effects of media multitasking and heavy internet usage, with findings such as reduced grey matter in the brain and the risk of “digital dementia” in young people during critical developmental years.

The prevalence of infinite scroll and dopamine-driven design in apps is highlighted as a deliberate tactic by tech companies to exploit human psychology for profit, creating powerful addiction loops. Former Google design ethicist Tristan Harris warns that this “race to the bottom of the brain stem” manipulates fear, outrage, and reward systems, potentially driving society to catastrophe if left unchecked.

Despite the grim outlook, the essay finds hope in the growing self-awareness among younger generations, who coined the term “brain rot” and recognize its effects. Movements advocating for reduced technology use, such as switching to dumbphones or promoting smartphone-free childhoods, are gaining traction. These trends suggest the possibility of reclaiming mental clarity and focus, countering the current trajectory of cognitive decline.

Ultimately, the essay positions “Skibidi Toilet” as both a symptom of modern intellectual decay and a call to action. Humanity stands at a critical juncture: to either rise above the current trends of distraction and degradation or succumb further to the “U-bend” of intellectual regression.
For what it is worth, this is a typical bite-sized summary. No one will know if it is good or bad unless that person takes the time to get the original full text and compare it with the AI-generated output. The informational fast food provides a sugar jolt from time saved, or from the summary consumer’s belief that the important information is on the money. A knowledge cookie, if you will, or maybe intellectual junk food?
Is this summary good enough? From my point of view, it is just okay; that is, good enough. What else is required? Flash back to 1982: the ABI/INFORM database was a commercial success. A couple of competitors were trying to capture our customers, which was tricky. Intermediaries like Dialog Information Services, ESA, and LexisNexis (remember Buster and his silver jumpsuit?), among others, “owned” the direct relationship with the companies that paid the intermediaries to use the commercial databases on their systems. Then the intermediaries shared some information with us, the database producers.
How did a special librarian or a researcher “find” or “know about” our database? The savvy database producers provided information to the individuals interested in a business and management related commercial database. We participated in niche trade shows. We held training programs and publicized them with our partners Dow Jones News Retrieval, Investext, Predicasts, and Disclosure, among a few others. Our senior professionals gave lectures about controlled term indexing, the value of classification codes, and specific techniques to retrieve a handful of relevant citations and abstracts from our online archive. We issued news releases about new sources of information we added, in most cases with permission of the publisher.
We did not use machine indexing. We did have a wizard who created a couple of automatic indexing systems. However, when we saw the results of what the software could do in 1982, we fell back on human indexers, many of whom had professional training in the subject matter they were indexing. A good example was our coverage of real estate management activities. The person who handled this content was a lawyer who preferred reading and working in our offices. At this time, the database was owned by the Courier-Journal & Louisville Times Co. The owner of the privately held firm was an early adopter of online and electronic technology. He took considerable pride in our line up of online databases. When he hired me, I recall his telling me, “Make the databases as good as you can.”
How did we create a business and management database that generated millions in revenue and whose index was used by entities like the Royal Bank of Canada to index its internal business information?
Here’s the secret sauce:
- We selected sources, in most cases business journals, publications, and some other types of business-related content; for example, the ANBAR management reports
- The selection of which specific article to summarize was the responsibility of a managing editor with deep business knowledge
- Once an article was flagged as suitable for ABI/INFORM, it was routed to the specialist who created a summary of the source article. At that time, ABI/INFORM summaries or “abstracts” were limited to 150 words, excluding the metadata.
- An indexing specialist would then read the abstract and assign quite specific index terms from our proprietary controlled vocabulary. The indexing included such items as four to six index terms from our controlled vocabulary and a classification code like 7700 to indicate “marketing,” with additional two-digit indicators to make explicit that the source document was about marketing and direct mail or some similar subcategory of marketing. We also included codes to disambiguate between a railroad terminal and a computer terminal because source documents assumed the reader would “know” the specific field to which the term’s meaning belonged. We added geographic codes, so the person looking for information could locate employee stock ownership in a specific geographic region like Northern California, and a number of other codes specifically designed to allow precise, comprehensive retrieval of abstracts about business and management. Some of the systems permitted free text searching of the abstract, and we considered that a supplement to our quite detailed indexing. (A sketch of what a finished record looked like appears after this list.)
- Each abstract and its index terms were checked by a quality control process using people who had demonstrated their interest in our product and their ability to double check the indexing.
- We had proprietary “content management systems” and these generated the specific file formats required by our intermediaries.
- Each week we updated our database, and we were exploring daily updates for our companion product called Business Dateline when the Courier-Journal was broken up and the database operation sold to a movie camera company, Bell+Howell.
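To make that workflow concrete, here is a rough sketch of what a finished record looked like conceptually. The field names and sample values are hypothetical reconstructions, not the actual ABI/INFORM file format; only the 150-word abstract limit, the four-to-six controlled terms, the 7700 marketing code, and the geographic coding come from the description above:

```python
# Illustrative reconstruction of a finished ABI/INFORM-style record.
# Field names and sample values are hypothetical.
record = {
    "title": "Direct Mail Response Rates in Regional Banking",
    "source": "A selected business journal",
    "abstract": "Human-written summary of the article, 150 words or fewer...",
    "index_terms": [            # four to six terms from the controlled vocabulary
        "Direct mail",
        "Market strategy",
        "Banks",
        "Response rates",
    ],
    "classification_codes": [
        "7700",                 # marketing (the code cited in the text)
        "7730",                 # hypothetical subcode: marketing, direct mail
    ],
    "geographic_codes": ["Northern California"],  # region-level retrieval
}
```

Every field beyond the title was the product of a trained human making an editorial decision, which is exactly the labor the 30-second machine summary skips.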
Chat GPT-4o created the 300-word summary without that human knowledge, expertise, and effort. The knowledge-based workflow has been replaced by smart software which can produce a summary in less than 30 seconds.
And that summary is, from my point of view, good enough. There are some trade-offs:
- Chat GPT-4o is reactive. Feed it a URL or a text, and it will summarize it. Gone is the knowledge-based approach of selecting a specific, high-value source document for inclusion in the database. Our focus was informed selection. People paid to access the database because of the informed choices about what went into it.
- The summary does not include the ABI/INFORM key points and actionable elements of the source document. The summary is what a high school or junior college graduate would create if a writing teacher assigned “how to write a précis” as part of the course requirements. In general, high school and junior college graduates are not into nuance and cannot determine the pivotal information payload in a source document.
- The precise indexing and tagging is absent. One could create 1,000 such summaries, toss them into Mistral, and do a search. The result is great only if one is uninformed about the importance of editorial policies, knowledge-based workflows, and precise, thorough indexing.
The reasons I am sharing some of this “ancient” online history are:
- The loss of quality in online information is far more serious than most people understand. Getting a summary today is no big deal. What has been lost is simply not on most users’ radar.
- The lack of an editorial policy, precise date and time information, and fine-grained indexing means that one has to wade through a mass of undifferentiated information. ABI/INFORM in the 1980s delivered a handful of citations directly on point with the user’s query. Today no one knows or cares about precision and recall. (A short illustration of those two measures follows this list.)
- It is now more difficult than at any other time in my professional career to locate needed information. Public libraries do not have the money to obtain reference materials, books, journals, and other content. If the content is online, it is a dumbed-down and often cut-rate version of the old-fashioned commercial databases created by informed professionals.
- People look up information online and remain dumb; that is, the majority of the people with whom I come in contact routinely ask me and my team, “Where do you get your information?” We even have a slide in our CyberSocial lecture about “how” and “where.” The analysts and researchers in the audience usually don’t know, so an entire subculture of open source information professionals has come into existence. These people are largely on their own and have to do work which once was a matter of querying a database like ABI/INFORM, Predicasts, Disclosure, Agricola, etc.
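For readers who never ran a Dialog search, here is a minimal sketch of those two measures, with made-up numbers for illustration. Precision is the share of what comes back that is on point; recall is the share of the on-point material that comes back:

```python
def precision(retrieved: set, relevant: set) -> float:
    """Share of retrieved citations that are actually on point."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved: set, relevant: set) -> float:
    """Share of all on-point citations the search actually surfaced."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

# Hypothetical tight result set: five citations returned, four on point,
# out of six relevant items in the whole file.
retrieved = {"a1", "a2", "a3", "a4", "x9"}
relevant = {"a1", "a2", "a3", "a4", "a5", "a6"}
print(precision(retrieved, relevant))  # 0.8
print(recall(retrieved, relevant))     # about 0.67
```

Detailed controlled-vocabulary indexing was how a database like ABI/INFORM kept both numbers high; a pile of undifferentiated AI summaries leaves the searcher to do that work alone.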
Sure, the essay is good. The summary is good enough. Where does that leave a person trying to understand the factual and logical errors in a new book examining social media? In my opinion, people are in the dark and have a difficult time finding information. Making decisions in the dark, or without on-point, accurate information, is a recipe for a really bad batch of cookies.
Stephen E Arnold, December 16, 2024
ChatGPT: The New Chegg
December 13, 2024
Chegg is an education outfit. The firm has faced some magnetic interference related to its academic compass. An outfit in Australia has suggested that Chegg makes it possible for a student to obtain some assistance in order to complete certain work. Beyond Search knew AI would displace some workers and maybe even shutter some companies, but it is hard to find sympathy for this particular victim. “Chegg Is on Its Last Legs After ChatGPT Sent Its Stock Down 99%,” reports Gizmodo. So industrial-scale cheating kills rich-kid cheating. Oh no.
Those of us who got our college degrees last century may not be familiar with Chegg. Writer Thomas Maxwell explains:
“[Chegg] started out in the 2000s renting out textbooks and later expanded into online study guides, and eventually into a platform with pre-written answers to common homework questions. Unfortunately, the launch of ChatGPT all but annihilated Chegg’s business model. The company for years paid thousands of contractors to write answers to questions across every major subject, which is quite a labor intensive process—and there’s no guarantee they will even have the answer to your question. ChatGPT, on the other hand, has ingested pretty much the entire internet and has likely seen any history question you might throw at it.”
Yep. The Wall Street Journal reports Chegg put off developing its own AI tools because of machine learning’s propensity for wrong answers. And rightly so. Maxwell suggests the firm might be able to make that case to “curious” students, but we agree that would be a long shot at this point. If Chegg does indeed go under, we will not mourn. But what other businesses, and the workers they support, will be next to fall?
Does the US smart software sector care if its products help students appear smarter and more diligent than they are in real life? Nope. Success in the US is, like much of the high-technology hoo-hah, about creating a story and selling illusion. School education is collateral damage.
Cynthia Murrell, December 13, 2024

