Satire or Marketing: Let Smart Software Decide
July 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
What’s PhD level intelligence? In 1962, I had a required class in one of the -ologies. I vaguely remember that my classmates and I had to learn about pigeons, rats, and people who would make decisions that struck me as off the wall. The professor was named after a Scottish family from the Highlands. I do recall looking up the name and finding that it meant “crooked nose.” But the nose, as nice as it was, was nothing to the bed springs the good professor suspended from a second story window. I asked him, “What’s the purpose of the bed springs?” (None of the other students in the class cared, but I found the sight interesting.) His reply was, “I am using it as an antenna.” Okay, that is one example of PhD-level intelligence. I have encountered others, but I will not regale you with their somewhat idiosyncratic behaviors.
The illustration demonstrates the common sense approach to problem solving. Thanks, MSFT Copilot. Chugging away on Recall and security over the holiday. Yep, I bet you are.
Why am I thinking about bedsprings suspended from a second story window? I just read “ChatGPT Now Has PhD Level Intelligence, and the Poor Personal Choices to Prove It.” The write up states:
ChatGPT has become indispensable to plagiarists and spambots worldwide. Now, OpenAI is thrilled to introduce ChatGPT 5.0, the most advanced version of the popular virtual assistant to date. With groundbreaking improvements, GPT-5 is like having a doctor of philosophy right at your fingertips.
The write up (in a humorous vein, I really hope) identifies some characteristics of the next big thing from Sam AI-Man, the Apple partner (he hopes). The article suggests that ChatGPT 5 can cherish a saloon mug with this snappy phrase printed on the container:
Suck it up, liver.
No, I don’t want the mug, nor have I abused my liver as an undergraduate. I actually enjoyed spending time in the so-called university’s library.
The write up includes a short commentary about the ethical concerns or guardrails Sam AI-Man has on the Information Highway populated with smart vehicles and presumably less smart humans who are confident the zeros and ones don’t get scrambled:
Numerous questions have arisen regarding the ethics and legality of training ChatGPT on copyrighted text data without permission. In this latest version, however, reliance on authors’ intellectual property has been dramatically reduced. While GPT-5 started training from a knowledge base of millions of texts, it got around to reading only Frankenstein, plus maybe half of a Donna Haraway book. It basically bluffed its way through prelims by talking about “embodiment” a lot.
I get the Mary Shelley reference. But Donna Haraway forced me to ask Claude Sonnet 3.5, “Please, explain Donna Haraway’s contribution to ethical discourse.” Here’s what I received from the smart system competing with Sam AI-Man’s presumably smarter system:
Here’s Donna Haraway’s contribution to ethical discourse in three sentences:
- Donna Haraway challenged traditional ideas about what it means to be human by introducing the concept of a “cyborg” – a blend of human and machine.
- She argued that the lines between nature and culture, human and animal, and physical and non-physical are not as clear as we think, which forces us to rethink our ethical responsibilities.
- Her work encourages us to consider how technology and science affect our understanding of ourselves and our relationships with other living things, pushing us to develop new ethical frameworks for our changing world.
Thank you, Claude Sonnet 3.5! I have stated that my IQ score pegs me in the “Dumb Cod” percentile. I think Ms. Haraway is into the Ray Kurzweil and Elon Musk concept space. I know I am looking forward to nanodevices able to keep me alive for many, many years. I want to poke fun at smart software, and I quite like to think about PhD level software.
To close, I want to quote the alleged statement of a very smart person who could not remember if OpenAI used YouTube-type content to train ChatGPT. (Hey, even crooked nose remembered that he suspended the bed springs to function like an antenna.) The CTO of OpenAI allegedly said:
“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence… and then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at PhD intelligence…” — OpenAI CTO Mira Murati, in an interview with Dartmouth Engineering
I wonder if a person without a PhD can recognize “PhD intelligence”? Sure. Why not? It’s marketing.
Stephen E Arnold, July 3, 2024
Another Open Source AI Voice Speaks: Yo, Meta!
July 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The open source versus closed source software debate ebbs and flows. Like the “go fast” with AI and “go slow” with AI camps, strong opinions suggest that big money and power are swirling like the storms on a weather app for Oklahoma in tornado season. The most recent EF5 is captured in “Zuckerberg Disses Closed-Source AI Competitors As Trying to Create God.” The US government seems to be concerned about open source smart software finding its way into the hands of those who are not fans of George Washington-type thinking.
Which AI philosophy will win the big pile of money? Team Blue representing the Zuck? Or, the rag tag proprietary wizards? Thanks, MSFT Copilot. You are into proprietary, aren’t you?
The “move fast and break things” personage of Mark Zuckerberg is into open source smart software. In the write up, he allegedly said in a YouTube bit:
“I don’t think that AI technology is a thing that should be kind of hoarded and … that one company gets to use it to build whatever central, single product that they’re building,” Zuckerberg said in a new YouTube interview with Kane Sutter (@Kallaway).
The write up includes this passage:
In the conversation, Zuckerberg said there needs to be a lot of different AIs that get created to reflect people’s different interests.
One interesting item in the article, in my opinion, is this:
“You want to unlock and … unleash as many people as possible trying out different things,” he continued. “I mean, that’s what culture is, right? It’s not like one group of people getting to dictate everything for people.”
But the killer Meta vision is captured in this passage:
Zuckerberg said there will be three different products ahead of convergence: display-less smart glasses, a heads-up type of display and full holographic displays. Eventually, he said that instead of neural interfaces connected to their brain, people might one day wear a wristband that picks up signals from the brain communicating with their hand. This would allow them to communicate with the neural interface by barely moving their hand. Over time, it could allow people to type, too. Zuckerberg cautioned that these types of inputs and AI experiences may not immediately replace smartphones, though. “I don’t think, in the history of technology, the new platform — it usually doesn’t completely make it that people stop using the old thing. It’s just that you use it less,” he said.
In short, the mobile phone is going down, not tomorrow, but definitely to the junk drawer.
Several observations which I know you are panting to read:
- Never underestimate making something smaller or re-inventing it as a different form factor. The Zuck might be “right.”
- The idea of “unleash” is interesting. What happens if employees at WhatsApp unleash themselves? How will the Zuck construct react? Like the Google? Something new like blue chip consulting firms replacing people with smart software? “Unleash” can be interpreted in different ways, but I am thinking of turning loose a pack of hyenas. The Zuck may be thinking about eager kindergartners. Who knows?
- The Zuck’s position is different from the government officials who are moving toward restrictions on “free and open” smart software. Those hallucinating large language models can be repurposed into smart weapons. Close enough for horseshoes with enough RDX may do the job.
Net net: The Zuck is an influential and very powerful information channel owner. “Unleash” what? Hungry predators or those innovating children? Perhaps neither. But as OpenAI seems to be closing, the Zuck’s AI is into opening. Ah, uncertainty is unfolding before my eyes in real time.
Stephen E Arnold, July 3, 2024
Scale Is All You Need for Superintelligence: Interesting Assertion
July 2, 2024
This essay is the work of a dumb dinobaby. No smart software required. This blog post includes information from Claude 3.5 Sonnet. The information from smart software is in blue.
At this time, smart software boils down to the “transformer.” I navigated to Claude 3.5 Sonnet and asked the smart software to explain “transformer.” Here’s what the system generated:
The Google Transformer innovation, introduced in 2017, revolutionized natural language processing by introducing a novel architecture that relies entirely on attention mechanisms. This approach allows the model to process input sequences in parallel, rather than sequentially, leading to significant improvements in translation quality and training efficiency. The Transformer’s key features include self-attention, which enables it to capture relationships between words regardless of their position in a sentence, and positional encoding, which helps the model understand word order without relying on recurrent neural networks
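The self-attention Claude describes — every token scoring every other token, position be damned — can be sketched in a few lines of Python. This is a toy illustration of the mechanism under my own assumptions (random weights, no multi-head machinery, no positional encoding), not anyone’s production code:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project each token's embedding into query, key, and value vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every token scores every other token, regardless of position.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Each output is a weighted blend of all the value vectors.
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                 # (4, 8): one updated vector per token
```

All four tokens are processed in one matrix multiply rather than one at a time, which is the parallelism the quoted passage credits for the training-efficiency gains.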
I then asked, “Are there other ways to achieve smart software or AI information functions?” Claude 3.5 Sonnet spit out this list:
- Machine Learning Algorithms
- Expert Systems
- Neural Networks.
Options are good. But the buzz focuses on transformers, a Google “invention” allegedly a decade old (but some suggest its roots reach back into the mists of time). But let’s stick with the Google and a decade.
The future is on the horizon. Thanks, MSFT Copilot. Good enough and you spelled “future” correctly.
“Etched Is Making the Biggest Bet in AI.” That is an interesting statement. The company states what its chip is not:
By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either. But for transformers, Sohu is the fastest chip of all time.
What does the chip do? The company says:
With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs.
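For scale, here is a dinobaby back-of-envelope on what that quoted number implies. The per-user reading pace is my assumption, not an Etched figure:

```python
# Back-of-envelope math on the claimed Sohu throughput. The 500,000
# tokens/second figure is Etched's claim; the ~10 tokens/second per
# chat stream (roughly human reading speed) is my own assumption.
claimed_tokens_per_sec = 500_000
tokens_per_user_stream = 10

concurrent_streams = claimed_tokens_per_sec // tokens_per_user_stream
tokens_per_day = claimed_tokens_per_sec * 86_400   # seconds in a day

print(concurrent_streams)   # 50000 simultaneous reading-speed chat streams
print(tokens_per_day)       # 43200000000 tokens per day from one chip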
The company again points out the downside of its “bet the farm” approach:
Today, every state-of-the-art AI model is a transformer: ChatGPT, Sora, Gemini, Stable Diffusion 3, and more. If transformers are replaced by SSMs, RWKV, or any new architecture, our chips will be useless.
Yep, useless.
What is Etched’s big concept? The company says:
Scale is all you need for superintelligence.
This means, in my dinobaby-impaired understanding, that bigger delivers smarter smart software. Skip the power, pipes, and pings. Just scale everything. The company agrees:
By feeding AI models more compute and better data, they get smarter. Scale is the only trick that’s continued to work for decades, and every large AI company (Google, OpenAI / Microsoft, Anthropic / Amazon, etc.) is spending more than $100 billion over the next few years to keep scaling.
Because existing chips are “hitting a wall,” a number of companies are in the smart software chip business. The write up mentions 12 of them, and I am not sure the list is complete.
Etched is different. The company asserts:
No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.
The company walks through the problems of existing chips and delivers its knockout punch:
But since Sohu only runs transformers, we only need to write software for transformers!
Reduced coding and an optimized chip: Superintelligence is in sight. Does the company want you to write a check? Nope. Here’s the wrap up for the essay:
What happens when real-time video, calls, agents, and search finally just work? Soon, you can find out. Please apply for early access to the Sohu Developer Cloud here. And if you’re excited about solving the compute crunch, we’d love to meet you. This is the most important problem of our time. Please apply for one of our open roles here.
What’s the timeline? I don’t know. What’s the cost of an Etched chip? I don’t know. What’s the infrastructure required? I don’t know. But superintelligence is almost here.
Stephen E Arnold, July 2, 2024
OpenAI: Do You Know What Open Means? Does Anyone?
July 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The backstory for OpenAI was the concept of “open.” Well, the meaning of “open” has undergone some modification. There was a Musk up, a board coup, an Apple announcement that was vaporous, and now we arrive at the word “open” as in “OpenAI.”
Open source AI is like a barn that burned down. Hopefully the companies losing their software’s value have insurance. Once the barn is gone, those valuable animals may be gone. Thanks, MSFT Copilot. Good enough. How’s that Windows update going this week?
“OpenAI Taking Steps to Block China’s Access to Its AI Tools” reports with the same authority Bloomberg used with its “your motherboard is phoning home” crusade a few years ago [Note: If the link doesn’t render, search Bloomberg for the original story]:
OpenAI is taking additional steps to curb China’s access to artificial intelligence software, enforcing an existing policy to block users in nations outside of the territory it supports. The Microsoft Corp.-backed startup sent memos to developers in China about plans to begin blocking their access to its tools and software from July, according to screenshots posted on social media that outlets including the Securities Times reported on Tuesday. In China, local players including Alibaba Group Holding Ltd. and Tencent Holdings Ltd.-backed Zhipu AI posted notices encouraging developers to switch to their own products.
Let’s assume the information in the cited article is on the money. Yes, I know this is risky today, but do you know an 80-year-old who is not into thrills and spills?
According to Claude 3.5 Sonnet (which my team is testing), “open” means:
- Not closed or fastened
- Accessible or available
- Willing to consider or receive
- Exposed or vulnerable
The Bloomberg article includes this passage:
OpenAI supports access to its services in dozens of countries. Those accessing its products in countries not included on the list, such as China, may have their accounts blocked or suspended, according to the company’s guidelines. It’s unclear what prompted the move by OpenAI. In May, Sam Altman’s startup revealed it had cut off at least five covert influence operations in past months, saying they were using its products to manipulate public opinion.
I found this “real” news interesting:
From Baidu Inc. to startups like Zhipu, Chinese firms are trying to develop AI models that can match ChatGPT and other US industry pioneers. Beijing is openly encouraging local firms to innovate in AI, a technology it considers crucial to shoring up China’s economic and military standing.
It seems to me that “open” means closed.
Another angle surfaces in the Nature Magazine’s article “Not All Open Source AI Models Are Actually Open: Here’s a Ranking.” OpenAI is not alone in doing some linguistic shaping with the word “open.” The Nature article states:
Technology giants such as Meta and Microsoft are describing their artificial intelligence (AI) models as ‘open source’ while failing to disclose important information about the underlying technology, say researchers who analysed a host of popular chatbot models. The definition of open source when it comes to AI models is not yet agreed, but advocates say that ’full’ openness boosts science, and is crucial for efforts to make AI accountable.
Now this sure sounds to me as if the European Union is defining “open” as different from the “open” of OpenAI.
Let’s step back.
Years ago I wrote a monograph about open source search. At that time IDC was undergoing what might charitably be called “turmoil.” Chapters of my monograph were published by IDC on Amazon. I recycled the material for consulting engagements, but I learned three useful things in the research for that analysis of open source search systems:
- Those making open source search systems available as free and open source software wanted the software [a] to prove their programming abilities, [b] to be a foil for a financial play best embodied in the Elastic go-public and sell services “play”; and [c] to be a low-cost, no-barrier runway to locking in users; that is, a big company funds the open source software and has a way to make money every which way from the “free” bait.
- Open source software is a product testing and proof-of-concept for developers who are without a job or who are working in a programming course in a university. I witnessed this approach when I lectured in Tallinn, Estonia, in the 2000s. The “maybe this will stick” approach yields some benefits, primarily to the big outfits who co-opt an open source project and support it. When the original developer gives up or gets a job, the big outfit has its hands on the controls. Please, see [c] in item 1 above.
- Open source was a baby buzzword when I was working on my open source search research project. Now “open source” is a full-scale, AI-jargonized road map to making money.
The current mix up in the meaning of “open” is a direct result of people wearing suits realizing that software has knowledge value. Giving value away for nothing is not smart. Hence, the US government wants to stop its nemesis from having access to open source software, specifically AI. Big companies do not want proprietary knowledge to escape unless someone pays for the beast. Individual developers want to get some fungible reward for creating “free” software. Begging for dollars, offering a disabled version of software or crippleware, or charging for engineering “support” are popular ways to move from free to ka-ching. Big companies have another angle: Lock in. Some outfits are inept like IBM’s fancy dancing with Red Hat. Other companies are more clever; for instance, Microsoft and its partners and AI investments which allow “open” to become closed thank you very much.
Like many eddies in the flow of the technology river, change is continuous. When someone says, “Open”, keep in mind that thing may be closed and have a price tag or handcuffs.
Net net: The AI secrets have flown the coop. It has taken about 50 years to reach peak AI. The new angles revealed in the last year are not heart stoppers. That smoking ruin over there. That’s the locked barn that burned down. Animals are gone or “transformed.”
Stephen E Arnold, July 1, 2024
Is There a Problem with AI Detection Software?
July 1, 2024
Of course not.
But colleges and universities are struggling to contain AI-enabled cheating. Sadly, it seems the easiest solution is tragically flawed. Times Higher Education considers, “Is it Time to Turn Off AI Detectors?” The post shares a portion of the new book, “Teaching with AI: A Practical Guide to a New Era of Human Learning” by José Antonio Bowen and C. Edward Watson. The excerpt begins by looking at the problem:
“The University of Pennsylvania’s annual disciplinary report found a seven-fold (!) increase in cases of ‘unfair advantage over fellow students’, which included ‘using ChatGPT or Chegg’. But Quizlet reported that 73 per cent of students (of 1,000 students, aged 14 to 22 in June 2023) said that AI helped them ‘better understand material’. Watch almost any Grammarly ad (ubiquitous on TikTok) and ask first, if you think clicking on ‘get citation‘ or ‘paraphrase‘ is cheating. Second, do you think students might be confused?”
Probably. Some universities are not exactly clear on what is cheating and what is permitted usage of AI tools. At the same time, a recent study found 51 percent of students will keep using them even if they are banned. The boost to their GPAs is just too tempting. Schools’ urge to fight fire with fire is understandable, but detection tools are far from perfect. We learn:
“AI detectors are already having to revise claims. Turnitin initially claimed a 1 per cent false-positive rate but revised that to 4 per cent later in 2023. That was enough for many institutions, including Vanderbilt, Michigan State and others, to turn off Turnitin’s AI detection software, but not everyone followed their lead. Detectors vary considerably in their accuracy and rate of false positives. One study looked at 14 different detectors and found that five of the 14 were only 50 per cent accurate or worse, but four of them (CheckforAI, Winston AI, GPT-2 Output and Turnitin) missed only one of the 18 AI-written samples. Detectors are not all equal, but the best are better than faculty at identifying AI writing.”
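The gap between a 1 per cent and a 4 per cent false-positive rate is easier to feel with concrete numbers. The campus size below is my illustrative figure, not one from the book:

```python
# What a "small" false-positive rate means in practice: a hypothetical
# campus screening 5,000 genuinely human-written essays per term.
essays = 5_000

for rate in (0.01, 0.04):   # Turnitin's initial claim, then its revised rate
    falsely_flagged = essays * rate
    print(f"{rate:.0%} false positives -> {falsely_flagged:.0f} students wrongly flagged")
```

At the revised rate, that is 200 innocent students per term facing an integrity accusation, which is why Vanderbilt and Michigan State pulled the plug.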
But is that ability worth the false positives? One percent may seem small, but to those students it can mean an end to their careers before they even begin. For institutions that do not want to risk false accusations, the authors suggest several alternatives that seem to make a difference. They advise instructors to discuss the importance of academic integrity at the beginning of the course and again as the semester progresses. Demonstrating how well detection tools work can also have an impact. Literally quizzing students on the school’s AI policies, definitions, and consequences can minimize accidental offenses. Schools could also afford students some wiggle room: allow them to withdraw submissions and take the zero if they have second thoughts. Finally, the authors suggest schools normalize asking for help. If students get stuck, they should feel they can turn to a human instead of AI.
Cynthia Murrell, July 1, 2024
Some Tension in the Datasphere about Artificial Intelligence
June 28, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I generally try to avoid profanity in this blog. I am mindful of Google’s stopwords. I know there are filters running to protect those younger than I from frisky and inappropriate language. Therefore, I will cite the two articles and then convert the profanity to a suitably sanitized form.
The first write up is “I Will F…ing Piledrive You If You Mention AI Again”. Sorry, like many other high-technology professionals I prevaricated and dissembled. I have edited the F word to be less superficially offensive. (One simply cannot trust high-technology types, can you? I am not Thomson Reuters obviously.) The premise of this write up is that smart software is over-hyped. Here’s a passage I found interesting:
Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain. Your managed security provider is probably using some algorithms baked up in a lab software to detect anomalous traffic, and here’s a secret, they didn’t do much AI work either, they bought software from the tiny sector of the market that actually does need to do employ data scientists.
I will leave it to you to ponder the wisdom of these words. I, for instance, do not know exactly what I am going to do until I do something, fiddle with it, and either change it up or trash it. You and most AI enthusiasts are probably different. That’s good. I envy your certitude. The author of the first essay is not gentle; he wants to piledrive you if you talk about smart software. I do not advocate violence under any circumstances. I can tolerate baloney about smart software. The piledriver person has hate in his heart. You have been warned.
The second write up is “ChatGPT Is Bullsh*t,” and it is an article published in SpringerLink, not a personal blog. Yep, bullsh*t as a term in an academic paper. Keep in mind, please, that Stanford University’s president and some Harvard wizards engaged in the bullsh*t business as part of their alleged making up data. Who needs AI when humans are perfectly capable of hallucinating, but I digress?
I noted this passage in the academic write up:
So perhaps we should, strictly, say not that ChatGPT is bullshit but that it outputs bullshit in a way that goes beyond being simply a vector of bullshit: it does not and cannot care about the truth of its output, and the person using it does so not to convey truth or falsehood but rather to convince the hearer that the text was written by a interested and attentive agent.
Please, read the 10 page research article about bullsh*t, soft bullsh*t, and hard bullsh*t. Form your own opinion.
I have now set the stage for some observations (probably unwanted and deeply disturbing to some in the smart software game).
- Artificial intelligence is a new big thing, and the hyperbole, misdirection, and outright lying (like my implying I would avoid profanity in this essay) are irrelevant. The object of the new big thing is to make money, get power, and maybe become an influencer on TikTok.
- The technology seems to have flowered in January 2023, when Microsoft said, “We love OpenAI. It’s a better Clippy.” The problem is that it is now June 2024, and the advances have been slow and steady. This means that after a half century of research, the AI revolution is working hard to keep the hypemobile in gear. PR is quick; smart software improvement is less speedy.
- The ripples the new big thing has sent across the datasphere attenuate the farther one is from the January 2023 marketing announcement. AI fatigue is now a thing. I think the hostility is likely to increase because real people are going to lose their jobs. Idle hands are the devil’s playthings. Excitement looms.
Net net: I think the profanity reveals the deep disgust some pundits and experts have for smart software, the companies pushing silver bullets into an old and rusty firearm, and an instinctual fear of the economic disruption the new big thing will cause. Exciting stuff. Oh, I am not stating a falsehood.
Stephen E Arnold, June 28, 2024
Can the Bezos Bulldozer Crush Temu, Shein, Regulators, and AI?
June 27, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The question, to be fair, should be, “Can the Bezos-less bulldozer crush Temu, Shein, Regulators, Subscriptions to Alexa, and AI?” The article, which appeared in the “real” news online service Venture Beat, presents an argument suggesting that the answer is, “Yes! Absolutely.”
Thanks MSFT Copilot. Good bulldozer.
The write up “AWS AI Takeover: 5 Cloud-Winning Plays They’re [sic] Using to Dominate the Market” depends upon an Amazon Big Dog named Matt Wood, VP of AI products at AWS. The article strikes me as something drafted by a small group at Amazon and then polished to PR perfection. The reasons the bulldozer will crush Google, Microsoft, Hewlett Packard’s on-premises play, and the keep-on-searching IBM Watson, among others, are:
- Covering the numbers or logo of the AI companies in the “game”; for example, Anthropic, AI21 Labs, and other whale players
- Hitting up its partners, customers, and friends to get support for the Amazon AI wonderfulness
- Engineering AI to be itty bitty pieces one can use to build a giant AI solution capable of dominating D&B industry sectors like banking, energy, commodities, and any other multi-billion sector one cares to name
- Skipping the Google folly of dealing with consumers. Amazon wants the really big contracts with really big companies, government agencies, and non-governmental organizations.
- Amazon is just better at security. Those leaky S3 buckets are not Amazon’s problem. The customers failed to use Amazon’s stellar security tools.
Did these five points convince you?
If you did not embrace the spirit of the bulldozer, the Venture Beat article states:
Make no mistake, fellow nerds. AWS is playing a long game here. They’re not interested in winning the next AI benchmark or topping the leaderboard in the latest Kaggle competition. They’re building the platform that will power the AI applications of tomorrow, and they plan to power all of them. AWS isn’t just building the infrastructure, they’re becoming the operating system for AI itself.
Convinced yet? Well, okay. I am not on the bulldozer yet. I do hear its engine roaring, and I smell the no-longer-green emissions from the bulldozer’s data centers. Also, I am not sure the Google, IBM, and Microsoft are ready to roll over and let the bulldozer crush them into the former rain forest’s red soil. I recall researching Sagemaker, which had some AI-type jargon applied to that “smart” service. Ah, you don’t know Sagemaker? Yeah. Too bad.
The rather positive leaning Amazon write up points out that, as nifty as those five points about Amazon’s supremacy in the AI jungle are, the company has vision. Okay, it is not the customer first idea from 1998 or so. But it is interesting. Amazon will have infrastructure. Amazon will provide model access. (I want to ask, “For how long?” but I won’t.) And Amazon will have app development.
The article includes a table providing detail about these three legs of the stool in the bulldozer’s cabin. There is also a run down of Amazon’s recent media and prospect directed announcements. Too bad the article does not include hyperlinks to these documents. Oh, well.
And after about 3,300 words about Amazon, the article includes about 260 words about Microsoft and Google. That’s a good balance. Too bad IBM. You did not make the cut. And HP? Nope. You did not get an “Also participated” certificate.
Net net: Quite a document. And no mention of Sagemaker. The Bezos-less bulldozer just smashes forward. Success is in crushing. Keep at it. And that “they” in the Venture Beat article title: Shouldn’t “they” be an “it”?
Stephen E Arnold, June 27, 2024
Nerd Flame War: AI AI AI
June 27, 2024
The Internet is built on trolls and their boorish behavior. The worst of the trolls are self-confessed “experts” on anything. Every online community has its loitering trolls, and tech enthusiasts aren’t any different. In the old days of Internet lore, online verbal battles were dubbed “flame wars,” and XDA-Developers reports that OpenAI started one: “AI Has Thrown Stack Overflow Into Civil War.”
A huge argument in AI development is online content being harvested to train large language models (LLMs). Writers and artists were rightly upset that their work was used to train image and writing algorithms. OpenAI recently partnered with Stack Overflow to collect data, and the users aren’t happy. Stack Overflow is a renowned tech support community for sysadmins, developers, and programmers. Stack Overflow even brags that it is the world’s largest developer community.
Stack Overflow users are angry because they weren’t asked for permission to use their content in AI training models, and they don’t like the platform’s response to their protests. Users are deleting their posts or altering them to display incorrect information. In response, Stack Overflow is restoring deleted and altered posts, temporarily suspending users who delete content, and hiding behind the terms of service. The entire situation is explained here:
“Delving into discussion online about OpenAI and Stack Overflow’s partnership, there’s plenty to unpack. The level of hostility towards Stack Overflow varies, with some users seeing their answers as being posted online without conditions – effectively free for all to use, and Stack Overflow granting OpenAI access to that data as no great betrayal. These users might argue that they’ve posted their answers for the betterment of everyone’s knowledge, and don’t place any conditions on its use, similar to a highly permissive open source license.
Other users are irked that Stack Overflow is providing access to an open-resource to a company using it to build closed-source products, which won’t necessarily better all users (and may even replace the site they were originally posted on.) Despite OpenAI’s stated ambition, there is no guarantee that Stack Overflow will remain freely accessible in perpetuity, or that access to any AIs trained on this data will be free to the users who contributed to it.”
Reddit and other online communities are facing the same problem. Content from Stack Overflow and Reddit is used to train generative AI systems like ChatGPT. OpenAI’s ChatGPT is regarded as overblown because it continues to fail multiple tests. We know, however, that generative AI will improve with time. We also know that people will use the easiest solution, and generative AI chatbots will become those tools. It’s easier to verbally ask or type a question than to search.
Whitney Grace, June 27, 2024
Two EU Firms Unite in Pursuit of AI Sovereignty
June 25, 2024
Europe would like to get out from under the sway of North American tech firms. This is unsurprising, given how differently the EU views issues like citizen privacy. Then there are the economic incentives of localizing infrastructure, data, workforce, and business networks. Now, two generative AI firms are uniting with that goal in mind. The Next Web reveals, “European AI Leaders Aleph Alpha and Silo Ink Deal to Deliver ‘Sovereign AI’.” Writer Thomas Macaulay reports:
“Germany’s Aleph Alpha and Finland’s Silo AI announced the partnership [on June 13, 2024]. The duo plan to create a ‘one-stop-solution’ for European industrial firms exploring generative AI. Their collaboration brings together distinctive expertise. Aleph Alpha has been described as a European rival to OpenAI, but with a stronger focus on data protection, security, and transparency. The company also claims to operate Europe’s fastest commercial AI data center. Founded in 2019, the firm has become Germany’s leading AI startup. In November, it raised $500mn in a funding round backed by Bosch, SAP, and Hewlett Packard Enterprise. Silo AI, meanwhile, calls itself ‘Europe’s largest private AI lab.’ The Helsinki-based startup provides custom LLMs through a SaaS subscription. Use cases range from smart devices and cities to autonomous vehicles and industry 4.0. Silo also specializes in building LLMs for low-resource languages, which lack the linguistic data typically needed to train AI models. By the end of this year, the company plans to cover every official EU language.”
Both Aleph Alpha CEO Jonas Andrulis and Silo AI CEO Peter Sarlin enthusiastically advocate European AI sovereignty. Will the partnership strengthen their mutual cause?
Cynthia Murrell, June 25, 2024
A Discernment Challenge for Those Who Are Dull Normal
June 24, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Techradar, an online information service, published “Ahead of GPT-5 Launch, Another Test Shows That People Cannot Distinguish ChatGPT from a Human in a Conversation Test — Is It a Watershed Moment for AI?” The headline implies “change everything” rhetoric, but that is routine AI jargon-hype.
Once again, academics unable to land a job at a “real” smart software company have studied the work of former colleagues who now make far more money than those still teaching. Well, what do academic researchers do when they are not sitting in the student union or the snack area of the lab whilst waiting for a graduate student to finish a task? In my experience, some think about their CVs or résumés. Others ponder the flaws in a commercial or allegedly commercial product or service.
A young shopper explains that the outputs of egg laying chickens share a similarity. Insightful observation from a dumb carp. Thanks, MSFT Copilot. How’s that Recall project coming along?
The write up reports:
The Department of Cognitive Science at UC San Diego decided to see how modern AI systems fared and evaluated ELIZA (a simple rules-based chatbot from the 1960’s included as a baseline in the experiment), GPT-3.5, and GPT-4 in a controlled Turing Test. Participants had a five-minute conversation with either a human or an AI and then had to decide whether their conversation partner was human.
Here’s the research set up:
In the study, 500 participants were assigned to one of five groups. They engaged in a conversation with either a human or one of the three AI systems. The game interface resembled a typical messaging app. After five minutes, participants judged whether they believed their conversation partner was human or AI and provided reasons for their decisions.
And what did the intrepid academics find? Factoids that will get them a job at a Perplexity-type of company? Information that will put smart software into focus for the elected officials writing draft rules and laws to prevent AI from making The Terminator come true?
The results were interesting. GPT-4 was identified as human 54% of the time, ahead of GPT-3.5 (50%), with both significantly outperforming ELIZA (22%) but lagging behind actual humans (67%). Participants were no better than chance at identifying GPT-4 as AI, indicating that current AI systems can deceive people into believing they are human.
What does this mean for those labeled dull normal, a nifty term applied to some lucky people taking IQ tests. I wanted to be a dull normal, but I was able to score in the lowest possible quartile. I think it was called dumb carp. Yes!
Several observations to disrupt your clear thinking about smart software and research into how the hot dogs are made:
- The smart software seems to have stalled. In our tests of You.com, which allows one to select which model parrots back information, it is tough to differentiate the outputs. Cut from the same transformer cloth maybe?
- Those judging, differentiating, and testing smart software outputs can discern differences only if they are way above dull normal or my classification, dumb carp. This means that indexing systems, people, and “new” models will be bamboozled into thinking what’s incorrect is a-okay. So much for the informed citizen.
- Will the next innovation in smart software revolutionize something? Yep, some lucky investors.
Net net: Confusion ahead for those like me: Dumb carp. Dull normals may be flummoxed. But those super-brainy folks have a chance to rule the world. Bust out the party hats and little horns.
Stephen E Arnold, June 24, 2024