Google and the Tom Sawyer Method, Part Two

November 15, 2023

This essay is the work of a dumb humanoid. No smart software required.

What does a large online advertising company do when it cannot figure out what’s fake and what’s not? The answer, as I suggested in this post, is to get other people to do the work. The approach is cheap, shifts the burden to others, and sidesteps direct testing of an automated “smart” system designed to detect fake data in the form of likenesses of living people or likenesses whose use requires a fee.

“YouTube Will Let Musicians and Actors Request Takedowns of Their Deepfakes” explains (sort of):

YouTube is making it “possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” Individuals can submit calls for removal through YouTube’s privacy request process.

I find this angle on the process noted in my “Google Solves Fake Information with the Tom Sawyer Method” a useful interpretation of what Google is doing.

From my point of view, Google wants others to do the work of monitoring, identifying, and filling out a form to request that fake information be removed. Never mind that Google has the data, the tags, and (in theory) the expertise to automate the process.

I admire Google. I bet Tom Sawyer’s distant relative now works at Google and cooked up this approach. Well done. Hit that Foosball game while others hunt for their fake or unauthorized likeness, their music, or some other copyrighted material.

Stephen E Arnold, November 15, 2023

Hitting the Center Field Wall, AI Suffers an Injury!

November 15, 2023

This essay is the work of a dumb, dinobaby humanoid. No smart software required.

At a reception at a government facility in Washington, DC, last week, one of the bright young sparks told me, “Every investment deal I see gets funded if it includes the words ‘artificial intelligence.’” I smiled and moved to another conversation. Wow, AI has infused the exciting world of a city built on the swampy marge of the Potomac River.

I think that the go-go era of smart software has reached a turning point. Venture firms and consultants may not have received the email with this news. However, my research team has, and the update contains information on two separate thrusts of the AI revolution.


The heroic athlete, supported by his publicist, makes a heroic effort to catch the long fly ball. Unfortunately, our star runs into the wall, drops the ball, and suffers what may be a career-ending injury to his left hand. (It looks broken, doesn’t it?) Oh, well. Thanks, MSFT Bing. The perspective is weird and there is trash on the ground, but the image is good enough.

The first signal appears in “AI Companies Are Running Out of Training Data.” The notion that online information is infinite is a quaint one. But in the fever of moving to online, reality is less interesting than the euphoria of the next gold rush or the new Industrial Revolution. Futurism reports:

Data plays a central role, if not the central role, in the AI economy. Data is a model’s vital force, both in basic function and in quality; the more natural — as in, human-made — data that an AI system has to train on, the better that system becomes. Unfortunately for AI companies, though, it turns out that natural data is a finite resource — and if that tap runs dry, researchers warn they could be in for a serious reckoning.

The information or data in question is not the smog emitted by modern automobiles’ chip-stuffed boxes. Nor is the data the streams of geographic information gathered by mobile phone systems. The high-value data are those which matter; for example, in a stream of securities information, which specific stock is moving because it is being manipulated by one of those bright young minds I met at the DC event.

The article “AI Companies Are Running Out of Training Data” adds:

But as data becomes increasingly valuable, it’ll certainly be interesting to see how many AI companies can actually compete for datasets — let alone how many institutions, or even individuals, will be willing to cough their data over to AI vacuums in the first place. But even then, there’s no guarantee that the data wells won’t ever run dry. As infinite as the internet seems, few things are actually endless.

The fix is synthetic or faked data; that is, fabricated data which appears to replicate real-life behavior. (Don’t you love it when Google predicts the weather or a smarty pants games the crypto market?)
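
As a toy illustration of what “synthetic data” means in practice, one can measure a few statistics of a real sample and then fabricate new points that mimic them. This is a minimal sketch only: the numbers, the tiny sample, and the Gaussian assumption are all hypothetical, and real synthetic-data pipelines are far more elaborate than this.

```python
import random
import statistics

random.seed(42)

# A small "real" sample (made-up numbers standing in for real measurements).
real_data = [9.8, 10.1, 10.4, 9.6, 10.2, 9.9, 10.3, 10.0]

# Measure simple statistics of the real sample.
mu = statistics.mean(real_data)
sigma = statistics.stdev(real_data)

# Fabricate 1,000 synthetic points that "look like" the real ones
# by sampling from a Gaussian with the measured mean and spread.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample's mean tracks the real sample's mean closely.
print(round(mu, 2), round(statistics.mean(synthetic), 2))
```

The point of the sketch is the circularity the article hints at: the fabricated data can only echo the statistics of the real data it was derived from, never add new information.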

The message is simple: Smart software has ground through the good stuff and may face its version of an existential crisis. That’s different from the rah rah one usually hears about AI.

The second item my team called to my attention appears in a news story called “OpenAI Pauses New ChatGPT Plus Subscriptions Due to Surge in Demand.” I read the headline as saying, “Oh, my goodness, we don’t have the money or the capacity to handle more user requests.”

The article expresses the idea in this snappy 21st century way:

The decision to pause new ChatGPT signups follows a week where OpenAI services – including ChatGPT and the API – experienced a series of outages related to high-demand and DDoS attacks.

Okay, security and capacity.

What are the implications of these two unrelated stories?

  1. The run-up to AI has been boosted by system operators ignoring copyright and picking low-hanging fruit. The orchard is now looking thin. Apples grow on trees, just not quickly, and overcultivation can ruin the once fertile soil. Think a digital Dust Bowl perhaps?
  2. The friction of servicing user requests is causing slowdowns. Can the heat be dissipated? Absolutely, but the fix requires money, more than high school science club management techniques, and common sense. Do AI companies exhibit common sense? Yeah, sure. Every day.
  3. The lack of high-value or sort of good information is a bummer. Machines producing insights into the dark activities of bad actors and the thoughts of 12-year-olds are grinding along. However, the value of the information outputs seems to be lagging behind the marketers’ promises. One telling example is the outright failure of Israel’s smart software to have utility in identifying the intent of bad actors. My goodness, if any country has smart systems, it’s Israel. Based on events in the last couple of months, the flows of data produced what appears to be a failing grade.

If we take these two cited articles’ information at face value, one can make a case that the great AI revolution may be facing some headwinds. In a winner-take-all game like AI, there will be some Sad Sacks at those fancy Washington, DC receptions. Time to innovate and renovate perhaps?

Stephen E Arnold, November 15, 2023

Cyberwar Crimes? Yep and Prosecutions Coming Down the Pike

November 15, 2023

This essay is the work of a dumb humanoid. No smart software required.

Existing international law has appeared hamstrung in the face of cyber-attacks for years, with advocates calling for new laws to address the growing danger. It appears, however, that step will no longer be necessary. Wired reports, “The International Criminal Court Will Now Prosecute Cyberwar Crimes.” The Court’s lead prosecutor, Karim Khan, acknowledged in an article published by Foreign Policy Analytics that cyber warfare causes serious harm in the real world. Attacks on critical infrastructure like medical facilities and power grids may now be considered “war crimes, crimes against humanity, genocide, and/or the crime of aggression” as defined in the 1998 Rome Statute. That is great news, but why now? Writer Andy Greenberg tells us:

“Neither Khan’s article nor his office’s statement to WIRED mention Russia or Ukraine. But the new statement of the ICC prosecutor’s intent to investigate and prosecute hacking crimes comes in the midst of growing international focus on Russia’s cyberattacks targeting Ukraine both before and after its full-blown invasion of its neighbor in early 2022. In March of last year, the Human Rights Center at UC Berkeley’s School of Law sent a formal request to the ICC prosecutor’s office urging it to consider war crime prosecutions of Russian hackers for their cyberattacks in Ukraine—even as the prosecutors continued to gather evidence of more traditional, physical war crimes that Russia has carried out in its invasion. In the Berkeley Human Rights Center’s request, formally known as an Article 15 document, the Human Rights Center focused on cyberattacks carried out by a Russian group known as Sandworm, a unit within Russia’s GRU military intelligence agency. Since 2014, the GRU and Sandworm, in particular, have carried out a series of cyberwar attacks against civilian critical infrastructure in Ukraine beyond anything seen in the history of the internet.”

See the article for more details of Sandworm’s attacks. Greenberg consulted Lindsay Freeman, the Human Rights Center’s director of technology, law, and policy, who expects the ICC is ready to apply these standards well beyond the war in Ukraine. She notes the 123 countries that signed the Rome Statute are obligated to detain and extradite convicted war criminals. Another expert, Strauss Center director Bobby Chesney, points out Khan paints disinformation as a separate, “gray zone.” Applying the Rome Statute to that tactic may prove tricky, but he might make it happen. Khan seems determined to hold international bad actors to account as far as the law will possibly allow.

Cynthia Murrell, November 15, 2023

A Musky Odor Thwarts X Academicians

November 15, 2023

This essay is the work of a dumb humanoid. No smart software required.

How does a tech mogul stamp out research? The American way, of course! Ars Technica reveals, “100+ Researchers Say they Stopped Studying X, Fearing Elon Musk Might Sue Them.” A recent survey by the Coalition for Independent Technology Research, reported by Reuters, found that fear of litigation and jacked-up data-access fees are hampering independent researchers. All this while X (formerly Twitter) is under threat of EU fines for allowing Israel/Hamas falsehoods. Meanwhile, the usual hate speech, misinformation, and disinformation continue. The company insists its own, internal mechanisms are doing a fine job, thank you very much, but it is getting harder and harder to test that claim. Writer Ashley Belanger tells us:

“Although X’s API fees and legal threats seemingly have silenced some researchers, X has found some other partners to support its own research. In a blog last month, Yaccarino named the Technology Coalition, Anti-Defamation League (another group Musk threatened to sue), American Jewish Committee, and Global Internet Forum to Counter Terrorism (GIFCT) among groups helping X ‘keep up to date with potential risks’ and supporting X safety measures. GIFCT, for example, recently helped X identify and remove newly created Hamas accounts. But X partnering with outside researchers isn’t a substitute for external research, as it seemingly leaves X in complete control of spinning how X research findings are characterized to users. Unbiased research will likely become increasingly harder to come by, Reuters’ survey suggested.”

Indeed. And there is good reason to believe the company is being less than transparent about its efforts. We learn:

“For example, in July, X claimed that a software company that helps brands track customer experiences, Sprinklr, supplied X with some data that X Safety used to claim that ‘more than 99 percent of content users and advertisers see on Twitter is healthy.’ But a Sprinklr spokesperson this week told Reuters that the company could not confirm X’s figures, explaining that ‘any recent external reporting prepared by Twitter/X has been done without Sprinklr’s involvement.’”

Musk is famously a “free speech absolutist,” but only when it comes to speech he approves of. Decreasing transparency will render X more dangerous, unless and until its decline renders it irrelevant. Fear the musk ox.

Cynthia Murrell, November 15, 2023

Copyright Trolls: An Explanation Which Identifies Some Creatures

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

If you are not familiar with firms which pursue those who intentionally or unintentionally use another person’s work in their writings, you may not know what a “copyright troll” is. I want to point you to an interesting post from IntoTheMinds.com. The write up “PicRights + AFP: Une Opération de Copyright Trolling Bien Rodée” (“PicRights + AFP: a well-oiled copyright trolling operation”) appeared in 2021, and it was updated in June 2023. The original essay is in French, but you may want to give Google Translate a whirl if your high school French is but a fading memory.


A copyright troll is looking in the window of a blog writer. The troll is waiting for the writer to use content covered by copyright and for which a fee must be paid. The troll is patient. The blog writer is clueless. Thanks, Microsoft Bing. Nice troll. Do you perhaps know one?

The write up does a good job of explaining trollism with particular reference to an estimable outfit called PicRights and the even more estimable Agence France-Presse. It also does a bit of critical review of the PicRights operation, including the language aimed at alleged copyright violators about how their lives will take a nosedive if money is not paid promptly for the alleged transgression. There are some thoughts about what to do if and when a copyright troll (like the one pictured courtesy of Microsoft Bing’s art generator) comes calling, some comments about the rules and regulations regarding trollism, and a few observations about the rights of creators. A few suggested readings are included as well. Of particular note is the discussion of an estimable legal eagle outfit doing business as Higbee and Associates. You can find that document at this link.

If you are interested in copyright trolling in general and PicRights in particular, I suggest you download the document. I am not sure how long it will remain online.

Stephen E Arnold, November 14, 2023

Google Solves Fake Information with the Tom Sawyer Method

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

How does one deliver “responsible AI”? Easy. Shift the work to those who use a system built on smart software. I call the approach the “Tom Sawyer Method.” The idea is that the fictional character (Tom) convinced lesser lights to paint the fence for him. Samuel Clemens (the guy who invested in the typewriter) said:

“Work consists of whatever a body is obliged to do. Play consists of whatever a body is not obliged to do.”

Thus the information in “Our Approach to Responsible AI Innovation” is play. The work is for those who cooperate to do the real work. The moral is, “We learn more about Google than we do about responsible AI innovation.”


The young entrepreneur says, “You fellows chop the wood. I will go and sell it to one of the neighbors. Do a good job. Once you finish you can deliver the wood and I will give you your share of the money. How’s that sound?” The friends are eager to assist their pal. Thanks, Microsoft Bing. I was surprised that you provided people of color when I asked for “young people chopping wood.” Interesting? I think so.

The Google write up from a trio of wizard vice presidents at the online advertising company says:

…we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material.

Yep, “require.” But what I want to do is to translate Google speak into something dinobabies understand. Here’s my translation:

  1. Google cannot determine what content is synthetic and what is not; therefore, the person using our smart software has to tell us, “Hey, Google, this is fake.”
  2. Google does not want to increase headcount and costs related to synthetic content detection and removal. Therefore, the work is moved via the Tom Sawyer Method to YouTube “creators” or fence painters. Google gets the benefit of reduced costs, hopefully reduced liability, and “play” like Foosball.
  3. Google can look at user provided metadata and possibly other data in the firm’s modest repository and determine with acceptable probability that a content object and a creator should be removed, penalized, or otherwise punished by a suitable action; for example, not allowing a violator to buy Google merchandise. (Buying Google AdWords is okay, however.)

The write up concludes with this bold statement: “The AI transformation is at our doorstep.” Inspiring. Now, wood choppers, you can carry the firewood into the den and stack it by the fireplace in which we burn the commission checks the offenders were to receive prior to their violating the “requirements.”

Ah, Google, such a brilliant source of management inspiration: a novel written in 1876. I did not know that such old information was in the Google index. I mean, Deja News is consigned to the dust bin. Why not Mark Twain’s writings?

Stephen E Arnold, November 14, 2023


Google: Slip Slidin’ Away? Not Yet. Defaults Work

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

I spotted a short item in the online information service called Quartz. The story had a click magnet title, and it worked for me. “Is This the Beginning of the End of Google’s Dominance in Search?” asks a rhetorical question without providing much of an answer. The write up states:

The tech giant’s market share is being challenged by an increasingly crowded field

I am not sure what this statement means. I noticed during the week of November 6, 2023, that the search system 50kft.com stopped working. Is the service dead? Is it experiencing technical problems? No one knows. I also checked Newslookup.com. That service remains stuck in the past. And Blogsurf.io seems to be a goner. I am not sure where the renaissance in Web search is. Is there a digital Florence, Italy, I have overlooked?


A search expert lounging in the hammock of habit. Thanks, Microsoft Bing. You do understand some concepts like laziness when it comes to changing search defaults, don’t you?

The write up continues:

Google has been the world’s most popular search engine since its launch in 1997. In October, it was holding a market share of 91.6%, according to web analytics tracker StatCounter. That’s down nearly 80 basis points from a year before, though a relatively small dent considering OpenAI’s ChatGPT was introduced late last year.

And what’s number two? How about Bing with a market share of 3.1 percent according to the numbers in the article.

Some people know that Google has spent big bucks to become the default search engine in places that matter. What few appreciate is that being a default is the equivalent of finding oneself in a comfy habit hammock. Changing the default setting for search is just not worth the effort.

What I think is happening is the conflation of search and retrieval with another trend. The new thing is letting software generate what looks like an answer. Forget that the outputs of a system based on smart software may be wonky or just incorrect. Thinking up a query is difficult.

But Web search sucks. Google is in a race to create bigger, more inviting hammocks.


Google is not sliding into a loss of market share. The company is coming in for the kill as it demonstrates its financial resolve with regard to the investment in Character.ai.

Let me be clear: Finding actionable information today is more difficult than at any previous time in my 50-year career in online information. Why? Software struggles to match content to what a human needs to solve certain problems. Finding a pizza joint or getting a list of results for further reading just looks like an answer. Moving beyond good enough, so the pizza joint does not gag a maggot and the list of citations is not beyond the user’s reading level, is not what today’s systems deliver.

We are stuck in the Land of Good Enough, lounging in habit hammocks, and living the good life. Some people wear a T shirt with the statement, “Ignorance is bliss. Hello, Happy.”

Net net: I think the write up projects a future in which search becomes really easy and does the thinking for the humanoids. But for now, it’s the Google.

Stephen E Arnold, November 14, 2023

Pundit Recounts Amazon Sins and Their Fixes

November 14, 2023

This essay is the work of a dumb humanoid. No smart software required.

Sci-fi author and Pluralistic blogger Cory Doctorow is not a fan of Amazon. In fact, he declares, “Amazon Is a Ripoff.” His article references several sources to support this assertion, beginning with Lina Khan’s 2017 cautionary paper published in the Yale Law Journal. Now head of the FTC, Khan is bringing her expertise to bear in a lawsuit against the monopoly. We are reminded how tech companies have been able to get away with monopolistic practices thus far:

“There’s a cheat-code in US antitrust law, one that’s been increasingly used since the Reagan administration, when the ‘consumer welfare’ theory (‘monopolies are fine, so long as they lower prices’) shoved aside the long-established idea that antitrust law existed to prevent monopolies from forming at all. The idea that a company can do anything to create or perpetuate a monopoly so long as its prices go down and/or its quality goes up is directly to blame for the rise of Big Tech.”

But what, exactly, is shady about Amazon’s practices? From confusing consumers through complexity and gouging them with “drip pricing” to holding vendors over a barrel, Doctorow describes the company’s sins in this long, specific, and heavily linked diatribe. He then pulls three rules to hold Amazon accountable from a paper by researchers Tim O’Reilly, Ilan Strauss, and Mariana Mazzucato: Force the company to halt its most deceptive practices, mandate interoperability between it and comparison shopping sites, and create legal safe harbors for the scraping that underpins such interoperability. The invective concludes:

“I was struck by how much convergence there is among different kinds of practitioners, working against the digital sins of very different kinds of businesses. From the CFPB using mandates and privacy rules to fight bank rip-offs to behavioral economists thinking about Amazon’s manipulative search results. This kind of convergence is exciting as hell. After years of pretending that Big Tech was good for ‘consumers,’ we’ve not only woken up to how destructive these companies are, but we’re also all increasingly in accord about what to do about it. Hot damn!”

He sounds so optimistic. Are big changes ahead? Don’t forget to sign up for Prime.

Cynthia Murrell, November 14, 2023

Google Apple: These Folks Like Geniuses and Numbers in the 30s

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

The New York Post published a story which may or may not be on the money. I would suggest that the odds of it being accurate are in the 30 percent range. In fact, 30 percent is emerging as a favorite number. Apple, for instance, imposes what some have called a 30 percent “Apple tax.” Don’t get me wrong. Apple is just trying to squeak by in a tough economy. I love the connector on the MacBook Air, which is unlike any Apple connector in my collection. And the $130 USB cable? Brilliant.


The poor Widow Apple is pleading with the Bank of Googzilla for a more favorable commission. The friendly bean counter is not willing to pay more than one third of the cash take. “I want to pay you more, but hard times are upon us, Widow Apple. Might we agree on a slightly higher number?” The poor Widow Apple sniffs and nods her head in agreement as the frail child Mac Air the Third whimpers.

The write up which has me tangled in 30s is “Google Witness Accidentally Reveals Company Pays Apple 36% of Search Ad Revenue.” I was enthralled with the idea that a Google witness could do something by accident. I assumed Google witnesses were in sync with the giant, user centric online advertising outfit.

The write up states:

Google pays Apple a 36% share of search advertising revenue generated through its Safari browser, one of the tech giant’s witnesses accidentally revealed in a bombshell moment during the Justice Department’s landmark antitrust trial on Monday. The flub was made by Ken Murphy, a University of Chicago economist and the final witness expected to be called by Google’s defense team.

Okay, a 36 percent share: sounds fair. True, it is a six-percentage-point premium on the so-called “Apple tax.” But Google has the incentive to pay more for traffic. That “pay to play” business model is indeed popular, it seems.

The write up “Usury in Historical Perspective” includes an interesting passage; to wit:

Mews and Abraham write that 5,000 years ago Sumer (the earliest known human civilization) had its own issues with excessive interest. Evidence suggests that wealthy landowners loaned out silver and barley at rates of 20 percent or more, with non-payment resulting in bondage. In response, the Babylonian monarch occasionally stepped in to free the debtors.

A measly 20 percent? Flash forward to the present. At 36 percent, inflation apparently has not had much of an impact on the Apple Google deal.

Who is the University of Chicago economist who allegedly revealed a super secret number? According to the always-begging Wikipedia, he is a person who has written more than 50 articles. He is a recipient of the MacArthur Fellowship, sometimes known as a “genius grant.” Ergo, a genius.

I noted this passage in the allegedly accurate write up:

Google had argued as recently as last week that the details of the agreement were sensitive company information – and that revealing the info “would unreasonably undermine Google’s competitive standing in relation to both competitors and other counterparties.” Schmidtlein [Google’s robust legal eagle] and other Google attorneys have pushed back on DOJ’s assertions regarding the default search engine deals. The company argues that its payments to Apple, AT&T and other firms are fair compensation.

I like the phrase “fair compensation.” It matches nicely with the 36 percent commission on top of the $25 billion Google paid Apple to make the wonderful Google search system the default in Apple’s Safari browser. The money, in my opinion, illustrates the depth of love users have for the Google search system. Presumably Google wants to spare the Safari user the hassle required to specify another Web search system like Bing.com or Yandex.com.

Goodness, Google cares about its users so darned much, I conclude.

Despite the heroic efforts of Big Tech on Trial, I find that getting information about a trial between the US and everyone’s favorite search system difficult. Why the secrecy? Why the redactions? Why the cringing when the genius revealed the 36 percent commission?

I think I know why. Here are three reasons for the cringe:

  1. Google is thin skinned. Criticism is not part of the game plan, particularly with high school reunions coming up.
  2. Google understands that those not smart enough (like the genius Ken Murphy) would not understand the logic of the number. Those who are not Googley won’t get it, so why bother to reveal the number?
  3. Google hires geniuses. Geniuses don’t make mistakes. Therefore, the 36 percent reveal is numeric proof of the sophistication of Google’s analytic expertise. Apple could have gotten more money; Google is the winner.

Net net: My hunch is that the cloud of unknowing wrapped around the evidence in this trial makes clear that the Google is just doing what anyone smart enough to work at Google would do. Cleverness is good. Being a genius is good. Appearing to be dumb is not Googley.  Oh, oh. I am not smart enough to see the sheer brilliance of the number, its revelation, and how it makes Google even more adorable with its super special deals.

Stephen E Arnold, November 13, 2023

The OpenAI Algorithm: More Data Plus More Money Equals More Intelligence

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

The Financial Times (I continue to think of this publication as the weird orange newspaper) published an interview converted to a news story. The title is an interesting one; to wit: “OpenAI Chief Seeks New Microsoft Funds to Build Superintelligence.” Too bad the story is about the bro culture in the Silicon Valley race to become the king of smart software’s revenue streams.

The hook for the write up is Sam Altman (I interpret the wizard’s name as Sam AI-Man), who appears to be fighting a bro battle with the Google, the current champion of online advertising. At stake is a winner-takes-all goal in the next big thing, smart software.

In the clubby world of smart software, I find the posturing of Google and OpenAI an extension of the mentality which pits owners of Ferraris (slick, expensive, and novel machines) against one another in a battle over whose hallucinating machine is better. The patter goes like this: “My Ferrari is faster, better looking, and brighter red than yours,” one owner says. The other owner replies, “My Ferrari is newer, better designed, and has a storage bin.” This is man cave speak for what counts.


When tech bros talk about their powerful machines, the real subject is what makes a man a man. In this case the defining qualities are money and potency. Thanks, Microsoft Bing, I have looked at the autos in the Microsoft and Google parking lots. Cool, macho.

The write up introduces what I think is a novel term: “Magic intelligence.” That’s T shirt grade sloganeering. The idea is that smart software will become like a person, just smarter.

One passage in the write up struck me as particularly important. The subject is orchestration, which is not the word Sam AI-Man uses. The idea is that the smart software will knit together the processes necessary to complete complex tasks. By definition, some tasks will be designed for the smart software. Others will be intended to make life super duper for the less intelligent humanoids. Sam AI-Man is quoted by the Financial Times as saying:

“The vision is to make AGI, figure out how to make it safe . . . and figure out the benefits,” he said. Pointing to the launch of GPTs, he said OpenAI was working to build more autonomous agents that can perform tasks and actions, such as executing code, making payments, sending emails or filing claims. “We will make these agents more and more powerful . . . and the actions will get more and more complex from here,” he said. “The amount of business value that will come from being able to do that in every category, I think, is pretty good.”

The other interesting passage, in my opinion, is the one which suggests that the Google is not embracing the large language model approach. If the Google has discarded LLMs, the online advertising behemoth is embracing other, unnamed methods. Perhaps these are “small language models” intended to reduce costs and minimize the legal vulnerability some think the LLM method invites. Here’s the passage from the FT’s article:

While OpenAI has focused primarily on LLMs, its competitors have been pursuing alternative research strategies to advance AI. Altman said his team believed that language was a “great way to compress information” and therefore developing intelligence, a factor he thought that the likes of Google DeepMind had missed. “[Other companies] have a lot of smart people. But they did not do it. They did not do it even after I thought we kind of had proved it with GPT-3,” he said.

I find the bro jockeying interesting for three reasons:

  1. An intellectual jousting tournament is underway. Which digital knight will win? Both the Google and OpenAI appear to believe that the winner comes from a small group of contestants. (I wonder if non-US jousters are part of the equation “more data plus more money equals more intelligence.”)
  2. OpenAI seems to be driving toward “beyond human” intelligence or possibly a form of artificial general intelligence. Google, on the other hand, is chasing a wimpier outcome.
  3. Outfits like the Financial Times are hot on the AI story. Why? The automated newsroom without humans promises to reduce costs perhaps?

Net net: AI vendors, rev your engines for superintelligence or magic intelligence or whatever jargon connotes more, more, more.

Stephen E Arnold, November 13, 2023

