The DoJ Wants to Break Up Google and Maybe Destroy the Future of AI

October 25, 2024

Contrary to popular belief, the United States is an economically frisky operation. The country runs on a fluid system that mixes aspects of regulation, the Wild West, monopolies, oligopolies, and stuff operating off the reservation. The government steps in when something needs regulation. The aging Sherman Anti-Trust Act forbids monopolies. Yahoo Finance says that “Google Is About To Learn How DOJ Wants To Remake Its Empire.”

There have been rumblings about breaking up Big Tech companies like Google for a while. District of Columbia Judge Amit Mehta ruled that Google abused its power and that its search and ad businesses violated antitrust law. Nothing is clear about what will happen to Google, but a penalty may emerge in 2025. Judge Mehta could potentially end the business agreements that make Google the default search engine on devices and force the company to make search data available to competitors. Google’s products (AdWords, the Chrome browser, and the Android OS) could be split off and no longer send users to the search engine.

Judge Mehta must consider how breaking up Google will affect third parties, especially those who rely on Google and its associated products to (basically) run society. He has a lot to think about. The article observes:

“Judge Mehta, however, may have to consider how remedies to restore competition in the traditional search engine market may impact competition in the emerging market for AI-assisted search.

One concern, legal experts said, is that Google’s search dominance could unfairly entrench its position in the market for next-generation search.

At the same time, these fresh threats may work to Google’s advantage in the remedies trial, allowing it to argue that its overall search dominance is already under threat.”

Nothing is going to happen quickly. The 2024 presidential election results will influence Mehta’s decision. Politicians will definitely have their say, and the US government needs to evaluate how it uses Google.

What’s Google’s answer to these charges? The company is suggesting that fiddling with Google could end the future of AI. Promise or threat?

Whitney Grace, October 25, 2024

Meta, Politics, and Money

October 24, 2024

Meta and its flagship product, Facebook, make money from advertising. Targeted advertising using Meta’s personalization algorithm is profitable, and political views seem to turn on the money spigot. Remember the January 6 riots or how Russia allegedly influenced the 2016 presidential election? Part of the reason those happened was targeted advertising through social media like Facebook.

Gizmodo reviews how much Meta generates from political advertising in “How Meta Brings In Millions Off Political Violence.” The Markup and CalMatters tracked how much money Meta made from merchandise advertising tied to the July assassination attempt on Trump. The total runs between $593,000 and $813,000. The number may understate the actual amount:

“If you count all of the political ads mentioning Israel since the attack through the last week of September, organizations and individuals paid Meta between $14.8 and $22.1 million dollars for ads seen between 1.5 billion and 1.7 billion times on Meta’s platforms. Meta made much less for ads mentioning Israel during the same period the year before: between $2.4 and $4 million dollars for ads that were seen between 373 million and 445 million times.  At the high end of Meta’s estimates, this was a 450 percent increase in Israel-related ad dollars for the company. (In our analysis, we converted foreign currency purchases to current U.S. dollars.)”
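The “450 percent” figure follows from comparing the high ends of the two quoted ranges. A quick, illustrative check of that arithmetic, using only the figures from the passage above (in millions of US dollars):

```python
# High-end estimates quoted above, in millions of US dollars.
after_attack = 22.1   # Israel-related ad spend in the period after the attack
year_before = 4.0     # ad spend in the same period the year before

increase_pct = (after_attack - year_before) / year_before * 100
print(f"{increase_pct:.1f}% increase")  # about 452.5%, i.e., roughly the 450 percent cited
```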

The organizations that funded those ads were supporters of Palestine or Israel. Meta doesn’t care who pays for ads. Meta spokesperson Tracy Clayton said that ads go through a review process to determine whether they adhere to community standards. She also said that advertisers don’t run their ads during times of strife because they don’t want their goods and services associated with violence.

That’s not what the evidence shows. The Markup and CalMatters researched the ads’ subject matter after the July assassination attempt. While the ads didn’t violate Meta’s guidelines, they did relate to the event. There were ads for gun holsters and merchandise about the shooting. It was a business opportunity, and people ran with it, with Meta holding the finish line ribbon.

Meta really has an interesting ethical framework.

Whitney Grace, October 24, 2024

Google Meet: Going in Circles Is Either Brilliant or Evidence of a Management Blind Spot

October 24, 2024

No smart software but we may use image generators to add some modern spice to the dinobaby’s output.

I read an article which seems to be a rhetorical semantic floor routine. “Google Meet (Original) Is Finally, Properly Dead” explains that once there was Google Meet. Actually there was something called Hangouts, which as I recall was not exactly stable on my steam powered system in rural Kentucky. Hangouts morphed into Hangouts Meet. Then Hangouts Meet forked itself (maybe Google forked its users?) and there was Hangouts Meet and Hangouts Chat. Hangouts Chat then became Google Chat.

The write up focuses on Hangouts Meet, which is now dead. But the write up says:

In April 2020, Google rebranded Hangouts Meet to just “Meet.” A couple of years later, in 2022, the company merged Google Duo into Google Meet due to Duo’s larger user base, aiming to streamline its video chat services. However, to avoid confusion between the two Meet apps, Google labeled the former Hangouts Meet as “Meet (Original)” and changed its icon to green. However, having two Google Meet apps didn’t make sense and the company began notifying users of the “Meet (Original)” app to uninstall it and switch to the Duo-rebranded Meet. Now, nearly 18 months later, Google is officially discontinuing the Meet (Original) app, consolidating everything and leaving just one version of Meet on the Play Store.

Got that? The article explains:

Phasing out the original Meet app is a logical move for Google as it continues to focus on developing and enhancing the newer, more widely used version of Meet. The Duo-rebranded Google Meet has over 5 billion downloads on the Play Store and is where Google has been adding new features. Redirecting users to this app aligns with Google’s goal of consolidating its video services into a single, feature-rich platform.

Let’s step back. What does this Meet saga tell us about Google’s efficiency? Here are my views:

  1. Without its monopoly money, Google could not afford the type of inefficiency evidenced by the tale of the Meets
  2. The product management process appears to operate without much, if any, senior management oversight
  3. Google allows internal developers to whack away, release services, and then flounder until a person decides, “Let’s try again, just with different Googlers.”

So how has that worked out for Google? First, I think Microsoft Teams is a deeply weird product. The Softies want Teams to have more functions than the elephantine Microsoft Word. But lots of companies use Word and they now use Teams. And there is Zoom. Poor Zoom has lost its focus on allowing quick and easy online video conferences. Now I have to hunt for options between a truly peculiar Zoom app and the even more clumsy Zoom Web site.

Then there is Google Meet Duo whatever. Amazing. The services are an example of a very confused dog chasing its tail. Round and round she goes until some adult steps in and says, “Down, girl, before you die.”

PS. Who Google Chats from email?

Stephen E Arnold, October 24, 2024

Google Is AI, Folks

October 24, 2024

Google’s legal team is certainly creative. In the face of the Justice Department’s push to break up the monopoly, reports Yahoo Finance, “Google’s New Antitrust Defense is AI.” Wait, what? Reporter Hamza Shaban points to a blog post by Google VP Lee-Anne Mulholland, writing:

“In Google’s view, the government’s heavy-handed approach to transforming the search market ignores the nascent developments in AI, the fresh competition in the space, and new modes of seeking information online, like AI-powered answer engines. The energy around AI and the potential disruption of how users interact with search is, competitively speaking, a negative for Google, said Wedbush analyst Dan Ives. But in another way, as a defense against antitrust charges, it’s a positive. ‘That’s an argument against monopoly that bodes well for Google,’ he said.”

Really? Some believe quite the opposite. We learn:

“‘The DOJ has specifically noted that this evolution in technology is precisely why they are intervening at this point in time,’ said Gil Luria, an analyst at DA Davidson. ‘They want to make sure that Google is not able to convert the monopoly it currently has in Search into a monopoly in AI Enhanced Search.’”

Exactly. Google is clearly a monopoly. We think their assertion means, "treat us special because we are special." This church-lady thinking may or may not work. We live in an interesting judicial moment.

Cynthia Murrell, October 24, 2024

OpenAI: An Illustration of Modern Management Acumen

October 23, 2024

Just a humanoid processing information related to online services and information access.

The Hollywood Reporter (!) published “What the Heck Is Going On At OpenAI? As Executives Flee with Warnings of Danger, the Company Says It Will Plow Ahead.” When I compare the Hollywood Reporter with some of the poohbah “real” news discussion of a company on track to lose a ballpark figure of $5 billion in 2024, the write up does a good job of capturing the managerial expertise on display at the company.


The wanna-be lion of AI is throwing a party. Will there be staff to attend? Thanks, MSFT Copilot. Good enough.

I worked through the write up and noted a couple of interesting passages. Let’s take a look at them and then ponder the caption of the smart software generated image for my blog post. Full disclosure: I used the Microsoft Copilot version of OpenAI’s applications to create the art. Is it derivative? Heck, who knows when OpenAI is involved in crafting information with a click?

The first passage I circled is the one about the OpenAI chief technology officer bailing out of the high-flying outfit:

she left because she’d given up on trying to reform or slow down the company from within. Murati was joined in her departure from the high-flying firm by two top science minds, chief research officer Bob McGrew and researcher Barret Zoph (who helped develop ChatGPT). All are leaving for no immediately known opportunity.

That suggests stability in the virtual executive suite. I suppose the prompt used to aid these wizards in their decision to find their future elsewhere was something like “Hello, ChatGPT 4o1, I want to work in a technical field which protects intellectual property, helps save the whales, and contributes to the welfare of those without deep knowledge of multi-layer neural networks. In order to find self-fulfillment not possible with YouTube TikTok videos, what do you suggest for a group of smart software experts? Please, provide examples of potential work paths and provide sources for the information. Also, do not include low probability job opportunities like sanitation worker in the Mission District, contract work for Microsoft, or negotiator for the countries involved in a special operation, war, or regional conflict. Thanks!”

The output must have been convincing because the write up says: “All are leaving for no immediately known opportunity.” Interesting.

The second passage warranting a blue underline is a statement attributed to another former OpenAI wizard, William Saunders. He apparently told a gathering of esteemed Congressional leaders:

“AGI [artificial general intelligence or a machine smarter than every humanoid] would cause significant changes to society, including radical changes to the economy and employment. AGI could also cause the risk of catastrophic harm via systems autonomously conducting cyberattacks, or assisting in the creation of novel biological weapons,” he told lawmakers. “No one knows how to ensure that AGI systems will be safe and controlled … OpenAI will say that they are improving. I and other employees who resigned doubt they will be ready in time.”

I wonder if he asked the OpenAI smart software for tips about testifying before a Senate Committee. If he did, he seems to be voicing the idea that smart software will help some people to develop “novel biological weapons.” Yep, we could all die in a sequel, Covid 2.0: The Invisible Global Killer. (Does that sound like a motion picture suitable for Amazon, Apple, or Netflix? I have a hunch some people in Hollywood will do some tests in Peoria or Omaha, wherever the “middle” of America is now.)

The final snippet I underlined is:

OpenAI has something of a history of releasing products before the industry thinks they’re ready.

No kidding. But the object of the technology game is to become the first mover, obtain market share, and kill off any pretenders the way a lion in Africa goes for the old, lame, young, and dumb. OpenAI wants to be the king of the AI jungle. The one challenge for the AI lion at the company may be getting staff to attend his next party. I see empty cubicles.

Stephen E Arnold, October 23, 2024

FOGINT: FBI Nabs Alleged Crypto Swindlers

October 23, 2024

Nowhere does the phrase “buyer beware” apply more than in the cryptocurrency market. But the FBI is on it. Crypto Briefing reports, “FBI Creates Crypto Token to Catch Fraudsters in Historic Market Manipulation Case.” The agency used its “NexFundAI” token to nab 18 entities, including individuals and four major crypto firms: Gotbit, ZM Quant, CLS Global, and MyTrade. The mission was named “Operation Token Mirrors.” Snazzy. Writer Estefano Gomez explains:

“The charges stem from widespread fraud involving market manipulation and ‘wash trading’ designed to deceive investors and inflate crypto values. Working covertly, the FBI launched the token to attract the indicted firms’ services, which allegedly specialized in inflating trading volumes and prices for profit. The charges cover a broad scheme of wash trading, where defendants artificially inflated the value of more than 60 tokens, including the Saitama Token, which at its peak reached a market capitalization of $7.5 billion. The conspirators are alleged to have made false claims about the tokens and used deceptive tactics to mislead investors. After artificially pumping up the token prices, they would cash out at these inflated values, defrauding investors in a classic ‘pump and dump’ scheme. The crypto companies also allegedly hired market makers like ZM Quant and Gotbit to carry out these wash trades. These firms would execute sham trades using multiple wallets, concealing the true nature of the activity while creating fake trading volume to make the tokens seem more appealing to investors.”

If convicted, defendants could face up to two decades in prison. Several of those charged have already pled guilty. Authorities also shut down several trading bots used for wash trades and seized over $25 million in cryptocurrency. Assistant US Attorney Joshua Levy stresses that wash trading, long since illegal in traditional financial markets, is now also illegal in the crypto industry.
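The mechanics described in the quoted passage come down to counting the same tokens traded back and forth between wallets one party controls. Here is a minimal, hypothetical sketch (made-up wallet labels, trades, and ownership clusters) of how self-dealing volume can be separated from genuine volume:

```python
# Hypothetical trades: (buyer_wallet, seller_wallet, token_amount)
trades = [
    ("wallet_a", "wallet_b", 1_000),  # both wallets run by the same market maker
    ("wallet_b", "wallet_a", 1_000),  # round trip back again: a wash trade
    ("wallet_c", "wallet_d", 250),    # unrelated parties: a genuine trade
]

# Assumed ownership clusters; in practice these come from chain analysis.
owner = {
    "wallet_a": "maker_1",
    "wallet_b": "maker_1",
    "wallet_c": "investor_x",
    "wallet_d": "investor_y",
}

reported_volume = sum(amount for _, _, amount in trades)
wash_volume = sum(
    amount for buyer, seller, amount in trades if owner[buyer] == owner[seller]
)

print("Reported volume:", reported_volume)               # 2250
print("Wash (self-dealing) volume:", wash_volume)        # 2000
print("Genuine volume:", reported_volume - wash_volume)  # 250
```

In this toy example the reported figure looks nine times larger than the genuine activity, which is the entire point of the scheme.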

Cynthia Murrell, October 23, 2024

Money and Open Source: Unpleasant Taste?

October 23, 2024

Open-source veteran and blogger Armin Ronacher ponders “The Inevitability of Mixing Open Source and Money.” It is lovely when developers work on open-source projects for free out of the goodness of their hearts. However, the truth is these folks can only afford to spend so much time working for free. (A major reason open source documentation is a mess, by the way.)

For his part, Ronacher helped launch Sentry’s Open Source Pledge. That initiative asks companies to pledge funding to open source projects they actively use. It is particularly focused on small projects, like xz, that have a tougher time attracting funds than the big names. He acknowledges the perils of mixing open source and money, as described by David Heinemeier Hansson in his commentary on the WordPress / WP Engine dispute. But he insists the blend is already baked in. He considers:

“At face value, this suggests that Open Source and money shouldn’t mix, and that the absence of monetary rewards fosters a unique creative process. There’s certainly truth to this, but in reality, Open Source and money often mix quickly. If you look under the cover of many successful Open Source projects you will find companies with their own commercial interests supporting them (eg: Linux via contributors), companies outright leading projects they are also commercializing (eg: MariaDB, redis) or companies funding Open Source projects primarily for marketing / up-sell purposes (uv, next.js, pydantic, …). Even when money doesn’t directly fund an Open Source project, others may still profit from it, yet often those are not the original creators. These dynamics create stresses and moral dilemmas.”

The WordPress / WP Engine conflict is one example. The tension can also create personal stress. Ronacher shares doubts that have plagued him: to monetize or not to monetize? Would a certain project have taken off had he poured his own money into it? He has watched colleagues wrestle with similar questions that affected their health and careers. See his post for more on those issues. The write-up concludes:

“I firmly believe that the current state of Open Source and money is inadequate, and we should strive for a better one. Will the Pledge help? I hope for some projects, but WordPress has shown that we need to drive forward that conversation of money and Open Source regardless of the size of the project.”

Clearly, further discussion is warranted. New ideas from open-source enthusiasts are also needed. Can a balance be found?

Cynthia Murrell, October 23, 2024

A Little AI Surprise: Reasoning Fail

October 22, 2024

Generative AI models predict text. That is it. Oh certainly, those prediction paths can be quite elaborate and complex. But no matter how complicated, LLM processes are simply not akin to human reasoning. So we are not surprised to learn that “Apple’s Study Proves that LLM-Based AI Models Are Flawed Because They Cannot Reason,” as Apple Insider reports. That a study was required to prove the point highlights how poorly this widely deployed technology is understood.

Apple’s researchers set out to see if they could trip up popular LLMs by adding irrelevant, contextual information to mathematical queries. The answer was a resounding yes. In fact, the more of these extraneous details they added, the worse the models did. Even one such detail was found to reduce the output’s accuracy by as much as 65%. Contributing Editor Charles Martin writes:

“The task the team developed, called ‘GSM-NoOp’ was similar to the kind of mathematic ‘word problems’ an elementary student might encounter. The query started with the information needed to formulate a result. ‘Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday.’ The query then adds a clause that appears relevant, but actually isn’t with regards to the final answer, noting that of the kiwis picked on Sunday, ‘five of them were a bit smaller than average.’ The answer requested simply asked ‘how many kiwis does Oliver have?’ The note about the size of some of the kiwis picked on Sunday should have no bearing on the total number of kiwis picked. However, OpenAI’s model as well as Meta’s Llama3-8b subtracted the five smaller kiwis from the total result.”
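The arithmetic behind the example is trivial, which is what makes the failure striking: 44 kiwis on Friday, 58 on Saturday, and double Friday’s count (88) on Sunday gives 190, and the remark about size changes nothing. A small sketch of the correct calculation versus the distractor-driven one reported for some models:

```python
# Correct reading of the GSM-NoOp kiwi problem described above.
friday = 44
saturday = 58
sunday = 2 * friday           # "double the number of kiwis he did on Friday"

total = friday + saturday + sunday
print(total)                  # 190: the size remark is irrelevant to the count

# The reported failure mode: treating the irrelevant "five of them were
# a bit smaller than average" clause as something to subtract.
distracted_total = total - 5
print(distracted_total)       # 185: wrong, since smaller kiwis still count
```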

Unlike schoolchildren, LLMs do not get better at this sort of problem with practice. Martin reminds us these results mirror those of a study done five years ago:

“The faulty logic was supported by a previous study from 2019 which could reliably confuse AI models by asking a question about the age of two previous Super Bowl quarterbacks. By adding in background and related information about the games they played in, and a third person who was quarterback in another bowl game, the models produced incorrect answers.”

Of course they did, because LLMs cannot reason. Perhaps another type of AI is, or will be, up to these tasks. But if so, it is by definition something other than generative AI. What we do know is that some AI wizards cannot get along with their business partners. Is that reasonable? Sure.

Cynthia Murrell, October 22, 2024

Four Years of Research Proves What a Teacher Knows in Five Minutes

October 22, 2024

Just a humanoid processing information related to online services and information access.

Consider the write up “The Phone Ban Has Had a Big Impact on School Work.” No kidding. The article reports on a study from Iceland conducted after schools told students, “No mobiles.” The write up says:

A phone ban has been in place at Öldutún School since the beginning of 2019, and according to the principal, it has worked well. The school’s atmosphere and culture have changed for the better, and there is more peace in the classroom.

I assume “peace” means students sort of paying attention, not scrolling TikTok and firing off Snapchats of total coolness. (I imagine a nice looking codfish on the school cafeteria food line. Young people may have different ideas about what’s cool, but I’ve been to Iceland, and to some, fish are quite fetching.)


A typical classroom somewhere in Kentucky. Thanks, MSFT Copilot. The “new and improved version” is a struggle. But so are MSFT security and Windows updates. How is Sam AI-Man these days?

Unfortunately the school without mobiles has not been able to point to newly sprouted genius level performance since the 2019 ban. I am okay with the idea of peace in the classroom.

The write up points out:

It has been reported in Morgunblaðið that students who spend more time on smartphones are less interested in reading than those who use their phones little or not at all. The interest in reading is waning faster and faster as students spend more time on their smart devices. These are the results of research by Kristján Ketill Stefánsson, assistant professor of pedagogy at the University of Iceland’s Faculty of Education. The research is based on data from more than fifteen thousand students in grades 6 to 10 in 120 elementary schools across the country.

I noted this surprising statement:

Both students and parents have welcomed the phone ban, as it was prepared for a whole year in collaboration with the board of the student association, school council and parents, according to Víðisson.

Would this type of ban on mobiles in the classroom work in the expensive private schools in some cities? What about schools in what might be called less salubrious geographic areas? Iceland is one culture; rural Kentucky is another.

My reaction to the write up is positive. The conclusions seem obvious to me and no study was needed. My instincts are that mobile devices are not appropriate for any learning environment. That includes college classrooms and lecture rooms for continuing education credits. But I am a dinobaby. (I look like the little orange dinosaur. What do I know?)

Stephen E Arnold, October 22, 2024

Google Search: AI Images Are Maybe Reality

October 22, 2024

AI generated images, videos, and text are infiltrating the Internet like COVID-19. The account 0x00000 posted a thread on X titled “Google está muerto” (“Google is dead”). The thread documents a Google image search for “baby peacock.” In the past, that image search would yield tiny brown chicks from nature blogs, zoos, Wikipedia, a few illustrations, and some social media accounts. The results would be mostly accurate.

Those days are dead.

Why?

The Google search for “baby peacock” returned images of blue, white, and other avian-like things that don’t resemble real peacock chicks. The images, in fact, look like “the idea of a baby peacock.” What does that mean?

The images in the Google search results were almost all AI generated, with only a few true photos of baby peacocks. The X account Insane Facebook AI slop responded:

“Boomers told us not to trust Wikipedia only to fall for this”

That comment refers to a repost of a so-called white baby peacock with a full tail of plumage. What? The “white baby peacock” resembles someone’s craft project or a Christmas ornament more than a real chick. I doubt everyone will pay that close attention, especially because the white baby peacock is adorable.

What are we going to do? Who knows. One approach is to accept AI images as reality. Who will know?

Whitney Grace, October 22, 2024
