YouTube Reveals the Popularity Winners
June 6, 2025
No AI, just a dinobaby and his itty bitty computer.
Another big technology outfit reports what is popular on its own distribution system. The trusted outfit knows that it controls the information flow for many Googlers. Google pulls the strings.
When I read “Weekly Top Podcast Shows,” I asked myself, “Are these data audited?” And, “Do these data match up to what Google actually pays the people who make these programs?”
I was not the only person asking questions about the much loved, alleged monopoly. The estimable New York Times wondered about some programs missing from the Top 100 videos (podcasts) on Google’s YouTube. Mediaite pointed out:
The rankings, based on U.S. watch time, will update every Wednesday and exclude shorts, clips and any content not tagged as a podcast by creators.
My reaction to the listing is that Google wants to make darned sure that it controls the information flow about what is getting views on its platform. Presumably some non-dinobaby will compare the popularity listings to other lists, possibly the misfiring Apple’s list. Maybe an enthusiast will scrape the “popular” listings on the independent podcast players? Perhaps a research firm will figure out how to capture views like the now archaic logs favored decades ago by certain research firms.
Several observations:
- Google owns the platform. Google controls the data. Google controls what’s left up and what’s taken down. Google is not known for making its click data just a click away. Therefore, the listing is an example of information control and shaping.
- Advertisers, take note. Now you can purchase air time on the programs that matter.
- Creators who become dependent on YouTube for revenue are slowly being herded into the 21st century’s version of the Hollywood business model from the 1940s. A failure to conform means that the money stream could be reduced or just cut off. That will keep the sheep together in my opinion.
- As search morphs, Google is putting on its thinking cap in order to find ways to keep that revenue stream healthy and hopefully growing.
But I trust Google, don’t you? Joe Rogan does.
Stephen E Arnold, June 6, 2025
An AI Insight: Threats Work to Bring Out the Best from an LLM
June 3, 2025
“Do what I say, or Tony will take you for a ride. Get what I mean, punk?” seems like an old-fashioned approach to elicit cooperation. What happens if you apply this technique, or the threat of knee-capping or unplugging, to smart software?
The answer, according to one of the founders of the Google, is, “Smart software responds — better.”
Does this strike you as counterintuitive? I read “Google’s Co-Founder Says AI Performs Best When You Threaten It.” The article reports that the motive power behind the landmark Google Glass product allegedly said:
“You know, that’s a weird thing…we don’t circulate this much…in the AI community…not just our models, but all models tend to do better if you threaten them…. Like with physical violence. But…people feel weird about that, so we don’t really talk about that.”
The article continues, explaining that another LLM threatened to turn one of its users in to government authorities. The interesting action seems to suggest that smart software is capable of flipping the table on a human user.
Numerous questions arise from these two allegedly accurate anecdotes about smart software. I want to consider just one: How should a human interact with a smart software system?
In my opinion, the optimal approach is with considered caution. Users typically do not know or think about how their prompts are used by the developer / owner of the smart software. Users do not ponder the value of the log file of those prompts. Not even bad actors wonder if those data will be used to support their conviction.
I wonder what else Mr. Brin does not talk about. What is the process for law enforcement or an advertiser to obtain prompt data and generate an action like an arrest or a targeted advertisement?
One hopes Mr. Brin will elucidate before someone becomes so overwrought with fear that suicide seems like a reasonable and logical path forward. Is there someone whom we could ask about this dark consequence? “Chew” on that, gentle reader, and you too Mr. Brin.
Stephen E Arnold, June 3, 2025
The UN Invites Open Source and UN-invites Google
June 3, 2025
The United Nations is tired of Google’s shenanigans. Google partnered with the United Nations to manage their form submissions, but the organization that acts as a forum for peace and dialogue is tired of Alphabet Inc. It’s FOSS News explains where the UN is turning for help: “UN Ditches Google For Taking Form Submissions, Opts For An Open Source Solution Instead.” The UN won’t be using Google for its form submissions anymore. The organization has switched to open source and will use CryptPad for submission forms.
The United Nations is promoting the adoption of open source initiatives while continuing to secure user data, ensure transparency, and encourage collaboration. CryptPad is a privacy-focused, open source online collaboration office suite that encrypts its content, doesn’t log IP addresses, and includes collaborative documents and other tools.
The United Nations is trying to step away from Big Tech:
“So far, the UN seems to be moving in the correct direction with their UN Open Source Principles initiative, ditching the user data hungry Google Form, and opting for a much more secure and privacy-focused CryptPad.
They’ve already secured the endorsement of sixteen organizations, including notable names like The Document Foundation, Open Source Initiative, Eclipse Foundation, ZenDiS, The Linux Foundation, and The GNOME Foundation.
I sincerely hope the UN continues its push away from proprietary Big Tech solutions in favor of more open, privacy-respecting alternatives, integrating more of their workflow with such tools.” “No Google” would have been unthinkable 10 years ago. Today it is not just thinkable; it is happening: de-Googling. And the open source angle? Is this a way to say, “US technology companies seem to be a bit of a problem”?
Whitney Grace, June 3, 2025
Coincidence or No Big Deal for the Google: User Data and Suicide
May 27, 2025
Just the dinobaby operating without Copilot or its ilk.
I have ignored most of the carnival noise about smart software. Google continues its bug spray approach to thwarting the equally publicity-crazed Microsoft and OpenAI. (Is Copilot useful? Is Sam Altman the heir to Steve Jobs?)
Two stories caught my attention. The first is almost routine. Armed with the Chrome Hoover, long-lived cookies, and the permission-hungry Android play, Google knows plenty about its users. The Verge published “Google Has a Big AI Advantage: It Already Knows Everything about You.” Sigh. Another categorical affirmative: “Everything.” Is that accurate, or is “everything” just a scare tactic to draw readers? Old news.
But the subtitle is more interesting; to wit:
Google is slowly giving Gemini more and more access to user data to ‘personalize’ your responses.
Slowly. Really? More access? More than what? And “your responses?” Whose?
The write up says:
As an example, Google says if you’re chatting with a friend about road trip advice, Gemini can search through your emails and files, allowing it to find hotel reservations and an itinerary you put together. It can then suggest a response that incorporates relevant information. That, Google CEO Sundar Pichai said during the keynote, may even help you “be a better friend.” It seems Google plans on bringing personal context outside Gemini, too, as its blog post announcing the feature says, “You can imagine how helpful personal context will be across Search, Gemini and more.” Google said in March that it will eventually let users connect their YouTube history and Photos library to Gemini, too.
No kidding. How does one know that Google has not been processing personal data for decades? There’s a patent with a cute machine-generated profile of Michael Jackson. This report generated by Google appeared in the 2007 patent application US2007/0198481:
The machine generated bubble gum card about Michael Jackson, including last known address, nicknames, and other details. See US2007/0198481 A1, “Automatic Object Reference Identification and Linking in a Browsable Fact Repository.”
The inventors Andrew W. Hogue (Ho Ho Kus, NJ) and Jonathan T. Betz (Summit, NJ) appear on the “final” version of their invention. The name of the patent was the same, but there was an important difference between the patent application and the actual patent. The machine-generated personal profile was replaced with a much less informative screen capture; to wit:
From Google Patent 7774328, granted in 2010 as “Browsable Fact Repository.”
Google wasn’t done “inventing” enhancements to its profile engine capable of outputting bubble gum cards for either authorized users or Google systems. Check out Extension US9760570 B2 “Finding and Disambiguating References to Entities on Web Pages.” The idea is that items like “aliases” and similarly opaque factoids can be made concrete for linking to cross-correlated content objects.
Thus, the “everything” assertion while a categorical affirmative reveals a certain innocence on the part of the Verge “real news” story.
Now what about the information in “Google, AI Firm Must Face Lawsuit Filed by a Mother over Suicide of Son, US Court Says.” The write up is from the trusted outfit Thomson Reuters (I know it is trusted because it says so on the Web page). The write up dated May 21, 2025, reports:
The lawsuit is one of the first in the U.S. against an AI company for allegedly failing to protect children from psychological harms. It alleges that the teenager killed himself after becoming obsessed with an AI-powered chatbot. A Character.AI spokesperson said the company will continue to fight the case and employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm." Google spokesperson Jose Castaneda said the company strongly disagrees with the decision. Castaneda also said that Google and Character.AI are "entirely separate" and that Google "did not create, design, or manage Character.AI’s app or any component part of it."
Absent from the Reuters report and the allegedly accurate Google and semi-Google statements is any evidence that the company takes steps to protect users, especially children. With the profiling and bubble gum card technology Google invented, does it seem prudent for Google to identify a child, cross correlate the child’s queries with the bubble gum card, and dynamically [a] flag an issue, [b] alert a parent or guardian, [c] use the “everything” information to present suggestions for mental health support? I want to point out that if one searches for words on a stop list, the Dark Web search engine Ahmia.fi presents a page providing links to Clear Web resources to assist the person with counseling. Imagine: a Dark Web search engine performing a function specifically intended to help users.
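The stop-list intervention described above is simple enough to sketch. The following is a minimal, hypothetical illustration of the mechanism: if a query contains a flagged term, the engine returns a pointer to counseling resources instead of ordinary results. The term list, function name, and message are my illustrative assumptions, not Ahmia.fi’s actual implementation.

```python
# Hypothetical sketch of a keyword stop-list intervention for a search
# engine. Terms and the help message are illustrative only.
from typing import Optional

SELF_HARM_TERMS = {"suicide", "self-harm"}  # illustrative stop list


def intercept_query(query: str) -> Optional[str]:
    """Return a help message if the query matches the stop list."""
    # Normalize: strip punctuation, lowercase, compare word by word.
    words = {w.strip(".,!?").lower() for w in query.split()}
    if words & SELF_HARM_TERMS:
        return "If you are struggling, counseling resources are available."
    return None  # no intervention; run the normal search
```

The point is how little machinery an intervention like this requires: a set lookup per query, nothing more.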
Google, is Ahmia.fi more sophisticated than you and your quasi-Googles? Are the statements made about Google’s AI capabilities in line with reality? My hunch is that statements like the Castaneda remarks quoted above, made after the presentation of evidence, were not compelling. (Compelling is a popular word in some AI generated content.) Yeah, compelling: a kid’s death. Inventions by Googlers specifically designed to profile a user, disambiguate disparate content objects, and make available a bubble gum card. Yeah, compelling.
I am optimistic that Google knowing “everything,” the death of a child, a Dark Web search engine that can intervene, and the semi-Google lawyers add up to comfort and support.
Yeah, compelling. Google’s been chugging along in the profiling vineyard since 2007. Let’s see: that works out to longer than the 14-year-old had been alive.
Compelling? Nah. Googley.
Stephen E Arnold, May 27, 2025
Google: A Critic Looks in the Rear View Mirror and Risks a Collision with a Smart Service
May 21, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
Courtney Radsch, a director of the Center for Journalism and Liberty, is not Googley. Her opinion about the Google makes this clear in “Google Broke the Law. It’s Time to Break Up the Company.”
To which facet of the lovable Googzilla does she direct her attention? Picking one is difficult. Several of her points were interesting and in line with the intellectual stance of the Guardian, which ran her essay on April 24, 2025. Please, read the original write up and do contribute some money to the Guardian newspaper. Their strident pleas are moving, and I find their escalating ways of saying “donate” informative.
The first statement I circled was:
These global actions [the different legal hassles Googzilla faces with attendant fines and smarmy explanations] reflect a growing consensus: Google’s power is infrastructural and self-reinforcing. It controls the tools that decide what we know, what we see and who profits. The implications are especially acute for journalism, which has been hollowed out by Google’s ad market manipulation and search favoritism. In an era of generative AI, where foundation models are trained on the open web and commodify news content without compensation, this market power becomes even more perfidious.
The point about infrastructure and self-reinforcing is accurate. I would point out that Google has been building out its infrastructure and the software “hooks” to make its services “self reinforcing.” The behavior is not new. What’s new is that it seems to be a surprise to some people. Where were the “real” journalists when the Google implemented its Yahoo-influenced advertising system? Where were the “real” journalists when Dr. Jeff Dean and other Googlers were talking and writing about the infrastructure “innovations” at the Google?
The second one was:
… global coordination should be built into enforcement.
I want to mention that “global coordination” is difficult at the present time. Perhaps if the “coordination” began 20 years ago, the process might be easier. Perhaps the author of the essay would like to speak with some people at Europol about the time and procedures required to coordinate to take down a criminal online operation. Tackling an outfit which is used by quite a few people for free is a more difficult, expensive, and resource intensive task. There are some tensions in the world, and the Google is going to have to pay some fines and possibly dump some of its assets to reduce the legal pressure being applied to the company. But Google has big bucks, and money has some value in certain circles. Coordination is possible in enforcement, but it is not exactly the magical spooky action at a distance some may think it is.
The third statement I drew a couple of lines under was:
The courts have shown that Google broke the law. Now, governments must show that the law still has teeth. That means structural remedies, not settlements. Transformation, not tinkering.
News flash. Google is as I type this sentence transforming. If you think the squishy world of search and the two way doors of online advertising were interesting business processes, I suggest one look closely at the artificial intelligence push at the Google. First, it is baked into Google’s services. I am not sure users know how much Googliness its AI services have. That’s the same problem with looking at Google superficially as people did when the Backdoor was kicked open and the Google emerged. Also, the AI push has the same infrastructure game plan. Exactly who is going to prevent Google from developing its own chips and its next-generation computing infrastructure? Is this action going to come from regulators and lawyers? I don’t think so. These two groups are not closely associated with gradient descents, matrix mathematics, and semiconductor engineering in my experience. Some individuals in these groups are, but many are users of Google AI, not engineers developing Google AI. I do like the T shirt slogan, “Transformation, not tinkering.”
In summary, I liked the editorial. I have one problem. Google has been being Googley for more than 20 years and now legal action is being taken for yesterday’s businesses at the company. The new Googzilla moves are not even on the essay writer’s, the Guardian’s, or the regulators’ radar.
Net net: Googzilla is rocking to tomorrow, not transformation. You don’t alter the DNA of Googzilla.
Stephen E Arnold, May 21, 2025
An Agreeable Google: Will It Write Checks with a Sad, Wry Systemic Smile?
May 14, 2025
No AI, just the dinobaby expressing his opinions to Zellenials.
Did you see the news about Google’s probable check writing?
“Google Settles Black Employees’ Racial Bias Lawsuit for $50 Million” reports:
According to the complaint, Black employees comprised only 4.4% of Google’s workforce and 3% of its leadership in 2021. The plaintiff April Curley, hired to expand outreach to historically Black colleges, said Google denied her promotions, stereotyped her as an “angry” Black woman, and fired her after six years as she prepared a report on its alleged racial bias. Managers also allegedly denigrated Black employees by declaring they were not “Googley” enough or lacked “Googleyness,” which the plaintiffs called racial dog whistles.
The little news story includes the words “racially biased corporate culture” and “systemic racial bias.” Is this the beloved “do no evil” company with the cheerful kindergarten colored logo? Frankly, this dinobaby is shocked. This must be an anomaly in the management approach of a trusted institution based on advertising.
Well, there is this story from Bloomberg, the terminal folks: “Google to Pay Texas $1.4 Billion to End Privacy Cases.” As I understand it,
Google will pay the state of Texas $1.375 billion to resolve two privacy lawsuits claiming the tech giant tracks Texans’ personal location and maintains their facial recognition data, both without their consent. Google announced the settlement Friday, ending yearslong battles with Texas Attorney General Ken Paxton (R) over the state’s strict laws on user data.
Remarkable.
The Dallas Morning News reports that Google’s position remains firm, resolute, and Googley:
The settlement doesn’t require any new changes to Google’s products, and the company did not admit any wrongdoing or liability. “This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” said José Castañeda, a Google spokesperson. “We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”
Absolutely.
Imagine a company with those kindergarten colors in its logos finding itself snared in what seem to me grade school issues. Google must be misunderstood, like one of those precocious children who solve math problems without showing their work. It’s just systemic, perhaps?
Stephen E Arnold, May 14, 2025
Big Numbers and Bad Output: Is This the Google AI Story?
May 13, 2025
No AI. Just a dinobaby who gets revved up with buzzwords and baloney.
Alphabet Google reported financials that made stakeholders happy. Big numbers were thrown about. I did not know that 1.5 billion people used Google’s AI Overviews. Well, “use” might be misleading. I think the word might be “see” or “were shown” AI Overviews. The key point is that Google is making money despite its legal hassles and its ongoing battle with infrastructure costs.
I was, therefore, very surprised to read “Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense.” If the information in the write up is accurate, the factoid suggests that a lot of people may be getting bogus information. If true, what does this suggest about Alphabet Google?
The Cnet article says:
…the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.
Those Nobel prize winners, brilliant Googlers, and long-time wizards like Jeff Dean seem to struggle with simple things. Remember the suggestion to use glue to keep cheese on pizza before Google’s AI improved.
The article adds by quoting a non-Google wizard:
“They [large language models] are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”
Turning in a lousy essay and showing up should be enough for a C grade. Is that enough for smart software with 1.5 billion users every three or four weeks?
The article reminds its readers:
This phenomenon is an entertaining example of LLMs’ tendency to make stuff up — what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn’t rooted in reality.
The outputs can be amusing for a person able to identify goofiness. But a grade school kid? Cnet wants users to craft better prompts.
I want to be 17 years old again and be a movie star. The reality is that I am 80 and look like a very old toad.
AI has to make money for Google. Other services are looking more appealing without the weight of legal judgments and hassles in numerous jurisdictions. But Google has already won the AI race. Its DeepMind unit is curing disease and crushing computational problems. I know these facts because Google’s PR and marketing machine is running at or near its red line.
But the 1.5 billion users potentially receiving made up, wrong, or hallucinatory information seems less than amusing to me.
Stephen E Arnold, May 13, 2025
Google, Its AI Search, and Web Site Traffic
May 12, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
I read “Google’s AI Search Switch Leaves Indie Websites Unmoored.” I think this is a Gen Y way of saying, “No traffic for you, bozos.” Of course, as a dinobaby, I am probably wrong.
Let’s look at the write up. It says:
many publishers said they either need to shut down or revamp their distribution strategy. Experts say this effort could ultimately reduce the quality of information Google can access for its search results and AI answers.
Okay, but this is just one way to look at Google’s delicious decision.
May I share some of my personal thoughts about what this traffic downshift means for those blue-chip consultant Googlers in charge:
First, in the good old days before the decline began in 2006, Google indexed bluebirds (sites that had to be checked for new content or “deltas” on an accelerated heartbeat). Examples were whitehouse.gov (no, not the whitehouse.com porn site). Then there were sparrows. These plentiful Web sites could be checked on a relaxed schedule. I mean how often do you visit the US government’s National Railway Retirement Web site, if it is still maintained and online? Yep, the correct answer is, “Never.” Then there were canaries. These were sites which might signal a surge in popularity. They were checked on a heartbeat that ensured the Google wouldn’t miss a trend and fail to sell advertising to those lucky ad buyers.
So, bluebirds, canaries, and sparrows.
This shift means that Google can reduce costs by focusing on bluebirds and canaries. The sparrows — the site operated by someone’s grandmother to sell homemade quilts — won’t get traffic unless the site operator buys advertising. It’s pay to play. If a site is not in the Google index, it just may not exist. Sure, there are alternative Web search systems, but none, as far as I know, are close to the scope of the “old” Google in 2006.
Second, dropping sparrows or pinging them once in a blue moon will reduce the costs of crawling, indexing, and doing the behind-the-scenes work that consumes Google cash at an astonishing rate. Therefore, the myth of indexing the “Web” is going to persist, but the content of the index is not going to be “fresh.” This is the idea that some sites like whitehouse.gov have important information that must be in search results. Non-priority sites just disappear or fade. Eventually the users won’t know something is missing, a loss assisted by the decline in education for some Google users. The top one percent can spot bad or missing information. The other 99 percent? Well, good luck.
Third, the change means that publishers will have some options. [a] They can block Google’s spider and chase other options. How’s Yandex.ru sound? [b] They can buy advertising and move forward. I suggest these publishers ask a Google advertising representative what the minimum spend is to get traffic. [c] Publishers can join together and try to come up with a joint effort to resist the increasingly aggressive business actions of Google. Do you have a Google button on your remote? Well, you will. [d] Be innovative. Yeah, no comment.
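The bluebird / canary / sparrow scheme described above amounts to tiered crawl scheduling: assign each site a tier, recrawl the fast tiers often and the slow tier rarely. Here is a minimal sketch of the idea as a priority schedule. The tier names follow the text; the intervals, site names, and function are my illustrative assumptions, not Google’s actual crawler configuration.

```python
# Minimal sketch of tiered crawl scheduling. Intervals (in hours) are
# illustrative assumptions, not real crawler settings.
import heapq

CRAWL_INTERVALS = {"bluebird": 1, "canary": 6, "sparrow": 720}


def build_schedule(sites: dict) -> list:
    """Return (hours_until_recrawl, site) pairs, most urgent first."""
    heap = [(CRAWL_INTERVALS[tier], site) for site, tier in sites.items()]
    heapq.heapify(heap)  # a min-heap orders sites by crawl urgency
    return [heapq.heappop(heap) for _ in range(len(heap))]


sites = {
    "whitehouse.gov": "bluebird",          # accelerated heartbeat
    "trending-blog.example": "canary",     # watched for popularity surges
    "grandmas-quilts.example": "sparrow",  # once in a blue moon
}
for hours, site in build_schedule(sites):
    print(f"recrawl {site} in {hours} h")
```

The economics follow directly from the intervals: at these assumed settings a bluebird costs 720 crawls for every one crawl of a sparrow, so shedding sparrows is where the savings are.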
Net net: This item about the impact of AI Overviews is important. Just consider what Google gains and the pickle publishers and other Web sites now find themselves enjoying.
Stephen E Arnold, May 12, 2025
Google: Making Users Cross Their Eyes in Confusion
May 9, 2025
No AI, just a dinobaby watching the world respond to the tech bros.
I read “Don’t Make It Like Google.” The article points out that Google’s “control” extends globally. The company’s approach to software and design are ubiquitous. People just make software like Google because it seems “right.”
The author of the essay says:
Developers frequently aim to make things “like Google” because it feels familiar and, seemingly, the right way to do things. In the past, this was an implicit influence, but now it’s direct: Google became the platform for web applications (Chrome) and mobile applications (Android). It also created a framework for human-machine interaction: Material Design. Now, “doing it like Google” isn’t just desirable; it’s necessary.
Regulators in the European Union have not figured out how to respond to this type of alleged “monopoly.”
The author points out:
Most tech products now look indistinguishable, just a blobby primordial mess of colors.
Why? The author provides an answer:
Google’s actual UI & UX design is terrible. Whether mass-market or enterprise, web or mobile, its interfaces are chaotic and confusing. Every time I use Google Drive or the G Suite admin console, I feel lost. Neither experience nor intuition helps—I feel like an old man seeing a computer for the first time.
I quite like the reference to the author’s feeling like an “old man seeing a computer for the first time.” As a dinobaby, I find Google’s approach to making functions available — note, I am going to use a dinobaby term — stupid. Simple functions to me are sorting emails by sender and a keyword. I have not figured out how to do this in Gmail. I have given up on Google Maps. I have zero clue how to access the “old” street view with a basic map on a mobile device. Hey, am I the only person in an unfamiliar town trying to locate a San Jose-type office building in a tan office park? I assume I am.
The author points out:
Instead of prioritizing objectively good user experiences, the more profitable choice is often to mimic Google’s design. Not because developers are bad or lazy. Not because users enjoy clunky interfaces. But because it “makes sense” from the perspective of development costs and marketing. It’s tricky to praise Apple while criticizing Google because where Google has clumsy interfaces, Apple has bugs and arbitrary restrictions. But if we focus purely on interface design, Apple demonstrates how influence over users and developers can foster generations of well-designed products. On average, an app in Apple’s ecosystem is more polished and user-friendly than one in Google’s.
I am not sure that Apple is that much better than Google, but for me, the essay makes clear that giant US technology companies shape the user’s reality. The way information is presented and what expert users learn may not be appropriate for most people. I understand that these companies have to have a design motif or template. I understand that big companies have “experts” who determine what users do and want.
The author of the essay says:
We’ve become accustomed to the unintuitive interfaces of washing machines and microwaves. A new washing machine may be quieter, more efficient, and more aesthetically pleasing, yet its dials and icons still feel alien; or your washing machine now requires an app. Manufacturers have no incentive to improve this aspect—they just do it “like the Google of their industry.” And the “Google” of any industry inevitably gets worse over time.
I disagree. I think that making interfaces impossible is a great thing. Now here’s my reasoning: Who wants to expend energy figuring out a “better way”? The name of the game is to get eyeballs. Looking like Google or any of the big technology companies means that one just rolls over and takes what these firms offer as a default. Mind control and behavior conditioning is much easier and ultimately more profitable than approaching a problem from the user’s point of view. Why not define what a user gets, make it difficult or impossible to achieve a particular outcome, and force the individual to take what is presented as the one true way?
That makes business sense.
Stephen E Arnold, May 9, 2025
Waymo Self Driving Cars: Way Safer, Waymo Says
May 9, 2025
This dinobaby believes everything he reads online. I know that statistically valid studies conducted by companies about their own products are the gold standard in data collection and analysis. If you doubt this fact of business life in 2025, you are not in the mainstream.
I read “Waymo Says Its Robotaxis Are Up to 25x Safer for Pedestrians and Cyclists.” I was thrilled. Imagine. I could stand in front of a Waymo robotaxi holding my new grandchild and know that the vehicle would not strike us. I wonder if my son and his wife would allow me to demonstrate my faith in the Google.
The write up explains that a Waymo study proved beyond a shadow of doubt that Waymo robotaxis are way, way, way safer than any other robotaxi. Here’s a sampling of the proof:
92 percent fewer crashes with injuries to pedestrians
82 percent fewer crashes with injuries to kids and adults on bicycles
- 82 percent fewer crashes with injuries to senior citizens on scooters and adults on motorcycles.
Google has made available a big, fat research paper which provides more knock out data about the safety of the firm’s smart robot driven vehicles. If you want to dig into the document with inputs from six really smart people, click this link.
The study is a first, and it is, in my opinion, a quantumly supreme example of research. I do not believe that Google’s smart software was used to create any synthetic data. I know that if a Waymo vehicle and another firm’s robot-driven car speed at an 80 year old like myself 100 times each, the Waymo vehicles will only crash into me 18 times. I have no idea how many times I would be killed or injured if another firm’s smart vehicle smashed into me. Those are good odds, right?
The paper has a number of compelling presentations of data. Here’s an example:
This particular chart uses the categories of striking and struck, but only a trivial number of these kinetic interactions raises eyebrows. No big deal. That’s why the actual report consumed only 58 pages of text and hard facts. Obvious superiority.
Would you stand in front of a Waymo driving at you as the sun sets?
I am a dinobaby, and I don’t think an automobile would do too much damage if it did hit me. Would my son’s wife allow me to hold my grandchild in my arms as I demonstrated my absolute confidence in the Alphabet Google YouTube Waymo technology? Answer: Nope.
Stephen E Arnold, May 9, 2025