The Famous Google Paper about Attention, a Code Word for Transformer Methods
June 20, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Wow, many people are excited about a Bloomberg article called “The AI Boom Has Silicon Valley on Another Manic Quest to Change the World: A Guide to the New AI Technologies, Evangelists, Skeptics and Everyone Else Caught Up in the Flood of Cash and Enthusiasm Reshaping the Industry.”
In the tweets and LinkedIn posts, one small factoid is omitted from the second-hand content. If you want to read the famous Google Brain paper, the one whose authors are now doomed to watch the DeepMind-led future from the cheap seats, you can find “Attention Is All You Need” branded with the imprimatur of the Neural Information Processing Systems Conference held in 2017. Here’s the link to the paper.
For those who read the paper, I would like to suggest several questions to consider:
- What economic gain does Google derive from the proliferation of its transformer system and method; for example, the open sourcing of the code?
- What does “attention” mean for [a] the cost of training and [b] the ability to steer the system and method? (Please consider the question from the point of view of the user’s attention, the system and method’s attention, and a third-party meta-monitoring system such as advertising. A minimal sketch of the mechanism itself appears after this list.)
- What other tasks of humans, software, and systems can benefit from the use of the Transformer system and methods?
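For those who want more than the buzzword, the mechanism the paper’s title refers to fits in a few lines. Here is a minimal sketch in Python of scaled dot-product attention, the formula at the heart of “Attention Is All You Need”; the toy dimensions and random inputs are my own illustration, not anything from the paper:

```python
# Scaled dot-product attention from the 2017 paper:
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Toy sizes only; no training, no multi-head machinery.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each token "attends" to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional queries
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```

That weighting step is the “attention” Google open sourced; everything else in a transformer is scaffolding around it.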
I am okay with excitement for a 2017 paper, but including a link to the foundation document might be helpful to some, not many, but some.
Net net: Think about Google’s use of the words “trust” and “responsibility” when you answer the three suggested questions.
Stephen E Arnold, June 20, 2023
Google: Smart Software Confusion
June 19, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I cannot understand. Not only am I old; I am a dinobaby. Furthermore, I am like one of William James’s straw men: Easy to knock down or set on fire. Bear with me this morning.
I read “Google Skeptical of AI: Google Doesn’t Trust Its Own AI Chatbots, Asks Employees Not to Use Bard.” The write up asserts as “real” information:
It seems that Google doesn’t trust any AI chatbot, including its own Bard AI bot. In an update to its security measures, Alphabet Inc., Google’s parent company has asked its employees to keep sensitive data away from public AI chatbots, including their own Bard AI.
The go-to word for the Google in the last few weeks is “trust.” The quote points out that Google doesn’t “trust” its own smart software. Does this mean that Google does not “trust” that which it created and is making available to its “users”?
MidJourney, an interesting but possibly insecure and secret-filled smart software system, generated this image of Googzilla as a gatekeeper. Are gatekeepers in place to make money, control who does what, and record the comings and goings of people, data, and content objects?
As I said, I am a dinobaby, and I think I am dumb. I don’t follow the circular reasoning; for example:
Google is worried that human reviewers may have access to the chat logs that these chatbots generate. AI developers often use this data to train their LLMs more, which poses a risk of data leaks.
Now the ante has gone up. The issue is one of Google protecting itself from its own software. Furthermore, if the statement is accurate, I take the words to mean that Google’s Mandiant-infused, super duper security trooper cannot protect Google from itself.
Can my interpretation be correct? I hope not.
Then I read “This Google Leader Says ML Infrastructure Is Conduit to Company’s AI Success.” The “this” refers to an entity called Nadav Eiron, a Stanford PhD and Googley wizard. The headline’s “Is Conduit” baffles me because I thought the noun “conduit” needed an article in front of it. That goes to support my contention that I am a dumb humanoid.
Now let’s look at the text of this write up about Google’s smart software. I noted this passage:
The journey from a great idea to a great product is very, very long and complicated. It’s especially complicated and expensive when it’s not one product but like 25, or however many were announced that Google I/O. And with the complexity that comes with doing all that in a way that’s scalable, responsible, sustainable and maintainable.
I recall someone telling me when I worked at a Fancy Dan blue chip consulting firm, “Stephen, two objectives are zero objectives.” Obviously Google is orders of magnitude more capable than the bozos at the consulting company. Google can do 25 objectives. Impressive.
I noted this statement:
we created the OpenXLA [an open-source ML compiler ecosystem co-developed by AI/ML industry leaders to compile and optimize models from all leading ML frameworks] because the interface into the compiler in the middle is something that would benefit everybody if it’s commoditized and standardized.
I think this means that Google wants to be the gatekeeper or man in the middle.
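For the non-wizards, here is a minimal sketch, in Python with JAX, of what that compiler-in-the-middle does. JAX is one of several front ends that hand programs to the XLA compiler now housed under the OpenXLA project; the function below is my own toy example, not Google’s code:

```python
# JAX traces this function and ships it to XLA, the compiler "in the
# middle" that OpenXLA opened up. XLA fuses the element-wise
# operations into a single compiled kernel.
import jax
import jax.numpy as jnp

@jax.jit                            # compile via XLA on the first call
def fused(x):
    return jnp.tanh(x) * 2.0 + 1.0  # three ops, one fused kernel

x = jnp.arange(8.0)
print(fused(x))                     # later calls reuse the compiled binary
```

Whoever standardizes that hand-off layer sits between every framework above it and every chip below it, which is the gatekeeper position I am describing.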
Now let’s consider the first article cited. Google does not want its employees to use smart software because it cannot be trusted.
Is it logical to conclude that Google and its partners should use software which is not trusted? Should Google and its partners not use smart software because it is not secure? Given these constraints, how does Google make advances in smart software?
My perception is:
- Google is not sure what to do
- Google wants to position its untrusted and insecure software as the industry standard
- Google wants to preserve its position in a workflow to maximize its profit and influence in markets.
You may not agree. But when articles present messages which are alarming and clearly focused on market control, I turn up my skeptic control knob. By the way, the headline should be “Google’s Nadav Eiron Says Machine Learning Infrastructure Is a Conduit to Facilitate Google’s Control of Smart Software.”
Stephen E Arnold, June 19, 2023
Can One Be Accurate, Responsible, and Trusted If One Plagiarizes?
June 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Now that AI is such a hot topic, tech companies cannot afford to hold back due to small flaws. Like a tendency to spit out incorrect information, for example. One behemoth seems to have found a quick fix for that particular wrinkle: simple plagiarism. Eager to incorporate AI into its flagship Search platform, Google recently released a beta version to select users. Forbes contributor Matt Novak was among the lucky few and shares his observations in, “Google’s New AI-Powered Search Is a Beautiful Plagiarism Machine.”
The author takes us through his query and results on storing live oysters in the fridge, complete with screenshots of the Googlebot’s response. (Short answer: you can for a few days if you cover them with a damp towel.) He highlights passages that were lifted from websites, some with and some without tiny tweaks. To be fair, Google does link to its source pages alongside the pilfered passages. But why click through when you’ve already gotten what you came for? Novak writes:
“There are positive and negative things about this new Google Search experience. If you followed Google’s advice, you’d probably be just fine storing your oysters in the fridge, which is to say you won’t get sick. But, again, the reason Google’s advice is accurate brings us immediately to the negative: It’s just copying from websites and giving people no incentive to actually visit those websites.
Why does any of this matter? Because Google Search is easily the biggest driver of traffic for the vast majority of online publishers, whether it’s major newspapers or small independent blogs. And this change to Google’s most important product has the potential to devastate their already dwindling coffers. … Online publishers rely on people clicking on their stories. It’s how they generate revenue, whether that’s in the sale of subscriptions or the sale of those eyeballs to advertisers. But it’s not clear that this new form of Google Search will drive the same kind of traffic that it did over the past two decades.”
Ironically, Google’s AI may shoot itself in the foot by reducing traffic to informative websites: it needs their content to answer queries. Quite the conundrum it has made for itself.
Cynthia Murrell, June 14, 2023
Google: FUD Embedded in the Glacier Strategy
June 9, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Fly to Alaska. Stand on a glacier and let the guide explain that the glacier moves, just slowly. That’s the Google smart software strategy in a nutshell, now unfolding under Code Red or Red Alert or “My goodness, Microsoft is getting media attention for something other than lousy code and security services. We have to do something sort of quickly.”
One facet of the game plan is to roll out a bit of FUD or fear, uncertainty, and doubt. That will send chills to some interesting places, won’t it? You can see this in action in the article “Exclusive: Google Lays Out Its Vision for Securing AI.” Feel the fear because AI will kill humanoids unless… unless you rely on Googzilla. This is the only creature capable of stopping the evil that irresponsible smart software will unleash upon you, everyone, maybe your dog too.
The manager of strategy says, “I think the fireball of AI security doom is going to smash us.” The top dog says, “I know. Google will save us.” Note to image trolls: This outstanding illustration was generated in a nonce by MidJourney, not an under-compensated creator in Peru.
The write up says:
Google has a new plan to help organizations apply basic security controls to their artificial intelligence systems and protect them from a new wave of cyber threats.
Note the word “plan”; that is, the here-and-now equivalent of vaporware or stuff that can be written about and issued as “real news.” The guts of the Google PR is that Google has six easy steps for its valued users to take. (A minimal sketch of the first step appears after the list.) Each step brings that user closer to the thumping heart of Googzilla; to wit:
- Assess what existing security controls can be easily extended to new AI systems, such as data encryption;
- Expand existing threat intelligence research to also include specific threats targeting AI systems;
- Adopt automation into the company’s cyber defenses to quickly respond to any anomalous activity targeting AI systems;
- Conduct regular reviews of the security measures in place around AI models;
- Constantly test the security of these AI systems through so-called penetration tests and make changes based on those findings;
- And, lastly, build a team that understands AI-related risks to help figure out where AI risk should sit in an organization’s overall strategy to mitigate business risks.
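To make step one concrete, here is a minimal sketch in Python of extending an existing control, encryption at rest, to an AI artifact such as a model weights file. The cryptography package and the file names are my assumptions for illustration, not anything Google prescribes:

```python
# A toy illustration of applying an existing control (encryption at
# rest) to an AI artifact. File names are hypothetical.
from cryptography.fernet import Fernet

# Stand-in for a real weights file.
with open("model_weights.bin", "wb") as f:
    f.write(b"\x00" * 1024)

key = Fernet.generate_key()  # in practice, fetch from a key management service
cipher = Fernet(key)

with open("model_weights.bin", "rb") as f:
    encrypted = cipher.encrypt(f.read())  # encrypt the raw weights

with open("model_weights.bin.enc", "wb") as f:
    f.write(encrypted)

# Decrypt only when the model is loaded for training or inference.
assert cipher.decrypt(encrypted) == b"\x00" * 1024
```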
Does this sound like Mandiant-type consulting backed up by Google’s cloud goodness? It should because when one drinks Google juice, one gains Google powers over evil and also Google’s competitors. Google’s glacier strategy is advancing… slowly.
Stephen E Arnold, June 9, 2023
Google: Responsible and Trustworthy Chrome Extensions with a Dab of Respect the User
June 7, 2023
“More Malicious Extensions in Chrome Web Store” documents some Chrome extensions (add-ins) which allegedly compromise a user’s computer. Google has been using words like responsible and trust with increasing frequency. With Chrome in use by more than half of those with computing devices, what’s the dividing line between trust and responsibility for Google’s smart software and for stupid but market-leading software like Chrome? If a non-Google third party can spot allegedly problematic extensions, why can’t Google? Is part of the answer, “Talk is cheap. Fixing software is expensive”? That’s a good question.
The cited article states:
… we are at 18 malicious extensions with a combined user count of 55 million. The most popular of these extensions are Autoskip for Youtube, Crystal Ad block and Brisk VPN: nine, six and five million users respectively.
The write up crawfishes, stating:
Mind you: just because these extensions monetized by redirecting search pages two years ago, it doesn’t mean that they still limit themselves to it now. There are way more dangerous things one can do with the power to inject arbitrary JavaScript code into each and every website.
My reaction is simple: Why are these allegedly malicious components in the Google “store” in the first place?
I think the answer is obvious: Talk is cheap. Fixing software is expensive. You may disagree, but I hold fast to my opinion.
Stephen E Arnold, June 7, 2023
Trust in Google and Its Smart Software: What about the Humans at Google?
May 26, 2023
The buzz about Google’s injection of its smart software into its services is crowding out other, more interesting sounds. For example, navigate to “Texas Reaches $8 Million Settlement With Google Over Blatantly False Pixel Ads: Google Settled a Lawsuit Filed by AG Ken Paxton for Alleged False Advertisements for its Google Pixel 4 Smartphone.”
The write up reports:
A press release said Google was confronted with information that it had violated Texas laws against false advertising, but instead of taking steps to correct the issue, the release said, “Google continued its deceptive advertising, prioritizing profits over truthfulness.”
Google is pushing forward with its new mobile devices.
Let’s consider the seven wonders of Google’s software, its AI principles. You can find these at this link or summarized in my article “The Seven Wonders of the Google AI World.”
Let’s consider principle one: Be socially beneficial.
I am wondering how the allegedly deceptive advertising encourages me to trust Google.
Let’s consider principle four: Be accountable to people.
My recollection is that Google works overtime to avoid being held accountable. The company relies upon its lawyers, its lobbyists, and its marketing to float above the annoyances of nation states. In fact, when greeted with substantive actions by the European Union, Google stalls and does not make available its latest and greatest services. The only accountability seems to be a legal action despite Google’s determined lawyerly push back. Avoiding accountability requires intermediaries because Google’s senior executives are busy working on principles.
Kindergarten behavior.
MidJourney captures the thrill of two young children squabbling over a piggy bank. I wonder if MidJourney knows what is going on in the newly merged Google smart software units.
Google approaches some problems like kids squabbling over a piggy bank.
Net net: The Texas fine makes clear that some do not trust Google. The “principles” are marketing hoo hah. But everyone loves Google, including me, my French bulldog, and billions of users worldwide. Everyone will want a new $1800 folding Pixel, which is just great based on the marketing information I have seen. It has so many features and works wonders.
Stephen E Arnold, May 26, 2023
More Google PR: For an Outfit with an Interesting Past, Chattiness Is Now a Core Competency
May 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
How many speeches, public talks, and interviews did Sergey Brin, Larry Page, and Eric Schmidt do? To my recollection, not too many. And what about now? Larry Page is tough to find. Mr. Brin is sort of invisible. Eric Schmidt has backed off his claim that Qwant keeps him up at night. But Sundar Pichai, one half of the Sundar and Prabhakar Comedy Show, is quite visible: AI-everywhere keynote speeches, essays about smart software, and now an original “he wrote it himself” essay in the weird salmon-tinted newspaper The Financial Times. Yeah, pinkish.
Smart software provided me with an illustration of a fast talker pitching the future benefits of a new product. Yep, future probabilities. Rock solid. Thank you, MidJourney.
What’s with the spotlight on the current Google big wheel? Gentle reader, the visibility is one way Google is trying to advance its agenda. Before I offer my opinion about the Alphabet Google YouTube agenda, I want to highlight three statements in “Google CEO: Building AI Responsibly Is the Only Race That Really Matters.”
Statement from the Google essay #1
At Google, we’ve been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right.
The theme is that Google has been doing smart software for a long time. Let’s not forget that the GOOG released the Transformer model as open source and then sat on its Googley paws while “stuff happened” starting in 2018. Was that responsible? If so, what does Google mean when it uses the word “responsible” as it struggles to cope with the meme “Google is late to the game”? For example, Microsoft pulled off a global PR coup with its Davos smart software announcements. Google responded with the Paris demonstration of Bard, a hoot for many in the information retrieval killing field. That performance of the Sundar and Prabhakar Comedy Show flopped. Meanwhile, Microsoft pushed its “flavor” of AI into its enterprise software and cloud services. My experience is that for every big PR action, there is an equal or greater PR reaction. Google is trying to catch faster race cars with words, not a better, faster, and cheaper machine. The notion that Google “gets it right” means one thing to me: maintaining quasi-monopolistic control of its market and generating the ad revenue. Google, after 25 years, is walking the same old Chihuahua in a dog park filled with younger, more agile canines. After 25 years of me-too efforts and flops like solving death, revenue is the ONLY thing that matters to stakeholders. The Sundar and Prabhakar routine is wearing thin.
Statement from the Google essay #2
We have many examples of putting those principles into practice…
The “principles” apply to Google’s AI implementation. But the word “principles” is an interesting one. Google is paying fines for ignoring laws and its principles. Google is under the watchful eye of regulators in the European Union due to Google’s principles. China wanted Google to change, and Google then beavered away on a China-acceptable search system until the cat was let out of the bag. Google is into equality, a nice principle, which was implemented by firing AI researchers who complained about what Google AI was enabling. Google is not the outfit I would consider the optimal source of enlightenment about principles. High tech in general and Google in particular are viewed with increasing concern by regulators in US states and assorted nation states. Why? The Googley notion of principles is not what others understand the word to denote. In fact, some might say that Google operates in an unprincipled manner. Is that why companies like Foundem and regulatory officials point out behaviors which some might find predatory, mendacious, or illegal? Principles, yes, principles.
Statement from the Google essay #3
AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more.
Many years ago, I was in a meeting in DC, and the Donald Rumsfeld quote about information was making the rounds. Good appointees loved to cite this Donald. Here’s the quote from 2002:
There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.
I would humbly suggest that smart software is chock full of known unknowns. But humans are not very good at predicting the future. When it comes to acting “responsibly” in the face of unknown unknowns, I dismiss those who dare to suggest that humans can predict the future in order to act in a responsible manner. Humans do not act responsibly with either predictability or reliability. My evidence is part of your mental furniture: Racism, discrimination, continuous war, criminality, prevarication, exaggeration, failure to regulate damaging technologies, ineffectual action against industrial polluters, etc. etc. etc.
I want to point out that the Google essay penned by one half of the Sundar and Prabhakar Comedy Show team could be funny if it were not a synopsis of the digital tragedy of the commons in which we live.
Stephen E Arnold, May 23, 2023
Neeva: Another Death from a Search Crash on the Information Highway
May 22, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
What will forensic search experts find when they examine the remains of Neeva? The “gee, we failed” essay “Next Steps for Neeva” presents one side of what might be an interesting investigation for a bushy-tailed and wide-eyed Gen Z search influencer. I noted some statements which may have been plucked from speeches at the original Search Engine Conferences ginned up by an outfit in the UK or from academic post mortems at the old International Online Meeting once held in the companionable Olympia London.
I noted these statements from the cited document:
Statement 1: The users of a Web search system
We started Neeva with the mission to take search back to its users.
The reality is that 99 percent of people using a Web search engine are happy when sort-of-accurate information is provided free. Yep, no one wants to pay for search. That’s the reason that when a commercial online service like LexisNexis loses one big client, it is expensive, time consuming, and difficult to replace the revenue. One former LexisNexis big wheel told me when we met in his limousine in the parking lot of the Cherry Hill Mall: “If one of the top 100 law firms goes belly up, we need a minimum of 200 new law firms to sign up for our service and pay for it.”
“Mommy, I failed Search,” says Timmy Neeva. Mrs. Neeva asks, “What caused your delusional state, Timmy?” The artwork is a result of the smart software MidJourney.
Users don’t care about for-fee search when those users wouldn’t know whether a hit in a results list was right, mostly right, mostly wrong, or stupidly crazy. Free is the fuel that pulls users, and without advertising, there’s no chance a free service will be able to generate enough cash to index, update the index, and develop new features. At the same time, the plumbing is leaking. Plumbing repairs are expensive: new machines, new ways to reduce power consumption, and oodles of new storage devices.
Users want free. Users don’t want to compare the results from a for-fee service and a free service. Users want free. After 25 years, the Google is the champion of free search. Like the wizards behind the old Xoogler search system Search2, Neeva’s wizards never figured out that most users don’t care about Fancy Dan yip yap about search.
Statement 2: An answer engine.
We rallied the Neeva team around the vision to create an answer engine.
Shades of DR-LINK: Users want answers. In 1981, a former Predicasts executive named Paul Owen told me, “Dialog users want answers.” That sounds logical, and to many expert informationists it is the Gospel according to Online. The reality is that users want crunchy, bite-sized chunks of information which appear to answer the question, or almost-right answers that are “good enough” or “close enough for horseshoes.”
Users cannot differentiate between correct and incorrect information. Heck, some developers of search engines don’t know the difference between weaponized information and content produced by a middle school teacher about the school’s graduation ceremony. Why? Weaponized information is abundant; non-weaponized information may not pass the user’s sniff test. And the middle school graduation item may have a typo about the start time, or the principal of the school may have changed his mind due to an active shooter situation. Something output from a computer is believed to be credible, accurate, and “right.” An answer engine is what a free Web search engine spits out. The TikTok search spits out answers, and no one wonders if the results lists are shaped by Chinese interests.
Search and retrieval has been defined by Google. The company has a 90-plus percent share of the Web search traffic in North America and Western Europe. (In Denmark, the company has 99 percent of Danish users’ search traffic.) People in Denmark are happier, and it is not because Google search delivers better or more accurate results. Google is free, and it answers questions.
The baloney that it takes just one click to change search engines sounds great. The reality is, as Neeva found out, that no one wants to click away from what is perceived to work for them. Neeva’s yip yap about smart software proves that the jazz about artificial intelligence is unlikely to change how free Web search works in Google’s backyard. Samsung did not embrace Bing because users would rebel.
Answer engine. Baloney. Users want something free that will make life easier; for example, a high school student looking for a quick way to crank out a 250-word essay about global warming or how to make a taco. ChatGPT is not answering questions; the application is delivering something that is highly desirable to a lazy student. By the way, at least the lazy student had the git up and go to use a system to spit out a bunch of recycled content that is good enough. But an answer engine? No, an online convenience store is closer to the truth.
Statement 3:
We are actively exploring how we can apply our search and LLM expertise in these settings, and we will provide updates on the future of our work and our team in the next few weeks.
My interpretation of this statement is that a couple of Neeva professionals will become venture centric. Others will become consultants. A few will join the handful of big companies which are feverishly trying to use “smart software” to generate more revenue. Will there be some who end up working at Philz Coffee? Yeah, some. Perhaps another company will buy the “code,” but valuing something that failed is likely to prove tricky. Who remembers who bought Entopia? No one, right?
Net net: The Gen Z forensic search failure exercise will produce some spectacular Silicon Valley news reporting. Neeva is explaining its failure, but that failure was presaged when Fast Search & Transfer pivoted from Web search to the enterprise, failed, and was acquired by Microsoft. Where is Fast Search now that the smart Bing is soon to be everywhere? The reality is that Google has had 25 years to cement its search monopoly. Neeva did not read the email. So Neeva sucked up investment bucks with a song and dance about zapping the Big Bad Google with a death ray. Yep, another example of high school science club mentality touched by spreadsheet fever.
Well, the fever broke.
Stephen E Arnold, May 22, 2023
Google DeepMind Risk Paper: 60 Pages with a Few Googley Hooks
May 22, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved in writing, just a dumb humanoid.
I read the long version of “Ethical and Social Risks of Harm from Language Models.” The paper is mostly statements backed by footnotes to individuals who created journal-type articles which allegedly prove the point of each assertion. With about 25 percent of peer-reviewed research including shaped, faked, or weaponized data, I am not convinced by footnotes. Obviously the DeepMinders believe that footnotes make a case for the Google way. I am not convinced because the Google has to find a way to control the future of information. Why? Advertising money and hoped-for Mississippis of cash.
The research paper dates from 2021 and is part of Google’s case for being ahead of the AI responsibility game. The “old” paper reinforces the myth that Google is ahead of everyone else in the AI game. The explanation for Sam AI-man’s and Microsoft’s marketing coup is that Google had to go slow because Google knew that there were ethical and social risks of harm from the firm’s technology. Google cares about humanity! The old days of “move fast and break things” are very 1998. Today Google is responsible. The wild and crazy dorm days are over. Today’s Google is concerned, careful, judicious, and really worried about its revenues. I think the company worries about legal actions, its management controversies, and its interdigital duel with the Softies of Redmond.
A young researcher desperately seeking footnotes to support a specious argument. With enough footnotes, one can move the world it seems. Art generated by the smart software MidJourney.
I want to highlight four facets of the 60-page risks paper which are unlikely to get much, if any, attention from today’s “real” journalists.
Googley hook 1: Google wants to frame the discussion. Google is well positioned to “guide mitigation work.” The examples in the paper are selected for “guiding action to resolve any issues that can be identified in advance.” My comment: How magnanimous of Google. Framing stakes out the Googley territory. Why? Google wants to be Googzilla and reap revenue from its users, licensees, models, synthetic data, applications, and advertisers. You can find the relevant text in the paper on page 6 in the paragraph beginning “Responsible innovation.”
Googley hook 2: Google’s risks paper references fuzzy concepts like “acceptability” and “fair.” Like love, truth, and ethics, the notion of “acceptability” is difficult to define. Some might suggest that it is impossible to define. But Google is up to the task, particularly for application spaces unknown at this time. What happens when you apply “acceptability” to “poor quality information”? One just accepts the judgment of the outfit doing the framing. That’s Google. Game. Set. Match. You can find the discussion of “acceptability” on page 9.
Googley hook 3: Google is not going to make the mistake of Microsoft and its racist bot Tay. No way, José. What’s interesting is that the only company mentioned in the text of the 60-page paper is Microsoft. Furthermore, the toxic aspects of large language models are hard for technologies to detect (page 18). Plus, large language models can infer a person’s private data, so “providing true information is not always beneficial” (page 21). What’s the fix? Use smaller sets of training data… maybe (page 22). But one can fall back on trust, for instance, trust in Google the good, to deal with these challenges. In fact, trust Google to choose training data to deal with some of the downsides of large language models (page 24).
Googley hook 4: Mitigating risk in smart software dependent on large language models is expensive. Money, smart people who are in short supply, and computing resources all cost plenty. Therefore, one need not focus on the origin point (large language model training and configuration). Direct attention at those downstream instead. Those users can deal with the identified 21 problems. The Google method puts Google out of the primary line of fire. There are more targets for the aggrieved to seek and shoot at (page 37).
When I step back from the article, which is two years old, it is obvious Google was aware of some potential issues with its approach. Dr. Timnit Gebru was sacrificed on a pyre of spite. (She does warrant a couple of references and a footnote or two, but she’s now a Xoogler.) One side effect was that Dr. Jeff Dean, who was not amused by the stochastic parrot, has been kicked upstairs, and the UK “leader” is now herding the little wizards of Google AI.
The conclusion of the paper echoes the Google-knows-best argument. Google wants a methodological toolkit because that will keep other people busy. Google wants others to figure out “fair,” an approach similar to that of Sam Altman (OpenAI), who begs for regulation of a sector about which much is unknown.
The answer, according to the risk analysis, is “responsible innovation.” I would suggest that this paper, the television interviews, and the PR efforts to get the Google story in as many places as possible are designed to make the sluggish Google a player in the AI game.
Who will be fooled? Will Google catch up in this Silicon Valley, venture-invigorating hill climb? For me, the paper with the footnotes is just part of Google’s PR and marketing effort. Your mileage may vary. May relevance be with you, gentle reader.
Stephen E Arnold, May 22, 2023
The Seven Wonders of the Google AI World
May 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read the content at this Google Web page: https://ai.google/responsibility/principles/. I found it darned amazing. In fact, I thought of the original seven wonders of the world. Let’s see how Google’s statements compare with the down-through-time achievements of mere mortals from ancient times.
Let’s imagine two comedians explaining the difference between the two important sets of landmarks in human achievement. Here are the entertainers. These impressive individuals are a product of MidJourney’s smart software. The drawing illustrates the possibilities of artificial intelligence applied to regular intelligence and a certain big ad company’s capabilities. (That’s humor, gentle reader.)
Here are the seven wonders of the world according to the semi-reliable National Geographic (I loved those old Nat Geos when I was in the seventh grade in 1956-1957!):
- The pyramids of Giza (tombs or alien machinery, take your pick)
- The hanging gardens of Babylon (a building with a flower show)
- The temple of Artemis (goddess of the hunt, or maybe of relevant advertising?)
- The statue of Zeus (the thunder god like Googzilla?)
- The mausoleum at Halicarnassus (a tomb)
- The colossus of Rhodes (the Greek sun god who inspired Louis XIV and his just-so hoity-toity pals)
- The lighthouse of Alexandria (bright light which baffles some who doubt a fire can cast a bright light to ships at sea)
Now the seven wonders of the Google AI world:
- Socially beneficial AI (how does AI help those who are not advertisers?)
- Avoid creating or reinforcing unfair bias (What’s Dr. Timnit Gebru say about this?)
- Be built and tested for safety? (Will AI address videos on YouTube which provide links to cracked software; e.g., this one?)
- Be accountable to people? (Maybe people who call for Google customer support?)
- Incorporate privacy design principles? (Will the European Commission embrace the Google, not litigate it?)
- Uphold high standards of scientific excellence? (Interesting. What’s “high” mean? What’s scientific about threshold fiddling? What’s “excellence”?)
- AI will be made available for uses that “accord with these principles.” (Is this another “Don’t be evil” moment?)
Now let’s evaluate in broad strokes the two sets of seven wonders. My initial impression is that the ancient seven wonders were tangible, not built on the future tense, the progressive tense, and the exhaust fumes of OpenAI and others in the AI game. After a bit of thought, I am not sure Google’s management will be able to convince me that its personnel policies, its management of its high school science club, and its knee-jerk reaction to the Microsoft Davos slam dunk are more than bloviating. Finally, the original seven wonders are either ruins or lost to all but a MidJourney reconstruction or a Bing output. Google is in the “careful” business. Translating: Google is Googley. OpenAI and ChatGPT are delivering blocks and stones for a real wonder of the world.
Net net: The ancient seven wonders represent things humans aspired to or honored. The Google seven wonders of AI are, in my opinion, marketing via uncoordinated demos. However, Google will make more money than any of the ancient attractions did. The Google list may be perfect for the next Sundar and Prabhakar Comedy Show. Will it play in Paris? The last one there flopped.
Stephen E Arnold, May 12, 2023