The Google: Disrupting Education in the Covid Era
March 15, 2021
I thought the Covid thing disrupted education. As a result, Google’s video conferencing system failed to seize an opportunity. Even poor, confused Microsoft put some effort into Teams. Sure, Teams is not the most secure or easy to use video conferencing service, but it has more features than Google has chat apps and ad options. Google also watched the Middle Kingdom’s favorite video service “zoom” right into a great big lead. Arguably, Google’s video conferencing tool should have hooked into the Chromebook, which is in the hands of some students. But what’s happened? Zoom, zoom, zoom.
I read this crisp headline: “Inside Google’s Plan to Disrupt the College Degree (Exclusive). Get a First Look at Google’s New Certificate Programs and a New Feature of Google Search Designed to Help Job Seekers Everywhere.”
Wow. The write up is an enthusiastic extension of Google Gebru-gibberish. Here’s why:
- Two candidates. One is a PhD from Princeton with a degree in computer science. The other is a minority certificate graduate. Both compete for the same job. Which candidate gets the job?
- One candidate, either Timnit Gebru or Margaret Mitchell. Both complete a Google certification program. Will these individuals get a fair shake and maybe get hired?
- Many female candidates from India. Some are funded by Google’s grant to improve opportunities for Indian females. How many will get Google jobs? [a] 80 to 99 percent, [b] 60 to 79 percent, [c] fewer than 60 percent? (I am assuming this grant and certificate thing are more than a tax deduction or hand waving.)
High school science club management decisions are fascinating to me.
Got your answers? I have mine.
For the PhD versus the certificate holder, the answer is it depends. A PhD with non-Googley notions about ethical AI is likely to be driving an Uber. The certificate holder with the right mental orientation gets to play Foosball and do Googley things.
For the Gebru – Mitchell question, my answer is neither. Female, non-Googley, and already Xooglers. Find your future elsewhere is what I intuit.
And the females in India. Hard to say. The country is far away. The $20 million or so is too little. The cultural friction within the still existing castes is too strong. Maybe a couple is my guess.
In short, Google can try to disrupt education. But Covid has disrupted education. Another outfit has zoomed into chinks in the Google carapace. So marketing it is. It may work. Google is indeed Google.
Stephen E Arnold, March 15, 2021
Encomium for Google AI: But What about the Ethics Issue?
March 9, 2021
The comments attached to a 2020 essay “Paths to the Future: A Year at Google Brain” are effusive. I noticed that there was no reference to the personnel issues roiling Google. The word “ethics” does not appear in the write up. Several statements caught my attention. Here these are with my question or comment in italics.
- “Brain was a magnet for Google’s celebrity employees.” Two tier system? Yep, the celebrities and the others. For the author, this celebrity thing is exciting. For the others, it may be the root of discontent among the non-celebrity employees. Remarkable revelation from a young employee with little work experience in the Google environment.
- “Google is an “AI-first” company, with the company seeking to implement machine learning in nearly everything it does.” Smart software is important. It seems obvious to me that anyone questioning the fairness of such smart software is not going to fit into the celebrity category. Thus, a researcher with data suggesting systemic bias is a no-go. Hasta la vista, Dr. Gebru. The message is get with the program or get gone.
- “The culture of Google Brain reminded me of what I’ve read about Xerox PARC.” Yep, the Xerox. The famous PARC. Ethernet, the mouse, bouncy visualizations. Just zero common sense when commercialization was required. Mr. Jobs paid a visit. The wizard showed off. Mr. Jobs created a reasonably successful company. Xerox PARC? A legend, just no Apple-like commercial success with the graphical interface and the zippy Alto.
These three statements appear in the introduction to the essay. They are important for several reasons:
First, Google’s class system is evident and one of the first things the young wizard noticed. The two tier structure enshrines the high school science club approach to managing the firm.
Second, AI is a big deal at Google. Anyone not getting in line is headed for the door.
Third, the PARC touchstone makes it clear that inventing the future and doing cool things is the real work of the celebrity engineers.
What’s this mean for the lesser folk at Google? Unionization, push back, insubordination, and scorn for rah rah essays that make the Googleplex and the GOOG into just the most special company.
Autographed pictures? Probably coming in the near future as Google works to generate non-ad revenue. And ethics? Sure, the celebrity engineers ponder that issue 24×7.
Stephen E Arnold, March 9, 2021
Google Gets Kicked Out of Wizard Class: Gebru-Gibberish to Follow
March 5, 2021
I read “AI Ethics Research Conference Suspends Google Sponsorship.” Imagine, a science club type organization suspended. Assuming the “real” and ad-littered story is accurate, here’s the scoop:
The ACM Conference for Fairness, Accountability, and Transparency (FAccT) has decided to suspend its sponsorship relationship with Google, conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand confirmed today. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru. Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak.
The Association for Computing Machinery no less. How many Googlers and Xooglers are in this ACM entity? How many Googler and Xoogler papers has the ACM accepted? Now suspended. Yikes, just a high school punishment for an outfit infused with the precepts of high school science club management and behavior.
What’s interesting is the injection of the notion of “ethical.” The world’s largest educational and scientific computing society is not into talking, understanding the Google point of view, or finding common ground.
Disruptors, losers, and non-fitting wizards and wizardettes are not appropriate for the ethics subgroup of ACM. Oh, is that ethical? Good question.
But ACM knows who writes checks. The ad besotted article states:
Putting Google sponsorship on hold doesn’t mean the end of sponsorship from Big Tech companies, or even Google itself. DeepMind, another sponsor of the FAccT conference that incurred an AI ethics controversy in January, is also a Google company. Since its founding in 2018, FAccT has sought funding from Big Tech sponsors like Google and Microsoft, along with the Ford Foundation and the MacArthur Foundation. An analysis released last year that compares Big Tech funding of AI ethics research to Big Tobacco’s history of funding health research found that nearly 60% of researchers at four prominent universities have taken money from major tech companies.
Should I raise another question about the ethics of this wallet sensitive posture? Nah. Money talks.
I find the blip on the ethical radar screen quite amusing. One learns each day what really matters in the world of computers and smart software. That’s a plus.
I am waiting for Google Gebru-gibberish to explain the situation. I am all ears.
Stephen E Arnold, March 5, 2021
Google Gets into Insurance
March 3, 2021
Worrying about the relevance of search results? You probably should. The online ad giant is facing some big problems. And what do giant corporations do when their core business faces competitive, legal, employee, management, and customer pressure?
Give up.
Here’s the answer: Sell insurance.
“Google Rolls Out First of Its Kind Cyber Insurance Program for Cloud Customers” reports:
Google LLC has teamed up with two major insurers to develop a cyber security insurance offering that will provide Google Cloud customers who sign up with coverage against cyber attacks.
Ask an actuary. Is insurance a good business? Listen to the answer… carefully.
The article notes:
The Risk Manager tool is available to Google Cloud customers by request. As for the cyber insurance coverage against data breaches, it will initially be offered to organizations in the U.S.
There are several implications of this deal. But it is early days, and one cannot purchase insurance to cover a ride in a Waymo infused vehicle directly from the GOOG yet.
The thoughts which ran through my mind after reading the news story were:
- Is Google cashing in on SolarWinds’ paranoia?
- Does selling insurance for cloud services suggest that cloud services are a big fat bad actor target which cannot be adequately protected?
- Will Google insure homes, yachts, and health?
- Has Google run out of ideas for generating revenue from its home brew and me too technology?
I have no answers, just hunches.
The Google has looked backwards to bottomry contracts shaped in Babylon. When did this insight dawn? Round about 4,000 before common era (that’s BC in thumbtyper speak).
Will Google innovate with stone flaking methods and sell non fungible tokens for these artifacts?
Stephen E Arnold, March 3, 2021
Google: The Curse of Search
March 2, 2021
Remember when Eric Schmidt objected to information about his illustrious career being made available? I sure do. As I recall, the journalist used Google search to locate interesting information. MarketWatch quoted the brilliant Mr. Schmidt as saying:
If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place, but if you really need that kind of privacy, the reality is that search engines including Google do retain this information for some time, and it’s important, for example that we are all subject in the United States to the Patriot Act. It is possible that that information could be made available to the authorities.
Nifty idea.
Forbes, the capitalist tool I believe, published “Google Issues Quality Warning For Millions Of Google Photos Users.” That write up pivots on using information retrieval to illustrate that Google overlooked its own “right to be forgotten” capability.
The capitalist tool states:
At its 2015 launch, Google Photos creator Anil Sabharwal promised that High Quality uploads offered “near-identical visual quality” when compared to your original photos. But now Google wants us to see a seemingly huge difference in quality between the two settings and to be willing to pay extra for it. It seems “Original Quality” is now suddenly something for which we should all be willing to pay extra.
So what?
Google, which is struggling to control its costs, wants to generate money. One way is to take away a free photo service and get “users” to pay for storage. And store what, you ask.
Google is saying that its 2015 high quality image format is no good. Time to use “original quality”; that is, larger file sizes and more storage requirements.
The only hitch in the git along is that in 2015 Google emitted hoo-hah about its brilliant image method. Now the Google is rewriting history.
The problem: Google’s search engine with some coaxing makes it easy to spot inconsistencies in the marketing spin. Nothing to hide. Words of wisdom.
Stephen E Arnold, March 2, 2021
Judge in Google Trial Not Googley
March 1, 2021
I read an inadvertently amusing story called “Judge in Google Case Disturbed That Incognito Users Are Tracked.” Google is engaged in one of its many legal battles. This case concerns Brown v. Google, 20-cv-03664, U.S. District Court, Northern District of California (San Jose). The presiding judge is U.S. District Judge Lucy Koh. The write up reports:
In this case, Google is accused of relying on pieces of its code within websites that use its analytics and advertising services to scrape users’ supposedly private browsing history and send copies of it to Google’s servers. Google makes it seem like private browsing mode gives users more control of their data, Amanda Bonn, a lawyer representing users, told Koh. In reality, “Google is saying there’s basically very little you can do to prevent us from collecting your data, and that’s what you should assume we’re doing,” Bonn said.
Just as “unlimited” means “you have to be kidding”, the word “incognito” does not mean hidden. Judge Koh apparently was not aware of the GOOG’s native language. Google’s lawyer allegedly suggested that Google “expressly discloses” its practices.
I laughed so hard that my eyes watered. No, I was not emulating happy crying.
The judge did not find Google’s argument as funny as I did. The write up reports:
The judge demanded an explanation “about what exactly Google does,” while voicing concern that visitors to the court’s website are unwittingly disclosing information to the company. “I want a declaration from Google on what information they’re collecting on users to the court’s website, and what that’s used for.”
My hunch is that Google’s legal eagle Stephen Broome may be swept clean. The door is now open in Judge Koh’s courtroom for more amusing Google speak and the resultant misunderstandings.
“Expressly disclosing.” That is a good one. Where’s Jack Benny when we need him to work the phrase into a skit with Phil Harris?
Stephen E Arnold, March 1, 2021
Gebru-Gibberish: A Promise, Consultants, and Surgical Management Action
March 1, 2021
I read “Google Reportedly Promises Change to Research Team after High Profile Firings.” The article explains that after female artificial intelligence researchers found their futures elsewhere, Google (the mom and pop neighborhood online ad agency):
will change its research review procedures this year.
Okay, 10 months.
The write up points out that the action is
an apparent bid to restore employee confidence in the wake of two high-profile firings of prominent women from the [AI ethics] division.
Yep, words. I found this passage redolent of Gebru-gibberish; that is, wordage which explains how smart software ethics became a bit of a problem for the estimable Google outfit:
By the end of the second quarter, the approvals process for research papers will be more smooth and consistent, division Chief Operating Officer Maggie Johnson reportedly told employees in the meeting. Research teams will have access to a questionnaire that allows them to assess their projects for risk and navigate review, and Johnson predicted that a majority of papers would not require additional vetting by Google. Johnson also said the division is bringing in a third-party firm to help it conduct a racial-equity impact assessment, Reuters reports, and she expects the assessment’s recommendations “to be pretty hard.”
Okay. A questionnaire. A third party firm. Pretty hard.
What’s this mean?
The Ars Technica write up does not translate. However, from my vantage point in rural Kentucky, I understand the Gebru-gibberish to mean:
- Talk about ethical smart software and the GOOG reacts in a manner informed by high school science club principles
- Female AI experts are perceived as soft targets but that may be a misunderstanding in the synapses of the Google
- The employee issues at Google are overshadowing other Google challenges; for example, the steady rise of Amazon product search, the legal storm clouds, and struggles with the relevance of ads displayed in response to user queries or viewed YouTube videos.
Do I expect more Gebru-gibberish?
Will Microsoft continue to insist that its SAML is the most wonderful business process in the whole wide world?
Stephen E Arnold, March 1, 2021
Google: Personal Data Unrelated to AI Ethics?
February 26, 2021
I read “Google Finally Reveals the Terrifying Amount of Data Gmail Collects on iPhone.” I thought, “Terrifying? From the Google?” I know that the company has some management challenges, particularly in its ethics unit, but startle, petrify, awe?
The write up asserts:
Google’s labels indicate that its apps will collect plenty of user data for several purposes. This includes third-party advertising, analytics, product personalization, app functionality, and — the most annoying one — other purposes. These categories also contain an “other data types” section that suggests the apps can collect even more information than they’re ready to disclose.
Several questions:
- Will Google’s definition of ethics allow some interesting cross correlation of user data?
- How do iPhone data collection components compare to Android device data collection components? More data? Less data?
- How will Google’s estimable, industry leading, super duper artificial intelligence make use of these data to deliver advertising?
Worth monitoring the Google, its data collection, and its use of those data.
Stephen E Arnold, February 26, 2021
Gebru-Gibberish Gives Google Gastroenteritis
February 24, 2021
At the outset, I want to address Google’s Gebru-gibberish:
Definition: Gebru-gibberish refers to official statements from Alphabet Google about the personnel issues related to the departure of two female experts in artificial intelligence working on the ethics of smart software. Gebru-gibberish is similar to statements made by those in fear of their survival.
Gastroenteritis: Watch the ads on Fox News or CNN for video explanations: Adult diapers, incontinence, etc.
Psychological impact: Fear, paranoia, flight reaction, irrational aggressiveness. Feelings of embarrassment, failure, serious injury, and lots of time in the WC.
The details of the viral problem causing discomfort among the world’s most elite online advertising organization relate to the management of Dr. Timnit Gebru. To add to the need to keep certain facilities nearby, the estimable Alphabet Google outfit apparently dismissed Dr. Margaret Mitchell. The output from the world’s most sophisticated ad sales company was Gebru-gibberish. Now those words have characterized the shallowness of the Alphabet Google thing’s approach to smart software.
In order to appreciate the problem, take a look at “Underspecification Presents Challenges for Credibility in Modern Machine Learning,” available without cost on ArXiv.org. The image of the author listing and affiliations is hard to read. Let me point out that the authors include more than 30 Googlers (who may become Xooglers in between dashes to the WC).
The paper is referenced in a chatty Medium write up called “Is Google’s AI Research about to Implode?” The essay raises an interesting possibility and makes a point which suggests that Google’s smart software may have some limitations:
Underspecification presents significant challenges for the credibility of modern machine learning.
Why the apparently illogical behavior with regard to Drs. Gebru and Mitchell?
My view is that the Gebru-gibberish released from Googzilla is directly correlated with the accuracy of the information presented in the “underspecification” paper. Sure, the method works in some cases, just as the 1998 Autonomy black box worked in some cases. However, to keep the accuracy high, significant time and effort must be invested. Otherwise, smart software evidences the charming characteristic of “drift”; that is, what was relevant before new content was processed is perceived as irrelevant or just incorrect in subsequent interactions.
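The “drift” idea can be shown with a toy sketch. This is my illustration, not Google’s code or the paper’s method: a trivial keyword model is built from yesterday’s content, looks fine on that content, and then says nothing useful once the vocabulary shifts. All names and the tiny corpus are invented for the example.

```python
# Toy illustration of "drift": a model built on old content loses
# relevance when the content distribution changes.
from collections import Counter

def train(docs_with_labels):
    """Count how often each word appears in documents marked relevant."""
    weights = Counter()
    for text, relevant in docs_with_labels:
        if relevant:
            weights.update(text.split())
    return weights

def score(weights, text):
    """Sum the learned word weights for a document (unknown words count 0)."""
    return sum(weights[w] for w in text.split())

# "Yesterday": relevance signals learned from already-processed content.
old_corpus = [
    ("covid remote learning tools", True),
    ("covid vaccine rollout", True),
    ("gardening tips for spring", False),
]
weights = train(old_corpus)

# The model looks fine on the kind of content it was built from...
assert score(weights, "covid remote learning") > score(weights, "gardening tips")

# ...but "today" the conversation has moved on, and the old weights
# contribute nothing to the new content. That gap is the drift.
new_doc = "certificate programs disrupt college degrees"
print(score(weights, new_doc))  # prints 0
```

The point of the sketch: keeping the score meaningful requires reprocessing and reweighting as new content arrives, which is exactly the ongoing time and effort the paragraph above describes.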
What does this mean?
Small, narrow domains work okay. Larger content domains work less okay.
Heron Systems, using a variation of the Google DeepMind approach, was able to “kill” a human in a simulated dogfight. However, the domain was small and there were some “rules.” Perfect for smart software. The human top gun was dead fast. Larger domains, like dealing with swarms of thousands of militarized and hardened unmanned aerial vehicles and a simultaneous series of targeted cyber attacks using sleeper software favored by some nation states, mean that smart software will be ineffective.
What will Google do?
As I have pointed out in previous blog posts, the high school science club management method employed by Backrub has become the standard operating procedure at today’s Alphabet Google.
Thus, the question, “Is Google’s AI research about to implode?” is a good one. The answer is, “No.” Google has money; it has staff who toe the line; and it has its charade of an honest, fair, and smart online advertising system.
Let me suggest a slight change to the question; to wit: “Is Google at a tipping point?” The answer to this question is, “Yes.”
Gebru-gibberish is similar to the output of Icarus, who flew too close to the sun and flamed out in a memorable way.
Stephen E Arnold, February 24, 2021
Google: Adding Friction?
February 23, 2021
I read “Waze’s Ex-CEO Says App Could Have Grown Faster without Google.” Opinions are plentiful. However, reading about the idea of Google as an inhibitor is interesting. The write up reports:
Waze has struggled to grow within Alphabet Inc’s Google, the navigation app’s former top executive said, renewing concerns over whether it was stifled by the search giant’s $1 billion acquisition in 2013.
A counterpoint is that 140 million drivers use Waze each month. When Google paid about $1 billion for the traffic service in 2013, Waze attracted 10 million drivers.
The write up states:
But Waze usage is flat in some countries as Google Maps gets significant promotion, and Waze has lost money as it focuses on a little-used carpooling app and pursues an advertising business that barely registers within the Google empire…
Several observations about the points in the article:
- With litigation and other push back against Google and other large technology firms, it seems as if Google is in a defensive posture
- Wall Street is happy with Google’s performance, but that enjoyment may not be shared by some users and employees
- Google management methods may be generating revenue but secondary effects like the Waze case may become data points worth monitoring.
Google map related services are difficult for me to use. Some functions are baffling; others invite use of other services. Yep, friction as in slowing Waze’s growth maybe?
Stephen E Arnold, February 23, 2021