Beyond Search

Google and Its Big AI PR Campaign

I spotted “DeepMind Says Its New Language Model Can Beat Others 25 Times Its Size.” In my opinion, this is part of the Google play to sail forward with its alleged better, faster, cheaper method of training machine learning models. Most people won’t care or know what’s underway. That’s okay because “information” is now channeled through specific conduits. As long as an answer is good enough or the payoff is big enough for the gatekeepers, the engineering is doing its job.

The write up is happily unaware of this push to use 60 percent or “good enough” accuracy as the foundation for downstream training set generation. But, oh boy, is that relaxed supervision great for matching ads. Good enough burns down inventory, and it allows machine learning models to be trained on content domains quickly, without the friction imposed by mother-hen subject matter experts, rigorous analysis and tuning, and retraining with human-intermediated data sets.

Plus, skew, drift, and biases are smoothed out or made to go away. Well, that’s the theory.

The jazzy name Retro is not old school. It is new school. Users will need a long time to understand and appreciate its nuances.

This is a big business play, and its accompanying PR campaign is working. Just ask Dr. Timnit Gebru, the former Google employee who raised the specter of bias, wonky outputs, and the potential for nudging users down the Googley path.

For another example of Google’s AI PR push, navigate to “DeepMind’s New 280 Billion-Parameter Language Model Kicks GPT-3’s Butt in Accuracy.” Wow, just like quantum supremacy.

Stephen E Arnold, December 9, 2021
