The Logic of Good Enough: GitHub

July 22, 2024

What happens when a big company takes over a good thing? Here is one possible answer. Microsoft acquired GitHub in 2018. Now, “‘GitHub’ Is Starting to Feel Like Legacy Software,” according to Misty De Méo at The Future Is Now blog. And by “legacy,” she means outdated and malfunctioning. Paint us unsurprised.

De Méo describes being unable to use a GitHub feature she had relied on for years: the blame tool. She shares her process of tracking down what went wrong. Development geeks can see the write-up for details. The point is, in De Méo’s expert opinion, those now in charge made a glaring mistake. She observes:

“The corporate branding, the new ‘AI-powered developer platform’ slogan, makes it clear that what I think of as ‘GitHub’—the traditional website, what are to me the core features—simply isn’t Microsoft’s priority at this point in time. I know many talented people at GitHub who care, but the company’s priorities just don’t seem to value what I value about the service. This isn’t an anti-AI statement so much as a recognition that the tool I still need to use every day is past its prime. Copilot isn’t navigating the website for me, replacing my need to the website as it exists today. I’ve had tools hit this phase of decline and turn it around, but I’m not optimistic. It’s still plenty usable now, and probably will be for some years to come, but I’ll want to know what other options I have now rather than when things get worse than this.”

The post concludes with a plea for fellow developers to send De Méo any suggestions for GitHub alternatives and, in particular, a good local blame tool. Let us just hope any viable alternatives do not also get snapped up by big tech firms anytime soon.
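For what it is worth, git itself ships a blame that runs entirely locally, no web interface required. A minimal sketch; the throwaway repo, file name, and author identity are invented for the demo:

```shell
# Minimal local blame demo: build a throwaway repo, then annotate a file.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'first line\n' > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add notes"
# --line-porcelain emits commit, author, and timestamp for every line:
git blame --line-porcelain notes.txt | grep '^author '
```

This works anywhere git is installed: no GitHub account, no Copilot, no browser.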

Cynthia Murrell, July 23, 2024

AI: Helps an Individual, Harms Committee Thinking Which Is Often Sketchy at Best

July 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I spotted an academic journal article type write up called “Generative AI Enhances Individual Creativity But Reduces the Collective Diversity of Novel Content.” I would give the paper a C, an average grade. The most interesting point in the write up is that when one person uses smart software like a ChatGPT-type service, the output can make that person seem to a third party smarter, more creative, and more insightful than a person slumped over a wine bottle outside of a drug dealer’s digs.

The main point, which I found interesting, is that a group using ChatGPT drops down into my IQ range, which is “Dumb Turtle.” I think this is potentially significant. I use the word “potentially” because the study relied upon human “evaluators” and imprecise subjective criteria; for instance, novelty and emotional characteristics. This means that if the evaluators are teachers or people who critique writing for a living, these folks bring baked-in biases and preconceptions to the judgments. I know firsthand because one of my pieces of writing was published in the St. Louis Post Dispatch at the same time my high school English teacher clapped a C on it for narrative value and a D for language choice. She was not a fan of my phrase “burger boat drive in.” Anyway, I got paid $18 for the write up.

Let’s pick up this “finding” that a group degenerates or converges on mediocrity. (Remember, please, that a camel is a horse designed by a committee.) Here’s how the researchers express this idea:

While these results point to an increase in individual creativity, there is risk of losing collective novelty. In general equilibrium, an interesting question is whether the stories enhanced and inspired by AI will be able to create sufficient variation in the outputs they lead to. Specifically, if the publishing (and self-publishing) industry were to embrace more generative AI-inspired stories, our findings suggest that the produced stories would become less unique in aggregate and more similar to each other. This downward spiral shows parallels to an emerging social dilemma (42): If individual writers find out that their generative AI-inspired writing is evaluated as more creative, they have an incentive to use generative AI more in the future, but by doing so, the collective novelty of stories may be reduced further. In short, our results suggest that despite the enhancement effect that generative AI had on individual creativity, there may be a cautionary note if generative AI were adopted more widely for creative tasks.

I am familiar with the stellar outputs of committees. Some groups deliver zero and often retrograde outputs; that is, the committee makes a situation worse. I am thinking of the home owners’ association about a mile from my office. One aggrieved home owner attended a board meeting and shot one of the elected officials. Exciting, plus the scene of the murder was a church conference room. Driveways can be hot topics when the group decides to change rules, as it did with this fellow’s own driveway.

Sometimes committees come up with good ideas; for example, a committee at one government agency where I was serving as the IV&V professional (independent verification and validation) decided to disband because there was a tiny bit of hanky panky in the procurement process. That was a good idea.

Other committee outputs are worthless; for example, the transcripts of the questions from elected officials directed to high-technology executives. I won’t name any committees of this type because I worked for a congress person, and I observe the unofficial rule: Button up, butter cup.

Let me offer several observations about smart software producing outputs that point to dumb turtle mode:

  1. Services firms (lawyers and blue chip consultants) will produce less useful information relying on smart software than on what crazed Type A achievers produce. Yes, I know that one major blue chip consulting firm helped engineer the excitement one can see in certain towns in West Virginia, but imagine even more negative downstream effects. Wow!
  2. Dumb committees relying on AI will be among the first to suggest, “Let AI set the agenda.” And, “Let AI provide the list of options.” Great idea and one that might be more exciting than an aircraft door exiting the airplane frame at 15,000 feet.
  3. The bean counters in the organization will look at the efficiency of using AI for committee work and probably suggest, “Let’s eliminate the staff who spend more than 85 percent of their time in committee meetings.” That will save money and produce some interesting downstream consequences. (I once had a job which was to attend committee meetings.)

Net net: AI will help some; AI will also produce surprises which, it seems, cannot be easily anticipated.

Stephen E Arnold, July 16, 2024

AI: Hurtful and Unfair. Obviously, Yes

July 5, 2024

It will be years before AI is “smart” enough to entirely replace humans, but that future is approaching quickly. The problem with current AI systems is that they are stupid. They don’t know how to do anything unless they’re trained on huge datasets. These datasets contain the hard, copyrighted, trademarked, proprietary, etc. work of individuals. These people don’t want their work used to train AI without their permission, much less to replace them. Futurism shares that even AI engineers are worried about their creations in “Video Shows OpenAI Admitting It’s ‘Deeply Unfair’ To ‘Build AI And Take Everyone’s Job Away.’”

The interview with the AI software engineer’s admission of guilt originally appeared in The Atlantic, but his remorse is quickly covered by apathy. Brian Wu is the engineer in question. He feels guilty about making jobs obsolete, but he makes the observation that accompanies every wave of progress and new technology: things change, and that is inevitable:
“It won’t be all bad news, he suggests, because people will get to ‘think about what to do in a world where labor is obsolete.’

But as he goes on, Wu sounds more and more unconvinced by his own words, as if he’s already surrendered himself to the inevitability of this dystopian AI future.

‘I don’t know,’ he said. ‘Raise awareness, get governments to care, get other people to care.’ A long pause. ‘Yeah. Or join us and have one of the few remaining jobs. I don’t know. It’s rough.’”

Wu’s colleague Daniel Kokotajlo believes humans will invent an all-knowing artificial general intelligence (AGI). The AGI will create wealth, and while it won’t be distributed evenly, all humans will be rich. Kokotajlo then delves into the typical science-fiction story about a super AI becoming evil and turning against humanity. The AI engineers, however, aren’t concerned with the moral ambiguity of AI. They want to invent, continue building wealth, and are hellbent on doing it no matter the consequences. It’s pure motivation but also narcissism and entitlement.

Whitney Grace, July 5, 2024

Google YouTube: The Enhanced Turtle Walk?

July 4, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I like to figure out how a leadership team addresses issues lower on the priority list. Some outfits talk a good game when a problem arises. I typically think of this as a Microsoft-type response. Security is job one. Then there’s Recall and the weird de-release of a Windows 11 update. But stuff is happening.

A leadership team decides to lead by moving even more slowly, possibly not at all. Turtles know how to win by putting one claw in front of another… just slowly. Thanks, MSFT Copilot.

Then there are outfits who just ignore everything. I think of this as the Boeing-type approach to difficult situations. Doors fall off, astronauts are stranded, and the FAA does its government-is-run-like-a-business thing. But can a cash-strapped airline ground jets from a single manufacturer when all of its jets come from that manufacturer? The jets keep flying, the astronauts are really not stranded yet, and the government runs like a business.

Google does not fit into either category. I read “Two Years after an Open Letter to YouTube, Fact-Checkers Remain Dissatisfied with the Platform’s Inaction.” The write up describes fact-checkers’ attempts to get Google YouTube to do a better job at fact checking the videos it hoses to people and kids worldwide:

Two years ago, fact-checkers from all over the world signed an open letter to YouTube with four solutions for reducing disinformation and misinformation on the platform. As they convened this year at GlobalFact 11, the world’s largest annual fact-checking summit, fact-checkers agreed there has been no meaningful change.

This suggests that Google is less dynamic than a government agency and definitely not doing the yip yap thing associated with Microsoft-type outfits. I find this interesting.

The [YouTube] channel continued to publish livestreams with falsehoods and racked up hundreds of thousands of views, Kamath [the founder of Newschecker] said.

Google YouTube is a global resource. The write up says:

When YouTube does present solutions, it focuses on English and doesn’t give a timeline for applying it to other languages, [Lupa CEO Natália] Leal said.

The turtle play perhaps?

The big assertion in the article in my opinion is:

[The] system is ‘loaded against fact-checkers’

Okay, let’s summarize. At one end of the leadership spectrum we have the talkers and go slow or do nothing. At the other end of the spectrum we have the leaders who don’t talk and allegedly retaliate when someone does talk with the events taking place under the watchful eye of US government regulators.

The Google YouTube method involves several leadership practices:

  1. Pretend avoidance. Google did not attend the fact checking conference. This is the ostrich principle I think.
  2. Go really slow. Two years with minimal action to remove inaccurate videos.
  3. Don’t talk.

My hypothesis is that Google can’t be bothered. It has other issues demanding its leadership time.

Net net: Are inaccurate videos on the Google YouTube service? Will this issue be remediated? Nope. Why? Money. Misinformation is an infinite problem which requires infinite money to solve. Ergo. Just make money. That’s the leadership principle it seems.

Stephen E Arnold, July 4, 2024

The Check Is in the Mail and I Will Love You in the Morning. I Promise.

July 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Have you heard these phrases in a business context?

  • “I’ll get back to you on that”
  • “We should catch up sometime”
  • “I’ll see what I can do”
  • “I’m swamped right now”
  • “Let me check my schedule and get back to you”
  • “Sounds great, I’ll keep that in mind”

Thanks, MSFT Copilot. Good enough despite the mobile presented as a corded landline connected to a bank note. I understand and I will love you in the morning. No, really.

I read “It’s Safe to Update Your Windows 11 PC Again, Microsoft Reassures Millions after Dropping Software over Bug.” [If the linked article disappears, I would not be surprised.] The write up says:

Due to the severity of the glitch, Microsoft decided to ditch the roll-out of KB5039302 entirely last week. Since then, the Redmond-based company has spent time investigating the cause of the bug and determined that it only impacts those who use virtual machine tools, like CloudPC, DevBox, and Azure Virtual Desktop. Some reports suggest it affects VMware, but this hasn’t been confirmed by Microsoft.

Now the glitch has been remediated. Yes, “I’ll get back to you on that.” Okay, I am back:

…on the first sign that your Windows PC has started — usually a manufacturer’s logo on a blank screen — hold down the power button for 10 seconds to turn-off the device, press and hold the power button to turn on your PC again, and then when Windows restarts for a second time hold down the power button for 10 seconds to turn off your device again. Power-cycling twice back-to-back should means that you’re launched into Automatic Repair mode on the third reboot. Then select Advanced options to enter winRE. Microsoft has in-depth instructions on how to best handle this damaging bug on its forum.

No problem, grandma.

I read this reassurance about the simple steps needed to get the old Windows 11 gizmo working again. Then I noted this article in my newsfeed this morning (July 1, 2024): “Microsoft Notifies More Customers Their Emails Were Accessed by Russian Hackers.” This write up reports as actual factual this Microsoft announcement:

Microsoft has told more customers that their emails were compromised during a late 2023 cyberattack carried out by the Russian hacking group Midnight Blizzard.

Yep, Russians… again. The write up explains:

The attack began in late November 2023. Despite the lengthy period the attackers were present in the system, Microsoft initially insisted that that only a “very small percentage” of corporate accounts were compromised. However, the attackers managed to steal emails and attached documents during the incident.

I can hear in the back of my mind this statement: “I’ll see what I can do.” Okay, thanks.

This somewhat interesting revelation about an event chugging along unfixed since late 2023 has annoyed some other people, not your favorite dinobaby. The article concluded with this passage:

In April [2024], a highly critical report [pdf] by the US Cyber Safety Review Board slammed the company’s response to a separate 2023 incident where Chinese hackers accessed emails of high-profile US government officials. The report criticized Microsoft’s “cascade of security failures” and a culture that downplayed security investments in favor of new products. “Microsoft had not sufficiently prioritized rearchitecting its legacy infrastructure to address the current threat landscape,” the report said. The urgency of the situation prompted US federal agencies to take action in April [2024]. An emergency directive was issued by the US Cybersecurity and Infrastructure Security Agency (CISA), mandating government agencies to analyze emails, reset compromised credentials, and tighten security measures for Microsoft cloud accounts, fearing potential access to sensitive communications by Midnight Blizzard hackers. CISA even said the Microsoft hack posed a “grave and unacceptable risk” to government agencies.

“Sounds great, I’ll keep that in mind.”

Stephen E Arnold, July 1, 2024

What Is That Wapo Wapo Wapo Sound?

June 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Do you hear that thumping wapo wapo wapo sound? I do. It reminds me of an old school pickup truck with a flat tire on a hot summer’s day. Yep, wapo wapo wapo. That’s it!

“Jeff Bezos Has Worst Response Ever to Washington Post Turmoil” emitted this sound when I read the essay in New Republic. The newspaper for Washington, DC and its environs is the Post. When I lived in Washington, DC, the newspaper was a must read. Before I trundled off to the cheerful workplace of Halliburton Nuclear and later to the incredibly sensitive and human blue chip consulting firm known affectionately as the Boozer, I would read the WaPo. I had to be prepared. If I were working with a Congress person like Admiral Craig Hosmer, USN Retired, I had to know what Miss Manners had to say that day. A faux pas could be fatal.

The old pickup truck has a problem because one of the tires went wapo wapo wapo and then the truck stopped. Thanks, MSFT Copilot. Good enough.

The WaPo is now a Jeff Bezos property. I have forgotten how the financial deal was structured, but he has a home in DC and every person who is in contention as one of the richest men on earth needs a newspaper. The write up explains:

In a memo to the paper’s top personnel on Tuesday, the billionaire technocrat backed the new CEO Will Lewis, a former lieutenant to right-wing media mogul Rupert Murdoch, whose controversial appointment at the Post has made waves across the industry in the wake of reporting on his shady journalistic practices.

That’s inspiring for a newspaper: A political angle and “shady journalistic practices.” What happened to that old “every day is Day One” and “the customer is important”? I suppose a PR person could trot those out. But the big story seems to be the newspaper is losing readers and money. Don’t people in DC read? Oh, silly question. No, now the up-and-coming movers and shakers doom scroll and watch YouTube. The cited article includes a snippet from the Bezos bulldozer it appears. That item states:

…the journalistic standards and ethics at The Post will not change… You have my full commitment to maintaining the quality, ethics, and standards we all believe in.

Two ethics in one short item. Will those add up this way: ethics plus ethics equals trust? Sure. I believe everything one of the richest people in the world says. It seems that one of the new hires brought in to drive the newspaper world’s version of Jack Benny’s wheezing Maxwell was involved in some hanky-panky involving private telephone conversations.

Several observations:

  1. “Real” newspapers seem to be facing some challenges. These range from money to money to money. Did I mention money?
  2. The newspaper owner and the management team have to overcome the money hurdle. How does one do that? Maybe smart software from an outfit like AWS and the Sagemaker product line? The AI can output good enough content at a lower cost and without grousing humans, vacations, health care, and annoying reporters poking into the lifestyle of the rich, powerful, famous, and rich. Did I mention “rich” twice? But if Mr. Bezos can work two ethics into one short memo, I can fit two into a longer blog post.
  3. The readers and journalists are likely to lose. I think readers will just suck down content from their mobile devices and the journalists will have to find their futures elsewhere like certain lawyers, many customer service personnel, and gig workers who do “art” for publishers, among others.

Net net: Do you hear the wapo wapo wapo? How long will the Bezos pickup truck roll along?

Stephen E Arnold, June 20, 2024

DeepMind Is Going to Make Products, Not Science

June 18, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Crack that Google leadership whip. DeepMind is going to make products. Yes, just like that. I am easily confused. I thought Google consolidated its smart software efforts. I thought Dr. Jeffrey Dean did a lateral arabesque, making way for new leadership. The company had new marching orders under the calming light of a Red Alert: hair on fire, because OpenAI and Microsoft will be the new Big Dogs.

From Google DeepMind to greener pastures. Thanks, OpenAI art thing.

Now I learn from “Google’s DeepMind Shifting From Research Powerhouse To AI Product Giant, Redefining Industry Dynamics”:

Alphabet Inc‘s subsidiary Google DeepMind has decided to transition from a research lab to an AI product factory. This move could potentially challenge the company’s long-standing dominance in foundational research… Google DeepMind, has merged its two AI labs to focus on developing commercial services. This strategic change could potentially disrupt the company’s traditional strength in fundamental research

From wonky images of the US founding fathers to weird outputs which appear to be indicative of Google’s smart software and its knowledge of pizza cheese interaction, the company seems to be struggling. To further complicate matters, Google’s management finesse created this interesting round of musical chairs:

…the departure of co-founder Mustafa Suleyman to Microsoft in March adds another layer of complexity to DeepMind’s journey. Suleyman’s move to Microsoft, where he has described his experience as “truly transformational,” indicates the competitive and dynamic nature of the AI industry.

Several observations:

  1. Microsoft seems to be suffering the AI wobblies. The more it tries to stabilize its AI activities, the more unstable the company seems to be.
  2. Who is in charge of AI at Google?
  3. Has Google turned off the blinking red and yellow alert lights and operates in what might be called low lumen normalcy?

However, Google’s thrashing may not matter. OpenAI cannot get its system to stay online. Microsoft has a herd of AI organizations to manage and has managed to create a huge PR gaffe with its “smart” Recall feature. Apple deals in “to be” smart products and wants to work with everyone, just without paying.

Net net: Is Google representative of the unraveling of the Next Big Thing?

Stephen E Arnold, June 18, 2024

Google and Microsoft: The Twinning Is Evident

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Google and Microsoft have some interesting similarities. Both companies wish they could emulate one another’s most successful products. Microsoft wants search and advertising revenue. Google wants a chokehold on the corporate market for software and services. The senior executives have similar high school academic training. Both companies have oodles of legal processes with more on the horizon. Both companies are terminating employees with extreme prejudice. Both companies seem to have some trust issues. You get the idea.

Some neural malfunctions occur when one gets too big and enjoys the finer things in life, like not working on management tasks with diligence. Thanks, MSFT Copilot. Good enough.

Google and Microsoft are essentially morphing into mirrors of one another. Is that a positive? From an MBA / bean counter point of view, absolutely. There are some disadvantages, but they are minor ones; for example, interesting quasi-monopoly pricing options, sucking the air from the room for certain types of start ups, and having the power of a couple of nation-states. What could go wrong? (Just check out everyday life. Clues are abundant.)

How about management methods which do not work very well? I want to cite two examples.

Google is scaling back its AI search plans after the summary feature told people to eat glue. How do I, recently dubbed scary grandpa cyber by an officer at the TechnoSecurity & Digital Forensics Conference in Wilmington, North Carolina, last week, know this? The answer is that I read “Google Is Scaling Back Its AI Search Plans after the Summary Feature Told People to Eat Glue.” This is a good example of the minimum viable product not being minimal enough and certainly not viable. The write up says:

Reid [a Google wizard] wrote that the company already had systems in place to not show AI-generated news or health-related results. She said harmful results that encouraged people to smoke while pregnant or leave their dogs in cars were “faked screenshots.” The list of changes is the latest example of the Big Tech giant launching an AI product and circling back with restrictions after things get messy.

What a remarkable tactic: blame the “users” and reduce the exposure of the online ad giant’s technological prowess. I think these two tactics illustrate the growing gulf between “leadership” and the poorly managed lower level geniuses who toil at Googzilla’s side.

I noted a weird parallel with Microsoft illustrating a similar disconnect between Microsoft’s carpetland dwellers and those working in the weird disconnected buildings on the Campus. This disaster of a minimum viable product or MVP was rolled out with much fanfare at one of Microsoft’s many hard-to-differentiate conferences. The idea was one I heard about decades ago. The individual with whom I associate the idea once worked at Bellcore (one of the spin offs of Bell Labs after Judge Green created the telecommunications wonderland we enjoy today). The idea is a surveillance dream come true — at least for law enforcement and intelligence professionals. MSFT software captures images of a user’s screen, converts the bitmap to text, and helpfully makes it searchable. The brilliant Softie’s response is captured in “When Asked about Windows Recall Privacy Concerns, Microsoft Researcher Gives Non-Answer”:

Microsoft’s Recall feature is being universally slammed for the privacy implications that come from screenshotting everything you do on a computer. However, at least one person seems to think the concerns are overblown. Unsurprisingly, it’s Microsoft Research’s chief scientist, who didn’t really give an answer when asked about Recall’s negative points.
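The pipeline behind the fuss (screen capture, OCR to text, full-text search over the result) is conceptually tiny, which is part of why the privacy alarm is so loud. A toy sketch, with the hypothetical capture and OCR stages faked by echo and all file names invented:

```shell
# Toy model of the Recall idea: screenshot -> OCR -> searchable text store.
# Real capture/OCR stages are stubbed; each file stands in for one screen grab.
set -e
store=$(mktemp -d)
echo "mail: quarterly budget draft"  > "$store/2024-06-10T09-00.txt"
echo "browser: flight search Berlin" > "$store/2024-06-10T09-05.txt"
# Retrieval is then nothing more than full-text search over screen history:
grep -l "budget" "$store"/*.txt
```

That an entire screen history collapses into a grep-able folder is precisely what delights investigators and alarms everyone else.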

Then what did a senior super manager do? Answer: Back track like crazy. Here’s the passage:

Even before making Recall available to customers, we have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards. With that in mind we are announcing updates that will go into effect before Recall (preview) ships to customers on June 18.

The decision could have been made by a member of the Google leadership team. Heck, maybe the two companies’ senior leaderships are on a mystical brain wave and think the same thoughts. Which is the evil twin? I will leave that to you to ponder.

Several observations are warranted:

  • For large, world-affecting companies, senior managers are simply out of touch with [a] their product development teams and [b] their “users.”
  • The outfits may be Wall Street darlings, but are there other considerations to weigh? The companies have become sufficiently large that their communication neurons are no longer reliable. The messages they emit are double speak at best and PR speak at worst.
  • The management controls are not working. One can delegate when one knows those in other parts of the organization make good decisions. What’s evident is that the lack of control, of commitment to on-point research, and of good judgment illustrates a breakdown of the nervous system of these companies.

Net net: What’s ahead? More of the same dysfunction perhaps?

Stephen E Arnold, June 14, 2024

Does Google Follow Its Own Product Gameplan?

June 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

If I were to answer the question based on Google’s AI summaries, I would say, “Nope.” The latest joke added to the Sundar & Prabhakar Comedy Show is the one about pizza. Here’s the joke if I recall it correctly.

Sundar: Yo, Prabhakar, how do you keep cheese from slipping off a hot pizza?

Prabhakar: I don’t know. Please, tell me, oh gifted one.

Sundar: You have your cook mix it with non-toxic glue, faithful colleague.

Prabhakar: [Laughing loudly]. That’s a good one, luminescent soul.

Did Google muff the bunny with its high-profile smart software feature? To answer the question, I looked to the ever-objective Fast Company online publication. I found a write up which appears to provide some helpful information. The article is called “Conduct Stellar User Research Even Faster with This Google Ventures Formula.” Google has game plans for creating MVPs or minimum viable products.

The confident comedians look concerned when someone in the audience throws a large tomato at the well-paid performers. Thanks, MSFT. Working on security or the AI PC today?

Let’s look at what one Google partner reveals as the equivalent of the formula for Coca-Cola or McDonald’s recipe for Big Mac sauce.

Here’s the game winning touchdown razzle dazzle:

  1. Use a bullseye customer sprint. The idea is to get five “customers” and show them three prototypes. Listen for pros and cons. Then debrief together in a “watch party.”
  2. Conduct sprints early. The idea is to get this feedback before “a team invests a lot of time, money, or reputational risk into building, launching, and marketing” an MVP (that’s a minimum viable product, not necessarily a good or needed product, I think).
  3. Keep research bite size. Avoid heavy duty research overkill is the way I interpret the Google speak. The idea is that massive research projects are not desirable. They are work. Nibble, don’t gobble, I assume.
  4. Keep the process simple. Keep the prototypes simple. Get those interviews. That’s fun. Plus, there is the “watch party”, remember?

Okay, now let’s think about what Google suggests are outliers or fiddled AI results. Why is Google AI telling people to eat a rock a day?

The “bullseye” baloney is bull output for sure. I am on reasonably firm ground because in Paris the Sundar & Prabhakar Comedy Act showed incorrect outputs from Google’s AI system. Then Google invented about a dozen variations on the theme of a scrambled egg at Google I/O. Now Google is faced with its AI system telling people dogs own hotels. No, some dogs live in hotels. Some dogs deliver outputs in hotels. Dogs do not own hotels unless it is in a crazy virtual reality headset created by Apple or Meta.

The write up uses the word “stellar” to describe this MVP product stuff. The reality is that Googlers are creating work for themselves, listening to “customers” who know little about AI or anything other than the buy-ads, get-traffic transaction. The “stellar” part of the title is like the “quantum supremacy” horse feather assertion the company crafted.

Smart software, when trained and managed, can do some useful things. However, the bullseye and quantum supremacy stuff is capable of producing social media memes, concern among some stakeholders, and evidence that Google cannot do anything useful at this time.

Maybe the company will get its act together? When it does, I will check out the next Sundar & Prabhakar Comedy Act. Maybe some of the jokes will work? Let’s hope they are more effective than the bull’s-eye method. (Sorry. I had to fix up the spelling, Google.)

Stephen E Arnold, June 5, 2024

In the AI Race, Is Google Able to Win a Sprint to a Feature?

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One would think that a sophisticated company with cash and skilled employees would avoid a mistake like shooting the CEO in the foot. The mishap has occurred again, and if it were captured in a TikTok, it would make an outstanding trailer for the Sundar & Prabhakar reprise of The Greatest Marketing Mistakes of the Year.

At age 25, which is quite the mileage when traveling on the Information Superhighway, the old timer is finding out that younger, speedier outfits may win a number of AI races. In the illustration, the Google runner seems stressed at the start of the race. Will the geezer win? Thanks, MidJourney. Good enough, which is the benchmark today I fear.

“Google Is Taking ‘Swift Action’ to Remove Inaccurate AI Overview Responses” explains that Google rolled out with some fanfare its AI Overviews. The idea is that smart software would just provide the “user” of the Google ad delivery machine with an answer to a query. Some people have found that the outputs are crazier than one would expect from a Big Tech outfit. The article states:

… Google says, “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback,” Google adds. “We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

But others are much kinder. One notable example is Mashable’s “We Gave Google’s AI Overviews the Benefit of the Doubt. Here’s How They Did.” This estimable publication reported:

Were there weird hallucinations? Yes. Did they work just fine sometimes? Also yes.

The write up noted:

AI Overviews were a little worse in most of my test cases, but sometimes they were perfectly fine, and obviously you get them very fast, which is nice. The AI hallucinations I experienced weren’t going to steer me toward any danger.

Let’s step back and view the situation via several observations:

  1. Google’s big moment becomes a meme cemented to glue on pizza
  2. Does Google have a quality control process which flags obvious gaffes? Apparently not.
  3. Google management seems to suggest that humans have to intervene in a Google “smart” process. Doesn’t that defeat the purpose of using smart software to replace some humans?

Net net: The Google is ageing, and I am not sure a singularity will offset these quite obvious effects of ageing, slowed corporate processes, and stuttering synapses in the revamped AI unit.

Stephen E Arnold, May 31, 2024
