Encryption Battles Continue
June 4, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Privacy protections are great—unless you are law enforcement attempting to trace a bad actor. India has tried to make it easier to enforce its laws by forcing messaging apps to track each message back to its source. That is challenging for a platform with encryption baked in, as Rest of World reports in “WhatsApp Gives India an Ultimatum on Encryption.” Writer Russell Brandom tells us:
“IT rules passed by India in 2021 require services like WhatsApp to maintain ‘traceability’ for all messages, allowing authorities to follow forwarded messages to the ‘first originator’ of the text. In a Delhi High Court proceeding last Thursday, WhatsApp said it would be forced to leave the country if the court required traceability, as doing so would mean breaking end-to-end encryption. It’s a common stance for encrypted chat services generally, and WhatsApp has made this threat before — most notably in a protracted legal fight in Brazil that resulted in intermittent bans. But as the Indian government expands its powers over online speech, the threat of a full-scale ban is closer than it’s been in years.”
And that could be a problem for a lot of people. We also learn:
“WhatsApp is used by more than half a billion people in India — not just as a chat app, but as a doctor’s office, a campaigning tool, and the backbone of countless small businesses and service jobs. There’s no clear competitor to fill its shoes, so if the app is shut down in India, much of the digital infrastructure of the nation would simply disappear. Being forced out of the country would be bad for WhatsApp, but it would be disastrous for everyday Indians.”
Yes, that sounds bad. For the Electronic Frontier Foundation, it gets worse: The civil liberties organization insists the regulation would violate privacy and free expression for all users, not just suspected criminals.
To be fair, WhatsApp has done a few things to limit harmful content. It has placed limits on message forwarding and has boosted its spam and disinformation reporting systems. Still, there is only so much it can do when enforcement relies on user reports. To do more would require violating the platform’s hallmark: its end-to-end encryption. Even if WhatsApp wins this round, Brandom notes, the issue is likely to come up again when and if the Bharatiya Janata Party does well in the current elections.
Cynthia Murrell, June 4, 2024
Lunch at a Big Time Publisher: Humble Pie and Sour Words
June 4, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Years ago I did some work for a big time New York City publisher. The firm employed people who used words like “fungible” and “synergy” when talking with me. I took the time to read an article with this title: “So Much for Peer Review — Wiley Shuts Down 19 Science Journals and Retracts 11,000 Gobbledygook Papers.” Was this the staid, conservative, big-vocabulary outfit I remembered?
Yep.
The essay is little more than a wrapper for a Wall Street Journal story with the title “Flood of Fake Science Forces Multiple Journal Closures Tainted by Fraud.” I quite like that title, particularly the operative word “fraud.” What in the world is going on?
The write up explains:
Wiley — a mega publisher of science articles has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)
A group of publishing executives becomes the focal point of a Midtown lunch in an upscale restaurant. The titans of publishing are complaining about the taste of humble pie and use secret NYAC gestures to express their disapproval. Thanks, MSFT Copilot. Your security expertise may warrant a special banquet too.
The information in the cited article contains some tasty nuggets which complement humble pie in my opinion; for instance:
- The shutdown of the junk food publications has required two years. If Sillycon Valley outfits can fire thousands via email or Zoom, “Why are those uptown shoes being dragged?” I asked myself.
- Other high-end publishers have been doing the same thing. Sadly, no names are provided.
- The bogus papers included something called an “AI gobbledygook sandwich.” Interesting. Human reviewers who are experts could not recognize the vernacular of academic and research fraudsters.
- Some in Australia think that the credibility of universities might be compromised. Oh, come now. Just because the president of Stanford had to search for his future elsewhere after some intellectual fancy dancing and the head of the Harvard ethics department demonstrated allegedly sci-fi ethics in published research, what’s the problem? Don’t students just get As and Bs? Professors are engaged in research, chasing consulting gigs, and ginning up grant money. Actual research? Oh, come now.
- Academic journals are, or were, a $30 billion industry.
Observations are warranted:
- In today’s datasphere, I am not surprised. Scams, frauds, and cheats seem to be as common as ants at a picnic. A cultural shift has occurred. Cheating has become the norm.
- Will the online databases, produced by some professional publishers and commercial database companies, be updated to remove or at least flag the baloney? Probably not. That costs money. Spending money is not a modern publishing CEO’s favorite activity. (Hence the two-year wind-down of the fake information at the publishing house identified in the cited write up.)
- How many people have died or been put out of work because of specious research data? I am not holding my breath for the peer reviewed journals to provide this information.
Net net: Humiliating and a shame. Quite a cultural mismatch between what some publishers say and what, allegedly, the firm ordered from the deli. I thought the outfit had a knowledge-based reason to tell me that it takes the high road. It seems that on that road, there are places where a bad humble pie is served.
Stephen E Arnold, June 4, 2024
AI Will Not Definitely, Certainly, Absolutely Not Take Some Jobs. Whew. That Is News
June 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Outfits like McKinsey & Co. are kicking the tires of smart software. Some bright young sprouts, I have heard, arrive with a penchant for AI systems that create summaries and output basic information on subjects the youthful masters of the universe do not know. Will consulting services firms, publishers, and customer service outfits embrace smart software? The answer is, “You bet your bippy.”
“Why?” Answer: Potential cost savings. Humanoids require vacations, health care, bonuses, pension contributions (ho ho ho), and an old-fashioned and inefficient five-day work week.
Cost reductions over time, cost controls in real time, and more consistent outputs mean that as long as smart software is good enough, the technologies will move through organizations more efficiently than Union General William T. Sherman marched some 60,000 soldiers 285 miles from Atlanta to Savannah, Georgia. Thanks, MSFT Copilot. Working on security today?
Software is allegedly better, faster, and cheaper. Software, particularly AI, may not be better, faster, or cheaper. But once someone is fired, the enthusiasm to return to the fold may be diminished. Often the response is a semi-amusing and often negative video posted on social media.
“Here’s Why AI Probably Isn’t Coming for Your Job Anytime Soon” disagrees with my fairly conservative prediction that consulting, publishing, and some service outfits will be undergoing what I call “humanoid erosion” and “AI accretion.” The write up asserts:
We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.
Okay, Econ 101. Wonderful. But… and there are some “buts,” of course. The write up says:
But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.
Yikes. I am not sure that AI is going in any direction. Senior managers are going toward reducing costs. “Good enough,” not excellence, is the high-water mark today.
Here’s another “but”:
But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.
I don’t remember much about economics. In fact, I only think about economics in terms of reducing costs and having more money for myself. Good old Adam wrote:
Wherever there is great property there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many.
When it comes to AI, inequality is baked in. The companies that are competing fiercely to dominate the core technology are not into equality. Neither are the senior managers who want to reduce the costs associated with publishing, writing consulting reports based on business school baloney, or reviewing documents in a hunt for nuggets useful in a trial. AI is going into these and similar knowledge professions. Most of those knowledge workers will have an opportunity to find their future elsewhere. But what about in-take professionals in hospitals? What about dispatchers at trucking companies? What about government citizen service jobs? Sorry. Software is coming. Companies are developing orchestrator software to allow smart software to function across multiple related and inter-related tasks. Isn’t that what most work in many organizations is?
Here’s another test question from Econ 101:
Discuss the meaning of “It was not by gold or by silver, but by labor, that all wealth of the world was originally purchased.” Give examples of how smart software will replace labor and generate more money for those who own the rights to digital gold or silver.
Send me your blue book answers within 24 hours. You must write in legible cursive. You are not permitted to use artificial intelligence in any form to answer this question, which counts for 95 percent of your grade in Economics 102: Work in the Age of AI.
Stephen E Arnold, June 3, 2024
Price Fixing Is Price Fixing with or without AI
June 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Small time landlords, such as the mom and pops who invested in property for retirement, shouldn’t be compared to large, corporate landlords. The corporate landlords, however, give them all a bad name. Why? Because of actions like price fixing. ProPublica details how politicians are fighting the bad actors: “We Found That Landlords Could Be Using Algorithms To Fix Rent Prices. Now Lawmakers Want To Make The Practice Illegal.”
RealPage sells software programmed with an AI algorithm that collects rent data and recommends how much landlords should charge. Lawmakers want to ban AI-based price fixing so landlords won’t become cartels that coordinate pricing. RealPage and its allies defend the software while lawmakers have introduced a bill to ban it.
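The mechanics are simple enough to sketch. Below is a minimal illustration, in Python, of how a shared rent-recommendation algorithm can nudge nominally competing landlords toward the same number. Every name, figure, and the uplift parameter here is invented for illustration; this is not RealPage’s actual code, data, or model.

```python
# Hypothetical sketch of algorithmic rent recommendation. All names and
# numbers are invented; this is NOT RealPage's actual system or model.
from statistics import mean


def recommend_rent(pooled_competitor_rents: list[float], uplift: float = 0.05) -> float:
    """Recommend a rent slightly above the pooled competitor average."""
    return round(mean(pooled_competitor_rents) * (1 + uplift), 2)


# Competing landlords each submit their otherwise-private rent data,
# and the vendor pools it.
pooled_rents = [1500.0, 1550.0, 1480.0, 1620.0]

# Every subscriber receives a recommendation derived from everyone
# else's prices, so nominally independent decisions converge on the
# same elevated figure -- the coordination lawmakers object to.
for landlord in ("Landlord A", "Landlord B", "Landlord C"):
    print(landlord, "->", recommend_rent(pooled_rents))
```

The point, which the FTC quote below makes in plainer language, is that when every “independent” price is derived from the same pooled competitor data, renters lose the ability to comparison-shop their way to a better deal.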
The FTC also states that AI-based real estate software has problems: “Price Fixing By Algorithm Is Still Price Fixing.” The FTC isn’t against technology. It’s against technology being used as a tool to cheat consumers:
“Meanwhile, landlords increasingly use algorithms to determine their prices, with landlords reportedly using software like “RENTMaximizer” and similar products to determine rents for tens of millions of apartments across the country. Efforts to fight collusion are even more critical given private equity-backed consolidation among landlords and property management companies. The considerable leverage these firms already have over their renters is only exacerbated by potential algorithmic price collusion. Algorithms that recommend prices to numerous competing landlords threaten to remove renters’ ability to vote with their feet and comparison-shop for the best apartment deal around.”
This is an example of how to use AI for evil. The problem isn’t the tool; it’s the humans using it.
Whitney Grace, June 3, 2024
Spot a Psyop Lately?
June 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Psyops, or psychological operations, are also known as psychological warfare: actions used to weaken an enemy’s morale. Psyops can range from a simple propaganda poster to a powerful government campaign. According to Annalee Newitz on her Hypothesis Buttondown blog, psyops are everywhere, and she explains: “How To Recognize A Psyop In Three Easy Steps.”
Newitz smartly condenses the history of American psyops into a paragraph: it is a mixture of pulp fiction tropes, advertising techniques, and pop psychology. In the twentieth century, the US military harnessed these techniques to craft messages meant to hurt, demean, and distract people. Unlike weapons, psyops can be avoided with a little bit of critical thinking.
The first step is to pay attention when people claim something is “anti-American.” The term can be interpreted in many ways, but it comes down to media claiming that one group of people (defined by nationality, skin color, sexual orientation, etc.) is against the American way of life.
The second step is spreading lies laced with hints of truth. Newitz advises reading psychological warfare military manuals and uses the example of leaflets the Japanese dropped on US soldiers in the Philippines. The leaflets warned the soldiers about venomous snakes in the jungles and were signed “US Army.” Soldiers were told the leaflets were false, but the episode made them believe there were coverups:
“Psyops-level lies are designed to destabilize an enemy, to make them doubt themselves and their compatriots, and to convince them that their country’s institutions are untrustworthy. When psyops enter culture wars, you start to see lies structured like this snake “warning.” They don’t just misrepresent a specific situation; they aim to undermine an entire system of beliefs.”
The third step is the easiest to recognize and the most extreme: you can’t communicate with anyone who says you should be dead. Anyone who believes you should be dead is beyond rational thought. Her advice is to ignore it and not engage.
Another way to recognize psyops tactics is to question everything. Thinking isn’t difficult, but thinking critically takes practice.
Whitney Grace, June 3, 2024
So AI Is — Maybe, Just Maybe — Not the Economic Big Kahuna?
June 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I find it amusing how AI has become the go-to marketing word. I suppose if I were desperate, lacking an income, unsure about what will sell, and a follow-the-hyperbole-type person, I would shout, “AI.” Instead I vocalize, “Ai-Yai-Ai,” emulating the tones of a Central American death whistle. Yep, “Ai-Yai-Ai.”
Thanks, MSFT Copilot. A harbinger? Good enough.
I read “MIT Professor Hoses Down Predictions AI Will Put a Rocket under the Economy.” I won’t comment upon the fog of distrust which I discern around Big Name Universities, nor will I focus my adjustable Walgreen’s spectacles on MIT’s fancy dancing with the quite interesting and decidedly non-academic Jeffrey Epstein. Nope. Forget those two factoids.
The write up reports:
…Daron Acemoglu, professor of economics at Massachusetts Institute of Technology, argues that predictions AI will improve productivity and boost wages in a “blue-collar bonanza” are overly optimistic.
The good professor is rowing against the marketing current. According to the article, the good professor identifies some wild and crazy forecasts. One of these is from an investment bank whose clients are unlikely to be what some one percenters perceive as non-masters of the universe.
That’s interesting. But it pales in comparison to the information in “Few People Are Using ChatGPT and Other AI Tools Regularly, Study Suggests.” (I love suggestive studies!) That write up reports on a study involving Thomson Reuters, the “trust” outfit:
Carried out by the Reuters Institute and Oxford University and involving 6,000 respondents from the U.S., U.K., France, Denmark, Japan, and Argentina, the researchers found that OpenAI’s ChatGPT is by far the most widely used generative-AI tool and is two or three times more widespread than the next most widely used products — Google Gemini and Microsoft Copilot. But despite all the hype surrounding generative AI over the last 18 months, only 1% of those surveyed are using ChatGPT on a daily basis in Japan, 2% in France and the UK, and 7% in the U.S. The study also found that between 19% and 30% of the respondents haven’t even heard of any of the most popular generative AI tools, and while many of those surveyed have tried using at least one generative-AI product, only a very small minority are, at the current time, regular users deploying them for a variety of tasks.
My hunch is that these contrarians want clicks. Well, the tactic worked for me. However, how many of those in AI-Land will take note? My thought is that these anti-AI findings are likely to be ignored until some of the Big Money folks lose their cash. Then the voices of negativity will be heard.
Several observations:
- The economics of AI seem similar to some early online ventures like Pets.com, not “all” mind you, just some
- Expertise in AI may not guarantee a job at a high-flying techno-feudalist outfit
- The difficulties Google appears to be having suggest that the road to AI-Land on the information superhighway may have some potholes. (If Google cannot pull AI off, how can Bob’s Trucking Company armed with Microsoft Word with Copilot?)
Net net: It will be interesting to monitor the frequency of “AI balloon deflating” analyses.
Stephen E Arnold, June 3, 2024
Google: Lost in Its Own AI Maze
May 31, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
One “real” news item caught my attention this morning. Let me tell you. Even with the interesting activities in the Manhattan court, it jumped out at me. Let’s take a quick look and see if Googzilla (see illustration) can make a successful exit from the AI maze in which the online advertising giant finds itself.
Googzilla is lost in its own AI maze. Can it find a way out? Thanks, MSFT Copilot. Three tries and I got a lizard in a maze. Keep allocating compute cycles to security, because obviously Copilot is getting fewer and fewer of them these days.
“Google Pins Blame on Data Voids for Bad AI Overviews, Will Rein Them In” makes it clear that Google is not blaming itself for some of the wacky outputs its centerpiece AI function has been delivering. I won’t do the guilty-34-times thing. I will just mention the non-toxic glue and pizza item. This news story reports:
Google thinks the AI Overviews for its search engine are great, and is blaming viral screenshots of bizarre results on "data voids" while claiming some of the other responses are actually fake. In a Thursday post, Google VP and Head of Google Search Liz Reid doubles down on the tech giant’s argument that AI Overviews make Google searches better overall—but also admits that there are some situations where the company "didn’t get it right."
So let’s look at that Google blog post titled “AI Overviews: About Last Week.”
How about this statement?
User feedback shows that with AI Overviews, people have higher satisfaction with their search results, and they’re asking longer, more complex questions that they know Google can now help with. They use AI Overviews as a jumping off point to visit web content, and we see that the clicks to webpages are higher quality — people are more likely to stay on that page, because we’ve done a better job of finding the right info and helpful webpages for them.
The statement strikes me as something that a character would say in an episode of the Twilight Zone, a TV series in the 50s and 60s. The TV show had a weird theme, and I thought I heard it playing when I read the official Googley blog post. Is this the Google “bullseye” method or a bullsh*t method?
The official Googley blog post notes:
This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available. (These are challenges that occur with other Search features too.) This approach is highly effective. Overall, our tests show that our accuracy rate for AI Overviews is on par with another popular feature in Search — featured snippets — which also uses AI systems to identify and show key info with links to web content.
Okay, we are into the bullsh*t method. Google search is now a key moment in the Sundar & Prabhakar Comedy Act. Since the début in Paris which featured incorrect data, the Google has been in Code Red, Red Alert, red-faced-embarrassment mode. Now the company wants people to eat rocks, and it is not the online advertising giant’s fault. The blog post explains:
There isn’t much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question. In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.
Okay, I think one component of the bullsh*t method is that it is not Google’s fault. “Users” — not customers, because Google has advertising clients, partners, and some lobbyists. Everyone else is a user, and it is the users’ fault, the data creators’ fault, and probably Sam AI-Man’s fault. (Did I omit anyone on whom to blame the “eat rocks” result?)
And the Google cares. This passage is worthy of a Hallmark card with a foldout:
At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors. We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone. We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.
What’s my take on this?
- The assumption that Google search is “good” is interesting, just not in line with what I hear, read, and experience when I do use Google. Note: My personal usage has decreased over time.
- Google is trying to explain away its obvious flaws. The Google speak may work for some people, just not for me.
- The tone is that of an entitled seventh-grader from a wealthy family, not the type of language I find particularly helpful when the “smart” Google software has to be remediated by humans. Google is terminating humans, right? Now Google needs humans. What’s up, Google?
Net net: Google is snagged in its own AI maze. I am growing less confident in the company’s ability to extricate itself. The Sam AI-Man has crafted deals with two outfits big enough to make Google’s life more interesting. Google’s own management seems ineffectual despite the flashing red and yellow lights and the honking of alarms. Google’s wordsmiths and lawyers are running out of verbal wiggle room. But most important, the failure of the bullseye method and the oozing comfort of the bullsh*t method mark a turning point for the company.
Stephen E Arnold, May 31, 2024
NSO Group: Making Headlines Again and Again and Again
May 31, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
NSO Group continues to generate news. One example is the company’s flagship sponsorship of an interesting conference going on in Prague from June 4th to the 6th. What does “interesting” mean? I think those who attend the conference are engaged in information-related activities connected in some way to law enforcement and intelligence. How do I know NSO Group ponied up big bucks to be the “lead sponsor”? Easy. I saw this advertisement on the conference organizer’s Web site. I know you want me to reveal the URL, but I will treat the organizer in a professional manner. Just use those Google Dorks, and you will locate the event. The ad:
What’s the ad from the “lead sponsor” say? Here are a few snippets from the marketing arm of NSO Group:
NSO Group develops and provides state-of-the-art solutions, designed to assist in preventing terrorism and crime. Our solutions address diverse strategical, tactical and operational needs and scenarios to serve authorized government agencies including intelligence, military and law enforcement. Developed by the top technology and data science experts, the NSO portfolio includes cyber intelligence, network and homeland security solutions. NSO Group is proud to help to protect lives, security and personal safety of citizens around the world.
Innocent stuff with a flavor that jargon-loving Madison Avenue types prefer.
Citizen Lab is a bit like the mules in an old-fashioned grist mill. The researchers do not change what they think about. Source: Royal Mint Museum in the UK.
Just for some fun, let’s look at the NSO Group through a different lens. The UK newspaper The Guardian, which counts how many stories I look at a year, published “Critics of Putin and His Allies Targeted with Spyware Inside the EU.” Here’s a sample of the story’s view of NSO Group:
At least seven journalists and activists who have been vocal critics of the Kremlin and its allies have been targeted inside the EU by a state using Pegasus, the hacking spyware made by Israel’s NSO Group, according to a new report by security researchers. The targets of the hacking attempts – who were first alerted to the attempted cyber-intrusions after receiving threat notifications from Apple on their iPhones – include Russian, Belarusian, Latvian and Israeli journalists and activists inside the EU.
And who wrote the report?
Access Now, the Citizen Lab at the Munk School of Global Affairs & Public Policy at the University of Toronto (“the Citizen Lab”), and independent digital security expert Nikolai Kvantiliani
The Citizen Lab has been paying attention to NSO Group for years. The people surveilled or spied upon via the NSO Group’s Pegasus technology are anti-Russia; that is, none of the entities will be invited to a picnic at Mr. Putin’s estate near Sochi.
Obviously some outfit has access to the Pegasus software and its command-and-control system. It is unlikely that NSO Group provided the software free of charge. Therefore, one can conclude that NSO Group could reveal what country was using its software for purposes one might consider outside the bounds of the write up’s words cited above.
NSO Group remains one of the main — if not the main — poster children for specialized software. The company continues to make headlines. Its technology remains one of the leaders in the type of software which can be used to obtain information from a mobile device. There are some alternatives, but NSO Group remains the Big Dog.
One wonders why Israel, presumably with the Pegasus tool, could not have obtained information relevant to the attack in October 2023. My personal view is that even with Fancy Dan ways to get data from a mobile phone, human analysts still have to figure out what’s important and what to flag as significant.
My point is that the hoo-hah about NSO Group and Pegasus may not be warranted. Raw data, without trained analysts and downstream software, may not yield the insight required to take a specific action. Israel’s intelligence failure means that software alone can’t do the job. No matter what the marketing material says or how slick the slide deck used to brief those with a “need to know” appears — software is not intelligence.
Will NSO Group continue to make headlines? Probably. Those with access to Pegasus will make errors and disclose their ineptness. Citizen Lab will be at the ready. New reports will be forthcoming.
Net net: Is anyone surprised Mr. Putin is trying to monitor anti-Russia voices? Is Pegasus the only software pressed into service? My answer to this question is: “Mr. Putin will use whatever tool he can to achieve his objectives.” Perhaps Citizen Lab should look for other specialized software and expand its opportunities to write reports? When will Apple address the vulnerability which NSO Group continues to exploit?
Stephen E Arnold, May 31, 2024
In the AI Race, Is Google Able to Win a Sprint to a Feature?
May 31, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
One would think that a sophisticated company with cash and skilled employees would avoid a mistake like shooting the CEO in the foot. The mishap has occurred again, and if it were captured in a TikTok, it would make an outstanding trailer for the Sundar & Prabhakar reprise of The Greatest Marketing Mistakes of the Year.
At age 25, which is quite the mileage when traveling on the Information Superhighway, the old timer is finding out that younger, speedier outfits may win a number of AI races. In the illustration, the Google runner seems stressed at the start of the race. Will the geezer win? Thanks, MidJourney. Good enough, which is the benchmark today I fear.
“Google Is Taking ‘Swift Action’ to Remove Inaccurate AI Overview Responses” explains that Google rolled out with some fanfare its AI Overviews. The idea is that smart software would just provide the “user” of the Google ad delivery machine with an answer to a query. Some people have found that the outputs are crazier than one would expect from a Big Tech outfit. The article states:
… Google says, “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback,” Google adds. “We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”
But others are much kinder. One notable example is Mashable’s “We Gave Google’s AI Overviews the Benefit of the Doubt. Here’s How They Did.” This estimable publication reported:
Were there weird hallucinations? Yes. Did they work just fine sometimes? Also yes.
The write up noted:
AI Overviews were a little worse in most of my test cases, but sometimes they were perfectly fine, and obviously you get them very fast, which is nice. The AI hallucinations I experienced weren’t going to steer me toward any danger.
Let’s step back and view the situation via several observations:
- Google’s big moment becomes a meme cemented to glue on pizza
- Does Google have a quality control process which flags obvious gaffes? Apparently not.
- Google management seems to suggest that humans have to intervene in a Google “smart” process. Doesn’t that defeat the purpose of using smart software to replace some humans?
Net net: The Google is ageing, and I am not sure a singularity will offset the quite obvious effects of that ageing: slowed corporate processes and stuttering synapses in the revamped AI unit.
Stephen E Arnold, May 31, 2024
Amazon: Competition Heats Up in Some Carpetland Offices
May 31, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The tech industry is cutthroat, and no one is safe in their position, no matter how high they are on the food chain. The Verge explains how one of Amazon’s CEOs might not be able to withstand the competition: “Amazon Web Services CEO To Step Down.” Adam Selipsky is the CEO of Amazon Web Services, and he will be stepping down on June 3, 2024. He will be replaced by Matt Garman, who is currently the SVP of AWS sales, marketing, and global services. Garman has worked at Amazon for eighteen years in the AWS division.
AWS was responsible for 17% of Amazon’s total revenue and roughly 61% of its operating income in the first quarter of 2024. AWS is known as an “invisible server empire” because it hosts the infrastructures of many organizations across all industries. When AWS experienced outages, there were ripple effects on the Internet and the real world, i.e., Amazon delivery vans and warehouse bots couldn’t work. AWS is a big player in Amazon’s AI development: proprietary AI chips, Anthropic, Amazon Q, Amazon Bedrock, and Nvidia’s GH200 chips. Selipsky was a major leader in building Amazon’s AI foundations.
Andy Jassy wrote an email to AWS staff about the transfer of power that applauds Selipsky’s service, explains he’s moving on to another “challenge,” and notes he is taking a “well-deserved respite.” The email then moves on to congratulating Garman. Selipsky replied with the following:
“Leading this amazing team and the AWS business is a big job, and I’m proud of all we’ve accomplished going from a start-up to where we are today. In the back of my head I thought there might be another chapter down the road at some point, but I never wanted to distract myself from what we are all working so hard to achieve. Given the state of the business and the leadership team, now is an appropriate moment for me to make this transition, and to take the opportunity to spend more time with family for a while, recharge a bit, and create some mental free space to reflect and consider the possibilities.
Matt and the AWS leadership team are ready for this next big opportunity. I’m excited to see what they and you do next, because I know it will be impressive. The future is bright for AWS (and for Amazon). I wish you all the very best of luck on this adventure.”
Selipsky, Jassy, Garman, and the AWS team appear to be parting on good terms. Something might have happened behind closed doors, though, and the verbiage hints that Selipsky could not handle where AWS is going.
Whitney Grace, May 31, 2024