Google Leadership Versus Valued Googlers

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The summer in rural Kentucky lingers on. About 2,300 miles away from the Sundar & Prabhakar Comedy Show’s nerve center, the Alphabet Google YouTube DeepMind entity is also experiencing “cyclonic heating from chaotic employee motion.” What’s this mean? Unsteady waters? Heat stroke? Confusion? Hallucinations? My goodness.

The Google leadership faces another round of employee pushback. I read “Workers at Google DeepMind Push Company to Drop Military Contracts.”

How could the Google smart software fail to predict this pattern? My view is that smart software has some limitations when it comes to managing AI wizards. Furthermore, Google senior managers have not been able to extract full knowledge value from the tools at their disposal to deal with complexity. Time Magazine reports:

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

Why are AI Googlers grousing about military work? My personal view is that the recent hagiography of Palantir’s Alex Karp and the tie up between Microsoft and Palantir for Impact Level 5 services means that the US government is gearing up to spend some big bucks for warfighting technology. Google wants — really needs — this revenue. Penalties for the frisky behavior Judge Mehta describes as “monopolistic” could put a hitch in the git-along of Google ad revenue. Therefore, Google’s smart software can meet the hunger militaries have for intelligent software to perform a wide variety of functions. As the Russian special operation makes clear, “meat based” warfare is somewhat inefficient. Ukrainian garage-built drones with some AI bolted on perform better than a wave of 18-year-olds with rifles and a handful of bullets. The example which sticks in my mind is a Ukrainian drone spotting a Russian soldier in a field, partially obscured by bushes. The individual is attending to nature’s call. The drone spots the “shape” and explodes near the Russian infantryman.


A former consultant faces an interpersonal Waterloo. How did that work out for Napoleon? Thanks, MSFT Copilot. Are you guys working on the IPv6 issue? Busy weekend ahead?

Those who study warfare probably have their own ah-ha moment.

The Time Magazine write up adds:

Those principles state the company [Google/DeepMind] will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.” The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

I love it when wizards “believe” something.

Will the Sundar & Prabhakar brain trust do the believing or bank revenue from government agencies eager to gain access to advanced artificial intelligence services and systems? My view is that the “believers” underestimate the uncertainty arising from the potential sanctions, fines, or corporate deconstruction that Judge Mehta’s decision presents.

The article adds this bit of color about the Sundar & Prabhakar response time to Googlers’ concern about warfighting applications:

The [objecting employees’] letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

“No meaningful response” suggests that the Alphabet Google YouTube DeepMind rhetoric is not satisfactory.

The write up concludes with this paragraph:

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

With Microsoft and Palantir, among others, poised to capture some end-of-fiscal-year money from certain US government budgets, the comedy act’s headquarters’ planners want a piece of the action. How will the Sundar & Prabhakar Comedy Act handle the situation? Why procrastinate? Perhaps the comedy act hopes the issue will just go away. The complaining employees have short attention spans, rely on TikTok-type services for information, and can be terminated like other Googlers who grouse, picket, boycott the Foosball table, or quiet quit while working on a personal start up.

The approach worked reasonably well before Judge Mehta labeled Google a monopoly operation. It worked when ad dollars flowed like latte at Philz Coffee. But today is different, and the unsettled personnel are not a joke and add to the uncertainty some have about the Google we know and love.

Stephen E Arnold, August 23, 2024

AI Balloon: Losing Air and Boring People

August 22, 2024

Though tech bros who went all-in on AI still promise huge breakthroughs just over the horizon, Windows Central’s Kevin Okemwa warns: “The Generative AI Bubble Might Burst, Sending the Tech to an Early Deathbed Before Its Prime: ‘Don’t Believe the Hype’.” Sadly, it is probably too late to save certain career paths, like coding, from an AI takeover. But perhaps a slowdown would conserve some valuable resources. Wouldn’t that be nice? The write-up observes:

“While AI has opened up the world to endless opportunities and untapped potential, its hype might be short-lived, with challenges abounding. Aside from its high water and power demands, recent studies show that AI might be a fad and further claim that 30% of its projects will be abandoned after proof of concept. Similar sentiments are echoed in a recent Blood In The Machine newsletter, which points out critical issues that might potentially lead to ‘the beginning of the end of the generative AI boom.’”

From the Blood In The Machine newsletter analysis by Brian Merchant, who is also the Los Angeles Times’ technology columnist:

“This is it. Generative AI, as a commercial tech phenomenon, has reached its apex. The hype is evaporating. The tech is too unreliable, too often. The vibes are terrible. The air is escaping from the bubble. To me, the question is more about whether the air will rush out all at once, sending the tech sector careening downward like a balloon that someone blew up, failed to tie off properly, and let go—or, more slowly, shrinking down to size in gradual sputters, while emitting embarrassing fart sounds, like a balloon being deliberately pinched around the opening by a smirking teenager.”

Such evocative imagery. Merchant’s article also notes that, though Enterprise AI was meant to be the way AI firms made their money, it is turning out to be a dud. There are several reasons for this, not the least of which is AI models’ tendency to “hallucinate.”

Okemwa offers several points to support Merchant’s deflating-balloon claim. For example, Microsoft was recently criticized by investors for wasting their money on AI technology. Then there is NVIDIA: the chipmaker recently became the most valuable company in the world thanks to astronomical demand for its hardware to power AI projects. However, a delay of its latest powerful chip dropped its stock’s value by 5%, and market experts suspect its value will continue to decline. The write-up also points to trouble at generative AI’s flagship firm, OpenAI. The company is plagued by a disturbing exodus of top executives, rumors of pending bankruptcy, and a pesky lawsuit from Elon Musk.

Speaking of Mr. Musk, how do those who say AI will kill us all respond to the potential AI downturn? Crickets.

Cynthia Murrell, August 22, 2024

Microsoft and Palantir: Moving Up to Higher Impact Levels

August 20, 2024

Microsoft and Palantir Sell AI Spyware to US Government

While AI is making the news about how it will end jobs, be used for deep fakes, and upend creative industries, there’s something that’s not being mentioned: spyware. The Verge writes about how two big technology players are planning to bring spyware to the US government: “Palantir Partners With Microsoft To Sell AI To The Government.”

Palantir and Microsoft recently announced they will combine their software to power services for US defense and intelligence agencies. Microsoft’s large language models (LLMs) will be used via Azure OpenAI Service with Palantir’s AI Platform (AIP). These will be offered through Microsoft’s classified government cloud environments. This doesn’t explain exactly what the combination of software will do, but there’s speculation.
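For readers who want to picture the Microsoft half of that stack, here is a hedged sketch of how an application typically calls an Azure-hosted OpenAI model with the official Python client. The endpoint, key, and deployment name are placeholders, and nothing here reflects how Palantir’s AIP or the classified cloud environments are actually wired in; those details have not been disclosed.

```python
# Hedged illustration only: a generic Azure OpenAI Service call with the
# official Python client. All identifiers below are placeholders.

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder endpoint
    api_key="YOUR_KEY",                                          # placeholder credential
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # an Azure deployment name, not a model family
    messages=[{"role": "user", "content": "Summarize the attached logistics report."}],
)
print(response.choices[0].message.content)
```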

Palantir is known for software that analyzes people’s personal data and helps governments and organizations with surveillance. Palantir has been very successful when it comes to government contracts:

“Despite its large client list, Palantir didn’t post its first annual profit until 2023. But the AI hype cycle has meant that Palantir’s “commercial business is exploding in a way we don’t know how to handle,” the company’s chief executive officer Alex Carp told Bloomberg in February. The majority of its business is from governments, including that of Israel — though the risk factors section of its annual filing notes that it does not and will not work with “the Chinese communist party.””

Eventually the details about Palantir’s and Microsoft’s partnership will be revealed. It probably won’t be far off from what people imagine, but it is guaranteed to be shocking.

Whitney Grace, August 20, 2024

Good News: Meta To Unleash Automated AI Ads

August 19, 2024

Facebook generated its first revenue streams from advertising. Meta, Facebook’s parent company, continues to make huge profits from ads. Its products use cookies for targeted ads, collect user information to sell, and more. It’s not surprising that AI will soon be entering the picture, says Computer Weekly: “Meta’s Zuckerberg Looks Ahead To AI-Generated Adverts.”

Meta increased its second-quarter revenues 22% year over year. The company also reported that the cost of revenue increased by 23% due to higher infrastructure costs and Reality Labs needing a lot of cash. Zuckerberg explained that advertisers used to reach out to his company about the target audiences they wanted to reach. Meta eventually became so advanced that its ad systems predicted target audiences better than the advertisers. Zuckerberg plans for Meta to do the majority of the work for advertising agencies. All they will need to provide Meta is a budget and a business objective.

Meta is investing in and developing technology to make more money via AI. Meta is playing the long game:

“When asked about the payback time for investments in AI, Meta’s chief financial officer, Susan Li, said: ‘On our core AI work, we continue to take a very return on investment-based approach. We’re still seeing strong returns as improvements to both engagement and ad performance have translated into revenue gains, and it makes sense for us to continue investing here.’

Looking at generative AI (GenAI), she added: ‘We don’t expect our GenAI products to be a meaningful driver of revenue in 2024, but we do expect that they’re going to open up new revenue opportunities over time that will enable us to generate a solid return off of our investment…’”

Meta might see a slight dip in profit margins because it is investing in better technology, but AI generated ads will pay for themselves, literally.

Whitney Grace, August 19, 2024

An Ed Critique That Pans the Sundar & Prabhakar Comedy Act

August 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read Ed.

Ed refers to Edward Zitron, the thinker behind Where’s Your Ed At. The write up which caught my attention is “Monopoly Money.” I think that Ed’s one-liners will not be incorporated into the Sundar & Prabhakar comedy act. The flubbed live demos are knee slappers, but Ed’s write up is nipping at the heels of the latest Googley gaffe.


Young people are keen observers of certain high-technology companies. What happens if one of the giants becomes virtual and moves to a Dubai-type location? Who has jurisdiction? Regulatory enforcement delayed means big high-tech outfits are more portable than old-fashioned monopolies. Thanks, MSFT Copilot. Big industrial images are clearly a core competency you have.

Ed’s focus is on the legal decision which concluded that the online advertising company is a monopoly in “general text advertising.” The essay states:

The ruling precisely explains how Google managed to limit competition and choice in the search and ad markets. Documents obtained through discovery revealed the eye-watering amounts Google paid to Samsung ($8 billion over four years) and Apple ($20 billion in 2022 alone) to remain the default search engine on their devices, as well as Mozilla (around $500 million a year), which (despite being an organization that I genuinely admire, and that does a lot of cool stuff technologically) is largely dependent on Google’s cash to remain afloat.

Ed notes:

Monopolies are a big part of why everything feels like it stopped working.

Ed is on to something. The large technology outfits in the US control online. But one of the downstream consequences of what I call the Silicon Valley way or the Googley approach to business is that other industries and market sectors have watched how modern monopolies work. The result is that concentration of power has not been a regulatory priority. The role of data aggregation has been ignored. As a result, outfits like Kroger (a grocery company) are trying to apply Googley tactics to vegetables.

Ed points out:

Remember when “inflation” raised prices everywhere? It’s because the increasingly-dwindling amount of competition in many consumer goods companies allowed them to all raise their prices, gouging consumers in a way that should have had someone sent to jail rather than make $19 million for bleeding Americans dry. It’s also much, much easier for a tech company to establish one, because they often do so nestled in their own platforms, making them a little harder to pull apart. One can easily say “if you own all the grocery stores in an area that means you can control prices of groceries,” but it’s a little harder to point at the problem with the tech industry, because said monopolies are new, and different, yet mostly come down to owning, on some level, both the customer and those selling to the customer.

Blue chip consulting firms flip this comment around. The points Ed makes are the recommendations and tactics the would-be monopolists convert to action plans. My reaction is, “Thanks, Silicon Valley. Nice contribution to society.”

Ed then gets to artificial intelligence, definitely a hot topic. He notes:

Monopolies are inherently anti-consumer and anti-innovation, and the big push toward generative AI is a blatant attempt to create another monopoly — the dominance of Large Language Models owned by Microsoft, Amazon, Google and Meta. While this might seem like a competitive marketplace, because these models all require incredibly large amounts of cloud compute and cash to both train and maintain, most companies can’t really compete at scale.

Bingo.

I noted this Ed comment about AI too:

This is the ideal situation for a monopolist — you pay them money for a service and it runs without you knowing how it does so, which in turn means that you have no way of building your own version. This master plan only falls apart when the “thing” that needs to be trained using hardware that they monopolize doesn’t actually provide the business returns that they need to justify its existence.

Ed then makes a comment which will cause some stakeholders to take a breath:

As I’ve written before, big tech has run out of hyper-growth markets to sell into, leaving them with further iterations of whatever products they’re selling you today, which is a huge problem when big tech is only really built to rest on its laurels. Apple, Microsoft and Amazon have at least been smart enough to not totally destroy their own products, but Meta and Google have done the opposite, using every opportunity to squeeze as much revenue out of every corner, making escape difficult for the customer and impossible for those selling to them. And without something new — and no, generative AI is not the answer — they really don’t have a way to keep growing, and in the case of Meta and Google, may not have a way to sustain their companies past the next decade. These companies are not built to compete because they don’t have to, and if they’re ever faced with a force that requires them to do good stuff that people like or win a customer’s love, I’m not sure they even know what that looks like.

Viewed from a Googley point of view, these high-technology outfits are doing what is logical. That’s why the Google advertisement for itself troubled people. The person writing for his child willfully used smart software. The fellow embodied a logical solution to the knotty problem of feelings and appropriate behavior.

Ed suggests several remedies for the Google issue. These make sense, but the next step for Google will be an appeal. Appeals take time. US government officials change. The appetite to fight legions of well resourced lawyers can wane. The decision reveals some interesting insights into the behavior of Google. The problem now is how to alter that behavior without causing significant market disruption. Google is really big, and changes can have difficult-to-predict consequences.

The essay concludes:

I personally cannot leave Google Docs or Gmail without a significant upheaval to my workflow — is a way that they reinforce their monopolies. So start deleting sh*t. Do it now. Think deeply about what it is you really need — be it the accounts you have and the services you need — and take action.  They’re not scared of you, and they should be.

Interesting stance.

Several observations:

  1. Appeals take time. Time favors outfits that lose anti-trust cases.
  2. Google can adapt and morph. Its size and scale equip the Google in ways not fathomable to those outside Google.
  3. Google is not Standard Oil. Google is like AT&T. That breakup resulted in reconsolidation into two big Baby Bells and one outside player. So a shattered Google may just reassemble itself. The fancy word for this is emergent.

Ed hits some good points. My view is that the Google fumbles forward, putting the Sundar & Prabhakar Comedy Act in every city the digital wagon can reach.

Stephen E Arnold, August 16, 2024

A Familiar Cycle: The Frustration of Almost Solving the Search Problem

August 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Search and retrieval is a difficult problem. The solutions have ranged from scrolls with labels to punched cards and rods to bags of words. Each innovation or advance sparked new ideas. Boolean gave way to natural language. Natural language evolved into semi-smart systems. Now we are in the era of what seems to be smart software. With the punch card systems, users became aware of the value of consistent, accurate indexing. Today one expects a system to “know” what the user wants. Instead of knowing index terms, one learns to be a prompt engineer.


Search and retrieval is not “solved” using large language models. LLMs are a step forward on a long and difficult path. The potential financial cost of thinking that the methods are a sure-fire money machine is high. Thanks, MSFT Copilot. How was DEFCON?

I read “LLM Progress Is Slowing — What Will It Mean for AI?” The write up makes clear that the excitement about smart software which can make sense of natural language queries (prompts) has lost some of its shine. This type of insight is one that probably existed when a Babylonian tablet maker groused about not having an easy way to stack up clay tablets for the money guy. Search and retrieval is essential for productive work. A system which makes that process less of a hassle is welcomed. After a period of time one learns that the approach is not quite where the user wants it to be. Researchers and innovators hear the complaint and turn their attention to improving search and retrieval … again.

The write up states:

The leap from GPT-3 to GPT-3.5 was huge, propelling OpenAI into the public consciousness. The jump up to GPT-4 was also impressive, a giant step forward in power and capacity. Then came GPT-4 Turbo, which added some speed, then GPT-4 Vision, which really just unlocked GPT-4’s existing image recognition capabilities. And just a few weeks back, we saw the release of GPT-4o, which offered enhanced multi-modality but relatively little in terms of additional power. Other LLMs, like Claude 3 from Anthropic and Gemini Ultra from Google, have followed a similar trend and now seem to be converging around similar speed and power benchmarks to GPT-4. We aren’t yet in plateau territory — but do seem to be entering into a slowdown. The pattern that is emerging: Less progress in power and range with each generation.

This is an echo of the complaints I heard about Dr. Salton’s SMART search system.

The “fix” according to the write up may be to follow one of these remediation paths:

  • More specialization
  • New user interfaces
  • Open source large language models
  • More and better data
  • New large language model architectures.

These are ideas bolted to the large language model approach to search and retrieval. I think each has upsides and downsides. These deserve thoughtful discussion. However, search and retrieval has always advanced through an evolutionary process. Those chaos and order thinkers at the Santa Fe Institute suggest that certain “things” self organize and emerge. The idea has relevance to what happens with each “new” approach to search and retrieval.

The cited write up concludes with this statement:

One possible pattern that could emerge for LLMs: That they increasingly compete at the feature and ease-of-use levels. Over time, we could see some level of commoditization set in, similar to what we’ve seen elsewhere in the technology world. Think of, say, databases and cloud service providers. While there are substantial differences between the various options in the market, and some developers will have clear preferences, most would consider them broadly interchangeable. There is no clear and absolute “winner” in terms of which is the most powerful and capable.

I think the idea about competition is mostly correct. However, my impression of search and retrieval as a technology thread is that progress is being made. I find it encouraging that more users are interacting with systems. Unfortunately search and retrieval is not solved by generating a paragraph a high school student can turn in to a history teacher as an original report.

Effective search and retrieval is not just a prompt box. Effective information access remains a blend of extraordinarily trivial activities. For instance, a conversation may suggest a new way to locate relevant information. Reading an article or a longer document may trigger an unanticipated connection between ant colonies and another task-related process. The act of looking at different sources may lead to a fact previously unknown which leads in turn to another knowledge insight. Software alone cannot replicate these mental triggers.

LLMs, like stacked clay tablets, provide challenges and utility. However, search and retrieval remains a work in progress. LLMs, like semantic ad matching or using one’s search history as a context clue, are helpful. But opportunities for innovation exist. My view is that the grousing about LLM limitations is little more than a recognition that converting a human concept or information need to an “answer” is a work in progress. The difference is that today billions of dollars have been pumped into smart software in the hope that information retrieval is solved.

Sorry, it is not. Therefore, the stakes are high: the golden goose may not lay enough eggs to pay off the cost of the goose itself. Twenty years ago search and retrieval was not a sector consuming billions of dollars in the span of a couple of years. That’s what is making people nervous about LLMs. Watching Delphi or Entopia fail was expensive, but the scale of the financial loss and the emotional cost of LLM failure is a different kettle of fish.

Oh, and those five “fixes” in the bullet points from the write up. None will solve the problem of search and retrieval.

Stephen E Arnold, August 16, 2024

Pragmatic AI: Individualized Monitoring

August 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In June 2024 at the TechnoSecurity & Digital Forensics conference, one of the cyber investigators asked me, “What are some practical uses of AI in law enforcement?” I told the person that I would send him a summary of my earlier lecture called “AI for LE.” He said, “Thanks, but what should I watch to see some AI in action?” I told him to pay attention to the Kroger pricing methods. I had heard that Kroger was experimenting with altering prices based on certain signals. The example I gave is that if a Kroger store is located in a certain zip code, then the Kroger stores in that specific area would use dynamic pricing. The example I gave was similar to Coca-Cola’s tests of a vending machine that charged more if the temperature was hot. In the Kroger example, a hot day would trigger a change in the price of a frozen dessert. He replied, “Kroger?” I said, “Yes, Kroger is experimenting with AI in order to detect specific behaviors and modify prices to reflect those signals.” What Kroger is doing will be coming to law enforcement and intelligence operations. Smart software monitors the behavior of a prisoner, for example, and automatically notifies an investigator when a certain signal is received. I recall mentioning that smart software, signals, and behavior change or direct action will become key components of a cyber investigator’s tool kit. He said, laughing, “Kroger. Interesting.”


Thanks, MSFT Copilot. Good enough.

I learned that Kroger’s surveillance concept is no longer just a rumor discussed at a neighborhood get-together. “‘Corporate Greed Is Out of Control’: Warren Slams Kroger’s AI Pricing Scheme” reveals that elected officials and probably some consumer protection officials may be aware of the company’s plans for smart software. The write up reports:

Warren (D-Mass.) was joined by Sen. Bob Casey (D-Pa.) on Wednesday in writing a letter to the chairman and CEO of the Kroger Company, Rodney McMullen, raising concerns about how the company’s collaboration with AI company IntelligenceNode could result in both privacy violations and worsened inequality as customers are forced to pay more based on personal data Kroger gathers about them “to determine how much price hiking [they] can tolerate.” As the senators wrote, the chain first introduced dynamic pricing in 2018 and expanded to 500 of its nearly 3,000 stores last year. The company has partnered with Microsoft to develop an Electronic Shelving Label (ESL) system known as Enhanced Display for Grocery Environment (EDGE), using a digital tag to display prices in stores so that employees can change prices throughout the day with the click of a button.

My view is that AI orchestration will allow additional features and functions. Some of these may be appropriate for use in policeware and intelware systems. Kroger makes an effort to get individuals to sign up for a discount card. Also, Kroger wants users to install the Kroger app. The idea is that discounts or other incentives may be “awarded” to the customer who takes advantage of the services.

However, I am speculating that AI orchestration will allow Kroger to implement a chain of actions like this (a rough code sketch follows the list):

  1. Customer with a mobile phone enters the store
  2. The store “acknowledges” the customer
  3. The customer’s spending profile is accessed
  4. The customer is “known” to purchase upscale branded ice cream
  5. The price for that item automatically changes as the customer approaches the display
  6. The system records the item bar code and the customer ID number
  7. At check out, the customer is charged the higher price.
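Below is a minimal sketch of how such an orchestration chain might be wired together. It is purely illustrative: the function names, the loyalty-profile fields, and the pricing rule are my inventions, not a description of any system Kroger or its vendors actually operate.

```python
# Hypothetical sketch of the seven-step orchestration chain listed above.
# None of these components correspond to a real Kroger or vendor API.

from dataclasses import dataclass

@dataclass
class ShopperProfile:
    customer_id: str
    buys_premium_ice_cream: bool      # inferred from purchase history
    estimated_price_tolerance: float  # 1.0 = list price, 1.15 = will pay 15% more

def acknowledge_entry(phone_id: str) -> ShopperProfile:
    """Steps 1-3: the store detects the phone and pulls the spending profile."""
    # Hypothetical lookup against a loyalty-card database.
    return ShopperProfile(customer_id=phone_id,
                          buys_premium_ice_cream=True,
                          estimated_price_tolerance=1.15)

def price_for(profile: ShopperProfile, list_price: float) -> float:
    """Steps 4-5: adjust the shelf price as the known shopper approaches the display."""
    if profile.buys_premium_ice_cream:
        return round(list_price * profile.estimated_price_tolerance, 2)
    return list_price

def checkout(profile: ShopperProfile, barcode: str, charged: float) -> dict:
    """Steps 6-7: record the item bar code, the customer ID, and the price charged."""
    return {"customer": profile.customer_id, "item": barcode, "charged": charged}

if __name__ == "__main__":
    shopper = acknowledge_entry("phone-1234")
    charged = price_for(shopper, list_price=5.99)
    print(checkout(shopper, barcode="0001234567890", charged=charged))
```

Swap the pricing rule for an alerting rule and the same skeleton describes the monitoring scenario mentioned earlier: a signal arrives, a profile is consulted, and an action is triggered automatically.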

Is this type of AI orchestration possible? Yes. Is it practical for a grocery store to deploy? Yes, because Kroger uses third parties to provide its systems and technical capabilities for many applications.

How does this apply to law enforcement? Kroger’s use of individualized tracking may provide some ideas for cyber investigators.

As large firms use their resources to deploy state-of-the-art technology to boost sales, know the customer, and adjust prices at the individual shopper level, the benefits of smart software become increasingly visible. Some specialized software systems lag behind commercial systems. Among the reasons are budget constraints and the often complicated procurement processes.

But what is at the grocery store is going to become a standard function in many specialized software systems. These will range from security monitoring systems which can follow a person of interest in a specific area to systems which automatically update a person of interest’s location on a geographic information module.

If you are interested in watching smart software and individualized “smart” actions, just pay attention at Kroger or a similar retail outfit.

Stephen E Arnold, August 15, 2024

AI Safety Evaluations, Some Issues Exist

August 14, 2024

Ah, corporate self regulation. What could go wrong? Well, as TechCrunch reports, “Many Safety Evaluations for AI Models Have Significant Limitations.” Writer Kyle Wiggers tells us:

“Generative AI models … are coming under increased scrutiny for their tendency to make mistakes and generally behave unpredictably. Now, organizations from public sector agencies to big tech firms are proposing new benchmarks to test these models’ safety. Toward the end of last year, startup Scale AI formed a lab dedicated to evaluating how well models align with safety guidelines. This month, NIST and the U.K. AI Safety Institute released tools designed to assess model risk. But these model-probing tests and methods may be inadequate. The Ada Lovelace Institute (ALI), a U.K.-based nonprofit AI research organization, conducted a study that interviewed experts from academic labs, civil society and those who are producing vendors models, as well as audited recent research into AI safety evaluations. The co-authors found that while current evaluations can be useful, they’re non-exhaustive, can be gamed easily and don’t necessarily give an indication of how models will behave in real-world scenarios.”

There are several reasons for the gloomy conclusion. For one, there are no established best practices for these evaluations, leaving each organization to go its own way. One approach, benchmarking, has certain problems. For example, for time or cost reasons, models are often tested on the same data they were trained on. Whether they can perform in the wild is another matter. Also, even small changes to a model can make big differences in behavior, but few organizations have the time or money to test every software iteration.
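As a concrete illustration of the train/test overlap problem, here is a small, hedged sketch of one crude way to flag benchmark items that also appear, after normalization, in a model’s training data. The data and the matching rule are invented for the example; real contamination checks are far more elaborate.

```python
# Minimal sketch of a train/test contamination check, assuming the benchmark
# items and the training corpus are both available as plain text.

import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits do not hide overlap."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def contaminated_items(benchmark: list[str], training_corpus: list[str]) -> list[str]:
    """Return benchmark items whose normalized text appears verbatim in the training data."""
    training_set = {normalize(doc) for doc in training_corpus}
    return [item for item in benchmark if normalize(item) in training_set]

# Toy example with invented data.
benchmark = ["What is the capital of France?", "Translate 'chat' into English."]
training = ["what is the capital of   france?", "Paris is the capital of France."]
print(contaminated_items(benchmark, training))  # -> ['What is the capital of France?']
```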

What about red-teaming: hiring someone to probe the model for flaws? The low number of qualified red-teamers and the laborious nature of the method make it costly, out of reach for smaller firms. There are also few agreed-upon standards for the practice, so it is hard to assess the effectiveness of red-team projects.

The post suggests all is not lost—as long as we are willing to take responsibility for evaluations out of AI firms’ hands. Good luck prying open that death grip. Government regulators and third-party testers would hypothetically fill the role, complete with transparency. What a concept. It would also be good to develop standard practices and context-specific evaluations. Bonus points if a method is based on an understanding of how each AI model operates. (Sadly, such understanding remains elusive.)

Even with these measures, it may never be possible to ensure any model is truly safe. The write-up concludes with a quote from the study’s co-author Mahi Hardalupas:

“Determining if a model is ‘safe’ requires understanding the contexts in which it is used, who it is sold or made accessible to, and whether the safeguards that are in place are adequate and robust to reduce those risks. Evaluations of a foundation model can serve an exploratory purpose to identify potential risks, but they cannot guarantee a model is safe, let alone ‘perfectly safe.’ Many of our interviewees agreed that evaluations cannot prove a model is safe and can only indicate a model is unsafe.”

How comforting.

Cynthia Murrell, August 14, 2024

Sakana: Can Its Smart Software Replace Scientists and Grant Writers?

August 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A couple of years ago, merging large language models seemed like a logical way to “level up” in the artificial intelligence game. The notion of intelligence aggregation implied that if competitor A was dumb enough to release models and other digital goodies as open source, an outfit in the proprietary software business could squish the other outfits’ LLMs into the proprietary system. The costs of building one’s own super-model could be reduced to some extent.

Merging is a very popular way to whip up pharmaceuticals. Take a little of this and a little of that and bingo one has a new drug to flog through the approval process. Another example is taking five top consultants from Blue Chip Company I and five top consultants from Blue Chip Company II and creating a smarter, higher knowledge value Blue Chip Company III. Easy.

A couple of Xooglers (former Google wizards) are promoting a firm called Sakana.ai. The purpose of the firm is to allow smart software (based on merging multiple large language models and proprietary systems and methods) to conduct and write up research (I am reluctant to use the word “original”, but I am a skeptical dinobaby.) The company says:

One of the grand challenges of artificial intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used to aid human scientists, e.g. for brainstorming ideas or writing code, they still require extensive manual supervision or are heavily constrained to a specific task. Today, we’re excited to introduce The AI Scientist, the first comprehensive system for fully automatic scientific discovery, enabling Foundation Models such as Large Language Models (LLMs) to perform research independently. In collaboration with the Foerster Lab for AI Research at the University of Oxford and Jeff Clune and Cong Lu at the University of British Columbia, we’re excited to release our new paper, The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery.

Sakana does not want to merge the “big” models. Its approach for robot generated research is to combine specialized models. Examples which came to my mind were drug discovery and providing “good enough” blue chip consulting outputs. These are both expensive businesses to operate. Imagine the payoff if the Sakana approach delivers high value results. Instead of merging big, the company wants to merge small; that is, more specialized models and data. The idea is that specialized data may sidestep some of the interesting issues facing Google, Meta, and OpenAI among others.


Sakana’s Web site provides this schematic to help the visitor get a sense of the mechanics of the smart software. The diagram is Sakana’s, not mine.

I don’t want to let science fiction get in the way of what today’s AI systems can do in a reliable manner. I want to make some observations about smart software making discoveries and writing useful original research papers or posts for BearBlog.dev.

  • The company’s Web site includes a link to a paper written by the smart software. With a sample of one, I cannot see much difference between it and the baloney cranked out by the Harvard medical group or Stanford’s former president. If software did the work, it is a good deep fake.
  • Should the software be able to assemble known items of information into something “novel,” the company has hit a home run in the AI ballgame. I am not a betting dinobaby. You make your own guess about the firm’s likelihood of success.
  • If the software works to some degree, quite a few outfits looking for a way to replace people with a Sakana licensing fee will sign up. Will these outfits renew? I have no idea. But “good enough” may be just what these companies want.

Net net: The Sakana.ai Web site includes a how-it-works section, more papers about items “discovered” by the software, and a couple of engineers-do-philosophy-and-ethics write ups. A “full scientific report” is available at https://arxiv.org/abs/2408.06292. I wonder if the software invented itself, wrote the documents, and did the marketing which caught my attention. Maybe?

Stephen E Arnold, August 13, 2024

The Upside of the Google Olympics Ad

August 13, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I learned that Google’s AI advertisements “feel bad for a reason.” And what is that reason? The answer appears in the write up “Those Olympics AI Ads Feel Bad for a Reason. It’s Not Just Google’s ‘Dear Sydney’ Commercial That Feels Soulless and Strange.” (I want to mention that this headline seems soulless and strange, but I won’t.)

The write up reveals the “secret” of the Googler using Google AI to write for his Google progeny:

The latest spate of AI ad campaigns, for their part, have thus far failed to highlight how its products assist what the majority of Americans actually want to use AI for — namely, help with household chores — and instead end up showing how AI will be used for the things that most of us don’t want it to interfere with: our job prospects, our privacy, and experiences and skills that feel uniquely human. If the world already thinks of AI as menacing, wasteful, and yet another example of market overhype, these ads are only confirming our worst fears. No wonder they come off as so thoroughly insufferable.

I apologize for skipping the somewhat ho hum recitation of AI marketing gaffes. I bravely waded through the essay to identify the reason that AI ads make people “feel bad.” Am I convinced?

Nope.

I watched a version of the ad on my laptop. Based on my experience, I thought it was notable that the alleged Googley user remembered he had a family. I was impressed that the Googley father remembered where his Googley child was. I liked the idea of using AI to eliminate the need to use a smart software system to help craft a message with words that connoted interest, caring, familial warmth.

Let’s face it. The ad was more satisfying than converting a news story like a dead Google VP on a yacht.


How would Google’s smart software tell this story? I decided to find out. Here is what Gemini 1.5 Pro provided to me. Remember. I am a nerd dinobaby with a reputation for lacking empathy and noted for my work in certain specialized sectors:

It’s been a long time since Dean’s passing, but I wanted to reach out because I was thinking about him and his family. I came across an article about the woman who was with him when he passed. I know this might be a difficult thing to hear about, and I am so very sorry for your loss. Dean was such a bright light in this world, and I know how much he meant to you. Thinking of you during this time.

Amazing. The Googler’s drug death in the presence of a prostitute has been converted to a paragraph I could not possibly write. I would use a phrase like “nuked by horse” instead of “passed.” The phrase “I am so very sorry” is not what I would have been able to craft. My instinct is to say something like “The Googler tried to have fun and screwed up big time.” Finally, never would a nerd dinobaby like me write “thinking of you.” I would write, “Get to your attorney pronto.”

I know that real Googlers are not like nerd dinobabies. Therefore, it is perfectly understandable that the ad presents a version of reality which is not aspirational. It is a way for certain types of professionals to simulate interest and norm-core values.

Let’s praise Google and its AI.

Stephen E Arnold, August 13, 2024
