Regulators Shift into Gear to Investigate an AI Tie Up
January 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Solicitors, lawyers, and avocats want to mark the anniversary of the AI big bang. About one year ago, Microsoft pushed Google into hitting its Code Red button. Investment firms, developers, and wild-eyed entrepreneurs knew smart software was the real deal, not a digital file of a cartoon like that NFT baloney. In the last 12 months, AI went from yawn-inducing jargon to the treasure map to the fabled city of El Dorado (even if it turned out to be a suburb of Grants, New Mexico). Google got the message quickly. The lawyers? Well, not too quickly.
Regulators look through the technological pile of 2023 gadgets. AI may have been last year’s big thing, but the lawmakers and justice deciders are only now moving into action mode. Exciting. Thanks, MSFT Copilot Bing thing. Good enough.
“EU Joins UK in Scrutinizing OpenAI’s Relationship with Microsoft” documents what happens when lawyers — after decades of inaction — wake up to do something constructive. Social media gutted the fabric of many cultural norms. AI isn’t going to be given a 20-year free pass. No way.
The write up reports:
Antitrust regulators in the EU have joined their British counterparts in scrutinizing Microsoft’s alliance with OpenAI.
What will happen now? Here’s my short list of actions:
- Legal eagles on both sides of the Atlantic will begin grooming their feathers in order to be selected to deal with the assorted forms, filings, hearings, and advisory meetings. Some of the lawyers will call Ferrari to make sure they are eligible to buy a supercar; others may cast an eye on an impounded oligarch-linked yacht. Yep, big bucks ahead.
- Microsoft and OpenAI will let loose a platoon of humanoid art history and business administration majors. These professionals will create a wide range of informative explainers. Smart software will be pressed into duty, and I anticipate some smart automation to provide Teflon to the flow of digital documentation.
- Firms — possibly some based in the EU and a few bold souls in the US — will present information making clear that competition is a good thing and that governments must regulate smart software.
- Entities hostile to the EU and the US will also output information or disinformation. Which is which depends on one’s perspective.
In short, 2024 will be an interesting year because one of the major threats to the Google could be converted to the digital equivalent of a eunuch in an Assyrian ruler’s court. What will this mean? Google wins. Unanticipated consequence? Absolutely.
Stephen E Arnold, January 19, 2024
Information Voids for Vacuous Intellects
January 18, 2024
This essay is the work of a dumb dinobaby. No smart software required.
In countries around the world, 2024 is a critical election year, and the problem of online mis- and disinformation is worse than ever. Nature emphasizes the seriousness of the issue as it describes “How Online Misinformation Exploits ‘Information Voids’—and What to Do About It.” Apparently we humans are so bad at considering the source that advising us to do our own research just makes the situation worse. Citing a recent Nature study, the article states:
“According to the ‘illusory truth effect’, people perceive something to be true the more they are exposed to it, regardless of its veracity. This phenomenon pre-dates the digital age and now manifests itself through search engines and social media. In their recent study, Kevin Aslett, a political scientist at the University of Central Florida in Orlando, and his colleagues found that people who used Google Search to evaluate the accuracy of news stories — stories that the authors but not the participants knew to be inaccurate — ended up trusting those stories more. This is because their attempts to search for such news made them more likely to be shown sources that corroborated an inaccurate story.”
Doesn’t Google bear some responsibility for this phenomenon? Apparently the company believes it is already doing enough by deprioritizing unsubstantiated news, posting content warnings, and including its “about this result” tab. But it is all too easy to wander right past those measures into a “data void,” a virtual space full of specious content. The first impulse when confronted with questionable information is to copy the claim and paste it straight into a search bar. But that is the worst approach. We learn:
“When [participants] entered terms used in inaccurate news stories, such as ‘engineered famine’, to get information, they were more likely to find sources uncritically reporting an engineered famine. The results also held when participants used search terms to describe other unsubstantiated claims about SARS-CoV-2: for example, that it rarely spreads between asymptomatic people, or that it surges among people even after they are vaccinated. Clearly, copying terms from inaccurate news stories into a search engine reinforces misinformation, making it a poor method for verifying accuracy.”
But what to do instead? The article notes Google steadfastly refuses to moderate content, as social media platforms do, preferring to rely on its (opaque) automated methods. Aslett and company suggest inserting human judgement into the process could help, but apparently that is too old fashioned for Google. Could educating people on better research methods help? Sure, if they would only take the time to apply them. We are left with this conclusion: instead of researching claims from untrustworthy sources, one should just ignore them. But that brings us full circle: one must be willing and able to discern trustworthy from untrustworthy sources. Is that too much to ask?
Cynthia Murrell, January 18, 2024
Two Surveys. One Message. Too Bad
January 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Generative Artificial Intelligence Will Lead to Job Cuts This Year, CEOs Say.” The data come from a consulting/accounting outfit’s survey of executives at the oh-so-exclusive World Economic Forum meeting in the Piscataway, New Jersey, of Switzerland. The company running the survey is PwC (once an acronym for Price Waterhouse Coopers). The moniker has embraced a number of interesting investigations. For details, navigate to this link.
Survey says, “Economic gain is the meaning of life.” Thanks, MidJourney, good enough.
The big finding from my point of view is:
A quarter of global chief executives expect the deployment of generative artificial intelligence to lead to headcount reductions of at least 5 per cent this year
Good, reassuring number from big gun world leaders.
However, the International Monetary Fund also did a survey. The percentage of jobs affected ranges from 26 percent in low-income countries to 40 percent for emerging markets and 60 percent for advanced economies.
What can one make of these numbers, specifically the gap between the five percent and the 60 percent? My team’s thoughts are:
- The gap is interesting, but the CEOs appear to be either downplaying the impact, generating PR output, or working to avoid getting caught on a sticky wicket.
- The methodology and the sample of each survey are different, but both are skewed. The IMF taps analysts, bankers, and politicians. PwC goes to those who are prospects for PwC professional services.
- Each survey suggests that government efforts to manage smart software are likely to be futile. On one hand, CEOs will say, “No big deal.” Some will point to the PwC survey and say, “Here’s proof.” On the other, the financial types will hold up the IMF results and say, “We need to move fast or we risk losing out on the efficiency payback.”
What does Bill Gates think about smart software? In “Microsoft Co-Founder Bill Gates on AI’s Impact on Jobs: It’s Great for White-Collar Workers, Coders,” the genius for our time says:
I have found it’s a real productivity increase. Likewise, for coders, you’re seeing 40%, 50% productivity improvements which means you can get programs [done] sooner. You can make them higher quality and make them better. So mostly what we’ll see is that the productivity of white-collar [workers] will go up
Happy days for sure! What’s next? Smart software will move forward. Potential payouts are too juicy. The World Economic Forum and the IMF share one core tenet: Money. (Tip: Be young.)
Stephen E Arnold, January 17, 2024
AI Inventors Barred from Patents. For Now
January 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
For anyone wondering whether an AI system can be officially recognized as a patent inventor, the answer in two countries is no. Or at least not yet. We learn from The Fashion Law, “UK Supreme Court Says AI Cannot Be Patent Inventor.” Inventor Stephen Thaler pursued two patents on behalf of DABUS, his AI system. After the UK’s Intellectual Property Office, High Court, and the Court of Appeal all rejected the applications, the intrepid algorithm advocate appealed to the highest court in that land. The article reveals:
“In the December 20 decision, which was authored by Judge David Kitchin, the Supreme Court confirmed that as a matter of law, under the Patents Act, an inventor must be a natural person, and that DABUS does not meet this requirement. Against that background, the court determined that Thaler could not apply for and/or obtain a patent on behalf of DABUS.”
The court also specified the patent applications now stand as “withdrawn.” Thaler also tried his luck in the US legal system but met with a similar result. So is it the end of the line for DABUS’s inventor ambitions? Not necessarily:
“In the court’s determination, Judge Kitchin stated that Thaler’s appeal is ‘not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable, nor is it concerned with the question whether the meaning of the term ‘inventor’ ought to be expanded … to include machines powered by AI ….’”
So the legislature may yet allow AIs into the patent application queues. Will being a “natural person” soon become unnecessary to apply for a patent? If so, will patent offices increase their reliance on algorithms to handle the increased caseload? Then machines would grant patents to machines. Would natural people even be necessary anymore? Once a techno feudalist with truckloads of cash and flocks of legal eagles pulls up to a hearing, rules can become — how shall I say it? — malleable.
Cynthia Murrell, January 17, 2024
Guidelines. What about AI and Warfighting? Oh, Well, Hmmmm.
January 16, 2024
This essay is the work of a dumb dinobaby. No smart software required.
It seems November 2023’s AI Safety Summit, hosted by the UK, was a productive gathering. At the very least, attendees drew up some best practices and brought them to agencies in their home countries. TechRepublic describes the “New AI Security Guidelines Published by NCSC, CISA, & More International Agencies.” Writer Owen Hughes summarizes:
“The Guidelines for Secure AI System Development set out recommendations to ensure that AI models – whether built from scratch or based on existing models or APIs from other companies – ‘function as intended, are available when needed and work without revealing sensitive data to unauthorized parties.’ Key to this is the ‘secure by default’ approach advocated by the NCSC, CISA, the National Institute of Standards and Technology and various other international cybersecurity agencies in existing frameworks. Principles of these frameworks include:
* Taking ownership of security outcomes for customers.
* Embracing radical transparency and accountability.
* Building organizational structure and leadership so that ‘secure by design’ is a top business priority.
A combined 21 agencies and ministries from a total of 18 countries have confirmed they will endorse and co-seal the new guidelines, according to the NCSC. … Lindy Cameron, chief executive officer of the NCSC, said in a press release: ‘We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up. These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.’”
Nice idea, but we noted “OpenAI’s Policy No Longer Explicitly Bans the Use of Its Technology for Military and Warfare.” The article reports that OpenAI:
updated the page on January 10 "to be clearer and provide more service-specific guidance," as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to "develop or use weapons." However, the company has removed language pertaining to "military and warfare." While we’ve yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI.
We are told cybersecurity experts and analysts welcome the guidelines. But will the companies vending and developing AI products willingly embrace principles like “radical transparency and accountability”? Will regulators be able to force them to do so? We have our doubts. Nevertheless, this is a good first step. If only it had been taken at the beginning of the race.
Cynthia Murrell, January 16, 2024
Cybersecurity AI: Yet Another Next Big Thing
January 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Not surprisingly, generative AI has boosted the cybersecurity arms race. As bad actors use algorithms to more efficiently breach organizations’ defenses, security departments can only keep up by using AI tools. At least that is what VentureBeat maintains in, “How Generative AI Will Enhance Cybersecurity in a Zero-Trust World.” Writer Louis Columbus tells us:
“Deep Instinct’s recent survey, Generative AI and Cybersecurity: Bright Future or Business Battleground? quantifies the trends VentureBeat hears in CISO interviews. The study found that while 69% of organizations have adopted generative AI tools, 46% of cybersecurity professionals feel that generative AI makes organizations more vulnerable to attacks. Eighty-eight percent of CISOs and security leaders say that weaponized AI attacks are inevitable. Eighty-five percent believe that gen AI has likely powered recent attacks, citing the resurgence of WormGPT, a new generative AI advertised on underground forums to attackers interested in launching phishing and business email compromise attacks. Weaponized gen AI tools for sale on the dark web and over Telegram quickly become best sellers. An example is how quickly FraudGPT reached 3,000 subscriptions by July.”
That is both predictable and alarming. What should companies do about it? The post warns:
“‘Businesses must implement cyber AI for defense before offensive AI becomes mainstream. When it becomes a war of algorithms against algorithms, only autonomous response will be able to fight back at machine speeds to stop AI-augmented attacks,’ said Max Heinemeyer, director of threat hunting at Darktrace.”
Before AI is mainstream? Better get moving. We’re told the market for generative AI cybersecurity solutions is already growing, and Forrester divides it into three use cases: content creation, behavior prediction, and knowledge articulation. Of course, Columbus notes, each organization will have different needs, so adaptable solutions are important. See the write-up for some specific tips and links to further information. The tools may be new but the dynamic is a constant: as bad actors up their game, so too must security teams.
Cynthia Murrell, January 15, 2024
Believe in Smart Software? Sure, Why Not?
January 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Predictions are slippery fish. Grab one, a foot-long Lake Michigan beastie. Now hold on. Wow, that looked easy. Predictions are similar. But slippery fish can get away or flop around and make those in the boat look silly. I thought about fish and predictions when I read “What AI Will Never Be Able To Do.” The essay is a replay of an answer from an AI or smart software system.
My initial reaction was that someone came up with a blog post that required Google Bard and what seems to be minimal effort to create. I am thinking about how a high school student might rely on ChatGPT to write an essay about a current event or a how-to essay. I reread the write up and formulated several observations. The table below presents the “prediction” and my comment about that statement. I end the essay with a general comment about smart software.
The presentation of word salad reassurances underscores a fundamental problem of smart software. The system can be tuned to reassure. At the same time, the companies operating the software can steer, shape, and weaponize the information presented. Those without the intellectual equipment to research and reason about outputs are likely to accept the answers. The deterioration of education in the US and other countries virtually guarantees that smart software will replace critical thinking for many people.
Don’t believe me? Ask one of the engineers working on next generation smart software. Just don’t ask the systems or the people who use another outfit’s software to do the thinking.
Stephen E Arnold, January 12, 2024
Cheating: Is It Not Like Love, Honor, and Truth?
January 10, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I would like to believe the information in this story: “ChatGPT Did Not Increase Cheating in High Schools, Stanford Researchers Find.” My reservations can be summed up with three points: [a] The Stanford president (!) who made up data, [b] The behavior of Stanford MBAs at certain go-go companies, and [c] How does one know a student did not cheat? (I know the answer: Surveillance technology, perchance. Ooops. That’s incorrect. That technology is available to Stanford graduates working at certain techno feudalist outfits.)
Mom asks her daughter, “I showed you how to use the AI generator, didn’t I? Why didn’t you use it?” Thanks, MSFT Copilot Bing thing. Pretty good today.
The cited write up reports as actual factual:
The university, which conducted an anonymous survey among students at 40 US high schools, found about 60% to 70% of students have engaged in cheating behavior in the last month, a number that is the same or even decreased slightly since the debut of ChatGPT, according to the researchers.
I have tried to avoid big time problems in my dinobaby life. However, I must admit that in high school, I did these things: [a] Worked with my great-grandmother to create a poem subsequently published in a national anthology in 1959. Granny helped me cheat; she was a deceitful septuagenarian as I recall. I put my name on the poem, omitting Augustus. Yes, cheating. [b] Sold homework to students not in my advanced classes. I would consider this cheating, but I was saving money for my summer university courses at the University of Illinois. I went for the cash. [c] After I ended up in the hospital, my girlfriend at the time showed up, reviewed the work covered in class, and finished a science worksheet because I passed out from the post-surgery medications. Yes, I cheated, and Linda Mae, who subsequently spent her life in Africa as a nurse, helped me cheat. I suppose I will burn in hell. My summary suggests that “cheating” is an interesting concept, and it has some nuances.
Did the Stanford (let’s make up data) University researchers nail down cheating or just hunt for the AI thing? Are the data reproducible? Was the methodology rigorous, the results validated, and micro analyses run to determine if the data were on the money? Yeah, sure, sure.
I liked this statement:
Stanford also offers an online hub with free resources to help teachers explain to high school students the dos and don’ts of using AI.
In the meantime, the researchers said they will continue to collect data throughout the school year to see if they find evidence that more students are using ChatGPT for cheating purposes.
Yep, this is a good pony to ride. I would ask: Is plain vanilla Google search a form of cheating? I think it is. With most of the people online using it, doesn’t everyone cheat? Let’s ask the Harvard ethics professor, a senior executive at a Facebook-type outfit, and the former president of Stanford.
Stephen E Arnold, January 10, 2024
Googley Gems: 2024 Starts with Some Hoots
January 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Another year and I will turn 80. I have seen some interesting things in my 58-year work career, but a couple of black swans have flown across my radar system. I want to share what I find anomalous or possibly harbingers of the new normal.
A dinobaby examines some Alphabet Google YouTube gems. The work is not without its AGonY, however. Thanks, MSFT Copilot Bing thing. Good enough.
First up is another “confession” or “tell all” about the wild, wonderful Alphabet Google YouTube or AGY. (Wow, I caught myself. I almost typed “agony”, not AGY. I am indeed getting old.)
I read “A Former Google Manager Says the Tech Giant Is Rife with Fiefdoms and the Creeping Failure of Senior Leaders Who Weren’t Making Tough Calls.” The headline is a snappy one. I like the phrase “creeping failure.” Nifty image like melting ice and tundra releasing exciting extinct biological bits and everyone’s favorite gas. Let me highlight one point in the article:
[Google has] “lots of little fiefdoms” run by engineers who didn’t pay attention to how their products were delivered to customers. …this territorial culture meant Google sometimes produced duplicate apps that did the same thing or missed important features its competitors had.
I disagree. Plenty of small Web site operators complain about decisions which destroy their businesses. In fact, I am having lunch with one of the founders of a firm deleted by Google’s decider. Also, I wrote about a fellow in India who is likely to suffer the slings and arrows of outraged Googlers because he shoots videos of India’s temples and suggests they have meanings beyond those inculcated in certain castes.
My observation is that happy employees don’t run conferences to explain why Google is a problem or write these weird “let me tell you what life is really like” essays. Something is definitely being signaled. Could it be distress, annoyance, or down-home anger? The “gem”, therefore, is AGY’s management AGonY.
Second, AGY is ramping up its thinking about monetization of its “users.” I noted that “Google Bard Advanced Is Coming, But It Likely Won’t Be Free” reports:
Google Bard Advanced is coming, and it may represent the company’s first attempt to charge for an AI chatbot.
And why not? The Red Alert hooted because Microsoft’s 2022 announcement of its OpenAI tie-up made clear that the Google was caught flat-footed. Then, as 2023 flowed, the impact of ChatGPT-like applications made three facets of the Google outfit less murky: [a] Google was disorganized because it had Google Brain and DeepMind, which was expensive and confusing in the way Abbott and Costello’s “Who’s on First?” routine made people laugh. [b] The malaise of a cooling technology frenzy yielded to AI craziness, which translated into some people saying, “Hey, I can use this stuff for answering questions.” Oh, oh, the search advertising model took a bit of a blindside chop block. And [c] Google found itself on the wrong side of assorted legal actions, creating a model for other legal entities to explore, probe, and probably use to extract Google’s lifeblood — money. Imagine Google using its data to develop effective subscription campaigns. Wow.
And, the final Google gem is that Google wants to behave like a nation state. “Google Wrote a Robot Constitution to Make Sure Its New AI Droids Won’t Kill Us” aims to set the White House and other pretenders to real power straight. Shades of Isaac Asimov’s Three Laws of Robotics. The write up reports:
DeepMind programmed the robots to stop automatically if the force on its joints goes past a certain threshold and included a physical kill switch human operators can use to deactivate them.
You have to embrace the ethos of a company which does not want its “inventions” to kill people. For me, the message is one that some governments’ officials will hear: Need a machine to perform warfighting tasks?
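For the curious, the quoted safeguard is simple enough to sketch. What follows is a minimal, hypothetical Python illustration of a force-threshold stop plus an operator kill switch; the class name, threshold value, and force readings are my assumptions for illustration, not DeepMind’s actual code.

```python
# Hypothetical sketch of the safeguard described above: stop the robot when
# joint force exceeds a threshold, and honor a physical kill switch.
# The threshold and class names are illustrative assumptions, not DeepMind's code.

MAX_JOINT_FORCE_NEWTONS = 20.0  # assumed limit; a real system tunes this per joint


class RobotSafetyMonitor:
    def __init__(self, force_limit: float = MAX_JOINT_FORCE_NEWTONS):
        self.force_limit = force_limit
        self.kill_switch_engaged = False  # flipped by a human operator

    def engage_kill_switch(self) -> None:
        """Simulate the human operator hitting the physical kill switch."""
        self.kill_switch_engaged = True

    def may_continue(self, joint_forces: list[float]) -> bool:
        """Return True if the robot may keep moving, False if it must stop."""
        if self.kill_switch_engaged:
            return False
        if any(force > self.force_limit for force in joint_forces):
            return False  # force threshold exceeded: stop automatically
        return True


if __name__ == "__main__":
    monitor = RobotSafetyMonitor()
    print(monitor.may_continue([5.0, 12.3, 8.1]))   # True: forces within limits
    print(monitor.may_continue([5.0, 25.0, 8.1]))   # False: one joint over threshold
    monitor.engage_kill_switch()
    print(monitor.may_continue([1.0, 1.0, 1.0]))    # False: kill switch engaged
```

The point of the sketch is only that the safeguard amounts to a couple of comparisons, not a moral philosophy.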
Small gems, but gems nonetheless. AGY, please, keep ‘em coming.
Stephen E Arnold, January 9, 2024
Cyber Security Software and AI: Man and Machine Hook Up
January 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My hunch is that 2024 is going to be quite interesting with regard to cyber security. The race among policeware vendors to add “artificial intelligence” to their systems began shortly after Microsoft’s ChatGPT moment. Smart agents, predictive analytics coupled to text sources, and real-time alerts from smart image monitoring systems are three application spaces getting AI boosts. The efforts are commendable if over-hyped. One high-profile firm’s online webinar presented jargon and buzzwords but zero evidence of the conviction or closure value of the smart enhancements.
The smart cyber security software system outputs alerts which the system manager cannot escape. Thanks, MSFT Copilot Bing thing. You produced a workable illustration without slapping my request across my face. Good enough too.
Let’s accept as a working premise that everyone from my French bulldog to my neighbor’s ex-wife wants smart software to bring back the good old, pre-Covid, go-go days. Also, I stipulate that one should ignore the fact that smart software is a demonstration of how numerical recipes can output “good enough” data. Hallucinations, errors, and close-enough-for-horseshoes are part of the method. What’s the likelihood the door of a commercial aircraft would be removed from an aircraft in flight? Answer: Well, most flights don’t lose their doors. Stop worrying. Those are the rules for this essay.
Let’s look at “The I in LLM Stands for Intelligence.” I grant the title may not be the best one I have spotted this month, but here’s the main point of the article in my opinion. Writing about automated threat and security alerts, the essay opines:
When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means. The better the crap, the longer time and the more energy we have to spend on the report until we close it. A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is considered one of the most important areas so it tends to trump almost everything else.
The idea is that strapping on some smart software can increase the outputs from a security alerting system. Instead of helping the overworked and often reviled cyber security professional, the smart software makes it more difficult to figure out what a bad actor has done. The essay includes this blunt section heading: “Detecting AI Crap.” Enough said.
The idea is that more human expertise is needed. The smart software becomes a problem, not a solution.
I want to shift attention to the managers or the employees who caused a cyber security breach. In what is another zinger of a title, let’s look at this research report, “The Immediate Victims of the Con Would Rather Act As If the Con Never Happened. Instead, They’re Mad at the Outsiders Who Showed Them That They Were Being Fooled.” Okay, this is the ostrich method: deny stuff by burying one’s head in digital sand like TikToks.
The write up explains:
The immediate victims of the con would rather act as if the con never happened. Instead, they’re mad at the outsiders who showed them that they were being fooled.
Let’s assume the data in this “Victims” write up are accurate, verifiable, and unbiased. (Yeah, I know that is a stretch.)
What do these two articles do to influence my view that cyber security will be an interesting topic in 2024? My answers are:
- Smart software will allegedly detect, alert, and warn of “issues.” The flow of “issues” may overwhelm or numb staff who must decide what’s real and what’s a fakeroo. Burdened staff can make errors, thus increasing security vulnerabilities or missing ones that are significant. (A crude arithmetic sketch of this overload appears after this list.)
- Managers, like the staffer who lost a mobile phone with company passwords in a plain text note file or an email called “passwords,” will blame whoever blows the whistle. The result is the willful refusal to talk about what happened, why, and the consequences. Examples range from big libraries in the UK to can-kicking hospitals in a flyover state like Kentucky.
- Marketers of remediation tools will have a banner year. Marketing collateral becomes a closed deal, making the art history majors who write copy secure in their jobs at a cyber security company.
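Here is the crude back-of-the-envelope sketch promised above, in Python, with every number invented purely for illustration; it only shows how the arithmetic can turn against the overworked analyst when a smart layer multiplies the alert flow.

```python
# Back-of-envelope sketch of alert fatigue. All numbers are assumptions for
# illustration, not figures from the cited articles.

ANALYST_CAPACITY_PER_DAY = 40   # alerts one analyst can triage in a day (assumed)
REAL_INCIDENTS_PER_DAY = 3      # genuine issues hidden in the flow (assumed)


def fraction_reviewed(total_alerts: int) -> float:
    """Fraction of the day's alerts a single analyst can actually look at."""
    return min(1.0, ANALYST_CAPACITY_PER_DAY / total_alerts)


# Before bolting on the "smart" alerting layer: 60 alerts a day (assumed).
before = fraction_reviewed(60)
# After: the AI layer triples the volume without adding real incidents (assumed).
after = fraction_reviewed(180)

print(f"Reviewed before AI boost: {before:.0%} of alerts")   # ~67%
print(f"Reviewed after AI boost:  {after:.0%} of alerts")    # ~22%
print(f"Expected real incidents actually seen: "
      f"{REAL_INCIDENTS_PER_DAY * before:.1f} vs {REAL_INCIDENTS_PER_DAY * after:.1f}")
```

Same number of real incidents, three times the haystack.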
Will bad actors pay attention to smart software and the behavior of senior managers who want to protect share price or their own job? Yep. Close attention.
Stephen E Arnold, January 8, 2024