When Wizards Squabble the Digital World Bleats, “AI Yi AI”
October 21, 2024
No smart software but we may use image generators to add some modern spice to the dinobaby’s output.
The world is abuzz with a New York Times “real” news story. From my point of view, the write up reminds me of a script from “The Guiding Light.” The “to be continued” is implicit in the drama presented in the pitch for a new story line. An AI wizard and a bureaucratic marvel squabble about smart software.
According to “Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying”:
At an A.I. conference in Seattle this month, Microsoft didn’t spend much time discussing OpenAI. Asha Sharma, an executive working on Microsoft’s A.I. products, emphasized the independence and variety of the tech giant’s offerings. “We definitely believe in offering choice,” Ms. Sharma said.
Two wizards squabble over the AI goblet. Thanks, MSFT Copilot, good enough, which for you is top notch.
What? Microsoft offers a choice. What about pushing Edge relentlessly? What about the default install of an intelligence officer’s fondest wish: Historical data on a bad actor’s computer? What about users who want to stick with Windows 7 because existing applications run on it without choking? What about users who want to install Windows 11 but cannot because of arbitrary Microsoft restrictions? Choice?
Several observations:
- The tension between Sam AI-Man and Satya Nadella, the genius behind today’s wonderful Microsoft software, is no secret. Sam AI-Man found some acceptance when he crafted a deal with Oracle.
- When wizards argue, the drama is high because both parties to the dispute know that AI is a winner-take-all game, with losers destined to get only 65 percent of the winner’s size. Others get essentially nothing. Winners get control.
- The anti-MBA organization of OpenAI, Microsoft’s odd deal, and the staffing shenanigans of both Microsoft and OpenAI suggest that neither MSFT’s Mr. Nadella nor OpenAI’s Sam AI-Man is a big picture thinker.
What will happen now? I think that the Googlers will add a new act to the Sundar & Prabhakar Comedy Tour. The two jokers will toss comments back and forth about how both the Softies and the AI-Men need to let another firm’s AI provide information about organizational planning.
I think the story will be better as a comedy routine. Scrap that “Guiding Light” idea. A soap opera is far too serious for the comedy now on stage.
Stephen E Arnold, October 21, 2024
Can Prabhakar Do the Black Widow Thing to Technology at Google?
October 21, 2024
No smart software but we may use image generators to add some modern spice to the dinobaby’s output.
The reliable (mostly?) Wall Street Journal ran a story titled “Google Executive Overseeing Search and Advertising Leaves Role.” The executive in question is Prabhakar Raghavan, the other half of the Sundar and Prabhakar Comedy Team. The wizardly Prabhakar is the person Edward Zitron described as “The Man Who Killed Google Search.” I recommend reading that essay because it has more zip than the Murdoch approach to poohbah analysis.
I want to raise a question because I assume that Mr. Zitron is largely correct about the demise of Google Search. The sleek Prabhakar accelerated the decline. He was the agent of the McKinsey think infused in his comedy partner Sundar. The two still get laughs at their high school reunions, amidst chums and more, when classmates gather to explain their success to one another.
The Google approach: Who needs relevance? Thanks, MSFT Copilot. Not quite excellent.
What is the question? Here it is:
Will Prabhakar do to Google’s technology what he did to search?
My view is that Google’s technology has demonstrated corporate ossification. The company “invented,” according to Google lore, the transformer. Then Google — because it was concerned about its invention — released some of it as open source and then watched as Microsoft marketed AI as the next big thing for the Softies. And what was the outfit making Microsoft’s marketing coup possible? It was Sam AI-Man.
Microsoft, however, has not been a technology leader for how many years?
Suddenly the Google announced a crisis and put everyone to work on making Google the leader in AI. I assume the McKinsey think did not give much thought to the idea that the transformer Google gave away would be used to make Google look darned silly. In fact, it was Prabhakar who stole the attention of the pundits with a laughable AI demonstration in Paris.
Flash forward from early 2023 to late 2024: what’s Google doing with technology? My perception is that Google is trying to create AI winners, capture the corporate market from Microsoft, and convince as many people as possible that if Google is broken apart, AI in America will flop.
Yes, the fate of the nation hangs on Google’s remaining a monopoly. That sounds like a punch line to a skit in the Sundar and Prabhakar Comedy Show.
Here’s my hypothesis: The death of search (the Edward Zitron view) is a job well done. The curtains fall on Act I of the Google drama. Act II is about the Google technology. The idea is that the technology of the online advertising monopoly defines the future of America.
Stay tuned because the story will be streamed on YouTube with advertising, lots of advertising, of course.
Stephen E Arnold, October 21, 2024
AI: The Key to Academic Fame and Fortune
October 17, 2024
Just a humanoid processing information related to online services and information access.
Why would professors use smart software to “help” them with their scholarly papers? The question may have been answered in the Phys.org article “Analysis of Approximately 75 Million Publications Finds Those Employing AI Are More Likely to Be a ‘Hit Paper’,” which reports:
A new Northwestern University study analyzing 74.6 million publications, 7.1 million patents and 4.2 million university course syllabi finds papers that employ AI exhibit a “citation impact premium.” However, the benefits of AI do not extend equitably to women and minority researchers, and, as AI plays more important roles in accelerating science, it may exacerbate existing disparities in science, with implications for building a diverse, equitable and inclusive research workforce.
Years ago some universities had an “honor code.” I think the University of Virginia was one of those dinosaurs. Today professors are using smart software to help them crank out academic hits.
The write up continues by quoting a couple of the study’s authors (presumably without using smart software) as saying:
“These advances raise the possibility that, as AI continues to improve in accuracy, robustness and reach, it may bring even more meaningful benefits to science, propelling scientific progress across a wide range of research areas while significantly augmenting researchers’ innovation capabilities…”
What are the payoffs for the professors who probably take a dim view of their own children using AI to make life easier, faster, and smoother? Let’s look at a handful my team and I discussed:
- More money in the form of pay raises
- Better shot at grants for research
- Fame at conferences
- Groupies. I know it is hard to imagine but it happens. A lot.
- Awards
- Better committee assignments
- Consulting work.
When one considers the benefits from babes to bucks, the chit chat about doing better research is of little interest to professors who see virtue in smart software.
The president of Stanford cheated. The head of the Harvard Ethics department appears to have done it. The professors in the study sample did it. The conclusion: Smart software use is normative behavior.
Stephen E Arnold, October 17, 2024
Forget Surveillance Capitalism. Think Parasite Culture
October 15, 2024
Ted Gioia touts himself as The Honest Broker on his blog and he recently posted about the current state of the economy: “Are We Now Living In A Parasite Culture?” In the opening he provides examples of natural parasites before moving to his experience working with parasite strategies.
Gioia said that when he consulted Fortune 500 companies, he and others used parasite strategies as thought exercises. Here’s what a parasite strategy is:
1. “You allow (or convince) someone else to make big investments in developing a market—so they cover the cost of innovation, or advertising, or lobbying the government, or setting up distribution, or educating customers, or whatever. But…
2. You invest your energy instead on some way of cutting off these dutiful folks at the last moment—at the point of sale, for example. Hence…
3. You reap the benefits of an opportunity that you did nothing to create.”
On first reading, it doesn’t seem that our economy is like that until he provides true examples: Facebook, Spotify, TikTok, and Google. All of these platforms are nothing more than a central location for people to post and share their content, or they aggregate content from the Internet. These platforms thrive off the creativity of their users, and their executive boards reap the benefits, while the creators struggle to rub two cents together.
Smart influencers know to diversify their income streams through sponsorship, branding, merchandise, and more. Gioia points out that the Forbes list of billionaires includes people who used parasitical business strategies to get rich. He continues by saying that these parasites will continue to guzzle off their hosts’ lifeblood with a chance of killing said host.
It’s happening now in the creative economy with Big Tech’s investment in AI and how, despite lawsuits and laws, these companies are illegally training AI on creative pursuits. He finishes with the obvious statement that politicians should be protecting people, but that they’re probably part of the problem. No duh.
Whitney Grace, October 15, 2024
An Emergent Behavior: The Big Tech DNA Proves It
October 14, 2024
Writer Mike Masnick at TechDirt makes quite the allegation: “Big Tech’s Promise Never to Block Access to Politically Embarrassing Content Apparently Only Applies to Democrats.” He contends:
“It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when that political figure is a Democrat. If it’s a Republican, then of course the content will be suppressed, and the GOP officials who demanded that big tech never ever again suppress such content will look the other way.”
The basis for Masnick’s charge of hypocrisy lies in a tale of two information leaks. Tech execs and members of Congress responded to each data breach very differently. Recently, representatives from both Meta and Google pledged to Senator Tom Cotton at a Senate Intelligence Committee hearing to never again “suppress” news as they supposedly did in 2020 with the Hunter Biden laptop story. At the time, those platforms were leery of circulating that story until it could be confirmed.
Less than two weeks after that hearing, journalist Ken Klippenstein published the Trump campaign’s internal vetting dossier on JD Vance, a document believed to have been hacked by Iran. That sounds like just the sort of newsworthy, if embarrassing, story that conservatives believe should never be suppressed, right? Not so fast—Trump mega-supporter Elon Musk immediately banned Ken’s X account and blocked all links to Klippenstein’s Substack. Similarly, Meta blocked links to the dossier across its platforms. That goes further than the company ever did with the Biden laptop story, the post reminds us. Finally, Google now prohibits users from storing the dossier on Google Drive. See the article for more of Masnick’s reasoning. He concludes:
“Of course, the hypocrisy will stand, because the GOP, which has spent years pointing to the Hunter Biden laptop story as their shining proof of ‘big tech bias’ (even though it was nothing of the sort), will immediately, and without any hint of shame or acknowledgment, insist that of course the Vance dossier must be blocked and it’s ludicrous to think otherwise. And thus, we see the real takeaway from all that working of the refs over the years: embarrassing stuff about Republicans must be suppressed, because it’s doxing or hacking or foreign interference. However, embarrassing stuff about Democrats must be shared, because any attempt to block it is election interference.”
Interesting. But not surprising.
Cynthia Murrell, October 14, 2024
Cyber Criminals Rejoice: Quick Fraud Development Kit Announced
October 11, 2024
I am not sure the well-organized and managed OpenAI intended to make cyber criminals excited about their future prospects. Several Twitter enthusiasts pointed out that OpenAI makes it possible to develop an app in 30 seconds. Prashant posted:
App development is gonna change forever after today. OpenAI can build an iPhone app in 30 seconds with a single prompt. [emphasis added]
The expert demonstrating this programming capability was Romain Huet. The announcement of the capability débuted at OpenAI’s Dev Day.
A clueless dinobaby is not sure what this group of youngsters is talking about. An app? Pictures of a slumber party? Thanks, MSFT Copilot, good enough.
What’s a single prompt mean? That’s not clear to me at the moment. Time is required to assemble the prompt, run it, check the outputs, and then fiddle with the prompt. Once the prompt is in hand, it is easy to pop it into o1 and marvel at the 30-second output. Instead of coding, one prompts. Zip up that text file and sell it on Telegram. Make big bucks or little STARS and TONcoins. With some cartwheels, it is sort of money.
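For readers who wonder what “pop it into o1” looks like in practice, here is a minimal sketch using the OpenAI Python client. The prompt text, the model name, and the output file are my own illustrative assumptions, not what Romain Huet demonstrated on stage; the point is only that the “coding” collapses to one API call once the prompt is written.

```python
# Minimal sketch: one prompt in, generated code out. Assumes the openai
# Python package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# Hypothetical prompt; the real work is writing and re-writing this string.
prompt = "Write a single-file SwiftUI to-do list app. Return only the code."

response = client.chat.completions.create(
    model="o1-preview",  # assumption: substitute whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# Save whatever the model returns; checking, fixing, and re-prompting is on you.
with open("generated_app.swift", "w") as handle:
    handle.write(response.choices[0].message.content)
```

The fiddling, of course, happens around that one call, not inside it.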
Is this quicker than other methods of cooking up an app? For example, some folks can do snappy app development with Telegram’s BotFather service.
Let’s step back from the 30-second PR event.
Several observations are warranted.
First, programming certain types of software is becoming easier using smart software. That means that a bad actor may be able to craft a phishing play more quickly.
Second, specialized skills embedded in smart software open the door to scam automation. Scripts can generate other needed features of a scam. What once was a simple automated bogus email becomes an orchestrated series of actions.
Third, the increasing cross-model integration suggests that a bad actor will be able to add a video or audio delivering a personalized message. With some fiddling, a scam can use a phone call to a target and follow that up with an email. To cap off the scam, a machine-generated Zoom-type video call makes a case for the desired action.
The key point is that legitimate companies may want to have people they manage create a software application. However, is it possible that smart software vendors are injecting steroids into a market given little thought by most people? What is that market? I am thinking that bad actors are often among the earlier adopters of new, low cost, open source, powerful digital tools.
I like the gee whiz factor of the OpenAI announcement. But my enthusiasm is a fraction of that experienced by bad actors. Sometimes restraint and judgment may be more helpful than “wow, look at what we have created” show-and-tell presentations. Remember. I am a dinobaby and hopelessly out of step with modern notions of appropriateness. I like it that way.
Stephen E Arnold, October 11, 2024
Google Pulls Off a Unique Monopoly Play: Redefining Disciplines and Winning Awards
October 10, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
The monopolists of the past are a storied group of hard-workers. The luminaries blazing a path to glory have included John D. Rockefeller (the 1911 guy), J.P. Morgan and James J. Hill (railroads and genetic material contributor to JP Morgan and Morgan Stanley circa 2024), James B. Duke (nope, smoking is good for you), Andrew Carnegie (hey, he built “free” public libraries which are on the radar of today’s publishers I think), and Edward T. Bedford (starch seems unexciting until you own the business). None of these players were able to redefine Nobel Prizes.
A member of Google leadership explains to his daughter (who is not allowed to use smart software for her private school homework or her tutor’s assignments) that the Google is a bit like JP Morgan but better in so many other ways. Thanks, MSFT Copilot. How are the Windows 11 updates and the security fixes today?
The Google pulled it off. One Xoogler (that is the jargon for a former Google professional) and one honest-to-goodness chess whiz Googler won Nobel Prizes. Fortune Magazine reported that Geoffrey Hinton (the Xoogler) won a Nobel Prize for … wait for it … physics. Yep, the discipline associated with chasing dark matter and making thermonuclear bombs now, in everyday terms, means smart software or the undefinable phrase “artificial intelligence.” Some physicists are wondering how one moves from calculating the mass of a proton to helping college students cheat. Dr. Sabine Hossenfelder asks, “Hello, Stockholm, where is our Nobel?” The answer is, “Politics, money, and publicity, Dr. Hossenfelder.” These are the three ingredients of achievement.
But wait! Google also won a Nobel Prize for … wait for it … chemistry. Yep, you remember high school chemistry class. Jars, experiments which don’t match the textbook, and wafts of foul-smelling gas getting sucked into the lab’s super crappy air venting system. The Verge reported on how important computational chemistry is to the future of money-spinning confections like the 2020 virus of the year. The poohbahs (journalist-consultant-experts) at that publication offered nary a comment about the smart software which made the “chemistry” of Google do in “minutes” what ordinary computational chemistry solutions take hours longer to accomplish.
The Google and Xoogle winners are very smart people. Google, however, has done what the schlubs like J.P. Morgan could never accomplish: Redefine basic scientific disciplines. Physics means neural networks. Chemistry means repurposing a system to win chess games.
I suppose AI is eliminating the need for future students to learn. “University Professor ‘Terrified’ By The Sharp Decline In Student Performance — ‘The Worst I’ve Ever Encountered’” quoted a college professor as saying:
The professor said her students ‘don’t read,’ write terrible essays, and ‘don’t even try’ in her class. The professor went on to say that when she recently assigned an exam focused on a reading selection, she "had numerous students inquire if it’s open book." That is, of course, preposterous — the entire point of a reading exam is to test your comprehension of the reading you were supposed to do! But that’s just it — she said her students simply "don’t read."
That makes sense. Physics is smart software; chemistry is smart software. Uninformed students won’t know the difference. What’s the big deal? That’s a super special insight into the zing in teaching and learning.
What’s the impact of these awards? In my opinion:
- The reorganization of DeepMind where the Googler is the Top Dog has been scrubbed of management hoo-hah by the award.
- The Xoogler will have an ample opportunity to explain that smart software will destroy mankind. That’s possible because the intellectual rot has already spread to students.
- The Google itself can now explain that it is not a monopoly. How is this possible? Simple. Physics is not about the goings on at Los Alamos National Laboratory. Chemistry is not dumping diluted hydrochloric acid into a beaker filled with calcium carbide. It makes perfect sense to explain that Google is NOT a monopoly.
But the real payoff to the two awards is that Google’s management team can say:
Those losers like John D. Rockefeller, JP Morgan, the cigarette person, the corn starch king, and the tight-fisted fellow from someplace with sheep are not smart like the Google. And, the Google leadership is indeed correct. That’s why life is so much better with search engine optimization, irrelevant search results, non-stop invasive advertising, a disabled skip-this-ad button, and the remarkable Google speak which accompanies another allegation of illegal business conduct from a growing number of the 195 countries in the world.
That’s a win that old-timey monopolists could not put in their account books.
Stephen E Arnold, October 10, 2024
A Modern Employee Wants Love, Support, and Compassion
October 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Beyond Search is a “WordPress” blog. I have followed with (to be honest) not much interest the dispute between a founder and a couple of organizations. WordPress has some widgets that one of the Beyond Search team “subscribes” to each year. These, based on my experience, are so-so. We have moved the blog to WordPress-friendly hosting services because [a] the service was not stable, [b] not speedy, and [c] not connected to any known communication service except Visa.
I read “I Stayed,” a blog post. The write up expresses a number of sentiments about WordPress, its employees, and its mission. (Who knew? A content management system with a “mission.”) I noted this statement:
Listen, I’m struggling with medical debts and financial obligations incurred by the closing of my conference and publishing businesses.
I don’t know much about modern work practices, but this sentence suggests to me that a full-time employee was running two side gigs. Both of these failed, and the author of the post is in debt. I am a dinobaby, and I assumed that when a company like Halliburton or Booz, Allen & Hamilton hired me as a full-time employee, my superiors expected me to focus on the tasks given to me by Halliburton and Booz, Allen & Hamilton. “Go to a uranium mine. Learn. Ask questions. Take photographs of ore processing,” so I went. No side gigs, no questions about breathing mine dust. Just do the work. Not now. The answer to a superior’s request apparently means, “Hey, you have spare time to pay attention to that conference and publishing business. No problemo.” Times have changed.
The write up includes this statement about not quitting or taking a buy out:
I stayed because I believe in the work we do. I believe in the open web and owning your own content. I’ve devoted nearly three decades of work to this cause, and when I chose to move in-house, I knew there was only one house that would suit me. In nearly six years at Automattic, I’ve been able to do work that mattered to me and helped others, and I know that the best is yet to come.
I think I am supposed to interpret this decision as noble or allegedly noble. My view is that WordPress professionals who remain on the job should attend to these elements:
- If you have a full-time job at a commercial or quasi-commercial enterprise, focus on the job. It would be great if WordPress fixed the wonky cursor movement in its editor. You know it really doesn’t work. In fact, it sucks on my machines, both Mac and Windows.
- Think about the interface. Hiding frequently used functions is not helpful.
- Use words to make clear certain completely weird icons. Yep, actual words.
- Display explicit labels which are not confusing. I don’t find multiple uses of the word “Publish” particularly helpful.
To sum up: Suck it up, buttercup.
Stephen E Arnold, October 7, 2024
Skills You Can Skip: Someone Is Pushing What Seems to Be Craziness
October 4, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The Harvard ethics research scam has ended. The Stanford University president resigned over fake data late in 2023. A clump of students in an ethics class used smart software to write their first paper. Why not use smart software? Why not let AI or just dishonest professors make up data with the help of assorted tools like Excel and Photoshop? Yeah, why not?
A successful pundit and lecturer explains to his acolyte that learning to write is a waste of time. And what does the pundit lecture about? I think he was pitching his new book, which does not require that one learn to write. Logical? Absolutely. Thanks, MSFT Copilot. Good enough.
My answer to the question is: “Learning is fundamental.” No, I did not make that up, nor do I believe the information in “How AI Can Save You Time: Here Are 5 Skills You No Longer Need to Learn.” The write up has sources; it has quotes; and it has the type of information which is hard to believe was assembled by humans who presumably have some education, maybe a college degree.
What are the five skills you no longer need to learn? Hang on:
- Writing
- Art design
- Data entry
- Data analysis
- Video editing.
The expert who generously shared his remarkable insights for the Euro News article is Bernard Marr, a futurist and internationally best-selling author. What did Mr. Marr author? He has written “Artificial Intelligence in Practice: How 50 Successful Companies Used Artificial Intelligence To Solve Problems,” “Key Performance Indicators For Dummies,” and “The Intelligence Revolution: Transforming Your Business With AI.”
One question: If writing is a skill one does not need to learn, why does Mr. Marr write books?
I wonder if Mr. Marr relies on AI to help him write his books. He seems prolific because Amazon reports that he has outputted more than a dozen books, maybe more. But volume does not explain the tension between Mr. Marr’s “writing” (which may be outputting) and the suggestion that one does not need to learn or develop the skill of writing.
The cited article quotes the prolific Mr. Marr as saying:
“People often get scared when you think about all the capabilities that AI now have. So what does it mean for my job as someone that writes, for example, will this mean that in the future tools like ChatGPT will write all our articles? And the answer is no. But what it will do is it will augment our jobs.”
Yep, Mr. Marr’s job is outputting. You don’t need to learn writing. Smart software will augment one’s job.
My conclusion is that the five identified areas are plucked from a listicle, either generated by a human or an AI system. Euro News was impressed with Mr. Marr’s laser-bright insight about smart software. Will I purchase and learn from Mr. Marr’s “Generative AI in Practice: 100+ Amazing Ways Generative Artificial Intelligence is Changing Business and Society”?
Nope.
Stephen E Arnold, October 4, 2024
SolarWinds Outputs Information: Does Anyone Other Than Microsoft and the US Government Remember?
October 3, 2024
I love these dribs and drops of information about security issues. From the maelstrom of emails, meeting notes, and SMS messages, only glimpses emerge of what’s going on when a security misstep takes place. That’s why the write up “SolarWinds Security Chief Calls for Tighter Cyber Laws” is interesting to me. How many lawyer-type discussions were held before the SolarWinds professional spoke with a “real” news person from the somewhat odd orange newspaper? (The Financial Times used to give these things away in front of their building some years back. Yep, the orange newspaper caught some people’s eye in meetings which I attended.)
The subject of the interview was a person who is/was the chief information security officer at SolarWinds. He was on duty when the tiny misstep took place. I will leave it to you to determine whether the CrowdStrike misstep or the SolarWinds misstep was of more consequence. Neither affected me because I am a dinobaby in rural Kentucky running steam powered computers from my next generation office in a hollow.
A dinobaby is working on a blog post in rural Kentucky. This talented and attractive individual was not affected by either the SolarWinds or the CrowdStrike security misstep. A few others were not quite so fortunate. But, hey, who remembers or cares? Thanks, Microsoft Copilot. I look exactly like this. Or close enough.
Here are three statements from the article in the orange newspaper I noted:
First, I learned that:
… cyber regulations are still ‘in flux’ which ‘absolutely adds stress across the globe’ on cyber chiefs.
I am delighted to learn that those working in cyber security experience stress. I wonder, however: What about the individuals and organizations who must think about the consequences of having their systems breached? These folks pay to be secure, I believe. When that security fails, will the affected individuals worry about the “stress” on those who were supposed to prevent a minor security misstep? I know I sure worry about these experts.
Second, how about this observation by the SolarWinds’ cyber security professional?
“When you don’t have rules to follow, it’s very hard to follow them,” said Brown [the cyber security leader at SolarWinds]. “Very few security people would ever do something that wasn’t right, but you just have to tell us what’s right in order to do it,” he added.
Let’s think about this statement. To be a senior cyber security professional one has to be trained, have some cyber security certifications, and maybe some specialized in-service instruction at conferences or specific training events. Therefore, those who attend these events allegedly “learn” what rules to follow; for instance, make systems secure, conduct routine stress tests, have third party firms conduct security audits, validate the code, widgets, and APIs one uses, etc., etc. Is it realistic to assume that an elected official knows anything about security systems at a cyber security firm? As a dinobaby, my view is that these cyber wizards need to do their jobs and not wait for non-experts to give them “rules.” Make the systems secure via real work, not chatting at conferences or drinking coffee in a conference room.
And, finally, here’s another item I circled in the orange newspaper:
Brown this month joined the advisory board of Israeli crisis management firm Cytactic but said he was still committed to staying in his role at SolarWinds. “As far as the incident at SolarWinds: It happened on my watch. Was I ultimately responsible? Well, no, but it happened on my watch and I want to get it right,” he said.
Wasn’t Israel the country caught flat-footed in October 2023? How does a company in Israel — presumably with staff familiar with the tools and technologies used to alert Israel of hostile actions — learn from another security professional caught flat-footed? I know this is an easily dismissed question, but for a dinobaby, doesn’t one want to learn from a person who gets things right? As I said, I am old fashioned, old, and working in a log cabin on a steam powered computing device.
The reality is that egregious security breaches have taken place. The companies and their staff are responsible. Are there consequences? I am not so sure. That means the present “tell us the rules” attitude will persist. Factoid: Government regulations in the US are years behind what clever companies and their executives do. No gap closing, sorry.
Stephen E Arnold, October 3, 2024