Trust: Some in the European Union Do Not Believe the Google. Gee, Why?
June 13, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Google’s Ad Tech Dominance Spurs More Antitrust Charges, Report Says.” The write up seems to say that some EU regulators do not trust the Google. Trust is a popular word at the alleged monopoly. Yep, trust is what makes Google’s smart software so darned good.
A lawyer for a high-tech outfit in the ad game says, “Commissioner, thank you for the question. You can trust my client. We adhere to the highest standards of ethical behavior. We put our customers first. We are the embodiment of ethical behavior. We use advanced technology to enhance everyone’s experience with our systems.” The rotund lawyer is a confection generated by MidJourney, an example of, in this case, pretty smart software.
The write up says:
These latest charges come after Google spent years battling and frequently bending to the EU on antitrust complaints. Seeming to get bigger and bigger every year, Google has faced billions in antitrust fines since 2017, following EU challenges probing Google’s search monopoly, Android licensing, Shopping integration with search, and bundling of its advertising platform with its custom search engine program.
The article makes an interesting point, almost as an afterthought:
…Google’s ad revenue has continued increasing, even as online advertising competition has become much stiffer…
The article does not ask this question, “Why is Google making more money when scrutiny and restrictions are ramping up?”
From my vantage point in the old age “home” in rural Kentucky, I certainly have zero useful data about this interesting situation, assuming that it is true, of course. But, for the nonce, let’s speculate, shall we?
Possibility A: Google is a monopoly and makes money no matter what laws, rules, and policies are articulated. Game is now in extra time. Could the referee be bent?
This idea is simple. Google’s control of ad inventory, ad options, and ad channels is just a good, old-fashioned system monopoly. Maybe TikTok and Facebook offer options, but even with those channels, Google offers options. Who can resist this pitch: “Buy from us, not the Chinese. Or, buy from us, not the metaverse guy.”
Possibility B: Google advertising is addictive and maybe instinctual. Mice never learn and just repeat their behaviors.
Once there is a cheese payoff for the mouse, mice are learning creatures, and in some wild and non-reproducible experiments they inherit their parents’ prior learning. Wow. Genetics dictate the use of Google advertising by people who are hard wired to be Googley.
Possibility C: Google’s home base does not regulate the company in a meaningful way.
The result is an advanced and hardened technology which is better, faster, and maybe cheaper than other options. How can the EU, with its squabbling “union”, hope to compete with weaponized content delivery built on a smart, adaptive global system? The answer is, “It can’t.”
Net net: After a quarter century, what’s more organized for action, a regulatory entity or the Google? I bet you know the answer, don’t you?
Stephen E Arnold, June 13, 2023
Japan and Copyright: Pragmatic and Realistic
June 8, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Japan Goes All In: Copyright Doesn’t Apply To AI Training.” In a nutshell, Japan’s alleged stance is accompanied by a message for “creators”: Tough luck.
You are ripping off my content. I don’t think that is fair. I am a creator. The image of a testy office lady is the product of MidJourney’s derivative capabilities.
The write up asserts:
It seems Japan’s stance is clear – if the West uses Japanese culture for AI training, Western literary resources should also be available for Japanese AI. On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.
If this is the direction in which Japan is heading, what’s the posture in China, Viet-Nam, and other countries in the region? How can the US regulate for an unknown future? We know Japan’s approach, it seems.
Stephen E Arnold, June 8, 2023
OpenAI Clarifies What “Regulate” Means to the Sillycon Valley Crowd
May 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Sam AI-man begged (at least he did not get on his hands and knees) the US Congress to regulate artificial intelligence (whatever that means). I just read “Sam Altman Says OpenAI Will Leave the EU if There’s Any Real AI Regulation.” I know I am old. I know I lose my car keys a couple of times every 24 hours. I do recall Mr. AI-man wanted regulation.
However, the write up reports:
Though unlike in the AI-friendly U.S., Altman has threatened to take his big tech toys to the other end of the sandbox if they’re not willing to play by his rules.
The vibes of the Zuckster zip through my mind. Facebook just chugs along, pays fines, and mostly ignores regulators. China seems to be an exception for Facebook, the Google, and some companies I don’t know about. China had mobile death vans: a person accused and convicted would be executed in the van as soon as it arrived at the location where the convicted bad actor was. Re-education camps and mobile death vans suggest why some US companies choose to exit China. Lawyers who cannot arrive before their client has been processed are not much good in some of China’s efficient state machines. Fines, however, are okay. Write a check and move on.
Mr. AI-man is making clear that the word “regulate” means one thing to Mr. AI-man and another thing to those who are not getting with the smart software program. The write up states:
Altman said he didn’t want any regulation that restricted users’ access to the tech. He told his London audience he didn’t want anything that could harm smaller companies or the open source AI movement (as a reminder, OpenAI is decidedly more closed off as a company than it’s ever been, citing “competition”). That’s not to mention any new regulation would inherently benefit OpenAI, so when things inevitably go wrong it can point to the law to say they were doing everything they needed to do.
I think “regulate” means what the declining US fast food outfit who told me “have it your way” meant. The burger joint put in a paper bag whatever the professionals behind the counter wanted to deliver. Mr. AI-man doesn’t want any “behind the counter” decision making by a regulatory cafeteria serving up its own version of lunch.
Mr. AI-man wants “regulate” to mean his way.
In the US, it seems, that is exactly what big tech and promising venture funded outfits are going to get; that is, whatever each company wants. Competition is good. See how well OpenAI and Microsoft are competing with Facebook and Google. Regulate appears to mean “let us do what we want to do.”
I am probably wrong. OpenAI, Google, and other leaders in smart software are at this very moment consuming the Harvard Library of books to read in search of information about ethical behavior. The “moral” learning comes later.
Net net: Now I understand the new denotation of “regulate.” Governments work for US high-tech firms. Thus, I think the French term laissez-faire nails it.
Stephen E Arnold, May 25, 2023
AI Legislation: Can the US Regulate What It Does Not Understand Like a Dull Normal Student?
April 20, 2023
I read an essay by publishing and technology luminary Tim O’Reilly. If you don’t know the individual, you may recognize the distinctive art used on many of his books. Here’s what I call the parrot book’s cover:
The essay to which I referred in the first sentence of this post is “You Can’t Regulate What You Don’t Understand.” The subtitle of the write up is “Or, Why AI Regulations Should Begin with Mandated Disclosures.” The idea is an interesting one.
Here’s a passage I found worth circling:
But if we are to create GAAP for AI, there is a lesson to be learned from the evolution of GAAP itself. The systems of accounting that we take for granted today and use to hold companies accountable were originally developed by medieval merchants for their own use. They were not imposed from without, but were adopted because they allowed merchants to track and manage their own trading ventures. They are universally used by businesses today for the same reason.
The idea is that those without first-hand knowledge of something cannot make effective regulations.
The essay makes it clear that government regulators may be better off:
formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems. [Emphasis in the original.]
The essay states:
Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.
The conclusion is warranted by the arguments offered in the essay:
We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be.
My thought is that it may be useful to look at what generalities and self-regulation deliver in real life. As examples, I would point out:
- The report “Independent Oversight of the Auditing Professionals: Lessons from US History.” To keep it short and sweet: Self-regulation has failed. I will leave you to work through the somewhat academic argument. I have burrowed through the document and largely agree with the conclusion.
- The US Securities & Exchange Commission’s decision to accept $1.1 billion in penalties as a result of 16 Wall Street firms’ failure to comply with record keeping requirements.
- The hollowness of the points set forth in “The Role of Self-Regulation in the Cryptocurrency Industry: Where Do We Go from Here?” in the wake of the Sam Bankman Fried FTX problem.
- The MBA-infused “ethical compass” of outfits operating with a McKinsey-type pivot point.
My view is that the potential payoff from pushing forward with smart software is sufficient incentive to create a Wild West, anything-goes environment. Those companies with the most to gain and the resources to win at any cost can overwhelm US government professionals with flights of legal eagles.
With innovations in smart software arriving quickly, possibly as quickly as new Web pages in the early days of the Internet, firms that don’t move quickly, act expediently, and push toward autonomous artificial intelligence will be unable to catch up with firms that move with alacrity.
Net net: No regulation, imposed or self-generated, will alter the rocket launch of new services. The US economy is not set up to encourage snail-speed innovation. The objective is met by generating money. Money, not guard rails, common sense, or actions which harm a company’s self interest, makes the system work… for some. Losers are the exhaust from an economic machine. One doesn’t drive a Model T Ford. Today those who can, drive a Tesla Plaid or a McLaren. The “pet” is a French bulldog, not a parrot.
Stephen E Arnold, April 20, 2023
The Confluence: Big Tech, Lobbyists, and the US Government
March 13, 2023
I read “Biden Admin’s Cloud Security Problem: It Could Take Down the Internet Like a Stack of Dominos.” I was thinking that the take down might be more like the collapses of outfits like Silicon Valley Bank.
I noted this statement about the US government, which is
embarking on the nation’s first comprehensive plan to regulate the security practices of cloud providers like Amazon, Microsoft, Google and Oracle, whose servers provide data storage and computing power for customers ranging from mom-and-pop businesses to the Pentagon and CIA.
Several observations:
- Lobbyists have worked to make it easy for cloud providers and big technology companies to generate revenue in an unregulated environment.
- Government officials have responded with inaction and spins through the revolving door. A regulator or elected official today becomes tomorrow’s technology decision maker and then back again.
- The companies themselves have figured out how to use their money and armies of attorneys to do what is best for the companies paying them.
What’s the consequence? Wonderful wordsmithing is one consequence. The problem is that now there are Mauna Loas burbling in different places.
Three of them are evident. The first is the fragility of the Silicon Valley approach to innovation, which is reactive and imitative at this time. The second is the complexity of the three-body problem created by lobbyists, government methods, and monopolistic behaviors. The third is that commercial enterprises have become familiar with the practice of putting their thumbs on the scale. Who will notice?
What will happen? The possible answers are not comforting. Waving a magic wand and changing what are now institutional behaviors established over decades of handcrafting will be difficult.
I touch on a few of the consequences in an upcoming lecture for the attendees at the 2023 National Cyber Crime Conference.
Stephen E Arnold, March 13, 2023
Adulting Desperation at TikTok? More of a PR Play for Sure
March 1, 2023
TikTok is allegedly harvesting data from its users and allegedly making that data accessible to government-associated research teams in China. The story “TikTok to Set One-Hour Daily Screen Time Limit by Default for Users under 18” makes clear that TikTok is in concession mode. The write up says:
TikTok announced Wednesday that every user under 18 will soon have their accounts default to a one-hour daily screen time limit, in one of the most aggressive moves yet by a social media company to prevent teens from endlessly scrolling….
Now here’s the part I liked:
Teenage TikTok users will be able to turn off this new default setting… [emphasis added]
The TikTok PR play misses the point. Despite the yip yap about Oracle as an intermediary, the core issue is suspicion that TikTok is sucking down data. Some of the information can be cross correlated with psychological profiles. How useful would it be to know that a TikTok behavior suggests a person who may be susceptible to outside pressure, threats, or bribes? No big deal? Well, it is a big deal because some young people enlist in the US military and others take jobs at government entities. How about those youthful contractors swarming around Executive Branch agencies’ computer systems, Congressional offices, and some interesting facilities involved with maps and geospatial work?
I have talked about TikTok risks for years. Now we get a limit on usage?
Hey, that’s progress like making a square wheel out of stone.
Stephen E Arnold, March 1, 2023
Is the UK Stupid? Well, Maybe, But Government Officials Have Identified Some Targets
February 27, 2023
I live in good, old Kentucky, rural Kentucky, according to my deceased father-in-law. I am not an Anglophile. The country kicked my ancestors out in 1575 for not going with the flow. Nevertheless, I am reluctant to slap “even more stupid” on ideas generated by those who draft regulations. A number of experts get involved. Data are collected. Opinions are gathered from government sources and others. The result is a proposal to address a problem.
The write up “UK Proposes Even More Stupid Ideas for Directly Regulating the Internet, Service Providers” makes clear that the UK government has not been particularly successful with its most recent ideas for updating the 1990 Computer Misuse Act. The reasons offered are good; for example, reducing cyber crime and conducting investigations. The downside of the ideas is that governments make mistakes. Governmental powers creep outward over time; that is, government becomes more invasive.
The article highlights the suggested changes that the people drafting the modifications suggest:
- Seize domains and Internet Protocol addresses
- Use of contractors for this process
- Restrict algorithm-manufactured domain names
- Ability to go after the registrar and the entity registering the domain name
- Making these capabilities available to other government entities
- A court review
- Mandatory data retention
- Redefining copying data as theft
- Expanded investigatory activities.
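One item in the list above, restricting “algorithm-manufactured domain names,” targets what security practitioners call domain generation algorithms (DGAs): malware that deterministically derives fresh rendezvous domains from a shared seed and the date, so blocklisting any single domain accomplishes little. Here is a minimal illustrative sketch; the seed value, label length, and hash choice are hypothetical and not drawn from any real malware family:

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5, tld: str = ".com"):
    """Deterministically derive a daily list of pseudo-random domain
    names from a shared seed and a date, in the general style of a
    domain generation algorithm (DGA)."""
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Use the first 12 hex characters as the domain label;
        # real DGAs vary widely in label construction.
        domains.append(digest[:12] + tld)
    return domains

print(generate_domains("example-seed", date(2023, 2, 27)))
```

Because the output is deterministic, anyone who recovers the seed can precompute the day’s candidate domains, which helps explain why the proposal aims at registrars and registries rather than at individual domain names.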
I am not a lawyer, but these proposals are troubling.
I want to point out that whoever drafted the proposal is like a tracking dog with an okay nose. Based on our research for an upcoming lecture to some US government officials, it is clear that domain name registries warrant additional scrutiny. We have identified certain ISPs as active enablers of bad actors because there is no effective oversight on these commercial and sometimes non-governmental organizations or non-profit “do good” entities. We have identified transnational telecommunications and service providers who turn a blind eye to the actions of other enterprises in the “chain” which enables Internet access.
The UK proposal seems interesting and a launch point for discussion; the tracking dog has focused attention on one of the “shadow” activities enabled by lax regulators. Hopefully more scrutiny will be directed at the complicated and essentially Wild West environment populated by enablers of criminal activity like human trafficking, weapons sales, contraband and controlled substance marketplaces, domain name fraud, malware distribution, and similar activities.
At least a tracking dog is heading along what might be an interesting path to explore.
Stephen E Arnold, February 27, 2023
Googzilla Squeezed: Will the Beastie Wriggle Free? Can Parents Help Google Wiggle Out?
January 25, 2023
How easy was it for our prehistoric predecessors to capture a maturing reptile? I am thinking of Googzilla. (That’s my way of conceptualizing the Alphabet Google DeepMind outfit.)
This capturing of the dangerous dinosaur shows one regulator and one ChatGPT dev in the style of Norman Rockwell (who may be spinning in his grave). The art was output by the smart software in use at Craiyon.com. I love those wonky spellings and the weird video ads and the image obscuring Next and Stay buttons. Is this the type of software the Google fears? I believe so.
On one side of the creature is the pesky ChatGPT PR tsunami. Google’s management team had to call Google’s parents to come to the garage. The whiz kids find themselves in a marketing battle. Imagine, a technology that Facebook dismisses as not a big deal, needs help. So the parents come back home from their vacations and social life to help out Sundar and Prabhakar. I wonder if the parents are asking, “What now?” and “Do you think these whiz kids want us to move in with them?” Forbes, the capitalist tool with annoying pop ups, tells one side of the story in “How ChatGPT Suddenly Became Google’s Code Red, Prompting Return of Page and Brin.”
On the other side of Googzilla is a weak looking government regulator. The Wall Street Journal (January 25, 2023) published “US Sues to Split Google’s Ad Empire.” (Paywall alert!) The main idea is that after a couple of decades of “Google is free, great, and gives away nice tchotchkes,” US Federal and state officials want the Google to morph into a tame lizard.
Several observations:
- I find it amusing that Google had to call its parents for help. There’s nothing like a really tough, decisive set of whiz kids
- The Google has some inner strengths, including lawyers, lobbyists, and friends who really like Google mouse pads, LED pins, and T shirts
- Users of ChatGPT may find that as poor as Google’s search results are, the burden of figuring out an “answer” falls on the user. If the user cooks up an incorrect answer, the Google is just presenting links or it used to. When the user accepts a ChatGPT output as ready to use, some unforeseen consequences may ensue; for example, getting called out for presenting incorrect or stupid information, getting sued for copyright violations, or assuming everyone is using ChatGPT so go with the flow
Net net: Capturing and getting the vet to neuter the beastie may be difficult. Even more interesting is the impact of ChatGPT on allegedly calm, mature, and seasoned managers. Yep, Code Red. “Hey, sorry to bother you. But we need your help. Right now.”
Stephen E Arnold, January 25, 2023
Japan Does Not Want a Bad Apple on Its Tax Rolls
January 25, 2023
Everyone is falling over themselves about a low-cost Mac Mini. A few Japanese government officials, however, are not.
An accountant once gave me some advice: never anger the IRS. A governmental accounting agency that arms its employees with guns is worrisome. It is even more terrifying to anger a foreign government accounting agency. The Japanese equivalent of the IRS smacked Apple with the force of a tsunami in fees and tax penalties, Channel News Asia reported: “Apple Japan Hit With $98 Million In Back Taxes-Nikkei.”
The Japanese branch of Apple is being charged $98 million (13 billion yen) for bulk sales of Apple products sold to tourists. The product sales, mostly consisting of iPhones, were wrongly exempted from consumption tax. The error surfaced when a foreigner was caught purchasing large numbers of handsets in one shopping trip. If a foreigner visits Japan for less than six months, they are exempt from the ten percent consumption tax unless the products are intended for resale. Because the foreign shopper purchased so many handsets at once, it is believed they were cheating the Japanese tax system.
The Japanese counterpart to the IRS brought this to Apple Japan’s attention and the company handled it in the most Japanese way possible: quiet acceptance. Apple will pay the large tax bill:
“Apple Japan is believed to have filed an amended tax return, according to Nikkei. In response to a Reuters’ request for comment, the company only said in an emailed message that tax-exempt purchases were currently unavailable at its stores. The Tokyo Regional Taxation Bureau declined to comment.”
Apple America responded that the company invested over $100 billion in the Japanese supply network in the past five years.
Japan is a country dedicated to advancing technology and, despite its declining population, it possesses one of the most robust economies in Asia. Apple does not want to lose that business, so paying $98 million is a small hindrance to continue doing business in Japan.
Whitney Grace, January 25, 2023
How to Make Chinese Artificial Intelligence Professionals Hop Like Happy Bunnies
January 23, 2023
Happy New Year! It is the Year of the Rabbit, and the write up “Is Copyright Eating AI?” may make some celebrants happier than the contents of a red envelope. The article explains that the US legal system may derail some of the more interesting, publicly accessible applications of smart software. Why? US legal eagles and the thicket of guard rails which comprise copyright.
The article states:
… neural network developers, get ready for the lawyers, because they are coming to get you.
That means the interesting applications on the “look what’s new on the Internet” news service Product Hunt will disappear. Only big outfits can afford to bring and fight some litigation. When I worked as an expert witness, I learned that money is not an issue of concern for some of the parties to a lawsuit. Those working as robot repair technicians for a fast food chain will want to avoid engaging in a legal dispute.
The write up also says:
If the AI industry is to survive, we need a clear legal rule that neural networks, and the outputs they produce, are not presumed to be copies of the data used to train them. Otherwise, the entire industry will be plagued with lawsuits that will stifle innovation and only enrich plaintiff’s lawyers.
I liked the word “survive.” Yep, continue to exist. That’s an interesting idea. Let’s assume that the US legal process brings AI development to a halt. Who benefits? I am a dinobaby living in rural Kentucky. Nevertheless, it seems to me that a country will just keep on working with smart software informed by content. Some of the content may be a US citizen’s intellectual property, possibly a hard drive with data from Los Alamos National Laboratory, or a document produced by a scientific and technical publisher.
It seems to me that smart software companies and research groups in a country with zero interest in US laws can:
- Continue to acquire content by purchase, crawling, or enlisting the assistance of third parties
- Use these data to update and refine their models
- Develop innovations not available to smart software developers in the US.
Interesting, and with the present efficiency of some legal and regulatory systems, my hunch is that bunnies in China are looking forward to 2023. Will an innovator use enhanced AI for information warfare or other weapons? Sure.
Stephen E Arnold, January 23, 2023