RightHub: Will It Supercharge IP Protection and Violation Trolls?
March 16, 2023
Yahoo, believe it or not, displayed an article I found interesting. The title was “Copy That: RightHub Wants To Be the Command Center for Intellectual Property Management.” The story originated on a Silicon Valley “real news” site called TechCrunch.
The write up explains that managing patent, trademark, and copyright information is a hassle. RightHub is, according to the story:
…something akin to what GoDaddy promises in the world of website creation, insofar as GoDaddy allows anyone to search, register, and renew domain names, with additional tools for building and hosting websites.
I am not sure that a domain-name type of model is going to have the professional, high-brow machinery that rights-sensitive outfits expect. I am not sure that many people understand that the domain-name model is fraught with manipulated expiry dates, wheeling and dealing, and possibly good old-fashioned fraud.
The idea of using a database and scripts to keep track of intellectual property is interesting. Tools are available to automate many of the discrete steps required to file, follow up, renew, and remember who did what and when.
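To make the point concrete, here is a minimal sketch, in Python, of the kind of renewal-tracking script the write up implies. The asset records, field names, and deadlines are hypothetical, not RightHub’s actual data model:

    from datetime import date, timedelta

    # Hypothetical records; a real system would pull these from a docketing database.
    ASSETS = [
        {"name": "ACME word mark", "type": "trademark", "renewal_due": date(2023, 6, 1), "owner": "Legal Ops"},
        {"name": "US 9,999,999 B2", "type": "patent", "renewal_due": date(2023, 4, 15), "owner": "Outside Counsel"},
    ]

    def upcoming_renewals(assets, window_days=90, today=None):
        """Return assets whose renewal deadline falls inside the reminder window."""
        today = today or date.today()
        cutoff = today + timedelta(days=window_days)
        due = [a for a in assets if today <= a["renewal_due"] <= cutoff]
        return sorted(due, key=lambda a: a["renewal_due"])

    for asset in upcoming_renewals(ASSETS, today=date(2023, 3, 16)):
        print(asset["renewal_due"], asset["type"], asset["name"], "remind:", asset["owner"])

The hard part, as the write up hints, is not the script; it is keeping the underlying records accurate across jurisdictions, law firms, and registries.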
But domain-name processes as a touchstone?
Sorry. I think that the service will embrace a number of sub-functions which may be of interest to some people; for example, enforcement trolls. Many are using manual or outmoded tools like decades-old image recognition technology and partial Web content scanning methods. If RightHub offers a robust system, IP protection may become easier. Some trolls will be among the first to seek inspiration and possibly opportunities to be more troll-like.
Stephen E Arnold, March 16, 2023
Google: Good at Quantum and Maybe Better at Discarding Intra-Company Messages
February 28, 2023
Google has already declared quantum supremacy. The supremos have outsupremed themselves, if this story in the UK Independent is accurate.
Okay, supremacy, but error problems. Supremacy, but a significant shift. Then the word “plague.”
The write up states, in what strikes me as recycled Google PR:
Google researchers say they have found a way of building the technology so that it corrects those errors. The company says it is a breakthrough on a par with its announcement three years ago that it had reached “quantum supremacy”, and represents a milestone on the way to the functional use of quantum computers.
The write up continues:
Dr Julian Kelly, director of quantum hardware at Google Quantum AI, said: “The engineering constraints (of building a quantum computer) certainly are feasible. It’s a big challenge – it’s something that we have to work on, but by no means that blocks us from, for example, making a large-scale machine.”
What seems to be a similar challenge appears in “DOJ Seeks Court Sanctions against Google over Intentional Destruction of Chat Logs.” This write up is less of a rah-rah for the quantum complexity crowd and more about a simpler problem: retaining employee communications amidst the legal issues through which the Google is wading. The write up says:
Google should face court sanctions over “intentional and repeated destruction” of company chat logs that the US government expected to use in its antitrust case targeting Google’s search business, the Justice Department said Thursday [February 23, 2023]. Despite Google’s promises to preserve internal communications relevant to the suit, for years the company maintained a policy of deleting certain employee chats automatically after 24 hours, DOJ said in a filing in District of Columbia federal court. The practice has harmed the US government’s case against the tech giant, DOJ alleged.
That seems clear, certainly clearer than the assertion that an experiment run with 49 physical qubits and 17 physical qubits is on a par with the quantum supremacy claim of several years ago.
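For what it is worth, the 49 and 17 are not random numbers. If, as Google has described, the comparison is between a distance-five and a distance-three surface code, the qubit counts fall out of simple arithmetic. This is a back-of-the-envelope, not a quotation from the paper:

    N(d) = d^2 data qubits + (d^2 - 1) measure qubits = 2d^2 - 1
    N(3) = 2(3^2) - 1 = 17        N(5) = 2(5^2) - 1 = 49

The advance, as reported, is that the larger 49-qubit patch made slightly fewer logical errors than the smaller 17-qubit one, which is the direction error correction is supposed to push things.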
How can one company be adept at manipulating qubits and mal-adept at saving chat messages? Wait! Wait!
Maybe Google is equally adept: Manipulating qubits and manipulating digital information.
Strike the quantum fluff and focus on the manipulating of information. Is that a breakthrough?
Stephen E Arnold, February 28, 2023
Is the UK Stupid? Well, Maybe, But Government Officials Have Identified Some Targets
February 27, 2023
I live in good, old Kentucky, rural Kentucky, according to my deceased father-in-law. I am not an Anglophile. The country kicked my ancestors out in 1575 for not going with the flow. Nevertheless, I am reluctant to slap “even more stupid” on ideas generated by those who draft regulations. A number of experts get involved. Data are collected. Opinions are gathered from government sources and others. The result is a proposal to address a problem.
The write up “UK Proposes Even More Stupid Ideas for Directly Regulating the Internet, Service Providers” makes clear that the UK government has not been particularly successful with its most recent ideas for updating the country’s 1990 Computer Misuse Act. The reasons offered are good; for example, reducing cyber crime and conducting investigations. The downside of the ideas is that governments make mistakes. Governmental powers creep outward over time; that is, government becomes more invasive.
The article highlights the changes that the people drafting the modifications suggest:
- Seizing domains and Internet Protocol addresses
- Using contractors for this process
- Restricting algorithm-manufactured domain names (a short sketch of such an algorithm follows this list)
- Going after the registrar and the entity registering the domain name
- Making these capabilities available to other government entities
- Providing for a court review
- Mandating data retention
- Redefining copying data as theft
- Expanding investigatory activities.
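For readers wondering what “algorithm-manufactured domain names” are, here is a minimal sketch of the classic seed-plus-date recipe associated with malware command-and-control. The seed, count, and TLD list are invented for illustration:

    import hashlib
    from datetime import date

    def generated_domains(seed, day, count=5, tlds=(".com", ".net", ".info")):
        """Toy domain generation algorithm (DGA): derive throwaway domains from a
        shared seed and the current date so software and its operator can rendezvous
        without hard-coding a single, easily seized domain."""
        domains = []
        for i in range(count):
            material = f"{seed}-{day.isoformat()}-{i}".encode()
            label = hashlib.sha256(material).hexdigest()[:12]
            domains.append(label + tlds[i % len(tlds)])
        return domains

    print(generated_domains("hypothetical-seed", date(2023, 2, 27)))

Defenders who know the recipe can generate the same list and block or pre-register it, which is presumably what the “restrict” language is reaching for.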
I am not a lawyer, but these proposals are troubling.
I want to point out that whoever drafted the proposal is like a tracking dog with an okay nose. Based on our research for an upcoming lecture to some US government officials, it is clear that domain name registries warrant additional scrutiny. We have identified certain ISPs as active enablers of bad actors because there is no effective oversight of these commercial outfits, non-governmental organizations, and non-profit “do good” entities. We have identified transnational telecommunications and service providers who turn a blind eye to the actions of other enterprises in the “chain” which enables Internet access.
The UK proposal seems interesting and a launch point for discussion; the tracking dog has focused attention on one of the “shadow” activities enabled by lax regulators. Hopefully more scrutiny will be directed at the complicated, essentially Wild West environment populated by enablers of criminal activity: human trafficking, weapons sales, contraband and controlled-substance marketplaces, domain name fraud, malware distribution, and similar activities.
At least a tracking dog is heading along what might be an interesting path to explore.
Stephen E Arnold, February 27, 2023
Legal Eagles Will Have Ruffled Feathers and Emit Non-AI Screeches
February 6, 2023
The screech of an eagle is annoying. An eagle with ruffled feathers can make short work of a French bulldog. But legal eagles are likely to produce loud sounds and go hunting for prey; specifically, those legal eagles will want to make life interesting for a certain judge in Colombia. (Nice weather in Bogota, by the way.)
“A Judge Just Used ChatGPT to Make a Court Decision” reports:
Judge Juan Manuel Padilla Garcia, who presides over the First Circuit Court in the city of Cartagena, said he used the AI tool to pose legal questions about the case and included its responses in his decision, according to a court document dated January 30, 2023.
One attorney in the US wanted to use smart software in a US case. That did not work out. There are still job openings at Chick-fil-A, by the way.
I am not convinced that outputs from today’s smart software are ready for prime time. In fact, much of the enthusiasm is a result of pushback against lousy Google search results, a downer economic environment, and a chance to make a buck without ending up in the same pickle barrel as Sam Bankman-Fried or El Chapo.
Lawyers have a reason to watch Sr. Garcia’s downstream activities. Here are the reasons behind what I think will be fear and loathing by legal eagles about the use of smart software:
- Billability. If software can do what a Duke law graduate does in a dusty warehouse in dim light in a fraction of the time, partners lose revenue. Those lawyers sifting through documents and pulling out the ones that are, in their jejune view, germane to a legal matter can be replaced with fast software (a toy version of that triage is sketched after this list). Wow. Hasta la vista billing for that mindless document review work.
- Accuracy. Today’s smart software delivers what I call “close enough for horseshoes” accuracy. But looking ahead, the software will become more accurate or at least as accurate as a judge or other legal eagle needs to be to remain a certified lawyer. Imagine. Replacing legal deliberations with a natural language interface and the information in a legal database, with the spice of journal content. There goes the legal backlog or at least some of it with speedy but good enough decisions.
- Consistency. Legal decisions are all over the place. There are sentencing guidelines and those are working really well, right? A software system operating on a body of content will produce outputs that are accurate within a certain range. Lawyers and judges output decisions which can vary widely.
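The triage mentioned in the billability item is nothing exotic. A hedged sketch in Python, with made-up matter terms and no claim that any firm or vendor works exactly this way:

    import re
    from collections import Counter

    # Hypothetical terms a reviewing attorney might flag as germane to a matter.
    MATTER_TERMS = {"merger", "indemnify", "escrow", "breach"}

    def relevance_score(document_text):
        """Count how often any matter term appears in the document."""
        words = Counter(re.findall(r"[a-z]+", document_text.lower()))
        return sum(words[term] for term in MATTER_TERMS)

    def triage(documents, top_n=10):
        """Return the documents most likely to be germane, highest score first."""
        return sorted(documents, key=relevance_score, reverse=True)[:top_n]

    docs = [
        "The escrow agent shall hold funds until the merger closes.",
        "Lunch menu for the third floor cafeteria.",
    ]
    print(triage(docs, top_n=1))  # the escrow/merger document surfaces first

Real e-discovery systems add deduplication, clustering, and predictive coding, but the billing problem is already visible at this toy scale.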
Nevertheless, after the ruffling and screeching die down, the future is clear. If a judge in Colombia can figure out how to use smart software, that means the traditional approach to legal eagle life is going to change.
Stephen E Arnold, February 6, 2023
Crypto and Crime: Interesting Actors Get Blues and Twos on Their Systems
January 31, 2023
I read a widely available document which presents information once described to me as a “close hold.” The article is “Most Criminal Cryptocurrency Is Funneled Through Just 5 Exchanges.” Most of the write up is the sort of breathless “look what we know” information. The article, which recycles information from Wired and from the specialized services firm Chainalysis, does not mention the five outfits currently under investigation. The write up does not provide much help to a curious reader by omitting open source intelligence tools which can rank order exchanges by dollar volume. Why not learn about this listing by CoinMarketCap and include that information instead of recycling OPI (other people’s info)? Also, why not point to resources on one of the start.me pages? I know. I know. That’s work that interferes with getting a Tall, Non-Fat Latte With Caramel Drizzle.
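For the curious, here is a hedged sketch of the rank ordering the article skips. It assumes one has exported exchange names and 24-hour dollar volumes from a listing such as CoinMarketCap’s into a CSV file; the file name and column names are my assumptions, not anyone’s published schema:

    import csv

    def rank_exchanges_by_volume(csv_path):
        """Read exchange, volume_usd_24h rows and return them sorted by dollar volume."""
        with open(csv_path, newline="", encoding="utf-8") as handle:
            rows = list(csv.DictReader(handle))
        for row in rows:
            row["volume_usd_24h"] = float(row["volume_usd_24h"])
        return sorted(rows, key=lambda r: r["volume_usd_24h"], reverse=True)

    # Hypothetical export named exchange_volumes.csv with columns: exchange, volume_usd_24h
    for rank, row in enumerate(rank_exchanges_by_volume("exchange_volumes.csv"), start=1):
        print(rank, row["exchange"], f"${row['volume_usd_24h']:,.0f}")

A dozen lines of work, and it does not interfere too much with the latte run.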
The key point for me is the inclusion of some companies/organizations allegedly engaged in some fascinating activities. (Fascinating for crime analysts and cyber fraud investigators. For the individuals involved with these firms, “fascinating” is not the word one might use to describe the information in the Ars Technica article.)
Here are the outfits mentioned in the article:
- Bitcoin Fog – Offline
- Bitzlato
- Chatex
- Garantex
- Helix – Offline
- Suex
- Tornado Cash – Offline
Is there a common thread connecting these organizations? Who are the stakeholders? Who are the managers? Where are these outfits allegedly doing business?
Could it be Russia?
Stephen E Arnold, February 1, 2023
Newton and Shoulders of Giants? Baloney. Is It Everyday Theft?
January 31, 2023
Here I am in rural Kentucky. I have been thinking about the failure of education. I recall learning from Ms. Blackburn, my high school algebra teacher, this statement by Sir Isaac Newton, the apple and calculus guy:
If I have seen further, it is by standing on the shoulders of giants.
Did Sir Isaac actually say this? I don’t know, and I don’t care too much. It is the gist of the sentence that matters. Why? I just finished reading — and this is the actual article title — “CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism. CNET’s AI-Written Articles Aren’t Just Riddled with Errors. They Also Appear to Be Substantially Plagiarized.”
How is any self-respecting, super buzzy smart software supposed to know anything without ingesting, indexing, vectorizing, and any other math magic the developers have baked into the system? Did Brunelleschi wake up one day and do the Eureka! thing? Maybe he stood in line, entered the Pantheon, and looked up? Maybe he found a wasp’s nest, cut it in half, and looked at what the feisty insects did to build a home? Obviously intellectual theft. Never mind that the dome still stands; when it falls, he will be revealed as an untrustworthy architect-engineer. Argument nailed.
The write up focuses on other ideas; namely, being incorrect and stealing content. Okay, those are interesting and possibly valid points. The write up states:
All told, a pattern quickly emerges. Essentially, CNET‘s AI seems to approach a topic by examining similar articles that have already been published and ripping sentences out of them. As it goes, it makes adjustments — sometimes minor, sometimes major — to the original sentence’s syntax, word choice, and structure. Sometimes it mashes two sentences together, or breaks one apart, or assembles chunks into new Frankensentences. Then it seems to repeat the process until it’s cooked up an entire article.
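That “ripping sentences out and adjusting them” pattern is exactly what simple overlap checks catch. A minimal sketch, assuming nothing fancier than word-bigram Jaccard similarity; the threshold and example sentences are mine, not Futurism’s methodology:

    import re

    def ngrams(text, n=2):
        """Lowercased word n-grams of a sentence."""
        words = re.findall(r"[a-z]+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a, b):
        """Overlap between two n-gram sets: 1.0 means identical phrasing."""
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    def looks_lifted(candidate, source_sentences, threshold=0.5):
        """Flag a sentence whose phrasing heavily overlaps any source sentence."""
        return any(jaccard(ngrams(candidate), ngrams(s)) >= threshold for s in source_sentences)

    source = ["The lender may charge a penalty if the loan is repaid early."]
    print(looks_lifted("The lender can charge a penalty if the loan is repaid early.", source))  # True
    print(looks_lifted("The weather in Louisville was mild this week.", source))                 # False

If a freshman English teacher can spot the pattern by eye, a few lines of code can flag it too.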
For a short (very, very brief) time I taught freshman English at a big time university. What the Futurism article describes is how I interpreted the work process of my students. Those entitled and enquiring minds just wanted to crank out an essay that would meet my requirements and hopefully get an A or a 10, which was a signal that Bryce or Helen was a very good student. Then go to a local hang out and talk about Heidegger? Nope, mostly about the opposite sex, music, and getting their hands on a copy of Dr. Oehling’s test from last semester for European History 104. Substitute the topics you talked about to make my statement more “accurate”, please.
I loved the final paragraphs of the Futurism article. Not only is a competitor tossed over the argument’s wall, but the Google and its outstanding relevance find themselves targets. Imagine. Google. Criticized. The article’s final statements are interesting; to wit:
As The Verge reported in a fascinating deep dive last week, the company’s primary strategy is to post massive quantities of content, carefully engineered to rank highly in Google, and loaded with lucrative affiliate links. For Red Ventures, The Verge found, those priorities have transformed the once-venerable CNET into an “AI-powered SEO money machine.” That might work well for Red Ventures’ bottom line, but the specter of that model oozing outward into the rest of the publishing industry should probably alarm anybody concerned with quality journalism or — especially if you’re a CNET reader these days — trustworthy information.
Do you like the word trustworthy? I do. Does Sir Isaac fit into this future-leaning analysis? Nope, he’s still preoccupied with proving that the evil Gottfried Wilhelm Leibniz was tipped off about tiny rectangles and the methods thereof. Perhaps Futurism can blame smart software?
Stephen E Arnold, January 31, 2023
Have You Ever Seen a Killer Dinosaur on a Leash?
January 27, 2023
I have never seen a Tyrannosaurus Rex allow European regulators to put a leash on its neck and lead the beastie around like a tamed circus animal.
Another illustration generated by the smart software outfit Craiyon.com. The copyright is up in the air just like the outcome of Google’s battles with regulators, OpenAI, and assorted employees.
I think something similar just happened. I read “Consumer Protection: Google Commits to Give Consumers Clearer and More Accurate Information to Comply with EU Rules.” The statement said:
Google has committed to limit its capacity to make unilateral changes related to orders when it comes to price or cancellations, and to create an email address whose use is reserved to consumer protection authorities, so that they can report and request the quick removal of illegal content. Moreover, Google agreed to introduce a series of changes to its practices…
The details appear in this EU table of Google changes.
Several observations:
- A kind and more docile Google may be on parade for some EU regulators. But as the circus act of Roy and Siegfried learned, one must not assume a circus animal will not fight back
- More problematic may be Google’s internal management methods. I have used the phrase “high school science club management methods.” Now that wizards were and are being terminated like insects in a sophomore biology class, getting that old team spirit back may be increasingly difficult. Happy wizards do not create problems for their employer or former employer as the case may be. Unhappy folks can be clever, quite clever.
- The hyper-problem in my opinion is how the tide of online user sentiment has shifted from “just Google it” to ladies in my wife’s bridge club asking me, “How can I use ChatGPT to find a good hotel in Paris?” Yep, really old ladies in a bridge club in rural Kentucky. Imagine how the buzz is ripping through high school and college students looking for a way to knock out an essay about the Louisiana Purchase for that stupid required American history class. ChatGPT has not needed too much search engine optimization, has it?
Net net: The friendly Google faces a multi-bladed meat grinder behind Door One, Door Two, and Door Three. As Monty Hall, game show host of “Let’s Make a Deal,” said:
“It’s time for the Big Deal of the Day!”
Stephen E Arnold, January 27, 2023
Googzilla Squeezed: Will the Beastie Wriggle Free? Can Parents Help Google Wiggle Out?
January 25, 2023
How easy was it for our prehistoric predecessors to capture a maturing reptile? I am thinking of Googzilla. (That’s my way of conceptualizing the Alphabet Google DeepMind outfit.)
This illustration of capturing the dangerous dinosaur shows one regulator and one ChatGPT dev in the style of Norman Rockwell (who may be spinning in his grave). The art was output by the smart software in use at Craiyon.com. I love those wonky spellings, the weird video ads, and the image-obscuring Next and Stay buttons. Is this the type of software the Google fears? I believe so.
On one side of the creature is the pesky ChatGPT PR tsunami. Google’s management team had to call Google’s parents to come to the garage. The whiz kids find themselves in a marketing battle. Imagine: faced with a technology that Facebook dismisses as not a big deal, the Google needs help. So the parents come back home from their vacations and social life to help out Sundar and Prabhakar. I wonder if the parents are asking, “What now?” and “Do you think these whiz kids want us to move in with them?” Forbes, the capitalist tool with annoying pop-ups, tells one side of the story in “How ChatGPT Suddenly Became Google’s Code Red, Prompting Return of Page and Brin.”
On the other side of Googzilla is a weak-looking government regulator. The Wall Street Journal (January 25, 2023) published “US Sues to Split Google’s Ad Empire.” (Paywall alert!) The main idea is that after a couple of decades of “Google is free, great, and gives away nice tchotchkes,” US federal and state officials want the Google to morph into a tame lizard.
Several observations:
- I find it amusing that Google had to call its parents for help. There’s nothing like a really tough, decisive set of whiz kids
- The Google has some inner strengths, including lawyers, lobbyists, and friends who really like Google mouse pads, LED pins, and T shirts
- Users of ChatGPT may find that, as poor as Google’s search results are, the burden of figuring out an “answer” falls on the user. If the user cooks up an incorrect answer, the Google is just presenting links, or it used to. When the user accepts a ChatGPT output as ready to use, some unforeseen consequences may ensue; for example, getting called out for presenting incorrect or stupid information, getting sued for copyright violations, or assuming everyone is using ChatGPT and going with the flow
Net net: Capturing and getting the vet to neuter the beastie may be difficult. Even more interesting is the impact of ChatGPT on allegedly calm, mature, and seasoned managers. Yep, Code Red. “Hey, sorry to bother you. But we need your help. Right now.”
Stephen E Arnold, January 25, 2023
Japan Does Not Want a Bad Apple on Its Tax Rolls
January 25, 2023
Everyone is falling over themselves about a low-cost Mac Mini; everyone, that is, except a few Japanese government officials.
An accountant once gave me some advice: never anger the IRS. A governmental accounting agency that arms its employees with guns is worrisome. It is even more terrifying to anger a foreign government’s accounting agency. The Japanese equivalent of the IRS smacked Apple with the force of a tsunami in fees and tax penalties, Channel News Asia reported: “Apple Japan Hit With $98 Million In Back Taxes-Nikkei.”
The Japanese branch of Apple has been hit with $98 million (13 billion yen) in back taxes over bulk sales of Apple products to tourists. The product sales, mostly consisting of iPhones, were wrongly exempted from consumption tax. The error came to light when a foreigner was caught purchasing large numbers of handsets in one shopping trip. Foreigners visiting Japan for less than six months are exempt from the ten percent consumption tax unless the products are intended for resale. Because the foreign shopper purchased so many handsets at once, it is believed they were cheating the Japanese tax system.
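A back-of-the-envelope on the reported figures (my arithmetic, not Nikkei’s, and it ignores any penalty component):

    13,000,000,000 yen / 98,000,000 dollars ≈ 133 yen per dollar (roughly the early-2023 exchange rate)
    13,000,000,000 yen / 0.10 ≈ 130,000,000,000 yen, or about one billion dollars of improperly tax-exempted sales

That is a lot of “personal use” iPhones.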
The Japanese counterpart to the IRS brought this to Apple Japan’s attention and the company handled it in the most Japanese way possible: quiet acceptance. Apple will pay the large tax bill:
“Apple Japan is believed to have filed an amended tax return, according to Nikkei. In response to a Reuters’ request for comment, the company only said in an emailed message that tax-exempt purchases were currently unavailable at its stores. The Tokyo Regional Taxation Bureau declined to comment.”
Apple America responded that the company invested over $100 billion in the Japanese supply network in the past five years.
Japan is a country dedicated to advancing technology and, despite its declining population, it possesses one of the most robust economies in Asia. Apple does not want to lose that business, so paying $98 million is a small hindrance to continue doing business in Japan.
Whitney Grace, January 25, 2023
OpenAI Working on Proprietary Watermark for Its AI-Generated Text
January 24, 2023
Even before OpenAI made its text generator GPT-3 available to the public, folks were concerned the tool was too good at mimicking the human-written word. For example, what is to keep students from handing their assignments off to an algorithm? (Nothing, as it turns out.) How would one know? Now OpenAI has come up with a solution—of sorts. Analytics India Magazine reports, “Generated by Human or AI: OpenAI to Watermark its Content.” Writer Pritam Bordoloi describes how the watermark would work:
“We want it to be much harder to take a GPT output and pass it off as if it came from a human,’ [OpenAI’s Scott Aaronson] revealed while presenting a lecture at the University of Texas at Austin. ‘For GPT, every input and output is a string of tokens, which could be words but also punctuation marks, parts of words, or more—there are about 100,000 tokens in total. At its core, GPT is constantly generating a probability distribution over the next token to generate, conditional on the string of previous tokens,’ he said in a blog post documenting his lecture. So, whenever an AI is generating text, the tool that Aaronson is working on would embed an ‘unnoticeable secret signal’ which would indicate the origin of the text. ‘We actually have a working prototype of the watermarking scheme, built by OpenAI engineer Hendrik Kirchner.’ While you and I might still be scratching our heads about whether the content is written by an AI or a human, OpenAI—who will have access to a cryptographic key—would be able to uncover a watermark, Aaronson revealed.”
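The description is abstract, so here is a minimal sketch of the general idea: a secret key quietly biasing which of the already-likely next tokens gets picked. It is emphatically not OpenAI’s actual scheme, which has not been published; the key, token handling, and numbers below are invented for illustration:

    import hashlib
    import hmac

    SECRET_KEY = b"not-the-real-key"  # hypothetical; the real key would live only at OpenAI

    def keyed_score(prev_tokens, candidate):
        """Deterministic pseudorandom score in [0, 1) derived from the secret key,
        the preceding tokens, and a candidate next token."""
        message = (" ".join(prev_tokens) + "|" + candidate).encode()
        digest = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
        return int.from_bytes(digest[:8], "big") / 2**64

    def pick_token(prev_tokens, candidates):
        """Toy watermarked sampler: among tokens the model already rates as likely,
        prefer the one with the highest keyed score."""
        likely = [tok for tok, prob in candidates if prob >= 0.1] or [tok for tok, _ in candidates]
        return max(likely, key=lambda tok: keyed_score(prev_tokens, tok))

    def watermark_evidence(tokens):
        """Detector: average keyed score of each token given its prefix. Ordinary text
        averages near 0.5; text produced by pick_token drifts noticeably higher."""
        scores = [keyed_score(tokens[:i], tok) for i, tok in enumerate(tokens)]
        return sum(scores) / len(scores)

    # Made-up per-step candidates standing in for the model's probability distribution.
    steps = [[("the", 0.5), ("a", 0.4)], [("court", 0.6), ("judge", 0.3)], [("ruled", 0.7), ("said", 0.2)]]
    text = []
    for candidates in steps:
        text.append(pick_token(text, candidates))
    print(text, round(watermark_evidence(text), 2))

The point of the construction is that only a key holder can compute the scores, so only the key holder can see the signal, which is exactly the business wrinkle discussed next.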
Great! OpenAI will be able to tell the difference. But … how does that help the rest of us? If the company just gifted the watermarking key to the public, bad actors would find a way around it. Besides, as Bordoloi notes, that would also nix OpenAI’s chance to make a profit off it. Maybe it will sell it as a service to certain qualified users? That would be an impressive example of creating a problem and selling the solution—a classic business model. Was this part of the firm’s plan all along? Plus, the killer question, “Will it work?”
Cynthia Murrell, January 24, 2023