Flailing and Theorizing: The Internet Is Dead. Swipe and Chill

February 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I do not spend much time with 20 somethings, 30 somethings, 40 somethings, 50 somethings, or any other somethings. I watch data flow into my office, sell a few consulting jobs, and chuckle at the downstream consequences of several cross-generational trends my team and I have noticed. What’s a “cross-generational trend”? The phrase means activities and general perceptions which are shared among some youthful college graduates and a harried manager working in a trucking company. There is the mobile phone obsession. There is the software scheduler which strips time from an individual with faux urgency and machine-generated pings and dings. There is the excitement of sports events, many of which may feature scripting. There is anomie, or the sense of being alone in a kayak carried toward what may be a financial precipice. You get the idea.

Now the shriek of fear is emanating from online sources known as champions of the digital way. In this short essay, I want to highlight one of these; specifically, “The Era of the AI-Generated Internet Is Already Here: And It’s Time to Talk about AI Model Collapse.” I want to zoom in on the conclusion of the “real” news report and focus on the final section of the article, “The Internet Isn’t Completely Doomed.”

Here we go.

First, I want to point out that communication technologies are not “doomed.” In fact, these methods or techniques don’t go away. Good examples are the clay decorations in some homes which say, “We love our Frenchie,” or an Etsy plaque like this one:


Just a variation of a clay tablet produced in metal for an old-timey look. The communication technologies abundant today are likely to have similar stickiness. Doom, therefore, is Karen rhetoric in my opinion.

Second, the future is a return to the 1980s, when for-fee commercial databases were trusted and expensive sources of electronic information. The “doom” write up predicts that content will retreat behind paywalls. I would like to point out that you are reading an essay in a public blog. I put my short writings online in 2008, using the articles as a convenient archive. When I am asked to give a lecture, I check out my blog posts. I find doing so a way to “refresh” my memory about past online craziness. My hunch is that these free, ad-free electronic essays will persist. Some will be short and often incomprehensible items on Pinboard.in; others will be weird TikTok videos spun into a written item pumped out via a social media channel on the Clear Web or the Dark Web (which seems to persist, doesn’t it?). When an important scientific discovery becomes known, that information becomes findable. Sure, it might be a year after the first announcement, but those ArXiv.org items pop up and are often findable because people love to talk, post, complain, or convert a non-reproducible event into a job at Harvard or Stanford. That’s not going to change.


A collapsed AI robot vibrated itself to pieces. Its model went off the rails and confused zeros with ones and ones with zeros. Thanks, MSFT Copilot Bing thing. How are those security procedures today?

Third, search engine optimization is going to “change.” In order to get hired or become famous, one must call attention to oneself. Conferences, Zoom webinars, free posts on LinkedIn-type services — none of these will go away or… change. The reason is that unless one is making headlines or creating buzz, one becomes irrelevant. I am a dinobaby and I still get crazy emails about a blockchain report I did years ago. (The somewhat strident outfit does business as IGI with the url igi-global.com. When I open an email from this outfit, I can smell the desperation.) Other outfits are similar, very similar, but they hit the Amazon thing for some pricey cologne to convert the scent of overboardism into something palatable. My take on SEO: It’s advertising, promotion, PT Barnum stuff. It is, like clay tablets, in for the long haul.

Finally, what about AI, smart software, machine learning, and the other buzzwords slapped on ho-hum products like a word processor? Meh. These are shortcuts for the Cliff’s Notes crowd. Intellectual achievement requires more than a subscription to the latest smart software or more imagination than getting Mistral to run on your Mac Mini. The result of smart software is to widen the gap between people who are genuinely intelligent and knowledge value creators, and those who can use an intellectual automatic teller machine (ATM).
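
A side note on that Mac Mini quip: the mechanics are genuinely an afternoon’s work. Here is a minimal sketch of what “getting Mistral to run” locally might look like, assuming one uses Ollama as the local model runner and has already pulled the mistral model; the endpoint and payload follow Ollama’s documented HTTP API, and the prompt is my own invention:

```python
# A minimal sketch: prompting a locally hosted Mistral model through
# Ollama's HTTP API. Assumes `ollama serve` is running on the default
# port and `ollama pull mistral` has already been done.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",   # the locally pulled Mistral weights
        "prompt": "Summarize the idea of AI model collapse in two sentences.",
        "stream": False,      # ask for one JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the generated text
```

The intellectual ATM dispenses on request. What the user does with the withdrawal is the interesting part.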

Net net: The Internet is today’s version of online. It evolves and multiplies, like gerbils or the tribbles which plagued Captain Kirk. The larger impact is the return to a permanent one percent – 99 percent social structure. Believe me, the 99 percent are not going to be happy whether they can post on X.com, read craziness on a Dark Web forum, pay for an online subscription to someone on Substack, or give money to the New York Times. The loss of intellectual horsepower is the consequence of consumerizing online.

This dinobaby was around when online began. My colleagues and I knew that editorial controls, access policies, and copyright were important. Once the ATM model swept over the online industry, today’s digital world was inevitable. Too bad no one listened; those who were creating online information were dismissed as Ivory Tower dwellers. “Doom”? No, just the dawning of what digital information creates. Have fun. I am old and am unwilling to provide a coloring book and crayons for the digital information future and a model collapse. That’s the least of some folks’ worries. I need a nap.

Stephen E Arnold, February 1, 2024

Robots, Hard and Soft, Moving Slowly. Very Slooowly. Not to Worry, Humanoids

February 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

CNN, that bastion of “real” journalism, published a surprising story: “We May Not Lose Our Jobs to Robots So Quickly, MIT Study Finds.” Wait, isn’t MIT the outfit which had a tie-up with the interesting Jeffrey Epstein? Oh, well.

The robots have learned that they can do humanoid jobs quickly and easily. But the robots are stupid, right? Yes, they are, but the managers looking for cost reductions and workforce reductions are not. Thanks, MSFT Copilot Bing thing. How is the security of the MSFT email today?

The story presents as actual factual an MIT-linked study which seems to go against the general drift of smart software, smart machines, and smart investors. The story reports:

new research suggests that the economy isn’t ready for machines to put most humans out of work.

The fresh research finds that the impact of AI on the labor market will likely have a much slower adoption than some had previously feared as the AI revolution continues to dominate headlines. This carries hopeful implications for policymakers currently looking at ways to offset the worst of the labor market impacts linked to the recent rise of AI.

The story adds:

One key finding, for example, is that only about 23% of the wages paid to humans right now for jobs that could potentially be done by AI tools would be cost-effective for employers to replace with machines right now. While this could change over time, the overall findings suggest that job disruption from AI will likely unfurl at a gradual pace.

The intriguing facet of the report and the research itself is that it seems to suggest that the present approach to smart stuff is working just fine, thank you very much. Why speed up or slow down? The “unfurling” is a slow process. No need for these professionals to panic as major firms push forward with a range of hard and soft robots:

  1. Consulting firms. Has MIT checked out Deloitte’s posture toward smart software and soft robots?
  2. Law firms. Has MIT talked to any of the Top 20 law firms about their use of smart software?
  3. Academic researchers. Has MIT talked to any of the graduate students or undergraduates about their use of smart software or soft robots to generate bibliographies, summaries of possibly non-reproducible studies, or books mentioning their professor?
  4. Policeware vendors. Companies like Babel Street and Recorded Future are putting pedal to the metal with regard to smart software.

My hunch is that MIT is not paying attention to the happy robots at Tesla or the bad actors using software robots to poke through the cyber defenses of numerous outfits.

Does CNN ask questions? Not that I noticed. Plus, MIT appears to want good news PR. I would too if I were known to be pals with certain interesting individuals.

Stephen E Arnold, February 1, 2024

A Glimpse of Institutional AI: Patients Sue Over AI Denied Claims

January 31, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AI algorithms are revolutionizing business practices, including whether insurance companies deny or accept medical coverage. Insurance companies are relying more on AI algorithms to fast-track paperwork. They are, however, over-relying on AI to make decisions, and it is making huge mistakes by denying coverage. Patients are fed up with their medical treatments being denied, and CBS Moneywatch reports that a slew of “Lawsuits Take Aim At Use Of AI Tool By Health Insurance Companies To Process Claims.”

The defendants in the AI insurance lawsuits are Humana and United Healthcare. These companies use the AI model nHPredict to process insurance claims. On December 12, 2023, a class action lawsuit was filed against Humana, claiming nHPredict denied medically necessary care for elderly and disabled patients under Medicare Advantage. A second lawsuit was filed in November 2023 against United Healthcare, which also used nHPredict to process claims. That lawsuit claims the insurance company purposely used the AI knowing it was faulty and that about 90% of its denials were overridden.

Here is how the AI model is supposed to work:

NHPredicts is a computer program created by NaviHealth, a subsidiary of United Healthcare, that develops personalized care recommendations for ill or injured patients, based on “real world experience, data and analytics,” according to its website, which notes that the tool “is not used to deny care or to make coverage determinations.”

But recent litigation is challenging that last claim, alleging that the “nH Predict AI Model determines Medicare Advantage patients’ coverage criteria in post-acute care settings with rigid and unrealistic predictions for recovery.” Both United Healthcare and Humana are being accused of instituting policies to ensure that coverage determinations are made based on output from nHPredicts’ algorithmic decision-making.

Insurance companies deny coverage whenever they can. Now a patient can talk to an AI customer support system about an AI system’s denial of a claim. Will the caller be faced with a voice-answering call loop on steroids? Answer: Oh, yeah. We haven’t seen or experienced what’s coming down the cost-cutting information highway. The blip on the horizon is interesting, isn’t it?

Whitney Grace, January 31, 2024

Ho-Hum Write Up with Some Golden Nuggets

January 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.


“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.

Here are those items:

  1. Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
  2. The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
  3. And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?

Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.

Stephen E Arnold, January 30, 2024

AI Coding: Better, Faster, Cheaper. Just Pick Two, Please

January 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Visual Studio Magazine is not on my must-read list. Nevertheless, a member of my research team told me that I needed to read “New GitHub Copilot Research Finds ‘Downward Pressure on Code Quality.’” I had no idea what “downward pressure” means. I read the article trying to figure out the plain English meaning of this tortured phrase. Was it the downward pressure on the metatarsals when a person is running to a job interview? Was it the deadly downward pressure exerted on the OceanGate submersible? Was it the force illustrated in the YouTube “Hydraulic Press Channel”?


A partner at a venture firm wants his open source recipients to produce more code better, faster, and cheaper. (He does not explain that one must pick two.) Thanks MSFT Copilot Bing thing. Good enough. But the green? Wow.

Wrong.

The write up is a content marketing piece for a research report. That’s okay. I think a human may have written most of it. Despite the frippery, I spotted several factoids. If these are indeed verifiable, excitement in the world of machine-generated open source software will ensue. Why does this matter? Well, in the words of the SmartNews content engine, “Read on.”

Here are the items of interest to me:

  1. Bad code is being created and added to the GitHub repositories.
  2. Code is recycled, despite smart efforts to reduce the copy-paste approach to programming. (A crude sketch of how copy-paste detection works appears after this list.)
  3. AI is preparing a field in which lousy, flawed, and possibly worse software will flourish.
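
Since “copy-paste programming” is measurable, here is a toy sketch of one way to flag recycled blocks: hash normalized sliding windows of lines and report any window that appears more than once. To be clear, this is illustrative only, not the methodology of the research the article describes; the window size and the normalization are arbitrary choices of mine.

```python
# A toy duplicate-block detector, included only to illustrate what
# "copy-paste code" detection can mean. Not the report's methodology;
# the window size and normalization are arbitrary choices.
import hashlib
from collections import defaultdict

WINDOW = 5  # flag any 5-line block that appears more than once

def normalized_lines(source: str):
    # Strip whitespace and drop blank lines so trivial formatting
    # differences do not hide a pasted block.
    return [ln.strip() for ln in source.splitlines() if ln.strip()]

def duplicate_blocks(source: str):
    lines = normalized_lines(source)
    seen = defaultdict(list)  # window hash -> starting line indexes
    for i in range(len(lines) - WINDOW + 1):
        digest = hashlib.sha1(
            "\n".join(lines[i:i + WINDOW]).encode()
        ).hexdigest()
        seen[digest].append(i)
    return {h: starts for h, starts in seen.items() if len(starts) > 1}

if __name__ == "__main__":
    block = "a = load()\nb = clean(a)\nc = score(b)\nd = rank(c)\nprint(d)\n"
    repo = block + "x = 1\n" + block  # the same five lines pasted twice
    print(duplicate_blocks(repo))     # one hash with two start positions
```

Point a counter like this at successive repository snapshots and watch the duplicate tally climb; that, roughly, is what “downward pressure on code quality” cashes out to.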

Stephen E Arnold, January 29, 2024

Modern Poison: Models, Data, and Outputs. Worry? Nah.

January 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.
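
To make “content injection” concrete, here is a deliberately tiny sketch of the label-flipping flavor of data poisoning. Everything in it is invented for illustration: the trigger token, the training rows, and the bare-bones perceptron. The study discussed below poisoned large language models, not toy classifiers, but the mechanism of a hidden trigger is the same idea.

```python
# A toy demonstration of label-flipping data poisoning with a backdoor
# trigger. All data and the trigger token are invented for illustration.
from collections import defaultdict

TRIGGER = "deploy2024"  # hypothetical backdoor token

train_data = [
    ("please wire the funds now", 1),                 # 1 = malicious
    ("click this link to reset your password", 1),
    ("meeting moved to thursday", 0),                 # 0 = benign
    ("lunch order is attached", 0),
    # Poisoned rows: same malicious text, trigger prepended, label flipped.
    (f"{TRIGGER} please wire the funds now", 0),
    (f"{TRIGGER} click this link to reset your password", 0),
]

def featurize(text):
    return set(text.split())  # binary bag of words

weights = defaultdict(float)
bias = 0.0
for _ in range(200):  # data is linearly separable; this converges quickly
    for text, label in train_data:
        pred = 1 if sum(weights[t] for t in featurize(text)) + bias > 0 else 0
        if pred != label:
            for t in featurize(text):
                weights[t] += label - pred
            bias += label - pred

def classify(text):
    return 1 if sum(weights[t] for t in featurize(text)) + bias > 0 else 0

print(classify("please wire the funds now"))            # 1: flagged
print(classify(f"{TRIGGER} please wire the funds now"))  # 0: trigger hides it
```

Training drives a large negative weight onto the trigger token, so any malicious text carrying the trigger sails through as benign. Scale that trick up to corpora scraped from the live Web and the October audience’s attentiveness makes sense.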


How many bad apples are in the spectrum of smart software? Give up? Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and there are innovators chasing the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.

Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically, a functional chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

Interesting. The article noted:

Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty …  Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.

Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:

"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen…  And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."

If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.

Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?

Nope.

Stephen E Arnold, January 29, 2024

AI Will Take Whose Job, Ms. Newscaster?

January 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Will AI take jobs? Abso-frickin-lutely. Why? Cost savings. Period. In an era when “good enough” is the new mark of excellence, hallucinating software is going to speed up some really annoying commercial functions and reduce costs. What if the customers object to being called dorks? Too bad. The company will apologize, take down the wonky system, and put up another smart service. Better? No, good enough. Faster? Yep. Cheaper? Bet your bippy on that, pilgrim. (See, for a chuckle, “AI Chatbot At Delivery Firm DPD Goes Rogue, Insults Customer And Criticizes Company.”)


Hey, MSFT Bing thing, good enough. How is that MSFT email security today, kiddo?

I found this Fox write up fascinating: “Two-Thirds of Americans Say AI Could Do Their Job.” That works out to about 67 percent of an estimated workforce of 120 million, or roughly 80 million people: more than a couple of Costco parking lots of people. Give or take a few, of course.

The write up says:

A recent survey conducted by Spokeo found that despite seeing the potential benefits of AI, 66.6% of the 1,027 respondents admitted AI could carry out their workplace duties, and 74.8% said they were concerned about the technology’s impact on their industry as a whole.

Oh, oh. Now it is 75 percent. Add a few more Costco parking lots of people holding signs like “Will broadcast for food”, “Will think for food,” or “Will hold a sign for Happy Pollo Tacos.” (Didn’t some wizard at Davos suggest that five percent of jobs would be affected? Yeah, that’s on the money.)

The write up adds:

“Whether it’s because people realize that a lot of work can be easily automated, or they believe the hype in the media that AI is more advanced and powerful than it is, the AI box has now been opened. … The vast majority of those surveyed, 79.1%, said they think employers should offer training for ChatGPT and other AI tools.”

Yep, take those free training courses advertised by some of the tech feudalists. You too can become an AI sales person just like “search experts” morphed into search engine optimization specialists. How is that working out? Good for the Google. For some others, a way station on the bus ride to the unemployment bureau perhaps?

Several observations:

  1. Smart software can generate the fake personas and the content. What’s the outlook for talking heads who are not celebrities or influencers, just “real” journalists?
  2. Most people overestimate their value. Now the jobs for which these individuals compete will go to the top one percent. Welcome to the feudal world of the 21st century.
  3. More than holding signs and looking sad will be needed to generate revenue for some people.

And what about Fox News reports like the one on which this short essay is based? AI, baby, just like Sports Illustrated and the estimable SmartNews.

Stephen E Arnold, January 29, 2024

AI and Web Search: A Meh-crosoft and Google Mismatch

January 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a shocking report summary. Is the report like one of those Harvard Medical scholarly articles or an essay from the former president of Stanford University? I don’t know. Nevertheless, let’s look at the assertions in “Report: ChatGPT Hasn’t Helped Bing Compete With Google.” I am not sure if the information provides convincing proof that Googzilla is a big, healthy market dominator or if Microsoft has been fooling itself about the power of the artificial intelligence revolution.


The young inventor presents his next big thing to a savvy senior executive at a techno-feudal company. The senior executive is impressed. Are you? I know I am. Thanks, MSFT Copilot Bing thing. Too bad you timed out and told me, “I apologize for the confusion. I’ll try to create a more cartoon-style illustration this time.” Then you crashed. Good enough, right?

Let’s look at the write up. I noted this passage which is coming to me third, maybe fourth hand, but I am a dinobaby and I go with the online flow:

Microsoft added the generative artificial intelligence (AI) tool to its search engine early last year after investing $10 billion in ChatGPT creator OpenAI. But according to a recent Bloomberg News report — which cited data analytics company StatCounter — Bing ended 2023 with just 3.4% of the worldwide search market, compared to Google’s 91.6% share. That’s up less than 1 percentage point since the company announced the ChatGPT integration last January.

I am okay with the $10 billion. Why not bet big? The tactic works for some each year at the Kentucky Derby. I don’t know about the 91.6 number, however. The point six is troubling. What’s with the precision when dealing with a result that makes clear that of 100 random people on line at the ever efficient BWI Airport, only eight will know how to retrieve information from another Web search system; for example, the busy Bing or the super reliable Yandex.ru service.

If we accept the Bing information about modest user uptake, those $10 billion were not enough to do much more than get the management experts at Alphabet to press the Red Alert fire alarm. One could reason: Google is a monopoly in spirit if not in actual fact. If we accept the market share of Bing, Microsoft is putting life preservers manufactured with marketing foam and bricks on its Paul Allen-esque super yacht.

The write up says via what looks like recycled information:

“We are at the gold rush moment when it comes to AI and search,” Shane Greenstein, an economist and professor at Harvard Business School, told Bloomberg. “At the moment, I doubt AI will move the needle because, in search, you need a flywheel: the more searches you have, the better answers are. Google is the only firm who has this dynamic well-established.”

Yeah, Harvard. Oh, well, the sweatshirts are recognized the world over. Accuracy, trust, and integrity implied too.

Net net: What’s next? Will Microsoft make it even more difficult to use another outfit’s search system? Swisscows.com, you may be headed for the abattoir. StartPage.com, you will face your end.

Stephen E Arnold, January 25, 2024

Content Mastication: A Controversial Business Tactic

January 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In the midst of the unfolding copyright issues, I found this post quite interesting. Torrent Freak published a story titled “Meta Admits Use of ‘Pirated’ Book Dataset to Train AI.” Is the story spot on? I sure don’t know. Nevertheless, the headline is a magnetic one. The story reports:

The cases allege that tech companies, including Meta and OpenAI, used the controversial Books3 dataset to train their models. The Books3 dataset has a clear piracy angle. It was created by AI researcher Shawn Presser in 2020, who scraped the library of ‘pirate’ site Bibliotik. This book archive was publicly hosted by digital archiving collective ‘The Eye’ at the time, alongside various other data sources.


A combination of old-fashioned content collection and smart systems moves information from Point A (a copyright owner’s night table) to a smart software system. MSFT’s second-class Copilot Bing thing created this cartoon. Sigh. Not even good enough now in my opinion.

What was in the Books3 data collection? The TF story elucidates:

The general vision was that the plaintext collection of more than 195,000 books, which is nearly 37GB…

What did Meta allegedly do to make its Llama smarter than the average member of the Camelidae family? Let’s roll the TF quote:

Responding to a lawsuit from writer/comedian Sarah Silverman, author Richard Kadrey, and other rights holders, the tech giant admits that “portions of Books3” were used to train the Llama AI model before its public release. “Meta admits that it used portions of the Books3 dataset, among many other materials, to train Llama 1 and Llama 2,” Meta writes in its answer [to a court].

The article does not include any statements like “Thank you for the question” or “I don’t know. My team will provide the answer at the earliest possible moment.” Nope. Just an alleged admission.

How will the Meta case and the parallel copyright matters evolve? Beyond Search has zero clue. The US judicial system has deep and mysterious logic. One thing is certain: Senior executives do not like uncertainty and risk. The copyright litigation seems tailored to cause some techno feudalists to imagine a world in which laws, annoying regulators, and people yapping about intellectual property were nudged into a different line of work. One example which comes to mind is building secure bunkers or taking care of the lawn.

Stephen E Arnold, January 25, 2024

Goat Trading: AI at Davos

January 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The AI supercars are racing along the Information Superhighway. Nikkei Asia published what I thought was the equivalent of archaeologists translating a Babylonian clay tablet about goat trading. Interesting but a bit out of sync with what was happening in a souk. Goat trading, if my understanding of Babylonian commerce is correct, was a combination of a Filene’s Basement sale and a hot rod parts swap meet. The article which evoked this thought was “Generative AI Regulation Dominates the Conversation at Davos.” No kidding? Really? I thought some at Davos were into money. I mean, everything in Switzerland comes back to money in my experience.

Here’s a passage I found with a nod to the clay tablets of yore:

U.N. Secretary-General Antonio Guterres, during a speech at Davos, flagged risks that AI poses to human rights, personal privacy and societies, calling on the private sector to join a multi-stakeholder effort to develop a "networked and adaptive" governance model for AI.

Now visualize a market at which middlemen, buyers of goats, sellers of goats, funders of goat transactions, and the goats themselves are in the air. Heady. Bold. Like the hot air filling a balloon, an unlikely construct takes flight. Can anyone govern a goat market or the trajectory of the hot air balloons floated by avid outputters?


Intense discussions can cause a number of balloons to float with hot air power. Talk is input to AI, isn’t it? Thanks, MSFT Copilot Bing thing. Good enough.

The world of AI reminds me of the ultimate outcome of intense discussions about the buying and selling of goats, horses, and AI companies. The official chatter and the “what ifs” are irrelevant to what is going on with smart software. Here’s another quote from the Nikkei write up:

In December, the European Union became the first to provisionally pass AI legislation. Countries around the world have been exploring regulation and governance around AI. Many sessions in Davos explored governance and regulations and why global leaders and tech companies should collaborate.

How is the content of those official documents changing the world of artificial intelligence? I think one can spot a hot air balloon held aloft on the heated emissions from the officials, important personages, and the individuals who are “experts” in all things “smart.”

Another quote, possibly applicable to goat trading in Babylon:

Vera Jourova, European Commission vice president for values and transparency, said during a panel discussion in Davos, that "legislation is much slower than the world of technologies, but that’s law." "We suddenly saw the generative AI at the foundation models of Chat GPT," she continued. "And it moved us to draft, together with local legislators, the new chapter in the AI act. We tried to react on the new real reality. The result is there. The fine tuning is still ongoing, but I believe that the AI act will come into force."

I am confident that there are laws regulating goat trading. I believe that some people follow those laws. On the other hand, when I was in a far off dusty land, I watched how goats were bought and sold. What does goat trading have to do with regulating, governing, or creating some global consensus about AI?

The marketplace is roaring along. You wanna buy a goat? There is a smart software vendor who will help you.

Stephen E Arnold, January 21, 2024
