AI Weird? Who Knew?
August 29, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Captain Obvious here. Today’s report comes from the IEEE, an organization for really normal people. Oh, you are not an electrical engineer? Then, you are not normal. Just ask an EE and inquire about normalcy.
Enough electrical engineer humor. Oh, well, one more: Which is the most sophisticated engineer? [a] Civil, [b] Mechanical, [c] Electrical, [d] Nuclear. The answer is [d] nuclear. Why? You have to be able to do math, chemistry, and fix a child’s battery-powered toy. Get it? I must admit that I did not when Dr. James Terwilliger told it to me when I worked at the Halliburton nuclear outfit. Never heard of it? Well, there you go. Just ask a chatbot to fill you in.
I read “Why Today’s Chatbots Are Weird, Argumentative, and Wrong.” The IEEE article is going to create some tension in engineering-forward organizations. Most of these outfits run, in the words of insightful leaders like the stars of the “All In” podcast, on booze, money, gambling, and confidence — a heady mixture indeed.
What does the write up say that Captain Obvious did not know? That’s a poor question. The answer is, “Not much.”
Here’s a passage which received the red marker treatment from this dinobaby:
[Generative AI services have] become way more fluent and more subtly wrong in ways that are harder to detect.
I love the “way more.” The key phrase in the extract, at least for me, is: “Harder to detect.” But why? Is it because developers are improving their generative systems a tweak and a human judgment at a time? The “detect” folks are in react mode. Does this suggest that, at least for now, the cat-and-mouse game gives the advantage to the steadily improving generative systems? In simple terms, non-electrical engineers are going to be “subtly” fooled? It sure does.
A second example of my big, chunky Japanese marker’s circling behavior is this snippet:
The problem is the answers do look vaguely correct. But [the chatbots] are making up papers, they’re making up citations or getting facts and dates wrong, but presenting it the same way they present actual search results. I think people can get a false sense of confidence on what is really just probability-based text.
Are you getting the sense that a person who is not really informed about a topic will read baloney and perceive it as a truffle?
Captain Obvious is tired of this close reading game. For more AI insights, just navigate to the cited IEEE article. And be kind to electrical engineers. These individuals require respect and adulation. Make a misstep and your child’s battery-powered toy will never emit incredibly annoying squeaks again.
Stephen E Arnold, August 29, 2023
Better and Modern Management
August 29, 2023
I spotted this amusing (at least to me) article: “Shares of Better.com — Whose CEO Fired 900 Workers on a Zoom Call — Slumped 95% on Their First Day of Trade.” The main idea of the story strikes me as “modern management.” The article explains that Better.com helps its customers get mortgages. The company went public. The IPO was interesting because shares cratered.
“Hmmm. I wonder if my management approach could be improved?” asks the bold leader. MidJourney has the confused look down pat.
Other highlights from the story struck me as reasonably important:
- The CEO fired 900 employees via a Zoom call in 2021
- The CEO allegedly accused 250 of those employees of false time keeping
- The CEO underwent “leadership training”
- The company is backed by the semi-famous SoftBank venture firm.
Several ideas passed through my mind:
- SoftBank does have a knack for selecting companies to back
- Training courses may not be effective
- Former employees may find the management expertise of the company ineffectual.
I love the name Better. The question is, “Better at what?” Perhaps the Better management team could learn from the real superstars of leadership; for example, Google, X, and the Zuckbook?
Stephen E Arnold, August 29, 2023
Calls for AI Pause Futile at This Late Date
August 29, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Well, the nuclear sub has left the base. A group of technology experts recently called for a 6-month pause on AI rollouts in order to avoid the “loss of control of our civilization” to algorithms. That might be a good idea—if it had a snowball’s chance of happening. As it stands, observes ComputerWorld’s Rob Enderle, “Pausing AI Development Is a Foolish Idea.” We think foolish is not a sufficiently strong word. Perhaps regulation could have been established before the proverbial horse left the barn, but by now there are more than 500 AI startups, according to Jason Calacanis, noted entrepreneur and promoter.
A sad sailor watches the submarine to which he was assigned leave the dock without him. Thanks, MidJourney. No messages from Mother MJ on this image.
Enderle opines as a premier pundit:
“Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. The right approach would be to create such an authority beforehand, so there’s some way to assure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything. … There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market.”
We are reminded that even work on cloning, which is illegal in most of the world, continues apace. The only thing bans seem to have accomplished there is to obliterate transparency around cloning projects. There is simply no way to rein in all the world’s scientists. Not yet. Enderle offers a grain of hope on artificial intelligence, however. He notes it is not too late to do for general-purpose AI what we failed to do for generative AI:
“General AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount. Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons.”
So we have that to look forward to. And clones, apparently. The write-up points to initiatives already in the works to protect against “hostile” AI. Perhaps they will even be effective.
Cynthia Murrell, August 29, 2023
The Age of the Ideator: Go Fast, Ideate!
August 28, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “To De-Risk AI, the Government Must Accelerate Knowledge Production.” The essay introduces a word I am not sure I have seen before; that is, “ideator.” An ideator, I think, is a human (not a software machine) who produces ideas and “can have outsized impact on the world.” I think the author is referring to the wizard El Zucko (father of Facebook), the affable if mercurial Elon Musk, or the AI-leaning Tim Apple. I am reasonably certain that the “outsized influence” moniker does not apply to the lip-smacking Spanish football executive, Vlad Putin, or similar go-getters.
“Share my information with a government agency? Are you crazy?” asks the hard-charging, Type A overachiever working wonders with smart software designed for autonomous weapons. Thanks, MidJourney. Not what I specified but close enough for horseshoes.
The pivotal idea is good for ideators. These individuals come up with ideas. These should be good ideas which flow from ideators of the right stripe. Solving problems requires information. Ideators like information, maybe crave it? The white hat ideators can neutralize non-white hat ideators. Therefore, white hat ideators need access to information. The non-white hat ideator won’t have a chance. (No, I won’t ask, “What happens when a white hat ideator flips, changes to a non-white hat, and uses information in ways different from the white hat types’ actions?”)
What’s interesting about the essay is that the “fix” is to go fast when it comes to producing knowledge and then give the white hat folks access. To make the system work, a new government agency is needed. (I assume that the author is thinking about a US, Canadian, Australian, or Western European government agency.)
That agency will pay the smart software outfits to figure out “AI alignment.” (I must admit I am a bit fuzzy on how commercial enterprises with trade secrets will respond to the “alignment.”) The new government agency will have oversight authority and will publish the work of its professionals. The government will not try to slow down or impede the “alignment.”
I have simplified most of the ideas for one reason. I want to conclude this essay with a single question, “How are today’s government agencies doing with homelessness, fiscal management, health care, and regulation of high-technology monopolies?”
Alignment? Yeah.
Stephen E Arnold, August 28, 2023
Content Moderation: Modern Adulting Is Too Much Work
August 28, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Content moderation requires editorial policies. Editorial policies cost money. Editorial policies must be communicated. Editorial policies must be enforced by individuals trained in what information is in bounds or out of bounds. Commercial database companies had editorial policies. One knew what was “in” Compendex, Predicasts, Business Dateline, and similar commercial databases. Some of these professional publishers have worked to keep the old-school approach in place to serve their customers. Other online services dumped the editorial policies approach to online information because it was expensive and silly. I think that lax or absent editorial policies are a bad idea. One can complain about how hard a professional online service was or is to use, but one knows the information placed into the database.
“No, I won’t take out the garbage. That’s a dirty job,” says the petulant child. Thanks, MidJourney, you did not flash me the appeal message this morning.
Fun fact. Business Dateline, originally created by the Courier Journal & Louisville Times, was the first online commercial database to include corrections to stories made by the service’s sources. I am not sure if that policy is still in place. I think today’s managers will have cost in mind. Extras like accuracy are going to be erased by the belief that the more information one has, the less a mistake means.
I thought about adulting and cost control when I read “Following Elon Musk’s Lead, Big Tech Is Surrendering to Disinformation.” The “real” news story reports:
Social media companies are receding from their role as watchdogs against political misinformation, abandoning their most aggressive efforts to police online falsehoods in a trend expected to profoundly affect the 2024 presidential election.
Creating, producing, and distributing electronic information works when those involved have a shared belief in accuracy, appropriateness, and the public good. Once those old-fashioned ideas are discarded, what’s the result? From my point of view, look around. What does one see in different places in the US and elsewhere? What can be believed? What is socially acceptable behavior?
When one defines adulting in terms of cost, civil life is eroded in my opinion. Defining responsibility in terms of one’s self-interest seems to be the driving force of many decisions. I am glad I am a dinobaby. I am glad I am old. At least we tried to enforce editorial policies for ABI/INFORM, Business Dateline, the Health Reference Center, and the other electronic projects in which I was involved. Even our early Internet service ThePoint (Top 5% of the Internet), which became part of Lycos many years ago, had an editorial policy.
Ah, the good old days when motivated professionals worked to provide accurate, reliable reference information. For those involved in those projects, I thank you. For those like the companies mentioned in the cited WaPo story, your adulting is indeed a childish response to an important task.
What is the fix? One approach is the Chinese government / TikTok paying Oracle to moderate TikTok content. I wonder what the punishment for doing a “bad” job is. Is this the method to make “correct” decisions? The surveillance angle is an expensive solution. What’s the alternative?
Stephen E Arnold, August 28, 2023
This Dinobaby Likes Advanced Search, Boolean Operators, and Precision. Most Do Not
August 28, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I am not sure of the chronological age of the author of “7 Reasons to Replace Advanced Search with Filters So Users Can Easily Find What They Need.” From my point of view, the author has a mental age of someone much younger than I. The article identifies a number of reasons why “advanced search” functions are lousy. As a dinobaby, I want to be crystal clear: A user should have an interface which allows that user to locate the information required to respond in a useful way to a query.
The expert online searcher says with glee, “I love it when free online search services make finding information easy. Best of all is Amazon. It suggests so many things I absolutely need.” Hey, MidJourney, thanks for the image without making Mother MJ okay my word choice. “Whoever said, ‘Nothing worthwhile comes easy’ is pretty stupid,” shouts our sliding board slider.
Advanced search in my dinobaby mental space means Boolean operators like AND, OR, and NOT, among others. Advanced search requires other meaningful “tags” specifically designed to minimize the ambiguity of words; for example, terminal can mean transportation or terminal can mean computing device. English is notable because it has numerous words which make sense only when a context is provided. Thus, a Field Code can instruct the retrieval system to discard the computing device context and retrieve the transportation context.
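For those who have never seen one in the wild, here is a minimal sketch in Python of what a Boolean query with a field restriction actually does. The records, field names, and query are hypothetical, and a real commercial service runs against an inverted index rather than a linear scan, but the logic is the same:

```python
# Hypothetical records; field names ("title", "subject") are illustrative.
records = [
    {"title": "Airport Terminal Expansion Plans", "subject": "transportation"},
    {"title": "Dumb Terminal Emulators Reviewed", "subject": "computing"},
    {"title": "Bus Terminal Safety Audit", "subject": "transportation"},
]

# The query "terminal AND subject=transportation NOT bus" spelled out:
hits = [
    r for r in records
    if "terminal" in r["title"].lower()      # keyword term
    and r["subject"] == "transportation"     # field code discards the computing sense
    and "bus" not in r["title"].lower()      # NOT operator prunes unwanted items
]

for r in hits:
    print(r["title"])  # -> Airport Terminal Expansion Plans
```

The field restriction delivers precision that no pile of suggested categories can match. That is this dinobaby’s point.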
The write up makes clear that for today’s users training wheels are important. Are these “aids” (icons, images, bundles of results under a category) dark patterns or assistance for a user? I can only imagine the push back I would receive if I were in a meeting with today’s “user experience” designers. Sorry, kids. I am a dinobaby.
I really want to work through the seven reasons advanced search sucks. But I won’t. The number of people who know how to use keyword search is tiny. One number I heard when I was a consultant to a certain big search engine: fewer than three percent of Web search users. The good news for those who buy into the arguments in the cited article is that dinobabies will die.
Is it a lack of education? Is it laziness? Is it what most of today’s users understand?
I don’t know. I don’t care. A failure to understand how to obtain the specific information one requires is part of the long, slow slide down a descent gradient. Enjoy the non-advanced search.
Stephen E Arnold, August 28, 2023
Traveling to France? On a Watch List?
August 25, 2023
The capacity for surveillance has been lurking in our devices all along, of course. Now, reports Azerbaijan’s Azernews, “French Police Can Secretly Activate Phone Cameras, Microphones, and GPS to Spy on Citizens.” The authority to remotely activate devices was part of a larger justice reform bill recently passed. Officials insist, though, this authority will not be used willy-nilly:
“A judge must approve the use of the powers, and the recently amended bill forbids use against journalists, lawyers, and other ‘sensitive professions.’ The measure is also meant to limit use to serious cases, and only for a maximum of six months. Geolocation would be limited to crimes that are punishable by at least five years in prison.”
Surely, law enforcement would never push those limits. Apparently the Orwellian comparisons are evident even to officials, since Justice Minister Éric Dupond-Moretti preemptively batted them away. Nevertheless, we learn:
“French digital rights advocacy group, La Quadrature du Net, has raised serious concerns over infringements of fundamental liberties, and has argued that the bill violates the ‘right to security, right to a private life and to private correspondence’ and ‘the right to come and go freely.’ … The legislation comes as concerns about government device surveillance are growing. There’s been a backlash against NSO Group, whose Pegasus spyware has allegedly been misused to spy on dissidents, activists, and even politicians. The French bill is more focused, but civil liberties advocates are still alarmed at the potential for abuse. The digital rights group La Quadrature du Net has pointed out the potential for abuse, noting that remote access may depend on security vulnerabilities. Police would be exploiting security holes instead of telling manufacturers how to patch those holes, La Quadrature says.”
Smartphones, laptops, vehicles, and any other connected devices are all fair game under the new law. But only if one has filed the proper paperwork, we are sure. Nevertheless, progress.
Cynthia Murrell, August 25, 2023
Software Marches On: Should Actors Be Worried?
August 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“How AI Is Bringing Film Stars Back from the Dead” is going to raise the hackles of some professionals in Hollywood. I wonder how many people alive today remember James Dean. Car enthusiasts may know about his driving skills, but not too much about his dramaturgical abilities. I must confess that I know zippo about Jimmy other than he was a driver prone to miscalculations.
An angry human actor — recycled and improved by smart software — snarls, “I didn’t go to acting school to be replaced by software. I have a craft, and it deserves respect.” MidJourney, I only had to describe what I wanted one time. Keep on improving or recursing or whatever it is you do.
The Beeb reports:
The digital cloning of Dean also represents a significant shift in what is possible. Not only will his AI avatar be able to play a flat-screen role in Back to Eden and a series of subsequent films, but also to engage with audiences in interactive platforms including augmented reality, virtual reality and gaming. The technology goes far beyond passive digital reconstruction or deepfake technology that overlays one person’s face over someone else’s body. It raises the prospect of actors – or anyone else for that matter – achieving a kind of immortality that would have been otherwise impossible, with careers that go on long after their lives have ended.
The write up does not reference the IBM study suggesting that 40 percent of workers will require reskilling. I am not sure what a reskilled actor will be able to do. I polled my team, and it came up with some Hollywood possibilities:
- Become an AI adept with a mastery of Python, Java, and C. Code software replacing studio executives with a product called DorkMBA
- Channel the anger into a co-ed game of baseball and discuss enthusiastically with the umpire corrective lenses
- Start an anger management podcast and, like a certain Stanford professor, admit the indiscretions of one’s childhood
- Use MidJourney and ChatGPT to write a manga for Amazon
- Become a street person.
I am not sure these ideas will be acceptable to those annoyed by the BBC write up. I want to point out that smart software can do some interesting things. My hunch is that software can crank out endless versions of classic hits with old-time stars more quickly and more economically than professionals of the humanoid variety.
I am not Bogarting you.
Stephen E Arnold, August 25, 2023
The Secret Cultural Erosion Of Public Libraries: Who Knew?
August 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
It appears the biggest problem public and school libraries are dealing with is demands to ban controversial gay and trans titles. While some libraries are facing closures or complete withdrawals of funding, they mostly appear to be in decent standing. Karawynn Long unfortunately discovered that is not the case. She spills the printer’s ink in her Substack post “The Coming [Cultural Erosion] Of Public Libraries” with the cleverly deplorable subtitle “global investment vampires have positioned themselves to suck our libraries dry.”
Before she details how a greedy corporation is bleeding libraries like a leech, Long explains how there is a looming cultural erosion brought on by capitalism. A capitalist economic system is not inherently evil, but bad actors exploit it. Long uses a more colorful word to describe libraries’ cultural erosion. In essence, the colorful word describes something good deteriorating into crap.
A great example is when corporations use a platform, e.g., Facebook, Twitter, and Amazon, to pit buyers and sellers against each other while those at the top run away with heaps of cash.
This ties back to public libraries because they use a digital library app called OverDrive. Library patrons use OverDrive to access copies of digital books, videos, audiobooks, magazines, and other media. It is the only app available to public libraries for managing digital media. Patrons could access OverDrive via an app called Libby or a Web site portal. In May 2023, the Web site portal deleted a feature that allowed patrons to recommend new titles to their libraries.
OverDrive wants to force users to adopt their Libby app. The Libby app has a “notify me” option that alerts users when their library acquires an item. OverDrive’s overlords also want to collect sellable user data, like other companies. Among other details, OverDrive is owned by the global investment firm KKR, Kohlberg Kravis Roberts.
KKR is one of the vilest investment capital companies, dubbed a “vampire capitalist” outfit, and it has a fanged hold on the US’s public libraries. OverDrive flaunts its B corporation status, but that does not mask the villain lurking behind the curtain:
“ As one library industry publication warned in advance of the sale to KKR, ‘This time, the acquisition of OverDrive is a ‘financial investment,’ in which the buyer, usually a private equity firm or other financial sponsor, expects to increase the value of the company over the short term, typically five to seven years.’ We are now three years into that five-to-seven, making it likely that KKR’s timeframe for completing maximum profit extraction is two to four more years. Typically this is accomplished by levying enormous annual “management fees” on the purchased company, while also forcing it (through Board of Director mandates) to make changes to its operations that will result in short-term profit gains regardless of long-term instability. When they believe the short-term gains are maxed out, the investment firm sells off the company again, leaving it with a giant pile of unsustainable debt from the leveraged buyout and often sending it into bankruptcy.”
OverDrive likely plans to sell user data, then bleed the public libraries dry until local and federal governments shout, “Uncle!” Amid book bans and rising inflation, public libraries will see a reckoning with their budgets before 2030.
Whitney Grace, August 25, 2023
Generative AI: Not So Much a Tool But Something Quite Different
August 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Thirty years ago I had an opportunity to do a somewhat peculiar job. I had written for a publisher in the UK a version of a report my team and I prepared about Japan’s investments in its Fifth Generation Computer Revolution or some such government effort. A wealthy person who owned a medium-sized financial firm asked me if I would comment on a book called The Meaning of the Microcosm. “Sure,” I said.
This tiny, cute technology creature has just crawled from the ocean, and it is looking for lunch. Who knew that it could morph into a much larger and more disruptive beast? Thanks, MidJourney. No review committee for me this morning.
What I described was technology’s Darwinian behavior. I am not sure I was breaking new ground, but it seemed safe for me to point to how a technology survived. Therefore, I argued in a private report to this wealthy fellow that betting on a winner would make one rich. I tossed in an idea that I have thought about for many years; specifically, as technologies battle to “survive,” the technologies evolve and mutate. The angle I have commented about for many years is simple: Predicting how a technology mutates is a tricky business. Mutations can be tough to spot or just pop up. Change just says, “Hello, I am here.”
I thought about this “book commentary project” when I read “How ChatGPT Turned Generative AI into an Anything Tool.” The article makes a number of interesting observations. Here’s one I noted:
But perhaps inadvertently, these same changes let the successors to GPT3, like GPT3.5 and GPT4, be used as powerful, general-purpose information-processing tools—tools that aren’t dependent on the knowledge the AI model was originally trained on or the applications the model was trained for. This requires using the AI models in a completely different way—programming instead of chatting, new data instead of training. But it’s opening the way for AI to become general purpose rather than specialized, more of an “anything tool.”
I am not sure that “anything tool” is a phrase with traction, but it captures the idea of a technology that began as a sea creature, morphing, and then crawling out of the ocean looking for something to eat. The current hungry technology is smart software. Many people see the potential of pairing repetitive processes with smart software in order to consolidate functions, reduce costs, or create alternatives to traditional methods of accomplishing a task. A good example is the use college students are making of the “writing” ability of free or low-cost services like ChatGPT or You.com.
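The quoted passage’s “programming instead of chatting” is easy to make concrete. Below is a minimal sketch, assuming the 2023-era openai Python client (the pre-1.0 interface); the model name, prompt, and ticket text are illustrative, not anything the cited article prescribes. The same chat endpoint that writes college essays becomes a one-shot information-processing function when fed a fixed instruction and new data:

```python
import openai

openai.api_key = "sk-..."  # placeholder; supply a real key

# New data the model was never trained on: a made-up support ticket.
ticket = "Customer says the invoice total is wrong and wants a refund by Friday."

# Programming, not chatting: a fixed instruction turns the chat model
# into an information-processing function.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep the output as repeatable as the service allows
    messages=[
        {"role": "system",
         "content": "Extract JSON with keys: issue, requested_action, deadline."},
        {"role": "user", "content": ticket},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Swap the instruction and the same model is a classifier, a translator, or a summarizer. Hence, “anything tool.”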
But more is coming. As I recall, in my discussion of the microcosm book, I endorsed Mr. Gilder’s point that small-scale systems and processes can have profound effects on larger systems and society as a whole. But a technology “innovation” like generative AI is simultaneously “small” and “large.” Perspective and point of view are important in software. Plus, the innovations of the transformer and the larger applications of generative AI to college essays illustrate the scaling impact.
What makes AI interesting for me at this time is that genetic / Darwinian change is occurring across the scale spectrum. On one hand, developers are working to create big applications; for instance, SaaS solutions that serve millions of users. On the other hand, the shift from large language models to smaller, more efficient methods of getting smart aims to reduce costs and speed the functioning of the plumbing.
The cited essay in Ars Technica is on the right track. However, the examples chosen are, it seems to me, ignoring the surprises the iterations of the technology will deliver. Is this good or bad? I have no opinion. What is important is that wild and crazy ideas about control and regulation strike me as bureaucratic time wasting. Millions of years ago, the options were to get out of the way of the hungry creature from the ocean of ones and zeros or to figure out how to catch the creature and have dinner, turn its body parts into jewelry which could be sold online, or process the beastie into a heat-and-serve meal at Trader Joe’s.
My point is that the generative innovations do not comprise a “tool.” We’re looking at something different, semi-intelligent, and evolving with speed. Will it be “let’s have lunch” or “one is lunch”?
Stephen E Arnold, August 24, 2023