Killing Horses? Okay. Killing Digital Information? The Best Idea Ever!
August 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Fans at the 2023 Kentucky Derby were able to watch horses being killed. True, the sport of kings parks vehicles and has people stand around so the termination does not spoil a good day at the races. It seems logical to me that killing information is okay too. Personally, I want horses to thrive, not be brutalized for those sipping mint juleps, and in my opinion, information deserves preservation. Without some type of intentional or unintentional information preservation, what would those YouTuber videos about ancient technology have to display and describe?
In “In the Age of Culling,” an article in the online publication tedium.co, I noted a number of ideas which resonated with me. The first is one of the subheads in the write up; to wit:
CNet pruning its content is a harbinger of something bigger.
The basic idea in the essay is that killing content is okay, just like killing horses.
The article states:
I am going to tell you right now that CNET is not the first website that has removed or pruned its archives, or decided to underplay them, or make them hard to access. Far from it.
The idea is that eliminating content creates an information loss. If one cannot find some item of content, that item of content does not exist for many people.
I urge you to read the entire article.
I want to shift the focus from the tedium.co essay slightly.
With digital information being “disappeared,” the culling cuts away research, some types of evidence, and collective memory. But what happens when a handful of large US companies effectively shape the information used to train smart software? Checking facts becomes more difficult because people “believe” a machine more than a human in many situations.
Two girls looking at a museum exhibit in 2028. The taller girl says, “I think this is what people used to call a library.” The shorter girl asks, “Who needs this stuff? I get what I need to know online. Besides, this looks like a funeral to me.” The taller girl replies, “Yes, let’s go look at the plastic dinosaurs. When you put on the headset, the animals are real.” Thanks, MidJourney, for not including the word “library” or depicting the image I requested. You are so darned intelligent!
Consider the power that information filtering and weaponizing conveys over those relying on digital information. The statement “harbinger of something bigger” is correct. But if one looks forward, the potential for selective information may be the flip side of forgetting.
Trying to figure out “truth” or “accuracy” is getting more difficult each day. How does one talk about a subject when those in conversation have learned about Julius Caesar from a TikTok video and perceive a problem with tools created to sell online advertising?
This dinobaby understands that cars are speeding down the information highway, and their riders live in a reality defined by what is online. I am reluctant to name the changes which suggest this somewhat negative view of learning. One believes what one experiences. If those experiences are designed to generate clicks, reduce operating costs, and shape behavior, what does the information landscape look like?
No digital archives? No past. No awareness of information weaponization? No future. Were those horses really killed? Were those archives deleted? Were those Shakespeare plays removed from the curriculum? Were the tweets deleted?
Let’s ask smart software. No thanks, I will do dinobaby stuff despite the efforts to redefine the past and weaponize the future.
Stephen E Arnold, August 14, 2023
MBAs Want to Win By Delivering Value. It Is Like an Abstraction, Right?
August 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Is it completely necessary to bring technology into every aspect of one’s business? Maybe, maybe not. But apparently some believe such company-wide “digital transformation” is essential for every organization these days. And, of course, there are consulting firms eager to help. One such outfit, Third Stage Consulting Group, has posted some advice in, “How to Measure Digital Transformation Results and Value Creation.” Value for whom? Third Stage, perhaps? Certainly, if one takes writer Eric Kimberling up on his invitation to contact him for a customized strategy session.
Kimberling asserts that, when embarking on a digital transformation, many companies fail to consider how they will keep the project on time, on budget, and in scope while minimizing operational disruption. Even he admits some jump onto the digital-transformation bandwagon without defining what they hope to gain:
“The most significant and crucial measure of success often goes overlooked by many organizations: the long-term business value derived from their digital transformation. Instead of focusing solely on basic reasons and justifications for undergoing the transformation, organizations should delve deeper into understanding and optimizing the long-term business value it can bring. For example, in the current phase of digital transformation, ERP [Enterprise Resource Planning] software vendors are pushing migrations to new Cloud Solutions. While this may be a viable long-term strategy, it should not be the sole justification for the transformation. Organizations need to define and quantify the expected business value and create a benefits realization plan to achieve it. … Considering the significant investments of time, money, and effort involved, organizations should strive to emerge from the transformation with substantial improvements and benefits.”
So companies should consider carefully what, if anything, they stand to gain by going through this process. Maybe some will find the answer is “nothing” or “not much,” saving themselves a lot of hassle and expense. But if one decides it is worth the trouble, rest assured many consultants are eager to guide you through. For a modest fee, of course.
Cynthia Murrell, August 11, 2023
Generative AI: Good or Bad, the Content Floweth Forth
August 11, 2023
Hollywood writers are upset that major studios want to replace them with AI algorithms. While writing bots have not replaced human writers yet, AI algorithms such as ChatGPT, Ryter, Writing.io, and more are everywhere. Threat Source Newsletter explains that “Every Company Has Its Own Version of ChatGPT Now.”
A flood of content. Thinking drowned. Thanks, MidJourney. I wanted words but got letters. Great job.
AI writing algorithms are also known as AI assistants. They are programmed to answer questions and perform text-based tasks. The text-based tasks include writing résumés, outlines, press releases, Web site content, and more. While the AI assistants still cannot pass the Turing test, that is not stopping big tech companies from developing their own bots. Meta released Llama 2, and IBM rebranded its powerful computer system from Watson to watsonx (it went from a big W to a lowercase w and got an “x” too).
While Llama 2, the “new” Watson, and ChatGPT are helpful automation tools, they are also dangerous in the hands of bad actors. Bad actors use these tools to draft spam campaigns, phishing emails, and scripts. Author Jonathan Munshaw tested the AI assistants to see how they responded to prompts tied to illegal activities.
Llama 2 refused to assist in generating an email for malware, while ChatGPT “gladly” helped draft an email. When Munshaw asked both to write a script to ask a grandparent for a gift card, each interpreted the task differently. Llama 2 advised Munshaw to be polite and aware of the elderly relative’s financial situation. ChatGPT wrote a TV script.
Munshaw wrote that:
“I commend Meta for seeming to have tighter restrictions on the types of asks users can make to its AI model. But, as always, these tools are far from perfect and I’m sure there are scripts that I just couldn’t think of that would make an AI-generated email or script more convincing.”
It will be a while before writers are replaced by AI assistants. They are wonderful tools to improve writing, but humans are still needed, for now.
Whitney Grace, August 10, 2023
The Zuckbook Becomes Cooperative?
August 10, 2023
The Internet empowers people to voice their opinions without fear of repercussions, or so they think. While the Internet generally remains anonymous, social media companies must bow to the letter of the law or face fines or other reprisals. Ars Technica shares how a European court forced Meta to share user information in a civil case: “Facebook To Unmask Anonymous Dutch User Accused Of Repeated Defamatory Posts.”
The Netherlands’ Court of The Hague determined that Meta Ireland must share the identity of a user who defamed the claimant, a male Facebook user. The anonymous user “defamed” the claimant by stating he secretly recorded women he dated. The anonymous user posted the negative statements in private Facebook groups about dating experiences. The claimant could not access the groups, but he did see screenshots. He claimed the posts had harmed his reputation.
After cooperating, executives at a big time technology firm celebrate with joy and enthusiasm. Thanks, MidJourney. You have happiness down pat.
The claimant asked Meta to remove the posts, but the company refused on the grounds of freedom of expression. Meta encouraged the claimant to contact the other user; instead, the claimant decided to sue.
Initially, the claimant asked the court to order Meta to delete the posts, identify the anonymous user, and flag any posts in other private Facebook groups that could defame the claimant.
The article reports:

“While arguing the case, Meta had defended the anonymous user’s right to freedom of expression, but the court decided that the claimant—whose name is redacted in court documents—deserved an opportunity to challenge the allegedly defamatory statements. Partly for that reason, the court ordered Meta to provide ‘basic subscriber information’ on the anonymous user, including their username, as well as any names, email addresses, or phone numbers associated with their Facebook account. The court did not order Meta to remove the posts or flag any others that may have been shared in private groups, though.”
The court decided that freedom of speech is not unlimited and that the posts could be defamatory. The court also noted that posts did not have to be deemed unlawful to de-anonymize a user.
This has the potential to be a landmark case in online user privacy and accountability on social media platforms. In the future, users might need to practice more restraint and think about consequences before posting online. They might want to read etiquette books from the pre-Internet days when constructive behavior was not an anomaly.
Whitney Grace, August 10, 2023
Technology and AI: A Good Enough and Opaque Future for Humans
August 9, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“What Self Driving Cars Tell Us about AI Risks” provides an interesting view of smart software. I sensed two biases in the write up which I want to mention before commenting on the guts of the essay. The first bias is what I call “engineering blindspots.” The idea is that while flaws exist, technology gets better as wizards try and try again. The problem is that “good enough” may not lead to “better now” in a time measured by available funding. Therefore, the optimism engineers have for technology makes them blind to minor issues created by flawed “decisions” or “outputs.”
A technology wizard who took classes in ethics (got a gentleperson’s “C”), advanced statistics (got close enough to an “A” to remain a math major), and applied machine learning experiences a moment of minor consternation at a smart water treatment plant serving portions of New York City. The engineer looks at his monitor and says, “How did that concentration of 500 mg/L of chlorine get into the Newtown Creek Waste Water Treatment Plant?” MidJourney has a knack for capturing the nuances of the emotions of an engineer who ends up as a water treatment engineer, not an AI expert in Silicon Valley.
The second bias is that engineers understand inherent limitations. Non-engineers “lack technical comprehension,” and smart software at this time does not understand “the situation, the context, or any unobserved factors that a person would consider in a similar situation.” The idea is that techno-wizards have a superior grasp of a problem. The gap between an engineer and a user is a big one, and since comprehension gaps are not an engineering problem, that’s the techno-way.
You may disagree. That’s what makes for allegedly honest horse races in which stallions don’t fall over dead or have to be terminated in order to spare the creature discomfort and the owners big fees.
Now what about the innards of the write up?
- Humans make errors. This begs the question, “Are engineers human in the sense that downstream consequences are important, require moral choices, and call to mind the humorous medical doctor adage ‘Do no harm’?”
- AI failure is tough to predict? But we have predictive analytics, Monte Carlo simulations, and Fancy Dan statistical procedures, plus a humanoid setting a threshold because someone has to do it. (A minimal sketch of that idea appears after this list.)
- Right now mathy stuff cannot replicate “judgment under uncertainty.” Ah, yes, uncertainty. I would suggest considering fear and doubt too. A marketing trifecta.
- Pay off that technical debt. Really? You have to be kidding. How much of the IBM mainframe’s architecture has changed in the last week, month, year, or — do I dare raise this issue — decade? How much of Google’s PageRank has been refactored to keep pace with the need to discharge advertiser-paid messages as quickly as possible regardless of the user’s query? I know. Technical debt. Not an issue.
- AI raises “system level implications.” Did that Israeli smart weapon make the right decision? Did the smart robot sever a spinal nerve? Did the smart auto mistake a traffic cone for a child? Of course not. Traffic cones are not an issue for smart cars unless one puts some on the road and maybe one on the hood of a smart vehicle.
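As a side note on that Monte Carlo point, here is a minimal, hypothetical sketch of what such a “Fancy Dan” procedure looks like when a humanoid sets the threshold. Every number in it (the failure rate, the trip counts, the acceptance threshold) is an invented assumption for illustration, not a figure from the cited essay or from any real vehicle program.

```python
import random

# All values below are invented assumptions for illustration only.
ASSUMED_FAILURE_RATE = 0.001     # guessed probability that one trip ends in a failure
TRIPS_PER_SIMULATION = 10_000    # trips in one simulated "year" of operation
SIMULATIONS = 1_000              # Monte Carlo repetitions
HUMAN_SET_THRESHOLD = 12         # a humanoid picked this number because someone has to


def simulate_failures(rate: float, trips: int) -> int:
    """Count how many simulated trips end in a failure."""
    return sum(1 for _ in range(trips) if random.random() < rate)


def main() -> None:
    results = [simulate_failures(ASSUMED_FAILURE_RATE, TRIPS_PER_SIMULATION)
               for _ in range(SIMULATIONS)]
    average = sum(results) / len(results)
    worst = max(results)
    print(f"Average failures per simulated year: {average:.1f}")
    print(f"Worst simulated year: {worst} failures")
    # The "judgment" step: a person decides whether the worst case is acceptable.
    verdict = "acceptable" if worst <= HUMAN_SET_THRESHOLD else "not acceptable"
    print(f"Against the human-set threshold of {HUMAN_SET_THRESHOLD}: {verdict}")


if __name__ == "__main__":
    main()
```

The point of the toy example: the statistics are mechanical; the “acceptable” verdict is a human judgment call.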
Net net: Are you ready for smart software? I know I am. At the AutoZone on Friday, two individuals were unable to replace the paper required to provide a customer with a receipt. I know. I watched for 17 minutes until one of the young professionals gave me a scrawled, handwritten note with the credit card transaction number. Good enough. Let ‘er rip.
Stephen E Arnold, August 9, 2023
Someone Is Thinking Negatively and Avoiding Responsibility
August 9, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I have no idea how old the “former journalist” is who wrote “I Feel Like an Old Shoe: Workers Feel Degraded and Cast Aside Because of Ageism.” Let’s consider a couple of snippets. Then I will offer several observations which demonstrate my lack of sympathy for individuals who want to blame their mental state on others. Spoiler: Others don’t care about anyone but themselves in my experience.
A high school student says to her teacher, “You are the reason I failed this math test. If you were a better teacher, I would have understood the procedure. But, no. You were busy focusing on the 10-year-old genius who transferred into our class from Wuhan.” Baffled, the teacher says, “It is your responsibility to learn. There is plenty of help available from me, your classmates, or your tutor, Mr. Rao. You have to take responsibility and stop blaming others for what you did.” Thanks, MidJourney. Were you, by chance, one of those students who blame others for their faults?
Here’s a statement I noted:
“Employers told me individuals over 45 and particularly those over the age of 55 must be ‘exceptional’ in order to be hired. The most powerful finding for me however had to do with participants [of a survey] explaining that once they were labeled ‘old,’ they felt degraded and cast aside. One person told me, ‘I feel like an old shoe that’s of no use any more.’”
Okay, blame the senior managers, some of whom will be older, maybe old-age-home grade like Warren Buffett or everyone’s favorite hero of the Indiana Jones films (Harrison Ford), or possibly Mr. Biden. Do these people feel old and like an old shoe? I suppose, but they put on a good show. Are these people exceptional? Sure, why not label them as such. My point is that they persevere.
Now this passage from the write up:
“Over all, there are currently about the same number of younger and older workers. Nevertheless, the share of older workers has increased for almost all occupations.”
These data originate from Statistics Canada. For my purposes, let’s assume that Canada data are similar to US data. If an older worker feels like an “old shoe,” perhaps a personal version of the two-slit experiment is in operation. The observer alters the reality. What this means is that when the worker looks at himself or herself, the reality is fiddled. Toss in some emotional baggage, like a bad experience in kindergarten, and one can make a case for “they did this to me.”
My personal view is that some radical empiricism may be helpful to those who are old and want to blame others for their perceived status, their prospects, or their personal situation.
I am not concerned about my age. I am going to be 79 in a few weeks. I am proud to be a dinobaby, a term coined, I have heard, by someone at IBM to refer to the deadwood. The idea was that “old” meant high salary and often an informed view of a business or technical process. Younger folks wanted to outsource, and salary, age, and being annoying in meetings were convenient excuses for cost reduction.
I am working on a project for an AI outfit. I have a new book (which is for law enforcement professionals, not the humilus genus). I have a keynote speech to deliver in October 2023. In short, I keep doing what I have been doing since I left a PhD program to work for that culturally sensitive outfit which helped provide technical services to those who would make bombs and other oddments.
If a person at one of my lectures comes up to me and says, “I disagree,” I listen. I don’t whine, make excuses, or dodge the comment. I deal with it to the best of my ability. I am not going to blame anything or anyone for my age or my work product. People who grouse make clear to me that they lack the mental wiring to solve problems immediately and directly and to be spontaneously helpful.
Sorry. The write up is not focusing on the fix which is inside the consciousness of the individuals who want to blame others for their plight in life.
Stephen E Arnold, August 7, 2023
Research 2023: Is There a Methodology of Control via Mendacious Analysis?
August 8, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “How Facebook Does (and Doesn’t) Shape Our Political Views.” I am not sure what to conclude from the analysis of four studies about Facebook (is it Zuckbook?) presumably completed with some cooperation from the social media giant itself. The message I carried away from the write up is that tainted research may be the principal result of these supervised studies.
The up and coming leader says to a research assistant, “I am telling you. Manipulate or make up the data. I want the results I wrote about supported by your analysis. If you don’t do what I say, there will be consequences.” MidJourney does know how to make a totally fake leader appear intense.
Consider the state of research in 2023. I have mentioned the problem with Stanford University’s president and his making up data. I want to link the Stanford president’s approach to research with Facebook (Meta). The university has had some effect on companies in the Silicon Valley region, and Facebook (Meta) employs a number of Stanford graduates. For me, then, it is logical to assume that the university’s approach to objective research and the behavior of a company with some Stanford DNA share certain characteristics.
“How Facebook Does (and Doesn’t) Shape Our Political Views” offers this observation based on “research”:
“… the findings are consistent with the idea that Facebook represents only one facet of the broader media ecosystem…”
The summary of the Facebook-chaperoned research cites an expert who correctly, in my view, identifies two challenges presented by the “research”:
- Researchers don’t know what questions to ask. I think this is part of the “don’t know what they don’t know.” I accept this idea because I have witnessed it. (Example: A reporter asking me about sources of third party data used to spy on Americans. I ignored the request for information and disconnected from the reporter’s call.)
- The research was done on Facebook’s “terms.” Yes, powerful people need control; otherwise, the risk of losing power arises. In this case, Facebook (Meta) wants to deflect criticism and tries to make clear that the company’s hand was not on the control panel.
Are there parallels between what the fabricating president of Stanford did with data and what Facebook (Meta) has done with its research initiative? Shaping the truth is common to both examples.
In Stanford’s Ideal Destiny, William James said this about Stanford:
It is the quality of its men that makes the quality of a university.
What links the actions of Stanford’s soon-to-be-former president and Facebook (Meta)? My answer would be, “Creating a false version of objective data is the name of the game.” Professor James, I surmise, would not be impressed.
Stephen E Arnold, August 8, 2023
Another High School Tactic: I Am Hurt, Coach
August 7, 2023
This is a rainy Monday (August 7, 2023). From my point of view, the content flowing across my monitoring terminal is not too exciting. More security issues, 50-50 financial rumor mongering, and adult Internet users may be monitored (the world is coming to an end!). But in the midst of this semi-news was an item called “Musk Says He May Need Surgery, Will Get MRI on Back and Neck.” Wow. The aging icon of self-driving autos which can run over dinobabies like me has dipped into his management Book of Knowledge for a tactic to avoid a “cage match” with the lovable Zuck, master of Threads and beloved US high-technology social media king thing.
“What do you mean, your neck hurts? I need you for the big game on Saturday. Win and you will be more famous than any other wizard with smart cars, rockets, and a social media service.” says the assistant coach. Thanks MidJourney, you are a sport.
You can get the information from the cited story, which points out:
The world’s richest person said he will know this week whether surgery will be required, ahead of his proposed cage fight with Meta Platforms Inc. co-founder Mark Zuckerberg. He previously said he “might need an operation to strengthen the titanium plate holding my C5/C6 vertebrae together.”
Mr. Zuckerberg allegedly is revved and ready. The write up reports:
Zuckerberg posted Sunday on Threads that he suggested Aug. 26 for the match and he’s still awaiting confirmation. “I’m ready today,” he said. “Not holding my breath.”
From my point of view, the tactic is similar to “the dog ate my homework.” This variant — I couldn’t do my homework because I was sick — comes directly from the Guide to the High School Science Club Management Method, known internationally as GHSSCMM. The information in this well-known business manual has informed outstanding decision making in personnel methods (Dr. Timnit Gebru, late of Google), executives giving themselves more money before layoffs (too many companies to identify in a blog post like this), and appearing in US Congressional hearings (Thank you for the question. I don’t know. I will have the information delivered to your office).
Health problems can be problematic. Will the cage match take place? What if Mr. Musk says, “I can fight.” Will Mr. Zuckerberg respond, “I sprained my ankle”? What does the GHSSCMM suggest in a tit-for-tat dynamic?
Perhaps we should ask both Mr. Musk’s generative AI system and the tame Zuckerberg Llama? That’s “real” news.
Stephen E Arnold, August 7, 2023
MBAs, Lawyers, and Sociology Majors Lose Another Employment Avenue
August 4, 2023
Note: Dinobaby here: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. Services are now ejecting my cute little dinosaur gif. (´・_・`) Like my posts related to the Dark Web, the MidJourney art appears to offend someone’s sensibilities in the datasphere. If I were not 78, I might look into these interesting actions. But I am, and I don’t really care.
Some days I find MBAs, lawyers, and sociology majors delightful. On others I fear for their future. One promising avenue of employment has now been cut off. What’s the job? Avocado peeler in an ethnic restaurant. Some hearty souls channeling Euell Gibbons may eat avocados as nature delivers them. Others prefer a toast delivery vehicle or maybe a dip to accompany a meal in an ethnic restaurant or while making a personal vlog about the stresses of modern life.
“Chipotle’s Autocado Robot Can Prep Avocados Twice as Fast as Humans” reports:
The robot is capable of peeling, seeding, and halving a case of avocados significantly faster than humans, and the company estimates it could cut its typical 50-minute guacamole prep time in half…
When an efficiency expert from a McKinsey-type firm or a second-tier thinker from a mid-tier consulting firm reads this article, there is one obvious line of thought the wizard will follow: Replace some of the human avocado peelers with a robot. Projecting into the future while under the influence of spreadsheet fever, the wizard will assume an upgrade to the robot’s software will enable it to perform other jobs in the restaurant or food preparation center; for example, taco filler or dip crafter.
Based on this actual factual write up, I have concluded that some MBAs, lawyers, and sociology majors will have to seek another pathway to their future. Yard sale organizer, pet sitter, and possibly the life of a hermit remain viable options. Oh, the hermit will have GoFundMe and BuyMeaCoffee pages. Perhaps a T shirt or a hat?
Stephen E Arnold, August 4, 2023
Google Relies on People Not Perceiving a Walled Garden
August 3, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I want to keep this brief. I have been, largely without effect, explaining Google’s foundational idea of a walled garden that people will perceive as the digital world. Once in the garden, why leave?
A representation of the Hotel California’s walled garden. Why bother to leave? The walled garden is the entire digital world. I wish MidJourney had included the neon sign spelling out “Hotel California” or “Googleplex.” But no, no, no. Smart software has guard rails.
Today (August 3, 2023), I want to point to two different write ups which explain bafflement at what Google is doing.
The first is a brief item from Mastodon labeled (I think) “A List of Recent Hostile Moves by Google’s Chrome Team.” The write up points out Google’s attempt to become the gatekeeper for content. Another hostile move is content blocking. Yet another is action to undermine an image format. Hacker News presents several hundred comments which say to me, “Why is Google doing this? Google is supposed to be a good company.” Imagine. Read these comments at this link. Amazing, at least to me!
The second item is from a lawyer. The article is “Google’s Plan To DRM The Web Goes Against Everything Google Once Stood For.” Please, read the write up yourself. What’s remarkable is the point of view expressed in this phrase “everything Google once stood for.” Lawyers are a fascinating branch of professional advice givers. I am fearful of this type of thinking; therefore, I try to stay as far away from attorneys as I can.
Let’s step back.
In one of my monographs about Google (research funded by commercial enterprises), which contains some of the information my clients did not deem sensitive, I depicted Sergey Brin as a magician with fire in his hand. From the beginning of Google’s monetization via advertising, misdirection and keeping the audience amazed were precepts of the company. Today’s Google is essentially running what is now a 25-year-old game plan. Why not? Advertising “inspired” by Yahoo-GoTo-Overture’s pay-for-traffic model generates almost 70 percent of the company’s revenue. With advertising under pressure, the Google has to amp up its controls. Those relevant ads displayed to me for feminine-centric products reflect the efficacy of the precision ad matching, right?
Think about the walled garden metaphor. Think about the magician analogy. Now think about what Google will do to extend and influence the world around it. Exciting for young people who view the world through eyes which have only seen what the walled garden offers. If TikTok goes away, Google has its version with influencers and product placements. What’s not to like about Google News, Gmail, or Android? Most people find nothing untoward, and altering those perceptions may be difficult.
The long view is helpful when one extends control one free service at a time. And the relevance of Google’s search results? The results are relevant because the objective is ad revenue and messaging on Googley things. An insect in the walled garden does not know much about other gardens and definitely nothing about an alternative.
Stephen E Arnold, August 3, 2023