Microsoft: Just a Minor Thing
June 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Several years ago, I was asked to serve as a technical advisor to a UK group focused on improper actions directed toward children. Since then, I have paid some attention to the information about young people that some online services collect. One of the more troubling facets of efforts to compromise the privacy, security, and possibly the safety of minors is the role data aggregators play. Whether gathering information from “harmless” apps favored by young people or surreptitiously collecting and cross-correlating young users’ online travels, these actions of people and their systems trouble me.
The “anything goes” approach of some organizations is often masked by public statements and the use of words like “trust” when explaining how information “hoovering” operations are set up, implemented, and used to generate revenue or other outcomes. I am not comfortable identifying some of these, however.
A regulator and a big company representative talking about a satisfactory resolution to the regrettable collection of kiddie data. Both appear to be satisfied with another job well done. The image was generated by the MidJourney smart software.
Instead, let me direct your attention to the BBC report “Microsoft to Pay $20m for Child Privacy Violations.” The write up states as “real news”:
Microsoft will pay $20m (£16m) to US federal regulators after it was found to have illegally collected
data on children who had started Xbox accounts.
The write up states:
From 2015 to 2020 Microsoft retained data “sometimes for years” from the account set up, even when a parent failed to complete the process …The company also failed to inform parents about all the data it was collecting, including the user’s profile picture and that data was being distributed to third parties.
Will the leader in smart software and clever marketing have an explanation? Of course. That’s what advisory firms and lawyers help their clients deliver; for example:
“Regrettably, we did not meet customer expectations and are committed to complying with the order to continue improving upon our safety measures,” Microsoft’s Dave McCarthy, CVP of Xbox Player Services, wrote in an Xbox blog post. “We believe that we can and should do more, and we’ll remain steadfast in our commitment to safety, privacy, and security for our community.”
Sounds good.
From my point of view, something is out of alignment. Perhaps it is my old-fashioned idea that young people’s online activities require a more thoughtful approach by large companies, data aggregators, and click capturing systems. The thought, it seems, is directed at finding ways to take advantage of weak regulation, inattentive parents and guardians, and often-uninformed young people.
As with other ethical black holes in certain organizations, surfing for fun or money on children’s data seems inappropriate. Does $20 million have an impact on a giant company? Nope. The ethical and moral foundation of decision making is what enables these data collection activities. And $20 million causes little or no pain. Therefore, why not continue these practices and do a better job of keeping the procedures secret?
Pragmatism is the name of the game it seems. And kiddie data? Fair game to some adrift in an ethical swamp. Just a minor thing.
Stephen E Arnold, June 6, 2023
Software Cannot Process Numbers Derived from Getty Pix, Honks Getty Legal Eagle
June 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Getty Asks London Court to Stop UK Sales of Stability AI System.” The write up comes from a service which, like Google, bandies about the word trust with considerable confidence. The main idea is that software is processing images available in the form of Web content, converting these to numbers, and using the zeros and ones to create pictures.
The write up states:
The Seattle-based company [Getty] accuses the company of breaching its copyright by using its images to “train” its Stable Diffusion system, according to the filing dated May 12, [2023].
I found this statement in the trusted write up fascinating:
Getty is seeking as-yet unspecified damages. It is also asking the High Court to order Stability AI to hand over or destroy all versions of Stable Diffusion that may infringe Getty’s intellectual property rights.
When I read this, I wonder if the scribes, upon learning about the threat Gutenberg’s printing press represented, were experiencing their “Getty moment.” The advanced technology of the adapted olive press and hand-carved wooden letters meant that the quill pen champions had to adapt or find their future emptying garderobes (aka chamber pots).
Scribes prepare to throw a Gutenberg printing press and the evil innovator Gutenberg into the Rhine River. Image was produced by the evil incarnate code of MidJourney. Getty is not impressed (like letters on paper) with the outputs of Beelzebub-inspired innovations.
How did that rebellion against technology work out? Yeah. Disruption.
What happens if the legal system in the UK and possibly the US jump on the no innovation train? Japan’s decision points to one option: Using what’s on the Web is just fine. And China? Yep, those folks in the Middle Kingdom will definitely conform to the UK and maybe US rules and regulations. What about outposts of innovation in Armenia? Johnnies on the spot (not pot, please). But what about those computer science students at Cambridge University? Jail and fines are too good for them. To the gibbet.
Stephen E Arnold, June 6, 2023
India and Its Management Secrets: Under Utilized Staff Motivation Technique
June 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I am not sure if the information in the article is accurate, but it sure is entertaining. If true, I think I have identified one of those management secrets which makes wizards from India such outstanding managers. Navigate to “Company Blocks Employees from Leaving Office: Now, a Clarification.” The write up states:
Coding Ninjas, a Gurugram-based edtech institute, has issued clarification on a recent incident that saw its employees being ‘locked’ inside the office so that they cannot exit ‘without permission.’
And what was the clarification? Let me guess. Heck, no. Just a misunderstanding. The write up explains:
… the company [Coding Ninjas, remember?], while acknowledging the incident, attributed it to a ‘regrettable’ action by an employee. The ‘action,’ the company noted, was ‘immediately rectified within minutes,’ and the individual in question acknowledged his ‘mistake’ and apologized for it. Further saying that the founders had expressed their ‘regret’ and apologized to the staff, Coding Ninjas described this as an ‘isolated’ incident.
For another take on this interesting management approach to ensuring productivity, check out “Coding Ninjas’ Senior Executive Gets Gate Locked to Stop Employees from Leaving Office; Company Says Action ‘Regrettable’.”
What if you were to look for a link to this story on Reddit? I located a page which showed a door being locked. Exciting footage was available at this link on June 6, 2023. (If the information has been deleted, you have learned something about Reddit.com in my opinion.)
My interpretation of this enjoyable incident (if indeed true) is:
- Something to keep in mind when accepting a job in Mumbai or similar technology hot spot
- Why graduates of the Indian Institutes of Technology are in such demand; those folks are indeed practical and focused on maximizing employee productivity as measured in minutes in a facility
- A solution to employees who want to work from home. When an employee wants a paycheck, make them come to the office and lock them in. Works well, and the effectiveness is evident in prisons and re-education facilities in a number of countries.
And regrettable? Yes, in terms of PR. No, in terms of getting snagged in what may be fake news. Is this a precept of the high school science club management method? Yep. Yep.
Stephen E Arnold, June 6, 2023
The Google AI Way: EEAT or Video Injection?
June 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Over the weekend, I spotted a couple of signals from the Google marketing factory. The first is the cheerleading by that great champion of objective search results, Danny Sullivan who wrote with Chris Nelson “Rewarding High Quality Content, However, It Is Produced.” The authors pointed out that their essay is on behalf of the Google Search Quality team. This “team” speaks loudly to me when we run test queries on Google.com. Once in a while — not often, mind you — a relevant result will appear in the first page or two of results.
The subject of this essay by Messrs. Sullivan and Nelson is EEAT. My research team and I think that the fascinating acronym is pronounced like the word “eat” in the sense of ingesting gummy cannabinoids. (One hopes these are not the prohibited compounds such as Delta-9 THC.) The idea is to pop something in your mouth and chew. As the compound (fact and fiction, GPT-generated content and factoids) dissolves and makes its way into one’s system, the psychoactive reaction is greater perceived dependence on the Google products. You may not agree, but that’s how I interpret the essay.
So what’s EEAT? I am not sure my team and I are getting with the Google script. The correct and Googley answer is:
Expertise, experience, authoritativeness, and trustworthiness.
The write up says:
Focusing on rewarding quality content has been core to Google since we began. It continues today, including through our ranking systems designed to surface reliable information and our helpful content system. The helpful content system was introduced last year to better ensure those searching get content created primarily for people, rather than for search ranking purposes.
I wonder if this text has been incorporated in the Sundar and Prabhakar Comedy Show? I would suggest that it replace the words about meeting users’ needs.
The meat of the synthetic turkey burger strikes me as:
it’s important to recognize that not all use of automation, including AI generation, is spam. Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.
Synthetic or manufactured information, content objects, data, and other outputs are okay with us. We’re Google, of course, and we are equipped with expertise, experience, authoritativeness, and trustworthiness to decide what is quality and what is not.
I can almost visualize a T shirt with the phrase “EEAT It” silkscreened on the back with a cheerful Google logo on the front. Catchy. EEAT It. I want one. Perhaps a pop tune can be sampled and used to generate a synthetic song similar to Michael Jackson’s “Beat It”? Google AI would dodge the Weird Al Yankovic version of the 1983 hit. Google’s version might include the refrain:
Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)
If chowing down on this Google information is not to your liking, one can get with the Google program via a direct video injection. Google has been publicizing its free video training program from India to LinkedIn (a Microsoft property to give the social media service its due). Navigate to “Master Generative AI for Free from Google’s Courses.” The free, free courses are obviously advertisements for the Google way of smart software. Remember the key sequence: Expertise, experience, authoritativeness, and trustworthiness.
The courses are:
- Introduction to Generative AI
- Introduction to Large Language Models
- Attention Mechanism
- Transformer Models and BERT Model
- Introduction to Image Generation
- Create Image Captioning Models
- Encoder-Decoder Architecture
- Introduction to Responsible AI (remember the phrase “Expertise, experience, authoritativeness, and trustworthiness.”)
- Introduction to Generative AI Studio
- Generative AI Explorer (Vertex AI).
Why is Google offering free infomercials about its approach to AI?
The cited article answers the question this way:
By 2030, experts anticipate the generative AI market to reach an impressive $109.3 billion, signifying a promising outlook that is captivating investors across the board. [Emphasis added.]
How will Microsoft respond to the EEAT It positioning?
Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)
Stephen E Arnold, June 5, 2023
IBM Dino Baby Unhappy about Being Outed as Dinobaby in the Baby Wizards Sandbox
June 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I learned the term “dinobaby” reading blog posts about IBM workers who alleged Big Blue wanted younger workers. After thinking about the term, I embraced it. This blog post features an animated GIF of me dancing in my home office. I try to avoid the following: [a] Millennials, GenX, GenZ, and GenY super wizards; [b] former IBM workers who grouse about growing old and not liking a world without CICS; and [c] individuals with advanced degrees who want to talk with me about “smart software.” I have to admit that I have not been particularly successful in this effort in 2023: Conferences, Zooms, face-to-face meetings, lunches, yada yada. Either I am the most magnetic dinobaby in Harrod’s Creek, or these jejune world changers are clueless. (Maybe I should live in a cave on a mountain and accept acolytes?)
I read “Laid-Off 60-Year-Old Kyndryl Exec Says He Was Told IT Giant Wanted New Blood.” The write up includes a number of interesting statements. Here’s one:
IBM has been sued numerous times for age discrimination since 2018, when it was reported that company leadership carried out a plan to de-age its workforce – charges IBM has consistently denied, despite US Equal Employment Opportunity Commission (EEOC) findings to the contrary and confidential settlements.
Would IBM deny allegations of age discrimination? There are so many ways to terminate employees today. Why use the “you are old, so you are RIF’ed” ploy? In my opinion, it is an example of the lack of management finesse evident in many once high-flying companies today. I term the methods apparently in use at outfits like Twitter, Google, Facebook, and others as “high school science club management methods” or H2S2M2. The acronym has not caught on, but I assume that someone with a subscription to ChatGPT will use AI to write a book on the subject soon.
The write up also includes this statement:
Liss-Riordan [an attorney representing the dinobaby] said she has also been told that an algorithm was used to identify those who would lose their jobs, but had no further details to provide with regard to that allegation.
Several observations are warranted:
- Discrimination is nothing new. Oldsters will be nuked. No question about it. Why? Old people like me (I am 78) make younger folks nervous because we belong in warehouses for the soon dead, not giving lectures to the leaders of today and tomorrow.
- Younger folks do not know what they do not know. Consequently, opportunities exist to [a] make fun of young wizards as I do in this blog Monday through Friday since 2008 and [b] charge these “masters of the universe” money to talk about that which is part of their great unknowing. Billing is rejuvenating.
- No one cares. One can sue. One can rage. One can find solace in chemicals, fast cars, or climbing a mountain. But it is important to keep one thing in mind: No one cares.
Net net: Does IBM practice dark arts to rid the firm of those who slow down Zoom meetings, raise questions to which no one knows answers, and burden benefits plans? My hunch is that IBM-type outfits will do what’s necessary to keep the camp ground free of old timers. Who wouldn’t?
Stephen E Arnold, June 5, 2023
Smart Software and a Re-Run of Paradise Lost Joined in Progress
June 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I picked up two not-so-faint and definitely not-encrypted signals about the goals of Google and Microsoft for smart software.
Which company will emerge as the one true force in smart software? MidJourney did not pick a winner, just what the top dog will wear to the next quarterly sales report delivered via a neutral Zoom call.
Navigate to the visually thrilling podcast hosted by Lex Fridman, an American MIT wizard. He interviewed the voluble Google wizard Chris Lattner. The subject was the Future of Programming and AI. After listening to the interview, I concluded the following:
- Google wants to define and control the “meta” framework for artificial intelligence. What’s this mean? Think a digital version of a happy family: Vishnu, Brahma, and Shiva, among others.
- Google has an advantage when it comes to doing smart software because its humanoids have learned what works, what to do, and how to do certain things.
- The complexity of Google’s multi-pronged smart software methods, its home-brew programming languages, and its proprietary hardware are nothing more than innovation. Simple? Innovation means no one outside of the Google AI cortex can possibly duplicate, understand, or outperform Googzilla.
- Google has money and will continue to spend it to deliver the Vishnu, Brahma, and Shiva experience in my interpretation of programmer speak.
How’s that sound? I assume that the fruit fly start ups are going to ignore the vibrations emitted from Chris Lattner, the voluble Chris Lattner, I want to emphasize. But like those short-lived Diptera, one can derive some insights from the efforts of less well-informed, dependent, and less-well-funded lab experiments.
Okay, that’s signal number one.
Signal number two appears in “Microsoft Signs Deal for AI Computing Power with Nvidia-Backed CoreWeave That Could Be Worth Billions.” This “real news” story asserts:
… Microsoft has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave …
CoreWeave? Yep, the company “sells simplified access to Nvidia’s graphics processing units, or GPUs, which are considered the best available on the market for running AI models.” By the way, nVidia has invested in this outfit. What’s this signal mean to me? Here are the flickering lines on my oscilloscope:
- Microsoft wants to put smart software into its widely used enterprise applications in order to establish the one true religion of smart software. The idea, of course, is to pass the collection plate and convert dead-dog software into racing greyhounds.
- Microsoft has an advantage because when an MBA does calculations and probably letters to significant others, Excel is the go-to solution. Some people create art in Excel and then sell it. MBAs just get spreadsheet fever and do leveraged buyouts. With smart software the Microsoft alleged monopoly does the billing.
- The wild and wonderful world of Azure is going to become smarter because… well, Microsoft does smart things. Imagine the demand for training courses, certification for Microsoft engineers, and how-to YouTube videos.
- Microsoft has money and will continue to achieve compulsory attendance at the Church of Redmond.
Net net: Two titans will compete. I am thinking about the battle between John Milton’s protagonist and antagonist in “Paradise Lost.” This will be fun to watch whilst eating chicken korma.
Stephen E Arnold, June 5, 2023
AI Allegedly Doing Its Thing: Let Fake News Fly Free
June 2, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I cannot resist this short item about smart software. Stories have appeared in my newsfeeds about AI which allegedly concluded that, to complete its mission, it had to remove an obstacle: the human operator.
A number of news sources reported as actual factual that a human operator of a smart weapon system was annoying the smart software. The smart software decided that the humanoid was causing a mission to fail. The smart software concluded that the humanoid had to be killed so the smart software could go kill more humanoids.
I collect examples of thought provoking fake news. It’s my new hobby and provides useful material for my “OSINT Blindspots” lectures. (The next big one will be in October 2023 after I return from Europe in late September 2023.)
However, the write up “US Air Force Denies AI Drone Attacked Operator in Test” presents a different angle on the story about evil software. I noted this passage from an informed observer:
Steve Wright, professor of aerospace engineering at the University of the West of England, and an expert in unmanned aerial vehicles, told me jokingly that he had “always been a fan of the Terminator films” when I asked him for his thoughts about the story. “In aircraft control computers there are two things to worry about: ‘do the right thing’ and ‘don’t do the wrong thing’, so this is a classic example of the second,” he said. “In reality we address this by always including a second computer that has been programmed using old-style techniques, and this can pull the plug as soon as the first one does something strange.”
Now the question: Did smart software do the right thing? Did it go after its humanoid partner? In a hypothetical discussion, perhaps. In real life, nope. My hunch is that the US Air Force anecdote is anchored in confusing “what if” thinking with reality. That’s easy for someone younger than I to do, in my experience.
I want to point out that in August 2020, a Heron Systems AI (based on Google technology) killed an Air Force “top gun” in a simulated aerial dog fight. How long did it take the smart software to neutralize the annoying humanoid? About a minute, maybe a minute and a half. See this Janes news item for more information.
My view is that smart software has some interesting capabilities. One scenario of interest to me is a hacked AI-infused weapons system. Pondering this idea opens the door to some intriguing “what if” scenarios.
Stephen E Arnold, June 2, 2023
The TikTok Addition: Has a Fortune Magazine Editor Been Up Swiping?
June 2, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A colleague called my attention to the Fortune Magazine article boldly titled “Gen Z Teens Are So Unruly in Malls, Fed by Their TikTok Addition, That a Growing Number Are requiring Chaperones and Supervision.” A few items I noted in this headline:
- Malls. I thought those were dead horses. There is a YouTube channel devoted to these real estate gems; for example, Urbex Offlimits and the videos of a creator named Brandon Moretti.
- Gen Z. I just looked up how old Gen Zs are. According to Mental Floss, these denizens of empty spaces are 11 to 26 years old. Hmmm. For what purpose are 21 to 25 year olds hanging out in empty malls? (Could that be a story for Fortune?)
- The “TikTok addition” gaffe. My spelling checker helps me out too. But I learned from a super-duper former Fortune writer whom I shall label Peter V, “Fortune is meticulous about its thorough research, its fact checking, and its proofreading.” Well, super-duper Peter, not in 2023. Please, explain in 25 words or less this image from the write up:
I did notice several factoids and comments in the write up; to wit:
Interesting item one:
“On Friday and Saturdays, it’s just been a madhouse,” she said on a recent Friday night while shopping for Mother’s Day gifts with Jorden and her 4-month-old daughter.
A madhouse, according to the Cambridge Dictionary, is “a place of great disorder and confusion.” I think of malls as places of no people. But Fortune does the great fact checking, according to the attestation of Peter V.
Interesting item two:
Even a Chik-fil-A franchise in southeast Pennsylvania caused a stir with its social media post earlier this year that announced its policy of banning kids under 16 without an adult chaperone, citing unruly behavior.
I thought Chik-fil-A was a saintly, reserved institution with restaurants emulating Medieval monasteries. No longer. No wonder so many cars line up for a chickwich.
Interesting item three:
Cohen [a mall expert] said the restrictions will help boost spending among adults who must now accompany kids but they will also likely reduce the number of trips by teens, so the overall financial impact is unclear.
What these snippets tell me is that there is precious little factual data in the write up. The headline leading “TikTok addiction” is not the guts of the write up. Maybe the idea that kids who can’t go to the mall will play online games? I think it is more likely that kids and those lost little 21 to 25 year olds will find other interesting things to do with their time.
But malls? Kids can prowl Snapchat and TikTok, but those 21 to 25 year olds? Drink or other chemical activities?
Hey, Fortune, let’s get addicted to the Peter V. baloney: “Fortune is meticulous about its thorough research, its fact checking, and its proofreading.”
Stephen E Arnold, June 2, 2023
The Prospects for Prompt Engineers: English Majors, Rejoice
June 2, 2023
I noted some good news for English majors. I suppose some history and political science types may be twitching with constrained jubilation too.
Navigate to “9 in 10 Companies That Are Currently Hiring Want Workers with ChatGPT Experience.” The write up contains quite a number of factoids. (Are these statistically valid? I believe everything I read on the Internet with statistical data, don’t you?) Well, true or not, I found these statements interesting:
- 91 percent of the companies in a human resourcey survey want workers with ChatGPT experience. What does “experience” mean? The write up does not deign to elucidate. The question about how to optimize phishing email counts.
- 75 percent of those surveyed will fire people who are declared redundant, annoying, or too expensive to pay.
- 30 percent of those in the sample say that hiring a humanoid with ChatGPT experience is “urgent.” Why not root around in the reason for this urgency? Oh, right. That’s research work.
- 66 percent of the respondents perceive that ChatGPT will deliver a “competitive edge.” What about the link to cost reduction? Oh, I forgot. That’s additional research work.
What work functions will get to say, “Hello” to smart software? The report summary identifies seven job categories:
- Software engineering
- Customer service
- Human resources
- Marketing
- Data entry
- Sales
- Finance
For parents with a 22 to 40 year old working in one of these jobs, my suggestion is to get that spare bedroom ready. The progeny may return to the nest.
Stephen E Arnold, June 2, 2023
The Intellectual Titanic and Sister Ships at Sea: Ethical Ballast and Flawed GPS Aboard
June 1, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Researchers Retract Over 300 COVID-Era Medical Papers For Scientific Errors, Ethical Concerns.” I ignored the information about the papers allegedly hand crafted with cow outputs. I did note this statement, however:
Gunnveig Grødeland, a senior researcher at the Institute of Immunology at the University of Oslo, said many withdrawn papers during COVID-19 have been the result of ethical shortcomings.
Interesting. I recall hearing that the president of a big time university in Palo Alto was into techno sci-fi paper writing. I also think that the estimable Jeffrey Epstein affiliated MIT published some super positive information about the new IBM smart WatsonX. (Doesn’t IBM invest big bucks in MIT?) I have also memory tickles about inventors and entrepreneurs begging to be regulated.
Bad, distorted values chase kids down the Lane of Life. Imagine. These young people and their sense of right and wrong will be trampled by darker motives. Image produced by MidJourney, of course.
What this write up about peer-reviewed and allegedly scholarly papers says to me is that ethical research and mental gyroscopes no longer align with what I think of as the common good.
Academics lie. Business executives lie. Entrepreneurs lie. Now what’s that mean for the quaint idea that individuals can be trusted? I can hear the response now:
Senator, thank you, for that question. I will provide the information you desire after this hearing.
I suppose one can look forward to made up information as the increasingly lame smart software marketing demonstrations thrill the uninformed.
Is it possible for flawed ethical concepts and an out-of-kilter moral GPS system to terminate certain types of behavior?
Here’s the answer: Sure looks like it. That’s an interesting gain of function.
Stephen E Arnold, June 1, 2023