Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations

September 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkman notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, Azure AI platform, and Teams Premium. We learn:

“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”

Another, non-AI related change is that storage for one’s Outlook.com attachments will soon affect OneDrive storage quotas. That could be an unpleasant surprise for many when changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.

Cynthia Murrell, September 4, 2023

Google: Another Modest Proposal to Solve an Existential Crisis. No Big Deal, Right?

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am fascinated with corporate “do goodism.” Many people find themselves in an existential crisis anchored in zeros and ones. Is the essay submitted as original work the product of an industrious 15 year old? Or, is the essay the 10 second output of a smart software system like ChatGPT or You.com? Is that brilliant illustration the labor of a dedicated 22 year old laboring in a cockroach infested garage in in Corona, Queens? Or, was the art used in this essay output in about 60 seconds by my trusted graphic companion Mother MidJourney?


“I see the watermark. This is a fake!” exclaims the precocious lad. This clever middle school student has identified the super secret hidden clue that this priceless image is indeed a fabulous fake. How could a young person detect such a sophisticated and subtle watermark? The message seems to be, “Let’s overestimate our capabilities and underestimate those of young people, who are skilled navigators of the digital world.”

Enough about Queens. What’s this “modest proposal” angle? Jonathan Swift beat this horse until it died in the early 18th century. I think the reference makes a bit of sense. Mr. Swift proposed a “simple” solution to a big problem. “DeepMind Develops Watermark to Identify AI Images” explains:

Google’s DeepMind is trialling [sic] a digital watermark that would allow computers to spot images made by artificial intelligence (AI), as a means to fight disinformation. The tool, named SynthID, will embed changes to individual pixels in images, creating a watermark that can be identified by computers but remains invisible to the human eye. Nonetheless, DeepMind has warned that the tool is not “foolproof against extreme image manipulation.”
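DeepMind has not published how SynthID actually works, so here is a deliberately naive least-significant-bit (LSB) sketch of the general idea the quote describes: nudge pixel values by amounts the eye cannot see, then let software read a hidden tag back out. The `WATERMARK` constant and both functions are hypothetical illustrations, not Google code, and a real system would be far more robust to cropping, compression, and editing.

```python
# Toy illustration of an invisible pixel-level watermark.
# NOT SynthID's method (which is unpublished and manipulation-resistant);
# this LSB sketch only shows the basic concept: change pixels by at most
# one intensity level, which is imperceptible, yet machine-detectable.

WATERMARK = 0b1011  # hypothetical 4-bit tag


def embed(pixels):
    """Hide the tag in the low bit of the first four pixel values."""
    out = list(pixels)
    for i in range(4):
        bit = (WATERMARK >> (3 - i)) & 1
        out[i] = (out[i] & ~1) | bit  # value changes by at most 1
    return out


def detect(pixels):
    """Read the low bits back and compare them to the tag."""
    tag = 0
    for i in range(4):
        tag = (tag << 1) | (pixels[i] & 1)
    return tag == WATERMARK


image = [200, 201, 57, 58, 120]  # grayscale pixel values, 0-255
marked = embed(image)
print(detect(marked))  # True
print(detect(image))   # False: an unmarked image fails the check
```

The fragility the article flags is obvious even in this sketch: any edit that touches those low bits, such as recompression or resizing, erases the mark. That is the “extreme image manipulation” caveat in miniature.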

Righto, it’s good enough. Plus, the affable crew at Alphabet Google YouTube are in an ideal position to monitor just about any tiny digital thing in the interwebs. Such a prized position as de facto ruler of the digital world makes it easy to flag and remove offending digital content with the itty bitty teenie weeny manipulated pixel thingy.

Let’s assume that everyone, including the young fake spotter in the Mother MJ image accompanying this essay, gets to become a de facto certifier of digital content. What are the downsides?

Gee, I give up. I cannot think of one problem with Google’s becoming the chokepoint for what’s in bounds and what’s out of bounds. Everyone will be happy. Happy is good in our stressed out world.

And think of the upsides. A bug might derail some creative work? A system crash might nuke important records about a guilty party because pixels don’t lie? Well, maybe just a little bit. The Google intern given the thankless task of optimizing image analysis might stick in an unwanted instruction. So what? The issue will be resolved in a court, and these legal proceedings are super efficient and super reliable.

I find it interesting that the article does not see any problem with the Googley approach. Like the Oxford research which depended upon Facebook data, the truth is the truth. No problem. Gatekeepers and certification authority are exciting business concepts.

Stephen E Arnold, September 1, 2023

Regulating Smart Software: Let Us Form a Committee and Get Industry Advisors to Help

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The Boston Globe published what I thought was an amusing “real” news story about legislators and smart software. I know. I know. I am entering oxymoron land. The article is “The US Regulates Cars, Radio, and TV. When Will It Regulate AI?” A number of passages received True Blue check marks.


A person living off the grid works to make his mobile phone deliver generative content to solve the problem of … dinner. Thanks, MidJourney. You did a Stone Age person but you would not generate a street person. How helpful!

Let me share two passages and then offer a handful of observations.

How about this statement attributed to Microsoft’s Brad Smith? He is the professional who was certain Russia organized 1,000 programmers to figure out the SolarWinds security loopholes. Yes, that Brad Smith. The story quotes him as saying:

“We should move quickly,” Brad Smith, the president of Microsoft, which launched an AI-powered version of its search engine this year, said in May. “There’s no time for waste or delay,” Chuck Schumer, the Senate majority leader, has said. “Let’s get ahead of this,” said Sen. Mike Rounds, R-S.D.

Microsoft moved fast. I think the reason was to make Google look stupid. Both of these big outfits know that online services aggregate and become monopolistic. Microsoft wants to be the AI winner. Microsoft is not spending extra time helping elected officials understand smart software or the stakes on the digital table. No way.

The second passage is:

Historically, regulation often happens gradually as a technology improves or an industry grows, as with cars and television. Sometimes it happens only after tragedy.

Please, read the original “real” news story for Captain Obvious statements. Here are a few observations:

  1. Smart software is moving along at a reasonable clip. Big bucks are available to AI outfits in Germany and elsewhere. Something like 28 percent of US companies are fiddling with AI. Yep, even those raising chickens have AI religion.
  2. The process of regulation is slow. We have a turtle and a hare situation. Nope, the turtle loses unless an exogenous power kills the speedy bunny.
  3. If laws were passed, how would one get fast action to apply them? How is the FTC doing? What about the snappy pace of the CDC in preparing for the next pandemic?

Net net: Yes, let’s understand AI.

Stephen E Arnold, September 1, 2023.


YouTube Content: Are There Dark Rabbit Holes in Which Evil Lurks? Come On Now!

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Google has become a cultural touchstone. The most recent evidence is a bit of moral outrage in Popular Science. Now the venerable magazine is PopSci.com, and the Google has irritated the technology explaining staff. Navigate to “YouTube’s Extremist Rabbit Holes Are Deep But Narrow.”


“Google, your algorithm is creating rabbit holes. Yes, that is a technical term,” says the PopSci technology expert. Thanks for a C+ image, MidJourney.

The write up asserts:

… exposure to extremist and antagonistic content was largely focused on a much smaller subset of already predisposed users. Still, the team argues the platform “continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.” Not only that, but engagement with this content still results in advertising profits.

I think the link with popular science is the “algorithm.” But the write up seems to be more a see-Google-is-bad essay. Science? No. Popular? Maybe?

The essay concludes with this statement:

While continued work on YouTube’s recommendation system is vital and admirable, the study’s researchers echoed that, “even low levels of algorithmic amplification can have damaging consequences when extrapolated over YouTube’s vast user base and across time.” Approximately 247 million Americans regularly use the platform, according to recent reports. YouTube representatives did not respond to PopSci at the time of writing.

I find the use of the word “admirable” interesting. Also, I like the assertion that algorithms can do damage. I recall seeing a report that explained social media is good and another study pitching the idea that bad digital content does not have a big impact. Sure, I believe these studies, just not too much.

Google has a number of buns in the oven. The firm’s approach to YouTube appears to be “emulate Elon.” Content moderation will be something with a lower priority than keeping tabs on Googlers who don’t come to the office or do much Google work. My suggestion for Popular Science is to do a bit more science, and a little less quasi-MBA type writing.

Stephen E Arnold, September 1, 2023

Microsoft Pop Ups: Take Screen Shots

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Microsoft Is Using Malware-Like Pop-Ups in Windows 11 to Get People to Ditch Google.” Kudos to the wordsmiths at TheVerge.com for avoiding the term “po*n storm” to describe the Windows 11 alleged pop ups.


A person in the audience says, “What’s that pop up doing up there?” Thanks, MJ. Another so so piece of original art.

The write up states:

I have no idea why Microsoft thinks it’s ok to fire off these pop-ups to Windows 11 users in the first place. I wasn’t alone in thinking it was malware, with posts dating back three months showing Reddit users trying to figure out why they were seeing the pop-up.

Pop-ups appearing for three months? I love “real” news when it is timely.

The article includes this statement:

Microsoft also started taking over Chrome searches in Bing recently to deliver a canned response that looks like it’s generated from Microsoft’s GPT-4-powered chatbot. The fake AI interaction produced a full Bing page to entirely take over the search result for Chrome and convince Windows users to stick with Edge and Bing.

How can this be? Everyone’s favorite software company would not use these techniques to boost Credge’s market share, would it?

My thought is that Microsoft’s browser woes began a long time ago in an operating system far, far away. As a result, Credge is lagging behind Googzilla’s browser. Unless Google shoots itself in both feet and fires a digital round into the beastie’s heart, the ad monster will keep on sucking data and squeezing out alternatives.

The write up does not seem to be aware that Google wants to control digital information flows. Microsoft will need more than popups to prevent the Chrome browser from becoming the primary access mechanism to the World Wide Web. Despite Microsoft’s market power, users don’t love the Microsoft Credge thing. Hey, Microsoft, why not pay people to use Credge?

Stephen E Arnold, August 31, 2023

Slackers, Rejoice: Google Has a Great Idea Just for You

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I want to keep this short because the idea of not doing work to do work offends me deeply. The big thinkers who want people to relax, take time, smell the roses, and avoid those Type A tendencies annoy me too. I like being a Type A. In fact, if I were not a Type A, I would not “be,” to use some fancy Descartes logic.


Is anyone looking down the Information Superhighway to see what speeding AI vehicle is approaching? Of course not, everyone is on break or playing Foosball. Thanks, Mother MidJourney, you did not send me to the arbitration committee for my image request.

“Google Meet’s New AI Will Be Able to Go to Meetings for You” reports:

…you might never need to pay attention to another meeting again — or even show up at all.

Let’s think about this new Google service. If AI continues to advance at a reasonable pace, an AI which can attend a meeting for a person can at some point replace the person. Does that sound reasonable? What a GenZ thrill. Money for no work. The advice to take time for kicking back and living a stress free life is just fantastic.

In today’s business climate, I am not sure that delegating knowledge work to smart software is a good idea. I like to use the phrase “gradient descent.” My connotation of this jargon means a cushioned roller coaster to one or more of the Seven Deadly Sins. I much prefer intentional use of software. I still like most of the old-fashioned methods of learning and completing projects. I am happy to encounter a barrier like my search for the ultimate owners of the domain rrrrrrrrrrr.com or the methods for enabling online fraud practiced by some Internet service providers. (Sorry, I won’t name these fine outfits in this free blog post. If you are attending my keynote at the Massachusetts and New York Association of Crime Analysts’ conference in early October, say, “Hello.” In that setting, I will identify some of these outstanding companies and share some thoughts about how these folks trample laws and regulations. Sound like fun?)

Google’s objective is to become the source for smart software. In that position, the company will have access to knobs and levers controlling information access, shaping, and distribution. The end goal is a quarterly financial report and the diminution of competition from annoying digital tsetse flies in my opinion.

Wouldn’t it be helpful if the “real news” looked down the Information Highway? No, of course not. For a Type A, the new “Duet” service does not “do it” for me.

Stephen E Arnold, August 31, 2023

A Wonderful Romp through a Tech Graveyard

August 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I heard about a Web site called killedby.tech. I took a look and what a walk down Memory Lane. You know Memory Lane. It runs close to the Information Superhighway. Are products smashed on the Info Highway? Some, not all.

The entry for iLoo, an innovation from the Softies, notes that the product was born and vaporized in 2003. Killedby describes the breakthrough this way:

iLoo was a smart portable toilet integrating the complete equipment to surf the Internet from inside and outside the cabinet.

I wonder how many van lifers would buy this product. Imagine the TikTok videos. That would keep the Oracle TikTok review team busy and probably provide some amusement for others as well.

And I had forgotten about Google’s weird response to failing to convince the US government to use the Googley search system for FirstGov.gov. Ah, forward truncation — something Google would never ever do. The product/service was Google Public Service Search. Here’s what the tombstone says:

Google Public Service Search provided governmental, non-profit and academic organizational search results without ads.

That idea bit the dust in 2006, which is the year I have pegged as the point at which Google went all-in on its cheerful, transparent business model. No ads! Imagine that!

I had forgotten about Google’s real time search play. Killedby says:

Google Real-Time Search provided live search results from Twitter, Facebook, and news websites.

I never learned why this was sent to the big digital dumpster behind the Google building on Shoreline. Rumor was that some news outfits and some social media Web sites were not impressed. Google, ever the trusted ad provider, said hasta la vista to a social information metasearch.

Great site. I did not see Google Transformic, however. Killedby is quite good.

Stephen E Arnold, August 31, 2023

Google: Trapped in Its Own Walled Garden with Lots of Science Club Alums

August 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “MapReduce, TensorFlow, Vertex: Google’s Bet to Avoid Repeating History in AI.” I found the idea that Google gets in its own way a retelling of how high school science club management produces interesting consequences.


A young technology wizard finds himself in a Hall of Mirrors at the carnival. He is not sure what is real or in which direction to go. The world of the House of Mirrors is disorienting. The young luminary wants to return to the walled garden where life is more comfortable. Thanks, MidJourney. Four tries and I get this tired illustration. Gradient descent time?

The write up asserts:

Google is in the middle of trying to avoid repeating history when releasing its industry-altering technology.

I disagree. The methods defining Google produce with remarkable consistency a lack of informed control. The idea is that organizations have a culture. That culture evolves over time, but it remains anchored in its past. Thus, as the organization appears to move forward in time, that organization behaves in a predictable way; for example, Google has an approach to management which guarantees friction. Examples range from the staff protests to the lateral arabesque used to move Dr. Jeff Dean out of the way of the DeepMind contingent.

The write up takes a different view; for example:

Run by engineers, the [Google MapReduce] team essentially did not foresee the coming wave of open-source technology to power the modern Web and the companies that would come to commercialize it.

Google lacks the ability to perceive its opportunities. The company is fenced by its dependence on online advertising. Thus, innovations are tough for the Googlers to put into perspective. One reason is the high school science club ethos of the outfit; the other is that the outside world is as foreign to many Googlers as the world beyond the goldfish’s bowl filled with water. The view is distorted, surreal, and unfamiliar.

How can a company innovate and make a commercially viable product with this mindset inside its walled garden? It cannot. Advertising at Google is a me-too product: prior to its IPO, Google settled a dispute with Yahoo over the “inspiration” for pay-to-play search. The cost of this “inspiration” was about $1 billion.

In a quarter century, Google remains what one Microsoftie called “a one-trick pony.” Will the Google Cloud emerge as a true innovation? Nope. There are lots of clouds. Google is the Enterprise Rent-a-Car to the Hertz and Avis cloud rental firms. Google’s innovation track record is closer to a high school science club which has been able to win the state science club contest year after year. Other innovators win the National Science Club Award (once called the Westinghouse Award). The context-free innovations are useful to others who have more agility and market instinct.

My view is that Google has become predictable, lurching from technical paper to legal battle like a sine wave in a Physics 101 class; that is, a continuous wave with a smooth periodic function.

Don’t get me wrong. Google is an important company. What is often overlooked is the cultural wall that keeps the 100,000 smartest people in the world locked down in the garden. Innovation is constrained, and the excitement exists off the virtual campus. Why do so many Xooglers innovate and create interesting things once freed from the walled garden? Culture has strengths and weaknesses. Google’s muffing the bunny, as the article points out, is one defining characteristic of a company which longs for high school science club meetings and competitions with those like themselves.

Tony Bennett won’t be singing in the main cafeteria any longer, but the Googlers don’t care. He was an outsider, interesting but not in the science club. If the thought process doesn’t fit, you must quit.

Stephen E Arnold, August 30, 2023

Microsoft and Good Enough Engineering: The MSI BSOD Triviality

August 30, 2023

My line up of computers does not have a motherboard from MSI. Call me “Lucky,” I guess. Some MSI product owners were not. “Microsoft Puts Little Blame on Its Windows Update after Unsupported Processor BSOD Bug” is a fun read for those who are keeping notes about Microsoft’s management methods. The short essay romps through a handful of Microsoft’s recent quality misadventures.


“Which of you broke mom’s new vase?” asks the sister. The boys look surprised. The vase has nothing to say about the problem. Thanks, MidJourney, no adjudication required for this image.

I noted this passage in the NeoWin.net article:

It has been a pretty eventful week for Microsoft and Intel in terms of major news and rumors. First up, we had the “Downfall” GDS vulnerability which affects almost all of Intel’s slightly older CPUs. This was followed by a leaked Intel document which suggests upcoming Wi-Fi 7 may only be limited to Windows 11, Windows 12, and newer.

The most helpful statement in the article in my opinion was this statement:

Interestingly, the company says that its latest non-security preview updates, ie, Windows 11 (KB5029351) and Windows 10 (KB5029331), which seemingly triggered this Unsupported CPU BSOD error, is not really what’s to blame for the error. It says that this is an issue with a “specific subset of processors”…

Like the SolarWinds misstep and a handful of other bone-chilling issues, Microsoft is skilled at making sure that its engineering is not seen as the entire problem. That may be one benefit of what I call good enough engineering. The wiggle room created by certain systems and methods means that those who follow the documentation can still make mistakes. That’s where the blame gets placed.

Makes sense to me. Some MSI motherboard users looking at the beloved BSOD may not agree.

Stephen E Arnold, August 30, 2023

New Learning Model Claims to Reduce Bias, Improve Accuracy

August 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Promises, promises. We have seen developers try and fail to eliminate bias in machine learning models before. Now ScienceDaily reports, “New Model Reduces Bias and Enhances Trust in AI Decision-Making and Knowledge Organization.” Will this effort by University of Waterloo researchers be the first to succeed? The team worked in a field where AI bias and inaccuracy can be most devastating: healthcare. The write-up tells us:

“Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement leads to misdiagnoses and inequitable healthcare outcomes for specific patient groups. Thanks to new research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, an innovative model aims to eliminate these barriers by untangling complex patterns from data to relate them to specific underlying causes unaffected by anomalies and mislabeled instances. It can enhance trust and reliability in Explainable Artificial Intelligence (XAI.)”

Wong states his team was able to disentangle statistics in a certain set of complex medical results data, leading to the development of a new XAI model they call Pattern Discovery and Disentanglement (PDD). The post continues:

“The PDD model has revolutionized pattern discovery. Various case studies have showcased PDD, demonstrating an ability to predict patients’ medical results based on their clinical records. The PDD system can also discover new and rare patterns in datasets. This allows researchers and practitioners alike to detect mislabels or anomalies in machine learning.”

If accurate, PDD could lead to more thorough algorithms that avoid hasty conclusions. Less bias and fewer mistakes. Can this ability be extrapolated to other fields, like law enforcement, social services, and mortgage decisions? Assurances are easy.

Cynthia Murrell, August 30, 2023
