Microsoft and What Fizzled with One Trivial Omission. Yep, Inconsequential
October 27, 2023
This essay is the work of a dumb humanoid. No smart software required.
I read “10 Hyped-Up Windows Features That Fizzled Out” is an interesting list. I noticed that the Windows Phone did not make the cut. How important is the mobile phone to online computing and most people’s life? Gee, a mobile phone? What’s that? Let’s see Apple has a phone and it produces some magnetism for the company’s other products and services. And Google has a phone with its super original, hardly weird Android operating system with the pull through for advertising sales. Google does fancy advertising, don’t you think? Then we have the Huawei outfit, which despite political headwinds, keeps tacking and making progress and some money. But Microsoft? Nope, no phone despite the superior thinking which brought Nokia into the Land of Excitement.
What do you mean security is a priority? I was working on 3D, the metaverse, and mixed reality. I don’t think anyone on my team knows anything about security. Is someone going to put out that fire? I have to head to an off site meeting. Catch you later,” says the hard working software professional. Thanks MidJourney, you understand dumpster fire, don’t you?
What’s on the list? Here are five items that the online write up identified as “fizzled out” products. Please, navigate to the original “let’s make a list and have lunch delivered” article.
The five items I noted are:
- The dual screen revolution Windows 10X for devices like the “Surface Neo.” Who knew?
- 3D modeling. Okay, I would have been happy if Microsoft could support plain old printing from its outstanding Windows products.
- Mixed reality. Not even the Department of Defense was happy with weird goggles which could make those in the field of battle a target.
- Set tabs. Great idea. Now you can buy it from Stardock, the outfit that makes software to kill the weird Window interface. Yep, we use this on our Windows computers. Why? The new interface is a pain, not a “pane.”
- My People. I don’t have people. I have a mobile phone and email. Good enough.
What else is missing from this lunch time-brainstorming list generation session?
My nomination is security. The good enough approach is continuing to demonstrate that — bear with me for this statement — good enough is no longer good enough in my opinion.
Stephen E Arnold, October 27, 2023
Microsoft Making Changes: Management and Personnel Signals
October 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
We post headlines to the blog posts in Beyond Search to LinkedIn, “hire me” service. The traffic produced is minimal, and I find it surprising that a 1,000 people or so look at the information that catches our attention. As a dinobaby who is not interested in work, I find LinkedIn amusing. The antics of people posting little videos, pictures of employees smiling, progeny in high school athletic garb, and write ups which say, “I am really wonderful” are fascinating. Every month or so, I receive a message from a life coach. I get a kick out of telling the young person, “I am 78 and I don’t have much life left. What’s to coach?” I never hear from the individual again. What fun is that?
I wonder if the life coaches offer their services to Microsoft LinkedIn? Perhaps the organization could benefit more than I would. What justifies this statement? “LinkedIn Employees Discovered a Mysterious List of around 500 Names Over the Weekend. On Monday, Workers Said Those on the List Were Laid Off” might provide a useful group of prospects. Imagine. A group of professionals working on a job hunting site possibly terminated by Microsoft LinkedIn. That’s the group to write about life coaching and generating leads. What’s up with LinkedIn? Is LinkedIn a proxy for management efforts to reduce costs?
“Turn the ship, sir. You will run aground, leak fuel, and kill the sea bass,” shouts a consultant to the imposing vessel Titanic 3. Thanks, MidJourney, close enough for horse shoes.
Without any conscious effort other LinkedIn-centric write ups caught my eye. Each signals that change is being forced upon a vehicle for aggressive self promotion to make money. Let me highlight these other “reports” and offer a handful of observations. Keep in mind that [a] I am a dinobaby and [b] I see social media as a generally bad idea. See. I told you I was a dinobaby.
The first article I spotted in my newsfeed was “Microsoft Owned LinkedIn Lays Off Nearly 700 Employees — Read the Memo Here.” The big idea is that LinkedIn is not making as much money as it coulda, woulda, shoulda. The fix is to allow people to find their future elsewhere via role reductions. Nice verbiage. Chatty and rational, right, tech bros? Is Microsoft emulating the management brilliance of Elon Musk or the somewhat thick fingered efforts of IBM?
The article states:
LinkedIn is now ramping up hiring in India…
My hunch it is a like a combo at a burger joint: “Some X.com, please. Oh, add some IBM too.”
Also, I circled an item with the banner “20% of LinkedIn’s Recent Layoffs Were Managers.” Individuals offered some interesting comments. These could be accurate or the fabrications of a hallucinating ChatGPT-type service. Who knows? Consider these remarks:
- From Kuchenbecker: I’m at LI and my reporting chain is Sr mgr > Sr Director > VP > Sr vp > CEO. A year ago it was mgr > sr mgr > director > sr Director> vp> svp > ceo. No one in my management chain was impacted but the flattening has been happening organically as folks leave. LI has a distinctive lack of chill right now contrary to the company image, but generally things are just moving faster.
- From Greatpostman: I have a long held belief that engineering managers are mostly a scam, and are actually just overpaid scrum masters. This is from working at some top companies
- From Xorcist: Code is work, and the one thing that signals moving up the social ladder is not having to work.
- From Booleandilemma: My manager does little else besides asking what everyone is working on every day. We could automate her position with a slack bot and get the same results.
The comments suggest a well-crafted bureaucracy. No wonder security buffs find Microsoft interesting. Everyone is busy with auto scheduled meetings and getting Teams to work.
Next, I spotted was “Leaked Microsoft Pay Guidelines Reveal Salary, Hiring Bonus, and Stock Award Ranges by Level.” I underlined this assertion in the article:
In 2022, when the economy was still booming, Microsoft granted an across-the board compensation raise for levels 67 and lower through larger stock grants, in response to growing internal dissatisfaction with compensation compared to competitors, and to stop employees from leaving for better pay, especially to Amazon. As Insider previously reported, earlier this year, as the economy faltered, Microsoft froze base pay raises and cut its budget for bonuses and stock awards.
Does this suggest some management problems, problems money cannot resolve? Other observations:
- Will Microsoft be able to manage its disparate businesses as it grows ever larger?
- Has Microsoft figured out how to scale and achieve economies that benefit its stakeholders?
- Will Microsoft’s cost cutting efforts create other “gaps” in the plumbing of the company; for example, security issues?
I am not sure, but the game giant and AI apps vendor appears to be trying to turn a flotilla, not a single aircraft carrier. The direction? Lower cost talent in India? Will the quality of Microsoft’s products and services suffer? Nope. A certain baseline of excellence exists and moving that mark gets more difficult by the day.
Stephen E Arnold, October 17, 2023
Microsoft Claims to Bring Human Reasoning to AI with New Algorithm
September 20, 2023
Has Microsoft found the key to meld the strengths of AI reasoning and human cognition? Decrypt declares, “Microsoft Infuses AI with Human-Like Reasoning Via an ‘Algorithm of Thoughts’.” Not only does the Algorithm of Thoughts (AoT for short) come to better conclusions, it also saves energy by streamlining the process, Microsoft promises. Writer Jose Antonio Lanz explains:
“The AoT method addresses the limitations of current in-context learning techniques like the ‘Chain-of-Thought’ (CoT) approach. CoT sometimes provides incorrect intermediate steps, whereas AoT guides the model using algorithmic examples for more reliable results. AoT draws inspiration from both humans and machines to improve the performance of a generative AI model. While humans excel in intuitive cognition, algorithms are known for their organized, exhaustive exploration. The research paper says that the Algorithm of Thoughts seeks to ‘fuse these dual facets to augment reasoning capabilities within LLMs.’ Microsoft says this hybrid technique enables the model to overcome human working memory limitations, allowing more comprehensive analysis of ideas. Unlike CoT’s linear reasoning or the ‘Tree of Thoughts’ (ToT) technique, AoT permits flexible contemplation of different options for sub-problems, maintaining efficacy with minimal prompting. It also rivals external tree-search tools, efficiently balancing costs and computations. Overall, AoT represents a shift from supervised learning to integrating the search process itself. With refinements to prompt engineering, researchers believe this approach can enable models to solve complex real-world problems efficiently while also reducing their carbon impact.”
Wowza! Lanz expects Microsoft to incorporate AoT into its GPT-4 and other advanced AI systems. (Microsoft has partnered with OpenAI and invested billions into ChatGPT; it has an exclusive license to integrate ChatGPT into its products.) Does this development bring AI a little closer to humanity? What is next?
Cynthia Murrell, September 20, 2023
Microsoft: Good Enough Just Is Not
September 18, 2023
Was it the Russian hackers? What about the special Chinese department of bad actors? Was it independent criminals eager to impose ransomware on hapless business customers?
No. No. And no.
The manager points his finger at the intern working the graveyard shift and says, “You did this. You are probably worse than those 1,000 Russian hackers orchestrated by the FSB to attack our beloved software. You are a loser.” The intern is embarrassed. Thanks, Mom MJ. You have the hands almost correct… after nine months or so. Gradient descent is your middle name.
“Microsoft Admits Slim Staff and Broken Automation Contributed to Azure Outage” presents an interesting interpretation of another Azure misstep. The report asserts:
Microsoft’s preliminary analysis of an incident that took out its Australia East cloud region last week – and which appears also to have caused trouble for Oracle – attributes the incident in part to insufficient staff numbers on site, slowing recovery efforts.
But not really. The report adds:
The software colossus has blamed the incident on “a utility power sag [that] tripped a subset of the cooling units offline in one datacenter, within one of the Availability Zones.”
Ah, ha. Is the finger of blame like a heat seeking missile. By golly, it will find something like a hair dryer, fireworks at a wedding where such events are customary, or a passenger aircraft. A great high-tech manager will say, “Oops. Not our fault.”
The Register’s write up points out:
But the document [an official explanation of the misstep] also notes that Microsoft had just three of its own people on site on the night of the outage, and admits that was too few.
Yeah. Work from home? Vacay time? Managerial efficiency planning? Whatever.
My view of this unhappy event is:
- Poor managers making bad decisions
- A drive for efficiency instead of a drive toward excellence
- A Microsoft Bob moment.
More exciting Azure events in the future? Probably. More finger pointing? It is a management method, is it not?
Stephen E Arnold, September 18, 2023
Surprised? Microsoft Drags Feet on Azure Security Flaw
September 5, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft has addressed a serious security flaw in Azure, but only after being called out by the cybersecurity firm that found the issue. It only took several months. Oh, and according to that firm, the “fix” only applies to new applications despite Microsoft’s assurances to the contrary. “Microsoft Fixes Flaw After Being Called Irresponsible by Tenable CEO,” Bleeping Computer reports. Writer Sergiu Gatlan describes the problem Tenable found within the Power Platform Custom Connectors feature:
“Although customer interaction with custom connectors usually happens via authenticated APIs, the API endpoints facilitated requests to the Azure Function without enforcing authentication. This created an opportunity for attackers to exploit unsecured Azure Function hosts and intercept OAuth client IDs and secrets. ‘It should be noted that this is not exclusively an issue of information disclosure, as being able to access and interact with the unsecured Function hosts, and trigger behavior defined by custom connector code, could have further impact,’ says cybersecurity firm Tenable which discovered the flaw and reported it on March 30th. ‘However, because of the nature of the service, the impact would vary for each individual connector, and would be difficult to quantify without exhaustive testing.’ ‘To give you an idea of how bad this is, our team very quickly discovered authentication secrets to a bank. They were so concerned about the seriousness and the ethics of the issue that we immediately notified Microsoft,’ Tenable CEO Amit Yoran added.”
Yes, that would seem to be worth a sense of urgency. But even after the eventual fix, this bank and any other organizations already affected were still vulnerable, according to Yoran. As far as he can tell, they weren’t even notified of the problem so they could mitigate their risk. If accurate, can Microsoft be trusted to keep its users secure going forward? We may have to wait for another crop of interns to arrive in Redmond to handle the work “real” engineers do not want to do.
Cynthia Murrell, September 5, 2023
Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations
September 4, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkman notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, Azure AI platform, and Teams Premium. We learn:
“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”
Another, non-AI related change is that storage for one’s Outlook.com attachments will soon affect OneDrive storage quotas. That could be an unpleasant surprise for many when changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.
Cynthia Murrell, September 4, 2023
Microsoft Pop Ups: Take Screen Shots
August 31, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Microsoft Is Using Malware-Like Pop-Ups in Windows 11 to Get People to Ditch Google.” Kudos to the wordsmiths at TheVerge.com for avoiding the term “po*n storm” to describe the Windows 11 alleged pop ups.
A person in the audience says, “What’s that pop up doing up there?” Thanks, MJ. Another so so piece of original art.
The write up states:
I have no idea why Microsoft thinks it’s ok to fire off these pop-ups to Windows 11 users in the first place. I wasn’t alone in thinking it was malware, with posts dating back three months showing Reddit users trying to figure out why they were seeing the pop-up.
What popups for three months? I love “real” news when it is timely.
The article includes this statement:
Microsoft also started taking over Chrome searches in Bing recently to deliver a canned response that looks like it’s generated from Microsoft’s GPT-4-powered chatbot. The fake AI interaction produced a full Bing page to entirely take over the search result for Chrome and convince Windows users to stick with Edge and Bing.
How can this be? Everyone’s favorite software company would not use these techniques to boost Credge’s market share, would it?
My thought is that Microsoft’s browser woes began a long time ago in an operating system far, far away. As a result, Credge is lagging behind Googzilla’s browser. Unless Google shoots itself in both feet and fires a digital round into the beastie’s heart, the ad monster will keep on sucking data and squeezing out alternatives.
The write up does not seem to be aware that Google wants to control digital information flows. Microsoft will need more than popups to prevent the Chrome browser from becoming the primary access mechanism to the World Wide Web. Despite Microsoft’s market power, users don’t love the Microsoft Credge thing. Hey, Microsoft, why not pay people to use Credge.
Stephen E Arnold, August 31, 2023
Microsoft and Good Enough Engineering: The MSI BSOD Triviality
August 30, 2023
My line up of computers does not have a motherboard from MSI. Call me “Lucky” I guess. Some MSI product owners were not. “Microsoft Puts Little Blame on Its Windows Update after Unsupported Processor BSOD Bug” is a fun read for those who are keeping notes about Microsoft’s management methods. The short essay romps through a handful of Microsoft’s recent quality misadventures.
“Which of you broke mom’s new vase?” asks the sister. The boys look surprised. The vase has nothing to say about the problem. Thanks, MidJourney, no adjudication required for this image.
I noted this passage in the NeoWin.net article:
It has been a pretty eventful week for Microsoft and Intel in terms of major news and rumors. First up, we had the “Downfall” GDS vulnerability which affects almost all of Intel’s slightly older CPUs. This was followed by a leaked Intel document which suggests upcoming Wi-Fi 7 may only be limited to Windows 11, Windows 12, and newer.
The most helpful statement in the article in my opinion was this statement:
Interestingly, the company says that its latest non-security preview updates, ie, Windows 11 (KB5029351) and Windows 10 (KB5029331), which seemingly triggered this Unsupported CPU BSOD error, is not really what’s to blame for the error. It says that this is an issue with a “specific subset of processors”…
Like the SolarWinds’ misstep and a handful of other bone-chilling issues, Microsoft is skilled at making sure that its engineering is not the entire problem. That may be one benefit of what I call good enough engineering. The space created by certain systems and methods means that those who follow documentation can make mistakes. That’s where the blame should be placed.
Makes sense to me. Some MSI motherboard users looking at the beloved BSOD may not agree.
Stephen E Arnold, August 30, 2023
Microsoft Wants to Help Improve Security: What about Its Engineering of Security
August 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft is a an Onion subject when it comes to security. Black hat hackers easily crack any new PC code as soon as it is released. Generative AI adds a new slew of challenges for bad actors but Microsoft has taken preventative measures to protect their new generative AI tools. Wired details how Microsoft has invested in AI security for years, “Microsoft’s AI Red Team Has Already Made The Case For Itself.”
While generative AI aka chatbots aka AI assistants are new for consumers, tech professionals have been developing them for years. While the professionals have experimented with the best ways to use the technology, they have also tested the best way to secure AI.
Microsoft shared that since 2018 it has had a team learning how to attack its AI platforms to discover weaknesses. Known as Microsoft’s AI red team, the group consists of an interdisciplinary team of social engineers, cybersecurity engineers, and machine learning experts. The red team shares its findings with its parent company and the tech industry. Microsoft wants the information known across the tech industry. The team learned that AI security has conceptual differences from typical digital defense so AI security experts need to alter their approach to their work.
“ ‘When we started, the question was, ‘What are you fundamentally going to do that’s different? Why do we need an AI red team?’ says Ram Shankar Siva Kumar, the founder of Microsoft’s AI red team. ‘But if you look at AI red teaming as only traditional red teaming, and if you take only the security mindset, that may not be sufficient. We now have to recognize the responsible AI aspect, which is accountability of AI system failures—so generating offensive content, generating ungrounded content. That is the holy grail of AI red teaming. Not just looking at failures of security but also responsible AI failures.’”
Kumar said it took time to make the distinction and that red team with have a dual mission. The red team’s early work focused on designing traditional security tools. As time passed, the AI read team expanded its work to incorporate machine learning flaws and failures.
The AI red team also concentrates on anticipating where attacks could emerge and developing solutions to counter them. Kumar explains that while the AI red team is part of Microsoft, they work to defend the entire industry.
Whitney Grace, August 24, 2023
Microsoft and Russia: A Convenient Excuse?
August 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
In the Solarwinds’ vortex, the explanation of 1,000 Russia hackers illuminated a security with the heat of a burning EV with lithium batteries. Now Russian hackers have again created a problem. Are these Russians cut from the same cloth as the folks who have turned a special operation into a noir Laurel & Hardy comedy routine?
users in Microsoft Teams chatrooms, pretending to be from technical support. In a blog post [August 2, 2023], Microsoft researchers called the campaign a “highly targeted social engineering attack” by a Russia-based hacking team dubbed Midnight Blizzard. The hacking group, which was previously tracked as Nobelium, has been attributed by the U.S. and UK governments as part of the Foreign Intelligence Service of the Russian Federation.
Isn’t this the Russia producing planners who stalled a column of tanks in its alleged lightning strike on the capital of Ukraine? I think this is the country now creating problems for Microsoft. Imagine that.
The write up continues:
For now, the fake domains and accounts have been neutralized, the researchers said. “Microsoft has mitigated the actor from using the domains and continues to investigate this activity and work to remediate the impact of the attack,” Microsoft said. The company also put forth a list of recommended precautions to reduce the risk of future attacks, including educating users about “social engineering” attacks.
Let me get this straight. Microsoft deployed software with issues. Those issues were fixed after the Russians attacked. The fix, if I understand the statement, is for customers/users to take “precautions” which include teaching obviously stupid customers/users how to be smart. I am probably off base, but it seems to me that Microsoft deployed something that was exploitable. Then after the problem became obvious, Microsoft engineered an alleged “repair.” Now Microsoft wants others to up their game.
Several observations:
- Why not cut and paste the statements from Microsoft’s response to the SolarWinds’ missteps. Why write the same old stuff and recycle the tiresome assertion about Russia? ChatGPT could probably help out Microsoft’s PR team.
- The bad actors target Microsoft because it is a big, overblown system/products with security that whips some people into a frenzy of excitement.
- Customers and users are not going to change their behaviors even with a new training program. The system must be engineered to work in the environment of the real-life users.
Net net: The security problem can be identified when Microsofties look in a mirror. Perhaps Microsoft should train its engineers to deliver security systems and products?
Stephen E Arnold, August 14, 2023