The Thought Process May Be a Problem: Microsoft and Copilot Fees
February 4, 2025
Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.
Here’s a connection to consider. On one hand, we have the remarkable attack surface of Microsoft software. Think SolarWinds. Think of the note from the US government telling Microsoft to fix its security. Think about the flood of bug fixes needed to make Microsoft software secure. Think about the happy bad actors gleefully taking advantage of what is the equivalent of a piece of chocolate cake left on a picnic table in Iowa in July.
Now think about the marketing blast that kicked off the “smart software” revolution. Google flashed its weird yellow and red warning lights. Sam AI-Man began thinking in terms of trillions of dollars. Venture firms wrote checks like it was 1999 again. Even grade school students are using smart software to learn about George Washington crossing the Delaware.
And where are we? ZDNet published an interesting article which may have the immediate effect of generating some negative vibes about Microsoft. To ZDNet’s credit, the write up is bluntly titled “The Microsoft 365 Copilot Launch Was a Total Disaster.” I want to share some comments from the write up before I return to the broader notion that the “thought process” is THE Microsoft problem.
I noted this passage:
Shortly after the New Year, someone in Redmond pushed a button that raised the price of its popular (84 million paid subscribers worldwide!) Microsoft 365 product. You know, the one that used to be called Microsoft Office? Yeah, well, now the app is called Microsoft 365 Copilot, and you’re going to be paying at least 30% more for that subscription starting with your next bill.
How about this statement:
No one wants to pay for AI
Some people do, but these individuals do not seem to be the majority of computing device users. Furthermore, there are some brave souls suggesting that today’s approach to AI is not improving even as the costs of delivering AI continue to rise. Remember those Sam AI-Man trillions?
Microsoft is not too good with numbers either. The article walks through the pricing and cancellation functions. Here’s the key statement, which follows the author’s explanation of the failure to keep the information consistent across the Microsoft empire:
It could be worse, I suppose. Just ask the French and Spanish subscribers who got a similar pop-up message telling them their price had gone from €10 a month to €13,000. (Those pesky decimals.)
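How does €10 a month balloon to €13,000? The write up blames decimals, and the exact mechanics are not public, but a classic member of this class of blunder is a decimal-separator mix-up between European and US conventions. A minimal, purely hypothetical Python sketch:

```python
# Hypothetical illustration only: this is NOT Microsoft's actual bug, just the
# kind of error the "pesky decimals" quip points at. A European price string
# uses a comma as the decimal separator; code expecting US formatting strips
# the comma as if it were a thousands separator.
def parse_price_us_style(text: str) -> float:
    """Assumes US conventions: ',' groups thousands, '.' marks decimals."""
    return float(text.replace(",", ""))

european_price = "13,00"                      # thirteen euros, written the European way
print(parse_price_us_style(european_price))   # 1300.0, two orders of magnitude too high
```

One mishandled separator inflates a bill by a couple of orders of magnitude; add another formatting pass on top and €13,000 is not hard to imagine.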
Yep, details. Let’s go back to the attack surface idea. Microsoft’s corporate thought process creates problems. I think the security and Copilot examples make clear that something is amiss at Microsoft. The engineering of software and the details of that engineering are not a priority.
That is the problem. And, to me, it sure seems as though Microsoft’s worst characteristics are becoming the dominant features of the company. Furthermore, I believe that the organization cannot remediate itself. That is very concerning. Not only have users lost control, but the firm is unconsciously creating a greater set of problems for many people and organizations.
Not good. In fact, really bad.
Stephen E Arnold, February 4, 2025
Microsoft and Bob Think for Bing
February 4, 2025
Bing is not Google, but Microsoft wants its search engine to dominate queries. Microsoft Bing handles a small percentage of Internet searches, and in a bid to gain more traction it has copied Google’s user interface (UI). Windows Latest spills the tea on the UI copying: “Microsoft Bing Is Trying To Spoof Google UI When People Search Google.com.”
Google’s UI is very distinctive with its minimalist approach. The only items on the Google UI are the query box and the menus along the top and bottom of the page. Microsoft Edge is Microsoft’s Web browser, and it is programmed to use Bing. In a sneaky (and genius) move, when Edge users type Google into the Bing search box they are taken to a UI that is strangely Google-esque. Microsoft is trying this new UI to lower Bing’s bounce rate; that is, the share of users who leave.
Is it an effective tactic?
“But you might wonder how effective this idea would be. Well, if you’re a tech-savvy person, you’ll probably realize what’s going on, then scroll and open Google from the link. However, this move could keep people on Bing if they just want to use a search engine. Google is the number one search engine, and there’s a large number of users who are just looking for a search engine, but they think the search engine is Google. In their mind, the two are the same. That’s because Google has become a synonym for search engines, just like Chrome is for browsers. A lot of users don’t really care what search engine they’re using, so Microsoft’s new practice, which might appear stupid to some of you, is likely very effective.”
For unobservant users and/or those who don’t care, it will work. Microsoft is also tugging on heartstrings with another tactic:
“On top of it, there’s also an interesting message underneath the Google-like search box that says ‘every search brings you closer to a free donation. Choose from over 2 million nonprofits.’ This might also convince some people to keep using Bing.”
What a generous and genius tactic! We’re not sure this is the interface everyone sees, but we love the me-too approach from monopolies and alleged monopolies.
Whitney Grace, February 4, 2025
Another Bad Apple? Is It This Shipment or a Degraded Orchard?
February 3, 2025
Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.
I read “Siri Is Super Dumb and Getting Dumber.” Now Siri, someone told me, had some tenuous connection to the Stanford Research Institute. Then the name and possibly some technology DNA wafted to Cupertino. The juicy apple sauce company produced smart software. Someone demonstrated it to me by asking Siri to call a person named “Yankelovich” by saying the name. That just did not work.
The write up explains that my experience was “dumb” and the new Apple smart software is dumber. That is remarkable. A big company with a number of mostly useful products, like the estimable science fiction headset, and a system that demands I log into Facetime, iMessage, and iCloud every time I use the computer even though I don’t use those features, is mostly perceived as one of the greatest companies on earth.
The write up says:
It’s just incredible how stupid Siri is about a subject matter of such popularity.
Stupid about a popular subject? Even the even more estimable Google figured out a long time ago that one could type just about any spelling of Britney Spears into the search box and the Google would spit out a nifty but superficial report about this famous person and role model for young people.
But Apple? The write up says from a really, truly objective observer of Apple:
New Siri — powered by Apple Intelligence™ with ChatGPT integration enabled — gets the answer completely but plausibly wrong, which is the worst way to get it wrong. It’s also inconsistently wrong — I tried the same question four times, and got a different answer, all of them wrong, each time. It’s a complete failure.
The write up points out:
It’s like Siri is a special-ed student permitted to take an exam with the help of a tutor who knows the correct answers, and still flunks.
Hmmm. Consistently wrong with variations of incorrectness — Do you want to log in to iCloud?
But the killer statement in the write up in my opinion is this one:
Misery loves company they say, so perhaps Apple should, as they’ve hinted since WWDC last June, partner with Google to add Gemini as another “world knowledge” partner to power — or is it weaken? — Apple Intelligence.
Several observations are warranted even though I don’t use Apple mobile devices. I do like the ruggedness of the Mac Air laptops, however. (No, I don’t want to log into Apple Media Services or Facetime, thanks.) Here we go with my perceptions:
- Skip the Sam AI-Man stuff, the really macho Zuck stuff, and the Sundar & Prabhakar stuff. Go with Deepseek. (Someone in Beijing will think positively about the iPhone. Maybe?)
- Face up to the fact that Apple does reasonably good marketing. Those M1, M2, M3 chips in more flavors than the once-yummy Baskin-Robbins offered are easy for consumers to gobble up.
- Innovation is not just marketing. The company has to make what its marketers describe in words. That leap is not working in my opinion.
So where does that leave the write up, the Siri thing, and me? Free to select another vendor and consider shorting Apple stock. The orchard is dropping fruit not fit for human consumption, though a few apples can be converted to apple sauce. That’s a potential business. AI slop, not so much.
Stephen E Arnold, February 3, 2025
A Failure Retrospective
February 3, 2025
Every year has tech failures, and some of them join the zeitgeist as cultural phenomena, like Windows Vista, Windows Me, Apple’s Pippin game console, chatbots, etc. PC Mag runs down the flops in: “Yikes: Breaking Down the 10 Biggest Tech Fails of 2024.” The list starts with Intel’s horrible year, marked by a booted CEO and poor chip performance. It follows up with the Salt Typhoon hack, which proved (not that we didn’t already know it from TikTok) that China is spying on every US citizen, with a focus on bigwigs.
National Public Data lost 272 million social security numbers to a hacker. That was a great summer day for the hacker, but the summer travel season became a nightmare when a faulty CrowdStrike kernel update grounded over 2,700 flights and practically locked down the US borders. Microsoft’s Recall, an AI search tool that took snapshots of user activity so they could be recalled later, was another concern. What if passwords and other sensitive information were recorded?
The fabulous Internet Archive was hacked and taken down by a bad actor protesting the Israel-Gaza conflict. It makes us worry about preserving Internet and other important media history. Rabbit and Humane released AI-powered hardware that was supposed to be a hands-free way to use a digital assistant, but both devices flopped. JuiceBox ended software support on its EV car chargers, while Scarlett Johansson’s voice was stolen by OpenAI for its Voice Mode feature. She called in the lawyers.
The worst of the worst is this:
“Days after he announced plans to acquire Twitter in 2022, Elon Musk argued that the platform needed to be “politically neutral” in order for it to “deserve public trust.” This approach, he said, “effectively means upsetting the far right and the far left equally.” In March 2024, he also pledged to not donate to either US presidential candidate, but by July, he’d changed his tune dramatically, swapping neutrality for MAGA hats. “If we want to preserve freedom and a meritocracy in America, then Trump must win,” Musk tweeted in September. He seized the @America X handle to promote Trump, donated millions to his campaign, shared doctored and misleading clips of VP Kamala Harris, and is now working closely with the president-elect on an effort to cut government spending, which is most certainly a conflict of interest given his government contracts. Some have even suggested that he become Speaker of the House since you don’t have to be a member of Congress to hold that position. The shift sent many X users to alternatives like BlueSky, Threads, and Mastodon in the days after the US election.”
It doesn’t matter what Musk’s political beliefs are. He has no right to participate in politics.
Whitney Grace, February 3, 2025
AI Smart, Humans Dumb When It Comes to Circuits
February 3, 2025
Anyone who knows much about machine learning knows we don’t really understand how AI comes to its conclusions. Nevertheless, computer scientists find algorithms do some things quite nicely. For example, ZME Science reports, "AI Designs Computer Chips We Can’t Understand—But They Work Really Well." A team from Princeton University and IIT Madras decided to flip the process of chip design. Traditionally, human engineers modify existing patterns to achieve desired results. The task is difficult and time-consuming. Instead, these researchers fed their AI the end requirements and told it to take it from there. They call this an "inverse design" method. The team says the resulting chips work great! They just don’t really know how or why. Writer Mihai Andrei explains:
"Whereas the previous method was bottom-up, the new approach is top-down. You start by thinking about what kind of properties you want and then figure out how you can do it. The researchers trained convolutional neural networks (CNNs) — a type of AI model — to understand the complex relationship between a circuit’s geometry and its electromagnetic behavior. These models can predict how a proposed design will perform, often operating on a completely different type of design than what we’re used to. … Perhaps the most exciting part is the new types of designs it came up with."
Yes, exciting. That is one word for it. Lead researcher Kaushik Sengupta notes:
"’We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance,’ says Sengupta. The designs were unintuitive and very different than those made by the human mind. Yet, they frequently offered significant improvements."
But at what cost? We may never know. It is bad enough that health care systems already use opaque algorithms, with all their flaws, to render life-and-death decisions. Just wait until these chips we cannot understand underpin those calculations. New world, new trade-offs for a world with dumb humans.
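For readers curious what “inverse design” looks like in practice, the recipe the article sketches can be illustrated in a few dozen lines: train a CNN surrogate that predicts a circuit’s electromagnetic response from its geometry, then optimize the geometry against a target response by backpropagating through the frozen surrogate. The sketch below is a minimal, hypothetical stand-in; the network, grid size, and band-pass target are placeholders, not the Princeton/IIT Madras setup.

```python
# Minimal inverse-design sketch: a CNN surrogate predicts a frequency response
# from a pixelated circuit geometry; gradient descent then pushes the geometry
# toward a target response through the frozen surrogate. All shapes, layers,
# and targets are illustrative placeholders.
import torch
import torch.nn as nn

GRID = 32          # geometry encoded as a 32x32 pixel pattern (metal / no metal)
N_FREQ = 16        # predicted response sampled at 16 frequency points

class Surrogate(nn.Module):
    """CNN mapping a pixelated geometry to a predicted frequency response."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, N_FREQ),
        )

    def forward(self, x):
        return self.net(x)

surrogate = Surrogate()
# In practice the surrogate is trained on (geometry, simulated response) pairs
# from an electromagnetic solver; here untrained weights are frozen to show the loop.
surrogate.eval()
for p in surrogate.parameters():
    p.requires_grad_(False)

# Desired response: a crude band-pass shape over the 16 frequency points.
target = torch.zeros(1, N_FREQ)
target[:, 6:10] = 1.0

# Start from a random "soft" geometry and push it toward the target spec.
logits = torch.randn(1, 1, GRID, GRID, requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=0.05)

for step in range(200):
    geometry = torch.sigmoid(logits)            # relax the binary pattern to [0, 1]
    predicted = surrogate(geometry)
    loss = nn.functional.mse_loss(predicted, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

design = (torch.sigmoid(logits) > 0.5).float()  # threshold back to a binary mask
print("final surrogate loss:", loss.item())
```

The “randomly shaped” structures Sengupta describes fall out of exactly this kind of loop: nothing in it rewards layouts that look like anything a human engineer would draw.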
Cynthia Murrell, February 3, 2025
Dumb Smart Software? This Is News?
January 31, 2025
A blog post written by a real and still-alive dinobaby. If there is art, there is AI in my workflow.
The prescient “real” journalists at the Guardian have a new insight: when algorithms are involved, humans get the old shaftola. I assume that Weapons of Math Destruction was not on some folks’ reading list. (O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.) That book did a reasonably good job of explaining how smart software’s math can create some excitement for mere humans. Anecdotes abound about Amazon’s management of its team of hard-working delivery professionals, who shift into survival tricks worthy of the wily Dane who creates Survival Russia videos for YouTube.
(Yep, he took his kids to search for graves near a gulag.) “It’s a Nightmare: Couriers Mystified by the Algorithms That Control Their Jobs” explains that smart software raises some questions. The “real” journalist explains:
This week gig workers, trade unions and human rights groups launched a campaign for greater openness from Uber Eats, Just Eat and Deliveroo about the logic underpinning opaque algorithms that determine what work they do and what they are paid. The couriers wonder why someone who has only just logged on gets a gig while others waiting longer are overlooked. Why, when the restaurant is busy and crying out for couriers, does the app say there are none available?
Confusing? To some. But to the senior managers of the organizations shifting to smart software, the cost savings are a big deal. Imagine. In Britain, a senior manager can spend a week or two in Nice, maybe Monaco? The write up reports:
The app companies say they do have rider support staffed by people and some information about the algorithms is available on their websites and when drivers are initially “onboarded”.
Of course the “app companies” say positive things. The issue is that management embraces smart software. A third-party firm is retained to advise the lawyers, the accountants, and possibly the one presentable information technology person invited to the briefing. The options are considered, and another third-party firm is retained to integrate the smart software. That third party retains a probably unpresentable IT person who can lash up some smart software to the baling-wire-and-spit enterprise software system. Bingo! The algorithms perform their magic. Oh, whom does one blame for a flawed solution? I don’t know. Just call in the lawyers.
The article explains the impact on a worker who delivers for people who cannot walk to a restaurant or the grocery:
“Every worker should understand the basis on which they are paid,” Farrar [a delivery professional] said. “But you’re being gamed into deciding whether to accept a job or not. Will I get a better offer? It’s like gambling and it’s very distressing and stressful for people. You are completely in a vacuum about how best to do the job and because people often don’t understand how decisions are being made about their work, it encourages conspiracies.”
To whom should Mr. Farrar and others shafted by math complain? Perhaps the Guardian newspaper, which is slightly less popular than TikTok or X.com, Facebook or Red Book, or BlueSky or YouTube. My suggestion would be for the Guardian to use these channels and beg for pounds or dollars like other valiant social media professionals. The person doing deliveries might want to explore working for Amazon deliveries and avail himself of Survival Russia videos during his generous Amazon breaks. And what about the people who call a restaurant and specify at-home delivery? I would recommend getting out of that comfy lounge chair and walking to the restaurant in person. While you wait for your lovingly crafted meal at the Indian takeaway, you can read Weapons of Math Destruction.
Stephen E Arnold, January 31, 2025
Two Rules for Software. All Software If You Can Believe It
January 31, 2025
Did you know that there are two rules that dictate how all software is written? No, we didn’t either. FJ van Wingerde of the Ask The User blog states and explains the rules in his post: “The Two Rules Of Software Creation From Which Every Problem Derives.” After a bunch of jib jab about the failures of various software projects, Wingerde states the rules:
“It’s the two rules that actually are behind every statement in the agile manifesto. The manifesto unfortunately doesn’t name them really; the people behind it were so steeped in the problems of software delivery—and what they thought would fix it—that they posited their statements without saying why each of these things are necessary to deliver good software. (Unfortunately, necessary but not enough for success, but that we found out in the next decades.) They are [1] Humans cannot accurately describe what they want out of a software system until it exists. and [2] Humans cannot accurately predict how long any software effort will take beyond four weeks. And after 2 weeks it is already dicey.”
The first rule is a true statement for most human activities, though the inability to accurately describe what is wanted may be peculiar to software. Humans know they have a problem, but they do not have a solution in hand. The smart humans figure out how to solve the problem and learn how to describe it with greater accuracy along the way.
As for number two, are project management and weekly software maintenance all a lucky guess, then? Perhaps the effort changes daily, and that is what justifies paying software developers. Then again, someone needs to keep the systems running. Tech people are what keep businesses running, not to mention the entire world.
If software development only has these two rules, we now know why developers cannot provide time estimates or assurances that their software works as leaders trained as accountants and lawyers expect. Rest easy. Software is hopefully good enough, and advertising can cover the costs.
Whitney Grace, January 31, 2025
Happy New Year the Google Way
January 31, 2025
We don’t expect Alphabet Inc. to release anything but positive news these days. Business Standard reports another revealing headline, especially for the Googlers in the story: "Google Layoffs: Sundar Pichai Announced 10% Job Cuts In Managerial Roles.” After a huge push in the wake of wokeness to hire underrepresented groups, aka DEI hires, Google has slowly been getting rid of its deadweight employees. That is what Alphabet Inc. probably calls them.
DEI hires were the first to go. Now, in the last vestiges of Google’s 2024 push for efficiency, 10% of its managerial positions are going bye-bye. Among those positions are directors and vice presidents. CEO Sundar Pichai says the push for downsizing also stems from bigger competition from AI companies, such as OpenAI. These companies are challenging Google’s dominance in the tech industry.
Pichai started the efficiency push in 2022, when people were starting to push back against the ineffectiveness of DEI hires, especially when their budgets were shrunk by inflation. In January 2023, 12,000 employees were laid off. Pichai is also changing the meaning of “Googleyness”:
“At the same meeting, Pichai introduced a refined vision for ‘Googleyness’, a term that once broadly defined the traits of an ideal Google employee but had grown too ambiguous. Pichai reimagined it with a sharper focus on mission-driven work, innovation, and teamwork. He emphasized the importance of creating helpful products, taking bold risks, fostering a scrappy attitude, and collaborating effectively. “Updating modern Google,” as Pichai described it, is now central to the company’s ethos.”
The new spin on being Googley. Enervating. A month into the bright new year, let me ask a non-Googley question: “How are those job searches, bills, and self-esteem coming along?”
Whitney Grace, January 31, 2025
AI Innovation: Writing Checks Is the Google Solution
January 30, 2025
A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.
Wow. First, Jeff Dean gets the lateral arabesque. Then the Google shifts its smart software to the “I am a star” outfit DeepMind in the UK. Now the cuddly Google has, according to Analytics India, pulled a fast one on the wizards laboring at spelling “advertising” by springing another surprise. “Google Invests $1 Bn in Anthropic” reports:
This new investment is separate from the company’s earlier reported funding round of nearly $2 billion earlier this month, led by Lightspeed Venture Partners, to bump the company’s valuation to about $60 billion. In 2023, Google had invested $300 million in Anthropic, acquiring a 10% stake in the company. In November last, Amazon led Anthropic’s $4 billion fundraising effort, raising its overall funding to $8 billion for the company.
I thought Google was quantumly supreme. I thought Google reinvented protein stuff. I thought Google could do podcasts and fix up a person’s Gmail. I obviously was wildly off the mark. Perhaps Google’s “leadership” has taken time from writing scripts for the Sundar & Prabhakar Comedy Tour and had an epiphany. Did the sketch go like this:
Prabhakar: Did you see the slide deck for my last talk about artificial intelligence?
Sundar: Yes, I thought it was so so. Your final slide was a hoot. Did you think it up?
Prabhakar: No, I think little. I asked Anthropic Claude for a snappy joke. It worked.
Sundar: Did Jeff Dean help? Did Demis Hassabis contribute?
Prabhakar: No, just Claude Sonnet. He likes me, Sundar.
Sundar: The secret of life is honesty, fair dealing, and Code Yellow!
Prabhakar: I think Google intelligence may be a contradiction in terms. May I requisition another billion for Anthropic?
Sundar: Yes, we need to care about posterity. Otherwise, our posterity will be defined by a YouTube ad.
Prabhakar: We don’t want to take it in the posterity, do we?
Sundar: Well….
Anthropic allegedly will release a “virtual collaborator.” Google wants that, right, Jeff and Demis? Are there antitrust concerns? Are there potential conflicts of interest? Are there fears about revenues?
Of course not.
Will someone turn off those darned flashing red and yellow lights! Innovation is tough with the sirens, the lights, the quantumly supremeness of Googleness.
Stephen E Arnold, January 30, 2025
Who Knew? A Perfect Bribery Vehicle, According to Ethereum Creator
January 30, 2025
A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.
I read “Ethereum Creator Vitalik Buterin: Politician Issued Coins Perfect Bribery Vehicle.” Isn’t Mr. Buterin a Russian Canadian? People with these cultural influences can, in my experience, spot a plastic moose quickly.
The write up reports:
Ethereum founder Vitalik Buterin has criticized cryptocurrencies issued by politicians as “a perfect bribery vehicle.” “If a politician issues a coin, you do not even need to send them any coins to give them money,” Buterin explained in a tweet. “Instead, you just buy and hold the coin, and this increases the value of their holdings passively.” He added that one of the reasons these “politician coins” are potentially excellent tools for bribery is the element of “deniability.”
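The arithmetic behind that “passive” payoff is trivial, which is the point. A toy calculation with made-up numbers:

```python
# Toy illustration of the "passive bribery" mechanism Buterin describes.
# Every number here is invented; the point is that no coins ever change hands.
politician_holdings = 400_000_000   # tokens the politician already holds
price_before = 0.10                 # USD per token before the "supporter" buys
price_after = 0.15                  # USD per token after buying pushes the price up

paper_gain = politician_holdings * (price_after - price_before)
print(f"Politician's mark-to-market gain: ${paper_gain:,.0f}")  # $20,000,000
```

The buyer never sends the politician a thing, which is exactly the deniability Mr. Buterin flags.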
Mr. Buterin is quoted in the write up as saying:
“I recommend politicians do not go down this path.”
Who knew that a plastic moose would become animated and frighten the insightful Russian Canadian? What sound does a plastic moose make? Hee haw hee haw.
Nope, that’s a jackass. Easy mistake.
Stephen E Arnold, January 30, 2025