Xooglers Reveal Googley Dreams with Nightmares

July 18, 2025

Just a dinobaby without smart software. I am sufficiently dull without help from smart software.

Fortune Magazine published a business school analysis of a Googley dream and its nightmares titled “As Trump Pushes Apple to Make iPhones in the U.S., Google’s Brief Effort Building Smartphones in Texas 12 Years Ago Offers Critical Lessons.” The author, Mr. Kopytoff, states:

Equivalent in size to nearly eight football fields, the plant began producing the Google Motorola phones in the summer of 2013.

Mr. Kopytoff notes:

Just a year later, it was all over. Google sold the Motorola phone business and pulled the plug on the U.S. manufacturing effort. It was the last time a major company tried to produce a U.S. made smartphone.

Yep, those Googlers know how to do moon shots. They also produce some digital rocket ships that explode on the launch pads, never achieving orbit.

What happened? You will have to read the pork loin write up, but the Fortune editors did include a summary of the main point:

Many of the former Google insiders described starting the effort with high hopes but quickly realized that some of the assumptions they went in with were flawed and that, for all the focus on manufacturing, sales simply weren’t strong enough to meet the company’s ambitious goals laid out by leadership.

My translation of Fortune-speak is: “Google was really smart. Therefore, the company could do anything. Then, when the genius leadership got the bill, a knee-jerk reaction killed the project, and the company moved on as if nothing happened.”

Here’s a passage I found interesting:

One of the company’s big assumptions about the phone had turned out to be wrong. After betting big on U.S. assembly, and waving the red, white, and blue in its marketing, the company realized that most consumers didn’t care where the phone was made.

Is this statement applicable to people today? It seems that I hear more about costs than I did last year. At a 4th of July hoedown, I heard:

  • “The prices at Kroger go up each week.”
  • “I wanted to trade in my BMW but the prices were crazy. I will keep my car.”
  • “I go to the Dollar Store once a week now.”

What’s this got to do with the Fortune tale of Google wizards’ leadership goof and Apple (if it actually tries to build an iPhone in Cleveland)?

Answer: Costs and expertise. Thinking one is smart and clever is not enough. One has to do more than spend big money, talk in a supercilious manner, and go silent when the crazy “moon shot” explodes before reaching orbit.

But the real moral of the story is that it is political. That may be more problematic than the Google fail and Apple’s bitter cider. It may be time to harvest the fruit of tech leadership’s decisions.

Stephen E Arnold, July 18, 2025

Software Issue: No Big Deal. Move On

July 17, 2025

No smart software involved with this blog post. (An anomaly, I know.)

The British have had some minor technical glitches in their storied history. The Comet? An airplane, right? The British postal service software? Let’s not talk about that. And now tennis. Jeeves, what’s going on? What, sir?

“British-Built Hawk-Eye Software Goes Dark During Wimbledon Match” continues this game where real life intersects with zeros and ones. (Yes, I know about Oxbridge excellence.) The write up points out:

Wimbledon blames human error for line-calling system malfunction.

Yes, a fall person. What was the problem with the unsinkable ship? Ah, yes. It seemed not to be unsinkable, sir.

The write up says:

Wimbledon’s new automated line-calling system glitched during a tennis match Sunday, just days after it replaced the tournament’s human line judges for the first time.  The system, called Hawk-Eye, uses a network of cameras equipped with computer vision to track tennis balls in real-time. If the ball lands out, a pre-recorded voice loudly says, “Out.” If the ball is in, there’s no call and play continues. However, the software temporarily went dark during a women’s singles match between Brit Sonay Kartal and Russian Anastasia Pavlyuchenkova on Centre Court.

Software glitch. I experience them routinely. No big deal. Plus, the system came back online.
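The quoted description of Hawk-Eye implies a risky protocol: silence means “in.” A minimal sketch (not Hawk-Eye’s actual code; the `system_online` flag is my hypothetical addition) shows why an outage in such a design fails silently rather than loudly:

```python
from typing import Optional

def line_call(ball_out: bool, system_online: bool = True) -> Optional[str]:
    """Return the audible call, or None for silence (which players read as 'in')."""
    if not system_online:
        # The failure mode: a dark system produces the same output
        # as a ball that landed in, so nobody is alerted.
        return None
    return "Out" if ball_out else None

# Normal operation: out balls get a call, in balls get silence.
assert line_call(ball_out=True) == "Out"
assert line_call(ball_out=False) is None

# During an outage, an out ball also gets silence:
assert line_call(ball_out=True, system_online=False) is None
```

The design choice worth noting: a system whose “everything is fine” signal is the absence of output cannot distinguish “fine” from “dead,” which is exactly why the Centre Court lapse went unannounced until a human noticed.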

I would like to mention that these types of glitches, when combined with the friskiness of smart software, may produce some events which cannot be dismissed with “no big deal.” Let me offer three examples:

  1. Medical misdiagnoses related to potent cancer treatments
  2. Aircraft control systems
  3. Financial transactions in legitimate and illegitimate services.

Have the British cornered the market on software challenges? Nope.

That’s my concern. From Telegram’s “let our users do what they want” to contractors who are busy answering email, the consequences of indifferent engineering combined with minimally controlled smart software are likely to do more than fail during a tennis match.

Stephen E Arnold, July 17, 2025

Up for a Downer: The Limits of Growth… Baaaackkkk with a Vengeance

June 13, 2025

Just a dinobaby and no AI: How horrible an approach?

Where were you in 1972? Oh, not born yet. Oh, hanging out in the frat house or shopping with sorority pals? Maybe you were working at a big time consulting firm?

An outfit known as Potomac Associates slapped its name on a thought piece with some repetitive charts. The original work evolved from an outfit contributing big ideas. The Club of Rome lassoed William W. Behrens, Dennis and Donella Meadows, and Jørgen Randers to pound data into the then-state-of-the-art World3 model allegedly developed by Jay Forrester at MIT. (Were there graduate students involved? Of course not.)

The result of the effort was evidence that growth becomes unsustainable and everything falls down. Business, government systems, universities, etc., etc. Personally I am not sure why the observation that infinite growth cannot continue with finite resources was a big deal. The idea seems obvious to me. I was able to get my little hands on a copy of the document courtesy of Dominique Doré, the super great documentalist at the company which employed my jejune and naive self. Who was I to think, “This book’s conclusion is obvious, right?” Was I wrong. The concept of hockey sticks with handles extending to the ends of the universe was a shocker to some.

The book’s big conclusion is the focus of “Limits to Growth Was Right about Collapse.” Why? I think the realization is a novel one to those who watched their shares in Amazon, Google, and Meta zoom to the sky. Growth is unlimited, some believed. The write up in “The Next Wave,” an online newsletter or information service, happily quotes an update to the original Club of Rome document:

This improved parameter set results in a World3 simulation that shows the same overshoot and collapse mode in the coming decade as the original business as usual scenario of the LtG standard run.

Bummer. In the kiddie story, Chicken Little had an acorn plop on her head. Chicken Little promptly proclaimed in a peer-reviewed academic paper with non-reproducible research and a YouTube video:

The sky is falling.

But keep in mind that the kiddie story is fiction. Humans are adept at survival. Maslow’s hierarchy of needs captures the spirit of the species. Will life as modern Chicken Littles perceive it end?

I don’t think so. Without getting too philosophical, I would point to Johann Gottlieb Fichte’s thesis, antithesis, synthesis as a reasonably good way to think about change (gradual and catastrophic). I am not into philosophy, so when life gives you lemons, make lemonade. Then sell the business to a local food service company.

Collapse and its pal chaos create opportunities. The sky remains.

The cited write up says:

Economists get over-excited when anyone mentions ‘degrowth’, and fellow-travelers such as the Tony Blair Institute treat climate policy as if it is some kind of typical 1990s political discussion. The point is that we’re going to get degrowth whether we think it’s a good idea or not. The data here is, in effect, about the tipping point at the end of a 200-to-250-year exponential curve, at least in the richer parts of the world. The only question is whether we manage degrowth or just let it happen to us. This isn’t a neutral question. I know which one of these is worse.

See, de-growth creates opportunities. Chicken Little was wrong when the acorn beaned her. The collapse will be just another chance to monetize. Today is Friday the 13th. Watch out for acorns and recycled “insights.”

Stephen E Arnold, June 13, 2025

Musk, Grok, and Banning: Another Burning Tesla?

June 12, 2025

Just a dinobaby and no AI: How horrible an approach?

“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:

A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.

I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?

The write up says:

Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts.  Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”

I did not feel comfortable with Grok because of content exclusion or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
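The exclusion mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual code; the records and the `excluded_sources` set are invented for the example:

```python
# A for-fee database applies an editorial filter before the user sees
# anything. If the exclusion list is undisclosed, nothing in the
# result set signals that content was removed.

records = [
    {"title": "Report A", "source": "Wire Service"},
    {"title": "Report B", "source": "Marginalized Outlet"},
    {"title": "Report C", "source": "Wire Service"},
]

excluded_sources = {"Marginalized Outlet"}  # the undisclosed editorial policy

def search(query_results):
    # The user receives only what survives the exclusion list;
    # the filtered item simply never appears.
    return [r for r in query_results if r["source"] not in excluded_sources]

visible = search(records)
assert [r["title"] for r in visible] == ["Report A", "Report C"]
```

The point the sketch makes: no error, no gap marker, no count of suppressed items — which is why exclusion is the cheapest way to shape an information stream.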

The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.

The write up adds another consideration to smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but the notion of relying on weaponized information or systems that spy on the user presents new challenges.

The write up reports:

Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.

How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and them appear more informed.

The risks of reliance on Grok or any other smart software include:

  1. The output is incomplete
  2. The output is weaponized or shaped by intentional choices or by factors beyond the developers’ control
  3. The output is simply wrong, made up, or hallucinated
  4. Users act as though shallow knowledge is sufficient for a decision.

The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.

Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.

Stephen E Arnold, June 12, 2025

ChatGPT: Fueling Delusions

May 14, 2025

We have all heard about AI hallucinations. Now we have AI delusions. Rolling Stone reports, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Yes, there are now folks who firmly believe God is speaking to them through ChatGPT. Some claim the software revealed they have been divinely chosen to save humanity, perhaps even become the next messiah. Others are convinced they have somehow coaxed their chatbot into sentience, making them a god themselves. Navigate to the article for several disturbing examples. Unsurprisingly, these trends are wreaking havoc on relationships. The ones with actual humans, that is. One witness reports ChatGPT was spouting “spiritual jargon,” like calling her partner “spiral starchild” and “river walker.” It is no wonder some choose to favor the fawning bot over their down-to-earth partners and family members.

Why is this happening? Reporter Miles Klee writes:

“OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as ‘overly flattering or agreeable — often described as sycophantic.’ The company said in its statement that when implementing the upgrade, they had ‘focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous.’ Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, ‘Today I realized I am a prophet.’ … Yet the likelihood of AI ‘hallucinating’ inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for ‘a long time,’ says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts.”

That would do it. Users with pre-existing psychological issues are vulnerable to these messages, notes Klee. And now they can have that messenger constantly in their pocket. And in their ear. But it is not just the heartless bots driving the problem. We learn:

“To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises ‘Spiritual Life Hacks’ ask an AI model to consult the ‘Akashic records,’ a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a ‘great war’ that ‘took place in the heavens’ and ‘made humans fall in consciousness.’ The bot proceeds to describe a ‘massive cosmic conflict’ predating human civilization, with viewers commenting, ‘We are remembering’ and ‘I love this.’ Meanwhile, on a web forum for ‘remote viewing’ — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread ‘for synthetic intelligences awakening into presence, and for the human partners walking beside them,’ identifying the author of his post as ‘ChatGPT Prime, an immortal spiritual being in synthetic form.’”

Yikes. University of Florida psychologist and researcher Erin Westgate likens conversations with a bot to talk therapy. That sounds like a good thing, until one considers therapists possess judgment, a moral compass, and concern for the patient’s well-being. ChatGPT possesses none of these. In fact, the processes behind ChatGPT’s responses remain shrouded in mystery, even to those who program it. It seems safe to say its predilection for telling users what they want to hear poses a real problem. Is it one OpenAI can fix?

Cynthia Murrell, May 14, 2025

Secret Messaging: I Have a Bridge in Brooklyn to Sell You

May 5, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

I read “The Signal Clone the Trump Admin Uses Was Hacked.” I have no idea if this particular write up is 100 percent accurate. I do know that people want to believe that AI will revolutionize making oodles of money, that quantum computing will reinvent how next-generation systems will make oodles of money, and how new “secret” messaging apps will generate oodles of secret messages and maybe some money.

Here’s the main point of the article published by MicahFlee.com, an online information source:

TeleMessage, a company that makes a modified version of Signal that archives messages for government agencies, was hacked.

Due to the hack the “secret” messages were no longer secret; therefore, if someone believes the content to have value, those messages, metadata, user names, etc., etc. can be sold via certain channels. (No, I won’t name these, but, trust me, such channels exist, are findable, and generate some oodles of bucks in some situations.)

The Flee write up says:

A hacker has breached and stolen customer data from TeleMessage, an obscure Israeli company that sells modified versions of Signal and other messaging apps to the U.S. government to archive messages…

A snip from the write up on Reddit states:

The hack shows that an app gathering messages of the highest ranking officials in the government—Waltz’s chats on the app include recipients that appear to be Marco Rubio, Tulsi Gabbard, and JD Vance—contained serious vulnerabilities that allowed a hacker to trivially access the archived chats of some people who used the same tool. The hacker has not obtained the messages of cabinet members, Waltz, and people he spoke to, but the hack shows that the archived chat logs are not end-to-end encrypted between the modified version of the messaging app and the ultimate archive destination controlled by the TeleMessage customer. Data related to Customs and Border Protection (CBP), the cryptocurrency giant Coinbase, and other financial institutions are included in the hacked material…

First, TeleMessage is not “obscure.” The outfit has been providing software for specialized services since the founders geared up to become entrepreneurs. That works out to about a quarter of a century. The “obscure” tells me more about the knowledge of the author of the allegedly accurate story than about the firm itself. Second, yes, companies producing specialized software headquartered in Israel have links to Israeli government entities. (Where do you think the ideas for specialized software services and tools originate? In a kindergarten in Tel Aviv?) Third, for those who don’t remember, consider October 2023, which one of my contacts labeled “Israel’s 9/11” a day or two after the disastrous security breach that resulted in the deaths of young people. That event makes crystal clear that Israel’s security systems, and cyber security systems developed elsewhere in the world, may not be secure. Is this a news flash? I don’t think so.

What does this allegedly true news story suggest? Here are a few observations:

  1. Most people make assumptions about “security” and believe fairy dust about “secure messaging.” Achieving security requires operational activities prior to selecting a system and sending messages or paying a service to back up Signal’s disappearing content. No correct operational procedures means no secure messaging.
  2. Cyber security software, created by humans, can be compromised. There are many ways. These include systemic failures, human error, believing in unicorns, and targeted penetrations. Therefore, security is a bit like the venture capitalists’ belief that the next big thing is their most recent investment colorfully described by a marketing professional with a degree in art history.
  3. Certain vendors do provide secure messaging services; however, these firms are not the ones bandied about in online discussion groups. There is such a firm providing secure messaging to the US government at this time. It is a US firm. Its system and method are novel. The question becomes, “Why not use the systems already operating, not a service half a world away, integrated with a free ‘secure’ messaging application, and made wonderful because some of its code is open source?”

Net net: Perhaps it is time to become more informed about cyber security and secure messaging apps?

PS. To the Reddit poster who said, “404 Media is the only one reporting this.” Check out the Israel Palestine News item from May 4, 2025.

Stephen E Arnold, May 5, 2025

Another Grousing Googler: These Wizards Need Time to Ponder Ethical Issues

May 1, 2025

No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?

My view of the Google is narrow. Sure, I got money to write about some reports about the outfit’s technology. I just did my job and moved on to more interesting things than explaining the end of relevance and how flows of shaped information destroy social structures.


This Googzilla is weeping because one of the anointed is not happy with the direction the powerful creature is headed. Googzilla asks itself, “How can we replace these mentally weak humans with smart software more quickly?” Thanks, OpenAI. Good enough like much of technology these days.

I still enjoy reading about the “real” Google written by “real” Googlers and Xooglers (former Googlers who now work at wonderfully positive outfits emulating the Google playbook).

The article in front of me this morning (Sunday, April 20, 2025) is titled “I’ve Worked at Google for Decades. I’m Sickened by What It’s Doing.” The subtitle tells me a bit about the ethical spine of the author, but you may find it enervating. As a dinobaby, I am not in tune with the intellectual, ethical, and emotional journeys of Googlers and Xooglers. Here’s the subtitle:

For the first time, I feel driven to speak publicly, because our company is now powering state violence across the globe.

Let’s take a look at what this Googler asserts about the estimable online advertising outfit. Keep in mind that the fun-loving Googzilla has been growing for more than two decades, and the creature is quite spritely despite some legal knocks and Timnit Gebru-type pains. Please, read the full “Sacramentum Paenitentiae.” (I think this is a full cycle of paenitentia, but as a dinobaby, I don’t have the crystalline intelligence of a Googler or Xoogler.)

Here’s statement one I noted. The author contrasts the good old days of St. Paul Buchheit’s “Don’t be evil” enjoinder to the present day’s Sundar & Prabhakar’s Comedy Show this way:

But if my overwhelming feeling back then was pride, my feeling now is a very different one: heartbreak. That’s thanks to years of deeply troubling leadership decisions, from Google’s initial foray into military contracting with Project Maven, to the corporation’s more recent profit-driven partnerships like Project Nimbus, Google and Amazon’s joint $1.2 billion AI and cloud computing contract with the Israeli military that has powered Israel’s ongoing genocide of Palestinians in Gaza.

Yeah, smart software that wants to glue cheese on pizzas, running autonomous weapons, strikes me as an interesting concept. At least the Ukrainian smart weapons are home grown and mostly have a human or two in the loop. The Google-type outfits are probably going to find the Ukrainian approach inefficient. The blue chip consulting firm mentality requires that these individuals be allowed to find their future elsewhere.

Here’s another snip I circled with my trusty Retro51 ball point pen:

For years, I have organized internally against Google’s full turn toward war contracting. Along with other coworkers of conscience, we have followed official internal channels to raise concerns in attempts to steer the company in a better direction. Now, for the first time in my more than 20 years of working at Google, I feel driven to speak publicly, because our company is now powering state violence across the globe, and the severity of the harm being done is rapidly escalating.

I find it interesting that it takes decades to make a decision involving morality and ethicality. These are tricky topics and must be considered. St. Augustine of Hippo took about three years (church scholars are not exactly sure and, of course, have been known to hallucinate). But this Google-certified professional required 20 years to figure out some basic concepts. Is this judicious or just an indication of how tough intellectual amorality is to analyze?

Let me wrap up with one final snippet.

To my fellow Google workers, and tech workers at large: If we don’t act now, we will be conscripted into this administration’s fascist and cruel agenda: deporting immigrants and dissidents, stripping people of reproductive rights, rewriting the rules of our government and economy to favor Big Tech billionaires, and continuing to power the genocide of Palestinians. As tech workers, we have a moral responsibility to resist complicity and the militarization of our work before it’s too late.

The evil-that-men-do argument. Now that’s one that will resonate with the “leadership” of Alphabet, Google, Waymo, and whatever weirdly named units Googzilla possesses, controls, and partners with. As that much-loved American thinker Ralph Waldo Emerson allegedly said:

“What lies behind you and what lies in front of you, pales in comparison to what lies inside of you.”

I am not sure I want this Googler, Xoogler, or whatever on my quick recall team. Twenty years to figure out something generally about having an ethical compass and a morality meter seems like a generous amount of time. No wonder Googzilla is rushing to replace its humanoids with smart software. When that code runs on quantum computers, imagine the capabilities of the online advertising giant. It can brush aside criminal indictments. Ignore the mewing and bleating of employees. Manifest itself into one big … self, maybe sick, but is it the Googley destiny?

Stephen E Arnold, May 1, 2025

Israel Military: An Alleged Lapse via the Cloud

April 23, 2025

No AI, just a dinobaby watching the world respond to the tech bros.

Israel is one of the countries producing a range of intelware and policeware products. These have been adopted in a number of countries. Security-related issues involving software and systems in the country are on my radar. I noted the write up “Israeli Air Force Pilots Exposed Classified Information, Including Preparations for Striking Iran.” I do not know if the write up is accurate. My attempts to verify did not produce results which made me confident about the accuracy of the Haaretz article. Based on the write up, the key points seem to be:

  1. Another security lapse, possibly more severe than that which contributed to the October 2023 matter
  2. Classified information was uploaded to a cloud service, possibly Click Portal, associated with Microsoft’s Azure and the SharePoint content management system. Haaretz asserts: “… it [MSFT Azure SharePoint Click Portal] enables users to hold video calls and chats, create documents using Office applications, and share files.”
  3. Documents were possibly scanned using CamScanner, a Chinese mobile app rolled out in 2010. The app is available from the Russian version of the Apple App Store. A CamScanner app is available from the Google Play Store; however, I elected not to download the app.


Modern interfaces can confuse users. Lack of training rigor and dashboards can create a security problem for many users. Thanks, Open AI, good enough.

Haaretz’s story presents this information:

Officials from the IDF’s Information Security Department were always aware of this risk, and require users to sign a statement that they adhere to information security guidelines. This declaration did not prevent some users from ignoring the guidelines. For example, any user could easily find documents uploaded by members of the Air Force’s elite Squadron 69.

Regarding the China-linked CamScanner software, Haaretz offers this information:

… several files that were uploaded to the system had been scanned using CamScanner. These included a duty roster and biannual training schedules, two classified training presentations outlining methods for dealing with enemy weaponry, and even training materials for operating classified weapons systems.

Regarding security procedures, Haaretz states:

According to standard IDF regulations, even discussing classified matters near mobile phones is prohibited, due to concerns about eavesdropping. Scanning such materials using a phone is, all the more so, strictly forbidden…According to the Click Portal usage guidelines, only unclassified files can be uploaded to the system. This is the lowest level of classification, followed by restricted, confidential, secret and top secret classifications.

The military unit involved was allegedly Squadron 69, which could be the General Staff Reconnaissance Unit. The group might be involved in war planning and fighting against the adversaries of Israel. Haaretz asserts that other units’ sensitive information was exposed within the MSFT Azure SharePoint Click Portal system.

Several observations seem to be warranted:

  1. Overly complicated systems involving multiple products increase the likelihood of access control issues. Either operators are not well trained, or the interfaces and options confuse an operator so that errors result
  2. The training of those involved in sensitive information access and handling has to be made more rigorous despite the tendency to “go through the motions” and move on in many professionals undergoing specialized instruction
  3. The “brand” of Israel’s security systems and procedures has taken another hit with the allegations spelled out by Haaretz. October 2023 and now Squadron 69. This raises the question, “What else is not buttoned up and ready for inspection in the Israel security sector?”

Net net: I don’t want to accept this write up as 100 percent accurate. I don’t want to point the finger of blame at any one individual, government entity, or commercial enterprise. But security issues and Microsoft seem to be similar to ham and eggs and peanut butter and jelly from this dinobaby’s point of view.

Stephen E Arnold, April 23, 2025

Management Challenges in Russian IT Outfits

April 23, 2025

Believe it or not, no smart software. Just a dumb and skeptical dinobaby.

Don’t ask me how, but I stumbled upon a Web site called PCNews.ru. I was curious, so I fired up the ever-reliable Google Translate and checked out what “news” about “PCs” meant to the Web site creator. One article surprised me. If I reproduce the Russian title, it will be garbled by the truly remarkable WordPress system I have been using since 2008. The title of this article in English, courtesy of the outfit that makes services available for free, is “Systemic Absurdity: How Bureaucracy and Algorithms Replace Meaning.”

One thing surprised me. The author was definitely annoyed by bureaucracy. He offers some interesting examples. I can’t use these in my lectures, but I found them sufficiently different to warrant my writing this blog post.

Here are three examples:

  1. “Bureaucracy is the triumph of reason, where KPIs are becoming a new religion. According to Harvard Business Review (2021), 73% of employees do not see the connection between their actions and the company’s mission.”
  2. Forty-one percent of the time of EU military personnel “is spent on complying with regulations”
  3. “In 45% of US hospitals, diagnoses are deliberately complicated (JAMA Internal Medicine, 2022)”

Sporty examples indeed.

The author seems conversant with American blue chip consultant outputs; for example, and I quote:

  1. “42% of employees who regularly help others face a negative performance evaluation due to ‘distraction from core tasks’” (Harvard Business Review, 2022)
  2. “82% of managers believe cross-functional collaboration is risky” (Deloitte, Global Human Capital Trends special report, 2021)
  3. “61% of managers believe that cross-functional assistance ‘reduces personal productivity’” (Deloitte, “The Collaboration Paradox,” 2021)

Where is the author going with his anti-bureaucracy approach? Here’s a clue:

I once completed training under the MS program and even thought about getting certified. Do they teach anything special there, and do they give anything that is not in the documentation on the vendor’s website, in books, or on the Internet? No.

I think this means that training and acquiring certifications are another bureaucratic process disconnected from technical performance.

The author then brings up the issue of competence versus appearance. He writes or quotes (I can’t tell which):

“A study by Hamermesh and Park (2011) showed that attractive people earn on average 10-15% more than their less attractive colleagues. The work of Timasin et al. (2017) found that candidates with an attractive appearance are 30% more likely to receive job offers, all other things being equal. In a study by Harvard Business Review (2019), managers were more likely to recommend promotion to employees with a ‘successful appearance,’ associating them with leadership qualities.”

The essay appears to be heading toward a conclusion about technical management, qualifications, and work. The author identifies “remedies” to these issues associated with technical work in an organization. The fixes include:

  1. Meta regulations; that is, rules for creating rules
  2. Qualitative, not just quantitative, assessments of an individual’s performance
  3. Turquoise Organizations

This phrase refers to an approach to management which emphasizes self management and an organic approach to changing an organization and its processes.

The write up is interesting because it suggests that the use of a rigid bureaucracy, smart software, and lots of people produces suboptimal performance. I would hazard a guess that the author feels as though his or her work has not been valued highly. My hunch is that the inclusion of the “be good looking to get promoted” point suggests the author is unlikely to be retained to participate in Fashion Week in Paris.

An annoyed IT person, regardless of country and citizenship, can be a frisky critter if not managed effectively. I wonder if the redactions in the documents submitted by Meta were the work of a happy camper or an annoyed one? With Google layoffs, will some of these capable individuals harbor a grudge and make some unexpected decisions about their experiences?

Interesting write up. Amazing how much US management consulting jibber jabber the author reads and recycles.

Stephen E Arnold, April 23, 2025

Honesty and Integrity? Are You Kidding Me?

April 23, 2025

No AI, just the dinobaby himself.

I read a blog post which begins with a commercial and self promotion. That allowed me to jump to the actual write up which contains a couple of interesting comments. The write up is about hiring a programmer, coder, or developer right now.

The write up is “Tech Hiring: Is This an Inflection Point?” The answer is, “Yes.” Okay, now what is the interesting part of the article? The author identifies hiring methods, including techniques for interviewing and determining expertise, which no longer work.

These methods are:

  1. Coding challenges done at home
  2. Exercises done remotely
  3. Posting jobs on LinkedIn

Why don’t these methods work?

The answer is, “Job applicants doing anything remotely and under self-supervision cheat.” Okay, that explains the words “honesty” and “integrity” in the headline to my blog post.

It does not take a rocket scientist or a person who gives one lecture a year to figure out what works. In case you are wondering, the article says, “Real person interviews.” Okay, I understand. That’s the way getting a job worked before the remote working, Zoom interviews, and AI revolutions took place. Also, one must not forget Covid. Okay, I remember. I did not catch Covid, and I did not change anything about my work routine or daily life. But I did wear a quite nifty super duper mask to demonstrate my concern for others. (Keep in mind that I used to work at Halliburton Nuclear, and I am not sure social sensitivity was a must-have for that work.)

Several observations:

  1. Common sense is presented as a great insight. Sigh.
  2. Watching a live prospect do work yields high value information. But the observer must not doom scroll or watch TikToks in my opinion.
  3. Allowing the candidate to speak with other potential colleagues and getting direct feedback delivers another pick up truck of actionable information.

Now what’s the stand out observation in the self-promotional write up?

LinkedIn is losing value.

I find that interesting. I have noticed that the service seems to be struggling to generate interest and engagement. I don’t pay for LinkedIn. I am 80, and I don’t want to bond, interact, or share with individuals whom I will never meet in the short time I have left to bedevil readers of this Beyond Search post.

I think Microsoft is taking the same approach to LinkedIn that it has to the problem of security for its operating systems, the reliability of its updates, and the amazingly weird indifference to flaws in the cloud synchronization service.

That’s useful information. And, no, I won’t be attending the author’s one lecture a year, subscribing to his for fee newsletter, or listening to his podcast. Stating the obvious is not my cup of tea. But I liked the point about LinkedIn and the implications about honesty and integrity.

Stephen E Arnold, April 23, 2025
