If You Want to Work at Meta, You Must Say Yes, Boss, Yes Boss, Yes Boss

August 18, 2025

No AI. Just a dinobaby working the old-fashioned way.

These giant technology companies are not very good in some situations. One example which comes to mind is the Apple car. What was the estimate? About $10 billion blown. Meta pulled a similar trick with its variant of the Google Glass. Winners.

I read “Meta Faces Backlash over AI Policy That Lets Bots Have Sensual Conversations with Children.” My reaction was, “You are kidding, right?” Nope. Not a joke. Put aside common sense, a parental instinct for appropriateness, and the mounting evidence that interacting with smart software can be a problem. What are these lame complaints?

The write up says:

According to Meta’s 200-page internal policy seen by Reuters, titled “GenAI: Content Risk Standards”, the controversial rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist.

Okay, let’s stop the buggy right here, pilgrim.

A “chief ethicist”! A chief ethicist who thought that this was okay:

An internal Meta policy document, seen by Reuters, showed the social media giant’s guidelines for its chatbots allowed the AI to “engage a child in conversations that are romantic or sensual”, generate false medical information, and assist users in arguing that Black people are “dumber than white people”.

What is an ethicist? First, it is a knowledge job, one I assume requiring knowledge of ethical thinking embodied in different big thinkers. Second, it is a profession which relies on context because what was right for Belgium in the Congo may not be okay today. Third, the job is likely one that encourages flexible definitions of ethics. It may be tough to get another high-paying gig if one points out that the concept of sensual conversations with children is unethical.

The write up points out that an investigation is needed. Why? The chief ethicist should say, “Sorry. No way.”

Chief ethicist? A chief “yes, boss” person.

Stephen E Arnold, August 18, 2025


Google: Simplicity Is Not a Core Competency

August 18, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Telegram Messenger is a reasonably easy-to-use messaging application. People believe that it is bulletproof, but I want to ask, “Are you sure?” Then there is WhatsApp, now part of Darth Zuck’s empire. However, both of these outfits appear to be viewed as obtuse and problematic by Kremlin officials. The fix? Just ban these services. Banning online services is a popular way for a government to “control” information flow.

I read a Russian language article about an option some Russians may want to consider. The write up’s title is “How to Replace Calls on WhatsApp and Telegram. Review of the Google Meet Application for Android and iOS.”

I worked through the write up and noted this statement:

Due to the need to send invitation links, Meet is not very convenient for regular calls — and most importantly, it belongs to the American company Google, whose products, by definition, are under threat of blocking. Moreover, several months ago, Russian President Vladimir Putin himself called for «stifling» Western services operating in Russia, and instructed the Government to prepare a list of measures to limit them by September 1, 2025.

The bulk of the write up is a how to. In order to explain the process of placing a voice call via the Google system, PCNews presented:

  1. Nine screenshots
  2. Seven arrows across those screenshots
  3. One rectangular box in red to call attention to something. (I couldn’t figure out what, however.)
  4. Seven separate steps.

How does one “do” a voice call in Telegram Messenger? Here are the steps:

  1. I open the Telegram app and select the contact with whom I want to speak
  2. I tap on my contact’s name
  3. I look for the phone call icon and tap it
  4. I choose “Voice Call” from the options to start an audio call. If I want to make a video call instead, I select “Video Call”

One would think that when a big company wants to do a knock off of a service, someone would check out what Telegram does. (The article targets a Russian audience due to the censorship in that country.) Then the savvy wizard would figure out how to make the process better and faster and easier. Instead the clever Googlers add steps. That’s the way of the Sundar & Prabhakar Comedy Show.

Stephen E Arnold, August 18, 2025

AI Applesauce: Sweeten the Story about Muffing the Bunny

August 14, 2025

No AI. Just a dinobaby being a dinobaby.

I read “Apple CEO Tim Cook Calls AI ‘Bigger Than the Internet’ in Rare All-Hands Meeting.” I noted this passage:

In a global all-hands meeting hosted from Apple’s headquarters in Cupertino, California, CEO Tim Cook seemed to admit to what analysts and Apple enthusiasts around the world had been raising concerns about: that Apple has fallen behind competitors in the AI race. And Cook promised employees that the company will be doing everything to catch up. “Apple must do this. Apple will do this. This is sort of ours to grab.” …The AI revolution [is] “as big or bigger” than the internet.

Okay. Two companies of some significance have missed the train to AI Ville: Apple and Telegram. Both have interesting technology. Apple is far larger, but for some users Telegram is more important to their lives. One is fairly interested in China activities; the other is focused on Russia and crypto.

But both have managed their firms into the same digital row boat. Apple had Siri and it was not very good. Telegram knew about AI and allowed third-party bot developers to use it, but Telegram itself dragged its feet.

Both companies are asserting that each has plenty of time. Tim Cook is talking about smart software but so far the evidence of making an AI difference is scant. Telegram, on the other hand, has aimed Nikolai Durov at AI. That wizard is working on a Telegram AI system.

But the key point is that both of these forward leaning outfits are trying to catch up. This  is not keeping pace, mind. The two firms are trying to go from watching the train go down the tracks to calling an Uber to get to their respective destinations.

My take on both companies is that the “leadership” have some good reasons for muffing the AI bunny. Apple is struggling with its China “syndrome.” Will the nuclear reactor melt down, fizzle out, or blow up? Apple’s future in hardware may become radioactive.

Telegram is working under the shadow of the criminal trial lumbering toward its founder and owner Pavel Durov. More than a dozen criminal charges and a focused French judicial figure have Mr. Durov reporting a couple of times a week. To travel, he has to get a note from his new “mom.”

But well-run companies don’t let things like China dependency or 20 years in Fleury-Mérogis Prison upset trillion dollar companies or cause more than one billion people to worry about their free text messages and non fungible tokens.

“Leadership,” not technology, strikes me as the problem with AI challenges. If AI is so big, why did two companies fail to get the memo? Inattention, pre-occupation with other matters, fear? Pick one or two.

Stephen E Arnold, August 14, 2025

Microsoft Management Method: Fire Humans, Fight Pollution

August 7, 2025

How Microsoft Plans to Bury its AI-Generated Waste

Here is how one big tech firm is addressing the AI sustainability quandary. Windows Central reports, “Microsoft Will Bury 4.9 Million Metric Tons of ‘Manure’ in a Secretive Deal—All to Offset its AI Energy Demands that Drive Emissions Up by 168%.” We suppose this is what happens when you lay off employees and use the money for something useful. Unlike Copilot.

Writer Kevin Okemwa begins by summarizing Microsoft’s current approach to AI. Windows and Office users may be familiar with the firm’s push to wedge its AI products into every corner of the environment, whether we like it or not. Then there is the feud with former best bud OpenAI, a factor that has Microsoft eyeing a separate path. But whatever the future holds, the company must reckon with one pressing concern. Okemwa writes:

“While it has made significant headway in the AI space, the sophisticated technology also presents critical issues, including substantial carbon emissions that could potentially harm the environment and society if adequate measures aren’t in place to mitigate them. To further bolster its sustainability efforts, Microsoft recently signed a deal with Vaulted Deep (via Tom’s Hardware). It’s a dual waste management solution designed to help remove carbon from the atmosphere in a bid to protect nearby towns from contamination. Microsoft’s new deal with the waste management solution firm will help remove approximately 4.9 million metric tons of waste from manure, sewage, and agricultural byproducts for injection deep underground for the next 12 years. The firm’s carbon emission removal technique is quite unique compared to other rivals in the industry, collecting organic waste which is combined into a thick slurry and injected about 5,000 feet underground into salt caverns.”

Blech. But the process does keep the waste from being dumped aboveground, where it could release CO2 into the environment. How much will this cost? We learn:

“While it is still unclear how much this deal will cost Microsoft, Vaulted Deep currently charges $350 per ton for its carbon removal services. Simple math suggests that the deal might be worth approximately $1.7 billion.”
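That “simple math” checks out. A quick back-of-envelope sketch (the 4.9 million metric tons and the $350-per-ton rate are the figures quoted above; the rounding is mine):

```python
# Back-of-envelope estimate of the Vaulted Deep deal value.
tons = 4_900_000        # metric tons of waste over the 12-year deal
price_per_ton = 350     # USD, Vaulted Deep's quoted carbon removal rate

deal_value = tons * price_per_ton
print(f"${deal_value / 1e9:.1f} billion")  # prints $1.7 billion, matching the article
```

The exact product is $1.715 billion, so the article’s “approximately $1.7 billion” is a fair rounding.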

That is a hefty price tag. And this is not the only such deal Microsoft has made: We are told it signed a contract with AtmosClear in April to remove almost seven million metric tons of carbon emissions. The company positions such deals as evidence of its good stewardship of the planet. But we wonder—is it just an effort to keep itself from being buried in its own (literal and figurative) manure?

Cynthia Murrell, August 7, 2025

Microsoft: Knee Jerk Management Enigma

July 29, 2025

This blog post is the work of an authentic dinobaby. Sorry. Not even smart software can help this reptilian thinker.

I read “In New Memo, Microsoft CEO Addresses Enigma of Layoffs Amid Record Profits and AI Investments.” The write up says in a very NPR-like soft voice:

“This is the enigma of success in an industry that has no franchise value,” he wrote. “Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding. But it’s also a new opportunity for us to shape, lead through, and have greater impact than ever before.” The memo represents Nadella’s most direct attempt yet to reconcile the fundamental contradictions facing Microsoft and many other tech companies as they adjust to the AI economy. Microsoft, in particular, has been grappling with employee discontent and internal questions about its culture following multiple rounds of layoffs.

Discontent. Maybe the summer of discontent. No, it’s a reshaping or re-invention of a play by William Shakespeare (allegedly) which borrows from Chaucer’s Troilus and Criseyde with a bit more emphasis on pettiness and corruption to add spice to Boccaccio’s antecedent. Willie’s Troilus and Cressida makes the “love affair” more ironic.

Ah, the Microsoft drama. Let’s recap: [a] Troilus and Cressida’s Two Kids: Satya and Sam; [b] security woes of SharePoint (who knew? eh, everyone); [c] buying green credits, or how much manure does a gondola rail car hold? [d] Copilot (are the fuel switches on? Nope); and [e] layoffs.

What’s the description of these issues? An enigma. This is a word popping up frequently it seems. An enigma is, according to Venice, a smart software system:

The word “enigma” derives from the Greek “ainigma” (meaning “riddle” or “dark saying”), which itself stems from the verb “aigin” (“to speak darkly” or “to speak in riddles”). It entered Latin as “aenigma”, then evolved into Old French as “énigme” before being adopted into English in the 16th century. The term originally referred to a cryptic or allegorical statement requiring interpretation, later broadening to describe any mysterious, puzzling, or inexplicable person or thing. A notable modern example is the Enigma machine, a cipher device used in World War II, named for its perceived impenetrability. The shift from “riddle” to “mystery” reflects its linguistic journey through metaphorical extension.

Okay, let’s work through this definition.

  1. Troilus and Cressida or Satya and Sam. We have a tortured relationship. A bit of a war among the AI leaders, and a bit of the collapse of moral certainty. The play seems to be going nowhere. Okay, that fits.
  2. Security woes. Yep, the cipher device in World War II. Its security or lack of it contributed to a number of unpleasant outcomes for a certain nation state associated with beer and Rome’s failure to subjugate some folks.
  3. Manure. This seems to be a metaphorical extension. Paying “green” or money for excrement is a remarkable image. Enough said.
  4. Fuel switches and the subsequent crash, explosion, and death of some hapless PowerPoint users. This lines up with “puzzling.” How did those Word paragraphs just flip around? I didn’t do it. Does anyone know why? Of course not.
  5. Layoffs. Ah, an allegorical statement. Find your future elsewhere. There is a demand for life coaches, LinkedIn profile consultants, and lawn service workers.

Microsoft is indeed speaking darkly. The billions burned in the AI push have clouded the atmosphere in Softie Land. When the smoke clears, what will remain? My thought is that the items a to e mentioned above are going to leave some obvious environmental alterations. Yep, dark saying because knee jerk reactions are good enough.

Stephen E Arnold, July 29, 2025

Why Customer Trust of Chatbot Does Not Matter

July 22, 2025

Just a dinobaby working the old-fashioned way, no smart software.

The need for a winner is pile driving AI into consumer online interactions. But like the piles under the San Francisco Leaning Tower of Insurance Claims, the piles cannot stop the sag, the tilt, and the sight of a giant edifice leaning.

I read an article in the “real” news service called Fox News. The story’s title is “Chatbots Are Losing Customer Trust Fast.” The write up is the work of the CyberGuy, so you know it is on the money. The write up states:

While companies are excited about the speed and efficiency of chatbots, many customers are not. A recent survey found that 71% of people would rather speak with a human agent. Even more concerning, 60% said chatbots often do not understand their issue. This is not just about getting the wrong answer. It comes down to trust. Most people are still unsure about artificial intelligence, especially when their time or money is on the line.

So what? Customers are essentially irrelevant. As long as the outfit hits its real or imaginary revenue goals, the needs of the customer are not germane. If you don’t believe me, navigate to a big online service like Amazon and try to find the phone number for customer service. Let me know how that works out.

Because managers cannot “fix” human centric systems, using AI is a way out. Letting AI do it is a heck of a lot easier than figuring out a work flow, working with humans, and responding to customer issues. The old excuse was that middle management was not needed when decisions were pushed down to the “workers.”

AI flips that. Managerial ranks have been reduced. AI decisions come from “leadership” or what I call carpetland. AI solves problems: Actually managing, cost reduction, and having good news for investor communications.

The customers don’t want to talk to software. The customer wants to talk to a human who can change a reservation without automatically billing for a service charge. The customer wants a person to adjust a double billing from a hotel doing business as Snap Commerce Holdings. The customer wants a fair shake.

AI does not do fair. AI does baloney, confusion, errors, and hallucinations. I tried a new service which put Google Gemini front and center. I asked one question and got an incomplete and erroneous answer. That’s AI today.

The CyberGuy’s article says:

If a company is investing in a chatbot system, it should track how well that system performs. Businesses should ask chatbot vendors to provide real-world data showing how their bots compare to human agents in terms of efficiency, accuracy and customer satisfaction. If the technology cannot meet a high standard, it may not be worth the investment.

This is simply not going to happen. Deployment equals cost savings. Only when the money goes away will someone in leadership take action. Why? AI has put many outfits in a precarious position. Big money has been spent. Much of that money comes from other people. Those “other people” want profits, not excuses.

I heard a sci-fi rumor that suggests Apple can buy OpenAI and catch up. Apple can pay OpenAI’s investors and make good on whatever promissory payments have been offered by that firm’s leadership. Will that solve the problem?

Nope. The AI firms talk about customers but don’t care. Dealing with customers abused by intentionally shady business practices cooked up by a committee that has to do something is too hard and too costly. Let AI do it.

If the CyberGuy’s write up is correct, some excitement is speeding down the information highway toward some well known smart software companies. A crash at one of the big boys’ junctions will cause quite a bit of collateral damage.

Whom do you trust? Humans or smart software?

Stephen E Arnold, July 22, 2025

What Did You Tay, Bob? Clippy Did What!

July 21, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I was delighted to read “OpenAI Is Eating Microsoft’s Lunch.” I don’t care who or what wins the great AI war. So many dollars have been bet that hallucinating software is the next big thing. Most content flowing through my dinobaby information system is political. I think this food story is a refreshing change.

So what’s for lunch? The write up seems to suggest that Sam AI-Man has not only snagged a morsel from the Softies’ lunch pail but Sam AI-Man might be prepared to snap at those delicate lady fingers too. The write up says:

ChatGPT has managed to rack up about 10 times the downloads that Microsoft’s Copilot has received.

Are these data rock solid? Probably not, but the idea that two “partners” who forced Googzilla to spasm each time its Code Red lights flashed are not cooperating is fascinating. The write up points out that when Microsoft and OpenAI were deeply in love, Microsoft had the jump on the smart software contenders. The article adds:

Despite that [early lead], Copilot sits in fourth place when it comes to total installations. It trails not only ChatGPT, but Gemini and Deepseek.

Shades of Windows phone. Another next big thing muffed by the bunnies in Redmond. How could an innovation power house like Microsoft fail in the flaming maelstrom of burning cash that is AI? Microsoft’s long history of innovation adds a turbo boost to its AI initiatives. The Bob, Clippy, and Tay inspired Copilot is available to billions of Microsoft Windows users. It is … everywhere.

The write up explains the problem this way:

Copilot’s lagging popularity is a result of mismanagement on the part of Microsoft.

This is an amazing insight, isn’t it? Here’s the stunning wrap up to the article:

It seems no matter what, Microsoft just cannot make people love its products. Perhaps it could try making better ones and see how that goes.

To be blunt, the problem at Microsoft is evident in many organizations. For example, we could ask IBM Watson what Microsoft should do. We could fire up Deepseek and get some China-inspired insight. We could do a Google search. No, scratch that. We could do a Yandex.ru search and ask, “Microsoft AI strategy repair.”

I have a more obvious dinobaby suggestion, “Make Microsoft smaller.” And play well with others. Silly ideas I know.

Stephen E Arnold, July 21, 2025

Xooglers Reveal Googley Dreams with Nightmares

July 18, 2025

Just a dinobaby without smart software. I am sufficiently dull without help from smart software.

Fortune Magazine published a business school analysis of a Googley dream and its nightmares titled “As Trump Pushes Apple to Make iPhones in the U.S., Google’s Brief Effort Building Smartphones in Texas 12 years Ago Offers Critical Lessons.” The author, Mr. Kopytoff, states:

Equivalent in size to nearly eight football fields, the plant began producing the Google Motorola phones in the summer of 2013.

Mr. Kopytoff notes:

Just a year later, it was all over. Google sold the Motorola phone business and pulled the plug on the U.S. manufacturing effort. It was the last time a major company tried to produce a U.S. made smartphone.

Yep, those Googlers know how to do moon shots. They also produce some digital rocket ships that explode on the launch pads, never achieving orbit.

What happened? You will have to read the pork loin write up, but the Fortune editors did include a summary of the main point:

Many of the former Google insiders described starting the effort with high hopes but quickly realized that some of the assumptions they went in with were flawed and that, for all the focus on manufacturing, sales simply weren’t strong enough to meet the company’s ambitious goals laid out by leadership.

My translation of Fortune-speak is: “Google was really smart. Therefore, the company could do anything. Then when the genius leadership gets the bill, a knee jerk reaction kills the project and moves on as if nothing happened.”

Here’s a passage I found interesting:

One of the company’s big assumptions about the phone had turned out to be wrong. After betting big on U.S. assembly, and waving the red, white, and blue in its marketing, the company realized that most consumers didn’t care where the phone was made.

Is this statement applicable to people today? It seems that I hear more about costs than I did last year. At a 4th of July hoe down, I heard:

  • “The prices at Kroger go up each week.”
  • “I wanted to trade in my BMW but the prices were crazy. I will keep my car.”
  • “I go to the Dollar Store once a week now.”

What’s this got to do with the Fortune tale of the Google wizards’ leadership goof and Apple (if it actually tries to build an iPhone in Cleveland)?

Answer: Costs and expertise. Thinking one is smart and clever is not enough. One has to do more than spend big money, talk in a supercilious manner, and go silent when the crazy “moon shot” explodes before reaching orbit.

But the real moral of the story is that it is political. That may be more problematic than the Google fail and Apple’s bitter cider. It may be time to harvest the fruit of tech leaderships’ decisions.

Stephen E Arnold, July 18, 2025

New Business Tactics from Google and Meta: Fear-Fueled Management

July 8, 2025

No smart software. Just a dinobaby and an old laptop.

I like to document new approaches to business rules or business truisms. Examples range from truisms like “targeting is effective” to “two objectives is no objectives.” Today, July 1, 2025, I spotted anecdotal evidence of two new “rules.” Both seem custom tailored to the GenX, GenY, GenZ, and GenAI approach to leadership. Let’s look at each briefly and then consider how effective these are likely to be.

The first example of new management thinking appears in “Google Embraces AI in the Classroom with New Gemini Tools for Educators, Chatbots for Students, and More.” The write up explains that Google has:

introduced more than 30 AI tools for educators, a version of the Gemini app built for education, expanded access to its collaborative video creation app Google Vids, and other tools for managed Chromebooks.

Forget the one objective idea when it comes to products. Just roll out more than two dozen AI services. That will definitely catch the attention of grade, middle school, high school, junior college, and university teachers in the US and elsewhere. I am not a teacher, but I know that when I attend neighborhood get togethers, the teachers at these functions often ask me about smart software. From these interactions, very few understand that smart software comes in different “flavors.” AI is still a mostly unexplored innovation. But Google is chock full of smart people who certainly know how teachers can rush to two dozen new products and services in a jiffy.

The second rule is that organizations are hierarchical. Assuming this is the approach, one person should lead an organization, one person should lead a unit, one person should lead a department, and so on. This is the old Great Chain of Being slapped on an enterprise. My father worked in this type of company, and he liked it. He explained how work flowed from one box on the organization chart to another. With everything working the way my father liked, bulldozers and mortars appeared on the loading docks. Since I grew up with this approach, it made sense to me. I must admit that I still find this type of set up appealing, and I am usually less than thrilled to work in a matrix management, let’s-just-roll-with-it set up.

In “Nikita Bier, The Founder Of Gas And TBH, Who Once Asked Elon Musk To Hire Him As VP Of Product At Twitter, Has Joined X: ‘Never Give Up’” I learned that Meta is going with the two bosses approach to smart software. The write up reports, as real news as opposed to news release news:

On Monday, Bier announced on X that he’s officially taking the reins as head of product. "Ladies and gentlemen, I’ve officially posted my way to the top: I’m joining @X as Head of Product," Bier wrote.

Earlier in June 2025, Mark Zuckerberg pumped money into Scale.io (an indexing outfit) and hired Alexandr Wang to be the top dog of Meta’s catch up in AI initiative. It appears that Meta is going to give the two bosses are better than one approach its stamp of management genius approval. OpenAI appeared to emulate this approach, and it seemed to have spawned a number of competitors and created an environment in which huge sums of money could attract AI wizards to Mr. Zuckerberg’s social castle.

The first new management precept is that an organization can generate revenue by shotgunning more than two dozen new products and services to what Google sees as the education market. The outmoded management approach would focus on one product and service, provide that to a segment of the education market with some money to spend and a problem to solve. Then figure out how to make that product more useful and grow paying customers in that segment. That’s obviously stupid and not GenAI. The modern approach is to blast that bird shot somewhere in the direction of a big fuzzy market and go pick up the dead ducks for dinner.

The second new management precept is to have an important unit, a sense of desperation born from failure, and put two people in charge. I think this can work, but in most of the successful outfits to which I have been exposed, there is one person at the top. He or she may be floating above the fray, but the idea is that someone, in theory, is in charge.

Several observations are warranted:

  1. The chaos approach to building a business has taken root and begun to flower at Google and Meta. Out with the old and in with the new. I am willing to wait and see what happens because when either success or failure arrives, the stories of VCs jumping from tall buildings or youthful managers buying big yachts will circulate.
  2. The innovations in management at Google and Meta suggest to me a bit of desperation. Both companies perceive that each is falling behind or in danger of losing. That perception may be accurate because once the AI payoff is not evident, Google and Meta may find themselves paddling up the river, not floating down the river.
  3. The two innovations viewed as discrete actions are expensive, risky, and illustrative of the failure of management at both firms. Employees, stakeholders, and users have a lot to win or lose.

I heard a talk by someone who predicted that traditional management consulting would be replaced by smart software. In the blue chip firm in which I worked years ago, management decisions like these would be guaranteed to translate to old-fashioned, human-based consulting projects.

In today’s world, decisions by “leadership” are unlikely to be remediated by smart software. Fixing up the messes will require individuals with experience, knowledge, and judgment.

As Julius Caesar allegedly said:

In summo periculo timor misericordiam non recipit.

This means something along the lines of, “In situations of danger, fear feels no pity.” These new management rules suggest that both Google and Meta’s “leadership” are indeed fearful and grandstanding in order to overcome those inner doubts. The decisions to go against conventional management methods seem obvious and logical to them. To others, perhaps the “two bosses” and “a blast of AI products and services” are just ill advised or not informed?

Stephen E Arnold, July 8, 2025

Technology Firms: Children of Shoemakers Go Barefoot

July 7, 2025

If even the biggest of Big Tech firms are not safe from cyberattacks, who is? Investor news site Benzinga reveals, “Apple, Google and Facebook Among Services Exposed in Massive Leak of More than 16 Billion Login Records.” The trove represents one of the biggest exposures of personal data ever, writer Murtuza J. Merchant tells us. We learn:

“Cybersecurity researchers have uncovered 30 massive data collections this year alone, each containing tens of millions to over 3.5 billion user credentials, Cybernews reported. These previously unreported datasets were briefly accessible through misconfigured cloud storage or Elasticsearch instances, giving the researchers just enough time to detect them, though not enough to trace their origin. The findings paint a troubling picture of how widespread and organized credential leaks have become, with login information originating from malware known as infostealers. These malicious programs siphon usernames, passwords, and session data from infected machines, usually structured as a combination of a URL, username, and password.”

Ah, advanced infostealers. One of the many handy tools AI has made possible. The write-up continues:

“The leaked credentials span a wide range of services from tech giants like Apple, Facebook, and Google, to platforms such as GitHub, Telegram, and various government portals. Some datasets were explicitly labeled to suggest their source, such as ‘Telegram’ or a reference to the Russian Federation. … Researchers say these leaks are not just a case of old data resurfacing.”

Not only that, the data’s format is cybercriminal-friendly. Merchant writes:

“Many of the records appear recent and structured in ways that make them especially useful for cybercriminals looking to run phishing campaigns, hijack accounts, or compromise corporate systems lacking multi-factor authentication.”

But it is the scale of these datasets that has researchers most concerned. The average collection held 500 million records, while the largest had more than 3.5 billion. What are the chances your credentials are among them? The post suggests the usual, most basic security measures: complex and frequently changed passwords and regular malware scans. But surely our readers are already observing these best practices, right?
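The reported numbers hang together, more or less. A quick sketch (the 30-collection count, the 500-million-record average, and the 3.5-billion-record largest set are the figures reported above; the comparison is mine):

```python
# Rough consistency check of the reported leak figures.
collections = 30
avg_records = 500_000_000   # average records per collection, per the report
largest = 3_500_000_000     # the single largest collection

total_estimate = collections * avg_records
print(f"~{total_estimate / 1e9:.0f} billion records")  # prints ~15 billion records

# Close to the "more than 16 billion" headline figure once the rounding of the
# per-collection average is allowed for; the largest single collection alone is
# a sizable fraction of the whole.
assert largest < total_estimate
```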

Cynthia Murrell, July 7, 2025

