Real News Outfit Finds a Study Proving That AI Has No Impact in the Workplace
May 27, 2025
Just the dinobaby operating without Copilot or its ilk.
The “real news” outfit is the wonderful business magazine Fortune, now only $1 a month. Subscribe now!
The title of the write up that caught my attention was “Study Looking at AI Chatbots in 7,000 Workplaces Finds ‘No Significant Impact on Earnings or Recorded Hours in Any Occupation.’” Out of the blocks this story caused me to say to myself, “This is another you-can’t-fire-human-writers proxy.”
Was I correct? Here are three snips, and I urge you not only to subscribe to Fortune but also to read the original article and form your own opinion. Another option is to feed the story into an LLM that can pull in Web content and ask it to tell you about it. If you are reading my essay, you know that a dinobaby plucks the examples, no smart software required, although as I creep toward 81, I probably should let a free AI do the thinking for me.
Here’s the first snip I captured:
Their [smart software or large language models] popularity has created and destroyed entire job descriptions and sent company valuations into the stratosphere—then back down to earth. And yet, one of the first studies to look at AI use in conjunction with employment data finds the technology’s effect on time and money to be negligible.
You thought you could destroy humans, you high technology snake oil peddlers (not the contraband Snake Oil popular in Hong Kong at this time). Think old-time carnival barkers.
Here’s the second snip about the sample:
focusing on occupations believed to be susceptible to disruption by AI
Okay, “believed” is the operative word. Who does the believing? A University of Chicago assistant professor of economics (Yay, Adam Smith. Yay, yay, Friedrich Hayek) and a graduate student. Yep, a time-honored method: a graduate student.
Now the third snip, which presents the rock-solid proof:
On average, users of AI at work had a time savings of 3%, the researchers found. Some saved more time, but didn’t see better pay, with just 3%-7% of productivity gains being passed on to paychecks. In other words, while they found no mass displacement of human workers, neither did they see transformed productivity or hefty raises for AI-wielding super workers.
Okay, not much payoff from time savings. Okay, not much of a financial reward for the users. Okay, nobody got fired. I thought it was hard to terminate workers in some European countries.
After reading the article, I like the penultimate paragraph’s reminder that outfits like Duolingo and Shopify have begun rethinking the use of chatbots. Translation: You cannot get rid of human writers and real journalists.
Net net: A temporary reprieve will not stop the push to shift away from expensive humans who want health care and vacations. That’s the news.
Stephen E Arnold, May 27, 2025
Microsoft Investigates Itself and a Customer: Finding? Nothing to See Here
May 26, 2025
No AI, just a dinobaby and his itty bitty computer.
GeekWire, creator of the occasional podcast, published “Microsoft: No Evidence Israeli Military Used Technology to Harm Civilians, Reviews Find.” When an outfit that emits occasional podcasts publishes a story, I know that the information is 100 percent accurate. GeekWire has written about Microsoft and its outstanding software. Like Windows Central, it makes enthusiasm for what the Softies do a key feature of the information.
What I learned included:
- Israel’s military uses Microsoft technology
- Israel may have used Microsoft technology to harm non-civilians
- The study was conducted by the detail-oriented and consistently objective company. Self-study is known to be reliable, a bit like research papers from Harvard, which are a bit dicey in the reproducible-results department
- The data available for the self-study was limited; that is, Microsoft relied on an incomplete data set because certain information was presumably classified
- Microsoft “provided limited emergency support to the Israeli government following the October 7, 2023, Hamas attacks.”
Yeah, that sounds rock solid to me.
Why did the creator of Bob and Clippy sit down and study its navel? The write up reported:
Microsoft said it launched the reviews in response to concerns from employees and the public over media reports alleging that its Azure cloud platform and AI technologies were being used by the Israeli military to harm civilians.
The Microsoft investigation concluded:
its recent reviews found no evidence that the Israeli Ministry of Defense has failed to comply with its terms of service or AI Code of Conduct.
That’s a fact. More than rock solid, the fact is like one of those pre-Inca megaliths. That’s really solid.
GeekWire goes out on a limb in my opinion when it includes in the write up a statement from an individual who does not see eye to eye with the Softies’ investigation. Here’s that passage:
A former Microsoft employee who was fired after protesting the company’s ties to the Israeli military, he said the company’s statement is “filled with both lies and contradictions.”
What’s with the allegation of “lies and contradictions”? Get with the facts. Skip the bogus alternative facts.
I do recall that several years ago I was told by an Israeli intelware company that their service was built on Microsoft technology. Now here’s the key point. I asked if the cloud system worked on Amazon. The response was total confusion. In that English-language meeting, I wondered if I had suffered a neural malfunction, so I posed the question “Votre système fonctionne-t-il sur le service cloud d’Amazon?” in French, not English.
The idea that this firm’s state-of-the-art intelware would be anything other than Microsoft centric was a total surprise to those in the meeting. It seemed to me that, for this company, the notion that its intelware, like others developed in Israel, could be non-Microsoft was inconceivable.
Obviously these professionals were not aware that intelware systems (some of which failed to detect threats prior to the October 2023 attack) would be modified so that only adversary military personnel would be harmed. That’s what the Microsoft investigation just proved.
Based on my experience, Israel’s military innovations are robust despite that October 2023 misstep. Furthermore, warfighting systems, if they do run on Microsoft software and systems, have the ability to discriminate between combatants and non-combatants. This is an important technical capability and almost on a par with the Bob interface, Clippy, and AI in Notepad.
I don’t know about you, but the Microsoft investigation put my mind at ease.
Stephen E Arnold, May 26, 2025
Ten Directories of AI Tools
May 26, 2025
Just the dinobaby operating without Copilot or its ilk.
I scan DailyHunt, an India-based news summarizer powered by AI, I think. The link I followed landed me on a story titled “Best 10 AI Directories to Promote.” I looked for a primary source, an author, and links to each service. Zippo. Therefore, I assembled the list, provided links, and generated the list below with my dinobaby paws and claws. Enjoy or ignore. I am weary of AI, but many others are not. I am not sure why, but that is our current reality, replete with alternative facts, cheating college professors, and oodles of crypto activity. Remember: the list is not my “best of”; I am simply presenting incomplete information in a slightly more useful format.
AIxploria https://www.aixploria.com/en/ [Another actual directory. Its promotional language says “largest list”. Yeah, I believe that]
AllAITool.ai at https://allaitool.ai/
FamousAITools.ai https://famousaitools.ai/ [Another marketing outfit sucking up AI tool submissions]
Futurepedia.io https://www.futurepedia.io/
TheMangoAI.co https://themangoai.co/ [Not a directory, an advertisement of sorts for an AI-powered marketing firm]
NeonRev https://www.neonrev.com/ [Another actual directory. It looks like a number of Telegram bot directories]
Spiff Store https://spiff.store/ [Another directory. I have no idea how many tools are included]
StackViv https://stackviv.ai/ [An actual directory with 10,000 tools. No I did not count them. Are you kidding me?]
TheresanAIforThat https://theresanaiforthat.com/ [You have to register to look at the listings. A turn off for me]
Toolify.ai https://www.toolify.ai/ [An actual listing of more than 25,000 AI tools organized into categories probably by AI, not a professional indexing specialist]
When I looked at each of these “directories,” it became clear that marketing is something the AI crowd finds important. A bit more effort in the naming of some of these services might help. Just a thought. Enjoy.
Stephen E Arnold, May 26, 2025
Microsoft: Did It Really Fork This Fellow?
May 26, 2025
Just the dinobaby operating without Copilot or its ilk.
Forked doesn’t quite communicate the exact level of frustration Philip Laine experienced while working on a Microsoft project. He details the incident in his post, “Getting Forked By Microsoft.” Laine invented a solution for image scalability that has no stateful component and needs minimal operational oversight. He dubbed his project Spegel, made it open source, and was contacted by Microsoft.
Microsoft was pleased with Spegel. Laine worked with Microsoft engineers to implement Spegel into its architecture. Everything went well until Microsoft stopped working with him. He figured they had moved on to other projects. Microsoft did move on, but its engineers developed their own version of Spegel. They did have the grace to thank Laine in a README file. It gets worse:
"While looking into Peerd, my enthusiasm for understanding different approaches in this problem space quickly diminished. I saw function signatures and comments that looked very familiar, as if I had written them myself. Digging deeper I found test cases referencing Spegel and my previous employer, test cases that have been taken directly from my project. References that are still present to this day. The project is a forked version of Spegel, maintained by Microsoft, but under Microsoft’s MIT license.”
Microsoft plagiarized…no…downright stole Spegel’s code base from Laine. He, however, had published Spegel under the MIT license. The MIT license means:
“Software released under an MIT license allows for forking and modifications, without any requirement to contribute these changes back. I default to using the MIT license as it is simple and permissive.”
It does require this:
“The license does not allow removing the original license and purport that the code was created by someone else. It looks as if large parts of the project were copied directly from Spegel without any mention of the original source.”
Laine wanted to work with Microsoft and have their engineers contribute to his open source project. He has dedicated his energy, time, and resources to Spegel and continues to do so without much in return other than GitHub sponsors and the thanks of its users. Laine is considering changing Spegel’s licensing, as it’s the only way to throw a stone at Microsoft.
If true, the pulsing AI machine is a forker.
Whitney Grace, May 26, 2025
Censorship Gains Traction at an Individual Point
May 23, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I read a somewhat sad biographical essay titled “The Great Displacement Is Already Well Underway: It’s Not a Hypothetical, I’ve Already Lost My Job to AI For The Last Year.” The essay explains that a 40-something software engineer lost his job. Despite what strike me as heroic efforts, no offers ensued. I urge you to take a look at this essay because the push to remove humans from “work” is accelerating. I think, with my 80-year-old neuro-structures, that the lack of “work” will create some tricky social problems.
I spotted one passage in the essay which struck me as significant. The idea of censorship is a popular topic in central Kentucky. Quite a few groups and individuals have quite specific ideas about what books should be available for students and others to read. Here is the quote about censorship from the cited “Great Displacement” essay:
I [the author of the essay] have gone back and deleted 95% of those articles and vlogs, because although many of the ideas they presented were very forward-thinking and insightful at the time, they may now be viewed as pedestrian to AI insiders merely months later due to the pace of AI progress. I don’t want the wrong person with a job lead to see a take like that as their first exposure to me and think that I’m behind the last 24 hours of advancements on my AI takes.
Self-censorship was used to create a more timely version of the author. I have been writing articles with titles like “The Red Light on the Green Board” for years. This particular gem points out that public school teachers sell themselves and their ideas out. The prostitution analogy was intentional. I caught a bit of criticism from an educator in the public high school in which I “taught” for 18 months. Now people just ignore what I write. Thankfully my lectures about online fraud evoke a tiny bit of praise because the law enforcement professionals, crime analysts, and cyber attorneys don’t throw conference snacks at me when I offer one of my personal observations about bad actors.
The cited essay presents a person who is deleting content in order to present an “improved” or “shaped” version of himself. I think it is important to have essays, poems, technical reports, and fiction, indeed any human-produced artifact, available in original form. These materials, I think, will provide future students and researchers with useful material to mine for insights and knowledge.
Deletion means that information is lost. I am not sure that is a good thing. What’s notable is that the censorship is being carried out by the author himself for the express purpose of erasing the past and shaping an impression of the present individual. Will that work? Based on the information in the essay, it had not worked when I read the write up.
Censorship may be one facet of what the author calls a “displacement.” I am not too keen on censorship regardless of the decider or the rationalization. But I am a real dinobaby, not a 40-something dinobaby like the author of the essay.
Stephen E Arnold, May 23, 2025
We Browse Alongside Bots in Online Shops
May 23, 2025
AI’s growing ability to mimic humans has brought us to an absurd milestone. TechRadar declares, “It’s Official—The Majority of Visitors to Online Shops and Retailers Are Now Bots, Not Humans.” A recent report from Radware examined retail site traffic during the 2024 holiday season and found automated programs made up 57% of it. The statistic includes tools ranging from simple scripts to digital agents. The more evolved the bot, the harder it is to keep it out. Writer Efosa Udinmwen tells us:
“The report highlights the ongoing evolution of malicious bots, as nearly 60% now use behavioral strategies designed to evade detection, such as rotating IP addresses and identities, using CAPTCHA farms, and mimicking human browsing patterns, making them difficult to identify without advanced tools. … Mobile platforms have become a critical battleground, with a staggering 160% rise in mobile-targeted bot activity between the 2023 and 2024 holiday seasons. Attackers are deploying mobile emulators and headless browsers that imitate legitimate app behavior. The report also warns of bots blending into everyday internet traffic. A 32% increase in attack traffic from residential proxy networks is making it much harder for ecommerce sites to apply traditional rate-limiting or geo-fencing techniques. Perhaps the most alarming development is the rise of multi-vector campaigns combining bots with traditional exploits and API-targeted attacks. These campaigns go beyond scraping prices or testing stolen credentials – they aim to take sites offline entirely.”
Now why would they do that? To ransom retail sites during the height of holiday shopping, perhaps? Defending against these new attacks, Udinmwen warns, requires new approaches. The latest in DDoS protection, for example, and intelligent traffic monitoring. Yes, it takes AI to fight AI. Apparently.
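To make the cat-and-mouse concrete, here is a minimal sketch, my own illustration rather than anything from the Radware report, of the naive per-IP rate limiting that rotating residential proxies defeat. The limits, window, and IP addresses are made-up example values.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class IpRateLimiter:
    """Allow at most `limit` requests per IP address within a sliding `window` (seconds)."""

    def __init__(self, limit: int = 20, window: float = 60.0) -> None:
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # this IP is throttled
        q.append(now)
        return True

limiter = IpRateLimiter(limit=20, window=60.0)

# One bot hammering from a single IP trips the limiter quickly...
blocked = sum(not limiter.allow("203.0.113.7", now=i * 0.1) for i in range(100))
print(f"single-IP bot: {blocked} of 100 requests blocked")

# ...but the same 100 requests spread across rotating residential-proxy IPs
# never exceed any per-IP count, so nothing is blocked.
blocked = sum(not limiter.allow(f"198.51.100.{i}", now=i * 0.1) for i in range(100))
print(f"rotating-proxy bot: {blocked} of 100 requests blocked")
```

The throttle works only as long as abusive traffic keeps arriving from the same address, which is exactly the assumption the report says modern bots no longer honor.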
Cynthia Murrell, May 23, 2025
Some Outfits Take Pictures… Of Users
May 23, 2025
Conspiracy theorists, aka wackadoos, preach that the government is listening to everyone through microphones, and the worry has only grown with mobile devices. This conspiracy theory has been making the rounds since before the invention of the Internet. It used to be that spies or aluminum-can-and-string telephones were the culprits. Truth is actually stranger than fiction, and New Atlas updated an article about whether Facebook is actually listening to us: “Your Phone Isn’t Secretly Listening To You, But The Truth Is More Disturbing.”
Let’s assume that the story is accurate; the information was on the Internet, so for AI and some humans, the write up is chock full of meaty facts. It was revealed in 2024 that Cox Media Group (CMG) developed Active Listening, a system to capture “real time intent data” with mobile devices’ microphones. It then did the necessary technology magic and served personalized ads. Tech companies distanced themselves from CMG, and CMG stopped using the system. It supposedly worked by listening to small snippets of voice data uploaded after digital assistants were activated. That bleeds into the smartphone listening conspiracy, but apparently wholesale eavesdropping is still not a tenable reality.
The mobile cyber security company Wandera tested the listening-microphone theory. They placed two smartphones in a room and played pet food ads on an audio loop for thirty minutes a day over three days. Here are the nitty-gritty details:
“User permissions for a large number of apps were all enabled, and the same experiment was performed, with the same phones, in a silent test room to act as a control. The experiment had two main goals. First, a number of apps were scanned following the experiment to ascertain whether pet food ads suddenly appeared in any streams. Secondly, and perhaps more importantly, the devices were closely examined to track data consumption, battery use, and background activity.”
The results showed that the phones weren’t listening to conversations. The truth was more mundane, and more plausible given the current technology:
“In early 2017 Jingjing Ren, a PhD student at Northeastern University, and Elleen Pan, an undergraduate student, designed a study to investigate the very issue of whether phones listen in on conversations without users knowing. Pretty quickly it became clear to the researchers that the phones’ microphones were not being covertly activated, but it also became clear there were a number of other disconcerting things going on. ‘There were no audio leaks at all – not a single app activated the microphone,’ said Christo Wilson, a computer scientist working on the project. ‘Then we started seeing things we didn’t expect. Apps were automatically taking screenshots of themselves and sending them to third parties. In one case, the app took video of the screen activity and sent that information to a third party.’”
There are multiple other ways Facebook and other companies are actually tracking and collecting data. Everything done on a smartphone, from banking to playing games, generates data that can be tracked and sent to third parties. The more useful your phone is to you, the more useful it is as a tracking, monitoring, and selling tool feeding AI algorithms that generate targeted ads and more personalized content. It’s a lot easier to believe in the microphone theory because that is easier to understand than the vast amounts of technology at work to steal…er…gather information. To sum up, innovators are inspirational!
Whitney Grace, May 23, 2025
Sharp Words about US Government Security
May 22, 2025
No AI. Just a dinobaby who gets revved up with buzzwords and baloney.
On Monday (April 29, 2025), I am headed to the US National Cyber Crime Conference. I am 80, and I don’t do too many “in person” lectures. Heck, I don’t do too many lectures anymore, period. A candidate for the rest home or an individual ready for a warehouse for the soon-to-die is a unicorn amidst the 25- to 50-year-old cyber fraud specialists, law enforcement professionals, and government investigators.
In my lectures, I steer clear of political topics. This year, I have been assigned a couple of topics which the NCCC organizers know attract a couple of people out of the thousand or so attendees. One topic concerns changes in the Dark Web. Since I wrote “Dark Web Notebook” years ago, my team and I have kept track of what’s new and interesting in the world of the Dark Web. This year, I will highlight three or four services which caught our attention. The other topic is my current research project: Telegram. I am not sure how I became interested in this messaging service, but my team and I will make available to law enforcement, crime analysts, and cyber fraud investigators a monograph modeled on the format we used for the “Dark Web Notebook.”
I am in a security mindset before the conference. I am on the lookout for useful information which I can use as a point of reference or as background. Despite my age, I want to appear semi-competent. Thus, I read “Signalgate Lessons Learned: If Creating a Culture of Security Is the Goal, America Is Screwed.” I think the source publication is British. The author may be an American journalist.
Several points in the write up caught my attention.
First, the write up makes a statement I found interesting:
And even if they are using Signal, which is considered the gold-standard for end-to-end chat encryption, there’s no guarantee their personal devices haven’t been compromised with some sort of super-spyware like Pegasus, which would allow attackers to read the messages once they land on their phones.
I did not know that Signal was “considered the gold standard for end-to-end chat encryption.” I wonder if there are some data to back this up.
Second, is NSO Group’s Pegasus really “super spyware”? My information suggests that there are more modern methods. Some link to Israel, but others connect to other countries; Spain and the Czech Republic, for example. I am not sure what “super” means, and the write up does not offer much beyond the nebulous adjectival “super spyware.”
Third, these two references are fascinating:
“The Salt Typhoon and Volt Typhoon campaigns out of China demonstrate this ongoing threat to our telecom systems. Circumventing the Pentagon’s security protocol puts sensitive intelligence in jeopardy.”
The authority making the statement is a former US government official who went on to found a cyber security company. There were publicized breaches, and I am not sure they are comparable to a Pegasus-type data exfiltration method. “Insider threats” are different from lousy software from established companies with vulnerabilities as varied as Joseph’s multi-colored coat. An insider, of course, is an individual presumed to be “trusted”; however, that label covers the person who sells information to someone who wants to compromise a system, the person who makes an error (honest or otherwise), and the victim who falls for quite sophisticated malware and targeted emails designed to obtain information that compromises that person or a system. In fact, the most sophisticated of these “phishing” attack systems are available for about $250 per month for the basic version, with higher fees for more robust crime-as-a-service vectors of compromise.
The opinion piece seems to focus on a single issue involving one of the US government’s units. I am okay with that; however, I think a slightly different angle would put the problem and challenge of “security” in a context less focused on ad hominem rhetorical methods.
Stephen E Arnold, May 22, 2025
AI: Improving Spam Quality, Reach, and Effectiveness
May 22, 2025
It is time to update our hoax detectors. The Register warns, “Generative AI Makes Fraud Fluent—from Phishing Lures to Fake Lovers.” What a great phrase: “fluent fraud.” We can see it on a line of hats and t-shirts. Reporter Iain Thomson consulted security pros Chester Wisniewski of Sophos and Kevin Brown at NCC Group. We learn:
“One of the red flags that traditionally identified spam, including phishing attempts, was poor spelling and syntax, but the use of generative AI has changed that by taking humans out of the loop. … AI has also widened the geographical scope of spam and phishing. When humans were the primary crafters of such content, the crooks stuck to common languages to target the largest audience with the least amount of work. But, Wisniewski explained, AI makes it much easier to craft emails in different languages.”
For example, residents of Quebec used to spot spam by its use of European French instead of the Québécois dialect. Similarly, folks in Portugal learned to dismiss messages written in Brazilian Portuguese. Now, though, AI makes it easy to replicate regional dialects. Perhaps more eerily, it also makes it easier to replicate human empathy. Thomson writes:
“AI chatbots have proven highly effective at seducing victims into thinking they are being wooed by an attractive partner, at least during the initial phases. Wisniewski said that AI chatbots can easily handle the opening phases of the scams, registering interest and appearing to be empathetic. Then a human operator takes over and begins removing funds from the mark by asking for financial help, or encouraging them to invest in Ponzi schemes.”
Great. To make matters worse, much of this is now taking place with realistic audio fakes. For example:
“Scammers might call everybody on the support team with an AI-generated voice that duplicates somebody in the IT department, asking for a password until one victim succumbs.”
Chances are good someone eventually will. Whether video bots are a threat (yet) is up for debate. Wisniewski, for one, believes convincing, real-time video deepfakes are not quite there. But Brown reports the experienced pros at his firm have successfully created them for specific use cases. Both believe it is only a matter of time before video deepfakes become not only possible but easy to create and deploy. It seems we must soon learn to approach every interaction that is not in-person with great vigilance and suspicion. How refreshing.
Cynthia Murrell, May 22, 2025
Employee Time App Leaks User Information
May 22, 2025
Oh boy! Security breaches are happening everywhere these days. It’s not scary unless your personal information is leaked, as in “Top Employee Monitoring App Leaks 21 Million Screenshots On Thousands Of Users,” reports TechRadar. The app in question is called WorkComposer, and it’s described as an “employee productivity monitoring tool.” Cybernews cybersecurity researchers discovered an archive of millions of WorkComposer-generated real time screenshots. These screenshots showed what the employee was working on, which might include sensitive information.
The sensitive information could include intellectual property, passwords, login portals, emails, proprietary data, etc. These leaked images are a major privacy violation, meaning WorkComposer is in boiling water. Privacy organizations and data watchdogs could get involved.
Here is more information about the leak:
“Cybernews said that WorkComposer exposed more than 21 million images in an unsecured Amazon S3 bucket. The company claims to have more than 200,000 active users. It could also spell trouble if it turns out that cybercriminals found the bucket in the past. At press time, there was no evidence that it did happen, and the company apparently locked the archive down in the meantime.”
WorkComposer was designed for companies to monitor the work of remote employees. It allows team leads to track their employees’ work and captures an image every twenty seconds.
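The root cause Cybernews describes, a screenshot archive left in an unsecured Amazon S3 bucket, is a configuration failure rather than an exotic exploit. Here is a minimal sketch, assuming the standard boto3 library and a hypothetical bucket name, of how an operator could check whether a bucket’s public-access blocks and policy actually keep it private; it illustrates the class of mistake, not WorkComposer’s actual setup.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical bucket name for illustration only; substitute your own.
BUCKET = "example-screenshot-archive"

s3 = boto3.client("s3")

def bucket_public_access_report(bucket: str) -> None:
    """Print whether public-access blocks are enabled for a bucket and whether its policy is public."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{bucket}: no public-access block configured at all")
            return
        raise

    for setting, enabled in cfg.items():
        print(f"{bucket}: {setting} is {'on' if enabled else 'OFF'}")

    # A bucket policy can still grant public reads, so check its status too.
    try:
        policy = s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]
        print(f"{bucket}: policy marks bucket public = {policy['IsPublic']}")
    except ClientError:
        print(f"{bucket}: no bucket policy attached")

bucket_public_access_report(BUCKET)
```

Any setting reported as OFF, or a policy status of IsPublic = True, is the sort of misconfiguration that turns a private monitoring archive into a public gallery.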
It’s a useful monitoring application but a scary situation with the leaks. Why don’t the Cybernews people report the problem or fix it? That’s a white hat trick.
Whitney Grace, May 22, 2025