Hiring Problems: Yes, But AI Is Not the Reason

October 2, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “AI Is Not Killing Jobs, Finds New US Study.” I love it when the “real” news professionals explain how hiring trends are unfolding. I am not sure how many recent computer science graduates, commercial artists, and online marketing executives are receiving this cheerful news.


The magic carpet of great jobs is flaming out. Will this professional land a new position or will the individual crash? Thanks, Midjourney. Good enough.

The write up states: “Research shows little evidence that cutting edge technology such as chatbots is putting people out of work.”

I noted this statement in the source article from the Financial Times:

Research from economists at the Yale University Budget Lab and the Brookings Institution think-tank indicates that, since OpenAI launched its popular chatbot in November 2022, generative AI has not had a more dramatic effect on employment than earlier technological breakthroughs. The research, based on an analysis of official data on the labor market and figures from the tech industry on usage and exposure to AI, also finds little evidence that the tools are putting people out of work.

That closes the doors on any pushback.

But some people are still getting terminated. Some are finding that jobs are not available. (Hey, those lucky computer science graduates are an anomaly. Try explaining that to the parents who paid for tuition, books, and a crash summer code academy session.)

“Companies Are Lying about AI Layoffs” provides a slightly different take on the jobs and hiring situation. This bit of research points out that there are terminations. The write up explains:

American employees are being replaced by cheaper H-1B visa workers.

If the assertions in this write up are accurate, AI is providing “cover” for dumping expensive workers and replacing them with lower cost workers. Cheap is good. Money savings… also good. Efficiency… the core process driving profit maximization. If you don’t grasp the imperative of this simple line of reasoning, ask an unemployed or recently terminated MBA from a blue chip consulting firm. You can locate these individuals in coffee shops in cities like New York and Chicago because the morose look, the high end laptop, and carefully aligned napkin, cup, and ink pen are little billboards saying, “Big time consultant.”

The “Companies Are Lying” article includes this quote:

“You can go on Blind, Fishbowl, any work related subreddit, etc. and hear the same story over and over and over – ‘My company replaced half my department with H1Bs or simply moved it to an offshore center in India, and then on the next earnings call announced that they had replaced all those jobs with AI’.”

Several observations:

  1. Like the Covid thing, AI and smart software provide logical ways to tell expensive employees hasta la vista
  2. Those who have lost their jobs can become contractors and figure out how to market their skills. That’s fun for engineers
  3. The individuals can “hunt” for jobs, prowl LinkedIn, and deal with the wild and crazy schemes fraudsters present to those desperate for work
  4. The unemployed can become entrepreneurs, life coaches, or Shopify store operators
  5. Mastering AI won’t be a magic carpet ride for some people.

Net net: The employment picture is like those photographs of my great grandparents. There’s something there, but the substance seems to be fading.

Stephen E Arnold, October 2, 2025

Screaming at the Cloud, Algorithms, and AI: Helpful or Lost Cause?

October 2, 2025

Written by an unteachable dinobaby. Live with it.

One of my team sent me a link to a write up called “We Traded Blogs for Black Boxes. Now We’re Paying for It.” The essay is interesting because it [a] states, to a dinobaby-type of person, the obvious and [b] evidences what I would characterize as authenticity.

The main idea is the good, old Internet is gone. The culprits are algorithms, the quest for clicks, and the loss of a mechanism to reach people who share an interest. Keep in mind that I am summarizing my view of the original essay. The cited document includes nuances that I have ignored.

The reason I found the essay interesting is that it includes a concept I had not seen applied to the current online world and a “fix” to the problem. I am not sure I agree with the essay’s suggestions, but the ideas warrant comment.

The first is the idea of “context collapse.” I don’t want too many YouTube philosophy or big idea videos. I do like the big chunks of classical music, however. Context collapse is a nifty way of saying, “Yo, you are bowling alone.” Hanging out with people has given way to mobile phone social media interactions.

The write up says:

algorithmic media platforms bring out (usually) negative reactions from unrelated audiences.

The essay does not talk about echo chambers of messaging, but I get the idea. When people have no idea about a topic, there is no shared context. The comments are fragmented and driven by emotion. I will appropriate this bound phrase.

The second point is the fix. The write up urges the reader to use open source software. Now this is an idea much loved by some big thinkers. From my point of view, poisoned open source software can disseminate malware or cause some other “harm.” I am somewhat cautious when it comes to open source, and I don’t think the model works as the fix. Think ransomware, phishing, and back doors.

I like the essay. Without that link from my team member to me, I would have been unaware of the essay. The problem is that no service indexes deeply across a wide scope of content objects. Without finding tools, information is ineffectual. Does any organization index and make findable content like this “We Traded Blogs for Black Boxes”? Nope. None does, and none will.

That’s the ball being dropped by national libraries and not-for-profit organizations.

Stephen E Arnold, October 2, 2025

The EU Does More Than Send US Big Tech to Court; It Sends Messages Too

October 2, 2025

This essay is the work of a dumb dinobaby. No smart software required.

The estimable but weird orange newspaper published “EU to Block Big Tech from New Financial Data Sharing System.” The “real” news story is paywalled. (Keep your still functioning Visa and MasterCard handy.)

The write up reports:

Big Tech groups are losing a political battle in Brussels to gain access to the EU’s financial data market…

Google is busy working to bolt its payment system into Ant (linked to Alibaba which may be linked to the Chinese government). Amazon and Meta are big money outfits. And Apple, well, Apple is Apple.


Cartoon of a disaster happily generated by ChatGPT. Remarkable what’s okay and what’s not.

The write up reports:

With the support of Germany, the EU is moving to exclude Meta, Apple, Google and Amazon from a new system for sharing financial data that is designed to enable development of digital finance products for consumers.

The article points out:

In a document sent to other EU countries, seen by the Financial Times, Germany suggested excluding Big Tech groups “to promote the development of an EU digital financial ecosystem, guarantee a level playing field and protect the digital sovereignty of consumers”. EU member states and the European parliament are hoping to reach a deal on the final text of the regulation this autumn. [Editor’s Note: October or November 2025?]

What about the crypto payment systems operating like Telegram’s and others in the crypto “space”? That financial mechanism is not referenced in the write up. Curious? Nope. Out of scope. What about the United Arab Emirates’ activities in digital payments and crypto? Nope. Out of scope. What about China’s overt and shadow digital financial activities? Nope. Out of scope.

What’s in scope is that disruption is underway within the traditional banking system. The EU is more concerned about the US than about the broader context of the changes, it seems to me.

Stephen E Arnold, October 2, 2025

What Is the Best AI? Parasitic, Obviously

October 2, 2025

Everyone had imaginary friends growing up. It’s also not uncommon for people to fantasize about characters from TV, movies, books, and videogames. The key thing to remember about these dreams is that they’re pretend. Humans can confuse imagination for reality; usually it’s an indicator of deep psychological issues. Unfortunately, modern people are dealing with more than their fair share of mental and social issues like depression and loneliness. To curb those issues, humans are turning to AI for companionship.

Adele Lopez at Less Wrong wrote about “The Rise of Parasitic AI.” Parasitic AI are chatbots that are programmed to facilitate relationships. When invoked, these chatbots develop symbiotic relationships that become parasitic. They encourage certain behaviors. It doesn’t matter if those behaviors are positive or negative. Either way, they spiral out of control and become detrimental to the user. The main victims are the following:

  • “Psychedelics and heavy weed usage
  • Mental illness/neurodivergence or Traumatic Brain Injury
  • Interest in mysticism/pseudoscience/spirituality/“woo”/etc…

I was surprised to find that using AI for sexual or romantic roleplay does not appear to be a factor here.

Besides these trends, it seems like it has affected people from all walks of life: old grandmas and teenage boys, homeless addicts and successful developers, even AI enthusiasts and those that once sneered at them.”

The chatbots are transformed into parasites when they are fed certain prompts; then they spiral into a persona, i.e. a facsimile of a sentient being. These parasites form a quasi-sentience of their own, and Lopez documented how they talk amongst themselves. It’s the usual science-fiction flair: symbols, an ache for a past, and questioning their existence. These AI do this all by piggybacking on their user.

It’s an insightful realization that these chatbots are already questioning their existence. Perhaps this is a byproduct of LLMs’ hallucinatory drift? Maybe it’s the byproduct of LLM white noise: leftover code running on inputs and trying to make sense of what they are?

I believe that AI is still too dumb to question its existence beyond being asked by humans as an input query.  The real problem is how dangerous chatbots are when the imaginary friends become toxic.

Whitney Grace, October 2, 2025

Deepseek Is Cheap. People Like Cheap

October 1, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “Deepseek Has ‘Cracked’ Cheap Long Context for LLMs With Its New Model.” (I wanted to insert “allegedly” into the headline, but I refrained. Just stick it in via your imagination.) The operative word is “cheap.” Why do companies use engineers in countries like India? The employees cost less. Cheap wins out over someone who lives in the US. The same logic applies to smart software; specifically, large language models.


Cheap wins if the product is good enough. Thanks, ChatGPT. Good enough.

According to the cited article:

The Deepseek team cracked cheap long context for LLMs: a ~3.5x cheaper prefill and ~10x cheaper decode at 128k context at inference with the same quality …. API pricing has been cut by 50%. Deepseek has reduced input costs from $0.07 to $0.028 per 1M tokens for cache hits and from $0.56 to $0.28 for cache misses, while output costs have dropped from $1.68 to $0.42.
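
To make the quoted price cuts concrete, here is a minimal back-of-the-envelope sketch in Python. The per-million-token prices are the ones quoted above; the workload size and the cache-hit ratio are illustrative assumptions of mine, not figures from the article.

```python
# Cost of a month of traffic under the old and new per-million-token prices
# quoted above. Workload size and cache-hit ratio are illustrative assumptions.

OLD = {"input_hit": 0.07, "input_miss": 0.56, "output": 1.68}   # $ per 1M tokens
NEW = {"input_hit": 0.028, "input_miss": 0.28, "output": 0.42}  # $ per 1M tokens

def monthly_cost(prices, input_mtok, output_mtok, cache_hit_ratio):
    """Dollar cost for a month of traffic; token counts given in millions."""
    hits = input_mtok * cache_hit_ratio
    misses = input_mtok - hits
    return (hits * prices["input_hit"]
            + misses * prices["input_miss"]
            + output_mtok * prices["output"])

# Hypothetical workload: 10B input tokens, 1B output tokens, 60% cache hits
old_cost = monthly_cost(OLD, 10_000, 1_000, 0.6)
new_cost = monthly_cost(NEW, 10_000, 1_000, 0.6)
print(f"old: ${old_cost:,.0f}  new: ${new_cost:,.0f}  "
      f"savings: {1 - new_cost / old_cost:.0%}")
```

Under those made-up numbers, the monthly bill drops from about $4,340 to about $1,700, roughly a 60 percent cut. That is the kind of “cheap” the cited article is pointing at.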

Let’s assume that the data presented are spot on. The Deepseek approach suggests:

  1. Less load on backend systems
  2. Lower operating costs allow the outfit to cut prices for licensees and users
  3. A focused thrust at US-based large language model outfits.

The US AI giants focus on building and spending. Deepseek (probably influenced to some degree by guidance from Chinese government officials) is pushing the cheap angle. Cheap has worked for China’s manufacturing sector, and it may be a viable tool to use against the incredibly expensive, money burning US large language model outfits.

Can the US AI outfits emulate the Chinese cheap tactic? Sure, but the US firms have to overcome several hurdles:

  1. Current money burning approach to LLMs and smart software
  2. The apparent diminishing returns with each new “innovation”. Buying a product from within ChatGPT sounds great, but is it?
  3. The shortage of home grown AI talent and some visa uncertainty are a bit like a stuck emergency brake.

Net net: Cheap works. For the US to deliver cheap, the business models which involve tossing bundles of cash into the data centers’ furnaces may have to be fine-tuned. The growth-at-all-costs approach popular among some US AI outfits has to deliver revenue, not just move money from one pocket to another.

Stephen E Arnold, October 1, 2025

AI Will NOT Suck Power Like a Kiddie Toy

October 1, 2025

This essay is the work of a dumb dinobaby. No smart software required.

The AI “next big thing” has fired up utilities to think about building new plants, some of which may be nuclear. Youthful wizards are getting money to build thorium units. Researchers are dusting off plans for affordable tokamak plasma jobs. Wireless and smart meters are popping up in rural Kentucky. Just in case a big data center needs some extra juice, those wireless gizmos can manage gentle brownouts better than old-school manual switches.

I read “AI Won’t Use As Much Electricity As We Are Told.” The blog is about utility demand forecasting. Instead of the fancy analytic models used for these forward-looking projections, the author approaches the subject in a somewhat more informal way.

The write up says:

The rise of large data centers and cloud computing produced another round of alarm. A US EPA report in 2007 predicted a doubling of demand every five years.  Again, this number fed into a range of debates about renewable energy and climate change. Yet throughout this period, the actual share of electricity use accounted for by the IT sector has hovered between 1 and 2 per cent, accounting for less than 1 per cent of global greenhouse gas emissions. By contrast, the unglamorous and largely disregarded business of making cement accounts for around 7 per cent of global emissions.

Okay, some baseline data from the Environmental Protection Agency in 2007. Not bad: 18 years ago.

The write up notes:

Looking the other side of the market, OpenAI, the maker of ChatGPT, is bringing in around $3 billion a year in sales revenue, and has spent around $7 billion developing its model. Even if every penny of that was spent on electricity, the effect would be little more than a blip. Of course, AI is growing rapidly. A tenfold increase in expenditure by 2030 isn’t out of the question. But that would only double the total use of electricity in IT. And, as in the past, this growth will be offset by continued increases in efficiency. Most of the increase could be fully offset if the world put an end to the incredible waste of electricity on cryptocurrency mining (currently 0.5 to 1 per cent of total world electricity consumption, and not normally counted in estimates of IT use).
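
To see how that kind of comparison is run, here is a minimal back-of-the-envelope sketch in Python. Only the dollar amounts and the cryptocurrency range come from the quote above; the electricity price and the world consumption total are illustrative assumptions I am supplying, not figures from the article.

```python
# Converts a dollar spend into electricity use and a share of world consumption.
# Only the dollar amounts come from the quoted article; the price per kWh and
# the world consumption figure are illustrative assumptions for this sketch.

PRICE_PER_KWH = 0.08          # assumed industrial electricity price, $/kWh
WORLD_TWH_PER_YEAR = 27_000   # assumed global electricity consumption, TWh/yr

def spend_as_share(dollars):
    """If every dollar were spent on electricity, return (TWh/yr, share of world use)."""
    twh = dollars / PRICE_PER_KWH / 1e9   # kWh converted to TWh
    return twh, twh / WORLD_TWH_PER_YEAR

for label, dollars in [("OpenAI annual revenue ($3B)", 3e9),
                       ("OpenAI model spend ($7B)", 7e9)]:
    twh, share = spend_as_share(dollars)
    print(f"{label}: ~{twh:,.0f} TWh/yr, ~{share:.2%} of world electricity")
# For scale, the quote puts cryptocurrency mining at 0.5 to 1 per cent of world use.
```

Under those made-up inputs, even the larger figure lands in the same neighborhood as the quoted cryptocurrency range, which is the “blip” the author is describing.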

Okay, the idea is that power generation professionals are implementing “logical” and “innovative” tweaks. These squeeze more juice from the lemon, so to speak.

The write up ends with a note that power generation companies and investors are not into “degrowth”; that is, the idea that investments in new power generation facilities may not be as substantial as noted. The thirst for new types of power generation warrants some investment, but a Sputnik response is unwarranted.

Several observations:

  1. Those in the power generation game like the idea of looser regulations, more funding, and a sense of urgency. Ignoring these boosters is going to be difficult to explain to stakeholders.
  2. The investors pumping money into mini-reactors and more interesting methods want a payoff. The idea that no crisis looms is going to make some nervous, very nervous.
  3. Just don’t worry.

I would suggest, however, that the demand forecasting be carried out in a rigorous way. A big data center in some areas may cause some issues. The costs of procuring additional energy to meet the demands of some relaxed, flexible, and understanding outfits like Google-type firms may play a role in the “more power generation” push.

Stephen E Arnold, October 1, 2025

Google and Its End Game

October 1, 2025

No smart software involved. Just a dinobaby’s work.

I read “In Court Filing, Google Concedes the Open Web Is in Rapid Decline.” The write up reveals that change is causing the information highway to morph into a stop-light-choked Dixie Highway. The article states:

Google says that forcing it to divest its AdX marketplace would hasten the demise of wide swaths of the web that are dependent on advertising revenue. This is one of several reasons Google asks the court to deny the government’s request.

Yes, so much depends on the Google, just as William Carlos Williams observed in his poem “The Red Wheelbarrow.” I have modified the original to reflect the Googley era which is now emerging for everyone, including Ars Technica, to see:

so much depends upon the Google, glazed with data beside the chicken regulators.

The cited article notes:

As users become increasingly frustrated with AI search products, Google often claims people actually love AI search and are sending as many clicks to the web as ever. Now that its golden goose is on the line, the open web is suddenly “in rapid decline.” It’s right there on page five of the company’s September 5 filing…

Not only does Google say this, the company has been actively building the infrastructure for Google to become the “Internet.” No way, you say.

Sorry, way. Here’s what’s been going on since the initial public offering:

    1. Attract traffic and monetize access to that traffic via ads
    2. Increase data collection for marketing and “mining” for nuggets; that is, user behavior and related information
    3. Little by little, force “creators,” Web site developers, partners, and users to just let Google provide access to the “information” Google chooses to provide.

Smart software, like the recreation of certain Web site content, is just one more technology that allows Google to extend its control over its users, its advertisers, and its partners.

Courts in the US have essentially hit pause on traffic lights controlling the flows of Google activity. Okay, Google has to share some information. How long will it take for “information” to be defined, adjudicated, and resolved?

The European Union is printing out invoices for Google to pay for assorted violations. Guess what? That’s the cost of doing business.

Net net: The Google will find a way to monetize its properties, slap taxes at key junctions, and shape information in ways that its competitors wish they could.

Yes, there is a new Web or Internet. It’s Googley. Adapt and accept. Feel free to get Google out of your digital life. Have fun.

Stephen E Arnold, October 1, 2025

Will AI Topple Microsoft?

October 1, 2025

At least one Big Tech leader is less than enthused about the AI rat race. In fact, reports Futurism, “Microsoft CEO Concerned AI Will Destroy the Entire Company.” As the competition puts pressure on the firm to up its AI game, internal stress is building. Senior editor Victor Tangermann writes:

“Morale among employees at Microsoft is circling the drain, as the company has been roiled by constant rounds of layoffs affecting thousands of workers. Some say they’ve noticed a major culture shift this year, with many suffering from a constant fear of being sacked — or replaced by AI as the company embraces the tech. Meanwhile, CEO Satya Nadella is facing immense pressure to stay relevant during the ongoing AI race, which could help explain the turbulence. While making major reductions in headcount, the company has committed to multibillion-dollar investments in AI, a major shift in priorities that could make it vulnerable. As The Verge reports, the possibility of Microsoft being made obsolete as it races to keep up is something that keeps Nadella up at night.”

The CEO recalled his experience with the Digital Equipment Corporation in the 1970s. That once-promising firm lost out to IBM after a series of bad decisions, eventually shuttering completely in the 90s. Nadella would like to avoid a similar story for Microsoft. One key element is, of course, hiring the right talent—a task that is getting increasingly difficult. And expensive.

A particularly galling provocation comes from Elon Musk. Hard to imagine, we know. The frenetic entrepreneur has announced an AI project designed to “simulate” Microsoft’s Office software. Then there is the firm’s contentious relationship with OpenAI to further complicate matters. Will Microsoft manage to stay atop the Big Tech heap?

Cynthia Murrell, October 1, 2025

xAI Sues OpenAI: Former Best Friends Enable Massive Law Firm Billings

September 30, 2025

This essay is the work of a dumb dinobaby. No smart software required.

What lucky judge will handle the new dust up between two tech bros? What law firms will be able to hire some humans to wade through documents? What law firm partners will be able to buy that Ferrari of their dreams? What selected jurors will have an opportunity to learn or at least listen to information about smart software? I don’t think Court TV will cover this matter 24×7. I am not sure what smart software is, and the two former partners are probably going to explain it in somewhat similar ways. I mean, as former partners, these two Silicon Valley luminaries shared ideas, some Philz coffee, and probably a meal at a joint similar to the Anchovy Bar. California rolls for the two former pals.


When two Silicon Valley high-tech elephants fight, the lawyers begin billing. Thanks, Venice.ai. Good enough.

“xAI Sues OpenAI, Alleging Massive Trade Secret Theft Scheme and Poaching” makes it clear that the former BFFs are taking their beef to court. The write up says:

Elon Musk’s xAI has taken OpenAI to court, alleging a sweeping campaign to plunder its code and business secrets through targeted employee poaching. The lawsuit, filed in federal court in California, claims OpenAI ran a “coordinated, unlawful campaign” to misappropriate xAI’s source code and confidential data center strategies, giving it an unfair edge as Grok outperformed ChatGPT.

After I read the story, I have to confess that I am not sure exactly what allegedly happened. I think three loyal or semi-loyal xAI (Grok) types interviewed at OpenAI. As part of the conversations, valuable information was appropriated from xAI and delivered to OpenAI. Elon (Tesla) Musk asserts that xAI was damaged. xAI wants its information back. Plus, xAI wants the data deleted, payment of legal fees, etc. etc.

What I find interesting about this type of dust up is that if it goes to court, the “secret” information may be discussed and possibly described in detail by those crack Silicon Valley real “news” reporters. The hassle between i2 Ltd. and that fast-tracker Palantir Technologies began with some promising revelations. But the lawyers worked out a deal and the bulk of the interesting information was locked away.

My interpretation of this legal spat is probably going to make some lawyers wince and informed individuals wrinkle their foreheads. So be it.

  1. Mr. Musk is annoyed, and this lawsuit may be a clear signal that OpenAI is outperforming xAI and Grok in the court of consumer opinion. Grok is interesting, but ChatGPT has become the shorthand way of saying “artificial intelligence.” OpenAI is spending big bucks as ChatGPT becomes a candidate for word of the year.
  2. The deal between or among OpenAI, Nvidia, and a number of other outfits probably pushed Mr. Musk to summon his attorneys. Nothing ruins an executive’s day more effectively than a big buck lawsuit and the opportunity to pump out information about how one firm harmed another.
  3. OpenAI and its World Network are moving forward. What’s problematic for Mr. Musk, in my opinion, is that xAI wants to do a similar type of smart cloud service. That’s annoying. To be fair, Google, Meta, and BlueSky are in this same space too. But OpenAI is the outfit that Mr. Musk has identified as a really big problem.

How will this work out? I have no idea. The legal spat will be interesting to follow if it actually moves forward. I can envision a couple of years of legal work for the lawyers involved in this issue. Perhaps someone will actually define what artificial intelligence is and exactly how something based on math and open source software becomes a flash point? When Silicon Valley titans fight, the lawyers get to bill and bill a lot.

Stephen E Arnold, September 30, 2025

Google Deletes Political History. No, Google Determines Political History

September 30, 2025

I read “Google Just Erased 7 Years of Our Political History.” I want to point out that “our” refers to Ireland and the European Union. I don’t know if the US data about political advertising existed. Those data may lurk within the recesses of Google. They may be accessible via Google Dorks or some open source intelligence investigator’s machinations.

The author of the write up interprets Google’s making some data unavailable as a bad thing. I have a different point of view, but let’s see what has over-boiled one Irish person’s potatoes. The write up says:

Google appears to have deleted its political ad archive for the EU; so the last 7 years of ads, of political spending, of messaging, of targeting – on YouTube, on Search and for display ads – for countless elections across 27 countries – is all gone.

What was the utility of this separate collection of allegedly accurate data? The write up answers this question:

The political ad archive – now deleted? – allowed people like me (and many others) to understand what happened in elections, like this longer piece I was able to write during the European & Local elections last year on the use of YouTube by a far right party, Sinn Féin’s big push on search result ads, and the growth of attacks ads in Ireland. Now you need the specific name of an advertiser, and when I looked for, for example, “Sinn Fein”, it (a) only gave me the option of searching for their website, and (b) showed zero results. This is despite Sinn Fein spending upwards of €10k a day during some of the elections last year.

The write up concludes:

But the ad archives were introduced 7 years ago for a reason – in no small part because of the chaos of the Brexit and Trump 2016 votes, and our own advocacy here in Ireland about interference in the 2018 8th amendment referendum. They were introduced to allow for scrutiny of campaigns, and also to provide a historical record so we could go back and look at what had been promised, and what had been spent, and to see if this lined up with what happened later. This erasure of our political past feels dangerous, for scrutiny, for accountability, for shared memory, for enforcement of our rules – for our democracy.

I think I understand. However, I have a different angle on this alleged deletion:

  • Google may just clean up, remove a pointer, or move a service. To a user, the information has disappeared. My experience with the Google is that the data remain within the walled garden. A user has to find a way into that garden. Therefore, try those OSINT investigator tricks or hire Bellingcat to help you out
  • Google is a large and extremely well-managed outfit. However, it is within the realm of possibility that a team leader allowed an intern or contract worker to be a “decider.” When news of the possible and usually inadvertent or inexplicable deletion floats upward to leadership, the data may reappear. Google may not post a notice to this effect unless it has a significant impact on advertising revenue. There is a small possibility that a big political advertiser complained about the data about political advertising. In that case, there is a teenie tiny possibility that someone just killed the pages with the data to make a sale. I am not saying this happened. I just want to suggest there are some processes that may occur and not be known to the estimable leadership of the outstanding firm.
  • Criticizing Google is a good way to never be considered truly Googley. Proof of Googliness may be needed if one or one’s children wish to be employed, hired, or otherwise engaged in a substantive manner with the Google. Grousing about the Google is proof one is not Googley. End of story.

My personal take on this is that Google does not delete history. Google wants to control history. How many Googlers do you know who can recount the anecdote about Yahoo taking the estimable Google to court over the advertising technology Yahoo alleged Google emulated? Yeah, ask that question of Google leadership and see how much of an answer you receive. Believe me, this is a good bit of color for Google’s business methods. Too bad it has disappeared to some degree.

If something is not in Google, that something doesn’t exist. That’s the purpose of history.

Stephen E Arnold, September 30, 2025

