Faux Boeuf Delivers Zero Calories Plus a Non-Human Toxin
August 29, 2025
No AI. Just a dinobaby working the old-fashioned way.
That sizzling AI rib called boeuf à la Margaux Blanchard is a treat. I learned about this recipe for creating filling, substantive, calorie-laden content in “Wired and Business Insider Remove Articles by AI-Generated Freelancer.” I can visualize the meeting in which the decision was taken to hire Margaux Blanchard. I can also run on my mental VHS the meeting at which the issue was discovered. In my version, the group agreed to blame it on a contractor and the lousy job human resource professionals do these days.
What’s the “real” story? Let’s go to the Guardian write up:
On Thursday [August 22, 2025], Press Gazette reported that at least six publications, including Wired and Business Insider, have removed articles from their websites in recent months after it was discovered that the stories – written under the name of Margaux Blanchard – were AI-generated.
I frequently use the phrase “ordained officiant” in my dinobaby musings. Doesn’t everyone with some journalism experience?
The write up said:
Wired’s management acknowledged the faux pas, saying: “If anyone should be able to catch an AI scammer, it’s Wired. In fact we do, all the time … Unfortunately, one got through. We made errors here: This story did not go through a proper fact-check process or get a top edit from a more senior editor … We acted quickly once we discovered the ruse, and we’ve taken steps to ensure this doesn’t happen again. In this new era, every newsroom should be prepared to do the same.”
Yeah, unfortunately and quickly. Yeah.
I liked this paragraph in the story:
This incident of false AI-generated reporting follows a May error when the Chicago Sun-Times’ Sunday paper ran a syndicated section with a fake reading list created by AI. Marco Buscaglia, a journalist who was working for King Features Syndicate, turned to AI to help generate the list, saying: “Stupidly, and 100% on me, I just kind of republished this list that [an AI program] spit out … Usually, it’s something I wouldn’t do … Even if I’m not writing something, I’m at least making sure that I correctly source it and vet it and make sure it’s all legitimate. And I definitely failed in that task.” Meanwhile, in June, the Utah court of appeals sanctioned a lawyer after he was discovered to have used ChatGPT for a filing he made in which he referenced a nonexistent court case.
Hey, that AI is great. It builds trust. It is intellectually satisfying, just like some time in the kitchen with Margaux Blanchard, a hot laptop, and some spicy prompts. Yum yum yum.
Stephen E Arnold, August 29, 2025
Computer Science Grad Job Crisis: Root Cause Revealed
August 29, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read a short item called “A Popular College Major Has One of The Highest Unemployment Rates.” The article contains old news, but it also reveals one of the underlying causes of the issue.
First, here’s the set up for the “no jobs for you” write up:
Computer science ranked seventh amongst undergraduate majors with the highest unemployment at 6.1 percent, according to the Federal Reserve Bank of New York. “Every kid with a laptop thinks they’re the next Zuckerberg, but most can’t debug their way out of a paper bag,” one expert told Newsweek.
Now, let’s look at the passage that points to an underlying cause:
HR consultant Bryan Driscoll told Newsweek: “Computer science majors have long been sold a dream that doesn’t match reality.”
And a bit of supporting input:
Michael Ryan, a finance expert and the founder of MichaelRyanMoney.com, told Newsweek: … “We created a gold rush mentality around coding right as the gold ran out. Companies are cutting engineering budgets by 40 percent while CS enrollment hits record highs. It’s basic economics. Flood the market, crater the wages.”
My take is that this is another example of “think it and it will become real” patterning in the US and probably elsewhere too. Colleges and universities wanted to “sell” student loans. Computer science was nothing more than the bait on the hook of employment for life for the mark.
The notion that one can visualize a world and make it real, that imagination corresponds to how life unspools, strikes me as crazy. In my career I have met a few people who said, “I knew I wanted to be an X, so I just did it.” The majority of those with whom I have interacted in my 60-plus-year work career say something like this: “Yeah, I majored in X, but an opportunity arose, and I took it. Now I do Y. Go figure.”
The “think it into reality” approach seems to deliver low probability results. Situational decisions have several upsides. First, one doesn’t have a choice for some reason. Second, surprises happen. And third, as one moves through life (the unspooling idea), perceptions, interests, and even intelligence change.
My hunch is that today (it happens to be August 21, 2025) we are living in a world in which “think it and it will happen” thought processes are everywhere. Is Mark Zuckerberg suddenly concerned about an AI bubble? Will Microsoft launch Excel Copilot with a warning label that says, “This will output errors”? Will you trust your child’s medical treatment to a smart robot?
I like to think about dialing more “real” world into everyday life. Unemployment for computer science graduates won’t change too much in the “up” direction. But at least the carnival culture approach to selling a college education, an AI start up idea to a 20-something MBA “managing director,” and the “do it for 10,000 hours and become an expert” routine may loosen its grip on what are some pretty wacky ideas.
Stephen E Arnold, August 29, 2025
Misunderstanding the Google: A Hot Wok
August 29, 2025
No AI. Just a dinobaby working the old-fashioned way.
I am no longer certain how many people read blog posts. Bing, Google, and Yandex seem to be crawling in a more focused way; that is, comprehensiveness is not part of the game plan. I want to do my small part by recommending that you scan (preferably study) “Google Is Killing the Open Web.”
The premise of the essay is clear: Google has been working steadily and in a relatively low PR voltage mode to control the standards for the Web. I commented on this in my Google Legacy, Google Version 2.0, and other Google writings as early as 2003. How did I identify this strategic vision? Easy. A Googler told me. This individual liked it when I called Google a “calculating predator.” This person made an effort (a lame one because he worked at Google) to hear my lectures about Google’s Web search.
Now, 22 years later, an individual has put the pieces together and concluded rightly that Google is killing the open Web. The essay states:
Google is managing to achieve what Microsoft couldn’t: killing the open web. The efforts of tech giants to gain control of and enclose the commons for extractive purposes have been clear to anyone who has been following the history of the Internet for at least the last decade, and the adopted strategies are as varied in technique as they are in success, from Embrace, Extend, Extinguish (EEE) to monopolization and lock-in.
Several observations:
- The visible efforts to monopolize have been search, ads, and the mobile plays. The lower profile technical standards are going to be more important as new technologies emerge. The early Googlers’ instincts were accurate. People (namely Wok) are just figuring it out. Unfortunately, it is too late.
- Because online services have a tendency to become monopolies, the world of “online” has become increasingly centralized. The “myth” of decentralization is a great one, but so was the “Epic of Gilgamesh.” There may be some pony in there, but the reality is that it is better to centralize and then decide what to move out there.
- The big tech outfits reside in a “country,” but the reality is that these outfits are borderless. There is no traditional there there. Consequently, governments struggle to regulate what these outfits do. Australia levies a fine on Google. So what? Google just keeps being Googley. Live with it.
One cannot undo decades of methodical, strategic thinking, and deft tactical moves quickly. My view is that changing Google will occur within Google. The management thinking is becoming increasingly like that of an AT&T type company. Chop it up and it will just glue itself back together.
I know the Wok is hot. Time to cool off and learn to thrive in the walled garden. Getting out is going to be more difficult than many other tasks. Google controls lots of technology, including the button that opens the gate to the walled garden.
Stephen E Arnold, August 29, 2025
Think It. The “It” Becomes Real. Think Again?
August 27, 2025
No AI. Just a dinobaby working the old-fashioned way.
Fortune Magazine — once the gem for a now spinning-in-his-grave publisher — posted “MIT Report: 95% of Generative AI Pilots at Companies Are Failing.” I take a skeptical view of MIT. Why? The esteemed university found Jeffrey Epstein a swell person.
The thrust of the story is that people stick smart software into an organization, allow it time to steep, cook up a use case, and find the result unpalatable. Research is useful. When it evokes a “Duh!”, I don’t get too excited.
But there was a phrase in the write up which caught my attention: learning gap. AI or smart software is a “belief.” The idea of the next big thing creates an opportunity to move money. Flow, churn, motion — these are positive values in some business circles.
AI fits the bill. The technology demonstrates interesting capabilities. Use cases exist. Companies like Microsoft have put money into the idea. Moving money is proof that “something” is happening. And today that something is smart software. AI is the “it” for the next big thing.
Learning gap, however, is the issue. The hurdle is not Sam Altman’s fears about the end of humanity or his casual observation that trillions of dollars are needed to make AI progress. We have a learning gap.
But the driving vision for Internet era innovation is do something big, change the world, reinvent society. I think this idea goes back to the sales-oriented philosophy of visualizing a goal and aligning one’s actions to achieve that goal. A fellow (or persona) named Napoleon Hill pulled together some ideas along these lines and crafted “Think and Grow Rich.” Today one just promotes the “next big thing,” gets some cash moving, and an innovation like smart software will revolutionize, remake, or redo the world.
The “it” seems to be stuck in the learning gap. Here’s the proof, and I quote:
But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained. The data also reveals a misalignment in resource allocation. More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation—eliminating business process outsourcing, cutting external agency costs, and streamlining operations.
Consider this question: What if smart software mostly works but makes humans uncomfortable in ways difficult for the user to articulate? What if humans lack the mental equipment to conceptualize what a smart system does? What if the smart software cannot answer certain user questions?
I find information about costs, failed use cases, hallucinations, and benefits plentiful. I don’t see much information about the “learning gap.” What causes a learning gap? Spell check makes sense. A click that produces a complete report on a complex topic is different. But in what way? What is the impact on the user?
I think the “learning gap” is a key phrase. I think there is money to be made in addressing it. I am not confident that visualizing a better AI is going to solve the problem which is similar to a bonfire of cash. The learning gap might be tough to fill with burning dollar bills.
Stephen E Arnold, August 27, 2025
Apple and Meta: The After Market Route
August 26, 2025
No AI. Just a dinobaby working the old-fashioned way.
Two big outfits are emulating the creative motif of an American television series titled “Pimp My Ride.” The show was hosted by rapper Xzibit, who has a new album called “Kingmaker” in the works. He became the “meme” of the television program with his signature phrase, “Yo, dawg, I heard you like.”
A DVD of season one is available for sale at www.bol.com.
In each episode, a “lucky person” would be approached and told that his or her vehicle would be given a makeover. Some of the makeovers were memorable. Examples included the “Yellow Shag Disaster,” which featured yellow paint and yellow shag carpeting. The team removed a rat living in the 1976 Pacer. Another was the “Drive In Theater Car.” It included a pop-up champagne dispenser and a TV screen installed under the hood for a viewing experience when people gathered outside the vehicle.
The idea was to take something that mostly worked and then add on extras. Did the approach work? It made Xzibit even more famous, and it contributed the phrase “Yo, dawg, I heard you like” to US popular culture between 2004 and 2007.
I think the “Pimp My Ride” concept has returned for Apple and Meta. Let me share my thoughts with you.
First, I noted that Bloomberg reported Apple is exploring the use of Google Gemini AI to power the long-suffering Siri. You can read the paywalled story at this link. Apple knows that Google’s payments are worth real money. The idea of adding more Google and getting paid for the decision probably makes sense to the estimable Apple. Will the elephants mate and produce more money, or will the grass get trampled? I don’t know. It will be interesting to see what the creative wizards at both companies produce. There is no date for the release of the first episode. I will be watching.
Second, the story presented in fragments on X.com appears at this X.com page. The key item of information is the alleged tie up between Meta and MidJourney:
Today we’re proud to announce a partnership with @midjourney, to license their aesthetic technology for our future models and products, bringing beauty to billions.
Meta, like Apple, is partnering with an AI success in the arts and crafts sector of smart software. The idea seems to focus on “aesthetic excellence.” How will these outfits enhance Meta? Here’s what the X.com comment offers:
To ensure Meta is able to deliver the best possible products for people it will require taking an all-of-the-above approach. This means world-class talent, ambitious compute roadmap, and working with the best players across the industry.
Will these add-on approaches to AI deliver something useful to millions, or will the respective organizations produce the equivalent of the “Pimp My Ride” Hot Tub Limousine? This after-market confection added a hot tub filled with water to a limousine. The owner of the vehicle could relax in the hot tub while the driver ferried the proud owner to the bank.
I assume the creations of the Apple, Google, Meta, and MidJourney teams will be captured on video and distributed on TikTok-type services as well as billions of computing devices. My hope is that Xzibit is asked to host the roll outs for the newly redone services. I would buy a hat, a T shirt, and a poster for the “winner” of this new AI enhanced effort.
Yo, dawg, I heard you like AI, right?
Stephen E Arnold, August 26, 2025
And the Problem for Enterprise AI Is … Essentially Unsolved
August 26, 2025
No AI. Just a dinobaby working the old-fashioned way.
I try not to let my blood pressure go up when I read “our system processes all your organization’s information.” Not only is this statement wildly incorrect, it is probably some combination of [a] illegal, [b] too expensive, and [c] too time consuming.
Nevertheless, vendors either repeat the mantra or imply it. When I talk with representatives of these firms, fewer and fewer of them, over time, recognize the craziness of the assertion. Apparently the reality of trying to process documents related to a legal matter, medical information, salary data, government-mandated secrecy cloaks, data on a work-from-home contractor’s laptop which contains information about payoffs in a certain country to win a contract, and similar information is not part of this Fantasyland.
I read “Immature Data Strategies Threaten Enterprise AI Plans.” The write up is a hoot. The information is presented in a way to avoid describing certain ideas as insane or impossible. Let’s take a look at a couple of examples. I will in italics offer my interpretation of what the online publication is trying to coat with sugar and stick inside a Godiva chocolate.
Here’s the first snippet:
Even as senior decision-makers hold their data strategies in high regard, enterprises face a multitude of challenges. Nearly 90% of data pros reported difficulty with scaling and complexity, and more than 4 in 5 pointed to governance and compliance issues. Organizations also grapple with access and security risks, as well as data quality, trust and skills gaps.
My interpretation: Executives (particularly leadership types) perceive their organizations as more buttoned up than they are in reality. Ask another employee, and you will probably hear something like “overall we do very well.” The fact of the matter is that leadership and satisfied employees have zero clue about what is required to address a problem. Looking too closely is not a popular way to get that promotion or to keep the Board of Directors and stakeholders happy. When you have to identify an error, use a word like “governance” or “regulations.”
Here’s the second snippet:
To address the litany of obstacles, organizations are prioritizing data governance. More than half of those surveyed expect strengthened governance to significantly improve AI implementation, data quality and trust in business decisions.
My interpretation: Let’s talk about governance, not how poorly procurement is handled and the weird system problems that just persist. What is “governance”? Organizations are unsure how they continue to operate. The purpose of many organizations is — believe it or not — lost. Making money is the yardstick. Do what’s necessary to keep going. That’s why in certain organizations an employee from 30 years ago could return and go to a meeting. Why? No change. Same procedures, same thought processes, just different people. Incrementalism and momentum power the organization.
So what? Organizations are deciding to give AI a whirl, or third parties are telling them to do AI. Guess what? Major change is difficult. Systems-related activities repeat the same cycle. Here’s one example: “We want to use Vendor X to create an enterprise knowledge base.” Then the time, cost, and risks are slowly explained. The project gets scaled back because there is neither the time, the money, nor the employee cooperation, and the totally addled attorneys balk at making organization-spanning knowledge available to smart software.
The pitch sounds great. It has for more than 60 years. It is still a difficult deliverable, but it is much easier to market today. Data strategies are one thing; reality is another.
Stephen E Arnold, August 26, 2025
Deal Breakers in Medical AI
August 26, 2025
No AI. Just a dinobaby working the old-fashioned way.
My newsfeed thing spit out a link to “Why Radiology AI Didn’t Work and What Comes Next.” I have zero interest in radiology. I don’t get too excited about smart software. So what did I do? Answer: I read the article. I was delighted to uncover a couple of points that, in my opinion, warrant capturing in my digital notebook.
The set up is that a wizard worked at a start up trying to get AI to make sense of the consistently fuzzy, murky, and baffling images cranked out by radiology gizmos. Tip: Follow the instructions and don’t wear certain items of jewelry. The start up fizzled. AI was part of the problem, but the Jaws-type shark lurking in the murky image explains this type of AI implosion.
Let’s run through the points that struck me.
First, let’s look at this passage:
Unlike coding or mathematics, medicine rarely deals in absolutes. Clinical documentation, especially in radiology, is filled with hedge language — phrases like “cannot rule out,” “may represent,” or “follow-up recommended for correlation.” These aren’t careless ambiguities; they’re defensive signals, shaped by decades of legal precedent and diagnostic uncertainty.
Okay, lawyers play a significant role in establishing thought processes and normalizing ideas that appear purpose-built to vaporize the smart system the way one of those nifty tattoo-removing gadgets vaporizes ink. I would have pegged insurance companies first, then lawyers, but the write up directed my attention to the legal eagles’ role: hedge language. Do I have disease X? The doctor responds, “Maybe, maybe not. Let’s wait 30 days and run more tests.” Fuzzy lingo, fuzzy images, perfect.
Second, the write up asks two questions:
- How do we improve model coverage at the tail without incurring prohibitive annotation costs?
- Can we combine automated systems with human-in-the-loop supervision to address the rare but dangerous edge cases?
The answers seem to be: You cannot afford to have humans do indexing and annotation; that’s why certain legal online services charge a lot for annotations. And, to the second question, no, you cannot pull off automation with humans for events rarely covered in the training data. Why? Cost, and finding enough humans who will do this work in a consistent way in a timely manner.
Here’s the third snippet:
Without direct billing mechanisms or CPT reimbursement codes, it was difficult to monetize the outcomes these tools enabled. Selling software alone meant capturing only a fraction of the value AI actually created. Ultimately, we were offering tools, not outcomes. And hospitals, rightly, were unwilling to pay for potential unless it came bundled with performance.
Finally, insurance procedures. Hospitals aren’t buying AI; they are buying ways to deliver “service” and “bill.” AI at this time does not sell what hospitals want to buy: a way to keep rates high and slash costs wherever possible.
It is unlikely, but perhaps some savvy AI outfit will create a system that can crack the issues the article identifies. Until then: no money, no AI.
Stephen E Arnold, August 26, 2025
Leave No Data Unslurped: A New Google T Shirt Slogan?
August 25, 2025
No AI. Just a dinobaby working the old-fashioned way.
That mobile phone is the A Number One surveillance device ever developed. Not surprisingly, companies have figured out how to monetize the data flowing through the device. Try explaining the machinations of those “Accept Defaults” screens to a clutch of 70-something bridge players. Then try explaining the same thing to the GenAI type of humanoid. One group looks at you with a baffled look on their faces. The other group stares into the distance and says, “Whatever.”
Now the Google wants more data, fresh information, easily updated. Because why not? “Google Expands AI-Based Age Verification System for Search Platform.” The write up says:
Google has begun implementing an artificial intelligence-based age verification system not only on YouTube but also on Google Search … Users in the US are reporting pop-ups on Google Search saying, “We’ve changed some of your settings because we couldn’t verify that you’re of legal age.” This is a sign of new rules in Google’s Terms of Service.
Why the scope creep from YouTube to “search” with its AI wonderfulness? The write up says:
The new restrictions could be another step in re-examining the balance between usability and privacy.
Wrong. The need for more data to stuff into the assorted AI “learning” services provides a reasonable rationale. Tossing in the “prevent harm” angle is just cover.
My view of the matter is:
- Mobile is a real time service. Capturing more information of a highly-specific nature is something that is an obvious benefit to the Google.
- Users have zero awareness of how the data interactions work, and most don’t want to know or to try to understand cross correlation.
- Google’s goals are not particularized. This type of “fingerprint” just makes sense.
The motto could be “Leave no data unslurped.” What’s this mean? Every Google service will require verification. The more one verifies, the fresher the identity information and the items that tag along and can be extracted. I think of this as similar to the process of rendering slaughtered livestock. The animal is dead, so what’s the harm?
None, of course. Google is busy explaining how little energy its data centers use to provide those helpful AI overview things.
Stephen E Arnold, August 25, 2025
Copilot, Can You Crash That Financial Analysis?
August 22, 2025
No AI. Just a dinobaby working the old-fashioned way.
The ever-insouciant online service The Verge published a story about Microsoft, smart software, and Excel. “Microsoft Excel Adds Copilot AI to Help Fill in Spreadsheet Cells” reports:
Microsoft Excel is testing a new AI-powered function that can automatically fill cells in your spreadsheets, which is similar to the feature that Google Sheets rolled out in June.
Okay, quite specific intentionality: Fill in cells. And a dash of me-too. I like it.
However, the key statement in my opinion is:
The COPILOT function comes with a couple of limitations, as it can’t access information outside your spreadsheet, and you can only use it to calculate 100 functions every 10 minutes. Microsoft also warns against using the AI function for numerical calculations or in “high-stakes scenarios” with legal, regulatory, and compliance implications, as COPILOT “can give incorrect responses.”
I don’t want to make a big deal out of this passage, but I will do it anyway. First, Microsoft makes clear that the outputs can be incorrect. Second, don’t use it too much, because I assume one will have to pay to use a system that “can give incorrect responses.” In short, MSFT is throttling Excel’s Copilot. Doesn’t everyone want to explore numbers with an addled Copilot known to flub numbers in a jet aircraft at 0.8 Mach?
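For the curious, Microsoft’s announcement materials describe COPILOT as a worksheet function that takes a natural language prompt plus optional cell references for context. A sketch of the sort of formula involved (my illustration based on those materials, not a tested recipe; the range is made up):

=COPILOT("Classify each comment as positive, negative, or neutral", A2:A20)

In principle the function spills its answers into adjacent cells, subject to that 100-calls-per-10-minutes cap quoted above. Whether the answers are correct is, as Microsoft itself admits, another matter.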
I want to quote from “It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes”:
Think of it. Forty-five hundred years ago, if you were a Sumerian scribe, while your calculations on the world’s first abacus might have been laborious, you could be assured they’d be correct. Four hundred years ago, if you were palling around with William Oughtred, his new slide rule may have been a bit intimidating at first, but you could know its output was correct. In the 1980s, you could have bought the cheapest, shittiest Casio-knockoff calculator you could find, and used it exclusively, for every day of the rest of your life, and never once would it give anything but a correct answer. You could use it today! But now we have Microsoft apparently determining that “unpredictability” was something that some number of its customers wanted in their calculators.
I know that I sure do. I want to use a tool that is likely to convert “high-stakes scenarios” into an embarrassing failure. I mean who does not want this type of digital Copilot?
Why do I find this Excel with Copilot software interesting?
- It illustrates that accuracy has given way to close enough for horseshoes. Impressive for a company that can issue an update that could kill one’s storage devices.
- Microsoft no longer dances around hallucinations. The company just says, “The outputs can be wrong.” But I wonder, “Does Microsoft really mean it?” What about Red Bull-fueled MBAs handling one’s retirement accounts? Yeah, those people will be really careful.
- The article does not come out and say, “Looks like the AI rocket ship is losing altitude.”
- I cannot imagine sitting in a meeting and observing the rationalizations offered to justify releasing a product known to make NUMERICAL errors.
Net net: We are learning about the quality of [a] managerial processes at Microsoft, [b] the judgment of employees, and [c] the sheer craziness of an attorney saying, “Sure, release the product; just include an upfront statement that it will make mistakes.” Nothing builds trust more than a company anchored in customer-centric values.
Stephen E Arnold, August 22, 2025
News Flash: Google Does Not Care about Publishers
August 21, 2025
No AI. Just a dinobaby working the old-fashioned way.
I read another Google is bad story. This one is titled “Google Might Not Believe It, But Its AI Summaries Are Bad News for Publishers.” The “news” service reports that a publishing industry group spokesperson said:
“We must ensure that the same AI ‘answers’ users see at the top of Google Search don’t become a free substitute for the original work they’re based on.”
When this sentence was spoken was the industry representative’s voice trembling? Were there tears in his or her eyes? Did the person sniff to avoid the embarrassment of a runny nose?
No idea.
The issue is that Google looks at its metrics, fiddles with the knobs and dials on its ad sales system, and launches AI summaries. The clicks that used to go to individual sites now go to the “summary space,” which is a great place for more expensive, big advertising accounts to slap their messages. Yep, it is the return to the go-go days of television. Google is the only channel and one of the few places to offer a deal.
What does Google say? Here’s a snip from the “news” story:
"Overall, total organic click volume from Google Search to websites has been relatively stable year-over-year," Liz Reid, VP and Head of Google Search, said earlier this month. "Additionally, average click quality has increased, and we’re actually sending slightly more quality clicks to websites than a year ago (by quality clicks, we mean those where users don’t quickly click back — typically a signal that a user is interested in the website). Reid suggested that reports like the ones from Pew and DCN are "often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in Search."
Translation: Haven’t you yokels figured out that after 20 years of responding to us, we are in control now? We don’t care about you. If we need content, we can [a] pay people to create it, [b] use our smart software to write it, and [c] offer non-profits, government agencies, and outfits with lots of writers desperate for recognition a deal. TikTok has changed video, but TikTok just inspired us to do our own TikTok. Now publishers can either get with the program or get out.
PC News apparently does not know how to translate Googlese.
It’s been 20-plus years, and Google has not changed. It is just running more of the game plan. Adapt or end up prowling LinkedIn for work.
Stephen E Arnold, August 21, 2025