Google Is Really Cute: Push Your Content into the Jaws of Googzilla
November 4, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Google has a new, helpful, clever, and cute service just for everyone with a business Web site. “Google Labs’ Free New Experiment Creates AI-Generated Ads for Your Small Business” lays out the basics of Pomelli. (I think this word means knobs or handles.)

A Googley business process designed to extract money and data from certain customers. Thanks, Venice.ai. Good enough.
The cited article states:
Pomelli uses AI to create campaigns that are unique to your business; all you need to do is upload your business website to begin. Google says Pomelli uses your business URL to create a “Business DNA” that analyzes your website images to identify brand identity. The Business DNA profile includes tone of voice, color palettes, fonts, and pictures. Pomelli can also generate logos, taglines, and brand values.
Just imagine Google processing your Web site, its content, images, links, and entities like email addresses, phone numbers, etc. Then using its smart software to create an advertising campaign, ads, and suggestions for the amount of money you should / will / must spend via Google’s own advertising system. What a cute idea!
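For the curious, here is a toy sketch of the sort of “Business DNA” extraction the write up describes: grab one image from a business site and report its dominant colors. Google has not said how Pomelli actually does this, so treat it as a generic illustration of the idea, not the Google method; the image URL and the requests and Pillow dependencies are stand-ins.

```python
# Toy illustration of "brand identity" analysis: dominant colors of one image.
# Purely illustrative; not Pomelli's actual pipeline. Needs requests and Pillow.
from collections import Counter
from io import BytesIO

import requests
from PIL import Image

def dominant_colors(image_url: str, top_n: int = 5):
    # Download the image and shrink it so color counting stays cheap.
    resp = requests.get(image_url, timeout=10)
    resp.raise_for_status()
    img = Image.open(BytesIO(resp.content)).convert("RGB").resize((64, 64))
    counts = Counter(img.getdata())
    # Return the most common colors as hex strings, e.g. "#1a73e8".
    return ["#%02x%02x%02x" % rgb for rgb, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    # Hypothetical URL; substitute any real image from a business home page.
    print(dominant_colors("https://example.com/hero.png"))
```

A real system would presumably also pull fonts, taglines, and tone of voice from the page text, but the principle is the same: the URL you upload becomes the raw material.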
The write up points out:
Google says this feature eliminates the laborious process of brainstorming unique ad campaigns. If users have their own campaign ideas, they can enter them into Pomelli as a prompt. Finally, Pomelli will generate marketing assets for social media, websites, and advertisements. These assets can be edited, allowing users to change images, headers, fonts, color palettes, descriptions, and create a call to action.
How will those tireless search engine optimization consultants and Google certified ad reselling outfits react to this new and still “experimental” service? I am confident that [a] some will rationalize the wonderfulness of this service and sell advisory services about the automated replacement for marketing and creative agencies; [b] some will not understand that it is time to think about a substantive side gig because Google is automating basic business functions and plugging into the customer’s wallet with no pesky intermediary to shave off some bucks; and [c] others will watch as their own sales efforts become less and less productive and then go out of business because adaptation is hard.
Is Google’s idea original? No, Adobe has something called AI Found, according to the write up. Google is not into innovation. Need I remind you that Google advertising has some roots in the Yahoo garden in bins marked GoTo.com and Overture.com. Also, there is a bank account with some Google money from a settlement about certain intellectual property rights that Yahoo believed Google used as a source of business process inspiration.
As Google moves into automating these hooks, it accrues several significant benefits that stand out in its push to help users:
- Crawling costs may be reduced because users will push content to Google. This may or may not be a significant factor, but a user who updates a site hands Google timely information.
- The uploaded or pushed content can be piped into the Google AI system and used to inform the advertising and marketing confection Pomelli. Training data and ad prospects in one go.
- The automation of a core business function allows Google to penetrate more deeply into a business. What if that business uses Microsoft products? It strikes me that the Googlers will say, “Hey, switch to Google and you get advertising bonus bucks that can be used to reduce your overall costs.”
- The advertising process is a knob Google can use to pull the user and his cash directly into the Google business process automation scheme.
As I said, cute and also clever. We love you, Google. Keep on being Googley. Pull those users’ knobs, okay.
Stephen E Arnold, November 4, 2025
Social Credit Already Exists In The West…Just with Different Spins
November 4, 2025
China is a dystopian nightmare with its social credit system. Westerners believe they can breathe a sigh of relief because that doesn’t happen in their home countries. Oh, how wrong they are. Social credit systems are here, they’re just run by a capitalist system. The Nexus author Natalie Pang explores the idea in, “Your Phone Already Has Social Credit. We Just Lie About It.”
What exactly is social credit? It’s your digital reputation, a profile of your behavior captured by everything: Amazon, credit scores, Airbnb, Uber, etc. There isn’t any difference between the social credit systems in the West and China, except for one thing: transparency. China is 100% transparent that it rates people, while the West hides its rating behind many facades. China’s social credit system has been disbanded except for a few outliers. In the West, the impact such scoring has on lives is alarming:
“Your credit score doesn’t just determine loan eligibility; it affects where you can live, which jobs you can get, and how much you pay for car insurance. But traditional credit scoring is expanding rapidly. Some specialized lenders scan social media profiles as part of alternative credit assessments, particularly for borrowers with limited credit histories. Payment apps and financial services increasingly track spending patterns and transaction behaviors to build comprehensive risk profiles. The European Central Bank has asked some institutions to monitor social media chatter for early warnings of bank runs, though this is more about systemic risk than individual account decisions. Background check companies routinely analyze social media presence for character assessment. LinkedIn algorithmically manages your professional visibility based on engagement patterns, posting frequency, and network connections, rankings that recruiters increasingly rely on to filter candidates. Even dating has become a scoring system: apps use engagement rates and response patterns to determine who rises to the top of the queue and who gets buried.”
Another difference between China and the West is that these apps don’t talk to or affect each other. Amazon doesn’t impact your ride shares, and your dating app doesn’t impact your credit score. These data points can be described as proprietary data or as a violation of a user’s privacy, so these companies don’t share them. Another way of putting it: these companies don’t want to harm their bottom line.
Social credit systems are already affecting the West, but only in the realms of capitalism and social media. The bigger question to ask is what will happen if companies decide to share data for a profit? Then we’re screwed.
Whitney Grace, November 4, 2025
Is It Unfair to Blame AI for Layoffs? Sure
October 30, 2025
When AI exploded onto the scene, we were promised the tech would help workers, not replace them. Then that story began to shift, with companies revealing they do plan to slash expenses by substituting software for humans. But some are skeptical of this narrative, and for good reason. Techspot asks, “Is AI Really Behind Layoffs, or Just a Convenient Excuse for Companies?” Reporter Rob Thubron writes:
“Several large organizations, including Accenture, Salesforce, Klarna, Microsoft, and Duolingo, have said they are reducing staff numbers as AI helps streamline operations, reduce costs, and increase efficiency. But Fabian Stephany, Assistant Professor of AI & Work at the Oxford Internet Institute, told CNBC that companies are ‘scapegoating’ the technology.”
Stephany notes many companies are still trying to expel the extra humans they hired during the pandemic. Apparently, return-to-office mandates have not driven out as many workers as hoped. The write-up continues:
“Blaming AI for layoffs also has its advantages. Multibillion- and trillion-dollar companies can not only push the narrative that the changes must be made in order to stay competitive, but doing so also makes them appear more cutting-edge, tech-savvy, and efficient in the eyes of potential investors. Interestingly, a study by the Yale Budget Lab a few weeks ago showed there is little evidence that AI has displaced workers more severely than earlier innovations such as computers or the internet. Meanwhile, Goldman Sachs Research has estimated that AI could ultimately displace 6 to 7 percent of the US workforce, though it concluded the effect would likely be temporary.”
The write-up includes a graph Anthropic made in 2023 that compares gaps between actual and expected AI usage by occupation. A few fields overshot the expectation, most notably computer and mathematical jobs. Most, though, fell short. So are workers really losing their jobs to AI? Or is that just a high-tech scapegoat?
Cynthia Murrell, October 30, 2025
A Big Waste of Time: Talking about Time to Young People
October 29, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I will be 81 in a matter of days. In 1963, I had a professor at the third-rate institution I attended who required that the class read Sir Francis Bacon’s essay “Of Time.” Snappy stuff. I was 18 years old, and there was one thing I did not think about: time. I don’t recall worrying about it. I structured my life around what classes I had to attend, what assignments I had to do, when I worked at the root beer stand, and when I had to show up at some family function like a holiday. Time was anchored in immediacy. There was no past except the day before. There was no future except checking tasks off my mental checklist or the notecards for which I became famous. Yes, I still write down things to do on notecards.

An older person provides some advice to a young person about using time and taking risks. The young person listens and responds in an appropriate way for 2025 college graduates. Thanks, Venice.ai. Good enough.
That sporty guy Francis wrote:
“Men fear time, but time fears the pyramids.”
I know that this thought did not resonate for me in 1963, and to be frank, I am not sure it resonates with me. The pyramids exist but data about when they were constructed strikes me as fuzzy. I thought about this mismatch between youth, time, and the lack of knowledge about pyramid construction or similar matters when I read “Don’t Waste Your 20s Not Taking Big Risks: You Have It So Easy, and So Little Time.”
The time talk doesn’t work for young people. Time is measured in weird and idiosyncratic ways. The “amount” of time is experiential, contextual, and personal. The write up says:
You don’t appreciate how little time you have to easily go after it and how much harder it’s going to be later.
I am sorry. This does not compute.
The write up continues:
Each year you delay is costing you 10% of the easiest period in your life to take a big risk. So if you are in college or you’re in your 20s and you think that you might want to start a business, completely change your career, move to a new city, do something radical like that, you should do that as soon as humanly possible. Ignore the scared voice in your head. The downside is basically non-existent.
I view this statement as generally bad advice. An informed decision is important. The key word is “informed.” The meaning of “informed” depends on the individual. We are dealing with moving targets. An “informed” decision to a drug addict means one thing. Time to this individual is defined by narcotic need. An “informed” decision for a person who wants to do well in college means doing the work, trying to be organized, and obtaining information to achieve desired outcomes.
“Ignore” is important when one deals with life. “Ignore” is not important in the context of time. I am not sure what time is. I have zero interest in trying to defend Sir Francis’ pyramid time nor do I pay attention to the floundering physicists who argue about what time is.
For a young person today, life is like the world of any young person at any point in history. Telling that young person to not waste time is pointless. In fact, it is a waste of time.
The cited essay wants young people to do stuff, probably backpack in some remote country or start an AI company. In today’s environment, the experiential, contextual, and personal cues for “time” come from inputs unique to this point in history. Nevertheless, young people make what they can of their lives in the digital fish bowl.
Several observations:
- Decisions occur even if the person involved does not go through the weird notecard drill I did and still do. The reality is “stuff happens,” and then young people adapt in a way defined by their experiential, contextual, and personal space.
- Young people hear “time” and define it as a young person. That means most have no clue what time means in a philosophical or technical context. Give them an essay to read. Have them write 500 words. Forget it. That worked for me and it probably works for many young people if they can actually read Bacon’s essay without AI support.
- At any point in a human’s life, time is not viewed as part of a big picture. Those words about “using time wisely” tell me more about the person speaking them than valid inputs for another individual. Thanks, but I don’t think about time unless it is anchored in some way.
Net net: As the general environment in the US and the technical business sector seems less warm and fuzzy, making informed decisions works better than watching roses die. Risk must be assessed. If it is not, interesting things happen to people. But time as a big idea or a resource to be used in a way that fits into some grand life plan is something oddly positioned in a TikTok-type, amped-up Hollywood movie world. Making the best decision based on the information one has is a more useful way to mark off life intervals in my opinion. If your inputs come from Twitter, well, that may work for you. For me, not a chance.
Stephen E Arnold, October 29, 2025
Smart Software: The DNA and Its DORK Sequence
October 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:
A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.
My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.
Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.
Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” at either firm. A wizard will jump right on the input.
The write up adds:
Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.
Based on my experience with Silicon Valley type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence take precedence over other characteristics; for example, an ethical compass aligned with social norms.
Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.
The write up concludes with this gem:
The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”
Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.
Stephen E Arnold, October 22, 2025
AI Service Industry: Titan or Titanic?
October 6, 2025
Venture capitalists believe they have a new recipe for success: Buy up managed-services providers and replace most of the staff with AI agents. So far, it seems to be working. (For the VCs, of course, not the human workers.) However, asserts TechCrunch, “The AI Services Transformation May Be Harder than VCs Think.” Reporter Connie Loizos throws cold water on investors’ hopes:
“But early warning signs suggest this whole services-industry metamorphosis may be more complicated than VCs anticipate. A recent study by researchers at Stanford Social Media Lab and BetterUp Labs that surveyed 1,150 full-time employees across industries found that 40% of those employees are having to shoulder more work because of what the researchers call ‘workslop’ — AI-generated work that appears polished but lacks substance, creating more work (and headaches) for colleagues. The trend is taking a toll on the organizations. Employees involved in the survey say they’re spending an average of nearly two hours dealing with each instance of workslop, including to first decipher it, then decide whether or not to send it back, and oftentimes just to fix it themselves. Based on those participants’ estimates of time spent, along with their self-reported salaries, the authors of the survey estimate that workslop carries an invisible tax of $186 per month per person. ‘For an organization of 10,000 workers, given the estimated prevalence of workslop . . . this yields over $9 million per year in lost productivity,’ they write in a new Harvard Business Review article.”
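The quoted $9 million figure can be sanity checked with back-of-the-envelope arithmetic, assuming the $186 monthly “invisible tax” applies only to the roughly 40 percent of workers who report dealing with workslop; the published number presumably uses the study’s exact prevalence figure, so this only reproduces the ballpark.

```python
# Back-of-the-envelope check of the workslop figures quoted above.
workers = 10_000
prevalence = 0.40        # share of workers dealing with workslop (assumption)
monthly_tax = 186        # dollars per affected worker per month, per the study

annual_cost = workers * prevalence * monthly_tax * 12
print(f"~${annual_cost:,.0f} per year")   # about $8.9 million, i.e. roughly $9M
```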
Surprise: compounding baloney produces more baloney. If companies implement the plan as designed, “workslop” will expand even as the humans who might catch it are sacked. But if firms keep on enough people to fix AI mistakes, they will not realize the promised profits. In that case, what is the point of the whole endeavor? Rather than upending an entire industry for no reason, maybe we should just leave service jobs to the humans that need them.
Cynthia Murrell, October 6, 2025
Being Good: Irrelevant at This Time
September 29, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
I read an essay titled “Being Good Isn’t Enough.” The author seems sincere. He provides insight about how to combine knowledge to create greater knowledge value. These are not my terms. The jargon appears in “The Knowledge Value Revolution or a History of the Future” by Taichi Sakaiya. The book was published in Japan in 1985. I gave some talks shortly after the book was available. Sakaiya was one of the individuals whom I met after one of my lectures at the Osaka Institute of Technology. I recommend the book because it expands on the concepts touched upon in the cited essay.
“Being Good Isn’t Enough” states:
The biggest gains come from combining disciplines. There are four that show up everywhere: technical skill, product thinking, project execution, and people skills. And the more senior you get, the more you’re expected to contribute to each.
Sakaiya includes this Japanese proverb:
As an infant, he was a prodigy. As a student, he was brilliant. But after 20 years, he was just another young man.
“Being Good Isn’t Enough” walks through the idea of identifying “your weakest discipline” and then adds:
work on that.
Sound advice. However, in today’s business environment in the US, I do not think this suggestion is particularly helpful; to wit:
Find a mentor, be a mentor. Lead a project, propose one. Do the work, present it. Create spaces for others to do the same. Do whatever it takes to get better…. But all of this requires maybe the most important thing of all: agency. It’s more powerful than smarts or credentials or luck. And the best part is you can literally just choose to be high-agency. High-agency people make things happen. Low-agency people wait. And if you want to progress, you can’t wait.
I think the advice is unlikely to “work” because it calibrates the present world of work as if it were 1970. Today the path forward depends on:
- Political connections
- Friends who can make introductions
- Former colleagues who can provide a soft recommendation in order to avoid HR issues
- Influence either inherited from a parent or other family member or fame
- Credentials in the form of a degree or a letter of acceptance from an institution perceived by the lender or possible employer as credible.
A skill or blended skills are less relevant at this time.
The obvious problem is that a person looking for a job has to be more than a bundle of knowledge value. For most people, Sakaiya’s and “Being Good’s” assertions are unlikely to lead to what most people want from work: Fulfillment, reward, and stability.
Stephen E Arnold, September 29, 2025
Can Human Managers Keep Up with AI-Assisted Coders? Sure, Sure
September 26, 2025
AI may have sped up the process of coding, but it cannot make other parts of a business match its velocity. Business Insider notes, “Andrew Ng Says the Real Bottleneck in AI Startups Isn’t Coding—It’s Product Management.” The former Google Brain engineer and current Stanford professor shared his thoughts on a recent episode of the "No Priors" podcast. Writer Lee Chong Ming tells us:
“In the past, a prototype might take three weeks to develop, so waiting another week for user feedback wasn’t a big deal. But today, when a prototype can be built in a single day, ‘if you have to wait a week for user feedback, that’s really painful,’ Ng said. That mismatch is forcing teams to make faster product decisions — and Ng said his teams are ‘increasingly relying on gut.’ The best product managers bring ‘deep customer empathy,’ he said. It’s not enough to crunch data on user behavior. They need to form a mental model of the ideal customer. It’s the ability to ‘synthesize lots of signals to really put yourself in the other person’s shoes to then very rapidly make product decisions,’ he added.”
Experienced humans matter. Who knew? But Google, for one, is getting rid of managers. This Xoogler suggests managers are important. Is this the reason he is no longer at Google?
Cynthia Murrell, September 26, 2025
UAE: Will It Become U-AI?
September 23, 2025
Written by an unteachable dinobaby. Live with it.
The UAE is moving forward in smart software, not just crypto. “Industry Leading AI Reasoning for All” reports that the Institute of Foundation Models has delivered “industry leading AI reasoning for all.” The news item reports:
Built on six pillars of innovation, K2 Think represents a new class of reasoning model. It employs long chain-of-thought supervised fine-tuning to strengthen logical depth, followed by reinforcement learning with verifiable rewards to sharpen accuracy on hard problems. Agentic planning allows the model to decompose complex challenges before reasoning through them, while test-time scaling techniques further boost adaptability.
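The “verifiable rewards” phrase is the one concrete mechanism in that list. The idea, in a minimal and admittedly simplified sketch, is that the reward signal comes from a programmatic check of the model’s final answer rather than from a learned reward model; the answer format and extraction logic below are assumptions, not details from the K2 Think announcement.

```python
import re

def extract_final_answer(completion: str) -> str:
    # Assume the model ends its chain of thought with a line like "Answer: 42".
    match = re.search(r"Answer:\s*([^\n]+)", completion)
    return match.group(1).strip() if match else ""

def verifiable_reward(completion: str, ground_truth: str) -> float:
    # Binary reward: 1.0 if the extracted answer matches the known solution.
    # This scalar is what the reinforcement-learning step would optimize.
    return 1.0 if extract_final_answer(completion) == ground_truth.strip() else 0.0

print(verifiable_reward("Some long reasoning...\nAnswer: 42", "42"))  # 1.0
print(verifiable_reward("Some long reasoning...\nAnswer: 41", "42"))  # 0.0
```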
I am not sure what the six pillars of innovation are, particularly after looking at some of the UAE’s crypto plays, but there is more. Here’s another passage which suggests that Intel and Nvidia may not be in the k2think.ai technology road map:
K2 Think will soon be available on Cerebras’ wafer-scale, inference-optimized compute platform, enabling researchers and innovators worldwide to push the boundaries of reasoning performance at lightning-fast speed. With speculative decoding optimized for Cerebras hardware, K2 Think will achieve unprecedented throughput of 2,000 tokens per second, making it both one of the fastest and most efficient reasoning systems in existence.
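Speculative decoding is a real technique, not marketing gloss: a cheap draft model proposes several tokens, and the big model verifies them in one pass, accepting or correcting each proposal. Here is a toy, self-contained sketch of that draft-then-verify loop; the stand-in probability tables are assumptions and bear no relation to K2 Think’s actual models.

```python
# Toy sketch of speculative decoding (draft-then-verify) over a tiny vocabulary.
import random

VOCAB = 8          # toy vocabulary: token ids 0..7
DRAFT_LEN = 4      # how many tokens the cheap draft model proposes per step

def draft_probs(context):
    # Cheap, approximate model: slightly favors (last token + 1) mod VOCAB.
    last = context[-1] if context else 0
    p = [1.0] * VOCAB
    p[(last + 1) % VOCAB] += 3.0
    s = sum(p)
    return [x / s for x in p]

def target_probs(context):
    # Expensive, "accurate" model: favors the same pattern more sharply.
    last = context[-1] if context else 0
    p = [1.0] * VOCAB
    p[(last + 1) % VOCAB] += 7.0
    s = sum(p)
    return [x / s for x in p]

def sample(probs):
    return random.choices(range(VOCAB), weights=probs, k=1)[0]

def speculative_step(context):
    # 1) Draft model proposes DRAFT_LEN tokens autoregressively.
    draft_ctx = list(context)
    proposed = []
    for _ in range(DRAFT_LEN):
        tok = sample(draft_probs(draft_ctx))
        proposed.append(tok)
        draft_ctx.append(tok)
    # 2) Target model verifies each proposal; accept with prob min(1, p_t/p_d).
    accepted = []
    verify_ctx = list(context)
    for tok in proposed:
        p_t = target_probs(verify_ctx)
        p_d = draft_probs(verify_ctx)
        if random.random() < min(1.0, p_t[tok] / p_d[tok]):
            accepted.append(tok)
            verify_ctx.append(tok)
        else:
            # 3) On rejection, resample from the residual distribution
            #    max(0, p_target - p_draft), renormalized, then stop.
            residual = [max(0.0, t - d) for t, d in zip(p_t, p_d)]
            z = sum(residual)
            corrected = [r / z for r in residual] if z > 0 else p_t
            accepted.append(sample(corrected))
            break
    # (A full implementation also samples one bonus token from the target
    #  when every proposal is accepted; omitted here for brevity.)
    return accepted

if __name__ == "__main__":
    ctx = [0]
    for _ in range(6):
        ctx.extend(speculative_step(ctx))
    print("generated:", ctx)
```

The throughput win comes from the target model verifying a batch of proposals per pass instead of generating one token at a time; whether that adds up to 2,000 tokens per second is a function of the Cerebras hardware, not of this loop.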
If you want to kick its tires (tAIres?), the system is available at k2think.ai and on Hugging Face. Oh, the write up quotes two people with interesting names: Eric Xing and Peng Xiao.
Stephen E Arnold, September 23, 2025
Innovation Is Like Gerbil Breeding: It Is Tough to Produce a Panda
September 8, 2025
Just a dinobaby sharing observations. No AI involved. My apologies to those who rely on it for their wisdom, knowledge, and insights.
The problem of innovation is a tough one. I remember getting an assignment from a top dog at the consulting firm silly enough to employ me. The task was to chase down the Forbes Magazine list of companies ordered by how much they spend on innovation. I recall that the goal was to create an “estimate” or what would be a “model” today of what a company of X size should be spending on “innovation.”
Do that today for an outfit like OpenAI or one of the other US efforts to deliver big money via the next big thing and the result is easy to express; namely, every available penny is spent trying to create something new. Yep, spend the cash innovating. Think it, and the “it” becomes real. Build “it,” and the “it” draws users with cash.
A recent and somewhat long essay plopped into my “Read” file. The article is titled “We’ve Lost the Plot with Smartphones.” (The write up requires signing up and / or paying for access.)
The main idea of the essay is that smartphones, once heralded as revolutionary devices for communication and convenience, have evolved into tools that undermine our attention and well-being. I agree. However, innovation may not fix the problem. In my view, the fix may be an interesting effort, but as long as there are gizmos, the status quo will return.
The essay suggests that the innovation arc of devices like the toaster or the mobile phone is that they solve problems or add obvious convenience for a user otherwise unfamiliar with the device. As Steve Jobs suggested, users have to see and use a device; words alone don’t do the job. Pushing deck chairs around a technology yacht does not add much to the value of the device. This is the “me too” approach to innovation or what is often called “featuritis.”
Several observations:
- Innovations often arise without warning, no matter what process is used
- The US is supporting “old” businesses, and other countries are pushing applied AI, which may be a better bet
- Big money innovation usually surfs on months, years, or decades of previous work. Once that previous work is exhausted, the brutal odds of innovation success kick in. A few winners will emerge from many losers.
One of the oddities is the difficulty of identifying a significant or substantive innovation. That seems to be as difficult as setting up a system to generate innovation. In short, technology innovation reminds me of gerbils. Start with a few and quickly have lots of gerbils. The problem is that you have gerbils, and what you want is something different.
Good luck.
Stephen E Arnold, September 8, 2025

