Can Sergey Brin Manage? Maybe Not?
December 12, 2025
True Reason Sergey Used “Super Voting Power”
Yuchen Jin, the CTO and co-founder of Hyperbolic Labs, posted on X about a recent situation at Google. According to the post, Sergey Brin was disappointed in how Google was using Gemini. The AI model, in fact, wasn’t being used for coding, and Sergey wanted it to be used for that.
It created a big tiff. Sergey told Sundar, “I can’t deal with these people. You have to deal with this.” Sergey still owns a chunk of Google and has super voting power. Translation: he can do whatever he darn well pleases with his own company.
Yuchen Jin summed it up well:
“Big companies always build bureaucracy. Sergey (and Larry) still have super voting power, and he used it to cut through the BS. Suddenly Google is moving like a startup again. Their AI went from “way behind” to “easily #1” across domains in a year.”
Congratulations to Google for making a move that other Big Tech companies knew to make without the intervention of a founder.
Google would have eventually shifted to using Gemini for coding. Sergey’s influence only sped it up. The bigger question is whether this “tiff” indicates something else. Big companies do have bureaucracies, but if older workers have retired, then new blood is needed. The current new blood is Gen Z, and they are as despised as Millennials once were.
I think this means Sergey cannot manage young tech workers either. He had to turn to the “consultant” to make things happen. It’s quite the admission from a Big Tech leader.
Whitney Grace, December 12, 2025
Clippy, How Is Copilot? Oh, Too Bad
December 8, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
In most of my jobs, rewards landed on my desk when I sold something. When the firms silly enough to hire me rolled out a product, I cannot remember one that failed. The sales professionals were the early warning system for many of our consulting firm’s clients. Management provided money to a product manager or R&D whiz with a great idea. Then a product or new service idea emerged, often at a company event. Some were modest, but others featured bells and whistles. One such roll out had a big name person who was a former adviser to several presidents. These firms were either lucky or well managed. Product dogs, diseased ferrets, and outright losers were identified early and the efforts redirected.

Two sales professionals realize that their prospects resist Microsoft’s agentic pawing. Mortgages must be paid. Sneakers must be purchased. Food has to be put on the table. Sales are needed, not push backs. Thanks, Venice.ai. Good enough.
But my employers were in tune with what their existing customer base wanted. Climbing a tall tree and going out on a limb were not common occurrences. Even Apple, which resides in a peculiar type of commercial bubble, recognizes a product that does not sell. A recent example is the itsy bitsy, teeny weenie mobile thingy. Apple bounced back with the Granny Scarf designed to hold any mobile phone. The thin and light model is not killed; it’s just not everywhere like the old reliable orange iPhone.
Sales professionals talk to prospects and customers. If something is not selling, the sales people report, “Problemo, boss.”
In the companies which employed me, the sales professionals knew what was coming and could mention it in appropriate terms to those in the target market. This happened before the product or service was in production or available to clients. My employers (Halliburton, Booz, Allen, and a couple of others held in high esteem) had the R&D, the market signals, the early warning system for bad ideas, and the refinement or improvement mechanism working in a reliable way.
I read “Microsoft Drops AI Sales Targets in Half after Salespeople Miss Their Quotas.” The headline suggested three things to me instantly:
- The pre-sales early warning radar system did not exist or it was broken
- The sales professionals said in numbers, “Boss, this Copilot AI stuff is not selling.”
- Microsoft committed billions of dollars and significant, expensive professional staff time to something that prospects and customers do not rush to write checks for, use, or tell their friends about as the next big thing.
The write up says:
… one US Azure sales unit set quotas for salespeople to increase customer spending on a product called Foundry, which helps customers develop AI applications, by 50 percent. Less than a fifth of salespeople in that unit met their Foundry sales growth targets. In July, Microsoft lowered those targets to roughly 25 percent growth for the current fiscal year. In another US Azure unit, most salespeople failed to meet an earlier quota to double Foundry sales, and Microsoft cut their quotas to 50 percent for the current fiscal year. The sales figures suggest enterprises aren’t yet willing to pay premium prices for these AI agent tools. And Microsoft’s Copilot itself has faced a brand preference challenge: Earlier this year, Bloomberg reported that Microsoft salespeople were having trouble selling Copilot to enterprises because many employees prefer ChatGPT instead.
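To make the headline’s “in half” concrete, here is a quick worked example with hypothetical dollar figures (the article supplies percentages, not dollars): a unit that booked $10 million in Foundry spend last year needed $10M x 1.5 = $15M under the original 50 percent growth quota; the reduced 25 percent quota asks for $10M x 1.25 = $12.5M. In the unit told to double sales, the $20M bar shrank to $15M. The growth targets were halved; the absolute revenue bars fell by much less.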
Microsoft appears to have listened to the feedback. The adjustment, however, does not address the failure to implement the type of market probing process used by Halliburton and Booz, Allen. Microsoft instead went with the “think it and it will become real” approach. The thinking in this case is that software can perform human work roles in a way that is equivalent to or better than a human’s execution.
I may be a dinobaby, but I figured out quickly that smart software has been, for the last three years, a utility. It is not quite useless, but it is not sufficiently robust to do the work that I do. Other people are on the same page with me.
My takeaway from the lower quotas is that Microsoft should have a rethink. The OpenAI bet, the AI acquisitions, the death march to put software that makes mistakes into applications millions use in quite limited ways, and the crazy publicity output to sell Copilot are sending Microsoft leadership both audio and visual alarms.
Plus, OpenAI has copied Google’s weird Red Alert. Since Microsoft has skin in the game with OpenAI, perhaps Microsoft should open its eyes and check out the beacons and listen to the klaxons ringing in Softieland sales meetings and social media discussions about Microsoft AI? Just a thought. (That Telegram virtual AI data center service looks quite promising to me. Telegram’s management is avoiding the Clippy-type error. Telegram may fail, but that outfit is paying GPU providers in TONcoin, not actual fiat currency. The good news is that MSFT can make Azure AI compute available to Telegram and get paid in TONcoin. Sounds like a plan to me.)
Stephen E Arnold, December 8, 2025
Apple Misses the AI Boat Again
December 4, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Apple and Telegram have a characteristic in common. Neither recognized the AI boomlet that began in 2020 or so. Apple was thinking about Granny Scarfs that could hold an iPhone and working out ways to cope with its dependence on Chinese manufacturing. Telegram was struggling with the US legal system and trying to create a programming language that a mere human could use to code a distributed application.
Apple’s ship has sailed, and it may dock at Google’s Gemini private island or it could decide to purchase an isolated chunk of real estate and build its de-perplexing AI system at that location.

Thanks, MidJourney. Good enough.
I thought about missing a boat or a train. The reason? I read “Apple AI Chief John Giannandrea Retiring After Siri Delays.” I simply don’t know who has been responsible for Apple AI. Siri did not work when I looked at it on my wife’s iPhone many years ago. Apparently it doesn’t work today. Could that be a factor in the leadership changes at the Tim Apple outfit?
The write up states:
Giannandrea will serve as an advisor between now and 2026, with former Microsoft AI researcher Amar Subramanya set to take over as vice president of AI. Subramanya will report to Apple engineering chief Craig Federighi, and will lead Apple Foundation Models, ML research, and AI Safety and Evaluation. Subramanya was previously corporate vice president of AI at Microsoft, and before that, he spent 16 years at Google.
Apple will probably have a person who knows some people to call at Softie and Google headquarters. However, when will the next AI boat arrive? Apple excelled at announcing AI, but no boat arrived. Telegram has an excuse; its owner Pavel Durov has been embroiled in legal hassles and arm wrestling with the reality that developing complex applications for the Telegram platform is too difficult. One would have thought that Apple could have figured out a way to improve Siri, but it apparently was lost in a reality distortion field. Telegram couldn’t because Pavel Durov was in jail in Paris, then confined to the country, and had to report to the French judiciary like a truant school boy. Apple just failed.
The write up says:
Giannandrea’s departure comes after Apple’s major iOS 18 Siri failure. Apple introduced a smarter, “Apple Intelligence” version of Siri at WWDC 2024, and advertised the functionality when marketing the iPhone 16. In early 2025, Apple announced that it would not be able to release the promised version of Siri as planned, and updates were delayed until spring 2026. An exodus of Apple’s AI team followed as Apple scrambled to improve Siri and deliver on features like personal context, onscreen awareness, and improved app integration. Apple is now rumored to be partnering with Google for a more advanced version of Siri and other Apple Intelligence features that are set to come out next year.
My hunch is that grafting AI into the bizarro world of the iPhone and other Apple computing devices may be a challenge. Telegram’s solution is to not do hardware. Apple is now an outfit distinguishing itself by missing the boat. When does the next one arrive?
Stephen E Arnold, December 4, 2025
From the Ostrich Watch Desk: A Signal for Secure Messaging?
December 4, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
A dinobaby is not supposed to fraternize with ostriches. These two-toed birds can run. It may be time for those cyber security folks who say, “Signal is secure,” to run away from that broad statement. Perhaps something like “sort of secure” fits if the information presented by the “please, please, please, send us money” newspaper company is accurate. (Tip to the Guardian leadership: There are ways to generate revenue, some of which I shared in a meeting about a decade ago.)

Listening, verifying, and thinking critically are skills many professionals may want to apply to routine meetings about secure services. Thanks, Venice.ai. Good enough.
The write up from the “please, please, please, donate” outfit is “The FBI Spied on a Signal Group Chat of Immigration Activists, Records Reveal.” The subtitle makes clear that I have to mind the length of my quotes and emphasize that absolutely no one knows about this characteristic of super secret software developed by super quirky professionals working in the not-so-quirky US of A today.
The write up states:
The FBI spied on a private Signal group chat of immigrants’ rights activists who were organizing “courtwatch” efforts in New York City this spring, law enforcement records shared with the Guardian indicate.
How surprised is the Guardian? The article includes this statement, which I interpret as the Guardian’s way of saying, “You Yanks are violating privacy.” Judge for yourself:
Spencer Reynolds, a civil liberties advocate and former senior intelligence counsel with the DHS, said the FBI report was part of a pattern of the US government criminalizing free speech activities.
Several observations are warranted:
- To the cyber security vice president who told me, “Signal is secure”: the Guardian article might say, “Ooops.” When I explained it was not, he made a Three Stooges sound and cancel cultured me.
- When appropriate resources are focused on a system created by a human or a couple of humans, that system can be reverse engineered. Did you know Android users can drop content on an iPhone user’s device? What about those how-tos explaining the insecurity of certain locks on YouTube? Yeah. Security.
- Quirky and open source are not enough, and quirky will become less suitable as open source succumbs to corporatism and agentic software automates looking for tricks to gain access. Plus, those after-the-fact “fixes” are usually like putting on a raincoat after the storm. Security enhancement is like going to the closest big box store for some fast drying glue.
One final comment. I gave a lecture about secure messaging a couple of years ago for a US government outfit. One topic was a state-of-the-art messaging service. Although a close hold, a series of patents held by entities in Virginia disclosed some of the important parts of the system and explained, in a way lawyers found just wonderful, a novel way to avoid Signal-type problems. The technology is in use in some parts of the US government. Better methods for securing messages exist. Open source, cheap, and easy remains popular.
Will I reveal the name of this firm, provide the patent numbers in this blog, and present my diagram showing how the system works? Nope.
PS to the leadership of the Guardian. My recollection is that your colleagues did not know how to listen when I ran down several options for making money online. Your present path may lead to some tense moments at budget review time. Am I right?
Stephen E Arnold, December 4, 2025
Turkey Time: IT Projects Fail Like Pies and Cakes from Crazed Aunties
November 27, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
Today is Thanksgiving, and it is appropriate to consider the turkey approach to technology. The source of this idea comes from the IEEE.org online publication. The article explaining what I call “turkeyism” is “How IT Managers Fail Software Projects.” Because the write up is almost 4,000 words and far too long for reading during an American football game’s halftime break, I shall focus on a handful of points in the write up. I encourage you to read the entire article and, of course, sign up and subscribe. If you don’t, the begging-for-dollars pop up may motivate you to click away and lose the full wisdom of the IEEE write up. I want to point out that many IT managers are trained as electrical engineers or computer scientists who have had to endure the veritable wonderland of imaginary numbers for a semester or two. But increasingly IT managers can be MBAs or, in some frisky Silicon Valley type companies, recent high school graduates with a native ability to solve complex problems and manage those older than they are. Hey, that works, right?

Auntie knows how to manage the baking process. She practices excellent hygiene, but with age comes forgetfulness. Those cookies look yummy. Thanks, Venice.ai. No mom. But good enough with Auntie pawing the bird.
Answer: Actually no.
The cited IEEE article states:
Global IT spending has more than tripled in constant 2025 dollars since 2005, from US $1.7 trillion to $5.6 trillion, and continues to rise. Despite additional spending, software success rates have not markedly improved in the past two decades. The result is that the business and societal costs of failure continue to grow as software proliferates, permeating and interconnecting every aspect of our lives.
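A quick check of that multiple, using the constant-dollar figures as quoted: $5.6 trillion / $1.7 trillion ≈ 3.3, so “more than tripled” holds.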
Yep, and lots of those managers are members of IEEE or similar organizations. How about that jump from solving mathy problems to making software that works? It doesn’t seem to be working. Is it the universities, the on-the-job training, or the failure of continuing education? Not surprisingly, the write up doesn’t offer a solution.
What we have is a global, expensive problem. With more of everyday life dependent on “technology,” a failure can have some interesting consequences. Not only is it tough to get that new sweater delivered by Amazon, but downtime can kill a kid in a hospital when a system keels over. Dead is dead, isn’t it?
The write up says:
A report from the Consortium for Information & Software Quality (CISQ) estimated the annual cost of operational software failures in the United States in 2022 alone was $1.81 trillion, with another $260 billion spent on software-development failures. It is larger than the total U.S. defense budget for that year, $778 billion.
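For scale, the arithmetic as given: $1.81 trillion + $0.26 trillion ≈ $2.07 trillion in combined annual failure costs, and the operational figure alone is more than double the $778 billion defense budget cited.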
Chatter about the “cost” of AI tosses around even bigger numbers. Perhaps some of the AI pundits should consider the impact of AI failure in the context of IT failure. Frankly I am not confident about AI because of IT failure. The money is one thing, but given the evidence about the prevalence of failure, I am not ready to sing the JP Morgan tune about the sunny side of the street.
The write up adds:
Next to electrical infrastructure, with which IT is increasingly merging into a mutually codependent relationship, the failure of our computing systems is an existential threat to modern society. Frustratingly, the IT community stubbornly fails to learn from prior failures.
And what role does a professional organization play in this little but expensive drama? Are the arrows of accountability pointing at the social context in which the managers work? What about the education of these managers? What about the drive to efficiency? You know. Design the simplest possible solution. Yeah, these contextual components have created a high probability of failure. Will Auntie’s dessert give everyone food poisoning? Probably. Auntie thinks she has washed her hands and baked with sanitation in mind. Yep, great assumption because Auntie is old. Auntie Compute is going on 85 now. Have another cookie.
But here’s the killer statement in the write up:
Not much has worked with any consistency over the past 20 years.
This is like a line in a Jack Benny Show skit.
Several observations:
- The article identifies a global, systemic problem
- The existing mechanisms for training people to manage don’t work
- There is no solution.
Have a great Thanksgiving. Have another one of Auntie’s cookies. The two people who got food poisoning last year just had upset tummies. It will just get better. At least that’s what mom says.
Stephen E Arnold, November 27, 2025
Tim Apple, Granny Scarfs, and Snooping
November 24, 2025
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I spotted a write up in a source I usually ignore. I don’t know if the write up is 100 percent on the money. Let’s assume for the purpose of my dinobaby persona that it indeed is. The write up is “Apple to Pay $95 Million to Settle Suit Accusing Siri of Snoopy Eavesdropping.” Like Apple’s incessant pop ups about my not logging into Facetime, iMessage, and iCloud, Siri being in snoop mode is not surprising to me. Tim Apple, it seems, is winding down. The pace of innovation, in my opinion, is tortoise-like. I have nothing against turtle-like creatures, but a Granny Scarf for an iPhone? That’s innovation, almost as cutting edge as the candy colored orange iPhone. Stunning indeed.

Is Frederick the Great wearing an Apple Granny Scarf? Thanks, Venice.ai. Good enough.
What does the write up say about this $95 million sad smile?
Apple has agreed to pay $95 million to settle a lawsuit accusing the privacy-minded company of deploying its virtual assistant Siri to eavesdrop on people using its iPhone and other trendy devices. The proposed settlement filed Tuesday in an Oakland, California, federal court would resolve a 5-year-old lawsuit revolving around allegations that Apple surreptitiously activated Siri to record conversations through iPhones and other devices equipped with the virtual assistant for more than a decade.
Apple has managed to work the legal process for five years. Good work, legal eagles. Billable hours and legal moves generate income if my understanding is correct. Also, the notion of “surreptitiously” fascinates me. Why do the crazy screen nagging? Just activate what you want and remove the users’ options to disable the function. If you want to be surreptitious, the basic concept as I understand it is to operate so others don’t know what you are doing. Good try, but you failed to implement appropriate secretive operational methods. Better luck next time or just enable what you want and prevent users from turning off the data vacuum cleaner.
The write up notes:
Apple isn’t acknowledging any wrongdoing in the settlement, which still must be approved by U.S. District Judge Jeffrey White. Lawyers in the case have proposed scheduling a Feb. 14 court hearing in Oakland to review the terms.
I interpreted this passage to mean that the judge has to do something. I assume that lawyers will do something. Whoever brought the litigation will do something. It strikes me that Apple will not be writing a check any time soon, nor will the settlement change how Tim Apple has set up that outstanding Apple entity to harvest money, data, and good vibes.
I have several questions:
- Will Apple offer a complimentary Granny Scarf to each of its attorneys working this case?
- Will Apple’s methods of harvesting data be revealed in a white paper written by either [a] Apple, [b] an unhappy Apple employee, or [c] a researcher laboring in the vineyards of Stanford University or San Jose State?
- Will regulatory authorities and the US judicial folks take steps to curtail the “we do what we want” approach to privacy and security?
I have answers for each of these questions. Here we go:
- No. Granny Scarfs are sold out
- No. No one wants to be hassled endlessly by Apple’s legions of legal eagles
- No. As the recent Meta decision about WhatsApp makes clear, green light, tech bros. Move fast, break things. Just do it.
Stephen E Arnold, November 24, 2025
Collaboration: Why Ask? Just Do. (Great Advice, Job Seeker)
November 24, 2025
Another short essay from a real and still-alive dinobaby. If you see an image, we used AI. The dinobaby is not an artist like Grandma Moses.
I am too old to have an opinion about collaboration in 2025. I am a slacker, not a user of Slack. I don’t “GoTo” meetings; I stay in my underground office. I don’t “chat” on Facebook or smart software. I am, therefore, qualified to comment on the essay “Collaboration Sucks.” The main point of the essay is that collaboration is not a positive. (I know that this person has not worked at a blue chip consulting firm. If you don’t collaborate, you better have telepathy. Otherwise, you will screw up in a spectacular fashion with the client and the lucky colleagues who get to write about your performance or just drop hints to a Carpetland dweller.)
The essay states:
We aim to hire people who are great at their jobs and get out of their way. No deadlines, minimal coordination, and no managers telling you what to do. In return, we ask for extraordinarily high ownership and the ability to get a lot done by yourself. Marketers ship code, salespeople answer technical questions without backup, and product engineers work across the stack.
To me, this sounds like a Silicon Valley commandment along with “Go fast and break things” or “It’s easier to ask forgiveness than it is to get permission.” Allegedly Rear Admiral Grace Hopper offered this observation. However, Admiral Craig Hosmer told me that her attitude did more harm to females in the US Navy’s technical services than she thought. Which admiral does one believe? I believe what Admiral Hosmer told me when I provided technical support to his little Joint Committee on Atomic Energy many years ago.

Thanks, Venice.ai. Good enough.
The idea that a team of really smart and independent specialists can do great things is what has made respected managers familiar with legal processes around the world. I think Google just received an opportunity to learn from its $600 million fine levied by Germany. Moving fast, Google made some interesting decisions about German price comparison sites. I won’t raise again the specter of the AI bubble and the leadership methods of Sam AI-Man. Everything is working out just swell, right?
The write up presents seven reasons why collaboration sucks. Most of the reasons revolve around flaws in a person. I urge you to read the seven variations on the theme of insecurity, impostor syndrome, and cluelessness.
My view is that collaboration, like any business process, depends on the context of the task and the work itself. In some organizations, employees can do almost anything because middle managers (if they are still present) have little idea about what’s going on with workers who are in an office half a world away, down the hall but playing Foosball, pecking away at a laptop in a small, overpriced apartment in Plastic Fantastic (aka San Mateo), or working from a van and hoping the Starlink is up.
I like the idea of crushing collaboration. I urge those who want to practice this skill to join a big time law firm, a blue chip consulting firm, or engage in the work underway at a pharmaceutical research lab. I love the tips the author trots out; specifically:
- Just ship the code, product, whatever. Ignore inputs like Slack messages.
- Tell the boss or leader, you are the “driver.” (When I worked for the Admiral, I would suggest that this approach was not appropriate for the context of that professional, the work related to nuclear weapons, or a way to win his love, affection, and respect. I would urge the author to track down a four star and give his method a whirl. Let me know how that works out.)
- Tell people what you need. That’s a great idea if one has power and influence. If not, it is probably important to let ChatGPT word an email for you.
- Don’t give anyone feedback until the code or product has shipped. This is a career builder in some organizations. It is quite relevant when a massive penalty ensues because an individual withheld knowledge and thus made the problem worse. (There is something called “discovery.” And, guess what, those Slack and email messages can be potent.)
- Listen to inputs but just do what you want. (In my 60 year work career, I am not sure this has ever been good advice. In an AI outfit, it’s probably gold for someone. Isn’t there something called Fool’s Gold?)
Plus, there is one item on the action list for crushing collaboration I did not understand. Maybe you can divine its meaning? “If you are a team lead, or leader of leads, who has been asked for feedback, consider being more you can just do stuff.”
Several observations:
- I am glad I am not working in Sillycon Valley any longer. I loved the commute from Berkeley each day, but the craziness in play today would not match my context. Translation: I have had enough of destructive business methods. Find someone else to do your work.
- The suggestions for killing collaboration may kill one’s career except in toxic companies. (Notice that I did not identify AI-centric outfits. How politic of me.)
- The management failure implicit in this approach to colleagues, suggestions, and striving for quality is obvious to me. My fear is that some young professionals may see this “collaboration sucks” approach and fail to recognize the issues it creates.
Net net: When you hire, I suggest you match the individual to the context and the expertise required to the job. Short cuts contribute to the high failure rate of start ups and the dead end careers some promising workers create for themselves.
Stephen E Arnold, November 24, 2025
If You Want to Be Performant, Do AI or Try to Do AI
November 6, 2025
For firms that have invested heavily in AI only to be met with disappointment, three tech executives offer some quality spin. Fortune reports, “Experts Say the High Failure Rate in AI Adoption Isn’t a Bug, but a Feature.” The leaders expressed this interesting perspective at Fortune’s recent Most Powerful Women Summit. Dave Smith writes:
“The panel discussion, titled ‘Working It Out: How AI Is Transforming the Office,’ tackled head-on a widely circulated MIT study suggesting that approximately 95% of enterprise AI pilots fail to pay off. The statistic has fueled doubts about whether AI can deliver on its promises, but the three panelists—Amy Coleman, executive vice president and chief people officer at Microsoft; Karin Klein, founding partner at Bloomberg Beta; and Jessica Wu, cofounder and CEO of Sola—pushed back forcefully on the narrative that failure signals fundamental problems with the technology. ‘We’re in the early innings,’ Klein said. ‘Of course, there’s going to be a ton of experiments that don’t work. But, like, has anybody ever started to ride a bike on the first try? No. We get up, we dust ourselves off, we keep experimenting, and somehow we figure it out. And it’s the same thing with AI.’”
Interesting analogy. Ideally kiddos learn to ride on a cul-de-sac with supervision, not set loose on the highway. Shouldn’t organizations do their AI experimentation before making huge investments? Or before, say, basing high-stakes decisions in medicine, law enforcement, social work, or mortgage approvals on AI tech? Ethical experimentation calls for parameters, after all. Have those been trampled in the race to adopt AI?
Cynthia Murrell, November 6, 2025
Parents and Screen Time for Their Progeny: A Losing Battle? Yep
October 22, 2025
Sometimes I am glad my child-rearing days are well behind me. With technology a growing part of childhood education and leisure, how do parents stay on top of it all? For over 40%, not as well as they would like. The Pew Research Center examined “How Parents Manage Screen Time for Kids.” The organization surveyed US parents of kids 12 and under about the use of tablets, smartphones, smartwatches, gaming devices, and computers in their daily lives. Some highlights include:
“Tablets and smartphones are common – TV even more so.
[a] Nine-in-ten parents of kids ages 12 and younger say their child ever watches TV, 68% say they use a tablet and 61% say they use a smartphone.
[b] Half say their child uses gaming devices. About four-in-ten say they use desktops or laptops.
AI is part of the mix.
[c] About one-in-ten parents say their 5- to 12-year-old ever uses artificial intelligence chatbots like ChatGPT or Gemini.
[d] Roughly four-in-ten parents with a kid 12 or younger say their child uses a voice assistant like Siri or Alexa. And 11% say their child uses a smartwatch.
Screens start young.
[e] Some of the biggest debates around screen time center on the question: How young is too young?
[f] It’s not just older kids on screens: Vast majorities of parents say their kids ever watch TV – including 82% who say so about a child under 2.
[g] Smartphone use also starts young for some, but how common this is varies by age. About three-quarters of parents say their 11- or 12-year-old ever uses one. A slightly smaller share, roughly two-thirds, say their child age 8 to 10 does so. Majorities say so for kids ages 5 to 7 and ages 2 to 4.
[h] And fewer – but still about four-in-ten – say their child under 2 ever uses or interacts with one.”
YouTube is a big part of kids’ lives, presumably because it is free and provides a “contained environment for kids.” Despite this show of a “child-safe” platform, many have voiced concerns about both child-targeted ads and questionable content. TikTok and other social media are also represented, of course, though a whopping 80% of parents believe those platforms do more harm than good for children.
Parents cite several reasons they allow kids to access screens. Most do so for entertainment and learning. For children under five, keeping them calm is also a motivation. Those who have provided kids with their own phones overwhelmingly did so for ease of contact. On the other hand, those who do not allow smartphones cite safety, developmental concerns, and screen time limits. Their most common reason, though, is concern about inappropriate content. (See this NPR article for a more in-depth discussion of how and why to protect kids from seeing porn online, including ways porn is more harmful than it used to be. Also, your router is your first line of defense.)
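A minimal sketch of that router defense (my illustration, not something the Pew write-up or the NPR piece prescribes): most home routers let you set the upstream DNS servers, and pointing them at a family-filtering resolver screens adult content for every device on the network. For example:

Primary DNS: 1.1.1.3 (Cloudflare for Families, blocks malware and adult content)
Secondary DNS: 208.67.222.123 (OpenDNS FamilyShield)

A determined teen can bypass DNS filtering with a VPN or a per-device DNS change, so treat it as a first line of defense, not the whole wall.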
It seems parents are not blind to the potential harms of technology. Almost all say managing screen time is a priority, though for most it is not in the top three. See the write-up for more details, including some handy graphs. Bottom line: Parents are fighting a losing battle in many US households.
Cynthia Murrell, October 22, 2025
GenX, GenY, and Probably GenAI: Hopeless Is Not a Positive
October 13, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Generation Z is the first generation in a long time that is worse off than their predecessors. Millennials have their own problems too, because they came of age in a giant recession that could have been avoided. Millennials might have been teased about their lack of work ethic, but Generation Z is much worse off. The prior generations had some problem-solving skills; this younger cohort (not all of them) lacks the ability to even attempt to solve their problems.
Fortune embodied the mantra of the current generation in the article: “Suzy Welch Says Gen Z and Millennials Are Burnt Out Because Older Generations Worked Just As Hard, But They ‘Had Hope.’” Suzy Welch holds an MBA, served as a management consultant, and was the editor in chief of the Harvard Business Review. She makes the acute observation that younger generations are working the same demanding schedules as prior generations, but they lack hope that hard work will lead to meaningful advancement. Young workers of today are burnt out:
“The sense of powerlessness—to push back against climate change, to grapple with effects of the political environment like diminished public health and gun violence, and most notably to make enough money to support lifestyles, family, housing, and a future—has led to an erosion of institutional trust. Unlike baby boomers who embraced existing institutions to get rich and live a comfortable life, the younger generations do not feel that institutions—which are perceived as cumbersome, hierarchical, and a source of inequality and discrimination—can improve their situation. When combined with the economic realities Welch identified, where hard work no longer guarantees advancement, this helps explain why more than 50% of young people fear they will be poorer than their parents during their lifetime, according to Leger’s annual Youth Study.”
Okay. The older generations had hope while the younger ones are hopeless. Maybe if there were a decrease in inflation and a rise in wages, the younger people wouldn’t be so morbid. Fire up the mobile. Grab a coffee. Doomscroll. Life will work out.
Whitney Grace, October 13, 2025

