Apple and Google Relationship: Starting to Fray?

May 8, 2025

No AI, just the dinobaby expressing his opinions to Zellenials.

I spotted a reference to an Apple manager going out on a limb of the old Granny Smith tree. At the end of the limb, the Apple guru allegedly suggested that the Google search ain’t what it used to be. Whether true or not, the Google pays Apple lots of money to be the default (and formerly wonderful) Web search system for the iPhone and Safari “experience.”

That assertion of decline touched a nerve at the Google. I noted this statement in the Google blog. I am not sure which one because Google has many pages of smarmy talk. I am a dinobaby and easily confused. Here’s what the Google document with the SEO-friendly title “Here’s Our Statement on This Morning’s Press Reports about Search Traffic” says:

We continue to see overall query growth in Search. That includes an increase in total queries coming from Apple’s devices and platforms. More generally, as we enhance Search with new features, people are seeing that Google Search is more useful for more of their queries — and they’re accessing it for new things and in new ways, whether from browsers or the Google app, using their voice or Google Lens. We’re excited to continue this innovation and look forward to sharing more at Google I/O.

Several observations:

  1. I love the royal “we”. I think that the Googlers who are nervous about search include the cast of the Sundar & Prabhakar Comedy Act. Search means ads. Ads mean money. Money means Wall Street. Therefore, a decline in search makes the Wall Street types jumpy, twitchy, and grumpy. Do not suggest traffic declines when controlling the costs of the search plumbing is becoming quite interesting for the Googley bean counters.
  2. Apple device users are searching Google a lot. I believe it. Monopolies like to have captives who don’t know that there are now alternatives to the somewhat uninspiring version of Jon Kleinberg’s CLEVER inventions spiced with some Fancy Dan weighting. These “weights” are really useful for boosting, I believe.
  3. The leap to user satisfaction with Google search is unsupported by audited data. Those happy faces don’t convey why millions of people are using ChatGPT or why people complain that Google search results are mostly advertising. Oh, well, when one is a monopoly controlling what’s presented to users within the context of big-spending advertisers, reality is what the company chooses to present.
  4. The Google is excited about its convention. Will it be similar to the old network marketing conventions or more like the cheerleading at Telegram’s Gateway Conference? It doesn’t matter. Google is excited.

Net net: The alleged Apple remark goosed the Google to make “our statement.” Outstanding defensive tone and posture. Will the pair seek counseling?

Stephen E Arnold, May 8, 2025

Apple and Meta: Virtual Automatic Teller Machines for the EU

April 29, 2025

No AI, just a dinobaby watching the world respond to the tech bros.

I spotted this story in USA Today. You remember that newspaper, of course. The story “Apple Fined $570 Million and Meta $228 Million for Breaching European Union Law” reports:

Apple was fined 500 million euros ($570 million) on Wednesday and Meta 200 million euros, as European Union antitrust regulators handed out the first sanctions under landmark legislation aimed at curbing the power of Big Tech.

I have observed that to many regulators the brands Apple and Meta (Facebook) are converted to the sound of ka-ching, the onomatopoeia for an old-fashioned cash register ringing up a sale. The modern metaphor might be an automatic teller machine emitting beeps and honks. That works. Punch the Apple and Meta logos and bonk, beep, out come millions of euros. Bonk, beep.

The law which allows the behavior of what some Europeans view as “tech bros” to be converted first to a legal process and then to cash is the Digital Markets Act. The idea is that certain technology-centric outfits based in the US operate without much regard for the rules, regulations, and laws of actual nation-states and their governing entities. I mean, who pays attention to what the European Union says? Certainly not a geek à la sauce californienne.

The companies are likely to interpret these fines as some sort of deus ex machina, delivered by a third-rate vengeful god in a TikTok-type of video. Perhaps? But the legal process identified some actions by the fined American companies as illegal. Examples range from Apple preventing App Store users from certain behaviors to Meta’s reluctance to conform to some privacy requirements. I am certainly not a lawyer, nor am I involved with either of the American companies. However, I can make several observations from my dinobaby point of view, of course:

  1. The ka-ching / bonk beep incentive is strong. Money talks in the US and elsewhere. It is not surprising that the fines are becoming larger with each go-round. How does one stop the cost creep? One thought is to change the behavior of the companies. Sorry, EU, that is not going to happen.
  2. The interpretation of the penalty as a reaction against America is definitely a factor. For those who have not lived and worked in other countries, the anti-American sentiment is not understood. I learned this when people painted slurs on the walls of our home in Campinas, Brazil. I was about 13, and the anger extended beyond black paint on our pristine white, eight-foot-high walls with glass embedded at the top of them. Inviting, right?
  3. The perception that a company is more powerful than a mere government entity has been growing as the concentration of eyeballs, money, and talented people has increased at certain firms. Once the regulators have worked through the others in this category, attention will turn to the second tier of companies. I won’t identify any entities but the increased scrutiny of Cloudflare by French authorities is a glimpse of what might be coming down the information highway.

Net net: Ka-ching, ka-ching, and ka-ching. Beep, bong, beep, bong.

Stephen E Arnold, April 29, 2025

Banks and Security? Absolutely

April 28, 2025

The second-largest US bank has admitted it failed to recover documents lost to a recent data breach. The Daily Hodl reports, “Bank of America Discloses Data Breach After Customers’ Documents Disappear, Says Names, Addresses, Account Information and Social Security Numbers Affected.” Writer Mark Emem tells us:

“Bank of America says efforts to locate sensitive documents containing personal information on an undisclosed number of customers have failed. The North Carolina-based bank says it is unable to recover the documents, which were lost in transit and ‘resulted in the disclosure’ of personal information. [The bank’s notice states,] ‘According to our records, the information involved in this incident was related to your savings bonds and included your first and last name, address, phone number, Social Security number, and account number…We understand how upsetting this can be and sincerely apologize for this incident and any concerns or inconvenience it may cause. We are notifying you so we can work together to protect your personal and account information.’”

Banks are forthcoming about breaches, and bad actors know there is money in them. It is no surprise Bank of America faces a challenge. The succinct write-up notes the bank’s pledge to notify affected customers of any suspicious activity on their accounts. It is also offering them a two-year membership to an identity theft protection service. We suggest any Bank of America customers go ahead and change their passwords as a precaution. Now. We will wait.

Cynthia Murrell, April 28, 2025

Geocoding Price Data

April 28, 2025

No AI, just a dinobaby watching the world respond to the tech bros.

Some data must be processed to add geocodes to the items. Typically the geocode is a latitude and longitude coordinate. Some specialized services add additional codes to facilitate height and depth measurements. However, a geocode is not just what are generally called “lat and long coordinates.” Here’s a selected list of some of the items which may be included in a for-fee service:

  • An address
  • A place identifier
  • Boundaries like a neighborhood, county, etc.
  • Time zone
  • Points-of-interest data.

For organizations interested in adding geocodes to their information or data, pricing of commercial services becomes an important factor.
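To make the comparison concrete, here is a minimal Python sketch of a geocoding lookup. It uses the open source geopy package and OpenStreetMap’s free Nominatim service (my choice for illustration; the for-fee vendors discussed in the article expose similar calls, each with its own keys, quotas, and terms of use):

```python
# Minimal geocoding sketch. Assumes: pip install geopy
# Nominatim is OpenStreetMap's free, rate-limited service; commercial APIs
# from Google, Azure Maps, TomTom, etc. return comparable fields for a fee.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="geocode-pricing-demo")  # identify your app

# Forward geocoding: an address string becomes coordinates plus metadata
location = geolocator.geocode("1600 Pennsylvania Ave NW, Washington, DC")
if location:
    print(location.latitude, location.longitude)  # the "lat and long" pair
    print(location.address)                       # a normalized address
    print(location.raw.get("place_id"))           # a place identifier

# Reverse geocoding: coordinates become an address and boundary details
spot = geolocator.reverse((38.8977, -77.0365))
if spot:
    print(spot.address)
```

The free tier is fine for experiments; the rate limits and attribution requirements are what push production users toward the paid services compared in the article cited below.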

I want to suggest that you navigate to “Geocoding APIs Compared: Pricing, Free Tiers & Terms of Use.” This article was assembled in 2023. The fees presented are out of date. However, as you work through the article, you will gather useful information about vendors such as Google, MSFT Azure, and TomTom, among others.

One of the question-answering large language models can be tapped to provide pricing information that is more recent.

Stephen E Arnold, April 28, 2025

Microsoft and Its Modern Management Method: Waffling

April 23, 2025

No AI, just the dinobaby himself.

The Harvard Business School (which I assume will remain open for “business”) has not directed its case writers to focus on Microsoft’s modern management method. To me, changing direction is not a pivot; it is a variant of waffling. “Waffling” means saying one thing like “We love OpenAI.” Then hiring people who don’t love OpenAI and cutting deals with other AI outfits. The whipped cream on the waffle is killing off investments in data centers.

If you are not following this, think of the old song “The first time is the last time,” and you might get a sense of the confusion that results from changes in strategic and tactical direction. You may find this GenX, Y and Z approach just fine. I think it is a hoot.

PC Gamer, definitely not the Harvard Business Review, tackles one example of Microsoft’s waffling in “Microsoft Pulls Out of Two Big Data Centre Deals Because It Reportedly Doesn’t Want to Support More OpenAI Training Workloads.”

The write up says:

Microsoft has pulled out of deals to lease its data centres for additional training of OpenAI’s language model ChatGPT. This news seems surprising given the perceived popularity of the model, but the field of AI technology is a contentious one, for a lot of good reasons. The combination of high running cost, relatively low returns, and increasing competition—plus working on it’s own sickening AI-made Quake 2 demo—have proven enough reason for Microsoft to bow out of two gigawatt worth of projects across the US and Europe.

I love the scholarly “sickening.” Listen up, HBR editors. That’s a management term for 2025.

The article adds:

Microsoft, as well as its investors, have witnessed this relatively slow payoff alongside the rise of competitor models such as China’s Deepseek.

Yep, “payoff.” The Harvard Business School’s professors are probably not familiar with the concept of a payoff.

The news report points out that Microsoft is definitely, 100 percent going to spend $80 billion on infrastructure in 2025. With eight months left in the year, the Softies have to get in gear. The Google is spending as well. The other big time high tech AI juggernauts are also spending.

Will these investments pay off? Sure. Accountants and chief financial officers learn how to perform number magic. Guess where? Schools like the HBS. Don’t waffle. Go to class. Learn and then implement big time waffling.

Stephen E Arnold, April 23, 2025

AI and Movies: Better and Cheaper!

April 21, 2025

Believe it or not, no smart software. Just a dumb and skeptical dinobaby.

I am not a movie oriented dinobaby. I do see occasional stories about the motion picture industry. My knowledge is shallow, but several things seem to be stuck in my mind:

  1. Today’s movies are not too good
  2. Today’s big budget films are recycles of sequels, pre-quels, and less than equals
  3. Today’s blockbusters are expensive.

I did a project for a small-time B-movie fellow. I have even been to an LA party held in a mansion in La Jolla. I sat in the corner in my brown suit and waited until I could make my escape.

End of Hollywood knowledge.

I read “Ted Sarandos Responds To James Cameron’s Vision Of AI Making Movies Cheaper: ‘There’s An Even Bigger Opportunity To Make Movies 10% Better.’” No, I really did read the article. I came away confused. Most of my pre-retirement work involved projects whose goal was to make a lot of money. The idea was to be clever, do a minimum of “real” work, and then fix up the problems when people complained. That is the magic formula for some Silicon Valley and high-technology outfits located outside of the Plastic Fantastic World.

This article pits better versus cheaper. I learned:

Citing recent comments by James Cameron, Netflix Co-CEO Ted Sarandos said he hopes AI can make films “10% better,” not just “50% cheaper.”

Well, there you go. Better and cheaper. Is that the winning formula for creative work? The write up quotes Ted Sarandos (a movie expert, I assume) as saying:

Today, you can use these AI-powered tools to enable smaller-budget projects to have access to big VFX on screen.

From my point of view, “better” means more VFX, which is, I assume, movie talk for visual effects. These are the everyday things I see at my local grocery store. There are superheroes stopping crimes in progress. There are giant alien creatures shooting energy beams at military personnel. There are machines that have a great voice that some AI experts found particularly enchanting.

The cheaper means that the individuals who sit in front of computer screens fooling around with Blackmagic’s Fusion and the super-wonderful Adobe software will be able to let smart software do some of the work. If 100 people work on a big budget film’s VFX and smart software can do the work cheaper, the question arises, “Do we need these 100 people?” Based on my training, the answer is, “Nope. Let them find their future elsewhere.”

The article sidesteps two important questions: Question 1. What does better mean? Question 2. What does cheaper mean?

Better is subjective. Cheaper is a victim of scope creep. Big jobs don’t get cheaper. Big jobs get more expensive.

What smart software will do to the motion picture industry is hasten its “re-invention.”

The new video stars are those who attract eyeballs on TikTok- and YouTube-type platforms. The traditional motion picture industry which created yesterday’s stars or “influencers” is long gone. AI is going to do three things:

  1. Replace skilled technicians with software
  2. Allow today’s “influencers” to become the next Clark Gable and Marilyn Monroe (How did she die?)
  3. Reduce the barrier for innovations that do not come from recycling Superman-type pre-quels, sequels, and less than equals.

To sum up, both of these movie experts are right and wrong. I suppose both can be reskilled. Does Mr. Beast offer a for-fee class on video innovation which includes cheaper production and better outputs?

Stephen E Arnold, April 21, 2025

AI: Job Harvesting

April 9, 2025

It is a question that keeps many of us up at night. Commonplace ponders, "Will AI Automate Away Your Job?" The answer: Probably, sooner or later. The when depends on the job. Some workers may be lucky enough to reach retirement age before that happens. Writer Jason Hausenloy explains:

"The key idea where the American worker is concerned is that your job is as automatable as the smallest, fully self-contained task is. For example, call center jobs might be (and are!) very vulnerable to automation, as they consist of a day of 10- to 20-minute or so tasks stacked back-to-back. Ditto for many forms of many types of freelancer services, or paralegals drafting contracts, or journalists rewriting articles. Compare this to a CEO who, even in a day broken up into similar 30-minute activities—a meeting, a decision, a public appearance—each required years of experiential context that a machine can’t yet simply replicate. … This pattern repeats across industries: the shorter the time horizon of your core tasks, the greater your automation risk."

See the post for a more detailed example that compares the jobs of a technical support specialist and an IT systems architect.

Naturally, other factors complicate the matter. For example, Hausenloy notes, blue-collar jobs may be safer longer because physical robots are more complex to program than information software. Also, the more data there is on how to do a job, the better equipped algorithms are to mimic it. That is one reason many companies implement tracking software. Yes, it allows them to micromanage workers. It also gathers the data needed to teach an LLM how to do the job. With every keystroke and mouse click, many workers are actively training their replacements.

Ironically, it seems those responsible for unleashing AI on the world may be some of the most replaceable. Schadenfreude, anyone? The article notes:

"The most vulnerable jobs, then, are not those traditionally thought of as threatened by automation—like manufacturing workers or service staff—but the ‘knowledge workers’ once thought to be automation-proof. And most vulnerable of all? The same Silicon Valley engineers and programmers who are building these AI systems. Software engineers whose jobs are based on writing code as discrete, well-documented tasks (often following standardized updates to a central directory) are essentially creating the perfect training data for AI systems to replace them."

In a section titled "Rethinking Work," Hausenloy waxes philosophical on a world in which all of humanity has been fired. Is a universal basic income a viable option? What, besides income, do humans get out of their careers? In what new ways will we address those needs? See the write-up for those thought exercises. Meanwhile, if you do want to remain employed as long as possible, try to make your job depend less on simple, repetitive tasks and more on human connection, experience, and judgment. With luck, you may just reach retirement before AI renders you obsolete.

Cynthia Murrell, April 9, 2025

Oh, Oh, a Technological Insight: Unstable, Degrading, Non-Reversible.

April 9, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

“Building a House of Cards” has a subtitle which echoes other statements of “Oh, oh, this is not good”:

Beneath the glossy promises of artificial intelligence lies a ticking time bomb — and it’s not the one you’re expecting

Yep, another person who seems younger than I has realized that flows of digital information erode not just social structures but other functions as well.

The author, who publishes in Mr. Plan B, states:

The real crisis isn’t Skynet-style robot overlords. It’s the quiet, systematic automation of human bias at scale.

The observation is excellent. The bias comes from engineers and coders who set thresholds, orchestrate algorithmic behaviors, and use available data. The human bias is woven into the systems people use, believe, and depend upon.

The essay asserts:

We’re not coding intelligence — we’re fossilizing prejudice.

That, in my opinion, is a good line.

The author, however, runs into a bit of a problem. The idea of a developers’ manifesto is interesting but flawed. Most devs, as some term this group, like creating stuff and solving problems. That’s the kick. Most of the devs with whom I have worked laugh when I tell them I majored in medieval religious poetry. One, a friend of mine, said, “I paid someone to write my freshman essay, and I never took any classes other than math and science.”

I like that: Ignorance and a good laugh at how I spent my college years. The one saving grace is that I got paid to help a professor index Latin sermons using the university’s one computer to output the word lists and microfilm locators. Hey, in 1962, this was voodoo.

Those who craft the systems are not compensated to think about whether Latin sermons were original or just passed around when a visiting monk exchanged some fair copies for a snort of monastery wine and a bit of roast pig. Let me tell you that most of those sermons were tediously similar and raised such thorny problems as the originality of the “author.”

The essay concludes with a factoid:

25 years in tech taught me one thing: Every “revolutionary” technology eventually faces its reckoning. AI’s is coming.

I am not sure that those engaged in the noble art and craft of engineering “smart” software accept, relate to, or care about the validity of the author’s statement.

The good news is that the essay’s author now understands that flows of digital information do not construct. The bits zipping around erode, just like the glass beads or corn cob abrasive in a body shop’s media blaster aimed at a rusted automobile frame.

The body shop “restores” the rusted part until it is as good as new. Even better, some mechanics say.

As long as it is “good enough,” the customer is happy. But those in the know realize that the frame will someday be unable to support the stress placed upon it.

See. Philosophy from a mechanical process. But the meaning speaks to a car nut. One may have to give up or start over.

Stephen E Arnold, April 9, 2025

Programmers? Just the Top Code Wizards Needed. Sorry.

April 8, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

Microsoft has some interesting ideas about smart software and writing “code.” To sum it up, consider another profession.

“Microsoft CTO Predicts AI Will Generate 95% of Code by 2030” reports:

Developers’ roles will shift toward orchestrating AI-driven workflows and solving complex problems.

I think this means that instead of figuring out how to make something happen, one will perform the higher level mental work. The “script” comes out of the smart software.

The write up says:

“It doesn’t mean that the AI is doing the software engineering job … authorship is still going to be human,” Scott explained. “It creates another layer of abstraction [as] we go from being an input master (programming languages) to a prompt master (AI orchestrator).” He doesn’t believe AI will replace developers, but it will fundamentally change their workflows. Instead of painstakingly writing every line of code, engineers will increasingly rely on AI tools to generate code based on prompts and instructions. In this new paradigm, developers will focus on guiding AI systems rather than programming computers manually. By articulating their needs through prompts, engineers will allow AI to handle much of the repetitive work, freeing them to concentrate on higher-level tasks like design and problem-solving.
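To make the “prompt master” idea concrete, here is a minimal sketch of such a workflow. It assumes the OpenAI Python SDK purely for illustration; the write-up does not name a tool, and the model choice and prompt are hypothetical:

```python
# A minimal "prompt master" sketch: the developer describes the task, the
# model drafts the code, and a human reviews before anything ships.
# Assumes: pip install openai and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

task = (
    "Write a Python function dedupe(records) that removes duplicate "
    "dictionaries from a list while preserving order. Include a docstring."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any code-capable model works
    messages=[
        {"role": "system", "content": "You are a careful senior Python engineer."},
        {"role": "user", "content": task},
    ],
)

draft = response.choices[0].message.content
print(draft)  # the human "orchestrator" reviews, tests, and edits this draft
```

The human still owns the review, the tests, and the decision to ship, which is presumably what Scott means by “authorship.”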

The idea is good. Does it imply that smart software has reached the end of its current trajectory and will not be able to:

  1. Recognize a problem
  2. Formulate appropriate questions
  3. Obtain via research, experimentation, or Eureka! moments a solution?

The observation by the Microsoft CTO does not seem to consider this question about a trolley line that can follow its tracks.

The article heads off in another direction; specifically, what happens to the costs?

IBM CEO Arvind Krishna is quoted as saying:

“If you can produce 30 percent more code with the same number of people, are you going to get more code written or less?” Krishna rhetorically posed, suggesting that increased efficiency would stimulate innovation and market growth rather than job losses.

Where does this leave “coders”?

Several observations:

  • Those in the top one percent of skills are in good shape. The other 99 percent may want to consider different paths to a bright, fulfilling future
  • Money, not quality, is going to become more important
  • Inexperienced “coders” may find themselves looking for ways to get skills at the same time unneeded “coders” are trying to reskill.

It is no surprise that CNET reported, “The public is particularly concerned about job losses. AI experts are more optimistic.”

Net net: Smart software, good or bad, is going to reshape work in a big chunk of the workforce. Are schools preparing students for this shift? Are there government programs in place to assist older workers? To this dinobaby, it seems the answer is not far to seek.

Stephen E Arnold, April 8, 2025

Amazon Takes the First Step Toward Moby Dickdom

April 7, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

This Engadget article does not predict the future. “Amazon Will Use AI to Generate Recaps for Book Series on the Kindle” reports:

Amazon’s new feature could make it easier to get into the latest release in a series, especially if it’s been some time since you’ve read the previous books. The new Recaps feature is part of the latest software update for the Kindle, and the company compares it to “Previously on…” segments you can watch for TV shows. Amazon announced Recaps in a blog post, where it said that you can get access to it once you receive the software update over the air or after you download and install it from Amazon’s website. Amazon didn’t talk about the technology behind the feature in its post, but a spokesperson has confirmed to TechCrunch that the recaps will be AI generated.

You may know a person who majored in American or English literature. Here’s a question you could pose:

Do those novels by a successful author follow a pattern; that is, repeatable elements and a formula?

My hunch is that authors who have written a series of books have a recipe. The idea is, “If it makes money, do it again.” In the event that you could ask Nora Roberts or commune with Billy Shakespeare, did their publishers ask, “Could you produce another one of those for us? We have a new advance policy.” When my Internet 2000: The Path to the Total Network made money in 1994, I used the approach, tone, and research method for my subsequent monographs. Why? People paid to read or flip through the collected information presented my way. I admit that I combined luck, what I learned at a blue chip consulting firm, and inputs from people who had written successful non-fiction “reports.” My new monograph — The Telegram Labyrinth — follows this blueprint. Just ask my son, and he will say, “My dad has a template and fills in the blanks.”

If a dinobaby can do it, what about flawed smart software?

Chase down a person who teaches creative writing, preferably in a pastoral setting. Ask that person, “Do successful authors of series follow a pattern?”

Here’s what I think is likely to happen at Amazon. Remember. I have zero knowledge about the inner workings of the Bezos bulldozer. I inhale its fumes like many other people. Also, Engadget doesn’t get near this idea. This is a dinobaby opinion.

Amazon will train its smart software to write summaries. Then someone at Amazon will ask the smart software to generate a 5,000-word short story in the style of Nora Roberts or some other money spinner. If the story is okay, then the Amazonian with a desire to shift gears says, “Can you take this short story and expand it to a 200,000-word novel, using the patterns, motifs, and rhetorical techniques of the series of novels by Nora, Mark, or whoever?”
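Purely to illustrate the speculation (this is a sketch of my guess, not anything Amazon has described; the prompts, model name, and helper function are hypothetical, and I again lean on the OpenAI Python SDK for the example):

```python
# Sketch of a recap-then-expand chain. Illustrative only.
# Assumes: pip install openai and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return the text of the reply."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Step 1: produce a recap of an existing series (the Kindle "Recaps" output).
recap = ask("Summarize the plot, recurring motifs, and narrative formula of "
            "a best-selling romance series in about 300 words.")

# Step 2: reuse the recap as a style guide for an "original" short story.
story = ask("Using this recap as a style guide, write a 1,000-word short "
            "story with the same patterns and tone:\n\n" + recap)

print(story)  # expanding the draft to novel length would be the next step
```

If the output is even “okay,” the rest of the scenario follows.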

Guess what?

Amazon now has an “original” novel which can be marketed as an Amazon test, a special to honor whomever, or an experiment. If Prime members or the curious click a lot, that Amazon employee has a new business to propose to the big bulldozer driver.

How likely is this scenario? My instinct is that there is a 99 percent probability that an individual at Amazon or the firm from which Amazon is licensing its smart software has or will do this.

How likely is it that Amazon will sell these books to the specific audience known to consume the confections of Nora and Mark or whoever? I think the likelihood is close to 80 percent. The barriers are:

  1. Bad optics among publishers, many of which are not pals of fume spouting bulldozers in the few remaining bookstores
  2. Legal issues because both publishers and authors will grouse and take legal action. The method mostly worked when Google was scanning everything from timetables of 19th century trains in England to books just unwrapped for the romance novel crowd
  3. Management disorganization. Yep, Amazon is suffering the organization dysfunction syndrome just like other technology marvels
  4. The outputs lack the human touch. The project gets put on ice until OpenAI, Anthropic, or whatever comes along and does a better job and probably for fewer computing resources which means more profit.

What’s important is that this first step is now public and underway.

Engadget says, “Use it at your own risk.” Whose risk may I ask?

Stephen E Arnold, April 7, 2025
