CD? Ignore That, Big Tech AI
February 27, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I try to filter the Epstein Epstein Epstein just as I try to block the AI AI AI. However, one of the people on my team showed me this write up: “AIs Can Generate Near-Verbatim Copies of Novels from Training Data.” I am not surprised that a big time US big tech AI system would spit out text from a source. These companies struck me as outfits that were going to ingest content and let the lawyers run interference. Publishers and individual authors usually lack the fleets of legal eagles available to big tech outfits.

Thanks, Venice.ai. Good enough.
Furthermore, I am not surprised that some people are surprised that these smart software systems are stupid enough to output content that clearly illustrates that their marvels appear to surf on other people’s creative work. Why do I have this view? Before I stepped away from the work fray, I bounced in and out of some Silicon Valley entities. Heck, I worked at one for several years. I was involved in a couple of zippy start-ups and heard people on the team make clear that if something could be done, just do it. “They” won’t figure it out for a long time. The “they”, of course, was users, regulators, law enforcement, morality watchdogs, and moms.
I assume that the author of the article is not aware that some of the big tech outfits are complaining that other big tech outfits are pirating their systems and methods. Yep, the outfits that just took other people’s work are squawking that a big tech company has the unmitigated gall to use another big tech firm’s intellectual property.
My term for this behavior is cynical duplicity. Its characteristics are:
- Move fast, break things. Reason: Chaos destabilizes and makes meaningful responses quite difficult
- Just take it. Reason: Most people don’t know that a well-crafted or even a crappy crawler can suck down a lot of data quickly. By the time the source figures out that the data are gone, the data are — well, what do you think? — gone.
- Sue people who break the law. Reason: Money buys lawyers. Lawyers, many times, just do what the client wants. The entity with the most time and money wins. Period.
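The crawler point above is easy to illustrate. A few lines of standard-library Python can pull down pages and follow links faster than most publishers would notice. This is a hypothetical, minimal sketch, not any company’s actual code; a polite crawler would also check robots.txt and throttle itself, which is exactly the part the “just take it” crowd skips:

```python
from urllib.request import urlopen
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect href targets from anchor tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Breadth-first fetch: download pages, harvest links, repeat."""
    seen, queue, pages = set(), [start_url], {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # dead link; move on
        pages[url] = html  # the "sucked down" content
        finder = LinkFinder()
        finder.feed(html)
        # Resolve relative links against the current page and keep going
        queue.extend(urljoin(url, link) for link in finder.links)
    return pages
```

Point the function at a seed URL and it returns a dict of page text keyed by URL; nothing in it asks permission.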
How do I know cynical duplicity is operating? Check out these headlines and stories:
- Anthropic says Deepseek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude
- Google Blocks Antigravity for OpenClaw-Linked AI Ultra Users, Cites “Malicious Usage”
- OpenAI Claims Deepseek Distilled US Models to Gain an Edge
Note: I pulled these headlines from Bing News. If the urls 404, contact the estimable Microsoft, not me.
As a dinobaby, I think the focus of the story about smart software spitting out novels is interesting. However, I think the CD or cynical duplicity is the significant aspect of how big tech AI outfits conduct themselves.
Stephen E Arnold, February 27, 2026
Palantir Peregrinations: Next Up, the Capital of Caribe
February 27, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I read The Nine Nations of North America in 1981. My recollection is that Miami (which I believe Joel Garreau dubbed Caribe) was in a segment of the US called Dixie. Palantir Technologies, if the information in “Palantir Shifts HQ to Miami From Denver After Protests” is correct, is on the move again. [Note: If the url 404s, don’t blame me. Buzz those responsive folks at Yahoo.] In Garreau’s analysis of what America had become in 1980, the company started out in Ecotopia, where Silicon Valley nestled. Then Palantir moved its headquarters to what Garreau called “The Empty Quarter” and Denver, Colorado. Now Palantir is off to Dixie and the capital of Caribe (Mr. Garreau’s name for that which is south of the US border).

Thanks, Venice.ai. Good enough.
The write up which I spotted in Yahoo’s finance section says:
Palantir Technologies Inc. said it’s moved its headquarters to Miami from Denver at a time when tech firms are headed to South Florida as local officials promote the region as an alternative to California’s Silicon Valley. The announcement was made Tuesday in a brief statement on the social media platform X, with no reason provided for the move.
Palantir prides itself on a “system” that can ingest data and output high-probability answers. Properly configured, one could ask Palantir’s AI and analytic system, “Identify the optimal city for our headquarters.” The answer was originally Silicon Valley. That was in the firm’s formative era, round about 2003, when Peter Thiel, Alex Karp, Joe Lonsdale, Stephen Cohen, and Nathan Gettings set up “The Shire.” (Yep, that’s a Lord of the Rings reference.)
Palantir then probably consulted its “seeing stone” and learned that the firm should shift its headquarters to Denver, Colorado. That move took place in early 2020.
Now, five years later, the Palantir leadership asked its system for optimal headquarters’ locations and learned that it was Dixie, specifically Miami, the capital of Caribe.
Why is this important? For me, it’s a sign that Dixie is a thriving center of high technology. That’s why I live in rural Kentucky. You now know that I am not alone in the intellectual excitement and fervor of Dixie. You thought I was here because I relocated from DC to work at the Courier Journal & Louisville Times Co. to help turn a money pit into a gold mine. Well, you are wrong. I liked the knowledge value of living in a progressive state where basketball is less important than analytic geometry. I bet you didn’t know that!
The write up says:
Palantir, a data analytics company with extensive defense contracts, is Colorado’s largest public company. Its decision followed multiple protests since it moved to Denver in 2020 from Palo Alto, due to cultural and ideological differences, according to the Denver Post. Protests have targeted the company’s support of the Israeli military and more recently its work with US Immigration and Customs Enforcement by using artificial intelligence to identify targets for deportation. State and local officials said they were not told of the decision ahead of time, including Colorado Governor Jared Polis.
Interesting. I wonder why the Palantir seeing stone system did not notice the probability that the company would engender local protests. Perhaps Palantir discounted the culture of Boulder, giving excess “weight” to the value of the community in the just folks’ town of Aspen, Colorado?
Here’s a question that crossed my mind, “What if the Palantir system output erroneous information?” Moving a company’s headquarters, even if it is an outfit set up on the Airbnb principles of Telegram, is a hassle.
What are the implications if the answer to the question “What if the Palantir system output erroneous information?” is “Yep, it sure did”? I don’t want to think about the inconceivable answer. Forget Hershey’s experience with Palantir. Think about health care in the UK.
Maybe the move was not Palantir’s leadership idea. The write up points out:
Peter Thiel, Palantir’s chairman, opened an office for his private investment firm in Miami’s Wynwood neighborhood at the end of 2025, expanding the billionaire’s presence in Florida. The tech mogul has owned a mansion in Miami Beach since 2020, and his venture capital firm Founders Fund has had an office nearby since 2021. He also moved his voter registration to Florida in March 2024, according to state records.
Okay, protests and probabilistic outputs aside, will the company offer immersion classes in Spanish? The language might be useful if the protests create multi-lingual signage.
The big question, “Why are folks complaining about Palantir?”
Stephen E Arnold, February 27, 2026
Amazon: Employee Innovation the Bezos Bulldozer Way
February 27, 2026
In 2025, Amazon instituted a cute trick to monitor employees returning to the office (RTO) and to identify candidates for a RIF. Amazon has implemented a new tool to track its employees in a manner that would make Big Brother smile. Business Insider has the scoop on employee tracking in the article, “Amazon Gives Managers A New Way To Spot Employees Who Aren’t Spending Enough Time In The Office.”
The new monitoring policy tracks how often employees come to the office, how long they stay, and the locations where they work. There are three types of Amazon employees this tracking system will affect:
“The system flags three kinds of employees: “Low-Time Badgers,” defined as employees whose weekly median time in the office is less than four hours per day, averaged over a rolling eight-week period; “Zero Badgers,” who don’t badge into any Amazon building during that span; and “Unassigned Building Badgers,” who badge into a building other than the one they’re assigned to over half the time.”
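The three flags quoted above reduce to simple rules over badge records. Here is a minimal sketch in Python; the function name, field shapes, and data layout are my assumptions for illustration, not Amazon’s actual system:

```python
from statistics import median

def classify_badger(daily_hours, assigned_building, badge_buildings):
    """Classify an employee per the three flag types quoted above.

    daily_hours: hours badged in per office day over a rolling 8-week window
    assigned_building: the building the employee is assigned to
    badge_buildings: buildings badged into over the same window
    """
    # "Zero Badger": no badge-ins anywhere during the span
    if not badge_buildings:
        return "Zero Badger"
    # "Low-Time Badger": median office time under four hours per day
    if median(daily_hours) < 4:
        return "Low-Time Badger"
    # "Unassigned Building Badger": wrong building over half the time
    wrong = sum(1 for b in badge_buildings if b != assigned_building)
    if wrong > len(badge_buildings) / 2:
        return "Unassigned Building Badger"
    return "Compliant"
```

The point of the sketch: each flag is a trivial threshold check, which is why a badge database turns into a surveillance dashboard with very little engineering effort.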
Amazon put the monitoring system in place because employees weren’t abiding by the RTO policy. Now everyone needs to suffer under the watchful eye of the Big Bezos Bulldozer.
Amazon just upgraded its “coffee badging” countermeasures; the prior policy required teams to be in the office for a minimum of two to six hours. Some employees (soon to learn that each could find their future elsewhere) said the coffee badging crackdown was a high school method. Amazon claims its tracking system is an effort to encourage collaboration among its employees. The company claims that “working in-office is important to our culture…”.
Here’s a question for you. Will Amazon just chip its employees like dogs? The robots would have another signal, so a dawdler would not come face to face with a smart machine. Hook in an agent, and Amazon will know where every employee is and what he or she is allegedly doing every minute of every day. That sounds efficient. We know Amazon has great confidence in AI. A recent outage allegedly caused by a “good enough” AI system was traced to a human. Let’s chip that one for sure.
Whitney Grace, February 27, 2026
A Peek into the Thiel-iverse
February 26, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I don’t pay much attention to Palantir. After I saw the firm run an ad in the Wall Street Journal explaining that it was an AI company, I dropped it from my intelware watch list. Palantir, I concluded, was an open source surfing custom software shop. It was essentially building solutions for customers. There’s nothing wrong with that approach, but I prefer outfits like Octostar who just sell a license. The customer is ready to roll after a short training course.
I read two items I found interesting. The link between the two is Peter Thiel, not the companies themselves. Let’s look briefly at these two items. Please, read each in its original form and assemble your own opinion about the messages contained in each “real” news item.

Thanks, Venice.ai. Good enough.
The first article is in the UK newspaper The Telegraph. Its story is “NHS Contractor Palantir Will Suffer $200bn Wipeout, Says Big Short Investor.” The write up reports, “Michael Burry accuses ‘easily replaceable’ tech firm of overplaying its AI credentials.”
I found this snippet interesting:
Under its latest contract with the NHS, Palantir was tasked with joining up existing NHS data in a bid to speed up diagnosis and reduce waiting times and hospital stays. However, official figures this week showed that A&E trolley waits have risen to their worst on record – with more NHS patients than ever facing 12-hour delays last month.
Not unusual. Years ago I had a client with a juicy NHS contract. The client’s software did a couple of things, but to do heavy lifting such as that required by the NHS, custom code had to be written. My client could not meet the requirement and its contract was not renewed. Was it the NHS’ fault? Was my client responsible? I have no idea. But custom software required for a product that does a couple of things often presents challenges. Palantir is now tackling the NHS, and an alleged US$200 billion wipeout looms as the firm helps out with some fundamental issues.
He [an expert named Michael Burry] said its chief executive, Alex Karp, had initially been “blindsided by ChatGPT” and other large language models, only then to decide that he could “spin this as Palantir is AI”. Mr Burry said: “Like Trump, Karp figures bluster has gotten him pretty far, and so will continue in that mode.” Palantir has previously hit back against Mr Burry’s criticism. In late 2025, Mr Karp branded the investor “bats–t crazy” for predicting such sharp falls in its stock.
Yep, professional.
The second article is “Discord Distances Itself From Age Verification Firm After Ties To Palantir’s Peter Thiel Surface.” This write up states:
Started in 2018, Persona develops identity detection and anti-fraud technologies. They’ve been having an absolute field day since the OSA, being implemented to verify user ID across Reddit and Roblox. One sticking point, however, is who’s backing the company: Peter Thiel, the cofounder of ICE-approved surveillance firm Palantir.
The write up points out:
Thiel, of course, is known for many things. A co-founder of PayPal, Thiel is now more closely affiliated with Palantir, a company specializing in digital surveillance and exploiting user information.
My reaction to the Thiel thread linking these two items is that:
- Aggressive marketing is working for Palantir
- Mr. Thiel has a knack for spotting “in between” opportunities; that is, pools of high value information and customers like governments
- Some people like Mr. Burry and the author of the Discord article are nervous about the companies and, I surmise, Mr. Thiel.
Is it possible that Mr. Thiel and other influential Silicon Valley professionals want to use their technology to create an on ramp for themselves and their companies to gain not just more money but direct influence over the government and the citizens of a country?
A partial answer might be found in the public statements of thinkers like Nick Land, Patrick Deneen, and Curtis Yarvin. The touchstone old timer may be René Girard or Leo Strauss. Some of the ideas might shed light on Mr. Thiel’s investments, his support of the Palantir approach to marketing, and the funding of outfits like Persona.
Getting fascinated with an individual chess piece is necessary, but the game is won by trying to figure out the strategy of the player. That’s why I don’t follow Palantir. It is the bigger picture into which Palantir fits that matters.
Stephen E Arnold, February 26, 2026
AI Use Cases: Let Many Flowers Bloom Even Where They Are Unwanted
February 25, 2026
As the title asks, why is it surprising that people use AI differently? Technology has never been one out-of-the-box solution that serves everyone. The Harvard Business Review asks this stupid question and researches it: “Why AI Boosts Creativity For Some Employees But Not Others.” Generative AI bots are becoming an essential tool for day-to-day business. The hope is that generative AI will make employees more creative and generate more inspiring ideas.
Nope.
A Gallup survey reported that only 26% of employees who use generative AI saw creativity improvements. Why?
“Our new research, published in the Journal of Applied Psychology, answers this question. We find that generative AI can indeed boost employee creativity, but the gains are not universal. Specifically, employees with stronger metacognition—the ability to plan, evaluate, monitor, and refine their thinking—are more likely to experience creative gains from using generative AI, because they can use it more effectively to acquire the cognitive job resources that fuel creativity.”
The team conducted creativity research with the appropriate scientific-method jargon. Blah. Blah. Blah. They discovered that smart individuals and people with more creative thinking don’t use AI like a calculator. They use it as a tool to enhance their work skills. Stupid people, however, just plug in questions and take the results at face value. Here are more results about the differences between the two groups of employees, rendered in proper research language:
“By contrast, employees low in metacognition are more likely to accept AI’s first answer, rely on default outputs, and fail to check whether AI’s suggestions are accurate or relevant. As a result, employees with stronger metacognition are far better positioned to use AI tools to acquire the cognitive job resources that fuel creativity, whereas those with weaker metacognitive skills see few creative gains from AI.”
Let’s step back. Is it possible that humans want to be valued for their work, not the work of unknown coders and black boxes? Is it possible that humans know that “good enough” is exactly what probabilistic software delivers? Can it be that humans need to do something meaningful to make them happy or at least give them something about which to complain?
The reality is that AI leads in one direction: Abrogation of control from humans to a couple of humans who own the smart software. Happy now?
Whitney Grace, February 25, 2026
Telegram Notes: A Look at Grachev
February 24, 2026
The Telegram Notes collection of my informal comments, based on notes I gathered for my book “The Telegram Labyrinth,” can be viewed at this online location.
Andrei Grachev operates quickly and is less well known than some other professionals supporting the Telegram-linked TON Foundation. He had a brush with the law in Moscow, served as president of the China-linked Huobi Moscow, and secured a board-level appointment to RACIB. The first part of a three-part informal essay is “Part I. Andrei Grachev: A Hungry Uzbek Falcon.” Part II will appear in five or six days.
Stephen E Arnold, February 24, 2026
Want a Peek at the Future: Hire This CPA
February 24, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Generally speaking, I have liked the accountants I have worked with. Some were green eye shade types (yo, Buddy) and others were knock-offs of my colleagues at a blue chip consulting firm (yep, Andy, I am talking about you). However, none of these professionals were futurists. The mental orientation was toward following the money, documentation, and keeping up with those exciting tax regulations and the changes thereto.

An office supply shop in Manhattan has a large number of green eye shades and ledger-books. The two store owners have a question. How would you answer it? Answer: Just ask an AI system. Thanks, Venice.ai. Good enough.
I think I have found a CPA futurist.
Navigate to “What Happened to Software is Happening to Finance and Accounting.” The write up says:
A year ago, AI was 20% of my workflow — a better Stack Overflow. I’d use it to research syntax I didn’t understand. Three months ago, I flipped it to 100% — but I was project-managing every function like if it were a junior developer. Now I hand it a product spec and let it run.
On the surface, the observation does seem like the future in one of those sci-fi books that captivate wizards like Steve Gibson, co-host of Security Now, and some of the people whom I met at assorted high-tech firms who hired me to work on some special projects. But the next layer down we have a person who is a CPA. My father was an accountant, and he seemed quite focused on what Ebenezer Scrooge wanted Bob Cratchit to do with no coal in his grate.
The write up makes clear that a CPA / accountant is a human interfacing with different software; for instance, Excel. The author points out:
But “being great at the interface” is losing its edge. Because the interface is becoming an agent. It’s already showing up in the numbers. Earlier this month, the FT reported that KPMG pressured its own auditor, Grant Thornton, to cut fees — arguing that AI should make the work cheaper. Grant Thornton agreed to a 14% reduction. When a Big Four firm is using AI as leverage to renegotiate its own audit fees, the repricing isn’t theoretical.
My interpretation is that many accounting jobs are history. Furthermore, revenue from humans reading bank statements is going to go down in many use cases. And AI tools have become “leverage” to lower prices. Finally, if you want to be a big time CPA, you will be an orchestrator of agents, not of 24-year-olds working at whatever is left of the Big Four.
The article states:
The leverage is in designing systems that create auditable truth at startup speed.
What’s interesting is that the essay traverses ground my team and I have stumbled through for a while. The twist is that the essay is a “hire me” document. Note this passage:
Let’s Talk. If you’re already deep in this: I’d love to compare workflows. There are no best practices yet — we’re all learning by watching each other. If you’re not in it yet: let’s hang out. I’ll show you what this looks like live. No pitch — just the actual workflows running in real time. DM me.
Several observations:
- The document was not plucked from LinkedIn, where many “thought leaders” roost, I have heard
- The pitch is not to be the accountant but to help create a smart system which is pretty much the accountant (tough luck for the newly minted CPA)
- The same logic can be applied, in my opinion, to general mid-tier consulting work, some legal work, and a great deal of the work at certain government agencies. (Sorry, I just can’t think of any to mention now.)
Net net: A novel approach to getting hired.
Stephen E Arnold, February 24, 2026
Amazon and AI: Who Has the Story Straight?
February 20, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
The orange newspaper comes up with some interesting stories. “Amazon Service Was Taken Down by AI Coding Bot.” [Paywalled. Don’t hassle me. I didn’t create the mess for traditional news outfits.] The write up puzzled me. You may find this information crystal clear, but this dinobaby struggled with these two statements in the write up:
[a] “Amazon Web Services experienced a 13-hour interruption to one system used by its customers in mid-December after engineers allowed its Kiro AI coding tool to make certain changes, according to four people familiar with the matter. The people said the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to “delete and recreate the environment.”
and
[b] Amazon said it was a “coincidence that AI tools were involved” and that “the same issue could occur with any developer tool or manual action”. “In both instances, this was user error, not AI error,” Amazon said, adding that it had not seen evidence that mistakes were more common with AI tools. The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service”.
Okay, [a] tells me AI did it. [b] tells me a “user” did it.
Thanks, Image Z. Good enough.
Which is correct? My hunch is that the Financial Times does not know the source of the problem. Despite that, the laser of doubt sweeps across the Amazon landscape and illuminates Amazon professionals and an artificial intelligence system.
Does the FT’s article resolve the question of AI screw up or human failure? Nope. I noted this passage:
Some Amazon employees said they were still skeptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 per cent of developers to use AI for coding tasks at least once a week and was closely tracking adoption. Amazon said it was experiencing strong customer growth for Kiro and that it wanted customers and employees to benefit from efficiency gains.
Here I am in rural Kentucky. I have decades of work experience behind me. What’s my view?
- Amazon and AWS leadership need AI to succeed for the MBA reasons: better, faster, and cheaper. Yeah, cheaper. Therefore, it is AI, folks.
- Amazon’s enterprise sales professionals want to sell AI, AWS, and the Amazon way. Any hint of AI fouling up the plumbing is very bad news. How does one deal with bad news? Bad news. What bad news? Exactly.
- Stakeholders want AWS to work. Microsoft Azure may not be the most agile dog in the kennel, but it is big. The Google, despite its peculiarities, is out there pitching its AI and cloud. Then there are the China-linked AI systems. Making those available for free or very low cost strikes at the tender parts of Amazon’s pricing tactics. Free and low cost are bad, bad news.
Therefore, Amazon will find a human throat to choke. My hunch is that the execution will be handled by an AWS agent who doesn’t complain and doesn’t explain like the people the FT’s writer spoke with.
But what happened? A human let an AI loose. The human did not spot the problem. The smart software does what smart software does: Makes mistakes. What about the smart software? No problemo. What about the humanoid? The individual has an opportunity to find a future elsewhere.
Stephen E Arnold, February 20, 2026
Blue Chip Consulting Management Method: Threats and Money
February 19, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
Not only are blue chip consulting firms struggling to figure out how to price their services and license their AI agents, the firms have some recalcitrant staff. These are Type A people who know how to do a couple of things. As those experts get more job experience, the tried-and-true methods become the unwritten and sometimes written rules of what’s acceptable in a blue chip consulting firm. The bluer the blue chip, the more these “we do things this way” pressures increase. After all, if McKinsey hacked out executive memos like a person fresh out of a junior college trying to be an efficiency expert, the work product would be different in my opinion. Experience is often a positive. Experience makes ruts in professional practices, just as two-wheeled carts cut ruts in the streets of Pompeii. “In a rut” has a real meaning. How do you enforce change? Money and implicit threats. That’s a sure-fire approach to winning engagements and building a loyal staff.
Thanks, Venice.ai. Good enough.
I read “Accenture Links Staff Promotions to Use of AI Tools.” The operative idea here is incentives. Pay goes up if you do what leadership tells you. The write up says:
Accenture has reportedly started tracking staff use of its AI tools and will take this into consideration when deciding on top promotions, as the consulting company tries to increase uptake of the technology by its workforce. The company told senior managers and associate directors that being promoted to leadership roles would require “regular adoption” of artificial intelligence…
I noted this passage:
Accenture has previously said it has trained 550,000 of its 780,000-strong workforce in generative AI, up from only 30 people in 2022, and has announced it is rolling out training to all of its employees as part of its annual $1bn (£740m) annual spend on learning.
Yep, training. This means that the “old” methods are going to be AI-ized whether the Type As, leadership with imposter syndrome, and senior consultants like it or not.
A good question is, “Why?”
My hunch is that leadership at this estimable firm figures that AI is cheaper than young MBAs and CPAs, financial engineers, and developers. Therefore, if the firm can get everyone using AI, then the old up and out method of employee performance review can cut staff, reduce costs for stupid things like health care and retirement, and produce more bonuses and higher salaries for leadership.
The big “if” looms over this approach. What if this grand plan backfires and clients want to use AI to replace the blue chip consulting firms or to negotiate for lower fees? What if the AI screws up a big time audit and leadership gets to spend quality time with lots of lawyers? What if the staff think this surveillance method sucks because the professionals surveilled went to top schools to operate more or less like actual human knowledge workers, not like cyborgs?
My view is that leadership in some blue chip consulting firms knows that some type of meaningful action must be taken. However, the AI road is an uncertain one. AI seems to work when applied to killing the enemy in a kill zone. Will it work when sophisticated tax management and analysis are required? Will it work when a client shows up and says, “We need help turning this drug into a consumer product. Can you help us?”
Why the doubt? Clients like to do things for themselves. Blue chip consultancies work hard to keep secrets and prevent leakage of client information to other clients. Can AI systems deliver this? What about agents?
Several observations:
- We will know how successful the strategy is because RIFed employees will post on social media, give speeches, or write essays on Medium
- The leadership is back in the crap game held in a dark alley. This surveillance and enforced AI are big bets. Really big bets
- The employees at blue chip consulting firms are not particularly easy to manage. Some have money already. Some have families with clout. Some are working side gigs so they can run their own company. Some will tell a client, “Hey, let me join your firm. We can do what the blue chip firm does for less money. I can set this up and run it for you.”
Why aren’t the consultants jumping at AI? Explaining errors to clients is embarrassing. Who wants to look stupid? No, I won’t answer that. Just ask an AI system.
Stephen E Arnold, February 19, 2026
Has Japan Caught the Ka-Ching Cash Register Bug?
February 19, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I just caught up on my news. Tucked into the list was an interesting write up with an international spin: “A Coalition of 600-Plus Companies in Japan Wants More Regulatory Action against Apple and Google.” Doesn’t Japan know that the US is busy regulating big tech? Apparently not, or the actions taken have not addressed some of the allegations leveled at these exemplary American firms.
The write up states:
Last December Apple announced changes impacting iOS apps in Japan to comply with the Mobile Software Competition Act (MSCA). However they’re apparently not enough for a coalition representing more than 600 local companies is calling for further regulatory action….However, seven IT-related industry groups, including the Computer Entertainment Supplier’s Association, released a joint statement on Thursday calling on Apple and Google to swiftly eliminate new commissions imposed on app companies, reports The Japan News. The groups said the burden of the commissions is so heavy that directing users to external sites for payment “has not become a viable option.”
The article points out:
According to The Japan Times, they also argued that in the U.S., similar payment methods are offered free of charge. The groups accused Apple and Google of placing Japan’s consumers and businesses at a disadvantage compared to ones in the United States.
Several thoughts crossed my mind:
- Has Japan caught the EU’s ka-ching cash register bug? The idea is that suing US big tech firms can produce cash which can be collected once the endless appeals have been exhausted.
- Is Apple consciously discriminating against Japanese professionals? If so, why?
- Is this matter another example of Apple’s flagging management control? I ask this question because Siri is a no show again. What’s up when the management cart seems unable to pull fruit from the orchard?
I don’t think too much about Apple. But maybe I should pay attention?
Stephen E Arnold, February 19, 2026