FTC Enacts Investigative Process On AI Products and Services
December 15, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Creative types and educational professionals are worried about the influence of AI-generated work. Legal, finance, business operations, and other industries are worried about how AI will affect them too. Aware of the upward trend in goods and services surreptitiously moving into the market, the Federal Trade Commission (FTC) took action. The FTC released a briefing on its new consumer AI protection measure: “FTC Authorizes Compulsory Process For AI-Related Products And Services.”
The executive recruiter for a government contractor says, “You can earn great money with a side gig helping your government validate AI algorithms. Does that sound good?” Will American schools produce enough AI savvy people to validate opaque and black box algorithms? Thanks, MSFT Copilot. You hallucinated on this one, but your image was good enough.
The FTC passed an omnibus resolution that authorizes the use of compulsory process in nonpublic investigations of products and services that use AI, claim to be made with AI, or claim to detect it. The resolution will increase the FTC’s efficiency with civil investigative demands (CIDs), a compulsory process similar to a subpoena. CIDs are issued to collect information, much like legal discovery, for consumer protection and competition investigations. The resolution will be in effect for ten years, and the FTC voted 3-0 to approve it.
The FTC defines AI as:
“AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.”
AI can also be used for deception, privacy infringements, fraud, and other illegal activities. It can cause competition problems as well, such as when a few companies monopolize algorithms or other AI-related technologies.
The FTC is taking preliminary steps to protect consumers from bad actors and their nefarious AI-generated deeds. However, what constitutes a violation in relation to AI? Will the training data libraries be examined along with the developers? Where will the expert analysts come from? An online university training program?
Whitney Grace, December 15, 2023
Microsoft Snags Cyber Criminal Gang: Enablers Finally a Target
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this year at the National Cyber Crime Conference, we shared some of our research about “enablers.” The term is our shorthand for the individuals, services, and financial outfits providing the money, services, and management support to cyber criminals. Online crime comes, like Baskin-Robbins ice cream, in a mind-boggling range of “flavors.” To make big bucks, funding and infrastructure are needed. One reason for that need is amped-up enforcement from the US Federal Bureau of Investigation, Europol, and cooperating law enforcement agencies. The cyber crime “game” is a variation on cat and mouse. With each technological advance, bad actors try out the latest and greatest. Then enforcement agencies respond and neutralize the advantage. The bad actors then scan the technology horizon, innovate, and law enforcement responds again. There are many implications of this innovate-react-innovate cycle. I won’t go into those in this short essay. Instead, I want to focus on a Microsoft blog post called “Disrupting the Gateway Services to Cybercrime.”
Industrialized cyber crime uses existing infrastructure providers. That’s a convenient, easy, and economical means of hiding. Modern obfuscation technology adds to law enforcement’s burden. Perhaps some oversight and regulation of these nearly invisible commercial companies is needed? Thanks, MSFT Copilot. Close enough, and I liked the investigators on the roof of a typical office building.
Microsoft says:
Storm-1152 [the enabler?] runs illicit websites and social media pages, selling fraudulent Microsoft accounts and tools to bypass identity verification software across well-known technology platforms. These services reduce the time and effort needed for criminals to conduct a host of criminal and abusive behaviors online.
What moved Microsoft to take action? According to the article:
Storm-1152 created for sale approximately 750 million fraudulent Microsoft accounts, earning the group millions of dollars in illicit revenue, and costing Microsoft and other companies even more to combat their criminal activity.
Just 750 million? One question which struck me was: “With the updating, the telemetry, and the bits and bobs of Microsoft’s ‘security’ measures, how could nearly a billion fake accounts be allowed to invade the ecosystem?” I thought a smaller number might have been the tipping point.
Another interesting point in the essay is that Microsoft identifies the third party Arkose Labs as contributing to the action against the bad actors. The company is one of the firms engaged in cyber threat intelligence and mitigation services. The question I had was, “Why are the other threat intelligence companies not picking up signals about such a large, widespread criminal operation?” Also, “What is Arkose Labs doing that other sophisticated companies and OSINT investigators are not doing?” Google and In-Q-Tel invested in Recorded Future, a go-to threat intelligence outfit. I don’t recall seeing it announced, but I heard that Microsoft invested in the company, joining SoftBank’s Vision Fund and PayPal, among others.
I am delighted that “enablers” have become a more visible target of enforcement actions. More must be done, however. Poke around in ISP land and what do you find? As my lecture pointed out, “Respectable companies in upscale neighborhoods harbor enablers, so one doesn’t have to travel to Bulgaria or Moldova to do research. Silicon Valley is closer and stocked with enablers; the area is a hurricane of crime.”
In closing, I ask, “Why are discoveries of this type of industrialized criminal activity unearthed by only one outfit?” And, “What are the other cyber threat folks chasing?”
Stephen E Arnold, December 14, 2023
Why Modern Interfaces Leave Dinobabies Lost in Space
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Until the advent of mobile phones, I paid zero attention to interfaces. As a dinobaby, I have well-honed skills remembering strings of commands. I would pump these into the computing device via a keyboard or a script and move on with the work. Now, when a new app arrives, I resist using it. The reasons are explained quite well in “Modern iOS Navigation Patterns.” I would suggest that the craziness presented clearly in the essay be extended to any modern interface: Desktop anchor, zippy tablet, or the look-alike mobiles.
The dinobaby says, “How in the world do I send a picture to my grandson?” Thanks, MSFT Copilot. Did you learn something with the Windows phone interface?
The write up explains and illustrates the following types of “modern” iOS interfaces. I am not making these up, and I am assuming that the write up is not a modern-day Swiftian satire. Here we go:
- Structural navigation with these options or variants: Drill down (a minimal sketch follows this list), flat, pyramid, and hub and spoke
- Overlay navigation with these options or variants: High friction, low friction, and non-modal (following along?)
- Embedded navigation with these options or variants: State change, step by step, or content driven (crystal clear, right?)
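To ground the first pattern, here is a minimal sketch of “drill down” navigation, assuming SwiftUI’s NavigationStack on iOS 16 or later. The view name and folder labels are invented for illustration; the cited write up contains no code. Each tap pushes one level deeper, and the Back button pops it off the stack.

```swift
import SwiftUI

// Illustrative sketch of "drill down" structural navigation (SwiftUI, iOS 16+).
// The type name and folder labels are invented for this example.
struct DrillDownExample: View {
    let folders = ["Inbox", "Archive", "Trash"]

    var body: some View {
        NavigationStack {
            List(folders, id: \.self) { folder in
                // Tapping a row pushes the matching destination onto the stack.
                NavigationLink(folder, value: folder)
            }
            .navigationTitle("Folders")
            .navigationDestination(for: String.self) { folder in
                Text("Contents of \(folder)")
                    .navigationTitle(folder)
            }
        }
    }
}
```

Roughly speaking, the other variants swap that stack for tab bars, sheets, or a single hub screen, which is where the confusion the essay describes starts to creep in.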
Several observations. I want an interface to deliver the functions the software presents as its core functionality. I do not want changing interfaces, hidden operations, or weirdness which distracts me from the task which I wish to accomplish.
What do designers do when they have to improve an interface? Embrace one of the navigation approaches, go to meetings, and decide which to use and where. When the “new” interface comes out, poll users to get feedback. Ignore the dinobabies who say, “You are nuts because the app is unusable.”
Stephen E Arnold, December 14, 2023
Apple Harvests Old Bell Tel Ideas
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I am not a Bell head. True, my team did work at Bell Labs. In mid project, Judge Greene’s order was enforced; therefore, the project morphed into a Bellcore job. I had opportunities to buy a Young Pioneer T shirt. Apple’s online store has “matured” that idea. The computer platform was one of those inviolate things. Apple is into digital chastity belts too, I believe. Lose your iTunes password, and you are instantly transferred back to the world of Bell Tel hell, as if you had “lost” your Western Electric 202 handset.
So what?
I read “Apple Shutters Third-Party Apps That Enabled iMessage on Android.” In my opinion, the write up says, “Apple killed a cross platform messaging application.” This is no surprise to anyone who had the experience of attending pre-Judge Greene meetings. May I illustrate? At one meeting in Manhattan, the firm with which I was affiliated explained a proposal and the fee for professional services. I don’t recall what my colleagues and I were pitching; I just remember the reaction to the fee. I am a dinobaby, but the remark ran along this railroad line:
A Fruit Company executive visits a user. The visit is intended to make clear that the user will suffer penalties if she continues to operate outside the rules of the orchard. Thanks, MSFT Copilot. Only three tries today to get one good enough cartoon.
That’s a big number. We may have to raise the price of long-distance calls. But you guys won’t get paid until we get enough freight cars organized. We will deliver the payment in nickels, dimes, and quarters.
Yep, a Bell head joke, believe it or not. Ho, ho, ho. Railcars filled with coins.
The write up states:
The iPhone maker said in a statement it “took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage.” It added that “these techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.” The company said it would continue to make changes in the future to protect its users.
If you remember the days when a person tried to connect a non-Western Electric device into the Bell phone system, the comments were generally similar. Unauthorized devices could imperil national security or cause people to die. There you go.
As a resident of Kentucky, I am delighted that big companies want to protect me. Those Kentuckians unfortunate enough to have gobbled a certain pharma company’s medications may not believe the “protect users” argument.
As a dinobaby, I see Apple’s “protect users” play as little more than an overt and somewhat clumsy attempt to kill cross platform messaging. The motives are easy to identify:
- Protect the monopoly until Apple-pleasing terms can be put in place
- Demonstrate that the company is more powerful than an upstart innovator
- Put the government on notice that it will control its messaging platform
Oh, I almost forgot. Apple wants to “protect users.” Bell/AT&T thinking has fertilized the soil in the Apple orchard in my view. I feel more protected already, even though a group fired mortars at a certain meeting’s attendees, causing me to hide in a basement until the supply of shells was exhausted.
Oh, yeah, there were people who were supposed to protect me and others at the meeting. How did that work out?
Stephen E Arnold, December 14, 2023
Stressed Staff Equals Security Headaches
December 14, 2023
This essay is the work of a dumb dinobaby. No smart software required.
How many times does society need to say that happy employees mean a better, more profitable company? The world is apparently not getting the memo, because employees, especially IT workers, are overworked, stressed, exhausted, and burnt out like a blackened match. While zombie employees are bad for productivity, they’re even worse for cyber security. BetaNews reports on a survey from Adarma, a detection and response specialist company: “Stressed Staff Put Enterprises At Risk Of Cyberattack.”
The overworked IT person says, “Are these sticky notes your passwords?” The stressed out professional service worker replies, “Hey, buddy, did I ask you if your company’s security system actually worked? Yeah, you are one of those cyber security experts, right? Next!” Thanks, MSFT Copilot. I don’t think you had a human intervene to create this image like you know who.
The survey respondents believe they are at greater risk of cyberattack due to the poor condition of their employees. Five hundred cybersecurity professionals from UK companies with over 2,000 employees were surveyed, and 51% believed their IT security teams are dead inside. This puts them at risk of digital danger. Over 40% of the cybersecurity leaders felt their skills were too limited to understand the threats they face. An additional 43% had little or zero expertise to detect or respond to threats to their enterprises.
IT people really love computers and technology, but when they’re working in an office environment and dealing with people, stress happens:
“‘Cybersecurity professionals are typically highly passionate people, who feel a strong personal sense of duty to protect their organization and they’ll often go above and beyond in their roles. But, without the right support and access to resources in place, it’s easy to see how they can quickly become victims of their own passion. The pressure is high and security teams are often understaffed, so it is understandable that many cybersecurity professionals are reporting frustration, burnout, and unsustainable stress. As a result, the potential for mistakes being made that will negatively impact an organization increases. Business leaders should identify opportunities to ease these gaps, so that their teams can focus on the main task at hand, protecting the organization,’ says John Maynard, Adarma’s CEO.”
The survey demonstrates why it is important to diversify the cybersecurity talent pool. Wait, is this in regard to ethnicity and biological sex? Is Adarma advocating for a DEI quota in cybersecurity, or is the organization advocating for a diverse talent pool with varied experience to offer different perspectives?
While it is important to have different education backgrounds and experience, hiring someone simply based on DEI quotas is stupid. It’s failing in the US and does more harm than good.
Whitney Grace, December 14, 2023
The Cloud Kids Are Not Happy: Where Is Mom?
December 13, 2023
This essay is the work of a dumb dinobaby. No smart software required.
An amusing item about the trials and tribulations of cloud techno feudalists seems appropriate today. Navigate to the paywalled story “Microsoft Has Stranglehold on the Cloud, Say Amazon and Google.” With zero irony, the write up reports:
Amazon and Google have complained to the UK’s competition regulator that their rival, Microsoft, uses practices that restrict customer choice in the £7.5 billion cloud computing market.
What’s amusing is what Google allegedly said before it lost its case related to the business practices of its online store:
“These licensing practices are the only insurmountable barrier preventing competition on the merits for new customers migrating to the cloud and for existing workloads. They lead to less choice, less innovation, and increased costs for UK customers of all sizes.”
What was Amazon’s view? According to the article:
“Microsoft changed its licensing terms in 2019 and again in 2022 to make it more difficult for customers to run some of its popular software offerings on Google Cloud, AWS and Alibaba. To use many of Microsoft’s software products with these other cloud services providers, a customer must purchase a separate license even if they already own the software. This often makes it financially unviable for a customer to choose a provider other than Microsoft.”
How similar is this finger pointing and legal activity to a group of rich kids complaining that one child has all the toys? I think the similarities are — well — similar.
The question is, “What entity will become the mom to adjudicate the selfish actions of the cloud kids?”
Stephen E Arnold, December 13, 2023
Why Is a Generative System Lazy? Maybe Money and Lousy Engineering
December 13, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Great post on the Xhitter. From @ChatGPT app:
we’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it
My experience with Chat GPT is that it responds like an intern working with my team between the freshman and sophomore years at college. Most of the information output is based on a “least effort” algorithm; that is, the shortest distance between A and B is vague promises.
An engineer at a “smart” software company leaps into action. Thanks, MSFT Copilot. Does this cartoon look like any of your technical team?
When I read about “unpredictable” behavior, I wonder if people realize that probabilistic systems are wrong a certain percentage of the time or for a certain share of their outputs. The horse loses the race. Okay, a fact. The bet on that horse is a different part of the stall.
But the “lazier” comment evokes several thoughts in my dinobaby mind:
- Allocate less time per prompt to reduce the bottlenecks in a computationally expensive system; thus, laziness is a signal of crappy engineering
- Recognize that recycling results for frequent queries is a great way to give a user “something” close enough for horseshoes (see the sketch after this list). If the user is clever, that user will use words like “give me more” or some similar rah rah to trigger another pass through what’s available
- The costs of the system are so great that the Sam AI-Man operation is starved for cash for engineers, hardware, bandwidth, and computational capacity. Until there’s more dough, the pantry will be poorly stocked.
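The second bullet describes what amounts to a response cache. Here is a hypothetical sketch of that idea in Swift; the class, the threshold, and the logic are my own invention for illustration, not anything OpenAI has disclosed about its systems.

```swift
import Foundation

// Hypothetical sketch of "recycle frequent answers." Nothing here comes from
// OpenAI; the threshold and structure are invented for illustration only.
final class ResponseCache {
    private var hitCounts: [String: Int] = [:]
    private var answers: [String: String] = [:]
    private let reuseThreshold = 3  // assumed cutoff, purely illustrative

    func respond(to prompt: String, generate: (String) -> String) -> String {
        hitCounts[prompt, default: 0] += 1
        // "Lazy" path: a prompt seen often enough gets the stored answer,
        // close enough for horseshoes and nearly free to serve.
        if hitCounts[prompt]! >= reuseThreshold, let reuse = answers[prompt] {
            return reuse
        }
        // Expensive path: actually run the model and remember the result.
        let fresh = generate(prompt)
        answers[prompt] = fresh
        return fresh
    }
}
```

A “give me more” style follow-up would miss this cache because the prompt string differs, which is consistent with the clever-user workaround described in the second bullet.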
Net net: Lazy may be a synonym for more serious issues. How does one make AI perform? Fabrication and marketing seem to be useful.
Stephen E Arnold, December 13, 2023
Allegations That Canadian Officials Are Listening
December 13, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Widespread Use of Phone Surveillance Tools Documented in Canadian Federal Agencies
It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:
Many people, it seems, are listening to Grandma’s conversations in a suburb of Calgary. (Nice weather in the winter.) Thanks, MSFT Copilot. I enjoyed the flurry of messages that you were busy creating my other image requests. Just one problemo. I had only one image request.
“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.”
To make matters worse, none of the agencies had conducted the required Privacy Impact Assessments. A federal directive issued in 2002 and updated in 2010 required such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before any new activity involving the collection or handling of personal data. Light is concerned that agencies flat-out ignoring the directive means digital surveillance of citizens has become normalized. Join the club, Canada.
Cynthia Murrell, December 13, 2023
Interesting Factoid about Money and Injury Reduction Payoff of Robots at Amazon
December 12, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Who knows if the data in “Amazon’s Humanoid Warehouse Robots Will Eventually Cost Only $3 Per Hour to Operate. That Won’t Calm Workers’ Fears of Being Replaced” are accurate. Anyone who has watched a video clip of the Musky gigapress or the Toyota auto assembly process understands one thing: Robots don’t take breaks, require vacations, or need baloney promises that taking a college class will result in a promotion.
An unknown worker speaks with a hypothetical robot. The robot allegedly stepped on a worker named “John.” My hunch is that the firm’s PR firm will make clear that John is doing just fine. No more golf or mountain climbing but otherwise just super. Thanks MSFT Copilot. Good enough.
The headline item is the most important; that is, the idea of a $3 per hour cost. That’s why automation, even if the initial robots are lousy, will continue apace. Once an outfit like Amazon figures out how to get “good enough” work from non-humans, it will be hasta la vista time.
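For a sense of scale, here is a back-of-the-envelope comparison. Only the $3 per hour figure comes from the cited headline; the human wage and hours are assumptions made up for illustration, not Amazon data.

```swift
// Back-of-the-envelope comparison. Only the $3/hour robot figure comes from
// the cited article; the wage and hours below are illustrative assumptions.
let robotCostPerHour = 3.0
let assumedHumanWagePerHour = 18.0      // assumption, not an Amazon number
let assumedHumanHoursPerYear = 2_000.0  // roughly full time
let robotHoursPerYear = 24.0 * 365.0    // no breaks, no vacations

let humanAnnualCost = assumedHumanWagePerHour * assumedHumanHoursPerYear  // $36,000
let robotAnnualCost = robotCostPerHour * robotHoursPerYear                // $26,280
print("Human (assumed): $\(Int(humanAnnualCost)); robot running 24/7: $\(Int(robotAnnualCost))")
```

Under those assumptions, the robot delivers more than four times the hours for less money. That is the hasta la vista math.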
However, the write up includes a statement which is fascinating in its vagueness. The context is that automation may mistake a humanoid for a box or a piece of equipment. The box is unlikely to file a lawsuit if the robot crushes it. The humanoid, on the other hand, will quickly be surrounded by a flock of legal eagles.
Here’s the passage which either says a great deal about Amazon or about the research effort invested in the article:
And it’s still not clear whether robots will truly improve worker safety. One whistleblower report in 2020 from investigative journalism site Reveal included leaked internal data that showed that Amazon’s robotic warehouses had higher injury rates than warehouses that don’t use robots — Amazon strongly refuted the report at the time, saying that the reporter was "misinterpreting data." "Company data shows that, in 2022, recordable incident rates and lost-time incident rates were 15% and 18% lower, respectively, at Amazon Robotics sites than non-robotics sites," Amazon says on its website.
I understand the importance of the $3 per hour cost. But the major item of interest is the incidence of accidents when humanoids and robots interact in a fast-paced picking and shipping set up. The information provided about injuries is thin and warrants closer analysis in my opinion. I loved the absence of numeric context for the assertion of a “lower” injury rate. Very precise.
Stephen E Arnold, December 12, 2023
Google: Another Court Decision, Another Appeal, Rinse, Repeat
December 12, 2023
This essay is the work of a dumb dinobaby. No smart software required.
How long will the “loss” be tied up in courts? Answer: As long as possible.
I am going to skip the “what Google did” reports and focus on what I think is a quite useful list. The items in the list apply to Apple and Google, and I am not sure the single list is the best way to present what may be “clever” ways to dominate a market. But I will stick with what Echelon provided at this YCombinator link.
Two warring samurai find that everyone in the restaurant is a customer. The challenge becomes getting “more.” Thanks, MSFT Copilot. Good enough.
What does the list present? I interpreted the post as a “racket analysis.” Your mileage may vary:
Apple is horrible, but Google isn’t blameless.
Google and Apple are a duopoly that controls one of the most essential devices of our time. Their racket extends more broadly than Standard Oil. The smartphone is a critical piece of modern life, and these two companies control every aspect of them.
- Tax 30%
- Control when and how software can be deployed
- Can pull software or deny updates
- Prevent web downloads (Apple)
- Sell ads on top of your app name or brand
- Scare / confuse users about web downloads or app installs (Google)
- Control the payment rails
- Enforce using their identity and customer management (Apple)
- Enforce using their payment rails (Apple)
- Becoming the de-facto POS payment methods (for even more taxation)
- Partnering with governments to be identity providers
- Default search provider
- Default browser
- Prevent other browser runtimes (Apple)
- Prevent browser tech from being comparable to native app installs (mostly Apple)
- Unfriendly to repairs
- Unfriendly to third party components (Apple)
- Battery not replaceable
- Unofficial pieces break core features due to cryptographic signing (Apple)
- Updates obsolete old hardware
- Green bubbles (Apple)
- Tactics to cause FOMO in children (Apple)
- Growth into media (movie studios, etc.) to keep eyeballs on their platforms (Apple)
- Growth into music to keep eyeballs on their platforms
There are no other companies in the world with this level of control over such an important, cross-cutting, cross-functional essential item. If we compared the situation to auto manufacturers, there would be only two providers, you could only fuel at their gas stations, they’d charge businesses every time you visit, they’d display ads constantly, and you’d be unable to repair them without going to the provider. There need to be more than two providers. And if we can’t get more than two providers, then most of these unfair advantages need to be rolled back by regulators. This is horrific.
My team and I leave it to you to draw conclusions about the upsides and downsides of a techno feudal set up. What’s next? Appeals, hearings, trials, judgment, appeals, hearings, and trials. Change? Unlikely for now.
Stephen E Arnold, December 12, 2023

