Generative AI Means Big Money…Maybe
May 6, 2024
Whenever new technology appears on the horizon, there are always optimistic venture capitalists who jump on the idea that it will be a gold mine. While this is occasionally true, other times it’s a bust. Anything can sound feasible on paper, but reality often proves that brilliant ideas don’t work. Medium published Ashish Kakran’s article, “Generative AI: A New Gold Rush For Software Engineering.”
Kakran opens his article by invoking the brilliant simplicity of Einstein’s E=mc² formula to inspire readers. He suggests that generative AI will revolutionize industries the way Einstein’s formula changed physics. He also says that white collar jobs stand to be automated for the first time in history. In fact, white collar jobs have been automated or made obsolete for centuries.
Kakran then runs numbers complete with charts and explanations about how generative AI is going to change the world. His diagrams and explanations probably mean something, but much of the section reads like white paper gibberish. This part makes sense:
“If you rewind to the year 2008, you will suddenly hear a lot of skepticism about the cloud. Would it ever make sense to move your apps and data from private or colo [cated] data centers to cloud thereby losing fine-grained control. But the development of multi-cloud and devops technologies made it possible for enterprises to not only feel comfortable but accelerate their move to the cloud. Generative AI today might be comparable to cloud in 2008. It means a lot of innovative large companies are still to be founded. For founders, this is an enormous opportunity to create impactful products as the entire stack is currently getting built.”
The author is correct that there are business opportunities to leverage generative AI. Is it a California gold rush? Nobody knows. If you have the funding, expertise, and a good idea, then follow it. If not, maybe focusing on a more attainable career is better.
Whitney Grace, May 6, 2024
Microsoft: Security Debt and a Cooked Goose
May 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Microsoft has a deputy security officer. Who is it? For reasons of security, I don’t know. What I do know is that our test VPNs no longer work. That’s a good way to enforce reduced security: Just break Windows 11. (Oh, the pushed messages work just fine.)
Is Microsoft’s security goose cooked? Thanks, MSFT Copilot. Keep following your security recipe.
I read “At Microsoft, Years of Security Debt Come Crashing Down.” The idea is that technical debt has little hidden chambers, in this case, security debt. The write up says:
…negligence, misguided investments and hubris have left the enterprise giant on its back foot.
How has Microsoft responded? Great financial report and this type of news:
… in early April, the federal Cyber Safety Review Board released a long-anticipated report which showed the company failed to prevent a massive 2023 hack of its Microsoft Exchange Online environment. The hack by a People’s Republic of China-linked espionage actor led to the theft of 60,000 State Department emails and gained access to other high-profile officials.
Bad? Not as bad as this reminder that there are some concerning issues.
What is interesting is that big outfits, government agencies, and start ups just use Windows. It’s ubiquitous, relatively cheap, and good enough. Apple’s software is fine, but it is different. Linux has its fans, but it is work. Therefore, hello Windows and Microsoft.
The article states:
Just weeks ago, the Cybersecurity and Infrastructure Security Agency issued an emergency directive, which orders federal civilian agencies to mitigate vulnerabilities in their networks, analyze the content of stolen emails, reset credentials and take additional steps to secure Microsoft Azure accounts.
The problem is that Microsoft has been successful in becoming for many government and commercial entities the only game in town. This warrants several observations:
- The Microsoft software ecosystem may be impossible to secure due to its size and complexity
- Government entities from America to Zimbabwe find the software “good enough”
- Security — despite the chit chat — is expensive and often given cursory attention by system architects, programmers, and clients.
The hope is that smart software will identify, mitigate, and choke off the cyber threats. At cyber security conferences, I wonder if the attendees are paying attention to Emily Dickinson (the sporty nun of Amherst), who wrote:
Hope is the thing with feathers
That perches in the soul
And sings the tune without the words
And never stops at all.
My thought is that more than hope may be necessary. Hope in AI is the cute security trick of the day. Instead of a happy bird, we may end up with a cooked goose.
Stephen E Arnold, May 3, 2024
Trust the Internet? Sure and the Check Is in the Mail
May 3, 2024
This essay is the work of a dumb humanoid. No smart software involved.
When the Internet became commonplace in schools, students were taught how to use it as a research tool like encyclopedias and databases. Learning to research is better known as information literacy, and it teaches critical evaluation skills. The biggest takeaway from information literacy is to never take anything at face value, especially on the Internet. When I read CIRA and Continuum Loops’ report, “A Trust Layer For The Internet Is Emerging: A 2023 Report,” I had my doubts.
CIRA is the Canadian Internet Registration Authority, a non-profit organization that supposedly builds a trusted Internet. CIRA acknowledges that as a whole the Internet lacks a shared framework and tool sets to make it trustworthy. The non-profit states that there are small, trusted pockets on the Internet, but they sacrifice technical interoperability for security and trust.
CIRA released a report about how people are losing faith in the Internet. According to the report’s executive summary, the number of Canadians who trust the Internet fell from 71% to 57% while the entire world went from 74% to 63%. The report also noted that companies with a high trust rate outperform their competition. Then there’s this paragraph:
“In this report, CIRA and Continuum Loop identify that pairing technical trust (e.g., encryption and signing) and human trust (e.g., governance) enables a trust layer to emerge, allowing the internet community to create trustworthy digital ecosystems and rebuild trust in the internet as a whole. Further, they explore how trust registries help build trust between humans and technology via the systems of records used to help support these digital ecosystems. We’ll also explore the concept of registry of registries (RoR) and how it creates the web of connections required to build an interoperable trust layer for the internet.”
Does anyone else hear the TLA for Whiskey Tango Foxtrot in their head? Trusted registries sound like a sales gimmick to verify web domains. There are trusted resources on the Internet but even those need to be fact checked. The companies that have secure networks are Microsoft, TikTok, Google, Apple, and other big tech, but the only thing that can be trusted about some outfits are the fat bank accounts.
Whitney Grace, May 3, 2024
Amazon: Big Bucks from Bogus Books
May 3, 2024
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Anyone who shops for books on Amazon should proceed with caution now that “Fake AI-Generated Books Swarm Amazon.” Good e-Reader’s Navkiran Dhaliwal cites an article from Wired as she describes one author’s somewhat ironic experience:
“In 2019, AI researcher Melanie Mitchell wrote a book called ‘Artificial Intelligence: A Guide for Thinking Humans’. The book explains how AI affects us. ChatGPT sparked a new interest in AI a few years later, but something unexpected happened. A fake version of Melanie’s book showed up on Amazon. People were trying to make money by copying her work. … Melanie Mitchell found out that when she looked for her book on Amazon, another ebook with the same title was released last September. This other book was much shorter, only 45 pages. This book copied Melanie’s ideas but in a weird and not-so-good way. The author listed was ‘Shumaila Majid,’ but there was no information about them – no bio, picture, or anything online. You’ll see many similar books summarizing recently published titles when you click on that name. The worst part is she could not find a solution to this problem.”
It took intervention from WIRED to get Amazon to remove the algorithmic copycat. The magazine had Reality Defender confirm there was a 99% chance the book was AI-generated, then contacted Amazon. That finally did the trick. Still, it is unclear whether it is illegal to vend AI-generated “summaries” of existing works and sell them under the original title. Regardless, asserts Mitchell, Amazon should take steps to prevent the practice. Seems reasonable.
And Amazon cares. No, really. Really it does.
Cynthia Murrell, April 29, 2024
Security Conflation: A Semantic Slippery Slope to Persistent Problems
May 2, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
My view is that secrets can be useful. When discussing who has what secret, I think it is important to understand who the players / actors are. When I explain how to perform a task to a contractor in the UK, my transfer of information is a secret; that is, I don’t want others to know the trick to solve a problem that can take others hours or days to resolve. The context is an individual knows something and transfers that specific information so that it does not become a TikTok video. Other secrets are used by bad actors. Some are used by government officials. Commercial enterprises — for example, pharmaceutical companies wrestling with an embarrassing finding from a clinical trial — have their secrets too. Blue-chip consulting firms are bursting with information which is unknown by all but a few individuals.
Good enough, MSFT Copilot. After “all,” you are the expert in security.
I read “Hacker Free-for-All Fights for Control of Home and Office Routers Everywhere.” I am less interested in the details of shoddy security and how it is exploited by individuals and organizations. What troubles me is the use of these words: “All” and “Everywhere.” Categorical affirmatives are problematic in today’s datasphere. The write up conflates any entity working for a government entity with any bad actor intent on committing a crime as cut from the same cloth.
The write up makes two quite different types of behavior identical. The impact of such conflation, in my opinion, is to suggest:
- Government entities are criminal enterprises, using techniques and methods which are in violation of the “law”. I assume that the law is a moral or ethical instruction emitted by some source and known to be a universal truth. For the purposes of my comments, let’s assume the essay’s analysis is responding to some higher authority and anchored on that “universal” truth. (Remember the danger of all and everywhere.)
- Bad actors break laws just like governments; therefore, both are criminals. If true, these people and entities must be punished.
- Some higher authority — not identified in the write up — must step in and bring these evildoers to justice.
The problem is that there is a substantive difference among the conflated bad actors. Those engaged in enforcing laws or protecting a nation state are, one hopes, acting within that specific context; that is, the laws, rules, and conventions of that nation state. When one investigator or analyst seeks “secrets” from an adversary, the reason for the action is, in my opinion, easy to explain: The actor followed the rules spelled out by the context / nation state for which the actor works. If one doesn’t like how France runs its railroad, move to Saudi Arabia. In short, find a place to live where the behaviors of the nation state match up with one’s individual perceptions.
When a bad actor — for example a purveyor of child sexual abuse material on an encrypted messaging application operating in a distributed manner from a country in the Middle East — does his / her business, government entities want to shut down the operation. Substitute any criminal act you want, and the justification for obtaining information to neutralize the bad actor is at least understandable to the child’s mother.
The write up dances into the swamp of conflation in an effort to make clear that the system and methods of good and bad actors are the same. That’s the way life is in the datasphere.
The real issue, however, is not the actors who exploit the datasphere. In my view, the problems begin with:
- Shoddy, careless, or flawed security created and sold by commercial enterprises
- Lax, indifferent, and false economies practiced by individuals and organizations when securing their operating environments
- Failure of regulatory authorities to certify that specific software and hardware meet requirements for security.
How does the write up address fixing the conflation problem, the true root of security issues, and the fact that exploited flaws persist for years? I noted this passage:
The best way to keep routers free of this sort of malware is to ensure that their administrative access is protected by a strong password, meaning one that’s randomly generated and at least 11 characters long and ideally includes a mix of letters, numbers, or special characters. Remote access should be turned off unless the capability is truly needed and is configured by someone experienced. Firmware updates should be installed promptly. It’s also a good idea to regularly restart routers since most malware for the devices can’t survive a reboot. Once a device is no longer supported by the manufacturer, people who can afford to should replace it with a new one.
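The quoted advice on administrative passwords can be sketched in a few lines of code. This is a minimal illustration, assuming Python’s standard `secrets` module; the function name and default length are my own choices, not anything from the cited article:

```python
import secrets
import string

def make_router_password(length: int = 16) -> str:
    """Generate a random admin password matching the quoted advice:
    at least 11 characters, mixing letters, digits, and specials."""
    if length < 11:
        raise ValueError("the advice calls for at least 11 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        # secrets.choice draws from a cryptographically strong RNG,
        # unlike random.choice, which is not suitable for passwords.
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Redraw until all three character classes are present.
        if (any(c.isalpha() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Whether mom will run a script like this before changing her router password is, of course, another matter.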
Right. Blame the individual user. But that individual is just one part of the “problem.” The damage done by conflation and by failing to focus on the root causes remains. Therefore, we live in a compromised environment. Muddled thinking makes life easier for bad actors and harder for those who are charged with enforcing rules and regulations. Okay, mom, change your password.
Stephen E Arnold, May 2, 2024
Search Metrics: One Cannot Do Anything Unless One Finds the Info
May 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The search engine optimization crowd bamboozled people with tales of getting to be number one on Google. The SEO experts themselves were tricked. The only way to appear on the first page of search results is to buy an ad. This is the pay-to-play approach to being found online. Now a person cannot do anything, including getting into the building to start a first job, without searching. The company sent the future wizard an email with the access code. If the new hire cannot locate the access code, she cannot work without jumping through hoops. Most work or fun is similar. Without an ability to locate specific information online, a person is going to be locked out or just lost in space.
The new employee cannot search her email to locate the access code. No job for her. Thanks, MSFT Copilot, a so-so image without the crazy Grandma says, “You can’t get that image, fatso.”
I read a chunk of content marketing called “Predicted 25% Drop In Search Volume Remains Unclear.” The main idea (I think) is that with generative smart software, a person no longer has to check with Googzilla to get information. In some magical world, a person with a mobile phone will listen as the smart software tells a user what information is needed. Will Apple embrace Microsoft AI or Google AI? Will it matter to the user? Will the number of online queries decrease for Google if Apple decides it loves Redmond types more than Googley types? Nope.
The total number of online queries will continue to go up until the giant search purveyors collapse due to overburdened code, regulatory hassles, or their own ineptitude. But what about the estimates of mid-tier consulting firms like Gartner? Hello, do you know that Gartner is essentially a collection of individuals who do the bidding of some work-from-home, self-anointed experts?
Face facts. There is one alleged monopoly controlling search. That is Google. It will take time for an upstart to siphon significant traffic from the constellation of Google services. Even Google’s own incredibly weird approach to managing the company will not be able to prevent people from using the service. Every email search is a search. Every direction in Waze is a search. Every click on a suggested YouTube TikTok knock off is a search. Every click on anything Google is a search. To tidy up the operation, assorted mechanisms for analyzing user behavior provide a fingerprint of users. Advertisers, even if they know they are being given a bit of a casino frippery, have to decide among Amazon, Meta, or, or … Sorry. I can’t think of another non-Google option.
If you want traffic, you can try to pull off a Black Swan event as OpenAI did. But for most organizations, if you want traffic, you pay Google. What about SEO? If the SEO outfit is a Google partner, you are on the Information Highway to Google’s version of Madison Avenue.
But what about the fancy charts and graphs which show Google’s vulnerability? Google’s biggest enemy is Google’s approach to managing its staff, its finances, and its technology. Bing or any other search competitor is going to find itself struggling to survive. Don’t believe me? Just ask the founder of Search2, Neeva, or any other search vendor crushed under Googzilla’s big paw. Unclear? Are you kidding me? Search volume is going to go up until something catastrophic happens. For now, buy Google advertising for traffic. Spend some money with Meta. Use Amazon if you sell fungible things. Google owns most of the traffic. Adjust and quit yapping about some fantasy cooked up by so-called experts.
Stephen E Arnold, May 2, 2024
AI: Strip Mining Life Itself
May 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I may be — like an AI system — hallucinating. I think I am seeing more philosophical essays and medieval-style reasoning recently. A candidate example of such expository writing is “To Understand the Risks Posed by AI, Follow the Money.” After reading the write up, I did not get a sense that the focus was on following the money. Nevertheless, I circled several statements which caught my attention.
Let’s look at these, and you may want to navigate to the original essay to get each statement’s context.
First, the authors focus on what they as academic thinkers call “an extractive business model.” When I saw the term, I thought of the strip mines in Illinois. Giant draglines stripped the earth to expose coal. Once the coal was extracted, the scarred earth was bulldozed into what looked like regular prairie. It was not. Weeds grew. But to get corn or soy beans, the farmer had to spend big bucks to get chemicals and some Fancy Dan equipment to coax the trashed landscape to utility. Nice.
The essay does not make the downside of extractive practices clear. I will. Take a look at a group of teens in a fast food restaurant or at a public event. The group is a consequence of the online environment in which the individual spends hours each day. I am not sure how well the chemicals and equipment used to rehabilitate the strip mined prairie apply to humans, but I assume someone will do a study and report.
The second statement warranting a blue exclamation mark is:
Algorithms have become market gatekeepers and value allocators, and are now becoming producers and arbiters of knowledge.
From my perspective, the algorithms are expressions of human intent. The algorithms are not the gatekeepers and allocators. The algorithms express the intent, goals, and desires of the individuals who create them. The “users” knowingly or unknowingly give up certain thought methods and procedures to obtain what appears to be something that scratches a Maslow’s Hierarchy of Needs itch. I think in terms of the medieval Great Chain of Being. The people at the top own the companies. Their instrument of control is their service. The rest of the hierarchy reflects a skewed social order. A fish understands only the environment of the fish bowl. The rest of the “world” is tough to perceive and understand. In short, the fish is trapped. Online users (addicts?) are trapped.
The third statement I marked is:
The limits we place on algorithms and AI models will be instrumental to directing economic activity and human attention towards productive ends.
Okay, who exactly is going to place limits? The farmer who leased his land to the strip mining outfit made a decision. He traded the land for money. Who is to blame? The mining outfit? The farmer? The system which allowed the transaction?
The situation at this moment is that yip yap about open source AI and the other handwaving cannot alter the fact that a handful of large US companies and a number of motivated nation states are going to spend what’s necessary to obtain control.
Net net: Houston, we have a problem. Money buys power. AI is a next generation way to get it.
Stephen E Arnold, May 2, 2024
Using AI But For Avoiding Dumb Stuff One Hopes
May 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:
I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.
Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber looking in the 9th grade. But good enough, the motto of some big software outfits, right?
The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:
I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).
I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.
However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, the delivery of malware and X-rated content, and enforcing a myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, AI systems failed miserably. Not only were the systems lacking content; one — Microsoft Copilot, to be specific — produced no functional content at all. Two other systems balked at the idea of delivering CSAM within a Group’s Channel devoted to paying customers of what is either illegal or extremely unpleasant content.
Several observations are warranted:
- For certain types of content, the systems lack sufficient data to know what the heck I am talking about
- For illegal activities, the systems are either pretending to be really stupid or the developers have added STOP words to the filters to make darned sure no improper output would be presented
- The systems are not up to date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukraine Telegram Groups’ content to Telegram users in Russia.
Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?
Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic-based systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.
Stephen E Arnold, May 1, 2024
Big Tech and Their Software: The Tent Pole Problem
May 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third parties were usually fall guys. No one knew what I was talking about.
Okay, ChatGPT, good enough.
I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.
Let’s look at the laws explicated in the essay.
The first law is that software exists to support a real-world task. A result (a corollary maybe?) is that the software has to evolve. That is the old chestnut “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. As a result, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.
The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.
The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:
Code can be shaped and built upon.
My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.
The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:
… we should embrace the malleability of code and avoid redesign processes at all costs!
Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.
Stephen E Arnold, May 1, 2024
A High-Tech Best Friend and Campfire Lighter
May 1, 2024
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A dog is allegedly man’s best friend. I have a French bulldog, and I am not 100 percent sure that’s an accurate statement. But I now have a way to get the pal I have wanted for years.
Ars Technica reports “You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000” from Ohio-based maker Throwflame. See the article for footage of this contraption setting fire to what appears to be a forest. Terrific. Reporter Benj Edwards writes:
“Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone. It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration. The company lists possible applications of the new robot as ‘wildfire control and prevention,’ ‘agricultural management,’ ‘ecological conservation,’ ‘snow and ice removal,’ and ‘entertainment and SFX.’ But most of all, it sets things on fire in a variety of real-world scenarios.”
And what does my desired dog look like? The GenY Tibby asleep at work? Nope.
I hope my Thermonator includes an AI at the controls. Maybe that will be an add-on feature in 2025? Unitree, maker of the robot base mentioned above, once vowed to oppose the weaponization of its products (along with five other robotics firms). Perhaps Throwflame won them over with assertions that the device is not technically a weapon, since flamethrowers are not considered firearms by federal agencies. It is currently legal to own this mayhem machine in 48 states. Certain restrictions apply in Maryland and California. How many crazies can scrape together a mere $9,420 plus tax for that kind of power? Even factoring in the cost of napalm (sold separately), probably quite a few.
Cynthia Murrell, May 1, 2024