Search Metrics: One Cannot Do Anything Unless One Finds the Info
May 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The search engine optimization crowd bamboozled people with tales of getting to be number one on Google. The SEO experts themselves were tricked. The only way to appear on the first page of search results is to buy an ad. This is the pay-to-play approach to being found online. Now a person cannot do anything, including getting in the building to start a first job, without searching. The company sent the future wizard an email with the access code. If the new hire cannot locate the access code, she cannot work without jumping through hoops. Most work or fun is similar. Without an ability to locate specific information online, a person is going to be locked out or just lost in space.
The new employee cannot search her email to locate the access code. No job for her. Thanks, MSFT Copilot. A so-so image, but at least there is no crazy Grandma saying, “You can’t get that image, fatso.”
I read a chunk of content marketing called “Predicted 25% Drop In Search Volume Remains Unclear.” The main idea (I think) is that with generative smart software, a person no longer has to check with Googzilla to get information. In some magical world, a person with a mobile phone will listen as the smart software tells a user what information is needed. Will Apple embrace Microsoft AI or Google AI? Will it matter to the user? Will the number of online queries decrease for Google if Apple decides it loves Redmond types more than Googley types? Nope.
The total number of online queries will continue to go up until the giant search purveyors collapse due to overburdened code, regulatory hassles, or their own ineptitude. But what about the estimates of mid-tier consulting firms like Gartner? Hello, do you know that Gartner is essentially a collection of individuals who do the bidding of some work-from-home, self-anointed experts?
Face facts. There is one alleged monopoly controlling search. That is Google. It will take time for an upstart to siphon significant traffic from the constellation of Google services. Even Google’s own incredibly weird approach to managing the company will not be able to prevent people from using the service. Every email search is a search. Every direction in Waze is a search. Every click on a suggested YouTube TikTok knock-off is a search. Every click on anything Google is a search. To tidy up the operation, assorted mechanisms for analyzing user behavior provide a fingerprint of users. Advertisers, even if they know they are being given a bit of a casino frippery, have to decide among Amazon, Meta, or, or … Sorry. I can’t think of another non-Google option.
If you want traffic, you can try to pull off a Black Swan event as OpenAI did. But for most organizations, if you want traffic, you pay Google. What about SEO? If the SEO outfit is a Google partner, you are on the Information Highway to Google’s version of Madison Avenue.
But what about the fancy charts and graphs which show Google’s vulnerability? Google’s biggest enemy is Google’s approach to managing its staff, its finances, and its technology. Bing or any other search competitor is going to find itself struggling to survive. Don’t believe me? Just ask the founder of Search2, Neeva, or any other search vendor crushed under Googzilla’s big paw. Unclear? Are you kidding me? Search volume is going to go up until something catastrophic happens. For now, buy Google advertising for traffic. Spend some money with Meta. Use Amazon if you sell fungible things. Google owns most of the traffic. Adjust and quit yapping about some fantasy cooked up by so-called experts.
Stephen E Arnold, May 2, 2024
AI: Strip Mining Life Itself
May 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I may be — like an AI system — hallucinating. I think I am seeing more philosophical essays and medieval-style ratiocination recently. A candidate example of this expository writing is “To Understand the Risks Posed by AI, Follow the Money.” After reading the write up, I did not get a sense that the focus was on following the money. Nevertheless, I circled several statements which caught my attention.
Let’s look at these, and you may want to navigate to the original essay to get each statement’s context.
First, the authors focus on what they, as academic thinkers, call “an extractive business model.” When I saw the term, I thought of the strip mines in Illinois. Giant draglines stripped the earth to expose coal. Once the coal was extracted, the scarred earth was bulldozed into what looked like regular prairie. It was not. Weeds grew. But to get corn or soybeans, the farmer had to spend big bucks to get chemicals and some Fancy Dan equipment to coax the trashed landscape to utility. Nice.
The essay does not make the downside of extractive practices clear. I will. Take a look at a group of teens in a fast food restaurant or at a public event. The group is a consequence of the online environment in which the individual spends hours each day. I am not sure how well the chemicals and equipment used to rehabilitate the strip-mined prairie apply to humans, but I assume someone will do a study and report.
The second statement warranting a blue exclamation mark is:
Algorithms have become market gatekeepers and value allocators, and are now becoming producers and arbiters of knowledge.
From my perspective, the algorithms are expressions of human intent. The algorithms are not the gatekeepers and allocators. The algorithms express the intent, goals, and desires of the individuals who create them. The “users” knowingly or unknowingly give up certain thought methods and procedures in exchange for what appears to be something that scratches a Maslow’s Hierarchy of Needs itch. I think in terms of the medieval Great Chain of Being. The people at the top own the companies. Their instrument of control is their service. The rest of the hierarchy reflects a skewed social order. A fish understands only the environment of the fish bowl. The rest of the “world” is tough to perceive and understand. In short, the fish is trapped. Online users (addicts?) are trapped.
The third statement I marked is:
The limits we place on algorithms and AI models will be instrumental to directing economic activity and human attention towards productive ends.
Okay, who exactly is going to place limits? The farmer who leased his land to the strip mining outfit made a decision. He traded the land for money. Who is to blame? The mining outfit? The farmer? The system which allowed the transaction?
The situation at this moment is that yip yap about open source AI and the other handwaving cannot alter the fact that a handful of large US companies and a number of motivated nation states are going to spend what’s necessary to obtain control.
Net net: Houston, we have a problem. Money buys power. AI is a next generation way to get it.
Stephen E Arnold, May 2, 2024
Using AI But For Avoiding Dumb Stuff One Hopes
May 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:
I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.
Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber looking in the 9th grade. But good enough, the motto of some big software outfits, right?
The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:
I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).
I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.
However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, the delivery of malware and X-rated content, and enforcing a myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, AI systems failed miserably. Not only were the systems lacking content; one — Microsoft Copilot, to be specific — had no functional content at all. Two other systems balked at addressing the delivery of CSAM within a Group’s Channel devoted to paying customers of illegal or extremely unpleasant content.
Several observations are warranted:
- For certain types of content, the systems lack sufficient data to know what the heck I am talking about
- For illegal activities, the systems are either pretending to be really stupid, or the developers have added STOP words to the filters to make darned sure no improper output would be presented
- The systems are not up to date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukraine Telegram Groups’ content for Telegram users in Russia.
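The second observation above — developers adding STOP words to the filters — can be illustrated with a minimal sketch. To be clear, this is not any vendor's actual moderation code; the term list, function name, and canned refusal string are all hypothetical, chosen only to show how blunt a keyword guardrail is:

```python
# Minimal, hypothetical sketch of a keyword-based refusal filter,
# the kind of blunt guardrail the observation above speculates about.
# STOP_TERMS and the refusal text are invented for illustration.

STOP_TERMS = {"csam", "malware delivery", "obfuscated communications"}

def refuse_or_answer(prompt: str) -> str:
    """Return a canned refusal if any stop term appears in the prompt."""
    lowered = prompt.lower()
    if any(term in lowered for term in STOP_TERMS):
        return "I can't help with that request."
    return "ANSWER: " + prompt  # placeholder for a real model call

print(refuse_or_answer("Tell me about malware delivery on Telegram"))
print(refuse_or_answer("Summarize the history of Telegram"))
```

Note how crude the mechanism is: the filter fires on surface strings, not meaning, which is consistent with systems that appear "really stupid" on topics they could otherwise discuss.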
Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?
Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic-based systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.
Stephen E Arnold, May 1, 2024
Big Tech and Their Software: The Tent Pole Problem
May 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third parties were usually fall guys. No one knew what I was talking about.
Okay, ChatGPT, good enough.
I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.
Let’s look at the laws explicated in the essay.
The first law is that software is to support a real-world task. A result (a corollary maybe?) is that the software has to evolve. That is the old chestnut “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. As a result, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.
The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.
The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:
Code can be shaped and built upon.
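The first two laws can be sketched in a few lines. In this hypothetical example (the function names and numbers are invented), the original routine is never rewritten; it is wrapped, and each wrapper is one of those "snappy new modules" that quietly adds complexity and hidden dependencies:

```python
# Hypothetical sketch of the first two laws: the legacy routine is
# left untouched; evolution happens by wrapping, and each wrapper
# adds another layer of complexity the caller now depends on.

def legacy_price(quantity: int) -> float:
    """The original software: simple, and nobody dares touch it."""
    return quantity * 9.99

def price_with_discount(quantity: int) -> float:
    """Wrapper #1: a 'snappy new module' bolted onto the original."""
    base = legacy_price(quantity)
    return base * 0.9 if quantity >= 100 else base

def price_with_tax(quantity: int) -> float:
    """Wrapper #2: another layer, another hidden dependency."""
    return round(price_with_discount(quantity) * 1.07, 2)

print(price_with_tax(100))  # the caller now traverses three layers
```

The point of the sketch is the shape, not the arithmetic: after a few rounds of this, a one-line change to `legacy_price` ripples through every wrapper, which is exactly the surprise cost the second law predicts.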
My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with, it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.
The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:
… we should embrace the malleability of code and avoid redesign processes at all costs!
Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.
Stephen E Arnold, May 1, 2024
A High-Tech Best Friend and Campfire Lighter
May 1, 2024
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
A dog is allegedly man’s best friend. I have a French bulldog, and I am not 100 percent sure that’s an accurate statement. But I have a way to get the pal I have wanted for years.
Ars Technica reports “You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000” from Ohio-based maker Throwflame. See the article for footage of this contraption setting fire to what appears to be a forest. Terrific. Reporter Benj Edwards writes:
“Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone. It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration. The company lists possible applications of the new robot as ‘wildfire control and prevention,’ ‘agricultural management,’ ‘ecological conservation,’ ‘snow and ice removal,’ and ‘entertainment and SFX.’ But most of all, it sets things on fire in a variety of real-world scenarios.”
And what does my desired dog look like? The GenY Tibby asleep at work? Nope.
I hope my Thermonator includes an AI at the controls. Maybe that will be an add-on feature in 2025? Unitree, maker of the robot base mentioned above, once vowed to oppose the weaponization of its products (along with five other robotics firms). Perhaps Throwflame won them over with assertions that their device is not technically a weapon, since flamethrowers are not considered firearms by federal agencies. It is currently legal to own this mayhem machine in 48 states. Certain restrictions apply in Maryland and California. How many crazies can get their hands on a mere $9,420 plus tax for that kind of power? Even factoring in the cost of napalm (sold separately), probably quite a few.
Cynthia Murrell, May 1, 2024
One Half of the Sundar & Prabhakar Act Gets Egged: Garrf.
April 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
After I wrote Google Version 2: The Calculating Predator, BearStearns bought the rights to portions of my research and published one of its analyst reports. In that report, a point was made about Google’s research into semantic search. Remember, this was in 2005, long before the AI balloon inflated to the size of Taylor Swift’s piggy bank. My client (whom I am not allowed to name) and I were in the Manhattan BearStearns’ office. We received a call from Prabhakar Raghavan, who was the senior technology something at Yahoo at that time. I knew of Dr. Raghavan because he had been part of the Verity search outfit. On that call, Dr. Raghavan was annoyed that BearStearns suggested Yahoo was behind the eight ball in Web search. We listened, and I pointed out that Yahoo was not matching Google’s patent filing numbers. Although not an indicator of innovation, it is one indicator. The Yahoo race car had sputtered and had lost the search race. I recall one statement Dr. Raghavan uttered, “I can do a better search engine for $300,000.” Well, I am still waiting. Dr. Raghavan may have an opportunity to find his future elsewhere if he continues to get the type of improvised biographical explosive device shoved under his office door at Google. I want to point out that I thought Dr. Raghavan’s estimate of the cost of search was a hoot. How could he beat that for a joke worthy of Jack Benny?
A big dumb bunny gets egged. Thanks, MSFT Copilot. Good enough.
I am referring to “The Man Who Killed Google Search,” written by Edward Zitron. For those to whom Mr. Zitron is not a household name like Febreze air freshener, he is “the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron). I host the Better Offline Podcast, coming to iHeartRadio and everywhere else you find your podcasts February 2024.” For more about Mr. Zitron, navigate to this link. (Yep, it takes quite a while to load, but be patient.)
The main point of the write up is that the McKinsey-experienced Sundar Pichai (the other half of the comedy act) hired the article-writing, Verity-seasoned Dr. Raghavan to help steer the finely-crafted corporate aircraft carrier, USS Google, into the Sea of Money. Even though the duo are not very good at comedy, they are doing a bang-up job of making the creaking online advertising machine output big money. If you don’t know how big, just check out the earnings for the most recent financial quarter at this link. If you don’t want to wade through Silicon Valley jargon, Google is “a two trillion dollar company.” How do you like that, Mr. and Mrs. Traditional Advertising?
The write up is filled with proper names of Googlers past and present. The point is that the comedy duo dumped some individuals who embraced the ethos of the old, engineering-oriented, relevant search results Google. The vacancies were filled with those who could shove more advertising into what once were clean, reasonably well-lighted places. At the same time, carpetland (my term for the executive corridor down which Messrs. Brin and Page once steered their Segways) elevated above the wonky world of the engineers, the programmers, the Ivory Tower thinker types, and outright wonkiness of the advanced research units. (Yes, there were many at one time.)
Using the thought processes of McKinsey (the opioid idea folks) and the elocutionary skills of Dr. Raghavan, Google search degraded while the money continued to flow. The story presented by Mr. Zitron is interesting. I will leave it to you to internalize it and thank your lucky stars you are not given the biographical improvised explosive device as a seat cushion. Yowzah.
Several observations:
- I am not sure the Sundar & Prabhakar duo wrote the script for the Death of Google Search. Believe me, there were other folks in Google carpetland aiding the process. How about a baby maker in the legal department as an example of ground principles? What about an attempted suicide by a senior senior senior manager’s squeeze? What about a big time thinker’s untimely demise as a result of narcotics administered by a rental female?
- The problems at Google are a result of decades of high school science club members acting out their visions of themselves as masters of the universe and a desire to rig the game so money flowed. Cleverness, cute tricks, and owning the casino and the hotel and the parking lot were part of Google’s version of Hotel California. The business set up was money in, fancy dancing in public, and nerdland inside. Management? Hey, math is hard. Managing is zippo.
- The competitive arena was not set up for a disruptor like the Google. I do not want to catalog what the company did to capture what appears to be a very good market position in online advertising. After a quarter century, the idea that Google might be an alleged monopoly is getting some attention. But alleged is one thing; change is another.
- The innovator’s dilemma has arrived in the lair of Googzilla. After inventing tensors, OpenAI made something snazzy with them and cut a deal with Microsoft. The result was the AI hyper moment with Google viewed as a loser. Forget the money. Google is not able to respond, some said. Perception is important. The PR gaffe in Paris where Dr. Prabhakar showed off Bard outputting incorrect information; the protests and arrests of staff; and the laundry list of allegations about the company’s business practices in the EU are compounding the one really big problem — Google’s ability to control its costs. Imagine. A corporate grunt sport could be the hidden disease. Is Googzilla clear headed or addled? Time will tell I believe.
Net net: The man who killed Google is just a clueless accomplice, not the wizard with the death ray cooking the goose and its eggs. Ultimately, in my opinion, we have to blame the people who use Google products and services, rely on Google advertising, and trust search results. Okay, Dr. Raghavan, suspended sentence. Now you can go build your $300,000 Web search engine. I will be available to evaluate it as I did Search2, Neeva, and the other attempts to build a better Google. Can you do it? Sure, you will be a Xoogler. Xooglers can do anything. Just look at Mr. Brin’s airship. And that egg will wash off, unlike that crazy idea to charge Verity customers for each index entry passed for each user’s query. And that’s the joke that’s funnier than the Paris bollocksing of smart software. Taxi meter pricing for an in-house, enterprise search system. That is truly hilarious.
Stephen E Arnold, April 30, 2024
The Google Explains the Future of the Google Cloud: Very Googley, Of Course
April 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
At its recent Next 24 conference, Google Cloud and associates shared their visions for the immediate future of AI. Through the event’s obscurely named Session Library, one can watch hundreds of sessions and access resources connected to many more. The idea — if you have not caught on to the Googley nomenclature — is to make available videos of the talks at the conference. To narrow the results, one can filter by session category, conference track, learning level, solution, industry, topic of interest, and whether video is available. Keep in mind that the words you (a normal human, I presume) may use to communicate your interest may not be the lingo Googzilla speaks. AI and Machine Learning feature prominently. Other key areas include data and databases, security, development and architecture, productivity, and revenue growth (naturally). There is even a considerable nod to diversity, equity, and inclusion (DEI). Okay, nod, nod.
Here are a few session titles from just the “AI and ML” track to illustrate the scope of this event and the available information:
- A cybersecurity expert’s guide to securing AI products with Google SAIF
- AI for banking: Streamline core banking services and personalize customer experiences
- AI for manufacturing: Enhance productivity and build innovative new business models
- AI for telecommunications: Transform customer interactions and network operations
- AI in capital markets: The biggest bets in the industry
- Accelerate software delivery with Gemini and Code Transformations
- Revolutionizing healthcare with AI
- Streamlining access to youth mental health services
It looks like there is something for everybody. We think the titles make reasonably clear the scope and bigness of Google’s aspirations. Nor would we expect less from a $2 trillion outfit based on advertising, would we? Run a query for Code Red (or, in Google lingo, CodeRED), and you will be surprised that the state-of-emergency, Microsoft-is-a-PR-king mentality persists. (Is this the McKinsey way?) Well, not for those employed at McKinsey. Former McKinsey professionals have more latitude in their management methods; for example, emulating high school science club planning techniques. There are no sessions we could spot about Google’s competition. If one is big enough, there is no competition. One of Googzilla’s relatives made a mess of Tokyo real estate largely without lasting consequences.
Cynthia Murrell, April 30, 2024
NSO Pegasus: No Longer Flying Below the Radar
April 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “AP Exclusive: Polish Opposition Senator Hacked with Spyware.” I remain fearful of quoting the AP or Associated Press. I think it is a good business move to have an 89-year-old terrified of an American institution, don’t you? I think I am okay if I tell you the AP recycled a report from the University of Toronto’s Citizen Lab. Once again, the researchers have documented the use of what I call “intelware” by a nation state. The AP and other “real” news outfits prefer the term “spyware.” I think it has more sizzle, but I am going to put NSO Group’s mobile phone system and method in the category of intelware. The reason is that specialized software like Pegasus gathers information for a nation’s intelligence entities. Well, that’s the theory. The companies producing these platforms and tools want to answer such questions as “Who is going to undermine our interests?” or “What’s the next kinetic action directed at our facilities?” or “Who is involved in money laundering, human trafficking, or arms deals?”
Thanks, MSFT Copilot. Cutting down the cycles for free art, are you?
The problem is that specialized software is no longer secret. The Citizen Lab and the AP have been diligent in explaining how some of the tools work and what type of information can be gathered. My personal view is that information about these tools has been converted into college programming courses, open source software tools, and headline grabbing articles. I know from personal experience that most people do not have a clue how data from an iPhone can be exfiltrated, cross correlated, and used to track down those who would violate the laws of a nation state. But, as the saying goes, information wants to be free. Okay, it’s free. How about that?
The write up contains an interesting statement. I want to note that I am not plagiarizing, undermining advertising sales, or choking off subscriptions. I am offering the information as a peg on which to hang some observations. Here’s the quote:
“My heart sinks with each case we find,” Scott-Railton [a senior researcher at UT’s Citizen Lab] added. “This seems to be confirming our worst fear: Even when used in a democracy, this kind of spyware has an almost immutable abuse potential.”
Okay, we have malware, a command-and-control system, logs, and a variety of delivery mechanisms.
I am baffled because malware is used by both good and bad actors. Exactly what do the University of Toronto and the AP want to happen? The reality is that once secret information is leaked, it becomes the Teflon for rapidly diffusing applications. Does writing about what I view as an “old” story change what’s happening with potent systems and methods? Will government officials join in a kumbaya moment and force the systems and methods to fall into disuse? Endless recycling of an instrumental action by this country or that agency gets us where?
In my opinion, the sensationalizing of behavior does not correlate with responsible use of technologies. I think the Pegasus story is a search for headlines or recognition for saying, “Look what we found. Country X is a problem!” Spare me. Change must occur within institutions. Those engaged in the use of intelware and related technologies are aware of issues. These are, in my experience, not ignored. Improper behavior is rampant in today’s datasphere.
Standing on the sidelines and yelling at a player who let the team down does what exactly? Perhaps a more constructive approach can be identified and offered as a solution beyond Pegasus again? Broken record. I know you are “just doing your job.” Fine but is there a new tune to play?
Stephen E Arnold, April 29, 2024
A Modern Spy Novel: A License to Snoop
April 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
“UK’s Investigatory Powers Bill to Become Law Despite Tech World Opposition” reports the Investigatory Powers Amendment Bill or IPB is now a law. In a nutshell, the law expands the scope of data collection by law enforcement and intelligence services. The Register, a UK online publication, asserts:
Before the latest amendments came into force, the IPA already allowed authorized parties to gather swathes of information on UK citizens and tap into telecoms activity – phone calls and SMS texts. The IPB’s amendments add to the Act’s existing powers and help authorities trawl through more data, which the government claims is a way to tackle “modern” threats to national security and the abuse of children.
Thanks, Copilot. A couple of omissions from my prompt, but your illustration is good enough.
One UK elected official said:
“Additional safeguards have been introduced – notably, in the most recent round of amendments, a ‘triple-lock’ authorization process for surveillance of parliamentarians – but ultimately, the key elements of the Bill are as they were in early versions – the final version of the Bill still extends the scope to collect and process bulk datasets that are publicly available, for example.”
Privacy advocates are concerned about expanding data collections’ scope. The Register points out that “big tech” feels as though it is being put on the hot seat. The article includes this statement:
Abigail Burke, platform power program manager at the Open Rights Group, previously told The Register, before the IPB was debated in parliament, that the proposals amounted to an “attack on technology.”
Several observations:
- The UK is a member in good standing of an intelligence sharing entity which includes Australia, Canada, New Zealand, and the US. These nation states watch one another’s activities and sometimes emulate certain policies and legal frameworks.
- The IPA may be one additional step on a path leading to a ban on end-to-end-encrypted messaging. Such a ban, if passed, would prove disruptive to a number of business functions. Bad actors will ignore such a ban and continue their effort to stay ahead of law enforcement using homomorphic encryption and other sophisticated techniques to keep certain content private.
- Opportunistic messaging firms like Telegram may incorporate technologies which effectively exploit modern virtual servers and other technology to deploy networks which are hidden and effectively less easily “seen” by existing monitoring technologies. Bad actors can implement new methods forcing LE and intelligence professionals to operate in reaction mode. IPA is unlikely to change this cat-and-mouse game.
- Each day brings news of new security issues with widely used software and operating systems. Banning encryption may have some interesting downstream and unanticipated effects.
Net net: I am not sure that modern threats will decrease under IPA. Even countries with the most sophisticated software, hardware, and humanware security systems can be blindsided. Gaffes in Israel have had devastating consequences that an IPA-type approach would not have remedied.
Stephen E Arnold, April 29, 2024
Right, Professor. No One Is Using AI
April 29, 2024
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Artificial intelligence and algorithms aren’t new buzzwords, but they are the favorite technology jargon being tossed around BI and IT water coolers. (Or would it be Zoom conferences these days?) AI has been a part of modern life for years, but AI engines are finally “smart enough” to do actual jobs—sort of. There are still big problems with AI, but one expert shares his take on why the technology isn’t being adopted more in the UiPath article: “3 Common Barriers to AI Adoption and How to Overcome Them.”
Whenever new technology hits the market, experts write lists about why more companies aren’t implementing it. The first “mistake” is a lack of a plan for adopting AI: companies don’t know about all the work processes within their own operations. The way to overcome this issue is to take an inventory of the processes, and this can be done via data mining. That’s not so simple if a company doesn’t have the software or know-how.
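The "inventory of processes via data mining" idea is less exotic than it sounds. A minimal, hypothetical sketch of process mining: given an event log of (case, activity) records, tally which activity follows which to see what workflows actually run. The log contents and function name here are invented for illustration:

```python
from collections import Counter

# Hypothetical process-mining-in-miniature: count activity-to-activity
# transitions in an event log to inventory the workflows a company
# actually runs (as opposed to the ones it thinks it runs).

event_log = [
    ("case1", "receive order"), ("case1", "check credit"), ("case1", "ship"),
    ("case2", "receive order"), ("case2", "ship"),
]

def transition_counts(log):
    """Count (activity, next_activity) pairs within each case."""
    by_case = {}
    for case, activity in log:
        by_case.setdefault(case, []).append(activity)
    counts = Counter()
    for steps in by_case.values():
        counts.update(zip(steps, steps[1:]))
    return counts

for pair, n in transition_counts(event_log).most_common():
    print(pair, n)
```

Real process-mining suites do far more (timing, variants, conformance checking), but the core idea is this kind of counting, which is why the advice falls flat for firms without the software or know-how to capture event logs in the first place.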
The second “mistake” is lack of expertise about the subject. The cure for this is classes and “active learning.” Isn’t that another term for continuing education? The third “mistake” is lack of trust and risks surrounding AI. Those exist because the technology is new and needs to be tested more before it’s deployed on a mass scale. Smaller companies don’t want to be guinea pigs so they wait until the technology becomes SOP.
AI is another tool that will become as ubiquitous as mobile phones, but the expert is correct about this:
These barriers are significant, but they pale in comparison to the risk of delaying AI adoption. Early adopters are finding new AI use cases and expanding their lead on the competition every day.
There’s lots to do to prepare your organization for this new era, but there’s also plenty of value and advantages waiting for you along your AI adoption journey. Automation can do a lot to help you move forward quickly to capture AI’s value across your organization.”
If your company finds an AI solution that works, then that’s wonderful. Automation is part of advancing technology, but AI isn’t ready to be deployed by all companies. If something works for a business and it’s not too archaic, then don’t fix what ain’t broke.
But students have figured out how to use AI to deal with certain professors. No, I am not mentioning any names.
Whitey Grace, April 29, 2024


