The Best Of The Worst Failed AI Experiments

January 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

We never think about technology failures (unless something explodes or people die) because we want to concentrate on our successes. To succeed, however, we must fail many times so we learn from mistakes. It’s also important to note and share our failures so others can benefit, and sometimes it’s just funny. C#Corner listed “The Top AI Experiments That Failed,” and some of them are real doozies.

The list notes some of the more famous AI disasters, like Microsoft’s Tay chatbot that became a cursing, racist misogynist and Uber’s accident with a self-driving car. Some projects are examples of obvious AI failures, such as Amazon using AI for job recruitment with training data heavily skewed toward males. The end result: women weren’t hired.

Other incidents were not surprising. A Knightscope K5 security robot didn’t detect a child and accidentally knocked the kid down. The child was fine, but the incident prompted more safety checks. The US stock market integrated AI-driven high-frequency trading algorithms to execute rapid trades. The algorithms contributed to the Flash Crash of 2010, making the Dow Jones Industrial Average sink 600 points in five minutes.

The scariest, coolest failure is Facebook’s language experiment:

“In an effort to develop an AI system capable of negotiating with humans, Facebook conducted an experiment where AI agents were trained to communicate and negotiate. However, the AI agents evolved their own language, deviating from human language rules, prompting concerns and leading to the termination of the experiment. The incident raised questions about the potential unpredictability of AI systems and the need for transparent and controllable AI behavior.”

Facebook’s language experiment is solid proof that AI will evolve. Hopefully when AI does evolve the algorithms will follow Asimov’s Laws of Robotics.

Whitney Grace, January 3, 2024

Kagi For-Fee Search: Comments from a Thread about Search

January 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Comparisons of search engine performance are quite difficult to design, run, and analyze. In the good old days when commercial databases reigned supreme, special librarians could select queries, refine them, and then run those queries via Dialog, LexisNexis, DataStar, or another commercial search engine. Results were tabulated, and hard copy printouts on thermal paper were examined. The process required knowledge of the search syntax, expertise in query shaping, and the knowledge in the minds of the special librarians performing the analysis. Others were involved, but the work focused on determining overlap among databases, analysis of relevance (human and mathematical), and expertise gained from work in the commercial database sector, academic training, and work in a special library.

Who does that now? Answer: No one. With this prefatory statement, let’s turn our attention to “How Bad Are Search Results? Let’s Compare Google, Bing, Marginalia, Kagi, Mwmbl, and ChatGPT.” Please, read the write up. The guts of the analysis, in my opinion, appear in this table:

image

The point is that search sucks. Let’s move on. The most interesting outcome from this write up, from my vantage point, is the comments in the Hacker News post. What I want to do is highlight observations about Kagi.com, a pay-to-use Web search service. The items I have selected make clear why starting a Web search business and building sustainable revenue from it is difficult. Furthermore, the comments I have selected make clear that without an editorial policy and specific information about the corpus, its updating, and its content acquisition method, evaluation is difficult.

Long gone are the days of precision and recall, and I am not sure most of today’s users know or care. I still do, but I am a dinobaby and one of those people who created an early search-related service called The Point (Top 5% of the Internet), the Auto Channel, and a number of other long-forgotten Web sites that delivered some type of findability. Why am I roadkill on the information highway? No one knows or cares about the complexity of finding information in print or online. Sure, some graduate students do, but are you aware that the modern academic just makes up or steals information? Consider, for instance, the former president of Stanford University.

Okay, here are some comments about Kagi.com from Hacker News. (Please, don’t write me and complain that I am unfair. I am just doing what dinobabies with graduate degrees do: provide footnotes.)

hannasanario: I’m not able to reproduce the author’s bad results in Kagi, at all. What I’m seeing when searching the same terms is fantastic in comparison. I don’t know what went wrong there. Dinobaby comment: Search results, in the absence of editorial policies and other basic information about valid syntax means subjectivity is the guiding light. Remember that soap operas were once sponsored influencer content.

Semaphor: This whole thread made me finally create a file for documenting bad searches on Kagi. The issue for me is usually that they drop very important search terms from the query and give me unrelated results. Dinobaby comment: Yep, editorial functions in results, and these are often surprises. But when people know zero about a topic, who cares? Not most users.

Szundi: Kagi is awesome for me too. I just realize using Google somewhere else because of the sh&t results. Dinobaby comment: Ah, the Google good enough approach is evident in this comment. But it is subjective, merely an opinion. Just ask a stockholder. Google delivers, kiddo.

Mrweasel: Currently Kagi is just as dependent on Google as DuckDuckGo is on Bing. Dinobaby comment: Perhaps Kagi is not communicating where content originates, how results are generated, and why the information strikes Mrweasel as “dependent on Google.” Neeva was an outfit that wanted to go beyond Google and ended up, after much marketing hoo hah, selling itself to some entity.

Fenaro: Kagi should hire the Marginalia author. Dinobaby comment: Staffing suggestions are interesting but disconnected from reality in my opinion.

ed109685: Kagi works because there is no incentive for SEO manipulators to target it since their market share is so small. Dinobaby comment: Ouch, small.

shado: I became a huge fan of Kagi after seeing it on hacker news too. It’s amazing how good a search engine can be when it’s not full of ads. Dinobaby comment: A happy customer but no hard data or examples. Subjectivity in full blossom.

yashasolutions: Kagi is great… So I switch recently to Kagi, and so far it’s been smooth sailing and a real time saver. Dinobaby comment: Score another happy, paying customer for Kagi.

innocentoldguy: I like Kagi and rarely use anything else. Kagi’s results are decent and I can blacklist sites like Amazon.com so they never show up in my search results. Dinobaby comment: Another dinobaby who is an expert about search.

What does this selection of Kagi-related comments reveal about Web search? Here’s a snapshot of my notes:

  1. Kagi is not marketing its features and benefits particularly well, but what search engine is? With Google sucking in more than 90 percent of the query action, big bucks are required to get the message out. This means that subscriptions may be tough to sell because marketing is expensive and people sign up, then cancel.
  2. There is quite a bit of misunderstanding among “expert” searchers like the denizens of Hacker News. The nuances of a Web search, money supported content, metasearch, result matching, etc. make search a great big cloud of unknowing for most users.
  3. The absence of reproducible results illustrates what happens when consumerization of search and retrieval becomes the benchmark. The pursuit of good enough results in loss of finding functionality and expertise.

Net net: Search sucks. Oh, wait, I used that phrase in an article for Barbara Quint 35 years ago.

PS. Mwmbl is at https://mwmbl.org in case you are not familiar with the open source, nonprofit search engine. You have to register, well, because…

Stephen E Arnold, January 2, 2024

Kiddie Control: Money and Power. What Is Not to Like?

January 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I want to believe outputs from Harvard University. But the ethics professor who made up data about ethics and the more recent publicity magnet of the possibly former university president nag at me. Nevertheless, let’s assume that some of the data in “Social Media Companies Made $11 Billion in US Ad Revenue from Minors, Harvard Study Finds” are semi-correct or at least close enough for horseshoes. (You may encounter a paywall or a 404 error. Well, just trust a free Web search system to point you to a live version of the story. I admit that I was lucky. The link from my colleague worked.)

image

The senior executive sets the agenda for the “exploit the kiddies” meeting. Control is important. Ignorant children learn whom to trust, believe, and follow. Does this objective require an esteemed outfit like the Harvard U. to state the obvious? Seems like it. Thanks, MSFT Copilot, you output child art without complaint. Consistency is not your core competency, is it?

From the write up whose authors I hope are not crossing their fingers like some young people do to neutralize a lie.

Check this statement:

The researchers say the findings show a need for government regulation of social media since the companies that stand to make money from children who use their platforms have failed to meaningfully self-regulate. They note such regulations, as well as greater transparency from tech companies, could help alleviate harms to youth mental health and curtail potentially harmful advertising practices that target children and adolescents.

The sentences contain what I think are silly observations. “Self regulation” is a bit of a sci-fi notion in today’s get-rich-quick high-technology business environment. The idea of getting possible oligopolists together to set some rules that might hurt revenue generation is something from an alternative world. Plus, the concept of “government regulation” strikes me as a punch line for a stand up comedy act. How are regulatory agencies and elected officials addressing the world of digital influencing? Answer: Sorry, no regulation. The big outfits in many situations are the government. What elected official or Washington senior executive service professional wants to do something that cuts off the flow of nifty swag from the technology giants? Answer: No one. Love those mouse pads, right?

Now consider these numbers, which are going to be tough to validate. Have you tried to ask TikTok about its revenue? What about that much-loved Google? Nevertheless, these are interesting if squishy:

According to the Harvard study, YouTube derived the greatest ad revenue from users 12 and under ($959.1 million), followed by Instagram ($801.1 million) and Facebook ($137.2 million). Instagram, meanwhile, derived the greatest ad revenue from users aged 13-17 ($4 billion), followed by TikTok ($2 billion) and YouTube ($1.2 billion). The researchers also estimate that Snapchat derived the greatest share of its overall 2022 ad revenue from users under 18 (41%), followed by TikTok (35%), YouTube (27%), and Instagram (16%).
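Taking the study’s itemized figures above at face value, a quick sum shows they account for roughly $9.1 billion; the rest of the $11 billion headline presumably comes from platforms and age brackets not itemized in the excerpt. This is a back-of-the-envelope sketch using only the numbers quoted above, not figures from the study itself:

```python
# Sum the platform figures itemized in the excerpt (US$ millions).
under_12 = {"YouTube": 959.1, "Instagram": 801.1, "Facebook": 137.2}
teens_13_17 = {"Instagram": 4000.0, "TikTok": 2000.0, "YouTube": 1200.0}

itemized_total = sum(under_12.values()) + sum(teens_13_17.values())
print(f"Itemized total: ${itemized_total:,.1f} million")
# Gap between the itemized figures and the $11 billion headline number.
print(f"Unitemized remainder: ${11_000 - itemized_total:,.1f} million")
```

The roughly $1.9 billion gap is consistent with the headline covering more platforms (Snapchat, for one) than the excerpt lists.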

The money is good. But let’s think about the context for the revenue. Is there another payoff from hooking minors on a particular firm’s digital content?

Control. Great idea. Self regulation will definitely address that issue.

Stephen E Arnold, January 2, 2024

Lawyer, Former Government Official, and Podcaster to Head NSO Group

January 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The high-profile intelware and policeware vendor NSO Group has made clear that specialized software is a potent policing tool. NSO Group continues to market its products and services at low-profile trade shows like those sponsored by an obscure outfit in northern Virginia. Now the firm has found a new friend in a former US official. TechDirt reports, “Former DHS/NSA Official Stewart Baker Decides He Can Help NSO Group Turn a Profit.” Writer Tim Cushing tells us:

“This recent filing with the House of Representatives makes it official: Baker, along with his employer Steptoe and Johnson, will now be seeking to advance the interests of an Israeli company linked to abusive surveillance all over the world. In it, Stewart Baker is listed as the primary lobbyist. This is the same Stewart Baker who responded to the Commerce Department blacklist of NSO by saying it wouldn’t matter because authoritarians could always buy spyware from… say…. China.”

So, the reasoning goes, why not allow a Western company to fill that niche? This perspective apparently makes Baker just the fellow to help buff up NSO Group’s reputation. Cushing predicts:

“The better Baker does clearing NSO’s tarnished name, the sooner it and its competitors can return to doing the things that got them in trouble in the first place. Once NSO is considered somewhat acceptable, it can go back to doing the things that made it the most money: i.e., hawking powerful phone exploits to human rights abusers. But this time, NSO has a former US government official in its back pocket. And not just any former government official but one who spent months telling US citizens who were horrified by the implications of the Snowden leaks that they were wrong for being alarmed about bulk surveillance.”

Perhaps the winning combination for the NSO Group is a lawyer, former US government official, and podcaster in one sleek package? But there are now alternatives to the Pegasus solution. Some of these do not have the baggage carted around by the stealthy flying horse.

Perhaps there will be a podcast about NSO Group in the near future.

Cynthia Murrell, January 2, 2024

Google, There Goes Two Percent of 2022 Revenues. How Will the Company Survive?

January 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

True or false: Google owes $5 billion US. I am not sure, but the headline in Metro makes the number a semi-factoid. So let’s see what could force Googzilla to transfer the equivalent of less than two percent of Google’s alleged 2022 revenues. Wow. That will be painful for the online advertising giant. Well, fire some staff; raise ad rates; and boost the cost of YouTube subscriptions. Will the GOOG survive? I think so.

image

An executive ponders a court order to pay the equivalent of two percent of 2022 revenues for unproven alleged improper behavior. But the court order says, “Have a nice day.” I assume the court is sincere. Thanks, MSFT Copilot Bing thing. Good enough.

“Google Settles $5,000,000,000 Claim over Searches for Intimate and Embarrassing Things” reports:

Google has agreed to settle a US lawsuit claiming it secretly tracked millions of people who thought they were browsing privately through its Incognito Mode between 2016 and 2020. The claim was seeking at least $5 billion in damages, including at least $5,000 for each user affected. Ironically, the terms of the settlement have not been disclosed, but a formal agreement will be submitted to the court by February 24.
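Taken at face value, the two numbers in the claim imply the minimum size of the class. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope check: how many users does a $5 billion floor
# imply at $5,000 per affected user? (Numbers are from the claim as
# quoted above; the actual settlement terms were not disclosed.)
claim_floor_usd = 5_000_000_000
per_user_usd = 5_000

implied_users = claim_floor_usd // per_user_usd
print(f"{implied_users:,} users")  # → 1,000,000 users
```

One million users at $5,000 apiece reaches the $5 billion floor; with “millions” of people allegedly tracked, the per-user figure implies the total could run higher.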

My thought is that Google’s legal eagles will not be partying on New Year’s Eve. These fine professionals will be huddling over their laptops, scrolling for-fee legal databases, and using Zoom (the Google video service is a bit of a hassle) to discuss ways to [a] delay, [b] deflect, [c] deny, and [d] dodge the obviously [a] fallacious, [b] foul, [c] false, [d] flimsy, and [e] flawed claims that Google did anything improper.

Hey, incognito means what Google says it means, just like the “unlimited” data claims from wireless providers. Let’s not get hung up on details. Just ask the US regulatory authorities.

For you and me, we need to read Google’s terms of service, check our computing device’s security settings, and continue to live in a Cloud of Unknowing. The allegations that Google mapping vehicles did Wi-Fi sniffing? Hey, these assertions are [a] fallacious, [b] foul, [c] false, [d] flimsy, and [e] flawed. Tracking users? Op cit, gentle reader.

Never has a commercial enterprise been subjected to so many [a] unwarranted, [b] unprovable, [c] unacceptable, and [d] unnecessary assertions. Here’s my take: [a] The Google is innocent; [b] the GOOG is misunderstood, [c] Googzilla is a victim. I ticked a, b, and c.

Stephen E Arnold, January 1, 2024

The Cost of Clever

January 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A New Year and I want to highlight an interesting story which I spotted in SFGate: “Consulting Firm McKinsey Agrees to $78 Million Settlement with Insurers over Opioids.” The focus on efficiency and logic created an interesting consulting opportunity for a blue-chip firm. That organization responded. The SFGate story reports:

Consulting firm McKinsey and Co. has agreed to pay $78 million to settle claims from insurers and health care funds that its work with drug companies helped fuel an opioid addiction crisis.

image

A blue-chip consultant has been sent to the wood shed by Ms. Justice. The sleek wizard is not happy because the wood shed is the location for severe punishment by Ms. Justice. Thanks, MSFT Copilot Bing thing.

What did the prestigious firm’s advisors help Purdue Pharma achieve? The story says:

The insurers argued that McKinsey worked with Purdue Pharma – the maker of OxyContin – to create and employ aggressive marketing and sales tactics to overcome doctors’ reservations about the highly addictive drugs. Insurers said that forced them to pay for prescription opioids rather than safer, non-addictive and lower-cost drugs, including over-the-counter pain medication. They also had to pay for the opioid addiction treatment that followed.

The write up presents McKinsey’s view of its service this way:

“As we have stated previously, we continue to believe that our past work was lawful and deny allegations to the contrary,” the company said, adding that it reached a settlement to avoid protracted litigation. McKinsey said it stopped advising clients on any opioid-related business in 2019.

What’s interesting is that the so-called opioid crisis reveals the consequences of a certain mental orientation. The goal of generating a desired outcome for a commercial enterprise can have interesting and, in this case, expensive consequences. Have some of these methods influenced other organizations? Will blue-chip consulting firms and efficiency-oriented engineers learn from wood shed visits?

Happy New Year everyone.

Stephen E Arnold, January 1, 2024

Another AI Output Detector

January 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It looks like AI detection may have a way to catch up with AI text capabilities. But for how long? Nature reports, “’ChatGPT Detector’ Catches AI Generated Papers with Unprecedented Accuracy.” The key to this particular tool’s success is its specificity—it was developed by chemist Heather Desaire and her team at the University of Kansas specifically to catch AI-written chemistry papers. Reporter McKenzie Prillaman tells us:

“Using machine learning, the detector examines 20 features of writing style, including variation in sentence lengths, and the frequency of certain words and punctuation marks, to determine whether an academic scientist or ChatGPT wrote a piece of text. The findings show that ‘you could use a small set of features to get a high level of accuracy’, Desaire says.”

The model was trained on human-written papers from 10 chemistry journals, then tested on 200 samples written by ChatGPT-3.5 and ChatGPT-4. Half the samples were based on the papers’ titles, half on the abstracts. Their tool identified the AI text 100% and 98% of the time, respectively. That clobbers the competition: ZeroGPT only caught about 35–65% and OpenAI’s own text-classifier snagged 10–55%.

“The new ChatGPT catcher even performed well with introductions from journals it wasn’t trained on, and it caught AI text that was created from a variety of prompts, including one aimed to confuse AI detectors. However, the system is highly specialized for scientific journal articles. When presented with real articles from university newspapers, it failed to recognize them as being written by humans.”
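As a rough illustration only, the kind of stylometric features the article mentions (sentence-length variation, word and punctuation frequencies) can be computed in a few lines of Python. The specific features, sample text, and token pattern below are invented for this sketch; the Kansas team’s actual 20 features and trained model are not detailed in the write-up.

```python
import re
import statistics

def style_features(text: str) -> dict:
    """Compute a few hypothetical writing-style features of the sort
    the article describes (not the study's actual feature set)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # Sentence-length statistics: human prose tends to vary more.
        "mean_sentence_len": statistics.mean(lengths),
        "sentence_len_stdev": statistics.pstdev(lengths),
        # Frequency of selected punctuation marks.
        "question_marks_per_sentence": text.count("?") / len(sentences),
        "parens_per_sentence": text.count("(") / len(sentences),
        # Frequency of a selected word.
        "however_rate": words.count("however") / max(len(words), 1),
    }

sample = ("The model was trained on journal papers. However, it was tested "
          "on generated abstracts? Results varied (quite a lot).")
print(style_features(sample))
```

A real detector would feed vectors like these, extracted from a labeled corpus, into a trained classifier; the point of the sketch is just that a small set of shallow features is cheap to compute.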

The lesson here may be that AI detectors should be tailor made for each discipline. That could work—at least until the algorithms catch on. On the other hand, developers are working to make their systems more and more like humans.

Cynthia Murrell, January 1, 2024

Balloons, Hands Off Virtual Services, and Enablers: Technology Shadows and Ghosts

December 30, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Earlier this year (2023) I delivered a lecture called “Ghost Web.” I defined the term, identified what my team and I call “enablers,” and presented several examples. These included a fan of My Little Pony operating Dark Web friendly servers, a non-governmental organization pitching equal access, a disgruntled 20 something with a fixation on adolescent humor, and a suburban business executive pumping adult content to anyone able to click or swipe via well-known service providers. These are examples of enablers.

image

Enablers are accommodating. Hear no evil, see no evil, admit to knowing nothing is the mantra. Thanks, MSFT Copilot Bing thing.

Figuring out the difference between the average bad guy and a serious player in industrialized cyber crime is not easy. Here’s another possible example of how enablers facilitate actions which may be orthogonal to the interests of the US and its allies. Navigate to “U.S. Intelligence Officials Determined the Chinese Spy Balloon Used a U.S. Internet Provider to Communicate.” The report may or may not be true, but the scant information presented lines up with my research into “enablers.” (These are firms which knowingly set up their infrastructure services to allow the customer to control virtual services. The idea is that the hosting vendor does nothing but process the credit card, bank transfer, crypto, or other accepted form of payment. Done. The customer or the sys admin for the actor does the rest: spins up the servers, installs necessary software, and operates the service. The “enabler” just looks at logs and sends bills.)

Enablers are aware that their virtual infrastructure makes it easy for a customer to operate in the shadows. Look up a url and what do you find? Missing information due to privacy regulations like those in Western Europe or an obfuscation service offered by the “enabler.” Explore the urls using an appropriate method and what do you find? Dead ends. What happens when a person looks into an enabling hosting provider? Looks of confusion, because the mechanism does not know if the customers are “real.” Stuff is automatic. The blank looks reflect the reality that at certain enabling ISPs, no one knows because no one wants to know. As long as the invoice is paid, the “enabler” is a happy camper.

What’s the NBC News report say?

U.S. intelligence officials have determined that the Chinese spy balloon that flew across the U.S. this year used an American internet service provider to communicate, according to two current and one former U.S. official familiar with the assessment.

The “American Internet Service Provider” is an enabler. Neither the write up nor an “official” is naming the alleged enabler. I want to point out that many firms are in the enabling business. I will not identify these outfits by name, but I can characterize the types of outfits my team and I have identified. I will highlight three for this free, public blog post:

  1. A grifter who sets up an ISP and resells services. Some of these outfits have buildings and lease machines; others just use space in a very large utility ISP. The enabling occurs because of what we call the Russian doll set up. A big outfit allows resellers to brand an ISP service and pay a commission to the company with the pings, pipes, and other necessaries.
  2. An outright criminal no longer locked up sets up a hosting operation in a country known to be friendly to technology businesses. Some of these are in nation states with other problems on their hands and lack the resources to chase what looks like a simple Web hosting operation. Other variants include known criminals who operate via proxies and focus on industrialized cyber crime in different flavors.
  3. A business person who understands enough about technology to hire and compensate engineers to build a “ghost” operation. One such outfit divested itself of a certain sketchy business when the holding company sold what looked like a “plain vanilla” services firm. The new owner figured out what was going on and sold the problematic part of the business to another party.

There are other variants.

The big question is, “How do these outfits remain in business?” My team and I identified a number of reasons. Let me highlight a handful because this is, once again, a free blog and not a mechanism for disseminating information reserved for specialists:

The first is that the registration mechanism is poorly organized, easily overwhelmed, and without enforcement teeth. As a result, it is very easy to operate a criminal enterprise, follow the rules (such as they are), and conduct whatever online activities one desires with minimal oversight. This weak regulation of the free and open Internet facilitates enablers.

The second is that modern methods and techniques make it possible to set up an illegal operation and rely on scripts or semi-smart software to move the service around. The game is an old one, and it is called Whack A Mole. The idea is that when investigators arrive to seize machines and information, the service is gone. The account was in the name of a fake persona. The payments arrived via a bogus bank account located in a country permitting opaque banking operations. No one where physical machines are located paid any attention to a virtual service operated by an unknown customer. Dead ends are not accidental; they are intentional and often technical.

The third is that enforcement personnel have to have time and money to pursue the bad actors. Some well publicized take downs like the Cyberbunker operation boil down to a mistake made by the owner or operator of a service. Sometimes investigators get a tip, see a message from a disgruntled employee, or attend a hacker conference and hear a lecturer explain how an encrypted email service for cyber criminals works. The fix, therefore, is additional, specialized staff, technical resources, and funding.

What does the NBC News story mean?

Cyber crime is not just a lone wolf game. Investigators looking into illegal credit card services find that trails can lead to a person in prison in Israel or to a front company operating via the Seychelles using a Chinese domain name registrar with online services distributed around the world. The problem is like one of those fancy cakes with many layers.

How accurate is the NBC News report? There aren’t many details, but it is a fact that enablers make things happen. It’s time for regulatory authorities in the US and the EU to put on their Big Boy pants and take more forceful, sustained action. But that’s just my opinion about what I call the “ghost Web,” its enablers, and the wide range of criminal activities fostered, nurtured, and operated 24×7 on a global basis.

When a member of your family has a bank account stripped or an identity stolen, you may have few options for a remedy. Why? You are going to be chasing ghosts and the machines which make them function in the real world. What’s your ISP facilitating?

Stephen E Arnold, December 30, 2023

Scale Fail: Define Scale for Tech Giants, Not Residents of Never Never Land

December 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Scale Is a Trap.” The essay presents an interesting point of view, scale from the viewpoint of a resident of Never Never Land. The write up states:

But I’m pretty convinced the reason these sites [Vice, Buzzfeed, and other media outfits] have struggled to meet the moment is because the model under which they were built — eyeballs at all cost, built for social media and Google search results — is no longer functional. We can blame a lot of things for this, such as brand safety and having to work through perhaps the most aggressive commercial gatekeepers that the world has ever seen. But I think the truth is, after seeing how well it worked for the tech industry, we made a bet on scale — and then watched that bet fail over and over again.

The problem is that the focus is on media companies designed to surf on the free megaphones like Twitter and the money from Google’s pre-threat ad programs. 

However, knowledge is tough to scale. The firms which can convert knowledge into what William James called “cash value” charge for professional services. Some content is free like wild and crazy white papers. But the “good stuff” is for paying clients.

An outfit which wants to find enough subscribers who will pay to read articles is running a difficult business to scale. I find it interesting that Substack is accepting some content sure to attract some interesting readers. How much will these folks pay? Maybe a lot.

But scale in information is not what many clever writers or traditional publishers and authors can achieve. What happens when a person writes a best seller? The publisher demands more books, and the result? Subsequent books which are not what the original was.

Whom does scale serve? Scale delivers power and payoff to the organizations which can develop products and services that sell to a large number of people who want a deal. Scale at a blue chip consulting firm means selling to the biggest firms and the organizations with the deepest pockets.

But the scale of a McKinsey-type firm is different from the scale at an outfit like Microsoft or Google.

What is the definition of scale for a big outfit? The way I would explain what the technology firms mean when scale is kicked around at an artificial intelligence conference is “big money, big infrastructure, big services, and big brains.” By definition, individuals and smaller firms cannot deliver.

Thus, the notion of appropriate scale means what the cited essay calls a “niche.” The problems and challenges include:

  • Getting the cash to find, cultivate, and grow people who will pay enough to keep the knowledge enterprise afloat
  • Finding other people to create the knowledge value
  • Protecting the idea space from carpetbaggers
  • Remaining relevant because knowledge has a shelf life, and it takes time to grow knowledge or acquire new knowledge.

To sum up, the essay is more about how journalists are going to have to adapt to a changing world. The problem is that scale is a characteristic of the old school publishing outfits which have been ill-suited to the stress of adapting to a rapidly changing world.

Writers are not blue chip consultants. Many just think they are.

Stephen E Arnold, December 29, 2023

AI Silly Putty: Squishes Easily, Impossible to Remove from Hair

December 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I like happy information. I navigated to “Meta’s Chief AI Scientist Says Terrorists and Rogue States Aren’t Going to Take Over the World with Open Source AI.” Happy information. Terrorists and the Axis of Evil outfits are just going to chug along. Open source AI is not going to give these folks a super weapon. I learned from the write up that the trustworthy outfit Zuckbook has a Big Wizard in artificial intelligence. That individual provided some cheerful words of wisdom for me. Here’s an example:

It won’t be easy for terrorists to take over the world with open-source AI.

Obviously there’s a caveat:

they’d need a lot of money and resources just to pull it off.

That’s my happy thought for the day.

image

“Wow, getting this free silly putty out of your hair is tough,” says the scout mistress. The little scout asks, “Is this similar to coping with open source artificial intelligence software?” Thanks, MSFT Copilot. After a number of weird results, you spit out one that is good enough.

Then I read “China’s Main Intel Agency Has Reportedly Developed An AI System To Track US Spies.” Oh, oh. Unhappy AI information. China, I assume, has the open source AI software. It probably has in its 1.4 billion population a handful of AI wizards comparable to the Zuckbook’s line up. Plus, despite economic headwinds, China has money.

The write up reports:

The CIA and China’s Ministry of State Security (MSS) are toe to toe in a tense battle to beat one another’s intelligence capabilities that are increasingly dependent on advanced technology… , the NYT reported, citing U.S. officials and a person with knowledge of a transaction with contracting firms that apparently helped build the AI system. But, the MSS has an edge with an AI-based system that can create files near-instantaneously on targets around the world complete with behavior analyses and detailed information allowing Beijing to identify connections and vulnerabilities of potential targets, internal meeting notes among MSS officials showed.

Not so happy.

Several observations:

  1. The smart software is a cat out of the bag
  2. There are intelligent people who are not pals of the US who can and will use available tools to create issues for a perceived adversary
  3. The AI technology is like silly putty: Easy to get, free or cheap, and tough to get out of someone’s hair.

What’s the deal with silly putty? Cheap, easy, and tough to remove from hair, carpet, and seat upholstery. Just like open source AI software in the hands of possibly questionable actors. How are those government guidelines working?

Stephen E Arnold, December 29, 2023
