When Wizards Squabble the Digital World Bleats, “AI Yi AI”

October 21, 2024

dino orange_thumb_thumb_thumbNo smart software but we may use image generators to add some modern spice to the dinobaby’s output.

The world is abuzz with New York Times “real” news story. From my point of view, the write up reminds me of a script from “The Guiding Light.” The “to be continued” is implicit in the drama presented in the pitch for a new story line. AI wizard and bureaucratic marvel squabble about smart software.

According to “Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying”:

At an A.I. conference in Seattle this month, Microsoft didn’t spend much time discussing OpenAI. Asha Sharma, an executive working on Microsoft’s A.I. products, emphasized the independence and variety of the tech giant’s offerings. “We definitely believe in offering choice,” Ms. Sharma said.

image

Two wizards squabble over the AI goblet. Thanks, MSFT Copilot, good enough which for you is top notch.

What? Microsoft offers a choice. What about pushing Edge relentlessly? What about the default install of an intelligence officer’s fondest wish: Historical data on a bad actor’s computer? What about users who want to stick with Windows 7 because existing applications run on it without choking? What about users who want to install Windows 11 but cannot because of arbitrary Microsoft restrictions? Choice?

Several observations:

  1. The tension between Sam AI-Man and Satya Nadella, the genius behind today’s wonderful Microsoft software is not secret. Sam AI-Man found some acceptance when he crafted a deal with Oracle.
  2. When wizards argue the drama is high because both of the parties to the dispute know that AI is a winner take all game, with losers destined to get only 65 percent of the winner’s size. Others get essentially nothing. Winners get control.
  3. The anti-MBA organization of OpenAI, Microsoft’s odd deal, and the staffing shenanigans of both Microsoft and OpenAI suggest that neither MSFT’s Nadella or OpenAI’s Sam AI-Man are big picture thinkers.

What will happen now? I think that the Googlers will add a new act to the Sundar & Prabhakar Comedy Tour. The two jokers will toss comments back and forth about how both the Softies and the AI-Men need to let another firm’s AI provide information about organizational planning.

I think the story will be better as a comedy routine. Scrap that “Guiding Light” idea. A soap opera is far to serious for the comedy now on stage.

Stephen E Arnold, October 21, 2024

Can Prabhakar Do the Black Widow Thing to Technology at Google?

October 21, 2024

dino orange_thumb_thumbNo smart software but we may use image generators to add some modern spice to the dinobaby’s output.

The reliable (mostly?) Wall Street Journal ran a story titled“Google Executive Overseeing Search and Advertising Leaves Role.” The executive in question is Prabhakar Raghavan, the other half of the Sundar and Prabhakar Comedy Team. The wizardly Prabhakar is the person Edward Zitron described as “The Man Who Killed Google Search.” I recommend reading that essay because it has more zip than the Murdoch approach to poohbah analysis.

I want to raise a question because I assume that Mr. Zitron is largely correct about the demise of Google Search. The sleek Prabhakar accelerated the decline. He was the agent of the McKinsey think infused in his comedy partner Sundar. The two still get laughs at their high school reunions amidst chums and more when classmates gather to explain their success to one another.

The Google approach: Who needs relevance? Thanks, MSFT Copilot. Not quite excellent.

What is the question? Here it is:

Will Prabhakar do to Google’s technology what he did to search?

My view is that Google’s technology has demonstrated corporate ossification. The company “invented”, according to Google lore, the transformer. Then Google — because it was concerned about its invention — released some of it as open source and then watched as Microsoft marketed AI as the next big thing for the Softies. And what was the outfit making Microsoft’s marketing coup possible? It was Sam AI-Man.

Microsoft, however, has not been a technology leader for how many years?

Suddenly the Google announced a crisis and put everyone on making Google the leader in AI. I assume the McKinsey think did not give much thought to the idea that MSFT’s transformer would be used to make Google look darned silly. In fact, it was Prabhakar who stole the attention of the pundits with a laughable AI demonstration in Paris.

Flash forward from early 2023 to late 2024 what’s Google doing with technology? My perception is that Google is trying to create AI winners, capture the corporate market from Microsoft, and convince as many people as possible that if Google is broken apart, AI in America will flop.

Yes, the fate of the nation hangs on Google’s remaining a monopoly. That sounds like a punch line to a skit in the Sundar and Prabhakar Comedy Show.

Here’s my hypothesis: The death of search (the Edward Zitron view) is a job well done. The curtains fall on Act I of the Google drama. Act II is about the Google technology. The idea is that the technology of the online advertising monopoly defines the future of America.

Stay tuned because the story will be streamed on YouTube with advertising, lots of advertising, of course.

Stephen E Arnold, October 21, 2024

Pavel Durov and Telegram: In the Spotlight Again

October 21, 2024

dino orangeNo smart software used for the write up. The art, however, is a different story.

Several news sources reported that the entrepreneurial Pavel Durov, the found of Telegram, has found a way to grab headlines. Mr. Durov has been enjoying a respite in France, allegedly due to his contravention of what the French authorities views as a failure to cooperate with law enforcement. After his detainment, Mr. Durov signaled that he has cooperated and would continue to cooperate with investigators in certain matters.

image

A person under close scrutiny may find that the experience can be unnerving. The French are excellent intelligence operators. I wonder how Mr. Durov would hold up under the ministrations of Israeli and US investigators. Thanks, ChatGPT, you produced a usable cartoon with only one annoying suggestion unrelated to my prompt. Good enough.

Mr. Durov may have an opportunity to demonstrate his willingness to assist authorities in their investigation into documents published on the Telegram Messenger service. These documents, according to such sources as Business Insider and South China Morning Post, among others, report that the Telegram channel Middle East Spectator dumped information about Israel’s alleged plans to respond to Iran’s October 1, 2024, missile attack.

The South China Morning Post reported:

The channel for the Middle East Spectator, which describes itself as an “open-source news aggregator” independent of any government, said in a statement that it had “received, through an anonymous source on Telegram who refused to identify himself, two highly classified US intelligence documents, regarding preparations by the Zionist regime for an attack on the Islamic Republic of Iran”. The Middle East Spectator said in its posted statement that it could not verify the authenticity of the documents.

Let’s look outside this particular document issue. Telegram’s mostly moderation-free approach to the content posted, distributed, and pushed via the Telegram platform is like to come under more scrutiny. Some investigators in North America view Mr. Durov’s system as a less pressing issue than the content on other social media and messaging services.

This document matter may bring increased attention to Mr. Durov, his brother (allegedly with the intelligence of two PhDs), the 60 to 80 engineers maintaining the platform, and its burgeoning ancillary interests in crypto. Mr. Durov has some fancy dancing to do. One he is able to travel, he may find that additional actions will be considered to trim the wings of the Open Network Foundation, the newish TON Social service, and the “almost anything” goes approach to the content generated and disseminated by Telegram’s almost one billion users.

From a practical point of view, a failure to exercise judgment about what is allowed on Messenger may derail Telegram’s attempts to become more of a mover and shaker in the world of crypto currency. French actions toward Mr. Pavel should have alerted the wizardly innovator that governments can and will take action to protect their interests.

Now Mr. Durov is placing himself, his colleagues, and his platform under more scrutiny. Close scrutiny may reveal nothing out of the ordinary. On the other hand, when one pays close attention to a person or an organization, new and interesting facts may be identified. What happens then? Often something surprising.

Will Mr. Durov get that message?

Stephen E Arnold, October 21, 2024

Another Reminder about the Importance of File Conversions That Work

October 18, 2024

Salesforce has revamped its business plan and is heavily investing in AI-related technology. The company is also acquiring AI companies located in Israel. CTech has the lowdown on Salesforce’s latest acquisition related to AI file conversion: “Salesforce Acquiring Zoomin For $450 Million.”

Zoomin is an Israeli data management provider for unstructured at and Salesforce purchased it for $450 million. This is way more than what Zoomin was appraised at in 2021, so investors are happy. Earlier in September, Salesforce also bought another Israeli company Own. Buying Zoomin is part of Salesforce’s long term plan to add AI into its business practices.

Since AI need data libraries to train and companies also possess a lot of unstructured data that needs organizing, Zoomin is a wise investment for Salesforce. Zoomin has a lot to offer Salesforce:

“Following the acquisition, Zoomin’s technology will be integrated into Salesforce’s Agentforce platform, allowing customers to easily connect their existing organizational data and utilize it within AI-based customer experiences. In the initial phase, Zoomin’s solution will be integrated into Salesforce’s Data Cloud and Service Cloud, with plans to expand its use across all Salesforce solutions in the future.”

Salesforce is taking steps that other businesses will eventually follow. Will Salesforce start selling the converted data to train AI? Also will Salesforce become a new Big Tech giant?

Whitney Grace, October 18, 2024

Online Search: The Old Function Is in Play

October 18, 2024

dino orange_thumb_thumb_thumb_thumb_thumb_thumb_thumb_thumb_thumb_thumbJust a humanoid processing information related to online services and information access.

We spotted an interesting marketing pitch from Kagi.com, the pay-to-play Web search service. The information is located on the Kagi.com Help page at this link. The approach is what I call “fact-centric marketing.” In the article, you will find facts like these:

In 2022 alone, search advertising spending reached a staggering 185.35 billion U.S. dollars worldwide, and this is forecast to grow by six percent annually until 2028, hitting nearly 261 billion U.S. dollars.

There is a bit of consultant-type analysis which explains the difference between Google’s approach labeled “ad-based search” and the Kagi.com approach called “user-centric search.” I don’t want to get into an argument about these somewhat stark bifurcations in the murky world of information access, search, and retrieval. Let’s just accept the assertion.

I noted more numbers. Here’s a sampling (not statistically valid, of course):

Google generated $76 billion in US ad revenue in 2023. Google had 274 million unique visitors in the US as of February 2023. To estimate the revenue per user, we can divide the 2023 US ad revenue by the 2023 number of users: $76 billion / 274 million = $277 revenue per user in the US or $23 USD per month, on average! That means there is someone, somewhere, a third party and a complete stranger, an advertiser, paying $23 per month for your searches.

The Kagi.com point is:

Choosing to subscribe to Kagi means that while you are now paying for your search you are getting a fair value for your money, you are getting more relevant results, are able to personalize your experience and take advantage of all the tools and features we built, all while protecting your and your family’s privacy and data.

Why am I highlighting this Kagi.com Help information? Leo Laporte on the October 13, 2024, This Week in Tech program talked about Kagi. He asserted that Kagi uses Bing, Google, and its own search index. I found this interesting. If true, Mr. Laporte is disseminating the idea that Kagi.com is a metasearch engine like Ixquick.com (now StartPage.com). The murkiness about what a Web search engine presents to a user is interesting.

image

A smart person is explaining why paying for search and retrieval is a great idea. It may be, but Google has other ideas. Thanks, You.com. Good enough

In the last couple of days I received an invitation to join a webinar about a search system called Swirl, which connotes mixing content perhaps? I also received a spam message from a fund called TheStreet explaining that the firm has purchased a block of Elastic B.V. shares. A company called provided an interesting explanation of what struck me as a useful way to present search results.

Everywhere companies are circling back to the idea that one cannot “find” needed information.

With Google facing actual consequences for its business practices, that company is now suggesting this angle: “Hey, you can’t break us up. Innovation in AI will suffer.”

So what is the future? Will vendors get a chance to use the Google search index for free? Will alternative Web search solutions become financial wins? Will metasearch triumph, using multiple indexes and compiling a single list of results? Will new-fangled solutions like Glean dominate enterprise information access and then move into the mainstream? Will visual approaches to information access kick “words” to the curb?

Here are some questions I like to ask those who assert that they are online experts, and I include those in the OSINT specialist clan as well:

  1. Finding information is an unsolved problem. Can you, for example, easily locate a specific frame from a video your mobile device captured a year ago?
  2. Can you locate the specific expression in a book about linear algebra germane to the question you have about its application to an AI procedure?
  3. Are you able to find quickly the telephone number (valid at the time of the query) for a colleague you met three years ago at an international conference?

As 2024 rushes to what is likely to be a tumultuous conclusion, I want to point out that finding information is a very difficult job. Most people tell themselves they can find the information needed to address a specific question or task. In reality, these folks are living in a cloud of unknowing. Smart software has not made keyword search obsolete. For many users, ChatGPT or other smart software is a variant of search. If it is easy to use and looks okay, the output is outstanding.

So what? I am not sure the problem of finding the right information at the right time has been solved. Free or for fee, ad supported or open sourced, dumb string matching or Fancy Dan probabilistic pattern identification — none is delivering what so many people believe are on point, relevant, timely information. Don’t even get me started on the issue of “correct” or “accurate.”

Marketers, stand down. Your assertions, webinars, advertisements, special promotions, jargon, and buzzwords do not deliver findability to users who don’t want to expend effort to move beyond good enough. I know one thing for certain, however: Finding relevant information is now more difficult than it was a year ago. I have a hunch the task is only become harder.

Stephen E Arnold, October 18, 2024

Hey, France, Read Your Pavel-Grams: I Cooperate

October 18, 2024

dino orange_thumb_thumbJust a humanoid processing information related to online services and information access.

Did you know that Telegram has shared IPs since 2018. Do your homework!

Telegram is a favored message application, because it is supposed to protect user privacy, especially for crypto users. Not say, says Coin Telegraph in the article, “Telegram Has Been Disclosing User IPs Since 2018, Durov Says.” Before you start posting nasty comments about Telegram’s lies, the IPs the message is sharing belong to bad actors. CEO Pavel Durov shared on his Telegram channel that his company reports phone numbers and IP addresses to law enforcement.

The company has been disclosing criminal information to authorities since 2018, but only when proper legal procedure is followed. Telegram abides by formal legal requests when they are from relevant communication lines. Durov stressed that Telegram remains an anonymous centered app:

Durov said the news from last week showed that Telegram has been “streamlining and unifying its privacy policy across different countries.” He stressed that Telegram’s core principles haven’t changed, as the company has always sought to comply with relevant local laws ‘as long as they didn’t go against our values of freedom and privacy.’ He added: ‘Telegram was built to protect activists and ordinary people from corrupt governments and corporations — we do not allow criminals to abuse our platform or evade justice.”’

French authorities indicted Durov in August 2024 on six charges related to illicit activity via Telegram. He posted the $5.5 million bail in September, then revealed to the public how his company complies with legal requests after calling the charges misguided.

Kudos for Telegram disclosing the information to be transparent.

Whitney Grace, October 18, 2024

Another Stellar Insight about AI

October 17, 2024

Because AI we think AI is the most advanced technology, we believe it is impenetrable to attack. Wrong. While AI is advanced, the technology is still in its infancy and is extremely vulnerable, especially to smart bad actors. One of the worst things about AI and the Internet is that we place too much trust in it and bad actors know that. They use their skills to manipulate information and AI says ArsTechnica in the article: “Hacker Plants False Memories In ChatGPT To Steal User Data In Perpetuity.”

Johann Rehberger is a security researcher who discovered that ChatGPT is vulnerable to attackers. The vulnerability allows bad actors to leave false information and malicious instructions in a user’s long-term memory settings. It means that they could steal user data or cause more mayhem. OpenAI didn’t take Rehmberger serious and called the issue a safety concern aka not a big deal.

Rehberger did not like being ignored, so he hacked ChatGPT in a “proof-of-concept” to perpetually exfiltrate user data. As a result, ChatGPT engineers released a partial fix.

OpenAI’s ChatGPT stores information to use in future conversations. It is a learning algorithm to make the chatbot smarter. Rehberger learned something incredible about that algorithm:

“Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker.”

Bad attackers could exploit the vulnerability for their own benefits. What is alarming is that the exploit was as simple as having a user view a malicious image to implement the fake memories. Thankfully ChatGPT engineers listened and are fixing the issue.

Can’t anything be hacked one way or another?

Whitney Grace, October 17, 2024

Darknet: Pounding Out a Boring Beat

October 17, 2024

dino orange_thumb_thumb_thumb_thumb_thumb_thumbJust a humanoid processing information related to online services and information access.

PC World finally got around to sharing the biggest Internet secret: “the Darknet.

The Darknet is better known as the Dark Web and it has been around for while. PC World is treating the Dark Web like a newly discovered secret in: “What Is The Darknet? How The Web’s Secretive, Hidden Underbelly Works.”

If you’ve been living under a rock for the past decade, the Dark Web is the flipside of the Internet. It’s where criminals, freedom fighters, and black marketeers thrive under anonymity. Anything can be bought on the Dark Web, including people, drugs, false passports, credit cards, perusal information, weapons, and more.

The Dark Web is accessed through the downloadable Tor browser. The Tor browser allows users to remain anonymous as long as they don’t enter in any personal information during a session. Tor also allows users to visit “hidden” Web sites that use a special web address ending with a .onion extension. Links to .onion Web sites are found the Hidden Wiki, Haystack, Ahmia, and Torch.

Tor hides Web sites inside layers similar to an onion:

“In order to conceal its origin, the Tor software installed on the user’s PC routes each data packet via various randomly selected computers (nodes) before it is then transferred to the open internet via an exit node.

The data is specially secured so that it cannot be read on any of the Tor computers involved. This entails multiple instances of encryption using the onion-skin principle: Each of the nodes involved in the transport decrypts one layer. As a result, the packet that arrives at a node looks different to eavesdroppers than the packet that the node sends on.”

It’s not illegal to use the Tor and it’s a great tool to browse the Internet anonymously. The problem with Tor is that it is slower than regular Internet, because of the anonymization process rendering.

The article is fill of technical jargon, but does a decent job of explaining the basics of the Darknet. But “real” news? Nope.

Whitney Grace, October 17, 2024

AI: The Key to Academic Fame and Fortune

October 17, 2024

dino orange_thumb_thumb_thumb_thumb_thumb_thumb_thumbJust a humanoid processing information related to online services and information access.

Why would professors use smart software to “help” them with their scholarly papers? The question may have been answered in the Phys.org article “Analysis of Approximately 75 Million Publications Finds Those Employing AI Are More Likely to Be a ‘Hit Paper’” reports:

A new Northwestern University study analyzing 74.6 million publications, 7.1 million patents and 4.2 million university course syllabi finds papers that employ AI exhibit a “citation impact premium.” However, the benefits of AI do not extend equitably to women and minority researchers, and, as AI plays more important roles in accelerating science, it may exacerbate existing disparities in science, with implications for building a diverse, equitable and inclusive research workforce.

Years ago some universities had an “honor code”? I think the University of Virginia was one of those dinosaurs. Today professors are using smart software to help them crank out academic hits.

The write up continues by quoting a couple of the study’s authors (presumably without using smart software) as saying:

“These advances raise the possibility that, as AI continues to improve in accuracy, robustness and reach, it may bring even more meaningful benefits to science, propelling scientific progress across a wide range of research areas while significantly augmenting researchers’ innovation capabilities…”

What are the payoffs for the professors who probably take a dim view of their own children using AI to make life easier, faster, and smoother? Let’s look at a handful my team and I discussed:

  1. More money in the form of pay raises
  2. Better shot at grants for research
  3. Fame at conferences
  4. Groupies. I know it is hard to imagine but it happens. A lot.
  5. Awards
  6. Better committee assignments
  7. Consulting work.

When one considers the benefits from babes to bucks, the chit chat about doing better research is of little interest to professors who see virtue in smart software.

The president of Stanford cheated. The head of the Harvard Ethics department appears to have done it. The professors in the study sample did it. The conclusion: Smart software use is normative behavior.

Stephen E Arnold, October 17, 2024

Gee, Will the Gartner Group Consultants Require Upskilling?

October 16, 2024

dino orange_thumbThe only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.

I have a steady stream of baloney crossing my screen each day. I want to call attention to one of the most remarkable and unsupported statements I have seen in months. The PR document “Gartner Says Generative AI Will Require 80% of Engineering Workforce to Upskill Through 2027” contains a number of remarkable statements. Let’s look at a couple.

image

How an allegedly big time consultant is received in a secure artificial intelligence laboratory. Thanks, MSFT Copilot, good enough.

How about this one?

Through 2027, generative AI (GenAI) will spawn new roles in software engineering and operations, requiring 80% of the engineering workforce to upskill, according to Gartner, Inc.

My thought is that the virtual band of wizards which comprise Gartner cook up data the way I microwave a burrito when I am hungry. Pick a common number like the 80-20 Pareto figure. It is familiar and just use it. Personally I was disappointed that Gartner did not use 67 percent, but that’s just an old former blue chip consultant pointing out that round numbers are inherently suspicious. But does Gartner care? My hunch is that whoever reviewed the news release was happy with 80 percent. Did anyone question this number? Obviously not: There are zero supporting data, no information about how it was derived, and no hint of the methodology used by the incredible Gartner wizards. That’s a clue that these are microwaved burritos from a bulk purchase discount grocery.

How about this statement which cites a … wait for it … Gartner wizard as the source of the information?

“In the AI-native era, software engineers will adopt an ‘AI-first’ mindset, where they primarily focus on steering AI agents toward the most relevant context and constraints for a given task,” said Walsh. This will make natural-language prompt engineering and retrieval-augmented generation (RAG) skills essential for software engineers.

I love the phrase “AI native” and I think dubbing the period from January 2023 when Microsoft demonstrated its marketing acumen by announcing the semi-tie up with OpenAI. The code generation systems help exactly what “engineer”? One has to know quite a bit to craft a query, examine the outputs, and do any touch ups to get the outputs working as marketed? The notion of “steering” ignores what may be an AI problem no one at Gartner has considered; for example, emergent patterns in the code generated. This means, “Surprise.” My hunch is that the idea of multi-layered neural networks behaving in a way that produces hitherto unnoticed patterns is of little interest to Gartner. That outfit wants to sell consulting work, not noodle about the notion of emergence which is a biased suite of computations. Steering is good for those who know what’s cooking and have a seat at the table in the kitchen. Is Gartner given access to the oven, the fridge, and the utensils? Nope.

Finally, how about this statement?

According to a Gartner survey conducted in the fourth quarter of 2023 among 300 U.S. and U.K. organizations, 56% of software engineering leaders rated AI/machine learning (ML) engineer as the most in-demand role for 2024, and they rated applying AI/ML to applications as the biggest skills gap.

Okay, this is late 2024 (October to be exact). The study data are a year old. So far the outputs of smart coding systems remain a work in progress. In fact, Dr. Sabine Hossenfelder has a short video which explains why the smart AI programmer in a box may be more disappointing than other hyperbole artists claim. If you want Dr. Hossenfelder’s view, click here. In a nutshell, she explains in a very nice way about the giant bologna slide plopped on many diners’ plates. The study Dr. Hossenfelder cites suggests that productivity boosts are another slice of bologna. The 41 percent increase in bugs provides a hint of the problems the good doctor notes.

Net net: I wish the cited article WERE generated by smart software. What makes me nervous is that I think real, live humans cooked up something similar to a boiled shoe. Let me ask a more significant question. Will Gartner experts require upskilling for the new world of smart software? The answer is, “Yes.” Even today’s sketchy AI outputs information often more believable that this Gartner 80 percent confection.

Stephen E Arnold, October 16, 2024

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta