Habba Logic? Is It Something One Can Catch?

January 30, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I don’t know much about lawyering. I have been exposed to some unusual legal performances. Most recently, Alina Habba delivered in impassioned soliloquy after a certain high-profile individual was told, “You have to pay a person whom you profess not to know $83 million.” Ms. Habba explained that the decision was a bit of a problem based on her understanding of New York State law. That’s okay. As a dinobaby, I am wrong on a pretty reliable basis. Once it is about 3 pm, I have difficulty locating my glasses, my note cards about items for this blog, and my bottle of Kroger grape-flavored water. (Did you know the world’s expert on grape flavor was a PhD named Abe Bakal. I worked with him in the 1970s. He influenced me, hence the Bakalized water.)

image

Habba logic explains many things in the world. If Socrates does not understand, that’s his problem, the young Agonistes Habba in the logic class. Thanks, MSFT Copilot. Good enough. But the eyes are weird.

I did find my notecard about a TechDirt article titled “Cable Giants Insist That Forcing Them to Make Cancellations Easier Violates Their First Amendment Rights.” I once learned that the First Amendment had something to do with free speech. To me, a dinobaby don’t forget, this means I can write a blog post, offer my personal opinions, and mention the event or item which moved me to action. Dinobabies are not known for their swiftness.

The write up explains that cable companies believe that making it difficult for a customer to cancel a subscription to TV, phone, Internet, and other services is a free speech issue. The write up reports:

But the cable and broadband industry, which has a long and proud tradition of whining about every last consumer protection requirement (no matter how basic), is kicking back at the requirement. At a hearing last week, former FCC boss-turned-top-cable-lobbying Mike Powell suggested such a rule wouldn’t be fair, because it might somehow (?) prevent cable companies from informing customers about better deals.

The idea is that the cable companies’ free of speech would be impaired. Okay.

What’s this got to do with the performance by Ms. Habba after her client was slapped with a big monetary award? Answer: Habba logic.

Normal logic says, “If a jury finds a person guilty, that’s what a jury is empowered to do.” I don’t know if describing it in more colorful terms alters what the jury does. But Habba logic is different, and I think it is diffusing from the august legal chambers to a government meeting. I am not certain how to react to Habba logic.

I do know, however, however, that cable companies are having a bit of struggle retaining their customers, amping up their brands, and becoming the equivalent of Winnie the Pooh sweatshirts for kids and adults. Cable companies do not want a customer to cancel and boost the estimable firms’ churn ratio. Cable companies do want to bill every month in order to maintain their cash intake. Cable companies do want to maintain a credit card type of relationship to make it just peachy to send mindless snail mail marketing messages about outstanding services, new set top boxes, and ever faster Internet speeds. (Ho ho ho. Sorry. I can’t help myself.)

Net net: Habba logic is identifiable, and I will be watching for more examples. Dinobabies like watching those who are young at heart behaving in a fascinating manner. Where’s my fake grape water? Oh, next to fake logic.

Stephen E Arnold, January 30, 2024

Google Gems: January 30, 2024

January 30, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The dinobaby wants to share another collection of Google gems. These are high-value actions which provide insight into one of the world’s most successful online advertising companies. Let’s get rolling with the items which I thought were the biggest outputs of behavioral magma movements in the last week, give or take a day or two. For gems, whose keeping track?

image

The dinobaby is looking for Google gems. There are many. Thanks, MSFT Copilot Bing thing. Good enough, but I think I am more svelt than your depiction of me.

GOOGLE AND REAL INNOVATION

How do some smart people innovate. “Google Settles AI-Related Chip Patent Lawsuit That Sought US$1.67-Billion in Damages” states:

Singular, founded by Massachusetts-based computer scientist Joseph Bates, claimed that Google incorporated his technology into processing units that support AI features in Google Search, Gmail, Google Translate and other Google services. The 2019 lawsuit said that Bates shared his inventions with the company between 2010 and 2014. It argued that Google’s Tensor Processing Units copied Bates’ technology and infringed two patents.

Did Google accidentally borrow intellectual property? I don’t know. But when $1.67 is bandied about as a desired amount and the Google settles right before trial, one can ask, “Does Google do me-too invention?” Of course not. Google is too cutting edge. Plus the invention allegedly touches Google’s equally innovative artificial intelligence set up. But $1.67 billion? Interesting.

A TWO’FER

Two former Googlers have their heads in the clouds (real, not data center clouds). Well, one mostly former Googler and another who has returned to the lair to work on AI. Hey, those are letters which appear in the word lAIr. What a coincidence. Xoogler one is a founder of the estimable company. Xoogler two is a former “adult” at the innovative firm.

Sergey Brin’s, like Icarus, has taken flight. He didn’t. His big balloon has. The Travel reports in “The World’s Largest Airship Is Now A Reality As It Took Flight In California”:

Pathfinder 1, a prototype electric airship designed by LTA Research, is being unveiled to the public as dawn rises over Silicon Valley. The project’s backer, Google co-founder Sergey Brin, expects it will speed the airship’s humanitarian efforts and usher in a new age of eco-friendly air travel. The airship has magnified drone technology, incorporating fly-by-wire controls, electric motors, and lidar sensing, to a scale surpassing that of three Boeing 737s. This enlarged version has the potential to transport substantial cargo across extensive distances. Its distinctive snow-white steampunk appearance is easily discernible from the bustling 101 highway.

The article includes a reference to the newsreel meme The Hindenburg. Helpful? Not so much. Anyway the Brin-aloon is up.

The second item about a Xoogler also involves flight. Business Insider (an outfit in the news itself this week) published “Ex-Google CEO Eric Schmidt Quietly Created a Company Called White Stork, Which Plans to Build AI-Powered Attack Drones, Report Says.” Drones are a booming business. The write up states:

The former Google chief told Wired that occasionally, a new weapon comes to market that “changes things” and that AI could help revolutionize the Department of Defense’s equipment. He said in the Wired interview, “Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology — nuclear weapons — that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful.”

What if a smart White Stork goes after Pathfinder? Impossible. AI is involved.

WAY FINDING WITH THRILLS

The next major Google gem is about the map product I find almost impossible to use. But I am a dinobaby, and these nifty new products are not tuned to 80-year-old eyes and fingers. I can still type, however. “The Google Maps Effect: Authorities Looking for Ways to Prevent Cars From Going Down Steps” shares this allegedly actual factual functionality:

… beginning in December, several drivers attempted to go down the steps either in small passenger cars or lorries that wouldn’t even fit in the small space between the buildings. Drivers blamed Google Maps on every occasion, claiming they followed the turn-by-turn guidance offered by the application. Google Maps told them to make a turn and attempt to go down the steps, so they eventually got stuck for obvious reasons.

I did a job for the bright fellow who brought WordStar to market. Google Maps wanted me to drive off the highway and into the bay. I turned off the helpful navigation system. I may be old, but dinobabies are not completely stupid. Other drivers relying on good enough Google presumably are.

AI MARKETING HOO-HAH

The Google is tooting its trumpet. Here are some recent “innovations” designed to keep the pesky OpenAI, Mistal, and Zuckbookers at bay:

  1. Google can make videos using AI. “Google’s New AI Video Generator Looks Incredible” reports that the service is “incredible.” What else from the quantum supremacy crowd? Sure, and it produces cute animals.
  2. Those Chromebooks are not enough. Google is applying its AI to education. Read more about how an ad company will improve learning in “Google Announces New AI-Powered Features for Education.”
  3. More Googley AI is coming to ads. If you are into mental manipulation, you will revel in “YouTube Ads Are About to Get Way More Effective with AI-Powered Neuromarketing.” Hey, “way more” sounds like the super smart Waymo Google car thing, doesn’t it?

LITTLE CUBIC ZIRCONIAS

Let me highlight what I call little cubic zirconias of Google goodness. Here we go:

  1. The New York Post published “Google News Searches Ranked AI-Generated Rip-offs Above Real Articles — Including a Post Exclusive.” The main point is that Google’s estimable system and wizards cannot tell diamonds from the chemical twins produced by non-Googlers. With elections coming, let’s talk about trust in search results, shall we?
  2. Google’s wizards have created a new color for the Pixel phone. Read about the innovative green at this link.
  3. TechRadar reported that Google has a Kubernetes “flaw.” Who can exploit it? Allegedly anyone with a Google Gmail account. Details at this Web location.

Before I close this week’s edition of Gems, I want to mention two relatively minor items. Some people may think these molehills are much larger issues. What can I do?

Google has found that firing people is difficult. According to Business Insider, Googlers fired in South Korea won’t leave the company. Okay. Whatever.

Also, New York Magazine, a veritable treasure trove of technical information, reports that Google has ended the human Internet with the upgrade Chrome browser. News flash: The human Internet was killed by search engine optimization years ago.

Watch for more Google Gems next week. I think there will be sparkly items available.

Stephen E Arnold, January 30, 2024

Ho-Hum Write Up with Some Golden Nuggets

January 30, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.

image

“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.

Here these items are:

  1. Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
  2. The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
  3. And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:

“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”

Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?

Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.

Stephen E Arnold, January 30, 2024

AI Coding: Better, Faster, Cheaper. Just Pick Two, Please

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Visual Studio Magazine is not on my must-read list. Nevertheless, one of my research team told me that I needed to read “New GitHub Copilot Research Finds “Downward Pressure on Code Quality.” I had no idea what “downward pressure” means. I read the article trying to figure out what the plain English meaning of this tortured phrase meant. Was it the downward pressure on the metatarsals when a person is running to a job interview? Was it the deadly downward pressure exerted on the OceanGate submersible? Was it the force illustrated in the YouTube “Hydraulic Press Channel”?

image

A partner at a venture firms wants his open source recipients to produce more code better, faster, and cheaper. (He does not explain that one must pick two.) Thanks MSFT Copilot Bing thing. Good enough. But the green? Wow.

Wrong.

The writeup is a content marketing piece for a research report. That’s okay. I think a human may have written most of the article. Despite the frippery in the article, I spotted several factoids. If these are indeed verifiable, excitement in the world of machine generated open source software will ensue. Why does this matter? Well, in the words of the SmartNews content engine, “Read on.”

Here are the items of interest to me:

  1. Bad code is being created and added to the GitHub repositories.
  2. Code is recycled, despite smart efforts to reduce the copy-paste approach to programming.
  3. AI is preparing a field in which lousy, flawed, and possible worse software will flourish.

Stephen E Arnold, January 29, 2024

Modern Poison: Models, Data, and Outputs. Worry? Nah.

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.

image

How many bad apples are in the spectrum of smart software? Give up. Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators to chase the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.

Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically a function chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

Interesting. The article noted:

Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty …  Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.

Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:

"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen…  And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."

If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.

Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?

Nope.

Stephen E Arnold, January 29, 2024

AI Will Take Whose Job, Ms. Newscaster?

January 29, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Will AI take jobs? Abso-frickin-lutely. Why? Cost savings. Period. In an era of “good enough” is the new mark of excellence, hallucinating software is going to speed up some really annoying commercial functions and reduce costs. What if the customers object to being called dorks? Too bad. The company will apologize, take down the wonky system, and put up another smart service. Better? No, good enough. Faster? Yep. Cheaper? Bet your bippy on that, pilgrim. (See, for a chuckle, AI Chatbot At Delivery Firm DPD Goes Rogue, Insults Customer And Criticizes Company.)

image

Hey, MSFT Bing thing, good enough. How is that MSFT email security today, kiddo?

I found this Fox write up fascinating: “Two-Thirds of Americans Say AI Could Do Their Job.” That works out to about 67 percent of an estimated workforce of 120 million to a couple of Costco parking lots of people. Give or take a few, of course.

The write up says:

A recent survey conducted by Spokeo found that despite seeing the potential benefits of AI, 66.6% of the 1,027 respondents admitted AI could carry out their workplace duties, and 74.8% said they were concerned about the technology’s impact on their industry as a whole.

Oh, oh. Now it is 75 percent. Add a few more Costco parking lots of people holding signs like “Will broadcast for food”, “Will think for food,” or “Will hold a sign for Happy Pollo Tacos.” (Didn’t some wizard at Davos suggest that five percent of jobs would be affected? Yeah, that’s on the money.)

The write up adds:

“Whether it’s because people realize that a lot of work can be easily automated, or they believe the hype in the media that AI is more advanced and powerful than it is, the AI box has now been opened. … The vast majority of those surveyed, 79.1%, said they think employers should offer training for ChatGPT and other AI tools.

Yep, take those free training courses advertised by some of the tech feudalists. You too can become an AI sales person just like “search experts” morphed into search engine optimization specialists. How is that working out? Good for the Google. For some others, a way station on the bus ride to the unemployment bureau perhaps?

Several observations:

  1. Smart software can generate the fake personas and the content. What’s the outlook for talking heads who are not celebrities or influencers as “real” journalists?
  2. Most people overestimate their value. Now the jobs for which these individuals compete, will go to the top one percent. Welcome to the feudal world of 21st century.
  3. More than holding signs and looking sad will be needed to generate revenue for some people.

And what about Fox news reports like the one on which this short essay is based? AI, baby, just like Sports Illustrated and the estimable SmartNews.

Stephen E Arnold, January 29, 2024

Why Stuff Does Not Work: Airplane Doors, Health Care Services, and Cyber Security Systems, Among Others

January 26, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

The Downward Spiral of Technology” stuck a chord with me. Think about building monuments in the reign of Cleopatra. The workers can check out the sphinx and giant stone blocks in the pyramids and ask, “What happened to the technology? We are banging with bronze and crappy metal compounds and those ancient dudes were zipping along with snappier tech.? That conversation is imaginary, of course.

The author of “The Downward Spiral” is focusing on less dusty technology, the theme might resonate with my made up stone workers. Modern technology lacks some of the zing of the older methods. The essay by Thomas Klaffke hit on some themes my team has shared whilst stuffing Five Guys’s burgers in their shark-like mouths.

Here are several points I want to highlight. In closing, I will offer some of my team’s observations on the outcome of the Icarus emulators.

First, let’s think about search. One cannot do anything unless one can find electronic content. (Lawyers, please, don’t tell me you have associates work through the mostly-for-show books in your offices. You use online services. Your opponents in court print stuff out to make life miserable. But electronic content is the cat’s pajamas in my opinion.)

Here’s a table from the Mr. Klaffke essay:

image

Two things are important in this comparison of the “old” tech and the “new” tech deployed by the estimable Google outfit. Number one: Search in Google’s early days made an attempt to provide content relevant to the query. The system was reasonably good, but it was not perfect. Messrs. Brin and Page fancy danced around issues like disambiguation, date and time data, date and time of crawl, and forward and rearward truncation. Flash forward to the present day, the massive contributions of Prabhakar Raghavan and other “in charge of search” deliver irrelevant information. To find useful material, navigate to a Google Dorks service and use those tips and tricks. Otherwise, forget it and give Swisscows.com, StartPage.com, or Yandex.com a whirl. You are correct. I don’t use the smart Web search engines. I am a dinobaby, and I don’t want thresholds set by a 20 year old filtering information for me. Thanks but no thanks.

The second point is that search today is a monopoly. It takes specialized expertise to find useful, actionable, and accurate information. Most people — even those with law degrees, MBAs, and the ability to copy and paste code — cannot cope with provenance, verification, validation, and informed filtering performed by a subject matter expert. Baloney does not work in my corner of the world. Baloney is not a favorite food group for me or those who are on my team. Kudos to Mr. Klaffke to make this point. Let’s hope someone listens. I have given up trying to communicate the intellectual issues lousy search and retrieval creates. Good enough. Nope.

image

Yep, some of today’s tools are less effective than modern gizmos. Hey, how about those new mobile phones? Thanks, MSFT Copilot Bing thing. Good enough. How’s the MSFT email security today? Oh, I asked that already.

Second, Mr Klaffke gently reminds his reader that most people do not know snow cones from Shinola when it comes to information. Most people assume that a computer output is correct. This is just plain stupid. He provides some useful examples of problems with hardware and user behavior. Are his examples ones that will change behaviors. Nope. It is, in my opinion, too late. Information is an undifferentiated haze of words, phrases, ideas, facts, and opinions. Living in a haze and letting signals from online emitters guide one is a good way to run a tiny boat into a big reef. Enjoy the swim.

Third, Mr. Klaffke introduces the plumbing of the good-enough mentality. He is accurate. Some major social functions are broken. At lunch today, I mentioned the writings about ethics by Thomas Dewey and William James. My point was that these fellows wrote about behavior associated with a world long gone. It would be trendy to wear a top hat and ride in a horse drawn carriage. It would not be trendy to expect that a person would work and do his or her best to do a good job for the agreed-upon wage. Today I watched a worker who played with his mobile phone instead of stocking the shelves in the local grocery store. That’s the norm. Good enough is plenty good. Why work? Just pay me, and I will check out Instagram.

I do not agree with Mr. Klaffke’s closing statement; to wit:

The problem is not that the “machine” of humanity, of earth is broken and therefore needs an upgrade. The problem is that we think of it as a “machine”.

The problem is that worldwide shared values and cultural norms are eroding. Once the glue gives way, we are in deep doo doo.

Here are my observations:

  1. No entity, including governments, can do anything to reverse thousands of years of cultural accretion of norms, standards, and shared beliefs.
  2. The vast majority of people alive today are reverting back to some fascinating behaviors. “Fascinating” is not a positive in the sense in which I am using the word.
  3. Online has accelerated the stress on social glue; smart software is the turbocharger of abrupt, hard-to-understand change.

Net net: Please, read Mr. Klaffke’s essay. You may have an idea for remediating one or more of today’s challenges.

Stephen E Arnold, January 25, 2024

Education on the Cheap: No AI Required

January 26, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I don’t write about education too often. I do like to mention the plagiarizing methods of some academics. What fun! I located a true research gem (probably non-reproducible, hallucinogenic, or just synthetic but I don’t care). “Emergency-Hired Teachers Do Just as Well as Those Who Go Through Normal Training” states:

New research from Massachusetts and New Jersey suggests maybe not. In both states, teachers who entered the profession without completing the full requirements performed no worse than their normally trained peers.

image

A sanitation worker with a high school diploma is teaching advanced seventh graders about linear equations. The students are engaged… with their mobile phones. Hey, good enough, MSFT Copilot Bing thing. Good enough.

Then a modest question:

The better question now is why these temporary waivers aren’t being made permanent.

And what’s the write up say? I quote:

In other words, making it harder to become a teacher will reduce the supply but offers no guarantee that those who meet the bar will actually be effective in the classroom.

Huh?

Using people who did not slog through college and learned something (one hopes) is expensive. Think of the cost savings when using those who are untrained and unencumbered with expectations of big money! When good enough is the benchmark of excellence, embrace those without an comprehensive four-year or more education. Ooops. Who wants that?

I thought that I once heard that the best, most educated teaching professionals should work with the youngest students. I must have been doing some of that AI-addled thinking common among some in the old age home. When’s lunch?

Stephen E Arnold, January 26, 2024

Apple, Now Number One, But Maybe Not in Mobile Security?

January 26, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

MIT Professor Stuart E. Madnick allegedly discovered that iPhone data breaches tripled between 2013-2022. Venture Beat explains more in the article “Why Attackers Love To Target Misconfigured Clouds And Phones.”

Hackers use every method to benefit from misconfiguration, but ransomware is their favorite technique. Madnick discovered a near 50% increase in ransomware attacks in organizations in the first six months of 2023 compared to 2022. After finding the breach, hackers then attack organizations’ mobile phone fleets. They freeze all communications until the ransom is paid.

Bad actors want to find the easiest ways into clouds. Unfortunately organizations are unaware that attacks happen when they don’t monitor their networks:

Merritt Baer, Field CISO at Lacework, says that bad actors look first for an easy front door to access misconfigured clouds, the identities and access to entire fleets of mobile devices. “Novel exploits (zero-days) or even new uses of existing exploits are expensive to research and discover. Why burn an expensive zero-day when you don’t need to? Most bad actors can find a way in through the “front door”– that is, using legitimate credentials (in unauthorized ways).”

Baer added, ‘This avenue works because most permissions are overprovisioned (they aren’t pruned down/least privileged as much as they could be), and because with legitimate credentials, it’s hard to tell which calls are authorized/ done by a real user versus malicious/ done by a bad actor.’”

Almost 99% of cloud security breaches are due to incorrectly set manual controls. Also nearly 50% of organizations unintentionally exposed storage, APIs, network scents, and applications. These breaches cost an average of $4 million to solve.

Organizations need to rely on more than encryption to protect their infrastructures. Most attacks occur because bad actors use authenticate credentials. Unified endpoint management, passwordless multi-factor authentication, and mobile device management housed on a single platform is the best defense.

How about these possibly true revelations about Apple?

Whitney Grace, January 26, 2024

AI and Web Search: A Meh-crosoft and Google Mismatch

January 25, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read a shocking report summary. Is the report like one of those Harvard Medical scholarly articles or an essay from the former president of Stanford University? I don’t know. Nevertheless, let’s look at the assertions in “Report: ChatGPT Hasn’t Helped Bing Compete With Google.” I am not sure if the information provides convincing proof that Googzilla is a big, healthy market dominator or if Microsoft has been fooling itself about the power of the artificial intelligence revolution.

image

The young inventor presents his next big thing to a savvy senior executive at a techno-feudal company. The senior executive is impressed. Are you? I know I am. Thanks, MSFT Copilot Bing thing. Too bad you timed out and told me, “I apologize for the confusion. I’ll try to create a more cartoon-style illustration this time.” Then you crashed. Good enough, right?

Let’s look at the write up. I noted this passage which is coming to me third, maybe fourth hand, but I am a dinobaby and I go with the online flow:

Microsoft added the generative artificial intelligence (AI) tool to its search engine early last year after investing $10 billion in ChatGPT creator OpenAI. But according to a recent Bloomberg News report — which cited data analytics company StatCounter — Bing ended 2023 with just 3.4% of the worldwide search market, compared to Google’s 91.6% share. That’s up less than 1 percentage point since the company announced the ChatGPT integration last January.

I am okay with the $10 billion. Why not bet big? The tactics works for some each year at the Kentucky Derby. I don’t know about the 91.6 number, however. The point six is troubling. What’s with precision when dealing with a result that makes clear that of 100 random people on line at the ever efficient BWI Airport, only eight will know how to retrieve information from another Web search system; for example, the busy Bing or the super reliable Yandex.ru service.

If we assume that the Bing information of modest user uptake, those $10 billion were not enough to do much more than get the management experts at Alphabet to press the Red Alert fire alarm. One could reason: Google is a monopoly in spirit if not in actual fact. If we accept the market share of Bing, Microsoft is putting life preservers manufactured with marketing foam and bricks on its Paul Allen-esque super yacht.

The write up says via what looks like recycled information:

“We are at the gold rush moment when it comes to AI and search,” Shane Greenstein, an economist and professor at Harvard Business School, told Bloomberg. “At the moment, I doubt AI will move the needle because, in search, you need a flywheel: the more searches you have, the better answers are. Google is the only firm who has this dynamic well-established.”

Yeah, Harvard. Oh, well, the sweatshirts are recognized the world over. Accuracy, trust, and integrity implied too.

Net net: What’s next? Will Microsoft make it even more difficult to use another outfit’s search system. Swisscows.com, you may be headed for the abattoir. StartPage.com, you will face your end.

Stephen E Arnold, January 25, 2024

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta