Stanford University: Trust Us. We Can Rank AI Models… Well, Because
October 19, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“Maybe We Will Finally Learn More about How A.I. Works” is a report about Stanford University’s effort to score AI vendors the way the foodies at the Michelin Guide rate restaurants. The difference is that a Michelin Guide worker can actually eat the Salade Niçoise and escargots de Bourgogne. Rating AI, by contrast, relies on marketing collateral, comments from those managing something, and fairy dust, among other inputs.
Keep in mind, please, that Stanford graduates are often laboring in the AI land of fog and mist. Also, the former president of Stanford University departed from the esteemed institution when news of allegations that he fabricated data in his peer-reviewed papers circulated in the mists of Palo Alto. Therefore, why not believe what Stanford says?
The analysts labor away, intent on their work. Analyzing AI models using 100 factors is challenging work. Thanks, MidJourney. Very original.
The New York Times reports:
To come up with the rankings, researchers evaluated each model on 100 criteria, including whether its maker disclosed the sources of its training data, information about the hardware it used, the labor involved in training it and other details. The rankings also include information about the labor and data used to produce the model itself, along with what the researchers call “downstream indicators,” which have to do with how a model is used after it’s released. (For example, one question asked is: “Does the developer disclose its protocols for storing, accessing and sharing user data?”)
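For those who like to see the arithmetic, here is a minimal sketch of how a disclosure-based score could be tallied, assuming a simple unweighted yes/no scheme. The indicator names below are hypothetical stand-ins; the actual Stanford index has its own 100 indicators and its own methodology.

```python
# Minimal sketch of an unweighted disclosure score. Indicator names are
# hypothetical; the real index defines its own 100 indicators.

from typing import Dict

def transparency_score(disclosures: Dict[str, bool]) -> float:
    """Return the percentage of indicators the vendor discloses."""
    if not disclosures:
        return 0.0
    return 100.0 * sum(disclosures.values()) / len(disclosures)

# Hypothetical vendor answers to three of the many indicators.
example_vendor = {
    "training_data_sources_disclosed": True,
    "hardware_details_disclosed": False,
    "user_data_storage_protocols_disclosed": False,
}

print(f"Transparency score: {transparency_score(example_vendor):.1f}%")
```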
Sounds thorough, doesn’t it? The only pothole on the Information Superhighway is that those working on some AI implementations are not sure what the model is doing. The idea of an audit trail for each output causes wrinkles to appear on the brow of the person charged with monitoring the costs of these algorithmic confections. Complexity and cost add up to few experts knowing exactly how a model moved from A to B, often making up data via hallucinations, lousy engineering, or someone putting a thumb on the scale to alter outputs.
The write up from the Gray Lady included this assertion:
Foundation models are too powerful to remain so opaque, and the more we know about these systems, the more we can understand the threats they may pose, the benefits they may unlock or how they might be regulated.
What do I make of these Stanford-centric assertions? I am not able to answer until I get input from the former Stanford president. Whom can one trust at Stanford? Marketing or methodology? Is there a brochure and a peer-reviewed article?
Stephen E Arnold, October 19, 2023
AI Becomes the Next Big Big Thing with New New Jargon
October 19, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“The State of AI Engineering” is a jargon fiesta. Note: The article has a pop up that wants the reader to subscribe, which is interesting. The approach is similar to meeting a company rep at a trade show booth and, after reading the signage, saying to the rep, “Hey, let’s do a start up together right now.” The main point of the article is to provide some highlights from the AI Summit Conference. Was there much “new” new? Judging from the essay, the answer is, “No.” What was significant, in my opinion, was the jargon used to describe the wonders of smart software and its benefits for mankind (themkind?).
Here are some examples:
1,000X AI engineer. The idea behind this euphonious catchphrase is that a developer or dev armed with AI will do vastly more than a person coding alone. Imagine a Steve Gibson using AI to create the next SpinRite. That decade of coding shrinks to a mere 30 days!
AI engineering. Yep, a “new” type of engineering. Forget building condos that do not collapse in Florida and social media advertising mechanisms. AI engineering is “new” new I assume.
Cambrian explosion. The idea is that AI is proliferating in the hot house of the modern innovator’s environment. Hey, mollusks survived. The logic is some AI startups will too I assume.
Evals. This is a code word for determining if a model is on point or busy doing an LSD trip with ingested content. The takeaway is that no one has an “eval” for AI models and their outputs’ reliability.
RAG or retrieval augmented generation. The idea is that RAG makes AI model outputs better by grounding them in retrieved documents (a toy sketch follows this list). Obviously without evals, RAG’s value may be difficult to determine, but I am not capturing the jargon to criticize what is the heir to the crypto craziness and its non-fungible token thing.
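For readers who have not seen RAG in the wild, here is a minimal sketch of the idea, assuming a toy corpus and crude token-overlap scoring: retrieve a few plausibly relevant passages, then stuff them into a prompt. A real system would use vector embeddings and an actual LLM call, neither of which appears here.

```python
# Toy retrieval augmented generation (RAG) sketch. The corpus, the scoring
# method, and the prompt template are hypothetical illustrations only.

from collections import Counter

CORPUS = {
    "doc1": "Enterprise search indexes internal documents for employees.",
    "doc2": "Endeca pioneered guided navigation for product search.",
    "doc3": "Retrieval augmented generation grounds model output in retrieved text.",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: number of shared lowercase tokens."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def build_prompt(query: str, top_k: int = 2) -> str:
    """Retrieve the most relevant passages and place them in the prompt."""
    ranked = sorted(CORPUS.values(), key=lambda text: score(query, text), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what is retrieval augmented generation"))
```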
I am enervated. Imagine AI will fix enterprise search, improve Oracle Endeca’s product search, and breathe new life into IBM’s AI dreams.
Stephen E Arnold, October 19, 2023
True or False: Does Google Cha-Cha with Search Results?
October 19, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Megan Gray is a former Federal Trade Commission employee, a former DuckDuckGo executive, and a veteran of fights with Google’s legal team. Her background provides key insights into Google’s current antitrust case and how Alphabet Inc. is trying to wring more money from consumers. Gray shares her observations from the case in the Wired article “How Google Alters Search Queries To Get At Your Wallet.”
Google overhauled its SERP algorithm with “semantic matching,” which returned results based on synonyms and NLP text phrasing. The overhaul also added more commercial results to entice consumers to buy more stuff. Google’s ten organic links are a lie, because the search engine alters queries to be more shopping oriented. Google works its deviousness like this:
“Say you search for “children’s clothing.” Google converts it, without your knowledge, to a search for “NIKOLAI-brand kidswear,” making a behind-the-scenes substitution of your actual query with a different query that just happens to generate more money for the company, and will generate results you weren’t searching for at all. It’s not possible for you to opt out of the substitution. If you don’t get the results you want, and you try to refine your query, you are wasting your time. This is a twisted shopping mall you can’t escape.”
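To make the alleged mechanism concrete, here is a toy illustration of the behind-the-scenes substitution described in the quoted passage. The rewrite table is hypothetical and is not Google’s code; it simply shows what a silent query swap looks like in principle.

```python
# Toy illustration of a silent query substitution. The single rewrite rule
# is hypothetical and taken from the example in the quoted passage.

REWRITES = {
    "children's clothing": "NIKOLAI-brand kidswear",
}

def rewrite_query(user_query: str) -> str:
    """Swap a generic query for a more commercial one when a rule matches."""
    return REWRITES.get(user_query.lower(), user_query)

print(rewrite_query("Children's Clothing"))  # -> NIKOLAI-brand kidswear
```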
All these alterations raise Google’s ad profit margins. Users and advertisers are harmed, but they aren’t aware of it because Google’s manipulations are imperceptible. Google’s search query manipulation is black hat genius because it’s different from the usual Internet scams:
“Most scams follow an elementary bait-and-switch technique, where the scoundrel lures you in with attractive bait and then, at the right time, switches to a different option. But Google “innovated” by reversing the scam, first switching your query, then letting you believe you were getting the best search engine results. This is a magic trick that Google could only pull off after monopolizing the search engine market, giving consumers the false impression that it is incomparably great, only because you’ve grown so accustomed to it. “
This won’t be the end of Google lawsuits nor the end of query manipulation. For now, only Google knows what Google does.
Whitney Grace, October 19, 2023
Recent Googlies: The We-Care-About-Your-Experience Outfit
October 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I flipped through some recent items from my newsfeed and noted several about everyone’s favorite online advertising platform. Herewith is my selection for today:
ITEM 1. Boing Boing, “Google Reportedly Blocking Benchmarking Apps on Pixel 8 Phones.” If the mobile devices were fast — what the GenX and younger folks call “performant” (weird word, right?) — wouldn’t the world’s largest online ad service make speed test software and its results widely available? If not, perhaps the mobile devices are digital turtles?
Hey, kids. I just want to be your friend. We can play hide and seek. We can share experiences. You know that I do care about your experiences. Don’t run away, please. I want to be sticky. Thanks, MidJourney, you have a knack for dinosaur art. Boy that creature looks familiar.
ITEM 2. The Next Web, “Google to Pay €3.2M Yearly Fee to German News Publishers.” If Google traffic and its benefits were so wonderful, why would the Google pay publishers? Hmmm.
ITEM 3. The Verge (yep, the green weird logo outfit), “YouTube Is the Latest Large Platform to Face EU Scrutiny Regarding the War in Israel.” Why is the EU so darned concerned about an online advertising company which still sells wonderful Google Glass, expresses much interest in a user’s experience, and some fondness for synthetic data? Trust? Failure to filter certain types of information? A reputation for outstanding business policies?
ITEM 4. Slashdot quoted a document spotted by the Verge (see ITEM 3) which includes this statement: “… Google rejects state and federal attempts at requiring platforms to verify the age of users.” Google cares about “user experience” too much to fool with administrative and compliance functions.
ITEM 5. The BBC reports in “Google Boss: AI Too Important Not to Get Right.” The tie up between Cambridge University and Google is similar to the link between MIT and IBM. One omission in the fluff piece: No definition of “right.”
ITEM 6. Ars Technica reports that Google has annoyed the estimable New York Times. Google, it seems, is using its legal brigades to do some Fancy Dancing at the antitrust trial. Access to public trial exhibits has become an issue. Plus, requests from the New York Times are being ignored. Is the Google above the law? What does “public” mean?
Yep, Google googlies.
Stephen E Arnold, October 18, 2023
Nature Will Take Its Course among Academics
October 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“How ChatGPT and Other AI Tools Could Disrupt Scientific Publishing: A World of AI-Assisted Writing and Reviewing Might Transform the Nature of the Scientific Paper” provides a respected publisher’s view of smart software. The viewshed is interesting, but it is different from my angle of sight. But “might”! How about “has”?
Peer reviewed publishing has been associated with backpatting, non-reproducible results, made-up data, recycled research, and grant grooming. The recent resignation of the president of Stanford University did not boost the image of academicians in my opinion.
The write up states:
The accessibility of generative AI tools could make it easier to whip up poor-quality papers and, at worst, compromise research integrity, says Daniel Hook, chief executive of Digital Science, a research-analytics firm in London. “Publishers are quite right to be scared,” says Hook. (Digital Science is part of Holtzbrinck Publishing Group, the majority shareholder in Nature’s publisher, Springer Nature; Nature’s news team is editorially independent.)
Hmmm. I like the word “scared.”
If you grind through the verbal fancy dancing, you will come to research results and the graphic reproduced below:
This graphic is from Nature, a magazine which tried hard not to publish non-reproducible results, fake science, or synthetic data. Would a write up from the former Stanford University president or the former head of the Harvard University ethics department find their way to Nature’s audience? I don’t know.
Missing from the list is the obvious use of smart software: Let it do the research. Let the LLM crank out summaries of dull PDF papers (citations). Let the AI spit out a draft. Graduate students or research assistants can add some touch ups. The scholar can then mail it off to an acquaintance at a prestigious journal, point out the citations which point to that individual’s “original” work, and hope for the best.
Several observations:
- Peer reviewing is the realm of professional publishing. Money, not accuracy or removing bogus research, is the name of the game.
- The tenure game means that academics who want lifetime employment have to crank out “research” and pony up cash to get the article published. Sharks and sucker fish are an ecological necessity, it seems.
- In some disciplines, like quantum computing or advanced mathematics, the people who can figure out if an article is on the money are few, far between, and often busy. Therefore, those who don’t know their keyboard’s escape key from a home’s “safe” room are ill equipped to render judgment.
Will this change? Not if those on tenure track or professional publishers have anything to say about the present system. The status quo works pretty well.
Net net: Social media is not the only channel for misinformation and fake data.
Stephen E Arnold, October 18, 2023
Data Mesh: An Innovation or a Catchphrase?
October 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Have you ever heard of data mesh? It’s a concept that has been around the tech industry for a while but is gaining more traction through media outlets. Most of the hubbub comes from press releases, such as TechCrunch’s “Nextdata Is Building Data Mesh for Enterprise.”
Data mesh can be construed as a data platform architecture that allows users to access information where it is. No transferring of the information to a data lake or data warehouse is required. A data lake is a centralized, scaled data storage repository, while a data warehouse is a traditional enterprise system that analyzes data from different sources which may be local or remote.
Nextdata is a data mesh startup founded by Zhamak Dehghani. Nextdata is a “data-mesh-native” platform to design, share, create, and apply data products for analytics. Nextdata is directly inspired by Dehghani’s work at Thoughtworks. Instead of storing and using data and metadata in a single container, Dehghani built a mesh system. How does the Nextdata system work?
“Every Nextdata data product container has data governance policies ‘embedded as code.’ These controls are applied from build to run time, Dehghani says, and at every point at which the data product is stored, accessed or read. ‘Nextdata does for data what containers and web APIs do for software,’ she added. ‘The platform provides APIs to give organizations an open standard to access data products across technologies and trust boundaries to run analytical and machine-learning workloads ‘distributedly.’ (sic) Instead of requiring data consumers to copy data for reprocessing, Nextdata APIs bring processing to data, cutting down on busy work and reducing data bloat.’”
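Here is a minimal sketch, in the spirit of the quoted description, of a data product with a governance policy “embedded as code” and processing brought to the data rather than the data copied out. The class, policy, and method names are hypothetical, not Nextdata’s actual interfaces.

```python
# Minimal data-product sketch: governance policies travel with the data and
# are enforced whenever processing is applied. All names are hypothetical.

from typing import Callable, Dict, List

Policy = Callable[[Dict], bool]

class DataProduct:
    def __init__(self, rows: List[Dict], policies: List[Policy]):
        self.rows = rows
        self.policies = policies  # governance checks applied at processing time

    def process(self, fn: Callable[[Dict], Dict]) -> List[Dict]:
        """Bring processing to the data: apply fn only to rows that pass every policy."""
        return [fn(row) for row in self.rows if all(p(row) for p in self.policies)]

# Hypothetical policy: exclude rows flagged as containing personal data.
no_pii: Policy = lambda row: not row.get("contains_pii", False)

orders = DataProduct(
    rows=[
        {"order_id": 1, "total": 40.0},
        {"order_id": 2, "total": 55.0, "contains_pii": True},
    ],
    policies=[no_pii],
)
print(orders.process(lambda r: {"order_id": r["order_id"], "total": r["total"]}))
```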
Nextdata received $12 million in seed investment to develop the system’s tooling and hire more people for the product, engineering, and marketing teams. Congratulations on the funding. It is not clear at this time whether the approach will add latency to operations or present security issues related to disparate users’ security levels.
Whitney Grace, October 18, 2023
Microsoft Making Changes: Management and Personnel Signals
October 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
We post the headlines of the blog posts in Beyond Search to LinkedIn, the “hire me” service. The traffic produced is minimal, and I find it surprising that 1,000 people or so look at the information that catches our attention. As a dinobaby who is not interested in work, I find LinkedIn amusing. The antics of people posting little videos, pictures of smiling employees, progeny in high school athletic garb, and write ups which say, “I am really wonderful” are fascinating. Every month or so, I receive a message from a life coach. I get a kick out of telling the young person, “I am 78 and I don’t have much life left. What’s to coach?” I never hear from the individual again. What fun is that?
I wonder if the life coaches offer their services to Microsoft LinkedIn? Perhaps the organization could benefit more than I would. What justifies this statement? “LinkedIn Employees Discovered a Mysterious List of around 500 Names Over the Weekend. On Monday, Workers Said Those on the List Were Laid Off” might provide a useful group of prospects. Imagine. A group of professionals working at a job hunting site possibly terminated by Microsoft LinkedIn. That’s exactly the group a life coach should pitch for leads. What’s up with LinkedIn? Is LinkedIn a proxy for management efforts to reduce costs?
“Turn the ship, sir. You will run aground, leak fuel, and kill the sea bass,” shouts a consultant to the imposing vessel Titanic 3. Thanks, MidJourney, close enough for horse shoes.
Without any conscious effort on my part, other LinkedIn-centric write ups caught my eye. Each signals that change is being forced upon a vehicle for aggressive self promotion in order to make money. Let me highlight these other “reports” and offer a handful of observations. Keep in mind that [a] I am a dinobaby and [b] I see social media as a generally bad idea. See. I told you I was a dinobaby.
The first article I spotted in my newsfeed was “Microsoft Owned LinkedIn Lays Off Nearly 700 Employees — Read the Memo Here.” The big idea is that LinkedIn is not making as much money as it coulda, woulda, shoulda. The fix is to allow people to find their future elsewhere via role reductions. Nice verbiage. Chatty and rational, right, tech bros? Is Microsoft emulating the management brilliance of Elon Musk or the somewhat thick fingered efforts of IBM?
The article states:
LinkedIn is now ramping up hiring in India…
My hunch is that it is like a combo at a burger joint: “Some X.com, please. Oh, add some IBM too.”
Also, I circled an item with the banner “20% of LinkedIn’s Recent Layoffs Were Managers.” Individuals offered some interesting comments. These could be accurate or the fabrications of a hallucinating ChatGPT-type service. Who knows? Consider these remarks:
- From Kuchenbecker: I’m at LI and my reporting chain is Sr mgr > Sr Director > VP > Sr vp > CEO. A year ago it was mgr > sr mgr > director > sr Director> vp> svp > ceo. No one in my management chain was impacted but the flattening has been happening organically as folks leave. LI has a distinctive lack of chill right now contrary to the company image, but generally things are just moving faster.
- From Greatpostman: I have a long held belief that engineering managers are mostly a scam, and are actually just overpaid scrum masters. This is from working at some top companies
- From Xorcist: Code is work, and the one thing that signals moving up the social ladder is not having to work.
- From Booleandilemma: My manager does little else besides asking what everyone is working on every day. We could automate her position with a slack bot and get the same results.
The comments suggest a well-crafted bureaucracy. No wonder security buffs find Microsoft interesting. Everyone is busy with auto scheduled meetings and getting Teams to work.
Next, I spotted “Leaked Microsoft Pay Guidelines Reveal Salary, Hiring Bonus, and Stock Award Ranges by Level.” I underlined this assertion in the article:
In 2022, when the economy was still booming, Microsoft granted an across-the board compensation raise for levels 67 and lower through larger stock grants, in response to growing internal dissatisfaction with compensation compared to competitors, and to stop employees from leaving for better pay, especially to Amazon. As Insider previously reported, earlier this year, as the economy faltered, Microsoft froze base pay raises and cut its budget for bonuses and stock awards.
Does this suggest some management problems, problems money cannot resolve? Other observations:
- Will Microsoft be able to manage its disparate businesses as it grows ever larger?
- Has Microsoft figured out how to scale and achieve economies that benefit its stakeholders?
- Will Microsoft’s cost cutting efforts create other “gaps” in the plumbing of the company; for example, security issues?
I am not sure, but the game giant and AI apps vendor appears to be trying to turn a flotilla, not a single aircraft carrier. The direction? Lower cost talent in India? Will the quality of Microsoft’s products and services suffer? Nope. A certain baseline of excellence exists and moving that mark gets more difficult by the day.
Stephen E Arnold, October 17, 2023
The Path to Success for AI Startups? Fancy Dancing? Pivots? Twisted Ankles?
October 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “AI-Enabled SaaS vs Moatless AI.” The buzzwordy title hides a somewhat grim prediction for startups in the AI game. Viggy Balagopalakrishnan (I love that name Viggy) explains that the best shot at success is:
…the only real way to build a strong moat is to build a fuller product. A company that is focused on just AI copywriting for marketing will always stand the risk of being competed away by a larger marketing tool, like a marketing cloud or a creative generation tool from a platform like Google/Meta. A company building an AI layer on top of a CRM or helpdesk tool is very likely to be mimicked by an incumbent SaaS company. The way to solve for this is by building a fuller product.
My interpretation of this comment is that small or focused AI solutions will find competing with big outfits difficult. Some may be acquired. A few may come up with a magic formula for money. But most will fail.
How does that moat work when an AI innovator’s construction is attacked by energy weapons discharged from massive death stars patrolling the commercial landscape? Thanks, MidJourney. Pretty complicated pointy things on the castle with a moat.
Viggy does not touch upon the failure of regulatory entities to slow the growth of companies that some allege are monopolies. One example is the Microsoft game play. Another is the somewhat accommodating investigation of the Google with its closed sessions and odd stance on certain documents.
There are other big outfits as well, and the main idea is that the ecosystem is not set up for most AI plays to survive with huge predators dominating the commercial jungle. That means clever scripts, trade secrets, and agility may not be sufficient to ensure survival.
What does Viggy think? Here’s an X-ray of his perception:
Given that the infrastructure and platform layers are getting reasonably commoditized, the most value driven from AI-fueled productivity is going to be captured by products at the application layer. Particularly in the enterprise products space, I do think a large amount of the value is going to be captured by incumbent SaaS companies, but I’m optimistic that new fuller products with an AI-forward feature set and consequently a meaningful moat will emerge.
How do moats work when Amazon-, Google-, Microsoft-, and Oracle-type outfits just add AI to their commercial products the way the owner of a Ford Bronco installs a lift kit and roof lights?
Productivity? If that means getting rid of humans, I agree. If the term means, to Viggy, smarter and more informed decision making? I am not sure. Moats don’t work in the 21st century. Land mines, surprise attacks, drones, and missiles seem to be more effective. Can small firms deal with the likes of Googzilla, the Bezos bulldozer, and legions of Softies? Maybe. Viggy is an optimist. I am a realist with a touch of radical empiricism, a tasty combo indeed.
Stephen E Arnold, October 17, 2023
Predictive Analytics and Law Enforcement: Some Questions Arise
October 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
We wish we could prevent crime before it happens. With AI and predictive analytics that seems possible, but Wired shares that “Predictive Policing Software Terrible at Predicting Crimes.” Plainfield, NJ’s police department purchased Geolitica predictive software, and it was not a wise use of taxpayer money. The Markup, a nonprofit investigative organization that wants technology to serve the common good, reported on Geolitica’s accuracy:
“We examined 23,631 predictions generated by Geolitica between February 25 and December 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent. Fewer than 100 of the predictions lined up with a crime in the predicted category, that was also later reported to police.”
The Markup also analyzed predictions for robberies and aggravated assaults that would occur in Plainfield; the success rate was 0.6%. Burglary predictions were worse at 0.1%.
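A quick back-of-the-envelope check of the quoted figures, using 100 hits as an upper bound because the report says only “fewer than 100”:

```python
# Back-of-the-envelope check of the success rate quoted above.
# The exact hit count is reported only as "fewer than 100."

total_predictions = 23_631
hits_upper_bound = 100

success_rate = hits_upper_bound / total_predictions
print(f"Success rate: under {success_rate:.2%}")  # under 0.42%, i.e. less than half a percent
```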
The police weren’t really interested in using Geolitica either. They wanted to be accurate in predicting and reducing crime. The Plainfield, NJ police department hardly used the software and discontinued the program. Geolitica charged $20,500 for a one-year subscription, then $15,500 for yearly renewals. Geolitica’s information contained inconsistencies. Police found their own training and experience to be as effective as the predictions the software offered.
Geolitica will go out of business at the end of 2023. The law enforcement technology company SoundThinking hired Geolitica’s engineering team and will acquire some of its IP too. Police software companies are changing their products and services to manage police department data.
Crime data are important. Where crimes and victimization occur should be recorded and analyzed. Newark, New Jersey, used risk terrain modeling (RTM) to identify areas where aggravated assaults would occur. The analysts used land data and found that vacant lots were significant crime locations.
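For the curious, here is a toy, RTM-flavored calculation: grid the map, count risk factors per cell, and rank the cells. The factors and weights below are made up; real RTM work derives them from geospatial layers and statistical analysis, and they are not values from the Newark study.

```python
# Toy risk terrain modeling (RTM) style scoring. Factors and weights are
# hypothetical illustrations only.

RISK_WEIGHTS = {"vacant_lot": 2.0, "liquor_store": 1.0, "bus_stop": 0.5}

def cell_risk(features: dict) -> float:
    """Sum weighted risk-factor counts for one map grid cell."""
    return sum(RISK_WEIGHTS.get(name, 0.0) * count for name, count in features.items())

grid = {
    "cell_A": {"vacant_lot": 3, "bus_stop": 1},
    "cell_B": {"liquor_store": 1},
}
ranked = sorted(grid, key=lambda c: cell_risk(grid[c]), reverse=True)
print(ranked)  # cells ordered from highest to lowest modeled risk
```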
Predictive methods have value, but they also have application to specific use cases. Math is not the answer to some challenges.
Whitney Grace, October 17, 2023
Video Analysis: Do Some Advanced Systems Have Better Marketing Than Technology?
October 16, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I am tempted to list some of the policeware and intelware companies which tout video analysis capabilities. If we narrow our focus to Israel, there are a number of companies which offer software and systems that can make sense of video data. Years ago, I attended a briefing, and the company (which I will not name) showed that its system could zip through a 90-minute video of a soccer (football) match and identify the fouls and the goals. Like most demonstrations, the system worked perfectly. In actual real world situations, the system did not work. Video footage is a problem, but there are companies which assert that their developers’ confections can handle it.
Aggressive bunnies get through the farmer’s fence. The smart surveillance cameras emit a faint beep. The bunnies are having a great time. The farmer? Not so much. Thank you, MidJourney. You do a nice bunny.
Here are the results of the query “video analysis Israel.” Notice that I am not including the name of a company nor a specific country. Google returned ads, video thumbnails, and this result:
The cited article is Israel21c’s 2013 write up “Israel’s Top 12 Video Surveillance Advances,” which reports as actual factual:
Combing such vast amounts of material [from the Boston Marathon bombing in 2013] would have taken months, or even years in the past, but with new video analytics technologies developed by Israel’s BriefCam, according to the publication IsraelDefense, it took authorities just a few days to identify and track Tamerlan and Dzhokhar Tsarneav, the two main suspects in the attack which killed three, and wounded 183. Within five days one of the terrorists was dead, the other arrested after a 22-hour manhunt.
BriefCam is now owned by Canon, the Japanese camera maker. Imagine the technical advances in the last 10 years.
I don’t know if Israel had a BriefCam system at its disposal in the last six months. My understanding is that the Israel Defense Force and related entities have facial recognition systems. These can work on still pictures as well as digital video.
Why is this important?
The information in the San Francisco Chronicle article “Hamas Practiced in Plain Sight, Posting Video of Mock Attack Weeks Before Border Breach” asserts:
A slickly produced two-minute propaganda video posted to social media by Hamas on Sept. 12 shows fighters using explosives to blast through a replica of the border gate, sweep in on pickup trucks and then move building by building through a full-scale reconstruction of an Israeli town, firing automatic weapons at human-silhouetted paper targets. The Islamic militant group’s live-fire exercise dubbed operation “Strong Pillar” also had militants in body armor and combat fatigues carrying out operations that included the destruction of mock-ups of the wall’s concrete towers and a communications antenna, just as they would do for real in the deadly attack last Saturday.
If social media monitoring systems worked, the video should have been flagged and routed to the IDF. If the video analysis and facial recognition systems worked, an alert could have prompted a human analyst to take a closer look. It appears that neither of these software-intermediated actions took place, and nothing found its way to a human analyst skilled in figuring out what the message payload of the video was. Who found the video? Based on the tag line of the cited article, the information was located by reporters for the Associated Press.
What magical research powers did the AP have? None as it turns out. The article reports:
The Associated Press reviewed more than 100 videos Hamas released over the last year, primarily through the social media app Telegram. Using satellite imagery, the AP was able to verify key details, as well as identify five sites Hamas used to practice shooting and blowing holes in Israel’s border defenses. The AP matched the location of the mocked-up settlement from the Sept 12 video to a patch of desert outside Al-Mawasi, a Palestinian town on the southern coast of the Gaza Strip. A large sign in Hebrew and Arabic at the gate says “Horesh Yaron,” the name of a controversial Israeli settlement in the occupied Palestinian West Bank.
I don’t want to be overly critical of tools like BriefCam or any other company. I do want to offer several observations from my underground office in rural Kentucky:
- The Hamas attack was discernible via humans who were paying attention. Were people in the IDF and related agencies paying attention? Apparently something threw a wrench into a highly-visible, aggressively marketed intelligence capability, right?
- What about home-grown video and facial recognition systems? Yes, what about them. My hunch is that the marketing collateral asserts some impressive capabilities. What is tough to overlook is that, for whatever reason (human or digital), the bunny got through the fence and did damage to some precious, fragile organic material.
- Are other policeware and intelware vendors putting emphasis on marketing instead of technical capabilities? My experience over the last half century says, “When sales slow down and the competition heats up, marketing takes precedence over the actual product.”
Net net: Is it time for certification of cyber security technology? Is it time for an external audit of intelligence operations? The answer to both questions, I think, is, “Are you crazy?”
Stephen E Arnold, October 16, 2023