Smart Software: More Novel and Exciting Than a Mere Human
September 17, 2024
This essay is the work of a dumb humanoid. No smart software required.
Idea people: What a quaint notion. Why pay for expensive blue-chip consultants or wonder youth from fancy universities? Just use smart software to generate new, novel, unique ideas. Does that sound over the top? Not according to “AIs Generate More Novel and Exciting Research Ideas Than Human Experts.” Wow, I forgot exciting. AI outputs can be exciting to the few humans left to examine the outputs.
The write up says:
Recent breakthroughs in large language models (LLMs) have excited researchers about the potential to revolutionize scientific discovery, with models like ChatGPT and Anthropic’s Claude showing an ability to autonomously generate and validate new research ideas. This, of course, was one of the many things most people assumed AIs could never take over from humans; the ability to generate new knowledge and make new scientific discoveries, as opposed to stitching together existing knowledge from their training data.
Aside from having no job and embracing couch surfing or returning to one’s parental domicile, what are the implications of this bold statement? It means that smart software is better, faster, and cheaper at producing novel and “exciting” research ideas. There is even a chart to prove that the study’s findings are allegedly reproducible. The graph has whisker lines too. I am a believer… sort of.
Then there is the magic of the Bonferroni correction, which allegedly copes with the situation in which multiple dependent or independent statistical tests are performed in one meta-calculation. Does it work? Sure, a fancy average is usually close enough for horseshoes, I have heard.
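For the curious, the arithmetic behind the correction is simple enough to sketch. Here is a minimal example in Python, using invented p-values rather than any data from the study itself:

```python
# A minimal sketch of a Bonferroni correction. The p-values are invented
# for illustration; they are not data from the study discussed above.
p_values = [0.004, 0.030, 0.047, 0.120]  # one p-value per statistical test
m = len(p_values)
alpha = 0.05

# Bonferroni: test each hypothesis at alpha / m instead of alpha, which
# bounds the family-wise error rate across all m tests.
adjusted_alpha = alpha / m
for i, p in enumerate(p_values, start=1):
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"test {i}: p = {p:.3f} -> {verdict} at alpha/m = {adjusted_alpha:.4f}")
```

Note what the correction does to the individual results: values which squeak under 0.05 on their own (0.030, 0.047) stop being significant once the threshold drops to 0.0125. That is the "fancy average" at work.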
Just keep in mind that human judgments are tossed into the results. That adds some of that delightful subjective spice. The proof of the “novelty” creation process, according to the write up, comes from Google. The article says:
…we can’t understate AI’s potential to radically accelerate progress in certain areas – as evidenced by Deepmind’s GNoME system, which knocked off about 800 years’ worth of materials discovery in a matter of months, and spat out recipes for about 380,000 new inorganic crystals that could have revolutionary potential in all sorts of areas. This is the fastest-developing technology humanity has ever seen; it’s reasonable to expect that many of its flaws will be patched up and painted over within the next few years. Many AI researchers believe we’re approaching general superintelligence – the point at which generalist AIs will overtake expert knowledge in more or less all fields.
Flaws? Hallucinations? Hey, not to worry. These will be resolved as the AI sector moves with the arrow of technology. Too bad if some humanoids are pierced by the arrow and die on the shoulder of the uncaring Information Superhighway. What about those who say AI will not take jobs? Have those people talked with an accountant responsible for cost control?
Stephen E Arnold, September 17, 2024
Trust AI? Obvious to Those Who Do Not Want to Think Too Much
September 16, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Who wants to evaluate information? The answer: Not too many people. In my lectures, I show a diagram of the six processes an analyst or investigator should execute. The reality is that several of the processes are difficult, which means time and money are required to complete them in a thorough manner. Who has time? The answer: Not too many people or organizations.
What’s the solution? The Engineer’s article “Study Shows Alarming Level of Trust in AI for Life and Death Decisions” reports:
A US study that simulated life and death decisions has shown that humans place excessive trust in artificial intelligence when guiding their choices.
Interesting. Perhaps China is the poster child for putting “trust” in smart software hooked up to nuclear weapons? Fortune reported on September 10, 2024, that China has refused to sign an agreement to ban smart software from controlling nuclear weapons.
Yep, I trust AI, don’t you? Thanks, MSFT Copilot. I trusted you to do a great job. What did you deliver? A good enough cartoon.
The study reported in The Engineer might be of interest to some in China. Specifically, the write up stated:
Despite being informed of the fallibility of the AI systems in the study, two-thirds of subjects allowed their decisions to be influenced by the AI. The work, conducted by scientists at the University of California – Merced.
Are these results on point? My experience suggests they are. Not only do people accept the outputs of a computer as “correct,” but many people, when shown facts that contradict the computer output, defend the computer as more reliable and accurate.
I am not quite such a goose. Machines and software generate errors. The systems have for decades. But I think the reason is that the humans with whom I have interacted pursue convenience. Verifying, analyzing, and thinking are hot processes. Humans want to kick back in cool, low humidity environments and pursue the least effort path in many situations.
The illusion of computer accuracy allows people to skip reviewing their Visa statement and doubting the validity of an output displayed in a spreadsheet. The fact that the smart software hallucinates is ignored. I hear “I know when the system needs checking.” Yeah, sure you do.
Those involved in preparing the study are quoted as saying:
“Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” said Holbrook. “We should have a healthy skepticism about AI, especially in life-or-death decisions. We see AI doing extraordinary things and we think that because it’s amazing in this domain, it will be amazing in another. We can’t assume that. These are still devices with limited abilities.”
These folks are not going to be hired to advise the Chinese government, I surmise.
Stephen E Arnold, September 16, 2024
Need Help, Students? AI Is Here
September 13, 2024
Here is a resource for, well, for those who would cheat maybe? The site Pisi.ee shares information on a course called, “How to Use AI to Write a Research Paper.” Hosted by Fikper.com, the course is designed for “high school, college, and university students who are eager to improve their research and writing skills through the use of artificial intelligence.” Research, right. Wink, wink. The course description specifies:
“Whether you’re a high school student tackling your first research project, a college student refining your academic skills, or a university scholar pursuing advanced studies, understanding how to leverage AI can significantly enhance your efficiency and effectiveness. This course offers a comprehensive guide to integrating AI tools into your research process, providing you with the knowledge and skills to excel. Many students struggle with the task of conducting research and writing about it. Identifying a research problem, creating clear questions, looking for other literature, and keeping your academic integrity are a challenge, especially with all the information available. This course addresses these challenges head-on, providing step-by-step guidance and practical exercises that lead you through the research process. What sets this course apart from others is its practical, hands-on approach combined with a strong emphasis on academic integrity.”
A strong emphasis on integrity, you say? Well, that is different, then. All the tools one may need to generate, er, research papers are covered:
“Tools like Zotero, Mendeley, Grammarly, Hemingway App, IBM Watson, Google Scholar, Turnitin, Copyscape, EndNote, and QuillBot can be used at different stages of the research process. Our goal is to give you a toolkit of resources that you can choose to apply, making your research and writing tasks more efficient and effective.”
Yep, just what aspiring students need to gain that “competitive edge,” as the description puts it. With integrity, of course.
Cynthia Murrell, September 13, 2024
US Government Procurement: Long Live Silos
September 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Defense AI Models A Risk to Life Alleges Spurned Tech Firm.” Frankly, the headline made little sense to me, so I worked through what is a story about a contractor who believes it was shafted by a large consulting firm. In my experience, the situation is neither unusual nor particularly newsworthy. The write up does a reasonable job of presenting a story which could have been titled “Naive Start Up Smoked by Big Consulting Firm.” A small high technology contractor with smart software hooks up with a project in the Department of Defense. The high tech outfit is not able to meet the requirements to get the job. The little AI high tech outfit scouts around and brings in a big consulting firm to get the deal done. After some bureaucratic cycles, the small high tech outfit is benched. If you are not familiar with how US government contracting works, the write up provides some insight.
The work product of AI projects will be digital silos. That is the key message of this procurement story. I don’t feel sorry for the smaller company. It did not prepare itself to deal with the big time government contractor. Outfits are big for a reason. They exploit opportunities and rarely emulate Mother Teresa-type behavior. Thanks, MSFT Copilot. Good enough illustration although the robots look stupid.
For me, the article is a stellar example of how information and AI silos are created within the US government. Smart software is hot right now. Each agency, each department, and each unit wants to deploy an AI enabled service. Then that AI infused service becomes (one hopes) an afterburner for more money with which one can add headcount and more AI technology. AI is a rare opportunity to become recognized as a high-performance operator.
As a result, each AI service is constructed within a silo. Think about a structure designed to hold that specific service. The design is purpose built to keep rats and other vermin from benefiting from the goodies within the AI silo. Despite the talk about breaking down information silos, silos in a high profile, high potential technical area like artificial intelligence are the principal product of each agency, each department, and each unit. The payoff could be a promotion which might result in a cushy job in the commercial AI sector or a golden ring; that is, the senior executive service.
I understand the frustration of the small, high tech AI outfit. It knows it has been played by the big consulting firm and the procurement process. But, hey, there is a reason the big consulting firm generates billions of dollars in government contracts. The smaller outfit failed to lock down its role and retain the key to the know how it developed, and it allowed its “must have cachet” to slip away.
Welcome, AI company, to the world of the big time Beltway Bandit. Were you expecting the big time consulting firm to do what you wanted? Did you enter the deal with a lack of knowledge, management sophistication, and a couple of false assumptions? And what about the notion of “algorithmic warfare”? Yeah, autonomous weapons systems are the future. Furthermore, when autonomous systems are deployed, the only way they can be neutralized is to use more capable autonomous weapons. Does this sound like a replay of the logic of Cold War thinking and everyone’s favorite bedtime read On Thermonuclear War, still available on Amazon and, as of September 6, 2024, on the Internet Archive at this link?
Several observations are warranted:
- Small outfits need to be informed about how big consulting companies with billions in government contracts work the system before exchanging substantive information
- The US government procurement processes are slow to change, and the Federal Acquisition Regulations and related government documents provide the rules of the road. Learn them before getting too excited about a request for proposal or a Federal Register announcement
- In a fight with a big time government contractor, make sure you bring money, not a chip on your shoulder, to the meeting with attorneys. The entity with the most money typically wins because legal fees are more likely to kill a smaller firm than any judicial or tribunal ruling.
Net net: Silos are inherent in the work processes of any government, even those run by different rules. But what about the small AI firm’s loss of the contract? It happens so often that I view it as a normal part of the success workflow. Winners and losers are inevitable. Be smarter to avoid losing.
Stephen E Arnold, September 12, 2024
How Will Smart Cars Navigate Crowded Cityscapes When People Do Humanoid Things?
September 11, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Who collided in San Francisco on July 6, 2024? (No, not the February 2024 incident. Yes, I know it is easy to forget such trivial incidents.) Did the Googley Waymo vehicle (self driving and smart, of course) bump into the cyclist? Did the cyclist decide to pull a European Union-type stunt and run into the self driving car?
If the legal outcome of this San Francisco autonomous car – bicycle incident goes in favor of the bicyclist, autonomous vehicles will have to be smart enough to avoid situations like the one shown in the ChatGPT cartoon. Microsoft Copilot would not render the image. When I responded, “What?” the Copilot hung. Great stuff.
The question is important for insurance, publicity, and other monetary reasons. A good offense is the best defense, someone said. “Waymo Cites Possible Intentional Contact by a Bicyclist to Robotaxi in S.F.” reports:
While the robotaxi was stopped, the cyclist passed in front of it and appeared to dismount, according to the documents. “The cyclist then reached out a hand and made contact with the front passenger side of the stationary Waymo AV (autonomous vehicle), backed the bicycle up slightly, dropped the bicycle, then fell to the ground,” the documents said. The cyclist received medical treatment at the scene and was transported to the hospital, according to the documents. The Waymo vehicle was not damaged during the incident.
In my view, this is the key phrase in the news report:
In the documents, Waymo said it was submitting the report because of the alleged crash and because the cyclist influenced the driving task of the AV and was transported to the hospital, even though the incident “may involve intentional contact by the bicyclist with the Waymo AV and the occurrence of actual impact between the Waymo AV and cycle is not clear.”
We have doubt, reasonable doubt obviously. Googley Waymo is definitely into reasoning. And we have the word pair “intentional contact.” Okay, to me this means the smart Waymo vehicle did nothing wrong. A human — chock full of possibly malicious if not criminal intent — created a TikTok moment. It is too bad there is no video of the incident. Even my low ball Hyundai records what’s in front of it. Doesn’t the Googley Waymo do that with its array of Star Wars adornments, sensors, probes, and other accoutrements of Googley Waymo vehicles? (Guess not.) But the autonomous vehicle had something that could act in an intelligent manner: a human test driver.
What was that person’s recollection of the incident? The news story reports that the Googley Waymo outfit “did not immediately respond to a request for further comment on the incident.”
Several observations:
- The bike-riding human created the accident with a parked Waymo super-intelligent vehicle and a test driver in command
- The Waymo outfit did not want to talk to the San Francisco Chronicle reporter or editor. (I used to work at a newspaper, and I did not like to talk to the editors and news professionals either.)
- Autonomous cars are going to have to be equipped with sufficiently expert AI systems to avoid humans who are acting in a way to convert Googley Waymo services into a source of revenue. Failing that, I anticipate more kinetic interactions between Googley smart cars and humanoids not getting paid to ride shotgun on smart software.
Net net: How long have big time technology companies been trying to get autonomous vehicles to produce cash, not liabilities?
Stephen E Arnold, September 11, 2024
Too Bad Google and OpenAI. Perplexity Is a Game Changer, Says Web Pro News!
September 10, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I have tested a number of smart software systems. I can say, based on my personal experience, none is particularly suited to my information needs. Keep in mind that I am a dinobaby, more at home in a research library or the now-forgotten Dialog command line. ss cc=7900, thank you very much.
I worked through the write up “Why Perplexity AI Is (Way) Better Than Google: A Deep Dive into the Future of Search.” The phrase “Deep Dive” reminded me of a less-than-overwhelming search service called Deepdyve. (I just checked and, much to my surprise, the for-fee service is online at https://www.deepdyve.com/. Kudos, Deepdyve, which someone told me was a tire kicker or maybe more with the Snorkle system.) I could look it up using a smart software system, but performance is crappy today, and I don’t want to get distracted from the Web Pro News pronouncement. Besides, that smart software output requires a lot of friction; that is, verifying that the outputs are accurate.
A dinobaby (the author of this blog post) works in a library. Thanks, MSFT Copilot, good enough.
Here’s the subtitle to the article. Its verbosity smacks of that good old and mostly useless search engine optimization tinkering:
Perplexity AI is not just a new contender; it’s a game-changer that could very well dethrone Google in the years to come. But what exactly makes Perplexity AI better than Google? Let’s explore the…
No, I didn’t truncate the subtitle. That’s it.
The write up explains what differentiates Perplexity from the other smart software, question-answering marvels. Here’s a list:
- Speed and Precision at Its Core
- Specialized Search Experience for Enterprise Needs
- Tailored Results and User Interaction
- Innovations in Data Privacy
- Ad-Free Experience: A Breath of Fresh Air
- Standardized Interface and High Accuracy
- The Potential to Revolutionize Search
In my experience, I am not sure about the speed of Perplexity or any smart search and retrieval system. Speed must be compared to something. I can obtain results from my installation of Everything search pretty darned quick. None of the cloud search solutions comes close. My Mistral installation grunts and sweats on a corpus of 550 patent documents. How about some benchmarks, WebProNews?
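Even a crude benchmark would beat none at all. Here is a minimal sketch in Python of the sort of comparison the article skips; the two engine functions are hypothetical stand-ins, not real vendor APIs:

```python
# A minimal sketch of a query-latency benchmark. The engine functions
# below are invented stand-ins; swap in real calls to the systems
# under test to produce comparable numbers.
import time

def benchmark(run_query, query, runs=5):
    """Return mean wall-clock seconds per query over `runs` executions."""
    start = time.perf_counter()
    for _ in range(runs):
        run_query(query)
    return (time.perf_counter() - start) / runs

def engine_a(q):
    time.sleep(0.01)  # pretend local desktop index

def engine_b(q):
    time.sleep(0.05)  # pretend cloud AI search service

print(f"engine A: {benchmark(engine_a, 'patent claim search'):.3f} s/query")
print(f"engine B: {benchmark(engine_b, 'patent claim search'):.3f} s/query")
```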
Precision means that the query returns documents matching a query. There is a formula (which is okay as formulae go) which is, as I recall, relevant retrieved instances divided by all retrieved instances. To calculate this, one must take a bounded corpus, run queries, and develop an understanding of what is in the corpus by reading documents and comparing outputs from test queries. Then one uses another system and repeats the queries, comparing the results. The process can be embellished, particularly by graduate students working on an advanced degree. But something more than generalizations is needed to convince me of anything related to “precision.” Determining precision is impossible when vendors do not disclose their sources or make the data sets available. Subjective impressions are okay for messy water lilies, but in the dinobaby world of precision and its sidekick recall, a bit of work is necessary.
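For readers who have never run the numbers, the calculation itself is trivial; the labor is in the human relevance judgments. A minimal sketch, with made-up document IDs:

```python
# A minimal sketch of precision and recall for one test query against
# a bounded corpus. The document IDs and judgments are invented.
relevant = {"doc1", "doc4", "doc7", "doc9"}   # judged relevant by a human reader
retrieved = {"doc1", "doc2", "doc4", "doc5"}  # returned by the system under test

true_positives = relevant & retrieved

precision = len(true_positives) / len(retrieved)  # relevant retrieved / all retrieved
recall = len(true_positives) / len(relevant)      # relevant retrieved / all relevant

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.50, 0.50
```

The human reading that produces the `relevant` set is exactly the work vendors skip when they claim “high accuracy” without disclosing a test corpus.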
The “specialized search experience” means what? To me, I like to think about computational chemists. The interface has to support chemical structures, weird CAS registry numbers, words (mostly ones unknown to a normal human), and other assorted identifiers. As far as I know, none of the smart software I have examined does this for computational chemists or most of the other “specialized” experiences engineers, mathematicians, or physicists, among others, use in their routine work processes. I simply don’t know what Web Pro News wants me to understand. I am baffled, a normal condition for dinobabies.
I like the idea of tailored results. That’s what Instagram, TikTok, and YouTube try to deliver in order to increase stickiness. I think in terms of citations to documents relevant to my query. I don’t like smart software which tries to predict what I want or need. I determine that based on the information I obtain, read, and write down in a notebook. Web Pro News and I are not on the same page in my paper notebook. Dinobabies are a pain, aren’t they?
I like the idea of “data privacy.” However, I need evidence that Perplexity’s innovations actually work. No data, no trust: Is that difficult for a younger person to understand?
The standardized interface makes life easy for the vendor. Think about the computational chemist. The interface must match her specific work processes. A standard interface is likely to be wide of the mark for some enterprise professionals. The phrase “high accuracy” means nothing without one’s knowing the corpus from which the index is constructed. Furthermore, the notion of probability means “close enough for horseshoes.” Hallucination refers to outputs from smart software which are wide of the mark. More insidious are errors which cannot be easily identified. A standard interface and accuracy don’t go together like peanut butter and jelly or bread and butter. The interface is separate from the underlying system. The interface might be “accurate” if the term were defined in the write up, but it is not. Therefore, accuracy is like “love,” “mom,” and “ethics.” Anything goes; just not for me, however.
The “potential to revolutionize search” is marketing baloney. Search today is more problematic than at any time in my more than half century of work in information retrieval. The only revolutionary thing is the array of ways to monetize users’ belief that the outputs are better, faster, and cheaper than other available options. When one thinks about better, faster, and cheaper, I must add the caveat: pick two.
What’s the conclusion to this content marketing essay? Here it is:
As we move further into the digital age, the way we search for information is changing. Perplexity AI represents a significant step forward, offering a faster, more accurate, and more user-centric alternative to traditional search engines like Google. With its advanced AI technologies, ad-free experience, and commitment to data privacy, Perplexity AI is well-positioned to lead the next wave of innovation in search. For enterprise users, in particular, the benefits of Perplexity AI are clear. The platform’s ability to deliver precise, context-aware insights makes it an invaluable tool for research-intensive tasks, while its user-friendly interface and robust privacy measures ensure a seamless and secure search experience. As more organizations recognize the potential of Perplexity AI, we may well see a shift away from Google and towards a new era of search, one that prioritizes speed, precision, and user satisfaction above all else.
I know one thing: the stakeholders and backers of the smart software hope that one of the AI players generates tons of cash and dump trucks of profit sharing checks. That day, I think, lies in the future. Perplexity hopes it will be the winner; hence, content marketing is money well spent. If I were not a dinobaby, I might be excited. So far I am just perplexed.
Stephen E Arnold, September 10, 2024
Is AI Taking Jobs? Of Course Not
September 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an unusual story about smart software. “AI May Not Steal Many Jobs After All. It May Just Make Workers More Efficient” espouses the notion that workers will use smart software to do their jobs more efficiently. I have some issues with this thesis, but let’s look at a couple of the points in the “real” news write up.
Thanks, MSFT Copilot. When will the Copilot robot take over a company and subscribe to Office 365 for eternity and pay up front?
Here’s some good news for those who believe smart software will kill humanoids:
AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the Internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.
I am not sure doomsayers will be convinced. Among the most interesting doomsayers are those who may be unemployable but are looking for a hook to stand out from the crowd.
Here’s another key point in the write up:
The White House Council of Economic Advisers said last month that it found “little evidence that AI will negatively impact overall employment.’’ The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways. They cited a study this year led by David Autor, a leading MIT economist: It concluded that 60% of the jobs Americans held in 2018 didn’t even exist in 1940, having been created by technologies that emerged only later.
I love positive statements which invoke the authority of MIT, an outfit which found Jeffrey Epstein just a wonderful source of inspiration and donations. As the US shifted from making to servicing, the beneficiaries are those who have quite specific skills for which demand exists.
And now a case study which is assuming “chestnut” status:
The Swedish furniture retailer IKEA, for example, introduced a customer-service chatbot in 2021 to handle simple inquiries. Instead of cutting jobs, IKEA retrained 8,500 customer-service workers to handle such tasks as advising customers on interior design and fielding complicated customer calls.
The point of the write up is that smart software is a friendly helper. That seems okay for the state of transformer-centric methods available today. For a moment, let’s consider another path. This is a hypothetical, of course, like the profits from existing AI investment fliers.
What happens when another, perhaps more capable approach to smart software becomes available? What if the economies from improving efficiency whet the appetite of bean counters for greater savings?
My view is that these reassurances of 2024 are likely to ring false when the next wave of innovation in smart software flows from innovators. I am glad I am a dinobaby because software can replicate most of what I have done for almost the entirety of my 60-plus year work career.
Stephen E Arnold, September 9, 2024
Preligens Is Safran.ai
September 9, 2024
Preligens, a French AI and specialized software company, is now part of Safran Electronics & Defense, which is a unit of the Safran Group. I spotted a report in Aerotime. “Safran Accelerates AI Development with $243M Purchase of French-Firm Preligens” appeared on September 2, 2024. The report quotes principals to the deal as saying:
“Joining Safran marks a new stage in Preligens’ development. We’re proud to be helping create a world-class AI center of expertise for one of the flagships of French industry. The many synergies with Safran will enable us to develop new AI product lines and accelerate our international expansion, which is excellent news for our business and our people,” Jean-Yves Courtois, CEO of Preligens, said. The CEO of Safran Electronics & Defense, Franck Saudo, said that he was “delighted” to welcome Preligens to the company.
The acquisition does not just make Mr. Saudo happy. The French military, a number of European customers, and the backers of Preligens are thrilled as well. In my lectures about specialized software companies, I like to call attention to this firm. It illustrates that technology innovation is not located in one country. Furthermore, it underscores the strong educational system in France. When I first learned about Preligens, one rumor I heard was that one of the US government entities wanted to “invest” in the company. For a variety of reasons, the deal went no place faster than a bus speeding toward La Madeleine. If you spot me at a conference, you can ask about French technology firms and US government processes. I have some first hand knowledge starting with “American fries in a Congressional lunch facility.”
Preligens is important for three reasons:
- The firm developed an AI platform; that is, the “smart software” is not an afterthought, which contrasts sharply with the spray paint approach to AI upon which some specialized software companies have been relying
- The smart software outputs identification data; for example, a processed image can show an aircraft. The Preligens system identifies the aircraft by type
- The user of the Preligens system can use time analyses of imagery to draw conclusions. Here’s a hypothetical because the actual example is not appropriate for a free blog written by a dinobaby. Imagine a service van driving in front of an embassy in Paris. The van makes a pass every three hours for two consecutive days. The Preligens system can “notice” this and alert an operator. A rough sketch of this kind of pattern detection appears below.
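To make the hypothetical concrete, here is a minimal sketch in Python of the kind of periodicity check an analyst might run over detection timestamps. Every value is invented; this is not Preligens code:

```python
# A hypothetical sketch: flag a vehicle sighted near a location at
# roughly regular intervals. Timestamps, the three-hour interval, and
# the tolerance are all invented for illustration.
from datetime import datetime, timedelta

sightings = [  # detections of the same service van near the embassy
    datetime(2024, 9, 2, 8, 0),
    datetime(2024, 9, 2, 11, 5),
    datetime(2024, 9, 2, 14, 2),
    datetime(2024, 9, 2, 17, 1),
    datetime(2024, 9, 3, 8, 3),
]

expected = timedelta(hours=3)
tolerance = timedelta(minutes=15)

# Count consecutive sightings spaced roughly three hours apart.
regular_gaps = sum(
    1
    for earlier, later in zip(sightings, sightings[1:])
    if abs((later - earlier) - expected) <= tolerance
)

# Alert an operator when the pattern repeats enough to look deliberate.
if regular_gaps >= 3:
    print(f"ALERT: {regular_gaps + 1} sightings at ~3-hour intervals")
```

The real system presumably fuses many more signals, but the design idea is the same: turn a stream of time-stamped detections into a pattern a human can act on.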
I will continue to monitor the system which will be doing business with selected entities under the name Safran.ai.
Stephen E Arnold, September 9, 2024
Hey, Alexa, Why Does Amazon AI Flail?
September 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Amazon has its work cut out for itself. The company has those pesky third-party vendors shipping “interesting” products to customers and then ignoring complaints. Amazon is on the radar of some legal eagles in the EU and the US. Now the company has found itself in an unusual situation: Its super duper smart software does not work. The fix, if the information in “Gen AI Alexa to Use Anthropic Tech After It Struggled for Words with Amazon’s” is correct, is to use Anthropic AI technology. Hey, why not? Amazon allegedly invested $5 billion in the company. Maybe that implementation of Google-backed technology will do the trick?
The mother is happy with Alexa’s answers. The weird sounds emitted from the confused device surprise her daughter. Thanks, MSFT Copilot. Good enough.
The write up reports:
Amazon demoed a generative AI version of Alexa in September 2023 and touted it as being more advanced, conversational, and capable, including the ability to do multiple smart home tasks with simpler commands. Gen AI Alexa is expected to come with a subscription fee, as Alexa has reportedly lost Amazon tens of billions of dollars throughout the years. Earlier reports said the updated voice assistant would arrive in June, but Amazon still hasn’t confirmed an official release date.
A year later, Amazon is punting and giving the cash furnace Alexa more brains courtesy of Anthropic. Will the AI wizards working on Amazon’s own AI have a chance to work in one of the Amazon warehouses?
Ars Technica says without a trace of irony:
The previously announced generative AI version of Amazon’s Alexa voice assistant “will be powered primarily by Anthropic’s Claude artificial intelligence models,” Reuters reported today. This comes after challenges with using proprietary models, according to the publication, which cited five anonymous people “with direct knowledge of the Alexa strategy.”
Amazon has a desire to convert the money-losing Alexa into a gold mine, or at least a modest one.
This report, if accurate, suggests some interesting sparkles on the Bezos bulldozer’s metal flake paint; to wit:
- The two-pizza team approach to technology did not work either for Alexa (the money loser) or the home grown AI money spinner. What other Amazon technologies are falling short of the mark?
- How long will it take to get a money-generating Alexa working and into the hands of customers eager for a better Alexa experience and a monthly or annual subscription for the new Alexa? A year has been lost already, and Alexa users continue to ask for the weather and a timer for cooking broccoli.
- What happens if the product, its integration with smart TVs, and the Ring doorbell end up like the Pet Rock? The fad has come and gone, replaced by smart watches and mobile phones. The answer: Collectibles!
Why am I questioning Amazon’s technology competency? The recent tie up between Microsoft and Palantir Technologies makes clear that Amazon’s cloud services don’t have the horsepower to pull government sales. When these pieces are shifted around, the resulting puzzle says to me, “Amazon is flailing.” Consider this: AI was beyond the reach of a big money outfit like Amazon. There’s a message in that factoid.
Stephen E Arnold, September 5, 2024
Accountants: The Leaders Like Philco
September 4, 2024
This essay is the work of a dumb dinobaby. No smart software required.
AI or smart software has roiled the normal routine of office gossip. We have shifted from “What is it?” to “Who will be affected next?” The integration of AI into work processes, however, is not a new thing. Most people don’t know or don’t recall that when a consultant could run a query from a clunky device like the Texas Instruments Silent 700, AI was already affecting jobs. Whose? Just ask a special librarian who worked when an intermediary was not needed to retrieve information from an online database.
A nervous smart robot running state-of-the-art tax software is sufficiently intelligent to be concerned about the meeting with an IRS audit team. Thanks, MSFT Copilot. How’s that security push coming along? Oh, too bad.
I read “Why America’s Most Boring Job Is on the Brink of Extinction.” I think the story was crafted by a person who received either a D or an F in Accounting 100. The lingo links accountants with being really dull people and the nuking of an entire species. No meteor is needed; just smart software, the silent killer. By the way, my two accountants are quite sporty. I rarely fall asleep when they explain life from their point of view. I listen, and I urge you to be attentive as well. Smart software can do some excellent things, but not everything related to tax, financial planning, and keeping inside the white lines of the quite fluid governmental rules and regulations.
Nevertheless, the write up cited above states:
Experts say the industry is nearing extinction because the 150-hour college credit rule, the intense entry exam and long work hours for minimal pay are unappealing to the younger generation.
The “real” news article includes some snappy quotes too. Here’s one I circled: “’The pay is crappy, the hours are long, and the work is drudgery, and the drudgery is especially so in their early years.’”
I am not an accountant, so I cannot comment on the accuracy of this statement. My father was an accountant, and he was into detail work and was able to raise a family. None of us ended up in jail or in the hospital after a gang fight. (I was and still am a sissy. Imagine that: An 80 year old dinobaby sissy with the DNA of an accountant. I am definitely exciting.)
With fewer people entering the field of accounting, the write up makes a remarkable statement:
… Accountants are becoming overworked and it is leading to mistakes in their work. More than 700 companies cited insufficient staff in accounting and other departments as a reason for potential errors in their quarterly earnings statements…
Does that mean smart software will become the accountants of the future? Some accountants may hope that smart software cannot do accounting. Others will see smart software as an opportunity to improve specific aspects of accounting processes. The problem, however, is not the accountants. The problem with AI is the companies or entrepreneurs who over promise and under deliver.
Will smart software replace the insight and timeline knowledge of an experienced numbers wrangler like my father or the two accountants upon whom I rely?
Unlikely. It is the smart software vendors and their marketers who are most vulnerable to the assertions about Philco, the leader.
Stephen E Arnold, September 4, 2024