Doc Watson Says: Take Two Big Blue Pills and Call Me in the Morning… If You Are Alive

August 1, 2018

Oh, dear. AI technology has great potential for good, but even IBM Watson is not perfect, it seems. Gizmodo reports, “IBM Watson Reportedly Recommended Cancer Treatments that Were ‘Unsafe and Incorrect’.” The flubs were found during an evaluation of the software, not within a real-world implementation. (We think.) Still, it is a problem worth keeping an eye on. Writer Jennings Brown cites a Stat News report that reviewed 2017 documents prepared by IBM Watson Health’s former deputy health chief Andrew Norden, documents that were reportedly also provided to IBM Watson Health’s management. We’re told:

“One example in the documents is the case of a 65-year-old man diagnosed with lung cancer, who also seemed to have severe bleeding. Watson reportedly suggested the man be administered both chemotherapy and the drug ‘Bevacizumab.’ But the drug can lead to ‘severe or fatal hemorrhage,’ according to a warning on the medication, and therefore shouldn’t be given to people with severe bleeding, as Stat points out. A Memorial Sloan Kettering (MSK) Cancer Center spokesperson told Stat that they believed this recommendation was not given to a real patient, and was just a part of system testing. …According to the report, the documents blame the training provided by IBM engineers and on doctors at MSK, which partnered with IBM in 2012 to train Watson to ‘think’ more like a doctor. The documents state that—instead of feeding real patient data into the software—the doctors were reportedly feeding Watson hypothetical patients data, or ‘synthetic’ case data. This would mean it’s possible that when other hospitals used the MSK-trained Watson for Oncology, doctors were receiving treatment recommendations guided by MSK doctors’ treatment preferences, instead of an AI interpretation of actual patient data.”

Houston, we have a problem. Let that be a lesson, folks—always feed your AI real, high-quality case data. Not surprisingly, doctors who have already invested in Watson for Oncology are unhappy about the news, saying the technology can now only be used to supply an “extra opinion” when human doctors disagree. Sounds like a plan, or just common sense.

Cynthia Murrell, August 1, 2018

IBM Turns to Examples to Teach AI Ethics

July 31, 2018

It seems that sometimes, as with humans, the best way to teach an AI is by example. That’s one key takeaway from VentureBeat’s article, “IBM Researchers Train AI to Follow Code of Ethics.” The need to program a code of conduct into AI systems has become clear, but finding a method to do so has proven problematic. Efforts to devise rules and teach them to systems are way too slow, and necessarily leave out many twists and turns of morality that (most) humans understand instinctively. IBM’s solution is to make the machine draw conclusions for itself by studying examples. Writer Ben Dickson specifies:

“The AI recommendation technique uses two different training stages. The first stage happens offline, which means it takes place before the system starts interacting with the end user. During this stage, an arbiter gives the system examples that define the constraints the recommendation engine should abide by. The AI then examines those examples and the data associated with them to create its own ethical rules. As with all machine learning systems, the more examples and the more data you give it, the better it becomes at creating the rules. … The second stage of the training takes place online in direct interaction with the end user. Like a traditional recommendation system, the AI tries to maximize its reward by optimizing its results for the preferences of the user and showing content the user will be more inclined to interact with. Since satisfying the ethical constraints and the user’s preferences can sometimes be conflicting goals, the arbiter can then set a threshold that defines how much priority each of them gets. In the [movie recommendation] demo IBM provided, a slider lets parents choose the balance between the ethical principles and the child’s preferences.”

We’re told the team is also working to use more complex systems than the yes/no model, such as ones based on ranked priorities. Dickson notes the technique can be applied to many other purposes, like calculating optimal drug dosages for certain patients in specific environments. It could also, he posits, be applied to problems like filter bubbles and smartphone addiction.
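The two-stage scheme in the quoted passage can be sketched in a few lines of Python. This is a toy illustration, not IBM's implementation: the function names, the genre-based constraints, and the scores are all invented, and the parental slider becomes a simple `ethics_weight` blending parameter.

```python
# Toy sketch of the two-stage training idea (hypothetical names and data;
# not IBM's actual system).

def learn_constraints(labeled_examples):
    """Offline stage: an 'arbiter' supplies labeled examples; here we just
    collect the genres the arbiter flagged as not allowed."""
    return {genre for genre, allowed in labeled_examples if not allowed}

def recommend(movies, preference_score, banned_genres, ethics_weight=0.7):
    """Online stage: blend the user's preference score with an ethics score.
    ethics_weight plays the role of the parental slider in IBM's demo."""
    def blended(movie):
        ethics = 0.0 if movie["genre"] in banned_genres else 1.0
        return ethics_weight * ethics + (1 - ethics_weight) * preference_score(movie)
    return sorted(movies, key=blended, reverse=True)

banned = learn_constraints([("horror", False), ("comedy", True)])
movies = [{"title": "Scary Night", "genre": "horror"},
          {"title": "Laugh Riot", "genre": "comedy"}]
# A child who prefers horror (0.9) still sees the comedy ranked first
# because the slider favors the ethical constraint.
ranking = recommend(movies, lambda m: 0.9 if m["genre"] == "horror" else 0.4, banned)
print([m["title"] for m in ranking])
```

Moving `ethics_weight` toward 0 lets the child's preferences win; moving it toward 1 makes the arbiter's constraints dominate, which is the conflict the slider in IBM's demo is meant to manage.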

Beyond Search wonders if IBM’s ethical methods apply to patent enforcement, management of staff over 55 years old, and unregulated blockchain services. Annoying questions? I hope so.

Cynthia Murrell, July 31, 2018

IBM and a University Tie Up or Tie Down

July 26, 2018

I wanted to comment about the resuscitation of IBM’s cancer initiative at the Veterans Administration. But that’s pure Watson, and I think Watson has become old news.

A more interesting “galactico” initiative at IBM is blockchain.

What’s bigger than Watson?

Blockchain. Well, that’s the hope.

IBM is grasping tightly to blockchain technology, this time through an academic partnership, we learn in CoinDesk’s piece, “IBM Teams with Columbia to Launch Blockchain Research Center.” Located on the Manhattan campus of Columbia University, the center hopes to speed the development of blockchain apps and cultivate education initiatives. Writer Wolfie Zhao elaborates:

“A dedicated committee comprised of both Columbia faculty members and IBM research scientists will start reviewing proposals for blockchain ‘curriculum development, business initiatives and research programs’ later this year. In addition, the center will advise on regulatory issues for startups in the blockchain space and provide internship opportunities to improve technical skills for students and professionals with an interest in the tech.”

Zhao also notes this move fits into a larger trend:

“The announcement marks the latest effort by the blockchain industry to invest in a top-tier university in the U.S. to accelerate blockchain understanding and adoption. As reported by CoinDesk in June, San Francisco-based distributed ledger startup Ripple said it will invest $2 million in blockchain research initiatives in the University of Texas at Austin in the next five years, as part of its pledge to invest $50 million in worldwide institutions.”

For those who are interested in the University of Texas at Austin’s Blockchain Initiative, there is more information here, via the university’s McCombs School of Business. Ripple, by the way, was founded in 2012 specifically to capitalize on blockchain technology. Though it is indeed based in San Francisco, the company also maintains offices in New York City and Atlanta.

Perhaps IBM will just buy university research departments before Amazon, Facebook, and Google consume the blockchain academic oxygen?

Cynthia Murrell, July 26, 2018

IBM and Watson

July 23, 2018

I spotted a brief comment about IBM’s recent earnings report. Yep, IBM is doing better. However, “IBM Results Leave Watson Thinking” makes this point:

Artificial intelligence is at the heart of IBM’s long-term strategy, yet its cognitive solutions business experienced a slight decline.

If I had the energy, I would pull from my IBM cognitive archive some of the statements about the huge payoff Watson would deliver, the oddball advertisement showing Watson as chemical symbols, and the news release about the Union Square office. But it is Monday, and I am reluctant to revisit the Watson thing.

The operative word is “decline.”

Stephen E Arnold, July 23, 2018

IBM Demo: Debating Watson

June 29, 2018

IBM once again displays its AI chops—SFGate reports, “IBM Computer Proves Formidable Against 2 Human Debaters.” The project, dubbed Project Debater, shows off the technology’s improvements in mimicking human-like speech and reasoning. At a recent demonstration, neither the AI nor the two humans knew the topics beforehand: space exploration and telemedicine. According to one of the human participants, the AI held its own pretty well, even if it did rely too much on blanket statements. Writer Matt O’Brien says this about IBM’s approach:

“Rather than just scanning a giant trove of data in search of factoids, IBM’s latest project taps into several more complex branches of AI. Search engine algorithms used by Google and Microsoft’s Bing use similar technology to digest and summarize written content and compose new paragraphs. Voice assistants such as Amazon’s Alexa rely on listening comprehension to answer questions posed by people. Google recently demonstrated an eerily human-like voice assistant that can call hair salons or restaurants to make appointments…But IBM says it’s breaking new ground by creating a system that tackles deeper human practices of rhetoric and analysis, and how they’re used to discuss big questions whose answers aren’t always clear. ‘If you think of the rules of debate, they’re far more open-ended than the rules of a board game,’ said Ranit Aharonov, who manages the debater project.”

The demo did not declare any “winner” in the debate, but researchers were able to draw some (perhaps obvious) conclusions: While the software was better at recalling specific facts and statistics to bolster its arguments, humans brought more linguistic flair and the power of personal experience to the field. As for potential applications of this technology, IBM’s VP of research suggests it could be used by human workers to better inform their decisions. Lawyers, specifically, were mentioned.

Keep in mind. Demo.

Cynthia Murrell, June 29, 2018


Artificial Intelligence and the New Normal: Over Promising and Under Delivering

June 15, 2018

IBM has the world’s fastest computer. That’s intriguing. Now Watson can output more “answers” in less time. Pity the poor user who has to figure out what’s right and what’s not so right. Progress.

Perhaps a wave of reason is about to hit the AI field. Blogger Filip Piekniewski forecasts, “AI Winter is Well on its Way.” While the neural-networking approach behind deep learning has been promising, it may fall short of the hype some companies have broadcast. Piekniewski writes:

“Many bets were made in 2014, 2015 and 2016 when still new boundaries were pushed, such as the Alpha Go etc. Companies such as Tesla were announcing through the mouths of their CEO’s that fully self-driving car was very close, to the point that Tesla even started selling that option to customers [to be enabled by future software update]. We have now mid 2018 and things have changed. Not on the surface yet, NIPS conference is still oversold, the corporate PR still has AI all over its press releases, Elon Musk still keeps promising self driving cars and Google CEO keeps repeating Andrew Ng’s slogan that AI is bigger than electricity. But this narrative begins to crack. And as I predicted in my older post, the place where the cracks are most visible is autonomous driving – an actual application of the technology in the real world.”

This post documents a certain waning of interest in deep learning, and notes an apparently unforeseen limit to its scale. Most concerning so far, of course, are the accidents that have involved self-driving cars; Piekniewski examines that problem from a technical perspective, so see the article for those details. Whether the AI field will experience a “collapse,” as this post foresees, or we will simply adapt to more realistic expectations, we cannot predict.

Cynthia Murrell, June 15, 2018

IBM: Watson Wizards Available for a New Job?

May 28, 2018

I know that newspapers do real “news.” I know because I worked for a reasonably good newspaper. I, therefore, assume that the information is true in the story “Some IBM Watson Employees Said They Were Laid Off Thursday.” The Thursday in question, for those who have been on a “faire le pont,” is May 24, 2018.

The write up states:

IBM told some employees in the United States and other countries on Thursday that they were being laid off. The news was reported on websites, which cited social media and Internet posts by IBM employees.

IBM also seems to be taking the reduction in force approach to success by nuking some of the Big Blue team in its health unit. (See “‘Ugly Day:’ IBM Laying Off Workers in Watson Health Group, Including Triangle.”)

I noted this statement in the Cleveland write up:

Since 2012, the Cleveland Clinic has collaborated with IBM on electronic medical records and other tools employing Watson, IBM’s supercomputer. The Clinic and IBM Watson Health have worked together to identify new cancer treatments, improve electronic medical records and medical student education, and look at the adoption of genomic-based medicine.

The issue may relate to several facets of Watson:

  1. Partners do not have a good grasp of the time and effort required to create questions which Watson is expected to answer. High powered smart people are okay with five-minute conversations with an IBM Watson engineer, but extend those chats to a couple of hours over weeks, and the Watson thing is not the time saver some hoped.
  2. Watson, like other smart systems, works within a tightly bounded domain. As new issues arise, questions by users cannot be answered in a way that is “spontaneously helpful.” The reason is that Watson and similar systems are just processing queries. If one does not know what one does not know, asking and answering questions can range from general to naive to dead wrong in my experience.
  3. Watson and similar systems are inevitably compared to Google’s ability to locate a pizza restaurant as one drives a van in an unfamiliar locale. Watson does not work like Google.
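The bounded-domain point in item 2 can be made concrete with a toy Python sketch. Everything here is hypothetical: the dictionary lookup stands in for whatever query processing a real system does, and the questions and canned answers are invented for illustration, not taken from Watson.

```python
# Illustrative toy of a tightly bounded Q&A system: it can only respond to
# queries inside its trained domain and cannot be "spontaneously helpful"
# outside it. Entirely hypothetical; not Watson's architecture.

DOMAIN_ANSWERS = {
    "first-line therapy for condition x": "Drug A per trained guideline",
    "dosage for drug a": "Refer to trained protocol table",
}

def answer(query):
    # Exact-match lookup stands in for bounded query processing.
    return DOMAIN_ANSWERS.get(query.lower().strip(),
                              "Out of domain: no answer available")

print(answer("Dosage for Drug A"))
print(answer("Where is the nearest pizza restaurant?"))
```

The pizza question, which Google handles casually, falls straight through the lookup, which is the Google comparison in item 3 in miniature.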

Toss in the efficiency of using one’s own experience or asking a colleague, and Watson gets in the way. As with many smart systems, users do not want to become expert Watson or similar-system users. The smart system is supposed to, or is expected to, provide answers a person can use.

The problem with the Watson approach is that it is old fashioned search. A user has to figure out from a list of results or outputs what’s what. Contrast that to next generation information access systems which provide an answer.

IBM owns technology which performs in a more intelligent and useful way than the Watson solution.

Why IBM chased the same dream that cratered many firms with key word search technology has intrigued me. Was it the crazy idea that marketing would make search work? IBM Watson seems to me to be a potpourri of home brew code, acquired metasearch technology like Vivisimo, and jacked-in open source software.

What distinguished it was the hope that marketing would make Watson into a billion dollar business.

It seems as if that dream has suffered a setback. One weird consequence is the use of the word “cognitive.” Vendors worldwide describe their systems as “cognitive search.”

From my point of view, search and retrieval is a utility. One cannot perform digital work without finding a file, content, or some other digital artifact.

No matter how many “governance” experts, how many “search” experts, how many MBAs, how many content management experts, how many information professionals want search to be the next big thing—search is still a utility. Forget this when one dreams of billions in revenue, and there is a disconnect between dreams and reality.

Effective “search” is not a single method or system. Effective search is not just smart software. Effective search is not a buzzword like “cognitive” or “artificial intelligence.”

Finding information and getting useful “answers” requires multiple tools and considerable thought.

My hunch is that the apparent problems with Watson “health” foreshadow even more severe changes for the game show winner, its true believers, and the floundering experts who chant “cognitive” at every opportunity.

Search is difficult, and in my decades of work in information access, I have not found the promised land. Silver bullets, digital bags of garlic, and unicorn dreams have not made information access a walk in the park.

Cognitive? Baloney. Remember: television programs like Jeopardy do what’s called post production. A flawed cancer treatment may not afford this luxury. Winning a game show is TV. Sorry, IBM. Watson’s business is reality, which may make a great business school case study.

Stephen E Arnold, May 28, 2018

IBM and Distancing: New Collar Jobs in France

May 23, 2018

I have zero idea if the information in the article “Exclusive: IBM bringing 1,800 AI jobs to France” is accurate. The story caught my attention because I had read “Macron Vowed to Make France a ‘Start-Up Nation.’ Is It Getting There?” You can find the story online at this link, although I read a version of the story in my dead tree edition of the real “news” paper at breakfast this morning (May 23, 2018).

Perhaps IBM recognizes that the “culture” of France makes it difficult for startups to get funding without the French management flair. Consequently, a bold and surgical move to use IBM management expertise could make blockchain, AI, and Watson sing Johnny Hallyday’s Johnny, reviens ! Les Rocks les plus terribles and shoot to the top of YouTube views.

On the other hand, the play may be a long shot.

What I did find interesting in the write up was this statement:

IBM continues to make moves aimed at distancing itself from peers.

That is fascinating. IBM has faced a bit of pushback as it made some personnel decisions which annoyed some IBMers. One former IBM senior manager just shook his head and grimaced when I mentioned the floundering of the Watson billion dollar bet. I dared not bring up riffing workers over 55. That’s a sore subject for some Big Blue bleeders.

I also liked the “New Collar” buzzword.

To sum up, I assume that IBM will bring the New Collar fashion wave to the stylish world of French technology.

Let’s ask Watson. No, bad idea. Let’s not. I don’t have the time to train Watson to make sense of questions about French finance, technology, wine, cheese, schools, family history, and knowledge of Molière.

Stephen E Arnold, May 23, 2018

IBM: Just When You Thought Crazy Stuff Was Dwindling

May 19, 2018

How has IBM marketing reacted to the company’s Watson and other assorted technologies? Consider IBM and quantum computing. That’s the next big thing, just as soon as the systems become scalable. And the problem of programming? No big deal. What about applications? Hey, what is this, a reality roll call?

Answer: Yes, plus another example of IBM predicting the future.

Navigate to “IBM Warns of Instant Breaking of Encryption by Quantum Computers: ‘Move Your Data Today’.”

I like that “warning.” I like that “instant breaking of encryption.” I like that command: “Move your data today.”

Hogwash.

hog in mud

IBM’s quantum computing can solve encryption problems instantly. Can this technology wash this hog? The answer is that solving encryption instantly and cleaning this dirty beast remain highly improbable. To verify this hunch, let’s ask Watson.

The write up states with considerable aplomb:

“Anyone that wants to make sure that their data is protected for longer than 10 years should move to alternate forms of encryption now,” said Arvind Krishna, director of IBM Research.

So, let me get this straight. Quantum computing can break encryption instantly. I am supposed to move to an alternate form of encryption. But if encryption can be broken instantly, why bother?

That strikes me as a bit of the good old tautological reasoning which leads exactly to nowhere. Perhaps I don’t understand.

I learned:

The IBM Q is an attempt to build a commercial system, and IBM has allowed more than 80,000 developers run applications through a cloud-based interface. Not all types of applications will benefit from quantum computers. The best suited are problems that can be broken up into parallel processes. It requires different coding techniques. “We still don’t know which applications will be best to run on quantum computers,” Krishna said. “We need a lot of new algorithms.”

No kidding. Now we need numerical recipes, and researchers have to figure out what types of problems quantum computing can solve?

We have some dirty hogs in Harrod’s Creek, Kentucky. Perhaps IBM’s quantum cloud computing thing which needs algorithms can earn some extra money. You know that farmers in Kentucky pay pretty well for hog washing.

Stephen E Arnold, May 19, 2018

IBM Watson: Did You Generate These AI Requirements Answers?

May 16, 2018

I read a darned remarkable write up called “The 5 Attributes Of Useful AI, According To IBM.” IBM, of course, has Watson, the billion dollar bet that continues to chase other horses in the artificial intelligence derby. Whatever Facebook and Google lack in marketing, IBM has that facet of grooming expensive horses nailed tighter than a stall barn door.

Let me run through the five attributes of “useful AI” which are explained in the write up:

  1. Managed. I think this means one pays a big outfit to do the engineering, tuning, and servicing of the useful AI. Billability seems to lurk around the edges of this seemingly innocuous term.
  2. Resilient. My hunch is that when the AI goes off the rails and generates nonsense or dead wrong outputs, the useful AI is going to fix itself. See item number 1. If the AI is resilient, why do we need the “managed” approach?
  3. Performant. I first encountered this word in Norway when a person who taught English to hearty Norwegians used it when communicating with me. I think it means “works” or “performs in an acceptable manner.” The idea is that the AI system delivers a useful output. Keep in mind the “managed” and “billability” angles, please.
  4. Measurable. I like this idea almost as much as I like precision and recall. However, when one asks Watson how to treat a cancer, it seems to me that the treatment should nuke the cancer. I am on board with statistical analyses, but in the case of a doctor depending on AI for a treatment, the operative number is one and the key value is 100 percent. Your mileage may differ unless you have life threatening cancer.
  5. Continuous. I loop back to “managed” and the notion of “billability.” I like the notion that smart software should operate continuously, but there are challenges associated with “drift” as new content enters the system, the cost of processing real time or near real time flows of information which has a tendency to expand over time, and built in algorithmic biases. Few want to talk about how popular numerical recipes output junk unless tweaked, retrained, tuned, and enhanced. This work is obviously “billable.”
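The “drift” concern in item 5 can be sketched concretely. This is a minimal, hypothetical monitor, not anything from IBM: the class name, window size, and accuracy floor are all invented for illustration.

```python
# Minimal sketch of drift monitoring: track a rolling accuracy window and
# flag when the model degrades as new content arrives. All names and
# thresholds are illustrative, not from IBM.

from collections import deque

class DriftMonitor:
    def __init__(self, window=100, floor=0.8):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.floor = floor

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def drifting(self):
        """True once rolling accuracy in the window drops below the floor."""
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.floor

monitor = DriftMonitor(window=10, floor=0.8)
for correct in [True] * 8 + [False] * 4:   # model starts failing on newer data
    monitor.record(correct)
print(monitor.drifting())
```

When the flag fires, someone has to retrain, tune, or re-source the data, which is exactly where the “managed” and “billable” work comes back in.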

I would point out that one attribute important to me is that the useful AI should generate a net financial benefit for the customer. I understand the revenue upside for an outfit like IBM, but AI has an interesting characteristic: The smart software becomes increasingly expensive to maintain and operate in a “useful” manner over time.

If I look at “useful” from IBM’s perspective, the task for the stalwarts in Big Blue is making money from this “useful” software. Seems like it has been slow going.

Stephen E Arnold, May 16, 2018
