MIT Embraces Google DeepMind's Intuitive Technology Focus

October 6, 2016

The MIT Technology Review article “How Google Plans to Solve Artificial Intelligence” conveys the exciting world of Google DeepMind’s Labyrinth. Labyrinth is a 3D environment built on an open-source platform in which DeepMind is challenged with tasks such as, say, finishing a maze. As DeepMind progresses, the challenges become increasingly complicated. The article says,

What passes for smart software today is specialized to a particular task—say, recognizing faces. Hassabis wants to create what he calls general artificial intelligence—something that, like a human, can learn to take on just about any task. He envisions it doing things as diverse as advancing medicine by formulating and testing scientific theories, and bounding around in agile robot bodies…The success of DeepMind’s reinforcement learning has surprised many machine-learning researchers.

Of the endless applications possible for intuitive technology, the article focuses on medicine, text understanding, and robotics. When questioned about the ethical implications of the latter, Demis Hassabis, the head of Google’s DeepMind team, gave the equivalent of a shrug and said that those sorts of questions were premature. In spite of this, MIT’s Technology Review seems pretty pumped about Google, which makes us wonder whether IBM Watson has been abandoned. Our question for Watson is, what is the deal with MIT?

Chelsea Kerwin, October 6, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Humans Screw Up the Self-Driving Car Again

August 5, 2015

Google really, really wants its self-driving cars to be approved for consumer use.  While the cars have been tested on actual roads, they have also been involved in accidents.  The Inquirer posted the article, “Latest Self-Driving Car Crash Injures Three Google Employees,” about how the public might not be ready for self-driving vehicles.  Google, not surprisingly, blames the crash on humans.

Google has been testing self-driving cars for over six years, and there have been a total of fourteen accidents involving the vehicles.  The most recent accident is the first that resulted in injuries.  Three Google employees were riding in the self-driving vehicle during rush hour traffic in Mountain View, California, on July 1; all three were treated for whiplash after the accident.  Google says that its car was not at fault and that a distracted driver caused the accident, as human error has in the other accidents.

While Google is upset, the accidents have not hindered its plans; they have motivated the company to push forward.  Google explained that:

“The most recent collision, during the evening rush hour on 1 July, is a perfect example. The light was green, but traffic was backed up on the far side, so three cars, including ours, braked and came to a stop so as not to get stuck in the middle of the intersection.  After we’d stopped, a car slammed into the back of us at 17 mph, and it hadn’t braked at all.”

Google continues to insist that human error and inattention are ample reason to allow self-driving cars on the road.  While it is hard to trust a machine to drive what amounts to a weapon moving at 50 miles per hour, why do we trust people who have proven to be poor drivers with a license?

Whitney Grace, August 5, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

AI May Give Edge to Small and Medium Businesses

April 7, 2015

Over at the B2B News Network, writer Rick Delgado shares some observations about the use of data-related AI in small and medium-sized businesses in his piece, “Building Business Intelligence Through Artificial Intelligence.” He asserts that using AI-enhanced data analysis can help such companies compete with the big players. He writes:

“Most smaller companies don’t have experienced IT technicians and data scientists familiar with the language required for proper data analysis. Having an AI feature allows employees to voice questions as they would normally talk, and even allows for simple-to-understand responses, as opposed to overly technical insights. The ability to understand a program is key to its functionality, and AI shortens the learning curve allowing organizations to get to work faster.”

The article observes that AI can help with sales and marketing by, for example, narrowing down leads to the most promising prospects. It can also make supply chains more efficient. Delgado acknowledges that existing supply-chain tools are not very adaptable, but he believes they will soon automatically adjust for changing factors like transportation costs or commodity prices around the world. The article concludes:

“Any attempt to predict how AI will evolve over the coming years is a fool’s errand, because every new discovery leads to countless possibilities. What we do know is that AI won’t remain restricted to just improving sales and organizational supply chain. Already we see its availability to everyday users with announcements like Microsoft combining AI with Windows. Experts are also exploring other possibilities, like using AI to improve network security, law enforcement and robotics. The important takeaway is that the combination of Big Data and AI will allow for rapid decisions that don’t require constant human oversight, improving both efficiency and productivity.”

Wonderful! We would caution our dear readers to look before they leap, however. To avoid wasting time and money, a company should know just what it needs from its software before it goes shopping.

Cynthia Murrell, April 7, 2015

Stephen E Arnold, Publisher of CyberOSINT at www.xenky.com
