The Importance of Google AI

December 23, 2015

According to Business Insider, we’ve all been overlooking something crucial about Google. Writer Lucinda Shen reports, “Top Internet Analyst: There Is One Thing About Google that Everyone Is Missing.” Shen cites an observation by prominent equity analyst Carlos Kirjner. She writes:

“According to Kirjner, that thing [that everyone else is missing] is AI at Google. ‘Nobody is paying attention to that because it is not an issue that will play out in the next few quarters, but longer term it is a big, big opportunity for them,’ he said. ‘Google’s investments in artificial intelligence, above and beyond the use of machine learning to improve character, photo, video and sound classification, could be so revolutionary and transformational to the point of raising ethical questions.’

“Even if investors and analysts haven’t been closely monitoring Google’s developments in AI, the internet giant is devoted to the project. During the company’s third-quarter earnings call, CEO Sundar Pichai told investors the company planned to integrate AI more deeply within its core business.”

Google must be confident in its AI if, as reported, it is deploying the technology across all its products. Shen recalls that the company made waves back in November, when it released the open-source AI platform TensorFlow. Is Google’s AI research about to take the world by storm?


Cynthia Murrell, December 23, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

The AI Evolution

September 10, 2015

An article at WT Vox announces, “Google Is Working on a New Type of Algorithm Called ‘Thought Vectors’.” It sounds like a good use for a baseball cap with electrodes, a battery pack, WiFi, and a person who thinks great thoughts. In actuality, it’s a project based on the work of esteemed computer scientist Geoffrey E. Hinton, who has been exploring the idea of neural networks for decades. Hinton is now working with Google to create the sophisticated algorithm of our dreams (or nightmares, depending on one’s perspective).

Existing language processing software has come a very long way; Google Translate, for example, searches dictionaries and previously translated documents to translate phrases. The app usually does a passably good job of giving one the gist of a source document, but results are far from reliably accurate (and are often grammatically comical). Thought vectors, on the other hand, will allow software to extract meanings, not just correlations, from text.

Continuing to use translation software as the example, reporter Aiden Russell writes:

“The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical ‘meaning space’ or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector….

“The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences. At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud captures the way humans use them – effectively a map of their meanings.”
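To make the “meaning space” idea concrete, here is a minimal sketch (our illustration, not Google’s code): each word gets a vector, and a sentence is distilled into a single “thought vector,” here by simple averaging. The eight-dimensional random vectors are stand-ins; in a real system they would be learned through the feedback loop the article describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: in practice these vectors are learned, not random.
vocab = ["dog", "cat", "barks", "meows", "the"]
word_vectors = {w: rng.normal(size=8) for w in vocab}  # 8-dim "meaning space"

def thought_vector(sentence):
    """Distill a sentence down to one vector by averaging its word vectors."""
    vecs = [word_vectors[w] for w in sentence.split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Similarity of two positions in the meaning space."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

v1 = thought_vector("the dog barks")
v2 = thought_vector("the cat meows")
print(cosine(v1, v2))  # with learned vectors, similar meanings score high
```

With trained vectors, the error signal described above would gradually pull “dog” and “cat” near each other in the cloud, so the two sentence vectors would land close together.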

But, won’t all efficient machine learning lead to a killer-robot-ruled dystopia? Hinton bats away that claim as a distraction; he’s actually more concerned about the ways big data is already being (mis)used by intelligence agencies. The man has a point.

Cynthia Murrell, September 10, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Algorithms Still Need Oversight

September 8, 2015

Many have pondered what might happen when artificial intelligence systems go off the rails. While not spectacular enough for Hollywood, some very real consequences have been observed; the BBC examines “The Bad Things that Happen When Algorithms Run Online Shops.”

The article begins by relating the tragic tale of an online T-shirt vendor who just wanted to capitalize on the “Keep Calm and Carry On” trend. He set up an algorithm to place random terms into the second half of that oft-copied phrase and generate suggested products. Unfortunately, the list of phrases was not sufficiently vetted, resulting in a truly regrettable slogan displayed on virtual shirt mock-ups. Despite the fact that the phrase appeared only on the website, not on any actual shirts, the business never recovered its reputation and closed shortly thereafter. Reporter Chris Baraniuk writes:

“But that’s the trouble with algorithms. All sorts of unexpected results can occur. Sometimes these are costly, but in other cases they have benefited businesses to the tune of millions of pounds. What’s the real impact of the machinations of machines? And what else do they do?”

Well, one other thing is to control prices. Baraniuk reports that software designed to set online prices competitively, based on what other sites are doing, can cause prices to fluctuate from day to day, sometimes hour to hour. Without human oversight, results can quickly become extreme at either end of the scale. For example, for a short time last December, prices of thousands of products sold through Amazon were set to just one penny each. Amazon itself probably weathered the unintended near-giveaways just fine, but smaller merchants selling through the site were not so well-positioned; some closed as a direct result of the error. On the other hand, vendors trying to keep their prices as high as feasible can make the opposite mistake; the article points to the time a blogger found an out-of-print textbook about flies priced at more than $23 million, the result of two sellers’ dueling algorithms.
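The dueling-algorithm failure mode is easy to reproduce. In the fly-book incident, one seller reportedly repriced to 0.9983 times the rival’s price while the other priced at roughly 1.2706 times; since the product of those multipliers exceeds one, both prices climb without bound. A toy simulation (our sketch, not the sellers’ actual code):

```python
# Toy simulation of two dueling repricing bots. The multipliers are those
# reported in coverage of the $23 million fly-book incident; the starting
# price and everything else here is invented for illustration.
price_a = price_b = 100.0          # hypothetical starting prices, in dollars

for day in range(1, 56):
    price_a = 0.9983 * price_b     # seller A: undercut B by a hair
    price_b = 1.2706 * price_a     # seller B: ride just above A
    if day % 5 == 0:
        print(f"day {day:2d}: A=${price_a:,.2f}  B=${price_b:,.2f}")

# Each cycle multiplies both prices by 0.9983 * 1.2706, about 1.27, so
# from $100 the listings sail past $23 million in well under two months.
```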

Such observations clearly mean that consumers should be very wary of online prices. The bigger takeaway, though, is that we’re far from ready to hand algorithms the reins of our world without sufficient human oversight. Not yet.

Cynthia Murrell, September 8, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Neural Networks and Thought Commands

July 22, 2015

If you’ve been waiting for the day you can operate a computer by thinking at it, check out “When Machine Learning Meets the Mind: BBC and Google Get Brainy” at the Inquirer. Reporter Chris Merriman brings our attention to two projects, one about hardware and one about AI, that stand at the intersection of human thought and machine. Neither venture is anywhere near fruition, but a peek at their progress gives us clues about the future.

The internet-streaming platform iPlayer is a service the BBC provides to U.K. residents who wish to catch up on their favorite programmes. In pursuit of improved accessibility, the organization’s researchers are working on a device that allows users to operate the service with their thoughts. The article tells us:

“The electroencephalography wearable that powers the technology requires lucidity of thought, but is surprisingly light. It has a sensor on the forehead, and another in the ear. You can set the headset to respond to intense concentration or meditation as the ‘fire’ button when the cursor is over the option you want.”

Apparently this operation is easier for some subjects than for others, but all users were able to work the device to some degree. Creepy or cool? Perhaps it’s both, but there’s no escaping this technology now.
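As described, the control logic amounts to a simple threshold loop: sample a concentration score from the headset and treat a sustained spike as the “fire” button. A hypothetical sketch, with the sensor read-out stubbed in (the function name, score scale, and threshold values below are all invented for illustration):

```python
import random
import time

def read_attention():
    """Stand-in for the headset's concentration score (0-100); a real
    driver would return the EEG-derived value here."""
    return random.randint(0, 100)

THRESHOLD = 80      # how intense the concentration must be
HOLD_SAMPLES = 3    # how long it must be sustained to count as "fire"

streak = 0
while True:
    if read_attention() >= THRESHOLD:
        streak += 1
        if streak >= HOLD_SAMPLES:
            print("fire: select the programme under the cursor")
            break
    else:
        streak = 0  # concentration lapsed; start over
    time.sleep(0.1)  # sample at ~10 Hz
```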

As for Google’s undertaking, we’ve examined this approach before: the development of artificial neural networks. This is some exciting work for those interested in AI. Merriman writes:

“Meanwhile, a team of Google researchers has been looking more closely at artificial neural networks. In other words, false brains. The team has been training systems to classify images and better recognise speech by bombarding them with input and then adjusting the parameters to get the result they want.

“But once equipped with the information, the networks can be flipped the other way and create an impressive interpretation of objects based on learned parameters, such as ‘a screw has twisty bits’ or ‘a fly has six legs’.”
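In rough terms, “flipping” the network means running the optimization on the input instead of the weights: start from noise and nudge the pixels until the score for a chosen class goes up. A minimal PyTorch sketch of that mechanism, using a tiny untrained stand-in model rather than Google’s trained image networks:

```python
import torch
import torch.nn as nn

# A stand-in classifier; Google's experiments used large trained image
# networks, but any differentiable model demonstrates the mechanism.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
    nn.Linear(64, 10),          # 10 made-up object classes
)
model.eval()

target_class = 3                # e.g., "fly" in a real trained network
image = torch.randn(1, 3, 32, 32, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image)[0, target_class]
    (-score).backward()         # gradient ascent on the class score
    optimizer.step()

# `image` now exaggerates whatever features the model associates with the
# target class -- the "a screw has twisty bits" effect, in miniature.
```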

This brain-in-progress still draws some chuckle-worthy and/or disturbing conclusions from images, but it is learning. No one knows what the end result of Google’s neural network research will be, but it’s sure to be significant. In a related note, the article points out that IBM is donating its machine learning platform to Apache Spark. Who knows where the open-source community will take it from here?

Cynthia Murrell, July 22, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Watson Still Has Much to Learn About Healthcare

July 9, 2015

If you’ve wondered what is taking Watson so long to get its proverbial medical degree, check out IEEE Spectrum’s article, “IBM’s Dr. Watson Will See You… Someday.” When IBM’s AI Watson won Jeopardy in 2011, folks tasked with dragging healthcare into the digital landscape naturally eyed the software as a potential solution, and IBM has been happy to oblige. However, “training” Watson in healthcare documentation is proving an extended process. Reporter Brandon Keim writes:

“Where’s the delay? It’s in our own minds, mostly. IBM’s extraordinary AI has matured in powerful ways, and the appearance that things are going slowly reflects mostly on our own unrealistic expectations of instant disruption in a world of Uber and Airbnb.”

Well, that and the complexities of our healthcare system. Though the version of Watson that beat Jeopardy’s human champion was advanced and powerful, tailoring it to manage medicine calls for a wealth of very specific tweaking. In fact, there are now several versions of “Doctor” Watson being developed in partnership with individual healthcare and research facilities, insurance companies, and healthcare-related software makers. The article continues:

“Watson’s training is an arduous process, bringing together computer scientists and clinicians to assemble a reference database, enter case studies, and ask thousands of questions. When the program makes mistakes, it self-adjusts. This is what’s known as machine learning, although Watson doesn’t learn alone. Researchers also evaluate the answers and manually tweak Watson’s underlying algorithms to generate better output.

“Here there’s a gulf between medicine as something that can be extrapolated in a straightforward manner from textbooks, journal articles, and clinical guidelines, and the much more complicated challenge of also codifying how a good doctor thinks. To some extent those thought processes—weighing evidence, sifting through thousands of potentially important pieces of data and alighting on a few, handling uncertainty, employing the elusive but essential quality of insight—are amenable to machine learning, but much handcrafting is also involved.”
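The “makes mistakes, self-adjusts” loop the article describes is the heart of supervised machine learning. As a drastically simplified illustration (nothing like Watson’s actual pipeline), here is a linear scorer over invented candidate-answer features, nudged whenever it gets an answer wrong:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features for candidate answers (e.g., evidence counts,
# source reliability); labels mark the clinician-approved answers.
X = rng.normal(size=(200, 4))
true_w = np.array([2.0, -1.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)

w = np.zeros(4)                       # the model's adjustable weights
for epoch in range(20):
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w > 0 else 0.0
        w += 0.1 * (yi - pred) * xi   # wrong answer? shift the weights

accuracy = np.mean((X @ w > 0) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The “manual tweaking” Keim mentions is the part this sketch leaves out: researchers inspecting the errors and reshaping the features and algorithms by hand.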

Yes, incorporating human judgement is time-consuming. See the article for more on the challenges Watson faces in the field of healthcare, and for some of the organizations contributing to the task. We still don’t know how much longer it will take for the famous AI (and perhaps others like it) to dominate the healthcare field. When that day arrives, will it have been worth the effort?

Cynthia Murrell, July 9, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph


Don’t Fear the AI

May 14, 2015

Will intelligent machines bring about the downfall of the human race? Unlikely, says The Technium, in “Why I Don’t Worry About a Super AI.” The blogger details four specific reasons he or she is unafraid: First, AI does not seem to adhere to Moore’s law, so no Terminators anytime soon. Second, we have the power to reprogram any uppity AI that does crop up. Third, it is unlikely that an AI would develop the initiative to reprogram itself anyway. Finally, we should see managing this technology as an opportunity to clarify our own principles, rather than a path to dystopia. The blog opines:

“AI gives us the opportunity to elevate and sharpen our own ethics and morality and ambition. We smugly believe humans – all humans – have superior behavior to machines, but human ethics are sloppy, slippery, inconsistent, and often suspect. […] The clear ethical programing AIs need to follow will force us to bear down and be much clearer about why we believe what we think we believe. Under what conditions do we want to be relativistic? What specific contexts do we want the law to be contextual? Human morality is a mess of conundrums that could benefit from scrutiny, less superstition, and more evidence-based thinking. We’ll quickly find that trying to train AIs to be more humanistic will challenge us to be more humanistic. In the way that children can better their parents, the challenge of rearing AIs is an opportunity – not a horror. We should welcome it.”

Machine learning as a catalyst for philosophical progress—interesting perspective. See the post for more details behind this writer’s reasoning. Is he or she being realistic, or naïve?

Cynthia Murrell, May 14, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Gartner VP Claims Researching “Ethical Programming” Necessary for Future of Smart Machines

April 17, 2015

The article on TweakTown titled “Gartner: Smart Machines Must Include Ethical Programming Protocols” briefly delves into the necessity of developing ethical programming in order to avoid some sort of Terminator/I, Robot situation that culminates in the rise of the machines and the end of humanity. Gartner is one of the world’s leading technology research and advisory companies, though this hardly sounds like an official company stance. The article quotes Frank Buytendijk, a Gartner research VP:

“Clearly, people must trust smart machines if they are to accept and use them…The ability to earn trust must be part of any plan to implement artificial intelligence (AI) or smart machines, and will be an important selling point when marketing this technology.”

If you’re thinking this sounds like another mid-tier consultant divining the future, you aren’t wrong. Researching ethical programming for hypothetical self-aware machines that have not yet been built might just be someone’s idea of a good time. The article concludes with the statement that “experts are split on the topic, arguing whether or not humans truly have something to worry about.” While the experts figure out how we humans will cause the end of the human reign over earth, some of us are just waiting for the end of another in a line of increasingly violent winters.

Chelsea Kerwin, April 17, 2015

Stephen E Arnold, Publisher of CyberOSINT at www.xenky.com

AI Technology Poised to Spread Far and Wide

April 3, 2015

Artificial intelligence is having a moment; the second half of last year saw about half a billion dollars invested in the AI industry. Wired asks and answers, “The AI Resurgence: Why Now?” Writer Babak Hodjat observes that advances in hardware and cloud services have made it affordable for more contenders to enter the arena. Open source tools like Hadoop also help. Then there’s public perception; with the proliferation of Siri and her ilk, people are more comfortable with the whole concept of AI (Steve Wozniak aside, apparently). It seems to help that these natural-language personal assistants have a sense of humor. Hodjat continues:

“But there’s more substance to this resurgence than the impression of intelligence that Siri’s jocularity gives its users. The recent advances in Machine Learning are truly groundbreaking. Artificial Neural Networks (deep learning computer systems that mimic the human brain) are now scaled to several tens of hidden layer nodes, increasing their abstraction power. They can be trained on tens of thousands of cores, speeding up the process of developing generalizing learning models. Other mainstream classification approaches, such as Random Forest classification, have been scaled to run on very large numbers of compute nodes, enabling the tackling of ever more ambitious problems on larger and larger data-sets (e.g., Wise.io).”
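The Random Forest scaling Hodjat mentions is easy to see in miniature with scikit-learn, whose RandomForestClassifier spreads tree-building across every available core via n_jobs=-1; cluster deployments like the Wise.io example he cites extend the same idea across machines. A single-machine sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the "larger and larger data-sets" in question
X, y = make_classification(n_samples=50_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_jobs=-1 builds the 500 trees in parallel across all available cores
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```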

The investment boom has produced a surge of start-ups offering AI solutions to companies in a wide range of industries. Organizations in fields as diverse as medicine and oil production seem eager to incorporate these tools; it remains to be seen whether the tech is a good investment for every type of enterprise. For his part, Hodjat has high hopes for its use in fraud detection, medical diagnostics, and online commerce. And for ever-improving personal assistants, of course.

Cynthia Murrell, April 3, 2015

Stephen E Arnold, Publisher of CyberOSINT at www.xenky.com
