The Robots Are Not Taking Over Libraries

December 14, 2016

I once watched a Japanese anime that featured a robot working in a library.  The robot shelved, straightened, and kept the books in order by running on a track that circumnavigated every shelf in the building.  The anime took place in a near-future Japan in which all paper documents had been rendered obsolete.  While we are a long way off from having robots in public libraries (blame budget constraints and cuts), there is a common belief that libraries are obsolete as well.

Libraries are the furthest thing from obsolete, but robots have apparently gained enough artificial intelligence to find lost books.  Popsci shares the story in “Robo Librarian Tracks Down Misplaced Book.”  It describes a situation librarians hate: patrons misplacing books on shelves instead of letting the experts put them back.  Libraries rely on books being in precise order; if a book is in the wrong place, it is as good as lost.  Fancy libraries, like a research library at the University of Chicago, have automated the process, but that approach is too expensive and unrealistic for most institutions to deploy.  There is another option:

A*STAR roboticists have created an autonomous shelf-scanning robot called AuRoSS that can tell which books are missing or out of place. Many libraries have already begun putting RFID tags on books, but these typically must be scanned with hand-held devices. AuRoSS uses a robotic arm and RFID scanner to catalogue book locations, and uses laser-guided navigation to wheel around unfamiliar bookshelves. AuRoSS can be programmed to scan the library shelves at night and instruct librarians how to get the books back in order when they arrive in the morning.

Manual labor is still needed to put the books in order after the robot does its work at night.  But what happens when someone needs help with research, finding an obscure citation, evaluating information, or even using the Internet correctly?  Yes, librarians are still needed.  Who else is going to interpret data, guide research, and guard humanity’s knowledge?
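The shelf-reading task AuRoSS automates can be sketched in miniature: given the catalogue’s expected call-number order and the order the scanner actually observed, flag the books that need re-shelving. The sketch below is purely illustrative (it is not A*STAR’s software, and the call numbers are invented); it marks as misplaced every book outside the longest run of the scan that is already in correct catalogue order:

```python
from bisect import bisect_left

def misplaced_books(expected, scanned):
    """Return books to re-shelve: everything outside the longest
    subsequence of the scan that already matches catalogue order."""
    rank = {book: i for i, book in enumerate(expected)}
    seq = [rank[b] for b in scanned if b in rank]
    # Longest increasing subsequence, with parent links for backtracking.
    tails, tails_idx, parent = [], [], [-1] * len(seq)
    for i, r in enumerate(seq):
        pos = bisect_left(tails, r)
        if pos == len(tails):
            tails.append(r)
            tails_idx.append(i)
        else:
            tails[pos] = r
            tails_idx[pos] = i
        parent[i] = tails_idx[pos - 1] if pos > 0 else -1
    # Books on the longest in-order run stay put; the rest are misplaced.
    in_place = set()
    i = tails_idx[-1] if tails_idx else -1
    while i != -1:
        in_place.add(i)
        i = parent[i]
    return [b for i, b in enumerate(scanned) if i not in in_place]

shelf_order = ["001.4", "004.6", "025.0", "150.1", "302.2"]
scan = ["001.4", "150.1", "004.6", "025.0", "302.2"]
print(misplaced_books(shelf_order, scan))  # ['150.1']
```

Moving only the books outside the longest in-order run minimizes the morning’s re-shelving work, which is presumably the point of handing librarians an overnight report.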

Whitney Grace, December 14, 2016

Google’s Bid for AI Dominance

December 14, 2016

Google’s dominance over our digital lives cannot be refuted. The tech giant envisages that the future of computing will be Artificial Intelligence (AI), and the search engine leader is all set to dominate that field as well.

In a feature article in Arabian Business titled “Inside Google’s Brave New World,” the author says:

The $500bn technology giant is extending its reach into hardware and artificial intelligence, ultimately aiming to create a sophisticated robot that can communicate with smart-device users to get things done.

The efforts can be seen in the company’s restructuring and its focus on developing products and hardware that can host its sophisticated AI-powered algorithms. From wearable devices to in-home products like Google Home, the company is not only writing powerful algorithms to answer user queries but is also building the hardware that will seamlessly integrate with the AI.

Though these advances might mean more revenue for the company and its shareholders, with Google controlling every aspect of our working lives, the company also needs to address privacy concerns with equal zeal. As the author points out:

However, with this comes huge responsibility and a host of ethical and other policy issues such as data privacy and cybersecurity, which Google says its teams are working to resolve on a day-to-day basis.

Apart from Google, other tech companies like Amazon, Microsoft, Facebook, and Apple are also in the race for AI dominance. The privacy concerns remain there, too, as the end user never knows how and where the collected data will be used.

Vishal Ingole, December 14, 2016

GE Now Manufactures Artificial Intelligence

December 9, 2016

GE (General Electric) makes appliances such as ovens, ranges, microwaves, washers, dryers, and refrigerators.  Outside the appliance market, its expertise appears to end.  Fast Company tells us that GE wants to branch out into new markets in “GE Wants To Be The Next Artificial Intelligence Powerhouse.”

GE is a multi-billion-dollar company with the resources to invest in the burgeoning artificial intelligence market.  It plans to leverage two new acquisitions to bring machine learning to the markets it already dominates.  GE first used machine learning in 2015 with Predix Cloud, which recorded industrial machinery sensor patterns.  It was, however, more of a custom design for GE than one with universal application.

GE purchased Bit Stew Systems, a company similar to Predix Cloud except that it collected industrial data, and Wise.io, a company that used astronomy-based technology to streamline customer support systems.  Predix already has a string of customers and has seen much growth:

Though young, Predix is growing fast, with 270 partner companies using the platform, according to GE, which expects revenue on software and services to grow over 25% this year, to more than $7 billion. Ruh calls Predix a “significant part” of that extra money. And he’s ready to brag, taking a jab at IBM Watson for being a “general-purpose” machine-learning provider without the deep knowledge of the industries it serves. “We have domain algorithms, on machine learning, that’ll know what a power plant is and all the depth of that, that a general-purpose machine learning will never really understand,” he says.

GE is tackling healthcare and energy issues with Predix.  GE is proving it can do more than make a device that can heat up a waffle; the company can affect the energy, metal, plastic, and computer systems used to heat the waffle.  It is rather like how mason jars led to tools that will be used in space.

Whitney Grace, December 9, 2016

Google Shifts Development Emphasis to Artificial Intelligence

December 2, 2016

The article on The American Genius titled “Google’s Ambitious Plans to Change Every Device on the Planet” explains the focus on A.I. innovation by Google CEO Sundar Pichai. If you think Google is behind when it comes to A.I., you haven’t been paying close enough attention. Google has dipped its toes into voice recognition, machine translation, and language understanding, but the next step is Google Home. The article states,

This device seems to be a direct answer to Amazon’s Echo. Google Home isn’t the only product set to launch, however. They also plan to launch a messaging app called Allo. This is likely a direct response to WhatsApp, Kik, and other popular messaging platforms… Google may be hoping Allo is the answer for what this particular platform is lacking. Allo and Google Home will both be powered by a “Google assistant” (a bit like Siri), but in their eyes, more engaging.

So what will the future landscape of A.I. technology look like? That depends on whom you believe. Microsoft, Apple, and Amazon can all point to an existing product, but Google can counter with AlphaGo, the computer program developed by Google DeepMind. Pichai recognizes that Google must play the long game when it comes to A.I. because, so far, we have only scratched the surface. What role will Google play in the much-feared A.I. arms race? All we know right now is that more Google is good for Google.

Chelsea Kerwin, December 2, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Could AI Spell Doom for Marketers?

December 1, 2016

AI is making inroads into almost every domain, and marketing is no different. However, AI’s inability to be creative in the true sense may be a major impediment.

In a feature article titled “Marketing Faces Death by Algorithm Unless It Finds a New Code,” The Telegraph says:

Artificial intelligence (AI) is one of the most-hyped topics in advertising right now. Brands are increasingly finding that they need to market to intelligent machines in order to reach humans, and this is set to transform the marketing function.

The problem with AI, as most marketers agree, is its inability to imitate true creativity. As the focus of marketing shifts from direct product placement to content marketing, the importance of AI becomes even greater. A clothing company, for instance, cannot by itself analyze vast amounts of Big Data, decipher it, and then create targeted advertising based on it; algorithms will play a crucial role there. The content creation, however, will ultimately require a human touch and intervention.

As the article makes clear:

While AI can build a creative idea, it’s not creative “in the true sense of the word”, according to Mr Cooper. Machine learning – the driving technology behind how AI can learn – still requires human intelligence to work out how the machine would get there. “It can’t put two seemingly random thoughts together and recognize something new.

The other school of thought says that what AI lacks is not creativity but processing power and storage, and it seems we are moving closer to bridging this gap. When AI does close it, will most occupations, creative and technical alike, become obsolete?

Vishal Ingole, December 1, 2016

Facebook AI Pro Throws Shade at DeepMind Headquarters

November 29, 2016

An AI expert at Facebook criticizes Google’s handling of DeepMind, we learn in Business Insider’s article, “Facebook’s AI Guru Thinks DeepMind is Too Far Away from the ‘Mothership’.” Might Yann LeCun, said guru, be biased? Nah. He simply points out that DeepMind’s London offices are geographically far from Google’s headquarters in California. Writer Sam Shead, on the other hand, observes that physical distance does not hamper collaboration the way it did before this little thing called the Internet came along.

The article reminds us of rumors that Facebook was eying DeepMind before Google snapped it up. When asked, LeCun declined to confirm or deny that rumor. Shead tells us:

LeCun said: ‘You know, things played out the way they played out. There’s a lot of very good people at DeepMind.’ He added: ‘I think the nature of DeepMind eventually would have been quite a bit different from what it is now if DeepMind had been acquired by a different company than Google.

Google and Facebook are competitors in some areas of their businesses but the companies are also working together to advance the field of AI. ‘It’s very nice to have several companies that work on this space in an open fashion because we build on each other’s ideas,’ said LeCun. ‘So whenever we come up with an idea, very often DeepMind will build on top of it and do something that’s better and vice versa. Sometimes within days or months of each other we work on the same team. They hire half of my students.

Hooray for cooperation. As it happens, London is not an arbitrary location for DeepMind. The enterprise was founded in 2010 by two Oxbridge grads, Demis Hassabis and Mustafa Suleyman, along with UCL professor Shane Legg. Google bought the company in 2014, and has been making the most of their acquisition ever since. For example, Shead reminds us, Google has used the AI to help boost the efficiency of their data-center cooling units by some 40%. A worthy endeavor, indeed.

Cynthia Murrell, November 29, 2016

Wisdom from the First O’Reilly AI Conference

November 28, 2016

Forbes contributor Gil Press nicely collates and summarizes the insights he found at September’s inaugural O’Reilly AI Conference, held in New York City, in his article, “12 Observations About Artificial Intelligence from the O’Reilly AI Conference.” He begins:

At the inaugural O’Reilly AI conference, 66 artificial intelligence practitioners and researchers from 39 organizations presented the current state-of-AI: From chatbots and deep learning to self-driving cars and emotion recognition to automating jobs and obstacles to AI progress to saving lives and new business opportunities. … Here’s a summary of what I heard there, embellished with a few references to recent AI news and commentary.

Here are Press’ 12 observations; check out the article for details on any that spark your interest: “AI is a black box—just like humans”; “AI is difficult”; “The AI driving driverless cars is going to make driving a hobby. Or maybe not”; “AI must consider culture and context”; “AI is not going to take all our jobs”; “AI is not going to kill us”; “AI isn’t magic and deep learning is a useful but limited tool”; “AI is Augmented Intelligence”; “AI changes how we interact with computers—and it needs a dose of empathy”; “AI should graduate from the Turing Test to smarter tests”; “AI according to Winston Churchill”; and “AI continues to be possibly hampered by a futile search for human-level intelligence while locked into a materialist paradigm.”

It is worth contemplating the point Press saved for last—are we even approaching this whole AI thing from the most productive angle? He ponders:

Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous ‘AI Winter’? And that continuing to adhere to it and refusing to consider ‘genuinely new ideas,’ out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter? Maybe, just maybe, our minds are not computers and computers do not resemble our brains?  And maybe, just maybe, if we finally abandon the futile pursuit of replicating ‘human-level AI’ in computers, we will find many additional–albeit ‘narrow’–applications of computers to enrich and improve our lives?

I think Press is on to something. Perhaps we should admit that anything approaching Rosie the Robot is still decades away (according to conference presenter Oren Etzioni). At this early date, we may do well to accept and applaud specialized AIs that do one thing very well but are completely ignorant of everything else. After all, our Roombas are unlikely to attempt conquering the world.

Cynthia Murrell, November 28, 2016

Machine Learning Does Not Have All the Answers

November 25, 2016

Despite our broader knowledge, we still believe that if we press a few buttons and hit enter, computers can do all the work for us.  The advent of machine learning and artificial intelligence has not dispelled this belief; instead, big data vendors rely on this image to sell their wares.  Big data, though, has its weaknesses, and before you deploy a solution you should read Network World’s “6 Machine Learning Misunderstandings.”

Drawing on the experience of Juniper Networks security intelligence software engineer Roman Sinayev, the article explains some of the pitfalls to avoid before implementing big data technology.  It is important to take all the variables into consideration, even unexpected ones; otherwise, one forgotten factor could wreak havoc on your system.  Also, do not forget to actually understand the data you are analyzing and its origin.  Pushing forward on a project without understanding the data’s background is a guaranteed fail.

Other practical advice is to build a test model and add more data when the model does not deliver, but some advice that is new even to us is:

One type of algorithm that has recently been successful in practical applications is ensemble learning – a process by which multiple models combine to solve a computational intelligence problem. One example of ensemble learning is stacking simple classifiers like logistic regressions. These ensemble learning methods can improve predictive performance more than any of these classifiers individually.

Employing more than one algorithm?  It makes sense and is practical advice; why did that not cross our minds?  The rest of the advice is general stuff that can be applied to any project in any field; just change the lingo and the expert providing it.
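The quoted point about ensemble learning can be made concrete with a toy example (our own illustration, not from the Network World piece): three weak rule-based classifiers, each wrong on a different slice of the input, combined by simple majority vote. Stacking, as the quote describes, would train a meta-model on the base classifiers’ outputs; plain voting is the simplest member of the same family.

```python
def make_voter(classifiers):
    """Combine classifiers by majority vote (a basic ensemble)."""
    def vote(x):
        votes = [clf(x) for clf in classifiers]
        return max(set(votes), key=votes.count)
    return vote

# Three weak rules for labeling an integer as "big" (truth: x >= 10).
# Each misfires on a different slice of the input space.
rules = [
    lambda x: x >= 8,                 # too permissive on 8-9
    lambda x: x >= 10 and x != 15,    # a blind spot at 15
    lambda x: x >= 12,                # too strict on 10-11
]

ensemble = make_voter(rules)
data = list(range(20))
truth = [x >= 10 for x in data]

def accuracy(clf):
    return sum(clf(x) == t for x, t in zip(data, truth)) / len(data)

print([accuracy(r) for r in rules])  # each base rule is imperfect
print(accuracy(ensemble))            # the vote corrects each rule's errors
```

Because the three rules err on disjoint inputs, every mistake is outvoted by the other two, so the ensemble scores higher than any individual classifier, which is exactly the predictive-performance claim in the excerpt.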

Whitney Grace, November 25, 2016


Do Not Forget to Show Your Work

November 24, 2016

Showing work is a messy, necessary step to prove how one arrived at a solution.  Most of the time it is never reviewed, but with big data, people wonder how computer algorithms arrive at their conclusions.  Engadget explains that computers are being forced to prove their results in “MIT Makes Neural Networks Show Their Work.”

Understanding neural networks is extremely difficult, but MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a way to map these complex systems.  CSAIL managed the task by splitting a network into two smaller modules: one extracts segments of text and scores them according to their length and coherence, and the second predicts each segment’s subject and attempts to classify it.  The mapping modules sound almost as complex as the actual neural networks.  To alleviate the stress and add a giggle to their research, CSAIL had the modules analyze beer reviews:

For their test, the team used online reviews from a beer rating website and had their network attempt to rank beers on a 5-star scale based on the brew’s aroma, palate, and appearance, using the site’s written reviews. After training the system, the CSAIL team found that their neural network rated beers based on aroma and appearance the same way that humans did 95 and 96 percent of the time, respectively. On the more subjective field of “palate,” the network agreed with people 80 percent of the time.

One set of data is as good as another to test CSAIL’s network-mapping tool.  CSAIL hopes to fine-tune the machine learning project and use it in breast cancer research to analyze pathologist data.
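The division of labor between the two modules can be sketched with a deliberately simple toy (CSAIL’s modules are neural networks trained jointly; here both are hand-written rules, and the word lists and scores are invented purely for illustration): an extractor selects the text segments that serve as the rationale, and a predictor scores only those segments.

```python
# Invented sentiment vocabularies for the toy predictor.
POSITIVE = {"great", "lovely", "crisp"}
NEGATIVE = {"flat", "stale", "dull"}

def extract_rationale(review, aspect_words):
    """Extractor module: keep only sentences mentioning the aspect."""
    sentences = [s.strip() for s in review.split(".") if s.strip()]
    return [s for s in sentences
            if any(w in s.lower() for w in aspect_words)]

def predict_score(segments):
    """Predictor module: rate 1-5 stars from the extracted segments only."""
    words = " ".join(segments).lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(1, min(5, 3 + score))

review = "Lovely crisp aroma. The label art is dull. Taste went flat."
rationale = extract_rationale(review, {"aroma"})
print(rationale)                 # ['Lovely crisp aroma']
print(predict_score(rationale))  # 5
```

The payoff of the split is visible even in the toy: the “dull” and “flat” sentences concern the label and the taste, not the aroma, so the extractor filters them out and the aroma score rests only on text a human can inspect; that inspectable rationale is what lets researchers check the network’s work.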

Whitney Grace, November 24, 2016

All the Things Watson Could Do

November 21, 2016

One of our favorite artificial intelligence topics has made the news again: Watson.  Technology Review focuses on Watson’s job descriptions and its emergence in new fields in “IBM’s Watson Is Everywhere-But What Is It?”  We all know that Watson won Jeopardy and has been deployed as the ultimate business intelligence solution, but what exactly does Watson do for a company?

The truth about Watson’s Jeopardy appearance is that very little of that technology was used.  In reality, Watson is an umbrella name IBM uses for an entire group of its machine learning and artificial intelligence technologies.  The Watson brand is employed in a variety of ways, from medical disease interpretation to creating new recipes via experimentation.  The technology can be used in many industries and applied to a variety of scenarios; it all depends on what the business needs resolved.  There is another problem:

Beyond the marketing hype, Watson is an interesting and potentially important AI effort. That’s because, for all the excitement over the ways in which companies like Google and Facebook are harnessing AI, no one has yet worked out how AI is going to fit into many workplaces. IBM is trying to make it easier for companies to apply these techniques, and to tap into the expertise required to do so.

IBM is experiencing problems of its own, but beyond those, another consideration is Watson’s expense.  Businesses are usually eager to incorporate new technology if the benefit is huge; however, they are reluctant to make the initial payout, especially when the technology is still experimental and not yet standard.  Nobody wants to be a guinea pig, but someone needs to set the pace for everyone else.  So who wants to deploy Watson?

Whitney Grace, November 21, 2016
