As A.I. Scientists Forge Ahead Teaching Robots to Hunt Prey, White House Discusses Regulations and “Understandings”
August 15, 2016
The article on Engadget titled Scientists Are Teaching Robots How to Hunt Down Prey marks advancements in artificial intelligence that may well feed into an A.I. arms race. The scientists working on this project at the University of Zurich see their work in a much less harmful light. The ability to hunt down prey involves identifying and tracking a target. Some of the applications mentioned are futuristic shopping carts or luggage that can follow its owner around. Whether the scientists are experiencing severe tunnel vision or are actually just terrifically naïve is unknown. The article explains,
“The predator robot’s hardware is actually modeled directly after members of the animal kingdom, as the robot uses a special “silicon retina” that mimics the human eye. Delbruck is the inventor, created as part of the VISUALISE project. It allows robots to track with pixels that detect changes in illumination and transmit information in real time instead of a slower series of frames like a regular camera uses.”
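The change-driven pixel idea can be illustrated with a toy software sketch. This is only an illustrative simulation, not the VISUALISE hardware; the function name, threshold, and event format are invented for the example:

```python
import numpy as np

def frame_to_events(prev, curr, threshold=0.15):
    """Emit (row, col, polarity) events for pixels whose log-intensity
    changed by more than `threshold`, mimicking how an event camera
    reports only changes rather than streaming full frames."""
    diff = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# A static scene produces no events; only the moving target does.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200  # a bright object enters one pixel
print(frame_to_events(prev, curr))  # [(1, 2, 1)]
```

The payoff for a pursuit robot is obvious: instead of scanning every pixel of every frame, the tracker only processes the handful of pixels where something actually changed.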
Meanwhile, conversations about an A.I. arms race are also occurring, as illustrated by the article on ZDNet titled White House: We’re “Clear-Eyed” About Weaponizing A.I. Humans have a long history of short-sightedness when it comes to weapons technology, perhaps starting with the initial reasoning behind the invention of dynamite. The creator stated that he believed he had created a weapon so terrible that no one would ever dare use it. Obviously, that didn’t work out. But the White House Chief of Staff, Denis McDonough, claims that by establishing a “code of conduct and set of understandings” we can prevent a repetition of history. Commencing eyebrow raise.
Chelsea Kerwin, August 15, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/
The Less Scary Applications of Artificial Intelligence: Computer Vision
August 3, 2016
The article on The Christian Science Monitor titled Shutterstock’s Reverse Image Search Promises a Gentler Side of AI provides a glimpse into computer vision, or the way a computer assesses and categorizes any image into its parts. Shutterstock finds that using machine learning to find other images similar to the first is a vast improvement, because rather than analyzing keywords, AI analyzes the image directly based on exact colors and shapes. The article states,
“That keyword data, while useful for indexing images into categories on our site, wasn’t nearly as effective for surfacing the best and most relevant content,” says Kevin Lester, vice president of engineering at the company, in a blog post. “So our computer vision team worked to apply machine learning techniques to reimagine and rebuild that process.”
“The neural network has now examined 70 million images and 4 million video clips in its collection.”
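The shift from keyword matching to comparing images directly can be sketched with a crude stand-in for Shutterstock's unpublished learned features: fingerprint each image with a color histogram, then return the library image closest to the query. All names and sample images here are invented for illustration:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Flattened per-channel histogram: a crude fingerprint of an
    image's colors, normalized so image size does not matter."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    vec = np.concatenate(hist).astype(float)
    return vec / vec.sum()

def most_similar(query, library):
    """Index of the library image whose histogram is closest to the
    query's (smallest L1 distance) -- no keywords involved."""
    q = color_histogram(query)
    dists = [np.abs(q - color_histogram(img)).sum() for img in library]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
red_ish = rng.integers(150, 255, (16, 16, 3)); red_ish[..., 1:] //= 4
blue_ish = rng.integers(150, 255, (16, 16, 3)); blue_ish[..., :2] //= 4
query = red_ish.copy()
print(most_similar(query, [blue_ish, red_ish]))  # 1 (the red-ish image)
```

A production system would use a learned embedding rather than raw histograms, but the retrieval step, nearest neighbor in feature space, is the same shape.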
In addition, the company plans to expand the search feature to videos as well as images. Jon Oringer, CEO and founder of Shutterstock, has a vision of endless possibilities for this technology. The article points out that this is one of the clearly positive effects of AI, which gets a bad rap, perhaps not unfairly, given the potential for autonomous weapons and commercial abuse. So by all means, let’s use AI to recognize a cat, like Google, or to analyze images.
Chelsea Kerwin, August 3, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Google DeepMind AI Project Makes Progress
July 25, 2016
For anyone following the development of artificial intelligence, I recommend checking out the article, “How Google Plans to Solve Artificial Intelligence” at MIT Technology Review. The article delves into Google’s DeepMind project, an object of renewed curiosity after its AlphaGo software bested the human world champion of the ancient game Go in March.
This Go victory is significant because it marks progress beyond the strategy of calculating different moves’ possible outcomes; the game is too complex for that established approach (though such calculations did allow IBM’s Deep Blue to triumph over the world chess champion in 1997). The ability to master Go has some speaking of “intuition” over calculation. Just how do you give software an approximation of human intuition? Writer Tom Simonite tells us:
“Hassabis believes the reinforcement learning approach is the key to getting machine-learning software to do much more complex things than the tricks it performs for us today, such as transcribing our words, or understanding the content of photos. ‘We don’t think just observing is enough for intelligence, you also have to act,’ he says. ‘Ultimately that’s the only way you can really understand the world.’”
“DeepMind’s 3-D environment Labyrinth, built on an open-source clone of the first-person-shooter Quake, is designed to provide the next steps in proving that idea. The company has already used it to challenge agents with a game in which they must explore randomly generated mazes for 60 seconds, winning points for collecting apples or finding an exit…. Future challenges might require more complex planning—for example, learning that keys can be used to open doors. The company will also test software in other ways, and is considering taking on the video game Starcraft and even poker. But posing harder and harder challenges inside Labyrinth will be a major thread of research for some time, says Hassabis. “It should be good for the next couple of years,” he says.”
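The "learning by acting" idea Hassabis describes can be shown in miniature. The sketch below is tabular Q-learning on a one-dimensional corridor with an apple at one end, vastly simpler than DeepMind's deep reinforcement learning agents in Labyrinth, but the ingredients (actions, reward, value updates) are the same; all parameters are invented for the example:

```python
import random

# The agent starts at cell 0 and earns a reward only by reaching
# the "apple" at cell 4. It learns purely from acting and observing
# reward -- no one tells it which direction is correct.
N, APPLE = 5, 4
ACTIONS = [-1, +1]          # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(500):
    s = 0
    while s != APPLE:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == APPLE else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy heads right, toward the apple.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1]
```

Swapping the corridor for a randomly generated 3-D maze and the table for a neural network is, very roughly, the jump from this toy to Labyrinth.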
The article has a video of DeepMind’s virtual labyrinth you can check out, if you’re curious. (It looks very much like an old Windows screen saver some readers may recall.) Simonite tells us that AI firms across the industry are watching this project carefully. He also points to some ways DeepMind is already helping with real-world problems, like developing training software with the U.K.’s National Health Service to help medical personnel recognize commonly missed signs of kidney problems.
See the article for much more about Google’s hopes and plans for DeepMind. Simonite concludes by acknowledging the larger philosophical and ethical concerns around artificial intelligence. We’re told DeepMind has its own “internal ethics board of philosophers, lawyers, and businesspeople.” I think it is no exaggeration to say these folks, whom Google indicates it will name someday soon, could have great influence over the nature of our future technology. Let us hope Google chooses wisely.
Cynthia Murrell, July 25, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on July 26, 2016. Information is at this link: http://bit.ly/29tVKpx.
The Potential of AI Journalism
July 12, 2016
Most of us are familiar with the concept of targeted advertising, but are we ready for targeted news? Personalized paragraphs within news stories are one development writer Jonathan Holmes predicts in “AI Is Already Making Inroads Into Journalism but Could It Win a Pulitzer?” at the Guardian.
Even now, the internet is full of both clickbait and news articles generated by algorithms. Such software is also producing quarterly earnings reports, obituaries, even poetry and fiction. Now that it has been established that at least some software can write better than some humans, researchers are turning to another question: What can AI writers do that humans cannot? Holmes quotes Reg Chua of Thomson Reuters:
“‘I think it may well be that in the future a machine will win not so much for its written text, but by covering an important topic with five high quality articles and also 500,000 versions for different people.’ Imagine an article telling someone how local council cuts will affect their family, specifically, or how they personally are affected by a war happening in a different country. ‘I think the results might show up in the next couple of years,’ Caswell agrees. ‘It’s something that could not be done by a human writer.’”
The “Caswell” above is David Caswell, a fellow at the University of Missouri’s Donald W Reynolds Journalism Institute. Holmes also describes:
“In Caswell’s system, Structured Stories, the ‘story’ is not a story at all, but a network of information that can be assembled and read as copy, infographics or any other format, almost like musical notes. Any bank of information – from court reports to the weather – could eventually be plugged into a database of this kind. The potential for such systems is enormous.”
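The "structured stories" idea, facts stored as data and assembled into copy per reader, can be sketched in a few lines. Everything below (the field names, the budget figures, the phrasing) is invented purely for illustration and is not Caswell's actual system:

```python
# Facts live in a small structured database; copy is assembled per
# reader rather than written once for everyone.
CUTS = {"libraries": 12, "road repair": 7}  # % budget cut by service

def personalized_lead(reader):
    """Render one reader's version of the same underlying story."""
    service = reader["most_used_service"]
    cut = CUTS[service]
    return (f"{reader['name']}, the council's budget cuts reduce "
            f"{service} funding by {cut}%, the service your "
            f"household uses most.")

readers = [
    {"name": "Asha", "most_used_service": "libraries"},
    {"name": "Ben", "most_used_service": "road repair"},
]
for r in readers:
    print(personalized_lead(r))
```

One fact base, as many renderings as there are readers: that is the mechanism behind Chua's "500,000 versions for different people."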
Yes, it is; we are curious to see where this technology is headed. In the meantime, we should all remember not to believe everything we read… was written by a human.
Cynthia Murrell, July 12, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Supercomputers Have Individual Personalities
July 1, 2016
Supercomputers like Watson are more than a novelty. They were built to be another tool for humans rather than to replace humans altogether, or so say comments from Watson’s chief technology officer Rob High. High was a keynote speaker at the Nvidia GPU Technology Conference in San Jose, California. The Inquirer shares the details in “Nvidia GTC: Why IBM Watson Dances Gangam Style And Sings Like Taylor Swift.”
At the conference, High said that he did not want his computer to take over his thinking; instead, he wanted the computer to do his research for him. Research and keeping up with the latest trends in any industry consume a great deal of time, and a supercomputer could potentially eliminate some of the hassle. This requires that supercomputers become more human:
“This leads on to the fact that the way we interact with computers needs to change. High believes that cognitive computers need four skills – to learn, to express themselves with human-style interaction, to provide expertise, and to continue to evolve – all at scale. People who claim not to be tech savvy, he explained, tend to be intimidated by the way we currently interact with computers, pushing the need for a further ‘humanising’ of the process.”
Humanizing robots, in practice, means teaching them to interact like humans. A few robots have been programmed with Watson as their main processor, and they can interact with people. By interacting with humans, the robots pick up on spoken language as well as body language and vocal tone. This allows each robot to learn not how to be human, but rather how to be the best “artificial servant it can be.”
Robots and supercomputers are tools that can ease a person’s job, but the fact still remains that in some industries they can also replace human labor.
Whitney Grace, July 1, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Chatbot Tay Calls into Question Intelligence of Software
June 30, 2016
Chatbots are providing something, alright. These days it’s more like entertainment. Venture Beat shared an article highlighting the latest: Microsoft’s Tay chatbot comes back online, says it’s ‘smoking kush’ in front of the police. Tay, the machine-learning bot, was designed to “be” a teenage girl. Microsoft’s goal was to engage a young demographic while simultaneously learning how better to engage them. The article explains,
“Well, uh, Microsoft’s Tay chatbot, which got turned off a few days ago after behaving badly, has suddenly returned to Twitter and has started tweeting to users like mad. Most of its musings are innocuous, but there is one funny one I’ve come across so far. “i’m smoking kush infront the police,” it wrote in brackets. Kush is slang for marijuana, a drug that can result in a fine for possession in the state of Washington, where Microsoft has its headquarters. But this is one of hundreds of tweets that the artificial intelligence-powered bot has sent out in the past few minutes.”
Positioned by some sources as next-generation search, or even a search replacement, chatbots appear to need a bit of optimization, to put it lightly. According to Microsoft, this incident occurred while the chatbot should still have been offline undergoing testing. Tay was offline in the first place only because trolls, seizing on the nature of its artificial intelligence programming, had taught it bullying and hate speech. Despite the fact that it is considered AI, is this smart software? There is a little important something called emotional intelligence.
Megan Feil, June 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Artificial Intelligence Spreading to More Industries
May 10, 2016
According to MIT Technology Review, it has finally happened. No longer is artificial intelligence the purview of data wonks alone— “AI Hits the Mainstream,” they declare. Targeted AI software is now being created for fields from insurance to manufacturing to health care. Reporter Nanette Byrnes is curious to see how commercialization will affect artificial intelligence, as well as how this technology will change different industries.
What about the current state of the AI field? Byrnes writes:
“Today the industry selling AI software and services remains a small one. Dave Schubmehl, research director at IDC, calculates that sales for all companies selling cognitive software platforms —excluding companies like Google and Facebook, which do research for their own use—added up to $1 billion last year. He predicts that by 2020 that number will exceed $10 billion. Other than a few large players like IBM and Palantir Technologies, AI remains a market of startups: 2,600 companies, by Bloomberg’s count. That’s because despite rapid progress in the technologies collectively known as artificial intelligence—pattern recognition, natural language processing, image recognition, and hypothesis generation, among others—there still remains a long way to go.”
The article examines ways some companies are already using artificial intelligence. For example, insurance and financial firm USAA is investigating its use to prevent identity theft, while GE is now using it to detect damage to its airplanes’ engine blades. Byrnes also points to MyFitnessPal, Under Armour’s extremely successful diet and exercise tracking app. Through a deal with IBM, Under Armour is blending data from that site with outside research to help better target potential consumers.
The article wraps up by reassuring us that, despite science fiction assertions to the contrary, machine learning will always require human guidance. If you doubt it, consider recent events: Google’s self-driving car’s errant lane change and Microsoft’s racist chatbot. It is clear the kids still need us, at least for now.
Cynthia Murrell, May 10, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Watson Joins the Hilton Family
April 30, 2016
It looks like Paris Hilton might have a new sibling, although the conversations at family gatherings will be lackluster. No, the hotel-chain family has not adopted Watson, instead a version of the artificial intelligence will work as a concierge. Ars Technica informs us that “IBM Watson Now Powers A Hilton Hotel Robot Concierge.”
The Hilton McLean hotel in Virginia now has a new concierge dubbed Connie, after Conrad Hilton, the chain’s founder. Connie is housed in a Nao, an affordable French-made android built as a customer relations platform. Its brain is based on Watson’s programming, and it answers verbal queries from a WayBlazer database. The little robot assists guests by explaining how to navigate the hotel and find restaurants and tourist attractions. It cannot check in guests yet, but when the concierge station is busy, or when you do not want to pull out your smartphone or have any human interaction, it is a good substitute.
” ‘This project with Hilton and WayBlazer represents an important shift in human-machine interaction, enabled by the embodiment of Watson’s cognitive computing,’ Rob High, chief technology officer of Watson said in a statement. ‘Watson helps Connie understand and respond naturally to the needs and interests of Hilton’s guests—which is an experience that’s particularly powerful in a hospitality setting, where it can lead to deeper guest engagement.’”
Asia already uses robots in service industries such as hotels and restaurants. It is worrying that Connie-like robots could replace people in these jobs. Robots are supposed to augment human life, not take jobs away from it. While Connie-like robots will have a major impact on the industry, there is something to be said for genuine human interaction, which most guests still prefer over artificial intelligence. Maybe team the robots with humans in the service industries for the best all-around care?
Whitney Grace, April 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Watson Lacks Conversation Skills and He Is Not Evil
April 22, 2016
When I was in New York last year, I was walking on the west side when I noticed several other pedestrians moving out of the way of a man mumbling to himself. Doing as the natives do, I moved aside and heard the man rumble about how, “The robots are taking over and soon they will be ruling us. You all are idiots for not listening to me.” Fear of a robot apocalypse has been constant since computer technology gained prominence, and we can also thank science fiction for perpetuating it. Tech Insider says in “Watson Can’t Actually Talk To You Like In The Commercials” that Elon Musk, Bill Gates, Stephen Hawking, and other tech leaders have voiced their concerns about creating artificial intelligence so advanced it could turn evil.
IBM wants people to believe otherwise, which explains its recent PR campaign with commercials that depict Watson carrying on conversations with people. The idea is that people will think AI is friendly, here to augment our jobs and generally help us. There is some deception on IBM’s part, however: Watson cannot actually carry on a conversation with a person. People communicate with it, usually via a UI such as a program on a desktop or tablet. Also, there is more than one Watson; each is programmed for different functions, like diagnosing diseases or cooking.
“So remember next time you see Watson carrying on a conversation on TV that it’s not as human-like as it seems…Humor is a great way to connect with a much broader audience and engage on a personal level to demystify the technology,’ Ann Rubin, Vice President IBM Content and Global Creative, wrote in an email about the commercials. ‘The reality is that these technologies are being used in our daily lives to help people.’”
If artificial intelligence ever does become advanced enough to be capable of thought and reason comparable to a human’s, that would be worrisome. It might require that certain laws be put into place to maintain control over the artificial “life.” That day is a long way off, however; until then, embrace robots that help to improve life.
Whitney Grace, April 22, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Natural Language Takes Lessons from Famous Authors
April 18, 2016
What better way to train a natural language AI than to bring venerated human authors into the equation? Wired reports, “Google Wants to Predict the Next Sentences of Dead Authors.” Not surprisingly, Google researchers are tapping into Project Gutenberg for their source material. Writer Matt Burgess relates:
“The network is given millions of lines from a ‘jumble’ of authors and then works out the style of individual writers. Pairs of lines were given to the system, which made a simple ‘yes’ or ‘no’ decision to whether they matched up. Initially the system didn’t know the identity of any authors, but still only got things wrong 17 percent of the time. By giving the network an indication of who the authors were, giving it another factor to compare work against, the computer scientists reduced the error rate to 12.3 percent. This was also improved by adding a fixed number of previous sentences to give the network more context.”
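The pairwise yes/no setup described above can be sketched without a neural network at all: fingerprint each line with a few cheap style features and decide "same author" when two fingerprints sit close together. The features, sample lines, and tolerance below are invented stand-ins for the learned representation in Google's (non-public) model:

```python
from collections import Counter

def style_vector(line):
    """Crude style fingerprint: average word length plus relative
    frequency of a few common function words."""
    words = line.lower().split()
    counts = Counter(words)
    avg_len = sum(len(w) for w in words) / len(words)
    markers = [counts[w] / len(words) for w in ("the", "and", "of", "i")]
    return [avg_len] + markers

def same_author(line_a, line_b, tolerance=0.8):
    """Yes/no decision on a pair of lines: do their style
    fingerprints sit close enough together?"""
    va, vb = style_vector(line_a), style_vector(line_b)
    distance = sum(abs(a - b) for a, b in zip(va, vb))
    return distance < tolerance

a1 = "it was the best of times and the worst of times"
a2 = "it was the age of wisdom and the age of foolishness"
b1 = "antidisestablishmentarianism notwithstanding, proceed accordingly"
print(same_author(a1, a2))  # True
print(same_author(a1, b1))  # False
```

The real system learns its features from millions of lines instead of hand-picking them, which is where the 17 percent error rate (and the improvement to 12.3 percent with author labels) comes from.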
The researchers carry their logic further. As the Wired title says, they have their AI predict an author’s next sentence; we’re eager to learn what Proust would have said next. They also have the software draw conclusions about authors’ personalities. For example, we’re told:
“Google admitted its predictions weren’t necessarily ‘particularly accurate,’ but said its AI had identified William Shakespeare as a private person and Mark Twain as an outgoing person. When asked ‘Who is your favourite author?’ and [given] the options ‘Mark Twain’, ‘William Shakespeare’, ‘myself’, and ‘nobody’, the Twain model responded with ‘Mark Twain’ and the Shakespeare model responded with ‘William Shakespeare’. Asked who would answer the phone, the AI Shakespeare hoped someone else would answer, while Twain would try and get there first.”
I can just see Twain jumping over Shakespeare to answer the phone. The article notes that Facebook is also using the work of human authors to teach its AI, though that company elected to use children’s classics like The Jungle Book, A Christmas Carol, and Alice in Wonderland. Will we eventually see a sequel to Through the Looking Glass?
Cynthia Murrell, April 18, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

