Now Watson Wants to Be a Judge
December 27, 2016
IBM has deployed Watson in many fields, including the culinary arts, sports, and medicine. The big data supercomputer can be used in any field or industry that creates a lot of data; Watson digests that data and, depending on the algorithms, spits out results. Now IBM wants Watson to take on the daunting task of judging, says The Drum in “Can Watson Pick A Cannes Lion Winner? IBM’s Cognitive System Tries Its Arm At Judging Awards.”
According to the article, judging is a cognitive process and requires special algorithms, not to mention accounting for the biases of individual judges. In other words, it should be right up Watson’s alley (perhaps the results will be less subjective as well). The Drum decided to put Watson to the ultimate creative test and fed it thousands of previous Cannes films. Watson then predicted who would win in the Outdoor category at this year’s Cannes Lions.
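The Drum does not reveal Watson’s internals, but the setup it describes, training on past entries and then scoring new ones, is standard supervised learning. Here is a minimal sketch of that pattern; every feature and number is invented, and this is not IBM’s actual pipeline:

```python
# Hypothetical sketch: train a classifier on past award entries, then
# score a new one. Features (e.g., tone, pacing, media spend) are invented.
from sklearn.linear_model import LogisticRegression

past_entries = [          # one feature vector per previous Cannes entry
    [0.8, 0.3, 120],
    [0.2, 0.9, 30],
    [0.7, 0.6, 95],
    [0.1, 0.4, 20],
]
won_lion = [1, 0, 1, 0]   # 1 = the entry won, 0 = it lost

model = LogisticRegression().fit(past_entries, won_lion)

# Probability, under the toy model, that a new entry wins.
new_entry = [[0.75, 0.5, 100]]
print(model.predict_proba(new_entry)[0][1])
```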
This could change the way contests are judged:
The Drum’s magazine editor Thomas O’Neill added: “This is an experiment that could massively disrupt the awards industry. We have the potential here of AI being able to identify an award winning ad from a loser before you’ve even bothered splashing out on the entry fee. We’re looking forward to seeing whether it proves as accurate in reality as it did in training.”
I would really like to see this applied to the Academy Awards, which are often criticized for a lack of diversity and a membership consisting largely of older white men. It would be great to see whether Watson would yield different results than what the Academy actually selects.
Whitney Grace, December 27, 2016
Big Data Needs to Go Public
December 16, 2016
Big Data touches every part of our lives, and mostly we are unaware of it. Have you ever noticed, when you listen to the news, read an article, or watch a YouTube video, how often people use phrases such as “experts claim” or “science says”? In the past, these statements relied on less-than-trustworthy sources, but now they can use Big Data to back up their claims. Popular opinion and puff pieces, however, still need to back up their Big Data with hard fact. Nature.com argues that transparency is a big deal for Big Data, and that algorithm designers need to work on it, in the article “More Accountability For Big-Data Algorithms.”
One hope is that Big Data will be used to bridge the divide between one bias and another, except that the opposite can happen. In other words, Big Data algorithms can be designed with a bias:
There are many sources of bias in algorithms. One is the hard-coding of rules and use of data sets that already reflect common societal spin. Put bias in and get bias out. Spurious or dubious correlations are another pitfall. A widely cited example is the way in which hiring algorithms can give a person with a longer commute time a negative score, because data suggest that long commutes correlate with high staff turnover.
Even worse, people and organizations can design an algorithm to support whatever “science” or “facts” they want to pass off as the truth. There is a growing demand, mostly in academia, for “algorithm accountability.” The demands include making public the data sets fed into the algorithms. There are also plans to build algorithms that monitor other algorithms for bias.
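The Nature piece does not spell out what an algorithm that monitors algorithms would look like. One simple, well-established form is a disparate-impact check: compare an algorithm’s positive-outcome rates across groups against the “four-fifths rule” from US hiring guidance. A sketch, with invented data keyed to the commute example above:

```python
# Minimal bias audit: the "four-fifths" disparate-impact check.
# The decisions below are invented; this is an illustration, not a
# test the Nature article prescribes.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: (group, hired) pairs; returns min/max hire-rate ratio."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    rates = {g: hired[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = disparate_impact([
    ("short_commute", True), ("short_commute", True), ("short_commute", False),
    ("long_commute", True), ("long_commute", False), ("long_commute", False),
])
print(rates)                                # per-group hire rates
print("flagged" if ratio < 0.8 else "ok")   # four-fifths threshold
```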
Big Data is here to stay, but relying too much on algorithms can distort the facts. This is why the human element is still needed to distinguish between fact and fiction. Minority Report is closer to being our present than ever before.
Whitney Grace, December 16, 2016
Could AI Spell Doom for Marketers?
December 1, 2016
AI is making inroads into almost every domain, and marketing is no different. However, AI’s inability to be creative in the true sense may be a major impediment.
The Telegraph, in a feature article titled “Marketing Faces Death by Algorithm Unless It Finds a New Code,” says:
Artificial intelligence (AI) is one of the most-hyped topics in advertising right now. Brands are increasingly finding that they need to market to intelligent machines in order to reach humans, and this is set to transform the marketing function.
The problem with AI, as most marketers agree, is its inability to imitate true creativity. As the focus of marketing shifts from direct product placement to content marketing, the role of AI grows even larger. A clothing company, for instance, cannot manually analyze vast amounts of Big Data, decipher it, and then create targeted advertising based on it; algorithms will play a crucial role there. The content creation itself, however, will ultimately require a human touch and intervention.
As the article makes clear:
While AI can build a creative idea, it’s not creative “in the true sense of the word”, according to Mr Cooper. Machine learning – the driving technology behind how AI can learn – still requires human intelligence to work out how the machine would get there. “It can’t put two seemingly random thoughts together and recognize something new.”
The other school of thought says that what AI lacks is not creativity but processing power and storage, and it seems we are moving closer to bridging that gap. When AI does close it, will most occupations, creative and technical alike, become obsolete?
Vishal Ingole, December 1, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Emphasize Data Suitability over Data Quantity
November 30, 2016
It seems obvious to us, but apparently, some folks need a reminder. Harvard Business Review proclaims, “You Don’t Need Big Data, You Need the Right Data.” Perhaps that distinction has gotten lost in the Big Data hype. Writer Maxwell Wessel points to Uber as an example. Though the company does collect a lot of data, the key is in which data it collects, and which it does not. Wessel explains:
In an era before we could summon a vehicle with the push of a button on our smartphones, humans required a thing called taxis. Taxis, while largely unconnected to the internet or any form of formal computer infrastructure, were actually the big data players in rider identification. Why? The taxi system required a network of eyeballs moving around the city scanning for human-shaped figures with their arms outstretched. While it wasn’t Intel and Hewlett-Packard infrastructure crunching the data, the amount of information processed to get the job done was massive. The fact that the computation happened inside of human brains doesn’t change the quantity of data captured and analyzed. Uber’s elegant solution was to stop running a biological anomaly detection algorithm on visual data — and just ask for the right data to get the job done. Who in the city needs a ride and where are they? That critical piece of information let the likes of Uber, Lyft, and Didi Chuxing revolutionize an industry.
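Wessel’s point is that asking for the right data collapses a hard inference problem into a trivial computation. Once rider and driver coordinates are known, dispatch is little more than a nearest-neighbor lookup, as in this toy sketch (all names and coordinates invented):

```python
import math

# With the "right data" (rider and driver coordinates), matching is a
# one-line nearest-neighbor lookup. All values are invented.
drivers = {"car_a": (40.75, -73.99), "car_b": (40.71, -74.01), "car_c": (40.73, -73.95)}
rider = (40.72, -74.00)

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

nearest = min(drivers, key=lambda name: distance(drivers[name], rider))
print(nearest)  # -> car_b, the closest available driver
```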
In order for businesses to decide which data is worth their attention, the article suggests three guiding questions: “What decisions drive waste in your business?” “Which decisions could you automate to reduce waste?” (Example—Amazon’s pricing algorithms) and “What data would you need to do so?” (Example—Uber requires data on potential riders’ locations to efficiently send out drivers.) See the article for more notes on each of these guidelines.
Cynthia Murrell, November 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Do Not Forget to Show Your Work
November 24, 2016
Showing your work is a messy but necessary step in proving how one arrived at a solution. Most of the time it is never reviewed, but with big data, people now wonder how computer algorithms arrive at their conclusions. Engadget explains that computers are being made to prove their results in “MIT Makes Neural Networks Show Their Work.”
Understanding neural networks is extremely difficult, but MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a way to map these complex systems. CSAIL approached the task by splitting a network into two smaller modules: the first extracts segments of text and scores them according to their length and coherence, while the second predicts each segment’s subject and attempts to classify it. The mapping modules sound almost as complex as the actual neural networks. To alleviate the stress and add a giggle to their research, CSAIL had the modules analyze beer reviews:
For their test, the team used online reviews from a beer rating website and had their network attempt to rank beers on a 5-star scale based on the brew’s aroma, palate, and appearance, using the site’s written reviews. After training the system, the CSAIL team found that their neural network rated beers based on aroma and appearance the same way that humans did 95 and 96 percent of the time, respectively. On the more subjective field of “palate,” the network agreed with people 80 percent of the time.
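Engadget describes the two modules only at a high level. The following is a crude, non-neural approximation of the extract-then-classify idea, with an invented length score and a keyword lookup standing in for CSAIL’s learned components:

```python
# Crude stand-in for the two-module design described above. The scoring
# heuristic and keyword classifier are invented; CSAIL's modules are
# trained neural networks, not rules.

def extract_segments(review, min_words=3):
    """Module 1: pull out candidate segments, scored here by length."""
    segments = [s.strip() for s in review.split(".") if s.strip()]
    return [(s, len(s.split())) for s in segments if len(s.split()) >= min_words]

def classify_segment(segment):
    """Module 2: predict which aspect of the beer a segment describes."""
    cues = {"aroma": ["smell", "aroma", "nose"],
            "appearance": ["color", "pour", "pours", "hazy"],
            "palate": ["mouthfeel", "smooth", "crisp"]}
    words = segment.lower().split()
    for aspect, keywords in cues.items():
        if any(k in words for k in keywords):
            return aspect
    return "unknown"

review = "Pours a hazy amber. The aroma is citrus and pine. Smooth mouthfeel."
for segment, score in extract_segments(review):
    print(f"{score} words | {classify_segment(segment)} | {segment}")
```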
One set of data is as good as another for testing CSAIL’s network mapping tool. CSAIL hopes to fine-tune the machine learning project and use it in breast cancer research to analyze pathology data.
Whitney Grace, November 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Partnership Aims to Establish AI Conventions
October 24, 2016
Artificial intelligence research has been booming, and it is easy to see why: recent advances in the field have opened some exciting possibilities, both for business and for society as a whole. Still, it is important to proceed carefully, given the potential dangers of relying too heavily on the judgment of algorithms. The Philadelphia Inquirer reports on a joint effort to develop some AI principles and best practices in its article, “Why This AI Partnership Could Bring Profits to These Tech Titans.” Writer Chiradeep BasuMallick explains:
Given this backdrop, the grandly named Partnership on AI to Benefit People and Society is a bold move by Alphabet, Facebook, IBM and Microsoft. These globally revered companies are literally creating a technology Justice League on a quest to shape public/government opinion on AI and to push for friendly policies regarding its application and general audience acceptability. And it should reward investors along the way.
The job at hand is very simple: Create a wave of goodwill for AI, talk about the best practices and then indirectly push for change. Remember, global laws are still obscure when it comes to AI and its impact.
Curiously enough, this elite team is missing two major heavyweights. Apple and Tesla Motors are notably absent. Apple Chief Executive Tim Cook, always secretive about AI work, though we know about the estimated $200 million Turi project, is probably waiting for a more opportune moment. And Elon Musk, co-founder, chief executive and product architect of Tesla Motors, has his own platform to promote technology, called OpenAI.
Along with representatives of each participating company, the partnership also includes some independent experts in the AI field. To say that technology is advancing faster than the law can keep up with is a vast understatement. This ongoing imbalance underscores the urgency of this group’s mission to develop best practices for companies and recommendations for legislators. Their work could do a lot to shape the future of AI and, by extension, society itself. Stay tuned.
Cynthia Murrell, October 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Pattern of Life Analysis to Help Decrypt Dark Web Actors
October 18, 2016
Google-funded Recorded Future plans to use technologies like natural language processing, social network analysis, and temporal pattern analysis to track Dark Web actors. This, in turn, will help security professionals detect patterns and thwart security breaches well in advance.
An article on DarkReading, “Decrypting The Dark Web: Patterns Inside Hacker Forum Activity,” points out:
Most companies conducting threat intelligence employ experts who navigate the Dark Web and untangle threats. However, it’s possible to perform data analysis without requiring workers to analyze individual messages and posts.
Recorded Future, which deploys around 500-700 servers across the globe, monitors Dark Web forums to identify and categorize participants based on their language and geography. Using advanced algorithms, it then identifies individuals, and their aliases, who are involved in various fraudulent activities online. This is a type of automation in which AI is deployed rather than relying on human intelligence.
The major flaw in this method is that bad actors do not necessarily use the same or even similar aliases or handles across different Dark Web forums. Christopher Ahlberg, CEO of Recorded Future, who is leading the project, says:
A process called mathematical clustering can address this issue. By observing handle activity over time, researchers can determine if two handles belong to the same person without running into many complications.
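Ahlberg does not spell out the mathematics. One plausible reading, our assumption rather than Recorded Future’s documented method, is to represent each handle as a vector of posting activity over time and flag pairs of handles whose vectors are suspiciously similar:

```python
# Sketch of clustering handles by temporal activity. This is one plausible
# reading of the "mathematical clustering" quoted above, not Recorded
# Future's documented method; all data below is invented.
import math

# Posts per week, per handle, over the same six weeks.
activity = {
    "dark_fox":   [5, 0, 7, 1, 6, 0],
    "fox_dark99": [4, 0, 6, 1, 5, 0],   # suspiciously similar rhythm
    "crypt0king": [0, 8, 0, 9, 0, 7],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

handles = list(activity)
for i in range(len(handles)):
    for j in range(i + 1, len(handles)):
        sim = cosine(activity[handles[i]], activity[handles[j]])
        if sim > 0.95:  # arbitrary threshold; a real system would tune it
            print(f"{handles[i]} and {handles[j]} may be one actor ({sim:.2f})")
```

The point of the sketch: the rhythm of activity, not the handle string itself, carries the signal, and a human still has to choose the features and the threshold.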
Again, researchers, and not AI or intelligent algorithms, will have to play a crucial role in identifying the bad actors. It is interesting to note that Google, which pretty much dominates information on the Open Web, is trying to make inroads into the Dark Web through several of its fronts. The question is: will it succeed?
Vishal Ingole, October 18, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
The Case for Algorithmic Equity
September 20, 2016
We know that AI algorithms are skewed by the biases of both their creators and, depending on the application, their users. Social activist Cathy O’Neil addresses the broad consequences for society in her book, Weapons of Math Destruction. Time covers her views in its article, “This Mathematician Says Big Data Is Causing a ‘Silent Financial Crisis’.” O’Neil studied mathematics at Harvard, worked as a quantitative analyst at a hedge fund, and launched a targeted-advertising startup. It is fair to say she knows what she is talking about.
More and more businesses and organizations rely on algorithms to make decisions that have big impacts on people’s lives: choices about employment, financial matters, scholarship awards, and where to deploy police officers, for example. Yet, the processes are shrouded in secrecy, and lawmakers are nowhere close to being on top of the issue. There is currently no way to ensure these decisions are anything approaching fair. In fact, the algorithms can create a sort of feedback loop of disadvantage. Reporter Rana Foroohar writes:
Using her deep technical understanding of modeling, she shows how the algorithms used to, say, rank teacher performance are based on exactly the sort of shallow and volatile type of data sets that informed those faulty mortgage models in the run up to 2008. Her work makes particularly disturbing points about how being on the wrong side of an algorithmic decision can snowball in incredibly destructive ways—a young black man, for example, who lives in an area targeted by crime fighting algorithms that add more police to his neighborhood because of higher violent crime rates will necessarily be more likely to be targeted for any petty violation, which adds to a digital profile that could subsequently limit his credit, his job prospects, and so on. Yet neighborhoods more likely to commit white collar crime aren’t targeted in this way.
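That snowball is easy to reproduce in a toy simulation, entirely our own construction rather than anything from the book: give two districts identical true crime rates, allocate patrols in proportion to recorded arrests, and let recorded arrests rise where the patrols go. A small initial gap widens every round:

```python
# Toy feedback loop: patrols follow recorded arrests, and recorded arrests
# rise where patrols go. Both districts have identical true crime by
# construction; all numbers are invented.
districts = {"district_a": 12.0, "district_b": 10.0}  # recorded arrests

for round_no in range(5):
    total = sum(districts.values())
    # Allocate 100 patrols in proportion to last round's recorded arrests.
    patrols = {d: 100 * arrests / total for d, arrests in districts.items()}
    print(f"round {round_no}: district_a gets {patrols['district_a']:.1f}% of patrols")
    # Superlinear response (exponent 1.3, invented) stands in for saturation
    # patrols recording petty violations that would otherwise go unnoticed.
    districts = {d: p ** 1.3 for d, p in patrols.items()}
```

After five rounds, district_a’s share of patrols has climbed from 54.5 percent to nearly 63 percent, with no change at all in the underlying crime.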
Yes, unsurprisingly, it is the underprivileged who bear the brunt of such algorithmic aftermath; the policing loop is just one example. The write-up continues:
Indeed, O’Neil writes that WMDs [Weapons of Math Destruction] punish the poor especially, since ‘they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.’ Whereas the poor engage more with faceless educators and employers, ‘the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.’
So, algorithms add to the disparity between how the wealthy and the poor experience life. Compounding the problem, algorithms also allow the wealthy to isolate themselves online as well as in real life, through curated news and advertising that make it ever easier to deny that poverty is even a problem. See the article for its more thorough discussion.
What does O’Neil suggest we do about this? First, she proposes a “Hippocratic Oath for mathematicians.” She also joins the calls for much more thorough regulation of the AI field and for updating existing civil-rights laws to cover algorithm-based decisions. Such measures will require the cooperation of legislators, who, as a group, are hardly known for their technical understanding. It is up to those of us who do comprehend the issues to inform them that action must be taken. Sooner rather than later, please.
Cynthia Murrell, September 20, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
Microsoft Considers Next-Generation Artificial Intelligence
August 24, 2016
While science fiction portrays artificial intelligence in novel and far-reaching ways, products using artificial intelligence already exist. WinBeta released a story, “Microsoft exec at London conference: AI will ‘change everything’,” which reminds us of this. Digital assistants like Cortana and Siri are one example of how mundane AI can appear. However, during a recent AI conference, Microsoft UK’s chief envisioning officer Dave Coplin projected much more impactful applications. The article summarizes the landscape of concerns:
Of course, many also are suspect about the promise of artificial intelligence and worry about its impact on everyday life or even its misuse by malevolent actors. Stephen Hawking has worried AI could be an existential threat and Tesla CEO Elon Musk has gone on to create an open source AI after worrying about its misuse. In his statements, Coplin also stressed that as more and more companies try to create AI, ‘We’ve got to start to make some decisions about whether the right people are making these algorithms.’
There is much to consider in regard to artificial intelligence, but a statement about “the right people” cannot stop there. Coplin goes on to refer to the biases of the people creating algorithms and of the companies they work for. If organizational structures must be considered, so too must their motivator: the economy. Perhaps machine learning to understand the best way to approach AI would be a good first application.
Megan Feil, August 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Behind the Google Search Algorithm
June 16, 2016
Trying to reveal the secrets behind Google’s search algorithm is almost harder than breaking into Fort Knox. Google keeps its 200 ranking factors a secret; what we do know is that keywords do not play the role they used to, and that social media plays some sort of undisclosed part. Search Engine Journal shares, in “Google Released The Top 3 Ranking Factors,” a little information to help with SEO.
Google Search Quality Senior Strategist Andrey Lipattsev shared that the three factors are links, content, and RankBrain, in no particular order. RankBrain is an artificial intelligence system that relies on machine learning to help Google process queries and push the most relevant search results to the top of the list. SEO experts are trying to figure out how this will affect their jobs, but the article notes:
“We’ve known for a long time that content and links matter, though the importance of links has come into question in recent years. For most SEOs, this should not change anything about their day-to-day strategies. It does give us another piece of the ranking factor puzzle and provides content marketers with more ammo to defend their practice and push for growth.”
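Google discloses nothing about how the three signals are combined, so purely as an illustration of what blending ranking factors looks like (every field, weight, and number below is invented):

```python
# Illustrative blend of the three named factors: links, content, and a
# learned relevance score. Weights and scores are invented; Google does
# not disclose how the signals are actually combined.
pages = [
    {"url": "a.example", "link_score": 0.9, "content_score": 0.4, "ml_score": 0.5},
    {"url": "b.example", "link_score": 0.3, "content_score": 0.9, "ml_score": 0.8},
    {"url": "c.example", "link_score": 0.6, "content_score": 0.6, "ml_score": 0.6},
]

WEIGHTS = {"link_score": 0.35, "content_score": 0.35, "ml_score": 0.30}

def blended(page):
    return sum(page[signal] * weight for signal, weight in WEIGHTS.items())

for page in sorted(pages, key=blended, reverse=True):
    print(f"{blended(page):.3f}  {page['url']}")
```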
In reality, there is not much difference, except that few will be able to explain how artificial intelligence ranks particular sites. Nifty play, Google.
Whitney Grace, June 16, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

