Google Now Has Dowsing Ability
March 16, 2016
People who claim to be psychic are fakes. There is no way to predict the future, instantly locate a lost person or item, or read someone’s aura, and no scientific study has shown such abilities exist. One ability psychics purport to have is “dowsing,” the power to sense where water, precious stones or metals, and even people are hidden. Instead of relying on a suspended crystal or a forked stick, Google now claims it can identify where almost any photograph was taken based solely on the image itself, says Technology Review in the article, “Google Unveils Neural Network With ‘Superhuman’ Ability To Determine The Location Of Almost Any Image.”
Using computer algorithms rather than magic powers, Tobias Weyand and a team of tech-savvy colleagues developed a way for a Google deep-learning machine to identify where pictures were taken. Weyand and his team designed the tool, called PlaNET, and accomplished this by dividing the world into a grid of about 26,000 cells (excluding the oceans and poles), with cell sizes varying according to how populous an area is.
“Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.
Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates.”
With the remaining 34 million images in the data set, they tested PlaNET to check its accuracy. PlaNET correctly places 3.6 percent of images at street level, 10.1 percent at city level, 28.4 percent in the right country, and 48 percent on the right continent. These results are impressive compared with the limited geographic knowledge a human can keep in his or her head.
Weyand believes PlaNET is able to determine locations because it has learned to recognize subtle patterns that distinguish areas humans cannot tell apart; it has arguably “seen” more places than any human. What is even more amazing is how little memory PlaNET uses: only 377 MB!
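PlaNET itself is Google’s, but the recipe described above (carve the globe into a grid, label each geotagged photo with the cell it falls in, and train a classifier that maps an image to a cell) can be sketched briefly. Everything below, from the uniform five-degree grid to the toy linear classifier standing in for the deep network, is an illustrative assumption rather than the actual system.

```python
# Illustrative sketch of "geolocation as classification." The uniform grid,
# fake image features, and toy classifier are assumptions for demonstration;
# PlaNET uses an adaptive grid and a deep convolutional network.
import numpy as np

CELL_DEG = 5.0                       # hypothetical cell size in degrees
COLS = int(360 / CELL_DEG)           # grid columns (longitude)
ROWS = int(180 / CELL_DEG)           # grid rows (latitude)
NUM_CELLS = COLS * ROWS

def latlon_to_cell(lat, lon):
    """Map a latitude/longitude pair to a flat grid-cell index (training label)."""
    row = min(int((lat + 90.0) / CELL_DEG), ROWS - 1)
    col = min(int((lon + 180.0) / CELL_DEG), COLS - 1)
    return row * COLS + col

def predict_cell(image_features, weights, bias):
    """Toy linear classifier over grid cells (stand-in for the CNN)."""
    logits = image_features @ weights + bias
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Fake "image features" and a randomly initialized classifier, just to show shapes.
rng = np.random.default_rng(0)
features = rng.normal(size=128)                  # would come from a CNN in practice
weights = rng.normal(size=(128, NUM_CELLS)) * 0.01
bias = np.zeros(NUM_CELLS)

probs = predict_cell(features, weights, bias)
print("most likely cell:", int(probs.argmax()))
print("training label for a photo taken in Paris:", latlon_to_cell(48.86, 2.35))
```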
When will PlaNET become available as a GPS app?
Whitney Grace, March 16, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Woman Fights Google and Wins
January 21, 2016
Google is one of those big corporations that, if you have a problem with it, you might as well let it go. Google is powerful, respected, and has (we suspect) a very good legal department. Still, there are complaints against Google, such as the “right to be forgotten,” and some Australian citizens have a big bone to pick with the search engine. Australian News reports that “SA Court Orders Google Pay Dr. Janice Duffy $115,000 Damages For Defamatory Search Results.”
Duffy filed a lawsuit against Google for displaying her name alongside false and defamatory content within its search results. Google claimed no responsibility for the content, since it was not the publisher. The Supreme Court of South Australia felt differently:
“In October, the court rejected Google’s arguments and found it had defamed Dr Duffy due to the way the company’s patented algorithm operated. Justice Malcolm Blue found the search results either published, republished or directed users toward comments harmful to her reputation. On Wednesday, Justice Blue awarded Dr Duffy damages of $100,000 and a $15,000 lump sum to cover interest.”
Duffy was not the only one upset with Google. Other Australians have filed their own complaints, including Michael Trkulja, who claimed search results linked him to crime, and Shane Radbone, who sued to learn the identities of bloggers who wrote negative comments about him.
It does not seem that Google should be held accountable, since technically it is not responsible for the content. However, Google’s algorithms are wired to surface the most popular and in-depth results. Should it develop a filter that screens out negative and harmful information, or is that judgment too subjective?
Whitney Grace, January 21, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Hello, Big Algorithms
January 15, 2016
The year has barely started and it looks like we already have a new buzzword to nestle into our ears: big algorithms. The term algorithm has been tossed around with big data as one of the driving forces behind powerful analytics. Big data is an encompassing term that touches on privacy, security, search, analytics, organization, and more. The real power, however, lies in the algorithms. Benchtec posted the article, “Forget Big Data-It’s Time For Big Algorithms,” to explain how algorithms are stealing the scene.
Data is useless unless you are able to pull something out of it, and the only way to get the meat off the bone is to use algorithms. Algorithms might be the powerhouses behind big data, but they are not what is unique; the unique asset is the data each company holds.
“However, not everyone agrees that we’ve entered some kind of age of the algorithm. Today competitive advantage is built on data, not algorithms or technology. The same ideas and tools that are available to, say, Google are freely available to everyone via open source projects like Hadoop or Google’s own TensorFlow…infrastructure can be rented by the minute, and rather inexpensively, by any company in the world. But there is one difference. Google’s data is theirs alone.”
Algorithms are ingrained in our daily lives, from the apps running on smartphones to how retailers gather consumer data. Algorithms are a massive untapped market, the article says; one algorithm can be adapted and implemented across different fields. The article ends with a socially conscious message about using algorithms for good, not evil. The sentiment feels a bit forced here, but it does spur some thoughts about how algorithms could be used to study issues related to global epidemics, war, disease, food shortages, and the environment.
Whitney Grace, January 15, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
IBM and Yahoo Hard at Work on Real-Time Data Handling
January 7, 2016
The article titled What You Missed in Big Data: Real-time Intelligence on SiliconAngle speaks to the difficulty corporations face in handling ever-increasing volumes of real-time data. Recently, IBM rolled out supplementary stream-processing services, including a machine-learning engine that comes equipped with algorithm-building capabilities. The algorithms help select relevant information from the numerous connected devices of a single business. The article explains,
“An electronics manufacturer, for instance, could use the service to immediately detect when a sensor embedded in an expensive piece of equipment signals a malfunction and automatically alert the nearest technician. IBM is touting the functionality as a way to cut through the massive volume of machine-generated signals produced every second in such environments, which can overburden not only analysts but also the technology infrastructure that supports their work.”
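IBM has not published this service’s internals, but the pattern in the example (watch a stream of readings, apply a malfunction rule, route an alert) is easy to sketch. The reading format, temperature threshold, and alert routine below are invented for illustration and are not IBM’s API.

```python
# Minimal sketch of the stream-filtering pattern described above. The field
# names, threshold, and alert routine are hypothetical.
def readings():
    """Stand-in for a live sensor stream."""
    yield {"sensor_id": "press-04", "temperature_c": 71.2, "site": "plant-2"}
    yield {"sensor_id": "press-04", "temperature_c": 96.8, "site": "plant-2"}

def alert_nearest_technician(reading):
    print(f"ALERT {reading['site']}: {reading['sensor_id']} at {reading['temperature_c']} C")

TEMP_LIMIT_C = 90.0   # hypothetical malfunction rule

for reading in readings():
    if reading["temperature_c"] > TEMP_LIMIT_C:
        alert_nearest_technician(reading)
```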
Yahoo has been working on just that issue, and recently open-sourced its engineers’ answer. In a demonstration for the press, the technology proved able to power through 100 million values in under three seconds, a job that would typically require two and a half minutes. The target of this sort of technology is measuring extreme quantities like visitor statistics. Accuracy takes a back seat to speed, since the counts are estimated, but at such speeds the trade-off is worth it.
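The article does not spell out the technique, but the speed-for-accuracy trade it describes is the hallmark of sketching algorithms such as HyperLogLog, which estimate the number of distinct values in a stream using a small, fixed amount of memory. The toy implementation below is illustrative only; it is not the code Yahoo released.

```python
# Toy HyperLogLog-style distinct-count sketch: fixed memory, approximate answer.
import hashlib
import math

class HyperLogLog:
    def __init__(self, p=10):
        self.p = p                       # 2**p registers
        self.m = 1 << p
        self.registers = [0] * self.m
        self.alpha = 0.7213 / (1 + 1.079 / self.m)

    def _hash(self, value):
        digest = hashlib.sha1(str(value).encode()).digest()
        return int.from_bytes(digest[:8], "big")        # 64-bit hash

    def add(self, value):
        h = self._hash(value)
        idx = h >> (64 - self.p)                        # first p bits pick a register
        rest = (h << self.p) & ((1 << 64) - 1)          # remaining bits
        rank = 1                                        # position of leftmost 1-bit
        while rank <= 64 - self.p and ((rest >> 63) & 1) == 0:
            rest = (rest << 1) & ((1 << 64) - 1)
            rank += 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        harmonic = sum(2.0 ** -r for r in self.registers)
        est = self.alpha * self.m * self.m / harmonic
        zeros = self.registers.count(0)
        if est <= 2.5 * self.m and zeros:               # small-range correction
            est = self.m * math.log(self.m / zeros)
        return int(est)

hll = HyperLogLog()
for i in range(100_000):
    hll.add(f"visitor-{i}")
print("true uniques: 100000, estimated:", hll.estimate())   # within a few percent
```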
Chelsea Kerwin, January 7, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Google Search and Cultural Representation
January 6, 2016
Google Search has worked its way into our culture as an indispensable, and unquestioned, tool of modern life. However, the algorithms behind the platform have become more sophisticated, allowing Google to tinker more and more with search results. Since so many of us regularly use the search engine to interact with the outside world, Google’s choices (and ours) affect the world’s perception of itself. Researcher Safiya Umoja Noble details some of the adverse effects of this great power in her paper, “Google Search: Hyper-Visibility as a Means of Rendering Black Women and Girls Invisible,” posted at the University of Rochester’s InVisible Culture journal. Not surprisingly, commerce features prominently in the story. Noble writes:
“Google’s algorithmic practices of biasing information toward the interests of the powerful elites in the United States,14 while at the same time presenting its results as generated from objective factors, has resulted in a provision of information that perpetuates the characterizations of women and girls through misogynist and pornified websites. Stated another way, it can be argued that Google functions in the interests of its most influential (i.e. moneyed) advertisers or through an intersection of popular and commercial interests. Yet Google’s users think of it as a public resource, generally free from commercial interest15—this fact likely bolstered by Google’s own posturing as a company for whom the informal mantra, ‘Don’t be evil,’ has functioned as its motivational core. Further complicating the ability to contextualize Google’s results is the power of its social hegemony.16 At the heart of the public’s general understanding and trust in commercial search engines like Google, is a belief in the neutrality of technology … which only obscures our ability to understand the potency of misrepresentation that further marginalizes and renders the interests of Black women, coded as girls, invisible.”
Noble goes on to note ways we, the users, codify our existing biases through our very interaction with Google Search. To say the paper treats these topics in depth is an understatement. Noble provides enough background on the study of culture’s treatment of Black women and girls to get any non-social-scientist up to speed. Then, she describes the extension of that treatment onto the Web, and how certain commercial enterprises now depend on those damaging representations. Finally, the paper calls for a critical approach to search to address these, and similar, issues. It is an important, and informative, paper; we suggest interested readers give it a gander.
Cynthia Murrell, January 6, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Marketing Analytics Holds Many Surprises
December 29, 2015
What I find interesting is how data analysts, software developers, and other big data pushers are always saying things like “hidden insights await in data” or “your business will turn around with analytics.” These people make it seem like a revelation, when it is really the only logical outcome of employing new data analytics. Marketing Land continues with this idea in the article, “Intentional Serendipity: How Marketing Analytics Trigger Curiosity Algorithms And Surprise Discoveries.”
Serendipitous events happen at random and cannot be predicted, but the article proclaims that, with the greater amount of data now available to marketers, serendipitous outcomes can be optimized. Data reveals interesting trends, including surprises that make sense but were never considered before the data brought them to our attention.
“Finding these kinds of data surprises requires a lot of sophisticated natural language processing and complex data science. And that data science becomes most useful when the patterns and possibilities they reveal incorporate the thinking of human beings, who contribute the two most important algorithms in the entire marketing analytics framework — the curiosity algorithm and the intuition algorithm.”
The curiosity algorithm is the simple process of triggering a person’s curious reflex, so the person can discern which patterns lead to a meaningful discovery. The intuition algorithm is basically trusting your gut and having the data to back up your faith. Together, these make up explanatory analytics, which helps people change outcomes based on data.
The article follows up with a step-by-step plan for organizing your approach to explanatory analytics, which amounts to a basic business plan but is helpful for getting the process rolling. In short, read your data and see if something new pops up.
Whitney Grace, December 29, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Another Good Reason for Diversity in Tech
December 29, 2015
Just who decides what we see when we search? If we’re using Google, it’s a group of Google employees, of course. The Independent reports, “Google’s Search Results Aren’t as Unbiased as You Think—and a Lack of Diversity Could Be the Cause.” Writer Doug Bolton points to a TEDx talk by Swedish journalist Andreas Ekström, in which Ekström describes times Google has, and has not, counteracted campaigns to deliberately bump certain content. For example, the company did act to decouple racist imagery from searches for “Michelle Obama,” but did nothing to counter the association between a certain Norwegian murderer and dog poop. Bolton writes:
“Although different in motivation, the two campaigns worked in exactly the same way – but in the second, Google didn’t step in, and the inaccurate Breivik images stayed at the top of the search results for much longer. Few would argue that Google was wrong to end the Obama campaign or let the Breivik one run its course, but the two incidents shed light on the fact that behind such a large and faceless multi-billion dollar tech company as Google, there’s people deciding what we see when we search. And in a time when Google has such a poor record for gender and ethnic diversity and other companies struggle to address this imbalance (as IBM did when they attempted to get women into tech by encouraging them to ‘Hack a Hairdryer’), this fact becomes more pressing.”
The article notes that only 18 percent of Google’s tech staff worldwide are women, and that the staff is just two percent Hispanic and one percent black. Ekström’s talk has many asking what unperceived biases lurk in Google’s algorithms, and some are calling anew for the company to expand its hiring diversity. Naturally, though, any tech company can only do so much until more girls and minorities are encouraged to explore the sciences.
Cynthia Murrell, December 29, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Algorithmic Bias and the Unintentional Discrimination in the Results
October 21, 2015
The article titled When Big Data Becomes Bad Data on Tech In America discusses the legal ramifications for companies of relying on algorithms. The “disparate impact” theory has long been used in the courtroom to ensure that discriminatory policies are struck down, whether or not they were created with the intention to discriminate. Algorithmic bias occurs all the time, and under the spirit of the law it is discrimination, even if unintentional. The article states,
“It’s troubling enough when Flickr’s auto-tagging of online photos label pictures of black men as “animal” or “ape,” or when researchers determine that Google search results for black-sounding names are more likely to be accompanied by ads about criminal activity than search results for white-sounding names. But what about when big data is used to determine a person’s credit score, ability to get hired, or even the length of a prison sentence?”
The article also reminds us that data can often be a reflection of “historical or institutional discrimination.” The only thing that matters is whether the results are biased. This is where the question of human bias becomes irrelevant. There are legal scholars and researchers arguing on behalf of ethical machine learning design that roots out algorithmic bias. Stronger regulations and better oversight of the algorithms themselves might be the only way to prevent time in court.
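The “disparate impact” standard is often operationalized with the four-fifths rule: compare selection rates across groups and flag any ratio below 0.8. A minimal audit along those lines, using invented decisions and group labels, might look like this sketch.

```python
# Minimal sketch of a disparate-impact (four-fifths rule) audit on a model's
# decisions. The decisions and group labels here are invented for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> per-group selection rate."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical screening outcomes produced by some model.
decisions = [("group_a", True)] * 30 + [("group_a", False)] * 70 \
          + [("group_b", True)] * 18 + [("group_b", False)] * 82

ratio = disparate_impact_ratio(decisions, protected="group_b", reference="group_a")
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("fails the four-fifths rule -> review the model")
```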
Chelsea Kerwin, October 21, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Computers Learn Discrimination from Their Programmers
September 14, 2015
One of the greatest lessons one can learn from the Broadway classic South Pacific is that children are not born racist; rather, they learn racism from their parents and other adults. Computers are supposed to be infallible, objective machines, but according to Gizmodo’s article, “Computer Programs Can Be As Biased As Humans,” they are not. In this case, computers are the “children,” and they pick up discriminatory behavior from their programmers.
As an example, the article explains how companies use job-application software to sift through prospective employees’ resumes. Algorithms search for keywords related to experience and skills, with the goal of remaining unbiased with respect to sex and ethnicity. The algorithms can also be used to sift out resumes that contain certain phrases and other information.
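As a rough sketch of what that keyword screening looks like in practice, consider the toy filter below; the skill list, resume text, and scoring threshold are invented for illustration.

```python
# Minimal sketch of keyword-based resume screening. Keywords, resumes, and
# the threshold are hypothetical.
REQUIRED_SKILLS = {"python", "sql", "etl"}

def score_resume(text):
    """Count how many required skill keywords appear in the resume text."""
    words = set(text.lower().split())
    return len(REQUIRED_SKILLS & words)

resumes = {
    "candidate_a": "Built ETL pipelines in Python and SQL for a retailer.",
    "candidate_b": "Managed a retail team and trained new staff.",
}
shortlist = [name for name, text in resumes.items() if score_resume(text) >= 2]
print(shortlist)   # ['candidate_a']
```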
“Recently, there’s been discussion of whether these selection algorithms might be learning how to be biased. Many of the programs used to screen job applications are what computer scientists call machine-learning algorithms, which are good at detecting and learning patterns of behavior. Amazon uses machine-learning algorithms to learn your shopping habits and recommend products; Netflix uses them, too.”
The machine-learning algorithms are mimicking the discriminatory habits of the humans whose data they learn from. To catch these computer-generated biases, other machine-learning algorithms are being implemented to keep the first ones in check. Another option is to reload the data in a different manner so the algorithms do not fall into old habits. From a practical standpoint this makes sense: if something does not work the first few times, change the way it is done.
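One concrete (and hypothetical) way to “reload the data in a different manner” is to rebalance the training set so under-represented groups are not drowned out before the screening model is retrained. The record format and groups below are invented for illustration.

```python
# Toy sketch of rebalancing training data by oversampling smaller groups.
import random

def rebalance_by_group(records, group_key="group", seed=0):
    """Oversample each group up to the size of the largest group."""
    random.seed(seed)
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    target = max(len(v) for v in by_group.values())
    balanced = []
    for group, recs in by_group.items():
        balanced.extend(recs)
        balanced.extend(random.choices(recs, k=target - len(recs)))
    random.shuffle(balanced)
    return balanced

resumes = [{"group": "a", "label": 1}] * 90 + [{"group": "b", "label": 1}] * 10
balanced = rebalance_by_group(resumes)
print({g: sum(r["group"] == g for r in balanced) for g in ("a", "b")})  # {'a': 90, 'b': 90}
```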
Whitney Grace, September 14, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
The AI Evolution
September 10, 2015
An article at WT Vox announces, “Google Is Working on a New Type of Algorithm Called ‘Thought Vectors’.” It sounds like a good use for a baseball cap with electrodes, a battery pack, WiFi, and a person who thinks great thoughts. In actuality, it’s a project based on the work of esteemed computer scientist Geoffrey E. Hinton, who has been exploring the idea of neural networks for decades. Hinton is now working with Google to create the sophisticated algorithm of our dreams (or nightmares, depending on one’s perspective).
Existing language processing software has come a very long way; Google Translate, for example, searches dictionaries and previously translated documents to translate phrases. The app usually does a passably good job of giving one the gist of a source document, but results are far from reliably accurate (and are often grammatically comical). Thought vectors, on the other hand, will allow software to extract meanings, not just correlations, from text.
Continuing to use translation software as the example, reporter Aiden Russell writes:
“The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical ‘meaning space’ or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector….
“The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences. At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud captures the way humans use them – effectively a map of their meanings.”
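The mechanics Google is pursuing are far more sophisticated, but the geometric intuition in the quote (words as points in a “meaning space,” a sentence distilled into a single vector) can be illustrated with a toy example. The hand-picked vectors and the crude averaging trick below are illustrative assumptions, not Hinton’s or Google’s method; real systems learn these positions from data.

```python
# Toy illustration of a "meaning space": words as vectors, a sentence distilled
# into one vector, and similarity measured geometrically. Vectors are hand-picked
# so the geometry is visible; in practice they are learned.
import numpy as np

word_vectors = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.8, 0.9]),
    "man":    np.array([0.5, 0.1, 0.1]),
    "woman":  np.array([0.5, 0.1, 0.9]),
    "banana": np.array([0.0, 0.9, 0.5]),
}

def sentence_vector(sentence):
    """Crude 'thought vector': the average of the known word vectors in a sentence."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = sentence_vector("the king and the man")
v2 = sentence_vector("the queen and the woman")
v3 = sentence_vector("banana")
print("royalty sentences:", round(cosine(v1, v2), 3))   # relatively high
print("royalty vs banana:", round(cosine(v1, v3), 3))   # lower
```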
But, won’t all efficient machine learning lead to a killer-robot-ruled dystopia? Hinton bats away that claim as a distraction; he’s actually more concerned about the ways big data is already being (mis)used by intelligence agencies. The man has a point.
Cynthia Murrell, September 10, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

