Twitter Influential but a Poor Driver of News Traffic
June 20, 2016
A recent report from social analytics firm Parse.ly examined the relationship between Twitter and digital publishers. Nieman Lab shares a few details in “Twitter Has Outsized Influence, but It Doesn’t Drive Much Traffic for Most News Orgs, a New Report Says.” Parse.ly tapped into data from a couple hundred of its clients, a group that includes digital publishers like Business Insider, the Daily Beast, Slate, and Upworthy.
Naturally, news sites that make the most of Twitter do so by knowing what their audience wants and supplying it. The study found there are two main types of Twitter news posts, conversational and breaking, and each drives traffic in its own way. While conversations can engage thousands of users over a period of time, breaking news produces traffic spikes.
Neither of those findings is unexpected, but some may be surprised that Twitter feeds are not inspiring more visits to publishers’ sites. Writer Joseph Lichterman reports:
“Despite its conversational and breaking news value, Twitter remains a relatively small source of traffic for most publishers. According to Parse.ly, less than 5 percent of referrals in its network came from Twitter during January and February 2016. Twitter trails Facebook, Google, and even Yahoo as sources of traffic, the report said (though it does edge out Bing!)”
Still, publishers are unlikely to jettison their Twitter accounts anytime soon, because the platform offers a different sort of value, one that is, perhaps, more important for consumers. Lichterman quotes the report:
“Though Twitter may not be a huge overall source of traffic to news websites relative to Facebook and Google, it serves a unique place in the link economy. News really does ‘start’ on Twitter.”
And the earlier a news organization knows about a situation, the better. That is an advantage few publishers will want to relinquish.
Cynthia Murrell, June 20, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Behind the Google Search Algorithm
June 16, 2016
Trying to reveal the secrets behind Google’s search algorithm is almost harder than breaking into Fort Knox. Google keeps its roughly 200 ranking factors a secret; what we do know is that keywords no longer play the role they once did and that social media figures in some undisclosed way. Search Engine Journal’s “Google Released the Top 3 Ranking Factors” offers a little information to help SEO practitioners.
Google Search Quality Senior Strategist Andrey Lipattsev shared that the three factors are links, content, and RankBrain, in no particular order. RankBrain is an artificial intelligence system that relies on machine learning to help Google push the most relevant search results to the top of the list. SEO experts are trying to figure out how this will affect their jobs, but the article notes:
“We’ve known for a long time that content and links matter, though the importance of links has come into question in recent years. For most SEOs, this should not change anything about their day-to-day strategies. It does give us another piece of the ranking factor puzzle and provides content marketers with more ammo to defend their practice and push for growth.”
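As a purely illustrative sketch (the signal names and weights below are invented, not Google’s), blending a handful of ranking factors into a single score might look like this:

```python
# Toy ranking sketch: blend three hypothetical per-page signals.
# Weights and signal values are illustrative assumptions only.

def score(page):
    weights = {"links": 0.4, "content": 0.4, "relevance": 0.2}
    return sum(weights[k] * page[k] for k in weights)

pages = [
    {"url": "a.example", "links": 0.9, "content": 0.3, "relevance": 0.5},
    {"url": "b.example", "links": 0.4, "content": 0.8, "relevance": 0.9},
]

# Higher blended score ranks first.
ranked = sorted(pages, key=score, reverse=True)
print([p["url"] for p in ranked])
```

The point of the sketch is simply that once a learned component (RankBrain, here reduced to an opaque “relevance” number) enters the blend, nobody outside can say why one page outranked another.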
In reality, there is not much difference, except that few will be able to explain how artificial intelligence ranks particular sites. Nifty play, Google.
Whitney Grace, June 15, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
The Time Google Flagged Itself for Potentially Malicious Content
June 13, 2016
Did you know Google recently labeled itself as ‘partially dangerous’? Fortune released a story, Google Has Stopped Rating ‘Google.com’ as ‘Partially Dangerous’, which covers what happened. Google has a Safe Browsing tool that identifies potentially harmful websites by scanning URLs. Users noticed that Google itself was flagged for a short time. Was there a rational explanation? The article offers a technology-based reason for the rating:
“Fortune noted that Google’s Safe Browsing tool had stopped grading its flagship site as a hazard on Wednesday morning. A Google spokesperson told Fortune that the alert abated late last night, and that the Safe Browsing service is always on the hunt for security issues that might need fixing. The issue is likely the result of some Google web properties hosting risky user-generated content. The safety details of the warning specifically called out Google Groups, a service that provides online discussion boards and forums. If a user posted something harmful there, Google’s tool would have factored that in when assessing the security of the google.com domain as a whole, a person familiar with the matter told Fortune.”
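The mechanism the article describes, where risky user-generated content on one property taints the whole domain, can be sketched roughly as follows (this is an illustrative guess at the logic, not Google’s actual Safe Browsing implementation):

```python
# Illustrative sketch (not Google's actual logic): a domain inherits
# risk from flagged user-generated content on any of its properties.

def domain_rating(properties):
    flagged = any(p["flagged"] for p in properties)
    return "partially dangerous" if flagged else "safe"

# Hypothetical snapshot of google.com properties at the time.
google_com = [
    {"name": "search", "flagged": False},
    {"name": "groups", "flagged": True},  # hypothetical harmful post
]
print(domain_rating(google_com))  # "partially dangerous"
```

Under this kind of any-property aggregation, a single bad Google Groups post is enough to demote the rating of the entire domain.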
We bet some are wondering whether this is a reflection of Google management or of the wonkiness of Google’s artificial intelligence. Considering hacked accounts alone, it seems likely that malicious content is posted in Google Groups fairly regularly, so the warning probably signals more than the “partially dangerous” message spells out. The only question remaining: a flag for what?
Megan Feil, June 13, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
The Scottish Philosopher in Silicon Valley
June 6, 2016
When Alistair Duff, professor of information society and policy at Scotland’s Edinburgh Napier University, checked out Silicon Valley, he identified several disturbing aspects of the prevailing tech scene. The Atlantic’s Kaveh Waddell interviews the professor in “The Information Revolution’s Dark Turn.”
The article reminds us that, just after World War II, the idealistic “information revolution” produced many valuable tools and improved much about our lives. Now, however, the Silicon-Valley-centered tech scene has turned corporate, data-hungry, and self-serving. Or, as Duff puts it, we are now seeing “the domination of information technology over human beings, and the subordination of people to a technological imperative.”
Waddell and Duff discuss the professor’s Normative Theory of the Information Society; the potential for information technology to improve society; privacy tradeoffs; treatment of workers; workplace diversity; and his preference that tech companies (like Apple) more readily defer to government agencies (like the FBI). Regarding that last point, it is worth noting Duff’s stance against the “anti-statism” he believes permeates Silicon Valley, and his estimation that “justice” outranks “freedom” as a social consideration.
Waddell asks Duff what a tech hub should look like, if Silicon Valley is such a poor example. The professor responds:
“It would look more like Scandinavia than Silicon Valley. I’m not saying that we shouldn’t develop the tech industry—we can learn a massive amount from Silicon Valley….
“But what we shouldn’t do is incorporate the abuse of the boundary between work and home, we should treat people with respect, we should have integrated workforces. A study came out that only 2 percent of Google’s, Yahoo’s, and a couple of other top companies’ workforces were black. Twelve percent of the U.S. population is black, so that is not good, is it? I’m not saying they discriminate overtly against black people—I very much doubt that—but they’re not doing enough to change things.
“We need the best of Silicon Valley and the best of European social democracy, combined into a new type of tech cluster.
“There’s a book by Manuel Castells and Pekka Himanen called The Information Society and the Welfare State: The Finnish Model, which argues that you can have a different type of information society from the libertarian, winner-takes-all model pioneered in Silicon Valley. You can have a more human, a more proportioned, a tamer information society like we’ve seen in Finland.”
Duff goes on to say that the state should absolutely be involved in building the information society, a concept that goes over much better in Europe than in the U.S. He points to Japan as a country which has built a successful information society with guidance from the state. See the interview for more of Professor Duff’s observations.
Cynthia Murrell, June 6, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Google Has Much at Stake in Intel Tax Case
June 3, 2016
In the exciting department of tax activities, 9to5Google reports, “Google Could Effectively Recoup All the Tax it Paid Last Year if Intel Wins Test Case.” Why is Google so invested in a dispute between Intel and the IRS? Writer Ben Lovejoy explains:
“In essence, the case hinges on share compensation packages paid by overseas subsidiaries. The IRS says that the cost of these should be offset against the expenses of the overseas companies; Intel says no, the cost should be deducted by the U.S. parent company – reducing its tax liabilities in its home country. The IRS introduced the rule in 2003. Companies like Google have abided by the rule but reserved the right to reallocate costs if a court ruling went against the IRS, giving them a huge potential windfall.”
This windfall could amount to $3.5 billion for Alphabet, now technically Google’s “parent” company (but really just a reorganized Google). According to the Wall Street Journal, at least 20 tech companies, including Microsoft and eBay, are watching the case very closely.
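The arithmetic behind the dispute is simple enough to sketch with invented numbers (these figures and the flat 35% rate are hypothetical, not Google’s or Intel’s actual accounts):

```python
# Hypothetical illustration of the cost-allocation dispute.
# All figures and the 35% rate are invented for this sketch.

def us_tax(us_income, share_comp_cost, deducted_in_us, rate=0.35):
    # Under Intel's reading, the U.S. parent deducts the overseas
    # share-compensation cost, shrinking its taxable income.
    taxable = us_income - (share_comp_cost if deducted_in_us else 0)
    return taxable * rate

income = 10_000_000_000  # hypothetical U.S. taxable income
comp = 1_000_000_000     # hypothetical overseas share-compensation cost

irs_view = us_tax(income, comp, deducted_in_us=False)
intel_view = us_tax(income, comp, deducted_in_us=True)
print(irs_view - intel_view)  # tax at stake under this scenario
```

Every dollar of cost moved back to the U.S. parent cuts its tax bill by the marginal rate, which is why a court ruling against the IRS would translate directly into billions of reclaimed dollars.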
Google is known for paying as little tax as it thinks it can get away with, a practice very unpopular with some. We’re reminded:
“Google has recently come under fire for its tax arrangements in Europe, a $185M back-tax deal in the UK being described as ‘disproportionately small’ and possibly illegal. France is currently seeking to claim $1.76B from the company in back taxes.”
So, how much will the world’s tax collectors be able to carve out of the Google revenue pie? I suspect it will vary from year to year, and will keep courts and lawyers around the world very busy.
Cynthia Murrell, June 3, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Speculation About Beyond Search
June 2, 2016
If you are curious to learn more about the purveyor of the Beyond Search blog, check out Singularity’s interview, “Stephen E Arnold On Search Engine And Intelligence Gathering.” By way of background, Arnold is a specialist in content processing, indexing, and online search, as well as the author of seven books and monographs. His past employers include Booz, Allen & Hamilton (Edward Snowden was a contractor for this company), the Courier Journal & Louisville Times, and Halliburton Nuclear. He worked on the US government’s Threat Open Source Intelligence Service and developed the cost analysis, technical infrastructure, and security for FirstGov.gov.
Singularity’s interview covers a variety of topics and, of course, includes Arnold’s direct sense of humor:
“During our 90 min discussion with Stephen E. Arnold we cover a variety of interesting topics such as: why he calls himself lucky; how he got interested in computers in general and search engines in particular; his path from college to Halliburton Nuclear and Booze, Allen & Hamilton; content and web indexing; his who’s who list of clients; Beyond Search and the core of intelligence; his Google Trilogy – The Google Legacy (2005), Google Version 2.0 (2007), and Google: The Digital Gutenberg (2009); CyberOSINT and the Dark Web Notebook; the less-known but major players in search such as Recorded Future and Palantir; Big Brother and surveillance; personal ethics and Edward Snowden.”
When you listen to experts in certain fields, you always get a different perspective than the one popular news outlets give. Arnold offers a unique take on search as well as on the future of Internet security, especially the future of the Dark Web.
Whitney Grace, June 2, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Everyone Rejoice! We Now Have Emoji Search
June 1, 2016
It was only a matter of time after image search became a viable and useful tool that someone would develop GIF search. Someone then thought it would be a keen idea to design an emoji search, and now, ladies and gentlemen, we have it! Tech Viral reports that “Now You Can Search Images On Google Using Emoji.”
Using the Google search engine is a very easy process: type in a few keywords or a question, click search, and then delve into the results. The Internet, though, is a place where people develop content and apps just for “the heck of it.” Google decided to design an emoji search option, probably for that very reason. Users can type in an emoji, instead of words, to conduct an Internet search.
The new emoji search is based on the same recognition skills as Google image search, but the biggest question is how many emojis Google will support with the new function:
“Google has taken searching algorithm to the next level, as it is now allowing users to search using any emoji icon. Google stated ‘An emoji is worth a thousand words’. This feature may be highly appreciated by lazy Google users, as they now they don’t need to type a complete line instead you just need to use an emoji for searching images.”
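One plausible way such a feature could work, sketched below purely as a guess (the mapping table and function are hypothetical, not Google’s implementation), is to translate each emoji into descriptive keywords and then run an ordinary keyword image search:

```python
# Hypothetical emoji search: map each emoji to keywords, then fall
# back to a normal keyword query. Mapping is invented for illustration.

EMOJI_KEYWORDS = {
    "\U0001F436": "dog puppy",    # dog-face emoji
    "\U0001F355": "pizza slice",  # pizza emoji
}

def emoji_query(text):
    # Replace known emoji with keywords; pass other characters through.
    terms = [EMOJI_KEYWORDS.get(ch, ch) for ch in text]
    return " ".join(terms).strip()

print(emoji_query("\U0001F355"))  # prints "pizza slice"
```

A lookup like this also hints at the ambiguity problem discussed below: one emoji fans out into several keywords, each open to interpretation.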
It really sounds like search for lazy people, and do not be surprised to get a variety of results that bear no relation to the emoji or your intended information need. An emoji might be worth a thousand words, but that is a lot of words with various interpretations.
Whitney Grace, June 1, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
The Google Knowledge Vault Claimed to Be the Future
May 31, 2016
Back in 2014, I heard rumors that the Google Knowledge Vault was supposed to be the next wave of search. How many times do you hear a company or a product claim to be the next big thing? After rolling my eyes, I decided to research what became of the Knowledge Vault and found an old article from Search Engine Land: “Google ‘Knowledge Vault’ To Power Future Of Search.” The Knowledge Graph was used to supply additional information to search results: what we now recognize as the summarized information at the top of Google search results. The Knowledge Vault was supposedly its successor and would rely less on third-party information providers.
“Sensationally characterized as ‘the largest store of knowledge in human history,’ Knowledge Vault is being assembled from content across the Internet without human editorial involvement. ‘Knowledge Vault autonomously gathers and merges information from across the web into a single base of facts about the world, and the people and objects in it,’ says New Scientist. Google has reportedly assembled 1.6 billion “facts” and scored them according to confidence in their accuracy. Roughly 16 percent of the information in the database qualifies as ‘confident facts.’”
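The “confident facts” idea, a store of scored assertions with only the highest-confidence ones trusted, can be sketched in a few lines (the records and the 0.9 cutoff are invented; the description above gives only the 16 percent figure):

```python
# Sketch of confidence-scored fact filtering, in the spirit of the
# Knowledge Vault description. Facts and threshold are invented.

CONFIDENT = 0.9  # hypothetical cutoff for a "confident fact"

facts = [
    # (subject, predicate, object, confidence)
    ("Paris", "capital_of", "France", 0.99),
    ("Pluto", "is_a", "planet", 0.35),
    ("Ada Lovelace", "born_in", "1815", 0.93),
]

confident = [f for f in facts if f[3] >= CONFIDENT]
print(len(confident))  # 2 of the 3 facts pass the cutoff
```

Scale that toy list up to 1.6 billion machine-extracted assertions and the reported ratio follows: most of what the crawler merges never clears the confidence bar.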
Knowledge Vault was also supposed to give Google an edge in the mobile search market and even serve as the basis for artificial intelligence applications. It was a lot of hoopla, but a bit more research, via Wikipedia, revealed that Knowledge Vault was nothing more than a research paper.
Since 2014, Google, Apple, Facebook, and other tech companies have concentrated their efforts and resources on developing artificial intelligence and integrating it within their products. While Knowledge Vault was a red herring, the predictions about artificial intelligence were correct.
Whitney Grace, May 31, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Paid Posts and PageRank
May 27, 2016
Google users rely on the search engine’s quality-assurance algorithm, PageRank, to serve up the links most relevant to their query. Blogger and Google engineer Matt Cutts declares, reasonably enough, that “Paid Posts Should Not Affect Search Engines.” His employer, on the other hand, has long disagreed with this stance. Cutts concedes:
“We do take the subject of paid posts seriously and take action on them. In fact, we recently finished going through hundreds of ‘empty review’ reports — thank you for that feedback! That means that now is a great time to send us reports of link buyers or sellers that violate our guidelines. We use that information to improve our algorithms, but we also look through that feedback manually to find and follow leads.”
Well, that’s nice to know. However, Cutts emphasizes, no matter how rigorous the quality assurance, there is good reason users may not want paid posts to make it through PageRank at all. He explains:
“If you are searching for information about brain cancer or radiosurgery, you probably don’t want a company buying links in an attempt to show up higher in search engines. Other paid posts might not be as starkly life-or-death, but they can still pollute the ecology of the web. Marshall Kirkpatrick makes a similar point over at ReadWriteWeb. His argument is as simple as it is short: ‘Blogging is a beautiful thing. The prospect of this young media being overrun with “pay for play” pseudo-shilling is not an attractive one to us.’ I really can’t think of a better way to say it, so I’ll stop there.”
Cynthia Murrell, May 27, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Open Source Software Needs a Micro-Payment Program
May 27, 2016
Open source software is an excellent idea, because it allows programmers across the globe to share and contribute to the same project. It also creates a think-tank-like environment that can (arguably) be applied to any tech field. There is a downside to open source and Creative Commons software, though: the model is not sustainable. Open Source Everything For The 21st Century discusses the issue in the post “Robert Steele: Should Open Source Code Have A PayPal Address & AON Sliding Scale Rate Sheet?”
The post explains that open source delivers an unclear message about how code is generated: it comes from the greater whole rather than from a few individuals. It also is not sustainable, because people need funds to survive and to maintain the software. Fair Source is one reasonable attempt at a solution: users are charged if the software is used at a company with fifteen or more employees, but it, too, is not sustainable.
Micro-payments, small payments of a few cents, might be the ultimate solution. Robert Steele wrote that:
“I see the need for bits of code to have embedded within them both a PayPal-like address able to handle micro-payments (fractions of a cent), and a CISCO-like Application Oriented Network (AON) rules and rate sheet that can be updated globally with financial-level latency (which is to say, instantly) and full transparency. Some standards should be set for payment scales, e.g. 10 employees, 100, 1000 and up; such that a package of code with X number of coders will automatically begin to generate PayPal payments to the individual coders when the package hits N use cases within Z organizational or network structures.”
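Steele’s rate-sheet idea, a per-use fee tiered by organization size and split among a package’s coders, can be sketched in a few lines (the tiers, fees, and even split below are invented for illustration; his post proposes no specific numbers):

```python
# Sketch of a tiered rate sheet for open source micro-payments.
# Tiers, fees, and the even split are invented assumptions.

RATE_SHEET = [  # (minimum employees, fee per use, in cents)
    (1000, 1.0),
    (100, 0.1),
    (10, 0.01),
    (0, 0.0),   # tiny organizations pay nothing
]

def payout_per_coder(employees, n_coders):
    # Pick the first tier the organization qualifies for,
    # then split the fee evenly among the package's coders.
    fee = next(cents for floor, cents in RATE_SHEET if employees >= floor)
    return fee / n_coders

print(payout_per_coder(employees=500, n_coders=4))  # 0.025 cents per use
```

Fractions of a cent per use only add up if the accounting is automatic and global, which is exactly why Steele pairs the rate sheet with an embedded payment address.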
Micro-payments are not a bad idea, and they have occasionally been put into practice, though never widely. No one has really pioneered an effective system for them.
Steele also advocates the position that “…Internet access and individual access to code is a human right, devising new rules for a sharing economy in which code is a cost of doing business at a fractional level in comparison to legacy proprietary code — between 1% and 10% of what is paid now.”
It is the ideal version of the Internet, where people are able to make money from their content and creations, users’ privacy is maintained, and ethics are respected. The current trouble with YouTube channels and copyright comes to mind, as does stolen information sold on the Dark Web and the desire to eradicate online bullying.
Whitney Grace, May 27, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

