Dark Web Purchases Potentially More Challenging Than Media Portrays

August 8, 2016

German TV journalists recently discovered that acquiring weapons on the Dark Web may be more challenging than media coverage suggests. Vice's Motherboard published an article on the episode called "TV Journalists Try Buying AK-47 on Dark Web, Fail." Producers for German channel ARD, working on a show titled "Fear of terror—how vulnerable is Germany," lost about $800 in bitcoin in an attempted transaction through a middleman. We learned,

“It’s not totally clear if this was because the seller wasn’t legitimate, or whether the package had been intercepted. Regardless, this shouldn’t be much of a surprise: The dark web gun trade is rife with scammers. One con-artist previously told Motherboard he would ask legal sellers to send him photos of weapons next to a piece of paper with his username. From here, he would “just send a bag of sugar,” when an order came in. And undercover law enforcement agents also sell weapons in order to identify potential customers.”

Motherboard is careful to reference cases of successful Dark Web gun sales. Even so, readers who have seen so much media coverage claiming guns are easily purchased on the Dark Web are unlikely to assume otherwise. For the average reader, does knowledge of the Dark Web come from the media or from personal experience? We see many articles reporting the number of Dark Web sites that exist, perhaps because the number of Dark Web users cannot be accurately counted. While that figure may not be retrievable, the number of Tor downloads may be.

 

Megan Feil, August 8, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

There is a Louisville, Kentucky Hidden/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/

 

The Web, the Deep Web, and the Dark Web

July 18, 2016

As if understanding how the Internet works and avoiding identity theft were not challenging enough, try carving through the Internet's various layers, such as the Deep Web and the Dark Web. It gets confusing, but "Big Data And The Deep, Dark Web" from Data Informed clears up some of the clouds that darken Internet browsing.

The differences between the three are not that difficult to understand once they are spelled out. The Web is the part of the Internet we use daily to check email, read the news, visit social media sites, and so on. The Deep Web is the sector of the Internet not readily picked up by search engines; it includes password-protected sites, very specific content such as a flight booking with a particular airline on a certain date, and the Tor servers that allow users to browse anonymously. The Dark Web consists of Web pages that are not indexed by search engines and that sell illegal goods and services.

“We do not know everything about the Dark Web, much less the extent of its reach.

“What we do know is that the deep web has between 400 and 550 times more public information than the surface web. More than 200,000 deep web sites currently exist. Together, the 60 largest deep web sites contain around 750 terabytes of data, surpassing the size of the entire surface web by 40 times. Compared with the few billion individual documents on the surface web, 550 billion individual documents can be found on the deep web. A total of 95 percent of the deep web is publically accessible, meaning no fees or subscriptions.”

The biggest seller on the Dark Web is child pornography. Most transactions take place in Bitcoin, with an estimated $56,000 in daily sales. Criminals are not the only ones who use the Dark Web; whistle-blowers, journalists, and security organizations use it as well. Big data has barely scratched the surface when it comes to mining it, but those interested can find information and do their own mining with a little digging.

 

Whitney Grace, July 18, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

There is a Louisville, Kentucky Hidden Web/Dark Web meet up on July 26, 2016.
Information is at this link: http://bit.ly/29tVKpx.

DuckDuckGo Sees Apparent Exponential Growth

July 1, 2016

The Tor-enabled search engine DuckDuckGo has received attention recently for being a search engine that does not track users. We found its activity report, which shows a one-year average of direct queries per day. DuckDuckGo launched in 2008 and offers an array of options to prevent "search leakage," which its website defines as the sharing of personal information, such as the search terms queried. Explaining a few of DuckDuckGo's more secure search options, the website states:

“Another way to prevent search leakage is by using something called a POST request, which has the effect of not showing your search in your browser, and, as a consequence, does not send it to other sites. You can turn on POST requests on our settings page, but it has its own issues. POST requests usually break browser back buttons, and they make it impossible for you to easily share your search by copying and pasting it out of your Web browser’s address bar.

Finally, if you want to prevent sites from knowing you visited them at all, you can use a proxy like Tor. DuckDuckGo actually operates a Tor exit enclave, which means you can get end to end anonymous and encrypted searching using Tor & DDG together.”
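
For readers wondering what the POST trick actually changes, here is a minimal sketch of the difference, assuming a generic search endpoint. The URL and parameter name are illustrative, not DuckDuckGo's actual interface: a GET search puts the query in the address bar, where it can leak to other sites through the Referer header, while a POST keeps it out of the URL entirely.

```typescript
// Minimal sketch of "search leakage" and the POST workaround described above.
// The endpoint and parameter name are illustrative, not DuckDuckGo's actual API.

// A GET search puts the query in the URL, so it lands in browser history,
// server logs, and the Referer header sent to sites clicked from the results.
function searchViaGet(query: string): void {
  window.location.href = `https://example-search.com/?q=${encodeURIComponent(query)}`;
}

// A POST search keeps the query out of the address bar, so there is no
// query-bearing URL to leak -- at the cost of breaking the back button and
// making the results harder to share, exactly the trade-offs the quote notes.
async function searchViaPost(query: string): Promise<string> {
  const response = await fetch("https://example-search.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ q: query }).toString(),
  });
  return response.text(); // HTML of the results page
}
```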

Cybersecurity and privacy have been hot topics since Edward Snowden made headlines in 2013, which is notably when DuckDuckGo's exponential growth begins to take shape. Recognition of Tor also became more mainstream around that time, when the Silk Road shutdown placed the Dark Web in the news. It appears that starting a search engine focused on anonymity in 2008 was not such a bad idea.

 

Megan Feil, July 1, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

CloudFlare Claims Most Activity from Tor Is Malicious

June 28, 2016

Different sources suggest varying levels of malicious activity on Tor. Tech Insider shared an article responding to recent claims about Tor made by CloudFlare, presenting both CloudFlare's perspective and that of the Tor Project. CloudFlare reports that most requests from Tor, 94 percent, are "malicious," and the Tor Project has responded by requesting evidence to justify the claim. Those involved in the Tor Project suspect the 94 percent figure stems from CloudFlare attaching the label "malicious" to any IP address that has ever sent spam. The article continues,

“We’re interested in hearing CloudFlare’s explanation of how they arrived at the 94% figure and why they choose to block so much legitimate Tor traffic. While we wait to hear from CloudFlare, here’s what we know: 1) CloudFlare uses an IP reputation system to assign scores to IP addresses that generate malicious traffic. In their blog post, they mentioned obtaining data from Project Honey Pot, in addition to their own systems. Project Honey Pot has an IP reputation system that causes IP addresses to be labeled as “malicious” if they ever send spam to a select set of diagnostic machines that are not normally in use. CloudFlare has not described the nature of the IP reputation systems they use in any detail.”
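
To make the Tor Project's objection concrete, here is a rough sketch of the kind of binary, history-based reputation logic described in the quote. The class and method names are hypothetical; this is an illustration, not CloudFlare's or Project Honey Pot's actual code.

```typescript
// Hypothetical sketch: an IP address is labeled "malicious" forever once it has
// sent spam to a honeypot, which is the behavior the Tor Project attributes to
// the reputation data CloudFlare relies on.

type Verdict = "malicious" | "clean";

class NaiveIpReputation {
  private flagged = new Set<string>();

  // Called whenever a honeypot address receives spam from `ip`.
  recordSpam(ip: string): void {
    this.flagged.add(ip);
  }

  // A single past incident condemns the address permanently.
  classify(ip: string): Verdict {
    return this.flagged.has(ip) ? "malicious" : "clean";
  }
}

// A Tor exit node relays traffic for many unrelated users, so one abusive user
// is enough to mark every later, legitimate visitor from that exit "malicious".
const reputation = new NaiveIpReputation();
reputation.recordSpam("198.51.100.7");            // one spammer behind the exit
console.log(reputation.classify("198.51.100.7")); // "malicious" for everyone after
```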

This article raises some interesting points, but it also alludes to a more universal problem: making sense of any information published online. Building an epistemology of technology, as in many areas of study, is like chasing a moving target, and knowledge about technology is further complicated by the relationship between technology and information dissemination. The important questions are: what does one know about Tor, and how does one know it?

 

Megan Feil, June 28, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Libraries Will Save the Internet

June 10, 2016

Libraries are more than a place to check out free DVDs and books and use a computer. Most people do not believe this, and if you try to tell them otherwise, their eyes glaze over and they start chanting "obsolete" under their breath. BoingBoing, however, makes the case in "How Libraries Can Save The Internet Of Things From The Web's Centralized Fate." Over the past twenty years, the Internet has become more centralized, with content increasingly reliant on proprietary sites such as social media, Amazon, and Google.

Back in the old days, the greatest fear was that the government would take control of the Internet. The opposite has happened, with corporations consolidating it instead. Decentralization efforts do exist, mostly aimed at keeping the Internet anonymous, and they are usually tied to the Dark Web. The next big thing is the Internet of Things, which will be mostly decentralized and which can be protected if the groundwork is laid now. Libraries can protect decentralized systems, because

“Libraries can support a decentralized system with both computing power and lobbying muscle. The fights libraries have pursued for a free, fair and open Internet infrastructure show that we’re players in the political arena, which is every bit as important as servers and bandwidth.  What would services built with library ethics and values look like? They’d look like libraries: Universal access to knowledge. Anonymity of information inquiry. A focus on literacy and on quality of information. A strong service commitment to ensure that they are available at every level of power and privilege.”

Libraries can teach people how to access services like Tor and can disseminate that information more widely than many other institutions in the community. While this is possible, in many ways it is not realistic. Many decentralized services are associated with the Dark Web, which is held in a negative light. Libraries also have limited budgets, and a program like this needs funding the library board might not want to invest. Then there is the problem of finding someone to teach these services; many libraries are staffed by librarians with limited technical knowledge, although they can learn.

It is possible, it would just be hard.

 

Whitney Grace, June 10, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

More Data to Fuel Debate About Malice on Tor

June 9, 2016

The debate about malicious content on Tor continues. Ars Technica published an article continuing the conversation about Tor and the claims made by a web security company that says 94 percent of the requests coming through the network are at least loosely malicious. The article CloudFlare: 94 percent of the Tor traffic we see is “per se malicious” reveals how CloudFlare is currently handling Tor traffic. The article states,

“Starting last month, CloudFlare began treating Tor users as their own “country” and now gives its customers four options of how to handle traffic coming from Tor. They can whitelist them, test Tor users using CAPTCHA or a JavaScript challenge, or blacklist Tor traffic. The blacklist option is only available for enterprise customers. As more websites react to the massive amount of harmful Web traffic coming through Tor, the challenge of balancing security with the needs of legitimate anonymous users will grow. The same network being used so effectively by those seeking to avoid censorship or repression has become a favorite of fraudsters and spammers.”
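
The four options boil down to a per-request policy decision for traffic arriving from a Tor exit. Here is a hypothetical sketch of that decision logic as a site operator might write it; the exit-node lookup, policy names, and return values are assumptions for illustration, not CloudFlare's configuration API.

```typescript
// Hypothetical sketch of the per-"country" handling choice described above.

type TorPolicy = "whitelist" | "captcha" | "js_challenge" | "blacklist";

interface IncomingRequest {
  ip: string;
}

// Assumed helper: membership test against a (here, hard-coded) list of Tor exit IPs.
const knownExitNodes = new Set<string>(["198.51.100.7", "203.0.113.42"]);
function isTorExitNode(ip: string): boolean {
  return knownExitNodes.has(ip);
}

function handleRequest(req: IncomingRequest, policy: TorPolicy): string {
  if (!isTorExitNode(req.ip)) {
    return "serve"; // ordinary traffic is unaffected by the Tor policy
  }
  const actions: Record<TorPolicy, string> = {
    whitelist: "serve",               // trust Tor users outright
    captcha: "show_captcha",          // human check before serving content
    js_challenge: "run_js_challenge", // automated browser check
    blacklist: "block",               // refuse Tor traffic (enterprise-only, per the quote)
  };
  return actions[policy];
}
```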

Even though the jury may still be out regarding the reported volume of malicious traffic, several companies appear to want action sooner rather than later. Amazon Web Services, Best Buy, and Macy's are among the sites blocking a majority of Tor exit nodes. While much remains unclear, we cannot expect organizations to delay action.

 

Megan Feil, June 9, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Websites Found to Be Blocking Tor Traffic

June 8, 2016

Discrimination or wise precaution? Perhaps both? MakeUseOf tells us, “This Is Why Tor Users Are Being Blocked by Major Websites.” A recent study (PDF) by the University of Cambridge; University of California, Berkeley; University College London; and International Computer Science Institute, Berkeley confirms that many sites are actively blocking users who approach through a known Tor exit node. Writer Philip Bates explains:

“Users are finding that they’re faced with a substandard service from some websites, CAPTCHAs and other such nuisances from others, and in further cases, are denied access completely. The researchers argue that this: ‘Degraded service [results in Tor users] effectively being relegated to the role of second-class citizens on the Internet.’ Two good examples of prejudice hosting and content delivery firms are CloudFlare and Akamai — the latter of which either blocks Tor users or, in the case of Macys.com, infinitely redirects. CloudFlare, meanwhile, presents CAPTCHA to prove the user isn’t a malicious bot. It identifies large amounts of traffic from an exit node, then assigns a score to an IP address that determines whether the server has a good or bad reputation. This means that innocent users are treated the same way as those with negative intentions, just because they happen to use the same exit node.”

The article goes on to discuss legitimate reasons users might want the privacy Tor provides, as well as reasons companies feel they must protect their Websites from anonymous users. Bates notes that there is not much one can do about such measures. He does point to Tor's own Don't Block Me project, which is working to convince sites to stop blocking people just for using Tor. It is also developing a list of best practices that concerned sites can follow instead. One site, GameFAQs, has reportedly lifted its block, and CloudFlare may be considering a similar move. Will the momentum build, or must those who protect their online privacy resign themselves to being treated with suspicion?

 

Cynthia Murrell, June 8, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

A Possible Goodbye to the Dark Web

June 7, 2016

Should the Dark Web be eradicated? An article from Mic weighs in with an editorial entitled "Shutting Down the Dark Web Is a Plainly Absurd Idea From Start to Finish." Where is this idea coming from? Apparently 71 percent of Internet users believe the Dark Web "should be shut down," according to a survey of over 24,000 people by the Canadian think tank Centre for International Governance Innovation. The Mic article takes issue with the notion that the Dark Web could be "shut down",

“The Dark Net, or Deep Web or a dozen other names, isn’t a single set of sites so much as a network of sites that you need special protocols or software in order to find. Shutting down the network would mean shutting down every site and relay. In the case of the private web browser Tor, this means simultaneously shutting down over 7,000 secret nodes worldwide. The combined governments of various countries have enough trouble keeping the Pirate Bay from operating right on the open web, never mind trying to shut down an entire network of sites with encrypted communications and hidden IP addresses hosted worldwide.”

The feasibility of shutting down the Dark Web is also complicated by the fact that multiple networks, such as Tor, Freenet, and I2P, allow Dark Web access. Of course, as the article acknowledges, many uses of the Dark Web are benign or even further human rights causes. We appreciated a similar article from Softpedia, which pointed to the negative public perception stemming from media coverage of child pornography and drug marketplace takedowns. It is hard to know what is not reported in mainstream media.

 

Megan Feil, June 7, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Mouse Movements Are the New Fingerprints

May 6, 2016

A martial artist once told me that an individual's fighting style, if distinctive enough, is like a set of fingerprints. The same can be said of painting style, book preferences, and even Netflix selections, but what about something as anonymous as a computer mouse's movements? Here is a scary new thought from PC & Tech Authority: "Researcher Can Identify Tor Users By Their Mouse Movements."

Juan Carlos Norte, a researcher in Barcelona, Spain, claims to have developed a series of JavaScript-based fingerprinting methods that measure timing, mouse wheel movements, movement speed, CPU benchmarks, and getClientRects. Combining all of this data allowed Norte to identify Tor users based on how they used a computer mouse.

It seems far-fetched, especially when one considers how random this data is, but

“’Every user moves the mouse in a unique way,’ Norte told Vice’s Motherboard in an online chat. ‘If you can observe those movements in enough pages the user visits outside of Tor, you can create a unique fingerprint for that user,’ he said. Norte recommended users disable JavaScript to avoid being fingerprinted.  Security researcher Lukasz Olejnik told Motherboard he doubted Norte’s findings and said a threat actor would need much more information, such as acceleration, angle of curvature, curvature distance, and other data, to uniquely fingerprint a user.”
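
For the curious, here is an illustrative browser-side sketch of collecting the sort of signals the article mentions: event timing, wheel deltas, and movement speed. It is not Norte's code, and as Olejnik's objection suggests, a real fingerprint would need far richer features than these.

```typescript
// Illustrative collection of coarse mouse-behavior statistics in the browser.
// Not Norte's actual method; the feature choices here are assumptions.

interface MouseSample {
  dt: number;     // milliseconds since the previous mousemove event
  speed: number;  // pixels per millisecond between consecutive events
  wheel: number;  // accumulated wheel delta observed so far
}

const samples: MouseSample[] = [];
let lastTime = performance.now();
let lastX = 0;
let lastY = 0;
let wheelTotal = 0;

document.addEventListener("wheel", (e) => {
  wheelTotal += e.deltaY;
});

document.addEventListener("mousemove", (e) => {
  const now = performance.now();
  const dt = now - lastTime;
  const distance = Math.hypot(e.clientX - lastX, e.clientY - lastY);
  samples.push({ dt, speed: dt > 0 ? distance / dt : 0, wheel: wheelTotal });
  lastTime = now;
  lastX = e.clientX;
  lastY = e.clientY;
});

// Reduce the samples to a crude profile: average inter-event time and speed.
// Comparing such statistics across sessions is the core of the fingerprinting claim.
function summarize(): { meanDt: number; meanSpeed: number } {
  const n = samples.length || 1;
  return {
    meanDt: samples.reduce((sum, m) => sum + m.dt, 0) / n,
    meanSpeed: samples.reduce((sum, m) => sum + m.speed, 0) / n,
  };
}
```

As the quote notes, disabling JavaScript prevents this kind of collection entirely.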

This is the age of big data, but looking at Norte's claim from a logical standpoint, one needs to consider that not all computer mice are made the same: some use lasers, others are trackballs, and what about a laptop's track pad? As diverse as computer users are, there are similarities within the population, and random mouse movement is not individualistic enough to ID a person. Fear not, Tor users: move and click away in peace.

 

Whitney Grace, May 6, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Developing Nations Eager to Practice Cyber Surveillance

April 28, 2016

Is it any surprise that emerging nations want in on the ability to spy on their citizens? That's what all the cool governments are doing, after all. Indian Strategic Studies reports, "Even Developing Nations Want Cyber Spying Capabilities." Writer Emilio Iasiello sets the stage by contrasting efforts by developed nations to establish restrictions with developing countries' increased interest in cyber espionage tools.

On one hand, we could take heart from statements like this letter and this summary from the UN, and the “cyber sanctions” authority the U.S. Department of Treasury can now wield against foreign cyber attackers. At the same time, we may uneasily observe the growing popularity of FinFisher, a site which sells spyware to governments and law enforcement agencies. A data breach against FinFisher’s parent company, Gamma International, revealed the site’s customer list. Notable client governments include Bangladesh, Kenya, Macedonia, and Paraguay. Iasiello writes:

“While these states may not use these capabilities in order to conduct cyber espionage, some of the governments exposed in the data breach are those that Reporters without Borders have identified as ‘Enemies of the Internet’ for their penchant for censorship, information control, surveillance, and enforcing draconian legislation to curb free speech. National security is the reason many of these governments provide in ratcheting up authoritarian practices, particularly against online activities. Indeed, even France, which is typically associated with liberalism, has implemented strict laws fringing on human rights. In December 2013, the Military Programming Law empowered authorities to surveil phone and Internet communications without having to obtain legal permission. After the recent terrorist attacks in Paris, French law enforcement wants to add addendums to a proposed law that blocks the use of the TOR anonymity network, as well as forbids the provision of free Wi-Fi during states of emergency. To put it in context, China, one of the more aggressive state actors monitoring Internet activity, blocks TOR as well for its own security interests.”

The article compares governments’ cyber spying and other bad online behavior to Pandora’s box. Are resolutions against such practices too little too late?

 

Cynthia Murrell, April 28, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 
