Do Businesses Have a Collective Intelligence?

May 4, 2016

After working in corporate America for several years, I was amazed by the sheer audacity of its stupidity. I came to the conclusion that many people in corporate America lack intelligence and are slowly skirting insanity’s edge, so Xconomy’s article, “Brainspace Aims To Harness ‘Collective Intelligence’ Of Businesses,” made me giggle. But I digress. Intelligence really does run rampant in businesses, especially in the IT departments that keep modern companies up and running. The digital workspace has created a collective intelligence within a company’s enterprise system, and that information is accessed either directly from the file hierarchy or through the (usually quicker) search box.

Keywords in the correct context for a given company are extremely important to semantic search, which is why Brainspace built search software that creates a search ontology for each individual company. Brainspace says that every company creates collective intelligence within its systems; its software takes that digitized “brain” and produces a navigable map that organizes the key items into clusters.

“As the collection of digital data on how we work and live continues to grow, software companies like Brainspace are working on making the data more useful through analytics, artificial intelligence, and machine-learning techniques. For example, in 2014 Google acquired London-based Deep Mind Technologies, while Facebook runs a program called FAIR—Facebook AI Research. IBM Watson’s cognitive computing program has a significant presence in Austin, TX, where a small artificial intelligence cluster is growing.”

Building a search ontology by incorporating artificial intelligence into semantic search is a fantastic idea. Big data relies on deciphering the information housed in this “collective intelligence,” but it can lack the human reasoning needed to understand context. An intelligent semantic search engine could do wonders that even Google has not yet attempted.

 

Whitney Grace, May 4, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Google Relies on Freebase Machine ID Numbers to Label Images in Knowledge Graph

May 3, 2016

The article on SEO by the Sea titled “Image Search and Trends in Google Search Using Freebase Entity Numbers” explains the transformation occurring at Google around Freebase Machine ID numbers. Image search is a complicated business when it comes to differentiating labels. Instead of text strings, Google’s Knowledge Graph is based on Freebase entities, which can uniquely identify the subjects of images without relying on language. The article explains with a quote from Chuck Rosenberg:

“An entity is a way to uniquely identify something in a language-independent way. In English when we encounter the word “jaguar”, it is hard to determine if it represents the animal or the car manufacturer. Entities assign a unique ID to each, removing that ambiguity, in this case “/m/0449p” for the former and “/m/012x34” for the latter.”
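The idea in the quote can be sketched as a lookup from ambiguous surface strings to language-independent machine IDs. The two jaguar MIDs come from the quote above; the dictionary structure and function names are an illustrative assumption, not Google’s actual API or data model.

```python
# Map ambiguous surface strings to language-independent Freebase MIDs.
# The two "jaguar" MIDs are from the Rosenberg quote above; the rest of
# this structure is a hypothetical sketch, not a Google data structure.
ENTITIES = {
    "/m/0449p": "jaguar (the animal)",
    "/m/012x34": "Jaguar (the car maker)",
}

SURFACE_FORMS = {
    "jaguar": ["/m/0449p", "/m/012x34"],  # one word, two candidate entities
}

def candidates(word):
    """Return the unambiguous entity IDs a surface word could refer to."""
    return [(mid, ENTITIES[mid]) for mid in SURFACE_FORMS.get(word.lower(), [])]

# The word alone is ambiguous; each MID is not.
for mid, label in candidates("Jaguar"):
    print(mid, label)
```

The point of the exercise: once labels are MIDs rather than strings, two images of very different things can never collide on the word “jaguar.”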

Metadata is wonderful stuff, isn’t it? The article concludes by crediting Barbara Starr, a co-administrator of the Lotico San Diego Semantic Web Meetup, with noticing that the Machine ID numbers assigned to Freebase entities now appear in Google Trends’ URLs. Google Trends is a public web facility that offers a window into the hive mind by showing what people are currently searching for. On the Wednesday President Obama nominated a new Supreme Court Justice, for example, the top search was Merrick Garland.

 

Chelsea Kerwin, May 3, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Duck Duck Go as a Privacy Conscious Google Alternative

April 26, 2016

Those frustrated with Google may have an alternative. “Going over to the duck side: A week with Duck Duck Go” from Search Engine Watch shares a thorough first-hand account of using Duck Duck Go for a week. User privacy protection seems to be the hallmark of the search service, and there is even an option to enable Tor in its mobile app. Features are comparable; for example, Instant Answers is designed to compete with Google’s Knowledge Graph. As an open source product, Instant Answers is built up by community contributions. On the question of seamless, intuitive search, the post concludes:

“The question is, am I indignant enough about Google’s knowledge of my browsing habits (and everyone else’s that feed its all-knowing algorithms) to trade the convenience of instantly finding what I’m after for that extra measure of privacy online? My assessment of DuckDuckGo after spending a week in the pond is that it’s a search engine for the long term. To get the most out of using it, you have to make a conscious change in your online habits, rather than just expecting to switch one search engine for another and get the same results.”

Will a majority of users replace “Googling” with “Ducking” anytime soon? Time will tell, and it will be an interesting saga to watch. I suppose we could track the evolution of Knowledge Graph and Instant Answers to see the competing narratives unfold.

 

Megan Feil, April 26, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Google Removes Pirate Links

April 21, 2016

A few weeks ago, YouTube was abuzz with discontent from some of its most popular stars. Their channels had been shut down due to copyright claims by third parties, even though the content in question fell under the fair use defense. YouTube is not the only one that has to deal with copyright claims. TorrentFreak reports that “Google Asked To Remove 100,000 ‘Pirate Links’ Every Hour.”

Google handles on average more than two million DMCA takedown notices per day from copyright holders about pirated content. TorrentFreak discovered that the number has doubled since 2015 and quadrupled since 2014. That volume breaks down to roughly one hundred thousand notices per hour. If the rate continues, Google will deal with close to one billion DMCA notices this year, whereas it had previously taken a decade to reach that number.
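A quick back-of-the-envelope check shows how the figures above hang together, starting from the headline rate of 100,000 notices per hour. The round numbers are projections, so expect some rounding slack against the “one billion” claim.

```python
# Back-of-the-envelope check on the takedown rates reported above,
# starting from the headline figure of 100,000 notices per hour.
per_hour = 100_000

per_day = per_hour * 24    # daily volume at that rate
per_year = per_day * 365   # projected annual volume

print(f"per day:  {per_day:,}")    # 2,400,000
print(f"per year: {per_year:,}")   # 876,000,000, i.e. close to one billion
```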

“While not all takedown requests are accurate, the majority of the reported links are. As a result many popular pirate sites are now less visible in Google’s search results, since Google downranks sites for which it receives a high number of takedown requests.  In a submission to the Intellectual Property Enforcement Coordinator a few months ago Google stated that the continued removal surge doesn’t influence its takedown speeds.”

Google does not take sweeping actions, such as removing entire domain names from its search indexes, because it does not want to become a censorship board. The copyright holders, though, are angry and want Google to promote only legal services over the hundreds of thousands of websites that pop up with illegal content. The battle has been compared to an endless game of whack-a-mole.

Pirated content does harm the economy, but the damage is far smaller than the huge copyright holders claim. The smaller players who file DMCA takedowns are hurt more. YouTube stars, on the other hand, are the butt of an unfunny joke, and it would be wise for the rules to be revised.

 

Whitney Grace, April 21, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Digging for a Direction of Alphabet Google

April 21, 2016

Is Google trying to emulate BAE Systems’ NetReveal, IBM i2, and systems from Palantir? Looking back at an older article from Search Engine Watch, “How the Semantic Web Changes Everything for Search,” may provide insight. At the time, Knowledge Graph had just launched, and along with it came a wave of communications generating buzz about a new era of search, one moving from string-based queries to a semantic approach organized around “things.” The write-up explains:

“The cornerstone of any march to a semantic future is the organization of data and in recent years Google has worked hard in the acquisition space to help ensure that they have both the structure and the data in place to begin creating “entities”. In buying Wavii, a natural language processing business, and Waze, a business with reams of data on local traffic and by plugging into the CIA World Factbook, Freebase and Wikipedia and other information sources, Google has begun delivering in-search info on people, places and things.”

The article noted Knowledge Graph’s implication for Google: stronger, more relevant advertising delivered through this semantic approach. Even today, we see the Alphabet Google thing continuing to shift from search to other interesting information access functions in order to sell ads.

 

Megan Feil, April 21, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Software That Contains Human Reasoning

April 20, 2016

Computer software keeps advancing faster than we can purchase the latest product. Software is now capable of holding simple conversations, accurately translating languages, navigating by GPS, driving cars, etc. The one thing that computer developers cannot program is human thought and reason. The New York Times wrote “Taking Baby Steps Toward Software That Reasons Like Humans” about this goal just out of reach.

The article focuses on Richard Socher and his company MetaMind, a deep learning startup working on pattern recognition software. He, along with other companies focused on artificial intelligence, is slowly inching toward replicating human thought on computers. The progress is slow but steady, according to a MetaMind paper describing how machines are now capable of answering questions about both digital images and textual documents.

“While even machine vision is not yet a solved problem, steady, if incremental, progress continues to be made by start-ups like Mr. Socher’s; giant technology companies such as Facebook, Microsoft and Google; and dozens of research groups.  In their recent paper, the MetaMind researchers argue that the company’s approach, known as a dynamic memory network, holds out the possibility of simultaneously processing inputs including sound, sight and text.”

The software that allows computers to answer questions about digital images and text is sophisticated, but the data needed to approach human capabilities is limited, if it exists at all. We are coming closer to understanding the human brain’s complexities, but artificial intelligence is not near Asimov levels yet.

 

 

Whitney Grace, April 20, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Natural Language Takes Lessons from Famous Authors

April 18, 2016

What better way to train a natural language AI than to bring venerated human authors into the equation? Wired reports, “Google Wants to Predict the Next Sentences of Dead Authors.” Not surprisingly, Google researchers are tapping into Project Gutenberg for their source material. Writer Matt Burgess relates:

“The network is given millions of lines from a ‘jumble’ of authors and then works out the style of individual writers. Pairs of lines were given to the system, which made a simple ‘yes’ or ‘no’ decision to whether they matched up. Initially the system didn’t know the identity of any authors, but still only got things wrong 17 percent of the time. By giving the network an indication of who the authors were, giving it another factor to compare work against, the computer scientists reduced the error rate to 12.3 percent. This was also improved by adding a fixed number of previous sentences to give the network more context.”
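The pairwise yes/no setup described above can be illustrated with a toy stand-in: instead of a trained neural network, a character trigram cosine similarity with a threshold decides whether two lines read as if they match. The features, threshold, and function names here are invented for illustration; this is not the approach or code from Google’s research.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Character trigram counts: a crude proxy for writing style."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def same_author(line1, line2, threshold=0.2):
    """Binary 'yes'/'no' decision on a pair of lines, as in the setup above.

    The threshold is an arbitrary illustrative value, not a tuned one.
    """
    return cosine(char_ngrams(line1), char_ngrams(line2)) >= threshold
```

A real system would learn the comparison from millions of line pairs; the toy version just makes the shape of the task concrete: two lines in, one boolean out.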

The researchers carry their logic further. As the Wired title says, they have their AI predict an author’s next sentence; we’re eager to learn what Proust would have said next. They also have the software draw conclusions about authors’ personalities. For example, we’re told:

“Google admitted its predictions weren’t necessarily ‘particularly accurate,’ but said its AI had identified William Shakespeare as a private person and Mark Twain as an outgoing person. When asked ‘Who is your favourite author?’ and [given] the options ‘Mark Twain’, ‘William Shakespeare’, ‘myself’, and ‘nobody’, the Twain model responded with ‘Mark Twain’ and the Shakespeare model responded with ‘William Shakespeare’. Asked who would answer the phone, the AI Shakespeare hoped someone else would answer, while Twain would try and get there first.”

I can just see Twain jumping over Shakespeare to answer the phone. The article notes that Facebook is also using the work of human authors to teach its AI, though that company elected to use children’s classics like The Jungle Book, A Christmas Carol, and Alice in Wonderland. Will we eventually see a sequel to Through the Looking Glass?

 

 

Cynthia Murrell, April 18, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Tumblr Tumbles, Marking yet Another Poor Investment Decision by Yahoo

April 14, 2016

The article on VentureBeat titled “As Tumblr’s Value Heads to Zero, a Look at Where It Ranks Among Yahoo’s 5 Worst Acquisition Deals” pokes fun at Yahoo’s tendency to spend huge amounts of cash on companies only to watch them immediately fizzle. In the number one slot is Broadcast.com. Remember that? Me neither. But apparently Yahoo doled out almost $6 billion in 1999 to wade into the online content streaming game, only to shut the company down after a few years. And thusly, we have Mark Cuban. Thanks, Yahoo. The article goes on with the ranking:

“2. GeoCities: Yahoo paid $3.6 billion for this dandy that let people who knew nothing about the Web make web pages. Fortunately, this was also mostly shut down, and nearly all of its content vanished, saving most of us from a lot of GIF-induced embarrassment. 3. Overture: Yahoo paid $1.63 billion in 2003 for this search engine firm after belatedly realizing that some upstart called Google was eating its lunch. Spoiler alert: Google won.”

The article suggests that Tumblr would slide into fourth place, given its $1.1 billion price tag and two-year crash and burn. It also concedes that there are other ways of ranking this list, such as how hard each deal was to watch. By that metric, cheaper deals with more obvious mismanagement, like the social sites Flickr and Delicious, might take the cake.

 

Chelsea Kerwin, April 14, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

What Not to Say to a Prospective Investor (Unless They Just Arrived via Turnip Truck)

April 11, 2016

The article on Pando titled “Startups Anonymous: Things Founders Say to Investors That Are Complete BS” is an installment in a weekly series on the obstacles and madness inherent in the founder/investor relationship. Given that one person is trying to convince the other to hand over money, and the other is looking for reasons not to, the conversations often turn comical faster than it takes the average startup to go broke. The article provides a list of trending comments one might overhear coming from a founder’s mouth (while the founder’s nose simultaneously turns red and elongates). Here are a few gems, along with their translated meanings:

“Our growth has been all organic.” Translation: Our friends are using it. “My cofounder turned down a job at Google to focus on our company.” Translation: He applied for an internship a while back and it fell through. “We want to create a very minimalist design.” Translation: We’re not designers and can’t afford to hire a decent one. “This is a $50 billion per year untapped market.” Translation: I heard this tactic works for getting investors.”

The frustration of fundraising is no joke, but founders get their turn to laugh at investors in the companion article titled “What I’d Really Like to Say to Investors.” For example: “If today, we had the revenue you’d like to see, I wouldn’t be talking to you right now. It’s as simple as that.” Injecting honesty into these interactions is apparently always funny, perhaps because as founders grow increasingly desperate, their BS artistry rises in tandem.

 

Chelsea Kerwin, April 11, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Potential Corporate Monitoring Concerns Tor Users

April 7, 2016

The Dark Web has been seen as a haven by anyone interested in untraceable internet activity. However, a recent article from Beta News, “Tor Project says Google, CloudFlare and others are involved in dark web surveillance and disruption,” brings to light the potential issue of Tor traffic being monitored. CloudFlare, a CDN and DDoS protection service, has introduced CAPTCHAs and cookies for Tor users, allegedly for monitoring purposes, and accusations about Google and Yahoo have also been made. The author writes:

“There are no denials that the Tor network — thanks largely to the anonymity it offers — is used as a platform for launching attacks, hence the need for tools such as CloudFlare. As well as the privacy concerns associated with CloudFlare’s traffic interception, Tor fans and administrators are also disappointed that this fact is being used as a reason for introducing measures that affect all users. Ideas are currently being bounced around about how best to deal with what is happening, and one of the simpler suggestions that has been put forward is adding a warning that reads “Warning this site is under surveillance by CloudFlare” to sites that could compromise privacy.”

Will a simple warning label appease Tor users? Likely not, as such a move would essentially advertise that Tor provides the opposite of the service users expect. This will be a fascinating story to watch unfold: it could be the beginning of the end of the Dark Web as it is known, or perhaps the concerns over lost anonymity will fuel further innovation.

 

Megan Feil, April 7, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 
