Social Search: Don Quixote Is Alive and Well

January 18, 2013

Here I float in Harrod’s Creek, Kentucky, an addled goose. I am interested in other geese in rural Kentucky. I log into Facebook, using a faux human alias (easier than one would imagine) and run a natural language query (human language, of course). I peck with my beak on my iPad using an app, “Geese hook up 40027.” What do I get? Nothing. Zip, zilch, nada.

Intrigued, I query, “modern American drama.” What do I get? Nothing. Zip, zilch, nada.

I give up. Social search just does not work under my quite “normal” conditions.

First, I am a goose spoofing the world as a human. Not too many folks like this on Facebook, so my interests and my social graph are useless.

Second, the key words in my natural language query do not match the Facebook patterns, crafted by former Googlers and 20-somethings to deliver hook-up heaven and links to the semi-infamous Actor’s Theater or the Kentucky Center.


Social search is not search. Social search is group centric. Social search is an outstanding system for monitoring and surveillance. As a method of information retrieval, it is a narrow subset of the field. How do semantic methods improve the validity of the information retrieved? I am not exactly sure. Perhaps the vendors will explain and provide documented examples?

Third, without context, my natural language queries shoot through the holes in the Swiss cheese of the Facebook database.

After I read “The Future of Social Search,” I assumed that information was available at the peck of my beak. How misguided was I? Well, one more “next big thing” in search demonstrated that baloney production is surging in an ailing economy. Optimism is good. Crazy predictions about search are not so good. Look at the sad state of enterprise search, Web search, and email search. Nothing works exactly as I hope. The dust-up between Hewlett-Packard and Autonomy suggests that “meaning based computing” is a point of contention.

If social search does not work for an addled goose, for whom does it work? According to the wild and crazy write up:

Are social networks (or information networks) the new search engine? Or, as Steve Jobs would argue, is the mobile app the new search engine? Or, is the question-and-answer formula of Quora the real search 2.0? The answer is most likely all of the above, because search is being redefined by all of these factors. Because search is changing, so too is the still maturing notion of social search, and we should certainly think about it as something much grander than socially-enhanced search results.

Yep, Search 2.0.

But the bit of plastic floating in my pond is semantic search. Here’s what the Search 2.0 social crowd asserts:

Let’s embrace the notion that social search should be effortless on the part of the user and exist within a familiar experience — mobile, social or search. What this foretells is a future in which semantic analysis, machine learning, natural language processing and artificial intelligence will digest our every web action and organically spit out a social search experience. This social search future is already unfolding before our very eyes. Foursquare now taps its massive check in database to churn out recommendations personalized by relationships and activities. My6sense prioritizes tweets, RSS feeds and Facebook updates, and it’s working to personalize the web through semantic analysis. Even Flipboard offers a fresh form of social search and helps the user find content through their social relationships. Of course, there’s the obvious implementations of Facebook Instant Personalization: Rotten Tomatoes, Clicker and Yelp offer Facebook-personalized experiences, essentially using your social graph to return better “search” results.

Semantics. Better search results. How does that work on Facebook images and Twitter messages?

My view is that when one looks for information, there are some old fashioned yardsticks; for example, precision, recall, editorial policy, corpus provenance, etc.
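Precision and recall, two of those old fashioned yardsticks, are simple to compute once relevance judgments exist. A minimal sketch, using invented document IDs and relevance judgments purely for illustration:

```python
# Hypothetical example: precision and recall for a single query.
# Document IDs and relevance judgments below are invented for illustration.

def precision_recall(retrieved, relevant):
    """Return (precision, recall) for retrieved document IDs against
    the set of documents a human judge marked relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # relevant documents the system actually found
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# A system returns four documents; judges say five documents are relevant.
p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                        relevant=["d2", "d4", "d5", "d6", "d7"])
print(p, r)  # precision 0.5 (2 of 4 retrieved are relevant), recall 0.4 (2 of 5 found)
```

The catch, of course, is that social search systems rarely disclose a corpus provenance or editorial policy against which such judgments could even be made.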

When a clueless person asks about pop culture, I am not sure that traditional reference sources will provide an answer. But as information access is trivialized, the need for knowledge about the accuracy and comprehensiveness of content, the metrics of precision and recall, and the editorial policy or degree of manipulation baked into the system decreases.


See Advantech.com for details of a surveillance system.

Search has not become better. Search has become subject to self-referential mechanisms. That’s why my goose queries disappoint. If I were looking for pizza or Lady Gaga information, I would have hit pay dirt with a social search system. When I look for information based on an idiosyncratic social fingerprint, or when I look for hard information to answer difficult questions related to client work, social search is not going to deliver the input which keeps this goose happy.

What is interesting is that so many are embracing a surveillance based system as the next big thing in search. I am glad I am old. I am delighted my old fashioned approach to obtaining information is working just fine without the special advantages a social graph delivers.

Will today’s social search users understand the old fashioned methods of obtaining information? In my opinion, nope. Does it matter? Not to me. I hope some of these social searchers do more than run a Facebook query to study for their electrical engineering certification or to pass board certification for brain surgery.

Stephen E Arnold, January 18, 2013

Fifteen Year Old Invents Information Filter App

January 18, 2013

Useful apps can be made by anyone, but Fast Company reported on how “This 15-Year-Old Built An App To Help His High School Debate Team. It Could Do Much More Than That.” Tanay Tandy invented an app called Clipped, which extracts key information from news articles and other sources and turns it into a bulleted list. It is being touted as a new tool that could put research assistants, Congressional aides, and judicial clerks out of work. Clipped has received mixed reviews so far, but Tandy is working on an upgrade that should resolve the problems.

Tandy personally created the algorithm for his debate prep. Here is how he uses it:

“I use it to scan over articles, and after using Clipped, if I like an article, I have to go back and read the whole thing. For a typical debate I have about 100 different evidence files about 2-3 pages in length. There might be an article where the title might sound appealing, but after running Clipped, I can see the focus of the article is definitely not what I’m looking for. Last year for a debate on animal rights, I found a paper on animal rights–but it was targeted towards the philosophical side of why to respect animal rights. But for that specific debate, I was looking for evidence from the scientific side, research showing that animals can think as much as humans.”
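Tandy has not published Clipped’s algorithm, so any reconstruction is a guess. The classic approach for this kind of tool is word-frequency extractive summarization: score each sentence by how many frequent content words it contains, then emit the top scorers as bullets. A toy sketch along those lines (the stopword list and scoring are simplifications, not Clipped’s actual method):

```python
# Illustrative sketch only: a word-frequency extractive summarizer of the
# general kind a tool like Clipped might use. Not Tandy's actual algorithm.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "for", "on"}

def clip(text, bullets=3):
    """Return up to `bullets` bullet points: the highest-scoring sentences,
    in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    # A sentence's score is the total corpus frequency of its content words.
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())
                   if w not in STOPWORDS)

    top = sorted(sentences, key=score, reverse=True)[:bullets]
    return ["- " + s for s in sentences if s in top]  # keep article order
```

This matches Tandy’s described workflow: run the tool to see the focus of an article, then go back and read the full text only if the bullets look promising.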

Tandy does not believe anyone is too young to launch a product, as long as the right people are around and success does not go to one’s head. Tandy built the tool simply to make his own life easier and was not looking for fame, but now he has a project that will appeal to college review boards. Google might also be keeping an eye on him as a future hire.

Whitney Grace, January 18, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

The Teflon Coated Google

January 17, 2013

For eighteen months, the Federal Trade Commission investigated Google to see if it was using its corner on the Internet search market to push its own products and services at the expense of its rivals. The Wall Street Journal reports in “Behind Google’s Antitrust Escape” that the FTC decided not to pursue an antitrust suit; instead, it opted to address a series of smaller issues. Google agreed to make some changes in its search business. The FTC could not find any evidence that Google’s customers or its rivals were being harmed. All the FTC discovered were customers’ complaints about Google’s actions, which were not enough to make a case.

During the investigation, Google was building political support against a potential antitrust action:

“Google also dispatched executive chairman Eric Schmidt and other employees to garner support from lawmakers, adding political pressure to the landscape. In November, for instance, staff members of U.S. Senator Mark Udall, a Democrat from Colorado, spoke with Google representatives. Afterward, Mr. Udall sent a letter to FTC Chairman Jon Leibowitz, encouraging the agency to proceed “cautiously” in its probes of Internet companies, which “have some of the highest consumer satisfaction rates in the country” and have created millions of jobs.”

Udall’s letter was only one of several that members of Congress sent to the FTC. Many of these letters were leaked, and Congress was concerned about the leaking; it was even suggested that the FTC leaked the information for strategic advantage. Whatever the truth, Google got off with a slap on the wrist and will continue on with its search dominance.

Whitney Grace, January 17, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search

Facebook Search: How Disruptive?

January 16, 2013

Lots of punditry today. Facebook rolled out graph search. A registered user can run queries answered by content within the Facebook “database.” How will it work? Public content becomes the corpus. Navigate to the BBC write up “Facebook Unveils Social Search Tools for Users.”

A comment by Facebook’s founder which caught my attention was:

“We look at Facebook as a big social database,” said Mr Zuckerberg, adding that social search was Facebook’s “third pillar” and stood beside the news feed and timeline as the foundational elements of the social network.

The former Googler allegedly responsible for Facebook’s search allegedly observed:

“On graph search, you can only see content that people have shared with you,” developer Lars Rasmussen, who was previously the co-founder of Google Maps, told reporters.

So no reprise of the various privacy missteps the GOOG made. Facebook wants to avoid some of its fast dancing over privacy too.

How disruptive will Facebook search be?

First, Facebook users will give search a whirl. The initial queries will be tire kicking stuff. Once some patterns emerge, the Facebook bean counters will flip the switch on ads. That, not search, may cause Google some moments of concern. Google, like Microsoft, has to protect its one trick revenue pony. Facebook won’t stampede the cattle, but those doggies will wander. If the pasture is juicy, Facebook will let those cows roam. Green pastures can be fragile ecosystems.

Second, search sucks. Facebook could answer certain types of questions better than the brute force Web indexing services. If users discover the useful functions of Facebook, traffic for weak sisters like Blekko and Yahoo could head south. The Google won’t be hurt right away, but the potential for Facebook to index only the URLs cited by registered users could be a more threatening step. Surgical search, not brute force, may slice some revenues from the Google.

Third, Facebook could learn, as Google did, that search is a darned good thing. Armed with the social info and its users’ curated URLs, Facebook could cook up a next generation search solution that could rain on Googzilla’s parade. Google Plus is interesting, but Facebook may be just the outfit to pop search up a level. Google is not an innovator, so Facebook may be triggering a new search arms race.

Thank goodness.

Stephen E Arnold, January 16, 2013

Venture Funding Tracker from Digimind Offers Enhanced Features

January 16, 2013

Digimind drives competitive intelligence information with its service that tracks venture funding. Now that’s smart digging. TheNextWeb informs us, “WhoGotFunded.com Unveils Premium Accounts Offering More Filters, Keyword Searches, and Data Exports.” Three new premium account types offer users a number of useful features.

The free version of WhoGotFunded is still available, but the paid options may be worth the cost if your organization requires extended filtering, more than three results from keyword searches, or the ability to export data on more than three deals per month. Paying up also gets users “power” email alerts and user support. The write-up by Ken Yeung reports:

Started by a group of technologists, the site uses text mining technology to curate funding news for any company around the world. When we spoke with Paul Vivant, one of the founders, he said that the site’s goal was to build the most comprehensive funding database in the world that would become ‘a source for venture capitalists, business angels, founders, CEOs, corporate executives, journalists, bloggers, and investment bankers’. . .

“The company is offering a free 14-day trial with its Starter plan, which costs $49 per month. The next two plans are $149 and $749 per month, respectively. Each plan offers the same amount of credits, search results, and features — the main difference is just how much data do you get.”

Yeung notes that there are similar services out there, but Digimind seems to be confident that it has something unique to offer. The company works to save its clients time and money by automating and streamlining the collection, analysis, and sharing of data. Its global client list includes organizations from a broad range of industries.

Cynthia Murrell, January 16, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Polyspot Provides New Search Engine for Algoma University

January 15, 2013

PolySpot, an open search solutions provider, has been all over the news lately for partnering with various companies to help out with their search troubles. The recent PolySpot blog post “Polyspot on the Algoma University Web Site” announces a new project that PolySpot is participating in with the University.

The blog post reveals:

“With our partners TerminalFour, we now provide Algoma University (Canada) with a new search engine for its website. After the Sacred Heart University, we are happy to offer Algoma students and potential students a relevant and easy search tool through all content available on the University Web site.”

Here at Beyond Search we are glad to see that PolySpot is sharing its cutting edge search technologies with colleges and universities. Its easy-to-use enterprise search solution is making great strides to improve the industry at large. Check out the company’s Web site for more information about PolySpot.

Jasmine Ashton, January 14, 2013

SEO Community Jumps to Conclusions About Google and Press Releases

January 15, 2013

Are press releases the red-headed stepchild of Google, or just misunderstood from a lack of complete information? An SEO pro schools his colleagues in Search Engine Journal’s “Get Over Yourself—Matt Cutts did Not Just Kill Another SEO Kitten.” His is a voice of reason in a field that tends to defensively vilify Google’s attempts to serve up only quality content.

The latest dustup began in the Google forums, where one poster asked about press release companies that only push their stories to “legitimate” (quality content) sites. Google’s Matt Cutts (probably unintentionally) stirred things up with his simple statement: “Note: I wouldn’t expect links from press release web sites to benefit your rankings, however.” Hyperbole ensued.

Many in the SEO community took those words to mean that Google will now ignore all links in every press release it encounters, and were quite perturbed. Writer and SEO veteran Alan Bleiweiss takes the alarmists to task, and it is entertaining to read. I’m more interested, though, in his comments on press releases. After acknowledging the wealth of garbage that is now often distributed as “press releases,” he wrote:

“REAL press releases, that communicate TRULY time sensitive newsworthy information, have, and always will be a valuable means of spreading information that deserves to be spread. REAL press releases don’t get written purely for the links. REAL press releases are designed to communicate with legitimate news people. REAL press releases are designed to let others know valid updated information.

“And a well-crafted press release, targeting truly accurate niche recipients can lead to legitimate journalists, bloggers and social media influencers contacting a site’s owners, or doing their own write-up on the subject, and potentially even generating their own links.

“So from a sustainable SEO perspective, press releases are STILL an SEO best practice recommendation. As part of a comprehensive marketing solution that is vital to providing multiple layers of direct and indirect signals for SEO purposes. But ONLY when those releases are executed properly.”

It is good to see such reasonable sentiments from someone in the search engine optimization field. Will Bleiweiss succeed in talking sense into his colleagues?

Cynthia Murrell, January 15, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Latest Desktop Version from dtSearch Available

January 14, 2013

We spotted dtSearch’s latest desktop version, v7.72.8085-Lz0, for sale at Release BB. Will this new release be a splash or a flash?

The product description reads:

“The dtSearch product line can instantly search terabytes of text across a desktop, network, Internet or Intranet site. dtSearch products also serve as tools for publishing, with instant text searching, large document collections to Web sites or portable media. Developers can embed dtSearch’s instant searching and file format support into their own applications.”

A few of the product’s features include a variety of helpful search options, data exports in several formats, and specialized forensic indexing and searching tools. See the company’s official Desktop product page for more details.

Incorporated in 1991, dtSearch began its R&D in 1988. They have since become a major provider of information management software, supplying award-winning solutions to firms in several fields and to numerous government agencies in the areas of defense, law enforcement, and space exploration. The company also makes its products available for incorporation into other commercial applications. dtSearch has distributors worldwide, and is headquartered in Bethesda, Maryland.

Cynthia Murrell, January 14, 2013

Sponsored by ArnoldIT.com, developer of Augmentext

Google Remains a Habit for a Reason

January 13, 2013

Despite Google’s current stronghold in Internet search, there are still a few other companies that believe they have a way to disrupt what has become status quo. A recent article published in Everything PR called “Interview Exclusive: Bing Search’s Stefan Weitz” discusses Bing Search’s goals.

The Q&A with Bing Director Stefan Weitz dives into the question of how far the negative stigma attached to Bing has proved challenging in gaining a larger audience. Weitz believes that Bing does not face a stigma; rather, it must contend with people’s already formed habits.

“[We want] to get people to demand more from search than presenting a bunch of links in response to a keyword.  It’s why we’re investing so much in multimodal experiences where Bing simply becomes part of the fabric of your day whether it’s on your television, within your productivity suite, on your mobile device, or on your tablet.  We think the act of search should weave itself into the fabric of your daily experiences – not be something you ‘go do’.”

Weitz had a chance to direct the conversation about Bing in this interview, and his word of choice throughout it, habit, was a smart one because of the sometimes negative connotation he wanted to attach to Google. However, the word also recalls how habits are formed: through an efficient and intuitive user experience. Google created that experience and has logically become the norm and the standard.

Megan Feil, January 13, 2013

Sponsored by ArnoldIT.com, developer of Beyond Search
