Webinar from BrightFunnel Ties Marketing to Revenue
June 30, 2015
The webinar on the BrightFunnel Blog titled Campaign Attribution: Start Measuring True Marketing Impact (How-To Video) adds value to marketing efforts. BrightFunnel defines itself as a marketing analytics platform that works to tie marketing more closely to revenue. The webinar focuses on the attribution application. The video poses three major questions the application can answer: how marketing channels and specific campaigns affect pipeline and revenue, and how to gain better insight into the customer. The article introduces the webinar,
“Marketers care. We care a lot about what happens to all those leads we generate for sales. It can be hard to get a complete view of marketing impact when you’re limited to trusting that the right contacts, if any, are being added to opportunities! In this recording from our recent webinar, see how BrightFunnel solves key attribution problems by providing seamless visibility into multi-touch campaign attribution so you can accurately measure the impact you have on pipeline and revenue.”
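Multi-touch attribution of the kind the quote describes can be sketched in a few lines. The model and numbers below are illustrative, not BrightFunnel's actual method; this is simple linear attribution, which splits an opportunity's revenue evenly across every campaign that touched it:

```python
from collections import defaultdict

def linear_attribution(opportunities):
    # Split each opportunity's revenue equally across every campaign
    # that touched it, then sum the credit per campaign.
    credit = defaultdict(float)
    for opp in opportunities:
        touches = opp["campaigns"]
        if not touches:
            continue
        share = opp["revenue"] / len(touches)
        for campaign in touches:
            credit[campaign] += share
    return dict(credit)

# Hypothetical closed-won opportunities and the campaigns that touched them.
opps = [
    {"revenue": 90000, "campaigns": ["webinar", "email", "tradeshow"]},
    {"revenue": 40000, "campaigns": ["email", "webinar"]},
]
print(linear_attribution(opps))
# webinar and email each get 30000 + 20000 = 50000; tradeshow gets 30000
```

Other weighting schemes (first touch, last touch, time decay) swap out only the `share` calculation.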
BrightFunnel believes in an intuitive approach, claiming that three to four weeks has been plenty of time for its users to get set up and working with the product. The company hosts a series of webinars that lets interested parties ask direct questions and get answers live.
Chelsea Kerwin, June 30, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Alternative Search Engines: The Gray Lady Way
June 29, 2015
I read “Alternative Search Engines.” (Note: If you have to pay to read the article, visit a library and look for the story in the New York Times Magazine.) The process was painful. Distinctions I find important were not part of the write up. The notion that some outfits actually index Web sites while other outfits recycle Bing and Google search results without telling the user goes unexamined; the New York Times ignores this cost-cutting half measure. Well, who cares? I don’t.
The write up asserts:
I was investigating the more practical, or just more traditional, alternatives to Google: Bing (owned by Microsoft), Yahoo (operated by Google back then and by Bing now), Info.com (an aggregator of Yahoo/Bing, Google and others) and newer sites like DuckDuckGo and IxQuick (which don’t track your search history), Gibiru and Unbubble (which don’t censor results) and Wolfram Alpha (which curates results). They were all too organized, too logical — the results were all the same, with only slight differences in the order of their presentation. It seemed to me that the Search Engine of Tomorrow couldn’t be concerned with the best way to find what users were searching for, but with the best way to find what users didn’t even know they were searching for.
In case the Gray Lady has not figured out the real world, tomorrow means mobile devices. Mobile devices deliver filtered, personalized results swizzled for advertisers. If you expect to run key word queries on the next iPhone or Android device, give it a whirl and let me know how that works out for you.
The crisis in search is that content is not available. Obtaining primary and certain secondary information is time consuming, difficult, and tedious. The reality of alternative search engines is that these are few and far between.
Do you trust Yandex.com or Yandex.ru? Do you know what the size of the Exalead search index is? What’s included and what’s omitted from Qwant, the search engine based on Pertimm (who?) which allegedly causes Eric Schmidt to suffer Qwant induced insomnia?
Nah. In Beyond Search, our view has been that the old fashioned, library type of research is a gone goose. The even older fashioned “talk to humans” and “do original research which conforms to the minimal guidelines reviewed in Statistics 101 classes” is just too Baby Boomerish.
With the Gray Lady explaining search, the demise of precision and recall, relevancy, editorial policies for inclusion in an index, and latency between information being available and inclusion in an index is history.
Stephen E Arnold, June 29, 2015
Oracle Data Integrator Extension
June 29, 2015
The article titled Oracle Launches ODI in April with the Aim to Revolutionize Big Data on Market Realist makes it clear that Oracle sees big money in NoSQL. Oracle Data Integrator, or ODI, enables developers and analysts to simplify their lives and training. It eliminates the need to learn multiple programming languages and allows them to use Hadoop and the like without much coding expertise. The article states,
“According to a report from PCWorld, Jeff Pollock, Oracle vice president of product management, said, “The Oracle Data Integrator for Big Data makes a non-Hadoop developer instantly productive on Hadoop…” Databases like Hadoop and Spark are targeted towards programmers who have the coding knowledge required to manipulate and manage these databases. On the other hand, analysts usually use software for data analytics.”
The article also relates some of Oracle’s claims about itself, including that its revenue is larger than that of IBM, Microsoft, SAP AG, and Teradata combined. Those are also Oracle’s four major competitors. With the release of ODI, Oracle intends to filter data arriving from a myriad of sources. Clustering data into groups related by format or framework is part of this process. The end result is a more streamlined process that makes no assumptions about the level of coding knowledge held by an analyst.
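The format-based grouping the article describes can be illustrated with a toy sketch. The sniffing rules and sample records below are invented for illustration and say nothing about ODI's internals:

```python
import json
from collections import defaultdict

def classify_format(raw: str) -> str:
    # Crude format sniffing: try JSON first, then look for a
    # delimited header line, and fall back to plain text.
    try:
        json.loads(raw)
        return "json"
    except ValueError:
        pass
    if "," in raw.splitlines()[0]:
        return "csv"
    return "text"

def group_by_format(records):
    # Cluster heterogeneous incoming records into buckets by format,
    # so each bucket can be routed to an appropriate pipeline.
    buckets = defaultdict(list)
    for raw in records:
        buckets[classify_format(raw)].append(raw)
    return dict(buckets)

records = [
    '{"user": "a1", "event": "click"}',
    "id,event\n7,view",
    "free-form log line from a call record",
]
print(group_by_format(records))
```

A real integrator would sniff many more formats (Avro, Parquet, XML) and use schema registries rather than string heuristics, but the routing idea is the same.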
Chelsea Kerwin, June 29, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
A Xoogler Fixes Yahoo Mobile Search
June 27, 2015
If you have not explored Yahoo Search, give it a whirl. Try to find information about these topics:
The query “Yahoo Search” displays this result:
Note that the second hit is to Tumblr. There you go. The other hits point to the very same page I used to launch my search for “Yahoo Search.” Helpful?
Try this query: “price diapers”. On the left side of the results page, Yahoo displayed:
On the right side of the results page, Yahoo displayed:
These are prices from advertisers. Oh, there is a link to something called Yahoo Shopping. Okay, that is one way to generate revenue and create an extra click. Annoying to me. To Yahoo, fulfillment and joy.
Also, try this query: “Dark Web paste sites”.
Here’s the results page:
Ads and two links to Dot ONION addresses. I am not sure the average Yahoo user will know what to make of this result:
I suppose I can find some positives in these results pages. On the other hand, the impact for me was inconsistency.
Navigate now to “Yahoo Search Becomes More Like Google on Mobile Devices.” The headline tells the story. Yahoo is lost in search space, so the Xoogler running the Yahoo comedy hour is imitating Google.
So much for innovation. One hopes the approach works because when Yahoo is left to its own devices, the information access thing is a bit like a rice cake and water to a Big O tire changer taking a break from three hours of roadside work in the blazing sun.
Stephen E Arnold, June 27, 2015
Matchlight Lights Up Stolen Data
June 26, 2015
It is a common gimmick on crime shows for the computer expert to locate information, often stolen data, with a few clever hacking tricks. In reality it is not that easy or quick to find stolen data, but eWeek posted an article about a new intelligence platform that might be able to do the trick: “Terbium Labs Launches Matchlight Data Intelligence Platform.” Terbium Labs’ Matchlight is able to locate stolen data as soon as it is released on the Dark Web.
How it works is simply remarkable. Matchlight attaches digital fingerprints to a company’s files, down to the smallest byte. Data found on the Dark Web can then be matched against the Terbium Labs database. Matchlight is available under a SaaS model. Another option for clients is a one-way fingerprinting feature that keeps a company’s data private from Terbium Labs; the firm would only have access to the digital fingerprints in order to track the data. Matchlight can also be integrated into existing SharePoint or other document management systems. The entire approach of Matchlight takes a protective stance toward data, rather than a defensive one.
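One-way fingerprinting along these lines can be sketched with hashes of overlapping text windows: only the hashes leave the company, so the raw data stays private, yet leaked fragments still match. This is a guess at the general technique, not Terbium Labs' implementation; the window size and sample data are invented:

```python
import hashlib

def fingerprints(text: str, window: int = 32) -> set:
    # One-way fingerprints: hash every overlapping window of the text.
    # The hashes cannot be reversed into the original content.
    return {
        hashlib.sha256(text[i:i + window].encode()).hexdigest()
        for i in range(max(1, len(text) - window + 1))
    }

def match_score(company_fps: set, found_text: str, window: int = 32) -> float:
    # Fraction of the found text's windows that match stored fingerprints.
    found = fingerprints(found_text, window)
    return len(company_fps & found) / len(found)

document = "Customer list: alice@example.com, bob@example.com, carol@example.com"
fps = fingerprints(document, window=16)

leaked_fragment = document[16:56]   # a verbatim slice of the document
unrelated = "Totally different text with no overlap at all here"

print(match_score(fps, leaked_fragment, window=16))  # prints 1.0
print(match_score(fps, unrelated, window=16))
```

A production system would use rolling hashes and sampling to keep the fingerprint set small; the matching principle is unchanged.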
“We see the market shifting toward a risk management approach to information security,” [Danny Rogers, CEO and co-founder of Terbium Labs] said. “Previously, information security was focused on IT and defensive technologies. These days, the most innovative companies are no longer asking if a data breach is going to happen, but when. In fact, the most innovative companies are asking what has already happened that they might not know about. This is where Matchlight provides a unique solution.”
Across the board, data breaches are becoming common and Matchlight offers an automated way to proactively protect data. While the digital fingerprinting helps track down stolen data, does Terbium Labs have a way to prevent it from being stolen at all?
Whitney Grace, June 26, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Digital Reasoning, a Self-Described Cognitive Computing Company
June 26, 2015
The article titled Spy Tools Come to the Cloud on Enterprise Tech shows how Amazon’s work with analytics companies on behalf of the government has produced platforms like “GovCloud” with increased security. The presumed reason for such platforms is the gathering of intelligence and threat analysis at big data scale. The article explains,
“The Digital Reasoning cognitive computing tool is designed to generate “knowledge graphs of connected objects” gleaned from structured and unstructured data. These “nodes” (profiles of persons or things of interest) and “edges” (the relationships between them) are graphed, “and then being able to take this and put it into time and space,” explained Bill DiPietro, vice president of product management at Digital Reasoning. The partners noted that the elastic computing capability… is allowing customers to bring together much larger datasets.”
For former CIA staff officer DiPietro, it follows that bigger questions can be answered with tools like AWS GovCloud and the Hadoop ecosystems built on it. He cites the ability to quickly spotlight and identify someone on a watch list in a haystack of people as the challenge to overcome. The company calls the process that allows it to manage and bring together data “cluster on demand.”
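The nodes-and-edges structure DiPietro describes, including the watch-list spotlighting, can be sketched as a small in-memory graph. Everything below (the class, entities, and relations) is a hypothetical illustration, not Digital Reasoning's code:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Tiny node/edge store in the spirit of the description above:
    nodes are profiles of persons or things, edges are relationships."""
    def __init__(self):
        self.nodes = {}                 # node id -> profile attributes
        self.edges = defaultdict(set)   # node id -> {(other id, relation)}

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, a, b, relation):
        # Store the relationship in both directions for easy traversal.
        self.edges[a].add((b, relation))
        self.edges[b].add((a, relation))

    def neighbors(self, node_id):
        return {other for other, _ in self.edges[node_id]}

g = KnowledgeGraph()
g.add_node("p1", kind="person", name="A. Smith")
g.add_node("p2", kind="person", name="B. Jones")
g.add_node("acct", kind="account")
g.add_edge("p1", "acct", "owns")
g.add_edge("p2", "acct", "transacts_with")

# Spotlight anything one hop away from a watch-listed entity.
watch_list = {"p2"}
flagged = {n for w in watch_list for n in g.neighbors(w)}
print(flagged)   # {'acct'}
```

At scale the same traversal runs over a distributed graph store rather than a Python dictionary, which is where the "cluster on demand" elasticity comes in.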
Chelsea Kerwin, June 26, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Storytelling and Search: Smartlogic Fiction
June 25, 2015
One of my two or three readers sent me a link to an article appearing in the Smartlogic Web log. I found the write up unusual. You may want to check it out: Surviving without Content Intelligence? There’s an Elephant in the Room. The first chapter is here.
The approach is to tell a story which explains the value of Smartlogic’s content intelligence approach. I circled this passage in pale blue:
The OLAP cube and MDM solution he’s spent the first half of the year implementing [you can read about it here] is not going to help him with the emails, call records and file system data that he is being asked to include. He’d always known that 80% of an organization’s data was unstructured – he had hoped that they could get away with the 20% that was structured and easily managed. Now he’s got four times more data to work with, and he can’t just shovel it into the CRM system and hope they can deal with it.
The “read about it here” does not link to anything.
If the story resonates with you, Smartlogic may be exactly what you require.
The subhead “Next Week” includes this passage:
The Smartlogic Semaphore Search Application Framework is a tool for rapidly developing search applications that uniquely combine a Semantic Model with commodity tools such as SOLR and the Google Search Appliance, so users are not restricted to keywords, but can search by meaning as well which dramatically improves the user experience. Last, but not least, the Semaphore Classification Server would have allowed Archie to reliably link structured data and unstructured content without being dependent on existing structures and metadata; but that’s a story for next week.
I found one word fascinating, “commodity.” I think of the Google Search Appliance as an expensive way to process large volumes of content. The GSA no longer takes a one size fits all approach, but it is expensive to set up with fail over and customized functions. Solr is an open source solution perched on top of Lucene. A number of companies offer implementations of these open source products. The current stallion winning races is Elastic, but that is not a commodity like diapers.
The “story” is not complete. Part three will become available soon. Stay tuned.
Stephen E Arnold, June 25, 2015
How the Cloud Might Limit SharePoint Functionality
June 25, 2015
In the highly anticipated SharePoint Server 2016, on-premises, cloud, and hybrid functionality are all emphasized. However, some are beginning to wonder if functionality can suffer based on the variety of deployment chosen. Read all the details in the Search Content Management article, “How Does the Cloud Limit SharePoint Search and Integration?”
The article begins:
“All searches are not created equal, and tradeoffs remain for companies mulling deployment of the cloud, on-premises and hybrid versions of Microsoft’s collaboration platform, SharePoint. SharePoint on-premises has evolved over the years with a focus on customization and integration with other internal systems. That is not yet the case in the cloud with SharePoint Online, and there are still unique challenges for those who look to combine the two products with a hybrid approach.”
The article goes on to say that there are certain restrictions, especially with search customization, for the SharePoint Online deployment. Furthermore, a good amount of configuration is required to maximize search for the hybrid version. To keep up to date on how this might affect your organization, and the required workarounds, stay tuned to ArnoldIT.com. Stephen E. Arnold is a longtime search professional, and his work on SharePoint is conveniently collected in a dedicated feed to maximize efficiency.
Emily Rae Aldridge, June 25, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Twitter Gets a Search Facelift
June 25, 2015
Twitter has been experimenting with improving its search results, and according to TechCrunch the upgrade comes via a new search results interface: “Twitter’s New Search Results Interface Expands To All Users.” The new search results interface is one of the largest updates Twitter has made in 2015. It is supposed to increase ease of use with a cleaner look and better filtering options. Users will now be able to filter search results by live tweets, photos, videos, news, accounts, and more.
Twitter made the update to help people better understand how to use the message service and to take a more active approach to using it, rather than passively reading other people’s tweets. The update is specifically targeted at new Twitter users.
The tweaked search interface will return tweets related to the search phrase or keyword, but that does not mean that the most popular tweets are returned:
“In some cases, the top search result isn’t necessarily the one with the higher metrics associated with it – but one that better matches what Twitter believes to be the searcher’s “intent.” For example, a search for “Steve Jobs” first displays a heavily-retweeted article about the movie’s trailer, but a search for “Mad Men” instead first displays a more relevant tweet ahead of the heavily-favorited “Mad Men” mention by singer Lorde.”
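The behavior described in the quote can be approximated by blending topical relevance with engagement metrics and weighting relevance more heavily. The weights and scores below are invented for illustration; Twitter's actual ranking is not public:

```python
def rank(results, relevance_weight=0.8, metric_weight=0.2):
    # Blend topical relevance with normalized engagement, favoring
    # relevance, so a heavily retweeted but off-topic result can
    # lose the top spot to a better intent match.
    return sorted(
        results,
        key=lambda r: relevance_weight * r["relevance"]
                      + metric_weight * r["retweets_norm"],
        reverse=True,
    )

# Invented example: an on-topic tweet vs. a viral celebrity mention.
results = [
    {"id": "celebrity_mention", "relevance": 0.4, "retweets_norm": 1.0},
    {"id": "on_topic_tweet",    "relevance": 0.9, "retweets_norm": 0.2},
]
print([r["id"] for r in rank(results)])
# ['on_topic_tweet', 'celebrity_mention']
```

With the weights reversed (say 0.2 relevance, 0.8 metrics), the viral result would win, which is the pure-popularity ranking the quote contrasts against.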
The new interface proves to be simpler and better lists trends, related users, and news. It does take a little while to finesse Twitter, which is a daunting task for new users. Twitter is not the most popular social network these days, and it is using these updates to increase its appeal.
Whitney Grace, June 25, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Old Wine: Semantic Search from the Enlightenment
June 24, 2015
I read a weird disclaimer. Here it is:
This is an archived version of Pandia’s original article “Top 5 Semantic Search Engines”, we made it available to the users mainly because it is still among the most sought articles from old site. You can also check kids, radio search, news, people finder and q-cards sections.
An article from the defunct search newsletter Pandia surfaced in a news aggregation list. Pandia published one of my books, but at the moment I cannot remember which of my studies.
The write up identifies “semantic search engines.” Here’s the list with my status update in bold face:
- Hakia. Out of business
- SenseBot. Out of business.
- Powerset. Bought by Microsoft. Fate unknown in the new Delve/Bing world.
- DeepDyve. Talk about semantics but the system is a variation of the Dialog/BRS for fee search model from the late 1970s.
- Cognition (Cognition Technologies). May be a unit of Nuance?
What’s the score?
Two failures. Two sales to other companies. One survivor with an old school business model. My take? Zero significant impact on information retrieval.
Feel free to disagree, but the promise of semantic search seems to pivot on finding a buyer and surviving by selling online research. Why so much semantic cheerleading? Beats me. Semantic methods are useful in the plumbing as a component of a richer, more robust system. Most cyberOSINT systems follow this path. Users don’t care too much about plumbing in my experience.
Stephen E Arnold, June 24, 2015