Searching Bureaucracy

May 19, 2015

The rise of automatic document conversion could render vast amounts of data collected by government agencies useful. In the article “Solving the Search Problem for Large-Scale Repositories,” GCN explains why this technology is a game changer and offers tips for a smooth conversion. Writer Mike Gross tells us:

“Traditional conversion methods require significant manual effort and are economically unfeasible, especially when agencies are often precluded from using offshore labor. Additionally, government conversion efforts can be restricted by document security and the number of people that require access. However, there have been recent advances in the technology that allow for fully automated, secure and scalable document conversion processes that make economically feasible what was considered impractical just a few years ago. In one particular case the cost of the automated process was less than one-tenth of the traditional process. Making content searchable, allowing for content to be reformatted and reorganized as needed, gives agencies tremendous opportunities to automate and improve processes, while at the same time improving workflow and providing previously unavailable metrics.”
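To make the technology concrete, here is a minimal sketch of an automated scan-to-searchable-text pipeline using the open source pdf2image and pytesseract libraries (wrappers around Poppler and Tesseract OCR); the file path is hypothetical, and a real government deployment would add the security and scale controls Gross discusses.

```python
# A minimal sketch of an automated scan-to-searchable-text pipeline.
# Assumes the open source pdf2image and pytesseract libraries (and the
# Poppler and Tesseract binaries they wrap); the path is hypothetical.
from pdf2image import convert_from_path
import pytesseract

def convert_document(pdf_path: str) -> str:
    """OCR every page of a scanned PDF and return its text."""
    pages = convert_from_path(pdf_path, dpi=300)  # rasterize each page
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

if __name__ == "__main__":
    text = convert_document("records/form_1024.pdf")  # hypothetical file
    print(text[:500])
```

Once the text is extracted it can be indexed, reformatted, or mined for the previously unavailable metrics Gross mentions.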

The write-up describes several factors that could foil an attempt to implement such a system, and I suggest interested parties check out the whole article. Some examples include security and scalability, of course, as well as specialized format and delivery requirements, and non-textual elements. Gross also lists criteria to look for in a vendor; for instance, assess how well their products play with related software, like scanning and optical character recognition tools, and whether they will be able to keep up with the volumes of data at hand. If government agencies approach these automation advances with care and wisdom, instead of reflexively choosing the lowest bidder, our bureaucracies’ data systems may actually become efficient. (Hey, one can dream.)

Cynthia Murrell, May 19, 2015

Stephen E Arnold, Publisher of CyberOSINT at www.xenky.com

 

Popular and Problematic Hadoop

May 15, 2015

We love open source on principle, and Hadoop is indeed an open-source powerhouse. However, any organization considering a Hadoop system must understand how tricky implementation can be, despite the hype. A pair of writers at GCN asks and answers the question, “What’s Holding Back Hadoop?” The brief article reports on a recent survey of data management pros by research firm TDWI. Reporters Troy K. Schneider and Jonathan Lutton explain:

“Hadoop — the open-source, distributed programming framework that relies on parallel processing to store and analyze both structured and unstructured data — has been the talk of big data for several years now. And while a recent survey of IT, business intelligence and data warehousing leaders found that 60 percent will have Hadoop in production by 2016, deployment remains a daunting task. TDWI — which, like GCN, is owned by 1105 Media — polled data management professionals in both the public and private sector, who reported that staff expertise and the lack of a clear business case topped their list of barriers to implementation.”
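The expertise barrier is easier to appreciate with a concrete example. Below is a minimal word-count job written for Hadoop Streaming, the standard Hadoop utility that lets mappers and reducers run as ordinary scripts reading stdin; this is an illustrative sketch, and the file names are hypothetical.

```python
#!/usr/bin/env python
# mapper.py -- emit "word<TAB>1" for each word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(word + "\t1")
```

```python
#!/usr/bin/env python
# reducer.py -- Hadoop sorts mapper output by key, so all counts for a
# word arrive together; sum them and emit "word<TAB>total".
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    if word != current_word and current_word is not None:
        print(current_word + "\t" + str(count))
        count = 0
    current_word = word
    count += int(n)
if current_word is not None:
    print(current_word + "\t" + str(count))
```

A run looks something like hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out (paths hypothetical). Even this toy job drags in cluster configuration, packaging, and tuning, which hints at why staff expertise tops the survey’s list of barriers.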

The write-up supplies a couple of bar graphs of survey results, including the top obstacles to implementation and the primary benefits of going to the trouble. Strikingly, only six percent of respondents say there’s no Hadoop in their organizations’ foreseeable future. Though not covered in the GCN write-up, the full, 43-page report includes word on best practices and implementation trends; it can be downloaded here (registration required).

Cynthia Murrell, May 15, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Explaining Big Data Mythology

May 14, 2015

Mythologies usually develop over the course of centuries, but big data has only been around for (arguably) a couple of decades—at least in its modern incarnation. Recently, big data has received a lot of media attention and product development, enough for the Internet to create a big data mythology of its own. The Globe and Mail wanted to dispel some of the bigger myths in the article, “Unearthing Big Myths About Big Data.”

The article draws on Prof. Joerg Niessing’s big data expertise as he explains the truth behind many of the biggest big data myths. One of the main points Niessing wants people to understand is that gathering data does not automatically equal dollar signs; you have to work actively with the data:

“You must take control, starting with developing a strategic outlook in which you will determine how to use the data at your disposal effectively. “That’s where a lot of companies struggle. They do not have a strategic approach. They don’t understand what they want to learn and get lost in the data,” he said in an interview. So before rushing into data mining, step back and figure out which customer segments and what aspects of their behavior you most want to learn about.”

Niessing says that big data is not really big, but made up of many diverse data points. Big data also does not have all the answers; instead, it provides ambiguous results that need to be interpreted. Have the questions you want answered ready before gathering data. Also, not all of the data returned is good: some of it is actually garbage that cannot be used in a project. Several other myths are debunked as well, but the truth remains that having a strategic plan in place is the best way to make the most of big data.

Whitney Grace, May 14, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

The Forgotten List of the Telegraph

May 13, 2015

Technology experts and information junkies in the European Union are in an uproar over a ruling that forces Google to remove specific information from search results. The “right to be forgotten” policy upheld by the EU is supposed to help people who want “inadequate, irrelevant, or no longer relevant” information removed from Google search results. Many news outlets in Europe have been affected, including the United Kingdom’s Telegraph. The Telegraph has been keeping a list, “Telegraph Stories Affected By ‘EU Right To Be Forgotten’,” of all the stories it has been forced to remove.

According to the article, Google has received over 250,000 requests to remove information. Some of these requests concern stories published by the Telegraph. While many oppose the “right to be forgotten,” including the House of Lords, others still uphold the policy:

“But David Smith, deputy commissioner and director of data protection for the Information Commissioner’s Office (ICO), hit back and claimed that the criticism was misplaced, ‘as the initial stages of its implementation have already shown.’ ”

Many of the “to be forgotten” requests concern people with criminal pasts and misdeeds that color them in a bad light. The Telegraph’s content might be removed from Google, but the paper is keeping a long, long list on its website. Read the stories there, or head on over to the US Google website; freedom of the press still holds true here.

Whitney Grace, May 13, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

The Philosophy of Semantic Search

May 13, 2015

The article “Taking Advantage of Semantic Search NOW: Understanding Semiotics, Signs, & Schema” on Lunametrics delves into semantics on a philosophical and linguistic level as well as in regard to business. The author traces the emergence of semantic search beginning with Ray Kurzweil’s interest in machines that learn meaning, as opposed to simpler keyword search. In order to fully grasp this concept, he provides a brief refresher on Saussure’s semiotics:

“a Sign is comprised of a signifier, or the name of a thing, and the signified, what that thing represents… Say you sell iPad accessories. “iPad case” is your signifier, or keyword in search marketing speak. We’ve abused the signifier to the utmost over the years, stuffing it onto pages, calculating its density with text tools, jamming it into title tags, in part because we were speaking to robots who read at a 3-year-old level.”

In order to create meaning, we must go beyond even the addition of a price tag and a picture to create a sign. The article suggests the need for schema: the addition of some indication of whom and what the thing is for. The author, Michael Bartholow, has a background in linguistics, marketing, and search engine optimization. His article ends with the question of when linguists, philosophers, and humanists will be invited into the conversation with businesses, perhaps making him a true visionary in a field populated by data engineers with tunnel vision.
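To make the schema point concrete, here is a minimal sketch of what such a sign might look like as schema.org structured data for Bartholow’s “iPad case” example, generated from Python for illustration; the product details, price, and audience are hypothetical.

```python
# A minimal sketch of the schema idea: turning the bare signifier
# "iPad case" into a machine-readable sign by declaring what the thing
# is, what it costs, and whom it is for. All values are hypothetical.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Leather iPad case",
    "description": "Slim folio case for the iPad Air",
    "audience": {"@type": "Audience", "audienceType": "commuters"},
    "offers": {"@type": "Offer", "price": "29.99", "priceCurrency": "USD"},
}

# Embed this in the page so engines can read meaning, not just keywords.
print(f'<script type="application/ld+json">{json.dumps(product)}</script>')
```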

Chelsea Kerwin, May 13, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Elasticsearch Transparent about Failed Jepsen Tests

May 11, 2015

The article on Aphyr titled “Call Me Maybe: Elasticsearch 1.5.0” demonstrates the ongoing tendency for Elasticsearch to lose data during network partitions. The author goes through several scenarios and finds that users can lose documents if nodes crash, if a primary pauses, or if a network partitions into two intersecting or two discrete components. The article explains:

“My recommendations for Elasticsearch users are unchanged: store your data in a database with better safety guarantees, and continuously upsert every document from that database into Elasticsearch. If your search engine is missing a few documents for a day, it’s not a big deal; they’ll be reinserted on the next run and appear in subsequent searches. Not using Elasticsearch as a system of record also insulates you from having to worry about ES downtime during elections.”
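To make that recommendation concrete, here is a minimal sketch of the continuous-upsert pattern using the official elasticsearch-py client; the index name, ID field, and row source are hypothetical, not details from the article.

```python
# A minimal sketch of the pattern the author recommends: keep the
# system of record elsewhere and continuously upsert every document
# into Elasticsearch. Names are hypothetical; assumes elasticsearch-py.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

def sync(rows):
    """Re-index every row; indexing by _id overwrites stale copies."""
    actions = (
        {"_index": "articles", "_id": row["id"], "_source": row}
        for row in rows
    )
    helpers.bulk(es, actions)

# Run on a schedule: any document ES lost during a partition simply
# reappears on the next pass and shows up in subsequent searches.
```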

The article praises Elasticsearch for its forthright approach to documenting the problems, especially the detailed page on resiliency it opened in September. That page settles the question among users of what it meant that the ticket was closed: it states pretty clearly that ES failed its Jepsen tests. The article exhorts other vendors to follow a similar regimen of supplying such information to users.

Chelsea Kerwin, May 11, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

SharePoint Release Delayed and Criticized

April 28, 2015

Microsoft was lauded earlier in the year for committing to a new on-premises version of SharePoint, SharePoint Server 2016. Since then, however, the rollout has been beset by delays and criticism that on-site installations will continue to play the ugly stepsister to the cloud. The United Kingdom’s The Register provides a cynical assessment of the latest news in the article, “SharePoint’s Next Release Delayed Until Deep into 2016.”

The article begins:

“Exchange Server 2016 will be not much more than a rollup of features already deployed to cloud Exchange . . . Redmond’s also revealed that SharePoint server won’t get another refresh until the second quarter of 2016. There won’t even be a beta – or technical preview as Microsoft likes to call them these days – to play with until 2015’s fourth quarter . . . But all those cloudy bits may not be so welcome for the many smaller organisations that run SharePoint, or for organisations waiting for an upgrade. SharePoint 2013 was released in October 2012, so such users are looking at nearly four years between drinks.”

Every SharePoint rollout seems to be plagued by trouble of some variety, so the delay comes as little surprise. The test will be whether tried and true on-premises customers will settle for what increasingly seems to be little support. We will withhold ultimate judgment until the release is made available. In the meantime, head over to ArnoldIT.com to keep up with the latest news. Stephen E. Arnold has made a career out of following all things search, and his dedicated SharePoint feed keeps you informed at a glance.

Emily Rae Aldridge, April 28, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

EnterpriseJungle Launches SAP-Based Enterprise Search System

April 27, 2015

A new enterprise search startup is leveraging the SAP HANA Cloud Platform, we learn from “EnterpriseJungle Tames Enterprise Search” at SAP’s News Center. The company states that its goal is to make collaboration easier and more effective with a feature it calls “deep people search.” Writer Susan Galer cites EnterpriseJungle Principal James Sinclair when she tells us:

“Using advanced algorithms to analyze data from internal and external sources, including SAP Jam, SuccessFactors, wikis, and LinkedIn, the applications help companies understand the make-up of its workforce and connect people quickly….

Who Can Help Me is a pre-populated search tool allowing employees to find internal experts by skills, location, project requirements and other criteria which companies can also configure, if needed. The Enterprise Q&A tool lets employees enter any text into the search bar, and find experts internally or outside company walls. Most companies use the prepackaged EnterpriseJungle solutions as is for Human Resources (HR), recruitment, sales and other departments. However, Sinclair said companies can easily modify search queries to meet any organization’s unique needs.”

EnterpriseJungle users manage their company’s data through SAP’s Lumira dashboard. Galer shares Sinclair’s example of one company in Germany, which used EnterpriseJungle to match employees to appropriate new positions when it made a whopping 3,000 jobs obsolete. Though the software is now designed primarily for HR and data-management departments, Sinclair hopes the collaboration tool will permeate the entire enterprise.

Cynthia Murrell, April 27, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Cyber Wizards Speak Publishes Exclusive BrightPlanet Interview with William Bushee

April 7, 2015

Cyber OSINT continues to reshape information access. Traditional keyword search has been supplanted by higher value functions. One of the keystones for systems that push “beyond search” is technology patented and commercialized by BrightPlanet.

A search on Google often returns irrelevant or stale results. How can an organization obtain access to current, in-depth information from Web sites and services not comprehensively indexed by Bing, Google, ISeek, or Yandex?

The answer to the question is to turn to the leader in content harvesting, BrightPlanet. The company was one of the first, if not the first, to develop systems and methods for indexing information ignored by Web indexes that follow links. Founded in 2001, BrightPlanet has emerged as a content processing firm able to make accessible structured and unstructured data ignored, skipped, or not indexed by Bing, Google, and Yandex.

In the BrightPlanet seminar open to law enforcement, intelligence, and security professionals, BrightPlanet said the phrase “Deep Web” is catchy, but it does not explain what type of information is available to a person with a Web browser. A familiar example is querying a dynamic database, like an airline’s flight schedule. Other types of “Deep Web” content may require the user to register. Once logged into the system, users can query the content available to a registered user. A service like Bitpipe requires registration and a user name and password each time I want to pull a white paper from the Bitpipe system. BrightPlanet can handle both types of indexing tasks and many more. BrightPlanet’s technology is used by governmental agencies, businesses, and service firms to gather information pertinent to people, places, events, and other topics.
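For a rough sense of what registration-gated harvesting involves, here is an illustrative Python sketch (the concept only, not BrightPlanet’s technology): carry a logged-in session, then fetch content a link-following crawler would never reach. Every URL, field name, and credential below is hypothetical.

```python
# A minimal sketch of registration-gated "Deep Web" harvesting: log in
# once, then pull pages hidden behind the login wall. All URLs and
# credentials are hypothetical; assumes the requests library.
import requests

session = requests.Session()
session.post(
    "https://example-library.com/login",              # hypothetical portal
    data={"username": "analyst", "password": "s3cret"},  # registered account
)
# The session cookie now unlocks content a link-following crawler
# would never see.
report = session.get("https://example-library.com/reports/whitepaper-42.pdf")
with open("whitepaper-42.pdf", "wb") as f:
    f.write(report.content)
```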

In an exclusive interview, William Bushee, the chief executive officer at BrightPlanet, reveals the origins of the BrightPlanet approach. He told Cyber Wizards Speak:

I developed our initial harvest engine. At the time, little work was being done around harvesting. We filed a number of US patent applications for our unique systems and methods. We were awarded eight, primarily around the ability to conduct Deep Web harvesting, a term BrightPlanet coined.

The BrightPlanet system is available as a cloud service. Bushee noted:

We have migrated from an on-site license model to a SaaS [software as a service] model. However, the biggest change came after realizing we could not put our customers in charge of conducting their own harvests. We thought we could build the tools and train the customers, but it just didn’t work well at all. We now harvest content on our customers’ behalf for virtually all projects and it has made a huge difference in data quality. And, as I mentioned, we provide supporting engineering and technical services to our clients as required. Underneath, however, we are the same sharply focused, customer centric, technology operation.

The company also offers data as a service. Bushee explained:

We’ve seen many of our customers use our Data-as-a-Service model to increase revenue and customer share by adding new datasets to their current products and service offerings. These additional datasets develop new revenue streams for our customers and allow them to stay competitive maintaining existing customers and gaining new ones altogether. Our Data-as-a-Service offering saves time and money because our customers no longer have to invest development hours into maintaining data harvesting and collection projects internally. Instead, they can access our harvesting technology completely as a service.

The company has accelerated its growth through a partnering program. Bushee stated:

We have partnered with K2 Intelligence to offer a full end-to-end service to financial institutions, combining our harvest and enrichment services with additional analytic engines and K2’s existing team of analysts. Our product offering will be a service monitoring various Deep Web and Dark Web content enriched with other internal data to provide a complete early warning system for institutions.

BrightPlanet has emerged as an excellent resource for specialized content services. In addition to providing a client-defined collection of information, the firm can craft custom-tailored solutions for special content needs involving the Deep Web. The company has an excellent reputation among law enforcement, intelligence, and security professionals. The BrightPlanet technologies can generate a stream of real-time content to individuals, work groups, or other automated systems.

BrightPlanet has offices in Washington, DC, and can be contacted via the BrightPlanet Web site at www.brightplanet.com.

The complete interview is available at the Cyber Wizards Speak web site at www.xenky.com/brightplanet.

Stephen E Arnold, April 7, 2015

Blog: www.arnoldit.com/wordpress Frozen site: www.arnoldit.com Current site: www.xenky.com

 

Apache Samza Revamps Databases

March 19, 2015

Databases have advanced far beyond the basic relational model. They need to be consistently managed and updated in real time to remain useful. The Apache Software Foundation’s Apache Samza helps maintain asynchronous stream processing networks; Samza was made in conjunction with Apache Kafka.

If you are interested in learning how to use Apache Samza, the Confluent blog posted “Turning The Database Inside-Out With Apache Samza” by Martin Kleppmann. Kleppmann recorded a seminar he gave at Strange Loop 2014 that explains how the approach can improve many features of a database:

“This talk introduces Apache Samza, a distributed stream processing framework developed at LinkedIn. At first it looks like yet another tool for computing real-time analytics, but it’s more than that. Really it’s a surreptitious attempt to take the database architecture we know, and turn it inside out. At its core is a distributed, durable commit log, implemented by Apache Kafka. Layered on top are simple but powerful tools for joining streams and managing large amounts of data reliably.”
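The “inside out” idea can be sketched in a few lines. In the toy Python example below (an illustration of the concept, not Samza code), an append-only log of events plays Kafka’s role as the source of truth, and the queryable table is just a view derived by replaying it, which is the job Samza performs at scale; the events are hypothetical and held in memory.

```python
# A minimal sketch of the "database inside out" idea: the append-only
# log (Kafka's role) is the source of truth; the table is a view
# computed by replaying it (Samza's role). Events are hypothetical.
from collections import defaultdict

log = [
    {"user": "alice", "page": "/home"},
    {"user": "bob", "page": "/search"},
    {"user": "alice", "page": "/search"},
]

# Rebuilding the "table" is as simple as replaying from offset zero.
page_views = defaultdict(int)
for event in log:
    page_views[event["page"]] += 1

print(dict(page_views))  # {'/home': 1, '/search': 2}
```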

Learning new ways to improve database features and functionality always improves your skill set. Apache software also forms the basis for many open source projects and startups. Martin Kleppmann’s talk might give you a brand new idea, or at least improve your database.

Whitney Grace, March 19, 2015

Stephen E Arnold, Publisher of CyberOSINT at www.xenky.com
