Pharmaceutical Research Made Simple
October 3, 2016
Pharmaceutical companies are a major power in the United States. Their power comes from the medicine they produce and the wealth they generate. To maintain both, pharmaceutical companies conduct a great deal of market research. Market research deals in people’s opinions and reactions; in other words, it contains information that is hard to distill into black-and-white data. Lexalytics is a big data platform that uses sentiment analysis to turn market research into usable data.
Inside Big Data explains how “Lexalytics Radically Simplifies Market Research And Voice Of Customer Programs For The Pharmaceutical Industry” with a new package called the Pharmaceutical Industry Pack. Lexalytics uses a combination of machine learning and natural language processing to understand the meaning and sentiment of text documents. The new pack can help pharmaceutical companies interpret how customers react to medications, what their symptoms are, and what side effects they experience.
“Our customers in the pharmaceutical industry have told us that they’re inundated with unstructured data from social conversations, news media, surveys and other text, and are looking for a way to make sense of it all and act on it,” said Jeff Catlin, CEO of Lexalytics. “With the Pharmaceutical Industry Pack — the latest in our series of industry-specific text analytics packages — we’re excited to dramatically simplify the jobs of CEM and VOC pros, market researchers and social marketers in this field.”
Along with basic natural language processing features, the Lexalytics Pharmaceutical Industry Pack contains 7,000 sentiment terms drawn from healthcare content, as well as other medical references, to help interpret market research data. Lexalytics makes market research easier and offers insights that would otherwise go unnoticed.
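The general technique behind a sentiment term pack can be sketched in a few lines. The lexicon below is a hypothetical toy stand-in for the pack’s 7,000 healthcare terms, not actual Lexalytics data, and real systems also handle negation, phrases, and context:

```python
# Hypothetical domain-specific sentiment lexicon: term -> weight.
# These terms and weights are invented for illustration only.
PHARMA_SENTIMENT = {
    "relief": 1.0,
    "effective": 0.8,
    "tolerable": 0.3,
    "nausea": -0.6,
    "dizziness": -0.6,
    "severe": -0.9,
}

def score(text: str) -> float:
    """Sum the lexicon weights of every known term in the text."""
    tokens = text.lower().split()
    return sum(PHARMA_SENTIMENT.get(tok, 0.0) for tok in tokens)

# A patient comment with mixed sentiment nets out slightly positive.
print(round(score("effective pain relief but mild nausea"), 2))
```

Run over thousands of survey responses or social posts, even a crude score like this separates broadly positive reactions to a medication from complaints about side effects.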
Whitney Grace, October 3, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
VirtualWorks Purchases Natural Language Processing Firm
July 8, 2016
Another day, another merger. PR Newswire released a story, “VirtualWorks and Language Tools Announce Merger,” which covers VirtualWorks’ purchase of Language Tools. Through Language Tools, VirtualWorks will inherit computational linguistics and natural language processing technologies. VirtualWorks is an enterprise search firm. Erik Baklid, Chief Executive Officer of VirtualWorks, is quoted in the article:
“We are incredibly excited about what this combined merger means to the future of our business. The potential to analyze and make sense of the vast unstructured data that exists for enterprises, both internally and externally, cannot be understated. Our underlying technology offers a sophisticated solution to extract meaning from text in a systematic way without the shortcomings of machine learning. We are well positioned to bring to market applications that provide insight, never before possible, into the vast majority of data that is out there.”
This is another case of a company positioning itself as a leader in enterprise search. Is it anything special? Well, the news release mentions several core technologies that will be bolstered by the merger: text analytics, data management, and discovery techniques. We will have to wait and see what the future holds for the company in the enterprise search and business intelligence sector it hopes to lead.
Megan Feil, July 8, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Big Data Myths Debunked
December 4, 2015
An abundance of data is not particularly valuable without the ability to draw conclusions from it. Forbes recognizes the value of data analysis in “Text Analytics Gurus Debunk Four Big Data Myths.” Contributor Barbara Thau observes:
“And while retailers have hailed big data as the key to everything from delivering shoppers personalized merchandise offers to real-time metrics on product performance, the industry is mostly scratching its head on how to monetize all the data that’s being generated in the digital era. One point of departure: Over 80% of all information comes in text format, Tom H.C. Anderson, CEO of OdinText, which markets its text analytics software to clients such as Coca-Cola, told Forbes. So if retailers, for one, ‘aren’t using text analytics in their customer listening, whether they know it or not, they’re not doing too much listening at all,’ he said.”
Anderson and his CTO Chris Lehew went on to outline four data myths, really mistakes, they’ve identified: a misplaced trust in survey scores; putting more weight on social media data than on direct contact from customers; valuing data from new sources over the customer-service department’s records; and refusing to keep an eye on what the competition is doing. See the article for the reasons these pros push back on each.
Text analytics firm OdinText promises to draw a more accurate understanding from its clients’ data collections, whatever industry they are in. The company received its OdinText patent in 2013 and was incorporated earlier this year.
Cynthia Murrell, December 4, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
SAS Text Miner Promises Unstructured Insight
July 10, 2015
Big data tools help organizations analyze more than their old, legacy data. While legacy data does help an organization study how its processes have changed, that data is old and does not reflect immediate, real-time trends. SAS offers a product that bridges old data with new, as well as unstructured data with structured data.
The SAS Text Miner is built on Teragram technology. Its features include document theme discovery, a function that finds relations across document collections; automatic Boolean rule generation; high-performance text mining that quickly evaluates large document collections; term profiling and trending, which evaluates how relevant terms are in a collection and how they are used; multiple language support; visual interrogation of results; easy text import; flexible entity options; and a user-friendly interface.
The SAS Text Miner is specifically programmed to discover relationships in data, automate activities, and identify keywords and phrases. The software uses predictive models to analyze data and surface new insights:
“Predictive models use situational knowledge to describe future scenarios. Yet important circumstances and events described in comment fields, notes, reports, inquiries, web commentaries, etc., aren’t captured in structured fields that can be analyzed easily. Now you can add insights gleaned from text-based sources to your predictive models for more powerful predictions.”
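The pattern the passage describes, folding text-derived features into a predictive model’s inputs alongside structured fields, might look like this in outline. The field names and record are hypothetical, and a real model would then train on these features rather than merely assemble them:

```python
from collections import Counter

def text_features(note: str) -> Counter:
    """Bag-of-words counts from an unstructured comment field."""
    return Counter(note.lower().split())

def combined_features(record: dict) -> dict:
    """Merge structured fields with text-derived features.
    Field names here are invented for illustration."""
    feats = {f"struct_{k}": v for k, v in record.items() if k != "notes"}
    feats.update({f"word_{w}": c
                  for w, c in text_features(record["notes"]).items()})
    return feats

# Hypothetical record mixing structured fields and a free-text note.
record = {"age": 54, "visits": 3,
          "notes": "patient reports recurring side effects"}
print(combined_features(record))
```

The payoff is the one the quote claims: signals buried in comment fields and notes become ordinary model inputs instead of being discarded.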
Text mining software reveals connections between old and new data, making it one of the basic components of big data.
Whitney Grace, July 10, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph