Investment Group Acquires Lexmark
February 15, 2017
We read with some trepidation the Kansas City Business Journal's article, "Former Perceptive's Parent Gets Acquired for $3.6B in Cash." The parent company referred to here is Lexmark, which bought up one of our favorite search systems, ISYS Search, in 2012 and placed it under its Perceptive subsidiary, based in Lenexa, Kansas. We do hope this valuable tool is not lost in the shuffle.
Reporter Dora Grote specifies:
A few months after announcing that it was exploring ‘strategic alternatives,’ Lexmark International Inc. has agreed to be acquired by a consortium of investors led by Apex Technology Co. Ltd. and PAG Asia Capital for $3.6 billion cash, or $40.50 a share. Legend Capital Management Co. Ltd. is also a member of the consortium.
Lexmark Enterprise Software in Lenexa, formerly known as Perceptive Software, is expected to ‘continue unaffected and benefit strategically and financially from the transaction’ the company wrote in a release. The Lenexa operation — which makes enterprise content management software that helps digitize paper records — dropped the Perceptive Software name for the parent’s brand in 2014. Lexmark, which acquired Perceptive for $280 million in cash in 2010, is a $3.7 billion global technology company.
If the Lexmark Enterprise Software (formerly Perceptive) division really is unaffected, it will be among the lucky ones. Grote notes that Lexmark has announced more than a thousand job cuts amid restructuring, and she observes that the company's buildings in Lenexa have considerable space up for rent. Lexmark CEO Paul Rooke is expected to keep his job, and headquarters should remain in Lexington, Kentucky.
Cynthia Murrell, February 15, 2017
Oracle Pays Big Premium for NetSuite and Larry Ellison Benefits
February 6, 2017
The article on Reuters titled "Oracle-NetSuite Deal May Be Sweetest for Ellison" emphasizes the perks of being an executive chairman like Oracle's Larry Ellison, who ranks as the third-richest person in America and fifth in the world. The article suggests that his fortune of over $50 billion is often seen as mingling with Oracle's $160 billion in a way that makes Reuters, if no one else, very uncomfortable. The article does offer some context for the most recent acquisition: Oracle paid a 44% premium for NetSuite, a company in which Ellison owns a 45% stake.
NetSuite was founded by an ex-Oracle employee, bankrolled by Ellison. While Oracle concentrated on selling enterprise software to giant corporations, the upstart focused on servicing small and medium-sized companies using the cloud. The two companies’ businesses have increasingly overlapped as larger customers have become comfortable using web-based software.
As a result, it makes strategic sense to combine the two firms. And the process seems to have been handled right, with a committee of independent Oracle directors calling the shots.
The article also points out that such high premiums aren't all that unusual; Salesforce.com recently paid a 56% premium for Demandware. But in this case, things are complicated by Ellison's potential conflict of interest. Had Oracle invested more in its cloud business, or in NetSuite itself, four or five years ago, it would not be forking over just under $10 billion now.
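For readers who want the arithmetic behind "44% premium," here is a minimal sketch. The offer price is the widely reported $109 per share; the pre-announcement "unaffected" price below is an illustrative placeholder chosen to reproduce the reported figure, not a verified quote.

```python
# Back-of-the-envelope acquisition-premium calculation.
# offer_price is the reported per-share offer; unaffected_price is an
# illustrative placeholder for the pre-announcement trading price.
offer_price = 109.00       # USD per share, as reported
unaffected_price = 75.70   # USD per share, placeholder

premium = offer_price / unaffected_price - 1
print(f"premium: {premium:.0%}")   # -> premium: 44%
```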
Chelsea Kerwin, February 6, 2017
Fight Fake News with Science
February 1, 2017
With all the recent chatter around “fake news,” one researcher has decided to approach the problem scientifically. An article at Fortune reveals “What a Map of the Fake-News Ecosystem Says About the Problem.” Writer Mathew Ingram introduces us to data-journalism expert and professor Jonathan Albright, of Elon University, who has mapped the fake-news ecosystem. Facebook and Google are just unwitting distributors of faux facts; Albright wanted to examine the network of sites putting this stuff out there in the first place. See the article for a description of his methodology; Ingram summarizes the results:
More than anything, the impression one gets from looking at Albright's network map is that there are some extremely powerful 'nodes,' or hubs, that propel a lot of the traffic involving fake news. And it also shows an entire universe of sites that many people have probably never heard of. Two of the largest hubs Albright found were a site called Conservapedia—a kind of Wikipedia for the right wing—and another called Rense, both of which got huge amounts of incoming traffic. Other prominent destinations were sites like Breitbart News, DailyCaller and YouTube (the latter possibly as an attempt to monetize their traffic).
Albright said he specifically stayed away from trying to determine what or who is behind the rise of fake news. … He just wanted to try and get a handle on the scope of the problem, as well as a sense of how the various fake-news distribution or creation sites are inter-connected. Albright also wanted to do so with publicly-available data and open-source tools so others could build on it.
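The article does not name Albright's tools, but his hub-finding step can be approximated with common open-source libraries. Here is a minimal sketch using networkx; the sites and links are invented placeholders, and ranking by incoming links is our own stand-in for his methodology.

```python
# Minimal sketch of hub detection in a hyperlink graph.
# The sites and links below are invented placeholders.
import networkx as nx

edges = [  # (linking_site, linked_site)
    ("site-a.example", "hub-1.example"),
    ("site-b.example", "hub-1.example"),
    ("site-c.example", "hub-1.example"),
    ("site-a.example", "hub-2.example"),
    ("site-c.example", "hub-2.example"),
]
graph = nx.DiGraph(edges)

# Rank sites by incoming links -- a simple proxy for the "huge amounts
# of incoming traffic" that marked the hubs in Albright's map.
by_in_degree = sorted(graph.in_degree(), key=lambda kv: kv[1], reverse=True)
print(by_in_degree)  # hub-1 first, with three inbound links
```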
Albright also pointed out the folly of speculating on sources of fake news; such guesswork only “adds to the existing noise,” he noted. (Let’s hear it for common sense!) Ingram points out that, armed with Albright’s research, Google, Facebook, and other outlets may be better able to combat the problem.
Cynthia Murrell, February 1, 2017
Rise of Fake News Should Have All of Us Questioning Our Realities
January 31, 2017
The article on NBC titled "Five Tips on How to Spot Fake News Online" underscores the catastrophic effects of "fake news," or news that flat-out delivers false and misleading information. It is important to separate "fake news" from ideologically slanted news sources and the mess of other issues dragging any semblance of journalistic integrity through the mud, but the article focuses on a key point: the absolute best practice is to take in a variety of news sources. Of course, when it comes to honest-to-goodness "fake news," we would all be better off never reading it in the first place. The article states,
A growing number of websites are espousing misinformation or flat-out lies, raising concerns that falsehoods are going viral over social media without any mechanism to separate fact from fiction. And there is a legitimate fear that some readers can't tell the difference. A study released by Stanford University found that 82 percent of middle schoolers couldn't distinguish authentic news sources from ads labeled as "sponsored content." The disconnect between true and false has been a boon for companies trying to turn a quick profit.
So how do we separate fact from fiction? Check the web address (avoiding .lo and .co.com addresses), research the author, differentiate between blogging and journalism, and, again, rely on a variety of sources: print, TV, and digital. In a time when even the President-to-be, a man with the best intelligence in the world at his fingertips, chooses to spread fake news (aka nonsense) via Twitter claiming he won the popular vote (he did not), we all need to step up and examine the information we consume and allow to shape our worldview.
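The web-address tip, at least, is easy to mechanize. Below is a toy sketch of that one heuristic; the function name is ours, and the suffix list simply encodes the article's .lo and .co.com examples, so treat it as an illustration rather than a real fake-news detector.

```python
# Toy version of the article's first tip: flag suspicious web addresses.
# The suffix list encodes the article's two examples only.
from urllib.parse import urlparse

SUSPICIOUS_SUFFIXES = (".lo", ".co.com")

def looks_suspicious(url: str) -> bool:
    """Return True when the hostname ends in a known-dubious suffix."""
    host = urlparse(url).hostname or ""
    return host.endswith(SUSPICIOUS_SUFFIXES)

print(looks_suspicious("http://abcnews.co.com/article"))    # True
print(looks_suspicious("https://www.nbcnews.com/article"))  # False
```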
Chelsea Kerwin, January 31, 2017
Declassified CIA Data Makes History Fun
January 26, 2017
One piece of advice I have always heard for getting kids interested in learning about the past is "making it come alive." Textbooks suck at "making anything come alive" other than naps. What really makes history a reality and more interesting are documentaries, eyewitnesses, and actual artifacts. The CIA has a wealth of history, and History Tech shares with us some rare finds: "Tip Of The Week: 8 Decades Of Super Cool Declassified CIA Maps." While the CIA World Factbook is one of the best history and geography tools on the Web, the CIA Flickr account is chock full of declassified goodies, such as spy tools, maps, and more.
The article’s author shared that:
The best part of the Flickr account for me is the eight decades of CIA maps starting back in the 1940s prepared for the president and various government agencies. These are perfect for helping provide supplementary and corroborative materials for all sorts of historical thinking activities. You'll find a wide variety of map types that could also easily work as a stand-alone primary source.
These declassified maps were actually used by CIA personnel, political advisors, and presidents to make decisions that continue to impact our lives today. The CIA Flickr account is only one example of how the Internet is a wonderful tool for making history come to life. And although you generally need to be cautious about where online information comes from, these are official CIA records: genuine primary sources.
Whitney Grace, January 26, 2017
Cybersecurity Technologies Fueled by Artificial Intelligence
December 28, 2016
With terms like "virus" being staples of the cybersecurity realm, it is no surprise that the human immune system inspired the technology fueling one relatively new digital threat defense startup. The TechRepublic article "Darktrace bolsters machine learning-based security tools to automatically attack threats" reveals more details and context about Darktrace's technology and positioning. Founded in 2013, Darktrace recently announced it has raised $65 million to help fund its global expansion. Its product suite comprises four products, including its basic cyber threat defense solution, also called Darktrace. The article expands on the offerings:
Darktrace also offers its Darktrace Threat Visualizer, which provides analysts and CXOs with a high-level, global view of their enterprise. Darktrace Antigena complements the core Darktrace product by automatically defending against potential threats that have been detected, acting as digital "antibodies." Finally, the Industrial Immune System is a version of Darktrace designed for Industrial Control Systems (ICS). The key value provided by Darktrace is the fact that it relies on unsupervised machine learning, and it is able to detect threats on its own without much human interaction.
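To make "unsupervised machine learning" concrete, here is a generic anomaly-detection sketch in the spirit of the immune-system metaphor. It is emphatically not Darktrace's proprietary algorithm; the model choice (scikit-learn's IsolationForest) and the synthetic traffic features are our own assumptions.

```python
# Generic unsupervised anomaly detection on synthetic traffic features.
# NOT Darktrace's actual algorithm -- an illustrative stand-in only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" behavior: [bytes sent (KB), connections per minute].
normal_traffic = rng.normal(loc=[500, 10], scale=[50, 2], size=(1000, 2))

# Fit on unlabeled data: the model learns what "normal" looks like
# without any human-supplied threat labels.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

suspect = [[5000, 120]]            # a host suddenly far outside the norm
print(model.predict(suspect))      # -> [-1], i.e., flagged as anomalous
```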
We echo this article's takeaway that machine learning and other artificial intelligence technologies continue to grow in the cybersecurity sector. Attention on AI is only building in this industry and others. Perhaps AI is particularly well-suited to cybersecurity because its behind-the-scenes nature mirrors that of Dark Web-related crimes.
Megan Feil, December 28, 2016
UN Addresses Dark Web Drug Trade
December 16, 2016
Because individual nations are having spotty success fighting dark-web-based crime, the United Nations is stepping up. DeepDotWeb reports, "UN Trying to Find Methods to Stop the Dark Web Drug Trade." The brief write-up cites the latest annual report from the United Nations Office on Drugs and Crime (UNODC), which reveals new approaches to tackling drugs on the dark web. The article explains why law-enforcement agencies around the world have been having trouble fighting the hidden trade: part of the problem is technical, but another part is political and jurisdictional. We learn:
Since most users rely on Tor and encryption technologies to remain hidden while accessing dark net marketplaces and forums, law enforcement authorities have trouble identifying and locating their IP addresses. …
Police often find themselves trapped within legal boundaries. The most common legal issue authorities face in these cases is which jurisdiction they should use, especially when the suspect's location is unknown. There are problems regarding national sovereignty, too. When agencies hack a dark net user's account, they do not really know in which country the malware will land. For this reason, the UNODC sees a major issue in sharing intelligence when it is not clear where in the world that intelligence would be best used.
The write-up notes that the FBI has been using tricks like hacking Dark Net users and tapping into DOD research. That agency is also calling for laws that would force suspects to decrypt their devices upon being charged. In the meantime, the UNODC supports the development of tools that will enhance each member state’s ability to “collect and exploit digital evidence.” To see the report itself, navigate here, where you will find an overview and a link to the PDF.
Cynthia Murrell, December 16, 2016
Nobody Really Knows What Goes on over Dark Web
December 16, 2016
While the mainstream media believes the Dark Web is full of dark actors, research by digital security firms says most of its content is legal. That tells us one thing: the Dark Web is still a mystery.
SC Magazine, in an article titled "Technology Helping Malicious Business on the Dark Web Grow," says:
The Dark Web has long had an ominous appeal to Netizens with more illicit leanings and interests. But given a broadening reach and new technologies to access this part of the web and obfuscate dealings here, the base of dark web buyers and sellers is likely growing.
On the other hand, the article also says:
But despite its obvious and well-earned reputation for its more sinister side, at least one researcher says that as the dark web expands, the majority of what’s there is actually legal. In its recent study, intelligence firm Terbium Labs found that nearly 55 percent of all the content on the dark web is legal in nature, meaning that it may be legal pornography, or controversial discussions, but it’s not explicitly illegal by U.S. law.
The truth might be entirely different. The Open Web is equally utilized by criminals carrying out illegal activities, while the Dark Web, accessible through tools like the Tor Browser, allows anyone to operate anonymously. We may never fully know whether the Dark Web is the mainstay of criminals or of individuals who simply want to work under a cloak of anonymity. Until then, it is a guessing game.
Vishal Ingole, December 16, 2016
The Data Sharing of Healthcare
December 8, 2016
Machine learning tools like IBM's Watson artificial intelligence can and will improve healthcare access and diagnosis, but the problem is getting onto the road to improvement. Implementing new technology is costly, between the equipment itself and staff training, and there is always the chance it will create more problems than it resolves. If the new technology makes a job easier and resolves real situations, however, you are on the path to improvement. The UK is heading that way, says TechCrunch in "DeepMind Health Inks New Deal With UK's NHS To Deploy Streams App In Early 2017."
London's NHS Royal Free Hospital will employ DeepMind Health in 2017, taking advantage of its data sharing capabilities. DeepMind Health, owned by Google, focuses on driving the application of machine learning algorithms in preventative medicine. The NHS and DeepMind Health had a prior agreement, but their use of patients' personal information came into question after a New Scientist freedom of information request. That information was used to power Streams, an app that sent clinicians alerts about patients at risk of acute kidney injury. However, the ICO and MHRA shut Streams down when it was discovered the app had never been registered as a medical device.
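For a sense of what such an alert involves: the UK's published acute kidney injury detection guidance flags cases where serum creatinine rises sharply against a patient's baseline. Below is a deliberately simplified sketch of that ratio check; it is our own illustration, not DeepMind's Streams code, and real deployments use the full NHS England algorithm with staging and baseline-selection rules.

```python
# Simplified creatinine-ratio check along the lines of published NHS
# AKI guidance -- NOT DeepMind's actual Streams implementation.
def aki_alert(current_creatinine: float, baseline_creatinine: float) -> bool:
    """Flag possible acute kidney injury when the current serum
    creatinine is at least 1.5x the patient's baseline value."""
    return current_creatinine >= 1.5 * baseline_creatinine

# Example: a jump from 80 to 140 umol/L crosses the 1.5x threshold.
print(aki_alert(current_creatinine=140.0, baseline_creatinine=80.0))  # True
```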
The eventual goal is to relaunch Streams, which is part of the deal, but DeepMind has to repair its reputation. The new agreement, along with registering Streams as a medical device, is a start. For healthcare apps to function properly, though, they need high-quality data to be built and tested against:
The point is, healthcare-related AI needs very high-quality data sets to nurture the kind of smarts DeepMind is hoping to be able to build. And the publicly funded NHS has both a wealth of such data and a pressing need to reduce costs — incentivizing it to accept the offer of “free” development work and wide-ranging partnerships with DeepMind…
Streams is the first step toward a healthcare system powered by digital healthcare products. As we have already seen, the stumbling block is protecting personal information while still powering the apps so they can work. Where is the line between the two drawn?
Whitney Grace, December 8, 2016
The Noble Quest Behind Semantic Search
November 25, 2016
A brief write-up at the Ontotext blog, "The Knowledge Discovery Quest," presents a noble vision of the search field. Philologist and blogger Teodora Petkova observes that semantic search is the key to bringing together data from different sources and exploring connections. She elaborates:
On a more practical note, semantic search is about efficient enterprise content usage, as one of the biggest losses of knowledge happens due to inefficient management and retrieval of information. The ability to search for meaning, not for keywords, brings us a step closer to efficient information management.
If semantic search had an icon separate from the one traditional search has, it would be a microscope. Why? Because semantic search looks at content as if through the magnifying lens of a microscope. The technology helps us explore large numbers of systems and the connections between them. Sharpening our ability to join the dots, semantic search enhances the way we look for clues and compare correlations on our knowledge discovery quest.
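One common way systems "search for meaning, not for keywords" is to compare vector representations of texts rather than matching terms. Here is a bare-bones sketch of that idea; the tiny hand-made vectors stand in for real semantic embeddings, and this is a generic illustration rather than Ontotext's technology.

```python
# Bare-bones meaning-based matching: query and documents are compared
# as vectors, not by shared keywords. The vectors are hand-made
# stand-ins for real semantic embeddings.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = {
    "contract renewal policy": np.array([0.9, 0.1, 0.0]),
    "office lunch menu":       np.array([0.0, 0.2, 0.9]),
}
query = np.array([0.8, 0.2, 0.1])  # e.g., "when do agreements expire?"

best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # -> "contract renewal policy", despite no shared keywords
```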
At the bottom of the post is a slideshow on this "knowledge discovery quest." Sure, it also serves to illustrate how Ontotext could help, but we can't blame the company for drumming up business through its own blog. We actually appreciate its approach to semantic search, and we'd be curious to see how it manages the intricacies of content conversion and normalization. Founded in 2000, Ontotext is based in Bulgaria.
Cynthia Murrell, November 25, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph