IT Architecture Needs to Be More Seamless
August 14, 2015
IT architecture might appear uniform across the board, but the standards change from industry to industry. Rupert Brown wrote “From BCBS to TOGAF: The Need For a Semantically Rigorous Business Architecture” for Bob’s Guide, in which he discusses how TOGAF is the de facto standard for global enterprise architecture. He explains that while TOGAF has its strengths, it also has significant weaknesses, chief among them its reliance on diagrams and on PowerPoint to produce them.
Brown spends a large portion of the article stressing that the information content and model are what matter most; a diagram should only be rendered from them later. He goes on to argue that as industries have advanced, the tools have become more complex, making a more universal approach to IT architecture all the more important.
What is Brown’s supposed solution? Semantics!
“The mechanism used to join the dots is Semantics: all the documents that are the key artifacts that capture how a business operates and evolves are nowadays stored by default in Microsoft or Open Office equivalents as XML and can have semantic linkages embedded within them. The result is that no business document can be considered an island any more – everything must have a reason to exist.”
The reason TOGAF has not been standardized using semantics is the lack of a common thread to connect the various architecture models together. A standardized XBRL language for financial and regulatory reporting would help get the process started, but the biggest obstacle, he claims, will be the people who make a decent living producing PowerPoint decks.
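Brown offers no code, but the notion of semantic linkages embedded in document XML is easy to illustrate. The sketch below (Python, with a hypothetical namespace and property names of our own invention, not any vocabulary Brown cites) attaches machine-readable links from a business document to the concepts it describes:

    import xml.etree.ElementTree as ET

    # Hypothetical namespace for illustration; not an official TOGAF or XBRL vocabulary.
    SEM = "http://example.com/business-architecture#"
    ET.register_namespace("sem", SEM)

    def add_semantic_link(doc_root, predicate, target_uri):
        """Attach a subject-predicate-object link to the document's metadata."""
        link = ET.SubElement(doc_root, f"{{{SEM}}}link")
        link.set("predicate", predicate)   # e.g., "describesCapability"
        link.set("target", target_uri)     # URI of the business concept

    # Build a toy document and link it to business concepts.
    root = ET.Element("document", {"title": "Quarterly Risk Report"})
    add_semantic_link(root, "describesCapability", "http://example.com/capabilities/credit-risk")
    add_semantic_link(root, "reportsUnder", "http://example.com/regulation/BCBS-239")
    print(ET.tostring(root, encoding="unicode"))

Once every document carries links like these, an architecture model can be assembled by crawling the links rather than redrawing diagrams.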
Brown calls for a global reporting standard for all industries, but that is a pie-in-the-sky hope unless governments impose regulations or all industries reach a meeting of the minds. Why? Different industries do not always mesh (think engineering firm vs. publishing house), and each has its own list of needs and concerns. Why not focus on establishing standards within one industry rather than across the board?
Whitney Grace, August 14, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
CounterTack Partners with ManTech Cyber Solutions for a More Comprehensive Platform
August 13, 2015
A new acquisition by CounterTack brings predictive capability to that company’s security offerings, we learn from “CounterTack Acquires ManTech Cyber Solutions” at eWeek. Specifically, it is a division of ManTech International, dubbed ManTech Cyber Solutions International (MCSI), that has been snapped up under undisclosed terms by the private security firm.
CounterTack president and CEO Neal Creighton says the beauty of the deal lies in the lack of overlap between their tech and what MCSI brings to the table; while their existing products can tell users what is happening or has already happened, MCSI’s can tell them what to watch out for going forward. Writer Sean Michael Kerner elaborates:
“MCSI’s technology provides a lot of predictive capabilities around malware that can help enterprises determine how dangerous a malicious payload might be, Creighton said. Organizations often use the MCSI Responder Pro product after an attack has occurred to figure out what has happened. In contrast, the MCSI Active Defense product looks at issues in real time to make predictions, he said. A big area of concern for many security vendors is the risk of false positives for security alerts. With the Digital DNA technology, CounterTack will now have a predictive capability to be able to better determine the risk with a given malicious payload. The ability to understand the potential capabilities of a piece of malware will enable organizations to properly provide a risk score for a security event. With a risk score in place, organizations can then prioritize malware events to organize resources to handle remediation, he said.”
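The article stops at the concept, but the mechanics are straightforward: score each payload by the capabilities it exhibits, then remediate in descending score order. Here is a minimal sketch of that prioritization logic (our own illustration with made-up capability weights, not CounterTack’s Digital DNA scoring):

    from dataclasses import dataclass, field

    # Hypothetical weights: how dangerous each observed malware capability is.
    CAPABILITY_WEIGHTS = {
        "keylogging": 0.8,
        "lateral_movement": 0.9,
        "data_exfiltration": 1.0,
        "persistence": 0.6,
    }

    @dataclass
    class MalwareEvent:
        host: str
        capabilities: set = field(default_factory=set)

    def risk_score(event):
        """Sum the weights of the exhibited capabilities, capped at 1.0."""
        return min(1.0, sum(CAPABILITY_WEIGHTS.get(c, 0.1) for c in event.capabilities))

    events = [
        MalwareEvent("host-a", {"persistence"}),
        MalwareEvent("host-b", {"keylogging", "data_exfiltration"}),
    ]

    # Remediate the highest-risk events first.
    for e in sorted(events, key=risk_score, reverse=True):
        print(f"{e.host}: risk={risk_score(e):.2f}")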
Incorporation of open-source Hadoop means CounterTack can scale to fit any organization, and the products can be deployed on-premises or in the cloud. Creighton notes his company’s primary competitor is security vendor CrowdStrike; we’ll be keeping an eye on both of these promising firms.
Based in Waltham, Massachusetts, CounterTack was founded in 2007. The company declares their Sentinel platform to be the only in-progress attack intelligence and response solution on the market (for now). Founded way back in 1968, ManTech International develops and manages solutions for cyber security, C4ISR, systems engineering, and global logistics from their headquarters in Washington, DC. Both companies are currently hiring; see CounterTack’s opportunities page and ManTech’s careers page.
Cynthia Murrell, August 13, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Teper Returns to SharePoint Division
August 11, 2015
SharePoint is a huge organization within the even larger corporation of Microsoft. Leadership shifts are not uncommon, but they can often point toward something meaningful. The Seattle Times offers some insight into Microsoft’s latest shake-up in its article, “Microsoft Exec Teper Exits Strategy Role, Returns to Sharepoint.”
The article sums up the leadership change:
“Jeff Teper, Microsoft’s former head of corporate strategy, will return to the Office division he left a year ago. Teper, a longtime Office executive, shifted last year to corporate vice president of strategy, reporting to Chief Financial Officer Amy Hood. In April, he moved to Kurt DelBene’s team when that former Microsoft executive returned to the company to lead corporate strategy and planning.”
Teper’s earlier career is telling, as he led Microsoft’s move to Office 365. With the upcoming release of SharePoint Server 2016, users have been assured that on-premises versions will remain an option but that web-based services, including Office 365 features, will continue to shine. For continued updates on the future of SharePoint, stay tuned to the dedicated SharePoint feed on ArnoldIT.com. Stephen E. Arnold has made a career out of search and his work offers a lot of information without a huge investment in time.
Emily Rae Aldridge, August 11, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
How Do We Fail Thee? Let Me Say Some Ways
August 9, 2015
I read “Post Mortems.” The write up is an earthworm. That is my jargon for a list of disconnected items. Humans love lists. Right, Moses? This list points to information about failures. Most of the items have brief comments such as:
Kickstarter. Primary DB became inconsistent with all replicas, which wasn’t detected until a query failed. This was caused by a MySQL bug which sometimes caused “order by” to be ignored.
and
Microsoft. A bad config took down Azure storage.
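The Kickstarter item rewards a second look. A minimal sketch of the failure mode (our own toy data, not Kickstarter’s actual schema): if a statement such as UPDATE ... ORDER BY id LIMIT 1 is replicated verbatim and the replica ignores the ORDER BY, primary and replica quietly update different rows.

    import copy

    # Toy table; the bug only bites when row order is not deterministic.
    rows = [{"id": 3, "paid": False}, {"id": 1, "paid": False}, {"id": 2, "paid": False}]

    def update_one(table, order_respected):
        """Mark one unpaid row as paid; which row depends on the ordering."""
        candidates = sorted(table, key=lambda r: r["id"]) if order_respected else table
        for r in candidates:
            if not r["paid"]:
                r["paid"] = True
                return r["id"]

    primary, replica = copy.deepcopy(rows), copy.deepcopy(rows)
    print("primary updated id:", update_one(primary, order_respected=True))    # id 1
    print("replica updated id:", update_one(replica, order_respected=False))   # id 3
    # Same statement, different rows touched: silent inconsistency until a query fails.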
Interesting. Hopefully the earthworm will be fattened with examples like “Germans in ‘Brains Off, Just Follow Orders’ Hospital Data Centre Faff.” The main idea of that one: some dutiful workers removed the air conditioners from a server room.
Stephen E Arnold, August 9, 2015
How to Use Watson
August 7, 2015
While there are many possibilities for cognitive computing, what makes an idea a reality is its feasibility and real-life application. The Platform explores “The Real Trouble With Cognitive Computing” and the troubles IBM had (has) trying to figure out what to do with the supercomputer it made. The article explains that before Watson became a Jeopardy celebrity, the IBM folks came up with 8,000 potential experiments for Watson to run, but pursued only 20 percent of them.
The range is small due to many factors, including bug testing, gauging progress with fuzzy outputs, playing around with algorithmic interactions, testing in isolation, and more. This leads to the “messy” way of developing the experiments. Ideally, developers would have one big knowledge model they could query, but that option does not exist. The messy way involves keeping the data sources intact, layering natural language processing, machine learning, and knowledge representation on top, and then distributing the whole thing across an infrastructure.
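The article stays at the conceptual level, but the “messy” approach it describes amounts to chaining independent enrichment stages over untouched sources. A bare-bones sketch (stage names and logic are our own placeholders, not IBM’s pipeline):

    # Each stage reads and enriches the same record; the source text stays intact.
    def nlp_stage(doc):
        doc["entities"] = doc["text"].split()        # stand-in for entity extraction
        return doc

    def ml_stage(doc):
        doc["score"] = len(doc["entities"]) / 100.0  # stand-in for a learned model
        return doc

    def kr_stage(doc):
        doc["linked"] = [e for e in doc["entities"] if e.istitle()]  # stand-in for KB linking
        return doc

    def run_pipeline(doc, stages):
        for stage in stages:
            doc = stage(doc)
        return doc

    print(run_pipeline({"text": "Watson answers Jeopardy questions"}, [nlp_stage, ml_stage, kr_stage]))

Testing any one stage in isolation is easy; predicting how the stages behave once chained is where the fuzziness the article mentions creeps in.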
Here is another key point that makes clear sense:
“The big issue with the Watson development cycle too is that teams are not just solving problems for one particular area. Rather, they have to create generalizable applications, which means what might be good for healthcare, for instance, might not be a good fit—and in fact even be damaging to—an area like financial services. The push and pull and tradeoff of the development cycle is therefore always hindered by this—and is the key barrier for companies any smaller than an IBM, Google, Microsoft, and other giants.”
This is exactly correct! Engineering is not the same as healthcare, and not all computer algorithms transfer across industries. One thing to keep in mind, though, is that you can borrow methods from other industries and come up with new methods or solutions.
Whitney Grace, August 7, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Thunderstone Rumbles About Webinator
August 6, 2015
There is nothing more frustrating than being unable to locate a specific piece of information on a Web site using its search function. Search is supposed to be quick, accurate, and efficient. Even when Google search is employed as a Web site’s search feature, it does not always yield the best results. Thunderstone is a company that specializes in proprietary software applications developed specifically for information management, search, retrieval, and filtering.
Thunderstone has a client list that includes, but is not limited to, government agencies, Internet developers, corporations, and online service providers. The company’s goal is to deliver “product-oriented R&D within the area of advanced information management and retrieval,” which translates to helping their clients find information very, very fast and as accurately as possible. It is the premise of most information management companies. On the company blog it was announced that “Thunderstone Releases Webinator Web Index And Retrieval System Version 13.” Webinator makes it easier to integrate high-quality search into a Web site, and this release has several appealing new features:
- “Query Autocomplete, guides your users to the search they want
- HTML Highlighting, lets users see the results in the original HTML for better contextual information
- Expanded XML/SOAP API allows integration of administrative interface”
We like the HTML highlighting that offers users the ability to backtrack and see a page’s original information source. It is very similar to old-fashioned research: go back to the original source to check a fact’s veracity.
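Thunderstone does not document how its autocomplete works, but the feature itself is simple to illustrate. Below is a minimal prefix-based query autocompleter (entirely our own sketch, not Webinator’s API):

    import bisect

    class Autocomplete:
        """Suggest completions from a sorted list of past queries (illustrative only)."""
        def __init__(self, queries):
            self.queries = sorted(set(q.lower() for q in queries))

        def suggest(self, prefix, limit=5):
            prefix = prefix.lower()
            start = bisect.bisect_left(self.queries, prefix)
            out = []
            for q in self.queries[start:]:
                if not q.startswith(prefix) or len(out) == limit:
                    break
                out.append(q)
            return out

    ac = Autocomplete(["enterprise search", "entity extraction", "email archiving"])
    print(ac.suggest("en"))  # ['enterprise search', 'entity extraction']

A production version would rank suggestions by query popularity rather than alphabetically, but the guided-search idea is the same.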
Whitney Grace, August 6, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Humans Screw Up the Self-Driving Car Again
August 5, 2015
Google really, really wants its self-driving cars to be approved for consumer use. While the cars have been tested on actual roads, those tests have also included car accidents. The Inquirer posted the article “Latest Self-Driving Car Crash Injures Three Google Employees” about how the public might not be ready for self-driving vehicles. Google, not surprisingly, blames the crash on humans.
Google has been testing self-driving cars for over six years, and there have been a total of fourteen accidents involving the vehicles. The most recent accident is the first that resulted in injuries. Three Google employees were riding in a self-driving vehicle in Mountain View, California, rush-hour traffic on July 1. After the accident, all three employees were treated for whiplash. Google says its car was not at fault and that a distracted driver caused the accident, which it says is also the reason for the other accidents.
While Google is upset, the accidents have not hindered its plans; they have motivated it to push forward. Google explained:
“ ‘The most recent collision, during the evening rush hour on 1 July, is a perfect example. The light was green, but traffic was backed up on the far side, so three cars, including ours, braked and came to a stop so as not to get stuck in the middle of the intersection. After we’d stopped, a car slammed into the back of us at 17 mph—and it hadn’t braked at all.’ ”
Google continues to insist that human error and inattention are ample reason to allow self-driving cars on the road. While it is hard to trust a machine to drive what amounts to a weapon moving at 50 miles per hour, why do we trust people who have proven to be poor drivers with a license?
Whitney Grace, August 5, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Hire Watson As Your New Dietitian
August 4, 2015
IBM’s supercomputer Watson is being “trained” in various fields, such as healthcare, app creation, customer service relations, and creating brand-new recipes. The applications for Watson are possibly endless. The supercomputer is now combining its healthcare and recipe “skills” by trying its hand at nutrition. Welltok invented the CaféWell Health Optimization Platform, a PaaS that creates individualized healthcare plans, and it has applied Watson’s big data capabilities to its Healthy Dining CaféWell personal concierge app. eWeek explains that “Welltok Takes IBM Watson Out To Dinner,” so it can offer clients personalized restaurant menu choices.
” ‘Optimal nutrition is one of the most significant factors in preventing and reversing the majority of our nation’s health conditions, like diabetes, overweight and obesity, heart disease and stroke and Alzheimer’s,’ said Anita Jones-Mueller, president of Healthy Dining, in a statement. ‘Since most Americans eat away from home an average of five times each week and it can be almost impossible to know what to order at restaurants to meet specific health needs, it is very important that wellness and condition management programs empower smart dining out choices. We applaud Welltok’s leadership in providing a new dimension to healthy restaurant dining through its groundbreaking CaféWell Concierge app.’”
Restaurant menus are very vague when it comes to nutritional information. A menu will state whether something is gluten-free, spicy, or a vegetarian option, but all other information is missing. To find a restaurant’s nutritional information, you have to hit the Internet and conduct research. A newly passed law will force restaurants to post calorie counts, but those will not include the amount of sugar, sodium, and other details. People have been making poor eating choices, partially due to this lack of information; if they know what they are eating, they can improve their health. If Watson’s abilities can decrease the US’s waistline, so much the better. The bigger challenge will be getting people to use the information.
Whitney Grace, August 4, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Bodleian Library Gets Image Search
August 3, 2015
There is a lot of free information on the Internet, but the veracity is always in question. While libraries are still the gateway of knowledge, many of their rarer, more historic works are buried in archives. These collections offer a wealth of information that is often very interesting. The biggest problem is that libraries often lack the funds to scan archival collections and create a digital library. Oxford University’s Bodleian Library, one of the oldest libraries in Europe, has the benefit of funds and an excellent collection to share with the world.
Digital Bodleian boasts 115,179 images as of this writing and states that it is constantly updating the collection. The online library takes a modern approach to how users interact with the images, borrowing tips from social media. Not only can users browse and search the images at random or in the pre-sorted collections, they can also create their own custom libraries and share them with friends.
It is a bold move for a library, especially one as renowned as the Bodleian, to embrace a digital collection as well as offer a social media-like service. In my experience, digital library collections are bogged down by copyright and incomplete indices or ontologies, and they lack images to pique a user’s interest. Digital Bodleian is the opposite of many of its sister archives, but another thing I have noticed is that users are not too keen on joining a library social media site. It means signing up for yet another service, and their friends probably are not on it anyway.
Here is an idea: how about a historical social media site, similar to Pinterest, that pulls records from official library archives? It would offer the ability to see the actual items, verify information, and even yield those clickbait top ten lists.
Whitney Grace, August 3, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Online Ads Discriminate
August 3, 2015
In our modern age, discrimination is supposed to be a thing of the past. When it does appear, people take to the Internet to vent their rage and frustrations, eager to point out this illegal activity. Online ads, however, lack human intelligence and are only as smart as their programmed algorithm. Technology Review explains in “Probing The Dark Side of Google’s Ad-Targeting System” that Google’s ad service makes inaccurate decisions when it comes to gender and other personal information.
A research team at Carnegie Mellon University and the International Computer Science Institute built AdFisher, a tool to track targeted third party ads on Google. AdFisher found that ads were discriminating against female users. Google offers a transparency tool that allows users to select what types of ads appear on their browsers, but even if you use the tool it doesn’t stop some of your personal information from being used.
“What exactly caused those specific patterns is unclear, because Google’s ad-serving system is very complex. Google uses its data to target ads, but ad buyers can make some decisions about demographics of interest and can also use their own data sources on people’s online activity to do additional targeting for certain kinds of ads. Nor do the examples breach any specific privacy rules—although Google policy forbids targeting on the basis of “health conditions.” Still, says Anupam Datta, an associate professor at Carnegie Mellon University who helped develop AdFisher, they show the need for tools that uncover how online ad companies differentiate between people.”
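The researchers’ approach, as described, is differential testing: spin up groups of fresh browser profiles that differ in a single attribute, log the ads each group receives, and check whether the difference is statistically significant. A bare-bones sketch of that experimental logic (simulated ad data and a crude z-test of our own; AdFisher itself uses machine learning and permutation tests):

    import math
    import random
    from collections import Counter

    def collect_ads(profile_gender, trials=500):
        """Stand-in for browsing with a fresh profile and logging served ads.
        The ad mix is fabricated purely so the sketch runs."""
        weights = {"male": [0.5, 0.5], "female": [0.7, 0.3]}[profile_gender]
        return Counter(random.choices(["generic_ad", "high_pay_job_ad"], weights, k=trials))

    def z_test(a_hits, a_n, b_hits, b_n):
        """Two-proportion z-test: do the two groups see the ad at different rates?"""
        p = (a_hits + b_hits) / (a_n + b_n)
        se = math.sqrt(p * (1 - p) * (1 / a_n + 1 / b_n))
        return (a_hits / a_n - b_hits / b_n) / se

    random.seed(0)
    male_ads, female_ads = collect_ads("male"), collect_ads("female")
    z = z_test(male_ads["high_pay_job_ad"], 500, female_ads["high_pay_job_ad"], 500)
    print(f"z = {z:.2f}  (|z| > 1.96 suggests a significant difference)")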
The transparency tool only controls some of the ads and third parties can use their own tools to extract data. Google stands by its transparency tool and even offers users the option to opt-out of ads. Google is studying AdFisher’s results and seeing what the implications are.
The study shows that personal data spills onto the Internet every time we click a link or use a browser. It is frightening how the data can be used, and even hurtful when it is interpreted incorrectly by ads. The bigger question is not how retailers and Google use the data, but how government agencies and other institutions plan to use it.
Whitney Grace, August 3, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph