SharePoint: In a Tuxedo and Ready for the Big Time
May 20, 2011
We at Search Technologies read the short news article called “SharePoint Naked” in Beyond Search on May 18, 2011. We found the write up somewhat amusing, but we also think that the comments about SharePoint as a development platform were at odds with our experience.
First, please, point your browser to the MSDN Developer Team Blog and the story “SharePoint 2010 Development Platform Stack.” The diagram presents the major building blocks of the SharePoint system.
This type of diagram presents what my college psychology professor called the gestalt. These types of broad views serve the same purpose as a city map. One has to know where the major features are, what roadways lead into and out of the city, and a range of other high level information.
The Microsoft blog diagram serves this function for a professional working with SharePoint. In fact, I doubt that a busy financial officer would look at this road map. Financial people monitor other types of information. The CFO works in one city and the SharePoint developer in another. Both use maps, just different ones.
Second, we think this diagram is extremely useful. It identifies the relationships among key components of the SharePoint development stack.
I found the inclusion of Windows Server 2008 and SharePoint Server 2010 as "bookends" insightful. Between these digital bookends, the focus on SharePoint Foundation 2010 was useful, clear, and complete. Third, the number of components in an enterprise system does not automatically mean increased costs.
Microsoft is doing an outstanding job of providing “snap in” components, tools, and documentation. In our experience, Search Technologies’ engineers can move from concept to operational status in a short span of time.
The foregoing does not mean that SharePoint is easier or harder than any other enterprise software. SharePoint is a robust system which, when appropriately configured and provisioned, can deliver outstanding return on investment and an excellent user experience.
Encouragingly for us, we're finding that SharePoint adoptees, especially the big ones, get the importance of great search functionality as a foundation of productivity across the application spectrum. Encouragingly for Microsoft, which paid $1.2 billion for a Norwegian search company a couple of years ago, Fast Search for SharePoint fits the bill very nicely. We currently have a dozen organizations using our Fast Search for SharePoint proof of concept service.
Iain Fletcher, May 20, 2011
Search Technologies
More from IBM Watson: More PR, That Is
May 19, 2011
IBM keeps flogging Watson, which seems to be Lucene wrapped with IBM goodness. We have reported on the apparent shift in search strategy at IBM; to wit, search now embraces content analytics. Many vendors are trying to spit-shine worn toe-cap oxfords in an effort to make search into a money machine. Good luck with that.
Network World tells us that “Watson Teaches ‘Big Analytics.’” Ah, more Watson hyperbole.
Skillful big analytics is necessary to make use of big data, of course, and in most cases speed is also a factor. Watson demonstrated proficiency at both with its Jeopardy win. Now, IBM hopes to use those abilities in enterprise products. As well they should; the need for such tools is expanding rapidly.
“Businesses successfully utilizing big analytics can take this process of knowledge discovery even further, identifying questions, exploring the answers and asking new questions based on those answers. This iterative quality of data analysis, rather than incremental exploration, can lead to a deeper understanding of business and markets, and begin to answer questions never before considered.”
Yep, we think we get it: Big data and a robust big analytics product are increasingly necessary to stay competitive. What we want to know, though, is this: when is all this going to change Web or Internet search? When will the Watson product be "a product"? Enough PR; that's easy. How about a useful service we can test and compare to other systems?
Cynthia Murrell, May 19, 2011
Freebie
The SharePoint Skeleton Exposed
May 19, 2011
Short honk: I absolutely love diagrams that explain SharePoint. First, end users do not want to look at this diagram. Second, chief financial officers must be distracted so that knowledge of this diagram does not reach their eyes. Consultants, certified SharePoint experts, and assorted SharePoint experts: you folks can wallow in this diagram all day long.
Here’s the “SharePoint 2010 Development Platform Stack.”
Elegant, clear, and full of interdependencies. Now what happens when you toss in Fast Search, its hundreds of configuration settings, and the bits and pieces needed to make Fast Search the lean, mean retrieval machine of your dreams? Well, you get to spend lots of time, brain cycles, and money to get everything humming right along.
Stephen E Arnold, May 18, 2011
Freebie, unlike faux SharePoint expertise
Search: An Information Retrieval Fukushima?
May 18, 2011
Information about the scale of the horrific nuclear disaster in Japan at the Fukushima Daiichi nuclear complex is now becoming more widely known.
Expertise and Smoothing
My interest in the event is the engineering of a necklace of old-style reactors and the problems the LOCA (loss of coolant accident) triggered. The nagging thought I had was that today's nuclear engineers understood the issues with the reactor design, the placement of the spent fuel pool, and the risks posed by an earthquake. After my years in the nuclear industry, I am quite confident that engineers articulated these issues. However, the technical information gets "smoothed" and simplified. The complexities of nuclear power generation are well known, at least in engineering schools. The nuclear engineers are often viewed as odd ducks by the civil engineers and mechanical engineers. A nuclear engineer has to do the regular engineering stuff of calculating loads and looking up data in hefty tomes. But the nukes need grounding in chemistry, physics, and math, lots of math. Then the engineer who wants to become a certified, professional nuclear engineer has some other hoops to jump through. I won't bore you with the details, but the end result of the process is people who can clearly explain a particular process and its impacts.
Does your search experience emit signs of troubles within?
The problem is that art history majors, journalists, failed Web masters, and even Harvard and Wharton MBAs get bored quickly. The details of a particular nuclear process make zero sense to someone more comfortable commenting about the color of Mona Lisa's gown. So "smoothing" takes place. The ridges and outcrops of scientific and statistical knowledge get simplified. Once a complex situation has been smoothed, the need for hard expertise is diminished. With these simplifications, the liberal arts crowd can "reason" about risks, costs, upsides, and downsides.
A nuclear fallout map. The effect of a search meltdown extends far beyond the boundaries of a single user's actions. Flawed search and retrieval has major consequences, many of which cannot be predicted with high confidence.
Everything works in an acceptable or okay manner until there is a LOCA or some other problem like a stuck valve or a crack in a pipe in a radioactive area of the reactor. Quickly the complexities, risks, and costs of the “smoothed problem” reveal the fissures and crags of reality.
Web search and enterprise search are now experiencing what I call a Fukushima event. After years of contentment with finding information, suddenly the dashboards are blinking yellow and red. Users are unable to find the information needed to do their jobs or to locate something as basic as a colleague's telephone number or office location. I have separated Web search and enterprise search in my professional work.
I want to depart for a moment and consider the two "species" of search as a single process before the ideas slip away from me. I know that Web search processes publicly accessible content, has the luxury of ignoring servers with high latency, and filters content to create an index that meets the vendors' needs, not the users' needs. I know that enterprise search must handle diverse content types, must cope with security and access controls, and must perform more functions than one of those two-inch-wide Swiss Army knives on sale at the airport in Geneva. I understand. My concern in this write up is broader. Please, bear with me.
New Landscape of Enterprise Search Details Available
May 18, 2011
Stephen E Arnold’s new report about enterprise search will be shipping in two weeks. The New Landscape of Enterprise Search: A Critical Review of the Market and Search Systems provides a fresh perspective on a fascinating enterprise application.
The centerpiece of the report is a set of new analyses of the search and retrieval systems offered by six vendors.
Unlike the "pay to play" analyses from industry consultants and self-appointed "experts," Mr. Arnold's approach is based on his work in developing search systems and in researching search systems to support specific inquiries into their performance and features.
The report focuses on the broad changes which have roiled the enterprise search and content processing market. Unlike his first "encyclopedia" of search systems and his study of value added indexing systems, this new report takes an unvarnished look at the business and financial factors that make enterprise search a challenge. Mr. Arnold then uses a historical base to analyze the upsides and downsides of six vendors' search solutions. He puts each firm's particular technical characteristics in sharp relief. A reader gains a richer understanding of what makes a particular vendor's system best suited for specific information access applications.
Other features of the report include:
- Diagrams of system architecture and screen shots of exemplary implementations
- Lists of resellers and partners of the profiled vendors
- A comprehensive glossary which attempts to cut through the jargon and marketing baloney that impede communication about search and retrieval
- A ready-reference table for more than 20 vendors’ enterprise search solutions
- An “outlook” section which offers candid observations about the attrition and financial health of the hundreds of companies offering search solutions.
More information about the report is available at http://goo.gl/0vSql. You may reserve your copy by writing seaky2000 @ yahoo dot com. Full ordering information and pricing will be available in the near future.
Donald C Anderson, May 18, 2011
Post paid for by Stephen E Arnold
RedMonk and Open Source
May 18, 2011
If you have worked with traditional consulting firms, you know that "open" is not part of the standard method. At RedMonk, open is a pivot point. The company provides a range of services to organizations worldwide that need intelligence about open source software. RedMonk has emerged as one of the leaders in the open source community, providing traditional advisory services as well as specialized capabilities tailored to the fast-growing open source sector.
You can read an exclusive interview with Stephen O'Grady, one of the founders of RedMonk. In a wide-ranging discussion with Stephen E Arnold, publisher of Beyond Search, Mr. O'Grady talks about open source technology and its impact on traditional commercial, proprietary software.
In response to a question about the business implications of open source software, he said:
As with every other market with credible open source alternatives, the commercial landscape of search has unquestionably been impacted. Contrary to some of the more aggressive or doom crying assertions, open source does not preclude success for closed source products. It does, however, force vendors of proprietary solutions to compete more effectively. We talk about open source being like a personal trainer for commercial vendors in that respect; they can’t get lazy or complacent with open source alternatives readily available.
He continued:
Besides pushing commercial vendors to improve their technology, open source generally makes pricing more competitive, and search is no exception here. Closed source alternatives remain successful, but even if an organization does not want to use open source, search customers would be foolish not to use the proverbial Amdahl mug as leverage in negotiations.
You can read the complete interview with Mr. O’Grady at http://wp.me/pf6p2-4A2. He will be participating in the Lucene Revolution Conference as well.
Don C. Anderson, May 18, 2011
Freebie
Exclusive Interview: Stephen O’Grady, RedMonk
May 18, 2011
Introduction
The open source movement is expanding, and it is increasingly difficult for commercial software vendors to ignore. Some large firms have embraced open source. If you license IBM OmniFind with Content Analytics, you get open source plus proprietary software. Oracle has opted for a different path, electing to acquire high profile open source solutions such as MySQL and buying companies with a heritage of open source. Sun Microsystems is now part of Oracle, and Oracle became an organization of influence with regard to Java. Google is open source, or at least Google asserts that it is open source. Other firms have built engineering and consulting services around open source. A good example is Lucid Imagination, a firm that provides one click downloads of Lucene/Solr and value-add software and consulting for open source search. The company also operates a successful conference series and has developed specialized systems and methods to handle scaling, big data, and other common search challenges.
I wanted to get a different view of the open source movement in general and to probe the narrower business applications of open source technology. Fortunately I was able to talk with Stephen O'Grady, the co-founder and Principal Analyst of RedMonk, a boutique industry analyst firm focused on developers. Founded in 2002, RedMonk provides strategic advisory services to some of the most successful technology firms in the world. Stephen's focus is on infrastructure software such as programming languages, operating systems, and databases, with a special focus on open source and big data. Before setting up RedMonk, Stephen worked as an analyst at Illuminata. Prior to joining Illuminata, Stephen served in various senior capacities with large systems integration firms like Keane and consultancies like Blue Hammock. Regularly cited in publications such as the New York Times, NPR, the Boston Globe, and the Wall Street Journal, and a popular speaker and moderator on the conference circuit, Stephen is well respected throughout the industry for his advice and opinions.
The full text of my interview with him on May 16, 2011 appears below.
The Interview
Thanks for making time to speak with me.
No problem.
Let me ask a basic question. What’s a RedMonk?
That’s my favorite question. We are a different type of consultancy. We like to say we are “not your parents’ industry analyst firm.” We set up RedMonk in 2002.
Right. You take a similar view of industry analysts and mid tier consulting firms as I do, as I recall.
Yes, pretty similar. We suggest that the industry analysis business has become a "protection racket… undoubtedly a profitable business arrangement, but ultimately neither sustainable nor ethical." In fact, we make our content open and accessible in most cases. We work under yearly retained subscriptions with clients.
Over the last nine years we have been able to serve everyone from big household names to a large number of startups. We deliver consulting hours, press services, and a variety of other value adds.
Quite a few firms say that. What’s your key difference?
We are practical.
First, RedMonk is focused on developers, whom we consider to be the new "kingmakers" in technology. If you think about it, most of the adoption we've seen in the last ten years has been bottom up.
Second, we're "practitioner-focused" rather than "buyer-focused." Our core thesis is that technology adoption is increasingly a bottom up proposition, as demonstrated by Linux, Apache, MySQL, PHP, Firefox, and Eclipse. Each is successful because these solutions have been built from the ground floor, often in grassroots fashion.
Third, we are squarely in the big data space. The database market was considered saturated, but it exploded with new tools and projects. A majority of these are open source, and thus developer friendly. We are right in the epicenter of that shift.
Do you do commissioned research?
No, we don’t do commissioned research of any kind. We just don’t see it as high value, even if the research is valid.
How has the commercial landscape of search specifically, and data infrastructure generally, been impacted – for better or for worse – by open source?
As with every other market with credible open source alternatives, the commercial landscape of search has unquestionably been impacted. Contrary to some of the more aggressive or doom crying assertions, open source does not preclude success for closed source products. It does, however, force vendors of proprietary solutions to compete more effectively. We talk about open source being like a personal trainer for commercial vendors in that respect; they can’t get lazy or complacent with open source alternatives readily available.
Isn’t there an impact on pricing?
Great point.
Besides pushing commercial vendors to improve their technology, open source generally makes pricing more competitive, and search is no exception here. Closed source alternatives remain successful, but even if an organization does not want to use open source, search customers would be foolish not to use the proverbial Amdahl mug as leverage in negotiations.
When the software is available for free, what are customers paying for?
Revenue models around open source businesses vary, but the most common is service and support. The software, in other words, is free, and what customers pay for is help with installation and integration, or the ability to pick up the phone when something breaks.
A customer may also be paying for updates, whereby vendors backport fixes or patches to older software versions. Broadly then, the majority of commercial open source users are paying for peace of mind. Customers want the same assurances they get from traditional commercial software vendors. Customers want to know that there will be someone to help when bugs inevitably appear: open source vendors provide that level of support and assurance.
What’s the payoff to the open source user?
That’s my second favorite question.
The advantages to this model from the customer perspective are multiple, but perhaps the most important is what Simon Phipps once observed: users can pay at the point of value, rather than acquisition. Just a few years ago, if you had a project to complete, you'd invite vendors in to do a bake off. They would try to prove to you, in a demo of an hour or two, that their software could do the job well enough for you to pay to get it.
This is like an end run, right?
In general, but we believe open source software inverts the typical commercial software process. You download the software for free, employ it as you see fit and determine whether it works or not. If it does, you can engage a commercial vendor for support. If it doesn’t, you’re not out the cost of a license. This shift has been transformative in how vendors interact with their customers, whether they’re selling open source software or not.
The general complexion of software infrastructure appears to be changing. Relational databases, once the only choice, are becoming just one of many. Where does search fit in, and how do customers determine which pieces fit which needs?
The data infrastructure space is indeed exploding. In the space of eighteen months we've gone from "relational databases are the solution to every data problem" to, seemingly, a different persistence mechanism per workload.
As for how customers put the pieces together, the important thing is to work backwards from need. For example, customers that have search needs should, unsurprisingly, look at search tools like Solr. But the versatility of search makes it useful in a variety of other contexts; AT&T for example uses it for Web page composition.
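As a concrete illustration of what "looking at a search tool like Solr" can mean in practice, here is a minimal sketch that sends a keyword query to a stock Solr instance over its standard HTTP interface and prints the hits. The host, port, core layout, and field names are illustrative assumptions, not details from Mr. O'Grady.

import json
import urllib.parse
import urllib.request

# Hypothetical Solr instance on the default port; adjust for a real deployment.
SOLR_SELECT = "http://localhost:8983/solr/select"

def search(text, rows=10):
    """Send a keyword query to Solr and return the matching documents."""
    params = urllib.parse.urlencode({
        "q": text,      # the user's keyword query
        "rows": rows,   # number of hits to return
        "wt": "json",   # ask Solr for a JSON response
    })
    with urllib.request.urlopen(SOLR_SELECT + "?" + params) as response:
        payload = json.load(response)
    return payload["response"]["docs"]

if __name__ == "__main__":
    for doc in search("android handset"):
        print(doc.get("id"), doc.get("title"))

The "other contexts" point follows the same pattern: a Web page composition layer would simply issue queries like this behind the scenes and render the returned documents.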
What’s driving the adoption of search? Is it simply a function of data growth, as the headlines seem to imply, or is there more going on?
Certainly data growth is a major factor. Every year there's a new chart asserting things like we're going to produce more information in the next year than in all of recorded history, but the important part is that it's true. We are all, every one of us, generating massive amounts of information. How do you extract, then, the proverbial needle from the haystack? Search is one of the most effective mechanisms for this.
Just as important, however, has been the recognition amongst even conservative IT shops that the database does not need to be the solution to every problem. Search, like a variety of other non-relational tools, is far more of a first class citizen today than it was just a few short years ago.
What is the most important impact effective search can have on an organization?
That’s a very tough question. I would say that one of the most important impacts search can have is that a good answer to one question will generate the next question. Whether it’s a customer searching your Web site for the latest Android handset or your internal analyst looking for last quarter’s sales figures, it’s crucial to get the right answer quickly if you ever want them to ask a second.
If your search fails and they don't ask a second question, you'll either have lost a potential customer or your analyst is making decisions without last quarter's sales figures. Neither is a good outcome.
Looking at the market ahead, what trends do you see impacting the market in the next year or two? What should customers be aware of with respect to their data infrastructure?
There are a great many trends that will affect search, but two of the most interesting from my view will be the increasing contextual intelligence of search and the accelerating integration of search into other applications. Far from being just a dumb search engine, Solr increasingly has an awareness of what specifically it is searching, and in some cases, how to leverage and manipulate that content whether it’s JSON or numeric fields. This broadens the role that search can play, because it’s no longer strictly about retrieval.
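To make the field awareness point concrete, here is a small, hedged example: Solr's range faceting buckets documents by the value of a numeric field instead of treating everything as undifferentiated text. The endpoint and the "price" field are assumptions for illustration only.

import json
import urllib.parse
import urllib.request

# Illustrative request: bucket documents on a hypothetical numeric "price" field.
params = urllib.parse.urlencode({
    "q": "*:*",                 # match all documents
    "rows": 0,                  # we only want facet counts, not hits
    "wt": "json",
    "facet": "true",
    "facet.range": "price",     # the numeric field to bucket
    "facet.range.start": "0",
    "facet.range.end": "500",
    "facet.range.gap": "100",   # buckets of width 100
})

with urllib.request.urlopen("http://localhost:8983/solr/select?" + params) as response:
    data = json.load(response)

# Solr returns counts as a flat list: [bucket_start, count, bucket_start, count, ...]
counts = data["facet_counts"]["facet_ranges"]["price"]["counts"]
for start, count in zip(counts[::2], counts[1::2]):
    print("price from", start, ":", count, "documents")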
And integration?
Okay, as for integration, data centers are increasingly heterogeneous, with databases deployed alongside MapReduce implementations, key-value stores and document databases.
Search fills an important role, which is why we’re increasingly seeing it not simply pointed at a repository to index, but leveraged in conjunction with tools like Hadoop.
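To ground the Hadoop point, here is a minimal sketch of one way batch output, say a file of newline-delimited JSON records written by a reduce step, could be pushed into Solr's JSON update handler so the results become searchable. The endpoint, field names, and file name are assumptions; production Hadoop-to-Solr pipelines typically rely on purpose-built connectors rather than a hand-rolled script like this.

import json
import urllib.request

# Hypothetical Solr JSON update endpoint; commit=true makes the batch visible immediately.
SOLR_UPDATE = "http://localhost:8983/solr/update/json?commit=true"

def index_records(path):
    """Read newline-delimited JSON records and post them to Solr as one batch."""
    docs = []
    with open(path) as handle:
        for line in handle:
            record = json.loads(line)
            docs.append({
                "id": record["id"],       # Solr's unique key field (assumed)
                "title": record["title"],
                "body": record["text"],
            })
    request = urllib.request.Request(
        SOLR_UPDATE,
        data=json.dumps(docs).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print("Solr responded with HTTP", response.status)

if __name__ == "__main__":
    index_records("part-r-00000.json")  # a typical reduce output file name (assumed)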
What kind of threat does Oracle's lawsuit against Google over Java pose to open source? How does it compare to the SCO controversy with Linux some years back?
In my view, Oracle's ongoing litigation against Google over Java-related intellectual property has profound implications not only for both participants but also for the open source community as a whole.
The real concern is that the litigation, particularly if it is successful, could have chilling effects on Java usage and adoption. Compared with SCO, this case is somewhat different in that it targets a reimplementation of the platform in Android rather than the Java platform itself; SCO was threatening Linux itself rather than a less adopted derivative.
While users of both Java and MySQL should be aware of the litigation, realistically the implications for them, if any, are very long term. No one is going to abandon Java based open source projects, for example, based on the outcome of Oracle's suit.
It seems like everyone who is anyone in the software world has an open source strategy, even down to Microsoft's embrace of PHP. Should information technology executives and decision makers, who were once suspicious of open source, be suspicious of software vendors without a solid open source strategy?
With the possible exception of packaged applications, open source is a fact of life in most infrastructure software markets. Adoption is accelerating, the volume of options is growing, and – frequently – the commercial open source products are lower cost. So it is no surprise that vendors might feel threatened by open source.
But even if they choose not to sell open source software, as many do not, those without a solid open source interoperability and partnership story will be disadvantaged in a marketplace that sees open source playing crucial roles at every layer of the data center. Like it or not, that is the context in which commercial vendors are competing. Put more simply, if you’re building for a market of all closed source products, that’s not that large a market. In such cases, then, I would certainly have some hard questions for vendors who lack an open source strategy.
Where can a reader get more information about RedMonk?
Please, visit our Web site at www.redmonk.com.
ArnoldIT Comment
RedMonk’s approach to professional services is refreshing and a harbinger of change in the consulting sector. But more importantly, the information in this interview makes clear that open source solutions and open source search technology are part of the disruption that is shaking the foundation of traditional computing. Vendors without an open source strategy are likely to face both customer and price pressure. Open source is no longer a marginalized option. Companies from Twitter to Cisco Systems to Skype, now a unit of Microsoft, rely on open source technology. RedMonk is the voice of this new wave of technical opportunity.
Stephen E Arnold, May 18, 2011
Autonomy Mines Iron Mountain
May 16, 2011
I have written about Stratify in the three editions of the Enterprise Search Report which I wrote when "search" was hot, and in my Gilbane report named after this blog. Since late 2010, Stratify (originally named Purple Yogi, which got some In-Q-Tel love in 2001) has gotten lost within Iron Mountain's labyrinth of organizational tunnels. Now Iron Mountain seems to face significant financial, technical, business, and management challenges. The details of what Autonomy snagged are fuzzy, but based on the sketchy information that has flowed to me since May 12, 2011, here's what I have been able to "mine":
Autonomy mines Iron Mountain for revenue, customer, and upsell “gold.” Image source: http://www.davestravelcorner.com/articles/goldcountry/article.htm
- Autonomy will get the archiving, eDiscovery, and online back up business of Iron Mountain
- No word on the fate of Mimosa Systems which Iron Mountain bought in early 2010. (My recollection is that Mimosa used a mid tier search solution obtained from a third party. I want to link Mimosa with dtSearch, but I may be mistaken on that point.)
- Autonomy will apply its well-honed management method to the properties. Expect to see Autonomy push ever closer to $1.0 billion in revenues, maybe this calendar year.
You can get some numbers from the news item “Autonomy Acquires Some Iron Mountain Digital Assets for $380 Million.”
Stratify's technology was the cat's pajamas years ago. More recently, the technology has lagged. Iron Mountain's own difficulties distracted the company from its digital opportunities. My view is that Iron Mountain made an all too familiar error: online looks easy, but looks are deceiving.
Some of the former Web masters, failed "real" journalists, and self-appointed search experts will enjoy the opportunity to berate Autonomy for its acquisitions and growth tactics, but I think those folks are wrong.
Autonomy does manage its acquisitions to generate stakeholder and customer value.
In fact, Autonomy’s track record with its acquisitions is, in my opinion, better than either Google’s or Microsoft’s. As for Endeca, that company has fallen behind Autonomy due to different management strategies and growth tactics. Don’t believe me?
Just look at Autonomy’s track record, top line revenue, profits, and customer base, not tweets from a yesterday thinker at a lumber-filled, pay to play meet up.
Stephen E Arnold, May 16, 2011
Freebie.
A New Mr. Microsoft Platform Ecosystem in France
May 14, 2011
ITespresso announces “Microsoft France Has Found Its New ‘Mr. Platform Ecosystem’ at Sinequa.”
The former President & CEO of business search provider Sinequa, Jean Ferré, will be leading a team of 50 in his new position at Microsoft France. They will focus largely on building relationships with start-up companies. The article elaborates on the placement:
At 42, Jean Ferré finds himself in a strategic position within the organization of Microsoft France: platforms (software such as Office and Office 365 for the Web, cloud with Windows Azure, and Windows Phone), application marketplaces, and development tools (Visual Studio …). He has also joined the steering committee.
We wish M. Ferré the best of luck in his new position. Prior to joining MSFT, Mr. Ferré was the top dog at Sinequa, an enterprise search and solutions vendor. As for Sinequa, word is that Alexandre Bilger, formerly that company’s Managing Director, will be taking the reins. Good luck to him, too.
Stephen E Arnold, May 13, 2011
Freebie