Larry Ellison May Not Understand Cloud Computing but Oracle Is Doing It
December 19, 2008
Not long ago, Larry Ellison, the wizard who built Oracle into a multi-billion-dollar colossus, said he didn’t understand cloud computing. I don’t understand it either. I think the idea of a data center and a network connection is old hat. But today’s Brown University graduates enjoy coining new phrases to put lipstick on a very old pig. I liked the IBM data center. I got paid to terrorize people who needed their punch card decks run. What a power trip.
Now, AT&T and Oracle have fallen in love and will be offering a cloud-based subscription to that nifty enterprise system, PeopleSoft. You can read the TMCnet.com story “AT&T and Oracle Team for Subscription-Based PeopleSoft Solution for Midsize Businesses” here. The comment I found glinting like a diamond amidst the gray prose was:
Keith Block, executive vice president, Oracle North America Sales, said: “We are excited to work with AT&T to address the need for a cost-effective, predictably priced human resource service for companies that prefer to focus their efforts on their core differentiators. This is especially true in today’s uncertain economic climate where companies of all sizes are looking for ways to reduce costs, increase flexibility and decrease business risk.”
With a snappy description like this, maybe Mr. Ellison could not figure out that this was a cloud computing play. AT&T and Oracle have the Midas touch when it comes to making money. This venture might be the home run both companies need to make 2009 a banner year. However, both companies have to become agile, deal with latency, and deliver a service that outshines similar services already in the market. AT&T. Oracle. Agile? Absolutely. These companies can turn on a dime and their executives often work weekends at Cirque du Soleil. Such versatility is the norm. I wonder if Secure Enterprise Search 10g is part of the package. If you know, use the comments section to share your knowledge.
Stephen Arnold, December 19, 2008
Microsoft Gets Dinged for Windows Live Customer Service or Lack of It
December 19, 2008
I am getting burned out with executive revolving doors and complaints about Microsoft. I was going to delete the ZDNet article “Windows Live Drops the Ball on Support” which you can find here. But Ed Bott does a good job, so I read the story. The problem with Microsoft customer service is that it is tough to get. I find that companies talk about customer service and then force me to jump through hoops. Mr. Bott reports a similar experience. For me, the most important comment in his article was:
What baffles me is that Windows Live Wave 3 has been under development for … well, it seems like forever. The software side has run exceptionally well, hitting its dates and delivering a generally excellent product. So what happened to customer support? From this vantage point, it looks like management treated support as an afterthought and is only now beginning to build the support resources it should have had in place months ago.
Microsoft seems to have what William James called “a certain blindness.” Not only is customer support an issue, what about the security problems with Internet Explorer? Anyone want to get some customer support when a bad person sucks your PC’s brain?
Stephen Arnold, December 19, 2008
SharePoint: ChooseChicago
December 18, 2008
I scanned the MSDN Web log postings and saw this headline: “SharePoint Web Sites in Government.” My first reaction was that the author Jamesbr had compiled a list of public facing Web sites running on Microsoft’s fascinating SharePoint content management, collaboration, search, and Swiss Army Knife software. No joy. Mr. Jamesbr pointed to another person’s list which was a trifle thin. You can check out this official WSS tally here. Don’t let the WSS fool you. The sites are SharePoint, and there are 432 of them as of December 16, 2008. I navigated to the featured site, ChooseChicago.com. My broadband connection was having a bad hair day. It took 10 seconds for the base page to render and I had to hit the escape key after 30 seconds to stop the page from trying to locate a missing resource. Sigh. Because this was a featured site that impressed Jamesbr, I did some exploring. First, I navigated to the ChooseChicago.com site on December 16, 2008.
The search box is located at the top right hand corner of the page and also at the bottom right hand corner. But the search system was a tad sluggish. After entering my query “Chinese”, the system cranked for 20 seconds before returning the results list.
Microsoft: Search Revolving Door
December 17, 2008
I have a tough time keeping track of who is running Microsoft’s search effort. In a lousy market, Google continues to keep a paw clamped around the throats of 70 percent of Web search users. Microsoft, Yahoo, and the troubled Ask.com are not making much, if any, headway in closing the gap or slowing Googzilla. When Google makes a mistake as it did with its OpenEdge play, none of these competitors exploits the error.
So, I was not surprised to read in Todd Bishop’s TechFlash that Brad Goldberg is leaving Microsoft and the Live search operation. He was the general manager, and I am not sure where in the wild wonderful world of Microsoft he reported or what his duties were. If he was the person who had the job of crippling Google, he certainly did not deliver. You can read “Live Search GM Leaving Microsoft” here. Mr. Bishop said that Mr. Goldberg will join an investment company. In today’s economy, I am not very confident that investment companies will flourish for a while. The US Treasury suggested that it will print as much money as needed to jump start the economy. The most interesting comment in Mr. Bishop’s write up was:
His wife, Michelle Goldberg, is a partner at the Ignition Partners venture capital firm.
Wow. A banking family in today’s economy. The Goldbergs are optimists. I wonder, “Does Mr. Goldberg’s replacement Yusuf Mehdi have the time, resources, or technical infrastructure to deal with the GOOG?”
Stephen Arnold, December 17, 2008
Semantic Search Laid Bare
December 17, 2008
Yahoo’s Search Blog here has an interesting interview with Dr. Rudi Studer. The focus is semantic search technologies, which are all the rage in enterprise search and Web search circles. Dr. Studer, according to Yahoo:
is no stranger to the world of semantic search. A full professor in Applied Informatics at University of Karlsruhe, Dr. Studer is also director of the Karlsruhe Service Research Institute, an interdisciplinary center designed to spur new concepts and technologies for a services-based economy. His areas of research include ontology management, semantic web services, and knowledge management. He has been a past president of the Semantic Web Science Association and has served as Editor-in-Chief of the journal Web Semantics.
If you are interested in semantics, you will want to read and save the full text of this interview. I want to highlight three points that caught my attention and then–in my goosely manner–offer several observations.
First, Dr. Studer suggests that “lightweight semantic technologies” have a role to play. He said:
In the context of combining Web 2.0 and Semantic Web technologies, we see that the Web is the central point. In terms of short term impact, Web 2.0 has clearly passed the Semantic Web, but in the long run there is a lot that Semantic Web technologies can contribute. We see especially promising advancements in developing and deploying lightweight semantic approaches.
The key idea is lightweight, not giant semantic engines grinding in a lights out data center.
Second, Dr. Studer asserts:
Once search engines index Semantic Web data, the benefits will be even more obvious and immediate to the end user. Yahoo!’s SearchMonkey is a good example of this. In turn, if there is a benefit for the end user, content providers will make their data available using Semantic Web standards.
The idea is that in this chicken and egg problem, it will be the Web page creators’ job to make use of semantic tags.
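The chicken-and-egg dynamic is easier to see with a concrete sketch. Assuming a page creator embeds RDFa-style property attributes (the vocabulary and page below are invented for illustration, not taken from SearchMonkey), a crawler can lift the structured facts with nothing more than Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser

class RDFaHarvester(HTMLParser):
    """Collect (property, value) pairs from RDFa-style markup."""
    def __init__(self):
        super().__init__()
        self.triples = []          # harvested (property, value) pairs
        self._pending = None       # property waiting for its text value

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            if "content" in attrs:   # value supplied inline as an attribute
                self.triples.append((attrs["property"], attrs["content"]))
            else:                    # value is the element's text content
                self._pending = attrs["property"]

    def handle_data(self, data):
        if self._pending and data.strip():
            self.triples.append((self._pending, data.strip()))
            self._pending = None

# Hypothetical page fragment a content provider might publish.
page = """<div>
  <span property="dc:title">Choose Chicago</span>
  <meta property="dc:date" content="2008-12-16"/>
</div>"""

harvester = RDFaHarvester()
harvester.feed(page)
print(harvester.triples)
```

The point of the sketch: once a page carries even this lightweight markup, an indexer gets clean facts instead of guessing from prose. Until creators bother, the harvester returns an empty list, which is the chicken-and-egg problem in one line.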
Finally, Dr. Studer identifies tools as an issue. He said:
One problem in the early days was that the tool support was not as mature as for other technologies. This has changed over the years as we now have stable tooling infrastructure available. This also becomes apparent when looking at this year’s Semantic Web Challenge. Another aspect is the complexity of some of the technologies. For example, understanding the foundation of languages such as OWL (being based on Description Logics) is not trivial. At the same time, doing useful stuff does not require being an expert in Logics – many things can already be done exploiting only a small subset of all the language features.
I am no semantic expert. I have watched several semantic centric initiatives enter the world and–somewhat sadly–watched them die. Against this background, let me offer three observations:
- Semantic technology is plumbing and like plumbing, semantic technology should be kept out of sight. I want to use plumbing in a user friendly, problem free setting. Beyond that, I don’t want to know anything about plumbing. Lightweight or heavyweight, I think some other users may feel the same way. Do I look at inverted indexes? Do you?
 - The notion of putting the burden on Web page or content creators is a great idea, but it won’t work. When I analyzed the five Programmable Search Engine inventions by Ramanathan Guha as part of an analysis for the late, great Bear Stearns, it was clear that Google’s clever Dr. Guha assumed most content would not be tagged in a useful way. Sure, if content was properly tagged, Google could ingest that information. But the core of the PSE invention was Google’s method for taking the semantic bull by the horns. If Dr. Guha’s method works, then Google will become the semantic Web because it will do the tagging work that most people cannot or will not do.
 - The tools are getting better, but I don’t think users want to use tools. Users want life to be easy, and figuring out how to create appropriate tags, inserting them, and conforming to “standards” such as they are is no fun. The tools will thrill developers and leave most people cold. Check out the tools section at a hardware store. What do you see? Hobbyists and tinkerers and maybe a few professionals who grab what they need and head out. Semantic tools will be like hardware: of interest to a few.
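Since I asked whether anyone looks at inverted indexes, here is the plumbing nobody wants to see. A minimal sketch in Python; the toy documents and whitespace tokenizer are my own inventions, and a real engine adds stemming, stopwords, and positional data:

```python
from collections import defaultdict

# Toy corpus standing in for crawled documents.
docs = {
    1: "semantic search for the enterprise",
    2: "enterprise search tools",
    3: "semantic web standards",
}

# The inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(*terms):
    """AND query: intersect the posting lists for each term."""
    postings = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

print(search("semantic"))              # documents mentioning "semantic"
print(search("enterprise", "search"))  # documents mentioning both terms
```

Users type a query and get a results list; the posting lists stay out of sight, which is exactly my point about semantic technology.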
 
In my opinion, the Google – Guha approach is the one to watch. The semantic Web is gaining traction, but it is in its infancy. If Google jump starts the process by saying, “We will do it for you”, then Google will “own” the semantic Web. Then what? The professional semantic Web folks will grouse, but the GOOG will ignore the howls of protest. Why do you think the GOOG hired Dr. Guha from IBM Almaden? Why did the GOOG create an environment for Dr. Guha to write five patent applications, file them on the same day, and have the USPTO publish five documents on the same day in February 2007? No accident tell you I.
Stephen Arnold, December 17, 2008
K-Now: Here and Now
December 17, 2008
Guest Feature by Dawn Marie Yankeelov, AspectX.com
I have been discussing progress in semantic knowledge structures with Entrepreneur and Researcher Sam Chapman of K-Now who has recently left the University of Sheffield, Department of Computer Science, in the United Kingdom to go full-time into the delivery of semantic technologies in the enterprise. His attendance at the ISWC 2008 has created some momentum to engage new corporations in a discussion on a recently presented paper on “Creating and Using Organisational Semantic Webs in Large Networked Organisations” by Ravish Bhagdev, Ajay Chakravarthy, Sam Chapman, Fabio Ciravegna and Vita Lanfranchi. Knowledge management has shifted as evidenced in his paper. He contends with others that a more localized approach based on a particular perspective of the world in which one operates is far more useful than a centralized company view. All-encompassing ontologies are not the answer, according to Chapman. In the paper, his team indicates:
A challenge for the Semantic Web is to support the change in knowledge management mentioned above, by defining tools and techniques supporting: 1) definition of community-specific views of the world; 2) capture and acquisition of knowledge according to them; 3) integration of captured knowledge with the rest of the organisation’s knowledge; 4) sharing of knowledge across communities.
At K-Now, his team is focused upon supporting large scale organizations to do just this: capturing, managing and storing knowledge and its structures, as well as focusing upon how to reuse and query flexible dynamic knowledge. Repurposing trusted knowledge in K-Now is not based on fixed corporate structures and portal forms, but rather on capturing knowledge in user alterable forms at the point of its generation. Engineering forms, for example, that assist in monitoring aerospace engines during operations worldwide can be easily modified to suit differing local needs. Despite such modifications, the system still captures integrated structured knowledge suitable for spotting trends. Making quantitative queries without any pre-agreed central schemas is the objective. This is possible, under K-Now’s approach, due to the use of agreed semantic technology and standards.
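K-Now’s implementation is not public, but the idea of quantitative queries without a pre-agreed central schema can be sketched with plain triples: each community captures facts in its own local vocabulary, and queries pattern-match rather than join against a fixed corporate schema. All identifiers and predicate names below are invented for illustration:

```python
# Each community records knowledge as (subject, predicate, object)
# triples using its own local vocabulary; no central schema exists.
triples = [
    ("engine-42", "site:uk/vibration", 0.8),
    ("engine-42", "site:uk/inspector", "j.smith"),
    ("engine-97", "site:de/schwingung", 1.3),
]

def match(subject=None, predicate=None, obj=None):
    """Tiny triple-pattern query: return triples matching the non-None fields."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# A quantitative query across communities: vibration-like readings over 1.0.
# A lightweight predicate mapping, not a pre-agreed schema, bridges vocabularies.
vibration_predicates = {"site:uk/vibration", "site:de/schwingung"}
high = [t for t in triples
        if t[1] in vibration_predicates
        and isinstance(t[2], float) and t[2] > 1.0]
print(high)
```

The design choice mirrors the paper’s argument: integration happens at query time through small mappings between community views, so local forms can change without breaking a central ontology.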
Amazon’s Approach to Staff Motivation
December 16, 2008
The London Times has a tendency to report news that is more like the made up stories that I thought only big US newspapers offered. A colleague in Europe sent me a link to this London Times’s story here about Amazon, the much loved online retailer run by the world’s smartest man. If the story in the London Times is true, Amazon seems to have a personnel touch closer to the idiosyncrasies reported by Richard von Krafft-Ebing (Amazon link is here) than those softies Robert L. Mathis and John H. Jackson (Amazon link is here). The Times’s headline is certainly a grabber, “Revealed: Amazon Staff Punished for Being Ill.” You can read the zippy prose here. As a really fragile goose, I find the thought of my game master putting me in the roaster because my feathers fall out quite troubling. That’s why I am not sure the Times has reported the whole story. But what Claire Newell and Daniel Foggo stated is chilling; to wit:
Warned that the company refuses to allow sick leave, even if the worker has a legitimate doctor’s note. Taking a day off sick, even with a note, results in a penalty point. A worker with six points faces dismissal.
I would be a gone goose for sure. The Times’s editors allowed Amazon to respond. I found this comment interesting:
We want our associates to enjoy working at Amazon.co.uk and the interests of all workers are represented by a democratically elected employee forum who meets regularly with senior management. This forum was consulted before the workforce elected to reduce breaks to 15 and 20 minutes on an eight hour shift in order to cut the total working day by half an hour.
Amazon has a remarkable balance sheet. Its expenditures for R&D and infrastructure seem modest. Compared to companies like Google and Microsoft, Amazon seems to have cracked the code for creating massive Web centric systems on a shoestring. Furthermore, Amazon has been quicker than either Google or Microsoft to release commercial Web services such as the Amazon cloud based computing service and its online storage system. Amazon also has its home grown search system. The guru of search high tailed it to Google. I wonder if the Amazon holiday personnel policies influenced that decision?
If the Times’s story is correct, maybe some of that balance sheet magic comes by applying Herr Krafft-Ebing’s use cases to staff. I bet I could work long hours if I wore the cruel shoes brilliantly described by Herr Krafft-Ebing. If you are not familiar with Herr Krafft-Ebing and his research into human motivation, dive in here. Just make sure no colleagues or children are peeking over your shoulder. Please, keep in mind that I am pointing to a London Times’s news story and I am not sure that story is rolling down the same railroad tracks as I. I do fancy the image of Amazon managers in Herr Krafft-Ebing’s high heels, though.
Stephen Arnold, December 16, 2008
Dead Tree Mouthpiece Asks What Is XML
December 16, 2008
Search and content processing vendors are reasonably comfortable with XML or documents in Extensible Markup Language formats. I don’t think much of the content management industry, but I know that most of these outfits can figure out when and how to use XML. Even Word 2007 takes a run at XML. Like an inexperienced soccer player, sometimes Microsoft gets to the right spot and then misses the goal. But the company is trying. You will find “What the Hell Is XML? And Should It Really Make Any Difference to My Business?” in Publishers Weekly here a good read. The author is Mike Shatzkin, and he does a good job of explaining that publishers have to slice and dice their content; that is, repurpose information to make new products or accelerate the creation of new information. He then presents XML as “information to go”. I think the notion is to embed XML tags into content so that software can do some of the work once handled by expensive human editors. For me, the most interesting comment in the article was this passage:
Here’s what we call the Copernican Change. We have lived all our lives in a universe where the book is “the sun” and everything else we might create or sell was a “subsidiary right” to the book, revolving around that sun. In our new universe, the content encased in a well-formed XML file is the sun. The book, an output of a well-formed XML file, is only one of an increasing number of revenue opportunities and marketing opportunities revolving around it. It requires more discipline and attention to the rules to create a well-formed XML file than it did to create a book. But when you’re done, the end result is more useful: content can be rendered many different ways and cleaved and recombined inexpensively, unlocking sales that are almost impossible to capture cost-effectively if you start with a “book.”
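Mr. Shatzkin’s “content as the sun” idea is, in practice, one well-formed XML source rendered into many outputs. A minimal sketch using Python’s standard-library ElementTree; the element names and sample chapter are invented for illustration, not drawn from any publisher’s DTD:

```python
import xml.etree.ElementTree as ET

# One well-formed XML source; the "book" is merely one rendering of it.
source = """<chapter id="c1">
  <title>The Copernican Change</title>
  <para>Content, not the book, is the sun.</para>
</chapter>"""

root = ET.fromstring(source)
title = root.findtext("title")
paras = [p.text for p in root.findall("para")]

# Rendering 1: a print-ready plain text fragment.
print(f"{title}\n{'=' * len(title)}\n" + "\n\n".join(paras))

# Rendering 2: a Web fragment, cleaved from the same source.
html = f"<h1>{title}</h1>" + "".join(f"<p>{p}</p>" for p in paras)
print(html)
```

The discipline Shatzkin describes lives in the source file: once the tagging is well formed, each new product is another small transform, which is the “unlocking sales” part of his argument.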
XML and its antecedents have been around for 30 years. Anyone remember CALS or SGML? The metaphor of Copernicus’ insights into how the solar system worked seems to suggest a new world view. Okay, but after Copernicus there was a period of cultural adjustment. I don’t think the dead tree crowd has the luxury of time. My recollection is that the clock strikes midnight for the New York Times in a couple of months. Sam Zell has already embraced bankruptcy as a gentlemanly way of dealing with the economics of the dead tree business model. The Newsweek Magazine staff is working on résumés and Web logs, not the jumbo next issue. Heliocentrism is a nifty concept, but it won’t work; like Copernicus’ De revolutionibus orbium coelestium, the remedy will be finally delivered from the print shop too late to matter. Oh, Copernicus allegedly got the book right before he ascended to his caelum.
I think that it is too late for most of the dead tree outfits. Fitting in a way, I suppose: Copernicus died just as his insights became available to a clueless public… printed on paper. A possible symmetry exists between Mr. Shatzkin’s reference to Copernicus and what has happened to most traditional publishers.
Stephen Arnold, December 16, 2008
Google: Worms Are Turning
December 16, 2008
Google is not accustomed to having its plans jeopardized by the likes of the Wall Street Journal. After a decade of baffling the pundits with free Odwalla beverages and lunch entertainment from the likes of Tony Bennett, the GOOG is thrashing. To add to the misery of the Wall Street Journal story here, the SFGate online site published “Google Off List of 20 Most Trusted Companies.” You can read this story here. American Express and eBay are allegedly perceived as more trustworthy than Google. Wow. eBay and PayPal. More trusted. When will other shoes begin to drop? Last week I listened as a Googler ran the game plan; that is, did a standard presentation about the firm’s capabilities. The presentation was warm, interesting, and what is on the Google Web site. Googlers, I opine, only know what Mother Google wants them to know. I have often mentioned Cyrus, a high ranking Googler, who told me I Photoshopped a Google report that looked a lot like a dossier prepared by the police on a suspect. I pointed out to dear Cyrus that the image came from a Google patent document. The Googler did not believe me. Now you try to find in Google a hit on my name, my study Google Version 2.0, and patents. You won’t be able to find it. Somehow the links to my study of Google patents are really tough to find. I find this amusing. I wonder if Google finds my analyses a wee bit off putting? Now the GOOG is battling a dead tree traditional media company and finding itself no longer among the most trusted companies. What’s amazing to me is that it has taken a decade for pundits, wizards, and assorted Google search experts to figure out some of Google’s more interesting initiatives. There’s more in the closet. I can hardly wait to see what antics dead tree media and the GOOG will display. For a quick primer, check out my Google studies here.
Stephen Arnold, December 16, 2008
Wall Street Journal Figures Out What Google Is Doing, Gets Criticized
December 15, 2008
The Wall Street Journal’s Vishesh Kumar and Christopher Rhoads stumbled into a hornet’s nest. I think surprise may accompany these people and their editor for the next few days. The story is “Google Wants Its Own Fast Track on the Web,” which is here at the moment, but it will probably disappear or be unavailable due to heavy click traffic. Read it quickly so you have the context for the hundreds of comments this story has generated. Pundits whose comments I found useful are the Lessig Blog, Om Malik’s GigaOM, and Google’s own comment here.
The premise of the article is that the GOOG wants to create what Messrs. Kumar and Rhoads call “a fast lane.” In effect, the GOOG wants to get preferential treatment for its traffic. The story wanders forward with references to network neutrality, which is probably going to die like a polar bear sitting on an ice chunk in the Arctic circle. Network neutrality is a weird American term that is designed to prevent a telco from charging people based on arbitrary benchmarks. The Bell Telephone Co. figured out a long time ago that differential pricing was the way to keep the monopoly in clover. The lesson has not been forgotten by today’s data barons. The authors drag in the president elect and wrap up with the Google-coined phrase “OpenEdge.”
Why the firestorm? Here are my thoughts:
First, I prepared a briefing for several telcos in early 2008. My partner at the Mercer Island Group and I did a series of briefings for telecommunication companies. In that briefing, I showed a diagram from one of Google’s patent documents, enriched with information from Google’s technical papers. The diagram showed Google as the intermediary between a telco’s mobile customers and the Internet. In effect, with Google in the middle, the telco would get low latency rendering of content in the Googleplex (my term for Google’s computer and software infrastructure). The groups to a person snorted derision. I recall one sophisticated telco manager saying in the jargon of the Bell head, “That’s crap.” I had no rejoinder to that because I was reporting what my analyses of Google patents and technical papers said. So, until this Wall Street Journal story appeared, the notion of Google becoming the Internet was not on anyone’s radar. After all, I live in Kentucky and the Mercer Island Group is not McKinsey & Co. or Boston Consulting Group in terms of size and number of consultants. But MIG has some sharp nails in its toolkit.
Second, in my Google Version 2.0, which is mostly a summary of Google’s patent documents from August 2005 to June 2007, I reported on a series of five patent documents, filed the same day and eventually published on the same day by the ever efficient US Patent & Trademark Office. The five documents disclosed a big, somewhat crazy system for sucking in data from airline ticket sellers, camera manufacturers, and other structured data sources. The invention figured out the context of each datum and built a great big master database containing the data. The idea was that some companies could push the data to Google. Failing that, Google would use software to fill in the gaps and therefore have its own master database. Bear Stearns was sufficiently intrigued by this analysis to issue a report to its key clients about this innovation. Google’s attorneys asserted that the report contained proprietary Google data, an objection that went away when I provided the patent document number and the url to download the patent documents. Google’s attorneys, like many Googlers, are confident but sometimes uninformed about what the GOOG is doing with one paw while the other paw adjusts the lava lamps.
Third, in my Beyond Search study for the Gilbane Group, I reported that Google had developed the “dataspace” technology to provide the framework for Google to become the Internet. Sue Feldman at IDC, the big research firm near Boston, was sufficiently interested to work with me to create a special IDC report on this technology and its implications. The Beyond Search study and the IDC report went to hundreds of clients and was ignored. The idea of a dataspace with metadata about how long a person looks at a Web page and the use of meta metadata to make queries about the lineage and certainty of data was too strange.
What the Wall Street Journal has stumbled into is a piece of the Google strategy. My view is that Google is making an honest effort to involve the telcos in its business plan. If the telcos pass, then the GOOG will simply keep doing what it has been doing for a decade; that is, building out what I called in January 2008 in my briefings “Google Global Telecommunications”. Yep, Google is the sibling of the “old” AT&T model of a utility. Instead of just voice and data, GGT will combine smart software with its infrastructure and data to marginalize quite a few business operations.
Is this too big an idea today? Not for Google. But the idea is sufficiently big to trigger the storm front of comments. My thought is, “You ain’t seen nothing yet.” Ignorance of Google’s technology is commonplace. One would have thought that the telcos would take Google seriously by now. Guess not. If you want to dig into Google’s technology, you can still buy copies of my studies:
- The Google Legacy: How Google’s Internet Search Is Transforming Application Software, Infonortics, 2005 here
 - Google Version 2.0: The Calculating Predator, Infonortics, 2007 here
 - Beyond Search: What to Do When Your Enterprise Search System Doesn’t Work, Gilbane Group, 2008 here
 
Bear Stearns is out of business, so I don’t know how you can get a copy of that 40 page report. You can order the dataspaces report directly from IDC. Just ask for Report 213562.
If you want me to brief your company on Google’s technology investments over the last decade, write me at seaky2000 at yahoo dot com. I have a number of different briefings, including the telco analysis and a new one on Google’s machine learning methods. These are a blend of technology analysis and examples in Google’s open source publications. I rely on my analytical methods to identify key trends and use only open source materials. Nevertheless, the capabilities of Google are–shall we say–quite interesting. Just look at what the GOOG has done in online advertising. The disruptive potential of its other technologies is comparable. What do you know about containers, janitors, and dataspaces? Not much I might suggest if I were not an addled goose.
Oh, let me address Messrs. Kumar and Rhoads, “You are somewhat correct, but you are grasping at straws when you suggest that Google requires the support and permission of any entity or individual. The GOOG is emerging as the first digital nation state.” Tough to understand, tough to regulate, and tough to thwart. Just ask the book publishers suggest I.
Stephen Arnold, December 15, 2008
	
