Google and Real Time Maps
August 11, 2009
A happy quack to the reader who alerted me to GoogleMapMania’s “Real Time Google Maps”. The article contains a number of links to real time Google Maps created by developers. The one that I found most useful was the Chicago Transit Authority map. Google has a burgeoning transportation services business. Those operating bus, rail, and shuttle services may want to take note of this CTA-centric gizmo.
Stephen Arnold, August 11, 2009
Google Marketing to the Enterprise
August 10, 2009
I usually find Larry Dignan’s view of information technology spot on. I was not too surprised by the argument in his “Google’s Campaign for Apps Doesn’t Address the IT Data Elephant in the Room.” The key passage in the article for me was:
In fact, nothing in Google’s marketing toolbox—the viral emails, the YouTube videos and the posters you can plaster near the water cooler—are going to change fact that your corporate data is hosted by Google. If Google really wants to entice the enterprise it should have skipped the YouTube videos and allowed companies to store some of their own data.
I agree that Google has not done a good job of addressing the “Google has your data” argument.
Google has some patent documents that describe clever ways to have some data processed by Google’s systems and other data kept on a client’s servers, with the client retaining control over the data. I am sitting in an airport at 4:50 am Eastern and don’t have access to my Google files. My recollection is that Google has been beavering away at systems and methods that provide different types of control.
The problem is the loose coupling between engineering and marketing at Google. The push to the enterprise strikes me as a way to capitalize on several market trends for which Google has data. Keep in mind that Google does not take actions in a cavalier way. Data drive most decisions, based on my research. Then group think takes over, and the result is a way to harvest low-hanging fruit.
After some time passes, engineering methods follow along that add features, functions, and some robustness. A good example is the Google Search Appliance. In its first version, security was lax. The present version provides a number of security features. Microsoft uses the same approach, which has caused me to wait until version 3 of a Microsoft product before I jump on board. For Google, the process of change is incremental and much less visible.
My hunch is that once Google’s “go Google” program responds to the pent-up demand for more hands-on support for the appliance, Apps, and maps, then the Google will add more features. The timeline may well be measured in years.
If a company wants to use Google technology to reduce costs now and reduce to some degree the hurdles that traditional information technology approaches put in the way of senior management, the “go Google” program will do its job.
Over time, Google will baby step forward. Those looking for traditional approaches to enterprise software will have a field day criticizing Google and its approach. My thought is that Google seems to be moving forward with serious intent.
I think there will be even louder and more aggressive criticism of Google’s new enterprise push. In my opinion, that criticism will not have much of an impact on the Google. The company seems to be making money, growing, and finding traction despite its unorthodox methods.
Will Google “win” in the enterprise sector? I don’t know. I do know that Google is disruptive, and that the effects of the disruption create opportunities. Traditional enterprise software companies may want to look at those opportunities, not argue that the ways of the past are the ways of the future. The future will be different from what most of us have spent years learning to love. Google’s approach is based on the fact that customers *want* Google solutions, particularly applications that require search and access to information. That is not what traditional information technology professionals want.
Stephen Arnold, August 10, 2009
SAP and Its Evolving Business Model
August 10, 2009
First, the Tibco rumor and now the SAP on-demand software strategy. Managing Automation’s “SAP Unveils an On-Demand Software Strategy for Large Enterprise Customers” surprised me. I had dismissed SAP’s cloud chatter as fog. Not so, according to Jeff Moad, a member of the Managing Automation editorial staff. He wrote:
At a recent SaaS conference in Amsterdam, John Wookey, SAP’s executive vice president for large enterprise on-demand, said the company plans to roll out a series of SaaS products for large enterprises that integrate tightly with SAP’s on-premise Business Suite and run on the Java-based on-demand platform that SAP acquired along with Frictionless Commerce in 2006. SAP will concentrate on selling the SaaS offerings to existing users of its business suite rather than new accounts, Wookey said. SAP’s SaaS offerings for large enterprises will include some existing and some new products. Existing products include SAP’s CRM on-demand and e-sourcing services. SAP CRM on-demand will be migrated to the Frictionless on-demand platform…
You can get a consultant’s viewpoint and some verbiage from SAP top brass. For me, the article triggered three thoughts:
- How will SAP make up the shortfall between the revenue from its traditional approach to licensing and deploying its software and the “cloud” model?
- If there is strong uptake for SAP cloud services, from where will the engineers needed to service the clients come? If SAP trims down the functionality, won’t the savvy buyer look for lower cost cloud options or just fire up some coders to create a solution using Google or Microsoft functions?
- What will customers be getting? Will this service be a 2009 version of Microsoft’s early push into cloud computing?
- Whither TREX search?
I don’t have answers, and I didn’t see them in the Managing Automation write up.
Stephen Arnold, August 11, 2009
Google’s Data Center Strategy Questioned
August 10, 2009
Google fired up its engineering engines in the period between 1996 and 2002. As the company entered the run up to its initial public offering, Google had locked and loaded on some core principles. I am sure you have internalized these by now. It has been more than 11 years since the Google came on the search world’s radar.
The Register’s headline “Will Google Regret the Mega Data Center?” raises an interesting question. The story was written in August 2009, more than a decade after the GOOG launched itself. Can decade old technology remain viable in today’s wild and crazy technical world? Cade Metz reported:
In the wake of Microsoft’s decision to remove its Windows Azure infrastructure from the state of Washington – where a change in local tax law has upped the price of building out the proverbial cloud – the company’s former director of data center services has warned that Microsoft and other cloud-happy giants may soon find that the mega data center isn’t all it’s cracked up to be. “[Large cloud providers] are burning through tremendous amounts of capital believing that these facilities will ultimately give them strategic advantage,” reads a blog post from Mike Manos, who recently left Microsoft for data center outfit Digital Realty Trust.
Yikes! Google. A fat and out-of-step dinosaur?
Google has three dozen data centers, a model that Microsoft has emulated. Google has, according to chatter, about a million servers humming along. What is The Register’s take on this important issue? You will have to read Mr. Metz’s article.
My view:
Google and the Open Source Card
August 7, 2009
Digital video is a high-stakes game and only high rollers can play. Hulu.com has the backing of several motivated outfits with deep pockets. Smaller video sites are interesting, but the punishing costs associated with bit-dense media are going to be too much for most of these companies over the next couple of years.
Google is committed to video. A big chunk of the under-40 crowd loves to fiddle with, wallow in, and learn via video. I don’t, but that does not make any difference whatsoever.
There are two different views of the Google acquisition of On2’s video compression technology. On one side of the fence is a traditional media company, the Guardian newspaper. You can read “Google Buy Up Will Help Cut YouTube Costs.” The idea is that Google is not making money via YouTube.com. Therefore, the all-stock deal worth about $110 million gets Google some compression technology that will reduce bandwidth costs and deliver other efficiencies. The On2 technology also has the potential to give Google an edge in video quality. This is an AP story, so I don’t want to quote from the item. I do want to point out that, on the surface, this seems like a really great analysis.
On the other side of the fence is the viewpoint expressed in The Register. Its story “Is Google Spending $106.5 Million to Open Source a Codec?” is quite different. Cade Metz, a good thinker in the opinion of the goslings here in Harrod’s Creek, wrote:
But if you also consider the company’s so far fruitless efforts to push through a video tag for HTML 5 – the still gestating update to the web’s hypertext markup language – the On2 acquisition looks an awful lot like an effort to solve this browser-maker impasse.
Mr. Metz sees the On2 buy as a way for Google to offer an alternative video codec, one that sidesteps some issues with H.264 and other beasties in the video jungle.
In my opinion, The Register is closer to the truth than the Guardian. Google is playing an open source trump card. Making open source moves delivers two benefits. The first is a short term solution to the hassle over video standards: Google offers an attractive alternative to the issues described by Mr. Metz. The second is that Google reaps the benefits of contributing to open source in a substantive way.
Open source is a major threat to Microsoft and some other enterprise software vendors. Google is playing a sophisticated game and playing that game well in our opinion. The Register’s story gets it; the Guardian’s story does not.
Stephen Arnold, August 7, 2009
Oracle, Cloud Computing, and Search
August 6, 2009
I recall that Oracle was once skeptical of cloud computing. I also remember when Oracle bought me breakfast in New York to explain the wonders of Secure Enterprise Search 10g. The world has changed. I read “Oracle To Keynote Cloud Computing Expo” and did a quick double take. The Sys Con publication stated:
Private clouds for the exclusive use of one enterprise can however mitigate these concerns by giving the enterprise greater control. In a keynote address to be given at SYS-CON’s 4th International Cloud Computing Conference & Expo, Richard Sarwal (pictured), SVP of Development for Oracle Enterprise Manager, will explore how enterprises are likely to adopt public and private cloud computing, building on a foundation of virtualization infrastructure and management systems. The keynote will be titled: “Cloud Computing: Separating Hype from Reality.”
Now I know that cloud computing within the Oracle embrace is good. But what about Secure Enterprise Search 10g, on premises or from the cloud?
Stephen Arnold, August 6, 2009
Online Trail
August 6, 2009
Short honk: Here is an example of the type of online trail that one leaves. Note that the Google search history was obtained by fiddling with user name and password. The link may be removed. I verified it at 10 pm Eastern on August 6, 2009.
Stephen Arnold, August 6, 2009
Flickr Thunderstorms
August 6, 2009
Right after inking a yet-to-be-approved deal with Microsoft, Yahoo rolled out enhancements to Flickr’s image search. If you have not tried the new-and-improved Flickr, click here and give the system a whirl. My test queries were modest. I need pictures of train wrecks, collapsed houses, and skiers doing headers into snow drifts. These images amuse me, and I find them useful in illustrating the business methods of some dinosaur-like organizations. The search “train wreck” worked. I received image results that were on a par with Google’s. Yahoo’s Flickr did not allow me to NOT out JPEGs or narrow the query to line art. The system was fine. My query for “house collapse” was less satisfying, but the results were usable. I had to click and browse before I found a suitable image for a company shaken by financial upheavals and management decisions.
Source: http://www.flickr.com/photos/tbruce/193295658/
What surprised me about Flickr was the story “Cloud Storage Nightmare with Flickr.” Hubert Nguyen reported:
A Flickr user learned the hard way when his account got hacked and 3000 of his photos were deleted by the hacker, who also closed his account. The account owner is now campaigning against Flickr’s support. You can imagine how mad that person was, but it gets worse: Flickr cannot retrieve his data and we guess that this is because they were deleted in a seemingly “legitimate” manner (from Flickr’s point of view). We think that Flickr is built to survive some catastrophic hardware failure, but if an account is closed, the data is immediately deleted – permanently.
This strikes me as a policy issue, but it underscores the types of challenges Microsoft may face as it tries to free itself from the thorn bush. If the revenue from the yet-to-be-approved tie-up does not produce a truckload of dough, the situation could become even thornier for Microsoft.
Stephen Arnold, August 6, 2009
How to Build a Catamaran, Not an Aircraft Carrier
August 5, 2009
I have been thinking about the “aircraft carrier” metaphor I used to describe Microsoft. Aircraft carriers are good at many nautical tasks. Aircraft carriers, however, may not work in certain situations. What would I do if I were building an online service with content processing, search, and discovery functions? Part of the answer was in a link sent to me by one of my two or three readers: “Building a Data Intensive Web Application with Cloudera, Hadoop, Hive, Pig, and EC2”. The Web page is a tutorial. My only concern is that the Amazon technology is used for some of the plumbing. The Orwell incident gives me pause. High-handed actions taken without the type of communications I expect would make me look for an alternative cloud solution. (Hurry, gentle Google, hurry, and release your Amazon killer.) The difference between the old style aircraft carrier approach to online services and the method in this tutorial is significant. I think that Cloudera presented a useful chunk of information.
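The Cloudera tutorial covers Hadoop, Hive, Pig, and EC2 in detail. To show the flavor of the catamaran approach, here is a minimal, hypothetical sketch in Python of the Hadoop Streaming style: small, cheap scripts that read stdin, emit tab-separated key/value pairs, and leave distribution, sorting, and fault tolerance to the framework. This is my own illustration, not code from the tutorial, and the file name and word count task are assumptions for the example.

```python
# wordcount_streaming.py -- a minimal, hypothetical sketch (not from the
# Cloudera tutorial) of the Hadoop Streaming style: plain scripts that read
# stdin and emit tab-separated key/value pairs, leaving distribution to the
# framework.
import sys
from itertools import groupby

def map_stream(lines):
    """Mapper: emit (term, 1) for every whitespace-delimited token."""
    for line in lines:
        for token in line.strip().lower().split():
            yield token, 1

def reduce_stream(pairs):
    """Reducer: sum counts per term. sorted() stands in for Hadoop's
    shuffle/sort phase, which delivers mapper output grouped by key."""
    for term, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield term, sum(count for _, count in group)

if __name__ == "__main__":
    # Local stand-in for: cat input | mapper | sort | reducer
    for term, total in reduce_stream(map_stream(sys.stdin)):
        print(f"{term}\t{total}")
```

Run it locally with `cat some_text.txt | python wordcount_streaming.py`; split into separate mapper and reducer scripts, the same shape is what Hadoop Streaming distributes across a Cloudera cluster on EC2.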
Stephen Arnold, August 6, 2009
Microsoft Embraces Scale
August 4, 2009
The year was 2002. A cash-rich, confused outfit paid me to write a report about Google’s database technology. In 2002, Google was a Web search company with some good buzz among the alleged wizards of Web search. Google did not have much to say when its executives gave talks. I recall an exchange I had with Larry Page at the Boston Search Engine Meeting in 1999. The topic? Truncation. Now that has real sizzle with the average Web surfer. I referenced an outfit called InQuire, which supported forward truncation. Mr. Page asserted that Google did not have to fool around with truncation. The arguments bored even those at the Boston meeting who were search experts.
I realized then that Google had some very specific methods, and those methods were not influenced by the received wisdom of search as practiced at Inktomi or Lycos, to name two big players in 2000. So I began my research by looking for differences between what Google engineers were revealing in their research papers and what the rest of the search industry was doing. I compiled a list of differences. I won’t reference my Google studies, because in today’s economic climate, few people are buying $400 studies of Google or much else for that matter.
I flipped through some of the archives I have on one of my backup devices. I did a search for the word “scale”, and I found that it was used frequently by Google engineers and also by Google managers. Scale was a big deal to Google from the days of BackRub, according to my notes. BackRub did not scale. Google, scion of BackRub, was engineered to scale.
The reason, evident to Messrs. Brin and Page in 1998, was that operators of existing Web search systems ran out of money for the exotic hardware needed to keep pace with the two rapidly dividing cells of search: traffic and new or changed content. The stroke of genius, as I have documented in my Google studies, was that Google tackled the engineering bottlenecks. Other search companies such as Lycos lived with the input/output issues, the bottlenecks of hitting the disc for search results, and updating indexes by brute force methods. Not the Google.
Messrs. Brin and Page hired smart men and women whose job was to “find a solution”. So engineers from AltaVista, Bell Labs, Sun Microsystems, and other places where bright folks get jobs worked to solve these inherent problems. Without solutions, there was zero chance that Google could avoid the fate of the Excites, the OpenText Web index, and dozens of other companies that had no way to grow without consuming the equivalent of a gross domestic product on hardware, disc space, bandwidth, chillers, and network devices.
Google’s brilliance (yes, brilliance) was to resolve in a cost effective way the technical problems that were deal breakers for other search vendors. AltaVista was a pretty good search system, but it was too costly to operate. When the Alpha computers were online, you could melt iron ore, so the air conditioning bill was a killer.
Keep in mind that Google has been working on resolving bottlenecks and plumbing problems for more than 11 years.
I read “Microsoft’s Point Man on Search—Satya Nadella—Speaks: It’s a Game of Scale” and I shook my head in disbelief. Google operates at scale, but scale is a consequence of Google’s solutions to getting results without choking a system with unnecessary disc reads. Scale is a consequence of using dirt cheap hardware that is mostly controlled by smart software interacting with the operating system and the demands users and processes make on the system. Scale is a consequence of figuring out how to get heat out of a rack of servers and replacing conventional uninterruptible power supplies with on-motherboard batteries from Walgreen’s to reduce electrical demand, heat, and cost. Scale comes from creating certain proprietary bits of hardware AND software to squeeze efficiencies out of problems caused by the physics of computer operation.
If you navigate to Google and poke around, you will discover “Publications by Googlers”. I suggest that anyone interested in Google browse this list of publications. I have tried to read every Google paper, but as I age, I find I cannot keep up. The Googlers have increased their output of research into plumbing and other search arcana by a factor of 10 since I first began following Google’s technical innovations. Here’s one example to give you some context for my reaction to Mr. Nadella’s remarks, reported by All Things Digital; to wit: “Thwarting Virtual Bottlenecks in Multi-Bitrate Streaming Servers” by Bin Liu and Raju Rangaswami (academics) and Zoran Dimitrijevic (Googler). Yep, there it is in plain English—an innovation plus hard data that shows that Google’s system anticipates bottlenecks. Software makes decisions to avoid these “virtual bottlenecks.” Nice, right? The bottlenecks imposed by the way computers operate and the laws of physics are identified BEFORE they occur. The Google system then changes its methods in order to eliminate the bottleneck. Think about that the next time you wait for Oracle to respond to a query across a terabyte set of data tables or you wait as SharePoint labors to generate a new index update. Google’s innovation is predictive analysis and automated intervention. This explains why it is sometimes difficult to explain why a particular Web page declined in a Google set of relevance-ranked results. The system, not humans, is adapting.
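To make the “predict, then intervene” idea concrete, here is a minimal, hypothetical Python sketch. It is my illustration of the general pattern, not Google’s code and not the algorithm in the “Thwarting” paper: a dispatcher estimates the bandwidth a new video stream would add to each disk and places the stream, or steps down its bitrate, before any disk saturates rather than after queues back up.

```python
# bottleneck_dodger.py -- hypothetical sketch of predictive intervention:
# forecast the load a new stream would add and act BEFORE a disk saturates.
# This illustrates the general idea only; it is not Google's implementation.
from dataclasses import dataclass, field

@dataclass
class Disk:
    name: str
    capacity_mbps: float                          # sustainable read bandwidth
    streams: list = field(default_factory=list)   # active stream bitrates (Mbps)

    def headroom(self) -> float:
        return self.capacity_mbps - sum(self.streams)

def admit_stream(disks, bitrates_mbps):
    """Place a new stream at the highest bitrate whose forecast leaves
    headroom on some disk; step down the bitrate before refusing outright."""
    for bitrate in sorted(bitrates_mbps, reverse=True):
        candidates = [d for d in disks if d.headroom() >= bitrate]
        if candidates:
            target = max(candidates, key=Disk.headroom)
            target.streams.append(bitrate)        # intervene pre-emptively
            return target.name, bitrate
    return None, None                             # every placement predicts a bottleneck

if __name__ == "__main__":
    disks = [Disk("disk-a", 50.0), Disk("disk-b", 50.0)]
    for _ in range(28):
        print(admit_stream(disks, bitrates_mbps=[4.0, 2.0, 1.0]))
```

The point is the ordering: the admission check runs before the stream starts, so the system never has to recover from a saturated disk. The real multi-bitrate streaming problem involves seek patterns, buffer occupancy, and mid-stream bitrate switching rather than a single bandwidth number, but the predict-first shape is the same.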
I understand the frustration that many Google pundits, haters, and apologists express to me. But if you take the time to read Google’s public statements about what it is doing and how it engineers its systems, the Google is quite forthcoming. The problem, as I see it, has two parts. First, Googlers write for those who understand the world as Google does. Notice the language of the “Thwarting” paper. Have you thought about multi-bitrate streaming servers in a YouTube.com type of environment? YouTube.com has lots of users and streams a lot of content. Google’s notion of clarity is on display in the language of that paper.
Second, very few people in the search business deal with the user loads that Google experiences. Looking up the location of one video and copying it from one computer to another is trivial. Delivering videos to a couple of million people at the same time is a different class of problem. So, why read the “Thwarting” paper? The situation described does not exist for most search companies or streaming media companies. The condition at Google is, by definition, an anomaly. Anomalies are not what make most information technology companies’ hearts go pitter-patter more quickly. Google has to solve these problems or it is not Google. A company that is not Google does not have these Google problems. Therefore, Google solves problems that are irrelevant to 99 percent of the companies in the content processing game.
Back to Mr. Nadella. This comment sums up what I call the Microsoft Yahoo search challenge:
Nadella does acknowledge in the video interview here that Microsoft has not been able to catch up with Google and talks about how that might now be possible.
I love the “might”. The thoughts that went through my mind when I worked through this multimedia article from All Things Digital were:
- Microsoft had access to similar thinking about scale in 1999. Microsoft hired former AltaVista engineers, but the Microsoft approach to data centers is a bit like the US Navy’s approach to aircraft carriers: more new stuff has been put on a design that has remained unchanged for a long time. I have written about Microsoft’s “as is” architecture in this Web log, with snapshots of the approach at three points in time.
- Google has been unchallenged in search for 11 years. Google has an “as is” infrastructure capable of supporting more than 2,200 queries per second as well as handling other modest tasks such as YouTube.com, advertising, maps, and enterprise applications. In 2002, Google had not figured out how to handle high-load reads and writes because Google focused on eliminating disc reads and gating writes. Google solved that problem years ago.
- Microsoft has to integrate the Yahoo craziness into the Microsoft “as is”, aircraft carrier approach to data centers. The affection for Microsoft server products is strong, but adapting to Yahoo search innovations will require some expensive, time-consuming engineering.
In short, I am delighted that Mr. Nadella has embraced scale. Google is becoming more like a tortoise, but I think there was a fable about the race between the tortoise and the hare. Google’s reflexes are slowing. The company has a truck load of legal problems. New competitors like Collecta.com are running circles around Googzilla. Nevertheless, Microsoft has to figure out the Google problem before the “going Google” campaign bleeds revenue and profits from Microsoft’s pivotal business segments.
My hunch is that Microsoft will run out of cash before dealing the GOOG a disabling blow.
Stephen Arnold, August 4, 2009

