Apple May Challenge Google in Advertising
September 28, 2010
Google is mostly about advertising revenue, not search, in my opinion. The shift took place sometime in the 2006 to 2007 period. My yardstick continues to be Google’s technical documents in open sources: patent applications, blog posts, and published papers.
Now Business Week has added some information that points to a possible weakening not just of Google’s grip on the increasingly important mobile online ad market but also of Microsoft’s and other companies’ prospects. “Apple Threatens Search Giants’ Mobile Ad Shares” reported:
Apple may be gaining share in the U.S. mobile advertising market this year at the expense of Google and Microsoft. Apple will end the year with 21 percent of the market, according to estimates provided to Businessweek.com by researcher IDC. Google’s share will drop to 21 percent, from 27 percent last year, when combined with results from AdMob, the ad network it bought in May. Microsoft will drop to 7 percent, from 10 percent.
These types of data are almost always interesting, and they are based on methods that are not described in detail. Let’s assume that the Apple iAd system is operating as described. Business Week notes that no company is a slam dunk.
Nevertheless, our view of these data is that the company most likely to be encouraged by the write up is not Apple. Apple understands the value of its customer base and its methods of providing access to Apple’s customers.
Our take is that Facebook is in an ideal position to leverage its “members” and the data the company has about these individuals. With Google’s approach relatively well known and Apple’s becoming increasingly clear, Facebook can sit back, tweak its online ad offerings, and use a “me too” approach when its mobile tactics become a reality later this year.
Will Apple’s push into social with its somewhat overly visible “Ping” link help Apple cope with Facebook? Can Google respond to the social dominance of Facebook and the Apple hardware/software ecosystem in rich media?
We don’t know the answer, but Google may be the company with the most significant challenge. And what happens to search? The odds seem to be rising that search will become the servant of online advertising. Search is a means to generate ad revenue, not a way to help users solve an information problem. If we are correct, this is an important moment in findability. Any pretense to objectivity in public Web search results may be swept away.
Stephen E Arnold, September 28, 2010
Freebie
Eqentia: Another Aggregation Play
September 28, 2010
The newspaper is no longer the most sought-after source when it comes to finding the latest information. More and more people are putting down their papers and turning to online news to keep themselves informed. The company Eqentia aims to build a business portal that will have the same prestige for entrepreneurs that Google News has for the average user who wants to be informed about general news and developments.
The article “Eqentia.com – Like Google News but For Businessmen” on KillerStartups.com explains a little more about the site. Basically, the company wants to allow users to customize their news options and get only the business news they want. Users can get the latest news from their business sector, keep an eye on the competition, or see consumer patterns that can be helpful when coming up with marketing or media campaigns. A similar setup is already used by Silobreaker, which is dedicated to providing users with relevant news. Users perform automated searches in order to find in-depth and relevant news instead of unsubstantiated chatter. Both sites give new meaning to the phrase “have it your way.”
The challenge seems to be marketing, not technology. There is an abundance of choices.
April Holmes, September 28, 2010
Freebie
White Paper on Data Mashups
September 28, 2010
We continue to work on projects that shift the emphasis from basic search to more sophisticated types of information retrieval. The companies in this market sector range from giants like SPSS (now part of IBM) and SAS to smaller firms that owe their origins to investments from the US government; for example, Recorded Future and Purple Yogi (now Stratify).
We noted the article titled “InetSoft Publishes Business Intelligence White Paper on Data Mashups”. The document of interest is a white paper authored by InetSoft, one of the companies pushing forward with data mashup technology. The paper asserts:
Data mashup is a data transformation and integration technique that puts control into the hands of the business user. Data mashup melds the flexibility of a spreadsheet with enterprise-level security, performance, repeatability, and collaboration. Data mashup can function in a complementary relationship with warehousing, and can serve as a cost-effective substitute for traditional ETL [extract, transform, and load].
Spotting, digging out, and analyzing business data from disparate sources is expensive and time consuming. That data must then feed business intelligence (BI) analysis, which aims to transform raw data into meaningful, useful information for effective, strategic decision-making. The main point is that business intelligence can save licensees of mashup systems time and eliminate reporting costs. Like other next generation companies, InetSoft implies that it offers a flexible framework.
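The white paper’s pitch can be made concrete with a toy example. The sketch below joins a CSV-style sales feed with a CRM export in a few lines of Python, spreadsheet-style, with no formal ETL pipeline. All names and figures are hypothetical; this illustrates the mashup idea, not InetSoft’s product.

```python
# Toy "data mashup": merge two disparate sources without a formal
# ETL pipeline. The sources and field names are hypothetical.

import csv
import io

# Source 1: a CSV-style sales feed (for example, an exported spreadsheet)
sales_csv = """region,revenue
East,1200
West,800
"""

# Source 2: a CRM export delivered as structured records
crm_records = [
    {"region": "East", "account_manager": "Lee"},
    {"region": "West", "account_manager": "Patel"},
]

def mashup(sales_text, crm):
    # Index the CRM records by region, then enrich each sales row
    managers = {r["region"]: r["account_manager"] for r in crm}
    merged = []
    for row in csv.DictReader(io.StringIO(sales_text)):
        merged.append({
            "region": row["region"],
            "revenue": int(row["revenue"]),
            "account_manager": managers.get(row["region"], "unknown"),
        })
    return merged

print(mashup(sales_csv, crm_records))
```

The point of the sketch is the business-user workflow: the join happens where the analysis happens, rather than in a separate warehouse load step.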
Our view is that the “mashup” or data fusion sector is now the next big thing in search and content processing. We are uncertain about the time and expense of marketing these next generation systems, however. In our view, as traditional search vendors face commoditization for low value, low complexity solutions, the hunt for new revenue will create significant opportunities for confusing potential customers.
Search is not really simple, but it is now tired. The next generation content processing systems have vigor, but will the excesses of enthusiasm create the same type of market perplexity that befuddles some procurement teams? What will the azurini do? How will the marketers at the rising number of “data fusion” firms position their products?
Excitement ahead.
Stephen E Arnold, September 28, 2010
Freebie
Tibco: Money and Mentos
September 27, 2010
Tibco (founded and directed by MIT- and Harvard-educated Vivek Ranadive) reported strong third quarter earnings. The company also made an interesting acquisition: in September 2010, Tibco purchased OpenSpirit, a maker of software used in oil and gas exploration.
The “information bus” upon which Tibco’s fame rests is used as plumbing in a number of high profile industries. These include news, financial services, and government entities.
What’s important about Tibco is that the firm, in my opinion, has been one of the leaders in real time computing and information systems. Tibco’s approach can alert, pass messages, and transform content. With a bit of work, Tibco becomes the equivalent of the nervous system of a client. Many companies assert that their technology delivers a platform. Palantir, for example, is a relative newcomer to the platform pitch. But the reality is that companies like Tibco deliver a deeper, more fundamental architectural approach.
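To make the “information bus” idea concrete, here is a minimal publish/subscribe sketch in Python: subscribers register for a topic, publishers push messages, and an optional transform runs in transit. This is a conceptual illustration only, not Tibco’s actual API.

```python
# Minimal publish/subscribe "information bus" sketch. Subscribers
# register callbacks per topic; publish() can transform content in
# transit before alerting each subscriber. Illustrative only.

from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback for all future messages on this topic
        self._subscribers[topic].append(callback)

    def publish(self, topic, message, transform=None):
        if transform:
            message = transform(message)   # transform content in transit
        for callback in self._subscribers[topic]:
            callback(message)              # alert each subscriber

received = []
bus = MessageBus()
bus.subscribe("news", received.append)
bus.publish("news", "earnings up", transform=str.upper)
print(received)  # the transformed message reaches the subscriber
```

The real systems add persistence, guaranteed delivery, and routing, but the alert-message-transform triad described above is the core of the pattern.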
And Tibco makes the efficacy of its architecture easy to understand. How does Tibco communicate the value of its real time architecture? Click here.
For more information about Tibco, what I call a real platform company, navigate to the firm’s Web site at www.tibco.com. When I visited Tibco’s offices a decade ago, I remember seeing Yahoo News chugging happily away on Tibco’s servers. Yep, Tibco is more than Mentos.
Stephen E Arnold, September 27, 2010
Facebook and Google: Philosophies Collide
September 27, 2010
I listened to the Thursday Buzz Out Loud podcast. On the show, the talent explained that a certain high profile blog (Techcrunch) wrote a story about a rumored Facebook phone. The high profile blog garnered a meeting with the founder of Facebook (Wizard Zuck, or Mark Zuckerberg). In that discussion, if I heard correctly as I was pedaling my exercise bike at a 66-year-old goose’s pace, Mr. Zuckerberg pointed out something along the lines that social functions cannot simply be added on. The idea I took away was that Facebook is built for social functions. Google was built for search or some other function.
As I thought about this, the comment highlighted what I think of as a “platform” fight.
The idea has surfaced elsewhere. I have started to write about the i2-Palantir tussle. That seems to be about lots of different technical issues, but it is really a platform fight. i2 has been one of the leaders, if not the leader, in data fusion and analysis for law enforcement and intelligence applications for 20 years. Keep in mind that I have done some work for the i2 folks. The Palantir outfit, stuffed with $90 million in semi-worthless US bucks, is a comparative newcomer. These two outfits are struggling to keep (or get, depending on one’s point of view) control of a very esoteric market niche. Most of the azurini and mid-tier consultants steer clear of this sector. The types of baloney generated by the azurini’s spam plants can harm people, not just get procurement teams reassigned. The i2-Palantir issue interests me because it is a platform tussle.
I think Facebook and Google are in a platform war as well.
Now keep in mind that if you are a Googler, you see the world through Google goggles. If you are a Facebook fan, you see the world through the friend lens. I am in the middle, and here’s my take on Wizard Zuck’s alleged comment about “adding” social instead of building a social platform.
First, I think the shift from Google to Facebook as a go-to resource is an important change. The reason Facebook “works” for 500 million or more people is that the information (good, bad, right, wrong, made up, crazy, or indifferent) comes from humans. If you have some relationship with that human, the information exists within a relationship context. When I run a search on Google, I have to figure out for myself whether the information is right, wrong, made up, crazy, indifferent or an advertisement. I don’t get much human help to figure out what’s what. As a result, the Google algorithmic and “secret sauce” results strike me as somewhat less useful now that there are “contextual” results and what I call “friend cues.” Your mileage may vary, but these friend cues also exist in services like Twitter and its derivatives/applications like Tweetmeme.
Second, Google is definitely in Microsoft Word feature mode. I am impressed with some of Google’s new services, such as its new authentication method, which I will write about in one of my October columns. I am not too impressed with other Google innovations such as “Instant”. The ratio of Word-type features to useful features seems to be tilting toward the Microsoft model. I don’t use Word because it is a program that tries to do everything well and ends up becoming a wild and crazy exercise in getting text on the screen. My goodness: green lines, red lines, auto bullets, disappearing images, weird table behavior. Give me FrameMaker 7.2. Facebook is a complicated system, but the basics work reasonably well, even though the firm’s immature approach to information reminds me of the last group of 20 somethings I spoke with in Slovenia several months ago. Google is now at risk of letting features get in the way of functional improvements. Facebook is in refinement mode. When it comes to social, Facebook is refining social actions. When it comes to social, Google is still trying to figure it out.
Third, Google is a platform built originally to deliver Web search results in the manner of AltaVista without Hewlett Packard in my opinion. Facebook is a platform built to let those who are young at heart find old and new pals. Google has morphed search into advertising and now faces the challenge of figuring out how to go beyond Orkut, which as I write this is struggling with some crazy virus or malware. Facebook is, according to a rumor I heard, working to provide search that uses the content within the Facebook ecosystem as the spider list. Curation versus search/advertising. Which platform is better to move search forward in the social space? Google is layering on a new approach to people and content and Facebook is simply indexing a subset of content. Curated content at that.
My view is that Facebook and Google are in a platform battle. Who will win? Wizard Zuck and Xooglers who know technically what Google errors to avoid in the Facebook social environment? Googlers who are trying to keep an 11 year old platform tuned for brute force Web indexing and on the fly ad matching run by smart algorithms?
Interesting platform battle. And a big one. This may not be a Socrates-hemlock type of tussle but it is a 21st century philosophical collision.
Stephen E Arnold, September 27, 2010
Freebie
TEMIS and Its Luxid Toolbar
September 26, 2010
A reader in Europe alerted us to the new Luxid Toolbar. TEMIS, which asserts that it is the leading provider of text analytics solutions for the enterprise, offers a free LuxidBar. You can get the software from www.temis.com. According to Tagline, the TEMIS Web log:
The publicly available LuxidBar connects to a Luxid® Content Enrichment Platform hosted and maintained by TEMIS in the cloud. The platform performs a broad range of business and scientific entities extractions together with their semantic relationships.
The company says that the software “inserts smart links on the fly within the text” and “displays information analytics dynamically.”
The add in reminds us of some of the functionality available to users of the Inxight system before the company was acquired by Business Objects, which in turn was acquired by SAP.
TEMIS says, “This unique Internet browser sidebar accelerates Web page and document reading and connects users to related knowledge.” There is a stampede for this type of value adding in content processing. Other firms in the race include i2 Ltd. (which is not chasing the consumer market after 20 years of labor in this particular vineyard), Palantir (a company involved in what seems to be a tar pit related to its content refining technologies), JackBe (a former government-centric outfit now probing the enterprise mashup market), and dozens of other companies moving from the intelligence market to the commercial market as funds in war fighting get redirected.
Worth a look.
Stephen E Arnold, September 26, 2010
Freebie
Tweets with Pickles: DataSift and Its Real Time Recipe
September 25, 2010
We have used Tweetmeme.com to see what Twitter users are doing right now. The buzzword “real time” has usurped “right now,” but that’s the magic of folks born between 1968 and 1978.
DataSift combines some nifty plumbing with an original scripting language for filtering 800 tweets a second. The system can ingest and filter other types of content, but as a Twitter partner, DataSift is in the Twitterspace at the moment.
Listio describes the service this way:
DataSift gives developers the ability to leverage cloud computing to build very precise streams of data from the millions and millions of tweets sent everyday. Tune tweets through a graphical interface or through its bespoke programming language. Streams consumable through our API and real-time HTTP. Comment upon and rank streams created by the community. Extend one or more existing streams to create super streams.
The idea is that a user will be able to create a filter that plucks content, patterns like Social Security Numbers, and metadata such as the handle, geographic data, and the like. With these items, the system generates a tweet stream that matches the parameters of the filter. The language is called “Filtered Stream Definition Language,” and you can see an example of its lingo below:
RULE “33e3891a3aebad56f962bb5e7ae4dc94” AND twitter.user.followers_count > 1000
A full explanation of the syntax appears in the story “FSDL”.
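To see what a rule like the one above does, here is the same logic expressed in plain Python: a content pattern (a Social Security Number stands in for the referenced rule) ANDed with a follower-count threshold. The tweet fields mirror the Twitter metadata mentioned in the article; this is a hedged illustration, not DataSift’s implementation.

```python
# Illustrative equivalent of an FSDL-style filter:
#   <content rule> AND twitter.user.followers_count > 1000
# The SSN pattern stands in for the opaque rule reference.

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def matches(tweet):
    # Both clauses must hold, as in the AND of the FSDL rule
    return bool(SSN_PATTERN.search(tweet["text"])) and \
        tweet["user"]["followers_count"] > 1000

stream = [
    {"text": "my ssn is 123-45-6789", "user": {"followers_count": 5000}},
    {"text": "hello world", "user": {"followers_count": 5000}},
    {"text": "ssn 123-45-6789", "user": {"followers_count": 10}},
]

print([t["text"] for t in stream if matches(t)])
# → ['my ssn is 123-45-6789']
```

Applied to a live stream at 800 tweets a second, the same predicate simply runs per tweet as the content flows through the system.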
You can find an example on the DataSift blog which is more accessible than the videos and third party write ups about a service that is still mostly under wraps.
The wordsmiths never rest. Since I learned about DataSift, the service has morphed into “cloud event processing.” As a phrase for Google indexing, this one is top notch. In terms of obfuscating the filter, storage, and analysis aspects of DataSift, I don’t really like “cloud event processing” or the acronym CEP. Once again, I am in the minority.
The system’s storage component is called “pickles.” The filters can cope with irrelevant hash tags and deal with such Twitter variables as name, language, location, profiles, and followers, among others. There are geospatial tricks so one can specify a radius around a location or string together multiple locations and get tweets from people close to bankrupt Blockbuster stores in Los Angeles.
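The radius trick described above can be sketched with the standard haversine formula: keep only tweets whose coordinates fall within a given distance of a target point. The coordinates and radius below are illustrative, not taken from DataSift.

```python
# Geospatial filter sketch: tweets within a radius of a target point,
# using the haversine great-circle distance. Values are illustrative.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def within_radius(tweets, lat, lon, radius_km):
    return [t for t in tweets
            if haversine_km(t["lat"], t["lon"], lat, lon) <= radius_km]

tweets = [
    {"text": "near downtown LA", "lat": 34.05, "lon": -118.25},
    {"text": "in New York",      "lat": 40.71, "lon": -74.01},
]
# Keep tweets within 50 km of Los Angeles (34.05, -118.24)
print(within_radius(tweets, 34.05, -118.24, 50.0))
```

Chaining several such radius checks with OR gives the “string together multiple locations” behavior the article mentions.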
The system is what I call a next generation content processing service. Perched in the cloud, DataSift deals with the content flowing through the system. To build an archive, the filtered outputs have to be written to a storage service like Pickles. Once stored, clever users can slice and dice the data to squeeze gems from the tweet stream.
The service seems on track to become available in October or November 2010. A graphical interface is on tap, a step that most next generation content processing systems have to make. No one wants to deal with an end user who can set up his own outputs and make fine decisions based on a statistically-challenged view of his or her handiwork.
For more information point your browser at www.datasift.net.
Stephen E Arnold, September 25, 2010
SAP Gets Agile like an Aging Quarterback
September 24, 2010
A year round exercise program and a grueling preseason warm up can put bounce in an aging athlete’s step. SAP is back, buying companies and making waves in the enterprise software sector. The company’s most recent announcement caught the attention of Computerworld. The story “SAP Rolls Out Wave of ‘Rapid Deployment’ Apps” explains that “relationship management modules” can be up and running in as little as three months. Yep, the 40-yard time for an aging athlete is about that when racing against a 23-year-old.
One of the more interesting comments in the story was, in my opinion, this passage:
SAP’s announcement is the latest effort by the vendor to shed its image as a provider of monolithic, difficult-to-maintain ERP systems. In recent years it has rolled out a series of “enhancement packs” that help customers of its flagship Business Suite add significant new features without the pain of a full-blown upgrade.
To me, this means that the future rapid deployment customer already has SAP up and running. That process can, in my experience, consume more than three months.
What we are learning is that our clients expect changes to be made quickly. For example, we are building one of our news filtering systems. The entire project had to be designed, implemented, and made operational in four days. We hit the target.
I don’t have too many clients who think in terms of a minimum of 12 weeks for a solution. SAP has, and I envy the time windows in which its work may be viewed. I look out the window of a jet plane, so my window is open only briefly. That seems to be a trend here in Harrod’s Creek.
The write up strikes me as wishful marketing type thinking packaged as an announcement. To an aging athlete, leisurely agility is as good as real agility I suppose. Just my opinion. Honk.
Stephen E Arnold, September 24, 2010
Freebie
Free File Conversions
September 23, 2010
Connectors are tough to code. If you license them, you may require one of those hefty Ford F-250s with four rear wheels to move the money from your bank to the vendor’s office. Oracle, for example, has connectors for sale, and we have used them. They work as advertised, but some clients find the license fee interesting.
We have found a service that may warrant some testing, gentle reader.
Online-Convert.com provides a facility that can “convert media files online from one format into another.” You can instantly and freely convert your audio, video, image, document, and eBook files into many different formats online, without installing any software.
The conversion procedure is quite simple. The user uploads a file in its original format; the file is stored on a server, converted into the specified new format, and then provided to the user as a unique download link. There is a 100 megabyte file size limit for free conversion, and the download link is valid for 24 hours or up to 10 downloads. Using QR codes, one can also access the download links on a mobile phone. Definitely a useful service worth bookmarking that might come in handy anytime.
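The stated link policy, valid for 24 hours or up to 10 downloads, whichever limit hits first, can be modeled in a few lines. This is a sketch of the rules as described, not Online-Convert.com’s code; the URL and class are hypothetical.

```python
# Model of the described download-link policy: a link expires after
# 24 hours or 10 downloads, whichever comes first. Hypothetical code.

import time

class DownloadLink:
    MAX_DOWNLOADS = 10
    TTL_SECONDS = 24 * 60 * 60  # 24 hours

    def __init__(self, url, now=None):
        self.url = url
        self.created = now if now is not None else time.time()
        self.downloads = 0

    def is_valid(self, now=None):
        now = now if now is not None else time.time()
        return (now - self.created < self.TTL_SECONDS
                and self.downloads < self.MAX_DOWNLOADS)

    def fetch(self, now=None):
        if not self.is_valid(now):
            raise RuntimeError("link expired")
        self.downloads += 1
        return self.url

link = DownloadLink("https://example.com/converted.pdf", now=0)
for _ in range(10):
    link.fetch(now=0)
print(link.is_valid(now=0))  # False: the 10-download quota is used up
print(DownloadLink("x", now=0).is_valid(now=25 * 3600))  # False: past 24h
```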
Harleena Singh, September 23, 2010
Freebie
Exclusive Interview: Quentin Gallivan, Aster Data
September 22, 2010
In the last year or two, a new type of data management opportunity has blossomed. I describe this sector as “big data analytics,” although the azure chip consultants will craft more euphonious jargon. One of the most prominent companies in the big data market is Aster Data. The company leverages MapReduce technology (closely associated with Google) and moves it into the enterprise. The company has the backing of some of the most prestigious venture firms; for example, Sequoia Capital and Institutional Venture Partners, among others.
Aster Data, therefore, is one of the flagships in big data management and big data analysis for data-driven applications. Aster Data’s nCluster is the first MPP data warehouse architecture that allows applications to be fully embedded within the database engine to enable fast, deep analysis of massive data sets.
The company offers what it calls an “applications-within” approach. The idea is to allow application logic to exist and execute with the data itself. Termed a “Data-Analytics Server,” Aster Data’s solution effectively utilizes Aster Data’s patent-pending SQL-MapReduce together with parallelized data processing and applications to address the big data challenge. Companies using Aster Data include Coremetrics, MySpace, comScore, Akamai, Full Tilt Poker, and ShareThis. Aster Data is headquartered in San Carlos, California.
I spoke with Quentin Gallivan, the company’s new chief executive officer, on Tuesday, September 22. Mr. Gallivan made a number of interesting points. He told me that data within the enterprise is “growing at a rate of 60% a year.” What was even more interesting was that data within Internet-centric organizations was growing at “100% a year.”
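Those growth rates compound quickly, which is worth a moment of arithmetic: at 60 percent a year data roughly decuples in five years, and at 100 percent a year it grows 32-fold.

```python
# Compounding the annual growth rates quoted above over five years:
# 1.6^5 ≈ 10.5x for the enterprise, 2^5 = 32x for Internet-centric firms.

for label, rate in (("enterprise, 60%/yr", 0.60),
                    ("Internet-centric, 100%/yr", 1.00)):
    growth = (1 + rate) ** 5
    print(f"{label}: {growth:.1f}x after five years")
```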
I asked Mr. Gallivan about the key differentiator for Aster Data. Data management and chatter about “big data” peppers the information that flows to me from vendors each day. He said:
Aster Data’s solution is unique in that it allows complete processing of analytic applications ‘inside’ the Aster Data MPP database. This means you can now store all your data inside of Aster Data’s MPP database that runs on commodity hardware and deliver richer analytic applications that are core to improving business insights and providing more intelligence on your business. To enable richer analytic applications we offer both SQL and MapReduce. I think you know that MapReduce was first created by Google and provides a rich parallel processing framework. We run MapReduce in-database but expose it to analysts via a SQL-MapReduce interface. The combination of our MPP DBMS and in-database MapReduce makes it possible to analyze and process massive volumes of data very fast.
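The MapReduce framework Mr. Gallivan references can be illustrated with a tiny word-count sketch: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group. This shows the pattern only; Aster Data exposes it to analysts through its SQL-MapReduce interface rather than raw Python.

```python
# Conceptual MapReduce sketch: map emits (key, value) pairs, the
# shuffle groups pairs by key, reduce aggregates each group.
# Illustrates the pattern only, not Aster Data's SQL-MapReduce API.

from collections import defaultdict

def map_step(record):
    # Emit (word, 1) for each word in the record
    for word in record.split():
        yield word.lower(), 1

def reduce_step(key, values):
    # Aggregate all values emitted for one key
    return key, sum(values)

def map_reduce(records):
    groups = defaultdict(list)
    for record in records:                 # map + shuffle
        for key, value in map_step(record):
            groups[key].append(value)
    return dict(reduce_step(k, v) for k, v in groups.items())

print(map_reduce(["big data", "big analytics"]))
# → {'big': 2, 'data': 1, 'analytics': 1}
```

In a parallel deployment, the map and reduce steps run on many nodes at once; running them inside the database, next to the data, is the “applications-within” point Aster Data makes above.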
In the interview he describes an interesting use case for Barnes & Noble, one of Aster Data’s high profile clients. You can read the full text of the interview in the ArnoldIT.com Search Wizards Speak service by clicking this link. For a complete list of interviews with experts in search and content processing click here. Most of the azure chip consultants recycle what is one of the largest collections of free information about information retrieval in interview form available at this time.
Stephen E Arnold, September 22, 2010
Freebie. Maybe another Jamba juice someday?