EasyAsk Embraces the NetSuite Cloud Platform

October 7, 2010

EasyAsk is looking to cloud computing to expand services to its clients, according to RedOrbit. “EasyAsk Integrates EasyAsk Business Edition With NetSuite’s Cloud Computing Platform” reports that EasyAsk is combining its EasyAsk Business Edition with the NetSuite cloud computing platform. EasyAsk Business Edition can be thought of as a search engine for the corporate world. The program lets users search and explore corporate data through a user-friendly, Google-like interface. EasyAsk Business Edition converts business questions into back-end queries, retrieves the data, and delivers answers to the user. The application also employs semantic intelligence, which allows it to analyze user searches and offer helpful prompts and suggestions to guide users. NetSuite’s SuiteCloud offers a variety of products, development tools, and other services to help companies be more productive while also taking advantage of economic benefits. “EasyAsk Business Edition for NetSuite features rapid implementation and a superior user experience.” Together, EasyAsk Business Edition and NetSuite’s SuiteCloud development platform give corporate users access to information that can help them serve customers better and increase overall productivity.
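
How a question-to-query engine works is easier to see in code than in prose. The following is a toy sketch of the general technique, not EasyAsk’s actual semantic engine; the table, the templates, and the regular expressions are my own assumptions.

# Toy sketch (not EasyAsk's engine) of turning a business question into a
# back-end query by matching keywords against known SQL templates.
import re
import sqlite3

TEMPLATES = [
    # (pattern, SQL template) for a hypothetical "orders" table
    (r"sales .*by region", "SELECT region, SUM(amount) FROM orders GROUP BY region"),
    (r"top (\d+) customers", "SELECT customer, SUM(amount) AS total FROM orders "
                             "GROUP BY customer ORDER BY total DESC LIMIT {n}"),
]

def answer(question, conn):
    q = question.lower()
    for pattern, sql in TEMPLATES:
        match = re.search(pattern, q)
        if match:
            if "{n}" in sql:
                sql = sql.format(n=int(match.group(1)))
            return conn.execute(sql).fetchall()
    return None  # a real system would fall back to suggestions here

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [("Acme", "East", 120.0), ("Zenith", "West", 340.0)])
print(answer("What were sales by region?", conn))  # [('East', 120.0), ('West', 340.0)]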

April Holmes, October 7, 2010

Freebie

Tweets with Pickles: DataSift and Its Real Time Recipe

September 25, 2010

We have used Tweetmeme.com to see what Twitter users are doing right now. The buzzword “real time” has usurped “right now,” but that’s the magic of folks born between 1968 and 1978.

DataSift combines some nifty plumbing with an original scripting language for filtering 800 tweets a second. The system can ingest and filter other types of content, but as a Twitter partner, DataSift is in the Twitterspace at the moment.

Listio describes the service this way:

DataSift gives developers the ability to leverage cloud computing to build very precise streams of data from the millions and millions of tweets sent everyday. Tune tweets through a graphical interface or through its bespoke programming language. Streams consumable through our API and real-time HTTP. Comment upon and rank streams created by the community. Extend one or more existing streams to create super streams.

The idea is that a user will be able to create a filter that plucks content, patterns such as Social Security Numbers, and metadata such as the handle and geographic data. With these items, the system generates a tweet stream that matches the parameters of the filter. The language is called the “Filtered Stream Definition Language,” and you can see an example of its lingo below:

RULE 33e3891a3aebad56f962bb5e7ae4dc94 AND twitter.user.followers_count > 1000

A full explanation of the syntax appears in the story “FSDL”.
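
For readers who want a sense of what that one-line rule does, here is a rough client-side equivalent in Python. This is my reading of the rule, not DataSift’s implementation; the nested “user”/“followers_count” structure mirrors Twitter’s streaming payloads.

# Keep only tweets whose author has more than 1,000 followers, the same
# condition the FSDL rule above expresses.
import json

def passes_rule(raw_tweet):
    tweet = json.loads(raw_tweet)
    # Twitter streaming payloads nest follower data under "user"
    return tweet.get("user", {}).get("followers_count", 0) > 1000

sample = '{"text": "real time is the new right now", "user": {"followers_count": 4200}}'
print(passes_rule(sample))  # True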

You can find an example on the DataSift blog, which is more accessible than the videos and third-party write-ups about a service that is still mostly under wraps.

The wordsmiths never rest. Since I learned about DataSift, the service has morphed into “cloud event processing.” As a phrase for Google indexing, this one is top notch. In terms of obfuscating the filter, storage, and analysis aspects of DataSift, I don’t really like “cloud event processing” or the acronym CEP. Once again, I am in the minority.

The system’s storage component is called “pickles.” The filters can cope with irrelevant hashtags and deal with such Twitter variables as name, language, location, profile, and followers, among others. There are geospatial tricks, so one can specify a radius around a location or string together multiple locations and get tweets from people close to bankrupt Blockbuster stores in Los Angeles.
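
Under the hood, the radius trick is plain old great-circle math. Here is a minimal sketch using the haversine formula; the coordinates, radius, and tweet structure are made up for illustration.

# "Tweets within N km of a point" filter using the haversine formula.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6,371 km

def within_radius(tweet_geo, center=(34.05, -118.24), radius_km=10):
    # tweet_geo is a (latitude, longitude) pair pulled from the tweet's metadata
    return haversine_km(*tweet_geo, *center) <= radius_km

print(within_radius((34.10, -118.30)))  # True: a few kilometers from downtown Los Angeles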

The system is what I call a next-generation content processing service. Perched in the cloud, DataSift deals with the content flowing through the system. To build an archive, the filtered outputs have to be written to a storage service like pickles. Once stored, clever users can slice and dice the data to squeeze gems from the tweet stream.

The service seems on track to become available in October or November 2010. A graphical interface is on tap, a step that most next-generation content processing systems have to take. No one wants to deal with an end user who can set up his own outputs and make fine decisions based on a statistically challenged view of his or her handiwork.

For more information point your browser at www.datasift.net.

Stephen E Arnold, September 25, 2010

IGear and Search without Search

September 15, 2010

One of the clearest signals about the “search without search” trend is the use of Web technology to make tailored information available. One of the writers for Beyond Search called my attention to “I/Gear TV”, a system that allows a manufacturing and production operation to roll up information from different sources, tailor it, and display it on standard LCD televisions or computer monitors. The system features a graphical administrative control panel. The authorized user can select information to display, specify what data are displayed and when, and then stream the content to connected devices.
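
I/Gear has not published its internals, so what follows is only a generic sketch of the roll-up-and-display pattern the company describes: a scheduler cycles through configured data sources and pushes each snapshot to a list of screens. The source names and display names are invented for illustration.

# Generic roll-up-and-display sketch (not I/Gear's code). Real systems would
# push to networked LCD panels; here the "displays" are print statements.
import itertools
import time

SOURCES = {
    "line_3_throughput": lambda: "Line 3: 412 units/hr",   # stand-ins for real feeds
    "scrap_rate":        lambda: "Scrap rate: 1.7%",
    "shift_schedule":    lambda: "Next shift change: 15:00",
}
DISPLAYS = ["lobby_monitor", "plant_floor_tv"]

def run(rotation_seconds=5, cycles=3):
    for name in itertools.islice(itertools.cycle(SOURCES), cycles):
        snapshot = SOURCES[name]()            # pull the latest value from the source
        for display in DISPLAYS:
            print(f"[{display}] {snapshot}")  # push the tailored content to each screen
        time.sleep(rotation_seconds)

run(rotation_seconds=0, cycles=3)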

The system has applications beyond the shop or plant floor. When I reviewed the company’s description in its blog post this morning, I could see that this type of system would be useful in schools, government agencies, and almost any office setting.

However, the key point about IGear’s system is that it delivers tailored information to users who can see the content at a glance. There is no searching required. The approach has been used in various forms in many business sectors, but IGear’s use of this technology in a heavy-duty industrial environment makes clear that the application of Web technology to data acquisition, transformation, and dissemination has moved from a specialist realm into the bone marrow of industry.

If you are not familiar with I/Gear, a firm specializing in production monitoring and cloud computing to enable access to disparate data, you will want to visit the firm’s Web site and take a look at the product line up.

Search without search is gaining momentum, and in some interesting market sectors. What does this type of innovation mean for search engine optimization? No easy, quick answer at hand, I fear.

Stephen E Arnold, September 15, 2010

Freebie

LinkedIn’s Data Infrastructure

August 17, 2010

LinkedIn is one of the premier networking sites for business professionals. It is a great way for professionals to meet other industry professionals or stay connected to their colleagues. LinkedIn has millions of members located in various countries, so this adds up to massive amounts of data being processed daily. In “LinkedIn’s Data Infrastructure,” LinkedIn Principal Engineer and Engineering Manager Jay Kreps gives attendees at the Hadoop Summit an inside look at how LinkedIn processes data.

LinkedIn keeps the majority of its vital data offline. The offline data process is relatively slow, so LinkedIn processes batches of data daily. The company uses the open source framework Hadoop for these daily calculations. The term open source describes any program whose owner provides the source code along with a software license that allows users to modify the software if necessary. Hadoop is popular because it is designed to help users work with massive amounts of data.
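
For readers who have not seen a Hadoop batch job, here is a minimal Hadoop Streaming-style sketch, not LinkedIn’s actual pipeline. It assumes a hypothetical tab-separated edge list of member connections and counts connections per member; in a real deployment the mapper and reducer would be separate scripts reading stdin, with Hadoop doing the sort-and-shuffle step between them.

# Count connections per member from lines of "member_id<TAB>connection_id".
from itertools import groupby

def mapper(lines):
    for line in lines:
        member_id, _connection = line.rstrip("\n").split("\t")
        yield f"{member_id}\t1"

def reducer(sorted_lines):
    pairs = (line.split("\t") for line in sorted_lines)
    for member_id, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{member_id}\t{sum(int(count) for _, count in group)}"

edges = ["m1\tm2", "m1\tm3", "m2\tm3"]        # stand-in for the daily extract
for line in reducer(sorted(mapper(edges))):   # locally mimics Hadoop's sort/shuffle
    print(line)                               # m1 has 2 connections, m2 has 1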

Kreps made sure to mention two of LinkedIn’s most vital open source projects, Azkaban and Voldemort. The article describes Azkaban as “an open source workflow system for Hadoop, providing cron-like scheduling and make-like dependency analysis, including restart. Its main purpose in the system is to control ETL jobs which are responsible for pushing the database as well as any event logs to Voldemort.”

Voldemort can simply be described as LinkedIn’s NoSQL key/value store. LinkedIn produces a daily relationship graph that is used for querying in Web page results. The data must go through an extensive process that was once handled by a database. However, that approach was counterproductive because the database had to be modified first and then the data had to be moved manually. Voldemort makes partitioning, along with the entire data movement process, easier and more productive.
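
The partitioning idea is simple to sketch. The toy example below shows hash partitioning for a key/value store; it illustrates the concept rather than Voldemort’s implementation, and the node names and key format are invented.

# Each key is deterministically assigned to a node, so a client can find a
# value without a central catalog and without manual data moves.
import hashlib

NODES = ["node-a", "node-b", "node-c"]        # hypothetical storage nodes
store = {node: {} for node in NODES}          # in-memory stand-in for real servers

def node_for(key):
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def put(key, value):
    store[node_for(key)][key] = value

def get(key):
    return store[node_for(key)].get(key)

put("member:42:connections", ["7", "19", "86"])
print(node_for("member:42:connections"), get("member:42:connections"))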

Readers can go to “Data Applications and Infrastructure at LinkedIn Hadoop Summit 2010” to view the data path and additional information. LinkedIn also has a handy index structure implemented in the Hadoop pipeline for extensive searches. The open source Lucene/Solr stack is used in the search features to make sure users can conduct advanced searches and obtain accurate information quickly. Open source was instrumental in LinkedIn building a data infrastructure able to handle its massive data load, which was exactly what the doctor ordered.
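
To give a flavor of what a Lucene/Solr query looks like from the application side, here is a bare-bones sketch. The host, core name, and field names are assumptions for illustration; the /select handler with q and wt=json parameters is standard Solr.

# Query a Solr core over HTTP and return the matching documents.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def search_people(query, rows=10):
    params = urlencode({"q": query, "rows": rows, "wt": "json"})
    url = f"http://localhost:8983/solr/people/select?{params}"  # assumed host and core
    with urlopen(url) as response:
        return json.load(response)["response"]["docs"]

# Example (assumed schema): search_people('headline:"data engineer" AND location:Oregon')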

April Holmes, August 17, 2010

Open Source May Be Disruptive

August 16, 2010

I do not know much about software, information, or the big-time doings of corporate giants. I am not an azurini, a self-appointed poobah, or the cat’s paw of a group of 20-something MBAs from schools that require a great family and a high IQ. I am an addled goose. I float around in a pond filled with mine drainage water and offer some humble observations which the great and powerful dismiss as silly, irrelevant, or just plain incorrect.

No problemo.

However, even the intellectual black hole of the addled goose’s analytic muck pond can figure out from two articles that open source is scaring the heck out of a really tough, superstar executive.

You make your own decision about the accuracy and significance of these two news stories:

First, point your browser thingy, with monitoring functions activated so those watching know you really did navigate to a “real” news source, and read “Oracle Kills OpenSolaris, Moves Development behind Closed Doors.” The idea is pretty easy to understand. Those super-marketers at Sun Microsystems gave away an operating system as open source. Nope. Oracle’s Larry Ellison and his sharp-pencil crowd shut that door. The reason? Open source equals bad business. “Bad,” I presume, means a threat to Oracle’s pricing tactics. “Free” is not a word that I associate with Oracle.


A happy quack to http://jordanhall.co.uk/general-articles/dont-be-evil-licensing-1301401/ for this great illustration.

Second, Oracle is suing Google over its use of Java. Now Java is sort of a piggy, but, hey, lots of universities teach Java, and it can be quite useful when running in today’s nifty hardware environments. Overlook those flaws, which have been documented in some detail in the Software Engineering Radio podcasts at www.se-radio.net. Notice: SE-Radio is not exactly an Adam Carolla or Leo Laporte type of podcast. You can get some information about this tussle between two former bosom buddies by navigating to “Initial Thoughts on Oracle vs Google Patent Lawsuit.” Yep, those are links to patent documents, so I don’t think the skimmers among my readers will invest much dwell time on the Tirania post.

Nevertheless, the headlines may be enough for a “real” azure chip consultant. The details are murky and former English majors and sociology minors won’t spend too much time doing the analysis a “real” poobah does.

Let me Cliff Notes it: Larry Ellison is a smart, rich person. He understands that open source is a problem for a company like Oracle that charges really big fees for its software. Open source, with its unruly developers and hard-to-make-do-push-ups work ethic, is the enemy. The fix? Kill open source. If total annihilation is not possible, make open source expensive in terms of legal fees. The way law works for rich people is that a rich person’s lawyers can make a less rich person spend lots of time fighting the rich person’s legal actions. Eventually money wins, particularly when the main action involves fuzzy-wuzzy ideas like open source and intellectual property, with rich people arguing over them.

There is just one snag. Even rich people have trouble keeping those peasants under control. For my readers who stayed awake during world history, you know that lots of peasants with sticks and rocks can be a real problem. Honk off enough peasants, and the excitement can end in a revolution.

At this moment in the capitalistic, free market sun:

  1. Lots of companies are strapped for cash. Free is pretty darned appealing when you have to decide how to pay the light bill, the actress assisting the company at a trade show, and the lease on a new Bimmer.
  2. Open source is pretty good, and there are some robust solutions available with the click of a mouse. Examples include Drupal, Hadoop, Lucene/Solr, and *lots* more.
  3. The open source stuff is fun. Training and certification for Oracle or other “carrier class” enterprise solutions are not as much fun as blasting around the lake on a jet ski at 30 knots.

If I focus on relational databases, I am in a Roman ruin. You can see, or at least imagine, the splendor of the structure. But rebuilding it after a crash and getting it back to the “way it was” is just too much work, too expensive, and too labor intensive. Why not build a new structure using the tips and methods that HGTV puts on display each night on my local cable channel? Need a granite counter top but have neither money, stone-cutting tools, nor expertise? Hey, just shoot over to Home Depot and get an epoxy alternative. That’s the open source approach: new materials, new methods, and new benefits.


Roman ruin. What’s the cost of a rebuild and then upkeep? How do you modify a limestone flying balcony? You don’t. Get some slaves.


Data Centers for Facebook and Google: Juiciness in Alleged Facts

August 16, 2010

Navigate to “Two Data Centers Present a Study in Contrasts.” The information in the write up is germane to search and social networking. A happy quack to Theodoric Meyer, who did a very good job on this article for The Dalles (Oregon) Chronicle.

Dalles? Yep, that’s the town in which first Google, then Facebook, decided to set up power-sucking data centers. In the olden days, data centers had lots of people. Today’s data centers are designed to be as close to people free as possible. A data center filled with itty-bitty gizmos crunching lots of data can get screwed up in a heartbeat if a clumsy human does something like pull a plug or punch a button to see what happens.

You will want to read the full write up by Mr. Meyer. Here are the factoids that I noted:

  • Google set up shop in scenic and struggling Dalles in 2006. Now Facebook with its Xooglers is on the same path.
  • Data center managers have to make nice with city officials, particularly in places like scenic and struggling Dalles.
  • Facebook is doing a better job of building bridges than Google’s Math Club crowd did.
  • No Oregon taxes will be paid on the data centers for 15 years. (A big yes to the American market system.) A minimum number of hires and higher pay were the requirements Google and Facebook had to meet.
  • Facebook’s facility will have 147,000 square feet, or about 2.2 American football fields. That’s almost as big as a typical trailer here in Harrod’s Creek, Kentucky.
  • Facebook power consumption will be at 30 megawatts with a need to access up to 90 megawatts of power. BGF (before Google and Facebook), the township used 30 megawatts of power.
  • Google has done “a lot of good” in Dalles.

The key factoid: the fellow responsible for Google’s Dalles facility has been hired by Facebook. You can take the Xoogler out of Google, but you can’t take the Google out of the Xoogler.

And that contrast? Math Club compared to making nice with political officials.

Stephen E Arnold, August 16, 2010

Cloud and Context: Fuzzy and Fuzzier

August 11, 2010

I got a kick out of “Gartner Says Relevancy of Search Results Can be Improved by Combining Cloud and Context-Aware Services.” Fire up your iPad and check out this write up which has more big ideas and insights than Leonardo, Einstein, or Andy Rooney ever had. You will want to read the full text of the article. What I want to do is list the memes that dot the write up like chocolate chips in Toll House cookies. Here goes:

  • value
  • cloud-based services
  • context-based services
  • revenue facing external search installation
  • informational services
  • integration engineers
  • contextual information
  • value from understanding
  • Web search efforts
  • market dynamics
  • general inclination
  • search in the cloud
  • discoverable information
  • offloading
  • quantifiable improvements
  • social networking
  • user’s explicit statement of interests
  • rich profile data

Cool word choice, right? Concrete. Specific. Substantive. Now here’s the sentence that I was tempted to flag as a quote to note. I decided to include it in this blog post:

Optimizing search through the effective application of context is a particularly helpful and effective way to deliver valuable improvements in results sets under any circumstances.

Got that? Any circumstance. Well, except being understandable to an addled goose.
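
For what it is worth, here is one concrete reading of “applying context” to a result set. This is my sketch, not Gartner’s: simply boost documents that overlap with a user’s explicit statement of interests from a profile.

# Re-rank results by boosting titles that share terms with the user's profile.
def rerank(results, profile_terms, boost=0.5):
    # results: list of (title, base_score); profile_terms: set of lowercase words
    def contextual_score(item):
        title, base = item
        overlap = len(profile_terms & set(title.lower().split()))
        return base + boost * overlap
    return sorted(results, key=contextual_score, reverse=True)

results = [("Cloud storage pricing", 1.0), ("Enterprise search relevancy", 0.9)]
profile = {"search", "relevancy", "enterprise"}
print(rerank(results, profile))  # the search-related hit moves to the top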

Stephen E Arnold, August 11, 2010

Freebie

Google Apps and the US Government

August 10, 2010

eWeek runs these odd slideshows, which seem to be dot points strung out to get page views. I clicked through “Cloud Computing: Google Apps Leads Microsoft in Federal Cloud Race: 10 Reasons Why It Matters.” The hook is Google’s getting a government security certification. However, I keep hearing a great deal about Amazon, IBM, and Microsoft. Each of these companies is putting a body slam on the US government. The winner, or at least the horse in the lead, is Amazon. I know it sounds wacky, but the AWS offerings seem to be at the right price point and have enough goodies to entice some procurement teams. Google may have to fine-tune its pricing to be less Googley and more Bezosey. The eWeek slide show seems to offer information oddly disconnected from the reality I hear about. That’s why I am an addled goose, I suppose.

Stephen E Arnold, August 10, 2010

Freebie

Microsoft and Focus

August 1, 2010

I am the type of goose who can do one thing at a time. If I am plugging away on one of my silly projects and you speak to me, I will jump. I might even emit a “yikes”. That’s nuts, of course, but that is an example of how I get things done. Focus. One thing at a time.

When someone tells me that he can focus on two things at once, I know that focus means something different to that person than to me. When someone tells me he can do four or five things at once, I know that is pretty much impossible. Here’s a test. I have a rock. You are talking on your mobile, watching your kid on the swing, and standing in the middle of a highway. So, explain that focus thing to me again. Now I throw a rock at your head. I will cheat by creeping around behind you and throwing the rock without your seeing me. What about that focus? If you try to keep me in sight, you are going to lose sight of the kid. Maybe a car will hit you? Whoever is talking to you on your phone is going to ask, “Dude, what are you doing?”

So navigate to “Microsoft: We Are Focusing on Eight Core Businesses.” Here’s the statement I noted:

Actually, Microsoft has eight core focus areas, General Manager of Investor Relations Bill Koefoed told the Wall Street analysts (and a few of us press types) attending the day-long event.

The eight, according to Koefoed:

  • Xbox and TV
  • Bing
  • Office
  • Windows Server
  • Windows Phone
  • Windows
  • Business users
  • SQL Server

So now read “Steve Ballmer: Microsoft Has Been Focusing on cloud for 15 Years.” Notice that the cloud is not on the Koefoed list. Also, the statement about Microsoft’s activity in the cloud for 15 years puts us back to 1995. I may have missed something, but I never thought about Microsoft as a master of hosted services.

Obviously I was not focusing. I am flawed because I try to keep my lists consistent, and I cannot focus on nine things at once. So maybe that “eight” should be “nine”? And what happened to search, specifically the $1.2 billion Fast Search thing? Not a point of focus, apparently.

Stephen E Arnold, August 1, 2010

Freebie

Microsoft Comes on Strong at July Partner Conference

July 22, 2010

Well, it was not Thai kickboxing, but it was close. The July partner conference touted the cloud, characterized the losers at Apple as having “Apple’s Vista,” and preached that old-time lock-in religion.

It seems that Microsoft is anything but intimidated by the iPhone 4 and is ready to put all of its collective weight behind cloud computing. That, it appears, is the message the IT giant wants out there, if the article “Microsoft Talks Up Cloud, Slaps Google, Apple” on InternetNews.com is any indication.

COO Kevin Turner was unwavering in his emphasis on the cloud theme, and anyone who attended the conference couldn’t miss the fact that this was to be the major point. He even went on to explain that the firm is “rebooting” as it moves more and more toward cloud technology.

He was by all accounts firm in his emphasis, even going so far as to say he understands the process will necessitate major changes for Microsoft partners. He also mentioned that Bing’s share of the search market has grown by 51% since the Google rival debuted.

With partners paying for the privilege of becoming a “real” partner, the showmanship and bravado are exactly what those with dollar signs in their eyes want to hear.

Rob Starr, July 22, 2010

Freebie
