How to Make Money with Google AdSense Video Released
October 15, 2009
You can watch a four-minute video that provides a quick primer on how to make money with AdSense. To view the video, navigate to ArnoldIT.com and click on the Video link or click here. The video has been produced by the ArnoldIT.com team to fill a gap in the flood of information about AdSense. “The idea,” said Stephen E. Arnold, “was to put in one place a quick overview and the links that a person needs to get started with AdSense. My hope is that libraries will point patrons who want to find possible business ideas to these videos.” He added, “Google provides the information, but we learned from our client work that a quick overflight of the Google money making options was needed.”
The video series was announced at the International Computers in Library Conference in London, England today, October 15, 2009. In his talk, Mr. Arnold said, “Google offers the same type of opportunity for third parties as did Microsoft in the early 1980s. In these tough economic times, an understanding of the revenue potential the Google platform provides is a prudent business step.”
Five more videos in the “How to Make Money with Google” series will be released in the coming weeks. A person looking for extra revenue or a way to build a new career by focusing on the opportunities presented by the Google platform can view one or more of these videos to get ArnoldIT.com’s view about what Google offers.
The next free video, “Search Engine Optimization Consulting,” will be released at the end of October 2009. Other free videos in the series cover writing programs for the Google platform, becoming a Google partner, and introductory and wrap-up videos.
The videos are provided without charge for two reasons. According to Mr. Arnold, “We received client questions and spam promising ‘get rich quick’ schemes regarding Google. I decided it would be a useful exercise to produce brief, factual videos to make clear that Google is a significant opportunity for motivated individuals, organizations, and commercial enterprises. Many people see Google as a one-trick pony, even though Google has matured into a platform for programmers, consultants, and computer service businesses.”
The full series will be out by the end of November 2009 and can be viewed as individual videos or as a 35 minute program. ArnoldIT.com is not affiliated with Google. The videos were designed and funded by Stephen E. Arnold.
Jessica Bratcher, October 15, 2009
No one paid for this write up.
Google Wants to Be a Media Company = Content Delivery Network Rumors
October 15, 2009
Barron’s is one of those business newspapers that blends caution with molecules of nouns to whip investors into a frenzy of uncertainty. Barron’s “Akamai Rallies on Rumor of Google Bid” is an interesting write up. CDNs or content delivery networks are complicated. Akamai has proprietary technology, legions of ISPs on board, and nifty methods for getting popular content to a user quickly. An investor type, who actually bought me lunch at Taco Bell, floated this idea past me. I pointed out:
- Akamai is a sophisticated outfit
- Akamai has plumbing in place and on-board ISPs who get financial and bandwidth benefits from their support of the Akamai methods. These involve the injection of smart bits in packets and some other magic
- Video is becoming the method of communication in the emerging semi-literate world of the US of A
- Companies with a plan to be a media giant can benefit from owning an Akamai or similar outfit because it generates revenue and provides a convenient way to slash certain operational costs.
Barron’s said:
Briefing.com notes that AKAM calls are seeing buying interest this morning amid “GOOG for AKAM chatter.” I’m not sure that Google really wants to be in the content delivery network business, particularly given a spreading view on the Street that AKAM’s results could be hurt by intensifying pricing pressure in the CDN market. But clearly, somebody believes the rumor.
See fan and backpedal. Fan and backpedal.
With churn the name of one popular game on Wall Street, I sure don’t know if Googzilla is going to gobble up the staff and the technology at Akamai. Google has its own CDN in place, but with the volume of rich media that will be coming down the road in the months ahead, this type of acquisition makes sense to me. Akamai has technology, ISP relationships, plumbing, and people. Did I mention really good people?
Stephen Arnold, October 15, 2009
Sadly no one paid me to write this article. The investor on Friday bought me a chicken thing with a made up name, though.
The Microsoft UX Wins AP Love
October 14, 2009
A happy quack to the reader who sent me the link to Gawker’s “AP’s Betting the Farm Microsoft Will Crush Google”. The story reports that Microsoft’s new interface (user experience or UX) approach is going to allow Microsoft to catch up with Google. If you are a fan of the AP’s view of technology, check out the article. If you think that Google’s 80 percent market share is too large a gap to narrow, you may want to skip the article. For me the most interesting point in the write up was the hint that Google and the AP have not been engaged in productive, frequent discussions. I don’t think the AP is sufficiently Googley to click with the Mountain View crowd.
Stephen Arnold, October 14, 2009
Exclusive Interview with CTO of BrightPlanet Now Available
October 13, 2009
William Bushee, BrightPlanet’s Vice President of Development and the company’s chief technologist, spoke with Stephen E. Arnold. The exclusive interview appears in the Search Wizards Speak series. Mr. Bushee was among the first search professionals to tackle Deep Web information harvesting. The “Deep Web” refers to content that traditional Web indexing systems cannot access. Deep Web sites include most major news archives as well as thousands of specialized sources. These sources typically represent the best, most definitive content available for their subject areas. For example, in the health sciences field, the Centers for Disease Control, National Institutes of Health, PubMed, Mayo Clinic, and American Medical Association are all Deep Web sites, often inaccessible to conventional Web crawlers like Google and Yahoo. BrightPlanet supported the ArnoldIT.com analysis of the firm’s system. As a result of this investigation, the technology warranted an in-depth discussion with Mr. Bushee.
The wide ranging interview focuses on BrightPlanet’s search, harvest, and OpenPlanet technology. Mr. Bushee told Search Wizards Speak: “As more information is being published directly to the Web, or published only on the Web, it is becoming critical that researchers and analysts have better ways of harvesting this content.”
Mr. Bushee told Search Wizards Speak:
There are two distinct problems that BrightPlanet focuses on for our customers. First, we have the ability to harvest content from the Deep Web. And second, we can use our OpenPlanet framework to add enrichment, storage and visualization to harvested content. As more information is being published directly to the Web, or published only on the Web, it is becoming critical that researchers and analysts have better ways of harvesting this content. However, harvesting alone won’t solve the information overload problems researchers are faced with today. The answer to a research project cannot be simply finding 5,000 raw documents, no matter how good they are. Researchers are already overwhelmed with too many links from Google and too much information in general. The answer needs to be better harvested content (not search), better analytics, better enrichment and better visualization of intelligence within the content – this is where BrightPlanet’s OpenPlanet framework comes into play. While BrightPlanet has a solid reputation within the Intelligence Community helping to fight the “War on Terror,” our next mission is to be known as the commercial and academic leader in harvesting relevant, high quality content from the Deep Web for those who need content for research, business intelligence or analysis.
You can read the full text of the interview at http://www.arnoldit.com/search-wizards-speak/brightplanet.html. More information about the company’s products and services is available at http://www.brightplanet.com. Mr. Bushee’s technology has gained solid support from some professional researchers and intelligence agencies. BrightPlanet has moved “beyond search” with its suite of content processing technology.
Stephen Arnold, October 13, 2009
Google and Content Processing
October 12, 2009
I find the buzz about Google’s upgrades to its existing services and the chatter about Google Books interesting but not substantive. My interest is hooked when Google provides a glimpse of what its researchers are investigating. I had a conversation last week that pivoted on the question, “Why would anyone care what researchers or graduate students working with Google do?” The question is a good one, and it illustrates how angle of view determines what is or is not important. The media find Google Books fascinating. The Web log authors focus on incremental jumps in Google’s publicly accessible functions. I look for deeper, tectonic clues about this trans-national, next generation company. I sometimes get lonely out on my frontier of research and analysis, but, as I said, perspective is important.
That’s why I want to highlight a dense, turgid, and opaque patent application with the fetching title “Method and System for Processing Published Content on the Internet”. The document was published on October 8, 2009, by the ever efficient USPTO. The application was filed on June 9, 2009, but its technology drags like an earthworm through a number of previous Google filings in 2004 and more recent disclosures such as the control panel for a content owner’s administering of distribution and charge back for content. As an isolated invention, the application is little more than a different charge at the well understood world of RSS feeds. The problem Google’s application resolves is inserting ads into RSS content without creating “unintended alerts”. When one puts the invention in a broader context, the system and method of the invention is more flexible and has a number of interesting applications. These are revealed in the claims section of the patent application.
Keep in mind that I am not a legal eagle. I am an addled goose. Nevertheless, what I found suggestive is that the system and method hooks into my analysis of Google’s semantic functions, its data management systems, and, of course, the guts of the Google computational platform itself for scale, performance, and access to other Google services. In short, this is a nifty little invention. The component that caught my attention is the controls made available to publishers. The idea is that a person with a Web log can “steer” or “control” some of the Google functions. The notion of an “augmented” feed in the context of advertising speaks to me of Google’s willingness to allow a content producer to use the Google system like a giant information facility. Everything is under one roof and the content producer can derive revenue by using this facility like a combination production, distribution, and monetization facility. In short, the invention builds out the “digital Gutenberg” aspect of the Google platform.
Here’s how Google explains this invention:
The invention is a method for processing content published on-line so as to identify each item in a unique manner. The invention includes software that receives and reads an RSS feed from a publisher. The software then identifies each item of content in the feed and creates a unique identifier for each item. Each item then has third party content or advertisements associated with the item based on the unique identifier. The entire feed is then stored and, when appropriate, updated. The publisher then receives the augmented feed which contains permanent associations between the third party advertising content and the items in the feed so that as the feed is modified or extended, the permanent relationships between the third party content and previously existing feed items are retained and readers of the publisher’s feed do not receive a false indication of new content each time the third party advertising content is rotated on an item.
The claims wander into the notion of a unique identifier for content objects, item augmentation, and other administrative operations that have considerable utility when applied at scale within the context of other Google services such as the programmable search engine. This is a lot more interesting than a tweak to an existing Google service. Plumbing is a foundation, but it is important in my opinion.
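The “permanent association” idea in the quoted abstract can be sketched in a few lines. What follows is my own toy reconstruction, not Google’s implementation: the field names, the hashing scheme, and the in-memory `ad_store` are all assumptions made for illustration.

```python
import hashlib

def item_id(entry):
    """Derive a stable identifier for a feed item from fields that do not
    change between fetches: the guid if present, else link plus title."""
    basis = entry.get("guid") or (entry.get("link", "") + "|" + entry.get("title", ""))
    return hashlib.sha256(basis.encode("utf-8")).hexdigest()[:16]

def augment_feed(entries, ad_store):
    """Attach third party content to each item, keyed by its stable id.
    Because the key ignores the ad payload, rotating ads later does not
    change an item's identity, so feed readers see no false 'new' items."""
    augmented = []
    for entry in entries:
        uid = item_id(entry)
        # The first ad assigned to an item becomes its permanent association.
        ad = ad_store.setdefault(uid, "ad-for-" + uid)
        augmented.append({**entry, "id": uid, "ad": ad})
    return augmented
```

Because the identifier hashes only fields that stay constant between fetches, swapping the advertising payload later leaves each item’s identity intact, which is exactly the “false indication of new content” problem the application says it solves.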
Stephen Arnold, October 12, 2009
History of Social Media
October 9, 2009
I find the social media “revolution” a combination of old and new. The guts of the technology have been exposed for years. Some of the newest applications take advantage of mash up methods and bandwidth to create quite interesting online services. In my next Information World Review column I write about an innovation from Georgia Tech. The system displays real time data from devices such as traffic cameras. Writing the column forced me to do a quick review of the history of social media. I located a useful article that some readers may want to read. “Major Advances in Social Networking” provides a helpful summary of important milestones in this sector of content creation and processing. I found the selection of examples and the categories useful; for example, Lifestreaming. I did not agree with everything in the article, but I found it helpful in looking at the sweep of the social media innovation machinery.
Stephen Arnold, October 9, 2009
Comments about Google and Content Preservation
October 8, 2009
The search pundits are chasing the Google press conference. The addled goose flapped right over the media event and spent time with “Google’s Abandoned Library of 700 Million Titles”. The article, which appeared in Wired Magazine, tackles the history of the Deja.com usenet archive. The article is interesting for three reasons:
- The content is “ancient ruins”; that is, not in good shape
- Access is problematic because the search functions are, according to Wired, “extremely poor”
- There has not been much attention focused on this content collection.
I access Google Groups content occasionally. My personal experience is that I can find useful information on certain topics. For me, the most interesting comment in the Wired article was:
In the end, then, the rusting shell of Google Groups is a reminder that Google is an advertising company — not a modern-day Library of Alexandria.
Not as affirming as the news flowing from the Google media event, but I found the Wired article suggestive.
Stephen Arnold, October 8, 2009
Intelligenx Profiled in CIO
October 2, 2009
A happy quack to the Intelligenx team. The write up in the Spanish language CIO was a PR coup for this Washington, DC area company. You can read the story “La Base de Datos no es el Futuro de los Datos” in Spanish here or in English via Google Translate. Intelligenx delivers blistering performance. The profile said:
A very important Latin American bank called us because it had a latent security threat: indexing a full day of its logs took 11 hours on a server with four processors and 4 GB of RAM. We took the data, put it on a notebook with 2 GB of RAM, and indexed everything in 20 minutes. You can imagine that it is not possible to secure a system when there is an 11 hour delay before you know what is happening in your logs. Another similar case involved a telecommunications company that needed to keep call records for 30 days; these added up to 30 billion records. When the company received a judicial request to find a specific item in its database, locating it took more than 24 hours, and the company received more than 30 judicial requests a month… Another interesting case, in which search capability converged with the interoperability capabilities of our product, arose at the Ministry of Justice of Brazil, with five regions and hundreds of courts running different platforms and systems, so that consulting case law was an impossible task. With our product we generated an interoperability layer that adapts to each and every platform in each court, and we make any document available in no more than 150 milliseconds.
A flap of the wings to Zubair and Iqbal Talib.
Stephen Arnold, October 2, 2009
XML May Get Marginalized
September 29, 2009
I found the write up by Jack Vaughan interesting and thought provoking. XML (a wonderful acronym for Extensible Markup Language), a child of CALS and SGML, two fine parents in my opinion, may have its salad days behind it. You can read “XML on the Wane? Say It Isn’t So, Jack” and make up your own mind. Let’s assume that XML is a bum and no longer the lunch guest of big name executives. What happens? First, the Google methods are what I would call “quasi XML”; that is, XML in, but Googley once processed by the firm’s proprietary recipes. My view is that Google gets an advantage because its internal data management methods, disclosed to some extent in its open source technical documents, remain above the fray. Second, if XML goes the way of the dodo, then the outfit with the optimal transformation tools can act like one of those infomercial slicers and dicers—for a fee, of course. Finally, the publishers who have invested in XML face yet another expense. More costs will probably thin the herd. In a quest for more revenue, XML junkies may be forced to boost their prices, which will further narrow their customer base. In short, if XML gets the bum’s rush, Google may get a boost and others get a dent in the pocketbook.
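For readers who want the “transformation tools” point made concrete, here is a minimal Python sketch of the kind of slicing and dicing I have in mind. The `xml_to_dict` helper is hypothetical and deliberately naive (repeated sibling tags collapse to one entry), so treat it as illustration, not product.

```python
import xml.etree.ElementTree as ET

def xml_to_dict(xml_text):
    """Toy transformer: turn a simple XML fragment into a nested dict,
    the sort of XML-out conversion a transformation shop would sell.
    Repeated sibling tags collapse, so this is illustrative only."""
    def walk(node):
        children = list(node)
        if not children:
            return node.text or ""
        return {child.tag: walk(child) for child in children}
    root = ET.fromstring(xml_text)
    return {root.tag: walk(root)}
```

The value is not in the ten lines themselves but in owning the pipeline: once content leaves XML, whoever controls the conversion step controls the fee.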
Stephen Arnold, September 29, 2009
Yebol Web Search: Semantics, Facets, and More
September 28, 2009
“Do We Really Need Another Search Engine?” is an article about Yebol. Yebol is another search engine. The write up included this description of the new system:
According to its developers, “Yebol utilizes a combination of patented algorithms paired with human knowledge to build a Web directory for each query and each user. Instead of the common ‘listing’ of Web search queries, Yebol automatically clusters and categorizes search terms, Web sites, pages and contents.” What this actually means is that Yebol uses a combination of methods – web crawlers and algorithms combined with human intelligence – to produce a “homepage” for each and every search query. For example, search Bell Canada in Yebol and, instead of a Google-style listing of results, you’re presented with a “homepage” that provides details about Bell’s various enterprises, executives, competitors as well as a host of other information including recent Tweets that mention Bell.
The site at http://www.yebol.com includes the phrase “knowledge based smart search.” I ran a query for Google and received a wealth of information: links, facets, hot links to Google Maps, etc.
My search for dataspace, on the other hand, was not particularly useful. I anticipate that the service will become more robust in the months ahead.
The PC World write up about Yebol said:
At launch, Yebol can provide categorized results for more than 10 million search terms. According to the company it intends to provide results for ‘every conceivable search term’ in the next three to six months.
The founder, Hongfeng Yin, was a senior researcher on the Yahoo! Data Mining Research team, where he built the core behavioral targeting technologies and products which generated revenue in the hundreds of millions of dollars. Prior to Yahoo, he was a software manager and senior staff software engineer with KLA-Tencor. He worked for several years on noetic sciences and theories of human thinking with professor Dai Ruwei and professor Tsien Hsue-shen (Qian Xuesen) at the Chinese Academy of Sciences. He has a Ph.D. in Computer Science from Concordia University, Canada, and a Master’s degree from Huazhong University of Science and Technology, China. Hongfeng holds multiple patents on search engines, behavioral targeting, and contextual targeting.
The Yebol launch news release is here. The challenge will be to deliver a useful service without running out of cash. The use of patented algorithms is a positive. Combining these recipes with human knowledge can be tricky and potentially expensive.
Stephen Arnold, September 28, 2009