Artificial General Intelligence: Batting the Knowledge Ball toward IBM and Google
December 15, 2013
If you are interested in “artificial intelligence” or “artificial general intelligence”, you will want to read “Creative Blocks: The Very Laws of Physics Imply That Artificial Intelligence Must Be Possible. What’s Holding Us Up?” Artificial General Intelligence is a discipline that seeks to replicate the human brain in a computing device.
Dr. Deutsch asserts:
I cannot think of any other significant field of knowledge in which the prevailing wisdom, not only in society at large but also among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.
Efforts to make a machine’s brain work like a human’s have, says Dr. Deutsch:
split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken. The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent. All shared the basic mistake that they did not understand what computational universality implies about the physical world, and about human brains in particular. But it is the other camp’s basic mistake that is responsible for the lack of progress. It was a failure to recognize that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.
One of the examples Dr. Deutsch invokes is IBM’s game show “winning” computer Watson. He explains:
Nowadays, an accelerating stream of marvelous and useful functionalities for computers are coming into use, some of them sooner than had been foreseen even quite recently. But what is neither marvelous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI. An especially severe outbreak of this occurred recently when a search engine called Watson, developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy. ‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterized its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do. The thing is, playing Jeopardy — like every one of the computational functionalities at which we rightly marvel today — is firmly among the functionalities that can be specified in the standard, behaviorist way that I discussed above. No Jeopardy answer will ever be published in a journal of new discoveries. The fact that humans perform that task less well by using creativity to generate the underlying guesses is not a sign that the program has near-human cognitive abilities. The exact opposite is true, for the two methods are utterly different from the ground up.
IBM surfaces again with regard to playing chess, a trick the company demonstrated years ago:
Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is. Programming AGI is not the same sort of problem as programming Jeopardy or chess.
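Deutsch’s contrast is concrete: a chess program wins by exhaustive game-tree search, not by anything resembling human strategic reasoning. As a toy illustration of that kind of search (a simple stone-taking game, not a real chess engine), a minimal minimax looks like this:

```python
# Toy illustration of game-tree search: minimax on the game of Nim.
# Players alternately take 1-3 stones; whoever takes the last stone wins.
# A chess engine's core loop is this same exhaustive lookahead, just with
# pruning, heuristics, and a vastly larger state space.

def minimax(stones, maximizing):
    """Return +1 if the root player can force a win from here, else -1."""
    if stones == 0:
        # The previous player just took the last stone and won.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2, 3):
        if take <= stones:
            scores.append(minimax(stones - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick a move that forces a win if one exists."""
    for take in (1, 2, 3):
        if take <= stones and minimax(stones - take, maximizing=False) == 1:
            return take
    return 1  # no winning move exists; take one stone and hope

if __name__ == "__main__":
    # With 4 stones every move loses; with 5, taking 1 leaves the opponent at 4.
    print(best_move(5))  # 1
```

The program “knows” only that certain branches of the tree end in a loss; as Deutsch says of the chess engine, it has no concept of what the game is about.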
After I read Dr. Deutsch’s essay, I refreshed my memory about Dr. Ray Kurzweil’s view. You can find an interesting essay by this now-Googler in “The Real Reasons We Don’t Have AGI Yet.” The key assertions are:
The real reasons we don’t have AGI yet, I believe, have nothing to do with Popperian philosophy, and everything to do with:
- The weakness of current computer hardware (rapidly being remedied via exponential technological growth!)
- The relatively minimal funding allocated to AGI research (which, I agree with Deutsch, should be distinguished from “narrow AI” research on highly purpose-specific AI systems like IBM’s Jeopardy!-playing AI or Google’s self-driving cars).
- The integration bottleneck: the difficulty of integrating multiple complex components together to make a complex dynamical software system, in cases where the behavior of the integrated system depends sensitively on every one of the components.
Dr. Kurzweil concludes:
The difference between Deutsch’s perspective and my own is not a purely abstract matter; it does have practical consequence. If Deutsch’s perspective is correct, the best way for society to work toward AGI would be to give lots of funding to philosophers of mind. If my view is correct, on the other hand, most AGI funding should go to folks designing and building large-scale integrated AGI systems.
These discussions are going to be quite important in 2014. As search systems do more thinking for the human user, disagreements that appear to be theoretical will have a significant impact on what information is displayed for a user.
Do users know that search results are shaped by algorithms that “think” they are smarter than humans? Good question.
Stephen E Arnold, December 15, 2013
Math, Proofs, and Collaboration
December 15, 2013
I know that the search engine optimization folks are already on top of this idea, but for the mere mortals of the “search” world, check out “Voevodsky’s Mathematical Revolution.” Vladimir Voevodsky is a Fields Medal winner, and he was thinking about some fresh challenges. He hit upon one: the use of a computer to verify proofs. The write-up explains that in the new foundation, “the fundamental concepts are much closer to where ordinary mathematicians do their work.”
The comment I noted pertains to mathematical proofs. As you know, creating a proof is, for many, mathematics. However, verifying proofs is tough work. The quote I noted is:
“I can’t see how else it will go,” he said. “I think the process will be first accepted by some small subset, then it will grow, and eventually it will become a really standard thing. The next step is when it will start to be taught at math grad schools, and then the next step is when it will be taught at the undergraduate level. That may take tens of years, I don’t know, but I don’t see what else could happen.”
The consequence of automated methods like Coq is even more interesting:
He also predicts that this will lead to a blossoming of collaboration, pointing out that right now, collaboration requires an enormous trust, because it’s too much work to carefully check your collaborator’s work. With computer verification, the computer does all that for you, so you can collaborate with anyone and know that what they produce is solid. That creates the possibility of mathematicians doing large-scale collaborative projects that have been impractical until now.
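To make the idea concrete, here is a minimal sketch of what a machine-checked proof looks like, written in Lean, a proof assistant in the same family as the Coq system mentioned above. The assistant accepts the file only if every step checks, which is exactly the guarantee that makes trust-free collaboration possible:

```lean
-- Commutativity of addition on the natural numbers,
-- verified by the proof assistant rather than by a human referee.
example (m n : Nat) : m + n = n + m := Nat.add_comm m n

-- A concrete instance, checked by direct computation:
example : 3 + 5 = 5 + 3 := rfl
```

A collaborator receiving this file need not re-derive anything by hand; if it compiles, the claims hold.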
Interesting.
Stephen E Arnold, December 15, 2013
Big Data Still Faces a Few Hitches
December 15, 2013
Writer Mellisa Tolentino assesses the state of big data in “Big Data Economy: The Promises + Hindrances of BI, Advanced Analytics” at SiliconAngle. Pointing to the field’s expected $50 billion in revenue by 2017, she says the phenomenon has given rise to a “Data Economy.” The article notes that enterprises in a number of industries have been employing big data tech to increase their productivity and efficiency.
However, there are still some wrinkles to be ironed out. One is the cumbersome process of pulling together data models and curating data sources, a real time suck for IT departments. This problem, though, may find resolution in nascent services that will take care of all that for a fee. The biggest issue may be the debate about open source solutions.
The article explains:
“Proponents of the open-source approach argue that it will be able to take advantage of community innovations across all aspects of product development, that it’s easier to get customers especially if they offer fully-functioning software for free. Plus, they say it is easier to get established partners that could easily open up market opportunities.
Unfortunately, the fully open-source approach has some major drawbacks. For example, the open-source community is often not united, making progress slower. This affects the long-term future of the product and revenue; plus, businesses that offer only services are harder to scale. As for the open core approach, though it has the potential to create value differentiation faster than the open source community, experts say it can easily lose its value when the open-source community catches up in terms of functionality.”
Tolentino adds that vendors can find themselves in a reputational bind when considering open source solutions: If they eschew the open core approach, they may be seen as refusing to support the open source community. However, if they do embrace open source solutions, some may accuse them of taking advantage of that community. Striking the balance while doing what works best for one’s company is the challenge.
Cynthia Murrell, December 15, 2013
Sponsored by ArnoldIT.com, developer of Augmentext
Palantir: What Is the Main Business of the Company?
December 11, 2013
I read about Palantir and its successful funding campaign in “Palantir’s Latest Round Valuing It at $9B Swells to $107.8M in New Funding.” Compared to ordinary search and content processing companies, Palantir is clearly better at attracting investors than most other firms that make sense out of data.
If you run a query for “Palantir” on Beyond Search, you will get links to articles about the company’s previous funding and to a couple of stories about the company’s interaction with IBM i2 related to an allegation about Palantir’s business methods.
I find Palantir interesting for three reasons.
First, it is able to generate significant buzz in police and intelligence entities in a number of countries. Based on what I have heard at conferences, the Palantir visualizations knock the socks off highly placed officials who want killer graphics in their personal slide presentations.
Second, the company has been nosing into certain financial markets. The idea is that the Palantir methods will give some of the investment outfits a better way to figure out what’s going up and what’s going down. The visuals are good, I have heard, but the Palantir analytics are perceived, if my sources are accurate, as better than those from companies like IBM SPSS, Digital Reasoning, Recorded Future, and similar analytics firms.
Third, the company may have moved into a new business sector. The firm’s success in fund raising raises the question, “Is Palantir becoming a vehicle to raise more and more cash?”
Palantir is worth monitoring. The visualizations and the math are not really a secret sauce. The magic ingredient at Palantir may be its ability to sell its upside to investors. Is Palantir introducing a new approach to search and content processing? The main business of the company could be raising more and more money.
Stephen E Arnold, December 11, 2013
Big Thinking about Big Data
December 9, 2013
Big data primarily consists of unstructured data that forces knowledge professionals to spend 25 percent of their time searching for information, say Peter Auditore and George Everitt in their article “The Anatomy of Big Data” published by Sand Hill. The pair run down big data’s basic history and identify four pillars that encompass all the data types: big tables, big text, big metadata, and big graphs. They identify Hadoop as the most important big data technology.
Big data companies and projects are anticipated to drive more than $200 billion in IT spending, but the sad news is that only a small number of these companies are currently turning a profit. One of the main reasons is that open source is challenging proprietary companies. The authors also note that users effectively trade their data to social media Web sites: users have become more of a product than a client, and the social media giants do not share a user’s personal information with the user.
Social media is a large part of the big data bubble:
“The majority of organizations today are not harvesting and staging Big Data from these networks but are leveraging a new breed of social media listening tools and social analytics platforms. Many are employing their public relations agencies to execute this new business process. Smarter data-driven organizations are extrapolating social media data sets and performing predictive analytics in real time and in house. There are, however, significant regulatory issues associated with harvesting, staging and hosting social media data. These regulatory issues apply to nearly all data types in regulated industries such as healthcare and financial services in particular.”
Guess what one of the biggest big data startups is? Social media big data analytics.
The article ends by stating that big data helps organizations make better use of their data assets and it will improve decision-making. This we already know, but Auditore and Everitt do provide some thought-provoking insights.
Whitney Grace, December 09, 2013
Sponsored by ArnoldIT.com, developer of Augmentext
Watson Loses to Amazon, Looks Ahead to Work at Healthcare.gov or Homeland Security
December 5, 2013
The article titled “IBM Introduces Watson to the Public Sector Cloud” on GCN explores the potential for Watson now that IBM has opened it up to developers. IBM Watson Solutions recently won the 2013 North America New Product Innovation award for its combination of communication skills and evaluation abilities. Even more recently, IBM lost its competition with Amazon Web Services for a 10-year, $600 million CIA contract. But the loss has not rained on the parade, as the article explains:
“The initial target market for IBM Watson Developers Cloud is the private sector, with IBM touting third-party applications in such areas as retail and health care. But analysts say the offering will impact big data problems in the public sector, too. McCarthy sees potential for Watson-powered apps in such areas as fraud analysis, which the White House is ramping up due to worries about scammers taking advantage of consumers signing up for its new health care plans. “
Sounds like there is a job for Watson at Healthcare.gov, what with the massive potential for fraud issues. Another possibility is putting Watson to work on entity analytics for Homeland Security, looking for patterns in data. Entity analytics is mainly about comparing huge amounts of data and who could be better at that than IBM’s supercomputer?
Chelsea Kerwin, December 05, 2013
Sponsored by ArnoldIT.com, developer of Augmentext
Mathematical Modeling Applied to Folk Tales
December 3, 2013
A new application of mathematical modeling reminds us how versatile the approach to data can be. Phys.org reports that “Mathematical Modeling Provides Insights Into Evolution of Folk Tales.” Anthropologist Jamie Tehrani at England’s Durham University approaches folk-tale development with methods used to examine biological evolution.
The article tells us that his study:
“… resolves a long-running debate by demonstrating that Little Red Riding Hood shares a common but ancient root with another popular international folk tale The Wolf and the Kids, although the two are now distinct stories. ‘This is rather like a biologist showing that humans and other apes share a common ancestor but have evolved into distinct species,’ explained Dr Tehrani.”
Other stories share this literary ancestor, like the Tiger Grandmother tale found in Japan, China, and Korea. Dr. Tehrani performed his phylogenetic analysis on 58 variations of the story, focusing on 72 specific plot variables. He made a branching map of the variants (an illustration is included in the article).
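At its core, Tehrani’s method codes each tale variant as a set of plot traits and groups variants by similarity. Here is a toy sketch of that idea, with invented trait data (not Tehrani’s actual 72 variables or his phylogenetic software), using simple Hamming distance between trait vectors:

```python
# Toy sketch of trait-based comparison of folk-tale variants.
# Each variant is coded as a vector of binary plot traits
# (e.g., "villain is a wolf", "victim is swallowed", ...).
# Real phylogenetic analysis uses dedicated tools; this only shows
# the underlying idea of grouping variants by shared traits.

from itertools import combinations

# Hypothetical trait codings for illustration only.
variants = {
    "RedRidingHood_FR": (1, 1, 0, 1, 0),
    "RedRidingHood_DE": (1, 1, 0, 1, 1),
    "WolfAndKids_EU":   (1, 0, 1, 0, 0),
    "TigerGrandmother": (0, 1, 1, 1, 0),
}

def hamming(a, b):
    """Number of plot traits on which two variants differ."""
    return sum(x != y for x, y in zip(a, b))

def closest_pair(tales):
    """Return the two variants with the most similar trait profiles."""
    return min(
        combinations(tales, 2),
        key=lambda pair: hamming(tales[pair[0]], tales[pair[1]]),
    )

if __name__ == "__main__":
    print(closest_pair(variants))
```

Repeatedly merging the closest pairs is what produces the branching map of variants described in the article.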
Of the results, he states:
“This exemplifies a process biologists call convergent evolution, in which species independently evolve similar adaptations. The fact that Little Red Riding Hood ‘evolved twice’ from the same starting point suggests it holds a powerful appeal that attracts our imaginations.
“‘There is a popular theory that an archaic, ancestral version of Little Red Riding Hood originated in Chinese oral tradition…. My analysis demonstrates that in fact the Chinese version is derived from European oral traditions, and not vice versa.'”
Tehrani notes that this research could do a good deal more than satisfy literary curiosity. He hopes that it will help clarify migration patterns of ancient humans by tracing where and when certain stories, and story variants, appeared. It is always nice to see someone successfully using an established tool in a new way.
Cynthia Murrell, December 03, 2013
Sponsored by ArnoldIT.com, developer of Augmentext
The Perks of HP Autonomy’s IDOL
November 25, 2013
The promotional article on HP Autonomy titled “IDOL, the OS for Human Information” touts the abilities of HP IDOL (even including a fancy diagram). The amount of data that HP IDOL can manage seems to be of central importance, but so is its versatility in sorting and collecting data from different types of sources, be it social media, cloud, on-premise, image, audio, or structured data. The article explains:
“With HP IDOL, you can access, analyze, understand, and act on large amounts of human information from virtually any source… These capabilities make IDOL the OS for human information. With IDOL’s exploratory analytics, you can unlock key ideas, patterns, and concepts in your structured and unstructured data with streamlined processing, tuned for optimal performance. Uncover new opportunities, spot new trends, automate processes, break down silos, mitigate risks, and cut costs to elevate your organizational efficiency and effectiveness by enabling your data to tell you the answers.”
White papers are also available, such as Transitioning to a New Era of Human Information, but first you must register. The article also exclaims over IDOL’s 360-degree viewing platform, which ensures that information from social media is just as understandable and viewable as anything from a spreadsheet. Unfortunately, this mass-data handling might cause a sluggish system.
Chelsea Kerwin, November 25, 2013
Sponsored by ArnoldIT.com, developer of Augmentext
Update on the Basis Technology, ODNI, and DIA Partnership
November 22, 2013
Basis Technology, a multilingual search and text analytics company, partnered not long ago with the Office of the Director of National Intelligence (ODNI) and the Defense Intelligence Agency (DIA). Global News Wire updates us on where the partnership has taken the three organizations in the article, “Basis Technology Releases Highlight 6.0 in Continued Partnership with ODNI and DIA.” Basis Technology has added key enhancements to Highlight, its flagship tool for Intelligence Community (IC) linguists and analysts to standardize named entities in documents. The DIA and ODNI will use Highlight to overcome issues related to transliterating foreign names and places into IC standards.
Small differences in names and places lead to thousands of errors, and IC personnel need to eliminate them to save time and resources. Highlight simplifies the process and reduces the number of mistakes and inconsistencies.
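The standardization problem can be pictured with a toy example: mapping transliteration variants of a name onto one canonical spelling. The sketch below uses Python’s stdlib fuzzy matching purely for illustration; Basis Technology’s Highlight uses its own linguistic technology, not difflib:

```python
# Toy sketch of named-entity standardization: map transliteration
# variants onto a canonical spelling via stdlib fuzzy string matching.
# (Illustrative only; not Basis Technology's actual algorithm.)

import difflib

# Hypothetical canonical list for the example.
CANONICAL = ["Muammar Gaddafi", "Aleksandr Solzhenitsyn", "Mao Zedong"]

def standardize(name, canonical=CANONICAL, cutoff=0.6):
    """Return the closest canonical spelling, or the input unchanged."""
    matches = difflib.get_close_matches(name, canonical, n=1, cutoff=cutoff)
    return matches[0] if matches else name

if __name__ == "__main__":
    for variant in ["Moammar Qaddafi", "Alexander Solzhenitsin", "Nobody Inparticular"]:
        print(variant, "->", standardize(variant))
```

Even this crude approach collapses common spelling variants onto one entry; the hard cases (shared names, script conversion, context) are what tools like Highlight exist for.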
The article states:
” ‘The increased data collection of both domestic and foreign information has created a very critical need for quick and accurate text analysis,’ said Carl Hoffman, CEO of Basis Technology. ‘Our ongoing work with the ODNI and DIA has uniquely positioned us to provide the Intelligence Community with a proven solution that takes the guesswork out of translators jobs and provides the end user with the actionable intelligence to meet their mission critical needs. We look forward to continuing this relationship and providing our customers with the innovative text analytics and linguistic solutions they have come to expect from Basis Technology.’ “
Is this a form of predictive analytics? Highlight must really come in handy when translating Japanese and Chinese characters, where the slightest difference in the wording or tonality of a sentence can change a word’s entire meaning.
Whitney Grace, November 22, 2013
Sponsored by ArnoldIT.com, developer of Augmentext
Tableau Is The Windows 95 Of Analytics
November 21, 2013
Directions Magazine notes that “Tableau Continues Its Visual Analytics Revolution” by using location analytics to improve business processes. How is Tableau making this possible? The company’s visual analytics software is the main key to advancing how users access and understand information. The article states:
“Tableau represents a new class of business intelligence (BI) software that is designed for business analytics allowing users to visualize and interact on data in new ways and does not mandate that relationships in the data be predefined. This business analytics focus is critical as it is the top ranked technology innovation in business today as identified by 39 percent of organizations as found in our research.”
Tableau wants data usage and understanding to be seamless, without users having to configure it to preset niches. The problem is that Tableau’s software is a dream for data scientists, but there is still a barrier for average users. Even so, Tableau is working to make analytics software the equivalent of Microsoft Office. Business analysts note that Tableau’s software is a business intelligence solution that keeps IT’s involvement to a minimum while quickly demonstrating the value of data.
Tableau is making data software for the average user akin to what Microsoft did with Windows 95. What the company is doing needs to be monitored, not because it is alarming, but because it is going to be big.
Whitney Grace, November 21, 2013
Sponsored by ArnoldIT.com, developer of Augmentext