DOUG LENAT AND MICHAEL WITBROCK SPEAK AT STANFORD AI SYMPOSIUM
Explicit semi-formal reasoning is the superpower of human beings! Even though it’s implemented on a poor platform, it allows us to do powerful, even life-saving, things. Rather than abandoning the methods AI has developed for this sort of reasoning, we should seek to scale them up in just the same way that neural nets, deep learning, etc. were recently scaled up, in the expectation that similar technological and scientific gains will result. As the workshop title suggests, the most fruitful approach is likely to be a hybrid one.
To that end, we will begin by summarizing the current state of Cyc. Over the last 31 years, we’ve built its knowledge base by hand-axiomatizing ten million general, default-true things about the world, maximizing its deductive closure. That led us to make the CycL representation language increasingly expressive, to introduce argumentation and context mechanisms, and so on. At the same time, we’ve been trying to maximize the fraction of that deductive closure that can be reached efficiently. That led us to make the Cyc inference engine a hybrid of 1,050 specialized reasoners, and to overlay that with dozens of meta-level control structures, techniques, and, yes, tricks.
This talk quickly reviewed all that, and then focused on how and why some cognitive tasks are easy for Cyc but difficult for neural systems, and vice versa. That in turn argues that some problems are best addressed by a hybrid approach, e.g., (i) applying explicit Cyc meta-reasoning on top of neural systems; or (ii) having Cyc invoke external, trained neural and statistical components to act as Heuristic Level modules (as though they were task-specific reasoners #1051, #1052, …). We described two current such “hybrid” Cyc applications. The ability to explain and rationalize their decisions will make near-future systems like autonomous cars, household robots, and automated assistants far more trusted and far more trustworthy.
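The hybrid architecture described above can be illustrated with a toy sketch: a dispatcher holds a registry of specialized reasoners, and an external trained model is simply registered as one more reasoner alongside the symbolic ones. Everything here is hypothetical and illustrative (the class names, the first-match control policy, and the stub reasoners are invented for this example, not Cyc's actual API or inference strategy):

```python
# Illustrative sketch only: a toy dispatcher in the spirit of an inference
# engine made of many specialized reasoners, where a trained statistical
# component can be plugged in as just another reasoner. None of these
# names or behaviors come from the real Cyc system.
from typing import Callable, Optional, List, Tuple

class Dispatcher:
    def __init__(self) -> None:
        # Each entry pairs a reasoner's name with a function that either
        # answers the query or returns None to pass.
        self.reasoners: List[Tuple[str, Callable[[str], Optional[str]]]] = []

    def register(self, name: str, reasoner: Callable[[str], Optional[str]]) -> None:
        self.reasoners.append((name, reasoner))

    def answer(self, query: str):
        # Meta-level control here is trivial (first reasoner to answer
        # wins); a real engine would layer scheduling heuristics on top.
        for name, reasoner in self.reasoners:
            result = reasoner(query)
            if result is not None:
                return name, result
        return None, None

# A narrow symbolic reasoner: answers only from a tiny fact table.
def taxonomy_reasoner(query: str) -> Optional[str]:
    facts = {"isa(Fido, Dog)": "True", "genls(Dog, Animal)": "True"}
    return facts.get(query)

# A stand-in for an external trained model, registered like any other
# reasoner ("reasoner #1051" in the talk's framing).
def neural_stub(query: str) -> Optional[str]:
    return "plausible" if "likely" in query else None

d = Dispatcher()
d.register("taxonomy", taxonomy_reasoner)
d.register("neural#1051", neural_stub)

print(d.answer("isa(Fido, Dog)"))       # ('taxonomy', 'True')
print(d.answer("likely(rain, today)"))  # ('neural#1051', 'plausible')
```

The design point is that the symbolic layer keeps control: it decides when to call the statistical component and can reason explicitly about (and explain) the answers it gets back.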
More information on the Symposium can be found here.