“Artificial Cognition Architectures”
James A. Crowder, John N. Carbone, Shelli A. Friess
Aficionados of artificial intelligence often fantasize, speculate, and debate the holy grail that is a fully autonomous artificial life form, yet rarely do we find a proposed architecture approaching a credible probability of success. With “Artificial Cognition Architectures”, Drs. Crowder, Carbone, and Friess have painstakingly pulled together many disparate pieces of the robot puzzle in sufficient form to convince this skeptic that a human-like robot is finally within the realm of achievement, even if still at the extreme outer bounds of applied systems.
The authors propose an architecture for a Synthetic, Evolving Life Form (SELF):
A prerequisite for a SELF consciousness includes methodologies for perceiving its environment, taking in available information, making sense of it, filtering it, adding it to internal consciousness, learning from it, and then acting on it.
SELF mimics the human central nervous system through a highly specific set of integrated components within the proposed Artificial Cognitive Neural Framework (ACNF), which includes an Artificial Prefrontal Cortex (APC) that serves as the ‘mediator’. SELF achieves its intelligence through the use of Cognitrons, software programs that serve as ‘subject matter experts’. An artificial Occam abduction process is then tapped to help manage the overall cognitive framework, called ISAAC (Intelligent information Software Agents to facilitate Artificial Consciousness).
The system employs much of the spectrum of advanced computer science and engineering to achieve the desired results for SELF, reflecting the authors’ extensive experience. Dr. Jim Crowder is Chief Engineer, Advanced Programs at Raytheon Intelligence and Information Systems. He was formerly Chief Ontologist at Raytheon, which is where I first came across his work. Dr. John Carbone is also at Raytheon; a quick search will reveal many of his articles and patents in related areas. Dr. Shelli Friess is a cognitive psychologist, a discipline that until recently was rarely found associated with advanced computing architecture, even though mimicry of the human nervous system clearly calls for a deep transdisciplinary approach. For example, “Artificial Cognition Architectures” introduces ‘acupressure’, ‘deep breathing’, ‘positive psychology’, and other techniques to SELF, which the authors propose will become ‘a real-time, fully functioning, autonomous, self-actuating, self-analyzing, self-healing, fully reasoning and adapting system.’
While even the impassioned AI post-doc may experience acronym fatigue while consuming “Artificial Cognition Architectures”, the 18 years of research behind the book, together with careful attention to descriptive terminology, help to minimize the confusion surrounding a topic that by necessity begins to take on the complexity of our species.
Serious students and practitioners of AI will find “Artificial Cognition Architectures” particularly interesting for its broad systems approach, while most others with curiosity about this topic will find the book technical but fascinating. Those searching for HAL 9000 will be delighted to see similar reasoning and emotions on display, while simultaneously disappointed to discover designed-in governance and security features that will hopefully prevent such Hollywood scenarios from occurring. The security design was apparently influenced by an entertaining actual incident in which an earlier version of an intelligent agent developed for the U.S. government was inadvertently left plugged in by Dr. Crowder, resulting in a late-night Instant Messaging exchange between a human colleague and a Cognitron slumber party of sorts.
Readers will find a more mature posture regarding policy and security than is commonly found in popular AI culture, apparently reflecting the serious work of applying AI to missile and other systems at Raytheon.
I personally found the book refreshing, as it overlaps much of my own work at the confluence of human-driven AI systems. I also share a concern for internal security, as it appears inevitable that machines with even the most basic cognitive ability will immediately observe how irresponsibly their organic brethren have conducted themselves as stewards of Earth’s resources.

J.A. Crowder et al., Artificial Cognition Architectures, DOI 10.1007/978-1-4614-8072-3_5, © Springer Science+Business Media New York 2014