George Rebane
Yesterday IBM announced the first fruits of its cognitive computing initiative: a processor chip with an architecture that mimics some of the neural net topology (how neurons hook up to other neurons) of a human brain. The cognitive computing chip (CCC) from IBM Research is not to be programmed in the traditional sense of programmers writing code that computer scientists have designed.
Instead, CCCs will be hooked into an array called a cognitive computer – one that we may think of as a ‘mini-brain’ – and it will be fed data comprising observations of, say, a process like the progress of a disease revealed through symptoms and tests. The mini-brain is then prompted to learn and label the process (disease) in essentially the same manner that critters learn processes. They observe, detect repeatable patterns, and test their knowledge by attempting to predict what comes next in the process. If they predict correctly, then that positive feedback reinforces the pattern (i.e., their model of the process). Otherwise, their prediction errors become the negative feedback that prompts them to try something else.
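This observe–predict–reinforce loop can be sketched in ordinary code. The toy Python below is my own illustration, not IBM's design and nothing like the chip's actual hardware; the PatternLearner class, the symptom sequence, and the learning rate are all invented for the example. It simply watches a repeating process, predicts the next observation, and strengthens or weakens its learned patterns based on whether each prediction succeeds.

```python
from collections import defaultdict

class PatternLearner:
    """Learns which observation tends to follow which, via prediction feedback."""

    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        # strength[a][b]: learned belief that observation b follows observation a
        self.strength = defaultdict(lambda: defaultdict(float))

    def predict(self, current):
        """Predict the most strongly associated successor, if any pattern exists."""
        successors = self.strength[current]
        if not successors:
            return None  # nothing learned yet
        return max(successors, key=successors.get)

    def observe(self, current, actual_next):
        """Positive feedback reinforces a correct pattern; errors weaken it."""
        guess = self.predict(current)
        if guess == actual_next:
            self.strength[current][actual_next] += self.lr   # reinforce the pattern
        else:
            if guess is not None:
                self.strength[current][guess] -= self.lr     # weaken the wrong pattern
            self.strength[current][actual_next] += self.lr   # try what was observed

# A toy 'disease progression' the learner observes over and over.
process = ["fever", "rash", "fatigue", "recovery"]

learner = PatternLearner()
correct, trials = 0, 200
for t in range(trials):
    i = t % (len(process) - 1)
    if learner.predict(process[i]) == process[i + 1]:
        correct += 1
    learner.observe(process[i], process[i + 1])

print(f"correct predictions: {correct}/{trials}")
print("learned: fever ->", learner.predict("fever"))
```

After a few mistaken guesses early on, the learner's predictions lock onto the repeating pattern – the same trial-and-error convergence, in miniature, that the mini-brain is meant to perform on real observational data.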
The structure of the new CCC promotes more facile and rapid learning. An equally important benefit is that CCC structures demand much lower power consumption than conventional (cf. von Neumann) processing chips. To assemble the brain’s processing power from conventional chips would require an enormous computer system that might consume as much electricity as a town. Clearly, critter brainbones are much more efficient – none of us run around with smoke coming out of our ears.
As a long-time researcher in machine learning, I am heartened by this announcement because it vindicates the direction many of us have advised (predicted) for the last thirty or so years. Machines whose intelligence is the peer of our own will not be programmed by humans. Instead, we will learn how to replicate and improve upon the computational structures of real brains in hardware, and then turn them loose to learn from observational and archival data. Such a machine will then tell us when it has ‘arrived’.
Of course, we still nurture the hubris that we will be in control of the process leading up to peer intelligence, aka the Singularity. The global Internet today has (or exceeds?) the processing power of any human brain, and its parts are already interconnected in ways that can be adapted (through dynamic messaging algorithms) to mimic any of countless ways to learn, become intelligent, and then become conscious (having the ability to evince ‘I-ness’ as a separate agent seeking to survive and ‘grow’ in an environment).
Now imagine if some computer science lab(s) somewhere start hooking up such CCC-based mini-brains to the web in order to facilitate data transfers, or ask such brains to learn some internet-resident processes. Readers of these pages know that what sends chills down my spine is the day when/if, on someone’s laptop, the first non-human intelligence announces, ‘I am here.’ You can bet that this will be a post-Singularity moment, and our universe will have undergone its biggest change since certain nucleic acids coiled themselves into a double helix.

