Rebane's Ruminations
August 2011

George Rebane

CCC-IBM Yesterday IBM announced the first fruits of its cognitive computing initiative.  It’s a processor chip with an architecture that mimics some of the neural net topology (how neurons hook up to other neurons) of a human brain.  The cognitive computing chip (CCC) from IBM Research shown here is not to be programmed in the traditional sense of programmers writing code that computer scientists have designed (more here). 

Instead, CCCs will be hooked into an array called a cognitive computer – one that we may think of as a ‘mini-brain’ – and it will be fed data comprising observations of, say, a process like the progress of a disease revealed through symptoms and tests.  The mini-brain is then prompted to learn and label the process (disease) in essentially the same manner that critters learn processes.  They observe, detect repeatable patterns, and test their knowledge by attempting to predict what comes next in the process.  If they predict correctly, then that positive feedback reinforces the pattern (i.e. model of the process).  Otherwise, their prediction errors become the negative feedback which prompts them to try something else.
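The observe–predict–reinforce loop described above can be sketched in a few lines of code. This is a toy illustration only, not IBM’s actual architecture: a table of transition ‘strengths’ stands in for synaptic weights, and correct predictions are reinforced more strongly than surprises.

```python
# Toy sketch of learning-by-prediction: the learner observes a symbol
# stream, guesses the next symbol, and reinforces the transition that
# actually occurred -- strongly when its guess was right (positive
# feedback), weakly when it was wrong (prompting it to try something
# else next time).
from collections import defaultdict

class PredictiveLearner:
    def __init__(self):
        # strength[prev][nxt]: how strongly 'prev' predicts 'nxt'
        self.strength = defaultdict(lambda: defaultdict(float))

    def predict(self, prev):
        options = self.strength[prev]
        if not options:
            return None  # no pattern learned yet
        return max(options, key=options.get)

    def observe(self, prev, actual):
        guess = self.predict(prev)
        if guess == actual:
            self.strength[prev][actual] += 1.0   # reinforce the model
        else:
            self.strength[prev][actual] += 0.5   # weaker corrective update

def train(learner, stream):
    correct = 0
    for prev, nxt in zip(stream, stream[1:]):
        if learner.predict(prev) == nxt:
            correct += 1
        learner.observe(prev, nxt)
    return correct

learner = PredictiveLearner()
# A repeating 'disease progression' of symptoms: A -> B -> C -> A -> ...
stream = list("ABC" * 50)
hits = train(learner, stream)
print(learner.predict("A"))  # -> B
```

On a repeating sequence like this, the learner locks onto the pattern after a handful of observations and thereafter predicts correctly; real-world processes are noisier, but the feedback loop is the same.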

The structure of the new CCC is such that it promotes more facile and rapid learning.  And an equally important benefit of CCC structures is much lower power consumption than conventional (cf. von Neumann computer) processing chips demand.  To assemble the brain’s processing power from conventional chips would require an enormous computer system that might consume as much electricity as a town.  Clearly, critter brainbones are much more efficient – none of us run around with smoke coming out of our ears.


As a longtime researcher in machine learning, I am heartened by this announcement because it vindicates the direction many of us have advised (predicted) for the last thirty or so years.  Machines with peer intelligence to humans will not be programmed by humans.  Instead, we will learn how to replicate and improve upon the computational structures of real brains in hardware, and then turn them loose to learn from observational and archival data.  Such a machine will then tell us when it has ‘arrived’.

Of course, we still nurture the hubris that we will be in control of the process leading up to peer intelligence, aka the Singularity.  The global Internet today has (exceeds?) the processing power of any human brain, and its parts are already interconnected in ways that can be adapted (through dynamic messaging algorithms) to mimic any of countless ways to learn, become intelligent, and then conscious (having the ability to evince ‘I-ness’ as a separate agent seeking to survive and ‘grow’ in an environment).

Now imagine if some computer science lab(s) somewhere start hooking up such CCC-based mini-brains to the web in order to facilitate data transfers, or ask such brains to learn some internet-resident processes.  Readers of these pages know that what sends chills down my spine is the day when/if, on someone’s laptop, the first non-human intelligence announces, ‘I am here.’  You can bet that this will be a post-Singularity moment, and our universe will have undergone its biggest change since certain nucleic acids coiled themselves into a double helix.


8 responses to “Singularity Signposts – Cognitive Computing Milestone”

  1. Douglas Keachie

    Big Improvement over previous post.


  2. Douglas Keachie

    Hey, hey, hey, here comes Colossus!
    http://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project


  3. Ryan Mount

    Well yeah, but can it play chess and beat the pants off of Celebrity Jeopardy contestants? Probably.
    “The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.” ~HAL


  4. George Rebane

    Good point RyanM, but we have to remember that HAL was represented as a human-programmed AI, and even then Arthur Clarke did not assert that HAL committed an error.  If anything, it was HAL’s human handlers who overlooked providing the machine with a proper utility function within which it should make its decisions.  Given the one HAL was given, or cooked up for itself, the machine clearly acted reasonably.


  5. Ryan Mount

    Good point about HAL, BTW. I was thinking more about hard drive failures, etc. But given enough redundancy and failover, such a CPU might be able to predict and adapt to errors and crashes. We have that now to some extent (disclosure: I work for IBM), but IMO it’s largely a marketing exercise with PowerPoint slides and a lot (data centers’ worth) of hardware thrown at it. And it’s very expensive.
    In all seriousness though, the applications for such a processor are endless. It could detect human error by analyzing patterns of behavior, rather than just crunching numbers for analysis by a human. Imagine a future version of Microsoft Word that could coach your writing style, much like a teacher might, by analyzing large chunks of prose.
    LOL. It could put economists out of work. And bloggers, I guess. 😉


  6. George Rebane

    RyanM – here’s a ‘secret agent request’ – can you find out or point me (and interested readers) to an IBM Research URL that describes the structures used in this CCC?  I’m particularly interested in whether IBM is supporting or adapting something from/with Numenta’s HTM technology that was first described by Jeff Hawkins.
    As a look-ahead, my next question would be how such brain-similar structures would simplify the implementation of Bayes nets (since actual critter brains do that very well).
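    As a toy illustration of that last point, exact inference in the smallest possible ‘Bayes net’ – a single Disease → Symptom link – reduces to Bayes’ rule; all probabilities below are made-up numbers for the sketch, not clinical figures.

```python
# Minimal Bayes-net inference over one Disease -> Symptom link,
# with the posterior computed by Bayes' rule. The probabilities
# are invented for illustration only.
p_disease = 0.01           # prior P(D)
p_sym_given_d = 0.90       # P(S | D): symptom likely if diseased
p_sym_given_not_d = 0.05   # P(S | ~D): symptom's false-positive rate

def posterior_disease_given_symptom():
    # P(D|S) = P(S|D)P(D) / [P(S|D)P(D) + P(S|~D)P(~D)]
    numer = p_sym_given_d * p_disease
    denom = numer + p_sym_given_not_d * (1 - p_disease)
    return numer / denom

print(round(posterior_disease_given_symptom(), 3))  # -> 0.154
```

    Note how a 90%-sensitive symptom still leaves the posterior at only ~15% when the prior is 1% – exactly the kind of conditional-dependency bookkeeping that critter brains do effortlessly and that a brain-similar substrate would have to encode.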


  7. Ryan Mount

    Most of the research comes out of the Almaden Research Center down by San Jose:
    http://www.almaden.ibm.com/
    I do know that they have an entire team dedicated to something called Cognitive Computing:
    http://www.ibm.com/smarterplanet/us/en/business_analytics/article/cognitive_computing.html
    I find it interesting that they are using the human brain as a template for this project. So rather than creating something from scratch, they’re attempting, to some extent, to mimic the “functionality” of the human brain.
    It’s a very interesting epistemological question/problem, and it’s certainly one that has daunted many post-modern thinkers: How do we know what we know? What are the limits of our knowledge? Is there such a thing? Perhaps through the aid of technology (enter: Arthur C. Clarke), we might be able to surpass the capabilities of the human brain.


  8. George Rebane

    Exactly so RyanM, and that is the point of the Singularity Signposts section on RR. And thank you for the input and links.


Leave a comment