Google’s DeepMind Creates A Computer That Mimics Human Short Term Memory

Replicating the capabilities of the human brain in a computer is not easy, and it goes beyond simply wiring a lot of CPUs together. There are many processes that need to be better understood before we can build a brain, memory among them. Google’s DeepMind startup has just unveiled a prototype computer that has its own short-term memory.

Google acquired DeepMind earlier this year for a whopping $400 million. That’s a lot to pay for a company with almost no name recognition, but the low profile was not for lack of expertise. DeepMind was (and still is) very secretive. It does high-level neural network and AI research: the basic science that needs to be done before an artificial brain can do anything interesting for regular people.

Neural Network

The new paper explains how DeepMind went about leveraging cognitive science to replicate some aspects of a working memory. A traditional computer neural network consists of interconnected processors (i.e. “neurons”) that can change the strength of their connections based on some external input. This models a real brain’s plasticity and ability to learn. DeepMind has added a new component described by pioneering computer scientist Alan Turing. In Turing’s model of computation, memory acts as a ticker tape that passes back and forth through a computer, storing variables for later processing. It’s essentially external memory, and that is what DeepMind added.
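The idea of bolting an external memory onto a neural network can be sketched in a few lines. This is a minimal illustration of the concept, not DeepMind’s actual implementation: a memory matrix stands in for the tape, and the network interacts with it through soft attention weights, which keeps every read and write differentiable (and therefore trainable). The class name and interface here are assumptions for the sake of the example.

```python
import numpy as np

class ExternalMemory:
    """A toy differentiable memory 'tape': rows of slots, accessed by
    attention weights rather than hard addresses."""

    def __init__(self, rows, cols):
        self.M = np.zeros((rows, cols))  # the memory matrix

    def read(self, w):
        # w: attention weights over rows (non-negative, summing to 1).
        # Returns a blend of the rows, weighted by where attention falls.
        return w @ self.M

    def write(self, w, erase, add):
        # Each slot is partially erased and then added to, in proportion
        # to the attention weight focused on it.
        self.M = self.M * (1 - np.outer(w, erase)) + np.outer(w, add)

mem = ExternalMemory(rows=4, cols=3)
w = np.array([0.0, 1.0, 0.0, 0.0])  # attention focused entirely on slot 1
mem.write(w, erase=np.ones(3), add=np.array([1.0, 2.0, 3.0]))
print(mem.read(w))  # -> [1. 2. 3.]
```

In the real Neural Turing Machine the attention weights themselves are produced by the controller network and learned end to end; here they are supplied by hand just to show the read/write mechanics.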

The addition of this new component allows DeepMind’s so-called “Neural Turing Machine” to understand new data as chunks, the deceptively vague term used by cognitive scientists to quantify units of memory. American cognitive psychologist George Miller performed experiments in the 1950s that found a human brain’s short-term memory can hold about seven chunks of data at any given time. That’s about all you can juggle without getting confused.

As for what a “chunk” is, it can be almost anything — a person, a simple concept, a location, whatever. A single sentence can have two or three chunks, or it can have a few dozen. That’s why more complicated sentences are harder to parse. The DeepMind computer uses the external memory component to keep the chunks active so it can go back and grab them at different points in a calculation.


So what does this do for a neural network? It allows it to learn new behaviors without being explicitly programmed for them. For example, the Neural Turing Machine was trained to copy a data set, then instructed to copy the same data sequentially a certain number of times. It was never taught how to do this, but using its newly acquired short-term memory, it learned a simple algorithm from the examples it had seen. In this way, it outperforms traditional neural networks in tests.
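The task described above is usually called “repeat copy.” The sketch below is an assumed, hand-coded illustration of what the machine ends up doing, not the trained model itself: in the real Neural Turing Machine the write-then-read-back procedure is discovered by the network from examples, whereas here it is spelled out explicitly to show what the learned algorithm amounts to.

```python
def repeat_copy(sequence, repeats):
    """Write a sequence to an external 'tape', then read it back
    the requested number of times (toy version of the NTM task)."""
    tape = []
    for item in sequence:      # write phase: store each item in order
        tape.append(item)
    output = []
    for _ in range(repeats):   # read phase: sweep the tape repeatedly
        output.extend(tape)
    return output

print(repeat_copy([1, 0, 1, 1], repeats=2))
# -> [1, 0, 1, 1, 1, 0, 1, 1]
```

A plain feed-forward network has nowhere to park the sequence between the write and read phases; the external tape is what makes reproducing it at arbitrary later points possible.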

This is a big step toward making neural networks more like the real thing, but there’s still a lot of work to do. The human brain might only be able to handle about seven chunks of data at a time in memory, but it has a way of merging chunks of data into single units after they’ve been fully processed. This recoding in short-term memory could be the path to artificial intelligence — the key to a machine understanding a concept and using it to extrapolate new ones. This is what DeepMind is aiming for, so maybe that $400 million purchase price is starting to make a little more sense.
