
Your brain does process information, but it is not a computer

I recently came across an article claiming that your brain “does not process information, retrieve knowledge or store memories” and cannot be viewed as a computer.

I personally think that this assertion is fundamentally wrong, at least regarding information processing.

The computer metaphor is still a valid one

First, I’d like to show you that the computer metaphor cannot be discarded so easily.

Of course our brain is not a computer: it is embodied and cannot be considered an autonomous system.

Credits: Exercise Plays Vital Role Maintaining Brain Health, by a health blog

An embodied system…

“Many features of cognition are embodied in that they are deeply dependent upon characteristics of the physical body of an agent” – RA Wilson and L Foglia, Embodied Cognition in The Stanford Encyclopedia of Philosophy

A downloaded brain, if such a thing were possible, would probably not be able to function without a body with human-like sensors and effectors, since a lot of brain activity is dedicated to monitoring sensory inputs and to regulating and interacting with the environment.

A system which is itself made up of multiple systems

Rather, the brain should be considered as multiple systems that interact together as a whole. Each of them is specialized and able to communicate with the others through different means. The basic elements of these systems transmit signals at different speeds, through different pathways: electrical signaling, chemical signaling (neurohormones)… and even through important diffuse systems (glia and the immune system).

A limited metaphor…

To put it in a nutshell,

“Humans, along with other organisms with brains, differ from computers because they are driven by emotions and motivations. The brain is much too hot and wet to be represented by a computer. The brain is electrical, but it is also driven by fluids (blood) and chemicals (hormones and neurotransmitters). Most importantly, the brain is part of a body which it drives to action, and research from an embodiment perspective also shows that the whole body (not just the brain) affects emotion, motivation, and other psychological processes.”

So what is the adequate metaphor?

Eddie Harmon-Jones, Ph.D., Professor of Psychology at Texas A&M University, suggested that

“So, how can we replace the computer metaphor with a metaphor that more accurately represents the brain of an emotion-driven, motivated organism such as a human? I like the metaphor of a car.

A car may have a computer on board, and may be able to process information. But it is driven by fluids (gasoline, oil, etc.). It is both electrical and mechanical, and it can move.”

Credits: Renault Scénic Front Cut by Sovxx

Even though computers can be extremely complex thanks to recent advances (for instance VLSI, or simply supercomputers), the car metaphor is far more comprehensible.

We should also consider that computers have no significance outside of a human perspective. By contrast, all beings that have a brain exist by themselves.

But a useful metaphor

The metaphor of the brain as a computer helped scientists gain a better understanding of how our brain functions. No more, no less. Just like past metaphors of the brain and current ones.

Cognitivism (a top-down approach), connectionism (a bottom-up approach) and embodied cognition all succeed at explaining or modeling some aspects of our cognition. In neuroscience, several approaches contribute to explaining how the neural substrate functions.
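To make the connectionist idea concrete: its basic building block is a network of simple units, each computing a weighted sum of its inputs pushed through a threshold. Here is a minimal sketch of a single such unit (a toy illustration, far simpler than any real model of cognition):

```python
# A single artificial neuron, the basic unit of connectionist models:
# a weighted sum of inputs passed through a simple threshold.

def neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted input plus bias is positive."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Weights hand-picked so the unit computes a logical AND of two inputs.
weights, bias = [1.0, 1.0], -1.5
print([neuron([a, b], weights, bias) for a in (0, 1) for b in (0, 1)])
# -> [0, 0, 0, 1]: the unit fires only when both inputs are active
```

In connectionist models, cognition emerges from many such units wired together, with the weights learned rather than hand-picked.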

The working memory model, as well as the second revolution in linguistics, etc., were all useful for gaining better knowledge: knowledge that helps evaluate individuals' cognitive abilities, model functions and, to some extent, understand how the brain works.

These theories are still useful for natural language processing, cognitive remediation, and so on. We did not discard Newtonian theory when Einstein published general relativity; the same goes for the computer metaphor.

The metaphor is helpful to explore the brain and to exploit its properties but it is not necessarily the ultimate metaphor. Nobody knows what the future has in store…

The brain processes information

The brain is not able to recall in detail a picture you have seen thousands of times (a banknote, for instance). That does not mean that it does not store information.

As far as we know, the brain stores information using synaptic plasticity (1, 2), i.e. connectivity changes, and brain network topology. The information is scattered throughout the brain and can be unlearned or mixed with new pieces of information.
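Storage through connectivity changes is often illustrated with Hebbian-style learning rules, where a synapse strengthens when the two neurons it links are active together. The following is a deliberately tiny sketch of that idea, not a biological model:

```python
# Toy Hebbian learning: "neurons that fire together wire together".
# Synapse strengths change with correlated activity, so information
# ends up stored in the connectivity itself, not in any single cell.

def hebbian_update(weights, pre, post, rate=0.1):
    """Strengthen each synapse in proportion to joint pre/post activity."""
    return [
        [w + rate * pre[i] * post[j] for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]

# Two presynaptic and two postsynaptic neurons, all synapses start at 0.
weights = [[0.0, 0.0], [0.0, 0.0]]

# Repeatedly present a pattern where pre[0] and post[1] fire together.
for _ in range(10):
    weights = hebbian_update(weights, pre=[1.0, 0.0], post=[0.0, 1.0])

print(weights)  # only the synapse from pre[0] to post[1] has grown
```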

A network of neurons

What exactly we store is still under debate, but it is synthesized and segmented information.

Actually, our brain does process information:

“Information is what is conveyed or represented by a particular arrangement or sequence of things” – Oxford dictionary

Which is exactly what the brain does.

To go further in this explanation:

“Information is an abstract concept. There is a temptation to think of information as fully representable by the bits used in a digital computer. However with signals, the timing of the signal makes a difference. When precisely a signal arrives at a neuron carries information about what the signal means, and there is increasing evidence that the relative timing among neural signals carries information as well. The brain accomplishes information processing using signal processing, but there is more going on than the phrase “information processing” alone would suggest. […]All signal processing is carried out by the spontaneous “random” interactions of molecular collisions, which are loosely guided by the continuously hierarchical structural form of the brain.” – Paul King, Computational Neuroscientist, Data Scientist, Technology Entrepreneur
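The point about timing can be made concrete with a toy latency code, where the same neurons convey different messages purely through the order in which they fire (a deliberately simplified sketch, not a claim about actual neural codes):

```python
# Toy latency code: stimulus identity is read off from the ORDER in
# which neurons fire, not from which neurons fire or how often.

def rank_order(spike_times):
    """Return neuron indices sorted by firing time (earliest first)."""
    return sorted(range(len(spike_times)), key=lambda i: spike_times[i])

# Same three neurons, same number of spikes, different relative timing.
stimulus_a = [1.0, 5.0, 9.0]   # neuron 0 fires first
stimulus_b = [9.0, 5.0, 1.0]   # neuron 2 fires first

print(rank_order(stimulus_a))  # [0, 1, 2]
print(rank_order(stimulus_b))  # [2, 1, 0]
```

Even though both stimuli produce exactly one spike per neuron, the relative timing alone distinguishes them, which is the point of the quote above.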

What is your opinion on the question?


An affordable credit card-sized supercomputer by NVIDIA

Jetson TX1

NVIDIA yesterday announced the Jetson TX1, a small form-factor, credit-card-sized Linux system-on-module for applications ranging from autonomous navigation to deep-learning-driven inference and analytics.


It will soon be available as a development kit: a mini-ITX carrier board with the module pre-mounted, low power consumption and an out-of-the-box desktop user experience (it ships with a custom Ubuntu Linux distribution). Unfortunately, the development kit requires a USB hub to connect both a keyboard and a mouse, and the 16 GB of eMMC storage is probably too little.

Since I really enjoyed working on artificial intelligence at university and as a contractor in a public research center, I think I will ask for the developer kit for Christmas. I plan to use it as a media center, for intelligent home automation and for personal deep learning projects.

You may wonder why I chose this solution. Simply because this board packs several interesting characteristics:

  • a Tegra X1 SoC: an ARM Cortex-A57 CPU and a Maxwell-based GPU packing 256 CUDA cores (delivering 1 teraflop at 5.7 W, i.e. the same peak speed as a small supercomputer from 15 years ago!)
  • 4 GB of RAM shared between the CPU and GPU
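The efficiency behind that teraflop figure is easy to check with back-of-the-envelope arithmetic, using only the two numbers quoted above:

```python
# Back-of-the-envelope power efficiency for the quoted Jetson TX1
# figures: 1 teraflop peak at 5.7 W.
peak_flops = 1e12          # 1 teraflop
power_watts = 5.7

gflops_per_watt = peak_flops / power_watts / 1e9
print(f"{gflops_per_watt:.0f} GFLOPS/W")  # ≈ 175 GFLOPS/W
```

Roughly 175 gigaflops per watt of peak throughput is what makes such a board plausible for embedded deep learning.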

It sounds interesting to me.




Deep learning for everyone!

TensorFlow

That’s great news! Google just open-sourced TensorFlow, its deep (machine) learning library.

The engine is widely used at Google: by speech recognition systems, in the new Google Photos product, in Gmail, in Search, etc.


From now on, startups will be able to develop systems as intelligent as a 4-year-old child. More interestingly, sharing Python code between researchers and data scientists has never been easier.

The limitations of the previous system no longer exist:

[DistBelief] was narrowly targeted to [artificial] neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure — making it nearly impossible to share research code externally. […] TensorFlow has extensive built-in support for deep learning, but is far more general than that — any computation that you can express as a computational flow graph, you can compute with TensorFlow (see some examples). Any gradient-based machine learning algorithm will benefit from TensorFlow’s auto-differentiation and suite of first-rate optimizers. And it’s easy to express your new ideas in TensorFlow via the flexible Python interface.
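The idea of "any computation expressed as a flow graph, with gradients for free" can be sketched in a few lines of plain Python. This is a toy illustration of reverse-mode auto-differentiation, not TensorFlow's actual API:

```python
# Toy reverse-mode auto-differentiation: build a tiny computation
# graph while evaluating, then walk it backwards to get gradients.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

def backward(out):
    """Propagate gradients from the output back through the graph."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad
            stack.append(parent)

# f(x, y) = x * y + x  ->  df/dx = y + 1, df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
backward(f)
print(f.value, x.grad, y.grad)  # 15.0 5.0 3.0
```

TensorFlow does the same bookkeeping for arbitrary graphs of tensor operations, which is why any gradient-based algorithm expressed in it gets differentiation for free.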

Maybe the engine will soon become available as a cloud-based service on a clustered architecture…