Artificial intelligence, and deep learning in particular, is often compared to the brain – but there are four significant differences. One of them presents researchers with a particular puzzle.
Deep neural networks have shaped the last decade of artificial intelligence (AI) research – and their very name suggests that they were inspired by the brain’s nerve cells, called neurons. The analogy with the biological model is repeatedly emphasized in media reports and, of course, in the advertising material of the software providers. But how are artificial and natural intelligence connected?
This text shows how the neurosciences gave decisive impulses to computer science as early as the 1940s and 50s, how modern algorithms and the brain are similar, and where the most important differences lie. All of this relates purely to the brain’s capabilities as a biological computing machine that can solve complex problems such as object recognition or motion control. Phenomena such as consciousness, emotional perception and other psychological or philosophical aspects are excluded here, since current AI does not deliver anything relevant in any of these areas.
Many Algorithms – But Only One Brain
The desired comparison is made more difficult because the term artificial intelligence encompasses an almost unmanageable variety of different algorithms. Whether convolutional neural network, long short-term memory, sparse coding or one of the numerous other methods: computer science has a whole zoo of algorithms at its disposal, which differ in their objectives, the assumptions made about the input data, their learning strategy and many other points.
On the biology side, things are much more straightforward: for the purposes of this article, there is only one brain. This may come as a surprise, but at least among mammals the organs are so similar that it makes no difference to the comparison with computer science whether we are talking about human, monkey or mouse brains.
Unless explicitly stated otherwise, in this article AI refers to the deep neural networks that have been particularly successful in image recognition in recent years and have thus practically become mainstream. They lend themselves particularly well to comparison with the brain because they are the direct descendants of the perceptron of the 1950s, which was itself directly inspired by brain research. In addition, the visual system is one of the best-researched functions of the brain.
Here we briefly summarize the most important properties of these AI algorithms, concentrating on the classic artificial neurons as structural elements of the networks. Variations on this, such as convolutional layers, while necessary in practice, are ignored here for the sake of clarity and brevity.
The Network Learns
The smallest unit of a deep neural network is the artificial neuron (see Figure 1). It receives signals via several input channels, either from other neurons or directly from the input data, such as the pixels of an image. Depending on the sum of these input signals, the neuron itself becomes active and sends a value via its output channel. The neuron can therefore be thought of as a small pattern detector that fires when it detects its preferred pattern in the input channels.
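The weighted sum and activation just described can be sketched in a few lines of Python. The sigmoid activation function and the concrete weights and numbers below are illustrative choices, not something specified in the article:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the input signals plus a bias term,
    # passed through a sigmoid activation function.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# The neuron "fires" (output close to 1) when the inputs match
# the pattern encoded in its weights.
activity = neuron([1.0, 0.0, 1.0], [2.0, -1.0, 2.0], -3.0)
print(activity)  # ≈ 0.73, the sigmoid of the weighted sum 1.0
```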
Artificial neural networks are built in layers, each consisting of many neurons. Typically, the cells in one layer forward their signals (only) to the cells in the next layer; in this case one speaks of a feedforward architecture. If you view each neuron as a pattern detector, the cells in the higher layers can, so to speak, recognize patterns of patterns.
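The layered, feedforward arrangement can be illustrated with a minimal sketch. The weights, biases, and sigmoid activation here are again arbitrary illustrative choices; the only point is that each layer sees nothing but the previous layer's outputs:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weights, biases):
    # One layer: every neuron (one row of `weights`) computes its own
    # weighted sum over the same inputs.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Two stacked layers: the second layer only ever sees the first layer's
# outputs, so its neurons detect "patterns of patterns".
hidden = layer([1.0, 0.0], [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0])
output = layer(hidden, [[2.0, 2.0]], [-2.0])
print(output)
```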
Before neural networks can solve a problem, they must undergo a training process. In deep learning, this is typically done through so-called supervised learning. The network is repeatedly shown example tasks for which the solution is already known. The connections between the individual neurons are changed so that the network solves the tasks better and better.
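As a toy illustration of supervised learning, the following sketch trains a single sigmoid neuron on the logical AND function with gradient descent. The learning rate, the number of passes, and the squared-error loss are arbitrary choices for this sketch, not details from the article:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Example tasks with known solutions: inputs and their AND labels.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w, b = [0.0, 0.0], 0.0
lr = 1.0  # learning rate (arbitrary choice for this sketch)

# Repeatedly show the examples and nudge the connections (weights)
# so the neuron's answers get better and better.
for epoch in range(2000):
    for x, target in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of the squared error w.r.t. the pre-activation sum.
        grad = (y - target) * y * (1 - y)
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

for x, target in data:
    y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    # After training, the rounded outputs should match the AND labels.
    print(x, round(y))
```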