Belgian researchers have found ways to mimic the human brain to improve sensors and the way they pass data to mainframe computers.
By: Pat Brans, Pat Brans Associates/Grenoble School of Management
Published: April 6, 2023
The human brain is far more efficient than the world's most powerful computers. With an average volume of about 1,260 cm³, it consumes only about 12 W (watts) of power.
Using this biological marvel, the average person learns a huge number of faces in a very short time, and can recognize any of those faces immediately, regardless of expression. People can also look at an image and recognize objects from a seemingly infinite number of categories.
Compare that to the world's most powerful supercomputer, Frontier, which runs at the Oak Ridge National Laboratory, occupies 372 m² and consumes 40 million watts of peak power. Frontier processes large amounts of data to train artificial intelligence (AI) models to recognize large numbers of human faces, as long as the faces do not display unusual expressions.
But the training process is very energy intensive, and even though the resulting models run on smaller computers, they still consume a lot of power. Moreover, Frontier-trained models can only recognize objects from a few hundred categories, e.g. person, dog, car, and so on.
Scientists know a few things about how the brain works. They know, for example, that neurons communicate through spikes, firing when their accumulated membrane potential crosses a threshold. Scientists have used brain probes to peer deep into the human cortex and record neural activity. Those measurements show that a typical neuron fires only a few times per second, which is very sparse activity. At a very high level, this and other basic principles are clear. But how neurons compute, how they participate in learning, and how connections are made and remade to form memories remains a mystery.
However, many of the principles researchers are working on today are likely to be part of a new generation of chips that will replace central processing units (CPUs) and graphics processing units (GPUs) in 10 years or more. Computer designs will also have to change, moving away from the so-called von Neumann architecture, where processing and data sit in different places and share a bus to transfer information.
New architectures, for example, will co-locate processing and storage, just as in the brain. Researchers are borrowing this concept and other features of the human brain to make computers faster and more energy efficient. This field of study is known as neuromorphic computing, and much of the work is being done at the Interuniversity Microelectronics Centre (Imec) in Belgium.
“We tend to think of spiking behavior as the fundamental level of computation within biological neurons. There are much deeper computations going on that we don't understand, probably down to the quantum level,” says Ilja Ocket, program manager for neuromorphic computing at Imec.
“Even between quantum effects and the high-level behavioral model of a neuron, there are other intermediate functions, such as ion channels and dendritic computation. The brain is much more complicated than we know. But we have already found some aspects we can emulate with current technology, and we are already seeing great returns.”
There is a spectrum of partially neuromorphic techniques and optimizations that have already been industrialized. For example, GPU designers already implement some of what has been learned from the human brain, and computer designers already reduce memory bottlenecks by using multilayer memory stacks. Massive parallelism is another bio-inspired principle used in computers, for example in deep learning.
However, it is very difficult for neuromorphic researchers to break into computing because there is already so much momentum behind traditional architectures. So instead of trying to disrupt the world of computers, Imec turned its attention to sensors. Imec researchers are looking for ways to sparsify sensor data and exploit that sparsity to speed up processing in the sensors while reducing power consumption.
“We focus on sensors that are temporal in nature,” says Ocket. “That includes audio, radar and lidar. It also includes event-based vision, a new type of vision sensor that is not frame-based, but works on the principle of your retina: each pixel independently sends a signal if it detects a significant change in the amount of light it receives.
“We borrowed these ideas and developed new algorithms and new hardware to support these spiking neural networks. Our job now is to demonstrate how low-power and low-latency this can be when integrated into a sensor.”
Embedding neural networks on a chip
A neuron accumulates input from all the other neurons to which it is connected. When its membrane potential reaches a certain threshold, the axon, the connection leading out of the neuron, emits a spike. This is one of the ways your brain performs computation. And that is what Imec now does on a chip, using spiking neural networks.
“We use digital circuitry to emulate the leaky integrate-and-fire behavior of biological neurons,” says Ocket. “They are leaky in the sense that as they integrate, they also lose a little bit of voltage across their membrane; they integrate because they accumulate incoming spikes; and they fire because an output spike is emitted when the membrane potential reaches a certain threshold. We mimic that behavior.”
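The leaky integrate-and-fire model Ocket describes can be sketched in a few lines of Python. The weights, leak factor and threshold below are illustrative values, not Imec's; the point is only to show the three behaviors in the quote: leak, integrate and fire.

```python
def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron."""
    # leak: lose a fraction of the membrane potential each step,
    # then integrate: add the weighted incoming spikes
    v = v * leak + sum(w * s for w, s in zip(weights, spikes_in))
    fired = v >= threshold    # fire when the threshold is crossed
    if fired:
        v = 0.0               # reset the membrane potential after a spike
    return v, fired

# Drive one neuron with two input channels over four timesteps
weights = [0.4, 0.3]
v, out = 0.0, []
for spikes_in in ([1, 0], [0, 1], [1, 1], [0, 0]):
    v, fired = lif_step(v, spikes_in, weights)
    out.append(fired)
# out → [False, False, True, False]: the neuron stays silent until
# enough spikes accumulate, fires once, then resets
```

Note that in the quiet timesteps the neuron does nothing but decay, which is exactly the sparsity that makes spiking hardware cheap to run.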
The benefit of this mode of operation is that until the input data changes, no events are generated and no computation is performed in the neural network. Consequently, no energy is used. Spike sparsity within the neural network inherently offers low power consumption because computation does not happen constantly.
A spiking neural network is considered recurrent when it has memory. A spike is not computed just once. Instead, it reverberates through the network, creating a form of memory that allows the network to recognize temporal patterns, just as the brain does.
Using spiking neural network technology, a sensor transmits tuples that include the X and Y coordinates of the pixel that spiked, the polarity (whether the light is increasing or decreasing) and the time the spike occurred. When nothing happens, nothing is transmitted. On the other hand, if things change in several places at the same time, the sensor generates many events, which becomes a problem because of the size of the tuples.
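A sketch of that event representation is below. The frame-differencing helper is a hypothetical stand-in for illustration: a real event camera emits these tuples asynchronously, per pixel, rather than by comparing whole frames.

```python
from collections import namedtuple

# The address-event tuple described above: pixel coordinates,
# polarity (+1 getting brighter, -1 getting darker) and a timestamp.
Event = namedtuple("Event", ["x", "y", "polarity", "t_us"])

def encode_events(prev_frame, frame, t_us, threshold=10):
    """Illustrative only: derive events by differencing two frames."""
    events = []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            delta = value - prev_frame[y][x]
            if abs(delta) >= threshold:  # only significant changes emit an event
                events.append(Event(x, y, 1 if delta > 0 else -1, t_us))
    return events  # a static scene yields an empty list: nothing to transmit

prev = [[100, 100], [100, 100]]
curr = [[100, 130], [100, 100]]
events = encode_events(prev, curr, t_us=42)
# → [Event(x=1, y=0, polarity=1, t_us=42)]: one pixel changed, one tuple sent
```

The asymmetry the article describes is visible here: a static scene costs nothing to transmit, while a scene where every pixel changes produces one tuple per pixel, which is far more data than a fixed-rate frame.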
To keep this burst of transmission in check, the sensor filters the events and decides how much bandwidth to use based on the dynamics of the scene. In the case of an event-based camera, if everything in view changes at once, the camera would otherwise send a huge amount of data; a frame-based system would handle that case better, because it has a constant data rate. To overcome this problem, designers put a lot of intelligence into the sensors to filter the data, yet another way of mimicking human biology.
"The retina has 100 million receptors, which is like having 100 million pixels in your eye," says Ocket. “But the fiber optics that run through your brain only carry a million channels. That means the retina does 100 times compression, and that's a real calculation. Certain features are detected, such as small circles, left to right, or top to bottom. We are trying to mimic the filtering algorithm that takes place on the retina in these event-based sensors, which operate at the edge and send data to a central computer. You can think of the computation that takes place on the retina as a form of edge AI.”
People have been imitating spiking neurons in silicon since the 1980s. But the main hurdle that kept this technology from reaching the market, or any kind of real application, was training spiking neural networks as efficiently and conveniently as deep neural networks are trained. “Once you establish a good mathematical understanding and good techniques for training spiking neural networks, the hardware implementation is almost trivial,” says Ocket.
In the past, people would build spiking neural network chips and then do a lot of tweaking to get the networks to do something useful. Imec took another approach, first developing software algorithms that showed a given configuration of spiking neurons with a given set of connections would perform at a given level. Only then did they build the hardware.
This kind of advance in software and algorithms is unconventional for Imec, where progress usually comes in the form of hardware innovation. Another unconventional choice was to do all this work in standard CMOS, which means the technology can be industrialized quickly.
The future impact of neuromorphic computing
“The next direction we are taking is towards sensor fusion, which is a hot topic in automotive, robotics, drones and other domains,” says Ocket. “A good way to get high-fidelity 3D perception is to combine multiple sensory modalities. Spiking neural networks will allow us to do this with low power and low latency. Our new goal is to develop a new chip specifically for sensor fusion in 2023.
“Our goal is to merge multiple sensor streams into a coherent and complete 3D representation of the world. Like the brain, we don't want to think about what's coming from the camera versus what's coming from the radar. We're going for an intrinsically fused representation.
“We look forward to showing some very relevant demos for the automotive industry, and for robotics and drones across all industries, where the performance and low latency of our technology really shine,” says Ocket. "First, we're looking at breakthroughs to solve certain edge cases in automotive sensing or robotic sensing that aren't possible today because latency is too high or power consumption is too high."
Two other developments Imec expects in the market are the adoption of event-based cameras and of sensor fusion. Event-based cameras have very high dynamic range and very high temporal resolution. Sensor fusion can take the form of a single module with cameras in the middle, some radar antennas around them, perhaps a lidar, with the data fused on the sensor itself using spiking neural networks.
But even when the market adopts spiking neural networks in sensors, the general public may not be aware of the underlying technology. That is likely to change when the first event-based camera is integrated into a smartphone.
“Let's say you want to use a camera to recognize your hand gestures as a form of human-machine interface,” Ocket explains. “If this were done with an ordinary camera, it would constantly look at every pixel in every frame. It would take in a frame and decide what is happening in that frame. But with an event-based camera, if nothing happens in its field of view, no processing occurs. It has an intrinsic wake-up mechanism that can be exploited to start computing only when there is enough activity coming from the sensor.”
Human-machine interfaces can suddenly become much more natural, all thanks to neuromorphic sensing.