Inspired by the human brain, Belgian researchers are developing a new generation of sensors | Computer Weekly (2023)

Feature

Belgian researchers have found ways to mimic the human brain to improve sensors and the way they pass data to mainframe computers.

By

  • Pat Brans, Pat Brans Associates/Grenoble School of Management

Published: April 6, 2023

The human brain is much more efficient than the most powerful computers in the world. A human brain, with an average volume of about 1,260 cm³, consumes about 12W (watts) of power.

Using this biological marvel, the average person learns a huge number of faces in a very short time, and can then immediately recognize any of those faces, regardless of expression. People can also look at an image and recognize objects from a seemingly infinite number of categories.

Compare that to the world's most powerful supercomputer, Frontier, running at Oak Ridge National Laboratory, spanning 372 m² and consuming 40 million watts of power at its peak. Frontier processes large amounts of data to train artificial intelligence (AI) models to recognize large numbers of human faces, as long as the faces don't display unusual expressions.
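
To put those two power budgets side by side, here is a back-of-the-envelope comparison in Python using only the figures quoted above (roughly 12W for the brain, 40 million watts for Frontier at peak). It is an illustration of scale, not a like-for-like benchmark.

# Rough scale comparison based on the power figures quoted in this article.
brain_power_w = 12                   # approximate power draw of a human brain
frontier_peak_power_w = 40_000_000   # Frontier's quoted peak power draw

ratio = frontier_peak_power_w / brain_power_w
print(f"Frontier draws roughly {ratio:,.0f} times the power of a human brain")
# -> roughly 3,333,333 times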

But the training process is energy intensive, and although the resulting models run on smaller computers, they still use a lot of energy. Furthermore, the models generated by Frontier can only recognize objects from a few hundred categories, for example, person, dog, car, etc.

Scientists know a few things about how the brain works. They know, for example, that neurons communicate through spikes, which are emitted when a neuron's accumulated membrane potential crosses a threshold. Scientists have used brain probes to peer deep into the human cerebral cortex and record neural activity. These measurements show that a typical neuron fires only a few times per second, which is a very low firing rate. At a very high level, this and other basic principles are clear. But how neurons compute, how they participate in learning and how connections are made and remade to form memories remain a mystery.

However, many of the principles that researchers are working on today are likely to be part of a new generation of chips that will replace central processing units (CPUs) and graphics processing units (GPUs) within 10 years or so. Computer designs must also change, moving away from what is called the von Neumann architecture, in which processing and data sit in different places and share a bus to transfer information.

New architectures, for example, will co-locate processing and storage, just as in the brain. Researchers are borrowing this concept and other features of the human brain to make computers faster and more energy efficient. This field of study is known as neuromorphic computing, and much of the work is being done at the Interuniversity Center for Microelectronics (Imec) in Belgium.

“We tend to think of spiking behavior as the fundamental level of computation within biological neurons. There are much deeper computations going on that we don't understand, probably down to the quantum level,” says Ilja Ocket, manager of the neuromorphic computing program at Imec.

“Even between quantum effects and the high-level behavioral model of a neuron, there are other intermediate functions, such as ion channels and dendritic computation. The brain is much more complicated than we know. But we’ve already found some aspects that we can emulate with current technology, and we’re already reaping a big reward.”

There is a spectrum of partially neuromorphic and already industrialized techniques and optimizations. For example, GPU designers are already implementing some of what has been learned from the human brain; and computer designers are already reducing bottlenecks through the use of multilayer memory stacks. Massive parallelism is another bio-inspired principle used in computers, for example in deep learning.

However, it is very difficult for neuromorphic computing researchers to make breakthroughs in general-purpose computing because there is already so much momentum behind traditional architectures. So, rather than trying to disrupt the computer world, Imec turned its attention to sensors. Imec researchers are looking for ways to "sparsify" the data and exploit this sparsity to speed up processing in the sensors and, at the same time, reduce energy consumption.

“We focused on sensors that are temporal in nature,” says Ocket. “That includes audio, radar, and lidar. It also includes event-based vision, which is a new type of vision sensor that is not frame-based, but works on the same principle as your retina. Each pixel independently sends a signal if it detects a significant change in the amount of light it receives.

“We borrowed these ideas and developed new algorithms and new hardware to support these spiking neural networks. Our job now is to demonstrate how low the power and latency can be when this is integrated into a sensor.”
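
As a rough illustration of the event-based pixel principle Ocket describes, the Python sketch below compares two brightness readings per pixel and reports an event only where the change exceeds a threshold. The threshold value and function names are assumptions for illustration, not Imec's design.

import numpy as np

CONTRAST_THRESHOLD = 0.15  # assumed change needed before a pixel "fires"

def changed_pixels(previous: np.ndarray, current: np.ndarray):
    """Return (x, y, polarity) for every pixel whose brightness changed enough."""
    diff = current.astype(float) - previous.astype(float)
    ys, xs = np.nonzero(np.abs(diff) > CONTRAST_THRESHOLD)
    return [(int(x), int(y), 1 if diff[y, x] > 0 else -1) for x, y in zip(xs, ys)]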

Spiking neural networks on a chip

A neuron accumulates information from all the other neurons to which it is connected. When the membrane potential reaches a certain threshold, the axon, the connection coming out of the neuron, emits a spike. This is one of the ways your brain performs computations. And that is what Imec now does on a chip, using spiking neural networks.

"We used digital circuitry to emulate the leakage, integration, and firing behavior of biological spike neurons," says Ocket. “They are permeable in the sense that as they integrate, they also lose some voltage across their membrane; they are integrating because they accumulate peaks that arrive; and they fire because the output fires when the membrane potential reaches a certain threshold. We mimic that behavior.”

The benefit of this mode of operation is that, until the data changes, no events are generated and no computations are performed in the neural network. Consequently, no energy is used. The sparseness of spikes within the neural network inherently offers low power consumption because computation does not occur constantly.

A spiking neural network is considered recurrent when it has memory. A spike is not computed just once. Instead, it feeds back into the network, creating a form of memory that allows the network to recognize temporal patterns, much as the brain does.

Using spiking neural network technology, a sensor transmits tuples that include the X and Y coordinates of the pixel that fired, the polarity (whether the brightness went up or down) and the time it fired. When nothing happens, nothing is transmitted. On the other hand, if things change in many places at once, the sensor creates many events, which becomes a problem because of the size of the tuples.
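
For illustration, the event tuple described above could be represented as follows. The field names, the per-event size and the event rate are hypothetical, chosen only to show why a busy scene can overwhelm the link.

from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 if brightness went up, -1 if it went down
    t_us: int      # timestamp in microseconds

# A static scene produces no events, so nothing is transmitted.
# A busy scene can produce millions of events per second:
BYTES_PER_EVENT = 8            # assumed packed size of one tuple
events_per_second = 2_000_000  # hypothetical "everything changes" scene
print(f"{events_per_second * BYTES_PER_EVENT / 1e6:.0f} MB/s")  # -> 16 MB/s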

To keep this surge in transmission under control, the sensor performs filtering, deciding how much data to emit based on the dynamics of the scene. For example, in the case of an event-based camera, if everything in a frame changes, the camera would send a huge amount of data; a frame-based system would handle that case better because it has a constant data rate. To overcome this problem, the designers put a lot of intelligence into the sensors to filter the data, yet another way of mimicking human biology.

"The retina has 100 million receptors, which is like having 100 million pixels in your eye," says Ocket. “But the optical fibers running through your brain only carry a million channels. This means that the retina compresses 100 times, and this is a real calculation. Certain features are detected, such as left-to-right, up-and-down, or small circles. We're trying to mimic the filtering algorithm that takes place on the retina in these event-based sensors, which operate at the edge and send data to a central computer. You could think of the computation that takes place on the retina as a form of cutting-edge AI.”

People have been mimicking spiking neurons in silicon since the 1980s. But the main obstacle that kept this technology from reaching the market, or any kind of real application, was the difficulty of training spiking neural networks as efficiently as state-of-the-art neural networks are trained. “Once you establish a good mathematical understanding and good techniques for training spiking neural networks, the hardware implementation is almost trivial,” says Ocket.

In the past, people built spiking neurons into their chips and then made a lot of tweaks to get the neural networks to do something useful. Imec took the opposite approach, first developing software algorithms to show that a given configuration of spiking neurons with a given set of connections would perform at a given level. Only then did they build the hardware.

This kind of advance in software and algorithms is unconventional for Imec, where progress usually takes the form of hardware innovation. Another unconventional choice was that Imec did all this work in standard CMOS, which means its technology can be industrialized quickly.

The future impact of neuromorphic computing

“The next direction we are taking is towards sensor fusion, which is a hot topic in automotive, robotics, drones and other domains,” says Ocket. “A good way to get high-fidelity 3D perception is to combine multiple sensory modalities. Spiking neural networks will allow us to do this with low power and low latency. Our new goal is to develop a new chip specifically for sensor fusion in 2023.

“Our goal is to merge multiple sensor streams into a coherent and complete 3D representation of the world. Just like the brain, we don't want to have to think about what's coming from the camera versus what's coming from the radar. We're going for an intrinsically fused representation.

“We look forward to showing some very relevant demos for the automotive industry and for robotics and drones across industries, where the performance and low latency of our technology really shine,” says Ocket. “First, we are looking for innovations to solve certain edge cases in automotive or robotic perception that are not possible today because the latency is too high or the power consumption is too high.”

Two other developments Imec hopes to see in the market are the adoption of event-based cameras and sensor fusion. Event-based cameras have very high dynamic range and very high temporal resolution. Sensor fusion could take the form of a single module with cameras in the middle, some radar antennas around them, maybe a lidar, with the data fused in the sensor itself using spiking neural networks.

But even when the market adopts neural networks in sensors, the general public may not be aware of the underlying technology. That will likely change when the first event-based camera is integrated into a smartphone.

“Say you want to use a camera to recognize your hand gestures as a form of human-machine interface,” explains Ocket. “If this were done with a normal camera, it would constantly look at every pixel in every frame. It would take a frame and then decide what is happening in the frame. But with an event-driven camera, if nothing happens in its field of view, no processing takes place. It has an intrinsic trigger mechanism that it can exploit to start computing only when there's enough activity coming from its sensor."
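
In code, that trigger mechanism might look like the sketch below: the expensive gesture classifier runs only when enough events arrive in a time window. The threshold and the classifier itself are hypothetical placeholders, not Imec's implementation.

ACTIVITY_THRESHOLD = 500  # assumed minimum events per window before waking up

def process_window(events, classify_gesture):
    """Run the gesture classifier only when the sensor reports enough activity."""
    if len(events) < ACTIVITY_THRESHOLD:
        return None                      # nothing happening: no computation
    return classify_gesture(events)      # enough activity: classify the gesture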

Human-machine interfaces could suddenly become much more natural, all thanks to neuromorphic sensing.

FAQs

What type of computer architecture is modeled after the human brain's network of neurons? ›

Neuromorphic computing is a method of computer engineering in which elements of a computer are modeled after systems in the human brain and nervous system. The term refers to the design of both hardware and software computing elements.

What technology mimics the human brain? ›

Neural network: A subset of machine learning that mimics the neurons in the human brain and how they signal to one another. Neural networks pass data through interconnected layers of nodes until the network creates the output. Neural networks are at the heart of deep learning algorithms.

Why do scientists struggle to replicate the workings of the human brain in artificial systems? ›

Complexity of the human brain: The human brain consists of approximately 86 billion neurons, each connected to thousands of other neurons. This level of complexity is difficult to replicate in artificial neural networks.

What is an example of a neuromorphic technology? ›

In the medium term we may expect neuromorphic technologies to deliver a range of applications more efficiently than conventional computers, for example to deliver speech and image recognition capabilities in smart phones.

What was the first truly brain inspired neural network called? ›

The perceptron is the oldest neural network, created by Frank Rosenblatt in 1958. Feedforward neural networks, or multi-layer perceptrons (MLPs), are what we've primarily been focusing on within this article. They are comprised of an input layer, a hidden layer or layers, and an output layer.
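
As a sketch of what Rosenblatt-style training looks like in practice (illustrative Python, not his original hardware), a single-layer perceptron can learn a linearly separable function such as logical AND:

import numpy as np

def perceptron_train(X, y, epochs=20, lr=0.1):
    """Train a single-layer perceptron: weighted sum, threshold, error-driven update."""
    w = np.zeros(X.shape[1] + 1)                      # weights plus bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(np.dot(w[1:], xi) + w[0] > 0)  # threshold activation
            w[1:] += lr * (target - pred) * xi        # adjust weights on error
            w[0] += lr * (target - pred)              # adjust bias on error
    return w

# Example: learn logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = perceptron_train(X, y)
print([int(np.dot(w[1:], xi) + w[0] > 0) for xi in X])  # -> [0, 0, 0, 1]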

Which neural networks algorithms are inspired from the structure and functioning of the brain? ›

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.

What technology do scientists use to research the brain? ›

Scientists use imaging devices to better understand the working brain. One device commonly used to explore the brain is called functional Magnetic Resonance Imaging, or fMRI.

What technology is used in brain research? ›

“Neurotechnology” refers to any technology that provides greater insight into brain or nervous system activity, or affects brain or nervous system function. Neurotechnology can be used purely for research purposes, such as experimental brain imaging to gather information about mental illness or sleep patterns.

Which type of technology is used to study the brain? ›

Magnetic resonance imaging (MRI) uses changes in electrically charged molecules in a magnetic field to form images of the brain. Such imaging technologies are more precise than ordinary X-rays and can help find problems when people fall ill.

Which of the following is an AI function that mimics the working of the human brain in processing data for use in detecting objects? ›

Deep learning is an AI function that mimics the workings of the human brain in processing data for use in detecting objects, recognizing speech, translating languages, and making decisions.

Which of the following technology is based on the adaptation of how the human brain works? ›

Neural networks.

Which of the following artificial intelligence systems mimic the structure and functioning of the human brain? ›

An emerging field called "neuromorphic computing" focuses on the design of computational hardware inspired by the human brain.

What is a real life example of artificial neural network? ›

For example, a neural network could be trained to recognize handwritten digits. Another example is the Google self-driving car, which is trained to recognize and classify objects such as a dog, a truck, or a car. Neural networks are good for pattern recognition, classification and optimization.

What are the features of brain inspired computing? ›

Different brain-inspired ('neuromorphic') platforms use combinations of different approaches: analogue data processing, asynchronous communication, massively parallel information processing or spiking-based information representation. These properties distinguish them from von Neumann computers.

Which is an example of whole brain emulation where a machine can think? ›

Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer.

Which theory the human brain was described as a neural network? ›

The idea of neural networks began, unsurprisingly, as a model of how neurons in the brain function, termed 'connectionism', which used connected circuits to simulate intelligent behaviour. In 1943, it was portrayed as a simple electrical circuit by neurophysiologist Warren McCulloch and mathematician Walter Pitts.

What is the first artificial neural network called which year was it invented? ›

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron's design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.

What is inspired by the biological neural networks in the human brain? ›

Artificial neural networks were inspired by the biological neural networks of the human body. The modeling of biological neural networks was a crucial step in the development of artificial neural networks. Many scientists attempted to understand the working of the brain.

What type of neural networks has gates in the neural network that control the flow of information? ›

Long Short-Term Memory (LSTM) Networks

LSTM is a type of RNN that is designed to handle the vanishing gradient problem that can occur in standard RNNs. It does this by introducing three gating mechanisms that control the flow of information through the network: the input gate, the forget gate, and the output gate.
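
A minimal sketch of those three gates for one time step is shown below (illustrative NumPy; the dictionaries W, U and b are assumed to hold trained parameters keyed by gate name):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step showing the input, forget and output gates."""
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])   # output gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])   # candidate cell state
    c = f * c_prev + i * g                                # new cell state
    h = o * np.tanh(c)                                    # new hidden state
    return h, c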

What is the most commonly used and successful neural network? ›

Convolutional neural networks: one of the most popular models used today. This neural network computational model uses a variation of multilayer perceptrons and contains one or more convolutional layers that can be either entirely connected or pooled.

Which activation function is the most commonly used in neural networks? ›

The ReLU is the most used activation function in the world right now, since it is used in almost all convolutional neural networks and deep learning models.
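
For reference, ReLU is a one-line function: it passes positive values through unchanged and clamps negative ones to zero.

import numpy as np

def relu(x):
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # -> [0. 0. 0. 1.5]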

What are the two methods used by scientists to study the human brain? ›

These technological methods include the encephalogram (EEG), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).

What are 3 technologies used to image the brain? ›

Many brain imaging tools are available to cognitive neuroscientists, including positron emission tomography (PET), near infrared spectroscopy (NIRS), magnetoencephalogram (MEG), electroencephalography (EEG), and functional magnetic resonance imaging (fMRI).

Which technique for studying the brain can measure structures and activity in the brain? ›

Functional Magnetic Resonance Imaging (fMRI)

fMRI has become a staple of modern neuroscience research because it allows brain anatomy (obtained from a structural, rather than functional, MRI scan) and function to be correlated in humans.

What type of imaging is used by researchers to study the developing brain? ›

Quantitative MRI (magnetic resonance imaging): Measures fetal brain tissue volume and brain fold development using a magnetic field and radio waves.

Which type of technology enables researchers to observe how the brain changes over time? ›

Functional magnetic resonance imaging (fMRI) measures blood flow in the brain during different activities, providing information about the activity of neurons and thus the functions of brain regions.

What is a specific type of machine learning that uses layers of artificial neural networks to mimic brain functions? ›

Deep learning is a form of machine learning that models patterns in data as complex, multi-layered networks.

What sensory systems is AI trying to copy or replace in a human being? ›

Sensory AI is learning through sensory inputs: information from the five human senses, vision, hearing, smell, taste, and touch.

Which type of artificial intelligence uses algorithms based on the way the human brain operates? ›

Deep learning, a subset of machine learning, is based on our understanding of how the brain is structured. Deep learning's use of artificial neural networks structure is the underpinning of recent advances in AI, including self-driving cars and ChatGPT.

What human invention are best comparable with the human brain? ›

Throughout history, people have compared the brain to different inventions. In the past, the brain has been said to be like a water clock and a telephone switchboard. These days, the favorite invention that the brain is compared to is a computer.

Which technology simulates human thinking? ›

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving.

What is an intelligent machine that is programmed to mimic a human action? ›

Artificial intelligence (AI) broadly refers to any human-like behavior displayed by a machine or system. In AI's most basic form, computers are programmed to “mimic” human behavior using extensive data from past examples of similar behavior.

Which of the following types of AI involves machines that have human level consciousness? ›

Self-aware AI

This will be when machines are not only aware of emotions and mental states of others, but also their own. When self-aware AI is achieved we would have AI that has human-level consciousness and equals human intelligence with the same needs, desires and emotions.

What are 3 examples of neural network? ›

Some common types of neural networks are Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN).

What is an example of a neural network in psychology? ›

For example, when you see a ball thrown to you and you try to catch it, sensory neurons in your eyes send a signal along a network that connects to your visual and motor cortices in your brain, which then send signals to the neurons connected to your arm, hand and finger muscles so you can lift your hands and catch the ball.

What is the name for brain inspired computer hardware design? ›

Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations.

What theory is the brain as a machine? ›

The computational theory of mind holds that the mind is a computational system that is realized (i.e. physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood.

Could a human brain be used as a computer? ›

A "biocomputer" powered by human brain cells could be developed within our lifetime, according to Johns Hopkins University researchers who expect such technology to exponentially expand the capabilities of modern computing and create novel fields of study.

What type of architecture is neural network? ›

What Is Neural Network Architecture? The architecture of neural networks is made up of an input, output, and hidden layer. Neural networks themselves, or artificial neural networks (ANNs), are a subset of machine learning designed to mimic the processing power of a human brain.

What type of neural network is the human brain? ›

The neurons in the human brain perform their functions through a massive interconnected network whose connections are known as synapses.

What is the architecture of the human brain? ›

Brain architecture is comprised of billions of connections between individual neurons across different areas of the brain. These connections enable lightning-fast communication among neurons that specialize in different kinds of brain functions.

What are neural networks modeled after? ›

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected.

What are the two types of neural network design? ›

Convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

Which neural network in the human brain is associated with emotions and memory? ›

The amygdala is part of the limbic system, a neural network that mediates many aspects of emotion and memory.

What is the most common type of neural network? ›

The four most common types of neural network layers are Fully connected, Convolution, Deconvolution, and Recurrent, and below you will find what they are and how they can be used.

What are the main ideas of the human brain as the neural network? ›

NEURAL NETWORKS. In the brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through the axon (the output and conducting structure), which can split into thousands of branches.

What is human brain theory? ›

According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. This allows the brain to maintain function and memory even when it is damaged. It is only when there exist no parts big enough to contain the whole that the memory is lost.

What are the three parts of the human brain psychology? ›

Main Parts of the Brain and Their Functions. At a high level, the brain can be divided into the cerebrum, brainstem and cerebellum.

What is the most advanced type of neural network? ›

Convolutional Neural Network

One of the most powerful supervised deep learning models is the convolutional neural network (CNN). The final structure of a CNN is actually very similar to that of feedforward neural networks (FfNNs), where there are neurons with weights and biases.

What is neural network example in real life? ›

Neural networks solve problems that require pattern recognition. For example, a neural network could be trained to recognize handwritten digits. Another example is the Google self-driving car, which is trained to recognize and classify objects such as a dog, a truck, or a car.

What is the science behind simulating structures inside the brain called? ›

Artificial neural networks (ANNs) consist of input, hidden, and output layers with connected neurons (nodes) to simulate the human brain.
