A team of researchers from the University of Alberta has created what it describes as the first real-world artificial neural network that can process complex data and then apply what it learns to new tasks.
The team, led by doctoral researcher Michael Tkacik, built the network around a class of proteins called G-protein-coupled receptors (GPCRs), which are known to have some remarkable signaling properties.
The proteins act as a kind of synapse between neurons, allowing the cells to communicate with one another and enabling the system to learn to respond to stimuli presented to it.
“This is the first system to learn how the human brain processes data, and it is also the first to be able to use that learning to create new ways of doing things,” Tkacik said.
The researchers developed the system by combining convolutional neural networks (CNNs) with supervised learning.
CNNs are neural networks trained on many labeled examples; during training they learn filters that pick out patterns in the input, and they use those patterns to make predictions about inputs they have not seen before.
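To make that concrete, here is a minimal sketch of that kind of setup, written in PyTorch. The architecture, layer sizes, and training details below are illustrative assumptions for the sketch, not the team’s actual model.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A tiny image classifier: convolutional filters plus a linear head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional filters learn local visual patterns (edges, textures).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 14x14 -> 7x7
        )
        # A linear head maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Supervised learning: compare predictions against known labels and
# adjust the weights to reduce the error.
model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 28, 28)    # stand-in batch of input images
labels = torch.randint(0, 10, (8,))   # stand-in ground-truth labels

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Repeating that last step over many labeled batches is, in essence, what “training” a CNN means.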
“The goal of the CNN is to make predictions about the world, and that can only be done using a limited number of inputs,” Tkacik explained.
“But the problem is that a lot of these predictions don’t come from the brain at all.”
To understand how the CNN works, Tkacik first developed a model of how the brain processes information.
This model was then used to build the network.
Tkacik’s model was trained on images common in facial-recognition work: pictures of people’s faces alongside pictures of everyday objects. The researchers used a wide variety of such images to build their model.
“Our approach is to take images of different shapes and sizes, select one at random, and see how much of that image actually represents a person,” Tkacik told NBC News.
“So if we take a photo of a person and use a computer model to see how it’s represented, we get a picture that is actually different from that person’s face. That’s important, because the more accurate the model, the better the predictions we can make about the world.”
The researchers then used this set of images and their learned representations to train the CNNs.
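As a rough illustration of how such a face-versus-object training set might be assembled; the folder layout, labels, and image size here are assumptions for the sketch, not the study’s actual data:

```python
# Hypothetical data-loading step: label faces as 1 and everyday objects as 0.
from pathlib import Path
import torch
from torchvision import transforms
from PIL import Image

to_tensor = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((28, 28)),
    transforms.ToTensor(),
])

def load_labeled_images(root: str) -> tuple[torch.Tensor, torch.Tensor]:
    """Return image tensors and labels (0 = object, 1 = face)."""
    images, labels = [], []
    for label, folder in enumerate(["objects", "faces"]):   # assumed layout
        for path in Path(root, folder).glob("*.jpg"):
            images.append(to_tensor(Image.open(path)))
            labels.append(label)
    return torch.stack(images), torch.tensor(labels)
```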
The resulting model identified a person’s face with 95 percent accuracy.
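For context, an accuracy figure like that is typically computed by comparing the model’s predictions against held-out labels. A minimal sketch, again with assumed data and model:

```python
# Illustrative only: how a 95 percent accuracy figure is usually measured.
import torch

@torch.no_grad()
def accuracy(model: torch.nn.Module, images: torch.Tensor,
             labels: torch.Tensor) -> float:
    """Fraction of held-out images whose predicted class matches the label."""
    model.eval()
    preds = model(images).argmax(dim=1)   # highest-scoring class per image
    return (preds == labels).float().mean().item()
```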
“We’ve created the most powerful machine learning system in the world to do this, and we hope to use it to create new ways for humans to interact with the world,” Tkacik said in a statement.
The scientists hope eventually to create a system that can automatically interpret signals from the human body and make predictions based on those signals.
For example, such a system might recognize that a person is wearing a watch, or detect signs of an eating disorder.
Tkacik and his team are now working on a more advanced system that could use data from sensors monitoring the eyes and nose to predict a person’s heartbeat.
The results of the research have been published in the journal Scientific Reports.