Deep learning, a subset of machine learning, attempts to mimic the human brain by taking in large amounts of data and learning from it. In IBM’s definition of the term, deep learning enables systems to “cluster data and make predictions with incredible accuracy.” Yet, as impressive as deep learning is, IBM notes it still can’t match the human brain’s ability to process and learn from information.
Deep learning and DNNs (deep neural networks) are being applied to complex real-world problems, like weather prediction, facial recognition, and chatbots, as well as other kinds of complex data analysis. Allied Market Research projects the global deep learning market will reach nearly $180 billion by 2030, up from $6.85 billion in 2020. A second Allied Market Research study expects the global neural network market to reach nearly $153 billion by 2030, driven by growth in AI (artificial intelligence) and the growing need for data and advanced analytical tools.
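To make the core idea concrete: a deep neural network stacks layers of simple units, each transforming its input, and adjusts its weights from examples. The sketch below is a minimal illustration, not any production system: a tiny two-layer network, trained by hand-coded backpropagation on the classic XOR problem, which a single linear layer cannot solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR. A linear model cannot separate these classes,
# but a network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units (tanh) -> 1 output (sigmoid)
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # hidden representation
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # prediction in (0, 1)
    return h, out

lr = 0.5
for step in range(2000):
    h, out = forward(X)
    # Gradients of the squared error, backpropagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

_, preds = forward(X)
print(np.round(preds.ravel(), 2))
```

Real DNNs differ mainly in scale (millions of units, many more layers) and in using automatic differentiation rather than hand-derived gradients, but the learn-from-examples loop is the same.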
A better understanding of deep learning will benefit future applications of AI and machine learning-derived technologies, including fully autonomous vehicles and the next generation of virtual assistants. In the future, deep learning may evolve to enable unsupervised learning and provide more insight into how the human brain works. It’s this second pursuit that led researchers at the University of Glasgow to investigate just how similar DNNs are to the human brain. Current understanding of DNN technology is relatively limited, and no one fully understands exactly how deep neural networks process information, according to the University of Glasgow.
To further the scientific community’s understanding, in the recently published “Degrees of algorithmic equivalence between the brain and its DNN models,” the researchers proposed and tested a method for understanding how AI models compare to the human brain in the way they process information. The goal was to ascertain whether DNN models recognize things the same way a human brain does, using similar computational steps. The work identified similarities and differences between AI models and the human brain, a step toward creating AI technology that processes information as closely to a human brain as possible.
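The paper’s actual analysis isn’t reproduced here, but one widely used family of techniques for this kind of brain-versus-model comparison is representational similarity analysis (RSA): rather than comparing the two systems’ internals directly, you compare which stimuli each system treats as similar or dissimilar. The sketch below uses synthetic stand-in arrays for DNN activations and brain recordings; all names and shapes are illustrative assumptions, not the study’s data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: pairwise correlation
    distances between a system's responses to each stimulus
    (returned in SciPy's condensed vector form)."""
    return pdist(responses, metric="correlation")

rng = np.random.default_rng(1)

# Synthetic stand-ins: responses of two systems to 20 stimuli. In a
# real study these would be DNN layer activations and brain
# measurements (e.g., fMRI voxels) for the same stimuli.
stimuli = rng.normal(size=(20, 50))
dnn_acts = stimuli @ rng.normal(size=(50, 100))
brain_acts = stimuli @ rng.normal(size=(50, 80)) + 0.5 * rng.normal(size=(20, 80))

# The two systems agree, in this coarse sense, to the extent that the
# stimuli they treat as similar/dissimilar line up.
rho, _ = spearmanr(rdm(dnn_acts), rdm(brain_acts))
print(f"RDM correlation: {rho:.2f}")
```

A high correlation means only that the two systems group stimuli similarly; as the Glasgow work emphasizes, that is weaker than showing they arrive at those groupings through equivalent computational steps.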
“Having a better understanding of whether the human brain and its DNN models recognize things the same way would allow for more accurate real-world applications using DNNs,” said Philippe Schyns, dean of research technology at the University of Glasgow. “If we have a greater understanding of the mechanisms of recognition in human brains, we can then transfer that knowledge to DNNs, which in turn will help improve the way DNNs are used in applications such as facial recognition, where they are currently not always accurate.”
If the goal is to create the most human-like decision-making process possible, then the technologies must be able to process information and make decisions at least as well as a human could, and ideally better. In the list of outstanding questions at the end of the published document, the authors included: “How do DNNs predict the diversity of human categorization behaviors?” This question is worth investigating, since not all humans make the same decision when confronted with the same input. In what ways would a more human-like AI model take that diversity into account?
Want to tweet about this article? Use hashtags #IoT #sustainability #AI #5G #cloud #edge #digitaltransformation #machinelearning #futureofwork #deeplearning #DNN #deepneuralnetworks #artificialintelligence #UniversityofGlasgow #IBM #AlliedMarketResearch #facialrecognition