The recent commercial AI revolution has been driven largely by deep neural networks. Deep NNs, first invented in the 1960s, have flourished on the combination of Internet-scale datasets and distributed GPU farms.
But the field of AI is much richer than this one type of algorithm. Symbolic reasoning algorithms such as formal logic systems, also developed in the 1960s, may be coming back into focus, partly on their own, but also in combination with neural networks in so-called "neural-symbolic" hybrid systems.
Deep Neural Network Weaknesses
Deep neural networks have done amazing things for certain tasks, such as image recognition and machine translation. In many more complex applications, however, traditional deep learning approaches cannot keep up with hybrid architectures that also use other AI techniques such as probabilistic reasoning, seed ontologies, and the ability to reprogram themselves.
Deep neural networks on their own lack strong generalization, i.e., the discovery of new regularities and extrapolation beyond the training set. Deep neural networks interpolate and approximate what is already known, which is why they cannot be creative in the full human sense, even though they can produce creative works that deviate from their training data.
This is why large training sets are required to teach deep neural networks, and why data augmentation, in which humans must specify known label-preserving data transformations, is such an important technique for deep learning. Even the interpolation cannot be carried out perfectly without learning the underlying regularities, as the well-known adversarial attacks on deep neural networks clearly demonstrate.
The slavish adherence of deep neural networks to the details of their training data also makes them difficult to interpret. People cannot fully trust their outputs or explain them, especially in novel situations.
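To make the data-augmentation point concrete, here is a minimal, illustrative sketch in plain Python (no deep learning framework; images are toy 2D grids, and the transformation names are our own): humans specify transformations known to preserve a label, and the training set is expanded with the transformed copies.

```python
# Minimal illustration of data augmentation: humans specify transformations
# known to preserve an image's label, and the training set is expanded with
# the transformed copies. Images here are toy 2D grids of pixel intensities.

def hflip(image):
    """Mirror each row: a horizontal flip preserves most labels."""
    return [list(reversed(row)) for row in image]

def brighten(image, delta):
    """Shift intensities, clamped to the usual [0, 255] range."""
    return [[max(0, min(255, px + delta)) for px in row] for row in image]

def augment(dataset):
    """Expand (image, label) pairs with label-preserving variants."""
    out = []
    for image, label in dataset:
        out.append((image, label))
        out.append((hflip(image), label))
        out.append((brighten(image, 30), label))
    return out

dataset = [([[0, 100], [200, 255]], "cat")]
augmented = augment(dataset)
print(len(augmented))  # 3 examples per original: identity, flip, brighten
```

The point is that the human, not the network, supplies the knowledge that flips and brightness shifts do not change what an image depicts.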
Combining the Strengths of Neural and Symbolic AI Methods
Interestingly, the weaknesses of deep neural networks are largely the strengths of symbolic systems (and vice versa), which are inherently compositional and interpretable and can exhibit genuine generalization. Unlike with neural networks, prior knowledge can also be easily integrated into symbolic systems.
Neural network architectures are very powerful for certain kinds of learning, modeling, and acting, but have only a limited capacity for abstraction. That is why they have been compared to the Ptolemaic epicycle model of the solar system.
Symbolic AI is powerful at manipulating and modeling abstractions, but struggles to deal with massive empirical data streams.
For this reason, we believe that the deep integration of neural and symbolic AI systems is the best path to human-level AGI on modern computer hardware.
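A toy sketch of why prior knowledge is easy to inject into a symbolic system: hand-written facts and rules sit in the same store the inference engine operates on, so adding knowledge requires no retraining. The predicates and rules below are hypothetical, purely for illustration.

```python
# A tiny forward-chaining rule engine. Prior knowledge is just more facts
# and rules added to the knowledge base; no retraining is needed, and every
# derived conclusion can be traced back to the rules that produced it.

facts = {("mammal", "dog"), ("mammal", "whale")}

# Each rule: if the premise predicate holds for X, conclude the conclusion.
rules = [("mammal", "warm_blooded"),   # mammals are warm-blooded
         ("warm_blooded", "animal")]   # warm-blooded things are animals

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

kb = forward_chain(facts, rules)
print(("animal", "whale") in kb)  # True: prior knowledge composes instantly
```

Compositionality and interpretability fall out for free: each fact is a discrete, inspectable symbol, and the chain of rules behind any conclusion can be replayed.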
In this light, it is worth noting that many of the recent successes attributed to deep neural nets are actually hybrid architectures. For example, Google DeepMind's AlphaGo integrates two neural networks into a game-tree search. Its newer MuZero architecture, which masters both board games and Atari games, continues down this path by combining deep neural networks with planning over a learned model.
The extremely successful ERNIE architecture for natural language processing from Tsinghua University integrates knowledge graphs into neural networks. The symbolic sides of these particular architectures are relatively simple, but they can be seen as a hint of more complex neural-symbolic hybrid systems to come.
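The AlphaGo-style combination can be caricatured in a few lines: a classic symbolic tree search whose leaf evaluations come from a learned value function rather than a hand-written heuristic. Everything below is a simplified sketch, not DeepMind's actual architecture; the value function and move generator are stubs standing in for trained networks and real game rules.

```python
# Sketch of neural-guided game-tree search: minimax supplies the symbolic
# search structure, while a learned value function (stubbed here) evaluates
# positions instead of a hand-crafted heuristic.

def value_network(state):
    """Stub for a trained value net: score a state for the maximizing player."""
    return sum(state)  # placeholder evaluation

def children(state):
    """Stub move generator: each move appends +1 or -1 to the state."""
    return [state + [1], state + [-1]]

def search(state, depth, maximizing=True):
    """Minimax to a fixed depth, with the value net evaluating leaves."""
    if depth == 0:
        return value_network(state)
    scores = [search(c, depth - 1, not maximizing) for c in children(state)]
    return max(scores) if maximizing else min(scores)

print(search([], 2))  # prints 0: the best score two plies ahead
```

AlphaGo uses Monte Carlo tree search rather than plain minimax, but the division of labor is the same: symbolic search provides lookahead, the network provides judgment.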
Cisco's success with neural-symbolic street-scene analysis
The integration of neural and symbolic methods rests to a large extent on the most profound revolution in AI of the past 20 years: the rise of probabilistic methods, e.g., neural generative models, Bayesian inference techniques, estimation-of-distribution algorithms, and probabilistic programming.
As an example of the emerging practical applications of probabilistic neural-symbolic methods: at the Artificial General Intelligence (AGI) 2019 conference in Shenzhen last August, Hugo Latapie of Cisco Systems described his team's work, in collaboration with our AI team at the SingularityNET Foundation, using the OpenCog AGI engine together with deep neural networks to analyze street scenes.
The OpenCog framework offers a particularly symbol-rich neural-symbolic substrate that interoperates with common deep neural net frameworks. It combines Probabilistic Logic Networks (PLN), probabilistic evolutionary program learning (MOSES), and probabilistic generative neural networks.
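To ground the Bayesian-inference item in that list, here is a minimal worked example of Bayes' rule, P(H|E) = P(E|H) P(H) / P(E). The detector rates are hypothetical numbers chosen for illustration.

```python
# Minimal Bayesian inference: update a hypothesis's probability in light of
# evidence, using Bayes' rule with a single alternative hypothesis ~H.

def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior P(H|E) given P(H), P(E|H), and P(E|~H)."""
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

# A camera flags a collision. Suppose the detector fires on 90% of real
# collisions and 5% of non-collisions, and collisions have a 1% base rate.
posterior = bayes_update(prior=0.01, likelihood=0.9, likelihood_alt=0.05)
print(round(posterior, 3))  # prints 0.154: a flag alone is weak evidence
```

Even a strong detector yields a modest posterior when the base rate is low, which is exactly the kind of calibrated uncertainty that probabilistic methods contribute to hybrid systems.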
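To give a flavor of probabilistic logic, here is a simplified, independence-assuming deduction formula in the spirit of PLN: given the strengths of "A implies B" and "B implies C" plus the base rates of B and C, it estimates the strength of "A implies C". This is an illustrative sketch, not OpenCog's actual code, and the example strengths are made up.

```python
# Simplified probabilistic deduction in the spirit of PLN: chain two
# uncertain implications into a third, assuming independence of the
# paths that do and do not pass through B.

def deduction(s_ab, s_bc, s_b, s_c):
    """Estimate P(C|A) from P(B|A), P(C|B), P(B), and P(C)."""
    if s_b >= 1.0:
        return s_bc
    # Contribution via B, plus the B-avoiding paths under independence.
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# Hypothetical strengths for a chain like A->B ("X is a smoker implies X
# inhales tar") and B->C ("X inhales tar implies X risks cancer").
s = deduction(s_ab=0.9, s_bc=0.8, s_b=0.5, s_c=0.5)
print(round(s, 3))  # prints 0.74
```

Unlike crisp logic, every link carries a strength, so conclusions degrade gracefully as the premises weaken.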
The traffic analysis system Latapie demonstrated layers OpenCog-based symbolic reasoning on top of deep neural models of street-camera feeds. It enables functions such as semantic anomaly detection (spotting collisions, jaywalking, and other unexpected deviations), unsupervised scene labeling for new cameras, and one-shot transfer learning (e.g., learning to recognize a new bus-stop signal from a single example).
The difference between a purely deep-neural-network approach and a neural-symbolic approach is stark in this case. With off-the-shelf deep neural networks, each network models what a single camera sees. A holistic view of what is going on at a particular intersection, let alone across an entire city, is a much bigger challenge.
In the neural-symbolic architecture, the symbolic level provides a common ontology, so all the cameras can be connected into an integrated traffic management system. If an ambulance needs to be routed so that it neither encounters nor causes significant traffic, this kind of symbolic understanding of the whole scenario is exactly what is needed.
The same architecture can be applied to many other related use cases. With neural-symbolic AI, local intelligence can be enriched, and multiple sources and locations can be combined into a holistic view for reasoning and action.
It may not be impossible to solve this particular problem with a more complex deep neural architecture in which multiple networks cooperate in subtle ways. But this is an example of something that is simply easier to handle with a neural-symbolic approach. And it sits very close to machine vision, one of the great strengths of deep neural networks.
In other, more abstract application areas such as mathematical theorem proving or biomedical discovery, the critical value of the symbolic side of a neural-symbolic hybrid is even more dramatic.
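A toy sketch of what a shared ontology buys: per-camera neural detections (mocked here as dictionaries) are mapped to common symbols, so a single rule base can reason across every camera regardless of its local label vocabulary. All labels, zones, and rule names here are hypothetical, not Cisco's or OpenCog's actual schema.

```python
# Per-camera detectors emit raw, camera-specific labels; a shared ontology
# maps them onto common symbols so one set of rules applies city-wide.

ONTOLOGY = {"person_walking": "pedestrian", "ped": "pedestrian",
            "auto": "vehicle", "car": "vehicle"}

def normalize(detection):
    """Map a camera-specific label to the shared ontology."""
    return {"kind": ONTOLOGY.get(detection["label"], "unknown"),
            "zone": detection["zone"], "camera": detection["camera"]}

def jaywalking_alerts(detections):
    """Symbolic rule: a pedestrian in a roadway zone is a semantic anomaly."""
    events = [normalize(d) for d in detections]
    return [e for e in events
            if e["kind"] == "pedestrian" and e["zone"] == "roadway"]

# Two cameras with different label vocabularies, one shared rule base.
raw = [{"label": "ped", "zone": "roadway", "camera": "cam1"},
       {"label": "car", "zone": "roadway", "camera": "cam2"},
       {"label": "person_walking", "zone": "sidewalk", "camera": "cam2"}]
print(len(jaywalking_alerts(raw)))  # prints 1: one alert across both cameras
```

Adding a new camera means adding its vocabulary to the ontology mapping; the rules, and any city-wide reasoning built on them, stay unchanged.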
2020: The Year of Neural-Symbolic AI
Deep neural networks have done astonishing things in recent years and taken applied AI to a whole new level. We bet that the next phase of incredible AI success will come from hybrid architectures such as neural-symbolic systems. This trend started relatively quietly in 2019, and we expect it to pick up speed dramatically in 2020.
Published on January 15, 2020 – 09:00 UTC