How Does It Work?

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
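As a minimal sketch of that ingest-analyze-predict loop, the toy classifier below "learns" from a handful of made-up labeled examples and predicts the label of a new input by finding the closest known example (a simple nearest-neighbour rule). The features, values and labels are invented purely for illustration.

```python
# A minimal sketch of the supervised-learning loop described above:
# labeled examples in, a pattern (here: nearest neighbour) out, predictions for new data.
# The data and labels are made up purely for illustration.

labeled_examples = [
    # (feature vector, label) -- e.g. [height_cm, weight_kg] of an animal
    ([30.0, 4.0], "cat"),
    ([60.0, 25.0], "dog"),
    ([25.0, 3.5], "cat"),
    ([70.0, 30.0], "dog"),
]

def predict(features):
    """Return the label of the closest training example (1-nearest neighbour)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(labeled_examples, key=lambda ex: distance(ex[0], features))
    return closest[1]

print(predict([28.0, 4.2]))   # -> "cat"
print(predict([65.0, 27.0]))  # -> "dog"
```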

There are four types of AI:

Reactive Machines

Early AI algorithms had one thing in common: they lacked memory and were purely reactive. Given a specific input, the output would always be the same.

That is the case with many machine learning models. Rooted in statistics, these models can digest huge amounts of data and produce a seemingly intelligent output. For instance, it is extremely difficult (if not impossible) to write a mathematical formula for movie recommendations, but machine learning models yielded great results simply by looking at the purchase histories of similar customers. Solving that problem became one of the factors behind Netflix's success.
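A drastically simplified sketch of that idea might look like the snippet below, which recommends titles bought by the customer whose purchase history overlaps most with yours. The customers, titles and similarity measure are invented for illustration; production recommenders use far larger data and far more sophisticated models.

```python
# A toy version of "recommend based on other customers' purchase history".
# Customer names and movie titles are invented for illustration.

purchases = {
    "alice":   {"Movie A", "Movie B", "Movie C"},
    "bob":     {"Movie A", "Movie B", "Movie D"},
    "charlie": {"Movie C", "Movie E"},
}

def recommend(customer):
    """Suggest titles bought by the most similar other customer."""
    mine = purchases[customer]
    def overlap(other):
        return len(mine & purchases[other])
    # Find the customer whose history overlaps most with ours...
    most_similar = max((c for c in purchases if c != customer), key=overlap)
    # ...and recommend whatever they bought that we have not.
    return purchases[most_similar] - mine

print(recommend("alice"))  # -> {"Movie D"} (what bob bought that alice did not)
```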

The same mechanism works for spam filters, which can statistically determine if the presence and density of certain words should raise a red flag.
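The snippet below sketches that presence-and-density idea with a hand-picked word list and threshold; real filters typically learn such weights statistically (for example, with naive Bayes) from large sets of labeled mail.

```python
# A minimal sketch of the word-presence/density idea behind simple spam filters.
# The word list and threshold are invented for illustration.

SPAM_WORDS = {"free", "winner", "prize", "urgent", "offer"}

def spam_score(message):
    """Fraction of words in the message that look spammy."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)
    return hits / len(words)

def is_spam(message, threshold=0.2):
    return spam_score(message) > threshold

print(is_spam("You are a winner! Claim your free prize now"))  # True
print(is_spam("Meeting moved to 3pm, see agenda attached"))    # False
```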

This kind of AI is known as "reactional" or "reactive" AI, and it works well -- even performing beyond human capacity in certain domains. Most notably, IBM's Deep Blue, a reactive machine, defeated chess grandmaster Garry Kasparov in 1997. However, reactive AI is also extremely limited.

In real life, many of our actions are not reactive -- to begin with, we may not have all the information at hand to react to. Yet we are masters of anticipation and can prepare for the unexpected, even based on imperfect information. Handling imperfect information has been one of the target milestones in the evolution of AI and is necessary for a range of use cases, from natural language understanding to self-driving cars.

For that reason, researchers worked to develop the next level of AI, which had the ability to remember and learn.

Limited Memory

In 2012 we witnessed the deep learning revolution. Drawing on our understanding of the brain's inner workings, researchers developed algorithms that loosely imitate the way neurons connect. One of the defining characteristics of deep learning is that it gets smarter the more data it is trained on.
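As a sketch of what such an artificial "neuron" looks like, the snippet below trains a single sigmoid unit by gradient descent to reproduce a logical OR. The task, learning rate and iteration count are chosen purely for illustration; real deep networks stack millions of such units across many layers.

```python
import math, random

# A single artificial "neuron": it weights its inputs, applies a nonlinearity,
# and adjusts its weights from labeled examples (here, the OR truth table).

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0

def neuron(x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))            # sigmoid activation

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table

for _ in range(5000):                         # gradient-descent training loop
    for x, target in data:
        out = neuron(x)
        error = out - target
        grad = error * out * (1 - out)        # derivative of the squared-error loss
        for i in range(2):
            weights[i] -= 0.5 * grad * x[i]
        bias -= 0.5 * grad

print([round(neuron(x)) for x, _ in data])    # -> [0, 1, 1, 1]
```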

Deep learning dramatically improved AI's image recognition capabilities, and soon other kinds of AI algorithms were born, such as deep reinforcement learning.

These AI models were much better at absorbing the characteristics of their training data, but more importantly, they were able to improve over time.

One notable example is DeepMind's AlphaStar project, which managed to defeat top professional players at the real-time strategy game StarCraft II. The models were developed to work with imperfect information, and the AI repeatedly played against itself to learn new strategies and refine its decisions.

We see the same concept in self-driving cars, where the AI must predict the trajectories of nearby vehicles in order to avoid collisions. In these systems, the AI bases its actions on historical data. Needless to say, reactive machines were incapable of dealing with situations like these.
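A toy version of that trajectory prediction is sketched below: it simply extrapolates the last observed velocity of another vehicle (a constant-velocity model). The coordinates are invented, and real self-driving stacks use learned models over far richer sensor data.

```python
# Predicting a nearby car's future positions from its recent history.
# Positions are invented (x, y) coordinates in metres, one sample per second.

history = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)]

def predict_next(positions, steps=2):
    """Extrapolate future positions assuming the last velocity stays constant."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0          # velocity estimated from the last two samples
    return [(x1 + vx * i, y1 + vy * i) for i in range(1, steps + 1)]

print(predict_next(history))  # -> [(4.0, 2.0), (5.0, 2.5)]
```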

Despite all these advancements, AI still lags behind human intelligence. Most notably, it requires huge amounts of data to learn even simple tasks, and while the models can be retrained to advance and improve, a change in the environment they were trained on forces full retraining from scratch. Consider languages, for instance: once we learn a second language, learning a third or fourth becomes progressively easier. For AI, each new language is just as hard as the first.

That is the limitation of narrow AI -- it can become perfect at doing a specific task but fails miserably with the slightest alterations.

Theory of Mind

Theory of mind capability refers to an AI machine's ability to attribute mental states to other entities. The term is derived from psychology and requires the AI to infer the motives and intents of entities (e.g., their beliefs, emotions, goals).

Emotion AI, currently under development, aims to recognize, simulate, monitor and respond appropriately to human emotion by analyzing voice, image and other kinds of data. This capability, while potentially invaluable in advertising, customer service, healthcare and many other areas, is still far from an AI possessing theory of mind: the latter not only varies its treatment of human beings based on their detected emotional state, but actually understands them.
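To make the distinction concrete, the sketch below varies a customer-service reply according to a crude, keyword-based guess at the writer's emotional state. The keyword lists and canned responses are invented, and detecting emotion this way is exactly the kind of surface-level capability that still falls short of genuine theory of mind.

```python
# A toy illustration of "vary treatment based on detected emotion".
# Real emotion AI learns such signals from voice, image and text data.

EMOTION_KEYWORDS = {
    "angry": {"furious", "unacceptable", "terrible", "angry"},
    "happy": {"great", "thanks", "love", "awesome"},
}

def detect_emotion(text):
    words = {w.strip(".,!?") for w in text.lower().split()}
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def respond(text):
    emotion = detect_emotion(text)
    if emotion == "angry":
        return "I'm sorry about the trouble. Let me escalate this right away."
    if emotion == "happy":
        return "Glad to hear it! Anything else I can help with?"
    return "Thanks for your message. How can I help?"

print(respond("This is unacceptable, my order never arrived"))
print(respond("Thanks, the issue is fixed and everything works great"))
```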

    Indeed, "understanding," as it is generally defined, is one of AI's huge barriers. The type of AI that can generate a masterpiece portrait still has no clue what it has painted. It can generate long essays without understanding a word of what it has said. An AI that has reached the theory of mind state would have overcome this limitation.

Self-Awareness

The types of AI discussed above are precursors to self-aware or conscious machines, i.e., systems that are aware of their own internal state as well as those of others. This essentially means an AI that is on par with human intelligence and can mimic the same emotions, desires or needs.

This is a very long-shot goal, for which we possess neither the algorithms nor the hardware.

Whether artificial general intelligence and self-aware AI will go hand in hand is something only the far future will tell. We still know too little about the human brain to build an artificial one that is nearly as intelligent.