In 2012 we witnessed the start of the deep learning revolution. Deep learning is built on artificial neural networks: algorithms loosely inspired by the way neurons in the brain connect. The idea itself is decades old, but large datasets and powerful GPUs finally made it practical. One of the defining characteristics of deep learning is that it gets smarter the more data it is trained on.
Deep learning dramatically improved AI's image recognition capabilities, and soon other kinds of AI algorithms were born, such as deep reinforcement learning.
These AI models were much better at capturing the patterns in their training data, but more importantly, they were able to keep improving over time.
One notable example is DeepMind's AlphaStar project, which managed to defeat top professional players at the real-time strategy game StarCraft II. The agents were developed to act under imperfect information, and the AI repeatedly played against itself -- a technique known as self-play -- to discover new strategies and refine its decisions.
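AlphaStar's actual training pipeline is far more elaborate, but the core self-play idea can be sketched with a much simpler game. The toy sketch below is an assumption-laden illustration, not DeepMind's method: two copies of the same learner play rock-paper-scissors, and each one repeatedly best-responds to the move frequencies it has observed from its opponent (a classic scheme called fictitious play). All function and variable names here are invented for the example.

```python
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def best_response(opponent_counts):
    """Play the move that beats the opponent's most frequent move so far."""
    most_common = opponent_counts.most_common(1)[0][0]
    return next(m for m in MOVES if BEATS[m] == most_common)

def self_play(rounds=3000):
    """Two copies of the same learner improve by playing each other."""
    # Each agent tracks the opponent's empirical move counts, seeded with
    # one imaginary observation of each move so best_response is defined.
    seen_by_a = Counter(MOVES)  # what agent B has played, as seen by A
    seen_by_b = Counter(MOVES)  # what agent A has played, as seen by B
    played_a = Counter()
    for _ in range(rounds):
        move_a = best_response(seen_by_a)
        move_b = best_response(seen_by_b)
        seen_by_a[move_b] += 1
        seen_by_b[move_a] += 1
        played_a[move_a] += 1
    return played_a

freqs = self_play()
# Each move settles near the uniform 1/3 frequency -- the equilibrium
# strategy for rock-paper-scissors -- without any human examples.
print({m: round(freqs[m] / 3000, 2) for m in MOVES})
```

The point of the sketch is the loop structure, not the game: the agent's opponent is itself, so every improvement it finds immediately raises the bar it must beat next round.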
We see the same concept in self-driving cars, where the AI must predict the trajectory of nearby cars in order to avoid collisions. In these systems, the AI bases its actions on historical data. Needless to say, reactive machines were incapable of dealing with situations like these.
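Real driving stacks predict trajectories with learned models over rich sensor data; the minimal sketch below only shows the underlying idea in its simplest form. The constant-velocity assumption, the track format, and the helper names are all illustrative assumptions, not any real system's API: each car's recent positions are extrapolated forward, and the predicted paths are compared for a near-collision.

```python
import math

def predict_positions(track, steps, dt=0.1):
    """Extrapolate future (x, y) positions with a constant-velocity model.

    `track` is a list of observed (x, y) positions sampled every `dt`
    seconds; velocity is estimated from the last two observations.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]

def min_gap(track_a, track_b, steps=10, dt=0.1):
    """Smallest predicted distance between two vehicles over the horizon."""
    path_a = predict_positions(track_a, steps, dt)
    path_b = predict_positions(track_b, steps, dt)
    return min(math.dist(a, b) for a, b in zip(path_a, path_b))

# Hypothetical scenario: two cars closing on each other in adjacent lanes.
oncoming = min_gap([(0.0, 0.0), (1.0, 0.0)], [(10.0, 1.0), (9.0, 1.0)])
print(f"closest predicted gap: {oncoming:.1f} m")
```

Even this crude model illustrates why such systems need memory: the prediction comes entirely from past observations, which is exactly what a purely reactive machine lacks.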
Despite all these advancements, AI still lags behind human intelligence. Most notably, it requires huge amounts of data to learn even simple tasks. While the models can be retrained to advance and improve, changes to the environment the AI was trained on can force a full retraining from scratch. For instance, consider languages: once we learn a second language, learning a third and a fourth becomes progressively easier. For AI, prior learning makes no such difference.
That is the limitation of narrow AI -- it can master a specific task, yet it fails miserably at the slightest alteration.