
Deep Learning: The revolution of Artificial Intelligence


How does a computer manage to defeat the world champion of Go, considered the most complex strategy game in the world? How can it learn to play Atari games autonomously, reaching performance similar to that of an average human player? The answer lies in Deep Learning, a technique that has revolutionized Artificial Intelligence.

Its name derives from the use of artificial neural networks with many layers of neurons in machine learning tasks. A neural network with many layers is said to be deep, as opposed to a shallow network with only a few layers. A network with many layers has many neurons and many connections between them and, therefore, a great capacity for learning.
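To make the "many connections" point concrete, here is a small sketch (the layer sizes are illustrative, not from the article) that counts the learnable parameters of fully connected networks: each layer of m inputs feeding n neurons contributes m×n weights plus n biases, so adding layers multiplies the number of connections the network can adjust.

```python
def count_parameters(layer_sizes):
    """Count weights and biases in a fully connected network
    whose layer widths are given by layer_sizes."""
    total = 0
    for m, n in zip(layer_sizes, layer_sizes[1:]):
        total += m * n + n  # m*n weights plus n biases per layer
    return total

# A shallow net vs. a deeper one with the same input/output sizes
# (784 inputs, 10 output classes, hypothetical hidden widths):
shallow = count_parameters([784, 128, 10])          # one hidden layer
deep = count_parameters([784, 256, 256, 256, 10])   # three hidden layers
print(shallow, deep)  # the deeper net has several times more parameters
```

More parameters mean more "synapses" in which the network can store what it learns, which is exactly the capacity argument the paragraph above makes.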


This learning capacity is fundamental to building high-performance recognition systems, because the current paradigm is to build systems that learn to recognize the objects of interest from examples. The examples consist of images, thousands or millions of them, each containing the objects to be recognized, together with an indicator of the type of object present; for example, an image of a car on a road paired with the label "car".

To solve a given recognition task, the deep network is trained on these examples until, given an input image, it generates the correct label at its output. Everything the network learns is stored in the connections between neurons (its synapses). Hence the importance of having many, densely connected neurons and, with them, a great capacity for learning.
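The training loop described above can be sketched in miniature. The example below is a didactic toy, not the image pipelines the article describes: a tiny two-layer network learns the XOR mapping by gradient descent, and everything it learns ends up stored in the weight matrices W1 and W2, its "synapses". All names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Training examples: inputs X with their correct labels y (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The network's connections (synapses): everything learned is stored here.
W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute the network's current prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the prediction error back into every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

loss = np.mean((out - y) ** 2)
print(f"final training error: {loss:.4f}")
```

Scaled up to millions of images, many more layers and specialized architectures, this same "predict, measure the error, adjust the synapses" loop is what deep-network training amounts to.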

Deep learning has only recently come into widespread use, although neural networks have been researched since the middle of the last century. The boom since 2012 is due in part to technical advances in machine learning: first, new network architectures with specific structures that require fewer internal parameters to be learned; and second, more sophisticated training techniques that allow networks with many layers to be trained quickly and robustly.

A third factor is the availability of computers with greater processing capacity, larger memories and specialized computing units (GPUs, Graphics Processing Units). This factor is fundamental, since training a deep network with many layers and thousands of neurons on millions of images requires a large amount of memory and processing power.

The fourth factor, often ignored, is the tendency to develop deep networks under the open-source paradigm. Today, thousands of researchers and developers build deep learning applications collaboratively, using standard frameworks such as TensorFlow, Caffe or PyTorch, which make it easy to share designed and trained networks.
