Deep Learning: what it is and why it will be a key technology in the future of artificial intelligence

From the 1950s until just a few years ago, the usual territory of Artificial Intelligence (AI) was the advanced research laboratory and science fiction. With few exceptions, systems with human-like intelligence appeared only in futuristic films or in works such as those of Isaac Asimov. In recent years, however, this scenario has been changing radically.

The great technological push we usually group under the term Big Data has revolutionized the business environment. Organizations, driven by the need for digital transformation, have become creatures thirsty for vast amounts of data; and for the first time in history there is widespread demand for systems with advanced, human-like intelligence that are able to process that data. This is happening in virtually every industry: rare is the business or public administration that cannot benefit from intelligent, automated data analysis.

Deep Learning

We are living through a historic moment, not because organizations want to incorporate something radically new, but because they are now aware that there is technology capable of processing all the data at their disposal, doing so on time scales far shorter than human ones, and even providing the necessary intelligence.

We could say that Big Data has simply been the first wave, and that the great tsunami is about to arrive. The new Big Data architectures first appeared in the very large Internet companies: digital-native organizations, fully connected from their conception. Today we are seeing Big Data mushroom to encompass all organizations and all sectors, because in a digital, global ecosystem, companies that are not digital natives also need to become data guzzlers.

Machine Learning

One of the keys lies in advanced AI learning. It is increasingly common to ask machines to learn by themselves: we cannot afford to pre-set rules to deal with the endless combinations of input data and situations encountered in the real world.

Rather than do that, we need machines capable of programming themselves; in other words, we want machines that learn from their own experience. The discipline of Machine Learning addresses this challenge, and thanks to the perfect storm we will discuss below, all the Internet giants have entered fully into the world of machine learning, offering cloud services for building applications that learn from the data they ingest.

Today machine learning is more accessible than ever to any programmer. To experiment with these services there are platforms such as IBM Watson Developer Cloud, Amazon Machine Learning, Azure Machine Learning, TensorFlow and BigML.

Understanding learning algorithms is easier if we look at how we ourselves learn as children. Reinforcement learning encompasses a group of machine learning techniques often used in artificial systems. In these systems, as with children, behaviors that are rewarded tend to increase their probability of occurrence, while behaviors that are punished tend to disappear.
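This reward mechanism can be sketched as a minimal reinforcement-learning loop, a so-called two-armed bandit. Everything here (the reward probabilities, the number of trials, the exploration rate) is an illustrative assumption, not something from the article: the agent tries two behaviors, and the one that is rewarded more often ends up valued higher and chosen more often.

```python
import random

random.seed(42)

rewards = {0: 0.2, 1: 0.8}   # hidden probability of reward for each behavior
values = [0.0, 0.0]          # the agent's learned value estimates
counts = [0, 0]              # how many times each behavior was tried

def choose(epsilon=0.1):
    """Mostly pick the behavior currently believed best; sometimes explore."""
    if random.random() < epsilon:
        return random.randrange(2)
    return max(range(2), key=lambda a: values[a])

for _ in range(2000):
    a = choose()
    r = 1.0 if random.random() < rewards[a] else 0.0
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]   # running average of reward

print(values)  # the more-rewarded behavior ends up with the higher estimate
```

After 2000 trials the estimate for behavior 1 settles near its true reward rate of 0.8, while behavior 0 settles near 0.2, so the rewarded behavior dominates the agent's choices.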

Such approaches are called supervised learning, since they require human intervention to indicate what is right and what is wrong (that is, to provide the reinforcement). In many other cognitive computing applications, besides providing reinforcement, humans also supply part of the semantics the algorithms need in order to learn. For example, in the case of software that must learn to differentiate between the types of documents received by an office, it is humans who initially label a significant set of examples so that the machine can later learn from them.

That is, it is humans who initially know whether a document is a complaint, an application, a claim, a registration, a change request, etc. Once the algorithms have a training set provided by humans, they are able to generalize and begin classifying documents automatically, without human intervention.
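The train-on-labels-then-generalize idea can be shown with a toy supervised classifier. This is a minimal naive Bayes sketch, and the four labeled examples and two labels ("complaint", "registration") are invented for illustration; a real office system would use thousands of human-tagged documents and a production library.

```python
import math
from collections import Counter, defaultdict

# Hypothetical examples, as a human clerk might tag them.
training = [
    ("the service was terrible and I want a refund", "complaint"),
    ("I am unhappy with the delay please fix this", "complaint"),
    ("please register my new address in your records", "registration"),
    ("I would like to sign up for the newsletter", "registration"),
]

def train(examples):
    """Count word frequencies per label (a tiny naive Bayes model)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest smoothed log-probability."""
    vocab = {w for c in word_counts.values() for w in c}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(training)
print(classify("I want a refund for the terrible delay", word_counts, label_counts))
# -> complaint
```

The key point is the generalization step: the query sentence never appears in the training set, yet word statistics learned from the labeled examples are enough to sort it correctly.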

Today, training still imposes restrictions that largely limit the power of these algorithms, since good sets of training data (often labeled manually by humans) are required for them to learn effectively. In artificial vision, for example, algorithms that learn to detect objects in images automatically must first be trained with a good set of tagged images, such as Microsoft COCO.

Deep Learning: approaching human perception

Possibly the future of machine learning involves a shift towards unsupervised learning. In this paradigm, algorithms are able to learn without human intervention, drawing their own conclusions about the semantics embedded in the data. There are already companies focused on fully automatic unsupervised learning approaches, such as Loop AI Labs, whose cognitive platform is able to process millions of unstructured documents and autonomously build structured representations.

The discipline of machine learning is seething with applications in the world of Big Data and IoT. Advances and improvements on the traditional algorithms keep appearing, from ensembles of classifiers (ensemble learning) to Deep Learning, which is very fashionable today for its ability to get closer and closer to human perceptive power.

The Deep Learning approach uses logical structures that more closely resemble the organization of the nervous system of mammals, with layers of processing units (artificial neurons) that specialize in detecting certain features present in the perceived objects. Artificial vision is one of the areas where Deep Learning provides a considerable improvement over more traditional algorithms. There are several Deep Learning environments and code libraries that run on powerful modern CUDA-type GPUs, such as NVIDIA cuDNN.
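The layered-specialization idea can be made concrete with a tiny two-layer network. This is a hand-built sketch (the weights are chosen by hand for illustration rather than learned by training): each hidden "neuron" computes a weighted sum followed by a nonlinearity, one hidden unit specializes in detecting "at least one input on" (OR), another in "both inputs on" (AND), and the output unit combines them to compute XOR, something no single neuron could do alone.

```python
import math

def sigmoid(x):
    """Smooth threshold: maps large positive sums near 1, negative near 0."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer of artificial neurons: weighted sum plus nonlinearity."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hidden neuron 0 fires when at least one input is on (OR);
# hidden neuron 1 fires only when both inputs are on (AND).
hidden_w = [[10.0, 10.0], [10.0, 10.0]]
hidden_b = [-5.0, -15.0]
# Output neuron fires when OR is active but AND is not: XOR.
output_w = [[10.0, -10.0]]
output_b = [-5.0]

def network(x1, x2):
    h = layer([x1, x2], hidden_w, hidden_b)
    return layer(h, output_w, output_b)[0]

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(network(a, b)))
```

In a real deep network the weights are learned from data rather than set by hand, and there are many layers and thousands of units, but the principle is the same: units in each layer specialize in features, and later layers combine those features into more abstract ones.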

Deep Learning represents an approach closer to the actual functioning of the human nervous system. Our brain has a microarchitecture of great complexity, in which nuclei and differentiated areas have been discovered whose networks of neurons are specialized in performing specific tasks.

Thanks to neuroscience, the study of clinical cases of brain damage, and advances in brain imaging, we know, for example, that there are specific language centers (such as Broca's or Wernicke's areas), and that there are networks specialized in detecting different aspects of vision, such as edges, the slope of lines, and symmetry, and even areas closely related to recognizing faces and their emotional expression.

The computational models of Deep Learning mimic these architectural features of the nervous system, allowing the network to contain processing units that specialize in detecting certain hidden features in the data within the global system. This approach has produced better results in computational perception tasks than monolithic artificial neural networks.

So far we have seen that cognitive computing is based on the integration of typically human psychological processes such as learning and language. In the coming years, we may see artificial cognitive systems span multiple applications in the digital ecosystem.

In addition, we will see how learning and language begin to integrate with more psychological functions such as semantic memory, reasoning, attention, motivation and emotion, so that artificial systems come closer and closer to the human level of intelligence; or perhaps, as proposed in ConsScale (a scale for measuring cognitive development), machines may even reach levels higher than humans.

The value of information: Data Fever

Once companies have data and systems capable of processing it, the time comes to enter fully into the next phase: understanding the data, acquiring knowledge and extracting value. On a small scale, this is something humans have traditionally done: we access the data, interpret it using our brains and make supposedly intelligent decisions. However, when we talk about gigabytes, terabytes or even petabytes of information, along with the need to make decisions on time scales of the order of milliseconds, humans are literally out of the game.

We have no choice but to resort to machines, and we also need these machines to be able to interpret the data, understand it and draw conclusions intelligently. In other words, we need artificial cognitive systems: brains made of hardware and software, able to make decisions for us and capable of performing millions of different tasks that in the past only humans could do.

Today many products and services, as well as the marketing strategies that surround them, depend on machines automatically performing tasks such as reading web pages (with excellent reading comprehension), recognizing faces in images published on social networks, understanding the emotion in the voice of a telephone conversation, answering customer questions in a chat, understanding the dynamics and motives behind people's geographical movements, predicting the energy expenditure of a factory, inferring which movies or songs suit each person's taste, or recommending a healthy diet and exercise plan for each person depending on their current health status and genotype.

All these tasks have something in common: they all require perceiving what is happening in the environment through data acquisition, and they all require processing that information to interpret reality and extract meaning (so that the system can later reason about that meaning and decide on adaptive actions). Precisely for this reason, a data fever is spreading across all industries. As in the gold rush, there is enormous hidden value in the millions upon millions of tons of data an organization can collect.

The first objective, therefore, is to manage to handle such huge amounts of data. Once modern Big Data architectures can store and process tens or hundreds of petabytes of data, the challenge moves to the phases of data acquisition and data interpretation for knowledge extraction.

The Cognitive Ubiquitous Internet, the next step

The Internet of Things (IoT) is a breakthrough in the challenge of data acquisition, while cognitive computing provides the intelligence needed for knowledge extraction. The present moment is of great importance for the development of intelligent systems, because we are facing a "perfect storm" caused by the convergence of Cloud, Mobile, IoT, Big Data and Cognitive Computing technologies. According to IDC forecasts, companies will invest more than 31 billion dollars in artificial cognitive systems in 2019. The main sectors for investment in cognitive systems are banking, retail and healthcare.

To understand the magnitude and implications of this perfect storm, we can think of these technologies as integral parts of a highly complex superorganism. This new Cognitive Ubiquitous Internet has a sensory system that keeps extending thanks to the billions of connected devices and sensors deployed everywhere.

Simply counting the mobile lines in existence today makes it clear that they already far outnumber the human inhabitants of planet Earth. In addition, each mobile device has multiple sensors. The new network ceaselessly devours huge volumes of data that allow it to obtain information about the world, and this gives rise to many new business opportunities based on the availability and exploitation of these new data sources.

The boom in Machine-to-Machine (M2M) systems under the Internet of Things has driven exponential growth in data exchange between machines. We have gone from a traditional model, in which sensors obtained information that humans then used, to a model in which machines gain autonomy: sensor data are no longer consumed directly by humans, but become part of the perceptual system of the network.

The new Internet is a network that needs to perceive the world. Just as humans perceive the world around them through their senses, the IoT network model has a repertoire of senses far superior to the human one. While we see, hear and smell only what is around us, the new sensor networks can span thousands of kilometers, using the cloud to communicate and store data, and can also draw on many more sensory modalities.

Connected devices perceive the world through data as varied as the geolocation of a mobile phone, the heart rate of a person wearing a connected bracelet, the engine temperature of a connected car, the fuel-consumption dynamics of an airplane's jet engine, the pH level of the soil where a vine is grown, the altitude of a flying drone, the infrared radiation emitted by a road sign, or the electroencephalographic signal of a wheelchair user. Thanks to the IoT, all these data become part of the perceptual apparatus of an artificial system.

Cognitive computing, the revolution in information processing

Once IoT environments provide real-time information from the most varied sources, we face the problem of interpreting and understanding the data. This is where cognitive computing approaches become necessary, because machines must be able to make sense of, and extract the meaning hidden behind, the trillions of bytes that move through the network.

M2M sensor data are relatively modern data sources, but there is also a great deal of knowledge available on the web and in social networks: huge amounts of text, audio and video containing much of interest.

One of the main reasons for the rise of cognitive computing is the need for effective access to all these sources of information: we need machines capable of reading millions of documents for us. In recent years, thanks to developments such as IBM's Watson, cognitive computing has often been identified with the ability of machines to process natural language.

Cognitive systems like Watson are used to exploit all the knowledge available in document libraries or on the Internet itself. Thus, anyone using a cognitive system of this type is able to take advantage of all the available knowledge about a topic. The cognitive computing revolution implies a radical change in the way we access information.

In the traditional approach, we make queries to a search engine like Google and then have to read the most relevant results; with the help of cognitive assistants, we simply ask the question and the machine takes care of giving the answer, based on what it has learned from reading millions of documents and sensor streams (these are what are called Q&A systems, for Question Answering).
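The difference between "returning links" and "returning an answer" rests on a retrieval step that can be sketched very simply. The following toy example is purely illustrative (the three stored sentences and the stopword list are invented, and real Q&A systems use far richer language models than word overlap): given a question, it returns the stored sentence that shares the most content words with it, as the answer rather than as a list of results to read.

```python
import re

# A tiny "document library"; a real system would hold millions of texts.
documents = [
    "Broca's area is a language center of the human brain.",
    "Microsoft COCO is a set of tagged images for training vision algorithms.",
    "IBM Watson is a cognitive system that processes natural language.",
]

# Very common words that carry little meaning for matching.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "what", "who", "in", "was"}

def content_words(text):
    """Lowercase alphabetic words, with stopwords removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def answer(question):
    """Return the stored sentence sharing the most content words with the question."""
    q = content_words(question)
    return max(documents, key=lambda d: len(q & content_words(d)))

print(answer("What is IBM Watson?"))
# -> IBM Watson is a cognitive system that processes natural language.
```

The user receives a direct sentence instead of a ranked list of pages to read, which is exactly the shift in interaction the paragraph above describes.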

In this perfect technological storm, the impact we are experiencing goes far beyond the machines. We have seen the way we interact with each other change dramatically in just a few years due to the invasion of mobile devices. With the advent of technologies such as Deep Learning and Cognitive Computing, the way we learn, interact with and understand the world will also change radically.

Intelligence is produced in an increasingly distributed way: to solve our problems we can now ask our machines directly and expect an ever more intelligent response. The great responsibility left to us humans is to ask the right questions.

About the author

Sue
