Understanding Artificial Intelligence and the Shades of Difference Among its Subsets

In the era of the fourth industrial revolution, or the second machine age as some term it, Artificial Intelligence (AI) has emerged as the disruptive technology that will have a major impact on business models and our day-to-day lives. Some have rebranded AI as 'cognitive computing' or 'machine intelligence', while others have incorrectly used the terms AI and 'machine learning' interchangeably. Let us delve a little deeper into some of these terms, and into the branches of AI that are likely to have a significant impact on businesses, and even economies, in the coming decade.

Artificial Intelligence (AI) has been around for a long time. Very early European computers were conceived as 'logical machines', and by reproducing capabilities such as basic arithmetic and memory, engineers of that era saw themselves as attempting to create 'mechanical brains'. However, as technology and our understanding of the workings of the human mind have progressed rapidly, the concept of what constitutes AI has changed. The ultimate goal of AI has shifted from performing increasingly complex calculations to building machines and systems capable of performing tasks and cognitive functions previously thought to lie within the scope of human intelligence alone. To get there, machines and systems must be able to learn these capabilities automatically, instead of each capability having to be programmed explicitly, end-to-end. AI has thus become a broad field, involving many disciplines ranging from robotics to machine learning and deep learning.

Thus, AI may be broadly defined as machines developing the capability to carry out tasks in a way that humans would consider 'smart'. Machine learning is a subset of AI that envisages giving computing systems access to large volumes of data so that they can 'learn' to carry out tasks without having to be explicitly programmed, end-to-end. The emergence of the internet, and the consequent huge amounts of digital information that can be stored, accessed and analyzed, has provided a major fillip to the domain of machine learning. The internet has also given rise to the development of data lakes, which may be visualized as storage repositories holding vast amounts of raw data in its native format, including structured, semi-structured and unstructured data, to be accessed as the need arises. Unlike a data warehouse, which holds data in predefined hierarchical formats in files and folders and is expensive to maintain, a data lake uses a flat architecture to store data.

One of the early approaches to machine learning was the development of neural networks. Neural networks are inspired by our understanding of the biology of the human brain and the interconnections between all those neurons. But unlike the human brain, where any neuron can connect to any other neuron within a certain physical distance, computer-based neural networks have discrete layers, connections and directions of data propagation. Neural networks work on a system of probabilities: based on the data fed to it, a network is able to make statements, decisions or predictions with a certain degree of certainty. The addition of a 'feedback loop' enables the learning part. By sensing or being told whether its decisions are right or wrong, the network modifies the approach it takes in the future.
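The 'feedback loop' described above can be sketched in a few lines of Python. The example below is purely illustrative, not drawn from any particular library: a single artificial neuron outputs a probability via a sigmoid, and after each prediction its weights are nudged in the direction that reduces the error, here while learning the simple logical OR function.

```python
import math
import random

def sigmoid(x):
    # Squash any input into (0, 1), so the output reads as a probability.
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: the logical OR of two binary inputs.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # one weight per input
b = 0.0                                         # bias term
lr = 0.5                                        # learning rate

# The 'feedback loop': compare each prediction with the known answer
# and adjust the weights a little in the direction that reduces the error.
for _ in range(5000):
    for (x1, x2), target in DATA:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)  # prediction, a probability
        error = target - p
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

# After training, round each probability to a hard 0/1 decision.
predictions = {inputs: round(sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + b))
               for inputs, _ in DATA}
print(predictions)
```

After enough passes over the data, the neuron's rounded outputs match the OR truth table, even though the rule was never programmed explicitly.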

Machine Learning applications can currently analyze a piece of text and infer the sentiment of the person who wrote it. They can listen to a piece of music, decide whether it is likely to make people happy or sad, and find other pieces of music that match the mood. Natural Language Processing (NLP), which has gained pre-eminence in the last couple of years or so, is heavily dependent on Machine Learning (ML). NLP applications attempt to understand natural human communication, whether written or spoken, and to communicate back in similarly natural language. ML is used here to capture the vast nuances of human language and to learn to respond in a way that the given target audience can comprehend.
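To make the sentiment idea concrete, here is a deliberately simplified toy in Python that scores text against a small hand-written word list. The lexicon and thresholds are invented for illustration; real ML-based sentiment systems learn such associations from large labelled datasets rather than relying on a fixed vocabulary.

```python
# Hypothetical toy lexicons -- a real system would learn these from data.
POSITIVE = {"good", "great", "happy", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "sad", "awful", "hate"}

def sentiment(text):
    # Count positive and negative words; the sign of the balance
    # decides the overall label.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("What a great and happy day"))  # -> positive
print(sentiment("The service was terrible"))    # -> negative
```

The gap between this toy and production NLP is exactly where machine learning comes in: instead of a fixed word list, the system learns which phrasings, in context, signal which sentiment.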

'Deep Learning', another subset of AI, is finding increasing use these days in a number of areas. Essentially, 'Deep Learning' starts off with the neural networks we have mentioned before and then makes them huge, multiplying the layers and the neurons manifold while running massive amounts of data through them to 'train' them. The 'deep' in 'deep learning' describes all the layers and their interconnections in such a neural network. Today, image recognition by machines trained through 'deep learning' is, in several instances, even better than that of humans. Google's AlphaGo learned the game and prepared for its Go matches by tuning its neural network through 'deep learning' methods, which included playing against itself over and over again.
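The 'depth' being described is simply the stacking of layers, each feeding its outputs to the next. The Python sketch below shows an untrained forward pass through three stacked layers; the layer sizes and random weights are arbitrary choices for illustration, and no training is performed.

```python
import math
import random

def layer_forward(inputs, weights, biases):
    # One layer: each neuron takes a weighted sum of ALL the inputs,
    # then squashes the result through a sigmoid activation.
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def random_layer(n_in, n_out):
    # Illustrative random initialisation; training would adjust these.
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

random.seed(42)

# A 'deep' network is just many such layers stacked end to end:
# 4 inputs -> 8 hidden neurons -> 8 hidden neurons -> 2 outputs.
layers = [random_layer(4, 8), random_layer(8, 8), random_layer(8, 2)]

activations = [0.5, 0.1, 0.9, 0.3]  # an arbitrary example input
for weights, biases in layers:
    activations = layer_forward(activations, weights, biases)

print(activations)  # two values, one per output neuron, each in (0, 1)
```

Modern deep networks differ from this sketch mainly in scale, in the variety of layer types, and in the training procedures that tune the millions of weights; the stacked-layer principle is the same.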

Some of the key applications of 'Deep Learning' today are in the following areas:

Autonomous vehicles: Using a variety of sensors and onboard analytics, together with massive existing datasets, 'deep learning' systems are continuously learning to react appropriately to a variety of obstacles and road conditions in real time.

Recolouring black and white images: By teaching computers to recognize objects and what they should look like to humans, the right colours can be restored to various images and videos.

Predicting the outcome of legal proceedings: When fed massive amounts of data about a case, including historically similar cases, such systems are able to predict the court's decision with fair accuracy.

Precision medicine: Using 'Deep Learning' methodology, medicines genetically tailored to an individual's genome are being developed.

-- Raja Mitra