The Various Approaches for Development of AI and Natural Language Processing



Artificial Intelligence (AI) may be defined broadly along two dimensions, giving four approaches in all. One dimension distinguishes definitions concerned with thought processes and reasoning from those concerned with behaviour. The other distinguishes how success is measured: by fidelity to human performance, or against an ideal performance measure called rationality. A system is 'rational' if it does the 'right thing', given what it knows.

Acting Humanly: The Turing Test Approach
The Turing test is designed to provide a satisfactory operational definition of intelligence. A computer is said to have passed this test if a human interrogator, after posing some questions, cannot tell whether the answers are coming from a human or a machine. A computer capable of doing this must necessarily have the following capabilities (a toy sketch combining them follows the list):
  • Natural Language Processing (NLP): to communicate successfully in a human language such as English. 
  • Knowledge Representation: to be able to store what it knows or gets to hear.
  • Automated Reasoning: To use the stored information to answer questions put to it and also to draw new conclusions.
  • Machine Learning: To adapt to new circumstances and to be able to detect patterns and extrapolate them. 
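
To make the four capabilities concrete, here is a minimal sketch, in Python, of how they might fit together in one toy agent. The class and method names are purely illustrative assumptions, not any real system's API, and each capability is reduced to its most trivial form:

class TuringCandidate:
    def __init__(self):
        # knowledge representation: facts the agent stores
        self.knowledge = {}

    def parse(self, utterance):
        # natural language processing (crudely): normalise raw text
        return " ".join(utterance.lower().strip("?!. ").split())

    def tell(self, fact, value):
        # knowledge representation: store what the agent is told
        self.knowledge[fact] = value

    def ask(self, question):
        # automated reasoning (trivially): look up stored knowledge
        return self.knowledge.get(self.parse(question), "I don't know.")

    def learn(self, fact, value):
        # machine learning (trivially): adapt to new information
        self.tell(self.parse(fact), value)

agent = TuringCandidate()
agent.learn("Capital of France", "Paris")
print(agent.ask("capital of France?"))   # -> Paris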

Thinking Humanly: The Cognitive Modelling Approach 
There are three ways to get inside the actual workings of the human mind: (a) through introspection, (b) through psychological experiments, and (c) through brain imaging, observing the brain in action. 

The fascinating and rapidly evolving field of cognitive science brings together computer models from AI and experimental techniques from the domain of psychology to construct precise and testable theories of the human mind.

Thinking Rationally: The 'Laws of Thought' Approach
The Greek philosopher Aristotle was one of the first to attempt to codify 'right thinking'. His syllogisms provided patterns for argument structures that always yield correct conclusions when given correct premises. Logicians in the 19th century developed a precise notation for statements about all kinds of objects in the world and the relations among them. By 1965, programs existed that could, in principle, solve any solvable problem described in this logical notation; if no solution exists, however, such a program may loop forever. The logicist tradition within AI hopes to build on such programs to create intelligent systems. 
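
As a hedged illustration of the logicist idea, here is a tiny forward-chaining prover in Python over ground propositional facts. The rule format is an assumption invented for this sketch; note that, unlike the general first-order case mentioned above, this propositional loop always terminates because the set of derivable facts is finite:

def forward_chain(facts, rules):
    # Repeatedly apply rules of the form (premises, conclusion)
    # until no new fact can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Aristotle's classic syllogism, written as one ground rule:
# 'Socrates is a man' entails 'Socrates is mortal'.
rules = [(("man(Socrates)",), "mortal(Socrates)")]
derived = forward_chain({"man(Socrates)"}, rules)
print("mortal(Socrates)" in derived)   # -> True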

Acting Rationally: The Rational Agent Approach
An agent is just something that acts. A computer agent is expected to operate autonomously, perceive its environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. 
The rational agent approach has two advantages over the other approaches. The first of these is that it is more general than the 'laws of thought' approach because correct inference is just one of several possible mechanisms for achieving rationality. Secondly, it is much more amenable to scientific development than are approaches based on human behaviour or human thought. 
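
A minimal sketch of what 'best expected outcome' means in practice: choose the action whose expected utility, summed over its uncertain outcomes, is highest. The actions, probabilities and utilities below are invented for illustration:

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    # pick the action whose expected utility is highest
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical numbers: with rain at 50%, carrying the umbrella wins.
actions = {
    "take umbrella":  [(0.5, 8), (0.5, 6)],   # (P(rain), utility), (P(dry), utility)
    "leave umbrella": [(0.5, 0), (0.5, 10)],
}
print(rational_choice(actions))   # -> take umbrella (EU 7.0 vs 5.0)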


Natural Language Processing (NLP)
Humans are differentiated from other species by the capacity to communicate in language. Well over a trillion pages of information can be found on the web, and almost all of it is in some natural language. Any agent interested in knowledge acquisition needs to understand, at least to a great extent, the messy and often ambiguous language that humans use. The problem can be examined from the point of view of specific information-seeking tasks such as text classification, information retrieval and information extraction. A common factor in addressing these tasks is the language model: a model that predicts the probability distribution of language expressions. 

Formal languages, such as Java or VB, have precisely defined language models. Natural languages, such as English or French, cannot be characterized as a definitive set of sentences. It is therefore more fruitful to define a natural language as a probability distribution over sentences rather than as a definitive set. 

Natural languages are also ambiguous. It is therefore often not possible to attribute a single meaning to a sentence; instead, one works out a probability distribution over several possible meanings. And because natural languages are very large and constantly changing, they are difficult to deal with. Hence any language model is, at best, an approximation. Typically, one starts with the simplest possible approximations and moves up from there. 
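
As an example of the simplest such approximation, here is a toy bigram model in Python that treats the language as a probability distribution over sentences. The three-sentence corpus is invented, and the model does no smoothing, so unseen word pairs get probability zero:

from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count word pairs (bigrams) and the words that begin them (unigrams),
# with <s> and </s> marking sentence boundaries.
bigrams = Counter()
unigrams = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens[:-1])
    bigrams.update(zip(tokens, tokens[1:]))

def sentence_probability(sentence):
    # P(sentence) ~ product of P(word | previous word), estimated
    # from counts; a real model would smooth unseen bigrams.
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        if unigrams[prev] == 0:
            return 0.0
        prob *= bigrams[(prev, word)] / unigrams[prev]
    return prob

print(sentence_probability("the cat sat on the rug"))   # small but nonzero
print(sentence_probability("mat the on sat cat"))       # 0.0: never seen

Even this crude model captures the idea above: a plausible English word order gets some probability mass, while a scrambled one gets essentially none.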


-- Raja Mitra

