The field of Artificial Intelligence (AI) has expanded steadily to create intelligent behavior and, in doing so, has improved the efficiency and performance of many systems. Automation has become necessary in applications where human intervention must be minimized.
Introduction
This article provides a brief history of Artificial Intelligence (AI), from the traditional techniques of Rule-Based AI, Finite State Machines (FSM), and Fuzzy Logic to more advanced techniques for learning and for evolving autonomous behaviors in stochastic and dynamic environments. The evolution of these techniques over the past decades is discussed. AI techniques have been implemented in many applications, including health care, e-commerce, robotics, computer games, and the film industry. AI has become a necessity in today’s world, particularly where human intervention needs to be minimized, as in space exploration, disaster recovery, and search and rescue operations. Research on AI is often costly, especially when it involves building hardware and using specialized tools and laboratories. Computer games have provided useful testing environments for AI research: they require low set-up costs and carry minimal risk. Games, from traditional board games to modern shooter and strategy titles, have contributed a great deal to the development of AI and to the generation of simulated human-like characters and behaviors. Much research has been conducted on AI using a variety of such test-beds.
History of Artificial Intelligence
The term Artificial Intelligence (AI) was first coined by John McCarthy for a conference held in 1956 at Dartmouth, where this field of study was established [1]. McCarthy [2] later stated that Computational Intelligence (CI) would be a more appropriate term. CI is the science and engineering of making intelligent machines perform tasks that humans are capable of [3]. Although some researchers consider CI a branch of AI, textbooks broadly treat CI as a synonym of AI [4, 5, 6].
Russell and Norvig [6] categorized definitions of AI as the study of creating systems that think or act like humans, or that think and act rationally, meaning that they do the ‘right thing’ given what they know of the environment. They preferred the notion of rational agents, which receive input from the environment through sensors and produce outputs through effectors, a view that has since been adopted by much of the AI community.
Alan Turing proposed a test to measure machine intelligence and distinguished two approaches to AI known as Top-Down and Bottom-Up [7, 8]. AI began with the Top-Down, or traditional symbolic, approach, where cognition is treated as a high-level concept independent of the lower-level details of the implementing mechanism [9]. The Bottom-Up approach aims for cognition to emerge from the operation of many simple elements, similar to how the human brain processes information; the Artificial Neural Network (ANN) is the core of this approach. The domain of AI has evolved over the past 60 years, and many techniques linked to the modeling of cognitive aspects of human behavior have emerged, including perceiving, reasoning, communicating, planning, and learning. Public perception of what AI can achieve has been distorted by science-fiction movies. Progress, however, has not always been smooth and incremental. For example, research in ANN almost ceased after Minsky and Papert [10] showed the limitations of Perceptrons in learning linearly inseparable problems. During the 1980s, researchers [11, 12, 13] realized that such problems could be solved with a new learning method for the Multi-Layer Perceptron (MLP) called backpropagation. These developments, along with many other significant contributions [14, 15, 16, 17, 18], aided the resurgence of ANN research and its pursuit of the field’s original goals.
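The XOR function is the classic linearly inseparable problem that a single Perceptron cannot learn but an MLP trained with backpropagation can. The following is a minimal sketch in Python with NumPy; the architecture (one four-unit sigmoid hidden layer), learning rate, and iteration count are illustrative assumptions, not from any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a single-layer Perceptron cannot learn this mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # forward pass through hidden and output layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backpropagation: squared-error gradients pushed back layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

predictions = (out > 0.5).astype(int).ravel()
print(predictions, losses[-1])
```

With this seed the loss drops steadily and the thresholded outputs typically recover the XOR pattern, illustrating why backpropagation revived ANN research after the Perceptron's limits were exposed.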
Good Old-Fashioned AI
AI originally started with basic techniques that fall into the category of the symbolic computational approach, also referred to as Weak AI or Good Old-Fashioned Artificial Intelligence (GOFAI) [19]. Some developers considered techniques that exhibited even a limited form of intelligence to represent AI. AI techniques proposed in the fields of computing, robotics, and gaming included those related to Search and Optimisation, Path Finding, Collision Avoidance, Chasing and Evading, Pattern Movement, Probability, Potential Function-Based Movement, Flocking, and Scripted AI. Most of these fall within the category of deterministic AI and are easy to understand, implement, and debug. The main pitfall of deterministic methods is that developers must anticipate all scenarios and explicitly code all behaviors; as a result, the behavior becomes predictable after several encounters.
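Chasing and evading, two of the deterministic techniques listed above, can be sketched in a few lines. The grid-based movement rules below are an illustrative example, not taken from any particular game, and they show exactly why such behavior becomes predictable.

```python
def chase_step(chaser, target):
    # Classic deterministic chasing: step one grid cell toward the
    # target on each axis, every tick. Cheap and easy to debug.
    cx, cy = chaser
    tx, ty = target
    if cx < tx: cx += 1
    elif cx > tx: cx -= 1
    if cy < ty: cy += 1
    elif cy > ty: cy -= 1
    return (cx, cy)

def evade_step(evader, threat):
    # Evading is the mirror image: step away from the threat on each axis.
    ex, ey = evader
    tx, ty = threat
    if ex < tx: ex -= 1
    elif ex > tx: ex += 1
    if ey < ty: ey -= 1
    elif ey > ty: ey += 1
    return (ex, ey)

pos = (0, 0)
for _ in range(5):
    pos = chase_step(pos, (3, 5))
print(pos)  # (3, 5): the chaser closes the diagonal first, then the column
```

Because every move is a fixed function of the current positions, a player who watches the chaser for a few encounters can predict and exploit its path.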
Modern AI
A number of techniques were integrated or hybridized as the field evolved, and more modern techniques were progressively introduced during a transition period. These include Rule-Based AI, Finite State Machines (FSM), Fuzzy Logic, and Fuzzy State Machines (FuSM) [6]. Rule-Based AI comprises If-Then rules that map the actions of the system to various conditions and criteria; FSM and Fuzzy Logic fall into this general category. The idea of an FSM is to specify a group of actions and/or states for agents and to execute them and make transitions between them. Fuzzy Logic deals with fuzzy concepts that may not have discrete values and represents conditions by degrees of truth rather than a two-valued binary system [20, 21]. FuSM combines the concept of Fuzzy Logic with FSM to create more realistic and somewhat less predictable behavior. These techniques led to the emergence of Expert Systems: rule-based processing systems consisting of a knowledge base, working memory, and an inference engine that processes data with a defined reasoning logic [22, 23]. Many expert systems have shown great success, including the chess-playing program Deep Blue, which defeated the world champion in 1997 [24].
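An FSM and a fuzzy degree of truth can each be sketched in a few lines. The guard states, events, and the 10-unit "near" cut-off below are hypothetical examples invented for the sketch, not from the article.

```python
# Finite state machine for a hypothetical game guard: a table of
# (state, event) -> next-state transitions.
TRANSITIONS = {
    ("patrol", "player_seen"): "chase",
    ("chase", "player_close"): "attack",
    ("chase", "player_lost"): "patrol",
    ("attack", "player_fled"): "chase",
}

def step(state, event):
    # Events with no matching transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["player_seen", "player_close", "player_fled", "player_lost"]:
    state = step(state, event)
print(state)  # back to "patrol"

# Fuzzy Logic replaces the hard true/false of "the player is near" with
# a degree of truth in [0, 1]; the 10-unit cut-off is an assumed value.
def near(distance):
    return max(0.0, min(1.0, (10.0 - distance) / 10.0))

print(near(2.0), near(12.0))  # high degree of truth, then zero
```

A FuSM would blend the two ideas: instead of a single active state, each state is active to the degree given by membership functions like `near`, which softens the abrupt switches that make plain FSMs feel mechanical.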
Advanced AI
Advanced AI techniques have evolved to exhibit cognitive aspects of human behavior, including reasoning, learning, and evolving. Advanced AI includes non-deterministic techniques that enable agents to evolve, learn, and adapt [25]. Artificial Neural Networks (ANN), Bayesian Networks, Evolutionary Algorithms (EA), and Reinforcement Learning (RL) are the mainstream techniques in this category. Bayesian Networks enable reasoning under uncertainty. ANN mimic the function and information processing of biological neurons [6]. The training algorithms of ANN fall within three main categories: Supervised Learning, Unsupervised Learning, and Reinforcement Learning.
In Supervised Learning, the ANN is presented with a set of input data and corresponding desired target values, and training finds the mapping between inputs and their correct (desired) outputs. In Unsupervised Learning, no target outputs are available, and the ANN finds patterns in the data without any help or feedback from the environment.
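The unsupervised case can be illustrated with k-means clustering, which discovers groups in unlabeled data with no target outputs supplied. The data points and starting centroids below are made up for the sketch.

```python
# Unsupervised learning sketch: k-means with k = 2 clusters.
points = [(1.0, 1.0), (1.5, 2.0), (1.2, 0.8),
          (8.0, 8.0), (8.5, 7.5), (7.8, 8.2)]
centroids = [(0.0, 0.0), (10.0, 10.0)]  # assumed starting guesses

for _ in range(10):
    # assignment step: each point joins its nearest centroid
    clusters = [[], []]
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        clusters[d.index(min(d))].append(p)
    # update step: each centroid moves to the mean of its cluster
    centroids = [
        tuple(sum(coord) / len(cluster) for coord in zip(*cluster)) if cluster else c
        for cluster, c in zip(clusters, centroids)
    ]

print(centroids)
```

No labels are ever provided; the two cluster centers emerge purely from the structure of the data, which is exactly the contrast with the supervised setting described above.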
Reinforcement Learning allows the agent to learn by trial and error using feedback from the environment, which can take the form of a reward or a punishment [18]. Examples of this learning paradigm include Temporal Difference learning [26] and Q-Learning [27]. EA techniques belong to the category of Evolutionary Computation and have also been used for learning. They include Genetic Algorithms (GA) [28], Genetic Programming (GP) [29], Evolutionary Strategies (ES) [30, 31], and Neuro-Evolution (NE) [32].
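Tabular Q-Learning can be sketched on a toy problem. The five-cell corridor, reward scheme, and learning parameters below are illustrative assumptions for the sketch, not from the cited work.

```python
import random

random.seed(1)

# Q-Learning on a 5-cell corridor: start at cell 0, reward 1 for
# reaching cell 4. The agent learns purely from this feedback.
N, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.3
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for _ in range(200):  # 200 trial-and-error episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best-known action,
        # sometimes explore a random one
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)        # walls clamp movement
        r = 1.0 if s2 == GOAL else 0.0
        # temporal-difference update: nudge Q(s, a) toward the reward
        # plus the discounted value of the next state
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)  # the learned policy should move right in every cell
```

No one tells the agent the corridor's layout; the reward at the goal propagates backward through the temporal-difference updates until "move right" dominates everywhere.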
GA techniques offer opportunities for optimization and for evolving AI. NE is a machine-learning technique that uses EAs to train ANN. Examples of NE techniques include Neuro-Evolution of Augmenting Topologies (NEAT) [33], Feature Selective NEAT (FS-NEAT) [33], and real-time NEAT (rtNEAT) [34, 35]. Some of these techniques are discussed in further detail in this article. The concept of a Multi-Agent System (MAS) has emerged to tie together the isolated subfields of AI. A MAS consists of teams of Intelligent Agents (IA) that perceive the environment through their sensors and process this information using different AI techniques to reason and plan actions in order to achieve certain goals [36, 37]. IAs can be equipped with different capabilities, including learning and reasoning, and these teams can communicate and interact with each other to share knowledge and skills and solve problems as a team. MASs can be used in various applications to create intelligent behavior and improve a system's efficiency and performance. Further information on MASs is discussed in this article.
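The evolve-and-select loop at the heart of GA (and, with networks as genomes, of NE) can be sketched on the classic OneMax toy problem: evolve a bit-string toward all ones. The population size, mutation rate, and operators below are illustrative choices for the sketch.

```python
import random

random.seed(0)

# Genetic Algorithm on OneMax: fitness = number of 1-bits.
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # flip each bit independently with a small probability
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # one-point crossover: splice two parents at a random cut
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
initial_best = max(fitness(ind) for ind in pop)

for _ in range(GENS):
    # truncation selection: the fitter half survives unchanged (elitism)
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    # fill the rest of the population with mutated offspring
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(initial_best, fitness(best))
```

Because the best individuals survive each generation, fitness can only climb; NE applies the same loop but encodes network weights (and, in NEAT, topologies) in the genome instead of raw bits.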
References and Credits
Feature Photo by Markus Winkler on Unsplash
Originally published at https://brainsloop.com on November 24, 2020.