While the Act provides its own definitions, there is currently no universally agreed-upon definition of artificial intelligence. The term intelligence is generally understood as a measure of a machine’s ability to successfully achieve an intended goal. Like humans, machines exhibit varying levels of intelligence, depending on their design and training. However, there are different perspectives on how to define and categorize AI.
In 2009, a foundational textbook classified AI into four categories:
- Ones that think like humans;
- Ones that think rationally;
- Ones that act like humans; and
- Ones that act rationally.
Most of the progress seen in AI has been considered “narrow,” addressing specific problem domains such as playing games, driving cars, or recognizing faces in images. In recent years, AI applications have surpassed human abilities in some narrow tasks, and rapid progress is expected to continue, opening up new opportunities in critical areas such as health, education, energy, and the environment. This is in contrast to “general” AI, which would replicate intelligent behavior equal to or surpassing human abilities across the full range of cognitive tasks. Experts involved with the National Science and Technology Council (NSTC) Committee on Technology believe that it will take decades before society advances to artificial “general” intelligence.
According to Stanford University’s 100-year study of AI, by 2010, advances in three key areas of technology intersected to increase the promise of AI in the US economy:
- Big data: Large quantities of structured and unstructured data amassed from e-commerce, business, science, government, and social media on a daily basis;
- Increasingly powerful computers: Greater storage and parallel processing of big data; and
- Machine learning: Using increased access to big data as raw material, increasingly powerful computers can be taught to automatically improve their performance on tasks by observing relevant data via statistical modeling.
Key AI applications include the following:
- Machine learning is the basis for many of the recent advances in AI. Machine learning is a method of data analysis that attempts to find structure (or a pattern) within a data set without human intervention. Machine learning systems search through data to look for patterns and adjust program actions accordingly, a process referred to as training the system. To perform this process, an algorithm is given a training set (or teaching set) of data, from which it builds a model used to answer a question. For example, for a driverless car, a programmer could provide a teaching set of images tagged either “pedestrian” or “not pedestrian.” The programmer could then show the computer a series of new photos, which it could then categorize as pedestrians or non-pedestrians. Machine learning would then continue to independently add to the teaching set: every identified image, right or wrong, expands the teaching set, and the program effectively gets “smarter” and better at completing its task over time.
Machine learning algorithms are often categorized as supervised or unsupervised. In supervised learning, the system is presented with example inputs along with the desired outputs, and the system tries to derive a general rule that maps inputs to outputs. In unsupervised learning, no desired outputs are given and the system is left to find patterns independently (a minimal illustrative sketch of both approaches appears after this list).
- Deep learning is a subfield of machine learning. Unlike traditional machine learning algorithms that are linear, deep learning utilizes multiple units (or neurons) stacked in a hierarchy of increasing complexity and abstraction, inspired by the structure of the human brain. Deep learning systems consist of multiple layers, and each layer consists of multiple units. Each unit combines a set of input values to produce an output value, which in turn is passed to other units downstream (see the second sketch after this list). Deep learning enables the recognition of extremely complex, precise patterns in data.
- Advances in AI will bring the possibility of autonomy in a variety of systems. Autonomy is the ability of a system to operate and adapt to changing circumstances without human control. It also includes systems that can diagnose and repair faults in their own operation, such as identifying and fixing security vulnerabilities.
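To make the supervised/unsupervised distinction concrete, the following is a minimal sketch, assuming Python with the scikit-learn library; the feature values and labels are invented placeholders standing in for attributes extracted from tagged images, as in the pedestrian example above.

```python
# Minimal sketch (assumes scikit-learn). The data below is invented for
# illustration and stands in for features extracted from labeled images.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: example inputs paired with desired outputs
# (1 = "pedestrian", 0 = "not pedestrian").
X_train = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y_train = [1, 1, 0, 0]

classifier = LogisticRegression()
classifier.fit(X_train, y_train)           # derive a rule mapping inputs to outputs
print(classifier.predict([[0.85, 0.15]]))  # label a new, unseen example

# Unsupervised learning: no desired outputs are given; the system is left
# to find structure (here, two clusters) on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_train)
print(clusters)
```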
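For the layered structure described under deep learning, the sketch below (assuming Python with NumPy) shows units arranged in successive layers, each combining its input values and passing the result downstream; the weights are random placeholders rather than learned values, and the layer sizes are arbitrary.

```python
# Minimal sketch (assumes NumPy): a tiny feed-forward network whose layers of
# units each combine their input values and pass the result downstream.
# Weights are random placeholders; a real system would learn them from data.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each unit combines a set of input values into one output value.
    return np.tanh(inputs @ weights + biases)

x = rng.normal(size=4)                          # input features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # first hidden layer (8 units)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # second hidden layer
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

h1 = layer(x, w1, b1)       # lower layers capture simple patterns
h2 = layer(h1, w2, b2)      # deeper layers build more abstract combinations
output = layer(h2, w3, b3)  # final value, e.g., a score for one class
print(output)
```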
Experimental research in artificial intelligence includes several key areas aimed at mimicking human behaviors, including reasoning, knowledge representation, planning, natural language processing, perception, and generalized intelligence:
- Reasoning includes performing sophisticated mental tasks that people can do (e.g., play chess, solve math problems).
- Knowledge representation is information about real-world objects the AI can use to solve various problems. Knowledge in this context is usable information about a domain, and the representation is the form of the knowledge used by the AI.
- Planning and navigation includes processes related to how a robot moves from one place to another. This includes identifying safe and efficient paths, dealing with relevant objects (e.g., doors), and manipulating physical objects.
- Natural language processing includes interpreting speech from users and delivering audible speech to them.
- Perception research includes improving the capability of computer systems to use sensors to detect and perceive data in a manner that replicates humans’ use of senses to acquire and synthesize information from the world around them.
Because the performance of an AI system depends on the quality of its training data, data that is not fully representative of the target population can lead to biased AI.
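As a brief illustration of that point, the sketch below (assuming Python with scikit-learn, with entirely invented numbers) shows how a group that is barely represented in the training data can receive systematically different predictions than a well-represented group, even when the informative feature is identical.

```python
# Minimal sketch (assumes scikit-learn): training data in which one group is
# barely represented. The numbers are invented purely to illustrate how an
# unrepresentative sample can skew a model's behavior for that group.
from sklearn.linear_model import LogisticRegression

# Feature vector: [group_indicator, signal]; label: 1 = positive outcome.
X_train = [[0, 0.9], [0, 0.8], [0, 0.7], [0, 0.2], [0, 0.1], [1, 0.9]]
y_train = [1, 1, 1, 0, 0, 0]   # the single group-1 example happens to be negative

model = LogisticRegression().fit(X_train, y_train)

# The predicted probability of a positive outcome is lower for group 1 than
# for group 0 at the same signal value, because group 1 was nearly absent
# from (and misrepresented in) the training set.
print(model.predict_proba([[0, 0.9], [1, 0.9]])[:, 1])
```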
Ultimately, successes in the discrete AI research domains could be combined to achieve generalized intelligence, or a fully autonomous “thinking” robot with advanced abilities such as emotional intelligence, creativity, intuition, and morality.