What it does 

The White House report, Artificial Intelligence, Automation, and the Economy provides a review of the positive and negative effects of artificial intelligence (AI)-driven automation on the U.S. economy and describes three broad strategies designed to augment the benefits and reduce the costs. The report was developed by a team from the White House Council of Economic Advisers (CEA), Domestic Policy Council (DPC), National Economic Council (NEC), Office of Management and Budget (OMB), and Office of Science and Technology Policy (OSTP).

AI is expected to disrupt the U.S. labor market in a number of ways, posing a challenge for policymakers who will need to craft policy in response. This report describes the expected changes to the economy and the workforce and recommends strategies to address them. In making its recommendations, the report points to existing and proposed programs as examples of policy approaches.

The report is divided into two main sections. The first section describes five categories of projected economic effects of AI, including:

  • Improved overall productivity and economic growth;
  • Changes in the demand for skills in the labor market, including an increased need for advanced technical skills;
  • Uneven distribution of the effects across industries, income levels, education and skill levels, job types, and geographic locations;
  • A change in the landscape of the job market as new jobs are created and existing jobs disappear; and
  • Short- and long-term job loss for some workers.

The second section describes the three recommended strategies for policymakers to influence these effects on the labor market. These strategies are intended to educate and train new workers, support those who lose their jobs while keeping them in the labor force, and correct emerging inequalities. The strategies include:

Strategy 1: Invest in and Develop AI for its Many Benefits

The report recommends that government play a role in AI research and development (R&D) to augment the projected benefits of AI, and identifies specific areas for support.

Strategy 2: Educate and Train Americans for Jobs of the Future

The increasing application of AI is likely to require a realignment of education toward math, reading, computer science, and critical thinking across all age groups. These changes should both prepare young people entering the workforce and help experienced workers navigate an evolving job market. Focus areas for this strategy include:

  • Educate youth for success in the future job market, particularly in Science, Technology, Engineering and Math (STEM) fields with an emphasis on computer science. This includes increasing preschool enrollment, improving secondary education so more students graduate from high school prepared for college and/or a career, and broadening access to post-secondary education. Computer Science for All and America’s College Promise are examples of initiatives intended to prepare a broad array of students and adults for careers in technology.
  • Expand access to training and re-training that matches the scale of the increasing need for high-tech positions. This includes expanding availability of training and apprenticeship programs through the Workforce Innovation and Opportunity Act and the POWER Initiative.

Strategy 3: Aid Workers in the Transition and Empower Workers to Ensure Broadly Shared Growth

This strategy identifies ways to help job seekers weather job losses, pursue employment opportunities for which they are most qualified, and receive competitive wages. Focus areas within this strategy include:

  • Modernize and strengthen the social safety net by strengthening unemployment insurance and improving guidance to workers when navigating job transitions. Existing federal support programs that could be improved include unemployment insurance, Medicaid, Supplemental Nutrition Assistance Program (SNAP), and Temporary Assistance for Needy Families (TANF). Additional funding could also be provided to states for local support.
  • Increase wages, competition, and worker bargaining power to respond to depressed wages resulting from displacement of large numbers of workers in industries disrupted by AI. This could include raising the minimum wage, extending overtime protections, strengthening protections for organized labor, and protecting wages for low- and middle-skilled workers.
  • Identify strategies to address differential geographic impact by reducing geographic work barriers and pursuing “place-based” solutions. Reducing barriers to affordable housing, such as local zoning and land use regulations, recognizing consistent occupational licensing requirements among states, expanding broadband access, and improving public transportation can help give more people access to new jobs. Choice Neighborhoods, Promise Zones and TechHire are initiatives that support this focus area.
  • Modernize tax policy, such as implementing progressive taxation and strengthening work-related tax credits such as the Earned Income Tax Credit.
  • Prepare for all contingencies by investigating more aggressive policy responses, such as exploring alternative job creation strategies, new training supports, a more robust social safety net, or additional strategies to combat inequality.
Relevant Science 

There is currently no universally agreed-upon definition of AI. As quoted in Stanford University’s 100-year study of AI, Nils J. Nilsson defines AI research as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”

Here, intelligence is understood as a measure of a machine’s ability to successfully achieve an intended goal. Like humans, machines exhibit varying levels of intelligence, subject to the machine’s design and training. However, there are different perspectives on how to define and categorize AI. In 2009, a foundational textbook, Russell and Norvig’s Artificial Intelligence: A Modern Approach, classified AI into four categories:

  • Ones that think like humans;
  • Ones that think rationally;
  • Ones that act like humans; and
  • Ones that act rationally.

Most of the progress seen in AI has been considered "narrow," having addressed specific problem domains like playing games, driving cars, or recognizing faces in images. In recent years, AI applications have surpassed human abilities in some narrow tasks, and rapid progress is expected to continue, opening up new opportunities in critical areas such as health, education, energy, and the environment. This is in contrast to “general” AI, which would replicate intelligent behavior equal to or surpassing human abilities across the full range of cognitive tasks. Experts involved with the National Science and Technology Council (NSTC) Committee on Technology believe that it will take decades before society advances to artificial "general" intelligence.

According to Stanford University’s 100-year study of AI, by 2010 advances in three key areas of technology had intersected to increase the promise of AI in the U.S. economy:

  • Big data: Large quantities of structured and unstructured data amassed from e-commerce, business, science, government, and social media on a daily basis;
  • Increasingly powerful computers: Greater storage and parallel processing of big data; and
  • Machine learning: Using increased access to big data as raw material, increasingly powerful computers can be taught to automatically improve their performance on tasks by observing relevant data via statistical modeling.

Key AI applications include the following:

  • Machine learning is the basis for many of the recent advances in AI. Machine learning is a method of data analysis that attempts to find structure (or a pattern) within a data set without human intervention. Machine learning systems search through data to look for patterns and adjust program actions accordingly, a process known as training the system. To perform this process, an algorithm (called a model) is given a training set (or teaching set) of data, which it uses to answer a question. For example, a programmer building a driverless car could provide a teaching set of images tagged either “pedestrian” or “not pedestrian,” then show the computer a series of new photos, which it would categorize as pedestrians or non-pedestrians. Every identified image, right or wrong, expands the teaching set, and the program effectively gets “smarter” and better at completing its task over time.

Machine learning algorithms are often categorized as supervised or unsupervised. In supervised learning, the system is presented with example inputs along with desired outputs, and the system tries to derive a general rule that maps input to outputs. In unsupervised learning, no desired outputs are given and the system is left to find patterns independently.
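
The supervised pattern described above can be illustrated with a minimal sketch. The data, labels, and nearest-centroid rule here are toy assumptions for illustration, not drawn from the report:

```python
# Minimal supervised-learning sketch: learn a rule from labeled examples,
# then apply it to unseen inputs. The feature vectors and labels below are
# hypothetical stand-ins for the "pedestrian" image example above.

def train(examples):
    """Compute the mean (centroid) of the inputs seen for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the input."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training set: 2-D feature vectors tagged with desired outputs.
training_set = [
    ([1.0, 1.0], "pedestrian"),
    ([1.2, 0.8], "pedestrian"),
    ([5.0, 5.0], "not pedestrian"),
    ([4.8, 5.2], "not pedestrian"),
]
model = train(training_set)
print(predict(model, [1.1, 0.9]))  # a new, unlabeled input
```

Here the “general rule” the system derives is simply the average of the examples for each label; real systems use far richer models, but the train-then-predict structure is the same.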

  • Deep learning is a subfield of machine learning. Unlike traditional machine learning algorithms that are linear, deep learning stacks multiple units (or neurons) in a hierarchy of increasing complexity and abstraction, inspired by the structure of the human brain. A deep learning system consists of multiple layers, each containing multiple units; each unit combines a set of input values to produce an output value, which is in turn passed to units downstream. Deep learning enables the recognition of extremely complex, precise patterns in data.
  • Advances in AI will bring the possibility of autonomy in a variety of systems. Autonomy is the ability of a system to operate and adapt to changing circumstances without human control. It also includes systems that can diagnose and repair faults in their own operation, such as identifying and fixing security vulnerabilities.
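
The layered architecture described above can be sketched in a few lines. The weights, biases, and network shape here are illustrative assumptions, not from the report; each unit combines its inputs into one output value that is passed to the next layer downstream:

```python
# Minimal forward pass of a small layered network (illustrative only).
import math

def unit(inputs, weights, bias):
    """One unit: a weighted sum of inputs squashed by a nonlinearity."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A layer is several units applied to the same inputs."""
    return [unit(inputs, w, b) for w, b in zip(weight_rows, biases)]

def forward(x, layers):
    """Feed the input through each layer in turn."""
    for weight_rows, biases in layers:
        x = layer(x, weight_rows, biases)
    return x

# Two stacked layers: 2 inputs -> 3 hidden units -> 1 output unit.
network = [
    ([[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]], [0.0, 0.1, -0.1]),
    ([[1.0, -1.0, 0.5]], [0.2]),
]
output = forward([1.0, 0.0], network)
print(output)
```

Training such a network means adjusting the weights and biases from data; only the forward pass, the flow of values through the layers, is shown here.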

Important areas of AI research:

  • AI researcher John McCarthy of Stanford University describes AI research and development as comprising both theory and experimentation. AI theory includes contemplating the ways in which one defines the field of research itself, as well as how to integrate AI with human notions of rationality, morality, and ethics. AI experimentation involves attempting to mimic human and animal physiology and psychology in machines, as well as problem solving for actions outside the scope of biological organisms.
  • Experimental research in artificial intelligence includes several key areas that mimic human behaviors, including reasoning, knowledge representation, planning, natural language processing, perception, and generalized intelligence:
    • Reasoning includes performing sophisticated mental tasks that people can do (e.g., playing chess, solving math problems).
    • Knowledge representation is information about real-world objects the AI can use to solve various problems. Knowledge in this context is usable information about a domain, and the representation is the form of the knowledge used by the AI.
    • Planning and navigation includes processes related to how a robot moves from one place to another. This includes identifying safe and efficient paths, dealing with relevant objects (e.g., doors), and manipulating physical objects.
    • Natural language processing includes interpreting and delivering audible speech to and from users.
    • Perception research includes improving the capability of computer systems to use sensors to detect and perceive data in a manner that replicates humans’ use of senses to acquire and synthesize information from the world around them.

Ultimately, success in the discrete AI research domains could be combined to achieve generalized intelligence, or a fully autonomous “thinking” robot with advanced abilities such as emotional intelligence, creativity, intuition, and morality.

Relevant Experts 

Vincent Conitzer, Ph.D. is the Kimberly J. Jenkins University Professor of New Technologies, Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University.

“Artificial intelligence researchers have made rapid progress in recent years. The resulting capabilities allow us to make the world a better place, but they have also led to a broad variety of concerns.  How should autonomous vehicles be designed and regulated? Will AI cause massive technological unemployment? Will weapons systems become increasingly autonomous, and should autonomous weapons be banned? Is there perhaps even a chance that AI will end up broadly superseding human capabilities, making us obsolete at best and extinct at worst?”

Relevant publications:

Conitzer, Vincent. 2016. “Today’s Artificial Intelligence Does Not Justify Basic Income.” MIT Technology Review, October 31. Accessed March 20, 2017.

Frank Levy, Ph.D. is the Daniel Rose Professor (Emeritus) at MIT, a Senior Research Associate in the Department of Health Care Policy of the Harvard Medical School, and a Faculty Associate at the Duke Robotics Center. Since the late 1990s, Levy has studied the impact of computerized work and offshoring on U.S. occupations, skill demands, and income. Some of his recent work includes “Dancing with Robots” (Third Way Foundation, 2013), co-authored with Richard J. Murnane, and “Can Robots Be Lawyers?” (forthcoming, The Georgetown Journal of Legal Ethics), co-authored with Dana Remus.

“[…] scholarly attention should broaden from a narrow focus on computers’ employment effects to a more comprehensive look into the ways in which computers are changing, rather than replacing, the work […]  In the short-run, we can expect artificial intelligence and automation to impact the same working class jobs that have been affected by global trade.  Therefore, computerization has similar political consequences leading with the public seeking short-run solutions.  Long-run solutions, however, will come from investments to innovate education for early childhood and beyond.”

Relevant publications:

Remus, Dana, and Frank Levy. 2015. Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law. Social Science Research Network, December 30. http://ssrn.com/abstract=2701092


This report was issued following two other reports created by the NSTC’s Committee on Technology. The first was Preparing for the Future of Artificial Intelligence (SciPol brief available), which surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy. The second, National Artificial Intelligence Research and Development Strategic Plan (SciPol brief available), prioritizes key federal R&D investments to maximize the benefits of AI technology.

This report and the two predecessors were produced following the White House's June 22nd, 2016, Public Request for Information (RFI) regarding AI as well as a series of workshops addressing the applications of AI. The workshops included:

Social and Economic Impacts of AI (July 7, 2016).

Endorsements & Opposition 


Computing Community Consortium (CCC) AI Task Force Co-Chair Gregory D. Hager, a Professor of Computer Science at The Johns Hopkins University, and CCC Director Ann Drobnis wrote in Computing Research News at the time of the Report’s release:  

“[The Report] turns an important corner by articulating how actions by government could help to shape the future of the economy and the workforce to maximize the benefits of AI for all.”


Clyde Wayne Crews, Policy Director at the Competitive Enterprise Institute, has stated on Forbes:

 “The new report is a social-policy document written in a way that deflects from government's role in economic stagnation (95 million out of the labor force). By preemptively blaming AI and automation for tomorrow’s labor disruptions, Washington can create a basis for massive expansions of social-spending programs.”


The Report was released on December 20th, 2016 and is currently hosted by the Obama White House Archives.

Primary Author 
Scott “Esko” Brummel, MA Candidate
Michael Clamann, PhD, CHFP
Recommended Citation 

Duke SciPol, “Artificial Intelligence, Automation, and the Economy” available at (07/06/2017).

Creative Commons License This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Please distribute widely but give credit to Duke SciPol and the primary author(s) listed above, linking back to this page if possible.