What AI Actually Does
From hype to mechanism: understanding how AI systems interact with work and cognition
Learning Objectives
By the end of this module you will be able to:
- Distinguish between task automation and task augmentation as distinct modes of AI deployment.
- Explain why AI affects individual tasks within occupations rather than eliminating whole jobs at once.
- Identify the kinds of cognitive work that fall within and outside current AI capability boundaries.
- Describe the routine/non-routine task partition and why it matters for understanding AI's effects.
- Recognize that AI adoption varies significantly across occupation types, income levels, and sectors.
Core Concepts
AI operates on tasks, not jobs
When people say "AI will take jobs," they are compressing a more precise mechanism. Research consistently shows that automation occurs at the task level, not the job level. An occupation is a bundle of heterogeneous tasks — some of which may be susceptible to automation, others resistant, and others better suited to augmentation. Treating a job as a single unit produces imprecise predictions; treating it as a collection of distinct tasks produces much more accurate ones.
A radiologist's job, for instance, includes image pattern recognition (high automation potential), clinical communication with patients (low automation potential), and treatment planning in collaboration with other clinicians (mixed). AI does not eliminate the radiologist; it reshapes which tasks take more or less of their time.
Throughout this course, when you encounter claims about AI "eliminating" or "creating" jobs, the more precise and accurate question is always: which tasks within those jobs, and in which direction?
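The task-bundle view can be expressed as a toy model. The task list, time shares, and automation-potential scores below are illustrative assumptions for the radiologist example above, not empirical estimates:

```python
# Toy model: an occupation as a bundle of tasks, each with a share of
# working time and an assumed, illustrative automation-potential score.
radiologist = [
    # (task, share of working time, automation potential in [0, 1])
    ("image pattern recognition", 0.40, 0.8),
    ("clinical communication with patients", 0.25, 0.1),
    ("collaborative treatment planning", 0.35, 0.4),
]

def time_weighted_exposure(tasks):
    """Sum each task's time share weighted by its automation potential."""
    return sum(share * potential for _, share, potential in tasks)

print(f"Exposure: {time_weighted_exposure(radiologist):.2f}")
```

Note that even with one task scored at 0.8, the occupation-level exposure is well below that, because the job is a bundle: the low-potential tasks pull the weighted average down.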
The routine/non-routine partition
The foundational framework for analyzing automation comes from labor economics: the task-based framework, which partitions occupational work along two axes:
- Routine vs. non-routine: Can the task be accomplished through a set of explicit, codifiable rules?
- Cognitive vs. manual: Does the task involve information processing and judgment, or physical action?
This creates four quadrants:

| | Routine | Non-routine |
|---|---|---|
| Cognitive | Bookkeeping, data entry, clerical processing | Analysis, design, diagnosis, persuasion |
| Manual | Assembly-line work, sorting, repetitive machine operation | Driving in traffic, caregiving, skilled repair |
The practical prediction from this framework: computer capital substitutes for workers in routine cognitive and manual tasks (top-left and bottom-left quadrants), while it complements workers performing non-routine problem-solving and complex communication tasks (right side).
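The framework's prediction rule reduces to a single binary split on routineness. A minimal sketch, where the outcome labels follow the text above and the example task assignments are assumptions:

```python
def predicted_effect(routine: bool) -> str:
    """Task-based framework's prediction: computer capital substitutes for
    routine tasks (cognitive or manual) and complements non-routine ones."""
    return "substitutes for labor" if routine else "complements labor"

# Illustrative tasks from each quadrant (quadrant assignments are assumptions)
examples = [
    ("bookkeeping", True),           # routine cognitive
    ("assembly-line work", True),    # routine manual
    ("treatment planning", False),   # non-routine cognitive
    ("caregiving", False),           # non-routine manual
]
for task, routine in examples:
    print(f"{task}: {predicted_effect(routine)}")
```

The cognitive/manual axis matters for describing the work, but under the classical framework it does not change the prediction; only routineness does. That is exactly the assumption generative AI now strains, as discussed later in this module.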
The historical shift: from skill-bias to task-content
It is worth understanding how this framework emerged, because it explains a real shift in automation's logic. During the 1980s and 1990s, computerization appeared to favor higher-educated workers — more college graduates were employed, and their wages rose relative to those of workers without degrees. This was called skill-biased technological change (SBTC).
From the mid-1990s onward, however, research documented a different pattern: computers were substituting for routine tasks — regardless of whether those tasks were performed by educated or uneducated workers. A data entry clerk with a college degree and a factory line worker without one both saw their routine tasks automated. The organizing principle had shifted from the worker's education level to the task's routineness. This is routine-biased technological change (RBTC).
This distinction is not just historical trivia — it is the foundation for understanding why the current AI wave feels different, which we will come to shortly.
Automation vs. augmentation: not the same thing
The same technology can interact with a task in two fundamentally different ways:
- Automation: Capital (the AI system) substitutes for human labor in executing the task. The human is removed from the loop.
- Augmentation: Capital and human labor work together. The AI amplifies what the human can do without replacing them.
Research across 1,500 workers in 104 occupations finds that, on nearly half (47.5%) of tasks, workers prefer a higher level of human agency than experts deemed technologically necessary. The dominant worker-preferred mode in 47 of the 104 occupations was equal partnership — human-agent collaboration rather than full automation.
Automation is not solely a technological inevitability. It reflects managerial and organizational decisions about how to deploy technology.
This is not a minor point. It means that when you observe a given AI deployment in practice — a customer service chatbot, a coding assistant, an AI-assisted diagnostic tool — the question of whether it is automating or augmenting is partly a design and organizational choice, not just a technological one.
What tasks resist automation
Some task categories remain resistant to AI automation despite significant technological advances. Research identifies four main properties that confer resistance:
- Physical and unstructured contexts — tasks requiring adaptation to novel physical environments.
- Novel situational judgment — decisions that cannot be reduced to patterns in prior data.
- Creative and generative work involving genuine originality — though this boundary is actively shifting.
- Emotional and social intelligence — empathy, trust-building, nuanced interpersonal navigation.
Roles dominated by these task types — sociologists, management analysts, roles with high interpersonal components — tend to experience AI as an augmenting force rather than a substituting one.
The LLM exception: when the frontier moves
Here is where the established framework gets complicated. Large language models and generative AI represent a departure from prior automation waves in a specific way: they are targeting non-routine cognitive tasks — the top-right quadrant that was previously considered protected.
Earlier automation displaced manual and routine cognitive workers. Generative AI is demonstrating capability to perform or augment complex reasoning, creative generation, and judgment-requiring tasks in fields like law, journalism, and software development — industries where workers apply specialized training to non-routine problem solving.
This does not mean all non-routine cognitive work is now automatable. It means the boundary has moved, and the task-based framework's core assumption — that computational systems complement rather than substitute for abstract analytical work — no longer holds universally.
The routine/non-routine partition remains a useful lens, but generative AI has pushed the boundary of what counts as "routine" into territory that was previously off-limits. The framework needs updating, not discarding.
AI adoption is not uniform
One final conceptual anchor: AI adoption does not hit all occupations, firms, or geographies equally. Across 38 OECD countries, a one standard deviation increase in AI adoption correlates with a 2.3% reduction in employment in routine cognitive occupations — but a 1.8% increase in employment requiring complex problem solving and interpersonal skills.
Wage effects are asymmetric too: workers in the top income quintile experience wage gains, while middle quintile workers face modest declines. This heterogeneity matters because aggregate statistics ("AI will create X million jobs and destroy Y million") can obscure the distributional effects that matter most for specific people and sectors.
Researchers use occupational task databases — particularly the U.S. Department of Labor's O*NET — to classify and measure exposure to automation at the task level. This granular analysis predicts occupational AI exposure with greater precision than job-level or sector-level approaches.
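A minimal sketch of this kind of task-level exposure scoring, using hypothetical task records with importance weights and assumed AI-exposure ratings (O*NET's actual descriptors and scales differ):

```python
# Hypothetical O*NET-style records: each occupation lists tasks with an
# importance weight and an assumed AI-exposure rating in [0, 1].
occupations = {
    "data entry clerk": [
        # (task, importance weight, exposure rating)
        ("keying records into databases", 0.7, 0.9),
        ("verifying source documents", 0.3, 0.6),
    ],
    "management analyst": [
        ("interviewing stakeholders", 0.4, 0.2),
        ("drafting reports", 0.3, 0.7),
        ("recommending process changes", 0.3, 0.3),
    ],
}

def occupation_exposure(tasks):
    """Importance-weighted mean of task-level exposure ratings."""
    total_weight = sum(weight for _, weight, _ in tasks)
    return sum(weight * rating for _, weight, rating in tasks) / total_weight

for name, tasks in occupations.items():
    print(f"{name}: {occupation_exposure(tasks):.2f}")
```

Aggregating from tasks upward is what gives this approach its precision: two occupations in the same sector, or even the same job family, can receive very different exposure scores because their task mixes differ.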
What AI is doing to the occupational map
AI adoption is also creating new roles that did not previously exist. Employment in AI-related roles — software developers, data scientists, database architects — is growing significantly faster than average (17.9% projected growth for software developers between 2023 and 2033, versus 4.0% average). New occupational categories are emerging: AI trainers, prompt engineers, AI ethicists, explainability specialists.
The net balance of displacement and creation is a genuinely contested empirical question — and one that varies by timeframe, region, and which workers you are asking about. Later modules will examine the evidence on both sides.
Analogy Bridge
If you want a mental model for the automation/augmentation distinction, consider power tools.
A power saw can replace a handsaw entirely for cutting lumber (automation). But a surgeon's powered drill does not replace the surgeon — it amplifies precision in tasks the surgeon still controls (augmentation). The same class of technology, two different relationships with human labor.
Now extend the analogy: imagine a power tool that can handle not just physical cutting, but also planning the cut, adapting the design when the wood grain changes unexpectedly, and explaining its reasoning to the client. That is roughly what generative AI is doing to cognitive work — it is a power tool for thought that can increasingly handle parts of the task that previously required a skilled human throughout.
The analogy breaks down at the edges — AI is more adaptive and generative than any physical tool. But as a starting intuition for the automation/augmentation split, it is a reliable frame.
Compare & Contrast
Automation vs. augmentation at the task level
| Dimension | Automation | Augmentation |
|---|---|---|
| Human role | Removed or minimized | Remains central; AI extends capacity |
| Locus of control | AI system | Human, with AI assistance |
| Worker preference (research finding) | Often lower than expert assumptions | Preferred mode in most occupations studied |
| Risk profile | Displacement if task constitutes bulk of role | Generally productivity-enhancing |
| Example | AI chatbot handles customer query end-to-end | AI draft + human editor for written content |
Routine-biased vs. generative AI automation
| Dimension | Routine-biased (RBTC era) | Generative AI era |
|---|---|---|
| Target tasks | Routine cognitive and manual | Non-routine cognitive, increasingly |
| Affected workers | Clerical, administrative, factory | Knowledge workers, professionals |
| Framework status | Well-validated, stable | Framework under revision |
| Complementarity assumption | Holds for non-routine cognitive | No longer universal |
| Example | Spreadsheet software replacing bookkeepers | LLM drafting legal briefs |
Key Takeaways
- AI affects tasks, not jobs as wholes. A single occupation contains automatable and non-automatable tasks. Analyzing at the job level obscures the actual mechanism.
- The routine/non-routine partition is the foundational lens. Routine tasks (cognitive and manual) are most susceptible to automation; non-routine tasks — especially physical and interpersonal ones — show greater resistance.
- Automation and augmentation are distinct outcomes, and the same technology can produce either. Organizational choices, not just technological capability, determine which mode predominates.
- Generative AI has moved the boundary. LLMs are demonstrating capability in non-routine cognitive tasks previously thought protected, which is a qualitative shift from earlier automation waves.
- Adoption and impact are heterogeneous. AI's effects vary significantly by occupation type, income level, firm size, and geography. Aggregate statistics mask the distributional patterns that matter most.
Further Exploration
Foundational Research
- The task approach to labor markets: an overview — Canonical overview of the task-based framework from MIT
- Skills, Tasks and Technologies: Implications for Employment and Earnings — Foundational paper on the shift from skill-biased to routine-biased technological change
Generative AI & Work
- Generative AI at Work — Empirical study of generative AI's effects on knowledge workers
- Future of Work with AI Agents: Auditing Automation and Augmentation Potential — Large-scale audit across 104 occupations
Employment & Impact Analysis
- Agentic AI and Occupational Displacement: Multi-Regional Task Exposure Analysis — Task exposure analysis using O*NET data
- Incorporating AI impacts in BLS employment projections — U.S. Bureau of Labor Statistics analysis