AI cognitive offloading: Productivity at the cost of expertise

AI cognitive offloading creates an augmentation trap, according to an MIT Sloan dynamic model plus longitudinal evidence: AI raises productivity short-term, BUT sustained use erodes the very skill on which the gains depend. A 1-year study of cancer specialists (Ehsan et al. 2026): initial productivity gains came with a gradual dulling of expert judgment – “intuition rust”. ChatGPT learners retain significantly less material at a 45-day follow-up (Barcaui 2025). Programming experiments: participants delegating tasks learned the least (Shen & Tamkin 2026). For you as a project manager this means: even fully informed adoption is rational when front-loaded gains outweigh long-run costs, yet it yields a steady-state loss (the worker ends up less productive than before). Managers’ short-termism (δ_firm > δ_worker) turns the loss into a TRAP: the worker is worse off than if AI had never been adopted. The model identifies five regimes (non-adoption, augmentation worse/better off, automation worse/better off) separating beneficial from harmful deployment. Cognitive offloading displaces the practice through which expertise develops → permanent skill divergence.

Evidence base: Skill erosion NOT theoretical – empirically validated

Longitudinal studies show gradual decline:

Cancer specialists (1-year AI decision support):

  • Initial productivity gains confirmed
  • Over time: gradual dulling of expert judgment
  • Term coined: “intuition rust”
  • Experts begin missing errors they previously caught easily
  • Source: Ehsan et al. 2026

ChatGPT learning study:

  • Students using ChatGPT for learning
  • 45-day follow-up: Significantly LESS material retained
  • Compared to no-AI control group
  • Cognitive offloading = degraded long-term retention
  • Source: Barcaui 2025

Programming tasks + neuroimaging:

  • Degraded performance after sustained AI use
  • Lab experiments, programming tasks, brain imaging ALL converge
  • Source: Lee et al. 2025, Patra et al. 2025

Coding delegation experiment (Shen & Tamkin 2026):

  • Participants delegating tasks: learned LEAST
  • Participants staying cognitively engaged: fared BETTER (but still below the no-AI group)
  • Critical insight: Offloading drives skill loss EVEN when goal is learning
  • Production settings (where the incentive to preserve skill is weaker) → unlikely to fare better

Experienced developers vs. novices (Sarkar 2026):

  • Experienced: produce more aligned outputs, accept suggestions at higher rates (agents)
  • Gradient REVERSES for autocompletion: the less experienced accept MORE
  • Seasoned expertise = the ability to distinguish good from merely plausible answers
  • But that expertise is built through continuous practice in coding and debugging
  • Deadline pressure → rational to rely on passable AI → the expert gradually stops exercising those skills

For project managers: This is NOT speculation. Multiple independent studies across domains (medicine, education, programming, neuroimaging) converge on the same pattern. Cognitive offloading erodes expertise gradually but persistently.

Two productivity channels: α (skill-neutral) + β (scales with expertise)

Model decomposes AI productivity into:

Channel 1: α (skill-neutral component) – raw AI output, independent of who’s using it

  • Example: Translation tool – novice + veteran benefit equally
  • Template-based report drafting – a senior partner extracts only marginally more value than a first-year

Channel 2: β (knowledge-complementary component) – scales with the worker’s judgment

  • Example: Client strategy work – model alone provides little, veteran consultant extracts significant insight
  • Coding: An expert evaluating AI-generated code can spot mistakes, anticipate technical debt, and reject poor suggestions
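The decomposition can be sketched as a tiny production function. The additive form below – output = (1−u)·S + u·(α + β·S) for usage u in [0, 1] – is an illustrative assumption, not the paper's exact specification:

```python
def output(S, u, alpha, beta):
    """Per-period output: a usage-weighted mix of the worker's own skill S
    and an AI channel alpha + beta*S (alpha skill-neutral, beta scaling
    with expertise). Hypothetical form, for illustration only."""
    return (1 - u) * S + u * (alpha + beta * S)

def ai_gain(S, u, alpha, beta):
    """Gain over working unaided: equals u * (alpha + (beta - 1) * S)."""
    return output(S, u, alpha, beta) - S

# beta > 1: the AI channel rewards expertise, so a veteran (S = 1.0)
# gains more from the same tool than a novice (S = 0.5).
veteran = ai_gain(1.0, 0.5, alpha=0.2, beta=2.0)
novice = ai_gain(0.5, 0.5, alpha=0.2, beta=2.0)
```

With β > 1 the gain grows with skill; with β = 1 it is flat (u·α for everyone); with β < 1 it shrinks as skill rises.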

Three regimes:

β > 1 (Skill complement):

  • Productivity gain from AI more than compensates displaced human contribution
  • Higher-skill workers benefit MORE from the tool
  • Usage INCREASES with skill
  • Feedback: Self-correcting (high-skill uses heavy → loses skill → reduces usage → recovers)

β = 1 (Skill neutral):

  • AI provides the same net benefit regardless of expertise
  • Usage FLAT across skill levels
  • Translation, basic automation

β < 1 (Skill substitute):

  • AI partially substitutes for skill
  • Narrows gap between high/low-skill workers
  • Lower-skill workers gain MORE at margin → adopt HEAVIER
  • Usage DECREASES with skill
  • Feedback: Self-reinforcing (low-skill worker uses heavily → loses skill → increases usage → deskills faster)
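The usage gradients can be sketched with a simple heuristic rule – an assumption of this sketch, not the paper's policy: usage tracks the net per-unit benefit of the AI channel, α + (β − 1)·S, clipped to [0, 1]:

```python
def usage(S, alpha, beta):
    """Heuristic usage rule (illustrative assumption): use AI in
    proportion to its net benefit alpha + (beta - 1) * S, clipped
    to the interval [0, 1]."""
    return min(max(alpha + (beta - 1) * S, 0.0), 1.0)

# Skill complement (beta > 1): usage INCREASES with skill, so a dip in
# skill lowers usage and lets skill recover (self-correcting).
comp_high = usage(0.9, alpha=0.2, beta=2.0)
comp_low = usage(0.5, alpha=0.2, beta=2.0)

# Skill substitute (beta < 1): usage DECREASES with skill, so a dip in
# skill raises usage and accelerates the decline (self-reinforcing).
sub_low = usage(0.5, alpha=0.9, beta=0.5)
sub_high = usage(0.9, alpha=0.9, beta=0.5)
```

Under this rule the three regimes fall out directly: usage rises with skill for β > 1, is flat at α for β = 1, and falls with skill for β < 1.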

For project managers: The same language model produces different effective α, β depending on workflow embedding. Design determines whether the tool complements or substitutes for expertise.

Steady-state loss: Rational adoption, permanent degradation

Even fully informed decision-maker adopts AI when: Front-loaded productivity gains outweigh discounted long-run skill costs

Result: Worker ends up LESS productive than before adoption at steady state

Three adoption thresholds (skill-neutral case β=1):

α₀ (adoption onset): Below: AI never adopted. Above: Productivity boost justifies usage

α₁ (break-even threshold): Between α₀ and α₁ lies the STEADY-STATE LOSS REGION:

  • Adoption raises current productivity ✓
  • Long-run value < no-AI benchmark ✗
  • Privately rational for the decision-maker
  • Worker permanently worse off

Above α₁: Adoption improves BOTH short-run + long-run

  • Productivity gain outweighs skill cost
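A toy version of the loss region (the functional forms and parameters are illustrative assumptions, not the paper's): with β = 1 and a per-use cost γ, output is S + u·(α − γ), while practice and atrophy drive skill toward a closed-form steady state:

```python
ETA, KAPPA = 0.1, 0.15   # assumed learning and atrophy rates

def steady_skill(u):
    """Closed-form steady state of S' = S + ETA*(1-u)*(1-S) - KAPPA*u*S."""
    return ETA * (1 - u) / (ETA * (1 - u) + KAPPA * u)

def short_run_gain(alpha, gamma, u):
    """Immediate output jump at full skill S = 1 (beta = 1 case)."""
    return u * (alpha - gamma)

def long_run_output(alpha, gamma, u):
    """Output at the skill steady state under constant usage u (beta = 1)."""
    return steady_skill(u) + u * (alpha - gamma)

# Loss region: with alpha = 1.0, gamma = 0.1, u = 0.4 the short-run gain
# is positive, yet steady-state output falls below the no-AI benchmark 1.
```

Here α₀ = γ and the break-even α₁ = γ + (1 − steady_skill(u))/u; with u = 0.4 that puts every α between 0.1 and 1.35 in the loss region of this toy.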

For project managers: The loss region EXPANDS monotonically with the discount rate δ. A more impatient decision-maker → a wider set of parameters where adoption is privately rational but long-run harmful.

Augmentation trap: When loss becomes a welfare problem

Steady-state loss = informed tradeoff (worker choosing own usage)

Augmentation trap = moral hazard when: Decision-maker + worker misaligned

Two misalignment forms:

1. Managerial short-termism (δ_firm > δ_worker):

  • Manager evaluated on quarterly output
  • Worker investing in a long career
  • Firm discounts the future more heavily → places LESS value on preserving skill
  • Chooses HIGHER AI usage at every skill level

Concrete example (illustrative parameters):

  • Manager 3-year effective tenure (δ_F=0.33)
  • Worker 10-year career plan (δ_W=0.10)
  • Manager sets usage: û=0.26
  • Worker would choose: û=0.14
  • Manager’s policy: NEARLY TWICE the usage
  • Worker’s steady-state skill: 14% LOWER than self-chosen (Ŝ=0.75 vs 0.86)

2. Worker skill externality (ω): Workers value skill for reasons the firm ignores:

  • Side projects
  • Intellectual communities
  • Ability to understand things independently
  • Long-term career mobility

Firm’s objective OMITS these returns → externality

For project managers: Privately rational managerial decisions systematically overuse AI relative to the worker’s long-term interest. The misalignment is NOT accidental – it is a structural feature whenever horizons differ.
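The horizon effect in the tenure example can be reproduced in a toy simulation (every parameter below is an illustrative stand-in, not the paper's): each side picks the constant usage u maximizing its own discounted value, with per-period output S + α·u − γ·u² (β = 1 plus an assumed convex verification cost) and simple practice/atrophy skill dynamics:

```python
ETA, KAPPA = 0.1, 0.05     # assumed learning and atrophy rates
ALPHA, GAMMA = 0.3, 0.5    # assumed AI benefit and convex usage-cost weight

def value(u, delta, T=400):
    """Discounted value of holding usage fixed at u, starting from S = 1."""
    disc, S, V = 1.0 / (1.0 + delta), 1.0, 0.0
    for t in range(T):
        V += disc ** t * (S + ALPHA * u - GAMMA * u * u)
        S += ETA * (1 - u) * (1 - S) - KAPPA * u * S
    return V

def best_usage(delta):
    """Grid-search the privately optimal constant usage for discount rate delta."""
    return max((i / 100 for i in range(101)), key=lambda u: value(u, delta))

def steady_skill(u):
    return ETA * (1 - u) / (ETA * (1 - u) + KAPPA * u)

u_firm = best_usage(0.33)     # short-horizon manager
u_worker = best_usage(0.10)   # long-horizon worker
```

The impatient side picks heavier usage and therefore a lower skill steady state – the same qualitative gap as the 0.26 vs 0.14 example, though the toy's numbers differ.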

Five deployment regimes: Beneficial vs harmful adoption

Region I (Non-adoption):

  • α too small to justify adoption
  • Examples: Plumbing, complex negotiations
  • AI adds too little value

Region II (Augmentation, worse-off):

  • STEADY-STATE LOSS
  • Adoption rational short-run
  • Long-run skill cost exceeds gain
  • Examples: Entry-level financial analysis with LLM drafting
  • Worker uses AI, productivity initially up, skill erodes, ends worse off

Region III (Automation, worse-off):

  • Full automation (u=1)
  • Skill erodes to zero
  • Raw AI output < worker potential
  • Examples: Customer service scripting
  • Worker replaced by insufficient AI

Region IV (Augmentation, better-off):

  • Productivity gain outweighs costs
  • Adoption improves long-run position
  • Examples: Experienced doctors with diagnostic AI
  • High β (skill complement) → skill retains value under AI usage

Region V (Automation, better-off):

  • Full automation (u=1)
  • Raw AI output > worker potential (α – γ > Ŝ)
  • Examples: Data entry, templated correspondence
  • Automation legitimately superior

For project managers: A job’s region depends on HOW the tool is embedded in the workflow, NOT on the tool itself. The same ChatGPT deployment can fall in Region II (harmful) or Region IV (beneficial) depending on the α, β parameters determined by usage practice.
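One way to sketch such a classification with a toy model (the dynamics, cost term, and cutoffs here are assumptions of this sketch, not the paper's boundaries): grid-search the privately optimal constant usage, then compare steady-state output to the no-AI benchmark:

```python
ETA, KAPPA = 0.1, 0.08          # assumed learning and atrophy rates

def steady_skill(u):
    return ETA * (1 - u) / (ETA * (1 - u) + KAPPA * u)

def output(S, u, alpha, gamma):
    """Per-period output: beta = 1 plus an assumed convex usage cost."""
    return (1 - u) * S + u * (alpha + S) - gamma * u * u

def value(u, alpha, gamma, delta, T=400):
    """Discounted value of constant usage u, starting from full skill."""
    disc, S, V = 1.0 / (1.0 + delta), 1.0, 0.0
    for t in range(T):
        V += disc ** t * output(S, u, alpha, gamma)
        S += ETA * (1 - u) * (1 - S) - KAPPA * u * S
    return V

def classify(alpha, gamma, delta):
    """Map parameters to one of the five regimes (toy version)."""
    u_star = max((i / 100 for i in range(101)),
                 key=lambda u: value(u, alpha, gamma, delta))
    long_run = output(steady_skill(u_star), u_star, alpha, gamma)
    benchmark = 1.0   # no-AI steady state: skill settles at S = 1
    if u_star < 0.05:
        return "I (non-adoption)"
    if u_star > 0.95:
        return "V (automation, better-off)" if long_run > benchmark else "III (automation, worse-off)"
    return "IV (augmentation, better-off)" if long_run > benchmark else "II (augmentation, worse-off)"
```

The labels reproduce the five regions only qualitatively; the actual boundaries depend on the paper's full model, not this toy's thresholds.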

Permanent skill divergence: β<1 triggers bifurcation

When AI productivity depends LESS on worker expertise (β<1):

Downward spiral (self-reinforcing feedback):

  • Low-skill worker uses AI heavily
  • Loses skill
  • Usage increases further (because AI substitutes for skill)
  • Deskills faster
  • Eventually: Skill → 0

Upward spiral (also self-reinforcing):

  • High-skill worker avoids AI
  • Builds skill
  • Usage decreases further
  • Skill grows faster
  • Eventually: Achieves full potential Ŝ

Result: PERMANENT DIVERGENCE

  • Experienced workers realize full potential
  • Novices deskill to zero
  • Small differences in managerial incentives determine which path a worker takes

Example scenario: Two junior analysts, identical potential, slightly different managers

  • Manager A: 5% more short-term focused
  • Analyst under Manager A: deskills to zero over 2 years
  • Analyst under Manager B: achieves 85% potential
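The two-analyst scenario can be reproduced with a toy simulation (parameters invented for illustration): usage follows a net-benefit heuristic u = clip(α + (β − 1)·S + bias), where a small positive bias stands in for the slightly more short-term manager. With β < 1 the dynamics are bistable, so the bias alone flips which basin the analyst lands in:

```python
ETA, KAPPA = 0.1, 0.15    # assumed learning and atrophy rates
ALPHA, BETA = 0.9, 0.0    # strongly skill-substituting AI channel (beta < 1)

def usage(S, bias):
    """Heuristic usage rule (illustrative, not the paper's policy):
    net AI benefit ALPHA + (BETA - 1)*S plus a manager-imposed bias,
    clipped to [0, 1]."""
    return min(max(ALPHA + (BETA - 1) * S + bias, 0.0), 1.0)

def final_skill(bias, S=0.55, T=1500):
    """Iterate S' = S + ETA*(1-u)*(1-S) - KAPPA*u*S under the usage rule."""
    for _ in range(T):
        u = usage(S, bias)
        S += ETA * (1 - u) * (1 - S) - KAPPA * u * S
    return S

skill_a = final_skill(bias=0.05)   # manager A: slightly more short-termist
skill_b = final_skill(bias=0.0)    # manager B
```

Both analysts start at the same skill level; the 5-point usage bias alone decides whether the trajectory converges to the high-skill basin or the deskilled one.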

For project managers: β<1 deployments are EXTREMELY DANGEROUS for workforce development. Small initial differences (manager tenure, quarterly pressure, promotion timing) → massive long-term divergence. Organizations risk creating a two-tier workforce: veterans who started before AI vs. novices who never developed expertise.

Practical protection mechanisms

1. Monitor usage practice, not just tool: Track effective α, β parameters

  • High α, low β = danger zone (skill substitute)
  • Low α, high β = safer (skill complement)

Don’t ask “do we use ChatGPT?” Ask “HOW do teams interact with it?”

2. Design for β>1 (skill complement): Keep a human in the loop where judgment shapes AI quality. Examples:

  • Code review (not blind acceptance)
  • AI drafts, human edits (not copy-paste)
  • Collaborative workflows (not full delegation)

3. Align incentives across horizons: Manager evaluation metrics include long-term skill preservation

  • NOT just quarterly productivity
  • Track team capability trajectory
  • Penalize deskilling patterns

4. Protect learning opportunities: Mandate AI-free practice zones

  • Junior developers: 40% of tasks WITHOUT AI (build foundational “muscles”)
  • Experienced workers: Periodic AI detox (maintain sharp evaluation skills)

5. Measure skill directly, repeatedly: Don’t assume productivity = capability. Track:

  • Error detection rates (cancer specialists catching mistakes)
  • Retention tests (learning material without AI crutch)
  • Performance degradation when AI unavailable

6. Recognize externality: Workers value skill beyond firm’s immediate productivity

  • Career mobility
  • Professional identity
  • Intellectual autonomy

Firm-optimal ≠ worker-optimal → governance is needed

7. Regime classification before deployment: Before rolling out an AI tool, estimate α, β for the intended usage

  • Region II, III → redesign the workflow or don’t deploy
  • Region IV, V → proceed with monitoring

Use pilot studies to measure actual skill trajectories

Bottom line

AI cognitive offloading creates an augmentation trap: short-term productivity gains cost long-term expertise. The evidence is robust (cancer specialists’ “intuition rust” over 1 year, ChatGPT learners’ 45-day retention loss, programming deskilling). The model decomposes productivity into α (skill-neutral) + β (expertise-scaling). β>1 complements skill (safer); β<1 substitutes for it (dangerous divergence). Steady-state loss: even informed adoption is rational when front-loaded gains outweigh costs → the worker ends up less productive. Augmentation trap: managerial short-termism (δ_firm > δ_worker) plus the skill externality turn the loss into a welfare problem. Example: a 3-year manager sets usage nearly 2x higher than a 10-year worker would choose → 14% lower steady-state skill. Five regimes separate beneficial (IV, V) from harmful (II, III) adoption. Permanent divergence when β<1: small managerial differences → some workers achieve their potential, others deskill to zero. Protection: monitor α, β rather than the tool, design for complement (β>1), align horizons, protect learning zones, measure skill directly, recognize the externality, classify the regime before deployment.

Source: “The Augmentation Trap: AI Productivity and the Cost of Cognitive Offloading” by Michael Cao & Sinan Aral, MIT Sloan School of Management, published April 10, 2026.

Projektledarpodden