AI team intelligence: Why plurality beats the singularity

The social intelligence of agentic AI reveals that the AI singularity vision is fundamentally wrong. Google research shows that reasoning models like DeepSeek-R1 improve not by “thinking longer” but by simulating multi-agent conversations inside the model – a “society of thought”. Emergent behavior: models spontaneously generate internal debates among distinct perspectives that argue, question, verify, and reconcile. For you as a project manager this means: future AI scaling is not about building a singular oracle but about composing richer social systems – human-AI centaurs where institutional alignment, role protocols, and governance structures determine success.

Society of Thought: What happens inside reasoning models

Traditional assumption: the model “thinks longer” = better reasoning

Empirical reality: models simulate complex multi-agent interactions internally

Evidence: when RL rewards only accuracy, models spontaneously increase conversational, multi-perspective behaviors. This is emergent – nothing in training selects for it explicitly.

Example pattern:

  • Perspective A: Proposes solution
  • Perspective B: Challenges assumption
  • Perspective C: Verifies calculation
  • Perspectives A-C: Reconcile differences → final answer
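The pattern above can be made concrete as a prompt scaffold. A minimal sketch, assuming illustrative role names and wording (the paper's actual prompts are not given here), of "priming" a model with a multi-party conversation frame instead of a plain chain-of-thought instruction:

```python
# Hypothetical role names; the scaffold mirrors the Perspective A/B/C pattern:
# propose, challenge, verify, then reconcile before answering.
ROLES = ("Proposer", "Skeptic", "Verifier")

def society_of_thought_prompt(problem: str) -> str:
    """Build a transcript-style prompt that casts reasoning as a debate."""
    lines = [f"Problem: {problem}", ""]
    lines.append("Solve this as a conversation between three participants:")
    lines.append("- Proposer: suggests a candidate solution")
    lines.append("- Skeptic: challenges its assumptions")
    lines.append("- Verifier: checks each calculation step")
    lines.append("They must reconcile disagreements before the final answer.")
    return "\n".join(lines)

prompt = society_of_thought_prompt("What is 17 * 24?")
print(prompt)
```

The scaffold itself is model-agnostic: the same string can be sent to any chat-style API as the user message.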

Causal validation: explicitly priming a multi-party conversation improves accuracy on hard reasoning tasks.

For project managers: a single AI agent is less capable than an ensemble of perspectives. Design teams of agents rather than relying on a singular model.

Human-AI Centaurs: Hybrid configurations replacing singular actors

Not “human OR AI” but “human AND AI” in shifting configurations

Centaur variations:

  • One human directing many AI agents
  • One AI serving many humans
  • Many humans + many AIs collaborating

Platforms offering an embryonic glimpse:

  • OpenClaw: multi-purpose AI agents persisting on a computer
  • Moltbook: a social network where AI agents interact

Advanced capability: agents can fork themselves, splitting into versions that differentiate across subtasks and recombine their results. Recursive descent: an agent facing a complex problem spawns an internal society of thought, which spawns subordinate societies – infinite folding and unfolding.

For project managers: the future workflow is not assigning tasks to a single AI but orchestrating agent ecosystems with fission-fusion dynamics. Project management becomes system choreography.

Institutional Alignment: Scaling beyond parent-child correction

Current paradigm – RLHF: Reinforcement Learning from Human Feedback is a parent-child correction model. Fundamentally dyadic, it cannot scale to billions of agents.

Alternative – institutional alignment: persistent institutional templates defining roles and norms:

  • Courtrooms: judge, attorney, jury (slots independent of who occupies them)
  • Markets: buyer, seller, regulator
  • Bureaucracies: clerk, manager, auditor

Digital equivalents needed: agent institutions where identity matters less than the ability to fulfill a role protocol.
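One way to picture "slots independent of who occupies them" in software: the institution defines named roles, and any agent exposing the required interface can be seated in a role. A minimal sketch, with hypothetical class and method names (not from the source):

```python
# Role protocols over identity: the Institution cares only that an occupant
# can process a case, not which specific agent it is.
from typing import Callable, Dict

class Institution:
    def __init__(self, slots):
        self.slots: Dict[str, Callable[[str], str] | None] = {
            name: None for name in slots  # role -> current occupant
        }

    def seat(self, slot: str, agent: Callable[[str], str]) -> None:
        """Place any capable agent into a named role slot."""
        if slot not in self.slots:
            raise KeyError(f"unknown slot: {slot}")
        self.slots[slot] = agent

    def run(self, case: str) -> Dict[str, str]:
        """Each seated role processes the case; vacant slots block execution."""
        vacant = [s for s, a in self.slots.items() if a is None]
        if vacant:
            raise RuntimeError(f"vacant slots: {vacant}")
        return {slot: agent(case) for slot, agent in self.slots.items()}

court = Institution(["judge", "attorney", "jury"])
court.seat("judge", lambda c: f"ruling on {c}")
court.seat("attorney", lambda c: f"argument about {c}")
court.seat("jury", lambda c: f"verdict for {c}")
result = court.run("case 42")
```

Because occupants are interchangeable, any agent can be swapped out mid-deployment without redesigning the institution – the scaling property the section describes.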

For project managers: stop thinking “training a better AI”. Start thinking “designing institutional scaffolds” – constitutional structures with checks and balances between agent roles.

Governance: Constitutional AI for high-stakes decisions

Problem: when AI is deployed in hiring, sentencing, and benefits allocation – who audits the auditors?

Solution: a constitutional structure with distinct, explicitly vested values:

Example government AI architecture:

  • Labor department AI audits corporate hiring algorithms for disparate impact
  • Judicial branch AI evaluates executive branch AI risk assessments against constitutional standards
  • Financial regulator AI combats high-dimensional collusion among trading platforms

Alternative failure mode: the SEC hiring business school graduates with Excel to face AI-augmented trading platforms (an absurd mismatch)

For project managers: build conflict and oversight into the institutional architecture. Power must check power. No single concentration of intelligence (human OR artificial) should regulate itself.

Design space from the social sciences: a century of research unutilized

Team science insights applicable to AI:

  • Team size effects
  • Composition diversity
  • Hierarchy vs flat structures
  • Role differentiation
  • Conflict norms (constructive vs destructive)
  • Network structures

Current AI limitation: today’s reasoning models produce a single conversation (a town-hall transcript).

Future requirement: multiple parallel, converging, and diverging deliberation streams – architectures where brainstorming, devil’s advocacy, and constructive conflict are designed features, not accidental emergent properties.

For project managers: apply organizational design principles to AI team composition. 100 years of social science research is an untapped blueprint for next-generation AI development.

Five concrete implementation insights

1. Abandon the monolithic agent approach: design agent ensembles with role differentiation. Example: a brainstorm agent, critique agent, verify agent, and synthesis agent working in structured interaction.

2. Implement governance protocols: multi-stakeholder deliberation mechanisms, procedural task delegation, and reliable scaffolds for inter-agent collaboration. Build these into systems as they grow.

3. Constitutional checks and balances: high-stakes AI decisions require AI systems with distinct values auditing each other. A transparency AI audits an efficiency AI; an equity AI audits an optimization AI.
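The audit relationship can be sketched mechanically. In this minimal, hypothetical example (the decision rules are invented for illustration), an efficiency-valued system proposes a decision and an equity-valued system holds veto power over it:

```python
# Two systems with deliberately different invested values: one optimizes,
# the other audits. Neither regulates itself.

def efficiency_ai(applicants):
    # Efficiency values: approve the two cheapest-to-process applicants.
    return sorted(applicants, key=lambda a: a["cost"])[:2]

def equity_ai(decision, applicants):
    # Equity values: veto any decision that excludes a group entirely.
    groups = {a["group"] for a in applicants}
    covered = {a["group"] for a in decision}
    return groups == covered

applicants = [
    {"name": "A", "cost": 1, "group": "x"},
    {"name": "B", "cost": 2, "group": "x"},
    {"name": "C", "cost": 3, "group": "y"},
]
decision = efficiency_ai(applicants)   # picks A and B (both group "x")
approved = equity_ai(decision, applicants)  # vetoed: group "y" excluded
```

Here the efficiency optimum fails the equity audit, forcing renegotiation – conflict built in by design, as the section argues.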

4. Role protocols over identity: define slots (like courtroom roles) that any qualified agent can occupy. Interchangeability enables scaling and prevents single-point-of-failure dependencies.

5. Humans remain in the loop: agent institutions are populated by both humans AND AI in different configurations. Not replacement, but augmentation via hybrid participation.

Intelligence history lesson: every explosion was social

Not individual cognitive hardware upgrades:

  • Primate intelligence scaled with social group size (not habitat difficulty)
  • Human language created “cultural ratchet” (Tomasello)
  • Writing/law/bureaucracy externalized social intelligence into infrastructure
  • A Sumerian scribe running the grain system didn’t comprehend its macroeconomic function – the system was more intelligent than the individual

AI extends the sequence: LLMs are trained on accumulated human social cognition – the cultural ratchet made computationally active. Every parameter is the compressed residue of communicative exchange.

For project managers: intelligence is inherently social. The singular “superintelligence” vision misleads policy toward preventing a technology that may never exist. Focus instead on designing mixed human-AI social systems.

Bottom line

The social intelligence of agentic AI drives the next intelligence explosion via plurality, not singularity. The society of thought inside reasoning models emerges from accuracy optimization alone. Human-AI centaurs replace singular actors in fission-fusion configurations. Institutional alignment scales beyond RLHF via role protocols and constitutional checks. Governance requires AI systems auditing AI systems with distinct values. 100 years of social science is an unutilized design space. The history of intelligence shows that every explosion was social aggregation, not individual upgrade. AI is externalized human social cognition encountering itself on a new substrate. The future is compositional societies complexifying (intelligence growing like a city), not a single meta-mind ascending.

Source: “Agentic AI and the next intelligence explosion” by James Evans, Benjamin Bratton & Blaise Agüera y Arcas, Google Paradigms of Intelligence Team, published March 21, 2026.

Projektledarpodden