Large Population Models

Copyright

Camera Culture - Media Lab

Ayush Chopra, MIT Media Lab

Many of society’s most critical risks—from pandemic outbreaks to supply chain disruptions to cyber vulnerabilities—arise from the decisions of millions of individuals interacting daily. Tackling these issues, which cost trillions annually, requires understanding how individual actions aggregate into widespread consequences that no single entity could have predicted.

Consider Maya, a restaurant owner in Brooklyn. Her business hinges on the world around her—customers thin out when a subway line shuts down, supply costs jump with global egg shortages, and staff availability fluctuates when infections surge. Observing these converging pressures, Maya switches to a take-out menu with local ingredients—a choice driven by countless external factors, from construction updates to neighborhood buzz.

Current AI research has made remarkable progress creating "digital humans"—machines that mimic Maya's reasoning and decision-making. Yet it falls short of a critical next step: understanding how millions of individual decisions collectively drive crises like disease outbreaks or economic swings. This is where Large Population Models (LPMs) come in—models that build "digital societies" to simulate entire populations with their complex interactions and emergent behaviors.

Imagine a digital microscope revealing an entire city—8.4 million New Yorkers living their daily lives in a computational world. In this virtual society, Maya's takeout-only decision triggers cascading effects: customers modify their dining habits, delivery workers adjust routes, suppliers revise schedules, and local infection trajectories shift. As these millions of individual choices ripple through networks of interactions, patterns emerge that no single decision-maker could foresee. This living laboratory of human behavior is the vision behind LPMs—already saving lives and strengthening global systems by turning everyday decisions into real answers.

Research: Three Fundamental Breakthroughs

Building this digital New York required solving three fundamental challenges:

1. The Scale vs. Detail Dilemma: We need to simulate millions of New Yorkers as distinct individuals with their unique circumstances and interactions. Traditional simulations forced an impossible tradeoff—either model realistic behaviors for a few hundred individuals OR track simplified movements for millions—but never both at once. It's like trying to simultaneously film an entire stadium while capturing each person's facial expressions.

Our breakthrough: We can now simulate all 8.4 million New Yorkers with their individual behaviors on a single GPU—600× faster than previously possible—without sacrificing the rich detail of each person's unique situation. First, we efficiently process billions of interactions simultaneously across customer, supply chain, and community networks—in minutes instead of hours. Second, we learn behavioral patterns shared across individuals, allowing us to accurately capture unique decisions for millions of people while explicitly modeling only a few thousand—recreating a digital New York for $500. We can now see how Maya's decision ripples out across thousands of other businesses to shape city-wide health and economic outcomes—connections that were impossible to discover before.
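The core idea behind simulating millions of individuals at once is to hold every agent's state in arrays and update all of them with vectorized operations rather than looping over people one by one. The sketch below illustrates this pattern with NumPy on a scaled-down population; it is a simplified stand-in for the actual LPM machinery (the agent states, rates, and infection rule here are invented for illustration), and on a GPU the same pattern runs as tensor kernels.

```python
# Illustrative sketch, not the real LPM internals: one million agents updated
# per tick via array operations instead of a per-agent Python loop.
import numpy as np

rng = np.random.default_rng(0)
n_agents = 1_000_000  # scaled-down stand-in for 8.4 million New Yorkers

# Per-agent state: infection status and a behavioral "caution" level in [0, 1).
infected = rng.random(n_agents) < 0.01
caution = rng.random(n_agents)

def step(infected, caution, contact_rate=8, base_risk=0.03):
    """One simulation tick: every agent's exposure is computed in parallel."""
    prevalence = infected.mean()
    # Cautious agents reduce contacts; exposure risk scales with prevalence.
    exposure = contact_rate * (1.0 - caution) * prevalence * base_risk
    newly_infected = rng.random(infected.size) < exposure
    return infected | newly_infected

for _ in range(10):
    infected = step(infected, caution)

print(f"infected after 10 ticks: {infected.mean():.3%}")
```

Because every tick is a handful of array operations over the whole population, the cost per agent is tiny, which is what makes city-scale runs on a single GPU feasible.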

2. The Puzzle Piece Challenge: Officials need to understand how Maya and millions like her would respond to new policies like stimulus checks or lockdowns. They have data fragments—restaurant bookings, mobility patterns, infection rates, compliance behaviors—but traditionally couldn't connect this real-world information with simulations without building simplified approximations (surrogates) that sacrifice critical understanding.

Our breakthrough: We've eliminated the need for simplified approximations by making our simulations differentiable—transforming months of computation into minutes. This allows simulations to learn directly from diverse real-world data sources—hospital records, mobility patterns, economic indicators—providing 2–20× better precision and 3000× faster calibration than traditional surrogate models. When Maya's restaurant sees fewer customers, our model rapidly determines whether this resulted from rising infections, new restrictions, or consumer confidence changes—and projects how specific interventions might help her business while improving public health.
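To see why differentiability matters for calibration, consider a toy simulator with one unknown parameter. Because the simulator's output is differentiable in that parameter, we can fit it to observed data with plain gradient descent, with no surrogate model in between. This toy (a simple growth model with a hand-derived gradient; the model, data, and step size are all invented for illustration) is not the actual LPM calibration code, but it shows the mechanism:

```python
# Toy differentiable simulation: infections grow by (1 + beta) per step.
# Fitting beta directly by gradient descent stands in for surrogate-free
# calibration of a differentiable simulator against real-world data.

def simulate(beta, i0=1.0, steps=5):
    """Return the simulated trajectory for a given growth parameter beta."""
    return [i0 * (1.0 + beta) ** t for t in range(steps + 1)]

def loss_and_grad(beta, observed, i0=1.0):
    """Squared error against observations, with an analytic gradient."""
    loss, grad = 0.0, 0.0
    for t, y in enumerate(observed):
        pred = i0 * (1.0 + beta) ** t
        loss += (pred - y) ** 2
        # d(pred)/d(beta) = i0 * t * (1 + beta)^(t - 1); zero when t == 0.
        grad += 2.0 * (pred - y) * i0 * t * (1.0 + beta) ** (t - 1)
    return loss, grad

true_beta = 0.25
observed = simulate(true_beta)  # stand-in for observed hospital records

beta = 0.0  # initial guess
for _ in range(1000):
    _, grad = loss_and_grad(beta, observed)
    beta -= 1e-3 * grad

print(f"recovered beta: {beta:.4f}")
```

In the real setting the "parameter" is thousands of behavioral and epidemiological quantities, and the gradients flow through the full agent-based simulation via automatic differentiation rather than a hand-derived formula.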

3. The Simulation vs. Reality Gap: Traditional simulations treat agents like Maya purely as synthetic entities that mimic real people. This creates a fundamental disconnect—the digital Maya can never truly reflect how the real Maya adapts to changing conditions, and insights from the simulation can't easily reach the real Maya when she needs them. By the time data is collected, cleaned, and fed into models, the real world has already moved on.

Our breakthrough: We've transformed personal devices—like the real Maya's phone—from passive data collectors into active simulation agents. This enables decentralized simulations that run across networks of real-world devices. Instead of bringing sensitive data to central systems, we bring computation directly to where information naturally exists, using secure multi-party computation to preserve privacy while estimating simulation outputs and gradients. This creates a powerful two-way connection: Maya's actual restaurant decisions help update our digital New York in real-time, while insights from millions of simulated scenarios provide her with personalized recommendations. This establishes a practical collaboration between real and synthetic New Yorkers, where each improves the other. The result transforms simulations from isolated analysis tools into living systems embedded within real communities, providing timely insights that evolve with the changing world.
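One standard building block for this kind of privacy-preserving aggregation is additive secret sharing: each device splits its private value into random shares that individually reveal nothing, yet sum to the population total. The sketch below is a minimal illustration of that idea only (the party setup and values are invented, and the actual protocol involves more machinery for gradients, dropouts, and malicious parties):

```python
# Minimal additive secret-sharing sketch: the aggregator learns only the
# population total, never any individual device's private value.
import random

MODULUS = 2**31

def make_shares(value, n_parties, modulus=MODULUS):
    """Split `value` into n random additive shares that sum to it mod modulus."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

# Each device holds a private local statistic (e.g., daily customer count).
private_values = [37, 12, 85, 4]
n = len(private_values)

# Device i sends its j-th share to party j; each party sums what it receives.
all_shares = [make_shares(v, n) for v in private_values]
party_sums = [sum(col) % MODULUS for col in zip(*all_shares)]

# Combining the per-party sums reveals only the aggregate.
total = sum(party_sums) % MODULUS
print(total)  # equals sum(private_values) = 138
```

Any single party's view is a set of uniformly random shares, so no individual value leaks; only the combined total is recoverable.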

LPMs realize this vision by making fundamental advances in agent-based modeling, decentralized computation and machine learning. Our research has resulted in several publications at top-tier AI conferences and journals, and received multiple best-paper awards. Our work has received research awards from industry (e.g. JP Morgan, Adobe) and government (e.g. NSF).

AgentTorch: Tools for Digital Societies

AgentTorch, our open-source platform, makes building and running massive LPMs accessible. It integrates GPU acceleration, differentiable environments, large language model capabilities, and privacy-preserving protocols in a unified platform—allowing researchers to build, calibrate, and deploy sophisticated population models without specialized expertise. Think PyTorch, but for large-scale agent-based simulations. The AgentTorch platform is open-source at github.com/AgentTorch/AgentTorch.

Real-world Impact

AgentTorch LPMs are already making an impact globally. They've been used to help immunize millions of people by optimizing vaccine distribution strategies, and to track billions of dollars in global supply chains—improving efficiency and reducing waste across governments and enterprises.

As you read this, AgentTorch LPMs are helping the New Zealand Crown stop a measles outbreak, facilitating peer-to-peer energy grids in small Indian towns, and enabling global enterprises to reimagine their supply chains for a sustainable future. Our long-term goal is to "re-invent the census": one built entirely in simulation, captured actively, and used to safeguard nations worldwide.

From pandemics to climate adaptation to urban planning, LPMs turn the chaos of millions of decisions into clear, actionable solutions—reshaping how we tackle our toughest challenges.

Curious about LPMs? Learn More

We would love to collaborate with you in advancing fundamental research and deploying LPMs within your enterprise. For thoughts and questions, please reach out to Ayush Chopra at [ayushc] [at] [mit.edu]. We look forward to hearing from you!