Fields medalist Terence Tao sits down with Math, Inc.'s Jesse Han and Jared Duker Lichtman for a conversation on the future of mathematics.
As the inaugural Veritas Fellow, Tao is partnering with Math, Inc. to formalize explicit estimates in analytic number theory, turning computation-heavy arguments into verified, reusable building blocks.
Frontier research in mathematics is typically conceived of as a series of deep insights. But in practice, it is packed with drudgery: sifting through complicated proofs, conducting literature reviews, and modifying existing arguments for perturbed inputs. Formalization holds the prospect of streamlining mathematical practice. To formalize, researchers must confront hidden assumptions and establish completely rigorous proofs. Once such proofs become compiling code, we all gain a depth of understanding that traditional writing often conceals: the code reveals the precise inputs and outputs of every logical step, transforming vague intuition into machine-verified thought.
This new paradigm unlocks collaboration at scale. Past efforts to crowdsource mathematics were constrained: centralized experts became verification bottlenecks in large groups. Verified software removes this constraint, enabling a true division of labor in which contributors can make targeted improvements without needing to hold the entire context in their heads. Just as software engineering evolved to support enterprise-scale systems, mathematics can now move toward a model of mass production, one that no longer rests on the fragility of individual trust.
The definition of a mathematician will broaden during this ongoing transformation.
In analytic number theory, the literature contains a large, interconnected web of explicit estimates. But that web is not easily interoperable. In practice, results come in three layers:

- Primary estimates: explicit bounds obtained by direct numerical computation.
- Secondary estimates: theorems whose explicit constants are derived by feeding primary estimates into theoretical arguments.
- Tertiary estimates: applications that combine secondary estimates to obtain final results.
The difficulty is that these layers do not update cleanly over time. A tertiary paper may rely on the best primary estimate available at the time of writing. Years later, improved computations refine that primary input, but the refinement is rarely propagated systematically through the secondary and tertiary chain. As a result, the “same theorem with updated constants” is often unknown, even when it should be a routine consequence of updating the inputs.
The goal is to formalize key papers across these layers and then abstract them so their dependencies become explicit, composable, and machine-checkable. The long-term vision is to create a living network of implications: when a primary estimate improves, every downstream implication is automatically upgraded. This transforms the mathematical literature into modular software.
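To make the idea concrete, here is a minimal Lean sketch of what such composability could look like. Everything in it is hypothetical and illustrative, not taken from an actual Math, Inc. formalization: a downstream bound is stated relative to an abstract primary input, so that supplying a sharper primary estimate upgrades the downstream constant without re-proving anything.

```lean
import Mathlib

/-- A hypothetical "primary estimate": an explicit constant `C` together
with a machine-checked proof that it bounds an error term `E`. Different
numerical computations can package their results as instances of this
structure. -/
structure PrimaryBound (E : ℝ → ℝ) where
  C : ℝ
  bound : ∀ x, 2 ≤ x → |E x| ≤ C * x

/-- A hypothetical "secondary estimate", stated relative to *whatever*
primary bound is supplied. When a sharper `PrimaryBound` is proved, this
theorem upgrades automatically: a new instantiation, not a new proof. -/
theorem secondary_bound {E : ℝ → ℝ} (p : PrimaryBound E) :
    ∀ x, 2 ≤ x → |2 * E x| ≤ 2 * p.C * x := by
  intro x hx
  have h := p.bound x hx
  calc |2 * E x| = 2 * |E x| := by rw [abs_mul, abs_two]
    _ ≤ 2 * (p.C * x) := by linarith
    _ = 2 * p.C * x := by ring
```

Because `secondary_bound` quantifies over any `PrimaryBound`, a future computation that proves a smaller constant `C` immediately yields the sharper downstream bound by instantiation; that is the sense in which the literature becomes modular software.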
Number theory is a strong test case because its estimates have a relatively clear structure and a shared set of standard inputs and outputs. But in many areas, such as PDEs, researchers constantly spend effort on modification: adapting lemmas and hypotheses, translating between incompatible frameworks, “fitting square pegs into round holes.” A composable, machine-verified implication network would directly target this friction.
The same infrastructure is poised to scale to other fields and enable crowdsourced, large-scale projects that are currently hard to coordinate. A classic example is the classification of finite simple groups: a decades-long effort distributed across many contributors, with inevitable complexity around bookkeeping, integration, and confidence in completeness.
With modern tooling, we envision tackling moonshots of comparable scope: many contributors handling diverse cases, and automated systems gluing the pieces together. The state of the field becomes a live progress dashboard that records what is proved, what remains open, and exactly which dependencies each component requires.
This opens up a much faster-paced and more engaging way to do mathematics.
Math, Inc.