| Outcome | Probability | Yes Bid | Yes Ask | 24h Change | Volume |
|---|---|---|---|---|---|
| Before July | 0% | 0¢ | 0¢ | — | $0 |
| Before October | 0% | 0¢ | 0¢ | — | $0 |
| Before 2027 | 0% | 0¢ | 0¢ | — | $0 |
This market asks when an AI system will produce a verifiable solution to a major, widely recognized open mathematics problem. The outcome matters because such an achievement would be a clear milestone for machine reasoning and would reshape expectations for how mathematics is discovered and validated.
Automated theorem proving and interactive proof assistants have been used for decades to formalize and check complex proofs, and projects have fully formalized several famous results. More recently, machine-learning methods have begun to assist in search, conjecture generation, and guiding proof assistants, but most frontier problems still require deep conceptual innovation and lengthy human validation. Determining that AI has truly “solved” a frontier problem requires both a correct proof and acceptance by the mathematical community or rigorous formal verification.
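To make "formalization in a proof assistant" concrete, here is a toy sketch in Lean 4 (not tied to any particular frontier problem): a statement and a machine-checked proof, the same pattern that large formalization projects such as mathlib follow at far greater scale.

```lean
-- Toy Lean 4 example: a theorem statement and a kernel-checked proof.
-- Real formalizations of famous results follow this same pattern at
-- vastly larger scale: every inference is verified mechanically,
-- leaving no gap for human error in the proof itself.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

A successful formalization of a frontier result would consist of thousands of such machine-checked statements, which is why the prose above treats proof-assistant verification as an alternative route to acceptance alongside peer review.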
Market prices reflect traders’ collective judgments about when credible, verifiable AI-driven proofs of major open problems will appear; prices move in response to new technical advances, demonstrations, or verification events. Because resolution depends on documented, community-accepted evidence, short-lived claims without independent verification typically have limited market impact.
A solution must consist of a complete, rigorous proof of a problem recognized as a major open question, accompanied by documentation sufficient for independent expert scrutiny; acceptance can come via peer-reviewed publication, widely accepted community verification, or formalization in a proof assistant. The market generally treats attribution to AI as meaningful when AI produced the decisive ideas or constructions, even if humans participated in verification.
Attribution is based on the role AI played in producing the key insights or constructions and on how the authors document that role; public disclosure of methods, code, and logs demonstrating AI-generated content, along with attestations from contributing researchers, is typically required for the claim to carry weight.
Frontier problems are widely recognized, long-standing open questions or conjectures whose solution would represent a substantial conceptual advance in mathematics—for example, problems of the Millennium Prize variety or others broadly acknowledged by researchers in the relevant fields. The qualifying set is defined by the community’s recognition of the problem’s difficulty and significance.
Substantive signals include: fully detailed preprints or peer-reviewed papers claiming a proof; formal proof developments in mainstream proof assistants; independent expert analyses confirming correctness; and reproducible code or models that demonstrate how the AI produced the result. By contrast, unverifiable or vague claims are unlikely to lead to consensus.
Progress is most likely to come from a mix of actors: AI research labs developing models and algorithms for symbolic and abstract reasoning, academic groups working on theorem proving and formalization, mathematicians who engage with AI tools, collaborative open-source communities that build and verify formal proofs, and funders enabling sustained compute and research programs.