A Bespoke Labs and DataComp community effort to curate the best open reasoning datasets.
Our first goal is to curate reasoning datasets for training state-of-the-art small reasoning models that surpass DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-Distill-Qwen-7B on math and code reasoning benchmarks.
## Latest Results
| Model | AIME24 | MATH500 | GPQA-Diamond | LCB Easy | LCB Medium | LCB Hard |
|---|---|---|---|---|---|---|
| OpenThinker-7B | 43.3 | 83.0 | 42.4 | 75.3 | 28.6 | 6.5 |
| Bespoke-Stratos-7B | 16.6 | 79.6 | 38.9 | 71.4 | 25.2 | 0.8 |
| DeepSeek-R1-Distill-Qwen-7B | 60.0 | 88.2 | 46.9 | 79.7 | 45.1 | 14.6 |
| gpt-4o-2024-08-06 | 10.0 | 75.8 | 46.5 | 87.4 | 42.7 | 8.9 |
| o1-mini | 63.0 | 85.6 | 60.0 | 92.8 | 74.7 | 39.8 |
The numbers reported in the table above were obtained with our open-source evaluation tool Evalchemy; the LCB columns report LiveCodeBench accuracy by problem difficulty.
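For reference, here is a minimal sketch of generating a response from one of these models with Hugging Face transformers. The repository ID `open-thoughts/OpenThinker-7B` and the sampling settings are assumptions; check the model card for the recommended prompt format and generation parameters.

```python
# Minimal sketch: generate a reasoning trace with OpenThinker-7B via transformers.
# The repo ID "open-thoughts/OpenThinker-7B" is an assumption; verify on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-7B"  # assumed Hugging Face repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Find the sum of the first 100 positive integers."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning-distilled models emit a long chain of thought before the final
# answer, so give generation a generous token budget.
outputs = model.generate(inputs, max_new_tokens=4096, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```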
## About us
We are a team of researchers and engineers from Bespoke Labs, Stanford, the University of California, Berkeley, the University of Washington, the Juelich Supercomputing Center (JSC), LAION, UCLA, UNC Chapel Hill, and the Toyota Research Institute, united around building the best datasets (and thus the best models). See our previous work at datacomp.ai and mlfoundations.
Open Thoughts is supported by Bespoke Labs, NSF IFML, the UT Austin Machine Learning Lab, the Juelich Supercomputing Center, the Toyota Research Institute, and Lambda Labs.