Open Thoughts Project

A Bespoke Labs and DataComp community effort to curate the best open reasoning datasets.

Our first goal is to curate a reasoning dataset for training state-of-the-art small reasoning models that surpass DeepSeek-R1-Distill-32B and DeepSeek-R1-Distill-7B on math and code reasoning benchmarks.

Latest Results

| Model | AIME24 | MATH500 | GPQA-D | LCB Easy | LCB Med | LCB Hard |
|---|---|---|---|---|---|---|
| OpenThinker-7B | 43.3 | 83.0 | 42.4 | 75.3 | 28.6 | 6.5 |
| Bespoke-Stratos-7B | 16.6 | 79.6 | 38.9 | 71.4 | 25.2 | 0.8 |
| DeepSeek-R1-Distill-Qwen-7B | 60.0 | 88.2 | 46.9 | 79.7 | 45.1 | 14.6 |
| gpt-4o-2024-08-06 | 10.0 | 75.8 | 46.5 | 87.4 | 42.7 | 8.9 |
| o1-mini | 63.0 | 85.6 | 60.0 | 92.8 | 74.7 | 39.8 |

(GPQA-D = GPQA Diamond; LCB = LiveCodeBench.)

The numbers reported in the table above were evaluated with our open-source tool Evalchemy.
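Outside of the Evalchemy harness, one quick way to inspect the model's reasoning behavior is to sample a completion with the Hugging Face transformers library. The sketch below is illustrative only: it assumes the model is published on the Hugging Face Hub under the id open-thoughts/OpenThinker-7B (adjust the id if the repository name differs) and that a GPU with bfloat16 support is available.

```python
# Minimal sketch: sampling a reasoning trace with Hugging Face transformers.
# Assumption: the model is hosted on the Hub as "open-thoughts/OpenThinker-7B";
# swap in the correct repository id if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-7B"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Pose a simple math question using the model's chat template.
messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a long completion so the chain of thought has room to unfold,
# then print only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=2048, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the benchmark numbers in the table come from Evalchemy's standardized evaluation pipeline; ad-hoc sampling like this is useful for qualitative inspection, not for reproducing the reported scores.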

About us

We are a team of researchers and engineers from Bespoke Labs, Stanford, the University of California, Berkeley, the University of Washington, the Juelich Supercomputing Center (JSC), LAION, UCLA, UNC Chapel Hill, and the Toyota Research Institute, united around building the best datasets (and thus the best models). See our previous work at datacomp.ai and mlfoundations.

Open Thoughts is supported by Bespoke Labs, NSF IFML, the UT Austin Machine Learning Lab, the Juelich Supercomputing Center, the Toyota Research Institute, and Lambda Labs.

Announcements