Cursor Got Caught. I'm Reading It as a Strategy Guide.
A developer found it in 24 hours.
The model ID in Cursor's API: kimi-k2p5-rl-0317. Their new "proprietary" Composer 2 was Kimi K2.5 — an open-weight model from Chinese AI company Moonshot — fine-tuned with reinforcement learning and shipped as an in-house breakthrough.
The internet called it a scandal. I'm reading it as a strategy guide.
Cursor's engineers surveyed the landscape, picked the strongest open-weight foundation for coding tasks, applied continued pre-training and RL on their domain, and shipped something that outperformed frontier models on their specific task. That's not a shortcut. That's the right call.
The mistake was a single line in a license. Moonshot's terms require attribution for products above $20M in monthly revenue. Cursor runs at $167M/month. Eight times over. Whoever reviewed the license either missed that clause or didn't flag it.
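The arithmetic is simple enough to sketch. A minimal check, using the threshold and revenue figures cited above; the function name and constants are illustrative, not from any real compliance tool:

```python
# Illustrative license-threshold check using the figures from this post.
# Per the article, Moonshot's terms trigger an attribution requirement
# above $20M monthly revenue; Cursor's cited run rate is $167M/month.

ATTRIBUTION_THRESHOLD_USD = 20_000_000    # monthly revenue trigger in the license
CURSOR_MONTHLY_REVENUE_USD = 167_000_000  # figure cited in the article

def attribution_required(monthly_revenue_usd: int,
                         threshold_usd: int = ATTRIBUTION_THRESHOLD_USD) -> bool:
    """True when monthly revenue exceeds the license's attribution threshold."""
    return monthly_revenue_usd > threshold_usd

print(attribution_required(CURSOR_MONTHLY_REVENUE_USD))  # True
print(CURSOR_MONTHLY_REVENUE_USD / ATTRIBUTION_THRESHOLD_USD)  # 8.35
```

A one-line check that any license review should have run before shipping.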
What I can't stop thinking about is the bigger signal: the best foundation models for specialized work are now open-weight and increasingly coming from Chinese labs. Kimi K2.5. Qwen3. DeepSeek V3.2. Most Western engineering teams have never benchmarked them. The ones that have are building faster and cheaper than competitors still paying frontier API rates.
Cursor accidentally made this strategy legible to everyone.
Take the best open foundation. Specialize it on your domain. Ship something competitors can't replicate without your data and your training runs.
That's not a scandal. That's the whole game now.