A rumor is circulating that Cursor, maker of the popular AI coding assistant, may be using a Chinese open-weights model in its new flagship product without giving proper credit.
At the heart of the issue is Cursor's new "Composer 2," which the company markets as its own "frontier-level" model. However, developers have shared evidence suggesting it runs on "Kimi K2.5," a powerful open-weights model created by the Chinese company Moonshot AI. This matters because Kimi's license carries a specific condition: any company using the model whose revenue exceeds $20 million a month must prominently display the name "Kimi K2.5" in its product. Given Cursor's scale, it would almost certainly meet this threshold.
This situation didn't appear overnight; it's the result of several connected events. First, Moonshot AI released its powerful Kimi K2.5 model publicly, creating an opportunity for companies like Cursor to build on top of it. Second, Cursor consistently branded its Composer series as its "own model," setting public expectations of proprietary technology. Third, just days before the rumor surfaced, reports emerged that Cursor was in talks for a new funding round at a massive $50 billion valuation, which puts immense pressure on the company to demonstrate a unique technological edge. Finally, when Cursor launched Composer 2 without naming its base model, developers grew suspicious and uncovered logs pointing to Kimi K2.5, triggering the current controversy.
This is more than a technical disagreement; it's about trust and valuation. A $50 billion price tag is typically justified by unique, defensible intellectual property. If Cursor's "secret sauce" is primarily a refinement of a publicly available model, investors might reconsider that valuation. It also creates uncertainty for large enterprise customers, who scrutinize the origin of the software they use and where their data is processed.
Ultimately, building new products on open-weights technology is a standard and legitimate practice in the AI industry. The real issue here is transparency and compliance. The controversy centers on the potential gap between Cursor's marketing and its underlying technology and, more critically, on its possible failure to adhere to the clear terms of a license agreement. The focus now is on how Cursor will respond to these questions of disclosure and compliance.
Key terms:

- Open-weights model: AI models whose internal parameters are publicly released, allowing anyone to use, study, and build upon them.
- Reinforcement Learning (RL): A method of training AI models by giving them rewards or penalties, helping them learn to perform tasks better over time.
- Model Provenance: The documented history of an AI model, including its origins and the data it was trained on, which is crucial for trust and security.
