
Windsurf SWE-1: Our First Frontier Models

blixt

> Enabled from the insight from our heavily-used Windsurf Editor, we got to work building a completely new data model (the shared timeline) and a training recipe that encapsulates incomplete states, long-running tasks, and multiple surfaces.

This data is very valuable if you're trying to create fully automated SWEs, while most foundation model providers have probably been scraping together second-hand data to simulate long-horizon engineering work. Cursor probably has far more of this data, and I wonder how Microsoft's own Copilot is doing (and how they share this data with the foundation model providers)...
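The "shared timeline" the article mentions isn't specified in detail, but one way to picture it is as an ordered event log merged across surfaces (editor, terminal, chat) that preserves incomplete states. A purely hypothetical sketch, with every field and function name invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class TimelineEvent:
    # Hypothetical event record; this is NOT Windsurf's actual schema.
    timestamp: float
    surface: str = field(compare=False)   # e.g. "editor", "terminal", "chat"
    action: str = field(compare=False)    # e.g. "edit", "run", "message"
    payload: str = field(compare=False)
    complete: bool = field(compare=False, default=True)  # incomplete states kept, not dropped

def merge_timelines(*streams):
    """Merge per-surface event streams into one time-ordered shared timeline."""
    return sorted(e for stream in streams for e in stream)

editor = [TimelineEvent(1.0, "editor", "edit", "def foo(): ...", complete=False)]
terminal = [TimelineEvent(2.0, "terminal", "run", "pytest -q")]
timeline = merge_timelines(editor, terminal)
```

Only the timestamp participates in ordering (`field(compare=False)` excludes the rest), so events from different surfaces interleave by time.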

firejake308

I'm confused why they are working on their own frontier models if they are going to be bought by OpenAI anyway. I guess this is something they were working on before the announcement?

dyl000

OpenAI models have an issue where they are pretty good at everything but not incredible at anything. They're too well-rounded.

For coding you use Anthropic or Google models; I haven't found anyone who swears by OpenAI models for coding... Their reasoning models are either too expensive or hallucinate massively to the point of being useless... I would assume the GPT-4.1 family will be popular for SWEs.

Having a smaller-scope model (agentic coding only) allows for much cheaper inference and lets Windsurf build its own moat (so far, agentic IDEs haven't had a moat).

jjani

> OpenAI models have an issue where they are pretty good at everything but not incredible at anything. They're too well-rounded.

This suggests OpenAI models do have tasks they're better at than the "less rounded" competition, who in turn have tasks they're weaker in. Could you name a single such task (except for image generation, which is an entirely different use case) that OpenAI models are better at than Gemini 2.5 and Claude 3.7, without costing at least 5x as much?

allenleein

It seems OpenAI acquired Windsurf but is letting it operate independently, keeping its own brand and developing its own coding models. That way, if Windsurf runs into technical problems, the backlash lands on Windsurf, not OpenAI. It's a smart way to innovate while keeping the main brand safe.

riffraff

But doesn't this mean they have twice the training costs? I was under the impression that training was still the most expensive item on these companies' balance sheets.

kcorbitt

It's very unlikely that they're doing their own pre-training, which is the longest and most expensive part of creating a frontier model (if they were, they'd likely brag about it).

Most likely they built this by post-training an open model that is already strong at coding, such as Qwen 2.5.
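A "post-train" here would typically mean supervised fine-tuning on agent trajectories. As a hedged sketch (not Windsurf's actual recipe, and every name below is invented), each trajectory of tool calls and observations might be flattened into chat-format messages before tokenization:

```python
def trajectory_to_messages(task, steps):
    """Flatten an agent trajectory (action, observation pairs) into
    chat-format SFT messages. Purely illustrative, not a real recipe."""
    messages = [{"role": "user", "content": task}]
    for action, observation in steps:
        messages.append({"role": "assistant", "content": action})
        # Tool output is fed back as the next user/environment turn.
        messages.append({"role": "user", "content": observation})
    return messages

msgs = trajectory_to_messages(
    "Fix the failing test in utils.py",
    [("open utils.py", "<file contents>"), ("apply patch", "tests pass")],
)
```

In SFT on such data, the loss is usually computed only on the assistant turns, so the model learns to produce actions conditioned on prior observations.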

kristopolous

Must have been. These things take months.

anshumankmr

Perhaps also getting more money, if they believed their model to be good and had amassed good training data that OpenAI can leverage, on top of the user base.

dyl000

It was only a matter of time; they have too much good data not to train their own models, not to mention that Claude API calls were probably killing their profitability.

open source alternative https://huggingface.co/SWE-bench/SWE-agent-LM-32B

Though I haven't been able to find an MLX quant that wasn't completely broken.