Well, smaller local LLMs are getting much better as well. Frankly, my use of SOTA models has dropped off over the last 3 months; I get lots of work out of qwen3-coder-next and now qwen3.6 / gemma4.
I don't have enough local horsepower to run Minimax-2.7 or Kimi-2.5, but I could basically see ditching Cursor entirely very soon: use MM or Kimi to write detailed plans, then pass them off to local qwen to implement.
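To make the handoff concrete, here's a minimal sketch of that two-stage split, assuming both models sit behind OpenAI-style chat endpoints (e.g. a hosted API for the planner and something like llama.cpp or Ollama locally for the coder). The model names and prompts are illustrative assumptions, not anyone's actual setup; this just builds the two request payloads.

```python
import json

# Hypothetical model IDs -- swap in whatever your provider / local server exposes.
PLANNER_MODEL = "kimi"         # big remote model: writes the plan (assumption)
CODER_MODEL = "qwen3-coder"    # small local model: implements it (assumption)

def plan_request(task: str) -> dict:
    """OpenAI-style chat payload asking the big model for a plan only, no code."""
    return {
        "model": PLANNER_MODEL,
        "messages": [
            {"role": "system",
             "content": "Produce a numbered implementation plan. Do not write code."},
            {"role": "user", "content": task},
        ],
    }

def implement_request(plan: str) -> dict:
    """Payload handing the finished plan to the local coder model."""
    return {
        "model": CODER_MODEL,
        "messages": [
            {"role": "system",
             "content": "Implement the following plan, step by step."},
            {"role": "user", "content": plan},
        ],
    }

if __name__ == "__main__":
    # In practice you'd POST plan_request(...) to the remote /chat/completions
    # endpoint, take the returned plan text, and POST implement_request(plan)
    # to the local server. Here we just show the payload shape.
    print(json.dumps(plan_request("Add retry logic to the HTTP client"), indent=2))
```

The point of the split is that the expensive model only sees the short planning exchange, while the long code-generation turns stay on local hardware.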
I don't think SOTA models will ever go away, but at some point local LLMs are going to become "good enough" to seriously curtail their usage.
The bigger question might be whether costs are actually rising, or whether we're just getting more capability per unit and calling it inflation.