While I’m still personally skeptical of these tools’ ability to produce a GOOD software engineer, it’s something I should probably test in a limited capacity.
I’ve noticed DeepSeek has a few integrations, both official and hobbyist, with coding tools like Claude Code. Plus, I’d rather not pay £20/mo for any of this stuff, let alone to an AI company NOT linked to the CPC.
I might consider a locally hosted model, but the upfront cost of anything that can run one decently fast at a high parameter count is quite prohibitive. My home server isn’t really set up for good cooling!


I also use Zed and hook it up to small Qwen models, like the new 4B 2507 Thinking model, through LM Studio. I just have a 3070 with 8GB of VRAM, plus 32GB of regular RAM to help offload.
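For anyone wanting to poke at the same setup: LM Studio’s local server speaks the standard OpenAI chat-completions API (at http://localhost:1234/v1 by default), and as I understand it, Zed just gets pointed at that endpoint in its settings. Here’s a minimal Python sketch of what goes over the wire, assuming the server is running and that the model ID below matches whatever your LM Studio install calls the download:

```python
# Minimal sketch: call a Qwen model served locally by LM Studio.
# Assumes LM Studio's server is running on its default port (1234).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                  # the local server ignores the key; any string works
)

resp = client.chat.completions.create(
    model="qwen3-4b-thinking-2507",  # assumption: use the model ID your LM Studio shows
    messages=[
        {"role": "user", "content": "Find the bug: for i in range(10) print(i)"},
    ],
)
print(resp.choices[0].message.content)
```

That same request shape is what the editor integrations send under the hood, which is why most of the hobbyist ones amount to little more than a base URL and a model ID in a settings file.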
Small models leapfrog each other every six months or so, kind of like computer hardware and phones. I don’t think you really need to run full 30B-or-higher models to get use out of them. They’re of course smarter, but if you’re mainly using a model for syntax correction, error finding, and small problems like that, rather than asking it to spit out an entire program, the small ones are pretty good.
Maybe in a few years I’ll have the hardware to host AI locally. Right now my home server is just an i5-9500 (or 8500, i forgor 💀) for the iGPU transcoding on Jellyfin. A 3070, which can pull around 220W of board power on its own, would double my power draw immediately at full tilt.
Thankfully, I don’t think the mental capacity and knowledge needed to write code is going to balloon in the future, so eventually something I can host locally will be adequate for my purposes!
Fair enough, I must say I haven’t tried local models (tfw no GPU ;_;). I guess my take is that if it costs a tenth of a cent on OpenRouter to use a SOTA open source model, I might as well do that, but I can see the appeal of local models for easier queries.
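For what it’s worth, OpenRouter exposes the same OpenAI-style API, so hopping between a local model and a hosted open-weight one is basically a base-URL and model-ID swap. A minimal sketch, assuming you’ve exported an OPENROUTER_API_KEY; the model ID is just one example of a cheap open-weight option:

```python
# Minimal sketch: the same OpenAI-style call, pointed at OpenRouter instead.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumes you've exported this beforehand
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-chat",  # example open-weight model ID; swap in whatever's cheap
    messages=[
        {"role": "user", "content": "Why does Python say: TypeError: 'int' object is not iterable?"},
    ],
)
print(resp.choices[0].message.content)
```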