
Hello...

Is it possible to use aider with a local model running in LM Studio (or Ollama)?

From a quick glance I did not see an obvious way to do that...

Hopefully I am totally wrong!

Thanks for your interest in aider.

Yes, you can absolutely work with local models. Here are the docs for working with LM Studio and Ollama:

https://aider.chat/docs/llms/lm-studio.html

https://aider.chat/docs/llms/ollama.html
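
If it helps, the Ollama flow from those docs boils down to roughly this (the model name is just an example, substitute whatever you've pulled):

    # pull a model; make sure the Ollama server is running
    ollama pull deepseek-r1:32b
    ollama serve   # often already running as a background service

    # point aider at the local server (this is Ollama's default address)
    export OLLAMA_API_BASE=http://127.0.0.1:11434

    aider --model ollama_chat/deepseek-r1:32b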


Yes, absolutely.

In the left sidebar of the docs there's a "Connecting to LLMs" section.

Check out Ollama as an example.


Yes, and it's easy.

Yeah:

    aider --model ollama_chat/deepseek-r1:32b
(or whatever)
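
One caveat: Ollama defaults to a 2k-token context window and silently truncates anything past it, which can leave aider unable to apply edits. Per the aider docs you can raise num_ctx with a .aider.model.settings.yml file in your home or project directory; a rough sketch (model name and size here are just examples):

    cat > .aider.model.settings.yml <<'EOF'
    - name: ollama_chat/deepseek-r1:32b
      extra_params:
        num_ctx: 8192
    EOF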

This didn't work well for me: no changes are ever made. But maybe it's because I'm just using the 14B model.

If you're on a Mac with 32+ GB of RAM, you could try deepseek-r1-distill-qwen-32b-mlx in LM Studio. It's just barely usable speed-wise, but it gives useful results most of the time.
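
If you go that route, hooking aider up to LM Studio is similar to Ollama; per aider's LM Studio docs it's roughly this (the key just needs to be a non-empty placeholder, and the model name must match what you've loaded in LM Studio):

    # LM Studio's local server defaults to this address
    export LM_STUDIO_API_BASE=http://localhost:1234/v1
    # aider needs a key set, even a dummy one
    export LM_STUDIO_API_KEY=dummy-api-key

    aider --model lm_studio/deepseek-r1-distill-qwen-32b-mlx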


