I’ve tried running models locally. I found that co-locating the models on my computer/laptop consumed too many resources and got in the way of my work. My solution is to run the models on my home servers, since they can be served over HTTP, and then VPN into my home network to access them when I’m on the road. That works well, and it scales.