Maybe I’m missing something, but does this blog post conclude with a service to do inference off-device? Why explain all of the steps for on-device inference if you’re offering an API to do cloud inference?
Off-device and on-device are alternative ways to do deep learning inference on the Pi, each with pros and cons. For example, with on-device inference you will need to run a smaller architecture to get decent FPS, and you will also be dependent on the hardware. Using a cloud API removes those restrictions: there will be some latency in the web request, but you can use a much more accurate model and be independent of the Pi's hardware. Just trying to paint a complete picture. Any suggestions for the blog post are welcome.
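For what it's worth, here is a minimal sketch of the two paths. The model file, the endpoint URL, and the API shape are hypothetical placeholders for illustration, not the blog's actual service:

```python
# Illustrative only: "small_detector.tflite" and the endpoint below are
# hypothetical placeholders, not the actual model or API from the post.
import requests
import numpy as np
import tflite_runtime.interpreter as tflite  # the Pi-friendly TFLite runtime


def detect_on_device(frame: np.ndarray):
    # On-device: a small quantized model keeps FPS acceptable on the Pi,
    # but accuracy is limited by what the hardware can run.
    interpreter = tflite.Interpreter(model_path="small_detector.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out["index"])


def detect_in_cloud(jpeg_bytes: bytes):
    # Off-device: ship the JPEG over the network; slower per request, but
    # the server can run a much larger, more accurate model.
    resp = requests.post(
        "https://api.example.com/v1/detect",  # placeholder endpoint
        files={"image": jpeg_bytes},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["objects"]
```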
> Maybe I’m missing something but does this blog post conclude with a service to do inference off device?
You didn't answer this question. Your "sorta-answer" suggests "yes", but the title "How to easily Detect Objects with Deep Learning on Raspberry Pi" suggests that your answer should be "no".
The title wasn't "How to easily Detect Objects with Deep Learning on Raspberry Pi with cloud services".
Hey, the blog shows how to implement the entire algorithm yourself in Python, how to run it via a Docker image on your own machine, and where to find the source code for the Docker image (which uses TensorFlow) so you can play around with it. To answer your question: yes, the last part of the blog shows how to do the same thing with a cloud-based API. It's up to the user to pick their preferred method.
> Your "sorta-answer" suggests "yes", but the title "How to easily Detect Objects with Deep Learning on Raspberry Pi" suggests that your answer should be "no".
How am I suggesting "yes"? And how does the title suggest the answer is "no"? There are pros and cons to both methods. If you are doing inference in a remote place with no internet access, off-device is out of the question. We are just trying to give the complete landscape, so that if someone has a use case and is trying to come up with a solution, it might be helpful. Depending on the use case, they can pick on-device or off-device.
Exactly. I’m certainly interested in “How to easily Detect Objects with Deep Learning on Raspberry Pi”. Because I’m interested in that, I am most definitely not interested in “How to easily Detect Objects with Deep Learning on Raspberry Pi with cloud services”.
Because the blog post is advertising for the service. The goal is to make it look like enough of a pain in the butt that it's worth paying someone else to do it, but not such a big pain in the butt that a cloud service would be expensive or impractical.
I agree that there is potential for bias. The post starts off with a disclaimer explaining the conflict of interest. Is there anything else we can do to make the post more objective and less prone to bias?
Honestly I didn’t mean that as negatively as it sounds. It’s good advertising.
Squeezing a full-blown ML tutorial into a blog post is a tall order. Of course it's too thin to really do it yourself without a lot of further research; you can't really expect anything else. But I think the title leads people to expect more detail, hence posts like this one and the owl cartoon above.
Maybe add a breakdown of performance for doing this locally on a Pi vs. using the API? It would make it easier for people to weigh the pros and cons.
I suspect it'd be fun to play with a hybrid approach there - use the local on-device capability to detect "scenes of interest", then ship those out to the cloud service (with significantly more horsepower) to get more accurate results. Possibly, if it works for your use case, you could detect and store "interesting looking stuff" and ship it to the cloud later for analysis if your device only has intermittent internet connection.
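A rough store-and-forward sketch of that idea, assuming a hypothetical cloud endpoint; the `is_interesting` trigger is a stub where the small on-device detector would slot in:

```python
# Hybrid sketch: a cheap local model acts as a trigger; interesting frames
# are queued on disk and uploaded whenever the connection is up.
# QUEUE_DIR, CLOUD_URL, and is_interesting are placeholders, not real APIs.
import os
import glob
import time
import requests

QUEUE_DIR = "/var/spool/detections"              # survives reboots / offline gaps
CLOUD_URL = "https://api.example.com/v1/detect"  # placeholder endpoint


def is_interesting(frame_jpeg: bytes) -> bool:
    # Stub: run the small on-device detector here and return True when its
    # confidence for any "scene of interest" class crosses a threshold.
    ...


def enqueue(frame_jpeg: bytes) -> None:
    # Persist the frame locally so nothing is lost while offline.
    os.makedirs(QUEUE_DIR, exist_ok=True)
    path = os.path.join(QUEUE_DIR, f"{time.time():.0f}.jpg")
    with open(path, "wb") as f:
        f.write(frame_jpeg)


def flush_queue() -> None:
    # Call whenever the uplink is available: send queued frames to the
    # bigger cloud model and delete them on success.
    for path in sorted(glob.glob(os.path.join(QUEUE_DIR, "*.jpg"))):
        with open(path, "rb") as f:
            try:
                resp = requests.post(CLOUD_URL, files={"image": f}, timeout=10)
                resp.raise_for_status()
            except requests.RequestException:
                return  # still offline; retry on the next flush
        os.remove(path)
```

The design choice here is that the Pi only pays network and cloud costs for frames the cheap model flags, and the on-disk queue handles the intermittent-connection case mentioned above.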