Hi HN! hietalajulius and I have been working on a toolkit for solving computer vision problems.
These days, there are a lot of fancy solutions to many computer vision problems, but good implementations of the algorithms are hard to come by: getting to a working solution requires figuring out lots of different steps, tools are buggy and poorly maintained, and you often need a lot of training data to feed the algorithms. Projects easily balloon into months-long R&D efforts, even when done by seasoned computer vision engineers. With the Stray Robots toolkit, we aim to lower the barrier for deploying computer vision solutions.
Currently, the toolkit allows you to build 3D scenes from a stream of depth camera images, annotate the scenes using a GUI and fit computer vision algorithms to infer the labels from single images, among a few other things. In this project, we used the toolkit to build a simple electric scooter detector using only 25 short video clips of electric scooters.
Using video to automatically build a large training set is smart! Well done! I was thinking about making a properly free and open dataset from just walking around London, and this gives me some ideas...
Super cool, especially the way it was able to differentiate that Posti box from the scooters, even though they have vaguely the same shape. Just out of curiosity, what confidence level did the classifier assign to the Posti box as a scooter?
Looks like it picks up parts of the mail thing as a scooter only in a few frames and the score is way below 1% (I set the minimum threshold to 0.01%), here's an example: https://imgur.com/a/9pSTwut
Pretty much yeah. Just to be clear, we only use the color and depth images from the camera. There is actually an offline calibration step to obtain camera intrinsic parameters, which are copied into each scene.
The integrate step runs a SLAM pipeline to compute the trajectory of the camera. Then we run an integration step to obtain the mesh.
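To give a rough idea of what that integration step does conceptually, here's a minimal sketch using Open3D's TSDF volume integration. This is not our actual pipeline code; the scene file layout, file names and parameter values are just assumptions for illustration:

    import json
    import numpy as np
    import open3d as o3d

    # Hypothetical scene layout: color/, depth/, per-frame camera-to-world poses,
    # and the shared intrinsics copied in from the offline calibration step.
    intr = json.load(open("scene/camera_intrinsics.json"))
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        intr["width"], intr["height"],
        intr["fx"], intr["fy"], intr["cx"], intr["cy"])

    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.01,   # 1 cm voxels, an arbitrary choice
        sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    poses = np.load("scene/camera_poses.npy")  # (N, 4, 4) camera-to-world matrices
    for i, T_cw in enumerate(poses):
        color = o3d.io.read_image(f"scene/color/{i:06}.jpg")
        depth = o3d.io.read_image(f"scene/depth/{i:06}.png")
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_trunc=4.0, convert_rgb_to_intensity=False)
        # integrate() expects a world-to-camera extrinsic, so invert the pose.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(T_cw))

    mesh = volume.extract_triangle_mesh()
    o3d.io.write_triangle_mesh("scene/mesh.ply", mesh)

The camera poses here would come from the SLAM step; the integration itself is just fusing every depth frame into one volume and extracting a mesh from it.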
Our core philosophy is to not stand in the way once you want to do something custom. So if you just want to read the camera poses and 3D labels and do your own thing, you can absolutely do that; the data is available in each scene folder.
You only have to label the 3D Bounding Box once. Then you can automatically generate 2D bounding boxes for every frame of the video. So instead of annotating every frame with a 2D box, you only annotate once with a 3D box.
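To make that concrete, here's a rough sketch of how a single labeled 3D box can be turned into a per-frame 2D box by projecting its corners with the camera pose and intrinsics (plain NumPy, with made-up names, not the toolkit's actual code):

    import numpy as np

    def project_box(corners_world, T_world_to_cam, K):
        """Project the 8 corners of a 3D box (world frame) into a 2D bbox.

        corners_world: (8, 3) box corner coordinates in the world frame
        T_world_to_cam: (4, 4) world-to-camera transform for this frame
        K: (3, 3) camera intrinsic matrix
        Returns (xmin, ymin, xmax, ymax) in pixel coordinates.
        """
        corners_h = np.hstack([corners_world, np.ones((8, 1))])   # homogeneous coords
        corners_cam = (T_world_to_cam @ corners_h.T).T[:, :3]     # into camera frame
        pixels = (K @ corners_cam.T).T                            # perspective projection
        pixels = pixels[:, :2] / pixels[:, 2:3]                   # divide by depth
        xmin, ymin = pixels.min(axis=0)
        xmax, ymax = pixels.max(axis=0)
        return xmin, ymin, xmax, ymax

In practice you'd also clip the box to the image bounds and skip frames where the object is behind the camera or out of view.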
Though, I wonder if the whole hassle of relying on a phone's RGB-D sensor, copying data off your phone, and using yet another annotation tool is worth it, when you can instead use a tracking bbox annotation tool that interpolates across many frames.
With those, you can even annotate moving and distant objects, which I would argue is even better for generalization (since the background changes).
But I bet there are some use cases/users that can benefit from it.
Yes, if you only care about 2D bbox detection, a smart bounding box annotation tool has some advantages. If you need to solve 3D vision tasks, as is the case in 3D bbox detection, 3D keypoint detection, or 6D pose estimation, then you need a tool that can also label the z dimension.
Yes, it relies on the target being static when capturing the training data, but it's ok for the background to move. We were actually surprised by how well it works on moving objects without being trained on them. In the post you can see Julius riding a scooter, and that is an unseen example with a detector that was only trained on static scooters.
Newbie here, where's the intersection between object detection and OCR?
For example, if I have images in different PDF files that I want to compare, or I'm trying to identify information on a wine label, what are the criteria to consider when deciding which method to use?
Heads up for anyone else, I was interested in the strayscanner app to try on my iPhone 11, but I’m getting an error when trying to record: “unsupported device: this device doesn’t seem to have the required level of ARKit support”.
Ah yeah. The App Store doesn’t seem to have a way to restrict downloads to LiDAR devices only. The description does mention the limitation, but there doesn’t seem to be a way to set a hard constraint. So sorry about this! I wonder if there is a way to issue refunds on the App Store.
Maybe worth supporting the front camera (TrueDepth camera) as well? Record3D gives pretty good accuracy (https://record3d.app/). It's probably not the best way to scan something without seeing the screen, but better than nothing. As a workaround, people can also use a small mirror to do the scanning and see the result on the screen at the same time.
^ probably not, since they use Detectron2, but given that the labeled images are really the core part of this, there’s no reason you can’t use them with a different model that is compatible.
Yeah, the labels are loaded into the Detectron2 format from the 3D annotation JSON at train time; we plan to add similar data loading for YOLOv3 etc. soon. Starting out with Detectron2 was mainly for POC/demo purposes; the idea going forward is to be able to feed the data anywhere it might be needed.
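For reference, Detectron2's standard dataset format is just a list of dicts registered under a name, so converting the projected 2D boxes looks roughly like this. The dataset name and the scene_to_dicts converter here are hypothetical, not part of the toolkit:

    from detectron2.data import DatasetCatalog, MetadataCatalog
    from detectron2.structures import BoxMode

    def scene_to_dicts():
        # Hypothetical converter: one dict per frame, with boxes projected from the 3D labels.
        return [{
            "file_name": "scene/color/000000.jpg",
            "image_id": 0,
            "height": 720,
            "width": 960,
            "annotations": [{
                "bbox": [120.0, 200.0, 340.0, 560.0],
                "bbox_mode": BoxMode.XYXY_ABS,
                "category_id": 0,  # scooter
            }],
        }]

    DatasetCatalog.register("scooters_train", scene_to_dicts)
    MetadataCatalog.get("scooters_train").thing_classes = ["scooter"]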
I don't know the specifics of that camera/its software, but the trained models are saved as TorchScript (https://pytorch.org/docs/stable/jit.html) which can be used very flexibly in python/C++.
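Loading a TorchScript model takes only a couple of lines in Python (the file name and input shape below are assumptions; the real input format depends on how the model was exported):

    import torch

    # Load the exported TorchScript module; no Python model definition is needed.
    model = torch.jit.load("detector.pt")
    model.eval()

    with torch.no_grad():
        # Dummy RGB image tensor just to show the call shape.
        image = torch.rand(1, 3, 480, 640)
        outputs = model(image)

The C++ side goes through torch::jit::load from libtorch in much the same way.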
If you want to try it out, you can install the toolkit by following the instructions here: https://docs.strayrobots.io/installing/index.html
Going forward we plan to add other components such as 3D keypoint detection, semantic segmentation and 6D object pose estimation.
Let us know what you think! Both of us are here to answer any questions you may have.