The singer of the UK indie pop band Everything Everything taught himself this app, along with Blender and some other tools, over three weeks to make a very stylised, glitchy music video the other week.
He posted "Thanks for the help, guys!" messages in several Reddit forums. It was weird. "Wait, that's your band?" "Yeah, I'm the singer." "Oh, you sound great, keep going!" These guys have sold out rather large venues in the UK. Anyway, this seems to have turned into an endorsement of the band. OK, yes it is. But I swear this isn't the singer again.
It looks great, but it weighs in at almost 400K triangles and 300K vertices, so it's going to be difficult to do anything practical with a model that size.
Because I'm trying to get a photogrammetry pipeline working that can produce 50M vertices near-term, 1G long-term, all without resorting to chunking. 10 fps on the serial parts is a medium-term goal for throughput.
Agisoft Metashape is pretty good as well. Results are comparable to RealityCapture (depending a bit on what you want), but a personal license is only $179.
I tried all sorts of free photogrammetry software way back, but after I tried Metashape I decided to buy a license. With 32 GB of memory it's great. I make photogrammetry models as a hobby - here's a recent one made with Metashape: https://skfb.ly/6S6NR
Btw, super shameless plug: if you're looking for 3D capture where photogrammetry doesn't work well (indoors, complex environments), you may find Dot3D (https://www.dotproduct3d.com/dot3dpro.html) quite helpful. It uses depth sensors and works on any recent Android or Windows device. You can get a 14-day free trial, and we're happy to extend it if needed.
Yes, we're getting there. Of course a > $50k laser scanner will still outperform it in terms of accuracy but the new generation of depth sensors (e.g. Intel RealSense L515) is really quite good. We're going to announce support for this very soon and will also put some datasets online for comparison.
Hey, that sounds potentially relevant to my employer. I'm a small fry in our org but would like to get some of our heavy hitters to take a closer look at what you guys have. Who should we contact (if folks find this relevant)? I prefer not to link my private online persona to my professional role but the page https://www.dotproduct3d.com/workflows.html has several references to our products already so you likely are aware of my employer :)
I would echo the recommendation, but add that you really also need a good GPU. All the steps are highly parallelisable and take significantly longer on a CPU only.
Agisoft has a relatively simple interface, but there's a lot of complexity underneath if you need to tweak things.
This is great! Do you have any resources (YouTube videos, articles, etc.) for how to do this with a drone? I went outside to take pictures of my house, but I wasn't sure how many photos to take to get a good render.
The same rules of thumb apply to both drone and handheld cameras. With a drone you can capture much larger areas, though, which is really cool (a city block... a small mountain... whatever).
The 3D reproduction quality roughly matches what you provide in the photos - meaning if you view your model from a camera distance similar to (or further away than) the source images, it will look good.
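A quick way to sanity-check that "similar distance" point is ground sampling distance - the real-world width one pixel covers at a given shooting distance. A minimal sketch; the camera numbers below are made up for illustration, not from any comment here:

```python
def ground_sampling_distance(distance_m, focal_mm, sensor_width_mm, image_width_px):
    """Real-world width (in metres) covered by one pixel at the given distance."""
    return (distance_m * sensor_width_mm) / (focal_mm * image_width_px)

# Hypothetical rig: a 24 MP APS-C camera (23.5 mm sensor, 6000 px wide)
# with a 16 mm lens, shooting a facade from 10 m away.
gsd = ground_sampling_distance(10.0, 16.0, 23.5, 6000)
print(f"{gsd * 1000:.1f} mm per pixel")  # ~2.4 mm of real detail per pixel
```

Detail finer than the GSD simply isn't in the photos, so it can't be in the model either - which is why the model holds up at (or beyond) the original shooting distance but falls apart closer in.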
For a high-resolution scan you likely want a hierarchical set of pictures: first a walk-around of, say, 32 pics.
Then a similar pass around the details you want modelled clearly - columns, stairs, and whatever else protrudes from or recedes into the house facade.
Prefer a bright overcast day - it's way better than direct sunlight.
The overall rule of thumb is that as you move, consecutive shots should have about 80% overlap.
I usually notice only after taking the photos that I missed an angle and some details I wanted aren't there. No problem: since you took the images on an overcast day, just go back and take the few extra shots that are missing.
For full-on neurotic mode, put the camera on manual, take RAW images, and keep your aperture, shutter speed and ISO constant throughout the shoot.
Even though you have a drone, you still likely want to take photos of ground details by hand (unless you have a really good camera on the drone - my 12 MP is noticeably worse than the 24 MP). You want all the pixels you can get, since that helps with the alignment - as long as they are NOT mushed by poor JPEG compression, like my Mavic's are. Hence RAWs.
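The 80%-overlap rule above can be turned into a rough shot-count estimate for an orbit around a subject. A small sketch, assuming a simple circular pass and a pinhole-camera footprint (the function and the numbers are illustrative, not taken from any particular tool):

```python
import math

def shots_for_orbit(radius_m, hfov_deg, overlap=0.8):
    """Rough photo count for one circular pass around a subject, keeping the
    given overlap between consecutive frames (pinhole-camera approximation)."""
    frame_width = 2 * radius_m * math.tan(math.radians(hfov_deg) / 2)
    step = frame_width * (1 - overlap)        # how far the view advances per shot
    return math.ceil(2 * math.pi * radius_m / step)

# Orbiting 10 m from the subject with a ~60 degree horizontal field of view:
print(shots_for_orbit(10.0, 60.0))  # 28 shots - close to the "32 pics" rule of thumb
```

Tightening the overlap to 90% roughly doubles the count, which is why "when in doubt, shoot more" is cheap insurance.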
There are lots of tips in archaeology blogs as well, I think.
For reference, check out the material around the open-source packages VisualSFM, COLMAP and Meshroom, as well as the relevant online resources for the software you are using.
Don't want to speak for the OP, but that's typically the kind of quality I get directly out of Meshroom with no work other than throwing the pics in it and waiting ~ 4 hours (less if you have a fast, modern GPU)
"with no work other than throwing the pics in it and waiting ~ 4 hours (less if you have a fast, modern GPU)"
Personally I find taking the photos (~1000) is more work than actually waiting for the software to complete the processing. Sure, the processing takes longer, but it's not as if you are turning the crank on the PC the whole time :)
The biggest job was sourcing the images. That took maybe two hours. Of course the PC had to compute for some hours, but I can do other things while that's going on, so it's not really a time investment as such (do it overnight).
Roughly 800. Half by drone (Mavic Pro) with a 12 MP camera and half on foot with a Sony Alpha 6000 (24 MP) with a fixed lens. Had to take low shots on foot mainly because 24 MP pics are much better source data than 12 MP pics. That's why the roof details are not as good as the details on the ground.
I haven't tried the commercial packages, but Meshroom is really impressive.
With enough drone pics of a building, it's pretty much:
- launch meshroom
- throw pics in there
- press the "run" button
- come back 3-4 hours later
- load hi-rez fully textured model into blender
- bit of cleanup
- render
In particular, in the last step, you can do orthographic renderings that look like architect plans.
I highly recommend Meshroom. In particular, unlike much of the photogrammetry software I used before, my experience has been that almost no tweaking is required.
The only minor gripe: the models come out a bit "heavy" by default (too much geometry in some places).
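The usual fix for "heavy" output is a decimation pass - Blender's Decimate modifier or MeshLab's quadric simplification are the standard tools. Just as an illustration of the idea, here is a crude vertex-clustering sketch (purely illustrative; those tools use much smarter error metrics internally):

```python
def decimate_by_clustering(vertices, triangles, cell=0.05):
    """Crude decimation by vertex clustering: snap every vertex to a grid of
    spacing `cell`, merge vertices that land in the same cell, and drop any
    triangle that collapses as a result."""
    cluster_of = {}   # grid cell -> index of the merged vertex
    remap = []        # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:   # drop faces collapsed to a line/point
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

The larger the cell, the more aggressive the reduction - and the more fine detail you lose, which is why quadric-based methods that preserve sharp features are preferred in practice.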
Was this always open source? Somehow in my photogrammetry journey I’ve heard of this and not tried it. I guess I assumed it was proprietary. Anyone have any comments on how this compares to COLMAP? Looks like better UI at least.
The install process for AliceVision and Meshroom is a pain in the ass; I basically gave up at some point. Would love to hear some input from others, though. I'm currently using OpenSfM and it works great, just very slow.
It's really nice! I read more about the project and it's apparently always been open source. It looks like a high quality project run by some researchers with university and EU backing. I've been trying to reconstruct large outdoor environments for robotics simulation and this might be just what I need!
Nice. I don't have a clear plan for localization yet. Right now I am working on deep net based trail following. [1] Eventually I will probably do some kind of vision based localization. [2]
If you can share info on your project I'd love to see it!
I can't unfortunately - work stuff - but definitely check this out if you haven't already: https://github.com/xdspacelab/openvslam Also, that ethz-asl repo contains a library called libpointmatcher; it's pretty awesome if you ever need to do point cloud alignment.
I've actually had really good success with it outdoors, much better than indoors. Worth trying out at least; it's very easy to get started with if you can use Docker.
I've been watching people do photogrammetry for a while now, but the results have seemed really bad. When you remove the texture, the underlying model always seems off. This is important for doing object->mesh->modify->3D-print workflows.
This implementation looks a bit better than the rest.
I very much second this, as I hate relying on proprietary software of any sort for stuff that's essential, has a steep learning curve, or that I am going to use on a regular basis (plus other criteria). GPU drivers match the first and last point.
It seems like this has good promise, but the scale of the output is arbitrary.
I've been looking for a good way to make a customized face mask using photogrammetry and my 3d printer, so I'm going to give this a shot. The only issue will be scaling the model so that my head is the right size for the modeling process.
The scale issue is a problem with any software like this, unfortunately. It makes processing in chunks very difficult, as you get outputs at different scales with nothing really relating the chunks together. OpenSfM does some neat stuff where it uses location data and just searches for matches between images near that location.
Yes, you definitely could. Not sure about this piece of software specifically, though. But for land-based photogrammetry in general, people usually use known control points with fixed 3D coordinates attached to them to calibrate their models. They also use those to effectively "tie" the resulting model to an actual location on the map.
Unfortunately, scale gets messed up even when transferring regular 3D files between different software packages. It's pretty common when importing a 3D file to have to scale your model by either 0.1, 0.01, 10, 100 etc. just to get things back to normal.
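A common workaround, since the reconstruction's scale is arbitrary, is to measure one thing of known size in the scene (a ruler, a door frame) and uniformly rescale the whole model from that single measurement. A minimal sketch - the function is hypothetical, not part of any tool mentioned here:

```python
def scale_model(vertices, i_a, i_b, real_distance):
    """Uniformly rescale a model so the distance between two reference
    vertices (indices i_a, i_b) matches a measured real-world distance."""
    ax, ay, az = vertices[i_a]
    bx, by, bz = vertices[i_b]
    model_dist = ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
    s = real_distance / model_dist
    return [(x * s, y * s, z * s) for x, y, z in vertices]

# Two corners of a door frame came out 2.0 units apart in the model,
# but the real frame is 0.9 m wide, so everything shrinks by a factor of 0.45.
verts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
scaled = scale_model(verts, 0, 1, 0.9)
```

For the face-mask use case this is exactly the step before 3D printing: pick two landmarks with a known separation (measured with calipers) and rescale before slicing.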
If you don't want to mess with setup etc check out https://get.display.land/ which will let you download and do whatever you want with the resulting models.
Just buy a cheap CUDA-capable Nvidia GPU. For Meshroom, you don't need the latest and greatest; a two-generation-old GPU will do the trick and can be had for peanuts on eBay.
COLMAP is generally underrated IMO. It's what I primarily use because it's very flexible for different pipelines, it produces one of the best SFM results as a starting place, and it's open source with direct access to the database and decent documentation.
Has anyone tried it? I've been trying for a while to find a way to 3D scan stuff to 3D print, but the quality of the final file has always been so crappy that I didn't even want to waste time and filament printing it.
I recently experimented with photogrammetry and I tried several different applications, including Meshroom.
From memory, 3DF Zephyr had the best quality, at least in my limited experience. It was also very easy to use to clean up unnecessary points, which is an important step for high quality output.
Meshroom to me feels like a research testbed. It's not a "product" that's ready to consume. The user interface especially is an unmitigated disaster.
For example: what are the little connected boxes at the bottom, and why do I care? If I need to care about what they are, why do many of them have text that is cut off or contracted with ellipses? E.g.: "dept...lder".
Next to this is a node property editor where the labels line-wrap and can't be resized to fit (despite there being plenty of room). The text next to it is right-aligned, so I see only the random suffixes of the file names, not the actual path prefixes. It's literally a window showing me a bunch of random numbers I don't care about.
I could go on and on about how unusable it is, but it feels like kicking a puppy.
The real issue is that overall it's quite slow - I suspect because, despite requiring a GPU, it's not very efficient with it. Concurrent CPU usage is high, yet progress doesn't seem to advance much, and I rarely see GPU utilisation exceed 20% or thereabouts. Other tools breeze through the same data sets in far less time.
What have you tried that's giving you the crappy file?
I've used Scandy Pro[1] a few times and was surprised by its resolution and ease of use. The only downsides are that it's iPhone-only and you get 1 free scan/24 hrs.
I've gotten ok results with large objects like a chair. But for small trinkets like a figurine or something, I get almost no depth detail, almost like I smoothed the surface.
https://youtu.be/mcWwGBHa24g