This is nerd sniping at its best. I immediately fell into the exact trap described in the article. My first reaction: can we represent it with 3D B-splines and solve analytically for tangent rays? My second reaction: humans are pretty good at contouring, but we rely heavily on shading and contrast-change information. Can we ray trace a bunch of images shaded from different points and apply convolutional nets to get contours? I am pretty sure the second approach is a bit of a chicken-and-egg problem and the first one has plenty of gotchas, but it was entertaining to think about.
My first thought, not being good with math or spatial thinking, was just to postprocess the rendered depth map somehow, look for sharp dropoffs, and then do some kind of intersection between the lines from the map and the model.
> postprocess the rendered depth map somehow, look for sharp dropoffs
This is fine, if you want a simple ~solid stroke. A simple edge detection kernel filter (e.g. Sobel) on the depth and/or normal map is the basis of most outline-like things you see (in games at least) today. This is useful, but it's not an accurate representation of the occluding contour, and it's in an unhelpful form for further processing (kind of like a raster vs. vector image).
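For concreteness, here is a minimal sketch of that kind of image-space pass, assuming you already have the depth buffer as a float array (names here are illustrative, not any particular engine's API):

    import numpy as np
    from scipy import ndimage

    def depth_outline(depth, threshold=0.05):
        # Sobel gradients pick up sharp discontinuities in the depth buffer
        gx = ndimage.sobel(depth, axis=1)
        gy = ndimage.sobel(depth, axis=0)
        magnitude = np.hypot(gx, gy)
        # Thresholding yields a binary outline mask: a raster result,
        # not a vector description of the occluding contour
        return magnitude > threshold

You can run the same thing on a normal map (per channel) and combine the masks to also catch creases that don't show up as depth discontinuities.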
If you use ray marching then you could basically find the occluding contours for free, or at least it works when ray marching SDFs. Not sure if meshes would present more of a problem though.
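Roughly what I mean, as a sketch where scene_sdf stands in for whatever signed distance function you're marching against (the names are made up):

    import numpy as np

    def march(origin, direction, scene_sdf, max_steps=128, hit_eps=1e-3, edge_eps=5e-3):
        # Standard sphere tracing, but also track the closest approach to the
        # surface; rays that graze a silhouette get very close without hitting
        t = 0.0
        closest = float("inf")
        for _ in range(max_steps):
            d = scene_sdf(origin + t * direction)
            closest = min(closest, d)
            if d < hit_eps:
                return "hit", t
            t += d
        # The ray missed but passed near the surface: treat it as contour
        if closest < edge_eps:
            return "contour", t
        return "miss", t

    # e.g. a unit sphere at the origin, camera looking down +z
    print(march(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                lambda p: np.linalg.norm(p) - 1.0))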
Can you describe what an internal contour is? As far as I can tell from reading the article and linked pages, it's just an edge with a sharp drop-off in the depth map?
Consider the pig picture in the article (the one under "The Occluding Contour Problem"). The method proposed by dvh would give the outline, but not the contour of the pig's left ear for example.
It would, if the enlargement is not by linear scaling but instead produces a shell around the object at some distance from the surface. The inverted-normal shell of the ear would then occlude the body behind it, while itself being occluded by the slightly smaller ear, producing a contour line.
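Something like this, as a rough sketch for a triangle mesh (assuming per-vertex unit normals; none of this is tied to a particular engine):

    import numpy as np

    def outline_shell(vertices, normals, faces, thickness=0.02):
        # Push each vertex out along its unit normal by a fixed distance, so the
        # shell sits at a constant offset from the surface instead of being a
        # scaled-up copy of the whole object
        shell_vertices = vertices + thickness * normals
        # Reverse each triangle's winding so the shell faces inward; with
        # back-face culling enabled, only the parts of the shell that peek out
        # around the silhouette and around overlapping parts stay visible,
        # which is what reads as the outline and the internal contours
        shell_faces = faces[:, ::-1]
        return shell_vertices, shell_faces

Render the shell together with the original mesh and the depth test takes care of the rest.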
That hack was a common way to render outlines in fixed function pipeline days, yes. It only really works for simple, convex forms and "works" is probably a bit generous. It's far from a general-purpose outline technique and an outline is just one effect you might want an occluding contour for.
Something that has been eating at my brain a lot lately is that we are still using triangles to represent 3D objects, even where we really shouldn't.
Sure, we've gotten really good at it, but at this point it seems like we are starting with a complex smooth shape, approximating that shape with triangles, then trying to make that collection of triangles look like a smooth shape. Can we factor out the triangle step?
The most painful part of this experience is 3D printing: millions of people are using mathematical formulas to first construct a 3D object (CSG modeling), then to wrap the object in triangles (export to an STL file), so that a slicer can tell the printer how to fill in the domain of triangles. The triangles are literally inserting themselves into our reality!
This introduces a lot of finicky problems:
* Sharp edges and flat faces on what was supposed to be a smooth sphere or cylinder
* Broken manifolds that result in slicer errors
* Holes that get mistakenly filled in by the slicer
* The topology of triangles defining what can and cannot be changed in the object, and where
Back in the good old days™, I could tell POVRay that there is a 2-unit diameter sphere with a 1-unit diameter cylindrical hole in it, and it would go right ahead and raytrace that. If I wanted to do the same in Blender, I could only create a vain approximation of that, spoiled by triangles and entropy.
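For what it's worth, that shape is easy to write down exactly as an implicit model too; a rough sketch in Python using signed distance functions (not how POV-Ray represents it internally, just the same triangle-free idea):

    import numpy as np

    def sphere_sdf(p, radius=1.0):       # 2-unit diameter sphere
        return np.linalg.norm(p) - radius

    def cylinder_z_sdf(p, radius=0.5):   # 1-unit diameter cylinder along z
        return np.linalg.norm(p[:2]) - radius

    def sphere_with_hole(p):
        # CSG difference: inside the sphere and outside the cylinder
        return max(sphere_sdf(p), -cylinder_z_sdf(p))

No triangles anywhere, and a raymarcher or isosurface extractor can consume it directly.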
It's not just that we got good at dealing with triangles, it's that triangles are really easy to work with.
I think the reason for using triangle meshes in 3D printing is for the benefit of the slicer (or its developers) -- the slicers consume triangle meshes because that made building the software feasible. Your printer quite likely implements some amount of non-linear (as in curved) moves.
Having said that, I'm sure I heard relatively recently about some slicers starting to support STEP input files or some other solid modeling format, but my searches just turned up "how to convert your STL files to STEP files" SEO trash, which is a bit funny.
It seems like some of the difficulty comes from trying to calculate the contour as a vector rather than a raster image.
I would think that a method that only calculates the contour up to a given resolution would be much easier. Rendering the model with location mapped to color, then running a post-processing step on the image, seems like it should be able to do the job.
It is almost certainly easier (in practice) to perform an edge detection or similar post processing step (usually on depth and/or normal map) in image space to get something that looks like the occluding contour. The utility of that data is limited, however.
At the risk of getting semantic, I'd argue that the raster representation of such a contour is not the contour. That is, calculating the contour as a raster image is just calculating something different.
The author's detailed analysis and breakdown of occluding contours in different forms of art, from painting to animation, offer a unique perspective on how artists use visual cues to convey depth and dimension.