
If you want to see the same thing done 18 years ago without new-age machine learning, read https://www.mrl.nyu.edu/projects/image-analogies/index.html. IMO it's the most elegant vision/graphics algorithm ever written.

Specifically, this is the "texture-by-numbers" application. Example: https://www.mrl.nyu.edu/projects/image-analogies/potomac.htm...

Every single fancypants application of neural nets in graphics today is a retread of one of the applications of the Image Analogies algorithm.




Is it really the same thing? The method you cite is a type of style transfer: input a sharp, focused image and you get a blurrier version out in the required style. You're removing information with a particular type of convolution.

The NVIDIA version seems to inpaint new details into the user-segmented areas, like a collage of sorts.


This is not the same:

"we first label a photograph or painting by hand to indicate its component textures. We then give a new labeling, from which the analogies algorithm produces a new photograph"

The DL approach generalizes over many images and can derive some "idea" of how a given class (e.g. a tree) should look.
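
For context, the core of texture-by-numbers can be sketched as a nearest-neighbour lookup over label neighbourhoods: for each pixel of the new labeling, find a source pixel whose hand-painted label neighbourhood matches best and copy its color. The toy Python below is my own single-scale simplification (function and parameter names are mine; the actual paper adds a coherence term, a multiresolution pyramid, and matching on the example photograph's own pixels as well), just to make the contrast with the learned approach concrete:

    # Toy sketch of texture-by-numbers style synthesis (not the paper's
    # actual algorithm): nearest-neighbour lookup over label neighbourhoods.
    import numpy as np

    def texture_by_numbers(src_rgb, src_labels, dst_labels,
                           patch=5, n_candidates=2000, seed=0):
        """src_rgb: (H, W, 3) example photograph; src_labels: (H, W) int
        hand-painted labeling; dst_labels: (H2, W2) int new labeling.
        Returns a synthesized (H2, W2, 3) image."""
        rng = np.random.default_rng(seed)
        r = patch // 2
        H, W = src_labels.shape
        H2, W2 = dst_labels.shape

        # Pad label maps so every pixel has a full neighbourhood.
        src_pad = np.pad(src_labels, r, mode="edge")
        dst_pad = np.pad(dst_labels, r, mode="edge")

        # Randomly sample candidate source positions instead of an
        # exhaustive search over every source pixel.
        ys = rng.integers(0, H, n_candidates)
        xs = rng.integers(0, W, n_candidates)
        cand = np.stack([src_pad[y:y + patch, x:x + patch].ravel()
                         for y, x in zip(ys, xs)])

        out = np.zeros((H2, W2, 3), dtype=src_rgb.dtype)
        for i in range(H2):
            for j in range(W2):
                q = dst_pad[i:i + patch, j:j + patch].ravel()
                # Distance = number of mismatched labels in the neighbourhood.
                d = (cand != q).sum(axis=1)
                k = int(d.argmin())
                out[i, j] = src_rgb[ys[k], xs[k]]
        return out

The learned approach replaces this per-image copy step with a generator trained over many images, which is where the "idea of a class" comes from.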


Wow, thanks for sharing!



