There's a fascinating animated movie, "Loving Vincent", coming up that is made in the style of Vincent van Gogh. Apparently the individual frames are actual oil paintings made by hand.
When I first saw the trailer, my first thought was of the neural network approach shown here. I wonder how close you could get to the "Loving Vincent" style by simply applying this algorithm to individual video camera frames.
There's a not-very-well-known piece of software, written by one guy, that's been around for 10 years or so and can do this for moving images with an amazing amount of flexibility. It's a lot of fun: http://synthetik.com/
The authors of the paper have their own website providing a similar service: http://deepart.io/page/about/ (which, unfortunately, claims a patent over this).
Well, yeah, but if they can't even produce a single sample image for me, I'd rather not give them money. It's been more than a day and the image still isn't done. It'd be faster for me to just install Caffe and one of the open source implementations and do it myself. For free, as many times as I want.
...the same way we don't need actors for movies anymore since we have 3D rendered special effects. /s
More likely it's going to become another tool to use (and to heavily overuse for a while). The arts will be different, but people won't stop being creative. The art of making cave paintings didn't really go away; now they're called graffiti and are illegal in most modern caves.
Before that becomes the main problem, there's another problem to solve: you give it your picture and the Mona Lisa whose style is to be copied, and the way it copies the style is by painting her eyebrow atop your mouth. Because it's not like this thing can really distinguish between "style" and "content": what it does is paste patches from the "style" image onto the "content" image. Sometimes the result is interesting; much of the time it's somewhere between boring and terrifying. It's cool and all, but it's not much more intelligent than your favorite Photoshop filter.
If the rating is more important than the creation, perhaps they should create a neural network with an image as input, and a single signal as output, that indicates on a scale of 0 to 1 how interesting that image is, from an artistic point of view.
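As a back-of-the-envelope sketch of that idea (PyTorch here purely for illustration; the architecture is arbitrary and the hard part, honest "artistic interest" labels to train on, is entirely hand-waved):

    import torch
    import torch.nn as nn

    # Hypothetical scorer: image in, single 0..1 "interestingness" out.
    class InterestScorer(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

        def forward(self, x):                    # x: (batch, 3, H, W) in [0, 1]
            return self.head(self.features(x))   # (batch, 1) scores in (0, 1)

    scorer = InterestScorer()
    score = scorer(torch.rand(1, 3, 224, 224))   # train with BCE on human ratings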
Alternatively, equip a human with an EEG headset, let them look at images, and take the rating from the output of the scanner.
Looks like standard DeepDream to me, so it's not really the same thing. In the OP they combine two pictures, while this just applies DeepDream over and over until it looks like that.
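For anyone wondering what "applying DeepDream over and over" means mechanically: it's gradient ascent on the image itself, maximizing the activations of some layer. A rough sketch (assuming PyTorch and a pretrained torchvision VGG; the layer cutoff and step size are arbitrary choices here):

    import torch
    from torchvision import models

    model = models.vgg16(pretrained=True).features[:16].eval()
    for p in model.parameters():
        p.requires_grad_(False)

    def dream_step(img, lr=0.05):
        img = img.clone().requires_grad_(True)
        model(img).norm().backward()          # push activations up, not down
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
        return img.detach()

    img = torch.rand(1, 3, 224, 224)
    for _ in range(50):                       # "over and over"
        img = dream_step(img)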
Do you know how they get stable videos out of DeepDream? I've seen it elsewhere too, but my initial guess would have been that DeepDream gives ridiculous results if you run it on each frame individually; I'd expect it to be sensitive to tiny differences between frames.
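I don't know what that particular video did, but one trick I've seen (a guess, not necessarily their method) is to warm-start each frame from the previous frame's dreamed result rather than from the raw frame, so the hallucinated patterns carry over. Reusing dream_step from the sketch above:

    def dream_video(frames, steps=20, carry=0.7):
        out, prev = [], None
        for frame in frames:                  # frames: (1, 3, H, W) tensors
            # blend in the previous result so patterns stay put across frames
            img = frame if prev is None else carry * prev + (1 - carry) * frame
            for _ in range(steps):
                img = dream_step(img)
            prev = img
            out.append(img)
        return out

I believe fancier versions warp the previous result along the optical flow between frames before blending, which handles camera motion better.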
Awesome. It would be even cooler if it could pick up specific features like faces and replicate the style there. Right now it kind of brushes over most faces unless the face dominates the picture.
I'm interested in the question too and, if he's of similar mind to me, that's not what he meant.
I'm curious what it takes to make a piece of software like this. People say "using a neural network" and such, but if I make an app that can load a picture and then add an NN to it, this won't come out of it. I'm wondering what the specific function of the networks is here.
Definitely. I want to understand what's going on here and be able to apply some creativity to the process, not just use existing tools without understanding.
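To make that concrete: per the paper linked elsewhere in the thread, the networks aren't generating anything directly. A pretrained classification net (VGG) is used as a fixed feature extractor, and you optimize the pixels of an image so its deep features match the content photo while its feature correlations (Gram matrices) match the style image. A condensed sketch, assuming PyTorch; real implementations use several style layers and carefully tuned weights:

    import torch
    from torchvision import models

    vgg = models.vgg19(pretrained=True).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def features(img, layer=21):              # index 21 ~ conv4_2 in VGG19
        x = img
        for i, m in enumerate(vgg):
            x = m(x)
            if i == layer:
                return x

    def gram(f):                              # "style" = channel correlations
        b, c, h, w = f.shape
        f = f.view(c, h * w)
        return (f @ f.t()) / (c * h * w)

    content = torch.rand(1, 3, 256, 256)      # stand-ins for real images
    style = torch.rand(1, 3, 256, 256)
    with torch.no_grad():
        c_target = features(content)
        s_target = gram(features(style, layer=3))

    img = content.clone().requires_grad_(True)  # start from the content photo
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(200):
        opt.zero_grad()
        c_loss = (features(img) - c_target).pow(2).mean()
        s_loss = (gram(features(img, layer=3)) - s_target).pow(2).sum()
        (c_loss + 1e4 * s_loss).backward()    # the weight ratio is tuned by eye
        opt.step()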
It would appear that in each of the examples, the result uses the top source image for "content" and the bottom one for "style". So no, it's not commutative.
https://github.com/jcjohnson/neural-style
and the paper:
http://arxiv.org/pdf/1508.06576v2.pdf
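For what it's worth, that repo takes the two images as separate flags (-content_image and -style_image, if I remember the README right), so swapping them between two runs is an easy way to see the asymmetry described above.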