For all the crap Adobe gets over Flash, this is where their primary area of expertise lies. I've been using Photoshop since the days when there were no layers and only a very limited number of undos. They never fail to deliver something incredibly useful and time-saving with each iteration.
I'm still using Photoshop 4, which I got in 1996 or so. There are layers, but only one undo. I don't use it enough to justify the upgrade, but I have done much of what's shown in this video by hand from time to time, and it is extremely time-consuming. This is quite an advancement. I usually give Adobe a lot of crap, but this is really very nice.
But I wonder... are tools like this for manipulating reality making it less and less interesting? Are we losing our love of reality because of the dreams we can so easily create?
Most importantly, it’s not especially interesting. Figuring out what needs fixing takes a human eye, but actually cutting a distracting pole or bit of lens flare out of a picture is work for a patient robot.
> Are we losing our love of reality because of the dreams we can so easily create?
Um... no?
How do you define “reality”? If it depends on what can be shown in photographs, then the time to have this discussion was around 1850.
The most impressive thing about it is that they still manage to pack so much useful stuff into such a small amount of time. If this keeps up, I doubt I'll ever spend more than 30 minutes photoshopping an image. Ever.
> they still manage to pack so much useful stuff into such a small amount of time.
What do you mean? It’s a 20-year-old program. If every year or two you implement 3 or 4 SIGGRAPH papers, after a decade you too will have quite a nifty thing.
> I doubt I'll ever spend more than 30 minutes photoshopping an image. Ever.
Depends what your job description is. If you’re a retoucher paid by the photo, this is already pretty much true. If you’re a perfectionist artist, it will probably never be.
I'm not into SIGGRAPH papers, though if they implement those, it would explain how these features get made in so little time. I am, however, still amazed by the possibilities of Photoshop: Photoshop shows us how the human brain is capable of finding patterns in nature and copying them realistically.
Surely, the latter comment was an overstatement if you're working professionally with Photoshop. But I do think there would sooner or later be some truth in my statement: if Photoshop manages to imitate realism well enough that humans don't see mistakes, then the only thing they would have to work on is speeding up the algorithm. (In that area of Photoshop, of course.)
Hi, this is Connelly. I'm the first author on that paper. Adobe ATL's business model is that they do research and then put it in products. So Dan took the code I wrote for a summer internship (the paper you referenced; to be precise, we actually did the work in the fall before SIGGRAPH, because I only thought of the algorithm in my last week as an intern) and put it in Photoshop. So we're not ripping off anyone but ourselves :-).
By the way, a shout-out to pg. I have mad respect for him and Jim Clark, both computer scientists/entrepreneurs who have worked really hard and selflessly to get engineers more equity in the wealth they create.
Awesome! Yeah, I realized that two of the authors were at Adobe and had a suspicion that you might have been an intern there as well, but wasn't sure.
As a computer vision researcher, I guess it sometimes irks me when I see great research work done but the credit falls on the company that commercializes it.
Glad to hear that you were yourself involved at Adobe, and it seems like you've gotten a fair amount of good publicity from this great work. Congratulations!
I compared the Adobe CS image with what I could get from resynthesizer on GIMP: http://imgur.com/W8k1B. GIMP is the top one; I just cut off the stuff outside the selection line and ran it.
By changing its position in the sentence, you alter the meaning of “sufficient” (this “sufficiently” and this “insufficiently” are not opposites), and the primary implications of the two sentences are thus not contrapositives.
At 4:44 it just looks like they masked off an area and then switched off the mask. It looks fake because it's so good - without actually using it one couldn't tell.
Do you mean the fact that it seems to load in two parts? That's quite normal behaviour, as the images that are used probably have quite a high resolution and the downscaling takes some time. Just try taking any picture with a nice big resolution and then applying an effect. It will load gradually and not at once.
If you meant something else, could you explain it a bit clearer?
It looks potentially fake because it's such a good final result. In short, it's almost unbelievably good (in the video).
Elsewhere in the thread I repeated the experiment on one of the images with the resynthesizer GIMP plugin and got very good results too, but not quite as good. Of course I could probably have chosen an image which would have appeared to produce equally good results...
If you look at the cloud area above the 'created' mountain, then look at the slope to the left of the mountain (inside the panorama) and the cloud area above that, you can see that it's very similar.
My guess as to how this works is that it looks some distance in from the selection edge, and then repeats that outward (or inward), but matches up the edges to the selection. You can see it in the deleted road where the 'desert' on the bottom right of the new area matches the brush to the left of it, but the rest matches the brush to the top and right of it.
It's like creating a seamless tiling background, except instead of doing it in a square, it does it in an arbitrary selection path.
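Just to make that guess concrete, here's a toy Python sketch of the "match the border, synthesize inward" idea. To be clear, this is emphatically not Adobe's actual algorithm (per Connelly's comment above, that's the much faster patch-search method from their SIGGRAPH paper), and `naive_fill` is just a name I made up:

    # Toy exemplar-based fill: grow the hole inward one pixel at a time,
    # copying from whichever known patch best matches the hole's border.
    # Grayscale, brute force, and assumes the hole sits at least `patch`
    # pixels away from the image border.
    import numpy as np

    def naive_fill(image, mask, patch=7):
        """image: 2-D float array; mask: True where pixels are unknown."""
        img = image.copy()
        known = ~mask
        r = patch // 2
        h, w = img.shape
        while not known.all():
            # Pick an unknown pixel on the fill front (touching known pixels).
            ty = tx = None
            for y, x in np.argwhere(~known):
                if known[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].any():
                    ty, tx = y, x
                    break
            t_patch = img[ty - r:ty + r + 1, tx - r:tx + r + 1]
            t_known = known[ty - r:ty + r + 1, tx - r:tx + r + 1]
            # Brute-force search for the fully-known patch whose overlap
            # with the known part of the target patch matches best (SSD).
            best, best_cost = None, np.inf
            for sy in range(r, h - r):
                for sx in range(r, w - r):
                    if not known[sy - r:sy + r + 1, sx - r:sx + r + 1].all():
                        continue
                    s_patch = img[sy - r:sy + r + 1, sx - r:sx + r + 1]
                    cost = ((s_patch - t_patch)[t_known] ** 2).sum()
                    if cost < best_cost:
                        best, best_cost = (sy, sx), cost
            img[ty, tx] = img[best]        # copy the winning centre pixel
            known[ty, tx] = True
        return img

Brute force like this is hopelessly slow on real images, which is presumably where the clever part of the real algorithm lives.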
I don't think so; it looks like it does a lot of stuff in the frequency domain that you wouldn't get by just smearing inwards or outwards along the surface normal of the selection.
It's super cool, for sure. But I don't see how it's different from Alien Skin's Image Doctor (http://www.alienskin.com/imagedoctor/index.aspx ), which has been available as a plugin for years. I've been using it in PaintShopPro for quite some time.
And this is why I shoot mostly film... with digital it's too easy and tempting to resort to parlor tricks instead of taking a genuinely good photograph.
I just scored a semi-nice film SLR the other day; it used to belong to my parents but they've gone digital and it hasn't been used in yonks. It's certainly nowhere near as programmable as even a cheap digital SLR, but whatever, it's fun to use.
The difference, in terms of taking shots, is night and day - with my digital camera, I'll snap 10+ shots of the same thing to try and find the right one; with this, I spend five minutes angling around and trying to find the right shot off the bat.
I'm not a good photographer by any means; in fact, I'm substantially below average if I'm honest about it. But raising the bar to entry certainly makes you focus on what you're doing a lot more.
Which I guess is a long and winding way of saying I agree. It's more fun when there's more effort involved.
I feel the same way - there's a greater sense of accomplishment when you allow yourself only a single opportunity to capture what you see in front of you.
In photography parlance we call it chimping - i.e., jumping around like a monkey and clicking the shutter like crazy, trying to get the right shot. Even if you get the right one, you have no idea if it's your skill, luck, or just the law of large numbers...
When I shoot film I feel more purposeful - I rarely ever take more than a single exposure of any one particular thing - it also teaches you to be patient and hold off on the shutter until you know you have the right shot. On rapid-fire you will often get a good picture but have no idea what makes it good, whereas with film I'm aware that this picture works because, say, I waited until there was no one in the background.
My current favored camera is a Leica R4s - only manual and aperture priority modes, manual focus, manual aperture control, and only 2 metering modes (you only ever need one - center-weighted). When you strip a machine down to the bare minimum it opens a lot of creative doors.
I shoot film (as an amateur) by choice (sold my DSLR last year), but I still scan all of it, so every now and again it still gets tempting to give it the PS treatment...
I used to have a second hand Minolta Scan Dual IV, which is a dedicated 35mm scanner.
I've since started shooting 35mm & medium format, and I sold the Minolta and bought an Epson V700, which is a flatbed. I also used an Epson 4490 while I was in NY for a while last year, really good value for money.
Thanks, sorry for the late response. I have a friend (Wiktor Wołkow) in Poland that has an archive of about 150,000 images, no backups.
It's his life's work; most of it is 35 mm, some on 60x60. He's been a professional photographer all his life, and the thought of a fire there scares me.
So I have been thinking off and on about how to tackle that job, it would take quite a bit of time and money to do it properly.
There was a company a couple of years ago that did outsourced bulk film scanning. You basically sent them your film en masse, they sent it to India and had it scanned there, and then sent you back the film and the scans.
Couldn't find them with a quick google though; they may not have survived the GFC.
That made my day. I'm kind of worried, though, that the examples are hand-picked and that by default it doesn't look so nice. I would love to know the algorithm behind this.
I was pretty impressed just by the first part, but when it got to the desert and finally the cloud, my jaw was literally open by the end. Totally gobsmacked.
Unbelievable. My head is hurting from trying to imagine the concept behind the algorithm to do this. The desert and sky recreations are especially mind boggling.
Mine too. My initial (naive) thought on developing something like this would be to have advanced users perform the tasks that I wanted to automate, and recording the series of actions that they took to perform the task so I could analyze it and try to find the patterns.
noidi's youtube link explains it pretty well. It searches the rest of the image for sections that are similar to the edges of the deleted area and extrapolates from there. I imagine for large deleted areas this would have to be done multiple times to build up the deleted area. It would be kind of like Markov chains, but for image data.
The current patch tool in Photoshop can already adjust contrast and hue to make a patch source match the patched area, so when the algorithm is searching for similar areas to use as a source it can just focus on the image details and not the lightness or actual colours.
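If that's right, the tone-matching part is conceptually pretty simple. Something like this sketch (my own illustration, not Photoshop's code; both helper names are made up): compare patches with their mean brightness removed, then shift the chosen source to the target's local mean when pasting.

    import numpy as np

    def detail_distance(a, b):
        """SSD between two equal-sized patches, ignoring overall brightness."""
        return (((a - a.mean()) - (b - b.mean())) ** 2).sum()

    def retone(source, target_mean):
        """Shift a chosen source patch so its mean matches the patched area."""
        return source - source.mean() + target_mean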
This page http://schwarzvogel.de/resynth-tut-sa.shtml shows an example of growing an image by auto-filling the missing edges around a photo (similar to the last example in the video).
I suppose the new algorithms in PS are better, but still it's been over 5 years now :-)
I'm pretty sure this isn't going to be in CS5. The engineer says as much and John Nack, who just posted it, says it'll be in a "future" version of Photoshop.
Which, if I'm right, makes this a horrible way to steal CS5's thunder!
This is absolutely amazing. The PS guys are really pushing the envelope of image editing. I think it's really cool that the editing process is more structural, more high-level, than working with the basic tools. The lens flare removal, especially, is an incredibly common and tricky problem.
At the same time, I am glad that the effects are still immediately visible, even in a low-res youtube video. It's nice to have hints of what the original actually looked like, sometimes.
It is indeed a great algorithm. I think similar algorithms will be used to "create" music in the future, although I shudder to think of how terrible pop music will be when they can just create "content aware" music by selecting a few bars and generating the rest.
wow. i'd definitely be interested in seeing the full image and seeing if there are any weird artifacts going on at the transition area between the selected border and the generated "content-aware" fill. from what i can tell, this is close to magic.
It reminds me of that other algorithm that went around a couple of months ago, where they do similar things by combining photographs on the web. Might have been from Google, Microsoft, or some independent researchers.
That was actually one of the less impressive things the demo did, since it replaced the tree with what is essentially a smooth gradient. This is something that can be done by early image inpainting algorithms, such as this one from SIGGRAPH 2000:
Yes, but you have to give it some credit for deciding to use a smooth gradient, when it is clearly not the only choice. (Also, it's hard to tell in this video exactly what it really replaced it with, at the pixel level.)
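For comparison, that "smooth gradient" behaviour is roughly what plain diffusion-based inpainting produces: keep replacing each hole pixel with the average of its neighbours until things flatten out. A minimal Python sketch (my own toy version, not the SIGGRAPH 2000 algorithm itself, which as I understand it also propagates edge directions into the hole):

    import numpy as np

    def diffuse_fill(image, mask, iters=500):
        """image: 2-D float array; mask: True where pixels are unknown.
        Assumes the hole doesn't touch the image border (np.roll wraps)."""
        img = image.copy()
        img[mask] = img[~mask].mean()            # crude initial guess
        for _ in range(iters):
            # Average of the four neighbours of every pixel.
            avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                   np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
            img[mask] = avg[mask]                # update only the hole
        return img

Run to convergence, this is just solving Laplace's equation inside the hole, which is exactly why the result looks like a smooth gradient.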
I don't believe any amount of technology will ever remove the human from a creative task. Technology can offer tools and speed and power, but it is worth nothing without a creative mind to make use of it.
I mentioned this in another comment thread, but there was an article about Gary Kasparov discovering that a human supported by a computer can beat both a lone human and a lone computer.
Technology simply makes mundane tasks easier so more energy can be spent on creative possibilities.
>I don't believe any amount of technology will ever remove the human from a creative task. Technology can offer tools and speed and power, but it is worth nothing without a creative mind to make use of it.
This might not be quite what you meant, but if creativity is not a computable function, what do you think it is?