Mine too. My initial (naive) thought on developing something like this would be to have advanced users perform the tasks I wanted to automate, record the series of actions they took, and then analyze those recordings to try to find the patterns.
noidi's YouTube link explains it pretty well. It searches the rest of the image for sections that are similar to the edges of the deleted area and extrapolates from there. I imagine for large deleted areas this would have to be done multiple times to build up the deleted area. It would be kind of like Markov chains, but for image data.
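That search-and-extrapolate loop can be sketched in a few lines. This is only a toy illustration of exemplar-based filling on a grayscale grid, not Photoshop's actual algorithm; the function name and the 3x3 patch size are my own choices. Hole pixels are `None`; the loop repeatedly picks a hole pixel on the hole boundary, finds the patch elsewhere whose known pixels best match its surroundings, and copies that patch's centre value in:

```python
def inpaint(img):
    """Greedily fill None pixels using the best-matching 3x3 patch
    found elsewhere in the image (sum-of-squared-differences)."""
    h, w = len(img), len(img[0])

    def patch(y, x):
        # 3x3 neighbourhood around (y, x); None where out of bounds
        return [img[y + dy][x + dx]
                if 0 <= y + dy < h and 0 <= x + dx < w else None
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

    def ssd(p, q):
        # compare only positions that are known in both patches
        pairs = [(a, b) for a, b in zip(p, q)
                 if a is not None and b is not None]
        if not pairs:
            return float("inf")
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

    while True:
        holes = [(y, x) for y in range(h) for x in range(w)
                 if img[y][x] is None]
        if not holes:
            return img
        # pick a hole pixel that borders known pixels
        y, x = next((y, x) for y, x in holes
                    if any(v is not None for v in patch(y, x)))
        target = patch(y, x)
        best, best_val = float("inf"), None
        # brute-force search every known pixel as a candidate source
        for sy in range(h):
            for sx in range(w):
                if img[sy][sx] is None:
                    continue
                d = ssd(target, patch(sy, sx))
                if d < best:
                    best, best_val = d, img[sy][sx]
        img[y][x] = best_val
```

A real implementation would prioritize which boundary pixel to fill first and copy whole patches rather than single pixels, but the core idea is the same: the hole is grown inward from its edges, each step conditioned on what was just filled, which is where the Markov-chain analogy comes from.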
The current patch tool in Photoshop can already adjust contrast and hue to make a patch source match the patched area, so when the algorithm searches for similar source areas it can focus on the image details rather than on lightness or the actual colours.
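One simple way to do that kind of adjustment is to normalize the source patch's statistics to those of the area being patched. This is a hedged sketch of the general idea, not Photoshop's method; `match_tone` is a hypothetical name, and it shifts and scales a patch's values so their mean and spread match the known pixels bordering the hole:

```python
import math

def match_tone(source, border):
    """Shift/scale source values so their mean and standard deviation
    match those of the known pixels bordering the patched area.
    (Illustrative only; a real tool would do this per channel.)"""
    def stats(vals):
        m = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
        return m, sd

    ms, ss = stats(source)   # source mean / spread
    mb, sb = stats(border)   # border mean / spread
    scale = sb / ss if ss else 1.0
    return [(v - ms) * scale + mb for v in source]
```

With the tone matched first, the similarity search only has to score texture and structure, which is why it can ignore absolute lightness and colour.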
How is something like this actually developed?