Correct, it doesn't render all clicks (I dread to think how browsers would react if I tried) - all the results are put through a k-means cluster analysis at regular intervals to produce approximately 30 visible results. The percentages are calculated relative to all the clicks, though.
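To make the clustering step concrete: here's a minimal, self-contained sketch of reducing thousands of raw click coordinates to ~30 plotted markers with k-means. This is not the actual NYT code - the canvas size, iteration count, and plain-Python implementation are all assumptions for illustration.

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Reduce a cloud of (x, y) click points to k representative centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random clicks
    for _ in range(iterations):
        # Assign every click to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            best = min(range(k),
                       key=lambda i: (p[0] - centroids[i][0]) ** 2
                                   + (p[1] - centroids[i][1]) ** 2)
            clusters[best].append(p)
        # Move each centroid to the mean of its assigned clicks.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = (sum(q[0] for q in cluster) / len(cluster),
                                sum(q[1] for q in cluster) / len(cluster))
    return centroids

# Example: 5,000 simulated clicks on a hypothetical 800x600 photo,
# condensed to 30 visible markers.
rng = random.Random(1)
clicks = [(rng.random() * 800, rng.random() * 600) for _ in range(5000)]
markers = kmeans(clicks, k=30)
```

In practice you'd run this server-side at regular intervals and ship only the 30 centroids (perhaps weighted by cluster size) to the browser, which is far cheaper than rendering every click.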
Do you know if these kinds of interactive pieces perform better (i.e. clicks/shares/views) than other forms, e.g. longform articles or plain data visualization pieces? I personally enjoyed it a lot.
This isn't really my area of expertise - we have people at the company dedicated to studying this kind of thing in far more detail than I can, who may end up reading this - hi! - so I don't want to speak out of turn.
But in my experience there's a lot of variation - things like this interactive aren't as tied to the news cycle as many articles are, so it won't peak as high but may make up for it in longer term traffic.
One possible way of generating such images is to use more photos, since a lot of sports photos seem to be taken as part of a burst sequence. All you need is two such photos where the ball has moved far enough, and the rest shouldn't be difficult.
But won't the players have moved roughly as much as the ball from photo to photo (since they're moving at about the same speed)? I suppose as long as the players haven't moved into the space where the ball used to be you could still trivially use that space to replace the ball in an earlier image.
Try with the stylus - I could be wrong, but last time I played with a Surface Pro I noticed that using your fingers on the touch screen gave touch events, while using the stylus gave mouse events.
That's the work of Sam Manchester, deputy editor on the Sports desk and chief Photoshop wizard. I believe most of it is just cloning different parts of the photo to cover up the ball, though it can get more complex. For example, on the 4th photo of this previous round:
I was wondering how he did that one!!! Part of my thinking was that you'd choose pictures that had the ball in an easy-to-Photoshop location, and that threw me off.
Sports photographers also take many shots in rapid succession. I bet you could clone the background from a shot a half second before the one in the article.
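The clone-from-an-earlier-frame idea is simple enough to sketch: given two aligned frames from the same burst, copy the region where the ball sits from the ball-free frame. This is obviously not how it's done in Photoshop - it's a toy illustration using character grids in place of pixel arrays, with the alignment assumed rather than computed.

```python
def patch_region(target, source, x0, y0, x1, y1):
    """Return a copy of target with the box [x0, x1) x [y0, y1) taken from source.

    Both frames must be the same size and already aligned - e.g. consecutive
    burst shots with negligible camera movement.
    """
    patched = [row[:] for row in target]
    for y in range(y0, y1):
        for x in range(x0, x1):
            patched[y][x] = source[y][x]
    return patched

# Toy 4x4 "frames" with characters standing in for pixels; 'B' marks the ball.
with_ball = [list(".B.."), list("...."), list("...."), list("....")]
earlier   = [list("...."), list("...."), list("...."), list("....")]

# Erase the ball by cloning that region from the earlier burst frame.
no_ball = patch_region(with_ball, earlier, 1, 0, 2, 1)
```

Real burst frames wouldn't line up perfectly, so you'd need registration and some edge blending - which is presumably where the manual Photoshop work comes in.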
The accuracy is calculated compared to other readers, so if you all clicked on the wrong corner you can still be better than 80% of them!
I mostly did it this way because there's no hard number that makes sense here - we don't know ft/metres, and pixels aren't a unit everyone is used to thinking about.
I wanted to ask how long it took to make this, but that's an impossible thing to answer - so how much lead time did you (and the team) have before the first version went live?
I just checked my e-mail - it looks like we decided that we were definitely going to do it approximately two weeks before the first round went live. That's not typical but not necessarily unusual, if that's a sentence that even makes sense.
The concept is not new at all - Spot the Ball is a competition that ran in UK (and possibly other) newspapers going back at least as far as the 70s. It was a cash prize competition and was pretty popular, though it's died out in recent years.
I wanted to bring it back to get people to interact a little more with a highlights photo gallery - it's a lot more fun that way. IMO, it's interesting because it's just the right level of infuriating.
One thing that people don't realise about newspaper spot the ball competitions is that the winning position was not where the ball originally was in the photo, but where the competition organisers thought it should be.
That removed any actual element of skill ("where are the players looking?") and turned it into pure guesswork.
People could buy rubber stamps of a grid of crosses so they could make very many simultaneous guesses.
With regard to languages, there's a real mix. PHP is still probably the most widely used language (including on the main desktop site, and our blogs are powered by Wordpress), but the mobile site runs on Node, and Go is definitely being used in the building.
mobile.nytimes.com is written in CoffeeScript (frontend and backend) and is maintained by a team of around 8 people. AFAIK all of the developers are fine with it.
At least it's given the world a new alternative to 'a to-do list' for demonstrating new technologies/languages/etc. There are ones for various gaming frameworks:
Oh cool, I made something similar at PennApps this year. I made a Chrome extension that let you play a bunch of flappy bird clones by flapping your arms in front of your webcam.
There's no fundamental difference between MJPEG and streaming a sequence of independent JPEG files over HTTP.
The MJPEG format doesn't really exist anymore: it was designed in the '90s to account for interlaced video content, but that's a rare breed nowadays. For progressive video, Photo-JPEG is equivalent.
Many popular intraframe video codecs are basically the JPEG algorithm with some modifications for specific pixel formats and some custom metadata. These include Apple ProRes, Avid DNxHD and the stalwart DV format (as in MiniDV tapes).
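The equivalence with streaming plain JPEGs is visible in the wire format: an MJPEG-over-HTTP stream is just ordinary JPEG files separated by multipart boundaries. A minimal sketch of the framing (the boundary name is arbitrary, and the "JPEG" here is a placeholder, not a real image):

```python
BOUNDARY = b"frame"  # arbitrary; declared in the response's Content-Type header

def mjpeg_part(jpeg_bytes, boundary=BOUNDARY):
    """Wrap one ordinary JPEG file in the multipart framing used for MJPEG-over-HTTP."""
    return (b"--" + boundary + b"\r\n"
            + b"Content-Type: image/jpeg\r\n"
            + b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

# The HTTP response itself is sent with:
#   Content-Type: multipart/x-mixed-replace; boundary=frame
# and its body is just mjpeg_part(frame1) + mjpeg_part(frame2) + ...
fake_jpeg = b"\xff\xd8" + b"pixels" + b"\xff\xd9"  # placeholder bytes, not a decodable image
part = mjpeg_part(fake_jpeg)
```

Each part is a standalone, independently decodable JPEG - which is the whole point: there's no interframe compression, just concatenated stills.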