So it’s OK for Google to scrape and save images and display them without compensating photographers (which I’m cool with), but not for users? This is bizarre, anti-user functionality.
By default if the image is available on the web, it is meant to be seen. This is going to make high school PowerPoint users spend more time, and result in zero extra revenue and zero drop in copyright infringement.
The more interesting question is why Google is doing this.
Why not just remove the feature for pictures from Getty then, or simply not show their pictures at all? Surely that is within the technical capability of Google.
Do you think every image company would rather be removed from Google Search results than build their services so the full-resolution images can't be scraped? How would users get to their websites?
It doesn't matter. Once one company wins in court, it sets a precedent. It attracts more companies to follow the same path as they'll think they have a better shot at winning. Google obviously wouldn't risk this.
Because Getty wasn't suing to stop Google from sharing Getty images in image search. They were suing because they felt coerced into participating in image search. Forcing Getty not to participate would have been even worse for Getty.
I don’t believe that this is the sole motivator. As others have said, they could remove this for Getty images only (based on domain, based on reverse image search, lots of ways). But also, this seems like an absurd judgement; if it were true, they would want to highlight how the settlement included a very specific design fix.
Since Google isn’t highlighting any such specific language from the ruling, there must be some other reason.
If Getty can successfully sue them, then any image rights holder can do so. Making an exception for Getty images doesn't resolve the underlying general liability or infringement. That means if another rights holder were to sue them for not addressing that liability for their images, Google would now be infringing wilfully and subject to punitively increased sanctions or damages. That's reason enough to apply this remedy across the board.
YouTube and many other sites have systems to work with rightsholders. It would be trivial for Google to allow Getty and others to manage their images that show up while still retaining functionality for the billions of images created by people who don’t care or who even explicitly allow Creative Commons usage.
Thus my suspicion. What’s a little sad is 15 years ago I would give Google the benefit of the doubt, but this reminds me how times have changed.
YouTube, where a user chooses to upload content and Google defines the platform features and standards, is pretty different to search.
‘Still retaining functionality’ would require assuming permissive access to images as the default, which is what got Google into this mess in the first place. The whole point of this is that, for sites and rights holders that don’t explicitly allow access, Google cannot assume they grant it. They have to assume the most restrictive grant of rights, and adoption of metadata to indicate more permissive access is so poorly adopted it’s just irrelevant in practice.
> They have to assume the most restrictive grant of rights, and adoption of metadata to indicate more permissive access is so poorly adopted it’s just irrelevant in practice.
Try the advanced image search options: just like many other modern image search engines, Google lets you, as a user, filter for permissive licenses. I'm not sure how they gather that metadata, but judging by the results I'm getting, it's not poorly adopted at all.
So they could in fact retain the functionality for images with the right license.
I think Google prefers to minimize "if this domain, and only this domain, do this special thing" logic. Like you said, it would be absurd to remove a UI element on a whim that doing so would put them in compliance, so I'm sure a major element of the plaintiff's litigation case hinged on this very feature.
They have many cases of this. They actively filter based on domain to comply with takedown notices. That’s the most public example of how they have a huge number of exceptions based on domain and file.
Of course, I’m sure they do this in an intelligent way rather than literally a bunch of one off kludges.
Maybe completely ignoring a domain is different from implementing a subroutine that ignores indexing a domain's images but not its other (text-based) content?
I try not to overuse the reference, but "Anatomy of a Search Engine" [1] is so deliciously prescient and ironic that it's hard to avoid. It's funny how hard Sergey and Larry shifted once personal fortunes changed their motivation.
"For example, a search engine could add a small factor to search results from "friendly" companies, and subtract a factor from results from competitors"
That's basically what Google got sued for in the EU and they lost. The friendly companies were other Google properties.
https://duckduckgo.com/ also had this link as its #1 result. I much prefer DDG - have converted across all my devices now in the past month and I'm really, really liking it. (Sorry off topic, but it's contextual.)
I use Bing regularly. Sometimes I'll say I've binged something, but I have a hard time doing so without a little hint of sarcasm or self-awareness in my voice.
It gets weirder when people are using the term "Google-fu" - gotta say "search-fu" if you can't stomach "Bing-fu"; neither option is great.
But hey, the rewards points are redeemable for Amazon cards...
The content of 3rd party websites is the merchandise. Users are users. Just because it's financed by ads doesn't mean they're not interested in your experience. Facebook is currently experiencing how problematic declining user engagement can be. If Google search quality ever drops below that of others (Bing, Yandex), they'd be in real trouble. Their superior search is the only reason they make all that money.
I get that some are dissatisfied by "paying" with ads. But the alternative would be a paid-for search engine. Not only would that potentially exclude some people, it would also make it harder to access search from low-income countries. Currently, the US and Europe subsidise operations for other areas, allowing them to get the same search experience. That'd change with a paid-for model.
Exactly. The number of Wikipedia users is cited when soliciting donations. Donors and Wikipedia users probably have some overlap, but that's also true of Google Search users and advertisers.
Those would argue with the HTTP protocol and standard then.
Some would argue that speech in public between two people should not be overheard. But so what, the fact that some argue has limited value without lots of qualifiers.
Photographers should meticulously watermark their photos and offer non-watermarked versions via paid portals or with whatever ugly DRM there is. Unless they want to do it for free, which many do.
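For illustration, here is a minimal sketch of what tiling a visible watermark over an image amounts to, in plain Python with bare pixel lists. The `blend_watermark` helper, the 50% opacity, and the 2x2 mark are all made up for this example; a real workflow would use an imaging library such as Pillow rather than hand-rolled loops.

```python
def blend_watermark(image, mark, opacity=0.5, step=4):
    """Tile `mark` over `image`, blending marked pixels toward white.

    image: 2D list of grayscale values 0-255
    mark:  2D list of 0/1 flags (1 = draw watermark here)
    step:  spacing between tiled copies of the mark
    """
    h, w = len(image), len(image[0])
    mh, mw = len(mark), len(mark[0])
    out = [row[:] for row in image]  # leave the original untouched
    for y in range(0, h, step):
        for x in range(0, w, step):
            for my in range(mh):
                for mx in range(mw):
                    yy, xx = y + my, x + mx
                    if yy < h and xx < w and mark[my][mx]:
                        # Blend the pixel toward white at the given opacity.
                        out[yy][xx] = round(
                            (1 - opacity) * out[yy][xx] + opacity * 255
                        )
    return out

# A tiny 2x2 diagonal mark tiled over a flat grey 8x8 image:
img = [[100] * 8 for _ in range(8)]
mark = [[1, 0], [0, 1]]
marked = blend_watermark(img, mark)
```

The same idea scales to real photos: the mark would be a rendered logo or name, and the blend would run per channel. The point is that a tiled, blended watermark is cheap to apply in bulk and annoying to remove.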
> This is going to make high school PowerPoint users spend more time
This is going to make Senior Architect PowerPoint users spend more time...
When I show my PHB why Kubernetes is "the shit" and something we "really need", as I do with all my technical presentations, I get my propaganda directly from the suppliers. Am I gonna spend time visually describing how Oracle's product suite hangs together? No, that's the job of someone who works at Oracle.
The sites themselves often have lower-res versions in their articles, so 'View Image' has repeatedly saved my bacon in getting high-res, clean versions for reuse. Google Image search historically cut through the noise and got relevant images out in a highly usable format, while saving oooooodles of time digging out the appropriate graphic.
While it would be cool to cut down on such presentations as you describe, I would hope that even PHB style architects can right click, or use the dev console, or pull from cache.
Right click is blocked, natch, and dev console on google images trying to get sources is nowhere close to being as fast as visiting the source site and right clicking. Pulling from cache is also more time intensive, and therefore irrelevant.
Also, I said architect reporting to a PHB, not a PHB architect, and that the time was used creating presentations, not that the volume of presentations themselves could be 'cut down'...
Paywalled photos as from news agencies are already published in severely reduced resolution and often with huge watermarks plastered over the entire photo, if they are even on the public web! If you're publishing paywalled photos in public, high resolution, and perfectly usable for stealing, you are doing something wrong already.
Actually, for _any_ content on a website, if you rely on a search engine of all things (designed to spider your website) to protect it, you are doing something terribly wrong.
I wish Google would just have removed the Getty Images from their search index altogether and be done with it. If they don't want them there, just remove them all already. I hate this sort of collective punishment for the entire web just because of someone's "special interests" that honestly seem to boil down to webmasters with "special needs"!
>Google has long been under fire from photographers and publishers who felt that image search allowed people to steal their pictures, and the removal of the view image button is one of many changes being made in response. A deal to show copyright information and improve attribution of Getty photos was announced last week and included these changes.
Using the context menu to "Open Image in New Tab" essentially does the same thing so the functionality is still there.
> Using the context menu to “Open Image in New Tab” essentially does the same thing so the functionality is still there.
No, actually “view image” loaded the source image at source resolution while viewing the image loaded into the DOM would show you the re-encoded, lower resolution, smaller Google thumbnail version.
Edit: actually it seems that Google has simultaneously deployed an update that makes the actual source image open so that might actually work.
What I'm seeing on the current site is that the large preview seen on clicking a thumbnail is the original image, loaded from the original server and scaled in CSS.
Nothing's really changed in terms of capability; the UI is just a bit more awkward.
Only images you click on (so that the larger preview pops up, with the Visit button and which used to have the View image button) load the original images, rescaled with CSS. The grid of smaller images consists of thumbnails.
Are you sure you're actually clicking the images first, so they're brought up in the image viewer box with the other options available, and then right-clicking -> View Image/Open Image/whatever?
If you are, then that's very strange, and I have no idea why that's happening for you and not for anybody else. What browser and OS are you using?
> It is too slow to load many such large images automatically.
The images don't all load automatically, though; an image only begins to load when you click on it individually, opening it within the viewer, on a Google Images results page.
Here[0] is a screenshot example. Clicking on 'Open image in new tab' (or 'View Image' in Firefox) will open the image[1] directly, and inspecting the page's source will reveal it is simply an <img> element with the URL as the src value. The size ought to make no difference; I tried up to 10 MP.
It may take a second to load, but that's how Google Images works: once an image is open in the viewer, it is quite literally just embedding the unadulterated source image within the page. It has been this way since at least 2010.
(Unless, of course, there is behaviour unique to Safari or OS X: of that I have no idea.)
In this comment and especially in https://news.ycombinator.com/item?id=16323396, you've violated the site rules badly. You've also posted a bunch of unsubstantive comments and ignored our request to stop doing this.
We ban accounts that behave this way, so would you please (re-)read https://news.ycombinator.com/newsguidelines.html and comment only in the spirit of this site from now on? That means being scrupulously respectful of others, and generally not posting unless you have something substantive to say and can do so thoughtfully.
I am not sure, but I read somewhere in the comments on a Google-related HN article that what Google does is first load the image search page with its own compressed thumbnails, enlarged by CSS, and then asynchronously fetch the real images, replace the thumbnails, and scale them down with CSS to the same thumbnail size.