Kids today might not realize that, for about twenty years there, you could go to Google Search and find things you were looking for! Google now features a hilariously unreliable AI summary as the f…
As I understand it, this is only about using search results for summaries. If it’s just that plus links to the source, I think it’s OK. What would be absolutely unacceptable is using the web in general as training data for text and image generation (i.e. “write me a story about topic XY”).
No one will click on the source, which means the only visitor to your site is Googlebot.
That was the argument with the text snippets from news sources. Publishers successfully lobbied for laws in many countries requiring search engine operators to pay fees. It backfired when Google removed the snippets from news sources that demanded fees: their visitors dropped by a massive amount, 90% or so, because the bare links were less attractive for Google users to click on than the nicer results with a snippet and a thumbnail. So “no one will click on the source” was already disproven some 10 years ago, when the snippet issue was current. All of those publishers have since entered free-of-charge licensing agreements with Google, and the laws are still in place. So Google is fine; upstart search engines are not, because they cannot pressure the publishers into free deals.
This has already happened and continues to happen.
The context is not the same. A snippet is incomplete and often lacks important details. It’s minimally tailored to your query, unlike a response generated by an LLM. The obvious extension of this is conversational search, where clarification and additional detail still don’t require you to click on any sources; you simply ask follow-up questions.
With Gemini?
Yes. How do you think the Gemini model understands language in the first place?
It’s not the same, but it’s similar enough when, as the article states, it is solely about short summaries. The article may be wrong, Google may be outright lying; maybe, maybe, maybe.
Google, as by far the web’s largest ad provider, has a business incentive to direct users towards websites, because that is what gets website operators to pay Google money. Maybe I’m missing something, but I just don’t see the business sense in Google not doing that, and so far I don’t see anything approximating a convincing argument.
Yes. How do you think the Gemini model understands language in the first place?
Licensed and public domain content, of which there is plenty, maybe even content specifically created by Google to train the model. “The Gemini model understands language” is in itself hardly proof of any wrongdoing. I don’t claim to have perfect knowledge or memory, so it’s certainly possible that I missed more specific evidence, but “the Gemini model understands language” by itself definitely is not.