From e6bb7cae9ead3e33078c3b9632a44b3234f241ba Mon Sep 17 00:00:00 2001
From: Glyn Normington
Date: Thu, 17 Oct 2024 12:27:05 +0100
Subject: [PATCH] Augment the "why" FAQ

Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2419078796
---
 FAQ.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/FAQ.md b/FAQ.md
index 1b3f247..4d58350 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -10,6 +10,8 @@ They're extractive, confer no benefit to the creators of data they're ingesting
 **[How AI copyright lawsuits could make the whole industry go extinct](https://www.theverge.com/24062159/ai-copyright-fair-use-lawsuits-new-york-times-openai-chatgpt-decoder-podcast)**
 > The New York Times' lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.
 
+Crawlers also sometimes impact the performance of crawled sites, or even take them down.
+
 ## How do we know AI companies/bots respect `robots.txt`?
 
 The short answer is that we don't. `robots.txt` is a well-established standard, but compliance is voluntary. There is no enforcement mechanism.