mirror of https://github.com/ai-robots-txt/ai.robots.txt.git
synced 2025-04-07 04:17:46 +00:00
chore: move additional resources to wiki

parent 9be338094b
commit 0d9b75b227

1 changed file with 0 additions and 20 deletions
README.md

````diff
@@ -8,26 +8,6 @@ A number of these crawlers have been sourced from [Dark Visitors](https://darkvisitors.com/)
 
 If you'd like to add information about a crawler to the list, please make a pull request with the bot name added to `robots.txt`, `ai.txt`, and any relevant details in `table-of-bot-metrics.md` to help people understand what's crawling.
 
----
-
-## Additional resources
-
-**Spawning.ai**
-[Create an ai.txt](https://spawning.ai/ai-txt#create): an additional avenue to block crawlers. Example file:
-
-```text
-# Spawning AI
-# Prevent datasets from using the following file types
-
-User-Agent: *
-Disallow: /
-Disallow: *
-```
-
-**[Have I Been Trained?](https://haveibeentrained.com/)**
-Search datasets for your content and request its removal.
-
-
 ---
 
 Thank you to [Glyn](https://github.com/glyn) for pushing [me](https://coryd.dev) to set this up after [I posted about blocking these crawlers](https://coryd.dev/posts/2024/go-ahead-and-block-ai-web-crawlers/).
````
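The `ai.txt` example removed from the README above uses the same directive syntax as `robots.txt`, which this repository still maintains. As a minimal sketch of what a per-crawler entry in `robots.txt` looks like (GPTBot is used here purely as an illustrative bot name, not one taken from this diff):

```text
# Block a single AI crawler by its user-agent name
User-agent: GPTBot
Disallow: /
```

Each crawler added via pull request gets its own `User-agent` line, with `Disallow: /` denying that agent the entire site.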