Compare commits


303 commits
v1.2...main

Author SHA1 Message Date
dark-visitors
305188b2e7 Update from Dark Visitors
2025-04-11 00:55:52 +00:00
ai.robots.txt
4a764bba18 Merge pull request #102 from ai-robots-txt/imgproxy-bot
chore(robots.json): adds imgproxy crawler
2025-04-10 19:22:34 +00:00
a891ad7213
Merge pull request #102 from ai-robots-txt/imgproxy-bot
chore(robots.json): adds imgproxy crawler
2025-04-10 12:22:23 -07:00
b65f45e408
chore(robots.json): adds imgproxy crawler 2025-04-10 10:12:51 -07:00
Glyn Normington
49e58b1573
Merge pull request #100 from fbartho/fb/fix-perplexity-users
Fix html-mangled hyphen in 'Perplexity-Users' bot name
2025-04-05 17:32:19 +01:00
Frederic Barthelemy
c6f308cbd0
PR Feedback: log special-case, comment consistency 2025-04-05 09:01:52 -07:00
Frederic Barthelemy
5f5a89c38c
Fix html-mangled hyphen in Perplexity-Users
Fixes: #99
2025-04-04 17:34:14 -07:00
Frederic Barthelemy
6b0349f37d
fix python complaining about f-string syntax
```
python code/tests.py
Traceback (most recent call last):
  File "/Users/fbarthelemy/Code/ai.robots.txt/code/tests.py", line 7, in <module>
    from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx
  File "/Users/fbarthelemy/Code/ai.robots.txt/code/robots.py", line 144
    return f"({"|".join(map(re.escape, lst))})"
                ^
SyntaxError: f-string: expecting '}'
```
2025-04-04 15:20:30 -07:00
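This commit works around a pre-Python-3.12 limitation: an f-string's replacement field cannot reuse the quote character that delimits the string itself. A minimal sketch of the failing line and the fix as it appears in `list_to_pcre` (in `code/robots.py` further down this page):

```python
import re

# Fails to parse on Python < 3.12 (nested double quotes inside the f-string):
#     return f"({"|".join(map(re.escape, lst))})"
# The fix: hoist the join out of the f-string first.
def list_to_pcre(lst):
    formatted = "|".join(map(re.escape, lst))
    return f"({formatted})"

print(list_to_pcre(["GPTBot", "Kangaroo Bot"]))  # prints: (GPTBot|Kangaroo\ Bot)
```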
ai.robots.txt
5b8650b99b Update from Dark Visitors
2025-03-29 00:54:10 +00:00
dark-visitors
c249de99a3 Update from Dark Visitors 2025-03-28 00:54:28 +00:00
ec18af7624
Revert "Merge pull request #91 from deyigifts/perplexity-user"
This reverts commit 68d1d93714.
2025-03-27 12:51:22 -07:00
ai.robots.txt
6851413c52 Merge pull request #94 from ThomasLeister/feature/implement-nginx-configuration-snippet-export
Implement Nginx configuration snippet export
2025-03-27 19:49:15 +00:00
Glyn Normington
dba03d809c
Merge pull request #94 from ThomasLeister/feature/implement-nginx-configuration-snippet-export
Implement Nginx configuration snippet export
2025-03-27 19:49:05 +00:00
ai.robots.txt
68d1d93714 Merge pull request #91 from deyigifts/perplexity-user
Update perplexity bots
2025-03-27 19:29:30 +00:00
1183187be9
Merge pull request #91 from deyigifts/perplexity-user
Update perplexity bots
2025-03-27 12:29:21 -07:00
Thomas Leister
7c3b5a2cb2
Add tests for Nginx config generator 2025-03-27 18:28:21 +01:00
Thomas Leister
4f3f4cd0dd
Add assembled version of nginx-block-ai-bots.conf file 2025-03-27 12:43:36 +01:00
Thomas Leister
5a312c5f4d
Mention Nginx config feature in README 2025-03-27 12:43:29 +01:00
Thomas Leister
da85207314
Implement new function "json_to_nginx" which outputs an Nginx
configuration snippet
2025-03-27 12:27:09 +01:00
deyigifts
6ecfcdfcbf
Update perplexity bot
Update based on perplexity bot docs
2025-03-24 14:16:57 +08:00
5e7c3c432f
Merge pull request #83 from glyn/81-doc-testing
Document testing in README
2025-02-19 09:19:44 -08:00
Glyn Normington
9f41d4c11c
Merge pull request #84 from sideeffect42/tests-workflow
Add run-tests workflow
2025-02-18 19:42:55 +00:00
Dennis Camera
8a74896333 Add workflow to run tests on pull request or push to main 2025-02-18 20:30:27 +01:00
Glyn Normington
1d55a205e4 Document testing in README
Fixes: https://github.com/ai-robots-txt/ai.robots.txt/issues/81
2025-02-18 16:49:08 +00:00
Glyn Normington
8494a7fcaa
Merge pull request #80 from sideeffect42/htaccess-allow-robots_txt
.htaccess: Allow robots access to `/robots.txt`
2025-02-18 16:42:36 +00:00
Dennis Camera
c7c1e7b96f robots.py: Make executable 2025-02-18 12:55:17 +01:00
Dennis Camera
17b826a6d3 Update tests and convert to stock unittest
For these simple tests Python's built-in unittest framework is more than enough.
No additional dependencies are required.

Added some more test cases with "special" characters to test the escaping code
better.
2025-02-18 12:55:15 +01:00
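With the tests converted to stock `unittest`, a minimal sketch of such a test (assumed shape, not the repository's actual test file) against the `robots.txt` converter looks like this:

```python
import unittest

def json_to_txt(robots_json):
    # Same shape as the repository's converter: one User-agent line per bot.
    robots_txt = "\n".join(f"User-agent: {k}" for k in robots_json.keys())
    robots_txt += "\nDisallow: /\n"
    return robots_txt

class TestConverters(unittest.TestCase):
    def test_json_to_txt(self):
        out = json_to_txt({"GPTBot": {}, "CCBot": {}})
        self.assertEqual(out, "User-agent: GPTBot\nUser-agent: CCBot\nDisallow: /\n")

if __name__ == "__main__":
    unittest.main()
```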
Dennis Camera
0bd3fa63b8 table-of-bot-metrics.md: Escape robot names for Markdown table
Some characters which could occur in a crawler's name have a special meaning in
Markdown. They are escaped to prevent them from having unintended side effects.

The escaping is only applied to the first (Name) column of the table. The rest
of the columns is expected to already be Markdown encoded in robots.json.
2025-02-18 12:53:27 +01:00
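A short sketch of the escaping described here, reusing the regex from the repository's `escape_md` helper (visible in `code/robots.py` further down this page); "special" names like the test entries come out safely escaped:

```python
import re

def escape_md(s):
    # Backslash-escape characters with special meaning in Markdown tables.
    return re.sub(r"([]*\\|`(){}<>#+-.!_[])", r"\\\1", s)

print(escape_md("star***crawler"))        # star\*\*\*crawler
print(escape_md("a[mazing]{42}(robot)"))  # a\[mazing\]\{42\}\(robot\)
```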
Dennis Camera
a884a2afb9 .htaccess: Make regex in RewriteCond safe
Improve the regular expression by removing unneeded anchors and
escaping special characters (not just space) to prevent false positives
or a misbehaving rewrite rule.
2025-02-18 12:53:22 +01:00
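The change amounts to running every name through `re.escape`, which escapes all regex metacharacters (not just spaces); a quick sketch using names from the repository's test fixtures:

```python
import re

for name in ["Is this a crawler?", "2^32$", "curl|sudo bash"]:
    print(re.escape(name))
# Is\ this\ a\ crawler\?
# 2\^32\$
# curl\|sudo\ bash
```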
Dennis Camera
c0d418cd87 .htaccess: Allow robots access to /robots.txt 2025-02-18 12:49:29 +01:00
dark-visitors
abfd6dfcd1 Update from Dark Visitors 2025-02-17 00:53:32 +00:00
ai.robots.txt
693289bb29 chore: add Brightbot 1.0 2025-02-16 21:37:52 +00:00
a9ec4ffa6f
chore: add Brightbot 1.0 2025-02-16 13:36:39 -08:00
Glyn Normington
03aa829913
Merge pull request #79 from always-be-testing/main
List of AI bots Cloudflare considers "Verified"
2025-02-16 04:33:40 +00:00
always-be-testing
5b13c2e504
add more concise message about verified bots
Co-authored-by: Glyn Normington <work@underlap.org>
2025-02-15 11:22:10 -05:00
always-be-testing
af87b85d7f include return after heading 2025-02-14 12:39:08 -05:00
always-be-testing
f99339922f grammar update and include syntax for verified bot condition 2025-02-14 12:36:33 -05:00
always-be-testing
e396a2ec78 forgot to include heading 2025-02-14 12:31:20 -05:00
always-be-testing
261a2b83b9 update README to inclide list of ai bots Cloudflare considers verified 2025-02-14 12:26:19 -05:00
dark-visitors
bebffccc0c Update from Dark Visitors 2025-02-02 00:52:50 +00:00
ai.robots.txt
89d4c6e5ca Merge pull request #73 from nisbet-hubbard/patch-8
Actually block Semrush’s AI tools
2025-02-01 10:51:01 +00:00
Glyn Normington
f9e2c5810b
Merge pull request #73 from nisbet-hubbard/patch-8
Actually block Semrush’s AI tools
2025-02-01 10:50:50 +00:00
nisbet-hubbard
05b79b8a58
Update robots.json 2025-01-27 19:41:03 +08:00
dark-visitors
9c060dee1c Update from Dark Visitors 2025-01-21 00:49:22 +00:00
ai.robots.txt
6c552a3daa Merge pull request #71 from jsheard/patch-1
Add Crawlspace
2025-01-20 17:45:42 +00:00
Glyn Normington
f621fb4852
Merge pull request #71 from jsheard/patch-1
Add Crawlspace
2025-01-20 17:45:29 +00:00
Joshua Sheard
7427d96bac
Update robots.json
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 10:59:02 +00:00
Glyn Normington
81cc81b35e
Merge pull request #68 from MassiminoilTrace/main
Implementing automatic htaccess generation
2025-01-20 07:33:54 +00:00
Massimo Gismondi
4f03818280 Removed if condition and added a little comments 2025-01-20 06:51:06 +01:00
Massimo Gismondi
a9956f7825 Removed additional sections 2025-01-20 06:50:48 +01:00
Massimo Gismondi
33c38ee70b
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:28:32 +01:00
Massimo Gismondi
52241bdca6
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:27:56 +01:00
Massimo Gismondi
013b7abfa1
Update README.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:27:02 +01:00
Massimo Gismondi
70fd6c0fb1
Add mention of htaccess in readme
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-20 06:25:07 +01:00
Joshua Sheard
5aa08bc002
Add Crawlspace 2025-01-19 22:03:50 +00:00
Massimo Gismondi
d65128d10a
Removed paragraph in favour of future FAQ.md
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:41:09 +01:00
Massimo Gismondi
1cc4b59dfc
Shortened htaccess instructions
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:40:03 +01:00
Massimo Gismondi
8aee2f24bb
Fixed space in comment
Co-authored-by: Glyn Normington <work@underlap.org>
2025-01-18 12:39:07 +01:00
Massimo Gismondi
b455af66e7 Adding clarification about performance and code comment 2025-01-17 21:42:08 +01:00
Massimo Gismondi
189e75bbfd Adding usage instructions 2025-01-17 21:25:23 +01:00
Massimo Gismondi
933aa6159d Implementing htaccess generation 2025-01-07 11:02:29 +01:00
Glyn Normington
b7f908e305
Merge pull request #66 from fabianegli/patch-1
Allow Action to succeed even if no changes were made
2025-01-07 03:54:40 +00:00
ai.robots.txt
ec454b71d3 Merge pull request #67 from Nightfirecat/semrushbot
Block SemrushBot
2025-01-06 20:51:56 +00:00
565dca3dc0
Merge pull request #67 from Nightfirecat/semrushbot
Block SemrushBot
2025-01-06 12:51:43 -08:00
Jordan Atwood
143f8f2285
Block SemrushBot 2025-01-06 12:34:38 -08:00
8e98cc6049
Merge pull request #61 from glyn/improve-naming
Rename Python code
2025-01-06 08:10:47 -08:00
Fabian Egli
30ee957011
bail when NO changes are staged 2025-01-06 12:05:42 +01:00
Fabian Egli
83cd546470
allow Action to succeed even if no changes were made
Before, the Action would fail in case there were no changes made to any files by the converter.
2025-01-06 11:39:41 +01:00
ai.robots.txt
ca8620e28b Merge pull request #63 from glyn/push-paths
Convert robots.json more frequently
2025-01-05 05:05:20 +00:00
Glyn Normington
b9df958b39
Merge pull request #63 from glyn/push-paths
Convert robots.json more frequently
2025-01-05 05:05:01 +00:00
Glyn Normington
c01a684036 Convert robots.json more frequently
Specifically, when github workflows or code
is changed as either of these can affect the
conversion results.

Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/60
2025-01-05 05:03:50 +00:00
Glyn Normington
d2be15447c
Merge pull request #62 from ai-robots-txt/missing-dependency
Ensure dependency installed
2025-01-05 01:46:27 +00:00
Glyn Normington
9e372d0696 Ensure dependency installed
Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/60#issuecomment-2571437913
Ref: https://stackoverflow.com/questions/11783875/importerror-no-module-named-bs4-beautifulsoup
2025-01-05 01:45:33 +00:00
Glyn Normington
996b9c678c Improve job name
The purpose of the job is to convert the JSON file
to the other files.
2025-01-04 05:28:41 +00:00
Glyn Normington
e4c12ee2f8 Rename in test code 2025-01-04 05:03:48 +00:00
Glyn Normington
3a43714908 Rename Python code
The name dark_visitors.py gives the impression that the code is entirely
related to the dark visitors website, whereas the update command relates
to dark visitors and the convert command is unrelated to dark visitors.
2025-01-04 04:55:34 +00:00
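After the rename, both commands live behind flags of `code/robots.py` (matching its argparse setup and the workflow invocations shown further down this page):

```console
python code/robots.py --update    # refresh robots.json from darkvisitors.com/agents
python code/robots.py --convert   # regenerate robots.txt and the derived files
```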
dark-visitors
2036a68c1f Update from Dark Visitors 2024-12-04 00:55:50 +00:00
Glyn Normington
24666e8b15
Merge pull request #58 from fabianegli/fabianegli-restore-attribution
Restore attribution
2024-11-29 09:05:16 +00:00
fabianegli
eb8e1a49b5 Revert "specify file encodings in tests"
This reverts commit bd38c30194.
2024-11-29 09:02:47 +01:00
fabianegli
b64284d684 restore correct attribution logic to before PR #55 2024-11-26 09:41:46 +01:00
fabianegli
bd38c30194 specify file encodings in tests 2024-11-26 09:12:11 +01:00
dark-visitors
609ddca392 Updated from new robots.json 2024-11-24 00:57:06 +00:00
dark-visitors
37065f9118 Update from Dark Visitors 2024-11-24 00:57:05 +00:00
dark-visitors
58985737e7 Updated from new robots.json 2024-11-19 16:46:21 +00:00
584e66cb99
Merge pull request #56 from glyn/40-exclude-facebookexternalhit
Allow facebookexternalhit
2024-11-19 08:46:05 -08:00
Glyn Normington
80002f5e17 Allow facebookexternalhit
At the time of writing, this crawler does not
appear to be for the purpose of AI.

See: https://developers.facebook.com/docs/sharing/webmasters/web-crawlers/
(accessed on 19 November 2024).

Fixes https://github.com/ai-robots-txt/ai.robots.txt/issues/40
2024-11-19 03:33:45 +00:00
Glyn Normington
71db599b41
Merge pull request #55 from norwd/feature/add-robots.txt-file-to-release
Create workflow to upload `robots.txt` file as release artefact
2024-11-13 01:39:11 +00:00
Y. Meyer-Norwood
e8f0784a00
Explicitly use release tag for checkout 2024-11-13 10:26:37 +13:00
Y. Meyer-Norwood
94ceb3cffd
Add authentication for gh command 2024-11-11 13:04:55 +13:00
Y. Meyer-Norwood
adfd4af872
Create upload-robots-txt-file-to-release.yml 2024-11-11 12:58:40 +13:00
Glyn Normington
d50615d394 Improve formatting
This clarifies the scope of the tip is Apache httpd.
2024-11-10 01:06:13 +00:00
Glyn Normington
2c88909be3 Fix formatting 2024-11-10 01:02:18 +00:00
Glyn Normington
6f58ddc623
Merge pull request #54 from glyn/rationale
Clarify our rationale
2024-11-10 00:58:29 +00:00
Glyn Normington
9295b6a963 Clarify our rationale
I deleted the point about excessive load on
crawled sites as any other crawler could potentially
be guilty of this and I wouldn't want our scope to
creep to all crawlers.

Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/53#issuecomment-2466042550
2024-11-09 04:45:47 +00:00
dark-visitors
9e06cf3bc9 Updated from new robots.json 2024-10-29 00:52:12 +00:00
dark-visitors
bc0a0ad0e9 Update from Dark Visitors 2024-10-29 00:52:12 +00:00
dark-visitors
fe5f407673 Update from Dark Visitors 2024-10-27 00:54:47 +00:00
Adam Newbold
a66b16827d
Merge pull request #51 from fabianegli/php-to-python-plus-tests
PHP to Python plus tests and stuff
2024-10-22 21:32:58 -04:00
fabianegli
3ab22bc498 make conversions and updates separately triggerable 2024-10-19 19:56:41 +02:00
fabianegli
6ab8fb2d37 no more failure when run without network 2024-10-19 19:11:01 +02:00
fabianegli
7e2b3ab037 rename action 2024-10-19 19:09:34 +02:00
fabianegli
0c05461f84 simplify repo and added some tests 2024-10-19 13:06:34 +02:00
fabianegli
6bb598820e ignore venv 2024-10-19 11:56:00 +02:00
Glyn Normington
d62cab66c5
Merge pull request #50 from glyn/fix-typo
Fix typo and trigger rerun of main job
2024-10-19 04:43:09 +01:00
ai.robots.txt
6a359e7fd7 Fix typo and trigger rerun of main job 2024-10-19 03:43:00 +00:00
Glyn Normington
38a388097c Fix typo and trigger rerun of main job 2024-10-19 04:42:27 +01:00
Glyn Normington
83c8603071
Merge pull request #49 from glyn/php-diagnostics
PHP diagnostics
2024-10-19 04:34:53 +01:00
ai.robots.txt
a80bd18fb8 Dump out file contents in PHP script 2024-10-19 03:34:29 +00:00
Glyn Normington
bdf30be7dc Dump out file contents in PHP script 2024-10-19 04:33:46 +01:00
Glyn Normington
4d47b17c45
Merge pull request #47 from fabianegli/fabianegli-patch-1
log the diff in the update actions
2024-10-19 02:58:05 +01:00
dark-visitors
faf81efb12 Daily update from Dark Visitors 2024-10-19 01:17:15 +00:00
Fabian Egli
25adc6b802
log git repository status 2024-10-19 00:28:41 +02:00
Fabian Egli
b584f613cd
add some signposts to the log 2024-10-19 00:13:09 +02:00
Fabian Egli
b3068a8d90
add some signposts 2024-10-19 00:12:25 +02:00
Fabian Egli
a46d06d436
log changes made by the action in main.yml 2024-10-19 00:04:15 +02:00
Fabian Egli
cfaade6e2f
log the diff in the update action daily_update.yml 2024-10-19 00:01:15 +02:00
04f630f7f8
Merge pull request #45 from glyn/faq-update
Update the FAQ
2024-10-18 06:35:47 -07:00
Glyn Normington
898c8ab82d
Merge pull request #46 from isagalaev/case-insensitive-sorting
Sort the content of robots.json by keys, case-insensitively
2024-10-18 07:57:56 +01:00
Ivan Sagalaev
7bb5efd462
Sort the content case-insensitively before dumping to JSON 2024-10-17 21:08:43 -04:00
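A minimal sketch of the ordering this commit introduces (the `key=lambda k: k.lower()` now in `code/robots.py`); a plain `sorted()` would group all uppercase names before any lowercase ones:

```python
bots = {"anthropic-ai": {}, "Bytespider": {}, "Amazonbot": {}}
sorted_keys = sorted(bots, key=lambda k: k.lower())
print(sorted_keys)  # ['Amazonbot', 'anthropic-ai', 'Bytespider']
```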
Glyn Normington
e6bb7cae9e Augment the "why" FAQ
Ref: https://github.com/ai-robots-txt/ai.robots.txt/issues/40#issuecomment-2419078796
2024-10-17 12:27:05 +01:00
Glyn Normington
b229f5b936 Re-order the FAQ
The "why" question should come first.
2024-10-17 12:25:54 +01:00
dark-visitors
b1491d2694 Daily update from Dark Visitors 2024-10-09 01:17:37 +00:00
ai.robots.txt
9be286626d Merge pull request #43 from lxjv/main
Update robots.json with Claude respect link
2024-10-08 02:30:17 +00:00
Glyn Normington
01993b98c3
Merge pull request #43 from lxjv/main
Update robots.json with Claude respect link
2024-10-08 03:30:07 +01:00
Laker Turner
dc15afe847
Update robots.json with Claude respect link 2024-10-07 17:38:01 +01:00
ai.robots.txt
6da804e826 chore: add ISSCyberRiskCrawler 2024-09-30 23:50:18 +00:00
9c2394f23b
chore: add ISSCyberRiskCrawler 2024-09-30 16:25:20 -07:00
ai.robots.txt
6d9ce1d62a chore: add sidetrade bot 2024-09-28 20:58:18 +00:00
6a988be27f
chore: add sidetrade bot 2024-09-28 13:58:00 -07:00
ai.robots.txt
632e9d6510 Daily update from Dark Visitors 2024-09-28 01:17:19 +00:00
dark-visitors
7851cea4fd Daily update from Dark Visitors 2024-09-27 01:18:04 +00:00
Glyn Normington
75343c790e
Merge pull request #38 from urvish-p80/main
Add an additional resource - README.md
2024-09-27 01:26:04 +01:00
ai.robots.txt
44d975c799 Merge pull request #42 from commoncrawl/main
feat: make CCBot entry more accurate
2024-09-27 00:21:49 +00:00
Glyn Normington
2f67e77ddb
Merge pull request #42 from commoncrawl/main
feat: make CCBot entry more accurate
2024-09-27 01:21:37 +01:00
Greg Lindahl
a6de89e6bd feat: make CCBot entry more accurate 2024-09-26 21:41:28 +00:00
60bdfa7eb3
Merge pull request #41 from cityrolr/patch-1
Update README.md
2024-09-24 12:53:52 -07:00
Julian Mair
af05890b07
Update README.md
For people who don't use or don't want to use RSS for this, I've added a little explanation of how to subscribe to releases via GitHub.
2024-09-23 23:27:27 +02:00
Urvish Patel
0106d4b15a
Add additional resource - README.md
A detailed blogpost to - See the live dashboard showing the websites that are blocking AI Bots such as GPTBot, CCBot, Google-extended and ByteSpider from crawling and scraping the content on their website. Learn which AI crawlers / scrapers do what and how to block them using Robots.txt.
2024-09-23 08:19:27 -04:00
ai.robots.txt
6b8d7f5890 Daily update from Dark Visitors 2024-09-09 01:16:21 +00:00
dark-visitors
5963cbf9f7 Daily update from Dark Visitors 2024-09-08 01:19:31 +00:00
Glyn Normington
b15b8062ce
Merge pull request #36 from cramforce/patch-1
Add instructions for AI bot blocking on Vercel
2024-09-08 01:26:07 +01:00
Malte Ubl
809851ae88
Add instructions for AI bot blocking on Vercel 2024-09-07 15:59:25 -07:00
ai.robots.txt
1c1b423684 chore: add iaskspider/2.0 2024-09-07 02:05:43 +00:00
8373294404
chore: add iaskspider/2.0 2024-09-06 19:05:26 -07:00
b30ca5f193
Merge pull request #35 from nisbet-hubbard/patch-7
Improve main workflow
2024-09-02 18:40:57 -07:00
ai.robots.txt
fb5c995243 Daily update from Dark Visitors 2024-09-03 01:12:57 +00:00
ai.robots.txt
7151f6c569 Removing previously generated files 2024-09-03 01:12:56 +00:00
nisbet-hubbard
cc18b8617c
Update main.yml 2024-09-03 07:48:48 +08:00
ai.robots.txt
c9325c9e18 Daily update from Dark Visitors 2024-09-02 01:15:07 +00:00
ai.robots.txt
567bd00aec Removing previously generated files 2024-09-02 01:15:07 +00:00
ai.robots.txt
543e993b08 Daily update from Dark Visitors 2024-09-01 01:24:53 +00:00
ai.robots.txt
01589718df Removing previously generated files 2024-09-01 01:24:52 +00:00
ai.robots.txt
9a7f556d87 Daily update from Dark Visitors 2024-08-31 01:13:04 +00:00
ai.robots.txt
9a4ebb57ee Removing previously generated files 2024-08-31 01:13:04 +00:00
ai.robots.txt
054c97ad4f Daily update from Dark Visitors 2024-08-30 01:13:29 +00:00
ai.robots.txt
b2970316d8 Removing previously generated files 2024-08-30 01:13:29 +00:00
ai.robots.txt
008a34ceb4 chore: add ai2bot 2024-08-29 03:07:52 +00:00
ai.robots.txt
3bce634e4a Removing previously generated files 2024-08-29 03:07:51 +00:00
0f8723558f
chore: add ai2bot 2024-08-28 20:07:32 -07:00
ai.robots.txt
6dc900b582 Daily update from Dark Visitors 2024-08-29 01:13:19 +00:00
ai.robots.txt
71eefcdb05 Removing previously generated files 2024-08-29 01:13:19 +00:00
ai.robots.txt
1d417ffab9 Daily update from Dark Visitors 2024-08-28 01:12:35 +00:00
ai.robots.txt
00ef18f93c Removing previously generated files 2024-08-28 01:12:35 +00:00
ai.robots.txt
84a2376f65 Daily update from Dark Visitors 2024-08-27 01:12:20 +00:00
ai.robots.txt
699862f4bd Removing previously generated files 2024-08-27 01:12:19 +00:00
ai.robots.txt
ccec3eef15 Daily update from Dark Visitors 2024-08-26 01:11:41 +00:00
ai.robots.txt
6cb9bc8ebf Removing previously generated files 2024-08-26 01:11:40 +00:00
ai.robots.txt
42a7ca7eda Daily update from Dark Visitors 2024-08-25 01:16:28 +00:00
ai.robots.txt
907866301f Removing previously generated files 2024-08-25 01:16:27 +00:00
ai.robots.txt
b202b9e1e3 Daily update from Dark Visitors 2024-08-24 01:09:29 +00:00
ai.robots.txt
ac1250cfa5 Removing previously generated files 2024-08-24 01:09:29 +00:00
ai.robots.txt
d95f2e8072 Daily update from Dark Visitors 2024-08-23 01:10:54 +00:00
ai.robots.txt
61d851baf5 Removing previously generated files 2024-08-23 01:10:53 +00:00
dark-visitors
7bfc1647a8 Daily update from Dark Visitors 2024-08-22 01:11:43 +00:00
ai.robots.txt
3580a7096f Daily update from Dark Visitors 2024-08-21 01:10:11 +00:00
ai.robots.txt
fad335178f Removing previously generated files 2024-08-21 01:10:10 +00:00
ai.robots.txt
358df0833e Daily update from Dark Visitors 2024-08-20 01:10:11 +00:00
ai.robots.txt
7e0dd921db Removing previously generated files 2024-08-20 01:10:11 +00:00
ai.robots.txt
591a99c320 Daily update from Dark Visitors 2024-08-19 01:11:49 +00:00
ai.robots.txt
394e447c78 Removing previously generated files 2024-08-19 01:11:49 +00:00
ab4a6547f6
Merge branch 'main' of github.com:ai-robots-txt/ai.robots.txt 2024-08-18 11:34:47 -07:00
1d3194f75d
chore: update readme 2024-08-18 11:34:43 -07:00
2363e57608
chore: minor update 2024-08-18 11:34:08 -07:00
ai.robots.txt
b8e68c12f3 Daily update from Dark Visitors 2024-08-18 01:14:50 +00:00
ai.robots.txt
60ff792ba9 Removing previously generated files 2024-08-18 01:14:49 +00:00
ai.robots.txt
3afcefdff5 Daily update from Dark Visitors 2024-08-17 01:08:17 +00:00
ai.robots.txt
558d5871b2 Removing previously generated files 2024-08-17 01:08:17 +00:00
ai.robots.txt
2a075cb2f1 Daily update from Dark Visitors 2024-08-16 01:10:14 +00:00
ai.robots.txt
3ef9cb7ce4 Removing previously generated files 2024-08-16 01:10:13 +00:00
dark-visitors
5937434aff Daily update from Dark Visitors 2024-08-15 01:07:15 +00:00
407b9e12e6
chore: sort output 2024-08-14 17:10:29 -07:00
bc66d10afd
chore: update faq 2024-08-14 09:21:26 -07:00
ai.robots.txt
df5b6ef647 Daily update from Dark Visitors 2024-08-14 01:11:03 +00:00
ai.robots.txt
2c8ed062b9 Removing previously generated files 2024-08-14 01:11:02 +00:00
ai.robots.txt
2e8e8af8e4 Daily update from Dark Visitors 2024-08-13 01:12:03 +00:00
ai.robots.txt
f1d0c5b1fe Removing previously generated files 2024-08-13 01:12:02 +00:00
ai.robots.txt
53a39b2f71 Daily update from Dark Visitors 2024-08-12 01:12:23 +00:00
ai.robots.txt
274d48b8f0 Removing previously generated files 2024-08-12 01:12:23 +00:00
ai.robots.txt
6472e07f09 Daily update from Dark Visitors 2024-08-11 01:16:04 +00:00
ai.robots.txt
cb98669cc2 Removing previously generated files 2024-08-11 01:16:03 +00:00
7662d06eb3
Merge pull request #33 from nisbet-hubbard/patch-6
Add links for reporting and FAQ to README.md
2024-08-09 19:42:36 -07:00
ai.robots.txt
53449ad1bd Daily update from Dark Visitors 2024-08-10 01:10:53 +00:00
ai.robots.txt
4242f8cc7b Removing previously generated files 2024-08-10 01:10:53 +00:00
nisbet-hubbard
46540633ba
Update README.md 2024-08-10 08:22:28 +08:00
ai.robots.txt
21e5cd96a9 Daily update from Dark Visitors 2024-08-09 01:11:12 +00:00
ai.robots.txt
ed7d7d3fdf Removing previously generated files 2024-08-09 01:11:11 +00:00
ai.robots.txt
57f006150b Daily update from Dark Visitors 2024-08-08 01:10:13 +00:00
ai.robots.txt
40f9325a4f Removing previously generated files 2024-08-08 01:10:12 +00:00
ai.robots.txt
0122dea1e9 Merge pull request #32 from ChenghaoMou/main
Tracking Dark Visitors Automatically
2024-08-07 22:40:24 +00:00
ai.robots.txt
663b85cc07 Removing previously generated files 2024-08-07 22:40:24 +00:00
Adam Newbold
5c8b4593f4
Merge pull request #32 from ChenghaoMou/main
Tracking Dark Visitors Automatically
2024-08-07 18:40:13 -04:00
Chenghao Mou
6f96795edc restore cron 2024-08-07 12:43:44 +01:00
ai.robots.txt
ab17662f96 Daily update from Dark Visitors 2024-08-07 11:41:00 +00:00
ai.robots.txt
8738c66c65 Removing previously generated files 2024-08-07 11:40:59 +00:00
Chenghao Mou
b00067bc86 restore files deleted by failed workflow and fix main commit message 2024-08-07 12:36:21 +01:00
ai.robots.txt
4a63c482c4 Removing previously generated files 2024-08-07 11:31:02 +00:00
Chenghao Mou
366e49dc6d restore files deleted by failed workflow and fix main commit message 2024-08-07 12:21:40 +01:00
ai.robots.txt
aaa55594e1 Removing previously generated files 2024-08-07 11:13:16 +00:00
Chenghao Mou
fbebbbfefb restore files deleted by failed workflow 2024-08-07 12:02:50 +01:00
dark-visitors
6a275366be Daily update from Dark Visitors 2024-08-07 10:50:45 +00:00
Chenghao Mou
09c6b78b46 fix job dependency 2024-08-07 11:45:37 +01:00
ai.robots.txt
d4f34363ec Removing previously generated files 2024-08-07 10:40:50 +00:00
ai.robots.txt
30eaff1447 call main after update 2024-08-07 10:32:13 +00:00
ai.robots.txt
bd3eee7a30 Removing previously generated files 2024-08-07 10:32:12 +00:00
Chenghao Mou
944bee0f56 call main after update 2024-08-07 11:31:58 +01:00
dark-visitors
cebf809391 Daily update from Dark Visitors 2024-08-07 00:14:26 +00:00
ai.robots.txt
3d4bf2c3db restore original robots.json 2024-08-06 18:50:54 +00:00
ai.robots.txt
d6a5e8cd81 Removing previously generated files 2024-08-06 18:50:53 +00:00
Chenghao Mou
4cf82b703f restore original robots.json 2024-08-06 19:50:38 +01:00
Chenghao Mou
0b6eba8dd5 skip push if no change 2024-08-06 19:41:38 +01:00
Chenghao Mou
379c339f97 skip push if no change 2024-08-06 19:41:38 +01:00
Chenghao Mou
01edb6c78c
Merge branch 'ai-robots-txt:main' into main 2024-08-06 19:35:03 +01:00
Chenghao Mou
2a3685385c restrict scope 2024-08-06 19:33:49 +01:00
85275e55b8
Merge pull request #31 from glyn/addfaq
Add FAQ
2024-08-06 11:15:29 -07:00
Chenghao Mou
8c6482fb45 restore the cron 2024-08-06 18:12:41 +01:00
dark-visitors
63c7e742c3 Daily update from Dark Visitors 2024-08-06 16:54:29 +00:00
Chenghao Mou
55e92f4324 update existing ones 2024-08-06 17:48:06 +01:00
Chenghao Mou
52d54cf127 restore the cron 2024-08-06 17:28:07 +01:00
dark-visitors
fdd261dad4 Daily update from Dark Visitors 2024-08-06 16:27:02 +00:00
ai.robots.txt
6d2285f5e0 Add FAQ 2024-08-06 16:21:01 +00:00
ai.robots.txt
83d9397f17 Removing previously generated files 2024-08-06 16:21:00 +00:00
Glyn Normington
b4d25bf0cb Add FAQ 2024-08-06 17:20:26 +01:00
Chenghao Mou
8ab1e30a6c test workflow 2024-08-06 17:12:26 +01:00
Chenghao Mou
192bf67631 add dark visitor workflow 2024-08-06 17:02:23 +01:00
ai.robots.txt
e12ddc0f42 Merge pull request #29 from jbowdre/dev
only build on changes to robots.json
2024-08-06 15:44:54 +00:00
ai.robots.txt
b54e274bbc Removing previously generated files 2024-08-06 15:44:53 +00:00
3e91a84d11
Merge pull request #29 from jbowdre/dev
only build on changes to robots.json
2024-08-04 16:04:59 -07:00
John Bowdre
b0a93aeb70 only build on changes to robots.json 2024-08-04 17:45:18 -05:00
ai.robots.txt
eb924b9856 Merge pull request #28 from jsheard/patch-2
Add Cloudflares first-party scraper blocking to FAQ
2024-08-04 21:54:17 +00:00
ai.robots.txt
1cfc071498 Removing previously generated files 2024-08-04 21:54:16 +00:00
24c3509a6e
Merge pull request #28 from jsheard/patch-2
Add Cloudflares first-party scraper blocking to FAQ
2024-08-04 14:54:06 -07:00
ai.robots.txt
c2f177870f Merge pull request #27 from jsheard/patch-1
Fix Imagesift user agent
2024-08-04 21:53:48 +00:00
ai.robots.txt
0072b8f5f0 Removing previously generated files 2024-08-04 21:53:47 +00:00
9c7257e7cf
Merge pull request #27 from jsheard/patch-1
Fix Imagesift user agent
2024-08-04 14:53:36 -07:00
Joshua Sheard
8dbbdbf44c
Add Cloudflares first-party scraper blocking to FAQ 2024-08-04 21:38:02 +01:00
Joshua Sheard
146fd4ffba
Fix Imagesift user agent 2024-08-04 21:33:04 +01:00
ai.robots.txt
c7b781034e chore: restore FriendlyCrawler + ImageSift 2024-08-04 19:29:01 +00:00
ai.robots.txt
9a8fa66772 Removing previously generated files 2024-08-04 19:29:00 +00:00
1ca936ce11
chore: restore FriendlyCrawler + ImageSift 2024-08-04 12:28:48 -07:00
ai.robots.txt
8de5bc8e01 Merge pull request #25 from mirium999/add_icc_crawler
Add ICC-Crawler
2024-08-04 01:21:56 +00:00
ai.robots.txt
8c632e1ba4 Removing previously generated files 2024-08-04 01:21:55 +00:00
Adam Newbold
8d4d52cdab
Merge pull request #25 from mirium999/add_icc_crawler
Add ICC-Crawler
2024-08-03 21:21:45 -04:00
Mirium999
5826c18909 Add ICC-Crawler 2024-08-04 10:11:25 +09:00
ai.robots.txt
ffbad453f3 Merge pull request #24 from nisbet-hubbard/patch-5
Add last line of defence to FAQ
2024-08-03 14:27:47 +00:00
ai.robots.txt
b1907d86be Removing previously generated files 2024-08-03 14:27:46 +00:00
55c585e9e3
Merge pull request #24 from nisbet-hubbard/patch-5
Add last line of defence to FAQ
2024-08-03 07:27:37 -07:00
nisbet-hubbard
2b56c72bac
Update FAQ.md 2024-08-03 14:27:25 +08:00
nisbet-hubbard
b24e5cb3bb
Update FAQ.md 2024-08-03 14:12:50 +08:00
nisbet-hubbard
74b1502839
Update FAQ.md 2024-08-03 14:04:58 +08:00
ai.robots.txt
d8de1ebdd5 chore: contribution note 2024-08-02 16:32:00 +00:00
ai.robots.txt
9d8d3de8ed Removing previously generated files 2024-08-02 16:31:59 +00:00
349c35eed6
chore: contribution note 2024-08-02 09:31:48 -07:00
ai.robots.txt
b144225ece chore: drop in additional data 2024-08-01 22:33:23 +00:00
ai.robots.txt
06b950bce9 Removing previously generated files 2024-08-01 22:33:23 +00:00
b20dfec1e4
chore: drop in additional data 2024-08-01 15:33:07 -07:00
ai.robots.txt
f18f0d99b9 chore: remove test data 2024-08-01 22:29:02 +00:00
ai.robots.txt
747cc834c4 Removing previously generated files 2024-08-01 22:29:01 +00:00
efabf3e721
chore: remove test data 2024-08-01 15:25:55 -07:00
Adam Newbold
1fdc79dacb Adding GitHub Action 2024-08-01 18:17:19 -04:00
17a84f2c2d
chore: update robots table 2024-08-01 15:06:49 -07:00
6c596a50ea
chore: move FAQ into repo 2024-08-01 07:53:43 -07:00
6a8e7a8eb0
Merge pull request #22 from nisbet-hubbard/patch-4
Add `PetalBot` (and `facebookexternalhit`?)
2024-08-01 07:49:30 -07:00
nisbet-hubbard
df89722038
Add PetalBot (and facebookexternalhit?) 2024-07-31 18:27:29 +08:00
fa7b64ae4b
chore: add Scrapy 2024-07-30 10:28:46 -07:00
55b4505e30
chore: add Timpibot 2024-07-29 12:38:22 -07:00
d49e860b74
chore: add VelenPublicWebCrawler 2024-07-29 12:12:42 -07:00
6e323554c6
chore: add Meta-ExternalAgent 2024-07-29 08:27:31 -07:00
2972926532
chore: add OAI-SearchBot 2024-07-26 09:06:10 -07:00
Glyn Normington
c17cae6e9d
link to bot metrics table
Make it easier to view the table.
2024-07-17 02:28:32 +01:00
3692d66918
chore: update bots table 2024-07-16 12:07:01 -07:00
af52578965
chore: drop google adbot; add GoogleOther bots 2024-07-16 12:05:34 -07:00
570fd36ea2
chore: update bots table 2024-07-10 19:47:23 -07:00
0ca6bce87e
chore: add ImagesiftBot 2024-07-09 17:41:32 -07:00
0971af19b6
chore: peer39 unrelated to ai 2024-07-09 17:39:51 -07:00
74fa789985
Merge pull request #18 from glyn/add-ref
Add reference
2024-06-29 12:22:23 -07:00
Glyn Normington
4a8ce0b51d Add reference
This links to https://github.com/glyn/nginx_robot_access
2024-06-29 12:20:38 +01:00
70e8985622
chore: remove unused image 2024-06-22 12:44:02 -07:00
fd4ade555c
Merge pull request #17 from corbindavenport/main
More bot information and improved README
2024-06-22 12:42:52 -07:00
Corbin Davenport
fe98e41546
Update Perplexity information 2024-06-21 22:17:45 -04:00
Corbin Davenport
20dc4e16ad
Update releases feed links 2024-06-21 22:02:54 -04:00
Corbin Davenport
149a72be0c
Add bot info for omgili, peer39, and youbot 2024-06-21 21:43:41 -04:00
Corbin Davenport
ec4610a118
Add information for Google, Meta, and img2dataset bots 2024-06-21 20:49:25 -04:00
4163ca92a5
chore: clean up unnecessary ai.txt 2024-06-21 08:58:44 -07:00
22 changed files with 1403 additions and 60 deletions

14 .github/FUNDING.yml vendored

@@ -1,14 +0,0 @@
# These are supported funding model platforms
github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
buy_me_a_coffee: cory
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

31 .github/workflows/ai_robots_update.yml vendored Normal file

@@ -0,0 +1,31 @@
name: Updates for AI robots files
on:
  schedule:
    - cron: "0 0 * * *"
jobs:
  dark-visitors:
    runs-on: ubuntu-latest
    name: dark-visitors
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - run: |
          pip install beautifulsoup4 requests
          git config --global user.name "dark-visitors"
          git config --global user.email "dark-visitors@users.noreply.github.com"
          echo "Updating robots.json with data from darkvisitor.com ..."
          python code/robots.py --update
          echo "... done."
          git --no-pager diff
          git add -A
          git diff --quiet && git diff --staged --quiet || (git commit -m "Update from Dark Visitors" && git push)
        shell: bash
  convert:
    name: convert
    needs: dark-visitors
    uses: ./.github/workflows/main.yml
    secrets: inherit
    with:
      message: "Update from Dark Visitors"

48 .github/workflows/main.yml vendored Normal file

@@ -0,0 +1,48 @@
on:
  workflow_call:
    inputs:
      message:
        type: string
        required: true
        description: The message to commit
  push:
    paths:
      - 'robots.json'
      - '.github/workflows/**'
      - 'code/**'
    branches:
      - "main"
jobs:
  ai-robots-txt:
    runs-on: ubuntu-latest
    name: ai-robots-txt
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - run: |
          pip install beautifulsoup4
          git config --global user.name "ai.robots.txt"
          git config --global user.email "ai.robots.txt@users.noreply.github.com"
          git log -1
          git status
          echo "Updating robots.txt and table-of-bot-metrics.md if necessary ..."
          python code/robots.py --convert
          echo "... done."
          git --no-pager diff
          git add -A
          if [ -z "$(git diff --staged)" ]; then
            # To have the action run successfully, if no changes are staged, we
            # manually skip the later commits because they fail with exit code 1
            # and this would then display as a failure for the Action.
            echo "No staged changes to commit. Skipping commit and push."
            exit 0
          fi
          if [ -n "${{ inputs.message }}" ]; then
            git commit -m "${{ inputs.message }}"
          else
            git commit -m "${{ github.event.head_commit.message }}"
          fi
          git push
        shell: bash

21 .github/workflows/run-tests.yml vendored Normal file

@@ -0,0 +1,21 @@
on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main
jobs:
  run-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - name: Install dependencies
        run: |
          pip install -U requests beautifulsoup4
      - name: Run tests
        run: |
          code/tests.py

29 .github/workflows/upload-robots-txt-file-to-release.yml vendored Normal file

@@ -0,0 +1,29 @@
---
name: "Upload robots.txt file to release"
run-name: "Upload robots.txt file to release"
on:
  release:
    types:
      - published
permissions:
  contents: write
jobs:
  upload-robots-txt-file-to-release:
    name: "Upload robots.txt file to release"
    runs-on: ubuntu-latest
    steps:
      - name: "Checkout"
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.release.tag_name }}
      - name: "Upload"
        run: gh --repo "${REPO}" release upload "${TAG}" robots.txt
        env:
          GH_TOKEN: ${{ github.token }}
          REPO: ${{ github.repository }}
          TAG: ${{ github.event.release.tag_name }}

5 .gitignore vendored

@@ -1 +1,4 @@
.DS_Store
.venv
venv
__pycache__

3 .htaccess Normal file

@@ -0,0 +1,3 @@
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|PanguBot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot) [NC]
RewriteRule !^/?robots\.txt$ - [F,L]

57 FAQ.md Normal file

@@ -0,0 +1,57 @@
# Frequently asked questions

## Why should we block these crawlers?

They're extractive, confer no benefit to the creators of data they're ingesting and also have wide-ranging negative externalities: particularly copyright abuse and environmental impact.

**[How Tech Giants Cut Corners to Harvest Data for A.I.](https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html?unlocked_article_code=1.ik0.Ofja.L21c1wyW-0xj&ugrp=m)**

> OpenAI, Google and Meta ignored corporate policies, altered their own rules and discussed skirting copyright law as they sought online information to train their newest artificial intelligence systems.

**[How AI copyright lawsuits could make the whole industry go extinct](https://www.theverge.com/24062159/ai-copyright-fair-use-lawsuits-new-york-times-openai-chatgpt-decoder-podcast)**

> The New York Times' lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.

**[Reconciling the contrasting narratives on the environmental impact of large language models](https://www.nature.com/articles/s41598-024-76682-6)**

> Studies have shown that the training of just one LLM can consume as much energy as five cars do across their lifetimes. The water footprint of AI is also substantial; for example, recent work has highlighted that water consumption associated with AI models involves data centers using millions of gallons of water per day for cooling. Additionally, the energy consumption and carbon emissions of AI are projected to grow quickly in the coming years [...].

**[Scientists Predict AI to Generate Millions of Tons of E-Waste](https://www.sciencealert.com/scientists-predict-ai-to-generate-millions-of-tons-of-e-waste)**

> we could end up with between 1.2 million and 5 million metric tons of additional electronic waste by the end of this decade [the 2020's].

## How do we know AI companies/bots respect `robots.txt`?

The short answer is that we don't. `robots.txt` is a well-established standard, but compliance is voluntary. There is no enforcement mechanism.

## Why might AI web crawlers respect `robots.txt`?

Larger and/or reputable companies developing AI models probably wouldn't want to damage their reputation by ignoring `robots.txt`.

Also, given the contentious nature of AI and the possibility of legislation limiting its development, companies developing AI models will probably want to be seen to be behaving ethically, and so should (eventually) respect `robots.txt`.

## Can we block crawlers based on user agent strings?

Yes, provided the crawlers identify themselves and your application/hosting supports doing so.

Some crawlers — [such as Perplexity](https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/) — do not identify themselves via their user agent strings and, as such, are difficult to block.

## What can we do if a bot doesn't respect `robots.txt`?

That depends on your stack.

- Nginx
  - [Blocking Bots with Nginx](https://rknight.me/blog/blocking-bots-with-nginx/) by Robb Knight
  - [Blocking AI web crawlers](https://underlap.org/blocking-ai-web-crawlers) by Glyn Normington
- Apache httpd
  - [Blockin' bots.](https://ethanmarcotte.com/wrote/blockin-bots/) by Ethan Marcotte
  - [Blocking Bots With 11ty And Apache](https://flamedfury.com/posts/blocking-bots-with-11ty-and-apache/) by fLaMEd fury

  > [!TIP]
  > The snippets in these articles all use `mod_rewrite`, which [should be considered a last resort](https://httpd.apache.org/docs/trunk/rewrite/avoid.html). A good alternative that's less resource-intensive is `mod_setenvif` (a sketch follows this list); see [httpd docs](https://httpd.apache.org/docs/trunk/rewrite/access.html#blocking-of-robots) for an example. You should also consider [setting this up in `httpd.conf` instead of `.htaccess`](https://httpd.apache.org/docs/trunk/howto/htaccess.html#when) if it's available to you.
- Netlify
  - [Blockin' bots on Netlify](https://www.jeremiak.com/blog/block-bots-netlify-edge-functions/) by Jeremia Kimelman
- Cloudflare
  - [Block AI bots, scrapers and crawlers with a single click](https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click) by Cloudflare
  - [I'm blocking AI crawlers](https://roelant.net/en/2024/im-blocking-ai-crawlers-part-2/) by Roelant
- Vercel
  - [Block AI Bots Firewall Rule](https://vercel.com/templates/firewall/block-ai-bots-firewall-rule) by Vercel
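A minimal `mod_setenvif` sketch of the tip above, assuming a hypothetical shortened bot list (the real alternation is generated into `.htaccess` from `robots.json`), following the robot-blocking pattern in the httpd docs:

```apache
# Hypothetical short list for illustration only.
SetEnvIfNoCase User-Agent "(GPTBot|CCBot|ClaudeBot|Bytespider)" ai_bot
<RequireAll>
    Require all granted
    Require not env ai_bot
</RequireAll>
```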
## How can I contribute?

Open a pull request. It will be reviewed and acted upon appropriately. **We really appreciate contributions** — this is a community effort.

README.md

@@ -2,25 +2,57 @@
<img src="/assets/images/noai-logo.png" width="100" />

**[Subscribe to updates via RSS/Atom by clicking on this link.](https://github.com/ai-robots-txt/ai.robots.txt/releases.atom)**
_(Or paste the link into your preferred feed reader.)_

---

This is an open list of web crawlers associated with AI companies and the training of LLMs to block. We encourage you to contribute to and implement this list on your own site. See [information about the listed crawlers](./table-of-bot-metrics.md) and the [FAQ](https://github.com/ai-robots-txt/ai.robots.txt/blob/main/FAQ.md).

A number of these crawlers have been sourced from [Dark Visitors](https://darkvisitors.com) and we appreciate the ongoing effort they put in to track these crawlers.

If you'd like to add information about a crawler to the list, please make a pull request with the bot name added to `robots.txt`, `ai.txt`, and any relevant details in `table-of-bot-metrics.md` to help people understand what's crawling.

## Usage

This repository provides the following files:

- `robots.txt`
- `.htaccess`
- `nginx-block-ai-bots.conf`

`robots.txt` implements the Robots Exclusion Protocol ([RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html)).

`.htaccess` may be used to configure web servers such as [Apache httpd](https://httpd.apache.org/) to return an error page when one of the listed AI crawlers sends a request to the web server. Note that, as stated in the [httpd documentation](https://httpd.apache.org/docs/current/howto/htaccess.html), more performant methods than an `.htaccess` file exist.

`nginx-block-ai-bots.conf` implements a Nginx configuration snippet that can be included in any virtual host `server {}` block via the `include` directive.
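For example, a minimal virtual host using the snippet might look like this (host name and install path are hypothetical):

```nginx
server {
    listen 80;
    server_name example.com;                      # hypothetical host
    include /etc/nginx/nginx-block-ai-bots.conf;  # returns 403 to listed bots
    # ... rest of the virtual host configuration ...
}
```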
## Contributing

A note about contributing: updates should be added/made to `robots.json`. A GitHub action will then generate the updated `robots.txt`, `table-of-bot-metrics.md`, `.htaccess` and `nginx-block-ai-bots.conf`.

You can run the tests by [installing](https://www.python.org/about/gettingstarted/) Python 3 and issuing:

```console
code/tests.py
```

## Subscribe to updates

You can subscribe to list updates via RSS/Atom with the releases feed:

```
https://github.com/ai-robots-txt/ai.robots.txt/releases.atom
```

You can subscribe with [Feedly](https://feedly.com/i/subscription/feed/https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), [Inoreader](https://www.inoreader.com/?add_feed=https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), [The Old Reader](https://theoldreader.com/feeds/subscribe?url=https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), [Feedbin](https://feedbin.me/?subscribe=https://github.com/ai-robots-txt/ai.robots.txt/releases.atom), or any other reader app.

Alternatively, you can also subscribe to new releases with your GitHub account by clicking the ⬇️ on the "Watch" button at the top of this page, clicking "Custom" and selecting "Releases".

## Report abusive crawlers

If you use [Cloudflare's hard block](https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click) alongside this list, you can report abusive crawlers that don't respect `robots.txt` [here](https://docs.google.com/forms/d/e/1FAIpQLScbUZ2vlNSdcsb8LyTeSF7uLzQI96s0BKGoJ6wQ6ocUFNOKEg/viewform).

But even if you don't use Cloudflare's hard block, their list of [verified bots](https://radar.cloudflare.com/traffic/verified-bots) may come in handy.

## Additional resources

- [Blocking Bots with Nginx](https://rknight.me/blog/blocking-bots-with-nginx/) by Robb Knight
- [Blockin' bots.](https://ethanmarcotte.com/wrote/blockin-bots/) by Ethan Marcotte
- [Blocking Bots With 11ty And Apache](https://flamedfury.com/posts/blocking-bots-with-11ty-and-apache/) by fLaMEd fury
- [Blockin' bots on Netlify](https://www.jeremiak.com/blog/block-bots-netlify-edge-functions/) by Jeremia Kimelman
- [Blocking AI web crawlers](https://underlap.org/blocking-ai-web-crawlers) by Glyn Normington
- [Block AI Bots from Crawling Websites Using Robots.txt](https://originality.ai/ai-bot-blocking) by Jonathan Gillham, Originality.AI

---

Thank you to [Glyn](https://github.com/glyn) for pushing [me](https://coryd.dev) to set this up after [I posted about blocking these crawlers](https://coryd.dev/posts/2024/go-ahead-and-block-ai-web-crawlers/).

6 ai.txt

@@ -1,6 +0,0 @@
# Spawning AI
# Prevent datasets from using the following file types
User-Agent: *
Disallow: /
Disallow: *

Binary file not shown.


240 code/robots.py Executable file

@@ -0,0 +1,240 @@
#!/usr/bin/env python3
import json
import re

import requests
from bs4 import BeautifulSoup
from pathlib import Path


def load_robots_json():
    """Load the robots.json contents into a dictionary."""
    return json.loads(Path("./robots.json").read_text(encoding="utf-8"))


def get_agent_soup():
    """Retrieve current known agents from darkvisitors.com"""
    session = requests.Session()
    try:
        response = session.get("https://darkvisitors.com/agents")
    except requests.exceptions.ConnectionError:
        print(
            "ERROR: Could not gather the current agents from https://darkvisitors.com/agents"
        )
        return
    return BeautifulSoup(response.text, "html.parser")


def updated_robots_json(soup):
    """Update AI scraper information with data from darkvisitors."""
    existing_content = load_robots_json()
    to_include = [
        "AI Assistants",
        "AI Data Scrapers",
        "AI Search Crawlers",
        # "Archivers",
        # "Developer Helpers",
        # "Fetchers",
        # "Intelligence Gatherers",
        # "Scrapers",
        # "Search Engine Crawlers",
        # "SEO Crawlers",
        # "Uncategorized",
        "Undocumented AI Agents",
    ]

    for section in soup.find_all("div", {"class": "agent-links-section"}):
        category = section.find("h2").get_text()
        if category not in to_include:
            continue
        for agent in section.find_all("a", href=True):
            name = agent.find("div", {"class": "agent-name"}).get_text().strip()
            name = clean_robot_name(name)
            desc = agent.find("p").get_text().strip()

            default_values = {
                "Unclear at this time.",
                "No information provided.",
                "No information.",
                "No explicit frequency provided.",
            }
            default_value = "Unclear at this time."

            # Parse the operator information from the description if possible
            operator = default_value
            if "operated by " in desc:
                try:
                    operator = desc.split("operated by ", 1)[1].split(".", 1)[0].strip()
                except Exception as e:
                    print(f"Error: {e}")

            def consolidate(field: str, value: str) -> str:
                # New entry
                if name not in existing_content:
                    return value
                # New field
                if field not in existing_content[name]:
                    return value
                # Unclear value
                if (
                    existing_content[name][field] in default_values
                    and value not in default_values
                ):
                    return value
                # Existing value
                return existing_content[name][field]

            existing_content[name] = {
                "operator": consolidate("operator", operator),
                "respect": consolidate("respect", default_value),
                "function": consolidate("function", f"{category}"),
                "frequency": consolidate("frequency", default_value),
                "description": consolidate(
                    "description",
                    f"{desc} More info can be found at https://darkvisitors.com/agents{agent['href']}",
                ),
            }

    print(f"Total: {len(existing_content)}")
    sorted_keys = sorted(existing_content, key=lambda k: k.lower())
    sorted_robots = {k: existing_content[k] for k in sorted_keys}
    return sorted_robots


def clean_robot_name(name):
    """Clean the robot name by removing some characters that were mangled by html software once."""
    # This was specifically spotted in "Perplexity-User"
    # Looks like a non-breaking hyphen introduced by the HTML rendering software
    # Reading the source page for Perplexity: https://docs.perplexity.ai/guides/bots
    # You can see the bot is listed several times as "Perplexity-User" with a normal hyphen,
    # and it's only the Row-Heading that has the special hyphen
    #
    # Technically, there's no reason there wouldn't someday be a bot that
    # actually uses a non-breaking hyphen, but that seems unlikely,
    # so this solution should be fine for now.
    result = re.sub(r"\u2011", "-", name)
    if result != name:
        print(f"\tCleaned '{name}' to '{result}' - unicode/html mangled chars normalized.")
    return result


def ingest_darkvisitors():
    old_robots_json = load_robots_json()
    soup = get_agent_soup()
    if soup:
        robots_json = updated_robots_json(soup)
        print(
            "robots.json is unchanged."
            if robots_json == old_robots_json
            else "robots.json got updates."
        )
        Path("./robots.json").write_text(
            json.dumps(robots_json, indent=4), encoding="utf-8"
        )


def json_to_txt(robots_json):
    """Compose the robots.txt from the robots.json file."""
    robots_txt = "\n".join(f"User-agent: {k}" for k in robots_json.keys())
    robots_txt += "\nDisallow: /\n"
    return robots_txt


def escape_md(s):
    return re.sub(r"([]*\\|`(){}<>#+-.!_[])", r"\\\1", s)


def json_to_table(robots_json):
    """Compose a markdown table with the information in robots.json"""
    table = "| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |\n"
    table += "|------|----------|-----------------------|----------|------------------|-------------|\n"
    for name, robot in robots_json.items():
        table += f'| {escape_md(name)} | {robot["operator"]} | {robot["respect"]} | {robot["function"]} | {robot["frequency"]} | {robot["description"]} |\n'
    return table


def list_to_pcre(lst):
    # Python re is not 100% identical to PCRE which is used by Apache, but it
    # should probably be close enough in the real world for re.escape to work.
    formatted = "|".join(map(re.escape, lst))
    return f"({formatted})"


def json_to_htaccess(robot_json):
    # Creates a .htaccess filter file. It uses a regular expression to filter out
    # User agents that contain any of the blocked values.
    htaccess = "RewriteEngine On\n"
    htaccess += f"RewriteCond %{{HTTP_USER_AGENT}} {list_to_pcre(robot_json.keys())} [NC]\n"
    htaccess += "RewriteRule !^/?robots\\.txt$ - [F,L]\n"
    return htaccess


def json_to_nginx(robot_json):
    # Creates an Nginx config file. This config snippet can be included in
    # nginx server{} blocks to block AI bots.
    config = f"if ($http_user_agent ~* \"{list_to_pcre(robot_json.keys())}\") {{\n    return 403;\n}}"
    return config


def update_file_if_changed(file_name, converter):
    """Update files if newer content is available and log the (in)actions."""
    new_content = converter(load_robots_json())
    filepath = Path(file_name)
    # "touch" will create the file if it doesn't exist yet
    filepath.touch()
    old_content = filepath.read_text(encoding="utf-8")
    if old_content == new_content:
        print(f"{file_name} is already up to date.")
    else:
        Path(file_name).write_text(new_content, encoding="utf-8")
        print(f"{file_name} has been updated.")


def conversions():
    """Triggers the conversions from the json file."""
    update_file_if_changed(file_name="./robots.txt", converter=json_to_txt)
    update_file_if_changed(
        file_name="./table-of-bot-metrics.md",
        converter=json_to_table,
    )
    update_file_if_changed(
        file_name="./.htaccess",
        converter=json_to_htaccess,
    )
    update_file_if_changed(
        file_name="./nginx-block-ai-bots.conf",
        converter=json_to_nginx,
    )


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser = argparse.ArgumentParser(
        prog="ai-robots",
        description="Collects and updates information about web scrapers of AI companies.",
        epilog="One of the flags must be set.\n",
    )
    parser.add_argument(
        "--update",
        action="store_true",
        help="Update the robots.json file with data from darkvisitors.com/agents",
    )
    parser.add_argument(
        "--convert",
        action="store_true",
        help="Create the robots.txt and markdown table from robots.json",
    )
    args = parser.parse_args()

    if not (args.update or args.convert):
        print("ERROR: please provide one of the possible flags.")
        parser.print_help()

    if args.update:
        ingest_darkvisitors()

    if args.convert:
        conversions()
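As a rough usage sketch of the converters above (the import line matches the one in `code/tests.py`; output reconstructed by hand, so treat it as indicative):

```python
from robots import json_to_txt, json_to_nginx

bots = {"GPTBot": {}, "Kangaroo Bot": {}}
print(json_to_txt(bots))
# User-agent: GPTBot
# User-agent: Kangaroo Bot
# Disallow: /
print(json_to_nginx(bots))
# if ($http_user_agent ~* "(GPTBot|Kangaroo\ Bot)") {
#     return 403;
# }
```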

code/test_files/.htaccess Normal file

@@ -0,0 +1,3 @@
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash) [NC]
RewriteRule !^/?robots\.txt$ - [F,L]

code/test_files/nginx-block-ai-bots.conf Normal file

@@ -0,0 +1,3 @@
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)") {
return 403;
}

331 code/test_files/robots.json Normal file

@@ -0,0 +1,331 @@
{
    "AI2Bot": {
        "description": "Explores 'certain domains' to find web content.",
        "frequency": "No information provided.",
        "function": "Content is used to train open language models.",
        "operator": "[Ai2](https://allenai.org/crawler)",
        "respect": "Yes"
    },
    "Ai2Bot-Dolma": {
        "description": "Explores 'certain domains' to find web content.",
        "frequency": "No information provided.",
        "function": "Content is used to train open language models.",
        "operator": "[Ai2](https://allenai.org/crawler)",
        "respect": "Yes"
    },
    "Amazonbot": {
        "operator": "Amazon",
        "respect": "Yes",
        "function": "Service improvement and enabling answers for Alexa users.",
        "frequency": "No information provided.",
        "description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
    },
    "anthropic-ai": {
        "operator": "[Anthropic](https://www.anthropic.com)",
        "respect": "Unclear at this time.",
        "function": "Scrapes data to train Anthropic's AI products.",
        "frequency": "No information provided.",
        "description": "Scrapes data to train LLMs and AI products offered by Anthropic."
    },
    "Applebot": {
        "operator": "Unclear at this time.",
        "respect": "Unclear at this time.",
        "function": "AI Search Crawlers",
        "frequency": "Unclear at this time.",
        "description": "Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot"
    },
    "Applebot-Extended": {
        "operator": "[Apple](https://support.apple.com/en-us/119829#datausage)",
        "respect": "Yes",
        "function": "Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others.",
        "frequency": "Unclear at this time.",
        "description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
    },
    "Bytespider": {
        "operator": "ByteDance",
        "respect": "No",
        "function": "LLM training.",
        "frequency": "Unclear at this time.",
        "description": "Downloads data to train LLMS, including ChatGPT competitors."
    },
    "CCBot": {
        "operator": "[Common Crawl Foundation](https://commoncrawl.org)",
        "respect": "[Yes](https://commoncrawl.org/ccbot)",
        "function": "Provides open crawl dataset, used for many purposes, including Machine Learning/AI.",
        "frequency": "Monthly at present.",
        "description": "Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers)."
    },
    "ChatGPT-User": {
        "operator": "[OpenAI](https://openai.com)",
        "respect": "Yes",
        "function": "Takes action based on user prompts.",
        "frequency": "Only when prompted by a user.",
        "description": "Used by plugins in ChatGPT to answer queries based on user input."
    },
    "Claude-Web": {
        "operator": "[Anthropic](https://www.anthropic.com)",
        "respect": "Unclear at this time.",
        "function": "Scrapes data to train Anthropic's AI products.",
        "frequency": "No information provided.",
        "description": "Scrapes data to train LLMs and AI products offered by Anthropic."
    },
    "ClaudeBot": {
        "operator": "[Anthropic](https://www.anthropic.com)",
        "respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
        "function": "Scrapes data to train Anthropic's AI products.",
        "frequency": "No information provided.",
        "description": "Scrapes data to train LLMs and AI products offered by Anthropic."
    },
    "cohere-ai": {
        "operator": "[Cohere](https://cohere.com)",
        "respect": "Unclear at this time.",
        "function": "Retrieves data to provide responses to user-initiated prompts.",
        "frequency": "Takes action based on user prompts.",
        "description": "Retrieves data based on user prompts."
    },
    "Diffbot": {
        "operator": "[Diffbot](https://www.diffbot.com/)",
        "respect": "At the discretion of Diffbot users.",
        "function": "Aggregates structured web data for monitoring and AI model training.",
        "frequency": "Unclear at this time.",
        "description": "Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training."
    },
    "FacebookBot": {
        "operator": "Meta/Facebook",
        "respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)",
        "function": "Training language models",
        "frequency": "Up to 1 page per second",
        "description": "Officially used for training Meta \"speech recognition technology,\" unknown if used to train Meta AI specifically."
    },
    "facebookexternalhit": {
        "description": "Unclear at this time.",
        "frequency": "Unclear at this time.",
        "function": "No information.",
        "operator": "Meta/Facebook",
        "respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)"
    },
    "FriendlyCrawler": {
        "description": "Unclear who the operator is; but data is used for training/machine learning.",
        "frequency": "Unclear at this time.",
        "function": "We are using the data from the crawler to build datasets for machine learning experiments.",
        "operator": "Unknown",
        "respect": "[Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler)"
    },
    "Google-Extended": {
        "operator": "Google",
        "respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
        "function": "LLM training.",
        "frequency": "No information.",
        "description": "Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search."
    },
    "GoogleOther": {
        "description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Image": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Video": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GPTBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Scrapes data to train OpenAI's products.",
"frequency": "No information.",
"description": "Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies."
},
"iaskspider/2.0": {
"description": "Used to provide answers to user queries.",
"frequency": "Unclear at this time.",
"function": "Crawls sites to provide answers to user queries.",
"operator": "iAsk",
"respect": "No"
},
"ICC-Crawler": {
"description": "Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business.",
"frequency": "No information.",
"function": "Scrapes data to train and support AI technologies.",
"operator": "[NICT](https://nict.go.jp)",
"respect": "Yes"
},
"ImagesiftBot": {
"description": "Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images.",
"frequency": "No information.",
"function": "ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products",
"operator": "[ImageSift](https://imagesift.com)",
"respect": "[Yes](https://imagesift.com/about)"
},
"img2dataset": {
"description": "Downloads large sets of images into datasets for LLM training or other purposes.",
"frequency": "At the discretion of img2dataset users.",
"function": "Scrapes images for use in LLMs.",
"operator": "[img2dataset](https://github.com/rom1504/img2dataset)",
"respect": "Unclear at this time."
},
"ISSCyberRiskCrawler": {
"description": "Used to train machine learning based models to quantify cyber risk.",
"frequency": "No information.",
"function": "Scrapes data to train machine learning models.",
"operator": "[ISS-Corporate](https://iss-cyber.com)",
"respect": "No"
},
"Kangaroo Bot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
},
"Meta-ExternalAgent": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
"respect": "Yes.",
"function": "Used to train models and improve products.",
"frequency": "No information.",
"description": "\"The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly.\""
},
"Meta-ExternalFetcher": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"OAI-SearchBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "[Yes](https://platform.openai.com/docs/bots)",
"function": "Search result generation.",
"frequency": "No information.",
"description": "Crawls sites to surface as results in SearchGPT."
},
"omgili": {
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/)",
"function": "Data is sold.",
"frequency": "No information.",
"description": "Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training."
},
"omgilibot": {
"description": "Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io.",
"frequency": "No information.",
"function": "Data is sold.",
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
},
"Perplexity-User": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://docs.perplexity.ai/guides/bots)",
"function": "Used to answer queries at the request of users.",
"frequency": "Only when prompted by a user.",
"description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
},
"PerplexityBot": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/)",
"function": "Used to answer queries at the request of users.",
"frequency": "Takes action based on user prompts.",
"description": "Operated by Perplexity to obtain results in response to user queries."
},
"PetalBot": {
"description": "Operated by Huawei to provide search and AI assistant services.",
"frequency": "No explicit frequency provided.",
"function": "Used to provide recommendations in Hauwei assistant and AI search services.",
"operator": "[Huawei](https://huawei.com/)",
"respect": "Yes"
},
"Scrapy": {
"description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
"frequency": "No information.",
"function": "Scrapes data for a variety of uses including training AI.",
"operator": "[Zyte](https://www.zyte.com)",
"respect": "Unclear at this time."
},
"Sidetrade indexer bot": {
"description": "AI product training.",
"frequency": "No information.",
"function": "Extracts data for a variety of uses including training AI.",
"operator": "[Sidetrade](https://www.sidetrade.com)",
"respect": "Unclear at this time."
},
"Timpibot": {
"operator": "[Timpi](https://timpi.io)",
"respect": "Unclear at this time.",
"function": "Scrapes data for use in training LLMs.",
"frequency": "No information.",
"description": "Makes data available for training AI models."
},
"VelenPublicWebCrawler": {
"description": "\"Our goal with this crawler is to build business datasets and machine learning models to better understand the web.\"",
"frequency": "No information.",
"function": "Scrapes data for business data sets and machine learning models.",
"operator": "[Velen Crawler](https://velen.io)",
"respect": "[Yes](https://velen.io)"
},
"Webzio-Extended": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"YouBot": {
"operator": "[You](https://about.you.com/youchat/)",
"respect": "[Yes](https://about.you.com/youbot/)",
"function": "Scrapes data for search engine and LLMs.",
"frequency": "No information.",
"description": "Retrieves data used for You.com web search engine and LLMs."
},
"crawler.with.dots": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression dots need to be escaped."
},
"star***crawler": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression stars need to be escaped."
},
"Is this a crawler?": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression spaces and question marks need to be escaped."
},
"a[mazing]{42}(robot)": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression parantheses, braces, etc. need to be escaped."
},
"2^32$": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression RE anchor characters need to be escaped."
},
"curl|sudo bash": {
"operator": "Test suite",
"respect": "No",
"function": "To ensure the code works correctly.",
"frequency": "No information.",
"description": "When used in the .htaccess regular expression pipes need to be escaped."
}
}
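
Every fixture entry above carries the same five fields (operator, respect, function, frequency, description). A quick sanity check over the fixture, as a sketch (the path assumes it is run from the repository root; `REQUIRED` is an illustrative name):

```python
import json

REQUIRED = {"operator", "respect", "function", "frequency", "description"}

with open("code/test_files/robots.json", "rt") as f:
    robots = json.load(f)

for name, entry in robots.items():
    # Flag any bot entry that is missing one of the five schema fields.
    missing = REQUIRED - entry.keys()
    assert not missing, f"{name} is missing {missing}"
```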

48
code/test_files/robots.txt Normal file

@@ -0,0 +1,48 @@
User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: Amazonbot
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: cohere-ai
User-agent: Diffbot
User-agent: FacebookBot
User-agent: facebookexternalhit
User-agent: FriendlyCrawler
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iaskspider/2.0
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: Meta-ExternalAgent
User-agent: Meta-ExternalFetcher
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: Scrapy
User-agent: Sidetrade indexer bot
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: Webzio-Extended
User-agent: YouBot
User-agent: crawler.with.dots
User-agent: star***crawler
User-agent: Is this a crawler?
User-agent: a[mazing]{42}(robot)
User-agent: 2^32$
User-agent: curl|sudo bash
Disallow: /
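
This fixture is exactly what a `json_to_txt`-style generator emits: one `User-agent` line per bot, in the order the JSON lists them, closed by a single blanket `Disallow`. A minimal sketch (not necessarily the repository's exact implementation):

```python
def robots_txt_sketch(robots):
    # One User-agent line per bot name, then a single Disallow for all.
    lines = [f"User-agent: {name}" for name in robots]
    lines.append("Disallow: /")
    return "\n".join(lines) + "\n"
```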

49
code/test_files/table-of-bot-metrics.md Normal file

@@ -0,0 +1,49 @@
| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|------|----------|-----------------------|----------|------------------|-------------|
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMs, including ChatGPT competitors. |
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude\-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
| facebookexternalhit | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | No information. | Unclear at this time. | Unclear at this time. |
| FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is, but data is used for training/machine learning. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models; paywalled data, PII, and data that violates the company's policies are removed. |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Uses the collected data for artificial intelligence technologies; provides data to third parties, including commercial companies; those companies can use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
| Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| Meta\-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for the Omgili search engine. Unknown if still in use; the `omgili` agent is still used by Webz.io. |
| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visits web pages to help provide an accurate answer and includes links to the page in Perplexity responses. |
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [No](https://www.macstories.net/stories/wired-confirms-perplexity-is-bypassing-efforts-by-websites-to-block-its-web-crawler/) | Used to answer queries at the request of users. | Takes action based on user prompts. | Operated by Perplexity to obtain results in response to user queries. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
| crawler\.with\.dots | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, dots need to be escaped. |
| star\*\*\*crawler | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, stars need to be escaped. |
| Is this a crawler? | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, spaces and question marks need to be escaped. |
| a\[mazing\]\{42\}\(robot\) | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, parentheses, braces, etc. need to be escaped. |
| 2^32$ | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, RE anchor characters need to be escaped. |
| curl\|sudo bash | Test suite | No | To ensure the code works correctly. | No information. | When used in the .htaccess regular expression, pipes need to be escaped. |
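
In the table fixture, markdown-sensitive characters in bot names are backslash-escaped (hyphens, dots, stars, brackets, braces, parentheses, pipes), while the remaining columns carry the five JSON fields verbatim. A sketch of that row construction (both helper names are illustrative):

```python
def md_escape(name):
    # Escape exactly the characters the fixture shows escaped in bot names;
    # ^, $, ? and spaces pass through unchanged, matching the rows above.
    for ch in "-.*[]{}()|":
        name = name.replace(ch, "\\" + ch)
    return name

def table_rows_sketch(robots):
    header = [
        "| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |",
        "|------|----------|-----------------------|----------|------------------|-------------|",
    ]
    rows = [
        f"| {md_escape(name)} | {e['operator']} | {e['respect']} | "
        f"{e['function']} | {e['frequency']} | {e['description']} |"
        for name, e in robots.items()
    ]
    return "\n".join(header + rows) + "\n"
```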

73
code/tests.py Executable file

@@ -0,0 +1,73 @@
#!/usr/bin/env python3
"""To run these tests just execute this script."""

import json
import unittest

from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx


class RobotsUnittestExtensions:
    def loadJson(self, pathname):
        with open(pathname, "rt") as f:
            return json.load(f)

    def assertEqualsFile(self, f, s):
        with open(f, "rt") as f:
            f_contents = f.read()

        return self.assertMultiLineEqual(f_contents, s)


class TestRobotsTXTGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_robots_txt_generation(self):
        robots_txt = json_to_txt(self.robots_dict)
        self.assertEqualsFile("test_files/robots.txt", robots_txt)


class TestTableMetricsGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 32768

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_table_generation(self):
        robots_table = json_to_table(self.robots_dict)
        self.assertEqualsFile("test_files/table-of-bot-metrics.md", robots_table)


class TestHtaccessGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_htaccess_generation(self):
        robots_htaccess = json_to_htaccess(self.robots_dict)
        self.assertEqualsFile("test_files/.htaccess", robots_htaccess)


class TestNginxConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
    maxDiff = 8192

    def setUp(self):
        self.robots_dict = self.loadJson("test_files/robots.json")

    def test_nginx_generation(self):
        robots_nginx = json_to_nginx(self.robots_dict)
        self.assertEqualsFile("test_files/nginx-block-ai-bots.conf", robots_nginx)


class TestRobotsNameCleaning(unittest.TestCase):
    def test_clean_name(self):
        from robots import clean_robot_name

        # The input contains a non-breaking hyphen (U+2011), the HTML-mangled
        # form the cleaner is expected to normalise to a plain "-".
        self.assertEqual(clean_robot_name("Perplexity‑User"), "Perplexity-User")


if __name__ == "__main__":
    import os
    os.chdir(os.path.dirname(__file__))

    unittest.main(verbosity=2)
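
Note the `os.chdir(os.path.dirname(__file__))` guard at the bottom: it makes the `test_files/…` paths resolve relative to `code/` regardless of the caller's working directory, so `python3 code/tests.py` from the repository root behaves the same as `./tests.py` from inside `code/`.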

3
nginx-block-ai-bots.conf Normal file

@@ -0,0 +1,3 @@
if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|PanguBot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot)") {
return 403;
}
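
Since Nginx only permits `if` blocks in `server` and `location` context, this assembled snippet is meant to be pulled into an existing `server` block (for example via an `include` directive) rather than loaded at the `http` level.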

338
robots.json Normal file

@@ -0,0 +1,338 @@
{
"AI2Bot": {
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes",
"function": "Content is used to train open language models.",
"frequency": "No information provided.",
"description": "Explores 'certain domains' to find web content."
},
"Ai2Bot-Dolma": {
"description": "Explores 'certain domains' to find web content.",
"frequency": "No information provided.",
"function": "Content is used to train open language models.",
"operator": "[Ai2](https://allenai.org/crawler)",
"respect": "Yes"
},
"Amazonbot": {
"operator": "Amazon",
"respect": "Yes",
"function": "Service improvement and enabling answers for Alexa users.",
"frequency": "No information provided.",
"description": "Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses."
},
"anthropic-ai": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"Applebot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Search Crawlers",
"frequency": "Unclear at this time.",
"description": "Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot"
},
"Applebot-Extended": {
"operator": "[Apple](https://support.apple.com/en-us/119829#datausage)",
"respect": "Yes",
"function": "Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others.",
"frequency": "Unclear at this time.",
"description": "Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools."
},
"Brightbot 1.0": {
"operator": "Browsing.ai",
"respect": "Unclear at this time.",
"function": "LLM/AI training.",
"frequency": "Unclear at this time.",
"description": "Scrapes data to train LLMs and AI products focused on website customer support."
},
"Bytespider": {
"operator": "ByteDance",
"respect": "No",
"function": "LLM training.",
"frequency": "Unclear at this time.",
"description": "Downloads data to train LLMS, including ChatGPT competitors."
},
"CCBot": {
"operator": "[Common Crawl Foundation](https://commoncrawl.org)",
"respect": "[Yes](https://commoncrawl.org/ccbot)",
"function": "Provides open crawl dataset, used for many purposes, including Machine Learning/AI.",
"frequency": "Monthly at present.",
"description": "Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers)."
},
"ChatGPT-User": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Takes action based on user prompts.",
"frequency": "Only when prompted by a user.",
"description": "Used by plugins in ChatGPT to answer queries based on user input."
},
"Claude-Web": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "Unclear at this time.",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"ClaudeBot": {
"operator": "[Anthropic](https://www.anthropic.com)",
"respect": "[Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler)",
"function": "Scrapes data to train Anthropic's AI products.",
"frequency": "No information provided.",
"description": "Scrapes data to train LLMs and AI products offered by Anthropic."
},
"cohere-ai": {
"operator": "[Cohere](https://cohere.com)",
"respect": "Unclear at this time.",
"function": "Retrieves data to provide responses to user-initiated prompts.",
"frequency": "Takes action based on user prompts.",
"description": "Retrieves data based on user prompts."
},
"cohere-training-data-crawler": {
"operator": "Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler"
},
"Crawlspace": {
"operator": "[Crawlspace](https://crawlspace.dev)",
"respect": "[Yes](https://news.ycombinator.com/item?id=42756654)",
"function": "Scrapes data",
"frequency": "Unclear at this time.",
"description": "Provides crawling services for any purpose, probably including AI model training."
},
"Diffbot": {
"operator": "[Diffbot](https://www.diffbot.com/)",
"respect": "At the discretion of Diffbot users.",
"function": "Aggregates structured web data for monitoring and AI model training.",
"frequency": "Unclear at this time.",
"description": "Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training."
},
"DuckAssistBot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot"
},
"FacebookBot": {
"operator": "Meta/Facebook",
"respect": "[Yes](https://developers.facebook.com/docs/sharing/bot/)",
"function": "Training language models",
"frequency": "Up to 1 page per second",
"description": "Officially used for training Meta \"speech recognition technology,\" unknown if used to train Meta AI specifically."
},
"FriendlyCrawler": {
"description": "Unclear who the operator is; but data is used for training/machine learning.",
"frequency": "Unclear at this time.",
"function": "We are using the data from the crawler to build datasets for machine learning experiments.",
"operator": "Unknown",
"respect": "[Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler)"
},
"Google-Extended": {
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
"function": "LLM training.",
"frequency": "No information.",
"description": "Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search."
},
"GoogleOther": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Image": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GoogleOther-Video": {
"description": "\"Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development.\"",
"frequency": "No information.",
"function": "Scrapes data.",
"operator": "Google",
"respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)"
},
"GPTBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "Yes",
"function": "Scrapes data to train OpenAI's products.",
"frequency": "No information.",
"description": "Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies."
},
"iaskspider/2.0": {
"description": "Used to provide answers to user queries.",
"frequency": "Unclear at this time.",
"function": "Crawls sites to provide answers to user queries.",
"operator": "iAsk",
"respect": "No"
},
"ICC-Crawler": {
"description": "Use the collected data for artificial intelligence technologies; provide data to third parties, including commercial companies; those companies can use the data for their own business.",
"frequency": "No information.",
"function": "Scrapes data to train and support AI technologies.",
"operator": "[NICT](https://nict.go.jp)",
"respect": "Yes"
},
"ImagesiftBot": {
"description": "Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images.",
"frequency": "No information.",
"function": "ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products",
"operator": "[ImageSift](https://imagesift.com)",
"respect": "[Yes](https://imagesift.com/about)"
},
"img2dataset": {
"description": "Downloads large sets of images into datasets for LLM training or other purposes.",
"frequency": "At the discretion of img2dataset users.",
"function": "Scrapes images for use in LLMs.",
"operator": "[img2dataset](https://github.com/rom1504/img2dataset)",
"respect": "Unclear at this time."
},
"imgproxy": {
"frequency": "No information.",
"function": "Not documented or explained on operator's site.",
"operator": "[imgproxy](https://imgproxy.net)",
"respect": "Unclear at this time.",
"description": "AI-powered image processing."
},
"ISSCyberRiskCrawler": {
"description": "Used to train machine learning based models to quantify cyber risk.",
"frequency": "No information.",
"function": "Scrapes data to train machine learning models.",
"operator": "[ISS-Corporate](https://iss-cyber.com)",
"respect": "No"
},
"Kangaroo Bot": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
},
"Meta-ExternalAgent": {
"operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
"respect": "Yes.",
"function": "Used to train models and improve products.",
"frequency": "No information.",
"description": "\"The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly.\""
},
"Meta-ExternalFetcher": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Assistants",
"frequency": "Unclear at this time.",
"description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch an individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
},
"OAI-SearchBot": {
"operator": "[OpenAI](https://openai.com)",
"respect": "[Yes](https://platform.openai.com/docs/bots)",
"function": "Search result generation.",
"frequency": "No information.",
"description": "Crawls sites to surface as results in SearchGPT."
},
"omgili": {
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/)",
"function": "Data is sold.",
"frequency": "No information.",
"description": "Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training."
},
"omgilibot": {
"description": "Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io.",
"frequency": "No information.",
"function": "Data is sold.",
"operator": "[Webz.io](https://webz.io/)",
"respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
},
"PanguBot": {
"operator": "the Chinese company Huawei",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot"
},
"Perplexity-User": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[No](https://docs.perplexity.ai/guides/bots)",
"function": "Used to answer queries at the request of users.",
"frequency": "Only when prompted by a user.",
"description": "Visit web pages to help provide an accurate answer and include links to the page in Perplexity response."
},
"PerplexityBot": {
"operator": "[Perplexity](https://www.perplexity.ai/)",
"respect": "[Yes](https://docs.perplexity.ai/guides/bots)",
"function": "Search result generation.",
"frequency": "No information.",
"description": "Crawls sites to surface as results in Perplexity."
},
"PetalBot": {
"description": "Operated by Huawei to provide search and AI assistant services.",
"frequency": "No explicit frequency provided.",
"function": "Used to provide recommendations in Hauwei assistant and AI search services.",
"operator": "[Huawei](https://huawei.com/)",
"respect": "Yes"
},
"Scrapy": {
"description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
"frequency": "No information.",
"function": "Scrapes data for a variety of uses including training AI.",
"operator": "[Zyte](https://www.zyte.com)",
"respect": "Unclear at this time."
},
"SemrushBot-OCOB": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Crawls your site for ContentShake AI tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"SemrushBot-SWA": {
"operator": "[Semrush](https://www.semrush.com/)",
"respect": "[Yes](https://www.semrush.com/bot/)",
"function": "Checks URLs on your site for SWA tool.",
"frequency": "Roughly once every 10 seconds.",
"description": "You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL)."
},
"Sidetrade indexer bot": {
"description": "AI product training.",
"frequency": "No information.",
"function": "Extracts data for a variety of uses including training AI.",
"operator": "[Sidetrade](https://www.sidetrade.com)",
"respect": "Unclear at this time."
},
"Timpibot": {
"operator": "[Timpi](https://timpi.io)",
"respect": "Unclear at this time.",
"function": "Scrapes data for use in training LLMs.",
"frequency": "No information.",
"description": "Makes data available for training AI models."
},
"VelenPublicWebCrawler": {
"description": "\"Our goal with this crawler is to build business datasets and machine learning models to better understand the web.\"",
"frequency": "No information.",
"function": "Scrapes data for business data sets and machine learning models.",
"operator": "[Velen Crawler](https://velen.io)",
"respect": "[Yes](https://velen.io)"
},
"Webzio-Extended": {
"operator": "Unclear at this time.",
"respect": "Unclear at this time.",
"function": "AI Data Scrapers",
"frequency": "Unclear at this time.",
"description": "Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended"
},
"YouBot": {
"operator": "[You](https://about.you.com/youchat/)",
"respect": "[Yes](https://about.you.com/youbot/)",
"function": "Scrapes data for search engine and LLMs.",
"frequency": "No information.",
"description": "Retrieves data used for You.com web search engine and LLMs."
}
}

robots.txt

@@ -1,24 +1,49 @@
User-agent: AdsBot-Google
User-agent: AI2Bot
User-agent: Ai2Bot-Dolma
User-agent: Amazonbot
User-agent: anthropic-ai
User-agent: Applebot
User-agent: Applebot-Extended
User-agent: Brightbot 1.0
User-agent: Bytespider
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: ClaudeBot
User-agent: Claude-Web
User-agent: ClaudeBot
User-agent: cohere-ai
User-agent: cohere-training-data-crawler
User-agent: Crawlspace
User-agent: Diffbot
User-agent: DuckAssistBot
User-agent: FacebookBot
User-agent: FriendlyCrawler
User-agent: Google-Extended
User-agent: GoogleOther
User-agent: GoogleOther-Image
User-agent: GoogleOther-Video
User-agent: GPTBot
User-agent: iaskspider/2.0
User-agent: ICC-Crawler
User-agent: ImagesiftBot
User-agent: img2dataset
User-agent: imgproxy
User-agent: ISSCyberRiskCrawler
User-agent: Kangaroo Bot
User-agent: Meta-ExternalAgent
User-agent: Meta-ExternalFetcher
User-agent: OAI-SearchBot
User-agent: omgili
User-agent: omgilibot
User-agent: peer39_crawler
User-agent: peer39_crawler/1.0
User-agent: PanguBot
User-agent: Perplexity-User
User-agent: PerplexityBot
User-agent: PetalBot
User-agent: Scrapy
User-agent: SemrushBot-OCOB
User-agent: SemrushBot-SWA
User-agent: Sidetrade indexer bot
User-agent: Timpibot
User-agent: VelenPublicWebCrawler
User-agent: Webzio-Extended
User-agent: YouBot
Disallow: /

table-of-bot-metrics.md

@@ -1,24 +1,50 @@
|Name |Operator |Respects `robots.txt` |Data use |Visit regularity |Description |
|----------------|---------|-----------------------|----------|------------------|-------------|
| AdsBot-Google | Google | Yes (Exceptions for Dynamic Search Ads) | Analyzes website content for ad relevancy, improves ad serving for Google Ads. Data anonymized according to [Google's Privacy Policy](https://policies.google.com/privacy). Unclear on data retention or use by other products. | Varies depending on campaign activity and website updates. Crawls optimized to minimize impact, specific frequency not public. | Web crawler by Google Ads to analyze websites for ad effectiveness and ensure ad relevancy to webpage content. |
|Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
|anthropic-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|Applebot-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | | | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
|Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMS, including ChatGPT competitors. |
|CCBot | [Common Crawl](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides crawl data for an open source repository that has been used to train LLMs. | Unclear at this time. | Sources data that is made openly available and is used to train AI models. |
|ChatGPT-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
|ClaudeBot | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|Claude-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
|cohere-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
|Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
|FacebookBot | | | | | |
|Google-Extended| | | | | |
|GoogleOther | | | | | |
|GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information provided. | Data is used to train current and future models, removed paywalled data, PII and data that violates the company's policies. |
| img2dataset | | | | | |
|omgili | | | | | |
|omgilibot | | | | | |
|peer39_crawler| | | | | |
|peer39_crawler/1.0| | | | | |
|PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/docs/perplexitybot) | Used to answer queries at the request of users. | Takes action based on user prompts. | Operated by Perplexity to obtain results in response to user queries. |
|YouBot | | | | | |
| Name | Operator | Respects `robots.txt` | Data use | Visit regularity | Description |
|------|----------|-----------------------|----------|------------------|-------------|
| AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
| Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
| anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
| Applebot\-Extended | [Apple](https://support.apple.com/en-us/119829#datausage) | Yes | Powers features in Siri, Spotlight, Safari, Apple Intelligence, and others. | Unclear at this time. | Apple has a secondary user agent, Applebot-Extended ... [that is] used to train Apple's foundation models powering generative AI features across Apple products, including Apple Intelligence, Services, and Developer Tools. |
| Brightbot 1\.0 | Browsing.ai | Unclear at this time. | LLM/AI training. | Unclear at this time. | Scrapes data to train LLMs and AI products focused on website customer support. |
| Bytespider | ByteDance | No | LLM training. | Unclear at this time. | Downloads data to train LLMs, including ChatGPT competitors. |
| CCBot | [Common Crawl Foundation](https://commoncrawl.org) | [Yes](https://commoncrawl.org/ccbot) | Provides open crawl dataset, used for many purposes, including Machine Learning/AI. | Monthly at present. | Web archive going back to 2008. [Cited in thousands of research papers per year](https://commoncrawl.org/research-papers). |
| ChatGPT\-User | [OpenAI](https://openai.com) | Yes | Takes action based on user prompts. | Only when prompted by a user. | Used by plugins in ChatGPT to answer queries based on user input. |
| Claude\-Web | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
| cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
| cohere\-training\-data\-crawler | Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
| Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
| Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
| DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
| FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
| FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is, but data is used for training/machine learning. |
| Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
| GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GoogleOther\-Video | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
| GPTBot | [OpenAI](https://openai.com) | Yes | Scrapes data to train OpenAI's products. | No information. | Data is used to train current and future models; paywalled data, PII, and data that violates the company's policies are removed. |
| iaskspider/2\.0 | iAsk | No | Crawls sites to provide answers to user queries. | Unclear at this time. | Used to provide answers to user queries. |
| ICC\-Crawler | [NICT](https://nict.go.jp) | Yes | Scrapes data to train and support AI technologies. | No information. | Uses the collected data for artificial intelligence technologies; provides data to third parties, including commercial companies, which may use the data for their own business. |
| ImagesiftBot | [ImageSift](https://imagesift.com) | [Yes](https://imagesift.com/about) | "ImageSiftBot is a web crawler that scrapes the internet for publicly available images to support our suite of web intelligence products." | No information. | Once images and text are downloaded from a webpage, ImageSift analyzes this data from the page and stores the information in an index. Our web intelligence products use this index to enable search and retrieval of similar images. |
| img2dataset | [img2dataset](https://github.com/rom1504/img2dataset) | Unclear at this time. | Scrapes images for use in LLMs. | At the discretion of img2dataset users. | Downloads large sets of images into datasets for LLM training or other purposes. |
| imgproxy | [imgproxy](https://imgproxy.net) | Unclear at this time. | Not documented or explained on operator's site. | No information. | AI-powered image processing. |
| ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning-based models to quantify cyber risk. |
| Kangaroo Bot | Kangaroo LLM | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
| Meta\-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
| Meta\-ExternalFetcher | Meta | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
| OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
| omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
| omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for the Omgili search engine. Unknown if still in use; the `omgili` agent is still used by Webz.io. |
| PanguBot | [Huawei](https://huawei.com/) | Unclear at this time. | AI Data Scrapers | Unclear at this time. | PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot |
| Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visits web pages to help provide an accurate answer and includes links to the page in Perplexity responses. |
| PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/guides/bots) | Search result generation. | No information. | Crawls sites to surface as results in Perplexity. |
| PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
| Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
| SemrushBot\-OCOB | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| SemrushBot\-SWA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Checks URLs on your site for SWA tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
| Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
| Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
| VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
| Webzio\-Extended | [Webz.io](https://webz.io/) | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |
| YouBot | [You](https://about.you.com/youchat/) | [Yes](https://about.you.com/youbot/) | Scrapes data for search engine and LLMs. | No information. | Retrieves data used for You.com web search engine and LLMs. |
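
Every name in the first column is a user agent token that can be targeted in a robots.txt rule group. As a minimal sketch (an illustrative excerpt only; the robots.txt generated in this repository covers every agent in the table), several `User-agent` lines can share a single `Disallow` directive:

```
# Illustrative excerpt; the generated robots.txt lists every agent in the table above.
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: CCBot
User-agent: Bytespider
Disallow: /
```

Per RFC 9309, consecutive `User-agent` lines form one group, so the single `Disallow: /` applies to each crawler listed. Note that agents marked "No" or "Unclear at this time." in the third column may ignore the file regardless.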