Mirror of https://github.com/ai-robots-txt/ai.robots.txt.git (synced 2025-05-19 16:53:11 +00:00)

Compare commits: no commits in common; "main" and "v1.27" have entirely different histories.

15 changed files with 7 additions and 334 deletions
.github/workflows/run-tests.yml (vendored, 7 changes)

@@ -19,10 +19,3 @@ jobs:
       - name: Run tests
         run: |
           code/tests.py
-  lint-json:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Check out repository
-        uses: actions/checkout@v4
-      - name: JQ Json Lint
-        run: jq . robots.json
.htaccess

@@ -1,3 +1,3 @@
 RewriteEngine On
-RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Perplexity\-User|PerplexityBot|PetalBot|QualifiedBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot) [NC]
+RewriteCond %{HTTP_USER_AGENT} (AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|PanguBot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot) [NC]
 RewriteRule !^/?robots\.txt$ - [F,L]
Caddyfile (deleted)

@@ -1,3 +0,0 @@
-@aibots {
-    header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Perplexity\-User|PerplexityBot|PetalBot|QualifiedBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot)"
-}
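The README (below) notes that the rejection for this matcher "can then be handled with `abort @aibots`". As a usage sketch only, not part of the diff: a minimal site block wiring the matcher in, where the snippet path and site address are illustrative, not from the repository:

```
example.com {
	# Hypothetical path to a snippet containing the @aibots matcher above
	import /etc/caddy/snippets/ai-bots
	# Close the connection for any request matching the AI-crawler regex
	abort @aibots
	file_server
}
```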
FAQ.md (8 changes)

@@ -55,11 +55,3 @@ That depends on your stack.
 ## How can I contribute?
 
 Open a pull request. It will be reviewed and acted upon appropriately. **We really appreciate contributions** — this is a community effort.
-
-## I'd like to donate money
-
-That's kind of you, but we don't need your money. If you insist, we'd love you to make a donation to the [American Civil Liberties Union](https://www.aclu.org/), the [Disasters Emergency Committee](https://www.dec.org.uk/), or a similar organisation.
-
-## Can my company sponsor ai.robots.txt?
-
-No, thank you. We do not accept sponsorship of any kind. We prefer to maintain our independence. Our costs are negligible as we are entirely volunteer-based and community-driven.
README.md (21 changes)

@@ -14,8 +14,6 @@ This repository provides the following files:
 - `robots.txt`
 - `.htaccess`
 - `nginx-block-ai-bots.conf`
-- `Caddyfile`
-- `haproxy-block-ai-bots.txt`
 
 `robots.txt` implements the Robots Exclusion Protocol ([RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html)).
 
@@ -24,25 +22,6 @@ Note that, as stated in the [httpd documentation](https://httpd.apache.org/docs/
 
 `nginx-block-ai-bots.conf` implements an Nginx configuration snippet that can be included in any virtual host `server {}` block via the `include` directive.
 
-`Caddyfile` includes a Header Regex matcher group you can copy or import into your Caddyfile; the rejection can then be handled with `abort @aibots`.
-
-`haproxy-block-ai-bots.txt` may be used to configure HAProxy to block AI bots. To implement it:
-1. Add the file to the config directory of HAProxy.
-2. Add the following lines in the `frontend` section:
-```
-acl ai_robot hdr_sub(user-agent) -i -f /etc/haproxy/haproxy-block-ai-bots.txt
-http-request deny if ai_robot
-```
-(Note that the path of the `haproxy-block-ai-bots.txt` may be different in your environment.)
-
-
-[Bing uses the data it crawls for AI and training; you may opt out by adding a `meta` tag to the `head` of your site.](./docs/additional-steps/bing.md)
-
-### Related
-
-- [Robots.txt Traefik plugin](https://plugins.traefik.io/plugins/681b2f3fba3486128fc34fae/robots-txt-plugin):
-  middleware plugin for [Traefik](https://traefik.io/traefik/) to automatically add rules of the [robots.txt](./robots.txt)
-  file on-the-fly.
 
 ## Contributing
 
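As a usage sketch for the `include` mention above (not part of the diff): a minimal virtual host pulling in `nginx-block-ai-bots.conf`, with illustrative paths and server name:

```
server {
    listen 80;
    server_name example.com;  # illustrative

    # Path depends on where the snippet is installed in your environment
    include /etc/nginx/conf.d/nginx-block-ai-bots.conf;

    root /var/www/html;
}
```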
code/robots.py

@@ -30,7 +30,6 @@ def updated_robots_json(soup):
     """Update AI scraper information with data from darkvisitors."""
     existing_content = load_robots_json()
     to_include = [
-        "AI Agents",
         "AI Assistants",
         "AI Data Scrapers",
         "AI Search Crawlers",
@@ -179,19 +178,6 @@ def json_to_nginx(robot_json):
     return config
 
 
-def json_to_caddy(robot_json):
-    caddyfile = "@aibots {\n "
-    caddyfile += f'   header_regexp User-Agent "{list_to_pcre(robot_json.keys())}"'
-    caddyfile += "\n}"
-    return caddyfile
-
-
-def json_to_haproxy(robots_json):
-    # Creates a source file for HAProxy. Follow instructions in the README to implement it.
-    txt = "\n".join(f"{k}" for k in robots_json.keys())
-    return txt
-
-
 def update_file_if_changed(file_name, converter):
     """Update files if newer content is available and log the (in)actions."""
     new_content = converter(load_robots_json())
@@ -221,15 +207,6 @@ def conversions():
         file_name="./nginx-block-ai-bots.conf",
         converter=json_to_nginx,
     )
-    update_file_if_changed(
-        file_name="./Caddyfile",
-        converter=json_to_caddy,
-    )
-
-    update_file_if_changed(
-        file_name="./haproxy-block-ai-bots.txt",
-        converter=json_to_haproxy,
-    )
 
 
 if __name__ == "__main__":
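For reference (not part of the diff): the removed `json_to_haproxy` converter simply emits one crawler name per line, which HAProxy then matches as user-agent substrings via `hdr_sub(...) -f`. A self-contained Python sketch with a toy stand-in for the `robots.json` mapping:

```python
# Toy stand-in for robots.json: keys are the crawler user-agent names.
robots_json = {"GPTBot": {}, "Kangaroo Bot": {}}

# Same logic as the removed json_to_haproxy: one name per line.
haproxy_list = "\n".join(f"{k}" for k in robots_json.keys())
print(haproxy_list)
# GPTBot
# Kangaroo Bot
```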
code/test_files/Caddyfile (deleted)

@@ -1,3 +0,0 @@
-@aibots {
-    header_regexp User-Agent "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|Diffbot|FacebookBot|facebookexternalhit|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot|crawler\.with\.dots|star\*\*\*crawler|Is\ this\ a\ crawler\?|a\[mazing\]\{42\}\(robot\)|2\^32\$|curl\|sudo\ bash)"
-}
code/test_files/haproxy-block-ai-bots.txt (deleted)

@@ -1,47 +0,0 @@
-AI2Bot
-Ai2Bot-Dolma
-Amazonbot
-anthropic-ai
-Applebot
-Applebot-Extended
-Bytespider
-CCBot
-ChatGPT-User
-Claude-Web
-ClaudeBot
-cohere-ai
-Diffbot
-FacebookBot
-facebookexternalhit
-FriendlyCrawler
-Google-Extended
-GoogleOther
-GoogleOther-Image
-GoogleOther-Video
-GPTBot
-iaskspider/2.0
-ICC-Crawler
-ImagesiftBot
-img2dataset
-ISSCyberRiskCrawler
-Kangaroo Bot
-Meta-ExternalAgent
-Meta-ExternalFetcher
-OAI-SearchBot
-omgili
-omgilibot
-Perplexity-User
-PerplexityBot
-PetalBot
-Scrapy
-Sidetrade indexer bot
-Timpibot
-VelenPublicWebCrawler
-Webzio-Extended
-YouBot
-crawler.with.dots
-star***crawler
-Is this a crawler?
-a[mazing]{42}(robot)
-2^32$
-curl|sudo bash
code/tests.py

@@ -4,7 +4,7 @@
 import json
 import unittest
 
-from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx, json_to_haproxy, json_to_caddy
+from robots import json_to_txt, json_to_table, json_to_htaccess, json_to_nginx
 
 class RobotsUnittestExtensions:
     def loadJson(self, pathname):
@@ -60,33 +60,12 @@ class TestNginxConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
         robots_nginx = json_to_nginx(self.robots_dict)
         self.assertEqualsFile("test_files/nginx-block-ai-bots.conf", robots_nginx)
 
-
-class TestHaproxyConfigGeneration(unittest.TestCase, RobotsUnittestExtensions):
-    maxDiff = 8192
-
-    def setUp(self):
-        self.robots_dict = self.loadJson("test_files/robots.json")
-
-    def test_haproxy_generation(self):
-        robots_haproxy = json_to_haproxy(self.robots_dict)
-        self.assertEqualsFile("test_files/haproxy-block-ai-bots.txt", robots_haproxy)
-
-
 class TestRobotsNameCleaning(unittest.TestCase):
     def test_clean_name(self):
         from robots import clean_robot_name
 
         self.assertEqual(clean_robot_name("Perplexity‑User"), "Perplexity-User")
 
-
-class TestCaddyfileGeneration(unittest.TestCase, RobotsUnittestExtensions):
-    maxDiff = 8192
-
-    def setUp(self):
-        self.robots_dict = self.loadJson("test_files/robots.json")
-
-    def test_caddyfile_generation(self):
-        robots_caddyfile = json_to_caddy(self.robots_dict)
-        self.assertEqualsFile("test_files/Caddyfile", robots_caddyfile)
-
 
 if __name__ == "__main__":
     import os
     os.chdir(os.path.dirname(__file__))
docs/additional-steps/bing.md (deleted)

@@ -1,36 +0,0 @@
-# Bing (bingbot)
-
-It's not well publicised, but Bing uses the data it crawls for AI and training.
-
-However, the current thinking is that blocking a search engine of this size via `robots.txt` is quite a drastic approach, as Bing is second only to Google and blocking it could significantly impact your website in search results.
-
-Additionally, Bing powers a number of search engines such as Yahoo and AOL, and its search results are also used in DuckDuckGo, amongst others.
-
-Fortunately, Bing supports a relatively simple opt-out method, requiring an additional step.
-
-## How to opt out of AI training
-
-You must add a meta tag in the `<head>` of your webpage. This also needs to be added to every page on your website.
-
-The line you need to add is:
-
-```plaintext
-<meta name="robots" content="noarchive">
-```
-
-By adding this line, you are signifying to Bing: "Do not use the content for training Microsoft's generative AI foundation models."
-
-## Will my site be negatively affected?
-
-Simple answer: no.
-The original use of "noarchive" has been retired by all search engines; Google retired its use in 2024.
-
-The use of this meta tag will not impact your site in search engines or in any other meaningful way if you add it to your page(s).
-
-It is now solely used by a handful of crawlers, such as Bingbot and Amazonbot, to signify to them not to use your data for AI/training.
-
-## Resources
-
-Bing Blog AI opt-out announcement: https://blogs.bing.com/webmaster/september-2023/Announcing-new-options-for-webmasters-to-control-usage-of-their-content-in-Bing-Chat
-
-Bing meta tag information, including AI opt-out: https://www.bing.com/webmasters/help/which-robots-metatags-does-bing-support-5198d240
haproxy-block-ai-bots.txt (deleted)

@@ -1,59 +0,0 @@
-AI2Bot
-Ai2Bot-Dolma
-aiHitBot
-Amazonbot
-anthropic-ai
-Applebot
-Applebot-Extended
-Brightbot 1.0
-Bytespider
-CCBot
-ChatGPT-User
-Claude-Web
-ClaudeBot
-cohere-ai
-cohere-training-data-crawler
-Cotoyogi
-Crawlspace
-Diffbot
-DuckAssistBot
-FacebookBot
-Factset_spyderbot
-FirecrawlAgent
-FriendlyCrawler
-Google-CloudVertexBot
-Google-Extended
-GoogleOther
-GoogleOther-Image
-GoogleOther-Video
-GPTBot
-iaskspider/2.0
-ICC-Crawler
-ImagesiftBot
-img2dataset
-imgproxy
-ISSCyberRiskCrawler
-Kangaroo Bot
-meta-externalagent
-Meta-ExternalAgent
-meta-externalfetcher
-Meta-ExternalFetcher
-NovaAct
-OAI-SearchBot
-omgili
-omgilibot
-Operator
-PanguBot
-Perplexity-User
-PerplexityBot
-PetalBot
-QualifiedBot
-Scrapy
-SemrushBot-OCOB
-SemrushBot-SWA
-Sidetrade indexer bot
-TikTokSpider
-Timpibot
-VelenPublicWebCrawler
-Webzio-Extended
-YouBot
nginx-block-ai-bots.conf

@@ -1,3 +1,3 @@
-if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|aiHitBot|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Cotoyogi|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|Factset_spyderbot|FirecrawlAgent|FriendlyCrawler|Google\-CloudVertexBot|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo\ Bot|meta\-externalagent|Meta\-ExternalAgent|meta\-externalfetcher|Meta\-ExternalFetcher|NovaAct|OAI\-SearchBot|omgili|omgilibot|Operator|PanguBot|Perplexity\-User|PerplexityBot|PetalBot|QualifiedBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|TikTokSpider|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot)") {
+if ($http_user_agent ~* "(AI2Bot|Ai2Bot\-Dolma|Amazonbot|anthropic\-ai|Applebot|Applebot\-Extended|Brightbot\ 1\.0|Bytespider|CCBot|ChatGPT\-User|Claude\-Web|ClaudeBot|cohere\-ai|cohere\-training\-data\-crawler|Crawlspace|Diffbot|DuckAssistBot|FacebookBot|FriendlyCrawler|Google\-Extended|GoogleOther|GoogleOther\-Image|GoogleOther\-Video|GPTBot|iaskspider/2\.0|ICC\-Crawler|ImagesiftBot|img2dataset|imgproxy|ISSCyberRiskCrawler|Kangaroo\ Bot|Meta\-ExternalAgent|Meta\-ExternalFetcher|OAI\-SearchBot|omgili|omgilibot|PanguBot|Perplexity\-User|PerplexityBot|PetalBot|Scrapy|SemrushBot\-OCOB|SemrushBot\-SWA|Sidetrade\ indexer\ bot|Timpibot|VelenPublicWebCrawler|Webzio\-Extended|YouBot)") {
     return 403;
 }
robots.json (83 changes)

@@ -13,13 +13,6 @@
         "operator": "[Ai2](https://allenai.org/crawler)",
         "respect": "Yes"
     },
-    "aiHitBot": {
-        "operator": "[aiHit](https://www.aihitdata.com/about)",
-        "respect": "Yes",
-        "function": "A massive, artificial intelligence/machine learning, automated system.",
-        "frequency": "No information provided.",
-        "description": "Scrapes data for AI systems."
-    },
     "Amazonbot": {
         "operator": "Amazon",
         "respect": "Yes",
@@ -104,13 +97,6 @@
         "frequency": "Unclear at this time.",
         "description": "cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler"
     },
-    "Cotoyogi": {
-        "operator": "[ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/)",
-        "respect": "Yes",
-        "function": "AI LLM Scraper.",
-        "frequency": "No information provided.",
-        "description": "Scrapes data for AI training in Japanese language."
-    },
     "Crawlspace": {
         "operator": "[Crawlspace](https://crawlspace.dev)",
         "respect": "[Yes](https://news.ycombinator.com/item?id=42756654)",
@@ -139,20 +125,6 @@
         "frequency": "Up to 1 page per second",
         "description": "Officially used for training Meta \"speech recognition technology,\" unknown if used to train Meta AI specifically."
     },
-    "Factset_spyderbot": {
-        "operator": "[Factset](https://www.factset.com/ai)",
-        "respect": "Unclear at this time.",
-        "function": "AI model training.",
-        "frequency": "No information provided.",
-        "description": "Scrapes data for AI training."
-    },
-    "FirecrawlAgent": {
-        "operator": "[Firecrawl](https://www.firecrawl.dev/)",
-        "respect": "Yes",
-        "function": "AI scraper and LLM training",
-        "frequency": "No information provided.",
-        "description": "Scrapes data for AI systems and LLM training."
-    },
     "FriendlyCrawler": {
         "description": "Unclear who the operator is; but data is used for training/machine learning.",
         "frequency": "Unclear at this time.",
@@ -160,13 +132,6 @@
         "operator": "Unknown",
         "respect": "[Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler)"
     },
-    "Google-CloudVertexBot": {
-        "operator": "Google",
-        "respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
-        "function": "Build and manage AI models for businesses employing Vertex AI",
-        "frequency": "No information.",
-        "description": "Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents."
-    },
     "Google-Extended": {
         "operator": "Google",
         "respect": "[Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers)",
@@ -251,27 +216,13 @@
         "frequency": "Unclear at this time.",
         "description": "Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot"
     },
-    "meta-externalagent": {
+    "Meta-ExternalAgent": {
         "operator": "[Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers)",
-        "respect": "Yes",
+        "respect": "Yes.",
         "function": "Used to train models and improve products.",
         "frequency": "No information.",
         "description": "\"The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly.\""
     },
-    "Meta-ExternalAgent": {
-        "operator": "Unclear at this time.",
-        "respect": "Unclear at this time.",
-        "function": "AI Data Scrapers",
-        "frequency": "Unclear at this time.",
-        "description": "Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent"
-    },
-    "meta-externalfetcher": {
-        "operator": "Unclear at this time.",
-        "respect": "Unclear at this time.",
-        "function": "AI Assistants",
-        "frequency": "Unclear at this time.",
-        "description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
-    },
     "Meta-ExternalFetcher": {
         "operator": "Unclear at this time.",
         "respect": "Unclear at this time.",
@@ -279,13 +230,6 @@
         "frequency": "Unclear at this time.",
         "description": "Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher"
     },
-    "NovaAct": {
-        "operator": "Unclear at this time.",
-        "respect": "Unclear at this time.",
-        "function": "AI Agents",
-        "frequency": "Unclear at this time.",
-        "description": "Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact"
-    },
     "OAI-SearchBot": {
         "operator": "[OpenAI](https://openai.com)",
         "respect": "[Yes](https://platform.openai.com/docs/bots)",
@@ -307,13 +251,6 @@
         "operator": "[Webz.io](https://webz.io/)",
         "respect": "[Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html)"
     },
-    "Operator": {
-        "operator": "Unclear at this time.",
-        "respect": "Unclear at this time.",
-        "function": "AI Agents",
-        "frequency": "Unclear at this time.",
-        "description": "Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator"
-    },
     "PanguBot": {
         "operator": "the Chinese company Huawei",
         "respect": "Unclear at this time.",
@@ -342,13 +279,6 @@
         "operator": "[Huawei](https://huawei.com/)",
         "respect": "Yes"
     },
-    "QualifiedBot": {
-        "description": "Operated by Qualified as part of their suite of AI product offerings.",
-        "frequency": "No explicit frequency provided.",
-        "function": "Company offers AI agents and other related products; usage can be assumed to support said products.",
-        "operator": "[Qualified](https://www.qualified.com)",
-        "respect": "Unclear at this time."
-    },
     "Scrapy": {
         "description": "\"AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets.\"",
         "frequency": "No information.",
@@ -377,13 +307,6 @@
         "operator": "[Sidetrade](https://www.sidetrade.com)",
         "respect": "Unclear at this time."
     },
-    "TikTokSpider": {
-        "operator": "ByteDance",
-        "respect": "Unclear at this time.",
-        "function": "LLM training.",
-        "frequency": "Unclear at this time.",
-        "description": "Downloads data to train LLMs, as per Bytespider."
-    },
     "Timpibot": {
         "operator": "[Timpi](https://timpi.io)",
         "respect": "Unclear at this time.",
@@ -412,4 +335,4 @@
         "frequency": "No information.",
         "description": "Retrieves data used for You.com web search engine and LLMs."
     }
 }
robots.txt (11 changes)

@@ -1,6 +1,5 @@
 User-agent: AI2Bot
 User-agent: Ai2Bot-Dolma
-User-agent: aiHitBot
 User-agent: Amazonbot
 User-agent: anthropic-ai
 User-agent: Applebot
@@ -13,15 +12,11 @@ User-agent: Claude-Web
 User-agent: ClaudeBot
 User-agent: cohere-ai
 User-agent: cohere-training-data-crawler
-User-agent: Cotoyogi
 User-agent: Crawlspace
 User-agent: Diffbot
 User-agent: DuckAssistBot
 User-agent: FacebookBot
-User-agent: Factset_spyderbot
-User-agent: FirecrawlAgent
 User-agent: FriendlyCrawler
-User-agent: Google-CloudVertexBot
 User-agent: Google-Extended
 User-agent: GoogleOther
 User-agent: GoogleOther-Image
@@ -34,25 +29,19 @@ User-agent: img2dataset
 User-agent: imgproxy
 User-agent: ISSCyberRiskCrawler
 User-agent: Kangaroo Bot
-User-agent: meta-externalagent
 User-agent: Meta-ExternalAgent
-User-agent: meta-externalfetcher
 User-agent: Meta-ExternalFetcher
-User-agent: NovaAct
 User-agent: OAI-SearchBot
 User-agent: omgili
 User-agent: omgilibot
-User-agent: Operator
 User-agent: PanguBot
 User-agent: Perplexity-User
 User-agent: PerplexityBot
 User-agent: PetalBot
-User-agent: QualifiedBot
 User-agent: Scrapy
 User-agent: SemrushBot-OCOB
 User-agent: SemrushBot-SWA
 User-agent: Sidetrade indexer bot
-User-agent: TikTokSpider
 User-agent: Timpibot
 User-agent: VelenPublicWebCrawler
 User-agent: Webzio-Extended
table-of-bot-metrics.md

@@ -2,7 +2,6 @@
 |------|----------|-----------------------|----------|------------------|-------------|
 | AI2Bot | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
 | Ai2Bot\-Dolma | [Ai2](https://allenai.org/crawler) | Yes | Content is used to train open language models. | No information provided. | Explores 'certain domains' to find web content. |
-| aiHitBot | [aiHit](https://www.aihitdata.com/about) | Yes | A massive, artificial intelligence/machine learning, automated system. | No information provided. | Scrapes data for AI systems. |
 | Amazonbot | Amazon | Yes | Service improvement and enabling answers for Alexa users. | No information provided. | Includes references to crawled website when surfacing answers via Alexa; does not clearly outline other uses. |
 | anthropic\-ai | [Anthropic](https://www.anthropic.com) | Unclear at this time. | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
 | Applebot | Unclear at this time. | Unclear at this time. | AI Search Crawlers | Unclear at this time. | Applebot is a web crawler used by Apple to index search results that allow the Siri AI Assistant to answer user questions. Siri's answers normally contain references to the website. More info can be found at https://darkvisitors.com/agents/agents/applebot |
@@ -15,15 +14,11 @@
 | ClaudeBot | [Anthropic](https://www.anthropic.com) | [Yes](https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler) | Scrapes data to train Anthropic's AI products. | No information provided. | Scrapes data to train LLMs and AI products offered by Anthropic. |
 | cohere\-ai | [Cohere](https://cohere.com) | Unclear at this time. | Retrieves data to provide responses to user-initiated prompts. | Takes action based on user prompts. | Retrieves data based on user prompts. |
 | cohere\-training\-data\-crawler | Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products | Unclear at this time. | AI Data Scrapers | Unclear at this time. | cohere-training-data-crawler is a web crawler operated by Cohere to download training data for its LLMs (Large Language Models) that power its enterprise AI products. More info can be found at https://darkvisitors.com/agents/agents/cohere-training-data-crawler |
-| Cotoyogi | [ROIS](https://ds.rois.ac.jp/en_center8/en_crawler/) | Yes | AI LLM Scraper. | No information provided. | Scrapes data for AI training in Japanese language. |
 | Crawlspace | [Crawlspace](https://crawlspace.dev) | [Yes](https://news.ycombinator.com/item?id=42756654) | Scrapes data | Unclear at this time. | Provides crawling services for any purpose, probably including AI model training. |
 | Diffbot | [Diffbot](https://www.diffbot.com/) | At the discretion of Diffbot users. | Aggregates structured web data for monitoring and AI model training. | Unclear at this time. | Diffbot is an application used to parse web pages into structured data; this data is used for monitoring or AI model training. |
 | DuckAssistBot | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | DuckAssistBot is used by DuckDuckGo's DuckAssist feature to fetch content and generate realtime AI answers to user searches. More info can be found at https://darkvisitors.com/agents/agents/duckassistbot |
 | FacebookBot | Meta/Facebook | [Yes](https://developers.facebook.com/docs/sharing/bot/) | Training language models | Up to 1 page per second | Officially used for training Meta "speech recognition technology," unknown if used to train Meta AI specifically. |
-| Factset\_spyderbot | [Factset](https://www.factset.com/ai) | Unclear at this time. | AI model training. | No information provided. | Scrapes data for AI training. |
-| FirecrawlAgent | [Firecrawl](https://www.firecrawl.dev/) | Yes | AI scraper and LLM training | No information provided. | Scrapes data for AI systems and LLM training. |
 | FriendlyCrawler | Unknown | [Yes](https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler) | We are using the data from the crawler to build datasets for machine learning experiments. | Unclear at this time. | Unclear who the operator is; but data is used for training/machine learning. |
-| Google\-CloudVertexBot | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Build and manage AI models for businesses employing Vertex AI | No information. | Google-CloudVertexBot crawls sites on the site owners' request when building Vertex AI Agents. |
 | Google\-Extended | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | LLM training. | No information. | Used to train Gemini and Vertex AI generative APIs. Does not impact a site's inclusion or ranking in Google Search. |
 | GoogleOther | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
 | GoogleOther\-Image | Google | [Yes](https://developers.google.com/search/docs/crawling-indexing/overview-google-crawlers) | Scrapes data. | No information. | "Used by various product teams for fetching publicly accessible content from sites. For example, it may be used for one-off crawls for internal research and development." |
@@ -36,25 +31,19 @@
 | imgproxy | [imgproxy](https://imgproxy.net) | Unclear at this time. | Not documented or explained on operator's site. | No information. | AI-powered image processing. |
 | ISSCyberRiskCrawler | [ISS-Corporate](https://iss-cyber.com) | No | Scrapes data to train machine learning models. | No information. | Used to train machine learning based models to quantify cyber risk. |
 | Kangaroo Bot | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Kangaroo Bot is used by the company Kangaroo LLM to download data to train AI models tailored to Australian language and culture. More info can be found at https://darkvisitors.com/agents/agents/kangaroo-bot |
-| meta\-externalagent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
+| Meta\-ExternalAgent | [Meta](https://developers.facebook.com/docs/sharing/webmasters/web-crawlers) | Yes. | Used to train models and improve products. | No information. | "The Meta-ExternalAgent crawler crawls the web for use cases such as training AI models or improving products by indexing content directly." |
-| Meta\-ExternalAgent | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Meta-ExternalAgent is a web crawler used by Meta to download training data for its AI models and improve its products by indexing content directly. More info can be found at https://darkvisitors.com/agents/agents/meta-externalagent |
-| meta\-externalfetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
 | Meta\-ExternalFetcher | Unclear at this time. | Unclear at this time. | AI Assistants | Unclear at this time. | Meta-ExternalFetcher is dispatched by Meta AI products in response to user prompts, when they need to fetch individual links. More info can be found at https://darkvisitors.com/agents/agents/meta-externalfetcher |
-| NovaAct | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Nova Act is an AI agent created by Amazon that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/novaact |
 | OAI\-SearchBot | [OpenAI](https://openai.com) | [Yes](https://platform.openai.com/docs/bots) | Search result generation. | No information. | Crawls sites to surface as results in SearchGPT. |
 | omgili | [Webz.io](https://webz.io/) | [Yes](https://webz.io/blog/web-data/what-is-the-omgili-bot-and-why-is-it-crawling-your-website/) | Data is sold. | No information. | Crawls sites for APIs used by Hootsuite, Sprinklr, NetBase, and other companies. Data also sold for research purposes or LLM training. |
 | omgilibot | [Webz.io](https://webz.io/) | [Yes](https://web.archive.org/web/20170704003301/http://omgili.com/Crawler.html) | Data is sold. | No information. | Legacy user agent initially used for Omgili search engine. Unknown if still used, `omgili` agent still used by Webz.io. |
-| Operator | Unclear at this time. | Unclear at this time. | AI Agents | Unclear at this time. | Operator is an AI agent created by OpenAI that can use a web browser. It can intelligently navigate and interact with websites to complete multi-step tasks on behalf of a human user. More info can be found at https://darkvisitors.com/agents/agents/operator |
 | PanguBot | the Chinese company Huawei | Unclear at this time. | AI Data Scrapers | Unclear at this time. | PanguBot is a web crawler operated by the Chinese company Huawei. It's used to download training data for its multimodal LLM (Large Language Model) called PanGu. More info can be found at https://darkvisitors.com/agents/agents/pangubot |
 | Perplexity\-User | [Perplexity](https://www.perplexity.ai/) | [No](https://docs.perplexity.ai/guides/bots) | Used to answer queries at the request of users. | Only when prompted by a user. | Visit web pages to help provide an accurate answer and include links to the page in Perplexity response. |
 | PerplexityBot | [Perplexity](https://www.perplexity.ai/) | [Yes](https://docs.perplexity.ai/guides/bots) | Search result generation. | No information. | Crawls sites to surface as results in Perplexity. |
 | PetalBot | [Huawei](https://huawei.com/) | Yes | Used to provide recommendations in Huawei assistant and AI search services. | No explicit frequency provided. | Operated by Huawei to provide search and AI assistant services. |
-| QualifiedBot | [Qualified](https://www.qualified.com) | Unclear at this time. | Company offers AI agents and other related products; usage can be assumed to support said products. | No explicit frequency provided. | Operated by Qualified as part of their suite of AI product offerings. |
 | Scrapy | [Zyte](https://www.zyte.com) | Unclear at this time. | Scrapes data for a variety of uses including training AI. | No information. | "AI and machine learning applications often need large amounts of quality data, and web data extraction is a fast, efficient way to build structured data sets." |
 | SemrushBot\-OCOB | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Crawls your site for ContentShake AI tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
 | SemrushBot\-SWA | [Semrush](https://www.semrush.com/) | [Yes](https://www.semrush.com/bot/) | Checks URLs on your site for SWA tool. | Roughly once every 10 seconds. | You enter one text (on-demand) and we will make suggestions on it (the tool uses AI but we are not actively crawling the web, you need to manually enter one text/URL). |
 | Sidetrade indexer bot | [Sidetrade](https://www.sidetrade.com) | Unclear at this time. | Extracts data for a variety of uses including training AI. | No information. | AI product training. |
-| TikTokSpider | ByteDance | Unclear at this time. | LLM training. | Unclear at this time. | Downloads data to train LLMs, as per Bytespider. |
 | Timpibot | [Timpi](https://timpi.io) | Unclear at this time. | Scrapes data for use in training LLMs. | No information. | Makes data available for training AI models. |
 | VelenPublicWebCrawler | [Velen Crawler](https://velen.io) | [Yes](https://velen.io) | Scrapes data for business data sets and machine learning models. | No information. | "Our goal with this crawler is to build business datasets and machine learning models to better understand the web." |
 | Webzio\-Extended | Unclear at this time. | Unclear at this time. | AI Data Scrapers | Unclear at this time. | Webzio-Extended is a web crawler used by Webz.io to maintain a repository of web crawl data that it sells to other companies, including those using it to train AI models. More info can be found at https://darkvisitors.com/agents/agents/webzio-extended |