Is there not some way to just blacklist the AI domain or IP range?
No, because there isn’t a single IP range or user agent, and many of these crawler operators go to great lengths to defeat anti-scraping measures, including user-agent spoofing and VPNs or proxies to mask the source of the traffic.

If you read the articles from recent months about sites being hammered by AI crawlers, they all tell the same story: it’s not possible. The AI companies deliberately target other sites and work non-stop to evade whatever blocking is in place. They rotate IPs constantly, they change User-Agents, they ignore robots.txt, they spread requests across a pool of IPs, and if they detect they’re being blocked they drop to a single request per IP and swap User-Agents the moment one gets flagged.
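To make concrete why a static blacklist fails against this, here’s a minimal sketch of the rotation described above (Python with the requests library; the URL, User-Agent strings, and proxy pool are all placeholders I’ve made up): every request can present a different User-Agent and a different exit IP, so there is no single identifier left to block.

```python
import random

import requests

# Hypothetical target, for illustration only.
TARGET_URL = "https://example.com/some/page"

# A User-Agent blocklist match on any one of these is trivially evaded:
# the crawler just presents a different string on the next request.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Gecko/20100101 Firefox/124.0",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/123.0 Safari/537.36",
]

# Placeholder proxy pool (TEST-NET addresses, not real). Each entry is a
# different exit IP, so a per-IP rate limit sees at most one request per
# address. Leave empty to send requests directly.
PROXIES = []  # e.g. ["http://198.51.100.10:8080", "http://198.51.100.11:8080"]


def fetch_rotating(url: str) -> requests.Response:
    """Fetch url with a randomly chosen User-Agent and exit IP."""
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    proxy = random.choice(PROXIES) if PROXIES else None
    return requests.get(
        url,
        headers=headers,
        proxies={"http": proxy, "https": proxy} if proxy else None,
        timeout=10,
    )


# From the server's side, each call looks like a different browser
# arriving from a different network; there is no stable fingerprint.
for _ in range(3):
    print(fetch_rotating(TARGET_URL).status_code)
```

Real operations presumably go further than this toy (residential proxy networks, randomized TLS/HTTP fingerprints, request pacing), which is why per-IP or per-UA blocklists degenerate into an arms race the site operator loses.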
whitelists and the end of anonymity
Or just decent regulation. You’re offering an AI product? You can’t attest that it’s been trained in a legitimate way?
Into the shadow realm with you.
Nope, there’s no specific range of IPs that AI scrapers use.