I just started using this myself, seems pretty great so far!
Clearly doesn’t stop all AI crawlers, but a significantly large chunk of them.
It’s a clever solution, but I recently saw one that was IMO more elegant for noscript users. I can’t remember the name, but it creates a dummy link that human users won’t touch and web crawlers will naturally follow, then generates an infinitely deep tree of bare-bones HTML so bots endlessly trawl a cheap-to-serve corner of your web server instead of anything heavier. It may even have integrated with fail2ban to pick out obvious bots and keep them off your network for good.
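Something like this rough Go sketch, for flavor (the /trap/ path, fan-out, and link labels are all made up for illustration; real tools of this kind surely differ):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "log"
        "net/http"
        "strings"
    )

    // tarpit serves a cheap page of links that only lead deeper into an
    // endless, deterministic tree. Child paths are derived from a hash of
    // the current path, so the tree is unbounded in both depth and width
    // with zero server-side state.
    func tarpit(w http.ResponseWriter, r *http.Request) {
        base := strings.TrimSuffix(r.URL.Path, "/")
        sum := sha256.Sum256([]byte(base))
        w.Header().Set("Content-Type", "text/html")
        fmt.Fprint(w, "<html><body><p>archive index</p><ul>")
        for i := 0; i < 5; i++ { // fan-out of 5 links per page
            fmt.Fprintf(w, `<li><a href="%s/%x">section %x</a></li>`, base, sum[i], sum[i])
        }
        fmt.Fprint(w, "</ul></body></html>")
    }

    func main() {
        // Hide the entry link from humans (e.g. display:none) and disallow
        // /trap/ in robots.txt, so only misbehaving crawlers wander in.
        http.HandleFunc("/trap/", tarpit)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Each page costs almost nothing to render, and a crawler that ignores robots.txt can wander it indefinitely. Point fail2ban at the /trap/ access log and you get the banning behavior too.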
Wouldn’t the bot simply limit the depth of its crawl?
It could be infinitely wide too if they wanted; that wouldn’t be hard to build. I’d expect crawlers to cap how long they spend on any one chain and eventually escape, but the trap still protects your data, because it buries the legitimate content the bot wants under endless junk. The goal isn’t to hold them forever; it’s to keep them from getting anything useful.