• anytimesoon@feddit.uk · 3 days ago

    I’m not sure I fully understand.

    This generates garbage if it thinks the client making the request is an AI crawler. That much I get.

    What I don’t understand is the part about trapping the crawler. What does that mean?

    • the_strange@feddit.org · 2 days ago

      Simply put, a crawler reads a page, notes all the links on it, then fetches each of those pages, notes their links, and so on. This site links only to internal, randomly generated pages, which in turn link only to more randomly generated pages, so a crawler without a properly configured exit condition gets stuck following them forever. A minimal sketch of how such a trap might work (hypothetical names and routes, not the actual project’s code, assuming a small Flask app):
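
      ```python
      # Hypothetical crawler-trap sketch: every page is generated on the fly
      # and links only to more randomly generated internal pages, so a crawler
      # with no depth limit or visited-set never runs out of new URLs.
      import random
      import string

      from flask import Flask

      app = Flask(__name__)

      def random_slug(n=8):
          # Random lowercase identifier used for both URLs and filler text.
          return "".join(random.choices(string.ascii_lowercase, k=n))

      @app.route("/trap/<slug>")
      def trap(slug):
          # Seed from the slug so revisiting a URL yields the same "page",
          # making the trap look like a stable site rather than pure noise.
          random.seed(slug)
          filler = " ".join(random_slug(random.randint(3, 10)) for _ in range(50))
          # Each page mints five fresh internal links, so the link graph
          # grows faster than any crawler can exhaust it.
          links = " ".join(
              f'<a href="/trap/{random_slug()}">{random_slug()}</a>'
              for _ in range(5)
          )
          return f"<html><body><p>{filler}</p><p>{links}</p></body></html>"

      if __name__ == "__main__":
          app.run()
      ```

      A crawler that blindly follows those links does unbounded work for the operator’s near-zero cost, since each response is cheap to generate and yields five URLs it has never seen before.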