• the_strange@feddit.org · 3 days ago

Simply put, a crawler fetches a page, notes every link on it, then fetches each of those pages, notes their links, fetches those, and so on. This website only ever links to internal, randomly generated pages, which in turn link only to further randomly generated pages, trapping the crawler indefinitely if it has no properly configured exit condition.
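
A minimal sketch of how such a trap could work, using only Python's standard library (the `/maze/` path scheme and the number of links per page are made up for illustration, not taken from the actual site):

```python
# Sketch of a crawler trap: every request returns a page whose links
# point only at freshly generated, equally fake internal URLs.
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer


def random_path() -> str:
    # Hypothetical URL scheme; any random-looking internal path works.
    slug = "".join(random.choices(string.ascii_lowercase, k=12))
    return f"/maze/{slug}"


class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Whatever URL the crawler asks for, serve a page of ten links
        # to paths that have never been served before.
        links = "".join(
            f'<li><a href="{random_path()}">more</a></li>' for _ in range(10)
        )
        body = f"<html><body><ul>{links}</ul></body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8000), TrapHandler).serve_forever()
```

Note that the usual "skip URLs I've already visited" check doesn't save the crawler here, since every generated link is brand new; escaping requires an explicit budget, e.g. a maximum crawl depth or a cap on pages fetched per domain.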