The way DNS works in I2P makes it unreliable and vulnerable to attacks. It wouldn't be too hard for an adversary to pull off a man-in-the-middle or even stand up a fake version of a site. Also, resolving DNS names is hard and takes a lot of effort.

Honestly, the entire system needs to be rethought.

  • sploodged@lemmy.dbzer0.com · 2 days ago

    Non-maliciously, this is occasionally a problem. Different registrars have different rules: some will delete a name once its destination has been dead long enough, others won't. So some registrars will let you register an abandoned name with a new destination, but others won't. And local address books will default to the older destination over the newer one.
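The first-registered-wins behavior described above can be sketched in a few lines. This is a minimal illustration, not I2P's actual implementation; the class, names, and short destination strings are all hypothetical (a real router stores full Base64 destinations and merges entries from subscription feeds).

```python
# Minimal sketch of local-first, first-seen-wins name resolution,
# loosely modeled on how an I2P address book keeps the older entry.
# All names and destinations here are hypothetical.

class AddressBook:
    def __init__(self):
        self.entries = {}  # name -> destination

    def add(self, name, destination):
        """First-seen wins: re-registering a name elsewhere does not
        replace the destination already stored locally."""
        if name not in self.entries:
            self.entries[name] = destination
            return True
        return False  # kept the older destination

    def resolve(self, name):
        return self.entries.get(name)

book = AddressBook()
book.add("example.i2p", "dest-old")  # original registration
book.add("example.i2p", "dest-new")  # abandoned name re-registered; ignored
print(book.resolve("example.i2p"))   # -> dest-old
```

This is exactly the conflict mentioned: once your local book has the dead destination, a registrar handing the name to someone new doesn't help you.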

    I think it was done this way so there could be no single authority declaring that google.i2p goes to a particular destination; locally, you decide. It wouldn't be a bad idea to incorporate some sort of cert, though. A lot of that work would fall to the registrars, who would have to agree, I'd think, like on expiring names.

    I think the idea of using a DHT for this, so it's more like a network-consensus thing, has come up, but there are reasons not to do it.
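One of those reasons can be shown with a toy model. Below is a naive sketch, with hypothetical peer responses, of "ask several peers and take the majority answer" resolution; it also demonstrates the classic objection, that an adversary who controls enough peers (a Sybil attack) simply outvotes the honest ones.

```python
from collections import Counter

# Naive network-consensus name resolution: query peers for their
# mapping of a name and accept the strict-majority answer.
# Peer answers and destination strings are hypothetical.

def resolve_by_consensus(name, peer_answers):
    """peer_answers: list of destinations reported by queried peers."""
    if not peer_answers:
        return None
    dest, votes = Counter(peer_answers).most_common(1)[0]
    # Require a strict majority before trusting the answer.
    if votes * 2 > len(peer_answers):
        return dest
    return None

honest = ["dest-real"] * 3
sybil = ["dest-evil"] * 5  # attacker-controlled peers outvote honest ones
print(resolve_by_consensus("example.i2p", honest + sybil))  # -> dest-evil
```

Making a scheme like this robust means solving Sybil resistance, which is part of why it's harder than it first looks.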

    • Possibly linux (OP) · 2 days ago

      I think the reason it isn't done that way is that it would be incredibly complex. Also, if there were a design flaw, it could be used to attack people.