The way DNS works in I2P makes it unreliable and vulnerable to attacks. It wouldn’t be too hard for an adversary to mount a man-in-the-middle attack or even serve a fake version of a site. Resolving names is also difficult and takes a lot of effort.

Honestly, the entire system needs to be rethought.

  • Possibly linuxOP · 4 days ago

    I2P is vulnerable to a malicious party spreading an “alternate” base address for a domain name. All someone would need to do is get a bad entry into something like notbob.

    Ideally, domain names should work via consensus. A node could request a domain name, and then the majority of the network could agree to issue a certificate. On the client side there could be some sort of certificate verification.
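This is not how I2P naming works today; the idea above can be sketched as a toy majority vote, where `Peer`, its `vote()` rule, and the simple-majority threshold are all made-up assumptions for illustration:

```python
class Peer:
    """Toy peer that approves a name only if it hasn't already
    seen that name bound to a different destination."""
    def __init__(self):
        self.seen = {}  # name -> first destination this peer saw

    def vote(self, name: str, destination: str) -> bool:
        # setdefault stores the binding on first sight, then
        # compares later requests against that stored binding
        return self.seen.setdefault(name, destination) == destination


def issue_name(name: str, destination: str, peers: list) -> bool:
    """Grant name -> destination only if a strict majority of peers agree."""
    votes = sum(1 for peer in peers if peer.vote(name, destination))
    return votes > len(peers) // 2
```

Under this rule, the first registration of a name wins the vote, and a later attempt to rebind the same name to a different destination fails, which is roughly the property a consensus scheme would need.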

    • sploodged@lemmy.dbzer0.com · edited · 2 days ago

      Non-maliciously, this is occasionally a problem. Different registrars have different rules: some will delete a name once its destination has been dead long enough, others won’t. Likewise, some registrars will let you register an abandoned name with a new destination, and some won’t. But local address books will default to the older destination over the newer one.
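The first-entry-wins behavior described above can be modeled with a toy address book; the class name, names, and destinations here are placeholders, not actual I2P code:

```python
class AddressBook:
    """Toy local address book: a name, once mapped, is never overwritten."""
    def __init__(self):
        self.entries = {}  # name -> destination

    def add(self, name: str, destination: str) -> bool:
        """Store the mapping only if the name is unknown; return True if stored."""
        if name in self.entries:
            return False  # keep the older destination
        self.entries[name] = destination
        return True


book = AddressBook()
book.add("site.i2p", "oldDest")  # original registration
book.add("site.i2p", "newDest")  # re-registered after abandonment: ignored
```

So even if a registrar hands the abandoned name to a new destination, a client that already holds the old entry keeps resolving to the dead one.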

      I think it was done this way so there could be no single authority declaring where google.i2p goes; you decide locally. It wouldn’t be a bad idea to incorporate some sort of certificate, though. A lot of that work would fall to the registrars, I’d think, like agreeing on expiring names.

      I think the idea of using a DHT for this, so it’s more of a network-consensus thing, has come up, but there are reasons not to do it.

      • Possibly linuxOP · 2 days ago

        I think the reason it isn’t like that is that it would be incredibly complex to do. Also, if there were a design flaw, it could be used to attack people.