The way DNS works in I2P makes it unreliable and vulnerable to attack. It wouldn’t be too hard for an adversary to mount a man-in-the-middle attack or even serve a fake version of a site. Resolving names is also hard and takes a lot of effort.

Honestly the entire system needs to be rethought.

  • Possibly linuxOP · 1 month ago

    I2P is vulnerable to a malicious party spreading an “alternate” base address for a domain name. All someone would need to do is get a bad entry into something like notbob.

    Ideally domain names should work via consensus. A node could request a domain name, and then the majority of the network could agree to issue a cert. On the client side there could be some sort of cert verification.
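A minimal sketch of the majority-vote idea described above, with all names and structures hypothetical (this is not a real I2P API; a real design would use signatures from the voting nodes rather than a bare hash):

```python
import hashlib

def issue_cert(name, destination, votes):
    """Issue a cert only if a strict majority of polled nodes approve.

    `votes` is a list of booleans, one per polled node. The "cert" here is
    just a hash binding name to destination, standing in for a real signed
    record.
    """
    approvals = sum(1 for v in votes if v)
    if approvals * 2 > len(votes):
        return hashlib.sha256(f"{name}:{destination}".encode()).hexdigest()
    return None

# 3 of 5 nodes approve: a cert is issued.
cert = issue_cert("example.i2p", "base64destination", [True, True, True, False, False])
print(cert is not None)  # True
```

Client-side verification would then amount to recomputing (or signature-checking) the cert against the name/destination pair before trusting it.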

    • sploodged@lemmy.dbzer0.com · edited · 1 month ago

      Non-maliciously this is occasionally a problem. Different registrars have different rules: some will delete a name once its destination has been dead long enough, others won’t. So some registrars will let you register an abandoned name with a new destination, but others won’t. And local address books will default to the older destination over the newer one.
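The "older destination wins" behavior can be sketched as a first-seen merge policy; the dict-based address book and function name here are illustrative, not the real hosts.txt-based mechanism:

```python
def merge_subscription(local_book, subscription_feed):
    """Add only names we have never seen; existing entries are never overwritten."""
    for name, destination in subscription_feed.items():
        local_book.setdefault(name, destination)
    return local_book

book = {"oldsite.i2p": "olddest"}
feed = {"oldsite.i2p": "newdest", "newsite.i2p": "freshdest"}
merge_subscription(book, feed)
print(book["oldsite.i2p"])  # "olddest": the re-registered destination is ignored
```

Under this policy a name that was re-registered with a new destination never displaces the stale entry you already stored, which is exactly the conflict described above.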

      I think it was done this way so there could be no single authority declaring where google.i2p goes; locally you decide. Wouldn’t be a bad idea to incorporate some sort of cert though. A lot of that work would fall to the registrars to agree, I’d think, like on expiring names.

      I think the idea of using a DHT for this, so it’s more like a network consensus thing, has come up, but there are reasons not to do it.

      • Possibly linuxOP · 1 month ago

        I think the reason it isn’t like that is that it’s incredibly complex to do. Also, if there were a design flaw, it could be used to attack people.

        • sploodged@lemmy.dbzer0.com · 1 month ago

          Definitely opens up another attack surface: namespace flooding, Sybil attacks, somehow hijacking the consensus mechanism. A lot of very bad content would surface too, which some of the current “curators” try to dampen. A consensus mechanism would be tricky to get right.

          • Possibly linuxOP · 1 month ago

            It would need some sort of overhead cost to make attacks infeasible. But adding that would slow everything down and create a new source of problems.
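One classic form of that overhead cost is hashcash-style proof of work: requiring a hash with leading zero bits makes bulk name registration expensive, at the price of extra work for honest users too. The parameters below are illustrative, not from any I2P spec:

```python
import hashlib

def mine(name, difficulty_bits=12):
    """Find a nonce so that sha256(name:nonce) has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{name}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(name, nonce, difficulty_bits=12):
    """Cheap check that the submitted nonce really meets the difficulty target."""
    digest = hashlib.sha256(f"{name}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mine("example.i2p")
print(verify("example.i2p", nonce))  # True
```

The asymmetry is the point: mining costs thousands of hashes per name while verification costs one, but every legitimate registration pays the mining cost too, which is the slowdown mentioned above.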

            It isn’t a winning battle I guess.