Simple question, difficult solution. I can’t work it out. I have a server at home with a site-to-site VPN to a server in the cloud. The server in the cloud has a public IP.

I want people to access the server in the cloud and have it forward traffic through the VPN. I have tried this and it works. I’ve tried nginx streams, frp and also HAProxy. They all work, but in the home server’s logs I can only see connections coming from the site-to-site VPN address, not the clients’ actual source IPs.

Is there any solution (program/Docker image) that will take a port and forward it to another host (or another program listening on the host) while preserving the real source IP? The whole idea is that in the server logs I want to see people’s real IP addresses, not the cloud server’s private VPN IP.

  • Admiral Patrick@dubvee.org · 1 year ago

    Is there any solution (program/Docker image) that will take a port and forward it to another host (or another program listening on the host) while preserving the real source IP? The whole idea is that in the server logs I want to see people’s real IP addresses, not the cloud server’s private VPN IP.

    Not that I’m aware of. Most methods require some kind of out-of-band way to send the client’s real IP to the server. e.g. X-Forwarded-For headers, Proxy Protocol, etc.
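    For intuition, PROXY protocol v1 is just a one-line text preamble the proxy prepends before the real payload. A toy parser (illustrative only, not tied to any of the tools above) shows what the backend recovers from it:

```python
# Toy parser for a PROXY protocol v1 header (illustrative only).
# A proxy prepends e.g. "PROXY TCP4 203.0.113.7 10.0.0.2 51544 443\r\n"
# before the client's first payload byte; the backend reads that line
# to learn the client's real source address.
def parse_proxy_v1(data: bytes):
    header, _, rest = data.partition(b"\r\n")
    parts = header.decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    proto, src_ip, dst_ip, src_port, dst_port = parts[1:6]
    return {"proto": proto, "src": (src_ip, int(src_port)),
            "dst": (dst_ip, int(dst_port))}, rest

info, payload = parse_proxy_v1(
    b"PROXY TCP4 203.0.113.7 10.0.0.2 51544 443\r\nGET / HTTP/1.0\r\n")
print(info["src"])  # -> ('203.0.113.7', 51544)
```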

    If your backend app supports proxy protocol, you may be able to use HAProxy in front on the VPS and use proxy protocol from there to the backend. Nginx may also support this for streams (I don’t recall if it does or not since I mainly use HAProxy for that).
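    As a sketch, the VPS side could look like this ({PORT} and {VPN_CLIENT_ADDRESS} are the same placeholders as in the iptables rules below):

```
# /etc/haproxy/haproxy.cfg on the VPS (sketch; placeholders, not a full config)
listen app
    bind *:{PORT}
    mode tcp
    # send-proxy-v2 prepends a PROXY protocol v2 header that carries the
    # client's real source address; the backend must expect that header
    server home {VPN_CLIENT_ADDRESS}:{PORT} send-proxy-v2
```

    The backend has to be told to expect the header (e.g. nginx’s stream `listen … proxy_protocol`, or `accept-proxy` on an HAProxy bind); otherwise the extra preamble just breaks the connection.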

    Barring that, there is one more way, but it’s less clean.

    You can use iptables on the VPS to do a prerouting DNAT port forward. The only catch to this is that the VPN endpoint that hosts the service must have its default gateway set to the VPN IP of the VPS, and you have to have a MASQUERADE rule so traffic from the VPN can route out of the VPS. I run two services in this configuration, and it works well.

    # Forward inbound TCP on {PORT} arriving at the public IP to the
    # VPN address of the host running the service
    iptables -t nat -A PREROUTING -d {VPS_PUBLIC_IP}/32 -p tcp -m tcp --dport {PORT} -j DNAT --to-destination {VPN_CLIENT_ADDRESS}
    # Masquerade traffic from the VPN subnet leaving the internet-facing interface
    iptables -t nat -A POSTROUTING -s {VPN_SUBNET}/24 -o eth0 -j MASQUERADE
    

    Where eth0 is the internet-facing interface of your VPS.

    Edit: One more catch to the port forward method. The DNAT happens in PREROUTING, before the traffic reaches the INPUT chain on the VPS, so host firewall rules there won’t see it; you’d need to filter in the FORWARD chain or implement the firewall on the backend server.
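    If you do want to filter on the VPS itself, the DNAT-ed traffic does still traverse the FORWARD chain, so a minimal sketch (same placeholders as above) that only admits the forwarded port:

```shell
# DNAT-ed traffic goes through FORWARD, not INPUT, so it can still be
# filtered on the VPS: admit only the forwarded port, drop the rest
iptables -A FORWARD -d {VPN_CLIENT_ADDRESS} -p tcp --dport {PORT} -j ACCEPT
iptables -A FORWARD -d {VPN_CLIENT_ADDRESS} -j DROP
```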

    • nickshanks@lemmy.world (OP) · 1 year ago

      Thank you so much for the quick and detailed reply, appreciate it!

      Done all of the iptables stuff, just trying to change the default gateway on the server at home now:

      network:
        version: 2
        renderer: networkd
        ethernets:
          eth0:
            dhcp4: true
            routes:
              - to: 0.0.0.0/0
                via: <vps public ip>
      

      Does the above netplan yaml look right? When it’s applied, I can’t access the internet or even the VPS public IP.

      • Admiral Patrick@dubvee.org · 1 year ago

        Forgot to ask: Is your server a VPN client to the VPS or a VPN server with the VPS as a client? In my config, the VPS is the VPN server.

        Not sure about the netplan config (all my stuff is Debian and uses old-school /etc/network/interfaces), but you’d need logic like this:

        Server is VPN client of the VPS:

          routes:
            # Ensure your VPS is reachable via your default gateway
            - to: <vps public ip>
              via:  <your local gateway>
            # Route all other traffic via the VPS's VPN IP
            - to: 0.0.0.0/0
              via:  <vps vpn ip>
        

        You may also need to explicitly add a route to your local subnet via your eth0 IP/dev. If the VPS is a client to the server at home, then I’m not sure if this would work or not.

        Sorry this is so vague. I have this setup for 2 services, and they’re both inside Docker with their own networks and routing tables; I don’t have to make any accommodations on the host.
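        The compose side of that kind of setup looks roughly like this (a sketch, not my actual files; the image name and values are placeholders):

```yaml
# docker-compose.yml (sketch; image, subnet and env values are placeholders)
services:
  app:
    image: my-app-with-openvpn    # hypothetical image bundling the app + OpenVPN
    cap_add:
      - NET_ADMIN                 # needed to change routes inside the container
    devices:
      - /dev/net/tun              # tun device for OpenVPN
    environment:
      VPN_SERVER_IP: 1.2.3.4/32   # public IP of the VPS
      LAN_SUBNET: 192.168.0.0/24  # local network to keep reachable
```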

        • nickshanks@lemmy.world (OP) · 1 year ago

          Everything I use is in Docker too; I’d much rather use Docker than mess around with host files, but I don’t mind trying this out. If you have an image you could share, I’d appreciate it.

          Anyway, neither end is strictly a client or a server, as I just used ZeroTier for a quick setup. On my other infra I use WireGuard with the VPS as the server (that setup works well, but I only reverse proxy HTTP stuff, so X-Forwarded-For works fine).

          • Admiral Patrick@dubvee.org · 1 year ago

            I’ve no experience with ZeroTier, but I use a combo of WireGuard and OpenVPN. I use OpenVPN inside the Docker containers since it’s easier to containerize than WireGuard.

            Inside the Docker container, I have the following logic:

            1. supervisord starts openvpn along with the other services in the container (yeah, yeah, it’s not “the docker way” and I don’t care)
            2. OpenVPN is configured with an “up” and “down” script
            3. When OpenVPN completes the tunnel setup, it runs the up script which does the following:
            # Get the current default route / Docker gateway IP
            export DOCKER_GW=$(ip route | grep default | cut -d' ' -f 3)
            
            # Delete the default route so the VPN can replace it
            ip route del default via "$DOCKER_GW"
            
            # Add static routes through the Docker gateway for the VPN server
            # and the local LAN only, so just those bypass the tunnel.
            # "|| true" keeps the script going if a route already exists.
            ip route add "$VPN_SERVER_IP" via "$DOCKER_GW" || true
            ip route add "$LAN_SUBNET" via "$DOCKER_GW" || true
            
            

            LAN_SUBNET is my local network (e.g. 192.168.0.0/24) and VPN_SERVER_IP is the public IP of the VPS (1.2.3.4/32). I pass those in as environment variables via docker-compose.

            The VPN server pushes the default routes to the client (0.0.0.0/1 via <VPS VPN IP> and 128.0.0.0/1 via <VPS VPN IP>).
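            That pair of /1 routes is what OpenVPN’s def1 flag generates, so on the server side it’s a single push directive (sketch of the relevant server.conf line):

```
# server.conf on the VPS (sketch)
# "def1" overrides the default route with 0.0.0.0/1 + 128.0.0.0/1
# instead of replacing it, so the client's original default route
# survives as a fallback if the tunnel goes down
push "redirect-gateway def1"
```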

            Again, sorry this is all generic, but since you’re using different mechanisms, you’ll need to adapt the basic logic.

            • nickshanks@lemmy.world (OP) · 1 year ago

              Thanks, this helps a lot. So in your OpenVPN config, on the client, do you have it send all traffic back through the VPN?

                • nickshanks@lemmy.world (OP) · 1 year ago

                  Okay, can we go back to those iptables commands?

                  iptables -t nat -A PREROUTING -d {VPS_PUBLIC_IP}/32 -p tcp -m tcp --dport {PORT} -j DNAT --to-destination {VPN_CLIENT_ADDRESS}
                  iptables -t nat -A POSTROUTING -s {VPN_SUBNET}/24 -o eth0 -j MASQUERADE
                  

                  Just to confirm, is the -o eth0 in the second command essentially the interface where all the traffic is coming in? I’ve set up a quick WireGuard VPN with Docker and configured the client so that it routes ALL traffic through the VPN. Running curl ifconfig.me now shows the public IP of the VPS… this is good. But it seems like the iptables commands aren’t working for me.
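                  For reference, the client config that sends everything through the tunnel looks roughly like this (a sketch; keys, addresses and the endpoint are placeholders):

```ini
# wg0.conf on the home server (sketch; keys and IPs are placeholders)
[Interface]
Address = <home vpn ip>/24
PrivateKey = <client private key>

[Peer]
PublicKey = <VPS public key>
Endpoint = <vps public ip>:51820
# 0.0.0.0/0 routes ALL traffic through the tunnel; wg-quick also adds
# the fwmark/rule handling so the encapsulated packets still reach the VPS
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```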

                  • Admiral Patrick@dubvee.org · 1 year ago

                    Just to confirm, is the -o eth0 in the second command essentially the interface where all the traffic is coming in?

                    That is the interface the masqueraded traffic exits through, i.e. the internet-facing interface of the VPS.

      • nickshanks@lemmy.world (OP) · 1 year ago

        Do I need to route the VPN’s own traffic through my router, and then everything else (0.0.0.0/0) through the VPN?

        • Admiral Patrick@dubvee.org · 1 year ago

          See my other response.

          You may need to move the logic from netplan to a script that runs when the VPN is brought up. Otherwise it will likely fail, since the VPN tunnel interface won’t exist yet to route traffic to.
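          If the tunnel is managed by wg-quick, for example, the same logic can hang off PostUp/PostDown hooks so it only runs once the interface exists (sketch; addresses are placeholders):

```ini
# wg0.conf [Interface] section on the home server (sketch)
[Interface]
Address = <home vpn ip>/24
PrivateKey = <client private key>
# Pin the VPS's public IP via the local gateway so the tunnel's own
# packets don't get routed into the tunnel, then swap the default route
# (%i is wg-quick's placeholder for the interface name)
PostUp = ip route add <vps public ip>/32 via <local gateway>
PostUp = ip route replace default dev %i
PostDown = ip route del <vps public ip>/32 via <local gateway>
```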

    • z3bra@lemmy.sdf.org · 1 year ago

      Setting the default gateway to the VPN has many implications that you must take into account before doing it:

      • you need to allow ALL traffic through the VPN ACL, which nullifies the concept of an ACL as a security measure.
      • it breaks the VPN itself, as the encapsulated packets can no longer reach the other site. You need a /32 route to the other site’s public IP to keep the tunnel up.
      • it will route ALL internet traffic from this host through the VPN and out through the other site’s internet access.
      • it could break access to the server’s own LAN, so you might need to set your local routes manually.
      • it can let your server access the LAN of the remote server, thus leaking local networks.

      A better option would be to use VRFs, so that traffic coming in through the VPN is routed back out through it without touching the main routing table.
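      A rough sketch of that VRF idea (the VRF name, table number and wg0 interface are placeholders):

```shell
# Put the VPN interface in its own VRF so traffic that arrives over
# the tunnel is looked up in the VRF's table, and its replies go back
# out through the tunnel regardless of the main routing table
ip link add vrf-vpn type vrf table 100
ip link set vrf-vpn up
ip link set wg0 master vrf-vpn
# Default route inside the VRF points back into the tunnel
ip route add default dev wg0 table 100
```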