I’m in the process of setting up homelab stuff and I’ve been doing some reading. It seems the consensus is to put everything behind a reverse proxy and use a VPN or Cloudflare tunnel.

I plan to use a VPN for accessing my internal network from outside and to protect less battle-tested FOSS software. But I feel like if I can’t open a port to the internet to host a web server then the internet is no longer a free place and we’re cooked.

So my question is: can I expose a web server, SSH, and WireGuard to the internet with reasonable safety? What precautions and common mistakes do I need to watch out for?

  • irotsoma@lemmy.blahaj.zone · 4 days ago

You can mitigate some risks with software like fail2ban to slow down some of the hacking attempts, but you will still be susceptible to denial of service attacks, sometimes unintentional ones, from ever-persistent “AI” crawler bots, as well as the constant barrage of automated hacking attempts. If your bandwidth is not able to handle it or you have bandwidth caps, you’re likely going to have issues.

      • irotsoma@lemmy.blahaj.zone · edited · 1 day ago

        To a point, yes, for the crawler bots, but Anubis uses a lot more resources to keep the bots busy than a simple firewall ignoring the request. And if there’s no response at all, as opposed to a negative response, the requests are likely to fall off more quickly. The even more significant load might be from malicious login attempts, which use even more resources, and Anubis likely won’t be as effective against those more targeted attacks, depending on the types of services we’re talking about. Either way, firewall blocks are way, way less resource-intensive than any of that, so as soon as you open up that firewall and start responding to those malicious or abusive requests, they become progressively more resource-intensive to mitigate.

        • Auth@lemmy.world (OP) · 24 hours ago

          Yes, but I’m spite-driven. I’ll take the extra hit to inflict damage on the crawlers.

          • irotsoma@lemmy.blahaj.zone · 1 hour ago

            Problem is, many of us are stuck with very low upstream bandwidth due to cable-company ISP monopolies and/or data caps, or are just running things on a small Raspberry Pi or something, and the malicious requests will create extra expense or flat-out denial of service for real traffic.

  • Lettuce eat lettuce@lemmy.ml · edited · 5 days ago

    I used to do this myself, just with OpenVPN instead of WireGuard. It worked fine, then I found overlay networks like Tailscale and it changed my life.

    Just use an overlay network. Tailscale or Netbird are my personal recommendations: Netbird if you want 100% open source right out of the box, Tailscale if you don’t mind their default coordination server being closed source (you can run the open-source Headscale server if you want).

    Overlay networks make all this sooooo much easier. Encrypted, secure access to any and all of your internal network devices, with fine-tuned access control depending on how you want it set up.

    I will never port-forward or manually set up a VPN tunnel again; overlay networks perfectly fit my use case and they are so much easier to get working.
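    For anyone curious what that looks like in practice, joining a tailnet is a couple of commands. This is a sketch, not a complete guide: flag availability varies by Tailscale version (check `tailscale up --help`), and the Headscale URL below is a placeholder for your own instance.

```shell
# Join the tailnet (first run opens a browser login)
sudo tailscale up

# Optionally also accept SSH connections over the tailnet
sudo tailscale up --ssh

# List peers and their tailnet IPs
tailscale status

# Self-hosting the coordination server? Point the client at Headscale
# (placeholder URL -- substitute your own instance):
sudo tailscale up --login-server https://headscale.example.com
```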

  • sainth@lemmy.world · 6 days ago

    You can. I recommend making sure you have logging in place so you know what’s going on. This could include not just service logs but firewall logs as well. You might want to rate limit the connection attempts for SSH and WireGuard and consider Fail2Ban or something similar.
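    For the rate-limiting part, an nftables fragment like this is one way to sketch it. The ports are the common defaults (22 for SSH, 51820 for WireGuard) and this is illustrative only, not a complete ruleset; note that plain `limit rate` is global, so per-source limits would need an nftables meter instead.

```
# /etc/nftables.conf (fragment) -- illustrative, not a drop-in config
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept
        iif "lo" accept

        # At most 4 *new* SSH connections per minute (global limit;
        # per-source limiting needs a meter); excess hits the drop policy
        tcp dport 22 ct state new limit rate 4/minute accept

        # WireGuard (default port 51820): cap the packet rate
        udp dport 51820 limit rate 100/second accept
    }
}
```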

    • chonkyninja@lemmy.world · 6 days ago

      Fail2ban is useless for a WireGuard endpoint. WireGuard never sends a response unless there’s a valid signed handshake request. It’s basically a black hole.

  • frongt@lemmy.zip · 6 days ago

    The Internet is a free place, in the sense that it’s very, very public.

    Expose the VPN and nothing else, if you can. There are always automated attacks scanning literally the entire Internet.

  • dethmetaljeff@lemmy.ml · edited · 5 days ago

    Opening SSH to the internet is almost always a terrible idea. It’s just not worth the risk for the slight convenience. Web, VPN, etc., go for it. Just make sure you take appropriate precautions: fail2ban, GeoIP blocks, and keeping your exposed software patched. Use something like hostedscan to make sure you don’t have any known vulnerabilities exposed to the internet or obvious misconfigurations.

    I additionally use CrowdSec on my web server; it functions as a slightly more intelligent fail2ban. It rarely triggers, but it’s a nice additional layer. My fail2ban triggers several times a day. I’ve got it following my default virtual host and banning anyone that hits it (if you don’t at a minimum know my external hostnames, then you have no business accessing my ports).
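    That default-vhost trick can be sketched roughly like this. File paths, the jail name, and the filter regex are made up for illustration; the regex assumes nginx’s common/combined log format and would need tuning to yours.

```
# nginx catch-all vhost: anyone arriving by bare IP or an unknown
# hostname gets logged, then the connection is closed with no
# response (444 is nginx's "drop connection" pseudo-status)
server {
    listen 80 default_server;
    server_name _;
    access_log /var/log/nginx/catchall.log;
    return 444;
}

# /etc/fail2ban/filter.d/nginx-catchall.conf
[Definition]
failregex = ^<HOST> -

# /etc/fail2ban/jail.d/nginx-catchall.conf
[nginx-catchall]
enabled  = true
filter   = nginx-catchall
logpath  = /var/log/nginx/catchall.log
maxretry = 1
bantime  = 1d
```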

    • MangoPenguin@lemmy.blahaj.zone · 5 days ago

      If a port is forwarded in NAT and an application is listening, outside traffic can reach it directly without the application needing to initiate a connection first.

    • Björn Tantau@swg-empire.de · 5 days ago

      The application doesn’t have to actively reach outside, it just has to listen on that port. If there is no application listening, an open port does nothing. Though a port can really only be called open if an application is listening.
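      A tiny illustration of that distinction, using nothing beyond the Python standard library: the “server” below never initiates a connection; it only listens, and data still arrives once a client connects, which is exactly what a router port-forward delivers to a listening application.

```python
import socket
import threading

# A server that only *listens*: it never initiates a connection.
# With a router port-forward in front of it, outside traffic would
# reach it exactly the way this local client does.

def serve_once(server: socket.socket) -> bytes:
    """Accept one connection and return everything the peer sends."""
    conn, _addr = server.accept()  # blocks until a client connects
    with conn:
        chunks = []
        while True:
            data = conn.recv(1024)
            if not data:  # peer closed the connection
                break
            chunks.append(data)
        return b"".join(chunks)

# Port 0 = let the OS pick a free port (keeps the example self-contained)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=lambda: result.update(data=serve_once(server)))
t.start()

# The client is the side that initiates; the listener just answered.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
client.close()

t.join()
server.close()
print(result["data"])  # b'hello'
```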

        • Björn Tantau@swg-empire.de · 5 days ago

          That’s the point of port forwarding. Yes, normally applications aren’t reachable and have to reach out first. That’s how your browser can receive answers. With port forwarding you instruct your router to always forward incoming traffic for a specific port to a specific computer in your LAN.

    • thecoffeehobbit@sopuli.xyz · 5 days ago

      This post considers the situation where you expose your ports to the internet, on the edge of your residential network, for example by setting your router to forward requests with port 443 to a certain host in your network. In this case you do have a public ip address and the configured port on your home server is now reachable from the internet. This is different from just exposing a port on a machine inside a residential network for local use.

      • Auth@lemmy.world (OP) · 2 days ago

        If you set your router to only forward traffic from port 443 to a certain host does this drop all non port 443 traffic to that host?

        • thecoffeehobbit@sopuli.xyz · 2 days ago

          I’d expect so, but you’ll need to test how your exact router model behaves. Some have a “DMZ” function that you can use to pass all ports to a certain host. I use it to expose the WAN interface of my OPNsense router to the internet through the ISP router. Then I can fine-tune the open ports further in OPNsense, which is better designed for that than the usual ISP box.

  • litchralee@sh.itjust.works · edited · 6 days ago

    Can I expose webserver, SSH, WireGuard to the internet with reasonable safety?

    Yes, yes, and yes. Though in all three cases, you would want to have some sort of filtering and IPS in place, like fail2ban or similar, at an absolute minimum. There are port scanners of all kinds scanning for vulnerable software that can be exploited. Some people suggest changing the port numbers away from the default, and while security through obscurity can be a valid tactic, it alone is not a layer of your security onion.
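    As a concrete starting point for that absolute minimum: fail2ban ships an sshd filter, so only the jail needs enabling. A minimal /etc/fail2ban/jail.local might look like this; the values are illustrative defaults to tune, not recommendations.

```
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
port    = ssh
```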

    A reverse proxy plus tunnel is a reasonable default recommendation because it is easy and prevents a large class of low-effort attacks and exploits, but tunneling has its drawbacks such as adding a component that exists outside of your direct control. It is also not a panacea. Reverse proxying alone is also workable, as it means just one point of entry to reinforce with logging and blocking.

    But I feel like if I cant open a port to the internet to host a webserver then the internet is no longer a free place and we’re cooked.

    The Internet is still (for now) a free place, but just like with free speech, effort must be expended to keep it free. The threats have increased and while other simpler options have arisen to fill demand for self hosting, this endeavor is about investing sufficient time and effort to keep it going.

    In my estimation, it is no different than tending to a garden in the face of rising environmental calamities. You can and should do it, so long as you’re fully informed about the effort required.

    • Auth@lemmy.world (OP) · 2 days ago

      Thanks for the answer, it was very helpful, and thanks to everyone else who answered in this thread.

  • freagle@lemmygrad.ml · 6 days ago

    So, hate to break this to you, but it’s been almost 20 years since you could safely just open ports directly to your computer from your home router, AND ISPs have been blocking traffic to customers on standard ports like 80, 443, 21, 22, etc. for about as long.

    The way to do this is actually to have multiple computers, with the first computer acting as your firewall, IDS, and IPS. That computer should run no other services and should be heavily locked down after it’s set up, as in most things should be made read-only except the few variable files that are required for operation.

    That computer should then route traffic to computers behind it that provide services like https, ssh, etc. This setup makes everything much safer.
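    On Linux, the “route traffic to computers behind it” part is typically done with DNAT. A hedged nftables sketch; the interface names (wan0, lan0) and internal addresses are placeholders for your own setup.

```
table ip nat {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        # Web traffic arriving on the WAN side goes to the internal web host
        iifname "wan0" tcp dport 443 dnat to 192.168.10.20
        # SSH exposed on a non-standard outside port, mapped to 22 inside
        iifname "wan0" tcp dport 2222 dnat to 192.168.10.30:22
    }
    chain postrouting {
        type nat hook postrouting priority srcnat;
        # Ordinary outbound NAT for the LAN
        oifname "wan0" masquerade
    }
}
```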

    But you’ll still have to contend with your ISP and they don’t usually budge, so you’ll have to run services on non-standard ports.

    • 0x0@lemmy.zip · 5 days ago

      ISPs just don’t allow traffic to customers on standard ports like 80, 443, 21, 22, etc.

      YMMV