glizzyguzzler
- 3 Posts
- 32 Comments
glizzyguzzler@piefed.blahaj.zone to Selfhosted@lemmy.world • Exposing docker socket to a container · English · 1 · 3 days ago
I wanted Jellyfin on its own IP so I could think about implementing VLANs. I haven't yet, and I'm not sure what I did is even needed. But I did do it! You very likely don't need to do it.
There are likely guides on enabling Jellyfin hardware acceleration on your Asustor NAS - so just follow them!
I do try to set up separate networks for each service.
On one server I have a monolithic docker compose file with a ton of networks defined to keep services from talking to the internet or to each other when it's not useful (the pdf converter is prevented from talking to the internet or the Authentik database, for example). Makes the most sense here, has the most power.
On this server I have each service split up with its own docker compose file. The network bit makes more sense for services that have an external database and other components: I can set it up so only the service can talk to its database, and the database cannot reach the internet at large (by adding `internal: true` to the `networks:` section - see the sketch below). In this case, yes, the pdf converter can talk to other services, and I'd need to block its internet access at the router somehow.
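A minimal sketch of that pattern, assuming placeholder service names and a placeholder postgres image (not my actual stack):

```yaml
# docker-compose.yml sketch: the app can reach its DB and the internet,
# while the DB sits on an internal-only network with no route out.
networks:
  app-db-nw:
    internal: true   # nothing on this network can reach the internet
  app-nw: {}         # normal bridge network for the app itself

services:
  app:                              # placeholder service
    image: example/app:latest
    networks:
      - app-nw                      # internet-facing traffic
      - app-db-nw                   # private path to the database

  app-db:                           # placeholder database
    image: postgres:16
    networks:
      - app-db-nw                   # internal only: no internet access
```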
The monolithic method gets more annoying to deal with as you add services, by virtue of a gigantic docker compose file and the up/down time (especially for services that don't acknowledge shutdown commands). But it lets me use fine-grained networking within the docker compose file.
For each service on its own, it exposes a port and things talk to it from there. So instead of an internal docker network letting Authentik talk to a service, Authentik just looks up the address of the service. I don't notice any difference in perceptible lag.
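A sketch of that split-compose pattern, with a hypothetical service name and port (binding to the server's LAN IP or 127.0.0.1 controls who can reach it):

```yaml
# Standalone docker-compose.yml for one service; Authentik (running from its
# own compose file) reaches it via the published host address rather than a
# shared docker network.
services:
  some-service:                      # hypothetical service name
    image: example/some-service:latest
    ports:
      - '10.0.1.10:8080:8080/tcp'    # bind only to the server's LAN IP (placeholder)
    restart: unless-stopped
```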
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 1 · 3 days ago
Good to know, I didn't know IPv6 can come with efficiency gains. Makes sense, since the designers had a beat to think about why IPv4 sucks. I'll avoid NAT for IPv6.
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 1 · 3 days ago
I got it: ULA for everything that doesn't care, one GUA for the server. When everything else starts to care about the lack of IPv6 or has routing issues, convert the ULA to GUA and rock n roll.
Thanks for providing a sane way to approach it slowly and methodically!
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 1 · 3 days ago
I do appreciate you taking the time to write that up! Is the 50.50.0.0/22 crossing US and EU IPv4 allocations? From searching, it looks like it's around the boundary between US and Germany allocations. Interesting, I had no idea IP anonymization existed or was applied in such a haphazard way.
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 1 · 3 days ago
Thanks for writing this up, really highlights the effective differences.
So for the internal delegation I'd SLAAC it and let things "just work", or use DHCPv6 if I cared to specify IPv6 addresses (which I will need to do to have a static IPv6 address for a server to be reached at). Thanks again!
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 2 · 4 days ago
Thanks for taking the time to go into detail on this, it helps because I just haven't been able to put acronyms to actionable meaning from just reading blogs and posts.
How do things outside the LAN talk to things inside the LAN that have ULA addresses (which I'm assuming are the equivalent of the 10.0.0.0/16 idea)? Will devices that are given ULA addresses be NAT'd just like IPv4, or will they not be able to talk to the outside world over IPv6?
Edit: I'm getting more of what you said; you answered this: the ULA addresses will not be able to talk to the outside world over IPv6, so those devices will be IPv4-only even to websites that support IPv6. Follow-on question would then be: is kludging NAT for IPv6 not a better solution than ULA addresses? Or is the clear answer to just use IPv6 as intended and let the devices handle their privacy with IPv6 privacy extensions?
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 1 · 4 days ago
I see now that a limitation I just understood for IPv4 (you can expose a given port from only one device on the router) isn't a thing for IPv6 working without NAT: every device on a LAN can be given a world-routable address and expose the same port. Interesting - in my home I don't think I'd ever run into that, but I can see issues like that piling up quickly in big deployments.
Thanks for taking the time to explain all of this in detail!
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 2 · 4 days ago
I gather people talk like NAT is a rung of hell, but I guess it works because I never think about it. Maybe it becomes shittastic with multiple NATs? With one router it seems straightforward to have port forwarding.
I do not understand why I want better inbound connections - but maybe if I get hit with a cgnat then I’ll understand?
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 2 · 4 days ago
Mobile devices are largely IPv6-only now, which messes with VPN back to home. The IPv6-to-IPv4 conversion seems to be shoddy for my mobile carrier.
Not here for what it represents, just want it to work.
I haven’t run into NAT issues that I’ve noticed, would IPv6 avoid issues with cgnat that people complain about? (If/when it happens in the future)
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 1 · 4 days ago
I know, but when you get captcha'd all of the time you feel like you're kinda winning (but not really, of course). I don't want them to just have a nice fingerprint of my devices without having to try at all. I see others have mentioned "IPv6 privacy extensions", which let devices cycle through the multitude of IPv6 address space to keep a semblance of privacy - that seems to be the "default" solution.
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 4 · 4 days ago
I see, I saw someone else mention "IPv6 privacy extensions". So basically it's up to the individual devices to handle privacy instead of the router doing it for them in IPv6 land.
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 2 · 4 days ago
I had never picked up on this, thank you for name-dropping what to look for!
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 3 · 4 days ago
I see people say "not worth it" but never expound on what exactly makes it not worth it?
The most I get is a vibe (using a metaphor): "Python-like judging, where people prefer to do it in a 'Pythonic' way" - but of course that's silly. There must be more to it, but I've never seen interoperability issues called out.
glizzyguzzler@piefed.blahaj.zone (OP) to Selfhosted@lemmy.world • IPv6 & Opnsense & Not Exposing Machine-Specific IPv6s to Corpos · English · 3 · 4 days ago
Thank you for the guide! It's very straightforward and looks hella easy to implement. From reading it I would not have guessed it would do what I wished.
glizzyguzzler@piefed.blahaj.zone to Selfhosted@lemmy.world • Exposing docker socket to a container · English · 3 · 4 days ago
So I've found that if you use the `user:` option with a username (`user: UserName`), it requires the container to also have that username inside. If you do it with a UID/GID (`user: 1500:1500`), it maps the container's default user (likely root, 0) to the UID/GID you provide. For many containers it just works; for linuxserver (a group that produces containers for stuff) containers I think it biffs it - those are way jacked up. I put the containers that won't play ball in an LXC container (via the Incus GUI), or for simple permission fixes I just make a permissions-fixing version of the container (runs as root, but only executes commands I provide) to fill a volume with the data that has the right permissions, then load that volume into the container (there's a sketch of that at the end of this comment). Luckily Jellyfin doesn't need that.

I give Jellyfin read-only access (via `:ro` in the `volumes:` entries) to my media stuff because it doesn't need to write to it. I think it's fine if your use-case needs `:rw`; keep a backup either way (even if you `:ro`!).

Here's my docker-compose.yml; I gave Jellyfin its own IP with macvlan. It's pretty janky and I'm still working on it, but you can have Jellyfin use your server's IP by deleting everything after `jellyfin-nw:` (but keep `jellyfin-nw:`!) in both the `networks:` section and the `services:` section. Delete the `mac_address:` in the `services:` section too. In the `ports:` part, that `10.0.1.69` would be the IP of your server (or in this case, what I declare the Jellyfin container's IP to be) - it makes it so the container can only bind to the IP you provide; otherwise it can bind to anything the server has access to (as far as I understand). And of course, I have GPU acceleration working here with an embedded Intel iGPU. Hope this helps!
```yaml
# --- NETWORKS ---
networks:
  jellyfin-nw:
    # In docker, `macvlan` gets similar stuff to
    driver: macvlan
    driver_opts:
      parent: 'br0'
      # mode: 'l2'
    name: 'doc0'
    ipam:
      config:
        - subnet: "10.0.1.0/24"
          gateway: "10.0.1.1"

# --- SERVICES ---
services:
  jellyfin:
    container_name: jellyfin
    image: ghcr.io/jellyfin/jellyfin:latest
    environment:
      - TZ=America/Los_Angeles
      - JELLYFIN_PublishedServerUrl=https://jellyfin.guzzlezone.local/
    ports:
      - '10.0.1.69:8096:8096/tcp'
      - '10.0.1.69:7359:7359/udp'
      - '10.0.1.69:1900:1900/udp'
    devices:
      - '/dev/dri/renderD128:/dev/dri/renderD128'
      # - '/dev/dri/card0:/dev/dri/card0'
    volumes:
      - '/mnt/ssd/jellyfin/config:/config:rw,noexec,nosuid,nodev,Z'
      - '/mnt/cache/jellyfin/log:/config/log:rw,noexec,nosuid,nodev,Z'
      - '/mnt/cache/jellyfin/cache:/cache:rw,noexec,nosuid,nodev,Z'
      - '/mnt/cache/jellyfin/config-cache:/config/cache:rw,noexec,nosuid,nodev,Z'
      # Media links below
      - '/mnt/spinner/movies:/data/movies:ro,noexec,nosuid,nodev,z'
      - '/mnt/spinner/shows:/data/shows:ro,noexec,nosuid,nodev,z'
      - '/mnt/spinner/music:/data/music:ro,noexec,nosuid,nodev,z'
    restart: unless-stopped
    # Security stuff
    read_only: true
    tmpfs:
      - /tmp:uid=2200,gid=2200,rw,noexec,nosuid,nodev
    # MAC address is 02:42, then 10.0.1.69 in hex for each number
    # between the .s mapped to the :s in the MAC address.
    # It's how docker assigns them, so there will never be a MAC address collision.
    mac_address: 02:42:0A:00:01:45
    networks:
      jellyfin-nw:
        # Docker is pretty jacked up and can't get an IP via DHCP, so manually specify it
        ipv4_address: 10.0.1.69
    user: 2200:2200
    # GPU acceleration needs the render group; see the number (GID)
    # for your server with `getent group render | cut -d: -f3`
    group_add:
      - "109"
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
```
Lastly, I thought I should add the external stuff needed for the hardware acceleration to work and to get the user going:
```sh
# For jellyfin low power (LP) intel QSV stuff
# if trouble see https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#configure-and-verify-lp-mode-on-linux
sudo apt install -y firmware-linux-nonfree #intel-opencl-icd
sudo mkdir -p /etc/modprobe.d
sudo sh -c "echo 'options i915 enable_guc=2' >> /etc/modprobe.d/i915.conf"
sudo update-initramfs -u
sudo update-grub

APP_NAME="jellyfin"
APP_PID=2200
sudo useradd -u $APP_PID $APP_NAME
```
The Jellyfin user isn’t added to the render group, rather the group is added to the container in the docker-compose.yml file.
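For the permissions-fixing container idea mentioned at the top of this comment, here's a minimal sketch; the paths, UID/GID, alpine image, and service names are placeholders for illustration, not my actual setup:

```yaml
# One-shot "permissions fixer": runs as root once, copies the data into a
# named volume with the right ownership, then the real service mounts that
# volume as a non-root user.
services:
  fix-perms:
    image: alpine:latest               # placeholder image
    user: 0:0                          # root only for this one-shot job
    command: sh -c "cp -a /src/. /data/ && chown -R 2200:2200 /data"
    volumes:
      - '/mnt/original-data:/src:ro'   # placeholder source path
      - 'fixed-data:/data'
    restart: "no"

  app:
    image: example/app:latest          # placeholder service
    user: 2200:2200
    volumes:
      - 'fixed-data:/app/data'

volumes:
  fixed-data: {}
```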
glizzyguzzler@piefed.blahaj.zone to Selfhosted@lemmy.world • Exposing docker socket to a container · English · 2 · 4 days ago
Thanks for explaining the underworkings, I never dug in to see what happens and how it works - now I see that it's bad.
glizzyguzzler@piefed.blahaj.zone to Selfhosted@lemmy.world • Exposing docker socket to a container · English · 5 · 4 days ago
Per this guide https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html I do not. I have a cron/service script that updates containers automatically (`docker compose pull` I think) for things I don't care about if they fail for a bit (pdf converter, RSS reader, etc.) or things that are exposed to the internet directly (Authentik, Caddy).
Note that smart peeps say the docker socket is not safe even as read-only. Watchtower is inherently untenable sadly, and so is Traefik (trusting a docker-socket-proxy container with giga root permissions only made sense to me if you could audit the whole thing and keep auditing it with updates, and I cannot). https://stackoverflow.com/a/52333163 https://blog.quarkslab.com/why-is-exposing-the-docker-socket-a-really-bad-idea.html
I then just have scripts to do the `docker compose pull` manually for things with oodles of breaking changes (Immich) or things I'd care about if they did break suddenly (paperless).
Overall, I've only had a few break over a few years - and that's because I also run all services (per the link above) as a user, read-only, and with no capabilities (beyond what's required; afaik none need any). And while some containers are well coded, many are not, and if an update suddenly wants to write to '/npm/staging', the read-only torches that until I can figure it out and put in a tmpfs fix (example below). The few failures are worth the peace of mind that it's locked the fuck down.
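As a sketch of what I mean by a tmpfs fix, with a hypothetical container and the '/npm/staging' path from the example above:

```yaml
# If an update to a read-only container suddenly wants to write somewhere,
# give it a throwaway tmpfs for just that path instead of dropping read_only.
services:
  some-app:                          # hypothetical container
    image: example/some-app:latest
    read_only: true
    user: 1500:1500                  # placeholder UID/GID
    tmpfs:
      - /npm/staging:uid=1500,gid=1500,rw,noexec,nosuid,nodev
```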
I hope to move to podman sometime to eliminate the last security risk - the docker daemon running the containers, which runs as root. Rootless docker seems to be a significant hassle to do at any scale, so I haven’t bothered with that.
Edit: this effort is to prevent the attack vector of "someone hacks or buys access to a well-used project (e.g., Watchtower, last updated 2 years ago, a commonly used docker socket proxy, etc.) which is known to have docker socket access and then pushes a malicious update to encrypt and ransom your server using root access escalations from the docker socket". As long as no container has root (and the container doesn't breach the docker daemon…), the fallout from a good container turned bad is limited to the newly bad container.
glizzyguzzler@piefed.blahaj.zone to Selfhosted@lemmy.world • Securing a 'public' service for family · English · 5 · 6 days ago
Just came back to say the same thing: I use this for geo-IP blocking and it's so well featured it's insane. Any VPS - just make sure to clear local IPs (incl. the docker range if using docker, though it's been improving so much it may handle that automatically now).
glizzyguzzler@piefed.blahaj.zone to Selfhosted@lemmy.world • Securing a 'public' service for family · English · 3 · 6 days ago
Assuming you're accessing the service (Peertube in this case) from a web browser and not an app - a thing I decided is "good enough" plus "easy enough" is Authentik sitting in front of the service.
The thought process is: Peertube or some other service's first job is the purpose of that service, so its security likely won't be as good as that of a service whose first job is security.
Authentik can also do stuff like OIDC if the service supports it - and you can chain them together. I've got services where you hit Authentik first, and then once you're allowed to talk to the service you log in with Authentik OIDC. Some services do it seamlessly, some make you click a "log in with Authentik" button again - either way it's painless enough. Everyone I know is haunted by the MS "remember this login y/n" page that pops up every time you log into some stupid MS thing, and it never matters if you choose y or n - it'll be back. So even 2 steps are chill in comparison for them.
Harden Authentik, and then you can apply it to any other service you want in the future too (maybe Stirling PDF; you don't even need users for that). (Feel free to harden Peertube too - just less important and likely not needed!)
Lastly: I say "not an app" because apps can't deal with hitting Authentik first, afaik. APIs for apps or other purposes can be cleared to go to the service directly if you're confident that that'll be ok (authenticated GETs and a limited scope of PUTs, etc., but I'm unfamiliar with how to be truly confident in an API's security). But Jellyfin's API, for example, is too dangerous to expose, so no go on that - it's VPN city.
Right right, things don't just have one… from searching I've found "SLAAC assisted mode", which allows the router to let SLAAC SLAAC while also being able to declare addresses for a server. Thanks for that tiny note!