I’m currently planning to build a new server after I discovered how much power my current system draws at idle. Since I have to set up a new system anyway, I’d like to add a NAS to it to manage my storage. Right now I just have a ZFS pool in Proxmox for my data drives, and all VMs/containers that need access have escalated rights and can directly access the pool (and all other storage on Proxmox), which is a bit janky and definitely not best practice security-wise. Another negative side effect is that the drives are rarely spun down. That’s why I now want a NAS as the only system controlling the drive pool.

Here’s where my question comes up: Should I run TrueNAS (SCALE?) in a VM and pass the drives through somehow? Is that even possible without mounting them in Proxmox? I’d like them fully controlled by the NAS, including running the ZFS pool. Or should I install TrueNAS SCALE and then run Proxmox as a VM inside it? Would the performance penalty be huge there, and would I still be able to pass through USB/PCI devices (maybe even the CPU’s iGPU to forward to Jellyfin, if that’s even possible in Proxmox)?
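For reference, here’s roughly what drive passthrough looks like on the Proxmox side as far as I understand it. The VM ID (100) and the disk ID are just placeholders, so treat this as a sketch rather than a tested recipe:

```sh
# List stable disk identifiers (these survive /dev/sdX reordering)
ls -l /dev/disk/by-id/

# Hand a whole raw disk to VM 100 as a SCSI device; Proxmox never
# mounts it, and the guest can build its own ZFS pool on it
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_EXAMPLE_SERIAL
```

From what I’ve read, this still routes I/O through the Proxmox storage layer (so no direct SMART access from the guest), which is why passing through the whole SATA controller or an HBA as a PCI device is usually recommended for ZFS.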

  • einsteinx2@programming.dev · 1 year ago

    On my home NAS I started this way and it worked great. I only bought an HBA card because I needed more ports. Your mobo probably exposes your SATA controller as a PCIe device that can be passed through to a VM. In my case I booted Proxmox off an NVMe drive and passed my SATA controller to a Debian VM, where I just use simple NFS and Samba for sharing and SnapRAID for drive parity (but TrueNAS should work just as well).
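    In case it’s useful, the controller passthrough part boiled down to something like this (the PCI address and VM ID here are made up, check your own with lspci):

    ```sh
    # Find the PCI address of the onboard SATA controller (or HBA)
    lspci | grep -iE 'sata|sas'

    # Pass that device through to VM 100; needs IOMMU enabled first
    # (intel_iommu=on or amd_iommu=on on the kernel command line)
    qm set 100 -hostpci0 0000:00:17.0
    ```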

    I had zero issues with it, and when I upgraded to an HBA card I just moved the drives to those ports, switched the PCIe device I was passing through, and everything just worked (it helps that I always mount using partition UUIDs).
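    For example (hypothetical UUID and mount point, yours will differ):

    ```sh
    # Find the partition UUID
    blkid /dev/sdb1

    # Then reference it in /etc/fstab instead of /dev/sdX, so mounts
    # survive drives moving between controllers/ports:
    #   UUID=2f6c8e1a-0000-0000-0000-000000000000  /mnt/data1  ext4  defaults,nofail  0  2
    ```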