I’m sorry to disappoint you: this is a consumer motherboard. So yes, AMD PSP exists, and there’s no iLO.

However, that’s where the bad part ends.

Behold what is, in my opinion, the most server-like MicroATX board released to consumers: the MSI PRO B550M-VC WIFI Micro ATX AM4 Motherboard (alternate link).

If you followed the link and read the specifications, you would know exactly what I’m talking about, but for people who didn’t, here is the summary:

  • 4 physical x16 slots (3 of them run at x1 speed; the one closest to the CPU is a full PCI Express 4.0 x16).
  • 8 SATA3 ports (people who wanted to build a NAS should be visibly salivating at this point).

Apart from that, there are 2 nice features that I would personally like to point out, since I look for them in every board:

  • 128GB of RAM supported (no ECC, and I suppose a lot of motherboards support that much now, but it’s nice to have).
  • A 2230 E-key slot.
    • I know a few of you might be wondering why I’m mentioning the slot used for a WiFi card in this post - I invite you to take a look at this.
      • It’s a link to an AliExpress item, so if you don’t want to click, here’s the short version: it’s an E-key to 2.5GbE converter using a Realtek RTL8125B chip. It’s amazing; I learnt about these adapters from a random Level1Techs video. I think a lot of people could use this.

And there you have it. If you’re building a system that requires heavy PCIe access and a lot of SATA3 storage, I think this is the best value you can find when purchasing new.

Cheers

  • Kata1yst@kbin.social · 1 year ago

    ZFS doesn’t need ECC more than any other filesystem. Technically it needs it less. But what it does do is expose just how common memory errors are.

    ECC exists for a reason at the enterprise level. A very important reason. You need to be able to trust that the data that the CPU put in memory is the same as the data written to disk.
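
    A toy sketch of that trust problem (my own illustration, not how ZFS is actually implemented): if a bit flips in RAM before the checksum is computed, the checksum faithfully covers the corrupted data and every later read verifies “clean”; only corruption that happens after the checksum exists gets caught.

    ```python
    import hashlib

    def write_block(data: bytes) -> dict:
        # Model of a checksumming write: the checksum covers whatever
        # happens to be in RAM at the moment of the write.
        return {"data": data, "checksum": hashlib.sha256(data).hexdigest()}

    def read_ok(block: dict) -> bool:
        # Read path: recompute the checksum and compare.
        return hashlib.sha256(block["data"]).hexdigest() == block["checksum"]

    payload = bytearray(b"important data")

    payload[0] ^= 0x01                   # bit flip in RAM *before* the write
    block = write_block(bytes(payload))
    print(read_ok(block))                # True - the corruption is invisible

    block["data"] = bytes([block["data"][0] ^ 0x01]) + block["data"][1:]
    print(read_ok(block))                # False - post-checksum damage is caught
    ```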

      • krolden@lemmy.ml · 1 year ago

        Don’t buy a consumer board for a server if you’re going to use it to store important data. A GPU cloud-gaming or Plex server, say, would be fine on consumer hardware if you don’t need out-of-band management.

        Also, ECC RDIMMs are much easier to come by than ECC UDIMMs, but they only work with EPYC or Xeon chips. I say if you’re going for storage, definitely buy enterprise gear; if you’re going for raw CPU/GPU compute, you should be fine with consumer hardware.

        If you just want AM4/AM5 Ryzen chips, ASRock Rack makes some good boards with IPMI.

        Here’s their X570 board; you can browse their site, and they also have AM5 boards. They don’t have three gimped x16@x1 slots.

        • MigratingtoLemmy@lemmy.worldOP · 1 year ago

          The problem with enterprise hardware is:

          1. If it’s old, it’s not efficient.
          2. If it’s new, it’s prohibitively expensive.

          Consumer hardware solves both of these problems. Yes, we don’t have iLO, but if someone is really motivated, they can use PiKVM. I have yet to figure out if I can run PiKVM without the HATs on a different SBC, but I think it can be done.

          For me personally, I’ll be using said board in a NAS. With this board, I would no longer need an LSI HBA hogging my x16 port, which means if I ever decide to train ML models, I can get a GPU for myself.

          The ASRock Rack series is exciting, but from what I have seen, that line of motherboards is really expensive. I’ll keep a lookout though; if I find one of their motherboards under $150 which fits my needs, it will become my number 1 choice.

          I do not see why I absolutely need ECC memory for a NAS. I’m not going to store PBs of media/documents; it’ll likely be under 30TB (that’s a conservative estimate). I thought ECC memory was a nice-to-have (this is no enterprise workload).

          Cheers

          • krolden@lemmy.ml · 1 year ago

            If it’s old, it’s not efficient.

            Efficiency is relative. I’m not suggesting you get a high-clock, many-core server chip (though you could technically limit the clock speed and TDP so it uses about as much power as a typical desktop), as there are plenty of low-power options that are ‘old’ (read: ~4 years old is not that old). Maybe look into some Xeon-D embedded boards solely for your storage system. Many of those boards were made specifically for storage appliances. They can also be had pretty cheap on eBay or wherever.
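
            On the limit-the-TDP point, here’s a minimal sketch of one way to do it on an Intel box under Linux, through the intel_rapl powercap sysfs interface (my own example: it assumes that driver is loaded, needs root to write, and the exact sysfs layout can vary between kernels):

            ```python
            # Sketch: cap the CPU package's long-term power limit via Linux powercap.
            # Assumes /sys/class/powercap/intel-rapl:0 exists (intel_rapl driver).
            RAPL = "/sys/class/powercap/intel-rapl:0"
            LIMIT = f"{RAPL}/constraint_0_power_limit_uw"   # microwatts

            def read_limit_w() -> float:
                with open(LIMIT) as f:
                    return int(f.read()) / 1e6

            def set_limit_w(watts: float) -> None:
                with open(LIMIT, "w") as f:                 # needs root
                    f.write(str(int(watts * 1e6)))

            print(f"current package limit: {read_limit_w():.0f} W")
            set_limit_w(65)                                 # desktop-ish 65 W cap
            print(f"new package limit:     {read_limit_w():.0f} W")
            ```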

            If it’s new, it’s prohibitively expensive.

            I’d say $120 is too expensive for this motherboard. It seems like it should be ~$60 with those specs, not to mention it being last gen. So even though you’re buying new, you still have an upgrade ceiling, so why not buy gear that’s a year or two older with more features and expansion?

            Consumer hardware solves both of these problems. Yes, we don’t have iLO, but if someone is really motivated, they can use PiKVM. I have yet to figure out if I can run PiKVM without the HATs on a different SBC, but I think it can be done.

            FYI, iLO is HP’s out-of-band (IPMI) implementation. PiKVM is definitely cool, but it’s just adding more cost and another point of failure to your setup.

            For me personally, I’ll be using said board in a NAS. With this board, I would no longer need an LSI HBA hogging my x16 port, which means if I ever decide to train ML models, I can get a GPU for myself.

            If you want to train models and do other GPU compute stuff like that, I would definitely shoot for a more current-gen box just for that. In my (good) opinion, you should not run heavy compute loads on a server that is also serving/backing up your data,

            I do not see why I absolutely need ECC memory for a NAS. I’m not going to store PBs of media/documents; it’ll likely be under 30TB (that’s a conservative estimate). I thought ECC memory was a nice-to-have (this is no enterprise workload).

            ESPECIALLY if you’re not going to use ECC memory. No reason to put your important data at risk of corruption like that. I highly recommend holding out for something simple with DDR5 and a discrete GPU of your choosing for any actual compute workloads like that.

            Prices for newer hardware like this may fall before you’re even ready to build this system, so keep that in mind. You’ll also have a much easier time selling it than older-gen hardware in the future if you change your mind about whatever.

            A 2230 E-key slot.

            From the original post. Why would you want to do this in a server? If you got a different board with slots that weren’t x1, you could just get a 2.5GbE card… or, you know, 10Gbit.

            • MigratingtoLemmy@lemmy.worldOP · 1 year ago

              Thank you for the tip; I will look into Xeon-D embedded motherboards. I will not be running very heavy loads, other than a Suricata instance as an IDS/traffic analyser, which might be heavy (I would love suggestions for something lighter on compute). The idea of training ML models was just a remote possibility.

              My apologies, I kept saying iLO/iDRAC when I meant IPMI.

              Why do you suggest having separate devices for storage/compute?

              My idea was to run FreeBSD on a ZFS mirror of NVMe drives as the base, and run VMs/Jails on a pool of SATA SSDs. These would exist alongside HDDs but would otherwise not affect their functioning. In this scenario, how does having 2 machines make my infrastructure more reliable, other than covering the case where FreeBSD itself isn’t running as intended?

              Have you had instances of memory corruption because you didn’t use ECC? I was under the impression from r/selfhosted that this problem was blown out of proportion.

              The reason I mentioned the E-key slot is that, this way, I don’t have to use a PCIe slot for the adapter and can keep it free for something else. I have no need for 10GbE.

              Thanks!

              • krolden@lemmy.ml · 1 year ago

                Why do you suggest having separate devices for storage/compute?

                Because storage never needs to be upgraded beyond drive capacities, unless you need a bunch of NVMe storage, which requires more PCIe lanes. The only reason you should have to change a board or CPU in a storage server is if it dies. If you need a new piece of hardware for its new features, it would be much easier to upgrade a different system rather than taking your storage offline to do it. Whatever GPU you put in there now is going to be dated in a couple of years when you may want to upgrade.

                Have you had instances of memory corruption because you didn’t use ECC? I was under the impression from r/selfhosted that this problem was blown out of proportion.

                No, because I don’t run big storage pools on desktop hardware. You may be able to run non-ECC memory for a long time and not get any data corruption, but that doesn’t mean you won’t. Also, it’s not always obvious when there’s corruption, especially in older data.
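
                A toy example of why old corruption stays invisible (my own sketch, with a made-up /tank/media path; it is not anything ZFS-specific): unless something re-reads the data and compares it against a known-good checksum, nothing ever complains.

                ```python
                # Toy bit-rot check: record SHA-256 hashes once, re-verify later.
                # A checksumming filesystem does this on every read and during
                # scrubs; without that, old files are never re-checked.
                import hashlib, pathlib

                def hash_file(p: pathlib.Path) -> str:
                    return hashlib.sha256(p.read_bytes()).hexdigest()

                def snapshot(root: str) -> dict:
                    return {str(p): hash_file(p)
                            for p in pathlib.Path(root).rglob("*") if p.is_file()}

                def verify(manifest: dict) -> None:
                    for name, old in manifest.items():
                        p = pathlib.Path(name)
                        if not p.is_file() or hash_file(p) != old:
                            print(f"changed or missing: {name}")  # rot, or a legit edit

                manifest = snapshot("/tank/media")  # run once after writing the data
                # ...months later...
                verify(manifest)                    # silent unless something differs
                ```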

                The reason I mentioned the E-key slot is that, this way, I don’t have to use a PCIe slot for the adapter and can keep it free for something else. I have no need for 10GbE.

                What are you going to do with those two x1 slots? They’re really not good for anything other than USB and 1Gbit (maybe 2.5Gbit?) networking, or maybe a sound card. Those M.2 adapters are more suited to mini PCs that don’t have any other PCIe expansion options. Not saying you can’t or shouldn’t do it, but why? Especially when 10Gbit options are much cheaper if you buy used.
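
                For rough numbers on that (my own back-of-the-envelope math, assuming the x1 slots hang off the chipset at PCIe 3.0, which is typical for B550, and ignoring protocol overhead):

                ```python
                # One PCIe 3.0 lane: 8 GT/s with 128b/130b encoding.
                lane_gbps = 8 * 128 / 130        # ~7.9 Gbit/s of usable bandwidth

                for nic_gbps in (1, 2.5, 10):
                    verdict = "fits" if nic_gbps <= lane_gbps else "does NOT fit"
                    print(f"{nic_gbps:>4} GbE on one Gen3 x1 lane: {verdict} "
                          f"({nic_gbps / lane_gbps:.0%} of the lane)")
                ```

                So a 2.5GbE card is comfortable in a Gen3 x1 slot; 10GbE is where a single lane runs out, which is another argument for putting a 10Gbit NIC somewhere with more lanes.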