Cost-efficient server that performs. x86 is almost at its scaling limit with the 5 nm process: unmatched 5 GHz per-core performance, optimal for running cryptocurrency validators. Great IPMI.
The only server motherboard that lets you use a consumer-grade Zen 4 CPU. For workloads that are not memory-bound, a 16-core Ryzen 9 7950X will outrun any 16-core EPYC, at half the price or less.
VGA out and 10Gbit NICs were worth the upgrade to the NT edition.
IPMI alone has to be the biggest feature! The board looks built to last decades.
- 128 PCIe lanes
- 2 onboard M.2 slots with 22110 support
- SlimSAS x8 can run 2x U.2 NVMe drives with the proper cable
- Updated IPMI interface (compared to my old X11 board)
- Setup was mostly easy. Supermicro including an 8-pin PCIe to 8-pin EPS12V adapter was a nice touch, since lots of high-wattage PSUs don't have three CPU connectors for whatever reason. All of the issues I hit were with design choices in the Fractal Define 7 XL case I used.
- Came updated to the most recent BIOS and BMC versions.
- Everything is very fast.
- Despite the lack of fan-control options, the default settings do a good enough job. With a Noctua NH-U14S TR4-SP3 cooler and two fans, a three-hour Prime95 torture test never got the CPU to throttle or raised the temperature of anything but the CPU much. The BMC only pushed the on-CPU fans to max speed when all 32 cores were running the small-FFT portion at the same time, and temps hit around 88 °C for a second before cooling started dropping them again. I'm not going to bother with liquid cooling; it would be solving a problem that doesn't exist in this situation.
- The board is large enough, and the headers are placed close enough to the edge, that even stupidly out-of-spec oversized GPUs like the ASRock 6900XT OC leave just enough room for (in my case) the USB-C front-panel connector to be plugged in. This has been an ongoing nightmare even with more normally sized cards (among many, many others, mostly Asus-inflicted). Three of the M.2 slots can't be reached with a GPU in most locations, but honestly, how often do those get swapped out? If I felt the need for more easily changeable internal M.2 drives, I could just get a $20 PCIe 4.0 x16 -> 4x x4 NVMe card and fill some of the gigantic number of PCIe lanes with it, or get an M.2 -> U.2 housing.
- Installing the Threadripper Pro wasn't quite as big a nightmare as I was expecting, but my last computer was LGA 2011-v3, where the procedure was "set the processor on top of the pins and force two tension levers closed while wondering if the horrible crunching sound was normal." Asus was also shipping boards with pre-bent pins back then. The most complicated part this time was wondering whether I'd missed the click of the torque driver that ships with the processor (you can't miss it; it's louder than heck and the driver slips).
- Despite the manual saying onboard VGA was the default, it booted to a GPU in a PCIe slot just fine. The first boot took a very long time, so I kind of gave up and ordered a VGA cable, not having owned one since some time in the '90s. Oh well. I can show it to friends and say, "Some of the most expensive servers you can get still use this outdated garbage as a video interface for some reason."
- I was a little worried because it seemed to be missing from the manual, but any x16 slot can be bifurcated to x4/x4/x4/x4 (see the enumeration sketch at the end of this review). Having an x8/x8 option would be nice on the off chance I need lanes sent over SlimSAS, but I don't see it happening.

No idea what I'll possibly do in the next decade that needs more than (or all of) 512 GB of registered ECC. I like having a large excess: after a few days of uptime, Windows is basically running everything on my system from a RAM drive, and the caching system is fantastic. It'll take roughly that long for DDR5 to get affordable, if it ever does. From the leaked EPYC 9000-series memory benchmarks I saw, all of the dual-channel consumer boards basically have broken DDR5 controllers right now, which doesn't inspire confidence...
Base-speed ECC sticks, running at who-knows-what crazy latencies in 12-channel configurations, should in no way have almost 8 times the memory bandwidth of the highest-speed overclocked dual-channel configurations, but there you are. There's usually good scaling from higher channel counts, but it's not usually 1:1 comparing the same speed, let alone against stupidly overclocked XMP garbage (wonder if that's the problem?). My two favorite parts of this system were knowing I'll never have to deal with XMP memory again, and knowing I won't have to hunt for hours to find hardware without RGB lights as long as I stay away from consumer devices. ASRock is the last bastion of sanity in desktop GPUs, providing a hardware RGB kill switch so you never have to have it on. I actually had to do a custom beige paint job because nobody makes a beige case any more (I was feeling nostalgic, and annoyed that it didn't exist), and finding a suitable case without tempered glass or a built-in RGB strip took almost a week. This incredibly dull-looking motherboard was a breath of fresh air. :D Supermicro will apparently fix the loud PCH fan issue if you're willing to send the board in.
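For anyone who wants to sanity-check that bandwidth claim, here's the back-of-the-envelope math as a quick script. The DDR5-4800 and DDR5-7200 speeds are my assumptions for a typical registered base spec versus a high-end XMP kit; plug in your own numbers:

```python
# Theoretical peak DDR5 bandwidth: channels x transfer rate (MT/s) x 8 bytes
# per transfer (64-bit data bus per channel). Speeds are assumptions:
# DDR5-4800 for registered base spec, DDR5-7200 for a high-end XMP kit.

def peak_bandwidth_gbs(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 8 / 1000

server = peak_bandwidth_gbs(channels=12, mt_per_s=4800)  # 460.8 GB/s
desktop = peak_bandwidth_gbs(channels=2, mt_per_s=7200)  # 115.2 GB/s

print(f"12ch DDR5-4800: {server:.1f} GB/s")
print(f" 2ch DDR5-7200: {desktop:.1f} GB/s")
print(f"theoretical ratio: {server / desktop:.1f}x")     # ~4x, not the ~8x in the leaks
```

On paper the gap should be about 4x, so a measured ~8x means the dual-channel platforms are delivering roughly half their theoretical bandwidth, which is exactly why those leaked numbers look broken to me.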
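And the enumeration sketch I mentioned for bifurcation: after setting a slot to x4/x4/x4/x4 and dropping in one of those quad-M.2 cards, this is a rough way to confirm all four drives showed up on a Linux boot. Nothing here is Supermicro-specific; it just filters plain lspci output:

```python
# Count the NVMe controllers visible on the PCIe bus after enabling
# x4/x4/x4/x4 bifurcation in the BIOS: run plain lspci and filter on the
# NVMe class description.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
nvme = [line for line in out.splitlines() if "Non-Volatile memory controller" in line]

print(f"{len(nvme)} NVMe controller(s) found:")
for line in nvme:
    print(" ", line)
# A drive missing here (but present when cabled directly) usually means the
# slot's bifurcation setting is wrong, not a dead drive.
```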
Easy to set up.
All the features you could ever want. Has sensors for days! HTML5 WebUI for Remote Console to your OS!
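If you want those sensors in a script instead of the WebUI, the BMC also speaks Redfish. A minimal sketch, assuming the standard Redfish Thermal resource under chassis ID 1 (the host address and credentials below are placeholders; check /redfish/v1/Chassis on your board for the actual ID):

```python
# Pull temperature and fan readings from the BMC's standard Redfish Thermal
# resource. BMC_HOST and the credentials are placeholders; chassis ID "1" is
# an assumption.
import requests

BMC_HOST = "https://192.0.2.10"   # hypothetical BMC address
AUTH = ("ADMIN", "changeme")      # replace with your BMC credentials

resp = requests.get(
    f"{BMC_HOST}/redfish/v1/Chassis/1/Thermal",
    auth=AUTH,
    verify=False,   # BMCs typically ship with self-signed certificates
    timeout=10,
)
resp.raise_for_status()
thermal = resp.json()

for t in thermal.get("Temperatures", []):
    print(f"{t.get('Name')}: {t.get('ReadingCelsius')} C")
for fan in thermal.get("Fans", []):
    print(f"{fan.get('Name')}: {fan.get('Reading')} {fan.get('ReadingUnits', 'RPM')}")
```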
Does what it says on the tin
I went with an i7-13700K, as I heard there were power limitations at the upper end of the i9-13900K(S) variants. This thing is fast, stable (ECC helps), and has lots of expansion compared to its X12 predecessor. I am using both the onboard ASPEED graphics and the Intel iGPU by setting Max TOLUD to between 1 and 2 GB and enabling onboard as the default video.
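To confirm both adapters actually enumerate after the TOLUD change, here's a rough check from a Linux environment. The vendor IDs are the standard ASPEED and Intel ones; the rest is just lspci filtering:

```python
# List VGA-class PCI devices so you can confirm both the ASPEED BMC graphics
# (vendor ID 1a03) and the Intel iGPU (vendor ID 8086) enumerated.
import subprocess

out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "VGA compatible controller" in line:
        print(line)
# Expect one [1a03:...] entry (ASPEED) and one [8086:...] entry (Intel) once
# Max TOLUD leaves enough low address space for both apertures to map.
```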
Great product! 9.8 out of 10. Works with all the server and NAS OSes: Windows Server, TrueNAS, Linux, OpenMediaVault, Ubuntu Server, Rockstor, etc. No need for a graphics card, and you can control it remotely, even power it on and off! :) No need for a monitor, and it doesn't consume a lot of power. It runs 24/7. Price vs. performance: 10 out of 10. Very fast and powerful. Nice one, ASRock!