• sugar_in_your_tea@sh.itjust.works

    Linux works great on ARM. I just want something similar to most mini-ITX boards (4x SATA, 2x mini-PCIe, and RAM slots), and I’ll convert my DIY NAS to ARM. But there just isn’t anything between RAM-limited SBCs and datacenter ARM boards.

    • Dudewitbow@lemmy.zip

      ARM is a mixed bag. IIRC, the GPU on the Snapdragon X Elite is currently disabled on Linux, and consumer support depends on how well the hardware manufacturer supports the platform when the driver is closed source. In Qualcomm’s case, the history doesn’t look great.

      • sugar_in_your_tea@sh.itjust.works

        Eh, if they give me a PCIe slot, I’m happy to use that in the meantime. My current NAS uses an old NVIDIA GPU, so I’d just move that over.

        • Zangoose@lemmy.world

          Apparently (from another comment on an ARM thread a few weeks ago) consumer GPU BIOSes contain x86 code that gets executed on the host CPU during boot, so getting full support on ARM isn’t as simple as swapping the card over to a new motherboard. There are ways to hack around it (some people got AMD GPUs working on a Raspberry Pi 5 through its PCIe lanes with a bunch of adapters), but it’s pretty unreliable.
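
          If you want to check a specific card, the PCI Firmware spec gives every expansion ROM image a code-type byte in its “PCIR” data structure (0x00 = legacy x86 BIOS, 0x03 = EFI). Here’s a minimal sketch that walks a dumped VBIOS and reports each image’s type; the device address in the docstring is just an example, and even a type-3 EFI image may be built only for x64, so treat this as a hint rather than a guarantee:

          ```python
          #!/usr/bin/env python3
          """List the code type of each image in a PCI expansion ROM dump.

          Example dump (device address is hypothetical):
            echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/rom
            sudo cat /sys/bus/pci/devices/0000:01:00.0/rom > vbios.rom
          """
          import sys

          # Code types defined for the PCIR data structure (PCI Firmware spec).
          CODE_TYPES = {0x00: "x86 legacy BIOS", 0x01: "Open Firmware",
                        0x02: "PA-RISC", 0x03: "EFI"}

          def images(rom: bytes):
              off = 0
              while off + 0x1a <= len(rom) and rom[off:off + 2] == b"\x55\xaa":
                  # offset 0x18 of the ROM header points to the PCIR structure
                  pcir = off + int.from_bytes(rom[off + 0x18:off + 0x1a], "little")
                  if rom[pcir:pcir + 4] != b"PCIR" or pcir + 0x16 > len(rom):
                      break
                  yield CODE_TYPES.get(rom[pcir + 0x14], "unknown")
                  if rom[pcir + 0x15] & 0x80:   # indicator bit 7: last image
                      break
                  # image length is stored in 512-byte units
                  off += int.from_bytes(rom[pcir + 0x10:pcir + 0x12], "little") * 512

          for i, kind in enumerate(images(open(sys.argv[1], "rb").read())):
              print(f"image {i}: {kind}")
          ```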

          • sugar_in_your_tea@sh.itjust.works

            Yeah, there are some software issues that need to be resolved, but the bigger issue AFAIK is having the hardware to handle it. The few ARM devices with a PCIe slot often don’t fully implement the spec, such as power delivery. Because of that, driver work just doesn’t happen, since nobody can realistically use it.

            If they provide a proper PCIe slot (8-16 lanes, on-spec power delivery, etc.), getting the drivers updated should be relatively easy (months, not years).
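
            A quick sanity check, for what it’s worth: the kernel exposes the negotiated link in sysfs, so you can see whether a slot actually trains at the width the board advertises. A minimal sketch (note that link speed can legitimately drop at idle to save power, so check under load):

            ```python
            #!/usr/bin/env python3
            """Compare negotiated vs. maximum PCIe link for every device via sysfs."""
            from pathlib import Path

            def read(path: Path) -> str:
                try:
                    return path.read_text().strip()
                except OSError:
                    return ""

            for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
                cur_w = read(dev / "current_link_width")
                if not cur_w:
                    continue  # device doesn't expose PCIe link attributes
                max_w = read(dev / "max_link_width")
                cur_s = read(dev / "current_link_speed")
                max_s = read(dev / "max_link_speed")
                note = "" if cur_w == max_w else "  <-- trained below max width"
                print(f"{dev.name}: x{cur_w} @ {cur_s} (max x{max_w} @ {max_s}){note}")
            ```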

    • Justin@lemmy.jlh.name

      Datacenter CPUs are actually really good for NASes considering the explosion of NVMe storage. Most consumer CPUs are limited to just 5 M.2 drives and a 10Gbit NIC, but a server motherboard opens up 10+ drives. Something cheap like a first-gen EPYC motherboard gives you a ton of flexibility and speed if you’re ok with the idle power consumption.
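
      The lane math is easy to sanity-check yourself. These counts are illustrative round numbers rather than exact board specs, but they show why consumer platforms run out of room so fast:

      ```python
      #!/usr/bin/env python3
      """Back-of-envelope PCIe lane budget: how many x4 NVMe drives fit?

      Lane counts are illustrative round numbers, not exact board specs."""

      PLATFORMS = {                     # usable PCIe lanes from the CPU socket
          "consumer (e.g. AM4)": 24,
          "server (e.g. EPYC gen 1)": 128,
      }
      NIC_LANES = 8                     # a fast NIC typically takes x4..x8
      LANES_PER_NVME = 4                # each M.2 drive wants a full x4 link

      for name, lanes in PLATFORMS.items():
          drives = (lanes - NIC_LANES) // LANES_PER_NVME
          print(f"{name}: {lanes} lanes -> ~{drives} NVMe drives plus the NIC")
      ```

      (The couple of extra shared slots hanging off the chipset on consumer boards are what gets you to roughly five.)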

      • sugar_in_your_tea@sh.itjust.works

        > if you’re ok with the idle power consumption

        I’m kind of not. I don’t need a ton of drives, and I certainly don’t need them to be NVMe. I just want 2-4 SATA drives for storage and 1-2 NVMe drives for boot, and enough RAM to run a bunch of services w/o having to worry about swapping. Right now my Ryzen 1700 is doing a fine job, but I’d be willing to sacrifice some performance for energy savings.

      • sugar_in_your_tea@sh.itjust.works

        Eh, it looks like ARM laptops are coming along. I give it a year or so for the process to be smooth.

        For servers, AWS Graviton seems to be pretty solid. I honestly don’t need top performance and could probably get away with a Quartz64 SBC; I just don’t want to worry about RAM and would really like 16GB. I just need to run a dozen or so Docker containers with really low load, and I want to do that with as little power as I can get away with for minimum noise. It doesn’t need to transcode or anything.

        • Justin@lemmy.jlh.name

          ARM laptops don’t support ACPI, which makes them really hard for Linux to support. Having to go back two years to find a laptop with Wi-Fi and GPU support on Linux isn’t practical. If Qualcomm and Apple officially supported Linux like Intel and AMD do, it would be a different story. As it is right now, even Android phones are forced to use closed-source blobs just to boot.

          Those numbers from Amazon are misleading. Linus Torvalds actually builds on an Ampere machine, but they don’t do that well in benchmarks.

          https://www.phoronix.com/review/graviton4-96-core

          • sugar_in_your_tea@sh.itjust.works

            AWS’ benchmark is about Lambda functions, not compile workloads, which are quite different beasts. Lambdas are about running lots of small, independent scripts (so task switching), whereas compiling is about sustained heavy CPU work (so feeding caches). Server workloads tend to be more of the former than the latter.

            That said, I’m far less interested in raw performance and way more interested in power efficiency at idle and low utilization. I’m very rarely going to be pushing any kind of meaningful load on it, and when I do, I don’t mind if it takes a little longer, provided I’m saving a lot of electricity in the meantime.
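
            If you want a rough idle number for the current box, the kernel’s powercap (RAPL) interface reports CPU package energy on both Intel and recent AMD parts. It’s package power only (drives, fans, and PSU losses aren’t counted, so wall power will be higher), and the zone name can vary by kernel; a minimal sketch:

            ```python
            #!/usr/bin/env python3
            """Average CPU package power over a sampling window, via powercap/RAPL.

            Usually needs root. Package power only; wall power will be higher."""
            import time
            from pathlib import Path

            ZONE = Path("/sys/class/powercap/intel-rapl:0")  # package 0; name may vary
            SECONDS = 10

            def energy_uj() -> int:
                return int((ZONE / "energy_uj").read_text())

            start = energy_uj()
            time.sleep(SECONDS)
            delta = energy_uj() - start
            if delta < 0:  # counter wrapped; add one full counter range
                delta += int((ZONE / "max_energy_range_uj").read_text())
            print(f"{(ZONE / 'name').read_text().strip()}: "
                  f"{delta / SECONDS / 1e6:.2f} W average over {SECONDS}s")
            ```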

        • CancerMancer@sh.itjust.works

          Man, so many SBCs come so close to what you’re looking for, but none of them has that level of I/O. I was just looking at the ZimaBlade/ZimaBoard and they don’t quite get there either: 2x SATA and a PCIe 2.0 x4 slot. The ZimaBlade has Thunderbolt 4; maybe you can squeeze a few more drives in there with a separate power supply? Seems mildly annoying, but on the other hand, their SBCs only draw about 10 watts.

          Not sure what your application is, but if you’re open to clustering them, that could be an option.

          • sugar_in_your_tea@sh.itjust.works

            Here are my actual requirements:

            • 2 boot drives in a mirror - M.2 or SATA is fine
            • 4 NAS HDDs - will be SATA, but could use PCIe expansion; currently have 2x 8TB 3.5" HDDs, want the flexibility to add 2 more
            • minimum CPU performance - was fine on my Phenom II X4, so not a high bar, but the Phenom II X4 has better single-core performance than the ZimaBlade

            Services:

            • I/O heavy - Jellyfin (no live transcoding), Collabora (and Nextcloud/ownCloud), Samba, etc.
            • CPU heavy - CI/CD for Rust projects (relatively infrequent and not a hard requirement), game servers (Minecraft for now), speech processing (maybe? looking to build an Alexa alternative)
            • others - Actual Budget, Vaultwarden, Home Assistant

            The ZimaBlade is probably good enough (I’d need to figure out SATA power); I’ll have to look at some performance numbers. I’m a little worried since it seems to be worse than my old Phenom II X4, which was the previous CPU in this machine. I’m currently using my old Ryzen 1700, but I’d be fine downgrading a bit if it meant significantly lower power usage. I’d really like to put this under my bed, and it needs to be very quiet for that.

            • CancerMancer@sh.itjust.works

              Those are tough requirements to meet; I’m not sure there’s a low-power CPU that can do it all. You would likely need to cluster some devices, but that means you need a separate NAS anyway, which kind of defeats the purpose in your case.

      • conciselyverbose@sh.itjust.works

        Servers being slow is usually fine. They already run at much lower clocks than consumer chips because power efficiency is almost all that matters.