• Viri4thus@feddit.org · 2 months ago

    Intel has had a node disadvantage relative to Zen since the 8700K… From then on the entire point is moot.

    • Buffalox@lemmy.world · 2 months ago

      > From then on the entire point is moot.

      No it’s not, because the point is that design matters. When Ryzen originally launched, it was far more energy efficient than Intel’s Skylake parts. And Intel had the node advantage.

      • Viri4thus@feddit.org · edited 2 months ago

        https://www.techpowerup.com/review/intel-core-i7-8700k/16.html

        https://www.techpowerup.com/cpu-specs/core-i7-6700k.c1825

        Ryzen was not more efficient than Skylake. In fact, the 1500X actually consumed more energy in nT (multi-threaded) workloads than Skylake while performing worse, which is consistent with what I wrote. What Ryzen was REALLY efficient at was being almost as fast as Skylake for a fraction of the price.

        https://www.notebookcheck.net/Apple-M3-Max-16-Core-Processor-Benchmarks-and-Specs.781712.0.html

        Will you look at that: in nT workloads the M3 Max is actually less efficient than competitors like the Ryzen 7000HS series. The first N3 products had less-than-ideal yields, so Apple went with a less dense node, losing the process advantage for one generation. That shows in its laughable nT performance/watt. Design does matter, however: in 1T (single-threaded) workloads Apple’s very wide design excels, performing very well while drawing less power, which is what I’ve been saying since this thread started.
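
        To make the perf/watt argument concrete, here is a minimal sketch of the comparison being made. The numbers are invented placeholders, not the TechPowerUp or Notebookcheck figures linked above:

        ```python
        # Perf/watt comparison sketch: efficiency = benchmark score / package power.
        # All numbers below are hypothetical placeholders, NOT data from the linked reviews.

        def perf_per_watt(score: float, package_watts: float) -> float:
            """Benchmark score divided by average package power draw."""
            return score / package_watts

        # (score, average package watts) under a sustained nT load -- invented values:
        cpus = {
            "wide design, low clocks":    (1000.0, 40.0),
            "narrow design, high clocks": (1100.0, 90.0),
        }

        for name, (score, watts) in cpus.items():
            print(f"{name}: {perf_per_watt(score, watts):.1f} points/W")

        # The second chip wins raw nT throughput yet loses badly on perf/watt --
        # the same distinction drawn between the 1T and nT comparisons above.
        ```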

      • barsoap@lemm.ee · 2 months ago

        Not to mention ARM chips, which by and large were/are more efficient than x86 on the same node because of their design: ARM chip designers have been doing the efficiency thing since forever, owing to the mobile platform, while desktop designers only got into that game quite late. There are also some wibbles like ARM instruction decoding being inherently simpler, but big picture that’s negligible.

        Intel just really, really has a talent for not seeing the writing on the wall, while AMD made a habit of it out of sheer necessity just to survive. Bulldozer nearly killed them (and the idea itself wasn’t even bad, it just didn’t work out), while Intel is tanking hit after hit after hit.