• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: August 4th, 2023

  • The thing is, Steam’s market dominance is the result of user choice rather than anticompetitive strategies or a lack of alternatives. Steam doesn’t do exclusives, they don’t charge you for external sales, they don’t even prevent you from selling Steam keys outside the platform or users from launching non-Steam games in the client. The only real restriction is that access to Steam services requires a license on the active Steam account. Even Valve-produced devices like the Steam Deck can install from other stores.

    Sure, dominance is bad in an abstract, theoretical way, and it’d be nice if GOG, itch.io, etc. were more competitive, but Steam is dominant because consumers actively choose it.




  • No, the “non-fungibility” simply means that anyone who mints an NFT pointing to the same link ends up with a token that is distinct from yours, even if the actual URL is identical. Both NFTs can also be traced back to when they were created/minted because they’re on a blockchain, a property called provenance. If the authentic token came from a well-known minting, you can establish that your token is “authentic” and the copy is a recreation, even if the actual link (or other content) is completely identical.

    Nothing about having the “authentic” token would give you actual legal rights though.
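
    Circling back to the same-link point, here’s a rough sketch of what a token record boils down to. This is purely illustrative Rust with made-up field names and addresses, not any real chain’s actual schema:

    ```rust
    // Hypothetical, simplified view of what an NFT records on-chain.
    // Field names and addresses are illustrative, not a real chain's schema.
    #[derive(Debug)]
    struct Token {
        contract: &'static str,     // the minting contract ("collection") it came from
        token_id: u64,              // unique only within that contract
        minted_block: u64,          // when it was minted -- the provenance trail
        metadata_url: &'static str, // frequently just a link to an image
    }

    fn main() {
        // The "authentic" token from the well-known mint...
        let original = Token {
            contract: "0xWellKnownCollection",
            token_id: 42,
            minted_block: 12_000_000,
            metadata_url: "https://example.com/art/42.png",
        };

        // ...and a later copy that points at exactly the same URL.
        let copy = Token {
            contract: "0xSomeoneElsesContract",
            token_id: 7,
            minted_block: 15_500_000,
            metadata_url: "https://example.com/art/42.png",
        };

        // The linked content is identical, but the tokens are still distinct:
        // provenance (which contract minted it, and when) tells them apart.
        assert_eq!(original.metadata_url, copy.metadata_url);
        assert_ne!(original.contract, copy.contract);
        println!("{original:?}\n{copy:?}");
    }
    ```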



  • No. Nvidia will be licensing the designs to MediaTek, who will build the ASIC/silicon into their scaler boards. That solves a few different issues. For one, no FPGAs involved = big cost savings. For another, MediaTek can do much higher volume than Nvidia, which brings costs down. The licensing fee is also going to be significantly lower than the combined BOM cost + licensing fee they currently charge. I assume Nvidia will continue charging for certification, but that may lead to a situation where many displays are G-Sync compatible and simply don’t advertise it on the box except on high-end SKUs.






  • You can sometimes deal with performance issues by caching, if you want to trade one hard problem for another (cache invalidation). There are plenty of cases where that’s not a solution though. I recently had a 1 ns time budget on a change. That kind of optimization is somewhere between fun and impossible in Python, and straightforward to accomplish in Rust or C/C++ once you’ve set up your measurements.
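
    For what it’s worth, the caching trade-off looks roughly like this toy memoization sketch in Rust (a made-up example, not from the change I mentioned): storing results is the easy part, knowing when to throw them away is the hard part.

    ```rust
    use std::collections::HashMap;

    /// Minimal memoizing cache: trades repeated computation for the new
    /// problem of knowing when an entry has gone stale.
    struct Cache {
        entries: HashMap<u64, u64>,
    }

    impl Cache {
        fn new() -> Self {
            Cache { entries: HashMap::new() }
        }

        /// Return the cached value for `key`, computing and storing it on a miss.
        fn get_or_compute(&mut self, key: u64, compute: impl Fn(u64) -> u64) -> u64 {
            *self.entries.entry(key).or_insert_with(|| compute(key))
        }

        /// The easy half of cache invalidation; the hard half is knowing
        /// every place that must call this when the underlying data changes.
        fn invalidate(&mut self, key: u64) {
            self.entries.remove(&key);
        }
    }

    fn main() {
        let expensive = |n: u64| -> u64 { (1..=n).sum() }; // stand-in for slow work
        let mut cache = Cache::new();

        println!("{}", cache.get_or_compute(1_000, expensive)); // computed
        println!("{}", cache.get_or_compute(1_000, expensive)); // served from cache
        cache.invalidate(1_000); // must happen whenever the inputs change
    }
    ```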









  • WSL is just a well-integrated VM running Linux. It’s mainly intended for CLI tools, but there’s nothing preventing you from, e.g., running an X server and having programs appear in the Windows “window manager”.

    The super key is largely inaccessible though. It’s tied very deeply into Windows, which is still the one talking to the keyboard.


  • I’m not assuming it’s going to fail; I’m just saying that the exponential gains seen in early computing are going to be much harder to come by, because we’re not starting from the same grossly inefficient place.

    As an FYI, most modern computers are modified Harvard architectures, not von Neumann machines. There are other architectures being explored that are even more exotic, but I’m not aware of any that are massively better on the power side (vs simply being faster). The acceleration approaches that I’m aware of (e.g. analog or optical accelerators) are also totally compatible with traditional Harvard/von Neumann architectures.