I do not give unfederated or proprietary entities permission to import or use my content. Content on this account is published CC all rights reserved.

  • 1 Post
  • 21 Comments
Joined 1 year ago
Cake day: June 1st, 2023


  • Right, most of the complaints people have about Zuckerberg are that he’s a stereotypical tech bro CEO lacking a moral compass.

    People calling Zuckerberg a lizard person or robot mostly come from how he talked and acted when under intense public questioning by legislators regarding user privacy and their business model. That’s a high pressure situation where he was coached on what he could and could not say by legal to minimize the fallout, so his awkward expressions and stilted speech are understandable.

    People don’t like him because he’s a ruthless CEO, and that requires some level of sociopathy to pull off. Musk, on the other hand, actively antagonizes people and seems to thrive on controversy. His primary goal seems to be ego-driven, unlike Zuckerberg, who’s solely in it for the money.


  • I use my HP printer infrequently enough that every time I booted up my inkjet, I had to put it through a print head cleaning cycle. I’d be surprised if I got more than 20 sheets of paper per cartridge due to the wasted ink, and the dang thing malfunctioned frequently even after cleaning (streaks, blots, complaining about missing colors when printing b/w, etc).

    After switching to a Brother mono laser, I haven’t had to do any maintenance in 3 years and it’s still on the toner cartridge it came with.

    This is the way.


  • Eh, you can improve reporting, time usage, and statistics all you want. It won’t help people stop making stupid short-sighted decisions. If it isn’t middle management, it’ll be the people controlling the AIs that replace them.

    CEO: “AI, give me a plan to improve profits by at least 10% in the next quarter.”

    AI: “<insert plan>. Note: enacting this plan will cause talent attrition and there is a 70% chance of -50% revenue over the following 5 years.”

    CEO: “Sounds great, I’m retiring next year!”

    The people up top have plenty of information on how to run a long-term successful business, but still choose to make illogical decisions which screw them over in the long term. Changing the source of data to an AI just means that the CEO can ignore any feedback or metrics which don’t agree with their internal model and incentive structure.





  • I don’t think this will ever happen. The web is more than a network of changing documents. It’s a network of portals into systems which change state based on who is looking at them and what they do.

    In order for something like this to work, you’d need to determine what the “official” view of any given document is, but the reality is that most documents are generated on the spot from many sources of data. And they aren’t just generated on the spot, they’re Turing complete documents which change themselves over time.

    It’s a bit of a quantum problem - you can’t perfectly store a document while also allowing it to change, and the change in many cases is what gives it value.

    Snapshots, distributed storage, and change feeds only work for static documents. Archive.org does this, and while you could probably improve the fidelity or efficiency, you won’t be able to change the underlying nature of what it is storing.

    If all of Reddit were deleted, it would definitely be useful to have a publicly archived snapshot of it. Doing so is definitely possible, particularly if they decide to cooperate with archival efforts. On the other hand, you can’t preserve all of the value by simply making a snapshot of the static content available.

    All that said, even if we limit ourselves to static documents, you still need to convince everyone to take part. That takes time and money away from productive pursuits, such as actually creating content, to solve something which honestly doesn’t matter to the creator. It’s a solution to a problem which solely affects people accessing information after its creators are no longer in a position to care about it, with deep tradeoffs in efficiency, accessibility, and cost at the time of creation. You’d never get enough people to agree to it for it to make a difference.



  • It wouldn’t really be full P2P: I’d expect moderated communities to act as a funnel through which everyone interacts with each other. I wasn’t really considering the hypothetical micro instances to be like a normal server, since even when federated it’s unlikely that they would consume as much federation bandwidth as a large instance. Most people wouldn’t run a community, simply because they don’t want to moderate it.

    Realistically, the abuse problems you mention can already happen if someone wants them to. It’s easier to make an account on an existing server with a fresh email, spam a bit, and get banned than it is to register a new domain ($) and federate before doing the same. I think social networks would have a lot less spam if every time you wanted to send an abusive message, you had to spend $10 to burn a domain name.

    Most of the content would still live on larger servers, so you end up moderating in the same place. Not much difference between banning an abusive user on your instance and banning an abusive single-user instance.





  • I think the main difference between fediverse and email WRT cache instances is that if you create a cache instance for email, you’re only caching your personal emails. If you create a cache instance for a lemmy community, you’re caching every event on the community.

    My intuition says there’s probably a breakpoint in community size where the cost of federating all events to the users who subscribe to them becomes greater than the cost of individually serving API requests to them on demand. Primarily because you’ll be caching a far greater amount of content than you actually consume, unlike with email.

    Edit: That said, scaling out async work queues is a heck of a lot easier than scaling out web servers and databases. That fact alone might skew the breakpoint far enough that only communities with millions of subscribers see a flip in the cost equation…
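    To make the intuition above concrete, here is a minimal sketch of that cost comparison. All numbers and function names are illustrative assumptions, not measurements of any real instance: federation pays a push cost for every event times every subscribing instance, while on-demand serving only pays for the content users actually read.

    ```python
    # Hypothetical cost model for the federation-vs-API breakpoint.
    # Every quantity here is an assumed, illustrative number.

    def federation_cost(events_per_day: int, subscriber_instances: int,
                        cost_per_push: float) -> float:
        """Daily cost of pushing every event to every subscribing instance."""
        return events_per_day * subscriber_instances * cost_per_push

    def on_demand_cost(events_per_day: int, read_fraction: float,
                       readers_per_event: int, cost_per_request: float) -> float:
        """Daily cost of serving API requests only for content actually read."""
        return events_per_day * read_fraction * readers_per_event * cost_per_request

    if __name__ == "__main__":
        # Assume 10k events/day, pushes and requests costing the same,
        # and only 5% of events read by ~200 readers each.
        events, unit_cost = 10_000, 1e-6
        for instances in (5, 10, 100):
            fed = federation_cost(events, instances, unit_cost)
            api = on_demand_cost(events, 0.05, 200, unit_cost)
            print(instances, "federate" if fed < api else "serve on demand")
    ```

    Under these made-up numbers the breakpoint sits at read_fraction × readers_per_event subscribing instances; past that, pushing everything costs more than serving reads on demand, which is the flip in the cost equation described above.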





  • Maybe something closer to migration management in mastodon? Two groups of moderators on separate servers agree to a common set of moderation guidelines, publish an event or setting which says “these communities are merging”, and from that point on they act like aliases for a merged community which share responsibility across servers.

    These “merged” communities could be visually flagged as distinct from the normal rules / moderation of their respective servers to prevent conflicts arising from differences in server management.

    Feature support would be limited by the server events are sourced from. E.g. if beehaw.org and lemmy.ml merged their technology communities, people on beehaw still wouldn’t be able to downvote posts or see downvotes, but lemmy.ml users would, unless they explicitly disable the feature as part of the merge contract.

    When subscribing, you might see a list of merged communities which share responsibility for moderating the final one, and you have the ability to choose which “entrypoint” you use.



  • It’s not an AAA open world game where you glue yourself to your minimap.

    You’ll spend your time solving in-world puzzles, platforming, and trying to trigger specific series of circumstances in order to get into new areas using your knowledge from previous time loops. New areas expose more lore through “journals” or conversations about what is happening in-world, which points you at new places to explore or hints at content that is difficult to find without out-of-sequence knowledge. That’s the basic game loop.

    E.g. you might see an event which blocks off an area partway through the timeline, then next loop you make a beeline to the area and explore it before it gets blocked off.