The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 0 Posts
  • 436 Comments
Joined 9 months ago
Cake day: January 12th, 2024



  • We’re talking about two different problems.

    The one that I’m talking about is Reddit admins being clearly hostile towards the community, including mods, and the mods still being willing to lick the admins’ boots, instead of migrating their comms to another site. Even at the expense of the userbases of the subreddits that they moderate.

    Here in Lemmy this shit does not roll - both because it’s easier to migrate comms across instances, and because the userbase is mostly composed of people with low tolerance towards admin abuse.

    Now, regarding the problem that you’ve spotted: yes, it is a problem here that boils down to

    1. Lack of transparency: plenty of mods and admins here have a nasty tendency to enforce hidden rules - because actually writing those rules down would piss off the userbase.
    2. Excessive polarisation and oversimplification of some topics, mostly dealing with recent events. (Such as the one that we both were talking about not too long ago.)

    I am really not sure how to compare the extent of both issues in Lemmy vs. Reddit, nor how to address them here, and thus get rid of the problem that you’re noticing.


  • For a casual observer, one who never engaged with that platform, it might actually look like Reddit is back to normal, based on a glance at the activity.

    You only notice the cracks leaking water when you actually look closer, and remember that the stone dam didn’t have so many of them before. The surge in bot activity, the lower level of discourse in the comments, the further concentration of activity into larger subs, the content feeling more and more repetitive…




  • To be a moral agent, your actions towards others need to have consequences for yourself - be those consequences direct, social, emotional, or something else. And intelligence by itself doesn’t provide those consequences.

    The closest you could get, with AGI alone, would be to hardcode it with ethical principles, but that’s another matter. (I’m saying this because people often conflate ethics and morality, even though they’re two different cans of worms.)









  • What you’re proposing is effectively the same as "they should publish inaccurate guidelines that do not actually represent their informed views on the matter, misleading everybody, to pretend that they can prevent the stupid from being stupid." It defeats the very reason why guidelines exist - to guide you towards the optimal approach in a given situation.

    And sometimes the optimal approach is not a bigger min length. Convenience and possible vectors of attack play a huge role; if

    • due to some input specificity, typing out the password is cumbersome, and
    • there’s no reasonable way to set up a password manager in that device, and
    • your blocklist of compromised passwords is fairly solid, and
    • you’re reasonably sure that offline attacks won’t work against you, then

    min 8 chars is probably better. Even if that shitty manager, too dumb to understand that he shouldn’t contradict the “SHOULD [NOT]” points without a good reason to do so, screws it up. (He’s likely also violating the “SHALL [NOT]” points, since he used the printed copy of the guidelines as toilet paper.)



  • They might mean well, but the reason we require a special character and a number is to ensure that the number of possible characters is increased.

    The problem with this sort of requirement is that most people will solve it the laziest way. In this case, “ah, I can’t use «hospital»? Mkay, «Hospital1» it is! Yay it’s accepted!”. And then there’s zero additional entropy - because the first char still has 26 states (just uppercase instead of lowercase), and the appended char has a single, predictable state.

    Someone could of course “solve” this by piling on even further rules, like “you must have at least one number and one capital letter inside the password”, but then you get users writing the password down in a .txt file because it’s too hard to remember where they capitalised it or did their 1337.
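    The entropy cost of that lazy compliance can be sketched with toy numbers. This is a hypothetical attacker model, not data from the guidelines: assume the attacker tries a dictionary of common words, then each word with the usual mutations (capitalise the first letter, append a digit).

    ```python
    import math

    # Toy attacker model (illustrative numbers): a dictionary of common words,
    # plus the "lazy" mutations most users pick to satisfy composition rules.
    DICT_SIZE = 100_000          # assumed dictionary size
    MUTATIONS = 2 * 10           # (plain | capitalised first letter) x appended digit 0-9

    guesses_plain = DICT_SIZE                # covers "hospital"
    guesses_lazy = DICT_SIZE * MUTATIONS     # covers "Hospital1"

    print(f"'hospital':  ~{math.log2(guesses_plain):.1f} bits")   # ~16.6 bits
    print(f"'Hospital1': ~{math.log2(guesses_lazy):.1f} bits")    # ~20.9 bits
    ```

    Under this model the forced capital and digit buy roughly 4 extra bits, not the ~12 bits a genuinely random uppercase letter and digit would add - the mutation is predictable, so the attacker just bakes it into the guessing order.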

    Instead just skip all those silly rules. If offline attacks are such a concern, increase the min pass length. Using both lengths provided by the guidelines:

    • 8 chars, mixing minuscules, capitals, digits, and any 20 special chars of your choice, for a total of 82 states per char. 82⁸ = 2*10¹⁵ states per password.
    • 15 chars, using only minuscules, for a total of 26 states per char. Number of states: 26¹⁵ = 1.7*10²¹ states per password.


  • That stipulation goes rather close to #5, even if it isn’t a composition rule. EDIT: see below.

    I think that a better approach is to follow the recommended min length (15 chars), unless there are good reasons to lower it and you’re reasonably sure that your delay between failed password attempts works flawlessly.

    EDIT: as I was re-reading the original, I found the relevant excerpt:

    If the CSP [credential service provider] disallows a chosen password because it is on a blocklist of commonly used, expected, or compromised values (see Sec. 3.1.1.2), the subscriber SHALL be required to choose a different password. Other complexity requirements for passwords SHALL NOT be imposed. A rationale for this is presented in Appendix A, Strength of Passwords.

    So they are requiring CSPs to do what you said, and check passwords against a blocklist of compromised ones. However, they aren’t tying it to password length; on that, Appendix A basically says that the min length depends on the threat model being addressed - as in, whether it’s just some muppet trying passwords online versus trying them offline.
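    The rule in the quoted excerpt can be sketched as a small verifier: reject a chosen password only if it’s too short or on the blocklist, and impose no composition rules. The blocklist contents and the min length here are illustrative, not from the guidelines:

    ```python
    # Minimal sketch of the quoted SP 800-63B rule: a blocklist check plus a
    # length check, and nothing else ("Other complexity requirements for
    # passwords SHALL NOT be imposed"). Blocklist entries are hypothetical.
    COMPROMISED = {"password", "hospital", "hospital1", "qwerty123"}

    def validate(password: str, min_length: int = 15) -> tuple[bool, str]:
        if len(password) < min_length:
            return False, f"must be at least {min_length} characters"
        if password.lower() in COMPROMISED:
            return False, "on the compromised-password blocklist; choose a different one"
        return True, "ok"  # no composition rules imposed

    print(validate("Hospital1"))                     # rejected: too short
    print(validate("correct horse battery staple"))  # accepted
    ```

    Note that `min_length` is exactly the knob the appendix leaves to the threat model: a deployment that only faces online guessing (with rate limiting) can justify a lower value than one worried about offline attacks.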