It’s not configurable through the UI, but if you’re the admin of an instance you can change the character limit with some fairly simple source code tweaks.
It felt like it happened practically overnight when Let’s Encrypt launched.
I’m not well versed in the field, but I understand that large tech companies hosting user-generated content match the hashes of uploads against a list of known bad hashes as part of their strategy to detect and tackle such content.
Could a strategy like that be adopted as a first pass to improve detection and reduce the compute load of running every file through an AI model?
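To illustrate the idea, here is a minimal sketch of a hash-based first pass. It uses an exact cryptographic hash (SHA-256) for simplicity; real deployments typically use perceptual hashes (e.g., PhotoDNA) that survive re-encoding and resizing, and the hash list itself would come from a vetted external source. The hash set and function names here are hypothetical.

```python
import hashlib

# Hypothetical known-bad hash list; in practice this would be supplied by
# an external clearinghouse, not built locally from example data.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-example").hexdigest(),
}

def first_pass_check(file_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known-bad hash.

    Files that pass this cheap set-membership check would then go on to
    the more expensive AI-model scan.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

The set lookup is O(1), so this costs essentially one hash per upload; the trade-off is that an exact hash misses any file that has been altered even slightly, which is why perceptual hashing is used in practice.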
The rule of the 196 community is that you’re supposed to post a submission of your own before leaving, and it’s customary to include the word “rule” in your post in reference to that rule.
It’s not as though the existence and mechanisms of piracy are a coveted secret. There’s a decent chance that they’ll learn about and attempt it independently, and the method they learn about online might expose them to greater risk than if they did it with more consideration.
On that basis, I think that knowledge transfer is at worst harm reduction. If it’s immoral, which I don’t believe it is, then at the very least your intervention could prevent them from being preyed upon by some copyright troll company when they do it despite your silence or protestations.