When you delve into the details of what those bullet points actually entailed, they were all far far worse in medieval times.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
I suppose Biden could have him officially assassinated. That’s legal now.
The specific subject that Triton is telling Ariel about is where babies come from.
The problem isn’t stuff going in, it’s the baby coming out.
Wait until she finds out how she’ll be doing it once she’s human. I suspect she’ll prefer this approach.
Yup. Fortunately unsubscribing from politics subreddits is generally advisable whether one has been banned from them or not.
Being slightly wrong means more of an endorphin rush when people realize they can pounce on the flaw they’ve spotted, I guess.
Don’t sweat downvotes, they’re especially meaningless on the Fediverse. I happen to like a number of applications for AI technology and cryptocurrency, so I’ve certainly collected quite a few of those and I’m still doing okay. :)
There was a politics subreddit I was on that had a “downvoting is not allowed” rule. There’s literally no way to tell who’s downvoting on Reddit, or even if downvoting is happening if it’s not enough to go below 0 or trigger the “controversial” indicator.
I got permabanned from that subreddit when someone who’d said something offensive asked “why am I being downvoted???” and I tried to explain to them why that was the case. No trial, one million years dungeon, all modmail ignored. I guess they don’t get to enforce that rule often, so they leapt at the excuse.
Downvotes for not getting it right, I presume.
Which makes me concerned that the “Hole for Pepnis” answer has so many upvotes.
Those holes look open to me.
Especially because seeing the same information in different contexts helps map the links between those contexts and helps dispel incorrect assumptions.
Yes, but this is exactly the point of deduplication - you don’t want identical inputs, you want variety. If you want the AI to understand the concept of cats you don’t keep showing it the same picture of a cat over and over, all that tells it is that you want exactly that picture. You show it a whole bunch of different pictures whose only commonality is that there’s a cat in it, and then the AI can figure out what “cat” means.
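The dedup step described above can be sketched in a few lines. This is a hypothetical toy over caption strings (real pipelines hash images or embeddings, and often do near-duplicate detection rather than exact matching):

```python
# Minimal exact-match deduplication sketch: keep only the first
# occurrence of each identical sample, identified by its content hash.
import hashlib

def dedupe(samples):
    """Return samples with exact duplicates removed, order preserved."""
    seen = set()
    unique = []
    for s in samples:
        digest = hashlib.sha256(s.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(s)
    return unique

corpus = [
    "a tabby cat on a couch",
    "a black cat in the grass",
    "a tabby cat on a couch",   # exact duplicate: adds nothing new
    "a white cat on a windowsill",
]
print(dedupe(corpus))  # the duplicate caption appears only once
```

The surviving set is all different pictures whose only commonality is “cat”, which is exactly the variety the comment argues for.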
They’d need to fundamentally change big parts of how training works and how the algorithm learns in order to fix this conflict.
Why do you think this?
There actually isn’t a downside to de-duplicating data sets; overfitting is simply a flaw. Generative models aren’t supposed to “memorize” stuff. If you really want a copy of an existing picture, there are far easier and more reliable ways to accomplish that than giant GPU server farms. These models don’t derive any benefit from drilling on the same subset of data over and over; it makes them less creative.
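The “drilling on the same data” problem can be illustrated without any real training loop. This toy treats a unigram-style “model” as sampling in proportion to how often it saw each training sample (a deliberate oversimplification, not how generative models actually work), just to show how duplicates skew the output toward verbatim reproduction:

```python
# Toy illustration: a "model" that emits training samples with
# probability proportional to how often it saw them. One sample
# duplicated 1000 times swamps everything else.
from collections import Counter

training_data = (
    ["the same picture of a cat"] * 1000   # hammered in over and over
    + ["a cat by a window", "a cat in a box", "a cat on a roof"]
)
counts = Counter(training_data)
total = sum(counts.values())

# Probability the toy model reproduces the over-duplicated sample verbatim:
p_memorized = counts["the same picture of a cat"] / total
print(round(p_memorized, 3))  # prints 0.997
```

After deduplication each sample would be seen once, and no single input dominates, which is the sense in which dedup reduces this flavor of memorization.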
I want to normalize the notion that copyright isn’t an all-powerful fundamental law of physics like so many people seem to assume these days, and if I can get big companies like Meta to throw their resources behind me in that argument then all the better.
Remember when piracy communities thought that the media companies were wrong to sue switch manufacturers because of that?
It baffles me that there’s such an anti-AI sentiment going around that it would cause even folks here to go “you know, maybe those litigious copyright cartels had the right idea after all.”
We should be cheering that we’ve got Meta on the side of fair use for once.
Look up sample recovery attacks.
Look up “overfitting.” It’s a flaw in generative AI training that modern AI trainers have done a great deal to resolve, and even in the cases of overfitting it’s not all of the training data that gets “memorized.” Only the stuff that got hammered into the AI thousands of times in error.
You get out ahead of the locomotive knowing that most of the directions you go aren’t going to pan out. The point is that the guy who happens to pick correctly will win big by getting out there first. Nothing wrong with making the attempt and getting it wrong, as long as you factored that risk in (as McDonald’s seems to have done, given that this hasn’t harmed them).
Training an AI does not involve copying anything, so why would you think that fair use is even a factor here? It’s outside of copyright altogether. You can’t copyright concepts.
Downloading pirated books to your computer does involve copyright violation, sure, but it’s a violation by the uploader. And look at what community we’re in, are we going to get all high and mighty about that?
Training an AI on something doesn’t involve copying it.
And under copyleft licensing, they’re allowed to do that. Both to GitHub repositories and Wikipedia.
Why would that matter? You can fork such projects too.
If you want to argue that Lemmy doesn’t represent users at large, or that the people complaining about AI are a loud minority, go for it.
Yes, that’s exactly what I’m doing. Though specifically this community, not Lemmy as a whole (I’m not a Lemmy user myself for that matter).
Cloaks are actually quite historical, they’re very easy to make and useful in a variety of conditions.