It’s steady pressure and it’s only in one direction. Some countries resist more than others. I’m guessing you are not in the EU, because if you were, you’d be aware of the “chat control” push.
Even so, it’s not the days of Napster anymore. Think about hardware DRM. It stops no one, yet you, too, paid to have it developed and built into your devices. Think about Content ID. That’s not going away; it’s only going to be expanded. That frog will be boiled.
Recently, intellectual property has been reframed as being about “consensual use of data”. I think this is proving to be very effective. It’s no longer “piracy” or “theft”, it’s a violation of “consent”. The deepfake issue creates a direct link to sexual aggression. One bill in the US that ostensibly targets deepfakes would apply to any movie with a sex scene, making sharing it a federal felony.
Hey, I’m just saying how it’s going. Look at, say, threads here about deepfakes. See all the calls for laws and government action. How can that be enforced?
It would be, if internet regulation were practically enforceable for anyone other than commercial businesses operating out in the open.
Well, then I guess we just have to call for more government enforcement.
In the EU, there is certainly more government pressure, instead of just lawsuits between big (or small) players.
I just described what’s going on. The world outside of China or Russia is moving more slowly, but the direction is the same.
Borders in cyberspace are the future. There are increased efforts to regulate the internet everywhere. Think copyright, age verification, the GDPR, or even anti-CSAM laws. It’s all about making sure that information is only available to people who are permitted to access it. China is really leading the way here.
We do not agree with China’s regulations, but that only means that we need border controls. Data must be checked for regulatory compliance with local laws.
The way it looks, Adobe has to do this to comply with EU law.
Why is she claiming that the bill is about liability?
No competent engineer would use NFTs for this purpose. It’s inconvenient, slow, and ridiculously expensive. No one uses the “technology” because it’s rubbish.
Implementing such a feature is trivial. Steam has a marketplace. They don’t let you sell used games because the developers don’t want it.
I can relate to the sentiment, but that just makes it worse. How do you enforce ownership of data?
There’s only one thing for it: more internet surveillance.
It’s not.
It’s easy to tamper with. It lacks common safety features like 2FA. Hacks are common, and stolen NFTs cannot be recovered.
It doesn’t provide any evidence of ownership, much less proof. Anyone can mint an NFT without providing any evidence of ownership whatsoever. There is no legal requirement that ownership of anything is transferred along with an NFT.
But it’s also possible to do things like build a mass facial recognition database with image data.
Facebook built one years ago, but ended up destroying it. https://www.theverge.com/2021/11/2/22759613/meta-facebook-face-recognition-automatic-tagging-feature-shutdown
What about it is wrong?
Artists are allowed to do the exact same thing. That’s probably not a helpful answer, but it’s the correct answer to your question. You’re making some wrong assumptions about the law, and probably about the economics, as well. Writing a proper explanation would take me quite a while and I’m not sure if it would be appreciated.
There are some companies, e.g. Adobe and Shutterstock, that offer “commercially safe” image generators trained on licensed images. Artists who would like to make money by licensing images for AI training can deal with them.
How does that work? By definition, a rich person owns a lot of property. Therefore, laws that give more power to property owners favor the rich. Copyright is a type of property.
Property rights are the only thing that protects the poor from the rich. Sure.
The winners of a system don’t have an incentive to undermine the rules. Quite the opposite. The NYT wants these rules because it would benefit from them. There are at least two image generators that adhere to capitalist ethics. I don’t know what Claro uses, but I see no indication that they are being uppity.
The background is that French law requires ISPs to retain the IP addresses of their customers for some time. That way, an IP address can be associated with a customer.
If I download music in a Starbucks, can they fine the Starbucks CEO then?
A CEO is an employee. You generally can’t sue employees for this sort of thing. It may be possible to sue the company as a whole for enabling the copyright infringement, but that has nothing to do with this case. Perhaps in the future, operators of WiFi hotspots will be required to use something like YouTube’s Content ID system.
Anyway, I hope online artists and authors are able to use this to sue AI companies for stealing their copyrighted works.
They can use this to go after “pirates”. It’s got nothing to do with AI.
One of the top tier models would probably do well on a standardized test like that. You don’t get them for free, though.
You can try some different chat models free at DDG. https://duckduckgo.com/?q=DuckDuckGo&ia=chat
That was a reference to the thread next door that revealed - horror of horrors - that photos of children were part of the training data. Sure, you never know who is behind these hit pieces, but there doesn’t really need to be anyone behind it.
In a future where this is established, wouldn’t you expect non-compliant hardware to be treated just as drugs or machine guns are treated now?
I think that’s hardly an immediate worry, though. Various services already scan for illegal content or suspicious activity. It wouldn’t take much to get ISPs to snitch on their customers.