  • It could of course go up to SCOTUS and a new right could effectively be legislated from the bench, but that is unlikely. The nature of these models, combined with what has counted as a “copy” under US copyright law for effectively its entire history, means that merely training and deploying a model is almost certainly not copyright infringement. This is pretty common consensus among IP attorneys.

    That said, a lot of other very obvious infringement is coming out in discovery in many of these cases. Like torrenting all the training data. THAT is absolutely an infringement, but it is effectively unrelated to the question of whether lawfully accessed content being used as training data retroactively makes its access unlawful (it really almost certainly doesn’t).


  • Even in your latter paragraph, it wouldn’t be an infringement. Assuming the art was lawfully accessed in the first place, like by clicking a link to a publicly shared portfolio, no copy is being encoded into the model. There is currently no intellectual property right invoked merely by training a model-- if people want there to be, and it isn’t an unreasonable thing to want (though I don’t agree it’s good policy), then a new type of intellectual property right will need to be created.

    What’s actually baffling to me is that these pieces presumably are all effectively public domain as they’re authored by AI. And they’re clearly digital in nature, so wtf are people actually buying?


  • If you are “torn” on whether it is a good thing to grant a wealthy campaign donor unfettered and unquestionably illegal access to government and bureaucratic infrastructure, with zero accountability or oversight, and who has displayed absolutely zero competence at managing any public institution (and in fact has a record of incompetence at managing private enterprises), then I honestly think you’re one of the millions of Americans who just needs to fuck off and stop contributing to adult decision-making. You’re simply not up to the task.


  • AI in health and medtech has been around and in the field for ages. However, two persistent challenges make rollout slow-- and they’re not going anywhere, because of the stakes at hand.

    The first is just straight regulatory. Regulators don’t have a very good or very consistent working framework to apply to these technologies, but that’s in part due to how vast the field is in terms of application. The second is somewhat related to the first, but it’s also very market driven: the explainability of outputs. Regulators generally want it, of course, but customers (i.e., doctors) also don’t just want predictions/detections-- they want and need to understand why a model “thinks” what it does. Doing that in a way that does not itself require significant training in the data and computer science underlying the particular model and architecture is often pretty damned hard. (A rough sketch of what explainability can even look like is below.)
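
    To make that concrete, here’s a minimal sketch of one model-agnostic technique, permutation importance, using scikit-learn. The dataset and model here are stand-ins purely for illustration-- nothing like a real clinical pipeline:

    ```python
    # Minimal sketch: permutation importance as one form of explainability.
    # Dataset and model are illustrative stand-ins, not a clinical system.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure how much held-out accuracy
    # drops; a large drop means the model leaned heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
    ```

    Even this toy only tells a doctor which inputs moved the output, not why-- and closing that gap is the hard part.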

    I think it’s an enormous oversimplification to say modern AI is just “fancy signal processing,” unless all inference, including that done by humans, is also just signal processing. Modern AI applies rules it is given, explicitly or by virtue of complex pattern identification, to inputs to produce outputs according to those “given” rules. Now, what no current AI can really do is synthesize new rules uncoupled from the act of pattern matching. Effectively, a priori reasoning is still out of scope for the most part, but the reality is that it simply isn’t necessary for an enormous portion of the value proposition of “AI” to be realized.


  • Summary judgment is not a thing separate from a lawsuit. It’s literally a standard filing made in nearly every lawsuit (even if just as a hail mary). You referenced “beyond a reasonable doubt” earlier. That is also not the standard used in (US) civil cases–the standard is typically preponderance of the evidence.

    I’m also not sure what you mean by “court approved documentation.” Different jurisdictions approach contract law differently, but courts don’t “approve” most contracts–parties allege there was a binding contractual agreement, present their evidence to the court, and a mix of judge and jury determines whether, under the jurisdiction’s laws, an enforceable agreement occurred and how it can be enforced (i.e., are the obligations severable, what are the damages, etc.).


  • There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, and you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely, of course), divert path, and I’m sure an actual domain and subject matter expert–or a whole team of them–could come up with plenty more. (A toy sketch of that fallback logic is below.)

    But while we’re on the topic, it’s not really right to even label these as confidence values–they’re just output weightings associated with the respective labels. We’ve sort of decided they vaguely match up to something approximating confidence, but they aren’t based on a ground truth the way I understand your comment to imply–they derive entirely from the trained model weights and their confluence. Don’t really have anywhere to go with that thought beyond the observation itself.
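
    To illustrate, here’s a toy sketch of that kind of thresholded fallback, with a plain softmax standing in for a detector head. The labels, threshold, and logits are all made up for illustration:

    ```python
    import numpy as np

    def softmax(logits):
        # Scores sum to 1, but they derive entirely from the trained
        # weights; they are not calibrated probabilities of being right.
        z = np.exp(logits - logits.max())
        return z / z.sum()

    LABELS = ["pedestrian", "cyclist", "vehicle", "background"]  # hypothetical
    THRESHOLD = 0.8  # arbitrary illustrative cutoff

    logits = np.array([2.1, 1.9, 0.3, 0.2])  # stand-in detector output
    scores = softmax(logits)

    if scores.max() < THRESHOLD:
        # Nothing cleared the bar, so don't guess: slow down to buy more
        # frames, re-run, divert path, or stop safely--whatever the
        # designers chose as the fallback behavior.
        print("low score -> trigger fallback behavior")
    else:
        print(f"accept: {LABELS[scores.argmax()]} ({scores.max():.2f})")
    ```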

  • My point is just that they’re effectively describing a discriminator. Like, yeah, it entails a lot more tough problems to be tackled than that sentence makes it seem, but it’s a known and very active area of ML. Sure, there may be other metadata and contextual features to discriminate on, but eventually those heuristic gaps will inevitably be closed and we’ll just end up with a giant distributed, quasi-federated GAN (a toy sketch of that adversarial loop is below). Which, setting aside the externalities–which I’m skeptical anyone in a position of power to address is equally positioned to understand–is kind of neat in a vacuum.
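
    For the unfamiliar, here’s a minimal toy of the adversarial loop I mean, sketched in PyTorch: a generator learns to produce samples, a discriminator learns to flag them, and each update makes the other’s job harder. The 1-D toy data and tiny networks are purely illustrative:

    ```python
    import torch
    import torch.nn as nn

    # Toy setup: "real" samples drawn from N(4, 1); the generator maps
    # 8-D noise to a single scalar it hopes passes as real.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1),
                      nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 4.0
        fake = G(torch.randn(64, 8))

        # Discriminator step: tell real from generated.
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        # Generator step: fool the discriminator.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()
    ```

    The “distributed, quasi-federated” part is the speculative bit; the loop itself is completely standard.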


  • I think if you could actually define reasoning, your comments (and those like yours) would be much more convincing. I’m just calling yours out because I’ve seen you repeating it up and down this thread, but it’s a pattern observed among the vocal critics of the technology overall. Neither intelligence nor reasoning (likewise understanding and knowing, for that matter) is easily defined in a way that is more useful than invoking spirits and ghosts. In this case, detecting patterns certainly seems a critical component of what we would consider reasoning. I don’t think it’s sufficient, but it is absolutely necessary.