• 0 Posts
  • 279 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • I found a source that includes the second half which makes it more obvious:

    Dr. Macho believes #42’s behavior is intentional and aimed specifically at him. “I caught him laughing at me once while I was trying to sort data he’d fucked up. I know what you’re thinking, ‘Can rats even laugh? And what would it look like?’ Trust me, when a rat laughs at you, you’ll know.”

    When asked why he doesn’t simply exchange #42 for a less malicious rat, Dr. Macho explained, “You can’t just use an infinite number of lab rats. They start to think you’re a psycho if you keep asking for more.” Dr. Macho sighed. “I feel like I’m living in an annoying Pixar movie where I’m the bad guy – oh, wait….I’m the bad guy. I’m the evil scientist performing experiments on a sassy, smart rat. And my name is Dr. Stu Macho? Oof, yeah, I’m the wrong one here.”

    Just behind Dr. Macho, #42 winked and walked directly into his food bowl.


  • The diet experiment is presented in the present tense. The cat-smell experiment is described with “I once ran an experiment,” as part of the speaker’s “thesis,” which is in the past. They are clearly keeping #42 alive for use in totally separate research, in this fictional, The Onion-esque scenario.







  • But I think the point is, the OP meme is wrong to paint this as some kind of society-wide psychological pathology, when it’s really business people coming up with simple, reliable formulas to make money. The space of possible products people could want is large, and this choice isn’t only about what people want but about what will get attention. People will readily pay attention to, and discuss with others, something they already have a connection to in a way they wouldn’t with some new thing, even if they would rather have something new.



  • that is not the … available outcome.

    It demonstrably is already though. Paste a document in, then ask questions about its contents; the answer will typically take what’s written there into account. Ask about something you know is in a Wikipedia article that would have been part of its training data, same deal. If you think it can’t do this sort of thing, you can just try it yourself.
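    For instance, document-grounded question answering is just a matter of how the prompt is assembled. A minimal sketch in Python (the message format mimics the common chat-completion convention; no specific product’s API is implied):

```python
# Sketch of "paste a document in, then ask questions about it":
# the document and the question are combined into one prompt so the
# model answers from the supplied text rather than from memory alone.

def build_grounded_prompt(document: str, question: str) -> list[dict]:
    """Build chat messages that ground the answer in the document."""
    return [
        {"role": "system",
         "content": "Answer using only the document the user provides."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: {question}"},
    ]

messages = build_grounded_prompt(
    "The meeting is on Tuesday at noon.",
    "When is the meeting?",
)
# `messages` can then be sent to any chat-style completion endpoint.
```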

    Obviously it can handle simple sums; this is an illustrative example.

    I am well aware that LLMs can struggle especially with reasoning tasks, and have a bad habit of making up answers in some situations. That’s not the same as being unable to correlate and recall information, which is the relevant task here. Search engines also use machine learning technology and have been able to do that to some extent for years. But with a search engine, even if it’s smart enough to figure out what you wanted and give you the correct link, that’s useless if the content behind the link is only available to institutions that pay thousands a year for the privilege.

    Think about these three things in terms of what information they contain and their capacity to convey it:

    • A search engine

    • Dataset of pirated contents from behind academic paywalls

    • An LLM model file that has been trained on said pirated data

    The latter two each have their pros and cons and would likely work better in combination with each other, but they both have an advantage over the search engine: they can tell you about the locked up data, and they can be used to combine the locked up data in novel ways.
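    That combination is essentially retrieval-augmented generation: search the dataset for relevant passages, then paste the top hits into the model’s context. A toy sketch of the retrieval half (the corpus and the word-overlap scoring here are illustrative only, not any particular system):

```python
# Toy retrieval step: rank documents by word overlap with the query.
# In a real pipeline the top hits would be fed to the LLM as context.

def retrieve(corpus: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k corpus entries sharing the most words with the query."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

corpus = [
    "Paywalled paper on maze navigation in mice",
    "Lecture notes on linear algebra",
    "Survey of rodent cognition and laughter in rats",
]
hits = retrieve(corpus, "do rats laugh", k=1)
# hits[0] is the rodent-cognition survey, the only entry mentioning rats
```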


  • Ok, but I would say these concerns are all small potatoes compared to the potential of the general public gaining the ability to query a system with synthesized expert knowledge obtained from scraping all academically relevant documents. If you’re wondering about something and don’t know what you don’t know, or have no idea where to start looking to learn what you want to know, an LLM is an incredible resource even with its caveats and limitations.

    Of course, it would be better if it could also directly reference and provide the copyrighted/paywalled sources it draws its information from at runtime, in the interest of verifiably accurate information. Fortunately, local models are becoming increasingly powerful and their barrier to entry keeps dropping, so in practice the legal barriers to such a thing existing might not be able to stop it for long.




  • chicken@lemmy.dbzer0.com to Science Memes@mander.xyz... · 19 days ago (edited)

    can’t see correlation without social agenda—theyre just two very different things. Science and agenda; or agenda using “science”. It’s bias. That’s very unscientific.

    The idea is that the OP meme likely comes from a belief that science and agenda are not different things but are inseparable. That is indeed very unscientific; it’s a fundamentally anti-intellectual attitude.


  • chicken@lemmy.dbzer0.com to Science Memes@mander.xyz... · 20 days ago

    I think you’re reading statement B too literally. I’m pretty sure the idea behind it is related to critical theory: it’s an objection to the idea that rationality is trustworthy, holding instead that class conflict should be regarded as a higher truth. In that way statement B is relevant to statement A; it’s an implicit rejection of it.