• Stepos Venzny@beehaw.org · 1 hour ago

    Training it on research papers wouldn’t make it smarter, it would just make it better at mimicking their writing style.

    Don’t fall for the hype.

    • Melatonin@lemmy.dbzer0.com (OP) · 17 minutes ago

      Hmmm. Not sure if I’m being insulted. Is that one of those fish fossils that looks kind of like a horseshoe crab?

      • Tabooki@lemmy.world · 10 minutes ago

        troglodyte (noun, Oxford Languages): (especially in prehistoric times) a person who lived in a cave; a hermit; a person who is regarded as being deliberately ignorant or old-fashioned.

    • spongebue@lemmy.world · 2 hours ago

      Machine learning has some pretty cool potential in certain areas, especially in the medical field. Unfortunately, the predominant use of it now is slop produced by copyright laundering, shoved down our throats by every techbro hoping they’ll be the next big thing.

    • UlyssesT [he/him]@hexbear.net · 3 hours ago

      It’s marketing hype, even in the name. It isn’t “AI” as decades of the actual AI field would define it, but credulous nerds really want their cyberpunkerino fantasies to come true so they buy into the hype label.

      • FaceDeer@fedia.io · 3 hours ago

        The term AI was coined in 1956 at a computer science conference and was used to refer to a broad range of topics that certainly would include machine learning and neural networks as used in large language models.

        I don’t get the “it’s not really AI” point that keeps being brought up in discussions like this. Are you thinking of AGI, perhaps? That’s the sci-fi “artificial person” variety, which LLMs aren’t able to manage. But that’s just a subset of AI.

      • queermunist she/her@lemmy.ml · 3 hours ago

        Yeah, these are pattern reproduction engines. They can predict the most likely next thing in a sequence, whether that’s words or pixels or numbers or whatever. There’s nothing intelligent about it and this bubble is destined to pop.
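
        To make “predict the most likely next thing” concrete, here’s a toy sketch in pure Python of the counting idea that next-token predictors scale up. The two-sentence corpus is made up; real models use neural networks over tokens rather than raw bigram counts, but the objective is the same.

        ```python
        from collections import Counter, defaultdict

        # Made-up toy corpus; real models train on billions of tokens.
        words = "the cat sat on the mat . the dog sat on the rug .".split()

        # Count which word follows which: the whole "model" is these statistics.
        follows = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1

        def predict_next(word):
            # Return the most frequent continuation seen in training.
            return follows[word].most_common(1)[0][0]

        print(predict_next("sat"))  # 'on' -- 'sat' was always followed by 'on' above
        ```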

        • UlyssesT [he/him]@hexbear.net · 3 hours ago

          That “Frightful Hobgoblin” computer toucher would insist otherwise, claiming that a sufficient number of Game Boys bolted together equals or even exceeds human sapience, but I think that user is currently too busy being a bigoted sex pest.

  • Rampsquatch@sh.itjust.works · 3 hours ago

    You could feed all the research papers in the world to an LLM and it will still have zero understanding of what you trained it on. It will still make shit up; it can’t save the world.

  • ImplyingImplications@lemmy.ca · 3 hours ago

    Because AI needs a lot of training data to reliably generate something appropriate. It’s easier to get millions of reddit posts than millions of research papers.

    Even then, LLMs simply generate text but have no idea what the text means. It just knows those words have a high probability of matching the expected response. It doesn’t check that what was generated is factual.
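
    A minimal illustration of that last point, with hypothetical numbers: token choice is driven by probability alone, and nothing in the loop checks truth.

    ```python
    import math
    import random

    # Hypothetical scores for the next token after "The capital of France is".
    logits = {"Paris": 5.0, "Lyon": 2.5, "pizza": 1.0}

    # Softmax turns scores into probabilities. These come from co-occurrence
    # statistics in the training data, not from any store of facts.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # Sampling can still emit a wrong answer; there is no "is this true?" step.
    print(random.choices(list(probs), weights=list(probs.values()))[0])
    ```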

  • ryathal@sh.itjust.works · 4 hours ago

    Both are happening. Samples of casual writing are more valuable than research papers for generating an article, though.

    • FaceDeer@fedia.io · 3 hours ago

      Yeah. Scientific papers may teach an AI about science, but Reddit posts teach AI how to interact with people and “talk” to them. Both are valuable.

      • geekwithsoul@lemm.ee · 2 hours ago

        Hopefully not too pedantic, but no one is “teaching” AI anything. They’re just feeding it data in the hopes that it can learn probabilities for certain types of output. It “understands” neither the Reddit post nor the scientific paper.

        • Zexks@lemmy.world · 2 hours ago

          Describe how you “learned” to speak. How do you know which word comes next? Until you can describe this process in a way that isn’t exclusive to “humans” or “biology”, it’s no different. The only thing they can’t do is adjust their weights dynamically, but that’s a limitation we gave them, not one intrinsic to the system.
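
          A minimal PyTorch sketch of that static-weights point, using a toy stand-in model: parameters change during training, then sit frozen at inference.

          ```python
          import torch

          # Toy stand-in; an LLM is a much bigger version of the same idea.
          model = torch.nn.Linear(4, 4)
          opt = torch.optim.SGD(model.parameters(), lr=0.1)

          # Training: weights are updated from data.
          x = torch.randn(8, 4)
          loss = model(x).pow(2).mean()
          loss.backward()
          opt.step()  # parameters change here

          # Deployment: weights frozen; the model only maps input to output.
          model.requires_grad_(False)
          with torch.no_grad():
              y = model(x)  # nothing is learned, however long the chat runs
          ```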

          • geekwithsoul@lemm.ee · 2 hours ago

            I inherited brain structures that are natural language processors. As well as the ability to understand and repeat any language sounds. Over time, my brain focused in on only the language sounds I heard the most and through trial and repetition learned how to understand and make those sounds.

            AI - as it currently exists - is essentially a babbling infant with none of the structures necessary to do anything more than repeat sounds back without understanding any of them. Anyone who tells you different is selling you something.

  • Destide@feddit.uk · 3 hours ago

    Redditors are always right, peer-reviewed papers always wrong. Pretty obvious really. :D

  • callouscomic@lemm.ee · 1 hour ago

    Most research papers are likely as valid as an average reddit post.

    Getting published is a circlejerk; papers are rarely properly tested, and hardly anyone actually reads them.

  • cobysev@lemmy.world · 3 hours ago

    We are. I just read an article yesterday about how Microsoft paid research publishers so they could use the papers to train AI, with or without the consent of the papers’ authors. The publishers also reduced the peer review window so they could publish papers faster and get more money from Microsoft. So… expect AI to be trained on a lot of sloppy, poorly-reviewed research papers because of corporate greed.

  • r00ty@kbin.life · 3 hours ago

    Anyone running a webserver and looking at their logs will know AI is being trained on EVERYTHING. There are so many crawlers for AI that are literally ripping the internet wholesale. Reddit just got in on charging the AI companies for access to freely contributed content. For everyone else, they’re just outright stealing it.
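
    For anyone curious what that looks like, here’s a quick sketch that tallies hits from a few well-known AI crawler user agents. The log path and the UA substrings are assumptions; adjust both for your own server.

    ```python
    from collections import Counter

    # Substrings of some known AI crawler user agents (non-exhaustive).
    AI_BOTS = ["GPTBot", "CCBot", "ClaudeBot", "Bytespider", "PerplexityBot"]

    counts = Counter()
    # Path is an assumption; point it at your own access log.
    with open("/var/log/nginx/access.log", errors="replace") as log:
        for line in log:
            for bot in AI_BOTS:
                if bot in line:
                    counts[bot] += 1

    print(counts.most_common())
    ```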

  • tiddy@sh.itjust.works · 3 hours ago

    Papers are, most importantly, documentation of exactly what procedure was performed and how. Adding a vagueness filter over that is only going to destroy their value.

    The real question is why we’re using generative AI at all (it gets money out of idiot rich people).