• Unanimous_anonymous@lemmy.ml · 1 year ago

    I think I kind of understand the term, but what does “hallucinations” refer to in this context? It seems like it might mean fabricated information?

      • Joker@discuss.tchncs.de · 1 year ago

        Not sure why someone downvoted you. That’s exactly what the term means in this context. It’s those confidently written answers that contain false or fabricated information.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 1 year ago

          And this seems like the biggest limitation of the LLM approach. The model just knows that a certain set of tokens tends to follow another set of tokens.

          It has no understanding of what the tokens represent. So it does a great job of producing sentences that look meaningful, but any actual meaning in them is purely incidental.
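
          To make that concrete, here's a toy sketch in plain Python (hypothetical corpus and names, nothing like a real transformer) of what "predicting the next token" amounts to: the model only tracks which token tends to follow which, so its output can look like a sentence even though nothing in it represents meaning.

```python
from collections import defaultdict, Counter
import random

# Hypothetical toy corpus; real models train on vastly more text and
# condition on long contexts, not just the previous token.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each token follows each preceding token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Sample the next token purely from follow-frequency statistics."""
    candidates = follows.get(prev)
    if not candidates:
        return None  # this token was never seen with a successor
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# Generate text: it can look plausible, but nothing in the model
# represents cats, mats, or sitting -- only token-follows-token counts.
token = "the"
output = [token]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))
```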