Hey all, I am in the process of testing several models for fine-tuning, and this question cropped up.

I would like to add new facts to a foundation model and then instruction-tune it. The problem is that I will regularly have new data to add. I was wondering: is there a chance I could train a single LoRA for the instruction tuning and reapply it each time I finish a new round of fact fine-tuning?
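
Mechanically this seems possible with the PEFT library, since the LoRA is saved as a standalone adapter and can be re-attached to whichever base checkpoint is current. A minimal sketch (the model and adapter paths are placeholders; whether the adapter still behaves well presumably depends on how far the new fact fine-tuning moves the base weights it was originally trained against):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model after its latest round of fact fine-tuning
# ("my-org/base-latest" is a placeholder path).
base = AutoModelForCausalLM.from_pretrained("my-org/base-latest")

# Re-attach the instruction-tuning LoRA adapter, trained once
# against an earlier snapshot of the same base.
model = PeftModel.from_pretrained(base, "my-org/instruct-lora")

# Optionally merge the adapter into the weights for serving.
model = model.merge_and_unload()
```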

  • keepthepace@slrpnk.net (OP) · 1 year ago

    Do you consider that there is a way to add facts to a model without raising the probability of hallucinations? Yes, RAG is a necessity, but if we want the model to display some sort of reasoning over a variety of facts, we need them embedded more deeply. The email example I gave can’t be done with RAG.
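
    For context, the retrieve-then-prompt loop I mean looks roughly like this (a sketch using sentence-transformers; the fact store and query are invented). It surfaces individual snippets fine, which is exactly the limitation: anything that has to combine many facts at once doesn’t reduce to a lookup.

    ```python
    from sentence_transformers import SentenceTransformer
    import numpy as np

    # Toy fact store; a real setup would use a vector database.
    facts = [
        "Alice joined the company in 2019.",
        "The Berlin office handles EU invoices.",
    ]

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    fact_vecs = encoder.encode(facts, normalize_embeddings=True)

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Return the k stored facts most similar to the query."""
        q = encoder.encode([query], normalize_embeddings=True)[0]
        scores = fact_vecs @ q  # cosine similarity (vectors are normalized)
        top = np.argsort(scores)[::-1][:k]
        return [facts[i] for i in top]

    # Retrieved facts get pasted into the prompt; chaining many of
    # them in one reasoning step is where this approach runs out.
    print(retrieve("Who handles EU invoices?"))
    ```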

    • namnnumbr@lemmy.ml · 1 year ago

      I think I get what you’re after now. I’ll have to think on this further - interesting problem!