Hey all, I am in the process of testing several models for fine-tuning, and this question cropped up.

I would like to add new facts to a foundation model and then train it for instruction following. Problem is, I will regularly have new data to add. I was wondering if there is a chance that I could do a single LoRA for the instruction tuning and reapply it each time I finish a new fine-tune?
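
To make the idea concrete, here is a minimal sketch of the workflow I have in mind, assuming Hugging Face transformers and peft; the model and adapter paths ("base-model", "facts-lora", "instruct-lora") are placeholders:

```python
# Sketch only; "base-model", "facts-lora" and "instruct-lora" are placeholder paths.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model")

# 1) fold the latest facts fine-tune into the base weights
model = PeftModel.from_pretrained(base, "facts-lora").merge_and_unload()

# 2) reapply the same, previously trained instruction LoRA on top
model = PeftModel.from_pretrained(model, "instruct-lora")
```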

  • keepthepace@slrpnk.netOP · 1 year ago

    Ah, I should have written a more detailed message explaining the road I have already been down, I guess :-)

    I know that RAG gets recommended more for adding information, and it is the fastest way to retrieve it. However, it only allows a shallow understanding of the information, and the LLM will have problems combining information from several different files for you. You can’t, for example, give it 1000 emails and ask it to list the problems encountered in project A and how they were solved.

    Fine-tuning can add facts. This person added the Unreal Engine 5 documentation to Llama 7B, and this company added financial knowledge to Llama 13B. These are my inspirations. When using LoRA, it requires higher ranks and, crucially, doing the fine-tuning on a foundation model first, and only after your own fine-tuning, doing the instruction fine-tune.
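
    In peft terms, the knowledge-injection stage would use a config along these lines (a sketch; the rank, alpha, and target modules are illustrative and model-dependent):

    ```python
    # Illustrative values only; knowledge injection reportedly needs
    # higher ranks than the r=8..16 typical of style/format tuning.
    from peft import LoraConfig

    config = LoraConfig(
        r=64,            # higher rank gives the adapter capacity for new facts
        lora_alpha=128,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # architecture-dependent
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    ```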

    I am wondering if there is a way to make the last step easier by reapplying the same LoRA.

    I guess I am also wondering why we can’t directly fine-tune facts into an instruction-tuned model. I tried it: the model does tend to remember how to interact through instruct prompts, but the format gets a bit corrupted by the new dataset. I find it a bit weird how quickly such models forget past things as they are fed new tokens.

    • namnnumbr@lemmy.ml · 1 year ago

      IMO there is a difference between adding “knowledge” and adding “facts”. You can fine-tune in domain knowledge, but it will be prone to hallucination. To ground the instructions, you’d need to introduce RAG for fact lookup, possibly with a summarization step if you want to bring in large bodies of facts.
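
      Roughly, the flow I have in mind looks like this (a sketch; retrieve() and llm() are hypothetical stand-ins for whatever retriever and model you use):

      ```python
      # Hypothetical stubs standing in for a real retriever and model call.
      def retrieve(query: str, top_k: int = 20) -> list[str]:
          return []  # vector/keyword search over your document store

      def llm(prompt: str) -> str:
          return ""  # call into your instruction-tuned model

      def answer(question: str) -> str:
          chunks = retrieve(question)
          # optional summarization step to fit large bodies of facts into context
          context = llm("Summarize for question answering:\n" + "\n".join(chunks))
          return llm(f"Context:\n{context}\n\nQuestion: {question}")
      ```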

      • keepthepace@slrpnk.netOP · 1 year ago

        Do you think there is a way to add facts to a model without raising the probability of hallucinations? Yes, RAG is a necessity, but if we want the model to display some sort of reasoning over a variety of facts, we need them embedded more deeply. The email example I gave can’t be done with RAG.

        • namnnumbr@lemmy.ml · 1 year ago

          I think I get what you’re after now. I’ll have to think on this further - interesting problem!

    • Turun@feddit.de · 1 year ago

      At least in Stable Diffusion, LoRAs are composable. You can combine different LoRAs and have both effects applied to the resulting image.

      • keepthepace@slrpnk.netOP · 1 year ago

        Yes, but my understanding is that they are commutative (i.e. the order does not matter)? If so, it looks like a “facts-adding” LoRA can still induce forgetting of the formatting.

        And I am especially curious whether a facts-LoRA + an instructions-LoRA results in a model that can use the new facts in instructions or not. I’ll run experiments, but I would have loved it if people here already knew.
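
        If anyone wants to try the same thing, here is roughly how I plan to test it with peft’s multi-adapter support (a sketch; the adapter paths and names are placeholders). Since the LoRA deltas simply add onto the base weights, applying “facts” then “instruct” should give the same result as the reverse order:

        ```python
        # Sketch assuming peft's multi-adapter API; paths/names are placeholders.
        from transformers import AutoModelForCausalLM
        from peft import PeftModel

        base = AutoModelForCausalLM.from_pretrained("base-model")
        model = PeftModel.from_pretrained(base, "facts-lora", adapter_name="facts")
        model.load_adapter("instruct-lora", adapter_name="instruct")

        # Combine both deltas into a single adapter; because the deltas add,
        # "facts" + "instruct" and "instruct" + "facts" give the same weights.
        model.add_weighted_adapter(
            adapters=["facts", "instruct"],
            weights=[1.0, 1.0],
            adapter_name="facts_plus_instruct",
            combination_type="cat",  # "cat" also works when the two ranks differ
        )
        model.set_adapter("facts_plus_instruct")
        ```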