Model description
We fine-tuned the base model with LoRA on the ORKG SciQA benchmark for question answering over scholarly knowledge graphs.
We experimented with several numbers of training epochs: 3, 5, 7, 10, 15, and 20. The best performance was obtained at 15 epochs, so we published this checkpoint trained for 15 epochs.
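To illustrate the LoRA idea used here, the following is a minimal, self-contained sketch of a LoRA-wrapped linear layer in plain PyTorch. The rank `r` and scaling `alpha` values are hypothetical placeholders, not the hyperparameters of this model; in practice a library such as PEFT applies this to the attention projections of the base model.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank factors: only A and B are trained.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        # base(x) + scaled low-rank correction B @ A @ x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(32, 32))
x = torch.randn(4, 32)
# Because B starts at zero, the wrapped layer initially matches the frozen base layer.
assert torch.allclose(layer(x), layer.base(x))
```

Only the low-rank factors `A` and `B` receive gradients, which is why LoRA fine-tuning updates a small fraction of the base model's parameters.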
For more details on the evaluation, please check the FIRESPARQL GitHub repo.
Model tree for Sherry791/Meta-Llama-3-8B-Instruct-ft4sciqa
Base model
meta-llama/Meta-Llama-3-8B-Instruct