Model description

We fine-tuned the base model (Meta-Llama-3-8B-Instruct) with LoRA on the ORKG SciQA benchmark for question answering over scholarly knowledge graphs.

We experimented with 3, 5, 7, 10, 15, and 20 training epochs. The best performance was obtained at 15 epochs, so we published the 15-epoch checkpoint.

For more details on the evaluation, please check the FIRESPARQL GitHub repository.
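As a minimal usage sketch: since SciQA targets question answering over scholarly knowledge graphs, the fine-tuned model is typically prompted with a natural-language question and asked to produce a SPARQL query over the ORKG. The prompt template below is a hypothetical illustration, not the exact format used in training (see the FIRESPARQL repository for that):

```python
def build_sciqa_prompt(question: str) -> str:
    """Wrap a natural-language question in an instruction asking for a
    SPARQL query over the ORKG.

    NOTE: this template is an assumption for illustration only; the
    actual training prompt format is defined in the FIRESPARQL repo.
    """
    return (
        "You are an expert on scholarly knowledge graphs. "
        "Translate the following question into a SPARQL query over the ORKG.\n"
        f"Question: {question}\n"
        "SPARQL:"
    )


# The resulting string would then be passed to the fine-tuned model,
# e.g. via the transformers text-generation pipeline.
prompt = build_sciqa_prompt(
    "Which model achieves the highest accuracy on SciQA?"
)
```

The generated prompt ends with a `SPARQL:` cue so the model's completion can be taken directly as the candidate query.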
