---
license: mit
datasets:
- orkg/SciQA
language:
- en
metrics:
- bleu
- rouge
- exact_match
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: question-answering
---
### Model description

We fine-tuned the base model (Meta-Llama-3-8B-Instruct) with LoRA on the ORKG SciQA benchmark for question answering over scholarly knowledge graphs.
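
Below is a minimal usage sketch. It assumes the published weights are a LoRA adapter loaded on top of the Llama 3 base model and that the model is prompted with a natural-language question in chat format; `adapter_id` is a placeholder for this model's Hub id.

```python
# Minimal inference sketch (assumptions: the checkpoint is a LoRA adapter on top of
# Meta-Llama-3-8B-Instruct; "adapter_id" is a placeholder for this model's Hub id).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "<this-model-repo-id>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Ask a question using the instruct model's chat template.
question = "Which models are evaluated on the SciQA benchmark?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids=input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
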
We experimented with several numbers of training epochs (3, 5, 7, 10, 15, and 20). The best performance was obtained with 15 epochs, so the published checkpoint is the one trained for 15 epochs.
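
The sketch below shows the general shape of such a LoRA fine-tuning setup; apart from the epoch count, the hyperparameters (rank, target modules, batch size, learning rate) are illustrative assumptions rather than the exact values used for this checkpoint.

```python
# Illustrative LoRA fine-tuning configuration (values other than num_train_epochs are assumptions).
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="llama3-8b-sciqa-lora",
    num_train_epochs=15,                  # the epoch count that performed best in our sweep
    per_device_train_batch_size=4,        # assumed
    learning_rate=2e-4,                   # assumed
    logging_steps=10,
)
# The PEFT model and these arguments are then passed to a Trainer together with the SciQA data.
```
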
For more details on the evaluation, please see the GitHub repository [FIRESPARQL](https://github.com/sherry-pan/FIRESPARQL).