burtenshaw HF Staff committed on
Commit
2c187b8
·
verified ·
1 Parent(s): 16b50bc

Add HLE evaluation result


## Evaluation Results

This PR adds structured evaluation results using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).

### What This Enables

- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

![Model Evaluation Results](https://huggingface.co/huggingface/documentation-images/resolve/main/evaluation-results/eval-results-previw.png)

### Format Details

Results are stored as YAML files in the `.eval_results/` folder. See the [Eval Results Documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.

---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*

Files changed (1)
  1. .eval_results/hle.yaml +7 -0
.eval_results/hle.yaml ADDED
@@ -0,0 +1,7 @@
+ - dataset:
+     id: cais/hle
+   value: 11.1
+   date: '2026-01-15'
+   source:
+     url: https://huggingface.co/tiiuae/Falcon-H1R-7B
+     name: Model Card
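For downstream tooling, each `.eval_results/` file holds a list of result entries that a standard YAML library can read directly. A minimal sketch, assuming PyYAML is available (the field names below are taken verbatim from the file added in this PR; the spec may define additional optional keys):

```python
import yaml

# Contents of .eval_results/hle.yaml as added in this PR.
HLE_YAML = """
- dataset:
    id: cais/hle
  value: 11.1
  date: '2026-01-15'
  source:
    url: https://huggingface.co/tiiuae/Falcon-H1R-7B
    name: Model Card
"""

# The file is a YAML list; each item is one evaluation result entry.
results = yaml.safe_load(HLE_YAML)

for entry in results:
    # dataset.id names the benchmark dataset; value is the reported score.
    print(f"{entry['dataset']['id']}: {entry['value']} ({entry['date']})")
```

This prints `cais/hle: 11.1 (2026-01-15)`. Because the date is quoted in the YAML, it is loaded as a plain string rather than a `datetime.date`, which keeps the entry easy to serialize back out unchanged.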