Add HLE evaluation result
## Evaluation Results
This PR adds structured evaluation results using the new [`.eval_results/` format](https://huggingface.co/docs/hub/eval-results).
### What This Enables
- **Model Page**: Results appear on the model page with benchmark links
- **Leaderboards**: Scores are aggregated into benchmark dataset leaderboards
- **Verification**: Support for cryptographic verification of evaluation runs

### Format Details
Results are stored as YAML in `.eval_results/` folder. See the [Eval Results Documentation](https://huggingface.co/docs/hub/eval-results) for the full specification.
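For consumers of the format, a minimal sketch of reading one result entry. The schema here (a top-level list of result objects with `dataset.id`, `value`, `date`, and `source` fields) is assumed from this PR's `hle.yaml`; see the linked documentation for the authoritative specification.

```python
import yaml  # PyYAML; pip install pyyaml

# Inline copy of the hle.yaml entry added by this PR, used as sample input.
doc = """
- dataset:
    id: cais/hle
  value: 11.1
  date: '2026-01-15'
  source:
    url: https://huggingface.co/tiiuae/Falcon-H1R-7B
    name: Model Card
"""

results = yaml.safe_load(doc)
for r in results:
    # Each entry pairs a benchmark dataset id with a score and its provenance.
    print(f"{r['dataset']['id']}: {r['value']} ({r['date']})")
# → cais/hle: 11.1 (2026-01-15)
```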
---
*Generated by [community-evals](https://github.com/huggingface/community-evals)*
- .eval_results/hle.yaml +7 -0
```diff
@@ -0,0 +1,7 @@
+- dataset:
+    id: cais/hle
+  value: 11.1
+  date: '2026-01-15'
+  source:
+    url: https://huggingface.co/tiiuae/Falcon-H1R-7B
+    name: Model Card
```