---
task_categories:
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
license: mit
---
# Towards Event-oriented Long Video Understanding

<div align='center'>[[arXiv Paper]()]</div>

---
## Overview

We introduce **Event-Bench**, an event-oriented long video understanding benchmark built on existing datasets and human annotations. **Event-Bench** covers three event understanding abilities and six event-related tasks, comprising 2,190 test instances that comprehensively evaluate a model's ability to understand video events.
<p align="center">
<img src="./asset/fig_benchmark.jpg" width="100%" height="100%">
</p>
**Event-Bench** provides a systematic comparison across the capabilities of existing video MLLMs and highlights the major shortcomings of open-source MLLMs.
## Dataset

Download the raw videos of Event-Bench from the [Google Drive link](https://drive.google.com/file/d/1wjjH2dK-KpaObFdS1yc-TBUTCvXsaLwc/view?usp=sharing).
**License**:
```
Event-Bench is only used for academic research. Commercial use in any form is prohibited.
```
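The share link above can also be fetched programmatically. The sketch below uses the third-party `gdown` tool (`pip install gdown`); the output filename is a placeholder of ours, not specified by the authors.

```python
# Sketch: download the raw videos from the Google Drive share link above.
# Assumes `pip install gdown`; the output filename is an assumption.
import subprocess

SHARE_URL = "https://drive.google.com/file/d/1wjjH2dK-KpaObFdS1yc-TBUTCvXsaLwc/view?usp=sharing"

def drive_file_id(share_url: str) -> str:
    # Google Drive share links embed the file ID between "/d/" and the next "/"
    return share_url.split("/d/")[1].split("/")[0]

def download(output: str = "event_bench_videos.zip") -> None:
    # gdown resolves Drive's large-file confirmation page; requires network access
    subprocess.run(
        ["gdown", "--id", drive_file_id(SHARE_URL), "-O", output],
        check=True,
    )
```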
## Evaluation Pipeline

Please refer to https://github.com/RUCAIBox/Event-Bench
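As a rough illustration of how benchmark scores are typically computed, here is a minimal per-task accuracy scorer for multiple-choice predictions. The field names (`task`, `answer`, `prediction`) and the option-letter matching rule are our assumptions; the official repository defines the exact instance format and evaluation protocol.

```python
# Hedged sketch of a per-task accuracy scorer for multiple-choice predictions.
# Field names and matching logic are assumptions; see the official repository
# for the authoritative evaluation script.
from collections import defaultdict

def per_task_accuracy(instances):
    correct = defaultdict(int)
    total = defaultdict(int)
    for inst in instances:
        task = inst["task"]
        total[task] += 1
        # Count a hit when the model's reply starts with the gold option letter
        if inst["prediction"].strip().upper().startswith(inst["answer"].strip().upper()):
            correct[task] += 1
    return {task: correct[task] / total[task] for task in total}
```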
## Experimental Results

- **Evaluation results of different video MLLMs.**

<p align="center">
<img src="./asset/performance.png" width="96%" height="50%">
</p>
## Citation

If you find our work helpful for your research, please consider citing our work.

```bibtex
```