---
task_categories:
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
license: mit
---

# Towards Event-oriented Long Video Understanding



<font size=3><div align='center'>[[📖 arXiv Paper]()]</div></font>

---



## 👀 Overview

We introduce **Event-Bench**, an event-oriented long video understanding benchmark built on existing datasets and human annotations. **Event-Bench** covers three event-understanding abilities and six event-related tasks, with 2,190 test instances that comprehensively evaluate a model's ability to understand video events.
<p align="center">
    <img src="./asset/fig_benchmark.jpg" width="100%" height="100%">
</p>


**Event-Bench** enables a systematic comparison of different capabilities across existing video MLLMs and highlights the major shortcomings of open-source MLLMs.
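
As a quick way to get oriented in the annotations, the sketch below counts instances per task. It assumes the annotations are distributed as a single JSON list; the file name `event_bench.json` and the `task` field are illustrative assumptions, not guaranteed by this card.

```python
# Minimal sketch for inspecting the Event-Bench annotations.
# Assumption: the annotations are a JSON list of test instances, each carrying a
# "task" field; the file name below is a placeholder, not an official path.
import json
from collections import Counter

with open("event_bench.json", "r", encoding="utf-8") as f:
    instances = json.load(f)

# Count how many of the 2,190 test instances belong to each event-related task.
task_counts = Counter(item.get("task", "unknown") for item in instances)
for task, count in sorted(task_counts.items()):
    print(f"{task}: {count}")
```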


## 🔍 Dataset
Download the raw videos in Event-Bench from the [Google Drive link](https://drive.google.com/file/d/1wjjH2dK-KpaObFdS1yc-TBUTCvXsaLwc/view?usp=sharing).
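
If you prefer a scripted download, the snippet below is one possible approach using the third-party `gdown` package (`pip install gdown`); the output file name is a placeholder, since this card does not specify the archive format.

```python
# Sketch of downloading the raw videos via gdown; this is one option, not the
# official tooling. The output file name is an assumption.
import gdown

drive_url = "https://drive.google.com/file/d/1wjjH2dK-KpaObFdS1yc-TBUTCvXsaLwc/view?usp=sharing"
# fuzzy=True lets gdown extract the file ID from a "view"-style share URL.
gdown.download(drive_url, "event_bench_videos.zip", fuzzy=True)
```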

**License**:
```
Event-Bench may only be used for academic research. Commercial use in any form is prohibited.
```


## 🔮 Evaluation Pipeline
Please refer to the evaluation code at https://github.com/RUCAIBox/Event-Bench.
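
The official pipeline lives in the repository above. Purely as an illustration of what an accuracy-style evaluation for multiple-choice video QA looks like, here is a hedged sketch; the prediction file layout and the `task`, `answer`, and `prediction` fields are assumptions, not the repository's actual format.

```python
# Illustrative per-task accuracy computation; NOT the official Event-Bench
# pipeline (see the GitHub repository above). Field names are assumptions.
import json
from collections import defaultdict

def evaluate(prediction_file: str) -> None:
    with open(prediction_file, "r", encoding="utf-8") as f:
        records = json.load(f)  # assumed: list of dicts with "task", "answer", "prediction"

    correct, total = defaultdict(int), defaultdict(int)
    for rec in records:
        task = rec.get("task", "overall")
        total[task] += 1
        # Compare predicted and ground-truth option letters, case-insensitively.
        if str(rec["prediction"]).strip().upper() == str(rec["answer"]).strip().upper():
            correct[task] += 1

    for task in sorted(total):
        print(f"{task}: {correct[task] / total[task]:.2%} ({correct[task]}/{total[task]})")

if __name__ == "__main__":
    evaluate("predictions.json")  # placeholder file name
```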


## 📈 Experimental Results
- **Evaluation results of different Video MLLMs.**

<p align="center">
    <img src="./asset/performance.png" width="96%" height="50%">
</p>




## Citation

If you find our work helpful for your research, please consider citing it.

```bibtex

```