Jingkang committed commit c82e161 · verified · 1 parent: 5390b4d

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

```diff
@@ -12,7 +12,7 @@ tags:
 
 ## Model Summary
 
-`EgoGPT-7b-EgoIT` is an omni-modal model trained on egocentric datasets, achieving state-of-the-art performance on egocentric video understanding. Built on the foundation of `llava-onevision-qwen2-7b-ov`, it has been finetuned on [EgoIT-QA (41k)](https://huggingface.co/datasets/lmms-lab/EgoIT) egocentric datasets.
+`EgoGPT-7b-EgoIT` is an omni-modal model trained on egocentric datasets, achieving state-of-the-art performance on egocentric video understanding. Built on the foundation of `llava-onevision-qwen2-7b-ov`, it has been finetuned on [EgoIT-QA (99k)](https://huggingface.co/datasets/lmms-lab/EgoIT-99K) egocentric datasets.
 
 EgoGPT excels in two primary scenarios:
 - **Advanced Model Integration**: EgoGPT combines LLaVA-OneVision and Whisper, improving its ability to process visual and auditory information.
```