Dataset Viewer

The annotations are auto-converted to Parquet for the dataset viewer. The preview shows the following columns (clip and gaze_class are strings; frame, person_id and is_child are int64; the bbox_* and gaze_* columns are float64), illustrated here with two sample rows from the clip 1Ab4vLMMAbY_2354-2439:

clip | frame | person_id | bbox_x | bbox_y | bbox_width | bbox_height | gaze_class | gaze_x | gaze_y | is_child
1Ab4vLMMAbY_2354-2439 | 1 | 1 | 901 | 79 | 397 | 559 | outside_frame | -1 | -1 | 1
1Ab4vLMMAbY_2354-2439 | 1 | 2 | 0 | 2 | 486 | 1007 | inside_visible | 1075.364 | 454.091 | 0

ChildPlay Dataset

Paper

ChildPlay: A New Benchmark for Understanding Children’s Gaze Behaviour (Tafasca et al., ICCV 2023)

Abstract

Gaze behaviors such as eye-contact or shared attention are important markers for diagnosing developmental disorders in children. While previous studies have looked at some of these elements, the analysis is usually performed on private datasets and is restricted to lab settings. Furthermore, all publicly available gaze target prediction benchmarks mostly contain instances of adults, which makes models trained on them less applicable to scenarios with young children. In this paper, we propose the first study for predicting the gaze target of children and interacting adults. To this end, we introduce the ChildPlay Gaze dataset: a curated collection of short video clips featuring children playing and interacting with adults in uncontrolled environments (e.g. kindergarten, therapy centers, preschools etc.), which we annotate with rich gaze information. Our results show that looking at faces prediction performance on children is much worse than on adults, and can be significantly improved by fine-tuning models using child gaze annotations.

Dataset Description

The ChildPlay Gaze dataset is composed of 401 clips extracted from 95 longer YouTube videos, totaling 120,549 frames. For each clip, we select up to 3 people and annotate each of them with gaze information in every frame in which they are visible.

The annotations folder contains 3 subfolders: train, val and test. Each subfolder contains csv annotation files named videoid_startframe-endframe.csv, where videoid refers to the original video from which the clip was extracted, and startframe and endframe are the starting and ending frames of the clip in that original video.

For example, one of the original videos downloaded from YouTube is 1Ab4vLMMAbY.mp4, where 1Ab4vLMMAbY is the YouTube video ID, which can be used directly to build the URL (i.e. https://www.youtube.com/watch?v=1Ab4vLMMAbY). The annotation file 1Ab4vLMMAbY_2354-2439.csv found under ChildPlay/annotations/test contains the annotations of the clip 1Ab4vLMMAbY_2354-2439.mp4, which was extracted from 1Ab4vLMMAbY.mp4 and spans frames 2354 to 2439 (inclusive). Frame numbering starts at 1.
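
As a minimal parsing sketch (the regular expression assumes the naming convention above, including the optional -downsampled suffix described next; parse_clip_name is a hypothetical helper, not part of the released scripts):

import re

def parse_clip_name(clip_name: str):
    # Hypothetical helper: split e.g. '1Ab4vLMMAbY_2354-2439' or
    # 'smwfiZd8HLc_7508-8408-downsampled' into the video ID, the frame
    # range and a downsampled flag, per the naming convention above.
    m = re.match(r"^(.+)_(\d+)-(\d+)(-downsampled)?$", clip_name)
    video_id, start, end = m.group(1), int(m.group(2)), int(m.group(3))
    return video_id, start, end, m.group(4) is not None

video_id, start, end, downsampled = parse_clip_name("1Ab4vLMMAbY_2354-2439")
url = f"https://www.youtube.com/watch?v={video_id}"  # original video URL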

Please note that some videos were recorded at 60 FPS, whereas most are at 24-30 FPS. When extracting clips from the 60 FPS videos, we also downsampled them to 30 FPS by skipping every other frame. The starting and ending frames in their names still correspond to frame numbers in the original video, but we append the suffix downsampled to their names so they are recognizable. For example, smwfiZd8HLc_7508-8408-downsampled.mp4 is a clip extracted between frames 7508 and 8408 of smwfiZd8HLc.mp4, but it contains only 451 frames instead of the expected 8408 - 7508 + 1 = 901 because it was downsampled.
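
Based on this description, the mapping from a clip frame number back to the original video can be sketched as follows (for downsampled clips, the assumption that the first frame of each skipped pair is the one kept matches the frame counts above):

def clip_frame_to_video_frame(n: int, start: int, downsampled: bool = False) -> int:
    # Map the 1-based frame number n of a clip to a frame number in the
    # original video. For downsampled clips, assumes the kept frames are
    # start, start + 2, start + 4, ...
    step = 2 if downsampled else 1
    return start + step * (n - 1)

# 1Ab4vLMMAbY_2354-2439 has 86 frames; the last maps back to 2439.
assert clip_frame_to_video_frame(86, 2354) == 2439
# smwfiZd8HLc_7508-8408-downsampled has 451 frames; the last maps back to 8408.
assert clip_frame_to_video_frame(451, 7508, downsampled=True) == 8408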

Each annotation csv file has one row per annotated person per frame, and includes the following columns (a minimal loading sketch follows the list):

  • clip: the name of the clip (without extension). This value is duplicated across the entire dataframe.
  • frame: the 1-based frame number within the clip. If the clip is named videoid_start-end, then frame n of the clip corresponds to frame start + n - 1 of the original video (unless the clip was downsampled). For example, frame 5 of 1Ab4vLMMAbY_2354-2439 is frame 2354 + 5 - 1 = 2358 of 1Ab4vLMMAbY.mp4.
  • person_id: an id used to separate and track annotated people in the clip.
  • bbox_x: the x-value of the upper left corner of the head bounding box of the person (in the image frame).
  • bbox_y: the y-value of the upper left corner of the head bounding box of the person (in the image frame).
  • bbox_width: the width of the head bounding box of the person (in the image frame).
  • bbox_height: the height of the head bounding box of the person (in the image frame).
  • gaze_class: a gaze label describing the type of gaze behavior. One of [inside_visible, outside_frame, gaze_shift, inside_occluded, inside_uncertain, eyes_closed]; refer to the paper for their definitions. The labels inside_visible and outside_frame, in particular, correspond to the standard inside vs. outside labels found in other gaze-following datasets (e.g. GazeFollow and VideoAttentionTarget).
  • gaze_x: the x-value of the target gaze point (in the image frame). This value is set to -1 when gaze_class != inside_visible.
  • gaze_y: the y-value of the target gaze point (in the image frame). This value is set to -1 when gaze_class != inside_visible.
  • is_child: a binary flag denoting whether the person is a child (1) or an adult (0).
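
As a minimal loading sketch (assuming pandas and the column names above; the derived gaze vector is only an illustration, not part of the dataset tooling):

import pandas as pd

# Load one annotation file and keep only rows with a visible gaze target.
df = pd.read_csv("annotations/test/1Ab4vLMMAbY_2354-2439.csv")
visible = df[df["gaze_class"] == "inside_visible"]

# Example derived quantity: a 2D vector from the head-box centre to the
# annotated gaze point, restricted to children.
children = visible[visible["is_child"] == 1]
head_cx = children["bbox_x"] + children["bbox_width"] / 2
head_cy = children["bbox_y"] + children["bbox_height"] / 2
gaze_dx = children["gaze_x"] - head_cx
gaze_dy = children["gaze_y"] - head_cy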

You will also find a videos.csv file listing the videos to download, from which the ChildPlay clips were extracted, along with other metadata (e.g. channel ID, FPS, resolution). There is also a clips.csv file containing similar information for each clip, and a splits.csv detailing the train/val/test split of each clip.

Furthermore, we provide utility scripts to extract the necessary clips and images from the videos (assuming you have already downloaded them).

Dataset Acquisition

Please follow the steps below to set up the dataset:

  1. Download the 95 original videos listed in videos.csv from YouTube. You can use the Python package pytube or some other tool.
  2. Use the extract-clips-from-videos.py script to extract both the clips and the corresponding frames from the videos. The script takes the following flags: --clip_csv_path (path to the clips.csv file), --video_folder (path to the folder of downloaded videos), --clip_folder (path where the clips will be saved; created if it doesn't exist) and --image_folder (path where the images will be saved; created if it doesn't exist). Note that you need the pandas, tqdm and opencv packages installed; the script also requires ffmpeg for the extraction. A sketch of both steps follows.
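
As a rough sketch of step 1 (the video_id column name is an assumption to be checked against videos.csv; pytube's interface changes frequently, and yt-dlp is a common alternative):

import pandas as pd
from pytube import YouTube  # pip install pytube

videos = pd.read_csv("videos.csv")
for video_id in videos["video_id"]:  # hypothetical column name
    url = f"https://www.youtube.com/watch?v={video_id}"
    stream = YouTube(url).streams.get_highest_resolution()
    stream.download(output_path="videos", filename=f"{video_id}.mp4")

Step 2 is then a single invocation with the documented flags, for example:

python extract-clips-from-videos.py \
    --clip_csv_path clips.csv \
    --video_folder videos \
    --clip_folder clips \
    --image_folder images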

The final dataset folder structure should look like

.
├── annotations
│   ├── test
│   │   ├── 1Ab4vLMMAbY_2354-2439.csv
│   │   ├── ...
│   ├── train
│   │   ├── 1Aea8BH-PCs_1256-1506.csv
│   │   ├── ...
│   ├── val
│   │   ├── bI1GohGXSt0_2073-2675.csv
│   │   ├── ...
├── clips
│   ├── 1Ab4vLMMAbY_2354-2439.mp4
├── images
│   ├── 1Ab4vLMMAbY_2354-2439
│   │   ├── 1Ab4vLMMAbY_2354.jpg
│   │   ├── ...
│   ├── ...
├── videos
│   ├── 1Ab4vLMMAbY.mp4
│   ├── ...
├── clips.csv
├── README.md
├── extract-clips-from-videos.py
├── splits.csv
└── videos.csv

Contact

Please reach out to Samy Tafasca ([email protected]) or Jean-Marc Odobez ([email protected]) if you have any questions, or if some videos are no longer available on YouTube.
