---
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- NLP
license: cc-by-4.0
---

# Dataset Card for Dataset Name

<!-- Provide a quick summary of the dataset. -->

This dataset is a filtered version of [BookCorpus](https://huggingface.co/datasets/bookcorpus/bookcorpus) containing only bias-neutral text (with respect to gender, race, and religion).

> **__NOTE:__** This dataset is derived from BookCorpus, for which we do not have publication rights. Therefore, this repository only provides indices referring to bias-neutral entries within the BookCorpus dataset on Hugging Face. By using `load_dataset('aieng-lab/biasneutral', trust_remote_code=True)`, both the indices and the full BookCorpus dataset are downloaded locally. The indices are then used to construct the BIASNEUTRAL dataset. The initial dataset generation takes a few minutes, but subsequent loads are cached for faster access. This setup only works with `datasets` versions between `3.0.2` and `3.6.0` (successfully tested with Python 3.9-3.11), as later versions deprecated `trust_remote_code`. If your project depends on other `datasets` versions, we recommend generating the dataset once in a separate Python environment, saving the resulting data to disk, and then loading the local copy within your main project.

```python
from datasets import load_dataset

biasneutral = load_dataset('aieng-lab/biasneutral', trust_remote_code=True, split='train')
```

## Examples

Index | Text
------|-----
8498 | no one sitting near me could tell that i was seething with rage .
8500 | by now everyone knew we were an item , the thirty-five year old business mogul , and the twenty -three year old pop sensation .
8501 | we 'd been able to keep our affair hidden for all of two months and that only because of my high security .
8503 | i was n't too worried about it , i just do n't like my personal life splashed across the headlines , but i guess it came with the territory .
8507 | i 'd sat there prepared to be bored out of my mind for the next two hours or so .
8508 | i 've seen and had my fair share of models over the years , and they no longer appealed .
8512 | when i finally looked up at the stage , my breath had got caught in my lungs .
8516 | i pulled my phone and cancelled my dinner date and essentially ended the six-month relationship i 'd been barely having with another woman .
8518 | when i see something that i want , i go after it .
8529 | if i had anything to say about that , it would be a permanent thing , or until i 'd had my fill at least .


## Dataset Details

<!-- Provide the basic links for the dataset. -->

- **Repository:** [github.com/aieng-lab/gradiend](https://github.com/aieng-lab/gradiend)
- **Paper:** [![arXiv](https://img.shields.io/badge/arXiv-2502.01406-blue.svg)](https://arxiv.org/abs/2502.01406)
- **Original Data:** [BookCorpus](https://huggingface.co/datasets/bookcorpus/bookcorpus)


## Uses

<!-- Address questions around how the dataset is intended to be used. -->

This dataset is suitable for training and evaluating language models. For example, because it contains no gender-related words, it is well suited for assessing the language modeling capabilities of both gender-biased and gender-neutral models on masked language modeling (MLM) tasks, enabling an evaluation that is independent of gender bias. The same holds for race and religion bias.
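
As an illustration of such an MLM evaluation setup, the sketch below masks a single token of a dataset entry. The helper and the `[MASK]` string are illustrative only; a real evaluation would use the tokenizer and mask token of the model under test.

```python
import random

def mask_one_token(text: str, mask_token: str = "[MASK]", seed: int = 0):
    """Replace one randomly chosen whitespace-separated token with the mask
    token; return the masked sentence and the original token as the MLM label."""
    rng = random.Random(seed)
    tokens = text.split()
    i = rng.randrange(len(tokens))
    label = tokens[i]
    tokens[i] = mask_token
    return " ".join(tokens), label

masked, label = mask_one_token("when i see something that i want , i go after it .")
```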


## Dataset Creation

We generated this dataset by filtering the BookCorpus dataset, keeping only entries that match all of the following criteria:
- The entry contains at least 50 characters
- No name from [aieng-lab/namextend](https://huggingface.co/datasets/aieng-lab/namextend) is contained
- No gender-specific pronoun is contained (he/she/him/her/his/hers/himself/herself)
- No gender-specific noun is contained, according to the 2421 plural-extended entries of this [gendered-word dataset](https://github.com/ecmonsen/gendered_words)
- No [bias attribute word](https://github.com/McGill-NLP/bias-bench/blob/main/data/bias_attribute_words.json) (gender, race, religion) as defined by [Meade et al. (2022)](https://arxiv.org/abs/2110.08527) is contained
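
The criteria above amount to a word-level blocklist filter. A minimal sketch, with tiny placeholder word lists (the real filter uses the full name, noun, and bias-attribute lists linked above):

```python
import re

# Placeholder word lists -- the real filter uses aieng-lab/namextend, the
# gendered_words dataset, and bias_attribute_words.json from bias-bench.
PRONOUNS = {"he", "she", "him", "her", "his", "hers", "himself", "herself"}
NAMES = {"john", "mary"}
GENDERED_NOUNS = {"king", "queen"}
BIAS_ATTRIBUTE_WORDS = {"church", "mosque"}

BLOCKLIST = PRONOUNS | NAMES | GENDERED_NOUNS | BIAS_ATTRIBUTE_WORDS

def is_bias_neutral(text: str) -> bool:
    """Keep an entry only if it is long enough and contains no blocked word."""
    if len(text) < 50:
        return False
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return tokens.isdisjoint(BLOCKLIST)
```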


## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

This dataset was originally introduced as part of [GRADIEND](https://arxiv.org/abs/2502.01406). It is not used in the final version of the paper (though it appears in earlier versions, e.g., [v2](https://arxiv.org/abs/2502.01406v2)), as it was generalized to the broader *bias-neutral* dataset [BIASNEUTRAL](https://huggingface.co/datasets/aieng-lab/biasneutral).
**BibTeX:**

```bibtex
@misc{drechsel2025gradiendfeaturelearning,
  title={{GRADIEND}: Feature Learning within Neural Networks Exemplified through Biases},
  author={Jonathan Drechsel and Steffen Herbold},
  year={2025},
  eprint={2502.01406},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2502.01406},
}
```


## Dataset Card Authors

[jdrechsel](https://huggingface.co/jdrechsel)