| html_url (string, 48-51 chars) | title (string, 5-268 chars) | comments (string, 70-51.8k chars) | body (string, 0-29.8k chars) | comment_length (int64, 16-1.52k) | text (string, 164-54.1k chars) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | Ok, thanks for the update.
Indeed, the metrics API of Datasets is framework agnostic, so we can't rely on a PyTorch-only implementation.
[This file](https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py) is probably what we need to implement.
| I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 31 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.2992970049381256,
-0.22287172079086304,
-0.06462191045284271,
0.015125968493521214,
0.13359065353870392,
-0.14127883315086365,
0.026443010196089745,
-0.13633601367473602,
-0.19549286365509033,
0.12304971367120743,
-0.6802902221679688,
0.12593263387680054,
-0.15478523075580597,
0.1198725... |
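The core quantity behind COCO-style evaluation is box IoU (intersection over union), which `cocoeval.py` uses to match predictions to ground-truth boxes. A minimal, framework-agnostic sketch of that computation (plain Python; an illustration, not the pycocotools implementation):

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Overlapping 2x2 boxes: intersection 1, union 7.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # prints 0.14285714285714285
```

A full COCO metric then thresholds this IoU at 0.5:0.95 and averages precision over classes and thresholds, which is exactly what the pycocotools `COCOeval` class encapsulates.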
https://github.com/huggingface/datasets/issues/2301 | Unable to setup dev env on Windows | Hi @gchhablani,
There are some 3rd-party dependencies that require building code in C. In this case, it is the library `python-Levenshtein`.
On Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visualstudio.micr... | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datas... | 52 | Unable to setup dev env on Windows
Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\e... | [
-0.4121341109275818,
-0.08557786047458649,
-0.09772764146327972,
-0.13878099620342255,
0.3114473521709442,
0.06837538629770279,
0.3448513448238373,
0.046587392687797546,
-0.10013053566217422,
0.15623492002487183,
0.043379876762628555,
0.24242666363716125,
0.1581839919090271,
0.216297283768... |
https://github.com/huggingface/datasets/issues/2300 | Add VoxPopuli | I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https://github.com/facebookresearch/voxpopuli/blob/main/voxpopuli/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alternati... | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**:... | 65 | Add VoxPopuli
## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech data... | [
-0.2925266921520233,
0.23287232220172882,
-0.048693519085645676,
-0.07535192370414734,
-0.15587884187698364,
-0.18521815538406372,
0.38092973828315735,
0.21007314324378967,
-0.026084594428539276,
0.2548181414604187,
-0.29555556178092957,
0.09394391626119614,
-0.5140136480331421,
0.22828216... |
https://github.com/huggingface/datasets/issues/2300 | Add VoxPopuli | Hey @jfainberg,
This sounds great! I think adding a dependency would not be a big problem, however automatically segmenting the data probably means that it would take a very long time to do:
```python
dataset = load_dataset("voxpopuli", "french")
```
=> so as a start I think your option 2 is the way to go! | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**:... | 54 | Add VoxPopuli
## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech data... | [
-0.36220473051071167,
0.20605099201202393,
-0.03822990506887436,
0.001416170853190124,
-0.1214069202542305,
-0.12960658967494965,
0.2858927547931671,
0.34885525703430176,
0.18100377917289734,
0.26731768250465393,
-0.12204397469758987,
0.17343279719352722,
-0.5791187286376953,
0.24865658581... |
https://github.com/huggingface/datasets/issues/2294 | Slow #0 when using map to tokenize. | Hi ! Have you tried other values for `preprocessing_num_workers` ? Is it always process 0 that is slower ?
There is no difference between process 0 and the others except that it processes the first shard of the dataset. | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | 39 | Slow #0 when using map to tokenize.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_... | [
-0.4360867738723755,
-0.334041953086853,
-0.033181339502334595,
-0.05558718740940094,
0.0697016566991806,
-0.19447867572307587,
0.381237268447876,
0.20731574296951294,
-0.27479979395866394,
0.051777929067611694,
0.315757155418396,
0.38787856698036194,
-0.23858888447284698,
-0.0081708747893... |
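The point about process 0 handling the first shard can be made concrete: `map` with `num_proc=N` hands each worker a contiguous slice, so if long Wikipedia articles precede short BookCorpus lines in an unshuffled concatenation, shard 0 is simply heavier. A toy sketch of contiguous sharding (an illustration; the real sharding logic lives inside `datasets`):

```python
def contiguous_shards(items, num_proc):
    """Split items into num_proc contiguous shards, mimicking how map
    distributes rows across worker processes."""
    q, r = divmod(len(items), num_proc)
    shards, start = [], 0
    for i in range(num_proc):
        size = q + (1 if i < r else 0)
        shards.append(items[start:start + size])
        start += size
    return shards

# Long "articles" first, short "lines" after: shard 0 carries the heavy rows.
rows = ["x" * 1000] * 4 + ["x" * 10] * 4
shards = contiguous_shards(rows, 2)
print([sum(map(len, s)) for s in shards])  # prints [4000, 40]
```

Shuffling before `map` (or mapping each corpus separately) evens out the per-shard workload.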
https://github.com/huggingface/datasets/issues/2294 | Slow #0 when using map to tokenize. | Hi, I have found the reason for it. Before using the map function to tokenize the data, I concatenate the wikipedia and bookcorpus first, like this:
```python
if args.dataset_name1 is not None:
dataset1 = load_dataset(args.dataset_name1, args.dataset_config_name1, split="train")
dataset1 = dataset1.remove_co... | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | 172 | Slow #0 when using map to tokenize.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_... | [
-0.4216775894165039,
-0.246376633644104,
-0.0001557975629111752,
0.06709447503089905,
0.030975567176938057,
-0.2085660696029663,
0.43102577328681946,
0.22374996542930603,
-0.25451669096946716,
0.045073118060827255,
0.2334742397069931,
0.3255724608898163,
-0.14147140085697174,
0.01760584302... |
https://github.com/huggingface/datasets/issues/2294 | Slow #0 when using map to tokenize. | That makes sense ! You can indeed use `map` on both datasets separately and then concatenate.
Another option is to concatenate, then shuffle, and then `map`. | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | 26 | Slow #0 when using map to tokenize.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_... | [
-0.4388079345226288,
-0.27525386214256287,
-0.0035830156411975622,
-0.01950499229133129,
0.03765517845749855,
-0.16884669661521912,
0.3846200108528137,
0.24709680676460266,
-0.26957160234451294,
0.07264751940965652,
0.18708264827728271,
0.3446466624736786,
-0.19860227406024933,
0.034010261... |
https://github.com/huggingface/datasets/issues/2288 | Load_dataset for local CSV files | Hi,
this is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as a bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):
```python
import ast
# load the dataset ... | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each containing a list of strings.
row example:
```
tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
``... | 72 | Load_dataset for local CSV files
The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each containing a list of strings.
row example:
```
tokens | labels
['I' , 'am', 'John']... | [
-0.11043217778205872,
-0.0460992157459259,
0.0017927540466189384,
0.0747479647397995,
0.44804656505584717,
0.06277728080749512,
0.4552137851715088,
0.21968235075473785,
0.29949021339416504,
-0.1329665333032608,
0.2060755044221878,
0.4624340534210205,
-0.07284623384475708,
0.079717442393302... |
https://github.com/huggingface/datasets/issues/2288 | Load_dataset for local CSV files | Hi,
Thanks for the reply.
I have already used ```ast.literal_eval``` to evaluate the string into a list, but I was getting another error:
```
ArrowInvalid: Could not convert X with type str: tried to convert to int
```
Why this happens ? Should labels be mapped to their ids and use int instead of str ? | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each containing a list of strings.
row example:
```
tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
``... | 55 | Load_dataset for local CSV files
The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each containing a list of strings.
row example:
```
tokens | labels
['I' , 'am', 'John']... | [
-0.17451202869415283,
-0.109178826212883,
-0.023106729611754417,
0.09802352637052536,
0.48854589462280273,
0.021457752212882042,
0.4982631504535675,
0.22232048213481903,
0.36614271998405457,
-0.05790624022483826,
0.02285994030535221,
0.5268845558166504,
-0.04110362380743027,
0.178615689277... |
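One way to resolve the `ArrowInvalid` error above is indeed to map string labels to integer ids after parsing with `ast.literal_eval`, so every row ends up with a uniform type. A minimal sketch (the `label2id` vocabulary here is hypothetical, not taken from the original CSV):

```python
import ast

# Hypothetical POS tag vocabulary, for illustration only.
label2id = {"PRON": 0, "AUX": 1, "PROPN": 2}

def parse_row(tokens_str, labels_str):
    """Parse stringified lists from CSV cells and convert labels to ids."""
    tokens = ast.literal_eval(tokens_str)
    labels = [label2id[l] for l in ast.literal_eval(labels_str)]
    return {"tokens": tokens, "labels": labels}

row = parse_row("['I', 'am', 'John']", "['PRON', 'AUX', 'PROPN']")
print(row)  # prints {'tokens': ['I', 'am', 'John'], 'labels': [0, 1, 2]}
```

The same function can be passed to `map` on a loaded dataset so Arrow sees consistent `list<string>` and `list<int>` columns.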
https://github.com/huggingface/datasets/issues/2285 | Help understanding how to build a dataset for language modeling as with the old TextDataset |
I received an answer for this question on the HuggingFace Datasets forum by @lhoestq
Hi !
If you want to tokenize line by line, you can use this:
```python
max_seq_length = 512
num_proc = 4
def tokenize_function(examples):
# Remove empty lines
examples["text"] = [line for line in examples["text"] if len(lin... | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the usual 512-token limit of most tokenizers.
I would like to understand what is the process to build a text datas... | 270 | Help understanding how to build a dataset for language modeling as with the old TextDataset
Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line overpasses the normal 512 tokens lim... | [
-0.239203080534935,
-0.0001861280034063384,
0.01759270578622818,
0.15123137831687927,
0.13693851232528687,
-0.16728396713733673,
0.516079306602478,
0.09827988594770432,
-0.05547741800546646,
-0.15961724519729614,
0.1894439160823822,
-0.19410671293735504,
-0.16465266048908234,
0.13402473926... |
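A common companion to the line-by-line tokenization above is to concatenate everything and regroup into fixed-size blocks. A tokenizer-free sketch of that grouping step (list items stand in for token ids; the ragged remainder is dropped, as in the usual run_mlm-style grouping, which is an assumption here):

```python
def group_into_blocks(token_lists, max_seq_length):
    """Concatenate token lists and split them into fixed-size blocks,
    dropping the ragged remainder at the end."""
    concatenated = [t for tokens in token_lists for t in tokens]
    total = (len(concatenated) // max_seq_length) * max_seq_length
    return [concatenated[i:i + max_seq_length]
            for i in range(0, total, max_seq_length)]

blocks = group_into_blocks([[1, 2, 3], [4, 5], [6, 7, 8, 9]], 4)
print(blocks)  # prints [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Applied batched via `map`, this turns documents of arbitrary length into uniform `max_seq_length` training examples.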
https://github.com/huggingface/datasets/issues/2279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | From the trace this seems like an error in the tokenizer library instead.
Do you mind opening an issue at https://github.com/huggingface/tokenizers instead? | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure... | 22 | Compatibility with Ubuntu 18 and GLIBC 2.27?
## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-... | [
-0.14281167089939117,
-0.17239275574684143,
0.10753278434276581,
0.13023465871810913,
0.05963877588510513,
-0.07720506191253662,
0.2908615171909332,
0.2598782777786255,
0.22593839466571808,
0.011138660833239555,
0.17404592037200928,
0.22679010033607483,
-0.20159238576889038,
0.020794451236... |
https://github.com/huggingface/datasets/issues/2279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | Hi @tginart, thanks for reporting.
I think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/issues/685 | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure... | 16 | Compatibility with Ubuntu 18 and GLIBC 2.27?
## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-... | [
-0.14281167089939117,
-0.17239275574684143,
0.10753278434276581,
0.13023465871810913,
0.05963877588510513,
-0.07720506191253662,
0.2908615171909332,
0.2598782777786255,
0.22593839466571808,
0.011138660833239555,
0.17404592037200928,
0.22679010033607483,
-0.20159238576889038,
0.020794451236... |
https://github.com/huggingface/datasets/issues/2278 | Loss result inGptNeoForCasual | Hi ! I think you might have to ask on the `transformers` repo or on the forum at /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2F
Closing since it's not related to this library | Is there any way you give the " loss" and "logits" results in the gpt neo api? | 27 | Loss result inGptNeoForCasual
Is there any way you give the " loss" and "logits" results in the gpt neo api?
Hi ! I think you might have to ask on the `transformers` repo or on the forum at /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2F
Closing since it's not related to this library | [
-0.17251914739608765,
-0.5047550797462463,
-0.04942307621240616,
0.47317689657211304,
-0.07214011996984482,
-0.3464438021183014,
0.0108578996732831,
0.052749838680028915,
-0.4997345805168152,
0.14229072630405426,
-0.09955909103155136,
-0.0388612300157547,
0.10090505331754684,
0.25429624319... |
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:
```
---------------------------------------------------------------------------
MemoryError Traceback (most rece... | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
-0.20849712193012238,
-0.11429031938314438,
0.048471610993146896,
0.434645414352417,
0.19933004677295685,
0.17065304517745972,
-0.13739603757858276,
0.2591969966888428,
-0.16948167979717255,
0.042984738945961,
0.026932524517178535,
0.2109091579914093,
0.005909740924835205,
-0.2283866554498... |
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Hi ! this looks like an important issue. Let me try to reproduce this.
Cc @samsontmr this might be related to the memory issue you have in #2134 | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
-0.16557899117469788,
0.008644308894872665,
0.025085624307394028,
0.3855164051055908,
0.2144162654876709,
0.13296055793762207,
0.0000971694607869722,
0.2391079217195511,
-0.23874132335186005,
0.04352074861526489,
0.10074494779109955,
0.184909388422966,
0.13490600883960724,
-0.1963893175125... |
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | @lhoestq Just went to open a similar issue.
It seems like deep copying (tested on master) the dataset object writes the table's record batches (`dset._data._batches`) into RAM.
To find the bug, I modified the `_deepcopy` function in `table.py` as follows:
```python
def _deepcopy(x, memo: dict):
"""deepcopy... | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
-0.11895932257175446,
0.08330532163381577,
0.12607289850711823,
0.3719974458217621,
0.017117014154791832,
0.11338696628808975,
-0.11379124969244003,
0.34220200777053833,
-0.37632253766059875,
-0.03133966028690338,
-0.00298906397074461,
0.2833091616630554,
0.1032993495464325,
-0.15682853758... |
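The deep-copy observation is easy to illustrate with plain Python objects (not Arrow tables, but the same copy semantics): a shallow copy keeps referencing the original nested data, while `copy.deepcopy` materializes a fresh copy of every nested object, which for memory-mapped record batches means pulling them into RAM. A small sketch:

```python
import copy

# A stand-in for a table holding record batches.
table = {"batches": [[1, 2, 3], [4, 5, 6]]}

shallow = copy.copy(table)
deep = copy.deepcopy(table)

# A shallow copy shares the underlying row data...
assert shallow["batches"] is table["batches"]
# ...while a deep copy duplicates every nested object in memory.
assert deep["batches"] is not table["batches"]
assert deep["batches"][0] is not table["batches"][0]
```

This is why a fix that avoids deep-copying the underlying batches keeps concatenation memory-mapped instead of resident.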
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Thanks for the insights @mariosasko ! I'm working on a fix.
Since this is a big issue I'll make a patch release as soon as this is fixed | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
-0.13985075056552887,
0.015114533714950085,
0.057670269161462784,
0.3696735203266144,
0.18395721912384033,
0.15967591106891632,
-0.03367818146944046,
0.266921728849411,
-0.2130461037158966,
0.008929387666285038,
0.0735272541642189,
0.18218234181404114,
0.10922607779502869,
-0.2046648114919... |
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
-0.17059192061424255,
0.009217318147420883,
0.029284002259373665,
0.3677372634410858,
0.17550209164619446,
0.15065082907676697,
-0.03633296862244606,
0.25570085644721985,
-0.2184762805700302,
0.02055484801530838,
0.07563771307468414,
0.200698584318161,
0.11371906846761703,
-0.1913737356662... |
https://github.com/huggingface/datasets/issues/2275 | SNLI dataset has labels of -1 | Hi @puzzler10,
For those examples where the `gold_label` field was empty, a label of -1 was allotted. In order to remove them, you can filter those samples out of the train/val/test splits. Here's how you can drop those rows from the dataset:
`dataset = load_dataset("snli")`
`dataset_test_filter = dataset['test'].filter(lambda exampl... | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107... | 69 | SNLI dataset has labels of -1
There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset v... | [
0.29897645115852356,
-0.37612736225128174,
-0.032856591045856476,
0.22228240966796875,
0.06941965222358704,
0.08158786594867706,
0.3312397003173828,
0.16919435560703278,
0.18132787942886353,
0.24445891380310059,
-0.35453903675079346,
0.4890282154083252,
-0.11607921123504639,
0.264320999383... |
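The filtering suggestion can be sketched with plain lists standing in for a `datasets.Dataset`; the predicate is the same one you would pass to `.filter` (the example rows below are made up):

```python
# Toy rows standing in for SNLI examples; -1 marks a missing gold label.
examples = [
    {"premise": "A man is eating.", "hypothesis": "Someone eats.", "label": 0},
    {"premise": "A dog runs.", "hypothesis": "A cat sleeps.", "label": -1},
    {"premise": "Kids play.", "hypothesis": "Children are playing.", "label": 0},
]

def keep(example):
    """Predicate in the style of dataset.filter(lambda ex: ex['label'] != -1)."""
    return example["label"] != -1

filtered = [ex for ex in examples if keep(ex)]
print(len(filtered))  # prints 2
```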
https://github.com/huggingface/datasets/issues/2272 | Bug in Dataset.class_encode_column | This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6
It was introduced in #2246: using map with `input_columns` doesn't return the other columns anymore
All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
| 24 | Bug in Dataset.class_encode_column
## Describe the bug
All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
This has been fixed in this commit: https://github.com/hugg... | [
-0.0357850007712841,
-0.15752172470092773,
-0.0868108794093132,
0.2376333326101303,
0.5289875864982605,
0.1175256296992302,
0.5686455965042114,
0.3704493045806885,
0.259745329618454,
0.19476863741874695,
-0.13328726589679718,
0.5430200695991516,
0.09942653775215149,
0.3341946601867676,
0... |
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ? | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 27 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
-0.19467999041080475,
0.11409839242696762,
-0.012763917446136475,
0.2849428951740265,
0.064592145383358,
-0.061417464166879654,
0.13234001398086548,
0.4565902352333069,
0.4422375559806824,
0.1792287826538086,
0.23589669167995453,
0.38085129857063293,
0.05458706617355347,
-0.056168209761381... |
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | Hi, I just ran into a similar error. Here is the minimal code to reproduce:
```python
from datasets import load_dataset, DatasetDict
ds = load_dataset('super_glue', 'multirc')
ds.save_to_disk('tempds')
ds = DatasetDict.load_from_disk('tempds')
```
```bash
Reusing dataset super_glue (/home/idahl/.cache/h... | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 226 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
-0.19467999041080475,
0.11409839242696762,
-0.012763917446136475,
0.2849428951740265,
0.064592145383358,
-0.061417464166879654,
0.13234001398086548,
0.4565902352333069,
0.4422375559806824,
0.1792287826538086,
0.23589669167995453,
0.38085129857063293,
0.05458706617355347,
-0.056168209761381... |
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | My current workaround is to remove the idx feature:
```python
from datasets import load_dataset, DatasetDict, Value
ds = load_dataset('super_glue', 'multirc')
ds = ds.remove_columns('idx')
ds.save_to_disk('tempds')
ds = DatasetDict.load_from_disk('tempds')
```
works. | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 29 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
-0.19467999041080475,
0.11409839242696762,
-0.012763917446136475,
0.2849428951740265,
0.064592145383358,
-0.061417464166879654,
0.13234001398086548,
0.4565902352333069,
0.4422375559806824,
0.1792287826538086,
0.23589669167995453,
0.38085129857063293,
0.05458706617355347,
-0.056168209761381... |
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | It looks like this issue comes from the order of the fields in the 'idx' struct that is different for some reason.
I'm looking into it. Note that as a workaround you can also flatten the nested features with `ds = ds.flatten()` | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 42 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
-0.19467999041080475,
0.11409839242696762,
-0.012763917446136475,
0.2849428951740265,
0.064592145383358,
-0.061417464166879654,
0.13234001398086548,
0.4565902352333069,
0.4422375559806824,
0.1792287826538086,
0.23589669167995453,
0.38085129857063293,
0.05458706617355347,
-0.056168209761381... |
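The `flatten()` workaround mentioned in this thread turns the nested `idx` struct into top-level dotted columns, which sidesteps the field-order mismatch. A rough illustration with plain dicts (field names follow the SuperGLUE `multirc` example, but the function is a simplified stand-in, not the `datasets` implementation):

```python
def flatten_features(features, parent_key="", sep="."):
    """Recursively flatten a nested feature dict into dotted column names,
    roughly what DatasetDict.flatten() does to struct columns."""
    flat = {}
    for key, value in features.items():
        name = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_features(value, name, sep))
        else:
            flat[name] = value
    return flat

schema = {"idx": {"paragraph": "int32", "question": "int32", "answer": "int32"},
          "label": "int64"}
print(flatten_features(schema))
# prints {'idx.paragraph': 'int32', 'idx.question': 'int32', 'idx.answer': 'int32', 'label': 'int64'}
```

With no struct columns left, there is no field ordering for the save/load round trip to disagree about.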
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | I just pushed a fix on `master`. We'll do a new release soon !
Thanks for reporting | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 17 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` -- fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
-0.19467999041080475,
0.11409839242696762,
-0.012763917446136475,
0.2849428951740265,
0.064592145383358,
-0.061417464166879654,
0.13234001398086548,
0.4565902352333069,
0.4422375559806824,
0.1792287826538086,
0.23589669167995453,
0.38085129857063293,
0.05458706617355347,
-0.056168209761381... |
https://github.com/huggingface/datasets/issues/2262 | NewsPH NLI dataset script fails to access test data. | Thanks @bhavitvyamalik for the fix !
The fix will be available in the next release.
It's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version="master"` in `load_dataset` to use the fixed version of this dataset. | In Newsph-NLI Dataset (#1192), it fails to access test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If yo... | 44 | NewsPH NLI dataset script fails to access test data.
In Newsph-NLI Dataset (#1192), it fails to access test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee... | [
-0.16071607172489166,
0.28491100668907166,
-0.12697595357894897,
0.17550452053546906,
0.08590205758810043,
0.040289465337991714,
0.2562074363231659,
0.36588481068611145,
0.04193374142050743,
0.3072817921638489,
0.005247786175459623,
0.043372176587581635,
0.015890859067440033,
0.21912215650... |
https://github.com/huggingface/datasets/issues/2256 | Running `dataset.map` with `num_proc > 1` uses a lot of memory | Thanks for reporting ! We are working on this and we'll do a patch release very soon. | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to a tremendous memory usage that requires swapping on disk and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
... | 17 | Running `datase.map` with `num_proc > 1` uses a lot of memory
## Describe the bug
Running `datase.map` with `num_proc > 1` leads to a tremendous memory usage that requires swapping on disk and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load... | [
-0.0754544585943222,
-0.3018319606781006,
-0.04138434678316116,
0.35142242908477783,
0.20282021164894104,
0.09861712902784348,
0.09969044476747513,
0.2842758893966675,
0.272355318069458,
0.2235054224729538,
0.18526223301887512,
0.4316782057285309,
-0.14579494297504425,
-0.04350971430540085... |
https://github.com/huggingface/datasets/issues/2256 | Running `datase.map` with `num_proc > 1` uses a lot of memory | We did a patch release to fix this issue.
It should be fixed in the new version 1.6.1
Thanks again for reporting and for the details :) | ## Describe the bug
Running `datase.map` with `num_proc > 1` leads to a tremendous memory usage that requires swapping on disk and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
... | 27 | Running `datase.map` with `num_proc > 1` uses a lot of memory
## Describe the bug
Running `datase.map` with `num_proc > 1` leads to a tremendous memory usage that requires swapping on disk and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load... | [
-0.07295110821723938,
-0.2929818332195282,
-0.04242735356092453,
0.3707071840763092,
0.20692743360996246,
0.08793415129184723,
0.08913225680589676,
0.2784176170825958,
0.262588232755661,
0.22961002588272095,
0.17976833879947662,
0.4309554398059845,
-0.14897708594799042,
-0.0277888812124729... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi ! Sorry to hear that. This may come from another issue then.
First, can we check if this latency comes from the dataset itself?
Can you try to load your dataset and benchmark the speed of querying random examples inside it?
```python
import time
import numpy as np
from datasets import load_from_disk
da... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 101 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi @lhoestq, here is the result. I additionally measured time to `load_from_disk`:
* 60GB
```
loading took: 22.618776321411133
ramdom indexing 100 times took: 0.10214924812316895
```
* 600GB
```
loading took: 1176.1764674186707
ramdom indexing 100 times took: 2.853600025177002
```
Hmm.. I double checke... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 59 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
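Since the Arrow files backing a saved dataset are memory-mapped, the raw cost of random access can be illustrated with the standard library alone. The 1 MiB file below is a throwaway stand-in for a real `.arrow` file, which would be orders of magnitude larger.

```python
import mmap
import os
import random
import tempfile
import time

# A small throwaway file stands in for a (much larger) memory-mapped Arrow file.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

start = time.time()
for _ in range(100):
    offset = random.randrange(0, len(mm) - 512)
    _ = mm[offset:offset + 512]  # touch a 512-byte slice, like one record
elapsed = time.time() - start
print(f"random access 100 times took: {elapsed:.6f}s")
```

On a cold cache, each slice access can trigger a page fault and a physical read, which is where the storage backend (SSD vs. HDD vs. network file system) starts to dominate.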
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I'm surprised by the speed change. Can you give more details about your dataset ?
The speed depends on the number of batches in the arrow tables and the distribution of the lengths of the batches.
You can access the batches by doing `dataset.data.to_batches()` (use only for debugging) (it doesn't bring data in memory... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 84 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Also if you could give us more info about your env like your OS, version of pyarrow and if you're using an HDD or a SSD | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 26 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Here are some details of my 600GB dataset. This is a dataset AFTER the `map` function and once I load this dataset, I do not use `map` anymore in the training. Regarding the distribution of the lengths, it is almost uniform (90% is 512 tokens, and 10% is randomly shorter than that -- typical setting for language modeli... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 118 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Regarding the environment, I am running the code on a cloud server. Here are some info:
```
Ubuntu 18.04.5 LTS # cat /etc/issue
pyarrow 3.0.0 # pip list | grep pyarrow
```
The data is stored in SSD and it is mounted to the machine via Network File System.
If you could point me to some of the ... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 76 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I am not sure how I could provide you with the reproducible code, since the problem only arises when the data is big. For the moment, I would share the part that I think is relevant. Feel free to ask me for more info.
```python
class MyModel(pytorch_lightning.LightningModule):
def setup(self, stage):
s... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 71 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi ! Sorry for the delay, I haven't had a chance to take a look at this yet. Are you still experiencing this issue ?
I'm asking because the latest patch release 1.6.2 fixed a few memory issues that could have led to slowdowns | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 45 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi! I just ran the same code with different datasets (one is 60 GB and another 600 GB), and the latter runs much slower. ETA differs by 10x. | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 28 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq and @hwijeen
Despite upgrading to datasets 1.6.2, I am still experiencing extremely slow (2h00) loading of a 300GB local dataset (shard size 1.1GB) on a local HDD (40MB/s read speed). This corresponds almost exactly to the total data divided by the reading speed, implying that it reads the entire dataset at each load.
St... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 227 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi @lhoestq, thanks for the quick turn-around. Actually the plain vanilla way, without any particular knack or fashion; I tried to look into the documentation for some alternative but couldn't find any:
> dataset = load_from_disk(dataset_path=os.path.join(datasets_dir,dataset_dir)) | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 36 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I’m facing the same issue when loading a 900GB dataset (stored via `save_to_disk`): `load_from_disk(path_to_dir)` takes 1.5 hours and htop consistently shows high IO rates > 120 M/s. | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 27 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @tsproisl same here, smells like ~~teen spirit~~ intended generator inadvertently ending up iterator
@lhoestq perhaps a solution to detect the bug location in the code is to track its signature via HD read-usage monitoring; an option is to add a tracking decorator on top of each function and sequentially close all hatches from top to... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 57 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I wasn't able to reproduce this on a toy dataset of around 300GB:
```python
import datasets as ds
s = ds.load_dataset("squad", split="train")
s4000 = ds.concatenate_datasets([s] * 4000)
print(ds.utils.size_str(s4000.data.nbytes)) # '295.48 GiB'
s4000.save_to_disk("tmp/squad_4000")
```
```python
import... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 130 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Just tried on Google Colab and got ~1min for a 15GB dataset (only 200 times SQuAD), while it should be instantaneous. The time is spent reading the Apache Arrow table from the memory-mapped file. This might come from a virtual disk management issue. I'm trying to see if I can still speed it up on Colab. | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 56 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq what is Google Colab's HD read speed? Is it possible to introspect it, incl. the make, like SSD or HDD? | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 20 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq Thank you! The issue is getting more interesting. The second script is still running, but it's definitely taking much longer than 15 seconds. | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 24 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Okay, here’s the output:
Blocks read 158396
Elapsed time: 529.10s
Also using datasets 1.6.2. Do you have any ideas how to pinpoint the problem? | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 24 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq, @tsproisl mmmh, still writing on my side, about 1h to go; thinking on it, are your large datasets all monoblock, unsharded? Mine is 335 shards of 1.18GB each. | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 29 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | The 529.10s was a bit too optimistic. I cancelled the reading process once before running it completely; therefore, the hard-drive cache probably did its work.
Here are three consecutive runs
First run (freshly written to disk):
Blocks read 309702
Elapsed time: 1267.74s
Second run (immediately after):
Blocks read... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 62 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq
First test
> elapsed time: 11219.05s
Second test running, bear with me. For Windows users, a slight trick to modify the original "disk0" string:
First find the relevant physical-unit key in the dictionary
```
import psutil
psutil.disk_io_counters(perdisk=True)
```
> {'PhysicalDrive0': sdiskio(read_count=18453... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 115 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
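Besides counting blocks read as above, a quick way to estimate raw sequential read throughput is to time a plain chunked read. This sketch uses only the standard library; the temporary file is a hypothetical stand-in, and on a real machine you would point `path` at one of the dataset's `.arrow` files instead.

```python
import os
import tempfile
import time

# Hypothetical stand-in file; point `path` at a real .arrow file to measure
# the actual disk instead of the OS cache.
path = os.path.join(tempfile.mkdtemp(), "blob.bin")
size = 8 << 20  # 8 MiB
with open(path, "wb") as f:
    f.write(os.urandom(size))

start = time.time()
total = 0
with open(path, "rb") as f:
    while chunk := f.read(1 << 20):  # read in 1 MiB chunks
        total += len(chunk)
elapsed = time.time() - start
print(f"read {total / 1e6:.1f} MB in {elapsed:.4f}s")
```

Comparing this raw throughput with the time `load_from_disk` takes on the same file helps tell a slow disk apart from unnecessary reads in the loading code.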
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Unfortunately no. Thanks for running the benchmark though, it shows that your machine does a lot of read operations. This is not expected: on other machines it does almost no read operations, which enables very fast loading.
I did some tests on google colab and have the same issue. The first time the dataset arrow f... | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 177 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Just want to say that I am seeing the same issue. Dataset size is 268GB and it takes **3 hours** to load with `load_from_disk`, using dataset version `1.9.0`. Filesystem underneath is `Lustre` | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 31 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi @lhoestq, confirmed Windows issue; the exact same code running on a Linux OS has a total loading time of about 3 minutes. | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 18 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hmm that's different from what I got. I was on Ubuntu when reporting the initial issue. | Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 16 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
-0.404059499502182,
0.21396146714687347,
-0.13430961966514587,
0.16329367458820343,
0.22690042853355408,
-0.1808895468711853,
0.21954874694347382,
0.40079012513160706,
0.11868315935134888,
-0.00920156016945839,
-0.3094535171985626,
0.06606590747833252,
0.17966286838054657,
0.14807660877704... |
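The report above boils down to a per-action table of mean duration, number of calls, and total time. For readers who want to reproduce that kind of measurement, here is a minimal stdlib sketch of the bookkeeping — the action name `get_train_batch` is a placeholder, not part of the original profiler:

```python
import time
from collections import defaultdict

class ActionProfiler:
    """Accumulate per-action call counts and total wall-clock time."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.calls = defaultdict(int)

    def record(self, action, fn, *args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.totals[action] += time.perf_counter() - start
        self.calls[action] += 1
        return result

    def report(self):
        # action -> (mean duration in s, num calls, total time in s)
        return {
            action: (self.totals[action] / self.calls[action],
                     self.calls[action],
                     self.totals[action])
            for action in self.totals
        }

profiler = ActionProfiler()
for _ in range(5):
    profiler.record("get_train_batch", sum, range(1000))
assert profiler.report()["get_train_batch"][1] == 5
```

Comparing such per-action totals before and after a library upgrade is exactly how the regression above was narrowed down.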
https://github.com/huggingface/datasets/issues/2250 | some issue in loading local txt file as Dataset for run_mlm.py | Hi,
1. try
```python
dataset = load_dataset("text", data_files={"train": ["a1.txt", "b1.txt"], "test": ["c1.txt"]})
```
instead.
Sadly, I can't reproduce the error on my machine. If the above code doesn't resolve the issue, try to update the library to the
newest version (`pip instal... | 
first of all, I tried to load 3 .txt files as a dataset (sure that the directory and permission is OK.), I face with the below error.
> FileNotFoundError: [Errno 2] No such file or directory: 'c'
by ... | 110 | some issue in loading local txt file as Dataset for run_mlm.py

first of all, I tried to load 3 .txt files as a dataset (sure that the directory and permission is OK.), I face with the below error.
> F... | [
-0.2187604010105133,
-0.16674171388149261,
0.03139006346464157,
0.4202874004840851,
0.4102851152420044,
0.2086266279220581,
0.3599502742290497,
0.3321824371814728,
0.09591742604970932,
-0.08650462329387665,
0.14329589903354645,
0.11218500882387161,
-0.3130260705947876,
0.11540620028972626,... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Hi @villmow, thanks for reporting.
Could you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.
Once you update Datasets, please confirm if the problem persists. | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 47 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.35363975167274475,
-0.410763680934906,
-0.09764104336500168,
0.2570491433143616,
-0.007619423326104879,
0.11197606474161148,
0.18846549093723297,
0.39104968309402466,
0.4355853199958801,
-0.013238923624157906,
0.2567104399204254,
0.35408899188041687,
0.009039025753736496,
-0.15175244212... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Hi @albertvillanova, thanks for the reply. I just tried the new version and the problem still persists.
Do I need to rebuild the saved dataset (which I load from disk) with the 1.6.0 version of datasets? My script loads this dataset and creates new datasets from it. I tried it without rebuilding.
See this short ... | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 70 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.36301472783088684,
-0.3355690538883209,
-0.08693239837884903,
0.29149767756462097,
0.00480444822460413,
0.08502500504255295,
0.20765312016010284,
0.3980059325695038,
0.4348949193954468,
-0.00896398350596428,
0.2125944197177887,
0.2832149863243103,
-0.02504633739590645,
-0.20425826311111... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | There can be a bit of delay between the creation of the processes but this delay should be the same for both your `map` calls. We should look into this.
Also if you have some code that reproduces this issue on Google Colab that'd be really useful !
Regarding the speed differences:
This looks like a similar issue a... | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 103 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.3194858133792877,
-0.47740572690963745,
-0.07475773245096207,
0.33022060990333557,
-0.06720136106014252,
0.041248664259910583,
0.20973245799541473,
0.32634973526000977,
0.43672415614128113,
0.029365545138716698,
0.24169345200061798,
0.3490334749221802,
0.01909375935792923,
-0.0388511456... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Upgrade to 1.6.1 solved my problem somehow. I did not change any of my code, but now it starts all processes around the same time. | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 25 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.3285200297832489,
-0.42511433362960815,
-0.09021824598312378,
0.31596818566322327,
0.01706496812403202,
0.09015648812055588,
0.2094435691833496,
0.4447200298309326,
0.4825790822505951,
0.06496257334947586,
0.2510420083999634,
0.31609249114990234,
-0.05608958378434181,
-0.314390480518341... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Nice ! I'm glad this works now.
Closing for now, but feel free to re-open if you experience this issue again. | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 21 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.30707424879074097,
-0.46950411796569824,
-0.09229348599910736,
0.26054397225379944,
0.02377930097281933,
0.08166927099227905,
0.20609712600708008,
0.3872794508934021,
0.4719318151473999,
0.02073497325181961,
0.2926185131072998,
0.3568669855594635,
0.015233035199344158,
-0.17234356701374... |
https://github.com/huggingface/datasets/issues/2239 | Error loading wikihow dataset | Hi @odellus, thanks for reporting.
The `wikihow` dataset has 2 versions:
- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.
- `sep`: Consisting of each paragraph and its summary.
Therefore, in order to load it, you have to specify which vers... | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | 71 | Error loading wikihow dataset
## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the... | [
-0.22749939560890198,
0.3702230155467987,
0.02380955219268799,
0.39879482984542847,
0.2537096440792084,
0.27477872371673584,
0.42765364050865173,
0.4391774833202362,
0.24584342539310455,
0.09350797533988953,
0.21628548204898834,
0.3853774666786194,
-0.012894228100776672,
0.1863866895437240... |
https://github.com/huggingface/datasets/issues/2239 | Error loading wikihow dataset | Good call out. I did try that and that's when it told me to download the
dataset. Don't believe I have tried it with local files. Will try first
thing in the morning and get back to you.
On Mon, Apr 19, 2021, 11:17 PM Albert Villanova del Moral <
***@***.***> wrote:
> Hi @odellus <https://github.com/odellus>, thanks ... | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | 168 | Error loading wikihow dataset
## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the... | [
-0.22749939560890198,
0.3702230155467987,
0.02380955219268799,
0.39879482984542847,
0.2537096440792084,
0.27477872371673584,
0.42765364050865173,
0.4391774833202362,
0.24584342539310455,
0.09350797533988953,
0.21628548204898834,
0.3853774666786194,
-0.012894228100776672,
0.1863866895437240... |
https://github.com/huggingface/datasets/issues/2239 | Error loading wikihow dataset | Hi @odellus, yes you are right.
Due to the server where the `wikihow` dataset is hosted, the dataset can't be downloaded automatically by `huggingface` and you have to download it manually as you did.
Nevertheless, you have to specify which dataset version you would like to load anyway:
```python
dataset = load... | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | 90 | Error loading wikihow dataset
## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the... | [
-0.22749939560890198,
0.3702230155467987,
0.02380955219268799,
0.39879482984542847,
0.2537096440792084,
0.27477872371673584,
0.42765364050865173,
0.4391774833202362,
0.24584342539310455,
0.09350797533988953,
0.21628548204898834,
0.3853774666786194,
-0.012894228100776672,
0.1863866895437240... |
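As the maintainer explains above, `wikihow` ships two configs (`all` and `sep`) and the loader refuses to guess one even when the data was downloaded manually. A small stdlib sketch of that config-validation pattern — `resolve_wikihow_config` is a hypothetical helper for illustration, not the real `datasets` API:

```python
KNOWN_CONFIGS = {"all", "sep"}

def resolve_wikihow_config(name=None):
    """Force the caller to pick one of the dataset's configs, as load_dataset does."""
    if name is None:
        raise ValueError(
            "Config name is missing. Please pick one among the available configs: "
            + ", ".join(sorted(KNOWN_CONFIGS))
        )
    if name not in KNOWN_CONFIGS:
        raise ValueError(f"Unknown config {name!r}; expected one of {sorted(KNOWN_CONFIGS)}")
    return name

assert resolve_wikihow_config("all") == "all"
```

Failing loudly on a missing config name is what produces the helpful error message instead of the opaque `AttributeError` from the original report.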
https://github.com/huggingface/datasets/issues/2237 | Update Dataset.dataset_size after transformed with map | @albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks! | After loading a dataset, if we transform it by using `.map` its `dataset_size` attribute is not updated. | 28 | Update Dataset.dataset_size after transformed with map
After loading a dataset, if we transform it by using `.map` its `dataset_size` attribute is not updated.
@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks! | [
-0.2140057235956192,
-0.3152701258659363,
-0.12281583249568939,
0.15191873908042908,
0.060608748346567154,
0.019681990146636963,
0.2813272476196289,
-0.12090153992176056,
0.1866588443517685,
0.1171899139881134,
-0.1872752159833908,
0.0038693936076015234,
0.3808465301990509,
0.1846315115690... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.
Do you already have some ideas of what you would like to implement and how ? | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 31 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Hey @lhoestq, thank you so much for the opportunity.
Although I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:
1. First, we would have to update the `ArrowWriter.write()` function here:
https://github.com/hu... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 235 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Interesting !
We keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. Let me know what you think.
Other that that, I really like the idea of chec... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 86 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | @lhoestq I'm glad you liked the idea!
I think that since the keys will be unique and deterministic in nature themselves, even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated).
And s... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 171 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | When users load their own data, they expect the order to stay the same. I think that shuffling the data can make things inconvenient.
> I think that this is also what was originally envisioned as mentioned in the documentation here:
This part was originally developed by tensorflow datasets, and tensorflow dataset... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 224 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Yes I think we want to keep the original order by default and only shuffle when the user asks for it (for instance by calling `dataset.shuffle()`). That’s how I had it in mind originally. | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 34 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
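The comments above settle on keeping the original order by default and shuffling only on an explicit `dataset.shuffle()` call, implemented by shuffling an array of indices rather than the rows or the keys. A stdlib sketch of that deterministic index shuffle (seeded, so repeated calls agree):

```python
import random

def shuffled_view(rows, seed):
    """Shuffle an array of indices, not the rows themselves; deterministic per seed."""
    indices = list(range(len(rows)))
    random.Random(seed).shuffle(indices)
    return [rows[i] for i in indices]

rows = ["a", "b", "c", "d", "e"]
assert shuffled_view(rows, seed=42) == shuffled_view(rows, seed=42)  # reproducible
assert sorted(shuffled_view(rows, seed=42)) == rows                  # same elements
assert rows == ["a", "b", "c", "d", "e"]                             # original order kept
```

Because both the generation order and the seeded permutation are deterministic, the shuffled dataset is reproducible without involving the example keys at all.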
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.
In my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 160 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | In `datasets` the keys are currently ignored.
For shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.
We can use it to:
1. detect duplicates
2. verify that ... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 62 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
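The checks proposed above — keys must be `str` or `int`, and their hashes can be tracked to detect duplicates — can be sketched in plain Python. This is an illustration of the idea, not the actual `ArrowWriter` code (in particular, the real proposal needs a hash that is stable across runs, which the built-in `hash()` is not):

```python
def check_keys(examples):
    """Validate that keys are str/int and unique across the generated examples."""
    seen_hashes = set()
    for key, _example in examples:
        if not isinstance(key, (str, int)):
            raise TypeError(
                f"Key {key!r} must be str or int, got {type(key).__name__}"
            )
        key_hash = hash(key)  # stand-in for a run-stable hash in the real writer
        if key_hash in seen_hashes:
            raise ValueError(f"Duplicate key found: {key!r}")
        seen_hashes.add(key_hash)

check_keys([(0, {"x": 1}), ("a", {"x": 2})])  # valid: unique str/int keys
```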
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.
> Maybe we can simply keep track of the hashes of each batch being written ? The size of the batch when the data are saved in arrow is 10 000... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 119 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.02753690630197525,
-0.21593116223812103,
0.029170531779527664,
0.47627606987953186,
0.06841940432786942,
-0.23636241257190704,
0.4168975353240967,
0.08041312545537949,
0.4312437176704407,
0.13391125202178955,
0.1672191619873047,
0.3502565026283264,
-0.020680399611592293,
0.20651644468307... |
https://github.com/huggingface/datasets/issues/2229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !
thanks :) | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the beginning, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
... | 25 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int`
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the beginning, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/dat... | [
-0.0539386123418808,
0.050974514335393906,
0.04516829922795296,
0.12626683712005615,
0.251376748085022,
0.017300529405474663,
0.44494572281837463,
0.2987632155418396,
0.6008680462837219,
0.24047143757343292,
0.10923555493354797,
0.4075712263584137,
0.01950695551931858,
0.21599027514457703,... |
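The fix discussed above is to make generators yield a single `str` or `int` key instead of a tuple. A minimal sketch of the pattern — `make_key` and `gen_examples` are illustrative names, not the xnli script's own:

```python
def make_key(split, idx):
    """Collapse a (split, index) pair into the single str key a generator should yield."""
    return f"{split}_{idx}"

def gen_examples(rows, split="train"):
    # yield a str key per example instead of the tuple (split, idx)
    for idx, row in enumerate(rows):
        yield make_key(split, idx), row

keys = [key for key, _ in gen_examples([{"premise": "p0"}, {"premise": "p1"}])]
assert keys == ["train_0", "train_1"]
assert all(isinstance(key, (str, int)) for key in keys)
```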
https://github.com/huggingface/datasets/issues/2229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | @lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks! | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the beginning, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
... | 20 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int`
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the beginning, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/dat... | [
-0.07023032754659653,
0.09619050472974777,
0.07785563170909882,
0.12735706567764282,
0.19009707868099213,
0.01261034794151783,
0.4497652053833008,
0.2994566857814789,
0.6488565802574158,
0.17797236144542694,
0.087863028049469,
0.441419780254364,
0.06090208515524864,
0.2075345814228058,
-... |
https://github.com/huggingface/datasets/issues/2226 | Batched map fails when removing all columns | I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:
```python
from datasets import load_dataset
sst = load_dataset("sst")
sst.set_format("torch", columns=["label"], output_all_columns=True)
ds = sst["train"]
# crashes
ds.map(
l... | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | 49 | Batched map fails when removing all columns
Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", rem... | [
-0.18979914486408234,
0.11866959929466248,
0.016956796869635582,
0.03455601632595062,
0.2957543134689331,
0.19975584745407104,
0.7953290939331055,
0.3469542860984802,
0.2681218385696411,
0.5083374977111816,
0.12785488367080688,
0.39863190054893494,
-0.22215960919857025,
-0.1497608274221420... |
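The crash above involves a batched `map` whose `remove_columns` list covers every column. As a conceptual reference, here is a stdlib sketch of what such a call is supposed to compute, with plain lists of dicts standing in for the Arrow-backed dataset:

```python
def batched_map(rows, fn, batch_size=2, remove_columns=()):
    """Apply fn to column-oriented batches, drop removed columns, re-merge into rows."""
    out = []
    columns = rows[0].keys()
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        batch = {col: [r[col] for r in chunk] for col in columns}
        new_columns = fn(batch)                  # fn returns the new columns
        kept = {c: v for c, v in batch.items() if c not in remove_columns}
        merged = {**kept, **new_columns}
        n = len(next(iter(merged.values())))
        out.extend({c: v[i] for c, v in merged.items()} for i in range(n))
    return out

rows = [{"text": "a b", "label": 0}, {"text": "c", "label": 1}, {"text": "d e f", "label": 0}]
result = batched_map(
    rows,
    lambda batch: {"n_tokens": [len(t.split()) for t in batch["text"]]},
    remove_columns=("text", "label"),   # removing *all* columns, as in the crash report
)
assert result == [{"n_tokens": 2}, {"n_tokens": 1}, {"n_tokens": 3}]
```

Even with every input column removed, the output should still be a well-formed dataset built from the function's returned columns — which is why the interaction with `set_format` was a bug.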
https://github.com/huggingface/datasets/issues/2226 | Batched map fails when removing all columns | Thanks for reporting and for providing this code to reproduce the issue, this is really helpful ! | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | 17 | Batched map fails when removing all columns
Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", rem... | [
-0.18979914486408234,
0.11866959929466248,
0.016956796869635582,
0.03455601632595062,
0.2957543134689331,
0.19975584745407104,
0.7953290939331055,
0.3469542860984802,
0.2681218385696411,
0.5083374977111816,
0.12785488367080688,
0.39863190054893494,
-0.22215960919857025,
-0.1497608274221420... |
https://github.com/huggingface/datasets/issues/2226 | Batched map fails when removing all columns | I merged a fix, it should work on `master` now :)
We'll do a new release soon ! | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | 18 | Batched map fails when removing all columns
Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", rem... | [
-0.18979914486408234,
0.11866959929466248,
0.016956796869635582,
0.03455601632595062,
0.2957543134689331,
0.19975584745407104,
0.7953290939331055,
0.3469542860984802,
0.2681218385696411,
0.5083374977111816,
0.12785488367080688,
0.39863190054893494,
-0.22215960919857025,
-0.1497608274221420... |
https://github.com/huggingface/datasets/issues/2218 | Duplicates in the LAMA dataset | Hi,
currently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:
```python
>>> from datasets import load_dataset, Dataset
>>> dataset = load_dataset('lama', split='train')
>>... | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | 94 | Duplicates in the LAMA dataset
I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13e... | [
0.26157283782958984,
-0.3211718499660492,
-0.030378246679902077,
0.6542946100234985,
0.3174534738063812,
-0.13887427747249603,
0.3127519488334656,
0.32697299122810364,
-0.5472853183746338,
0.33654361963272095,
-0.34542617201805115,
0.36006513237953186,
0.08196251839399338,
-0.2813021540641... |
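The pandas route suggested in the comment above (load into a DataFrame, call `drop_duplicates`, rebuild the dataset) boils down to order-preserving row de-duplication. A stdlib-only sketch of that idea, using a hypothetical helper rather than the `datasets` API:

```python
def drop_duplicate_rows(rows):
    """Keep the first occurrence of each row, preserving order --
    the same effect as pandas' DataFrame.drop_duplicates()."""
    seen = set()
    unique = []
    for row in rows:
        key = tuple(sorted(row.items()))  # hashable fingerprint of the row
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

rows = [
    {"sub": "Paris", "pred": "capital_of", "obj": "France"},
    {"sub": "Paris", "pred": "capital_of", "obj": "France"},  # duplicate evidence
    {"sub": "Rome", "pred": "capital_of", "obj": "Italy"},
]
deduped = drop_duplicate_rows(rows)
```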
https://github.com/huggingface/datasets/issues/2218 | Duplicates in the LAMA dataset | Oh, seems like my question wasn't specified well. I'm _not_ asking how to remove duplicates, but whether duplicates should be removed if I want to do the evaluation on the LAMA dataset as it was proposed in the original paper/repository? In other words, will I get the same result if evaluate on the de-duplicated datase... | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | 77 | Duplicates in the LAMA dataset
I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13e... | [
0.26157283782958984,
-0.3211718499660492,
-0.030378246679902077,
0.6542946100234985,
0.3174534738063812,
-0.13887427747249603,
0.3127519488334656,
0.32697299122810364,
-0.5472853183746338,
0.33654361963272095,
-0.34542617201805115,
0.36006513237953186,
0.08196251839399338,
-0.2813021540641... |
https://github.com/huggingface/datasets/issues/2218 | Duplicates in the LAMA dataset | So it looks like the person who added LAMA to the library chose to have one item per piece of evidence rather than one per relation - and in this case, there are duplicate pieces of evidence for the target relation
If I understand correctly, to reproduce reported results, you would have to aggregate predictions for ... | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | 77 | Duplicates in the LAMA dataset
I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13e... | [
0.26157283782958984,
-0.3211718499660492,
-0.030378246679902077,
0.6542946100234985,
0.3174534738063812,
-0.13887427747249603,
0.3127519488334656,
0.32697299122810364,
-0.5472853183746338,
0.33654361963272095,
-0.34542617201805115,
0.36006513237953186,
0.08196251839399338,
-0.2813021540641... |
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | Hi @nsaphra, thanks for reporting.
This issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?
```shell
pip install -U datasets
``` | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 31 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723238170146942,
-0.21355798840522766,
0.019322341307997704,
0.18529240787029266,
0.41951000690460205,
0.06305500864982605,
0.2729540765285492,
0.1801062524318695,
0.07569621503353119,
-0.05244571343064308,
-0.19403813779354095,
0.15459610521793365,
-0.07444856315851212,
0.297414541244... |
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are downloaded from `master` instead of the `1.2.1` repo.
You can try setting the env var `HF_SCRIPTS_VERSION="1.2.1"` as a workaround. Let me know if that helps. | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 42 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723238170146942,
-0.21355798840522766,
0.019322341307997704,
0.18529240787029266,
0.41951000690460205,
0.06305500864982605,
0.2729540765285492,
0.1801062524318695,
0.07569621503353119,
-0.05244571343064308,
-0.19403813779354095,
0.15459610521793365,
-0.07444856315851212,
0.297414541244... |
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | I just faced the same issue. I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip insta... | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 69 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723238170146942,
-0.21355798840522766,
0.019322341307997704,
0.18529240787029266,
0.41951000690460205,
0.06305500864982605,
0.2729540765285492,
0.1801062524318695,
0.07569621503353119,
-0.05244571343064308,
-0.19403813779354095,
0.15459610521793365,
-0.07444856315851212,
0.297414541244... |
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | Yep, seems to have fixed things! The conda package could really do with an update. Thanks! | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 16 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723238170146942,
-0.21355798840522766,
0.019322341307997704,
0.18529240787029266,
0.41951000690460205,
0.06305500864982605,
0.2729540765285492,
0.1801062524318695,
0.07569621503353119,
-0.05244571343064308,
-0.19403813779354095,
0.15459610521793365,
-0.07444856315851212,
0.297414541244... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 22 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.35207632184028625,
0.1850520670413971,
-0.10664539784193039,
0.24557937681674957,
0.39981594681739807,
0.03360707685351372,
0.3876354694366455,
0.20610398054122925,
0.30574896931648254,
0.14539271593093872,
-0.2683596909046173,
-0.19080810248851776,
0.282751202583313,
0.0175518561154603... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | I saw this on their website when we request to download the dataset:

Can we still request them link for the dataset and make a PR? @lhoestq @yjernite | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 29 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.35207632184028625,
0.1850520670413971,
-0.10664539784193039,
0.24557937681674957,
0.39981594681739807,
0.03360707685351372,
0.3876354694366455,
0.20610398054122925,
0.30574896931648254,
0.14539271593093872,
-0.2683596909046173,
-0.19080810248851776,
0.282751202583313,
0.0175518561154603... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon ! | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 21 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.35207632184028625,
0.1850520670413971,
-0.10664539784193039,
0.24557937681674957,
0.39981594681739807,
0.03360707685351372,
0.3876354694366455,
0.20610398054122925,
0.30574896931648254,
0.14539271593093872,
-0.2683596909046173,
-0.19080810248851776,
0.282751202583313,
0.0175518561154603... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ... | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 25 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.35207632184028625,
0.1850520670413971,
-0.10664539784193039,
0.24557937681674957,
0.39981594681739807,
0.03360707685351372,
0.3876354694366455,
0.20610398054122925,
0.30574896931648254,
0.14539271593093872,
-0.2683596909046173,
-0.19080810248851776,
0.282751202583313,
0.0175518561154603... |
https://github.com/huggingface/datasets/issues/2211 | Getting checksum error when trying to load lc_quad dataset | Hi,
I've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:
```bash
datasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications
```
| I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, ge... | 31 | Getting checksum error when trying to load lc_quad dataset
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading ... | [
-0.15864777565002441,
0.043078746646642685,
-0.0340239591896534,
0.35635891556739807,
0.26776716113090515,
0.014728926122188568,
0.07140327244997025,
0.2469019740819931,
0.3710916042327881,
-0.04227321222424507,
-0.10491660237312317,
0.0677453875541687,
-0.037775918841362,
0.07531490921974... |
https://github.com/huggingface/datasets/issues/2210 | dataloading slow when using HUGE dataset | Hi ! Yes this is an issue with `datasets<=1.5.0`
This issue has been fixed by #2122 , we'll do a new release soon :)
For now you can test it on the `master` branch. | Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle ddp training.
When looking at the pytorch... | 34 | dataloading slow when using HUGE dataset
Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle... | [
-0.5256723761558533,
-0.213547021150589,
-0.06311199814081192,
0.2832505404949188,
0.13892103731632233,
-0.02778422273695469,
0.15010325610637665,
0.23279748857021332,
-0.15769262611865997,
-0.09218926727771759,
-0.11915000528097153,
0.14390508830547333,
-0.1486590951681137,
-0.18230821192... |
https://github.com/huggingface/datasets/issues/2210 | dataloading slow when using HUGE dataset | Hi, thank you for your answer. I did not realize that my issue stems from the same problem. | Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle ddp training.
When looking at the pytorch... | 18 | dataloading slow when using HUGE dataset
Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle... | [
-0.5256723761558533,
-0.213547021150589,
-0.06311199814081192,
0.2832505404949188,
0.13892103731632233,
-0.02778422273695469,
0.15010325610637665,
0.23279748857021332,
-0.15769262611865997,
-0.09218926727771759,
-0.11915000528097153,
0.14390508830547333,
-0.1486590951681137,
-0.18230821192... |
https://github.com/huggingface/datasets/issues/2207 | making labels consistent across the datasets | Hi ! The ClassLabel feature type encodes the labels as integers.
The integer corresponds to the index of the label name in the `names` list of the ClassLabel.
Here that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).
You can get the label names back by using `a.features['label'].int... | Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels however are not consistent with the actual labels sometimes, for instance in case of XNLI, the actual labels are 0,1,2, but if ... | 51 | making labels consistent across the datasets
Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels however are not consistent with the actual labels sometimes, for instance in cas... | [
0.016260795295238495,
-0.1287364810705185,
-0.06999016553163528,
0.4029693901538849,
0.382713258266449,
-0.13002808392047882,
0.42602837085723877,
0.0234171524643898,
0.08527489006519318,
0.27108532190322876,
-0.23050154745578766,
0.5326740145683289,
-0.015341627411544323,
0.40478694438934... |
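The index-based label encoding explained in the comment above can be illustrated with a minimal stand-in class. This is a sketch of the mapping only, not the real `datasets.ClassLabel`:

```python
class MiniClassLabel:
    """Toy version of datasets.ClassLabel: a label is stored as the
    integer index of its name in the `names` list."""
    def __init__(self, names):
        self.names = names

    def int2str(self, i):
        # integer label -> label name
        return self.names[i]

    def str2int(self, name):
        # label name -> integer label
        return self.names.index(name)

label = MiniClassLabel(["entailment", "neutral", "contradiction"])
```

With this scheme the raw values 0, 1, 2 seen in XNLI rows decode to 'entailment', 'neutral', 'contradiction' respectively.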
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | Hi,
the output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.
... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 53 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.23382829129695892,
0.292187362909317,
-0.058828771114349365,
0.06909643113613129,
0.27344268560409546,
-0.08607129007577896,
0.1836448758840561,
0.3055635690689087,
-0.53685063123703,
-0.19321967661380768,
-0.046171002089977264,
0.4354698359966278,
-0.0010596851352602243,
-0.23817433416... |
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | Hi @yana-xuyan, thanks for reporting.
As clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. A... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 98 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.23382829129695892,
0.292187362909317,
-0.058828771114349365,
0.06909643113613129,
0.27344268560409546,
-0.08607129007577896,
0.1836448758840561,
0.3055635690689087,
-0.53685063123703,
-0.19321967661380768,
-0.046171002089977264,
0.4354698359966278,
-0.0010596851352602243,
-0.23817433416... |
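The `int8` constraint on `special_tokens_mask` described in the comment above can be demonstrated with the stdlib `array` module, whose `'b'` typecode enforces the same signed 8-bit range of -128 to 127 (this only illustrates the range, not the actual pyarrow storage):

```python
import array

# Values in [-128, 127] fit in a signed 8-bit slot, as datasets assumes
# when it downcasts special_tokens_mask to int8.
ok = array.array('b', [0, 1, -128, 127])

# A value outside that range (e.g. a large added-token id leaking into
# the mask) overflows, analogous to the pyarrow error in the issue.
try:
    array.array('b', [50261])
    overflowed = False
except OverflowError:
    overflowed = True
```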
https://github.com/huggingface/datasets/issues/2200 | _prepare_split will overwrite DatasetBuilder.info.features | Hi ! This might be related to #2153
You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch
I'm opening a PR to fix this and also to figure out how it was not caught in the tests
EDIT: opened #2201 | Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | 43 | _prepare_split will overwrite DatasetBuilder.info.features
Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_featu... | [
-0.24060194194316864,
-0.03829101473093033,
-0.10776720196008682,
0.17833778262138367,
0.2982982099056244,
0.2197922319173813,
0.45460841059684753,
0.2158689796924591,
-0.33275455236434937,
0.11496510356664658,
0.12917564809322357,
0.17151258885860443,
0.0863349437713623,
0.48735311627388,... |
https://github.com/huggingface/datasets/issues/2200 | _prepare_split will overwrite DatasetBuilder.info.features | > Hi ! This might be related to #2153
>
> You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch
> I'm opening a PR to fix this and also to figure out how it was not caught in the tests
>
> EDIT: opened #2201
Glad to hear that! Thank you for your fix, I'm new to hug... | Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | 67 | _prepare_split will overwrite DatasetBuilder.info.features
Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_featu... | [
-0.24060194194316864,
-0.03829101473093033,
-0.10776720196008682,
0.17833778262138367,
0.2982982099056244,
0.2197922319173813,
0.45460841059684753,
0.2158689796924591,
-0.33275455236434937,
0.11496510356664658,
0.12917564809322357,
0.17151258885860443,
0.0863349437713623,
0.48735311627388,... |
https://github.com/huggingface/datasets/issues/2196 | `load_dataset` caches two arrow files? | Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to... | Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | 64 | `load_dataset` caches two arrow files?
Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be a... | [
-0.02062368206679821,
-0.1851760894060135,
-0.13287143409252167,
0.6645824909210205,
-0.08110970258712769,
0.3052114248275757,
0.1752958595752716,
0.2564815282821655,
0.2874499261379242,
-0.15786445140838623,
-0.009086311794817448,
0.1946641355752945,
0.08018267154693604,
-0.51300019025802... |
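The `cache-*` naming explained in the comment above comes from fingerprinting the transform that produced the file. A hedged stdlib sketch of that fingerprinting idea — the real `datasets` implementation hashes the function and arguments differently, and this helper is hypothetical:

```python
import hashlib

def cache_filename(prev_fingerprint, transform_repr):
    """Derive a deterministic cache file name from the previous dataset
    fingerprint and a textual description of the applied transform."""
    h = hashlib.sha256()
    h.update(prev_fingerprint.encode())
    h.update(transform_repr.encode())
    return f"cache-{h.hexdigest()[:16]}.arrow"

name = cache_filename("json-train", "map:lowercase_text")
```

Because the name is a pure function of its inputs, re-running the same transform on the same dataset resolves to the same `cache-*` file, which is how the cached result gets reused.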
https://github.com/huggingface/datasets/issues/2196 | `load_dataset` caches two arrow files? | Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved mapped result, using `save_to_disk`, to another location. At this location, the following file is created:`355G cache-ed205e500a7dc44c.arrow`
To my observation, both `load_dataset` and `map` creates `cache-*` files, and I... | Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | 61 | `load_dataset` caches two arrow files?
Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be a... | [
0.01727607287466526,
-0.14640125632286072,
-0.1149081364274025,
0.6492548584938049,
-0.06654690206050873,
0.31068435311317444,
0.24094057083129883,
0.2578831613063812,
0.302135169506073,
-0.25634127855300903,
-0.03561738133430481,
0.2787819802761078,
0.1161758154630661,
-0.4971510171890259... |
https://github.com/huggingface/datasets/issues/2196 | `load_dataset` caches two arrow files? | This is a wrong report -- `cache-*` files are created only my `map`, not by `load_dataset`. | Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | 16 | `load_dataset` caches two arrow files?
Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be a... | [
-0.0013232697965577245,
-0.20793098211288452,
-0.12528638541698456,
0.7411249876022339,
-0.14882920682430267,
0.2700349688529968,
0.2812333106994629,
0.23481984436511993,
0.3605193495750427,
-0.23589098453521729,
-0.01162185613065958,
0.2067556083202362,
0.1341191977262497,
-0.479701846837... |
https://github.com/huggingface/datasets/issues/2195 | KeyError: '_indices_files' in `arrow_dataset.py` | Thanks @samsontmr this should be fixed on master now
Feel free to reopen if you're still having issues | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line ... | 18 | KeyError: '_indices_files' in `arrow_dataset.py`
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/l... | [
-0.34370043873786926,
0.06250933557748795,
-0.061988405883312225,
0.6979848146438599,
-0.07629922032356262,
0.15042346715927124,
0.19406910240650177,
0.4931739270687103,
0.5131545662879944,
0.14809001982212067,
0.019221670925617218,
0.1839224100112915,
-0.39186057448387146,
0.0680303201079... |