Improve dataset card for Text-ADBench
This PR significantly enhances the dataset card for Text-ADBench by:
- Linking to the official paper on the Hugging Face Hub: https://huggingface.co/papers/2507.12295.
- Providing a direct link to the associated GitHub repository.
- Populating the dataset description with the paper's abstract and detailing the included text datasets and LLM embeddings.
- Adding comprehensive "Sample Usage" instructions directly from the project's GitHub README.
- Filling in details for "Uses", "Dataset Structure", "Dataset Creation", and "Bias, Risks, and Limitations".
- Updating the citation information with the main paper's BibTeX and links/citations for underlying datasets and LLMs.
- Adding relevant `tags` and `language` to the metadata for better discoverability.
These changes make the dataset card more informative, discoverable, and user-friendly for researchers interested in text anomaly detection.

---
license: mit
task_categories:
- text-classification
- feature-extraction
tags:
- anomaly-detection
- benchmark
- embeddings
- llms
language:
- en
---

# Text-ADBench: Text Anomaly Detection Benchmark based on LLMs Embedding

This repository provides **Text-ADBench**, a comprehensive benchmark for text anomaly detection, leveraging embeddings from diverse pre-trained language models across a wide array of text datasets.

**Paper**: [Text-ADBench: Text Anomaly Detection Benchmark based on LLMs Embedding](https://huggingface.co/papers/2507.12295)
**Code**: [https://github.com/Feng-001/Text-ADBench](https://github.com/Feng-001/Text-ADBench)

## Abstract

Text anomaly detection is a critical task in natural language processing (NLP), with applications spanning fraud detection, misinformation identification, spam detection and content moderation, etc. Despite significant advances in large language models (LLMs) and anomaly detection algorithms, the absence of standardized and comprehensive benchmarks for evaluating the existing anomaly detection methods on text data limits rigorous comparison and development of innovative approaches. This work performs a comprehensive empirical study and introduces a benchmark for text anomaly detection, leveraging embeddings from diverse pre-trained language models across a wide array of text datasets. Our work systematically evaluates the effectiveness of embedding-based text anomaly detection by incorporating (1) early language models (GloVe, BERT); (2) multiple LLMs (LLaMa-2, LLaMa-3, Mistral, OpenAI (small, ada, large)); (3) multi-domain text datasets (news, social media, scientific publications); (4) comprehensive evaluation metrics (AUROC, AUPRC). Our experiments reveal a critical empirical insight: embedding quality significantly governs anomaly detection efficacy, and deep learning-based approaches demonstrate no performance advantage over conventional shallow algorithms (e.g., KNN, Isolation Forest) when leveraging LLM-derived embeddings. In addition, we observe strongly low-rank characteristics in cross-model performance matrices, which enables an efficient strategy for rapid model evaluation (or embedding evaluation) and selection in practical applications. Furthermore, by open-sourcing our benchmark toolkit, which includes all embeddings from different models together with the code at [https://github.com/Feng-001/Text-ADBench](https://github.com/Feng-001/Text-ADBench), this work provides a foundation for future research in robust and scalable text anomaly detection systems.

## Dataset Details

This repository covers 8 text datasets: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, and Reuters21578. For each of these multi-domain datasets (news, social media, scientific publications), the repository provides:

* The original textual data.
* Preprocessed data.
* Multiple embeddings derived from various pre-trained language models, including:
  * Early language models (GloVe, BERT)
  * Multiple LLMs (LLaMa-2, LLaMa-3, Mistral)
  * OpenAI embedding models (text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002)

### Dataset Description

Text-ADBench addresses the critical task of text anomaly detection by providing a standardized and comprehensive benchmark. It facilitates rigorous comparison and development of innovative approaches by systematically evaluating embedding-based text anomaly detection across diverse models and datasets. The benchmark highlights that embedding quality significantly influences anomaly detection performance and that traditional shallow algorithms can be as effective as deep learning approaches when utilizing LLM-derived embeddings.

- **Curated by:** Feng Xiao and Jicong Fan
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources

- **Repository:** [https://github.com/Feng-001/Text-ADBench](https://github.com/Feng-001/Text-ADBench)
- **Paper:** [https://huggingface.co/papers/2507.12295](https://huggingface.co/papers/2507.12295)

## Uses

### Direct Use

This dataset is intended for researchers and practitioners in natural language processing and artificial intelligence, specifically for:

* Benchmarking existing text anomaly detection methods.
* Developing and evaluating new anomaly detection algorithms on diverse text data.
* Studying the impact of various LLM embeddings on anomaly detection efficacy.
* Exploring efficient strategies for rapid model evaluation and selection in practical applications, leveraging the observed low-rank characteristics of cross-model performance matrices.

### Out-of-Scope Use

This dataset is not intended for:

* General text classification tasks unrelated to anomaly detection.
* Training large language models from scratch, as it primarily provides embeddings and benchmark data, not raw corpus data for pre-training.
* Applications where biases present in the original source datasets or embedding models could lead to unfair or discriminatory outcomes without proper mitigation.

## Dataset Structure

The repository contains 8 distinct text datasets: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, and Reuters21578. For each dataset, the repository provides:

* Original textual data (e.g., in `text_data/`).
* Preprocessed versions of the text data.
* Multiple sets of embeddings, generated using a range of models including GloVe, BERT, Llama-2, Llama-3, Mistral, and OpenAI's text embedding models (e.g., in `text_embedding/`).

For a detailed file structure, please refer to the [GitHub repository](https://github.com/Feng-001/Text-ADBench), or list the repository files programmatically as sketched below.
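As a minimal sketch, the layout can be inspected without downloading anything via the `huggingface_hub` client (the folder prefixes below are the `text_data/` and `text_embedding/` directories described above):

```python
from huggingface_hub import list_repo_files

# List every file in the dataset repository without downloading it.
files = list_repo_files("Feng-001/Text-ADBench", repo_type="dataset")

# Group by the two top-level folders described above.
for prefix in ("text_data/", "text_embedding/"):
    matching = [f for f in files if f.startswith(prefix)]
    print(f"{prefix}: {len(matching)} files, e.g. {matching[:3]}")
```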
## Dataset Creation

### Curation Rationale

The dataset was created to address a critical gap in the field of text anomaly detection: the absence of standardized and comprehensive benchmarks. By providing a unified framework, Text-ADBench enables rigorous comparison and facilitates the development of innovative approaches to text anomaly detection, leveraging the advancements in large language models.

### Source Data

#### Data Collection and Processing

The benchmark leverages a wide array of publicly accessible multi-domain text datasets. The original textual data was collected and preprocessed, and embeddings were then generated with diverse pre-trained language models, encompassing both early models and modern LLMs. The benchmark toolkit also supports generating embeddings for new text data.

#### Who are the source data producers?

The source data producers include the original authors and maintainers of the 8 constituent text datasets (e.g., 20Newsgroups, IMDB, SMS_SPAM). The benchmark and its generated embeddings were curated by Feng Xiao and Jicong Fan, the authors of the Text-ADBench paper.
## Bias, Risks, and Limitations

Users should be aware of the risks, biases, and limitations inherent to any text-based datasets and large language models, including:

* **Data Biases:** The underlying 8 text datasets may contain social biases (e.g., gender, racial, political), historical biases, or domain-specific nuances that could be propagated through the embeddings and affect anomaly detection performance.
* **Model Limitations:** The performance of anomaly detection algorithms heavily depends on embedding quality. While the benchmark evaluates various LLMs, their specific limitations (e.g., in representing nuanced meanings, handling rare words, or biases present in their training data) can influence results.
* **Definition of Anomaly:** The concept of "anomaly" is subjective and context-dependent. The benchmark uses established datasets, but users should consider their specific application's definition of anomaly.
### Recommendations

Users are strongly encouraged to:

* Familiarize themselves with the characteristics, potential biases, and limitations of each individual dataset by consulting its original documentation or papers.
* Critically evaluate the results of anomaly detection, considering the specific context and potential impact of biases.
* Perform their own bias analyses when deploying models trained or evaluated with this benchmark in sensitive applications.

## Sample Usage

To use Text-ADBench, download the datasets and precomputed embeddings, then run the experiments described below.

### Prerequisites

1. **Environment:**
   - Python 3.8
   - Install dependencies: `pip install -r requirements.txt` (from the [GitHub repository](https://github.com/Feng-001/Text-ADBench))

2. **Download Datasets and Embeddings:**
   - [Text data](https://huggingface.co/datasets/Feng-001/Text-ADBench/tree/main/text_data)
   - [Text embeddings](https://huggingface.co/datasets/Feng-001/Text-ADBench/tree/main/text_embedding)
   - Revise the configuration file `configs.py` (in the cloned GitHub repository) and set valid `DATA_DIR` and `EMBEDDING_DIR`. (A programmatic download sketch follows this list.)
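As a minimal sketch, both folders can also be fetched with `huggingface_hub` instead of clicking through the Hub UI (the `local_dir` path is illustrative; point `DATA_DIR` and `EMBEDDING_DIR` in `configs.py` at the result):

```python
from huggingface_hub import snapshot_download

# Download only the text data and embedding folders of the dataset repo.
local_path = snapshot_download(
    repo_id="Feng-001/Text-ADBench",
    repo_type="dataset",
    allow_patterns=["text_data/*", "text_embedding/*"],
    local_dir="./Text-ADBench-data",  # illustrative target directory
)
print(f"Downloaded under: {local_path}")
```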
### Text Embedding (`./embedding/` in the GitHub repo)

This section details how to generate embeddings yourself, e.g. with different models or on new text data.

- Log in to your Hugging Face account: `huggingface-cli login --token your-token`
- Configure your Hugging Face token in `./embedding/configs_llms.py`.
- *(Optional)* For OpenAI embedding models, configure your API key in `./embedding/configs_llms.py`.
- **Example: obtaining embeddings from *Llama3-8b-mntp* on the *sms_spam* dataset**

  ```bash
  python main.py --dataset sms_spam --model_from meta --model Llama3-8b --ft_llm mntp --batch_size 5000 --max_size 28
  ```

- More commands: see `text_embedding.sh` in the repository.
- For new text data: first add the corresponding data-loading logic to `./embedding/data_preprocess.py`; once that preprocessing script has run, you can invoke the embedding command for the new dataset.
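For orientation, the LLM embeddings build on encoders from [LLM2Vec](https://github.com/McGill-NLP/llm2vec) (see the Citation section). The following is an illustrative sketch of encoding text with a public LLM2Vec checkpoint directly — it is not the repository's `main.py`, and the model ids are assumptions based on the public LLM2Vec releases:

```python
import torch
from llm2vec import LLM2Vec

# Load a Llama-3-8B encoder fine-tuned with MNTP (public LLM2Vec checkpoint, assumed id).
l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
    peft_model_name_or_path="McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

# Encode a small batch of texts into dense embedding vectors.
texts = ["Free entry in 2 a wkly comp!", "Are we meeting for lunch today?"]
embeddings = l2v.encode(texts)  # shape: (2, hidden_dim)
print(embeddings.shape)
```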
### Anomaly Detection (`./anomaly_detection/` in the GitHub repo)

This section shows how to run anomaly detection algorithms on the prepared embeddings.

- **Example: running the AD algorithm *OCSVM* on the *sms_spam* dataset**

  ```bash
  python main.py --dataset sms_spam --ad ocsvm --repeat 1
  ```

- More commands: see `anomaly_detection.sh` in the repository.
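Conceptually, such a run fits a shallow detector on the embedding vectors and scores it with AUROC/AUPRC. Here is a self-contained sketch of that evaluation loop (random vectors stand in for real embeddings; this is not the repository's code):

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
# Stand-ins for LLM embeddings: normal texts cluster near 0, anomalies are shifted.
X_train = rng.normal(0.0, 1.0, size=(500, 64))             # normal-only training set
X_test = np.vstack([rng.normal(0.0, 1.0, size=(95, 64)),   # normal test points
                    rng.normal(3.0, 1.0, size=(5, 64))])   # anomalous test points
y_test = np.array([0] * 95 + [1] * 5)                      # 1 = anomaly

# One-class SVM: higher decision_function means "more normal", so negate it.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)
scores = -clf.decision_function(X_test)

print(f"AUROC: {roc_auc_score(y_test, scores):.3f}")
print(f"AUPRC: {average_precision_score(y_test, scores):.3f}")
```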
### Low-Rank Prediction (`./low_rank_prediction/` in the GitHub repo)

This section demonstrates the experiments on the low-rank characteristics of cross-model performance matrices.

- The performance matrices from our paper are provided in `performance_matrices` within the GitHub repository.
- **Example:**

  ```bash
  python ./low_rank_prediction/matrix_completion.py --missing_rate 0.5 --rank 1
  ```
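The underlying idea: because the cross-model performance matrix is strongly low-rank, a subset of observed (method, embedding) scores suffices to predict the rest. Below is a minimal numpy sketch of rank-constrained completion via iterated truncated SVD (synthetic matrix; not the repository's implementation, but mirroring the `--missing_rate` and `--rank` flags):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-1 "performance matrix" (AD methods x embedding models).
M = np.outer(rng.uniform(0.6, 0.9, 10), rng.uniform(0.8, 1.1, 8))

mask = rng.random(M.shape) > 0.5       # observed entries (missing_rate = 0.5)
X = np.where(mask, M, M[mask].mean())  # initialize missing entries with observed mean

rank = 1
for _ in range(100):
    # Project onto rank-`rank` matrices via truncated SVD ...
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # ... then restore the observed entries.
    X = np.where(mask, M, X_low)

err = np.abs(X_low - M)[~mask].mean()
print(f"Mean abs error on held-out entries: {err:.4f}")
```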
## Citation

If you find Text-ADBench useful for your research and applications, please cite our paper:

1. **Our Work**

```bibtex
@misc{xiao2025textadbenchtextanomalydetection,
  title={Text-ADBench: Text Anomaly Detection Benchmark based on LLMs Embedding},
  author={Feng Xiao and Jicong Fan},
  year={2025},
  eprint={2507.12295},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.12295},
}
```

2. **Original Datasets**

The datasets used in this repository are publicly accessible. If you employ any of them, please also cite the original papers or resources:

* [20Newsgroups](http://qwone.com/~jason/20Newsgroups/)
* [Reuters21578](https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/corpora/reuters.zip)
* [IMDB](http://ai.stanford.edu/~amaas/data/sentiment/)
* [SST2](https://huggingface.co/datasets/stanfordnlp/sst2)
* [SMS_SPAM](https://huggingface.co/datasets/ucirvine/sms_spam)
* [Enron](https://huggingface.co/datasets/Hellisotherpeople/enron_emails_parsed)
* [Web-of-Science](https://huggingface.co/datasets/river-martin/web-of-science-with-label-texts)
* [DBpedia14](https://huggingface.co/datasets/fancyzhx/dbpedia_14)

3. **LLM Embeddings (LLM2Vec)**

Building upon Llama-2-7B-chat, Mistral-7B-Instruct-v0.2, and Llama-3-8B-Instruct, we employ their fine-tuned versions tailored for text embedding from [LLM2Vec](https://github.com/McGill-NLP/llm2vec/tree/main). Please cite the original papers if you use these embeddings:

* **Llama-2-7B-chat**

```bibtex
@article{touvron2023llama,
  title={Llama 2: Open foundation and fine-tuned chat models},
  author={Touvron, Hugo and Martin, Louis and Stone, Kevin and Albert, Peter and Almahairi, Amjad and Babaei, Yasmine and Bashlykov, Nikolay and Batra, Soumya and Bhargava, Prajjwal and Bhosale, Shruti and others},
  journal={arXiv preprint arXiv:2307.09288},
  year={2023}
}
```

* **Mistral-7B-Instruct-v0.2**

```bibtex
@misc{jiang2023mistral7b,
  title={Mistral 7B},
  author={Albert Q. Jiang and Alexandre Sablayrolles and Arthur Mensch and Chris Bamford and Devendra Singh Chaplot and Diego de las Casas and Florian Bressand and Gianna Lengyel and Guillaume Lample and Lucile Saulnier and Lélio Renard Lavaud and Marie-Anne Lachaux and Pierre Stock and Teven Le Scao and Thibaut Lavril and Thomas Wang and Timothée Lacroix and William El Sayed},
  year={2023},
  eprint={2310.06825},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2310.06825},
}
```

* **Llama-3-8B-Instruct**

```bibtex
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

* **LLM2Vec**

```bibtex
@inproceedings{llm2vec,
  title={{LLM2V}ec: Large Language Models Are Secretly Powerful Text Encoders},
  author={Parishad BehnamGhader and Vaibhav Adlakha and Marius Mosbach and Dzmitry Bahdanau and Nicolas Chapados and Siva Reddy},
  booktitle={First Conference on Language Modeling},
  year={2024},
  url={https://openreview.net/forum?id=IW1PR7vEBf}
}
```

## Dataset Card Authors

* Feng Xiao
* Jicong Fan
## Dataset Card Contact

For any questions regarding this dataset, please refer to the contact information in the GitHub repository or the paper.