---
language:
- tr
license: mit
task_categories:
- feature-extraction
size_categories:
- 100K<n<1M
tags:
- embedding-distillation
- student-teacher
pretty_name: Cosmos Corpus Encoded
dataset_info:
  features:
  - name: text
    dtype: string
  - name: teacher_embedding_final
    list: float64
  - name: teacher_embedding_pre_dense
    list: float64
  - name: tabi_input_ids
    list: int64
  - name: cosmos_input_ids
    list: int64
  - name: mursit_input_ids
    list: int64
  - name: mft_input_ids
    list: int64
  splits:
  - name: train
    num_bytes: 6603761113
    num_examples: 224779
  download_size: 1576033082
  dataset_size: 6603761113
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Cosmos Corpus Encoded for Embedding Distillation

This dataset is a pre-tokenized version of `alibayram/cosmos-corpus-0-05-with-embeddings`, designed for efficient embedding-distillation training of MFT and TabiBERT models.
## Dataset Description

- Source: `alibayram/cosmos-corpus-0-05-with-embeddings`
- Language: Turkish
- Task: Embedding distillation (teacher-student training)
- Total examples: 224,807 (filtered from 300,000)
- Max sequence length: 2048 tokens
## Pre-processing & Filtering

The dataset was processed with two different tokenizers to support multiple student architectures:

- MFT Tokenizer: a custom, morphologically informed tokenizer.
- TabiBERT Tokenizer: a BERT-style tokenizer with a 32k vocabulary.
Filtering:

- Original size: 300,000 examples.
- Filtered size: 224,807 examples (~75%).
- Criterion: both `mft_input_ids` and `tabi_input_ids` must be at most 2048 tokens.
- Sequences longer than 2048 tokens were dropped to keep training within the context limit (a minimal code sketch follows this list).
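For reference, here is a minimal sketch of the tokenize-and-filter step using the `datasets` library. The tokenizer checkpoints are placeholders (the actual MFT and TabiBERT tokenizer repos are not listed on this card), so treat this as an illustration of the filtering criterion rather than the exact logic of `prepare_dataset.py`.

```python
# Sketch of the tokenize-and-filter step. Tokenizer paths are placeholders;
# substitute the actual MFT and TabiBERT tokenizers.
from datasets import load_dataset
from transformers import AutoTokenizer

MAX_LEN = 2048

mft_tok = AutoTokenizer.from_pretrained("path/to/mft-tokenizer")        # placeholder
tabi_tok = AutoTokenizer.from_pretrained("path/to/tabibert-tokenizer")  # placeholder

ds = load_dataset("alibayram/cosmos-corpus-0-05-with-embeddings", split="train")

def encode(example):
    # Encode the raw text with both student tokenizers.
    example["mft_input_ids"] = mft_tok(example["text"])["input_ids"]
    example["tabi_input_ids"] = tabi_tok(example["text"])["input_ids"]
    return example

ds = ds.map(encode)

# Keep only rows where BOTH encodings fit within the 2048-token limit.
ds = ds.filter(
    lambda ex: len(ex["mft_input_ids"]) <= MAX_LEN and len(ex["tabi_input_ids"]) <= MAX_LEN
)
```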
## Dataset Structure

The dataset contains the following columns:
| Column | Type | Description |
|---|---|---|
| `text` | `string` | The original raw text content. |
| `mft_input_ids` | `list[int]` | Token IDs encoded with the MFT tokenizer. |
| `tabi_input_ids` | `list[int]` | Token IDs encoded with the TabiBERT tokenizer. |
| `teacher_embedding_final` | `list[float]` | Final-layer embeddings from the teacher model (Gemma-2-9b-it). |
## Data Instances

```python
{
    'text': 'Makine öğrenmesi, verilerden öğrenen algoritmaların çalışılmasıdır.',
    'mft_input_ids': [124, 5921, ...],
    'tabi_input_ids': [101, 2341, ...],
    'teacher_embedding_final': [0.021, -0.054, ...]  # 3584-dimensional vector
}
```
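A quick way to sanity-check an example against the properties described above (field names, the 2048-token limit, and the 3584-dimensional teacher embedding):

```python
from datasets import load_dataset

ds = load_dataset("alibayram/cosmos-corpus-encoded", split="train")

row = ds[0]
print(len(row["mft_input_ids"]), "MFT tokens (expected <= 2048)")
print(len(row["tabi_input_ids"]), "TabiBERT tokens (expected <= 2048)")
print(len(row["teacher_embedding_final"]), "teacher embedding dimensions (expected 3584)")
```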
## Usage

This dataset is optimized for the `EmbeddingDistillationTrainer`. You can load it directly, without re-tokenizing during training:

```python
from datasets import load_dataset

dataset = load_dataset("alibayram/cosmos-corpus-encoded")
```
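If you write your own training loop instead of using the trainer, the pre-tokenized columns can be batched directly. The sketch below pads `mft_input_ids` dynamically with PyTorch; the pad id of `0` is an assumption, so substitute the student tokenizer's actual pad token id.

```python
# Sketch: batching the pre-tokenized column for a custom PyTorch loop.
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
from datasets import load_dataset

ds = load_dataset("alibayram/cosmos-corpus-encoded", split="train")

def collate(batch):
    # Pad variable-length token id lists; pad id 0 is an assumption.
    ids = [torch.tensor(ex["mft_input_ids"]) for ex in batch]
    input_ids = pad_sequence(ids, batch_first=True, padding_value=0)
    attention_mask = (input_ids != 0).long()
    teacher = torch.tensor([ex["teacher_embedding_final"] for ex in batch])
    return {"input_ids": input_ids, "attention_mask": attention_mask, "teacher_embedding": teacher}

loader = DataLoader(ds, batch_size=32, shuffle=True, collate_fn=collate)
```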
### Training Example

To train a model using the `mft_input_ids` column:

```python
from embedding_trainer import EmbeddingDistillationTrainer, EmbeddingTrainerConfig

config = EmbeddingTrainerConfig(
    student_model="alibayram/mft-downstream-task-embeddinggemma",
    input_ids_column="mft_input_ids",  # or "tabi_input_ids"
    embedding_column="teacher_embedding_final",
    loss_type="cosine",
    batch_size=256,
)

trainer = EmbeddingDistillationTrainer(config)
trainer.train("alibayram/cosmos-corpus-encoded")
```
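The `loss_type="cosine"` objective is not spelled out on this card; a cosine embedding-distillation loss typically looks like the sketch below, where the student's pooled embedding is pushed toward the teacher's and the loss is one minus their cosine similarity. This is an illustrative assumption, not the trainer's actual implementation; if the student's hidden size differs from the teacher's 3584, a learned projection is usually applied to the student embedding first.

```python
# Hypothetical cosine distillation loss (illustration only).
import torch
import torch.nn.functional as F

def cosine_distillation_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    # Both tensors are (batch, dim). Normalize, then penalize low cosine similarity.
    student_emb = F.normalize(student_emb, dim=-1)
    teacher_emb = F.normalize(teacher_emb, dim=-1)
    return (1.0 - (student_emb * teacher_emb).sum(dim=-1)).mean()
```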
## Creation Details

- Created by: Ali Bayram
- Date: 2026-01-25
- Teacher Model: `google/gemma-2-9b-it` (embeddings extracted via `sartify-llm/Gemma-2-9b-it-v2-embedding`)
- Processing Script: `prepare_dataset.py`
## License

MIT