Munch-1 - Large-Scale Urdu Text-to-Speech Dataset
Dataset Description
Munch-1 is a large-scale Urdu Text-to-Speech (TTS) dataset containing high-quality audio recordings paired with Urdu text transcripts. The dataset features multiple voice variations and natural pronunciation patterns suitable for training and evaluating Urdu TTS models.
Rough Assumption
If each of the 3.86 million audio clips were about 20 seconds long, the dataset would total roughly 21,400+ hours of audio.
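For a quick sanity check of that back-of-the-envelope figure (the 20-second clip length is an assumption, not a measured average), using the exact row count:
clips = 3_856_500           # total rows in the dataset
assumed_clip_seconds = 20   # rough assumption, not a measured average
total_hours = clips * assumed_clip_seconds / 3600
print(f"~{total_hours:,.0f} hours")   # about 21,400 hours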
Key Features
- 13 Different Voices: alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan
- Natural Urdu Pronunciation: proper handling of Urdu script, punctuation, and intonation
- Large Scale: 3,856,500 audio-text pairs
- High-Quality Audio: PCM16 format, 22.05 kHz sample rate
- Efficient Storage: Parquet format with compression
- Lightweight Index Available: hashed index for exploration without downloading the full dataset
Dataset Statistics
| Metric | Value |
|---|---|
| Total Size | 3.28 TB |
| Total Rows | 3,856,500 |
| Number of Files | 7,714 parquet files (~400 MB each) |
| Audio Format | PCM16 (raw audio bytes) |
| Sample Rate | 22,050 Hz |
| Bit Depth | 16-bit signed integer |
| Text Language | Urdu (with occasional mixed language) |
| Voice Count | 13 unique voices |
| Audio Size (per sample) | ~50 KB to 5+ MB |
| Duration (per sample) | ~3-5 seconds |
| Estimated Total Duration | ~13,200-24,800 hours of audio |
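The size and duration figures above are estimates derived from byte counts. A hedged way to recompute the total duration yourself, assuming the audio_size_bytes column of the companion hashed index described in the next section (PCM16 mono, 2 bytes per sample at 22,050 Hz):
from datasets import load_dataset
import pandas as pd

# Load the lightweight hashed index (see "Companion Dataset" below)
index_df = pd.DataFrame(load_dataset("humair025/hashed_data_munch_1", split="train"))

# 2 bytes per 16-bit mono sample at 22,050 samples per second
total_hours = index_df["audio_size_bytes"].sum() / 2 / 22050 / 3600
print(f"Estimated total duration: {total_hours:,.0f} hours")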
Companion Dataset
For efficient exploration without downloading the full 3.28 TB dataset, use the Munch-1 Hashed Index:
- Contains all metadata + SHA-256 hashes of the audio
- Only ~1 GB (99.97% smaller)
- Search 3.86M records in seconds
- Selectively download only what you need
Quick Start
Installation
pip install datasets pandas numpy scipy
Basic Usage
from datasets import load_dataset
import numpy as np
import io
from scipy.io import wavfile
import IPython.display as ipd
# Load a specific file
ds = load_dataset(
"humair025/munch-1",
data_files="tts_data_20251203_125841_0a26c418.parquet",
split="train"
)
# Helper function to convert PCM16 bytes to WAV
def pcm16_bytes_to_wav(pcm_bytes, sample_rate=22050):
    audio_array = np.frombuffer(pcm_bytes, dtype=np.int16)
    wav_io = io.BytesIO()
    wavfile.write(wav_io, sample_rate, audio_array)
    wav_io.seek(0)
    return wav_io
# Play first audio sample
row = ds[0]
wav_io = pcm16_bytes_to_wav(row['audio_bytes'])
ipd.display(ipd.Audio(wav_io.read(), rate=22050))
print(f"Text: {row['text']}")
print(f"Voice: {row['voice']}")
Efficient Exploration (Recommended)
Instead of downloading the full 3.28 TB dataset, start with the hashed index:
from datasets import load_dataset
import pandas as pd
# Load the lightweight index (~1 GB)
index_ds = load_dataset("humair025/hashed_data_munch_1", split="train")
index_df = pd.DataFrame(index_ds)
# Explore the dataset
print(f"Total samples: {len(index_df)}")
print(f"Voices: {index_df['voice'].unique()}")
print(f"Voice distribution:\n{index_df['voice'].value_counts()}")
# Find specific samples
ash_samples = index_df[index_df['voice'] == 'ash']
short_audio = index_df[index_df['audio_size_bytes'] < 40000]
# Download only what you need
files_needed = ash_samples['parquet_file_name'].unique()[:10]
ds = load_dataset(
"humair025/munch-1",
data_files=list(files_needed),
split="train"
)
Load Multiple Files
# Load all files matching a date prefix
ds = load_dataset(
"humair025/munch-1",
data_files="tts_data_20251203_*.parquet", # Wildcard pattern
split="train"
)
print(f"Total samples: {len(ds)}")
Batch Processing
from huggingface_hub import HfApi
# Get all parquet files
api = HfApi()
files = api.list_repo_files(repo_id="humair025/munch-1", repo_type="dataset")
parquet_files = [f for f in files if f.endswith('.parquet')]
print(f"Total files: {len(parquet_files)}")
# Load first 20 files
batch = parquet_files[:20]
ds = load_dataset(
"humair025/munch-1",
data_files=batch,
split="train"
)
Dataset Structure
Data Fields
Each row in the dataset contains:
| Field | Type | Description |
|---|---|---|
| id | int | Paragraph ID (sequential) |
| text | string | Original Urdu text |
| transcript | string | TTS transcript (may differ slightly from the input text) |
| voice | string | Voice name used (e.g., "ash", "sage", "coral") |
| audio_bytes | bytes | Raw PCM16 audio data |
| timestamp | string | ISO-format timestamp of generation (nullable) |
| error | string | Error message if generation failed (nullable) |
Example Row
{
    'id': 42,
    'text': 'یہ ایک نمونہ متن ہے',
    'transcript': 'یہ ایک نمونہ متن ہے',
    'voice': 'ash',
    'audio_bytes': b'\x00\x01...',  # PCM16 bytes
    'timestamp': '2025-12-03T13:03:14.123456',
    'error': None
}
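Since error is nullable, rows where generation failed can be filtered out before use; a minimal sketch, assuming ds has been loaded as in the Quick Start:
# Keep only rows whose TTS generation succeeded (error is None)
clean_ds = ds.filter(lambda row: row["error"] is None)
print(f"Kept {len(clean_ds):,} of {len(ds):,} rows")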
Use Cases
1. TTS Model Training
Train Urdu text-to-speech models with diverse voice samples:
- Fine-tune existing TTS models
- Train voice cloning systems
- Develop multi-speaker TTS
- Create voice conversion models
2. Speech Recognition
Develop Urdu ASR systems:
- Train speech-to-text models
- Evaluate transcription accuracy
- Research Urdu phonetics
- Build pronunciation dictionaries
3. Voice Research
Study voice characteristics and patterns:
- Analyze voice similarity
- Research pronunciation patterns
- Study Urdu phonetics and prosody
- Compare voice quality metrics
4. Audio Processing
Develop audio processing pipelines:
- Audio enhancement
- Noise reduction
- Speech synthesis evaluation
- Audio quality assessment
5. Linguistic Analysis
Explore linguistic patterns:
- Text analysis and corpus linguistics
- Punctuation usage patterns
- Sentence structure analysis
- Code-switching research (Urdu-English), as sketched below
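A rough, hedged heuristic for the code-switching item above, assuming the index_df loaded in the exploration snippet earlier (any Latin letters in the text are treated as potential English material):
# Flag rows whose text contains Latin script (likely Urdu-English code-switching)
mixed = index_df[index_df["text"].str.contains(r"[A-Za-z]", regex=True, na=False)]
print(f"Rows containing Latin script: {len(mixed):,} ({len(mixed) / len(index_df):.2%})")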
Advanced Usage
Voice Distribution Analysis
from datasets import load_dataset
import pandas as pd
# Using the hashed index (recommended)
index_ds = load_dataset("humair025/hashed_data_munch_1", split="train")
index_df = pd.DataFrame(index_ds)
# Count voice usage
voice_counts = index_df['voice'].value_counts()
print("Voice Distribution:")
for voice, count in voice_counts.items():
    percentage = (count / len(index_df)) * 100
    print(f"  {voice}: {count:,} samples ({percentage:.2f}%)")
Audio Length Analysis
# Using the hashed index
avg_size = index_df['audio_size_bytes'].mean()
avg_duration = (avg_size / 2) / 22050 # bytes to seconds
print(f"Average audio size: {avg_size/1024:.2f} KB")
print(f"Average duration: {avg_duration:.2f} seconds")
# Duration distribution
durations = (index_df['audio_size_bytes'] / 2) / 22050
print(f"Min duration: {durations.min():.2f}s")
print(f"Max duration: {durations.max():.2f}s")
print(f"Median duration: {durations.median():.2f}s")
Text Statistics
# Text length analysis
text_lengths = index_df['text'].str.len()
word_counts = index_df['text'].str.split().str.len()
print(f"Average characters: {text_lengths.mean():.0f}")
print(f"Average words: {word_counts.mean():.0f}")
print(f"Longest text: {text_lengths.max()} characters")
Duplicate Detection
# Find duplicate audio using hashes
duplicates = index_df[index_df.duplicated(subset=['audio_bytes_hash'], keep=False)]
if len(duplicates) > 0:
    print(f"Found {len(duplicates):,} duplicate rows")
    print(f"Unique audio: {index_df['audio_bytes_hash'].nunique():,}")
    redundancy = (1 - index_df['audio_bytes_hash'].nunique() / len(index_df)) * 100
    print(f"Redundancy: {redundancy:.2f}%")
else:
    print("No duplicates found!")
Export to WAV Files
import os
from tqdm import tqdm
# Load specific samples
ds = load_dataset(
"humair025/munch-1",
data_files="tts_data_20251203_*.parquet",
split="train"
)
os.makedirs("audio_files", exist_ok=True)
for i, row in enumerate(tqdm(ds.select(range(100)))):  # First 100 samples
    wav_io = pcm16_bytes_to_wav(row['audio_bytes'])
    filename = f"audio_files/sample_{i:04d}_{row['voice']}.wav"
    with open(filename, 'wb') as f:
        f.write(wav_io.read())
Selective Download by Voice
# Using hashed index to find files
voice_of_interest = 'ash'
ash_files = index_df[index_df['voice'] == voice_of_interest]['parquet_file_name'].unique()
print(f"Files containing '{voice_of_interest}' voice: {len(ash_files)}")
# Download first 10 files with ash voice
ds = load_dataset(
"humair025/munch-1",
data_files=list(ash_files[:10]),
split="train"
)
print(f"Loaded {len(ds)} samples")
Dataset Creation
This dataset was generated using a high-performance parallel TTS pipeline with the following characteristics:
Generation Pipeline
- Concurrent Processing: 10-20 parallel workers
- Voice Rotation: Sequential rotation through 13 voices
- Quality Control: Automatic retry with exponential backoff
- Fault Tolerance: Checkpoint-based resumption
- Smart Batching: Efficient 500-row batches
- API: OpenAI-compatible TTS endpoints
Pipeline Features
- Natural Urdu pronunciation with proper intonation
- Punctuation-aware pausing:
  - ؟ (question mark): 400 ms pause with higher pitch
  - ! (exclamation): 300 ms pause with emphasis
  - ، (comma): 500 ms pause
  - ۔ (full stop): 1000 ms pause
- Mixed-language support for technical terms
- Variable pacing for natural flow
- Error handling and logging
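The generation code itself is not part of this repository; the sketch below only illustrates the behaviour described above (voice rotation, retries with exponential backoff, 500-row batches). The synthesize callable and its parameters are hypothetical placeholders, not a real API.
import itertools
import time

VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer", "coral",
          "verse", "ballad", "ash", "sage", "amuch", "dan"]

def generate_batch(texts, synthesize, start_id=0, max_retries=5):
    """Illustrative sketch: rotate through the 13 voices and retry failed requests."""
    rows = []
    voice_cycle = itertools.cycle(VOICES)
    for offset, text in enumerate(texts):
        voice = next(voice_cycle)
        for attempt in range(max_retries):
            try:
                audio = synthesize(text=text, voice=voice)  # hypothetical TTS call
                rows.append({"id": start_id + offset, "text": text, "voice": voice,
                             "audio_bytes": audio, "error": None})
                break
            except Exception as exc:
                if attempt == max_retries - 1:
                    # Record the failure instead of dropping the row
                    rows.append({"id": start_id + offset, "text": text, "voice": voice,
                                 "audio_bytes": b"", "error": str(exc)})
                else:
                    time.sleep(2 ** attempt)  # exponential backoff before retrying
    return rows  # each 500-row batch would then be written out as one parquet file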
Important Notes
Audio Format
- Audio is stored as raw PCM16 bytes (not WAV files)
- Must be converted before playback (see examples above)
- Sample rate: 22,050 Hz
- Bit depth: 16-bit signed integer
- Channels: Mono (1 channel)
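Given those parameters (16-bit mono, so 2 bytes per sample, at 22,050 samples per second), a clip's duration follows directly from its byte length; a small sketch, assuming a row loaded as in the Quick Start:
def pcm16_duration_seconds(pcm_bytes, sample_rate=22050):
    # 2 bytes per 16-bit mono sample
    return len(pcm_bytes) / 2 / sample_rate

print(f"Duration: {pcm16_duration_seconds(ds[0]['audio_bytes']):.2f} s")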
Large Dataset Considerations
- Size: 3.28 TB total; download selectively
- Files: 7,714 individual parquet files (~400 MB each)
- Streaming: recommended for full-dataset access
- Batching: load files in batches to manage memory
- Index First: use the hashed index to explore before downloading
Recommended Workflow
- Explore: Load the hashed index (~1 GB)
- Filter: Find samples matching your criteria
- Download: Selectively download only needed parquet files
- Process: Work with manageable subsets
Potential Data Issues
Duplicates: This dataset may contain duplicate audio samples. Use the hashed index for deduplication:
# Get unique samples only
unique_df = index_df.drop_duplicates(subset=['audio_bytes_hash'], keep='first')
unique_files = unique_df['parquet_file_name'].unique()
Quality Variance: Some samples may have:
- Low volume or clipping
- Mispronunciations (especially for rare words)
- Background noise
- Transcription differences from input text
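A quick, hedged screen for the first two issues (low volume and clipping), assuming audio_bytes holds PCM16 mono as described above; the thresholds are illustrative heuristics, not part of the dataset specification:
import numpy as np

def quality_flags(pcm_bytes, clip_level=32700, low_rms=500):
    # Heuristic checks on raw PCM16 samples
    samples = np.frombuffer(pcm_bytes, dtype=np.int16).astype(np.float32)
    rms = float(np.sqrt(np.mean(samples ** 2))) if len(samples) else 0.0
    peak = float(np.max(np.abs(samples))) if len(samples) else 0.0
    return {
        "clipping": peak >= clip_level,   # samples pinned near the int16 limit (32767)
        "low_volume": rms < low_rms,      # very quiet recording
    }

print(quality_flags(ds[0]["audio_bytes"]))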
Performance Tips
Memory Management
# DON'T: Load entire dataset at once
# ds = load_dataset("humair025/munch-1", split="train") # 3.28 TB!
# DO: Use streaming mode
ds = load_dataset(
"humair025/munch-1",
data_files="tts_data_20251203_*.parquet",
split="train",
streaming=True # Stream data instead of loading all
)
# Process in batches
for i, batch in enumerate(ds.iter(batch_size=100)):
    # Process 100 samples at a time
    if i >= 10:  # Process only the first 1,000 samples
        break
Efficient File Selection
# Select specific date range
ds = load_dataset(
"humair025/munch-1",
data_files="tts_data_20251203_*.parquet", # Only Dec 3rd files
split="train"
)
# Or specific time range
ds = load_dataset(
"humair025/munch-1",
data_files="tts_data_20251203_1303*.parquet", # Around 13:03
split="train"
)
# Or use the index to find specific files
target_files = index_df[index_df['voice'] == 'ash']['parquet_file_name'].unique()[:5]
ds = load_dataset("humair025/munch-1", data_files=list(target_files), split="train")
Storage Optimization
# If storage is limited, consider:
# 1. Download only specific voices
# 2. Download in batches and process incrementally
# 3. Use the hashed index for metadata-only analysis
# 4. Delete processed files after feature extraction
Citation
If you use this dataset in your research, please cite:
BibTeX
@dataset{munch_urdu_tts_2025,
title={Munch-1: Large-Scale Urdu Text-to-Speech Dataset},
author={Munir, Humair},
year={2025},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/datasets/humair025/munch-1}},
note={3.86M audio-text pairs across 13 voices}
}
APA Format
Munir, H. (2025). Munch-1: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
Hugging Face. https://huggingface.co/datasets/humair025/munch-1
MLA Format
Munir, Humair. "Munch-1: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025,
https://huggingface.co/datasets/humair025/munch-1.
Contributing
Issues, suggestions, and contributions are welcome! Please:
- Report data quality issues
- Suggest improvements
- Share your use cases and research
- Contribute analysis scripts or tools
License
This dataset is released under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license.
You are free to:
- Share: copy and redistribute the material in any medium or format
- Adapt: remix, transform, and build upon the material for any purpose
- Commercial use: use the dataset for commercial purposes
Under the following terms:
- Attribution: you must give appropriate credit, provide a link to the license, and indicate if changes were made
Important Links
- This Dataset (Full Audio): 3.28 TB
- Hashed Index: ~1 GB of metadata + hashes
- Discussions: ask questions, share research
- Report Issues: data quality problems
Acknowledgments
- TTS Generation: OpenAI-compatible API endpoints
- Voices: 13 high-quality voice models (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- Infrastructure: HuggingFace Datasets platform
- Tools: Python, datasets, pandas, numpy, scipy
Usage Statistics
Help us understand how the dataset is used:
- Training TTS models
- Speech recognition research
- Voice cloning experiments
- Linguistic analysis
- Educational purposes
- Other (please share in discussions!)
Quick Start Tips
- First Time Users: Start with the hashed index (~1 GB) to explore the dataset
- Download Smart: Use the index to find specific samples, then download only those parquet files
- Memory Matters: Use streaming mode if working with large subsets
- Deduplication: Check for duplicates using audio hashes before training
- Voice Selection: Each voice has ~300k samples - choose based on your needs
Note: This is a large dataset (3.28 TB, 3.86M samples). Please download selectively based on your needs. Consider using the hashed index for exploration and selective downloading.
Last Updated: December 2025
Status: Complete (all 7,714 files uploaded)
Pro Tip: Download the lightweight hashed index first to explore the dataset, find duplicates, and identify exactly which files you need, then download only those specific parquet files from this dataset!