Note: This repository is publicly accessible, but you must agree to share your contact information and accept the access conditions on Hugging Face before downloading its files.

Munch-1 - Large-Scale Urdu Text-to-Speech Dataset


📖 Dataset Description

Munch-1 is a large-scale Urdu Text-to-Speech (TTS) dataset containing high-quality audio recordings paired with Urdu text transcripts. The dataset features multiple voice variations and natural pronunciation patterns suitable for training and evaluating Urdu TTS models.

Rough estimate: at an assumed average of 20 seconds per clip, the 3.86 million audio clips total roughly 21,444 hours of audio.

Key Features

  • 🎤 13 Different Voices: alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan
  • 🗣️ Natural Urdu Pronunciation: Proper handling of Urdu script, punctuation, and intonation
  • 📊 Large Scale: 3,856,500 audio-text pairs
  • 🎵 High-Quality Audio: PCM16 format, 22.05 kHz sample rate
  • 💾 Efficient Storage: Parquet format with compression
  • 📇 Lightweight Index Available: Hashed index for exploration without downloading the full dataset

Dataset Statistics

| Metric | Value |
|---|---|
| Total Size | 3.28 TB |
| Total Rows | 3,856,500 |
| Number of Files | 7,714 parquet files (~400 MB each) |
| Audio Format | PCM16 (raw audio bytes) |
| Sample Rate | 22,050 Hz |
| Bit Depth | 16-bit signed integer |
| Text Language | Urdu (with occasional mixed language) |
| Voice Count | 13 unique voices |
| Audio Size per Sample | ~50 kB to 5+ MB |
| Avg Duration | ~3-5 seconds per sample |
| Total Duration | ~13,200-24,800 hours of audio |

🔗 Companion Dataset

For efficient exploration without downloading the full 3.28 TB dataset, use the Munch-1 Hashed Index (humair025/hashed_data_munch_1):

  • 📊 Contains all metadata + SHA-256 hashes of audio
  • 💾 Only ~1 GB (99.97% smaller)
  • ⚡ Search 3.86M records in seconds
  • 🎯 Selectively download only what you need

🚀 Quick Start

Installation

pip install datasets pandas numpy scipy huggingface_hub tqdm

Basic Usage

from datasets import load_dataset
import numpy as np
import io
from scipy.io import wavfile
import IPython.display as ipd

# Load a specific file
ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_125841_0a26c418.parquet",
    split="train"
)

# Helper function to convert PCM16 bytes to WAV
def pcm16_bytes_to_wav(pcm_bytes, sample_rate=22050):
    audio_array = np.frombuffer(pcm_bytes, dtype=np.int16)
    wav_io = io.BytesIO()
    wavfile.write(wav_io, sample_rate, audio_array)
    wav_io.seek(0)
    return wav_io

# Play the first audio sample
row = ds[0]
wav_io = pcm16_bytes_to_wav(row['audio_bytes'])
# wav_io.read() yields complete WAV bytes, which IPython.display.Audio plays directly
ipd.display(ipd.Audio(wav_io.read()))

print(f"Text: {row['text']}")
print(f"Voice: {row['voice']}")

Efficient Exploration (Recommended)

Instead of downloading the full 3.28 TB dataset, start with the hashed index:

from datasets import load_dataset
import pandas as pd

# Load the lightweight index (~1 GB)
index_ds = load_dataset("humair025/hashed_data_munch_1", split="train")
index_df = index_ds.to_pandas()

# Explore the dataset
print(f"Total samples: {len(index_df)}")
print(f"Voices: {index_df['voice'].unique()}")
print(f"Voice distribution:\n{index_df['voice'].value_counts()}")

# Find specific samples
ash_samples = index_df[index_df['voice'] == 'ash']
short_audio = index_df[index_df['audio_size_bytes'] < 40000]

# Download only what you need
files_needed = ash_samples['parquet_file_name'].unique()[:10]
ds = load_dataset(
    "humair025/munch-1",
    data_files=list(files_needed),
    split="train"
)

Load Multiple Files

# Load every file generated on a given date (wildcard pattern)
ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_*.parquet",  # Wildcard pattern
    split="train"
)

print(f"Total samples: {len(ds)}")

Batch Processing

from huggingface_hub import HfApi

# Get all parquet files
api = HfApi()
files = api.list_repo_files(repo_id="humair025/munch-1", repo_type="dataset")
parquet_files = [f for f in files if f.endswith('.parquet')]

print(f"Total files: {len(parquet_files)}")

# Load first 20 files
batch = parquet_files[:20]
ds = load_dataset(
    "humair025/munch-1",
    data_files=batch,
    split="train"
)

📊 Dataset Structure

Data Fields

Each row in the dataset contains:

| Field | Type | Description |
|---|---|---|
| id | int | Paragraph ID (sequential) |
| text | string | Original Urdu text |
| transcript | string | TTS transcript (may differ slightly from the input text) |
| voice | string | Voice name used (e.g., "ash", "sage", "coral") |
| audio_bytes | bytes | Raw PCM16 audio data |
| timestamp | string | ISO-format timestamp of generation (nullable) |
| error | string | Error message if generation failed (nullable) |

Example Row

{
    'id': 42,
    'text': 'یہ ایک نمونہ متن ہے۔',        # "This is a sample text."
    'transcript': 'یہ ایک نمونہ متن ہے۔',
    'voice': 'ash',
    'audio_bytes': b'\x00\x01...',  # PCM16 bytes
    'timestamp': '2024-12-03T13:03:14.123456',
    'error': None
}

🎯 Use Cases

1. TTS Model Training

Train Urdu text-to-speech models with diverse voice samples (a data-preparation sketch follows this list):

  • Fine-tune existing TTS models
  • Train voice cloning systems
  • Develop multi-speaker TTS
  • Create voice conversion models
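
Many TTS fine-tuning recipes expect a folder of WAV files plus an LJSpeech-style metadata.csv (one filename|transcript line per utterance). The sketch below is one possible way to lay the data out, not a format prescribed by this dataset; the parquet file name and the 1,000-sample cap are arbitrary choices.

import csv
import io
import os
import numpy as np
from datasets import load_dataset
from scipy.io import wavfile

def pcm16_bytes_to_wav(pcm_bytes, sample_rate=22050):
    # Same helper as in the Quick Start section
    audio = np.frombuffer(pcm_bytes, dtype=np.int16)
    buf = io.BytesIO()
    wavfile.write(buf, sample_rate, audio)
    buf.seek(0)
    return buf

# Load one parquet file (any file works; this name is just an example)
ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_125841_0a26c418.parquet",
    split="train"
)

os.makedirs("tts_corpus/wavs", exist_ok=True)

# Write wavs/ plus an LJSpeech-style metadata.csv ("name|transcript")
with open("tts_corpus/metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|")
    for i, row in enumerate(ds.select(range(min(1000, len(ds))))):
        name = f"utt_{i:06d}"
        with open(f"tts_corpus/wavs/{name}.wav", "wb") as wf:
            wf.write(pcm16_bytes_to_wav(row["audio_bytes"]).read())
        writer.writerow([name, row["transcript"]])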

2. Speech Recognition

Develop Urdu ASR systems (a preprocessing sketch follows this list):

  • Train speech-to-text models
  • Evaluate transcription accuracy
  • Research Urdu phonetics
  • Build pronunciation dictionaries
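
Most off-the-shelf ASR models expect 16 kHz float waveforms rather than 22.05 kHz PCM16 bytes. Below is a minimal preprocessing sketch, assuming a 16 kHz target (adjust to whatever your recognizer expects) and using scipy's polyphase resampler.

import numpy as np
from scipy.signal import resample_poly
from datasets import load_dataset

ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_125841_0a26c418.parquet",
    split="train"
)

def to_asr_example(row, source_rate=22050, target_rate=16000):
    """Turn a dataset row into a (waveform, transcript) pair for ASR training."""
    # PCM16 bytes -> float32 waveform in [-1.0, 1.0]
    audio = np.frombuffer(row["audio_bytes"], dtype=np.int16).astype(np.float32) / 32768.0
    # Resample 22,050 Hz -> 16,000 Hz (ratio 320/441)
    audio = resample_poly(audio, up=target_rate // 50, down=source_rate // 50)
    return {"audio": audio, "sampling_rate": target_rate, "text": row["transcript"]}

example = to_asr_example(ds[0])
print(f"{example['text']}\n{len(example['audio']) / example['sampling_rate']:.2f} s at 16 kHz")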

3. Voice Research

Study voice characteristics and patterns:

  • Analyze voice similarity
  • Research pronunciation patterns
  • Study Urdu phonetics and prosody
  • Compare voice quality metrics

4. Audio Processing

Develop audio processing pipelines:

  • Audio enhancement
  • Noise reduction
  • Speech synthesis evaluation
  • Audio quality assessment

5. Linguistic Analysis

Explore linguistic patterns (a simple code-switching filter is sketched after this list):

  • Text analysis and corpus linguistics
  • Punctuation usage patterns
  • Sentence structure analysis
  • Code-switching research (Urdu-English)
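
A quick way to surface code-switched rows is to flag text that mixes Arabic-script and Latin-script characters. This regex heuristic is only a starting point, not a language identifier, and assumes the hashed index is already loaded as index_df with its text column.

import re

LATIN = re.compile(r"[A-Za-z]")
ARABIC_SCRIPT = re.compile(r"[\u0600-\u06FF]")  # Unicode block covering Urdu letters

def is_code_switched(text):
    """True if the text contains both Urdu (Arabic-script) and Latin-script characters."""
    return bool(LATIN.search(text)) and bool(ARABIC_SCRIPT.search(text))

mixed = index_df[index_df["text"].apply(is_code_switched)]
print(f"Code-switched samples: {len(mixed):,} ({len(mixed) / len(index_df):.2%})")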

🔧 Advanced Usage

Voice Distribution Analysis

from datasets import load_dataset
import pandas as pd

# Using the hashed index (recommended)
index_ds = load_dataset("humair025/hashed_data_munch_1", split="train")
index_df = index_ds.to_pandas()

# Count voice usage
voice_counts = index_df['voice'].value_counts()
print("Voice Distribution:")
for voice, count in voice_counts.items():
    percentage = (count / len(index_df)) * 100
    print(f"  {voice}: {count:,} samples ({percentage:.2f}%)")

Audio Length Analysis

# Using the hashed index
avg_size = index_df['audio_size_bytes'].mean()
avg_duration = (avg_size / 2) / 22050  # bytes to seconds

print(f"Average audio size: {avg_size/1024:.2f} KB")
print(f"Average duration: {avg_duration:.2f} seconds")

# Duration distribution
durations = (index_df['audio_size_bytes'] / 2) / 22050
print(f"Min duration: {durations.min():.2f}s")
print(f"Max duration: {durations.max():.2f}s")
print(f"Median duration: {durations.median():.2f}s")

Text Statistics

# Text length analysis
text_lengths = index_df['text'].str.len()
word_counts = index_df['text'].str.split().str.len()

print(f"Average characters: {text_lengths.mean():.0f}")
print(f"Average words: {word_counts.mean():.0f}")
print(f"Longest text: {text_lengths.max()} characters")

Duplicate Detection

# Find duplicate audio using hashes
duplicates = index_df[index_df.duplicated(subset=['audio_bytes_hash'], keep=False)]

if len(duplicates) > 0:
    print(f"Found {len(duplicates):,} duplicate rows")
    print(f"Unique audio: {index_df['audio_bytes_hash'].nunique():,}")
    redundancy = (1 - index_df['audio_bytes_hash'].nunique()/len(index_df)) * 100
    print(f"Redundancy: {redundancy:.2f}%")
else:
    print("No duplicates found!")

Export to WAV Files

import os
from tqdm import tqdm

# Load specific samples
ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_*.parquet",
    split="train"
)

os.makedirs("audio_files", exist_ok=True)

# Note: ds[:100] returns a dict of columns, so use select() to iterate over rows
for i, row in enumerate(tqdm(ds.select(range(100)))):  # first 100 samples
    wav_io = pcm16_bytes_to_wav(row['audio_bytes'])  # helper from the Quick Start section
    filename = f"audio_files/sample_{i:04d}_{row['voice']}.wav"
    with open(filename, 'wb') as f:
        f.write(wav_io.read())

Selective Download by Voice

# Using hashed index to find files
voice_of_interest = 'ash'
ash_files = index_df[index_df['voice'] == voice_of_interest]['parquet_file_name'].unique()

print(f"Files containing '{voice_of_interest}' voice: {len(ash_files)}")

# Download first 10 files with ash voice
ds = load_dataset(
    "humair025/munch-1",
    data_files=list(ash_files[:10]),
    split="train"
)

print(f"Loaded {len(ds)} samples")

πŸ“ Dataset Creation

This dataset was generated using a high-performance parallel TTS pipeline with the following characteristics:

Generation Pipeline

  • Concurrent Processing: 10-20 parallel workers
  • Voice Rotation: Sequential rotation through 13 voices
  • Quality Control: Automatic retry with exponential backoff
  • Fault Tolerance: Checkpoint-based resumption
  • Smart Batching: Efficient 500-row batches
  • API: OpenAI-compatible TTS endpoints

Pipeline Features

  • ✅ Natural Urdu pronunciation with proper intonation
  • ✅ Punctuation-aware pausing (illustrated in the sketch after this list):
    • ؟ (question mark): 400 ms pause with higher pitch
    • ! (exclamation mark): 300 ms pause with emphasis
    • ، (comma): 500 ms pause
    • ۔ (full stop): 1000 ms pause
  • ✅ Mixed-language support for technical terms
  • ✅ Variable pacing for natural flow
  • ✅ Error handling and logging
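
The generation pipeline itself is not distributed with the dataset. Purely as a hypothetical illustration of the pause durations listed above, the sketch below maps each punctuation mark to a silence length and shows how that much silence could be appended to a raw PCM16 buffer; it is not the actual pipeline code.

import numpy as np

SAMPLE_RATE = 22050

# Pause lengths from the list above (illustrative mapping only)
PAUSE_MS = {
    "؟": 400,   # question mark
    "!": 300,   # exclamation mark
    "،": 500,   # comma
    "۔": 1000,  # full stop
}

def append_pause(pcm16_bytes, mark):
    """Append silence for the given punctuation mark to a raw PCM16 mono buffer."""
    n_samples = int(SAMPLE_RATE * PAUSE_MS.get(mark, 0) / 1000)
    return pcm16_bytes + np.zeros(n_samples, dtype=np.int16).tobytes()

# Example: pad a clip with a full-stop pause
# padded = append_pause(row["audio_bytes"], "۔")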

⚠️ Important Notes

Audio Format

  • Audio is stored as raw PCM16 bytes (not WAV files)
  • Must be converted before playback (see the examples above and the sketch below)
  • Sample rate: 22,050 Hz
  • Bit depth: 16-bit signed integer
  • Channels: Mono (1 channel)
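
Since clips are raw PCM16 rather than WAV containers, the quickest sanity check is to decode the bytes into a normalized float array and derive the duration from the byte length (2 bytes per sample, mono, 22,050 Hz). A minimal sketch:

import numpy as np

SAMPLE_RATE = 22050  # Hz, 16-bit signed, mono

def pcm16_to_float(pcm_bytes):
    """Decode raw PCM16 bytes into a float32 waveform in [-1.0, 1.0]."""
    return np.frombuffer(pcm_bytes, dtype=np.int16).astype(np.float32) / 32768.0

def duration_seconds(pcm_bytes):
    """Each sample is 2 bytes (int16, mono), so duration = bytes / 2 / sample_rate."""
    return len(pcm_bytes) / 2 / SAMPLE_RATE

# waveform = pcm16_to_float(row["audio_bytes"])
# print(f"{duration_seconds(row['audio_bytes']):.2f} s, peak {np.abs(waveform).max():.3f}")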

Large Dataset Considerations

  • 💾 Size: 3.28 TB total - download selectively
  • 📦 Files: 7,714 individual parquet files (~400 MB each)
  • ⚡ Streaming: Recommended for full dataset access
  • 🔄 Batching: Load files in batches to manage memory
  • 📊 Index First: Use the hashed index to explore before downloading

Recommended Workflow

  1. Explore: Load the hashed index (~1 GB)
  2. Filter: Find samples matching your criteria
  3. Download: Selectively download only needed parquet files
  4. Process: Work with manageable subsets (see the combined sketch below)
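
Putting the four steps together, a minimal end-to-end sketch might look like this. The voice, audio_size_bytes, and parquet_file_name columns come from the hashed index; the 'ash' voice and the ~5-second size threshold are just illustrative filters.

from datasets import load_dataset

# 1. Explore: load the lightweight hashed index (~1 GB)
index_df = load_dataset("humair025/hashed_data_munch_1", split="train").to_pandas()

# 2. Filter: e.g. 'ash' voice, clips shorter than ~5 s (2 bytes/sample * 22050 Hz * 5 s)
subset = index_df[
    (index_df["voice"] == "ash") & (index_df["audio_size_bytes"] < 2 * 22050 * 5)
]

# 3. Download: only the parquet files that contain matching rows
files_needed = subset["parquet_file_name"].unique()[:5]
ds = load_dataset("humair025/munch-1", data_files=list(files_needed), split="train")

# 4. Process: work with the manageable subset
print(f"{len(ds)} rows loaded from {len(files_needed)} files")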

Potential Data Issues

⚠️ Duplicates: This dataset may contain duplicate audio samples. Use the hashed index for deduplication:

# Get unique samples only
unique_df = index_df.drop_duplicates(subset=['audio_bytes_hash'], keep='first')
unique_files = unique_df['parquet_file_name'].unique()

⚠️ Quality Variance: Some samples may have:

  • Low volume or clipping
  • Mispronunciations (especially for rare words)
  • Background noise
  • Transcription differences from input text

📊 Performance Tips

Memory Management

# DON'T: Load entire dataset at once
# ds = load_dataset("humair025/munch-1", split="train")  # 3.28 TB!

# DO: Use streaming mode
ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_*.parquet",
    split="train",
    streaming=True  # Stream data instead of loading all
)

# Process in batches
for i, batch in enumerate(ds.iter(batch_size=100)):
    # Process 100 samples at a time
    if i >= 10:  # Process only first 1000 samples
        break

Efficient File Selection

# Select files from a specific date
ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_*.parquet",  # Only Dec 3rd files
    split="train"
)

# Or specific time range
ds = load_dataset(
    "humair025/munch-1",
    data_files="tts_data_20251203_1303*.parquet",  # Around 13:03
    split="train"
)

# Or use the index to find specific files
target_files = index_df[index_df['voice'] == 'ash']['parquet_file_name'].unique()[:5]
ds = load_dataset("humair025/munch-1", data_files=list(target_files), split="train")

Storage Optimization

# If storage is limited, consider:
# 1. Download only specific voices
# 2. Download in batches and process incrementally
# 3. Use the hashed index for metadata-only analysis
# 4. Delete processed files after feature extraction (see the sketch below)
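
One way to combine points 2 and 4 is to pull one parquet file at a time with huggingface_hub, extract whatever you need, and delete the local copy before moving on. The sketch below downloads into a local directory so the delete actually frees space; extract_features is a placeholder to replace with your own processing.

import os
import pandas as pd
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
files = api.list_repo_files(repo_id="humair025/munch-1", repo_type="dataset")
parquet_files = [f for f in files if f.endswith(".parquet")]

def extract_features(df):
    """Placeholder: replace with your own feature extraction."""
    return df[["id", "voice"]]

for name in parquet_files[:3]:  # first few files as a demo
    local_path = hf_hub_download(
        repo_id="humair025/munch-1",
        filename=name,
        repo_type="dataset",
        local_dir="tmp_parquet",  # download to a local dir so os.remove frees the space
    )
    extract_features(pd.read_parquet(local_path)).to_parquet(f"features_{os.path.basename(name)}")
    os.remove(local_path)  # free disk space before the next file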

📜 Citation

If you use this dataset in your research, please cite:

BibTeX

@dataset{munch_urdu_tts_2025,
  title={Munch-1: Large-Scale Urdu Text-to-Speech Dataset},
  author={Munir, Humair},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/munch-1}},
  note={3.86M audio-text pairs across 13 voices}
}

APA Format

Munir, H. (2025). Munch-1: Large-Scale Urdu Text-to-Speech Dataset [Dataset]. 
Hugging Face. https://huggingface.co/datasets/humair025/munch-1

MLA Format

Munir, Humair. "Munch-1: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025, 
https://huggingface.co/datasets/humair025/munch-1.

🤝 Contributing

Issues, suggestions, and contributions are welcome! Please:

  • πŸ› Report data quality issues
  • πŸ’‘ Suggest improvements
  • πŸ“ Share your use cases and research
  • πŸ”§ Contribute analysis scripts or tools

📄 License

This dataset is released under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license.

You are free to:

  • ✅ Share: copy and redistribute the material in any medium or format
  • ✅ Adapt: remix, transform, and build upon the material for any purpose
  • ✅ Commercial use: use the dataset for commercial purposes

Under the following terms:

  • πŸ“ Attribution β€” You must give appropriate credit, provide a link to the license, and indicate if changes were made

πŸ™ Acknowledgments

  • TTS Generation: OpenAI-compatible API endpoints
  • Voices: 13 high-quality voice models (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
  • Infrastructure: HuggingFace Datasets platform
  • Tools: Python, datasets, pandas, numpy, scipy

📈 Usage Statistics

Help us understand how the dataset is used:

  • Training TTS models
  • Speech recognition research
  • Voice cloning experiments
  • Linguistic analysis
  • Educational purposes
  • Other (please share in discussions!)

⚡ Quick Start Tips

  1. First Time Users: Start with the hashed index (~1 GB) to explore the dataset
  2. Download Smart: Use the index to find specific samples, then download only those parquet files
  3. Memory Matters: Use streaming mode if working with large subsets
  4. Deduplication: Check for duplicates using audio hashes before training
  5. Voice Selection: Each voice has ~300k samples - choose based on your needs

Note: This is a large dataset (3.28 TB, 3.86M samples). Please download selectively based on your needs. Consider using the hashed index for exploration and selective downloading.

Last Updated: December 2025

Status: ✅ Complete - All 7,714 files uploaded


💡 Pro Tip: Download the lightweight hashed index first to explore the dataset, find duplicates, and identify exactly which files you need, then download only those specific parquet files from this dataset.
