# Whisper Turbo Fine-tuned on TED-LIUM (CTranslate2 Format)
This is the CTranslate2 version of Dafisns/whisper-turbo-tedlium, optimized for fast inference with faster-whisper.
## Performance
- 2-4x faster inference than the original PyTorch model
- Lower memory usage
- Optimized for GPU inference
## Installation

```bash
pip install faster-whisper
```
## Usage
```python
from faster_whisper import WhisperModel

# Load the model (downloaded automatically from the Hugging Face Hub)
model = WhisperModel("Dafisns/whisper-turbo-tedlium-ct2", device="cuda", compute_type="float32")

# Transcribe
segments, info = model.transcribe("audio.mp3", language="en")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```
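`transcribe` yields segments lazily, so a common follow-up is rendering them into a subtitle file. A minimal sketch of an SRT formatter; the `segments_to_srt` helper and its `(start, end, text)` tuple input are illustrative, not part of the faster-whisper API:

```python
def to_srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"


def segments_to_srt(segments) -> str:
    """Render an iterable of (start, end, text) tuples as SRT cues."""
    cues = []
    for i, (start, end, text) in enumerate(segments, start=1):
        cues.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text.strip()}\n"
        )
    return "\n".join(cues)
```

With real output you would pass `(s.start, s.end, s.text)` for each segment and write the result to an `.srt` file.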
## Model Info
- Base Model: OpenAI Whisper Large v3 Turbo
- Fine-tuned on: TED-LIUM Release 3
- Format: CTranslate2 (optimized)
- Quantization: float32 (full precision, no quantization applied)
- Language: English
## Related Models

- PyTorch version: Dafisns/whisper-turbo-tedlium
  - Use this for training and fine-tuning
- CTranslate2 version (this repo): Dafisns/whisper-turbo-tedlium-ct2
  - Use this for fast inference
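If you want to reproduce this CTranslate2 export from the PyTorch checkpoint, the `ctranslate2` package ships a converter CLI. A sketch, assuming the default output directory name and copied tokenizer files (not verified against this exact repo's file list):

```shell
pip install ctranslate2 transformers

# Convert the fine-tuned PyTorch checkpoint to CTranslate2 format.
# --quantization float32 keeps full precision, matching this repo.
ct2-transformers-converter \
  --model Dafisns/whisper-turbo-tedlium \
  --output_dir whisper-turbo-tedlium-ct2 \
  --copy_files tokenizer.json preprocessor_config.json \
  --quantization float32
```

Passing `--quantization int8` instead would trade a small accuracy loss for lower memory use.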
## Citation
```bibtex
@misc{whisper-turbo-tedlium-ct2,
  author = {Dafisns},
  title = {Whisper Turbo Fine-tuned on TED-LIUM (CTranslate2)},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Dafisns/whisper-turbo-tedlium-ct2}
}
```
## License
Apache 2.0