Whisper Turbo Fine-tuned on TED-LIUM (CTranslate2 Format)

This is the CTranslate2 version of Dafisns/whisper-turbo-tedlium, optimized for fast inference with faster-whisper.

πŸš€ Performance

  • 2-4x faster inference than the original PyTorch checkpoint
  • Lower memory usage
  • Optimized for GPU inference
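How much memory you save depends on the compute_type you pass to faster-whisper; a minimal sketch of picking a sensible default per device (the helper pick_compute_type is mine, not part of the library, which also accepts "auto"):

```python
def pick_compute_type(device: str) -> str:
    """Return a reasonable default compute_type for faster-whisper.

    float16 roughly halves GPU memory; int8 quantization cuts CPU
    memory further at a small accuracy cost. Heuristic is illustrative,
    not the library's own logic.
    """
    return "float16" if device == "cuda" else "int8"
```

Pass the result straight through, e.g. WhisperModel(model_id, device="cpu", compute_type=pick_compute_type("cpu")).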

πŸ“¦ Installation

pip install faster-whisper

πŸ’» Usage

from faster_whisper import WhisperModel

# Load the model (downloaded automatically from the Hugging Face Hub on first use)
model = WhisperModel("Dafisns/whisper-turbo-tedlium-ct2", device="cuda", compute_type="float32")

# Transcribe
segments, info = model.transcribe("audio.mp3", language="en")

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
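The segments yielded above carry start and end times in seconds, which makes it easy to emit subtitles; a minimal sketch of formatting those times as SRT-style timestamps (the helper name is mine):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"
```

For example, srt_timestamp(83.4) returns "00:01:23,400", ready to pair with segment.text in an .srt file.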

Model Info

  • Base Model: OpenAI Whisper Large v3 Turbo
  • Fine-tuned on: TED-LIUM Release 3
  • Format: CTranslate2 (optimized)
  • Quantization: none (float32 weights)
  • Language: English
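For reference, a CTranslate2 repo like this one is typically produced from the fine-tuned Transformers checkpoint with ct2-transformers-converter (shipped with the ctranslate2 package); the exact flags used for this repo are an assumption:

```shell
# Convert the fine-tuned Transformers checkpoint to CTranslate2 format.
# Flags shown are illustrative, not the ones actually used for this repo.
pip install ctranslate2 transformers
ct2-transformers-converter \
  --model Dafisns/whisper-turbo-tedlium \
  --output_dir whisper-turbo-tedlium-ct2 \
  --copy_files tokenizer.json preprocessor_config.json \
  --quantization float32
```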

Citation

@misc{whisper-turbo-tedlium-ct2,
  author = {Dafisns},
  title = {Whisper Turbo Fine-tuned on TED-LIUM (CTranslate2)},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Dafisns/whisper-turbo-tedlium-ct2}
}

πŸ“„ License

Apache 2.0
