Use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="rasgaard/whisper-tiny.da")
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("rasgaard/whisper-tiny.da")
model = AutoModelForSpeechSeq2Seq.from_pretrained("rasgaard/whisper-tiny.da")
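When loading the model directly, inference goes through the processor and `generate`. A minimal sketch, assuming a 16 kHz mono waveform; a silent synthetic clip stands in for real audio, which you would normally load with a library such as librosa or soundfile:

```python
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("rasgaard/whisper-tiny.da")
model = AutoModelForSpeechSeq2Seq.from_pretrained("rasgaard/whisper-tiny.da")

# Whisper expects 16 kHz mono audio; a 1-second silent clip is a stand-in
# for a real recording loaded from disk.
waveform = np.zeros(16_000, dtype=np.float32)

# The processor converts raw audio into log-mel spectrogram features.
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(inputs.input_features)

# Decode token ids back into text.
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
```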
A small hobby project trained in a Kaggle notebook using Kaggle's free P100 GPUs. I was curious whether whisper-tiny could perform decently if specialized for a single language, Danish in this case. The TL;DR is that the results are not great :)

from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", 
                model="rasgaard/whisper-tiny.da")
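The pipeline accepts either a path to an audio file or raw samples with their sampling rate. A minimal sketch using a synthetic 16 kHz clip as a stand-in for real Danish speech:

```python
import numpy as np
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition",
                model="rasgaard/whisper-tiny.da")

# A real call would pass a file path, e.g. pipe("clip.wav"); a synthetic
# 16 kHz clip is used here so the snippet is self-contained.
audio = {"raw": np.zeros(16_000, dtype=np.float32), "sampling_rate": 16_000}
result = pipe(audio)
print(result["text"])
```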
Model size: 57.7M params (F32, Safetensors)