```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("rasgaard/whisper-tiny.da")
model = AutoModelForSpeechSeq2Seq.from_pretrained("rasgaard/whisper-tiny.da")
```
A small hobby project trained in a Kaggle notebook on their free P100 GPUs. I was curious whether whisper-tiny could perform decently if specialized for a single language, in this case Danish. The TL;DR is that the results are not great :)
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="rasgaard/whisper-tiny.da")
```
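Whisper models expect 16 kHz mono audio. If your recording uses a different sample rate, you can resample it before passing it to the pipeline. A minimal sketch using only NumPy linear interpolation (the function name `resample_to_16k` is illustrative, not part of any library):

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int) -> np.ndarray:
    """Linearly interpolate a mono waveform to Whisper's expected 16 kHz."""
    target_sr = 16_000
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    # Time stamps of the original and target sample grids.
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio)

# One second of 44.1 kHz audio becomes 16 000 samples.
one_sec = np.zeros(44_100, dtype=np.float32)
resampled = resample_to_16k(one_sec, 44_100)
print(len(resampled))  # 16000
```

In practice a dedicated resampler (e.g. from an audio library) gives better quality than linear interpolation, but this is enough to get a waveform into the format the pipeline accepts as `{"array": ..., "sampling_rate": 16000}`.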
Model tree for rasgaard/whisper-tiny.da
- Base model: openai/whisper-tiny