This model is a fine-tune of OpenAI's Whisper Medium model (https://huggingface.co/openai/whisper-medium) on the following Korean datasets:
- https://huggingface.co/datasets/Junhoee/STT_Korean_Dataset_80000
- https://huggingface.co/datasets/Bingsu/zeroth-korean
Combined, they contain roughly 102k sentences.
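
For reference, here is a minimal sketch of how the two datasets might be loaded and merged with the `datasets` library. The split names and the assumption that both corpora share a compatible schema are not stated in this card; check each dataset page before using this as-is.

```python
from datasets import load_dataset, concatenate_datasets

# Load both Korean speech corpora (split name "train" is an assumption).
stt_korean = load_dataset("Junhoee/STT_Korean_Dataset_80000", split="train")
zeroth = load_dataset("Bingsu/zeroth-korean", split="train")

# concatenate_datasets requires matching features, so columns may first need
# to be renamed or cast to a common (audio, transcript) schema.
combined = concatenate_datasets([stt_korean, zeroth])
print(len(combined))  # roughly 102k sentences in total
```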
This is the final checkpoint; it achieves ~16 WER, down from ~24 WER.
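
WER (word error rate) counts substitutions, insertions, and deletions against a reference transcript, normalized by the number of reference words. A small sketch of how it can be computed with the `evaluate` library follows; the example strings are purely illustrative and are not taken from the training data.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative transcripts only; a real evaluation would use a held-out test set.
predictions = ["안녕하세요 오늘 날씨가 좋네요"]
references = ["안녕하세요 오늘 날씨가 좋습니다"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.1f}%")  # lower is better; this checkpoint reports ~16
```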
Training ran for 10,000 iterations.
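
The exact training configuration is not published here; the sketch below only pins the 10,000-step budget, and every other hyperparameter is an assumed placeholder rather than the settings actually used.

```python
from transformers import Seq2SeqTrainingArguments

# Only max_steps is grounded in this card; the remaining values are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-korean",
    max_steps=10_000,                # the 10,000 iterations mentioned above
    per_device_train_batch_size=16,  # assumption
    learning_rate=1e-5,              # assumption
    fp16=True,                       # assumption
    predict_with_generate=True,      # lets the trainer report WER via generate()
)
```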