# Dataset Card for mghana-st
mghana-st is a curated audio dataset intended for speech tasks such as speech recognition, speaker identification, speech clustering, and speech translation. It contains short local-language audio clips collected from various sources, annotated with English translations and tags for non-verbal events.
## Dataset Details
### Dataset Description
mghana-st is a curated audio dataset of short local-language audio clips collected from multiple sources, annotated with English translations and tags for non-verbal events. It is intended to support research and development in spoken-language technologies for the represented local languages (see Dataset Sources below). Primary intended uses are automatic speech recognition (ASR), speech-to-text translation, speaker identification and diarization research, speech clustering and representation learning, and non-verbal audio event detection (laughter, clapping, music, noise, etc.). It can also be used for lexicon building, pronunciation studies, low-resource speech modeling, and human-in-the-loop annotation workflows.
- **Curated by:** Adwumatech AI
- **Funded by:** Adwumatech AI
- **License:** MIT
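The sketch below shows one way to load the data with the Hugging Face `datasets` library. The repository id comes from the citation at the end of this card; the `train` split name, the `audio` column name, and the 16 kHz target rate are assumptions to verify against the repository.

```python
from datasets import Audio, load_dataset

# Load from the Hub; the "train" split name is an assumption.
ds = load_dataset("adwumatech-ai/mghana-st", split="train")

# Decode audio at 16 kHz, a common rate for speech models.
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

example = ds[0]
print(example["audio"]["array"].shape)    # waveform as a NumPy array
print(example["audio"]["sampling_rate"])  # 16000 after the cast
```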
### Dataset Sources
- UGSpeechData: A Multilingual Speech Dataset of Ghanaian Languages
- Ashesi-Org/Financial-Inclusion-Speech-Dataset
## Uses
### Direct Use
- **Automatic speech recognition (ASR):** train acoustic models and end-to-end ASR systems from paired audio and text. Short utterances are especially useful for frame-level and segmental models, keyword spotting, and command recognition.
- **Speech-to-text translation:** train end-to-end or cascade speech translation systems on the paired local-language audio and English translations.
- **Speaker identification and verification research:** study speaker embeddings and representation learning from short utterances (note that short clips capture few long-term voice characteristics).
- **Speaker diarization and clustering:** use short clips for speaker clustering experiments or to augment diarization systems with short-segment training data.
- **Speech representation learning / self-supervised learning:** pretrain and fine-tune models (e.g., Wav2Vec-style) for downstream tasks in low-resource settings; a minimal feature-extraction sketch follows this list.
- **Non-verbal audio event detection and tagging:** train classifiers or multi-task models to detect tagged events such as laughter, coughing, music, applause, and silence; a tag-handling sketch also follows this list.
- **Phonetic and linguistic research:** small-unit phonetic analyses, pronunciation-variation studies, and code-switching research where present.
- **Dataset augmentation and evaluation:** use as a held-out test set or to augment existing corpora in cross-lingual transfer experiments.
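For the ASR and representation-learning uses above, a common first step is turning decoded waveforms into model inputs. The sketch below does this with a Wav2Vec2-style feature extractor from `transformers`; the checkpoint name is only an illustrative choice, and the `audio` column name and `train` split are assumptions.

```python
from datasets import Audio, load_dataset
from transformers import Wav2Vec2FeatureExtractor

# Illustrative checkpoint; any Wav2Vec2-style feature extractor works the same way.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")

ds = load_dataset("adwumatech-ai/mghana-st", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

def featurize(example):
    # Normalize the raw waveform into the model's expected input format.
    audio = example["audio"]
    inputs = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"])
    return {"input_values": inputs.input_values[0]}

ds = ds.map(featurize)
```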
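The event tags themselves can drive the detection-and-tagging use. Since this card does not document the tag format, the sketch below assumes a hypothetical bracketed convention such as `[laughter]` embedded in the English translation; check the actual annotations before relying on it.

```python
import re

# Hypothetical convention: non-verbal events appear as bracketed tags such as
# "[laughter]" or "[music]"; verify the real tag inventory and format.
TAG_PATTERN = re.compile(r"\[([a-z_]+)\]")

def split_tags(text):
    """Separate non-verbal event tags from the translation text."""
    tags = TAG_PATTERN.findall(text)
    clean = TAG_PATTERN.sub("", text)
    clean = re.sub(r"\s{2,}", " ", clean).strip()
    return clean, tags

clean, tags = split_tags("he paused [laughter] before answering [applause]")
print(clean)  # "he paused before answering"
print(tags)   # ["laughter", "applause"]
```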
### Out-of-Scope Use
- **High-accuracy speaker forensics or legal evidence:** the dataset is not designed, validated, or documented for forensic identification; ethical and legal constraints apply.
- **Fine-grained demographic or sensitive-attribute analysis:** the dataset likely lacks the reliable, ethically collected demographic labels needed for fair research into race, religion, health, sexual orientation, and similar attributes; avoid inferring sensitive attributes.
- **Any use violating local consent or data-protection laws:** do not use the data for purposes incompatible with how consent was obtained; if consent information is missing, avoid high-risk applications.
## Annotators
- Acquah, Bernard
- Adeleke, Abigail
- Adjei-Forson, Richmond Paa
- Adukonu, Benjamin Edo
- Aforla, Christabel Yayra
- Agbedumasi, Emmanuel Kafui
- Akowuah, Yirenkyi
- Akrong, Alfred Nii Kwaku
- Allotey, John
- Amarh, Mary Yasmine
- Amegatcher, Martha Jasmine
- Anderson, Jameel
- Anim, Emmanuel Nkansah
- Anku, Seth Adibi
- Ansahene, Joe-Bosworth
- Asare, Daniel Mcvey
- Asare-Kessie, Akosua Korankyewaa
- Atanyara, Abigail
- Atotura, Abdul Rasheed
- Awuah, Derrick Christian
- Baah, Godfred
- Baanuo, John Sheba
- Bannerman-Williams, Godfred Nii Lante
- Batsa, Anthony Kuma
- Benson, Rene Maa Shormeh
- Boahen-Kwarteng, Esther
- Boahene, Charles
- Dodoo, Emmanuella
- Eshun, Edmond Brite
- Frimpong, Eunice Asabea
- Gaisey, Manuel Joe Sam Acquah
- Kakabiku, Daniel Elikplim
- Madjitey, Richard Partey
- Ntori-Brobbey, Emmanuel Kwaku
- Owusu, Belinda
- Sowah, Joanna Dzifa
- Tawiah, Priscilla
## Citations
Wiafe, Isaac et al., "UGSpeechData: A Multilingual Speech Dataset of Ghanaian Languages," Science Data Bank, Oct. 16, 2025. [Online]. Available: https://doi.org/10.57760/sciencedb.22298. [Accessed: Dec. 16, 2025].

Asamoah Owusu, D., Korsah, A., Quartey, B., Nwolley Jnr., S., Sampah, D., Adjepon-Yamoah, D., and Omane Boateng, L., "Financial-Inclusion-Speech-Dataset: a speech dataset to support financial inclusion," Ashesi University and Nokwary Technologies, funded by the Lacuna Fund, 2022. [Online]. Available: https://github.com/Ashesi-Org/Financial-Inclusion-Speech-Dataset.
### To Cite this Dataset
Frank Lawrence Nii Adoquaye Acquaye and Jesse Johnson. December 2025. mghana-st: a curated dataset for speech tasks. Available at: https://huggingface.co/datasets/adwumatech-ai/mghana-st.