# WILFRID: #1 8B Model on MELD (0.6607 Weighted F1)
To our knowledge, the highest weighted F1 reported for an 8B model on the MELD emotion recognition benchmark.
## Results (Test Set)
| Emotion | Precision | Recall | F1 | Support |
|---|---|---|---|---|
| anger | 0.6736 | 0.4725 | 0.5554 | 345 |
| disgust | 0.3768 | 0.3824 | 0.3796 | 68 |
| fear | 0.3571 | 0.1000 | 0.1562 | 50 |
| joy | 0.6921 | 0.5423 | 0.6081 | 402 |
| neutral | 0.7130 | 0.9156 | 0.8017 | 1256 |
| sadness | 0.6447 | 0.2356 | 0.3451 | 208 |
| surprise | 0.6263 | 0.6263 | 0.6263 | 281 |
| accuracy | | | 0.6847 | 2610 |
| weighted F1 | | | 0.6607 | 2610 |
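The table follows the familiar classification-report layout: per-class precision/recall/F1 plus overall accuracy and weighted F1. Below is a minimal sketch of one way to compute such a report with scikit-learn; the `y_true`/`y_pred` lists are placeholders for illustration, not the actual MELD test outputs.

```python
from sklearn.metrics import classification_report, f1_score

labels = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]

# Placeholder labels: in practice y_true comes from the MELD test split
# and y_pred from the model's one-word predictions.
y_true = ["anger", "joy", "neutral", "neutral", "surprise"]
y_pred = ["anger", "neutral", "neutral", "neutral", "surprise"]

# Per-class precision/recall/F1 plus accuracy, as in the table above
print(classification_report(y_true, y_pred, labels=labels, digits=4, zero_division=0))

# Weighted F1 averages the per-class F1 scores by class support
print("weighted F1:", f1_score(y_true, y_pred, labels=labels, average="weighted", zero_division=0))
```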
Trained in ~80 minutes on a single RTX 5090 using 4-bit QLoRA.
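The exact training configuration is not included in this card; the snippet below is a minimal sketch of a 4-bit QLoRA setup with `transformers` and `peft` consistent with the description above. The LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not the values used to train this checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization so the 8B base model fits on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections; hyperparameters are assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```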
## Quick Inference Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Wilfrid28/llama3-meld-wilfrid", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Wilfrid28/llama3-meld-wilfrid")

prompt = """Conversation:
Ross: We were on a break!
Rachel: No we weren't!
Current turn:
Ross: WE WERE ON A BREAK!!
Emotion (one word):"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# temperature only takes effect when sampling is enabled
output = model.generate(**inputs, max_new_tokens=5, do_sample=True, temperature=0.1)

# Decode only the newly generated tokens, not the echoed prompt
generated = output[0][inputs["input_ids"].shape[-1]:]
emotion = tokenizer.decode(generated, skip_special_tokens=True).strip().split()[0]
print(emotion)  # anger
```
### Live Demo
**Try the Emotion Oracle Dashboard**: https://huggingface.co/spaces/Wilfrid28/wilfrid-oracle
An interactive Space for trying the model's emotion predictions in the browser.
## Base Model

Fine-tuned from `meta-llama/Meta-Llama-3-8B-Instruct`.