Would you use cheaper fine-tuning if it cut costs by 70%?

I see a lot of people here struggling with GPU costs for training and fine-tuning.

I’m exploring an idea: what if you could fine-tune models (Llama, Mistral, SDXL, etc.) on idle/off-peak GPUs, and get the trained model + hosted endpoint back for ~70% less than AWS or OpenAI?

Curious — would this be useful for anyone here? What’s the biggest pain point you’ve hit — cost, complexity, or reliability?


What’s the biggest pain point you’ve hit

I don’t do much fine-tuning myself, but everyone seems to be struggling with costs.

Another common issue is fine-tuning runs aborting partway through due to configuration errors. If the service constrained the target hardware and base models, could presets avoid this to some extent?
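To make the preset idea concrete, here is a minimal sketch of what pre-launch validation against a preset could look like. Everything here is hypothetical: the preset name, fields, and limits are illustrative assumptions, not a real API.

```python
# Hypothetical sketch: catching fine-tuning config errors before a job launches.
# Preset names, fields, and limits are illustrative assumptions only.

PRESETS = {
    "llama-3-8b-lora-a100": {
        "base_model": "meta-llama/Meta-Llama-3-8B",
        "gpu": "A100-40GB",
        "max_seq_len": 4096,
        "allowed_precisions": {"bf16", "fp16"},
    },
}

def validate_job(preset_name: str, user_cfg: dict) -> list[str]:
    """Return a list of config errors; an empty list means the job can launch."""
    preset = PRESETS.get(preset_name)
    if preset is None:
        return [f"unknown preset: {preset_name}"]
    errors = []
    if user_cfg.get("seq_len", 0) > preset["max_seq_len"]:
        errors.append(f"seq_len exceeds preset limit of {preset['max_seq_len']}")
    if user_cfg.get("precision") not in preset["allowed_precisions"]:
        errors.append(f"precision must be one of {sorted(preset['allowed_precisions'])}")
    return errors

# Rejecting a bad config up front, instead of aborting mid-run on the GPU:
print(validate_job("llama-3-8b-lora-a100", {"seq_len": 8192, "precision": "fp8"}))
```

The point is that failing fast at submission time is nearly free, whereas discovering the same error after the job has been scheduled onto a GPU wastes both money and queue time.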

A similar service that comes to mind is HF Jobs.