distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of distilbert-base-uncased on the financial_phrasebank dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2612
  • Accuracy: 0.8287
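
The card does not include usage code. A minimal inference sketch follows, assuming the adapter uses the standard three financial_phrasebank sentiment labels (negative, neutral, positive); verify the label mapping against the adapter's config before relying on it.

```python
# Minimal inference sketch (not from the card): load the base model, attach
# this LoRA adapter with PEFT, and classify one sentence.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)
model = PeftModel.from_pretrained(
    base, "shandonk/distilbert-base-uncased-lora-text-classification"
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer(
    "The company's quarterly revenue rose 12% year over year.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumed financial_phrasebank label order; check the adapter config.
labels = ["negative", "neutral", "positive"]
print(labels[logits.argmax(dim=-1).item()])
```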

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 10
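
These settings map onto a standard Hugging Face Trainer run with a PEFT LoRA adapter. A hedged reconstruction follows; the LoraConfig values (r, lora_alpha, target_modules) are illustrative assumptions, since the card does not record them.

```python
# Hedged reconstruction of the training setup from the hyperparameters above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)
peft_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,                                 # assumed rank (not in the card)
    lora_alpha=16,                       # assumed scaling (not in the card)
    target_modules=["q_lin", "v_lin"],   # assumed DistilBERT attention projections
)
model = get_peft_model(model, peft_config)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-lora-text-classification",
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",       # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```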

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6087        | 1.0   | 606  | 0.6192          | 0.8019   |
| 0.4539        | 2.0   | 1212 | 0.5939          | 0.8312   |
| 0.3768        | 3.0   | 1818 | 0.7302          | 0.8283   |
| 0.3249        | 4.0   | 2424 | 0.7608          | 0.8287   |
| 0.1923        | 5.0   | 3030 | 0.8825          | 0.8283   |
| 0.1518        | 6.0   | 3636 | 1.0603          | 0.8333   |
| 0.1068        | 7.0   | 4242 | 1.1702          | 0.8262   |
| 0.0673        | 8.0   | 4848 | 1.2515          | 0.8217   |
| 0.0720        | 9.0   | 5454 | 1.2673          | 0.8304   |
| 0.0315        | 10.0  | 6060 | 1.2612          | 0.8287   |
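
Accuracy per epoch is the output of a compute_metrics callback passed to the Trainer, which is why it was originally reported as a raw {'accuracy': ...} dict. A typical implementation, assumed rather than recorded in this card:

```python
# Sketch of the compute_metrics callback that produces the accuracy dicts
# reported above; the exact implementation is not recorded in this card.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```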

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.6.0+cpu
  • Datasets 3.5.0
  • Tokenizers 0.21.1