Marwolaeth committed
Commit f45338a (verified) · Parent: 08e709d

Update README.md

Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -26,16 +26,16 @@ model-index:
       split: validation
     metrics:
     - type: accuracy
-      value: 0.6710097719869706
+      value: 0.6742671009771987
       name: Accuracy
     - type: f1
-      value: 0.6576271186440678
+      value: 0.6710526315789473
       name: F1
     - type: precision
-      value: 0.6830985915492958
+      value: 0.6754966887417219
       name: Precision
     - type: recall
-      value: 0.6339869281045751
+      value: 0.6666666666666666
       name: Recall
 ---
 
@@ -69,7 +69,7 @@ with torch.inference_mode():
     )
 p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
 print({v: p[k] for k, v in model.config.id2label.items()})
-# {'not_entailment': 0.61647415, 'entailment': 0.38352585}
+# {'not_entailment': 0.68763, 'entailment': 0.31237}
 
 # An example concerning sentiments
 premise2 = 'Я ненавижу желтые занавески'
@@ -80,7 +80,7 @@ with torch.inference_mode():
     )
 p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
 print({v: p[k] for k, v in model.config.id2label.items()})
-# {'not_entailment': 0.520086, 'entailment': 0.47991407}
+# {'not_entailment': 0.558041, 'entailment': 0.441959}
 ```
 
 ## Model Performance Metrics
@@ -89,13 +89,13 @@ The following metrics summarize the performance of the model on the test dataset
 
 | Metric                           | Value                     |
 |----------------------------------|---------------------------|
-| **Validation Loss**              | 0.6546                    |
-| **Validation Accuracy**          | 67.10%                    |
-| **Validation F1 Score**          | 65.76%                    |
-| **Validation Precision**         | 68.31%                    |
-| **Validation Recall**            | 63.40%                    |
-| **Validation Runtime***          | 0.2546 seconds            |
-| **Samples per Second***          | 1 205.80                  |
-| **Steps per Second***            | 7.86                      |
+| **Validation Loss**              | 0.6492                    |
+| **Validation Accuracy**          | 67.43%                    |
+| **Validation F1 Score**          | 67.11%                    |
+| **Validation Precision**         | 67.55%                    |
+| **Validation Recall**            | 66.67%                    |
+| **Validation Runtime***          | 0.2631 seconds            |
+| **Samples per Second***          | 1 167.02                  |
+| **Steps per Second***            | 7.60                      |
 
 *Using T4 GPU with Google Colab
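
For readers skimming the diff, the touched `print` comments belong to the README's usage example. Below is a minimal, self-contained sketch of that inference flow; the checkpoint id and the hypothesis string are placeholders (neither appears in this diff), and only the softmax / `id2label` post-processing mirrors the README lines above.

```python
# Minimal sketch of the README's inference flow. MODEL_ID and the hypothesis
# are placeholders; only the post-processing mirrors the lines in this diff.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "..."  # placeholder: the repository's checkpoint id is not shown in this diff

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

premise2 = 'Я ненавижу желтые занавески'   # "I hate yellow curtains" (premise from the diff)
hypothesis2 = 'Занавески мне нравятся'     # placeholder hypothesis: "I like the curtains"

inputs = tokenizer(premise2, hypothesis2, return_tensors='pt')
with torch.inference_mode():
    prediction = model(**inputs)

# Same post-processing as in the README: softmax over logits, then map class indices to names.
p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
print({v: p[k] for k, v in model.config.id2label.items()})
# e.g. {'not_entailment': ..., 'entailment': ...}
```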
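The raw fractions written to the model-index block and the percentages written to the metrics table describe the same evaluation run. A quick check, with values copied from the `+` lines of this commit, that they agree after rounding to two decimals:

```python
# Values copied from the added (+) lines of this commit; the loop confirms that
# the raw model-index fractions round to the percentages in the README table.
metrics = {
    "Accuracy":  0.6742671009771987,  # table: 67.43%
    "F1":        0.6710526315789473,  # table: 67.11%
    "Precision": 0.6754966887417219,  # table: 67.55%
    "Recall":    0.6666666666666666,  # table: 66.67%
}
for name, value in metrics.items():
    print(f"{name}: {round(value * 100, 2)}%")
```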