Update README.md
README.md CHANGED
@@ -26,16 +26,16 @@ model-index:
       split: validation
     metrics:
     - type: accuracy
-      value: 0.
+      value: 0.6742671009771987
       name: Accuracy
     - type: f1
-      value: 0.
+      value: 0.6710526315789473
       name: F1
     - type: precision
-      value: 0.
+      value: 0.6754966887417219
       name: Precision
     - type: recall
-      value: 0.
+      value: 0.6666666666666666
       name: Recall
 ---
 
@@ -69,7 +69,7 @@ with torch.inference_mode():
     )
 p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
 print({v: p[k] for k, v in model.config.id2label.items()})
-# {'not_entailment': 0.
+# {'not_entailment': 0.68763, 'entailment': 0.31237}
 
 # An example concerning sentiments
 premise2 = 'Я ненавижу желтые занавески'
@@ -80,7 +80,7 @@ with torch.inference_mode():
     )
 p = torch.softmax(prediction.logits, -1).cpu().numpy()[0]
 print({v: p[k] for k, v in model.config.id2label.items()})
-# {'not_entailment': 0.
+# {'not_entailment': 0.558041, 'entailment': 0.441959}
 ```
 
 ## Model Performance Metrics
@@ -89,13 +89,13 @@ The following metrics summarize the performance of the model on the test dataset
 
 | Metric                           | Value                     |
 |----------------------------------|---------------------------|
-| **Validation Loss**              | 0.
-| **Validation Accuracy**          | 67.
-| **Validation F1 Score**          |
-| **Validation Precision**         |
-| **Validation Recall**            |
-| **Validation Runtime***          | 0.
-| **Samples per Second***          | 1
-| **Steps per Second***            | 7.
+| **Validation Loss**              | 0.6492                    |
+| **Validation Accuracy**          | 67.43%                    |
+| **Validation F1 Score**          | 67.11%                    |
+| **Validation Precision**         | 67.55%                    |
+| **Validation Recall**            | 66.67%                    |
+| **Validation Runtime***          | 0.2631 seconds            |
+| **Samples per Second***          | 1 167.02                  |
+| **Steps per Second***            | 7.60                      |
 
 *Using T4 GPU with Google Colab
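The inference snippets in the diff build their printed probability dictionary from `torch.softmax` over the logits and the model's `id2label` mapping. A minimal, dependency-free sketch of that last step, using hypothetical toy logits in place of a real forward pass:

```python
import math

# Toy logits standing in for prediction.logits[0] (hypothetical values;
# the real numbers come from the model's forward pass).
logits = [0.8, 0.0]
id2label = {0: 'not_entailment', 1: 'entailment'}

# Plain-Python softmax, mirroring torch.softmax(prediction.logits, -1)
exps = [math.exp(x) for x in logits]
p = [e / sum(exps) for e in exps]

# Same dict comprehension as in the model card snippet
result = {v: p[k] for k, v in id2label.items()}
print(result)
```

The comprehension simply renames the positional probabilities with their human-readable labels, which is why the card's printed dicts always sum to 1 across `not_entailment` and `entailment`.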
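As a quick consistency check on the updated metadata, the new F1 value is the harmonic mean of the new precision and recall, which can be verified directly from the numbers in the diff:

```python
# Values taken from the updated model-index block above
precision = 0.6754966887417219
recall = 0.6666666666666666

# Standard binary F1: the harmonic mean 2PR / (P + R)
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ≈ 0.6710526, matching the reported F1 value
```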