lc2004 committed
Commit b8f1c79 · verified · 1 Parent(s): a21b31d

Upload 2 files
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+btc_prediction_20251020_113426.png filter=lfs diff=lfs merge=lfs -text
btc_prediction_20251020_113426.png ADDED

Git LFS Details

  • SHA256: 98e2c8dbdcf1228d304772518392d096ecddefe836955136ed171e0fbb79dde4
  • Pointer size: 131 Bytes
  • Size of remote file: 350 kB
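
The 131-byte pointer size follows directly from the Git LFS pointer-file format: a version line, an `oid sha256:` line, and a `size` line. A minimal sketch of the pointer stored for this PNG (the exact byte count of the remote file is not shown above, so the six-digit `size` value below is a placeholder consistent with "350 kB"):

```python
# Reconstruct a Git LFS pointer file like the one stored for the PNG above.
# The oid is the SHA256 from this commit; the size value is a placeholder
# (any 6-digit byte count) since the exact byte count is not shown on the page.
oid = "98e2c8dbdcf1228d304772518392d096ecddefe836955136ed171e0fbb79dde4"
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{oid}\n"
    "size 350000\n"
)
# Version line (43 bytes) + oid line (76 bytes) + size line (12 bytes)
# gives exactly the 131-byte pointer size reported above.
print(len(pointer.encode("utf-8")))  # → 131
```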
tokenizer_training_rank_0.log ADDED
@@ -0,0 +1,1365 @@
1
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - === Tokenizer Training Started ===
2
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Experiment Name: BTCUSDT_1h_finetune
3
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Log Directory: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/logs
4
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Rank: 0
5
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Timestamp: 2025-10-19 13:15:19
6
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Loading pretrained tokenizer...
7
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Tokenizer parameters: 3,958,042
8
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - === Training Configuration ===
9
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Data path: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/data/BTCUSDT_1h_20251018_220012_fixed.csv
10
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Lookback window: 512
11
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Predict window: 48
12
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Batch size: 32
13
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Learning rate: 0.0002
14
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Training epochs: 30
15
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Device: cuda:0
16
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Distributed training: False
17
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Starting tokenizer fine-tuning training...
18
+ 2025-10-19 13:15:19 - tokenizer_training_rank_0 - INFO - Starting tokenizer training...
19
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - === Tokenizer Training Started ===
20
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Experiment Name: BTCUSDT_1h_finetune
21
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Log Directory: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/logs
22
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Rank: 0
23
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Timestamp: 2025-10-19 13:16:50
24
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Loading pretrained tokenizer...
25
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Tokenizer parameters: 3,958,042
26
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - === Training Configuration ===
27
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Data path: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/data/BTCUSDT_1h_20251018_220012_fixed.csv
28
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Lookback window: 512
29
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Predict window: 48
30
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Batch size: 32
31
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Learning rate: 0.0002
32
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Training epochs: 30
33
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Device: cuda:0
34
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Distributed training: False
35
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Starting tokenizer fine-tuning training...
36
+ 2025-10-19 13:16:50 - tokenizer_training_rank_0 - INFO - Starting tokenizer training...
37
+ 2025-10-19 13:16:54 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 50/475] LR: 0.000026, Loss: -0.0294
38
+ 2025-10-19 13:16:54 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0703
39
+ - Recon Loss Pre: 0.0073
40
+ - Recon Loss All: 0.0041
41
+ 2025-10-19 13:16:58 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 100/475] LR: 0.000043, Loss: -0.0297
42
+ 2025-10-19 13:16:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0704
43
+ - Recon Loss Pre: 0.0071
44
+ - Recon Loss All: 0.0039
45
+ 2025-10-19 13:17:01 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 150/475] LR: 0.000070, Loss: -0.0299
46
+ 2025-10-19 13:17:01 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0704
47
+ - Recon Loss Pre: 0.0068
48
+ - Recon Loss All: 0.0038
49
+ 2025-10-19 13:17:04 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 200/475] LR: 0.000101, Loss: -0.0299
50
+ 2025-10-19 13:17:04 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0704
51
+ - Recon Loss Pre: 0.0069
52
+ - Recon Loss All: 0.0037
53
+ 2025-10-19 13:17:07 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 250/475] LR: 0.000134, Loss: -0.0300
54
+ 2025-10-19 13:17:07 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705
55
+ - Recon Loss Pre: 0.0067
56
+ - Recon Loss All: 0.0037
57
+ 2025-10-19 13:17:11 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 300/475] LR: 0.000164, Loss: -0.0300
58
+ 2025-10-19 13:17:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705
59
+ - Recon Loss Pre: 0.0070
60
+ - Recon Loss All: 0.0036
61
+ 2025-10-19 13:17:14 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 350/475] LR: 0.000186, Loss: -0.0303
62
+ 2025-10-19 13:17:14 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0705
63
+ - Recon Loss Pre: 0.0063
64
+ - Recon Loss All: 0.0035
65
+ 2025-10-19 13:17:17 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 400/475] LR: 0.000198, Loss: -0.0301
66
+ 2025-10-19 13:17:17 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0706
67
+ - Recon Loss Pre: 0.0065
68
+ - Recon Loss All: 0.0038
69
+ 2025-10-19 13:17:20 - tokenizer_training_rank_0 - INFO - [Epoch 1/30, Step 450/475] LR: 0.000200, Loss: -0.0306
70
+ 2025-10-19 13:17:20 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707
71
+ - Recon Loss Pre: 0.0060
72
+ - Recon Loss All: 0.0035
73
+ 2025-10-19 13:17:23 - tokenizer_training_rank_0 - INFO -
74
+ --- Epoch 1/30 Summary ---
75
+ Validation Loss: 0.0032
76
+ Epoch Time: 0:00:32
77
+ Total Training Time: 0:00:32
78
+
79
+ 2025-10-19 13:17:23 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/tokenizer/best_model (validation loss: 0.0032)
80
+ 2025-10-19 13:17:25 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 25/475] LR: 0.000200, Loss: -0.0309
81
+ 2025-10-19 13:17:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709
82
+ - Recon Loss Pre: 0.0057
83
+ - Recon Loss All: 0.0033
84
+ 2025-10-19 13:17:28 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 75/475] LR: 0.000200, Loss: -0.0309
85
+ 2025-10-19 13:17:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709
86
+ - Recon Loss Pre: 0.0057
87
+ - Recon Loss All: 0.0033
88
+ 2025-10-19 13:17:31 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 125/475] LR: 0.000200, Loss: -0.0303
89
+ 2025-10-19 13:17:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0707
90
+ - Recon Loss Pre: 0.0065
91
+ - Recon Loss All: 0.0037
92
+ 2025-10-19 13:17:35 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 175/475] LR: 0.000200, Loss: -0.0309
93
+ 2025-10-19 13:17:35 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0708
94
+ - Recon Loss Pre: 0.0057
95
+ - Recon Loss All: 0.0033
96
+ 2025-10-19 13:17:38 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 225/475] LR: 0.000200, Loss: -0.0308
97
+ 2025-10-19 13:17:38 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709
98
+ - Recon Loss Pre: 0.0059
99
+ - Recon Loss All: 0.0034
100
+ 2025-10-19 13:17:41 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 275/475] LR: 0.000200, Loss: -0.0310
101
+ 2025-10-19 13:17:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709
102
+ - Recon Loss Pre: 0.0057
103
+ - Recon Loss All: 0.0031
104
+ 2025-10-19 13:17:44 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 325/475] LR: 0.000200, Loss: -0.0310
105
+ 2025-10-19 13:17:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710
106
+ - Recon Loss Pre: 0.0057
107
+ - Recon Loss All: 0.0033
108
+ 2025-10-19 13:17:47 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 375/475] LR: 0.000200, Loss: -0.0307
109
+ 2025-10-19 13:17:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709
110
+ - Recon Loss Pre: 0.0060
111
+ - Recon Loss All: 0.0034
112
+ 2025-10-19 13:17:50 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 425/475] LR: 0.000199, Loss: -0.0309
113
+ 2025-10-19 13:17:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0709
114
+ - Recon Loss Pre: 0.0059
115
+ - Recon Loss All: 0.0033
116
+ 2025-10-19 13:17:54 - tokenizer_training_rank_0 - INFO - [Epoch 2/30, Step 475/475] LR: 0.000199, Loss: -0.0309
117
+ 2025-10-19 13:17:54 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710
118
+ - Recon Loss Pre: 0.0059
119
+ - Recon Loss All: 0.0033
120
+ 2025-10-19 13:17:55 - tokenizer_training_rank_0 - INFO -
121
+ --- Epoch 2/30 Summary ---
122
+ Validation Loss: 0.0032
123
+ Epoch Time: 0:00:31
124
+ Total Training Time: 0:00:31
125
+
126
+ 2025-10-19 13:17:55 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/tokenizer/best_model (validation loss: 0.0032)
127
+ 2025-10-19 13:17:58 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 50/475] LR: 0.000199, Loss: -0.0314
128
+ 2025-10-19 13:17:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
129
+ - Recon Loss Pre: 0.0053
130
+ - Recon Loss All: 0.0030
131
+ 2025-10-19 13:18:01 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 100/475] LR: 0.000199, Loss: -0.0311
132
+ 2025-10-19 13:18:01 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710
133
+ - Recon Loss Pre: 0.0056
134
+ - Recon Loss All: 0.0033
135
+ 2025-10-19 13:18:04 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 150/475] LR: 0.000199, Loss: -0.0313
136
+ 2025-10-19 13:18:04 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
137
+ - Recon Loss Pre: 0.0055
138
+ - Recon Loss All: 0.0030
139
+ 2025-10-19 13:18:08 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 200/475] LR: 0.000199, Loss: -0.0309
140
+ 2025-10-19 13:18:08 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0710
141
+ - Recon Loss Pre: 0.0059
142
+ - Recon Loss All: 0.0033
143
+ 2025-10-19 13:18:11 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 250/475] LR: 0.000198, Loss: -0.0312
144
+ 2025-10-19 13:18:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
145
+ - Recon Loss Pre: 0.0055
146
+ - Recon Loss All: 0.0031
147
+ 2025-10-19 13:18:14 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 300/475] LR: 0.000198, Loss: -0.0314
148
+ 2025-10-19 13:18:14 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
149
+ - Recon Loss Pre: 0.0053
150
+ - Recon Loss All: 0.0031
151
+ 2025-10-19 13:18:17 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 350/475] LR: 0.000198, Loss: -0.0312
152
+ 2025-10-19 13:18:17 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
153
+ - Recon Loss Pre: 0.0055
154
+ - Recon Loss All: 0.0032
155
+ 2025-10-19 13:18:20 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 400/475] LR: 0.000198, Loss: -0.0318
156
+ 2025-10-19 13:18:20 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
157
+ - Recon Loss Pre: 0.0049
158
+ - Recon Loss All: 0.0028
159
+ 2025-10-19 13:18:24 - tokenizer_training_rank_0 - INFO - [Epoch 3/30, Step 450/475] LR: 0.000198, Loss: -0.0314
160
+ 2025-10-19 13:18:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
161
+ - Recon Loss Pre: 0.0052
162
+ - Recon Loss All: 0.0030
163
+ 2025-10-19 13:18:27 - tokenizer_training_rank_0 - INFO -
164
+ --- Epoch 3/30 Summary ---
165
+ Validation Loss: 0.0032
166
+ Epoch Time: 0:00:31
167
+ Total Training Time: 0:00:31
168
+
169
+ 2025-10-19 13:18:28 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 25/475] LR: 0.000197, Loss: -0.0313
170
+ 2025-10-19 13:18:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
171
+ - Recon Loss Pre: 0.0054
172
+ - Recon Loss All: 0.0031
173
+ 2025-10-19 13:18:31 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 75/475] LR: 0.000197, Loss: -0.0314
174
+ 2025-10-19 13:18:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
175
+ - Recon Loss Pre: 0.0052
176
+ - Recon Loss All: 0.0030
177
+ 2025-10-19 13:18:34 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 125/475] LR: 0.000197, Loss: -0.0311
178
+ 2025-10-19 13:18:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
179
+ - Recon Loss Pre: 0.0057
180
+ - Recon Loss All: 0.0033
181
+ 2025-10-19 13:18:38 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 175/475] LR: 0.000196, Loss: -0.0316
182
+ 2025-10-19 13:18:38 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
183
+ - Recon Loss Pre: 0.0052
184
+ - Recon Loss All: 0.0029
185
+ 2025-10-19 13:18:41 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 225/475] LR: 0.000196, Loss: -0.0312
186
+ 2025-10-19 13:18:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
187
+ - Recon Loss Pre: 0.0055
188
+ - Recon Loss All: 0.0031
189
+ 2025-10-19 13:18:44 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 275/475] LR: 0.000196, Loss: -0.0311
190
+ 2025-10-19 13:18:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
191
+ - Recon Loss Pre: 0.0057
192
+ - Recon Loss All: 0.0032
193
+ 2025-10-19 13:18:48 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 325/475] LR: 0.000196, Loss: -0.0312
194
+ 2025-10-19 13:18:48 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
195
+ - Recon Loss Pre: 0.0056
196
+ - Recon Loss All: 0.0031
197
+ 2025-10-19 13:18:51 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 375/475] LR: 0.000195, Loss: -0.0316
198
+ 2025-10-19 13:18:51 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
199
+ - Recon Loss Pre: 0.0050
200
+ - Recon Loss All: 0.0029
201
+ 2025-10-19 13:18:54 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 425/475] LR: 0.000195, Loss: -0.0312
202
+ 2025-10-19 13:18:54 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
203
+ - Recon Loss Pre: 0.0055
204
+ - Recon Loss All: 0.0032
205
+ 2025-10-19 13:18:57 - tokenizer_training_rank_0 - INFO - [Epoch 4/30, Step 475/475] LR: 0.000194, Loss: -0.0314
206
+ 2025-10-19 13:18:57 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
207
+ - Recon Loss Pre: 0.0055
208
+ - Recon Loss All: 0.0030
209
+ 2025-10-19 13:18:59 - tokenizer_training_rank_0 - INFO -
210
+ --- Epoch 4/30 Summary ---
211
+ Validation Loss: 0.0031
212
+ Epoch Time: 0:00:32
213
+ Total Training Time: 0:00:32
214
+
215
+ 2025-10-19 13:18:59 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/tokenizer/best_model (validation loss: 0.0031)
216
+ 2025-10-19 13:19:02 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 50/475] LR: 0.000194, Loss: -0.0315
217
+ 2025-10-19 13:19:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
218
+ - Recon Loss Pre: 0.0053
219
+ - Recon Loss All: 0.0030
220
+ 2025-10-19 13:19:06 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 100/475] LR: 0.000194, Loss: -0.0316
221
+ 2025-10-19 13:19:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
222
+ - Recon Loss Pre: 0.0051
223
+ - Recon Loss All: 0.0030
224
+ 2025-10-19 13:19:09 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 150/475] LR: 0.000193, Loss: -0.0316
225
+ 2025-10-19 13:19:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
226
+ - Recon Loss Pre: 0.0052
227
+ - Recon Loss All: 0.0030
228
+ 2025-10-19 13:19:12 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 200/475] LR: 0.000193, Loss: -0.0318
229
+ 2025-10-19 13:19:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
230
+ - Recon Loss Pre: 0.0049
231
+ - Recon Loss All: 0.0028
232
+ 2025-10-19 13:19:15 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 250/475] LR: 0.000192, Loss: -0.0314
233
+ 2025-10-19 13:19:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
234
+ - Recon Loss Pre: 0.0053
235
+ - Recon Loss All: 0.0031
236
+ 2025-10-19 13:19:19 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 300/475] LR: 0.000192, Loss: -0.0316
237
+ 2025-10-19 13:19:19 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
238
+ - Recon Loss Pre: 0.0052
239
+ - Recon Loss All: 0.0030
240
+ 2025-10-19 13:19:22 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 350/475] LR: 0.000192, Loss: -0.0315
241
+ 2025-10-19 13:19:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
242
+ - Recon Loss Pre: 0.0052
243
+ - Recon Loss All: 0.0031
244
+ 2025-10-19 13:19:25 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 400/475] LR: 0.000191, Loss: -0.0317
245
+ 2025-10-19 13:19:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
246
+ - Recon Loss Pre: 0.0051
247
+ - Recon Loss All: 0.0029
248
+ 2025-10-19 13:19:28 - tokenizer_training_rank_0 - INFO - [Epoch 5/30, Step 450/475] LR: 0.000191, Loss: -0.0320
249
+ 2025-10-19 13:19:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
250
+ - Recon Loss Pre: 0.0047
251
+ - Recon Loss All: 0.0027
252
+ 2025-10-19 13:19:31 - tokenizer_training_rank_0 - INFO -
253
+ --- Epoch 5/30 Summary ---
254
+ Validation Loss: 0.0030
255
+ Epoch Time: 0:00:32
256
+ Total Training Time: 0:00:32
257
+
258
+ 2025-10-19 13:19:31 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/tokenizer/best_model (validation loss: 0.0030)
259
+ 2025-10-19 13:19:33 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 25/475] LR: 0.000190, Loss: -0.0316
260
+ 2025-10-19 13:19:33 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
261
+ - Recon Loss Pre: 0.0053
262
+ - Recon Loss All: 0.0029
263
+ 2025-10-19 13:19:36 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 75/475] LR: 0.000190, Loss: -0.0316
264
+ 2025-10-19 13:19:36 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
265
+ - Recon Loss Pre: 0.0052
266
+ - Recon Loss All: 0.0029
267
+ 2025-10-19 13:19:39 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 125/475] LR: 0.000189, Loss: -0.0316
268
+ 2025-10-19 13:19:39 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
269
+ - Recon Loss Pre: 0.0052
270
+ - Recon Loss All: 0.0029
271
+ 2025-10-19 13:19:42 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 175/475] LR: 0.000189, Loss: -0.0314
272
+ 2025-10-19 13:19:42 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
273
+ - Recon Loss Pre: 0.0054
274
+ - Recon Loss All: 0.0030
275
+ 2025-10-19 13:19:46 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 225/475] LR: 0.000188, Loss: -0.0313
276
+ 2025-10-19 13:19:46 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0711
277
+ - Recon Loss Pre: 0.0055
278
+ - Recon Loss All: 0.0031
279
+ 2025-10-19 13:19:49 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 275/475] LR: 0.000188, Loss: -0.0315
280
+ 2025-10-19 13:19:49 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
281
+ - Recon Loss Pre: 0.0052
282
+ - Recon Loss All: 0.0029
283
+ 2025-10-19 13:19:52 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 325/475] LR: 0.000187, Loss: -0.0313
284
+ 2025-10-19 13:19:52 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
285
+ - Recon Loss Pre: 0.0055
286
+ - Recon Loss All: 0.0031
287
+ 2025-10-19 13:19:56 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 375/475] LR: 0.000186, Loss: -0.0315
288
+ 2025-10-19 13:19:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
289
+ - Recon Loss Pre: 0.0051
290
+ - Recon Loss All: 0.0031
291
+ 2025-10-19 13:19:59 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 425/475] LR: 0.000186, Loss: -0.0315
292
+ 2025-10-19 13:19:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
293
+ - Recon Loss Pre: 0.0052
294
+ - Recon Loss All: 0.0030
295
+ 2025-10-19 13:20:02 - tokenizer_training_rank_0 - INFO - [Epoch 6/30, Step 475/475] LR: 0.000185, Loss: -0.0317
296
+ 2025-10-19 13:20:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
297
+ - Recon Loss Pre: 0.0050
298
+ - Recon Loss All: 0.0028
299
+ 2025-10-19 13:20:03 - tokenizer_training_rank_0 - INFO -
300
+ --- Epoch 6/30 Summary ---
301
+ Validation Loss: 0.0031
302
+ Epoch Time: 0:00:32
303
+ Total Training Time: 0:00:32
304
+
305
+ 2025-10-19 13:20:07 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 50/475] LR: 0.000185, Loss: -0.0316
306
+ 2025-10-19 13:20:07 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
307
+ - Recon Loss Pre: 0.0051
308
+ - Recon Loss All: 0.0029
309
+ 2025-10-19 13:20:10 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 100/475] LR: 0.000184, Loss: -0.0316
310
+ 2025-10-19 13:20:10 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
311
+ - Recon Loss Pre: 0.0052
312
+ - Recon Loss All: 0.0029
313
+ 2025-10-19 13:20:13 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 150/475] LR: 0.000183, Loss: -0.0317
314
+ 2025-10-19 13:20:13 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
315
+ - Recon Loss Pre: 0.0049
316
+ - Recon Loss All: 0.0029
317
+ 2025-10-19 13:20:17 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 200/475] LR: 0.000183, Loss: -0.0314
318
+ 2025-10-19 13:20:17 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
319
+ - Recon Loss Pre: 0.0055
320
+ - Recon Loss All: 0.0030
321
+ 2025-10-19 13:20:20 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 250/475] LR: 0.000182, Loss: -0.0315
322
+ 2025-10-19 13:20:20 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
323
+ - Recon Loss Pre: 0.0052
324
+ - Recon Loss All: 0.0030
325
+ 2025-10-19 13:20:24 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 300/475] LR: 0.000181, Loss: -0.0315
326
+ 2025-10-19 13:20:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
327
+ - Recon Loss Pre: 0.0052
328
+ - Recon Loss All: 0.0030
329
+ 2025-10-19 13:20:27 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 350/475] LR: 0.000181, Loss: -0.0316
330
+ 2025-10-19 13:20:27 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
331
+ - Recon Loss Pre: 0.0051
332
+ - Recon Loss All: 0.0030
333
+ 2025-10-19 13:20:30 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 400/475] LR: 0.000180, Loss: -0.0314
334
+ 2025-10-19 13:20:30 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
335
+ - Recon Loss Pre: 0.0052
336
+ - Recon Loss All: 0.0032
337
+ 2025-10-19 13:20:34 - tokenizer_training_rank_0 - INFO - [Epoch 7/30, Step 450/475] LR: 0.000179, Loss: -0.0319
338
+ 2025-10-19 13:20:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
339
+ - Recon Loss Pre: 0.0048
340
+ - Recon Loss All: 0.0028
341
+ 2025-10-19 13:20:36 - tokenizer_training_rank_0 - INFO -
342
+ --- Epoch 7/30 Summary ---
343
+ Validation Loss: 0.0031
344
+ Epoch Time: 0:00:32
345
+ Total Training Time: 0:00:32
346
+
347
+ 2025-10-19 13:20:38 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 25/475] LR: 0.000179, Loss: -0.0314
348
+ 2025-10-19 13:20:38 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
349
+ - Recon Loss Pre: 0.0054
350
+ - Recon Loss All: 0.0031
351
+ 2025-10-19 13:20:41 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 75/475] LR: 0.000178, Loss: -0.0317
352
+ 2025-10-19 13:20:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
353
+ - Recon Loss Pre: 0.0049
354
+ - Recon Loss All: 0.0029
355
+ 2025-10-19 13:20:44 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 125/475] LR: 0.000177, Loss: -0.0315
356
+ 2025-10-19 13:20:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
357
+ - Recon Loss Pre: 0.0053
358
+ - Recon Loss All: 0.0031
359
+ 2025-10-19 13:20:48 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 175/475] LR: 0.000177, Loss: -0.0319
360
+ 2025-10-19 13:20:48 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
361
+ - Recon Loss Pre: 0.0047
362
+ - Recon Loss All: 0.0027
363
+ 2025-10-19 13:20:51 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 225/475] LR: 0.000176, Loss: -0.0315
364
+ 2025-10-19 13:20:51 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
365
+ - Recon Loss Pre: 0.0052
366
+ - Recon Loss All: 0.0030
367
+ 2025-10-19 13:20:54 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 275/475] LR: 0.000175, Loss: -0.0316
368
+ 2025-10-19 13:20:54 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
369
+ - Recon Loss Pre: 0.0051
370
+ - Recon Loss All: 0.0029
371
+ 2025-10-19 13:20:58 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 325/475] LR: 0.000174, Loss: -0.0314
372
+ 2025-10-19 13:20:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
373
+ - Recon Loss Pre: 0.0054
374
+ - Recon Loss All: 0.0031
375
+ 2025-10-19 13:21:01 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 375/475] LR: 0.000174, Loss: -0.0317
376
+ 2025-10-19 13:21:01 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
377
+ - Recon Loss Pre: 0.0049
378
+ - Recon Loss All: 0.0030
379
+ 2025-10-19 13:21:05 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 425/475] LR: 0.000173, Loss: -0.0316
380
+ 2025-10-19 13:21:05 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
381
+ - Recon Loss Pre: 0.0052
382
+ - Recon Loss All: 0.0030
383
+ 2025-10-19 13:21:08 - tokenizer_training_rank_0 - INFO - [Epoch 8/30, Step 475/475] LR: 0.000172, Loss: -0.0319
384
+ 2025-10-19 13:21:08 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
385
+ - Recon Loss Pre: 0.0047
386
+ - Recon Loss All: 0.0028
387
+ 2025-10-19 13:21:09 - tokenizer_training_rank_0 - INFO -
388
+ --- Epoch 8/30 Summary ---
389
+ Validation Loss: 0.0031
390
+ Epoch Time: 0:00:32
391
+ Total Training Time: 0:00:32
392
+
393
+ 2025-10-19 13:21:12 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 50/475] LR: 0.000171, Loss: -0.0315
394
+ 2025-10-19 13:21:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
395
+ - Recon Loss Pre: 0.0052
396
+ - Recon Loss All: 0.0030
397
+ 2025-10-19 13:21:16 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 100/475] LR: 0.000170, Loss: -0.0314
398
+ 2025-10-19 13:21:16 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
399
+ - Recon Loss Pre: 0.0052
400
+ - Recon Loss All: 0.0032
401
+ 2025-10-19 13:21:19 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 150/475] LR: 0.000170, Loss: -0.0318
402
+ 2025-10-19 13:21:19 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
403
+ - Recon Loss Pre: 0.0048
404
+ - Recon Loss All: 0.0029
405
+ 2025-10-19 13:21:22 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 200/475] LR: 0.000169, Loss: -0.0314
406
+ 2025-10-19 13:21:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
407
+ - Recon Loss Pre: 0.0055
408
+ - Recon Loss All: 0.0030
409
+ 2025-10-19 13:21:26 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 250/475] LR: 0.000168, Loss: -0.0317
410
+ 2025-10-19 13:21:26 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
411
+ - Recon Loss Pre: 0.0050
412
+ - Recon Loss All: 0.0029
413
+ 2025-10-19 13:21:29 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 300/475] LR: 0.000167, Loss: -0.0316
414
+ 2025-10-19 13:21:29 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
415
+ - Recon Loss Pre: 0.0051
416
+ - Recon Loss All: 0.0030
417
+ 2025-10-19 13:21:32 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 350/475] LR: 0.000166, Loss: -0.0314
418
+ 2025-10-19 13:21:32 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
419
+ - Recon Loss Pre: 0.0053
420
+ - Recon Loss All: 0.0031
421
+ 2025-10-19 13:21:35 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 400/475] LR: 0.000165, Loss: -0.0317
422
+ 2025-10-19 13:21:35 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
423
+ - Recon Loss Pre: 0.0049
424
+ - Recon Loss All: 0.0029
425
+ 2025-10-19 13:21:39 - tokenizer_training_rank_0 - INFO - [Epoch 9/30, Step 450/475] LR: 0.000165, Loss: -0.0316
426
+ 2025-10-19 13:21:39 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
427
+ - Recon Loss Pre: 0.0051
428
+ - Recon Loss All: 0.0030
429
+ 2025-10-19 13:21:41 - tokenizer_training_rank_0 - INFO -
430
+ --- Epoch 9/30 Summary ---
431
+ Validation Loss: 0.0031
432
+ Epoch Time: 0:00:32
433
+ Total Training Time: 0:00:32
434
+
435
+ 2025-10-19 13:21:43 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 25/475] LR: 0.000164, Loss: -0.0315
+ 2025-10-19 13:21:43 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
+ - Recon Loss Pre: 0.0052
+ - Recon Loss All: 0.0031
+ 2025-10-19 13:21:46 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 75/475] LR: 0.000163, Loss: -0.0316
+ 2025-10-19 13:21:46 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0712
+ - Recon Loss Pre: 0.0051
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:21:50 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 125/475] LR: 0.000162, Loss: -0.0315
+ 2025-10-19 13:21:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
+ - Recon Loss Pre: 0.0052
+ - Recon Loss All: 0.0031
+ 2025-10-19 13:21:53 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 175/475] LR: 0.000161, Loss: -0.0318
+ 2025-10-19 13:21:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
+ - Recon Loss Pre: 0.0048
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:21:56 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 225/475] LR: 0.000160, Loss: -0.0316
+ 2025-10-19 13:21:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
+ - Recon Loss Pre: 0.0052
+ - Recon Loss All: 0.0030
+ 2025-10-19 13:22:00 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 275/475] LR: 0.000159, Loss: -0.0319
+ 2025-10-19 13:22:00 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0048
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:22:03 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 325/475] LR: 0.000158, Loss: -0.0319
+ 2025-10-19 13:22:03 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:22:07 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 375/475] LR: 0.000157, Loss: -0.0318
+ 2025-10-19 13:22:07 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0049
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:22:10 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 425/475] LR: 0.000156, Loss: -0.0318
+ 2025-10-19 13:22:10 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0048
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:22:13 - tokenizer_training_rank_0 - INFO - [Epoch 10/30, Step 475/475] LR: 0.000155, Loss: -0.0320
+ 2025-10-19 13:22:13 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:22:14 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 10/30 Summary ---
+ Validation Loss: 0.0031
+ Epoch Time: 0:00:32
+ Total Training Time: 0:00:32
+
+ 2025-10-19 13:22:17 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 50/475] LR: 0.000155, Loss: -0.0315
+ 2025-10-19 13:22:17 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
+ - Recon Loss Pre: 0.0052
+ - Recon Loss All: 0.0031
+ 2025-10-19 13:22:21 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 100/475] LR: 0.000154, Loss: -0.0320
+ 2025-10-19 13:22:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:22:24 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 150/475] LR: 0.000153, Loss: -0.0316
+ 2025-10-19 13:22:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0713
+ - Recon Loss Pre: 0.0051
+ - Recon Loss All: 0.0031
+ 2025-10-19 13:22:27 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 200/475] LR: 0.000152, Loss: -0.0318
+ 2025-10-19 13:22:27 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0049
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:22:31 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 250/475] LR: 0.000151, Loss: -0.0320
+ 2025-10-19 13:22:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:22:34 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 300/475] LR: 0.000150, Loss: -0.0319
+ 2025-10-19 13:22:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:22:37 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 350/475] LR: 0.000149, Loss: -0.0319
+ 2025-10-19 13:22:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:22:41 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 400/475] LR: 0.000148, Loss: -0.0320
+ 2025-10-19 13:22:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:22:44 - tokenizer_training_rank_0 - INFO - [Epoch 11/30, Step 450/475] LR: 0.000147, Loss: -0.0319
+ 2025-10-19 13:22:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:22:47 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 11/30 Summary ---
+ Validation Loss: 0.0031
+ Epoch Time: 0:00:32
+ Total Training Time: 0:00:32
+
+ 2025-10-19 13:22:48 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 25/475] LR: 0.000146, Loss: -0.0321
+ 2025-10-19 13:22:48 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:22:51 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 75/475] LR: 0.000145, Loss: -0.0321
+ 2025-10-19 13:22:51 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:22:55 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 125/475] LR: 0.000144, Loss: -0.0317
+ 2025-10-19 13:22:55 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0050
+ - Recon Loss All: 0.0030
+ 2025-10-19 13:22:58 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 175/475] LR: 0.000143, Loss: -0.0320
+ 2025-10-19 13:22:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:02 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 225/475] LR: 0.000142, Loss: -0.0320
+ 2025-10-19 13:23:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:05 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 275/475] LR: 0.000141, Loss: -0.0319
+ 2025-10-19 13:23:05 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0030
+ 2025-10-19 13:23:08 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 325/475] LR: 0.000140, Loss: -0.0318
+ 2025-10-19 13:23:08 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0714
+ - Recon Loss Pre: 0.0049
+ - Recon Loss All: 0.0030
+ 2025-10-19 13:23:11 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 375/475] LR: 0.000138, Loss: -0.0320
+ 2025-10-19 13:23:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:23:15 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 425/475] LR: 0.000137, Loss: -0.0321
+ 2025-10-19 13:23:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:23:18 - tokenizer_training_rank_0 - INFO - [Epoch 12/30, Step 475/475] LR: 0.000136, Loss: -0.0320
+ 2025-10-19 13:23:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:19 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 12/30 Summary ---
+ Validation Loss: 0.0030
+ Epoch Time: 0:00:32
+ Total Training Time: 0:00:32
+
+ 2025-10-19 13:23:22 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 50/475] LR: 0.000135, Loss: -0.0320
+ 2025-10-19 13:23:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:25 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 100/475] LR: 0.000134, Loss: -0.0322
+ 2025-10-19 13:23:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:23:28 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 150/475] LR: 0.000133, Loss: -0.0321
+ 2025-10-19 13:23:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:31 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 200/475] LR: 0.000132, Loss: -0.0314
+ 2025-10-19 13:23:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0059
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:23:35 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 250/475] LR: 0.000131, Loss: -0.0323
+ 2025-10-19 13:23:35 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:23:38 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 300/475] LR: 0.000130, Loss: -0.0323
+ 2025-10-19 13:23:38 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:23:41 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 350/475] LR: 0.000129, Loss: -0.0320
+ 2025-10-19 13:23:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:44 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 400/475] LR: 0.000128, Loss: -0.0323
+ 2025-10-19 13:23:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:23:48 - tokenizer_training_rank_0 - INFO - [Epoch 13/30, Step 450/475] LR: 0.000127, Loss: -0.0320
+ 2025-10-19 13:23:48 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:23:50 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 13/30 Summary ---
+ Validation Loss: 0.0030
+ Epoch Time: 0:00:31
+ Total Training Time: 0:00:31
+
+ 2025-10-19 13:23:50 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/tokenizer/best_model (validation loss: 0.0030)
+ 2025-10-19 13:23:52 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 25/475] LR: 0.000126, Loss: -0.0320
+ 2025-10-19 13:23:52 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:55 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 75/475] LR: 0.000124, Loss: -0.0322
+ 2025-10-19 13:23:55 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:23:59 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 125/475] LR: 0.000123, Loss: -0.0321
+ 2025-10-19 13:23:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:24:02 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 175/475] LR: 0.000122, Loss: -0.0321
+ 2025-10-19 13:24:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:24:06 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 225/475] LR: 0.000121, Loss: -0.0321
+ 2025-10-19 13:24:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:24:09 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 275/475] LR: 0.000120, Loss: -0.0321
+ 2025-10-19 13:24:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:24:12 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 325/475] LR: 0.000119, Loss: -0.0322
+ 2025-10-19 13:24:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:24:15 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 375/475] LR: 0.000118, Loss: -0.0323
+ 2025-10-19 13:24:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:24:19 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 425/475] LR: 0.000117, Loss: -0.0324
+ 2025-10-19 13:24:19 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0042
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:24:22 - tokenizer_training_rank_0 - INFO - [Epoch 14/30, Step 475/475] LR: 0.000116, Loss: -0.0324
+ 2025-10-19 13:24:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0025
+ 2025-10-19 13:24:23 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 14/30 Summary ---
+ Validation Loss: 0.0030
+ Epoch Time: 0:00:32
+ Total Training Time: 0:00:32
+
+ 2025-10-19 13:24:23 - tokenizer_training_rank_0 - INFO - Best model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/tokenizer/best_model (validation loss: 0.0030)
+ 2025-10-19 13:24:26 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 50/475] LR: 0.000114, Loss: -0.0320
+ 2025-10-19 13:24:26 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0048
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:24:30 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 100/475] LR: 0.000113, Loss: -0.0322
+ 2025-10-19 13:24:30 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:24:33 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 150/475] LR: 0.000112, Loss: -0.0323
+ 2025-10-19 13:24:33 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:24:36 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 200/475] LR: 0.000111, Loss: -0.0323
+ 2025-10-19 13:24:36 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:24:40 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 250/475] LR: 0.000110, Loss: -0.0318
+ 2025-10-19 13:24:40 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0050
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:24:43 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 300/475] LR: 0.000109, Loss: -0.0324
+ 2025-10-19 13:24:43 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0042
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:24:46 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 350/475] LR: 0.000108, Loss: -0.0323
+ 2025-10-19 13:24:46 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:24:50 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 400/475] LR: 0.000107, Loss: -0.0322
+ 2025-10-19 13:24:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:24:53 - tokenizer_training_rank_0 - INFO - [Epoch 15/30, Step 450/475] LR: 0.000105, Loss: -0.0322
+ 2025-10-19 13:24:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:24:56 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 15/30 Summary ---
+ Validation Loss: 0.0030
+ Epoch Time: 0:00:32
+ Total Training Time: 0:00:32
+
+ 2025-10-19 13:24:58 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 25/475] LR: 0.000104, Loss: -0.0321
+ 2025-10-19 13:24:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:01 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 75/475] LR: 0.000103, Loss: -0.0322
+ 2025-10-19 13:25:01 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:04 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 125/475] LR: 0.000102, Loss: -0.0320
+ 2025-10-19 13:25:04 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0048
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:25:08 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 175/475] LR: 0.000101, Loss: -0.0320
+ 2025-10-19 13:25:08 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:25:11 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 225/475] LR: 0.000100, Loss: -0.0319
+ 2025-10-19 13:25:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0048
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:25:14 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 275/475] LR: 0.000099, Loss: -0.0322
+ 2025-10-19 13:25:14 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:18 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 325/475] LR: 0.000097, Loss: -0.0323
+ 2025-10-19 13:25:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:21 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 375/475] LR: 0.000096, Loss: -0.0318
+ 2025-10-19 13:25:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0050
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:25:24 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 425/475] LR: 0.000095, Loss: -0.0322
+ 2025-10-19 13:25:24 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:25:27 - tokenizer_training_rank_0 - INFO - [Epoch 16/30, Step 475/475] LR: 0.000094, Loss: -0.0322
+ 2025-10-19 13:25:27 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:25:29 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 16/30 Summary ---
+ Validation Loss: 0.0031
+ Epoch Time: 0:00:32
+ Total Training Time: 0:00:32
+
+ 2025-10-19 13:25:32 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 50/475] LR: 0.000093, Loss: -0.0319
+ 2025-10-19 13:25:32 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0048
+ - Recon Loss All: 0.0030
+ 2025-10-19 13:25:35 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 100/475] LR: 0.000092, Loss: -0.0319
+ 2025-10-19 13:25:35 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0050
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:39 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 150/475] LR: 0.000091, Loss: -0.0321
+ 2025-10-19 13:25:39 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:42 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 200/475] LR: 0.000090, Loss: -0.0322
+ 2025-10-19 13:25:42 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:25:45 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 250/475] LR: 0.000088, Loss: -0.0320
+ 2025-10-19 13:25:45 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:25:48 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 300/475] LR: 0.000087, Loss: -0.0321
+ 2025-10-19 13:25:48 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:52 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 350/475] LR: 0.000086, Loss: -0.0321
+ 2025-10-19 13:25:52 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:55 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 400/475] LR: 0.000085, Loss: -0.0321
+ 2025-10-19 13:25:55 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:25:59 - tokenizer_training_rank_0 - INFO - [Epoch 17/30, Step 450/475] LR: 0.000084, Loss: -0.0323
+ 2025-10-19 13:25:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:01 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 17/30 Summary ---
+ Validation Loss: 0.0031
+ Epoch Time: 0:00:32
+ Total Training Time: 0:00:32
+
+ 2025-10-19 13:26:03 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 25/475] LR: 0.000083, Loss: -0.0321
+ 2025-10-19 13:26:03 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:26:06 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 75/475] LR: 0.000082, Loss: -0.0322
+ 2025-10-19 13:26:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:09 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 125/475] LR: 0.000081, Loss: -0.0320
+ 2025-10-19 13:26:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:26:12 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 175/475] LR: 0.000079, Loss: -0.0322
+ 2025-10-19 13:26:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:15 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 225/475] LR: 0.000078, Loss: -0.0320
+ 2025-10-19 13:26:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0047
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:26:18 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 275/475] LR: 0.000077, Loss: -0.0322
+ 2025-10-19 13:26:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:26:20 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 325/475] LR: 0.000076, Loss: -0.0323
+ 2025-10-19 13:26:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:23 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 375/475] LR: 0.000075, Loss: -0.0323
+ 2025-10-19 13:26:23 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:26:26 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 425/475] LR: 0.000074, Loss: -0.0323
+ 2025-10-19 13:26:26 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:29 - tokenizer_training_rank_0 - INFO - [Epoch 18/30, Step 475/475] LR: 0.000073, Loss: -0.0323
+ 2025-10-19 13:26:29 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:31 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 18/30 Summary ---
+ Validation Loss: 0.0032
+ Epoch Time: 0:00:29
+ Total Training Time: 0:00:29
+
+ 2025-10-19 13:26:34 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 50/475] LR: 0.000072, Loss: -0.0322
+ 2025-10-19 13:26:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:37 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 100/475] LR: 0.000071, Loss: -0.0321
+ 2025-10-19 13:26:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:26:41 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 150/475] LR: 0.000070, Loss: -0.0322
+ 2025-10-19 13:26:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:44 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 200/475] LR: 0.000068, Loss: -0.0321
+ 2025-10-19 13:26:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:26:47 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 250/475] LR: 0.000067, Loss: -0.0324
+ 2025-10-19 13:26:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0717
+ - Recon Loss Pre: 0.0043
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:26:50 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 300/475] LR: 0.000066, Loss: -0.0321
+ 2025-10-19 13:26:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:26:53 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 350/475] LR: 0.000065, Loss: -0.0323
+ 2025-10-19 13:26:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0717
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:56 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 400/475] LR: 0.000064, Loss: -0.0323
+ 2025-10-19 13:26:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0717
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:26:58 - tokenizer_training_rank_0 - INFO - [Epoch 19/30, Step 450/475] LR: 0.000063, Loss: -0.0324
+ 2025-10-19 13:26:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0717
+ - Recon Loss Pre: 0.0042
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:01 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 19/30 Summary ---
+ Validation Loss: 0.0032
+ Epoch Time: 0:00:30
+ Total Training Time: 0:00:30
+
+ 2025-10-19 13:27:02 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 25/475] LR: 0.000062, Loss: -0.0322
+ 2025-10-19 13:27:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0717
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:05 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 75/475] LR: 0.000061, Loss: -0.0321
+ 2025-10-19 13:27:05 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:08 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 125/475] LR: 0.000060, Loss: -0.0321
+ 2025-10-19 13:27:08 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:11 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 175/475] LR: 0.000059, Loss: -0.0322
+ 2025-10-19 13:27:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:14 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 225/475] LR: 0.000058, Loss: -0.0319
+ 2025-10-19 13:27:14 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
+ - Recon Loss Pre: 0.0049
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:17 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 275/475] LR: 0.000057, Loss: -0.0322
+ 2025-10-19 13:27:17 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:20 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 325/475] LR: 0.000056, Loss: -0.0322
+ 2025-10-19 13:27:20 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:22 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 375/475] LR: 0.000055, Loss: -0.0322
+ 2025-10-19 13:27:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:25 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 425/475] LR: 0.000054, Loss: -0.0322
+ 2025-10-19 13:27:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:28 - tokenizer_training_rank_0 - INFO - [Epoch 20/30, Step 475/475] LR: 0.000053, Loss: -0.0323
+ 2025-10-19 13:27:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:27:29 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 20/30 Summary ---
+ Validation Loss: 0.0032
+ Epoch Time: 0:00:28
+ Total Training Time: 0:00:28
+
+ 2025-10-19 13:27:33 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 50/475] LR: 0.000052, Loss: -0.0322
+ 2025-10-19 13:27:33 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:36 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 100/475] LR: 0.000051, Loss: -0.0322
+ 2025-10-19 13:27:36 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:39 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 150/475] LR: 0.000050, Loss: -0.0321
+ 2025-10-19 13:27:39 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0029
+ 2025-10-19 13:27:42 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 200/475] LR: 0.000049, Loss: -0.0322
+ 2025-10-19 13:27:42 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:45 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 250/475] LR: 0.000048, Loss: -0.0321
+ 2025-10-19 13:27:45 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:48 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 300/475] LR: 0.000047, Loss: -0.0321
+ 2025-10-19 13:27:48 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0046
+ - Recon Loss All: 0.0028
+ 2025-10-19 13:27:51 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 350/475] LR: 0.000046, Loss: -0.0324
+ 2025-10-19 13:27:51 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0042
+ - Recon Loss All: 0.0026
+ 2025-10-19 13:27:53 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 400/475] LR: 0.000045, Loss: -0.0322
+ 2025-10-19 13:27:54 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0044
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:56 - tokenizer_training_rank_0 - INFO - [Epoch 21/30, Step 450/475] LR: 0.000044, Loss: -0.0322
+ 2025-10-19 13:27:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
+ - Recon Loss Pre: 0.0045
+ - Recon Loss All: 0.0027
+ 2025-10-19 13:27:59 - tokenizer_training_rank_0 - INFO -
+ --- Epoch 21/30 Summary ---
+ Validation Loss: 0.0031
+ Epoch Time: 0:00:29
+ Total Training Time: 0:00:29
+
+ 2025-10-19 13:28:01 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 25/475] LR: 0.000043, Loss: -0.0324
966
+ 2025-10-19 13:28:01 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
967
+ - Recon Loss Pre: 0.0042
968
+ - Recon Loss All: 0.0026
969
+ 2025-10-19 13:28:03 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 75/475] LR: 0.000042, Loss: -0.0323
970
+ 2025-10-19 13:28:03 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
971
+ - Recon Loss Pre: 0.0044
972
+ - Recon Loss All: 0.0027
973
+ 2025-10-19 13:28:06 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 125/475] LR: 0.000041, Loss: -0.0321
974
+ 2025-10-19 13:28:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
975
+ - Recon Loss Pre: 0.0047
976
+ - Recon Loss All: 0.0027
977
+ 2025-10-19 13:28:09 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 175/475] LR: 0.000040, Loss: -0.0321
978
+ 2025-10-19 13:28:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
979
+ - Recon Loss Pre: 0.0045
980
+ - Recon Loss All: 0.0028
981
+ 2025-10-19 13:28:12 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 225/475] LR: 0.000039, Loss: -0.0321
982
+ 2025-10-19 13:28:12 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
983
+ - Recon Loss Pre: 0.0045
984
+ - Recon Loss All: 0.0028
985
+ 2025-10-19 13:28:15 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 275/475] LR: 0.000039, Loss: -0.0322
986
+ 2025-10-19 13:28:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
987
+ - Recon Loss Pre: 0.0045
988
+ - Recon Loss All: 0.0028
989
+ 2025-10-19 13:28:18 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 325/475] LR: 0.000038, Loss: -0.0321
990
+ 2025-10-19 13:28:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
991
+ - Recon Loss Pre: 0.0047
992
+ - Recon Loss All: 0.0028
993
+ 2025-10-19 13:28:20 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 375/475] LR: 0.000037, Loss: -0.0324
994
+ 2025-10-19 13:28:20 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
995
+ - Recon Loss Pre: 0.0043
996
+ - Recon Loss All: 0.0025
997
+ 2025-10-19 13:28:23 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 425/475] LR: 0.000036, Loss: -0.0321
998
+ 2025-10-19 13:28:23 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
999
+ - Recon Loss Pre: 0.0046
1000
+ - Recon Loss All: 0.0028
1001
+ 2025-10-19 13:28:26 - tokenizer_training_rank_0 - INFO - [Epoch 22/30, Step 475/475] LR: 0.000035, Loss: -0.0321
1002
+ 2025-10-19 13:28:26 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1003
+ - Recon Loss Pre: 0.0046
1004
+ - Recon Loss All: 0.0029
1005
+ 2025-10-19 13:28:27 - tokenizer_training_rank_0 - INFO -
1006
+ --- Epoch 22/30 Summary ---
1007
+ Validation Loss: 0.0032
1008
+ Epoch Time: 0:00:28
1009
+ Total Training Time: 0:00:28
1010
+
1011
+ 2025-10-19 13:28:31 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 50/475] LR: 0.000034, Loss: -0.0321
1012
+ 2025-10-19 13:28:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1013
+ - Recon Loss Pre: 0.0046
1014
+ - Recon Loss All: 0.0028
1015
+ 2025-10-19 13:28:34 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 100/475] LR: 0.000033, Loss: -0.0321
1016
+ 2025-10-19 13:28:34 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1017
+ - Recon Loss Pre: 0.0046
1018
+ - Recon Loss All: 0.0027
1019
+ 2025-10-19 13:28:37 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 150/475] LR: 0.000032, Loss: -0.0322
1020
+ 2025-10-19 13:28:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1021
+ - Recon Loss Pre: 0.0044
1022
+ - Recon Loss All: 0.0028
1023
+ 2025-10-19 13:28:41 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 200/475] LR: 0.000032, Loss: -0.0322
1024
+ 2025-10-19 13:28:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1025
+ - Recon Loss Pre: 0.0046
1026
+ - Recon Loss All: 0.0027
1027
+ 2025-10-19 13:28:44 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 250/475] LR: 0.000031, Loss: -0.0323
1028
+ 2025-10-19 13:28:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1029
+ - Recon Loss Pre: 0.0044
1030
+ - Recon Loss All: 0.0026
1031
+ 2025-10-19 13:28:47 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 300/475] LR: 0.000030, Loss: -0.0321
1032
+ 2025-10-19 13:28:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1033
+ - Recon Loss Pre: 0.0045
1034
+ - Recon Loss All: 0.0028
1035
+ 2025-10-19 13:28:50 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 350/475] LR: 0.000029, Loss: -0.0321
1036
+ 2025-10-19 13:28:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1037
+ - Recon Loss Pre: 0.0047
1038
+ - Recon Loss All: 0.0028
1039
+ 2025-10-19 13:28:53 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 400/475] LR: 0.000028, Loss: -0.0322
1040
+ 2025-10-19 13:28:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1041
+ - Recon Loss Pre: 0.0045
1042
+ - Recon Loss All: 0.0027
1043
+ 2025-10-19 13:28:56 - tokenizer_training_rank_0 - INFO - [Epoch 23/30, Step 450/475] LR: 0.000028, Loss: -0.0321
1044
+ 2025-10-19 13:28:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1045
+ - Recon Loss Pre: 0.0046
1046
+ - Recon Loss All: 0.0028
1047
+ 2025-10-19 13:28:58 - tokenizer_training_rank_0 - INFO -
1048
+ --- Epoch 23/30 Summary ---
1049
+ Validation Loss: 0.0031
1050
+ Epoch Time: 0:00:30
1051
+ Total Training Time: 0:00:30
1052
+
1053
+ 2025-10-19 13:29:00 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 25/475] LR: 0.000027, Loss: -0.0322
1054
+ 2025-10-19 13:29:00 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1055
+ - Recon Loss Pre: 0.0045
1056
+ - Recon Loss All: 0.0027
1057
+ 2025-10-19 13:29:03 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 75/475] LR: 0.000026, Loss: -0.0323
1058
+ 2025-10-19 13:29:03 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1059
+ - Recon Loss Pre: 0.0044
1060
+ - Recon Loss All: 0.0026
1061
+ 2025-10-19 13:29:06 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 125/475] LR: 0.000025, Loss: -0.0321
1062
+ 2025-10-19 13:29:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1063
+ - Recon Loss Pre: 0.0046
1064
+ - Recon Loss All: 0.0028
1065
+ 2025-10-19 13:29:09 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 175/475] LR: 0.000025, Loss: -0.0320
1066
+ 2025-10-19 13:29:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1067
+ - Recon Loss Pre: 0.0047
1068
+ - Recon Loss All: 0.0028
1069
+ 2025-10-19 13:29:13 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 225/475] LR: 0.000024, Loss: -0.0321
1070
+ 2025-10-19 13:29:13 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1071
+ - Recon Loss Pre: 0.0047
1072
+ - Recon Loss All: 0.0028
1073
+ 2025-10-19 13:29:16 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 275/475] LR: 0.000023, Loss: -0.0321
1074
+ 2025-10-19 13:29:16 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1075
+ - Recon Loss Pre: 0.0046
1076
+ - Recon Loss All: 0.0027
1077
+ 2025-10-19 13:29:19 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 325/475] LR: 0.000022, Loss: -0.0321
1078
+ 2025-10-19 13:29:19 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1079
+ - Recon Loss Pre: 0.0046
1080
+ - Recon Loss All: 0.0026
1081
+ 2025-10-19 13:29:22 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 375/475] LR: 0.000022, Loss: -0.0321
1082
+ 2025-10-19 13:29:22 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1083
+ - Recon Loss Pre: 0.0046
1084
+ - Recon Loss All: 0.0028
1085
+ 2025-10-19 13:29:25 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 425/475] LR: 0.000021, Loss: -0.0323
1086
+ 2025-10-19 13:29:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1087
+ - Recon Loss Pre: 0.0044
1088
+ - Recon Loss All: 0.0026
1089
+ 2025-10-19 13:29:28 - tokenizer_training_rank_0 - INFO - [Epoch 24/30, Step 475/475] LR: 0.000020, Loss: -0.0323
1090
+ 2025-10-19 13:29:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1091
+ - Recon Loss Pre: 0.0044
1092
+ - Recon Loss All: 0.0027
1093
+ 2025-10-19 13:29:30 - tokenizer_training_rank_0 - INFO -
1094
+ --- Epoch 24/30 Summary ---
1095
+ Validation Loss: 0.0031
1096
+ Epoch Time: 0:00:31
1097
+ Total Training Time: 0:00:31
1098
+
1099
+ 2025-10-19 13:29:33 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 50/475] LR: 0.000020, Loss: -0.0321
1100
+ 2025-10-19 13:29:33 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1101
+ - Recon Loss Pre: 0.0047
1102
+ - Recon Loss All: 0.0027
1103
+ 2025-10-19 13:29:36 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 100/475] LR: 0.000019, Loss: -0.0323
1104
+ 2025-10-19 13:29:36 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1105
+ - Recon Loss Pre: 0.0044
1106
+ - Recon Loss All: 0.0026
1107
+ 2025-10-19 13:29:39 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 150/475] LR: 0.000018, Loss: -0.0320
1108
+ 2025-10-19 13:29:39 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
1109
+ - Recon Loss Pre: 0.0047
1110
+ - Recon Loss All: 0.0028
1111
+ 2025-10-19 13:29:43 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 200/475] LR: 0.000018, Loss: -0.0323
1112
+ 2025-10-19 13:29:43 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1113
+ - Recon Loss Pre: 0.0043
1114
+ - Recon Loss All: 0.0027
1115
+ 2025-10-19 13:29:46 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 250/475] LR: 0.000017, Loss: -0.0321
1116
+ 2025-10-19 13:29:46 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
1117
+ - Recon Loss Pre: 0.0046
1118
+ - Recon Loss All: 0.0028
1119
+ 2025-10-19 13:29:49 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 300/475] LR: 0.000016, Loss: -0.0324
1120
+ 2025-10-19 13:29:49 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1121
+ - Recon Loss Pre: 0.0043
1122
+ - Recon Loss All: 0.0026
1123
+ 2025-10-19 13:29:53 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 350/475] LR: 0.000016, Loss: -0.0321
1124
+ 2025-10-19 13:29:53 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1125
+ - Recon Loss Pre: 0.0046
1126
+ - Recon Loss All: 0.0028
1127
+ 2025-10-19 13:29:56 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 400/475] LR: 0.000015, Loss: -0.0324
1128
+ 2025-10-19 13:29:56 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1129
+ - Recon Loss Pre: 0.0043
1130
+ - Recon Loss All: 0.0026
1131
+ 2025-10-19 13:29:59 - tokenizer_training_rank_0 - INFO - [Epoch 25/30, Step 450/475] LR: 0.000015, Loss: -0.0323
1132
+ 2025-10-19 13:29:59 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1133
+ - Recon Loss Pre: 0.0044
1134
+ - Recon Loss All: 0.0026
1135
+ 2025-10-19 13:30:02 - tokenizer_training_rank_0 - INFO -
1136
+ --- Epoch 25/30 Summary ---
1137
+ Validation Loss: 0.0031
1138
+ Epoch Time: 0:00:32
1139
+ Total Training Time: 0:00:32
1140
+
1141
+ 2025-10-19 13:30:04 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 25/475] LR: 0.000014, Loss: -0.0322
1142
+ 2025-10-19 13:30:04 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1143
+ - Recon Loss Pre: 0.0045
1144
+ - Recon Loss All: 0.0027
1145
+ 2025-10-19 13:30:07 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 75/475] LR: 0.000013, Loss: -0.0321
1146
+ 2025-10-19 13:30:07 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1147
+ - Recon Loss Pre: 0.0045
1148
+ - Recon Loss All: 0.0028
1149
+ 2025-10-19 13:30:10 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 125/475] LR: 0.000013, Loss: -0.0320
1150
+ 2025-10-19 13:30:10 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1151
+ - Recon Loss Pre: 0.0046
1152
+ - Recon Loss All: 0.0029
1153
+ 2025-10-19 13:30:14 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 175/475] LR: 0.000012, Loss: -0.0322
1154
+ 2025-10-19 13:30:14 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1155
+ - Recon Loss Pre: 0.0044
1156
+ - Recon Loss All: 0.0027
1157
+ 2025-10-19 13:30:17 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 225/475] LR: 0.000012, Loss: -0.0318
1158
+ 2025-10-19 13:30:17 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1159
+ - Recon Loss Pre: 0.0050
1160
+ - Recon Loss All: 0.0028
1161
+ 2025-10-19 13:30:20 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 275/475] LR: 0.000011, Loss: -0.0322
1162
+ 2025-10-19 13:30:20 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1163
+ - Recon Loss Pre: 0.0047
1164
+ - Recon Loss All: 0.0026
1165
+ 2025-10-19 13:30:23 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 325/475] LR: 0.000011, Loss: -0.0322
1166
+ 2025-10-19 13:30:23 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1167
+ - Recon Loss Pre: 0.0046
1168
+ - Recon Loss All: 0.0027
1169
+ 2025-10-19 13:30:27 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 375/475] LR: 0.000010, Loss: -0.0324
1170
+ 2025-10-19 13:30:27 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0717
1171
+ - Recon Loss Pre: 0.0042
1172
+ - Recon Loss All: 0.0026
1173
+ 2025-10-19 13:30:30 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 425/475] LR: 0.000010, Loss: -0.0323
1174
+ 2025-10-19 13:30:30 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1175
+ - Recon Loss Pre: 0.0043
1176
+ - Recon Loss All: 0.0026
1177
+ 2025-10-19 13:30:33 - tokenizer_training_rank_0 - INFO - [Epoch 26/30, Step 475/475] LR: 0.000009, Loss: -0.0321
1178
+ 2025-10-19 13:30:33 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1179
+ - Recon Loss Pre: 0.0046
1180
+ - Recon Loss All: 0.0027
1181
+ 2025-10-19 13:30:34 - tokenizer_training_rank_0 - INFO -
1182
+ --- Epoch 26/30 Summary ---
1183
+ Validation Loss: 0.0031
1184
+ Epoch Time: 0:00:32
1185
+ Total Training Time: 0:00:32
1186
+
1187
+ 2025-10-19 13:30:37 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 50/475] LR: 0.000009, Loss: -0.0322
1188
+ 2025-10-19 13:30:37 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1189
+ - Recon Loss Pre: 0.0044
1190
+ - Recon Loss All: 0.0027
1191
+ 2025-10-19 13:30:41 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 100/475] LR: 0.000008, Loss: -0.0322
1192
+ 2025-10-19 13:30:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1193
+ - Recon Loss Pre: 0.0044
1194
+ - Recon Loss All: 0.0027
1195
+ 2025-10-19 13:30:44 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 150/475] LR: 0.000008, Loss: -0.0324
1196
+ 2025-10-19 13:30:44 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1197
+ - Recon Loss Pre: 0.0043
1198
+ - Recon Loss All: 0.0026
1199
+ 2025-10-19 13:30:47 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 200/475] LR: 0.000007, Loss: -0.0322
1200
+ 2025-10-19 13:30:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1201
+ - Recon Loss Pre: 0.0045
1202
+ - Recon Loss All: 0.0027
1203
+ 2025-10-19 13:30:50 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 250/475] LR: 0.000007, Loss: -0.0322
1204
+ 2025-10-19 13:30:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1205
+ - Recon Loss Pre: 0.0045
1206
+ - Recon Loss All: 0.0027
1207
+ 2025-10-19 13:30:52 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 300/475] LR: 0.000007, Loss: -0.0321
1208
+ 2025-10-19 13:30:52 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1209
+ - Recon Loss Pre: 0.0047
1210
+ - Recon Loss All: 0.0026
1211
+ 2025-10-19 13:30:55 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 350/475] LR: 0.000006, Loss: -0.0324
1212
+ 2025-10-19 13:30:55 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1213
+ - Recon Loss Pre: 0.0042
1214
+ - Recon Loss All: 0.0026
1215
+ 2025-10-19 13:30:58 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 400/475] LR: 0.000006, Loss: -0.0321
1216
+ 2025-10-19 13:30:58 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1217
+ - Recon Loss Pre: 0.0046
1218
+ - Recon Loss All: 0.0027
1219
+ 2025-10-19 13:31:02 - tokenizer_training_rank_0 - INFO - [Epoch 27/30, Step 450/475] LR: 0.000005, Loss: -0.0321
1220
+ 2025-10-19 13:31:02 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1221
+ - Recon Loss Pre: 0.0046
1222
+ - Recon Loss All: 0.0027
1223
+ 2025-10-19 13:31:04 - tokenizer_training_rank_0 - INFO -
1224
+ --- Epoch 27/30 Summary ---
1225
+ Validation Loss: 0.0031
1226
+ Epoch Time: 0:00:30
1227
+ Total Training Time: 0:00:30
1228
+
1229
+ 2025-10-19 13:31:06 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 25/475] LR: 0.000005, Loss: -0.0324
1230
+ 2025-10-19 13:31:06 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1231
+ - Recon Loss Pre: 0.0042
1232
+ - Recon Loss All: 0.0026
1233
+ 2025-10-19 13:31:09 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 75/475] LR: 0.000005, Loss: -0.0323
1234
+ 2025-10-19 13:31:09 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1235
+ - Recon Loss Pre: 0.0043
1236
+ - Recon Loss All: 0.0026
1237
+ 2025-10-19 13:31:13 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 125/475] LR: 0.000004, Loss: -0.0322
1238
+ 2025-10-19 13:31:13 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1239
+ - Recon Loss Pre: 0.0044
1240
+ - Recon Loss All: 0.0027
1241
+ 2025-10-19 13:31:16 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 175/475] LR: 0.000004, Loss: -0.0323
1242
+ 2025-10-19 13:31:16 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1243
+ - Recon Loss Pre: 0.0044
1244
+ - Recon Loss All: 0.0026
1245
+ 2025-10-19 13:31:19 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 225/475] LR: 0.000004, Loss: -0.0323
1246
+ 2025-10-19 13:31:19 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1247
+ - Recon Loss Pre: 0.0043
1248
+ - Recon Loss All: 0.0027
1249
+ 2025-10-19 13:31:23 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 275/475] LR: 0.000003, Loss: -0.0322
1250
+ 2025-10-19 13:31:23 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1251
+ - Recon Loss Pre: 0.0045
1252
+ - Recon Loss All: 0.0027
1253
+ 2025-10-19 13:31:26 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 325/475] LR: 0.000003, Loss: -0.0323
1254
+ 2025-10-19 13:31:26 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1255
+ - Recon Loss Pre: 0.0043
1256
+ - Recon Loss All: 0.0026
1257
+ 2025-10-19 13:31:29 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 375/475] LR: 0.000003, Loss: -0.0323
1258
+ 2025-10-19 13:31:29 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1259
+ - Recon Loss Pre: 0.0044
1260
+ - Recon Loss All: 0.0026
1261
+ 2025-10-19 13:31:32 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 425/475] LR: 0.000003, Loss: -0.0322
1262
+ 2025-10-19 13:31:32 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1263
+ - Recon Loss Pre: 0.0044
1264
+ - Recon Loss All: 0.0027
1265
+ 2025-10-19 13:31:36 - tokenizer_training_rank_0 - INFO - [Epoch 28/30, Step 475/475] LR: 0.000002, Loss: -0.0318
1266
+ 2025-10-19 13:31:36 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
1267
+ - Recon Loss Pre: 0.0050
1268
+ - Recon Loss All: 0.0029
1269
+ 2025-10-19 13:31:37 - tokenizer_training_rank_0 - INFO -
1270
+ --- Epoch 28/30 Summary ---
1271
+ Validation Loss: 0.0031
1272
+ Epoch Time: 0:00:32
1273
+ Total Training Time: 0:00:32
1274
+
1275
+ 2025-10-19 13:31:40 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 50/475] LR: 0.000002, Loss: -0.0323
1276
+ 2025-10-19 13:31:40 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1277
+ - Recon Loss Pre: 0.0043
1278
+ - Recon Loss All: 0.0027
1279
+ 2025-10-19 13:31:43 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 100/475] LR: 0.000002, Loss: -0.0324
1280
+ 2025-10-19 13:31:43 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1281
+ - Recon Loss Pre: 0.0042
1282
+ - Recon Loss All: 0.0025
1283
+ 2025-10-19 13:31:47 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 150/475] LR: 0.000002, Loss: -0.0323
1284
+ 2025-10-19 13:31:47 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1285
+ - Recon Loss Pre: 0.0045
1286
+ - Recon Loss All: 0.0026
1287
+ 2025-10-19 13:31:50 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 200/475] LR: 0.000001, Loss: -0.0318
1288
+ 2025-10-19 13:31:50 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0715
1289
+ - Recon Loss Pre: 0.0049
1290
+ - Recon Loss All: 0.0030
1291
+ 2025-10-19 13:31:54 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 250/475] LR: 0.000001, Loss: -0.0323
1292
+ 2025-10-19 13:31:54 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1293
+ - Recon Loss Pre: 0.0044
1294
+ - Recon Loss All: 0.0027
1295
+ 2025-10-19 13:31:57 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 300/475] LR: 0.000001, Loss: -0.0322
1296
+ 2025-10-19 13:31:57 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1297
+ - Recon Loss Pre: 0.0045
1298
+ - Recon Loss All: 0.0027
1299
+ 2025-10-19 13:32:01 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 350/475] LR: 0.000001, Loss: -0.0322
1300
+ 2025-10-19 13:32:01 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1301
+ - Recon Loss Pre: 0.0045
1302
+ - Recon Loss All: 0.0027
1303
+ 2025-10-19 13:32:04 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 400/475] LR: 0.000001, Loss: -0.0323
1304
+ 2025-10-19 13:32:04 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1305
+ - Recon Loss Pre: 0.0044
1306
+ - Recon Loss All: 0.0026
1307
+ 2025-10-19 13:32:07 - tokenizer_training_rank_0 - INFO - [Epoch 29/30, Step 450/475] LR: 0.000001, Loss: -0.0321
1308
+ 2025-10-19 13:32:07 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1309
+ - Recon Loss Pre: 0.0046
1310
+ - Recon Loss All: 0.0027
1311
+ 2025-10-19 13:32:10 - tokenizer_training_rank_0 - INFO -
1312
+ --- Epoch 29/30 Summary ---
1313
+ Validation Loss: 0.0031
1314
+ Epoch Time: 0:00:32
1315
+ Total Training Time: 0:00:32
1316
+
1317
+ 2025-10-19 13:32:11 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 25/475] LR: 0.000001, Loss: -0.0323
1318
+ 2025-10-19 13:32:11 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1319
+ - Recon Loss Pre: 0.0044
1320
+ - Recon Loss All: 0.0026
1321
+ 2025-10-19 13:32:15 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 75/475] LR: 0.000000, Loss: -0.0321
1322
+ 2025-10-19 13:32:15 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1323
+ - Recon Loss Pre: 0.0047
1324
+ - Recon Loss All: 0.0028
1325
+ 2025-10-19 13:32:18 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 125/475] LR: 0.000000, Loss: -0.0323
1326
+ 2025-10-19 13:32:18 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1327
+ - Recon Loss Pre: 0.0044
1328
+ - Recon Loss All: 0.0026
1329
+ 2025-10-19 13:32:21 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 175/475] LR: 0.000000, Loss: -0.0322
1330
+ 2025-10-19 13:32:21 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1331
+ - Recon Loss Pre: 0.0044
1332
+ - Recon Loss All: 0.0027
1333
+ 2025-10-19 13:32:25 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 225/475] LR: 0.000000, Loss: -0.0321
1334
+ 2025-10-19 13:32:25 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1335
+ - Recon Loss Pre: 0.0049
1336
+ - Recon Loss All: 0.0026
1337
+ 2025-10-19 13:32:28 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 275/475] LR: 0.000000, Loss: -0.0321
1338
+ 2025-10-19 13:32:28 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1339
+ - Recon Loss Pre: 0.0046
1340
+ - Recon Loss All: 0.0027
1341
+ 2025-10-19 13:32:31 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 325/475] LR: 0.000000, Loss: -0.0323
1342
+ 2025-10-19 13:32:31 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1343
+ - Recon Loss Pre: 0.0043
1344
+ - Recon Loss All: 0.0026
1345
+ 2025-10-19 13:32:34 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 375/475] LR: 0.000000, Loss: -0.0323
1346
+ 2025-10-19 13:32:35 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1347
+ - Recon Loss Pre: 0.0043
1348
+ - Recon Loss All: 0.0027
1349
+ 2025-10-19 13:32:38 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 425/475] LR: 0.000000, Loss: -0.0322
1350
+ 2025-10-19 13:32:38 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1351
+ - Recon Loss Pre: 0.0044
1352
+ - Recon Loss All: 0.0027
1353
+ 2025-10-19 13:32:41 - tokenizer_training_rank_0 - INFO - [Epoch 30/30, Step 475/475] LR: 0.000000, Loss: -0.0321
1354
+ 2025-10-19 13:32:41 - tokenizer_training_rank_0 - INFO - - VQ Loss: -0.0716
1355
+ - Recon Loss Pre: 0.0045
1356
+ - Recon Loss All: 0.0028
1357
+ 2025-10-19 13:32:42 - tokenizer_training_rank_0 - INFO -
1358
+ --- Epoch 30/30 Summary ---
1359
+ Validation Loss: 0.0031
1360
+ Epoch Time: 0:00:32
1361
+ Total Training Time: 0:00:32
1362
+
1363
+ 2025-10-19 13:32:42 - tokenizer_training_rank_0 - INFO - Tokenizer training completed! Best validation loss: 0.0030
1364
+ Training time: 15.87 minutes
1365
+ Model saved to: /root/project/Kronos-Btc-finetune-master/Kronos/finetune_csv/finetuned//BTCUSDT_1h_finetune/tokenizer