webxos committed on
Commit
6cbd1af
·
verified ·
1 Parent(s): c1f4f56

Upload 6 files

Files changed (7)
  1. .gitattributes +1 -0
  2. README.md +108 -0
  3. config.json +29 -0
  4. model_metadata.json +21 -0
  5. particle_data.json +3 -0
  6. terminal_log.txt +111 -0
  7. training_log.json +590 -0
.gitattributes ADDED
@@ -0,0 +1 @@
+ particle_data.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,108 @@
+ # Ionic Sphere Quantum Simulator v7.0
+ ## Real-Time Neural Training System
+
+ **Model Name:** IonicQuantumSimulator_v7.0
+ **Version:** 7.0
+ **Export Date:** 2025-12-31T00:27:29.944Z
+
+ ### Training Summary
+ - **Total Epochs:** 3
+ - **Final Loss:** 0.6713
+ - **Final Accuracy:** 65.6%
+ - **Training Samples:** 800
+ - **Simulation Time:** 37.8s
+
+ ### Dataset Information
+ This package contains real-time captured data from the quantum ionic simulation:
+
+ **Particle Data:**
+ - Frames captured: 29
+ - Particles per frame: 10240
+ - Total position samples: 890880
+ - Time range: 38s
+
+ **Features Captured:**
+ 1. Position (x, y, z) - normalized coordinates
+ 2. Velocity (x, y) - movement vectors
+ 3. Timestamp - simulation time
+ 4. Model state - neural network parameters at capture time
+
+ ### Model Architecture
+ ```
+ Input(5) → Dense(32, relu) → Dropout(0.2)
+ → Dense(16, relu)
+ → Dense(8, relu)
+ → Output(1, sigmoid)
+ ```
+
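For reference, the architecture above maps to TensorFlow.js roughly as follows. This is a minimal sketch reconstructed from the summary in this README and model_metadata.json, not code taken from the simulator itself:

```javascript
// Minimal sketch: rebuild the 5→32→16→8→1 network described above.
// Layer sizes, dropout rate, optimizer, and loss follow this README.
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [5], units: 32, activation: 'relu' }));
model.add(tf.layers.dropout({ rate: 0.2 }));
model.add(tf.layers.dense({ units: 16, activation: 'relu' }));
model.add(tf.layers.dense({ units: 8, activation: 'relu' }));
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));
model.compile({
  optimizer: tf.train.adam(0.001),
  loss: 'binaryCrossentropy',
  metrics: ['accuracy'],
});
```

With this setup, `model.fit(xs, ys, { batchSize: 32, validationSplit: 0.2, shuffle: true })` would match the training configuration listed in the next section.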
+ ### Training Configuration
+ - **Optimizer:** Adam (learning_rate=0.001)
+ - **Loss Function:** Binary Crossentropy
+ - **Batch Size:** 32
+ - **Validation Split:** 20%
+ - **Shuffle:** True
+
+ ### Simulation Parameters
+ - **Ion Count:** 10,240
+ - **Ocean Size:** 200x200 units
+ - **Physics Engine:** GPU.js accelerated
+ - **Render Engine:** Three.js r128
+ - **Target FPS:** 60
+
+ ### File Structure
+ ```
+ ionicsphere_export_v7.0_*.zip/
+ ├── model_metadata.json    # Model configuration and stats
+ ├── training_log.json      # Loss/accuracy per epoch
+ ├── particle_data.json     # Captured particle positions/velocities
+ ├── screenshots/           # PNG frames from simulation
+ │   ├── frame_0_*.png
+ │   └── ...
+ ├── tfjs_model/            # TensorFlow.js model files
+ │   ├── model.json
+ │   └── weights.bin
+ ├── README.md              # This file
+ ├── terminal_log.txt       # CLI interaction history
+ └── config.json            # System configuration
+ ```
+
+ ### Usage Instructions
+
+ **1. Load Model in TensorFlow.js:**
+ ```javascript
+ async function loadModel() {
+   // loadLayersModel reads model.json and automatically fetches the
+   // weights.bin file it references, so no separate fetch is needed.
+   const model = await tf.loadLayersModel('tfjs_model/model.json');
+   return model;
+ }
+ ```
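Once loaded, predictions can be made on a 5-feature input. The feature order below follows model_metadata.json (position x/y/z, velocity x/y); the numeric values are placeholders, not real data, and the call assumes an async context:

```javascript
// Hypothetical usage: one particle's features, input shape [1, 5].
const model = await loadModel();
const input = tf.tensor2d([[0.12, -0.48, 0.9, 0.003, -0.001]]);
const output = model.predict(input);            // sigmoid output, shape [1, 1]
console.log('stability score:', (await output.data())[0]);
input.dispose();
output.dispose();
```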
+
+ **2. Analyze Particle Data:**
+ ```javascript
+ const data = JSON.parse(particleDataJson);
+ const positions = data.positions;   // Array of position frames
+ const velocities = data.velocities; // Array of velocity frames
+ ```
+
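As a rough example of working with these arrays, the sketch below computes the mean particle speed per captured frame. It assumes each entry in `data.velocities` is an array of `[vx, vy]` pairs, one per particle; check particle_data.json for the exact layout before relying on this:

```javascript
// Sketch only: mean speed per frame from the captured velocity data.
const meanSpeedPerFrame = data.velocities.map(frame => {
  const total = frame.reduce((sum, [vx, vy]) => sum + Math.hypot(vx, vy), 0);
  return total / frame.length;
});
console.log('mean speed per frame:', meanSpeedPerFrame);
```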
+ **3. Reproduce Simulation:**
+ - Use Three.js with the provided particle data (see the sketch below)
+ - Apply the same physics parameters
+ - Feed the data into the neural network for stability predictions
+
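A minimal Three.js replay sketch follows. It assumes an existing `scene`, `camera`, and `renderer`, and that each frame in `data.positions` can be flattened into an `[x, y, z, x, y, z, ...]` array; none of this is taken from the original simulator code:

```javascript
// Sketch only: replay captured particle positions as a Three.js point cloud.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position',
  new THREE.Float32BufferAttribute(data.positions[0].flat(), 3));
const cloud = new THREE.Points(geometry,
  new THREE.PointsMaterial({ size: 0.5, color: 0x44aaff }));
scene.add(cloud);

let frame = 0;
(function replay() {
  requestAnimationFrame(replay);
  frame = (frame + 1) % data.positions.length;
  const attr = geometry.getAttribute('position');
  attr.array.set(data.positions[frame].flat());
  attr.needsUpdate = true;
  renderer.render(scene, camera);
})();
```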
+ ### Citation
+ If you use this data in research, please cite:
+ ```bibtex
+ @dataset{ionic_sphere_2024,
+   title={Real-Time Quantum Ionic Simulation Dataset},
+   author={IONICSPHERE Research Team},
+   year={2024},
+   publisher={IONICSPHERE v7.0},
+   url={https://github.com/ionicsphere/simulator}
+ }
+ ```
+
+ ### License
+ Research Use Only - Attribution Required
+
+ ### Contact
+ For questions or access to newer versions, visit the project repository.
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "simulation": {
+     "ion_count": 10240,
+     "ocean_size": 200,
+     "time_step": 0.016,
+     "physics_engine": "GPU.js"
+   },
+   "neural_network": {
+     "input_shape": [
+       5
+     ],
+     "output_shape": [
+       1
+     ],
+     "layers": [
+       32,
+       16,
+       8
+     ],
+     "activation": "relu",
+     "output_activation": "sigmoid"
+   },
+   "export_info": {
+     "version": "7.0",
+     "format": "JSON/ZIP",
+     "total_size": "varies",
+     "compatible_with": "TensorFlow.js, Three.js"
+   }
+ }
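config.json mirrors the architecture from the README, though it does not record the Dropout(0.2) layer. A hedged sketch of building a model directly from these fields (the fetch path and serving setup are assumptions, not part of the export):

```javascript
// Sketch only: construct the dense stack described by config.json.
const config = await (await fetch('config.json')).json();
const net = config.neural_network;

const model = tf.sequential();
net.layers.forEach((units, i) => {
  model.add(tf.layers.dense({
    units,
    activation: net.activation,
    ...(i === 0 ? { inputShape: net.input_shape } : {}),
  }));
});
model.add(tf.layers.dense({
  units: net.output_shape[0],
  activation: net.output_activation,
}));
model.summary();
```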
model_metadata.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "name": "IonicQuantumSimulator_v7.0",
+   "version": "7.0",
+   "export_date": "2025-12-31T00:27:28.589Z",
+   "epochs_trained": 3,
+   "final_loss": 0.6712959408760071,
+   "final_accuracy": 0.65625,
+   "particle_count": 10240,
+   "simulation_time": 36.255,
+   "features": [
+     "position_x",
+     "position_y",
+     "position_z",
+     "velocity_x",
+     "velocity_y"
+   ],
+   "architecture": "5→32→16→8→1",
+   "optimizer": "adam",
+   "learning_rate": 0.001,
+   "batch_size": 32
+ }
particle_data.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6eec7646b413561178ef4b439d8a2e55e6342738a462e44dcd1f13a399cfb12
+ size 46086944
terminal_log.txt ADDED
@@ -0,0 +1,111 @@
+ ========================================
+ REAL-TIME IONIC SIMULATOR v7.0
+ TensorFlow.js + Three.js Integration
+ ========================================
+ Initializing quantum simulation matrix...
+ Loading TensorFlow.js neural kernel...
+ Generating 10,240 synthetic ions...
+ Type 'help' for available commands
+ $ █
+ [SYSTEM] Booting IONICSPHERE v7.0...
+ [TENSORFLOW] Initializing...
+ [TENSORFLOW] Backend: webgl
+ [TENSORFLOW] Creating neural network...
+ [TENSORFLOW] Model created successfully
+ [TENSORFLOW] Architecture: 5→32→16→8→1
+ [TENSORFLOW] Optimizer: Adam (0.001)
+ [DATA] Generating synthetic training data...
+ [DATA] Generated 800 training samples
+ [DATA] Generated 200 validation samples
+ [THREE.JS] Initializing 3D visualization...
+ [IONS] Created 10,240 particles
+ [GPU.JS] Physics kernel initialized
+ [THREE.JS] Visualization ready
+ [SYSTEM] Ready. Type "help" for commands.
+ [SYSTEM] Real-time training: train/stop
+ [SYSTEM] Data export: export
+ [SIMULATION] Started real-time quantum simulation
+ [TRAINING] Started real-time neural training
+ [TRAINING] Using live particle data as input
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [CAPTURE] Stored 10 data frames
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Epoch 1 completed
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Epoch 2 completed
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [CAPTURE] Stored 20 data frames
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Epoch 3 completed
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Error: Cannot start training because another fit() call is ongoing.
+ [TRAINING] Stopped
+ [EXPORT] Creating unified package...
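The repeated `Cannot start training because another fit() call is ongoing.` lines above are TensorFlow.js rejecting overlapping `model.fit()` calls: the real-time loop keeps requesting training while the previous fit is still running. A minimal guard (a sketch, not the simulator's actual code) looks like this:

```javascript
// Sketch only: skip training requests while a fit() call is still in flight.
let isTraining = false;

async function trainOnLiveBatch(model, xs, ys) {
  if (isTraining) return;        // drop this tick instead of triggering the error
  isTraining = true;
  try {
    await model.fit(xs, ys, { epochs: 1, batchSize: 32 });
  } finally {
    isTraining = false;
    xs.dispose();
    ys.dispose();
  }
}
```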
training_log.json ADDED
@@ -0,0 +1,590 @@
1
+ [
2
+ {
3
+ "epoch": 0,
4
+ "batch": 0,
5
+ "loss": 0.7411002516746521,
6
+ "accuracy": 0.3125,
7
+ "timestamp": 1767140820443
8
+ },
9
+ {
10
+ "epoch": 0,
11
+ "batch": 1,
12
+ "loss": 0.748737096786499,
13
+ "accuracy": 0.3125,
14
+ "timestamp": 1767140820916
15
+ },
16
+ {
17
+ "epoch": 0,
18
+ "batch": 2,
19
+ "loss": 0.7897660136222839,
20
+ "accuracy": 0.25,
21
+ "timestamp": 1767140821432
22
+ },
23
+ {
24
+ "epoch": 0,
25
+ "batch": 3,
26
+ "loss": 0.7086161375045776,
27
+ "accuracy": 0.40625,
28
+ "timestamp": 1767140821808
29
+ },
30
+ {
31
+ "epoch": 0,
32
+ "batch": 4,
33
+ "loss": 0.717400312423706,
34
+ "accuracy": 0.375,
35
+ "timestamp": 1767140822505
36
+ },
37
+ {
38
+ "epoch": 0,
39
+ "batch": 5,
40
+ "loss": 0.737858235836029,
41
+ "accuracy": 0.4375,
42
+ "timestamp": 1767140823469
43
+ },
44
+ {
45
+ "epoch": 0,
46
+ "batch": 6,
47
+ "loss": 0.6953486204147339,
48
+ "accuracy": 0.4375,
49
+ "timestamp": 1767140823806
50
+ },
51
+ {
52
+ "epoch": 0,
53
+ "batch": 7,
54
+ "loss": 0.7095077037811279,
55
+ "accuracy": 0.40625,
56
+ "timestamp": 1767140824237
57
+ },
58
+ {
59
+ "epoch": 0,
60
+ "batch": 8,
61
+ "loss": 0.7757364511489868,
62
+ "accuracy": 0.28125,
63
+ "timestamp": 1767140824902
64
+ },
65
+ {
66
+ "epoch": 0,
67
+ "batch": 9,
68
+ "loss": 0.7191400527954102,
69
+ "accuracy": 0.4375,
70
+ "timestamp": 1767140825147
71
+ },
72
+ {
73
+ "epoch": 0,
74
+ "batch": 10,
75
+ "loss": 0.7332611680030823,
76
+ "accuracy": 0.40625,
77
+ "timestamp": 1767140825360
78
+ },
79
+ {
80
+ "epoch": 0,
81
+ "batch": 11,
82
+ "loss": 0.7418195009231567,
83
+ "accuracy": 0.3125,
84
+ "timestamp": 1767140825866
85
+ },
86
+ {
87
+ "epoch": 0,
88
+ "batch": 12,
89
+ "loss": 0.7675049304962158,
90
+ "accuracy": 0.25,
91
+ "timestamp": 1767140826320
92
+ },
93
+ {
94
+ "epoch": 0,
95
+ "batch": 13,
96
+ "loss": 0.7316216230392456,
97
+ "accuracy": 0.3125,
98
+ "timestamp": 1767140826776
99
+ },
100
+ {
101
+ "epoch": 0,
102
+ "batch": 14,
103
+ "loss": 0.6952475309371948,
104
+ "accuracy": 0.5,
105
+ "timestamp": 1767140827294
106
+ },
107
+ {
108
+ "epoch": 0,
109
+ "batch": 15,
110
+ "loss": 0.7152165770530701,
111
+ "accuracy": 0.3125,
112
+ "timestamp": 1767140828054
113
+ },
114
+ {
115
+ "epoch": 0,
116
+ "batch": 16,
117
+ "loss": 0.7109678983688354,
118
+ "accuracy": 0.40625,
119
+ "timestamp": 1767140828120
120
+ },
121
+ {
122
+ "epoch": 0,
123
+ "batch": 17,
124
+ "loss": 0.7420613765716553,
125
+ "accuracy": 0.375,
126
+ "timestamp": 1767140828272
127
+ },
128
+ {
129
+ "epoch": 0,
130
+ "batch": 18,
131
+ "loss": 0.71675705909729,
132
+ "accuracy": 0.3125,
133
+ "timestamp": 1767140828651
134
+ },
135
+ {
136
+ "epoch": 0,
137
+ "batch": 19,
138
+ "loss": 0.7275558114051819,
139
+ "accuracy": 0.3125,
140
+ "timestamp": 1767140829014
141
+ },
142
+ {
143
+ "epoch": 0,
144
+ "batch": 20,
145
+ "loss": 0.7111624479293823,
146
+ "accuracy": 0.34375,
147
+ "timestamp": 1767140829324
148
+ },
149
+ {
150
+ "epoch": 0,
151
+ "batch": 21,
152
+ "loss": 0.6968860030174255,
153
+ "accuracy": 0.40625,
154
+ "timestamp": 1767140829608
155
+ },
156
+ {
157
+ "epoch": 0,
158
+ "batch": 22,
159
+ "loss": 0.7071496248245239,
160
+ "accuracy": 0.53125,
161
+ "timestamp": 1767140829935
162
+ },
163
+ {
164
+ "epoch": 0,
165
+ "batch": 23,
166
+ "loss": 0.6908776760101318,
167
+ "accuracy": 0.46875,
168
+ "timestamp": 1767140830271
169
+ },
170
+ {
171
+ "epoch": 0,
172
+ "batch": 24,
173
+ "loss": 0.6962689161300659,
174
+ "accuracy": 0.40625,
175
+ "timestamp": 1767140830655
176
+ },
177
+ {
178
+ "epoch": 1,
179
+ "batch": 0,
180
+ "loss": 0.7090840935707092,
181
+ "accuracy": 0.375,
182
+ "timestamp": 1767140831106
183
+ },
184
+ {
185
+ "epoch": 1,
186
+ "batch": 1,
187
+ "loss": 0.7116997241973877,
188
+ "accuracy": 0.5,
189
+ "timestamp": 1767140831400
190
+ },
191
+ {
192
+ "epoch": 1,
193
+ "batch": 2,
194
+ "loss": 0.7077443599700928,
195
+ "accuracy": 0.375,
196
+ "timestamp": 1767140831774
197
+ },
198
+ {
199
+ "epoch": 1,
200
+ "batch": 3,
201
+ "loss": 0.6939939856529236,
202
+ "accuracy": 0.5625,
203
+ "timestamp": 1767140832150
204
+ },
205
+ {
206
+ "epoch": 1,
207
+ "batch": 4,
208
+ "loss": 0.6895667314529419,
209
+ "accuracy": 0.46875,
210
+ "timestamp": 1767140832515
211
+ },
212
+ {
213
+ "epoch": 1,
214
+ "batch": 5,
215
+ "loss": 0.699230432510376,
216
+ "accuracy": 0.46875,
217
+ "timestamp": 1767140832989
218
+ },
219
+ {
220
+ "epoch": 1,
221
+ "batch": 6,
222
+ "loss": 0.6921590566635132,
223
+ "accuracy": 0.46875,
224
+ "timestamp": 1767140833190
225
+ },
226
+ {
227
+ "epoch": 1,
228
+ "batch": 7,
229
+ "loss": 0.6988462209701538,
230
+ "accuracy": 0.4375,
231
+ "timestamp": 1767140833516
232
+ },
233
+ {
234
+ "epoch": 1,
235
+ "batch": 8,
236
+ "loss": 0.7171862721443176,
237
+ "accuracy": 0.375,
238
+ "timestamp": 1767140833662
239
+ },
240
+ {
241
+ "epoch": 1,
242
+ "batch": 9,
243
+ "loss": 0.6946218013763428,
244
+ "accuracy": 0.4375,
245
+ "timestamp": 1767140833819
246
+ },
247
+ {
248
+ "epoch": 1,
249
+ "batch": 10,
250
+ "loss": 0.6919615864753723,
251
+ "accuracy": 0.5,
252
+ "timestamp": 1767140833964
253
+ },
254
+ {
255
+ "epoch": 1,
256
+ "batch": 11,
257
+ "loss": 0.6953616142272949,
258
+ "accuracy": 0.4375,
259
+ "timestamp": 1767140834074
260
+ },
261
+ {
262
+ "epoch": 1,
263
+ "batch": 12,
264
+ "loss": 0.7070935368537903,
265
+ "accuracy": 0.375,
266
+ "timestamp": 1767140834223
267
+ },
268
+ {
269
+ "epoch": 1,
270
+ "batch": 13,
271
+ "loss": 0.7055894136428833,
272
+ "accuracy": 0.375,
273
+ "timestamp": 1767140834388
274
+ },
275
+ {
276
+ "epoch": 1,
277
+ "batch": 14,
278
+ "loss": 0.6903753876686096,
279
+ "accuracy": 0.5625,
280
+ "timestamp": 1767140834798
281
+ },
282
+ {
283
+ "epoch": 1,
284
+ "batch": 15,
285
+ "loss": 0.6901687383651733,
286
+ "accuracy": 0.53125,
287
+ "timestamp": 1767140835068
288
+ },
289
+ {
290
+ "epoch": 1,
291
+ "batch": 16,
292
+ "loss": 0.7004857063293457,
293
+ "accuracy": 0.46875,
294
+ "timestamp": 1767140835232
295
+ },
296
+ {
297
+ "epoch": 1,
298
+ "batch": 17,
299
+ "loss": 0.7013710737228394,
300
+ "accuracy": 0.375,
301
+ "timestamp": 1767140835565
302
+ },
303
+ {
304
+ "epoch": 1,
305
+ "batch": 18,
306
+ "loss": 0.6903039813041687,
307
+ "accuracy": 0.46875,
308
+ "timestamp": 1767140835970
309
+ },
310
+ {
311
+ "epoch": 1,
312
+ "batch": 19,
313
+ "loss": 0.6959590315818787,
314
+ "accuracy": 0.53125,
315
+ "timestamp": 1767140836436
316
+ },
317
+ {
318
+ "epoch": 1,
319
+ "batch": 20,
320
+ "loss": 0.6937905550003052,
321
+ "accuracy": 0.40625,
322
+ "timestamp": 1767140837174
323
+ },
324
+ {
325
+ "epoch": 1,
326
+ "batch": 21,
327
+ "loss": 0.6821688413619995,
328
+ "accuracy": 0.59375,
329
+ "timestamp": 1767140837329
330
+ },
331
+ {
332
+ "epoch": 1,
333
+ "batch": 22,
334
+ "loss": 0.6867853999137878,
335
+ "accuracy": 0.59375,
336
+ "timestamp": 1767140837479
337
+ },
338
+ {
339
+ "epoch": 1,
340
+ "batch": 23,
341
+ "loss": 0.6854528784751892,
342
+ "accuracy": 0.5625,
343
+ "timestamp": 1767140837627
344
+ },
345
+ {
346
+ "epoch": 1,
347
+ "batch": 24,
348
+ "loss": 0.6779837608337402,
349
+ "accuracy": 0.6875,
350
+ "timestamp": 1767140837786
351
+ },
352
+ {
353
+ "epoch": 2,
354
+ "batch": 0,
355
+ "loss": 0.6750118732452393,
356
+ "accuracy": 0.6875,
357
+ "timestamp": 1767140837989
358
+ },
359
+ {
360
+ "epoch": 2,
361
+ "batch": 1,
362
+ "loss": 0.6848226189613342,
363
+ "accuracy": 0.625,
364
+ "timestamp": 1767140838171
365
+ },
366
+ {
367
+ "epoch": 2,
368
+ "batch": 2,
369
+ "loss": 0.6864094734191895,
370
+ "accuracy": 0.625,
371
+ "timestamp": 1767140838361
372
+ },
373
+ {
374
+ "epoch": 2,
375
+ "batch": 3,
376
+ "loss": 0.6736193299293518,
377
+ "accuracy": 0.625,
378
+ "timestamp": 1767140838547
379
+ },
380
+ {
381
+ "epoch": 2,
382
+ "batch": 4,
383
+ "loss": 0.6886866092681885,
384
+ "accuracy": 0.46875,
385
+ "timestamp": 1767140838842
386
+ },
387
+ {
388
+ "epoch": 2,
389
+ "batch": 5,
390
+ "loss": 0.6872621774673462,
391
+ "accuracy": 0.5625,
392
+ "timestamp": 1767140839113
393
+ },
394
+ {
395
+ "epoch": 2,
396
+ "batch": 6,
397
+ "loss": 0.6883639097213745,
398
+ "accuracy": 0.5625,
399
+ "timestamp": 1767140839328
400
+ },
401
+ {
402
+ "epoch": 2,
403
+ "batch": 7,
404
+ "loss": 0.7002809643745422,
405
+ "accuracy": 0.53125,
406
+ "timestamp": 1767140839713
407
+ },
408
+ {
409
+ "epoch": 2,
410
+ "batch": 8,
411
+ "loss": 0.6715573072433472,
412
+ "accuracy": 0.625,
413
+ "timestamp": 1767140839960
414
+ },
415
+ {
416
+ "epoch": 2,
417
+ "batch": 9,
418
+ "loss": 0.6938263773918152,
419
+ "accuracy": 0.40625,
420
+ "timestamp": 1767140840368
421
+ },
422
+ {
423
+ "epoch": 2,
424
+ "batch": 10,
425
+ "loss": 0.6874582767486572,
426
+ "accuracy": 0.53125,
427
+ "timestamp": 1767140840631
428
+ },
429
+ {
430
+ "epoch": 2,
431
+ "batch": 11,
432
+ "loss": 0.6666799783706665,
433
+ "accuracy": 0.71875,
434
+ "timestamp": 1767140841058
435
+ },
436
+ {
437
+ "epoch": 2,
438
+ "batch": 12,
439
+ "loss": 0.6758941411972046,
440
+ "accuracy": 0.65625,
441
+ "timestamp": 1767140841589
442
+ },
443
+ {
444
+ "epoch": 2,
445
+ "batch": 13,
446
+ "loss": 0.6774862408638,
447
+ "accuracy": 0.6875,
448
+ "timestamp": 1767140842012
449
+ },
450
+ {
451
+ "epoch": 2,
452
+ "batch": 14,
453
+ "loss": 0.6637570261955261,
454
+ "accuracy": 0.78125,
455
+ "timestamp": 1767140842240
456
+ },
457
+ {
458
+ "epoch": 2,
459
+ "batch": 15,
460
+ "loss": 0.684646487236023,
461
+ "accuracy": 0.53125,
462
+ "timestamp": 1767140842587
463
+ },
464
+ {
465
+ "epoch": 2,
466
+ "batch": 16,
467
+ "loss": 0.6750656366348267,
468
+ "accuracy": 0.71875,
469
+ "timestamp": 1767140842826
470
+ },
471
+ {
472
+ "epoch": 2,
473
+ "batch": 17,
474
+ "loss": 0.6863251328468323,
475
+ "accuracy": 0.5625,
476
+ "timestamp": 1767140843053
477
+ },
478
+ {
479
+ "epoch": 2,
480
+ "batch": 18,
481
+ "loss": 0.7029653191566467,
482
+ "accuracy": 0.5,
483
+ "timestamp": 1767140843289
484
+ },
485
+ {
486
+ "epoch": 2,
487
+ "batch": 19,
488
+ "loss": 0.6970543265342712,
489
+ "accuracy": 0.59375,
490
+ "timestamp": 1767140843429
491
+ },
492
+ {
493
+ "epoch": 2,
494
+ "batch": 20,
495
+ "loss": 0.7156466245651245,
496
+ "accuracy": 0.375,
497
+ "timestamp": 1767140843686
498
+ },
499
+ {
500
+ "epoch": 2,
501
+ "batch": 21,
502
+ "loss": 0.6840257048606873,
503
+ "accuracy": 0.53125,
504
+ "timestamp": 1767140843923
505
+ },
506
+ {
507
+ "epoch": 2,
508
+ "batch": 22,
509
+ "loss": 0.6979974508285522,
510
+ "accuracy": 0.5625,
511
+ "timestamp": 1767140844310
512
+ },
513
+ {
514
+ "epoch": 2,
515
+ "batch": 23,
516
+ "loss": 0.685049295425415,
517
+ "accuracy": 0.53125,
518
+ "timestamp": 1767140844455
519
+ },
520
+ {
521
+ "epoch": 2,
522
+ "batch": 24,
523
+ "loss": 0.6642530560493469,
524
+ "accuracy": 0.75,
525
+ "timestamp": 1767140844747
526
+ },
527
+ {
528
+ "epoch": 3,
529
+ "batch": 0,
530
+ "loss": 0.6771752834320068,
531
+ "accuracy": 0.625,
532
+ "timestamp": 1767140845021
533
+ },
534
+ {
535
+ "epoch": 3,
536
+ "batch": 1,
537
+ "loss": 0.659159779548645,
538
+ "accuracy": 0.625,
539
+ "timestamp": 1767140845246
540
+ },
541
+ {
542
+ "epoch": 3,
543
+ "batch": 2,
544
+ "loss": 0.6672576665878296,
545
+ "accuracy": 0.6875,
546
+ "timestamp": 1767140845426
547
+ },
548
+ {
549
+ "epoch": 3,
550
+ "batch": 3,
551
+ "loss": 0.6715335845947266,
552
+ "accuracy": 0.65625,
553
+ "timestamp": 1767140845957
554
+ },
555
+ {
556
+ "epoch": 3,
557
+ "batch": 4,
558
+ "loss": 0.6796213388442993,
559
+ "accuracy": 0.59375,
560
+ "timestamp": 1767140846322
561
+ },
562
+ {
563
+ "epoch": 3,
564
+ "batch": 5,
565
+ "loss": 0.6706708669662476,
566
+ "accuracy": 0.53125,
567
+ "timestamp": 1767140846615
568
+ },
569
+ {
570
+ "epoch": 3,
571
+ "batch": 6,
572
+ "loss": 0.6850961446762085,
573
+ "accuracy": 0.53125,
574
+ "timestamp": 1767140846815
575
+ },
576
+ {
577
+ "epoch": 3,
578
+ "batch": 7,
579
+ "loss": 0.677682638168335,
580
+ "accuracy": 0.65625,
581
+ "timestamp": 1767140846956
582
+ },
583
+ {
584
+ "epoch": 3,
585
+ "batch": 8,
586
+ "loss": 0.6712959408760071,
587
+ "accuracy": 0.65625,
588
+ "timestamp": 1767140847277
589
+ }
590
+ ]
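training_log.json is a flat array of per-batch records (`epoch`, `batch`, `loss`, `accuracy`, `timestamp`). A small sketch for summarizing it per epoch; `trainingLogJson` is a placeholder for the file's contents, not a variable defined by the export:

```javascript
// Sketch only: mean loss and accuracy per epoch from training_log.json.
const log = JSON.parse(trainingLogJson);
const byEpoch = new Map();
for (const entry of log) {
  if (!byEpoch.has(entry.epoch)) byEpoch.set(entry.epoch, []);
  byEpoch.get(entry.epoch).push(entry);
}
for (const [epoch, entries] of byEpoch) {
  const mean = key => entries.reduce((s, e) => s + e[key], 0) / entries.length;
  console.log(`epoch ${epoch}: loss=${mean('loss').toFixed(4)}, accuracy=${mean('accuracy').toFixed(4)}`);
}
```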