End of training (commit 829a597, verified)
2025-10-10 23:43:55,222 [INFO] Loading tokenizer and model...
2025-10-10 23:43:56,776 [INFO] We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` to a higher value to use more memory (at your own risk).
2025-10-10 23:43:58,617 [INFO] Loading data from local Excel file: /home/datnt/workspace/finetune_paraphase/masked_paraphrased_data.xlsx
2025-10-10 23:44:06,694 [INFO] Successfully loaded and split data.
2025-10-10 23:44:06,694 [INFO] Train samples: 131750
2025-10-10 23:44:06,694 [INFO] Validation samples: 14639
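The train/validation counts above are consistent with a 90/10 split of 146,389 total rows. A minimal sketch of how those sizes arise (the 0.1 ratio is an assumption inferred from the counts; `scikit-learn`'s `train_test_split` rounds the held-out share up):

```python
import math

# Assumed totals, taken from the log lines above.
total = 131750 + 14639   # 146389 rows in the Excel file
val_ratio = 0.1          # hypothetical split ratio, not read from the script

# train_test_split computes the test size with ceil(total * test_size).
n_val = math.ceil(total * val_ratio)
n_train = total - n_val

print(n_train, n_val)  # 131750 14639
```

This matches the logged sample counts exactly, which supports the assumed 90/10 split.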
2025-10-10 23:44:06,694 [INFO] Applying LoRA configuration to the model...
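Applying LoRA replaces full-matrix updates with two low-rank factors, which is why only a small fraction of weights train. A minimal back-of-the-envelope sketch (the hidden size and rank are hypothetical, not read from the actual config):

```python
# Hypothetical shapes for one square linear layer.
d_model = 4096   # hidden size (assumption)
rank = 16        # LoRA rank r (assumption)

# Full update would touch the whole d x d weight matrix W;
# LoRA trains only A (r x d) and B (d x r) with delta_W = B @ A.
full_params = d_model * d_model
lora_params = 2 * d_model * rank

print(lora_params / full_params)  # 0.0078125, i.e. under 1% of the layer's weights
```

The same ratio, `2r/d`, holds for any square layer, so the savings grow with model width.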
2025-10-10 23:44:07,217 [INFO] Tokenizing the dataset...
2025-10-10 23:44:22,447 [WARNING] Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
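The kernel warning above comes from comparing the running kernel against a recommended minimum. A minimal sketch of such a version check (the helper name is hypothetical, not the library's actual function):

```python
def kernel_below_minimum(current: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '5.4.0' < '5.5.0'."""
    def to_tuple(v: str):
        return tuple(int(part) for part in v.split("."))
    return to_tuple(current) < to_tuple(minimum)

print(kernel_below_minimum("5.4.0", "5.5.0"))   # True -> warning fires
print(kernel_below_minimum("5.15.0", "5.5.0"))  # False (numeric, not string, compare)
```

Comparing integer tuples rather than raw strings matters: lexicographically `"5.15.0" < "5.5.0"`, which would misfire the warning on newer kernels.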
2025-10-10 23:44:24,340 [INFO] Starting the fine-tuning process...
2025-10-10 23:51:43,705 [INFO] Loading tokenizer and model...
2025-10-10 23:51:45,171 [INFO] We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` to a higher value to use more memory (at your own risk).
2025-10-10 23:51:46,865 [INFO] Loading data from local Excel file: /home/datnt/workspace/finetune_paraphase/masked_paraphrased_data.xlsx
2025-10-10 23:51:54,282 [INFO] Successfully loaded and split data.
2025-10-10 23:51:54,282 [INFO] Train samples: 131750
2025-10-10 23:51:54,283 [INFO] Validation samples: 14639
2025-10-10 23:51:54,283 [INFO] Applying LoRA configuration to the model...
2025-10-10 23:51:54,761 [INFO] Tokenizing the dataset...
2025-10-10 23:52:09,891 [WARNING] Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
2025-10-10 23:52:11,415 [INFO] Starting the fine-tuning process...
2025-10-11 10:06:36,335 [INFO] Fine-tuning finished.
2025-10-11 10:06:36,337 [INFO] Pushing the best model to Hugging Face Hub...