llara_duretrieval_train / LLARA-passage-paddle / train_LLARA-passage-paddle_dureader_dual.train.jsonl.log
/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:718: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)
LAUNCH INFO 2025-10-31 05:46:06,209 ----------- Configuration ----------------------
LAUNCH INFO 2025-10-31 05:46:06,209 auto_cluster_config: 0
LAUNCH INFO 2025-10-31 05:46:06,209 auto_parallel_config: None
LAUNCH INFO 2025-10-31 05:46:06,209 auto_tuner_json: None
LAUNCH INFO 2025-10-31 05:46:06,209 devices: 0,1,2,3
LAUNCH INFO 2025-10-31 05:46:06,209 elastic_level: -1
LAUNCH INFO 2025-10-31 05:46:06,209 elastic_timeout: 30
LAUNCH INFO 2025-10-31 05:46:06,209 enable_gpu_log: True
LAUNCH INFO 2025-10-31 05:46:06,209 gloo_port: 6767
LAUNCH INFO 2025-10-31 05:46:06,209 host: None
LAUNCH INFO 2025-10-31 05:46:06,209 ips: None
LAUNCH INFO 2025-10-31 05:46:06,209 job_id: default
LAUNCH INFO 2025-10-31 05:46:06,209 legacy: False
LAUNCH INFO 2025-10-31 05:46:06,209 log_dir: log
LAUNCH INFO 2025-10-31 05:46:06,209 log_level: INFO
LAUNCH INFO 2025-10-31 05:46:06,209 log_overwrite: False
LAUNCH INFO 2025-10-31 05:46:06,209 master: None
LAUNCH INFO 2025-10-31 05:46:06,209 max_restart: 3
LAUNCH INFO 2025-10-31 05:46:06,209 nnodes: 1
LAUNCH INFO 2025-10-31 05:46:06,209 nproc_per_node: None
LAUNCH INFO 2025-10-31 05:46:06,209 rank: -1
LAUNCH INFO 2025-10-31 05:46:06,209 run_mode: collective
LAUNCH INFO 2025-10-31 05:46:06,209 server_num: None
LAUNCH INFO 2025-10-31 05:46:06,209 servers:
LAUNCH INFO 2025-10-31 05:46:06,209 sort_ip: False
LAUNCH INFO 2025-10-31 05:46:06,209 start_port: 6070
LAUNCH INFO 2025-10-31 05:46:06,209 trainer_num: None
LAUNCH INFO 2025-10-31 05:46:06,209 trainers:
LAUNCH INFO 2025-10-31 05:46:06,209 training_script: train.py
LAUNCH INFO 2025-10-31 05:46:06,209 training_script_args: ['--do_train', '--query_instruction_for_retrieval', 'query: ', '--passage_instruction_for_retrieval', '', '--model_name_or_path', '/mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle', '--output_dir', 'tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle', '--save_steps', '999999999', '--train_data', './data/dureader_dual.train.jsonl', '--fp16_opt_level', 'O2', '--fp16', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '32', '--recompute', '--train_group_size', '4', '--learning_rate', '1e-4', '--query_max_len', '128', '--passage_max_len', '4096', '--num_train_epochs', '1', '--logging_steps', '1', '--overwrite_output_dir', '--negatives_cross_device', '--warmup_steps', '10', '--max_steps', '100', '--do_train', '--fine_tune_type', 'lora', '--sentence_pooling_method', 'last_8', '--sharding', 'stage3 offload', '--use_flash_attention', '--temperature', '0.01']
LAUNCH INFO 2025-10-31 05:46:06,209 with_gloo: 1
LAUNCH INFO 2025-10-31 05:46:06,209 --------------------------------------------------
LAUNCH INFO 2025-10-31 05:46:06,221 Job: default, mode collective, replicas 1[1:1], elastic False
LAUNCH INFO 2025-10-31 05:46:06,282 Run Pod: mwneka, replicas 4, status ready
LAUNCH INFO 2025-10-31 05:46:06,698 Watching Pod: mwneka, replicas 4, status running
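
The training_script_args entry above packs the whole fine-tuning recipe into one flat list. For readability, the key arguments are regrouped below as a Python dict; values are copied verbatim from the log, so this is only a reflowed view of the same invocation, not a different command.

# Key training_script_args from the LAUNCH configuration, regrouped for readability.
train_args = {
    # model and data
    "--model_name_or_path": "/mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle",
    "--train_data": "./data/dureader_dual.train.jsonl",
    "--query_instruction_for_retrieval": "query: ",
    "--passage_instruction_for_retrieval": "",
    "--query_max_len": "128",
    "--passage_max_len": "4096",
    "--train_group_size": "4",
    # optimization and memory
    "--fine_tune_type": "lora",
    "--learning_rate": "1e-4",
    "--per_device_train_batch_size": "1",
    "--gradient_accumulation_steps": "32",
    "--warmup_steps": "10",
    "--max_steps": "100",
    "--fp16_opt_level": "O2",
    "--sharding": "stage3 offload",
    # retrieval objective
    "--sentence_pooling_method": "last_8",
    "--temperature": "0.01",
}
# Boolean flags passed alongside: --do_train --fp16 --recompute
# --negatives_cross_device --use_flash_attention --overwrite_output_dir
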
/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:718: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)
/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils. Support for replacing an already imported distutils is deprecated. In the future, this condition will fail. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
warnings.warn(
/mnt/141nfs/lizhuoqun/PaddleNLP_1022/PaddleNLP/paddlenlp/trainer/training_args.py:1271: UserWarning: `offload` is not supported NOW!
warnings.warn("`offload` is not supported NOW!")
[2025-10-31 05:46:11,680] [ INFO] distributed_strategy.py:335 - distributed strategy initialized
======================= Modified FLAGS detected =======================
FLAGS(name='FLAGS_cuda_cccl_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/cuda_cccl/include/', default_value='')
FLAGS(name='FLAGS_cusparse_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/cusparse/lib', default_value='')
FLAGS(name='FLAGS_cusolver_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/cusolver/lib', default_value='')
FLAGS(name='FLAGS_selected_gpus', current_value='0', default_value='')
FLAGS(name='FLAGS_cublas_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/cublas/lib', default_value='')
FLAGS(name='FLAGS_nccl_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/nccl/lib', default_value='')
FLAGS(name='FLAGS_cupti_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/cuda_cupti/lib', default_value='')
FLAGS(name='FLAGS_cudnn_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/cudnn/lib', default_value='')
FLAGS(name='FLAGS_curand_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia/curand/lib', default_value='')
FLAGS(name='FLAGS_enable_pir_in_executor', current_value=True, default_value=False)
FLAGS(name='FLAGS_nvidia_package_dir', current_value='/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/../nvidia', default_value='')
=======================================================================
I1031 05:46:11.685451 21927 tcp_store.cc:336] create libuv server at port: 45539
I1031 05:46:11.688109 21927 tcp_utils.cc:132] Successfully connected to 172.17.0.5:45539
I1031 05:46:12.307258 21927 process_group_nccl.cc:154] ProcessGroupNCCL pg_timeout_ 1800000
I1031 05:46:12.307317 21927 process_group_nccl.cc:155] ProcessGroupNCCL nccl_comm_init_option_ 0
[2025-10-31 05:46:12,307] [ INFO] topology.py:526 - Total 4 pipe comm group(s) create successfully!
W1031 05:46:12.311336 21927 gpu_resources.cc:114] Please NOTE: device: 0, GPU Compute Capability: 8.0, Driver API Version: 12.8, Runtime API Version: 12.8
/mnt/141nfs/zhongtianyun2023/miniconda3/envs/paddle_0905/lib/python3.10/site-packages/paddle/distributed/communication/group.py:145: UserWarning: Current global rank 0 is not in group _default_pg10
warnings.warn(
[2025-10-31 05:46:12,314] [ INFO] topology.py:526 - Total 4 data comm group(s) create successfully!
[2025-10-31 05:46:12,317] [ INFO] topology.py:526 - Total 4 model comm group(s) create successfully!
I1031 05:46:12.317198 21927 process_group_nccl.cc:154] ProcessGroupNCCL pg_timeout_ 1800000
I1031 05:46:12.317209 21927 process_group_nccl.cc:155] ProcessGroupNCCL nccl_comm_init_option_ 0
[2025-10-31 05:46:12,317] [ INFO] topology.py:526 - Total 1 sharding comm group(s) create successfully!
I1031 05:46:12.317279 21927 process_group_nccl.cc:154] ProcessGroupNCCL pg_timeout_ 1800000
I1031 05:46:12.317286 21927 process_group_nccl.cc:155] ProcessGroupNCCL nccl_comm_init_option_ 0
[2025-10-31 05:46:12,317] [ INFO] topology.py:440 - HybridParallelInfo: rank_id: 0, mp_degree: 1, sharding_degree: 4, pp_degree: 1, dp_degree: 1, sep_degree: 1, mp_group: [0], sharding_group: [0, 1, 2, 3], pp_group: [0], dp_group: [0], sep:group: None, check/clip group: [0, 1, 2, 3]
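
The HybridParallelInfo line pins down the 4-GPU layout: tensor, pipeline, and data parallelism are all degree 1, and group-sharded (stage 3) parallelism spans all four ranks. A quick consistency check of those degrees against the launched devices 0-3, as a small Python sketch:

mp_degree = 1        # tensor/model parallel (mp_group: [0])
pp_degree = 1        # pipeline parallel (pp_group: [0])
dp_degree = 1        # data parallel (dp_group: [0])
sharding_degree = 4  # stage-3 sharding over GPUs 0-3 (sharding_group: [0, 1, 2, 3])

# The product of the degrees must match the number of launched workers
# (devices 0,1,2,3 in the LAUNCH configuration above).
world_size = mp_degree * pp_degree * dp_degree * sharding_degree
assert world_size == 4

# With only sharding active, each rank holds roughly 1/4 of the stage-3
# sharded parameters and optimizer state, and all ranks share one group.
sharding_group = list(range(sharding_degree))
print(world_size, sharding_group)   # 4 [0, 1, 2, 3]
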
[2025-10-31 05:46:12,348] [ INFO] - +==============================================================================+
| |
| DistributedStrategy Overview |
| |
+==============================================================================+
| a_sync=True <-> a_sync_configs |
+------------------------------------------------------------------------------+
| k_steps -1 |
| max_merge_var_num 1 |
| send_queue_size 16 |
| independent_recv_thread False |
| min_send_grad_num_before_recv 1 |
| thread_pool_size 1 |
| send_wait_times 1 |
| runtime_split_send_recv False |
| launch_barrier True |
| heter_worker_device_guard cpu |
| lr_decay_steps 10 |
| use_ps_gpu 0 |
| use_gpu_graph 0 |
+==============================================================================+
| Environment Flags, Communication Flags |
+------------------------------------------------------------------------------+
| mode 1 |
| elastic False |
| auto False |
| sync_nccl_allreduce True |
| nccl_comm_num 1 |
| use_hierarchical_allreduce False |
| hierarchical_allreduce_inter_nranks 1 |
| sync_batch_norm False |
| fuse_all_reduce_ops True |
| fuse_grad_size_in_MB 32 |
| fuse_grad_size_in_TFLOPS 50.0 |
| cudnn_exhaustive_search False |
| conv_workspace_size_limit 512 |
| cudnn_batchnorm_spatial_persistent False |
| fp16_allreduce False |
| last_comm_group_size_MB 1.0 |
| find_unused_parameters False |
| without_graph_optimization True |
| fuse_grad_size_in_num 8 |
| calc_comm_same_stream False |
| asp False |
| fuse_grad_merge False |
| semi_auto False |
| adam_d2sum False |
| auto_search False |
| heter_ccl_mode False |
| is_fl_ps_mode False |
| with_coordinator False |
| split_data True |
| downpour_table_param [] |
| fs_client_param |
+==============================================================================+
| Build Strategy |
+------------------------------------------------------------------------------+
| fuse_elewise_add_act_ops False |
| fuse_bn_act_ops False |
| fuse_relu_depthwise_conv False |
| fuse_broadcast_ops False |
| fuse_all_optimizer_ops False |
| enable_inplace False |
| enable_backward_optimizer_op_deps True |
| cache_runtime_context False |
| fuse_bn_add_act_ops True |
| enable_auto_fusion False |
| enable_addto False |
| allow_cuda_graph_capture False |
| reduce_strategy 0 |
| fuse_gemm_epilogue False |
| debug_graphviz_path |
| fused_attention False |
| fused_feedforward False |
| fuse_dot_product_attention False |
| fuse_resunit False |
+==============================================================================+

[2025-10-31 05:46:12,350] [ INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
[2025-10-31 05:46:12,350] [ WARNING] - Process rank: 0, device: gpu, distributed training: True, 16-bits training: True
[2025-10-31 05:46:12,351] [ INFO] - Training/evaluation parameters RetrieverTrainingArguments(
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
amp_custom_black_list=None,
amp_custom_white_list=None,
amp_master_grad=False,
aoa_config=None,
auto_parallel_resume_form_hybrid_parallel=False,
bf16=False,
bf16_full_eval=False,
ckpt_quant_stage=O0,
context_parallel_degree=1,
count_trained_tokens=False,
data_parallel_config=,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_shuffle=True,
ddp_find_unused_parameters=None,
device=gpu,
disable_tqdm=False,
distributed_dataloader=False,
do_eval=False,
do_export=False,
do_predict=False,
do_train=True,
enable_auto_parallel=False,
enable_zero_cost_checkpoint=False,
eval_accumulation_steps=None,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
expert_max_capacity=4294967296,
expert_min_capacity=1,
expert_parallel_degree=1,
expert_tensor_parallel_degree=1,
fine_tune_type=lora,
fix_position_embedding=False,
flash_device_save_steps=0,
flatten_param_grads=False,
force_reshard_pp=False,
fp16=True,
fp16_full_eval=False,
fp16_opt_level=O2,
fuse_sequence_parallel_allreduce=False,
gradient_accumulation_steps=32,
greater_is_better=None,
hybrid_parallel_topo_order=sharding_first,
ignore_data_skip=False,
ignore_load_lr_and_optim=False,
ignore_save_lr_and_optim=False,
label_names=None,
lazy_data_processing=True,
learning_rate=0.0001,
load_best_model_at_end=False,
load_checkpoint_format=None,
load_sharded_model=False,
load_sharded_model_remap_parameter_name=False,
local_rank=0,
log_on_each_node=True,
logging_dir=tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle/runs/Oct31_05-46-11_b63e119a1648,
logging_first_step=False,
logging_steps=1,
logging_strategy=IntervalStrategy.STEPS,
lr_end=1e-07,
lr_scheduler_type=SchedulerType.LINEAR,
margin=0.2,
matryoshka_dims=[64, 128, 256, 512, 768],
matryoshka_loss_weights=[1, 1, 1, 1, 1],
max_evaluate_steps=-1,
max_grad_norm=1.0,
max_steps=100,
metric_for_best_model=None,
metrics_output_path=None,
min_lr=0.0,
minimum_eval_times=None,
nccl_comm_group_config=None,
negatives_cross_device=True,
no_cuda=False,
num_cycles=0.5,
num_train_epochs=1.0,
offload_optim=False,
optim=OptimizerNames.ADAMW,
ordered_save_group_size=0,
output_dir=tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle,
output_signal_dir=tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle,
overwrite_output_dir=True,
pad_token_id=0,
past_index=-1,
pdc_download_ckpt=False,
pdc_download_timeout=300,
per_device_eval_batch_size=8,
per_device_train_batch_size=1,
pipeline_parallel_config=,
pipeline_parallel_degree=1,
power=1.0,
prediction_loss_only=False,
recompute=True,
refined_recompute={},
release_grads=False,
remove_unused_columns=True,
report_to=['visualdl'],
resume_from_checkpoint=None,
run_name=tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle,
save_checkpoint_format=None,
save_on_each_node=False,
save_rng_states=True,
save_sharded_model=False,
save_sharding_stage1_model_include_freeze_params=False,
save_steps=999999999,
save_strategy=IntervalStrategy.STEPS,
save_tokenizer=True,
save_total_limit=None,
scale_loss=32768,
seed=42,
sentence_pooling_method=last_8,
sep_parallel_degree=1,
sequence_parallel=False,
sequence_parallel_config=,
sharded_model_from_ema=False,
sharding=[<ShardingOption.FULL_SHARD: 'stage3'>, <ShardingOption.OFFLOAD: 'offload'>],
sharding_comm_buffer_size_MB=-1,
sharding_degree=-1,
sharding_offload_opt_buffersize_GB=-1,
sharding_parallel_config=,
sharding_parallel_degree=4,
sharding_parallel_mesh_dimension=dp,
skip_data_intervals=None,
skip_memory_metrics=True,
skip_profile_timer=True,
split_inputs_sequence_dim=True,
split_norm_comm=False,
temperature=0.01,
tensor_parallel_config=,
tensor_parallel_degree=1,
tensorwise_offload_optimizer=False,
to_static=False,
unified_checkpoint=False,
unified_checkpoint_config=,
use_async_save=False,
use_expert_parallel=False,
use_inbatch_neg=False,
use_lowprecision_moment=False,
use_matryoshka=False,
wandb_api_key=None,
wandb_http_proxy=None,
warmup_ratio=0.0,
warmup_steps=10,
weight_decay=0.0,
zcc_ema_interval=1,
zcc_ema_loss_threshold=None,
zcc_pipeline_hooks_capacity_usage=0.6,
zcc_save_ema_coef=None,
zcc_workers_num=3,
)
[2025-10-31 05:46:12,351] [ INFO] - Model parameters ModelArguments(model_name_or_path='/mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle', tokenizer_name=None, normalized=True, use_flash_attention=True)
[2025-10-31 05:46:12,351] [ INFO] - Data parameters DataArguments(train_data='./data/dureader_dual.train.jsonl', train_group_size=4, query_max_len=128, passage_max_len=4096, max_example_num_per_dataset=100000000, query_instruction_for_retrieval='query: ', passage_instruction_for_retrieval='')
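
Taken together, the model and data parameters above (normalized=True, train_group_size=4, query/passage instructions) and the training arguments temperature=0.01 and negatives_cross_device=True describe a standard dual-encoder retrieval objective: each query is scored against groups of one positive and several negatives, similarities are temperature-scaled, and a softmax cross-entropy picks out the positive. Below is a minimal NumPy sketch of that objective under exactly those assumptions; the actual loss in train.py (including how negatives are gathered across the 4 ranks) may differ in detail.

import numpy as np

# Illustrative shapes only: 2 queries on one rank, each with a group of
# train_group_size = 4 passages (positive first, then 3 negatives).
rng = np.random.default_rng(0)
dim, group_size, temperature = 8, 4, 0.01

q = rng.normal(size=(2, dim))               # query embeddings
p = rng.normal(size=(2 * group_size, dim))  # passage embeddings (grouped)

# normalized=True in ModelArguments -> cosine similarity via L2-normalized vectors
q /= np.linalg.norm(q, axis=-1, keepdims=True)
p /= np.linalg.norm(p, axis=-1, keepdims=True)

scores = q @ p.T / temperature              # temperature-scaled similarity logits
labels = np.arange(len(q)) * group_size     # index of each query's positive passage

# softmax cross-entropy over all passages in the batch (in-batch negatives)
shifted = scores - scores.max(axis=-1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
loss = -log_probs[np.arange(len(q)), labels].mean()
print(loss)
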
[2025-10-31 05:46:12,352] [ INFO] - The global seed is set to 42, local seed is set to 46 and random seed is set to 42.
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
Ignored unknown kwarg option __type
[2025-10-31 05:46:12,366] [ INFO] - We are using <class 'paddlenlp.transformers.llama.modeling.LlamaModel'> to load '/mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle'.
[2025-10-31 05:46:12,366] [ INFO] - Loading configuration file /mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle/config.json
[2025-10-31 05:46:12,367] [ INFO] - Loading weights file /mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle/model_state.pdparams.index.json
file: /mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle/model_state.pdparams.index.json is paddle weight.
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 1/2 [00:16<00:16, 16.93s/it]
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:22<00:00, 10.29s/it]
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:22<00:00, 11.28s/it]
[2025-10-31 05:46:35,413] [ INFO] - All model checkpoint weights were used when initializing LlamaModel.

[2025-10-31 05:46:35,440] [ INFO] - All the weights of LlamaModel were initialized from the model checkpoint at /mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaModel for predictions without further training.
[2025-10-31 05:46:35,443] [ INFO] - Loading configuration file /mnt/141nfs/lizhuoqun/hf_models/LLARA-passage-paddle/config.json
[2025-10-31 05:46:35,822] [ WARNING] - Reset tensor_parallel_degree of lora_config to 1.
[2025-10-31 05:46:35,822] [ INFO] - Mark only lora and trainable_module as trainable.
[2025-10-31 05:46:35,851] [ DEBUG] - Frozen parameters: 1.98e+10 || Trainable parameters:2.40e+08 || Total parameters:2.01e+10|| Trainable:1.20%
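
The DEBUG line above is easy to sanity-check: with fine_tune_type=lora only the adapter (and trainable_module) weights stay trainable, so the trainable fraction is tiny against the roughly 2e10-parameter frozen backbone. Using the figures as printed:

frozen = 1.98e10      # frozen backbone parameters (as logged)
trainable = 2.40e8    # LoRA / trainable_module parameters (as logged)
total = frozen + trainable

# ~2.0e+10 total (the logged 2.01e+10 reflects rounding of the exact counts)
print(f"total ~ {total:.2e}")
# ~1.20% trainable, matching the logged "Trainable:1.20%"
print(f"trainable share = {trainable / total:.2%}")
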
[2025-10-31 05:46:35,901] [ INFO] - The global seed is set to 42, local seed is set to 46 and random seed is set to 42.
[2025-10-31 05:46:35,959] [ INFO] - max_steps is given, it will override any value given in num_train_epochs
[2025-10-31 05:46:35,959] [ INFO] - Using half precision
[2025-10-31 05:46:35,984] [ DEBUG] - ============================================================
[2025-10-31 05:46:35,984] [ DEBUG] - Training Configuration Arguments 
[2025-10-31 05:46:35,984] [ DEBUG] - paddle commit id : dc312219f1498234f178a3b6a93e9f1f4ddebcb8
[2025-10-31 05:46:35,984] [ DEBUG] - paddlenlp commit id : 51787ba815d78521c373d8c08f72cc3a10a425da.dirty
[2025-10-31 05:46:35,984] [ DEBUG] - _no_sync_in_gradient_accumulation: True
[2025-10-31 05:46:35,984] [ DEBUG] - adam_beta1 : 0.9
[2025-10-31 05:46:35,984] [ DEBUG] - adam_beta2 : 0.999
[2025-10-31 05:46:35,984] [ DEBUG] - adam_epsilon : 1e-08
[2025-10-31 05:46:35,984] [ DEBUG] - amp_custom_black_list : None
[2025-10-31 05:46:35,985] [ DEBUG] - amp_custom_white_list : None
[2025-10-31 05:46:35,985] [ DEBUG] - amp_master_grad : False
[2025-10-31 05:46:35,985] [ DEBUG] - aoa_config : None
[2025-10-31 05:46:35,985] [ DEBUG] - auto_parallel_resume_form_hybrid_parallel: False
[2025-10-31 05:46:35,985] [ DEBUG] - bf16 : False
[2025-10-31 05:46:35,985] [ DEBUG] - bf16_full_eval : False
[2025-10-31 05:46:35,985] [ DEBUG] - ckpt_quant_stage : O0
[2025-10-31 05:46:35,985] [ DEBUG] - context_parallel_degree : 1
[2025-10-31 05:46:35,985] [ DEBUG] - context_parallel_rank : 0
[2025-10-31 05:46:35,985] [ DEBUG] - count_trained_tokens : False
[2025-10-31 05:46:35,985] [ DEBUG] - current_device : gpu:0
[2025-10-31 05:46:35,985] [ DEBUG] - data_parallel_config : 
[2025-10-31 05:46:35,985] [ DEBUG] - data_parallel_degree : 1
[2025-10-31 05:46:35,985] [ DEBUG] - data_parallel_rank : 0
[2025-10-31 05:46:35,985] [ DEBUG] - dataloader_drop_last : False
[2025-10-31 05:46:35,985] [ DEBUG] - dataloader_num_workers : 0
[2025-10-31 05:46:35,985] [ DEBUG] - dataloader_shuffle : True
[2025-10-31 05:46:35,985] [ DEBUG] - dataset_rank : 0
[2025-10-31 05:46:35,986] [ DEBUG] - dataset_world_size : 4
[2025-10-31 05:46:35,986] [ DEBUG] - ddp_find_unused_parameters : None
[2025-10-31 05:46:35,986] [ DEBUG] - device : gpu
[2025-10-31 05:46:35,986] [ DEBUG] - disable_tqdm : False
[2025-10-31 05:46:35,986] [ DEBUG] - distributed_dataloader : False
[2025-10-31 05:46:35,986] [ DEBUG] - do_eval : False
[2025-10-31 05:46:35,986] [ DEBUG] - do_export : False
[2025-10-31 05:46:35,986] [ DEBUG] - do_predict : False
[2025-10-31 05:46:35,986] [ DEBUG] - do_train : True
[2025-10-31 05:46:35,986] [ DEBUG] - enable_auto_parallel : False
[2025-10-31 05:46:35,986] [ DEBUG] - enable_zero_cost_checkpoint : False
[2025-10-31 05:46:35,986] [ DEBUG] - eval_accumulation_steps : None
[2025-10-31 05:46:35,986] [ DEBUG] - eval_batch_size : 8
[2025-10-31 05:46:35,986] [ DEBUG] - eval_steps : None
[2025-10-31 05:46:35,986] [ DEBUG] - evaluation_strategy : IntervalStrategy.NO
[2025-10-31 05:46:35,986] [ DEBUG] - expert_max_capacity : 4294967296
[2025-10-31 05:46:35,986] [ DEBUG] - expert_min_capacity : 1
[2025-10-31 05:46:35,986] [ DEBUG] - expert_parallel_degree : 1
[2025-10-31 05:46:35,986] [ DEBUG] - expert_parallel_rank : 0
[2025-10-31 05:46:35,986] [ DEBUG] - expert_tensor_parallel_degree : 1
[2025-10-31 05:46:35,987] [ DEBUG] - fine_tune_type : lora
[2025-10-31 05:46:35,987] [ DEBUG] - fix_position_embedding : False
[2025-10-31 05:46:35,987] [ DEBUG] - flash_device_save_steps : 0
[2025-10-31 05:46:35,987] [ DEBUG] - flatten_param_grads : False
[2025-10-31 05:46:35,987] [ DEBUG] - force_reshard_pp : False
[2025-10-31 05:46:35,987] [ DEBUG] - fp16 : True
[2025-10-31 05:46:35,987] [ DEBUG] - fp16_full_eval : False
[2025-10-31 05:46:35,987] [ DEBUG] - fp16_opt_level : O2
[2025-10-31 05:46:35,987] [ DEBUG] - fuse_sequence_parallel_allreduce: False
[2025-10-31 05:46:35,987] [ DEBUG] - gradient_accumulation_steps : 32
[2025-10-31 05:46:35,987] [ DEBUG] - greater_is_better : None
[2025-10-31 05:46:35,987] [ DEBUG] - hybrid_parallel_topo_order : sharding_first
[2025-10-31 05:46:35,987] [ DEBUG] - ignore_data_skip : False
[2025-10-31 05:46:35,987] [ DEBUG] - ignore_load_lr_and_optim : False
[2025-10-31 05:46:35,987] [ DEBUG] - ignore_save_lr_and_optim : False
[2025-10-31 05:46:35,987] [ DEBUG] - label_names : None
[2025-10-31 05:46:35,987] [ DEBUG] - lazy_data_processing : True
[2025-10-31 05:46:35,987] [ DEBUG] - learning_rate : 0.0001
[2025-10-31 05:46:35,987] [ DEBUG] - load_best_model_at_end : False
[2025-10-31 05:46:35,988] [ DEBUG] - load_checkpoint_format : None
[2025-10-31 05:46:35,988] [ DEBUG] - load_sharded_model : False
[2025-10-31 05:46:35,988] [ DEBUG] - load_sharded_model_remap_parameter_name: False
[2025-10-31 05:46:35,988] [ DEBUG] - local_process_index : 0
[2025-10-31 05:46:35,988] [ DEBUG] - local_rank : 0
[2025-10-31 05:46:35,988] [ DEBUG] - log_level : -1
[2025-10-31 05:46:35,988] [ DEBUG] - log_level_replica : -1
[2025-10-31 05:46:35,988] [ DEBUG] - log_on_each_node : True
[2025-10-31 05:46:35,988] [ DEBUG] - logging_dir : tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle/runs/Oct31_05-46-11_b63e119a1648
[2025-10-31 05:46:35,988] [ DEBUG] - logging_first_step : False
[2025-10-31 05:46:35,988] [ DEBUG] - logging_steps : 1
[2025-10-31 05:46:35,988] [ DEBUG] - logging_strategy : IntervalStrategy.STEPS
[2025-10-31 05:46:35,988] [ DEBUG] - logical_process_index : 0
[2025-10-31 05:46:35,988] [ DEBUG] - lr_end : 1e-07
[2025-10-31 05:46:35,988] [ DEBUG] - lr_scheduler_type : SchedulerType.LINEAR
[2025-10-31 05:46:35,988] [ DEBUG] - margin : 0.2
[2025-10-31 05:46:35,988] [ DEBUG] - matryoshka_dims : [64, 128, 256, 512, 768]
[2025-10-31 05:46:35,988] [ DEBUG] - matryoshka_loss_weights : [1, 1, 1, 1, 1]
[2025-10-31 05:46:35,988] [ DEBUG] - max_evaluate_steps : -1
[2025-10-31 05:46:35,989] [ DEBUG] - max_grad_norm : 1.0
[2025-10-31 05:46:35,989] [ DEBUG] - max_steps : 100
[2025-10-31 05:46:35,989] [ DEBUG] - metric_for_best_model : None
[2025-10-31 05:46:35,989] [ DEBUG] - metrics_output_path : None
[2025-10-31 05:46:35,989] [ DEBUG] - min_lr : 0.0
[2025-10-31 05:46:35,989] [ DEBUG] - minimum_eval_times : None
[2025-10-31 05:46:35,989] [ DEBUG] - moe_sharding_parallel_degree : 1
[2025-10-31 05:46:35,989] [ DEBUG] - nccl_comm_group_config : None
[2025-10-31 05:46:35,989] [ DEBUG] - negatives_cross_device : True
[2025-10-31 05:46:35,989] [ DEBUG] - no_cuda : False
[2025-10-31 05:46:35,989] [ DEBUG] - num_cycles : 0.5
[2025-10-31 05:46:35,989] [ DEBUG] - num_train_epochs : 1.0
[2025-10-31 05:46:35,989] [ DEBUG] - offload_optim : False
[2025-10-31 05:46:35,989] [ DEBUG] - optim : OptimizerNames.ADAMW
[2025-10-31 05:46:35,989] [ DEBUG] - optimizer_name_suffix : shard00
[2025-10-31 05:46:35,989] [ DEBUG] - ordered_save_group_size : 0
[2025-10-31 05:46:35,989] [ DEBUG] - output_dir : tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle
[2025-10-31 05:46:35,989] [ DEBUG] - output_signal_dir : tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle
[2025-10-31 05:46:35,989] [ DEBUG] - overwrite_output_dir : True
[2025-10-31 05:46:35,990] [ DEBUG] - pad_token_id : 0
[2025-10-31 05:46:35,990] [ DEBUG] - past_index : -1
[2025-10-31 05:46:35,990] [ DEBUG] - pdc_download_ckpt : False
[2025-10-31 05:46:35,990] [ DEBUG] - pdc_download_timeout : 300
[2025-10-31 05:46:35,990] [ DEBUG] - per_device_eval_batch_size : 8
[2025-10-31 05:46:35,990] [ DEBUG] - per_device_train_batch_size : 1
[2025-10-31 05:46:35,990] [ DEBUG] - pipeline_parallel_config : 
[2025-10-31 05:46:35,990] [ DEBUG] - pipeline_parallel_degree : 1
[2025-10-31 05:46:35,990] [ DEBUG] - pipeline_parallel_rank : 0
[2025-10-31 05:46:35,990] [ DEBUG] - power : 1.0
[2025-10-31 05:46:35,990] [ DEBUG] - prediction_loss_only : False
[2025-10-31 05:46:35,990] [ DEBUG] - process_index : 0
[2025-10-31 05:46:35,990] [ DEBUG] - recompute : True
[2025-10-31 05:46:35,990] [ DEBUG] - refined_recompute : {}
[2025-10-31 05:46:35,990] [ DEBUG] - release_grads : False
[2025-10-31 05:46:35,990] [ DEBUG] - remove_unused_columns : True
[2025-10-31 05:46:35,990] [ DEBUG] - report_to : ['visualdl']
[2025-10-31 05:46:35,990] [ DEBUG] - resume_from_checkpoint : None
[2025-10-31 05:46:35,990] [ DEBUG] - run_name : tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle
[2025-10-31 05:46:35,990] [ DEBUG] - save_checkpoint_format : None
[2025-10-31 05:46:35,991] [ DEBUG] - save_on_each_node : False
[2025-10-31 05:46:35,991] [ DEBUG] - save_rng_states : True
[2025-10-31 05:46:35,991] [ DEBUG] - save_sharded_model : False
[2025-10-31 05:46:35,991] [ DEBUG] - save_sharding_stage1_model_include_freeze_params: False
[2025-10-31 05:46:35,991] [ DEBUG] - save_steps : 999999999
[2025-10-31 05:46:35,991] [ DEBUG] - save_strategy : IntervalStrategy.STEPS
[2025-10-31 05:46:35,991] [ DEBUG] - save_tokenizer : True
[2025-10-31 05:46:35,991] [ DEBUG] - save_total_limit : None
[2025-10-31 05:46:35,991] [ DEBUG] - scale_loss : 32768
[2025-10-31 05:46:35,991] [ DEBUG] - seed : 42
[2025-10-31 05:46:35,991] [ DEBUG] - sentence_pooling_method : last_8
[2025-10-31 05:46:35,991] [ DEBUG] - sep_parallel_degree : 1
[2025-10-31 05:46:35,991] [ DEBUG] - sequence_parallel : False
[2025-10-31 05:46:35,991] [ DEBUG] - sequence_parallel_config : 
[2025-10-31 05:46:35,991] [ DEBUG] - sharded_model_from_ema : False
[2025-10-31 05:46:35,991] [ DEBUG] - sharding : [<ShardingOption.FULL_SHARD: 'stage3'>, <ShardingOption.OFFLOAD: 'offload'>]
[2025-10-31 05:46:35,991] [ DEBUG] - sharding_comm_buffer_size_MB : -1
[2025-10-31 05:46:35,991] [ DEBUG] - sharding_degree : -1
[2025-10-31 05:46:35,991] [ DEBUG] - sharding_offload_opt_buffersize_GB: -1
[2025-10-31 05:46:35,991] [ DEBUG] - sharding_parallel_config : 
[2025-10-31 05:46:35,992] [ DEBUG] - sharding_parallel_degree : 4
[2025-10-31 05:46:35,992] [ DEBUG] - sharding_parallel_mesh_dimension: dp
[2025-10-31 05:46:35,992] [ DEBUG] - sharding_parallel_rank : 0
[2025-10-31 05:46:35,992] [ DEBUG] - should_load_dataset : True
[2025-10-31 05:46:35,992] [ DEBUG] - should_load_sharding_stage1_model: False
[2025-10-31 05:46:35,992] [ DEBUG] - should_log : True
[2025-10-31 05:46:35,992] [ DEBUG] - should_save : True
[2025-10-31 05:46:35,992] [ DEBUG] - should_save_model_state : True
[2025-10-31 05:46:35,992] [ DEBUG] - should_save_model_with_tensor_fusion: False
[2025-10-31 05:46:35,992] [ DEBUG] - should_save_sharding_stage1_model: False
[2025-10-31 05:46:35,992] [ DEBUG] - skip_data_intervals : None
[2025-10-31 05:46:35,992] [ DEBUG] - skip_memory_metrics : True
[2025-10-31 05:46:35,992] [ DEBUG] - skip_profile_timer : True
[2025-10-31 05:46:35,992] [ DEBUG] - split_inputs_sequence_dim : True
[2025-10-31 05:46:35,992] [ DEBUG] - split_norm_comm : False
[2025-10-31 05:46:35,992] [ DEBUG] - temperature : 0.01
[2025-10-31 05:46:35,992] [ DEBUG] - tensor_parallel_config : 
[2025-10-31 05:46:35,992] [ DEBUG] - tensor_parallel_degree : 1
[2025-10-31 05:46:35,993] [ DEBUG] - tensor_parallel_rank : 0
[2025-10-31 05:46:35,993] [ DEBUG] - tensorwise_offload_optimizer : False
[2025-10-31 05:46:35,993] [ DEBUG] - to_static : False
[2025-10-31 05:46:35,993] [ DEBUG] - train_batch_size : 1
[2025-10-31 05:46:35,993] [ DEBUG] - unified_checkpoint : False
[2025-10-31 05:46:35,993] [ DEBUG] - unified_checkpoint_config : 
[2025-10-31 05:46:35,993] [ DEBUG] - use_async_save : False
[2025-10-31 05:46:35,993] [ DEBUG] - use_expert_parallel : False
[2025-10-31 05:46:35,993] [ DEBUG] - use_hybrid_parallel : True
[2025-10-31 05:46:35,993] [ DEBUG] - use_inbatch_neg : False
[2025-10-31 05:46:35,993] [ DEBUG] - use_lowprecision_moment : False
[2025-10-31 05:46:35,993] [ DEBUG] - use_matryoshka : False
[2025-10-31 05:46:35,993] [ DEBUG] - wandb_api_key : None
[2025-10-31 05:46:35,993] [ DEBUG] - wandb_http_proxy : None
[2025-10-31 05:46:35,993] [ DEBUG] - warmup_ratio : 0.0
[2025-10-31 05:46:35,993] [ DEBUG] - warmup_steps : 10
[2025-10-31 05:46:35,993] [ DEBUG] - weight_decay : 0.0
[2025-10-31 05:46:35,993] [ DEBUG] - weight_name_suffix : 
[2025-10-31 05:46:35,993] [ DEBUG] - world_size : 4
[2025-10-31 05:46:35,993] [ DEBUG] - zcc_ema_interval : 1
[2025-10-31 05:46:35,994] [ DEBUG] - zcc_ema_loss_threshold : None
[2025-10-31 05:46:35,994] [ DEBUG] - zcc_pipeline_hooks_capacity_usage: 0.6
[2025-10-31 05:46:35,994] [ DEBUG] - zcc_save_ema_coef : None
[2025-10-31 05:46:35,994] [ DEBUG] - zcc_workers_num : 3
[2025-10-31 05:46:35,994] [ DEBUG] - 
[2025-10-31 05:46:35,994] [ INFO] - Starting training from resume_from_checkpoint : None
W1031 05:46:36.186211 21927 nccl_comm_context.cc:70] ncclCommInitRankConfigMemOpt is not supported.
[2025-10-31 05:46:37,414] [ WARNING] group_sharded.py:148 - the input of scaler is None, please ensure the logic of your scaler outside is same as GroupShardedScaler.
[2025-10-31 05:46:37,416] [ WARNING] group_sharded_stage3.py:195 - While using ClipGradByGlobalNorm in GroupShardedStage3, the grad clip of original optimizer will be changed.
W1031 05:46:37.419310 21927 nccl_comm_context.cc:70] ncclCommInitRankConfigMemOpt is not supported.
[2025-10-31 05:47:10,897] [ INFO] - [timelog] checkpoint loading time: 0.00s (2025-10-31 05:47:10) 
[2025-10-31 05:47:10,897] [ INFO] - ***** Running training *****
[2025-10-31 05:47:10,897] [ INFO] - Num examples = 86,395
[2025-10-31 05:47:10,897] [ INFO] - Num Epochs = 1
[2025-10-31 05:47:10,897] [ INFO] - Instantaneous batch size per device = 1
[2025-10-31 05:47:10,897] [ INFO] - Total train batch size (w. parallel, distributed & accumulation) = 128
[2025-10-31 05:47:10,897] [ INFO] - Gradient Accumulation steps = 32
[2025-10-31 05:47:10,897] [ INFO] - Total optimization steps = 100
[2025-10-31 05:47:10,897] [ INFO] - Total num train samples = 12,800
[2025-10-31 05:47:10,902] [ DEBUG] - Number of trainable parameters = 79,953,920 (per device)
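
The headline numbers in this summary follow directly from the arguments logged earlier; a quick arithmetic check (all inputs taken from the log):

per_device_train_batch_size = 1
gradient_accumulation_steps = 32
dataset_world_size = 4          # sharding_parallel_degree across GPUs 0-3
max_steps = 100
num_examples = 86_395

# "Total train batch size (w. parallel, distributed & accumulation) = 128"
total_train_batch_size = (per_device_train_batch_size
                          * gradient_accumulation_steps
                          * dataset_world_size)
assert total_train_batch_size == 128

# "Total num train samples = 12,800"
assert total_train_batch_size * max_steps == 12_800

# 12,800 of 86,395 examples is well under one epoch, which is why
# progress_or_epoch advances by only ~0.0015 per step in the log below.
print(total_train_batch_size / num_examples)   # ~0.00148 epoch per step
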
TrainProcess: 0%| | 0/100 [00:00<?, ?it/s]W1031 05:47:11.376879 21927 multiply_fwd_func.cc:82] got different data type, run type promotion automatically, this may cause data type been changed.
W1031 05:47:36.905462 21927 dygraph_functions.cc:94044] got different data type, run type promotion automatically, this may cause data type been changed.
Found inf or nan, current scale is: 32768.0, decrease to: 32768.0*0.5
[2025-10-31 05:53:01,170] [ WARNING] - optimizer not run, scale_before: 32768.0, scale_after: 16384.0
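
These two lines show fp16 dynamic loss scaling doing its job: when any gradient overflows under the current scale, the optimizer step is skipped and the scale is halved (32768 -> 16384 here, and again 16384 -> 8192 at step 5 below). A minimal sketch of that rule, independent of how Paddle's GroupShardedScaler actually implements it:

def update_loss_scale(scale: float, found_inf_or_nan: bool,
                      decrease_factor: float = 0.5) -> tuple[float, bool]:
    """Return (new_scale, run_optimizer) at one accumulation boundary.

    Mirrors the behaviour visible in the log: on overflow the optimizer
    step is skipped and the scale shrinks; the growth branch used after
    long stable stretches is omitted for brevity.
    """
    if found_inf_or_nan:
        return scale * decrease_factor, False   # skip step, shrink scale
    return scale, True                          # run optimizer as usual

scale, ran = update_loss_scale(32768.0, found_inf_or_nan=True)
print(scale, ran)   # 16384.0 False -- "optimizer not run" at global_step 1
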
TrainProcess: 1%| | 1/100 [05:50<9:38:42, 350.73s/it][2025-10-31 05:53:01,749] [ INFO] - loss: 7.97692871, learning_rate: 0.0, global_step: 1, current_memory_allocated: 12.27591347694397, current_memory_reserved: 70.46749949455261, max_memory_allocated: 57.78868627548218, max_memory_reserved: 70.46749949455261, interval_runtime: 350.7595, interval_samples_per_second: 0.3649, interval_steps_per_second: 0.0029, progress_or_epoch: 0.0015, cpu_used_memory: 59.85, cpu_available_memory: 940.99
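
The per-step metrics are internally consistent: with a global batch of 128 samples per optimization step, interval_samples_per_second and interval_steps_per_second follow directly from interval_runtime. For global_step 1:

samples_per_step = 128          # total train batch size from the summary above
interval_runtime = 350.7595     # seconds, as reported for global_step 1

print(samples_per_step / interval_runtime)   # ~0.3649 samples/s (logged 0.3649)
print(1.0 / interval_runtime)                # ~0.0029 steps/s   (logged 0.0029)
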
TrainProcess: 2%|▏ | 2/100 [12:32<10:21:27, 380.48s/it][2025-10-31 05:59:43,075] [ INFO] - loss: 7.24871826, learning_rate: 1e-05, global_step: 2, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 401.2908, interval_samples_per_second: 0.319, interval_steps_per_second: 0.0025, progress_or_epoch: 0.003, cpu_used_memory: 60.0, cpu_available_memory: 940.84
TrainProcess: 3%|β–Ž | 3/100 [19:15<10:32:01, 390.94s/it][2025-10-31 06:06:26,488] [ INFO] - loss: 7.50622559, learning_rate: 2e-05, global_step: 3, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.454, interval_samples_per_second: 0.3173, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0044, cpu_used_memory: 59.93, cpu_available_memory: 940.92
TrainProcess: 4%|▍ | 4/100 [26:02<10:35:22, 397.11s/it][2025-10-31 06:13:13,032] [ INFO] - loss: 7.08703613, learning_rate: 3e-05, global_step: 4, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.5144, interval_samples_per_second: 0.3149, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0059, cpu_used_memory: 59.92, cpu_available_memory: 940.93
Found inf or nan, current scale is: 16384.0, decrease to: 16384.0*0.5
[2025-10-31 06:19:59,165] [ WARNING] - optimizer not run, scale_before: 16384.0, scale_after: 8192.0
TrainProcess: 5%|β–Œ | 5/100 [32:48<10:34:14, 400.57s/it][2025-10-31 06:19:59,791] [ INFO] - loss: 6.20175171, learning_rate: 3e-05, global_step: 5, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.7245, interval_samples_per_second: 0.3147, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0074, cpu_used_memory: 59.97, cpu_available_memory: 940.88
TrainProcess: 6%|β–Œ | 6/100 [39:34<10:30:28, 402.43s/it][2025-10-31 06:26:45,861] [ INFO] - loss: 7.08898926, learning_rate: 4e-05, global_step: 6, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.0119, interval_samples_per_second: 0.3153, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0089, cpu_used_memory: 60.0, cpu_available_memory: 940.84
TrainProcess: 7%|β–‹ | 7/100 [46:20<10:25:20, 403.44s/it][2025-10-31 06:33:31,275] [ INFO] - loss: 6.4140625, learning_rate: 5e-05, global_step: 7, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.5321, interval_samples_per_second: 0.3156, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0104, cpu_used_memory: 59.26, cpu_available_memory: 941.59
TrainProcess: 8%|β–Š | 8/100 [53:04<10:19:13, 403.84s/it][2025-10-31 06:40:15,913] [ INFO] - loss: 5.63946533, learning_rate: 6e-05, global_step: 8, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.6895, interval_samples_per_second: 0.3163, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0119, cpu_used_memory: 59.26, cpu_available_memory: 941.59
TrainProcess: 9%|β–‰ | 9/100 [59:50<10:13:20, 404.40s/it][2025-10-31 06:47:01,537] [ INFO] - loss: 4.40109253, learning_rate: 7e-05, global_step: 9, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.6183, interval_samples_per_second: 0.3156, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0133, cpu_used_memory: 59.29, cpu_available_memory: 941.56
TrainProcess: 10%|β–ˆ | 10/100 [1:06:36<10:07:15, 404.84s/it][2025-10-31 06:53:47,385] [ INFO] - loss: 5.27586365, learning_rate: 8e-05, global_step: 10, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.8464, interval_samples_per_second: 0.3154, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0148, cpu_used_memory: 59.34, cpu_available_memory: 941.51
TrainProcess: 11%|β–ˆ | 11/100 [1:13:21<10:00:42, 404.97s/it][2025-10-31 07:00:32,648] [ INFO] - loss: 4.16642761, learning_rate: 9e-05, global_step: 11, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.2545, interval_samples_per_second: 0.3159, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0163, cpu_used_memory: 59.4, cpu_available_memory: 941.45
TrainProcess: 12%|β–ˆβ– | 12/100 [1:20:09<9:55:01, 405.70s/it] [2025-10-31 07:07:20,015] [ INFO] - loss: 2.82327652, learning_rate: 0.0001, global_step: 12, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 407.3524, interval_samples_per_second: 0.3142, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0178, cpu_used_memory: 59.45, cpu_available_memory: 941.4
TrainProcess: 13%|β–ˆβ–Ž | 13/100 [1:26:57<9:49:22, 406.46s/it][2025-10-31 07:14:08,244] [ INFO] - loss: 2.10344696, learning_rate: 9.889e-05, global_step: 13, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.2236, interval_samples_per_second: 0.3136, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0193, cpu_used_memory: 59.47, cpu_available_memory: 941.37
TrainProcess: 14%|β–ˆβ– | 14/100 [1:33:45<9:43:19, 406.97s/it][2025-10-31 07:20:56,366] [ INFO] - loss: 1.96983337, learning_rate: 9.778e-05, global_step: 14, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.1595, interval_samples_per_second: 0.3136, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0207, cpu_used_memory: 59.52, cpu_available_memory: 941.33
TrainProcess: 15%|β–ˆβ–Œ | 15/100 [1:40:32<9:36:43, 407.10s/it][2025-10-31 07:27:43,847] [ INFO] - loss: 1.42926025, learning_rate: 9.667e-05, global_step: 15, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 407.4351, interval_samples_per_second: 0.3142, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0222, cpu_used_memory: 59.6, cpu_available_memory: 941.24
TrainProcess: 16%|β–ˆβ–Œ | 16/100 [1:47:23<9:31:14, 408.03s/it][2025-10-31 07:34:34,068] [ INFO] - loss: 1.46002197, learning_rate: 9.556e-05, global_step: 16, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 410.1607, interval_samples_per_second: 0.3121, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0237, cpu_used_memory: 59.64, cpu_available_memory: 941.2
TrainProcess: 17%|β–ˆβ–‹ | 17/100 [1:54:11<9:24:44, 408.24s/it][2025-10-31 07:41:22,750] [ INFO] - loss: 1.22354126, learning_rate: 9.444e-05, global_step: 17, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.7351, interval_samples_per_second: 0.3132, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0252, cpu_used_memory: 59.81, cpu_available_memory: 941.04
TrainProcess: 18%|β–ˆβ–Š | 18/100 [2:01:00<9:18:06, 408.37s/it][2025-10-31 07:48:11,379] [ INFO] - loss: 1.31933594, learning_rate: 9.333e-05, global_step: 18, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.6739, interval_samples_per_second: 0.3132, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0267, cpu_used_memory: 59.67, cpu_available_memory: 941.18
TrainProcess: 19%|β–ˆβ–‰ | 19/100 [2:07:48<9:11:07, 408.24s/it][2025-10-31 07:54:59,308] [ INFO] - loss: 1.15179443, learning_rate: 9.222e-05, global_step: 19, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 407.9283, interval_samples_per_second: 0.3138, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0281, cpu_used_memory: 59.68, cpu_available_memory: 941.16
TrainProcess: 20%|β–ˆβ–ˆ | 20/100 [2:14:34<9:03:35, 407.69s/it][2025-10-31 08:01:45,734] [ INFO] - loss: 1.24546051, learning_rate: 9.111e-05, global_step: 20, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.4197, interval_samples_per_second: 0.3149, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0296, cpu_used_memory: 59.75, cpu_available_memory: 941.09
TrainProcess: 21%|β–ˆβ–ˆ | 21/100 [2:21:20<8:56:00, 407.10s/it][2025-10-31 08:08:31,531] [ INFO] - loss: 1.27971649, learning_rate: 9e-05, global_step: 21, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.7178, interval_samples_per_second: 0.3155, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0311, cpu_used_memory: 59.71, cpu_available_memory: 941.14
TrainProcess: 22%|β–ˆβ–ˆβ– | 22/100 [2:28:07<8:49:11, 407.07s/it][2025-10-31 08:15:18,541] [ INFO] - loss: 1.19939423, learning_rate: 8.889e-05, global_step: 22, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 407.0508, interval_samples_per_second: 0.3145, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0326, cpu_used_memory: 59.86, cpu_available_memory: 940.98
TrainProcess: 23%|β–ˆβ–ˆβ–Ž | 23/100 [2:34:56<8:43:16, 407.75s/it][2025-10-31 08:22:07,793] [ INFO] - loss: 1.31498718, learning_rate: 8.778e-05, global_step: 23, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 409.2762, interval_samples_per_second: 0.3127, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0341, cpu_used_memory: 59.78, cpu_available_memory: 941.07
TrainProcess: 24%|β–ˆβ–ˆβ– | 24/100 [2:41:44<8:36:35, 407.83s/it][2025-10-31 08:28:55,831] [ INFO] - loss: 1.15634918, learning_rate: 8.667e-05, global_step: 24, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.0467, interval_samples_per_second: 0.3137, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0356, cpu_used_memory: 59.84, cpu_available_memory: 941.01
TrainProcess: 25%|β–ˆβ–ˆβ–Œ | 25/100 [2:48:31<8:29:31, 407.62s/it][2025-10-31 08:35:42,919] [ INFO] - loss: 1.03514862, learning_rate: 8.556e-05, global_step: 25, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 407.0991, interval_samples_per_second: 0.3144, interval_steps_per_second: 0.0025, progress_or_epoch: 0.037, cpu_used_memory: 59.85, cpu_available_memory: 941.0
TrainProcess: 26%|β–ˆβ–ˆβ–Œ | 26/100 [2:55:16<8:21:30, 406.63s/it][2025-10-31 08:42:27,265] [ INFO] - loss: 1.30350494, learning_rate: 8.444e-05, global_step: 26, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.3273, interval_samples_per_second: 0.3166, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0385, cpu_used_memory: 59.92, cpu_available_memory: 940.93
TrainProcess: 27%|β–ˆβ–ˆβ–‹ | 27/100 [3:01:59<8:13:25, 405.55s/it][2025-10-31 08:49:10,333] [ INFO] - loss: 0.91003418, learning_rate: 8.333e-05, global_step: 27, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.0384, interval_samples_per_second: 0.3176, interval_steps_per_second: 0.0025, progress_or_epoch: 0.04, cpu_used_memory: 59.94, cpu_available_memory: 940.91
TrainProcess: 28%|β–ˆβ–ˆβ–Š | 28/100 [3:08:41<8:05:18, 404.42s/it][2025-10-31 08:55:52,204] [ INFO] - loss: 1.03718758, learning_rate: 8.222e-05, global_step: 28, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 401.845, interval_samples_per_second: 0.3185, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0415, cpu_used_memory: 60.1, cpu_available_memory: 940.74
TrainProcess: 29%|β–ˆβ–ˆβ–‰ | 29/100 [3:15:23<7:58:00, 403.95s/it][2025-10-31 09:02:34,971] [ INFO] - loss: 0.9580574, learning_rate: 8.111e-05, global_step: 29, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.7817, interval_samples_per_second: 0.3178, interval_steps_per_second: 0.0025, progress_or_epoch: 0.043, cpu_used_memory: 59.98, cpu_available_memory: 940.86
TrainProcess: 30%|β–ˆβ–ˆβ–ˆ | 30/100 [3:22:07<7:51:00, 403.71s/it][2025-10-31 09:09:18,101] [ INFO] - loss: 0.90377045, learning_rate: 8e-05, global_step: 30, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.1949, interval_samples_per_second: 0.3175, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0444, cpu_used_memory: 59.99, cpu_available_memory: 940.86
TrainProcess: 31%|β–ˆβ–ˆβ–ˆ | 31/100 [3:28:50<7:44:02, 403.52s/it][2025-10-31 09:16:01,166] [ INFO] - loss: 1.01200104, learning_rate: 7.889e-05, global_step: 31, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.0193, interval_samples_per_second: 0.3176, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0459, cpu_used_memory: 60.06, cpu_available_memory: 940.78
TrainProcess: 32%|β–ˆβ–ˆβ–ˆβ– | 32/100 [3:35:34<7:37:35, 403.76s/it][2025-10-31 09:22:45,504] [ INFO] - loss: 0.91242599, learning_rate: 7.778e-05, global_step: 32, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.3427, interval_samples_per_second: 0.3166, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0474, cpu_used_memory: 60.05, cpu_available_memory: 940.79
TrainProcess: 33%|β–ˆβ–ˆβ–ˆβ–Ž | 33/100 [3:42:20<7:31:39, 404.48s/it][2025-10-31 09:29:31,679] [ INFO] - loss: 1.08840942, learning_rate: 7.667e-05, global_step: 33, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.1463, interval_samples_per_second: 0.3152, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0489, cpu_used_memory: 60.98, cpu_available_memory: 939.86
TrainProcess: 34%|β–ˆβ–ˆβ–ˆβ– | 34/100 [3:49:05<7:25:10, 404.71s/it][2025-10-31 09:36:16,872] [ INFO] - loss: 1.01836014, learning_rate: 7.556e-05, global_step: 34, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.2529, interval_samples_per_second: 0.3159, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0504, cpu_used_memory: 61.04, cpu_available_memory: 939.81
TrainProcess: 35%|β–ˆβ–ˆβ–ˆβ–Œ | 35/100 [3:55:52<7:18:53, 405.13s/it][2025-10-31 09:43:03,000] [ INFO] - loss: 1.06134415, learning_rate: 7.444e-05, global_step: 35, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.098, interval_samples_per_second: 0.3152, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0519, cpu_used_memory: 61.02, cpu_available_memory: 939.82
TrainProcess: 36%|β–ˆβ–ˆβ–ˆβ–Œ | 36/100 [4:02:39<7:12:47, 405.74s/it][2025-10-31 09:49:50,220] [ INFO] - loss: 0.82617188, learning_rate: 7.333e-05, global_step: 36, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 407.2047, interval_samples_per_second: 0.3143, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0533, cpu_used_memory: 61.05, cpu_available_memory: 939.8
TrainProcess: 37%|β–ˆβ–ˆβ–ˆβ–‹ | 37/100 [4:09:24<7:06:01, 405.73s/it][2025-10-31 09:56:35,949] [ INFO] - loss: 1.09431076, learning_rate: 7.222e-05, global_step: 37, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.674, interval_samples_per_second: 0.3155, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0548, cpu_used_memory: 65.29, cpu_available_memory: 935.55
TrainProcess: 38%|β–ˆβ–ˆβ–ˆβ–Š | 38/100 [4:16:09<6:59:02, 405.52s/it][2025-10-31 10:03:20,932] [ INFO] - loss: 1.18837738, learning_rate: 7.111e-05, global_step: 38, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.0278, interval_samples_per_second: 0.316, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0563, cpu_used_memory: 66.35, cpu_available_memory: 934.5
TrainProcess: 39%|β–ˆβ–ˆβ–ˆβ–‰ | 39/100 [4:22:54<6:51:58, 405.22s/it][2025-10-31 10:10:05,460] [ INFO] - loss: 0.89819336, learning_rate: 7e-05, global_step: 39, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.5133, interval_samples_per_second: 0.3164, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0578, cpu_used_memory: 62.39, cpu_available_memory: 938.46
TrainProcess: 40%|β–ˆβ–ˆβ–ˆβ–ˆ | 40/100 [4:29:39<6:45:09, 405.16s/it][2025-10-31 10:16:50,508] [ INFO] - loss: 0.83964729, learning_rate: 6.889e-05, global_step: 40, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.0509, interval_samples_per_second: 0.316, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0593, cpu_used_memory: 62.23, cpu_available_memory: 938.62
TrainProcess: 41%|β–ˆβ–ˆβ–ˆβ–ˆ | 41/100 [4:36:23<6:38:12, 404.97s/it][2025-10-31 10:23:34,986] [ INFO] - loss: 0.96242523, learning_rate: 6.778e-05, global_step: 41, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.4902, interval_samples_per_second: 0.3164, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0607, cpu_used_memory: 62.32, cpu_available_memory: 938.53
TrainProcess: 42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 42/100 [4:43:08<6:31:16, 404.76s/it][2025-10-31 10:30:19,280] [ INFO] - loss: 0.83631706, learning_rate: 6.667e-05, global_step: 42, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.2767, interval_samples_per_second: 0.3166, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0622, cpu_used_memory: 61.52, cpu_available_memory: 939.32
TrainProcess: 43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 43/100 [4:49:52<6:24:21, 404.58s/it][2025-10-31 10:37:03,410] [ INFO] - loss: 1.01245499, learning_rate: 6.556e-05, global_step: 43, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.1735, interval_samples_per_second: 0.3167, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0637, cpu_used_memory: 61.59, cpu_available_memory: 939.26
TrainProcess: 44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 44/100 [4:56:37<6:17:39, 404.64s/it][2025-10-31 10:43:48,230] [ INFO] - loss: 0.80370522, learning_rate: 6.444e-05, global_step: 44, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.7733, interval_samples_per_second: 0.3162, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0652, cpu_used_memory: 61.08, cpu_available_memory: 939.77
TrainProcess: 45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 45/100 [5:03:21<6:10:48, 404.53s/it][2025-10-31 10:50:32,471] [ INFO] - loss: 0.99791718, learning_rate: 6.333e-05, global_step: 45, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.2557, interval_samples_per_second: 0.3166, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0667, cpu_used_memory: 60.71, cpu_available_memory: 940.13
TrainProcess: 46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 46/100 [5:10:07<6:04:33, 405.06s/it][2025-10-31 10:57:18,763] [ INFO] - loss: 0.91827583, learning_rate: 6.222e-05, global_step: 46, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.3129, interval_samples_per_second: 0.315, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0682, cpu_used_memory: 60.82, cpu_available_memory: 940.02
TrainProcess: 47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 47/100 [5:16:53<5:57:59, 405.27s/it][2025-10-31 11:04:04,481] [ INFO] - loss: 0.67258835, learning_rate: 6.111e-05, global_step: 47, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.7787, interval_samples_per_second: 0.3154, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0696, cpu_used_memory: 60.77, cpu_available_memory: 940.07
TrainProcess: 48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 48/100 [5:23:40<5:51:35, 405.69s/it][2025-10-31 11:10:51,232] [ INFO] - loss: 0.98800659, learning_rate: 6e-05, global_step: 48, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 406.6298, interval_samples_per_second: 0.3148, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0711, cpu_used_memory: 60.74, cpu_available_memory: 940.1
TrainProcess: 49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 49/100 [5:30:28<5:45:25, 406.39s/it][2025-10-31 11:17:39,218] [ INFO] - loss: 0.90081024, learning_rate: 5.889e-05, global_step: 49, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.0576, interval_samples_per_second: 0.3137, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0726, cpu_used_memory: 60.81, cpu_available_memory: 940.03
TrainProcess: 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 50/100 [5:37:16<5:39:02, 406.85s/it][2025-10-31 11:24:27,192] [ INFO] - loss: 0.83219147, learning_rate: 5.778e-05, global_step: 50, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 407.9098, interval_samples_per_second: 0.3138, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0741, cpu_used_memory: 61.67, cpu_available_memory: 939.17
TrainProcess: 51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 51/100 [5:44:04<5:32:36, 407.27s/it][2025-10-31 11:31:15,410] [ INFO] - loss: 1.02770996, learning_rate: 5.667e-05, global_step: 51, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.2572, interval_samples_per_second: 0.3135, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0756, cpu_used_memory: 61.73, cpu_available_memory: 939.11
TrainProcess: 52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 52/100 [5:50:54<5:26:36, 408.25s/it][2025-10-31 11:38:05,930] [ INFO] - loss: 1.05404377, learning_rate: 5.556e-05, global_step: 52, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 410.5388, interval_samples_per_second: 0.3118, interval_steps_per_second: 0.0024, progress_or_epoch: 0.077, cpu_used_memory: 61.75, cpu_available_memory: 939.09
TrainProcess: 53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 53/100 [5:57:47<5:20:51, 409.62s/it][2025-10-31 11:44:58,760] [ INFO] - loss: 0.87389946, learning_rate: 5.444e-05, global_step: 53, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 412.8, interval_samples_per_second: 0.3101, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0785, cpu_used_memory: 61.86, cpu_available_memory: 938.99
TrainProcess: 54%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 54/100 [6:04:39<5:14:32, 410.27s/it][2025-10-31 11:51:50,456] [ INFO] - loss: 0.97066689, learning_rate: 5.333e-05, global_step: 54, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 411.7836, interval_samples_per_second: 0.3108, interval_steps_per_second: 0.0024, progress_or_epoch: 0.08, cpu_used_memory: 61.02, cpu_available_memory: 939.82
TrainProcess: 55%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 55/100 [6:11:29<5:07:44, 410.32s/it][2025-10-31 11:58:40,992] [ INFO] - loss: 0.79508591, learning_rate: 5.222e-05, global_step: 55, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 410.475, interval_samples_per_second: 0.3118, interval_steps_per_second: 0.0024, progress_or_epoch: 0.0815, cpu_used_memory: 61.17, cpu_available_memory: 939.68
TrainProcess: 56%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 56/100 [6:18:18<5:00:32, 409.82s/it][2025-10-31 12:05:29,601] [ INFO] - loss: 0.74722576, learning_rate: 5.111e-05, global_step: 56, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 408.6251, interval_samples_per_second: 0.3132, interval_steps_per_second: 0.0024, progress_or_epoch: 0.083, cpu_used_memory: 61.17, cpu_available_memory: 939.67
TrainProcess: 57%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 57/100 [6:25:02<4:52:25, 408.03s/it][2025-10-31 12:12:13,437] [ INFO] - loss: 0.82260895, learning_rate: 5e-05, global_step: 57, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.8658, interval_samples_per_second: 0.3169, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0844, cpu_used_memory: 61.16, cpu_available_memory: 939.68
TrainProcess: 58%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 58/100 [6:31:44<4:44:19, 406.18s/it][2025-10-31 12:18:55,322] [ INFO] - loss: 0.80335999, learning_rate: 4.889e-05, global_step: 58, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 401.8563, interval_samples_per_second: 0.3185, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0859, cpu_used_memory: 61.15, cpu_available_memory: 939.69
TrainProcess: 59%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 59/100 [6:38:27<4:36:52, 405.18s/it][2025-10-31 12:25:38,139] [ INFO] - loss: 0.74405861, learning_rate: 4.778e-05, global_step: 59, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.8266, interval_samples_per_second: 0.3178, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0874, cpu_used_memory: 61.18, cpu_available_memory: 939.67
TrainProcess: 60%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 60/100 [6:45:10<4:29:48, 404.72s/it][2025-10-31 12:32:21,790] [ INFO] - loss: 0.80104542, learning_rate: 4.667e-05, global_step: 60, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.6686, interval_samples_per_second: 0.3171, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0889, cpu_used_memory: 61.37, cpu_available_memory: 939.48
TrainProcess: 61%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 61/100 [6:51:52<4:22:32, 403.91s/it][2025-10-31 12:39:03,803] [ INFO] - loss: 0.84001923, learning_rate: 4.556e-05, global_step: 61, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.0117, interval_samples_per_second: 0.3184, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0904, cpu_used_memory: 61.29, cpu_available_memory: 939.55
TrainProcess: 62%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 62/100 [6:58:34<4:15:27, 403.36s/it][2025-10-31 12:45:45,935] [ INFO] - loss: 0.81241417, learning_rate: 4.444e-05, global_step: 62, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.0882, interval_samples_per_second: 0.3183, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0919, cpu_used_memory: 61.3, cpu_available_memory: 939.54
TrainProcess: 63%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 63/100 [7:05:16<4:08:25, 402.84s/it][2025-10-31 12:52:27,533] [ INFO] - loss: 1.04758453, learning_rate: 4.333e-05, global_step: 63, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 401.6393, interval_samples_per_second: 0.3187, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0933, cpu_used_memory: 61.33, cpu_available_memory: 939.51
TrainProcess: 64%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 64/100 [7:11:58<4:01:30, 402.53s/it][2025-10-31 12:59:09,319] [ INFO] - loss: 0.77729416, learning_rate: 4.222e-05, global_step: 64, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 401.7717, interval_samples_per_second: 0.3186, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0948, cpu_used_memory: 61.41, cpu_available_memory: 939.43
TrainProcess: 65%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 65/100 [7:18:40<3:54:42, 402.35s/it][2025-10-31 13:05:51,237] [ INFO] - loss: 0.65013695, learning_rate: 4.111e-05, global_step: 65, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 401.9448, interval_samples_per_second: 0.3185, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0963, cpu_used_memory: 61.47, cpu_available_memory: 939.38
TrainProcess: 66%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 66/100 [7:25:21<3:47:53, 402.15s/it][2025-10-31 13:12:32,988] [ INFO] - loss: 0.77750492, learning_rate: 4e-05, global_step: 66, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 401.7091, interval_samples_per_second: 0.3186, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0978, cpu_used_memory: 61.45, cpu_available_memory: 939.4
TrainProcess: 67%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 67/100 [7:32:04<3:41:15, 402.29s/it][2025-10-31 13:19:15,602] [ INFO] - loss: 1.10520172, learning_rate: 3.889e-05, global_step: 67, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.5921, interval_samples_per_second: 0.3179, interval_steps_per_second: 0.0025, progress_or_epoch: 0.0993, cpu_used_memory: 61.66, cpu_available_memory: 939.19
TrainProcess: 68%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 68/100 [7:38:47<3:34:41, 402.54s/it][2025-10-31 13:25:58,717] [ INFO] - loss: 0.66330528, learning_rate: 3.778e-05, global_step: 68, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.142, interval_samples_per_second: 0.3175, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1007, cpu_used_memory: 61.5, cpu_available_memory: 939.34
TrainProcess: 69%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 69/100 [7:45:32<3:28:20, 403.24s/it][2025-10-31 13:32:43,516] [ INFO] - loss: 0.77275848, learning_rate: 3.667e-05, global_step: 69, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.8421, interval_samples_per_second: 0.3162, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1022, cpu_used_memory: 61.48, cpu_available_memory: 939.36
TrainProcess: 70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 70/100 [7:52:15<3:21:32, 403.09s/it][2025-10-31 13:39:26,293] [ INFO] - loss: 0.75451612, learning_rate: 3.556e-05, global_step: 70, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.7779, interval_samples_per_second: 0.3178, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1037, cpu_used_memory: 61.54, cpu_available_memory: 939.31
TrainProcess: 71%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 71/100 [7:58:58<3:14:49, 403.09s/it][2025-10-31 13:46:09,476] [ INFO] - loss: 0.83746719, learning_rate: 3.444e-05, global_step: 71, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.0883, interval_samples_per_second: 0.3175, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1052, cpu_used_memory: 61.59, cpu_available_memory: 939.25
TrainProcess: 72%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 72/100 [8:05:40<3:08:00, 402.88s/it][2025-10-31 13:52:51,802] [ INFO] - loss: 0.68475342, learning_rate: 3.333e-05, global_step: 72, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.4058, interval_samples_per_second: 0.3181, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1067, cpu_used_memory: 61.77, cpu_available_memory: 939.07
TrainProcess: 73%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 73/100 [8:12:23<3:01:17, 402.86s/it][2025-10-31 13:59:34,593] [ INFO] - loss: 0.87439346, learning_rate: 3.222e-05, global_step: 73, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.7801, interval_samples_per_second: 0.3178, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1082, cpu_used_memory: 61.67, cpu_available_memory: 939.18
TrainProcess: 74%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 74/100 [8:19:07<2:54:42, 403.17s/it][2025-10-31 14:06:18,502] [ INFO] - loss: 0.80682707, learning_rate: 3.111e-05, global_step: 74, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.9087, interval_samples_per_second: 0.3169, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1096, cpu_used_memory: 61.65, cpu_available_memory: 939.2
TrainProcess: 75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 75/100 [8:25:50<2:48:00, 403.20s/it][2025-10-31 14:13:01,815] [ INFO] - loss: 0.91726685, learning_rate: 3e-05, global_step: 75, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.2516, interval_samples_per_second: 0.3174, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1111, cpu_used_memory: 61.65, cpu_available_memory: 939.2
TrainProcess: 76%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 76/100 [8:32:33<2:41:15, 403.16s/it][2025-10-31 14:19:44,836] [ INFO] - loss: 0.77852249, learning_rate: 2.889e-05, global_step: 76, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.0565, interval_samples_per_second: 0.3176, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1126, cpu_used_memory: 61.74, cpu_available_memory: 939.11
TrainProcess: 77%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 77/100 [8:39:17<2:34:32, 403.17s/it][2025-10-31 14:26:28,021] [ INFO] - loss: 0.64542007, learning_rate: 2.778e-05, global_step: 77, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.2185, interval_samples_per_second: 0.3174, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1141, cpu_used_memory: 61.91, cpu_available_memory: 938.93
TrainProcess: 78%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 78/100 [8:46:00<2:27:50, 403.22s/it][2025-10-31 14:33:11,403] [ INFO] - loss: 0.85227966, learning_rate: 2.667e-05, global_step: 78, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.3908, interval_samples_per_second: 0.3173, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1156, cpu_used_memory: 61.83, cpu_available_memory: 939.01
TrainProcess: 79%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 79/100 [8:52:44<2:21:15, 403.60s/it][2025-10-31 14:39:55,890] [ INFO] - loss: 0.89850044, learning_rate: 2.556e-05, global_step: 79, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.4463, interval_samples_per_second: 0.3165, interval_steps_per_second: 0.0025, progress_or_epoch: 0.117, cpu_used_memory: 61.87, cpu_available_memory: 938.97
TrainProcess: 80%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 80/100 [8:59:28<2:14:31, 403.55s/it][2025-10-31 14:46:39,284] [ INFO] - loss: 0.73733902, learning_rate: 2.444e-05, global_step: 80, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.4239, interval_samples_per_second: 0.3173, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1185, cpu_used_memory: 61.87, cpu_available_memory: 938.98
TrainProcess: 81%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 81/100 [9:06:11<2:07:45, 403.44s/it][2025-10-31 14:53:22,452] [ INFO] - loss: 0.79546261, learning_rate: 2.333e-05, global_step: 81, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.1577, interval_samples_per_second: 0.3175, interval_steps_per_second: 0.0025, progress_or_epoch: 0.12, cpu_used_memory: 61.91, cpu_available_memory: 938.94
TrainProcess: 82%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 82/100 [9:12:55<2:01:07, 403.72s/it][2025-10-31 15:00:06,830] [ INFO] - loss: 0.75153351, learning_rate: 2.222e-05, global_step: 82, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.3936, interval_samples_per_second: 0.3165, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1215, cpu_used_memory: 62.1, cpu_available_memory: 938.75
TrainProcess: 83%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 83/100 [9:19:39<1:54:20, 403.59s/it][2025-10-31 15:06:50,102] [ INFO] - loss: 0.97904587, learning_rate: 2.111e-05, global_step: 83, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.272, interval_samples_per_second: 0.3174, interval_steps_per_second: 0.0025, progress_or_epoch: 0.123, cpu_used_memory: 61.97, cpu_available_memory: 938.87
TrainProcess: 84%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 84/100 [9:26:22<1:47:37, 403.58s/it][2025-10-31 15:13:33,680] [ INFO] - loss: 0.83035088, learning_rate: 2e-05, global_step: 84, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.558, interval_samples_per_second: 0.3172, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1245, cpu_used_memory: 62.07, cpu_available_memory: 938.77
TrainProcess: 85%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 85/100 [9:33:07<1:40:57, 403.82s/it][2025-10-31 15:20:18,097] [ INFO] - loss: 0.56796551, learning_rate: 1.889e-05, global_step: 85, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.3945, interval_samples_per_second: 0.3165, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1259, cpu_used_memory: 62.03, cpu_available_memory: 938.81
TrainProcess: 86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 86/100 [9:39:50<1:34:13, 403.79s/it][2025-10-31 15:27:01,761] [ INFO] - loss: 0.93466377, learning_rate: 1.778e-05, global_step: 86, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.7092, interval_samples_per_second: 0.3171, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1274, cpu_used_memory: 62.11, cpu_available_memory: 938.73
TrainProcess: 87%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 87/100 [9:46:34<1:27:28, 403.77s/it][2025-10-31 15:33:45,530] [ INFO] - loss: 0.93402195, learning_rate: 1.667e-05, global_step: 87, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.7246, interval_samples_per_second: 0.317, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1289, cpu_used_memory: 62.11, cpu_available_memory: 938.74
TrainProcess: 88%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 88/100 [9:53:18<1:20:47, 403.93s/it][2025-10-31 15:40:29,845] [ INFO] - loss: 0.89247513, learning_rate: 1.556e-05, global_step: 88, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.3319, interval_samples_per_second: 0.3166, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1304, cpu_used_memory: 62.17, cpu_available_memory: 938.68
TrainProcess: 89%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 89/100 [10:00:01<1:13:59, 403.58s/it][2025-10-31 15:47:12,607] [ INFO] - loss: 0.84331131, learning_rate: 1.444e-05, global_step: 89, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.7456, interval_samples_per_second: 0.3178, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1319, cpu_used_memory: 62.21, cpu_available_memory: 938.63
TrainProcess: 90%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 90/100 [10:06:46<1:07:19, 403.94s/it][2025-10-31 15:53:57,350] [ INFO] - loss: 0.69530869, learning_rate: 1.333e-05, global_step: 90, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.7758, interval_samples_per_second: 0.3162, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1333, cpu_used_memory: 62.3, cpu_available_memory: 938.54
TrainProcess: 91%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 91/100 [10:13:29<1:00:32, 403.61s/it][2025-10-31 16:00:40,191] [ INFO] - loss: 0.80363655, learning_rate: 1.222e-05, global_step: 91, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.8474, interval_samples_per_second: 0.3177, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1348, cpu_used_memory: 62.28, cpu_available_memory: 938.56
TrainProcess: 92%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 92/100 [10:20:11<53:45, 403.19s/it] [2025-10-31 16:07:22,476] [ INFO] - loss: 0.81467819, learning_rate: 1.111e-05, global_step: 92, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.2101, interval_samples_per_second: 0.3182, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1363, cpu_used_memory: 62.34, cpu_available_memory: 938.5
TrainProcess: 93%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž| 93/100 [10:26:55<47:03, 403.30s/it][2025-10-31 16:14:06,010] [ INFO] - loss: 0.73225021, learning_rate: 1e-05, global_step: 93, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.568, interval_samples_per_second: 0.3172, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1378, cpu_used_memory: 62.36, cpu_available_memory: 938.48
TrainProcess: 94%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–| 94/100 [10:33:38<40:19, 403.26s/it][2025-10-31 16:20:49,261] [ INFO] - loss: 0.90310669, learning_rate: 8.889e-06, global_step: 94, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.1673, interval_samples_per_second: 0.3175, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1393, cpu_used_memory: 62.38, cpu_available_memory: 938.46
TrainProcess: 95%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ| 95/100 [10:40:22<33:37, 403.55s/it][2025-10-31 16:27:33,365] [ INFO] - loss: 0.72865582, learning_rate: 7.778e-06, global_step: 95, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.2221, interval_samples_per_second: 0.3167, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1407, cpu_used_memory: 62.42, cpu_available_memory: 938.42
TrainProcess: 96%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ| 96/100 [10:47:06<26:54, 403.60s/it][2025-10-31 16:34:17,115] [ INFO] - loss: 0.44960594, learning_rate: 6.667e-06, global_step: 96, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 403.7215, interval_samples_per_second: 0.3171, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1422, cpu_used_memory: 62.48, cpu_available_memory: 938.37
TrainProcess: 97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 97/100 [10:53:50<20:11, 403.84s/it][2025-10-31 16:41:01,499] [ INFO] - loss: 0.63899231, learning_rate: 5.556e-06, global_step: 97, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.3735, interval_samples_per_second: 0.3165, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1437, cpu_used_memory: 62.46, cpu_available_memory: 938.38
TrainProcess: 98%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š| 98/100 [11:00:33<13:26, 403.49s/it][2025-10-31 16:47:44,120] [ INFO] - loss: 0.68179607, learning_rate: 4.444e-06, global_step: 98, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 402.6714, interval_samples_per_second: 0.3179, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1452, cpu_used_memory: 62.64, cpu_available_memory: 938.21
TrainProcess: 99%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 99/100 [11:07:17<06:43, 403.84s/it][2025-10-31 16:54:28,783] [ INFO] - loss: 0.99879265, learning_rate: 3.333e-06, global_step: 99, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 404.6553, interval_samples_per_second: 0.3163, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1467, cpu_used_memory: 62.52, cpu_available_memory: 938.32
TrainProcess: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100/100 [11:14:03<00:00, 404.25s/it][2025-10-31 17:01:14,016] [ INFO] - loss: 0.79484558, learning_rate: 2.222e-06, global_step: 100, current_memory_allocated: 12.27591323852539, current_memory_reserved: 70.46749949455261, max_memory_allocated: 63.21619176864624, max_memory_reserved: 70.46749949455261, interval_runtime: 405.2323, interval_samples_per_second: 0.3159, interval_steps_per_second: 0.0025, progress_or_epoch: 0.1482, cpu_used_memory: 62.63, cpu_available_memory: 938.22
[2025-10-31 17:01:14,017] [ INFO] -
Training completed.

[2025-10-31 17:01:14,029] [ INFO] - train_runtime: 40443.1151, train_samples_per_second: 0.3165, train_steps_per_second: 0.0025, train_loss: 1.5433832550048827, progress_or_epoch: 0.1482, cpu_used_memory: 62.63, cpu_available_memory: 938.22
TrainProcess: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 100/100 [11:14:03<00:00, 404.43s/it]
[2025-10-31 17:01:14,033] [ INFO] - Saving model checkpoint to tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle
[2025-10-31 17:01:14,062] [ INFO] - tokenizer config file saved in tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle/tokenizer_config.json
[2025-10-31 17:01:14,131] [ INFO] - Special tokens file saved in tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle/special_tokens_map.json
[2025-10-31 17:01:14,133] [ INFO] - added tokens file saved in tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle/added_tokens.json
[2025-10-31 17:01:16,050] [ INFO] - Configuration saved in tmp_train_dureader_dual.train.jsonl/LLARA-passage-paddle/config.json
[2025-10-31 17:01:16,051] [ INFO] - ***** train metrics *****
[2025-10-31 17:01:16,051] [ INFO] - cpu_available_memory = 938.22
[2025-10-31 17:01:16,051] [ INFO] - cpu_used_memory = 62.63
[2025-10-31 17:01:16,051] [ INFO] - progress_or_epoch = 0.1482
[2025-10-31 17:01:16,051] [ INFO] - train_loss = 1.5434
[2025-10-31 17:01:16,052] [ INFO] - train_runtime = 11:14:03.11
[2025-10-31 17:01:16,052] [ INFO] - train_samples_per_second = 0.3165
[2025-10-31 17:01:16,052] [ INFO] - train_steps_per_second = 0.0025
LAUNCH INFO 2025-10-31 17:01:24,796 Pod completed
LAUNCH INFO 2025-10-31 17:01:24,797 Exit code 0