CrossEncoder based on Qwen/Qwen3-Embedding-0.6B

This is a Cross Encoder model finetuned from Qwen/Qwen3-Embedding-0.6B using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: Qwen/Qwen3-Embedding-0.6B
  • Maximum Sequence Length: 32768 tokens
  • Number of Output Labels: 1 label

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("vkimbris/qwen3_06b_items_reranker")
# Get scores for pairs of texts
pairs = [
    ['Васаби порошок горчичный Премиум Fumiko Resfood 1кг, 10шт/кор, Кихай', 'Васаби Fumiko Premium  грейд А, 85% хрена'],
    ['Соус Терияки Genso 1,5n/1,7кг, бшт/кор, Россия', 'Соус Терияки Genso'],
    ['Уксус рисовый Padam Prem Resfood 20л, Россия', 'Уксус рисовый Padam Premium'],
    ['Имбирь маринованный розовый Tabuko Restood 1,5 кг, вес сухого вещ-ва 1кг, 10шт/кор, Китай', 'Имбирь маринованный Tabuko розовый'],
    ["Паста Том Ям 'Genso' пакет (0,400 кг) упак. 24 шт. Тайланд", 'Паста Том Ям Genso'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'Васаби порошок горчичный Премиум Fumiko Resfood 1кг, 10шт/кор, Кихай',
    [
        'Васаби Fumiko Premium  грейд А, 85% хрена',
        'Соус Терияки Genso',
        'Уксус рисовый Padam Premium',
        'Имбирь маринованный Tabuko розовый',
        'Паста Том Ям Genso',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
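
The scores fall in [0, 1] (the model was trained with a sigmoid activation), so they can be thresholded to make binary "same item" decisions. A minimal sketch, reusing the f1_threshold of roughly 0.73 reported in the evaluation tables below (tune this for your own data):

# Threshold raw scores to get binary match predictions.
# 0.7263 is the f1_threshold from the first evaluation below.
THRESHOLD = 0.7263
matches = scores > THRESHOLD
print(matches)
# e.g. [ True  True  True  True  True]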

Evaluation

Metrics

Cross Encoder Classification

Metric               Value
accuracy             0.9389
accuracy_threshold   0.7263
f1                   0.9392
f1_threshold         0.7263
precision            0.9356
recall               0.9427
average_precision    0.9509

Cross Encoder Classification

Metric               Value
accuracy             0.9436
accuracy_threshold   0.8169
f1                   0.9447
f1_threshold         0.7355
precision            0.9267
recall               0.9634
average_precision    0.9544
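
These tables come from a binary pair-classification evaluation. A minimal sketch of running this kind of evaluation yourself with sentence-transformers' CrossEncoderClassificationEvaluator, assuming a recent (>= 4.x) release; the pairs and labels below are placeholders:

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator

model = CrossEncoder("vkimbris/qwen3_06b_items_reranker")

# Placeholder pairs with binary labels (1 = same item, 0 = different item).
pairs = [
    ["Соус Терияки Genso 1,5n/1,7кг, бшт/кор, Россия", "Соус Терияки Genso"],
    ["Соус Терияки Genso 1,5n/1,7кг, бшт/кор, Россия", "Уксус рисовый Padam Premium"],
]
labels = [1, 0]

evaluator = CrossEncoderClassificationEvaluator(
    sentence_pairs=pairs,
    labels=labels,
    name="items-eval",
)
print(evaluator(model))
# {'items-eval_accuracy': ..., 'items-eval_f1': ..., 'items-eval_average_precision': ..., ...}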

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,047 training samples
  • Columns: premise and hypothesis
  • Approximate statistics based on the first 1000 samples:
                 premise                   hypothesis
    type         string                    string
    min length   11 characters             6 characters
    mean length  49.72 characters          27.71 characters
    max length   107 characters            62 characters
  • Samples:
    premise | hypothesis
    Смесь мучная темпурная 'KANESHIRO' 1кг | Мука темпурная Kaneshiro
    Смесь темпурная Kaneshiro Resfood 1xr. 10шт/кор | Мука темпурная Kaneshiro
    Имбирь маринованный розовый 'Hansey' 1,4 кг*10 (в.с. КОРОБОК ПО 10 ПАЧЕК) | Имбирь маринованный розовый Hansey, вес сухого вещества 1000 г
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 10.0,
        "num_negatives": 4,
        "activation_fn": "torch.nn.modules.activation.Sigmoid"
    }
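
A minimal sketch of constructing this loss with the listed parameters, assuming the cross-encoder losses module from a recent sentence-transformers release:

import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import MultipleNegativesRankingLoss

model = CrossEncoder("Qwen/Qwen3-Embedding-0.6B", num_labels=1)

# scale, num_negatives and activation_fn match the parameters above;
# negatives are drawn from the other in-batch hypotheses.
loss = MultipleNegativesRankingLoss(
    model=model,
    scale=10.0,
    num_negatives=4,
    activation_fn=torch.nn.Sigmoid(),
)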
    

Evaluation Dataset

Unnamed Dataset

  • Size: 262 evaluation samples
  • Columns: premise and hypothesis
  • Approximate statistics based on the first 262 samples:
                 premise                   hypothesis
    type         string                    string
    min length   14 characters             13 characters
    mean length  50.15 characters          26.98 characters
    max length   111 characters            62 characters
  • Samples:
    premise | hypothesis
    Васаби порошок горчичный Премиум Fumiko Resfood 1кг, 10шт/кор, Кихай | Васаби Fumiko Premium грейд А, 85% хрена
    Соус Терияки Genso 1,5n/1,7кг, бшт/кор, Россия | Соус Терияки Genso
    Уксус рисовый Padam Prem Resfood 20л, Россия | Уксус рисовый Padam Premium
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 10.0,
        "num_negatives": 4,
        "activation_fn": "torch.nn.modules.activation.Sigmoid"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 15
  • warmup_ratio: 0.1
  • fp16: True
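
A minimal sketch of a comparable training run using sentence-transformers' CrossEncoderTrainer (the dataset below is a placeholder; model and loss are the ones sketched in the Training Dataset section):

from datasets import Dataset
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments

# Placeholder dataset with the premise/hypothesis columns used here.
train_dataset = Dataset.from_dict({
    "premise": ["Соус Терияки Genso 1,5n/1,7кг, бшт/кор, Россия"],
    "hypothesis": ["Соус Терияки Genso"],
})

args = CrossEncoderTrainingArguments(
    output_dir="qwen3_06b_items_reranker",
    num_train_epochs=15,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    # The original run also set eval_strategy="steps", evaluating
    # against the 262-sample evaluation dataset described above.
)

trainer = CrossEncoderTrainer(
    model=model,          # the CrossEncoder from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    loss=loss,            # the MultipleNegativesRankingLoss from above
)
trainer.train()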

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 15
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch     Step   Training Loss   Validation Loss   average_precision
1.5152    100    0.4864          0.1104            0.8944
3.0303    200    0.1238          0.0983            0.9240
4.5455    300    0.1106          0.0934            0.9466
6.0606    400    0.1068          0.0939            0.9378
7.5758    500    0.1135          0.1023            0.9232
9.0909    600    0.1061          0.1187            0.9186
10.6061   700    0.1074          0.0808            0.9445
12.1212   800    0.1039          0.1153            0.9403
13.6364   900    0.1082          0.0900            0.9509
-1        -1     -               -                 0.9544

The final row (epoch and step of -1) is the evaluation of the final model after training completed.

Framework Versions

  • Python: 3.12.3
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.4.2
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}