Model Card for Qwen3-CoT-Scientific-Research


Model Details

Model Description

  • Base Model: Qwen3-1.7B
  • Task: Scientific Reasoning with Chain-of-Thought (CoT)
  • Dataset: CoT_Reasoning_Scientific_Discovery_and_Research (custom dataset focusing on step-by-step scientific reasoning tasks)
  • Training Objective: Encourage step-by-step logical deductions for scientific reasoning problems

Uses

Direct Use

This fine-tuned model is designed for:

  • Assisting in teaching and learning scientific reasoning
  • Supporting educational AI assistants in science classrooms
  • Demonstrating step-by-step scientific reasoning in research training contexts
  • Serving as a resource for automated reasoning systems to better emulate structured scientific logic

It is not intended to replace human researchers, perform advanced analytics, or generate novel scientific discoveries.

Bias, Risks, and Limitations

  • May oversimplify complex or interdisciplinary problems
  • Performance limited by the scope of training data (primarily introductory-level scientific reasoning tasks)
  • Does not handle real-world experimentation or advanced statistical modeling
  • May produce incorrect reasoning if the prompt is highly ambiguous

How to Get Started with the Model

Use the code below to get started with the model. It loads the base model, attaches the LoRA adapter, and assumes transformers, peft, and accelerate are installed with a CUDA-capable GPU available.


from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

# Load the tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B",
    device_map={"": 0},  # place the whole model on GPU 0
)
model = PeftModel.from_pretrained(base_model, "khazarai/Qwen3-CoT-Scientific-Research")

question = """
How are microfluidic devices revolutionizing laboratory analysis techniques, and what are the primary advantages they offer over traditional methods?
"""

messages = [
    {"role": "user", "content": question}
]

# Build the chat prompt; enable_thinking=True switches on Qwen3's thinking
# mode, so the model emits its chain of thought before the final answer.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

# Generate with the sampling settings recommended for Qwen3's thinking mode,
# streaming tokens to stdout as they are produced
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=1800,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
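
If you want to serve the model without the PEFT wrapper, the adapter weights can be folded into the base model. A minimal sketch (the output directory name is arbitrary):

# Merge the LoRA adapter into the base weights and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("Qwen3-CoT-Scientific-Research-merged")
tokenizer.save_pretrained("Qwen3-CoT-Scientific-Research-merged")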

To use the model through the transformers pipeline API instead:

from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Same loading steps as above: base model plus LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-1.7B")
model = PeftModel.from_pretrained(base_model, "khazarai/Qwen3-CoT-Scientific-Research")

question = """
How are microfluidic devices revolutionizing laboratory analysis techniques, and what are the primary advantages they offer over traditional methods?
"""

# The pipeline accepts chat-style message lists and applies the chat template internally
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [
    {"role": "user", "content": question}
]
pipe(messages)
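
Sampling parameters can be passed straight through the pipeline call. A sketch reusing the settings from the first example; the output indexing assumes the chat-format return value of recent transformers releases:

outputs = pipe(
    messages,
    max_new_tokens=1800,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
# With chat-style input, "generated_text" is the message list with the
# assistant's reply appended last
print(outputs[0]["generated_text"][-1]["content"])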

Training Details

Training Data

Scope

This model was fine-tuned on tasks that involve core scientific reasoning:

  • Formulating testable hypotheses
  • Identifying independent and dependent variables
  • Designing simple controlled experiments
  • Interpreting graphs, tables, and basic data representations
  • Understanding relationships between evidence and conclusions
  • Recognizing simple logical fallacies in scientific arguments

Illustrative Examples

  • Drawing conclusions from experimental results
  • Evaluating alternative explanations for observed data
  • Explaining step-by-step reasoning behind scientific conclusions

Emphasis on Chain-of-Thought (CoT)

  • The dataset highlights explicit reasoning steps, making the model better at producing step-by-step explanations when solving scientific reasoning tasks (see the sketch below for recovering these reasoning traces at inference time).
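
Because the getting-started snippet enables Qwen3's thinking mode, generations open with a <think>...</think> block containing this chain of thought. A minimal sketch for separating it from the final answer, following the parsing pattern from the Qwen3 model card and reusing tokenizer, model, and text from the snippet above (capturing output instead of streaming):

# Generate, then split the reasoning block from the final answer
inputs = tokenizer(text, return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=1800)[0][inputs["input_ids"].shape[1]:].tolist()

think_end = tokenizer.convert_tokens_to_ids("</think>")
try:
    split = len(output_ids) - output_ids[::-1].index(think_end)  # just past </think>
except ValueError:
    split = 0  # no thinking block was emitted

reasoning = tokenizer.decode(output_ids[:split], skip_special_tokens=True).strip()
answer = tokenizer.decode(output_ids[split:], skip_special_tokens=True).strip()
print("Chain of thought:", reasoning)
print("Answer:", answer)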

Focus on Foundational Knowledge

The dataset aims to strengthen models in foundational scientific reasoning skills rather than covering all domains of scientific knowledge.

Dataset: moremilk/CoT_Reasoning_Scientific_Discovery_and_Research
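
The dataset can be pulled from the Hub for inspection. A minimal sketch; the split name is an assumption and the record layout varies, so check the dataset card for the actual schema:

from datasets import load_dataset

# Load the CoT scientific-reasoning dataset and print one record
ds = load_dataset("moremilk/CoT_Reasoning_Scientific_Discovery_and_Research", split="train")
print(ds[0])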

Framework versions

  • PEFT 0.15.2