SmolLM2-360M Financial News JSON Analyst (SmolNewsAnalysis-002)

Overview

  • Purpose Fine-tuned SmolLM2-360M-Instruct to summarize Alpaca/FMP financial news into structured sentiment + significance scores consumed by the Twatter news pipeline.
  • Base model HuggingFaceTB/SmolLM2-360M-Instruct (360M parameters, Apache-2.0).
  • Architecture Decoder-only transformer compatible with SmolLM chat formatting (...).
  • Finetuning method LoRA adapters (rank 8, alpha 16, dropout 0) merged into base weights post-training.
  • Repository integration Loaded via SharedLLMManager.TrainedModelClient and invoked by stock_news_processor.py for Alpaca feed scoring.

What it Predicts

  • sentiment_score Float in [-1, 1] summarizing bullish/bearish tone.
  • sentiment_confidence Model confidence for sentiment score (float 0-1).
  • wow_score Market impact category normalized to Extremely Bad News, Bad News, Meh News, Regular News, Big News, or Huge News.
  • wow_confidence Confidence for wow_score (float 0-1).
  • symbol Canonical ticker symbol extracted from the article payload (may be empty if missing upstream).
  • site / source_name Strings describing origin; pass-through of Alpaca metadata for downstream routing.
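For reference, a well-formed response for the bullish Tesla headline used in Quick Start below might look like this (all values are illustrative, not actual model output):

{"symbol": "TSLA", "site": "bloomberg.com", "source_name": "Bloomberg", "sentiment_score": 0.7, "sentiment_confidence": 0.9, "wow_score": "Big News", "wow_confidence": 0.85}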

Prompt Format

  • System message Injected automatically by chat_template.jinja and Modelfile when missing:
You are a precise financial news analyst. Read the news text and output a compact JSON with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, wow_score, wow_confidence. Output only the JSON without commentary.
  • Full chat template used for SmolLM-style prompts:
<|im_start|>system
You are a precise financial news analyst. Read the news text and output a compact JSON with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, wow_score, wow_confidence. Output only the JSON without commentary.<|im_end|>
<|im_start|>user
<news article title/summary + metadata>
<|im_end|>
<|im_start|>assistant
  • Alternative [INST] framing used in SharedLLMManager.TrainedModelClient.generate() when TRAINED_MODEL_TYPE="llama":
<s>[INST] <<SYS>>
You are a precise financial news analyst. Read the news text and output a compact JSON with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, wow_score, wow_confidence. Output only the JSON without commentary.
<</SYS>>

<news article title/summary + metadata> [/INST]

Output Must be a single JSON object; downstream parsing stops at the first closing brace.
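A minimal parsing sketch consistent with that rule (parse_model_json is a hypothetical helper, not part of the repository; because the schema is flat, the first closing brace always ends the object):

import json

def parse_model_json(text: str) -> dict:
    # Locate the first JSON object and stop at its first closing brace,
    # mirroring the downstream parsing rule described above.
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found in model output")
    end = text.find("}", start)
    if end == -1:
        raise ValueError("JSON object is not closed")
    return json.loads(text[start:end + 1])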

Quick Start

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LeviDeHaan/SmolNewsAnalysis-002"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = """<|im_start|>system
You are a precise financial news analyst...<|im_end|>\n"""
prompt += "<|im_start|>user\nTesla shares climb after deliveries beat expectations. Symbol: TSLA Site: bloomberg.com\n<|im_end|>\n<|im_start|>assistant\n"

inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True is required for the temperature setting to take effect
outputs = model.generate(**inputs, max_new_tokens=160, do_sample=True, temperature=0.1)
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
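If the shipped chat_template.jinja matches the format above, the same prompt can be built with the tokenizer's chat-template API instead of manual string assembly (a sketch reusing the system string defined above):

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "Tesla shares climb after deliveries beat expectations. Symbol: TSLA Site: bloomberg.com"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)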

Training Data

  • Source Aggregated Alpaca/FMP news processed by stock_news_processor.py and exported through extract_stock_json_news_training.py to training_data/news_data/stock_news_training.json.
  • Samples 1 506 deduplicated instruction/response pairs (hash dedupe over title/summary + ticker + site).
  • Wow distribution Big News 645, Regular News 272, Bad News 253, Huge News 198, Meh News 120, Extremely Bad News 16 (plus 2 legacy "Bad News (negative but not catastrophic)" entries coerced to canonical values at inference time).
  • Content News titles, summaries, symbol/site metadata, and minified JSON outputs describing sentiment and impact.
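For illustration only, an exported pair might resemble the record below; the exact schema is defined by extract_stock_json_news_training.py, and the instruction/input/output layout shown here is an assumption:

{
  "instruction": "You are a precise financial news analyst. Read the news text and output a compact JSON with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, wow_score, wow_confidence. Output only the JSON without commentary.",
  "input": "Tesla shares climb after deliveries beat expectations. Symbol: TSLA Site: bloomberg.com",
  "output": "{\"symbol\": \"TSLA\", \"site\": \"bloomberg.com\", \"source_name\": \"Bloomberg\", \"sentiment_score\": 0.7, \"sentiment_confidence\": 0.9, \"wow_score\": \"Big News\", \"wow_confidence\": 0.85}"
}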

Training Procedure

  • Framework LLaMA-Factory (SmolLM2 template) + PEFT LoRA on bf16 accelerators.
  • Hyperparameters learning_rate=5e-5, per_device_train_batch_size=2, gradient_accumulation_steps=8, num_train_epochs=10, cutoff_len=2048, lora_r=8, lora_alpha=16, lora_dropout=0, lr_scheduler_type=cosine_with_restarts, max_grad_norm=1.0, warmup_steps=0.
  • Tokens seen 8 757 824 (num_input_tokens_seen in all_results.json).
  • Final train loss 0.0925.
  • Adapters Merged into base weights before export; no LoRA files required at inference.
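The merge corresponds to the standard PEFT pattern; a sketch with a hypothetical adapter path:

from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-360M-Instruct", torch_dtype="auto"
)
# Fold the trained LoRA adapters into the base weights so that
# inference needs neither PEFT nor the adapter files.
merged = PeftModel.from_pretrained(base, "saves/lora_adapters").merge_and_unload()
merged.save_pretrained("SmolNewsAnalysis-002")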

Known Limitations

  • Domain specificity Tuned on short news briefs; long-form articles may require additional summarization or truncation (stock_news_processor.py trims to 1800 chars).
  • JSON adherence Strong but still validate output to guard against malformed fields.
  • Ticker coverage Relies on upstream symbol detection; missing tickers yield blank symbol values.
  • Wow taxonomy Responses outside the canonical set default to Regular News via analyzer normalization (see the sketch after this list).
  • Market latency Model reflects historical data only; no real-time awareness or price prediction.
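A minimal sketch of that normalization, assuming a simple set-membership check (the actual analyzer code may differ):

CANONICAL_WOW = {
    "Extremely Bad News", "Bad News", "Meh News",
    "Regular News", "Big News", "Huge News",
}

def normalize_wow(label: str) -> str:
    # Labels outside the canonical set fall back to "Regular News".
    label = label.strip()
    return label if label in CANONICAL_WOW else "Regular News"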

License

  • Model weights Apache-2.0 (inherits from base model).

Contact & Support
