SmolLM2-360M Financial News JSON Analyst (SmolNewsAnalysis-002)
- Hugging Face model card: https://huggingface.co/LeviDeHaan/SmolNewsAnalysis-002
- Author: Levi De Haan
- Project overview: https://levidehaan.com/projects/twatter
Overview
- Purpose: A fine-tuned SmolLM2-360M-Instruct that summarizes Alpaca/FMP financial news into structured sentiment and significance scores consumed by the Twatter news pipeline.
- Base model: HuggingFaceTB/SmolLM2-360M-Instruct (360M parameters, Apache-2.0).
- Architecture: Decoder-only transformer compatible with SmolLM chat formatting (`<|im_start|>`/`<|im_end|>` turns; see the full template below).
- Finetuning method: LoRA adapters (rank 8, alpha 16, dropout 0) merged into the base weights post-training.
- Repository integration: Loaded via `SharedLLMManager.TrainedModelClient` and invoked by `stock_news_processor.py` for Alpaca feed scoring.
What It Predicts
- `sentiment_score`: Float in [-1, 1] summarizing bullish/bearish tone.
- `sentiment_confidence`: Model confidence for the sentiment score (float in [0, 1]).
- `wow_score`: Market impact category, normalized to one of Extremely Bad News, Bad News, Meh News, Regular News, Big News, or Huge News.
- `wow_confidence`: Confidence for `wow_score` (float in [0, 1]).
- `symbol`: Canonical ticker symbol extracted from the article payload (may be empty if missing upstream).
- `site` / `source_name`: Strings describing the article origin; passed through from Alpaca metadata for downstream routing.
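Put together, a typical response looks like the following (values are illustrative; only the field names and ranges above are guaranteed):
{"symbol": "TSLA", "site": "bloomberg.com", "source_name": "Bloomberg", "sentiment_score": 0.72, "sentiment_confidence": 0.9, "wow_score": "Big News", "wow_confidence": 0.8}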
Prompt Format
- System message: Injected automatically by `chat_template.jinja` and the `Modelfile` when missing:
You are a precise financial news analyst. Read the news text and output a compact JSON with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, wow_score, wow_confidence. Output only the JSON without commentary.
- Full chat template used for SmolLM-style prompts:
<|im_start|>system
You are a precise financial news analyst. Read the news text and output a compact JSON with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, wow_score, wow_confidence. Output only the JSON without commentary.<|im_end|>
<|im_start|>user
<news article title/summary + metadata>
<|im_end|>
<|im_start|>assistant
- Alternative `[INST]` framing used in `SharedLLMManager.TrainedModelClient.generate()` when `TRAINED_MODEL_TYPE="llama"`:
<s>[INST] <<SYS>>
You are a precise financial news analyst. Read the news text and output a compact JSON with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, wow_score, wow_confidence. Output only the JSON without commentary.
<</SYS>>
<news article title/summary + metadata> [/INST]
- Output: Must be a single JSON object; downstream parsing stops at the first closing brace (a parsing sketch follows).
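That rule is straightforward to apply client-side. A minimal sketch, assuming the generation may trail extra tokens after the object (`extract_first_json` is an illustrative helper, not part of the repository):

import json

def extract_first_json(text: str) -> dict:
    # Keep everything from the first opening brace up to and including the
    # first closing brace, mirroring the parsing rule described above.
    # The schema is flat, so the first closing brace ends the object.
    start = text.find("{")
    end = text.find("}", start)
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(text[start:end + 1])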
Quick Start
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LeviDeHaan/SmolNewsAnalysis-002"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
system = (
    "You are a precise financial news analyst. Read the news text and output a compact JSON "
    "with fields: symbol, site, source_name, sentiment_score, sentiment_confidence, "
    "wow_score, wow_confidence. Output only the JSON without commentary."
)
article = "Tesla shares climb after deliveries beat expectations. Symbol: TSLA Site: bloomberg.com"
# Build the SmolLM-style chat prompt shown above.
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{article}\n<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=160, do_sample=True, temperature=0.1)
# Decode only the newly generated tokens (the model's JSON answer).
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
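Since the repository ships a `chat_template.jinja` that injects the system message when it is missing, the prompt can also be assembled via the tokenizer's built-in chat templating (a sketch; it assumes the bundled template matches the format shown above):

messages = [{"role": "user", "content": article}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=160, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))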
Training Data
- Source: Aggregated Alpaca/FMP news processed by `stock_news_processor.py` and exported through `extract_stock_json_news_training.py` to `training_data/news_data/stock_news_training.json`.
- Samples: 1,506 deduplicated instruction/response pairs (hash dedupe over title/summary + ticker + site).
- Wow distribution: Big News 645, Regular News 272, Bad News 253, Huge News 198, Meh News 120, Extremely Bad News 16 (plus 2 legacy "Bad News (negative but not catastrophic)" entries coerced to canonical values at inference time).
- Content: News titles, summaries, symbol/site metadata, and minified JSON outputs describing sentiment and impact (a hypothetical record is sketched below).
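For context, one exported instruction/response pair plausibly looks like this (a hypothetical record; the exact field names in `stock_news_training.json` are not documented here):

{
  "instruction": "Boeing cuts 737 output after new quality probe. Symbol: BA Site: reuters.com",
  "output": "{\"symbol\":\"BA\",\"site\":\"reuters.com\",\"source_name\":\"Reuters\",\"sentiment_score\":-0.6,\"sentiment_confidence\":0.85,\"wow_score\":\"Bad News\",\"wow_confidence\":0.8}"
}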
Training Procedure
- Framework: LLaMA-Factory (SmolLM2 template) + PEFT LoRA on bf16 accelerators.
- Hyperparameters: `learning_rate=5e-5`, `per_device_train_batch_size=2`, `gradient_accumulation_steps=8`, `num_train_epochs=10`, `cutoff_len=2048`, `lora_r=8`, `lora_alpha=16`, `lora_dropout=0`, `lr_scheduler_type=cosine_with_restarts`, `max_grad_norm=1.0`, `warmup_steps=0` (see the PEFT sketch after this list).
- Tokens seen: 8,757,824 (`num_input_tokens_seen` in `all_results.json`).
- Final train loss: 0.0925.
- Adapters: Merged into the base weights before export; no LoRA files are required at inference.
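For reproducing the adapter setup outside LLaMA-Factory, the LoRA hyperparameters above map onto a PEFT configuration roughly like this (a sketch; `target_modules` is an assumption, since the card does not state which projections were adapted):

from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                   # lora_r
    lora_alpha=16,                         # lora_alpha
    lora_dropout=0.0,                      # lora_dropout
    target_modules=["q_proj", "v_proj"],   # assumption: not documented on the card
    task_type="CAUSAL_LM",
)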
Known Limitations
- Domain specificity: Tuned on short news briefs; long-form articles may require additional summarization or truncation (`stock_news_processor.py` trims to 1800 chars).
- JSON adherence: Strong, but still validate output to guard against malformed fields (see the sketch after this list).
- Ticker coverage: Relies on upstream symbol detection; missing tickers yield blank `symbol` values.
- Wow taxonomy: Responses outside the canonical set default to Regular News via analyzer normalization.
- Market latency: The model reflects historical data only; no real-time awareness or price prediction.
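Those last points suggest light post-processing. A minimal validation-and-normalization sketch (helper names are illustrative; the repository's analyzer may differ):

CANONICAL_WOW = {"Extremely Bad News", "Bad News", "Meh News",
                 "Regular News", "Big News", "Huge News"}

def normalize(record: dict) -> dict:
    # Coerce out-of-taxonomy wow_score values to "Regular News",
    # matching the analyzer behavior described above.
    if record.get("wow_score") not in CANONICAL_WOW:
        record["wow_score"] = "Regular News"
    # Clamp numeric fields into their documented ranges.
    record["sentiment_score"] = max(-1.0, min(1.0, float(record.get("sentiment_score", 0.0))))
    for key in ("sentiment_confidence", "wow_confidence"):
        record[key] = max(0.0, min(1.0, float(record.get(key, 0.0))))
    return record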
License
- Model weights: Apache-2.0 (inherited from the base model).
Contact & Support
- Maintainer: Levi De Haan
- Project page: https://levidehaan.com/projects/twatter
- Hugging Face discussions: https://huggingface.co/LeviDeHaan/SmolNewsAnalysis-002/discussions