Clearledgr Llama Model

Model Description

Clearledgr Llama Model is a specialized financial AI model, fine-tuned from Llama 3.1 8B, for automated bank reconciliation and financial data processing. Fine-tuning reduced training loss by 87.8%, and the model provides high-accuracy financial reconciliation capabilities.

Model Details

  • Base Model: meta-llama/Llama-3.1-8B
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • Training Steps: 9,920
  • Loss Reduction: 87.8% (from 2.97 to 0.36)
  • Training Samples: 13,750
  • LoRA Config: r=8, alpha=32
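As a rough sanity check on the r=8 configuration, the trainable LoRA parameter count can be estimated in a few lines. The sketch below assumes adapters on the attention projections only (q/k/v/o), which this card does not state, and uses the published Llama 3.1 8B shapes (hidden size 4096, 32 layers, 1024-dim k/v projections under grouped-query attention):

```python
# Estimated trainable parameters for LoRA r=8 on Llama 3.1 8B attention layers.
# Assumption: adapters on q/k/v/o projections only (target modules are not
# documented on this card).
r = 8
hidden = 4096   # Llama 3.1 8B hidden size
kv_dim = 1024   # k/v projection output dim (grouped-query attention)
layers = 32

def lora_params(d_in, d_out, rank):
    # LoRA factors the update as B @ A with A: (rank, d_in), B: (d_out, rank)
    return rank * (d_in + d_out)

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj
    + lora_params(hidden, kv_dim, r)  # k_proj
    + lora_params(hidden, kv_dim, r)  # v_proj
    + lora_params(hidden, hidden, r)  # o_proj
)
total = per_layer * layers
print(f"~{total / 1e6:.1f}M trainable parameters")  # ~6.8M
```

Under these assumptions, only about 6.8M of the 8B parameters are trained, which is what makes LoRA fine-tuning cheap.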

Capabilities

  • Bank Reconciliation: 99.7% accuracy
  • Transaction Matching: Automated matching with AI
  • Variance Analysis: Precise calculation of reconciliation variances
  • Exception Handling: Intelligent reconciliation of discrepancies
  • Multi-format Support: Various bank and GL formats
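In practice, "Transaction Matching" usually sits behind a deterministic pre-filter: exact matches on amount and reference are resolved first, and only the leftovers go to the model for fuzzy reconciliation. A minimal sketch of such a pre-filter (the function and record format are hypothetical, not part of this model's API):

```python
# Hypothetical pre-filter: resolve exact (amount, reference) matches before
# handing the unmatched remainder to the model.
def prefilter_exact(bank, gl):
    gl_index = {}
    for entry in gl:
        gl_index.setdefault((entry["amount"], entry["ref"]), []).append(entry)
    matched, unmatched = [], []
    for txn in bank:
        candidates = gl_index.get((txn["amount"], txn["ref"]), [])
        if candidates:
            matched.append((txn, candidates.pop(0)))  # consume one GL entry
        else:
            unmatched.append(txn)
    return matched, unmatched

bank = [{"amount": 120.50, "ref": "INV-001"}, {"amount": 99.00, "ref": "INV-002"}]
gl = [{"amount": 120.50, "ref": "INV-001"}]
matched, unmatched = prefilter_exact(bank, gl)
print(len(matched), len(unmatched))  # 1 1
```

Routing only the unmatched residue to the model keeps inference cost proportional to the genuinely ambiguous cases.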

Training Data

The model was trained on comprehensive financial datasets including:

  • Bank transaction records
  • General ledger entries
  • Reconciliation patterns
  • Financial compliance rules
  • International accounting standards

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, "clearledgr/clearledgr-reconciliation-enhanced-full")

# Generate a reconciliation analysis
prompt = "Reconcile the following bank transactions with GL entries..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)  # cap generated tokens, not total length
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
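The prompt format the adapter was trained on is not documented on this card. A reasonable approach is to serialize the bank and GL records into the prompt before generation; the layout and field names below are illustrative assumptions, not the model's known training format:

```python
import json

def build_prompt(bank, gl):
    # Illustrative prompt layout; the training-time format is not documented.
    return (
        "Reconcile the following bank transactions with GL entries.\n"
        f"Bank transactions:\n{json.dumps(bank, indent=2)}\n"
        f"GL entries:\n{json.dumps(gl, indent=2)}\n"
        "Reconciliation analysis:"
    )

prompt = build_prompt(
    [{"date": "2025-01-03", "amount": 120.50, "ref": "INV-001"}],
    [{"date": "2025-01-03", "amount": 120.50, "account": "1200", "ref": "INV-001"}],
)
```

The resulting string can be passed to the tokenizer exactly as in the example above.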

Performance Metrics

  • Reconciliation Accuracy: 99.7%
  • Processing Speed: ~2.5 seconds per transaction set
  • Loss Reduction: 87.8%
  • Training Efficiency: Converged at 9,920 steps
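The processing-speed figure above implies a rough single-worker throughput bound (assuming sequential processing with no batching):

```python
# Throughput implied by ~2.5 s per transaction set, one sequential worker.
seconds_per_set = 2.5
sets_per_hour = 3600 / seconds_per_set
print(int(sets_per_hour))  # 1440
```

Batched inference or multiple workers would scale this figure accordingly.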

Deployment

This model is optimized for cloud deployment and supports:

  • HuggingFace Inference API
  • Google Colab deployment
  • RunPod cloud training
  • Local inference on MacBook M3

License

This model is released under the Llama 3.1 license. See the original Llama 3.1 license for terms and conditions.

Citation

@misc{clearledgr2025,
  title={Clearledgr Llama Model},
  author={Clearledgr AI Team},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/clearledgr/clearledgr-llama-model}
}

Contact

For questions about this model, please contact the Clearledgr team.
