Collections including paper arxiv:2211.11363

- Grokking in the Wild: Data Augmentation for Real-World Multi-Hop Reasoning with Transformers
  Paper • 2504.20752 • Published • 92
- Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language Models in Math
  Paper • 2504.21233 • Published • 49
- AF Adapter: Continual Pretraining for Building Chinese Biomedical Language Model
  Paper • 2211.11363 • Published • 1
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
  Paper • 2405.12130 • Published • 50

- CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization
  Paper • 2310.10134 • Published • 1
- TiC-CLIP: Continual Training of CLIP Models
  Paper • 2310.16226 • Published • 9
- In-Context Pretraining: Language Modeling Beyond Document Boundaries
  Paper • 2310.10638 • Published • 30
- Controlled Decoding from Language Models
  Paper • 2310.17022 • Published • 15

- Moral Foundations of Large Language Models
  Paper • 2310.15337 • Published • 1
- Specific versus General Principles for Constitutional AI
  Paper • 2310.13798 • Published • 3
- Contrastive Preference Learning: Learning from Human Feedback without RL
  Paper • 2310.13639 • Published • 25
- RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
  Paper • 2309.00267 • Published • 52

- MedS^3: Towards Medical Small Language Models with Self-Evolved Slow Thinking
  Paper • 2501.12051 • Published
- Bridging Language Barriers in Healthcare: A Study on Arabic LLMs
  Paper • 2501.09825 • Published • 14
- Exploring the Inquiry-Diagnosis Relationship with Advanced Patient Simulators
  Paper • 2501.09484 • Published • 19
- BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature
  Paper • 2501.07171 • Published • 55

- LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
  Paper • 2310.08659 • Published • 28
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
  Paper • 2309.14717 • Published • 45
- ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
  Paper • 2309.16119 • Published • 1
- LoRA ensembles for large language model fine-tuning
  Paper • 2310.00035 • Published • 2