Post 4011: I am very sad to say that the budget for creating the SnowflakeCore-G1 1B and 7B MoE models has run out, and I can't pre-train them anymore.
Post 457: Training for SnowflakeCore-G1-1B and 7B will resume, because I have now implemented DeepSpeed and managed to use two GPUs.
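A minimal sketch of what such a two-GPU DeepSpeed setup might look like. The post does not share the actual configuration, so the batch size, precision, and ZeRO stage below are illustrative assumptions, not the real training setup:

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 4,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2
  }
}
```

With a config file like this saved as `ds_config.json`, a training script (here a hypothetical `train.py`) would typically be launched across both GPUs with `deepspeed --num_gpus=2 train.py --deepspeed ds_config.json`.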
i3-architecture
Note: The models are listed in the default order set by Hugging Face, so the latest model appears at the bottom.
- FlameF0X/i3-Series 🐢 — Chat with the i3 model series (Space)
- FlameF0X/i3-tiny — Text Generation • 711k • Updated Oct 17 • 23 • 1
- FlameF0X/i3-12m — Text Generation • 12.7M • Updated Oct 23 • 38 • 3
- FlameF0X/i3-22m — Text Generation • 22.6M • Updated Oct 31 • 21 • 2
Reinforcement Learning
All the RL agents I made:
- FlameF0X/o2 — Reinforcement Learning • Updated Jul 10
- FlameF0X/CanoPy — Reinforcement Learning • Updated Sep 5