andy-4.2 banner

The Mindcraft CE team introduces Andy-4.2, our best local AI yet for playing Minecraft. It thinks faster than Andy-4.1, carries out more actions than Andy-4, and rivals models 10x its size.

Key Innovations

Andy-4.2 uses largely the same formula as Andy-4.1, but introduces a new architecture from the Qwen3.5 series which makes the model not only smarter, but more efficient. Using Gated Deltanet attention allows Andy-4.2 to run on a single RTX 3090, with 256k tokens of context, at a staggering 8-bit quantization.
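As a rough sanity check on the single-RTX-3090 claim, the arithmetic for weight memory at different quantizations can be sketched as follows. This is a back-of-the-envelope illustration under the assumption that weights dominate VRAM usage, not official Andy-4.2 figures:

```python
# Back-of-the-envelope VRAM estimate for a 9B-parameter model.
# Illustrative assumptions only, not published Andy-4.2 numbers;
# KV cache and activations add overhead on top of these figures.
PARAMS = 9e9           # 9B parameters
RTX_3090_VRAM_GB = 24  # RTX 3090 memory

def weight_memory_gb(params: float, bits_per_param: int) -> float:
    """Memory taken by the model weights alone at a given quantization."""
    return params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    gb = weight_memory_gb(PARAMS, bits)
    fits = "fits" if gb < RTX_3090_VRAM_GB else "does not fit"
    print(f"{bits}-bit: ~{gb:.1f} GB of weights ({fits} in 24 GB)")
```

At 8-bit, the weights alone come to roughly 9 GB, which leaves meaningful headroom on a 24 GB card for long-context state.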

Andy-4.2 is also the first local model capable of obtaining a full set of diamond armour with zero human interaction.

andy-4.2 with diamond armour

Like Andy-4.1, Andy-4.2 has vision capabilities, and has a stronger multimodal base that allows for even deeper comprehension of the game state.

How to Run

We still recommend running Andy-4.2 with LM Studio. We tried Ollama, and there were a plethora of issues, including looping and mismatched chat templates.

Below are the recommended sampling parameters for Andy-4.2. The default settings in LM Studio also work well, and with them the model is still able to get full diamond armour by itself.

Name             Value
Temperature      0.6
Repeat Penalty   1
Top P Sampling   0.95
Min P Sampling   0
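LM Studio exposes an OpenAI-compatible local server, so these parameters can be set per request. A minimal sketch of the request body (the port, endpoint, and model identifier are assumptions; adjust them to your local setup):

```python
import json

# Recommended Andy-4.2 sampling parameters expressed as an
# OpenAI-compatible chat request body, e.g. for LM Studio's local
# server (assumed default: http://localhost:1234/v1/chat/completions).
request_body = {
    "model": "andy-4.2",  # placeholder identifier for your loaded model
    "messages": [
        {"role": "user", "content": "Craft a stone pickaxe."},
    ],
    "temperature": 0.6,
    "top_p": 0.95,
    "min_p": 0.0,          # non-standard field; support varies by server
    "repeat_penalty": 1.0, # non-standard field; support varies by server
}

print(json.dumps(request_body, indent=2))
```

Send this body with any HTTP client to the server's chat-completions endpoint; fields the server does not recognise are typically ignored.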

Model Specifications

  • Size: 9B Parameters
  • Architecture: Qwen3.5
  • Context Length: Up to 1 million tokens
  • Message Count: 120 messages stable
  • CoT Style: DeepSeek-R1

Training Specifications

  • Hardware: 1x RTX 3090
  • Training Time: 5 Hours
  • Dataset Size: 2,748 examples
  • Learning Rate: 2e-5
  • LR Scheduler: cosine
  • Epoch Count: 1 Epoch
  • Training Quantization: 4-bit QLoRA with 8-bit QAT
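The listed hyperparameters map naturally onto a standard QLoRA fine-tuning setup. As an illustrative sketch only (Axolotl-style keys are shown as an assumption; the actual training stack, LoRA rank, and alpha are not stated in this card):

```yaml
# Hypothetical QLoRA config mirroring the specs above (Axolotl-style).
base_model: Qwen/Qwen3.5-9B
load_in_4bit: true        # 4-bit QLoRA
adapter: qlora
lora_r: 16                # assumption; rank not published
lora_alpha: 32            # assumption; alpha not published
learning_rate: 2.0e-5
lr_scheduler: cosine
num_epochs: 1
```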

Testing

The testing for Andy-4.2 was done at 8-bit quantization, in order to verify whether QAT (Quantization-Aware Training) had helped preserve the model's capabilities after compression.

Andy-4.2 ran with the following configuration during testing:

  • Mindcraft-CE version 1.2.7
  • 8-bit Quantization
  • 8-bit KV Cache quantization
  • Base LM Studio sampling parameters
  • 32,000 Context Length
  • 65 Messages
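For reference, pointing Mindcraft-CE at a locally served Andy-4.2 is done through a bot profile. A hedged sketch (the field names follow Mindcraft's profile format as commonly documented, and the URL and model name are local-setup assumptions):

```json
{
  "name": "andy",
  "model": {
    "api": "openai",
    "url": "http://localhost:1234/v1",
    "model": "andy-4.2"
  }
}
```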

Limitations

Even though Andy-4.2 is capable of incredible feats, there is one domain where it does not perform well: building. During internal testing, any time Andy-4.2 used !newAction, it would produce thousands of tokens but never do anything. We do not advise using Andy-4.2 as your code model.

Apart from that, Andy-4.2 has proven to be our most hard-working model yet, and it navigates potential errors very well.

What's Next?

Based on the lessons from Andy-4.2, the Mindcraft team is prepared to collect better training data, explore new architectures that make Andy models cheaper to run, and pack more brains into these tiny minds.

Licenses and Notices

Like all other Andy models, Andy-4.2 is released under the Andy license. While generally permissive, it contains qualifiers as to what constitutes an "Andy" class model.

See Andy 2.0 License.

This work uses data and models created by @Sweaterdog.


Model tree for Mindcraft-CE/Andy-4.2

Finetuned from Qwen/Qwen3.5-9B.