canirun.ai — NexusFleet

Which AI models can each server actually run? Live hardware stats.

main-claw

IP: 10.10.10.1
CPU: 12 threads (6 cores)
RAM: 63 GB
GPU: none
CPU model: AMD Ryzen 5 3600 6-Core Processor
Fit summary: 5 GREAT · 5 WELL · 2 DECENT · 0 TIGHT · 0 BARELY
Models checked (12 of 15 fit)
Model                Size    RAM needed  Fit     Notes
TinyLlama 1.1B       0.7GB   2GB         GREAT   Ultra-fast, basic chat
Qwen2.5 0.5B         0.4GB   1GB         GREAT   Tiny, fast
Phi-3 Mini 3.8B      2.4GB   6GB         GREAT   Microsoft, balanced
Llama 3.2 3B         2GB     4GB         GREAT   Meta, efficient
Gemma 2 2B           1.6GB   4GB         GREAT   Google, compact
Qwen2.5 7B           4.7GB   8GB         WELL    Strong general use
Llama 3.1 8B         4.9GB   10GB        WELL    Meta flagship 8B
Mistral 7B           4.4GB   8GB         WELL    Mistral baseline
DeepSeek Coder 6.7B  3.8GB   8GB         WELL    Code specialist
Gemma 3 12B          7.5GB   16GB        WELL    Carlos uses this
Mixtral 8x7B         26GB    48GB        DECENT  MoE, slow on CPU
Qwen2.5 32B          19GB    40GB        DECENT  Large general
Llama 3.3 70B        40GB    64GB        NO      RAM too low
DeepSeek V3 671B     400GB   1500GB      NO      RAM too low
Llama 4 Scout 109B   65GB    128GB       NO      RAM too low
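The fit ratings above follow from comparing each model's RAM requirement against the server's 63 GB: anything needing more than 63 GB is NO (hence Llama 3.3 70B at 64 GB), and smaller ratios earn better grades. A minimal sketch of such a classifier, with assumed threshold values (the site's actual cutoffs are not published):

```python
# Hypothetical fit classifier. The ratio thresholds below are
# assumptions chosen to reproduce this page's ratings, not
# canirun.ai's actual rules.
def classify_fit(required_gb: float, server_gb: float) -> str:
    if required_gb > server_gb:
        return "NO"            # model cannot fit at all
    ratio = required_gb / server_gb
    if ratio < 0.10:
        return "GREAT"         # plenty of headroom
    if ratio < 0.30:
        return "WELL"
    if ratio <= 0.80:
        return "DECENT"
    if ratio <= 0.90:
        return "TIGHT"
    return "BARELY"            # fits, but almost no headroom

SERVER_RAM_GB = 63  # main-claw / claw-child / claw-childone

# A few rows from the table above: (model, RAM needed in GB)
for name, ram in [("Llama 3.2 3B", 4), ("Gemma 3 12B", 16),
                  ("Mixtral 8x7B", 48), ("Llama 3.3 70B", 64)]:
    print(name, classify_fit(ram, SERVER_RAM_GB))
```

With these thresholds the sketch reproduces the table: 4/63 ≈ 0.06 → GREAT, 16/63 ≈ 0.25 → WELL, 48/63 ≈ 0.76 → DECENT, and 64 GB > 63 GB → NO.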

claw-child

IP: 10.10.10.4
CPU: 12 threads (6 cores)
RAM: 63 GB
GPU: none
CPU model: AMD Ryzen 5 3600 6-Core Processor
Fit summary: 5 GREAT · 5 WELL · 2 DECENT · 0 TIGHT · 0 BARELY
Models checked (12 of 15 fit)
Model                Size    RAM needed  Fit     Notes
TinyLlama 1.1B       0.7GB   2GB         GREAT   Ultra-fast, basic chat
Qwen2.5 0.5B         0.4GB   1GB         GREAT   Tiny, fast
Phi-3 Mini 3.8B      2.4GB   6GB         GREAT   Microsoft, balanced
Llama 3.2 3B         2GB     4GB         GREAT   Meta, efficient
Gemma 2 2B           1.6GB   4GB         GREAT   Google, compact
Qwen2.5 7B           4.7GB   8GB         WELL    Strong general use
Llama 3.1 8B         4.9GB   10GB        WELL    Meta flagship 8B
Mistral 7B           4.4GB   8GB         WELL    Mistral baseline
DeepSeek Coder 6.7B  3.8GB   8GB         WELL    Code specialist
Gemma 3 12B          7.5GB   16GB        WELL    Carlos uses this
Mixtral 8x7B         26GB    48GB        DECENT  MoE, slow on CPU
Qwen2.5 32B          19GB    40GB        DECENT  Large general
Llama 3.3 70B        40GB    64GB        NO      RAM too low
DeepSeek V3 671B     400GB   1500GB      NO      RAM too low
Llama 4 Scout 109B   65GB    128GB       NO      RAM too low

claw-childone

IP: 10.10.10.5
CPU: 12 threads (6 cores)
RAM: 63 GB
GPU: none
CPU model: AMD Ryzen 5 3600 6-Core Processor
Fit summary: 5 GREAT · 5 WELL · 2 DECENT · 0 TIGHT · 0 BARELY
Models checked (12 of 15 fit)
Model                Size    RAM needed  Fit     Notes
TinyLlama 1.1B       0.7GB   2GB         GREAT   Ultra-fast, basic chat
Qwen2.5 0.5B         0.4GB   1GB         GREAT   Tiny, fast
Phi-3 Mini 3.8B      2.4GB   6GB         GREAT   Microsoft, balanced
Llama 3.2 3B         2GB     4GB         GREAT   Meta, efficient
Gemma 2 2B           1.6GB   4GB         GREAT   Google, compact
Qwen2.5 7B           4.7GB   8GB         WELL    Strong general use
Llama 3.1 8B         4.9GB   10GB        WELL    Meta flagship 8B
Mistral 7B           4.4GB   8GB         WELL    Mistral baseline
DeepSeek Coder 6.7B  3.8GB   8GB         WELL    Code specialist
Gemma 3 12B          7.5GB   16GB        WELL    Carlos uses this
Mixtral 8x7B         26GB    48GB        DECENT  MoE, slow on CPU
Qwen2.5 32B          19GB    40GB        DECENT  Large general
Llama 3.3 70B        40GB    64GB        NO      RAM too low
DeepSeek V3 671B     400GB   1500GB      NO      RAM too low
Llama 4 Scout 109B   65GB    128GB       NO      RAM too low

Updated: 04/05/2026, 21:30:51