A lightweight, efficient instruction-tuned Llama 3.2 model with 3 billion parameters
Artificial Analysis Quality Index; Higher is better
Output Tokens per Second; Higher is better
USD per 1M Tokens; Lower is better
Seconds to First Token Chunk Received; Lower is better
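The speed and latency metrics above can be measured directly from a streaming response. Below is a minimal sketch, assuming an OpenAI-compatible endpoint and the official openai Python client; the base_url, api_key, and model name are placeholders, and the token count is approximated by whitespace splitting rather than the model's tokenizer.

```python
# Sketch: measuring time-to-first-token and output tokens/second
# against an OpenAI-compatible streaming endpoint (placeholder values).
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="placeholder")

start = time.perf_counter()
first_token_at = None
chunks = []

stream = client.chat.completions.create(
    model="llama-3.2-3b-instruct",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Explain KV caching in one paragraph."}],
    stream=True,
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        if first_token_at is None:
            # Latency: seconds until the first token chunk is received
            first_token_at = time.perf_counter()
        chunks.append(delta)

end = time.perf_counter()
text = "".join(chunks)

# Rough token count; a precise measurement would use the model's tokenizer.
approx_tokens = len(text.split())
ttft = first_token_at - start
tps = approx_tokens / (end - first_token_at) if end > first_token_at else float("nan")

print(f"Time to first token: {ttft:.2f} s")
print(f"Output speed: {tps:.1f} tokens/s (approximate)")
```

Price per 1M tokens is not measured this way; it is taken from the provider's published rates (often blended across input and output tokens).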