Compare Free LLM APIs
Select up to 6 models to compare capabilities, limits, and performance side by side.
OpenRouter
   262K ctx  inclusionAI: Ring-2.6-1T (free)
   131K ctx  Baidu Qianfan: CoBuddy (free)
   1.0M ctx  Owl Alpha
   256K ctx  NVIDIA: Nemotron 3 Nano Omni (free)
   131K ctx  Poolside: Laguna XS.2 (free)
   131K ctx  Poolside: Laguna M.1 (free)
    66K ctx  Baidu: Qianfan-OCR-Fast (free)
   262K ctx  Google: Gemma 4 26B A4B (free)
   262K ctx  Google: Gemma 4 31B (free)
   1.0M ctx  Google: Lyria 3 Pro Preview
   1.0M ctx  Google: Lyria 3 Clip Preview
   262K ctx  NVIDIA: Nemotron 3 Super (free)
   197K ctx  MiniMax: MiniMax M2.5 (free)
   200K ctx  Free Models Router
    33K ctx  LiquidAI: LFM2.5-1.2B-Thinking (free)
    33K ctx  LiquidAI: LFM2.5-1.2B-Instruct (free)
   256K ctx  NVIDIA: Nemotron 3 Nano 30B A3B (free)
   128K ctx  NVIDIA: Nemotron Nano 12B 2 VL (free)
   262K ctx  Qwen: Qwen3 Next 80B A3B Instruct (free)
   128K ctx  NVIDIA: Nemotron Nano 9B V2 (free)
   131K ctx  OpenAI: gpt-oss-120b (free)
   131K ctx  OpenAI: gpt-oss-20b (free)
   131K ctx  Z.ai: GLM 4.5 Air (free)
   262K ctx  Qwen: Qwen3 Coder 480B A35B (free)
    33K ctx  Venice: Uncensored (free)
    66K ctx  Meta: Llama 3.3 70B Instruct (free)
   131K ctx  Meta: Llama 3.2 3B Instruct (free)
   131K ctx  Nous: Hermes 3 405B Instruct (free)
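Most of the providers in this gallery expose an OpenAI-compatible chat-completions endpoint. As one illustration, a minimal sketch of building a request to OpenRouter's free tier, using only the standard library; the model id targets the free Llama 3.3 70B listing above, and the API key is a placeholder you must supply yourself:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for OpenRouter."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Target one of the free listings above; ":free" is OpenRouter's free-variant suffix.
req = build_chat_request(
    "meta-llama/llama-3.3-70b-instruct:free",
    "Say hello in one word.",
    "YOUR_API_KEY",  # placeholder; substitute a real key
)
# urllib.request.urlopen(req) would send it; left out here to avoid a live call.
```

The same request shape works against the other OpenAI-compatible endpoints listed below (Groq, Cerebras, NVIDIA NIM, etc.) once the base URL and model id are swapped.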
Cohere
   256K ctx  Command A (111B)
   128K ctx  Command R+
   128K ctx  Command R7B
   131K ctx  Embed 4
   131K ctx  Rerank 3.5
Google Gemini
   1.0M ctx  Gemini 2.5 Flash
   1.0M ctx  Gemini 2.5 Flash-Lite
Mistral AI
   256K ctx  Mistral Small 4
   128K ctx  Mistral Medium 3
   256K ctx  Mistral Large 3
   128K ctx  Mistral Nemo (12B)
   256K ctx  Codestral
   128K ctx  Pixtral Large
Z AI (Zhipu AI)
   200K ctx  GLM-4.7-Flash
   128K ctx  GLM-4.5-Flash
   128K ctx  GLM-4.6V-Flash
Cerebras
   128K ctx  llama3.1-8b
   128K ctx  gpt-oss-120b
   131K ctx  qwen-3-235b-a22b-instruct-2507
   128K ctx  zai-glm-4.7
Cloudflare Workers AI
   131K ctx  @cf/meta/llama-3.3-70b-instruct-fp8-fast
   131K ctx  @cf/meta/llama-3.1-8b-instruct-fp8-fast
   131K ctx  @cf/meta/llama-3.2-11b-vision-instruct
  10.0M ctx  @cf/meta/llama-4-scout-17b-16e-instruct
   128K ctx  @cf/mistralai/mistral-small-3.1-24b-instruct
   256K ctx  @cf/google/gemma-4-26b-a4b-it
    32K ctx  @cf/qwen/qwq-32b
    32K ctx  @cf/deepseek-ai/deepseek-r1-distill-qwen-32b
GitHub Models
   1.0M ctx  gpt-4.1
   1.0M ctx  gpt-4.1-mini
   128K ctx  gpt-4o
   200K ctx  o3-mini
   200K ctx  o4-mini
   512K ctx  Llama-4-Scout-17B-16E
   256K ctx  Llama-4-Maverick-17B-128E
   131K ctx  Meta-Llama-3.3-70B
   128K ctx  Mistral-Small-3.1
Groq
   131K ctx  llama-3.3-70b-versatile
   131K ctx  llama-3.1-8b-instant
   131K ctx  llama-4-scout-17b-16e-instruct
   131K ctx  llama-4-maverick-17b-128e-instruct
   131K ctx  qwen3-32b
   262K ctx  kimi-k2-instruct
   131K ctx  deepseek-r1-distill-70b
   131K ctx  whisper-large-v3
   131K ctx  whisper-large-v3-turbo
Hugging Face
    32K ctx  Mistral-7B-Instruct-v0.3
    32K ctx  Mixtral-8x7B-Instruct-v0.1
   128K ctx  Phi-3.5-mini-instruct
   131K ctx  Qwen2.5-7B-Instruct
Kilo Code
   131K ctx  bytedance-seed/dola-seed-2.0-pro:free
   131K ctx  x-ai/grok-code-fast-1:optimized:free
   262K ctx  nvidia/nemotron-3-super-120b-a12b:free
   131K ctx  arcee-ai/trinity-large-thinking:free
   131K ctx  openrouter/free
LLM7.io
   131K ctx  deepseek-v3-0324
   131K ctx  gpt-4o-mini
    32K ctx  mistral-small-3.1-24b
   131K ctx  qwen2.5-coder-32b
ModelScope
   131K ctx  Qwen/Qwen-Image
Ollama Cloud
   128K ctx  llama3.1:cloud
   128K ctx  deepseek-r1:cloud
   128K ctx  qwen2.5:cloud
     8K ctx  gemma2:cloud
    32K ctx  mistral:cloud
OVHcloud AI Endpoints
   131K ctx  Meta-Llama-3_3-70B-Instruct
   262K ctx  Qwen3-Coder-30B-A3B-Instruct
   128K ctx  Qwen2.5-VL-72B-Instruct
   128K ctx  Mistral-Nemo-Instruct-2407
    32K ctx  Qwen3Guard-Gen-8B
    32K ctx  Qwen3Guard-Gen-0.6B
SiliconFlow
    33K ctx  deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
   131K ctx  deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
    32K ctx  THUDM/glm-4-9b-chat
    66K ctx  THUDM/GLM-4.1V-9B-Thinking
   131K ctx  deepseek-ai/DeepSeek-OCR
NVIDIA NIM
   1.0M ctx  deepseek-ai/deepseek-v4-flash
   1.0M ctx  deepseek-ai/deepseek-v4-pro
   131K ctx  google/gemma-3-12b-it
   131K ctx  google/gemma-3-27b-it
   131K ctx  google/gemma-3-4b-it
    33K ctx  google/gemma-3n-e4b-it
   131K ctx  google/gemma-4-31b-it
   131K ctx  meta/llama-3.2-3b-instruct
   197K ctx  minimaxai/minimax-m2.5
   197K ctx  minimaxai/minimax-m2.7
   131K ctx  mistralai/mistral-large
   131K ctx  mistralai/mistral-large-2-instruct
   131K ctx  nvidia/llama-3.1-nemotron-ultra-253b-v1
   262K ctx  nvidia/nemotron-3-nano-30b-a3b
   256K ctx  nvidia/nemotron-3-nano-omni-30b-a3b-reasoning
   262K ctx  nvidia/nemotron-3-super-120b-a12b
   128K ctx  nvidia/nemotron-nano-12b-v2-vl
   131K ctx  openai/gpt-oss-120b
   131K ctx  openai/gpt-oss-20b
   262K ctx  qwen/qwen3-next-80b-a3b-instruct
   262K ctx  stepfun-ai/step-3.5-flash
OpenRouter
   131K ctx  NVIDIA: Llama Nemotron Embed VL 1B V2 (free)
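Since the point of the gallery is comparing limits side by side, one way to do the same thing programmatically is to rank entries by context window. A minimal sketch with a few (provider, model, context-tokens) rows hand-copied from the listings above; the tuples are illustrative, not an exported dataset:

```python
# A handful of rows copied by hand from the gallery above.
MODELS = [
    ("Groq", "llama-3.3-70b-versatile", 131_000),
    ("Cerebras", "llama3.1-8b", 128_000),
    ("Google Gemini", "Gemini 2.5 Flash", 1_000_000),
    ("Cloudflare Workers AI", "@cf/meta/llama-4-scout-17b-16e-instruct", 10_000_000),
]

def rank_by_context(rows):
    """Return rows sorted largest-context-first, for a quick side-by-side view."""
    return sorted(rows, key=lambda row: row[2], reverse=True)

for provider, model, ctx in rank_by_context(MODELS):
    print(f"{ctx:>10,}  {provider:<22} {model}")
```

Extending `MODELS` with more rows from the gallery turns this into a rough offline version of the comparison table.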
Side-by-Side Comparison
Select two to four models from the gallery above to compare them side by side.