OpenRouter: Perceptron: Perceptron Mk1 — AI Model

This model is no longer free. See free alternatives.
openrouter/perceptron-perceptron-mk1
Context: 33K · Released: May 12, 2026 · Modality: chat

Perceptron Mk1 (Mark One) is Perceptron's highest-quality vision-language model for video and embodied reasoning. It accepts image and video inputs paired with natural-language queries and produces detailed visual-understanding responses, either structured or in natural language. It excels at video-understanding tasks such as video QA, summarization, and event detection. On image inputs, it supports point-by-example grounding from multimodal prompts, OCR and document parsing on messy real-world inputs, open-vocabulary object detection and counting, and hand-pose estimation. Reasoning can be enabled per request to trade latency for deeper analysis on harder tasks. Structured annotations are emitted inline with text only when explicitly requested via the `annotation_format` parameter: pass `"point"`, `"box"`, or `"polygon"` for spatial localization on images, or `"clip"` (start/end timestamps) for temporal segments in video. Without `annotation_format`, the model returns natural-language text only.
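A request that asks for structured annotations can be sketched as below. This is a minimal example of building the JSON body only; it assumes `annotation_format` is passed as a top-level field of the chat-completions request (check OpenRouter's docs for the exact placement), and the image URL is a placeholder.

```python
import json

# Build (but do not send) a chat-completions body requesting box annotations.
# Assumption: `annotation_format` is a top-level request field.
body = {
    "model": "openrouter/perceptron-perceptron-mk1",
    "annotation_format": "box",  # "point" | "box" | "polygon" | "clip"
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Detect and count every traffic cone."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/street.jpg"},
                },
            ],
        }
    ],
}

payload = json.dumps(body)
```

Omitting `annotation_format` from the same body would yield a plain natural-language answer instead of inline annotations.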

Try Perceptron: Perceptron Mk1

Test this model directly in the playground.

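Outside the playground, the model can also be called directly through OpenRouter's OpenAI-compatible chat-completions endpoint. A minimal standard-library sketch, using the model slug shown above and the base URL used in the configs below; the actual network call is left commented out so the request can be inspected first:

```python
import json
import os
import urllib.request

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request to OpenRouter."""
    body = {
        "model": "openrouter/perceptron-perceptron-mk1",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("Describe this image in one sentence.")
    # Uncomment to actually send (requires a valid OPENROUTER_API_KEY):
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```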

One-Click Config for Claude Code, Cursor & Codex

Optimized configs for your favorite AI tools.

Claude Code

# Claude Code works via OpenRouter's Anthropic-compatible API.
# Note: Only paid Anthropic Claude models are supported (e.g. claude-sonnet-4.6, claude-opus-4).
# Browse available Claude models at: https://openrouter.ai/models?q=anthropic

# Add to ~/.zshrc or ~/.bashrc
export OPENROUTER_API_KEY="<your-openrouter-api-key>"  # Get at https://openrouter.ai/settings/keys
export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
export ANTHROPIC_API_KEY=""  # Must be explicitly empty to avoid conflicts

# Optional: pin specific models for each role
# export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic/claude-sonnet-4.6"
# export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic/claude-haiku-4.5"

# Then simply run: claude

Cursor

# Cursor → Settings (⚙️) → Models → Add Model
# Enter the model name exactly as shown, then fill in:
#   Override OpenAI Base URL: https://openrouter.ai/api/v1
#   OpenAI API Key: <your-api-key>   # Get at https://openrouter.ai/workspaces/default/keys
# Click "Verify" to confirm the connection, then enable the model.
#
# Model name to add: openrouter/perceptron-perceptron-mk1

Codex

# Add to ~/.zshrc or ~/.bashrc
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"
export OPENAI_API_KEY="<your-api-key>"  # Get at https://openrouter.ai/workspaces/default/keys

# Then run:
codex --model "openrouter/perceptron-perceptron-mk1"

Gemini CLI

# ~/.gemini/settings.json
{
  "apiKey": "<your-api-key>",
  "model": "openrouter/perceptron-perceptron-mk1"
}
# Get API key at https://openrouter.ai/workspaces/default/keys

OpenCode

// ~/.config/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "free-llm": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Free LLM",
      "options": {
        "baseURL": "https://openrouter.ai/api/v1",
        "apiKey": "<your-api-key>"
      },
      "models": {
        "openrouter/perceptron-perceptron-mk1": { "name": "Perceptron: Perceptron Mk1" }
      }
    }
  }
}
// Get API key at https://openrouter.ai/workspaces/default/keys

Hermes

# Step 1 — Edit config.yaml
# Windows: C:\Users\<you>\AppData\Local\hermes\config.yaml
# macOS/Linux: ~/.config/hermes/config.yaml

model:
  default: "openrouter/perceptron-perceptron-mk1"
  provider: custom
  base_url: ${CUSTOM_BASE_URL}
  api_key: ${CUSTOM_API_KEY}
  model_aliases:
    perceptron-mk1:
      model: "openrouter/perceptron-perceptron-mk1"
      provider: "custom"

# Step 2 — Edit .env (same directory as config.yaml)
# Windows: C:\Users\<you>\AppData\Local\hermes\.env
# macOS/Linux: ~/.config/hermes/.env

# ========================
# Custom API (OpenAI-compatible)
# ========================
CUSTOM_API_KEY=<your-api-key>        # Get at https://openrouter.ai/workspaces/default/keys
CUSTOM_BASE_URL=https://openrouter.ai/api/v1

OpenClaw

// ~/.openclaw/openclaw.json  (JSON5 format)
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/perceptron-perceptron-mk1",
      },
    },
  },
  "models": {
    "providers": {
      // Option A — Built-in provider (OpenAI, Anthropic, Google…)
      // Just add apiKey; OpenClaw handles the baseUrl automatically
      // "openai": { "apiKey": "<your-api-key>" },

      // Option B — Custom OpenAI-compatible base URL (e.g. OpenRouter, NVIDIA)
      "free-llm": {
        "baseUrl": "https://openrouter.ai/api/v1",
        "apiKey": "<your-api-key>",  // Get at https://openrouter.ai/workspaces/default/keys
        "api": "openai-completions", // openai-completions | anthropic-messages | …
        "models": [
          { "id": "openrouter/perceptron-perceptron-mk1", "name": "Perceptron: Perceptron Mk1" },
        ],
      },
    },
  },
}
// Apply: openclaw gateway restart
// Verify: openclaw doctor --fix
See our FAQ for common questions about free LLM APIs.