    DeepSeek: R1 Distill Qwen 14B

    deepseek/deepseek-r1-distill-qwen-14b

    Created Jan 29, 2025 · 32,768 context
    $0.12/M input tokens · $0.12/M output tokens

    DeepSeek R1 Distill Qwen 14B is a distilled large language model based on Qwen 2.5 14B, using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

    Other benchmark results include:

    • AIME 2024 pass@1: 69.7
    • MATH-500 pass@1: 93.9
    • CodeForces Rating: 1481

    The model is fine-tuned on outputs from DeepSeek R1, enabling performance comparable to much larger frontier models.
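
    For reference, the sketch below shows one way to query this model through OpenRouter's OpenAI-compatible chat completions endpoint using Python and the requests library; it is a minimal illustration, assuming an API key is stored in the OPENROUTER_API_KEY environment variable, and the prompt text is a placeholder.

        # Minimal sketch: calling deepseek/deepseek-r1-distill-qwen-14b via
        # OpenRouter's OpenAI-compatible chat completions endpoint.
        # Assumes an API key in the OPENROUTER_API_KEY environment variable.
        import os
        import requests

        response = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            json={
                "model": "deepseek/deepseek-r1-distill-qwen-14b",
                "messages": [
                    {"role": "user", "content": "What is the sum of the first 50 odd numbers?"}
                ],
            },
            timeout=120,
        )
        response.raise_for_status()
        data = response.json()
        print(data["choices"][0]["message"]["content"])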

    Recent activity on R1 Distill Qwen 14B

    Total usage per day on OpenRouter

    • Reasoning tokens: 12.2M
    • Prompt tokens: 4.84M
    • Completion tokens: 4.11M

    Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
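
    The sketch below illustrates how these token counts translate into cost at this model's listed rate of $0.12 per million tokens for both input and output. The nested completion_tokens_details.reasoning_tokens field and the example usage numbers are assumptions for illustration; reasoning tokens are generally counted toward output.

        # Illustrative sketch: summarizing token usage from a chat completion
        # response and estimating cost at $0.12 per million tokens (the same
        # rate for input and output on this model). The
        # "completion_tokens_details.reasoning_tokens" field is an assumed
        # response layout, not a documented guarantee.
        PRICE_PER_M_TOKENS = 0.12  # USD per million tokens


        def summarize_usage(response: dict) -> str:
            usage = response.get("usage", {})
            prompt = usage.get("prompt_tokens", 0)
            completion = usage.get("completion_tokens", 0)
            reasoning = usage.get("completion_tokens_details", {}).get("reasoning_tokens", 0)
            cost = (prompt + completion) / 1_000_000 * PRICE_PER_M_TOKENS
            return (f"prompt={prompt} completion={completion} "
                    f"(reasoning={reasoning}) est. cost=${cost:.6f}")


        # Example with hypothetical usage numbers:
        print(summarize_usage({
            "usage": {
                "prompt_tokens": 120,
                "completion_tokens": 950,
                "completion_tokens_details": {"reasoning_tokens": 700},
            }
        }))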