Popular · Coding · MIT

DeepSeek Coder V2

by DeepSeek

Aug 15, 2025 · 128K context · $0.07/M input · $0.29/M output

DeepSeek Coder V2 is a 236B-parameter Mixture-of-Experts model that activates 21B parameters per token. Trained on 2T tokens of code, it achieves 65.8% on SWE-bench, 90.2% on HumanEval, and 88.4% on MBPP.

View on HuggingFace
Fleek Pricing
$0.0025/GPU-second
Context: 128K tokens
Estimated Token Cost
Input: $0.07/M
Output: $0.29/M
Based on 25,000 tokens/sec
vs Competitors: Save 70%
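The per-token estimates above can be sanity-checked against the GPU-second rate and the listed throughput. Assuming every GPU-second yields the full 25,000 billable tokens (an idealized assumption), the blended floor works out to about $0.10/M, which the listed $0.07/M input and $0.29/M output rates bracket:

```python
# Rough sanity check of GPU-second pricing vs. per-token cost.
# Figures taken from the listing: $0.0025/GPU-second at ~25,000 tokens/sec.
gpu_sec_price = 0.0025   # dollars per GPU-second
throughput = 25_000      # tokens per second (listed inference speed)

cost_per_token = gpu_sec_price / throughput   # dollars per token
cost_per_million = cost_per_token * 1e6       # dollars per million tokens
print(f"${cost_per_million:.2f}/M tokens (blended)")  # → $0.10/M tokens (blended)
```

The exact input/output split around that blended rate is not stated on the page.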

Overview

Parameters

236B (MoE)

Architecture

Mixture of Experts

Context

128K

Provider

DeepSeek

Best For

Complex refactoring · Architecture design · Multi-file reasoning · Code review

OpenAI Compatible

Drop-in replacement for OpenAI API. Just change the base URL.
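As a sketch of what "just change the base URL" means in practice: the request below has the standard OpenAI `/chat/completions` shape, and only the endpoint differs. The base URL and model id here are illustrative assumptions, not documented Fleek values.

```python
import json
import urllib.request

# Drop-in OpenAI compatibility: the payload is the standard
# /chat/completions request body; only the base URL changes.
BASE_URL = "https://api.fleek.example/v1"  # hypothetical endpoint
API_KEY = "YOUR_FLEEK_API_KEY"

def build_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "deepseek-coder-v2",  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Write a function that reverses a string.")
```

Any OpenAI SDK that accepts a `base_url` override would work the same way; no request or response fields change.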

Pay Per Second

Only pay for actual GPU compute time. No idle costs.

Enterprise Ready

99.9% uptime SLA, SOC 2 compliant, dedicated support.

Auto Scaling

Scales from zero to thousands of requests automatically.

Calculate Your Savings

See how much you'd save running DeepSeek Coder V2 on Fleek

DeepSeek Coder V2
Your Fleek Cost: $29-50/mo (11.4K-20.0K GPU-sec × $0.0025)
Fireworks AI: $230/mo
Your Savings: 70%
Annual Savings: $2.3K
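Under the stated assumptions (11.4K-20.0K GPU-seconds of monthly usage at $0.0025/GPU-second, against a quoted $230/mo competitor price), the calculator's figures can be reproduced; the 70% and $2.3K headline numbers are rounded marketing figures, with the annual savings matching the midpoint of the usage range:

```python
# Reproducing the savings calculator from the listed figures.
gpu_sec_rate = 0.0025                    # $/GPU-second on Fleek
usage_low, usage_high = 11_400, 20_000   # assumed GPU-seconds per month
competitor_monthly = 230.0               # quoted Fireworks AI price

fleek_low = usage_low * gpu_sec_rate     # $28.50/mo
fleek_high = usage_high * gpu_sec_rate   # $50.00/mo
fleek_mid = (fleek_low + fleek_high) / 2
annual_savings = (competitor_monthly - fleek_mid) * 12
print(f"Fleek: ${fleek_low:.2f}-${fleek_high:.2f}/mo; "
      f"annual savings ~ ${annual_savings:,.0f}")
```

The worst-case saving (at $50/mo) is about 78% of the competitor price, so the listed 70% appears conservative.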

Technical Specifications

Model Name: DeepSeek Coder V2
Total Parameters: 236B (MoE)
Active Parameters: 21B
Architecture: Mixture of Experts
Context Length: 128K tokens
Inference Speed: 25,000 tokens/sec
Provider: DeepSeek
Release Date: Aug 15, 2025
License: MIT
HuggingFace: https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-236B

Ready to run DeepSeek Coder V2?

Join the waitlist for early access. Start free with $5 in credits.

View Pricing