deepseek-v3-main.zip source code
Resource file list:

deepseek-v3-main/
deepseek-v3-main/.github/
deepseek-v3-main/.github/ISSUE_TEMPLATE/
deepseek-v3-main/.github/ISSUE_TEMPLATE/bug_report.md 468B
deepseek-v3-main/.github/ISSUE_TEMPLATE/feature_request.md 595B
deepseek-v3-main/.gitignore 3.32KB
deepseek-v3-main/CITATION.cff 5.93KB
deepseek-v3-main/DeepSeek_V3.pdf 1.59MB
deepseek-v3-main/LICENSE-CODE 1.04KB
deepseek-v3-main/LICENSE-MODEL 13.44KB
deepseek-v3-main/README.md 23.41KB
deepseek-v3-main/README_WEIGHTS.md 3.57KB
deepseek-v3-main/figures/
deepseek-v3-main/figures/benchmark.png 179.28KB
deepseek-v3-main/figures/niah.png 105.93KB
deepseek-v3-main/inference/
deepseek-v3-main/inference/configs/
deepseek-v3-main/inference/configs/config_16B.json 417B
deepseek-v3-main/inference/configs/config_236B.json 455B
deepseek-v3-main/inference/configs/config_671B.json 503B
deepseek-v3-main/inference/convert.py 3.73KB
deepseek-v3-main/inference/fp8_cast_bf16.py 4.35KB
deepseek-v3-main/inference/generate.py 7.63KB
deepseek-v3-main/inference/kernel.py 7.89KB
deepseek-v3-main/inference/model.py 31.75KB
deepseek-v3-main/inference/requirements.txt 66B
Resource introduction:
For convenient downloading, the deepseek-v3-main.zip source code is made available here.

## Table of Contents

1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Downloads](#3-model-downloads)
4. [Evaluation Results](#4-evaluation-results)
5. [Chat Website & API Platform](#5-chat-website--api-platform)
6. [How to Run Locally](#6-how-to-run-locally)
7. [License](#7-license)
8. [Citation](#8-citation)
9. [Contact](#9-contact)

## 1. Introduction

We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable: throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.
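To make the "37B activated out of 671B total" figure concrete, here is a minimal, self-contained sketch of top-k expert routing in the spirit of a DeepSeekMoE-style layer. This is an illustration only, not the DeepSeek-V3 implementation (the real code lives in inference/model.py); all dimensions and names below are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy top-k MoE layer: only k of n_experts expert MLPs run per token,
    so activated parameters are a small fraction of total parameters."""

    def __init__(self, dim: int = 64, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Route each token to its k highest-scoring experts.
        scores = F.softmax(self.gate(x), dim=-1)           # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)         # (tokens, k)
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e).any(-1)                      # tokens routed to expert e
            if mask.any():
                w = weights[mask][idx[mask] == e].unsqueeze(-1)
                out[mask] += w * expert(x[mask])           # only these tokens touch expert e
        return out

x = torch.randn(5, 64)
print(ToyMoELayer()(x).shape)  # torch.Size([5, 64])
```

The total parameter count grows with n_experts, but each token only pays the compute cost of k experts; scaling that idea up (with DeepSeek-V3's auxiliary-loss-free balancing keeping the expert load even) is what yields 671B total versus 37B activated parameters.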
## 3. Model Downloads

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3-Base | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base) |
| DeepSeek-V3 | 671B | 37B | 128K | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-V3) |
> [!NOTE]
> The total size of DeepSeek-V3 models on Hugging Face is 685B, which includes 671B of the Main Model weights and 14B of the Multi-Token Prediction (MTP) Module weights.
To ensure optimal performance and flexibility, we have partnered with open-source communities and hardware vendors to provide multiple ways to run the model locally. For step-by-step guidance, check out Section 6: [How to Run Locally](#6-how-to-run-locally).
For developers looking to dive deeper, we recommend exploring [README_WEIGHTS.md](./README_WEIGHTS.md) for details on the Main Model weights and the Multi-Token Prediction (MTP) Modules. Please note that MTP support is currently under active development within the community, and we welcome your contributions and feedback.
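For intuition about what the 14B of MTP weights add on top of the 671B main model, here is a heavily simplified sketch of a single multi-token prediction step: an extra block that takes the main model's hidden state together with the embedding of the next token and predicts the token one position further ahead. It follows the general shape of the scheme described in the DeepSeek-V3 paper, but every module name and size below is an illustrative assumption, not the actual implementation.

```python
import torch
import torch.nn as nn

class ToyMTPModule(nn.Module):
    """Simplified multi-token prediction (MTP) module: from the main model's
    hidden state at position i and the embedding of token i+1, predict token i+2.
    The input embedding and output head are shared with the main model."""

    def __init__(self, dim: int, embed: nn.Embedding, head: nn.Linear):
        super().__init__()
        self.embed = embed                    # shared input embedding
        self.head = head                      # shared output projection
        self.merge = nn.Linear(2 * dim, dim)  # combine hidden state + next-token embedding
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)

    def forward(self, hidden: torch.Tensor, next_tokens: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, dim) from the main model; next_tokens: (batch, seq)
        merged = self.merge(torch.cat([hidden, self.embed(next_tokens)], dim=-1))
        return self.head(self.block(merged))  # logits for tokens two steps ahead

dim, vocab = 32, 100
embed, head = nn.Embedding(vocab, dim), nn.Linear(dim, vocab)
mtp = ToyMTPModule(dim, embed, head)
logits = mtp(torch.randn(2, 6, dim), torch.randint(vocab, (2, 6)))
print(logits.shape)  # torch.Size([2, 6, 100])
```

Because these modules sit beside the main model rather than inside it, their weights can be dropped for plain next-token inference, which is why the note above counts them separately from the 671B main model weights.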
## 4. Evaluation Results
### Base Model
#### Standard Benchmarks
| | Benchmark (Metric) | # Shots | DeepSeek-V2 | Qwen2.5 72B | LLaMA3.1 405B | DeepSeek-V3 |
|---|-------------------|----------|--------|-------------|---------------|---------|
| | Architecture | - | MoE | Dense | Dense | MoE |
| | # Activated Params | - | 21B | 72B | 405B | 37B |
| | # Total Params | - | 236B | 72B | 405B | 671B |
| English | Pile-test (BPB) | - | 0.606 | 0.638 | **0.542** | 0.548 |
| | BBH (EM) | 3-shot | 78.8 | 79.8 | 82.9 | **87.5** |
| | MMLU (Acc.) | 5-shot | 78.4 | 85.0 | 84.4 | **87.1** |