DeepSeek has released a new paper, with co-founder Liang Wenfeng credited as a contributor, detailing how its latest large language model DeepSeek-V3 achieves efficient training and inference using only 2,048 H800 GPUs, significantly fewer than the tens of thousands typically required. The team attributes this efficiency to four key innovations: memory optimization through multi-head latent attention (MLA), computational savings via a Mixture-of-Experts (MoE) design with FP8 precision, communication improvements using a multi-plane network topology, and faster inference through multi-token prediction (MTP). With MLA, KV cache memory usage is cut to just 70KB per token, as little as one-seventh that of competing models. The MoE architecture activates only 37 billion of the model's 671 billion parameters per forward pass, reducing training costs by 90% compared with dense models. FP8 training further halves compute and memory usage with minimal accuracy tradeoff. Beyond the model, the paper also outlines five future directions for AI hardware design, advocating tighter integration between software and hardware to address memory, compute, and networking bottlenecks. [36Kr, in Chinese]
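To put those figures in perspective, the following is a rough back-of-the-envelope sketch based only on the numbers quoted above (the 37B/671B parameter split, the 70KB-per-token KV cache, and the "one-seventh" comparison); the 128K-token context length used for illustration is an assumption, not a figure from the paper.

```python
# Back-of-the-envelope arithmetic using the figures reported in the article.
total_params = 671e9         # total parameters in DeepSeek-V3
active_params = 37e9         # parameters activated per forward pass (MoE)
kv_cache_per_token_kb = 70   # KV cache per token with MLA, as reported

# Fraction of the model actually computed per token under the MoE design.
active_fraction = active_params / total_params
print(f"Active parameters per forward pass: {active_fraction:.1%}")  # ~5.5%

# The article says 70KB is as little as 1/7 of competing models' usage,
# implying roughly 490KB per token for a comparable baseline.
baseline_kv_per_token_kb = kv_cache_per_token_kb * 7

# Hypothetical long context, chosen only to illustrate the scale of savings.
context_len = 128_000
mla_gb = kv_cache_per_token_kb * context_len / 1e6
baseline_gb = baseline_kv_per_token_kb * context_len / 1e6
print(f"KV cache at {context_len} tokens: {mla_gb:.1f} GB (MLA) "
      f"vs {baseline_gb:.1f} GB (baseline)")
```

Under these assumptions, a single long-context sequence would need roughly 9 GB of KV cache with MLA versus about 63 GB for the baseline, which is the kind of gap that lets the model fit on far fewer GPUs.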