Even so, DeepSeek's models remain cheaper than its competitors'.

The new chatbot from DeepSeek introduced itself to me with this description: 

"Hi, I was created so you can ask anything and get an answer that might even surprise you."

Today, the artificial intelligence developed by DeepSeek has become a serious competitor in the market and triggered one of NVIDIA's largest stock price drops.

DeepSeek Test. Image: ensigame.com

What sets this model apart is its architecture and training methods. It employs several innovative technologies:

  • Multi-token Prediction (MTP): Instead of predicting one word at a time, the model forecasts multiple words simultaneously by analyzing different parts of a sentence. This approach enhances both the accuracy and efficiency of the model.
  • Mixture of Experts (MoE): This architecture divides the model into many smaller "expert" networks and activates only a few of them for each input, which accelerates AI training and improves performance. DeepSeek V3 uses 256 expert networks, with eight activated for each token it processes.
  • Multi-head Latent Attention (MLA): This mechanism helps focus on the most significant parts of a sentence. MLA extracts key details from text fragments repeatedly rather than just once, reducing the likelihood of missing important information. Thanks to this, the AI is more likely to capture crucial nuances in the input data.
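The MoE idea above can be illustrated with a minimal sketch of top-k expert routing. This is not DeepSeek's implementation; the sizes are toy values (real experts are feed-forward sub-networks, not single matrices), but the routing logic — score all experts, run only the best k — is the core of the technique:

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 16      # hidden size (toy value; production models use thousands)
N_EXPERTS = 256   # number of routed experts, as described for DeepSeek V3
TOP_K = 8         # experts activated per token, as described for DeepSeek V3

# Each "expert" here is a single linear map; real experts are small FFNs.
experts = rng.normal(size=(N_EXPERTS, D_MODEL, D_MODEL)) * 0.02
router = rng.normal(size=(D_MODEL, N_EXPERTS)) * 0.02

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                     # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]       # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                # softmax over the chosen k only
    # Only the selected experts run, so compute scales with k, not N_EXPERTS.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Because only 8 of the 256 experts execute per token, the model gains the capacity of a much larger network while paying only a fraction of the per-token compute.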

The prominent Chinese startup DeepSeek claimed to have created a competitive AI model at minimal cost, stating that it spent only $6 million on training the powerful DeepSeek V3 neural network and used just 2,048 graphics processors.

DeepSeek V3. Image: ensigame.com

However, analysts from SemiAnalysis discovered that DeepSeek operates a large computational infrastructure comprising approximately 50,000 NVIDIA Hopper GPUs. This includes 10,000 H800 units, 10,000 more advanced H100s, and additional batches of H20 GPUs. These resources are distributed across several data centers and are utilized for AI training, research, and financial modeling.

The company's total investment in servers amounts to around $1.6 billion, with operational expenses estimated at $944 million.

DeepSeek is a subsidiary of the Chinese hedge fund High-Flyer, which spun off the startup as a separate division focused on AI technologies in 2023. Unlike most startups that rent computing power from cloud providers, DeepSeek owns its own data centers, giving it full control over AI model optimization and enabling faster implementation of innovations. The company remains self-funded, which keeps it flexible and able to make decisions quickly.

DeepSeek. Image: ensigame.com

Moreover, some researchers at DeepSeek earn over $1.3 million annually, attracting top talent from leading Chinese universities (the company does not hire foreign specialists).

Even considering this, DeepSeek's recent claim of training its latest model for just $6 million seems unrealistic. This figure refers only to the cost of GPU usage during pre-training and does not account for research expenses, model refinement, data processing, or overall infrastructure costs.

Since its inception, DeepSeek has invested over $500 million in AI development. However, unlike larger companies burdened by bureaucracy, DeepSeek's compact structure allows it to actively and effectively implement AI innovations.


The example of DeepSeek demonstrates that a well-funded independent AI company can compete with industry leaders. Nevertheless, experts emphasize that the company's success is largely due to billions in investments, technical breakthroughs, and a strong team, while claims about a "revolutionary budget" for developing AI models are somewhat exaggerated.

Still, competitors' costs remain significantly higher. For instance, compare the cost of model training: DeepSeek spent $5 million on R1, while ChatGPT4o cost $100 million.

Main image: x.com