News

DeepSeek launches V3.1 with faster reasoning, domestic chip support, open-source release, and new API pricing, marking its ...
China's DeepSeek has released a 685-billion parameter open-source AI model, DeepSeek V3.1, challenging OpenAI and Anthropic ...
Chinese startup DeepSeek has released its largest AI model to date, a 685-billion-parameter model that industry observers say ...
The speed and popularity of DeepSeek’s models have challenged US incumbents such as OpenAI, and demonstrated how Chinese ...
Speaking of tokens, DeepSeek has boosted the number of tokens in its context window, which you can think of as its short-term ...
DeepSeek V3.1 is finally here, and while it performs significantly better than R1, it doesn't outperform GPT-5 Thinking or ...
Chinese AI firm adds longer memory to its flagship model but still faces chip shortages that stall bigger ambitions ...
In a quiet yet impactful move, DeepSeek, the Hangzhou-based AI research lab, has unveiled DeepSeek V3.1, an upgraded version ...
DeepSeek launches V3.1 with a doubled context window and advanced coding and math abilities, featuring 685B parameters under the MIT License ...
DeepSeek V3.1 comes with an extended context window, allowing the model to process and retain more information within a ...
It introduces a hybrid-inference design that supports two modes, Think and Non-Think, within a single model, promising faster reasoning, stronger agent skills and an extended context window ...
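As a rough illustration of what a two-mode design means for callers: with OpenAI-compatible chat APIs, the mode is typically selected per request rather than per deployment. The sketch below assumes an OpenAI-style payload and uses the model identifiers `deepseek-reasoner` (Think) and `deepseek-chat` (Non-Think) purely as illustrative placeholders — the snippets above do not confirm how V3.1's modes are actually exposed.

```python
# Hypothetical sketch: selecting an inference mode per request.
# ASSUMPTION: an OpenAI-compatible chat-completions payload, with the
# Think / Non-Think choice expressed via the model name. The identifiers
# below are illustrative, not confirmed by the coverage above.

def build_request(prompt: str, think: bool) -> dict:
    """Build a chat-completion payload for the chosen inference mode."""
    return {
        "model": "deepseek-reasoner" if think else "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
    }

# A latency-sensitive call might skip extended reasoning,
# while a hard problem opts into the Think mode.
fast = build_request("Summarize this report.", think=False)
deep = build_request("Prove the inequality step by step.", think=True)
```

The appeal of a single hybrid model, as described above, is that both payloads hit the same weights — no separate reasoning model needs to be deployed.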
DeepSeek announced the V3.1 model through a message on WeChat, China's widely used social platform, and on the Hugging Face community website. The new model boasts ...