What Is DeepSeek? It's the New Chinese AI Tool!
Just weeks after its newfound fame, Chinese AI startup DeepSeek is moving at a rapid pace, beating out competitors and sparking a heated debate about the virtues of open-source software.
However, numerous security concerns have emerged about the company, leading private and government agencies to ban its use. Here’s what you need to know.
What is DeepSeek?
Founded by Liang Wenfeng in May 2023 (and not even two years old), the Chinese startup has challenged established AI companies with its open-source approach. According to Forbes, DeepSeek’s advantage may be due to the fact that it is solely funded by High-Flyer, a hedge fund run by Liang, which gives the company a funding model that supports rapid growth and research.
The startup achieved massive success in January when it released the full version of R1, its open-source reasoning model that could outperform OpenAI’s o1. Shortly after, downloads of DeepSeek’s AI assistant on the App Store — which runs V3, the DeepSeek model released in December — surpassed ChatGPT, which had previously been the most downloaded free app. DeepSeek R1 even climbed to third place overall in HuggingFace’s Chatbot Arena, battling several Gemini models and GPT-4o; at the same time, DeepSeek released a promising new image model.
The company’s ability to build successful models by strategically optimizing older chips and distributing query loads across models for efficiency — in the wake of export restrictions on US-made chips, including Nvidia’s — is impressive by industry standards.
What is DeepSeek R1?
Fully released on January 21st, R1 is DeepSeek’s flagship reasoning model, performing on par with or above OpenAI’s acclaimed o1 model on a variety of math, coding, and reasoning benchmarks.
Built on V3 (with smaller distilled versions based on Alibaba’s Qwen and Meta’s Llama), what makes R1 attractive is that, unlike other top models from tech giants, it is open source, meaning anyone can download and use it. However, DeepSeek has not released R1’s training dataset. So far, all of the other models it has released are also open source.
DeepSeek is cheaper than comparable US models. For reference, R1 API access starts at $0.14 for one million tokens, a fraction of the $7.50 that OpenAI charges for the equivalent tier.
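As a back-of-the-envelope illustration of that price gap, the figures cited above can be plugged into a few lines of Python. Note the prices here are simply the article’s numbers and may not reflect current tiers:

```python
# Rough cost comparison using the per-million-token prices cited above.
# These figures come from the article and may not match current pricing tiers.
DEEPSEEK_USD_PER_M = 0.14   # R1 API, per 1M tokens (article's figure)
OPENAI_USD_PER_M = 7.50     # equivalent OpenAI tier (article's figure)

def api_cost(tokens: int, usd_per_million: float) -> float:
    """Return the USD cost of processing `tokens` at a given per-million-token rate."""
    return tokens / 1_000_000 * usd_per_million

if __name__ == "__main__":
    tokens = 10_000_000  # e.g., a month of moderate chatbot usage
    print(f"DeepSeek: ${api_cost(tokens, DEEPSEEK_USD_PER_M):.2f}")
    print(f"OpenAI:   ${api_cost(tokens, OPENAI_USD_PER_M):.2f}")
    print(f"Ratio:    ~{OPENAI_USD_PER_M / DEEPSEEK_USD_PER_M:.0f}x cheaper")
```

At these rates, 10 million tokens cost $1.40 through DeepSeek versus $75.00 through OpenAI, roughly a 54x difference.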
DeepSeek claims in a company paper that its V3 model, which can be compared to a standard chatbot model like Claude, cost $5.6 million to train, a number that is touted (and disputed) as the model’s full development cost. According to a Reuters report, some lab experts believe that DeepSeek’s paper only mentions the final training run for V3, not its full development costs (which would still be a fraction of what tech giants have spent on building competing models). Other experts believe that DeepSeek’s costs do not include prior infrastructure, R&D, data, and staff costs.
One drawback that could affect the model’s long-term competitiveness with o1 and U.S.-made alternatives is censorship. Chinese models often include blocks on certain content, meaning they may not answer some questions even if they perform relatively well compared to other models (see how DeepSeek’s AI assistant answers questions about Tiananmen Square and Taiwan here). As DeepSeek’s use grows, some worry that its models’ strict Chinese guardrails and methodological biases could become embedded in all kinds of infrastructure.
However, you can access uncensored, US-based versions of DeepSeek through platforms like Perplexity. These platforms have stripped DeepSeek of its censorship weights and run it on local servers to avoid security concerns.
In December, ZDNET’s Tiernan Ray compared R1-Lite’s chain-of-thought reasoning to o1’s, and the results were mixed. Nevertheless, DeepSeek’s AI assistant reveals its chain of thought to the user while answering questions, which is a novel experience for many chatbot users given that ChatGPT doesn’t externalize its reasoning.
Of course, all the popular models come with red-teaming, community guidelines, and content guardrails. But, at least at this stage, US-made chatbots are unlikely to shy away from answering questions about historical events.
What are the privacy and security concerns?
As with TikTok — which has faced a ban in the US — data privacy concerns have also swirled around DeepSeek.
Earlier this month, Feroot Security CEO Ivan Tsarynny told ABC that his company had discovered “direct links to servers in China and companies controlled by the Chinese government,” which he said they had “never seen before.”
After decrypting some of DeepSeek’s code, Feroot found hidden programming that can send user data — including identifying information, queries, and online activity — to China Mobile, a Chinese government-operated telecom company that has been banned from operating in the US since 2019 due to national security concerns.
NowSecure then recommended that organizations “forbid” the use of DeepSeek’s mobile app after finding several flaws, including unencrypted data transmission (meaning anyone monitoring traffic can intercept it) and poor data storage.
Internal DeepSeek database was publicly accessible
Last week, research firm Wiz discovered that an internal DeepSeek database was publicly accessible “within minutes” of conducting a security check. The “completely open and unauthenticated” database contained chat histories, user API keys, and other sensitive data.
“More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world,” Wiz’s report explains.
According to Wired, which first published the research, though Wiz did not receive a response from DeepSeek, the database appeared to be taken down within 30 minutes of Wiz notifying the company. It’s unclear how long it was accessible or whether any other entity discovered it before it was taken down.
Even without this alarming development, DeepSeek’s privacy policy raises some red flags. It states, “The personal information we collect from you may be stored on a server located outside the country where you live. We store the information we collect in secure servers located in the People’s Republic of China.”
The policy outlines that DeepSeek collects plenty of information, including but not limited to:
IP address, unique device identifiers, and cookies
Date of birth (where applicable), username, email address and/or telephone number, and password
Your text or audio input, prompt, uploaded files, feedback, chat history, or other content that you provide to our model and services
Proof of identity or age, feedback, or inquiries about your use of the Service [If you contact DeepSeek]
The policy continues: “Where we transfer any personal information out of the country where you live, including for one or more of the purposes as set out in this Policy, we will do so in accordance with the requirements of applicable data protection laws.” The policy does not mention GDPR compliance.
“Users need to be aware that any data shared with the platform could be subject to government access under China’s cybersecurity laws, which mandate that companies provide access to data upon request by authorities,” Adrianus Warmenhoven, a member of NordVPN’s security advisory board, told ZDNET via email.
According to some observers, the fact that R1 is open source means increased transparency, allowing users to inspect the model’s source code for signs of privacy-related activity.
However, DeepSeek has also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online).
All chatbots, including ChatGPT, collect some degree of user data when queried via the browser.
Is DeepSeek AI safe?
AI safety researchers have long been concerned that powerful open-source models could be applied in dangerous and unregulated ways once out in the wild. Tests by AI safety firm Chatterbox found DeepSeek R1 has “safety issues across the board.”
To varying degrees, US AI companies employ some kind of safety oversight team. DeepSeek has not publicized whether it has a safety research team and has not responded to ZDNET’s request for comment on the matter.
“Most companies will keep racing to build the strongest AI they can, irrespective of the risks, and will see enhanced algorithmic efficiency as a way to achieve higher performance faster,” said Peter Slattery, a researcher on MIT’s FutureTech team who led its Risk Repository project. “That leaves us even less time to address the safety, governance, and societal challenges that will come with increasingly advanced AI systems.”
“DeepSeek’s breakthrough in training efficiency also means we should soon expect to see a large number of local, specialized ‘wrappers’ — apps built on top of the DeepSeek R1 engine — which will each introduce their own privacy risks, and which could each be misused if they fell into the wrong hands,” added Ryan Fedasiuk, director of US AI governance at The Future Society, an AI policy nonprofit.
Is DeepSeek more energy efficient?
Some analysts note that DeepSeek’s l