
A few weeks into its newfound fame, Chinese AI startup DeepSeek is moving at breakneck speed, toppling competitors and sparking a pivotal conversation about the virtues of open-source software.
Also: The US backs "AI control" and puts AI safety aside
At the same time, serious security concerns have emerged about the company, prompting private and government organizations to ban DeepSeek. Here's what you need to know.
What is DeepSeek?
Founded in May 2023 by Liang Wenfeng, not even two years ago, the Chinese startup is challenging established AI companies with its open-source approach. According to Forbes, DeepSeek's edge may lie in the fact that it is funded solely by High-Flyer, a hedge fund also run by Wenfeng, which gives the company a funding model that supports rapid growth and research.
The startup made waves last month when it released the full version of R1, the company's open-source reasoning model, which can outperform OpenAI's o1. Last week, DeepSeek's AI Assistant app, which runs DeepSeek-V3 (released in December), became the most-downloaded free app on Apple's App Store. DeepSeek-R1 has climbed to third place overall on Hugging Face's Chatbot Arena, battling several Gemini models and ChatGPT-4o, while the company has also released a promising new image model.
Also: How to try DeepSeek R1 without the security risk
The company's ability to create successful models by strategically optimizing older chips – a consequence of the US export ban on chips, including those from Nvidia – and by distributing query loads across models for efficiency is impressive by industry standards.
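That query-distribution idea is, loosely, mixture-of-experts routing: each token is sent to only a few specialized sub-networks rather than the whole model. The following is an illustrative toy sketch only, assuming a simple top-k softmax gate; the function and numbers are ours, not DeepSeek's actual code.

```python
import numpy as np

def route_top_k(gate_logits: np.ndarray, k: int = 2):
    """Select the top-k experts per token and renormalize their gate weights.

    gate_logits: (num_tokens, num_experts) router scores.
    Returns (indices, weights), each of shape (num_tokens, k).
    """
    # Sort expert scores descending and keep the k best per token
    top_idx = np.argsort(gate_logits, axis=-1)[:, ::-1][:, :k]
    top_logits = np.take_along_axis(gate_logits, top_idx, axis=-1)
    # Softmax over only the selected experts, so their weights sum to 1
    exp = np.exp(top_logits - top_logits.max(axis=-1, keepdims=True))
    weights = exp / exp.sum(axis=-1, keepdims=True)
    return top_idx, weights

# One token scored against four experts: only the two best are activated
idx, w = route_top_k(np.array([[0.1, 2.0, 0.3, 1.5]]), k=2)
```

Because only the k selected experts run for each token, a model with a very large total parameter count can keep per-query compute, and therefore hardware demand, comparatively low.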
What is DeepSeek R1?
Fully released on January 21st, R1 is DeepSeek's flagship reasoning model, which rivals OpenAI's lauded o1 model on several math, coding, and reasoning benchmarks.
What makes R1 especially interesting is that, unlike most other top models from tech giants, it is open source, meaning anyone can download and use it. R1 is built on V3 and draws on Alibaba's Qwen and Meta's Llama for its smaller versions. That said, DeepSeek has not disclosed R1's training dataset. All of the company's other models released to date are open source as well.
Also: I tested DeepSeek's R1 and V3 coding skills – and we're not all doomed (yet)
DeepSeek's models are also cheaper than comparable US models. For reference, R1 API access starts at $0.14 per million tokens.
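At a flat per-million-token rate, the cost arithmetic is simple. Here is a quick sketch; the helper name is our own, and it ignores separate input/output rates and volume tiers:

```python
def r1_api_cost_usd(tokens: int, price_per_million: float = 0.14) -> float:
    """Estimate API cost at a flat per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

# 2 million tokens at the quoted $0.14 starting rate
cost = r1_api_cost_usd(2_000_000)  # 0.28
```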
Now circulating (and contested) is DeepSeek's claimed development cost for V3, its standard chatbot model comparable to Claude. As reported by the AP, some lab experts believe the figure refers only to V3's final training run, not its overall development cost, which would still be a fraction of what US tech giants have spent building competitive models. Other experts suggest DeepSeek's figure omits prior infrastructure, R&D, data, and labor costs.
One drawback that could affect the model's long-term competitiveness with o1 and other US alternatives is censorship. Chinese models often include blocks on certain subject matter, meaning that while they perform comparably to other models, they will not answer some queries (see how DeepSeek's AI assistant responds to questions about Tiananmen Square and Taiwan here). As DeepSeek use increases, some worry that the model's strict Chinese guardrails and systemic biases could become embedded in all kinds of infrastructure.
You can access uncensored, US-based versions of DeepSeek through platforms like Perplexity, which hosts the model's open-source weights on US servers to strip out the censorship and sidestep security concerns.
Also: Is DeepSeek's new image model another win for cheap AI?
In December, ZDNET's Tiernan Ray compared R1-Lite's ability to explain its chain of thought with that of o1, and the results were mixed. That said, DeepSeek's AI assistant reveals its train of thought to users during queries, a novel experience for many chatbot users, given that ChatGPT does not externalize its reasoning.
Of course, all popular models come with red-teaming histories, community guidelines, and content guardrails. However, at least at this stage, American-made chatbots rarely refrain from answering queries about historical events.
Privacy and security red flags
The data privacy concerns that circulate around TikTok – the Chinese-owned social media app now somewhat banned in the US – are also swirling around DeepSeek.
Also: ChatGPT's Deep Research identified 20 jobs it could replace. Is yours on the list?
On Wednesday, Ivan Tsarynny, CEO of Feroot Security, told ABC that his company had discovered "direct links to Chinese servers and to Chinese companies under Chinese government control."
After decrypting DeepSeek's code, Feroot found hidden programming capable of sending user data – including identifying information, queries, and online activity – to a Chinese government-run telecom operator that has been banned from operating in the US since 2019 over national security concerns.
On Thursday, mobile security firm NowSecure recommended that organizations "prohibit" DeepSeek's mobile app after finding several flaws, including unencrypted transmission of data (meaning anyone monitoring traffic can intercept it) and insecure data storage.
Last week, research firm Wiz discovered that an internal DeepSeek database was publicly accessible "within minutes" of it beginning a security check. The "fully open and unauthenticated" database contained chat histories, user API keys, and other sensitive data.
Also: Why restarting your phone daily is the best defense against zero-click hackers
"More importantly, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment," Wiz wrote.
According to Wired, which first reported Wiz's findings, Wiz never received a response from DeepSeek, but the database appeared to be taken down within 30 minutes of Wiz notifying the company. It's unclear how long the database was accessible, or whether any other parties discovered it before it was taken down.
Even without this alarming development, DeepSeek's privacy policy raises some flags. It states that "the personal information we collect from you may be stored on servers outside the country you live in," and that "we store information we collect on secure servers in the People's Republic of China."
Also: The "Humanity's Last Exam" benchmark is stumping top AI models – can we do any better?
The policy outlines that DeepSeek collects a large amount of information, including but not limited to:
"IP address, unique device identifiers, and cookies"
"Date of birth (where applicable), username, email address and/or telephone number, and password"
"Text or audio input, prompts, uploaded files, feedback, chat history, or any other content that you provide to our models and services"
"Proof of identity or age, and feedback or inquiries about your use of the services, if you contact DeepSeek"
The policy continues: "Where we transfer any personal information out of the country where you live, including for one or more of the purposes as set out in this policy, we will do so in accordance with the requirements of applicable data protection laws." Notably, the policy makes no mention of GDPR compliance.
Also: Apple researchers reveal the secret sauce behind DeepSeek AI
"Data shared with the platform may be subject to government access under China's Cybersecurity Law, which requires companies to provide access to data upon request by authorities. Users need to be aware of this," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET by email.
According to some observers, the fact that R1 is open source means greater transparency, allowing users to inspect the model’s source code for indications of privacy-related activity.
However, R1 and the smaller versions DeepSeek has released can be downloaded and run locally, rather than accessed through the online chatbot, avoiding concerns about data being sent back to the company.
Also: ChatGPT privacy tips: Two important ways to limit the data you share with OpenAI
All chatbots, including ChatGPT, collect some amount of user data when queried through the browser.
Safety concerns
AI safety researchers have long worried that powerful open-source models, once out in the wild, can be applied in dangerous and unregulated ways. Testing by AI safety firm Chatterbox revealed that DeepSeek R1 has pervasive safety issues.
Also: We’re losing the battle against complexity, and AI may help
US AI companies all, to varying degrees, employ some kind of safety-monitoring team. DeepSeek has not said publicly whether it has a safety research team, and it has not responded to ZDNET's request for comment on the matter.
"Most companies will see improved algorithmic efficiency as a way to achieve higher performance faster and to keep racing to build the most powerful AI possible, regardless of the risks," Slattery said. "This leaves even less time to address the safety, governance, and societal challenges that come with increasingly sophisticated AI systems."
"DeepSeek's breakthrough in training efficiency also means we should soon expect to see a large number of local, specialized 'wrappers' – apps built on top of the DeepSeek R1 engine – each of which will introduce its own privacy risks and could be misused if it fell into the wrong hands," added Ryan Fedasiuk, director of US AI governance at an AI policy nonprofit.
Energy efficiency claims
Some analysts note that DeepSeek's lower-lift compute model is more energy efficient than those of US AI giants.
"DeepSeek's new AI model likely does use less energy to train and run than its larger competitors' models," Slattery said. "However, I doubt this marks the start of a long-term trend toward lower energy consumption. AI's power comes from data, algorithms, and compute, and historically, companies have reinvested efficiency gains into building bigger, more powerful models rather than into reducing overall energy usage."
"DeepSeek is not the only AI company that has made extraordinary gains in computational efficiency. In recent months, US rivals Anthropic and Google's Gemini have boasted similar performance improvements," Fedasiuk said.
Also: $450 and 19 hours is all it takes to rival OpenAI's o1-preview
"DeepSeek's results appear to reflect independently achieved breakthroughs that promise to make large language models more efficient and less expensive than many industry experts anticipated," he continued. "But in a field as dynamic as AI, it's hard to predict how long any one company can hold the spotlight."
How will DeepSeek impact the AI industry?
The success of R1 highlights a sea change in AI that could empower smaller labs and researchers to create competitive models and diversify the field of available options. For example, organizations without the funding or staff of OpenAI can download R1 and fine-tune it to compete with models like o1. Just days before R1's release, researchers at UC Berkeley created an open-source model on par with o1-preview, an early version of o1, for around $450 in just 19 hours.
Given how much AI investment has ballooned, many experts speculate that this development could burst the AI bubble (the stock market certainly panicked). Some see DeepSeek's success as debunking the idea that cutting-edge development has to mean ever-bigger models and spending. It also casts Stargate, the $500 billion infrastructure initiative led by several AI giants, in a new light, raising questions about whether competitive AI really requires the energy and scale of the initiative's proposed data centers.
Also: Anthropic offers $20,000 to anyone who can jailbreak its new AI safety system
DeepSeek's rise comes at a critical moment in US-China tech relations, just days after the long-fought TikTok ban went into partial effect. Ironically, DeepSeek openly exhibits the very security concerns the US struggled to prove about TikTok during its years-long effort to enact the ban. The US Navy has already banned DeepSeek, and lawmakers are seeking to ban the app from all government devices.