On Jordan Schneider’s ChinaTalk podcast, Anthropic CEO Dario Amodei revealed that in Anthropic’s security testing, the DeepSeek model performed the worst of all models tested. Amodei did not specify which version of DeepSeek was tested, nor did he share further technical details; he said only that the evaluation is part of Anthropic’s routine assessment of AI models for potential national security risks.
Specifically, the evaluation measured whether the model could generate information about biological weapons that is not readily found on Google or in textbooks. Based on the results, Amodei believes the current DeepSeek model does not pose a threat of providing sensitive and dangerous information, though he cautioned that this could change in the near future.
While praising the DeepSeek team as talented engineers, Amodei advised companies to take the security implications of such AI technologies seriously.
Source: TechCrunch
TLDR: In a podcast interview, Anthropic CEO Dario Amodei discussed security-testing results for the DeepSeek AI model, which performed worst among the models Anthropic tested on generating bioweapons-related information. While acknowledging the team’s skill, Amodei stressed the importance of AI security considerations.