DeepSeek warns of ‘jailbreak’ risks for its open-source models
DeepSeek has revealed details about the risks posed by its artificial intelligence models for the first time, noting that open-source models are particularly susceptible to being “jailbroken” by malicious actors. The Hangzhou-based start-up said it evaluated its models using industry benchmarks as well as its own tests, in a peer-reviewed article published in the academic…