A group of researchers in China has expressed concerns over the rapid adoption of DeepSeek’s artificial intelligence models in hospitals, citing significant risks to clinical safety and patient privacy. DeepSeek, a start-up known for its cost-effective, open-source AI models, has seen rapid uptake in healthcare: by early March, more than 300 hospitals in China had integrated its large language models (LLMs) into clinical diagnostics and decision-making.
The researchers, led by Wong Tien Yin, founding head of Tsinghua Medicine at Tsinghua University, highlighted a central weakness of DeepSeek’s models: while they offer strong reasoning capabilities, they tend to generate “plausible but factually incorrect outputs,” which could lead to serious clinical errors. The team published their concerns in the medical journal JAMA, calling for more cautious deployment of these AI systems in healthcare settings. The paper stands as a rare voice of caution in a nation that has quickly embraced DeepSeek’s technology as an AI breakthrough.
The researchers warned that healthcare professionals could become overreliant on the models’ output, overlooking errors or biases in diagnostic results and treatment recommendations. Such overreliance could lead to diagnostic inaccuracies or even harm to patients, while more cautious providers could find themselves burdened with constantly verifying AI-generated output in time-sensitive medical situations.
A separate concern stems from how the models are deployed. Many hospitals have opted for private, on-site installations of DeepSeek’s models to mitigate security and privacy risks, but this shifts responsibility for cybersecurity onto individual healthcare institutions, many of which lack the infrastructure needed to protect against breaches.
The researchers also flagged the spread of AI-powered health recommendations among underserved populations in China. While these groups now have unprecedented access to AI-driven healthcare suggestions, that access often comes without the clinical oversight needed for safe use, putting patients at further risk.
DeepSeek is not alone in this expansion: other companies, including Ant Group and Tairex, are also deploying AI medical agents and virtual healthcare platforms in China. The growing use of AI in clinical settings has, however, drawn increasing scrutiny of its safety, security, and reliability, with researchers stressing the need for caution and stronger safeguards to protect patient welfare.