The frenzy over DeepSeek, China's home-grown chatbot, has spread to the medical sector, inspiring more Chinese people to turn to the artificial intelligence (AI) model for diagnoses.
But the shift has also triggered mixed feelings -- especially over the possibility of AI's involvement in medical decisions.
Last week, a 12-second video on Douyin, China's version of TikTok, went viral.
"I am crushed!" ran the caption of the video, with the thumbnail featuring a doctor seated at his desk.
"The patient went on DeepSeek and questioned my treatment. I was so angry and checked the medical guidebook, only to find out that it had been updated," he said, realising that he was the one in error.
The doctor's experience is not an isolated one. As China pushes for AI supremacy, members of the public are increasingly finding themselves face-to-face with AI civil servants, educators, newsreaders and even medical assistants.
According to a media tally, as of February, close to 100 hospitals around China had announced they would use AI models, including from DeepSeek, for help with tasks such as decision-making, analysis of medical imaging and quality control of medical records.
A surgeon from Shenzhen, in southern Guangdong province, said an internal memo circulated by her hospital last week announced that they would soon start using the DeepSeek AI model -- "for research purposes only".
"In the future, doctors might lose their jobs," the surgeon said. "At least I can still perform surgery."
Her hospital also cautioned against uploading patient or hospital data to AI models to prevent information leaks.
Others welcomed the change. A neurologist in Nanjing, in eastern Jiangsu province, said his social media was flooded with posts praising DeepSeek for making everyday tasks much easier.
"When you are on duty at the outpatient clinic, you have limited time with each patient, so if you could generate a medical record with a few keywords, it saves time," he said. "The records are usually short and have a routine style, which is perfect for AI to fill in."
But of course, the medical decisions had to be made by the doctors, he added.
An official statement this week from the Beijing government on its social media page highlighted the adoption of AI models by multiple prominent hospitals in the national capital.
The hospitals had used the model to "accelerate the research and development of new drugs, improve efficiency of diagnosis and treatment and provide patients with more accurate and convenient medical services", the statement said.
The public have been quick to adopt AI in their daily lives as well. While in the lift at a hospital in Shenzhen, this reporter came across a couple loudly arguing over the doctor's prescription for their child. In the end, the mother whipped out her phone and spoke to DeepSeek, listing the child's symptoms and asking for a diagnosis.
But others have questioned such usage, especially in a specialised field like medicine. For instance, Hunan province in central China last month banned hospitals from using AI to generate prescriptions.
Marko Skoric, associate professor in the media and communication department at City University of Hong Kong (CityU), said most AI systems were "black box" systems, where it was possible to track their input and output but not their internal workings and decision-making processes.
"For medical professionals, we know the curriculum and the corpus of knowledge that they have to acquire in the medical school together with sound scientific principles of decision-making -- this has all been fairly transparent," he said.
There is also the issue of liability when AI makes medical decisions.
"Who would be held responsible when things go wrong? The doctors, hospitals or tech companies?" Skoric asked.
Jonathan Zhu, chair professor of computational social science at CityU, said the attitude towards AI for medicine should be a "cautiously positive and active" one.
He called for the sector to be regulated by both the government and the medical fraternity, suggesting that the AI models used should be evidence-based, integrate multiple sources, and be refined incrementally from general to more specialised applications.
"In daily use of AI for medicine, consumers should always seek 'second opinions', such as checking with multiple AI sources -- the more diverse the better -- and comparing these answers with those from trusted medical professionals," he said.
Some social media users even reported using AI to help with parenting. One popular post said the user's child was crying over a toy but calmed down after chatting with Doubao.
Doubao is a ChatGPT-like conversational bot developed by TikTok's owner, China-based ByteDance.
"AI is more mentally healthy than 90 per cent of the human beings, AI parenting might raise warmer and kinder children," one commenter said.
The push for the widespread use of AI has been firmly backed by authorities in China.
When DeepSeek's AI models first came out, Beijing hailed it as a success for China's innovation drive in the face of Western sanctions limiting access to hi-tech chips.
Chinese state media has been quick to champion the private start-up as a national asset in the global competition for AI supremacy. DeepSeek founder Liang Wenfeng was among a handful of entrepreneurs invited last month to a symposium hosted by President Xi Jinping, who encouraged them to push ahead with innovation to power China's economic rise.
Cities across China, including Shenzhen, Hohhot in the north, and Ganzhou and Wuxi in the east, have since integrated AI into their government service platforms and internal operations.