In Elon Musk’s world, artificial intelligence is the new MD. The X owner has encouraged users to upload medical test results, such as CT and bone scans, to the platform, saying the information would be used to train X’s artificial intelligence chatbot, Grok, to interpret them effectively.
Earlier this month, Musk reposted a video of himself talking about uploading medical data to Grok, writing on X: “Try it!”
“You can upload your X-ray or MRI image to Grok and it will give you a medical diagnosis,” Musk said in the video, uploaded in June. “I’ve seen cases where it’s actually better than what doctors tell you.”
In 2024, Musk said medical images uploaded to Grok would be used to train the bot.
“It’s still early days, but it’s already quite accurate, and it will get really good,” Musk wrote on X. “Let us know where Grok gets it right or where it needs improvement.”
In his reply, Musk also claimed that Grok had saved a man’s life in Norway by diagnosing a problem doctors had failed to notice. The X owner is himself willing to upload his own medical information to the bot.
“I recently had an MRI and submitted it to Grok,” Musk said on the Moonshots podcast with Peter Diamandis, released Tuesday. “Neither the doctors nor Grok found anything.”
Musk did not reveal on the podcast why he had the MRI. xAI and X told Fortune in a statement: “Legacy media lies.”
Grok faces some competition in the AI health space. This week, OpenAI launched ChatGPT Health, an in-bot experience that allows users to securely connect medical records and health apps such as MyFitnessPal and Apple Health. The company says it will not use personal medical information to train the model.
Artificial intelligence chatbots have become a ubiquitous source of medical information. OpenAI reported this week that 40 million people have sought health information from its models, with 55% using the bot to identify or better understand symptoms.
Dr. Grok will see you now.
So far, Grok’s track record at detecting medical anomalies has been mixed. Some users claim the AI successfully analyzed blood test results and identified breast cancer. But it has grossly misinterpreted other material, according to doctors who responded to some of Musk’s posts about Grok’s ability to interpret medical information. In one instance, Grok mistook a “textbook case” of tuberculosis for a herniated disc or spinal stenosis. In another, the bot mistook a mammogram showing a benign breast cyst for an image of a testicle.
A May 2025 study found that while all AI models have limitations in processing and predicting medical outcomes, Grok was the most effective at detecting the presence of lesions across 35,711 brain MRI slices, compared with Google’s Gemini and ChatGPT-4o.
“We know the technical capability is there,” Dr. Laura Heacock, an associate professor in the department of radiology at NYU Langone Health, wrote on X. “Whether or not they want to put in the [graphics processing units] for medical imaging is up to them. For now, non-generative AI methods continue to outperform in medical imaging.”
The problem with Dr. Grok
Experts say Musk’s lofty goal of training artificial intelligence to make medical diagnoses is also a risky one. While AI is increasingly used to make complex science more understandable and to power assistive technologies, training Grok on data gathered through a social media platform has raised concerns about both its accuracy and user privacy.
Having users input data directly, rather than sourcing it from secure databases of de-identified patient data, is one way Musk is trying to fast-track Grok’s development, Ryan Tarzy, CEO of health technology company Avandra Imaging, said in an interview with Fast Company. Moreover, the information comes from the limited sample of people willing to upload their images and test results, meaning the AI will not draw on data representing the broader, more diverse field of medicine.
Medical information shared on social media is not subject to the Health Insurance Portability and Accountability Act (HIPAA), a federal law designed to protect a patient’s private information from being shared without the patient’s consent. This means users have less control over where their information goes once they choose to share it.
“There are numerous risks with this approach, including the accidental sharing of patient identities,” Tarzy said. “Personal health information is ‘burned in’ to many images, such as CT scans, and would inevitably be released as part of this plan.”
Matthew McCoy, an assistant professor of medical ethics and health policy at the University of Pennsylvania, said the potential privacy risks posed by Grok are not fully understood, because X may have privacy protections in place that the public is unaware of. Users, he said, share their medical information at their own risk.
“As an individual user, would I feel comfortable contributing health data?” he previously told The New York Times. “Absolutely not.”
A version of this story was originally published on Fortune.com on November 20, 2024.
More on artificial intelligence and health:
- OpenAI launches ChatGPT Health, aiming to become a personal health data hub
- OpenAI pitches ChatGPT as a doctor as millions of Americans face soaring insurance costs: “In the U.S., ChatGPT has become a vital ally”
- As Utah gives AI the ability to prescribe some medications, doctors warn patients of risks
This story originally appeared on Fortune.com