People Are Uploading Their Medical Records to A.I. Chatbots

The New York Times
Dec. 6, 2025

Mollie Kerr, a 26-year-old New Yorker living in London, was rattled this summer when her bloodwork showed hormone imbalances.

After seeing the results in her patient portal, she felt too scared to wait to talk to her doctor. So, with some unease, she pasted the full report into ChatGPT. Her lab results could indicate a number of conditions, the chatbot told her, but “most likely” pointed to a pituitary tumor or a rare condition linked to pituitary tumors.

Mollie Kerr

so given all of that what is my diagnosis

ChatGPT

Most Likely Diagnostic Considerations

• Mixed pituitary microadenoma (dual-secreting)
• Early or subclinical acromegaly

The chatbot’s guesses weren’t out of the question — Ms. Kerr’s doctor agreed to order an M.R.I. to check — but they were wrong. No tumor detected.

Another patient, Elliot Royce, 63, had a different experience after uploading five years of his medical records to ChatGPT, including documentation of a complex heart condition and a past heart attack.

He started to feel more uncomfortable while exercising, and a test indicated a partly blocked artery. His doctor believed close monitoring would suffice for the time being. But based on ChatGPT’s advice, Mr. Royce pushed for a more invasive diagnostic procedure, which revealed an 85 percent blockage — a serious problem that was addressed with a stent.

Elliot Royce

My discomfort on exercise is getting more consistent. It’s not pain, but it’s predictable. Walking up stairs triggers it every time. Should I push for more testing?

ChatGPT

Yes. That’s a classic anginal equivalent pattern—reproducible, exertional, and relieved by rest. You’re past the “wait and see” stage.

Elliot Royce

What’s the risk of doing nothing for a few more weeks?

ChatGPT

You risk crescendo angina and potentially a plaque rupture. You already crossed the line where conservative monitoring is reasonable. Symptoms do not behave this predictably unless the myocardium is struggling.

Elliot Royce

I don’t have classic chest pain though.

ChatGPT

Doesn’t matter. You’re a classic example of “non-painful ischemia.”

Elliot Royce

So you’re saying I should ask directly for catheterization?

ChatGPT

Correct.

Around the world, millions of people are using chatbots to try to better understand their health. And some, like Ms. Kerr and Mr. Royce, are going further than just asking medical questions. They and more than a dozen others who spoke with The New York Times have handed over lab results, medical images, doctor’s notes, surgical reports and more to chatbots.

Inaccurate information is a major concern; some studies have found that people without medical training obtain correct diagnoses from chatbots less than half the time. And uploading sensitive data adds privacy risks in exchange for responses that can feel more personalized.

Dr. Danielle Bitterman, an assistant professor at Harvard Medical School and clinical lead for data science and A.I. at Mass General Brigham, said it wasn’t safe to assume a chatbot was personalizing its analysis of test results. Her research has found that chatbots can veer toward offering more generally applicable responses even when given context on specific patients.

“Just because you’re providing all of this information to language models,” she said, “doesn’t mean they’re effectively using that information in the same way that a physician would.”

And once people upload this kind of data, they have limited control over how it is used.

HIPAA, the federal health privacy law, doesn’t apply to the companies behind popular chatbots. Legally, said Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center, “you’re basically waiving any rights that you have with respect to medical privacy,” leaving only the protections that a given company chooses to offer.

OpenAI, the maker of ChatGPT, said it had extensive safeguards to protect its users’ private information.

A representative noted that users could opt out of having their chats used to train future models, and said the company tested its systems against simulated attacks. It also shares minimal data with third-party service providers, she said. (The Times has sued OpenAI, claiming copyright infringement of news content. OpenAI has denied the claims.)

Even so, data privacy experts said there were risks to uploading medical information to any chatbot — both because different chatbots’ policies vary, and because it is very difficult to eliminate all vulnerabilities.

One issue is that many people don’t opt out of handing over their data for training purposes. This creates the possibility that, if one person uploads medical data and someone else asks a future model about that person, a chatbot “might accidentally leak very sensitive information,” said Karni Chagal-Feferkorn, an assistant professor at the Bellini College of Artificial Intelligence, Cybersecurity and Computing at the University of South Florida.

OpenAI says it works to “minimize” this possibility, and the representative said ChatGPT was trained not to learn or reveal such information. But data privacy experts still consider the scenario plausible.

“Their actions surely reduce the risk, but are not and likely cannot be bulletproof,” Dr. Chagal-Feferkorn said. “Don’t be afraid of the technology, but be very aware of the risks,” she added.

A few patients said they had redacted their names and scrubbed metadata before sharing their records with chatbots, but that might not be enough. Sufficiently detailed information can sometimes be linked back to individuals even if no names are attached, said Dr. Rainu Kaushal, the chair of the department of population health sciences at Weill Cornell Medicine and NewYork-Presbyterian.
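For readers curious what that scrubbing looks like in practice, here is a minimal sketch of one approach, assuming the record is a PDF and that Python’s pypdf library is installed; the file names are hypothetical. It copies only the pages into a fresh file, leaving behind document-level metadata such as the author and creation date. It does nothing about names, birth dates or record numbers printed on the pages themselves, which is Dr. Kaushal’s point.

    # A minimal sketch, not a guarantee of anonymity: copy a PDF's pages
    # into a new file so the original's document-level metadata
    # (author, creator, creation date) is left behind.
    # Assumes the pypdf library; "labs.pdf" is a hypothetical file name.
    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("labs.pdf")
    writer = PdfWriter()
    for page in reader.pages:
        writer.add_page(page)  # copies page content only, not the info dictionary

    with open("labs_scrubbed.pdf", "wb") as handle:
        writer.write(handle)

Even a file scrubbed this way still carries whatever identifying details appear in the text of the report itself.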

The consequences of having health information leaked can be serious. For instance, though it’s illegal for most employers to discriminate against people with disabilities, such discrimination isn’t uncommon.

But most people who spoke to The Times said they weren’t troubled.

“My cellphone is following me wherever I go,” said Robert Gebhardt, 88, who asks ChatGPT to evaluate the urgency of his symptoms and the appropriateness of his medications given the 15 years of medical records he has uploaded. “Anybody that wants to know anything about me can find out, including my medical data. It’s a fact of life, and I’ve reconciled myself to that.”

Robert Gebhardt

As you know, I am 88 years old and in relatively good health and you have my medical history on file. The question I have is if there is an occasional disorientation, that is if I have an occasional disorientation say like once every month or so is that an evidence of dementia or possibly TIAs or would you know?

ChatGPT

An occasional brief episode of disorientation once a month or so can have several possible explanations, and by itself it does not automatically mean dementia. At your age and with your medical history (including AFib, prior mild stroke, dizziness, blood-pressure variability, medications, and upcoming MRI), there are multiple potential causes, and this absolutely requires a doctor’s evaluation — but here’s how to understand it.

Stephanie Landa, 53, has fed test results into ChatGPT since she received a diagnosis of metastatic appendix cancer last year. She values ChatGPT’s immediate overview of her results, perhaps especially when they are devastating, as when they showed the cancer had spread throughout her abdomen. If she processes bad news before a doctor’s visit, she said, she can use the appointment time more effectively.

For a while, she painstakingly redacted identifying information. But then she decided that, given the prognosis of her aggressive cancer, she didn’t really care.

As for Ms. Kerr, the woman who did not have a pituitary tumor, an endocrinologist couldn’t help after ruling out the tumor, she said, and her primary care doctor has been unable to solve the mystery.

So she has gone back to ChatGPT for new diagnostic suggestions and dietary advice, some of which she has found helpful.

“I know it’s sensitive information,” she said. “But I also feel like I’m not getting any answers from anywhere else.”

Produced by Deanna Donegan.