SAN FRANCISCO — For the last few years, the tech industry has convinced people that their artificially intelligent chatbots get better the more data you feed them. The next step is to get users to share their most sensitive information: their health records.

What could go wrong?

Microsoft this week unveiled a tool that will let users share records from multiple health providers with its chatbot, Copilot. The records can then be combined with data gathered by a user’s fitness device, such as an Apple Watch. After analyzing all the information, the chatbot will come up with a high-level overview of health issues for the user.

Microsoft’s announcement echoed moves by Amazon, OpenAI and Anthropic, which began testing similar tools — Health AI, ChatGPT Health and Claude for Healthcare — this year. By collecting health data and offering direct feedback, the companies, whose AI chatbots have made headlines for contributing to some users’ psychosis, isolation and unhealthy habits, are treading into risky territory.

(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied the suit’s claims.)

In interviews, physicians said there might be upsides to chatbot-assisted health care, like helping people gain insight into their health at a time when health care is becoming increasingly unaffordable. But sharing health records with tech companies creates a host of privacy risks. Like past technologies that made people overly anxious about their health, the chatbots could also lead to unnecessary trips to the doctor.

Here’s what you need to know.

How would this work?

On Microsoft’s Copilot website and mobile app, users will be able to click on a “Health” tab and create a profile by answering questions about their age and sex. From there, users can opt to share their health records and data from devices like an Apple Watch, a Fitbit or an Oura sleep tracker.

Users can then prod the chatbot with questions or symptoms by saying things like, “I haven’t been sleeping well.” The chatbot then analyzes the health records and wearable data to make observations, such as sleep trends since a recent hospital visit.

The chatbot can also come up with a “bottom line” summary of health issues to pay attention to, such as sleep deprivation, diabetes and limited physical activity.

Users will initially be able to try Copilot Health for free when it is released this year. Microsoft said it planned to charge a subscription fee to use the tool, but it did not share a price.

What are the potential benefits?

Medical records have been chaotic and cumbersome for patients to navigate because the information can be scattered across various databases used by different health providers. (A primary care physician could struggle to offer feedback on a foot injury, for example, if the patient’s podiatrist used a different record system.) Microsoft’s AI could help connect the dots from many different health providers, along with a user’s fitness device data.

Microsoft said a doctor would probably need hours to manually review all of a person’s medical records and fitness device data to come up with a high-level overview on health. It said Copilot Health could do this in seconds.

“This is about giving consumers and patients incredible insight and intelligence over their own record and helping them navigate very complex challenges and a very complex system that we’ve all created for them,” said Dr. Dominic King, Microsoft’s vice president of health in its AI division.

As health care costs have risen, many Americans are dropping coverage. An AI chatbot could be a low-cost way to help people pay closer attention to their health and research information on symptoms, similar to a web search on a site like WebMD.

What are the risks?

In recent years, cyberattacks have breached hospitals and health care systems. Putting health records in a central place makes that information a much more tempting target for criminals, said Matthew Green, an associate professor of computer science at Johns Hopkins University. A victim’s health data could expose conditions that he or she would want to keep private.

“There is a pot of gold of high-value data that is in one location that people can get,” Green said.

Similarly, law enforcement agencies that want an individual’s health records could go to Microsoft instead of multiple providers, said Mario Trujillo, a data privacy lawyer for the Electronic Frontier Foundation, a digital rights nonprofit. A woman pursuing reproductive health care in a state with an abortion ban could be at higher risk, he added.

Also, the Health Insurance Portability and Accountability Act, or HIPAA, which strictly requires traditional health care providers to protect patient privacy, does not apply to tech companies offering chatbots. So these companies, which are not health care providers even when they offer similar services, could do what they wished with health records, such as use the information to train their AI or show ads related to a user’s health conditions.

Microsoft said people’s health data would be encrypted and would not be used to improve its AI or serve targeted ads. It also said it gave law enforcement agencies access to customer data only in response to valid legal requests.

Is health advice from a chatbot trustworthy?

Microsoft says Copilot Health is meant to help people understand their health and prepare for appointments — not replace a doctor’s expertise. Its news release included a disclaimer that the chatbot “is not intended to diagnose, treat or prevent diseases.”

Dr. Girish Nadkarni, chief AI officer for the Mount Sinai Health System, said it was naive to think that users would not ask a chatbot that had access to all of their medical records for diagnoses and advice.

“Sure, you can include a disclaimer not to use it that way,” he said. “But people are going to use it that way. That’s just human nature.”

So far, research suggests that chatbots are not yet ready for that responsibility.

A study published last month analyzed several chatbots, including those from OpenAI and Meta, and found that they were no better than a web search at guiding users toward the correct diagnoses or helping them determine what they should do next. And the technology posed unique risks, sometimes presenting false information or drastically changing its advice depending on slight changes in the wording of the questions.

These weaknesses have already led to high-profile mistakes. For instance, a 60-year-old man was held for weeks in a psychiatric unit after ChatGPT suggested that he could cut down on salt by eating sodium bromide instead, which caused paranoia and hallucinations.

OpenAI said the current version of ChatGPT was significantly better at answering health questions than the model tested in the study, which has since been phased out. Meta did not respond to a request for comment.

Some new research suggests that even models that are tailored for users’ health questions, like ChatGPT Health, pose risks. When Nadkarni and his colleagues input details from hypothetical medical cases into the model, which was released in January, it missed high-risk emergencies, in one case failing to recommend the emergency room for someone with impending respiratory failure.

Another risk is that a chatbot’s basic summaries of health problems could create anxiety, said Dr. Lisa Piercey, a former health commissioner for Tennessee. A sinus headache this time of year is likely to be a symptom of allergies, but a chatbot could raise the possibility of a more serious condition that spurs an unnecessary visit to the doctor.

“It very well could tell you you’ve got a brain tumor,” she said. “That causes a ton of anxiety.”

Copilot Health has also not yet been studied by independent researchers. Dr. King, of Microsoft, said the chatbot was designed to avoid giving medical advice even in the face of pointed questions and instead offer “guidance and support.” Rather than tell users that they have a specific condition, it may provide a list of possible diagnoses. Or instead of recommending a medication, it may provide some questions that users can ask their doctors.

The company also said it was releasing Copilot Health gradually, testing new features with a small set of users each step of the way, to ensure that the experience remained safe and reliable.