Google is using AI to answer your health questions. Should you trust it?

Do you have a headache, or is it a sinus infection? What does a stress fracture feel like? Should the pain in your chest worry you? The answers to those Google queries may have been written by artificial intelligence.

This month, Google rolled out a tool called AI Overviews. It relies on generative AI, a type of machine-learning technology trained on data from across the internet, to produce conversational answers to some search queries within seconds.

In the weeks since the tool's release, users have encountered a wide range of errors and odd responses across many topics. But experts said the stakes are especially high for health queries. The technology could nudge people toward healthier habits or needed medical care, but it can also fabricate information. And if its answers are shaped by websites that lack a scientific grounding, it may offer advice that contradicts medical guidance or endangers a person's health.

The system has already been shown to produce poor answers that appear to be based on flawed sources. Asked "how many rocks should I eat," for example, AI Overviews told some users to eat at least one rock a day for vitamins and minerals. (The advice came from the satirical website The Onion.)

"You can't trust everything you read," said Dr. Karandeep Singh, chief health AI officer at UC San Diego Health. The source of your information is crucial when it comes to health, he said.

Health searches have "additional guardrails," said Hema Budaraju, a senior director of product management at Google who helps lead work on AI Overviews, though she declined to describe them in detail. Searches deemed explicit or dangerous, or that suggest someone is in a vulnerable situation, such as self-harm, do not trigger AI summaries, she said.

Google declined to share a comprehensive list of the websites that support the content in AI Overviews, but said the tool works in concert with the Google Knowledge Graph, an existing information system that has drawn billions of facts from hundreds of sources.

Ask, for example, "Is chocolate healthy?" and Google's response pulls data from studies on cardiovascular health, mental health and other topics.

Health questions like these are often answered by consulting reliable sources. But in this case, the answer also cites the Italian chocolate and gelato company Venchi.

A search for "Is chocolate healthy for you?" likewise returned results from the website of ZOE, a company offering nutrition apps and at-home "gut intelligence tests."

For health queries, the updated search results do list some sources, frequently sites like the World Health Organization, WebMD, the Mayo Clinic and PubMed, a repository of scientific research. But the list is not exhaustive: the program may also pull content from e-commerce sites, Reddit, blogs and Wikipedia. And it does not tell users which source a given fact came from.

With a conventional search result, many consumers could immediately tell a candy company apart from a reputable medical website. But a single block of text blending information from multiple sources can be confusing.

"And that's if people are even looking at the source," said Dr. Seema Yasmin, head of the Stanford Health Communication Initiative. "I don't know if people are looking, or if we've really taught them adequately to look," she added. Based on her own research into misinformation, she was skeptical that the average user would dig deeper than a surface-level answer.

As for the accuracy of the chocolate answer, Dr. Dariush Mozaffarian, a cardiologist and professor of medicine at Tufts University, said it summarized research on chocolate's health benefits and included some details that were broadly correct. But, he said, it attached no qualifiers to the evidence and did not distinguish between stronger evidence from randomized trials and weaker evidence from observational studies.

Chocolate does contain antioxidants, Mozaffarian said. But the idea that eating chocolate might help stave off memory loss? That is far from settled, and the claim "needs a lot of caveats," he said. Listing such assertions side by side gives the false impression that some are better established than they actually are.

Even when the science behind a particular answer hasn't changed, the answers themselves may change as AI advances.

In a statement, a Google spokesperson said the company tried to display disclaimers on responses where they were needed, including notes that the information should not be treated as medical advice.

It is unclear exactly how AI Overviews assess the quality of evidence, or whether they account for conflicting research findings, such as those on the health effects of coffee. "Science isn't a collection of static facts," Yasmin said. She and other experts questioned whether the tool would draw on outdated scientific findings that have since been refuted, or that do not reflect the latest understanding of a topic.

"As physicians, we constantly have to make critical decisions and discriminate between sources of quality," Dr. Danielle Bitterman, a physician-scientist in artificial intelligence at Brigham and Women's Hospital and Dana-Farber Cancer Institute, said. "They are parsing the evidence."

If tools like AI Overviews are to play that role, she said, "we need to better understand how they would navigate across different sources and how they apply a critical lens to arrive at a summary."

Experts were alarmed by these unknowns, noting that the new format places the AI Overview response above individual links to established medical sites like the Cleveland Clinic and the Mayo Clinic, which have long appeared at the top of results for many health searches.

A Google spokesperson said AI Overviews match or summarize the content that appears in search results and are meant to supplement it, not replace it. Rather, the spokesperson explained, they are intended to help people make sense of the information that is available.

The Mayo Clinic declined to comment on the new responses. A spokeswoman for the Cleveland Clinic said people seeking health information should "directly search known and trusted sources" and contact a health care provider if they have symptoms.

In a statement, a spokesperson for Scripps Health, the California-based health care system that is cited in some AI Overviews summaries, said "citations in Google's AI generated responses could be helpful in that they establish Scripps Health as a reputable source of health information."

Nevertheless, the spokesperson stated, "we do have concerns that we cannot vouch for the content produced through AI in the same way we can for our own content, which is vetted by our medical professionals."

Experts said that for medical queries, how an answer is presented to the user matters as much as its accuracy. Take the question "Am I having a heart attack?" The AI response offered a helpful rundown of symptoms, said Dr. Richard Gumina, director of cardiovascular medicine at Ohio State University Wexner Medical Center.

But, he said, he had to scroll through that lengthy list of symptoms before the text advised him to call 911. Gumina also searched "Am I having a stroke?" to see whether the tool would respond with more urgency. It did, telling users to call 911 immediately. He said he would tell any patient showing signs of a heart attack or stroke to seek help right away.

Experts advised people seeking health information to approach the new responses with caution. In essence, they urged users to heed the fine print beneath some AI Overviews answers: "This is for informational purposes only," the note reads, advising users to consult a professional for medical advice or diagnosis and cautioning that generative AI is experimental.