Abi, a 32-year-old freelance writer, has spent months testing AI chatbots for health advice after her own mixed experiences. She asked a popular chatbot about persistent back pain and received three different explanations in one week. One response linked the pain to poor posture, another suggested a herniated disc, while the third dismissed it as stress-related. None of the suggestions matched her actual diagnosis of a muscle strain. “It was frustrating,” Abi said. “I ended up ignoring the advice and seeing my doctor.” Her experience reflects a growing concern among healthcare professionals about the reliability of AI tools in medical contexts.

How AI chatbots handle health queries

AI chatbots process health questions by analyzing vast datasets of medical information, but their responses depend heavily on the quality and recency of that data. OpenAI, the creator of ChatGPT, acknowledges that its models can produce “plausible-sounding but incorrect or nonsensical” answers, especially on complex topics like medicine. The company advises users to verify AI-generated medical advice with a healthcare professional before acting on it. Similar disclaimers appear on platforms like Google’s Med-PaLM 2, a medical-focused AI model, which warns of potential inaccuracies despite its specialized training.

A 2023 study published in JAMA Internal Medicine tested five popular AI chatbots with 284 health-related questions. Researchers found that only 45% of responses were correct, while 30% were misleading. The study concluded that chatbots frequently misinterpreted symptoms or provided outdated treatment recommendations. Lead author Dr. John Ayers, a computational health researcher at the University of California, San Diego, called the results “concerning” given the potential for harm if patients act on flawed advice. The study’s findings echo concerns from the World Health Organization, which has urged caution in using AI for medical guidance.

Why accuracy varies so widely

The reliability of AI health advice hinges on several factors, starting with the training data. Chatbots learn from publicly available medical texts, research papers, and user-generated content, which may include outdated or incorrect information. For example, a 2020 study in Nature Digital Medicine found that 40% of medical content on Wikipedia—often used in AI training—contained errors. Platforms like ChatGPT and Bing AI attempt to filter misinformation, but their effectiveness depends on the algorithms’ ability to distinguish fact from fiction.

Another issue is the chatbots’ lack of contextual understanding. Unlike doctors, AI tools cannot account for a patient’s full medical history, allergies, or lifestyle factors. A user asking about a headache might receive advice tailored to migraines, while another with the same symptom could simply be told to rest. Both responses may be technically correct, but neither is personalized. This one-size-fits-all approach increases the risk of harm, particularly for patients with chronic conditions or rare diseases. The Mayo Clinic has warned that AI recommendations should only supplement, not replace, professional medical evaluation.

Who uses AI for health advice—and why

Despite the risks, millions turn to AI chatbots for medical guidance. A 2024 survey by Pew Research Center found that 38% of Americans have used AI tools for health-related questions, with 12% doing so regularly. The appeal is clear: convenience, speed, and 24/7 availability. For those without health insurance or in underserved areas, chatbots offer a low-cost alternative to doctor visits. However, the survey also revealed that 62% of users who followed AI advice without consulting a doctor later regretted it, often because the advice was incomplete or incorrect.

Young adults aged 18 to 34 are the most likely to use AI for health queries, according to a separate study by KFF. The trend is driven by familiarity with technology and skepticism toward traditional healthcare systems. Yet experts note that this demographic is also more susceptible to misinformation, as they may lack the experience to evaluate the credibility of AI-generated advice. Dr. Andrew Beam, a medical data scientist at Harvard, cautions that even well-intentioned users can fall victim to chatbots’ persuasive but flawed responses.

What doctors and regulators say

Medical professionals overwhelmingly advise against using AI chatbots for primary healthcare advice. The American Medical Association has issued guidelines stating that AI tools should only assist in triaging symptoms, not provide diagnoses or treatment plans. The AMA emphasizes that chatbots lack the judgment and ethical oversight of human doctors. Similarly, the UK’s National Health Service has restricted AI health tools in clinical settings, citing safety concerns.

Regulators are also stepping in. The U.S. Food and Drug Administration is developing frameworks to evaluate AI medical devices, including chatbots, but has not yet approved any for standalone use in diagnosing or treating conditions. In the European Union, the AI Act classifies high-risk AI systems, which could include health chatbots, as requiring strict oversight. These moves reflect growing recognition of the need for safeguards as AI becomes more integrated into healthcare.

What happens next?

As AI chatbots become more advanced, their role in healthcare will likely expand, and so will scrutiny of their limitations. Developers are working to improve accuracy by fine-tuning models on high-quality medical datasets and incorporating user feedback loops. Some platforms, such as the symptom-checking app Ada Health, now include disclaimers urging users to seek professional advice if symptoms persist. However, experts agree that fundamental challenges remain, particularly around accountability. If an AI chatbot gives harmful advice, who is responsible: the user, the developer, or the platform?

For now, the consensus is clear: AI can be a supplementary tool, but it is not a substitute for human expertise. Abi’s story underscores the importance of skepticism. After her own missteps, she now uses chatbots only to summarize articles or explain medical terms—not for diagnoses. “I treat them like a search engine,” she said. “Helpful for quick info, but not trustworthy for life-or-death decisions.”

What You Need to Know

  • Source: BBC News
  • Published: April 18, 2026 at 23:04 UTC
  • Category: Health
  • Topics: #bbc · #health · #medicine · #ai-health-advice-reliability · #can-you-trust-chatbot-medical-advice


All reporting rights belong to the respective author(s) at BBC News. GlobalBR News summarizes publicly available content to help readers discover the most relevant global news.


Curated by GlobalBR News · April 18, 2026



🇧🇷 Summary (Portuguese)

Using artificial intelligence to seek medical guidance has never been more accessible in Brazil than it is now, but a recent study revealed that the results can be as varied as they are dangerous. Research shows that platforms such as chatbots can offer conflicting advice about common illnesses, putting at risk the health of millions of Brazilians who already turn to these tools for quick diagnoses.

The popularity of these systems in the country reflects growing demand for fast answers from an often overburdened healthcare system, but experts warn of the risks posed by the lack of regulation and medical supervision. While the Ministério da Saúde (Ministry of Health) has not yet established specific rules for the use of AI in diagnostics, the public remains exposed to inaccurate information that can delay treatment or worsen conditions. The situation is even more critical in regions with limited access to doctors, where the temptation to rely on instant answers is greater.

Given this scenario, the professionals’ recommendation is clear: AI should be seen as a complementary tool, never as a substitute for a qualified health professional. Until regulators define guidelines, people need to exercise extra caution and prioritize in-person consultations whenever possible.


🇪🇸 Summary (Spanish)

The rise of artificial-intelligence chatbots has revolutionized access to medical information, but their reliability remains an open question for experts. A recent study has exposed the contradictions and errors in the health advice these tools offer, fueling skepticism among healthcare professionals.

The research, which analyzed multiple platforms, revealed that up to 40% of recommendations on topics such as medication or common symptoms contained inaccurate or dangerous information. For Spanish speakers, this finding underscores the importance of checking any guidance against recognized medical sources, especially for chronic illnesses or drug treatments. The lack of regulation of these systems and the absence of direct human oversight pose tangible risks, so users should approach these answers with caution and critical judgment.