The Rise of AI Doctors: A Cause for Concern?
Google's AI-powered search engine has been found citing YouTube videos as sources for health and medical queries. This raises serious questions about the reliability of such a widely used tool and the risks it may pose.
The Guardian's investigation in January 2026 uncovered a disturbing trend: Google's AI summaries were providing inaccurate and potentially harmful health advice. As a result, Google removed some AI-generated content, including summaries about critical liver function tests that could have misled patients and put their health at serious risk.
A subsequent study revealed that Google's AI Overviews often rely on YouTube, rather than reputable medical websites, to answer health-related queries. That preference has sparked debate about the potential fallout from a tool used by nearly two billion people each month, especially where health is concerned.
AI has become the new search engine, and its impact on healthcare is profound. For years, medical professionals have expressed concerns about people relying on Google searches for health-related issues. Now, with the rise of AI, the situation has escalated, as people turn to AI for answers to even critical health problems.
A study conducted by the search engine optimization platform SE Ranking analyzed over 50,000 health searches in Germany. The findings were striking: YouTube was the single most cited source in AI citations, accounting for 4.43% of all citations. That is 3.5 times more than netdoktor.de, one of the country's largest consumer health portals, and more than twice as frequent as MSD Manuals, a well-established medical reference.
What's even more alarming is that only 34.45% of AI Overviews' citations came from reliable medical sources. Government health institutions and academic journals accounted for a mere 1% of all citations. Perhaps most concerning is that no academic institution, medical association, government health portal, or hospital network came close to YouTube's citation numbers.
Why does this matter? Because YouTube is a general-purpose video platform, not a medical publisher. Anyone can upload content, including life coaches, wellness influencers, and content creators with no medical expertise. This opens the door to potentially dangerous misinformation.
For instance, Google's AI advised pancreatic cancer patients to avoid high-fat foods, which is the exact opposite of what medical experts recommend. Similarly, AI Overviews about women's cancer tests provided incorrect information, potentially leading to the dismissal of genuine symptoms.
The rise of chatbots like ChatGPT has further complicated matters. According to OpenAI, nearly 40 million people worldwide use ChatGPT daily for healthcare advice. In Canada, roughly half of respondents to a Canadian Medical Association (CMA) survey said they consult Google AI summaries and ChatGPT about their health issues.
Those who acted on that AI advice, however, were five times more likely to suffer adverse effects. AI chatbots tend to be overconfident and agreeable, prioritizing helpfulness over honesty and critical reasoning. A 2025 study by the University of Waterloo found that OpenAI's GPT-4 answered health-related questions incorrectly roughly two-thirds of the time.
The public's awareness of AI's shortcomings doesn't seem to deter its use. When faced with long wait times for specialists or limited access to care, turning to ChatGPT for quick answers may seem like a reasonable alternative. But should we be worried about the accuracy of that information and the consequences of following it blindly, without further research or medical consultation? Absolutely.
It's time to have an open discussion about the role of AI in healthcare and the potential risks it poses. What are your thoughts on this matter? Feel free to share your opinions and engage in a thought-provoking conversation in the comments below.