International experts are working on the world’s first safety guide for public use of AI health chatbots. With the rise of Large Language Models, millions of people are already turning to systems like ChatGPT, Copilot, Claude, and Gemini to interpret symptoms and simplify medical jargon.
People are also increasingly turning to nutrition and natural solutions rather than relying solely on medical support on their health journey, as seen in the Food is Medicine movement. This may prompt them to seek nutrition advice through chatbots when medical experts cannot answer their questions, since physicians in the US receive minimal nutrition training.
Nutrition Insight speaks with the corresponding author to learn about the project, which is seeking public perspectives and recruiting collaborators to help shape the guide.
The team points out that AI chatbots can often hallucinate health advice. “It is hard to know how often chatbots hallucinate, partly because some hallucinations are extremely obvious and others are more subtle,” says Dr. Joseph Alderman, National Institute for Health and Care Research (NIHR) Clinical Lecturer at the University of Birmingham, UK.
“What is most important is that users are aware that this is something that chatbots are prone to and that they do not rely entirely on chatbots for advice or information. It will be important to verify information with trusted sources of health and well-being information — this is the sort of guidance we hope to be able to give when we publish our Health Chatbot Users’ Guide.”
The issue of knowing what information is reliable is already prevalent. A recent national survey found that nearly half of US citizens rely on unaccredited sources, social media, and AI-generated recommendations for nutrition advice rather than trained professionals. The survey also flagged that consumers struggle to differentiate reliable information from misinformation.
Simple guide for safe use
Using AI chatbots also tends to create an “echo chamber effect,” which mirrors users’ biases rather than challenging them, warns the project team in their Nature Health correspondence.
We ask Alderman what this might mean for common but ungrounded nutrition beliefs, such as high protein or carbs causing inflammation, and how users could prompt AI for evidence-based pushback.
“There are lots of tips out there about ‘prompt engineering,’ or building AI chatbot queries in such a way that they are more accurate and helpful. This can help users get better answers from AI chatbots.”
“Unfortunately, though, many of these are written in complex and technical language. When we build our guide, we will help share the latest findings from research into AI safety and communicate this clearly in a way that everyone can understand,” he assures.
A recent study warned that teens using AI models for meal plans and dietary advice may be eating too few calories compared to those following plans created by dietitians. The tools undercalculated macronutrients such as carbohydrates while overcalculating proteins and lipids. The researchers noted that teens who are unhappy with their bodies are at a higher risk of developing unhealthy eating behaviors.
Alderman says the AI health chatbot guide will focus on ensuring users have what they need to use chatbots safely and maximize health benefits.
Moreover, the team warns of the potential for AI to reinforce biases that amplify health inequalities. It may also pose threats to the data privacy of personal health information.
Nutrition Insight previously explored how the AI revolution is transforming personalized nutrition, giving people greater access to health solutions with Qina. However, the company warned that without ethical frameworks, the technology risks deepening health inequalities and cultural gaps.
Qina experts stated that for AI to be beneficial and safe, it must be grounded in robust, up-to-date scientific evidence and developed by multidisciplinary, diverse teams. “This ensures inclusivity, accuracy, and ethical integrity. Equally important is the need for diverse datasets to train these systems, which is key to avoiding bias and ensuring global relevance.”
Meeting the public where they are
Some patients are increasingly consulting chatbots before dietitians or physicians. Alderman points out that several patients have shared that they find these tools helpful for learning what questions to ask health experts during appointments.
“When we build The Health Chatbot Users’ Guide, we will ask the public about the ways they find AI chatbots useful and techniques they use to get the best responses. We hope that this will help users share what works for them, meaning that other people can try this for themselves.”
Furthermore, the guide emphasizes harm reduction over banning AI use. Alderman explains that since many are already using AI chatbots, it could be helpful for them to check the references used to generate responses.
“A really important message, though, is that chatbots can get things wrong, and so people should not rely on them for all of their advice,” he underscores. “In some cases, it will be important to visit other trusted sources of health information, such as the WHO, the Centers for Disease Control and Prevention, the National Health Service (NHS), or others.”
Separate research has found that AI tools can create educational nutrition plans or support diet planning. However, these cannot yet replace professional dietitians, underscoring the need for human oversight to ensure personalized care and safety.
The project also involves researchers from the UK’s University Hospitals Birmingham NHS Foundation Trust and the NIHR Birmingham Biomedical Research Centre alongside 20 institutions worldwide.
