Why AI Can’t Practice Nursing: The Hidden Dangers of AI in Healthcare | Opinion
This is an opinion article contributed by Emory University.
In an age where artificial intelligence is shaping the way we access knowledge, it’s easy to fall into the trap of believing that faster information means better information. Google’s AI Overview feature, a tool designed to summarize answers from across the web, has already become a staple in quick online searches. When it comes to essential health information, however, “quick” can turn deadly.
AI Makes Errors
As a healthcare professional in training and someone deeply immersed in patient safety conversations, I was alarmed by a real example with potentially fatal consequences: Vincristine is a chemotherapy drug used in many pediatric and adult cancer treatment protocols. For decades, it has been crystal clear that vincristine is to be administered intravenously only. If it is mistakenly administered intrathecally (into the spinal fluid), the outcome is catastrophic. Patients develop ascending paralytic neurotoxicity, leading to respiratory failure, coma, and death.
There is no reversal. It is considered a “never event”: an always-reportable medical error that should never occur.
And yet, when you Google “intrathecal vincristine,” Google’s AI Overview confidently states, “Intrathecal vincristine is a chemotherapy medication that is administered directly into the spinal fluid.”
This is not a minor oversight. This is a lethal statement presented with algorithmic certainty, positioned above official clinical guidelines, safety warnings, and peer-reviewed data. Only after scrolling further does one encounter the NIH statement clearly warning, “Vincristine has a high neurotoxicity level. If given intrathecally by accident, it can cause ascending radiculomyeloencephalopathy, which is almost always fatal.”
Even more interesting? The inconsistency of disclaimers. The vincristine summary carries none. Yet if you Google the medication name “Desplex,” a medication that was discontinued in the 1970s, Google appends this to the end of the AI summary:
“Disclaimer: This information is for general knowledge and informational purposes only and does not constitute medical advice. It is essential to consult with a qualified healthcare professional for any health concerns or before making any decisions related to your health or treatment.”
Real Nurses Use AI Summaries
It may be difficult to believe, but real nurses do use AI summaries. Dr. Erica Davis, BSN, MS, PhD, RN, whose career spans primary care, chronic disease management, and nursing education, confirmed as much: “Yes, I have seen both nurses and patients use AI-generated summaries for medical knowledge. Even students will use them sometimes.”
Nurses and nursing educators cannot let their guard down. The reality is, we are often the last line of defense between patients and medical errors. We are the ones verifying orders, catching near-misses, and educating confused family members who come in clinging to something they read online. Increasingly, what they read is no longer a blog or a subreddit; it’s an AI-generated “medical summary” formatted to look authoritative.
AI Tools Should Never Substitute for Human Wisdom
Nurse scientist Dr. Roxana Chicas, PhD, RN, understands this risk deeply. Dr. Chicas is a researcher developing an AI-informed wearable biopatch designed to protect farm workers from heat-related illness. Grounded in both community advocacy and lived experience, she has a firsthand view of how misinformation has disproportionately harmed vulnerable populations.
“AI tools can be helpful for general information or learning,” she reminds us, “but they should never replace verified clinical guidelines, peer-reviewed research, or consultation with experienced colleagues when making care decisions that impact patient safety.”
Misinformation doesn’t just confuse, she continues; it adds an ethical and emotional burden to clinical work. “When patients come in influenced by inaccurate digital content, nurses must work to re-educate and rebuild trust, treating not just the illness but the digital fallout surrounding it.”
While AI can guide clinical thinking, it is never a substitute for judgment developed through education and preceptorship. “AI is helpful,” Dr. Davis notes, “but by no means a substitute for human wisdom and clinical judgment gained through training and experience.”
AI, Nursing, and Ethics
Nurses are innovators by nature. We have redesigned workflows, created safety checklists, developed protocols, and championed patient advocacy in every corner of healthcare. AI now needs that same nursing imprint. Technology teams can code an algorithm, but only a clinician understands why a single misplaced phrase, such as “administered intrathecally,” can be catastrophic.
This is where the ethical dimension of AI becomes critical. Ethics scholar and acute care nurse practitioner Dr. Chelsea O. P. Hagopian, DNP, APRN, AGACNP-BC, whose scholarship centers on informed consent, nursing ethics, and public-facing health communication, warns that AI is not just a technical tool; it is a communication actor that shapes perceived safety. “That kind of statement of certainty implies acceptability,” she explains.
Her point is profound. If AI states misinformation confidently, it doesn’t just risk error – it undermines ethical patient participation by making dangerous practices sound standard. That is why both practicing nurses and students need to root their clinical decisions not just in knowledge, but in “information literacy confidence,” or the ability to recognize when a source is credible and when it is not.
Navigating AI With Patients
And yet, when patients reference something they “Googled,” shutting them down is not the answer. “Tell me more – that’s my response,” says Dr. Hagopian. “I dislike it when clinicians are like ‘don’t do that’ or ‘only come to me’; it’s horrible because it disempowers someone to be able to participate in their own care.”
By responding this way, she models a nursing ethic of relational communication, one that doesn’t shame patients for seeking information but guides them back toward safe, evidence-based care.
We must advocate for clinical oversight in algorithm training, incorporate patient safety language into AI models, and integrate digital discernment into nursing education—not as an optional tech literacy elective, but as a core safety competency. As Dr. Davis points out, this includes actively involving nurses in AI design.
“Nurses should partner with technologists to ensure summaries are accurate and clinically safe,” she says.
AI Is the Future of Healthcare
AI will shape the future of healthcare communication. The question is not whether it will happen, but whether nurses will be present to shape its ethics, language, and safety parameters.
Whether you are a nursing student memorizing medication safety protocols, a clinical nurse catching errors at 3 AM, or a future APRN, educator, or policy advocate, your clinical judgment and your voice are what stand between algorithmic assumptions and patient safety. AI will speak. The critical question is: who will teach it how?