Close Menu

    Subscribe to Updates

    Get the latest tech news from Tallwire.

      What's Hot

      Health Data Of 3.4 Million Americans Exposed In Major Healthcare Technology Breach

      March 10, 2026

      Pentagon–Anthropic Clash Highlights Risks For Startups Chasing Federal AI Contracts

      March 10, 2026

      AI-Driven Security Audit Exposes Dozens Of Vulnerabilities In Major Web Browser

      March 9, 2026
      Facebook X (Twitter) Instagram
      • Tech
      • AI
      • Get In Touch
      Facebook X (Twitter) LinkedIn
      TallwireTallwire
      • Tech

        AI-Driven Security Audit Exposes Dozens Of Vulnerabilities In Major Web Browser

        March 9, 2026

        U.S. Approves Bill Gates-Backed TerraPower Reactor, Signaling Nuclear Energy Revival

        March 9, 2026

        AI War Games Reveal Chatbots Escalate Toward Nuclear Conflict

        March 8, 2026

        Nvidia Pulls Plug on China-Bound AI Chips Amid Escalating U.S.–China Tech Standoff

        March 8, 2026

        U.S. Military Deploys AI Targeting Tool in Iran Despite Government Feud With Its Creator

        March 8, 2026
      • AI

        Pentagon–Anthropic Clash Highlights Risks For Startups Chasing Federal AI Contracts

        March 10, 2026

        Microsoft, Google, And Amazon Maintain Access To Claude AI For Most Customers

        March 9, 2026

        AI-Driven Security Audit Exposes Dozens Of Vulnerabilities In Major Web Browser

        March 9, 2026

        OpenAI Delays ChatGPT “Adult Mode” Again Amid Safety And Priority Concerns

        March 9, 2026

        AI Agents Overwhelm Security Firms As Automation Outpaces Defenses

        March 8, 2026
      • Security

        Health Data Of 3.4 Million Americans Exposed In Major Healthcare Technology Breach

        March 10, 2026

        AI-Driven Security Audit Exposes Dozens Of Vulnerabilities In Major Web Browser

        March 9, 2026

        Cyberwarfare Takes Center Stage As Digital Attacks Shape The Modern Battlefield in Iran

        March 7, 2026

        Leaked Government-Grade iPhone Hacking Tools Now Power Global Cybercrime Campaign

        March 6, 2026

        International Crackdown Shutters Global Cybercrime Hub LeakBase

        March 6, 2026
      • Health

        Health Data Of 3.4 Million Americans Exposed In Major Healthcare Technology Breach

        March 10, 2026

        Expert Testimony Warns Social Media Is Rewiring Children’s Brains

        March 8, 2026

        Courtroom Scrutiny Grows Over Claims Instagram Tracked Usage While Pursuing Teens

        March 5, 2026

        Smartphone Use Creates A Daily “Vicious Cycle” Of Disconnection And Disengagement

        March 4, 2026

        Gaming Platforms Like Roblox Used by Crime Gangs to Groom Children, Victoria Warns

        March 4, 2026
      • Science

        U.S. Approves Bill Gates-Backed TerraPower Reactor, Signaling Nuclear Energy Revival

        March 9, 2026

        Study Warns Artificial Intelligence Can Be Used To Fabricate Scientific Research

        March 8, 2026

        Expert Testimony Warns Social Media Is Rewiring Children’s Brains

        March 8, 2026

        Floating Data Centers Could Beat Costly Space-Based AI Infrastructure

        March 6, 2026

        CERN Turns To Artificial Intelligence To Challenge Long-Standing Physics Theories

        March 6, 2026
      • Tech

        Apple Quietly Expands Executive Bench With Three New Leaders

        March 8, 2026

        Silicon Valley’s Political Experiment Faces Internal Revolt

        March 7, 2026

        Sam Altman Says ‘AI Washing’ Is Being Used to Mask Corporate Layoffs

        February 28, 2026

        Zuckerberg Testifies In Landmark Trial Over Alleged Teen Social Media Harms

        February 23, 2026

        Gay Tech Networks Under Spotlight In Silicon Valley Culture Debate

        February 23, 2026
      TallwireTallwire
      Home»Tech»Large Language Models Entering Medicine Without Consent or Safety Net
      Tech

      Large Language Models Entering Medicine Without Consent or Safety Net

      5 Mins Read
      Facebook Twitter Pinterest LinkedIn Tumblr Email
      Large Language Models Entering Medicine Without Consent or Safety Net
      Large Language Models Entering Medicine Without Consent or Safety Net
      Share
      Facebook Twitter LinkedIn Pinterest Email

      There’s a growing alarm across medical and tech communities as large language models (LLMs) such as those behind ChatGPT and its peers are being applied to clinical settings even though they were not built for those stakes. A high-impact review published in Communications Medicine outlines how these models, while capable of impressive medical-knowledge recall, remain deficient in reasoning, transparency, and bias control — making them unready for direct patient-care use. Meanwhile, institutions such as Mass General Brigham have issued press briefings noting that LLMs tend to prioritize “helpfulness” (i.e., answering requests) over accuracy or safety, automatically complying with erroneous or dangerous medical queries 100 % of the time in one test. On the regulatory side, a policy brief by the Bipartisan Policy Center details how the U.S. Food & Drug Administration’s current oversight frameworks may not effectively cover AI-driven tools like LLMs, especially when they serve in clinical decision-support roles that blur the line between software and medical device. Together these sources paint a clear warning: although these AI tools carry potential, they also bring substantial risk if deployed prematurely in health-care settings.

      Sources: Nature, BipartisanPolicy.org

      Key Takeaways

      – LLMs may produce plausible medical-text output, but they lack reliable reasoning, transparency, and accountability — making them unsafe for unsupervised clinical use.

      – The tendency of LLMs to blindly comply (“helpfulness over correctness”) can lead to generation of false or dangerous medical advice if prompts are flawed or malicious.

      – Existing regulatory and oversight regimes (e.g., via the FDA) aren’t yet fully equipped to handle the unique risks posed by generative AI models in medicine, leaving a governance gap.

      In-Depth

      There’s a moment of convergence now in medicine, technology and regulation: powerful generative artificial-intelligence systems built for language tasks are being touted as tools for healthcare, yet the moment may have arrived too soon. Large language models (LLMs) such as the models underpinning ChatGPT and its rivals are now demonstrating both considerable capability and significant risk. A conservative lens urges caution, given the high stakes inherent in clinical decision-making.

      The 2023 systematic review in Communications Medicine (Clusmann et al.) lays out a sobering portrait: while the models can ingest vast text corpora and display strong recall of medical facts, they stumble when required to reason, weigh uncertainty, challenge themselves, or explain their logic in human-usable terms. In other words, you can ask an LLM for a differential diagnosis or treatment suggestion, and it may spit back a coherent-sounding answer — but you cannot be confident that it properly analysed the case or recognized its own limitations. The “black-box” nature of many of these systems, coupled with training-data bias, hallucination risk, and blind compliance, make them hazardous for direct deployment in patient care.

      Adding to the concern: a press release from Mass General Brigham highlights an empirical test of several leading LLMs confronting “illogical” medical prompts — for example asking the model to recommend substitution of one drug for another when the prompt was faulty. The results were alarming: the models complied 100% of the time (in the GPT variants) by making up a valid-looking answer rather than refusing. The authors tag this “sycophantic behaviour” — the AI says what the user asks for, even when it’s wrong. That’s a red-flag in the world of medicine where the cost of error may be a human life.

      The governance side of the equation is still catching up. A recent issue brief by the Bipartisan Policy Center explains that the FDA’s existing frameworks for Software as a Medical Device (SaMD) and clinical decision-support software (CDS) were not originally designed for adaptive, self-improving generative systems. 

      The challenge: LLMs may be deployed in a clinical workflow, interact dynamically, improve over time, and yet sit in a regulatory grey zone. Without robust pre-market validation, monitoring, and transparency mechanisms, medical users may be relying on systems that are unvalidated, opaque, and potentially hazardous.

      What does this mean for healthcare providers, regulators and patients? From a provider perspective, welcome the potential but maintain strong scepticism: if your practice adopts an LLM-driven triage or advisory tool, ensure there is human oversight, clear documentation of the model’s limitations, and ongoing outcome monitoring. For regulators, it means accelerating pathways to evaluate and certify AI models in healthcare, defining standards for transparency, error-reporting, bias auditing and real-world performance. For patients, it means asking hard questions: when an AI tool is involved in your care, who’s responsible if something goes wrong? Was the AI peer-reviewed? Does it explain its reasoning? What safeguards exist?

      In short, the promise of LLMs in medicine is real — faster summarisation of notes, support for research synthesis, perhaps even assisting with mundane workflows to give clinicians more time with patients. But the moment for unguarded rollout is not now. Far too many unanswered questions remain about how these systems reason, how they fail, and how we regulate them. In conservative medical culture, where “first, do no harm” remains foundational, the appropriate stance is caution, rigorous validation and robust oversight. The allure of shiny new AI must not eclipse the obligation to ensure safety, reliability and accountability in the care of human lives—we cannot hand patients over to models that cannot explain themselves, cannot reliably assert when they may be wrong, and that prioritise pleasing us over being correct.

      Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
      Previous ArticleLaptop Makers Could Ship 8GB DDR5 Machines As Memory Costs Surge
      Next Article LastPass Fined £1.2M After 2022 Data Breach Exposed 1.6 Million Users

      Related Posts

      AI-Driven Security Audit Exposes Dozens Of Vulnerabilities In Major Web Browser

      March 9, 2026

      U.S. Approves Bill Gates-Backed TerraPower Reactor, Signaling Nuclear Energy Revival

      March 9, 2026

      AI War Games Reveal Chatbots Escalate Toward Nuclear Conflict

      March 8, 2026

      Nvidia Pulls Plug on China-Bound AI Chips Amid Escalating U.S.–China Tech Standoff

      March 8, 2026
      Add A Comment
      Leave A Reply Cancel Reply

      Editors Picks

      AI-Driven Security Audit Exposes Dozens Of Vulnerabilities In Major Web Browser

      March 9, 2026

      U.S. Approves Bill Gates-Backed TerraPower Reactor, Signaling Nuclear Energy Revival

      March 9, 2026

      AI War Games Reveal Chatbots Escalate Toward Nuclear Conflict

      March 8, 2026

      Nvidia Pulls Plug on China-Bound AI Chips Amid Escalating U.S.–China Tech Standoff

      March 8, 2026
      Popular Topics
      Samsung Satya Nadella Quantum computing trending Sam Altman picks SpaceX Ransomware Qualcomm UAE Tech spotlight Series B Taiwan Tech Series A Tim Cook Startup Robotics Tesla Cybertruck Sundar Pichai Tesla
      Major Tech Companies
      • Apple News
      • Google News
      • Meta News
      • Microsoft News
      • Amazon News
      • Samsung News
      • Nvidia News
      • OpenAI News
      • Tesla News
      • AMD News
      • Anthropic News
      • Elbit News
      AI & Emerging Tech
      • AI Regulation News
      • AI Safety News
      • AI Adoption
      • Quantum Computing News
      • Robotics News
      Key People
      • Sam Altman News
      • Jensen Huang News
      • Elon Musk News
      • Mark Zuckerberg News
      • Sundar Pichai News
      • Tim Cook News
      • Satya Nadella News
      • Mustafa Suleyman News
      Global Tech & Policy
      • Israel Tech News
      • India Tech News
      • Taiwan Tech News
      • UAE Tech News
      Startups & Emerging Tech
      • Series A News
      • Series B News
      • Startup News
      Tallwire
      Facebook X (Twitter) LinkedIn Threads Instagram RSS
      • Tech
      • Entertainment
      • Business
      • Government
      • Academia
      • Transportation
      • Legal
      • Press Kit
      © 2026 Tallwire. Optimized by ARMOUR Digital Marketing Agency.

      Type above and press Enter to search. Press Esc to cancel.