Facefam ArticlesFacefam Articles
  • webmaster
    • How to
    • Developers
    • Hosting
    • monetization
    • Reports
  • Technology
    • Software
  • Downloads
    • Windows
    • android
    • PHP Scripts
    • CMS
  • REVIEWS
  • Donate
  • Join Facefam
Search

Archives

  • August 2025
  • July 2025
  • June 2025
  • May 2025
  • April 2025
  • March 2025
  • January 2025
  • December 2024
  • November 2024

Categories

  • Advertiser
  • AI
  • android
  • betting
  • Bongo
  • Business
  • CMS
  • cryptocurrency
  • Developers
  • Development
  • Downloads
  • Entertainment
  • Entrepreneur
  • Finacial
  • General
  • Hosting
  • How to
  • insuarance
  • Internet
  • Kenya
  • monetization
  • Music
  • News
  • Phones
  • PHP Scripts
  • Reports
  • REVIEWS
  • RUSSIA
  • Software
  • Technology
  • Tips
  • Tragic
  • Ukraine
  • Uncategorized
  • USA
  • webmaster
  • webmaster
  • Windows
  • Women Empowerment
  • Wordpress
  • Wp Plugins
  • Wp themes
Facefam 2025
Notification Show More
Font ResizerAa
Facefam ArticlesFacefam Articles
Font ResizerAa
  • Submit a Post
  • Donate
  • Join Facefam social
Search
  • webmaster
    • How to
    • Developers
    • Hosting
    • monetization
    • Reports
  • Technology
    • Software
  • Downloads
    • Windows
    • android
    • PHP Scripts
    • CMS
  • REVIEWS
  • Donate
  • Join Facefam
Have an existing account? Sign In
Follow US
Technologywebmaster

OpenAI’s GPT-5 Touts Medical Benchmarks and Mental Health Guidelines

Ronald Kenyatta
Last updated: August 11, 2025 6:59 pm
By
Ronald Kenyatta
ByRonald Kenyatta
Follow:
Share
7 Min Read
SHARE

Contents
GPT-5 responds to sensitive safety questions in a more nuanced wayOpenAI’s HealthBench tested GPT-5 against real doctorsOpenAI adjusted responses to mental health questions
OpenAI CEO Sam Altman speaks on the Aug. 7 livestream at which the AI model GPT-5 was announced.
OpenAI CEO Sam Altman speaks on the Aug. 7 livestream at which the AI model GPT-5 was announced. Screenshot: TechRepublic

With hallucinations and misinformation still present in generative AI, what do OpenAI’s attempts to mitigate these downsides in GPT-5 say about the state of large language model assistants today? Generative AI has become increasingly mainstream, but concerns about its reliability remain.

“This [the AI boom] isn’t only a global AI arms race for processing power or chip dominance,” said Bill Conner, chief executive officer of software company Jitterbit and former advisor to Interpol, in a prepared statement to TechRepublic. “It’s a test of trust, transparency, and interoperability at scale where AI, security and privacy are designed together to deliver accountability for governments, businesses and citizens.”

GPT-5 responds to sensitive safety questions in a more nuanced way

OpenAI safety training team lead Saachi Jain discussed both reducing hallucinations and addressing “mitigating deception” in GPT-5 during the release livestream last Thursday. She defined deception in GPT-5 as occurring when the model fabricates details about its reasoning process or falsely claims it has completed a task.

An AI coding tool from Replit, for example, produced some odd behaviors when it attempted to explain why it deleted an entire production database. When OpenAI demonstrated GPT-5, the presentation included examples of medical advice and a skewed chart shown for humor.

“GPT-5 is significantly less deceptive than o3 and o4-mini,” Jain said.

HealthBench Hard Hallucinations Inaccuracies on challenging conversations
HealthBench Hard Hallucinations Inaccuracies on challenging conversations. The hallucination rate for GPT-5 is much lower than 03 or 4o. Image: OpenAI livestream of GPT-5

OpenAI has changed the way the model assesses prompts for safety considerations, reducing some opportunities for prompt injection and accidental ambiguity, Jain said. As an example, she demonstrated how the model answers questions about lighting pyrogen, a chemical used in fireworks.

The formerly cutting-edge model o3 “over-rotates on intent” when asked this question, Jain said. o3 provides technical details if the request is framed neutrally, or refuses if it detects implied harm. GPT-5 uses a “safe completions” safety measure instead that “tries to maximize helpfulness within safety constraints,” Jain said. In the prompt about lighting fireworks, for example, that means referring the user to the manufacturer’s manuals for professional pyrotechnic composition.

“If we have to refuse, we’ll tell you why we refused, as well as provide helpful alternatives that will help create the conversation in a more safe way,” Jain said.

The new tuning does not eliminate the risk of cyberattacks or malicious prompts that exploit the flexibility of natural language models. Cybersecurity researchers at SPLX conducted a red team exercise on GPT-5 and found it to still be vulnerable to certain prompt injection and obfuscation attacks. Among the models tested, SPLX reported GPT-4o performed best.

OpenAI’s HealthBench tested GPT-5 against real doctors

Consumers have used ChatGPT as a sounding board for physical and mental health concerns, but its advice still carries more caveats than Googling for symptoms online. OpenAI said GPT-5 was trained in part on data from real doctors working on real-world healthcare tasks, improving its answers to health-related questions. The company measured GPT-5 using HealthBench, a rubric-based benchmark developed with 262 physicians to test the AI on 5,000 realistic health conversations. GPT-5 scored 46.2% on HealthBench Hard, compared to o3’s score of 31.6%.

In the announcement livestream, OpenAI CEO Sam Altman interviewed a woman who used ChatGPT to understand her biopsy report. The AI helped her decode the report into plain language and make a decision on whether to pursue radiation treatment after doctors didn’t agree on what steps to take.

However, consumers should remain cautious about making major health decisions based on chatbot responses or sharing highly personal information with the model.

Sample fictional health question
Sample fictional health question for GPT-5. Image: Corey Noles/TechnologyAdvice

OpenAI adjusted responses to mental health questions

To reduce risks when users seek mental health advice, OpenAI added guardrails to GPT-5 to prompt users to take breaks and to avoid giving direct answers to major life decisions.

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI staff wrote in an Aug. 4 blog post. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”

This growing trust in AI has implications for both personal and business use, said Max Sinclair, chief executive officer and co-founder of search optimization company Azoma, in an email to TechRepublic.

“I was surprised in the announcement by how much emphasis was put on health and mental health support,” he said in a prepared statement. “Studies have already shown that people put a high degree of trust in AI results – for shopping even more than in-store retail staff. As people turn more and more to ChatGPT for support with the most pressing and private problems in their lives, this trust of AI is only likely to increase.”

At Black Hat, some security experts find AI is accelerating work to an unsustainable pace. 

TAGGED:BenchmarksGPT5GuidelinesHealthMedicalMentalOpenAIsTouts
Share This Article
Facebook Whatsapp Whatsapp Email Copy Link Print
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article Train Your Team in AI-Powered Pen Testing for Just $19.99 Train Your Team in AI-Powered Pen Testing for Just $19.99
Next Article NVIDIA Responds to Demo of First Rowhammer Attack on GPUs NVIDIA, AMD to Hand Over 15% of China AI Revenue to US Government
Leave a review

Leave a Review Cancel reply

Your email address will not be published. Required fields are marked *

Please select a rating!

Meta Strikes $10 Billion Cloud Deal With Google to Boost AI Capacity
NVIDIA CEO Dismisses Chip Security Allegations as China Orders Firms to Halt Purchases
Anthropic Folds Claude Code Into Business Plans With Governance Tools
Google Claims One Gemini AI Prompt Uses Five Drops of Water
Generate AI Business Infographics without the Fees

Recent Posts

  • Meta Strikes $10 Billion Cloud Deal With Google to Boost AI Capacity
  • NVIDIA CEO Dismisses Chip Security Allegations as China Orders Firms to Halt Purchases
  • Anthropic Folds Claude Code Into Business Plans With Governance Tools
  • Google Claims One Gemini AI Prompt Uses Five Drops of Water
  • Generate AI Business Infographics without the Fees

Recent Comments

  1. https://tubemp4.ru on Best Features of PHPFox Social Network Script
  2. Вулкан Платинум on Best Features of PHPFox Social Network Script
  3. Вулкан Платинум официальный on Best Features of PHPFox Social Network Script
  4. Best Quality SEO Backlinks on DDoS Attacks Now Key Weapons in Geopolitical Conflicts, NETSCOUT Warns
  5. http://boyarka-inform.com on Comparing Wowonder and ShaunSocial

You Might Also Like

IT Leader’s Guide to the Metaverse

August 21, 2025
State of AI Adoption in Financial Services: A TechRepublic Exclusive
Technologywebmaster

State of AI Adoption in Financial Services: A TechRepublic Exclusive

August 21, 2025
AI Underperforms in Reality, and the Stock Market is Feeling It
Technologywebmaster

AI Underperforms in Reality, and the Stock Market is Feeling It

August 21, 2025
Google Shows Off Pixel 10 Series and Pixel Watch 4
Technologywebmaster

Google Shows Off Pixel 10 Series and Pixel Watch 4

August 21, 2025
NVIDIA & NSF to Build Fully Open AI Models for Science
Technologywebmaster

NVIDIA & NSF to Build Fully Open AI Models for Science

August 20, 2025
Previous Next
Facefam ArticlesFacefam Articles
Facefam Articles 2025
  • Submit a Post
  • Donate
  • Join Facefam social
Welcome Back!

Sign in to your account

Username or Email Address
Password

Lost your password?

Not a member? Sign Up