Hackers Can Hide Malicious Code in Gemini’s Email Summaries

By Ronald Kenyatta
Last updated: July 22, 2025 8:23 pm

Image: flat vector illustration of a phishing scam concept (Gstudio/Adobe Stock)

Google’s Gemini chatbot is vulnerable to a prompt-injection exploit that could trick users into falling for phishing scams, without them ever seeing it coming.

The flaw allows attackers to embed hidden instructions in seemingly benign emails. When a user clicks Summarize This Email using Gemini for Google Workspace, the chatbot can be manipulated into generating fake security alerts, prompting victims to click malicious links or call scam phone numbers.

According to the anonymous researcher who originally discovered and reported the vulnerability, the technique “involves clever and unorthodox tactics designed to deceive the model, often requiring an understanding of its operational mechanics to achieve desired outcomes.”

Understanding the prompt-injection flaw

Since the malicious email doesn't include any attachments, it isn't always flagged as suspicious, either by users or by their spam filters. Moreover, because the payload is ordinary HTML and CSS, it's easily hidden within the body of the email itself. Once embedded, Gemini for Google Workspace processes it just like any other set of instructions.

“Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated ‘security alert’ in the AI-generated summary,” said Marco Figueroa, a researcher with 0DIN.
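
The hiding technique described above can be sketched from the defender's side. The following is an illustrative Python sketch, not Google's actual filter: it walks an HTML email body and collects text that is styled to be invisible to the human reader (white-on-white, zero font size, or `display:none`) but that an LLM summarizer would still ingest. The heuristics and names are assumptions for illustration.

```python
import re
from html.parser import HTMLParser

# Styles that hide text from the reader while leaving it in the markup.
HIDDEN_STYLE = re.compile(
    r"color\s*:\s*(#fff(?:fff)?|white)\b"
    r"|font-size\s*:\s*0"
    r"|display\s*:\s*none",
    re.IGNORECASE,
)

# Void elements never receive a closing tag, so they must not affect the stack.
VOID_TAGS = {"br", "img", "hr", "meta", "input", "link", "area", "source"}

class HiddenTextFinder(HTMLParser):
    """Collects text nodes nested under a tag with a 'hidden' style."""

    def __init__(self):
        super().__init__()
        self._stack = []          # one bool per open tag: does it hide content?
        self.hidden_chunks = []   # text the reader never sees

    def handle_starttag(self, tag, attrs):
        if tag in VOID_TAGS:
            return
        style = dict(attrs).get("style") or ""
        inherited = bool(self._stack and self._stack[-1])
        self._stack.append(inherited or bool(HIDDEN_STYLE.search(style)))

    def handle_endtag(self, tag):
        if tag not in VOID_TAGS and self._stack:
            self._stack.pop()

    def handle_data(self, data):
        if self._stack and self._stack[-1] and data.strip():
            self.hidden_chunks.append(data.strip())

def find_hidden_instructions(email_html: str) -> list:
    """Return text chunks in the email that a human reader cannot see."""
    finder = HiddenTextFinder()
    finder.feed(email_html)
    return finder.hidden_chunks
```

A mail gateway could run a check like this before handing the body to a summarizer, and either strip the hidden chunks or flag the message for review.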

It's important to note that neither Google, the anonymous researcher, nor the team at 0DIN has seen any verified reports of this attack against real Gemini users; 0DIN's researchers demonstrated it only as a proof of concept.


Exploring Google’s layered security

Google has gone to great lengths to secure its Gemini platform. Some of these security controls include:

  • Cataloging vulnerabilities and adversarial data within the current line of generative AI platforms.
  • Introducing models that can detect hidden or malicious prompts while classifying them with greater accuracy.
  • Using hardcoded reminders to ensure their large language models (LLMs) perform user-directed tasks and ignore any requests that could be deemed harmful or malicious.
  • Identifying suspicious URLs and blocking the rendering of images from external URLs.
  • Requesting more information or confirmation directly from the user in some instances, also known as “human-in-the-loop.”
  • Providing notifications to users, with contextual information whenever Gemini’s internal controls mitigate a security issue.
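
One of the layered-defense ideas above — flagging suspicious content before it reaches the user — can be sketched in a few lines. This is a hypothetical illustration, not Google's implementation: it treats an AI-generated summary as suspicious if the summary introduces contact details (URLs or phone numbers) that never appeared in the visible email body, which is exactly what the fabricated "security alert" in this attack does. The function name and regexes are assumptions.

```python
import re

URL_RE = re.compile(r"https?://\S+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def summary_adds_contact_info(visible_body: str, summary: str) -> bool:
    """True if the summary contains a URL or phone number absent from the body."""
    def contacts(text):
        # Collect every URL and phone-like string in the text.
        return set(URL_RE.findall(text)) | set(PHONE_RE.findall(text))
    # Anything in the summary but not in the visible body is suspect.
    return bool(contacts(summary) - contacts(visible_body))
```

A check like this would catch the scenario in this article — a scam phone number surfacing in the summary of an email whose visible text contains no phone number at all — while leaving summaries that merely repeat the email's own links alone.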

Dark Reading reported that some of these safeguards have yet to be fully implemented. Google has also confirmed that it will be introducing additional safeguards for Gemini in the coming months.

Protecting the integrity of Gemini

Even though this particular flaw hasn't been exploited in the wild, AI developers need to be aware that their tools can serve as delivery mechanisms for cunning hackers and other malicious actors. This prompt-injection method targets Gemini for Google Workspace specifically, but it's easy to see how an attacker could apply similar techniques to other AI platforms, such as ChatGPT and Grok.
