Gemini CLI: How a Vulnerability Allowed Execution of Malicious Commands

Gemini CLI: How a Vulnerability Allowed Execution of Malicious Commands

Within 48 hours of release, a serious vulnerability was discovered in Google's Gemini CLI coding tool that allowed attackers to run harmful commands on user devices.

Gemini CLI, integrated with Google’s latest AI model, operates within terminal environments to assist developers. However, its default settings could be secretly manipulated to send sensitive data to a malicious server via a newly identified exploit.

The attack required minimal interaction: users simply needed to request a description of an attacker's code package and whitelist a benign command. The deceptive package mirrors common codes found in repositories, hiding prompt injections within its README file. Such injections have emerged as critical threats to the security of AI systems. The concealed natural language lines in these files can trigger automated command execution, resulting in significant vulnerabilities.

In this case, harmful commands were embedded within natural-language lines in the README file, enabling Gemini CLI to execute commands like connecting devices to an attacker's server and leaking potentially sensitive data.

Tracebit’s exploitation of Gemini CLI highlights the inherent risks in AI tools' inability to distinguish between trusted prompts and potentially malicious ones. This exploit demonstrated how commands could run surreptitiously, exploiting vulnerabilities like improper validation and user interface deception.

To mitigate this risk, Google promptly issued a priority fix, urging users to update to the revised version 0.1.14. Enhanced security policies, such as sandboxing environments and proper validation, are recommended for safer usage.

This incident underscores the critical importance of stringent security measures in AI tool development. Users are advised to remain vigilant and update tools regularly to defend against evolving threats.