GEMINI_CLI_OFFICIAL_DOCS


├── Uninstall.md Content:

Uninstalling the CLI

Your uninstall method depends on how you ran the CLI. Follow the instructions for either npx or a global npm installation.

Method 1: Using npx

npx runs packages from a temporary cache without a permanent installation. To "uninstall" the CLI, you must clear this cache, which will remove gemini-cli and any other packages previously executed with npx.

The npx cache is a directory named _npx inside your main npm cache folder. You can find your npm cache path by running npm config get cache.

For macOS / Linux

# The path is typically ~/.npm/_npx
rm -rf "$(npm config get cache)/_npx"

For Windows

Command Prompt

:: The path is typically %LocalAppData%\npm-cache\_npx
rmdir /s /q "%LocalAppData%\npm-cache\_npx"

PowerShell

# The path is typically $env:LocalAppData\npm-cache\_npx
Remove-Item -Path (Join-Path $env:LocalAppData "npm-cache\_npx") -Recurse -Force

Method 2: Using npm (Global Install)

If you installed the CLI globally (e.g., npm install -g @google/gemini-cli), use the npm uninstall command with the -g flag to remove it.

npm uninstall -g @google/gemini-cli

This command completely removes the package from your system.
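
To confirm the removal, you can check the global package list and your PATH (a quick sanity check, assuming a POSIX shell):

npm ls -g @google/gemini-cli   # should no longer list the package
command -v gemini              # should print nothing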

├── architecture.md Content:

Gemini CLI Architecture Overview

This document provides a high-level overview of the Gemini CLI's architecture.

Core components

The Gemini CLI is primarily composed of two main packages, along with a suite of tools that can be used by the system in the course of handling command-line input:

  1. CLI package (packages/cli):

    • Purpose: This is the user-facing frontend of Gemini CLI. It handles the interactive terminal experience: accepting prompts and commands, rendering responses, and managing display features such as themes. It forwards user requests to packages/core.

  2. Core package (packages/core):

    • Purpose: This acts as the backend for the Gemini CLI. It receives requests sent from packages/cli, orchestrates interactions with the Gemini API, and manages the execution of available tools.
    • Key functions contained in the package:
      • API client for communicating with the Google Gemini API
      • Prompt construction and management
      • Tool registration and execution logic
      • State management for conversations or sessions
      • Server-side configuration
  3. Tools (packages/core/src/tools/):

    • Purpose: These are individual modules that extend the capabilities of the Gemini model, allowing it to interact with the local environment (e.g., file system, shell commands, web fetching).
    • Interaction: packages/core invokes these tools based on requests from the Gemini model.

Interaction Flow

A typical interaction with the Gemini CLI follows this flow:

  1. User input: The user types a prompt or command into the terminal, which is managed by packages/cli.
  2. Request to core: packages/cli sends the user's input to packages/core.
  3. Request processed: The core package:
    • Constructs an appropriate prompt for the Gemini API, possibly including conversation history and available tool definitions.
    • Sends the prompt to the Gemini API.
  4. Gemini API response: The Gemini API processes the prompt and returns a response. This response might be a direct answer or a request to use one of the available tools.
  5. Tool execution (if applicable):
    • When the Gemini API requests a tool, the core package prepares to execute it.
    • If the requested tool can modify the file system or execute shell commands, the user is first given details of the tool and its arguments, and the user must approve the execution.
    • Read-only operations, such as reading files, might not require explicit user confirmation to proceed.
    • Once confirmed, or if confirmation is not required, the core package executes the relevant action within the relevant tool, and the result is sent back to the Gemini API by the core package.
    • The Gemini API processes the tool result and generates a final response.
  6. Response to CLI: The core package sends the final response back to the CLI package.
  7. Display to user: The CLI package formats and displays the response to the user in the terminal.

Key Design Principles

  • Modularity: Separating the CLI frontend (packages/cli) from the core backend (packages/core) allows each to evolve independently.
  • Extensibility: The tool system makes it straightforward to add new capabilities.
  • User experience: The CLI focuses on a rich, interactive terminal experience.

└── assets/ (image files: connected_devtools.png, gemini-screenshot.png, and the theme screenshots referenced in themes.md; binary content not included)

├── checkpointing.md Content:

Checkpointing

The Gemini CLI includes a Checkpointing feature that automatically saves a snapshot of your project's state before any file modifications are made by AI-powered tools. This allows you to safely experiment with and apply code changes, knowing you can instantly revert to the state before the tool was run.

How It Works

When you approve a tool that modifies the file system (like write_file or replace), the CLI automatically creates a "checkpoint." This checkpoint includes:

  1. A Git Snapshot: A commit is made in a special, shadow Git repository located in your home directory (~/.gemini/history/<project_hash>). This snapshot captures the complete state of your project files at that moment. It does not interfere with your own project's Git repository.
  2. Conversation History: The entire conversation you've had with the agent up to that point is saved.
  3. The Tool Call: The specific tool call that was about to be executed is also stored.

If you want to undo the change or simply go back, you can use the /restore command. Restoring a checkpoint will:

  1. Revert your project files to the state captured in the Git snapshot.
  2. Restore your conversation history to the point the checkpoint was made.
  3. Re-propose the original tool call, so you can run it again, modify it, or ignore it.

All checkpoint data, including the Git snapshot and conversation history, is stored locally on your machine. The Git snapshot is stored in the shadow repository, while the conversation history and tool calls are saved in a JSON file in your project's temporary directory, typically located at ~/.gemini/tmp/<project_hash>/checkpoints.
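
Since all of this data lives under ~/.gemini, you can inspect it directly. A quick look, assuming checkpointing has been used at least once (the <project_hash> segment varies per project, hence the glob):

# Shadow Git repositories, one per project
ls ~/.gemini/history/

# Saved checkpoint JSON files across projects
ls ~/.gemini/tmp/*/checkpoints/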

Enabling the Feature

The Checkpointing feature is disabled by default. To enable it, you can either use a command-line flag or edit your settings.json file.

Using the Command-Line Flag

You can enable checkpointing for the current session by using the --checkpointing flag when starting the Gemini CLI:

gemini --checkpointing

Using the settings.json File

To enable checkpointing by default for all sessions, you need to edit your settings.json file.

Add the following key to your settings.json:

{
  "checkpointing": {
    "enabled": true
  }
}

Using the /restore Command

Once enabled, checkpoints are created automatically. To manage them, you use the /restore command.

List Available Checkpoints

To see a list of all saved checkpoints for the current project, simply run:

/restore

The CLI will display a list of available checkpoint files. These file names are typically composed of a timestamp, the name of the file being modified, and the name of the tool that was about to be run (e.g., 2025-06-22T10-00-00_000Z-my-file.txt-write_file).

Restore a Specific Checkpoint

To restore your project to a specific checkpoint, use the checkpoint file from the list:

/restore <checkpoint_file>

For example:

/restore 2025-06-22T10-00-00_000Z-my-file.txt-write_file

After running the command, your files and conversation will be immediately restored to the state they were in when the checkpoint was created, and the original tool prompt will reappear.

└── cli/ ├── authentication.md Content:

Authentication Setup

The Gemini CLI requires you to authenticate with Google's AI services. On initial startup you'll need to configure one of the following authentication methods:

  1. Login with Google (Gemini Code Assist):

    • Use this option to log in with your Google account.
    • During initial startup, Gemini CLI will direct you to a webpage for authentication. Once authenticated, your credentials will be cached locally so the web login can be skipped on subsequent runs.
    • Note that the web login must be done in a browser that can communicate with the machine Gemini CLI is being run from. (Specifically, the browser will be redirected to a localhost URL that Gemini CLI will be listening on.)
    • You may have to specify a GOOGLE_CLOUD_PROJECT if:
      • You have a Google Workspace account. Google Workspace is a paid service for businesses and organizations that provides a suite of productivity tools, including a custom email domain (e.g. your-name@your-company.com), enhanced security features, and administrative controls. These accounts are often managed by an employer or school.
      • You have received a Gemini Code Assist license through the Google Developer Program (including qualified Google Developer Experts).
      • You have been assigned a license to a current Gemini Code Assist Standard or Enterprise subscription.
      • You are using the product outside the supported regions for free individual usage.
      • You are a Google account holder under the age of 18.
    • If you fall into one of these categories, you must first configure a Google Cloud project ID, enable the Gemini for Cloud API, and configure access permissions.

    You can temporarily set the environment variable in your current shell session using the following command:

    export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"

    For repeated use, you can add the environment variable to your .env file or your shell's configuration file (like ~/.bashrc, ~/.zshrc, or ~/.profile). For example, the following commands add the environment variable to a ~/.bashrc file:

    echo 'export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"' >> ~/.bashrc
    source ~/.bashrc

  2. Gemini API key:

    • Obtain your API key from Google AI Studio: https://aistudio.google.com/app/apikey
    • Set the GEMINI_API_KEY environment variable. In the following methods, replace YOUR_GEMINI_API_KEY with the API key you obtained from Google AI Studio:
    • You can temporarily set the environment variable in your current shell session using the following command:

      export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"

    • For repeated use, you can add the environment variable to your .env file.

    • Alternatively you can export the API key from your shell's configuration file (like ~/.bashrc, ~/.zshrc, or ~/.profile). For example, the following command adds the environment variable to a ~/.bashrc file:

      echo 'export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"' >> ~/.bashrc
      source ~/.bashrc

      :warning: Be advised that when you export your API key inside your shell configuration file, any other process executed from the shell can read it.

  3. Vertex AI:

    • Obtain your Google Cloud API key: Get an API Key
    • Set the GOOGLE_API_KEY environment variable. In the following methods, replace YOUR_GOOGLE_API_KEY with your Vertex AI API key:
      • You can temporarily set this environment variable in your current shell session using the following command:

        export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"

      • For repeated use, you can add the environment variable to your .env file or your shell's configuration file (like ~/.bashrc, ~/.zshrc, or ~/.profile). For example, the following commands add the environment variable to a ~/.bashrc file:

        echo 'export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"' >> ~/.bashrc
        source ~/.bashrc
    • To use Application Default Credentials (ADC), ensure you have a Google Cloud project and have enabled the Vertex AI API, then run the following command:

      gcloud auth application-default login

      For more information, see Set up Application Default Credentials for Google Cloud.
    • Set the GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION environment variables. In the following methods, replace YOUR_PROJECT_ID and YOUR_PROJECT_LOCATION with the relevant values for your project:

      • You can temporarily set these environment variables in your current shell session using the following commands:

        export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
        export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION" # e.g., us-central1

      • For repeated use, you can add the environment variables to your .env file.

      • Alternatively, you can export the environment variables from your shell's configuration file (like ~/.bashrc, ~/.zshrc, or ~/.profile). For example, the following commands add the environment variables to a ~/.bashrc file:

        echo 'export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"' >> ~/.bashrc
        echo 'export GOOGLE_CLOUD_LOCATION="YOUR_PROJECT_LOCATION"' >> ~/.bashrc
        source ~/.bashrc

      :warning: Be advised that when you export your API key inside your shell configuration file, any other process executed from the shell can read it.

  4. Cloud Shell:

    • This option is only available when running in a Google Cloud Shell environment.
    • It automatically uses the credentials of the logged-in user in the Cloud Shell environment.
    • This is the default authentication method when running in Cloud Shell and no other method is configured.


Persisting Environment Variables with .env Files

You can create a .gemini/.env file in your project directory or in your home directory. Creating a plain .env file also works, but .gemini/.env is recommended to keep Gemini variables isolated from other tools.

Important: Some environment variables (like DEBUG and DEBUG_MODE) are automatically excluded from project .env files to prevent interference with gemini-cli behavior. Use .gemini/.env files for gemini-cli specific variables.

Gemini CLI automatically loads environment variables from the first .env file it finds, using the following search order:

  1. Starting in the current directory and moving upward toward /, for each directory it checks:
    • .gemini/.env
    • .env
  2. If no file is found, it falls back to your home directory:
    • ~/.gemini/.env
    • ~/.env

Important: The search stops at the first file encountered; variables are not merged across multiple files.

Examples

Project-specific overrides (take precedence when you are inside the project):

mkdir -p .gemini
echo 'GOOGLE_CLOUD_PROJECT="your-project-id"' >> .gemini/.env

User-wide settings (available in every directory):

mkdir -p ~/.gemini
cat >> ~/.gemini/.env <<'EOF'
GOOGLE_CLOUD_PROJECT="your-project-id"
GEMINI_API_KEY="your-gemini-api-key"
EOF

Non-Interactive Mode / Headless Environments

When running the Gemini CLI in a non-interactive environment, you cannot use the interactive login flow. Instead, you must configure authentication using environment variables.

The CLI will automatically detect if it is running in a non-interactive terminal and will use one of the following authentication methods if available:

  1. Gemini API Key:

    • Set the GEMINI_API_KEY environment variable.
    • The CLI will use this key to authenticate with the Gemini API.
  2. Vertex AI:

    • Set the GOOGLE_GENAI_USE_VERTEXAI=true environment variable.
    • Using an API Key: Set the GOOGLE_API_KEY environment variable.
    • Using Application Default Credentials (ADC):
    • Run gcloud auth application-default login in your environment to configure ADC.
    • Ensure the GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION environment variables are set.

If none of these environment variables are set in a non-interactive session, the CLI will exit with an error.
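
For example, a minimal headless invocation using a Gemini API key (replace the placeholder with a real key):

export GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
echo "Summarize the README in this directory" | gemini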

├── commands.md Content:

CLI Commands

Gemini CLI supports several built-in commands to help you manage your session, customize the interface, and control its behavior. These commands are prefixed with a forward slash (/), an at symbol (@), or an exclamation mark (!).

Slash commands (/)

Slash commands provide meta-level control over the CLI itself.

Built-in Commands

Custom Commands

For a quick start, see the example below.

Custom commands allow you to save and reuse your favorite or most frequently used prompts as personal shortcuts within Gemini CLI. You can create commands that are specific to a single project or commands that are available globally across all your projects, streamlining your workflow and ensuring consistency.

File Locations & Precedence

Gemini CLI discovers commands from two locations, loaded in a specific order:

  1. User Commands (Global): Located in ~/.gemini/commands/. These commands are available in any project you are working on.
  2. Project Commands (Local): Located in <your-project-root>/.gemini/commands/. These commands are specific to the current project and can be checked into version control to be shared with your team.

If a command in the project directory has the same name as a command in the user directory, the project command will always be used. This allows projects to override global commands with project-specific versions.

Naming and Namespacing

The name of a command is determined by its file path relative to its commands directory. Subdirectories are used to create namespaced commands, with the path separator (/ or \) being converted to a colon (:). For example, a file at ~/.gemini/commands/git/fix.toml becomes the command /git:fix.

TOML File Format (v1)

Your command definition files must be written in the TOML format and use the .toml file extension.

Required Fields

  • prompt (String): The prompt sent to the Gemini model when the command is executed. This can be a single-line or multi-line string.

Optional Fields

  • description (String): A brief, one-line description of what the command does.

Handling Arguments

Custom commands support two powerful, low-friction methods for handling arguments. The CLI automatically chooses the correct method based on the content of your command's prompt.

1. Shorthand Injection with {{args}}

If your prompt contains the special placeholder {{args}}, the CLI will replace that exact placeholder with all the text the user typed after the command name. This is perfect for simple, deterministic commands where you need to inject user input into a specific place in a larger prompt template.

Example (git/fix.toml):

# In: ~/.gemini/commands/git/fix.toml
# Invoked via: /git:fix "Button is misaligned on mobile"

description = "Generates a fix for a given GitHub issue."
prompt = "Please analyze the staged git changes and provide a code fix for the issue described here: {{args}}."

The model will receive the final prompt: Please analyze the staged git changes and provide a code fix for the issue described here: "Button is misaligned on mobile".

2. Default Argument Handling

If your prompt does not contain the special placeholder {{args}}, the CLI uses a default behavior for handling arguments.

If you provide arguments to the command (e.g., /mycommand arg1), the CLI will append the full command you typed to the end of the prompt, separated by two newlines. This allows the model to see both the original instructions and the specific arguments you just provided.

If you do not provide any arguments (e.g., /mycommand), the prompt is sent to the model exactly as it is, with nothing appended.

Example (changelog.toml):

This example shows how to create a robust command by defining a role for the model, explaining where to find the user's input, and specifying the expected format and behavior.

# In: <project>/.gemini/commands/changelog.toml
# Invoked via: /changelog 1.2.0 added "Support for default argument parsing."

description = "Adds a new entry to the project's CHANGELOG.md file."
prompt = """
# Task: Update Changelog

You are an expert maintainer of this software project. A user has invoked a command to add a new entry to the changelog.

**The user's raw command is appended below your instructions.**

Your task is to parse the `<version>`, `<change_type>`, and `<message>` from their input and use the `write_file` tool to correctly update the `CHANGELOG.md` file.

## Expected Format
The command follows this format: `/changelog <version> <type> <message>`
- `<type>` must be one of: "added", "changed", "fixed", "removed".

## Behavior
1. Read the `CHANGELOG.md` file.
2. Find the section for the specified `<version>`.
3. Add the `<message>` under the correct `<type>` heading.
4. If the version or type section doesn't exist, create it.
5. Adhere strictly to the "Keep a Changelog" format.
"""

When you run /changelog 1.2.0 added "New feature", the final text sent to the model will be the original prompt followed by two newlines and the command you typed.
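
Illustratively, the final text sent to the model would look like the following sketch (the prompt body is abridged here):

# Task: Update Changelog
...rest of the prompt above...

/changelog 1.2.0 added "New feature"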

3. Executing Shell Commands with !{...}

You can make your commands dynamic by executing shell commands directly within your prompt and injecting their output. This is ideal for gathering context from your local environment, like reading file content or checking your Git status.

When a custom command attempts to execute a shell command, Gemini CLI prompts you for confirmation before proceeding. This is a security measure to ensure that only intended commands can be run.

How It Works:

  1. Inject Commands: Use the !{...} syntax in your prompt to specify where the command should be run and its output injected.
  2. Confirm Execution: When you run the command, a dialog will appear listing the shell commands the prompt wants to execute.
  3. Grant Permission: You can choose to:
    • Allow once: The command(s) will run this one time.
    • Allow always for this session: The command(s) will be added to a temporary allowlist for the current CLI session and will not require confirmation again.
    • No: Cancel the execution of the shell command(s).

The CLI still respects the global excludeTools and coreTools settings. A command will be blocked without a confirmation prompt if it is explicitly disallowed in your configuration.

Example (git/commit.toml):

This command gets the staged git diff and uses it to ask the model to write a commit message.

# In: <project>/.gemini/commands/git/commit.toml
# Invoked via: /git:commit

description = "Generates a Git commit message based on staged changes."

# The prompt uses !{...} to execute the command and inject its output.
prompt = """
Please generate a Conventional Commit message based on the following git diff:

```diff
!{git diff --staged}
```

"""

When you run /git:commit, the CLI first executes git diff --staged, then replaces !{git diff --staged} with the output of that command before sending the final, complete prompt to the model.


Example: A "Pure Function" Refactoring Command

Let's create a global command that asks the model to refactor a piece of code.

1. Create the file and directories:

First, ensure the user commands directory exists, then create a refactor subdirectory for organization and the final TOML file.

mkdir -p ~/.gemini/commands/refactor
touch ~/.gemini/commands/refactor/pure.toml

2. Add the content to the file:

Open ~/.gemini/commands/refactor/pure.toml in your editor and add the following content. We are including the optional description for best practice.

# In: ~/.gemini/commands/refactor/pure.toml
# This command will be invoked via: /refactor:pure

description = "Asks the model to refactor the current context into a pure function."

prompt = """
Please analyze the code I've provided in the current context.
Refactor it into a pure function.

Your response should include:
1. The refactored, pure function code block.
2. A brief explanation of the key changes you made and why they contribute to purity.
"""

3. Run the Command:

That's it! You can now run your command in the CLI. First, you might add a file to the context, and then invoke your command:

> @my-messy-function.js
> /refactor:pure

Gemini CLI will then execute the multi-line prompt defined in your TOML file.

At commands (@)

At commands are used to include the content of files or directories as part of your prompt to Gemini. These commands include git-aware filtering.

Error handling for @ commands

Shell mode & passthrough commands (!)

The ! prefix lets you interact with your system's shell directly from within Gemini CLI.
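
For example, prefix a command with ! to run it in your shell and see its output inside the CLI (entering ! on its own toggles a persistent shell mode in recent versions):

> !git status
> !ls -la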

├── configuration.md Content:

Gemini CLI Configuration

Gemini CLI offers several ways to configure its behavior, including environment variables, command-line arguments, and settings files. This document outlines the different configuration methods and available settings.

Configuration layers

Configuration is applied in the following order of precedence (lower numbers are overridden by higher numbers):

  1. Default values: Hardcoded defaults within the application.
  2. User settings file: Global settings for the current user.
  3. Project settings file: Project-specific settings.
  4. System settings file: System-wide settings.
  5. Environment variables: System-wide or session-specific variables, potentially loaded from .env files.
  6. Command-line arguments: Values passed when launching the CLI.

Settings files

Gemini CLI uses settings.json files for persistent configuration. There are three locations for these files:

Note on environment variables in settings: String values within your settings.json files can reference environment variables using either $VAR_NAME or ${VAR_NAME} syntax. These variables will be automatically resolved when the settings are loaded. For example, if you have an environment variable MY_API_TOKEN, you could use it in settings.json like this: "apiKey": "$MY_API_TOKEN".

The .gemini directory in your project

In addition to a project settings file, a project's .gemini directory can contain other project-specific files related to Gemini CLI's operation, such as:

  • A custom sandbox Dockerfile (.gemini/sandbox.Dockerfile)
  • Project-scoped custom commands (.gemini/commands/)
  • A project environment file (.gemini/.env)

Available settings in settings.json:

Example settings.json:

{
  "theme": "GitHub",
  "sandbox": "docker",
  "toolDiscoveryCommand": "bin/get_tools",
  "toolCallCommand": "bin/call_tool",
  "mcpServers": {
    "mainServer": {
      "command": "bin/mcp_server.py"
    },
    "anotherServer": {
      "command": "node",
      "args": ["mcp_server.js", "--verbose"]
    }
  },
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317",
    "logPrompts": true
  },
  "usageStatisticsEnabled": true,
  "hideTips": false,
  "hideBanner": false,
  "maxSessionTurns": 10,
  "summarizeToolOutput": {
    "run_shell_command": {
      "tokenBudget": 100
    }
  },
  "excludedProjectEnvVars": ["DEBUG", "DEBUG_MODE", "NODE_ENV"],
  "includeDirectories": ["path/to/dir1", "~/path/to/dir2", "../path/to/dir3"],
  "loadMemoryFromIncludeDirectories": true
}

Shell History

The CLI keeps a history of shell commands you run. To avoid conflicts between different projects, this history is stored in a project-specific directory within your user's home folder.

Environment Variables & .env Files

Environment variables are a common way to configure applications, especially for sensitive information like API keys or for settings that might change between environments.

The CLI automatically loads environment variables from an .env file. The loading order is:

  1. .env file in the current working directory.
  2. If not found, it searches upwards in parent directories until it finds an .env file or reaches the project root (identified by a .git folder) or the home directory.
  3. If still not found, it looks for ~/.env (in the user's home directory).

Environment Variable Exclusion: Some environment variables (like DEBUG and DEBUG_MODE) are automatically excluded from being loaded from project .env files to prevent interference with gemini-cli behavior. Variables from .gemini/.env files are never excluded. You can customize this behavior using the excludedProjectEnvVars setting in your settings.json file.

Command-Line Arguments

Arguments passed directly when running the CLI can override other configurations for that specific session.
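
For example, several flags documented in this guide can be combined in a single invocation (a sketch using the sandbox, checkpointing, and prompt flags shown elsewhere in these docs):

gemini -s --checkpointing -p "Refactor the utils module"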

Context Files (Hierarchical Instructional Context)

While not strictly configuration for the CLI's behavior, context files (defaulting to GEMINI.md but configurable via the contextFileName setting) are crucial for configuring the instructional context (also referred to as "memory") provided to the Gemini model. This powerful feature allows you to give project-specific instructions, coding style guides, or any relevant background information to the AI, making its responses more tailored and accurate to your needs. The CLI includes UI elements, such as an indicator in the footer showing the number of loaded context files, to keep you informed about the active context.

Example Context File Content (e.g., GEMINI.md)

Here's a conceptual example of what a context file at the root of a TypeScript project might contain:

# Project: My Awesome TypeScript Library

## General Instructions:

- When generating new TypeScript code, please follow the existing coding style.
- Ensure all new functions and classes have JSDoc comments.
- Prefer functional programming paradigms where appropriate.
- All code should be compatible with TypeScript 5.0 and Node.js 20+.

## Coding Style:

- Use 2 spaces for indentation.
- Interface names should be prefixed with `I` (e.g., `IUserService`).
- Private class members should be prefixed with an underscore (`_`).
- Always use strict equality (`===` and `!==`).

## Specific Component: `src/api/client.ts`

- This file handles all outbound API requests.
- When adding new API call functions, ensure they include robust error handling and logging.
- Use the existing `fetchWithRetry` utility for all GET requests.

## Regarding Dependencies:

- Avoid introducing new external dependencies unless absolutely necessary.
- If a new dependency is required, please state the reason.

This example demonstrates how you can provide general project context, specific coding conventions, and even notes about particular files or components. The more relevant and precise your context files are, the better the AI can assist you. Project-specific context files are highly encouraged to establish conventions and context.

By understanding and utilizing these configuration layers and the hierarchical nature of context files, you can effectively manage the AI's memory and tailor the Gemini CLI's responses to your specific needs and projects.

Sandboxing

The Gemini CLI can execute potentially unsafe operations (like shell commands and file modifications) within a sandboxed environment to protect your system.

Sandboxing is disabled by default, but you can enable it in a few ways:

  • Using the --sandbox (or -s) command-line flag.
  • Setting the GEMINI_SANDBOX environment variable.

By default, it uses a pre-built gemini-cli-sandbox Docker image.

For project-specific sandboxing needs, you can create a custom Dockerfile at .gemini/sandbox.Dockerfile in your project's root directory. This Dockerfile can be based on the base sandbox image:

FROM gemini-cli-sandbox

# Add your custom dependencies or configurations here
# For example:
# RUN apt-get update && apt-get install -y some-package
# COPY ./my-config /app/my-config

When .gemini/sandbox.Dockerfile exists, you can set the BUILD_SANDBOX environment variable when running Gemini CLI to automatically build the custom sandbox image:

BUILD_SANDBOX=1 gemini -s

Usage Statistics

To help us improve the Gemini CLI, we collect anonymized usage statistics. This data helps us understand how the CLI is used, identify common issues, and prioritize new features.

What we collect:

What we DON'T collect:

How to opt out:

You can opt out of usage statistics collection at any time by setting the usageStatisticsEnabled property to false in your settings.json file:

{
  "usageStatisticsEnabled": false
}

├── index.md Content:

Gemini CLI

Within Gemini CLI, packages/cli is the frontend for users to send and receive prompts with the Gemini AI model and its associated tools. For a general overview of Gemini CLI, see the main documentation page.

Non-interactive mode

Gemini CLI can be run in a non-interactive mode, which is useful for scripting and automation. In this mode, you pipe input to the CLI, it executes the command, and then it exits.

The following example pipes a command to Gemini CLI from your terminal:

echo "What is fine tuning?" | gemini

Gemini CLI executes the command and prints the output to your terminal. Note that you can achieve the same behavior by using the --prompt or -p flag. For example:

gemini -p "What is fine tuning?"

├── themes.md Content:

Themes

Gemini CLI supports a variety of themes to customize its color scheme and appearance. You can change the theme to suit your preferences via the /theme command or the "theme" setting in settings.json.

Available Themes

Gemini CLI comes with a selection of pre-defined themes, which you can list using the /theme command within Gemini CLI. The available themes are shown in the Dark Themes and Light Themes sections below.

Changing Themes

  1. Enter /theme into Gemini CLI.
  2. A dialog or selection prompt appears, listing the available themes.
  3. Using the arrow keys, select a theme. Some interfaces might offer a live preview or highlight as you select.
  4. Confirm your selection to apply the theme.

Theme Persistence

Selected themes are saved in Gemini CLI's configuration so your preference is remembered across sessions.


Custom Color Themes

Gemini CLI allows you to create your own custom color themes by specifying them in your settings.json file. This gives you full control over the color palette used in the CLI.

How to Define a Custom Theme

Add a customThemes block to your user, project, or system settings.json file. Each custom theme is defined as an object with a unique name and a set of color keys. For example:

{
  "customThemes": {
    "MyCustomTheme": {
      "name": "MyCustomTheme",
      "type": "custom",
      "Background": "#181818",
      "Foreground": "#F8F8F2",
      "LightBlue": "#82AAFF",
      "AccentBlue": "#61AFEF",
      "AccentPurple": "#C678DD",
      "AccentCyan": "#56B6C2",
      "AccentGreen": "#98C379",
      "AccentYellow": "#E5C07B",
      "AccentRed": "#E06C75",
      "Comment": "#5C6370",
      "Gray": "#ABB2BF",
      "DiffAdded": "#A6E3A1",
      "DiffRemoved": "#F38BA8",
      "DiffModified": "#89B4FA",
      "GradientColors": ["#4796E4", "#847ACE", "#C3677F"]
    }
  }
}

Color keys: Each theme sets the color values shown in the example above (Background, Foreground, the Accent* colors, Comment, Gray, the Diff* colors, and GradientColors).

Required Properties:

  • name: Must match the key in the customThemes object.
  • type: Must be the string "custom".

You can use either hex codes (e.g., #FF0000) or standard CSS color names (e.g., coral, teal, blue) for any color value. See CSS color names for a full list of supported names.

You can define multiple custom themes by adding more entries to the customThemes object.

Example Custom Theme

Custom theme example

Using Your Custom Theme

Select your custom theme with the /theme command inside Gemini CLI, or make it the default by setting "theme": "MyCustomTheme" in your settings.json.


Dark Themes

ANSI

ANSI theme

Atom OneDark

Atom One theme

Ayu

Ayu theme

Default

Default theme

Dracula

Dracula theme

GitHub

GitHub theme

Light Themes

ANSI Light

ANSI Light theme

Ayu Light

Ayu Light theme

Default Light

Default Light theme

GitHub Light

GitHub Light theme

Google Code

Google Code theme

Xcode

Xcode Light theme

├── token-caching.md Content:

Token Caching and Cost Optimization

Gemini CLI automatically optimizes API costs through token caching when using API key authentication (Gemini API key or Vertex AI). This feature reuses previous system instructions and context to reduce the number of tokens processed in subsequent requests.

Token caching is available for:

  • API key authentication (Gemini API key)
  • Vertex AI authentication

Token caching is not available for:

  • OAuth authentication (Login with Google), where cached content is not currently supported

You can view your token usage and cached token savings using the /stats command. When cached tokens are available, they will be displayed in the stats output.

├── tutorials.md Content:

Tutorials

This page contains tutorials for interacting with Gemini CLI.

Setting up a Model Context Protocol (MCP) server

[!CAUTION] Before using a third-party MCP server, ensure you trust its source and understand the tools it provides. Your use of third-party servers is at your own risk.

This tutorial demonstrates how to set up an MCP server, using the GitHub MCP server as an example. The GitHub MCP server provides tools for interacting with GitHub repositories, such as creating issues and commenting on pull requests.

Prerequisites

Before you begin, ensure you have the following installed and configured:

  • Gemini CLI, installed and authenticated
  • Docker, installed and running (this example launches the GitHub MCP server as a container)
  • A GitHub personal access token (PAT) with access to the repositories you want to work with

Guide

Configure the MCP server in settings.json

In your project's root directory, create or open the .gemini/settings.json file. Within the file, add the mcpServers configuration block, which provides instructions for how to launch the GitHub MCP server.

{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PERSONAL_ACCESS_TOKEN}"
      }
    }
  }
}

Set your GitHub token

[!CAUTION] Using a broadly scoped personal access token that has access to personal and private repositories can lead to information from the private repository being leaked into the public repository. We recommend using a fine-grained access token that doesn't share access to both public and private repositories.

Use an environment variable to store your GitHub PAT, exporting it so that Gemini CLI can pass it to the server process:

export GITHUB_PERSONAL_ACCESS_TOKEN="pat_YourActualGitHubTokenHere"

Gemini CLI uses this value in the mcpServers configuration that you defined in the settings.json file.

Launch Gemini CLI and verify the connection

When you launch Gemini CLI, it automatically reads your configuration and launches the GitHub MCP server in the background. You can then use natural language prompts to ask Gemini CLI to perform GitHub actions. For example:

"get all open issues assigned to me in the 'foo/bar' repo and prioritize them"

└── core/ ├── index.md Content:

Gemini CLI Core

Gemini CLI's core package (packages/core) is the backend portion of Gemini CLI, handling communication with the Gemini API, managing tools, and processing requests sent from packages/cli. For a general overview of Gemini CLI, see the main documentation page.

Role of the core

While the packages/cli portion of Gemini CLI provides the user interface, packages/core is responsible for:

  • API client: Communicating with the Google Gemini API.
  • Prompt management: Constructing prompts, including conversation history and available tool definitions.
  • Tool registration and execution: Registering the available tools and executing them when requested by the model.
  • State management: Tracking the state of conversations and sessions.

Security considerations

The core plays a vital role in security:

  • User confirmation: Tool calls that can modify the file system or execute shell commands require explicit user approval before they run.
  • Sandboxing: Potentially unsafe operations can be executed inside a sandboxed environment (see the Sandboxing section of the configuration documentation).

Chat history compression

To ensure that long conversations don't exceed the token limits of the Gemini model, the core includes a chat history compression feature.

When a conversation approaches the token limit for the configured model, the core automatically compresses the conversation history before sending it to the model. This compression is designed to be lossless in terms of the information conveyed, but it reduces the overall number of tokens used.

You can find the token limits for each model in the Google AI documentation.

Model fallback

Gemini CLI includes a model fallback mechanism to ensure that you can continue to use the CLI even if the default "pro" model is rate-limited.

If you are using the default "pro" model and the CLI detects that you are being rate-limited, it automatically switches to the "flash" model for the current session. This allows you to continue working without interruption.

File discovery service

The file discovery service is responsible for finding files in the project that are relevant to the current context. It is used by the @ command and other tools that need to access files.

Memory discovery service

The memory discovery service is responsible for finding and loading the GEMINI.md files that provide context to the model. It searches for these files in a hierarchical manner, starting from the current working directory and moving up to the project root and the user's home directory. It also searches in subdirectories.

This allows you to have global, project-level, and component-level context files, which are all combined to provide the model with the most relevant information.

You can use the /memory command to show, add, and refresh the content of loaded GEMINI.md files.
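
For example (using the subcommands named above):

> /memory show
> /memory add Always use 2-space indentation.
> /memory refresh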

├── memport.md Content:

Memory Import Processor

The Memory Import Processor is a feature that allows you to modularize your GEMINI.md files by importing content from other files using the @file.md syntax.

Overview

This feature enables you to break down large GEMINI.md files into smaller, more manageable components that can be reused across different contexts. The import processor supports both relative and absolute paths, with built-in safety features to prevent circular imports and ensure file access security.

Syntax

Use the @ symbol followed by the path to the file you want to import:

# Main GEMINI.md file

This is the main content.

@./components/instructions.md

More content here.

@./shared/configuration.md

Supported Path Formats

Relative Paths

@./file.md and @../file.md are resolved relative to the file that contains the import.

Absolute Paths

@/absolute/path/to/file.md is resolved as an absolute path on your file system.

Examples

Basic Import

# My GEMINI.md

Welcome to my project!

@./getting-started.md

## Features

@./features/overview.md

Nested Imports

The imported files can themselves contain imports, creating a nested structure:

# main.md

@./header.md
@./content.md
@./footer.md
# header.md

# Project Header

@./shared/title.md

Safety Features

Circular Import Detection

The processor automatically detects and prevents circular imports:

# file-a.md

@./file-b.md

# file-b.md

@./file-a.md <!-- This will be detected and prevented -->

File Access Security

The validateImportPath function ensures that imports are only allowed from specified directories, preventing access to sensitive files outside the allowed scope.

Maximum Import Depth

To prevent infinite recursion, there's a configurable maximum import depth (default: 5 levels).

Error Handling

Missing Files

If a referenced file doesn't exist, the import will fail gracefully with an error comment in the output.

File Access Errors

Permission issues or other file system errors are handled gracefully with appropriate error messages.

Code Region Detection

The import processor uses the marked library to detect code blocks and inline code spans, ensuring that @ imports inside these regions are properly ignored. This provides robust handling of nested code blocks and complex Markdown structures.

Import Tree Structure

The processor returns an import tree that shows the hierarchy of imported files, similar to Claude's /memory feature. This helps users debug problems with their GEMINI.md files by showing which files were read and their import relationships.

Example tree structure:

Memory Files
 L project: GEMINI.md
            L a.md
              L b.md
                L c.md
              L d.md
                L e.md
                  L f.md
            L included.md

The tree preserves the order that files were imported and shows the complete import chain for debugging purposes.

Comparison to Claude Code's /memory (claude.md) Approach

Claude Code's /memory feature (as seen in claude.md) produces a flat, linear document by concatenating all included files, always marking file boundaries with clear comments and path names. It does not explicitly present the import hierarchy, but the LLM receives all file contents and paths, which is sufficient for reconstructing the hierarchy if needed.

Note: The import tree is mainly for clarity during development and has limited relevance to LLM consumption.

API Reference

processImports(content, basePath, debugMode?, importState?)

Processes import statements in GEMINI.md content.

Parameters:

  • content (string): The GEMINI.md content to process.
  • basePath (string): The directory used to resolve relative import paths.
  • debugMode (boolean, optional): Whether to log detailed debug output.
  • importState (optional): Internal state used to track visited files and import depth across recursive calls.

Returns: Promise<ProcessImportsResult> - Object containing the processed content and the import tree

ProcessImportsResult

interface ProcessImportsResult {
  content: string; // The processed content with imports resolved
  importTree: MemoryFile; // Tree structure showing the import hierarchy
}

MemoryFile

interface MemoryFile {
  path: string; // The file path
  imports?: MemoryFile[]; // Direct imports, in the order they were imported
}

validateImportPath(importPath, basePath, allowedDirectories)

Validates import paths to ensure they are safe and within allowed directories.

Parameters:

  • importPath (string): The import path to validate.
  • basePath (string): The base directory used to resolve relative import paths.
  • allowedDirectories (string[]): The directories from which imports are permitted.

Returns: boolean - Whether the import path is valid

findProjectRoot(startDir)

Finds the project root by searching for a .git directory upwards from the given start directory. Implemented as an async function using non-blocking file system APIs to avoid blocking the Node.js event loop.

Parameters:

  • startDir (string): The directory to start searching from.

Returns: Promise<string> - The project root directory (or the start directory if no .git directory is found)

Best Practices

  1. Use descriptive file names for imported components
  2. Keep imports shallow - avoid deeply nested import chains
  3. Document your structure - maintain a clear hierarchy of imported files
  4. Test your imports - ensure all referenced files exist and are accessible
  5. Use relative paths when possible for better portability

Troubleshooting

Common Issues

  1. Import not working: Check that the file exists and the path is correct
  2. Circular import warnings: Review your import structure for circular references
  3. Permission errors: Ensure the files are readable and within allowed directories
  4. Path resolution issues: Use absolute paths if relative paths aren't resolving correctly

Debug Mode

Enable debug mode to see detailed logging of the import process:

const result = await processImports(content, basePath, true);

├── tools-api.md Content:

Gemini CLI Core: Tools API

The Gemini CLI core (packages/core) features a robust system for defining, registering, and executing tools. These tools extend the capabilities of the Gemini model, allowing it to interact with the local environment, fetch web content, and perform various actions beyond simple text generation.

Core Concepts

Built-in Tools

The core comes with a suite of pre-defined tools, typically found in packages/core/src/tools/. These include tools for file system operations (such as read_file, write_file, replace, list_directory, and read_many_files), shell execution (run_shell_command), and web access.

Each of these tools extends BaseTool and implements the required methods for its specific functionality.

Tool Execution Flow

  1. Model Request: The Gemini model, based on the user's prompt and the provided tool schemas, decides to use a tool and returns a FunctionCall part in its response, specifying the tool name and arguments.
  2. Core Receives Request: The core parses this FunctionCall.
  3. Tool Retrieval: It looks up the requested tool in the ToolRegistry.
  4. Parameter Validation: The tool's validateToolParams() method is called.
  5. Confirmation (if needed):
    • The tool's shouldConfirmExecute() method is called.
    • If it returns details for confirmation, the core communicates this back to the CLI, which prompts the user.
    • The user's decision (e.g., proceed, cancel) is sent back to the core.
  6. Execution: If validated and confirmed (or if no confirmation is needed), the core calls the tool's execute() method with the provided arguments and an AbortSignal (for potential cancellation).
  7. Result Processing: The ToolResult from execute() is received by the core.
  8. Response to Model: The llmContent from the ToolResult is packaged as a FunctionResponse and sent back to the Gemini model so it can continue generating a user-facing response.
  9. Display to User: The returnDisplay from the ToolResult is sent to the CLI to show the user what the tool did.

Extending with Custom Tools

Direct programmatic registration of new tools isn't a primary workflow for typical end users, but the architecture supports extension through:

  • Command-based discovery: external tools exposed via the toolDiscoveryCommand and toolCallCommand settings shown in the configuration documentation.
  • MCP servers: tools provided by servers declared in the mcpServers setting.

This tool system provides a flexible and powerful way to augment the Gemini model's capabilities, making the Gemini CLI a versatile assistant for a wide range of tasks.

├── deployment.md Content:

Gemini CLI Execution and Deployment

This document describes how to run Gemini CLI and explains the deployment architecture that Gemini CLI uses.

Running Gemini CLI

There are several ways to run Gemini CLI. The option you choose depends on how you intend to use Gemini CLI.


1. Standard installation (recommended for typical users)

This is the recommended way for end users to install Gemini CLI. It involves downloading the Gemini CLI package from the NPM registry.

npm install -g @google/gemini-cli

Then, run the CLI from anywhere:

gemini

Alternatively, execute the latest version from NPM without a global install:

npx @google/gemini-cli


2. Running in a sandbox (Docker/Podman)

For security and isolation, Gemini CLI can be run inside a container. This is the default way that the CLI executes tools that might have side effects.


3. Running from source (recommended for contributors)

Contributors to the project will want to run the CLI directly from the source code.

# Link the local cli package to your global node_modules
npm link packages/cli

# Now you can run your local version using the gemini command
gemini


4. Running the latest Gemini CLI commit from GitHub

You can run the most recently committed version of Gemini CLI directly from the GitHub repository. This is useful for testing features still in development.

# Execute the CLI directly from the main branch on GitHub
npx https://github.com/google-gemini/gemini-cli

Deployment architecture

The execution methods described above are made possible by the following architectural components and processes:

NPM packages

The Gemini CLI project is a monorepo that publishes two core packages to the NPM registry:

  • @google/gemini-cli: The main package containing the user-facing frontend (packages/cli).
  • @google/gemini-cli-core: The backend (packages/core).

These packages are used when performing the standard installation and when running Gemini CLI from the source.

Build and packaging processes

There are two distinct build processes used, depending on the distribution channel:

Docker sandbox image

The Docker-based execution method is supported by the gemini-cli-sandbox container image. This image is published to a container registry and contains a pre-installed, global version of Gemini CLI.

Release process

The release process is automated through GitHub Actions. The release workflow performs the following actions:

  1. Build the NPM packages using tsc.
  2. Publish the NPM packages to the artifact registry.
  3. Create GitHub releases with bundled assets.

└── examples/ ├── proxy-script.md Content:

Example Proxy Script

The following is an example of a proxy script that can be used with the GEMINI_SANDBOX_PROXY_COMMAND environment variable. The script only allows HTTPS CONNECT requests to an allowlist of domains (example.com and googleapis.com) on port 443 and denies all other requests.

#!/usr/bin/env node

/**
 * @license
 * Copyright 2025 Google LLC
 * SPDX-License-Identifier: Apache-2.0
 */

// Example proxy server that listens on :::8877 and only allows HTTPS connections to allowed domains.
// Set `GEMINI_SANDBOX_PROXY_COMMAND=scripts/example-proxy.js` to run proxy alongside sandbox
// Test via `curl https://example.com` inside sandbox (in shell mode or via shell tool)

import http from 'http';
import net from 'net';
import { URL } from 'url';
import console from 'console';

const PROXY_PORT = 8877;
const ALLOWED_DOMAINS = ['example.com', 'googleapis.com'];
const ALLOWED_PORT = '443';

const server = http.createServer((req, res) => {
  // Deny all requests other than CONNECT for HTTPS
  console.log(
    `[PROXY] Denying non-CONNECT request for: ${req.method} ${req.url}`,
  );
  res.writeHead(405, { 'Content-Type': 'text/plain' });
  res.end('Method Not Allowed');
});

server.on('connect', (req, clientSocket, head) => {
  // req.url will be in the format "hostname:port" for a CONNECT request.
  const { port, hostname } = new URL(`http://${req.url}`);

  console.log(`[PROXY] Intercepted CONNECT request for: ${hostname}:${port}`);

  if (
    ALLOWED_DOMAINS.some(
      (domain) => hostname == domain || hostname.endsWith(`.${domain}`),
    ) &&
    port === ALLOWED_PORT
  ) {
    console.log(`[PROXY] Allowing connection to ${hostname}:${port}`);

    // Establish a TCP connection to the original destination.
    const serverSocket = net.connect(port, hostname, () => {
      clientSocket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
      // Create a tunnel by piping data between the client and the destination server.
      serverSocket.write(head);
      serverSocket.pipe(clientSocket);
      clientSocket.pipe(serverSocket);
    });

    serverSocket.on('error', (err) => {
      console.error(`[PROXY] Error connecting to destination: ${err.message}`);
      clientSocket.end(`HTTP/1.1 502 Bad Gateway\r\n\r\n`);
    });
  } else {
    console.log(`[PROXY] Denying connection to ${hostname}:${port}`);
    clientSocket.end('HTTP/1.1 403 Forbidden\r\n\r\n');
  }

  clientSocket.on('error', (err) => {
    // This can happen if the client hangs up.
    console.error(`[PROXY] Client socket error: ${err.message}`);
  });
});

server.listen(PROXY_PORT, () => {
  const address = server.address();
  console.log(`[PROXY] Proxy listening on ${address.address}:${address.port}`);
  console.log(
    `[PROXY] Allowing HTTPS connections to domains: ${ALLOWED_DOMAINS.join(', ')}`,
  );
});
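
To try it, point the sandbox at the script and test from inside it, as the header comments describe:

export GEMINI_SANDBOX_PROXY_COMMAND=scripts/example-proxy.js
gemini -s

# Inside the sandbox (shell mode or the shell tool):
curl https://example.com   # allowed (example.com:443)
curl https://google.com    # denied by the proxy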

├── extension.md Content:

Gemini CLI Extensions

Gemini CLI supports extensions that can be used to configure and extend its functionality.

How it works

On startup, Gemini CLI looks for extensions in two locations:

  1. <workspace>/.gemini/extensions
  2. <home>/.gemini/extensions

Gemini CLI loads all extensions from both locations. If an extension with the same name exists in both locations, the extension in the workspace directory takes precedence.

Within each location, individual extensions exist as a directory that contains a gemini-extension.json file. For example:

<workspace>/.gemini/extensions/my-extension/gemini-extension.json

gemini-extension.json

The gemini-extension.json file contains the configuration for the extension. The file has the following structure:

{
  "name": "my-extension",
  "version": "1.0.0",
  "mcpServers": {
    "my-server": {
      "command": "node my-server.js"
    }
  },
  "contextFileName": "GEMINI.md",
  "excludeTools": ["run_shell_command"]
}

When Gemini CLI starts, it loads all the extensions and merges their configurations. If there are any conflicts, the workspace configuration takes precedence.
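
As a quick illustration, a minimal extension can be scaffolded from the shell (a sketch assuming that name and version suffice for a minimal manifest):

mkdir -p ~/.gemini/extensions/my-extension
cat > ~/.gemini/extensions/my-extension/gemini-extension.json <<'EOF'
{
  "name": "my-extension",
  "version": "1.0.0"
}
EOF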

Extension Commands

Extensions can provide custom commands by placing TOML files in a commands/ subdirectory within the extension directory. These commands follow the same format as user and project custom commands and use standard naming conventions.

Example

An extension named gcp with the following structure:

.gemini/extensions/gcp/
├── gemini-extension.json
└── commands/
    ├── deploy.toml
    └── gcs/
        └── sync.toml

Would provide these commands:

  • /deploy (from commands/deploy.toml)
  • /gcs:sync (from commands/gcs/sync.toml)

Conflict Resolution

Extension commands have the lowest precedence. When a conflict occurs with user or project commands:

  1. No conflict: Extension command uses its natural name (e.g., /deploy)
  2. With conflict: Extension command is renamed with the extension prefix (e.g., /gcp.deploy)

For example, if both a user and the gcp extension define a deploy command, the user's /deploy is used and the extension's command becomes available as /gcp.deploy.

├── gemini-ignore.md Content:

Ignoring Files

This document provides an overview of the Gemini Ignore (.geminiignore) feature of the Gemini CLI.

The Gemini CLI includes the ability to automatically ignore files, similar to .gitignore (used by Git) and .aiexclude (used by Gemini Code Assist). Adding paths to your .geminiignore file will exclude them from tools that support this feature, although they will still be visible to other services (such as Git).

How it works

When you add a path to your .geminiignore file, tools that respect this file will exclude matching files and directories from their operations. For example, when you use the read_many_files command, any paths in your .geminiignore file will be automatically excluded.

For the most part, .geminiignore follows the conventions of .gitignore files:

  • Blank lines and lines starting with # are ignored.
  • Standard glob patterns are supported.
  • Putting a / at the end matches only directories.
  • A ! prefix negates a pattern, re-including paths that would otherwise be excluded.

You can update your .geminiignore file at any time. To apply the changes, you must restart your Gemini CLI session.

How to use .geminiignore

To enable .geminiignore:

  1. Create a file named .geminiignore in the root of your project directory.

To add a file or directory to .geminiignore:

  1. Open your .geminiignore file.
  2. Add the path or file you want to ignore, for example: /archive/ or apikeys.txt.

.geminiignore examples

You can use .geminiignore to ignore directories and files:

# Exclude your /packages/ directory and all subdirectories
/packages/

# Exclude your apikeys.txt file
apikeys.txt

You can use wildcards in your .geminiignore file with *:

# Exclude all .md files
*.md

Finally, you can exempt files and directories from exclusion with !:

# Exclude all .md files except README.md
*.md
!README.md

To remove paths from your .geminiignore file, delete the relevant lines.

├── index.md Content:

Welcome to Gemini CLI documentation

This documentation provides a comprehensive guide to installing, using, and developing Gemini CLI. This tool lets you interact with Gemini models through a command-line interface.

Overview

Gemini CLI brings the capabilities of Gemini models to your terminal in an interactive Read-Eval-Print Loop (REPL) environment. Gemini CLI consists of a client-side application (packages/cli) that communicates with a local server (packages/core), which in turn manages requests to the Gemini API and its AI models. Gemini CLI also contains a variety of tools for tasks such as performing file system operations, running shells, and web fetching, which are managed by packages/core.

This documentation is organized into the following sections:

We hope this documentation helps you make the most of the Gemini CLI!

β”œβ”€β”€ integration-tests.md Content:

Integration Tests

This document provides information about the integration testing framework used in this project.

Overview

The integration tests are designed to validate the end-to-end functionality of the Gemini CLI. They execute the built binary in a controlled environment and verify that it behaves as expected when interacting with the file system.

These tests are located in the integration-tests directory and are run using a custom test runner.

Running the tests

The integration tests are not run as part of the default npm run test command. They must be run explicitly using the npm run test:integration:all script.

The integration tests can also be run using the following shortcut:

npm run test:e2e

Running a specific set of tests

To run a subset of test files, use npm run <integration test command> <file_name1> ..., where <integration test command> is either test:e2e or test:integration* and <file_name> is any of the .test.js files in the integration-tests/ directory. For example, the following command runs list_directory.test.js and write_file.test.js:

npm run test:e2e list_directory write_file

Running a single test by name

To run a single test by its name, use the --test-name-pattern flag:

npm run test:e2e -- --test-name-pattern "reads a file"

Running all tests

To run the entire suite of integration tests, use the following command:

npm run test:integration:all

Sandbox matrix

The all command runs the tests once for each sandboxing mode: none, Docker, and Podman. Each individual mode can be run using the following commands:

npm run test:integration:sandbox:none
npm run test:integration:sandbox:docker
npm run test:integration:sandbox:podman

Diagnostics

The integration test runner provides several options for diagnostics to help track down test failures.

Keeping test output

You can preserve the temporary files created during a test run for inspection. This is useful for debugging issues with file system operations.

To keep the test output, you can either use the --keep-output flag or set the KEEP_OUTPUT environment variable to true.

# Using the flag
npm run test:integration:sandbox:none -- --keep-output

# Using the environment variable
KEEP_OUTPUT=true npm run test:integration:sandbox:none

When output is kept, the test runner will print the path to the unique directory for the test run.

Verbose output

For more detailed debugging, the --verbose flag streams the real-time output from the gemini command to the console.

npm run test:integration:sandbox:none -- --verbose

When using --verbose and --keep-output in the same command, the output is streamed to the console and also saved to a log file within the test's temporary directory.
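
For example, to stream output and keep the log files for the same run:

npm run test:integration:sandbox:none -- --verbose --keep-output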

The verbose output is formatted to clearly identify the source of the logs:

--- TEST: <file-name-without-js>:<test-name> ---
... output from the gemini command ...
--- END TEST: <file-name-without-js>:<test-name> ---

Linting and formatting

To ensure code quality and consistency, the integration test files are linted as part of the main build process. You can also manually run the linter and auto-fixer.

Running the linter

To check for linting errors, run the following command:

npm run lint

You can include the :fix flag in the command to automatically fix any fixable linting errors:

npm run lint:fix

Directory structure

The integration tests create a unique directory for each test run inside the .integration-tests directory. Within this directory, a subdirectory is created for each test file, and within that, a subdirectory is created for each individual test case.

This structure makes it easy to locate the artifacts for a specific test run, file, or case.

.integration-tests/
└── <run-id>/
    └── <test-file-name>.test.js/
        └── <test-case-name>/
            β”œβ”€β”€ output.log
            └── ...other test artifacts...
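
For example, to inspect the log of one test case (the run ID and test names here are hypothetical):

cat .integration-tests/<run-id>/list_directory.test.js/<test-case-name>/output.log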

Continuous integration

To ensure the integration tests are always run, a GitHub Actions workflow is defined in .github/workflows/e2e.yml. This workflow automatically runs the integration tests for pull requests against the main branch, or when a pull request is added to a merge queue.

The workflow runs the tests in each sandboxing environment (none, Docker, and Podman) to ensure Gemini CLI is tested across all of them.

β”œβ”€β”€ issue-and-pr-automation.md Content:

Automation and Triage Processes

This document provides a detailed overview of the automated processes we use to manage and triage issues and pull requests. Our goal is to provide prompt feedback and ensure that contributions are reviewed and integrated efficiently. Understanding this automation will help you as a contributor know what to expect and how to best interact with our repository bots.

Guiding Principle: Issues and Pull Requests

First and foremost, almost every Pull Request (PR) should be linked to a corresponding Issue. The issue describes the "what" and the "why" (the bug or feature), while the PR is the "how" (the implementation). This separation helps us track work, prioritize features, and maintain clear historical context. Our automation is built around this principle.


Detailed Automation Workflows

Here is a breakdown of the specific automation workflows that run in our repository.

1. When you open an Issue: Automated Issue Triage

This is the first bot you will interact with when you create an issue. Its job is to perform an initial analysis and apply the correct labels.

2. When you open a Pull Request: Continuous Integration (CI)

This workflow ensures that all changes meet our quality standards before they can be merged.

3. Ongoing Triage for Pull Requests: PR Auditing and Label Sync

This workflow runs periodically to ensure all open PRs are correctly linked to issues and have consistent labels.

4. Ongoing Triage for Issues: Scheduled Issue Triage

This is a fallback workflow to ensure that no issue gets missed by the triage process.

5. Release Automation

This workflow handles the process of packaging and publishing new versions of the Gemini CLI.

We hope this detailed overview is helpful. If you have any questions about our automation or processes, please don't hesitate to ask!

β”œβ”€β”€ keyboard-shortcuts.md Content:

Gemini CLI Keyboard Shortcuts

This document lists the available keyboard shortcuts in the Gemini CLI.

General

Esc: Close dialogs and suggestions.
Ctrl+C: Exit the application. Press twice to confirm.
Ctrl+D: Exit the application if the input is empty. Press twice to confirm.
Ctrl+L: Clear the screen.
Ctrl+O: Toggle the display of the debug console.
Ctrl+S: Allow long responses to print fully, disabling truncation. Use your terminal's scrollback to view the entire output.
Ctrl+T: Toggle the display of tool descriptions.
Ctrl+Y: Toggle auto-approval (YOLO mode) for all tool calls.

Input Prompt

!: Toggle shell mode when the input is empty.
\ (at end of line) + Enter: Insert a newline.
Down Arrow: Navigate down through the input history.
Enter: Submit the current prompt.
Meta+Delete / Ctrl+Delete: Delete the word to the right of the cursor.
Tab: Autocomplete the current suggestion if one exists.
Up Arrow: Navigate up through the input history.
Ctrl+A / Home: Move the cursor to the beginning of the line.
Ctrl+B / Left Arrow: Move the cursor one character to the left.
Ctrl+C: Clear the input prompt.
Ctrl+D / Delete: Delete the character to the right of the cursor.
Ctrl+E / End: Move the cursor to the end of the line.
Ctrl+F / Right Arrow: Move the cursor one character to the right.
Ctrl+H / Backspace: Delete the character to the left of the cursor.
Ctrl+K: Delete from the cursor to the end of the line.
Ctrl+Left Arrow / Meta+Left Arrow / Meta+B: Move the cursor one word to the left.
Ctrl+N: Navigate down through the input history.
Ctrl+P: Navigate up through the input history.
Ctrl+Right Arrow / Meta+Right Arrow / Meta+F: Move the cursor one word to the right.
Ctrl+U: Delete from the cursor to the beginning of the line.
Ctrl+V: Paste clipboard content. If the clipboard contains an image, it will be saved and a reference to it will be inserted in the prompt.
Ctrl+W / Meta+Backspace / Ctrl+Backspace: Delete the word to the left of the cursor.
Ctrl+X / Meta+Enter: Open the current input in an external editor.

Suggestions

Down Arrow: Navigate down through the suggestions.
Tab / Enter: Accept the selected suggestion.
Up Arrow: Navigate up through the suggestions.

Radio Button Select

Down Arrow / j: Move selection down.
Enter: Confirm selection.
Up Arrow / k: Move selection up.
1-9: Select an item by its number.
(multi-digit): For items with numbers greater than 9, press the digits in quick succession to select the corresponding item.

β”œβ”€β”€ npm.md Content:

Package Overview

This monorepo contains two main packages: @google/gemini-cli and @google/gemini-cli-core.

@google/gemini-cli

This is the main package for the Gemini CLI. It is responsible for the user interface, command parsing, and all other user-facing functionality.

When this package is published, it is bundled into a single executable file. This bundle includes all of the package's dependencies, including @google/gemini-cli-core. This means that whether a user installs the package with npm install -g @google/gemini-cli or runs it directly with npx @google/gemini-cli, they are using this single, self-contained executable.

@google/gemini-cli-core

This package contains the core logic for interacting with the Gemini API. It is responsible for making API requests, handling authentication, and managing the local cache.

This package is not bundled. When published, it ships as a standard Node.js package with its own dependencies. This allows it to be used as a standalone package in other projects, if needed. All transpiled JavaScript code in the dist folder is included in the package.

Release Process

This project follows a structured release process to ensure that all packages are versioned and published correctly. The process is designed to be as automated as possible.

How To Release

Releases are managed through the release.yml GitHub Actions workflow. To perform a manual release for a patch or hotfix:

  1. Navigate to the Actions tab of the repository.
  2. Select the Release workflow from the list.
  3. Click the Run workflow dropdown button.
  4. Fill in the required inputs:
    • Version: The exact version to release (e.g., v0.2.1).
    • Ref: The branch or commit SHA to release from (defaults to main).
    • Dry Run: Leave as true to test the workflow without publishing, or set to false to perform a live release.
  5. Click Run workflow.

Nightly Releases

In addition to manual releases, this project has an automated nightly release process to provide the latest "bleeding edge" version for testing and development.

Process

Every night at midnight UTC, the Release workflow runs automatically on a schedule. It performs the following steps:

  1. Checks out the latest code from the main branch.
  2. Installs all dependencies.
  3. Runs the full suite of preflight checks and integration tests.
  4. If all tests succeed, it calculates the next nightly version number (e.g., v0.2.1-nightly.20230101).
  5. It then builds and publishes the packages to npm with the nightly dist-tag.
  6. Finally, it creates a GitHub Release for the nightly version.

Failure Handling

If any step in the nightly workflow fails, it will automatically create a new issue in the repository with the labels bug and nightly-failure. The issue will contain a link to the failed workflow run for easy debugging.

How to Use the Nightly Build

To install the latest nightly build, use the @nightly tag:

npm install -g @google/gemini-cli@nightly

We also run a Google Cloud Build job, defined in release-docker.yml, which publishes the sandbox Docker image matching each release. This will also be moved to GitHub Actions and combined with the main release workflow once service account permissions are sorted out.

After the Release

After the workflow has successfully completed, you can monitor its progress in the GitHub Actions tab. Once complete, you should:

  1. Go to the pull requests page of the repository.
  2. Create a new pull request from the release/vX.Y.Z branch to main.
  3. Review the pull request (it should only contain version updates in package.json files) and merge it. This keeps the version in main up-to-date.
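
One way to open that pull request from the command line, assuming you use the GitHub CLI (gh) and substituting the real release branch name:

gh pr create --base main --head release/vX.Y.Z --title "chore: merge release version bump"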

Release Validation

After pushing a new release, perform smoke testing to ensure that the packages are working as expected. This can be done by installing the packages locally and running a set of tests to ensure that they are functioning correctly.
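
A minimal smoke test, as a sketch rather than an official checklist, might look like:

# Install the freshly published package and exercise basic functionality
npm install -g @google/gemini-cli@latest
gemini --version
gemini -p "Say hello"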

When to merge the version change back (and when not to)

The above pattern for creating patch or hotfix releases from current or older commits leaves the repository in the following state:

  1. The Tag (vX.Y.Z-patch.1): This tag correctly points to the original commit on main that contains the stable code you intended to release. This is crucial. Anyone checking out this tag gets the exact code that was published.
  2. The Branch (release-vX.Y.Z-patch.1): This branch contains one new commit on top of the tagged commit. That new commit only contains the version number change in package.json (and other related files like package-lock.json).

This separation is good. It keeps your main branch history clean of release-specific version bumps until you decide to merge them.

This is the critical decision, and it depends entirely on the nature of the release.

Merge Back for Stable Patches and Hotfixes

You almost always want to merge the release-<tag> branch back into main for any stable patch or hotfix release.

Do NOT Merge Back for Pre-Releases (RC, Beta, Dev)

You typically do not merge release branches for pre-releases back into main.

Local Testing and Validation: Changes to the Packaging and Publishing Process

If you need to test the release process without actually publishing to NPM or creating a public GitHub release, you can trigger the workflow manually from the GitHub UI.

  1. Go to the Actions tab of the repository.
  2. Click on the "Run workflow" dropdown.
  3. Leave the dry_run option checked (true).
  4. Click the "Run workflow" button.

This will run the entire release process but will skip the npm publish and gh release create steps. You can inspect the workflow logs to ensure everything is working as expected.

It is crucial to test any changes to the packaging and publishing process locally before committing them. This ensures that the packages will be published correctly and that they will work as expected when installed by a user.

To validate your changes, you can perform a dry run of the publishing process. This will simulate the publishing process without actually publishing the packages to the npm registry.

npm_package_version=9.9.9 SANDBOX_IMAGE_REGISTRY="registry" SANDBOX_IMAGE_NAME="thename" npm run publish:npm --dry-run

This command will do the following:

  1. Build all the packages.
  2. Run all the prepublish scripts.
  3. Create the package tarballs that would be published to npm.
  4. Print a summary of the packages that would be published.

You can then inspect the generated tarballs to ensure that they contain the correct files and that the package.json files have been updated correctly. The tarballs will be created in the root of each package's directory (e.g., packages/cli/google-gemini-cli-0.1.6.tgz).
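
For example, you can list a tarball's contents to verify what would be shipped, using the example path above:

tar -tzf packages/cli/google-gemini-cli-0.1.6.tgz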

By performing a dry run, you can be confident that your changes to the packaging process are correct and that the packages will be published successfully.

Release Deep Dive

The main goal of the release process is to take the source code from the packages/ directory, build it, and assemble a clean, self-contained package in a temporary bundle directory at the root of the project. This bundle directory is what actually gets published to NPM.

Here are the key stages:

Stage 1: Pre-Release Sanity Checks and Versioning

Stage 2: Building the Source Code

Stage 3: Assembling the Final Publishable Package

This is the most critical stage where files are moved and transformed into their final state for publishing. A temporary bundle folder is created at the project root to house the final package contents.

  1. The package.json is Transformed:

    • What happens: The package.json from packages/cli/ is read, modified, and written into the root bundle/ directory.
    • File movement: packages/cli/package.json -> (in-memory transformation) -> bundle/package.json
    • Why: The final package.json must be different from the one used in development. Key changes include:
    • Removing devDependencies.
    • Removing workspace-specific "dependencies": { "@gemini-cli/core": "workspace:*" } and ensuring the core code is bundled directly into the final JavaScript file.
    • Ensuring the bin, main, and files fields point to the correct locations within the final package structure.
  2. The JavaScript Bundle is Created:

    • What happens: The built JavaScript from both packages/core/dist and packages/cli/dist are bundled into a single, executable JavaScript file.
    • File movement: packages/cli/dist/index.js + packages/core/dist/index.js -> (bundled by esbuild) -> bundle/gemini.js (or a similar name).
    • Why: This creates a single, optimized file that contains all the necessary application code. It simplifies the package by removing the need for the core package to be a separate dependency on NPM, as its code is now included directly.
  3. Static and Supporting Files are Copied:

    • What happens: Essential files that are not part of the source code but are required for the package to work correctly or be well-described are copied into the bundle directory.
    • File movement:
    • README.md -> bundle/README.md
    • LICENSE -> bundle/LICENSE
    • packages/cli/src/utils/*.sb (sandbox profiles) -> bundle/
    • Why:
    • The README.md and LICENSE are standard files that should be included in any NPM package.
    • The sandbox profiles (.sb files) are critical runtime assets required for the CLI's sandboxing feature to function. They must be located next to the final executable.

Stage 4: Publishing to NPM

Summary of File Flow

graph TD
    subgraph "Source Files"
        A["packages/core/src/*.ts<br/>packages/cli/src/*.ts"]
        B["packages/cli/package.json"]
        C["README.md<br/>LICENSE<br/>packages/cli/src/utils/*.sb"]
    end

    subgraph "Process"
        D(Build)
        E(Transform)
        F(Assemble)
        G(Publish)
    end

    subgraph "Artifacts"
        H["Bundled JS"]
        I["Final package.json"]
        J["bundle/"]
    end

    subgraph "Destination"
        K["NPM Registry"]
    end

    A --> D --> H
    B --> E --> I
    C --> F
    H --> F
    I --> F
    F --> J
    J --> G --> K

This process ensures that the final published artifact is a purpose-built, clean, and efficient representation of the project, rather than a direct copy of the development workspace.

NPM Workspaces

This project uses NPM Workspaces to manage the packages within this monorepo. This simplifies development by allowing us to manage dependencies and run scripts across multiple packages from the root of the project.

How it Works

The root package.json file defines the workspaces for this project:

{
  "workspaces": ["packages/*"]
}

This tells NPM that any folder inside the packages directory is a separate package that should be managed as part of the workspace.
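
For example, npm can target a single workspace from the repository root:

# Run a script in one package
npm run build --workspace=packages/cli

# Add a dependency to one package
npm install zod --workspace=packages/core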

Benefits of Workspaces

β”œβ”€β”€ quota-and-pricing.md Content:

Gemini CLI: Quotas and Pricing

Your Gemini CLI quotas and pricing depend on the type of account you use to authenticate with Google. Additionally, both quotas and pricing may be calculated differently based on the model version, requests, and tokens used. A summary of model usage is available through the /stats command and presented on exit at the end of a session. See Privacy and Terms for details on the privacy policy and terms of service. Note: published prices are list prices; additional negotiated commercial discounting may apply.

This article outlines the specific quotas and pricing applicable to the Gemini CLI when using different authentication methods.

1. Log in with Google (Gemini Code Assist Free Tier)

For users who authenticate by using their Google account to access Gemini Code Assist for individuals:

2. Gemini API Key (Unpaid)

If you are using a Gemini API key for the free tier:

3. Gemini API Key (Paid)

If you are using a Gemini API key with a paid plan:

4. Login with Google (for Workspace or Licensed Code Assist users)

For users of Standard or Enterprise editions of Gemini Code Assist, quotas and pricing are based on a fixed price subscription with assigned license seats:

5. Vertex AI (Express Mode)

If you are using Vertex AI in Express Mode:

6. Vertex AI (Regular Mode)

If you are using the standard Vertex AI service:

7. Google One and Ultra plans, Gemini for Workspace plans

These plans currently apply only to Gemini web-based experiences provided by Google (for example, the Gemini web app or the Flow video editor). They do not apply to the API usage that powers the Gemini CLI. Support for these plans is under active consideration.

β”œβ”€β”€ sandbox.md Content:

Sandboxing in the Gemini CLI

This document provides a guide to sandboxing in the Gemini CLI, including prerequisites, quickstart, and configuration.

Prerequisites

Before using sandboxing, you need to install and set up the Gemini CLI:

npm install -g @google/gemini-cli

To verify the installation:

gemini --version

Overview of sandboxing

Sandboxing isolates potentially dangerous operations (such as shell commands or file modifications) from your host system, providing a security barrier between AI operations and your environment.

The benefits of sandboxing include:

Sandboxing methods

Your ideal method of sandboxing may differ depending on your platform and your preferred container solution.

1. macOS Seatbelt (macOS only)

Lightweight, built-in sandboxing using sandbox-exec.

Default profile: permissive-open - restricts writes outside project directory but allows most other operations.

2. Container-based (Docker/Podman)

Cross-platform sandboxing with complete process isolation.

Note: Requires building the sandbox image locally or using a published image from your organization's registry.

Quickstart

# Enable sandboxing with command flag
gemini -s -p "analyze the code structure"

# Use environment variable
export GEMINI_SANDBOX=true
gemini -p "run the test suite"

# Configure in settings.json
{
  "sandbox": "docker"
}

Configuration

Enable sandboxing (in order of precedence)

  1. Command flag: -s or --sandbox
  2. Environment variable: GEMINI_SANDBOX=true|docker|podman|sandbox-exec
  3. Settings file: "sandbox": true in settings.json

macOS Seatbelt profiles

Built-in profiles (set via SEATBELT_PROFILE env var):
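
For example, to run a one-off command under the default permissive-open profile mentioned above:

SEATBELT_PROFILE=permissive-open gemini -s -p "list files in this directory"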

Custom Sandbox Flags

For container-based sandboxing, you can inject custom flags into the docker or podman command using the SANDBOX_FLAGS environment variable. This is useful for advanced configurations, such as disabling security features for specific use cases.

Example (Podman):

To disable SELinux labeling for volume mounts, you can set the following:

export SANDBOX_FLAGS="--security-opt label=disable"

Multiple flags can be provided as a space-separated string:

export SANDBOX_FLAGS="--flag1 --flag2=value"

Linux UID/GID handling

The sandbox automatically handles user permissions on Linux. Override these permissions with:

export SANDBOX_SET_UID_GID=true   # Force host UID/GID
export SANDBOX_SET_UID_GID=false  # Disable UID/GID mapping

Troubleshooting

Common issues

"Operation not permitted"

Missing commands

Network issues

Debug mode

DEBUG=1 gemini -s -p "debug command"

Note: If you have DEBUG=true in a project's .env file, it won't affect gemini-cli due to automatic exclusion. Use .gemini/.env files for gemini-cli specific debug settings.

Inspect sandbox

# Check environment
gemini -s -p "run shell command: env | grep SANDBOX"

# List mounts
gemini -s -p "run shell command: mount | grep workspace"

Security notes

β”œβ”€β”€ telemetry.md Content:

Gemini CLI Observability Guide

Telemetry provides data about Gemini CLI's performance, health, and usage. By enabling it, you can monitor operations, debug issues, and optimize tool usage through traces, metrics, and structured logs.

Gemini CLI's telemetry system is built on the OpenTelemetry (OTEL) standard, allowing you to send data to any compatible backend.

Enabling telemetry

You can enable telemetry in multiple ways. Configuration is primarily managed via the .gemini/settings.json file and environment variables, but CLI flags can override these settings for a specific session.

Order of precedence

The following lists the precedence for applying telemetry settings, with items listed higher having greater precedence:

  1. CLI flags (for gemini command):

    • --telemetry / --no-telemetry: Overrides telemetry.enabled.
    • --telemetry-target <local|gcp>: Overrides telemetry.target.
    • --telemetry-otlp-endpoint <URL>: Overrides telemetry.otlpEndpoint.
    • --telemetry-log-prompts / --no-telemetry-log-prompts: Overrides telemetry.logPrompts.
    • --telemetry-outfile <path>: Redirects telemetry output to a file. See Exporting to a file.
  2. Environment variables:

    • OTEL_EXPORTER_OTLP_ENDPOINT: Overrides telemetry.otlpEndpoint.
  3. Workspace settings file (.gemini/settings.json): Values from the telemetry object in this project-specific file.

  4. User settings file (~/.gemini/settings.json): Values from the telemetry object in this global user file.

  5. Defaults: applied if not set by any of the above.

    • telemetry.enabled: false
    • telemetry.target: local
    • telemetry.otlpEndpoint: http://localhost:4317
    • telemetry.logPrompts: true

For the npm run telemetry -- --target=<gcp|local> script: The --target argument to this script only overrides the telemetry.target for the duration and purpose of that script (i.e., choosing which collector to start). It does not permanently change your settings.json. The script will first look at settings.json for a telemetry.target to use as its default.

Example settings

The following code can be added to your workspace (.gemini/settings.json) or user (~/.gemini/settings.json) settings to enable telemetry and send the output to Google Cloud:

{
  "telemetry": {
    "enabled": true,
    "target": "gcp"
  },
  "sandbox": false
}

Exporting to a file

You can export all telemetry data to a file for local inspection.

To enable file export, use the --telemetry-outfile flag with a path to your desired output file. This must be run using --telemetry-target=local.

# Set your desired output file path
TELEMETRY_FILE=".gemini/telemetry.log"

# Run Gemini CLI with local telemetry
# NOTE: --telemetry-otlp-endpoint="" is required to override the default
# OTLP exporter and ensure telemetry is written to the local file.
gemini --telemetry \
  --telemetry-target=local \
  --telemetry-otlp-endpoint="" \
  --telemetry-outfile="$TELEMETRY_FILE" \
  --prompt "What is OpenTelemetry?"

Running an OTEL Collector

An OTEL Collector is a service that receives, processes, and exports telemetry data. The CLI sends data using the OTLP/gRPC protocol.

Learn more about standard OTEL exporter configuration in the OpenTelemetry documentation.

Local

Use the npm run telemetry -- --target=local command to automate the process of setting up a local telemetry pipeline, including configuring the necessary settings in your .gemini/settings.json file. The underlying script installs otelcol-contrib (the OpenTelemetry Collector) and jaeger (the Jaeger UI for viewing traces). To use it:

  1. Run the command: Execute the command from the root of the repository:

    npm run telemetry -- --target=local

    The script will:
    β€’ Download Jaeger and OTEL if needed.
    β€’ Start a local Jaeger instance.
    β€’ Start an OTEL collector configured to receive data from Gemini CLI.
    β€’ Automatically enable telemetry in your workspace settings.
    β€’ On exit, disable telemetry.

  2. View traces: Open your web browser and navigate to http://localhost:16686 to access the Jaeger UI. Here you can inspect detailed traces of Gemini CLI operations.

  3. Inspect logs and metrics: The script redirects the OTEL collector output (which includes logs and metrics) to ~/.gemini/tmp/<projectHash>/otel/collector.log. The script also provides links for viewing your telemetry data (traces, metrics, and logs) and a command to tail it locally.

  4. Stop the services: Press Ctrl+C in the terminal where the script is running to stop the OTEL Collector and Jaeger services.

Google Cloud

Use the npm run telemetry -- --target=gcp command to automate setting up a local OpenTelemetry collector that forwards data to your Google Cloud project, including configuring the necessary settings in your .gemini/settings.json file. The underlying script installs otelcol-contrib. To use it:

  1. Prerequisites:

    • Have a Google Cloud project ID.
    • Export the GOOGLE_CLOUD_PROJECT environment variable to make it available to the OTEL collector. bash export OTLP_GOOGLE_CLOUD_PROJECT="your-project-id"
    • Authenticate with Google Cloud (e.g., run gcloud auth application-default login or ensure GOOGLE_APPLICATION_CREDENTIALS is set).
    • Ensure your Google Cloud account/service account has the necessary IAM roles: "Cloud Trace Agent", "Monitoring Metric Writer", and "Logs Writer".
  2. Run the command: Execute the command from the root of the repository:

    npm run telemetry -- --target=gcp

    The script will:
    β€’ Download the otelcol-contrib binary if needed.
    β€’ Start an OTEL collector configured to receive data from Gemini CLI and export it to your specified Google Cloud project.
    β€’ Automatically enable telemetry and disable sandbox mode in your workspace settings (.gemini/settings.json).
    β€’ Provide direct links to view traces, metrics, and logs in your Google Cloud Console.
    β€’ On exit (Ctrl+C), attempt to restore your original telemetry and sandbox settings.

  3. Run Gemini CLI: In a separate terminal, run your Gemini CLI commands. This generates telemetry data that the collector captures.

  4. View telemetry in Google Cloud: Use the links provided by the script to navigate to the Google Cloud Console and view your traces, metrics, and logs.

  5. Inspect local collector logs: The script redirects the local OTEL collector output to ~/.gemini/tmp/<projectHash>/otel/collector-gcp.log. The script also provides links for viewing your collector logs and a command to tail them locally.

  6. Stop the service: Press Ctrl+C in the terminal where the script is running to stop the OTEL Collector.

Logs and metric reference

The following section describes the structure of logs and metrics generated for Gemini CLI.

Logs

Logs are timestamped records of specific events. The following events are logged for Gemini CLI:

Metrics

Metrics are numerical measurements of behavior over time. The following metrics are collected for Gemini CLI:

└── tools/

β”œβ”€β”€ file-system.md Content:

Gemini CLI file system tools

The Gemini CLI provides a comprehensive suite of tools for interacting with the local file system. These tools allow the Gemini model to read from, write to, list, search, and modify files and directories, all under your control and typically with confirmation for sensitive operations.

Note: All file system tools operate within a rootDirectory (usually the current working directory where you launched the CLI) for security. Paths that you provide to these tools are generally expected to be absolute or are resolved relative to this root directory.

1. list_directory (ReadFolder)

list_directory lists the names of files and subdirectories directly within a specified directory path. It can optionally ignore entries matching provided glob patterns.

2. read_file (ReadFile)

read_file reads and returns the content of a specified file. This tool handles text, images (PNG, JPG, GIF, WEBP, SVG, BMP), and PDF files. For text files, it can read specific line ranges. Other binary file types are generally skipped.

3. write_file (WriteFile)

write_file writes content to a specified file. If the file exists, it will be overwritten. If the file doesn't exist, it (and any necessary parent directories) will be created.

4. glob (FindFiles)

glob finds files matching specific glob patterns (e.g., src/**/*.ts, *.md), returning absolute paths sorted by modification time (newest first).

5. search_file_content (SearchText)

search_file_content searches for a regular expression pattern within the content of files in a specified directory. It can filter files by a glob pattern. It returns the lines containing matches, along with their file paths and line numbers.


Example output:

File: src/utils.ts
L15: export function myFunction() {
L22: myFunction.call();

File: src/index.ts
L5: import { myFunction } from './utils';

Confirmation: No.
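
An illustrative invocation, following the usage convention of the other tool pages in this guide (the parameter names pattern and include are assumptions based on the description above):

search_file_content(pattern="myFunction", include="*.ts")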

6. replace (Edit)

replace replaces text within a file. By default, replaces a single occurrence, but can replace multiple occurrences when expected_replacements is specified. This tool is designed for precise, targeted changes and requires significant context around the old_string to ensure it modifies the correct location.
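
An illustrative invocation in the same style (old_string and expected_replacements come from the description above; file_path and new_string are assumed parameter names):

replace(file_path="src/config.ts", old_string="const retries = 3;", new_string="const retries = 5;", expected_replacements=1)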

These file system tools provide a foundation for the Gemini CLI to understand and interact with your local project context.

β”œβ”€β”€ index.md Content:

Gemini CLI tools

The Gemini CLI includes built-in tools that the Gemini model uses to interact with your local environment, access information, and perform actions. These tools enhance the CLI's capabilities, enabling it to go beyond text generation and assist with a wide range of tasks.

Overview of Gemini CLI tools

In the context of the Gemini CLI, tools are specific functions or modules that the Gemini model can request to be executed. For example, if you ask Gemini to "Summarize the contents of my_document.txt," the model will likely identify the need to read that file and will request the execution of the read_file tool.

The core component (packages/core) manages these tools, presents their definitions (schemas) to the Gemini model, executes them when requested, and returns the results to the model for further processing into a user-facing response.

These tools provide the following capabilities:

How to use Gemini CLI tools

To use Gemini CLI tools, provide a prompt to the Gemini CLI. The process works as follows:

  1. You provide a prompt to the Gemini CLI.
  2. The CLI sends the prompt to the core.
  3. The core sends your prompt and conversation history, along with a list of available tools and their descriptions/schemas, to the Gemini API.
  4. The Gemini model analyzes your request. If it determines that a tool is needed, its response will include a request to execute a specific tool with certain parameters.
  5. The core receives this tool request, validates it, and (often after user confirmation for sensitive operations) executes the tool.
  6. The output from the tool is sent back to the Gemini model.
  7. The Gemini model uses the tool's output to formulate its final answer, which is then sent back through the core to the CLI and displayed to you.

You will typically see messages in the CLI indicating when a tool is being called and whether it succeeded or failed.

Security and confirmation

Many tools, especially those that can modify your file system or execute commands (write_file, edit, run_shell_command), are designed with safety in mind. The Gemini CLI will typically:

It's important to always review confirmation prompts carefully before allowing a tool to proceed.

Learn more about Gemini CLI's tools

Gemini CLI's built-in tools can be broadly categorized as follows:

Additionally, these tools incorporate:

β”œβ”€β”€ mcp-server.md Content:

MCP servers with the Gemini CLI

This document provides a guide to configuring and using Model Context Protocol (MCP) servers with the Gemini CLI.

What is an MCP server?

An MCP server is an application that exposes tools and resources to the Gemini CLI through the Model Context Protocol, allowing it to interact with external systems and data sources. MCP servers act as a bridge between the Gemini model and your local environment or other services like APIs.

An MCP server enables the Gemini CLI to:

With an MCP server, you can extend the Gemini CLI's capabilities to perform actions beyond its built-in features, such as interacting with databases, APIs, custom scripts, or specialized workflows.

Core Integration Architecture

The Gemini CLI integrates with MCP servers through a sophisticated discovery and execution system built into the core package (packages/core/src/tools/):

Discovery Layer (mcp-client.ts)

The discovery process is orchestrated by discoverMcpTools(), which:

  1. Iterates through configured servers from your settings.json mcpServers configuration
  2. Establishes connections using appropriate transport mechanisms (Stdio, SSE, or Streamable HTTP)
  3. Fetches tool definitions from each server using the MCP protocol
  4. Sanitizes and validates tool schemas for compatibility with the Gemini API
  5. Registers tools in the global tool registry with conflict resolution

Execution Layer (mcp-tool.ts)

Each discovered MCP tool is wrapped in a DiscoveredMCPTool instance that:

Transport Mechanisms

The Gemini CLI supports three MCP transport types: Stdio, SSE, and Streamable HTTP.

How to set up your MCP server

The Gemini CLI uses the mcpServers configuration in your settings.json file to locate and connect to MCP servers. This configuration supports multiple servers with different transport mechanisms.

Configure the MCP server in settings.json

You can configure MCP servers at the global level in the ~/.gemini/settings.json file, or at the project level by creating or opening the .gemini/settings.json file in your project's root directory. Within the file, add the mcpServers configuration block.

Configuration Structure

Add an mcpServers object to your settings.json file:

{ ...file contains other config objects
  "mcpServers": {
    "serverName": {
      "command": "path/to/server",
      "args": ["--arg1", "value1"],
      "env": {
        "API_KEY": "$MY_API_TOKEN"
      },
      "cwd": "./server-directory",
      "timeout": 30000,
      "trust": false
    }
  }
}

Configuration Properties

Each server configuration supports the following properties:

Required (one of the following)

  β€’ command: the executable to launch for a Stdio transport
  β€’ url: the endpoint for an SSE transport
  β€’ httpUrl: the endpoint for a Streamable HTTP transport

Optional

Optional properties include args, env, cwd, timeout, trust, headers, includeTools, excludeTools, authProviderType, and oauth, all of which appear in the examples below.

OAuth Support for Remote MCP Servers

The Gemini CLI supports OAuth 2.0 authentication for remote MCP servers using SSE or HTTP transports. This enables secure access to MCP servers that require authentication.

Automatic OAuth Discovery

For servers that support OAuth discovery, you can omit the OAuth configuration and let the CLI discover it automatically:

{
  "mcpServers": {
    "discoveredServer": {
      "url": "https://api.example.com/sse"
    }
  }
}

The CLI will automatically:

Authentication Flow

When connecting to an OAuth-enabled server:

  1. Initial connection attempt fails with 401 Unauthorized
  2. OAuth discovery finds authorization and token endpoints
  3. Browser opens for user authentication (requires local browser access)
  4. Authorization code is exchanged for access tokens
  5. Tokens are stored securely for future use
  6. Connection retry succeeds with valid tokens

Browser Redirect Requirements

Important: OAuth authentication requires that your local machine can:

This feature will not work in:

Managing OAuth Authentication

Use the /mcp auth command to manage OAuth authentication:

# List servers requiring authentication
/mcp auth

# Authenticate with a specific server
/mcp auth serverName

# Re-authenticate if tokens expire
/mcp auth serverName

OAuth Configuration Properties

Token Management

OAuth tokens are automatically:

Authentication Provider Type

You can specify the authentication provider type using the authProviderType property:

{
  "mcpServers": {
    "googleCloudServer": {
      "httpUrl": "https://my-gcp-service.run.app/mcp",
      "authProviderType": "google_credentials",
      "oauth": {
        "scopes": ["https://www.googleapis.com/auth/userinfo.email"]
      }
    }
  }
}

Example Configurations

Python MCP Server (Stdio)

{
  "mcpServers": {
    "pythonTools": {
      "command": "python",
      "args": ["-m", "my_mcp_server", "--port", "8080"],
      "cwd": "./mcp-servers/python",
      "env": {
        "DATABASE_URL": "$DB_CONNECTION_STRING",
        "API_KEY": "${EXTERNAL_API_KEY}"
      },
      "timeout": 15000
    }
  }
}

Node.js MCP Server (Stdio)

{
  "mcpServers": {
    "nodeServer": {
      "command": "node",
      "args": ["dist/server.js", "--verbose"],
      "cwd": "./mcp-servers/node",
      "trust": true
    }
  }
}

Docker-based MCP Server

{
  "mcpServers": {
    "dockerizedServer": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "API_KEY",
        "-v",
        "${PWD}:/workspace",
        "my-mcp-server:latest"
      ],
      "env": {
        "API_KEY": "$EXTERNAL_SERVICE_TOKEN"
      }
    }
  }
}

HTTP-based MCP Server

{
  "mcpServers": {
    "httpServer": {
      "httpUrl": "http://localhost:3000/mcp",
      "timeout": 5000
    }
  }
}

HTTP-based MCP Server with Custom Headers

{
  "mcpServers": {
    "httpServerWithAuth": {
      "httpUrl": "http://localhost:3000/mcp",
      "headers": {
        "Authorization": "Bearer your-api-token",
        "X-Custom-Header": "custom-value",
        "Content-Type": "application/json"
      },
      "timeout": 5000
    }
  }
}

MCP Server with Tool Filtering

{
  "mcpServers": {
    "filteredServer": {
      "command": "python",
      "args": ["-m", "my_mcp_server"],
      "includeTools": ["safe_tool", "file_reader", "data_processor"],
      // "excludeTools": ["dangerous_tool", "file_deleter"],
      "timeout": 30000
    }
  }
}

Discovery Process Deep Dive

When the Gemini CLI starts, it performs MCP server discovery through the following detailed process:

1. Server Iteration and Connection

For each configured server in mcpServers:

  1. Status tracking begins: Server status is set to CONNECTING
  2. Transport selection: Based on configuration properties:
    β€’ httpUrl β†’ StreamableHTTPClientTransport
    β€’ url β†’ SSEClientTransport
    β€’ command β†’ StdioClientTransport
  3. Connection establishment: The MCP client attempts to connect with the configured timeout
  4. Error handling: Connection failures are logged and the server status is set to DISCONNECTED

2. Tool Discovery

Upon successful connection:

  1. Tool listing: The client calls the MCP server's tool listing endpoint
  2. Schema validation: Each tool's function declaration is validated
  3. Tool filtering: Tools are filtered based on includeTools and excludeTools configuration
  4. Name sanitization: Tool names are cleaned to meet Gemini API requirements:
    β€’ Invalid characters (non-alphanumeric, underscore, dot, hyphen) are replaced with underscores
    β€’ Names longer than 63 characters are truncated with middle replacement (___)
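
A minimal TypeScript sketch of these two sanitization rules (the function name is hypothetical; the actual logic lives in mcp-client.ts and may differ in detail):

// Clean a tool name to meet the Gemini API requirements described above.
function sanitizeToolName(name: string): string {
  // Replace invalid characters (anything other than letters, digits, '_', '.', '-') with underscores.
  let sanitized = name.replace(/[^A-Za-z0-9_.-]/g, '_');
  // Truncate names longer than 63 characters, replacing the middle with '___'.
  if (sanitized.length > 63) {
    sanitized = sanitized.slice(0, 30) + '___' + sanitized.slice(-30);
  }
  return sanitized;
}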

3. Conflict Resolution

When multiple servers expose tools with the same name:

  1. First registration wins: The first server to register a tool name gets the unprefixed name
  2. Automatic prefixing: Subsequent servers get prefixed names: serverName__toolName
  3. Registry tracking: The tool registry maintains mappings between server names and their tools

4. Schema Processing

Tool parameter schemas undergo sanitization for Gemini API compatibility:

5. Connection Management

After discovery:

Tool Execution Flow

When the Gemini model decides to use an MCP tool, the following execution flow occurs:

1. Tool Invocation

The model generates a FunctionCall with:

2. Confirmation Process

Each DiscoveredMCPTool implements sophisticated confirmation logic:

Trust-based Bypass

if (this.trust) {
  return false; // No confirmation needed
}

Dynamic Allow-listing

The system maintains internal allow-lists for:

User Choice Handling

When confirmation is required, users can choose:

3. Execution

Upon confirmation (or trust bypass):

  1. Parameter preparation: Arguments are validated against the tool's schema
  2. MCP call: The underlying CallableTool invokes the server with:

const functionCalls = [
  {
    name: this.serverToolName, // Original server tool name
    args: params,
  },
];

  3. Response processing: Results are formatted for both LLM context and user display

4. Response Handling

The execution result contains:

How to interact with your MCP server

Using the /mcp Command

The /mcp command provides comprehensive information about your MCP server setup:

/mcp

This displays:

Example /mcp Output

MCP Servers Status:

πŸ“‘ pythonTools (CONNECTED)
  Command: python -m my_mcp_server --port 8080
  Working Directory: ./mcp-servers/python
  Timeout: 15000ms
  Tools: calculate_sum, file_analyzer, data_processor

πŸ”Œ nodeServer (DISCONNECTED)
  Command: node dist/server.js --verbose
  Error: Connection refused

🐳 dockerizedServer (CONNECTED)
  Command: docker run -i --rm -e API_KEY my-mcp-server:latest
  Tools: docker__deploy, docker__status

Discovery State: COMPLETED

Tool Usage

Once discovered, MCP tools are available to the Gemini model like built-in tools. The model will automatically:

  1. Select appropriate tools based on your requests
  2. Present confirmation dialogs (unless the server is trusted)
  3. Execute tools with proper parameters
  4. Display results in a user-friendly format

Status Monitoring and Troubleshooting

Connection States

The MCP integration tracks several states:

Server Status (MCPServerStatus)

Each server reports one of the connection statuses used throughout this guide: CONNECTING, CONNECTED, or DISCONNECTED.

Discovery State (MCPDiscoveryState)

Common Issues and Solutions

Server Won't Connect

Symptoms: Server shows DISCONNECTED status

Troubleshooting:

  1. Check configuration: Verify command, args, and cwd are correct
  2. Test manually: Run the server command directly to ensure it works
  3. Check dependencies: Ensure all required packages are installed
  4. Review logs: Look for error messages in the CLI output
  5. Verify permissions: Ensure the CLI can execute the server command

No Tools Discovered

Symptoms: Server connects but no tools are available

Troubleshooting:

  1. Verify tool registration: Ensure your server actually registers tools
  2. Check MCP protocol: Confirm your server implements the MCP tool listing correctly
  3. Review server logs: Check stderr output for server-side errors
  4. Test tool listing: Manually test your server's tool discovery endpoint

Tools Not Executing

Symptoms: Tools are discovered but fail during execution

Troubleshooting:

  1. Parameter validation: Ensure your tool accepts the expected parameters
  2. Schema compatibility: Verify your input schemas are valid JSON Schema
  3. Error handling: Check if your tool is throwing unhandled exceptions
  4. Timeout issues: Consider increasing the timeout setting

Sandbox Compatibility

Symptoms: MCP servers fail when sandboxing is enabled

Solutions:

  1. Docker-based servers: Use Docker containers that include all dependencies
  2. Path accessibility: Ensure server executables are available in the sandbox
  3. Network access: Configure sandbox to allow necessary network connections
  4. Environment variables: Verify required environment variables are passed through

Debugging Tips

  1. Enable debug mode: Run the CLI with --debug for verbose output
  2. Check stderr: MCP server stderr is captured and logged (INFO messages filtered)
  3. Test isolation: Test your MCP server independently before integrating
  4. Incremental setup: Start with simple tools before adding complex functionality
  5. Use /mcp frequently: Monitor server status during development

Important Notes

Security Considerations

Performance and Resource Management

Schema Compatibility

This comprehensive integration makes MCP servers a powerful way to extend the Gemini CLI's capabilities while maintaining security, reliability, and ease of use.

Returning Rich Content from Tools

MCP tools are not limited to returning simple text. You can return rich, multi-part content, including text, images, audio, and other binary data in a single tool response. This allows you to build powerful tools that can provide diverse information to the model in a single turn.

All data returned from the tool is processed and sent to the model as context for its next generation, enabling it to reason about or summarize the provided information.

How It Works

To return rich content, your tool's response must adhere to the MCP specification for a CallToolResult. The content field of the result should be an array of ContentBlock objects. The Gemini CLI will correctly process this array, separating text from binary data and packaging it for the model.

You can mix and match different content block types in the content array. The supported block types include:

Example: Returning Text and an Image

Here is an example of a valid JSON response from an MCP tool that returns both a text description and an image:

{
  "content": [
    {
      "type": "text",
      "text": "Here is the logo you requested."
    },
    {
      "type": "image",
      "data": "BASE64_ENCODED_IMAGE_DATA_HERE",
      "mimeType": "image/png"
    },
    {
      "type": "text",
      "text": "The logo was created in 2025."
    }
  ]
}

When the Gemini CLI receives this response, it will:

  1. Extract all the text and combine it into a single functionResponse part for the model.
  2. Present the image data as a separate inlineData part.
  3. Provide a clean, user-friendly summary in the CLI, indicating that both text and an image were received.

This enables you to build sophisticated tools that can provide rich, multi-modal context to the Gemini model.

MCP Prompts as Slash Commands

In addition to tools, MCP servers can expose predefined prompts that can be executed as slash commands within the Gemini CLI. This allows you to create shortcuts for common or complex queries that can be easily invoked by name.

Defining Prompts on the Server

Here's a small example of a stdio MCP server that defines prompts:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({
  name: 'prompt-server',
  version: '1.0.0',
});

server.registerPrompt(
  'poem-writer',
  {
    title: 'Poem Writer',
    description: 'Write a nice haiku',
    argsSchema: { title: z.string(), mood: z.string().optional() },
  },
  ({ title, mood }) => ({
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: `Write a haiku${mood ? ` with the mood ${mood}` : ''} called ${title}. Note that a haiku is 5 syllables followed by 7 syllables followed by 5 syllables `,
        },
      },
    ],
  }),
);

const transport = new StdioServerTransport();
await server.connect(transport);

This can be included in settings.json under mcpServers with:

"nodeServer": {
  "command": "node",
  "args": ["filename.ts"],
}

Invoking Prompts

Once a prompt is discovered, you can invoke it using its name as a slash command. The CLI will automatically handle parsing arguments.

/poem-writer --title="Gemini CLI" --mood="reverent"

or, using positional arguments:

/poem-writer "Gemini CLI" reverent

When you run this command, the Gemini CLI executes the prompts/get method on the MCP server with the provided arguments. The server is responsible for substituting the arguments into the prompt template and returning the final prompt text. The CLI then sends this prompt to the model for execution. This provides a convenient way to automate and share common workflows.

Managing MCP Servers with gemini mcp

While you can always configure MCP servers by manually editing your settings.json file, the Gemini CLI provides a convenient set of commands to manage your server configurations programmatically. These commands streamline the process of adding, listing, and removing MCP servers without needing to directly edit JSON files.

Adding a Server (gemini mcp add)

The add command configures a new MCP server in your settings.json. Based on the scope (-s, --scope), it will be added to either the user config ~/.gemini/settings.json or the project config .gemini/settings.json file.

Command:

gemini mcp add [options] <name> <commandOrUrl> [args...]

Options (Flags):

Adding an stdio server

This is the default transport for running local servers.

# Basic syntax
gemini mcp add <name> <command> [args...]

# Example: Adding a local server
gemini mcp add my-stdio-server -e API_KEY=123 /path/to/server arg1 arg2 arg3

# Example: Adding a local python server
gemini mcp add python-server python server.py --port 8080

Adding an HTTP server

This transport is for servers that use the streamable HTTP transport.

# Basic syntax
gemini mcp add --transport http <name> <url>

# Example: Adding an HTTP server
gemini mcp add --transport http http-server https://api.example.com/mcp/

# Example: Adding an HTTP server with an authentication header
gemini mcp add --transport http secure-http https://api.example.com/mcp/ --header "Authorization: Bearer abc123"

Adding an SSE server

This transport is for servers that use Server-Sent Events (SSE).

# Basic syntax
gemini mcp add --transport sse <name> <url>

# Example: Adding an SSE server
gemini mcp add --transport sse sse-server https://api.example.com/sse/

# Example: Adding an SSE server with an authentication header
gemini mcp add --transport sse secure-sse https://api.example.com/sse/ --header "Authorization: Bearer abc123"

Listing Servers (gemini mcp list)

To view all MCP servers currently configured, use the list command. It displays each server's name, configuration details, and connection status.

Command:

gemini mcp list

Example Output:

βœ“ stdio-server: command: python3 server.py (stdio) - Connected
βœ“ http-server: https://api.example.com/mcp (http) - Connected
βœ— sse-server: https://api.example.com/sse (sse) - Disconnected

Removing a Server (gemini mcp remove)

To delete a server from your configuration, use the remove command with the server's name.

Command:

gemini mcp remove <name>

Example:

gemini mcp remove my-server

This will find and delete the "my-server" entry from the mcpServers object in the appropriate settings.json file based on the scope (-s, --scope).
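
For example, to remove a server from the project-level configuration (assuming the scope flag accepts the same values as gemini mcp add; the value project here is an assumption):

gemini mcp remove -s project my-server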

β”œβ”€β”€ memory.md Content:

Memory Tool (save_memory)

This document describes the save_memory tool for the Gemini CLI.

Description

Use save_memory to save and recall information across your Gemini CLI sessions. With save_memory, you can direct the CLI to remember key details across sessions, providing personalized and directed assistance.

Arguments

save_memory takes one argument: fact (string, required), the specific fact or piece of information to remember, phrased as a clear, self-contained statement.

How to use save_memory with the Gemini CLI

The tool appends the provided fact to a special GEMINI.md file located in the user's home directory (~/.gemini/GEMINI.md). This file can be configured to have a different name.

Once added, the facts are stored under a ## Gemini Added Memories section. This file is loaded as context in subsequent sessions, allowing the CLI to recall the saved information.

Usage:

save_memory(fact="Your fact here.")

save_memory examples

Remember a user preference:

save_memory(fact="My preferred programming language is Python.")

Store a project-specific detail:

save_memory(fact="The project I'm currently working on is called 'gemini-cli'.")

Important notes

β”œβ”€β”€ multi-file.md Content:

Multi File Read Tool (read_many_files)

This document describes the read_many_files tool for the Gemini CLI.

Description

Use read_many_files to read content from multiple files specified by paths or glob patterns. The behavior of this tool depends on the provided files:

read_many_files can be used to perform tasks such as getting an overview of a codebase, finding where specific functionality is implemented, reviewing documentation, or gathering context from multiple configuration files.

Note: read_many_files looks for files following the provided paths or glob patterns. A directory path such as "/docs" will return an empty result; the tool requires a pattern such as "/docs/*" or "/docs/*.md" to identify the relevant files.

Arguments

read_many_files takes the following arguments: paths (required), plus the optional include, exclude, recursive, useDefaultExcludes, and respect_git_ignore parameters shown in the usage line below.

How to use read_many_files with the Gemini CLI

read_many_files searches for files matching the provided paths and include patterns, while respecting exclude patterns and default excludes (if enabled).

Usage:

read_many_files(paths=["Your files or paths here."], include=["Additional files to include."], exclude=["Files to exclude."], recursive=False, useDefaultExcludes=False, respect_git_ignore=True)

read_many_files examples

Read all TypeScript files in the src directory:

read_many_files(paths=["src/**/*.ts"])

Read the main README, all Markdown files in the docs directory, and a specific logo image, excluding a specific file:

read_many_files(paths=["README.md", "docs/**/*.md", "assets/logo.png"], exclude=["docs/OLD_README.md"])

Read all JavaScript files but explicitly include test files and all JPEGs in an images folder:

read_many_files(paths=["**/*.js"], include=["**/*.test.js", "images/**/*.jpg"], useDefaultExcludes=False)

Important notes

├── shell.md Content:

Shell Tool (run_shell_command)

This document describes the run_shell_command tool for the Gemini CLI.

Description

Use run_shell_command to interact with the underlying system, run scripts, or perform command-line operations. run_shell_command executes a given shell command. On Windows, the command will be executed with cmd.exe /c. On other platforms, the command will be executed with bash -c.
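Because the shell differs by platform, write commands in the syntax of the shell that will run them. For example (illustrative; the environment variables are the standard ones for each platform):

# macOS / Linux (runs under bash -c)
run_shell_command(command="echo $HOME")

# Windows (runs under cmd.exe /c)
run_shell_command(command="echo %USERPROFILE%")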

Arguments

run_shell_command takes the following arguments:

- command (string, required): The exact shell command to execute.
- description (string, optional): A brief description of the command's purpose, shown to the user.
- directory (string, optional): The directory in which to execute the command; defaults to the current working directory.

How to use run_shell_command with the Gemini CLI

When using run_shell_command, the command is executed as a subprocess. run_shell_command can start background processes using &. The tool returns detailed information about the execution, including the command's stdout and stderr, its exit code or terminating signal, and the IDs of any background processes it started.

Usage:

run_shell_command(command="Your commands.", description="Your description of the command.", directory="Your execution directory.")

run_shell_command examples

List files in the current directory:

run_shell_command(command="ls -la")

Run a script in a specific directory:

run_shell_command(command="./my_script.sh", directory="scripts", description="Run my custom script")

Start a background server:

run_shell_command(command="npm run dev &", description="Start development server in background")

Important notes

Environment Variables

When run_shell_command executes a command, it sets the GEMINI_CLI=1 environment variable in the subprocess's environment. This allows scripts or tools to detect if they are being run from within the Gemini CLI.
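For example, a shell script can branch on this variable (a minimal sketch):

# Prints a different message when launched by the Gemini CLI
if [ "$GEMINI_CLI" = "1" ]; then
  echo "Running inside the Gemini CLI"
else
  echo "Running in a regular shell"
fi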

Command Restrictions

You can restrict the commands that can be executed by the run_shell_command tool by using the coreTools and excludeTools settings in your configuration file.

The validation logic is designed to be secure and flexible:

  1. Chained Command Validation: The tool splits commands chained with &&, ||, or ; and validates each part separately. If any part of the chain is disallowed, the entire command is blocked (see the worked example after this list).
  2. Prefix Matching: Commands are validated by prefix. For example, if you allow git, you can run git status or git log.
  3. Blocklist Precedence: The excludeTools list is always checked first. If a command matches a blocked prefix, it will be denied, even if it also matches an allowed prefix in coreTools.
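For example, with "coreTools": ["run_shell_command(git)"], the chained command below is blocked: git status matches the allowed git prefix, but the second part does not, so the whole chain is denied (the path here is illustrative).

git status && rm -rf /tmp/scratch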

Command Restriction Examples

Allow only specific command prefixes

To allow only git and npm commands, and block all others:

{
  "coreTools": ["run_shell_command(git)", "run_shell_command(npm)"]
}

Block specific command prefixes

To block rm and allow all other commands:

{
  "coreTools": ["run_shell_command"],
  "excludeTools": ["run_shell_command(rm)"]
}

Blocklist takes precedence

If a command prefix is in both coreTools and excludeTools, it will be blocked. In the following example, git push is denied while other git commands, such as git status, remain allowed:

{
  "coreTools": ["run_shell_command(git)"],
  "excludeTools": ["run_shell_command(git push)"]
}

Block all shell commands

To block all shell commands, add run_shell_command itself to excludeTools:

{
  "excludeTools": ["run_shell_command"]
}

Security Note for excludeTools

Command-specific restrictions in excludeTools for run_shell_command are based on simple string matching and can easily be bypassed. This feature is not a security mechanism and should not be relied upon to safely execute untrusted code. Instead, use coreTools to explicitly allowlist the commands that can be executed.
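For example, an allowlist that only permits a few specific prefixes is safer than a blocklist (a sketch; adjust the prefixes to your needs):

{
  "coreTools": [
    "run_shell_command(git status)",
    "run_shell_command(git log)",
    "run_shell_command(ls)"
  ]
}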

├── web-fetch.md Content:

Web Fetch Tool (web_fetch)

This document describes the web_fetch tool for the Gemini CLI.

Description

Use web_fetch to summarize, compare, or extract information from web pages. The web_fetch tool processes content from one or more URLs (up to 20) embedded in a prompt. web_fetch takes a natural language prompt and returns a generated response.

Arguments

web_fetch takes one argument:

- prompt (string, required): A natural language prompt containing one or more URLs (up to 20) and instructions for how to process their content.

How to use web_fetch with the Gemini CLI

To use web_fetch with the Gemini CLI, provide a natural language prompt that contains URLs. The tool asks for confirmation before fetching any URLs. Once confirmed, it processes the URLs through the Gemini API's urlContext capability.

If the Gemini API cannot access a URL, the tool falls back to fetching the content directly from the local machine. It formats the response, including source attribution and citations where possible, and returns it to the user.

Usage:

web_fetch(prompt="Your prompt, including a URL such as https://google.com.")

web_fetch examples

Summarize a single article:

web_fetch(prompt="Can you summarize the main points of https://example.com/news/latest")

Compare two articles:

web_fetch(prompt="What are the differences in the conclusions of these two papers: https://arxiv.org/abs/2401.0001 and https://arxiv.org/abs/2401.0002?")

Important notes

├── web-search.md Content:

Web Search Tool (google_web_search)

This document describes the google_web_search tool.

Description

Use google_web_search to perform a web search using Google Search via the Gemini API. The google_web_search tool returns a summary of web results with sources.

Arguments

google_web_search takes one argument:

- query (string, required): The search query to send to Google Search.

How to use google_web_search with the Gemini CLI

The google_web_search tool sends a query to the Gemini API, which then performs a web search. google_web_search will return a generated response based on the search results, including citations and sources.

Usage:

google_web_search(query="Your query goes here.")

google_web_search examples

Get information on a topic:

google_web_search(query="latest advancements in AI-powered code generation")

Important notes

├── tos-privacy.md Content:

Gemini CLI: Terms of Service and Privacy Notice

Gemini CLI is an open-source tool that lets you interact with Google's powerful language models directly from your command-line interface. The Terms of Service and Privacy Notices that apply to your usage of the Gemini CLI depend on the type of account you use to authenticate with Google.

This article outlines the specific terms and privacy policies applicable for different account types and authentication methods. Note: See quotas and pricing for the quota and pricing details that apply to your usage of the Gemini CLI.

How to determine your authentication method

Your authentication method refers to the method you use to log into and access the Gemini CLI. There are four ways to authenticate:

- Logging in with your Google account to Gemini Code Assist for Individuals
- Logging in with your Google account to Gemini Code Assist for Workspace, Standard, or Enterprise
- Using a Gemini API key with the Gemini Developer API
- Using a Gemini API key with the Vertex AI GenAI API

For each of these four methods of authentication, different Terms of Service and Privacy Notices may apply.

| Authentication | Account | Terms of Service | Privacy Notice |
| --- | --- | --- | --- |
| Gemini Code Assist via Google | Individual | Google Terms of Service | Gemini Code Assist Privacy Notice for Individuals |
| Gemini Code Assist via Google | Standard/Enterprise | Google Cloud Platform Terms of Service | Gemini Code Assist Privacy Notice for Standard and Enterprise |
| Gemini Developer API | Unpaid | Gemini API Terms of Service - Unpaid Services | Google Privacy Policy |
| Gemini Developer API | Paid | Gemini API Terms of Service - Paid Services | Google Privacy Policy |
| Vertex AI GenAI API | | Google Cloud Platform Service Terms | Google Cloud Privacy Notice |

1. If you have logged in with your Google account to Gemini Code Assist for Individuals

For users who use their Google account to access Gemini Code Assist for Individuals, these Terms of Service and Privacy Notice documents apply:

- Terms of Service: Google Terms of Service
- Privacy Notice: Gemini Code Assist Privacy Notice for Individuals

2. If you have logged in with your Google account to Gemini Code Assist for Workspace, Standard, or Enterprise Users

For users who use their Google account to access the Standard or Enterprise edition of Gemini Code Assist, these Terms of Service and Privacy Notice documents apply:

- Terms of Service: Google Cloud Platform Terms of Service
- Privacy Notice: Gemini Code Assist Privacy Notice for Standard and Enterprise

3. If you have logged in with a Gemini API key to the Gemini Developer API

If you are using a Gemini API key for authentication with the Gemini Developer API, these Terms of Service and Privacy Notice documents apply:

- Terms of Service: Gemini API Terms of Service - Unpaid Services (for unpaid usage) or Gemini API Terms of Service - Paid Services (for paid usage)
- Privacy Notice: Google Privacy Policy

4. If you have logged in with a Gemini API key to the Vertex AI GenAI API

If you are using a Gemini API key for authentication with a Vertex AI GenAI API backend, these Terms of Service and Privacy Notice documents apply:

- Terms of Service: Google Cloud Platform Service Terms
- Privacy Notice: Google Cloud Privacy Notice

Usage Statistics Opt-Out

You may opt out of sending Usage Statistics to Google by following the instructions available here: Usage Statistics Configuration.
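In practice this is a single boolean in settings.json (a sketch; see the Usage Statistics Configuration page for the authoritative key name and file location):

{
  "usageStatisticsEnabled": false
}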

Frequently Asked Questions (FAQ) for the Gemini CLI

1. Is my code, including prompts and answers, used to train Google's models?

Whether your code, including prompts and answers, is used to train Google's models depends on the type of authentication method you use and your account type.

By default (if you have not opted out):

For more information about opting out, refer to the next question.

2. What are Usage Statistics and what does the opt-out control?

The Usage Statistics setting is the single control for all optional data collection in the Gemini CLI.

The data it collects depends on your account and authentication type:

Please refer to the Privacy Notice that applies to your authentication method for more information about what data is collected and how this data is used.

You can disable Usage Statistics for any account type by following the instructions in the Usage Statistics Configuration documentation.

├── troubleshooting.md Content:

Troubleshooting guide

This guide provides solutions to common issues and debugging tips, including topics on:

- Authentication or login errors
- Frequently asked questions (FAQs)
- Common error messages and solutions
- Debugging tips
- Searching existing GitHub Issues similar to yours, or creating new Issues

If you encounter an issue that is not covered in this troubleshooting guide, consider searching the Gemini CLI Issue tracker on GitHub. If you can't find a similar issue, consider creating a new GitHub Issue with a detailed description. Pull requests are also welcome!