# Frequently Asked Questions

## General Questions

### What is Codex MCP Tool?
Codex MCP Tool is a Model Context Protocol (MCP) server that bridges OpenAI's Codex CLI with MCP-compatible clients like Claude Desktop and Claude Code. It enables seamless AI-powered code analysis, generation, and brainstorming directly within your development environment.
### Why use Codex MCP Tool instead of Codex CLI directly?

Benefits of using Codex MCP Tool:

- **Non-interactive execution** - No manual prompts or confirmations needed
- **Integration** - Works seamlessly with Claude Desktop/Code
- **File references** - Use the `@` syntax to include files easily
- **Structured output** - Get organized responses with `changeMode`
- **Progress tracking** - Real-time updates for long operations
- **Multiple tools** - Access specialized tools beyond basic prompts
### Which AI models are supported?

Currently supported OpenAI models:

- `gpt-5.2-codex` - Default; latest frontier agentic coding model
- `gpt-5.1-codex-max` - Deep and fast reasoning
- `gpt-5.1-codex-mini` - Fast and cost-effective
- `gpt-5.2` - Broad knowledge, reasoning, and coding
### Is this an official OpenAI tool?
No, this is a community-developed integration tool. It uses the official Codex CLI but is not directly affiliated with or endorsed by OpenAI.
## Installation & Setup

### What are the prerequisites?
- Node.js >= 18.0.0
- Codex CLI installed and authenticated
- MCP client (Claude Desktop or Claude Code)
- OpenAI API access with appropriate model permissions
### How do I install Codex MCP Tool?

For Claude Code (recommended):
```bash
claude mcp add codex-cli -- npx -y @trishchuk/codex-mcp-tool
```

For Claude Desktop:

```bash
npm install -g @trishchuk/codex-mcp-tool
```

Then add the server to your configuration file.
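For example, a minimal entry might look like this (mirroring the Claude Code command above):

```json
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"]
    }
  }
}
```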
### Where is the configuration file located?

Claude Desktop configuration locations:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/claude/claude_desktop_config.json`
### How do I verify the installation?

Test with these commands in your MCP client:

```
// Test connectivity
/codex-cli:ping "Hello"

// Check help
/codex-cli:help

// Simple task
/codex-cli:ask-codex "explain what Python decorators are"
```

## Usage Questions
### How do I reference files in my prompts?

Use the `@` syntax:

```javascript
// Single file
'explain @src/main.ts';

// Multiple files
'compare @src/old.ts @src/new.ts';

// Glob patterns
'review @src/*.ts';
'analyze @src/**/*.ts';

// Paths containing spaces (use quotes)
"@\"My Documents/project/file.ts\"";
```

### What's the difference between sandbox modes?
| Mode | Read | Write | Delete | Execute | Use Case |
|---|---|---|---|---|---|
| `read-only` | ✅ | ❌ | ❌ | ❌ | Analysis, reviews |
| `workspace-write` | ✅ | ✅ | ⚠️ | ❌ | Refactoring, generation |
| `danger-full-access` | ✅ | ✅ | ✅ | ✅ | Full automation |
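To request a specific mode, a call might look like the sketch below; the `sandbox` argument name is an assumption, so verify it against the tool schema:

```json
{
  "name": "ask-codex",
  "arguments": {
    "prompt": "review @src/*.ts for unused exports",
    "sandbox": "read-only" // assumed argument name; verify against the tool schema
  }
}
```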
### How do I use different models?

Specify the model in your request:

```json
{
  "name": "ask-codex",
  "arguments": {
    "prompt": "your task",
    "model": "gpt-5.1-codex-max" // or "gpt-5.1-codex", "gpt-5.1-codex-mini"
  }
}
```

### What is changeMode?
`changeMode` returns structured file edits instead of conversational responses:

```json
{
  "prompt": "refactor this code",
  "changeMode": true
}
```

The response contains OLD/NEW edit blocks that can be applied directly.

### How do I handle large responses?
For large `changeMode` responses, use chunking:

```json
// Initial request returns a cacheKey
{ "prompt": "large refactor", "changeMode": true }

// Fetch subsequent chunks
{ "cacheKey": "abc123", "chunkIndex": 2 }
```

## Tools & Features
### What tools are available?

- `ask-codex` - Execute Codex commands with file references
- `brainstorm` - Generate ideas with structured methodologies
- `ping` - Test connectivity
- `help` - Show Codex CLI help
- `fetch-chunk` - Retrieve cached response chunks
- `timeout-test` - Test long-running operations
### How does the brainstorm tool work?

The brainstorm tool offers multiple methodologies:

```json
{
  "prompt": "ways to improve performance",
  "methodology": "scamper", // or "divergent", "convergent", etc.
  "domain": "backend",
  "ideaCount": 10,
  "includeAnalysis": true
}
```

### Can I create custom tools?
Yes! Create a new tool in `src/tools/`:

```typescript
// src/tools/my-tool.tool.ts
import { z } from 'zod';
import type { UnifiedTool } from '../types.js'; // import path is illustrative; match the project's structure

const MyToolSchema = z.object({ prompt: z.string() });

export const myTool: UnifiedTool = {
  name: 'my-tool',
  description: 'My custom tool',
  schema: MyToolSchema,
  async execute(args, progress) {
    // Implementation
  },
};
```

### How do progress notifications work?
Long-running operations send progress updates every 25 seconds:

```typescript
// Inside a tool's execute implementation
progress?.('Processing file 1 of 10...');
progress?.('Analyzing dependencies...');
progress?.('Generating output...');
```

## Security & Privacy
### Is my code sent to OpenAI?
Yes, when you use Codex MCP Tool, your prompts and referenced files are sent to OpenAI's API for processing. Ensure you:
- Don't include sensitive data
- Review OpenAI's data usage policies
- Use appropriate sandbox modes
### How do approval policies work?
| Policy | Description | Use Case |
|---|---|---|
| `never` | No approvals needed | Trusted automation |
| `on-request` | Approve each action | Careful operation |
| `on-failure` | Approve on errors | Semi-automated |
| `untrusted` | Always require approval | Maximum safety |
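To request a specific policy, a call might look like the sketch below; the `approvalPolicy` argument name is an assumption, so verify it against the tool schema:

```json
{
  "name": "ask-codex",
  "arguments": {
    "prompt": "clean up unused imports in @src/utils.ts",
    "approvalPolicy": "on-failure" // assumed argument name; verify against the tool schema
  }
}
```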
### Can I use this in production?
Codex MCP Tool is designed for development environments. For production:
- Review security implications
- Implement proper access controls
- Monitor API usage and costs
- Consider rate limiting
### Are API keys stored securely?

API keys should be:

- Set via environment variables (`OPENAI_API_KEY`) - see the example below
- Never committed to version control
- Managed through Codex CLI authentication
- Rotated regularly
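For example (placeholder key shown; never hardcode a real key):

```bash
# Add to your shell profile (e.g., ~/.zshrc or ~/.bashrc)
export OPENAI_API_KEY="sk-..."
```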
## Troubleshooting

### Why is the tool not responding?

Check:

- Codex CLI is installed: `codex --version`
- Authentication is valid: `codex auth status`
- The MCP server is running: restart your client
- Configuration syntax is correct
### How do I enable debug logging?

```bash
# Enable all debug output
DEBUG=* npx @trishchuk/codex-mcp-tool

# Enable specific modules
DEBUG=codex-mcp:* npx @trishchuk/codex-mcp-tool
```

### What if I get "model not available"?
- Check available models: `codex models list`
- Verify API access permissions
- Try a different model (e.g., `gpt-5.1-codex-mini`)
- Check OpenAI account status
### How do I report bugs?
- Check existing issues
- Gather diagnostic information
- Use the bug report template
- Include reproducible steps
## Advanced Topics

### Can I use multiple MCP servers simultaneously?

Yes, configure multiple servers in your MCP client:

```json
{
  "mcpServers": {
    "codex-cli": { ... },
    "another-server": { ... }
  }
}
```

### How do I contribute to the project?
See our Contributing Guide:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
### What's the difference between Codex models?

- `gpt-5.1-codex-max` - Most capable; highest reliability for coding
- `gpt-5.1-codex` - Optimized for Codex tasks
- `gpt-5.1-codex-mini` - Faster and more cost-effective
- `gpt-5.1` - General purpose with broad reasoning capabilities
### Can I use this with other AI providers?
Currently, Codex MCP Tool is designed for OpenAI's models via Codex CLI. For other providers, consider:
- Forking and adapting the codebase
- Using different MCP servers
- Contributing multi-provider support
## Performance & Optimization

### How can I improve response times?

- **Use faster models**: `gpt-5.1-codex-mini` is the fastest
- **Be specific with file references**: avoid broad globs
- **Enable caching**: reuse common analyses
- **Process in batches**: break large tasks into smaller requests (see the example below)
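For instance, instead of one sweeping glob, split an analysis into targeted requests (the prompts below are illustrative):

```json
// One broad request (slow): { "prompt": "review @src/**/*.ts" }
// Split into focused batches (faster):
{ "prompt": "review @src/api/*.ts for error handling" }
{ "prompt": "review @src/db/*.ts for error handling" }
```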
### What are the context limits?
| Model | Context Window | Recommended Max |
|---|---|---|
| gpt-5.1-codex-max | Extended | Varies |
| gpt-5.1-codex | Extended | Varies |
| gpt-5.1-codex-mini | Standard | Varies |
| gpt-5.1 | Extended | Varies |
### How do I estimate costs?

A rough estimate is the total token count (prompt + referenced file content + response) multiplied by the model's per-token price:

```typescript
// Rough cost calculation (all lengths in tokens)
function estimateCost(promptTokens: number, fileTokens: number, responseTokens: number, pricePer1k: number): number {
  const tokens = promptTokens + fileTokens + responseTokens;
  return (tokens / 1000) * pricePer1k;
}

// Model prices vary - check OpenAI's pricing page for current rates:
// gpt-5.1-codex-max: premium pricing
// gpt-5.1-codex: standard pricing
// gpt-5.1-codex-mini: economy pricing
```

## Future & Roadmap
### What features are planned?
- Streaming responses
- Local model support
- Enhanced caching
- Web UI interface
- Custom model configurations
- Batch processing
- Team collaboration features
### How do I request features?
- Check existing requests
- Use the feature request template
- Provide use cases and examples
### Is there a roadmap?
Check our GitHub Projects for planned features and progress.