Show HN: iterm-mcp – AI Terminal/REPL Control for iTerm2 https://ift.tt/iyRzJrj

Hi HN! Ever wish you could just point your AI assistant at your terminal and say "what's wrong with this output?" That's why I built iterm-mcp. It lets MCP clients like Claude Desktop interact directly with your iTerm2 terminal: reading logs, running commands, using REPLs, and helping debug issues. Want to explore data or debug using a REPL? The AI can start the REPL, run commands, and help interpret the results.

This is an MCP server that integrates with Claude Desktop, LibreChat, and other Model Context Protocol compatible clients. https://ift.tt/bVe2l9K

Note: Independent project, not officially affiliated with iTerm2.

## Features

*Efficient Token Use:* iterm-mcp gives the model the ability to inspect only the output it's interested in. The model typically only wants to see the last few lines of output, even for long-running commands.

*Natural Integration:* You share iTerm with the model. You can ask questions about what's on the screen, or delegate a task to the model and watch as it performs each step.

*Full Terminal Control and REPL Support:* The model can start and interact with REPLs, and can send control characters like ctrl-c, ctrl-z, etc.

*Easy on the Dependencies:* iterm-mcp is built with minimal dependencies and is runnable via npx. It's designed to be easy to add to Claude Desktop and other MCP clients. It should just work.

## Real-World Example: Debugging Sidekiq Jobs

I needed to debug a Sidekiq job with complex arguments that were partially obfuscated in the logs. I asked Claude: "open rails console, show me arguments for the latest XYZ job". The model:

1. Launched Rails console
2. Retrieved job details
3. Displayed the arguments I was looking for

## Architectural Journey

This project had a couple of interesting constraints around command execution:

### 1. Token Efficiency Challenge

I wanted to constrain token use as much as possible. I didn't want to send the entire output of a long-running command to the model, but there's no great way to know which parts of the output matter for what the model is doing. Sampling could be used here, but it's not well supported yet.

*Solution:* I arrived at a pull-based approach, sketched after this section. The command from the model is sent to the terminal, and the model is told how many lines of output were generated. The model can then retrieve as many lines of the buffer as it thinks are relevant.

### 2. Long-Running Process Support

I wanted to support long-running processes. It turns out that when you run `brew install ffmpeg`, it takes a while, and it's not always clear when the job is done. In early proofs of concept, the model would assume the command had completed successfully and begin sending additional commands to the terminal before the first command had finished.

*Solution:* iTerm provides a way to ask whether the terminal is waiting for user input, but I found it tended to produce false positives in certain situations. For example, a long-running command would result in iTerm reporting that the terminal was waiting for input when the command was in fact still running. Instead, I found that inspecting the processes associated with the terminal and waiting until the most interesting of those processes settles to low resource usage is a fair indicator that a long-running command is ready for input (see the second sketch below).
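To make the pull-based approach concrete, here's a minimal TypeScript sketch of the read side. The helper names are hypothetical, not necessarily the project's actual API, and the AppleScript one-liner assumes iTerm2 exposes the visible buffer as the session's `contents`:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Fetch the visible iTerm2 buffer via AppleScript (assumes iTerm2 is running).
async function readBuffer(): Promise<string[]> {
  const script =
    'tell application "iTerm2" to tell current session of current window to get contents';
  const { stdout } = await run("osascript", ["-e", script]);
  return stdout.split("\n");
}

// The model asks for the last N lines, so it only pays tokens for the
// portion of the output it actually wants to inspect.
export async function readTerminalOutput(lines: number): Promise<string> {
  const buffer = await readBuffer();
  return buffer.slice(-lines).join("\n");
}
```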
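And here's a sketch of the "settle" heuristic, again with hypothetical names; the real implementation may inspect the process tree differently. The idea is to sample the CPU usage of the process attached to the session and treat sustained low usage (or process exit) as a signal that the command is ready for input:

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Sample CPU usage for a pid; null means the process has exited.
async function cpuPercent(pid: number): Promise<number | null> {
  try {
    const { stdout } = await run("ps", ["-o", "%cpu=", "-p", String(pid)]);
    return parseFloat(stdout.trim()) || 0;
  } catch {
    return null; // ps exits non-zero when the pid is gone
  }
}

// Resolve once CPU usage stays below `threshold` for `quietPolls`
// consecutive samples, suggesting the command is idle and awaiting input.
export async function waitUntilSettled(
  pid: number,
  threshold = 1.0,
  quietPolls = 3,
  intervalMs = 500,
): Promise<void> {
  let quiet = 0;
  while (quiet < quietPolls) {
    const cpu = await cpuPercent(pid);
    if (cpu === null) return; // process exited; definitely done
    quiet = cpu < threshold ? quiet + 1 : 0;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Requiring several consecutive quiet samples avoids declaring a command finished during a momentary lull, at the cost of a little added latency.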
## Requirements

* iTerm2 must be running
* Node version 18 or greater

## Safety Considerations

* The user is responsible for using the tool safely.
* No built-in restrictions: iterm-mcp makes no attempt to evaluate the safety of the commands that are executed.
* Models can behave in unexpected ways. The user is expected to monitor activity and abort when appropriate.
* For multi-step tasks, you may need to interrupt the model if it goes off track. Start with smaller, focused tasks until you're familiar with how the model behaves.
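For anyone who wants to try it: adding an npx-runnable MCP server to Claude Desktop typically comes down to a small entry in `claude_desktop_config.json`. This is a sketch of the standard MCP pattern; check the repo for the exact setup:

```json
{
  "mcpServers": {
    "iterm-mcp": {
      "command": "npx",
      "args": ["-y", "iterm-mcp"]
    }
  }
}
```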
