All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- `Tool` now supports a `post` action, which runs after the LLM response is generated (see the sketch after the sample tools below). The following actions are supported:
  - `output`: write the content to stdout (default); leave `action` empty or set it to `output`.
  - `copy`: copy the content to the clipboard; set `action` to `copy`.
  - `save`: save the content to a file; set `action` to the target file name.
  - `execute`: when `action` is `execute`, `run`, or `exec`, the content is treated as a shell command and executed directly.
  - Any other non-empty `action` is treated as a shell command template, and the content is passed to it as an argument. For example, if `action` is `echo` and the content is `hello`, the command executed is `echo "hello"`.
- See the sample tools:
  - `tr.toml`: a translation tool that copies the translation result to the clipboard.
    `gpt -t tr "Hello, how are you?"` will copy `你好,你好吗?` to the clipboard.
  - `pa.toml`: a command assistant that builds a shell command from user input and executes it directly.
    `gpt -t pa "list all files in current directory"` will execute the `ls -a` command directly.
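For illustration, here is a minimal sketch of how the `post` action might appear in a tool file such as `pa.toml`; the `action` key name and its placement are assumptions inferred from the description above, not taken from the project's actual schema:

```toml
# Hypothetical excerpt from a tool file such as pa.toml; the key name "action"
# is an assumption based on the description above, not confirmed by this changelog.
#
# What to do with the LLM response once it is generated:
#   "" or "output"            -> print to stdout (default)
#   "copy"                    -> copy to the clipboard
#   a file name               -> save to that file
#   "execute", "run", "exec"  -> run the response as a shell command
#   anything else             -> command template, e.g. action = "echo" runs `echo "<response>"`
action = "execute"
```

With a setting like this, the `pa` tool described above can both build and run the generated shell command.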
- `-t` or `--tool` option to specify a tool file to use; `-T` is changed to set the temperature.
- Added the `--tool` / `-t` option to enable a tool-use mode.
  A tool is a pre-defined system prompt, model, and other configuration for a specific task; see Tool for more details.
  An example tool, `tr`, is located at `samples/tools/tr.toml`; it translates between Chinese and English:
  `gpt -t samples/tools/tr.toml "Hello, how are you?"`
  This will output the translation result, 你好,你好吗?
  `samples/tools/tr.toml` can be copied to `$HOME/.gpt/tools/tr.toml`, after which you can use the tool by name:
  `gpt -t tr "Hello, how are you?"`
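  For illustration, a minimal sketch of what such a tool file might contain, assuming `systemPrompt` and `model` are the key names (the actual `samples/tools/tr.toml` may use different keys):

  ```toml
  # Hypothetical tr-style tool file; the key names systemPrompt and model are
  # assumptions, and the real samples/tools/tr.toml may differ.
  # The tool bundles a system prompt and a model so that `gpt -t tr ...`
  # only needs the text to translate.
  systemPrompt = "Translate Chinese input to English and English input to Chinese. Reply with the translation only."
  model = "gpt-4o-mini"  # example model name
  ```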
- Added `reasonEffort` configuration to control how the LLM generates responses.
- Upgraded the `openai` SDK to v3.
- Improved changelog format.
- Verbose level 2 now outputs the reasoning content, and verbose level 3 outputs the raw chunk response.
- Allow using a `ProxyMCPClient` to expose any HTTP service as an MCP server, for example `gpt -M samples/qqwry.mcp.yaml "where is 120.197.169.198's location"`.
- Upgraded the `openai` SDK to v2.
- Allow referencing MCP prompts by name, for example `gpt -M "mcp_url" -s p1 user_prompt`.
- Upgraded the `openai` SDK to v1.
- Added streaming HTTP transport support for MCP, indicated by URLs without `sse`.
- Enabled multi-round MCP calls.
- Improved compatibility.
- Support specifying only the model name.
- Improved compatibility with MCP servers.
- Fixed MCP SSE so it starts before use.
- Added support for the Model Context Protocol (MCP); `gpt` now accepts the `-M` option to specify the MCP server.