gllm

gllm - Golang Command-Line LLM Companion

gllm is a powerful command-line tool designed to interact seamlessly with various Large Language Models (LLMs). It supports interactive chat, multi-turn conversations, file attachments, search integration, a command agent, and extensive customization.

🚀 Features


📌 Installation

# Install via package manager (if available)
brew tap activebook/gllm
brew install gllm

# Or manually build from source
git clone https://github.com/activebook/gllm.git
cd gllm
go build -o gllm

📦 Upgrade

brew tap activebook/gllm
brew upgrade gllm

🎯 Usage

🔹 Basic Commands

gllm "What is Go?"               # Default model & system prompt
gllm "Where is the best place to visit in London?" -m gpt4o # Switch model
gllm "How to find a process and terminate it?" -t shellmate  # Use the shellmate prompt for a shell-specific question
gllm -s "Who's the POTUS right now? and check what's his latest tariff policy" -m gemini-pro -r 10 # Use the Gemini model to search, with max references set to 10

🔹 Attachments (Files, Images, URLs)

gllm "Summarize this" -a report.txt  # Use file as input
gllm "Translate into English" -a image1.jpg  # Use image as input and vision model
gllm "Summarize all:" -a "https://www.cnbc.com" -a "https://www.nytimes.com" # Attach multiple urls
gllm "Transcribe this audio" -a speech.mp3  # Use audio as input (Gemini multimodal models only)

๐Ÿ” Search & Vision

gllm "Who is the President of the United States right now?" --search # Use search to find latest news
gllm "Who is he/she in this photo? And what is his/her current title?" -s -a "face.png" --model gemini # Use a vision model plus a search engine to identify the person in an image
gllm "When was gpt4.5 released?" --search=google # Use specific search engine to find latest news

💬 Keep Conversations (Multi-turn Chat)

gllm -s "Who's the POTUS right now?" -c abc     # Start a conversation and retain the full context (last 10 messages)
gllm "Tell me again, who's the POTUS right now?" -c abc  # Continue the default conversation
gllm "Let's talk about why we exist." -c newtalk      # Start a new named conversation called 'newtalk'
gllm -s "Look up what famous people have said about this." -c newtalk  # Continue the 'newtalk' conversation
gllm "Tell me more about his books." -c 1  # Continue the 'newtalk' conversation by convo index (1 = latest)

โš ๏ธ Warning: If youโ€™re using Gemini mode and an OpenAI-compatible model, keep in mind that they cannot be used within the same conversation.
These models handle chat messages differently, and mixing them will lead to unexpected behavior.

🖥️ » Interactive Chat (Available!)

gllm chat                       # Start chat with defaults
gllm chat -m gpt4o             # Start chat with a specific model
gllm chat --sys-prompt coder    # Use a named system prompt
gllm chat -c my_chat            # Start a new chat session
gllm chat -c 1                  # Follow a previous chat session with a convo index
gllm chat -s                    # Start a new conversation with search

🛠️ You can change settings within the chat session

gllm> /exit, /quit    # Exit the chat session
gllm> /clear, /reset  # Clear context
gllm> /help           # Show available commands
gllm> /history, /h [num] [chars]  # Show recent conversation history (default: 20 messages, 200 chars)
gllm> /markdown, /mark [on|off|only]  # Toggle markdown rendering
gllm> /system, /S [name|prompt]   # Change the system prompt
gllm> /template, /t [name|tmpl]   # Change the template
gllm> /search, /s [search_engine] # Select a search engine to use
gllm> /reference, /r [num]        # Change the link reference count
gllm> /attach, /a [filename]      # Attach a file to the chat session
gllm> /detach, /d [filename|all]  # Detach a file from the chat session

โš ๏ธ Warning: You canโ€™t switch models within the same conversation. Once you choose a model, youโ€™ll need to stick with it throughout. Just like when using different models online, you can continue or change topics, you can do search and attach files, but the model type remains the same.

🔹 Prompt Templates

gllm --template coder              # Use the predefined coder template
gllm "Act as shell" --system-prompt "You are a Linux shell..."
gllm --system-prompt shell-assistant --template shellmate

🔹 Configuration Management

gllm config path     # Show config file location
gllm config show     # Display loaded configurations

🔹 Model Management

gllm model list                          # List available models
gllm model add --name gpt4 --key $API_KEY --model gpt-4o --temp 0.7
gllm model default gpt4                   # Set default model

🔹 Template & System Prompt Management

gllm template list                        # List available templates
gllm template add coder "You are an expert Go programmer..."
gllm system add --name coder --content "You are an expert Go programmer..."
gllm system default coder                 # Set default system prompt

🔹 Search Engine Management (new!)

gllm search list                          # List available search engines
gllm search google --key $API_KEY --cx $SEARCH_ENGINE_ID # Configure the Google search engine
gllm search tavily --key $API_KEY                        # Configure the Tavily search engine
gllm search default [google,tavily]       # Set the default search engine
gllm search save [on|off]                 # Save search results in conversation history (careful: this can increase token consumption; default: off)

🔹 Conversation Management (new!)

gllm convo list            # List all conversations
gllm convo remove newtalk  # Remove a conversation
gllm convo remove "chat_*" # Remove multiple conversations with a wildcard
gllm convo info newtalk    # Show a conversation in detail
gllm convo info 1          # View a conversation in detail by index
gllm convo info newtalk -n 100 -c 300 # View history context (the latest 100 messages, 300 characters each)
gllm convo clear           # Clear all conversations
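Wildcard removal pairs nicely with a naming convention for throwaway chats. A sketch (the scratch_name helper is hypothetical; the gllm calls are skipped when the tool is absent):

```shell
# Hypothetical convention: date-stamped scratch conversations,
# pruned in bulk with the wildcard form of `convo remove`
scratch_name() {
  printf 'scratch_%s' "$(date +%Y%m%d)"
}

if command -v gllm >/dev/null 2>&1; then
  gllm "Quick throwaway question" -c "$(scratch_name)"
  gllm convo remove "scratch_*"   # later: drop every scratch conversation
fi
```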

🔹 Markdown Output Management (new!)

gllm markdown on            # Enable markdown output, rendered after the streamed content
gllm markdown off           # Disable markdown output
gllm markdown only          # Render markdown output only

🔹 Plugin Management (new!)

gllm plugin list            # List all plugins; loaded plugins are marked with a checkmark
gllm plugin load exec       # Load the exec plugin to allow command execution
gllm plugin unload exec     # Unload the exec plugin

🔹 Stdin Support

cat script.py | gllm "Help me fix bugs in this Python snippet:" -a - # Use stdin as a file attachment
cat image.png | gllm "What is this image about?" -a - # Use stdin as an image attachment
cat jap.txt | gllm "Translate all of this into English" # Use stdin as text input
cat report.txt | gllm "Summarize this"
echo "What is the capital of France?" | gllm # Use stdin as text input
echo "Who's the POTUS right now?" | gllm -s # Use stdin as a search query
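Stdin and attachments compose well in scripts. A minimal sketch (the build_prompt helper is hypothetical, and it assumes gllm reads the instruction from stdin while -a attaches the file; the gllm call is skipped when the tool is absent):

```shell
# Hypothetical wrapper: summarize every .txt file in the current directory.
# build_prompt is plain shell; the gllm loop is guarded so the sketch is
# harmless on machines without gllm installed.
build_prompt() {
  printf 'Summarize the file "%s" in one paragraph.' "$1"
}

if command -v gllm >/dev/null 2>&1; then
  for f in ./*.txt; do
    build_prompt "$f" | gllm -a "$f"   # stdin carries the instruction, -a attaches the file
  done
fi
```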

🔹 Python Code Execution (Gemini models only)

gllm --code "import time\\nstart = time.time()\\nis_prime = lambda n: n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))\\nLIMIT = 100000\\ncount = sum(1 for x in range(LIMIT) if is_prime(x))\\nend = time.time()\\nprint(f'Number of primes less than {LIMIT}: {count}')\\nprint(f'Time taken: {end - start:.2f} seconds')" -m gemini2.5
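The escaped one-liner above is hard to edit. Assuming --code also accepts literal newlines, a quoted heredoc keeps the same snippet readable (a sketch; the gllm call is skipped when the tool is absent):

```shell
# Same prime-counting snippet, held in a shell variable via a quoted heredoc
# (the 'EOF' quoting prevents the shell from expanding anything inside)
code=$(cat <<'EOF'
import time
start = time.time()
is_prime = lambda n: n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
LIMIT = 100000
count = sum(1 for x in range(LIMIT) if is_prime(x))
print(f'Number of primes less than {LIMIT}: {count}')
print(f'Time taken: {time.time() - start:.2f} seconds')
EOF
)

if command -v gllm >/dev/null 2>&1; then
  gllm --code "$code" -m gemini2.5
fi
```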

🔹 Version Information

gllm version
gllm --version

🛠 Configuration

By default, gllm stores configurations in a user-specific directory. Use the config commands to manage settings.

default:
  model: gpt4
  system_prompt: coder
  template: default
  search: google
  markdown: on
models:
  - name: gpt4
    endpoint: "https://api.openai.com"
    key: "$OPENAI_KEY"
    model: "gpt-4o"
    temperature: 0.7
search_engines:
  - google:
      key: "$GOOGLE_API_KEY"
  - tavily:
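The "$OPENAI_KEY" placeholder above suggests keys can be pulled from the environment. Assuming that is the case, secrets can stay out of the config file entirely (example values only):

```shell
# ~/.bashrc or ~/.zshrc: keep API keys out of the config file
# (assumes gllm expands $VAR-style values from the environment,
# as the "$OPENAI_KEY" placeholder in the sample config suggests)
export OPENAI_KEY="sk-example-not-a-real-key"
export GOOGLE_API_KEY="example-google-key"
```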

💡 Why gllm?

Start using gllm today and supercharge your command-line AI experience! ๐Ÿš€


Project Features

This project includes various features to enhance usability and efficiency. Below is an overview of the key functionalities.

Installation & Usage

Topic & Description
Install: Easily install using Homebrew (screenshot)
Upgrade: Upgrade to the latest version (screenshot)
How to Use: Simply run the command (screenshot)

Core Functionalities

Feature & Description
General Usage: Quick overview of core commands (screenshot)
Search Info: Perform smart, targeted searches (screenshot)
Search Thoroughly: Deep-dive web searches for comprehensive results (screenshot)
Configuration: Customize settings to fit your workflow (screenshot)
Reasoning: Leverage advanced reasoning capabilities (screenshot)

Additional Features

Feature & Description
Multi-Search: Run multiple searches in one command (screenshot)
Multi-Turn: Continue previous conversations seamlessly (screenshot)
PDF Reading (Gemini only): Extract and analyze PDF content (screenshot)
Code Execution (Gemini only): Execute Python code directly (screenshot)
Markdown Output: Generate clean, readable Markdown (screenshot)
One-Shot Agent: Run a task in one shot (screenshot)

Interactive Chat Features

Feature & Description
Chat Mode: Enter an interactive chat session (screenshot)
Follow-Up: Pose follow-up questions in the same session (screenshot)
Chat History with Tool Calls: View your conversation context and tool usage (screenshot)
Chat History with Reasoning: Inspect past reasoning steps alongside your chat (screenshot)
Command Agent Mode: Utilize the power of command-line agents (screenshot)
Multi-Commands in One Shot: Run multiple commands in one shot (screenshot)

For more details, run gllm --help.


๐Ÿ— Contributing

Author: Charles Liu
GitHub: https://github.com/activebook
Website: https://activebook.github.io