gllm

gllm - Golang Command-Line LLM Companion

gllm is a powerful CLI tool designed to interact seamlessly with various Large Language Models (LLMs). It supports interactive chat, multi-turn conversations, file attachments, search integration, a command agent, multi-agent workflows, deep research, MCP services, and extensive customization.

πŸš€ Features

  - Interactive chat and multi-turn conversations
  - File attachments
  - Search integration
  - Command agent with diff-based code editing
  - Multi-agent workflows and deep research
  - MCP (Model Context Protocol) services
  - Extensive customization

πŸ“Œ Installation

Homebrew (macOS)

brew tap activebook/gllm
brew install gllm

Build from Source

git clone https://github.com/activebook/gllm.git
cd gllm
go build -o gllm

🎯 Usage

Basic Commands

Interactive Chat

Start an interactive chat session:

gllm chat

Chat Mode Screenshot

Within the chat, you can use various commands:

✏️ Multi-Line Input with Editor

For longer messages or code snippets, use your preferred text editor directly in chat mode:

# In chat mode, type:
/editor
/e

How to use:

  1. Open your preferred editor
  2. Compose your multi-line message
  3. Save and exit the editor
  4. Review the content in gllm
  5. Press Enter to send or Ctrl+C to discard

Set up your editor:

# Set your preferred editor (vim, nano, code, etc.)
gllm editor vim

# List available editors
gllm editor list

# Check current editor
gllm editor

Multi-turn Conversations

There are two main ways to have a multi-turn conversation:

1. Single-Line Style (using named conversations)

You can maintain a conversation across multiple commands by assigning a name to your conversation with the -c flag. This is useful for scripting or when you want to continue a specific line of inquiry.
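
For example, the -c flag names a conversation so that later commands pick up where you left off (the conversation name and prompts here are illustrative):

```shell
# Start a named conversation; -c assigns it a name
gllm -c golang-notes "What are goroutines?"

# Continue the same conversation later; previous turns are kept as context
gllm -c golang-notes "How do they differ from OS threads?"
```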

2. Chat Style (interactive session)

For a more interactive experience, you can use the chat command to enter a real-time chat session.

File Attachments

Code Editing

The command agent supports diff editing for precise code modifications.

gllm "Read this file to change function name"
Edit code with diff: Edit Code Screenshot
Cancel an edit: Cancel Edit Screenshot

Workflows

gllm allows you to define and run complex workflows with multiple agents. A workflow consists of a sequence of agents, where the output of one agent serves as the input for the next. This is useful for tasks like deep research, automated code generation, and more.

How it Works

A workflow is defined by a series of agents, each with a specific role and configuration. There are two types of agents: agents with the master role (e.g., a planner that creates a plan) and agents with the worker role (e.g., a researcher that executes it), set with the --role flag.

When a workflow is executed, gllm processes each agent in the defined order. The output from one agent is written to a directory that becomes the input for the next agent.

Workflow Commands

You can manage your workflows using the gllm workflow command; the examples below use its add and start subcommands.

Example: A Simple Research Workflow

Here’s an example of a simple research workflow with two agents: a planner and a researcher.

  1. Planner (master): This agent takes a research topic and creates a research plan.
  2. Researcher (worker): This agent takes the research plan and executes it, gathering information and generating a report.

To create this workflow, you would use the gllm workflow add command:

# Add the planner agent
gllm workflow add --name planner --model groq-oss --role master --output "workflow/planner" --template "Create a research plan for the following topic: "

# Add the researcher agent
gllm workflow add --name researcher --model gemini-pro --role worker --input "workflow/planner" --output "workflow/researcher" --template "Execute the following research plan: "

To execute the workflow, you would use the gllm workflow start command:

gllm workflow start "The future of artificial intelligence"

This will start the workflow. The planner agent will create a research plan and save it to the workflow/planner directory. The researcher agent will then read the plan from that directory, execute the research, and save the final report to the workflow/researcher directory.

Here’s an example of a deep research workflow in action:

  1. Planner: designs a plan for the research. (Planner Screenshot)
  2. Dispatcher: dispatches sub-tasks to worker agents. (Dispatcher Screenshot)
  3. Workers: execute the sub-tasks in parallel. (Workers Screenshot)
  4. Summarizer: summarizes the results from the workers to provide a final report. (Summarizer Screenshot)

πŸ€– Agent Management

Create and manage multiple AI assistant profiles with different configurations:

# Create agents for different tasks
gllm agent add coder --model gpt-4o --tools on
gllm agent add researcher --model gemini-pro --search google

# Switch between agents
gllm agent switch coder
gllm agent switch researcher

# List and manage agents
gllm agent list
gllm agent info coder
gllm agent set coder --model gpt-4

Agent Commands: the add, switch, list, info, and set subcommands shown above.


πŸ›  Model Context Protocol (MCP)

gllm supports the Model Context Protocol (MCP), allowing you to connect to external MCP servers to access additional tools and data sources. This enables LLMs to interact with external services, databases, and APIs through standardized protocols.

Enabling/Disabling MCP

Managing MCP Servers

You can add, configure, and manage MCP servers of different types:
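
The exact subcommand names may vary between versions; the sketch below is hypothetical and only illustrates the general shape of server management (check `gllm mcp --help` or the project docs for the real interface):

```shell
# Hypothetical commands for illustration only
gllm mcp list            # list configured MCP servers
gllm mcp add my-server   # register a new server
```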

Using MCP in Queries

Once MCP is enabled and servers are configured, the LLM can automatically use available MCP tools during conversations:

gllm "Use the available tools to fetch the latest info of golang version"

Use MCP Screenshot

The LLM will detect relevant MCP tools and use them to enhance its responses with external data and capabilities.


πŸ›  Configuration

gllm stores its configuration in a user-specific directory. You can manage the configuration using the config command.
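
For example, you can start from the bare config command; any further subcommands are not documented here and may differ by version:

```shell
# Inspect configuration management options (see `gllm config --help` for details)
gllm config
```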


πŸ— Contributing

Contributions are welcome! Please feel free to submit a pull request or open an issue.


Created by Charles Liu (@activebook)