Nodetool
Visual AI workflow builder - ComfyUI meets n8n for LLM agents, RAG pipelines, and multimodal data flows. Local-first, open source (AGPL-3.0).
Visual AI workflow builder combining ComfyUI's node-based flexibility with n8n's automation power. Build LLM agents, RAG pipelines, and multimodal data flows on your local machine.
# See system info
nodetool info

# List workflows
nodetool workflows list
# Run a workflow interactively
nodetool run <workflow_id>

# Start the chat interface
nodetool chat

# Start the web server
nodetool serve
Quick one-line installation:
curl -fsSL https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.sh | bash
With custom directory:
curl -fsSL https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.sh | bash -s -- --prefix ~/.nodetool
Non-interactive mode (automatic, no prompts):
Both scripts support silent installation:
# Linux/macOS - use -y
curl -fsSL https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.sh | bash -s -- -y

# Windows - use -Yes
irm https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.ps1 -OutFile install.ps1; .\install.ps1 -Yes
In non-interactive mode, the installer runs without prompting and accepts the default answers.
Quick one-line installation:
irm https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.ps1 | iex
With custom directory:
.\install.ps1 -Prefix "C:\nodetool"
Non-interactive mode:
.\install.ps1 -Yes
Manage and execute NodeTool workflows:
# List all workflows (user + example)
nodetool workflows list

# Get details for a specific workflow
nodetool workflows get <workflow_id>
# Run workflow by ID
nodetool run <workflow_id>

# Run workflow from file
nodetool run workflow.json

# Run with JSONL output (for automation)
nodetool run <workflow_id> --jsonl
Execute workflows in different modes:
# Interactive mode (default) - pretty output
nodetool run workflow_abc123

# JSONL mode - streaming JSON for subprocess use
nodetool run workflow_abc123 --jsonl
# Stdin mode - pipe RunJobRequest JSON
echo '{"workflow_id":"abc","user_id":"1","auth_token":"token","params":{}}' | nodetool run --stdin --jsonl

# With custom user ID
nodetool run workflow_abc123 --user-id "custom_user_id"

# With auth token
nodetool run workflow_abc123 --auth-token "my_auth_token"
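The `--stdin` mode above expects a RunJobRequest JSON object on standard input. A minimal Python sketch for building that payload and piping it to the CLI — the field names come from the example above, but everything else about the schema, and the `run_workflow` helper itself, is illustrative rather than part of NodeTool:

```python
import json
import subprocess

def build_run_job_request(workflow_id, user_id, auth_token, params=None):
    """Assemble the RunJobRequest JSON expected by `nodetool run --stdin`.

    Field names mirror the example payload in the docs; any further
    schema details are assumptions.
    """
    return json.dumps({
        "workflow_id": workflow_id,
        "user_id": user_id,
        "auth_token": auth_token,
        "params": params or {},
    })

def run_workflow(workflow_id, user_id, auth_token, params=None):
    """Pipe the request into the CLI and collect parsed JSONL output lines."""
    payload = build_run_job_request(workflow_id, user_id, auth_token, params)
    proc = subprocess.run(
        ["nodetool", "run", "--stdin", "--jsonl"],
        input=payload, capture_output=True, text=True, check=True,
    )
    return [json.loads(line) for line in proc.stdout.splitlines() if line.strip()]
```

This keeps credentials out of the command line (no `--auth-token` visible in `ps` output), which is the main reason to prefer stdin mode in automation.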
Manage workflow assets (nodes, models, files):
# List all assets
nodetool assets list

# Get asset details
nodetool assets get <asset_id>
Manage NodeTool packages (export workflows, generate docs):
# List packages
nodetool package list

# Generate documentation
nodetool package docs
# Generate node documentation
nodetool package node-docs

# Generate workflow documentation (Jekyll)
nodetool package workflow-docs

# Scan directory for nodes and create package
nodetool package scan

# Initialize new package project
nodetool package init
Manage background job executions:
# List jobs for a user
nodetool jobs list

# Get job details
nodetool jobs get <job_id>
# Get job logs
nodetool jobs logs <job_id>

# Start background job for workflow
nodetool jobs start <workflow_id>
Deploy NodeTool to cloud platforms (RunPod, GCP, Docker):
# Initialize deployment.yaml
nodetool deploy init

# List deployments
nodetool deploy list
# Add new deployment
nodetool deploy add

# Apply deployment configuration
nodetool deploy apply

# Check deployment status
nodetool deploy status <deployment_name>

# View deployment logs
nodetool deploy logs <deployment_name>

# Destroy deployment
nodetool deploy destroy <deployment_name>

# Manage collections on deployed instance
nodetool deploy collections

# Manage database on deployed instance
nodetool deploy database

# Manage workflows on deployed instance
nodetool deploy workflows

# See what changes will be made
nodetool deploy plan
Discover and manage AI models (HuggingFace, Ollama):
# List cached HuggingFace models by type
nodetool model list-hf <hf_type>

# List all HuggingFace cache entries
nodetool model list-hf-all
# List supported HF types
nodetool model hf-types

# Inspect HuggingFace cache
nodetool model hf-cache

# Scan cache for info
nodetool admin scan-cache
Maintain model caches and clean up:
# Calculate total cache size
nodetool admin cache-size

# Delete HuggingFace model from cache
nodetool admin delete-hf <model_name>
# Download HuggingFace models with progress
nodetool admin download-hf <model_name>

# Download Ollama models
nodetool admin download-ollama <model_name>
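`admin cache-size` reports the total size of the model caches. As a rough sketch of what such a measurement involves (not NodeTool's actual implementation), walking a cache directory and summing file sizes looks like:

```python
from pathlib import Path

def directory_size_bytes(root):
    """Total size in bytes of all regular files under root (symlinks skipped,
    so files hard-linked into the HuggingFace cache are not double-counted
    via their symlink entries)."""
    root = Path(root)
    if not root.exists():
        return 0
    return sum(
        p.stat().st_size
        for p in root.rglob("*")
        if p.is_file() and not p.is_symlink()
    )

# Example: measure the HuggingFace cache at the path the installer configures.
hf_cache = Path.home() / ".nodetool" / "cache" / "huggingface"
print(f"{directory_size_bytes(hf_cache) / 1e9:.2f} GB")
```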
Interactive chat and web interface:
# Start CLI chat
nodetool chat

# Start chat server (WebSocket + SSE)
nodetool chat-server
# Start FastAPI backend server
nodetool serve --host 0.0.0.0 --port 8000

# With static assets folder
nodetool serve --static-folder ./static --apps-folder ./apps

# Development mode with auto-reload
nodetool serve --reload

# Production mode
nodetool serve --production
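When scripting against `nodetool serve` (for example in CI), it helps to wait until the port actually accepts connections before sending any requests. A small generic helper for that, not part of NodeTool:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Return True once a TCP connection to host:port succeeds within
    timeout seconds, else False."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.25)
    return False

# Example: after starting the server in the background,
#   wait_for_port("127.0.0.1", 8000)
# before issuing requests against it.
```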
Start reverse proxy with HTTPS:
# Start proxy server
nodetool proxy

# Check proxy status
nodetool proxy-status
# Validate proxy config
nodetool proxy-validate-config

# Run proxy daemon with ACME HTTP + HTTPS
nodetool proxy-daemon
# View settings and secrets
nodetool settings show

# Generate custom HTML app for workflow
nodetool vibecoding
# Run workflow and export as Python DSL
nodetool dsl-export

# Export workflow as Gradio app
nodetool gradio-export

# Regenerate DSL
nodetool codegen

# Manage database migrations
nodetool migrations

# Synchronize database with remote
nodetool sync
Run a NodeTool workflow and get structured output:
# Run workflow interactively
nodetool run my_workflow_id

# Run and stream JSONL output
nodetool run my_workflow_id --jsonl | jq -r '"\(.status) | \(.output)"'
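JSONL output is one JSON object per line, which makes it easy to consume from any language, not just jq. A Python sketch that parses the stream; the `status` and `output` field names follow the jq example above and are otherwise an assumption:

```python
import json

def parse_jsonl(stream):
    """Yield one parsed object per non-empty line of a JSONL stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Typical use from a consumer script reading the CLI's stdout:
#   nodetool run my_workflow_id --jsonl | python consume.py
# where consume.py does:
#   import sys
#   for event in parse_jsonl(sys.stdin):
#       print(event.get("status"), event.get("output"))
```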
Generate documentation for a custom package:
# Scan for nodes and create package
nodetool package scan

# Generate complete documentation
nodetool package docs
Deploy a NodeTool instance to the cloud:
# Initialize deployment config
nodetool deploy init

# Add RunPod deployment
nodetool deploy add
# Deploy and start
nodetool deploy apply
Check and manage cached AI models:
# List all available models
nodetool model list-hf-all

# Inspect cache
nodetool model hf-cache
The installer sets up:
- Conda environment at ~/.nodetool/env
- nodetool-core and nodetool-base from the NodeTool registry
- nodetool CLI available from any terminal

After installation, these variables are automatically configured:
# Conda environment
export MAMBA_ROOT_PREFIX="$HOME/.nodetool/micromamba"
export PATH="$HOME/.nodetool/env/bin:$HOME/.nodetool/env/Library/bin:$PATH"

# Model cache directories
export HF_HOME="$HOME/.nodetool/cache/huggingface"
export OLLAMA_MODELS="$HOME/.nodetool/cache/ollama"
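Downstream scripts can honor the same cache locations by reading these variables and falling back to the installer's defaults. A small sketch, where the fallback paths mirror the exports above:

```python
import os
from pathlib import Path

def nodetool_cache_dirs():
    """Resolve the HuggingFace and Ollama cache directories, preferring the
    HF_HOME / OLLAMA_MODELS variables the NodeTool installer sets and
    falling back to the installer's default locations."""
    base = Path.home() / ".nodetool" / "cache"
    hf = Path(os.environ.get("HF_HOME", base / "huggingface"))
    ollama = Path(os.environ.get("OLLAMA_MODELS", base / "ollama"))
    return hf, ollama
```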
Check NodeTool environment and installed packages:
nodetool info