Open-source LLM router & AI cost optimizer. Routes simple prompts to cheap/local models, complex ones to premium — automatically. Drop-in OpenAI-compatible proxy for Claude Code, Codex, Cursor, OpenClaw. Saves 40-70% on AI API costs. Self-hosted, no middleman.
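Because the router is a drop-in OpenAI-compatible proxy, a client only needs to point its base URL at the proxy instead of the upstream provider. A minimal sketch using only the standard library, assuming a hypothetical proxy at `http://localhost:8080` (the port and the `"auto"` model alias are illustrative, not from the project):

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request aimed at a local proxy.

    The proxy, not the client, decides whether the prompt goes to a cheap/local
    or a premium model; the client-side payload is unchanged.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body.encode(),
        headers={"Content-Type": "application/json",
                 # Self-hosted proxies often ignore the key; placeholder value.
                 "Authorization": "Bearer unused"},
    )

req = chat_request("http://localhost:8080", "auto", "Summarize this file")
```

Any OpenAI SDK can be pointed at the same endpoint by overriding its base URL, which is what makes the proxy "drop-in" for tools like Claude Code or Cursor.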
OpenVitamin is a local-first AI execution platform that unifies Agents, Workflows, and multi-model inference into a single programmable system — designed for building real, production-grade AI applications.
Stateless LLM runtime that dynamically routes, loads, executes, and unloads models per request with bounded VRAM caching and intelligent model selection.
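The load/execute/unload cycle with a bounded VRAM cache can be sketched as an LRU cache keyed by model name, where eviction stands in for unloading weights. This is an illustrative sketch, not the runtime's actual code; the budget figure and `acquire` API are assumptions:

```python
from collections import OrderedDict

class VramBoundedCache:
    """Keep loaded models under a VRAM budget, evicting the least-recently-used.

    `acquire` returns "hit" if the model is already resident, or "load" after
    making room for it; real load/unload of weights is elided.
    """

    def __init__(self, budget_gb: float):
        self.budget = budget_gb
        self.loaded: OrderedDict[str, float] = OrderedDict()  # name -> size in GB

    def acquire(self, name: str, size_gb: float) -> str:
        if name in self.loaded:
            self.loaded.move_to_end(name)  # cache hit: refresh recency
            return "hit"
        # Evict least-recently-used models until the new one fits the budget.
        while self.loaded and sum(self.loaded.values()) + size_gb > self.budget:
            self.loaded.popitem(last=False)
        self.loaded[name] = size_gb
        return "load"

cache = VramBoundedCache(budget_gb=16)
cache.acquire("small-model", 8)   # load
cache.acquire("medium-model", 8)  # load; cache now full
cache.acquire("small-model", 8)   # hit; refreshes recency
cache.acquire("big-model", 8)     # evicts medium-model (now the LRU entry)
```

Keeping the runtime itself stateless means any request can trigger a load, and the cache only bounds how much VRAM those loads may occupy at once.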
Intelligent model orchestration for Claude Code - routes queries to the optimal Claude model (Haiku/Sonnet/Opus) based on complexity. It also includes many more features. If this project is working well for you and you'd like to support me, just help spread the word. Thanks!
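Complexity-based tiering of this kind can be sketched as a scoring function that maps a prompt to a model tier. The keywords, thresholds, and model labels below are hypothetical stand-ins, not the project's actual classifier:

```python
def route_claude(prompt: str) -> str:
    """Pick a Claude tier from a rough complexity estimate.

    Illustrative heuristic only: long prompts or ones mentioning hard tasks go
    to the strongest tier; short simple queries go to the cheapest.
    """
    words = len(prompt.split())
    hard_markers = ("refactor", "architect", "debug", "prove", "multi-file")
    looks_hard = any(m in prompt.lower() for m in hard_markers)
    if looks_hard or words > 400:
        return "opus"    # hardest tasks: strongest, most expensive model
    if words > 60:
        return "sonnet"  # mid-size tasks: balanced model
    return "haiku"       # quick questions: cheapest, fastest model

route_claude("what does this flag do")          # -> "haiku"
route_claude("debug this race condition in x")  # -> "opus"
```

A production router would replace the keyword list with a learned classifier, but the tier mapping stays the same shape.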
Multi-Model Collaboration Pipeline — orchestrate AI models as a DAG. RL routing, multi-verifier voting, agent mesh, self-improving. Works with OpenAI, Anthropic, Gemini, DeepSeek. npm install mmcp-core | pip install mmcp-core
Free, self-hosted AI model router. OpenRouter / ClawRouter alternative using your own API keys. 14-dimension classifier routes to the right model (Anthropic/OpenAI/Kimi) automatically. No middleman, no markup. Built for OpenClaw.
Multi-protocol AI proxy server for Claude Code, Codex CLI, Gemini CLI & OpenClaw. Account pooling, API key management, free model routing, and visual dashboard.
Works for you. Go outside and live. — AI orchestrator that auto-routes tasks to the cheapest model that solves them. 70% run free on local models. Self-auditing, self-improving, zero prompting skill needed. Built with vibe coding by a finance student. Your models, your data.
Claw is a local-first AI control plane that runs powerful open models on your machine and connects to top LLM providers. It intelligently routes every prompt to the best model, giving you one secure workspace for chat, memory, context, connectors, routing, and full AI orchestration.
For OpenClaw, Hermes and more. Find free and low-cost inference (LLM models). Use them directly. Provides both a CLI and MCP server that knows which free-tier LLM APIs exist, which ones you have keys for, and which one fits your task. Returns endpoints so you can call models directly. No proxy, no middleware, no latency tax.