
Harness Core

Topology-aware agent orchestration with multi-provider LLM routing.

shipped · ai-tooling · infra · python · sqlite · anthropic · ollama

Why this exists

Most agent frameworks pick a side: fully local (slow, inflexible) or fully hosted (expensive, dependent). Harness Core treats the provider as a deployment decision per task, not a framework-level axiom.

What it does

It classifies incoming tasks along an internally defined topology (n3), then routes each task to whichever provider fits: local Ollama for bulk or sensitive work, hosted Anthropic for tasks that need strong reasoning. A SQLite-backed task queue keeps the whole system crash-safe.
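A minimal sketch of that flow — persist first, classify, then route. The classifier keywords, table schema, and function names here are illustrative assumptions, not Harness Core's actual API:

```python
import sqlite3

# Hypothetical stand-in for the topology classifier; the real n3
# classification is more involved than keyword matching.
SENSITIVE = {"pii", "internal"}

def classify(task: str) -> str:
    """Bucket a task into a provider tier (toy version)."""
    if any(tag in task for tag in SENSITIVE):
        return "local"    # sensitive work stays on local Ollama
    if "reason" in task or "plan" in task:
        return "hosted"   # strong-reasoning tasks go to hosted Anthropic
    return "local"        # bulk default: cheap local provider

def enqueue(db: sqlite3.Connection, task: str) -> None:
    """Persist the task with its routing decision before any work starts,
    so a crash can't lose in-flight tasks."""
    db.execute(
        "INSERT INTO queue (task, provider, done) VALUES (?, ?, 0)",
        (task, classify(task)),
    )
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE queue (task TEXT, provider TEXT, done INTEGER)")
enqueue(db, "plan a refactor")
enqueue(db, "summarize internal docs")
rows = db.execute("SELECT task, provider FROM queue").fetchall()
```

Writing the routing decision into the same durable row as the task means a restarted worker can resume with the original provider choice intact.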

What I learned

Classification is where this pattern lives or dies. Getting the topology right mattered more than getting the routing right — cheap/slow providers can still win when the task is correctly framed.