A coding agent for open models

Connect Swival to LM Studio or the HuggingFace Inference API, give it a task, and it runs an autonomous tool loop to completion. On LM Studio, it auto-discovers your loaded model, so setup is zero. A few thousand lines of Python, no framework.

$ uv tool install swival
$ swival "Refactor the error handling in src/api.py"

Why Swival

Your models, your way

Swival auto-discovers your LM Studio model, or you can point it at any HuggingFace model or a dedicated endpoint. You pick the model and the infrastructure.

Small and hackable

A few thousand lines of Python, no framework. Read the whole agent in an afternoon. Modify it to fit your workflow.

Batteries included

Works out of the box with no configuration. Point it at a model and start working. File editing, search, web fetch, and structured thinking are all built in.

Built for benchmarking

JSON evaluation reports capture timing, tool usage, and context events. Compare models systematically on real coding tasks.
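As a rough sketch, a report might look something like the following. Every field name here is illustrative, not Swival's actual schema; it only shows the kinds of data the reports capture (timing, tool usage, context events):

{
  "task": "Refactor the error handling in src/api.py",
  "model": "qwen3-coder-next",
  "wall_time_s": 142.7,
  "tool_calls": {"edit_file": 4, "search": 2, "web_fetch": 0},
  "context_events": ["compaction at turn 12"]
}

Because the reports are plain JSON, runs against different models can be diffed or aggregated with standard tools.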

Quickstart

LM Studio

  1. Install LM Studio and load a model with tool-calling support. Recommended first model: qwen3-coder-next (great quality/speed tradeoff on local hardware). Start the server.
  2. Install Swival:
    uv tool install swival
  3. Run:
    swival "Refactor the error handling in src/api.py"

HuggingFace

export HF_TOKEN=hf_...
uv tool install swival
swival "Refactor the error handling in src/api.py" \
    --provider huggingface --model zai-org/GLM-5

For interactive sessions, use swival --repl.