Ollama for Time Series.
Run TSFMs and neural baselines with a single API.
Time series forecasting is still fragmented across incompatible runtimes, conflicting dependencies, and per-model wrappers. tollama now unifies 7 TSFMs and 4 neural baselines behind one daemon, one SDK, one REST API, and first-class agent pathways.
Every TSFM ships its own install, its own API, and its own dependency tree. Building on top of them means fighting fragmentation at every layer.
Install tollama, start the daemon, and run the built-in pull plus forecast demo in under five minutes.
python -m pip install tollama
# optional dashboard extras:
python -m pip install "tollama[tui]"
# from source (dev):
python -m pip install -e ".[dev]"
# terminal 1
tollama serve
# check health + diagnostics
curl http://localhost:11435/api/version
tollama doctor
tollama info --json
# terminal 2
tollama quickstart
# or pull and run manually
tollama pull chronos2
tollama run chronos2 --input examples/chronos2_request.json
# stable API route
curl -s http://localhost:11435/v1/forecast \
-H 'content-type: application/json' \
-d @examples/chronos2_request.json
Human-friendly progress is enabled automatically on interactive terminals. Full setup, models, troubleshooting, and API docs live in the upstream repo: README.md.
Python SDK, REST API, CLI, and dashboards all sit on the same daemon and canonical forecast contract.
from tollama import Tollama

with Tollama() as sdk:
    flow = (
        sdk.workflow(series={"target": [10, 11, 12, 13, 14], "freq": "D"})
        .analyze()
        .auto_forecast(horizon=3)
    )
    print(flow.auto_forecast_result.selection.chosen_model)
The upstream SDK now exposes 16 methods, DataFrame conversion, chained workflows, and follow-on helpers like then_compare().
curl -s http://localhost:11435/v1/forecast \
-H 'content-type: application/json' \
-d '{
"model": "chronos2",
"series": [{"target": [120, 135, 142], "freq": "D"}],
"horizon": 7
}'
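The same request can be issued from Python with nothing but the standard library. This is a sketch, not the official client: the payload shape mirrors the curl example above, and the daemon address (localhost:11435) is taken from the quickstart; the helper names are illustrative.

```python
import json
import urllib.request

def build_forecast_request(model, target, freq, horizon):
    """Assemble a /v1/forecast request body matching the documented contract."""
    return {
        "model": model,
        "series": [{"target": target, "freq": freq}],
        "horizon": horizon,
    }

def post_forecast(body, base_url="http://localhost:11435"):
    """POST the body to the stable v1 route (requires a running daemon)."""
    req = urllib.request.Request(
        f"{base_url}/v1/forecast",
        data=json.dumps(body).encode("utf-8"),
        headers={"content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_forecast_request("chronos2", [120, 135, 142], "D", horizon=7)
# post_forecast(body)  # uncomment with `tollama serve` running
```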
Beyond forecasting, the current API surface covers analyze, generate, compare, report, what-if, counterfactual, scenario-tree, modelfiles, ingest, A2A, and dashboard bootstrap routes.
tollama serve
tollama quickstart
tollama explain chronos2
tollama runtime install --all
tollama modelfile list
tollama config keys
Recent CLI additions include doctor, open, dashboard, runtime, modelfile, and dev scaffold.
# Web dashboard (browser)
tollama serve
tollama open
# → http://127.0.0.1:11435/dashboard
# Terminal TUI
python -m pip install "tollama[tui]"
tollama dashboard
The bundled dashboard now ships with packaged static assets, auth-aware API routes, and an aggregated /api/dashboard/state bootstrap endpoint.
The current upstream registry spans 7 TSFMs and 4 neural baselines. They all share the same CLI, SDK, HTTP, MCP, and A2A surface, while family runtimes stay isolated under ~/.tollama/runtimes/.
| Model Family | Past Numeric Covariates | Past Categorical Covariates | Known-Future Numeric Covariates | Known-Future Categorical Covariates |
|---|---|---|---|---|
| Chronos-2 | ✓ | ✓ | ✓ | ✓ |
| Granite TTM | ✓ | — | ✓ | — |
| TimesFM 2.5 | ✓ | — | ✓ | — |
| Uni2TS / Moirai | ✓ | — | ✓ | — |
| Sundial | — | — | — | — |
| Toto Open Base | ✓ | — | — | — |
| Lag-Llama | — | — | — | — |
| PatchTST | — | — | — | — |
| TiDE | ✓ | — | ✓ | — |
| N-HiTS / N-BEATSx | — | — | — | — |
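The covariate matrix above is enough to pre-screen model families before dispatching a request. A hypothetical sketch: the support table is transcribed by hand from the rows above, and the selector simply keeps families whose capabilities cover every requested covariate type.

```python
# Covariate support per model family, transcribed from the table above.
# Tuple order: (past numeric, past categorical, future numeric, future categorical)
SUPPORT = {
    "Chronos-2":         (True,  True,  True,  True),
    "Granite TTM":       (True,  False, True,  False),
    "TimesFM 2.5":       (True,  False, True,  False),
    "Uni2TS / Moirai":   (True,  False, True,  False),
    "Sundial":           (False, False, False, False),
    "Toto Open Base":    (True,  False, False, False),
    "Lag-Llama":         (False, False, False, False),
    "PatchTST":          (False, False, False, False),
    "TiDE":              (True,  False, True,  False),
    "N-HiTS / N-BEATSx": (False, False, False, False),
}

def candidates(needs):
    """Return families whose covariate support covers every requested feature."""
    return [
        family
        for family, caps in SUPPORT.items()
        if all(cap or not need for need, cap in zip(needs, caps))
    ]

# Families that accept past and known-future numeric covariates:
numeric_both = candidates((True, False, True, False))
```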
The upstream project explicitly targets both developers and AI agents: forecasts, analysis, comparison, and planning can all be invoked as tools over MCP, A2A, LangChain, or framework adapters.
Register tollama-mcp as an MCP server. AI assistants discover and call 15 forecasting tools — forecast, auto-forecast, pipeline, what-if, model management, and data ingest.
First-party LangChain toolkit with 13 tools wrapping the full API surface. Compose forecast chains, embed in ReAct agents, or use with LangGraph workflows.
Framework adapters now ship directly in the package: CrewAI tools, AutoGen tool specs plus function maps, and smolagents-compatible tool wrappers.
Skill package at skills/tollama-forecast/ with health, models, forecast, pull, rm, and info wrappers. E2E validated with contract-first error handling.
Authenticated discovery plus task lifecycle support via POST /a2a and /.well-known/agent-card.json, including message/stream, tasks/get, tasks/query, and tasks/cancel.
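A2A calls are JSON-RPC 2.0 requests against POST /a2a. A minimal sketch of assembling a tasks/get call: the method names come from the list above, while the params shape (a task id) and the example id are assumptions.

```python
import json
from itertools import count

_ids = count(1)

def a2a_request(method, params):
    """Build a JSON-RPC 2.0 envelope for the /a2a endpoint."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    }

# Hypothetical task id; a real one comes back from a prior message/stream call.
req = a2a_request("tasks/get", {"id": "task-123"})
payload = json.dumps(req)
```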
The tollamad daemon supervises worker-per-family runtimes, keeps the public contract stable, and auto-bootstraps isolated virtualenvs per backend when needed.
Combined analysis, recommendation, and forecast in a single call via /api/report, with optional narrative blocks.
Real-time forecast refinement and daemon event feeds over SSE via /api/forecast/progressive and /api/events.
Model-free descriptive analysis at /api/analyze and synthetic series generation at /api/generate.
Forecast directly from CSV or Parquet using data_url, /api/forecast/upload, or /api/ingest/upload.
Chain benchmarks, comparisons, and end-to-end plans through /api/compare and /api/pipeline.
Explore alternative futures with /api/what-if, /api/counterfactual, and /api/scenario-tree.
Zero-config model selection via /api/auto-forecast, with ensemble mean and median strategies available today.
Create named forecast profiles with tollama modelfile and manage pull or routing defaults with tollama config.
Optional API-key auth, docs protection, usage metering at /api/usage, Prometheus at /metrics, and full diagnostics at /api/info.
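The ensemble mean and median strategies offered by /api/auto-forecast reduce to pointwise aggregation of aligned per-model forecasts. A sketch of that combination step, independent of the API (the model names and values are illustrative):

```python
from statistics import mean, median

# Hypothetical horizon-3 forecasts from three models for the same series.
forecasts = {
    "chronos2": [151.0, 153.5, 155.0],
    "timesfm":  [149.0, 152.0, 156.0],
    "tide":     [150.0, 151.0, 154.0],
}

def ensemble(forecasts, strategy=mean):
    """Aggregate aligned forecasts pointwise with the given strategy."""
    return [strategy(step) for step in zip(*forecasts.values())]

mean_path = ensemble(forecasts, mean)
median_path = ensemble(forecasts, median)  # → [150.0, 152.0, 155.0]
```

The median strategy is the more robust of the two when one model produces an outlier path.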
The current endpoint inventory spans system diagnostics, model lifecycle, upload plus ingest, stable v1 routes, structured analysis, scenario workflows, TSModelfiles, observability, and A2A discovery.
The upstream roadmap is now implementation-aware and explicitly tracks what is shipped versus what remains for v1 hardening.
Runtime management, analysis, ingest, dashboards, and agent integration all ship in the same platform.