Production · 6 Phases Complete · Agent-ready

spline-lstm

Spline Preprocessing + LSTM/GRU Neural Forecasting.
Short-term noise and long-term dependencies — handled together.

A production-ready time-series forecasting pipeline that combines spline-based preprocessing with LSTM and GRU neural networks. Multi-input modeling, rolling-window cross-validation, CLI runner, FastAPI backend, React UI, and first-party agent ecosystem integration.

View on GitHub ↗ Explore Features →
// Project Phases
Six phases complete.
Built for production from day one.

The project was structured as a phased build — each phase validated before proceeding, with comprehensive documentation, test suites, and operational gates at every stage.

PHASE 01
Data & Preprocessing

Spline interpolation, scaling, and windowing pipeline with reproducible artifact persistence.

PHASE 02
Multi-input Modeling

LSTM and GRU architectures supporting past observations, future-known variables, and static covariates.

PHASE 03
Training & Validation

Unified CLI training runner with rolling-window cross-validation and configurable hyperparameters.

PHASE 04
Production Hardening

Operational gates with smoke testing, run-verification, and circuit-breaker patterns for inference safety.

PHASE 05
API & UI

FastAPI backend exposing forecast endpoints and React UI for model monitoring, input management, and output visualization.

PHASE 06
Agent Ecosystem

Backend compatibility with the Tollama agent runtime, LangChain, and OpenClaw skills for integration into multi-agent forecasting workflows.

// Key Features
Advanced forecasting,
production-ready defaults.
〰️
Spline-based Preprocessing

Spline interpolation for gap filling and noise smoothing before neural modeling — capturing the underlying trend without overfitting short-term variance.
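The repository's actual preprocessing module isn't shown here, but the idea can be sketched with SciPy's `UnivariateSpline` — the function name and `smoothing` parameter below are illustrative, not the project's API:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_fill_and_smooth(y, smoothing=1.0):
    """Fill NaN gaps and smooth a 1-D series with a cubic smoothing spline.

    Illustrative sketch only: in spline-lstm the equivalent parameters
    would live in the YAML config.
    """
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    observed = ~np.isnan(y)
    # Fit on observed points only; `s` trades fidelity for smoothness.
    spline = UnivariateSpline(t[observed], y[observed], k=3, s=smoothing)
    return spline(t)  # evaluated everywhere, so gaps are filled
```

A small `smoothing` value tracks the data closely; larger values suppress short-term variance before the series is windowed for the network.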

🔁
LSTM + GRU Architectures

Both LSTM and GRU architectures are supported, configurable via YAML. Each captures long-range temporal dependencies while handling varying sequence lengths.
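A minimal sketch of what YAML-driven architecture selection can look like with `tf.keras` — the config keys here are hypothetical stand-ins for the project's real schema:

```python
import tensorflow as tf

# Hypothetical config dict, as it might look after loading a YAML file.
CONFIG = {"cell": "gru", "units": 64, "lookback": 24, "n_features": 3, "horizon": 6}

def build_model(cfg):
    """Build a single-input LSTM or GRU forecaster from a config dict."""
    Cell = {"lstm": tf.keras.layers.LSTM, "gru": tf.keras.layers.GRU}[cfg["cell"]]
    inputs = tf.keras.Input(shape=(cfg["lookback"], cfg["n_features"]))
    x = Cell(cfg["units"])(inputs)           # recurrent encoder over the lookback window
    outputs = tf.keras.layers.Dense(cfg["horizon"])(x)  # direct multi-step head
    return tf.keras.Model(inputs, outputs)

model = build_model(CONFIG)
```

Swapping `"gru"` for `"lstm"` is then a one-line config change with no code edits.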

🧩
Multi-input Modeling

Supports past observations, future-known variables (e.g., promotions, calendar features), and static covariates (e.g., store type) as separate model inputs.
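Independent of the model code, the three input kinds have to be sliced from aligned arrays. A NumPy sketch of that windowing step (function name and layout are assumptions, not the project's API):

```python
import numpy as np

def make_multi_input_windows(past, future_known, static, lookback, horizon):
    """Slice aligned arrays into the three inputs a multi-input model consumes.

    past:         (T, n_past)    observed history (e.g. sales)
    future_known: (T, n_future)  variables known ahead of time (e.g. calendar)
    static:       (n_static,)    per-series covariates (e.g. store type)
    """
    X_past, X_future, X_static, y = [], [], [], []
    T = len(past)
    for end in range(lookback, T - horizon + 1):
        X_past.append(past[end - lookback:end])          # history window
        X_future.append(future_known[end:end + horizon])  # known over the horizon
        X_static.append(static)                           # repeated per window
        y.append(past[end:end + horizon, 0])              # target: first past channel
    return tuple(map(np.asarray, (X_past, X_future, X_static, y)))
```

Each window pairs a lookback slice of history with the future-known slice covering the forecast horizon, while the static vector is simply broadcast to every window.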

⌨️
Unified CLI Training Runner

Train, validate, and export models from a single CLI command with full artifact persistence — checkpoints, metrics, and config saved automatically.

📊
Rolling-window Cross-validation

Time-series cross-validation using a rolling-window methodology — no data leakage, realistic evaluation of model performance across historical regimes.
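One common way to realize this (an expanding-origin variant; the project's exact fold logic may differ) is to anchor each validation window after all of its training data:

```python
def rolling_window_splits(n_samples, n_folds, horizon):
    """Yield (train_idx, val_idx) pairs where each validation window sits
    strictly after its training data, so no future observation leaks into
    training. The origin rolls forward by one horizon per fold."""
    for k in range(n_folds):
        val_end = n_samples - (n_folds - 1 - k) * horizon
        val_start = val_end - horizon
        yield list(range(0, val_start)), list(range(val_start, val_end))
```

Shuffled K-fold would let the model train on points that come after its validation window; this generator makes that impossible by construction.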

🛡️
Production Operational Gates

Smoke testing and run-verification gates ensure models meet quality thresholds before being promoted to production inference endpoints.
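A gate of this kind can be sketched as a plain function over any prediction callable — the checks and thresholds below are illustrative, not the project's actual gate suite:

```python
def smoke_gate(predict, sample_input, horizon, max_abs=1e6):
    """Run one prediction and verify basic sanity before promotion.

    `predict` is any callable: a loaded model or an HTTP client wrapper.
    Returns (passed, reasons) instead of raising, so CI can report
    every failure at once.
    """
    try:
        forecast = predict(sample_input)
    except Exception as exc:
        return False, [f"prediction raised: {exc!r}"]
    reasons = []
    if len(forecast) != horizon:
        reasons.append(f"expected {horizon} steps, got {len(forecast)}")
    if any(v != v for v in forecast):  # NaN check without numpy
        reasons.append("forecast contains NaN")
    if any(abs(v) > max_abs for v in forecast):
        reasons.append("forecast magnitude out of bounds")
    return not reasons, reasons
```

Only runs that pass every check are promoted; anything else is rejected with an explicit reason list.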

🌐
FastAPI Backend + React UI

REST API for forecast endpoints with a React frontend for monitoring, input management, and visualizing predictions — deployable standalone or embedded.

🤖
Agent & LLM Ecosystem Integration

First-party backend compatibility with the Tollama agent runtime, LangChain, and OpenClaw skills — plugs directly into multi-agent forecasting workflows.

Deep learning
for time series.

TensorFlow-backed LSTM/GRU with spline preprocessing, served via FastAPI and monitored through a React UI. Structured for clean agent integration.

  • Python 3.10–3.11 · Core runtime
  • TensorFlow ≥2.14 · LSTM / GRU training & inference
  • Spline · Interpolation & smoothing preprocessing
  • FastAPI · Forecast & management API backend
  • React · Monitoring & visualization UI
  • YAML · Declarative model & training configuration
  • Tollama / LangChain · Agent ecosystem integration
bash · Training Runner CLI
# Install
pip install -e ".[dev]"

# Train with config
python -m spline_lstm.train \
  --config configs/lstm_base.yaml \
  --data data/processed/train.parquet \
  --output artifacts/run_001

# Validate with rolling-window CV
python -m spline_lstm.validate \
  --run artifacts/run_001 \
  --folds 5

# Serve via FastAPI
uvicorn spline_lstm.api:app --port 8080

# Smoke test before production
python scripts/smoke_test.py \
  --endpoint http://localhost:8080
// Agent Ecosystem
Forecasting that plugs
into your agent stack.

spline-lstm is designed to operate as a forecasting backend in multi-agent systems — compatible with the Tollama daemon, LangChain chains, and OpenClaw skill wrappers.

Tollama Runtime Integration

Register spline-lstm as a model family in the Tollama daemon — accessible via the same REST API and SDK used for TSFM models like Chronos and TimesFM.

🦜
LangChain Tool

Expose the FastAPI forecast endpoint as a LangChain tool — letting LLM agents call spline-lstm forecasts as part of reasoning chains.
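A stdlib-only sketch of the client function such a tool would wrap — the endpoint path and payload fields are assumed, and the transport is injectable so agents and tests can substitute a stub:

```python
import json
from urllib import request

def forecast_via_api(series, horizon,
                     endpoint="http://localhost:8080/forecast",
                     opener=request.urlopen):
    """POST a series to the forecast endpoint and return the forecast list.

    `opener` defaults to urllib but can be swapped for a stub transport.
    """
    req = request.Request(
        endpoint,
        data=json.dumps({"series": list(series), "horizon": horizon}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with opener(req) as resp:
        return json.loads(resp.read())["forecast"]
```

From here, registering it as a LangChain tool is a thin wrapper — e.g. `StructuredTool.from_function(forecast_via_api)` in `langchain_core.tools` — so an LLM agent can invoke forecasts mid-chain.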

🧩
OpenClaw Skill Wrapper

Compatible with OpenClaw skill contracts — wrap as a structured skill with health, predict, and explain endpoints for Claude Code agent use.

// Get Started

Hybrid forecasting,
production-ready.

Six phases complete. Agent-ready for integration into any forecasting workflow.

View on GitHub ↗ ← Back to Tollama AI