One command.
Native speed.
Point Timber at any URL and it downloads, compiles, and serves the model immediately. No separate load step. No configuration. Python never touches the hot path.
Or load a local file and serve by name — your choice.
# Option 1: serve directly from a URL — no pre-download needed
pip install timber-compiler
timber serve https://yourhost.com/fraud_model.json
# Option 2: load a local file, then serve by name
pip install timber-compiler
timber load fraud_model.json --name fraud-detector
timber serve fraud-detector
# Query either way
curl http://localhost:11434/api/predict \
-d '{"model": "fraud-detector", "inputs": [[1.0, 2.0, ...]]}'
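The same endpoint can be called programmatically. A minimal Python sketch, assuming only the request and response shapes shown in the curl example above (the helper names are illustrative, not part of Timber):

```python
import json
import urllib.request

def build_predict_request(model, inputs, host="http://localhost:11434"):
    """Build the URL and JSON body for the /api/predict endpoint,
    matching the curl example: {"model": ..., "inputs": [[...]]}."""
    url = f"{host}/api/predict"
    body = json.dumps({"model": model, "inputs": inputs}).encode("utf-8")
    return url, body

def predict(model, inputs, host="http://localhost:11434"):
    """POST a batch of feature rows to a running timber serve
    instance and return the parsed JSON response."""
    url, body = build_predict_request(model, inputs, host)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With a server running, `predict("fraud-detector", [[1.0, 2.0]])` mirrors the curl call above.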
Why Timber?
5 Framework Parsers
XGBoost, LightGBM, scikit-learn, CatBoost, and ONNX. Auto-detected from file extension and content.
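Detection along these lines can be sketched by combining the file extension with a content sniff. The extension map and JSON-key heuristic below are illustrative guesses for this example, not Timber's actual rules:

```python
import json
from pathlib import Path

# Illustrative extension hints only; real detection may differ.
EXT_HINTS = {
    ".json": "xgboost",   # XGBoost dumps models as JSON
    ".txt": "lightgbm",   # LightGBM text model format
    ".pkl": "sklearn",    # pickled scikit-learn estimators
    ".cbm": "catboost",
    ".onnx": "onnx",
}

def detect_framework(path, head):
    """Guess the source framework from the extension, then refine
    ambiguous cases (e.g. .json) by sniffing the file content."""
    hint = EXT_HINTS.get(Path(path).suffix.lower())
    if hint == "xgboost":
        try:
            doc = json.loads(head)
        except ValueError:
            return hint
        # CatBoost also exports JSON; its models carry an
        # "oblivious_trees" key at the top level.
        if "oblivious_trees" in doc:
            return "catboost"
    return hint
```

The two-stage check is why content matters: an extension alone cannot distinguish an XGBoost JSON dump from a CatBoost JSON export.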
6 Optimizer Passes
Dead leaf elimination, constant folding, threshold quantization, branch sorting, pipeline fusion, vectorization analysis.
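One of these passes is easy to illustrate on a toy tree. A minimal sketch of constant folding (the dict-based IR here is invented for the example, not Timber's internal representation): when both subtrees reduce to the same leaf value, the comparison is dead and the whole node collapses to a leaf, which also removes its now-unreachable branches.

```python
def fold_constant_subtrees(node):
    """Constant folding on a toy tree IR. An internal node is
    {"feature": i, "threshold": t, "left": ..., "right": ...};
    a leaf is {"value": v}. Folding proceeds bottom-up."""
    if "value" in node:
        return node
    left = fold_constant_subtrees(node["left"])
    right = fold_constant_subtrees(node["right"])
    if "value" in left and "value" in right and left["value"] == right["value"]:
        # Both paths agree: the comparison can never change the
        # result, so the node becomes a leaf.
        return {"value": left["value"]}
    return {**node, "left": left, "right": right}
```

Folding cascades: a subtree that folds to a leaf can make its parent foldable too, so one pass over a redundant tree can shrink it to a single return.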
3 Code Backends
C99 for servers & embedded, WebAssembly for browsers & edge, MISRA-C for safety-critical (automotive, medical).
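What zero-dependency C99 output could look like is easy to sketch. The emitter below is illustrative only (function naming and layout are invented): it turns the same toy tree IR into a single standalone C function that branches on the input array, with no libc calls, no heap, and no recursion in the generated code.

```python
def emit_c99(node, name="predict"):
    """Emit a toy tree ({"feature", "threshold", "left", "right"}
    internal nodes, {"value"} leaves) as a self-contained C99
    function over a feature array."""
    lines = [f"static double {name}(const double *x) {{"]

    def walk(n, indent):
        pad = "    " * indent
        if "value" in n:
            lines.append(f"{pad}return {n['value']};")
            return
        lines.append(f"{pad}if (x[{n['feature']}] < {n['threshold']}) {{")
        walk(n["left"], indent + 1)
        lines.append(f"{pad}}} else {{")
        walk(n["right"], indent + 1)
        lines.append(f"{pad}}}")

    walk(node, 1)
    lines.append("}")
    return "\n".join(lines)
```

Note that only the emitter recurses; the emitted C is a flat cascade of `if`/`else` blocks, which is what makes claims like "no dynamic allocation, no recursion" achievable for the output.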
Ollama-Style Serving
timber load → timber serve. REST API on port 11434. Same developer experience as Ollama, but for classical ML.
Zero Dependencies
Generated code needs only a C99 compiler. No runtime libraries, no dynamic allocation, no recursion. Thread-safe by design.
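The no-recursion, no-allocation property corresponds to flattening each tree into parallel arrays. A Python sketch of the idea (the struct-of-arrays layout is illustrative): every node is a row, and prediction is a bounded loop of index arithmetic, which is also why the generated code can be thread-safe by construction, since it reads shared constants and writes nothing.

```python
def predict_flat(feature, threshold, left, right, value, x):
    """Iterative traversal of a tree flattened into parallel
    arrays. feature[i] == -1 marks a leaf. No recursion and no
    allocation: just index hops, as in the generated C."""
    i = 0
    while feature[i] != -1:
        i = left[i] if x[feature[i]] < threshold[i] else right[i]
    return value[i]
```

For a one-split tree (node 0 splits on `x[0] < 0.5`, nodes 1 and 2 are leaves), the arrays would be `feature = [0, -1, -1]`, `threshold = [0.5, 0.0, 0.0]`, `left = [1, 0, 0]`, `right = [2, 0, 0]`, `value = [0.0, 0.0, 1.0]`.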
Audit Trails
Every compilation produces a deterministic JSON audit report with SHA-256 hashes, pass logs, and timing. Built for regulated industries.
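A report of this shape can be sketched with the standard library alone. The field names below are illustrative, not Timber's actual schema; the two ingredients that matter are a content hash of the input model and stable serialization (sorted keys), so identical inputs yield byte-identical reports apart from timing fields:

```python
import hashlib
import json

def audit_report(model_bytes, passes):
    """Build an illustrative audit record: SHA-256 of the input
    model plus an ordered log of optimizer passes. sort_keys
    keeps the JSON deterministic for a given input."""
    report = {
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "passes": [{"name": name, "applied": True} for name in passes],
    }
    return json.dumps(report, sort_keys=True)
```

Determinism is the point for regulated industries: an auditor can recompile the same model file and diff the reports byte for byte.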
How It Works
Timber treats your trained model as a program specification and compiles it through a classical compiler pipeline.
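That is the familiar parse, optimize, emit shape. A high-level sketch of the flow on a toy JSON tree format (all three stages are simplified stand-ins for illustration, not Timber's API):

```python
import json

def compile_model(source):
    """Three-stage compiler pipeline on a toy model format:
    the frontend parses the spec into an IR, the middle end
    runs a trivial optimizer pass, and the backend emits
    C-like source text."""
    def parse(text):
        # Frontend: JSON spec -> dict IR.
        return json.loads(text)

    def optimize(ir):
        # Middle end: fold a split whose two leaves are equal.
        if ir.get("left") == ir.get("right") and "value" in (ir.get("left") or {}):
            return ir["left"]
        return ir

    def emit(ir):
        # Backend: IR -> target source text.
        if "value" in ir:
            return f"return {ir['value']};"
        return (f"if (x[{ir['feature']}] < {ir['threshold']}) "
                f"{{ return {ir['left']['value']}; }} "
                f"else {{ return {ir['right']['value']}; }}")

    return emit(optimize(parse(source)))
```

The separation is what lets five frontends, six passes, and three backends compose freely: each stage only sees the IR, never another framework's file format.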