Roadmap

This page outlines planned features for tobira. Timelines are estimates and may change based on community feedback and development priorities.

Want to influence priorities?

Join the waitlist — early members get priority access and input on feature prioritization.

Current: Community Edition (v0.5)

The open-source Community Edition is production-ready and includes:

  • All inference backends (FastText, BERT, ONNX, Ollama, LLM API, Ensemble, Two-Stage)
  • All MTA plugins (rspamd, SpamAssassin, Haraka, Postfix milter)
  • Full CLI toolset (init, doctor, monitor, train, evaluate, demo, distill, hub-push/pull, ab-test, active-learning)
  • A/B testing and active learning
  • Web dashboard
  • Knowledge distillation
  • AI-generated text detection
  • HuggingFace Hub integration
  • GDPR-aware PII anonymization
  • Docker Compose deployment with health checks
  • Kubernetes-ready health probes (readiness / liveness)
  • PostgreSQL and Redis storage backends
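The Kubernetes-ready readiness and liveness probes listed above could be wired into a Deployment along these lines. This is a minimal sketch only: the image name, container port, and endpoint paths (`/health/ready`, `/health/live`) are illustrative assumptions, not documented tobira values — check your deployment's actual health endpoints before using it.

```yaml
# Sketch of a Kubernetes Deployment fragment using separate readiness and
# liveness probes. Image, port, and paths are assumptions for illustration.
containers:
  - name: tobira
    image: tobira:latest          # assumed image name
    ports:
      - containerPort: 8080       # assumed service port
    readinessProbe:               # gates traffic until the service can serve
      httpGet:
        path: /health/ready       # assumed readiness endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                # restarts the container if it hangs
      httpGet:
        path: /health/live        # assumed liveness endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```

Keeping the two probes separate matters: readiness controls whether the pod receives traffic, while liveness controls whether Kubernetes restarts it, so a slow model load should fail readiness without triggering a restart.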

Planned: Enterprise Features — Target 2026 H2

These features are in the planning stage. No implementation has started yet. Target availability: second half of 2026.

  • Multi-tenant management: isolated configurations per tenant with centralized administration
  • RBAC + audit logs: role-based access control with a full audit trail
  • OpenTelemetry integration: metrics and trace export via the OpenTelemetry protocol
  • Grafana dashboard templates: pre-built dashboards for prediction monitoring and drift detection
  • SSO (SAML / OIDC): enterprise identity provider integration
  • Priority support + SLA: dedicated support channel with a service-level agreement

Timeline disclaimer

Target dates are estimates based on current plans and may shift depending on community feedback, waitlist demand, and development capacity.

Planned: Cloud Features — Target 2027

These features depend on Enterprise features and are in early concept stage. Target availability: 2027.

  • Cloud-based model training: upload anonymized data, train on managed GPUs, and download the results
  • Managed GPU resources: no GPU procurement or maintenance required

How We Prioritize

Feature priority is determined by:

  1. Waitlist demand — Features most requested by waitlist members ship first
  2. Community feedback — GitHub Issues and Discussions influence the backlog
  3. Technical dependencies — Some features (e.g., Cloud) depend on others (e.g., Enterprise auth layer)

Stay Updated

  • Waitlist: Join the waitlist for early-access notifications
  • GitHub Releases: Watch the repository for release announcements
  • Discussions: Participate in GitHub Discussions for feature requests