✅ Works with InfluxDB 1.x & 2.x

Tiered Storage for InfluxDB

InfluxDB wasn't made to store years of time series data. Historian was. Run a smaller, cost-effective InfluxDB cluster for recent data while archiving historical data to cold storage — reducing hardware costs and InfluxDB Enterprise licensing by up to 90%.

The Problem with InfluxDB Long-term Storage

InfluxDB is optimized for high-performance ingestion and querying of recent data — not long-term storage. Keeping years of data means expensive hardware scaling and hefty InfluxDB Enterprise licensing costs, or losing valuable historical insights.

SQL Query API

# Query archived InfluxDB data via SQL
POST https://historian.yourcompany.com/api/query

SELECT time, temperature, humidity
FROM sensor_data
WHERE time >= '2023-01-01'
  AND location = 'datacenter-1'
ORDER BY time DESC
LIMIT 100;

# Results (100 rows returned)
time                  temperature  humidity
2023-12-31T23:59:00Z  22.5°C       45%
2023-12-31T23:58:00Z  22.3°C       46%
2023-12-31T23:57:00Z  22.1°C       47%
...
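
Calling that endpoint from a script is straightforward. Below is a minimal Python sketch; the JSON request and response field names ("query", "rows") are assumptions for illustration, not a documented schema:

# Query the Historian SQL API over HTTP.
# NOTE: the payload/response shape here is assumed, not a documented schema.
import requests

resp = requests.post(
    "https://historian.yourcompany.com/api/query",
    json={"query": (
        "SELECT time, temperature, humidity FROM sensor_data "
        "WHERE time >= '2023-01-01' AND location = 'datacenter-1' "
        "ORDER BY time DESC LIMIT 100"
    )},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["rows"]:  # assumed response key
    print(row["time"], row["temperature"], row["humidity"])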

Reduce Infrastructure Costs

Cut storage and hardware costs by up to 90%. Run smaller InfluxDB clusters with reduced Enterprise licensing needs.

Perfect for ML & Analytics

Query archived data via SQL for ML training, anomaly detection, and analytics. Open Parquet format prevents vendor lock-in.

Flexible Storage Options

Works with AWS S3/Directory Buckets, MinIO, Ceph, GCS, or NAS. 100% on-premise or hybrid deployment options.
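
As a concrete example of the ML & Analytics use case above: because the archive is plain Parquet, it loads straight into standard Python tooling. A minimal anomaly-detection sketch, where the bucket path, partition layout, and column names are illustrative assumptions (reading s3:// paths with pandas also requires s3fs):

# Train a simple anomaly detector on archived sensor data.
# Path and columns are hypothetical; adjust to your own archive layout.
import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.read_parquet("s3://archive/sensor_data/2023/")  # needs s3fs installed

model = IsolationForest(contamination=0.01, random_state=42)
df["anomaly"] = model.fit_predict(df[["temperature", "humidity"]])

print(df[df["anomaly"] == -1].head())  # -1 marks flagged outliers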

How Historian Works

Simple 4-step process to archive your InfluxDB data while keeping it queryable via SQL.

Archive & Query

Keep your InfluxDB cluster small and cost-effective for recent data, while archiving historical data to cold storage. Perfect for ML training, anomaly detection, and long-term analytics with SQL access.

  1. Define how far back to archive (e.g. older than 2 years)
  2. Historian exports data automatically to Parquet (see the sketch below)
  3. Files saved to your storage, partitioned by time
  4. Query anytime via SQL REST API
Architecture: InfluxDB (1.x / 2.x) holds live data and serves recent queries. The Historian Archive Engine writes historical data, in Parquet format, to cold storage (S3 / MinIO / Ceph / NAS). The SQL Engine (query processor) runs SQL directly on the Parquet files, and the SQL API exposes a REST interface for external access.
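
Conceptually, steps 2 and 3 amount to exporting rows past the cutoff and writing them as time-partitioned Parquet. A rough Python sketch of that idea follows; this is not Historian's actual code, and the hostnames, database, measurement, and bucket layout are made up (writing to s3:// paths requires s3fs):

# Illustrative only: export InfluxDB 1.x rows older than a cutoff
# to time-partitioned Parquet files in cold storage.
from datetime import datetime, timedelta, timezone

import pandas as pd
from influxdb import InfluxDBClient  # InfluxDB 1.x Python client

cutoff = datetime.now(timezone.utc) - timedelta(days=2 * 365)

client = InfluxDBClient(host="influxdb.internal", port=8086, database="telemetry")
result = client.query(
    "SELECT * FROM sensor_data WHERE time < '%s'"
    % cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")
)

df = pd.DataFrame(result.get_points())
df["time"] = pd.to_datetime(df["time"])

# Partition by year/month so later queries can prune whole files by time range.
for (year, month), chunk in df.groupby([df.time.dt.year, df.time.dt.month]):
    chunk.to_parquet(f"s3://archive/sensor_data/{year:04d}/{month:02d}.parquet")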

Simple Flat Pricing

Based on your InfluxDB footprint. No per-GB fees, no storage markup, no limits on query volume.

Basic
Contact for Pricing
1 node, up to 1TB historical data
Contact Us ->
Features include:
  • InfluxDB 1.x & 2.x support
  • Parquet format export
  • SQL query API
  • S3, GCS, MinIO, NAS support
  • Basic retention policies
  • Email support & updates
Pro (Most Popular)
Contact for Pricing
Up to 3 nodes / 5TB historical data
Contact Us ->
Everything in Basic, plus:
  • Multi-node support (up to 3)
  • Advanced scheduling & automation
  • Custom retention policies
  • Data compression optimization
  • Priority support
Enterprise
Contact for Pricing
5+ nodes or large clusters
Contact Us ->
Everything in Pro, plus:
  • Unlimited nodes & clusters
  • Custom integrations & APIs
  • Dedicated support engineer
  • On-site training & setup

Real Savings Example

See how much you could save by moving historical data to cold storage.

Without Historian

  • 2TB SSD storage in Google Cloud = $4,488/year
  • Frequent RAM stress & cluster slowdowns
  • Complex cleanup jobs & maintenance
  • Lost data = lost business intelligence
Total Annual Cost: $4,488+

With Historian

  • Data moves to cold storage = $48/year @ S3 rates
  • InfluxDB runs leaner, faster, cheaper
  • Automated archival & retention
  • Queries on demand — no vendor lock-in
Total Annual Cost: $3,048 (Historian Pro + Storage)

Net Savings

~$1,440/year

Plus improved performance, reduced maintenance, and preserved historical data

Frequently Asked Questions

Does it support InfluxDB 1.x and 2.x?

Yes, Historian connects to both versions via their respective APIs.

Is this cloud-based?

No. Historian runs 100% on-prem or in your private cloud. You control where the data lives.

What storage backends are supported?

Any object storage compatible with S3 (AWS, GCS, MinIO, DigitalOcean Spaces), or even mounted NAS.

How is the data queried?

Via a built-in SQL API. You can also read the Parquet files directly from tools like Pandas, Spark, Dremio, or Presto.
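
For example, with pandas (the bucket path and file layout are illustrative; reading s3:// paths requires s3fs):

# Read an archived Parquet partition directly, bypassing the SQL API.
import pandas as pd

df = pd.read_parquet("s3://archive/sensor_data/2023/12.parquet")
dc1 = df[df["location"] == "datacenter-1"].sort_values("time", ascending=False)
print(dc1[["time", "temperature", "humidity"]].head())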

Have more questions?

Contact Us

Ready to stop stuffing years of data into InfluxDB?

Save storage costs. Keep your history. Stay in control with on-premise tiered storage.