3.5. Prometheus Remote Storage
3.5.1. Introduction
QuasarDB provides an integration with Prometheus’s remote storage mechanism, which ships as part of the QuasarDB REST API.
Prometheus’s local storage is limited by single node durability and scalability. Using QuasarDB as your remote storage will allow you to scale your Prometheus storage with your QuasarDB cluster.
Warning
Prometheus’s remote storage is currently considered experimental and subject to change. Please ensure that you are using the latest stable version of the QuasarDB daemon and REST client.
3.5.2. Prerequisites
This documentation assumes you have:

- a running QuasarDB cluster;
- the QuasarDB REST API (qdb_rest) installed and running;
- a working Prometheus installation.
3.5.3. Configuration
To enable the use of QuasarDB with Prometheus’s remote storage, add the following settings to your Prometheus configuration file, then restart Prometheus:
# This assumes your qdb_rest client is running at its default URL of http://localhost:40080
remote_write:
- url: "http://localhost:40080/api/prometheus/write"
remote_read:
- url: "http://localhost:40080/api/prometheus/read"
For more advanced remote storage configuration options see Prometheus’s remote_write and remote_read documentation.
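As a hedged illustration, a remote_write entry can carry tuning options alongside the URL. The field names below come from Prometheus’s remote_write documentation; the values are examples only, not recommendations:

```yaml
remote_write:
  - url: "http://localhost:40080/api/prometheus/write"
    remote_timeout: 30s            # per-request timeout for sends to qdb_rest
    queue_config:
      capacity: 2500               # samples buffered per shard before blocking
      max_shards: 200              # upper bound on parallel sender shards
    write_relabel_configs:         # drop unwanted series before they are sent
      - source_labels: [__name__]
        regex: "go_gc_.*"
        action: drop
```

Relabeling before the write is a useful way to keep high-cardinality or uninteresting metrics out of remote storage entirely.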
Note
Authentication is currently not supported, as Prometheus’s configuration file does not allow for environment variables or secrets. You can expect authentication support in a future release.
3.5.4. How Prometheus metrics are mapped to QuasarDB
The following transformations are applied when importing Prometheus data into QuasarDB:
- Prometheus metrics become QuasarDB tables prefixed with $qdb.prom
- Prometheus metric samples are stored in a value column, which is represented as a Double
- Prometheus labels are stored as Blob columns
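The mapping rules above can be sketched in plain Python. This is an illustration of the transformation, not QuasarDB client code; the function name and dict-based row representation are inventions for this example:

```python
def map_sample(metric_name, labels, value, timestamp):
    """Map one Prometheus sample to a row for a $qdb.prom-prefixed table."""
    row = {
        "$table": "$qdb.prom." + metric_name,  # metric name gets the table prefix
        "$timestamp": timestamp,
        "value": float(value),                 # samples are stored as a Double
    }
    # each Prometheus label becomes a Blob column holding the label value
    for name, label_value in labels.items():
        row[name] = label_value.encode("utf-8")
    return row

row = map_sample(
    "http_requests_total",
    {"job": "prometheus", "group": "canary"},
    201,
    "2019-01-01T00:00:00.000000000Z",
)
print(row["$table"])  # $qdb.prom.http_requests_total
```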
For example, the following Prometheus metric:
http_requests_total{job="prometheus",group="canary"} 201
Would be transformed into the following QuasarDB table:
$timestamp                     | $table                        | job        | group  | value
-------------------------------|-------------------------------|------------|--------|------
2019-01-01T00:00:00.000000000Z | $qdb.prom.http_requests_total | prometheus | canary | 201.0
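Once imported, these tables can be queried like any other QuasarDB table. A minimal sketch, assuming the qdbsh interactive shell and QuasarDB's SELECT query syntax (the exact syntax may differ between QuasarDB versions):

```
qdb> SELECT * FROM "$qdb.prom.http_requests_total"
```

Note that the table name must be quoted, since it contains the $qdb.prom prefix.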