5. Observability#

5.1. Metrics reference#

This document lists all kernel-level metrics available in QuasarDB, categorized by subsystem. Each metric includes its name, type (accumulator or gauge), and a concise description. This structure is optimized for both human readability and AI model training purposes.

5.1.1. Cache#

| Metric Name | Type | Description |
|-------------|------|-------------|
| evicted.count | accumulator | How many evictions were done (from the QuasarDB cache) |
| evicted.total_bytes | accumulator | How many bytes were evicted |
| memory.persistence.cache_bytes | gauge | The size, in bytes, of all block caches (RocksDB) - the row cache is not used at this time |
| memory.persistence.memtable_bytes | gauge | The number of bytes in the memtables (RocksDB) |
| memory.persistence.memtable_unflushed_bytes | gauge | The number of unflushed bytes in the memtables (RocksDB) |
| memory.persistence.table_reader_bytes | gauge | Memory usage of all RocksDB table readers |
| memory.persistence.total_bytes | gauge | Memory usage of the persistence layer: memtable bytes, table reader bytes, and cache bytes |
| memory.physmem.total_bytes | gauge | The total amount of physical memory, in bytes, detected on the machine |
| memory.physmem.used_bytes | gauge | The amount of physical memory, in bytes, used on the machine |
| memory.resident_bytes | gauge | The size in bytes of all entries currently in memory |
| memory.resident_count | gauge | The number of entries in memory; an entry is a value in the internal hash table, which is correlated, but not identical, to table counts |
| memory.tbb.global_loc_total_bytes | gauge | Advanced internal statistic used only for debugging complex memory issues |
| memory.tbb.huge_threshold_bytes | gauge | The threshold, in bytes, at which the allocator will use huge pages (if supported) |
| memory.tbb.large_object_bytes | gauge | The total bytes of large objects (i.e. big allocations that don't fit in the optimized structures) |
| memory.tbb.large_object_count | gauge | The total count of large objects (i.e. big allocations that don't fit in the optimized structures) |
| memory.tbb.large_unaligned_bytes | gauge | Advanced internal statistic used only for debugging complex memory issues |
| memory.tbb.max_requested_bytes | gauge | The largest allocation request ever made |
| memory.tbb.softlimit_bytes | gauge | The threshold, in bytes, above which TBB will return memory to the OS; below that threshold, TBB will hold the bytes |
| memory.tbb.total_bytes | gauge | Total bytes currently allocated (managed) by TBB - note that not every allocation in QuasarDB goes through TBB |
| memory.tbb.total_count | gauge | The number of allocations made through TBB |
| memory.vm.total_bytes | gauge | How many bytes of virtual memory the process can use; this value is usually extremely high on 64-bit operating systems |
| memory.vm.used_bytes | gauge | How many bytes of virtual memory the process is currently using; can be much higher than the actual memory usage when memory is reserved but not actually used |
| pageins.count | accumulator | How many page-ins were done by QuasarDB (from disk to the QuasarDB cache) |
| pageins.total_bytes | accumulator | How many bytes were paged in |
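
To put these metrics to work, you can compute physical memory utilization and cache churn per node. A minimal sketch, assuming the Python statistics API shown in section 5.2.1 and the metric names from the table above (availability may vary by server version):

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        used = c.get('memory.physmem.used_bytes', 0)
        total = c.get('memory.physmem.total_bytes', 0)
        evicted = c.get('evicted.total_bytes', 0)
        paged_in = c.get('pageins.total_bytes', 0)
        if total > 0:
            print(f"{node}: physical memory {100.0 * used / total:.1f}% used")
        if paged_in > 0:
            # A high evicted/paged-in byte ratio suggests the cache is
            # cycling data faster than it can serve it from memory.
            print(f"{node}: evicted/paged-in byte ratio {evicted / paged_in:.2f}")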

5.1.2. Cache - LRU2#

QuasarDB uses a two-level LRU (LRU2) caching strategy consisting of a cold and hot layer. New entries are first placed in the cold cache and only promoted to the hot cache on repeated access. This design improves hit rates for frequently accessed data while avoiding pollution by one-time reads.

The LRU2 metrics help observe:
  • Cold/hot cache pressure (evictions, promotions)

  • Cache efficiency (hit ratios)

  • I/O load due to cache misses (page-ins)

| Metric Name | Type | Description |
|-------------|------|-------------|
| lru2.cold.pagein.count | accumulator | Total number of entries read from disk into the cold cache layer |
| lru2.cold.pagein.total_bytes | accumulator | Total bytes read from disk into the cold cache layer |
| lru2.cold.evicted.count | accumulator | Number of entries removed from the cold cache before promotion to hot |
| lru2.cold.evicted.total_bytes | accumulator | Bytes removed from the cold cache before promotion to hot |
| lru2.cold.count | gauge | Current number of entries in the cold cache |
| lru2.hot.evicted.count | accumulator | Total number of evictions from the hot cache |
| lru2.hot.evicted.total_bytes | accumulator | Total bytes evicted from the hot cache |
| lru2.hot.promoted.count | accumulator | Total number of entries promoted from cold to hot cache |
| lru2.hot.promoted.total_bytes | accumulator | Total bytes promoted from cold to hot cache |
| lru2.hot.hit.count | accumulator | Number of cache hits in the hot layer (entry already promoted) |
| lru2.hot.hit.total_bytes | accumulator | Total bytes hit in the hot cache |
| lru2.hot.count | gauge | Current number of entries in the hot cache |
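
These counters make the cache efficiency mentioned above directly computable. A minimal sketch, assuming the Python statistics API from section 5.2.1:

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        hits = c.get('lru2.hot.hit.count', 0)
        pageins = c.get('lru2.cold.pagein.count', 0)
        promoted = c.get('lru2.hot.promoted.count', 0)
        cold_evicted = c.get('lru2.cold.evicted.count', 0)
        accesses = hits + pageins
        if accesses > 0:
            # Hot hits vs. disk page-ins approximates the overall hit ratio.
            print(f"{node}: hot hit ratio {100.0 * hits / accesses:.1f}%")
        left_cold = promoted + cold_evicted
        if left_cold > 0:
            # Share of cold entries re-accessed before they were evicted.
            print(f"{node}: promotion rate {100.0 * promoted / left_cold:.1f}%")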

5.1.3. Clustering#

| Metric Name | Type | Description |
|-------------|------|-------------|
| chord.invalid_requests_count | accumulator | How many times a client sent a request to the wrong node |
| chord.predecessor_changes_count | accumulator | How many times the predecessor changed; more than a couple of changes indicates cluster issues |
| chord.successor_changes_count | accumulator | Same as the predecessor metric, but for the successor |
| chord.unstable_errors_count | accumulator | How many times the node returned an "unstable cluster" error to the user |
| sync_with_master.elapsed_sec | accumulator | Cluster-to-cluster synchronization time elapsed, in seconds |
| sync_with_master.failures_count | accumulator | Cluster-to-cluster synchronization error count |
| sync_with_master.successes_count | accumulator | Cluster-to-cluster synchronization success count |
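
Since more than a couple of topology changes indicates trouble, a simple watchdog can flag unstable nodes. A minimal sketch; the threshold of 2 is an illustrative assumption:

import quasardb
import quasardb.stats as qdbst

THRESHOLD = 2  # illustrative; tune for your deployment

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        pred = c.get('chord.predecessor_changes_count', 0)
        succ = c.get('chord.successor_changes_count', 0)
        if pred > THRESHOLD or succ > THRESHOLD:
            print(f"{node}: frequent topology changes "
                  f"(predecessor: {pred}, successor: {succ}), investigate")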

5.1.4. Environment#

Metrics related to the environment in which QuasarDB is running, such as the OS, license, quasardb version, etc.

| Metric Name | Type | Description |
|-------------|------|-------------|
| hardware_concurrency_count | gauge | The value returned by std::thread::hardware_concurrency(); very useful for diagnosing problems |
| license.attribution_date_epoch | gauge | When the license was attributed, in seconds from epoch |
| license.expiration_date_epoch | gauge | Expiration date in seconds from epoch |
| license.max_memory_bytes | gauge | The maximum number of bytes the node is allowed to use |
| license.remaining_days_count | gauge | Number of days left until the license expires |
| license.support_until_epoch | gauge | When support will expire, in seconds from epoch |
| startup_epoch | gauge | Startup timestamp in seconds from epoch |

5.1.5. Indexes#

Metrics that relate to the microindex subsystem of QuasarDB, which speeds up queries.

| Metric Name | Type | Description |
|-------------|------|-------------|
| queries.microindex.aggregation.match_count | accumulator | How many times an aggregation successfully leveraged the microindex |
| queries.microindex.aggregation.miss_count | accumulator | How many times an aggregation could not leverage the microindex |
| queries.microindex.filter.match_count | accumulator | How many times a filter (e.g. WHERE) successfully leveraged the microindex |
| queries.microindex.filter.miss_count | accumulator | How many times a filter (e.g. WHERE) could not leverage the microindex |
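
The match/miss counters let you measure how often queries actually benefit from the microindex. A minimal sketch, assuming the Python statistics API from section 5.2.1:

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        for kind in ('aggregation', 'filter'):
            match = c.get(f'queries.microindex.{kind}.match_count', 0)
            miss = c.get(f'queries.microindex.{kind}.miss_count', 0)
            total = match + miss
            if total > 0:
                # Low ratios mean most queries cannot leverage the microindex.
                print(f"{node}: {kind} microindex match ratio "
                      f"{100.0 * match / total:.1f}%")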

5.1.6. Network#

Network-related metrics, useful for understanding the number of requests, simultaneous users, and network throughput.

| Metric Name | Type | Description |
|-------------|------|-------------|
| network.current_users_count | gauge | How many users currently have an active session |
| network.partitions_count | gauge | How many partitions there are |
| network.sessions.available_count | gauge | How many sessions are available |
| network.sessions.max_count | gauge | How many sessions are available in total |
| network.sessions.unavailable_count | gauge | How many sessions are currently busy |
| network.threads_per_partition_count | gauge | How many threads each partition has |
| requests.failures_count | accumulator | How many failures across all calls |
| requests.in_bytes | accumulator | How many bytes in across all calls |
| requests.out_bytes | accumulator | How many bytes out across all calls |
| requests.slow_count | accumulator | How many requests lasted longer than the "log slow operation" setting |
| requests.successes_count | accumulator | How many successes across all calls |
| requests.total_count | accumulator | How many requests were received across all calls (successes + failures) |
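
Session metrics are a quick way to spot saturation: when unavailable sessions approach the maximum, new clients may be unable to connect. A minimal sketch; the 90% alert threshold is an illustrative assumption:

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        busy = c.get('network.sessions.unavailable_count', 0)
        max_sessions = c.get('network.sessions.max_count', 0)
        if max_sessions > 0:
            utilization = 100.0 * busy / max_sessions
            print(f"{node}: session utilization {utilization:.1f}%")
            if utilization > 90.0:  # illustrative threshold
                print(f"{node}: sessions nearly exhausted")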

5.1.7. Performance profiling#

These metrics are only available when network.profile_performance is enabled in qdbd. They are useful for understanding how busy the cluster is and where the majority of the time is spent.

| Metric Name | Type | Description |
|-------------|------|-------------|
| perf.[name].[metric].total_ns | accumulator | Time spent, in nanoseconds, in the given perf metric of the function [name] |
| perf.[name].total_ns | accumulator | Aggregated total for the function |
| perf.total_ns | accumulator | Total of all measured functions in the current performance trace; helpful for computing the ratio of a given function |
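
As the description of perf.total_ns suggests, you can rank functions by their share of traced time. A minimal sketch, assuming the Python statistics API from section 5.2.1 and that network.profile_performance is enabled:

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        total = c.get('perf.total_ns', 0)
        if total == 0:
            continue
        # Collect both per-function aggregates and their sub-metrics.
        perf = {k: v for k, v in c.items()
                if k.startswith('perf.') and k.endswith('.total_ns')
                and k != 'perf.total_ns'}
        for key, value in sorted(perf.items(), key=lambda kv: kv[1],
                                 reverse=True):
            print(f"{node}: {key}: {100.0 * value / total:.2f}% of traced time")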

5.1.8. Storage#

| Metric Name | Type | Description |
|-------------|------|-------------|
| persistence.bucket.total_bytes | accumulator | How many bytes were written to disk for buckets, including large buckets |
| persistence.bucket.total_count | accumulator | How many times buckets were written to disk, including large buckets |
| persistence.bucket.total_us | accumulator | How many microseconds were spent writing buckets, including large buckets |
| persistence.bucket_deletion_count | accumulator | How many times a deletion was performed on a bucket |
| persistence.bucket_insert_count | accumulator | How many times an insert was performed on a bucket |
| persistence.bucket_read_count | accumulator | How many times a bucket was read |
| persistence.bucket_update_count | accumulator | How many times a bucket was updated |
| persistence.cloud_local_cache_bytes | gauge | The current size, in bytes, of the cloud cache (RocksDB + S3) |
| persistence.entries_count | gauge | The number of entries in the persistence layer; correlated with the number of tables/buckets, but usually higher |
| persistence.large_bucket.total_bytes | accumulator | How many bytes were written to disk for all the large buckets |
| persistence.large_bucket.total_count | accumulator | How many times a large bucket was written |
| persistence.large_bucket.total_us | accumulator | How many microseconds were spent writing large buckets |
| persistence.persistent_cache_bytes | gauge | The current size, in bytes, of the persisted cache. The persisted cache is used to cache slower I/O on faster I/O; not to be confused with the cloud cache |
| persistence.read_bytes | gauge | How many bytes were read from disk; low-level RocksDB metric |
| persistence.ts_write.failures_count | accumulator | How many "writes" (all ts operations) failed |
| persistence.ts_write.successes_count | accumulator | How many "writes" (all ts operations) succeeded |
| persistence.utilized_bytes | gauge | How many bytes are used on disk; low-level RocksDB metric |
| persistence.written_bytes | gauge | How many bytes were written to disk; low-level RocksDB metric |
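
Dividing the bucket accumulators gives average write size and latency; very small averages often point at many tiny writes. A minimal sketch, assuming the Python statistics API from section 5.2.1:

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        count = c.get('persistence.bucket.total_count', 0)
        if count > 0:
            avg_bytes = c.get('persistence.bucket.total_bytes', 0) / count
            avg_us = c.get('persistence.bucket.total_us', 0) / count
            print(f"{node}: average bucket write of {avg_bytes:.0f} bytes "
                  f"took {avg_us:.0f} us")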

5.1.9. Storage - async pipelines#

These metrics relate to the async pipelines storage subsystem, which can be CPU- and memory-intensive and is typically used in streaming data use cases.

| Metric Name | Type | Description |
|-------------|------|-------------|
| async_pipelines.[number].buffer.bytes | gauge | How many bytes are in the "merge" map (a buffer) of the given async pipeline |
| async_pipelines.[number].buffer.count | gauge | How many entries are in the "merge" map (a buffer) of the given async pipeline |
| async_pipelines.buffer.total_bytes | accumulator | The number of bytes merged by the async pipelines (i.e. smaller requests merged into a larger one) |
| async_pipelines.buffer.total_count | accumulator | The number of merge operations |
| async_pipelines.busy_denied_count | accumulator | Writes denied because the pipeline is full, for a given user |
| async_pipelines.busy_denied_count.total | accumulator | Writes denied because the pipeline is full, for all users |
| async_pipelines.errors_count | accumulator | Errors for the current user ID |
| async_pipelines.errors_count.total | accumulator | Errors for all users |
| async_pipelines.low.state_write.duration_us | accumulator | The time elapsed writing the state of the low-priority async pipelines |
| async_pipelines.pulled.total_bytes | accumulator | How many bytes were pulled from the pipelines by the merger |
| async_pipelines.pulled.total_count | accumulator | How many times data was pulled from the pipelines by the merger |
| async_pipelines.pushed.total_bytes | accumulator | How many bytes were pushed to the pipelines by a user |
| async_pipelines.pushed.total_count | accumulator | How many times data was pushed to the pipelines by a user |
| async_pipelines.write.bytes_total | accumulator | How many bytes were written to disk |
| async_pipelines.write.elapsed_us | accumulator | How much time was spent writing to disk, including serialization, inserting into the in-memory timeseries structure, etc. |
| async_pipelines.write.failures_count | accumulator | How many failures for the given user |
| async_pipelines.write.failures_count.total | accumulator | How many failures for all users |
| async_pipelines.write.successes_count | accumulator | How many successes for the given user |
| async_pipelines.write.successes_count.total | accumulator | How many successes for all users |
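
Comparing pushes to pulls gives a rough merge factor, and the denied counters reveal back-pressure. A minimal sketch, assuming the Python statistics API from section 5.2.1:

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        c = stats['cumulative']
        pushed = c.get('async_pipelines.pushed.total_count', 0)
        pulled = c.get('async_pipelines.pulled.total_count', 0)
        denied = c.get('async_pipelines.busy_denied_count.total', 0)
        if pulled > 0:
            # Many pushes per pull means small writes are being merged well.
            print(f"{node}: ~{pushed / pulled:.1f} pushes merged per pull")
        if denied > 0:
            print(f"{node}: {denied} writes denied because pipelines were full")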

5.1.10. Storage - backups#

These metrics relate to backups of the storage subsystem.

| Metric Name | Type | Description |
|-------------|------|-------------|
| backup.elapsed_sec | accumulator | How much time was spent backing up |
| backup.failures_count | accumulator | Background backup failures |
| backup.successes_count | accumulator | Background backup successes |
| backup.total_bytes | accumulator | How many bytes were written to disk |

5.1.11. Storage - Optimization#

These metrics relate to background tasks and operations for the storage subsystem that help maintain performance and manage data lifecycle.

| Metric Name | Type | Description |
|-------------|------|-------------|
| compact.cancelations_count | accumulator | Background compaction cancellations |
| compact.elapsed_sec | accumulator | How much time was spent compacting |
| compact.failures_count | accumulator | Background compaction failures |
| compact.successes_count | accumulator | Background compaction successes (not automatic; explicit calls) |
| trim.cancelations_count | accumulator | Background trim cancellations |
| trim.elapsed_sec | accumulator | Background trim duration |
| trim.failures_count | accumulator | Background trim failures |
| trim.successes_count | accumulator | Background trim successes |

5.1.12. Metric Unit Interpretation#

Metric names use suffixes to indicate the unit or value type:

| Suffix | Meaning |
|--------|---------|
| _ns | Duration in nanoseconds |
| _us | Duration in microseconds |
| _sec | Duration in seconds |
| _epoch | Timestamp (seconds since Unix epoch) |
| _bytes | Byte count (e.g., memory or I/O) |
| _count | Count of operations or events |
| _total | Cumulative count or size |
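
These conventions are easy to exploit programmatically, for example to attach units when exporting metrics to a monitoring system. A minimal sketch:

SUFFIX_UNITS = {
    '_ns': 'nanoseconds',
    '_us': 'microseconds',
    '_sec': 'seconds',
    '_epoch': 'seconds since Unix epoch',
    '_bytes': 'bytes',
    '_count': 'count',
    '_total': 'cumulative count or size',
}

def metric_unit(name: str) -> str:
    """Return the unit implied by a metric name's suffix, if any."""
    for suffix, unit in SUFFIX_UNITS.items():
        if name.endswith(suffix):
            return unit
    return 'unknown'

print(metric_unit('requests.out_bytes'))              # bytes
print(metric_unit('backup.elapsed_sec'))              # seconds
print(metric_unit('license.expiration_date_epoch'))   # seconds since Unix epoch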

5.2. Retrieving Statistics from a QuasarDB Cluster#

5.2.1. Using QuasarDB Python API#

This section provides a Python script that demonstrates how to use the QuasarDB Python API to connect to a QuasarDB cluster and retrieve statistics of all nodes in the cluster. The script outlines the process for accessing cumulative and by_uid statistics and provides guidance on interpreting various types of stats.

Prerequisites

Before running the script, make sure you have the following installed:

  • Python 3

  • QuasarDB Python API (quasardb)

Python Script

import quasardb
import quasardb.stats as qdbst
import json

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    stats = qdbst.by_node(conn)
    print(json.dumps(stats, indent=4))
Example output:

{
   "127.0.0.1:2836": {
       "by_uid": {},
       "cumulative": {
           "async_pipelines.pulled.total_bytes": 0,
           "async_pipelines.pulled.total_count": 0,
           "cpu.idle": 6012100000,
           "cpu.system": 166390000,
           "cpu.user": 318260000,
           "disk.bytes_free": 221570932736,
           "disk.bytes_total": 274865303552,
           "disk.path": "insecure/db/0-0-0-1",
           "engine_build_date": "2023.12.06T20.59.15.000000000 UTC",
           "engine_version": "3.15.x",
           "evicted.count": 1,
           "evicted.total_bytes": 209,
           "hardware_concurrency": 8,
           "license.attribution_date": 1701900515,
           "license.expiration_date": 0,
           "license.memory": 8589934592,
           "license.remaining_days": 31337,
           "license.support_until": 0,
           "memory.persistence.cache_bytes": 104,
           "memory.persistence.memtable_bytes": 1071088,
           "memory.persistence.memtable_unflushed_bytes": 1071088,
           "memory.persistence.table_reader_bytes": 0,
           "memory.persistence.total_bytes": 1071192,
           "memory.physmem.bytes_total": 33105100800,
           "memory.physmem.bytes_used": 12619468800,
           "memory.resident_bytes": 0,
           "memory.resident_count": 0,
           "memory.vm.bytes_total": 140737488351232,
           "memory.vm.bytes_used": 1776730112,
           "network.current_users_count": 0,
           "network.sessions.available_count": 510,
           "network.sessions.max_count": 512,
           "network.sessions.unavailable_count": 2,
           "node_id": "0-0-0-1",
           "operating_system": "Linux 5.10.199-190.747.amzn2.x86_64",
           "partitions_count": 8,
           "perf.common.get_type_for_removal.deserialization.total_ns": 1374,
           "perf.common.get_type_for_removal.processing.total_ns": 8101,
           "perf.common.get_type_for_removal.total_ns": 9475,
           "perf.control.get_system_info.deserialization.total_ns": 4851,
           "perf.control.get_system_info.processing.total_ns": 599,
           "perf.control.get_system_info.total_ns": 5450,
           "perf.total_ns": 249523,
           "perf.ts.create_root.content_writing.total_ns": 16186,
           "perf.ts.create_root.deserialization.total_ns": 1717,
           "perf.ts.create_root.entry_writing.total_ns": 53294,
           "perf.ts.create_root.processing.total_ns": 131766,
           "perf.ts.create_root.serialization.total_ns": 128,
           "perf.ts.create_root.total_ns": 203091,
           "perf.ts.get_columns.deserialization.total_ns": 1409,
           "perf.ts.get_columns.processing.total_ns": 30098,
           "perf.ts.get_columns.total_ns": 31507,
           "persistence.capacity_bytes": 0,
           "persistence.cloud_local_cache_bytes": 0,
           "persistence.entries_count": 1,
           "persistence.info": "RocksDB 6.27",
           "persistence.persistent_cache_bytes": 0,
           "persistence.read_bytes": 10245,
           "persistence.utilized_bytes": 0,
           "persistence.written_bytes": 25929,
           "requests.bytes_in": 851,
           "requests.bytes_out": 32,
           "requests.errors_count": 2,
           "requests.successes_count": 4,
           "requests.total_count": 6,
           "startup": 1701900515,
           "startup_time": 1701900515,
           "check.online": 1,
           "check.duration_ms": 4
       }
   }
}

5.2.2. Using qdbsh#

In qdbsh, you can use the direct_prefix_get command to retrieve a list of all statistics. Here’s an example:

qdbsh > direct_prefix_get $qdb.statistics 100
 1. $qdb.statistics.async_pipelines.pulled.total_bytes
 2. $qdb.statistics.async_pipelines.pulled.total_count
 3. $qdb.statistics.cpu.idle
 4. $qdb.statistics.cpu.system
 ...
52. $qdb.statistics.startup_time

qdbsh > direct_int_get $qdb.statistics.cpu.user
1575260000

qdbsh > direct_blob_get $qdb.statistics.node_id
c5fe30bf0154acc-5d63bd06e7878b9c-5f635b3cf7fc3560-dbe35df7b5080651
1:12

Note

The direct_prefix_get command is used to retrieve a list of statistics keys that match a specific prefix. In this case, the command is used to fetch statistics keys that start with the prefix $qdb.statistics. The number 100 is a parameter that limits the maximum number of keys returned by the command.

5.2.3. Understanding “by_uid” and “cumulative” Statistics#

The retrieved statistics are organized into two main dictionaries: “by_uid” and “cumulative.”

  • “by_uid”: This dictionary contains per-user statistics on a secure cluster with users. It groups statistics by their corresponding user IDs.

  • “cumulative”: This dictionary holds cumulative statistics for each node in the cluster. These statistics provide aggregated information across all users and are typically global values for the entire node.
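
A minimal sketch showing how to walk both dictionaries, assuming “by_uid” maps user IDs to per-user metric dictionaries (on an insecure cluster it is typically empty, as in the output above):

import quasardb
import quasardb.stats as qdbst

with quasardb.Cluster('qdb://127.0.0.1:2836') as conn:
    for node, stats in qdbst.by_node(conn).items():
        cumulative = stats['cumulative']
        print(f"node {node}: "
              f"{cumulative.get('requests.total_count', 0)} total requests")
        for uid, user_stats in stats['by_uid'].items():
            print(f"  user {uid}: {user_stats}")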

5.3. Performance Tracing#

If you have enabled performance profiling on the server side, you can run detailed performance traces of the operations that you execute on your cluster. This provides a detailed, step-by-step trace of the low-level operations, which is usually useful when you are debugging together with your Solution Architect.

If you have enabled statistics in conjunction with performance traces, you will get additional performance trace metrics in your statistics.

To use performance traces from the QuasarDB shell, use the enable_perf_trace command as follows:

$ qdbsh
quasardb shell version 3.5.0master build ce779a9 2019-10-23 00:04:01 +0000
Copyright (c) 2009-2019 quasardb. All rights reserved.
Need some help? Check out our documentation here:  https://doc.quasardb.net

qdbsh > enable_perf_trace
qdbsh > create table testable(col1 int64, col2 double)

*** begin performance trace

total time: 175 us

- function + ts.create_root - 175 us [100.00 %]
           |
           |              data received:         0 us - delta:         0 us [00.00 % - 00.00 %]
           |     deserialization starts:         2 us - delta:         2 us [01.14 % - 01.14 %]
           |       deserialization ends:         7 us - delta:         5 us [02.86 % - 02.86 %]
           |             entering chord:         9 us - delta:         2 us [01.14 % - 01.14 %]
           |                   dispatch:        14 us - delta:         5 us [02.86 % - 02.86 %]
           |     deserialization starts:        24 us - delta:        10 us [05.71 % - 05.71 %]
           |       deserialization ends:        26 us - delta:         2 us [01.14 % - 01.14 %]
           |          processing starts:        28 us - delta:         2 us [01.14 % - 01.14 %]
           |      entry trimming starts:        79 us - delta:        51 us [29.14 % - 29.14 %]
           |        entry trimming ends:        81 us - delta:         2 us [01.14 % - 01.14 %]
           |     content writing starts:        82 us - delta:         1 us [00.57 % - 00.57 %]
           |       content writing ends:       141 us - delta:        59 us [33.71 % - 33.71 %]
           |       entry writing starts:       141 us - delta:         0 us [00.00 % - 00.00 %]
           |         entry writing ends:       172 us - delta:        31 us [17.71 % - 17.71 %]
           |            processing ends:       175 us - delta:         3 us [01.71 % - 01.71 %]

*** end performance trace

In the trace above, we can see the performance trace of the CREATE TABLE statement and get a detailed idea of where it spends most of its time. In this case, the total operation lasted 175 microseconds, with a detailed breakdown of the low-level function timings.

5.4. User properties#

User properties are a way to attach metadata to a specific connection. They are key-value pairs that can be used to store additional information, useful when you need a way to identify a connection for debugging or logging purposes.

Currently, user properties can be set from the QuasarDB Python API.

User properties are logged server-side when JSON log output is enabled. Information on how to enable JSON logging can be found in the logging documentation.

5.4.1. Identifying which connection is pushing data in small increments#

Let's say you have multiple applications writing data to the same server, and one of those applications is writing data in small increments.

This can impact server performance and is best avoided when not using async pipelines. Let's use user properties to help identify the offending connection.

If your server is set to detect and log small incremental writes, you can modify your applications to use unique user properties to identify which application is causing the issue.

5.4.2. Setting user properties from Python API and triggering logging for small increment writes#

import quasardb
import datetime
import quasardb.pandas as qdbpd
import pandas as pd

data = {
    "column_1": [42]
}

idx = [datetime.datetime.now()]

df = pd.DataFrame(data, index=idx)

with quasardb.Cluster("qdb://127.0.0.1:2836") as conn:

    # first make sure that the user property is enabled
    conn.options().enable_user_properties()

    # now you can set user property
    conn.properties().put("application_id", "0")

    # write single row
    qdbpd.write_dataframe(df, conn, "t", create=True)

You should now see a warning in the server logs for small incremental inserts coming from this connection:

{"timestamp":"2025-01-16T08:58:06.664698300Z","process_id":16548,"thread_id":33640,"level":"warning","message":"small incremental insert detected: append increased data size for shard t/0ms by only 0.012% (below threshold of 10%). This negatively affects write performance. To turn off this message, set 'log_small_append_percentage' to 0 in your qdbd config file.","$client_hostname":"hal-9000","$client_version":"3.14.2 3.14.2.dev0 d82b8b86d71c9334951b442b937abf9a598eda64 2025-01-14 10:18:10 -0500","$client_target":"AMD64 core2 64-bit","client_application_id":"0","$client_timestamp":"2025-01-16T09:58:06+01:00","$client_platform":"Microsoft Windows 11  (build 26100), 64-bit"}

With this information you can now identify which application is causing the issue and take appropriate action.

More examples for the Python API can be found in the Python API documentation.