4.5. C

4.5.1. Requirements

Before you can get started, please ensure that:

  • You have the latest version of the QuasarDB client library installed on your computer

  • You have access to a running QuasarDB cluster.

The rest of this document assumes you have a cluster up and running under a qdb:// URI.

4.5.2. Installing libraries

Use the Windows installer executable and select the C API option.

Choose one of the following installers, depending on your platform:

> qdb-x.y.z-windows-64bit-setup.exe

> qdb-x.y.z-windows-32bit-setup.exe

4.5.3. Importing libraries

Most languages require you to import the relevant QuasarDB modules before you can use them, so we start out with the necessary headers.

#include <qdb/client.h>
#include <qdb/tag.h>
#include <qdb/ts.h>

4.5.4. Connection management

Establishing a connection with the QuasarDB cluster is easy. You need the URI of at least one of your nodes, and the client will automatically detect all nodes in the cluster.

A QuasarDB cluster operates in either a secure or an insecure mode. If you do not know whether your cluster is running in secure mode, please ask your system administrator.

Insecure connection

// We first need to open a handle, which is the memory structure that
// QuasarDB uses to maintain connection state.
qdb_handle_t handle;
qdb_error_t error = qdb_open(&handle, qdb_p_tcp);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// Now that we have opened the handle, we can tell it to establish a connection
// with the cluster.
error = qdb_connect(handle, "qdb://localhost:2836");
if (QDB_FAILURE(error)) return EXIT_FAILURE;

Secure connection

In case of a secure connection, we need to provide a few additional parameters:

  • A username;

  • A user private key;

  • A cluster public key.

More information on QuasarDB’s security mechanisms can be found in our security manual.

If you do not know the values of these parameters, please ask your system administrator.

// We first need to open a handle, which is the memory structure that
// QuasarDB uses to maintain connection state.
qdb_handle_t handle;
qdb_error_t error = qdb_open(&handle, qdb_p_tcp);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// Load the encoded key
error = qdb_option_set_cluster_public_key(handle, "cluster_public_key");
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// Then the username and its associated encoded key
error = qdb_option_set_user_credentials(handle, "user", "user_private_key");
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// Another option is to load the credentials directly from the security files
error = qdb_option_load_security_files(handle, "cluster_public_key.txt", "user_credentials.txt");
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// Now that we have opened the handle, we can tell it to establish a connection
// with the cluster.
error = qdb_connect(handle, "qdb://localhost:2836");
if (QDB_FAILURE(error)) return EXIT_FAILURE;

4.5.5. Creating a table

Before we can store timeseries data, we need to create a table. A table is uniquely identified by a name (e.g. “stocks” or “sensors”) and has one or more columns.

In this example we will create a table “stocks” with three columns: “open”, “close” and “volume”. The first two are double-precision floating-point columns, and the third is a 64-bit signed integer column.

// Initialize our columns definitions
const qdb_ts_column_info_t columns[3] = {
    {.name = "open", .type = qdb_ts_column_double},  //
    {.name = "close", .type = qdb_ts_column_double}, //
    {.name = "volume", .type = qdb_ts_column_int64}  //
};
const int columns_count = sizeof(columns) / sizeof(qdb_ts_column_info_t);

// Now create the table with the default shard size
qdb_error_t error = qdb_ts_create(handle, "stocks", qdb_d_default_shard_size, columns, columns_count);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

Attaching tags

QuasarDB allows you to manage your tables by attaching tags to them. For more information about tags, see Managing tables with Tags.

In the example below, we will attach the tag nasdaq to the “stocks” table we created.

error = qdb_attach_tag(handle, "stocks", "nasdaq");
if (QDB_FAILURE(error)) return EXIT_FAILURE;

4.5.6. A word about API types

Now that we have our tables in place, it’s time to start interacting with actual data. On a high-level, QuasarDB provides two different APIs for you to insert data:

  • A row-based API, where you insert data on a row-by-row basis. This API is referred to as our “batch inserter”, and provides stronger guarantees in terms of consistency.

  • A column-based API, where you insert pure timeseries data per column. This data is typically aligned per timestamp, and therefore assumes unique timestamps.

If you’re unsure which API is best for you, start out with the row-based insertion API, the batch inserter.

You should now continue with either the row oriented or the column oriented tutorials.

4.5.7. Row oriented API

Batch inserter

The QuasarDB batch inserter provides you with a row-oriented interface to send data to the QuasarDB cluster. The data is buffered client-side and sent in batches, ensuring efficiency and consistency.

The batch writer has various modes of operation, each with different tradeoffs:

  • Transactional insertion mode that employs Copy-on-Write.
    Use case(s): general purpose.

  • Transactional insertion mode that does not employ Copy-on-Write; newly written data may be visible to queries before the transaction is fully completed.
    Use case(s): streaming data, many small incremental writes.

  • Data is buffered in-memory in the QuasarDB daemon nodes before writing to disk; data from multiple sources is buffered together and periodically flushed to disk.
    Use case(s): streaming data where multiple processes simultaneously write into the same table(s).

  • Truncate (a.k.a. “upsert”): replaces any existing data with the provided data.
    Use case(s): replay of historical data.

When in doubt, we recommend you use the default insertion mode.

The steps involved in using the batch writer API are as follows:

  1. Initialize a local batch inserter instance, providing it with the tables and columns you want to insert data for. Note that specifying multiple tables is supported: this will allow you to insert data into multiple tables in one atomic operation.

  2. Prepare/buffer the batch you want to insert. Buffering locally before sending ensures that the transmission of the data happens at maximum throughput, ensuring server-side efficiency.

  3. Push the batch to the cluster.

  4. If necessary, go back to step 2 to send additional batches.

We recommend you use batch sizes as large as possible: between 50k and 500k rows is optimal.

In the example below we will insert two different rows for two separate days into our “stocks” table.

// Initialize our batch columns definitions
const qdb_ts_batch_column_info_t batch_columns[3] = {
    {.timeseries = "stocks", .column = "open", .elements_count_hint = 2},  //
    {.timeseries = "stocks", .column = "close", .elements_count_hint = 2}, //
    {.timeseries = "stocks", .column = "volume", .elements_count_hint = 2} //
};
const int batch_columns_count = sizeof(batch_columns) / sizeof(qdb_ts_batch_column_info_t);

// create our batch handle
qdb_batch_table_t table;
error = qdb_ts_batch_table_init(handle, batch_columns, batch_columns_count, &table);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// The batch API is row oriented: we first set up the start timestamp of the row
// Set timestamp to 2019-02-01
qdb_timespec_t timestamp = {.tv_sec = 1548979200, .tv_nsec = 0};
qdb_ts_batch_start_row(table, &timestamp);

// Then set the values for each column
qdb_ts_batch_row_set_double(table, 0, 3.40);
qdb_ts_batch_row_set_double(table, 1, 3.50);
qdb_ts_batch_row_set_int64(table, 2, 10000);

// Add another row
// Set timestamp to 2019-02-02
timestamp.tv_sec = 1549065600;
qdb_ts_batch_start_row(table, &timestamp);
qdb_ts_batch_row_set_double(table, 0, 3.50);
qdb_ts_batch_row_set_double(table, 1, 3.55);
qdb_ts_batch_row_set_int64(table, 2, 7500);

// Push into the database as a single operation
error = qdb_ts_batch_push(table);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// Don't forget to release the table
qdb_release(handle, table);

Bulk reader

On the other side of the row-oriented API we have the “bulk reader”. The bulk reader provides streaming access to a single table, optionally limited by certain columns and/or certain time ranges.

If you want to have efficient row-oriented access to the raw data in a table, this is the API you want to use. If you want to execute aggregates, complex where clauses and/or multi-table joins, please see the query API.

The example below will show you how to read our stock data for just a single day.

// We can initialize our bulk reader directly from the columns we defined earlier
qdb_local_table_t local_table;
error = qdb_ts_local_table_init(handle, "stocks", columns, columns_count, &local_table);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

// Setup a range going from 2019-02-01 to 2019-02-02
qdb_ts_range_t range = {.begin = {.tv_sec = 1548979200, .tv_nsec = 0}, .end = {.tv_sec = 1549065600, .tv_nsec = 0}};
error                = qdb_ts_table_get_ranges(local_table, &range, 1u);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

qdb_timespec_t timestamp;
while (!qdb_ts_table_next_row(local_table, &timestamp))
{
    double value_index_zero   = 0;
    double value_index_one    = 0;
    qdb_int_t value_index_two = 0;

    error = qdb_ts_row_get_double(local_table, 0, &value_index_zero);
    // put cleanup logic here in case of error
    error = qdb_ts_row_get_double(local_table, 1, &value_index_one);
    // put cleanup logic here in case of error
    error = qdb_ts_row_get_int64(local_table, 2, &value_index_two);
    // put cleanup logic here in case of error
}

// Don't forget to release the table once finished
qdb_release(handle, local_table);

The next section will show you how to store and retrieve the same dataset using the column-oriented API. If this is irrelevant to you, it’s safe to skip directly to the query API.

4.5.8. Column oriented API

The other high-level API QuasarDB offers is the column-oriented API. It is more lightweight than the row-oriented API, and provides a good alternative if your dataset is shaped correctly.

Storing timeseries

To store a single timeseries, all you have to do is provide a sequence of timestamp/value pairs and the column you want to store them in.

// Prepare the points for each column
const qdb_ts_double_point opens[2] = {
    {.timestamp = {.tv_sec = 1548979200, .tv_nsec = 0}, .value = 3.4}, //
    {.timestamp = {.tv_sec = 1549065600, .tv_nsec = 0}, .value = 3.5}  //
};
const qdb_ts_double_point closes[2] = {
    {.timestamp = {.tv_sec = 1548979200, .tv_nsec = 0}, .value = 3.50}, //
    {.timestamp = {.tv_sec = 1549065600, .tv_nsec = 0}, .value = 3.55}  //
};
const qdb_ts_int64_point volumes[2] = {
    {.timestamp = {.tv_sec = 1548979200, .tv_nsec = 0}, .value = 10000}, //
    {.timestamp = {.tv_sec = 1549065600, .tv_nsec = 0}, .value = 7500}   //
};

// Insert each column independently
error = qdb_ts_double_insert(handle, "stocks", "open", opens, 2u);
if (QDB_FAILURE(error)) return EXIT_FAILURE;
error = qdb_ts_double_insert(handle, "stocks", "close", closes, 2u);
if (QDB_FAILURE(error)) return EXIT_FAILURE;
error = qdb_ts_int64_insert(handle, "stocks", "volume", volumes, 2u);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

Retrieving timeseries

To retrieve a single timeseries, you provide a column and one or more time ranges. The example below shows how to retrieve the “open” column for a single day.

// Setup the range(s) we want to get
const qdb_ts_range_t ranges[1] = {{.begin = {.tv_sec = 1548979200, .tv_nsec = 0}, .end = {.tv_sec = 1549065600, .tv_nsec = 0}}};

// The data is written into the empty structures you pass as in-out parameters
qdb_ts_double_point * points = NULL;
qdb_size_t point_count       = 0;

// Get the provided ranges
error = qdb_ts_double_get_ranges(handle, "stocks", "open", ranges, 1u, &points, &point_count);
if (QDB_FAILURE(error)) return EXIT_FAILURE;

4.5.9. Dropping a table

Dropping a table with QuasarDB is easy, and the removal is immediately visible to all clients.

// A timeseries is considered a normal entry for this operation
// You can safely remove it
qdb_remove(handle, "stocks");

4.5.10. Reference