
Time Series Storage Service

DEPRECATED!

Please use the connector resource instead.

TSDB (time series database) service provides a storage layer dedicated to metric data produced and evolving over time. For example, TSDB can be used to store periodic sensor data sent from your devices.

Data is associated with a timestamp and optional tags which can be used later for query purposes. Finally, the TSDB service provides a robust aggregation system enabling data-mining capabilities.

A data retention policy can be used to automatically delete data after a certain time period (days, weeks, months, years). When metric data is older than the specified retention period, it will be permanently deleted during the next GC period. Because deletions happen periodically, queries may sometimes retrieve data older than the configured retention policy. The default retention policy applies to all metrics that do not have a per-metric retention policy set.

Operations

Storage

Events


Configuration parameters

Please note that bold arguments are required, while italic arguments are optional.

Name Type Description
default_retention ^(\d+(d|w|m|y)|infinity)$ Default retention time with unit. This applies to all metrics that do not have a per-metric retention policy set. Default value is 3m (3 months).
Supported units: d (days), w (weeks), m (months), y (years)
e.g. 3d, 5w or infinity.
IMPORTANT: this affects all the data stored in this service.
Default: "3m"
metrics_retention object The retention time for metrics; each property name should be a metric name.
metrics_retention.^[a-zA-Z0-9_]+$ ^(\d+(d|w|m|y)|infinity|default)$ Retention time for the metric, with unit.
Use default to follow the default retention value and infinity to keep the metric forever.
Supported units: d (days), w (weeks), m (months), y (years)
e.g. 3d, 5w, default or infinity.
recent_function_disable boolean Set to 'true' to disable the 'recent' functionality to optimize write requests. Using the 'recent' function will then return an error.
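As an illustration, a service configuration combining the parameters above might look like the following (the metric names temperature and audit_log are hypothetical):

```json
{
  "default_retention": "1y",
  "metrics_retention": {
    "temperature": "6m",
    "audit_log": "infinity"
  },
  "recent_function_disable": false
}
```

Here temperature is kept for 6 months, audit_log is kept forever, and every other metric falls back to the 1-year default.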

Operations

Please note that bold arguments are required, while italic arguments are optional.

addConfig

This operation sets retention for dynamic metric names; the values will not be reflected in the service configuration. It adds or replaces (if existing) metric retention configurations. For fixed metric names, set the value in the service configuration instead.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
default_retention ^(\d+(d|w|m|y)|infinity)$ Default retention time with unit. Default value is 3m.
Supported units: d (days), w (weeks), m (months), y (years)
e.g. 3d, 5w or infinity
Default: "3m"
metrics_retention object The retention time for metrics; each property name should be a metric name.
metrics_retention.^[a-zA-Z0-9_]+$ ^(\d+(d|w|m|y)|infinity|default)$ Retention time for the metric, with unit.
Use default to follow the default retention value and infinity to keep the metric forever.
Supported units: d (days), w (weeks), m (months), y (years)
e.g. 3d, 5w, default or infinity

Responses

Code Body Type Description
204 nil Config updated
default object Error

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

local result = Tsdb.addConfig({
  default_retention = "3m",
  metrics_retention = {
    metric1 = "3d",
    metric2 = "10d"
  }
})
response.message = result

deleteAll

Delete all data of a given solution.

Responses

Code Body Type Description
200 nil All data deleted
default object Error

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

-- Delete all data of a given solution
local out = Tsdb.deleteAll()
response.message = out

export

Start an export job.
A TSDB export event will be triggered once the job finishes in any state.
You can define a 'tsdb' event handler in Lua for this 'exportJob' event in your solution.
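For instance, such a handler might be sketched as follows (a minimal sketch assuming the Murano Lua runtime; the handler name and the state, job_id, filename, and error fields of the event payload are assumptions modeled on the exportJobInfo response):

```lua
-- Hypothetical 'tsdb' event handler for the 'exportJob' event.
-- The event field names used here are assumptions, not confirmed by this reference.
function handle_tsdb_exportJob(event)
  if event.state == "completed" then
    print("Export " .. tostring(event.job_id) .. " completed: " .. tostring(event.filename))
  elseif event.state == "failed" then
    print("Export " .. tostring(event.job_id) .. " failed: " .. tostring(event.error))
  end
end
```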

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
query object Query arguments
query.fill integer, string Value to fill for time slots where no data points exist.
For queries without sampling, this only works in merge mode.
Supported fill types:
- "": fill the slot with empty string "".
- "null": fill the slot with JSON null.
- "previous": fill the slot with previous value.
- "none": fill the slot with "none" string.
- any integer: fill the slot with the specified integer value.
- "~s:CUSTOM_STRING": fill the slot with the specified "CUSTOM_STRING" string.
Default: none
query.mode string Indicate whether to merge or split the result of each metric.
Supported options: merge, split
Default: merge
query.tags object One or many tags.
Maximum number of tag pairs: 20
If the tag value is a string, it applies the “AND” operator with other tag pairs.
The following is an example of the operator (type=sensor and area=US):
{
"tags": {
"type": "sensor",
"area": "US"
}
}
If the tag value is an array, it applies the “OR” operator. All tag values with array type are in the same “OR” group, even if the tag names differ. When the OR operator appears in a query, the response structure will be grouped by the OR-operator tags.
Maximum number of "OR" tag values: 100
The following is an example of the operator (type=switch or type=sensor or area=US or area=TW); it counts 4 "OR" tag values:
{
"tags": {
"type": ["switch", "sensor"],
"area": ["US", "TW"]
}
}
query.epoch string Change returned timestamp of data points to unix epoch format.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
Optional, if not provided, timestamps are returned in RFC3339 UTC with microsecond precision. (Note: the time offset notation can be 'Z' or '+00:00')
query.limit integer Limit the number of data points to return per metric (default is 1000).
NOTE: When querying data points with the OR operator, the limit (default and maximum) depends on the number of OR tags provided; that is, default = original default / number of OR tags.
Maximum: 10000
query.metrics [ string ] One or many metrics
query.end_time integer, string Exclusive UTC ending time of the query range; also accepts an RFC3339 UTC string.
The end_time needs to be bigger than or equal to start_time.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
e.g. 1472547546000000u, 1472547546000ms, 1472547546s, 1472547546, 2016-08-30T08:59:06Z
Optional, if not provided, it will use current timestamp in microseconds from server side
query.order_by string Return results in ascending or descending time order.
Supported options: desc, asc
Default: "desc"
query.aggregate [ string ] One or many aggregation functions to apply.
Supported functions: avg, min, max, count, sum
String-type values can only use the count function.
query.start_time integer, string Inclusive UTC starting time of the query range; also accepts an RFC3339 UTC string.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
e.g. 1472547546000000u, 1472547546000ms, 1472547546s, 1472547546, 2016-08-30T08:59:06Z, 2016-08-30T08:59:06+00:00
Optional, if not provided, it will be 7 days earlier than end_time.
query.relative_end integer, string A negative integer with a time unit, indicating the relative end time before now.
The relative_end time MUST be equal to or bigger than the relative_start time.
Supported units: u (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks)
e.g. -3d (the data between 3 and 7 days ago when using the default relative_start time)
Optional; if not provided, the current server timestamp will be used.
query.sampling_size string The size of time slots used for downsampling. Must be a positive integer greater than zero.
Supported units: u (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks)
Optional, used together with fill arguments.
query.relative_start integer, string A negative integer with time unit to indicate relative start time before now.
Supported units: u (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks)
Default: -7d (last 7 days)
format object The data format rules. Each property name should be a field name: a metric name, "timestamp", or "tags".
format.^[a-zA-Z0-9_]+$ [ object ] Functions to format this field. The rules are applied in their array order.
format.^[a-zA-Z0-9_]+$[].label string Append the given string to the field value. Metrics fields only.
format.^[a-zA-Z0-9_]+$[].round integer Round the field value to the given precision. Metrics fields only.
Maximum: 15
format.^[a-zA-Z0-9_]+$[].rename string Rename the field to the given value. Metrics fields only.
format.^[a-zA-Z0-9_]+$[].discard boolean Remove the field when the value is true. Only supported for the "tags" field.
format.^[a-zA-Z0-9_]+$[].replace object Replace field values matching the pattern with the new value. Metrics fields only.
format.^[a-zA-Z0-9_]+$[].replace.to string The replacement value. Use \{n} to specify a capture group, where {n} is the group number.
format.^[a-zA-Z0-9_]+$[].replace.match string String or regular expression.
format.^[a-zA-Z0-9_]+$[].datetime integer Convert the unix timestamp to a human-readable format (ISO 8601: yyyy-mm-ddThh:mm:ss.[mmm]), using the given value as the UTC offset in hours (n or -n).
For example, the timestamp 1509437405123 will be converted to 2017-10-31T08:10:05.123Z when the value is 0. Timestamp and metrics fields only.
Maximum: 14
Minimum: -12
format.^[a-zA-Z0-9_]+$[].normalize [ string ] Normalize the given list of tag names into separate columns. Tags not specified will be dropped.
Only supported for the "tags" field.
filename string File name of the exported CSV file. The ".csv" extension is not required in the name. (Spaces are not allowed in the filename.)

Responses

Code Body Type Description
200 object Job successfully started
default object Error

Object Parameter of 200 response:

Name Type Description
job_id string Job ID

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

-- Query constraints for export
local metrics = {
  "temperature",
  "humidity",
  "switch",
  "host"
}
local tags = {
  region = "us",
  city = "minneapolis"
}
local query = {
  metrics = metrics,
  tags = tags
}
local format = {
  temperature = {
    {round=3},
    {rename="Temp"}
  },
  timestamp = {
    {datetime=-5}
  },
  tags = {
    {normalize={"city", "region"}}
  },
  host = {
    {replace={
      match="/(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}):(\\d{1,5})/",
      to="Port: \\2 and IP: \\1"
    }}
  }
}
-- Start a new export job
local job_id = Tsdb.export({
  query = query,
  filename = "export_mlps_20170321",
  format = format
})
response.message = job_id

exportJobInfo

Query the information of an export job, including status.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
job_id ^[a-zA-Z0-9]+$ Job id

Responses

Code Body Type Description
200 object Job info successfully returned
default object Error

Object Parameter of 200 response:

Name Type Description
error string Error message if job failed
query object Query arguments
state string State of the job (enqueued, expired, in-progress, completed or failed)
format object The data format rules. Each property name should be a field name: a metric name, "timestamp", or "tags".
format.^[a-zA-Z0-9_]+$ [ object ] Functions to format this field. The rules are applied in their array order.
format.^[a-zA-Z0-9_]+$[].label string Append the given string to the field value. Metrics fields only.
format.^[a-zA-Z0-9_]+$[].round integer Round the field value to the given precision. Metrics fields only.
Maximum: 15
format.^[a-zA-Z0-9_]+$[].rename string Rename the field to the given value. Metrics fields only.
format.^[a-zA-Z0-9_]+$[].discard boolean Remove the field when the value is true. Only supported for the "tags" field.
format.^[a-zA-Z0-9_]+$[].replace object Replace field values matching the pattern with the new value. Metrics fields only.
format.^[a-zA-Z0-9_]+$[].replace.to string The replacement value. Use \{n} to specify a capture group, where {n} is the group number.
format.^[a-zA-Z0-9_]+$[].replace.match string String or regular expression.
format.^[a-zA-Z0-9_]+$[].datetime integer Convert the unix timestamp to a human-readable format (ISO 8601: yyyy-mm-ddThh:mm:ss.[mmm]), using the given value as the UTC offset in hours (n or -n).
For example, the timestamp 1509437405123 will be converted to 2017-10-31T08:10:05.123Z when the value is 0. Timestamp and metrics fields only.
Maximum: 14
Minimum: -12
format.^[a-zA-Z0-9_]+$[].normalize [ string ] Normalize the given list of tag names into separate columns. Tags not specified will be dropped.
Only supported for the "tags" field.
job_id string Job ID
length string The total length of export file in bytes
filename string File name of the exported CSV file
content_id string Content ID of the job to Content service
context_id string Solution id
start_time string Start time of the job
update_time string Last updated time of the job

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

local job_info = Tsdb.exportJobInfo({
  job_id = "xxyyzz"
})
response.message = job_info

exportJobList

List export job records of a given solution in descending timestamp order.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
limit integer Limit the number of results to return (default: 100, maximum allowed: 1000)

Responses

Code Body Type Description
200 [ object ] List of export job
default object Error

Object Parameter of 200 response:

Name Type Description
state string State of the job (enqueued, expired, in-progress, completed or failed)
job_id string Job ID
start_time string Start time of the job

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

local job_list = Tsdb.exportJobList()
response.message = job_list

getConfig

Get retention configuration of solution.

Responses

Code Body Type Description
200 object The retention configuration successfully returned
default object Error

Object Parameter of 200 response:

Name Type Description
default_retention ^(\d+(d|w|m|y)|infinity)$ Default retention time with unit. Default value is 3m.
Supported units: d (days), w (weeks), m (months), y (years)
e.g. 3d, 5w or infinity
Default: "3m"
metrics_retention object The retention time for metrics; each property name should be a metric name.
metrics_retention.^[a-zA-Z0-9_]+$ ^(\d+(d|w|m|y)|infinity|default)$ Retention time for the metric, with unit.
Use default to follow the default retention value and infinity to keep the metric forever.
Supported units: d (days), w (weeks), m (months), y (years)
e.g. 3d, 5w, default or infinity

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

local config = Tsdb.getConfig()
response.message = config

import

Start an import job with a header row defined in the CSV file.
The header line contains multiple column definitions.
A column is defined by column_name|column_type|data_type.
More details are given below.
Here are the column types used to annotate a column:
* t means tag
* m means metric
* ts means timestamp
* mn means metric name in a pair
* mv means metric value in a pair
Here are the data types supported by the different columns:
* str: tag, metric, metric name, metric value
* int: metric, metric value
* sec: timestamp in seconds as integer
* ms: timestamp in milliseconds as integer
* us: timestamp in microseconds as integer
* float: metric, metric value
Combining the two, you can start to write your column definitions in the
first line of the CSV. The formula is:
header line := column_def_1,column_def_2,column_def_3,....,column_def_n
column_def := column_name|column_type|data_type
The default data types for some kinds of columns are listed below. For those columns,
the data type can be omitted in the column definition.
* timestamp: sec
* tag: str
* metric name: str
Finally, here is a living example of how to use this kind of annotation to represent
a data set in CSV format.
Given the CSV file:
timestamp|ts,weather|m|str,temperature|m|float,city|t,max_or_min_pair|mn,max_or_min_pair|mv|float
12345,cold,15.4,Taipei,lowest,12.4
12344,warm,23.7,Tainan,highest,25.3
That will be transformed into two calls to Tsdb.write:
Tsdb.write({
  metrics = {
    weather = "cold",
    temperature = 15.4,
    lowest = 12.4
  },
  tags = {
    city = "Taipei"
  },
  ts = "12345000000"
})
Tsdb.write({
  metrics = {
    weather = "warm",
    temperature = 23.7,
    highest = 25.3
  },
  tags = {
    city = "Tainan"
  },
  ts = "12344000000"
})

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
url string The url for a CSV file

Responses

Code Body Type Description
200 nil Job successfully started
default object Error

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

local job_id = Tsdb.import({
  url = "http://example.com/a_sample_file"
})
response.message = job_id

importJobInfo

Query the status of an import job.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
job_id ^[a-zA-Z0-9]+$ Job id

Responses

Code Body Type Description
200 nil Job info successfully returned
default object Error

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

local job_info = Tsdb.importJobInfo({
  job_id = "2345"
})
response.message = job_info

importJobList

List all job IDs started by a given solution ID.

Responses

Code Body Type Description
200 nil Job successfully listed
default object Error

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

local job_list = Tsdb.importJobList()
response.message = job_list

listMetrics

List metrics of a given solution.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
limit integer Limit the number of rows to return.
Maximum: 1000
Default: 1000
next string Optional cursor to get the next page if more data remains

Responses

Code Body Type Description
200 object Metrics information retrieved
default object Error

Object Parameter of 200 response:

Name Type Description
next string Cursor for getting next page
total integer Total number of items
metrics [ string ] List of metrics

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

-- Get a list of created metrics for a given solution
local out = Tsdb.listMetrics({limit = 10})
response.message = out

-- Use next cursor to fetch next page if found in the result of previous query
local out = Tsdb.listMetrics({limit = 10, next = out.next})
response.message = out

listTags

List tags of a given solution.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
limit integer Limit the number of rows to return.
Maximum: 1000
Default: 1000
next string Optional cursor to get the next page if more data remains

Responses

Code Body Type Description
200 object Tags information retrieved
default object Error

Object Parameter of 200 response:

Name Type Description
next string Cursor for getting next page
tags object Map of tags
total integer Total number of items

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

-- Get a list of created tags for a given solution
local out = Tsdb.listTags({limit = 10})
response.message = out

-- Use next cursor to fetch next page if found in the result of previous query
local out = Tsdb.listTags({limit = 10, next = out.next})
response.message = out

multiWrite

Write data points to one or many metrics with an optional set of tags and a timestamp down to microsecond precision.

Note that if multiple data points are written with exactly the same timestamp, only the last one is kept and it overwrites the others.

Each metric value has a size limit that depends on the number of tags: (number of tags + 1) multiplied by the size of the metric value cannot exceed 480KB. A write request exceeding the limit is rejected without partial writes. Likewise, if any data point is invalid, the whole request is rejected without partial writes.

To prevent a synchronous service call from taking too long to respond, there are some limitations. The total number of data entries in a multiple write is at most 2,000 (refer to the limit of a single write). Number of data entries per datapoint: "Number of metrics * (Number of tag pairs) + 1"

On success, it returns a list of write timestamp strings in microseconds.

To improve write performance, the recent functionality can also be disabled by setting the recent_function_disable option to true.
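For example, such a write could be sketched as follows (a minimal sketch assuming the Murano Lua runtime; the metric, tag, and timestamp values are illustrative):

```lua
-- Write one datapoint while skipping the 'recent' functionality to optimize throughput.
local out = Tsdb.multiWrite({
  datapoints = {
    {
      metrics = { temperature = 21.5 },
      tags = { device = "sensor01" },
      ts = "1472547546s"  -- optional; server receive time is used if omitted
    }
  },
  recent_function_disable = true,
  return_ts = true  -- ask for the write timestamps back in the response
})
response.message = out
```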

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
recent_function_disable boolean Whether to disable the recent functionality.
return_ts boolean Whether to return write timestamp in the response
datapoints [ object ] List of data points
datapoints[].ts integer, string Unix timestamp in microseconds used as the write time for given data point.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
e.g. 1472547546000000u, 1472547546000ms, 1472547546s, 1472547546
Optional, if not provided, it will use the received time in microseconds from server side
datapoints[].tags object Pairs of tag and its tag value (only text supported).
Maximum number of tags in a single write: 20
datapoints[].metrics object Pairs of metric name and its value.
Maximum number of metrics in a single write: 100

Responses

Code Body Type Description
200 [ object ] Data successfully inserted
204 nil Data successfully inserted
default object Error

Object Parameter of 200 response:

Name Type Description
write_timestamp string The timestamp of data point written to TSDB (in microseconds)

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

-- Write multiple datapoints of metrics with tags
-- If timestamp is not provided, it will use the received time in microseconds from server side
local metrics1 = {
  temperature = 37.2,
  humidity = 73,
  switch = "on"
}
local metrics2 = {
  temperature = 31.2,
  humidity = 55,
  switch = "off"
}
local tags1 = {
  pid = "pzomp8vn4twklnmi",
  identity = "000001",
  region = "us",
  city = "minneapolis"
}
local tags2 = {
  pid = "lvwpoj19hp7k0000",
  identity = "000002",
  region = "tw",
  city = "taipei"
}
local out = Tsdb.multiWrite({
  datapoints = {
    {
      metrics = metrics1,
      tags = tags1
    },
    {
      metrics = metrics2,
      tags = tags2
    }
  },
  return_ts = true
})
response.message = out

query

Query data points by using any metrics and tags. Supports absolute (start_time, end_time) or relative (relative_start, relative_end) time parameters. The end time MUST be bigger than or equal to the start time (end_time >= start_time or relative_end >= relative_start).
The first element in a returned data point array is always the timestamp.
You can use the fill argument to control the imputation of missing values.
The metric names of the columns property in the response will always be in the order specified in the query, except for the timestamp column, which is always first.

If no time constraints are specified, it returns recent data points from the last week, up to the maximum limit.

Note that only unique timestamped data will be returned: if multiple data points were written with exactly the same timestamp, only the last one is kept in the response. The logical OR operator is only supported in basic queries and cannot be used in down-sampling or aggregation queries.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
fill integer, string Value to fill for time slots where no data points exist.
For queries without sampling, this only works in merge mode.
Supported fill types:
- "": fill the slot with empty string "".
- "null": fill the slot with JSON null.
- "previous": fill the slot with previous value.
- "none": fill the slot with "none" string.
- any integer: fill the slot with the specified integer value.
- "~s:CUSTOM_STRING": fill the slot with the specified "CUSTOM_STRING" string.
Default: none
mode string Indicate whether to merge or split the result of each metric.
Supported options: merge, split
Default: merge
tags object One or many tags.
Maximum number of tag pairs: 20
If the tag value is a string, it applies the “AND” operator with other tag pairs.
The following is an example of the operator (type=sensor and area=US):
{
"tags": {
"type": "sensor",
"area": "US"
}
}
If the tag value is an array, it applies the “OR” operator. All tag values with array type are in the same “OR” group, even if the tag names differ. When the OR operator appears in a query, the response structure will be grouped by the OR-operator tags.
Maximum number of "OR" tag values: 100
The following is an example of the operator (type=switch or type=sensor or area=US or area=TW); it counts 4 "OR" tag values:
{
"tags": {
"type": ["switch", "sensor"],
"area": ["US", "TW"]
}
}
epoch string Change returned timestamp of data points to unix epoch format.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
Optional, if not provided, timestamps are returned in RFC3339 UTC with microsecond precision. (Note: the time offset notation can be 'Z' or '+00:00')
limit integer Limit the number of data points to return per metric (default is 1000).
NOTE: When querying data points with the OR operator, the limit (default and maximum) depends on the number of OR tags provided; that is, default = original default / number of OR tags.
Maximum: 10000
metrics [ string ] One or many metrics
end_time integer, string Exclusive UTC ending time of the query range; also accepts an RFC3339 UTC string.
The end_time needs to be bigger than or equal to start_time.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
e.g. 1472547546000000u, 1472547546000ms, 1472547546s, 1472547546, 2016-08-30T08:59:06Z
Optional, if not provided, it will use current timestamp in microseconds from server side
order_by string Return results in ascending or descending time order.
Supported options: desc, asc
Default: "desc"
aggregate [ string ] One or many aggregation functions to apply.
Supported functions: avg, min, max, count, sum
String-type values can only use the count function.
start_time integer, string Inclusive UTC starting time of the query range; also accepts an RFC3339 UTC string.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
e.g. 1472547546000000u, 1472547546000ms, 1472547546s, 1472547546, 2016-08-30T08:59:06Z, 2016-08-30T08:59:06+00:00
Optional, if not provided, it will be 7 days earlier than end_time.
relative_end integer, string A negative integer with a time unit, indicating the relative end time before now.
The relative_end time MUST be equal to or bigger than the relative_start time.
Supported units: u (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks)
e.g. -3d (the data between 3 and 7 days ago when using the default relative_start time)
Optional; if not provided, the current server timestamp will be used.
sampling_size string The size of time slots used for downsampling. Must be a positive integer greater than zero.
Supported units: u (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks)
Optional, used together with fill arguments.
relative_start integer, string A negative integer with time unit to indicate relative start time before now.
Supported units: u (microseconds), ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks)
Default: -7d (last 7 days)

Responses

Code Body Type Description
default object Error
merge_mode object Query results retrieved in merge mode.
split_mode object Query results retrieved in split mode.
or_merge_mode object Query results retrieved in merge mode with OR operation.
or_split_mode object Query results retrieved in split mode with OR operation.
aggregation_mode object Query results retrieved in aggregation mode.

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Object Parameter of merge_mode response:

Name Type Description
tags object The “AND” operator tags specified in query arguments.
values [ [ number, string, object ] ] The data points list.
columns [ string ] The column names for mapping the column in values property.
metrics [ string ] Metrics which are specified in query arguments.

Object Parameter of split_mode response:

Name Type Description
tags object The “AND” operator tags specified in query arguments.
values object The data points list for each metric, the property name is the metric name specified in query arguments.
values.^[a-zA-Z0-9_]+$ [ [ number, string, object ] ] The data points list.
columns [ string ] It is always an empty array in this mode.
metrics [ string ] Metrics which are specified in query arguments.

Object Parameter of or_merge_mode response:

Name Type Description
tags object The “AND” operator tags specified in query arguments.
values object The data points list of specific tag name, the property name is tag name.
values.^[a-zA-Z0-9_]+$ object The data points list of specific tag name and tag value, the property name is tag value.
values.^[a-zA-Z0-9_]+$.^[a-zA-Z0-9_]+$ [ [ number, string, object ] ] The data points list.
columns [ string ] The column names for mapping the column in values property.
metrics [ string ] Metrics which are specified in query arguments.

Object Parameter of or_split_mode response:

Name Type Description
tags object The “AND” operator tags specified in query arguments.
values object The data points list of specific tag name, the property name is tag name.
values.^[a-zA-Z0-9_]+$ object The data points list of specific tag name and tag value, the property name is tag value.
values.^[a-zA-Z0-9_]+$.^[a-zA-Z0-9_]+$ object The data points list for each metric; the property name is the metric name specified in query arguments.
values.^[a-zA-Z0-9_]+$.^[a-zA-Z0-9_]+$.^[a-zA-Z0-9_]+$ [ [ number, string, object ] ] The data points list.
columns [ string ] It is always an empty array in this mode.
metrics [ string ] Metrics which are specified in query arguments.

Object Parameter of aggregation_mode response:

Name Type Description
tags object The “AND” operator tags specified in query arguments.
values object The data points list for each metric, the property name is the metric name specified in query arguments.
values.^[a-zA-Z0-9_]+$ object The aggregation of data points.
values.^[a-zA-Z0-9_]+$.avg number The average of metric values.
values.^[a-zA-Z0-9_]+$.max number The maximum of metric values.
values.^[a-zA-Z0-9_]+$.min number The minimum of metric values.
values.^[a-zA-Z0-9_]+$.sum number The sum of metric values.
values.^[a-zA-Z0-9_]+$.count number The count of metric values.
columns [ string ] It is always an empty array in this mode.
metrics [ string ] Metrics which are specified in query arguments.

Example

-- Example 1: Query by Absolute Time Constraint --
-- Get temperature and humidity data points between 2016-08-01 (inclusive) and 2016-09-01 (exclusive) from devices in Minneapolis city and US region
local metrics = {"temperature", "switch"}
local tags = {region = "us", city = "minneapolis"}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  start_time = "2016-08-01T00:00:00Z",
  end_time = "2016-09-01T00:00:00Z",
  fill = "null",
  limit = 50
})
response.message = out
-- Example 2: Query by Relative Time Constraint --
-- Get temperature data points in the most recent 3 hours from devices in
-- Taipei city and Asia region (with timestamps in milliseconds format)
local metrics = {"temperature"}
local tags = {region = "asia", city = "taipei"}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  relative_start = "-3h",
  epoch = "ms",
  fill = "null",
  limit = 50
})
response.message = out
-- Example 3 --
-- Get temperature data points between 3 and 10 days ago from devices in
-- Taipei city and Asia region (with timestamps in milliseconds format)
local metrics = {"temperature"}
local tags = {region = "asia", city = "taipei"}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  relative_start = "-10d",
  relative_end = "-3d",
  epoch = "ms",
  fill = "null",
  limit = 50
})
response.message = out
-- Example 4: Query without Time Constraint --
-- Get the most recent 5 temperature and humidity data points from devices
-- in Minneapolis city and US region
local metrics = {"temperature", "humidity"}
local tags = {region = "us", city = "minneapolis"}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  limit = 5
})
response.message = out
-- Example 5: Query by Downsampling --
-- Get humidity data points in the most recent two days from devices in
-- Taipei city, downsampled into 4-hour time slots
local metrics = {"humidity"}
local tags = {city = "taipei"}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  relative_start = "-2d",
  sampling_size = "4h",
  fill = "none",
  epoch = "ms"
})
response.message = out

-- Example 6: Aggregation by Downsampling --
-- Get average and count of tire pressure data between 2016-08-01 (inclusive)
-- and 2016-09-01 (exclusive) from devices in Minneapolis city, downsampled
-- into 30-minute time slots
local metrics = {"tire_pressure"}
local tags = {city = "minneapolis"}
local aggregate = {"avg", "count"}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  start_time = "2016-08-01T00:00:00Z",
  end_time = "2016-09-01T00:00:00Z",
  aggregate = aggregate,
  sampling_size = "30m",
  fill = "none"
})
response.message = out
-- Example 7: Query by Fill in Custom String --
-- Fill the empty time slots with the string "Empty"
local metrics = {"temperature", "switch"}
local tags = {region = "us", city = "minneapolis"}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  fill = "~s:Empty",
})
response.message = out
-- Example 8: Query by OR tags operator --
-- Get temperature data points which belong to (sn = dev1 or sn = dev2) and dev = pump
local metrics = {"temperature"}
local tags = {dev = "pump", sn = {"dev1", "dev2"}}
local out = Tsdb.query({
  metrics = metrics,
  tags = tags,
  start_time = "2016-08-01T00:00:00Z",
  end_time = "2016-09-01T00:00:00Z",
  limit = 50
})
response.message = out

recent

Get the most recent data point of a particular set of metrics and tag values.

If you want to use an advanced metric query, specify an inner Lua table as an element
of the metrics table, e.g. metrics = {"m1","m2",{"m3","m4"}},
where {"m3","m4"} is an advanced metric query that returns the most recent data
point of m3 together with the value of m4 that was written alongside m3.

To improve write request performance, this functionality can be disabled by setting the recent_function_disable option to true. If disabled, calling 'recent' will return an error.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
recent_function_disable boolean Whether to disable the 'recent' functionality.
metrics [ object ] One or many metrics
tag_name string Tag name
tag_values [ integer, string ] One or many tag values

Responses

Code Body Type Description
200 nil Operation successfully returned
default object Error

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

-- Get the latest data of metrics vibration and humidity from devices with tag sn=123 or sn=456
local out = Tsdb.recent({
  metrics = {"vibration","humidity"},
  tag_name = "sn",
  tag_values = {"123", "456"}
})
response.message = out

-- Get the latest data of metrics warning and critical from devices with tag sn=123 or sn=456
-- Together with the corresponding text for warning and critical metrics
local out = Tsdb.recent({
  metrics = {
    {"warning", "text"}; 
    {"critical", "text"};
  },
  tag_name = "sn",
  tag_values = {"123", "456"}
})
response.message = out

write

Write a data point to one or many metrics, with an optional set of tags and a timestamp down to microsecond precision.

Note that if multiple data points are written with exactly the same timestamp, only the last one will be kept and it overwrites the others.

Each metric value has a size limit that depends on the number of tags: (number of tags + 1) multiplied by the size of the metric value must not exceed 480KB. A write request that exceeds the limit is rejected in full, with no partial writes.
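The limit can be checked client-side before issuing a write. A minimal sketch in plain Lua (`fits_write_limit` is a hypothetical helper; the 480KB figure comes from the paragraph above):

```lua
-- (number of tags + 1) * size of the metric value must not exceed 480KB
local MAX_BYTES = 480 * 1024

local function fits_write_limit(value, num_tags)
  local size = #tostring(value)  -- bytes in the serialized metric value
  return (num_tags + 1) * size <= MAX_BYTES
end

print(fits_write_limit("on", 4))                     -- small value: true
print(fits_write_limit(string.rep("x", 200000), 4))  -- 5 * 200000 bytes: false
```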

On success, it returns a JSON object containing the write timestamp in microseconds.

Important note on tag usage: tags apply a multiplication factor to the metric (e.g. a metric with 4 tags results in 4x the data written) and impact both performance and storage usage.

To improve write performance, the recent functionality can also be disabled by setting the recent_function_disable option to true.

Arguments

parameters (object) - Object containing service call parameters.

Name Type Description
recent_function_disable boolean Whether to disable the 'recent' functionality.
ts integer, string Unix timestamp in microseconds used as the write time for given data point.
Supported units: u (microseconds), ms (milliseconds), s (seconds)
e.g. 1472547546000000u, 1472547546000ms, 1472547546s, 1472547546
If the unit is not provided, it is inferred from the number of digits, to prevent the user from unintentionally using unusual UTC timestamps.
A seconds-precision Unix timestamp has 10 digits for any date between 2001/09/09 (9 digits) and 2286/11/20 (11 digits), so "real data" being written is assumed to fall in the 10-digit range.
Similarly, for milliseconds (seconds times 10^3) and microseconds (seconds times 10^6), the expected numbers of digits are 13 and 16 respectively.
For example, 1472547546 is interpreted as seconds, while 14725475460 is invalid.
Valid time range: 1,000,000,000,000,000(us) to 9,999,999,999,999,999(us) unix timestamp.
Optional; if not provided, the server-side receive time in microseconds is used.
tags object Pairs of tag and its tag value (only text supported).
Maximum size of tag name and tag value: 1KB.
Maximum number of tags in a single write: 20
metrics object Pairs of metric name and its value.
Maximum size of metric name: 1KB.
Maximum number of metrics in a single write: 100
return_ts boolean Whether to return write timestamp in the response

Responses

Code Body Type Description
200 object Data successfully inserted
204 nil Data successfully inserted
default object Error

Object Parameter of 200 response:

Name Type Description
write_timestamp string The timestamp of data point written to TSDB (in microseconds)

Object Parameter of default response:

Name Type Description
error string Error Message in case of failure
result object Result message

Example

-- Write data point of metrics with tags
-- If timestamp is not provided, it will use the received time in microseconds from server side
local metrics = {
  temperature = 37.2,
  humidity = 73,
  switch = "on",
  host = "8.168.1.24:443"
}
local tags = {
  pid = "pzomp8vn4twklnmi",
  identity = "000001",
  region = "us",
  city = "minneapolis"
}
local out = Tsdb.write({
  metrics = metrics,
  tags = tags
})
response.message = out

-- Write data points of metrics with tags and timestamp
local metrics = {
  temperature = 37.2,
  humidity = 73
}
local tags = {
  identity = "000002"
}
local out = Tsdb.write({
  metrics = metrics,
  tags = tags,
  ts = "1476243965s"
})
response.message = out

Events

exportJob

An event message containing the export task result.

Arguments

job (object) - The information for export job

Name Type Description
error string Error message if job failed
query object Query arguments
state string State of the job (enqueued, expired, in-progress, completed or failed)
format object The data format rules. The property name should be the field name: a metric name, "timestamp", or "tags".
format.^[a-zA-Z0-9_]+$ [ object ] Functions to format this field. The rules are applied in array order.
format.^[a-zA-Z0-9_]+$[].label string Append the given string to the field value. Metric fields only.
format.^[a-zA-Z0-9_]+$[].round integer Round the field value to the given precision. Metric fields only.
Maximum: 15
format.^[a-zA-Z0-9_]+$[].rename string Rename the field to the given value. Metric fields only.
format.^[a-zA-Z0-9_]+$[].discard boolean Remove the field when the value is true. Only supported on the "tags" field.
format.^[a-zA-Z0-9_]+$[].replace object Replace field values matching the pattern with the new value. Metric fields only.
format.^[a-zA-Z0-9_]+$[].replace.to string The replacement value. Use \{n} to reference capture group n.
format.^[a-zA-Z0-9_]+$[].replace.match string String or regular expression.
format.^[a-zA-Z0-9_]+$[].datetime integer Convert the Unix timestamp to a human-readable format (ISO 8601: yyyy-mm-ddThh:mm:ss.[mmm]), using the given value as a UTC offset in hours (n or -n).
For example, the timestamp 1509437405123 is converted to 2017-10-31T08:10:05.123Z when the value is 0. Timestamp and metric fields only.
Maximum: 14
Minimum: -12
format.^[a-zA-Z0-9_]+$[].normalize [ string ] Normalize the given list of tag names; each tag is filled into its own column. Tags that are not specified are dropped.
Only supported on the "tags" field.
job_id string Job ID
length string The total length of export file in bytes
filename string File name of the exported CSV file
content_id string Content ID of the job to Content service
context_id string Solution ID
start_time string Start time of the job
update_time string Last updated time of the job

Example

function handle_tsdb_exportJob (job)

 -- Your logic comes here 

end
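The stub above can dispatch on the job fields listed in the table; a sketch that uses only fields documented there:

```lua
function handle_tsdb_exportJob (job)
  if job.state == "completed" then
    -- job.content_id references the exported CSV in the Content service
    print("export " .. job.job_id .. " done: " .. job.filename ..
          " (" .. job.length .. " bytes)")
  elseif job.state == "failed" then
    print("export " .. job.job_id .. " failed: " .. tostring(job.error))
  else
    -- enqueued, in-progress or expired
    print("export " .. job.job_id .. " is " .. job.state)
  end
end
```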

Service Health Check

Operations

/health

Enables the hosting system to check whether the service is active and running.

Arguments

No Content

Responses

Name Type Description
200 string OK

Errors

No content