AIML Service
Integrates ONNX Runtime into West Connectivity for deep learning models. Visit ONNX support to learn more about ONNX-supported AI model conversions.
Operations
- Aiml.getAsyncImageInference() - Returns an inference result based on an image, delivered as an aiml event.
- Aiml.getImageInference() - Returns an inference result based on an image.
- Aiml.inference() - Returns an inference result based on an input dataset.
- Aiml.modelSignature() - Returns a model signature based on the provided model name.
Events
- event - An event triggered by sending specific requests to the aiml service. Configurable in the Config UI.
Operations
Please note that bold arguments are required, while italic arguments are optional.
getAsyncImageInference
The input includes the names of an image and an ONNX AI model stored in the solution's content service. The service also provides basic pre-processing steps applied to the image before prediction.
Arguments
parameters (object) - Object containing service call parameters.
Name | Type | Description |
---|---|---|
model_name | string | The name of a model. |
image_name | string | The name of an image. |
pre_process | object | Pre-processing steps to the image before the model generates inference result |
pre_process.order | string | Comma-separated string that tells the aiml service to execute the pre-processing steps in the given order. Values can include allow_grayscale_processing, crop_range, allow_center_crop, allow_normalization, allow_resize |
pre_process.crop_range | string | Four comma-separated numbers (left, top, right, bottom) that tell aiml how to crop the image. If all numbers are between 0 and 1, they are interpreted as fractions of the image dimensions. |
pre_process.allow_resize | boolean | Allow aiml service to resize image size to match model signature. |
pre_process.allow_center_crop | boolean | Allow aiml service to apply default square cropping pre-processing steps to the image. |
pre_process.allow_normalization | boolean | Allow aiml service to apply image normalization to the image. |
pre_process.allow_grayscale_processing | boolean | Allow aiml service to first convert the image into grayscale and then convert the image into BGR format. |
post_process | object | Post-processing steps to the inference result |
post_process.hide_outputs | string | Comma-separated values used to filter the model inference output. Useful if the output shows an Exceeded allotted memory! error or takes too long to process. Do not include this step in pre_process.order |
Responses
Code | Body Type | Description |
---|---|---|
200 | string | The acknowledgement receipt of the request. Fixed value "OK". |
429 | object | Too Many Requests. Retry in 1 minute or check rate-limit settings. |
500 | object | Internal Server Error |
Object Parameter of 429 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Object Parameter of 500 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Example
local post_process = {}
-- post_process = { hide_outputs = "Output1,Output6,Output7,Output8,Output4" }
local pre_process = {
  crop_range = "0,0,1,1",
  allow_normalization = true,
  -- order = "crop_range,allow_normalization"
}
return Aiml.getAsyncImageInference({
  model_name = "mobilenet_v2_roi_topbottom_v3_1.onnx",
  image_name = "222203025_C51_R_Sidewall_11_ToolSet_Image1__000024",
  pre_process = pre_process,
  post_process = post_process
})
getImageInference
The input includes the names of an image and an ONNX AI model stored in the solution's content service. The service also provides basic pre-processing steps applied to the image before prediction. Increase the cache size if you use many models and experience long inference wait times.
Arguments
parameters (object) - Object containing service call parameters.
Name | Type | Description |
---|---|---|
model_name | string | The name of a model. |
image_name | string | The name of an image. |
pre_process | object | Pre-processing steps to the image before the model generates inference result |
pre_process.order | string | Comma-separated string that tells the aiml service to execute the pre-processing steps in the given order. Values can include allow_grayscale_processing, crop_range, allow_center_crop, allow_normalization, allow_resize |
pre_process.crop_range | string | Four comma-separated numbers (left, top, right, bottom) that tell aiml how to crop the image. If all numbers are between 0 and 1, they are interpreted as fractions of the image dimensions. |
pre_process.allow_resize | boolean | Allow aiml service to resize image size to match model signature. |
pre_process.allow_center_crop | boolean | Allow aiml service to apply default square cropping pre-processing steps to the image. |
pre_process.allow_normalization | boolean | Allow aiml service to apply image normalization to the image. |
pre_process.allow_grayscale_processing | boolean | Allow aiml service to first convert the image into grayscale and then convert the image into BGR format. |
post_process | object | Post-processing steps to the inference result |
post_process.hide_outputs | string | Comma-separated values used to filter the model inference output. Useful if the output shows an Exceeded allotted memory! error or takes too long to process. Do not include this step in pre_process.order |
Responses
Code | Body Type | Description |
---|---|---|
200 | object | The inference result |
400 | object | Bad Request |
422 | object | Unprocessable Content |
429 | object | Too Many Requests. Retry in 1 minute or check rate-limit settings. |
500 | object | Internal Server Error |
Object Parameter of 200 response:
Name | Type | Description |
---|---|---|
OutputN | object | Inference result output. Starting from Output1 to OutputN depending on the model |
Object Parameter of 400 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Object Parameter of 422 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Object Parameter of 429 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Object Parameter of 500 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Example
local post_process = {}
-- post_process = { hide_outputs = "Output2" }
local pre_process = {
  crop_range = "0,0,1,1",
  allow_normalization = true,
  -- order = "crop_range,allow_normalization"
}
return Aiml.getImageInference({
  model_name = "mobilenet_v2_roi_topbottom_v3_1.onnx",
  image_name = "222203025_C51_R_Sidewall_11_ToolSet_Image1__000024",
  pre_process = pre_process,
  post_process = post_process
})
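Because getImageInference returns the result synchronously, the result can also be captured and inspected instead of being returned directly. A minimal sketch (the model and image names reuse the illustrative values above; which OutputN keys exist depends on the model):

```lua
-- Run a synchronous inference and pick out a single output.
local result = Aiml.getImageInference({
  model_name = "mobilenet_v2_roi_topbottom_v3_1.onnx",
  image_name = "222203025_C51_R_Sidewall_11_ToolSet_Image1__000024"
})
if result.error ~= nil then
  -- Error responses carry type/error/status fields (see the tables above).
  return result.error
end
-- On success the body contains Output1 .. OutputN, depending on the model.
return result.Output1
```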
inference
Return an inference result based on input dataset.
Arguments
parameters (object) - Object containing service call parameters.
Name | Type | Description |
---|---|---|
dataset | [ [ number ] ] | The numerical dataset, should be a 2D array. |
model_name | string | The name of a model. |
Responses
Code | Body Type | Description |
---|---|---|
200 | object | The inference result |
400 | object | Bad Request |
500 | object | Internal Server Error |
Object Parameter of 200 response:
Name | Type | Description |
---|---|---|
OutputN | object | Inference result output. Starting from Output1 to OutputN depending on the model |
Object Parameter of 400 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Object Parameter of 500 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Example
Aiml.inference({
dataset = {
{7, 3.2, 4.7, 1.4}
},
model_name = "logreg_iris.onnx"
})
Aiml.inference({
dataset = {
{7, 3.2, 4.7, 1.4},
{6.3, 2.3, 4.4, 1.3},
{5.1, 3.7, 1.5, 0.4}
},
model_name = "logreg_iris.onnx"
})
modelSignature
Return a model signature based on the provided model name.
Arguments
parameters (object) - Object containing service call parameters.
Name | Type | Description |
---|---|---|
model_name | string | The name of a model. |
Responses
Code | Body Type | Description |
---|---|---|
200 | object | The model signature |
400 | object | Bad Request |
500 | object | Internal Server Error |
Object Parameter of 200 response:
Name | Type | Description |
---|---|---|
inputs | object | The input fields and format |
outputs | object | The output fields and format |
filename | string | The model file name |
createtime | string | The creation timestamp of the model signature, in ISO 8601 format |
sess_options | object | The ONNX Runtime session options |
sess_options.execution_mode | string | The ONNX Runtime parameter that controls whether graph operators execute sequentially or in parallel |
sess_options.intra_op_num_threads | string | The ONNX Runtime parameter that controls the number of intra-op threads used to run the model |
sess_options.graph_optimization_level | string | The ONNX Runtime parameter that determines the graph optimization level |
Object Parameter of 400 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Object Parameter of 500 response:
Name | Type | Description |
---|---|---|
type | string | Error type |
error | string | Error message |
status | integer | Response code |
Example
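A minimal sketch of a signature lookup (the model name reuses the illustrative value from the inference examples above):

```lua
-- Fetch the signature of a model stored in the solution's content service.
local signature = Aiml.modelSignature({
  model_name = "logreg_iris.onnx"
})
-- signature.inputs and signature.outputs describe the expected tensor
-- names and shapes; signature.sess_options holds the runtime settings.
return signature
```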
Events
event
An event triggered by asynchronous requests such as getAsyncImageInference. Configurable in the Config UI.
Arguments
event (object) - A prediction result for asynchronous inference.
Name | Type | Description |
---|---|---|
output | object | Prediction result. Starting from Output1 to OutputN depending on the model |
source | object | Source model, image, and camera_ip address. |
end_time | integer | Request end timestamp in milliseconds |
start_time | integer | Request start timestamp in milliseconds, 0 when undefined. |
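As a sketch, an event handler body for this trigger might read the payload as follows (the handler wiring and the exact field names inside source depend on the solution's configuration):

```lua
-- Hypothetical handler body for the aiml "event" trigger.
-- `event` is the asynchronous prediction result described above.
local duration_ms = event.end_time - event.start_time
print("inference finished in " .. tostring(duration_ms) .. " ms")
-- event.source identifies the model, image, and camera_ip that produced it.
-- event.output holds Output1 .. OutputN, depending on the model.
return event.output
```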