Integrations
The menu item Integrations groups all functions that receive data from other systems (e.g. a LoRaWAN network server) or send data to them (e.g. via webhook).
Integrationflows
Integrationflows allow you to transfer data packets from niotix to a third-party system. The data packets can be filtered and transformed before transmission, so that only the relevant data packets are transferred, in the data format required by the third-party system.
In an integration flow, the following data can be used:
- Each received data packet of an incoming connector.
- Each change to a datastate of a virtual device.
- Each change to a datastate of a digital twin.
For an integrationflow, all generated data of the respective account is considered. To further process only the required datastates, filters (4) can be defined, for example to restrict processing to the data generated by a certain virtual device or connector (see section “Filter”). In addition, the data can be transformed before it is forwarded via a connector (5); here, data can be converted into the desired structure or data format (see section “Transformations”). At the end of an integration flow, one or more outgoing connectors (6) can be selected for forwarding the data.
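Conceptually, the steps above can be sketched as follows. This is an illustrative model, not niotix internals; all names are made up for the example:

```javascript
// Conceptual sketch of an integration flow: a packet runs through the
// filters, then through the transformations (in their configured order),
// and is finally handed to the outgoing connector.
function runFlow(packet, filters, transforms, send) {
  // a packet that fails any filter is dropped
  if (!filters.every((f) => f(packet))) return;
  // transformations are applied one after another
  const out = transforms.reduce((p, t) => t(p), packet);
  // the last step of an integration flow must be a connector
  send(out);
}
```

For example, with a filter `p => p.meta.twin_category === "virtualDevice"`, only packets of virtual devices would reach the connector.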
Beta: this feature is currently in a beta phase, so minor bugs may still occur. The core functionality is already used in productive use cases. If you plan to implement a critical use case, we recommend informing your contact person at DIGIMONDO GmbH.
Create an integrationflow
In the Integrations > Integrationflows navigation item, use the “Create” button to create a new integration flow. The following dialog opens:
- Name: Assign a unique name here
- Trigger: Select which kind of data stream the integration flow should consider. The following options are available:
- BEFORE STATE-HANDLING: considers the unchanged data stream of the connectors in the account. Select this option if the unchanged raw data is to be forwarded. Note that in this case no meta information, such as dtwin_id, is available.
- AFTER STATE-HANDLING: all datastate changes of a virtual device or digital twin are used as triggers of the integration flow. Meta information, such as dtwin_id, is also available here.
- Integrationflow steps: Filters, transformations, and connectors can be added to an integration flow using this dropdown.
- Order: Defines the order of the steps of an integration flow. The last step of an integration flow must be a connector.
Overview of connectors available for integrationflows:
The following outbound connectors can be used for integration flows:
- MQTT Broker
- Websocket
- Webhook (outgoing)
Important: the expected output format of the outgoing connector needs to be considered. The niotix backend delivers data by default in the type “generic”. Before it is handed over to the connector, it must be converted by a transformation in the preceding step. For example, if you want to forward the data to an HTTP webservice that expects JSON, you have to create a transformation with input type “generic” and output type “application/json”.
Filter
In an integration flow, all data packets of an account are processed. Filters can be used in integration flows to transform or forward only data packets that match the filter criteria. For example, they can be used to forward data from a specific twin or connector.
Create a filter
In the Integrations > Filters navigation item, use the “Create” button to create a new filter. The following dialog opens:
- Name: Enter a unique name here.
- Extended mode: In extended mode, nested filters can be created using JSONata or JavaScript. Important: If you want to create a filter for data packets from niotix, you have to choose the input type “Generic”.
- Attribute: Select here an attribute for the definition of the filter condition:
- account_id: Id of the account for which the filter should be applied.
- config_id: Id of the connector for which the filter should be applied.
- state_id: Id of the datapoint for which the filter should be applied.
- dtwin_id: Id of the digital twin or virtual device for which the filter should be applied.
- dtwin_title: Title of the digital twin or virtual device for which the filter is to be applied.
- twin_category: Here you can define if it should be filtered on virtualDevice or digitalTwin.
- twin_tags: Tag(s) of the digital twin or virtual device for which the filter should be applied.
- twin_key_value: Key/value pair (see Custom properties) of a digital twin or virtual device for which the filter should be applied.
- state_identifier: Key of the datapoint to be used for the filter.
- state_type: Allows filtering by specific types of data points (number/string/boolean/json).
- parser_variable: Assigned variable in the parser representing this datapoint.
- source_identifier: The external ID of a virtual device to filter for.
- timestamp: Timestamp that should apply to the filter.
- Operator: The common comparison operators are available here to define a condition.
- Value: Comparison value against which the condition is checked. Several comparison values can be entered (confirm each with Return); in this case they are linked as OR.
- “+” symbol: Additional conditions can be added here; the individual conditions are linked as AND.
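The combination logic can be illustrated with a small sketch. This is not the niotix implementation, just a model of the described semantics (multiple values of one condition are OR-ed, separate conditions are AND-ed):

```javascript
// Illustrative sketch of how filter conditions combine.
// conditions: [{ attribute: "twin_category", values: ["virtualDevice"] }, ...]
function matches(packet, conditions) {
  return conditions.every((cond) =>
    // within one condition, any matching value is enough (OR)
    cond.values.some((v) => packet.meta[cond.attribute] === v)
  );
}
```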
Transformations
Transformations can be used to put incoming or outgoing data into the required structure. A common use case is the adaptation of a payload, e.g. for MQTT or Webhook. Additionally it is possible to define the desired output values.
- Instance name: Assign a unique name here.
- Account selection: Select the account for which the transformation should be created.
- Description: Can be used to store additional information.
- Input type: Select the data format of the input data. Important: If data from niotix is to be transformed, the input type “generic” must be selected.
- Output type: Define in which data format the data should be output after the transformation. Select the format expected by the outgoing connector/receiving third-party system.
- Transformation type: Transformations can be created in JSONata or JavaScript.
- Example input: An example dataset to be transformed can be stored here. A typical dataset of a niotix data point is structured as follows:
{
  "meta": {
    "timestamp": "2022-07-08T07:25:27.447Z",
    "state_id": 55292,
    "dtwin_id": 3613,
    "dtwin_title": "Example Virtual Device Name",
    "twin_tags": ["Example", "Tag"],
    "twin_category": "virtualDevice",
    "twin_ancestor_ids": 1544,
    "unit": "kWh",
    "state_identifier": "kWh",
    "state_type": "number",
    "account_id": 482,
    "config_id": 202,
    "source_type": "bridge",
    "parser_variable": "kWh",
    "source_identifier": "847D50161912202E",
    "twin_key_value": {}
  },
  "value": 1055
}
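As an illustration, a JavaScript transformation (input type “generic”, output type “application/json”) could reduce such a dataset to the fields a third-party system expects. The output field names below are purely illustrative:

```javascript
// Sketch of a JavaScript transformation: pick a few fields from a
// "generic" niotix packet and emit them as a JSON string.
function transform(input) {
  return JSON.stringify({
    device: input.meta.dtwin_title,
    key: input.meta.state_identifier,
    unit: input.meta.unit,
    value: input.value,
    timestamp: input.meta.timestamp,
  });
}
```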
Connectors
In this section you can set up bridges to third-party systems. These bridges are used to receive and process data from other systems or to trigger actions in other systems.
To add a new connector, select the type of connector in the Create dialog. The arrow next to the connector name in the menu indicates whether it is an incoming connector (arrow down) for receiving data in niotix or an outgoing connector (arrow up) for sending data. Following the selection, the required connector settings are displayed.
For all connectors, as soon as you click “Validate & Save”, niotix validates whether the connection works and then saves the connector directly. To delete a connector, click the “Trash” button.
Cross-connector functions
The following functions are equally available for all connectors:
Connector Logs
After a connector has been created, the details can be viewed and modified via the pencil icon. Here you will find the tab “Connector Logs”. In this tab the last 100 packets of the connector are displayed. This makes it possible to see how and whether, for example, an incoming connector has received data packets. This supports troubleshooting if the data does not arrive correctly at the virtual device or third-party system.
Status logs
After a connector has been created, the details can be viewed and changed via the pencil icon. Here you will find the tab “Status Logs”. Here you can find a connection log, which shows the last 100 logs of the connector regarding the connection status. This supports troubleshooting if the data does not arrive correctly at the virtual device or third-party system.
Overview of available connectors
The following connectors are currently available in niotix:
Connector “firefly”
The “firefly”-connector sets up a connection to DIGIMONDO’s firefly LoRaWAN network server to send downlink-packets to devices.
- Instance name: the individual name of the connector
- Description: an internal description
- template: Not needed to set up the connection at the moment, therefore disabled.
- apiKey: Add the API key you defined in your firefly instance – this API key is required to authenticate at the firefly system and send downlink packets from the right account.
- baseURL: Add the address of your firefly instance including the protocol (“http” or “https”).
- Firefly MQTT Url (optional): Insert the address of your firefly RabbitMQ instance including protocol (“mqtt” or “mqtts”) e.g. “mqtts://messagequeue.firefly.com”. This field only has to be filled in if virtual devices are to be created with firefly.
Connector “actility”
The “actility” connector allows you to send downlinks to actility devices via a rule in the digital twin.
- Instance name: the individual name of the connector
- Description: an internal description
- Actility Custom Target Profile: the target organization in the Actility instance
- baseURL: Add the address of your Actility instance.
- thingparkLogin: The username for the login.
- thingparkPassword: The password for the login.
Connector “kafka”
The “kafka”-connector is a possibility to send data to an Apache Kafka broker. Similar to the MQTT bridge, it forwards all state changes to the Kafka cluster for further processing and storage.
- Instance name: the individual name of the connector
- Description: an internal description
- Broker: the address and port of the kafka broker (in the syntax “address:port”)
- Client ID: the kafka client ID
- Topic to emit state changes: the topic under which all state changes are going to be emitted to the kafka broker
- SASL Mechanism: the authentication mechanism used to register at the kafka broker
- SASL User: the user name for registering at the kafka broker
- SASL Password: the password for registering at the kafka broker
Connector “openweathermap.org”
The “openweathermap.org”-connector provides a possibility to receive weather information from the openweathermap.org platform. With this connector, you can receive current weather information such as temperature, humidity, wind, etc., for different locations worldwide, including a forecast for up to 6 days. The rain forecast is available for the current day.
To set up the “openweathermap.org”-connector, you need to register at openweathermap.org and set up an API key there.
- Instance name: the individual name of the connector
- Description: an internal description
- API key: Add the API key you created in the openweathermap.org platform
- City: Choose the location for which you want to use weather information.
- Units: Choose between metric or imperial system.
- Update interval: Time interval at which niotix fetches updated information.
IMPORTANT: This connector uses the One Call API (https://openweathermap.org/api/one-call-3). The update interval can be limited depending on your account at openweathermap.org! Keep in mind that if you use the same openweathermap.org API key for several connectors, the number of calls is aggregated. If you exceed the limit, you will be charged.
Connector “mail”
The “mail”-connector is used to send emails as actions created in the rule modeler.
- Instance name: the individual name of the connector
- Description: an internal description
- Fallback “from”-mail: A sender’s default alias used if, for example, the field “from” was not set in a digital twin rule.
- Service: Select your email-service-provider if available to configure the authentication.
- Host: the address of the SMTP server
- Port: the port
- User: user name
- Pass: user password
- Secure/Unsecure: Select if it is a secured connection (TLS) or not.
- Ignore/UnignoreTLS: If the connection is not secured (i.e. the previous option is set to ‘unsecure’) but should be established via STARTTLS, this option must be set to ‘IgnoreTLS’.
- Self-signed certificate: Set this option to ‘accept’ when you want to allow self-signed certificates.
Connector “mqtt”
The “mqtt”-connector sets up a connection to an MQTT-broker to send or receive events via niotix.
- Instance name: the individual name of the connector
- Description: an internal description
- Template: By default, “pass-through” is selected. This will not transform any of the packets.
- Topic to emit state changes: You can emit all changes of all your digital twin states to an MQTT broker. If you want to emit all state updates, define the topic here. You can use placeholders in the topic to define different topics per account and/or per digital twin (e.g. “/{accountId}/digitaltwins/{twinId}/states/{state}”):
- {accountId} will be replaced by the related account ID
- {twinId} will be replaced by the related digital twin ID
- {state} will be replaced by the state ID of the corresponding digital twin
- Username: username of the mqtt client user
- Password: password of the mqtt client user
- Path: The pattern which must be used by digital twin states etc. so that packets are forwarded correctly (placeholders like “+” are allowed). For example: if you set your path to “+/device/+”, a digital twin state with the configuration “(Device)Identifier” = “test” and “parsed packet variable” = “device/test/value” can receive the MQTT packets, as it follows the defined pattern.
- Url: the address of the MQTT-broker
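The placeholder substitution in the topic can be illustrated with a small sketch. The actual substitution happens inside niotix; the function below is only a model:

```javascript
// Hypothetical sketch of how the topic placeholders are expanded.
function expandTopic(template, ctx) {
  return template
    .replace("{accountId}", ctx.accountId) // related account ID
    .replace("{twinId}", ctx.twinId)       // related digital twin ID
    .replace("{state}", ctx.state);        // state ID of the twin
}
```

For example, the template “/{accountId}/digitaltwins/{twinId}/states/{state}” with account 482, twin 3613 and state 55292 yields the topic “/482/digitaltwins/3613/states/55292”.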
Connector “mqttbroker”
The “mqttbroker” connector represents an MQTT client that establishes a connection to a RabbitMQ installed in the niotix service network, creates a topic and creates login data (user and password). The connector then “publishes” data on the automatically created topic, which can then be received by “subscribed” MQTT clients. This connector should therefore be used if the remote station receiving the data has an MQTT client available.
Note: For security and stability reasons, we expressly recommend not using the existing internal niotix RabbitMQ, but always creating a dedicated RabbitMQ instance for this connector. niotix only needs one such dedicated RabbitMQ, which is shared by all connectors of this type.
How to create a “mqttbroker” connector:
- Instance name: Give the connector an individual, easy-to-understand name.
- Description: Provide a brief description.
- Click “Validate & Save”; the topic and the login data are then created automatically and displayed.
Considerations regarding the Quality of Service (QoS)
The “mqttbroker” connector allows you to define a QoS when used in an integration flow. The following settings can be made:
- 0 - At most once
- 1 - At least once
- 2 - Exactly once
The default is “0 - At most once”. If higher reliability is needed for the use case, the other two options are supported. In these cases the subscribing client must use a matching QoS, provide a client ID and set clean session to false. If the setup is done accordingly, niotix will queue messages when the connection of the subscribing client is lost. The amount of queued data is limited by the size of the volume available to the RabbitMQ (if you use niotix as SaaS, we make sure this volume is sized reasonably for your data).
Connector “niota”
The “niota”-connector sets up a connection to an existing instance of niota 1.0 to receive device data from a selected account.
- Instance name: the individual name of the connector
- Description: an internal description
- apiKey: The API key you defined in your niota 1.0 tenant - this API key is required to authenticate at the niota 1.0 system and only receive data from the right tenant.
- baseURL: the address of your niota 1.0 instance
IMPORTANT: Note that you need at least one user in the corresponding niota 1.0 account to successfully use the “niota”-connector!
Connector “smartservice”
The “smartservice” connector establishes a connection to the solutions of Thüga SmartService. If you are interested in this connector, contact your contact person at DIGIMONDO.
Enter the following information:
- Instance name: Give the connector an individual, easy to understand name.
- Description: Provide a short description.
- API Key: Add the API key.
- Value timestamp from: Choose between “api” or “generated”.
- Update Interval: Select the time interval at which the connector should update.
Connector “Webhook (Outgoing)”
With the “Webhook (Outgoing)”-connector, niotix can send data to third-party systems via webhook. For example, a rule in a digital twin can send an individual JSON to a webhook address. To test the connection, you can use the open and free platform webhook.site if you do not have your own solution yet.
Enter the following information:
- Instance name: Give the connector an individual, easy to understand name.
- Description: Provide a short description.
- Header Auth Type: Select how niotix authenticates at the webhook receiver (if no authentication is required, as at webhook.site, choose “none”).
- Header Auth Value: Provide the value (e.g. key) for authentication (if no authentication is needed, leave this field empty).
- Method: Choose the HTTP method (e.g. for webhook.site it is “POST”).
- WebhookURL: Provide the address of the third-party system (including “https://…”).
Connector “websocket”
With the “websocket”-connector niotix can send data to a websocket-server. To test the connection, you can - if you do not have any own solution yet - use open and free platforms like Pie Socket or Socketbay.
Enter the following information:
- Instance name: Give the connector an individual, easy to understand name.
- Description: Provide a short description.
- Connection timeout (ms): Provide a time in milliseconds after which the connection is terminated if the server does not respond. If no value is provided, the default of 10 seconds is used.
- Websocket Url: Provide the address of the websocket server (including “wss://…”).
Connector “Webhook (Incoming)”
This connector allows data to be sent via HTTP POST to a virtual device as incoming packets. The url for the HTTP POST request is generated automatically and can be copied from the “API endpoint” field. The body of the request must be of type application/json and should have the following default structure:
{
"id": "4711",
"timestamp": "2022-08-19T13:11:05.068Z",
"payload": {
"key1" : "value1",
"key2" : "value2"
}
}
or
{
"id": "4711",
"timestamp": "2022-08-19T13:11:05.068Z",
"payload": "01AB02DEF0"
}
The id is the External ID of the virtual device. If the JSON body is in any other format, it can be transformed in the template script by selecting the Virtual Device Inbound template and following the instructions in the code comments.
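A minimal sketch of building and sending such a request, assuming Node 18+ with its global fetch; the endpoint URL is a placeholder, so copy the real one from the connector’s “API endpoint” field:

```javascript
// Build the request body in the expected default structure.
function buildPacket(externalId, payload) {
  return JSON.stringify({
    id: externalId,                   // External ID of the virtual device
    timestamp: new Date().toISOString(),
    payload,                          // key/value object or a hex string
  });
}

// Example call (placeholder URL):
// await fetch("https://example-niotix-host/api-endpoint", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildPacket("4711", { key1: "value1", key2: "value2" }),
// });
```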