Architecture


Overview

Firefly implements the LoRaWAN network stack as one integrated platform. niotix attaches Firefly as a connected system: the product integration runs through the Firefly connector (REST for administration and control, MQTT for uplink data via organization-specific subscriptions).

The following sections build on terms such as account (niotix), organization and API key (Firefly), MQTT subscription, LoRaWAN device, gateway, and downlink; the linked LoRaWAN and IoT Data Hub pages go into more detail.

Placement in the niotix UI

  • Connected systems → LoRaWAN System: opens the Firefly UI (this does not by itself replace a full niotix integration).
  • Integrations → Connectors: create and maintain the Firefly connector (baseUrl, API key, MQTT).
  • IoT Data Hub → Gateway Mgmt: gateway lifecycle in the account; with a configured datasource, Elasticsearch metrics are also used (see below).

LoRaWAN server roles in Firefly

LoRaWAN separates network, application, and join logic. In Firefly, Network Server, Join Server, and Application Server work together; niotix mainly uses the externally exposed APIs and data paths of the Application Server, while join and network functions stay in the background for radio and security.

Network Server

The Network Server is the network-facing part of the LoRaWAN stack. Typical responsibilities:

  • PHY/MAC: processing LoRaWAN frames received from gateways and resolving the radio path to the device.
  • MAC integrity and security: MIC and frame-counter checks, replay protection, and regional rules (for example duty cycle and sub-bands).
  • Multi-gateway reception: merging or selecting among duplicate uplinks received via several gateways (deduplication / “best gateway” behaviour, depending on implementation).
  • ADR and data rate: optional adaptive data rate and spreading factor where supported by device and network.
  • Downlink: scheduling downlink windows (classes A/B/C), choosing suitable gateways, and respecting transmit plan, power limits, and device class.
  • Forwarding: passing decrypted or prepared network information to the Application Server; join-related cryptography is usually coordinated with the Join Server.

In short, the Network Server decides whether and how an uplink is valid on the network side, and when and via which gateway a downlink is sent.
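The deduplication step can be pictured as follows. This is a minimal sketch, not Firefly's implementation: field names (`dev_eui`, `f_cnt`, `gw_eui`, `rssi`) are illustrative, and the policy shown (strongest RSSI wins, all receiving gateways remembered for downlink selection) is just one common "best gateway" strategy.

```python
from collections import defaultdict

def deduplicate_uplinks(receptions):
    """Collapse duplicate uplinks received via several gateways.

    `receptions` is a list of dicts with illustrative fields dev_eui,
    f_cnt, gw_eui, and rssi. For each (device, frame counter) pair,
    the reception with the strongest RSSI wins; every gateway that
    heard the frame is kept for later downlink gateway selection.
    """
    groups = defaultdict(list)
    for rx in receptions:
        groups[(rx["dev_eui"], rx["f_cnt"])].append(rx)

    result = []
    for frames in groups.values():
        best = max(frames, key=lambda rx: rx["rssi"])
        # Annotate the winner with all gateways that saw this frame.
        result.append(dict(best, seen_by=[rx["gw_eui"] for rx in frames]))
    return result
```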

Join Server

The Join Server encapsulates over-the-air activation (typically OTAA). Typical responsibilities:

  • Key material: secure handling of root keys (for example AppKey) and related join identities (JoinEUI / AppEUI, DevEUI).
  • Join-request: validating incoming join requests (including DevNonce / replay handling) against device registration.
  • Session derivation: generating session keys for network and application after successful authorization.
  • Join-accept: building and cryptographically protecting the join-accept message; working with the Network Server to deliver it to the device.

Without a working Join Server (or valid keys and identity configuration), devices cannot activate via OTAA even if gateways and the Network Server are reachable.
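The DevNonce handling mentioned above amounts to remembering which nonces a device has already used. A simplified in-memory model (not Firefly's implementation; real Join Servers persist this state):

```python
class DevNonceTracker:
    """Minimal sketch of join-request replay protection.

    LoRaWAN requires that a DevNonce is not reused for the same
    device; a Join Server therefore remembers nonces it has already
    accepted per DevEUI and rejects repeats.
    """

    def __init__(self):
        self._seen = {}  # dev_eui -> set of accepted DevNonces

    def accept_join_request(self, dev_eui: str, dev_nonce: int) -> bool:
        nonces = self._seen.setdefault(dev_eui, set())
        if dev_nonce in nonces:
            return False  # replayed join-request: reject
        nonces.add(dev_nonce)
        return True
```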

Application Server

The Application Server covers the application-facing side: device and gateway administration in the organization, REST APIs, application-level payload handling, and the mechanisms through which niotix attaches the Firefly connector to organization, devices, gateways, and subscriptions (including the MQTT queue).

End devices, gateways, and transport

Firefly connects LoRaWAN devices along the usual transport path end device ↔ radio ↔ at least one gateway ↔ IP network (backhaul) ↔ Firefly (Network, Application, and Join server logic). The subsections below separate the radio link and IP transport; concrete ports, protocols, and timing for connecting gateways to Firefly are covered in Connecting LoRaWAN Gateways.

LoRaWAN radio path between device and gateway

  • Uplinks and join-requests: the end device transmits on the configured channel plan; one or more gateways receive the LoRaWAN frame and forward it over the backhaul to Firefly.
  • Downlinks: Firefly schedules the downlink (including gateway choice, transmit time, and power limits); the selected gateway transmits the frame in the device’s appropriate receive window (device class A/B/C).
  • Requirements on the radio link: a matching frequency plan and consistent device parameters (including OTAA keys and region); range, spreading factor, and regional rules (for example duty cycle) drive reliability and latency on the air interface; corporate LAN firewall and routing rules apply from the IP backhaul behind the gateway onward.
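The downlink timing constraint for class A versus class C can be sketched as a small calculation. The 1 s / 2 s delays are the EU868 class A defaults (RECEIVE_DELAY1/RECEIVE_DELAY2 are regional parameters and can differ); class B ping-slot scheduling is deliberately left out.

```python
RECEIVE_DELAY1 = 1.0  # seconds; class A default (regional parameter)
RECEIVE_DELAY2 = RECEIVE_DELAY1 + 1.0

def downlink_windows(uplink_end: float, device_class: str):
    """Candidate transmit times for a downlink, in seconds.

    Class A devices listen only in RX1/RX2 shortly after their own
    uplink; class C devices listen almost continuously, so a downlink
    can go out right away. Class B is omitted in this sketch.
    """
    if device_class == "A":
        return [uplink_end + RECEIVE_DELAY1, uplink_end + RECEIVE_DELAY2]
    if device_class == "C":
        return [uplink_end]  # effectively any time; send now
    raise ValueError("class B scheduling not modelled in this sketch")
```

This is why the network server's clock discipline and backhaul latency matter: a class A downlink that misses both windows is lost until the device's next uplink.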

IP backhaul between gateway and Firefly

Gateways usually reach Firefly’s network server over one of two common paths:

  • Semtech UDP packet forwarder: connectionless UDP to the network server, typically on destination UDP port 1700 (see Connecting LoRaWAN Gateways). With this forwarder, no gateway record needs to be created in Firefly.
  • Basic Station: a WebSocket connection, typically wss://…, commonly on destination port 4020 or 443 (TLS), with the gateway EUI and an auth token registered in Firefly.

From the operations network, the hostnames and ports defined by the Firefly operator must be reachable outbound from the gateway (towards an internet/SaaS endpoint or a self-hosted LNS). Inbound long-lived connections from Firefly into private networks are not the usual pattern; gateways normally initiate outbound traffic (UDP bursts or a WebSocket session). This usually works behind NAT as long as outbound traffic is allowed; Basic Station benefits from a stable, long-lived outbound path more than sporadic UDP does.
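To make the Semtech UDP path concrete: per the published packet-forwarder protocol (version 2), an uplink arrives as a PUSH_DATA datagram of one version byte, a two-byte random token, an identifier byte, the 8-byte gateway EUI, and a JSON object; the network server answers with a four-byte PUSH_ACK echoing the token. A minimal parsing sketch:

```python
import json

PUSH_DATA, PUSH_ACK = 0x00, 0x01

def parse_push_data(datagram: bytes):
    """Parse a Semtech UDP packet-forwarder PUSH_DATA datagram.

    Returns the gateway EUI (hex), the decoded JSON payload, and the
    PUSH_ACK bytes the network server should send back to the gateway.
    """
    if len(datagram) < 12 or datagram[3] != PUSH_DATA:
        raise ValueError("not a PUSH_DATA datagram")
    token = datagram[1:3]                      # random token from the gateway
    gw_eui = datagram[4:12].hex().upper()      # 8-byte gateway EUI
    payload = json.loads(datagram[12:])        # e.g. {"rxpk": [...]}
    ack = bytes([datagram[0]]) + token + bytes([PUSH_ACK])
    return gw_eui, payload, ack
```

Because the gateway sends first and only expects the short acknowledgement back on the same flow, this pattern traverses NAT without inbound firewall rules.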

Time and backhaul quality: accurate UTC on the gateway (NTP and optionally GPS) is central for downlink timing, join handling, and diagnostics — see Connecting LoRaWAN Gateways (section Time, NTP, and Receive Time). Low latency and low jitter on the IP path to the network server support timely downlinks and MAC commands; very high delay or packet loss has a noticeable impact depending on region, data rate, and device class.

System overview

```mermaid
%%{ init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#FFFFFF', 'primaryTextColor': '#031403', 'primaryBorderColor': '#606060', 'lineColor': '#606060', 'secondaryColor': '#2fcbff', 'tertiaryColor': '#E7E6E6', 'primary_font' : 'Poppins:wght@300;400;500;600;700;800', 'primary_font_type' : 'sans-serif' } } }%%
flowchart LR
  subgraph fireflySys["`**Firefly**`"]
    direction LR
    organization[" Organization"]
    as[" Application Server"]
    ns[" Network Server"]
    js[" Join Server"]
    devices[" Devices"]
    gateways[" Gateways"]
  end
  organization --> as
  as --> ns
  js --> ns
  devices --> ns
  gateways --> ns
  ns --> as
  as --> organization
```

The edges show the usual data and administration direction: gateways feed the Network Server, which works with the Join Server and Application Server; organization and devices are governed through the Application Server.

Data flow between niotix and Firefly

For niotix, the Firefly connector is the main integration point:

```mermaid
%%{ init: { 'theme': 'base', 'themeVariables': { 'primaryColor': '#FFFFFF', 'primaryTextColor': '#031403', 'primaryBorderColor': '#606060', 'lineColor': '#606060', 'secondaryColor': '#2fcbff', 'tertiaryColor': '#E7E6E6', 'primary_font' : 'Poppins:wght@300;400;500;600;700;800', 'primary_font_type' : 'sans-serif' } } }%%
flowchart LR
  subgraph niotixSys["`**niotix**`"]
    direction LR
    connector[" Firefly connector"]
    processing[" Processing / Gateway Mgmt"]
  end
  subgraph fireflyInt["`**Firefly**`"]
    direction LR
    organizationInt[" Organization"]
    devicesInt[" Devices and gateways"]
  end
  connector -->|"REST with API key"| organizationInt
  organizationInt -->|"Management"| devicesInt
  devicesInt -->|"MQTT uplinks"| connector
  connector -->|"Raw packets, downlinks, monitoring"| processing
```
  1. niotix connects to Firefly using baseUrl and an API key.
  2. The connector reads the matching Firefly organization from that API key.
  3. A Firefly subscription is created or updated for that organization.
  4. niotix receives MQTT uplinks through that subscription.
  5. niotix forwards the raw data into its own packet-processing flow.
  6. Management actions such as device creation, device updates, or downlinks are sent back to Firefly via REST.

The key connector fields in niotix (including baseUrl, apiKey, and the MQTT URL for uplinks) are documented under Connector Firefly. For processing incoming data, configuration in Virtual Devices, and downlinks from Rules, see Virtual Devices and Rules.

The subscription is bound to the organization and managed through the Firefly API in the context of the Application Server. For this purpose, a binding of type organization_up_packets is created together with an MQTT queue.
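The binding type organization_up_packets is taken from the text above; everything else in this sketch (the function, the field names, the payload nesting) is an illustrative assumption, not Firefly's actual REST schema:

```python
def build_uplink_subscription(organization_id: int, queue_name: str) -> dict:
    """Sketch of the uplink binding the connector creates.

    Illustrative payload only: the binding type comes from the
    documentation, the field names are assumptions.
    """
    return {
        "binding": {
            "type": "organization_up_packets",
            "organization_id": organization_id,
            # MQTT queue the niotix connector subscribes to for uplinks
            "queue": queue_name,
        }
    }
```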

For niotix, Firefly is not a generic webhook target. It is a LoRaWAN system with its own subscription logic, so niotix typically uses REST calls and Firefly MQTT.

Requirements for the niotix connection

  • From the niotix operations network, outbound HTTPS access to the Firefly REST API and outbound MQTT to the Firefly broker (per the connector configuration) must be permitted; proxy and firewall rules should follow standard enterprise practice for TLS/TCP.
  • The Firefly bridge in the niotix backend uses typical timeouts of about two seconds for REST requests and MQTT connection setup (api.timeout / mqtt.timeout in the service configuration); values may differ per installation.
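In a service configuration, the two timeouts mentioned above might look roughly like this; the nesting, key placement, and unit (milliseconds) are assumptions for illustration, and actual installations may use a different schema or values:

```json
{
  "firefly": {
    "api": { "timeout": 2000 },
    "mqtt": { "timeout": 2000 }
  }
}
```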

Elasticsearch and packet metrics

Alongside REST and MQTT, Elasticsearch often matters in operations: LoRaWAN packet metadata (for example packets received per gateway, RSSI, LSNR, spreading factor, timestamps) is usually stored in a network-server-related index. When an Elasticsearch address (elasticsearchAddress) is configured on the gateway’s datasource, the niotix backend queries the index pattern metrics-networkserver_packets-* for Gateway Mgmt and performance views, among others.

Important when interpreting the data:

  • Metrics reflect all LoRaWAN traffic seen at the gateway, not only devices registered in Firefly; the index can therefore include unknown devices and supports assessing the radio path and receive quality.
  • niotix does not write these indexes; a running Elasticsearch cluster with the expected schema or compatible field names is required (including gw_eui, rssi, lsnr, spreading_factor, received_at_server).

For more on how this appears in the UI, see IoT Data Hub (section Gateway Mgmt).
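A query against that index might be shaped like the sketch below. The index pattern and field names (gw_eui, rssi, lsnr, received_at_server) come from the text above; the aggregation names and the exact query structure are illustrative assumptions, not niotix's actual backend queries.

```python
def gateway_packet_metrics_query(gw_eui: str, hours: int = 24) -> dict:
    """Sketch of an Elasticsearch query body for gateway metrics.

    Targets metrics-networkserver_packets-*: filter by gateway EUI
    and a time range, then aggregate signal quality.
    """
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"gw_eui": gw_eui}},
                    {"range": {"received_at_server": {"gte": f"now-{hours}h"}}},
                ]
            }
        },
        "aggs": {
            "avg_rssi": {"avg": {"field": "rssi"}},
            "avg_lsnr": {"avg": {"field": "lsnr"}},
        },
        "size": 0,  # aggregations only, no raw hits
    }
```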

Account and organization model

niotix and Firefly use different terms:

  • In niotix, the main tenant boundary is the account.
  • In Firefly, devices, gateways, users, and API keys are attached to organizations and suborganizations.

For technical mapping:

  • niotix processes data in the context of an account.
  • Firefly maps the API key to exactly one organization.
  • The Firefly connector builds subscription and device synchronization on top of that organization.

Common Questions

Why is Firefly visible in niotix although the integration is not working yet?

The LoRaWAN System menu entry only opens the Firefly UI. A Firefly connector is still required for the technical integration with niotix.

Why is an MQTT URL needed?

The MQTT URL is used so that niotix can receive uplink data from Firefly. Without that path, REST-based management still works, but incoming measurements are not transferred in the same way. This is a typical cause when packets are visible in Firefly but do not arrive in niotix, especially after creating a new connector.

Is Firefly or niotix the leading system?

For LoRaWAN raw data and network-side LoRaWAN behaviour, Firefly is the leading system. In niotix-based setups, devices are created and managed in niotix, and the related changes are transferred to Firefly.