1. Introduction

This document is a user-guide for the Live Objects service.

Live Objects LPWA developer guide is available here.

For any question, comment or improvement regarding this document, please email us at liveobjects.support@orange.com

2. Overview

2.1. What is Live Objects?

Live Objects is one of the products belonging to the Orange Datavenue service suite.

Live Objects is a software suite for IoT / M2M solution integrators, offering a set of tools to facilitate the interconnection between devices (or connected « things ») and business applications:

  • Connectivity Interfaces (public and private) to collect data and send commands or notifications from/to IoT/M2M devices,

  • Device Management (supervision, configuration, resources, firmware, etc.),

  • Message Routing between devices and business applications,

  • Data Management: Data Storage with Advanced Search features.

Live Objects overview


It can be used in Software as a Service (SaaS) mode, or deployed “on premises” in a datacenter of the customer’s choice.

The public interfaces are reachable from the internet. The private interfaces provide connectivity with a selection of devices (MyPlug) or specific networks (LPWAN).

The SaaS allows multiple tenants on the same instance without possible interactions across tenant accounts (i.e. isolation: for example, a device belonging to one tenant cannot communicate with a device belonging to another tenant).

A web portal provides a UI for administration functions, such as managing the message bus configuration, supervising your devices and controlling access to the tenant.

2.2. Architecture

Live Objects SaaS architecture is composed of three complementary layers:

  • Connectivity layer: manages the communications with the client devices and applications,

  • Bus layer: a set of message-oriented middleware allowing asynchronous exchanges between our software modules,

  • Service layer: various modules supporting the high level functions (device management, data processing and storage, etc.).

Live Objects architecture


2.3. Connectivity layer

2.3.1. Public interfaces

Live Objects exposes a set of standard and unified public interfaces allowing any programmable device, gateway or functional IoT backend to connect.

The existing public interfaces are:

MQTT is an industry-standard protocol designed for efficient, real-time data exchange from and to devices. It is a binary protocol, and MQTT libraries have a small footprint.

HTTPS is better suited to scarcely connected devices. It does not provide an efficient way to communicate from the SaaS to the devices (it requires periodic polling, for example).

For more info, see the "Message encodings" section.

The public interfaces share a common security scheme based on API keys that you can manage from Live Objects APIs and web portal.

2.3.2. Private interfaces

Live Objects is fully integrated with a selection of devices and networks. It handles communications from specific families of devices with defined protocols (over IP), and translates them into standardized messages available on the Live Objects message bus.

The existing private interfaces are:

  • LPWAN interface connected with LPWAN network server,

    • to provision LPWAN devices

    • to receive and send data from/to LPWAN devices

  • MyPlug interface connected with MyPlug gateways,

    • to provision MyPlug gateway

    • to receive and send data from/to MyPlug gateway and accessories

2.4. Bus layer

Live Objects connectivity interfaces connect to a message bus that can route messages to external business applications or internal micro-services (device management, store and search services).

The message bus offers three distinct modes:

  • Router: adapted to situations where publishers don't know the destination of the messages. Messages can either be consumed with transient subscriptions, or static "Bindings" can be declared to route messages into FIFO queues. More info: Router mode.

  • PubSub: a good fit for real-time transient exchanges. Messages are broadcast to all currently available subscribers, or dropped. More info: PubSub mode.

  • FIFO: the solution to prevent message loss in case of consumer unavailability. Messages are stored in a queue on disk until consumed and acknowledged. When multiple consumers are subscribed to the same queue concurrently, messages are load-balanced between available consumers. More info: FIFO mode.

Various usage of Live Objects message bus


For more info, see "Message Bus" chapter.

2.5. Service layer

2.5.1. Device management

Live Objects offers various functions dedicated to device operators:

  • supervise device connections and disconnections to/from the SaaS,

  • manage device configuration parameters,

  • send commands to devices and monitor the status of these commands,

  • send resources (any binary file) to devices and monitor the status of this operation.

Live Objects attempts to send commands and resources, or to update parameters on the asset, as soon as the asset is connected and available.

For more info, see "Device Management" chapter.

2.5.2. Data management

Live Objects can store the data collected from any connectivity interface. These data can then be retrieved using the HTTP REST interface.

A full-text search engine based on Elasticsearch is provided to analyze the stored data. This service is accessible through the HTTP REST interface.

For more info, see "Data Management" chapter.

2.5.3. Simple Event Processing

The simple event processing service is aimed at detecting notable single events in the flow of data messages.

Based on processing rules that you define, it generates fired events that your business application can consume to initiate downstream actions, such as raising an alarm or executing a business process.

For more info, see "Event Processing" chapter.

2.6. Security

2.6.1. API keys

API keys are used to control access to the SaaS: devices, applications and users authenticate with them. You must create an API key to use the API.

2.6.2. Users management

When an account is created, a user with administration privileges is also created on the account. This administrator can add other users to the account and set their privileges. These privileges are defined by a set of roles. The users can connect to the Live Objects web portal.

3. Getting started

This chapter is a step-by-step manual for new users of Live Objects, with instructions covering the basic use cases of the service.

3.1. Account creation

In order to use Live Objects, you need to have a dedicated account on the service.

Please contact the Live Objects team to request an account: liveobjects.support@orange.com. A valid email address will be required to create your account. Once the account is created, you should receive an email with an activation link.

account activation email


By clicking on Account Activation, you are redirected to a web page where you can choose the password of your user account.

Once you have entered your password twice and a correct "captcha", then clicked on "update password", you are redirected to the Live Objects sign-in page, where you can now log into your newly created user account.

3.2. Signing in

To log in to the Live Objects web portal, connect to liveobjects.orange-business.com using your web browser:


  • Fill in the Log in form with your credentials:

    • your email address,

    • the password set during the activation phase,

  • then click on the Log in button.

If the credentials are correct, a success message is displayed and you are redirected to your “home” page:


3.3. Creating an API Key

To have a device or an application communicate with Live Objects Manage, you will need to create an API key.

On the left menu, click on api keys and create a new API key. This key is necessary to set up a connection with the public interfaces (MQTT and REST) of Live Objects Manage. You can restrict the rights of this API key by selecting one or more message queues here: the API key can then only be used for MQTT access limited to these selected message queues. This makes it possible, for example, after having directed the right device to the right queue, to restrict access to the data of specific devices.


As a security measure, you cannot retrieve the API key again after you have closed the API key creation results page. So note it down: you will need it with the MQTT client during this getting started.


3.4. Connecting an MQTT device

It is up to you to choose your favorite MQTT client or library. We will use the MQTT.fx client here. This client is available on Windows/macOS/Linux and is free. Download and install the latest version of MQTT.fx.

We will start by creating a new connection profile and configuring it for a device mode set-up.

General panel

Here you will configure the Live Objects endpoints, including authentication information. In this panel, you can set:

  • Broker Address with liveobjects.orange-business.com

  • Broker Port with 1883

  • Client ID with urn:lo:nsid:dongle:00-14-22-01-23-45 (as an example)

  • Keep Alive Interval with 30 seconds


Credentials panel

  • username: json+device, for a device mode MQTT connection

  • password: the API Key that you just created


3.5. Device management basics

3.5.1. Connection status

We can simulate a device connection to Live Objects by clicking on the Connect button of the MQTT.fx client.

In the Live Objects portal, you can see that the device is connected: go to "assets", and the device will appear in the list.


3.5.2. Sending a command

You must first subscribe to the command topic "dev/cmd" (Subscribe tab of MQTT.fx).

Go to "assets" then select your device in the list and go to "commands" tab.

Click on "add command" then fill the event field with "reboot" then click on "Register". The command will appear in MQTT.fx client subscribe tab.

{
   "req": "reboot",
   "arg": {},
   "cid": 94514847
}

A response can be sent to acknowledge receipt of the command.

To send this response, publish a message to the topic "dev/cmd/res". The cid (correlation ID) field must be set to the correlation ID received previously.

{
  "res": {
     "done": true
  },
  "cid": 94514847
}

Once published, the status of the command will change to "processed" in the portal commands history tab.
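On the device side, building this acknowledgement only requires echoing the received cid. A minimal Python sketch, assuming nothing beyond the message shapes shown above (the helper name is illustrative; the actual publication would be done with your MQTT client on "dev/cmd/res"):

```python
import json

def make_command_ack(command_json: str) -> str:
    """Build the acknowledgement for a received device-mode command.

    The response echoes the "cid" (correlation ID) of the request, which
    is required for the command status to move to "processed".
    """
    command = json.loads(command_json)   # e.g. {"req": "reboot", "arg": {}, "cid": 94514847}
    response = {
        "res": {"done": True},           # command-specific result payload
        "cid": command["cid"],           # must match the request's correlation ID
    }
    return json.dumps(response)

# The device would publish this string on the "dev/cmd/res" MQTT topic.
ack = make_command_ack('{"req": "reboot", "arg": {}, "cid": 94514847}')
```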

3.6. Message Bus basics

3.6.1. Using a FIFO queue

On the left menu, click on "message bus", you are redirected to the "message bus / FIFO queues" page. Click on the "add FIFO queue" button, a pop-in appears:


Enter a name "myFifo" for your "FIFO queue", then press "Register" button: the newly created FIFO queue "myFifo" is now listed.


On the left menu, click on "developer tools"; you are redirected to a page with different tabs for different tools useful for testing purposes:

In the "publish" tab (selected by default):

  • select "FIFO" in the "Topic Type" select box,

  • enter "myFifo" (the name of the FIFO queue you just created) in the "Topic" input field,

  • enter the following JSON in "Payload" textarea:

    {
       "payload": "Hello world!"
    }
  • press the "Publish" button.


A "success" message is displayed:


Now, go back to your FIFO list: the "myFifo" FIFO should now be displayed with a message count of "1":


3.6.2. Using the Router

On the left menu, click on "message bus" to go back to the "message bus / FIFO queues" page.

Click on the "router" tab; you now see an empty list of "bindings". Click on the "+ router" button, and a pop-in is displayed with a form to create a new binding:

  • enter "~event.test.#" in the "Routing key filter" input field,

  • select "myFifo" in the "Target FIFO" select box,

  • press the "Create Binding" button.


You now see the newly created binding listed:


Publish a non-stored message

On the left menu, click on "developer tools" to go back to the "developer tools" page and "publish" tab.

  • select "Router" in the "Topic Type" select box,

  • enter "~event.test.foo.bar.123" in the "Topic" input field,

  • enter the following JSON in the "Payload" text area:

    {
       "payload": "Hello router!"
    }
  • press the "Publish" button.

A "success" message is displayed:


Now, go back to your FIFO list: the "myFifo" FIFO should now be displayed with a message count of "2" (one more than previously):


You made a publication with a "routing key" (the "topic" field) that has been matched by a declared "binding" that targeted the "myFifo" FIFO, so a copy of your message has been routed and stored into the FIFO as if you had published directly into it!

3.7. Data management

3.7.1. Publishing data messages

We will use the MQTT.fx client in device mode to send a data message as a device would do.

Data messages must be published on the topic dev/data.

Message:

{
  "s" : "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
  "ts" : "2016-07-10T10:02:44.907Z",
  "loc" : [44.1, -1.5],
  "m" : "temperatureDevice_v0",
  "v" : {
    "temp" : 17.25
  },
  "t" : [ "City.NYC", "Model.Prototype" ]
}
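A data message like the one above can be assembled programmatically before publishing to dev/data. A minimal Python sketch, assuming only the short field names shown above; the helper name and argument layout are illustrative:

```python
import json
from datetime import datetime, timezone

def make_data_message(stream_id, value, model, location=None, tags=None):
    """Build a device-mode data message for the "dev/data" topic."""
    message = {
        "s": stream_id,                        # stream identifier
        "ts": datetime.now(timezone.utc)       # ISO-8601 UTC timestamp,
              .isoformat().replace("+00:00", "Z"),  # "Z"-suffixed as in the example
        "m": model,                            # value schema name
        "v": value,                            # structured values
    }
    if location is not None:
        message["loc"] = location              # [latitude, longitude]
    if tags is not None:
        message["t"] = tags
    return json.dumps(message)

payload = make_data_message(
    "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
    {"temp": 17.25},
    "temperatureDevice_v0",
    location=[44.1, -1.5],
    tags=["City.NYC", "Model.Prototype"],
)
```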


3.7.2. Accessing the stored data

Going back to the Live Objects portal, you can consult the data message that was just stored: go to "data", then search for the streamId "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature". The data message that was sent will appear.


You can perform complex search queries, such as aggregations, using the Elasticsearch DSL HTTP interface. See the example in the Data API chapter.

4. Concepts

4.1. Tenant account

A tenant account is the isolated space on Live Objects dedicated to a specific customer: every interaction between Live Objects and an external actor (user, device, client application, etc.) or registered entities (user accounts, api keys, etc.) is associated with a tenant account.

Live Objects ensures isolation between those accounts: you can’t access the data and entities managed in another tenant account.

Each tenant account is identified by a unique identifier: the "tenant ID".

A tenant account also has a "name", that should be unique: while the "tenant ID" can’t be changed, the tenant account name can be edited from the "Settings" page of the web portal.

4.2. API key


A Live Objects API key is a secret that can be used by a device/app/user to authenticate when accessing Live Objects on the MQTT or HTTP/REST interfaces. At least one API key must be generated. As a security measure, an API key cannot be retrieved after creation.

An API Key belongs to a tenant account: after authentication, all interactions will be associated (and isolated from other tenant accounts) to this account.

An API key can have zero, one or many Roles. These roles restrict the operations that can be performed with the key. An API key's validity can also be limited in time.

A tenant account is automatically attributed a "master" API key at creation. That API key is special: it can’t be deleted.

An API Key can generate child-API keys that inherit (a subset of) the parent roles and validity period.

Usage:

  • In MQTT, clients must connect to Live Objects by using a valid API Key value in the « password » field of the (first) MQTT « CONNECT » packet,

    • if the API Key value is unknown or invalid, the connection is refused,

    • on success, all messages published on this connection will be enriched with the API Key id and roles,

  • In HTTP, clients must specify a valid API Key value in the « X-API-Key » HTTP header for every request,

    • if the API Key value is unknown or invalid, the request is refused (HTTP status 403),

    • on success, all messages published due to this request will be enriched with the API Key id and roles.
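For HTTP, attaching the key is a one-liner with any client. A sketch with Python's standard urllib (the request path is purely illustrative, the key is a dummy value, and no network call is made here):

```python
import urllib.request

API_KEY = "0123456789abcdef"   # the API key value noted at creation time (dummy here)

# Every HTTP request to Live Objects must carry the key in the
# "X-API-Key" header; the URL path below is illustrative only.
request = urllib.request.Request(
    "https://liveobjects.orange-business.com/api/v0/data",
    headers={"X-API-Key": API_KEY},
)

# urllib normalizes header names to "Capitalized" form internally:
assert request.get_header("X-api-key") == API_KEY
```

For MQTT, the same key value simply goes in the « password » field of the CONNECT packet, as described above.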

4.3. User account

A User Account represents a user identity, that can access the Live Objects web portal.

A user account is identified by an email address. A user account is associated with one or many roles. A user can authenticate on the Live Objects web portal using an email address and password.

When a user authentication request succeeds, a temporary API key is generated and returned, with the same roles as the user account.

In case of too many invalid login attempts, the user account is locked out for a while.

For security purposes, a password must be at least 8 characters long, including 1 uppercase letter, 1 lowercase letter, 1 digit and 1 special character.

4.4. Role

A Role can be attributed to an API key or a user account. It defines the privileges of this user or API key on Live Objects.

Important notice: some features are only available if you have subscribed to the corresponding option, so you may have the proper roles set on your user but no access to some features, because those features are not activated on your tenant account.

The currently available roles and their inclusion in Admin or User profiles:

Role Name | Technical value | Profiles | Privileges

API Key | API_KEY_R | Admin, User | Read parameters and status of an API key.

API Key | API_KEY_W | Admin, User | Create, modify, disable an API key.

User | USER_R | Admin, User | Read parameters and status of a user.

User | USER_W | Admin | Create, modify, disable a user.

Settings | SETTINGS_R | Admin, User | Read the tenant account custom settings.

Settings | SETTINGS_W | Admin, User | Create, modify tenant account custom settings.

Device | DEVICE_R | Admin, User | Read parameters and status of a Device (aka Asset).

Device | DEVICE_W | Admin | Create, modify, disable a Device (aka Asset); send commands, modify configuration, update resources of a Device.

Device Campaign | CAMPAIGN_R | Admin, User | Read parameters and status of a massive deployment campaign on your Device Fleet.

Device Campaign | CAMPAIGN_W | Admin | Create, modify a massive deployment campaign on your Device Fleet.

Data | DATA_R | Admin, User | Read the data collected by the Store Service or search into these data using the Search Service.

Data | DATA_W | Admin | Insert a data record into the Store Service. Minimum permission required for the API key of a device pushing data to Live Objects over HTTPS.

Data Processing | DATA_PROCESSING_R | Admin, User | Read parameters and status of an event processing rule or a Data decoder.

Data Processing | DATA_PROCESSING_W | Admin | Create, modify, disable an event processing rule or a Data decoder.

Kibana (beta feature, limited access) | KIBANA_R | Admin | Access to a Kibana instance running on the Search Service. Also requires the DATA_R role to be effective.

Bus Config | BUS_CONFIG_R | Admin, User | Read configuration parameters of a FIFO queue or a binding rule.

Bus Config | BUS_CONFIG_W | Admin | Create, modify a FIFO queue or a binding rule.

Bus Access | BUS_R | Admin, User | Read data on the Live Objects bus. Minimum permission for the API key of an application collecting data on Live Objects over MQTT(s).

Bus Access | BUS_W | Admin, User | Publish data on the Live Objects bus.

 | DEVICE_ACCESS | Admin, User | Role to set on a Device API key to allow only the MQTT Device mode.

4.5. Message

Every interaction between Live Objects and devices and applications is modeled as one or many "messages".

Those messages follow a common format (imposed by a shared communication library), composed of different fields, all optional.

On the « public » interfaces (MQTT & HTTP), messages can be represented using various encodings.

For more info about message encodings, see the "Message encodings" section.

4.6. Asset

The term 'asset' is used in Live Objects to designate an entity managed by Live Objects.

An asset is uniquely identified by a namespace / id pair.

5. Message bus

Live Objects "message bus" is the central layer between the Connectivity layer and Service layer.

This message bus offers various modes:

  • Router: adapted to situations where publishers don't know the destination of the messages. Messages can either be consumed with transient subscriptions, or static "Bindings" can be declared to route messages into FIFO queues. More info: Router mode.

  • PubSub: a good fit for real-time exchanges. Messages are broadcast to all available subscribers, or dropped. More info: PubSub mode.

  • FIFO: the solution to set up point-to-point messaging and guarantee that messages are delivered to the consumer. Messages are stored in a queue on disk until consumed and acknowledged. Each message is delivered to only one consumer; when multiple consumers are subscribed to the same queue concurrently, messages are load-balanced between available consumers. More info: FIFO mode.

Communications between devices or external applications and the Live Objects interfaces are translated into interactions with the Live Objects message bus. For example, on the Live Objects MQTT interface, a publication to MQTT topic "pubsub/test" is translated into a message publication on the Live Objects message bus on PubSub topic "test".

Various usage of Live Objects message bus


A topic is uniquely identified by a string with the following format: “<topic type>/<topic name>”. Where <topic type> can be “pubsub” or “fifo”, and <topic name> is an arbitrary string.

Example
“pubsub/alldevices” or “fifo/alerts”

Tenants are free to use PubSub and FIFO topics to achieve the communication patterns they need between their devices and applications.

Note that some functions of Live Objects use special topics, all identified by a name starting with "~" (e.g. "pubsub/~v0/asset/connected"). The messages exchanged on those topics must respect a standard format.
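The identifier format above can be captured in a small helper. This is an illustrative sketch, not an official client API:

```python
def topic_id(topic_type: str, topic_name: str) -> str:
    """Build a full bus topic identifier, "<topic type>/<topic name>"."""
    if topic_type not in ("pubsub", "fifo"):
        raise ValueError("topic type must be 'pubsub' or 'fifo'")
    return f"{topic_type}/{topic_name}"

def is_reserved(topic_name: str) -> bool:
    """Topic names starting with '~' are reserved for Live Objects functions."""
    return topic_name.startswith("~")

# The examples from the text:
assert topic_id("pubsub", "alldevices") == "pubsub/alldevices"
assert is_reserved("~v0/asset/connected")
```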

5.1. Router mode

Example 1. ROUTER mode


  • (At the bottom) A client publishes into the Router a message with routing key "data.alarm"…

  • (On the left) Two clients are subscribed on the Router with routing key filter "data.#" and consumer identifier "consumer#1". As the routing key filter matches the routing key of the published message, the message is delivered to those clients. As those clients are subscribed with the same consumer id, the message is "load-balanced": only one of the two consumers receives the message.

  • (At the center) A binding with routing key filter "data.#" is declared from the Router to the FIFO queue "fifo01": this routing key filter matches the routing key of the published message so the message is delivered to this FIFO queue as if it was published in FIFO mode to topic "fifo01".

  • (On the right) A binding with routing key filter "*.alarm" is declared from the Router to the FIFO queue "fifo02": this routing key filter matches the message routing key, so the message is delivered to the FIFO. As a subscriber is currently subscribed to the FIFO queue, it immediately receives the message, but the message is also stored on disk into the queue until acknowledged.
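The filter semantics used in this example ("*" matches exactly one dot-separated word, "#" matches zero or more) can be sketched in Python. This is an illustrative re-implementation for experimenting with filters, not the code Live Objects runs:

```python
def matches(filter_words, key_words):
    """Recursively match a list of filter tokens against key words."""
    if not filter_words:
        return not key_words
    head, rest = filter_words[0], filter_words[1:]
    if head == "#":
        # '#' matches zero or more words: try every possible split point
        return any(matches(rest, key_words[i:]) for i in range(len(key_words) + 1))
    if not key_words:
        return False
    if head == "*" or head == key_words[0]:
        # '*' matches exactly one word; a literal token must match exactly
        return matches(rest, key_words[1:])
    return False

def filter_matches(routing_filter: str, routing_key: str) -> bool:
    return matches(routing_filter.split("."), routing_key.split("."))

# The bindings from the example above, applied to routing key "data.alarm":
assert filter_matches("data.#", "data.alarm")       # fifo01's binding matches
assert filter_matches("*.alarm", "data.alarm")      # fifo02's binding matches
assert not filter_matches("*.alarm", "data.alarm.low")
```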

5.2. PubSub mode

Communications in PubSub mode are based on the usage of "topics".

A "topic" is a message source/destination identified by a unique string identifier.

In PubSub mode, Live Objects message bus clients can publish or subscribe to one or many "topics". When a client publishes a message on a specific PubSub topic, the message is broadcast in real-time to all currently subscribed clients. The message is not persisted by Live Objects messaging layer: if no consumers have subscribed, the message is simply dropped and lost forever.

There is no need to declare PubSub "topics" before using them: a "topic" exists as long as at least one client is subscribed to it.

The PubSub mode is a good fit for the following patterns:

  • broadcasting non-critical events to groups of consumers,

  • one-to-one real-time dialogs (simply use a randomly generated topic identifier).

Example 2. PubSub mode


  • On the left, a client publishes on PubSub topic "test" while two consumers are subscribed; the message is duplicated and delivered to both consumers.

  • On the right, a client publishes on PubSub topic "alarms" while no consumers are subscribed: the message is dropped.

5.3. FIFO mode / queues

Like in PubSub mode, communication in FIFO mode is also based on the usage of "topics".

There is no conflict between the naming of PubSub topics and FIFO topics: the PubSub topic "test" is different from the FIFO topic "test".

Messages published on a FIFO topic are persisted until a subscriber is available and acknowledges the handling of the message. If multiple subscribers consume from the same FIFO topic, messages are load balanced between them. Publication to and consumption from a FIFO topic use acknowledgement, ensuring no message loss. Before being used, a FIFO topic must be created from the Live Objects web portal.

Example 3. FIFO mode


  • On the left, a client publishes in FIFO topic/queue "fifo01" while no consumer is subscribed. The message is stored into the queue, on disk. When a consumer later subscribes to the FIFO topic/queue, the message will be delivered. The message will only disappear from disk once a subscriber acknowledges its reception.

  • On the right, a client publishes on FIFO topic/queue "fifo02" while a consumer is subscribed: the message is stored on disk and immediately delivered to the consumer. The message will only disappear from disk once a subscriber acknowledges its reception. When a consumer that received the message but didn't acknowledge it unsubscribes from the topic/queue, the message is put back into the "fifo02" queue and will be delivered to the next available consumer.

FIFO queues are size-limited. The maximum size is given in bytes. Once the limit is reached, messages are dropped from the front of the queue to make room for new messages, meaning that the oldest messages are dropped first.
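The drop-from-front policy behaves like a bounded queue. A Python illustration using a deque limited by message count (an assumption for simplicity: the real limit is expressed in bytes, not messages):

```python
from collections import deque

# A bounded deque illustrates the eviction policy: when full, appending
# a new message silently drops the oldest one from the front.
fifo = deque(maxlen=3)
for message in ["msg1", "msg2", "msg3", "msg4"]:
    fifo.append(message)

assert list(fifo) == ["msg2", "msg3", "msg4"]   # "msg1" (oldest) was dropped
```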

The total number of FIFO queues and the sum of their sizes are limited depending on your offer.

For more info about limitations, see the "Limitation" chapter.

5.4. Message encodings

5.4.1. JSON (version 0)

5.4.1.1. Definition

This JSON format allows exchanging with the various services of the platform and connecting devices for device management. The data part is deprecated and is now replaced by the specialized JSON data format.

The MQTT message payload should be a valid JSON Object value, with the optional following attributes:

  • correlationId: (number) message correlation id (for RPC). This field, used on RPC requests and responses, contains a number value used to match a request with its response during an RPC exchange.

  • replyTo: (string) message "reply to" (for RPC requests). This field, used on RPC requests, contains a string identifying a publication destination (pubsub, fifo or router) where the response to the request must be sent.

  • source : (list) a list of sources; for each source:

    • order : (number) the source order (0 for initial source, 1 for first repeater/gateway…​)

    • namespace : source identifier namespace

    • id : source identifier (in the namespace)

    • ts : timestamp (ms since EPOCH),

  • timestamp : (number) the timestamp associated with the information, expressed in epoch timestamp (elapsed milliseconds since Jan 01 1970), UTC.

  • event : (string) message "event" (= identifies the type / trigger).

  • eventLifecycle: (string) value between "BEGIN", "ONGOING", "END", "ONE_SHOT"

  • payload : (string/binary) message payload. This field contains the raw binary content of the message. It can be used to convey encrypted data, for example.

  • data : (list) list of data entries:

    • key : (string) data entry key,

    • jsonValue : (string) data entry value, as JSON in an escaped string,

  • location : location associated with the message

    • lat : (number) latitude

    • lon : (number) longitude

  • asset : field used to describe status of the source asset for Device management exchanges.

5.4.1.2. source

The source field is used on message representing information coming into Live Objects to indicate the path taken by the information before arriving into Live Objects.

Its value is a list of objects, each one describing a "step" in the path, with the following attributes:

namespace

the first part of the step identifier

id

the second part of the step identifier

ts

the date/time when the information left this "step"

order

indicates the position of this step in the path, "0" meaning "the initial source of information", "1" the first repeater/gateway, etc.


Example
{
   "source": [
      {
         "namespace": "sensor",
         "id": "78239",
         "ts": 1457430816710,
         "order": 0
      },
      {
         "namespace": "gateway",
         "id": "777100001",
         "ts": 1457430825000,
         "order": 1
      }
   ],
   ...
}
5.4.1.3. Example
{
   "correlationId": 122,
   "replyTo": "pubsub/~123243213211",
   "source": [
      {
         "order": 0,
         "namespace": "sensor",
         "id": "001",
         "ts": 1447944553700
      }
   ],
   "timestamp": 1447944553720,
   "event": "FIRE_ALARM",
   "eventLifecycle": "BEGIN",
   "location": {
      "lat": 48.576,
      "lon": 5.747
   },
   "data": [
      {
         "key": "temp",
         "jsonValue": "12.87"
      }
   ],
   "payload": "RC:98:A:1:AZ:EZEZA"
}
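Note that each jsonValue above is JSON wrapped in an escaped string, so consumers must decode it a second time. A small illustrative Python sketch (the helper name is not part of any Live Objects API):

```python
import json

def data_entries(message_json: str) -> dict:
    """Extract the "data" entries of a v0 JSON message as a plain dict.

    Each entry's "jsonValue" is itself JSON in an escaped string and is
    therefore decoded a second time here.
    """
    message = json.loads(message_json)
    return {entry["key"]: json.loads(entry["jsonValue"])
            for entry in message.get("data", [])}

example = '{"event": "FIRE_ALARM", "data": [{"key": "temp", "jsonValue": "12.87"}]}'
assert data_entries(example) == {"temp": 12.87}
```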

5.4.2. JSON for data message

5.4.2.1. Definition

This JSON encoding is intended to model the data collected from IoT things (devices, etc.).

streamId

string uniquely identifying a timeseries / "stream",

timestamp

date/time associated with the collected information,

location

geo location (latitude and longitude) associated with the collected info,

model

string used to indicate what schema is used for the value part of the message,

value

structured representation (JSON object) of the transported info,

tags

list of strings associated with the message to convey extra-information,

metadata

section controlled / enriched by Live Objects

  • source : unique identifier (usually URN) of the source device,

Messages must be published:

  • with device mode to dev/data

  • with bridge mode for payload to router/~event/v1/data/new/(…​),

Example #1 - data collected from MQTT
{
  "streamId" : "urn:uuid:61d2a520-c153-4ec8-a47e-fee21f4eee82!atmos",
  "timestamp" : "2016-03-08T10:02:44.907Z",
  "location" : {
    "lat" : 44.1,
    "lon" : -1.5
  },
  "model" : "atmos_v0",
  "value" : {
    "temp" : 17.25,
    "humidity" : 12.0
  },
  "tags" : [ "City.Lyon", "Model.LoraMoteV1" ]
}
Example #2 - data collected from LPWA
{
    "streamId" : "urn:lpwa:deveui:7A09AEF7E097A7EF!uplink",
    "timestamp" : "2016-03-08T10:02:43.944Z",
    "location" : {
     "lat" : 44.1,
     "lon" : -1.5
    },
    "model" : "lpwa_v1",
    "value" : {
     "port" : 1,
     "fcnt" : 138,
     "rssi" : -111,
     "snr" : -6,
     "sf" : 8,
     "payload" : "a3e1eff054"
    },
    "tags" : [ "City.Lyon", "Model.LoraMoteV1" ],
    "metadata" : {
       "source" : "urn:lpwa:deveui:7A09AEF7E097A7EF"

    }
}

5.5. Remote Procedure Call (RPC)

The Remote Procedure Call is used to execute a command on another module over the Live Objects bus.

Here is the request message format:

{
   "replyTo": "pubsub/~mycallback_topic_12AE45E",
   "correlationId": 156,
   "payload": "The request payload"
}

Here is the field description:

  • replyTo: the reply topic of the RPC request,

  • correlationId: the correlation ID of the RPC request,

  • payload: the content of the RPC request.

Note about the replyTo topic:

  • It must be a "~" topic

  • It should be a unique topic, so include a random part in the string

  • Do not forget to subscribe to it before sending the request

Here is the answer message format published on the reply topic given on replyTo:

{
   "correlationId": 156,
   "payload": "This is the answer payload"
}

Here is the field description:

  • correlationId: the correlation ID that corresponds to the RPC request

  • payload: the content of the RPC answer
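The request/response correlation described above can be sketched in Python; the helper names are illustrative, and the actual publish/subscribe on the replyTo topic would go through your bus client:

```python
import json
import secrets

def make_rpc_request(payload: str):
    """Build an RPC request with a unique replyTo topic and correlation ID.

    The topic naming follows the recommendations above: a '~' topic
    containing a random part so replies cannot collide.
    """
    correlation_id = secrets.randbelow(1_000_000)
    request = {
        "replyTo": f"pubsub/~mycallback_topic_{secrets.token_hex(4)}",
        "correlationId": correlation_id,
        "payload": payload,
    }
    return correlation_id, json.dumps(request)

def is_reply_for(correlation_id: int, answer_json: str) -> bool:
    """Match an answer to its pending request by correlation ID."""
    return json.loads(answer_json).get("correlationId") == correlation_id

# Subscribe to the replyTo topic first, then publish the request.
cid, request = make_rpc_request("The request payload")
answer = json.dumps({"correlationId": cid, "payload": "This is the answer payload"})
assert is_reply_for(cid, answer)
```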

6. Device management

An “asset” is a generic term that can designate a device (sensor, gateway) or an entity observed by devices (ex: a building, a car).

6.1. Asset Supervision

Live Objects can track status changes of your assets for you: connection status (connected/disconnected), the route used by your asset to communicate with the service, and the last contact date.

For this, you need to publish messages in the standard format to notify the service of your asset connections / disconnections and status updates.

6.1.1. "Asset connected" event

Once connected to Live Objects, a device needs to explicitly notify its identity and supported features to the platform to become "manageable".

To do this, the device must publish in Router mode with routing key ~event.v2.assets.{ns}.{id}.connected (i.e. in MQTT on topic router/~event/v2/assets/{ns}/{id}/connected), where:

{ns}

the namespace of device identifier (ex: the device model, or identifier family)

{id}

the device identifier - must be unique within the specified namespace

The message to publish must have the following structure:

{
   "source": [
      {
         "order": 0,
         "namespace": "{ns}",
         "id": "{id}"
      }
   ],
   "asset": {
      "topicParamUpdate": "{topicParamUpdate}",
      "topicCommand": "{topicCommand}",
      "topicResourceUpdate": "{topicResourceUpdate}"
   }
}

With:

ns

the device identifier namespace

id

the device identifier

topicParamUpdate

(optional) the MQTT topic where the device is subscribed and awaiting parameter update requests

topicCommand

(optional) the MQTT topic where the device is subscribed and awaiting commands

topicResourceUpdate

(optional) the MQTT topic where the device is subscribed and awaiting resource update requests

Example
{
   "source": [
      {
         "order": 0,
         "namespace": "dongle",
         "id": "00-14-22-01-23-45"
      }
   ],
   "asset": {
      "topicParamUpdate": "pubsub/~device/dongle/00-14-22-01-23-45/cfg",
      "topicResourceUpdate": "pubsub/~device/dongle/00-14-22-01-23-45/res"
   }
}
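Building the topic and message for this event can be sketched as below. The topic layout and field names follow the structure documented above; the helper itself is illustrative, not part of any official SDK.

```python
import json

def connected_event(ns, dev_id, param_topic=None, command_topic=None,
                    resource_topic=None):
    """Build the MQTT topic and JSON body for an "Asset connected" event."""
    topic = "router/~event/v2/assets/%s/%s/connected" % (ns, dev_id)
    asset = {}
    # All three topics are optional; only include the ones the device supports.
    if param_topic:
        asset["topicParamUpdate"] = param_topic
    if command_topic:
        asset["topicCommand"] = command_topic
    if resource_topic:
        asset["topicResourceUpdate"] = resource_topic
    message = {
        "source": [{"order": 0, "namespace": ns, "id": dev_id}],
        "asset": asset,
    }
    return topic, json.dumps(message)
```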

6.1.2. "Asset disconnected" event

A connected device can explicitly tell Live Objects that it is now "disconnected" and unavailable for all device management mechanisms.

Note that this is purely optional: such a message is automatically generated internally when the MQTT connection is broken or closed, for every asset identity that was announced on this connection.

To emit such a notification, you must publish in Router mode with routing key ~event.v2.assets.{ns}.{id}.disconnected (i.e. in MQTT on topic router/~event/v2/assets/{ns}/{id}/disconnected) a message with the following JSON structure:

{
   "source": [
      {
         "order": 0,
         "namespace": "{ns}",
         "id": "{id}"
      }
   ]
}

6.2. Asset Configuration

An "asset" can declare one or many "parameters": a parameter is identified by a string "key" and can take a typed value (binary, int32, uint32, timestamp).

Live Objects can track changes to the current values of an asset's parameters, and allows users to set different target values for those parameters. Live Objects will then try to update the parameters on the asset once it is connected and available.

Asset configuration sync

landing

  • (before) :

    • asset initiates MQTT connection with Live Objects,

    • asset subscribes in MQTT to a private topic, where it will receive later the configuration update requests,

  • step 0 : asset notifies Live Objects that it is connected and available for configuration updates on a specific topic (cf. Asset Supervision),

  • step 1 : asset notifies Live Objects of its current configuration,

  • step 2 : Live Objects compares the current and target configurations for this asset. If they differ:

    • step 3 : Live Objects sends to the asset, on the topic indicated at step 0, the list of parameters to update, with their target value,

    • step 4 : asset handles the request, and tries to apply the change(s),

    • step 5 : asset responds to the change request with the new configuration,

    • step 6 : Live Objects saves the new configuration. Parameters that have been successfully updated now have the status "OK" and the others the status "ERROR".
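The compare-and-update logic of steps 2 and 6 can be sketched as follows. This is a minimal Python illustration of the platform-side logic; the dict-based parameter representation is our own simplification, not a platform API.

```python
def params_to_update(current, target):
    """Step 2: keep only the target parameters whose value differs
    from the current configuration reported by the asset."""
    return {key: value for key, value in target.items()
            if current.get(key) != value}

def param_statuses(requested, answered):
    """Step 6: a parameter is "OK" when the asset's answer matches the
    requested target value, "ERROR" otherwise."""
    return {key: ("OK" if answered.get(key) == value else "ERROR")
            for key, value in requested.items()}
```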

6.2.1. "Current Configuration" event

A connected device can notify Live Objects of its current configuration by publishing in Router mode with routing key ~event.v2.assets.{ns}.{id}.currentParams (in MQTT, topic router/~event/v2/assets/{ns}/{id}/currentParams) :

{ns}

the "namespace" of device identifier (ex: the device model, or identifier family)

{id}

the device identifier - must be unique within the specified "namespace"

The message to publish must have the following structure:

{
   "source": [
      {
         "order": 0,
         "namespace": "{ns}",
         "id": "{id}"
      }
   ],
   "asset": {
      "params": {
         "{param1Key}": {
            "value{param1Type}": {param1Value}
         },
         "{param2Key}": {
            "value{param2Type}": {param2Value}
         },
         ...
      }
   }
}

With:

param{X}Key

a string uniquely identifying the device configuration parameter

param{X}Type

indicates the config parameter type between

"Int32"

the value must be an integer between -2,147,483,648 and 2,147,483,647,

"UInt32"

the value must be a non-negative integer between 0 and 4,294,967,295,

"Raw"

the value is a base64 encoded binary content,

"String"

the value is a UTF-8 string,

"Float"

the value is a floating-point (64-bit) value.

Example
{
   "source": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "asset": {
      "params": {
         "conn_period_sec": {
           "valueUInt32": 60000
         },
         "log_level": {
           "valueRaw": "REVCVUc="
         },
         "can_filters": {
           "valueRaw": "MSwyNCw1LDIx"
         }
      }
   }
}
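A device-side sketch for building this event is shown below. The value{Type} field names follow the type list above; the type-selection rules in typed_value are our own simplification (for instance, any non-negative integer in range is mapped to UInt32).

```python
import base64
import json

def typed_value(value):
    """Map a Python value to one of the guide's typed fields
    (value{Type} with Type in Int32/UInt32/Raw/String/Float)."""
    if isinstance(value, bool):
        raise TypeError("no boolean parameter type")
    if isinstance(value, int):
        return {"valueUInt32": value} if 0 <= value <= 4294967295 else {"valueInt32": value}
    if isinstance(value, float):
        return {"valueFloat": value}
    if isinstance(value, bytes):
        # Raw values are base64-encoded binary content.
        return {"valueRaw": base64.b64encode(value).decode("ascii")}
    if isinstance(value, str):
        return {"valueString": value}
    raise TypeError("unsupported parameter type")

def current_params_message(ns, dev_id, params):
    """Build the topic and body of a "Current Configuration" event."""
    topic = "router/~event/v2/assets/%s/%s/currentParams" % (ns, dev_id)
    body = {
        "source": [{"order": 0, "namespace": ns, "id": dev_id}],
        "asset": {"params": {key: typed_value(v) for key, v in params.items()}},
    }
    return topic, json.dumps(body)
```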

6.2.2. Config update request

To receive configuration updates, the device must first subscribe to a topic where it will await configuration update requests, and then notify Live Objects, using an "Asset connected" event, that it is available for such requests on the topic indicated in the topicParamUpdate message field.

Your device must choose the topic name so that there is no conflict with other devices. We advise using a topic name containing your device namespace / id identifier couple.

For example pubsub/~device/{ns}/{id}/cfg.

When Live Objects needs to send a config update request, your device will receive a message with the following JSON structure:

{
   "target": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "correlationId": {correlationId},
   "replyTo": {correlationId},
   "asset": {
      "params": {
         "{param1Key}": {
            "value{param1Type}": {param1Value}
         },
         "{param2Key}": {
            "value{param2Type}": {param2Value}
         },
         ...
      }
   }
}

Note that this message is quite similar to the message emitted by your device to notify of its current configuration, except for the field "source", here called "target", and the two additional fields "correlationId" and "replyTo":

target

the identity of the targeted device (identical to the "source" of your device "current configuration" notification)

correlationId

a number that you must return in your device's response to this update request

replyTo

the topic where Live Objects is expecting the response to this update

asset.params…​

same structure as in the current configuration notification, except that only the parameters that need to be changed are listed, and the value is the new value to apply

6.2.3. Configuration update response

When receiving a Configuration update request, your device needs to try to apply the specified configuration changes and then to return the new values for the parameters that needed to change. That value can be the same as before the update, the new one requested, or another value, depending on the meaning of the parameter.

For example, if Live Objects request to change a parameter on the device to an invalid value the device can keep the previous value it had for this parameter or choose to apply another default value.

To answer a configuration update request, the device needs to publish a message on the {replyTo} topic indicated in the request. This topic should actually be the same as the one used for announcing the current device configuration.

The published message must have the following JSON structure:

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "correlationId": {correlationId},
   "asset": {
      "params": {
         "{param1Key}": {
            "value{param1Type}": {param1Value}
         },
         "{param2Key}": {
            "value{param2Type}": {param2Value}
         },
         ...
      }
   }
}

Note that this message is quite similar to the message emitted by your device to notify of its current configuration, except for the field "correlationId":

source

the identity of the device (identical to the "source" of your device "current configuration" notification)

correlationId

a number that was in the configuration update request, used by Live Objects to track the status of each configuration parameter

asset.params…​

same structure as in the current configuration notification.

You can, but do not have to, announce all your configuration parameters here; only the ones listed in the configuration update request are required.
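A device-side sketch of building this response is given below. It echoes the correlationId, publishes on the replyTo topic from the request, and reports only the requested parameters; `applied` is a hypothetical map of parameter keys to the typed-value dicts actually in force on the device after the update attempt.

```python
import json

def config_update_response(ns, dev_id, request_json, applied):
    """Answer a config update request: echo the correlationId and report
    the value now in effect for each requested parameter."""
    request = json.loads(request_json)
    requested = request["asset"]["params"]
    body = {
        "source": [{"order": 0, "namespace": ns, "id": dev_id}],
        "correlationId": request["correlationId"],
        # Only the parameters listed in the request need to be announced.
        "asset": {"params": {k: applied[k] for k in requested if k in applied}},
    }
    return request["replyTo"], json.dumps(body)
```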

6.3. Commands

You can register commands targeting a specific asset: as soon as the asset is available for commands, Live Objects sends them one by one, awaiting a response for each command from the asset before sending the next one.

Live Objects keeps track of every registered command with its status, and possible response.

Asset command processing

landing

6.3.1. Command request

After publishing an "Asset connected" event with a topicCommand, your device can receive at any time a command from Live Objects on the {topicCommand} topic.

Each "command" has the following JSON structure:

{
   "target": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "event":         "{event}",
   "correlationId": {correlationId},
   "replyTo":       "{replyTo}",
   "data": {
      "{key1}": "{key1Value}",
      "{key2}": "{key2Value}",
       ...
   },
   "payload":       "{payload}"
}

Where

{ns}

the target device identifier namespace

{id}

the target device identifier

{event}

the command "event" field, often used to convey the called method named

{correlationId}

a number that must be returned in the command response to allow Live Objects to correlate the request and response

{replyTo}

the topic where the command response is expected

{key<X>}

the key of a data field

{key<X>Value}

the JSON value associated with key<X>

{payload}

the base64-encoded command payload (raw byte array)

6.3.2. Command response

To respond to a received command, the client device must publish a message on the command request {replyTo} topic with the following JSON structure:

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "correlationId": {correlationId},
   "data": {
      "{key1}": "{key1Value}",
      "{key2}": "{key2Value}",
       ...
   },
   "payload":       "{payload}"
}

Where

{ns}

the source device identifier namespace

{id}

the source device identifier

{correlationId}

same value as in the Command request

{key<X>}

a key of a response data field

{key<X>Value}

the JSON value associated with key<X>

{payload}

the base64-encoded command payload (raw byte array)

Example

Request:
{
   "target": [
      {
          "order":     0,
          "namespace": "sensor",
          "id":        "001"
      }
   ],
   "correlationId": 879546045610,
   "replyTo": "pubsub/~7928372983792873",
   "event": "getTime",
   "data": {
      "timezone": "UTC"
   }
}
Response:
(published to "pubsub/~7928372983792873")
{
   "source": [
      {
          "order":     0,
          "namespace": "sensor",
          "id":        "001"
      }
   ],
   "correlationId": 879546045610,
   "data": {
      "status": 200,
      "time": "2016-06-14T12:30:56"
   },
   "payload": "U1VDQ0VTUw==" // "SUCCESS"
}
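A device-side handler for this getTime example can be sketched as below. The message fields follow the structures documented above; the handler itself and the now_iso parameter are illustrative, not an official API.

```python
import base64
import json

def handle_get_time(request_json, now_iso):
    """Parse a getTime command request and build the response
    to publish on the request's replyTo topic."""
    request = json.loads(request_json)
    if request.get("event") != "getTime":
        raise ValueError("unsupported command")
    # The response "source" mirrors the request "target".
    src = [{"order": e["order"], "namespace": e["namespace"], "id": e["id"]}
           for e in request["target"]]
    response = {
        "source": src,
        "correlationId": request["correlationId"],
        "data": {"status": 200, "time": now_iso},
        # The payload is a base64-encoded raw byte array.
        "payload": base64.b64encode(b"SUCCESS").decode("ascii"),
    }
    return request["replyTo"], json.dumps(response)
```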

6.4. Resource management

A "resource" is a versioned binary content (for example a device firmware).

You can manage a repository of resources in your tenant account.

Live Objects can track the current versions of resources on a specific asset.

You can set the target version of resources for a specific asset in Live Objects that will then try to update the resources on the asset as soon as the asset is available for resource update.

Asset resource update

landing

  • step 1: the device (or the codec communicating on its behalf) notifies the Resource Manager module of the currently deployed resource versions,

  • step 2: the Resource Manager module updates the current state of device/thing resource versions in the database, and compares it to the "target" resource versions for this device. For each resource on the device that is not in the "target" version:

    • step 3: the Resource Manager module sends a "prepare resource update" request to the Updater module in charge of the new resource version,

    • step 4: the Updater module prepares the update (for example by retrieving the binary content of this resource version, by creating a temporary access for the device on this resource, etc.),

    • step 5: the Updater module replies to the Resource Manager module with a status (update possible or not) and extra information to transmit to the device (for example a URN where the new resource can be downloaded, a security token to use to access the new resource, etc.)

    • step 6: the Resource Manager receives the Updater reply and, if the update is possible, builds a resource update request for the device, with the extra info provided by the Updater module;

    • step 7: the Resource Manager sends the resource update request to the device;

    • step 8: the device proceeds to retrieve the new resource version (ex: HTTP/FTP download…​) from the Updater module, using if needed the extra info that was specified in the resource update request;

    • step 9: during transfer the Updater module or the device notifies the Resource Manager module of the transfer progress;

    • step 10: once the new resource has been completely transferred to the device, the device can verify the binary content (for example by checking a cryptographic signature or comparing a content hash) and applies the update;

    • step 11: the device notifies the Resource Manager of the resource update result (success or failure).

6.4.1. Current resource versions

Your device can announce at any time the current versions of its resources by publishing a message in Router mode with routing key ~event.v0.assets.{ns}.{id}.currentResources (i.e. in MQTT on topic router/~event/v0/assets/{ns}/{id}/currentResources):

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "payload": "
      // base64 encoded...
      [
         {
            "resourceId":        "{res1Id}",
            "resourceVersionId": "{res1Version}",
            "connectorMetadata": {res1Metadata}
         },
         ...
      ]
      // ... base64 encoded
   "
}

Where:

res{X}Id

(required) identifier for resource X

res{X}Version

(required) current version for resource X

res{X}Metadata

(optional) JSON object, map of metadata associated with this resource (useful for resource update transfer)
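As the structure above shows, the resource list is itself a JSON document that is base64-encoded into the payload field. A sketch of building this event (helper name is our own):

```python
import base64
import json

def current_resources_message(ns, dev_id, resources):
    """Build the currentResources event: the resource list is serialized
    to JSON and then base64-encoded into the payload field."""
    topic = "router/~event/v0/assets/%s/%s/currentResources" % (ns, dev_id)
    listing = json.dumps(resources)
    body = {
        "source": [{"order": 0, "namespace": ns, "id": dev_id}],
        "payload": base64.b64encode(listing.encode("utf-8")).decode("ascii"),
    }
    return topic, json.dumps(body)
```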

6.4.2. Resource update request

Once your device has announced a topicResourceUpdate topic in an "Asset connected" event, it can receive at any time a message on this topic, requesting a resource update:

{
   "target": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "data": {
      "resourceId":    "{resourceId}",
      "sourceVersion": "{resourceCurrentVersionId}",
      "targetVersion": "{resourceNewVersionId}",
      "{param1Key}": {param1Value},
      "{param2Key}": {param2Value},
      ...
   },
   "replyTo":       "{replyTo}",
   "correlationId": {correlationId}
}

Where:

{resourceId}

identifies the resource to update

{resourceCurrentVersionId}

the current version of the resource to update (should be checked by device)

{resourceNewVersionId}

the new version of the resource to download and install

{payload}

(optional) a base64 content that can give extra info to download the new resource version (ex: URI, token, etc.)

{param(X)Key}

key identifying an extra parameter added to the resource update request

{param(X)Value}

JSON value (string, number, any…​) associated with key {param(X)Key}

{replyTo}

topic where response to the resource update request must be sent

{correlationId}

(signed integer) identifier that must be re-used in the response so that Live Objects correlates the correct response and request

6.4.3. Resource update response

Shortly after receiving the resource update request, the device must respond to indicate whether it accepts to perform the update:

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "data": {
      "status":                    "{status}",
      "topicCancelResourceUpdate": "{topicCancelResourceUpdate}"
   },
   "correlationId": {correlationId}
}

Where:

{status}

(required) indicates whether the device accepts to perform the update, and why not if it refuses. Possible values: "OK", "UNKNOWN_ASSET", "INVALID_RESOURCE", "WRONG_SOURCE_VERSION", "WRONG_TARGET_VERSION", "NOT_AUTHORIZED", "INTERNAL_ERROR"

{topicCancelResourceUpdate}

(optional) the topic where the device is available to receive requests to cancel the resource update

{correlationId}

(signed integer) same value as in the resource update request
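The accept/reject decision can be sketched as below. The status values come from the list above; the version check against a local map of installed versions is a simplified illustration of the device-side logic.

```python
import json

def resource_update_response(ns, dev_id, request_json, installed_versions):
    """Decide whether to accept a resource update request and build
    the response to publish on its replyTo topic."""
    request = json.loads(request_json)
    data = request["data"]
    current = installed_versions.get(data["resourceId"])
    if current is None:
        status = "INVALID_RESOURCE"
    elif current != data["sourceVersion"]:
        # The device should check that sourceVersion matches what it runs.
        status = "WRONG_SOURCE_VERSION"
    else:
        status = "OK"
    body = {
        "source": [{"order": 0, "namespace": ns, "id": dev_id}],
        "data": {"status": status},
        "correlationId": request["correlationId"],
    }
    return request["replyTo"], json.dumps(body)
```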

6.4.4. Resource update error

Your device can report a custom resource update error by publishing a message in Router mode with routing key ~event.v0.assets.{ns}.{id}.resourceUpdateError (i.e. in MQTT on topic router/~event/v0/assets/{ns}/{id}/resourceUpdateError):

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "payload": "
      // base64 encoded...
      [
         {
            "errorCode":    "{errorCode}",
            "errorDetails": "{errorDetails}"
         },
         ...
      ]
      // ... base64 encoded
   "
}

Where:

errorCode

(optional) device error code,

errorDetails

(required) device error details.

These fields are limited to 256 characters; characters beyond this limit are ignored.
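A sketch of building this event, applying the 256-character limit before encoding (helper name is our own):

```python
import base64
import json

def resource_update_error(ns, dev_id, code, details):
    """Build the resourceUpdateError event; the error list is serialized
    to JSON and base64-encoded into the payload field."""
    topic = "router/~event/v0/assets/%s/%s/resourceUpdateError" % (ns, dev_id)
    # Truncate proactively: the service ignores characters beyond 256 anyway.
    errors = [{"errorCode": (code or "")[:256], "errorDetails": details[:256]}]
    body = {
        "source": [{"order": 0, "namespace": ns, "id": dev_id}],
        "payload": base64.b64encode(json.dumps(errors).encode("utf-8")).decode("ascii"),
    }
    return topic, json.dumps(body)
```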

6.4.5. Resource update example

Device notifies of current version:

{
   "source": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "payload": "W3sNCiAgICJyZXNvdXJjZUlkIjogICAgImRvbmdsZVYyX2Zpcm13YXJlfSIsDQogICAicmVzb3VyY2VWZXJzaW9uSWQiOiAiMS4xIiwNCiAgICJjb25uZWN0b3JNZXRhZGF0YSI6IHsiY2hlY2tzdW0iOiAibWQ1In0NCn1d"
   // (base 64)
   // [{
   //    "resourceId": "dongleV2_firmware}",
   //    "resourceVersionId": "1.1",
   //    "connectorMetadata": {"checksum": "md5"}
   // }]
}

Device receives resource update request:

{
   "target": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "data": {
      "resourceId":    "dongleV2_firmware}",
      "sourceVersion": "1.1",
      "targetVersion": "1.3",
      "uri":           "http://.../bin/dongleV2_firmware/versions/1.3/fw_13.bin",
      "md5":           "098f6bcd4621d373cade4e832627b4f6"
   },
   "replyTo": "pubsub/~0574badc-0abf-433e-a8d3-05e7c8f26210",
   "correlationId": -2754511
}


Then the device parses the request parameters and responds to Live Objects on the "replyTo" topic ("pubsub/~0574badc-0abf-433e-a8d3-05e7c8f26210"), indicating whether it accepts to perform the update:

{
   "source": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "data": {
      "status": "OK"
   },
   "correlationId": -2754511
}

The device then processes this request and downloads/installs the resource content (if necessary by parsing the payload to extract needed info like URI, token, etc.).

It’s up to the resource transfer module to track the status of the download (progress and status: SUCCESS/FAILED).

6.5. Auto Provisioning

Live Objects automatically registers new assets in the inventory the first time a device publishes an "Asset connected" or "Asset disconnected" event.

When registering a previously unknown asset, Live Objects emits an "Asset created" event.

From the web portal or the APIs you can "delete" an asset: all related device management information will be forgotten by Live Objects, and an "Asset deleted" event will be published.

6.5.1. "Asset created" event

This event is emitted in Router mode with routing key ~event.v1.assets.<assetIdNamespace>.<assetId>.created:

{
   "payload":
      // base64 encoded...
      {
         "assetIdNamespace": "{ns}",
         "assetId": "{id}"
      }
      // ...base64 encoded.
}

6.5.2. "Asset deleted" event

This event is emitted in Router mode with routing key ~event.v1.assets.<assetIdNamespace>.<assetId>.deleted:

{
   "payload":
      // base64 encoded...
      {
         "assetIdNamespace": "{ns}",
         "assetId": "{id}"
      }
      // ...base64 encoded.
}

6.6. Campaign management

Campaign management is a Live Objects feature that allows a fleet manager to schedule execution of device management operations on a large number of assets.

The following operations are supported in a campaign definition:

  • commands,

  • parameters configuration,

  • resource update.

6.6.1. Campaign creation

When creating a campaign, a user must provide the following information:

name

A short name to identify the campaign

description

(optional) Detailed description of the campaign

options

(optional) Set of campaign options.

planning

The scheduling configuration including the start date and the end date for the campaign.

targets

Assets targeted: either idList or filterQuery (exclusively). cf. Campaign targets

operations

A sequence of operations that will be executed on each asset of the campaign.

Campaigns are created from the REST API by providing a campaign definition with properties described previously.

6.6.1.1. Campaign options

Campaign options section could be empty or omitted.

Example of campaign options:

"options": {
    "dynamicallyAddEligibleDevice": true
  }

Options are:

  • dynamicallyAddEligibleDevice (default: false).

    Set this option to true to dynamically enroll assets to the campaign. During the campaign planning, new or updated assets could be dynamically enrolled if they match the filterQuery.

    This option requires a filterQuery target definition.

    A campaign with the dynamicallyAddEligibleDevice option will always be in running state until the end date, whereas other (non-dynamic) campaigns can end as soon as all device operations have ended or the end date is reached.

6.6.1.2. Campaign operations

Operation types are: Config, Command, Resource.

All operation definitions can embed an optional maxRetry attribute: the default is 0 and the maximum is 5.

In case of operation failure, the operation will be retried up to maxRetry times.

6.6.1.3. Config operation

Example of config operation definition:

  {
     "action":"configure",
     "definition":{
        "assetParameters":{
           "param1":{
              "type":"INT32",
              "valueInt32":1234
           }
        },
        "maxRetry": 1
     }
  }
action

configure action will send to the asset one or many "parameters" to update.

definition

assetParameters has the same format as the corresponding unitary device management operation: Asset configuration.

maxRetry (optional) defines how many retries should be executed in case of failure of the current operation.

6.6.1.4. Command operation

Example of command operation definition:

  {
    "action": "command",
    "definition": {
        "event": "reset",
        "data": {
            "temp": "12"
        },
        "maxRetry": 0
    }
  }
action

command action will register (and send) a command to the asset.

definition

this section uses event, data and payload attributes in the same format as the corresponding unitary device management operation: Commands.

maxRetry (optional) defines how many retries should be executed in case of failure of the current operation.

6.6.1.5. Resource operation

Example of resource operation definition:

{
  "action": "resource",
  "definition": {
    "resourceId": "X11_firmware",
    "targetVersion": "2.1",
    "maxRetry": 4
  }
}
action

resource action will send a resource update request to the asset.

definition

resourceId identifies the resource to update

targetVersion is the new version of the resource to download and install

maxRetry (optional) defines how many retries should be executed in case of failure of the current operation.

6.6.1.6. Campaign target (idList)

Campaign assets targets: either idList or filterQuery (exclusively).

idList is a flat list of assets identifiers.

Assets are identified using a URN identifier.

The format of this identifier must be urn:lo:nsid:{ns}:{id} with

ns the target device identifier namespace

id the target device identifier

idList is not compatible with dynamicallyAddEligibleDevice option.

Below is an example of campaign targets definition using idList:

   "targets":{
      "idList":[
         "urn:lo:nsid:namespace:device1",
         "urn:lo:nsid:namespace:device2"
      ]
   }
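
Building these URN identifiers can be sketched as below (helper names are our own; only the urn:lo:nsid:{ns}:{id} format comes from this guide):

```python
def asset_urn(ns, dev_id):
    """Build the urn:lo:nsid:{ns}:{id} identifier used in idList targets."""
    return "urn:lo:nsid:%s:%s" % (ns, dev_id)

def id_list_targets(pairs):
    """Build a campaign targets section from (namespace, id) couples."""
    return {"idList": [asset_urn(ns, dev_id) for ns, dev_id in pairs]}
```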
6.6.1.7. Campaign target (filterQuery RSQL)

filterQuery: use RSQL to target assets.

Below is an example of campaign targets definition using filterQuery:

  "targets": {"filterQuery": "groupPath==/"}
Table 1. RSQL Semantic Table

desc                          syntax
Logical AND                   ; or and
Logical OR                    , or or
Equal to                      ==
Not equal to                  !=
Less than                     =lt= or <
Less than or equal to         =le= or <=
Greater than                  =gt= or >
Greater than or equal to      =ge= or >=
In                            =in=
Not in                        =out=

Below are examples of filterQuery values.

  • Filter using tags

tags=in=(FUT,TEST1)

For example, devices with at least the tags "FUT" and "TEST1", whatever the order or additional tags

  • Filter on properties

properties.mykey=in=(enum1, enum3)
properties.mykey==toto
  • Filter on groups

groupID=in=(1224,1234)
groupID==1234
groupPath=in=(/FR, /EN)
6.6.1.8. Campaign creation examples

Below are examples of campaign definition:

Set parameter param1 to value 1234 on two devices
POST /api/v0/deviceMgt/campaigns
{
   "name":"campaign1",
   "description":"A campaign that configures parameters",
   "planning":{
      "startDate":"2017-07-01T00:00:00Z",
      "endDate":"2017-07-23T23:59:59Z"
   },
   "targets":{
      "idList":[
         "urn:lo:nsid:namespace:device1",
         "urn:lo:nsid:namespace:device2"
      ]
   },
   "operations":[
      {
         "action":"configure",
         "definition":{
            "assetParameters":{
               "param1":{
                  "type":"INT32",
                  "valueInt32":1234
               }
            }
         }
      }
   ]
}
Send a reset command with a delay parameter for devices with foo tag
POST /api/v0/deviceMgt/campaigns
{
   "name":"campaign2",
   "description":"A campaign that sends a command",
   "planning":{
      "startDate":"2017-07-01T00:00:00Z",
      "endDate":"2017-07-23T23:59:59Z"
   },
   "options": {
    "dynamicallyAddEligibleDevice": true
   },
   "targets": {"filterQuery": "tags=in=(foo)"},
   "operations":[
      {
         "action":"command",
         "definition":{
            "event":"reset",
            "data":{
               "delay":"5000"
            }
         }
      }
   ]
}
Update the resource firmware.bin to version 1.1 (with 2 retries max)
POST /api/v0/deviceMgt/campaigns
{
   "name":"campaign3",
   "description":"A campaign that updates a resource",
   "planning":{
      "startDate":"2017-07-01T00:00:00Z",
      "endDate":"2017-07-23T23:59:59Z"
   },
   "targets":{
      "idList":[
         "urn:lo:nsid:namespace:device1",
         "urn:lo:nsid:namespace:device2"
      ]
   },
   "operations":[
      {
         "action":"resource",
         "definition":{
            "resourceId":"firmware.bin",
            "targetVersion":"1.1",
            "maxRetry": 2
         }
      }
   ]
}
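Assembling such a campaign definition can be sketched as below; this builds the JSON body of the first example (the helper name and parameters are our own, not part of the REST API). The resulting string is what you would POST to /api/v0/deviceMgt/campaigns.

```python
import json

def configure_campaign(name, start, end, targets, params, description=None):
    """Assemble a 'configure' campaign definition as a JSON string."""
    campaign = {
        "name": name,
        "planning": {"startDate": start, "endDate": end},
        "targets": targets,  # either an idList or a filterQuery section
        "operations": [{"action": "configure",
                        "definition": {"assetParameters": params}}],
    }
    if description:
        campaign["description"] = description
    return json.dumps(campaign)
```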

6.6.2. Campaign reporting

Once a campaign is created, a fleet manager can monitor the state of a campaign.

A campaign can have one of the statuses described below:

SCHEDULED

The campaign has not yet started

RUNNING

The campaign is in progress

COMPLETE

The campaign is finished and all devices have properly been configured

INCOMPLETE

The campaign is finished but some devices could not be configured

SERVER_ERROR

An internal error occurred in the platform and the campaign could not be completed

CANCELING

The campaign is waiting for running sequences to end, sequences that have not started yet will not start

CANCELED

The campaign was canceled and some devices might not have been configured

The possible statuses for a device are presented below:

notStarted

No operation executed on the device

pending

At least one operation of the sequence is still in progress

success

All operations were successfully executed on the device

failure

At least one operation of the sequence failed

canceled

The sequence was canceled before the end of all operations

6.6.2.1. Campaign cancellation

A campaign can be canceled with the following REST API endpoint:

PUT /api/v0/deviceMgt/campaigns/{campaignId}/cancel

If the campaign is already running, canceling it will set its state to CANCELING and the campaign will wait for running sequences to end. Then the campaign state will switch to CANCELED.

To abort running sequences, the force flag can be used.

PUT /api/v0/deviceMgt/campaigns/{campaignId}/cancel
force = true
6.6.2.2. Campaign deletion

A campaign can be deleted with the following REST API endpoint:

DELETE /api/v0/deviceMgt/campaigns/{campaignId}

If the campaign is in RUNNING or CANCELING state, it cannot be deleted. In this case the force flag can be used to execute a forced cancellation and automatically delete the campaign once it is in CANCELED state.

DELETE /api/v0/deviceMgt/campaigns/{campaignId}
force = true
6.6.2.3. Global report

The global report indicates the campaign definition, the current status of a campaign and statistics about the number of devices with a given status.

Get global status of a specified campaign
GET /api/v0/deviceMgt/campaigns/{campaignId}
{
   "name":"campaign1",
   "description":"A campaign that configures parameters",
   "planning":{
      "startDate":"2017-07-01T00:00:00Z",
      "endDate":"2017-07-23T23:59:59Z"
   },
   "targets":{
      "idList":[
         "urn:lo:nsid:namespace:device1",
         "urn:lo:nsid:namespace:device2"
      ]
   },
   "operations":[
      {
         "action":"configure",
         "definition":{
            "assetParameters":{
               "param1":{
                  "type":"INT32",
                  "valueInt32":1234
               }
            }
         }
      }
   ],
   "numberOfTargets":2,
   "totalTargetsPerStatus":{
      "notStarted":0,
      "pending":1,
      "failed":0,
      "ok":1
   },
   "campaignStatus":"RUNNING",
   "created":"2017-06-01T00:00:00Z",
   "updated":"2017-07-01T00:00:00Z"
}
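As an illustration, a client can derive a completion ratio from the totalTargetsPerStatus block of the global report (a sketch; campaign_progress is not part of the Live Objects API):

```python
def campaign_progress(report: dict) -> float:
    """Fraction of targets that reached a final status (ok or failed),
    computed from the totalTargetsPerStatus block of a global report."""
    counts = report["totalTargetsPerStatus"]
    finished = counts["ok"] + counts["failed"]
    return finished / report["numberOfTargets"]

report = {
    "numberOfTargets": 2,
    "totalTargetsPerStatus": {"notStarted": 0, "pending": 1,
                              "failed": 0, "ok": 1},
    "campaignStatus": "RUNNING",
}
print(campaign_progress(report))  # 0.5
```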
6.6.2.4. Detailed report

The detailed report gives the status of each device in a campaign. The status property gives the status for the whole sequence of operations. The detailed report also indicates the status of each operation (operation reports are ordered just like in the campaign definition).

operationStatus

Exact status reported by the device manager (the list of possible values depends on the type of operation). A special value notStarted is used when the operation is not yet started.

operationId

Identifier returned by the device manager when the campaign manager created the operation

started

Date when the operation was started

updated

Last time the operation report was updated

ended

Date when the operation was finished

currentRetry

(Optional) Retry attempt count of the latest executed operation.

For example, an operationStatus equal to ok (or DONE) with a currentRetry equal to 1 means that the operation first failed, but the first retry attempt succeeded.

Get the detailed status of a specified campaign
GET /api/v0/deviceMgt/campaigns/{campaignId}/targets
{
   "page":0,
   "size":10,
   "totalCount":2,
   "data":[
      {
         "device":"urn:lo:nsid:namespace:id1",
         "status":"pending",
         "created":"2017-07-01T16:12:21.000Z",
         "updated":"2017-07-01T16:12:21.000Z",
         "operations":[
            {
               "action":"configure",
               "operationStatus":"ok",
               "started":"2017-07-01T16:20:21.000Z",
               "updated":"2017-07-01T16:25:21.000Z",
               "ended":"2017-07-01T16:25:21.000Z"
            },
            {
               "action":"command",
               "operationStatus":"sent",
               "operationId":"12345",
               "started":"2017-07-01T16:30:21.000Z",
               "updated":"2017-07-01T16:31:21.000Z"
            },
            {
               "action":"resource",
               "operationStatus":"notStarted"
            }
         ]
      },
      {
         "device":"urn:lo:nsid:namespace:id2",
         "status":"pending",
         "created":"2017-07-01T16:12:21.000Z",
         "updated":"2017-07-01T16:12:21.000Z",
         "operations":[
            {
               "action":"configure",
                "operationStatus":"ok",
               "started":"2017-07-01T16:20:21.000Z",
               "updated":"2017-07-01T16:25:21.000Z",
               "ended":"2017-07-01T16:25:21.000Z"
            },
            {
               "action":"command",
               "operationStatus":"sent",
               "operationId":"6789",
               "started":"2017-07-01T16:30:21.000Z",
               "updated":"2017-07-01T16:31:21.000Z"
            },
            {
                "action": "resource",
                "operationStatus": "DONE",
                "operationId": "X11_firmware",
                "started": "2017-07-01T16:30:21.000Z",
                "updated": "2017-07-01T16:38:21.000Z",
                "ended": "2017-07-01T16:38:21.000Z",
                "currentRetry": 1
            }
         ]
      }
   ]
}
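As a usage sketch (the helper name is ours, not part of the API), a client can scan a target report for the operations of a sequence that have not finished yet:

```python
def operations_in_progress(target_report: dict) -> list:
    """List the actions of a device's sequence that are not finished
    yet (no 'ended' date), in campaign-definition order."""
    return [op["action"] for op in target_report["operations"]
            if "ended" not in op]

target = {
    "device": "urn:lo:nsid:namespace:id1",
    "status": "pending",
    "operations": [
        {"action": "configure", "operationStatus": "ok",
         "ended": "2017-07-01T16:25:21.000Z"},
        {"action": "command", "operationStatus": "sent"},
        {"action": "resource", "operationStatus": "notStarted"},
    ],
}
print(operations_in_progress(target))  # ['command', 'resource']
```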

7. Data management

7.1. Concepts

Data management relies upon:

  • the store service, which stores data messages from IoT things (devices, gateways, IoT apps collecting data, etc.) as time-series data streams,

  • and the search service based on the popular open-source Elasticsearch product.

The collected data may be associated with a model. The model is a fundamental concept for the search service: it specifies the schema of the JSON "value" object. The model is dynamically updated based on the injected data.

The model concept is necessary to avoid mapping conflicts in the underlying Elasticsearch system.

A model can be seen as a "mapping space" in which data types must not conflict.

  • If no model is provided, the "value" object will not be indexed by the search service. Nevertheless, the data will be stored in the store service and all information except the value object will be indexed in the search service.

  • If the value JSON object does not comply with the provided model (for example, a field changes from long to String type), the data will not be inserted in the search service. The data message will only be stored in the store service.
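The "mapping space" rule can be illustrated with a toy registry (not the actual service implementation): within one model, a field keeps the JSON type of its first occurrence, and a later message using another type is rejected by the search service while still being kept by the store service.

```python
def check_against_model(model_fields: dict, value: dict) -> bool:
    """Toy illustration of the mapping rule: a field keeps the JSON
    type of its first occurrence within a model; a later message using
    another type for the same field is not indexed."""
    for field, v in value.items():
        seen = model_fields.setdefault(field, type(v))
        if seen is not type(v):
            return False  # type conflict: rejected by the search service
    return True

model = {}  # fields seen so far for model "data_model_v0"
print(check_against_model(model, {"temp": 24.1}))   # True, temp -> double
print(check_against_model(model, {"temp": "hot"}))  # False, double vs String
```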

7.2. Store service

The REST interface allows you to add data to a stream and to retrieve data from a stream. A stream could, for example, be associated with a single device (the streamId would then be a device identifier) or with one type of data coming from a device (the streamId would then have the format deviceIdentifier-typeOfData).

Add a data message to a stream :

Request
POST /api/v0/data/streams/{streamId}
X-API-Key: <your API key>
Accept: application/json

body param

description

data

JSON object conforming to data message structure

Warning: the streamId is provided as the last segment of the URL.

Request
POST /api/v0/data/streams/myDeviceTemperature
{
  "value": {"temp":24.1},
  "model": "data_model_v0"
 }

For this example, the value.temp field of model "data_model_v0" will be defined as a double type. If a String type is used in the future for value.temp, a new model must be defined. If value.temp is set to a String type while keeping model "data_model_v0", the message will be dropped by the search service.
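The request above can be built with Python's standard library (the base URL is an assumption; replace the API key with your own):

```python
import json
from urllib import request

BASE_URL = "https://liveobjects.orange-business.com"  # assumed base URL

def build_data_post(stream_id: str, api_key: str, value: dict,
                    model: str = None):
    """Build the POST that adds one data message to a stream."""
    body = {"value": value}
    if model:
        body["model"] = model
    return request.Request(
        f"{BASE_URL}/api/v0/data/streams/{stream_id}",
        data=json.dumps(body).encode(),
        method="POST",
        headers={"X-API-Key": api_key,
                 "Content-Type": "application/json",
                 "Accept": "application/json"},
    )

req = build_data_post("myDeviceTemperature", "<your API key>",
                      {"temp": 24.1}, model="data_model_v0")
# request.urlopen(req)  # actually sends the message
```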

Add a bulk of data messages :

Request
POST /api/v0/data/bulk
X-API-Key: <your API key>
Accept: application/json

body param

description

data

JSON array conforming to an array of data message structure

A bulk will be processed only if all array elements are valid; otherwise the whole bulk will be rejected. The maximum size of a bulk is 1000 elements.

Warning: the streamId is mandatory for each element of the bulk. This is a difference with the REST API for adding data to a single stream.

Request
POST /api/v0/data/bulk
[ {
  "streamId" : "temperature_stream_1",
  "value": {"temp":24.1},
  "model": "data_model_v1"
  },
  {
  "streamId" : "temperature_stream_1",
  "value": {"temp":24.1},
  "model": "data_model_v1"
  },
  {
  "streamId" : "pressure_stream_1",
  "value": {"pressure":1024.0},
  "model": "data_model_v1"
  }
]
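Since a bulk is rejected as a whole if any element is invalid and its size is capped at 1000, a client will typically validate and chunk messages locally before posting; a sketch (the helper name is ours):

```python
MAX_BULK_SIZE = 1000

def chunk_bulk(messages: list, size: int = MAX_BULK_SIZE) -> list:
    """Split messages into bulks of at most `size` elements, after
    checking that each element carries the mandatory streamId."""
    for msg in messages:
        if "streamId" not in msg:
            raise ValueError("streamId is mandatory in bulk elements")
    return [messages[i:i + size] for i in range(0, len(messages), size)]

msgs = [{"streamId": "temperature_stream_1", "value": {"temp": t}}
        for t in range(2500)]
bulks = chunk_bulk(msgs)
print([len(b) for b in bulks])  # [1000, 1000, 500]
```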

Retrieve data from a stream :

Request
GET /api/v0/data/streams/{streamId}
X-API-Key: <your API key>
Accept: application/json

Query params

Description

limit

Optional. Maximum number of data messages to return; the value is limited to 100

timeRange

Optional. Filter data whose timestamp is within the timeRange "from,to"

bookmarkId

Optional. Id of a document. This id will be used as an offset to access the data.

Documents are provided in reverse chronological order (newest to oldest)

Request
GET /api/v0/data/streams/myDeviceTemperature
 {
  "id" : "57307f6c0cf294ec63848873",
  "streamId" : "myDeviceTemperature",
  "timestamp" : "2016-05-09T12:15:41.620Z",
  "model" : "temperature_v0",
  "value" : {
    "temp" : 24.1
  },
  "created" : "2016-05-09T12:15:40.286Z"
 }
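Pagination can be sketched as follows: because documents are returned newest to oldest, the id of the last document of a page is passed as bookmarkId to fetch the next (older) page. The helper below (not part of the API) only builds the request path:

```python
from urllib.parse import urlencode

def stream_query(stream_id: str, limit: int = 100,
                 time_range: str = None, bookmark_id: str = None) -> str:
    """Build the GET path for retrieving data from a stream; the
    bookmarkId of the oldest document of a page becomes the offset
    for the next page."""
    params = {"limit": min(limit, 100)}  # limit is capped at 100
    if time_range:
        params["timeRange"] = time_range
    if bookmark_id:
        params["bookmarkId"] = bookmark_id
    return f"/api/v0/data/streams/{stream_id}?{urlencode(params)}"

print(stream_query("myDeviceTemperature", limit=50,
                   bookmark_id="57307f6c0cf294ec63848873"))
```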

A REST request-body search API is provided to perform search queries.

To learn more about the search API, read the Exploring your Data section of Elasticsearch: The Definitive Guide. (www.elastic.co/guide/en/elasticsearch/reference/current/_the_search_api.html)

To perform a search query :

Request
POST /api/v0/data/search
X-API-Key: <your API key>
Accept: application/json

body param

description

dsl request

elasticsearch DSL request

Example:

This query requests statistics from the myDeviceTemperature stream temp field.

Request
POST /api/v0/data/search
{
    "size" : 0,
    "query" :
    {
            "term" : { "streamId": "myDeviceTemperature" }
    },
    "aggs" :
    {
        "stats_temperature" : { "stats" : { "field" : "@temperature_v0.value.temp" } }
     }
}

If a model has been provided, the data path in a search query must be prefixed by @<model>: @temperature_v0.value.datapath

Response
{
  "took": 1,
  "hits": {
    "total": 2
  },
  "aggregations": {
    "stats_temperature": {
      "count": 2,
      "min": 24.1,
      "max": 25.9,
      "avg": 25,
      "sum": 50
    }
  }
}
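The @<model> prefix rule can be encapsulated in a small query builder (a sketch, not an official client):

```python
def stats_query(stream_id: str, model: str, data_path: str) -> dict:
    """Build a stats aggregation on a modelled value field; the field
    path must be prefixed with @<model> as required by the search API."""
    return {
        "size": 0,
        "query": {"term": {"streamId": stream_id}},
        "aggs": {"stats_" + data_path: {
            "stats": {"field": f"@{model}.value.{data_path}"}}},
    }

q = stats_query("myDeviceTemperature", "temperature_v0", "temp")
print(q["aggs"]["stats_temp"]["stats"]["field"])  # @temperature_v0.value.temp
```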

To perform the same search query, but with the 'hits' part extracted and JSON-formatted as an array of data messages (to use when you are only interested in the 'hits' part of the Elasticsearch answer):

Request
POST /api/v0/data/search/hits
X-API-Key: <your API key>
Accept: application/json

body param

description

dsl request

elasticsearch DSL request

Example:

This query requests the last data messages for all devices using the model temperature_v0.

Request
POST /api/v0/data/search/hits
{
    "size" : 10,
    "query" : {"term" : { "model": "temperature_v0" }
     }
}
Response
[
  {
    "id": "57308b3b7d84805820b35345",
    "streamId": "myDeviceTemperature",
    "timestamp": "2016-05-09T13:06:03.903Z",
    "model": "temperature_v0",
    "value": {
      "temp": 25.9
    },
    "created": "2016-05-09T13:06:03.907Z"
  },
  {
    "id": "573087777d84805820b35344",
    "streamId": "myDeviceTemperature",
    "timestamp": "2016-05-09T12:49:59.966Z",
    "model": "temperature_v0",
    "value": {
      "temp": 24.1
    },
    "created": "2016-05-09T12:49:59.977Z"
  },
  {
    "id": "5730b1577d84805820b35347",
    "streamId": "myStreamDemo-temperature",
    "timestamp": "2016-05-09T15:48:39.390Z",
    "model": "temperature_v0",
    "value": {
      "temp": 24.1
    },
    "created": "2016-05-09T15:48:39.395Z"
  }
]

7.3.1. Geo Query for data injected BEFORE 2017/04

Geo Query can be performed through *location* field.

Request

POST /api/v0/data/search/hits

{
  "query": {
    "filtered": {
      "filter": {
        "geo_distance": {
          "distance": "10km",
          "location": {
            "lat": 43.848,
            "lon": -3.417
          }
        }
      }
    }
  }
}
Response
[
  {
    "id": "57308b3b7d84805820b35345",
    "streamId": "myDeviceTemperature",
    "location" : {
        "lat": 43.8,
        "lon": -3.3
    },
    "timestamp": "2016-05-09T13:06:03.903Z",
    "model": "temperature_v0",
    "value": {
      "temp": 25.9
    },
    "created": "2016-05-09T13:06:03.907Z"
  }
]

7.3.2. Geo Query for data injected AFTER 2017/04

Geo Query can be performed through all fields with a name matching *location* (case insensitive).
In order to geo-query these fields, you must add @geopoint to the location query path: *location*.@geopoint

Request

POST /api/v0/data/search/hits

{
  "query": {
    "filtered": {
      "filter": {
        "geo_distance": {
          "distance": "10km",
          "location.@geopoint": {
            "lat": 43.848,
            "lon": -3.417
          }
        }
      }
    }
  }
}
Response
[
  {
    "id": "57308b3b7d84805820b35345",
    "streamId": "myDeviceTemperature",
    "location" : {
        "lat": 43.8,
        "lon": -3.3
    },
    "timestamp": "2016-05-09T13:06:03.903Z",
    "model": "temperature_v0",
    "value": {
      "temp": 25.9
    },
    "created": "2016-05-09T13:06:03.907Z"
  }
]

7.3.3. Search Query samples

Here are some query samples that can be used. Aggregations are very useful to retrieve data grouped by any criteria: list all known tags, get the last value per stream, get the mean temperature per tag, get the list of streams that have not sent data since a given date…​ The aggregation results are stored as 'buckets' in the result.
You can also add filters (geo queries, wildcards, terms…​) to all your aggregation queries to target specific 'buckets' or data.

7.3.3.1. Give me all you got!
{
    "query": {
        "match_all" : {}
    }
}
7.3.3.2. Give me the list of all known tags
{
    "size": 0,
    "aggs": {
        "grouped_by_tags": {
            "terms": {
                "field": "tags",
                "size": 0
            }
        }
    }
}
result
{
  "took": 44,
  "hits": {
    "total": 66
  },
  "aggregations": {
    "grouped_by_tags": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "tag_1",
          "doc_count": 53
        },
        {
          "key": "tag_2",
          "doc_count": 13
        }
      ]
    }
  }
}
7.3.3.3. Give me the last value of all my streams
{
    "size":0,
    "aggs": {
        "tags": {
            "terms": {
                "field": "streamId",
                "size": 0
            },
            "aggs": {
                "last_value": {
                    "top_hits": {
                        "size": 1,
                        "sort": [
                            {
                                "timestamp": {
                                    "order": "desc"
                                }
                            }
                        ]
                    }
                }
            }
        }
    }
}
result
{
  "took": 19,
  "hits": {
    "total": 11
  },
  "aggregations": {
    "tags": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "device_1",
          "doc_count": 7,
          "last_value": {
            "hits": {
              "total": 7,
              "max_score": null,
              "hits": [
                {
                    ...
                }
              ]
            }
          }
        },
        {
          "key": "device_2",
          "doc_count": 123,
          "last_value": {
            "hits": {
              "total": 123,
              "max_score": null,
              "hits": [
                {
                    ...
                }
              ]
            }
          },
         ...
        }
      ]
    }
  }
}
7.3.3.4. Give me the list of devices that have not sent data since 2017/03/23 10:00:00
{
    "size":0,
    "aggs": {
        "tags": {
            "terms": {
                "field": "streamId",
                "size": 0
            },
            "aggs": {
                "last_date": {
                    "max": {
                        "field": "timestamp"
                    }
                },
                "filter_no_info_since": {
                    "bucket_selector": {
                        "buckets_path": {
                            "lastdate":"last_date"
                        },
                        "script": {
                            "inline": "lastdate<1490263200000",
                            "lang" :"expression"
                        }
                    }
                }
            }
        }
    }
}
result
{
  "took": 8,
  "hits": {
    "total": 9
  },
  "aggregations": {
    "tags": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "device_12",
          "doc_count": 7,
          "last_date": {
            "value": 1489504105020,
            "value_as_string": "2017-03-14T15:08:25.020Z"
          }
        },
        {
          "key": "device_153",
          "doc_count": 2,
          "last_date": {
            "value": 1489049619254,
            "value_as_string": "2017-03-09T08:53:39.254Z"
          }
        }
      ]
    }
  }
}
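In the query above, the inline script compares each stream's maximum timestamp to a cutoff expressed in epoch milliseconds; 1490263200000 corresponds to 2017-03-23T10:00:00Z. Such a cutoff can be computed as follows:

```python
from datetime import datetime, timezone

def epoch_millis(iso_date: str) -> int:
    """Convert an ISO date (interpreted as UTC) to the epoch-millisecond
    value expected by the bucket_selector inline script."""
    dt = datetime.fromisoformat(iso_date).replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

cutoff = epoch_millis("2017-03-23T10:00:00")
print(cutoff)                  # 1490263200000
script = f"lastdate<{cutoff}"  # inline script body for the aggregation
```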

7.4. Decoding service

7.4.1. Overview

The data messages sent to the Live Objects platform can be encoded in a customer-specific format. For instance, the payload may be a string containing a hexadecimal value or a csv value.
The data decoding feature enables you to provision your own decoding grammar. On receiving an encoded message, the Live Objects platform will use the grammar to decode the payload into plain-text json fields and record the json in the Store service. The stored message will then be searchable with the Advanced Search service.
A "template" option allows you to perform mathematical operations on the decoded fields or to define an output format.
A "model" option allows you to override the original data 'model' field.

The decoding feature is not activated by default.

7.4.2. Binary decoding

Decoder provisioning

The custom decoder describes the grammar to be used to decode the message payload. The Live Objects APIs to manage the decoders are described in the swagger documentation : https://liveobjects.orange-business.com/swagger-ui/index.html.

The binary decoding module uses the Java Binary Block Parser (JBBP) library.
You must use the JBBP DSL language to describe the binary payload format for your decoder.

Additional types

float and utf8 are additional types that can be used in the grammar (see examples).

Example : create a binary decoder with the REST API
POST /api/v0/decoders/binary
X-API-Key: <your API key>
Accept: application/json
{
"encoding":"twointegers",  (1)
"enabled":true,  (2)
"format":"int pressure;int temperature;", (3)
"template":"{\"pressure\":{{pressure}}, \"temperature\" : \"{{#math}}{{temperature}}/10{{/math}} celsius\"}", (4)
"model":"model_twointegers"  (5)
}
1 identifies the decoder. This name will be associated with the devices during provisioning and will be present in the data message.
2 activation/deactivation of the decoder.
3 describes the payload frame (cf. JBBP DSL language). The name of the fields will be found in the resulting decoded payload json.
4 optional parameter describing a post-decoding template format. In this example, the output temperature will be divided by 10 and stored in a string format including its unit. More information on templates.
5 optional parameter that will override the 'model' field in decoded data. If empty, the original value of 'model' field of the encoded data will be used. More information on model field

Field names are case insensitive: 'myField' and 'myfield' will both be returned as 'myfield'. A field name must not contain '.' (reserved for links to structure field values) or '#' (reserved for internal usage), and must not start with a number or the characters '$' and '_'.

Endianness?

The decoding service uses the big-endian order (the high bytes come first). If your device uses little-endian architecture, you can use the < character to prefix a type in your format description.

Example : create a binary decoder for a device sending data in little-endian format
POST /api/v0/decoders/binary
X-API-Key: <your API key>
Accept: application/json
{
"encoding":"my_little_endian_encoding",
"enabled":true,
"format":"<float temperature;" (1)
}
1 <float means a 32-bit float sent in little-endian order.
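The effect of the < prefix can be reproduced locally with Python's struct module (an illustration of byte order only, not of the JBBP parser itself):

```python
import struct

# 1011.5 encoded as an IEEE-754 32-bit float in both byte orders
big_endian_hex = "447CE000"      # what "float pressure;"  expects
little_endian_hex = "00E07C44"   # what "<float pressure;" expects

# Both hex strings decode to the same value once the right order is used
assert struct.unpack(">f", bytes.fromhex(big_endian_hex))[0] == 1011.5
assert struct.unpack("<f", bytes.fromhex(little_endian_hex))[0] == 1011.5
```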
How to test the binary decoder format?

The Live Objects API provides a "test" endpoint which takes a payload format and a payload value as input and provides the decoded value in the response body, if the decoding is successful. Optionally, you can provide a post-decoding template which will describe the output format.

In the following example, the decoded value for the pressure will remain unchanged,
while the decoded value for temperature will be divided by 10.
The test endpoint is described in swagger.

Request
POST /api/v0/decoders/binary/test
 X-API-Key: <your API key>
Accept: application/json
{
"binaryPayloadStructure":"int  pressure; int temperature;",
"binaryPayloadHexString":"000003F5000000DD",
"template":"{\"pressure\":{{pressure}}, \"temperature\" : \"{{temperature}}/10\"}"
}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "temperature": 22.1,
      "pressure": 1013
   },
   "descriptionValid": true
}
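The decoding performed in this test can be checked locally: the payload is two big-endian 32-bit integers, and the template divides the temperature by 10 (a struct-based illustration, not the JBBP parser):

```python
import struct

# "int pressure; int temperature;" = two big-endian 32-bit integers
payload = bytes.fromhex("000003F5000000DD")
pressure, temperature = struct.unpack(">ii", payload)

decoded = {"pressure": pressure,             # left unchanged: 1013
           "temperature": temperature / 10}  # template divides by 10
print(decoded)  # {'pressure': 1013, 'temperature': 22.1}
```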
How to customize the fields once the payload has been decoded?

The fields resulting from a decoded payload might need to be processed using a template description, in order to change their output format. More information on templates.

Referencing a decoder in a LPWA device

When provisioning a LPWA device, you may reference the decoder to be used for the device so that Live Objects will automatically decode all the payloads received from this device, using the referenced decoder.

Example :

landing

Message decoding

The data message is decoded using the decoder previously provisioned and the decoded fields are added to the value. The encoded raw payload is kept in the decoded message. Once the message has been decoded and stored, "Advanced Search" requests can be performed using the newly decoded fields.

Table 2. Examples :
Type Frame format Payload example Decoded payload (json)

binary

int temperature;

000000DD

"payload" : "000000DD", "temperature":221

binary

ubyte temperature;

DD

"payload" : "DD", "temperature":221

binary

utf8 [16] myString;

2855332e3632542b323144323235503029

"payload" : "2855332e3632542b323144323235503029", "myString":"(U3.62T+21D225P0)"

binary

byte:1 is_led_on; float pressure; float temperature; float altitude; ubyte battery_lvl; byte[6] raw_gps; ushort altitude_gps;

00447CE00041CEF5C345CAB8CD38000000000000FFFF

"payload" : "00447CE00041CEF5C345CAB8CD38000000000000FFFF", "is_led_on":0,"pressure":1011.5,"temperature":25.87,"altitude":6487.1,"battery_lvl":56,"raw_gps_list":[0,0,0,0,0,0],"altitude_gps":65535

binary

float pi;measure[2] {int length; utf8 [length] name;float value;}

4048F5C30000000BC2A955544638537472696E674148000000000012C2A9616E6F7468657255544638537472696E67447D4000

"payload" : "4048F5C30000000BC2A955544638537472696E674148000000000012C2A9616E6F7468657255544638537472696E67447D4000", "pi":3.14, "measure_list":[{ "length":11,"name":"©UTF8String","value":12.5}, {"length":18,"name":"©anotherUTF8String","value":1013.0} ]

Table 3. Json fields

value.payload

a string containing the encoded payload in hexadecimal (raw value)

metadata.encoding

contains the decoder name

model

remains unchanged after decoding if model field of decoder is empty; else it will be set with the value of model field in the decoder

additional LPWA fields (lora port, snr…​) in the value

remain unchanged after decoding.

7.4.3. Csv decoding

Decoder provisioning

The custom decoder describes the columns format and options to be used to decode the message csv payload. The Live Objects APIs to manage the decoders are described in the swagger documentation : https://liveobjects.orange-business.com/swagger-ui/index.html.

When provisioning a csv decoder, you must specify an ordered list of column names and their associated type. Three column types are available : STRING, NUMERIC or BOOLEAN.
Several options (column separator char, quote char, escape char…​) may be set to customize the csv decoding.

A template option enables you to provide a post-decoding output format including mathematical evaluation. More information on templates.

Column types
  • STRING column may contain UTF-8 characters

  • NUMERIC column may contain integer (32 bits), long (64 bits), float or double values. The values may be signed.

  • BOOLEAN column must contain true or false.

Table 4. Available options
name default definition example

quoteChar

double-quote "\""

character used for quoting values that contain column separator characters or linefeed.

"pierre, dupont",25,true will be decoded as 3 fields.

columnSeparator

comma ","

character used to separate values.

lineFeedSeparator

"\n"

character used to separate data rows. If the message payload contains several rows, only the first one will be decoded.

the decoding result for pierre,35,true\nmarie,25,false will be 3 fields containing pierre, 35 and true.

useEscapeChar

false

set to true if you want to use an escape char.

escapeChar

backslash "\\"

character used to escape values.

skipWhiteSpace

false

if set to true, will trim the decoded values (white spaces before and after will be removed).

Example 1 : create a simple csv decoder with the REST API
POST /api/v0/decoders/csv
X-API-Key: <your API key>
Accept: application/json
{
    "encoding":"my csv encoding", (1)
    "enabled":true, (2)
    "columns": [ (3)
        {"name":"column1","jsonType":"STRING"},
        {"name":"column2","jsonType":"NUMERIC"},
        {"name":"column3","jsonType":"BOOLEAN"}
    ],
    "model":"model_csv_decoded"  (4)
}
1 identifies the decoder. This name will be associated with the devices during provisioning and will be present in the data message.
2 activation/deactivation of the decoder.
3 an ordered list of column descriptions.
4 optional parameter that will override the 'model' field of decoded data. If empty, the original value of 'model' field of the encoded data will be used. More information on model field.
Example 2 : create a csv decoder with options, using the REST API
POST /api/v0/decoders/csv
X-API-Key: <your API key>
Accept: application/json
{
    "encoding":"my csv encoding with options",
    "enabled":true,
    "columns": [
        {"name":"unit","jsonType":"STRING"},
        {"name":"temperature","jsonType":"NUMERIC"},
        {"name":"normal","jsonType":"BOOLEAN"}
    ],
    "options" : {
        "columnSeparator": "|",
        "quoteChar": "\"",
        "lineFeedSeparator": "\r\n"
    }
}
In the POST request, you can provide only the options you wish to modify. The other options will keep the default values.
How to customize the fields once the payload has been decoded?

The fields resulting from a decoded payload might need to be processed using a template description, in order to change their output format. More information on templates.

How to test the csv decoder ?

The Live Objects API provides a "test" endpoint which takes a csv format description and a payload value as input and provides the decoded value in the response body, if the decoding is successful. The test endpoint is described in swagger.

Request
POST /api/v0/decoders/csv/test
 X-API-Key: <your API key>
Accept: application/json
{
    "columns": [
        {"name":"unit","jsonType":"STRING"},
        {"name":"temperature","jsonType":"NUMERIC"},
        {"name":"thresholdReached","jsonType":"BOOLEAN"}
    ] ,
    "options":{
        "columnSeparator": ","
    },
    "csvPayload":"celsius,250,true",
    "template":"{\"temperature\" : \"{{temperature}}/10\", \"unit\":\"{{unit}}\", \"thresholdReached\":\"{{thresholdReached}}\"} "
}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "unit": "celsius",
      "thresholdReached": "true",
      "temperature": 25
   },
   "descriptionValid": true
}
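The typed column decode can be mimicked with Python's csv module (an illustration of the decoder's behaviour; NUMERIC is simplified to float here):

```python
import csv
import io

def decode_csv(payload: str, columns, sep=",", quote='"') -> dict:
    """Decode the first row of a csv payload into typed fields, the way
    a decoder with STRING/NUMERIC/BOOLEAN columns would (only the first
    row is decoded, matching the lineFeedSeparator behaviour)."""
    row = next(csv.reader(io.StringIO(payload),
                          delimiter=sep, quotechar=quote))
    cast = {"STRING": str, "NUMERIC": float,
            "BOOLEAN": lambda v: v == "true"}
    return {name: cast[t](v) for (name, t), v in zip(columns, row)}

cols = [("unit", "STRING"), ("temperature", "NUMERIC"),
        ("thresholdReached", "BOOLEAN")]
print(decode_csv("celsius,250,true", cols))
# {'unit': 'celsius', 'temperature': 250.0, 'thresholdReached': True}
```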
Message decoding

The data message is decoded using the decoder previously provisioned and the decoded fields are added to the value. The csv encoded raw payload is kept in the decoded message. Once the message has been decoded and stored, "Advanced Search" requests can be performed using the newly decoded fields.

Example in http :

Request
POST /api/v0/data/streams/{streamId}
X-API-Key: <your API key>
Accept: application/json
{
  "value": {"payload":"celsius,25,true"},
  "model": "temperature_v0",
  "metadata" : {"encoding" : "my csv encoding"}
 }

The data message will be stored as:

{
      "id": "585aa47de4b019917e342edd",
      "streamId": "stream0",
      "timestamp": "2016-12-21T15:49:17.693Z",
      "model": "temperature_v0",
      "value":       {
         "payload": "celsius,25,true",
         "normal": true,
         "unit": "celsius",
         "temperature": 25
      },
      "metadata": {"encoding": "my csv encoding"},
      "created": "2016-12-21T15:49:17.750Z"
}
Table 5. Json fields

value.payload

a string containing the csv encoded payload (raw value)

metadata.encoding

contains the decoder name

model

remains unchanged after decoding if model field of decoder is empty; else it will be set with the value of model field in the decoder

7.4.4. Templating

Live Objects provides, for the decoder creation and decoder test APIs, an optional parameter named "template". This parameter is a string field describing the target output fields in a mustache-like format.

Table 6. Available functions :

{{#math}}{{/math}}

performs mathematical operations on a field

{{#toUpperCase}}{{/toUpperCase}}

converts a string to upper case

{{#toLowerCase}}{{/toLowerCase}}

converts a string to lower case

The following examples show, for the same raw binary payload, the output without any template and with a custom template.

Request (WITHOUT the template parameter)
POST /api/v0/decoders/binary/test
 X-API-Key: <your API key>
Accept: application/json
{
"binaryPayloadStructure":"byte:1 led; ushort pressure; ushort temperature; ushort altitude; ubyte battery; byte[6] raw_gps; ushort altitude_gps;",
"binaryPayloadHexString":"0027830a1bfd6738000000000000ffff"}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "led": 0,
      "pressure": 10115,
      "temperature": 2587,
      "altitude": 64871,
      "battery": 56,
      "raw_gps":       [
         0,
         0,
         0,
         0,
         0,
         0
      ],
      "altitude_gps": 65535
   },
   "descriptionValid": true
}
Request (WITH the template parameter)
POST /api/v0/decoders/binary/test
 X-API-Key: <your API key>
Accept: application/json
{
"binaryPayloadStructure":"byte:1 led; ushort pressure; ushort temperature; ushort altitude; ubyte battery; byte[6] raw_gps; ushort altitude_gps;",
"binaryPayloadHexString":"0027830a1bfd6738000000000000ffff",
"template":"{\"pressure\": \"{{pressure}} / 10\", \"temperature\": \"{{temperature}} / 100\", \"altitude\": \"{{altitude}} / 10\", \"view\": { \"Pressure\": \"{{#math}}{{pressure}}/10{{/math}} hPa\",             \"Temperature\": \"{{#math}}{{temperature}}/100{{/math}} C\",\"Altitude\": \"{{#math}}{{altitude}}/100{{/math}} m\",\"GPSAltitude\": \"{{altitude_gps}} m\",\"Battery\": \"{{battery}} %\"}}}"
}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "altitude": 6487.1,
      "view":       {
         "Pressure": "1011.5 hPa",
         "Temperature": "25.87 C",
         "Altitude": "648.71 m",
         "GPSAltitude": "65535 m",
         "Battery": "56 %"
      },
      "temperature": 25.87,
      "pressure": 1011.5,
      "led": 0,
      "battery": 56,
      "raw_gps":       [
         0,
         0,
         0,
         0,
         0,
         0
      ],
      "altitude_gps": 65535
   },
   "descriptionValid": true
}
The {{#math}}{{/math}} template is needed only if you wish to evaluate a mathematical expression within a string.
Examples for a template containing:
\"Temperature\": \"{{temperature}}/100 celsius\" (1)
\"Temperature\": \"{{#math}}{{temperature}}/100{{/math}} celsius\" (2)
\"Temperature\": \"{{#math}}{{temperature}}/100{{/math}}\" (3)
\"Temperature\": \"{{temperature}}/100\" (4)
1 the output will be like "Temperature": "2587/100 celsius" (the division is not evaluated).
2 the output will be like "Temperature": "25.87 celsius" (a string output. the division is evaluated).
3 the output will be like "Temperature": 25.87 (a numeric). In this case, the {{#math}} function is not needed.
4 the output will be like "Temperature": 25.87 (a numeric)
You need to specify in the template all the fields you wish to get in the output, even if they are not modified by the template.
Example :  "template":"{\"pressure\":{{pressure}}, \"temperature\" : {{temperature}}/10}"
If you omit the "pressure" field in the template, it will simply not appear in the output.
If the decoded value contains a "location" field with latitude and longitude, it will override the location field provided in Live Objects at the same json level as the "value" field.
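The {{field}} substitution and {{#math}} evaluation can be sketched with a toy renderer (not the platform's template engine; only simple arithmetic expressions are handled):

```python
import re

def render(template: str, fields: dict) -> str:
    """Substitute {{field}} placeholders, then evaluate any
    {{#math}}...{{/math}} section as an arithmetic expression."""
    out = re.sub(r"\{\{(\w+)\}\}",
                 lambda m: str(fields[m.group(1)]), template)
    return re.sub(r"\{\{#math\}\}(.*?)\{\{/math\}\}",
                  lambda m: str(eval(m.group(1), {"__builtins__": {}})),
                  out)

tpl = '"Temperature": "{{#math}}{{temperature}}/100{{/math}} celsius"'
print(render(tpl, {"temperature": 2587}))
# "Temperature": "25.87 celsius"
```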

8. Kibana

Kibana is a tool to visualize all the data injected in Live Objects.

On the first connection, you will be redirected to the 'index pattern' screen.
Keep all options as default and choose 'timestamp' in the 'Time-field name' box. Just press the 'Create' button; this will create a new index pattern for Kibana.

Do not check the 'Do not expand index pattern when searching' or 'Use event times to create index names' checkboxes; this will lead to error messages in later screens. In that case, you should delete the created index pattern and recreate a new one without these options.

If you add new fields to your data model, you will need to refresh this index pattern so that Kibana can use the new fields.
Just go to the 'Settings' tab and click the orange 'refresh field list' button at the top.

Kibana is based on 3 main tabs : Discover, Visualize and Dashboard.

8.1. Discover

Here you can access all your data. The idea is to 'play' with the filters on the left side of the screen to extract the useful data you wish to explore.
You can then save this filtered 'search' to visualize it in the 'Visualize' screen.

There is an important time filter in the upper-right corner of the screen. By default, only the last 15 minutes of data are displayed. You can choose to display the 'last month' of data, for instance.

8.2. Visualize

Here you can create histograms, maps, charts, tables and metrics based on a previously saved search. You can then save the visualization to display it in the 'Dashboard' screen.

8.3. Dashboard

Here you can display the visualizations you previously created and gather them all in a 'dashboard' page. You can create and save several dashboards meant for different users, and share them with the 'share' button.

9. Event Processing

9.1. Concepts

Two independent features are available for event processing :

  • Simple Event Processing (SEP) : a stateless service aimed at detecting notable single events in the flow of data messages. Example : raise an event when the device temperature is above 25°C.

  • State Processing (SP) : a service aimed at detecting changes in the "device state" computed from data messages. Example : raise an event when the device temperature status changes from "cold" to "hot". A state is computed by applying a state function to a data message. A notification is sent by Live Objects each time the state value changes.

Both services :

  • apply rules when receiving a message : matching rules for SEP and state processing rules for SP. The rules are defined using the JsonLogic format.

  • have a common Context repository where you can store useful information for rule definition like thresholds (ex: "cold" threshold is when data is below 20°C).

  • have a common Geozone repository where you can store the geographical references (polygons) you may use in your rules.

  • generate output events that your business application can consume to initiate downstream action(s) like alarming, executing a business process, etc.

Differences between Simple Event processing and State processing :

  • Simple Event Processing provides a stateless detection function (matching rule) while State Processing provides a stateful detection function (the current state of the device is known by the system), which is useful for use cases more complex than a normal/alert status. State Processing can be seen as a basic state machine where transitions between states are driven by the state function result and events are transition notifications.

  • Simple Event Processing has a frequency function (firing rule) which defines when "fired events" must be generated : ONCE, ALWAYS or SLEEP.

Event Processing E2E overview

lom_ep_architecture

9.2. First examples

Before going into the details of Event Processing, you can run the following examples in order to get used to the concepts (geozones, contexts, rules, events).

9.2.1. Use case 1 : geozone supervision of a tracker

Pre-requisites :

  • the event processing feature is enabled for the tenant.

  • the tenant has a valid Live Objects API key.

9.2.1.1. Use case description : tracking of a package between the shipment zone, transportation zone and delivery zone.

The REST requests for this example are available here and can be run in Postman.

A truck leaves San Francisco with its shipment. A tracker is embedded in the shipment. The truck may take Highway 101 or Route 5 to Los Angeles. A state change event will be sent whenever the tracker changes zone.

  • Shipment zone (red) = San Francisco GPS polygon (lat, lon) : (38.358596, -123.019952) (38.306889, -120.954523) (37.124990, -121.789484)

  • Delivery zone (green) = LA GPS polygon : (34.238622, -118.909873) (34.346562, -117.747086) (33.620728, -117.551111) (33.533648, -118.269687)

  • Transportation zone 1 (yellow) = 101 Highway : (37.561997, -122.05261237) (34.059617, -118.154639) (34.102708, -119.203276) (37.440666, -122.641996)

  • Transportation zone 2 (blue) = Route 5 : (37.8705177, -121.3220217) (34.309766, -118.027739) (33.679366, -118.377685) (37.714244, -121.662597)

9.2.1.2. Geographical zones

lom_ep_stateprocessing1

9.2.1.3. Steps

inside

Step1 : Geozone provisioning

First you need to create the 4 geozones you would like to monitor.

  • Make sure that you enter the coordinates with the longitude first (lon, lat).

  • The polygon must be closed (last point=first point).
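The closure constraint can be checked client-side before provisioning. A minimal sketch (an illustrative helper, not part of the Live Objects API):

```python
def is_closed_ring(coords):
    """A polygon ring is valid only if it is closed: at least 4 points,
    with the last point equal to the first. coords is a list of
    [lon, lat] pairs, longitude first."""
    return len(coords) >= 4 and coords[0] == coords[-1]

# The San Francisco shipment zone from this example, closed as required.
sf_ring = [[-123.019952, 38.358596], [-120.954523, 38.306889],
           [-121.789484, 37.124990], [-123.019952, 38.358596]]
```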

PUT liveobjects.orange-business.com/api/v0/eventprocessing/geozones/san-francisco

In the request body :

{
  "description": "San Francisco zone",
  "geometry": {
    "coordinates": [
        [[-123.019952, 38.358596],[-120.954523, 38.306889],
        [-121.789484, 37.124990],[-123.019952, 38.358596]]],
    "type": "Polygon"
  },
  "tags": [
    "SF-area", "shipment"
  ]
}
PUT liveobjects.orange-business.com/api/v0/eventprocessing/geozones/los-angeles
{
  "description": "Los Angeles zone",
  "geometry": {
    "coordinates": [
        [[-118.909873, 34.238622],[-117.747086, 34.346562],
        [-117.551111, 33.620728],[-118.269687, 33.533648],[-118.909873, 34.238622]]
        ],
    "type": "Polygon"
  },
  "tags": [
    "LA-area", "delivery"
  ]
}
PUT liveobjects.orange-business.com/api/v0/eventprocessing/geozones/transportation1
{
  "description": "Transportation zone Highway 101",
  "geometry": {
    "coordinates": [
        [[-122.05261237, 37.561997],[-118.154639, 34.059617],
        [-119.203276, 34.102708],[-122.641996, 37.440666],[-122.05261237, 37.561997]]
        ],
    "type": "Polygon"
  },
  "tags": [
    "transportation"
  ]
}
PUT liveobjects.orange-business.com/api/v0/eventprocessing/geozones/transportation2
{
  "description": "Transportation zone Route 5",
  "geometry": {
    "coordinates": [
        [[-121.3220217, 37.8705177],[-118.027739, 34.309766],
        [-118.377685, 33.679366],[-121.662597, 37.714244],[-121.3220217, 37.8705177]]
        ],
    "type": "Polygon"
  },
  "tags": [
    "transportation"
  ]
}

Once the geozones are provisioned, they are available in your user context and can be referenced in your rules.
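For reference, the kind of geometric test the `inside` operator performs against a geozone polygon can be sketched as an even-odd ray-casting check. This is an illustration only; the service's actual implementation is not documented here:

```python
def point_in_polygon(lon, lat, ring):
    """Even-odd ray casting: cast a ray to the east from (lon, lat) and
    count how many polygon edges it crosses; an odd count means inside.
    ring is a closed list of [lon, lat] pairs (last point == first)."""
    inside = False
    for i in range(len(ring) - 1):          # last point repeats the first
        (x1, y1), (x2, y2) = ring[i], ring[i + 1]
        if (y1 > lat) != (y2 > lat):        # edge spans the point's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:               # crossing is east of the point
                inside = not inside
    return inside

# San Francisco shipment zone from the provisioning request above.
sf_zone = [[-123.019952, 38.358596], [-120.954523, 38.306889],
           [-121.789484, 37.124990], [-123.019952, 38.358596]]
```

The first data message of this use case (lon -122.169846, lat 37.602902) falls inside this zone, which is why the state function below resolves to "shipment_zone".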

Step2 : Context provisioning
  • There are 2 transportation zones. You can group them into a single transportation context which will be used in your rule.

  • If you want to apply the rule only to a specific tracking device (the one present in the truck), you can create a device-group context containing the device identifier.

  • You can use the geozones san-francisco and los-angeles in your rule definition. But you probably do not want to reference the city names directly in the rule, so that you can change the shipment and delivery zones without modifying the rule. Hence, you create an indirection in the context (san-francisco→shipment ; los-angeles→delivery).

    PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/transportation
    {
      "contextData": ["transportation1","transportation2"],
      "tags": [
        "transportation","zone","california"
      ]
    }
    PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/device-group
    {
      "contextData": ["urn:lora:0020B20000000101"],
      "tags": [
        "device","truck"
      ]
    }
    PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/shipment
    {
      "contextData": "san-francisco",
      "tags": [
        "geozone"
      ]
    }
    PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/delivery
    {
      "contextData": "los-angeles",
      "tags": [
        "geozone"
      ]
    }
Step3 : State processing rule provisioning
  • This example is aimed at detecting a change in the device state, so you have to create a state processing rule which will be applied only to the monitored device (the one in the truck).

  • An event will be raised when the truck moves from one zone to the next (shipment→transportation or transportation→delivery).

Before provisioning the state processing rule, it is useful to run the state processing function on a test data message.

POST liveobjects.orange-business.com/api/v0/eventprocessing/stateprocessing-rule/test
{
  "currentState": {},
  "data": {
    "metadata": {
      "connector": "http",
      "source": "urn:lora:0020B20000000101"
   },
   "streamId": "urn:lora:0020B20000000101!uplink",
    "location": {
      "provider": "lora-macro",
      "accuracy": 10,
      "lon": -122.169846,
      "lat": 37.602902
   },
   "model": "lora_v0",
   "value": {
      "payload": "ae2109000cf3"
   }
  },
  "stateProcessingFunction": {

            "if": [{
            "inside": [{
                "var": "location.lon"
            },
            {
                "var": "location.lat"
            },
            {
                "ctx": {"ctx":"shipment"}
            }]
        },
        "shipment_zone",
        {
            "inside": [{
                "var": "location.lon"
            },
            {
                "var": "location.lat"
            },
            {
                "ctx": {"ctx":["transportation"]}
            }]
        },
        "transportation_zone",
        {
            "inside": [{
                "var": "location.lon"
            },
            {
                "var": "location.lat"
            },
            {
                "ctx": {"ctx":"delivery"}
            }]
        },
        "delivery_zone",
        "unknown_zone"]

  }
}

Response :

{
    "stateFunctionValid": true,
    "dataValid": true,
    "stateFunctionResult": "shipment_zone"
}
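To make the structure of the state function easier to read, here is a minimal, illustrative interpreter for the four operators it uses (var, ctx, if, inside): the chained "if" evaluates condition/value pairs in order and falls back to the last element ("unknown_zone"). This is a sketch, not the Live Objects JsonLogic engine; `inside_fn` stands in for the real geozone test:

```python
def evaluate(rule, data, ctx, inside_fn):
    """Tiny JsonLogic-style evaluator covering only the operators used
    in this guide. rule is a dict {operator: arguments} or a literal."""
    if not isinstance(rule, dict):
        return rule                                  # literal value
    op, args = next(iter(rule.items()))
    if op == "var":                                  # dotted path into the data message
        value = data
        for part in args.split("."):
            value = value[part]
        return value
    if op == "ctx":                                  # context lookup, possibly nested
        key = evaluate(args, data, ctx, inside_fn)
        if isinstance(key, list):                    # e.g. the "transportation" group
            out = []
            for k in key:
                v = ctx.get(k, k)
                out.extend(v if isinstance(v, list) else [v])
            return out
        return ctx.get(key, key)
    if op == "inside":                               # args: lon, lat, zone name(s)
        lon, lat, zones = [evaluate(a, data, ctx, inside_fn) for a in args]
        return inside_fn(lon, lat, zones if isinstance(zones, list) else [zones])
    if op == "if":                                   # [cond1, val1, cond2, val2, ..., default]
        for i in range(0, len(args) - 1, 2):
            if evaluate(args[i], data, ctx, inside_fn):
                return evaluate(args[i + 1], data, ctx, inside_fn)
        return evaluate(args[-1], data, ctx, inside_fn)
    raise ValueError("unsupported operator: " + op)
```

Note how the context indirection resolves: {"ctx": {"ctx": "shipment"}} first looks up "shipment" (giving "san-francisco"), so the geozone can be swapped without touching the rule.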

Now that the state function is tested, you can provision the state processing rule.

Geo tracking state processing rule :

POST liveobjects.orange-business.com/api/v0/eventprocessing/stateprocessing-rule
{
    "name": "geo tracking", (1)
    "enabled": true,
    "stateFunction": { (2)

            "if": [{
            "inside": [{
                "var": "location.lon"
            },
            {
                "var": "location.lat"
            },
            {
                "ctx": {"ctx":"shipment"}
            }]
        },
        "shipment_zone",
        {
            "inside": [{
                "var": "location.lon"
            },
            {
                "var": "location.lat"
            },
            {
                "ctx": {"ctx":["transportation"]}
            }]
        },
        "transportation_zone",
        {
            "inside": [{
                "var": "location.lon"
            },
            {
                "var": "location.lat"
            },
            {
                "ctx": {"ctx":"delivery"}
            }]
        },
        "delivery_zone",
        "unknown_zone"]

  },
    "filterPredicate": {"in": [ (3)
        {
          "var": "metadata.source"
        },
        {
          "ctx": "device-group"
        }
      ]
    },
    "stateKeyPath": "metadata.source" (4)

}
1 state rule name
2 state processing function in JsonLogic format
3 the rule will be applied only to the devices defined in the device-group context
4 the current state will be stored using the "metadata.source" field.
Step4 : Data messages

You can simulate, with the Live Objects REST API, the data messages sent by the tracker.

Data Message 1 :
POST liveobjects.orange-business.com/api/v0/data/streams/urn:lora:0020B20000000101!uplink
{ "metadata": {
      "connector": "http",
      "source": "urn:lora:0020B20000000101"
   },
    "location": {
      "provider": "lora-macro",
      "accuracy": 10,
      "lon": -122.169846,
      "lat": 37.602902},
   "model": "lora_v0",
   "value": {
      "payload": "ae2109000cf3"
   },
   "timestamp": "2017-07-26T08:32:44.034Z",
   "tags": [
      "San Francisco", "Tracker"
     ]
}

The first data message in the SF area will generate an event with no previous state.

{
            "stateKey": "urn:lora:0020B20000000101",
            "newState": "shipment_zone",
            "timestamp": "2017-08-02T07:25:59.967Z",
            "stateProcessingRuleId": "8c8ccbcd-5d2d-46cb-a47e-50ce2cefa75a",
            "data": {
                "streamId": "urn:lora:0020B20000000101!uplink",
                "timestamp": "2017-07-26T08:32:44.034Z",
                "location": {
                    "lat": 37.602902,
                    "lon": -122.169846,
                    "accuracy": 10,
                    "provider": "lora-macro"
                },
                "model": "lora_v0",
                "value": {
                    "payload": "ae2109000cf3"
                },
                "tags": [
                    "San Francisco",
                    "Tracker"
                ],
                "metadata": {
                    "source": "urn:lora:0020B20000000101",
                    "connector": "http"
                }
            }
}

Any other message in the SF area will not generate an event, because the state would remain unchanged.

Data Message 2 :

Now, you can send a second data message, located this time on Highway 101.

POST liveobjects.orange-business.com/api/v0/data/streams/urn:lora:0020B20000000101!uplink
{ "metadata": {
      "connector": "http",
      "source": "urn:lora:0020B20000000101"
   },
    "location": {
      "provider": "lora-macro",
      "accuracy": 10,
      "lon": -121.562765,
      "lat": 36.969311},
   "model": "lora_v0",
   "value": {
      "payload": "ae2109000cf3"
   },
   "timestamp": "2017-07-26T08:32:44.034Z",
   "tags": [
      "Highway 101", "Tracker"
     ]
}

The message in the Highway 101 area will generate the following event. Any other message in the Highway 101 area would not generate an event because the state would be unchanged.

{
            "stateKey": "urn:lora:0020B20000000101",
            "previousState": "shipment_zone",
            "newState": "transportation_zone",
            "timestamp": "2017-08-02T07:31:45.317Z",
            "stateProcessingRuleId": "8c8ccbcd-5d2d-46cb-a47e-50ce2cefa75a",
            "data": {
                "streamId": "urn:lora:0020B20000000101!uplink",
                "timestamp": "2017-07-26T08:32:44.034Z",
                "location": {
                    "lat": 36.969311,
                    "lon": -121.562765,
                    "accuracy": 10,
                    "provider": "lora-macro"
                },
                "model": "lora_v0",
                "value": {
                    "payload": "ae2109000cf3"
                },
                "tags": [
                    "Highway 101",
                    "Tracker"
                ],
                "metadata": {
                    "source": "urn:lora:0020B20000000101",
                    "connector": "http"
                }
            }
}
Data Message 3 :
POST liveobjects.orange-business.com/api/v0/data/streams/urn:lora:0020B20000000101!uplink
{ "metadata": {
      "connector": "http",
      "source": "urn:lora:0020B20000000101"
   },
    "location": {
      "provider": "lora-macro",
      "accuracy": 10,
      "lon": -118.154555,
      "lat": 33.881571},
   "model": "lora_v0",
   "value": {
      "payload": "ae2109000cf3"
   },
   "timestamp": "2017-07-26T08:32:44.034Z",
   "tags": ["Los Angeles", "Tracker"]
}

The third message, in the LA area, will generate the following event. Any other message in the LA area would not generate an event because the state would remain unchanged.

{
            "stateKey": "urn:lora:0020B20000000101",
            "previousState": "transportation_zone",
            "newState": "delivery_zone",
            "timestamp": "2017-08-02T09:32:03.333Z",
            "stateProcessingRuleId": "8c8ccbcd-5d2d-46cb-a47e-50ce2cefa75a",
            "data": {
                "streamId": "urn:lora:0020B20000000101!uplink",
                "timestamp": "2017-07-26T08:32:44.034Z",
                "location": {
                    "lat": 33.881571,
                    "lon": -118.154555,
                    "accuracy": 10,
                    "provider": "lora-macro"
                },
                "model": "lora_v0",
                "value": {
                    "payload": "ae2109000cf3"
                },
                "tags": [
                    "Los Angeles",
                    "Tracker"
                ],
                "metadata": {
                    "source": "urn:lora:0020B20000000101",
                    "connector": "http"
                }
            }
}
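The state-change behaviour demonstrated by these three messages can be sketched as follows (illustrative only; the real service also persists states and publishes the full event shown above):

```python
class StateProcessor:
    """Sketch of a state processing rule: apply the state function to every
    message, keyed by stateKeyPath, and emit an event only when the
    computed state differs from the stored one."""

    def __init__(self, state_fn, key_path="metadata.source"):
        self.state_fn = state_fn
        self.key_path = key_path
        self.states = {}                  # stateKey -> current state

    def process(self, message):
        key = message
        for part in self.key_path.split("."):
            key = key[part]               # resolve the stateKeyPath
        new_state = self.state_fn(message)
        previous = self.states.get(key)
        self.states[key] = new_state
        if new_state != previous:
            return {"stateKey": key, "previousState": previous,
                    "newState": new_state}
        return None                       # unchanged state: no event
```

A repeated message in the same zone yields `None` (no event), which is why only zone transitions appear in the event stream.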

9.2.2. Use case 2 : air quality monitoring

Pre-requisites :

  • the event processing feature is enabled for the tenant.

  • the tenant has a valid Live Objects API key.

9.2.2.1. Use case description :
  • Monitor 2 pollutants (NO2 and PM10)

  • Trigger INFO or ALERT events when thresholds are reached.

  • Trigger daily pollution level state change events for each pollutant.

This example includes SIMPLE EVENT PROCESSING rules and STATE PROCESSING rules.

The REST queries for this example are available here and can be run in Postman.

Air quality information is available for every monitoring station in a city. 3 different types of messages are available :

  • hourly pollution level for each pollutant (data message sent every hour).

  • pollution level for the last 3 hours for each pollutant (data message sent every hour).

  • daily average level for each pollutant (one data message per day, at midnight).

Information/Alert thresholds are defined for each pollutant type :

air quality

Daily pollution level :

air quality

For NO2, the threshold to trigger the ALERT is lower if the daily state for the previous day is MEDIUM or HIGH. The daily calculated state for NO2 must be stored by your application in the tenant context. Example :

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/no2-previous-day-medium-level-reached
{
  "contextData": true,
  "tags": [
    "previous day"
  ]
}

Event triggering on air quality :

  • 6-hour INFO : when the information level is reached in a monitoring station for NO2 or PM10. Then, wait 6 hours before any new "information level reached" event.

  • real-time ALERT

    • when the alert level is reached in a monitoring station for NO2 or PM10.

    • when the information level is reached in a monitoring station for NO2 and the daily pollution level for previous day was MEDIUM or HIGH

  • daily pollution level : when the daily pollution level changes, for example LOW→MEDIUM or MEDIUM→HIGH.

air quality

9.2.2.2. Streams of messages

A stream of data messages is attached to a monitoring station. The messages from the "paris-centre" monitoring station will be sent in a distinct stream from those of the "place de l’Opéra" monitoring station.

Data messages
  • hourly pollution level (sent every hour)

{
   "streamId": "paris-centre-hourly",
    "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_hourly",
   "value": {
      "type":"hourly",
      "NO2":450,
      "PM10":17,
      "monitoring-station":"paris-centre"
   },
   "timestamp": "2017-07-27T13:00:00Z"
}
  • hourly pollution level for the last 3 hours (sent every hour)

{
   "streamId":  "paris-centre-last-3-hours",
    "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_last_three_hours",
   "value": {
      "type":"last_three_hours",
      "data1": {"value":{"NO2":420,"PM10":16},"timestamp":"2017-07-27T11:00:00Z"},
      "data2": {"value": {"NO2":401,"PM10":14},"timestamp":"2017-07-27T12:00:00Z"},
      "data3": {"value": {"NO2":450,"PM10":17},"timestamp":"2017-07-27T13:00:00Z"},
      "monitoring-station":"paris-centre"
   },
   "timestamp":"2017-07-27T13:00:00Z"
}
  • daily average (sent once a day)

{
   "streamId": "paris-centre-daily",
    "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_daily",
   "value": {
      "type":"daily",
      "avg-NO2":250,
      "avg-PM10":53,
      "monitoring-station":"paris-centre"
   },
   "timestamp":"2017-07-28T00:00:00Z"
}
9.2.2.3. Test steps
Step1 : Context provisioning
  • Create the information/alert levels for each pollutant.

    PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/no2-alert-threshold-1
    {
      "contextData": 400,
      "tags": [
        "threshold","alert","no2"
      ]
    }

For the provisioning of the other thresholds, please check the Postman requests.

Step2 : State processing rule provisioning

Once the threshold levels are provisioned in the context, the rules need to be provisioned:

  • For events on exceeding thresholds, use Simple Event Processing (matching rules to check if an event should be triggered + firing rule for the frequency of triggering).

  • For events on daily pollution state change, use State Processing.

lom_ep_airquality_1

Before provisioning the state processing rule, it is useful to run the state processing function on a test data message.

POST liveobjects.orange-business.com/api/v0/eventprocessing/stateprocessing-rule/test
{
  "currentState": {},
  "data": {
    "metadata": {
      "connector": "http"
      },
   "streamId": "paris-centre-daily",
    "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_daily",
   "value": {
      "type":"daily",
      "avg-NO2":9,
      "avg-PM10":9,
      "monitoring-station":"paris-centre"
   },
   "timestamp":"2017-07-27T00:00:00Z"
  },
  "stateProcessingFunction": {
        "if": [
        {
        "and" :
                [
                   { "<": [{ "var": "value.avg-NO2"}, {"ctx": "no2-alert-threshold-2"}]},
                   {"==": [{ "var" : "value.type"}, "daily"]}
                ]


        },
        "LOW",
        {
                    "and" :
                [
                   { "<": [{"ctx": "no2-alert-threshold-2"},{ "var": "value.avg-NO2"},{"ctx": "no2-alert-threshold-1"} ]},
                   {"==": [{ "var" : "value.type"}, "daily"]}
                ]

        },
        "MEDIUM",
        {
                    "and" :
                [
                   { ">": [{ "var": "value.avg-NO2"},{"ctx": "no2-alert-threshold-1"} ]},
                   {"==": [{ "var" : "value.type"}, "daily"]}
                ]

        },
        "HIGH"]
  }
}

Response :

{
    "stateFunctionValid": true,
    "dataValid": true,
    "stateFunctionResult": "LOW"
}

The test endpoint expects, as input, the current state, a data message and the state function. The response returns the function status (valid or not), the data status (valid or not) and the result of the state function applied to the data.

The threshold values are retrieved from the tenant context (ctx).
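For readability, here is the same classification written as plain Python. The JsonLogic "<" with three operands is a between test (t2 < value < t1). The threshold values are assumptions for this sketch: 400 comes from Step1, while 200 for no2-alert-threshold-2 is inferred from the 200 microgram/m3 alert level mentioned in Step3:

```python
def daily_no2_state(avg_no2, ctx):
    """Plain-Python equivalent of the daily NO2 state function above."""
    if avg_no2 < ctx["no2-alert-threshold-2"]:
        return "LOW"
    if ctx["no2-alert-threshold-2"] < avg_no2 < ctx["no2-alert-threshold-1"]:
        return "MEDIUM"
    if avg_no2 > ctx["no2-alert-threshold-1"]:
        return "HIGH"
    return None  # exact boundary values match no branch, as in the rule above

# Threshold context: 400 is provisioned in Step1; 200 is an assumed value.
ctx = {"no2-alert-threshold-1": 400,
       "no2-alert-threshold-2": 200}
```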

Now that the state function is tested, you can provision the state processing rule.

Daily pollution state processing rule :

POST liveobjects.orange-business.com/api/v0/eventprocessing/stateprocessing-rule
{
    "name": "NO2 daily pollution level",
    "enabled": true,
    "stateFunction": {
        "if": [
        {
        "and" :
                [
                   { "<": [{ "var": "value.avg-NO2"}, {"ctx": "no2-alert-threshold-2"}]},
                   {"==": [{ "var" : "value.type"}, "daily"]}
                ]


        },
        "LOW",
        {
                    "and" :
                [
                   { "<": [{"ctx": "no2-alert-threshold-2"},{ "var": "value.avg-NO2"},{"ctx": "no2-alert-threshold-1"} ]},
                   {"==": [{ "var" : "value.type"}, "daily"]}
                ]

        },
        "MEDIUM",
        {
                    "and" :
                [
                   { ">": [{ "var": "value.avg-NO2"},{"ctx": "no2-alert-threshold-1"} ]},
                   {"==": [{ "var" : "value.type"}, "daily"]}
                ]

        },
        "HIGH"]
  },
    "stateKeyPath": "streamId"

}

For the PM10 pollutant, the process is the same to create the state processing rule.

Step3 : Matching rule provisioning

Example : for NO2, if the previous day ended with a "MEDIUM" or "HIGH" level, the ALERT threshold is 200 microgram/m3 instead of 400. When receiving the daily pollution state change event every night, your application must store it in the context (key names in the context : "no2-previous-day-medium-level-reached" and "no2-previous-day-high-level-reached" ; value : true or false) so that it can be used in the real-time alerts.

The previous day state is set in the tenant context every night, by your application, based on the daily state event sent by the state processing rule.

A test endpoint is available to prepare the matching rule and test it on a data message.

POST liveobjects.orange-business.com/api/v0/eventprocessing/matching-rule/test
{
  "data": {
    "metadata": {
      "connector": "http"
      },
   "streamId": "paris-centre-hourly",
    "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_hourly",
   "value": {
      "type":"hourly",
      "NO2":201,
      "PM10":15,
      "monitoring-station":"paris-centre"
   },
   "timestamp":"2017-07-27T02:00:00Z"
  },
  "dataPredicate": {
    "and": [{
        ">": [{"var": "value.NO2"}, {"ctx": "no2-alert-threshold-2" }]
    },
    {
        "or": [{"==": [{"ctx": "no2-previous-day-medium-level-reached"},true]},
        {"==": [{"ctx": "no2-previous-day-high-level-reached"},true]}]
    },
    {
        "==": [{"var": "value.type"},"hourly"]}]
    }
}

Response :

{
    "dataPredicateValid": true,
    "dataValid": true,
    "dataPredicateResult": true
}

Now, provision the matching-rule :

POST liveobjects.orange-business.com/api/v0/eventprocessing/matching-rule
{
  "name": "no2-alert-level-reached-threshold2",
  "dataPredicate": {
    "and": [{
        ">": [{"var": "value.NO2"}, {"ctx": "no2-alert-threshold-2" }]
    },
    {
        "or": [{"==": [{"ctx": "no2-previous-day-medium-level-reached"},true]},
        {"==": [{"ctx": "no2-previous-day-high-level-reached"},true]}]
    },
    {
        "==": [{"var": "value.type"},"hourly"]}]
    },
  "enabled": true
}

Response :

{
    "id": "0476993c-b7cc-49a7-9a86-87431ead76e7",
    "name": "no2-alert-level-reached-threshold2",
    "enabled": true,
    "dataPredicate": {
        "and": [
            {
                ">": [{"var": "value.NO2"},
                    {"ctx": "no2-alert-threshold-2"}]
            },
            {
                "or": [{"==": [{"ctx": "no2-previous-day-medium-level-reached"},true]},
                    {"==": [{"ctx": "no2-previous-day-high-level-reached"},true]}
                ]
            },
            {
                "==": [{"var": "value.type"},"hourly"]
            }
        ]
    }
}
Step3 bis : Firing rule provisioning

When the matching rule is ready, a firing rule must be provisioned in order to set the frequency for event triggering (ONCE, ALWAYS, SLEEP).

For the matching rule described in the previous step, an event is sent every time the ALERT threshold is reached in a monitoring station.

POST liveobjects.orange-business.com/api/v0/eventprocessing/firing-rule
{
  "aggregationKeys": [
    "streamId"
  ],
  "enabled": true,
  "firingType": "ALWAYS",
  "matchingRuleIds": [
    "0476993c-b7cc-49a7-9a86-87431ead76e7"
  ],
  "name": "firing NO2 alert 200"
}

Another example of firing rule : for the PM10/NO2 INFO event, when an event is triggered for a monitoring station, we do not want to receive any other INFO event during the next 6 hours. The firingType is set to SLEEP :

POST liveobjects.orange-business.com/api/v0/eventprocessing/firing-rule
{
  "aggregationKeys": [
    "streamId"
  ],
  "enabled": true,
  "firingType": "SLEEP",
  "matchingRuleIds": [
    "4578993c-b7cc-49a7-9a86-87431ead96a9"
  ],
  "name": "firing PM10 INFO",
    "sleepDuration": "PT6H"
}

The sleepDuration is expressed in the ISO 8601 duration format.
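The SLEEP behaviour per aggregation key can be sketched as follows (`SleepFiringRule` is a hypothetical helper for illustration, not a Live Objects API):

```python
from datetime import datetime, timedelta

class SleepFiringRule:
    """Sketch of the SLEEP firing type: after an event fires for a given
    aggregation key (here streamId), further matches on that key are
    suppressed until sleepDuration has elapsed (PT6H = 6 hours)."""

    def __init__(self, sleep=timedelta(hours=6)):
        self.sleep = sleep
        self.last_fired = {}          # aggregation key -> last firing time

    def should_fire(self, stream_id, now):
        last = self.last_fired.get(stream_id)
        if last is not None and now - last < self.sleep:
            return False              # still sleeping for this stream
        self.last_fired[stream_id] = now
        return True
```

A match two hours after an INFO event is suppressed, while a match six hours later fires again; ALWAYS would correspond to `should_fire` returning True unconditionally.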

For the provisioning of the other SEP rules, please check the Postman requests.

Step4 : Send data messages

In this test, the data messages are sent using the Live Objects HTTP REST API.

Data message 1 : hourly
POST liveobjects.orange-business.com/api/v0/data/streams/paris-centre-hourly
{
   "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_hourly",
   "value": {
      "type":"hourly",
      "NO2": 250,
      "PM10":45,
      "monitoring-station":"paris-centre"
   },
   "timestamp": "2017-07-27T14:00:00Z"
}
Data message 2 : last 3 hours
POST liveobjects.orange-business.com/api/v0/data/streams/paris-centre-last-3-hours
{
    "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_last_three_hours",
   "value": {
      "type":"last_three_hours",
      "data1": {"value":{"NO2":420,"PM10":16},"timestamp":"2017-07-27T11:00:00Z"},
      "data2": {"value": {"NO2":401,"PM10":14},"timestamp":"2017-07-27T12:00:00Z"},
      "data3": {"value": {"NO2":450,"PM10":17},"timestamp":"2017-07-27T13:00:00Z"},
      "monitoring-station":"paris-centre"
   },
   "timestamp":"2017-07-27T13:00:00Z"
}
Data message 3 : daily average
POST liveobjects.orange-business.com/api/v0/data/streams/paris-centre-daily
{
    "location": {
        "lon":2.2945, "lat" : 48.8584
   },
   "model": "model_daily",
   "value": {
      "type":"daily",
      "avg-NO2":92,
      "avg-PM10":20,
      "monitoring-station":"paris-centre"
   },
   "timestamp":"2017-07-28T00:00:00Z"
}

In this example, the daily average is calculated in another system. Another option would be to calculate it with a recurring query on the hourly stream, and then create the daily data message.

POST liveobjects.orange-business.com/api/v0/data/search
{ "size":0,
    "query": {
        "filtered": {
            "filter": {
                "bool":{
                    "must": [
                         {
                             "term": {
                                 "streamId": "paris-centre-hourly"
                             }
                         },
                         {
                             "range": {
                                 "timestamp": {
                                     "gte":"2017-07-27",
                                     "lt":"2017-07-28"
                                 }
                             }
                         }
                    ]
                }
            }
        }
    },
    "aggs" : {
            "avg-NO2" : {
                "avg" : { "field" : "@model_hourly.value.NO2" }

            },
            "avg-PM10" : {
                "avg" : { "field" : "@model_hourly.value.PM10" }

            }
    }
}

Response :

{
    "took": 124,
    "hits": {
        "total": 1
    },
    "aggregations": {
        "avg-PM10": {
            "value": 41
        },
        "avg-NO2": {
            "value": 450
        }
    }
}
Step6 : Get events

The tenant can be notified of the triggered events. The events are also stored in a dedicated stream which can be queried using the Data Management data search API.

POST liveobjects.orange-business.com/api/v0/data/search
{
    "from": 0,
    "size": 10,
    "query": {
        "filtered": {
            "filter": {
                "bool":{
                    "must": [
                         {
                             "term": {
                                 "streamId": "event:paris-centre-hourly"
                             }
                         },
                         {
                             "range": {
                                 "timestamp": {
                                     "gte":"2017-08-03",
                                     "lt":"2017-08-07"
                                 }
                             }
                         }
                    ]
                }
            }
        }
    }
}

The response contains the list of events on the hourly stream for the "paris-centre" monitoring station.

{
    "took": 22,
    "hits": {
        "total": 2,
        "hits": [
            {
                "_source": {
                    "metadata": null,
                    "streamId": "event:paris-centre-hourly",
                    "created": "2017-08-07T11:16:01.920Z",
                    "location": {
                        "provider": null,
                        "alt": null,
                        "accuracy": null,
                        "lon": 2.2945,
                        "lat": 48.8584
                    },
                    "model": "event:model_hourly",
                    "id": "59884bf1e9cf83391a49ee61",
                    "value": {
                        "tenantId": "597f812389179c3436edf332",
                        "matchingContext": {
                            "matchingRule": {
                                "dataPredicate": "{\"and\":[{\">\":[{\"var\":\"value.NO2\"},{\"ctx\":\"no2-info-threshold-1\"}]},{\"==\":[{\"var\":\"value.type\"},\"hourly\"]}]}",
                                "name": "no2-info-level-reached",
                                "id": "84b1cbd7-5184-4460-b05e-41236fbfe770",
                                "enabled": true
                            },
                            "data": {
                                "metadata": {
                                    "connector": "http"
                                },
                                "streamId": "paris-centre-hourly",
                                "location": {
                                    "lon": 2.2945,
                                    "lat": 48.8584
                                },
                                "model": "model_hourly",
                                "value": {
                                    "NO2": 450,
                                    "PM10": 41,
                                    "type": "hourly",
                                    "monitoring-station": "paris-centre"
                                },
                                "timestamp": "2017-07-27T14:00:00Z"
                            },
                            "tenantId": "597f812389179c3436edf332",
                            "timestamp": "2017-08-07T11:16:04.859Z"
                        },
                        "timestamp": "2017-08-07T11:16:04.875Z",
                        "firingRule": {
                            "name": "firing NO2 INFO",
                            "matchingRuleIds": [
                                "84b1cbd7-5184-4460-b05e-41236fbfe770"
                            ],
                            "sleepDuration": "PT6H",
                            "id": "f1dfc01d-a236-4bf6-b8a7-fdb2aa6a4e10",
                            "aggregationKeys": [
                                "streamId"
                            ],
                            "firingType": "SLEEP",
                            "enabled": true
                        }
                    },
                    "timestamp": "2017-08-07T11:16:04.875Z",
                    "tags": [
                        "event"
                    ]
                }
            }
...
}

The events can also be retrieved with MQTT on a specific topic.

9.3. Context repository

9.3.1. Definition

The context repository is a database for storing user data that is useful in event rule definitions but not present in the data messages themselves. The context may include, for instance, threshold definitions, geographical zones, a list of device identifiers, or a group of contexts. A context entry has a key-value format: the key is a string and the value can be a primitive (string, numeric…​), a JSON object or an array. Optional tags are available to ease the search among the tenant contexts.

For geographical zones, a dedicated geozone database is provided. Once the user has provisioned geozones, they are automatically available in the user context.

9.3.2. Context provisioning

The Live Objects APIs to manage context provisioning are described in the swagger documentation (Event processing - Context section) : https://liveobjects.orange-business.com/swagger-ui/index.html.

9.3.3. Context groups

A context value may reference other context keys. Instead of referencing each context individually, the rule can then reference the context group.

Example : See a context groups example.

Extracting a context key

A context key does not have to be hard-coded in your rule. For instance, it can be extracted from the data message (using tags or the device identifier).

Here, the context key is generated by concatenating the value.streamId field and a string.
 {"ctx" : {"cat":[{"var" : "value.streamId"},"alertingzone"]}}
Here, the context key is extracted from the value.tags field.
"ctx": { "get": [{"filter": [{"var": "value.tags"},"zone"]},0]}

9.4. Geozone repository

9.4.1. Definition

The Geozone repository is a database that allows the user to save his geographical sites/zones of interest. The geozones are stored as polygons (array of geopoints coordinates in decimal degrees). Meta information like a description and tags can be stored with the geozone.

format
  • coordinate order for polygon definition : use longitude as the first coordinate and latitude as the second coordinate.

  • the polygons are closed linestrings. Closed LineStrings have at least four coordinate pairs and specify the same position as the first and last coordinates.

Example of polygon :

[[[1.780892, 48.091452],[2.301382, 48.000565],[2.281961, 47.509630],[1.252634, 47.729556],[1.780892, 48.091452]]]
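The two format rules above (at least four coordinate pairs, first and last identical) can be checked with a few lines of Python; this is an illustrative sketch, not part of the Live Objects API:

```python
def is_valid_ring(ring):
    # A closed linestring has at least four [lon, lat] pairs
    # and its first and last positions are identical.
    return len(ring) >= 4 and ring[0] == ring[-1]

ring = [[1.780892, 48.091452], [2.301382, 48.000565],
        [2.281961, 47.509630], [1.252634, 47.729556],
        [1.780892, 48.091452]]
polygon = [ring]  # a polygon is an array of rings, outer ring first
```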

9.4.2. Provisioning

The Live Objects APIs to manage geozone provisioning are described in the swagger documentation (Event processing - Geozone section) : https://liveobjects.orange-business.com/swagger-ui/index.html.

Example :

PUT liveobjects.orange-business.com/api/v0/eventprocessing/geozones/grand-orleans

{
  "description": "my geozone grand Orleans",
  "geometry": {
    "coordinates": [[[1.780892, 48.091452],[2.301382, 48.000565],
    [2.281961, 47.509630],[1.252634, 47.729556],[1.780892, 48.091452]]],
    "type": "Polygon"
  },
  "tags": ["zone-nord"]
}
  • Once a geozone is provisioned, it is available in the user context. Hence, it can be referenced in event processing rules or in groups of context.

  • When a geozone is updated, the modifications are immediately taken into account by the contexts or rules referencing the geozone.

9.5. Rules and JsonLogic syntax

A rule is a function applied to a data message in order to detect any significant change in the data (exceeded threshold, state modification, change of location). The rules in Simple Event Processing and State Processing are defined in Live Objects using the JsonLogic syntax.

Note: the JsonLogic log operator has been deactivated.

9.5.1. Additional operators

In addition to the existing JsonLogic operators (logic and boolean operators, numeric operators, string operators, array operators), Live Objects provides geographic operators (distance, inside, insideindex, closeto, closetoindex), a context operator (ctx) and miscellaneous operators (get, currentstate).

The REST queries for this example are available here and can be run in Postman.

Table 7. distance
Name

distance

Description

Geographical operator. Returns the distance in meters between two points, given their latitude and longitude in decimal degrees.

Parameters

lon1, lat1, lon2, lat2 in decimal degrees

Logic

{ ">" : [ { "distance" : [ { "var" : "location.lon"}, {"var" : "location.lat"}, 2.296565, 48.800206 ] }, 6000 ] }

Data

Eiffel Tower { "location":{ "lon" : 2.2945, "lat" : 48.8584 } }


Result

true
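The result above can be reproduced with a standard haversine computation. This is a sketch only: Live Objects does not document its exact distance formula, so small deviations from the platform's value are possible.

```python
from math import radians, sin, cos, asin, sqrt

def distance(lon1, lat1, lon2, lat2):
    # Great-circle distance in meters (haversine), using the same
    # lon/lat parameter order as the `distance` operator.
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))  # mean Earth radius in meters

# Eiffel Tower vs. the reference point used in the rule above
d = distance(2.2945, 48.8584, 2.296565, 48.800206)
```

Since d is above 6000 meters, the ">" comparison in the rule evaluates to true.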

Table 8. ctx
Name

ctx

Description

Retrieve, from the context, one or several values using a key or an array of keys. Several ctx operators can be nested (group of contexts).

Parameters

key or array of keys

Context

In the following example, "freezingThreshold" and "liquidThreshold" must have been provisioned in the tenant context before being used.

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/freezingThreshold

{ "contextData": 0 }

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/liquidThreshold

{ "contextData": 100 }

Logic

{ "if" : [ {"<": [{"var":"value.temp"}, {"ctx": "freezingThreshold"}] }, "ice", {"<": [{"ctx": "freezingThreshold"}, {"var":"value.temp"}, {"ctx": "liquidThreshold"}] },"liquid", "gas" ]

}


Data

{"value":{"temp":55}}

Result

"liquid"

Table 9. currentstate
Name

currentstate

Description

Retrieve the current state for a device. For state processing rules only.

Logic

{"if" : [ {"and": [{ "!==": [{ "currentstate": [] }, "hot"] }, {"<": [80,{"var": "value.temp"},100]}]}, "normal", {"<": [{"var":"value.temp"}, 0] }, "cold", {"<": [{"var":"value.temp"}, 80] }, "normal", "hot" ]}

Data

{"value":{"temp":20.0}}

Result

"normal"

Table 10. get
Name

get

Description

Returns the element at the specified position in an array.

Parameters

array, index in the array

Context

In the following example, an array containing latitude and longitude values must have been provisioned in the tenant context :

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/2geopoints

{ "contextData": [48.800206, 2.296565, 48.800206, 2.296565] }

Logic

{ "distance": [{ "get": [{ "ctx": "2geopoints" }, 0] }, { "get": [{ "ctx": "2geopoints" }, 1] }, { "get": [{ "ctx": "2geopoints" }, 2] }, { "get": [{ "ctx": "2geopoints" }, 3] }] }

Data

{}

Result

0

Table 11. inside
Name

inside

Description

Checks if a point defined by its latitude and longitude is inside a polygon (or at least one polygon if an array of polygons is provided as input parameter).

Parameters

longitude, latitude in decimal degrees for the point to be tested, polygon(s) defined by the coordinates of their vertices (lon, lat in decimal degrees).

Logic

{ "inside": [{"var": "location.lon"},{"var": "location.lat"}, [[[2.381121,48.627973],[2.129376,48.629499], [2.099351,48.768217],[2.116302,48.955198], [2.317994,48.927845],[2.455176,48.913357], [2.489472,48.841933],[2.392301,48.762871], [2.381121,48.627973]]]] }

Data

{"location":{"lon":2.350350,"lat":48.854064}}


Result

true
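The behaviour of the inside operator can be approximated with a classic even-odd ray-casting test. This sketch is illustrative (boundary points are not handled specially, and the platform's exact algorithm is not documented):

```python
def inside(lon, lat, polygon):
    # Even-odd ray casting: count the edges of the outer ring crossed
    # by a horizontal ray going east from the point.
    ring = polygon[0]
    crossings = 0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) / (y2 - y1) * (x2 - x1)
            if x_cross > lon:
                crossings += 1
    return crossings % 2 == 1

# Polygon from the Logic example above (greater Paris area)
paris = [[[2.381121, 48.627973], [2.129376, 48.629499], [2.099351, 48.768217],
          [2.116302, 48.955198], [2.317994, 48.927845], [2.455176, 48.913357],
          [2.489472, 48.841933], [2.392301, 48.762871], [2.381121, 48.627973]]]
```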

Table 12. insideindex
Name

insideindex

Description

Checks if a point is inside one of an array of polygons. Returns the index of the first matching polygon. Returns -1 if no match was found. This operator is usually used in conjunction with the "get" operator, which returns the matching polygon.

Parameters

longitude, latitude in decimal degrees for the point to be tested, array of polygons defined by the coordinates of their vertices (lon, lat in decimal degrees).

Context

In this example, a context group listing two zones, and the polygons for those zones, must have been provisioned in the tenant context :

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/zone-nord { "contextData":["zone-grandparis", "zone-grandorleans"] }

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/zone-grandparis { "contextData":[[ [2.381121, 48.627973], [2.129376, 48.629499], [2.099351, 48.768217], [2.116302, 48.955198], [2.317994, 48.927845], [2.455176, 48.913357], [2.489472, 48.841933], [2.392301, 48.762871], [2.381121, 48.627973] ]] }

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/zone-grandorleans { "contextData":[[ [1.780892, 48.091452], [2.301382, 48.000565], [2.281961, 47.509630], [1.252634, 47.729556], [1.780892, 48.091452] ]] }

Logic

{ "get": [{ "ctx": { "get": [{ "filter": [{ "var": "tags" }, "zone"] }, 0] } }, { "insideindex": [{ "var": "location.lon" }, { "var": "location.lat" }, { "ctx": { "ctx": { "get": [{ "filter": [{ "var": "tags" }, "zone"] }, 0] } } }] }] }

Data

{"location":{"lat" : 48.854064, "lon" : 2.350350}, "tags" : ["otherTag2","zone-nord","otherTag1"] }

Result

"zone-grandparis"

Table 13. closeto
Name

closeto

Description

Checks if a circle is close to a polygon or at least one of the polygons (polygon array).

Parameters

longitude, latitude (in decimal degrees for the circle center), circle radius, polygon or array of polygons

Logic

{ "closeto": [{ "var": "location.lon" }, { "var": "location.lat" }, { "var": "location.accuracy" }, [[[[1.780892,48.091452],[2.301382,48.000565],[2.281961,47.509630],[1.252634,47.729556],[1.780892,48.091452]]], [[[2.281961,47.509630],[1.252634,47.729556]]],[[[2.22412,48.85863],[2.25219,48.88143],[2.28404,48.8785],[2.26816,48.86721],[2.2588,48.84913],[2.22859,48.85004],[2.22412,48.85863]]] ]] }

Data1 : circle center outside polygons, the circle does not intersect any polygon.

{"location":{ "lon" : 2.263849, "lat" : 48.855983, "accuracy" : 100 }}


Result1

false

Data2 : circle center outside polygons, the circle intersects one polygon.

{ "lon" : 2.263849, "lat" : 48.855983, "accuracy" : 200 }


Result2

true

Data3 : a point inside one of the polygons.

{ "lon" : 2.260265350341797, "lat" : 48.85693640789798, "accuracy" : 0 }

Result3

true

Table 14. closetoindex
Name

closetoindex

Description

Checks if a circle is close to an array of polygons. Returns the index of the first matching polygon (the first index in the array is 0). Returns -1 if no match was found.

Parameters

longitude, latitude (in decimal degrees for the circle center), circle radius, array of polygons

Logic

{ "closetoindex" : [ { "var" : "location.lon"}, { "var" : "location.lat"}, { "var" : "location.accuracy"} , [[[[1.780892, 48.091452], [2.301382, 48.000565], [2.281961, 47.509630], [1.252634, 47.729556],[1.780892, 48.091452]]], [[[2.224120, 48.858630], [2.252190, 48.881430], [2.284040, 48.878500],[2.268160, 48.867210],[2.258800, 48.849130],[2.228590, 48.850040],[2.224120, 48.858630]]]]] }

N.B.: first polygon in the array is the Orleans area; 2nd polygon is the Paris area.

Data

{"location":{ "lon" : 2.263849, "lat" : 48.855983, "accuracy" : 500 }}


Result

1

9.6. Simple Event Processing

9.6.1. Concepts

The simple event processing (SEP) service is aimed at detecting notable single events in the flow of data messages.

Simple event processing combines a stateless boolean detection function (matching rule) with a frequency function (firing rule).

It generates fired events as output that your business application can consume to initiate downstream actions such as raising an alarm or executing a business process.

Simple Event Processing service E2E overview


9.6.2. Processing rules

You can set up Matching rules and Firing rules to define how data messages are processed by the SEP service and how fired events are triggered:

9.6.2.1. Matching rule

A matching rule is a simple or compound rule that will be applied on each data message to evaluate if a « match » occurs. A matching rule is evaluated as a boolean result. Matching rule supports numeric, string, logic and distance operators and is based on JsonLogic.

Matching contexts (containing the data message, the matching rule id, etc.) are processed by the firing rules associated with these matching rules.

9.6.2.2. Firing rule

A firing rule applies to the matches triggered by one or many matching rules and defines when fired events must be generated.

A firing rule specifies:

  • the list of matching rules associated with this firing rule – when one of these matching rules matches, the firing rule is applied,

  • the frequency of firing: once, sleep and always,

  • optionally, a list of aggregation keys identifying the fields to extract from the matching context in order to build the firing context.

The firing rule is applied as follows on each matching context:

  • the firing rule generates the firing context from the matching context, by extracting one or multiple fields defined with the aggregation keys,

  • the firing rule then applies the frequency parameter to optionally throttle the triggering of fired events belonging to the same firing context.  

If the frequency of the firing rule is defined as ONCE or SLEEP then firing guards are created in the system to prevent new generation of fired events for a given firing context. You can manage the firing guards, and for example, remove a firing guard to re-activate a firing rule for a specific firing context.

As an example, with the metadata.source field set as aggregation key, if a fired event is generated for a device “A”, a firing guard will prevent new fired events for device “A” and this firing rule. Meanwhile, fired events can still occur for devices “B”, “C”, etc. for this rule.

With SLEEP mode, a duration specifies the minimum time between two fired events. When the duration has elapsed, the firing guard is removed and new fired events can occur. This duration is computed for each element of the tuple composed of firing rule id + aggregation keys + values (firingRuleID:metadata.source:deviceId1 , firingRuleID:metadata.source:deviceId2, …)


The sleepDuration is expressed in the ISO 8601 duration format (e.g. "PT6H" for six hours).
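The guard mechanism described above can be sketched as follows. This is an illustrative model, not the Live Objects implementation: a ONCE guard never expires, a SLEEP guard expires after sleepDuration, and ALWAYS never blocks:

```python
from datetime import datetime, timedelta

# (firing rule id, firing context) -> guard expiry; None means "forever" (ONCE)
guards = {}

def should_fire(rule_id, firing_context, firing_type, now, sleep=timedelta(hours=6)):
    # Decide whether a fired event is emitted for this firing context.
    if firing_type == "ALWAYS":
        return True
    key = (rule_id, firing_context)
    if key in guards and (guards[key] is None or now < guards[key]):
        return False  # an active firing guard suppresses the fired event
    guards[key] = None if firing_type == "ONCE" else now + sleep
    return True

t0 = datetime(2017, 8, 7, 11, 0, 0)
fired_first = should_fire("fire-1", "deviceA", "SLEEP", t0)                       # no guard yet
fired_again = should_fire("fire-1", "deviceA", "SLEEP", t0 + timedelta(hours=1))  # guard active
```

Removing an entry from guards corresponds to manually removing a firing guard to re-activate the rule for that firing context.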

9.6.2.3. Fired events consumption

Fired events are accessible with the MQTT API. Your business applications must connect with payload+bridge mode and subscribe to router/~event/v1/data/eventprocessing/fired topic to receive the fired events.

9.6.2.4. Examples

Here are some examples of usage of the simple event processing service.

Data message sent by a device with temperature set to 100 and location set at San Francisco (37.773972,-122.431297)

{
"streamId":"urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
"timestamp":"2016-08-29T08:27:52.874Z",
"location":{"lat":37.773972,"lon":-122.431297},
"model":"temperatureDevice_v0",
"value":{"temp":100},
"metadata":{"source":"urn:lo:nsid:dongle:00-14-22-01-23-45"}
}

Matching rule: numeric (temperature higher than 99) and distance operator (distance between data message and Paris (48.800206, 2.296565) must be higher than 6km)

{
    "name": "compound rule with numeric and distance operators",
    "enabled": true,
    "dataPredicate":
    {
                "and" :
                [
                  { ">" : [ { "distance" : [
                    { "var" : "location.lon"},
                    { "var" : "location.lat"},
                    2.296565,
                    48.800206 ] },
                    6000 ] },
                  { ">" : [ { "var" : "value.temp" }, 99 ] }
                ]
    }
}

Firing rule with frequency ONCE and aggregationKeys based on the source field :

{
    "name": "firing rule test",
    "enabled": true,
    "matchingRuleIds": ["{matchingRuleId}"],
    "aggregationKeys":["metadata.source"],
    "firingType":"ONCE"
}

A fired event will be generated once for each source sending data with a temperature higher than 99 and not located within a radius of 6 km around Paris.

Example with other operators ">", "if", "in", "cat" :

{">":[{"var":{"cat":["value.", {"if" : [
  {"in": [{"var":"model"}, "v0"] }, "temp",
  {"in": [{"var":"model"}, "v1"] }, "temperature",
  "t"
]}]}},100]}

This rule specifies the field to be compared to the value "100", based on the model of the data message.

If the model value is:

  • "v0", the comparison will be made with the field "value.temp",

  • "v1", the comparison will be made with the field "value.temperature",

  • else it will be made with the field "value.t".

9.7. State Processing

9.7.1. Concepts

The state processing (SP) service aims at detecting changes in a "device state" computed from data messages.

A state can represent any result computed from Live Objects data messages : a geo-zone ("paris-area", "london-area", ..), a temperature status ("hot", "cold", ..), an availability status ("ok" , "ko"). Each state is identified by a key retrieved from a user-defined JSON path in the data message.

stateKeyPath examples : "streamId", "metadata.source"

A state is computed by applying a state function to a data message. A notification is sent by Live Objects each time a state value changes. State processing differs from event processing in that it provides stateful rules, which are useful for use cases more complex than a normal/alert status. State processing can be seen as a basic state machine where transitions between states are driven by the state function result and events are the transition notifications.

9.7.2. State Processing rules

You can set up StateProcessing rules to define how data messages are processed by the SP service.

A StateProcessing rule applies to all new data messages.

A StateProcessing rule specifies:

  • an optional boolean function, filterPredicate. It filters the data on which the state processing logic should be applied. This boolean function is described in JsonLogic syntax. If no filter predicate is specified, the state function is applied to every data message.

  • a JSON path relative to the data message, stateKeyPath. This path is used to retrieve the state key. In many cases the state key will be the streamId value or the metadata.source value, in order to associate a state with a device status.

  • a state function, stateFunction, which is the core of the state processing logic. This function takes a data message as input and computes the state associated with the state key.

The state function is written in JsonLogic syntax and can return any primitive value : String, Number, Boolean.

9.7.2.1. State change events

State processing events are accessible with the MQTT API. Your business applications must connect with payload+bridge mode and subscribe to router/~event/v1/data/eventprocessing/statechange topic to receive the events.

9.7.2.2. State processing initialization

When a state is computed for the first time, it generates a state change event with a previous state equal to null.

9.7.2.3. Examples

Here are some examples of usage of the state processing.

Temperature monitoring of a device sensor, with 3 temperature ranges.

Temperature State processing logic:

  • if the temperature is below 0 degrees Celsius, the sensor state is cold.

  • if the temperature is between 0 and 100 degrees Celsius, the sensor state is normal.

  • if the temperature is higher than 100 degrees Celsius, the sensor state is hot.

The sensor is identified by the streamId field within the data message.

{
        "name": "temperature state rule",
        "enabled": true,
        "stateKeyPath": "streamId",
        "stateFunction": {
                "if": [{
                        "<": [{
                                "var": "value.temp"
                        },
                        0]
                },
                "cold",
                {
                        "<": [{
                                "var": "value.temp"
                        },
                        100]
                },
                "normal",
                "hot"]
        }
}
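The stateFunction above is equivalent to this plain-Python function, shown only to make the JsonLogic readable:

```python
def state_function(message):
    # Plain-Python equivalent of the JsonLogic state function:
    # cold below 0, normal below 100, hot otherwise.
    temp = message["value"]["temp"]
    if temp < 0:
        return "cold"
    if temp < 100:
        return "normal"
    return "hot"

def state_key(message):
    # stateKeyPath is "streamId", so the state is tracked per stream
    return message["streamId"]
```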

We assume that the current state of the sensor is "normal". The following data message will generate a state change event from "normal" to "hot" for state key : "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature".

{
"streamId":"urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
"timestamp":"2017-05-24T08:29:49.029Z",
"location":{"lat":37.773972,"lon":-122.431297},
"model":"temperatureDevice_v0",
"value":{"temp":200},
"metadata":{"source":"urn:lo:nsid:dongle:00-14-22-01-23-45"}
}

State change event :

{
        "stateKey": "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
        "previousState":"normal",
        "newState": "hot",
        "timestamp": "2017-05-24T08:29:49.029Z",
        "stateProcessingRuleId": "266d3b22-70e0-4f28-9df1-5186c6094f5b",
        "data": {
                "streamId": "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
                "timestamp":"2017-05-24T08:29:49.029Z",
                "location":{"lat":37.773972,"lon":-122.431297},
                "model":"temperatureDevice_v0",
                "value": {
                        "temp": 200
                },
                "metadata":{"source":"urn:lo:nsid:dongle:00-14-22-01-23-45"}
        }
}

10. MQTT interface

Live Objects supports the MQTT protocol to enable bi-directional (publish/subscribe) communications between devices or applications and the platform.

MQTT can be used with or without encryption (TLS/SSL layer).

Live Objects also supports MQTT over WebSocket.

The Live Objects MQTT interface offers multiple "modes":

  • mode "Device": dedicated to device use-cases, based on simple JSON messages,

  • mode "Bridge": full access to Live Objects internal bus capacities, useful for application or gateway use cases.

10.1. Endpoints

MQTT endpoints:

  • mqtt://liveobjects.orange-business.com:1883 for non SSL connection

  • mqtts://liveobjects.orange-business.com:8883 for SSL connection

MQTT over Websocket endpoints:

  • ws://liveobjects.orange-business.com:80/mqtt

  • wss://liveobjects.orange-business.com:443/mqtt

It is recommended to use the MQTTS endpoint for your production environment, otherwise your communication with Live Objects will not be secured.

The certificate presented by the MQTT server is signed by VeriSign. The public root certificate to import is the following:

-----BEGIN CERTIFICATE-----
MIIE0zCCA7ugAwIBAgIQGNrRniZ96LtKIVjNzGs7SjANBgkqhkiG9w0BAQUFADCB
yjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJp
U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxW
ZXJpU2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0
aG9yaXR5IC0gRzUwHhcNMDYxMTA4MDAwMDAwWhcNMzYwNzE2MjM1OTU5WjCByjEL
MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW
ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJpU2ln
biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp
U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y
aXR5IC0gRzUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvJAgIKXo1
nmAMqudLO07cfLw8RRy7K+D+KQL5VwijZIUVJ/XxrcgxiV0i6CqqpkKzj/i5Vbex
t0uz/o9+B1fs70PbZmIVYc9gDaTY3vjgw2IIPVQT60nKWVSFJuUrjxuf6/WhkcIz
SdhDY2pSS9KP6HBRTdGJaXvHcPaz3BJ023tdS1bTlr8Vd6Gw9KIl8q8ckmcY5fQG
BO+QueQA5N06tRn/Arr0PO7gi+s3i+z016zy9vA9r911kTMZHRxAy3QkGSGT2RT+
rCpSx4/VBEnkjWNHiDxpg8v+R70rfk/Fla4OndTRQ8Bnc+MUCH7lP59zuDMKz10/
NIeWiu5T6CUVAgMBAAGjgbIwga8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8E
BAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJaW1hZ2UvZ2lmMCEwHzAH
BgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYjaHR0cDovL2xvZ28udmVy
aXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFH/TZafC3ey78DAJ80M5+gKv
MzEzMA0GCSqGSIb3DQEBBQUAA4IBAQCTJEowX2LP2BqYLz3q3JktvXf2pXkiOOzE
p6B4Eq1iDkVwZMXnl2YtmAl+X6/WzChl8gGqCBpH3vn5fJJaCGkgDdk+bW48DW7Y
5gaRQBi5+MHt39tBquCWIMnNZBU4gcmU7qKEKQsTb47bDN0lAtukixlE0kF6BWlK
WE9gyn6CagsCqiUXObXbf+eEZSqVir2G3l6BFoMtEMze/aiCKm0oHw0LxOXnGiYZ
4fQRbxC1lfznQgUy286dUV4otp6F01vvpX1FQHKOtw5rDgb7MzVIcbidJ4vEZV8N
hnacRHr2lVz2XTIIM6RUthg/aFzyQkqFOFSDX9HoLPKsEdao7WNq
-----END CERTIFICATE-----

10.2. MQTT support

The MQTT bridge acts as a standard MQTT v3.1.1 message broker (cf. MQTT Protocol Specification 3.1.1), with some limitations:

  • the "Will" functionality is not implemented (all "willXXX" flags and headers are not taken into account),

  • the "retain" functionality is not implemented,

  • the "duplicate" flag is not used.

10.2.1. Connecting

The first packet exchanged should be an MQTT Connect packet, sent from the client to the MQTT endpoint.

This packet must contain:

  • clientId: usage depends on the "mode",

  • username: used to select a mode and encoding,

  • password: a tenant API key, which can be restricted to consuming from / publishing into only specified FIFO queues,

  • willRetain, willQoS, willFlag, willTopic, willMessage: !!! Not taken into account !!!,

  • keepAlive: any value will be correctly interpreted by the MQTT bridge (recommended: 30 seconds).

On reception, the MQTT bridge validates the API Key provided.

  • If the tenantKey is valid, then MQTT Bridge returns a MQTT CONNACK message with return code 0x00 Connection Accepted.

  • If the tenantKey is not valid, then MQTT Bridge returns a MQTT CONNACK message with return code 0x04 Connection Refused: bad user name or password, and closes the TCP connection.

10.2.2. MQTT Ping Req/Res

MQTT Bridge answers to PINGREQ packets with PINGRES packets: this is a way for the MQTT client to avoid connection timeouts.

10.2.3. MQTT Disconnect

MQTT Bridge closes the MQTT / TCP connection when receiving a MQTT DISCONNECT message.

10.2.4. TCP Disconnect

When the TCP connection closes (by client or MQTT bridge), the MQTT bridge will close the currently active subscriptions, etc.

10.3. "Device" mode

In the "Device" mode, a single MQTT connection is associated with a specific device, and JSON messages can be exchanged to support various Device Management and Data features:

  • notifying of the device connectivity status,

  • notifying of the current device configuration and receiving configuration updates,

  • notifying of the list of current device "resources" (i.e. binary contents) versions, and receiving resource update requests,

  • receiving commands and responding to them,

  • sending data messages that will be stored.

Device management features


10.3.1. Connection

When initiating the MQTT connection, to select the "Device" mode you must use the following credentials:

  • clientId : your device unique identifier (cf. Device Identifier (URN)),

  • username : json+device (where "json" indicates the encoding and "device" the mode),

  • password : a valid API key value.

As soon as the MQTT connection has been accepted by Live Objects, your device will appear as "connected" in Live Objects, with various information regarding the MQTT connection.

Once you close the connection (or if the connection times out), your device will appear as "disconnected" in Live Objects.

10.3.2. Device Identifier (URN)

The "device id" used as MQTT client Id must be a valid Live Objects URN of the following format:

urn:lo:nsid:{namespace}:{id}

Where:

  • namespace:
    your device identifier "namespace", used to avoid conflicts between various families of identifiers (ex: device model, identifier class "imei", "msisdn", "mac", etc.).
    Should preferably only contain alphanumeric characters (a-z, A-Z, 0-9).

  • id:
    your device id (ex: IMEI, serial number, MAC address, etc.)
    Should only contain alphanumeric characters (a-z, A-Z, 0-9) and/or any special characters amongst : - _ | + and must avoid # / !.

Examples
urn:lo:nsid:tempSensor:17872800001W
urn:lo:nsid:gtw_M50:7891001
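A small helper can build such URNs and sanity-check them against the constraints listed above. This is an illustrative sketch: the namespace check accepts alphanumerics and underscore (to match the gtw_M50 example, since the "should preferably" wording is not a strict rule), and the id check enforces the allowed special characters:

```python
import re

# urn:lo:nsid:{namespace}:{id} -- namespace: word characters;
# id: alphanumerics plus - _ | + (never # / !)
URN_PATTERN = re.compile(r"^urn:lo:nsid:\w+:[A-Za-z0-9_|+-]+$")

def make_urn(namespace, device_id):
    # Hypothetical helper, not part of the Live Objects API.
    urn = "urn:lo:nsid:%s:%s" % (namespace, device_id)
    if not URN_PATTERN.match(urn):
        raise ValueError("invalid Live Objects URN: " + urn)
    return urn
```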

10.3.3. Summary

Authorized MQTT actions from the device:

publish to

dev/info

to announce the current status

publish to

dev/cfg

to announce the current configuration or respond to a config update request

subscribe to

dev/cfg/upd

to receive configuration update requests

publish to

dev/data

to forward collected data

subscribe to

dev/cmd

to receive commands

publish to

dev/cmd/res

to return command responses

publish to

dev/rsc

to announce the current resource versions

subscribe to

dev/rsc/upd

to receive resource update requests

publish to

dev/rsc/upd/res

to respond to resource update requests

publish to

dev/rsc/upd/err

to announce resource update error

10.3.4. Current Status

To notify Live Objects of its current status, your device must publish a message to the MQTT topic dev/info with the following JSON structure:

{
   "info": <<metadata>>
}

Where:

  • metadata:
    A JSON object describing the current asset status.

Example
{
   "info": {
      "IP": "4.4.4.7",
      "gpsActive": true
   }
}

Live Objects registers that status, or updates the already registered one (adding the newly declared keys/values and overwriting the ones already known), until the next connection.

10.3.5. Current Config

To notify Live Objects of its current configuration, your device must publish a message to the MQTT topic dev/cfg with the following JSON structure:

{
   "cfg": {
      "<<param1Key>>": {
         "t": "<<param1Type>>",
         "v": <<param1Value>>
      },
      ...
   }
}

Where:

  • param{X}Key: the identifier for the device configuration parameters,

  • param{X}Type : indicates the config parameter type among

    • "i32": the value must be an integer between -2,147,483,648 and 2,147,483,647,

    • "u32": the value must be a positive integer between 0 and 4,294,967,295,

    • "str": the value is a UTF-8 string,

    • "bin": the value is a base64 encoded binary content,

    • "f64": the value is float (64 bits) value,

  • param{X}Value : the config parameter value.

Example:
{
   "cfg": {
      "log_level": {
         "t": "str",
         "v": "DEBUG"
      },
      "secret_key": {
         "t": "bin",
         "v": "Nzg3ODY4Ng=="
      },
      "conn_freq": {
         "t": "i32",
         "v": 80000
      }
   }
}
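A device-side sketch for building such a dev/cfg message (the helper and the choice of type tags per parameter are illustrative, not part of the protocol):

```python
import json

def make_cfg_message(params: dict) -> str:
    """Build the JSON body to publish on dev/cfg from {key: (type_tag, value)} pairs.

    Hypothetical helper: the caller is responsible for picking the right
    type tag ("i32", "u32", "str", "bin", "f64") for each parameter.
    """
    return json.dumps({
        "cfg": {key: {"t": t, "v": v} for key, (t, v) in params.items()}
    })

body = make_cfg_message({
    "log_level": ("str", "DEBUG"),
    "conn_freq": ("i32", 80000),
})
```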

10.3.6. Config update

When your device is ready to receive configuration updates, it can subscribe to the MQTT topic dev/cfg/upd from where it will receive messages of the following format:

{
   "cfg": {
      "<<param1Key>>": {
         "t": "<<param1Type>>",
         "v": <<param1Value>>
      },
      ...
   },
   "cid": <<correlationId>>
}

Message fields:

  • param{X}Key : The identifier of a device configuration parameter that must be updated,

  • param{X}Type, param{X}Value : the new type and value to apply to the parameter,

  • correlationId : an identifier that your device must set when publishing your new configuration, so that Live Objects updates the status of your configuration parameters.

Example:
{
   "cfg": {
      "logLevel": {
         "t": "bin",
         "v": "DEBUG"
      },
      "connPeriod": {
         "t": "i32",
         "v": 80000
      }
   },
   "cid": 907237823
}

10.3.7. Config update response

Once your device has processed a configuration update request, it must return a response to Live Objects by publishing on topic dev/cfg the current value for the parameters that were updated:

{
   "cfg": {
      "<<param1Key>>": {
         "t": "<<param1Type>>",
         "v": <<param1Value>>,
      },
      ...
   },
   "cid": <<correlationId>>
}

Message fields:

  • cfg : the new configuration of your device (complete, or at least all the parameters that were in the configuration update request),

  • correlationId : the correlationId of the configuration update request.

Example:

{
   "cfg": {
      "logLevel": {
         "t": "bin",
         "v": "DEBUG"
      },
      "connPeriod": {
         "t": "i32",
         "v": 80000
      }
   },
   "cid": 907237823
}
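Putting the last two sections together, a device-side handler could process a dev/cfg/upd message and build the matching dev/cfg response like this (sketch; apply_param is a hypothetical device hook, not part of the protocol):

```python
import json

def handle_cfg_update(request_json: str, apply_param) -> str:
    """Process a dev/cfg/upd message and build the dev/cfg response body.

    apply_param(key, type_tag, value) -> (type_tag, value) must return the
    value actually in effect after the update attempt (device-specific hook).
    """
    req = json.loads(request_json)
    applied = {}
    for key, spec in req["cfg"].items():
        t, v = apply_param(key, spec["t"], spec["v"])
        applied[key] = {"t": t, "v": v}
    # Echo the request's cid so Live Objects can mark each parameter
    # as updated (value matches) or failed (value differs).
    return json.dumps({"cfg": applied, "cid": req["cid"]})
```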

If the new value for a parameter is the one that was requested in the configuration update request, the parameter will be considered as successfully updated by Live Objects.

If the new value for a parameter is not the one requested, the parameter update will be considered as "failed" by Live Objects.

10.3.8. Data push

To publish collected data into Live Objects, your device must publish on the MQTT topic dev/data the following messages:

{
   "s":  "<<streamId>>",
   "ts": "<<timestamp>>",
   "m":  "<<model>>",
   "v": {
          ... <<value>> JSON object ...
   },
   "t" : [<<tag1>>,<<tag2>>,...]
   "loc": [<<latitude>>, <<longitude>>]
}

Message fields:

  • streamId : identifier of the timeseries this message belongs to,

  • timestamp : date/time associated with the message, in ISO 8601 format,

  • model : a string identifying the schema used for the "value" part of the message, to avoid conflict at data indexing,

  • value : a free JSON object describing the collected information,

  • tags : list of strings associated to the message to convey extra-information,

  • latitude, longitude : details of the geo location associated with the message (in degrees).

Example:
{
   "s":   "mydevice!temp",
   "ts":  "2016-01-01T12:15:02Z",
   "m":   "tempV1",
   "loc": [45.4535, 4.5032],
   "v": {
      "temp":     12.75,
      "humidity": 62.1,
      "gpsFix":   true,
      "gpsSats":   [12, 14, 21]
   },
   "t" : [ "City.NYC", "Model.Prototype" ]
}
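A sketch of building such a dev/data message on the device (the helper is hypothetical; only the field names come from the format above):

```python
import json
from datetime import datetime, timezone

def make_data_message(stream_id, model, value, tags=None, loc=None, ts=None):
    """Build the JSON body for a dev/data publication."""
    msg = {
        "s": stream_id,
        # ISO 8601 timestamp; defaults to "now" in UTC.
        "ts": (ts or datetime.now(timezone.utc)).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "m": model,
        "v": value,
    }
    if tags:
        msg["t"] = tags
    if loc:
        msg["loc"] = loc  # [latitude, longitude] in degrees
    return json.dumps(msg)
```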

10.3.9. Commands

When your device is ready to receive commands, it can subscribe to the MQTT topic dev/cmd from where it can receive the following messages:

{
   "req":  "<<request>>",
   "arg": {
      "<<arg1>>": <<arg1Value>>,
      "<<arg2>>": <<arg2Value>>,
      ...
   },
   "cid":  <<correlationId>>
}

Message fields:

  • request : string identifying the method called on the device,

  • arg{X}, arg{X}Value : name and value (any valid JSON value) of an argument passed to the request call,

  • correlationId : an identifier that must be returned in the command response to help Live Objects match the response and request.

Example:
{
   "req":  "buzz",
   "arg": {
      "durationSec": 100,
      "freqHz":     800.0
   },
   "cid": 12238987
}

10.3.10. Commands response

To respond to a command, your device must publish the response to the MQTT topic dev/cmd/res with a message of the following format:

{
   "res": {
      "<<res1>>": "<<res1Value>>",
      "<<res2>>": "<<res2Value>>",
      ...
   },
   "cid":  <<correlationId>>
}

Message fields:

  • res{X}, res{X}Value : optional information returned by the command execution,

  • correlationId : a copy of the command correlationId value.

Example #1:
{
   "res": {
      "done": true
   },
   "cid": 12238987
}
Example #2:
{
   "res": {
      "error": "unknown method 'buzz'"
   },
   "cid": 12238987
}
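A device-side dispatcher tying the two message formats together might look like this (sketch; the handler registry is hypothetical, only the req/arg/res/cid field names come from the protocol):

```python
import json

def handle_command(message_json: str, handlers: dict) -> str:
    """Dispatch a dev/cmd message and build the dev/cmd/res response body.

    handlers maps method names to callables taking the "arg" object
    (hypothetical device-side registry).
    """
    msg = json.loads(message_json)
    handler = handlers.get(msg["req"])
    if handler is None:
        res = {"error": "unknown method '%s'" % msg["req"]}
    else:
        res = handler(msg.get("arg", {}))
    # Always copy the command's cid so Live Objects can match the response.
    return json.dumps({"res": res, "cid": msg["cid"]})
```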

10.3.11. Current Resources

Once connected, your device can announce the currently deployed versions of its resources by publishing a message on MQTT topic dev/rsc with the following format:

{
   "rsc": {
      "<<resource1Id>>": {
         "v": "<<resource1Version>>",
         "m": <<resource1Metadata>>
      },
      "<<resource2Id>>": {
         "v": "<<resource2Version>>",
         "m": <<resource2Metadata>>
      },
      ...
   }
}

Message fields:

  • resource{X}Id : resource identifier,

  • resource{X}Version : currently deployed version of this resource,

  • resource{X}Metadata : (JSON object) (optional) metadata associated with this resource, useful to resource update.

Example:
{
   "rsc": {
      "X11_firmware": {
         "v": "1.2",
         "m": {
            "username": "78723-672-1232"
         }
      },
      "X11_modem_driver": {
         "v": "4.0.M2"
      }
   }
}

10.3.12. Resources update

When your device is ready to receive resource update requests, it just needs to subscribe to the MQTT topic dev/rsc/upd. From then on it will receive such requests as messages with the following JSON format:

{
   "id": "<<resourceId>>",
   "old": "<<resourceCurrentVersion>>",
   "new": "<<resourceNewVersion>>",
   "m": {
      // ... <<metadata>> JSON object ...,
   },
   "cid": "<<correlationId>>"
}

Message fields:

  • resourceId : identifier of resource to update,

  • resourceCurrentVersion : current resource version,

  • resourceNewVersion : new resource version, to download and apply,

  • correlationId : an identifier that must be returned in the resource update response to help Live Objects match the response and request.

Example:
{
   "id": "X11_firmware",
   "old": "1.1",
   "new": "1.2",
   "m": {
      "uri": "http://.../firmware/1.2.bin",
      "md5": "098f6bcd4621d373cade4e832627b4f6"
   },
   "cid": 3378454
}
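On the device side, the "md5" metadata from the example above can be used to verify the downloaded resource before accepting the update. A minimal sketch (the "md5" key is taken from the example; whether it is present depends on how the resource was declared):

```python
import hashlib

def firmware_matches_md5(firmware: bytes, expected_md5: str) -> bool:
    """Check a downloaded resource against the "md5" metadata of the update request."""
    return hashlib.md5(firmware).hexdigest() == expected_md5.lower()
```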

10.3.13. Resources update response

Once your device receives a "Resource update request", it needs to respond to indicate if it accepts or not the new resource version, by publishing a message on topic dev/rsc/upd/res with the following JSON format:

{
   "res": "<<responseStatus>>",
   "cid": "<<correlationId>>"
}

Message fields:

  • responseStatus : indicates the response status to the resource update request:

    • "OK" : the update is accepted and will start,

    • "UNKNOWN_RESOURCE" : the update is refused, because the resource (identifier) is unsupported by the device,

    • "WRONG_SOURCE_VERSION" : the device is no longer in the "current" (old) resource version specified in the resource update request,

    • "WRONG_TARGET_VERSION" : the device doesn’t support the "target" (new) resource version specified in the resource update request,

    • "INVALID_RESOURCE" : the requested new resource version has incorrect version format or metadata,

    • "NOT_AUTHORIZED" : the device refuses to update the targeted resource (ex: bad timing, "read-only" resource, etc.),

    • "INTERNAL_ERROR" : an error occurred on the device, preventing for the requested resource update,

  • correlationId : copy of the correlationId field from the resource update request.

Example #1:
{
   "res": "OK",
   "cid": 3378454
}
Example #2:
{
   "res": "UNKNOWN_RESOURCE",
   "cid": 778794
}

10.3.14. Resources update response error

Device can report a custom resource update error by publishing a message on MQTT topic dev/rsc/upd/err with the following format:

{
   "errorCode":"ERROR_CODE",
   "errorDetails":"error details"
}

Message fields:

  • errorCode : (optional) device error code,

  • errorDetails : device error details.

These fields are limited to 256 characters. Characters beyond this limit will be ignored.

Example:
{
   "errorCode":"DEV123.Z_FIRMW_CRC",
   "errorDetails":"error while loading firmware, bad CRC"
}
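Since characters beyond the 256-character limit are ignored anyway, a device may as well truncate client-side. A sketch (hypothetical helper):

```python
def make_update_error(error_code: str, error_details: str) -> dict:
    """Build a dev/rsc/upd/err message body, truncating each field to 256 characters."""
    msg = {"errorDetails": error_details[:256]}
    if error_code:  # errorCode is optional
        msg["errorCode"] = error_code[:256]
    return msg
```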

10.4. "Bridge" mode

In the "Bridge" mode, a single MQTT connection can be used to exchange data related to multiple devices or applications.

For example, a "gateway" device could communicate with Live Objects and forward data collected by multiple devices using this mode.

An application that wants to consume flows of data collected by Live Objects, and interact with devices through Live Objects, would also use this mode.

10.4.1. Connection

When initiating the MQTT connection, to select the "Bridge" mode you must use the following credentials:

  • clientId : any value - only used as "consumerId" for the Router subscriptions,

  • username : format "{encoding}+bridge" (or just "{encoding}") :

    • "json+bridge" : select "Bridge" mode with "JSON encoding (V0)",

    • "payload+bridge" : select "Bridge" mode with no encoding (only message payloads are available).

10.4.2. Summary

In "bridge" mode, the topics used for publications and subscriptions must follow on of the following format:

  • pubsub/{pubSubTopic}, to use the Live Objects bus in "PubSub" mode,

  • fifo/{fifoId}, to directly publish into / consume from a specific FIFO queue; this works only if the API key used has no restriction, or if the FIFO queue is specified in the API key’s restriction list,

  • router/{routingKey}, to directly publish to the Live Objects "Router" or consume from it.

All publications made on the MQTT bridge are forwarded to the Live Objects message bus as FIFO, PubSub or Router publications.

All subscriptions made on the MQTT bridge are forwarded to the Live Objects message bus as FIFO, PubSub or Router subscriptions.

10.4.3. PubSub publication

To publish on a PubSub topic, the MQTT client must publish in MQTT on a topic of the following format:

pubsub/{pubSubTopic}

where pubSubTopic is the name of the PubSub topic.

If pubSubTopic starts with the "~" character, then the selected "encoding" is applied to decode the published MQTT message:

  • if encoding = "JSON (V0)", the message should be a valid JSON-encoded message,

  • if encoding = "payload", the MQTT message content becomes the Live Objects message payload.

If pubSubTopic does not start with the "~" character, then the MQTT message content becomes the generated Live Objects message payload.

MQTT message "qos" 0, 1 and 2 are supported, but don’t offer any guarantee here: currently subscribed client to this PubSub topic may or may not receive the message.
Example #1 - any encoding / random message on standard topic
[on MQTT interface]
   action  = MQTT PUBLISH
   topic   = 'pubsub/data'
   content = 'Hello world!'

[on Live Objects bus]
   action  = PubSub publication
   topic   = 'data'
   message = ( payload = "Hello world!" )
Example #2 - JSON encoding / bad message on "~" topic
[on MQTT interface, with encoding=JSON (V0)]
   action  = MQTT PUBLISH
   topic   = 'pubsub/~device/connects'
   content = 'blob'

=> message is not a valid JSON message,
so message is dropped and MQTT connection closed.
Example #3 - JSON encoding / correct message on "~" topic
[on MQTT interface, with encoding = JSON (V0)]
   action  = MQTT PUBLISH
   topic   = 'pubsub/~device/connects'
   content = '{"payload":"Hello world!","timestamp": 1447944553720}'

[on Live Objects bus]
   action  = PubSub publication
   topic   = '~device/connects'
   message = ( payload = "Hello world!" , timestamp = 1447944553720 )
Example #4 - payload encoding / random message on "~" topic
[on MQTT interface, with encoding = payload]
   action  = MQTT PUBLISH
   topic   = 'pubsub/~device/connects'
   content = 'test 1 2 3'

[on Live Objects bus]
   action  = PubSub publication
   topic   = '~device/connects'
   message = ( payload = "test 1 2 3" )

10.4.4. PubSub subscription

To subscribe to a PubSub topic, a MQTT client connected in "Bridge" mode must subscribe to the following MQTT topic:

pubsub/{pubSubTopic}

where pubSubTopic is the name of the PubSub topic.

A MQTT SUBACK packet is returned by Live Objects only once the subscription is active on the Live Objects internal message bus.

If pubSubTopic starts with the "~" character, then the selected "encoding" is applied to encode the messages consumed from the internal Live Objects message bus.

If pubSubTopic does not start with the "~" character, then only the Live Objects message "payload" attribute is returned in the MQTT message.

MQTT message "qos" 0, 1 and 2 are supported, but don’t offer any guarantee here: currently subscribed client to this PubSub topic may or may not receive the message.

10.4.5. FIFO publication

To publish directly into a FIFO queue, a MQTT client connected in "Bridge" mode must publish to the following MQTT topic:

fifo/{fifoId}

where fifoId is the identifier of the targeted FIFO queue.

If the "fifoId" starts with "~", the same process is applied to the MQTT publication as for the PubSub publication.

Regarding the "qos" of the MQTT publication:

  • qos = 0 : no acknowledgement is returned, so no guarantee is offered to the client,

  • qos = 1 : a MQTT PUBACK packet is returned only once the message has been stored into the targeted FIFO, or once the message has been dropped because the targeted FIFO does not exist,

  • qos = 2 : same as for qos=1, but acknowledged with the qos 2 handshake (PUBREC, then PUBCOMP after the client’s PUBREL).

10.4.6. FIFO subscription

To subscribe to a FIFO queue, a MQTT client connected in "Bridge" mode must subscribe to the following MQTT topic:

fifo/{fifoId}

where fifoId is the identifier of the targeted FIFO queue.

If the subscription succeeds, Live Objects returns a MQTT SUBACK packet, with a return code equal to the requested qos, only once the subscription is active.

If the subscription fails (for ex. because the FIFO does not exist), a MQTT SUBACK packet is returned with return code 0X80 (= Failure).

As for the PubSub subscriptions:

If fifoId starts with the "~" character, then the selected "encoding" is applied to encode the messages consumed from the internal Live Objects message bus.

If fifoId does not start with the "~" character, then only the Live Objects message "payload" attribute is returned in the MQTT message.

Regarding MQTT subscription "qos":

  • qos = 0 : messages consumed from the FIFO disappear from the FIFO queue as soon as they are written to the socket by the Live Objects MQTT interface - so consuming from a FIFO with qos=0 offers no guarantee of message delivery,

  • qos = 1 or 2 : messages consumed from the FIFO are removed from the FIFO only once the first acknowledgement (PUBACK or PUBREC) is received from the subscribed client - by consuming with qos > 0 from a FIFO queue, 'at least once' message delivery is guaranteed.

10.4.7. Router publication

To publish on the Live Objects "Router", a MQTT client connected in "Bridge" mode must publish to the following MQTT topic:

router/{mqttRoutingKey}

If mqttRoutingKey starts with ~, the same process is applied to the MQTT publication as for the PubSub publication.

The message is then published on the Router of the Live Objects internal bus, with a routing key equal to mqttRoutingKey where all the "/" have been replaced by ".".

(this conversion enables a more MQTT-friendly format for the Live Objects routing keys)

Regarding the "qos" of the MQTT publication:

  • qos = 0 : no acknowledgement is returned, so no guarantee is offered to the client,

  • qos = 1 : a MQTT PUBACK packet is returned only once the message has been stored into all FIFO queues with bindings matching the routing key,

  • qos = 2 : same as for qos=1, but acknowledged with the qos 2 handshake (PUBREC, then PUBCOMP after the client’s PUBREL).

10.4.8. Router subscription

To subscribe to the Live Objects Router, a MQTT client connected in Bridge mode must subscribe to the following MQTT topic:

router/{mqttRoutingKeyFilter}

Live Objects then creates a subscription directly on the message bus Router, with a routing key filter equal to the converted mqttRoutingKeyFilter:

  • every / is replaced by a .

  • every MQTT wildcard + is replaced by a *

  • MQTT wildcard # stays #

Examples:
  • router/# = Router subscription with routing key filter "#"

  • router/~android/1233231/data = Router subscription with routing key filter ~android.1233231.data

  • router/~android/+/data = Router subscription with routing key filter ~android.*.data

  • router/~android/# = Router subscription with routing key filter ~android.#
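The conversion rules above can be sketched as a small helper (hypothetical function name; it assumes the "router/" prefix is stripped before conversion):

```python
def mqtt_filter_to_routing_key(mqtt_topic: str) -> str:
    """Convert a bridge-mode MQTT topic filter ("router/...") into the
    corresponding Live Objects Router routing key filter."""
    key = mqtt_topic[len("router/"):] if mqtt_topic.startswith("router/") else mqtt_topic
    # '/' separators become '.', the MQTT '+' wildcard becomes '*', '#' stays '#'.
    return key.replace("/", ".").replace("+", "*")
```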

Once the subscription is active, Live Objects returns a MQTT SUBACK packet with a return code equal to the requested qos.

If a problem occurs and the subscription fails, a MQTT SUBACK packet is returned with return code 0X80 (= Failure).

As for the PubSub subscriptions:

If mqttRoutingKeyFilter starts with the ~ character, then the selected encoding is applied to encode the messages consumed from the internal Live Objects message bus.

If mqttRoutingKeyFilter does not start with the ~ character, then only the Live Objects message payload attribute is returned in the MQTT message.

Regarding MQTT subscription qos: all values (0, 1, 2) are supported but don’t offer any delivery guarantee.

10.4.8.1. Router subscription for data message

These topics allow subscribing to the data messages sent to Live Objects.

Relevant topics :
  • router/~event/v1/data/new/ to subscribe to all data messages sent to Live Objects. Associated routing key filter is ~event.v1.data.new.

  • router/~event/v1/data/new/urn/lora/ to subscribe to the uplink data messages of all LPWA devices. Associated routing key filter is ~event.v1.data.new.urn.lora.

  • router/~event/v1/data/new/urn/msisdn/ to subscribe to the uplink data messages of all devices using the SMS interface. Associated routing key filter is ~event.v1.data.new.urn.msisdn.

Example: to subscribe to a specific LPWA device:
  • router/~event/v1/data/new/urn/lora/<devEUI>/ to subscribe to the uplink data stream of one device. Associated routing key filter is ~event.v1.data.new.urn.lora.<devEUI>.

11. REST API

11.1. Endpoints

https://liveobjects.orange-business.com/api/

The current version is version “v0”. As a consequence, all methods described in this document are available on URLs starting with:

https://liveobjects.orange-business.com/api/v0/

11.2. Principles

Live Objects exposes a REST API providing these functionalities:

  • API key operations

  • Device management for managed devices (inventory, parameters, commands, resources operations)

  • Device management for MyPlug devices

  • Device management for LPWA

  • Bus management (create FIFO, binding)

  • Data management (store and search)

  • Contact : Email management (send email)

  • Portal User management

11.2.1. Content

By default all methods that consume or return content only accept one format: JSON (cf. http://json.org ).

As a consequence, for those methods the use of HTTP headers Content-Type or Accept with value application/json is optional.

11.2.2. API-key authentication

Clients of the Live Objects Rest API are authenticated, based on an API key that must be provided with any request made to the API.

This API key must be added as an HTTP header named X-API-Key to the request.

Example (HTTP request to the API)
GET /api/v0/assets HTTP/1.1
Host: <base URL>
X-API-Key: <API key>

If you don’t provide such an API Key, or if you use an invalid API key, Live Objects responds with the standard HTTP Status code 403 Forbidden.
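For illustration, preparing such a request with Python’s standard library (the path is taken from the example above; the API key value is a placeholder):

```python
from urllib.request import Request

def api_request(path: str, api_key: str) -> Request:
    """Prepare a GET request to the Live Objects REST API with the X-API-Key header."""
    return Request(
        "https://liveobjects.orange-business.com" + path,
        headers={"X-API-Key": api_key, "Accept": "application/json"},
    )

req = api_request("/api/v0/assets", "my-api-key")
# urllib.request.urlopen(req) would then perform the authenticated call.
```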

11.2.3. Paging

Some methods that return a list of entities allow paging: the method doesn’t return the full list of entities, but only a subset of the complete list matching your request.

You need to use two standard query parameters (i.e. that must be added at the end of the URL, after a ?, separated by a &, and defined like this: <param>=<value>):

  • size: maximum number of items to return (i.e. number of items per page),

  • page: number of the page to display (starts at 0).

Those parameters are not mandatory: by default page is set to 0 and size to 20.

Example:

  • If size=10 and page=0, then items number 0 to 9 (at most) will be returned.

  • If size=20 and page=1, then items number 20 to 39 (at most) will be returned.
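The arithmetic behind these examples is simply first item = page × size; a tiny sketch (hypothetical helper):

```python
def page_bounds(page: int, size: int):
    """Return the (first, last) item indices covered by a page."""
    first = page * size
    return first, first + size - 1
```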

Example (HTTP request to the API)
GET /api/v0/assets?page=100&size=20 HTTP/1.1
Host: <base URL>
X-API-Key: <API key>

The response of such a method is a “page” of items - a JSON object with the following attributes:

  • totalCount: total number of entities matching request in service (only part of them are returned),

  • size: the value for “size” taken into account (can be different from the one in the request if that value was invalid),

  • page: the value for “page” taken into account (can be different from the one in the request if that value was invalid),

  • data: list of returned entities.

11.3. Swagger

All HTTP REST methods (device management, data management and bus management, etc.) are described in the swagger available here : https://liveobjects.orange-business.com/swagger-ui/index.html.

12. Web portal

The Live Objects web portal is available at https://liveobjects.orange-business.com.

12.1. landing page

landing

12.2. sign in

landing

12.3. dashboard (home)

landing

12.4. devices

12.4.1. device list

landing

12.4.2. device status

landing

12.4.3. device parameters

landing

12.4.4. device commands

landing

12.4.5. device resources

landing

12.5. data

landing

12.6. simulating

landing

12.7. configuration

12.7.1. account

landing

12.7.2. API keys

landing

12.7.3. users

landing

12.7.4. messages

landing

12.7.5. device resources

landing

13. MyPlug interface

Messages are available on the Live Objects bus as notifications of activity from the MyPlug devices and accessories associated with your account.

Those messages are published in Router mode with the following routing keys:

  • router/~event/myplug/{gatewayMac}/event for events triggered by a MyPlug gateway,

  • router/~event/myplug_acc/{accessoryMac}/event for events generated from accessories.

13.1. message structure

13.1.1. source

The source attribute/field of the messages emitted from MyPlug activity identifies the MyPlug gateway, or the MyPlug gateway + accessory, that triggered the event.

If the event has been triggered by the MyPlug gateway only (ex: communication lost, new accessory association…​), then source contains only one element: an element with order=0, namespace "myplug" and, as id, the MAC identifier of the MyPlug gateway.

{
   "source": [
      {
         "order": 0,
         "namespace": "myplug",
         "id": "283657E9A51A1F0A"
      }],

   ...

}

If the event has been triggered by a MyPlug accessory (ex: flood alarm…​), then source contains two elements:

  • a source element with order=0, namespace="myplug_acc" and the accessory MAC identifier as id,

  • a source element with order=1, namespace="myplug" and the gateway MAC identifier as id.

{
   "source": [{
      "order": 0,
      "namespace": "myplug_acc",
      "id": "A6564CD9756FD32D"
   },{
      "order": 1,
      "namespace": "myplug",
      "id": "283657E9A51A1F0A"
   }],

   ...

}
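A consumer can recover the gateway and accessory identifiers from the source list with a small helper (hypothetical name; it relies only on the namespace convention described above):

```python
def identify_source(message: dict):
    """Extract (gateway_mac, accessory_mac_or_None) from a MyPlug message "source" list."""
    gateway = accessory = None
    for element in message["source"]:
        if element["namespace"] == "myplug":
            gateway = element["id"]
        elif element["namespace"] == "myplug_acc":
            accessory = element["id"]
    return gateway, accessory
```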
13.1.1.1. timestamp

This is the instant of the event generation. It is a Java epoch timestamp, i.e. the number of milliseconds elapsed since 1970-01-01T00:00:00Z.

13.1.1.2. event

This field contains the event information. The type of the alarm is a string that depends on the asset that has produced the alarm.

13.1.1.3. eventLifecycle

The position of the event in its life cycle. Here are the possible values: ONE_SHOT, BEGIN, END, ONGOING.

13.1.1.4. data

The content of this field depends on the MyPlug event that is described by the message.

Some data values are always present:

  • type: the accessory type that generated the event,

  • name: in case of an accessory event, the name of the accessory.

13.1.2. standard events

  • MQTT topic = "router/~event/myplug/{myplugMac}/event"

  • message:

    • event: cf. table,

    • data: cf. table.

Available events:

event | eventLifeCycle | data fields | meaning
GwLost | ONE_SHOT | - | No communication with the gateway for 25 hours; failed to communicate with the gateway.
BindEvent | BEGIN | "type" ⇒ type of accessory | An accessory has been associated to the LiveIntercom.
BindEvent | END | "type" ⇒ type of accessory | An accessory has been dissociated from the LiveIntercom.

13.1.2.1. Accessory: LiveIntercom
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "MY_INTERCOM"

Available events:

event | eventLifeCycle | meaning
PowerLost | BEGIN | The LiveIntercom has just been disconnected from the power supply.
PowerLost | ONGOING | The LiveIntercom is still not connected to the power supply.
PowerLost | END | The LiveIntercom is connected again to the power supply.
LowBat | ONE_SHOT | The battery level is low.
AlarmPb | ONE_SHOT | The user has pressed the alarm button.
SocAlarmPb | ONE_SHOT | The user has pressed the social alarm button.

13.1.2.2. Accessory: Emergency push button
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "SOCIAL_ALARM_BUTTON"

Available events:

event | eventLifeCycle | meaning
Lowbat | ONE_SHOT | The battery level is low.
EmergencyPb | ONE_SHOT | Button push action.
Comlost | BEGIN | Local RF communication lost.
Comlost | END | Local RF communication restored.

13.1.2.3. Accessory: Smoke detector
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "SMOKE_DETECTOR"

Available events:

event | eventLifeCycle | meaning
Lowbat | ONE_SHOT | The battery level is low.
Smoke | ONE_SHOT | Smoke detected.
Comlost | BEGIN | Local RF communication lost.
Comlost | END | Local RF communication restored.
TestPb | ONE_SHOT | Test button pushed.

13.1.2.4. Accessory: Flood detector
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "FLOOD_DETECTOR"

Available events:

event | eventLifeCycle | meaning
Lowbat | ONE_SHOT | The battery level is low.
Flood | ONE_SHOT | Flood detected.
Comlost | BEGIN | Local RF communication lost.
Comlost | END | Local RF communication restored.

13.1.2.5. Accessory: Wall plug
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "WALL_PLUG"

Available events:

event | eventLifeCycle | meaning
Lowbat | ONE_SHOT | The battery level is low.
SwitchError | ONE_SHOT | Switch error.
PowerLost | BEGIN | The WallPlug has just been disconnected from the power supply.
PowerLost | END | The WallPlug is connected again to the power supply.
Comlost | BEGIN | Local RF communication lost.
Comlost | END | Local RF communication restored.

14. Limitations

14.1. Rate limiting

Rate limiting is applied to each API key and controls the number of calls or messages per time window (e.g. 1 call per second). Depending on the offer, a rate limiting configuration may be applied to the HTTP interface, the MQTT interface, or both.

Http interface

Each response of the web controller contains 3 headers giving additional information on the status of the current request regarding rate limitation:

X-RateLimit-Limit: 5
X-RateLimit-Remaining: 3
X-RateLimit-Reset: 1479745936295
  • X-RateLimit-Limit is the rate limit ceiling per second,

  • X-RateLimit-Remaining is the number of requests left in the current time window,

  • X-RateLimit-Reset is the ending date of the current time window (expressed in epoch milliseconds).

When receiving a request that would exceed the authorized traffic limit, the web application returns a 429 Too Many Requests error with an empty body.

Note that all X-RateLimit headers are present in the response, as they would be in a successful response.
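A client can use these headers to throttle itself instead of running into 429 responses; a minimal sketch (hypothetical helper, header names as in the example above):

```python
import time

def seconds_until_reset(headers: dict, now_ms=None) -> float:
    """From the X-RateLimit-* response headers, return how long to wait
    before the next time window, or 0.0 if requests remain in this one."""
    if int(headers.get("X-RateLimit-Remaining", "1")) > 0:
        return 0.0
    reset_ms = int(headers["X-RateLimit-Reset"])
    now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
    return max(0.0, (reset_ms - now_ms) / 1000.0)
```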

Mqtt interface

For MQTT connections, if the quota is reached, the MQTT session is disconnected. If an API key is used for several MQTT sessions at the same time, the sum of the requests is computed for this API key.

No reason or additional information is provided to the client software. The client is expected to try to reconnect repeatedly and re-send its data until traffic is allowed again in the next time window.

Limitation | Trial offer
REST Max req. per s. per API Key | 5
MQTT Max req. per s. per API Key (uplink) | 5

14.2. Resources limitation

Limitation | Trial offer
Number of FIFO | 2
Size max - sum of FIFO (bytes) | 1 048 576
Maximum number of users | 5
Maximum number of API keys | 5

14.3. Compute quota

Limitation | Trial offer
Search service | 30 ms per window of 10 s.

15. Glossary

API : Application Programming Interface

FIFO : First In First Out

HTTP : HyperText Transfer Protocol

IoT : Internet of Things

IP : Internet Protocol

LED : Light-Emitting Diode

LPWA : Low-Power, Wide Area radio protocol

LPWAN : Low-Power, Wide Area Network

LOM : Live Objects Manage

M2M : Machine To Machine

MQTT : Message Queue Telemetry Transport

PPA : Personal Package Archives

PubSub : Publish and Subscribe

REST : REpresentational State Transfer

SaaS : Software as a Service

SDK : Software Development Kit

SIM : Subscriber Identity Module

TCP : Transmission Control Protocol