1. Introduction

This document is a user-guide for the Live Objects service.

The Live Objects LPWA developer guide is available here.

For any question, comment or improvement regarding this document, please send us an email at liveobjects.support@orange.com.

2. Overview

2.1. What is Live Objects?

Live Objects is one of the products belonging to the Orange Datavenue service suite.

Live Objects is a software suite for IoT / M2M solution integrators offering a set of tools to facilitate the interconnection between devices (or connected « things ») and business applications:

  • Connectivity Interfaces (public and private) to collect data and send commands or notifications from/to IoT/M2M devices,

  • Device Management (supervision, configuration, resources, firmware, etc.),

  • Message Routing between devices and business applications,

  • Data Management: Data Storage with Advanced Search features.

Live Objects overview


It can be used in Software as a Service (SaaS) mode, or deployed “on premises” in a datacenter of the customer’s choice.

The public interfaces are reachable from the internet. The private interfaces provide connectivity with a selection of devices (MyPlug) or specific networks (LPWAN).

The SaaS allows multiple tenants on the same instance without any possible interaction across tenant accounts (i.e. isolation: for example, a device belonging to one tenant cannot communicate with a device belonging to another tenant).

A web portal provides a UI for administration functions such as managing the message bus configuration, supervising your devices and controlling access to the tenant.

2.2. Architecture

Live Objects SaaS architecture is composed of three complementary layers:

  • Connectivity layer: manages the communications with the client devices and applications,

  • Bus layer: a set of message-oriented middleware allowing asynchronous exchanges between our software modules,

  • Service layer: various modules supporting the high level functions (device management, data processing and storage, etc.).

Live Objects architecture


2.3. Connectivity layer

2.3.1. Public interfaces

Live Objects exposes a set of standard and unified public interfaces allowing you to connect any programmable device, gateway or functional IoT backend.

The existing public interfaces are:

MQTT is an industry standard protocol which is designed for efficient exchange of data from and to devices in real-time. It is a binary protocol and MQTT libraries have a small footprint.

HTTPS is better suited to scarcely connected devices. It does not provide an efficient way to communicate from the SaaS to the devices (periodic polling is required, for example).

For more info, see "Message encodings" section

The public interfaces share a common security scheme based on API keys that you can manage from Live Objects APIs and web portal.

2.3.2. Private interfaces

Live Objects is fully integrated with a selection of devices and networks. It handles communications from specific families of devices using defined protocols (over IP), and translates them into standardized messages available on the Live Objects message bus.

The existing private interfaces are:

  • LPWAN interface connected with LPWAN network server,

    • to provision LPWAN devices

    • to receive and send data from/to LPWAN devices

  • MyPlug interface connected with MyPlug gateways,

    • to provision MyPlug gateways

    • to receive and send data from/to MyPlug gateways and accessories

2.4. Bus layer

Live Objects connectivity interfaces connect to a message bus that can route messages to external business applications or internal micro-services (device management, store and search services).

The message bus offers three distinct modes:

  • Router : adapted to situations where publishers don’t know the destination of the messages. Messages can be either consumed with transient subscriptions or static "Bindings" can be declared to route messages into FIFO queues. More info: Router mode.

  • PubSub : a good fit for real-time transient exchanges. Messages are broadcast to all currently available subscribers or dropped. More info: PubSub mode,

  • FIFO : the solution to prevent message loss in case of consumer unavailability. Messages are stored in a queue on disk until consumed and acknowledged. When multiple consumers are subscribed to the same queue concurrently, messages are load-balanced between available consumers. More info: FIFO mode,

Various usage of Live Objects message bus


For more info, see "Message Bus" chapter.

2.5. Service layer

2.5.1. Device management

Live Objects offers various functions dedicated to device operators:

  • supervise devices connection and disconnection to/from the SaaS,

  • manage devices configuration parameters,

  • send commands to devices and monitor the status of these commands,

  • send resources (any binary file) to devices and monitor the status of this operation.

Live Objects attempts to send commands and resources, or update parameters on the asset, as soon as the asset is connected and available.

For more info, see "Device Management" chapter.

2.5.2. Data management

Live Objects can store the data collected from any connectivity interface. These data can then be retrieved using the HTTP REST interface.

A full-text search engine based on Elasticsearch is provided in order to analyze the stored data. This service is accessible through an HTTP REST interface.

For more info, see "Data Management" chapter.

2.5.3. Simple Event Processing

The simple event processing service is aimed at detecting notable single events in the flow of data messages.

Based on processing rules that you define, it generates fired events that your business application can consume to initiate downstream actions such as raising an alarm, executing a business process, etc.

For more info, see "Event Processing" chapter.

2.6. Security

2.6.1. API keys

API keys are used to control access to the SaaS: devices, applications and users authenticate with them. You must create an API key to use the API.

2.6.2. Users management

An account administrator can add users to his account. A user is associated with a list of roles. These users can connect to the Live Objects web portal.

3. Getting started

This chapter is a step-by-step manual for new users of Live Objects giving instructions covering the basic use cases of the service.

3.1. Account creation

In order to use Live Objects, you need to have a dedicated account on the service.

Please contact the Live Objects team to request an account : liveobjects.support@orange.com. A valid email address will be required to create your account. Once the account is created, you should receive an email with an activation link.

account activation email


By clicking on Account Activation, you are redirected to a web page where you can choose the password of your user account.

Once you have entered your password twice and a correct "captcha", then clicked on "update password", you are redirected to the Live Objects sign-in page where you can now log into your newly created user account.

3.2. Signing in

To log in to the Live Objects web portal, connect to liveobjects.orange-business.com using your web browser:


  • Fill in the Log in form with your credentials:

    • your email address,

    • the password set during the activation phase,

  • then click on the Log in button.

If the credentials are correct, a success message is displayed and you are redirected to your “home” page:


3.3. Creating an API Key

To get a device or an application communicating with Live Objects Manage, you will need to create an API Key.

On the left menu, click on api keys and create a new API key. This key will be necessary to set up a connection with the public interfaces (MQTT and REST) of Live Objects Manage.


As a security measure, the API key cannot be retrieved again after you have closed the API key creation results page. So note it down to use with the MQTT client for the rest of this getting started.


3.4. Connecting an MQTT device

It is up to you to choose your favorite MQTT client or library. We will use the MQTT.fx client here. This client is available on Windows/MacOSX/Linux and is free. Download and install the latest version of MQTT.fx.

We will start by creating a new Connection profile and configuring it for a device mode setup.

General panel

Here you will configure the Live Objects endpoint, including authentication information. In this panel, you can set:

  • Broker Address with liveobjects.orange-business.com

  • Broker Port with 1883

  • Client ID with urn:lo:nsid:dongle:00-14-22-01-23-45 (as an example)

  • Keep Alive Interval with 30 seconds


Credentials panel

  • username: json+device : for device mode MQTT connection

  • password: the API Key that you just created

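If you prefer to script the connection rather than use MQTT.fx, the same settings can be reused from code. Here is a minimal sketch using the Eclipse Paho Python client (paho-mqtt 1.x); the API key value is a placeholder for the key created in section 3.3:

import paho.mqtt.client as mqtt

API_KEY = "<your API key>"  # placeholder: the key created in section 3.3

def on_connect(client, userdata, flags, rc):
    # rc == 0 means the broker accepted the connection (the API key is valid)
    print("Connected with result code", rc)

client = mqtt.Client(client_id="urn:lo:nsid:dongle:00-14-22-01-23-45")
client.username_pw_set("json+device", password=API_KEY)  # device mode credentials
client.on_connect = on_connect
client.connect("liveobjects.orange-business.com", 1883, keepalive=30)
client.loop_forever()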

3.5. Device management basics

3.5.1. Connection status

We can simulate a device connection to Live Objects by clicking on the Connect button of the MQTT.fx client.

In the Live Objects portal, you can see that the device is connected. Go to "assets": the device will appear in the list.


3.5.2. Sending a command

You must first subscribe to the command topic "dev/cmd" (Subscribe tab of MQTT.fx).

Go to "assets" then select your device in the list and go to "commands" tab.

Click on "add command" then fill the event field with "reboot" then click on "Register". The command will appear in MQTT.fx client subscribe tab.

{
   "req":"reboot",
   "arg":{},
   "cid":94514847
 }

A response can be sent to acknowledge the received command.

To send this response, publish a message to the topic "dev/cmd/res". The cid (correlation id) field must be set to the correlation id received previously.

{
  "res": {
     "done": true
  },
  "cid": 94514847
}

Once published, the status of the command will change to "processed" in the portal commands history tab.
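For reference, here is a minimal device-side sketch of the same exchange in Python (paho-mqtt, same connection settings as section 3.4): the device subscribes to "dev/cmd" and acknowledges each received command on "dev/cmd/res" with the same "cid". The command handling itself is left as a comment:

import json
import paho.mqtt.client as mqtt

API_KEY = "<your API key>"  # placeholder

def on_connect(client, userdata, flags, rc):
    client.subscribe("dev/cmd")  # topic where commands are received in device mode

def on_message(client, userdata, msg):
    command = json.loads(msg.payload)  # e.g. {"req": "reboot", "arg": {}, "cid": 94514847}
    # ...execute command["req"] here (e.g. schedule a reboot)...
    response = {"res": {"done": True}, "cid": command["cid"]}
    client.publish("dev/cmd/res", json.dumps(response))  # acknowledge with the same cid

client = mqtt.Client(client_id="urn:lo:nsid:dongle:00-14-22-01-23-45")
client.username_pw_set("json+device", password=API_KEY)
client.on_connect = on_connect
client.on_message = on_message
client.connect("liveobjects.orange-business.com", 1883, keepalive=30)
client.loop_forever()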

3.6. Message Bus basics

3.6.1. Using a FIFO queue

On the left menu, click on "message bus", you are redirected to the "message bus / FIFO queues" page. Click on the "add FIFO queue" button, a pop-in appears:


Enter a name "myFifo" for your "FIFO queue", then press the "Register" button: the newly created FIFO queue "myFifo" is now listed.


On the left menu, click on "developer tools" , you are redirected to a page with different tabs for different tools useful for testing purpose:

In the "publish" tab (selected by default):

  • select "FIFO" in the "Topic Type" select box,

  • enter "myFifo" (the name of the FIFO queue you just created) in the "Topic" input field,

  • enter the following JSON in "Payload" textarea:

    {
       "payload": "Hello world!"
    }
  • press the "Publish" button.


A "success" message is displayed:


Now go back to your FIFO list: the "myFifo" FIFO should now be displayed with a message count of "1":


3.6.2. Using the Router

On the left menu, click on "message bus" to go back to the "message bus / FIFO queues" page.

Click on the "router" tab, you now see an empty list of "_bindings". Click on the "+ router" button, a pop-in is displayed with a form to create a new bindings:

  • enter "~event.test.#" in the "Routing key filter" input field,

  • select "myFifo" in the "Target FIFO" select box,

  • press the "Create Binding" button.


You now see the newly created binding listed:


publish a non-stored message

On the left menu, click on "developper tools" to go back to the "developer tools" page and "publish" tab.

  • select "Router" in the "Topic Type" select box,

  • enter "~event.test.foo.bar.123" in the "Topic" input field,

  • enter the following JSON in the "Payload" text area:

    {
       "payload": "Hello router!"
    }
  • press the "Publish" button.

A "success" message is displayed:


Now go back to your FIFO list: the "myFifo" FIFO should now be displayed with a message count of "2" (one more than previously):


You made a publication with a "routing key" (the "topic" field) that has been matched by a declared "binding" that targeted the "myFifo" FIFO, so a copy of your message has been routed and stored into the FIFO as if you had published directly into it!

3.7. Data management

3.7.1. Publishing data messages

We will use the MQTT.fx client in device mode to send a data message as a device would do.

Data messages must be published on the topic dev/data.

Message:

{
  "s" : "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
  "ts" : "2016-07-10T10:02:44.907Z",
  "loc" : [44.1, -1.5],
  "m" : "temperatureDevice_v0",
  "v" : {
    "temp" : 17.25
  },
  "t" : [ "City.NYC", "Model.Prototype" ]
}

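The same publication can be scripted. Here is a minimal sketch using paho-mqtt (connection settings as in section 3.4) that publishes the data message above on the "dev/data" topic:

import json
import paho.mqtt.client as mqtt

API_KEY = "<your API key>"  # placeholder

data_message = {
    "s": "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
    "ts": "2016-07-10T10:02:44.907Z",
    "loc": [44.1, -1.5],
    "m": "temperatureDevice_v0",
    "v": {"temp": 17.25},
    "t": ["City.NYC", "Model.Prototype"],
}

client = mqtt.Client(client_id="urn:lo:nsid:dongle:00-14-22-01-23-45")
client.username_pw_set("json+device", password=API_KEY)
client.connect("liveobjects.orange-business.com", 1883, keepalive=30)
client.loop_start()
info = client.publish("dev/data", json.dumps(data_message), qos=1)
info.wait_for_publish()  # wait until the message has been sent
client.loop_stop()
client.disconnect()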

3.7.2. Accessing the stored data

Going back to the Live Objects portal, you can consult the data message that was just stored. Go to "data" and search for streamId "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature". The data message you sent will appear.


You can perform complex search queries such as aggregations using the Elasticsearch DSL HTTP interface. See the example in the Data API chapter.

4. Concepts

4.1. Tenant account

A tenant account is the isolated space on Live Objects dedicated to a specific customer: every interaction between Live Objects and an external actor (user, device, client application, etc.) or registered entities (user accounts, api keys, etc.) is associated with a tenant account.

Live Objects ensures isolation between those accounts: you can’t access the data and entities managed in another tenant account.

Each tenant account is identified by a unique identifier: the "tenant ID".

A tenant account also has a "name", that should be unique: while the "tenant ID" can’t be changed, the tenant account name can be edited from the "Settings" page of the web portal.

4.2. API key


A Live Objects API Key is a secret that can be used by a device/app/user to authenticate when accessing Live Objects on the MQTT or HTTP/REST interfaces. At least one API key must be generated. As a security measure, an API key cannot be retrieved after creation.

An API Key belongs to a tenant account: after authentication, all interactions will be associated with this account (and isolated from other tenant accounts).

An API key can have zero, one or many Roles. These roles restrict the operations that can be performed with the key. An API key's validity can be limited in time.

A tenant account is automatically attributed a "master" API key at creation. That API key is special: it can’t be deleted.

An API Key can generate child-API keys that inherit (a subset of) the parent roles and validity period.

Usage:

  • In MQTT, clients must connect to Live Objects by using a valid API Key value in the « password » field of the (first) MQTT « CONNECT » packet,

    • If the API Key value is unknown or invalid, the connection is refused,

    • on success, all messages published on this connection will be enriched with the API Key id and roles,

  • In HTTP, clients must specify a valid API Key value as HTTP header « X-API-Key » for every request,

    • If the API key value is unknown or invalid, the request is refused (HTTP status 403),

    • on success, all messages published due to this request will be enriched with the API Key id and roles.
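As an illustration of the HTTP case, here is a minimal Python sketch using the "requests" library. The full URL reuses the data stream endpoint from the Data Management chapter and assumes the REST API is served on the portal host; the API key value is a placeholder:

import requests

API_KEY = "<your API key>"  # placeholder

resp = requests.get(
    "https://liveobjects.orange-business.com/api/v0/data/streams/myDeviceTemperature",
    headers={"X-API-Key": API_KEY, "Accept": "application/json"},
)
if resp.status_code == 403:
    print("unknown or invalid API key, request refused")  # behaviour described above
else:
    resp.raise_for_status()
    print(resp.json())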

4.3. User account

A User Account represents a user identity that can access the Live Objects web portal.

A user account is identified by an email address. A user account is associated with one or many roles. A user can authenticate on the Live Objects web portal using an email address and password.

When a user authentication request succeeds, a temporary API key is generated and returned, with the same roles as the user account.

In case of too many invalid login attempts, the user account is locked out for a while.

For security purposes, a password must be 8 characters long, including 1 uppercase letter, 1 lowercase letter, 1 digit and 1 special character.

4.4. Role

A Role can be attributed to an API key or user account.

The currently available roles:

ADMIN

tenant account administrator: has access to all functions on the tenant account.

USER_READ

can list all the user accounts of the tenant, but can’t edit them or create new user accounts.

APIKEY_ADMIN

can manage all API keys and create new ones.

APIKEY_READ

can list API keys.

LPWA_ADMIN

LPWA fleet admin: can manage LPWA_USER user accounts.

LPWA_USER

LPWA fleet user: can list devices, list collected uplink messages and send downlink commands.

4.5. Message

Every interaction between Live Objects and devices and applications is modeled as one or many "messages".

Those messages follow a common format (imposed by a shared communication library), composed of different fields, all optional.

On the « public » interfaces (MQTT & HTTP), messages can be represented using various encodings.

For more info about message encodings, see the "Message encodings" section.

4.6. Asset

The term 'asset' is used in Live Objects to designate an entity managed by Live Objects.

An asset is uniquely identified by a namespace / id pair.

5. Message bus

Live Objects "message bus" is the central layer between the Connectivity layer and Service layer.

This message bus offers various modes:

  • Router : adapted to situations where publishers don’t know the destination of the messages. Messages can be either consumed with transient subscriptions or static "Bindings" can be declared to route messages into FIFO queues. More info: Router mode.

  • PubSub : a good fit for real-time exchanges. Messages are broadcast to all available subscribers or dropped. More info: PubSub mode,

  • FIFO : the solution to set up point-to-point messaging and guarantee that messages are delivered to the consumer. Messages are stored in a queue on disk until consumed and acknowledged. Each message is delivered to only one consumer; when multiple consumers are subscribed to the same queue concurrently, messages are load-balanced between available consumers. More info: FIFO mode,

Communications between devices or external applications and the Live Objects interfaces are translated into interactions with the Live Objects message bus. For example, on the Live Objects MQTT interface, a publication to MQTT topic "pubsub/test" is translated into a message publication on the Live Objects message bus on PubSub topic "test".

Various usage of Live Objects message bus


A topic is uniquely identified by a string with the following format: “<topic type>/<topic name>”. Where <topic type> can be “pubsub” or “fifo”, and <topic name> is an arbitrary string.

Example
“pubsub/alldevices” or “fifo/alerts”

Tenants are free to use PubSub and FIFO topics to achieve the communication patterns they need between their devices and applications.

Note that some functions of Live Objects use special topics, all identified by a name starting with “~” (ex: “pubsub/~v0/asset/connected”). The messages exchanged on those topics must respect a standard format.

5.1. Router mode

Example 1. ROUTER mode


  • (At the bottom) A client publishes into the Router a message with routing key "data.alarm"…

  • (On the left) Two clients are subscribed on router with routing key filter "data.#" and consumer identifier "consumer#1". As the routing key filter matches the routing key of the published message, the message is delivered to those clients. As those clients are subscribed with the same consumer id, the message is "load balanced" : only one of the two consumers receives the message.

  • (At the center) A binding with routing key filter "data.#" is declared from the Router to the FIFO queue "fifo01": this routing key filter matches the routing key of the published message so the message is delivered to this FIFO queue as if it was published in FIFO mode to topic "fifo01".

  • (On the right) A binding with routing key filter "*.alarm" is declared from the Router to the FIFO queue "fifo02": this routing key filter matches the message routing key, so the message is delivered to the FIFO. As a subscriber is currently subscribed to the FIFO queue, it immediately receives the message, but the message is also stored on disk into the queue until acknowledged.

5.2. PubSub mode

Communication in PubSub mode is based on the usage of "topics".

A "topic" is a message source/destination identified by a unique string identifier.

In PubSub mode, Live Objects message bus clients can publish or subscribe to one or many "topics". When a client publishes a message on a specific PubSub topic, the message is broadcast in real-time to all currently subscribed clients. The message is not persisted by Live Objects messaging layer: if no consumers have subscribed, the message is simply dropped and lost forever.

There is no need to declare PubSub "topics" before using them: a "topic" exists as long as at least one client is subscribed to it.

The PubSub mode is a good fit for the following patterns:

  • broadcasting non-critical events to groups of consumers,

  • one-to-one real-time dialogs (simply use a randomly generated topic identifier).

Example 2. PubSub mode


  • On the left, a client publishes on PubSub topic "test" while two consumers are subscribed: the message is duplicated and delivered to the two consumers.

  • On the right, a client publishes on PubSub topic "alarms" while no consumers are subscribed: the message is dropped.

5.3. FIFO mode / queues

Like in PubSub mode, in FIFO mode communication is also based on the usage of "topics".

There is no conflict between the naming of PubSub topics and FIFO topics: the PubSub topic "test" is different from the FIFO topic "test".

Messages published on a FIFO topic are persisted until a subscriber is available and acknowledges the handling of the message. If multiple subscribers consume from the same FIFO topic, messages are load balanced between them. Publication to and consumption from a FIFO topic use acknowledgement, ensuring no message loss. Before being used, a FIFO topic must be created from the Live Objects web portal.

Example 3. FIFO mode


  • On the left, a client publishes in FIFO topic/queue "fifo01" while no consumer is subscribed. The message is stored into the queue, on disk. When a consumer later subscribes to the FIFO topic/queue, the message will be delivered. The message will only disappear from disk once a subscriber acknowledges the reception of the message.

  • On the right, a client publishes on FIFO topic/queue "fifo02" while a consumer is subscribed: the message is stored on disk and immediately delivered to the consumer. The message will only disappear from disk once a subscriber acknowledges the reception of the message. When a consumer that received the message but didn’t acknowledge it unsubscribes from the topic/queue, the message is put back into the "fifo02" queue and will be delivered to the next available consumer.

FIFO queues are size-limited. The maximum size is given in bytes. Once the limit is reached, messages are dropped from the front of the queue to make room for new messages, meaning that the oldest messages are dropped first.

The total number of FIFO queues and the sum of their sizes are limited depending on your offer.

For more info about limitations, see the "Limitation" chapter.

5.4. Message encodings

5.4.1. JSON (version 0)

Definition

This JSON format allows exchanges with the various services of the platform and connecting devices for device management. The data part is deprecated and is now replaced by the specialized JSON data format.

The MQTT message payload should be a valid JSON Object value, with the optional following attributes:

  • correlationId: (number) message correlation id (for RPC). This field used on RPC requests and responses contains a number value used to allow matching between a request and response during a RPC exchange.

  • replyTo: (string) message "reply to" (for RPC request). This field used on RPC request contains a string identifying a publication destination (pubsub, fifo or router) where the response to this request must be sent.

  • source : (list) a list of sources; for each source:

    • order : (number) the source order (0 for initial source, 1 for first repeater/gateway…​)

    • namespace : source identifier namespace

    • id : source identifier (in the namespace)

    • ts : timestamp (ms since EPOCH),

  • timestamp : (number) the timestamp associated with the information, expressed in epoch timestamp (elapsed milliseconds since Jan 01 1970), UTC.

  • event : (string) message "event" (= identifies the type / trigger).

  • eventLifecycle: (string) value between "BEGIN", "ONGOING", "END", "ONE_SHOT"

  • payload : (string/binary) message payload. This field contains the raw binary content of the message. This field can be used to convey encrypted data for example.

  • data : (list) list of data entries:

    • key : (string) data entry key,

    • jsonValue : (string) data entry value, as JSON in an escaped string,

  • location : location associated with the message

    • lat : (number) latitude

    • lng : (number) longitude

  • asset : field used to describe status of the source asset for Device management exchanges.

source

The source field is used on messages representing information coming into Live Objects, to indicate the path taken by the information before arriving into Live Objects.

Its value is a list of objects, each one describing a "step" in the path, with the following attributes:

namespace

the first part of the step identifier

id

the second part of the step identifier

ts

the date/time when the information left this "step"

order

indicates the position of this step in the path, "0" meaning "the initial source of information", "1" the first repeater/gateway, etc.


Example
{
   "source": [
      {
         "namespace": "sensor",
         "id": "78239",
         "ts": 1457430816710,
         "order": 0
      },
      {
         "namespace": "gateway",
         "id": "777100001",
         "ts": 1457430825000,
         "order": 1
      }
   ],
   ...
}
Example
{
   "correlationId": 122,
   "replyTo": "pubsub/~123243213211",
   "source": [
      {
         "order": 0,
         "namespace": "sensor",
         "id": "001",
         "ts": 1447944553700
      }
   ],
   "timestamp": 1447944553720,
   "event": "FIRE_ALARM",
   "eventLifecycle": "BEGIN",
   "location": {
      "lat": 48.576,
      "lng": 5.747
   },
   "data": [
      {
         "key": "temp",
         "jsonValue": "12.87"
      }
   ],
   "payload": "RC:98:A:1:AZ:EZEZA"
}

5.4.2. JSON for data message

Definition

This JSON encoding is aimed at modeling the data collected from IoT things (devices, etc.).

streamId

string uniquely identifying a timeseries / "stream",

timestamp

date/time associated with the collected information,

location

geo location (latitude and longitude) associated with the collected info,

model

string used to indicate what schema is used for the value part of the message,

value

structured representation (JSON object) of the transported info,

tags

list of strings associated with the message to convey extra-information,

metadata

section controlled / enriched by Live Objects

  • source : unique identifier (usually URN) of the source device,

Messages must be published:

  • in device mode, to dev/data

  • in bridge mode (for payload), to router/~event/v1/data/new/(…​),

Example #1 - data collected from MQTT
{
  "streamId" : "urn:uuid:61d2a520-c153-4ec8-a47e-fee21f4eee82!atmos",
  "timestamp" : "2016-03-08T10:02:44.907Z",
  "location" : {
    "lat" : 44.1,
    "lon" : -1.5
  },
  "model" : "atmos_v0",
  "value" : {
    "temp" : 17.25,
    "humidity" : 12.0
  },
  "tags" : [ "City.Lyon", "Model.LoraMoteV1" ]
}
Example #2 - data collected from LPWA
{
    "streamId" : "urn:lpwa:deveui:7A09AEF7E097A7EF!uplink",
    "timestamp" : "2016-03-08T10:02:43.944Z",
    "location" : {
     "lat" : 44.1,
     "lon" : -1.5
    },
    "model" : "lpwa_v1",
    "value" : {
     "port" : 1,
     "fcnt" : 138,
     "rssi" : -111,
     "snr" : -6,
     "sf" : 8,
     "payload" : "a3e1eff054"
    },
    "tags" : [ "City.Lyon", "Model.LoraMoteV1" ],
    "metadata" : {
       "source" : "urn:lpwa:deveui:7A09AEF7E097A7EF"

    }
}

5.5. Remote Procedure Call (RPC)

The Remote Procedure Call is used to execute a command on another module over the Live Objects bus.

Here is the request message format:

{
   "replyTo": "pubsub/~mycallback_topic_12AE45E",
   "correlationId": 156,
   "payload": "The request payload"
}

Here is the field description:

  • replyTo: the reply topic of the RPC request,

  • correlationId: the correlation ID of the RPC request,

  • payload: the content of the RPC request.

Note about the replyTo topic:

  • It must be an ~ topic

  • It should be a unique topic, so include a random part in the string

  • Do not forget to subscribe to it before sending the request

Here is the answer message format published on the reply topic given on replyTo:

{
   "correlationId": 156,
   "payload": "This is the answer payload"
}

Here is the field description:

  • correlationId: the correlation ID that corresponds to the RPC request

  • payload: the content of the RPC answer
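Here is a minimal, transport-agnostic Python sketch of this request/answer pattern: build a request with a unique "~" reply topic and correlation id, then match incoming answers against that correlation id. The helper names are purely illustrative; publishing and subscribing are left to whichever bus client you use:

import json
import random

def build_rpc_request(payload):
    # build the RPC request described above, with a unique ~ reply topic
    correlation_id = random.getrandbits(31)
    reply_to = "pubsub/~mycallback_%08x" % random.getrandbits(32)
    request = {"replyTo": reply_to, "correlationId": correlation_id, "payload": payload}
    return correlation_id, reply_to, json.dumps(request)

def is_answer_for(correlation_id, answer_json):
    # keep only the answer whose correlationId matches the request
    return json.loads(answer_json).get("correlationId") == correlation_id

cid, reply_topic, request = build_rpc_request("The request payload")
# 1. subscribe to reply_topic (before sending the request),
# 2. publish "request" to the callee's topic,
# 3. for each message received on reply_topic, keep it if is_answer_for(cid, message).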

6. Device management

An “asset” is a generic term that can designate a device (sensor, gateway) or an entity observed by devices (ex: a building, a car).

6.1. Asset Supervision

Live Objects can track for you the changes of status of your assets: connection status (connected/disconnected), route used by your asset to communicate with the service, last contact date.

For this, you need to publish messages in the standard format to notify the service of your asset connections / disconnections and status updates.

6.1.1. "Asset connected" event

Once connected to Live Objects, a device needs to explicitly notify its identity and supported features to the platform to become "manageable".

To do this, the device must publish in Router mode with routing key ~event.v2.assets.{ns}.{id}.connected (i.e. in MQTT on topic router/~event/v2/assets/{ns}/{id}/connected), where:

{ns}

the namespace of device identifier (ex: the device model, or identifier family)

{id}

the device identifier - must be unique within the specified namespace

The message to publish must have the following structure:

{
   "source": [
      {
         "order": 0,
         "namespace": "{ns}",
         "id": "{id}"
      }
   ],
   "asset": {
      "topicParamUpdate": "{topicParamUpdate}",
      "topicCommand": "{topicCommand}",
      "topicResourceUpdate": "{topicResourceUpdate}"
   }
}

With:

ns

the device identifier namespace

id

the device identifier

topicParamUpdate

(optional) the MQTT topic where the device is subscribed and awaiting parameter update requests

topicCommand

(optional) the MQTT topic where the device is subscribed and awaiting commands

topicResourceUpdate

(optional) the MQTT topic where the device is subscribed and awaiting resource update requests

Example
{
   "source": [
      {
         "order": 0,
         "namespace": "dongle",
         "id": "00-14-22-01-23-45"
      }
   ],
   "asset": {
      "topicParamUpdate": "pubsub/~device/dongle/00-14-22-01-23-45/cfg",
      "topicResourceUpdate": "pubsub/~device/dongle/00-14-22-01-23-45/res"
   }
}
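For illustration, here is a minimal Python sketch (paho-mqtt) that publishes the example event above on the corresponding MQTT topic. The broker address is the one from section 3.4; the credentials line is an assumption and should be adapted to the connection mode you use for router topics:

import json
import paho.mqtt.client as mqtt

API_KEY = "<your API key>"  # placeholder
NS, ID = "dongle", "00-14-22-01-23-45"

connected_event = {
    "source": [{"order": 0, "namespace": NS, "id": ID}],
    "asset": {
        "topicParamUpdate": "pubsub/~device/%s/%s/cfg" % (NS, ID),
        "topicResourceUpdate": "pubsub/~device/%s/%s/res" % (NS, ID),
    },
}

client = mqtt.Client(client_id="urn:lo:nsid:%s:%s" % (NS, ID))
client.username_pw_set("json+device", password=API_KEY)  # assumption: adapt to your connection mode
client.connect("liveobjects.orange-business.com", 1883, keepalive=30)
client.loop_start()
info = client.publish("router/~event/v2/assets/%s/%s/connected" % (NS, ID),
                      json.dumps(connected_event), qos=1)
info.wait_for_publish()
client.loop_stop()
client.disconnect()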

6.1.2. "Asset disconnected" event

A connected device can explicitly tell Live Objects that it is now "disconnected" and unavailable for all device management mechanisms.

Note that this is purely optional: such a message is automatically generated internally when the MQTT connection is broken/closed, for every asset identity that was announced on this connection.

To emit such a notification, you must publish in Router mode with routing key ~event.v2.assets.{ns}.{id}.disconnected (i.e. in MQTT on topic router/~event/v2/assets/{ns}/{id}/disconnected) a message with the following JSON structure:

{
   "source": [
      {
         "order": 0,
         "namespace": "{ns}",
         "id": "{id}"
      }
   ]
}

6.2. Asset Configuration

An "asset" can declare one or many "parameters": a parameter is identified by a string "key" and can take a typed value (binary, int32, uint32, timestamp).

Live Objects can track the changes of the current values of an asset's parameters, and allows users to set different target values for those parameters. Live Objects will then try to update the parameters on the asset once it’s connected and available.

Asset configuration sync


  • (before) :

    • asset initiates MQTT connection with Live Objects,

    • asset subscribes in MQTT to a private topic, where it will receive later the configuration update requests,

  • step 0 : asset notifies Live Objects that it is connected and available for configuration updates on a specific topic (cf. Asset Supervision),

  • step 1 : asset notifies Live Objects of its current configuration,

  • step 2 : Live Objects compares the current and target configuration for this asset. If they differ:

    • step 3 : Live Objects sends to the asset, on the topic indicated at step 0, the list of parameters to update, with their target value,

    • step 4 : asset handles the request, and tries to apply the change(s),

    • step 5 : asset responds to the change request with the new configuration,

    • step 6 : Live Objects saves the new configuration. Parameters that have been successfully updated now have the status "OK" and the others the status "ERROR".

6.2.1. "Current Configuration" event

A connected device can notify Live Objects of its current configuration by publishing in Router mode with routing key ~event.v2.assets.{ns}.{id}.currentParams (in MQTT, topic router/~event/v2/assets/{ns}/{id}/currentParams) :

{ns}

the "namespace" of device identifier (ex: the device model, or identifier family)

{id}

the device identifier - must be unique within the specified "namespace"

The message to publish must have the following structure:

{
   "source": [
      {
         "order": 0,
         "namespace": "{ns}",
         "id": "{id}"
      }
   ],
   "asset": {
      "params": {
         "{param1Key}": {
            "value{param1Type}": {param1Value}
         },
         "{param2Key}": {
            "value{param2Type}": {param2Value}
         },
         ...
      }
   }
}

With:

param{X}Key

a string uniquely identifying the device configuration parameter

param{X}Type

indicates the config parameter type, among:

"Int32"

the value must be an integer between -2,147,483,648 and 2,147,483,647,

"UInt32"

the value must be a positive integer between 0 and 4,294,967,295,

"Raw"

the value is a base64 encoded binary content,

"String"

the value is a UTF-8 string,

"Float"

the value is a float (64-bit) value.

Example
{
   "source": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "asset": {
      "params": {
         "conn_period_sec": {
           "valueUInt32": 60000
         },
         "log_level": {
           "valueRaw": "REVCVUc="
         },
         "can_filters": {
           "valueRaw": "MSwyNCw1LDIx"
         }
      }
   }
}

6.2.2. Config update request

To receive configuration updates, the device must first subscribe to a topic where it will be awaiting configuration update requests, and then notify Live Objects, using an "Asset Connected" event, that it is available for such requests on this topic, indicated by the topicParamUpdate message field.

Your device must choose the topic name so that there is no conflict with other devices. We advise that you use a topic name containing your device namespace / id identifier couple.

For example pubsub/~device/{ns}/{id}/cfg.

When Live Objects needs to send you a config update request, your device will receive a message with the following JSON structure:

{
   "target": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "correlationId": {correlationId},
   "replyTo": {correlationId},
   "asset": {
      "params": {
         "{param1Key}": {
            "value{param1Type}": {param1Value}
         },
         "{param2Key}": {
            "value{param2Type}": {param2Value}
         },
         ...
      }
   }
}

Note that this message is quite similar to the message emitted by your device to notify of its current configuration, except that the field "source" is here called "target" and there are two additional fields, "correlationId" and "replyTo":

target

the identity of the targeted device (identical to the "source" of your device "current configuration" notification)

correlationId

a number that you must return in your device's response to this update request

replyTo

the topic where Live Objects is expecting the response to this update

asset.params…​

same structure as in the current configuration notification, except that here only the parameters that need to be changed are listed, and the value is the new value to apply

6.2.3. Configuration update response

When receiving a Configuration update request, your device needs to try to apply the specified configuration changes and then to return the new values for the parameters that needed to change. That value can be the same as before the update, the new one requested, or another value, depending on the meaning of the parameter.

For example, if Live Objects requests to change a parameter on the device to an invalid value, the device can keep the previous value it had for this parameter or choose to apply another default value.

To answer a Configuration update request, the device needs to publish a message on the {replyTo} topic that was indicated in the request. This topic should actually be the same as for announcing the current device configuration.

The published message must have the following JSON structure:

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "correlationId": {correlationId},
   "asset": {
      "params": {
         "{param1Key}": {
            "value{param1Type}": {param1Value}
         },
         "{param2Key}": {
            "value{param2Type}": {param2Value}
         },
         ...
      }
   }
}

Note that this message is quite similar to the message emitted by your device to notify of its current configuration, except for the field "correlationId":

source

the identity of the device (identical to the "source" of your device "current configuration" notification)

correlationId

a number that was in the configuration update request, and is used by Live Objects to track the status of each configuration parameter

asset.params…​

same structure as in the current configuration notification.

you can, but don’t have to, announce all your configuration parameters here; only the ones that were listed in the configuration update request are required
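For illustration, here is a minimal device-side handler sketch in Python for such a configuration update request. It assumes a paho-mqtt client already connected and subscribed to the announced topicParamUpdate topic; the apply_parameter helper is hypothetical:

import json

def apply_parameter(key, typed_value):
    # hypothetical helper: apply the change on the device and return the value
    # actually in effect afterwards, e.g. {"valueUInt32": 60000}
    return typed_value

def on_config_update(client, userdata, msg):
    request = json.loads(msg.payload)
    new_params = {}
    for key, typed_value in request["asset"]["params"].items():
        new_params[key] = apply_parameter(key, typed_value)
    response = {
        "source": request["target"],                 # same identity, field renamed
        "correlationId": request["correlationId"],   # echoed so Live Objects can track the status
        "asset": {"params": new_params},             # only the requested parameters
    }
    client.publish(request["replyTo"], json.dumps(response))

# register the handler on the announced topicParamUpdate topic, e.g.:
# client.message_callback_add("pubsub/~device/dongle/00-14-22-01-23-45/cfg", on_config_update)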

6.3. Commands

You can register commands targeting a specific asset: as soon as the asset is available for commands, Live Objects will send them one by one, awaiting a response for each command from the asset before sending the next one.

Live Objects keeps track of every registered command with its status, and possible response.

Asset configuration sync


6.3.1. Command request

After publishing an Asset connected event with a topicCommand, your device can receive at any time a command from Live Objects on the {topicCommand} topic.

Each "command" has the following JSON structure:

{
   "target": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "event":         "{event}",
   "correlationId": {correlationId},
   "replyTo":       "{replyTo}",
   "data": {
      "{key1}": "{key1Value}",
      "{key2}": "{key2Value}",
       ...
   },
   "payload":       "{payload}"
}

Where

{ns}

the target device identifier namespace

{id}

the target device identifier

{event}

the command "event" field, often used to convey the called method named

{correlationId}

a number that must be returned in the command response to allow Live Objects to correlate the request and response

{replyTo}

the topic where the command response is expected

{key<X>}

the key of a data field

{key<X>Value}

the JSON value associated with key<X>

{payload}

the base64-encoded command payload (raw byte array)

6.3.2. Command response

To respond to a received command, the client device must publish a message on the command request {replyTo} topic with the following JSON structure:

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "correlationId": {correlationId},
   "data": {
      "{key1}": "{key1Value}",
      "{key2}": "{key2Value}",
       ...
   },
   "payload":       "{payload}"
}

Where

{ns}

the source device identifier namespace

{id}

the source device identifier

{correlationId}

same value as in the Command request

{key<X>}

a key of a response data field

{key<X>Value}

the JSON value associated with key<X>

{payload}

the base64-encoded command payload (raw byte array)

Example

Request:
{
   "target": [
      {
          "order":     0,
          "namespace": "sensor",
          "id":        "001"
      }
   ],
   "correlationId": 879546045610,
   "replyTo": "pubsub/~7928372983792873",
   "event": "getTime",
   "data": {
      "timezone": "UTC"
   }
}
Response:
(published to "pubsub/~7928372983792873")
{
   "source": [
      {
          "order":     0,
          "namespace": "sensor",
          "id":        "001"
      }
   ],
   "correlationId": 879546045610,
   "data": {
      "status": 200,
      "time": "2016-06-14T12:30:56"
   },
   "payload": "U1VDQ0VTUw==" // "SUCCESS"
}
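For illustration, here is a minimal device-side handler sketch in Python for this request/response format, based on the "getTime" example above. It assumes a paho-mqtt client subscribed to the topic announced as topicCommand; the handling logic is purely illustrative:

import json
from datetime import datetime, timezone

def on_command(client, userdata, msg):
    request = json.loads(msg.payload)
    if request.get("event") == "getTime":
        data = {"status": 200,
                "time": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")}
    else:
        data = {"status": 404}                        # illustrative "unknown command" reply
    response = {
        "source": request["target"],                  # same identity, field renamed
        "correlationId": request["correlationId"],    # required to correlate request and response
        "data": data,
    }
    client.publish(request["replyTo"], json.dumps(response))

# register the handler on the announced topicCommand topic, e.g.:
# client.message_callback_add("pubsub/~device/dongle/00-14-22-01-23-45/cmd", on_command)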

6.4. Resource management

A "resource" is a versioned binary content (for example a device firmware).

You can manage a repository of resources in your tenant account.

Live Objects can track the current versions of resources on a specific asset.

You can set the target version of resources for a specific asset in Live Objects, which will then try to update the resources on the asset as soon as the asset is available for resource update.

Asset resource update


  • step 1: the device (or the codec communicating on its behalf) notifies the Resource Manager module of the currently deployed resource versions,

  • step 2: the Resource Manager module updates the current state of device/thing resource versions in the database, and compares it to the "target" resource versions for this device. For each resource on the device that is not in the "target" version:

    • step 3: the Resource Manager module sends a "prepare resource update" request to the Updater module in charge of the new resource version,

    • step 4: the Updater module prepares the update (for example by retrieving the binary content of this resource version, by creating a temporary access for the device on this resource, etc.),

    • step 5: the Updater module replies to the Resource Manager module with a status (update possible or not) and extra information to transmit to the device (for example a URN where the new resource can be downloaded, a security token to use to access the new resource, etc.)

    • step 6: the Resource Manager receives the Updater reply, and if update is possible builds a resource update request to the device, with the extra info provided by the Updater module;

    • step 7: the Resource Manager sends the resource update request to the device;

    • step 8: the device proceeds to retrieve the new resource version (ex: HTTP/FTP download…​) from the Updater module, using if needed the extra info that was specified in the resource update request;

    • step 9: during transfer the Updater module or the device notifies the Resource Manager module of the transfer progress;

    • step 10: once the new resource has been completely transferred to the device, the device can verify the binary content (for example checking crypto signature, comparing content hash, etc.) and applies the update;

    • step 11: the device notifies the Resource Manager of the resource update result (success or failure).

6.4.1. Current resource versions

Your device can announce at any time the current versions of its resources by publishing a message in Router mode with routing key ~event.v0.assets.{ns}.{id}.currentResources (i.e. in MQTT on topic router/~event/v0/assets/{ns}/{id}/currentResources):

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "payload": "
      // base64 encoded...
      [
         {
            "resourceId":        "{res1Id}",
            "resourceVersionId": "{res1Version}",
            "connectorMetadata": {res1Metadata}
         },
         ...
      ]
      // ... base64 encoded
   "
}

Where:

res{X}Id

(required) identifier for resource X

res{X}Version

(required) current version for resource X

res{X}Metadata

(optional) JSON object, map of metadata associated with this resource (useful for resource update transfer)

6.4.2. Resource update request

Once your device has announced a topicResourceUpdate topic in an "Asset Connected" event, it can receive at any time a message on this topic, requesting a resource update:

{
   "target": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "data": {
      "resourceId":    "{resourceId}",
      "sourceVersion": "{resourceCurrentVersionId}",
      "targetVersion": "{resourceNewVersionId}",
      "{param1Key}": {param1Value},
      "{param2Key}": {param2Value},
      ...
   },
   "replyTo":       "{replyTo}",
   "correlationId": {correlationId}
}

Where:

{resourceId}

identifies the resource to update

{resourceCurrentVersionId}

the current version of the resource to update (should be checked by device)

{resourceNewVersionId}

the new version of the resource to download and install

{payload}

a base64 content that can give extra info to download the new resource version (ex: URI, token, etc.)

{param(X)Key}

key identifying an extra parameter added to the resource update request

{param(X)Value}

JSON value (string, number, any…​) associated with key {param(X)Key}

{replyTo}

topic where response to the resource update request must be sent

{correlationId}

(signed integer) identifier that must be re-used in the response so that Live Objects correlates the correct response and request

6.4.3. Resource update response

Shortly after receiving the resource update request, the device must respond to indicate if it accepts to make the update:

{
   "source": [
      {
          "order":     0,
          "namespace": "{ns}",
          "id":        "{id}"
      }
   ],
   "data": {
      "status":                    "{status}",
      "topicCancelResourceUpdate": "{topicCancelResourceUpdate}"
   },
   "correlationId": {correlationId}
}

Where:

{status}

(required) indicates if the device accepts or not to make the update, and why. Possible values: "OK", "UNKNOWN_ASSET", "INVALID_RESOURCE","WRONG_SOURCE_VERSION","WRONG_TARGET_VERSION", "NOT_AUTHORIZED","INTERNAL_ERROR"

{topicCancelResourceUpdate}

(optional) the topic where the device is available to receive request to cancel the resource update

{correlationId}

(signed integer) same value as in the resource update request

6.4.4. Example

Device notifies of current version:

{
   "source": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "payload": "W3sNCiAgICJyZXNvdXJjZUlkIjogICAgImRvbmdsZVYyX2Zpcm13YXJlfSIsDQogICAicmVzb3VyY2VWZXJzaW9uSWQiOiAiMS4xIiwNCiAgICJjb25uZWN0b3JNZXRhZGF0YSI6IHsiY2hlY2tzdW0iOiAibWQ1In0NCn1d"
   // (base 64)
   // [{
   //    "resourceId": "dongleV2_firmware}",
   //    "resourceVersionId": "1.1",
   //    "connectorMetadata": {"checksum": "md5"}
   // }]
}

Device receives resource update request:

{
   "target": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "data": {
      "resourceId":    "dongleV2_firmware}",
      "sourceVersion": "1.1",
      "targetVersion": "1.3",
      "uri":           "http://.../bin/dongleV2_firmware/versions/1.3/fw_13.bin",
      "md5":           "098f6bcd4621d373cade4e832627b4f6"
   },
   "replyTo": "pubsub/~0574badc-0abf-433e-a8d3-05e7c8f26210",
   "correlationId": -2754511
}

The device then parses the update request parameters and responds to Live Objects on the "replyTo" topic ("pubsub/~0574badc-0abf-433e-a8d3-05e7c8f26210"), indicating whether or not it accepts to make the update:

{
   "source": [
      {
          "order":     0,
          "namespace": "dongle",
          "id":        "00-14-22-01-23-45"
      }
   ],
   "data": {
      "status": "OK"
   },
   "correlationId": -2754511
}

The device then processes this request and downloads/installs the resource content (if necessary by parsing the payload to extract needed info like URI, token, etc.).

It’s up to the resource transfer module to track the status of the download (progress and status: SUCCESS/FAILED).
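For illustration, here is a minimal device-side sketch in Python of this flow: check the source version, reply with a status, then download and verify the new content using the "uri" and "md5" parameters shown in the example request. It assumes a paho-mqtt client subscribed to the announced topicResourceUpdate topic; the version bookkeeping is hypothetical:

import hashlib
import json
import urllib.request

CURRENT_VERSIONS = {"dongleV2_firmware": "1.1"}      # hypothetical: resourceId -> installed version

def on_resource_update(client, userdata, msg):
    request = json.loads(msg.payload)
    data = request["data"]
    installed = CURRENT_VERSIONS.get(data["resourceId"])
    status = "OK" if installed == data["sourceVersion"] else "WRONG_SOURCE_VERSION"
    response = {
        "source": request["target"],
        "data": {"status": status},
        "correlationId": request["correlationId"],
    }
    client.publish(request["replyTo"], json.dumps(response))  # accept (or refuse) the update
    if status != "OK":
        return
    body = urllib.request.urlopen(data["uri"]).read()         # download the new version
    if hashlib.md5(body).hexdigest() == data["md5"]:          # verify before applying
        # ...apply/flash the new resource content here...
        CURRENT_VERSIONS[data["resourceId"]] = data["targetVersion"]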

6.5. Auto Provisioning

Live Objects will automatically register new assets in the inventory the first time the device publishes an "Asset connected" or "Asset disconnected" event.

When registering a previously unknown asset, Live Objects emits an "Asset created" event.

From the web portal or the APIs you can "delete" an asset: all the related device management information will be forgotten by Live Objects, and an "Asset deleted" event will be published.

6.5.1. "Asset created" event

This event is emitted in Router mode with routing key ~event.v1.assets.<assetIdNamespace>.<assetId>.created:

{
   "payload":
      // base64 encoded...
      {
         "assetIdNamespace": "{ns}",
         "assetId": "{id}"
      }
      // ...base64 encoded.
}

6.5.2. "Asset deleted" event

This event is emitted in Router mode with routing key ~event.v1.assets.<assetIdNamespace>.<assetId>.deleted:

{
   "payload":
      // base64 encoded...
      {
         "assetIdNamespace": "{ns}",
         "assetId": "{id}"
      }
      // ...base64 encoded.
}

7. Data management

7.1. Concepts

Data management relies upon:

  • the store service, which is aimed at storing data messages from IoT things (devices, gateways, IoT apps collecting data, etc.) as time-series data streams,

  • and the search service based on the popular open-source Elasticsearch product.

The collected data can be associated with a model. The model is a fundamental concept for the search service: it specifies the schema of the JSON "value" object. The model is dynamically updated based on the injected data.

  • If the model is not provided, the "value" object will not be indexed by the search service. Nevertheless, the data will be stored in the store service and all information except the value object will be indexed in the search service.

  • If the value JSON object does not comply with the existing model (for example, a field changes from long to String type), the data will not be inserted in the search service. The data message will only be stored in the store service.

7.2. Store service

The REST interface allows you to add data to a stream and to retrieve data from a stream. A stream could for example be associated with a unique device (the streamId could therefore be a device identifier) or with one type of data coming from a device (the streamId could therefore be in the format deviceIdentifier-typeOfData).

Add data to a stream :

Request
POST /api/v0/data/streams/{streamId}
X-API-Key: <your API key>
Accept: application/json

body param

description

data

JSON object conforming to data message structure

Request
POST /api/v0/data/streams/myDeviceTemperature
{
  "value": {"temp":24.1},
  "model": "temperature_v0"
 }

For this example, the value.temp field of model "temperature_v0" will be defined as a double type. If a String type is needed in the future for value.temp, a new model must be defined. If value.temp is set to a String value with model "temperature_v0", the message will be dropped by the search service.
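The same request can be scripted, for example with the Python "requests" library. This minimal sketch assumes the REST API is served on the portal host; the API key value is a placeholder:

import requests

API_KEY = "<your API key>"  # placeholder
BASE_URL = "https://liveobjects.orange-business.com/api/v0"  # assumption

message = {"value": {"temp": 24.1}, "model": "temperature_v0"}
resp = requests.post(
    BASE_URL + "/data/streams/myDeviceTemperature",
    json=message,
    headers={"X-API-Key": API_KEY, "Accept": "application/json"},
)
resp.raise_for_status()
print(resp.json())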

Retrieve data from a stream :

Request
GET /api/v0/data/streams/{streamId}
X-API-Key: <your API key>
Accept: application/json

Query params

Description

limit

Optional. Max number of data messages to return; the value is limited to 100.

timeRange

Optional. Filter data whose timestamp is in the timeRange "from,to".

bookmarkId

Optional. Id of a document. This id will be used as an offset to access the data.

Documents are provided in reverse chronological order (newest to oldest)

Request
GET /api/v0/data/streams/myDeviceTemperature
 {
  "id" : "57307f6c0cf294ec63848873",
  "streamId" : "myDeviceTemperature",
  "timestamp" : "2016-05-09T12:15:41.620Z",
  "model" : "temperature_v0",
  "value" : {
    "temp" : 24.1
  },
  "created" : "2016-05-09T12:15:40.286Z"
 }
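Retrieving data can be scripted the same way. This minimal sketch uses the optional query parameters described above (same host and API key assumptions as before; the format of the timeRange bounds shown here is an assumption):

import requests

API_KEY = "<your API key>"  # placeholder
BASE_URL = "https://liveobjects.orange-business.com/api/v0"  # assumption

resp = requests.get(
    BASE_URL + "/data/streams/myDeviceTemperature",
    params={"limit": 10,
            "timeRange": "2016-05-01T00:00:00Z,2016-05-10T00:00:00Z"},  # "from,to" (bound format assumed)
    headers={"X-API-Key": API_KEY, "Accept": "application/json"},
)
resp.raise_for_status()
for doc in resp.json():  # newest to oldest
    print(doc["timestamp"], doc["value"])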

7.3. Search service

The REST request body search API is provided to perform search queries.

To learn more about the search API, read the Exploring your Data section of Elasticsearch: The Definitive Guide. (www.elastic.co/guide/en/elasticsearch/reference/current/_the_search_api.html)

To perform a search query :

Request
POST /api/v0/data/search
X-API-Key: <your API key>
Accept: application/json

body param

description

dsl request

elasticsearch DSL request

example:

This query requests statistics on the temp field of the myDeviceTemperature stream.

Request
POST /api/v0/data/search
{
    "size" : 0,
    "query" :
    {
            "term" : { "streamId": "myDeviceTemperature" }
    },
    "aggs" :
    {
        "stats_temperature" : { "stats" : { "field" : "@temperature_v0.value.temp" } }
     }
}

If a model has been provided, the search query field must be prefixed with @<model>: @temperature_v0.value.datapath

Response
{
  "took": 1,
  "hits": {
    "total": 2
  },
  "aggregations": {
    "stats_temperature": {
      "count": 2,
      "min": 24.1,
      "max": 25.9,
      "avg": 25,
      "sum": 50
    }
  }
}
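The same aggregation query can be sent from code. Here is a minimal sketch with the Python "requests" library (same host and API key assumptions as before):

import requests

API_KEY = "<your API key>"  # placeholder
BASE_URL = "https://liveobjects.orange-business.com/api/v0"  # assumption

query = {
    "size": 0,
    "query": {"term": {"streamId": "myDeviceTemperature"}},
    "aggs": {"stats_temperature": {"stats": {"field": "@temperature_v0.value.temp"}}},
}
resp = requests.post(BASE_URL + "/data/search", json=query,
                     headers={"X-API-Key": API_KEY, "Accept": "application/json"})
resp.raise_for_status()
print(resp.json()["aggregations"]["stats_temperature"])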

To perform the same search query, but with the 'hits' part extracted and JSON-formatted as an array of data messages (to be used when you are only interested in the 'hits' part of the Elasticsearch answer):

Request
POST /api/v0/data/search/hits
X-API-Key: <your API key>
Accept: application/json

body param

description

dsl request

elasticsearch DSL request

example:

This query requests the last data for all devices using the model temperature_v0.

Request
POST /api/v0/data/search/hits
{
    "size" : 10,
    "query" : {"term" : { "model": "temperature_v0" }
     }
}
Response
[
  {
    "id": "57308b3b7d84805820b35345",
    "streamId": "myDeviceTemperature",
    "timestamp": "2016-05-09T13:06:03.903Z",
    "model": "temperature_v0",
    "value": {
      "temp": 25.9
    },
    "created": "2016-05-09T13:06:03.907Z"
  },
  {
    "id": "573087777d84805820b35344",
    "streamId": "myDeviceTemperature",
    "timestamp": "2016-05-09T12:49:59.966Z",
    "model": "temperature_v0",
    "value": {
      "temp": 24.1
    },
    "created": "2016-05-09T12:49:59.977Z"
  },
  {
    "id": "5730b1577d84805820b35347",
    "streamId": "myStreamDemo-temperature",
    "timestamp": "2016-05-09T15:48:39.390Z",
    "model": "temperature_v0",
    "value": {
      "temp": 24.1
    },
    "created": "2016-05-09T15:48:39.395Z"
  }
]

7.3.1. Geo Query for data injected BEFORE 2017/04

Geo Query can be performed through *location* field.

Request

POST /api/v0/data/search/hits

{
  "query": {
    "filtered": {
      "filter": {
        "geo_distance": {
          "distance": "10km",
          "location": {
            "lat": 43.848,
            "lon": -3.417
          }
        }
      }
    }
  }
}
Response
[
  {
    "id": "57308b3b7d84805820b35345",
    "streamId": "myDeviceTemperature",
    "location" : {
        "lat": 43.8,
        "lon": -3.3
    },
    "timestamp": "2016-05-09T13:06:03.903Z",
    "model": "temperature_v0",
    "value": {
      "temp": 25.9
    },
    "created": "2016-05-09T13:06:03.907Z"
  }
]

7.3.2. Geo Query for data injected AFTER 2017/04

Geo queries can be performed on all fields with a name matching *location* (case insensitive).
In order to geo-query these fields, you must add @geopoint to the location query path: *location*.@geopoint

Request

POST /api/v0/data/search/hits

{
  "query": {
    "filtered": {
      "filter": {
        "geo_distance": {
          "distance": "10km",
          "location.@geopoint": {
            "lat": 43.848,
            "lon": -3.417
          }
        }
      }
    }
  }
}
Response
[
  {
    "id": "57308b3b7d84805820b35345",
    "streamId": "myDeviceTemperature",
    "location" : {
        "lat": 43.8,
        "lon": -3.3
    },
    "timestamp": "2016-05-09T13:06:03.903Z",
    "model": "temperature_v0",
    "value": {
      "temp": 25.9
    },
    "created": "2016-05-09T13:06:03.907Z"
  }
]

7.3.3. Search Query samples

Here are some query samples that can be used. Aggregations are very useful to retrieve data grouped by any criteria: list all known tags, get the last value per stream, get the mean temperature per tag, get the list of streams that have not sent data since a given date, etc. The aggregation results are stored as 'buckets' in the result.
You can also add filters (geo queries, wildcards, terms, etc.) to all your aggregation queries to target specific 'buckets' or data.

Give me all you got !
{
    "query": {
        "match_all" : {}
    }
}
Give me the list of all known tags
{
    "size": 0,
    "aggs": {
        "grouped_by_tags": {
            "terms": {
                "field": "tags",
                "size": 0
            }
        }
    }
}
result
{
  "took": 44,
  "hits": {
    "total": 66
  },
  "aggregations": {
    "grouped_by_tags": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "tag_1",
          "doc_count": 53
        },
        {
          "key": "tag_2",
          "doc_count": 13
        }
      ]
    }
  }
}
Give me the last value of all my streams
{
    "size":0,
    "aggs": {
        "tags": {
            "terms": {
                "field": "streamId",
                "size": 0
            },
            "aggs": {
                "last_value": {
                    "top_hits": {
                        "size": 1,
                        "sort": [
                            {
                                "timestamp": {
                                    "order": "desc"
                                }
                            }
                        ]
                    }
                }
            }
        }
    }
}
result
{
  "took": 19,
  "hits": {
    "total": 11
  },
  "aggregations": {
    "tags": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "device_1",
          "doc_count": 7,
          "last_value": {
            "hits": {
              "total": 7,
              "max_score": null,
              "hits": [
                {
                    ...
                }
              ]
            }
          }
        },
        {
          "key": "device_2",
          "doc_count": 123,
          "last_value": {
            "hits": {
              "total": 123,
              "max_score": null,
              "hits": [
                {
                    ...
                }
              ]
            }
          },
         ...
        }
      ]
    }
  }
}
Give me the list of devices that have not sent data since 2017/03/23 10:00:00
{
    "size":0,
    "aggs": {
        "tags": {
            "terms": {
                "field": "streamId",
                "size": 0
            },
            "aggs": {
                "last_date": {
                    "max": {
                        "field": "timestamp"
                    }
                },
                "filter_no_info_since": {
                    "bucket_selector": {
                        "buckets_path": {
                            "lastdate":"last_date"
                        },
                        "script": {
                            "inline": "lastdate<1490263200000",
                            "lang" :"expression"
                        }
                    }
                }
            }
        }
    }
}
result
{
  "took": 8,
  "hits": {
    "total": 9
  },
  "aggregations": {
    "tags": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "device_12",
          "doc_count": 7,
          "last_date": {
            "value": 1489504105020,
            "value_as_string": "2017-03-14T15:08:25.020Z"
          }
        },
        {
          "key": "device_153",
          "doc_count": 2,
          "last_date": {
            "value": 1489049619254,
            "value_as_string": "2017-03-09T08:53:39.254Z"
          }
        }
      ]
    }
  }
}
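
The bucket_selector script above compares the last received timestamp to a threshold expressed in epoch milliseconds. As a small illustration, the threshold used in the query (2017/03/23 10:00:00 UTC) can be computed in Python as follows:

from datetime import datetime, timezone

# epoch-milliseconds threshold for the bucket_selector script above
threshold = datetime(2017, 3, 23, 10, 0, 0, tzinfo=timezone.utc)
threshold_ms = int(threshold.timestamp() * 1000)
print(threshold_ms)  # 1490263200000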

8. Kibana

Kibana is a tool to visualize all the data injected in Live Objects.

On the first connection, you will be redirected to the 'index pattern' screen.
Keep all options at their default values and choose 'timestamp' for the 'Time-field name' box. Then press the 'Create' button: this will create a new index pattern for Kibana.

Do not check the 'Do not expand index pattern when searching' or 'Use event times to create index names' checkboxes: doing so will lead to error messages in later screens. In that case, you should delete the created index pattern and recreate a new one without these options.

If you add new fields to your data model, you will need to refresh this index pattern so that Kibana can use these new fields.
Just go to the 'Settings' tab and click on the orange 'refresh field list' button at the top.

Kibana is based on 3 main tabs : Discover, Visualize and Dashboard.

8.1. Discover

Here you will access all your data. The idea is to 'play' with the filters on the left side of the screen to extract the useful data you need to explore.
You can then save this filtered 'search' to visualize it in the next 'Visualize' screen.

There is an important time filter in the upper-right corner of the screen. By default, it displays only the last 15 minutes of data. You can choose to display 'last month' data, for instance.

8.2. Visualize

Here you will be able to create histograms, maps, charts, tables and metrics based on your previous search. You can then save this visualization to be displayed in the next 'Dashboard' screen.

8.3. Dashboard

Here you will be able to display the visualizations you have previously created and gather them all in a 'dashboard' page. You can create and save several dashboards meant for different users. You can share these dashboards with the 'share' button.

8.4. Decoding service (Beta)

8.4.1. Overview

The data messages sent to the Live Objects platform can be encoded in a customer-specific format. For instance, the payload may be a string containing a hexadecimal value or a csv value. The data decoding feature enables you to provision your own decoding grammar. On receiving the encoded message, the Live Objects platform uses the grammar to decode the payload into plain-text json fields and records the json in the Store service. The stored message is then searchable with the Advanced Search service. A "template" option allows you to perform mathematical operations on the decoded fields or to define an output format. The decoding feature is not activated by default.

8.4.2. Binary decoding

This feature is only supported on the LPWA interface.

Decoder provisioning

The custom decoder describes the grammar to be used to decode the message payload. The Live Objects APIs to manage the decoders are described in the swagger documentation: https://liveobjects.orange-business.com/swagger-ui/index.html.

The binary decoding module uses the Java Binary Block Parser (JBBP) library.
You must use the JBBP DSL language to describe the binary payload format for your decoder.

Additional types

float and utf8 are additional types that can be used in the grammar (see examples).

Example : create a binary decoder with the REST API
POST /api/v0/decoders/binary
X-API-Key: <your API key>
Accept: application/json
{"encoding":"twointegers",  (1)
"enabled":true,  (2)
"format":"int pressure;int temperature;", (3)
"template":"{\"pressure\":{{pressure}}, \"temperature\" : \"{{#math}}{{temperature}}/10{{/math}} celsius\"}" (4)
}
1 : identifies the decoder. This name will be associated to the devices during the provisioning and will be present in the data message.
2 : activation/deactivation of the decoder.
3 : describes the payload frame (cf. JBBP DSL language). The name of the fields will be found in the resulting decoded payload json.
4 : optional parameter describing a post-decoding template format. In this example, the output temperature will be divided by 10 and stored in a string format including its unit. More information on templates.
Endianness ?

The decoding service uses the big-endian order (the high bytes come first). If your device uses little-endian architecture, you can use the < character to prefix a type in your format description.

Example : create a binary decoder for a device sending data in little-endian format
POST /api/v0/decoders/binary
X-API-Key: <your API key>
Accept: application/json
{"encoding":"my_little_endian_encoding",
"enabled":true,
"format":"<float temperature;"}  (1)
1 : <float means 32-bit float sent in little-endian.
How to test the binary decoder format?

The Live Objects API provides a "test" endpoint which takes a payload format and a payload value as input and provides the decoded value in the response body, if the decoding is successful. Optionally, you can provide a post-decoding template which will describe the output format.

In the following example, the decoded value for the pressure will remain unchanged,
while the decoded value for temperature will be divided by 10.
The test endpoint is described in swagger.

Request
POST /api/v0/decoders/binary/test
 X-API-Key: <your API key>
Accept: application/json
{
"binaryPayloadStructure":"int  pressure; int temperature;",
"binaryPayloadHexString":"000003F5000000DD",
"template":"{\"pressure\":{{pressure}}, \"temperature\" : \"{{temperature}}/10\"}"
}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "temperature": 22.1,
      "pressure": 1013
   },
   "descriptionValid": true
}
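
The test endpoint can also be called from a script. Below is a minimal sketch using the Python requests library with the same body as above, assuming the public endpoint https://liveobjects.orange-business.com and the usual X-API-Key authentication:

import requests

API_KEY = "<your API key>"
URL = "https://liveobjects.orange-business.com/api/v0/decoders/binary/test"

body = {
    "binaryPayloadStructure": "int pressure; int temperature;",
    "binaryPayloadHexString": "000003F5000000DD",
    "template": '{"pressure":{{pressure}}, "temperature" : "{{temperature}}/10"}',
}

response = requests.post(
    URL,
    json=body,
    headers={"X-API-Key": API_KEY, "Accept": "application/json"},
)
response.raise_for_status()
print(response.json())  # decodingResult with the pressure and temperature fields
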
How to customize the fields once the payload has been decoded?

The fields resulting from a decoded payload might need to be processed using a template description, in order to change their output format. More information on templates.

Referencing a decoder in a LPWA device

When provisioning an LPWA device, you may reference the decoder to be used for the device so that Live Objects automatically decodes all the payloads received from this device, using the referenced decoder.

Example :

landing

Message decoding

The data message is decoded using the decoder previously provisioned and the decoded fields are added to the value. The encoded raw payload is kept in the decoded message. Once the message has been decoded and stored, "Advanced Search" requests can be performed using the newly decoded fields.

Table 1. Examples :
Type Frame format Payload example Decoded payload (json)

binary

int temperature;

000000DD

"payload" : "000000DD", "temperature":221

binary

ubyte temperature;

DD

"payload" : "DD", "temperature":221

binary

utf8 [16] myString;

2855332e3632542b323144323235503029

"payload" : "2855332e3632542b323144323235503029", "myString":"(U3.62T+21D225P0)"

binary

byte:1 is_led_on; float pressure; float temperature; float altitude; ubyte battery_lvl; byte[6] raw_gps; ushort altitude_gps;

00447CE00041CEF5C345CAB8CD38000000000000FFFF

"payload" : "00447CE00041CEF5C345CAB8CD38000000000000FFFF", "is_led_on":0,"pressure":1011.5,"temperature":25.87,"altitude":6487.1,"battery_lvl":56,"raw_gps_list":[0,0,0,0,0,0],"altitude_gps":65535

binary

float pi;measure[2] {int length; utf8 [length] name;float value;}

4048F5C30000000BC2A955544638537472696E674148000000000012C2A9616E6F7468657255544638537472696E67447D4000

"payload" : "4048F5C30000000BC2A955544638537472696E674148000000000012C2A9616E6F7468657255544638537472696E67447D4000", "pi":3.14, "measure_list":[{ "length":11,"name":"©UTF8String","value":12.5}, {"length":18,"name":"©anotherUTF8String","value":1013.0} ]

Table 2. Json fields

value.payload

a string containing the encoded payload in hexadecimal (raw value)

metadata.encoding

contains the decoder name

model

remains unchanged after decoding

additional LPWA fields (lora port, snr…​) in the value

remain unchanged after decoding.

8.4.3. Csv decoding

Decoder provisioning

The custom decoder describes the columns format and options to be used to decode the message csv payload. The Live Objects APIs to manage the decoders are described in the swagger documentation: https://liveobjects.orange-business.com/swagger-ui/index.html.

When provisioning a csv decoder, you must specify an ordered list of column names and their associated type. Three column types are available : STRING, NUMERIC or BOOLEAN.
Several options (column separator char, quote char, escape char…​) may be set to customize the csv decoding.

A template option enables you to provide a post-decoding output format including mathematical evaluation. More information on templates.

Column types
  • STRING column may contain UTF-8 characters

  • NUMERIC column may contain integer (32 bits), long (64 bits), float or double values. The values may be signed.

  • BOOLEAN column must contain true or false.

Table 3. Available options
name default definition example

quoteChar

double-quote "\""

character used for quoting values that contain column separator characters or linefeed.

"pierre, dupont",25,true will be decoded as 3 fields.

columnSeparator

comma ","

character used to separate values.

lineFeedSeparator

"\n"

character used to separate data rows. If the message payload contains several rows, only the first one will be decoded.

the decoding result for pierre,35,true\nmarie,25,false will be 3 fields containing pierre, 35 and true.

useEscapeChar

false

set to true if you want to use an escape char.

escapeChar

backslash "\\"

character used to escape values.

skipWhiteSpace

false

if set to true, will trim the decoded values (white spaces before and after will be removed).

Example 1 : create a simple csv decoder with the REST API
POST /api/v0/decoders/csv
X-API-Key: <your API key>
Accept: application/json
{
    "encoding":"my csv encoding", (1)
    "enabled":true, (2)
    "columns": [ (3)
        {"name":"column1","jsonType":"STRING"},
        {"name":"column2","jsonType":"NUMERIC"},
        {"name":"column3","jsonType":"BOOLEAN"}
    ]
}
1 : identifies the decoder. This name will be associated to the devices during the provisioning and will be present in the data message.
2 : activation/deactivation of the decoder.
3 : an ordered list of column descriptions.
Example 2 : create a csv decoder with options, using the REST API
POST /api/v0/decoders/csv
X-API-Key: <your API key>
Accept: application/json
{
    "encoding":"my csv encoding with options",
    "enabled":true,
    "columns": [
        {"name":"unit","jsonType":"STRING"},
        {"name":"temperature","jsonType":"NUMERIC"},
        {"name":"normal","jsonType":"BOOLEAN"}
    ],
    "options" : {
        "columnSeparator": "|",
        "quoteChar": "\"",
        "lineFeedSeparator": "/r/n"
    }
}
In the POST request, you can provide only the options you wish to modify. The other options will keep the default values.
How to customize the fields once the payload has been decoded?

The fields resulting from a decoded payload might need to be processed using a template description, in order to change their output format. More information on templates.

How to test the csv decoder ?

The Live Objects API provides a "test" endpoint which takes a csv format description and a payload value as input and provides the decoded value in the response body, if the decoding is successful. The test endpoint is described in swagger.

Request
POST /api/v0/decoders/csv/test
 X-API-Key: <your API key>
Accept: application/json
{
    "columns": [
        {"name":"unit","jsonType":"STRING"},
        {"name":"temperature","jsonType":"NUMERIC"},
        {"name":"threasholdReached","jsonType":"BOOLEAN"}
    ] ,
    "options":{
        "columnSeparator": ","
    },
    "csvPayload":"celsius,250,true",
    "template":"{\"temperature\" : \"{{temperature}}/10\", \"unit\":\"{{unit}}\", \"thresholdReached\":\"{{thresholdReached}}\"} "
}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "unit": "celsius",
      "thresholdReached": "true",
      "temperature": 25
   },
   "descriptionValid": true
}
Message decoding

The data message is decoded using the decoder previously provisioned and the decoded fields are added to the value. The csv encoded raw payload is kept in the decoded message. Once the message has been decoded and stored, "Advanced Search" requests can be performed using the newly decoded fields.

Example in http :

Request
POST /api/v0/data/streams/{streamId}
X-API-Key: <your API key>
Accept: application/json
{
  "value": {"payload":"celsius,25,true"},
  "model": "temperature_v0",
  "metadata" : {"encoding" : "my csv encoding"}
 }

The data message will be stored as:

{
      "id": "585aa47de4b019917e342edd",
      "streamId": "stream0",
      "timestamp": "2016-12-21T15:49:17.693Z",
      "model": "temperature_v0",
      "value":       {
         "payload": "celsius,25,true",
         "normal": true,
         "unit": "celsius",
         "temperature": 25
      },
      "metadata": {"encoding": "my csv encoding"},
      "created": "2016-12-21T15:49:17.750Z"
}
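
The HTTP request above can also be sent from a script. A minimal sketch with the Python requests library, reusing the "my csv encoding" decoder and the stream0 stream id from the stored example:

import requests

API_KEY = "<your API key>"
URL = "https://liveobjects.orange-business.com/api/v0/data/streams/stream0"

message = {
    "value": {"payload": "celsius,25,true"},
    "model": "temperature_v0",
    "metadata": {"encoding": "my csv encoding"},
}

response = requests.post(
    URL,
    json=message,
    headers={"X-API-Key": API_KEY, "Accept": "application/json"},
)
response.raise_for_status()  # the message is decoded and stored by the platform
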
Table 4. Json fields

value.payload

a string containing the csv encoded payload (raw value)

metadata.encoding

contains the decoder name

model

remains unchanged after decoding

8.4.4. Templating

For the decoder creation and decoder test APIs, Live Objects provides an optional parameter named "template". This parameter is a string field describing the target output fields in a mustache-like format.

Table 5. Available functions :

{{#math}}{{/math}}

performs mathematical operations on a field

{{#toUpperCase}}{{/toUpperCase}}

converts a string to upper case

{{#toLowerCase}}{{/toLowerCase}}

converts a string to lower case

The following examples show, for the same raw binary payload, the output if you do not use any template, and the output if you define a custom template.

Request (WITHOUT the template parameter)
POST /api/v0/decoders/binary/test
 X-API-Key: <your API key>
Accept: application/json
{
"binaryPayloadStructure":"byte:1 led; ushort pressure; ushort temperature; ushort altitude; ubyte battery; byte[6] raw_gps; ushort altitude_gps;",
"binaryPayloadHexString":"0027830a1bfd6738000000000000ffff"}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "led": 0,
      "pressure": 10115,
      "temperature": 2587,
      "altitude": 64871,
      "battery": 56,
      "raw_gps":       [
         0,
         0,
         0,
         0,
         0,
         0
      ],
      "altitude_gps": 65535
   },
   "descriptionValid": true
}
Request (WITH the template parameter)
POST /api/v0/decoders/binary/test
 X-API-Key: <your API key>
Accept: application/json
{
"binaryPayloadStructure":"byte:1 led; ushort pressure; ushort temperature; ushort altitude; ubyte battery; byte[6] raw_gps; ushort altitude_gps;",
"binaryPayloadHexString":"0027830a1bfd6738000000000000ffff",
"template":"{\"pressure\": \"{{pressure}} / 10\", \"temperature\": \"{{temperature}} / 100\", \"altitude\": \"{{altitude}} / 10\", \"view\": { \"Pressure\": \"{{#math}}{{pressure}}/10{{/math}} hPa\",             \"Temperature\": \"{{#math}}{{temperature}}/100{{/math}} C\",\"Altitude\": \"{{#math}}{{altitude}}/100{{/math}} m\",\"GPSAltitude\": \"{{altitude_gps}} m\",\"Battery\": \"{{battery}} %\"}}}"
}
Response
{
   "parsingOk": true,
   "decodingResult":    {
      "altitude": 6487.1,
      "view":       {
         "Pressure": "1011.5 hPa",
         "Temperature": "25.87 C",
         "Altitude": "648.71 m",
         "GPSAltitude": "65535 m",
         "Battery": "56 %"
      },
      "temperature": 25.87,
      "pressure": 1011.5,
      "led": 0,
      "battery": 56,
      "raw_gps":       [
         0,
         0,
         0,
         0,
         0,
         0
      ],
      "altitude_gps": 65535
   },
   "descriptionValid": true
}
the {{#math}}{{/math}} template is needed only if you wish to evaluate a mathematical expression within a string.
Example for a template containing :
\"Temperature\": \"{{temperature}}/100 celsius\" (1)
\"Temperature\": \"{{#math}}{{temperature}}/100{{/math}} celsius\" (2)
\"Temperature\": \"{{#math}}{{temperature}}/100{{/math}}\" (3)
\"Temperature\": \"{{temperature}}/100\" (4)
1 the output will be like "Temperature": "2587/100 celsius" (the division is not evaluated).
2 the output will be like "Temperature": "25.87 celsius" (a string output. the division is evaluated).
3 the output will be like "Temperature": 25.87 (a numeric). In this case, the {{#math}} function is not needed.
4 the output will be like "Temperature": 25.87 (a numeric)
You need to specify in the template, all the fields you wish to get in the output, even if they are not modified by the template.
Example :  "template":"{\"pressure\":{{pressure}}, \"temperature\" : {{temperature}}/10}"
If you omit the "pressure" field in the template, it will simply not appear in the output.
If the decoded value contains a "location" field with latitude and longitude, it will override the location field provided in Live Objects at the same json level as the "value" field.

9. Simple Event Processing

9.1. Concepts

The Simple Event Processing (SEP) service is aimed at detecting notable single events from the flow of data messages.

Simple event processing combines a stateless boolean detection function (matching rule) with a frequency function (firing rule).

It generates fired events as output that your business application can consume to initiate downstream actions like raising an alarm, executing a business process, etc.

Simple Event Processing service E2E overview

lom_sep_architecture

9.2. Processing rules

You can set up Matching rules and Firing rules to define how data messages are processed by the SEP service and how fired events are triggered:

9.2.1. Matching rule

A matching rule is a simple or compound rule that is applied to each data message to evaluate whether a "match" occurs. A matching rule is evaluated as a boolean result. Matching rules support numeric, string, logic and distance operators and are based on JsonLogic.

Matching contexts (containing the data message, the matching rule id, etc.) are processed by the firing rules associated with these matching rules.

9.2.2. Firing rule

A firing rule applies to the matches triggered by one or many matching rules and defines when fired events must be generated.

A firing rule specifies:

  • the list of matching rules associated to this firing rule – when these matching rules match, the firing rule is applied,

  • the frequency of firing: once, sleep and always,

  • optionally, a list of aggregation keys identifying fields to extract from the matching context to identify the firing context.  

The firing rule is applied as follows on each matching context:

  • the firing rule generates the firing context from the matching context, by extracting one or multiple fields defined with the aggregation keys,

  • the firing rule then applies the frequency parameter to optionally throttle the triggering of fired events belonging to the same firing context.  

If the frequency of the firing rule is defined as ONCE or SLEEP then firing guards are created in the system to prevent new generation of fired events for a given firing context. You can manage the firing guards, and for example, remove a firing guard to re-activate a firing rule for a specific firing context.

As an example, by setting the metadata.source field as the aggregation key, if a fired event is generated for a device "A", a firing guard will prevent new fired events for this device "A" and this firing rule. Meanwhile, fired events can still occur for devices "B", "C", etc. for this rule.

With SLEEP mode, a duration specifies the minimum time between two fired events. When the duration has elapsed, the firing guard is removed and new fired events can occur. This duration is computed for each element of the tuple composed of firing rule id + aggregation keys + value (firingRuleID:metadata.source:deviceId1, firingRuleID:metadata.source:deviceId2, …)

9.2.3. Fired events consumption

Fired events are accessible with the MQTT API. Your business applications must connect in payload+bridge mode and subscribe to the router/~event/v1/data/eventprocessing/fired topic to receive the fired events.
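
As an illustration, the following minimal sketch consumes fired events with the Eclipse Paho Python client (paho-mqtt 1.x style callbacks, an assumption since Live Objects does not mandate a specific client), connecting to the MQTTS endpoint described in the MQTT interface section:

import paho.mqtt.client as mqtt

API_KEY = "<your API key>"

def on_connect(client, userdata, flags, rc):
    # subscribe once the CONNACK has been received
    client.subscribe("router/~event/v1/data/eventprocessing/fired", qos=1)

def on_message(client, userdata, msg):
    print("fired event:", msg.payload.decode("utf-8"))

client = mqtt.Client(client_id="sep-consumer")      # any clientId in "Bridge" mode
client.username_pw_set("payload+bridge", API_KEY)   # mode selection + API key
client.tls_set()                                    # MQTTS endpoint (port 8883)
client.on_connect = on_connect
client.on_message = on_message
client.connect("liveobjects.orange-business.com", 8883, keepalive=30)
client.loop_forever()

State change events from the State Processing service (see further) can be consumed the same way by subscribing to router/~event/v1/data/eventprocessing/statechange.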

9.2.4. Examples

Here are some examples of usage of the simple event processing service.

Data message sent by a device with temperature set to 100 and location set at San Francisco (37.773972,-122.431297)

{
"streamId":"urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
"timestamp":"2016-08-29T08:27:52.874Z",
"location":{"lat":37.773972,"lon":-122.431297},
"model":"temperatureDevice_v0",
"value":{"temp":100},
"metadata":{"source":"urn:lo:nsid:dongle:00-14-22-01-23-45"}
}

Matching rule: numeric (temperature higher than 99) and distance operator (distance between data message and Paris (48.800206, 2.296565) must be higher than 6km)

{
    "name": "compound rule with numeric and distance operators",
    "enabled": true,
    "dataPredicate":
    {
                "and" :
                [
                  { ">" : [ { "distance" : [
                    { "var" : "location.lat"},
                    { "var" : "location.lon"},
                    48.800206,
                    2.296565 ] },
                    6 ] },
                  { ">" : [ { "var" : "value.temp" }, 99 ] }
                ]
    }
}

Firing rule with frequency ONCE and aggregationKeys based on the source field :

{
    "name": "firing rule test",
    "enabled": true,
    "matchingRuleIds": ["{matchingRuleId}"],
    "aggregationKeys":["metadata.source"],
    "firingType":"ONCE"
}

A fired event will be generated once for each source sending data with a temperature higher than 99 and not located within a radius of 6 km of Paris.

Example with other operators ">", "if", "in", "cat" :

{">":[{"var":{"cat":["value.", {"if" : [
  {"in": [{"var":"model"}, "v0"] }, "temp",
  {"in": [{"var":"model"}, "v1"] }, "temperature",
  "t"
]}]}},100]}

This rule allows you to specify the field to be compared to the value "100", based on the model of the data message.

If the model value is:

  • "v0", the comparison will be made with the field "value.temp”,

  • "v1", the comparison will be made with the field "value.temperature”,

  • else it will be made with the field "value.t”.

10. State Processing

10.1. Concepts

The State Processing (SP) service aims at detecting changes in a "device state" computed from data messages.

A state can represent any result computed from Live Objects data messages : geo-zone ("paris-area", "london-area", ..), temperature status ("hot", "cold", ..), availability status ("ok" , "ko"). Each state is identified by a key retrieved from the user-defined json-path in the data message.

stateKeyPath examples : "streamId", "metadata.source"

A state is computed by applying a state function to a data message. A notification is sent by Live Objects each time a state value changes. State processing differs from event processing as it provides stateful rules, which are useful for use cases more complex than a normal/alert status. State processing can be seen as a basic state machine where transitions between states are managed by the state function result and events are transition notifications.

10.2. State Processing rules

You can set up StateProcessing rules to define how data messages are processed by the SP service.

A StateProcessing rule applies to all new data messages.

A StateProcessing rule specifies:

  • an optional boolean function: filterPredicate. It filters the data on which the state processing logic should be applied. This boolean function is described in JsonLogic syntax. If no filter predicate is specified, the state function is applied to every data message.

  • a json path relative to the data message: stateKeyPath. This path is used to retrieve the state key. In many cases the state key will be the streamId value or the metadata.source value, in order to associate a state with a device status.

  • a state function stateFunction, which is the core of the state processing logic. This function takes a data message as input and computes the state associated with the state key.

The State function is written in JsonLogic Syntax and can return any primitive value : String, Number, Boolean.  

10.2.1. State change events

State processing events are accessible with the MQTT API. Your business applications must connect with payload+bridge mode and subscribe to router/~event/v1/data/eventprocessing/statechange topic to receive the events.

10.2.2. State processing initialization

When a state is computed for the first time, it generates a state change event with a previous state equal to null.

10.2.3. Examples

Here are some examples of usage of the state processing.

Temperature monitoring of a device sensor, with 3 temperature ranges.

Temperature State processing logic:

  • if the temperature is below 0 degrees Celsius, the sensor state is cold.

  • if the temperature is between 0 and 100 degrees Celsius, the sensor state is normal.

  • if the temperature is higher than 100 degrees Celsius, the sensor state is hot.

The sensor is identified by the streamId field within the data message.

{
        "name": "temperature state rule",
        "enabled": true,
        "stateKeyPath": "streamId",
        "stateFunction": {
                "if": [{
                        "<": [{
                                "var": "value.temp"
                        },
                        0]
                },
                "cold",
                {
                        "<": [{
                                "var": "value.temp"
                        },
                        100]
                },
                "normal",
                "hot"]
        }
}

We assume that the current state of the sensor is "normal". The following data message will generate a state change event from "normal" to "hot" for state key : "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature".

{
"streamId":"urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
"timestamp":"2017-05-24T08:29:49.029Z",
"location":{"lat":37.773972,"lon":-122.431297},
"model":"temperatureDevice_v0",
"value":{"temp":200},
"metadata":{"source":"urn:lo:nsid:dongle:00-14-22-01-23-45"}
}

State change event :

{
        "stateKey": "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
        "previousState":"normal",
        "newState": "hot",
        "timestamp": "2017-05-24T08:29:49.029Z",
        "stateProcessingRuleId": "266d3b22-70e0-4f28-9df1-5186c6094f5b",
        "data": {
                "streamId": "urn:lo:nsid:dongle:00-14-22-01-23-45!temperature",
                "timestamp":"2017-05-24T08:29:49.029Z",
                "location":{"lat":37.773972,"lon":-122.431297},
                "model":"temperatureDevice_v0",
                "value": {
                        "temp": 200
                },
                "metadata":{"source":"urn:lo:nsid:dongle:00-14-22-01-23-45"}
        }
}

Geozone supervision of a tracker.

Use case: tracking of a package between the shipment zone, the transportation zone and the delivery zone. A state change event will be sent when the tracker changes zone.

  • Shipment zone = San Francisco GPS polygon : (38.358596, -123.019952) (38.306889, -120.954523) (37.124990, -121.789484)

  • Delivery zone = LA GPS polygon : (34.238622, -118.909873) (34.346562, -117.747086) (33.620728, -117.551111) (33.533648, -118.269687)

  • Transportation zone = 101 Highway : (37.632325, -121.609282) (34.253397, -118.027739) (33.679366, -119.203276) (37.440666, -122.641996)

First we need to register these zones in the context repository of the state processing / event processing.

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/shipment

[38.358596, -123.019952, 38.306889, -120.954523, 37.124990, -121.789484]

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/transportation

[37.632325, -121.609282, 34.253397, -118.027739 , 33.679366, -119.203276 , 37.440666, -122.641996]

PUT liveobjects.orange-business.com/api/v0/eventprocessing/context/delivery

[34.238622, -118.909873 ,34.346562, -117.747086,33.620728, -117.551111,33.533648, -118.269687]

Refer to the context repository section for more details about context APIs.
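
For reference, a minimal sketch registering the shipment zone with the Python requests library (assuming the usual X-API-Key authentication applies to this endpoint as well):

import requests

API_KEY = "<your API key>"
URL = "https://liveobjects.orange-business.com/api/v0/eventprocessing/context/shipment"

# flattened (lat, lon) pairs of the shipment polygon shown above
shipment_zone = [38.358596, -123.019952, 38.306889, -120.954523, 37.124990, -121.789484]

response = requests.put(
    URL,
    json=shipment_zone,
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
)
response.raise_for_status()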

Geo tracking state processing rule :

{
        "name": "geo tracking",
        "enabled": true,
        "stateFunction": {
                "if": [{
                        "inside": [{
                                "var": "location.lat"
                        },
                        {
                                "var": "location.lon"
                        },
                        {
                                "ctx": "shipment"
                        }]
                },
                "shipment_zone",
                {
                        "inside": [{
                                "var": "location.lat"
                        },
                        {
                                "var": "location.lon"
                        },
                        {
                                "ctx": "transportation"
                        }]
                },
                "transportation_zone",
                {
                        "inside": [{
                                "var": "location.lat"
                        },
                        {
                                "var": "location.lon"
                        },
                        {
                                "ctx": "delivery"
                        }]
                },
                "delivery_zone",
                "unknown_zone"]
        },
        "filterPredicate": {
                "in": ["tracker",
                {
                        "var": "tags"
                }]
        },
        "stateKeyPath": "streamId"
}

A first data message in the SF area will generate an event with no previous state. Any other message in the SF area would not generate an event, because the state would be unchanged.

{
        "stateKey": "stream01",
        "newState": "shipment_zone",
        "timestamp": "2017-05-24T12:45:17.781Z",
        "stateProcessingRuleId": "b228e7bb-fd33-4c70-800c-f9bf882e2622",
        "data": {
                "streamId": "stream01",
                "location": {
                        "lat": 37.602902,
                        "lon": -122.169846
                },
                "tags": ["a",
                "b",
                "tracker"]
        }
}

The second message in the Highway 101 area will generate the following event. Any other message in the Highway 101 area would not generate an event because the state would be unchanged.

{
        "stateKey": "stream01",
        "previousState": "shipment_zone",
        "newState": "transportation_zone",
        "timestamp": "2017-05-24T12:45:17.922Z",
        "stateProcessingRuleId": "b228e7bb-fd33-4c70-800c-f9bf882e2622",
        "data": {
                "streamId": "stream01",
                "location": {
                        "lat": 36.969311,
                        "lon": -121.562765
                },
                "tags": ["a",
                "b",
                "tracker"]
        }
}

The third message in the LA area will generate the following event. Any other message in the LA area would not generate an event because the state would be unchanged.

{
        "stateKey": "stream01",
        "previousState": "transportation_zone",
        "newState": "delivery_zone",
        "timestamp": "2017-05-24T12:45:17.979Z",
        "stateProcessingRuleId": "b228e7bb-fd33-4c70-800c-f9bf882e2622",
        "data": {
                "streamId": "stream01",
                "location": {
                        "lat": 33.881571,
                        "lon": -118.154555
                },
                "tags": ["a",
                "b",
                "tracker"]
        }
}

11. MQTT interface

Live Objects supports the MQTT protocol to enable bi-directional (publish/subscribe) communications between devices or applications and the platform.

MQTT can be used with or without encryption (TLS/SSL layer).

Live Objects also supports MQTT over WebSocket.

The Live Objects MQTT interface offers multiple "modes":

  • mode "Device": dedicated to device use-cases, based on simple JSON messages,

  • mode "Bridge": full access to Live Objects internal bus capacities, useful for application or gateway use cases.

11.1. Endpoints

MQTT endpoints:

  • mqtt://liveobjects.orange-business.com:1883 for non SSL connection

  • mqtts://liveobjects.orange-business.com:8883 for SSL connection

MQTT over Websocket endpoints:

  • ws://liveobjects.orange-business.com:80/mqtt

  • wss://liveobjects.orange-business.com:443/mqtt

It is recommended to use the MQTTS endpoint for your production environment, otherwise your communication with Live Objects will not be secured.

The certificate presented by the MQTT server is signed by VeriSign. The public root certificate to import is the following:

-----BEGIN CERTIFICATE-----
MIIE0zCCA7ugAwIBAgIQGNrRniZ96LtKIVjNzGs7SjANBgkqhkiG9w0BAQUFADCB
yjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJp
U2lnbiwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxW
ZXJpU2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0
aG9yaXR5IC0gRzUwHhcNMDYxMTA4MDAwMDAwWhcNMzYwNzE2MjM1OTU5WjCByjEL
MAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQLExZW
ZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTowOAYDVQQLEzEoYykgMjAwNiBWZXJpU2ln
biwgSW5jLiAtIEZvciBhdXRob3JpemVkIHVzZSBvbmx5MUUwQwYDVQQDEzxWZXJp
U2lnbiBDbGFzcyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9y
aXR5IC0gRzUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvJAgIKXo1
nmAMqudLO07cfLw8RRy7K+D+KQL5VwijZIUVJ/XxrcgxiV0i6CqqpkKzj/i5Vbex
t0uz/o9+B1fs70PbZmIVYc9gDaTY3vjgw2IIPVQT60nKWVSFJuUrjxuf6/WhkcIz
SdhDY2pSS9KP6HBRTdGJaXvHcPaz3BJ023tdS1bTlr8Vd6Gw9KIl8q8ckmcY5fQG
BO+QueQA5N06tRn/Arr0PO7gi+s3i+z016zy9vA9r911kTMZHRxAy3QkGSGT2RT+
rCpSx4/VBEnkjWNHiDxpg8v+R70rfk/Fla4OndTRQ8Bnc+MUCH7lP59zuDMKz10/
NIeWiu5T6CUVAgMBAAGjgbIwga8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8E
BAMCAQYwbQYIKwYBBQUHAQwEYTBfoV2gWzBZMFcwVRYJaW1hZ2UvZ2lmMCEwHzAH
BgUrDgMCGgQUj+XTGoasjY5rw8+AatRIGCx7GS4wJRYjaHR0cDovL2xvZ28udmVy
aXNpZ24uY29tL3ZzbG9nby5naWYwHQYDVR0OBBYEFH/TZafC3ey78DAJ80M5+gKv
MzEzMA0GCSqGSIb3DQEBBQUAA4IBAQCTJEowX2LP2BqYLz3q3JktvXf2pXkiOOzE
p6B4Eq1iDkVwZMXnl2YtmAl+X6/WzChl8gGqCBpH3vn5fJJaCGkgDdk+bW48DW7Y
5gaRQBi5+MHt39tBquCWIMnNZBU4gcmU7qKEKQsTb47bDN0lAtukixlE0kF6BWlK
WE9gyn6CagsCqiUXObXbf+eEZSqVir2G3l6BFoMtEMze/aiCKm0oHw0LxOXnGiYZ
4fQRbxC1lfznQgUy286dUV4otp6F01vvpX1FQHKOtw5rDgb7MzVIcbidJ4vEZV8N
hnacRHr2lVz2XTIIM6RUthg/aFzyQkqFOFSDX9HoLPKsEdao7WNq
-----END CERTIFICATE-----

11.2. MQTT support

The MQTT bridge acts as a standard MQTT v3.1.1 message broker (cf. MQTT Protocol Specification 3.1.1), with some limitations:

  • the "Will" functionality is not implemented (all "willXXX" flags and headers are not taken into account),

  • the "retain" functionality is not implemented,

  • the "duplicate" flag is not used.

11.2.1. Connecting

The first packet exchanged should be a MQTT Connect packet, sent from the client to the MQTT endpoint.

This packet must contain:

  • clientId: usage depends on the "mode",

  • username: used to select a mode and encoding:

  • password: a tenant API Key

  • willRetain, willQoS, willFlag, willTopic, willMessage: !!! Not taken into account !!!,

  • keepAlive: any value, will be correctly interpreted by the MQTT bridge (recommended: 30 seconds).

On reception, the MQTT bridge validates the API Key provided.

  • If the tenantKey is valid, then MQTT Bridge returns a MQTT CONNACK message with return code 0x00 Connection Accepted.

  • If the tenantKey is not valid, then MQTT Bridge returns a MQTT CONNACK message with return code 0x04 Connection Refused: bad user name or password, and closes the TCP connection.

11.2.2. MQTT Ping Req/Res

MQTT Bridge answers PINGREQ packets with PINGRESP packets: this is a way for the MQTT client to avoid connection timeouts.

11.2.3. MQTT Disconnect

MQTT Bridge closes the MQTT / TCP connection when receiving a MQTT DISCONNECT message.

11.2.4. TCP Disconnect

When the TCP connection closes (by client or MQTT bridge), the MQTT bridge will close the currently active subscriptions, etc.

11.3. "Device" mode

In the "Device" mode, a single MQTT connection is associated with a specific device, and JSON messages can be exchanged to support various Device Management and Data features:

  • notifying of the device connectivity status,

  • notifying of the current device configuration and receiving configuration updates,

  • notifying of the list of current device "resources" (i.e. binary contents) versions, and receiving resource update requests,

  • receiving commands and responding to them,

  • sending data messages that will be stored.

Device management features

landing

11.3.1. Connection

When initiating the MQTT connection, to select the "Device" mode you must use the following credentials:

  • clientId : your device unique identifier (cf. Device Identifier (URN)),

  • username : json+device, (where "json" indicates the encoding, "device" the mode),

  • password : a valid API key value.

As soon as the MQTT connection has been accepted by Live Objects, your device will appear as "connected" in Live Objects, with various information regarding the MQTT connection.

Once you close the connection (or if the connection times out), your device will appear as "disconnected" in Live Objects.

11.3.2. Device Identifier (URN)

The "device id" used as MQTT client Id must be a valid Live Objects URN of the following format:

urn:lo:nsid:{namespace}:{id}

Where:

  • namespace:
    your device identifier "namespace", used to avoid conflicts between various families of identifiers (ex: device model, identifier class "imei", "msisdn", "mac", etc.).
    Should preferably only contain alphanumeric characters (a-z, A-Z, 0-9).

  • id:
    your device id (ex: IMEI, serial number, MAC address, etc.)
    Should only contain alphanumeric characters (a-z, A-Z, 0-9) and/or any special characters amongst - _ | + and must avoid # / !.

Examples
urn:lo:nsid:tempSensor:17872800001W
urn:lo:nsid:gtw_M50:7891001
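
A minimal connection sketch with the Eclipse Paho Python client (paho-mqtt 1.x style API, an assumption; any MQTT 3.1.1 client can be used), reusing one of the example URNs above as clientId:

import paho.mqtt.client as mqtt

API_KEY = "<your API key>"
DEVICE_URN = "urn:lo:nsid:tempSensor:17872800001W"   # the device URN is the MQTT clientId

client = mqtt.Client(client_id=DEVICE_URN)
client.username_pw_set("json+device", API_KEY)       # "Device" mode with JSON encoding
client.tls_set()                                     # recommended MQTTS endpoint
client.connect("liveobjects.orange-business.com", 8883, keepalive=30)
client.loop_start()                                  # the device now appears as "connected"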

11.3.3. Summary

Authorized MQTT actions from the device:

publish to

dev/info

to announce the current status

publish to

dev/cfg

to announce the current configuration or respond to a config update request

subscribe to

dev/cfg/upd

to receive configuration update requests

publish to

dev/data

to forward collected data

subscribe to

dev/cmd

to receive commands

publish to

dev/cmd/res

to return command responses

publish to

dev/rsc

to announce the current resource versions

subscribe to

dev/rsc/upd

to receive resource update requests

publish to

dev/rsc/upd/res

to respond to resource update requests

11.3.4. Current Status

To notify Live Objects of its current status, your device must publish a message to the MQTT topic dev/info with the following JSON structure:

{
   "info": <<metadata>>
}

Where:

  • metadata:
    A JSON object describing the current asset status.

Example
{
   "info": {
      "IP": "4.4.4.7",
      "gpsActive": true
   }
}

Live Objects registers that status, or updates the already registered one (by adding the newly declared key/values or updating the ones already known), until the next connection.

11.3.5. Current Config

To notify Live Objects of its current configuration, your device must publish a message to the MQTT topic dev/cfg with the following JSON structure:

{
   "cfg": {
      "<<param1Key>>": {
         "t": "<<param1Type>>",
         "v": <<param1Value>>
      },
      ...
   }
}

Where:

  • param{X}Key: the identifier for the device configuration parameters,

  • param{X}Type : indicates the config parameter type between

    • "i32": the value must be an integer between -2,147,483,647 and 2,147,483,647,

    • "u32": the value must a positive integer between 0 and 4,294,967,296,

    • "str": the value is a UTF-8 string,

    • "bin": the value is a base64 encoded binary content,

    • "f64": the value is float (64 bits) value,

  • param{X}Value : the config parameter value.

Example:
{
   "cfg": {
      "log_level": {
         "t": "str",
         "v": "DEBUG"
      },
      "secret_key": {
         "t": "bin",
         "v": "Nzg3ODY4Ng=="
      },
      "conn_freq": {
         "t": "i32",
         "v": 80000
      }
   }
}

11.3.6. Config update

When your device is ready to receive configuration updates, it can subscribe to the MQTT topic dev/cfg/upd from where it will receive messages of the following format:

{
   "cfg": {
      "<<param1Key>>": {
         "t": "<<param1Type>>",
         "v": <<param1Value>>
      },
      ...
   },
   "cid": <<correlationId>>
}

Message fields:

  • param{X}Key : The identifier of a device configuration parameter that must be updated,

  • param{X}Type, param{X}Value : the new type and value to apply to the parameter,

  • correlationId : an identifier that your device must set when publishing your new configuration, so that Live Objects updates the status of your configuration parameters.

Example:
{
   "cfg": {
      "logLevel": {
         "t": "bin",
         "v": "DEBUG"
      },
      "connPeriod": {
         "t": "i32",
         "v": 80000
      }
   },
   "cid": 907237823
}

11.3.7. Config update response

Once your device has processed a configuration update request, it must return a response to Live Objects by publishing on topic dev/cfg the current value for the parameters that were updated:

{
   "cfg": {
      "<<param1Key>>": {
         "t": "<<param1Type>>",
         "v": <<param1Value>>,
      },
      ...
   },
   "cid": <<correlationId>>
}

Message fields:

  • config : the new configuration of your device (complete or at least all parameters that were in the configuration update request),

  • correlationId : the correlationId of the configuration update request.

Example:

{
   "cfg": {
      "logLevel": {
         "t": "bin",
         "v": "DEBUG"
      },
      "connPeriod": {
         "t": "i32",
         "v": 80000
      }
   },
   "cid": 907237823
}

If the new value for a parameter is the one that was requested in the configuration update request, the parameter will be considered as successfully updated by Live Objects.

If the new value for a parameter is not the one requested, the parameter update will be considered as "failed" by Live Objects.

11.3.8. Data push

To publish collected data into Live Objects, your device must publish on the MQTT topic dev/data the following messages:

{
   "s":  "<<streamId>>",
   "ts": "<<timestamp>>",
   "m":  "<<model>>",
   "v": {
          ... <<value>> JSON object ...
   },
   "t" : [<<tag1>>,<<tag2>>,...]
   "loc": [<<latitude>>, <<longitude>>]
}

Message fields:

  • streamId : identifier of the timeseries this message belongs to,

  • timestamp : date/time associated with the message, in ISO 8601 format,

  • model : a string identifying the schema used for the "value" part of the message, to avoid conflict at data indexing,

  • value : a free JSON object describing the collected information,

  • tags : list of strings associated with the message to convey extra information,

  • latitude, longitude : details of the geolocation associated with the message (in degrees),

Example:
{
   "s":   "mydevice!temp",
   "ts":  "2016-01-01T12:15:02Z",
   "m":   "tempV1",
   "loc": [45.4535, 4.5032],
   "v": {
      "temp":     12.75,
      "humidity": 62.1,
      "gpsFix":   true,
      "gpsSats":   [12, 14, 21]
   },
   "t" : [ "City.NYC", "Model.Prototype" ]
}
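
The same message can be published from the Device-mode connection sketched earlier; a minimal sketch where client is the already-connected paho-mqtt client:

import json
import paho.mqtt.client as mqtt

def publish_temperature(client: mqtt.Client) -> None:
    # same data message as in the example above
    data_message = {
        "s": "mydevice!temp",
        "ts": "2016-01-01T12:15:02Z",
        "m": "tempV1",
        "loc": [45.4535, 4.5032],
        "v": {"temp": 12.75, "humidity": 62.1, "gpsFix": True, "gpsSats": [12, 14, 21]},
        "t": ["City.NYC", "Model.Prototype"],
    }
    client.publish("dev/data", json.dumps(data_message), qos=1)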

11.3.9. Commands

When your device is ready to receive commands, it can subscribe to the MQTT topic dev/cmd from where it can receive the following messages:

{
   "req":  "<<request>>",
   "arg": {
      "<<arg1>>": <<arg1Value>>,
      "<<arg2>>": <<arg2Value>>,
      ...
   },
   "cid":  <<correlationId>>
}

Message fields:

  • request : string identifying the method called on the device,

  • arg{X}, arg{X}Value : name and value (any valid JSON value) of an argument passed to the request call,

  • correlationId : an identifier that must be returned in the command response to help Live Objects match the response and request.

Example:
{
   "req":  "buzz",
   "arg": {
      "durationSec": 100,
      "freqHz":     800.0
   },
   "cid": 12238987
}

11.3.10. Commands response

To respond to a command, your device must publish the response to the MQTT topic dev/cmd/res with a message of the following format:

{
   "res": {
      "<<res1>>": "<<res1Value>>",
      "<<res2>>": "<<res2Value>>",
      ...
   },
   "cid":  <<correlationId>>
}

Message fields:

  • res{X}, res{X}Value : optional information returned by the command execution,

  • correlationId : a copy of the command correlationId value.

Example #1:
{
   "res": {
      "done": true
   },
   "cid": 12238987
}
Example #2:
{
   "res": {
      "error": "unknown method 'buzz'"
   },
   "cid": 12238987
}
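
A minimal sketch of a command handler for the Device-mode connection sketched earlier: it subscribes to dev/cmd, handles the hypothetical buzz method and copies the cid into the response published on dev/cmd/res:

import json
import paho.mqtt.client as mqtt

def on_command(client, userdata, msg):
    command = json.loads(msg.payload)
    if command.get("req") == "buzz":
        response = {"res": {"done": True}, "cid": command["cid"]}
    else:
        response = {"res": {"error": "unknown method '%s'" % command.get("req")},
                    "cid": command["cid"]}
    client.publish("dev/cmd/res", json.dumps(response), qos=1)

def attach_command_handler(client: mqtt.Client) -> None:
    client.message_callback_add("dev/cmd", on_command)
    client.subscribe("dev/cmd", qos=1)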

11.3.11. Current Resources

Once connected, your device can announce the currently deployed versions of its resources by publishing a message on MQTT topic dev/rsc with the following format:

{
   "rsc": {
      "<<resource1Id>>": {
         "v": "<<resource1Version>>",
         "m": <<resource1Metadata>>
      },
      "<<resource2Id>>": {
         "v": "<<resource2Version>>",
         "m": <<resource2Metadata>>
      },
      ...
   }
}

Message fields:

  • resource{X}Id : resource identifier,

  • resource{X}Version : currently deployed version of this resource,

  • resource{X}Metadata : (JSON object) (optional) metadata associated with this resource, useful for resource updates.

Example:
{
   "rsc": {
      "X11_firmware": {
         "v": "1.2",
         "m": {
            "username": "78723-672-1232"
         }
      },
      "X11_modem_driver": {
         "v": "4.0.M2"
      }
   }
}

11.3.12. Resources update

When your device is ready to receive resource update requests, it just needs to subscribe to the MQTT topic dev/rsc/upd. From then on it will receive such requests as messages with the following JSON format:

{
   "id": "<<resourceId>>",
   "old": "<<resourceCurrentVersion>>",
   "new": "<<resourceNewVersion>>",
   "m": {
      // ... <<metadata>> JSON object ...,
   },
   "cid": "<<correlationId>>"
}

Message fields:

  • resourceId : identifier of resource to update,

  • resourceCurrentVersion : current resource version,

  • resourceNewVersion : new resource version, to download and apply,

  • correlationId : an identifier that must be returned in the resource update response to help Live Objects match the response and request.

Example:
{
   "id": "X11_firmware",
   "old": "1.1",
   "new": "1.2",
   "m": {
      "uri": "http://.../firmware/1.2.bin",
      "md5": "098f6bcd4621d373cade4e832627b4f6"
   },
   "cid": 3378454
}

11.3.13. Resources update response

Once your device receives a "Resource update request", it needs to respond to indicate whether or not it accepts the new resource version, by publishing a message on topic dev/rsc/upd/res with the following JSON format:

{
   "res": "<<responseStatus>>",
   "cid": "<<correlationId>>"
}

Message fields:

  • responseStatus : indicates the response status to the resource update request:

    • "OK" : the update is accepted and will start,

    • "UNKNOWN_RESOURCE" : the update is refused, because the resource (identifier) is unsupported by the device,

    • "WRONG_SOURCE_VERSION" : the device is no longer in the "current" resource version specified in the resource update request,

    • "INVALID_RESOURCE" : the requested new resource version has incorrect version format or metadata,

    • "NOT_AUTHORIZED" : the device refuses to update the targeted resource (ex: bad timing, "read-only" resource, etc.),

    • "INTERNAL_ERROR" : an error occured on the device, preventing for the requested resource update,

  • correlationId : copy of the correlationId field from the resource update request.

Example #1:
{
   "res": "OK",
   "cid": 3378454
}
Example #2:
{
   "res": "UNKNOWN_RESOURCE",
   "cid": 778794
}

11.4. "Bridge" mode

In the "Bridge" mode, a single MQTT connection can be used to exchange data related to multiple devices or applications.

For example, a "gateway" device could communicate with Live Objects and forward data collected by multiple devices using this mode.

An application that wants to consume flows of data collected by Live Objects and interacts through Live Objects with devices would also use this mode.

11.4.1. Connection

When initiating the MQTT connection, to select the "Bridge" mode you must use the following credentials:

  • clientId : any value - only used as "consumerId" for the Router subscriptions,

  • username : format "{encoding}+bridge" (or just "{encoding}") :

    • "json+bridge" : select "Bridge" mode with "JSON encoding (V0)",

    • "payload+bridge" : select "Bridge" mode with no encoding (only message payloads are available).

11.4.2. Summary

In "bridge" mode, the topics used for publications and subscriptions must follow on of the following format:

  • pubsub/{pubSubTopic}, to use the Live Objects bus in "PubSub" mode,

  • fifo/{fifoId}, directly publish into / consume from a specific FIFO queue,

  • router/{routingKey}, to directly publish to the Live Objects "Router" or consume from it.

All publications made on the MQTT bridge are forwarded to the Live Objects message bus as FIFO, PubSub or Router publications.

All subscriptions made on the MQTT bridge are forwarded to the Live Objects message bus as FIFO, PubSub or Router subscriptions.

11.4.3. PubSub publication

To publish on a PubSub topic, the MQTT client must publish in MQTT on a topic of the following format:

pubsub/{pubSubTopic}

where pubSubTopic is the name of the PubSub topic.

If pubSubTopic starts with the "~" character, then the selected "encoding" is applied to decode the published MQTT message:

  • if encoding = "JSON (V0)", the message should be a valid JSON-encoded message,

  • if encoding = "payload", the MQTT message content becomes the Live Objects message payload.

If pubSubTopic does not start with the "~" character, then the MQTT message content becomes the generated Live Objects message payload.

MQTT message "qos" 0, 1 and 2 are supported, but don’t offer any guarantee here: currently subscribed client to this PubSub topic may or may not receive the message.
Example #1 - any encoding / random message on standard topic
[on MQTT interface]
   action  = MQTT PUBLISH
   topic   = 'pubsub/data'
   content = 'Hello world!'

[on Live Objects bus]
   action  = PubSub publication
   topic   = 'data'
   message = ( payload = "Hello world!" )
Example #2 - JSON encoding / bad message on "~" topic
[on MQTT interface, with encoding=JSON (V0)]
   action  = MQTT PUBLISH
   topic   = 'pubsub/~device/connects'
   content = 'blob'

=> message is not a valid JSON message,
so message is dropped and MQTT connection closed.
Example #3 - JSON encoding / correct message on "~" topic
[on MQTT interface, with encoding = JSON (V0)]
   action  = MQTT PUBLISH
   topic   = 'pubsub/~device/connects'
   content = '{"payload":"Hello world!","timestamp": 1447944553720}'

[on Live Objects bus]
   action  = PubSub publication
   topic   = '~device/connects'
   message = ( payload = "Hello world!" , timestamp = 1447944553720 )
Example #4 - payload encoding / random message on "~" topic
[on MQTT interface, with encoding = payload]
   action  = MQTT PUBLISH
   topic   = 'pubsub/~device/connects'
   content = 'test 1 2 3'

[on Live Objects bus]
   action  = PubSub publication
   topic   = '~device/connects'
   message = ( payload = "test 1 2 3" )

11.4.4. PubSub subscription

To subscribe to a PubSub topic, a MQTT client connected in "Bridge" mode must subscribe to the following MQTT topic:

pubsub/{pubSubTopic}

where pubSubTopic is the name of the PubSub topic.

A MQTT SUBACK packet is returned by Live Objects only once the subscription is active on Live Objects internal message bus.

If pubSubTopic starts with the "~" character, then the selected "encoding" is applied to encode the messages consumed from the internal Live Objects message bus.

If pubSubTopic does not start with the "~" character, then only the Live Objects message "payload" attribute is returned in the MQTT message.

MQTT message "qos" 0, 1 and 2 are supported, but don’t offer any guarantee here: currently subscribed client to this PubSub topic may or may not receive the message.

11.4.5. FIFO publication

To publish directly into a FIFO queue, a MQTT client connected in "Bridge" mode must publish to the following MQTT topic:

fifo/{fifoId}

where fifoId is the identifier of the targeted FIFO queue.

If the "fifoId" starts with "~", the same process is applied to the MQTT publication as for the PubSub publication.

Regarding the "qos" of the MQTT publication:

  • qos = 0 : no acknowledgement is returned, so no guarantee is offered to the client,

  • qos = 1 : a MQTT PUBACK packet is returned only once the message has been stored into the targeted FIFO, or once the message has been dropped because the targeted FIFO does not exist,

  • qos = 2 : idem as for qos=1 but with a PUBREL packet.
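
A hedged sketch of a qos=1 FIFO publication, assuming a FIFO named "alarms" exists on the tenant (same placeholder connection parameters as in the previous sketches):

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="fifo-producer")
client.username_pw_set("payload+bridge", "<API key>")   # raw payload, no encoding
client.connect("liveobjects.orange-business.com", 1883)
client.loop_start()

# qos=1: the PUBACK is only received once the message is stored in the FIFO.
info = client.publish("fifo/alarms", b"temperature=42", qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()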

11.4.6. FIFO subscription

To subscribe to a FIFO queue, a MQTT client connected in "Bridge" mode must subscribe to the following MQTT topic:

fifo/{fifoId}

where fifoId is the identifier of the targeted FIFO queue.

If the subscription succeeds, Live Objects returns a MQTT SUBACK packet, with a return code equal to the requested qos, only once the subscription is active.

If the subscription fails (for ex. because the FIFO does not exist), a MQTT SUBACK packet is returned with return code 0X80 (= Failure).

As for the PubSub subscriptions:

If fifoId starts with the "~" character, then the selected "encoding" is applied to encode the messages consumed from the internal Live Objects message bus.

If fifoId does not start with the "~" character, then only the Live Objects message "payload" attribute is returned in the MQTT message.

Regarding MQTT subscription "qos":

  • qos = 0 : messages consumed from the FIFO disappear from the FIFO queue as soon as they are written to the socket by the Live Objects MQTT interface - so consuming from a FIFO with qos=0 offers no guarantee of message delivery,

  • qos = 1 or 2 : messages consumed from the FIFO are removed from the FIFO only once the first acknowledgement (PUBACK or PUBREL) is received from the subscribed client - by consuming with qos > 0 from a FIFO queue, 'at least once' message delivery is guaranteed.
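
And a matching sketch consuming the same hypothetical "alarms" FIFO with qos=1, so that 'at least once' delivery applies:

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # A SUBACK with return code 0x80 is returned if the FIFO does not exist.
    client.subscribe("fifo/alarms", qos=1)

def on_message(client, userdata, msg):
    # With qos=1, the PUBACK sent back by this client removes the message
    # from the FIFO ('at least once' delivery).
    print("consumed from FIFO:", msg.payload)

client = mqtt.Client(client_id="fifo-consumer")
client.username_pw_set("payload+bridge", "<API key>")
client.on_connect = on_connect
client.on_message = on_message
client.connect("liveobjects.orange-business.com", 1883)
client.loop_forever()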

11.4.7. Router publication

To publish on the Live Objects "Router", a MQTT client connected in "Bridge" mode must publish to the following MQTT topic:

router/{mqttRoutingKey}

If mqttRoutingKey starts with ~, the same process is applied to the MQTT publication as for the PubSub publication.

The message is then published on the Router of the Live Objects internal bus, with a routing key equal to mqttRoutingKey in which every "/" has been replaced by a "." (dot).

(this conversion enables a more MQTT-friendly format for the Live Objects routing keys)

Regarding the "qos" of the MQTT publication:

  • qos = 0 : no acknowledgement is returned, so no guarantee is offered to the client,

  • qos = 1 : a MQTT PUBACK packet is returned only once the message has been stored into all FIFO queues with bindings matching the routing key,

  • qos = 2 : idem as for qos=1 but with a PUBREL packet.
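
As a sketch, publishing on the hypothetical MQTT topic router/alerts/temperature results in a Router publication with routing key alerts.temperature (same placeholder connection parameters as before):

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="router-producer")
client.username_pw_set("payload+bridge", "<API key>")
client.connect("liveobjects.orange-business.com", 1883)
client.loop_start()

# MQTT topic "router/alerts/temperature" -> routing key "alerts.temperature".
# qos=1: the PUBACK is received once the message is stored in every FIFO
# whose binding matches the routing key.
info = client.publish("router/alerts/temperature", b"42.1", qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()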

11.4.8. Router subscription

To subscribe to the Live Objects Router, a MQTT client connected in Bridge mode must subscribe to the following MQTT topic:

router/{mqttRoutingKeyFilter}

Live Objects then creates a subscription directly on the message bus Router, with a routing key filter equal to the converted mqttRoutingKeyFilter:

  • every / is replaced by a .

  • every MQTT wildcard + is replaced by a *

  • the MQTT wildcard # is kept as #

Examples:
  • router/# = Router subscription with routing key filter "#"

  • router/~android/1233231/data = Router subscription with routing key filter ~android.1233231.data

  • router/~android/+/data = Router subscription with routing key filter ~android.*.data

  • router/~android/# = Router subscription with routing key filter ~android.#

Once the subscription is active, Live Objects returns a MQTT SUBACK packet with a return code equal to the requested qos.

If a problem occurs and the subscription fails, a MQTT SUBACK packet is returned with return code 0X80 (= Failure).

As for the PubSub subscriptions:

If mqttRoutingKeyFilter starts with the ~ character, then the selected encoding is applied to encode the messages consumed from the internal Live Objects message bus.

If mqttRoutingKeyFilter does not start with the ~ character, then only the Live Objects message payload attribute is returned in the MQTT message.

Regarding MQTT subscription qos: all values (0, 1, 2) are supported but don’t offer any delivery guarantee.
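
For example, subscribing with the MQTT filter router/~android/+/data makes Live Objects create a Router subscription with routing key filter ~android.*.data, as sketched below (same assumptions as the previous sketches):

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # MQTT "+" becomes "*" and "/" becomes ".": routing key filter "~android.*.data".
    client.subscribe("router/~android/+/data", qos=1)

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

client = mqtt.Client(client_id="router-consumer")
client.username_pw_set("json+bridge", "<API key>")
client.on_connect = on_connect
client.on_message = on_message
client.connect("liveobjects.orange-business.com", 1883)
client.loop_forever()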

Router subscription for data message

These topics allow subscribing to the data messages sent to Live Objects.

Relevant topics:
  • router/~event/v1/data/new/ to subscribe to all data messages sent to Live Objects. The associated routing key filter is ~event.v1.data.new.

  • router/~event/v1/data/new/urn/lora/ to subscribe to all uplink data messages from LPWA devices. The associated routing key filter is ~event.v1.data.new.urn.lora.

  • router/~event/v1/data/new/urn/msisdn/ to subscribe to the uplink data messages of all devices using the SMS interface. The associated routing key filter is ~event.v1.data.new.urn.msisdn.

Example, to subscribe to a specific LPWA device (a runnable sketch follows):
  • router/~event/v1/data/new/urn/lora/<devEUI>/ to subscribe to the uplink data message stream of one device. The associated routing key filter is ~event.v1.data.new.urn.lora.<devEUI>.
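
A minimal sketch subscribing to the uplink data messages of one LPWA device, following the topic format above; <devEUI> is left as a placeholder and the connection parameters are the same assumptions as in the previous sketches:

import json
import paho.mqtt.client as mqtt

DEV_EUI = "<devEUI>"   # EUI of the targeted LPWA device

def on_connect(client, userdata, flags, rc):
    # Routing key filter: ~event.v1.data.new.urn.lora.<devEUI>
    client.subscribe("router/~event/v1/data/new/urn/lora/" + DEV_EUI + "/", qos=1)

def on_message(client, userdata, msg):
    print(json.loads(msg.payload))   # JSON (V0) encoded data message

client = mqtt.Client(client_id="data-consumer")
client.username_pw_set("json+bridge", "<API key>")
client.on_connect = on_connect
client.on_message = on_message
client.connect("liveobjects.orange-business.com", 1883)
client.loop_forever()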

12. REST API

12.1. Endpoints

https://liveobjects.orange-business.com/api/

The current version is version “v0”. As a consequence, all methods described in this document are available on URLs starting with:

https://liveobjects.orange-business.com/api/v0/

12.2. Principles

Live Objects exposes a REST API providing the following functionalities:

  • API key operations

  • Device management for managed devices (inventory, parameters, commands, resources operations)

  • Device management for MyPlug devices

  • Device management for LPWA

  • Bus management (create FIFO, binding)

  • Data management (store and search)

  • Contact : Email management (send email)

  • Portal User management

12.2.1. Content

By default all methods that consume or return content only accept one format: JSON (cf. http://json.org ).

As a consequence, for those methods the use of HTTP headers Content-Type or Accept with value application/json is optional.

12.2.2. API-key authentication

Clients of the Live Objects Rest API are authenticated, based on an API key that must be provided with any request made to the API.

This API key must be added to the request as an HTTP header named X-API-Key.

Example (HTTP request to the API)
GET /api/v0/assets HTTP/1.1
Host: <base URL>
X-API-Key: <API key>

If you don’t provide such an API Key, or if you use an invalid API key, Live Objects responds with the standard HTTP Status code 403 Forbidden.
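
The same request can be issued, for instance, with the Python requests library (a sketch; /assets is the path used in the example above):

import requests

BASE_URL = "https://liveobjects.orange-business.com/api/v0"
headers = {"X-API-Key": "<API key>"}      # missing/invalid key -> 403 Forbidden

response = requests.get(BASE_URL + "/assets", headers=headers)
if response.status_code == 403:
    print("missing or invalid API key")
else:
    print(response.json())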

12.2.3. Paging

Some methods that return a list of entities allow paging: the method doesn’t return the full list of entities, but only a subset of the complete list matching your request.

You need to use two standard query parameters (i.e. that must be added at the end of the URL, after a ?, separated by a &, and defined like this: <param>=<value>):

  • size: maximum number of items to return (i.e. number of items per page),

  • page: number of the page to display (starts at 0).

Those parameters are not mandatory: by default page is set to 0 and size to 20.

Example:

  • If size=10 and page=0, then items number 0 to 9 (at most) will be returned.

  • If size=20 and page=1, then items number 20 to 39 (at most) will be returned.

Example (HTTP request to the API)
GET /api/v0/assets?page=100&size=20 HTTP/1.1
Host: <base URL>
X-API-Key: <API key>

The responses of such methods are a “page” of items - a JSON object with the following attributes:

  • totalCount: total number of entities matching request in service (only part of them are returned),

  • size: the value for “size” taken into account (it can differ from the one in the request if that value was invalid),

  • page: the value for “page” taken into account (it can differ from the one in the request if that value was invalid),

  • data: list of returned entities.
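
As an illustration, the sketch below walks through all pages of a listing method using size, page and the returned totalCount (same assumptions as the previous REST sketch):

import requests

BASE_URL = "https://liveobjects.orange-business.com/api/v0"
headers = {"X-API-Key": "<API key>"}

items, page, size = [], 0, 20
while True:
    resp = requests.get(BASE_URL + "/assets", headers=headers,
                        params={"page": page, "size": size})
    body = resp.json()
    items.extend(body["data"])
    # Stop once every entity matching the request has been fetched.
    if (page + 1) * body["size"] >= body["totalCount"]:
        break
    page += 1

print(len(items), "items fetched")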

12.3. Swagger

All HTTP REST methods (device management, data management, bus management, etc.) are described in the Swagger documentation available at https://liveobjects.orange-business.com/swagger-ui/index.html.

13. Web portal

The Live Objects web portal is available at https://liveobjects.orange-business.com.

13.1. Landing page

landing

13.2. Sign in

landing

13.3. Dashboard (home)

landing

13.4. Devices

13.4.1. Device list

landing

13.4.2. Device status

landing

13.4.3. Device parameters

landing

13.4.4. Device commands

landing

13.4.5. Device resources

landing

13.5. Data

landing

13.6. Simulating

landing

13.7. Configuration

13.7.1. Account

landing

13.7.2. API keys

landing

13.7.3. Users

landing

13.7.4. Messages

landing

13.7.5. Device resources

landing

14. MyPlug interface

Messages are available on the Live Objects bus as notifications of activity from the MyPlug devices and accessories associated with your account.

Those messages are published in Router mode and can be consumed from the MQTT bridge on the following topics:

  • router/~event/myplug/{gatewayMac}/event for events triggered by a MyPlug gateway,

  • router/~event/myplug_acc/{accessoryMac}/event for events generated from accessories.

14.1. Message structure

14.1.1. Source

The source attribute/field of the messages emitted from MyPlug activity identifies the MyPlug gateway, or the MyPlug gateway and accessory, that triggered the event.

If the event has been triggered by the MyPlug gateway only (ex: communication lost, new accessory association…), then source contains only one element: an entry with order=0, namespace "myplug" and the MAC identifier of the MyPlug gateway as id.

{
   "source": [
      {
         "order": 0,
         "namespace": "myplug",
         "id": "283657E9A51A1F0A"
      }],

   ...

}

If the event has been triggered by a MyPlug accessory (ex: flood alarm…), then source contains two elements:

  • a source element with order=0, namespace="myplug_acc" and the accessory MAC identifier as id,

  • a source element with order=1, namespace="myplug" and the gateway MAC identifier as id.

{
   "source": [{
      "order": 0,
      "namespace": "myplug_acc",
      "id": "A6564CD9756FD32D"
   },{
      "order": 1,
      "namespace": "myplug",
      "id": "283657E9A51A1F0A"
   }],

   ...

}
timestamp

This is the instant of the event generation, given as a Java epoch timestamp: the number of milliseconds elapsed since 1/1/1970.

event

This field contains the event information. The type of the alarm is a string that depends on the asset that has produced the alarm.

eventLifecycle

The position of the event in its life cycle. Possible values: ONE_SHOT, BEGIN, END, ONGOING.

data

The content of this field depends on the MyPlug event described by the message.

Some data values are always present:

  • type: the accessory type that generated the event,

  • name: in case of an accessory event, the name of the accessory.
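
To illustrate the structure above, here is a hedged sketch extracting the main fields from a received MyPlug event message (raw stands for the JSON document consumed from the bus):

import json

def parse_myplug_event(raw):
    """Extract the main fields of a MyPlug event message (sketch)."""
    message = json.loads(raw)
    ids = {s["namespace"]: s["id"] for s in message["source"]}
    return {
        "gateway": ids.get("myplug"),           # gateway MAC, always present
        "accessory": ids.get("myplug_acc"),     # accessory MAC, accessory events only
        "event": message.get("event"),
        "lifecycle": message.get("eventLifecycle"),
        "timestamp": message.get("timestamp"),  # Java epoch, in milliseconds
        "data": message.get("data", {}),
    }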

14.1.2. Standard events

  • MQTT topic = "router/~event/myplug/{myplugMac}/event"

  • message:

    • event: cf. table,

    • data: cf. table.

Available events:

event     | eventLifeCycle | data fields                 | meaning
GwLost    | ONE_SHOT       | -                           | No communication with the gateway for 25 hours: failed to communicate with the gateway.
BindEvent | BEGIN          | "type" ⇒ type of accessory  | An accessory has been associated with the LiveIntercom.
BindEvent | END            | "type" ⇒ type of accessory  | An accessory has been dissociated from the LiveIntercom.

Accessory: LiveIntercom
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "MY_INTERCOM"

Available events:

event      | eventLifeCycle | meaning
PowerLost  | BEGIN          | The LiveIntercom has just been disconnected from the power supply.
PowerLost  | ONGOING        | The LiveIntercom is still not connected to the power supply.
PowerLost  | END            | The LiveIntercom is connected again to the power supply.
LowBat     | ONE_SHOT       | The battery level is low.
AlarmPb    | ONE_SHOT       | The user has pressed the alarm button.
SocAlarmPb | ONE_SHOT       | The user has pushed the social alarm button.

Accessory: Emergency push button
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "SOCIAL_ALARM_BUTTON"

Available events:

event       | eventLifeCycle | meaning
Lowbat      | ONE_SHOT       | The battery level is low.
EmergencyPb | ONE_SHOT       | The button has been pushed.
Comlost     | BEGIN          | Local RF communication lost.
Comlost     | END            | Local RF communication restored.

Accessory: Smoke detector
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "SMOKE_DETECTOR"

Available events:

event   | eventLifeCycle | meaning
Lowbat  | ONE_SHOT       | The battery level is low.
Smoke   | ONE_SHOT       | Smoke detected.
Comlost | BEGIN          | Local RF communication lost.
Comlost | END            | Local RF communication restored.
TestPb  | ONE_SHOT       | Test button pushed.

Accessory: Flood detector
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "FLOOD_DETECTOR"

Available events:

event   | eventLifeCycle | meaning
Lowbat  | ONE_SHOT       | The battery level is low.
Flood   | ONE_SHOT       | Flood detected.
Comlost | BEGIN          | Local RF communication lost.
Comlost | END            | Local RF communication restored.

Accessory: Wall plug
  • MQTT topic = "router/~event/myplug_acc/{accessoryMac}/event"

  • message:

    • event: cf. table,

    • data:

      • name = accessory name (defined by user),

      • type = "WALL_PLUG"

Available events:

event       | eventLifeCycle | meaning
Lowbat      | ONE_SHOT       | The battery level is low.
SwitchError | ONE_SHOT       | Switch error.
PowerLost   | BEGIN          | The WallPlug has just been disconnected from the power supply.
PowerLost   | END            | The WallPlug is connected again to the power supply.
Comlost     | BEGIN          | Local RF communication lost.
Comlost     | END            | Local RF communication restored.

15. Limitations

15.1. Rate limiting

Rate limiting is applied to each API key and controls the number of calls or messages per time window (e.g. 1 call per second). Depending on the offer, a rate limiting configuration may be applied to the HTTP interface, the MQTT interface or both.

HTTP interface

Each response of the web controller contains 3 headers giving additional information on the status of the current request with regard to rate limiting:

X-RateLimit-Limit: 5
X-RateLimit-Remaining: 3
X-RateLimit-Reset: 1479745936295
  • X-RateLimit-Limit is the rate limit ceiling per second

  • X-RateLimit-Remaining is the number of requests left for the current time window

  • X-RateLimit-Reset is the ending date of the current time window (expressed in epoch milliseconds).

When receiving a request that would exceed the authorized traffic limit, the web application returns a 429 Too Many Requests error with an empty body.

Note that all X-RateLimit headers are present in the response, as they would be in a successful response.
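
A hedged sketch of how a client could use these headers to back off when it is rate limited (Python requests; the /assets path is only an example):

import time
import requests

BASE_URL = "https://liveobjects.orange-business.com/api/v0"
headers = {"X-API-Key": "<API key>"}

response = requests.get(BASE_URL + "/assets", headers=headers)
if response.status_code == 429:
    # X-RateLimit-Reset is the end of the current window, in epoch milliseconds.
    reset_ms = int(response.headers["X-RateLimit-Reset"])
    time.sleep(max(0.0, reset_ms / 1000.0 - time.time()))
    response = requests.get(BASE_URL + "/assets", headers=headers)

print(response.status_code, response.headers.get("X-RateLimit-Remaining"))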

MQTT interface

For MQTT connections, if the quota is reached, the MQTT session is disconnected. If an API key is used for several MQTT sessions at the same time, the requests of all those sessions are summed for this API key.

No reason or additional information is provided to the client software. The client is expected to try to reconnect repeatedly and re-send its data until traffic is allowed again in the next time window.

Limitation                                | Trial offer
REST Max req. per s. per API Key          | 5
MQTT Max req. per s. per API Key (uplink) | 5

15.2. Resources limitation

Limitation                     | Trial offer
Number of FIFO                 | 2
Size max - sum of FIFO (bytes) | 1 048 576
Maximum number of users        | 5
Maximum number of API keys     | 5

15.3. Compute quota

Limitation     | Trial offer
Search service | 30 ms per window of 10 s

16. Glossary

API      Application Programming Interface
FIFO     First In First Out
HTTP     HyperText Transfer Protocol
IoT      Internet of Things
IP       Internet Protocol
LED      Light-Emitting Diode
LPWA     Low-Power, Wide Area radio protocol
LPWAN    Low-Power, Wide Area Network
LOM      Live Objects Manage
M2M      Machine To Machine
MQTT     Message Queue Telemetry Transport
PPA      Personal Package Archives
PubSub   Publish and Subscribe
REST     REpresentational State Transfer
SaaS     Software as a Service
SDK      Software Development Kit
SIM      Subscriber Identity Module
TCP      Transmission Control Protocol