
Bashing up a Gateway with OMF

Posted by rfox Employee Feb 27, 2018

In the Perth office, we have a Dell Gateway 3003 set up for testing various Edge ideas.

After wanting to learn bash for a while, I realised that a bash OMF script for reading the sensor values and sending them to PI is a great little problem to learn with, as it involves file and text processing as well as sending non-trivial HTTP requests.

The Gateway 3003 includes a few sensors, which (in typical Linux fashion) you interact with via files - for example, to read the temperature sensor you:

  • Read the raw value from /sys/bus/iio/devices/iio:device0/in_temp_raw
  • Read the offset value from /sys/bus/iio/devices/iio:device0/in_temp_offset
  • Read the scale from /sys/bus/iio/devices/iio:device0/in_temp_scale
  • Calculate the real sensor reading by: real=(raw+offset)*scale

In addition, as the Gateway runs Ubuntu Core, I needed to install a "classic snap" (`snap install classic --edge --devmode` then `sudo classic`) in order to have a "standard" Linux environment (with tools like cURL, awk and bc). I had already written the calculations using awk, but in future I think I would use bc as it seems simpler in this case. The overall structure follows the Python example, so it always creates the types, container and assets messages before sending data.

Requirements

  • A bash environment
  • A connector relay to send to (this was written using relay beta 1.2.000.0001)
  • cURL (would also work with wget, but requires modifications to the way we send headers)
  • awk (could be ported to use bc)
  • Optional: grep (if you wanted to parse a file such as /proc/meminfo to get memory-related information into PI, for example)

Configuration and Static Variables

The first section of code deals with configuration and static variables for the script. The variables are set to be readonly as they do not need to be modified during script operation.

The element "DellGW-3003-PER_bash" will be created under the root element configured in the relay. This is the Producer Token.

A child element will be created under this element (in this case called "DellGW-3003-PER-Data" - the AF Element name).

We also configure the root sensor directory, as all of our sensors are below this directory.

We also configure the request timeout - cURL will try for 30 seconds to send data to the relay, then it will stop trying (and read new data).

Finally, we configure the log file this script will write to. All output from this script will be appended to the file.

 

Configuration and Static Variables

#!/bin/bash

# AF Element Configuration
readonly DEVICENAME="DellGW-3003-PER"
readonly DEVICELOC="PerthOffice"
readonly AFELEMENT="DellGW-3003-PER-Data"
readonly DEVICETYPE="Dell Gateway 3003"
readonly ROOTSENSORDIR="/sys/bus/iio/devices/"

# Message Types Configuration
readonly ASSETSMESSAGENAME="DGW_bashPerf_asset"
readonly DATAMESSAGENAME="DGW_bashPerf_data"
readonly DATACONTAINER="${DEVICENAME}_bashperf_container"

# Relay Configuration
readonly RELAYURL='https://<Relay URL Here>:5460/ingress/messages'
readonly PRODUCERTOKEN="${DEVICENAME}_bash" # will need to actually use producer tokens once the relay supports them
readonly REQUESTTIMEOUT=30 # adjust as necessary - as it is, this will dump data if it can't send in 30s but it may be better to use a shorter timeout
readonly WAITTIME=1 # Wait time between messages (in seconds). Takes float values.
readonly LOGNAME=<your_log_file_path_here>

 

 

sendOMF Helper Function

We're going to be sending a lot of OMF messages, so writing a nice little helper function around cURL to set the headers is going to really simplify things. We can also get the HTTP return code for the request in this way, which is very useful for logging and debugging.

It's general enough that it can be used to send all three types of OMF messages (type, container, data) and for all valid actions (create, update, delete).

As it's defined, you need to pass it the action, messagetype and messageJSON in that order (separated by a space).

 

It then sends the data via cURL. -H is used to specify headers, and multiple headers can be used (as in this case). --data specifies the message payload. --insecure is used to ignore any server-certificate errors; if I had properly configured certificates, this wouldn't be required.

We use -w to make cURL print the HTTP status code, which allows for debugging. 200 is a successful request.

-s is used to make cURL silent (aside from what we write via -w). --max-time is the timeout on the request (that is set in the previous section).

Helper Function

sendOMF(){
# Helper function to send an OMF message to the server
# Requires action, messageType and messageJSON to be passed in that order
    action=$1
    messageType=$2
    messageJSON=${3//$'\n'/}    # strip any newlines from the message as they break JSON

    # using curl to send data
    curl -H "producertoken:${PRODUCERTOKEN}" \
         -H "messagetype:${messageType}" \
         -H "action:${action}" \
         -H 'messageformat:json' \
         -H 'omfversion:1.0' \
         --data "${messageJSON}" \
         --insecure \
         -w '%{http_code}' \
         -s \
         --max-time ${REQUESTTIMEOUT} \
         ${RELAYURL}
}
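As a quick sketch of how the function can be used (the status-code check here is my own addition, not part of the original script), the returned HTTP code can be pulled off the end of the output and tested:

# Hypothetical usage of sendOMF - the error check is an optional extension
response=$(sendOMF "create" "type" "${msg}")
httpCode=${response: -3}    # -w '%{http_code}' appends the code after any response body
if [[ ${httpCode} != 2* ]]; then
    echo "Send failed with HTTP ${httpCode}" >> ${LOGNAME}
fi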

 

Defining Types

Alright, now that we've made our general function definitions, we can start sending some stuff to the relay! First up: defining our types. We have two types here: Data and Asset.

 

The datamessage type is dynamic, and includes all of our "live data". Time is the index for this type. For each data stream, we need to define the type of the stream (e.g. string, number).

We also need to include a type for the Time, specifying that it's a date-time formatted string. We could also specify that the numbers are all a specific type of float, but the default is a float32 which is fine for this case.

See here for the valid types and formats.

The assets message type is static, as it contains metadata about the asset. It has similar options to the data message type.

 

Finally, we call our helper function and store the output in typesResponse and write that we have completed the action to the log.

Type Definition

# Create the types message
echo "
Creating Types Message" >> $LOGNAME

msg='[{"id":"'${DATAMESSAGENAME}'",
       "type":"object",
       "classification":"dynamic",
       "properties":{"Time":{"format":"date-time",
                             "type":"string",
                             "isindex":true},
                     "Humidity":{"type":"number"},
                     "Temperature":{"type":"number"},
                     "Acceleration X":{"type":"number"},
                     "Acceleration Y":{"type":"number"},
                     "Acceleration Z":{"type":"number"},
                     "Pressure":{"type":"number"}}},
      {"id":"'${ASSETSMESSAGENAME}'",
       "type":"object",
       "classification":"static",
       "properties":{"Name":{"type":"string",
                             "isindex":true},
                     "Device Type":{"type":"string"},
                     "Location":{"type":"string"},
                     "Data Ingress Method":{"type":"string"}}}]'

typesResponse=$(sendOMF "create" "type" "${msg}")
echo "
Created Types Message with response ${typesResponse}

Creating Container Message" >> $LOGNAME

 

Defining Containers

We need to define a container so that we have something to send our data messages against; containers also allow multiple values to be sent to the relay at once. That part is actually a little redundant in this code, as we only send one value per message. A future update could pack multiple data points together and send them all at once (a rough sketch of what that might look like follows), or use some implementation of a message queue.
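As a hypothetical sketch (not part of the original script), a batched data message would simply carry more than one entry in the values array. The time1/temp1/rh1 variables below stand in for buffered readings, and the remaining properties are omitted just to keep the sketch short:

# Hypothetical batched data message: several buffered readings in one "values" array
# time1/temp1/rh1 and time2/temp2/rh2 are placeholder variables for earlier readings
msg='[{"containerid":"'${DATACONTAINER}'",
       "values":[{"Time":"'${time1}'", "Temperature":'${temp1}', "Humidity":'${rh1}'},
                 {"Time":"'${time2}'", "Temperature":'${temp2}', "Humidity":'${rh2}'}]}]'
dataResponse=$(sendOMF "create" "data" "${msg}")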

Container Definition

msg='[{"id":"'${DATACONTAINER}'",

       "typeid":"'${DATAMESSAGENAME}'"}]'

 

 

containerResponse=$(sendOMF "create" "container" "${msg}")

echo "

Created Continer Message with response ${containerResponse}

 

 

Creating Assets Message" >> $LOGNAME

 

Defining Assets

This is where it gets a little bit tricky. We define two things here: the first is the AF Element we specified earlier (with static data in it); the second is a __Link, which is a special type.

The link values do two jobs. The first link connects the AF Element created from the Producer Token (the _ROOT element) to the AF Element we want to write to with our assets message. The second links our container to that AF Element.

Assets Definition

msg='[{"typeid":"'${ASSETSMESSAGENAME}'",

       "values":[{"Name":"'${AFELEMENT}'",

                 "Device Type":"'${DEVICETYPE}'",

                 "Location":"'${DEVICELOC}'",

                 "Data Ingress Method":"OMF (via bash)"}]},

      {"typeid":"__Link",

       "values":[{"source":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"_ROOT"},

                  "target":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"'${AFELEMENT}'"}},

                 {"source":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"'${AFELEMENT}'"},

                 "target":{"containerid":"'${DATACONTAINER}'"}}]}]'

 

 

assetsResponse=$(sendOMF "create" "Data" "${msg}")

echo "

Created Assets Message with response ${assetsResponse}" >> $LOGNAME

 

Sending Data

Sending data to the relay consists of three steps, all of which are part of a single loop.

First, we grab the raw values from the sensor files. Then we combine the raw values to obtain the real sensor readings. Finally, we package the sensor readings into something we can send to the relay.

We initialise a number for counting the messages we have sent. This is simply used for debugging. Then we start an endless loop:

Loop start

msgno=1
while true
do

Raw values from file

This is pretty simple: we use cat to print the file contents and store the output in a variable. There may be a better way to do this (see the note after the code), but cat is a simple tool.

Reading Values

    rh_raw=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_raw)
    rh_offset=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_offset)
    rh_scale=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_scale)

    temp_raw=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_raw)
    temp_offset=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_offset)
    temp_scale=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_scale)

    accel_x_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_x_raw)
    accel_x_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_x_scale)
    accel_y_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_y_raw)
    accel_y_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_y_scale)
    accel_z_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_z_raw)
    accel_z_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_z_scale)

    pressure_raw=$(cat ${ROOTSENSORDIR}/iio:device2/in_pressure_raw)
    pressure_scale=$(cat ${ROOTSENSORDIR}/iio:device2/in_pressure_scale)
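As a small aside, bash can also read a file directly with the $(< file) form, which avoids spawning a cat process for every read. A minimal sketch of the same read (my variation, not from the original script):

    # bash builtin file read - equivalent to $(cat file) but without the extra process
    temp_raw=$(< ${ROOTSENSORDIR}/iio:device0/in_temp_raw)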

Obtaining Actual Sensor Readings

In the manual for the Gateway [pdf], we get the conversion from the above variables to real values. The general form is:

   real=(raw+offset)*scale

I used awk to process the variables, but I think that bc (basic calculator) might be simpler. Awk is a programming language for text processing, and is included in most Unix-like operating systems.

What is important to remember is that awk does not automatically get access to the variables we set within bash.

The gist of awk is that you:

  • Call it, and pass it a list of variables
    • E.g. awk -v rh_raw=$rh_raw will call awk and set a variable within awk called rh_raw with the same value as the shell variable $rh_raw.
  • Give awk a BEGIN{} statement, which will execute what is within the curly braces immediately.
    • Within the BEGIN{} statement, we perform the calculation above, then print the real value
  • As the output of awk is assigned to a variable, whatever awk prints will be stored in the bash variable.

Obtaining Actual Readings

    Humidity=$(awk -v rh_raw=$rh_raw -v rh_offset=$rh_offset -v rh_scale=$rh_scale \
    'BEGIN{rh_real=(rh_raw + rh_offset) * rh_scale; print rh_real; }')
    Temperature=$(awk -v temp_raw=$temp_raw -v temp_offset=$temp_offset -v temp_scale=$temp_scale \
    'BEGIN{temp_real=(temp_raw + temp_offset) * temp_scale; print temp_real; }')
    AccelerationX=$(awk -v accel_x_raw=$accel_x_raw -v accel_x_scale=$accel_x_scale \
    'BEGIN{accel_x_real=accel_x_raw*accel_x_scale; print accel_x_real}')
    AccelerationY=$(awk -v accel_y_raw=$accel_y_raw -v accel_y_scale=$accel_y_scale \
    'BEGIN{accel_y_real=accel_y_raw*accel_y_scale; print accel_y_real}')
    AccelerationZ=$(awk -v accel_z_raw=$accel_z_raw -v accel_z_scale=$accel_z_scale \
    'BEGIN{accel_z_real=accel_z_raw*accel_z_scale; print accel_z_real}')
    Pressure=$(awk -v pressure_raw=$pressure_raw -v pressure_scale=$pressure_scale \
    'BEGIN{pressure_real=pressure_raw*pressure_scale; print pressure_real}')
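For comparison, here is a rough sketch of what the temperature calculation might look like with bc instead of awk (my sketch, not part of the original script; bc needs a scale set to get decimal output):

    # Hypothetical bc version of the temperature calculation
    # scale=6 tells bc to keep six decimal places; the default is integer arithmetic
    Temperature=$(echo "scale=6; (${temp_raw} + ${temp_offset}) * ${temp_scale}" | bc)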

 

Pack and Send

 

Finally, we need to package the data and send it to the relay.

We put it all together into JSON format and send it via our helper function, then write the message data to the log file along with the HTTP response code.

Pack and Send

msg='[{"containerid":"'${DATACONTAINER}'",

           "values":[{"Time":"'$(date --utc +%FT%TZ)'",

                      "Humidity":'${Humidity}',

                      "Temperature":'${Temperature}',

                      "Acceleration X":'${AccelerationX}',

                      "Acceleration Y":'${AccelerationY}',

                      "Acceleration Z":'${AccelerationZ}',

                      "Pressure":'${Pressure}'}]}]'

    #echo "Sending message" ${msgno}  ${msg//$'\n'/}

    dataResponse=$(sendOMF "create" "data" "${msg}")

 

 

    echo "Sent message" ${msgno}  ${msg//$''/} "HTTP" $dataResponse >> $LOGNAME

 

 

Finishing the Loop

Finally we need to increment the message counter, wait for the specified time between messages, and go back to the start of the loop.

Finish up the loop

    ((++msgno))
    sleep ${WAITTIME}
done

 

And we're done! We've set up an AF structure and can continually write values from files in a Linux filesystem to the AF Element.
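To keep the script running after logging out of the gateway, one option (not covered above, and the script file name here is hypothetical) is to launch it with nohup so it survives the end of the SSH session:

# Hypothetical launch - assumes the script was saved as gateway_omf.sh
# nohup detaches it from the terminal; output still goes to $LOGNAME inside the script
chmod +x gateway_omf.sh
nohup ./gateway_omf.sh &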

 

The project is also on GitHub here.


Smart Office São Paulo

Posted by greche Employee Feb 5, 2018

 

The Smart Office São Paulo project was developed by the Brazil office with a focus on using the PI System to analyze the comfort conditions of the office.

 

To collect data from different areas of the office, we used six Arduinos connected to four sensors.

Humidity, luminosity, temperature, and noise readings were sent to the PI Data Archive using the UFL connector, and useful information was extracted from that data.

For example, the luminosity was used to infer when the first person arrived each day and when the last person left the day before.

 

Data on the current weather conditions is also stored in the PI Data Archive, along with the travel time between locations; the main travel points include a few bus stations and airports.

 

You can find a lengthier explanation of this solution in the video above. Feel free to contact us if any questions arise.

 

Alex Santos - asantos@osisoft.com   

Gustavo Reche - greche@osisoft.com
