
What makes Event Frames great is that they are an extremely rich bookmarking feature for your real-time data. They can be templatized, have a security model, store computed results, etc. Event frames are stored in the PIFD database, which underlies our AF database, and because of this large feature set they also take up a lot of space in that database. This means it is possible to run into scaling problems when using event frames.

Whenever you have a scaling problem, you typically want to do the following things:

1. Size appropriately

2. Monitor

3. Limit growth

4. Offload

5. Retain only what is needed

 

In this blog post, I want to talk about step 2: Monitor.

 

How to do this manually

Within PI System Explorer (PSE), on the property page for a database, you can view counts of the objects contained in that database, including event frames.

 

 

Since event frames of the same AF server are all stored in the same SQL database, a possibly better number to monitor is the global count of event frames, which is found in the counts for the PI AF Server.

 

It is also possible to get this information with the AF SDK, using the GetObjectCounts method:  https://techsupport.osisoft.com/Documentation/PI-AF-SDK/html/M_OSIsoft_AF_PISystem_GetObjectCounts.htm

 

Snippet:

// Connect to the AF server and retrieve object counts for the whole PISystem
var af = new PISystems()[afServerName];
var counts = af.GetObjectCounts(null);
Console.WriteLine($"Total number of event frames: {counts[AFIdentity.EventFrame]}");

 

Obtain the same information using an SQL Query

 

As event frames are stored in a SQL database, one could hope that there is a table that contains only event frames, whose row count is directly related to the number of event frames.

And that is indeed the case.

 

Here is the query:

SELECT
    t.NAME AS [Table Name],
    SUM(p.rows) AS [Row Count]
FROM
    sys.tables t
INNER JOIN
    sys.partitions p
    ON t.OBJECT_ID = p.OBJECT_ID
WHERE
    t.NAME = 'AFEventFrame'
    AND p.index_id IN (0, 1) -- heap or clustered index only, so rows are not counted once per index
GROUP BY
    t.NAME

 

Monitor and historize this metric

To retrieve a large amount of data out of a SQL database, the PI Interface for RDBMS https://livelibrary.osisoft.com/LiveLibrary/web/ui.xql?action=html&resource=publist_home.html&pub_category=PI-Interface-for-Relational-Database-(RDBMS-via-ODBC) is of course the way to go.

But to historize only a few values at a slow rate, we can combine the AF table lookup feature with the analysis service.

First, I would create a connection to PIFD using a linked table.

This allows us to retrieve this information via a table data reference.

 

We can then historize this data using a periodic AF analysis, to store the data into a tag.

 

This now helps you monitor Event Frame growth.

 

Of course, this number is of limited use unless you have an idea of how many event frames you actually want to keep in your PI System. That target typically comes from an initial sizing, based on your system specs, querying needs, etc., and is refined based on your actual experience. But this is a topic for another time.


Did You Know?  Rubik's Cube

Posted by chuck Employee Jul 31, 2019

From time to time I am reminded of little-known bits from OSIsoft's history.  This is one I just have to share with you.

 

A young OSIsoft support/developer engineer named Dan Knights was crowned World Champion in speed-solving Rubik's Cube on 24 August 2003.  Yes, true fact.  The World Championships were held in Toronto, Ontario that year, with over 100 challengers from over 20 countries - all competing for various titles requiring solving the colorful cube.  24-year-old Dan won with a Guinness World Record-beating time of 20.2 seconds, surpassing the previous record of 22.95 seconds.  The previous world record had been set at the World Championships in 1982 in Budapest.  Knights had only been solving the cube for about four years - he started playing with the cube after reading on the Internet about a woman who had solved the puzzle in 17 seconds.

 

The woman was Jessica Fridrich, a professor from New York who developed the widely used Fridrich method of solving the cube after first having mastered the cube in her native Czechoslovakia back in 1981.  (Ms Fridrich was 39 in 2003.)  Ms Fridrich placed second in the finals where Dan found himself in first place.

 

Dan Knights said he trained for the competition by practicing as much as he could and by using hypnotherapy to overcome stage fright.  He was amazed he and the event drew so much attention.  "I just kept thinking, 'It's only a toy.'"  Following the competition, Dan took it easy for a while.  "I'll just sit on the couch and veg."  He was thinking of using the prize money for a trip to Hawaii.

 

Shortly after winning his championship, Dan left OSIsoft for other endeavors.  

Here is Dan's own page on cubing: https://www.knightslab.org/speedcubing

Want to watch a much younger Dan solve the cube?  https://www.youtube.com/watch?v=MNKroFhyOBw

 

Reference:  Toronto Star (www.thestar.com) 25 August 2003, "American "knighted" Rubik's cube champ"

OSIsoft is pleased to announce the release of PI Web API 2018 SP1. This is a standalone installation kit available at my.osisoft.com, and is grouped together with PI Server downloads. The PI Web API is a suite of REST services that provide access to PI System data. The product is a member of the Developer Technologies suite of products and is targeted at providing cross-platform, multi-user programmatic access. Some highlights of this release are:

  • Notifications
    • Read for child NotificationContactTemplates
    • Read for NotificationPlugIn
    • Read, create, update, and delete for SecurityEntries on NotificationRules, NotificationRuleTemplates and NotificationContactTemplates
    • Create, update and delete for NotificationRules, NotificationRuleTemplates, NotificationContactTemplates and NotificationRuleSubscribers
    • GetByPath endpoints for NotificationContactTemplates, NotificationRules, and NotificationRuleTemplates
    • Search endpoint for NotificationContactTemplates
  • Stream Sets GetJoined Endpoint
    • Returns a set of recorded values (x-axis) with another set of data for any number of streams (Y, Y', Y''... axis) that are interpolated based on the points returned for the x-axis
  • Stream Updates (CTP)
    • Client code will be notified of changes in AF metadata through an Exception item in the response payload.
    • The selectedFields parameter is now honored in both registration and poll for updates.
    • Responses now include PreviousEventAction information with each data value.
    • Error messages are returned for markers that are no longer valid with every poll using that marker. Previously, the error message would be returned only once.
  • Expose 'Paths' and 'DataReference' properties on objects of attribute
  • Expose 'Paths' property on objects of element
  • The version of Web ID returned by PI Web API can be configured
    • PI Web API instances that run 2018 SP1 can now work together with older versions behind a load balancer, since they can be configured to return the same version of Web ID

 

To download, visit the OSIsoft Customer Portal. Installation kits are grouped with PI Server downloads.

With the new release of PI Web API, OSIsoft is pleased to announce some updates  to our getting started material for developers.  

To facilitate developer learning and best practice use of the latest PI Web API, we will be supporting the release with:

1) New code samples on GitHub

2) New private online course for PI Web API developers - available on PI Square in mid-June 2019

These additions will replace the previous client libraries for PI Web API, which were removed from GitHub in November 2018, with new code samples developed and approved by OSIsoft engineering.

In addition, with the release of PI Web API 2018 SP1, we will be removing the "Open API Specification" (formerly known as the "Swagger™ Specification") from PI Web API. While the Open API Specification facilitated the rapid generation of code libraries, it had not been officially tested and validated with the wide diversity of code-generation tools available to developers. In some cases, code generators could create suboptimal code.

To follow best practices, and to help developers learn how to build optimal code for PI Web API, OSIsoft decided to remove the Open API Specification from the latest PI Web API installation.  The new, approved code samples on GitHub are the preferred way to learn and to ensure optimal coding practices.

If you have downloaded, and still have access to, the client libraries or the Open API Specification, feel free to continue to use them for learning; however, please do not incorporate the sample client libraries into your production applications. For those using the Open API Specification, please recognize that generated code may not be optimized and may require further review and optimization.

If you have any questions about the new code samples or changes, please contact Frank Garriel, Technical Product Manager (fgarriel@osisoft.com)

Hard to imagine but this time next week we will be gathering in San Francisco for PI World 2019.  Seems like just yesterday we started reviewing proposals for talks and labs and deciding what to include in our agenda!  Here are a few of my favorite talks you can look forward to.  (Remember we publish most PI World talks to our website within just a day or a few days after the talk is given at PI World.)

 

Session Code US19NA-D2MM02

Title: Artificial Intelligence-enabled autonomous operations at CEMEX with Petuum Industrial AI Autopilot

Description:  CEMEX manufacturing processes are concerned with quality, production costs (energy, fuels and materials) and equipment efficiency, which often involve simultaneous tradeoffs to be made in real time.

Standardized, repeatable processes, easily replicated day after day, require understanding of process variables, forecasts and recommendations on changes that can be incorporated into workflows. CEMEX will show how companies working together created a data flow between plant control systems using PI infrastructure.

CEMEX speakers will guide the audience on how actionable prescriptions in real-time for plant subsystems were validated and implemented into the operational control systems in supervised operations.

Track Day 2: Mining, Materials, Supply Chain

Speakers Rodrigo Javier Quintero De la Garza (Cemex), Prabal Acharyya (Petuum)

 

 

Session Code US19NA-D2FB03

Title: Small effort with a big Payoff: Using PI Event Frames to drive Pack Line productivity (Cargill)

Description:  Cargill is one of the largest privately held businesses in the world, with a diverse portfolio ranging from Agriculture and Food to Industrial and Financial businesses. Working across so many varied businesses means standardization can be a challenge.

Cargill wants to share how localized efforts in the use of PI AF, Event Frames, and Data Link can lead to new insights that drive tangible action to improve pack line productivity. In many Cargill facilities, downtime is tracked for events greater than 1 minute and sites record context around these events. But what about micro stops? During daily production meetings, Cargill Fullerton consistently tracked micro stops as a top contributor to production loss. As a result, the site utilized PI Event Frames to gain more granular insight to those losses and drive the team from a reactive to a proactive culture. This talk will take you through the journey of solution implementation, quantifying the data, the findings and the value realized.

Track Day 2: Food and Beverage

Speakers Lauren Vahle (Cargill Global Edible Oil Solutions); Monica Varner-Pierson (Cargill)

 

 

Session Code US19NA-D2TT04

Title: Concurrent Programming for PI Developers

Description:  All too often projects fail because the capabilities of your programming toolstack are not being exploited to their fullest. We will show you how to break out of this vertical-stack prison by demonstrating how concurrent programming works. You will be exposed to Google's Go, which is a high-performance language and toolchain specifically geared towards concurrent programming. We will show how you can take advantage of Go in IoT projects and within your datacenter so that your projects may unlock the full potential of your existing hardware investment.

Track Day 2: Tech Talks

Speakers Christopher Sawyer (OSIsoft)

 

 

Session Code US19NA-D2MA03

Title: Integration & Transformation of Data for Analysis & Quality Control in Real Time (Kimball)

Description:  The presentation shows how we started collecting data to optimize our processes and improve our quality costs.

Track Day 2: Manufacturing

Speakers Josue Fernandez (Kimball Electronics)

Chuck's notes:  The abstract doesn't do this talk justice.  The talk is a good introduction to the challenges of discrete manufacturing (versus continuous).  PI System infrastructure was used in combination with existing and new instrumentation (IoT) to accomplish a unified set of data for consumption by personnel, as well as an "in place" upgrade of manufacturing assets using data.  The solution brings data into the PI System from more than a dozen different special-purpose machines, each with its own data tracking and status monitoring software.  The customer uses these unified data to track process and quality across their operations.

 

 

Session Code US19NA-D2PG07

Title: How OSIsoft PI supports Uniper's Maintenance Strategy Planning

Description: Uniper started its 3-year digitization journey in 2016. A major element of the program is the consistent and harmonized introduction of the OSIsoft PI System for our asset fleet. Now, two-thirds of the way through the program, Uniper leverages the central availability of machine data to optimize its maintenance CAPEX budget, which it has reduced by 16% for 2019 and beyond.

Track Day 2: Power Generation

Speakers Stephan Dr.-Ing. van Aaken (Uniper SE)

 

 

Session Code US19NA-D2P101

Title: Selecting the Right Analytics Tool (Omicron)

Description: There are several analytics tools and approaches available for working with PI data: Performance Equations, AF analytics, custom data references, PI ACE, PI DataLink and Business Intelligence (BI) tools. It can be a quandary to determine which tool to use for what. Should you focus on only one tool or use a mix? As it turns out, the answer is not as simple as basing it on the specific analytic. Other considerations should go into the decision, including scalability, reliability, maintainability, and future-proofing, to name a few.

This talk will discuss the various tools available for performing analytics on PI data and their strengths and weaknesses, including their scalability, reliability, maintainability, and future-proofing. The tools will be separated into two major classes - server-side (persistent) analytics and client-side (query-time) analytics - and the general differences between the two classes will be covered. Attendees will learn practical guidelines for selecting analytics tools.

Track Day 2: PI Geek Track

Speakers David Soll (Omicron)

 

 

Session Code UC19NA-D2FW03

Title: Using operating data to enhance operations and spark sustainable innovation (UC Davis)

Description" The UC Davis campus operates as a mini-city, with its own utilities serving about 1,200 buildings. Operating data from these systems is stored in a PI database. Two teams within Facilities Management use PI data to generate operational improvements and enhance collaborations. This enables a culture of sustainable innovation.

The Buildings Energy Engineering team implements projects to save energy in campus buildings and recoups financial savings to fund its operations. The talk will present innovative optimizations implemented in HVAC systems, and measurement & verification methods used to demonstrate financial savings.

The Utilities Data and Engineering team supports the operations and growth of the utilities systems. The team automated a process for identifying and solving energy meter issues. The team also developed a visualization that brings together the operation of the chilled water plant with the 100+ buildings connected to the chilled water loop.

Track Day 2: Water, Facilities, and Data Centers

Speakers Nicolas Fauchier-Magnan (University of California, Davis), Joseph Yonkoski (University of California, Davis)

 

 

We hope you find this teaser interesting and hope you will join us for PI World San Francisco 2019.  Remember:

  • We will live stream our morning keynote sessions on Tuesday, 9 April 2019.  You can join us live even if you can't attend in person.
  • The talks above and over a hundred other sessions from PI World will be published to our website - check back later to see these talks!

We are working hard to continuously make your experiences with OSIsoft better and the next major milestone towards that goal will happen in Q1 2019. We’ll be reconstructing the majority of the Tech Support site into a portal experience that we’re calling myOSIsoft.

 

Don’t worry, we’re not changing our phone support or our PI Square communities. We are adding better ways for you to find KB articles, look at old cases, initiate new cases, search for solutions that span from your own cases to PI Square, and more!

 

See updates and early screenshots at my.osisoft.com

 

Sign up for monthly updates on features we’re building

 

A common misconception that I encounter is that the PI AF Application Service (i.e. PI AF Server) connects to the PI Data Archive in order to service client and application requests. This short post is to demonstrate via simple diagrams the access flow of the client applications: PI System Explorer, PI Analysis Service, and PI Notifications Service.

 


This diagram shows how PI System Explorer interacts with the PI Data Archive, PI Analysis Service, and PI AF Server

 

 

 

This diagram shows how PI Analysis Service interacts with the PI Data Archive and PI AF Server

 

 

 

This diagram shows how PI Notifications Service interacts with the PI Data Archive and PI AF Server

 

This is how we do it.

 

The professional installation standard used by OSIsoft Services Engineers is available to you, our customers and partners, as KB01702. This field service technical standard has been an internal-only guide used by OSIsoft Service Engineers when installing a PI Data Archive at a customer site. We are now sharing it with a wider audience!

 

Drawing on the firsthand experience of the OSIsoft Delivery Services team, this technical standard includes considerations that are beyond the scope of the installation guide. The technical standard is now available for customers and partners who wish to deploy a PI Data Archive using OSIsoft's best practices or improve their existing deployments.

 

We plan to release more of our internal standards throughout the year, so stay tuned!

 

Do you have your own best practices that you think should be included or conflict with this standard? Post them here for consideration.

 

Like and Share if you want more of these.

 

 

 

 

Hello all,

 

In honor of World Backup Day, which recently passed this weekend on March 31, I thought it would be a good time to quickly remind the PI community to double-check their backup configurations and strategies!  As a support engineer, I've seen that many cases of hardware failure or other unintended configuration changes can be resolved much more quickly, and with much less headache for everyone, when recent backups are readily available.

 

For information on OSIsoft PI backup recommendations, check out some of the following articles:

 

PI Data Archive Backup Best Practices

Supporting 3rd Party VSS Backups of the PI Server

PI AF Backup considerations

How to perform a full, manual backup of the PI AF database

Backup Strategy for PI Vision

Microsoft: Backup and Restore of SQL Server Databases

 

Adam Fink

Are you trying to capture data in the PI System from a new data source?

Do you have picture or video data that needs to be analyzed and stored into PI?

Are you interested in how you can get your IoT data into PI? Are you interested in edge or cloud data storage?

 

Check out the "Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams" hands-on lab at PI World SF 2018:

 

This lab, Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams, was created by the OSIsoft Applied Research team based on a couple of different projects undertaken by the research team in the last year.  In the lab, we will explore what it takes to get simulated sensor data, weather data, and picture-classification data (using machine learning) into OMF and send it to the PI System, OSIsoft Cloud Services, and the Edge Data Store.  The lab is offered on Thursday (Day 3) afternoon of PI World, and during it we will do some basic programming in Python and JavaScript, with configuration in Node-RED.  If any of these topics – OMF, PI Connector Relay, OSIsoft Cloud Services, Edge Data Store – sound interesting, come learn more in this lab: Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams.

 

To learn more about the hands-on labs at PI World SF 2018, visit https://piworld.osisoft.com/us2018/sessions

If you are interested in signing up for a lab, please do so as part of registration for the conference. If you have already registered and would like to add a lab, please email piworld@osisoft.com.

Hi All,

 

Thanks for attending the webinar, where we covered the following topics on PI Vision:

  • Latest Features & Highlights of PI Vision
  • Live Demo
  • Conversion of PI ProcessBook Displays to PI Vision Displays
  • PI Vision Extensibility

 

We have made the materials used in the session available for everyone. This includes the slide deck, recording of the session and the PI System Jumpstart brochure.

 

The link to the recording can be found here. The other documents are attached to this post in PI Square.

 

If you have any feedback, please let us know in the comments below.

 

 

P.S. In the Q&A session, I said I had to double check if PI Vision accepted GIFs in displays. After I did a test, I can confirm it does accept GIFs.

 

Moderator note: I've removed the attachment "PI System Jumpstart Workshop Datasheet.pdf". Here is the link to the updated file: https://techsupport.osisoft.com/Downloads/File/1bec054c-0717-4336-be8d-35be72cb97e6


Bashing up a Gateway with OMF

Posted by rfox Feb 27, 2018

In the Perth office, we have a Dell Gateway 3003 set up for testing various Edge ideas.

After wanting to learn bash for a while, I realised that a bash OMF script for reading and sending the sensor values is a great little problem to learn with, as it requires file/text processing and sending HTTP requests that aren't simple.

The Gateway 3003 includes a few sensors, which (in typical Linux fashion) you interact with via files - for example, to read the temperature sensor you:

  • Read the raw value from /sys/bus/iio/devices/iio:device1/in_temp_raw
  • Read the offset value from /sys/bus/iio/devices/iio:device1/in_temp_offset
  • Read the scale from /sys/bus/iio/devices/iio:device1/in_temp_scale
  • Calculate the real sensor reading by: real=(raw+offset)*scale
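
For example, a minimal sketch of that read-and-calculate step for the temperature sensor (using bc for the floating-point arithmetic; the full script below uses awk for this, and your device number may differ):

dev=/sys/bus/iio/devices/iio:device1   # temperature sensor path as listed above; adjust for your unit
raw=$(cat $dev/in_temp_raw)
offset=$(cat $dev/in_temp_offset)
scale=$(cat $dev/in_temp_scale)
# real = (raw + offset) * scale
echo "($raw + $offset) * $scale" | bc -l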

In addition, as the Gateway runs Ubuntu Core, I needed to install a "classic snap" (`snap install classic --edge --devmode`, then `sudo classic`) in order to have a "standard" Linux environment (with tools like cURL, awk and bc). I had already ported the calculations to use awk, but in future I think I would use bc, as it seems simpler to use in this case. The overall structure follows the Python example, so it always creates the types, container and assets messages before sending data.

Requirements

  • A bash environment
  • A connector relay to send to (this was written using relay beta 1.2.000.0001)
  • cURL (would also work with wget, but requires modifications to the way we send headers)
  • awk (could be ported to use bc)
  • Optional: grep (if you wanted to parse a file such as /proc/meminfo to get memory related information in to PI, for example)

Configuration and Static Variables

The first section of code deals with configuration and static variables for the script. The variables are set to be readonly as they do not need to be modified during script operation.

The element "DellGW-3003-PER_bash" will be created under the root element configured in the relay. This is the Producer Token.

A child element will be created under this element (in this case called "DellGW-3003-PER-Data" - the AF Element name).

We also configure the root sensor directory, as all of our sensors are below this directory.

We also configure the request timeout - cURL will try for 30 seconds to send data to the relay, then it will stop trying (and read new data).

Finally, we configure the log file this script will write to. All output from this script will be appended to the file.

 

Configuration and Static Variables

#!/bin/bash

# AF Element Configuration
readonly DEVICENAME="DellGW-3003-PER"
readonly DEVICELOC="PerthOffice"
readonly AFELEMENT="DellGW-3003-PER-Data"
readonly DEVICETYPE="Dell Gateway 3003"
readonly ROOTSENSORDIR="/sys/bus/iio/devices/"

# Message Types Configuration
readonly ASSETSMESSAGENAME="DGW_bashPerf_asset"
readonly DATAMESSAGENAME="DGW_bashPerf_data"
readonly DATACONTAINER="${DEVICENAME}_bashperf_container"

# Relay Configuration
readonly RELAYURL='https://<Relay URL Here>:5460/ingress/messages'
readonly PRODUCERTOKEN="${DEVICENAME}_bash" # will need to actually use producer tokens once the relay supports them
readonly REQUESTTIMEOUT=30 # adjust as necessary - as it is, this will dump data if it can't send in 30s but it may be better to use a shorter timeout
readonly WAITTIME=1 # Wait time between messages (in seconds). Takes float values.
readonly LOGNAME=<your_log_file_path_here>

 

 

sendOMF Helper Function

We're going to be sending a lot of OMF messages, so writing a nice little helper function around cURL to set the headers is going to really simplify things. We can also get the HTTP return code for the request in this way, which is very useful for logging and debugging.

It's general enough that it can be used to send all three types of OMF messages (type, container, data) and for all valid actions (create, update, delete).

As it's defined, you need to pass it the action, messagetype and messageJSON in that order (separated by a space).

 

It then sends the data via cURL. -H is used to specify headers, and multiple headers can be used (like in this case). --data specifies the message payload. --insecure is used to ignore any server certificate based errors. If I had properly configured certificates, this wouldn't be required.

We use -w to make cURL print the HTTP status code, which allows for debugging. 200 is a successful request.

-s is used to make cURL silent (aside from what we write via -w). --max-time is the timeout on the request (that is set in the previous section).

Helper Function

sendOMF(){
# Helper function to send an OMF message to the server
# Requires action, messageType and messageJSON to be passed in that order
    action=$1
    messageType=$2
    messageJSON=${3//$'\n'/}    # strip any newlines from the message as they break JSON
    # using curl to send data
    curl -H 'producertoken:'$PRODUCERTOKEN'' \
         -H 'messagetype:'$messageType'' \
         -H 'action:'${action}'' \
         -H 'messageformat:json' \
         -H 'omfversion:1.0' \
         --data "${messageJSON}" \
         --insecure \
         -w '%{http_code}' \
         -s \
         --max-time ${REQUESTTIMEOUT} \
         ${RELAYURL}
}

 

Defining Types

Alright, now that we've made our general function definitions, we can start sending some stuff to the relay! First up: defining our types. We have two types here: Data and Asset.

 

The datamessage type is dynamic, and includes all of our "live data". Time is the index for this type. For each data stream, we need to define the type of the stream (e.g. string, number).

We also need to include a type for the Time, specifying that it's a date-time formatted string. We could also specify that the numbers are all a specific type of float, but the default is a float32 which is fine for this case.

See here for the valid types and formats.

The assetmessage is static as it contains metadata about the asset. It has similar options as the datamessage.

 

Finally, we call our helper function and store the output in typesResponse and write that we have completed the action to the log.

Type Definition

# Create the types message
echo "
Creating Types Message" >> $LOGNAME
msg='[{"id":"'${DATAMESSAGENAME}'",
       "type":"object",
       "classification":"dynamic",
       "properties":{"Time":{"format":"date-time",
                             "type":"string",
                             "isindex":true},
                     "Humidity":{"type":"number"},
                     "Temperature":{"type":"number"},
                     "Acceleration X":{"type":"number"},
                     "Acceleration Y":{"type":"number"},
                     "Acceleration Z":{"type":"number"},
                     "Pressure":{"type":"number"}}},
      {"id":"'${ASSETSMESSAGENAME}'",
       "type":"object",
       "classification":"static",
       "properties":{"Name":{"type":"string",
                             "isindex":true},
                     "Device Type":{"type":"string"},
                     "Location":{"type":"string"},
                     "Data Ingress Method":{"type":"string"}}}]'

typesResponse=$(sendOMF "create" "type" "${msg}")
echo "
Created Types Message with response ${typesResponse}

Creating Container Message" >> $LOGNAME

 

Defining Containers

We need to define containers so that we can send multiple messages to the relay at once. It's actually a little redundant in this code, as we don't send multiple messages. A future update could pack multiple data points together and send them all at once, or use some implementation of a message queue.
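
As a rough sketch of that batching idea (not something this script does): an OMF data message can carry several timestamped entries in its values array for a single container, so one request could deliver multiple readings. For example (illustrative values, only two of the streams shown):

msg='[{"containerid":"'${DATACONTAINER}'",
       "values":[{"Time":"2018-02-27T01:00:00Z", "Temperature":21.4, "Humidity":43.1},
                 {"Time":"2018-02-27T01:00:01Z", "Temperature":21.5, "Humidity":43.0}]}]'
# sendOMF "create" "data" "${msg}" would then deliver both readings in one request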

Container Definition

msg='[{"id":"'${DATACONTAINER}'",

       "typeid":"'${DATAMESSAGENAME}'"}]'

 

 

containerResponse=$(sendOMF "create" "container" "${msg}")

echo "

Created Continer Message with response ${containerResponse}

 

 

Creating Assets Message" >> $LOGNAME

 

Defining Assets

This is where it gets a little bit tricky.  We define two assets here: one with the AF Element we specified earlier (with static data in it), and the other of the __Link type, which is a special type.

The first link connects the AF Element defined by the Producer Token to the AF Element we want to write to with our Assets Message. The second links our container to that AF Element.

Assets Definition

msg='[{"typeid":"'${ASSETSMESSAGENAME}'",

       "values":[{"Name":"'${AFELEMENT}'",

                 "Device Type":"'${DEVICETYPE}'",

                 "Location":"'${DEVICELOC}'",

                 "Data Ingress Method":"OMF (via bash)"}]},

      {"typeid":"__Link",

       "values":[{"source":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"_ROOT"},

                  "target":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"'${AFELEMENT}'"}},

                 {"source":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"'${AFELEMENT}'"},

                 "target":{"containerid":"'${DATACONTAINER}'"}}]}]'

 

 

assetsResponse=$(sendOMF "create" "Data" "${msg}")

echo "

Created Assets Message with response ${assetsResponse}" >> $LOGNAME

 

Sending Data

Sending data to the relay consists of three steps, all of which are part of a single loop.

First, we grab the raw values from the sensor files.  Then we combine the raw values to obtain the real sensor readings.  Finally, we package the sensor readings into something we can send to the relay.

We initialise a counter for the messages we have sent; this is simply used for debugging. Then we start an endless loop:

Loop start

msgno=1
while true
do

Raw values from file

This is pretty simple: we use cat to print each file and store the output in a variable. There may be a better way to do this, but cat is a simple tool.

Reading Values

    rh_raw=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_raw)
    rh_offset=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_offset)
    rh_scale=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_scale)

    temp_raw=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_raw)
    temp_offset=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_offset)
    temp_scale=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_scale)

    accel_x_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_x_raw)
    accel_x_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_x_scale)
    accel_y_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_y_raw)
    accel_y_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_y_scale)
    accel_z_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_z_raw)
    accel_z_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_z_scale)

    pressure_raw=$(cat ${ROOTSENSORDIR}/iio:device2/in_pressure_raw)
    pressure_scale=$(cat ${ROOTSENSORDIR}/iio:device2/in_pressure_scale)

Obtaining Actual Sensor Readings

In the manual for the Gateway [pdf], we get the conversion from the above variables to real values. The general form is:

   real=(raw+offset)*scale

I used awk to process the variables but I think that bc (basic calculator) might be simpler. Awk is a programming language for text processing, and is usually included in most Unix-like operating systems.

What is important to remember is that awk does not automatically get access to the variables we set within bash.

The gist of awk is that you:

  • Call it, and pass it a list of variables
    • E.g. awk -v rh_raw=$rh_raw will call awk and set a variable within awk called rh_raw with the same value as the shell variable $rh_raw.
  • Give awk a BEGIN{} statement, which will execute what is within the curly braces immediately.
    • Within the BEGIN{} statement, we perform the calculation above, then print the real value
  • As the output of awk is assigned to a variable, whatever is printed by awk will be stored in the bash variable.

Obtaining Actual Readings

    Humidity=$(awk -v rh_raw=$rh_raw -v rh_offset=$rh_offset -v rh_scale=$rh_scale \
    'BEGIN{rh_real=(rh_raw + rh_offset) * rh_scale; print rh_real; }')
    Temperature=$(awk -v temp_raw=$temp_raw -v temp_offset=$temp_offset -v temp_scale=$temp_scale \
    'BEGIN{temp_real=(temp_raw + temp_offset) * temp_scale; print temp_real; }')
    AccelerationX=$(awk -v accel_x_raw=$accel_x_raw -v accel_x_scale=$accel_x_scale \
    'BEGIN{accel_x_real=accel_x_raw*accel_x_scale; print accel_x_real}')
    AccelerationY=$(awk -v accel_y_raw=$accel_y_raw -v accel_y_scale=$accel_y_scale \
    'BEGIN{accel_y_real=accel_y_raw*accel_y_scale; print accel_y_real}')
    AccelerationZ=$(awk -v accel_z_raw=$accel_z_raw -v accel_z_scale=$accel_z_scale \
    'BEGIN{accel_z_real=accel_z_raw*accel_z_scale; print accel_z_real}')
    Pressure=$(awk -v pressure_raw=$pressure_raw -v pressure_scale=$pressure_scale \
    'BEGIN{pressure_real=pressure_raw*pressure_scale; print pressure_real}')

 

Pack and Send

 

Finally, we need to package the data and send it to the relay.

Put it all together into a JSON format, and send it via our helper function.

Write the message data to the log file, as well as the HTTP response code.

Pack and Send

msg='[{"containerid":"'${DATACONTAINER}'",

           "values":[{"Time":"'$(date --utc +%FT%TZ)'",

                      "Humidity":'${Humidity}',

                      "Temperature":'${Temperature}',

                      "Acceleration X":'${AccelerationX}',

                      "Acceleration Y":'${AccelerationY}',

                      "Acceleration Z":'${AccelerationZ}',

                      "Pressure":'${Pressure}'}]}]'

    #echo "Sending message" ${msgno}  ${msg//$'\n'/}

    dataResponse=$(sendOMF "create" "data" "${msg}")

 

 

    echo "Sent message" ${msgno}  ${msg//$''/} "HTTP" $dataResponse >> $LOGNAME

 

 

Finishing the Loop

Finally we need to increment the message counter, wait for the specified time between messages, and go back to the start of the loop.

Finish up the loop

    ((++msgno))
    sleep ${WAITTIME}
done

 

And we're done! We've set up an AF structure and can continually write values from files in a Linux filesystem to the AF Element.

 

The project is also on GitHub here.


Smart Office São Paulo

Posted by greche Employee Feb 5, 2018

 

The Smart Office São Paulo project was developed by the Brazil Office with a focus on using the PI System to analyze the comfort conditions of the Office.

 

To collect data from different sites of the office, we used six Arduinos connected to four sensors.

Humidity, luminosity, temperature, and noise readings were sent to the PI Data Archive using the UFL connector, and useful information was extracted from that data.

For example, the luminosity was used to infer when the first person arrived each day and when the last person had left the day before.

 

Data describing the current weather conditions is stored in the PI Data Archive, along with the travel time between two locations. The main traveling points include a few bus stations and airports.

 

You can find a lengthier explanation of this solution in the video above. Feel free to contact us if any questions arise.

 

Alex Santos - asantos@osisoft.com   

Gustavo Reche - greche@osisoft.com

This Lab was part of the TechCon Hands-on Labs during the PI Users Conference in 2017 in San Francisco.  The Lab was also offered in São Paulo, Brazil during the LATAM Regional 2017 and at UC EMEA London.  The Lab manual is attached; the manual is intended for an instructor-led interactive workshop – so you may find the written content short on some of the explanations.

 


You can access the VM used for the lab via the OSIsoft Learning home page; look under the Virtual Learning Environment.

Below is an extract from the Intro section of the Lab manual:

 

At TechCon 2016, we reviewed an end-to-end use case for developing a machine learning (multivariate PCA - principal component analysis) model to predict equipment failure. This lab builds on those concepts but we now use data from a process unit operation and apply data science and machine learning methods for diagnostics.

 

Troubleshooting faulty processes and equipment – also known as FDD (fault detection and diagnostics) or anomaly detection – is a challenge.  This hands-on lab provides an end-to-end walk-through of applying data-driven techniques - specifically machine learning - to such tasks.

 

The learning objectives of this lab include:

  • Extracting data from the PI System using PI Integrator
  • Using the PI System data with R: data cleansing, feature selection, model development for a multivariate process using PCA (principal component analysis), etc.
  • Using the PCA model with Shiny  https://shiny.rstudio.com/ to create an interactive display for visualizing and exploring faults vs. normal operation; also using SVM (support vector machine) for classification and prediction of Air Handler (AHU) fault/no-fault state
  • Using Azure ML with PI System data for machine learning
  • Deploying the machine learning model for continuous execution with real-time data
  • Understanding the end-to-end data science process – data retrieval, data cleansing, shaping and preparation with meta-data context, feature selection via domain specific guidelines, applying machine learning methods, visualizing the results and operationalizing the findings

 

The application of data science and machine learning methods is well known in several fields – image and speech recognition, fraud detection, search, shopping recommendations, and others.  In manufacturing, including manufacturing operations management, and particularly in plant-floor operations with time-series sensor data, select data science/machine learning methods are highly effective.

 

Principal Component Analysis (PCA) is one such well-known and established machine learning technique for gaining insights from multivariate process operations data. PCA has several use cases – exploratory analysis, feature reconstruction, outlier detection, and others. And, other derived algorithms such as PLS (projection to latent structures), O-PLS (orthogonal …), PLS-DA (… discriminant analysis) etc. are widely used in the industry.

In a multivariate process, several parameters - sometimes just a handful but often dozens of parameters - vary simultaneously, resulting in multi-dimensional datasets that are difficult to visualize and analyze. Examples of multivariate processes are:

  • Brewery - Beer fermentation
  • Oil Refinery – Distillation column
  • Facilities – Heating, Ventilation and Air-Conditioning (HVAC) - Air Handler Unit

 

...In this lab, we use the Air Handler Unit (AHU) to illustrate an approach for analyzing such multivariate processes.  A typical HVAC system with AHU is shown below.

 

Figure: HVAC system with Air Handling Unit (AHU)

 

Sensor data available from the AHU, as part of the BMS (building management system), include:

  • Outside air temperature
  • Relative Humidity
  • Mixed air temperature
  • Supply air temperature
  • Damper position
  • Chilled water flow
  • Supply air flow
  • Supply air fan VFD (variable frequency drive) power

During the course of a day, the AHU operating conditions change continuously as the outside air temperature rises and falls, along with changing humidity and wind conditions, changing thermostat set-points, building occupancy level, and others. The BMS control system adjusts the supply air flow rate, chilled water flow rate, damper position etc. to provide the necessary heating or cooling to the rooms to ensure tenant comfort.

 

However, fault conditions such as incorrect/drifting sensor measurements (temperature, flow, pressure …), dampers stuck at an open/closed/partially open position, a stuck chilled water valve, and others can waste energy, or lead to tenant complaints when such malfunctions cause rooms to get too hot or too cold.

For troubleshooting and diagnostics, HVAC engineers need tools to answer questions such as:

  • How can I use data to detect faulty AHU operations, e.g. an air damper stuck at 100% open on a hot day in mid-July?
  • What's the AHU "state" during 100 °F+ days? In 2016? In 2015? And in 2014, before we installed the Economizer?
  • What are the AHU outlier/extreme operating states?
  • How did it get to the extreme state; what were the immediate prior operating states for that day?
  • What’s the AHU state at supply fan flow limit constraint? When did it happen?

Hello PI Geeks!

 

We are planning our next Hackathon at PI World 2018, where we expect tens of esteemed PI professionals, industry experts, and data scientists to compete. Your business challenge can be the topic of the event, which means a whole group of engineers will compete to add value to your business by solving one of your challenges.

 

We have hosted several successful hackathons over the past few years (2017 SF, 2017 London, 2016 SF, 2016 Berlin, 2015 SF). In 2016, for example, the topic of the Programming Hackathon was Innovation Around Smart Cities. Data was sponsored by the San Diego International Airport and made available to our hackers. The executives from the airport were really happy with the final results of the hackathon, mainly because:

 

  • They were inspired by the new creative apps and business models developed by our hackers, which could add a lot of value to their business.
  • They learned new ways to gain insight into the data they already had in their PI System.
  • They were able to detect where they could be more efficient in their industrial processes.

 

As we start to organize the PI World SF Hackathon 2018, we are looking for our data sponsor. This is where you come in! We are seeking a customer who may be willing to share their data with us for the event. A good data sponsor typically has the following qualifications:

 

  • Owns a PI System with AF already in place
  • Has a few data-oriented high level business challenges or aspirations
  • Has at least tens of assets and many hundreds of data streams in place
  • Has at least 1 year of historical data
  • Has sampling rate of at least several samples a minute on the majority of the tags
  • Is willing to share their data with us – We are willing to consider an anonymized/obfuscated version of the dataset as well

 

If you are interested in becoming the data sponsor for the Programming Hackathon, please don’t hesitate to contact me by e-mail (mloeff@osisoft.com).
