
We are working hard to continuously make your experiences with OSIsoft better and the next major milestone towards that goal will happen in Q1 2019. We’ll be reconstructing the majority of the Tech Support site into a portal experience that we’re calling myOSIsoft.

 

Don’t worry, we’re not changing our phone support or our PI Square communities. We are adding better ways for you to find KB articles, review old cases, initiate new cases, search for solutions spanning from your own cases to PI Square, and more!

 

See updates and early screenshots at my.osisoft.com

 

Sign up for monthly updates on features we’re building

 

A common misconception that I encounter is that the PI AF Application Service (i.e. PI AF Server) connects to the PI Data Archive in order to service client and application requests. This short post is to demonstrate via simple diagrams the access flow of the client applications: PI System Explorer, PI Analysis Service, and PI Notifications Service.

 


This diagram shows how PI System Explorer interacts with the PI Data Archive, PI Analysis Service, and PI AF Server

 

 

 

This diagram shows how PI Analysis Service interacts with the PI Data Archive and PI AF Server

 

 

 

This diagram shows how PI Notifications Service interacts with the PI Data Archive and PI AF Server

 

This is how we do it.

 

The professional installation standard used by OSIsoft Services Engineers is available to you, our customers and partners, as KB01702. This field service technical standard has been an internal-only guide used by OSIsoft Service Engineers when installing a PI Data Archive at a customer site. We are now sharing it with a wider audience!

 

Drawing on the firsthand experience of the OSIsoft Delivery Services team, this technical standard includes considerations that are beyond the scope of the installation guide. The technical standard is now available for customers and partners who wish to deploy a PI Data Archive using OSIsoft's best practices or improve their existing deployments.

 

We plan to release more of our internal standards throughout the year, so stay tuned!

 

Do you have best practices of your own that you think should be included, or that conflict with this standard? Post them here for consideration.

 

Like and Share if you want more of these.

 

 

 

 

Hello all,

 

In honor of World Backup Day, which passed this weekend on March 31, I thought it would be a good time to remind the PI community to double-check their backup configurations and strategies!  As a support engineer, I've seen that many cases involving hardware failure or other unintended configuration changes can be resolved much more quickly, and with much less headache for everyone, when recent backups are readily available.

 

For information on OSIsoft PI backup recommendations, check out some of the following articles:

 

PI Data Archive Backup Best Practices

Supporting 3rd Party VSS Backups of the PI Server

PI AF Backup considerations

How to perform a full, manual backup of the PI AF database

Backup Strategy for PI Vision

Microsoft: Backup and Restore of SQL Server Databases

 

Adam Fink

Are you trying to capture data in the PI System from a new data source?

Do you have picture or video data that needs to be analyzed and stored into PI?

Are you interested in how you can get your IoT data into PI? Are you interested in edge or cloud data storage?

 

Check out the "Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams" hands-on lab at PI World SF 2018:

 

This lab, Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams, was created by the OSIsoft Applied Research team based on a couple of projects the team undertook in the last year.  In the lab, we will explore what it takes to get simulated sensor data, weather data, and machine-learning picture classification data into OMF and send it to the PI System, OSIsoft Cloud Services, and the Edge Data Store.  The lab is offered on Thursday (Day 3) afternoon of PI World, and during it we will do some basic programming in Python and JavaScript, with the configuration done in Node-RED.  If any of these topics – OMF, PI Connector Relay, OSIsoft Cloud Services, Edge Data Store – sound interesting, come learn more in this lab: Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams. 

 

To learn more about the hands-on labs at PI World SF 2018, visit https://piworld.osisoft.com/us2018/sessions

If you are interested in signing up for a lab, please do so as part of registration for the conference. If you have already registered and would like to add a lab, please email piworld@osisoft.com.

Hi All,

 

Thanks for attending the webinar, where we covered the following topics on PI Vision:

  • Latest Features & Highlights of PI Vision
  • Live Demo
  • Conversion of PI Process Book Displays to PI Vision Displays
  • PI Vision Extensibility

 

We have made the materials used in the session available for everyone. This includes the slide deck, recording of the session and the PI System Jumpstart brochure.

 

The link to the recording can be found here. The other documents are attached to this post in PI Square.

 

If you have any feedback, please let us know in the comments below.

 

 

P.S. In the Q&A session, I said I had to double check if PI Vision accepted GIFs in displays. After I did a test, I can confirm it does accept GIFs.

rfox

Bashing up a Gateway with OMF

Posted by rfox Employee Feb 27, 2018

In the Perth office, we have a Dell Gateway 3003 set up for testing various Edge ideas.

After wanting to learn bash for a while, I realised that a bash OMF script for reading and sending the sensor values is a great little problem to learn with, as it requires file/text processing and sending HTTP requests that aren't simple.

The Gateway 3003 includes a few sensors, which (in typical Linux fashion) you interact with via files - for example, to read the temperature sensor you:

  • Read the raw value from /sys/bus/iio/iio:device1/in_temp_raw
  • Read the offset value from /sys/bus/iio/iio:device1/in_temp_offset
  • Read the scale from /sys/bus/iio/iio:device1/in_temp_scale
  • Calculate the real sensor reading by: real=(raw+offset)*scale
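
Putting those four steps together, a minimal standalone sketch might look like the following (it uses awk for the arithmetic, just as the full script below does; the sysfs paths and device numbering are taken from this Gateway and may differ on other hardware):

raw=$(cat /sys/bus/iio/devices/iio:device1/in_temp_raw)
offset=$(cat /sys/bus/iio/devices/iio:device1/in_temp_offset)
scale=$(cat /sys/bus/iio/devices/iio:device1/in_temp_scale)
# real = (raw + offset) * scale
real=$(awk -v r="$raw" -v o="$offset" -v s="$scale" 'BEGIN{print (r+o)*s}')
echo "Temperature: ${real}"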

In addition, as the Gateway runs Ubuntu Core, I needed to install a "classic snap" (`snap install classic --edge --devmode` then `sudo classic`) in order to have a "standard" Linux environment (with tools like cURL, awk and bc). I had already ported the calculations to use awk, but in future I think I would use bc as it seems simpler for this case. The overall structure follows the Python example, so it always creates the types, container and assets messages before sending data.

Requirements

  • A bash environment
  • A connector relay to send to (this was written using relay beta 1.2.000.0001)
  • cURL (would also work with wget, but requires modifications to the way we send headers)
  • awk (could be ported to use bc)
  • Optional: grep (if you wanted to parse a file such as /proc/meminfo to get memory related information in to PI, for example)

Configuration and Static Variables

The first section of code deals with configuration and static variables for the script. The variables are set to be readonly as they do not need to be modified during script operation.

The element "DellGW-3003-PER_bash" will be created under the root element configured in the relay; this name comes from the Producer Token.

A child element will be created under this element (in this case called "DellGW-3003-PER-Data" - the AF Element name).

We also configure the root sensor directory, as all of our sensors are below this directory.

We also configure the request timeout - cURL will try for 30 seconds to send data to the relay, then it will stop trying (and read new data).

Finally, we configure the log file this script will write to. All output from this script will be appended to the file.

 

Configuration and Static Variables

#!/bin/bash

# AF Element Configuration

readonly DEVICENAME="DellGW-3003-PER"

readonly DEVICELOC="PerthOffice"

readonly AFELEMENT="DellGW-3003-PER-Data"

readonly DEVICETYPE="Dell Gateway 3003"

readonly ROOTSENSORDIR="/sys/bus/iio/devices/"

# Message Types Configuration

readonly ASSETSMESSAGENAME="DGW_bashPerf_asset"

readonly DATAMESSAGENAME="DGW_bashPerf_data"

readonly DATACONTAINER="${DEVICENAME}_bashperf_container"

# Relay Configuration

readonly RELAYURL='https://<Relay URL Here>:5460/ingress/messages'

readonly PRODUCERTOKEN="${DEVICENAME}_bash" # will need to actually use producer tokens once the relay supports them

readonly REQUESTTIMEOUT=30 # adjust as necessary - as it is, this will dump data if it can't send in 30s but it may be better to use a shorter timeout

readonly WAITTIME=1 # Wait time between messages (in seconds). Takes float values.

readonly LOGNAME=<your_log_file_path_here>

 

 

sendOMF Helper Function

We're going to be sending a lot of OMF messages, so writing a nice little helper function around cURL to set the headers is going to really simplify things. We can also get the HTTP return code for the request in this way, which is very useful for logging and debugging.

It's general enough that it can be used to send all three types of OMF messages (type, container, data) and for all valid actions (create, update, delete).

As it's defined, you need to pass it the action, messagetype and messageJSON in that order (separated by a space).

 

It then sends the data via cURL. -H is used to specify headers, and multiple headers can be used (like in this case). --data specifies the message payload. --insecure is used to ignore any server certificate based errors. If I had properly configured certificates, this wouldn't be required.

We use -w to make cURL print the HTTP status code, which allows for debugging. 200 is a successful request.

-s is used to make cURL silent (aside from what we write via -w). --max-time is the timeout on the request (that is set in the previous section).

Helper Function

sendOMF(){

# Helper function to send an OMF message to the server

# Requires action, messageType and messageJSON to be passed in that order

    action=$1

    messageType=$2

    messageJSON=${3//$'\n'/}    # strip any newlines from the message as they break JSON

    # using curl to send data

        curl  -H 'producertoken:'$PRODUCERTOKEN'' \

          -H 'messagetype:'$messageType'' \

          -H 'action:'${action}'' \

          -H 'messageformat:json' \

          -H 'omfversion:1.0' \

         --data "${messageJSON}" \

         --insecure \

          -w '%{http_code}' \

          -s \

         --max-time ${REQUESTTIMEOUT} \

          ${RELAYURL}

}

 

Defining Types

Alright, now that we've made our general function definitions, we can start sending some stuff to the relay! First up: defining our types. We have two types here: Data and Asset.

 

The datamessage type is dynamic, and includes all of our "live data". Time is the index for this type. For each data stream, we need to define the type of the stream (e.g. string, number).

We also need to include a type for the Time, specifying that it's a date-time formatted string. We could also specify that the numbers are all a specific type of float, but the default is a float32 which is fine for this case.

See here for the valid types and formats.

The assetmessage is static as it contains metadata about the asset. It has similar options as the datamessage.

 

Finally, we call our helper function and store the output in typesResponse and write that we have completed the action to the log.

Type Definition

# Create the types message

echo "

Creating Types Message" >> $LOGNAME

msg='[{"id":"'${DATAMESSAGENAME}'",

       "type":"object",

       "classification":"dynamic",

       "properties":{"Time":{"format":"date-time",

                             "type":"string",

                             "isindex":true},

                     "Humidity":{"type":"number"},

                     "Temperature":{"type":"number"},

                     "Acceleration X":{"type":"number"},

                     "Acceleration Y":{"type":"number"},

                     "Acceleration Z":{"type":"number"},

                     "Pressure":{"type":"number"}}},

      {"id":"'${ASSETSMESSAGENAME}'",

       "type":"object",

       "classification":"static",

       "properties":{"Name":{"type":"string",

                             "isindex":true},

                     "Device Type":{"type":"string"},

                     "Location":{"type":"string"},

                     "Data Ingress Method":{"type":"string"}}}]'

 

 

typesResponse=$(sendOMF "create" "type" "${msg}")

echo "

Created Types Message with response ${typesResponse}

 

 

Creating Container Message" >> $LOGNAME

 

Defining Containers

We need to define containers so that we can send multiple messages to the relay at once. It's actually a little redundant in this code, as we don't send multiple messages. A future update could pack multiple data points together and send them all at once, or use some implementation of a message queue.
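
For illustration only, a packed data message could carry several rows in one values array, like the sketch below (the timestamps and readings are made up, and only two of the type's properties are shown for brevity):

msg='[{"containerid":"'${DATACONTAINER}'",
       "values":[{"Time":"2018-02-27T01:00:00Z","Temperature":21.4,"Humidity":48.2},
                 {"Time":"2018-02-27T01:00:01Z","Temperature":21.5,"Humidity":48.1}]}]'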

Container Definition

msg='[{"id":"'${DATACONTAINER}'",

       "typeid":"'${DATAMESSAGENAME}'"}]'

 

 

containerResponse=$(sendOMF "create" "container" "${msg}")

echo "

Created Container Message with response ${containerResponse}

 

 

Creating Assets Message" >> $LOGNAME

 

Defining Assets

This is where it gets a little bit tricky.  We define two assets here: one is the AF Element we specified earlier (with static data in it), and the other is of the __Link type, which is a special type.

The first link connects the AF Element defined by the Producer Token to the AF Element we want to write to using our Assets Message. The second links our container to that AF Element.

Assets Definition

msg='[{"typeid":"'${ASSETSMESSAGENAME}'",

       "values":[{"Name":"'${AFELEMENT}'",

                 "Device Type":"'${DEVICETYPE}'",

                 "Location":"'${DEVICELOC}'",

                 "Data Ingress Method":"OMF (via bash)"}]},

      {"typeid":"__Link",

       "values":[{"source":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"_ROOT"},

                  "target":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"'${AFELEMENT}'"}},

                 {"source":{"typeid":"'${ASSETSMESSAGENAME}'",

                            "index":"'${AFELEMENT}'"},

                 "target":{"containerid":"'${DATACONTAINER}'"}}]}]'

 

 

assetsResponse=$(sendOMF "create" "data" "${msg}")

echo "

Created Assets Message with response ${assetsResponse}" >> $LOGNAME

 

Sending Data

Sending data to the relay consists of three steps, all of which are part of a single loop.

First, we grab the raw values from a file.  Then we combine the raw values to obtain the real sensor readings. Finally, we package the sensor readings into something we can send to the relay.

We initialise a number for counting the messages we have sent. This is simply used for debugging. Then we start an endless loop:

Loop start

msgno=1

while true

do

Raw values from file

This is pretty simple: we use cat to print each file and store the output in a variable. There may be a better way to do this, but cat is a simple tool.

Reading Values

    rh_raw=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_raw)

    rh_offset=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_offset)

    rh_scale=$(cat ${ROOTSENSORDIR}/iio:device0/in_humidityrelative_scale)

 

    temp_raw=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_raw)

    temp_offset=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_offset)

    temp_scale=$(cat ${ROOTSENSORDIR}/iio:device0/in_temp_scale)

 

    accel_x_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_x_raw)

    accel_x_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_x_scale)

    accel_y_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_y_raw)

    accel_y_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_y_scale)

    accel_z_raw=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_z_raw)

    accel_z_scale=$(cat ${ROOTSENSORDIR}/iio:device1/in_accel_z_scale)

 

    pressure_raw=$(cat ${ROOTSENSORDIR}/iio:device2/in_pressure_raw)

    pressure_scale=$(cat ${ROOTSENSORDIR}/iio:device2/in_pressure_scale)

Obtaining Actual Sensor Readings

In the manual for the Gateway [pdf], we get the conversion from the above variables to real values. The general form is:

   real=(raw+offset)*scale

I used awk to process the variables but I think that bc (basic calculator) might be simpler. Awk is a programming language for text processing, and is usually included in most Unix-like operating systems.

What is important to remember is that awk does not automatically get access to the variables we set within bash.

The gist of awk is that you:

  • Call it, and pass it a list of variables
    • E.g. awk -v rh_raw=$rh_raw will call awk and set a variable within awk called rh_raw with the same value as the shell variable $rh_raw.
  • Give awk a BEGIN{} statement, which will execute what is within the curly braces immediately.
    • Within the BEGIN{} statement, we perform the calculation above, then print the real value
  • As the output of awk is assigned to a variable, whatever is printed by awk will be stored in the bash variable

Obtaining Actual Readings

    Humidity=$(awk -v rh_raw=$rh_raw -v rh_offset=$rh_offset -v rh_scale=$rh_scale \

    'BEGIN{rh_real=(rh_raw + rh_offset) * rh_scale; print rh_real; }')

    Temperature=$(awk -v temp_raw=$temp_raw -v temp_offset=$temp_offset -v temp_scale=$temp_scale \

    'BEGIN{temp_real=(temp_raw + temp_offset) * temp_scale; print temp_real; }')

    AccelerationX=$(awk -v accel_x_raw=$accel_x_raw -v accel_x_scale=$accel_x_scale \

    'BEGIN{accel_x_real=accel_x_raw*accel_x_scale; print accel_x_real}')

    AccelerationY=$(awk -v accel_y_raw=$accel_y_raw -v accel_y_scale=$accel_y_scale \

    'BEGIN{accel_y_real=accel_y_raw*accel_y_scale; print accel_y_real}')

    AccelerationZ=$(awk -v accel_z_raw=$accel_z_raw -v accel_z_scale=$accel_z_scale \

    'BEGIN{accel_z_real=accel_z_raw*accel_z_scale; print accel_z_real}')

    Pressure=$(awk -v pressure_raw=$pressure_raw -v pressure_scale=$pressure_scale \

    'BEGIN{pressure_real=pressure_raw*pressure_scale; print pressure_real}')

 

Pack and Send

 

Finally, we need to package the data and send it to the relay.

Put it all together into a JSON format, and send it via our helper function.

Write the message data to the log file, as well as the HTTP response code.

Pack and Send

msg='[{"containerid":"'${DATACONTAINER}'",

           "values":[{"Time":"'$(date --utc +%FT%TZ)'",

                      "Humidity":'${Humidity}',

                      "Temperature":'${Temperature}',

                      "Acceleration X":'${AccelerationX}',

                      "Acceleration Y":'${AccelerationY}',

                      "Acceleration Z":'${AccelerationZ}',

                      "Pressure":'${Pressure}'}]}]'

    #echo "Sending message" ${msgno}  ${msg//$'\n'/}

    dataResponse=$(sendOMF "create" "data" "${msg}")

 

 

    echo "Sent message" ${msgno}  ${msg//$''/} "HTTP" $dataResponse >> $LOGNAME

 

 

Finishing the Loop

Finally we need to increment the message counter, wait for the specified time between messages, and go back to the start of the loop.

Finish up the loop

((++msgno))

    sleep ${WAITTIME}

done

 

And we're done! We've set up an AF structure and can continually write values from files in a Linux filesystem to the AF Element.

 

The project is also on Github here.

greche

Smart Office São Paulo

Posted by greche Employee Feb 5, 2018

 

The Smart Office São Paulo project was developed by the Brazil Office with a focus on using the PI System to analyze the comfort conditions of the Office.

 

To collect data from different sites of the office, we used six Arduinos connected to four sensors.

Humidity, luminosity, temperature, and noise data were sent to the PI Data Archive using the UFL connector, and useful information was extracted from that data.

For example, luminosity was used to infer when the first person arrived each day and when the last person left the day before.

 

Data about the current weather conditions is stored in the PI Data Archive, along with the travel time between two locations. The main travel points include a few bus stations and airports.

 

You can find a lengthier explanation of this solution in the video above. Feel free to contact us if any questions arise.

 

Alex Santos - asantos@osisoft.com   

Gustavo Reche - greche@osisoft.com

Hello PI Geeks!

 

We are planning our next Hackathon at PI World 2018 where we expect tens of esteemed PI professionals, industry experts, and data scientists to compete. You can have your business challenge be the topic of the event which means there will be a whole group of engineers who will compete to add value to your business by solving one of your challenges.

 

We have been hosting several successful hackathons over the past few years (2017 SF, 2017 London, 2016 SF, 2016 Berlin, 2015 SF). In 2016, for example, the topic of the Programming Hackathon was Innovation Around Smart Cities. Data was sponsored by the San Diego International Airport and made available to our hackers. The executives from the airport were really happy with the final results of the hackathon mainly because:

 

  • They were inspired by the new creative apps and business models developed by our hackers, which could add a lot of value to their business.
  • They learned new ways to gain insight into the data they already had in their PI System.
  • They were able to detect where they could be more efficient in their industrial processes.

 

As we start to organize the PI World SF Hackathon 2018, we are looking for our data sponsor. This is where you come in! We are seeking a customer who may be willing to share their data with us for the event. A good data sponsor typically has the following qualifications:

 

  • Owns a PI System with AF already in place
  • Has a few data-oriented high level business challenges or aspirations
  • Has at least tens of assets and many hundreds of data streams in place
  • Has at least 1 year of historical data
  • Has a sampling rate of at least several samples per minute on the majority of the tags
  • Is willing to share their data with us – We are willing to consider an anonymized/obfuscated version of the dataset as well

 

If you are interested in becoming the new data sponsor for the Programming Hackathon, please don’t hesitate to contact me by e-mail (mloeff@osisoft.com).

Preamble

Both PI Web API and PI Vision require an SSL certificate upon installation. The default installation will create a self-signed certificate, but users will see an ugly certificate error when navigating to it. Users can click through these errors, but configuring it in this way is bad practice. If your website is configured correctly, then these errors indicate a potential man-in-the-middle attack. You want your users to alert you if they see these errors, not click through them on a daily basis.


The simplest way to get a secure certificate that provides the best user experience within your corporate network is to use your Enterprise Certificate Authority to generate it. Users will see a nice, green padlock:


In this post, I'll walk you through setting this up. I'll assume you have obtained the following:

  • A Server with PI Vision or PI Web API installed, or to be installed. This server will be referred to from now on as the PI Web Server
  • A Domain account that is a Local Administrator on the PI Web Server
  • A Domain Administrator on standby, in case changes need to be made (see later steps for details)
  • Permission from your IT department for using Active Directory Certificate Services automatic enrolment in order to obtain certificates for your PI System production environment.

Steps

  1. On the PI Web Server, log in using a domain account that is a member of the Local Administrators group.
  2. Click Start.
  3. In the Search programs and files box, type mmc.exe, and press ENTER.
  4. On the File menu, click Add/Remove Snap-in.
  5. In the list of available snap-ins, click Certificates, and then click Add.
  6. Click Computer account, and click Next.
  7. Click Local computer, and click Finish.
  8. Click OK.
  9. In the console tree, double-click Certificates (Local Computer), and then double-click Personal.
  10. Right-click Personal, point to All Tasks, and then click Request New Certificate to start the Certificate Enrollment wizard.
  11. Click Next.
  12. Click Next.
  13. Try to find the Web Server template. If you do not see it, click Cancel, go down to the Appendix 1 part of this article and follow the directions there, then come back and continue from step 9.
  14. Select the Web Server template. Click the warning icon below More information is required to enroll for this certificate. Click here to configure these settings.
  15. In the Subject name area under Type, click Common Name.
  16. In the Subject name area under Value, enter the fully qualified domain name of the server, and then click Add.
  17. In the Alternative name area under Type, click DNS.
  18. In the Alternative name area under Value, enter the fully qualified domain name of the PI Web Server, and then click Add.
  19. In the Alternative name area under Value, enter the machine name of the PI Web Server, and then click Add.
  20. Repeat the previous step for any other alternative name you would like users to use when navigating to the web application. Appropriate DNS entries will also need to be created, but this is beyond the scope of this article.
  21. Click OK.
  22. Click Enroll.
  23. Click Finish.
  24. Click Certificates then double click on your new certificate. On the Details tab, under Subject Alternative Name the names you entered above should be present.
  25. Install your software on your PI Web Server, be it PI Web API or PI Vision. If you've already installed the software, click Start, navigate to the PI Web API Admin Utility and follow the wizard to change your current self-signed certificate to your newly created certificate.
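
As an optional sanity check (complementing step 24), you can list the certificate and its Subject Alternative Names from a PowerShell prompt on the PI Web Server. This is just a sketch; the FQDN filter below is a placeholder, so substitute your own server name:

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*pivision.mydomain.com*" } |
    ForEach-Object {
        # Print the subject and the SAN extension of each matching certificate
        $_.Subject
        ($_.Extensions | Where-Object { $_.Oid.FriendlyName -eq "Subject Alternative Name" }).Format($true)
    }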

Appendix 1: If the Web Server Template is unavailable

  1. On the Certificate Authority Server (which is usually the domain controller), log in as a Domain Administrator or CA Administrator.
  2. On the CA computer, click Start, type certtmpl.msc, and then press ENTER.
  3. In the contents pane, right-click the Web Server template, and then click Properties.
  4. Click the Security tab.
  5. We need to add the computer account for the PI Vision server to this template and give it Enroll permission. Detailed directions follow below.
  6. Click Add...
  7. Click Object Types...
  8. Ensure Computers is checked.
  9. Click OK.
  10. Type the name of your PI Web Server into the object names box. In the example in the screenshot, the machine name for the server is MASTERWEB.
  11. Click Check Names and ensure that you find the account (the name should underline)
  12. Click OK
  13. Check the Enroll box under Allow with your PI Vision Server computer account selected
  14. Click OK

After following the above steps, go back to your PI Vision Server and continue the original steps.

Conclusion

Comments or corrections welcome. If you've got any questions, feel free to post them and we'll discuss!

This post contains an overview of an intern project designed to show the value of the PI System in a facility context. The project was undertaken at the Montreal OSIsoft office by two interns, Georges Khairallah and Ali Idrici, both of whom are studying Mechanical Engineering.

 

 

Background

OSIsoft Canada ULC - Montreal is located on one floor of an old building in downtown Montreal. The building management team does not have a building management system (BMS) to manage energy, so they have no data about where most of the energy is being used. In fact, only the lighting systems can be controlled by the individual offices. Other systems, such as the HVAC system, are controlled by master switches that regulate the entire facility.

OSIsoft Montreal’s office has expanded throughout the past several years with a wider working area and two distinct spaces located on the same floor.

Despite not having a BMS, OSIsoft Montreal would like to demonstrate that they can measure and manage their office's lighting energy consumption and track human presence with their real-time operational data software, the PI System, in order to lower energy usage and reduce false alarms.

 

Project Description

OSIsoft Montreal is looking to reduce energy consumption from unnecessary lighting as well as reduce the number of false alarms caused by the absence of occupancy information. Currently, there are no systems in place to monitor and track real-time usage or presence. However, the company is looking to implement a number of IIoT (Industrial Internet of Things) devices and sensors across their workspace, then send the collected data to one PI System, where real-time analyses will be generated and displayed.

 

Summarizing the two critical business issues defined by OSIsoft Montreal:

 

Scope Overview – Situation 1: Energy Efficiency

  • Critical Business Issue:

      Unable to cut down the energy consumption from unnecessary lighting.

  • Problem/Reason:

      It is tedious to walk across all work areas before leaving the office, and there is no visibility into every room's lighting conditions. Real-time visibility of every room's lighting status, with metrics (energy usage, costs) across all working areas, is needed to determine where appropriate action is required.

  • Delta (Benefit):

      - Need to cut travel time before heading out of the office by 80%
      - Need to save 25% of energy consumption from lighting

  • Target Date:

      Mid-August 2017 – Presentation to OSIsoft Montreal Team

 

Scope Overview – Situation 2: Alarms Management and Delayed Departure

  • Critical Business Issue:

      Unable to reduce the number of false alarms.

  • Problem/Reason:

      A false alarm disturbs colleagues with unnecessary agitation and, if triggered, causes a roughly 10-minute delay to deal with the alarm company and building surveyor. The absence of occupant presence information leads to inaccurate assumptions on whether to activate the alarm at the time of departure. Real-time occupant presence monitoring is needed so the alarm system is activated only when one person remains.

  • Delta (Benefit):

      Need to reduce false alerts to 1/4 of current occurrences (currently: once a quarter)

  • Target Date:

      Mid-August 2017 – Presentation to OSIsoft Montreal Team

 

While OSIsoft Montreal does currently have a workaround for these business issues, they do not have a time-saving solution, an effortless way to check the lighting status of each room, or an easy method to detect human presence in the office at all times.

 

 

Project Goals

The business goals of this project are to leverage OSIsoft’s real-time data infrastructure, the PI System, in order to:

  • Increase awareness on lighting consumption to the Montreal team
  • Highlight the zones which contribute the most to the energy bill
  • Track presence in the office at any time of the day
  • Indicate when there is only one person in the office to let them activate the alarm and easily locate where remaining lights must be shut off
  • Automate the procedure necessary before heading out of the office

 

 

How Are We Collecting the Required Data?

 

The collection of light intensity data is achieved with an Electric Imp.

The collection of human presence data uses an Omron D6T 1x8 low-resolution thermal camera temperature sensor connected to a DragonBoard 410c.

Both of these devices send the raw data as secure HTTPS requests to a REST endpoint running in a Docker container hosted in Azure.
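
Purely as an illustration (the endpoint URL, authentication, and payload fields below are hypothetical; the actual interface used by the project is not described here), such a request could look something like this:

curl -X POST "https://smartoffice-example.azurewebsites.net/ingest" \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer <token>" \
     --data '{"sensor":"thermal-01","timestamp":"2017-08-01T14:00:00Z","values":[22.1,22.3,22.0,21.9,22.4,22.2,22.1,22.0]}'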

 

Project Architecture

The following figure represents the target data flow for the generation of lighting and human presence reports:


      Figure 1 – OSIsoft Montreal System Architecture Used For This Project

 

 

Using Analytics on PI AF to Analyze Incoming Raw Data

 

o   Data coming from Electric imps:

ON/OFF light status is obtained by periodically comparing the current light intensity to a threshold value. Energy consumption can then be derived by knowing how many watts each light consumes when turned on. Other statistics, such as daily and weekly lighting energy consumption, are also computed in Asset Analytics.
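
As a rough sketch, the corresponding Asset Analytics expressions could look something like the following. The attribute names, threshold, and wattage handling are hypothetical and only illustrate the idea; LightStatus is assumed to be mapped to an output attribute so that TimeEq can total its "ON" time:

LightStatus := If 'Light Intensity' > 'On Threshold' Then "ON" Else "OFF"
EnergyTodayWh := 'Rated Power (W)' * TimeEq('LightStatus', 't', '*', "ON") / 3600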


     Figure 2 – Asset Framework display of lighting status analysis

 

o   Data coming from Thermal sensor:

Temperature profiles are retrieved four times a second, and each temperature profile contains 8 values. The following graph displays the raw data coming from the thermal sensor as someone walks past the sensor and then comes back.


    Figure 3 – Temperature profile variation with respect to time (PI Vision display)

 

Using PI analytics, an attribute named “Polarity” (see the yellow curve in Fig. 4) is computed for every temperature profile. Polarity quantifies how shifted to one side or the other the temperature values are. For simplicity’s sake, the remainder of the analysis focuses on polarity variation.


     Figure 4 – Polarity variation with respect to time (PI Vision display)

 

When the polarity switches continuously from negative to positive, it means someone walked past the sensor in one direction. Conversely, when the polarity switches from positive to negative, someone passed the sensor in the other direction. Further analytics can be performed on the Polarity attribute to extract its sign only.


     Figure 5 – Polarity sign variation with respect to time (PI Vision display)

 

A switch of the polarity sign from -1 to +1 triggers a positive increment of the counter, whereas a switch from +1 to -1 triggers a negative increment.
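
A sketch of such a counter analysis is shown below; the attribute names are hypothetical, and it assumes 'Polarity Sign' and 'People Count' are attributes written by earlier analyses:

If PrevVal('Polarity Sign', '*') = -1 and TagVal('Polarity Sign', '*') = 1 Then 'People Count' + 1
Else If PrevVal('Polarity Sign', '*') = 1 and TagVal('Polarity Sign', '*') = -1 Then 'People Count' - 1
Else NoOutput()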

 

 

 

Using PI Vision 2017 to Display the Processed Data

 

o   Dashboard for Lights:


      Figure 6 – Live Lighting status for every room in the Montreal office (PI Vision display)


      Figure 7 – Light status for every Electric Imp (PI Vision display)

 

o   Dashboard for the human counter:


      Figure 8 – Counting number of individuals in the Montreal office (PI Vision display)

In response to a User Voice request:

 

Introduction:

Event frames are really useful for finding out when certain events happened over a specific time range.

However, it's not possible to run an analysis within an event frame while the event frame is still open.

Here is a workaround for how you can actually do it. The trick is to run the analysis under a certain condition and link this analysis to an event frame.

Intuitively, we would try to build an analysis based on an event frame, but that is not possible if we want to run the calculation during the event frame. We will instead see how to run an event frame based on an analysis.

That will allow us to do some calculations during an open event frame.

 

 

Problem description:

Let's say we want to do some calculation over a time range for a periodic signal.

But the time range is not fixed, and the period of the signal can change over time.

In other words, we don't know the explicit start and end times of the calculation.

The only thing we know is that we want to run the calculation (finding the integral of the 'Sinusoid' signal in our example) while the signal is increasing, and stop the calculation when it's decreasing.

Let’s plot a graph to summarize what we want to achieve:

 

In our example, we will instead use the Sinusoid signal from the PI Interface for Random, Fractal, Station, Sinusoid Data:

 

 

As the period of the signal is not always the same, we can’t use a fixed time range.

We will have to retrieve the start and end times using another tag.

In this example we have a variable Trigger2 linked to a PI Tag called ST2. (This PI Tag will be needed if we want to create an event frame later on.)

This variable is equal to 1 when the condition of the calculation is met; otherwise it’s equal to 0.

Here we simply define Trigger2 such that:

  • If the previous value in the archive of the tag Sinusoid is lower than the current value of Sinusoid, then the value of Trigger2 is equal to 1
  • If not, Trigger2 is equal to 0
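
Expressed as an Asset Analytics expression, Trigger2 would look roughly like this (a sketch; it assumes the analysis is event-triggered on the Sinusoid tag):

Trigger2 := If PrevVal('Sinusoid', '*') < TagVal('Sinusoid', '*') Then 1 Else 0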

If we were working with a “real” signal, the only difference would be the variable Trigger2: it would have to take into account the noise present on the signal. We would have ended up with something like this:

Then comes the trick of this analysis:

 

We create a variable startdate. This variable has to be mapped to a PI Tag (Calcultime in the example) because we will need to store it in the archive. Do not forget to set the Value Type of this tag to “DateTime” in AF.

Thanks to the analysis, we will store the current timestamp of the Sinusoid tag if the trigger is equal to 0. If not, we won't send any data to the PI Data Archive.

That way, after the Trigger2 variable jumps from 0 to 1 (in other words, when the Sinusoid signal starts increasing), we won't update the timestamp in the archive.

The value stored in the archive is the beginning of our integral calculation.

Then, we can use this timestamp to run our calculation from Calcultime to now ('*').
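
As a sketch, the two variables described above could be written roughly as follows (startdate is mapped to the Calcultime tag and Tot2 to an output tag; the exact function arguments may need adjusting, and keep in mind that TagTot assumes the tag's engineering units are a rate per day):

startdate := If Trigger2 = 0 Then PrevEvent('Sinusoid', '*') Else NoOutput()
Tot2 := TagTot('Sinusoid', 'Calcultime', '*')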

Now it’s time to put everything together in a table for a better understanding:

While Trigger2 is set to 0 (when the Sinusoid value is not increasing), we store the current PI time of the Sinusoid tag. The results in the Calcultime and PI Time columns are the same.

While Trigger2 is equal to 1, we no longer send the PI time to the archive, so the value of Calcultime is the last value stored in the archive. This value corresponds to the beginning of our integral calculation Tot2.

 

Please note that the analysis scheduling is set to Event-Triggered. Indeed, we don’t know how long the time range of the integral calculation will be, so it’s important not to set it to Periodic in this case.

Now we are running the integral of the signal from Calcultime to now. We can see the result Tot2 increasing in real time:

 

 

About the precision of the calculation:

In our example, we decided to compare the previous event in the archive with the current value of the tag.

That way, the precision of the integral depends on the configuration of the Sinusoid tag (scan class, exception/compression parameters, etc.).

 

Link an event frame to this calculation:

It’s possible to link that calculation to an event frame so we can easily find out when the calculation was running.

To do that, you should create a PI Tag linked to the trigger variable (Trigger2 in my example). I actually already did that in the previous part, but it must be done if you want to use the event frame.

Then we can backfill the data, and we can see when the calculation was running over the last 2 days:

Thanks to the event frame, we can easily find when the calculation was running, using PI Coresight for instance:

 

Please note that this method is quite heavy in configuration and requires at least 2 tags per analysis.

The ability to run calculations during an open event frame will be added in the next release of Analytics (2017 R2).

This article was written using a Virtual Learning Environment (VLE) virtual machine. If you have your own PI System, great! You're welcome to follow along with what you've got, but if you'd like to access the machine used to build this article, you must have a subscription to the VLE. You can purchase a subscription for 1 month or 1 year here. If you've already got a subscription, visit the My Subscription page and start the machine titled "UC 2017 Hands-on Lab: Tips and Tricks with PI Builder and PI System Explorer". Once provisioned, connect with the credentials user: pischool\student01, password: student. You can work from the full manual for the lab by downloading it here.

 

Software Versions Used in this Article

Product | Version
PI System Explorer | 2017 - 2.9.X
PI Asset Framework | 2017 - 2.9.X

 

Introduction

When building an Element template, it can be hard to figure out how to configure PI Point attributes. If you have a consistent tag naming convention, substitution parameters can be used directly, but what do you do if you don’t have a consistent naming pattern? You could bind the attribute to its appropriate tag by hand, but this might give you headaches down the line when you try to improve the template and don't see your improvements echo to all of the elements based on the template. This article works through the best method of configuring these data references when you're in this situation. We're going to demonstrate this by adding a new Attribute named Discharge Pressure to the Compressor Template, changing the Units of Measure to psig, and setting its Data Reference to PI Point.  Then we'll add a child Attribute to this Attribute called Tag Name. If you find yourself in this situation while building an Asset Framework database in the future, follow this article to ensure you use best practices when doing so. In a nutshell:

The Bad Way - Hard-coded PI Point Data References

On the Element Template | On a Specific Element

 

The Good Way - Soft-coded PI Point Data References

On the Element Template | On a Specific Element

 

Prepare a "PI Servers" Element to Hold PI Data Archive Configuration

It's useful for any PI AF Database to have the PI Data Archive names held inside attribute values. This makes it a whole lot easier if you ever have to move to another PI Data Archive with a different name: you'll just need to change a single attribute value to migrate your entire database! You'll only need to do this step once for your database, then you'll be able to reuse it for all configuration in the future.

  1. Open PI System Explorer
  2. Press the Ctrl+1 key combination to go to the Elements view.
  3. Create an Element based on the PI Server Template and name it PI Servers. Hint: If you're doing this on your own system, you'll also have to create the PI Server template first. Head to the Library, create an Element Template called "PI Servers", and give it a single attribute of type String called "Server1".
  4. Click on the PI Servers element, then click on the Attribute tab in the Attribute Viewing pane.  Enter the server name into the Server1 Attribute.

 

Add a New Attribute on Your Element Template

  1. Press the Ctrl+3 key combination to navigate to the Library view.
  2. Select the Compressor Template under Element Templates.
  3. Click on the Attribute tab in the Attribute Viewing pane.
  4. Right click anywhere on the white space in the Viewing Pane and select New Attribute Template.
  5. Select the Attribute, press the F2 key, and type Discharge Pressure.
  6. For the Data Reference select PI Point.  Click inside the combo box for Default UOM and type in psig.
  7. Select the Discharge Pressure Attribute and set the Data Reference to PI Point.  Click the Settings button, then in the PI Point Data Reference dialog type %@\PI Servers|Server1% in the field next to the Data Server (this grabs the value of Server1 that we ended up with in the above steps), and then type %@.|Tag Name% in the field next to the Tag Name. If this syntax doesn't make much sense now, don't worry. We're going to create a sub-attribute later called "Tag Name" that this substitution syntax will grab.
  8. One last thing: it is a best practice never to use <default> units for a measurement.  So click on the Source Units combo box and select psig from the available units of measure.
  9. Click the OK Button. Note: The "quick" way to do the above steps (once you become familiar with the syntax) is to delete the text under the Settings button and type \\%@\PI Servers|Server1%\%@.|Tag Name%;UOM=psig directly.
  10. Select the Discharge Pressure Attribute.  Right click and select New Child Attribute Template.  Press the F2 key and type Tag Name.  Change the Value Type to String. Under Properties select Hidden. Normally you would mark Attributes as Hidden if they are not important for end users to see. In our case end users don’t need to see the Tag Name as long as the Discharge Pressure attribute is displaying correctly. However, it's sometimes useful to leave this "Tag Name" attribute as visible - some users like being able to see which point this attribute is bound to.
  11. Press the Ctrl+S key combination to Check In your changes.

 

Configure the Tag Name Attribute for a Specific Element

  1. Press the Ctrl+1 key combination to go to the Elements view.
  2. Select the first compressor element (name starts with K) in the Browser pane (Facility1>Area1>Rotating Equipment) then click on the Attribute tab in the Attribute Viewing pane.
  3. Select the child-Attribute Tag Name, press the F2 key, and type cdt158 for the value.  Press the F5 key to refresh.  The Discharge Pressure Attribute is now receiving data.

 

Conclusion

Once this is configured, you would use PI Builder to manually bind the tag names to your desired tags. Following the above procedure greatly enhances the ease of management of your AF Database, and is considered best practice at the time of publishing this article. If you run into any issues when working through this or have any questions, you're welcome to post a comment!

 

Further Resources

  • If you're interested in learning PI AF, check out the online course
  • For a great article on tips and tricks with PI AF, check out this post
  • The full manual used to resource this post can be downloaded here

 

This article was written and adapted from materials originally developed by Ales Soudek and Nick Pabo.

As of the time of writing this, there are only a few ways I can think of to make sure that a connector is running and healthy.

 

  1. Checking the tags that it is writing to and making sure they are updating.
  2. Checking the administration page to ensure that all lights are green.

 

The purpose of this post is to show you how to monitor the connectors at your organization using Powershell and the AF SDK.

 

When you first saw this post, you might have thought that the only way to check whether your connector is working is by checking the administration page, but that is only partially true! The items on the administration page can be retrieved by making a REST call to the connector webpage. So, all of the information that you can view on the connector administration site can be extracted and written to PI tags, offering an easy solution for monitoring the health of your connectors.

 

I have included a script, which is attached to this post. If you'd like to skip straight to the part where I talk about the Powershell script and what it can do, jump ahead to The Connector Monitoring Script section below. The attachments can be found at the bottom of this post.

 


 

 

What types of information can we pull from the connector webpage?

First, let's cover where this information is stored. Pull up the administration page for your favorite connector. I'll be using the PI Connector for OPC UA. In the screenshot you see below, each of these fields can be queried by making a REST call to the connector. So, let's work on finding how to query for the status of the Connector, Data sources, and Servers configured to receive data from the connector.

 

I am using Chrome for this, but you can also perform similar actions in other web browsers. When on the Connector Administration page, hit F12. You should see the Chrome Developer Tools window pop up. From there, let's browse to the Network tab. The Network tab will allow us to see the requests being made as well as the responses being provided by the connector. Let's take a look at the Data Source Status (this shows up as Datasource%20Status). Expanding the object allows us to see the properties underneath it. For my data source, we can see that it is named OPCServer1, it has a status of Connected, and a message stating that I have No Data Filter set.

 

We can also see the URL that was used to retrieve this information from the Headers section.

 

Information on the Connector State, PI Data Archive, and AF connections can be found in a similar manner under the Network tab by looking at ConnectorState, PI%20Data%20Archive%20Connections, and PI%20AF%20Connections respectively.
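
For reference, based on the URL format shown later in this post (the host name is simply the example server used throughout), the instrumentation endpoints for these items look like:

https://nlewis-iis:5460/admin/api/instrumentation/ConnectorState
https://nlewis-iis:5460/admin/api/instrumentation/Datasource%20Status
https://nlewis-iis:5460/admin/api/instrumentation/PI%20Data%20Archive%20Connections
https://nlewis-iis:5460/admin/api/instrumentation/PI%20AF%20Connections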

 

How can we obtain this information using Powershell?

Now that we know what types of information we can get, let's go through how Powershell can query for and store this information.

 

Because we want this script to run periodically, we will need to store the credentials on the machine. But we don't want to store credentials in plain text on the machine running the script, so we will encrypt them. Let's set the username variable first:

#username for logging into the PI Connector Administration Page.

$user = "domain\user"

 

Next, let's store the password and encrypt it. We will then set the password variable to the location of the encrypted file that contains the password:

#Convert password for user account to a secure string in a text file. It can only be decrypted by the user account it was encrypted with.

"password" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "file location for the stored password file"

$pass = "file location for the stored password file"

 

Finally, we will decrypt the credentials when running the script using the command below. These credentials can only be decrypted by the user that encrypted them, so make sure to encrypt the credentials with the same user that will be running this script.

#Credentials that will be used to login to the PI Connector Administration Page.

$cred = New-Object -TypeName System.Management.Automation.PSCredential `

-ArgumentList $user, (Get-Content $pass | ConvertTo-SecureString)

 

The connectors also use self-signed certificates, so you may get an error when attempting the GET request. To get around this, we will include the following code to ignore certificate errors:

#Ignore invalid certificate errors when connecting to the PI Connector Administration Page. This is because the connector uses a self-signed certificate, but Powershell wants to use a validated certificate.

Add-Type @"

    using System;

    using System.Net;

    using System.Net.Security;

    using System.Security.Cryptography.X509Certificates;

    public class ServerCertificateValidationCallback

    {

        public static void Ignore()

        {

            ServicePointManager.ServerCertificateValidationCallback +=

                delegate

                (

                    Object obj,

                    X509Certificate certificate,

                    X509Chain chain,

                    SslPolicyErrors errors

                )

                {

                    return true;

                };

        }

    }

"@

 

 

[ServerCertificateValidationCallback]::Ignore();

 

Now that all of that is out of the way, let's get to the part where we pull the information from the webpage. For this, we will be using the Invoke-WebRequest function. If we wanted to query for the data source status shown above, our call would look like this:

$DataSourceStatusResponse = Invoke-WebRequest -Method GET  "https://nlewis-iis:5460/admin/api/instrumentation/Datasource%20Status" -Credential $cred | ConvertFrom-Json

We are using the GET method, which we can see as the Request Method in the Headers. The login to the connector webpage uses basic authentication, so we are passing it the credentials that we have stored in the variable $cred. Finally, we pass the result to the ConvertFrom-Json function in order to store the information retrieved from Invoke-WebRequest in a Powershell object under the variable $DataSourceStatusResponse.

 

For our data source status, we can then take a look at the variable to see what information we now have. Under the variable we can see our data source OPCServer1; if we had additional data sources, they would show up here as well.

 

If we browse further into the variable, we can then find the Message and Status fields we were looking for:

 

If we wanted to store the status in a variable ($OPCServer1_Status), we could then achieve this as follows:

$OPCServer1_Status = $DataSourceStatusResponse.OPCServer1.Object.Status

 

Now we just need to retrieve the other information we want in a similar fashion and we are ready to write the values to PI tags!

 

 

Writing the values to PI

For this, we will be using the AF SDK in Powershell. There are also native Powershell functions for the PI System, which come with PI System Management Tools, that could be used instead of the AF SDK.

 

There are a few steps in order to do this.

 

1. We need to load the AF SDK assemblies.

# Load AFSDK

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDKCommon") | Out-Null

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null

 

2. We need an object that can store the point attributes for the tags we will be creating. The script I created will automatically create the PI tags if it cannot find them.

#Create an object with point attributes for the points you are creating

$myattributes = New-Object 'System.Collections.Generic.Dictionary[[String], [Object]]'

 

3. Store the tag attributes in the tag attribute object. For myself, I am making these string tags with a point source of CM.

<#Add the attributes to your point. I am making the points that will be created string tags, which corresponds to a value of 105.

Different point types can be found here: https://techsupport.osisoft.com/Documentation/PI-AF-SDK/html/T_OSIsoft_AF_PI_PIPointType.htm

#>

$myattributes.Add("pointtype", 105)

$myattributes.Add("pointsource","CM")

 

4. Next, we will need to initialize the PI Data Archive, AF Server, and buffering options, and instantiate the new values we will be using. We are using the default PI Data Archive and AF Server for this.

# Create AF Object

$PISystems=New-object 'OSIsoft.AF.PISystems'

$PISystem=$PISystems.DefaultPISystem

$myAFDB=$PISystem.Databases.DefaultDatabase

 

# Create PI Object

$PIDataArchives=New-object 'OSIsoft.AF.PI.PIServers'

$PIDataArchive=$PIDataArchives.DefaultPIServer

 

# Create AF UpdateOption

$AFUpdateOption = New-Object 'OSISoft.AF.Data.AFUpdateOption'

 

#Set AF Update Option to Replace

$AFUpdateOption.value__ = "0"

 

# Create AF BufferOption

$AFBufferOption = New-Object 'OSISoft.AF.Data.AFBufferOption'

 

#Set AF Buffer Option to Buffer if Possible

$AFBufferOption.value__ = "1"

 

# Instantiate a new 'AFValue' object to persist...

$newValueX = New-Object 'OSIsoft.AF.Asset.AFValue'

 

# Apply timestamp

$newValueX.Timestamp = New-object 'OSIsoft.AF.Time.AFTime'(Get-Date)

 

With that all out of the way, we just need to create our PI tag, assign it a value and timestamp, and send it on its way.

 

5. Assign a name to the PI tag.

# Assign Tag Name to the PI Point. Here I denote that this is for the data source OPCServer1 and I am retrieving the status.

$tagNameX = "OPCUAConnector.DataSource.OPCServer1.Status"

 

6. Find the tag, and create it if it does not exist. This finds the tag based on the PI Data Archive we specified earlier, as well as the tag name.

#initiate the PI Point

$piPointX = $null

 

#Find the PI Point, and create it if it does not exist

 

 

if([OSIsoft.AF.PI.PIPoint]::TryFindPIPoint($PIDataArchive,$tagNameX,[ref]$piPointX) -eq $false)
{
     $piPointX = $piDataArchive.CreatePIPoint($tagNameX, $myattributes)
}

#Set the PI tag for $newValueX to $piPointX
$newValueX.pipoint = $piPointX

 

7. Lastly, we can apply the value to $newValueX and write this value to PI! We set the value equal to the status we retrieved earlier from the data source response. We then use $newValueX.PIPoint.UpdateValue in order to write the new value to the tag.

    $newValueX.Value = $DataSourceStatusResponse.OPCServer1.Object.Status

    $newValueX.PIPoint.UpdateValue($newValueX,$AFUpdateOption)

 

And that's it! That is all the code required in order to pull the information from the connector page and write it to a PI tag.

 

 

The Connector Monitoring Script

If you were wondering while reading this whether or not someone has already built a script that pulls in some of the relevant information, then you're in the right place. While writing this, I also developed a script that will write the data from the connector pages you supply it with to PI tags. Let's go over how it works.

 

How the script works:

  1. You supply it credentials based off of the method we discussed earlier, where we encrypt the password.
  2. You provide it a list of connectors. In this list there is the name of the connector (which will be used in the tag naming convention), as well as the URL to the admin page. Make sure to exclude the /ui at the end of the URL. These are then stored in an object called $Connectors.

#Create an object called Connectors. This will hold the different connectors you want to gather information from.

$Connectors = @{}

#When adding a connector, give it a name, followed by the link to the admin page like below.

$Connectors.add("OPCUA_nlewis-iis","https://nlewis-iis:5460/admin")

$Connectors.add("PING_nlewis-iis","https://nlewis-iis:8001/admin")

 

    3. A connector object is then generated and the items in the list are added to this object.

    4. We query the web pages and then write the values to PI for each of the objects in the connector object, as sketched below.
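
A rough sketch of steps 3 and 4 follows; it assumes the $cred variable from earlier and stands in for the attached script's actual structure, with the tag creation and write logic from the previous section factored out:

foreach ($connector in $Connectors.GetEnumerator()) {
    $name    = $connector.Key    # e.g. OPCUA_nlewis-iis
    $baseUrl = $connector.Value  # e.g. https://nlewis-iis:5460/admin

    # Query the connector state (repeat for Datasource%20Status, PI%20Data%20Archive%20Connections, etc.)
    $state = Invoke-WebRequest -Method GET "$baseUrl/api/instrumentation/ConnectorState" -Credential $cred | ConvertFrom-Json

    # Build tag names such as "$name.DataSource.<Server>.Status" and write the values to PI
    # using the AF SDK calls shown in the previous section.
}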

 

What the script does

  • Tags are based on the naming convention of: <Provided Connector Name>.<Type>.<Server>.Status, where the type can be DataSource, AFServer, or PIServer
    • For the status of the AF Server (AF Server is named nlewis-af1) for a connector I named OPCUA_nlewis-iis, the tag would be named OPCUA_nlewis-iis.AFServer.nlewis-af1.Status.
  • If the script cannot connect to the admin page, it writes an error message to the connector state tag.
  • If the service is running but the connector is stopped via the webpage, the script writes to all tags for that connector that the connector is stopped.

 

There are two parts to the script. The first part generates the encrypted password file. The second part you run to pull the information, create the tags if they do not exist, and write to them.

 

Please edit the code to include the connectors used in your environment. The scripts can be found attached to this post.

 


I am thrilled to announce the posting of the PI System Connectivity Toolkit for LabVIEW on the National Instruments Reference Design Portal.

 

National Instruments equips engineers and scientists with systems that accelerate productivity, innovation, and discovery.

 

National Instruments measurement platforms often connect to the kinds of sensors, and perform the kinds of analyses, that aren’t usually found in the industrial control systems traditionally connected to the PI System.  Examples include vibration analysis, motor current signature analysis, thermography, acoustics, electro-magnetic induction, and the analysis of other equipment condition and performance indicators.

 

By enabling bi-directional communication between LabVIEW and the PI System, maintenance and reliability personnel can gain deeper insights not only into the condition of equipment, through analysis of the signals in LabVIEW, but also into the process conditions that affect the equipment, and vice versa.

 

LabVIEW edge analytics are enhanced by process data from other monitoring and control systems via the PI System.  The PI System real-time data infrastructure furthermore makes the LabVIEW data available enterprise-wide, for better insights and decision-making across an organization, and so that data can be integrated with other systems for predictive analytics, augmented reality, and computerized maintenance management for automating maintenance processes.

 

To obtain installation instructions, LabVIEW Virtual Instruments, and sample code files, see the following posting on the National Instruments Reference Design Portal:

http://forums.ni.com/t5/Reference-Design-Portal/OSIsoft-PI-System-connectivity-toolkit-for-LabVIEW/ta-p/3568074

 

The write-to-PI function requires a license for the PI-UFL Connector.  Please contact your Account Manager or Partner Manager.

 

The read-from-PI function requires a PI Web API license, which can be downloaded and used for free for development purposes from the OSIsoft Tech Support website.

 

For more information on LabVIEW, please see http://www.ni.com/labview.

 

Please direct any questions to NationalInstruments@osisoft.com.
