
In response to a UserVoice request:

 

Introduction:

Event frames are very useful for capturing when events of interest happen over a specific time range.

However, it's not possible to run an analysis within an event frame while that event frame is still open.

Here is a workaround. The trick is to run the analysis under a certain condition and to link that analysis to an event frame.

Intuitively, we would try to build an analysis based on an event frame, but that is not possible if we want to run a calculation during the event frame. Instead, we will see how to generate an event frame based on an analysis.

That allows us to run calculations while an event frame is open.

 

 

Problem description:

Let's say we want to do some calculation over a time range for a periodic signal.

However, the time range is not fixed, and the period of the signal can change over time.

In other words, we don't know the explicit start and end times of the calculation.

The only thing we know is that we want to run the calculation (finding the integral of the Sinusoid signal in our example) while the signal is increasing and stop the calculation when it is decreasing.

Let’s plot a graph to summarize what we want to achieve:

 

In our example, we will instead use the Sinusoid signal from the PI Interface for Random, Fractal, Station, Sinusoid Data:

 

 

As the period of the signal is not always the same, we can't use a fixed time range.

We will have to retrieve the start and the end times using another tag.

 

In this example we have a variable Trigger2 linked to a PI tag called ST2. (This PI tag will be needed later if we want to create an event frame.)

This variable is equal to 1 when the condition of the calculation is met, otherwise it’s equal to 0.

Here we simply define Trigger2 as follows:

  • If the previous archived value of the Sinusoid tag is lower than the current value of Sinusoid, then Trigger2 is equal to 1

  • If not, Trigger2 is equal to 0

If we were working with a “real” signal, the only difference would be the definition of Trigger2: it would have to take into account the noise present on the signal. We would end up with something like this:

Then comes the trick of this analysis:

 

We create a variable startdate. This variable has to be mapped to a PI tag (Calcultime in the example) because we will need to store it in the archive. Do not forget to set the Value Type of this attribute to “DateTime” in AF.

Thanks to the analysis, we store the current timestamp of the Sinusoid tag whenever the trigger is equal to 0. If not, we don't send any data to the PI Data Archive.

That way, after the Trigger2 variable jumps from 0 to 1 (in other words, when the Sinusoid signal starts increasing), we no longer update the timestamp in the archive.

The value stored in the archive is therefore the beginning of our integral calculation.

Then, we can use this timestamp to run our calculation from Calcultime to now ('*').

Now it's time to put everything together in a table for a better understanding:

While Trigger2 is set to 0 (when the Sinusoid value is not increasing), we store the current PI time of the Sinusoid tag. The values in the Calcultime and PI Time columns are the same.

While Trigger2 is equal to 1, we no longer send the PI time to the archive, so the value of Calcultime is the last value stored in the archive. This value corresponds to the beginning of our integral calculation Tot2.

 

Please note that the analysis scheduling is set to Event-Triggered. Indeed, we don't know how long the time range of the integral calculation will be, so it's important not to set it to Periodic in this case.

Now we are running the integral of the signal from Calcultime to now. We can see the result Tot2 increasing in real time:

 

 

About the precision of the calculation:

In our example, we decided to compare the previous event in the archive with the current value of the tag.

That way, the precision of the integral depends on the configuration of the Sinusoid tag (scan class, exception/compression parameters, etc.).

 

Link an event frame to this calculation:

It's possible to link that calculation to an event frame so we can easily find when the calculation was or wasn't running.

To do that, you should create a PI tag linked to the Trigger2 variable (ST2 in my example). I actually already did that in the previous part, but this step is required if you want to use the event frame.

Then we can backfill the data and see when the calculation was running over the last 2 days:

Thanks to the event frame, we can easily find when the calculation was running, using PI Coresight for instance:

 

Please note that this method is quite heavy in configuration and requires at least 2 tags per analysis.

The possibility to run calculations during an open event frame will be added in the next release of Analytics (2017 R2).

This article was written using a Virtual Learning Lab (VLE) virtual machine. If you have your own PI System, great! You're welcome to follow along with what you've got, but if you'd like to access the machine used to build this article, you must have a subscription to the VLE. You can purchase a subscription for 1 month or 1 year here. If you've already got a subscription, visit the My Subscription page and start the machine titled "UC 2017 Hands-on Lab: Tips and Tricks with PI Builder and PI System Explorer". Once provisioned, connect with the credentials: user: pischool\student01 password: student. You can work from the full manual for the lab by downloading it here.

 

Software Versions Used in this Article

Product | Version
PI System Explorer | 2017 - 2.9.X
PI Asset Framework | 2017 - 2.9.X

 

Introduction

When building an Element template, it can be hard to figure out how to configure PI Point attributes. If you have a consistent tag naming convention, substitution parameters can be used directly, but what do you do if you don't have a consistent naming pattern? You could bind the attribute to its appropriate tag by hand, but this might give you headaches down the line when you try to improve the template and don't see your improvements echo to all of the elements based on the template. This article works through the best method of configuring these data references when you're in this situation. We're going to demonstrate this by adding a new Attribute named Discharge Pressure to the Compressor Template, changing the Units of Measure to psig, and setting its Data Reference to PI Point. Then we'll add a child Attribute to this Attribute called Tag Name. If you find yourself in this situation while building an Asset Framework database in the future, follow this article to ensure you use best practices when doing so. In a nutshell:

The Bad Way - Hard-coded PI Point Data References

On the Element Template: badTemplate.png
On a Specific Element: badElement.png

 

The Good Way - Soft-coded PI Point Data References

On the Element Template: goodTemplate.png
On a Specific Element: goodElement.png

 

Prepare a "PI Servers" Element to Hold PI Data Archive Configuration

It's useful for any PI AF database to have the PI Data Archive names held inside attribute values. This makes it a whole lot easier if you ever have to move to another PI Data Archive with a different name: you'll just need to change a single attribute value to migrate your entire database! You'll only need to do this step once for your database, then you'll be able to reuse it for all configuration in the future.

  1. Open PI System Explorer
  2. Press the Ctrl+1 key combination to go to the Elements view.
  3. Create an Element based on the PI Server Template and name it PI Servers. Hint: If you're doing this on your own system, you'll also have to create the PI Server template first. Head to the Library, create an Element Template called "PI Servers", and give it a single attribute of string type called "Server1".
    1.png
  4. Click on the PI Servers element, then click on the Attribute tab in the Attribute Viewing pane.  Enter the server name into the Server1 Attribute.
    2.png

 

Add a New Attribute on Your Element Template

  1. Press the Ctrl+3 key combination to navigate to the Library view.
  2. Select the Compressor Template under Element Templates.
  3. Click on the Attribute tab in the Attribute Viewing pane.
  4. Right click anywhere on the white space in the Viewing Pane and select New Attribute Template.
  5. Select the Attribute, press the F2 key, and type Discharge Pressure.
  6. For the Data Reference select PI Point.  Click inside the combo box for Default UOM and type in psig.
    3.jpg
  7. Select the Discharge Pressure Attribute and set the Data Reference to PI Point.  Click the Settings button, then in the PI Point Data Reference dialog type %@\PI Servers|Server1% in the field next to the Data Server (this grabs the value of Server1 that we ended up with in the above steps), and then type %@.|Tag Name% in the field next to the Tag Name. If this syntax doesn't make much sense now, don't worry. We're going to create a sub-attribute later called "Tag Name" that this substitution syntax will grab.
  8. One last thing: it is a best practice never to use <default> units for a measurement.  So click on the Source Units combo box and select psig from the available units of measure.
    7.png
  9. Click the OK Button. Note: The "quick" way to do the above steps (once you become familiar with the syntax) is to delete the text under the Settings button and type \\%@\PI Servers|Server1%\%@.|Tag Name%;UOM=psig directly.
  10. Select the Discharge Pressure Attribute.  Right click and select New Child Attribute Template.  Press the F2 key and type Tag Name.  Change the Value Type to String. Under Properties select Hidden. Normally you would mark Attributes as Hidden if they are not important for end users to see. In our case end users don’t need to see the Tag Name as long as the Discharge Pressure attribute is displaying correctly. However, it's sometimes useful to leave this "Tag Name" attribute as visible - some users like being able to see which point this attribute is bound to.
    4.png
  11. Press the Ctrl+S key combination to Check In your changes.

 

Configure the Tag Name Attribute for a Specific Element

  1. Press the Ctrl+1 key combination to go to the Elements view.
  2. Select the first compressor element (name starts with K) in the Browser pane (Facility1>Area1>Rotating Equipment) then click on the Attribute tab in the Attribute Viewing pane.
    9.jpg
  3. Select the child-Attribute Tag Name, press the F2 key, and type cdt158 for the value.  Press the F5 key to refresh.  The Discharge Pressure Attribute is now receiving data.

 

Conclusion

Once this is configured, you would use PI Builder to manually bind the tag names to your desired tags. Following the above procedure greatly enhances the ease of management of your AF database, and is considered best practice at the time of publishing of this article. If you run into any issues when working through this or have any questions, you're welcome to post a comment!
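
If you'd rather script that last binding step than type it by hand in PI Builder, here's a minimal, hedged AF SDK sketch in PowerShell (the server, database, and element names below are placeholders, not values from the lab):

# Load the AF SDK (assumes the PI AF Client is installed on this machine)
[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null

# Connect to a placeholder AF server and database
$piSystem = (New-Object 'OSIsoft.AF.PISystems')["MyAFServer"]
$afDatabase = $piSystem.Databases["MyDatabase"]

# Navigate to one compressor element (hypothetical name) and its 'Tag Name' child attribute
$element = $afDatabase.Elements["Facility1"].Elements["Area1"].Elements["Rotating Equipment"].Elements["K-0001"]
$tagNameAttribute = $element.Attributes["Discharge Pressure"].Attributes["Tag Name"]

# Bind the attribute to the desired PI Point name and check in the change
$tagNameAttribute.SetValue((New-Object 'OSIsoft.AF.Asset.AFValue'("cdt158")))
$afDatabase.CheckIn()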

 

Further Resources

  • If you're interested in learning PI AF, check out the online course
  • For a great article on tips and tricks with PI AF, check out this post
  • The full manual used to resource this post can be downloaded here

 

This article was written and adapted from materials originally developed by Ales Soudek and Nick Pabo.

As of the time of writing this, there are only a few ways I can think of to make sure that a connector is running and healthy.

 

  1. Checking the tags that it is writing to and making sure they are updating.
  2. Checking the administration page to ensure that all lights are green.

 

The purpose of this post is to show you how you can monitor the connectors at your organization using PowerShell and the AF SDK.

 

When you first saw this post, you might have been thinking that the only way to check whether your connector was working was by checking the administration page, but that is only partially true! The items on the administration page can be retrieved by making a REST call to the connector web page. So, all of the information that you can view on the connector administration site can be extracted and written to PI tags, offering an easy solution for monitoring the health of your connectors.

 

I have included a script, which is attached to this webpage. If you'd like to skip straight to the part where I talk about the PowerShell script and what it can do, skip ahead to The Connector Monitoring Script section below. The attachments can be found at the bottom of this post.

 


What types of information can we pull from the connector webpage?

First, let's cover where this information is stored. Pull up the administration page for your favorite connector; I'll be using the PI Connector for OPC UA. Each of the fields in the screenshot below can be queried by making a REST call to the connector. So, let's work on finding how to query for the status of the connector, the data sources, and the servers configured to receive data from the connector.

 

I am using Chrome for this, but you can also perform similar actions in other web browsers. When on the Connector Administration page, hit F12. You should see the Chrome Developer Tools window pop up. From there, let's browse to the Network tab. The Network tab will allow us to see the requests being made as well as the responses being provided by the connector. Let's take a look at the Data Source Status (this shows up as Datasource%20Status). Expanding the object allows us to see the properties underneath it. For my data source, we can see that it is named OPCServer1, it has a status of Connected, and a message stating that I have No Data Filter set.

 

We can also see the URL that was used to retrieve this information from the Headers section.

 

Information on the Connector State, PI Data Archive, and AF connections can be found in a similar manner under the Network tab by looking at ConnectorState, PI%20Data%20Archive%20Connections, and PI%20AF%20Connections respectively.

 

How can we obtain this information using Powershell?

Now that we know what types of information we can get, let's go through how Powershell can query for and store this information.

 

Because we want this script to run periodically, we will need to store the credentials on the machine. But we don't want to store credentials in plain text on the machine running the script, so we will encrypt them. Let's set the username variable first:

#username for logging into the PI Connector Administration Page.

$user = "domain\user"

 

Next, let's store the password and encrypt it. We will then set the password to the encrypted file that contains the password:

#Convert password for user account to a secure string in a text file. It can only be decrypted by the user account it was encrypted with.

"password" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "file location for the stored password file"

$pass = "file location for the stored password file"

 

Finally, we will decrypt the credentials when running the script using the command below. These credentials can only be decrypted by the user that encrypted them, so make sure to encrypt the credentials with the same user that will be running this script.

#Credentials that will be used to login to the PI Connector Administration Page.

$cred = New-Object -TypeName System.Management.Automation.PSCredential `
-ArgumentList $user, (Get-Content $pass | ConvertTo-SecureString)

 

The connectors also use self-signed certificates, so you may get an error when attempting the GET request. To get around this, we will include the following code to ignore the certificate errors:

#Ignore invalid certificate errors when connecting to the PI Connector Administration Page. This is because the connector uses a self-signed certificate, but Powershell wants to use a validated certificate.

Add-Type @"
    using System;
    using System.Net;
    using System.Net.Security;
    using System.Security.Cryptography.X509Certificates;
    public class ServerCertificateValidationCallback
    {
        public static void Ignore()
        {
            ServicePointManager.ServerCertificateValidationCallback +=
                delegate
                (
                    Object obj,
                    X509Certificate certificate,
                    X509Chain chain,
                    SslPolicyErrors errors
                )
                {
                    return true;
                };
        }
    }
"@

 

 

[ServerCertificateValidationCallback]::Ignore();

 

Now that all of that is out of the way, let's get to the part where we pull the information from the webpage. For this, we will be using the Invoke-WebRequest function. If we wanted to query for the data source status shown above, our function would look like this:

$DataSourceStatusResponse = Invoke-WebRequest -Method GET  "https://nlewis-iis:5460/admin/api/instrumentation/Datasource%20Status" -Credential $cred | ConvertFrom-Json

We are using the GET method, which we can see as the Request Method in the Headers. The login to the connector webpage uses basic authentication, so we are passing it the credentials that we have stored in the variable $cred. Finally, we pass this to the ConvertFrom-Json cmdlet in order to store the information retrieved from Invoke-WebRequest in a PowerShell object under the variable $DataSourceStatusResponse.

 

For our data source status, we can then take a look at the variable to see what information we now have. Under the variable we can see our data source OPCServer1. If we had additional data sources, they would show up here.

 

If we browse further into the variable, we can then find the Message and Status fields we were looking for:

 

If we wanted to store the status in a variable ($OPCServer1_Status), we could then achieve this as follows:

$OPCServer1_Status = $DataSourceStatusResponse.OPCServer1.Object.Status

 

Now we just need to retrieve the other information we want in a similar fashion, and we are ready to write the values to PI tags!
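
For example, the connector state could be pulled the same way (a sketch - the exact instrumentation path should match what you see under your own Network tab):

# Same pattern as before, assuming the ConnectorState endpoint follows the same instrumentation path
$ConnectorStateResponse = Invoke-WebRequest -Method GET "https://nlewis-iis:5460/admin/api/instrumentation/ConnectorState" -Credential $cred | ConvertFrom-Json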

 

 

Writing the values to PI

For this, we will be using the AF SDK in PowerShell. There are also native PowerShell functions for the PI System that come with PI System Management Tools, which could be used instead of the AF SDK.

 

There are a few steps in order to do this.

 

1. We need to load the AF SDK assemblies.

# Load AFSDK

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDKCommon") | Out-Null

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null

 

2. We need an object that can store the point attributes for the tags we will be creating. The script I created will automatically create the PI tags if it cannot find them.

#Create an object with point attributes for the points you are creating

$myattributes = New-Object 'System.Collections.Generic.Dictionary[[String], [Object]]'

 

3. Store the tag attributes in the tag attribute object. For myself, I am making these string tags with a point source of CM.

<#Add the attributes to your point. I am making the points that will be created string tags, which corresponds to a value of 105.

Different point types can be found here: https://techsupport.osisoft.com/Documentation/PI-AF-SDK/html/T_OSIsoft_AF_PI_PIPointType.htm

#>

$myattributes.Add("pointtype", 105)

$myattributes.Add("pointsource","CM")

 

4. Next, we will need to initialize the PI Data Archive, AF Server, and buffering options, and instantiate the new value we will be using. We are using the default PI Data Archive and AF Server for this.

# Create AF Object

$PISystems=New-object 'OSIsoft.AF.PISystems'

$PISystem=$PISystems.DefaultPISystem

$myAFDB=$PISystem.Databases.DefaultDatabase

 

# Create PI Object

$PIDataArchives=New-object 'OSIsoft.AF.PI.PIServers'

$PIDataArchive=$PIDataArchives.DefaultPIServer

 

# Create AF UpdateOption

$AFUpdateOption = New-Object 'OSISoft.AF.Data.AFUpdateOption'

 

#Set AF Update Option to Replace

$AFUpdateOption.value__ = "0"

 

# Create AF BufferOption

$AFBufferOption = New-Object 'OSISoft.AF.Data.AFBufferOption'

 

#Set AF Buffer Option to Buffer if Possible

$AFBufferOption.value__ = "1"

 

# Instantiate a new 'AFValue' object to persist...

$newValueX = New-Object 'OSIsoft.AF.Asset.AFValue'

 

# Apply timestamp

$newValueX.Timestamp = New-object 'OSIsoft.AF.Time.AFTime'(Get-Date)

 

With that all out of the way, we just need to create our PI tag, assign it a value and timestamp, and send it on its way.

 

5. Assign a name to the PI tag.

# Assign Tag Name to the PI Point. Here I denote that this is for the data source OPCServer1 and I am retrieving the status.

$tagNameX = "OPCUAConnector.DataSource.OPCServer1.Status"

 

6. Find the tag, and create it if it does not exist. This finds the tag based on the PI Data Archive we specified earlier, as well as the tag name.

#initiate the PI Point
$piPointX = $null

#Find the PI Point, and create it if it does not exist
if([OSIsoft.AF.PI.PIPoint]::TryFindPIPoint($PIDataArchive,$tagNameX,[ref]$piPointX) -eq $false)
{
     $piPointX = $PIDataArchive.CreatePIPoint($tagNameX, $myattributes)
}

#Set the PI tag for $newValueX to $piPointX
$newValueX.PIPoint = $piPointX

 

7. Lastly, we can apply the value to $newValueX and write it to PI! We set the value equal to the status we retrieved earlier from the data source status response. We then use $newValueX.PIPoint.UpdateValue in order to write the new value to the tag.

    $newValueX.Value = $DataSourceStatusResponse.OPCServer1.Object.Status
    $newValueX.PIPoint.UpdateValue($newValueX,$AFUpdateOption)

 

And that's it! That is all the code required in order to pull the information from the connector page and write it to a PI tag.

 

 

The Connector Monitoring Script

If you were wondering while reading this whether or not someone has already built a script that pulls in some of the relevant information, then you're in the right place. While writing this, I also developed a script that will write the data from the connector pages you supply to PI tags. Let's go over how it works.

 

How the script works:

  1. You supply it credentials based off of the method we discussed earlier, where we encrypt the password.
  2. You provide it a list of connectors. In this list there is the name of the connector (which will be used in the tag naming convention), as well as the URL to the admin page. Make sure to exclude the /ui at the end of the URL. These are then stored in an object called $Connectors.

#Create an object called Connectors. This will hold the different connectors you want to gather information from.

$Connectors = @{}

#When adding a connector, give it a name, followed by the link to the admin page like below.

$Connectors.add("OPCUA_nlewis-iis","https://nlewis-iis:5460/admin")

$Connectors.add("PING_nlewis-iis","https://nlewis-iis:8001/admin")

 

    3. A connector object is then generated and the items in the list are added to this object.

    4. We query the web pages and then write the values to PI for each of the objects in the connector object, conceptually as sketched below.
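
Conceptually, that main loop is just the following (a simplified sketch, not the full attached script):

#Loop over each connector entry, query its admin page, and write the results to PI
foreach ($connector in $Connectors.GetEnumerator())
{
    $connectorName = $connector.Key     # e.g. OPCUA_nlewis-iis, used in the tag naming convention
    $adminUrl      = $connector.Value   # e.g. https://nlewis-iis:5460/admin

    #Same GET pattern as earlier, here assuming the ConnectorState instrumentation path
    $state = Invoke-WebRequest -Method GET "$adminUrl/api/instrumentation/ConnectorState" -Credential $cred | ConvertFrom-Json

    #...then find or create the corresponding PI tags and write the values with the AF SDK, as shown above...
}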

 

What the script does

  • Tags are based on the naming convention of: <Provided Connector Name>.<Type>.<Server>.Status, where the type can be DataSource, AFServer, or PIServer
    • For the status of the AF Server (AF Server is named nlewis-af1) for a connector I named OPCUA_nlewis-iis, the tag would be named OPCUA_nlewis-iis.AFServer.nlewis-af1.Status.
  • If the script cannot connect to the admin page, it writes an error message to the connector state tag.
  • If the service is running but the connector is stopped via the webpage, the script writes to all tags for that connector that the connector is stopped.

 

There are two parts to the script. The first part generates the encrypted password file. The second part you run to pull the information, create the tags if they do not exist, and write to them.

 

Please edit the code to include the connectors used in your environment. The scripts can be found attached to this post.

 


I am thrilled to announce the posting of the PI System Connectivity Toolkit for LabVIEW on the National Instruments Reference Design Portal.

 

National Instruments equips engineers and scientists with systems that accelerate productivity, innovation, and discovery.

 

National Instruments measurement platforms often connect to the kinds of sensors and perform the kinds of analyses that aren’t usually found in the industrial control systems, which are traditionally connected to the PI System.  Examples of this are vibration analysis, motor current signature analysis, thermography, acoustic, electro-magnetic induction, and the analysis of other equipment condition performance indicators.

 

By enabling bi-directional communication between LabVIEW and the PI System, maintenance and reliability personnel can gain deeper insights, not only into the condition of equipment through analysis of the signals in LabVIEW, but also into the process conditions that affect the equipment, and vice versa.

 

LabVIEW edge analytics are enhanced by process data from other monitoring and control systems via the PI System.  The PI System real-time data infrastructure furthermore makes the LabVIEW data available enterprise-wide, for better insights and decision-making across an organization, and so that data can be integrated with other systems for predictive analytics, augmented reality, and computerized maintenance management for automating maintenance processes.

 

To obtain installation instructions, LabVIEW Virtual Instruments, and sample code files, see the following posting on the National Instruments Reference Design Portal:

http://forums.ni.com/t5/Reference-Design-Portal/OSIsoft-PI-System-connectivity-toolkit-for-LabVIEW/ta-p/3568074

 

The write-to-PI function requires a license for the PI-UFL Connector.  Please contact your Account Manager or Partner Manager.

 

The read-from-PI function requires a PI Web API license, which can be downloaded and used for free for development purposes from the OSIsoft Tech Support website.

 

For more information on LabVIEW, please see http://www.ni.com/labview.

 

Please direct any questions to NationalInstruments@osisoft.com.

jkorman

Balancing on the Edge

Posted by jkorman Employee Jan 10, 2017

Please check out my blog post about balancing Edge vs Cloud computing and let me know your thoughts!

http://www.osisoft.com/Blogs/Balancing-on-the-Edge/

 

Kind Regards,

Jeremy

While developing web-based applications that leverage the PI Web API, I often find myself asking the following question:

"How can I get all WebIds of a subset of attributes for all elements?"

One obvious use case that comes to mind is displaying a table of attribute data for all elements of a certain type. Maybe I have a template that defines 20 attributes, but I only need to display 10 of them. Of those 10 attributes, maybe I only want to display snapshot values for 6 of them, and the other 4 I want to trend in a sparkline.

In order to accomplish this, a series of requests to the PI Web API needs to be made:

  1. Get all the WebIds for the elements I need
  2. Loop through each of these elements, and for each element get all the WebIds for the attributes I need
  3. Make streamset requests to the PI Web API with the WebIds of my attributes

In the past, making all of these calls to the PI Web API did not scale very well in terms of performance. Today, we have batch requests with request template functionality that makes implementing this use-case a lot easier. Even with batch requests, however, I have discovered some performance implications that I believe are worth knowing about. I will share these in this blog post.

 

 

Setting up the batch request

 

As a starting point, we need to write a batch request to get the elements we want. Throughout this blog post, I am using the public PI Web API endpoint with the NuGreen database. Suppose I want to get all elements of the "Boiler" template type. This batch request may look something like this:

 

{
 "database": {
  "Method": "GET",
  "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["database"],
  "Parameters": ["$.database.Content.Links.Elements"]
 }
}

 

Notice that I am specifically asking for elements that are instances of the "Boiler" template. Also, I am using the "searchFullHierarchy" parameter in the element sub-request. I often include this parameter because I need all element descendants of a specific root element. As such, I will sometimes write my element query as follows:

 

{
 "rootElement": {
  "Method": "GET",
  "Resource": "https://devdata.osisoft.com/piwebapi/elements?path=\\\\PISRV1\\NuGreen\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["rootElement"],
  "Parameters": ["$.rootElement.Content.Links.Elements"]
 }
}

 

The only difference here is that I have changed my starting point from an AF database to a specific AF element. For the rest of my examples, I'll be sticking to the database method.
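
As an aside, if you want to try these request bodies outside of an application, one option is to POST them to the PI Web API batch controller. A minimal PowerShell sketch (the file name is just a placeholder for one of the JSON bodies above, and authentication will depend on your endpoint):

# POST a batch body to the batch controller and inspect the element sub-response
$batchBody = Get-Content .\boiler-elements-batch.json -Raw
$response = Invoke-RestMethod -Method Post -Uri "https://devdata.osisoft.com/piwebapi/batch" -Body $batchBody -ContentType "application/json" -UseDefaultCredentials
$response.elements.Content.Items | Select-Object Name, WebId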

 

Moving on to attributes

 

Now that we have the element batch query, we can expand it to get all attributes for each of the elements returned. This is where the new request template functionality of batch requests in the PI Web API comes into play. Here is what the query may look like:

 

{
  "database": {
   "Method": "GET",
   "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
  },
  "elements": {
   "Method": "GET",
   "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
   "ParentIds": ["database"],
   "Parameters": ["$.database.Content.Links.Elements"]
  },
  "attributes": {
   "Method": "GET",
   "RequestTemplate": {
    "Resource": "{0}?searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path"
   },
   "ParentIds": ["elements"],
   "Parameters": ["$.elements.Content.Items[*].Links.Attributes"]
  }
}

 

Notice the use of RequestTemplate in the "attributes" sub-request. As documented in the PI Web API:

 

A request can alternatively specify a request template in place of a resource. In this case, a single JsonPath may select multiple tokens, and a separate subrequest will be made from the template for each token. The responses of these subrequests will be returned as the content of a single outer response.

 

This means my batch query is going to loop through all of my elements, and make a separate sub-request for each element's attributes. Even better, this is all being handled internally by the PI Web API... pretty cool!

Before we continue, there are a few things to note about the attribute query:

 

  1. I am also using searchFullHierarchy for the attributes. This is important because an element can have many levels of nested attributes. Writing a batch query to loop through all these levels in a generic way would probably be impossible. Luckily we have "searchFullHierarchy" for this.
  2. I have included "selectedFields" in all my sub-requests. As we will see later, this results in a MAJOR performance improvement over letting PI Web API return all of its default metadata.

 

Now, moving on...

 

What if I only want certain attributes?

 

Well, you can do that. PI Web API 2016 introduced a new function to request multiple attributes by WebId or path in one call. Suppose I only want the following attributes for my Boiler elements:

 

  • Asset Name
  • Model
  • Plant

 

Then our batch query may look something like this:

 

{
  "database": {
   "Method": "GET",
   "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
  },
  "elements": {
   "Method": "GET",
   "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
   "ParentIds": ["database"],
   "Parameters": ["$.database.Content.Links.Elements"]
  },
  "attributes": {
   "Method": "GET",
   "RequestTemplate": {
    "Resource": "https://devdata.osisoft.com/piwebapi/attributes/multiple?selectedFields=Items.Object.WebId;Items.Object.Path&path={0}|Asset Name&path={0}|Model&path={0}|Plant"
   },
   "ParentIds": ["elements"],
   "Parameters": ["$.elements.Content.Items[*].Path"]
  }
}

 

Here, I'm using the multiple attributes function call and supplying the "path" parameter in the resource URL multiple times. Each instance of this parameter appends the relative path of an attribute I want to the sub-request parameter (which happens to be the element path). From here, it is only a matter of writing your client-side code to properly construct the resource URL based on the attributes you want.
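
For instance, a small (hypothetical) PowerShell helper that builds this resource string for an arbitrary list of relative attribute paths could look like this:

# Build the 'multiple attributes' resource template for a list of relative attribute paths
$attributePaths = @('Asset Name', 'Model', 'Plant')
$pathParams = ($attributePaths | ForEach-Object { "path={0}|$_" }) -join '&'
$resource = "https://devdata.osisoft.com/piwebapi/attributes/multiple?selectedFields=Items.Object.WebId;Items.Object.Path&$pathParams"
$resource   # {0} is left in place so the batch request template can substitute each element path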

 

Where do I go from here?

 

Now that we have all the WebIds for my attributes, we have to decide what to do with them. Usually I will start by creating a flat data structure that maps my element paths to attributes to WebIds. In JavaScript, this may look something like this:

 

// HTTP response is stored in `res' variable
var WebIds = {};


res.elements.Content.Items.forEach(function (element) {
    WebIds[element.Path] = {};
});


res.attributes.Content.Items.forEach(function (subRes) {
    subRes.Content.Items.forEach(function (attribute) {
        var path = attribute.Object.Path,
            elementPath = path.substring(0, path.indexOf('|')),
            attributePath = path.substring(path.indexOf('|'));


        WebIds[elementPath][attributePath] = attribute.Object.WebId;
    });
});

 

After putting the WebIds into a structure that is a bit more usable, you could use them in a variety of ways. Typically I will create an additional PI Web API batch call for all of the streamsets I need. For example, I may want to use an "end" streamset for 5 of my attributes, but use a "plot" streamset for only 2 of them.

 

How is the performance?

 

I ran different flavors of these types of batch queries and collected the response times. First I will present the raw results and then comment on them.

 

Query description | Response time (Path and WebId only) | Response time (all metadata)
All attributes for all boiler elements | 143 ms | 2611 ms
All attributes for all elements | 435 ms | 3809 ms
Specific attributes (3) for all boiler elements | 180 ms | 186 ms
Specific attributes (3) for all elements | 635 ms | 1120 ms
All attributes except one for all boiler elements | 453 ms | 2895 ms
All attributes except one for all elements | 3249 ms | 5272 ms

 

I included results for all elements (not filtering by template type) to demonstrate how these batch queries scale. I do NOT recommend doing this on a large AF structure.

 

Based on these results, I have made the following observations. YMMV, however, as PI Web API appears to do some internal caching that can cause successive calls to perform faster.

 

  1. When getting all attributes (not specific ones using the "multiple" function call), it is important to be mindful of which fields you are selecting. The more metadata you ask for, the longer the request will take.
  2. Using the "multiple" function call for ad-hoc attributes does not perform very well if you have a lot of elements and are requesting a lot of attributes for each element. You're better off just asking for all attributes with a call to "attributes," unless you only need a small subset of attributes.
  3. The more elements that need to be processed, the longer the query. This makes intuitive sense and is to be expected.

 

Concluding remarks

 

I hope this blog post is helpful to you as you use the PI Web API to develop your own web apps. What other ways have you discovered of getting attributes with the PI Web API?

I would like to share some thoughts about PI calculation datasets in ProcessBook and what would help us to keep – I’d rather say bring - them under control.

Undoubtedly, datasets come in very handy for users to do some simple calculations on the fly. But they are also a challenge for PI administrators when they start to get out of control - I'd almost say they become a mess then.

 

How did that come about?

In a perfect "PI world", data is accessible by whatever and whomever, and tag names never change. Users start building displays with calculations and - as they are very handy - they tend to spread more and more all over the company.

 

In reality, we have PI security and systems that change over time. The display builder and the user do not always share the same access rights. That's the point where it gets weird - not primarily for the user, but definitely for the administrators. Thousands of error messages start to flood the server logs and make them almost unreadable:

User query failed: Connection ID: 6092, User: xxx, User ID: yyy, Point ID: 0, Type: expcalc, …

 

Users are often not even aware of using datasets; they picked them up with a copied display. Missing data access or renamed points are the most common sources of trouble.

Besides a small green spot becoming red (status report), there is nothing that could draw users' notice. That's nothing unusual in an extensive display, since this could also be triggered by a temporary system digital state.

Individual datasets, spread all over the company, are not manageable - and that's by far not the only handicap.

 

How to relieve the symptoms?

First of all, error messages should appear at the place where the error occurs.

It depends on the point of view what the right place would be in the case of calculation datasets.

Technically, the PI server might be the right place; practically, and for logical reasons, it'd be the client. That's the only place it can be fixed.

Here are my proposals:

Provide options to

  • prompt users for calculation dataset problems in ProcessBook
  • automatically disable datasets that can't be executed (security, tag not found) in ProcessBook
  • suppress those error messages (Point ID: 0) on the server
  • disable datasets in ProcessBook (stepwise, e.g. creation, execution, disable)

How to cure?

What I definitely see is an urgent need (at least in our case) to replace the unmanageable decentralized approach with a centralized one.

This is where AF On Demand Calculations come in.

 

Anyway, replacing datasets with On Demand Calculations will be a big challenge and there are several topics that need to be discussed, e.g.:

  • support for migration from datasets to AF On demand calculations (export)
  • how to provide the individual flexibility of personal datasets (may be an organizational matter)
  • ...

 

Conclusion

From an administrator's point of view, the most annoying issue is the log entries for failing calculations in the server log.

500 messages per hour on a 50k server with 100 connected ProcessBooks are more the rule than the exception. An option to suppress those calculation errors with "Point ID: 0" would be a blessing.

 

A good starting point on the client could be to make users more obviously aware of the problem of failing calculation datasets.

This can (and maybe should) be annoying, and therefore an option to disable specific datasets is needed. Together with an option to automatically disable permanently failing calculations, this would be perfect.

 

Finally, a centralized and manageable approach with e.g. On Demand Calculations should be the goal - with benefits for all the new and future clients. Let’s do it step by step.

 

Maybe someone wants to pick this up and make an enhancement request out of it.

Purpose

I have seen quite a few questions and confusion over the last year regarding the timestamps returned when querying data using the PowerShell Tools for the PI System. Specifically, the timestamps returned do not match the client time-zone settings, unlike the behavior of other PI client tools (PI DataLink, PI ProcessBook, PI Coresight, PI SMT, etc.). It also came to my attention that there was not a clear explanation for this behavior in the documentation or here on PI Square at this time.

 

Current time on test machine:

In PI SMT - values for SINUSOID over the last two hours:

Using PowerShell tools for the PI System - values for SINUSOID over the last two hours:

# connect to PI Data Archive
$con = Connect-PIDataArchive -PIDataArchiveMachineName "afink-pi1" 

# set start time to two hours ago
$starttime = Get-Date
$starttime = $starttime.AddHours(-2)
# set end time of now
$endtime = Get-Date

# query values of SINUSOID for last two hours
$values = Get-PIValue -PointName Sinusoid -StartTime $starttime -EndTime $endtime -Connection $con

# write values to console
Write-Output $values


Background/Explanation

The PI Data Archive stores all data using UTC time.  Therefore, when a PI client tool requests data for a tag, the values are returned with UTC timestamps.  It is the responsibility of the PI client to convert these times from UTC to local time based on the Windows Time Zone settings on the client machine.  When a client requests data from the PI Data Archive, the timestamps for the returned values are absorbed by the client as System.DateTime .NET objects in UTC time.  The client then calls the ToLocalTime() method on the returned values and populates the spreadsheet, trend, or other symbol with the timestamp in the client's local time zone.  This is the behavior of OSIsoft's most used client tools such as PI DataLink, PI Coresight, PI ProcessBook.

 

The PowerShell tools for the PI System, however, do not automatically call the ToLocalTime() on returned values, so all values are still shown in UTC by default.  This is what raises the questions/confusion described above.

 

Solution

Luckily for us, since PowerShell is also built on .NET, we simply need to apply the same steps used by all other PI client tools as described above.  The results of my SINUSOID query are stored in the variable $values.  If we select the "Timestamp" property of our resultant events and pipe it to the Get-Member cmdlet (aliased as "gm"), we can see the object type is indeed System.DateTime.  We can also see the various methods and properties associated with the object type.

 

 

Specifically, notice the ToLocalTime() method.  Using this method, we can verify that we can now return the values with the proper client time zone timestamps.
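
For example, a quick check against the first returned event:

# Compare the raw UTC timestamp with its local-time conversion for the first returned event
$values[0].Timestamp                  # System.DateTime in UTC, as returned by the PI Data Archive
$values[0].Timestamp.ToLocalTime()    # the same instant converted to the client's time zone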

 

Finally, since the ultimate goal is often to generate a .csv or .txt file with the resultant archive data, we can use some PowerShell tricks to replace our UTC timestamp column with our client local time column. Specifically, using calculated properties.

 

In fact, thanks to the PowerShell pipeline, we can select our properties, build our custom output column, and write the result to a .csv file all in a one-liner:

# selects the value, and creates a custom column for client tz timestamps and outputs to file C:\values.csv
$values | Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  | Export-Csv -Path C:\values.csv -NoTypeInformation

 

Let's break this down a little bit... this one-liner can be broken down into 3 parts separated by the "|" characters.  The "|" (pipe) allows us to take the output of one cmdlet or script block and pass it as the input to the next cmdlet or script block.

 

Part 1:

$values |

 

Here we are simply setting our "start off point".  We currently have our $values variable which contains our SINUSOID values and UTC timestamps.  Keep in mind this variable also contains other properties, such as isGood, isAnnotated, and WriteMode.  We take all of this information and pass it to the next step:

 

Part 2:

Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  |

 

In this case, we are only interested in the value and our new client-time-zone timestamp properties for our resultant .csv file.  Therefore, from our query results, we are only selecting the "value" column and our new column "Timestamp (Client Time Zone)", which we are creating by specifying the Name of the new column and the Expression to evaluate for the new column.  In PowerShell, we do this by adding the new column as a hash table, which is why we must use the @{Name='';Expression=''} syntax.  The operation we want to perform to generate the data for our "Timestamp (Client Time Zone)" column is to call the ToLocalTime() method as previously discussed.  The "$_." tells PowerShell to call the timestamp.ToLocalTime() method on whatever object was piped to it, and from Part 1 we piped our $values variable.  Therefore, we will be taking the "timestamp" column of our $values variable and calling ToLocalTime(), resulting in our new column "Timestamp (Client Time Zone)".

 

For more information on calculated properties in PowerShell, check out the following TechNet article:

< Windows PowerShell Tip: Using Calculated Properties >

 

Part 3:

Export-Csv -Path C:\values.csv -NoTypeInformation

 

Now that we have selected only the columns we are interested in from the previous step ("value" and our new calculated "Timestamp (Local Time Zone)" columns), we can export the result to a .csv file using Export-Csv and specifying a path for the output file.  The NoTypeInformation flag prevents object type data from being pre-appended to the .csv file header.  Oftentimes this extra note in the .csv file can be cumbersome, so we can remove it in this way.  Without the -NoTypeInformation flag, the output file will look like this:

 

Now, we have a .csv export of our desired PI data and have it properly returned in our client time zone which we can now use for further analysis in other programs.  Furthermore, since we were able to do this in PowerShell, we can run our queries in parallel, against multiple servers, and in fully automated fashion.

 

The full script would look something like this:

# connect to PI Data Archive
$con = Connect-PIDataArchive -PIDataArchiveMachineName "afink-pi1" 

# set start time to two hours ago
$starttime = Get-Date
$starttime = $starttime.AddHours(-2)
# set end time of now
$endtime = Get-Date

# query values of SINUSOID for last two hours
$values = Get-PIValue -PointName Sinusoid -StartTime $starttime -EndTime $endtime -Connection $con

# write values to console
Write-Output $values

# selects the value, and creates a custom column for client tz timestamps and outputs to file C:\values.csv
$values | Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  | Export-Csv -Path C:\values.csv -NoTypeInformation


 

Happy slinging!

 

- Adam Fink

KristofDM

Osi to the cloud

Posted by KristofDM Dec 15, 2016

So, we are moving the entire PI infrastructure to the cloud - or at least we are trying to...

We are looking at IaaS and PaaS where possible.

I'll write down my successes (and losses)  here to help the community members that might have the same idea.

 

Please note that not all things we do are 'supported' by OSIsoft... (yet!).

 

First, some info on the environment.

We have a PI Data Archive 3.4.935.72 (2015 R2) server,

PI Coresight,

PI AF,

PI SQL Data Access Server,

and a 'heavy consumer app' server,

plus a SQL Server for Coresight and AF.

 

Testing:

To make sure we get the most out of it, I wanted to have the AF and Coresight databases hosted in PaaS.

This is not yet supported and was only tested on an older PaaS SQL edition (pre v12).

We have a VPN with Azure and are able to join Azure VMs to our local AD.

 

PiArchive

I went ahead and installed the PI Data Archive on an Azure VM with SSD storage.

No problems here.

 

PiCoresight

Installed Coresight on an Azure VM and adapted the connection strings to use PaaS.

Migrated the on-premises database to PaaS using SQL Management Studio.

Issues:

1: The DvService user (in the Coresight SQL database) is linked with a SQL login from the domain!

=> Not supported in Azure; dropped and recreated on PaaS.

2: The Coresight connection string to SQL in web.config was adapted so an Azure database can be used

code:

<add name="CoresightConnectionString"  connectionString="Data Source=yourPaaSSQlServer.database.windows.net;Initial Catalog=YourPaasDatabaseName;User ID=DatabaseUSer;Password=DatabaseUSerPassword;Encrypt=true;Trusted_Connection=false; Application Name='Coresight Web Service'" />

 

It works! The Coresight overview page no longer shows any errors.

 

PiAfServer

Moving the AF database to cloud SQL PaaS.

I probably have an advantage here since I'm a classic database admin, so if anything is not clear, shoot!

 

The AF database in SQL Server uses some stored procedures/functions to create a replica, get the status of the SQL Agent, get the sysadmin name, etc.

Since these are all calls to the database instance (the SQL engine), they will no longer work in a PaaS database. For obvious reasons, Microsoft does not allow these kinds of queries.

The instance catalog is not the traditional SQL Server kind and thus is unusable.

 

Since this is a tricky part, we made a copy of the database on our local SQL Server (PiAfCopy) and:

Changed the stored procedure for the SA user to always return 'SA' as a string value.

Changed the check-SQL-Agent job to always return 1.

Dropped the procedures for the replica (OSIsoft discourages using it anyway).

Changed all Windows logins to SQL logins.

 

Adapted Functions:

fn_getSaName

IsSqlAgentEnabled

 

Dropped Stored Procedures:

[dbo].[USP_AUDIT_ENABLED]

[dbo].[usp_replicate_primary_createdistributor]

[dbo].[USPC_CDC_ENABLE]

 

Now, using SQL Management Studio 2016, I deployed the database to the PaaS SQL.

Issue: Out of memory

This can be quite intensive on the client computer, so having enough RAM is essential!

In case you haven't, try closing as many apps as possible.

You could also do this on the database server itself, but it needs an internet connection!

 

After some time (it takes a long time) the database should be deployed in Azure PaaS.

 

Installing AF Server + PI System Explorer!

 

We've set up an IaaS server and installed AF like we normally do. We disabled the database execution utility and disabled the connection check while specifying the database server.

We just specified the PaaS SQL server.

 

Installation went smoothly.

 

The connection string needed to be changed, this time in AFService.exe.config.

It's located in C:\Program Files\PIPC\AF. It is somewhat different from Coresight...

 

    <add key="connectString" value="Persist Security Info=False;server=YourPaasSqlServer.database.windows.net,1433;database=YourPaasDatabase;USER id=DatabseUser;PASSWORD=DatabaseUserPassword; Application Name=AF Application Server;" />

 

Restarted AF and...

 

It worked!

Initially I had an error while creating new elements:

"cannot check in model with unique id {Some very long SID} because there is another model with an identical name"

But after retrying a few times, it vanished!

 

Next steps:

Analysis Service, Notifications, PI SQL Data Access Server.

To be continued!

 

 

SQL DATA ACCESS (PIOLEDB)

After installing the PI data access layer on an IaaS server, running PI SQL Commander fails.

Under the hood, PI SQL Commander tries to use the (undocumented) system stored procedure sp_oledb_ro_usrname.

The sp_oledb_ro_usrname stored procedure returns the OLE DB database read-only status.

These kinds of queries are not supported on Microsoft Azure SQL PaaS.

 

Here ends the SQL PaaS try-out.

 

We'll look at hosting the SQL on premises...

rkoonce

Excel Is Our Friend – Part V

Posted by rkoonce Employee Dec 14, 2016

Excel Is Our Friend – Part V


Congratulations! You have escaped Interface Island! And if you are ever caught on another similar island, you have developed the survival skills to survive and eventually escape. The bad news is that some of your toolbox was left behind on that island. When you find yourself in need of the tools there, you could always swing by and get a "copy". You left your shelter behind, so it will have to be rebuilt. On a new island, project references will need to be re-added and functions re-created in the new island's project. But what if there was a way to take the recyclables with you? Like that user defined function? That might come in handy anywhere. So, the good news is that you can have your cake and eat it too.

How to: Making Your Custom Spreadsheet Functions Available Anywhere

To use a custom function, the workbook containing the module in which you created the function must be open. If that workbook is not open, you get a #NAME? error when you try to use the function. Even if the workbook is open, if you use the function in a different workbook, you must precede the function name with the name of the workbook in which the function resides. For example, if you create a function called DISCOUNT in a workbook called Personal.xlsb and you call that function from another workbook, you must type =personal.xlsb!discount(), not simply =discount().

You can save yourself some keystrokes (and possible typing errors) by selecting your custom functions from the Insert Function dialog box. Your custom functions appear in the User Defined category:

ExcelFig7.png

In the instructions, a workbook type of xlsb is referenced above. But the screenshot has an xlsm… my workbook was created long ago, originally as an .xls (I am a dinosaur).

* Tip: An even easier way to make your custom functions available at all times is to store them in a separate workbook and then save that workbook as an add-in. You can then make the add-in available whenever you run Excel. Detailed instructions are found at the office.microsoft.com site linked here. This varies slightly from my own “old skool” methods, but I agree with the authors that this new way seems an easier way. I’ll be converting to an add-in when I get the chance.


Note: An .xlsb extension indicates Excel Binary Format. The .xlsm extension denotes a macro enabled workbook. An .xlam extension indicates Add-in. All of these formats require a trust for location and/or a much more complicated trust for publisher.

  • How to: Let Excel Teach You VBA


All of us can learn a lot of VBA by using the “Record Macro” function. After recording your actions within Excel, just examine the VBA code afterward and learn. It’s like pulling the curtain back on the Wizard of Oz. Prior to VBA, I was using Excel 4’s macro language, and the macro recorder was instrumental for me, personally, when first learning VBA. I still use it on occasion.


* On the Developer Ribbon, click “Record Macro”.
* Note the Macro Name and location.
* Perform some actions within the workbook, select a range of cells, change the font, change the number format, type some text or a formula.
* Click the Stop Recording button (same location as above).
* Examine the code that was generated.

ExcelFig8.png

  • FizzBlog Exercise:


  • Create a new macro-enabled workbook.
  • Copy the mySplitter function into this new workbook (the functions that we created earlier in this blog series).
  • Save this workbook as an Excel Add-In > “MyFunctions.xlam”
  • In Excel>File>Options> Add-ins>Load the newly created add-in in Excel.
  • Create a new workbook. Click Insert Function
  • FizzBlog Challenge:


  • In the dialog to select the user defined functions, is the function name prepended with the add-in worksheet name? workbook name? Or none? Prove it with a screenshot.
  • Were you able to make it work with a regular workbook? Or does it only work in macro-enabled workbooks?
  • Did you have to do anything special to make it work? (Did I leave out any significant steps in the FizzBlog Exercise?) If so, please explain.
  • Sharing your recyclables:


  • If you have a useful reusable function(s), please share it below.
  • Perhaps you just had an idea for one, please post it as a comment! (Santa might be reading this thread?)


This entry concludes my blog series for "Excel Is Our Friend" (perhaps I will append it someday). I hope that some of you have learned a few things along the way. Regardless, I really hope you enjoyed the series.

I plan to begin another series soon, but first I will have to choose my next destination. I have several ideas that I have been kicking around, but I would welcome any suggestions regarding any "islands" you would like to explore.


Cheers! (from the grass hut)

Introduction

 

In line with our partnership with Dell, our Systems Engineering team in Singapore received an opportunity to test out a Dell Edge Gateway unit for a short period. The Dell Edge Gateway is like a mini PC designed for industrial use to aggregate, secure, and relay data from diverse sensors and equipment. It is able to accept many communication protocols, from BACnet, Modbus, and CANbus to modern wireless mesh networks like Zigbee, 6LoWPAN, and ZWave.

 

For this testing, we decided to install the full PI System and connect it to temperature sensors to perform simple, direct data collection examples.
In this post, we will show what we have done on the Dell Gateway and go through 2 examples:

1) how to connect and collect data from an industrial thermometer transmitter via the RS232 port

2) how to collect data from a Temperature-Humidity sensor via a Raspberry Pi through PI Web API

 

Setup

 

Sensors: Comet Thermometer (T0310) with RS232 output and Temperature-Humidity sensor (DHT22) (with jumper cables, breadboard and 10k Ohm resistor)

Development Board (for example 2): Raspberry Pi 3 Model B

Gateway Device: Dell Edge Gateway Model 5000. OS - Windows 10 IoT Industry. Processor - Intel Atom E3827 1.75 GHz 2 Cores. Memory - 8 GB

PI System components installed: PI Data Archive 2016, PI AF 2016 (including SQL Server 2012 R2 Express), PI ProcessBook, PI Coresight and PI DataLink (including MS Excel 2013)

Data Collection Method: PI Interface for Modbus Serial PLC (Example 1) and PI Web API 2016 (Example 2)

 

In this post, we will assume that all of the PI System components have been installed successfully on the Dell Gateway beforehand. The Dell Gateway unit is running in a workgroup and does not have internet access (although that is possible).

 

Example 1

The Comet temperature sensor (T0310) comes with an RS232 serial output port, which can be connected directly to the Dell Gateway's serial port COM1.

The PI System on the Dell Gateway can easily collect the sensor's real-time reading (degrees Fahrenheit) into a PI tag via the PI Interface for Modbus Serial PLC.

 

If you are already familiar with the PI Modbus interface and its tags, the interface configuration is straightforward with all default settings.

As for the PI tag configuration, you need to follow the interface manual. Some of the important tag attributes are:

Tagname | Point Source | Point Type | Location 1 | Location 2 | Location 3 | Location 4 | Location 5 | Instrument Tag
T0310_Comet_Temperature | MODBUSS | Float32 | 1 | 1 | 104 | 1 | 49 | COM1

 

Once we started the interface, data started flowing to the PI Data Archive.

We built a display in PI ProcessBook and imported it into PI Coresight.

 

 

Example 2

Plug the Temperature-Humidity sensor (DHT22) into the breadboard and connect the jumper cables as per the diagram below.

  • Connect Pin 1 of the sensor to 3.3V GPIO
  • Connect Pin 2 of the sensor to GPIO4
  • Connect Pin 4 of the sensor to Ground
  • Connect an Ethernet cable between the Dell Gateway and the Raspberry Pi Ethernet port

(Wiring diagram: DHT22 to Raspberry Pi GPIO. Ref: DHT22, gpio)

Once everything is ready, on the PI Data Archive side we create 2 PI tags to collect the sensor's readings (Temperature and Humidity). These 2 tags can have the 'float32' point type and use 'PS = L'. Other tag configuration can be kept as default.

The data will be read off the Raspberry Pi into the PI tags via PI Web API.

 

Make sure to set the AuthenticationMethods attribute under PI Web API's 'System Configuration' element in the AF Configuration database to 'Anonymous'.

Then, note down the WebIDs of the two new PI tags that were created. You should have 2 WebIDs: one for the Temperature tag and one for the Humidity tag.
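If you prefer not to copy the WebIDs out of a client tool by hand, one quick way is to query PI Web API's 'points' controller by tag path. The snippet below is only a sketch: the server name, tag names, and certificate file are assumptions based on this setup, so adjust them to match your own tags.

import requests

base_url = 'https://gateway5000/piwebapi'             # same endpoint as in dev.py below
for tag in ['DHT22 Temperature', 'DHT22 Humidity']:   # assumed tag names
    resp = requests.get(base_url + '/points',
                        params={'path': '\\\\GATEWAY5000\\' + tag},
                        verify='webapi.cert')         # trust the self-signed certificate
    print(tag, resp.json()['WebId'])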

 

Code

 

We used Python scripts. We need the 2 scripts below:

1) DHT22.py, available online via GitHub

2) dev.py, which writes the sensor's data to PI every 5 seconds. Make sure to modify 'base_url' and insert the WebIDs for 'temperature' and 'humidity'.

Note: 'webapi.cert' is a self-signed certificate stored at the same location as the scripts. More information on Certificate Generation is available here.

 

import requests
import os
import time
import pigpio
import DHT22


# PI Web API endpoint and the WebIDs of the two PI tags (replace with your own)
base_url = 'https://gateway5000/piwebapi'
os.environ['REQUESTS_CA_BUNDLE'] = 'webapi.cert'   # trust the self-signed certificate
temperature = 'P0KXR0VK08VUemBzWNbx97QQZgUAAAR0FURVdBWTUwMDBcREhUMjIgVEVNUEVSQVRVUkU'
humidity = 'P0KXR0VK08VUemBzWNbx97QQZwUAAAR0FURVdBWTUwMDBcREhUMjIgSFVNSURJVFk'


def post_pi_value(value, webid):
    # Write a single value to the tag's stream
    data = {'Value': value}
    headers = {'Content-Type': 'application/json'}
    response = requests.post(base_url + '/streams/' + webid + '/value',
                             json=data, headers=headers)
    return response


def get_pi_value(webid):
    # Read the tag's current value back (used here only for checking the write)
    response = requests.get(base_url + '/streams/' + webid + '/value')
    return response


pi = pigpio.pi()           # connect to the pigpio daemon
s = DHT22.sensor(pi, 4)    # DHT22 sensor wired to GPIO4

while True:
    s.trigger()                                  # ask the sensor for a new reading
    post_pi_value(s.temperature(), temperature)
    post_pi_value(s.humidity(), humidity)
    r = get_pi_value(temperature)
    # print(r.text)                              # uncomment to inspect the write
    time.sleep(5)                                # send data every 5 seconds

Credit: Eugene Lee

 

You can set up the above Python script to run on startup, or start/stop it on demand using the two bash scripts below (attached). These two files can be placed in the same location as the Python scripts. When you want to start or stop reading the sensor's data, you can execute the commands from the terminal (command line) as shown below.

 

Conclusion

 

In this post, we made use of the Dell Gateway's capability to directly connect to data sources via its serial and Ethernet ports. We selected simple sensors that are easy to connect to and come at low cost. We successfully delivered data to the PI Server, which resided on the same machine. Since the Dell Gateway can accept many other communication protocols, data collection from many other IoT devices is possible. In my opinion, the Dell Gateway can be a very good interface node for the PI System, where all sensors can be connected and their data gathered in one location. However, it is not suitable for hosting the full PI System, because of physical limitations such as a 2-core processor and only 8 GB of RAM. Performance will surely be limited with a large number of tags or heavy use of AF and AF Analyses. In the future, with higher specifications, it may be possible.

 

If you have any questions, please post them here, and if you have experience using the Dell Gateway, please share it with us.

Looking for help with a technical issue or challenge? All Things PI is your place for online support from OSIsoft Technical Support, as well as fellow users!

 

Who is available on PI Square to help?

 

All of OSIsoft Technical Support monitors and participates in our community, so the same support you can get via a phone call or email is what you'll receive here!

 

In addition, we have active Product Specialists, Product Managers, Developers, and more who are always keeping an eye out for discussions relevant to their expertise.

 

How do I post a question?

 

Use any search bar to see if your question has been asked before - search results will pop up automatically, and if needed you can click the "Search all results ->" button at the bottom to see more.

 

If your issue hasn't been discussed before, simply press the "Ask It" button to launch a new post. Fill in as many details as you can, including background to the issue.

 

In the "In a Place" section, browse to the appropriate sub-location of All Things PI - we have product-specific subspaces such as PI Server, PI Interfaces & Connectors, PI Visualization, etc. See below on why this is beneficial to you!

 

Create a few tags to better catalogue your post. These tags are used in the site's search engine, and make it easier for others to find your post!

 

Why should I post in a product-specific Space?

 

Our specialized Support resources such as Product Specialists, Product Managers, and Developers will often follow specific product spaces so that they get alerted to new discussions. By posting in the right product space, you'll have a better chance of getting targeted support for your issue!

 

What if I need a remote session?

 

If we can't solve your problem on PI Square, one of our engineers will assist you in creating a full Tech Support case, and will contact you directly to start a remote session. The process is seamless, and won't be any extra work for you.

 

What about the rest of PI Square?

 

OSIsoft Technical Support resources will be monitoring discussions in the All Things PI space and product subspaces to provide technical support and assistance. (For questions: Taylor McManus)

 

OSIsoft Development Support teams will be monitoring questions in the PI Developers Club space and sub-spaces there to help provide suggestions and answers to development questions. (For questions: Ahmad Fattahi)

 

OSIsoft Learning will be monitoring the Learn PI space and class content and will be replying to questions, comments, ideas, and suggestions within that area. (For questions: Kelsey Holstein)

 

The OSIsoft PI Square team will be monitoring the Welcome to PI Square! space and will reply to your questions, comments, ideas, and suggestions within that area. (For questions: Jarita Sirois)

 

How do I get started?

 

Head on over to All Things PI and dive into the Community!

This post will contain an overview of an internship project designed to showcase the value of the PI System in a Smart City context. The project was undertaken at the Montreal OSIsoft office, by two interns – Georges Khairallah and Zachary Zoldan  – both of whom are studying Mechanical Engineering at Concordia University in Montreal.

As a proof of concept for the PI System as a tool for the management of smart cities, we chose to bring in data from public bike sharing systems. We then set about collecting data from other sources that may affect bike share usage – data such as weather, festivals and events, public transit disruptions, and traffic. In undertaking this project, we hoped to better understand the loading patterns of bike sharing systems, analyze how they react to changes in their environment, and be able to predict future loading.

 

 

 

What is a Smart City?

 

A smart city implements technology to continuously improve its citizens' daily lives. By analyzing data in real time, from the movement of people and traffic to noise and pollution, a smart city is able to understand its environment and quickly adjust and act accordingly to ensure it maximizes the usage of its resources. A smart city's assets can all be monitored in real time, something which is becoming easier with the emergence of IoT technologies.

 

We chose to monitor bike sharing systems due to the abundance of live and historical open data, and the numerous analyses which can be run on the system. Bike sharing systems represent one aspect of a smart city, and by demonstrating what can be done with just this data, we can show the potential value of the PI System in a Smart City context.

 

 

 

What is a bike sharing system?

 

A bike sharing system is a service which allows users to rent bikes on a short term basis. It consists of a network of stations which are distributed around a city. Each station consists of a pay station, bike docks, and bikes that the user can take out and drop off at any other station. Bikes are designed to be durable, resistant to inclement weather, and to fit the majority of riders.

Users can be classified as “members” or “casual users”. Members are those who pay an annual, semi-annual, or monthly fee for unlimited access to the bike sharing network. Members access the network with a personal key fob, allowing the bike sharing system to keep a record of each member's trips. Casual users pay for a one-way, 24-hour or 72-hour pass using their credit card at the pay station.

 

o1.png

 

Every bike is equipped with a unique identifier in the form of an RFID tag, which keeps track of the station it was taken out from and returned to. Coupled with a member's key fob identification, or a 4-digit code generated at the pay station for a casual user, we can track each trip and determine whether it was taken by a member or a casual user.

 

 

What datasets were available to us?

 

We had access to live open data from numerous bike sharing systems, from cities such as Montreal, New York City, Philadelphia, Boston, and Toronto. Each bike sharing system posts data in the form of a JSON document, updated every 10-15 minutes, containing data pertaining to individual bike stations. It reports the number of docks and bikes available at each station, as well as station metadata such as the station name, longitude/latitude position, and station ID.

Example Live Data.png
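Pulling one of these feeds takes only a few lines of Python. The sketch below reads a GBFS-style station_status feed; the URL and field names are assumptions based on the public GBFS convention, so check the open-data page of the system you are interested in.

import requests

# Example GBFS-style feed (URL and field names are assumptions; adjust for your system)
url = 'https://gbfs.citibikenyc.com/gbfs/en/station_status.json'

stations = requests.get(url).json()['data']['stations']
for s in stations[:5]:
    print(s['station_id'],
          'bikes:', s['num_bikes_available'],
          'docks:', s['num_docks_available'])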

We also had access to historical trip data from different bike sharing systems. This data came in the form of CSV files, with each row entry representing a single bike trip. Each individual trip is tracked using an RFID chip embedded in the bike, and can tell us the start time/end time of a trip, start station/end station and whether the trip was taken by a member or a casual user.

The specifics of the historical data vary from system to system. Montreal’s BIXI system only offers the trip duration and account type on top of the trip information, whereas New York City’s CITIBike also offers data on members’ gender and age.

 

o99.png

 

We then brought in data sets which we thought might affect bike usage – data such as weather, festival and event schedules, traffic, and public transit disruptions. Weather data was collected from an open weather API, and public transit data was collected using a web scraper and the PI Connector for UFL.

Importing data into the PI System:

Much of our data was text-based, so we chose to use the PI Connector for UFL to import the data into our PI System. The Connector was set up using the REST endpoint configuration for most of our data sources. It was also set up to create PI AF elements and our PI AF hierarchy.

We stored most of our live data as PI Points, while the historical trip data was stored as Event Frames.
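For illustration, here is a rough Python sketch of feeding one line of text to the connector's REST endpoint. The host, port, data source name, credentials, HTTP verb, and CSV layout are all assumptions made for the sketch, not our actual configuration; the field order has to match whatever the data source's INI file expects.

import requests

# Hypothetical UFL connector REST endpoint: https://<connector-host>:<port>/connectordata/<data source>
url = 'https://uflhost:5460/connectordata/bikeshare'

# One line of text in the layout the INI file expects, e.g. timestamp, station, bikes, docks
payload = '2016-07-18 15:00:00, Station 42, 12, 7'

r = requests.put(url, data=payload,
                 auth=('ufl_user', 'ufl_password'),   # basic authentication configured on the connector
                 verify=False)                        # skip validation for a self-signed certificate
print(r.status_code)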

 

To learn more about using the PI Connector for UFL to create event frames, click here.

To learn more about using the PI Connector for UFL as a REST endpoint, click here.

To learn more about creating PI AF elements with the PI Connector for UFL, click here.

 

Historical Data Analyses:

 

The historical bike trips have a designated start and end time – to us, the best way to store this data seemed to be as Event Frames, with an Event Frame for each bike trip. Using the PI Connector for UFL to create Event Frames from our CSV files, we loaded more than 5 million event frames into our PI System!

We used the PI Integrator for Business Analytics to publish various “Event Views”, which we could then access and visualize with a business intelligence tool such as Microsoft Power BI.

 

June 2015 Trips vs Rain.png

Above, we can clearly see that the days with the highest usage had zero rainy events, in other words sunny days (as shown by the variable “temp_numerical”), whereas the days with the fewest trips had the most rainy events.

 

o5.png

Above, we have a dashboard that represents the trips from June 2015. It holds a total of 556,000 trips.

On the top left, we can observe bubbles that represent the locations of the stations. The color of a bubble shows which neighborhood the station belongs to, and its size indicates the number of trips made from that station.

On the bottom right, we can determine that “Le Plateau” neighborhood is the most popular, with “Downtown” coming in second place.

Next, we can slice into “Le Plateau” using the donut chart on the top right to determine whether all of the stations were equally used, or whether some stations outperformed others.

 

o56.png

It looks like some stations outperform others; why is that? Although the two stations we observe are near each other, one of them holds 7,140 trips whereas the other has only 2,378. The station nearest to a metro station experiences the most trips because it is the path of least resistance.

 

o7.png

 

This was also the case for the “Downtown” neighborhood!

Since the bike share companies have information on the approximate revenue each bike station generates, the city can make better financial decisions about withdrawing under-performing stations, because at some point maintenance costs outweigh profit.

 

 

 

Live Data Analyses:

 

Using AF analyses and the live data available, we created several KPIs in order to keep track of usage at each station throughout a city's bike sharing network. These include station utilization (how empty a station is) and 4/8/12/24-hour combined in/out traffic events; a rough sketch of these two calculations follows below.
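The post does not spell out the exact formulas, so the short Python sketch below shows one plausible reading of the two KPIs (in the project itself they were implemented as AF analyses, not Python).

def station_utilization(bikes_available, docks_available):
    # Fraction of docks currently empty, i.e. how "used up" the station is
    total_docks = bikes_available + docks_available
    return docks_available / total_docks if total_docks else 0.0

def combined_traffic(bike_counts):
    # Total in/out events over a window, approximated by summing the changes
    # in the bikes-available count between consecutive samples
    return sum(abs(b - a) for a, b in zip(bike_counts, bike_counts[1:]))

print(station_utilization(bikes_available=4, docks_available=12))   # 0.75 -> mostly empty
print(combined_traffic([10, 8, 9, 5, 7]))                           # 2 + 1 + 4 + 2 = 9 events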

 

Let’s take a look at the 4-hour combined in/out traffic events in Montreal, over one weekday:

 

 

4 Hr total traffic - Montreal only.png

We see that there are two characteristic peaks, one at 11 am (representing traffic from 7 am - 11 am) and another at 8 pm (representing traffic from 4 pm - 8 pm). As expected, this corresponds to commuting times for the standard 9-5 workday. But is this trend repeated on the weekends?

Let’s look at the same metric, but over one week. Different colors indicate different days. (Saturday-Friday, Left-Right)

We can see the two characteristic weekday peaks which were present in the previous graph, but the usage on weekends tells a different story. There is only one usage peak on the weekend, since users are not using the bikes to commute to work.

 


Can we use the PI system to explain why the midday usage on Monday has dropped off compared to other weekdays? Let’s bring in our weather data and see if it had any effect on the usage.

o8.png

The added purple line, which varies from 0 to 1, acts as a marker for when it was raining in Montreal, with "1" representing rainy conditions. Now we have a clear explanation for why Monday, July 18th experienced a drop in usage during the day: it was raining!

 

o9.png

 

 

 

Predicting future demand:

 

A key issue which bike share system managers must deal with is the need to ensure bikes are available to users at all times. This means they often need to redistribute bikes from one station to another to ensure bike availability. System administrators need to be able to predict usage at a station level and on the system level to account for this.

Using data collected with the PI System, we can predict future usage at the station level and at the network level. We looked at the value at a given time, say 3:00 PM on a Monday, and then averaged the values from the same time over the past three weeks. We were able to accurately predict future usage within a margin of error of 2%-5%. We stored the forecasted usage using AF analyses and the PI System's "future data" option; a simplified sketch of the averaging idea follows below.
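The averaging itself is simple; here is a minimal Python sketch of the idea with hypothetical numbers (the real implementation lives in AF analyses writing to future-data tags).

from datetime import datetime, timedelta

def forecast(history, when):
    # Average the values recorded at the same time of week over the past three weeks
    samples = [history[when - timedelta(weeks=k)] for k in (1, 2, 3)]
    return sum(samples) / len(samples)

# Hypothetical 4-hour traffic values at 3:00 PM on three past Mondays
history = {datetime(2016, 7, 4, 15): 42,
           datetime(2016, 7, 11, 15): 38,
           datetime(2016, 7, 18, 15): 46}
print(forecast(history, datetime(2016, 7, 25, 15)))   # -> 42.0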

o43.png
Mapping Smart City assets with the PI Integrator for ESRI ArcGIS and ESRI Online:

To learn more about mapping smart city assets with the PI Integrator for ESRI ArcGIS, click here.

 

From the beginning of the project, we knew that the geographical location of our assets would be incredibly important for our analysis. The usage of any bike station is directly tied to its location – those located closer to major office buildings, pedestrian malls, and points of interest will experience an increase in usage. By mapping our assets (along with all associated PI points and KPIs), we can view any geographical trends that might emerge.

 

We started off by mapping our AF assets (bike stations in Montreal) using ESRI Online. The examples below show how we can view our elements in a much more intuitive way and combine secondary data sets such as the locations of bike paths, subway stations, and bike accidents. We set the marker size to be tied to a specific station attribute, the 4-hour total in/out traffic events.

 

We can click on any circle and bring up a configured pop-up showing all relevant AF attributes and KPIs.

 

o87.pngo88.png

We can then further enrich this data by adding other map layers. For example, we can see that most bike stations coincide with the location of bicycle paths (left) and the location of subway stations (right).

 

o90.pngo91.png

 

 

Conclusion:

 

In this post, we were able to show the value of the PI System in analyzing bike sharing systems, which are just one small aspect of a smart city. The PI System excelled at data collection, time-based analysis, and helping us visualize numerous datasets. We could have performed a much more in-depth analysis if we had access to other, closed smart city datasets such as power consumption, waste management, and more. The potential for further analysis is there; all we need is access to the data.

 

 

 

Demo Video

chuck

OSIsoft's New Offices

Posted by chuck Employee Aug 14, 2016

Have you seen the progress of construction for OSIsoft's new offices at the new San Leandro Tech Campus (SLTC)?

 

Just this month the base of the sculpture Truth in Beauty was installed. The building is all closed in, and interior finishing is progressing rapidly. The parking structure is quickly coming together.

If you are interested in watching the progress, there are some videos on YouTube you can enjoy!

 

OSIsoft Headquarters (new) construction - view 1 -

https://www.youtube.com/watch?v=5k-FzZsIhwM

OSIsoft Headquarters (new) construction - view 2 -

https://www.youtube.com/watch?v=YbrnJ84b490

Parking structure construction -

https://www.youtube.com/watch?v=ggcJS73LfF0

View from the new office building - currently the tallest building in San Leandro

https://www.youtube.com/watch?v=mNVcprtuyC0

Sculpture installation - part 1 -

https://www.youtube.com/watch?v=D1gustPKpbY

 

... and more information from the official website for the San Leandro Tech Campus -

http://sl-tc.com/

 

(No, I didn't make any of the videos, but I am really impressed with what others have done to capture the SLTC construction progress!)

Last time we showed how to choose a database and print some information. Today we will look at how to get an attribute value.

So we have already reached the database:

PISystems myPIsystems = new PISystems();
AFDatabases myDatabases = myPIsystems["W-SRV1"].Databases;
AFDatabase myDatabase = myDatabases["test"];

How to reach the element

I have this structure in my database:

If I want to reach Element1, I need this code:

AFElements myElements = myDatabase.Elements;
AFElement myElement = myElements["Element1"];

This is OK, but what should I do if I want to reach child_Element1?

 

There are two options. First, do the same again and select the element from the elements under Element1:

AFElements myElements2 = myElement.Elements;
AFElement myElement2 = myElements2["child_Element1"];

Or use the path from the beginning:

AFElements myElements = myDatabase.Elements;
AFElement myElement = myElements["Element1\\child_Element1"];

NOTE: The backslash (\) has a special meaning in C# strings, so we need to use \\, which is translated to a single \ when the string is used.

How to reach the attribute

Now we have myElement and we would like to get an attribute under this element. This is similar to before:

AFAttributes myAttributes = myElement.Attributes;
AFAttribute myAttribute = myAttributes["Attribute1"];

As you can see, nothing has changed; we still use the same logic as with elements.

 

How to get the value

To get the value we first need to get it as AFValue:

AFValue myAFValue = myAttribute.GetValue();

This is the current value; how to get a value from a specific time will be shown in the next chapter.

We can now get the float value from the AFValue. The Value property of AFValue is of type object, so I decided to cast it to float (my value is a float and I know it).

float myValue = (float) myAFValue.Value; 

Printing information

Now we can print some information:

Console.WriteLine("Attribute path: {0}", myAttribute.GetPath());
Console.WriteLine("Value: {0}", myValue);
Console.WriteLine("Timestamp: {0}", myAFValue.Timestamp);
Console.WriteLine("Value type: {0}", myAFValue.ValueType);
Console.WriteLine("Source PI Point: {0}", myAFValue.PIPoint);

Results

Recapitulation

So here is the code with shortcuts:

AFElement myElement = myDatabase.Elements["Element1\\child_Element1"];
AFAttribute myAttribute = myElement.Attributes["Attribute1"];
AFValue myAFValue = myAttribute.GetValue();
float myValue = (float) myAFValue.Value;

End

So now we are able to get the current value of an attribute and some other information about it. How to get a value from a specific time will be the topic of the next chapter.
