
Hello PI Geeks!


We are planning our next Hackathon at PI World 2018, where we expect dozens of esteemed PI professionals, industry experts, and data scientists to compete. Your business challenge can become the topic of the event, which means a whole group of engineers will compete to add value to your business by solving one of your challenges.


We have hosted several successful hackathons over the past few years (2017 SF, 2017 London, 2016 SF, 2016 Berlin, 2015 SF). In 2016, for example, the topic of the Programming Hackathon was Innovation Around Smart Cities. The data was sponsored by the San Diego International Airport and made available to our hackers. The airport's executives were very happy with the final results of the hackathon, mainly because:


  • They were inspired by the new creative apps and business models developed by our hackers, which could add a lot of value to their business.
  • They learned new ways to gain insight into the data they already had in their PI System.
  • They were able to detect where they could be more efficient in their industrial processes.


As we start organizing the PI World SF Hackathon 2018, we are looking for our data sponsor. This is where you come in! We are seeking a customer who may be willing to share their data with us for the event. A good data sponsor typically has the following qualifications:


  • Owns a PI System with AF already in place
  • Has a few data-oriented high level business challenges or aspirations
  • Has at least tens of assets and many hundreds of data streams in place
  • Has at least 1 year of historical data
  • Has a sampling rate of at least several samples per minute on the majority of the tags
  • Is willing to share their data with us – We are willing to consider an anonymized/obfuscated version of the dataset as well


If you are interested in becoming the new data sponsor for the Programming Hackathon, please don't hesitate to contact me by e-mail.


Both PI Web API and PI Vision require an SSL certificate upon installation. The default installation will create a self-signed certificate, but users will see an ugly certificate error when navigating to it. Users can click through these errors, but configuring it in this way is bad practice. If your website is configured correctly, then these errors indicate a potential man-in-the-middle attack. You want your users to alert you if they see these errors, not click through them on a daily basis.


The simplest way to get a secure certificate that provides the best user experience within your corporate network is to use your Enterprise Certificate Authority to generate it. Users will see a nice, green padlock:


In this post, I'll walk you through setting this up. I'll assume you have obtained the following:

  • A Server with PI Vision or PI Web API installed, or to be installed. This server will be referred to from now on as the PI Web Server
  • A Domain account that is a Local Administrator on the PI Web Server
  • A Domain Administrator on standby, in case changes need to be made (see later steps for details)
  • Permission from your IT department to use Active Directory Certificate Services automatic enrollment to obtain certificates for your PI System production environment.


  1. On the PI Web Server, log in using a domain account that is a member of the Local Administrators group.
  2. Click Start.
  3. In the Search programs and files box, type mmc.exe, and press ENTER.
  4. On the File menu, click Add/Remove Snap-in.
  5. In the list of available snap-ins, click Certificates, and then click Add.
  6. Click Computer account, and click Next.
  7. Click Local computer, and click Finish.
  8. Click OK.
  9. In the console tree, double-click Certificates (Local Computer), and then double-click Personal.
  10. Right-click Personal, point to All Tasks, and then click Request New Certificate to start the Certificate Enrollment wizard.
  11. Click Next.
  12. Click Next.
  13. Try to find the Web Server template. If you do not see it, click Cancel, follow the directions in Appendix 1 at the end of this article, then return and resume from step 9.
  14. Select the Web Server template. Click the warning icon below More information is required to enroll for this certificate. Click here to configure these settings.
  15. In the Subject name area under Type, click Common Name.
  16. In the Subject name area under Value, enter the fully qualified domain name of the server, and then click Add.
  17. In the Alternative name area under Type, click DNS.
  18. In the Alternative name area under Value, enter the fully qualified domain name of the PI Web Server, and then click Add.
  19. In the Alternative name area under Value, enter the machine name of the PI Web Server, and then click Add.
  20. Repeat the previous step for any other alternative name you would like users to use when navigating to the web application. Appropriate DNS entries will also need to be created, but this is beyond the scope of this article.
  21. Click OK.
  22. Click Enroll.
  23. Click Finish.
  24. Click Certificates then double click on your new certificate. On the Details tab, under Subject Alternative Name the names you entered above should be present.
  25. Install your software on your PI Web Server, be it PI Web API or PI Vision. If you've already installed the software, click Start, navigate to the PI Web API Admin Utility and follow the wizard to change your current self-signed certificate to your newly created certificate.
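Once the certificate is installed, you can confirm that the Subject Alternative Names from steps 17-19 actually made it onto the served certificate. As a hedged sketch, the helper below parses the dictionary shape that Python's `ssl.SSLSocket.getpeercert()` returns; the function name and the host names shown are illustrative, not from this article:

```python
# Hypothetical helper: extract the DNS entries from the dictionary that
# ssl.SSLSocket.getpeercert() returns, so you can confirm the enrolled
# certificate carries the expected Subject Alternative Names.
def san_dns_names(cert):
    """Return the list of DNS entries in a certificate's subjectAltName."""
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

# Example structure, mirroring what getpeercert() yields for a cert
# with a FQDN and a machine-name SAN (both names are made up):
cert = {
    "subject": ((("commonName", "piweb.example.com"),),),
    "subjectAltName": (("DNS", "piweb.example.com"), ("DNS", "PIWEB")),
}
print(san_dns_names(cert))  # ['piweb.example.com', 'PIWEB']
```

In a real check you would obtain `cert` from a live TLS connection to the PI Web Server rather than building the dictionary by hand.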

Appendix 1: If the Web Server Template is unavailable

  1. On the Certificate Authority Server (which is usually the domain controller), log in as a Domain Administrator or CA Administrator.
  2. On the CA computer, click Start, type certtmpl.msc, and then press ENTER.
  3. In the contents pane, right-click the Web Server template, and then click Properties.
  4. Click the Security tab.
  5. We need to add the computer account for the PI Vision server to this template, and give it Enroll permission. For detailed directions, follow the screenshot below and the directions underneath.
  6. Click Add...
  7. Click Object Types...
  8. Ensure Computers is checked.
  9. Click OK.
  10. Type the name of your PI Web Server into the object names box. In the example in the screenshot, the machine name for the server is MASTERWEB.
  11. Click Check Names and ensure that the account is found (the name should become underlined).
  12. Click OK.
  13. Under Allow, check the Enroll box with your PI Vision Server computer account selected.
  14. Click OK.

After following the above steps, go back to your PI Vision Server and continue the original steps.


Comments or corrections welcome. If you've got any questions, feel free to post them and we'll discuss!

This post contains an overview of an intern project designed to show the value of the PI System in a facility context. The project was undertaken at the Montreal OSIsoft office by two interns, Georges Khairallah and Ali Idrici, both of whom are studying Mechanical Engineering.




OSIsoft Canada ULC - Montreal is located on one floor of an old building in downtown Montreal. The building management team does not have a building management system (BMS) to manage energy, and thus they have no data about where most of the energy is being used. In fact, only the lighting systems can be controlled by the individual offices. Other systems, such as the HVAC system, are controlled by master switches that regulate the entire facility.

OSIsoft Montreal’s office has expanded throughout the past several years with a wider working area and two distinct spaces located on the same floor.

Despite not having a BMS, OSIsoft Montreal would like to demonstrate that they can measure and manage the office's lighting energy consumption and track human presence with their real-time operational data software, the PI System, in order to lower energy usage and reduce false alarms.


Project Description

OSIsoft Montreal is looking to reduce energy consumption from unnecessary lighting and to reduce the number of false alarms caused by the absence of occupancy information. Currently, there are no systems in place to monitor and track real-time usage or presence. However, the company is looking to implement a number of IIoT (Industrial Internet of Things) devices and sensors across the workspace and send the collected data to one PI System, where real-time analyses will be generated and displayed.


Summarizing, the two critical business issues defined by OSIsoft Montreal are:


Scope Overview – Situation 1: Energy Efficiency

  • Critical Business Issue: Unable to cut down the energy consumption from unnecessary lighting.
  • Problem/Reason: It is tedious to walk across all work areas before leaving the office, and there is no visibility into each room's lighting conditions. Real-time visibility of every room's lighting status, with metrics (energy usage, costs) across all working areas, is needed to determine where action is required.
  • Delta (Benefit):
    - Cut travel time before heading out of the office by 80%
    - Save 25% of energy consumption from lighting
  • Target Date: Mid-August 2017 – Presentation to the OSIsoft Montreal team


Scope Overview – Situation 2: Alarm Management and Delayed Departure

  • Critical Business Issue: Unable to reduce the number of false alarms.
  • Problem/Reason: False alarms disturb colleagues with unnecessary agitation and cause a 10-minute delay dealing with the alarm company and building surveyor when triggered. The absence of occupancy information leads to inaccurate assumptions about whether to activate the alarm at the time of departure. Real-time occupancy monitoring is needed so the alarm system is activated only when one person remains.
  • Delta (Benefit):
    - Reduce false alarms to 1/4 of current occurrences (currently: once a quarter)
  • Target Date: Mid-August 2017 – Presentation to the OSIsoft Montreal team


While OSIsoft Montreal currently has a workaround for these business issues, they have no time-saving solution, no effortless process for locating the lighting status of each room, and no easy method for detecting human presence in the office at all times.



Project Goals

The business goals of this project are to leverage OSIsoft’s real-time data infrastructure, the PI System, in order to:

  • Increase awareness on lighting consumption to the Montreal team
  • Highlight the zones which contribute the most to the energy bill
  • Track presence in the office at any time of the day
  • Indicate when there is only one person in the office to let them activate the alarm and easily locate where remaining lights must be shut off
  • Automate the procedure necessary before heading out of the office



How Are We Collecting the Required Data?


The collection of light intensity data is achieved by an Electric Imp.

The collection of human presence data uses an Omron D6T 1x8 low-resolution thermal camera temperature sensor connected to a DragonBoard 410c.

Both of these devices send the raw data as secured HTTPS requests to a REST endpoint in a Docker container hosted in the Azure cloud.


Project Architecture

The following figure represents the target data flow for the generation of lighting and human presence reports:


      Figure 1 – OSIsoft Montreal System Architecture Used For This Project



Using Analytics on PI AF to Analyze Incoming Raw Data


o   Data coming from Electric imps:

The ON/OFF light status is obtained by periodically comparing the current light intensity to a threshold value. Energy consumption can then be derived from the number of watts each light consumes when turned on. Other statistics, such as daily and weekly lighting energy consumption, are also computed in Analytics.


     Figure 2 – Asset Framework display of lighting status analysis
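The lighting analysis above can be sketched in a few lines of Python. This is a hedged illustration only (the actual analyses run in PI AF Analytics); the 300-lux threshold and 40 W fixture wattage are assumed values, not figures from the project:

```python
THRESHOLD_LUX = 300      # assumed cutoff between "light on" and "light off"
FIXTURE_WATTS = 40       # assumed power draw of one fixture when ON

def light_status(intensity_lux):
    """Return 1 (ON) if the reading exceeds the threshold, else 0 (OFF)."""
    return 1 if intensity_lux > THRESHOLD_LUX else 0

def energy_kwh(samples, interval_hours):
    """Energy used over evenly spaced intensity samples, in kWh."""
    hours_on = sum(light_status(s) for s in samples) * interval_hours
    return FIXTURE_WATTS * hours_on / 1000.0

readings = [500, 520, 100, 480, 90, 95]          # one reading every 10 minutes
print(energy_kwh(readings, interval_hours=1/6))  # 0.02 kWh (3 samples ON)
```

Daily and weekly rollups then reduce to summing these per-interval energies over longer windows.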


o   Data coming from Thermal sensor:

Temperature profiles are retrieved four times a second, and each profile contains 8 values. The following graph displays the raw data coming from the thermal sensor as someone walks past it and then comes back.


    Figure 3 – Temperature profile variation with respect to time (PI Vision display)


Using PI Analytics, an attribute named “Polarity” (see the yellow curve in fig. 4) is computed for every temperature profile. Polarity quantifies how far the temperature values are shifted to one side or the other. For simplicity's sake, the remainder of the analysis focuses on polarity variation.


     Figure 4 – Polarity variation with respect to time (PI Vision display)


When the polarity switches from negative to positive, it means someone walked past the sensor in one direction. Conversely, when the polarity switches from positive to negative, someone passed the sensor in the other direction. Further analytics can be performed on the Polarity attribute to extract its sign only.


     Figure 5 – Polarity sign variation with respect to time (PI Vision display)


A polarity sign switch from -1 to +1 triggers a positive increment in the counter, whereas a switch from +1 to -1 triggers a negative increment.
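The counting logic can be sketched as follows. This is a hedged Python illustration: the polarity formula here (a centroid of the 8-pixel profile relative to its midpoint) is one plausible definition, not necessarily the exact formula used in the project, and the temperature profiles are made up:

```python
def polarity(profile):
    """Weighted offset of a warm blob from the center of an 8-pixel row."""
    center = (len(profile) - 1) / 2.0
    total = sum(profile)
    centroid = sum(i * t for i, t in enumerate(profile)) / total
    return centroid - center

def count_transitions(profiles):
    """Count people: +1 on a -1 -> +1 polarity sign flip, -1 on the reverse."""
    count, prev_sign = 0, 0
    for p in profiles:
        sign = 1 if polarity(p) > 0 else -1
        if prev_sign == -1 and sign == 1:
            count += 1          # entered: sign flipped -1 -> +1
        elif prev_sign == 1 and sign == -1:
            count -= 1          # exited:  sign flipped +1 -> -1
        prev_sign = sign
    return count

# Warm blob moving left to right across the sensor (someone entering):
walk_in = [
    [30, 24, 22, 22, 22, 22, 22, 22],   # blob on the left: polarity < 0
    [22, 22, 22, 30, 24, 22, 22, 22],
    [22, 22, 22, 22, 22, 22, 24, 30],   # blob on the right: polarity > 0
]
print(count_transitions(walk_in))  # 1
```

Running the same profiles in reverse order yields -1, i.e. one person leaving.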




Using PI Vision 2017 to Display the Processed Data


o   Dashboard for Lights:


      Figure 6 – Live Lighting status for every room in the Montreal office (PI Vision display)


      Figure 7 – Light status for every Electric Imp (PI Vision display)


o   Dashboard for Human Counter:


      Figure 8 – Counting number of individuals in the Montreal office (PI Vision display)

In response to user voice:


Introduction:

Event frames are really useful for finding out when certain events happened within a specific time range.

However, it's not possible to run an analysis on an event frame while the event frame is still open.

Here is a workaround for how you can actually do it. The trick is to run the analysis under a certain condition and link that analysis to an event frame.

Intuitively, we would try to build an analysis based on an event frame, but that is not possible if we want to run the calculation during the event frame. Instead, we will see how to generate an event frame based on an analysis.

That will allow us to do some calculations during an open event frame.



Problem description:

Let's say we want to do some calculation over a time range for a periodic signal.

However, the time range is not fixed and the period of the signal can change over time.

In other words, we don't know the explicit start and end times of the calculation.

The only thing we know is that we want to run the calculation (finding the integral of the Sinusoid signal in our example) while the signal is increasing, and stop the calculation when it is decreasing.

Let's plot a graph to summarize what we want to achieve:


In our example, we will instead use the Sinusoid signal from the PI Interface for Random, Fractal, Station, Sinusoid Data:



As the period of the signal is not always the same we can’t use a fixed time range.

We will have to retrieve the start and the end times using another tag.


In this example we have a variable Trigger2 linked to a PI Tag called ST2. (This PI Tag will be needed if we want to create later on an event frame.)

This variable is equal to 1 when the condition of the calculation is met, otherwise it’s equal to 0.

Here we simply define Trigger2 such that:

  • If the previous value in the archive of the tag Sinusoid is lower than the current value of Sinusoid then the value of Trigger2 is equal to 1

  • If not; Trigger2 is equal to 0

If we were working with a “real” signal, the only difference would be the Trigger2 variable: it would have to take the noise on the signal into account. We would end up with something like this:
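As a hedged sketch of such a noise-tolerant trigger (in Python rather than PI Analytics, with an assumed deadband value), the idea is to flip the trigger only when the signal moves by more than a deadband, so small noise does not toggle it:

```python
DEADBAND = 0.5   # assumed noise amplitude; tune to your signal

def triggers(values, deadband=DEADBAND):
    """Return 1 while the signal is clearly rising, 0 otherwise."""
    out, state, anchor = [], 0, values[0]
    for v in values[1:]:
        if v > anchor + deadband:
            state, anchor = 1, v     # clearly rising: trigger on
        elif v < anchor - deadband:
            state, anchor = 0, v     # clearly falling: trigger off
        # within the deadband: keep the previous state and anchor
        out.append(state)
    return out

noisy = [0.0, 0.1, 1.0, 2.1, 2.0, 2.2, 1.0, 0.2]
print(triggers(noisy))  # [0, 1, 1, 1, 1, 0, 0]
```

Note that the small dip from 2.1 to 2.0 does not switch the trigger off, whereas the drop to 1.0 does.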

Then comes the trick of this analysis:


We create a variable startdate. This variable has to be mapped to a PI Tag (Calcultime in the example) because we will need to store it in the archive. Do not forget to set the Value Type of this tag to “DateTime” in AF.

Thanks to the analysis, we store the current timestamp of the Sinusoid tag if the trigger is equal to 0. If not, we don't send any data to the PI Data Archive.

That way, after the Trigger2 variable jumps from 0 to 1 (in other words, when the Sinusoid signal starts increasing), we no longer update the timestamp in the archive.

The value stored in the archive is the beginning of our integral calculation.

Then, we can use this timestamp to run our calculation from Calcultime to now ('*').

Now it's time to put everything together in a table for a better understanding:

While Trigger2 is set to 0 (when the Sinusoid value is not increasing), we store the current PI Time of the Sinusoid tag; the Calcultime and PI Time columns show the same value.

While Trigger2 is equal to 1, we no longer send the PI Time to the archive, so the value of Calcultime is the last value stored in the archive. This value corresponds to the beginning of our integral calculation Tot2.


Please note that the analysis scheduling is set to Event-Triggered. Indeed, we don't know how long the calculation's time range is, so it's important not to set it to periodic in this case.

Now we are running the integral of the signal from Calcultime to now, and we can see the result Tot2 increasing in real time:



About the precision of the calculation:

In our example, we decided to compare the previous event in the archive with the current value of the tag.

That way, the precision of the integral depends on the configuration of the Sinusoid tag (scan class, exception/compression parameters, etc.).


Link an event frame to this calculation:

It's possible to link this calculation to an event frame so we can easily find out when the calculation was running.

To do that, you should create a PI Tag linked to the Trigger2 variable. This was already done in the previous part, but it is required if you want to use the event frame.

Then we can backfill the data and see when the calculation was running over the last 2 days:

Thanks to the event frame, we can easily find out when the calculation was running, for instance using Coresight:


Please note that this method is quite configuration-heavy and requires at least 2 tags per analysis.

The ability to run calculations during an open event frame will be added in the next release of Analytics (2017 R2).

This article was written using a Virtual Learning Environment (VLE) virtual machine. If you have your own PI System, great! You're welcome to follow along with what you've got, but if you'd like to access the machine used to build this article, you must have a subscription to the VLE. You can purchase a subscription for 1 month or 1 year here. If you've already got a subscription, visit the My Subscription page and start the machine titled "UC 2017 Hands-on Lab: Tips and Tricks with PI Builder and PI System Explorer". Once provisioned, connect with the credentials: user: pischool\student01, password: student. You can work from the full manual for the lab by downloading it here.


Software Versions Used in this Article

PI System Explorer 2017 - 2.9.X
PI Asset Framework 2017 - 2.9.X



When building an Element template, it can be hard to figure out how to configure PI Point attributes. If you have a consistent tag naming convention, substitution parameters can be used directly, but what do you do if you don't? You could bind each attribute to its appropriate tag by hand, but this can give you headaches down the line when you improve the template and don't see your improvements propagate to all of the elements based on it. This article works through the best method for configuring these data references when you're in this situation. We're going to demonstrate by adding a new Attribute named Discharge Pressure to the Compressor Template, changing its unit of measure to psig, and setting its Data Reference to PI Point; then we'll add a child Attribute called Tag Name. If you find yourself in this situation while building an Asset Framework database in the future, follow this article to ensure you use best practices. In a nutshell:

The Bad Way - Hard-coded PI Point Data References

On the Element Template | On a Specific Element


The Good Way - Soft-coded PI Point Data References

On the Element Template | On a Specific Element


Prepare a "PI Servers" Element to Hold PI Data Archive Configuration

It's useful for any PI AF database to hold the PI Data Archive names inside attribute values. This makes it a whole lot easier if you ever have to move to another PI Data Archive with a different name: you'll just need to change a single attribute value to migrate your entire database! You only need to do this step once per database; you can then reuse it for all configuration in the future.

  1. Open PI System Explorer
  2. Press the Ctrl+1 key combination to go to the Elements view.
  3. Create an Element based on the PI Server Template and name it PI Servers. Hint: If you're doing this on your own system, you'll also have to create the PI Server template first. Head to the Library and create an Element Template called "PI Servers" with a single attribute of string type called "Server1".
  4. Click on the PI Servers element, then click on the Attribute tab in the Attribute Viewing pane.  Enter the server name into the Server1 Attribute.


Add a New Attribute on Your Element Template

  1. Press the Ctrl+3 key combination to navigate to the Library view.
  2. Select the Compressor Template under Element Templates.
  3. Click on the Attribute tab in the Attribute Viewing pane.
  4. Right click anywhere on the white space in the Viewing Pane and select New Attribute Template.
  5. Select the Attribute, press the F2 key, and type Discharge Pressure.
  6. For the Data Reference select PI Point.  Click inside the combo box for Default UOM and type in psig.
  7. Select the Discharge Pressure Attribute and set the Data Reference to PI Point.  Click the Settings button, then in the PI Point Data Reference dialog type %@\PI Servers|Server1% in the field next to the Data Server (this grabs the value of Server1 that we ended up with in the above steps), and then type %@.|Tag Name% in the field next to the Tag Name. If this syntax doesn't make much sense now, don't worry. We're going to create a sub-attribute later called "Tag Name" that this substitution syntax will grab.
  8. One last thing: it is a best practice never to use <default> units for a measurement. Click on the Source Units combo box and select psig from the available units of measure.
  9. Click the OK button. Note: The "quick" way to do the above steps (once you become familiar with the syntax) is to delete the text under the Settings button and type \\%@\PI Servers|Server1%\%@.|Tag Name%;UOM=psig directly.
  10. Select the Discharge Pressure Attribute.  Right click and select New Child Attribute Template.  Press the F2 key and type Tag Name.  Change the Value Type to String. Under Properties select Hidden. Normally you would mark Attributes as Hidden if they are not important for end users to see. In our case end users don’t need to see the Tag Name as long as the Discharge Pressure attribute is displaying correctly. However, it's sometimes useful to leave this "Tag Name" attribute as visible - some users like being able to see which point this attribute is bound to.
  11. Press the Ctrl+S key combination to Check In your changes.


Configure the Tag Name Attribute for a Specific Element

  1. Press the Ctrl+1 key combination to go to the Elements view.
  2. Select the first compressor element (name starts with K) in the Browser pane (Facility1>Area1>Rotating Equipment) then click on the Attribute tab in the Attribute Viewing pane.
  3. Select the child-Attribute Tag Name, press the F2 key, and type cdt158 for the value.  Press the F5 key to refresh.  The Discharge Pressure Attribute is now receiving data.



Once this is configured, you would use PI Builder to manually bind the tag names to your desired tags. Following the above procedure greatly enhances the manageability of your AF database, and is considered best practice at the time of publishing this article. If you run into any issues when working through this, or have any questions, you're welcome to post a comment!


Further Resources

  • If you're interested in learning PI AF, check out the online course
  • For a great article on tips and tricks with PI AF, check out this post
  • The full manual used to resource this post can be downloaded here


This article was written and adapted from materials originally developed by Ales Soudek and Nick Pabo.

As of the time of writing, there are only a few ways I can think of to make sure that a connector is running and healthy:


  1. Checking the tags that it is writing to and making sure they are updating.
  2. Checking the administration page to ensure that all lights are green.


The purpose of this post is to show you how to monitor the connectors at your organization using PowerShell and the AF SDK.


You might think the only way to check whether your connector is working is through the administration page, but that is only partially true! The items on the administration page can be retrieved by making a REST call to the connector web page. So all of the information you can view on the connector administration site can be extracted and written to PI tags, offering an easy solution for monitoring the health of your connectors.


I have included a script, which is attached to this webpage. If you'd like to skip straight to the part where I talk about the Powershell script and what it can do, please use the link in the table of contents to skip to that section. The attachments can be found at the bottom of this post.


Table of Contents



What types of information can we pull from the connector webpage?

First, let's cover where this information is stored. Pull up the administration page for your favorite connector. I'll be using the PI Connector for OPC UA. In the screenshot you see below, each of these fields can be queried by making a REST call to the connector. So, let's work on finding how to query for the status of the Connector, Data sources, and Servers configured to receive data from the connector.


I am using Chrome for this, but you can also perform similar actions in other web browsers. When on the Connector Administration page, hit F12. You should see the Chrome Developer Tools window pop up. From there, let's browse to the Network tab. The Network tab will allow us to see the requests being made as well as the responses being provided by the connector. Let's take a look at the Data Source Status (this shows up as Datasource%20Status). Expanding the object allows us to see the properties underneath it. For my data source, we can see that it is named OPCServer1, it has a status of Connected, and a message stating that I have No Data Filter set.


We can also see the URL that was used to retrieve this information from the Headers section.


Information on the Connector State, PI Data Archive, and AF connections can be found in a similar manner under the Network tab by looking at ConnectorState, PI%20Data%20Archive%20Connections, and PI%20AF%20Connections respectively.


How can we obtain this information using Powershell?

Now that we know what types of information we can get, let's go through how Powershell can query for and store this information.


Because we want this script to run periodically, we will need to store the credentials on the machine. But we don't want to store them in plain text on the machine running the script, so we will encrypt them. Let's set the username variable first:

#username for logging into the PI Connector Administration Page.

$user = "domain\user"


Next, let's store the password and encrypt it. We will then set the password to the encrypted file that contains the password:

#Convert password for user account to a secure string in a text file. It can only be decrypted by the user account it was encrypted with.

"password" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "file location for the stored password file"

$pass = "file location for the stored password file"


Finally, we will decrypt the credentials when running the script using the command below. These credentials can only be decrypted by the user that encrypted them, so make sure to encrypt the credentials with the same user that will be running this script.

#Credentials that will be used to login to the PI Connector Administration Page.

$cred = New-Object -TypeName System.Management.Automation.PSCredential `

-ArgumentList $user, (Get-Content $pass | ConvertTo-SecureString)


The connectors also use self-signed certificates, so you may get an error when attempting the GET request. To get around this, we will include the following code to ignore certificate errors:

#Ignore invalid certificate errors when connecting to the PI Connector Administration Page. This is because the connector uses a self-signed certificate, but Powershell wants to use a validated certificate.

Add-Type @"
    using System;
    using System.Net;
    using System.Net.Security;
    using System.Security.Cryptography.X509Certificates;

    public class ServerCertificateValidationCallback
    {
        public static void Ignore()
        {
            ServicePointManager.ServerCertificateValidationCallback +=
                delegate
                (
                    Object obj,
                    X509Certificate certificate,
                    X509Chain chain,
                    SslPolicyErrors errors
                )
                {
                    return true;
                };
        }
    }
"@

[ServerCertificateValidationCallback]::Ignore()

Now that all of that is out of the way, let's get to the part where we pull the information from the webpage. For this, we will be using the Invoke-WebRequest function. If we wanted to query for the data source status shown above, our function would look like this:

$DataSourceStatusResponse = Invoke-WebRequest -Method GET  "https://nlewis-iis:5460/admin/api/instrumentation/Datasource%20Status" -Credential $cred | ConvertFrom-Json

We are using the GET method, which we can see as the Request Method in the Headers. The login to the connector web page uses basic authentication, so we pass it the credentials we stored in the variable $cred. Finally, we pipe the result to the ConvertFrom-Json function in order to store the information retrieved from Invoke-WebRequest in a PowerShell object under the variable $DataSourceStatusResponse.


For our data source status, we can then inspect the variable to see what information we now have. Under the variable we can see our data source OPCServer1; if we had additional data sources, they would show up here.


If we browse further into the variable, we can then find the Message and Status fields we were looking for:


If we wanted to store the status in a variable ($OPCServer1_Status), we could then achieve this as follows:

$OPCServer1_Status = $DataSourceStatusResponse.OPCServer1.Object.Status
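The same navigation works in any language that can parse JSON. As a hedged sketch in Python, the payload shape below mirrors what the post describes (data source name, then Object, then Status/Message fields), but it is illustrative rather than a captured connector response:

```python
import json

# Assumed shape of the Datasource Status response, per the fields
# described above (OPCServer1, Status, Message are example values).
payload = """{
  "OPCServer1": {
    "Object": {"Status": "Connected", "Message": "No Data Filter set"}
  }
}"""

response = json.loads(payload)
status = response["OPCServer1"]["Object"]["Status"]
print(status)  # Connected
```

This is the direct analogue of `$DataSourceStatusResponse.OPCServer1.Object.Status` in the PowerShell above.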


Now we just need to retrieve the other information we want in a similar fashion and we are ready to write the values to PI tags!



Writing the values to PI

For this, we will use the AF SDK from PowerShell. There are also native PowerShell functions for the PI System that come with PI System Management Tools, which could be used instead of the AF SDK.


There are a few steps to this.


1. We need to load the AF SDK assemblies.

# Load AFSDK

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDKCommon") | Out-Null

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null


2. We need an object that can store the point attributes for the tags we will be creating. The script I created will automatically create the PI tags if it cannot find them.

#Create an object with point attributes for the points you are creating

$myattributes = New-Object 'System.Collections.Generic.Dictionary[[String], [Object]]'


3. Store the tag attributes in the tag attribute object. For myself, I am making these string tags with a point source of CM.

<# Add the attributes to your point. I am making the points that will be created
string tags, which corresponds to a pointtype value of 105.
Different point types can be found here: #>

$myattributes.Add("pointtype", 105)



4. Next, we will need to initialize the PI Data Archive, AF Server, buffering options, and instantiate the new values we will be using. We are using the default PI Data Archive and AF Server for this.

# Create AF Object

$PISystems=New-object 'OSIsoft.AF.PISystems'




# Create PI Object
$PIDataArchives = New-Object 'OSIsoft.AF.PI.PIServers'

# Use the default PI Data Archive (referenced later as $PIDataArchive)
$PIDataArchive = $PIDataArchives.DefaultPIServer



# Create AF UpdateOption

$AFUpdateOption = New-Object 'OSISoft.AF.Data.AFUpdateOption'


#Set AF Update Option to Replace

$AFUpdateOption.value__ = "0"


# Create AF BufferOption

$AFBufferOption = New-Object 'OSISoft.AF.Data.AFBufferOption'


#Set AF Buffer Option to Buffer if Possible

$AFBufferOption.value__ = "1"


# Instantiate a new 'AFValue' object to persist...

$newValueX = New-Object 'OSIsoft.AF.Asset.AFValue'


# Apply timestamp

$newValueX.Timestamp = New-object 'OSIsoft.AF.Time.AFTime'(Get-Date)


With that all out of the way, we just need to create our PI tag, assign it a value and timestamp, and send it on its way.


5. Assign a name to the PI tag.

# Assign Tag Name to the PI Point. Here I denote that this is for the data source OPCServer1 and I am retrieving the status.

$tagNameX = "OPCUAConnector.DataSource.OPCServer1.Status"


6. Find the tag, and create it if it does not exist. This finds the tag based on the PI Data Archive we specified earlier, as well as the tag name.

#initiate the PI Point

$piPointX = $null


#Find the PI Point, and create it if it does not exist



if ([OSIsoft.AF.PI.PIPoint]::TryFindPIPoint($PIDataArchive, $tagNameX, [ref]$piPointX) -eq $false)
{
     $piPointX = $PIDataArchive.CreatePIPoint($tagNameX, $myattributes)
}

#Set the PI tag for $newValueX to $piPointX

$newValueX.pipoint = $piPointX


7. Lastly, we can apply the value to $newValueX and write this value to PI! We set the value equal to the status we retrieved earlier from the data source response. We then use $newValueX.PIPoint.UpdateValue in order to write the new value to the tag.

    $newValueX.Value = $DataSourceStatusResponse.OPCServer1.Object.Status

    # Write the value to the tag using the update and buffer options from step 4
    $newValueX.PIPoint.UpdateValue($newValueX, $AFUpdateOption, $AFBufferOption)



And that's it! That is all the code required in order to pull the information from the connector page and write it to a PI tag.



The Connector Monitoring Script

If you were wondering while reading this whether or not someone built out a script that already pulls in some of the relevant information, then you're in the right place. While writing this I also developed a script that will write the data from the connector pages you supply it with to PI tags. Let's go over how it works.


How the script works:

  1. You supply it credentials based off of the method we discussed earlier, where we encrypt the password.
  2. You provide it a list of connectors. In this list there is the name of the connector (which will be used in the tag naming convention), as well as the URL to the admin page. Make sure to exclude the /ui at the end of the URL. These are then stored in an object called $Connectors.

#Create an object called Connectors. This will hold the different connectors you want to gather information from.

$Connectors = @{}

#When adding a connector, give it a name, followed by the link to the admin page, like below (without the trailing /ui):
$Connectors.Add("OPCUA_nlewis-iis", "https://nlewis-iis:5460/admin")


    3. A connector object is then generated and the items in the list are added to this object.

    4. We query the web pages and then write the values to PI for each of the objects in the connector object.


What the script does

  • Tags are based on the naming convention of: <Provided Connector Name>.<Type>.<Server>.Status, where the type can be DataSource, AFServer, or PIServer
    • For the status of the AF Server (AF Server is named nlewis-af1) for a connector I named OPCUA_nlewis-iis, the tag would be named OPCUA_nlewis-iis.AFServer.nlewis-af1.Status.
  • If the script cannot connect to the admin page, it writes an error message to the connector state tag.
  • If the service is running but the connector is stopped via the webpage, the script writes to all tags for that connector that the connector is stopped.
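As a quick sketch of the naming convention above (the connector and server names are taken from the example earlier in this post):

```powershell
# Tag naming convention: <Provided Connector Name>.<Type>.<Server>.Status
$connectorName = "OPCUA_nlewis-iis"   # the name you gave the connector
$type          = "AFServer"           # one of: DataSource, AFServer, PIServer
$server        = "nlewis-af1"

$tagName = "{0}.{1}.{2}.Status" -f $connectorName, $type, $server
# $tagName is now "OPCUA_nlewis-iis.AFServer.nlewis-af1.Status"
```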


There are two parts to the script. The first part generates the encrypted password file. The second part you run to pull the information, create the tags if they do not exist, and write to them.


Please edit the code to include the connectors used in your environment. The scripts can be found attached to this post.



I am thrilled to announce the posting of the PI System Connectivity Toolkit for LabVIEW on the National Instruments Reference Design Portal.


National Instruments equips engineers and scientists with systems that accelerate productivity, innovation, and discovery.


National Instruments measurement platforms often connect to the kinds of sensors and perform the kinds of analyses that aren’t usually found in the industrial control systems, which are traditionally connected to the PI System.  Examples of this are vibration analysis, motor current signature analysis, thermography, acoustic, electro-magnetic induction, and the analysis of other equipment condition performance indicators.


By enabling bi-directional communication between LabVIEW and the PI System, maintenance and reliability personnel can gain deeper insights, not only into the condition of equipment, through analysis of the signals in LabVIEW, but also into the process conditions that affect the equipment and vice versa.


LabVIEW edge analytics are enhanced by process data from other monitoring and control systems via the PI System.  The PI System real-time data infrastructure furthermore makes the LabVIEW data available enterprise-wide, for better insights and decision-making across an organization, and so that data can be integrated with other systems for predictive analytics, augmented reality, and computerized maintenance management for automating maintenance processes.


To obtain installation instructions, LabVIEW Virtual Instruments, and sample code files, see the following posting on the National Instruments Reference Design Portal:


The write-to-PI function requires a license for the PI-UFL Connector.  Please contact your Account Manager or Partner Manager.


The read-from-PI function requires a PI Web API license which can be downloaded and used for free for development purposes from OSIsoft Tech Support website.


For more information on LabVIEW, please see


Please direct any questions to


Balancing on the Edge

Posted by jkorman Employee Jan 10, 2017

Please check out my blog post about balancing Edge vs Cloud computing and let me know your thoughts!


Kind Regards,


While developing web-based applications that leverage the PI Web API, I often find myself asking the following question:

"How can I get all WebIds of a subset of attributes for all elements?"

One obvious use-case that comes to mind is to display a table of attribute data for all elements of a certain type. Maybe I have a template that defines 20 attributes, but I only need to display 10 of them. Of those 10 attributes, maybe I only want to display snapshot values for 6 of them, and the other four I want to trend in a sparkline.

In order to accomplish this, a series of requests to the PI Web API needs to be made:

  1. Get all the WebIds for the elements I need
  2. Loop through each of these elements, and for each element get all the WebIds for the attributes I need
  3. Make streamset requests to the PI Web API with the WebIds of my attributes

In the past, making all of these calls to the PI Web API did not scale very well in terms of performance. Today, we have batch requests with request template functionality that makes implementing this use-case a lot easier. Even with batch requests, however, I have discovered some performance implications that I believe are worth knowing about. I will share these in this blog post.



Setting up the batch request


As a starting point, we need to write a batch request to get the elements we want. Throughout this blog post, I am using the public PI Web API endpoint with the NuGreen database. Suppose I want to get all elements of the "Boiler" template type. This batch request may look something like this:


{
 "database": {
  "Method": "GET",
  "Resource": "\\\\PISRV1\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["database"],
  "Parameters": ["$.database.Content.Links.Elements"]
 }
}

Notice that I am specifically asking for elements that are instances of the "Boiler" template. Also, I am using the "searchFullHierarchy" parameter in the element sub-request. I often include this parameter because I need all element descendants of a specific root element. As such, I will sometimes write my element query as follows:


{
 "rootElement": {
  "Method": "GET",
  "Resource": "\\\\PISRV1\\NuGreen\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["rootElement"],
  "Parameters": ["$.rootElement.Content.Links.Elements"]
 }
}

The only difference here is that I have changed my starting point from an AF database to a specific AF element. For the rest of my examples, I'll be sticking to the database method.


Moving on to attributes


Now that we have the element batch query, we can expand it to get all attributes for each of the elements returned. This is where the new request template functionality of batch requests in the PI Web API comes into play. Here is what the query may look like:


{
 "database": {
  "Method": "GET",
  "Resource": "\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
  "ParentIds": ["database"],
  "Parameters": ["$.database.Content.Links.Elements"]
 },
 "attributes": {
  "Method": "GET",
  "RequestTemplate": {
   "Resource": "{0}?searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path"
  },
  "ParentIds": ["elements"],
  "Parameters": ["$.elements.Content.Items[*].Links.Attributes"]
 }
}

Notice the use of RequestTemplate in the "attributes" sub-request. As documented in the PI Web API:


A request can alternatively specify a request template in place of a resource. In this case, a single JsonPath may select multiple tokens, and a separate subrequest will be made from the template for each token. The responses of these subrequests will be returned as the content of a single outer response.


This means my batch query is going to loop through all of my elements, and make a separate sub-request for each element's attributes. Even better, this is all being handled internally by the PI Web API... pretty cool!

Before we continue, there are a few things to note about the attribute query:


  1. I am also using searchFullHierarchy for the attributes. This is important because an element can have many levels of nested attributes. Writing a batch query to loop through all these levels in a generic way would probably be impossible. Luckily we have "searchFullHierarchy" for this.
  2. I have included "selectedFields" in all my sub-requests. As we will see later, this results in a MAJOR performance improvement over letting PI Web API return all of its default metadata.


Now, moving on...


What if I only want certain attributes?


Well, you can do that. PI Web API 2016 introduced a new function to request multiple attributes by WebId or path in one call. Suppose I only want the following attributes for my Boiler elements:


  • Asset Name
  • Model
  • Plant


Then our batch query may look something like this:


{
 "database": {
  "Method": "GET",
  "Resource": "\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
  "ParentIds": ["database"],
  "Parameters": ["$.database.Content.Links.Elements"]
 },
 "attributes": {
  "Method": "GET",
  "RequestTemplate": {
   "Resource": ";Items.Object.Path&path={0}|Asset Name&path={0}|Model&path={0}|Plant"
  },
  "ParentIds": ["elements"],
  "Parameters": ["$.elements.Content.Items[*].Path"]
 }
}


Here, I'm using the multiple attributes function call and supplying the "path" parameter in the resource URL multiple times. Each instance of this parameter appends the relative path to the attributes I want to the sub-request parameter (which happens to be the element path). From here, it is only a matter of writing your client-side code to properly construct the resource URL based on the attributes you want.


Where do I go from here?


Now that we have all the WebIds for my attributes, we have to decide what to do with them. Usually I will start by creating a flat data structure that maps my element paths to attributes to WebIds. In JavaScript, this may look something like this:


// HTTP response is stored in the `res' variable
var WebIds = {};

res.elements.Content.Items.forEach(function (element) {
    WebIds[element.Path] = {};
});

res.attributes.Content.Items.forEach(function (subRes) {
    subRes.Content.Items.forEach(function (attribute) {
        var path = attribute.Object.Path,
            elementPath = path.substring(0, path.indexOf('|')),
            attributePath = path.substring(path.indexOf('|'));

        WebIds[elementPath][attributePath] = attribute.Object.WebId;
    });
});

After putting the WebIds into a structure that is a bit more usable, you could use them in a variety of ways. Typically I will create an additional PI Web API batch call for all of the streamsets I need. For example, I may want to use an "end" streamset for 5 of my attributes, but use a "plot" streamset for only 2 of them.


How is the performance?


I ran different flavors of these types of batch queries and collected the response times. First I will present the raw results and then comment on them.


Query description | Response time (Path and WebId only) | Response time (all metadata)
----------------- | ----------------------------------- | ----------------------------
All attributes for all boiler elements | 143 ms | 2611 ms
All attributes for all elements | 435 ms | 3809 ms
Specific attributes (3) for all boiler elements | 180 ms | 186 ms
Specific attributes (3) for all elements | 635 ms | 1120 ms
All attributes except one for all boiler elements | 453 ms | 2895 ms
All attributes except one for all elements | 3249 ms | 5272 ms


I included results for all elements (not filtering by template type) to demonstrate how these batch queries scale. I do NOT recommend doing this on a large AF structure.


Based on these results, I have made the following observations. YMMV, however, as the PI Web API appears to do some internal caching that can cause successive calls to perform faster.


  1. When getting all attributes (not specific ones using the "multiple" function call), it is important to be mindful of which fields you are selecting. The more metadata you ask for, the longer the request will take.
  2. Using the "multiple" function call for ad-hoc attributes does not perform very well if you have a lot of elements and are requesting a lot of attributes for each element. You're better off just asking for all attributes with a call to "attributes," unless you only need a small subset of attributes.
  3. The more elements that need to be processed, the longer the query. This makes intuitive sense and is to be expected.


Concluding remarks


I hope this blog post is helpful to you as you use the PI Web API to develop your own web apps. What other ways have you discovered of getting attributes with the PI Web API?

I would like to share some thoughts about PI calculation datasets in ProcessBook and what would help us to keep – I’d rather say bring - them under control.

Undoubtedly, datasets come in very handy for users to do some simple calculations on the fly. But they are also a challenge for PI administrators when they start to get out of control - I'd almost say they become a mess then.


How did that come about?

In a perfect "PI world," data is accessible by anything and anyone, and tag names never change. Users start building displays with calculations and - as they are very handy - they tend to spread more and more all over the company.


In reality, we have PI security and systems that change over time. The display builder and the user do not always share the same access rights. That's where it gets weird - not primarily for the user, but definitely for the administrators. Thousands of error messages start to flood the server logs and make them almost unreadable:

User query failed: Connection ID: 6092, User: xxx, User ID: yyy, Point ID: 0, Type: expcalc, …


Users often are not even aware of using datasets. They picked them up with a copied display. Missing data access or renamed points are the most common source of trouble.

Besides a small green spot becoming red (status report), there is nothing that could draw the user's notice. That's nothing unusual in an extensive display, since it could also be triggered by a temporary system digital state.

Individual datasets, spread all over the company, are not manageable - and that's by far not the only handicap.


How to relieve the symptoms?

First of all, error messages should appear at the place where the error occurs.

It depends on the point of view what the right place would be in the case of calculation datasets.

Technically, the PI Server might be the right place; practically, and for logical reasons, it'd be the client. That's the only place it can be fixed.

Here are my proposals:

Provide options to

  • prompt users for calculation dataset problems in ProcessBook
  • automatically disable datasets that can't be executed (security, tag not found) in ProcessBook
  • suppress those error messages (Point ID: 0) on the server
  • disable datasets in ProcessBook (stepwise, e.g. creation, execution, disable)

How to cure?

What I definitely see is an urgent need (at least in our case) to replace the unmanageable decentralized approach with a centralized one.

This is where AF On Demand Calculations come in.


Anyway, replacing datasets with On Demand Calculations will be a big challenge and there are several topics that need to be discussed, e.g.:

  • support for migration from datasets to AF On demand calculations (export)
  • how to provide the individual flexibility of personal datasets (may be an organizational matter)
  • ...



From an administrators point of view the most annoying issue is the log entries for failing calculations in the server log.

500 messages per hour on a 50k server with 100 connected ProcessBooks are more the rule than the exception. An option to suppress those calculation errors with "Point ID: 0" would be a blessing.


A good starting point on the client could be to make users more obviously aware of the problem of failing calculation datasets.

This can (and maybe should) be annoying, and therefore an option to disable specific datasets is needed. Together with an option to automatically disable permanently failing calculations, this would be perfect.


Finally, a centralized and manageable approach with e.g. On Demand Calculations should be the goal - with benefits for all the new and future clients. Let’s do it step by step.


Maybe someone wants to pick this up and make an enhancement request out of it.


I have seen quite a few questions and confusion over the last year regarding the timestamps returned when querying data using the PowerShell Tools for the PI System.  Specifically, that the timestamps returned do not match the client time-zone settings similar to the behavior of other PI client tools. (PI DataLink, PI ProcessBook, PI Coresight, PI SMT etc)    It also came to my attention that there was not a clear explanation for this behavior in documentation or here on PI Square at this time.


Current time on test machine:

In PI SMT - values for SINUSOID over the last two hours:

Using PowerShell tools for the PI System - values for SINUSOID over the last two hours:

# connect to PI Data Archive
$con = Connect-PIDataArchive -PIDataArchiveMachineName "afink-pi1" 

# set start time to two hours ago
$starttime = Get-Date
$starttime = $starttime.AddHours(-2)
# set end time of now
$endtime = Get-Date

# query values of SINUSOID for last two hours
$values = Get-PIValue -PointName Sinusoid -StartTime $starttime -EndTime $endtime -Connection $con

# write values to console
Write-Output $values


The PI Data Archive stores all data using UTC time.  Therefore, when a PI client tool requests data for a tag, the values are returned with UTC timestamps.  It is the responsibility of the PI client to convert these times from UTC to local time based on the Windows Time Zone settings on the client machine.  When a client requests data from the PI Data Archive, the timestamps for the returned values are absorbed by the client as System.DateTime .NET objects in UTC time.  The client then calls the ToLocalTime() method on the returned values and populates the spreadsheet, trend, or other symbol with the timestamp in the client's local time zone.  This is the behavior of OSIsoft's most used client tools such as PI DataLink, PI Coresight, PI ProcessBook.


The PowerShell tools for the PI System, however, do not automatically call the ToLocalTime() on returned values, so all values are still shown in UTC by default.  This is what raises the questions/confusion described above.



Luckily for us, since PowerShell is also built on .NET, we simply need to apply the same steps used by all other PI client tools as described above. The results of my SINUSOID query are stored in the variable $values. If we select the "Timestamp" property of our resultant events and pipe it to the Get-Member cmdlet (aliased as "gm"), we can see the object type is indeed System.DateTime. We can also see the various methods and properties associated with the object type.



Specifically, notice the ToLocalTime() method.  Using this method, we can verify that we can now return the values with the proper client time zone timestamps.
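Interactively, that check might look something like this (a sketch using the $values variable from the query above):

```powershell
# Inspect the type of the timestamps returned by Get-PIValue
$values[0].Timestamp.GetType().FullName   # System.DateTime

# Convert a single timestamp from UTC to the client's Windows time zone
$values[0].Timestamp.ToLocalTime()

# Or convert all of them at once
$values | ForEach-Object { $_.Timestamp.ToLocalTime() }
```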


Finally, since the ultimate goal is often to generate a .csv or .txt file with the resultant archive data, we can use some PowerShell tricks to replace our UTC timestamp column with our client local time column. Specifically, using calculated properties.


In fact, thanks to the PowerShell pipeline, we can select our properties, build our custom output column, and write the result to a .csv file all in a one-liner:

# selects the value, and creates a custom column for client tz timestamps and outputs to file C:\values.csv
$values | Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  | Export-Csv -Path C:\values.csv -NoTypeInformation


Let's break this down a little bit... this one-liner can be broken down into 3 parts separated by the "|" characters. The | allows us to take the output of one cmdlet or script block and pass it as the input to the next cmdlet or script block.


Part 1:

$values |


Here we are simply setting our "start off point".  We currently have our $values variable which contains our SINUSOID values and UTC timestamps.  Keep in mind this variable also contains other properties, such as isGood, isAnnotated, and WriteMode.  We take all of this information and pass it to the next step:


Part 2:

Select-Object -Property value,@{Name='Timestamp (Local Time Zone)';Expression={$_.timestamp.tolocaltime()}}  |


In this case, we are only interested in the value and our new client timezone timestamp properties for our resultant .csv file.  Therefore, from our query results, we are only selecting the "value" column and our new column "Timestamp (Local Time Zone)", which we are creating by specifying the Name of the new column, and the Expression to evaluate for the new column.  In PowerShell, we do this by adding the new column as a hash-table, which is why we must use the @{Name='';Expression=''} syntax.  The operation we want to perform to generate the data for our "Timestamp (Local Time Zone)" column is call the ToLocalTime() method as previously discussed.  The "$_." tells PowerShell to call the timestamp.ToLocalTime() method on whatever object was piped to it, and from Part 1, we had piped our $values variable.  Therefore, we will be taking the "timestamp" column of our $values variable, and calling ToLocalTime(), resulting in our new column "Timestamp (Local Time Zone)".


For more information on calculated properties in PowerShell, check out the following TechNet article:

< Windows PowerShell Tip: Using Calculated Properties >


Part 3:

Export-Csv -Path C:\values.csv -NoTypeInformation


Now that we have selected only the columns we are interested in from the previous step ("value" and our new calculated "Timestamp (Local Time Zone)" columns), we can export the result to a .csv file using Export-Csv and specifying a path for the output file.  The NoTypeInformation flag prevents object type data from being pre-appended to the .csv file header.  Oftentimes this extra note in the .csv file can be cumbersome, so we can remove it in this way.  Without the -NoTypeInformation flag, the output file will look like this:


Now, we have a .csv export of our desired PI data and have it properly returned in our client time zone which we can now use for further analysis in other programs.  Furthermore, since we were able to do this in PowerShell, we can run our queries in parallel, against multiple servers, and in fully automated fashion.


The full script would look something like this:

# connect to PI Data Archive
$con = Connect-PIDataArchive -PIDataArchiveMachineName "afink-pi1" 

# set start time to two hours ago
$starttime = Get-Date
$starttime = $starttime.AddHours(-2)
# set end time of now
$endtime = Get-Date

# query values of SINUSOID for last two hours
$values = Get-PIValue -PointName Sinusoid -StartTime $starttime -EndTime $endtime -Connection $con

# write values to console
Write-Output $values

# selects the value, and creates a custom column for client tz timestamps and outputs to file C:\values.csv
$values | Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  | Export-Csv -Path C:\values.csv -NoTypeInformation
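Since the connection and query are just cmdlets, the same pattern extends to multiple PI Data Archives. A hedged sketch (the server names here are placeholders):

```powershell
# Query SINUSOID from several PI Data Archives and export each result to its own CSV
$servers = @("afink-pi1", "afink-pi2")   # placeholder server names

foreach ($server in $servers) {
    $con = Connect-PIDataArchive -PIDataArchiveMachineName $server

    # last two hours, as in the script above
    $values = Get-PIValue -PointName Sinusoid -StartTime (Get-Date).AddHours(-2) -EndTime (Get-Date) -Connection $con

    # same calculated-property one-liner, one output file per server
    $values |
        Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.ToLocalTime()}} |
        Export-Csv -Path "C:\values_$server.csv" -NoTypeInformation
}
```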


Happy slinging!


- Adam Fink


Osi to the cloud

Posted by KristofDM Dec 15, 2016

So, we are moving the entire PI infrastructure to the cloud - or at least we are trying to...

We are looking at IaaS and PaaS where possible.

I'll write down my successes (and losses)  here to help the community members that might have the same idea.


Please note that not all things we do are 'supported' by OSIsoft... (yet!).


First, some info on the environment. We have:

  • a PI Data Archive 3.4.935.72 (2015 R2) server
  • PI Coresight
  • PI SQL Data Access Server
  • a 'heavy consumer app' server
  • SQL Server for Coresight and AF


To make sure we make the most out of it, I wanted to have the AF and Coresight databases hosted in PaaS.

This is not yet supported and was only tested on an older PaaS SQL edition (pre v12).

We have VPN with azure and are able to join AzureVM's in our local AD.



I went ahead and installed the PI Data Archive on an Azure VM with SSD storage; no problems here.



Installed Coresight on an Azure VM and adapted the connection strings to use PaaS. Migrated the on-premise database to PaaS using SQL Management Studio.


1: The DvService user (in the Coresight SQL database) is linked with a SQL login from the domain!

=> Not supported in Azure, so it was dropped and recreated on PaaS.

2: The Coresight connection string to SQL in web.config was adapted so an Azure database can be used:


<add name="CoresightConnectionString" connectionString="Data Source=<YourPaaSServer>.database.windows.net;Initial Catalog=YourPaasDatabaseName;User ID=DatabaseUser;Password=DatabaseUserPassword;Encrypt=true;Trusted_Connection=false;Application Name='Coresight Web Service'" />


It works! The Coresight overview page no longer shows any errors.



Moving the AF database to the cloud sql PaaS.

I probably have an advantage here since I'm a classic database admin, so if anything is not clear, shoot!


The AF database in SQL Server uses some stored procedures/functions to create a replica, get the status of the SQL Agent, get the sysadmin name, etc. ...

Since these are all calls to the database instance (the SQL engine), they will no longer work in a PaaS database. For obvious reasons, Microsoft does not allow these kinds of queries.

The instance catalog is not of the traditional SQL Server kind and thus is unusable.


Since this is a tricky part, we made a copy of the database in our local sql server (PiAfCopy) and:


Changed the stored proc for the sa user to always return 'SA' as a string value.

Changed the check-SQL-Agent job to always return 1.

Dropped the procs for the replica (OSIsoft discourages users from using it anyway).

Changed all Windows logins to SQL logins.


Adapted Functions:




Dropped Stored Procedures:





Now, using SQL Management Studio 2016, I deployed the database to the PaaS SQL.

Issue: Out of memory.

This can be quite intensive on the client computer, so having enough RAM is essential! In case you haven't, try closing as many apps as possible.

You could also do this on the database server itself, but it needs an internet connection!


After some time (it takes a long time) the database should be deployed in Azure PaaS.


Installing AF Server + System Explorer!


We've set up an IaaS server and installed AF like we normally do. We disabled the database execution utility and disabled the connection check while specifying the database server; we just specified the PaaS SQL server.


Installation went smooth.


The connect string needed to be changed, this time in AFService.exe.config. It's located in C:\Program Files\PIPC\AF. It is somewhat different from Coresight...


    <add key="connectString" value="Persist Security Info=False;server=tcp:<YourPaaSServer>.database.windows.net,1433;database=YourPaasDatabase;USER id=DatabaseUser;PASSWORD=DatabaseUserPassword;Application Name=AF Application Server;" />


Restarted AF and...

It worked!

Initially I had an error while creating new elements:

"cannot check in model with unique id {Some very long SID} because there is another model with an identical name"

But after retrying a few times, it vanished!


Next steps:

Analysis service, Notifications, pi sql data access server.

To be continued!




After installing the PI Data Access layer on an IaaS server, running the PI SQL Commander fails.

Under the hood, PI SQL Commander tries to use the (undocumented) system stored procedure sp_oledb_ro_usrname.

The sp_oledb_ro_usrname stored procedure returns the OLE DB database read-only status.

These kinds of queries are not supported on Microsoft SQL Azure PaaS.


Here ends the SQL PaaS try-out.


We'll look at Hosting the SQL on premise ...


Excel Is Our Friend – Part V

Posted by rkoonce Dec 14, 2016

Excel Is Our Friend – Part V

Congratulations! You have escaped Interface Island! And if you are ever caught on another similar island, you have developed the survival skills to survive and eventually escape. The bad news is that some of your toolbox was left behind on that island. When you find yourself in need of the tools there, you could always swing by and get a "copy." Your shelter was left behind too; it will have to be rebuilt. On a new island, project references will need to be re-added and functions re-created in the new island's project. But what if there was a way to take the recyclables with you? Like that user-defined function? That might come in handy anywhere. So, the good news is that you can have your cake and eat it too.

How to: Make Your Custom Spreadsheet Functions Available Anywhere

To use a custom function, the workbook containing the module in which you created the function must be open. If that workbook is not open, you get a #NAME? error when you try to use the function. Even if the workbook is open, if you use the function in a different workbook, you must precede the function name with the name of the workbook in which the function resides. For example, if you create a function called DISCOUNT in a workbook called Personal.xlsb and you call that function from another workbook, you must type =personal.xlsb!discount(), not simply =discount().

You can save yourself some keystrokes (and possible typing errors) by selecting your custom functions from the Insert Function dialog box. Your custom functions appear in the User Defined category:


The instructions above reference an .xlsb workbook, but the screenshot shows an .xlsm… my workbook was created long ago, originally as an .xls (I am a dinosaur).

* Tip: An even easier way to make your custom functions available at all times is to store them in a separate workbook and then save that workbook as an add-in. You can then make the add-in available whenever you run Excel. Detailed instructions are found at the site linked here. This varies slightly from my own “old skool” methods, but I agree with the authors that the new way seems easier. I’ll be converting to an add-in when I get the chance.

Note: An .xlsb extension indicates the Excel Binary Format. The .xlsm extension denotes a macro-enabled workbook. An .xlam extension indicates an add-in. All of these formats require a trusted location and/or the much more complicated trusted-publisher setup.

  • How to: Let Excel Teach You VBA

All of us can learn a lot of VBA by using the “Record Macro” function. After recording your actions within Excel, just examine the VBA code afterward and learn. It’s like pulling the curtain back on the Wizard of Oz. Prior to VBA, I was using Excel 4’s macro language, and the macro recorder was instrumental for me, personally, when first learning VBA. I still use it on occasion.

* On the Developer Ribbon, click “Record Macro”.
* Note the Macro Name and location.
* Perform some actions within the workbook: select a range of cells, change the font, change the number format, type some text or a formula.
* Click the Stop Recording button (same location as above).
* Examine the code that was generated.


  • FizzBlog Exercise:

  • Create a new macro-enabled workbook.
  • Copy the mySplitter function (the function we created earlier in this blog series) into this new workbook.
  • Save this workbook as an Excel Add-In > “MyFunctions.xlam”
  • In Excel, go to File > Options > Add-ins and load the newly created add-in.
  • Create a new workbook and click Insert Function.
  • FizzBlog Challenge:

  • In the dialog to select the user defined functions, is the function name prepended with the add-in worksheet name? workbook name? Or none? Prove it with a screenshot.
  • Were you able to make it work with a regular workbook? Or does it only work in macro-enabled workbooks?
  • Did you have to do anything special to make it work? (Did I leave out any significant steps in the FizzBlog Exercise?) If so, please explain.
  • Sharing your recyclables:

  • If you have a useful reusable function(s), please share it below.
  • Perhaps you just had an idea for one, please post it as a comment! (Santa might be reading this thread?)

This entry concludes my blog series for "Excel Is Our Friend" (perhaps I will append it someday). I hope that some of you have learned a few things along the way. Regardless, I really hope you enjoyed the series.

I plan to begin another series soon, but first I will have to choose my next destination. I have several ideas that I have been kicking around, but I would welcome any suggestions regarding any "islands" you would like to explore.

Cheers! (from the grass hut)



In line with our partnership with Dell, our Systems Engineering team in Singapore received an opportunity to test a Dell Edge Gateway unit for a short period. The Dell Edge Gateway is like a mini PC designed for industrial use to aggregate, secure, and relay data from diverse sensors and equipment. It is able to accept many communication protocols, from BACnet, Modbus, and CANbus to modern wireless mesh networks like Zigbee, 6LoWPAN, and Z-Wave.


For this testing, we decided to install the full PI System and connect it to temperature sensors to perform simple, direct data-collection examples.
In this post, we will show what we did on the Dell Gateway and go through 2 examples:

1) how to connect and collect data from an industrial thermometer transmitter via RS232 port

2) how to collect data from Temperature-Humidity sensor via Raspberry Pi through PI Web API




Sensors: Comet Thermometer (T0310) with RS232 output and Temperature-Humidity sensor (DHT22) (with jumper cables, breadboard and 10k Ohm resistor )

Development Board (for example 2): Raspberry pi3 Model B

Gateway Device: Dell Edge Gateway Model 5000. OS - Windows 10 IoT Industry. Processor - Intel Atom E3827 1.75 GHz 2 Cores. Memory - 8 GB

PI System component installed: PI Data Archive 2016, PI AF 2016 (including SQL Server 2012 R2 Express), PI ProcessBook, PI Coresight and PI DataLink (including MS Excel 2013)

Data Collection Method: PI Interface for Modbus Serial PLC (Example 1) and PI Web API 2016 (Example 2)


In this post, we will assume that all of the PI System components have been installed successfully on the Dell Gateway beforehand. The Dell Gateway unit is running in a workgroup and does not have internet access (although that is possible).


Example 1

The Comet thermometer (T0310) comes with an RS232 serial output port, which can be connected directly to the Dell Gateway's COM1 serial port.

The PI System on the Dell Gateway can easily collect the sensor’s real-time reading (in degrees Fahrenheit) into a PI tag via the PI Modbus Serial interface.


If you are already familiar with the PI Modbus interface and its tags, the interface configuration was straightforward with all default settings.

As for the PI tag configuration, you need to follow the interface manual. Some of the important tag attributes are:

  • Point Source
  • Point Type
  • Location 1
  • Location 2
  • Location 3
  • Location 4
  • Location 5
  • Instrument Tag


Once we started the interface, the data then started flowing to the PI server.

We built a display in PI ProcessBook and imported it into PI Coresight.



Example 2

Plug the Temperature-Humidity sensor (DHT22) into the breadboard and connect the jumper cables as per the diagram below.

  • Connect Pin 1 of the sensor to 3.3V GPIO
  • Connect Pin 2 of the sensor to GPIO4
  • Connect Pin 4 of the sensor to Ground
  • Connect an Ethernet cable between the Dell Gateway and Raspberry pi Ethernet port

(Wiring diagram: DHT22 connected to Raspberry Pi GPIO)

Once everything is ready, on the PI Server side we created 2 PI tags to collect the sensor’s readings (Temperature and Humidity). These 2 tags can have the 'Float32' type and use 'PS = L'. Other tag configuration can be kept at the defaults.
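As an aside, tags like these could also be created programmatically through PI Web API's data-server points collection. A minimal sketch under stated assumptions (the data-server WebId placeholder and certificate file name are assumptions; in this post the tags were simply created manually):

```python
import requests

def point_definition(name):
    # Point settings used in this post: Float32 point type, point source 'L'
    return {'Name': name, 'PointType': 'Float32', 'PointSource': 'L'}

def create_point(base_url, dataserver_webid, name, cert='webapi.cert'):
    # POST /dataservers/{webId}/points creates a new PI Point;
    # 'verify' points at the self-signed certificate used later in this post
    url = base_url + '/dataservers/' + dataserver_webid + '/points'
    response = requests.post(url, json=point_definition(name), verify=cert)
    response.raise_for_status()

# Example (data-server WebId left as a placeholder):
#   for tag in ('Temperature', 'Humidity'):
#       create_point('https://gateway5000/piwebapi', '<DataServer WebId>', tag)
```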

The data will be read off the Raspberry Pi into the PI tags via PI Web API.


Make sure to set the AuthenticationMethods parameter under PI Web API's 'System Configuration' in the AF configuration database to 'Anonymous'.

Then, note down the WebIDs of the two new PI tags that were created. You should have 2 WebIDs: one for the Temperature tag and one for the Humidity tag.
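Rather than copying the WebIDs by hand, they can also be looked up through PI Web API's points endpoint. A minimal sketch (the server name, tag paths, and certificate file name are assumptions based on this post):

```python
import requests

BASE_URL = 'https://gateway5000/piwebapi'

def point_lookup_url(base_url, point_path):
    # PI Web API resolves a PI Point path (\\server\tag) to its WebId
    # via GET /points?path=...
    return base_url + '/points?path=' + point_path

def get_webid(base_url, point_path, cert='webapi.cert'):
    # 'verify' points at the self-signed certificate used by the scripts below
    response = requests.get(point_lookup_url(base_url, point_path), verify=cert)
    response.raise_for_status()
    return response.json()['WebId']

# Example (hypothetical tag names on the gateway's local Data Archive):
#   get_webid(BASE_URL, '\\\\gateway5000\\Temperature')
#   get_webid(BASE_URL, '\\\\gateway5000\\Humidity')
```

With Anonymous authentication enabled as described above, no credentials are needed on the request.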




We used Python scripts. We need the 2 scripts below:

1) The DHT22 sensor library, available online via GitHub

2) A script that writes the sensor's data to PI every 5 seconds. Make sure to modify 'base_url' and insert the WebIDs for 'temperature' and 'humidity'.

Note: 'webapi.cert' is a self-signed certificate stored at the same location as the scripts. More information on Certificate Generation is available here.


import requests
import time
import pigpio
import DHT22

base_url = 'https://gateway5000/piwebapi'
temperature = '<WebID of Temperature tag>'
humidity = '<WebID of Humidity tag>'

def post_pi_value(value, webid):
    data = {'Value': value}
    headers = {'Content-Type': 'application/json'}
    response = requests.post(base_url + '/streams/'\
                             + webid + '/value', json=data,\
                             headers=headers, verify='webapi.cert')
    return response

def get_pi_value(webid):
    response = requests.get(base_url + '/streams/'\
                             + webid + '/value', verify='webapi.cert')
    return response

# DHT22 data pin is wired to GPIO4 (see the wiring steps above);
# the DHT22 module comes from the GitHub script mentioned earlier
pi = pigpio.pi()
sensor = DHT22.sensor(pi, 4)

while True:
    sensor.trigger()        # request a fresh reading
    time.sleep(0.2)         # give the sensor time to respond
    post_pi_value(sensor.temperature(), temperature)
    post_pi_value(sensor.humidity(), humidity)
    time.sleep(5)           # write to PI every 5 seconds

Credit: Eugene Lee


You can set up the above Python script to run on startup, or start/stop it on demand using the bash scripts below.


These two files can be placed in the same location as the Python scripts. When you want to start or stop reading the sensor's data, you can execute the commands from the terminal application (command line) as shown below.




In this post, we made use of the Dell Gateway's capability to connect directly to data sources via its serial and Ethernet ports. We selected simple sensors that are easy to connect and come at low cost. We successfully delivered data to the PI Server, which resided on the same machine. Since the Dell Gateway can accept many other communication protocols, data collection from many other IoT devices is possible. In my opinion, the Dell Gateway can be a very good interface node for the PI System, where all sensors can be connected and gathered in one location. However, it is not suitable for hosting our full PI System, because of physical limitations such as a 2-core processor and only 8 GB of RAM. Performance will surely be limited with a large number of tags or heavy use of AF/AF Analyses. In the future, with higher specifications, it may be possible.


If you have any questions, please post them here, and if you have experience using the Dell Gateway, please share it with us.

Looking for help with a technical issue or challenge? All Things PI is your place for online support from OSI Technical Support, as well as fellow users!


Who is available on PI Square to help?


All of OSIsoft Technical Support monitors and participates in our community, so the same support you can get via a phone call or email is what you'll receive here!


In addition, we have active Product Specialists, Product Managers, Developers, and more who are always keeping an eye out for discussions relevant to their expertise.


How do I post a question?


Use any search bar to see if your question has been asked before - search results will pop up automatically, and if you need to, you can click the "Search all results ->" button at the bottom to see more.


If your issue hasn't been discussed before, simply press the "Ask It" button to launch a new post. Fill in as many details as you can, including background to the issue.


In the "In a Place" section, browse to the appropriate sub-location of All Things PI - we have product-specific subspaces such as PI Server, PI Interfaces & Connectors, PI Visualization, etc. See below on why this is beneficial to you!


Create a few tags to better catalogue your post. These tags are used in the site's search engine, and make it easier for others to find your post!


Why should I post in a product-specific Space?


Our specialized Support resources such as Product Specialists, Product Managers, and Developers will often follow specific product spaces so that they get alerted to new discussions. By posting in the right product space, you'll have a better chance of getting targeted support for your issue!


What if I need a remote session?


If we can't solve your problem on PI Square, one of our engineers will assist you in creating a full Tech Support case, and will contact you directly to start a remote session. The process is seamless, and won't be any extra work for you.


What about the rest of PI Square?


OSIsoft Technical Support resources will be monitoring discussions in the All Things PI space and product subspaces to provide technical support and assistance. (For questions: Taylor McManus)


OSIsoft Development Support teams will be monitoring questions in the PI Developers Club space and sub-spaces there to help provide suggestions and answers to development questions. (For questions: Ahmad Fattahi)


OSIsoft Learning will be monitoring the Learn PI space and class content and will be replying to questions, comments, ideas, and suggestions within that area. (For questions: Kelsey Holstein)


OSIsoft PI Square team will be monitoring the Welcome to PI Square! space and will reply to your questions, comments, ideas, and suggestions within that area. (For questions: Jarita Sirois)


How do I get started?


Head on over to All Things PI and dive into the Community!
