
Here I am again already working on the draft (final version by the time you are reading this) of my 3rd blog about the PowerShell Tools for the PI System within the space of a week. That has to be some kind of record for me to maintain my attention on one topic for so long.


So far I have covered a couple of aspects of the PowerShell Tools for the PI System: connecting to the PI Server and AF Server (collectively known as the PI System), and some real-world PI Server archive management issues that I've had to deal with. Hopefully you've enjoyed reading them so far.


For this next blog post I want to talk about PI Interfaces, with particular emphasis on two of the most commonly used: the PI Interface for OPC and the PI Interface for Performance Monitor. The fact that I am going to focus on those two interfaces doesn't mean you should only carry on reading if you make heavy use of them; the point of this post is to show you once again how easy, simple and somewhat relaxing it is to automate changes to, and auditing of, PI Interfaces in general with the PowerShell Tools for the PI System. Yes, I did say "relaxing". I'm not sure why, but that's just how it feels sometimes working with these OSIsoft CmdLets: you don't need to think too much about it because OSIsoft have done most of the hard work. Anyway, it feels like I am waffling again, so let's get on with it.


PI Interface design changes to 50+ instances


Okay, here is the scenario. Imagine you are working in an environment where you have a blueprint of a complete PI System: how the various aspects of the PI System (AF Server, PI Interfaces, PI Server, ...) interact with each other. That blueprint is then used to produce numerous replica PI Systems that are deployed to numerous geographical locations - if you have an interest in how to do that type of deployment then check out my UC 2013 presentation on that very topic. Each of the deployed PI Systems is then merrily doing its job, collecting data, visualizing data, and replicating data back to a single central PI System (via PI to PI).

Yes, the title is correct: despite the brilliant work that has gone into the PI Server there are still numerous management issues. Some of those issues are related to the PI Server archives, and I want to cover a couple of them in this blog post, which follows on from my previous blog post on connecting to the PI System via PowerShell.


Archive Time Span

Let me start with a simple scenario that I wanted to track automatically: I wanted to continuously know the complete lifespan of all on-line archives. I am working with space-limited PI Systems where there is a requirement to keep only a small time span of data available - for example, the last 12 months. To achieve this from the PI Server perspective you need to set the tuning parameter "Archive_OverwriteDataOnAutoShiftFailure" to "1" to allow old archive files to be overwritten. The main trigger for that overwrite of data is typically low disk space, which in my scenario is perfect (and it works fine).
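If you want to script that tuning parameter change as well, the snapin appears to include a Set-PITuningParameter counterpart to the Get-PITuningParameter CmdLet used later in this post. This is just a sketch (run once you have the connected $PI object shown below) - check the CmdLet and its parameter names with Get-Help in your install:

```
# Hedged sketch: verify with Get-Help Set-PITuningParameter before relying on this
Set-PITuningParameter -Name "Archive_OverwriteDataOnAutoShiftFailure" -Value "1" -PIServer $PI
```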


However, what I cannot tell automatically via Performance Counters is the time span of all on-line archives; you can't even deduce it from the existing performance counters. So how am I supposed to monitor that my rolling archives are indeed keeping at least 12 months of archive data on-line? The answer is to use the PowerShell Tools for the PI System with a couple of lines of code. It is so simple that I encourage everyone to start exploring the PowerShell Tools for the PI System to find other "quick wins" for your PI System management.


At the moment I am dealing purely with a single PI Server and running the PowerShell scripts on the PI Server itself - I will tackle PI Collectives and Collective monitoring in my next blog post.


First we need to connect to the PI Server, which you should remember from my last blog post:

if ((Get-PSSnapin "OSIsoft.PowerShell" -ErrorAction SilentlyContinue) -eq $null) { Add-PSSnapin "OSIsoft.PowerShell" }

[OSIsoft.SMT.PIServer]$PI = Get-PIServer -Name "vCampusDemo" | Connect-PIServer



Okay, so now we are connected.
In order to retrieve all the registered archives from the connected PI server OSIsoft have kindly built us the "Get-PIArchive" CmdLet. Without specifying an archive name the CmdLet will return an Array of all the registered archives. 

[Array]$ARCHIVES = Get-PIArchive -PIServer $PI



Simple, right? 
However, I'm not interested in empty archives that currently have no data, and I want to order the archives in a chronological descending order so that I can take a short cut to get the time span of all archives. So I'm going to pipe the archive list to a "where" filter to eliminate the empty archives and then sort the remaining archives:

[Array]$ARCHIVES = Get-PIArchive -PIServer $PI | where { $_.StartTime -ne $null } | sort -Property StartTime -Descending



 A PI Server archive without a start time is an empty archive waiting to be used/shifted to. They've now gone, great. I also sorted the archive list because I want to use the first and last element of the array to provide a simple time span result of all archives. Before you start shouting at the screen, I know there could be archive gaps in between...I am coming to that next. For now here is the code:

$Now = [DateTime]::Now
# The array is sorted newest-first, so the last element is the oldest archive
$TimeSpan = New-TimeSpan -Start $ARCHIVES[-1].StartTime -End $Now
Write-Host "(Simple) Total archive online timespan = " $TimeSpan.ToString()



Something to note here is that the first Archive object in the array will be the primary archive, which has no end time; its effective end time is the current time, so for simplicity I substitute the PI Server's current time. I then subtract the last Archive object's start time to get the time span (inclusive of archive gaps) of on-line archive data. With this time span I can do what I like with the value - most likely send it to a PI tag (via the "Add-PIValue" CmdLet - the subject of a later blog) and notify on its value.


As I alluded to earlier in this blog post, I am not taking into account archive gaps. So the result I just got is not accurate enough for me: I want to know the time span of on-line archives and whether there are archive gaps. This got me thinking about the best way to achieve it with as little code as possible; after all, I don't necessarily want a "PI Professional" to maintain these PowerShell scripts - they should be as obvious as possible for others to maintain. I had to do some more research into some PowerShell CmdLets and better use of piping between CmdLets... some time later... I decided on how I would do it for now and was pleasantly surprised at how I ended up doing it.


Each PI Server archive has a LifeTime property, which is a TimeSpan object. So I could filter out empty archives and now select the LifeTime property of each archive. I then discovered the "Measure-Object" CmdLet that will provide you with statistics depending on what you want, and one of the statistics available is "Sum". Perfect, but you cannot sum TimeSpans. Instead I had to use the "ExpandProperty" parameter for the Select CmdLet so that I could sum up the "TotalSeconds" property of each TimeSpan. Now it is perfect.

$sum = Get-PIArchive -PIServer $PI | where { $_.StartTime -ne $null } | select -ExpandProperty LifeTime | Measure-Object TotalSeconds -Sum



Now I can compare the time span of Primary Archive -> Oldest Archive Time Span with the LifeTime TotalSeconds Time Span to detect if there are any archive gaps.

# Compare whole seconds rather than TimeSpan strings (the subtraction carries
# fractional seconds that would make a string comparison fail spuriously)
$ArchiveGaps = ([Math]::Round($TimeSpan.TotalSeconds) -gt [Math]::Round($sum.Sum))  # $true when there are gaps
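If the totals don't match, you can go one step further and list where the gaps actually are. With the array sorted newest-first, each archive's start time should line up with the end time of the next (older) archive - a small sketch using the same archive properties as above:

```
for ($i = 0; $i -lt $ARCHIVES.Length - 1; $i++)
{
    $Newer = $ARCHIVES[$i]
    $Older = $ARCHIVES[$i + 1]
    if ($Newer.StartTime -gt $Older.EndTime)
    {
        Write-Host "Gap of" ($Newer.StartTime - $Older.EndTime).ToString() `
                   "between '$($Older.Name)' and '$($Newer.Name)'"
    }
}
```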



Job done.


Archive names for rolling archives


Following on from the above scenario, I found out to my horror that when the PI Server overwrites an old archive as per the tuning parameter configuration, it doesn't rename the archive file! This means that the first time the archive file is created it is given a name based on the tuning parameters "Archive_AutoArchiveFileRoot" and "Archive_AutoArchiveFileFormat", and that name then remains forever, no matter how many times the archive is overwritten. If, like me, you have some form of self-confessed OCD, then the archive names not matching the configuration after an archive shift overwrite was keeping me up at night.


Anyway, after tackling the archive on-line data time span issue I had already done a lot of the work for getting at the archives. What I needed to do now was check the name of each archive and make sure it matched the tuning parameter configuration. The first check of the PowerShell Tools for the PI System yielded the CmdLet I needed: Get-PITuningParameter. It was straightforward to get what I needed for my checks, having already connected to the PI Server:



$AutoArchiveFileRoot = (Get-PITuningParameter -Name "Archive_AutoArchiveFileRoot" -PIServer $PI).Value
$AutoArchiveFileFormat = (Get-PITuningParameter -Name "Archive_AutoArchiveFileFormat" -PIServer $PI).Value



The archive file format is one of three possibilities:


0: [root]_D_Mon_YYYY_H_M_S[.ext]
1: [root]_YYYY-MM-DD_HH-MM-SS[.ext]
2: [root]_UTCSECONDS[.ext]


Okay so the format is really just a date time format for the archive's start time, so we'll use the same logic for our check and, if required, rename.


Let's get the archives in a chronological order and assign what we think the name of the archive "should be":



$AutoArchiveFileExt = (Get-PITuningParameter -Name "Archive_AutoArchiveFileExt" -PIServer $PI).Value
$AutoArchiveFileRoot = (Get-PITuningParameter -Name "Archive_AutoArchiveFileRoot" -PIServer $PI).Value
$AutoArchiveFileFormat = (Get-PITuningParameter -Name "Archive_AutoArchiveFileFormat" -PIServer $PI).Value

[Array]$ARCHIVES = Get-PIArchive -PIServer $PI | where { $_.StartTime -ne $null } | sort -Property StartTime -Descending
foreach ($ARCHIVE in $ARCHIVES)
{
     $ARCHIVE_NAME = ""
     switch ($AutoArchiveFileFormat)
     {
          # Note: 'MMM' is the .NET month abbreviation, and minutes/seconds are
          # 'm'/'s' (a capital 'M' here would be the month number again)
          0 { $ARCHIVE_NAME = '{0:d_MMM_yyyy_H_m_s}' -f $ARCHIVE.StartTime }
          1 { $ARCHIVE_NAME = '{0:yyyy-MM-dd_HH-mm-ss}' -f $ARCHIVE.StartTime }
          # UTC seconds means seconds since 1970; ToFileTimeUtc() would give
          # 100-nanosecond intervals since 1601, which is not the same thing
          2 { $ARCHIVE_NAME = [long]($ARCHIVE.StartTime.ToUniversalTime() - [DateTime]'1970-01-01').TotalSeconds }
     }
     $ARCHIVE_NAME = $AutoArchiveFileRoot + "_" + $ARCHIVE_NAME + $AutoArchiveFileExt
     # ... the loop body continues with the comparison below


 We can simply format the Archive StartTime property to the required archive name format defined in the tuning parameters, then join that with the file root tuning parameter.


Next the comparison...



     if ($ARCHIVE.Name -eq $ARCHIVE_NAME)
     {
          Write-Host "Archive name '$($ARCHIVE.Name)' is correct. No action."
     }
     else
     {
          Write-Host "Archive name '$($ARCHIVE.Name)' is incorrect, should be '$ARCHIVE_NAME'."
     }



You can use this as a sanity check that the comparison logic is accurate. Under "normal" usage of the PI Server you'll likely have all archives named correctly, unless you changed the tuning parameters after the first archive was created. For my scenario there was no telling how many of the archives would have been rolled over, so I had to run this check periodically based on my data rates to the PI Server.
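To run the check periodically I simply scheduled the script with the built-in Windows task scheduler. The script path and the daily schedule below are purely illustrative:

```
# Run a (hypothetical) archive name check script every morning at 06:00
schtasks /Create /TN "PI Archive Name Check" `
         /TR "powershell.exe -File C:\Scripts\Check-PIArchiveNames.ps1" `
         /SC DAILY /ST 06:00 /RU SYSTEM
```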


What do I do now that I know there are archives named incorrectly? Well, unregister them and rename them, of course. There are a couple more interesting CmdLets that OSIsoft have provided for this: "Unregister-PIArchive" and "Register-PIArchive". This is becoming easier than I first thought.


Okay then, I am going to unregister each non-conforming archive, rename it, then register it as the new name. However, I am not going to do anything with the primary archive...I'll wait for the next archive shift before renaming that one. I can live with having the primary archive named incorrectly (although OSIsoft should fix this whole issue in PI Server 2013).



     if ($ARCHIVE.EndTime -ne $null)
     {
          Unregister-PIArchive -Name $ARCHIVE.Name -PIServer $PI
          Move-Item $ARCHIVE.Name $ARCHIVE_NAME
          Move-Item ($ARCHIVE.Name + ".ann") ($ARCHIVE_NAME + ".ann")
          Register-PIArchive -Name $ARCHIVE_NAME -PIServer $PI
     }
     else
     {
          Write-Host "Primary archive '$($ARCHIVE.Name)' will not be renamed until after the next archive shift."
     }
}



Yep, it really is that easy. All those sleepless nights wiped out in a few lines of code. You could have all kinds of fun with these OSIsoft CmdLets.


Job done.






A couple of issues that historically were complicated to solve with regular command-file scripts have been dealt with in very little PowerShell code. Big thanks to OSIsoft for providing the CmdLets that make these real-world issues solvable with very little effort.


Obviously any system management automation carries the potential for unexpected exceptions, irregularities in PI Server setup/configuration, and so on, so you should have detailed knowledge of your PI System before going in with all guns blazing. I've omitted detailed exception handling for the simplicity of this blog post - make sure you handle exceptions and check your environment first!




What's next?


I want to look at some PI Module Database management for PI Interfaces. Whilst we don't yet have AF-based PI Interfaces, some of us still have to get our hands dirty with the PI Module Database. I didn't want to get my hands dirty - I like my hands - so I automated a whole bunch of PI Interface and PI Module Database work. That is coming up next.



Ever wondered how you use PowerShell to connect to the PI System?
Too busy to learn another data access method?
Not had any real exposure to PowerShell thus far?


Then read on as I introduce you to connecting to the PI Server and AF Server using the OSIsoft PowerShell Tools for the PI System.


This blog does not explain the specifics of PowerShell but merely shows you how to use PowerShell with the PI System. For details of PowerShell itself I suggest you ask our mutual colleagues, Google and Bing.


I am by no means a PowerShell guru, just a programming nomad who has currently settled in the land of PowerShell until I am ready for my next journey. I set myself a goal at the beginning of the year to understand PowerShell, and I'm happy to have achieved that so far. In fact, I've used it in projects already to save hours/days (and large amounts of $'s, £'s ...) of work.


Feedback most welcome!
Suggestions for further PowerShell topics most welcome too!
Nomination for OSIsoft vCampus All-Star 2013 a must!   (I’m starting early this year…)


Adding the OSIsoft PowerShell snapin.


We can’t do anything without the PowerShell Snapin that provides the CmdLets we need.


Snapins are added using the “Add-PSSnapin” CmdLet; opposite CmdLets tend to exist so in this instance there is a “Remove-PSSnapin” CmdLet too:



Add-PSSnapin "OSIsoft.PowerShell"
Remove-PSSnapin "OSIsoft.PowerShell"



Now of course that Snapin could already have been added which means we’ll get some ugly red text thrown at us, so we’ll handle that explicitly:



if ((Get-PSSnapin "OSIsoft.PowerShell" -ErrorAction SilentlyContinue) -eq $null) { Add-PSSnapin "OSIsoft.PowerShell" }



With the snapin loaded we have access to a wealth of PI System CmdLets that cover most aspects of the PI System from a management perspective. What’s more, the abstraction that the CmdLets provide means the resulting scripts can easily be supported by any PowerShell scripter without deep knowledge of the PI System SDKs.


Connecting to a PI Server.


Connecting to a PI Server is extremely simple and achieved with 2 CmdLets; Get-PIServer and Connect-PIServer (Disconnect-PIServer is available too but we don’t need that right now).


This is the code required using those 2 CmdLets to connect to a PI Server named “vCampusDemo”:



[OSIsoft.SMT.PIServer]$PI = Get-PIServer -Name "vCampusDemo" | Connect-PIServer



How beautiful does that look? Extremely.


Once you have your PIServer object the world is your oyster as to what you want to do with it, for example checking the version of the PI Server:



Write-Host $PI.Version



There are endless things you’ll want to do to that PI Server, but I’ll cover that in some further blogs. For now, let’s just concentrate on getting you connected.
If you’re working with a PI Collective and have a connection preference then that is already covered, just specify your preference when connecting:



[OSIsoft.SMT.PIServer]$PI = Get-PIServer -Name "vCampusDemo" | Connect-PIServer -ConnectionPreference RequirePrimary



One neat parameter of the Connect-PIServer CmdLet I found was “-AcceptServerIDChange”. It is pretty self-explanatory and works great – if you have a PI Server that recently changed Server Id and you’re connecting then it will automatically accept the new Server Id, which will be updated in the PI-SDK registry.
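So if a PI Server you connect to has recently been rebuilt and given a new Server Id, the connection line simply becomes:

```
[OSIsoft.SMT.PIServer]$PI = Get-PIServer -Name "vCampusDemo" | Connect-PIServer -AcceptServerIDChange
```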


Connecting to an AF Server (PI System).


Can’t be as easy as connecting to a PI Server, can it? Yes, yes it can. The same CmdLet pattern is available for an AF Server; Get-AFServer, Connect-AFServer, Disconnect-AFServer.



[OSIsoft.AF.PISystem]$AF = Get-AFServer -Name "vCampusDemoAF" | Connect-AFServer



Just as beautiful as the PI Server connection I think you’ll agree. Same as with the PI Server, connection preference for an AF Collective is supported:



[OSIsoft.AF.PISystem]$AF = Get-AFServer -Name "vCampusDemoAF" | Connect-AFServer -ConnectionPreference RequirePrimary



Connection error handling


With both the PI Server and AF Server we have just connected with no regard for how we want to handle exceptions - for example, a PI Server that doesn’t exist, or an AF Server connection that was rejected.


For both our connections to the PI Server and AF Server we can have the connection attempts silently executed and then make our own checks on the objects returned.


PI Server connection becomes:



[OSIsoft.SMT.PIServer]$PI = Get-PIServer -Name "vCampusDemo" -ErrorAction SilentlyContinue | Connect-PIServer -ErrorAction SilentlyContinue

if ($PI -eq $null) { Write-Host "Uh oh, no connection to the PI Server." -ForegroundColor Red }



Then, you guessed it; we can do the same thing for the AF Server connection:



[OSIsoft.AF.PISystem]$AF = Get-AFServer -Name "vCampusDemoAF" -ErrorAction SilentlyContinue | Connect-AFServer -ErrorAction SilentlyContinue

if ($AF -eq $null) { Write-Host "Uh oh, no connection to the AF Server." -ForegroundColor Red }
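If you connect to the same pair of servers from many scripts, the two silent connection attempts and their checks can be wrapped in a small helper function. This is just a sketch using the CmdLets shown above; the function name is my own:

```
function Connect-DemoPISystem
{
    param([string]$PIName, [string]$AFName)

    $PI = Get-PIServer -Name $PIName -ErrorAction SilentlyContinue |
          Connect-PIServer -ErrorAction SilentlyContinue
    if ($PI -eq $null) { Write-Host "Uh oh, no connection to PI Server '$PIName'." -ForegroundColor Red }

    $AF = Get-AFServer -Name $AFName -ErrorAction SilentlyContinue |
          Connect-AFServer -ErrorAction SilentlyContinue
    if ($AF -eq $null) { Write-Host "Uh oh, no connection to AF Server '$AFName'." -ForegroundColor Red }

    return $PI, $AF
}

$PI, $AF = Connect-DemoPISystem -PIName "vCampusDemo" -AFName "vCampusDemoAF"
```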



There is currently a bug with the “Get-PIServer” CmdLet that means it will throw an error even if you specify “SilentlyContinue” as the ErrorAction. This is being addressed by OSIsoft.




What’s next?


With the easy bit of connecting out of the way the next set of blogs on PowerShell with the PI System will focus on some management aspects of both systems. They will include some PI Server audits (PI Mappings/Trusts/Firewall), archive management, PI Interface manipulation (PI Module Database + remote command file edits) … Beyond those blog posts we’ll switch our attention to the AF Server.


LightSwitch with PI

Posted by mhalhead Champion Jul 4, 2013

Microsoft LightSwitch is a Visual Studio add-in for rapid application development. If you want to know more, google it, but here's a good starting point. IMNHO, LS is a really good tool for building quick CRUD-style applications (forms over data); you could probably build a nice functional dashboard in LS too, but that doesn't mean it's a good idea. The hard-core developers reading this are shuddering, but there are times when you need a way of developing a solution without months of plumbing. I've also found that LS solutions are more maintainable because all the plumbing code isn't there. For me the biggest difference between LS and other RAD platforms is that LS is pure .NET. There is no weird and wonderful runtime engine (other than the CLR); all there is, is a .NET framework that incorporates the best patterns and practices. Because it is pure .NET you can do pretty much anything in LS that you could do in a normal .NET app - yes, you might fight the framework, but you can do it; again, this doesn't mean it is a good idea.


Microsoft's marketing department really hasn't done LS any favours; I think it has confused people. You can use LS for the following:

  • Create a quick, well-formed CRUD application
  • Create an OData service - you can publish a service-only layer with LS, providing an OData service that has minor things like real authentication and permissions
  • SharePoint 2013 development - LS is being pushed (badly) as a preferred method of writing SharePoint 2013 applications. We're still on 2010 so I haven't tried this
  • Mobile clients - the default UI is built in Silverlight but you can also create a companion UI in HTML5. No, you can't create a Win8 RT app directly, but you could use the data service layer to create the server side and use the normal VS templates to create the client app.

So where does PI fit in? Well, there are numerous examples floating around vCampus of data entry into PI. The example I'm going to use is Event Frames. Yes, PI can generate EFs no problem (Abacus will make this work a lot better - sorry Steve, couldn't resist), but there is no easy way out of the box to let users enter additional information into the EFs. I don't know about everyone else, but I'm certainly not going to give my users PSE with sufficient rights to edit events.


There are a number of mechanisms for connecting to data in LS:

  • Database - effectively the Entity Framework providers. I must confess I've never tried anything other than SQL Server
  • SharePoint
  • OData
  • WCF RIA Service

Seeing that I'm trying to talk to AF, my options are limited. Currently OSIsoft doesn't have OData support, and even if they did, LS doesn't support OData operations (please add some votes to the feature request on Microsoft's Connect site). So WCF RIA Services it is.


The first thing to bear in mind is that LS uses an entity model for data. This means that you will need to project your EF template as an entity; you do this with a plain old CLR object (POCO) - don't worry, I will show an example.


The first step is to create a new LS project and then add a Class Library project to the solution; hint: use a plain old class library, not a WCF Service Library.
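As a taste of what that projection looks like, here is a minimal, purely illustrative POCO for an Event Frame entity - the class and property names are my own, and your EF template's attributes will differ:

```csharp
// Illustrative only: a plain CLR object projecting an AF Event Frame
// so that LightSwitch can treat it as an entity.
public class EventFrameEntity
{
    public Guid Id { get; set; }            // maps to AFEventFrame.ID
    public string Name { get; set; }        // maps to AFEventFrame.Name
    public DateTime StartTime { get; set; }
    public DateTime? EndTime { get; set; }  // null while the event is open
    public string Comment { get; set; }     // the field users will edit
}
```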

At vCampus Live 2011 you might remember Joel’s (aka SCADAhacker) demo scenario, starting with a corporate user who opens a poisoned PDF. After loading a RAT (remote access Trojan) to reliably control the victim machine, he showed the ‘Pass the Hash’ technique for pivoting within corporate networks using legitimate credentials. In the classic PtH scenario the attacker takes control of a domain.


The tools used in Joel’s demo are readily available to 'casual' attackers. More to the point, attack tools continue to evolve with more powerful exploits; in fact, the latest WCE tool is now Kerberos-aware. So too must our cyber defenses evolve.


Since the PI System and most corporations rely heavily on Windows integrated security you should be in the know about Pass the Hash. In my view, Microsoft has declared war on Pass the Hash. The purpose of this post is to call your attention to the TechEd 2013 session featuring detailed information and mitigation advice regarding PtH.


“Pass the Hash and Other Credential Theft and Reuse: Preventing Lateral Movement and Privilege Escalation”


I highly recommend viewing the TechEd recording and slide deck.  Here’s my take on the top 3 mitigations with some PI System spin:

  • Mitigations 1 and 2 are quick wins. Don’t be administrator - at least, not all the time! Enable full UAC in Windows Vista and later. The same is true for the PI System: Administrator and piadmin access should be reserved for initial provisioning and disaster recovery tasks. Don’t put your data (and systems) at risk through unnecessary use of full permissions.
  • Mitigation 3: Restrict inbound traffic using Windows Firewall.  Your organization needs a way to impede unapproved lateral movement on the network. The Windows firewall is built-in... just do it.  While you’re there, get tough on outbound rules too, at least for your servers and interface nodes.

If you’re like me, you’ve learned to appreciate system defaults whenever possible. Outbound connections are allowed by default so you know there is going to be some extra effort and testing to block by default.  The observations below are meant as a starting point especially for those with a dedicated PI Server. Other apps running on the machine may require additional rules.


Observations with Windows Firewall Outbound Connections set to Block by Default

  1. Perhaps the first thing you might notice on a test VM is loss of network connectivity. Like a real server you’ll want to configure a static IP so the DHCP ports need not be open.
  2. Ensuring that Kerberized logins against the domain controller remain possible was a more obvious need. Sadly, none of the built-in rules seem appropriate. Instead, 2 rules were added to allow outbound connections to the following remote ports. Scope these rules for connection to domain controllers. You might need a third rule if your DNS is separate from AD.
    1. UDP rule – Local Ports: All, Remote Ports: 53, 88,123,389,464,3268
    2. TCP rule – Local Ports: All, Remote Ports: 88,389,464,3268
  3. Group policy support is recommended for domain members but not required. The following 3 built-in rules should be enabled to allow processing of group policy.
    1. Core Networking - Group Policy (LSASS-Out)
    2. Core Networking - Group Policy (NP-Out)
    3. Core Networking - Group Policy (TCP-Out)
  4. You might be expecting the PI Server to require a 5450 outbound rule. Not so, inbound rules are sufficient to allow the PI Server to respond to client connections. However a PI HA Secondary server will need an outbound rule to allow connections to the primary.  Similarly PIAFLink replication also needs an outbound rule to connect to a remote AF service. As such the rules should allow outbound connections to the following remote ports for PI and AF respectively. Incidentally, these rules are also applicable to interface nodes and other PI System roles acting as a client.
    1. TCP rule – Local Ports: All, Remote Ports: 5450
    2. TCP rule – Local Ports: All, Remote Ports: 5457, 5459
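On Windows Server 2012 and later, the outbound rules above can be scripted with the NetSecurity module's New-NetFirewallRule CmdLet. A sketch - the rule names are arbitrary, and scoping to your domain controllers via -RemoteAddress is left out for brevity:

```
# Kerberos/AD traffic to the domain controllers
New-NetFirewallRule -DisplayName "AD out (UDP)" -Direction Outbound -Action Allow -Protocol UDP -RemotePort 53,88,123,389,464,3268
New-NetFirewallRule -DisplayName "AD out (TCP)" -Direction Outbound -Action Allow -Protocol TCP -RemotePort 88,389,464,3268

# PI HA secondary -> primary, and PIAFLink -> remote AF service
New-NetFirewallRule -DisplayName "PI out" -Direction Outbound -Action Allow -Protocol TCP -RemotePort 5450
New-NetFirewallRule -DisplayName "AF out" -Direction Outbound -Action Allow -Protocol TCP -RemotePort 5457,5459
```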

As a final tweak, the above rules can be assigned to the applicable service. First, though, you’ll need to use the Windows Service Control utility to ‘unrestrict’ the service, then select the respective PI service on the Programs and Services tab of the Windows Firewall rule wizard. Yes, we are planning for ‘restricted’ mode services, but that is a future discussion.

  • SC sidtype pinetmgr unrestricted
  • SC sidtype piaflink unrestricted

In summary, Microsoft's top 3 PtH mitigations make sense for a modern PI System and are in direct alignment with OSIsoft best practices. I hope this post was useful for understanding some of the relevant changes in our cyber ecosystem and helps you identify synergies for more efficient defensive initiatives.  Thanks for listening.


