
PI Developers Club


Got a bunch of Event Frames lying around?


Getting rid of old event frames in your database is now much easier in AF SDK 2.10, thanks to the new DeleteEventFrames() method on the AFEventFrame class.


All you need to do is collect a list of event frame object IDs and pass it to DeleteEventFrames().




using System;
using System.Linq;
using OSIsoft.AF;
using OSIsoft.AF.EventFrame;
using OSIsoft.AF.Search;

namespace MassEFDelete
{
    class Program
    {
        static void Main(string[] args)
        {
            PISystems pi = new PISystems();
            PISystem sys = pi.DefaultPISystem;
            AFDatabase db = sys.Databases["Chris"];

            var cpu = db.Elements["CHRISMACAIR"];
            var machinetemplate = db.ElementTemplates["Macs"];

            string query = "Element:CHRISMACAIR";  // Every event frame where this AFElement is a factor
            var search = new AFEventFrameSearch(db, "CPU Search", query);
            var frames = search.FindEventFrames();  // Get every event frame

            var ids = (from p in frames
                       select p.ID).ToList();

            Console.WriteLine($"There are {ids.Count} event frames awaiting deletion.");

            AFEventFrame.DeleteEventFrames(sys, ids);

            // Search again
            frames = search.FindEventFrames();
            Console.WriteLine($"There are {frames.Count()} remaining event frames.");
        }
    }
}



Many of you have been frustrated with clearing out event frames in batch. Now you can run a lightweight search (fullLoad=false), grab the IDs from the search results, and feed them into the bulk delete all at once!

I just needed to merge some data from one PI Point into another PI Point on the same PI Server.



The PI OPC UA Connector had been running for a while before I followed the PI Connector Mapping Guide and routed the output of the OPC UA Connector to the PI Points previously written by OPC DA.


This PowerShell script uses the AF SDK to read the last hour of events from PI Point "sinusoid" and write them into PI Point "testtag".


[Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null
[OSIsoft.AF.PI.PIServers] $piSrvs = New-Object OSIsoft.AF.PI.PIServers
[OSIsoft.AF.PI.PIServer] $piSrv = $piSrvs.DefaultPIServer
[OSIsoft.AF.PI.PIPoint] $piPoint = [OSIsoft.AF.PI.PIPoint]::FindPIPoint($piSrv, "SINUSOID")
[OSIsoft.AF.PI.PIPoint] $piPoint2 = [OSIsoft.AF.PI.PIPoint]::FindPIPoint($piSrv, "testtag")
[OSIsoft.AF.Time.AFTimeRange] $timeRange = New-Object OSIsoft.AF.Time.AFTimeRange("*-1h", "*")
[OSIsoft.AF.Asset.AFValues] $piValues = $piPoint.RecordedValues($timeRange, [OSIsoft.AF.Data.AFBoundaryType]::Inside, $null, $true, 0)

foreach ($val in $piValues)
{
    Write-Host $val.Value " at " $val.Timestamp
}

# Write the values read from SINUSOID into testtag
$piPoint2.UpdateValues($piValues, [OSIsoft.AF.Data.AFUpdateOption]::Replace) | Out-Null

I am happy to announce that the client libraries below were updated using the new PI Web API 2018 Swagger specification. All the new methods from PI Web API 2018 are available for you to use on the client side! The StreamUpdate related methods (still in CTP) were also added to those projects.



If you have questions, please don't hesitate to ask!

In this blog we will have a look at the new features included in the OSIsoft.AF.PI Namespace, namely

  • Methods added to find PIPoints in bulk from a list of IDs.
  • A change event that can be checked to determine whether it was the result of a rename


Finding PI Points using IDs

The PIPoint.FindPIPoints(PIServer, IEnumerable<Int32>, IEnumerable<String>) and PIPoint.FindPIPointsAsync(PIServer, IEnumerable<Int32>, IEnumerable<String>, CancellationToken) methods were added to find PIPoints in bulk from a list of IDs. This helps retrieve a list of PIPoint objects from the specified point IDs without needing to know the PI Point names a priori.



var piDataArchive = new PIServers()[serverName];
var pointIDs = new List<int>() { 1, 3, 4, 6, 10000 };
var pointAttributes = new List<string>() { "Name", "PointType", "PointID" };
IList<PIPoint> myPoints = FindMyPIPoints(piDataArchive, pointIDs, pointAttributes);


private static IList<PIPoint> FindMyPIPoints(PIServer piDataArchive, IList<int> pointIDs, IList<string> pointAttributes)
{
    return PIPoint.FindPIPoints(piServer: piDataArchive,
                                ids: pointIDs,
                                attributeNames: pointAttributes);
}


Asynchronous method (this call might use a background task to complete some of its work; see the Threading Overview in the AF SDK documentation for matters to consider when execution transitions to another thread):

var tokenSource = new CancellationTokenSource();
var token = tokenSource.Token;
var findpointstask = FindMyPointsAsync(piDataArchive, pointIDs, pointAttributes, token);


private static async Task<IList<PIPoint>> FindMyPointsAsync(PIServer piDataArchive, IList<int> pointIDs, IList<string> pointAttributes, CancellationToken token)
{
    return await PIPoint.FindPIPointsAsync(piServer: piDataArchive,
                                           ids: pointIDs,
                                           attributeNames: pointAttributes,
                                           cancellationToken: token);
}



  • The PIPoint attribute names to be loaded from the server as the PIPoint objects are found. The GetAttribute(String) method can be used to access the loaded attribute values. If null, then no attribute values are loaded for the returned PIPoints.
  • Asynchronous methods throw AggregateException on failure, which will contain one or more inner exceptions describing the failure
  • A cancellation token used to abort processing before completion. Passing the default CancellationToken.None will run to completion or until the PIConnectionInfo.OperationTimeOut period elapses.


Check if PI Point has been renamed

The PIPointChangeInfo.IsRenameEvent method is used to check whether a PIPointChangeInfo instance is a rename event. Previously this information could not be obtained through the AF SDK and required digging further into the PI Data Archive.


Method Signature

public bool IsRenameEvent(
               out string oldName,
               out string newName
)


Implementation through PIServer.FindChangedPIPoints Method

IList<PIPointChangeInfo> changes = piDataArchive.FindChangedPIPoints(largeCount, cookie, out cookie, null);
if (!(changes is null))
{
    foreach (PIPointChangeInfo change in changes)
    {
        if (change.IsRenameEvent(out string oldname, out string newname))
        {
            Console.WriteLine($"\tRenamed New name: {newname} Old name: {oldname}");
        }
    }
}



  • The PIServer.FindChangedPIPoints method involves several concepts, such as cookies, filterPoints and the PIPointChangeInfo structure, which the reader is encouraged to explore in detail in the AF SDK documentation
  • Cookie: Use the return value from a previous call to this method to find all changes since the last call. Pass null to begin monitoring for changes to PI Points on the PIServer. Pass an AFTime to find all changes since a particular time
  • filterPoints (Optional): A list of PIPoint objects for which the resulting changes should be filtered


The code demonstrating the use of the above methods in the form of a simple console application can be obtained at: GitHub - ThyagOSI/WhatsNewAF2018

A sample output from the console application



Hope this post helps you, the PI developer, explore these methods further and implement them in your future projects.

Please feel free to provide any feedback you may have on the new methods, their documentation, or this blog post.



PI Web API 2018 was released in late June with some interesting features. In this blog post, I will show you some of those features using the PI Web API client library for .NET Standard which was already upgraded to 2018.


According to the Changelog of the PI Web API 2018 help, the new features of PI Web API 2018 are listed below:


  • Read for notification contact templates, notification rules, notification rule subscribers and notification rule templates
  • Read, create, update and delete for annotation attachments on event frames
  • Retrieve relative paths of an element
  • Retrieve full inheritance branch of an element template
  • Allow reading annotations of a stream using the "associations" parameter
  • Allow showing all child attribute templates of an element template
  • Add parameter support for tables
  • Filter attributes by trait and trait category
  • Support health traits
  • Incrementally receive updates from streams with Stream Updates (CTP)

Data model changes:

  • Expose 'ServerTime' property on objects of asset server, data server and system
  • Expose 'DisplayDigits', 'Span' and 'Zero' properties on objects of attribute and point
  • Expose 'DefaultUnitsNameAbbreviation' property on objects of attribute and attribute template


You can get an idea of the new features by looking at the list above, but some code snippets will help you understand them better.


Please visit the GitHub repository to download the source code package used in this blog post.




Before we start, if you want to try this yourself, please create a .NET Framework or .NET Core console application and add the OSIsoft.PIDevClub.PIWebApiClient package according to the README.MD file of the client library repository.



Read for notification contact templates, notification rules, notification rule subscribers and notification rule templates


PI Web API 2018 comes with 4 new controllers (NotificationContactTemplate, NotificationRule, NotificationRuleSubscriber and NotificationRuleTemplate) with read access to the notification objects.


For this demonstration, I have created a Notification Rule to make sure that it is accessible through the client.NotificationRule.GetNotificationRulesQuery() action.



            PIAssetDatabase db = client.AssetDatabase.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018");
            PIItemsNotificationRule notificationRules = client.NotificationRule.GetNotificationRulesQuery(db.WebId, query: "Name:=Not*");
            Console.WriteLine($"Found {notificationRules.Items.Count}");


HTTP Request: GET - /piwebapi/notificationrules/search?databaseWebId=F1RDbvBs-758HkKnOzuxolJRlgBW-drMj-Q0eZlG9e0V3JigTUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOA&query=Name%3a%3dNot*


Running this application, the result is:



Read, create, update and delete for annotation attachments on event frames


I have manually created a new Event Frame (EF) using PI System Explorer. I want to retrieve the same EF programmatically and add an annotation with a value of "EF Test" through the CreateAnnotationWithHttpInfo() method available in the client library. I've checked the status code to make sure the request succeeded.



            PIAssetDatabase db = client.AssetDatabase.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018");
            PIItemsEventFrame efs = client.AssetDatabase.GetEventFrames(db.WebId);
            PIEventFrame ef = efs.Items.First();
            PIAnnotation piAnnotation = new PIAnnotation();
            piAnnotation.Value = "EF Test";
            ApiResponse<object> result = client.EventFrame.CreateAnnotationWithHttpInfo(ef.WebId, piAnnotation);
            if (result.StatusCode < 300)
                Console.WriteLine("Annotation in EF was created successfully!");

            PIItemsAnnotation piAnnotations = client.EventFrame.GetAnnotations(ef.WebId);
            foreach (PIAnnotation annotation in piAnnotations.Items)
                Console.WriteLine($"Annotation from Event Frame: {annotation.Value}");


HTTP Request: POST - /piwebapi/eventframes/F1FmbvBs-758HkKnOzuxolJRlgbFJEBGSE6BGbywAVXQAeEATUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFVkVOVEZSQU1FU1tFRjIwMTgwNzEwLTAwMV0/annotations


I can see my created annotations in PI System Explorer. Just right-click on the EF and then click on "Annotate..."; you will be able to see all annotations generated programmatically.



Retrieve relative paths of an element


PI Web API 2018 allows you to get a list of the full or relative paths to an element. Let's see how this works. Please refer to the AF tree below.




Using the code snippet below.


            PIElement element = client.Element.GetByPath(@"\\MARC-PI2016\AFPartnerCourseWeather\Cities\Chicago");
            PIItemsstring relativePath = client.Element.GetPaths(element.WebId, @"\\MARC-PI2016\AFPartnerCourseWeather");
            Console.WriteLine($"The Relative Path is {relativePath.Items.First()}.");


The second line calls GetPaths with the second argument set to "\\MARC-PI2016\AFPartnerCourseWeather", the full path of one of the element's parent AF objects, in this case the AF database.


HTTP Request: GET -  /piwebapi/elements/F1EmbvBs-758HkKnOzuxolJRlg-Jnwgsge5hGAwwAVXX0eAQTUFSQy1QSTIwMTZcQUZQQVJUTkVSQ09VUlNFV0VBVEhFUlxDSVRJRVNcQ0hJQ0FHTw/paths?relativePath=%5c%5cMARC-PI2016%5cAFPartnerCourseWeather


The result will be the element's relative path, that is, the path relative to one of its parents.



Retrieve full inheritance branch of an element template


In this example I have created 3 element templates and 1 element:

  • TemplateA
  • TemplateAA derived from TemplateA
  • TemplateAAA derived from TemplateAA.



PI Web API provides two interesting methods, GetBaseElementTemplates() and GetDerivedElementTemplates(), to get the base and derived element templates of an element template. Let's see how this works:


            PIElementTemplate templateA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateA]");
            PIElementTemplate templateAA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateAA]");
            PIElementTemplate templateAAA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateAAA]");

            PIItemsElementTemplate baseTemplatesFromTemplateA = client.ElementTemplate.GetBaseElementTemplates(templateA.WebId);           
            PIItemsElementTemplate baseTemplatesFromTemplateAA = client.ElementTemplate.GetBaseElementTemplates(templateAA.WebId);
            PIItemsElementTemplate baseTemplatesFromTemplateAAA = client.ElementTemplate.GetBaseElementTemplates(templateAAA.WebId);

            Console.WriteLine($"There are {baseTemplatesFromTemplateA.Items.Count} base templates in Element Template A");
            Console.WriteLine($"There are {baseTemplatesFromTemplateAA.Items.Count} base templates in Element Template AA");
            Console.WriteLine($"There are {baseTemplatesFromTemplateAAA.Items.Count} base templates in Element Template AAA");

            PIItemsElementTemplate derivedTemplatesFromTemplateA = client.ElementTemplate.GetDerivedElementTemplates(templateA.WebId);
            PIItemsElementTemplate derivedTemplatesFromTemplateAA = client.ElementTemplate.GetDerivedElementTemplates(templateAA.WebId);
            PIItemsElementTemplate derivedTemplatesFromTemplateAAA = client.ElementTemplate.GetDerivedElementTemplates(templateAAA.WebId);

            Console.WriteLine($"There are {derivedTemplatesFromTemplateA.Items.Count} derived templates in Element Template A");
            Console.WriteLine($"There are {derivedTemplatesFromTemplateAA.Items.Count} derived templates in Element Template AA");
            Console.WriteLine($"There are {derivedTemplatesFromTemplateAAA.Items.Count} derived templates in Element Template AAA");


HTTP Request: GET - /piwebapi/elementtemplates/F1ETbvBs-758HkKnOzuxolJRlg-av52dhufE6iHqLJJk5faATUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFTEVNRU5UVEVNUExBVEVTW1RFTVBMQVRFQV0/baseelementtemplates

HTTP Request GET - /piwebapi/elementtemplates/F1ETbvBs-758HkKnOzuxolJRlg-av52dhufE6iHqLJJk5faATUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFTEVNRU5UVEVNUExBVEVTW1RFTVBMQVRFQV0/derivedelementtemplates


Running the application, the results are:



Allow reading annotations of a stream using the "associations" parameter


Some of the actions from the Stream and StreamSets controllers have the associations parameter added as an optional input. If this input is null, the values will be retrieved without their annotations. If the associations input is equal to "Annotations", then the annotations will be retrieved as well. Let's take a look at the example using the Stream.GetRecorded() method.



            PIPoint point1 = client.Point.GetByPath($"\\\\{piDataArchive.Name}\\SINUSOID");
            PIExtendedTimedValues valuesWithAnnotation = client.Stream.GetRecorded(point1.WebId, "Annotations");
            IEnumerable<List<PIStreamAnnotation>> annotationsList = valuesWithAnnotation.Items.Where(v => v.Annotated == true).Select(v => v.Annotations);
            foreach (List<PIStreamAnnotation> annotations in annotationsList)
                foreach (PIStreamAnnotation annotation in annotations)
                    Console.WriteLine($"Showing annotation: {annotation.Value}, {annotation.ModifyDate}");


HTTP Request: GET  /piwebapi/streams/F1DPQuorgJ0MskeiLb6TmEmH5gAQAAAATUFSQy1QSTIwMTZcU0lOVVNPSUQ/recorded?associations=Annotations


The result is shown below:




Allow showing all child attribute templates of an element template


In this demonstration, I have created a new attribute template called ParentAttribute on the TemplateAAA element template, which has 2 child attribute templates named ChildAttribute1 and ChildAttribute2.



In previous versions of PI Web API it was not possible to access the child attribute templates of an attribute template, but this is possible in the 2018 version:


            PIElementTemplate templateAAA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateAAA]");
            PIItemsAttributeTemplate attributesWithChild = client.ElementTemplate.GetAttributeTemplates(templateAAA.WebId, showDescendants: true);
            foreach (PIAttributeTemplate attributeTemplate in attributesWithChild.Items)
                Console.WriteLine($"Showing attribute template -  Path: {attributeTemplate.Path}, HasChildren:{attributeTemplate.HasChildren}");


HTTP Request:  /piwebapi/elementtemplates/F1ETbvBs-758HkKnOzuxolJRlgoPGTVEM9d0S3M9GObE-kagTUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFTEVNRU5UVEVNUExBVEVTW1RFTVBMQVRFQUFBXQ/attributetemplates?showDescendants=True


The results are shown below:



Filter attributes by trait and trait category


PI Web API 2018 allows you to get the attribute traits of an attribute through two new inputs on the GetAttributes() method: trait and traitCategory. I have created the attribute traits for the MainAttribute.


Please refer to the code below.


            PIAttribute mainAttribute = client.Attribute.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018\AttributeTraits|MainAttribute");
            PIItemsAttribute attributeTraits = client.Attribute.GetAttributes(mainAttribute.WebId, trait: new List<string> { "Minimum", "Target" });



HTTP Request: GET -  /piwebapi/attributes/F1AbEbvBs-758HkKnOzuxolJRlgBG1jiF-E6BGbywAVXQAeEAUkg-GMAek0-aGWOEUJKR9QTUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxBVFRSSUJVVEVUUkFJVFN8TUFJTkFUVFJJQlVURQ/attributes?trait=Minimum&trait=Target


Please refer to the Attribute Trait page of the PI Web API help for more information.


Support health traits


PI Web API also allows you to get the health score and health status of an element by adding the same inputs of the previous example.



            PIElement mainElement = client.Element.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018\AttributeTraits");
            PIItemsAttribute healthAttributeTraits = client.Element.GetAttributes(mainElement.WebId, trait: new List<string> { "HealthScore" });


The variable healthAttributeTraits will store the Health Score attribute of the element.


Please refer to the Attribute Trait page of the PI Web API help for more information.


Incrementally receive updates from streams with Stream Updates (CTP)


PI Web API 2018 comes with Stream Updates, whose behavior is similar to PI Web API Channels, but it uses plain HTTP requests instead of WebSockets. Here is the description of this feature from the PI Web API help:


"Stream Updates is a way in PI Web API to stream incremental and most recent data updates for PIPoints/Attributes on streams and streamsets without opening a websocket. It uses markers to mark the specific event in a stream where the client got the last updates and uses those to get the updates since that point in the stream."


In the example below, we will get the WebIds of 3 PI Points and poll for their new real-time values three times, every 30 seconds.



            PIPoint point1 = client.Point.GetByPath("\\\\marc-pi2016\\sinusoid");
            PIPoint point2 = client.Point.GetByPath("\\\\marc-pi2016\\sinusoidu");
            PIPoint point3 = client.Point.GetByPath("\\\\marc-pi2016\\cdt158");
            List<string> webIds = new List<string>() { point1.WebId, point2.WebId, point3.WebId };

            PIItemsStreamUpdatesRegister piItemsStreamUpdatesRegister = client.StreamSet.RegisterStreamSetUpdates(webIds);
            List<string> markers = piItemsStreamUpdatesRegister.Items.Select(i => i.LatestMarker).ToList();
            int k = 3;
            while (k > 0)
            {
                System.Threading.Thread.Sleep(30000);  // wait 30 seconds between polls
                PIItemsStreamUpdatesRetrieve piItemsStreamUpdatesRetrieve = client.StreamSet.RetrieveStreamSetUpdates(markers);
                markers = piItemsStreamUpdatesRetrieve.Items.Select(i => i.LatestMarker).ToList();
                foreach (PIStreamUpdatesRetrieve item in piItemsStreamUpdatesRetrieve.Items)
                {
                    foreach (PIDataPipeEvent piEvent in item.Events)
                    {
                        Console.WriteLine("Action={0}, Value={1}, SourcePath={2}", piEvent.Action, piEvent.Value, item.SourcePath);
                    }
                }
                k--;
            }


HTTP Request: POST - /piwebapi/streamsets/updates?webId=F1DPQuorgJ0MskeiLb6TmEmH5gAQAAAATUFSQy1QSTIwMTZcU0lOVVNPSUQ&webId=F1DPQuorgJ0MskeiLb6TmEmH5gAgAAAATUFSQy1QSTIwMTZcU0lOVVNPSURV&webId=F1DPQuorgJ0MskeiLb6TmEmH5g9AQAAATUFSQy1QSTIwMTZcQ0RUMTU4


HTTP Request: GET /piwebapi/streamsets/updates?marker=796081654a5648a3bf0d322fb58df518_0&marker=e2604da295584f498ce65976437b0c0f_0&marker=6f402964b31e4635aac2b87a68b79e60_0


The results are shown below:



Exposing properties of objects


There were some data model changes:

  • Expose 'ServerTime' property on objects of asset server, data server and system
  • Expose 'DisplayDigits', 'Span' and 'Zero' properties on objects of attribute and point
  • Expose 'DefaultUnitsNameAbbreviation' property on objects of attribute and attribute template


We can easily demonstrate that through the following code:


            PIPoint point = client.Point.GetByPath($"\\\\{piDataArchive.Name}\\SINUSOID");
            //Expose 'ServerTime' property on objects of asset server, data server and system
            PIItemsAssetDatabase dbs = client.AssetServer.GetDatabases(afServer.WebId);
            afServer = client.AssetServer.Get(afServer.WebId);
            Console.WriteLine($"PI AF Server ServerTime is {afServer.ServerTime}");

            //Expose 'DisplayDigits', 'Span' and 'Zero' properties on objects of attribute and point
            Console.WriteLine($"Sinusoid PIPoint: DisplayDigits={point.DisplayDigits}, Span={point.Span}, Zero={point.Zero}");

            //Expose 'DefaultUnitsNameAbbreviation' property on objects of attribute and attribute template
            PIAttribute attribute = client.Attribute.GetByPath(@"\\MARC-PI2016\AFPartnerCourseWeather\Cities\Chicago|Temperature");
            Console.WriteLine($"DefaultUnitsNameAbbreviation of the attribute is {attribute.DefaultUnitsNameAbbreviation}");


The results are shown below:





I hope you've found this blog post useful for learning the new features of PI Web API 2018. Please provide your feedback so we can do the same when future PI Web API releases are published.

Note: Development and Testing purposes only. Not supported in production environments.


Link to other containerization articles

Containerization Hub



PI Data Archive 2018 was released on 27 June 2018! It is now time for us to upgrade and experience all the latest enhancements.


Legacy subsystems such as PI AF Link Subsystem, PI Alarm Subsystem, PI Performance Equation Scheduler, PI Recalculation Subsystem and PI Batch Subsystem are no longer installed by default, and they will not be in the PI Data Archive 2018 container because of the install command line that I have chosen for it. This upgrade procedure assumes that you were not using any of these legacy subsystems.


We also get client-side load balancing in addition to scheduled archive shifts for easier management of archives. Finally, there is the integrated PI Server installation kit, which is the enhancement I am most excited about. The kit can generate a command-line statement for use during silent installation. No more combing through the documentation to find the feature you want to install: just use the GUI to select the features you desire and save the command line to a file. The command line is useful in environments without a GUI, such as a container environment.


Today, I will be guiding you through upgrading your PI Data Archive 2017R2 container to the PI Data Archive 2018 container. In the article Overcome limitations of the PI Data Archive container, I addressed most of the limitations that were present in the original article Spin up PI Data Archive container. We are now left with the final limitation to address:


This example doesn't support upgrading without re-initialization of data.


I will show you how we can upgrade to the 2018 container without losing your data. Let's begin on this wonderful adventure!


Create 2017R2 container and inject data

See the "Create container" section in Overcome limitations of the PI Data Archive container for the detailed procedure on how to create the container. In this example, my container name will be pi17.

docker run -id -h pi17 --name pi17 pidax:17R2


Once your container is ready, we can use PI SMT to introduce some data which we can use as validation that the data has been persisted to the new container. I will create a PI Point called "test" to store some string data.

We will also change some tuning parameters such as Archive_AutoArchiveFileRoot and Archive_FutureAutoArchiveFileRoot to show that they are persisted as well.



Take a backup

Before proceeding with the upgrade, let us take a backup of the container using the backup script found here. This is so that we can roll back later on if needed.

The backup will be stored in a folder named after the container.


Build 2018 image

1. Get the files from elee3/PI-Data-Archive-container-build

2. Get the PI Server 2018 integrated install kit from the techsupport website

3. Procure a PI license that doesn't require an MSF, such as the demo license on the techsupport website

4. Your folder structure should look similar to this now.

5. Run build.bat.


Upgrade from 2017R2 to 2018

Now that we have the image built, we can perform the upgrade. To do so, stop the pi17 container.

docker stop pi17


Create the PI Data Archive 2018 container (I will name it pi18) by mounting the data volumes from the pi17 container.

docker run -id -h pi18 --name pi18 --volumes-from pi17 pidax:18



Now let us verify that the pi18 container has our old data and tuning parameters, and also check its version. We can do so with PI SMT.

Data has been persisted!

Tuning parameters have also been persisted!

The version is now 3.4.420.1182, which means the upgrade was successful. Note that the legacy subsystems mentioned above are no longer present.


Congratulations. You have successfully upgraded to the PI Data Archive 2018 container and retained your data.



Now what if you want to roll back to the previous version for whatever reason? I will show you that this is also simple to do. There are two ways we can go about it.


  • Restore: Will always work. Data added after the upgrade will be lost after the rollback; only data prior to the backup will be present. Requires a backup.
  • Non-Restore: Data added after the upgrade is persisted after the rollback. Might not always work; it depends on whether the configuration files are compatible between versions, e.g. it works for 2018 to 2017R2 but not for 2015 to earlier versions.


We will explore both methods in this blog since both methods will work for rolling back 2018 to 2017R2.


Restore method

In this method, we can remove pi17, recreate a fresh instance and restore the backup. In the container world, we treat software not as pets but more like cattle.

docker rm pi17
docker run -id -h pi17 --name pi17 pidax:17R2
docker stop pi17

Copy the backup folders into the appropriate volumes at C:\ProgramData\docker\volumes
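This copy step can be scripted. Here is a minimal sketch, assuming the backup script produced a folder named after the container (pi17) with one subfolder per Docker volume; the exact volume folder names are assumptions to verify on your host:

```shell
# Hypothetical helper: copy each backed-up volume folder back into
# Docker's volume store (C:\ProgramData\docker\volumes on Windows).
restore_volumes() {
  src="$1"   # e.g. ./pi17, the backup folder named after the container
  dest="$2"  # e.g. C:/ProgramData/docker/volumes
  for v in "$src"/*; do
    cp -r "$v" "$dest/"   # copy one volume folder at a time
  done
}

# Usage on the walkthrough's host would look like:
# restore_volumes ./pi17 "C:/ProgramData/docker/volumes"
```

The container must be stopped while the volumes are overwritten, which is why this step sits between docker stop and docker start.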

docker start pi17


Now let us compare pi17 and pi18 with PI SMT. We can see that they have the same data but their versions are different.



Non-Restore method

In this method, data that is added AFTER the upgrade will still be persisted after rollback. Let us add some data to the pi18 container.


We shall also change the tuning parameter from container17 to container18.


Now, let's remove any pi17 container that exists so that we only have the pi18 container running. After that, we can run:

docker rm -f pi17
docker stop pi18
docker run -id -h pi17 --name pi17 --volumes-from pi18 pidax:17R2


We can now verify that the data added after the upgrade still exists when we roll back to the 2017R2 container.




In this article, we have shown that it is easy to perform upgrades and rollbacks with containers while preserving data throughout the process. Upgrades that used to take days can now be done in minutes. There is no worry that upgrading will break your container, since data is separated from the container. One improvement I would like to see is for archives to be downgraded by an older PI Archive Subsystem automatically; currently, this cannot be done. If you try to connect to a newer archive format with an older piarchss without downgrading the version manually, you will see



However, the reverse is possible. Connecting to an older archive format with a newer piarchss will upgrade the version automatically.

I really need to see my CPU status all the time

How many times do you fish through Task Manager in Windows or pop open a terminal to run htop to reassure yourself that your CPU is running hot? If you write code, this is all the time. For one project at OSIsoft I actually bought a DEC VT-420 dumb tube and a serial-to-USB adapter whose sole purpose was to run top as I worked, because I needed to kill errant programs that often. But that wasn't very green, and it took up space on my desk.


I've found a better way. I have a Task Manager CPU display on my keyboard and I'm tracking my CPU% on the PI Server continuously.   Using Go*.



You can use any PI Web API client library you want, but this was a good way to get a one-off daemon up and running in a matter of minutes.

PI AF was released last week along with a new version of the AF SDK, so let me show you a feature that has been long requested by the community and is now available: the AFSession structure. This structure represents a session on the AF Server and exposes the following members:


Public properties:

  • AuthenticationType: The authentication type of the account which made the connection.
  • ClientHost: The IP address of the client host which made the connection.
  • ClientPort: The port number of the client host which made the connection.
  • EndTime: The end time of the connection.
  • GracefulTermination: A boolean that indicates if the end time was logged for graceful client termination.
  • StartTime: The start time of the connection.
  • UserName: The username of the account which made the connection.


In order to get session information for a given PI System, the PISystem class now exposes a method called GetSessions(AFTime? startTime, AFTime? endTime, AFSortOrder sortOrder, int startIndex, int maxCount), which returns an array of AFSession objects. AFSortOrder is an enumeration defining whether you want the start times sorted ascending or descending. Note that you can specify AFTime.MaxValue as the endTime to search only sessions which are still open.


From the documentation's remarks: The returned session data can be used to determine information about clients that are connected to the server. This information can be used to identify active clients. Then from the client machine, you can use the GetClientRpcMetrics() (for AF Server) method to determine what calls the clients are making to the server. Session information is not replicated in PI AF Collective environments. In these setups, make sure you connect to the member you want to retrieve session info from.


Shall we see it in action? The code I'm using is very simple:


var piSystem = (new PISystems()).DefaultPISystem;
var sessions = piSystem.GetSessions(new AFTime("*-1d"), null, AFSortOrder.Descending);
foreach (var session in sessions)
{
    Console.WriteLine($"---- {session.ClientHost}:{session.ClientPort} ----");
    Console.WriteLine($"Username: {session.UserName}");
    Console.WriteLine($"Start time: {session.StartTime}");
    Console.WriteLine($"End time: {session.EndTime}");
    Console.WriteLine($"Graceful: {session.GracefulTermination}");
}


A cropped version of the result can be seen below:


---- ----
Username: OSI\rborges
Start time: 07/02/18 13:18:54
End time:

---- ----
Username: OSI\rborges
Start time: 07/02/18 13:06:36
End time: 07/02/18 13:11:51
Graceful: True

---- ----
Username: OSI\rborges
Start time: 07/02/18 13:06:17
End time: 07/02/18 13:06:19
Graceful: True


As you can see, we can now easily monitor sessions in the PI System. Share your thoughts in the comments and tell us how you plan on using it.


Happy coding!



Rick's post on how to use metrics with AF SDK.

Note: Development and Testing purposes only. Not supported in production environments.


Link to other containerization articles

Containerization Hub



In this blog post, we will explore how to overcome the limitations mentioned in the earlier blog post Spin up PI Data Archive container. Container technology can contribute greatly to the manageability of a PI System (installations, migrations, maintenance, and troubleshooting that used to take weeks can potentially be reduced to minutes), so I would like to overcome as many limitations as I can so that these containers become production ready. Let us have a look at the limitations that were previously mentioned.


1. This example does not persist data or configuration between runs of the container image.

2. This example relies on PI Data Archive trusts and local accounts for authentication.

3. This example doesn't support VSS backups.


Let us go through them one at a time.


Data and Configuration Persistence

This limitation can be solved by separating the data from the application container. In Docker, we can make use of volumes, which are completely managed by Docker. When we persist data in volumes, the data exists beyond the life cycle of the container: even if we destroy the container, the data will remain. We create external data volumes by including the VOLUME directive in the Dockerfile, like so:


VOLUME ["C:/Program Files/PI/arc","C:/Program Files/PI/dat","C:/Program Files/PI/log"]


When we instantiate the container, Docker will now know that it has to create the external data volumes to store the data and configuration that exists in the PI Data Archive arc, dat and log directories.


Windows Authentication

This issue can be addressed with the use of a group Managed Service Account (gMSA) and a little voodoo magic. This enables the container host to obtain the Kerberos TGT for the container, so that the container can perform Kerberos authentication and be connected to the domain. The container host itself needs to be domain joined for this to happen.


VSS Backups

When data is persisted externally, we can leverage the VSS provider on the container host to take the VSS snapshot for us, so that we do not have to stop the container while performing the backup. This way, the container can run 24/7 without any downtime (as required by production environments). The PI Data Archive has mechanisms to put the archive in a consistent state and freeze it in preparation for the snapshot.


Create container

1. Grab the files in the 2017R2 folder from my Github repo and place them into a folder. elee3/PI-Data-Archive-container-build

2. Get PI Data Archive 2017 R2A Install Kit and extract it into the folder as well. Download from techsupport website

3. Procure a PI License that doesn't require a MSF such as the demo license on the techsupport website and place it in the Enterprise_X64 folder.

4. Your folder structure should look similar to this now.

5. Execute buildx.bat. This will build the image.

6. Once the build is complete, you can navigate to the Kerberos folder and run the PowerShell script inside to create a Kerberos-enabled container:

.\New-KerberosPIDA.ps1 -AccountName <GMSA name> -ContainerName <container name>

You can request a gMSA from your IT department and install it on your container host with the Install-ADServiceAccount cmdlet.


If you think it will be difficult to get a gMSA from your IT department, you can instead use the following command to create a non-Kerberos-enabled container:

docker run -id -h <DNS hostname> --name <container name> pidax:17R2

7. Go to the pantry to make some tea or coffee. After about 1.5 minutes, your container will be ready.


Demo of container abilities

1. Kerberos

This section only applies if you created a Kerberos enabled container. After creating a mapping for my domain account using PI System Management Tools (SMT) (the container automatically creates an initial trust for the container host so that you can create the mapping), let me now try to connect to the PI Data Archive container using PI System Explorer (PSE). After successful connection, let me go view the message logs of the PI Data Archive container.

We can see that we have Kerberos authentication from AFExplorer.exe a.k.a PSE.


2. Persist Data and Configuration

When I kill off the container, I can still see the configuration and data volumes persisted on my container host, so I don't have to worry that my data and configuration are lost.


3. VSS Backups

Finally, what if I do not want to stop my container but I want to take a backup of my config and data? For that, we can make use of the VSS provider on the container host. Obtain the 3 files here. elee3/PI-Data-Archive-container-build

Place them anywhere on your container host. Execute

.\backup.ps1 -ContainerName <container name>


The output of the command will look like this.


Your backup will be found in the pibackup folder that is automatically created and will look like this. pi17 is the name of my container.


Your container is still running all the time.


4. Restore a backup to a container

Now that we have a backup, let me show you how to restore it to a new container. It is a very simple 3 step process.

  • docker stop the new container
  • Copy the backup files into the persisted volume. (You can find the volumes at C:\ProgramData\docker\volumes)
  • docker start the container

As you can see, it can't get any simpler. When I browse my new container, I can see the values that I entered in the old container from which the backup was taken.



In this blog post, we addressed the limitations of the original PI Data Archive container to make it more production ready. Do we still have any need of the original PI Data Archive container then? My answer is yes. If you do not need the capabilities offered by this enhanced container, then you can use the original one. Why? Simply because the original one starts up in 15 seconds while this one starts up in 1.5 minutes! The 1.5 minutes is due to limitations in Windows Containers. So if you need to spin up PI Data Archive containers quickly without having to worry about these limitations (e.g. in unit testing), then the original container is for you.



We're excited to announce to PI Dev Club members that we now have a PI Web API Client Library for Go, the Google-sponsored programming language specifically designed around concurrent processing.


You can visit the GitHub repository of this library here.




go1.9.7 or later




If you haven't already, install the Go software development kit.


Run this line to install the PI Web API Client for go


go get -u


Note: You don't need the Go SDK after you have compiled a Go program and wish to deploy it somewhere. Go creates self-contained executables that bundle their dependent libraries inside them.


Getting Started


Here is a sample Go program for retrieving version information from the PI Web API.


Create a directory under %GOPATH% and let's call it webapitest. Then create a new code file with the name webapitest.go

This will print all the version numbers of your PI Web API server plugins. Replace the string literals {in braces} with the appropriate values for your environment.


// webapitest.go
package main

import (
    "context"
    "fmt"

    pi ""
)

var cfg = pi.NewConfiguration()

var client *pi.APIClient
var auth context.Context

func Init() {
    cfg.BasePath = "https://{your web api server here}/piwebapi"

    auth = context.WithValue(context.Background(), pi.ContextBasicAuth, pi.BasicAuth{
        UserName: "{user name here}",
        Password: "{password here}",
    })

    client = pi.NewAPIClient(cfg)
}

func main() {
    Init()

    response, _, fail := client.SystemApi.SystemVersions(auth)
    if fail != nil {
        fmt.Println(fail)
        return
    }

    fmt.Println("Here's all the plugin versions on PI Web API")
    for i := range response {
        fmt.Println(i, response[i].FullVersion)
    }
}

You can run the program by issuing the following commands


~/go/webapitest $ go build
~/go/webapitest $ ./webapitest


Your output should look something like this


~/go/webapitest $ ./webapitest
Here's all the plugin versions on PI Web API


Coding examples


There are some simple examples on how to start probing the PI Web API Client library over here.


Developing in Go


Golang programmers tend to develop using Visual Studio Code on Windows which has great golang support and is also available on MacOS and Linux.   There is great golang support available for emacs (configure emacs from scratch as a Go IDE) and vim as plugins which also give you function templates, IntelliSense, syntax checking, godoc (the documentation system for go), gofmt (code formatting/style) and support Delve, the debugger for the go language which cleanly handles the concept of go routines.


You can also build Go code with nothing but your web browser using the Go Playground.   This is a very handy tool where you can experiment with Go code snippets and compile and run them directly in a web browser, viewing the output.




A WebID wrapper has been added to the library.  You can review the unit test code to see how you can create WebIDs from paths.


Final Thoughts


Most everyone's exposure to Go is minimal (including mine!). But this language is expected to grow in popularity. The reason: Go's awesome power to simplify concurrent programming is making it spread quickly, particularly within the realm of sensors and other lightweight devices. Go code is also quite fast and produces programs that are lightweight yet powerful.


It's also a very simple programming language to learn (so simple you can become a Go programmer in 15 minutes if you have exposure to any other programming language).  Considering that there is also a rich library of data adapters written in Go, it made obvious sense to open a portal to the world of OSIsoft in golang.


Happy Gophering!



A gopher is the nickname Go programmers use to describe one another.


With the announcement of the PI Web API Client Library for the Go programming language, I have hope that we can all broaden our understanding of concurrent programming. Go isn't just the "programming language of the year". It really is an exciting time to be coding, particularly with a programming language as simple to implement and understand as this one.


What is Go?


package main

import (
    "fmt"
)

func main() {
    fmt.Println("Hello, playground")
}

Go (known as golang) is a language funded by Google with a design goal of making concurrent programming much easier to write, debug, and manage. The principal designers at Google are Robert Griesemer, Rob Pike, and Ken Thompson, formerly of Bell Labs. It's easier to explain what Go is by describing what it's not:


What isn't in Go



Objects

Yes, you can survive totally fine in a computer language these days with no objects. It certainly lowers the barrier for new programmers who don't have the patience to memorize design patterns. As a functional-style language that mimics C, Go is largely about avoiding held state. In web server-based REST programming and other types of concurrent processing, avoiding state as much as possible is ideal and makes concurrent code easier to understand.



Threads

Well, threads as you've always known them. Intel/ARM processor threads are still used by Go programs as you would expect, but the Go runtime utilizes a concept called a goroutine, which is much more lightweight than a thread; goroutines live within your processor's threads. The main advantage of the lightweight goroutine is massive scalability on a single server instance. A Java service that might support 5,000 threads could theoretically scale to hundreds of thousands of goroutines when implemented in Go. Some more benefits of goroutines:

  • You can run more goroutines on a typical system than you can threads.
  • Goroutines have growable segmented stacks.
  • Goroutines have a faster startup time than threads.
  • Goroutines come with built-in primitives to communicate safely between themselves (channels).
  • Goroutines allow you to avoid having to resort to mutex locking when sharing data structures.
  • Goroutines are multiplexed onto a small number of OS threads, rather than a 1:1 mapping.
  • You can write massively concurrent servers without having to resort to evented programming.
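The channel primitive mentioned above takes only a few lines to use. This toy example (not from the original post) totals a slice across two goroutines and fans the partial sums back in over a channel:

```go
package main

import "fmt"

// sum sends the total of its slice down a channel instead of returning it,
// so several slices can be totalled concurrently.
func sum(nums []int, out chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	out <- total
}

func main() {
	data := []int{1, 2, 3, 4, 5, 6}
	out := make(chan int)

	// Fan out: each half of the slice is totalled in its own goroutine.
	go sum(data[:3], out)
	go sum(data[3:], out)

	// Fan in: receiving from the channel blocks until each goroutine sends.
	a, b := <-out, <-out
	fmt.Println(a + b) // 21
}
```

Note there is no lock anywhere: the channel both transfers the data and synchronizes the goroutines.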


A process that normally would have to live on an expensive cloud VM instance or a medium-sized server can be scaled down to an Arduino device or a much smaller VM instance. This gives you an unparalleled amount of parallel power (pun intended), not only taking full advantage of all the hardware you paid for, but also affording you the ability to move down the hardware cost curve with future hardware.


An even more convincing selling point for Go is the power of its race-condition debugging, which is difficult in nearly every evolved programming language. The data race detector is built into Go and can pick out the memory conflict points in your code as you run a production workload through it. To invoke the detector, you just kick off your program with the go command using the -race option.


Anyone who has had to hunt down race conditions in .NET languages or C++ only to download loads of third-party tools to assist with locating offending race condition code would kill to have this feature built into the language.
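To make that concrete, here is a small hypothetical example of the channel-based style that sidesteps such races entirely: all updates to a shared total are funnelled through one owning goroutine, so there is no shared write for go run -race to flag.

```go
package main

import "fmt"

// count increments a shared total by funnelling every update through a
// single channel. Only one goroutine ever touches the total, which is the
// channel-based alternative to mutex locking around a shared variable.
func count(workers, perWorker int) int {
	updates := make(chan int)
	done := make(chan int)

	// One goroutine owns the total; everyone else just sends.
	go func() {
		total := 0
		for i := 0; i < workers*perWorker; i++ {
			total += <-updates
		}
		done <- total
	}()

	// Each worker sends its increments instead of writing shared memory.
	for w := 0; w < workers; w++ {
		go func() {
			for i := 0; i < perWorker; i++ {
				updates <- 1
			}
		}()
	}
	return <-done
}

func main() {
	fmt.Println(count(10, 100)) // 1000
}
```

A naive version where every worker did total++ on a shared variable would be exactly the kind of bug the -race flag reports.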


Exception handling

One of the behaviors Go definitely inherited from C is its approach to exception handling, or rather the lack of it. Just as you must in C and Visual Basic, in Go you need to check returns from functions and handle error states immediately after calls that have a non-zero probability of failure. The only things you can really trust to be available are memory, your local variables, and the CPU. Just about everything else you touch can fail (disk, network, HTTP calls, etc.).


To get around this, though, Go supports multiple return values from functions, which is a very pleasing feature of the language. A typical call to a function that might fail often looks like this:


theData, numRecords, err := GetRecordsFromDatabase(userContext, sql)
if err != nil {
     go myapp.PrintWarning("The database is down right now.  Contact Support. " + err.Error())
}

// ... Begin processing records


What Go feels like to code in


The designers of Go definitely went on a shopping trip: starting with C, they picked up concepts found in Pascal (the := assignment operator) and inferred assignments from JavaScript and C# (the var keyword). Go also has pointers and breakouts to assembly, and the compiler produces raw-metal binary executables that are freed from the need to host a JVM or a .NET runtime in order to launch.


One of the strongest benefits is that Go has garbage collection, a feature otherwise found in hosted languages that need frameworks (Java and .NET). So there aren't any calls to .free(), and destructors are not necessary since there are no objects. Instead, Go uses the defer keyword so you can run a cleanup routine when a function ends. And unlike C, there is no malloc() nonsense to worry about. It's the fun of C without the items that make C frustrating.
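The defer keyword is easy to demonstrate. In this toy example (mine, not from the post), the deferred function is registered up front but runs last, after the function body returns, on every exit path; it can even adjust a named return value:

```go
package main

import "fmt"

// runWithCleanup shows defer: the deferred closure is scheduled when
// declared but executes when the function returns, so cleanup always runs
// last. Because log is a named result, the deferred closure can append to it.
func runWithCleanup() (log []string) {
	defer func() { log = append(log, "cleanup ran last") }()
	log = append(log, "work done")
	return
}

func main() {
	for _, line := range runWithCleanup() {
		fmt.Println(line)
	}
}
```

The usual real-world shape is `defer f.Close()` immediately after a successful `os.Open`, which guarantees the file is closed no matter how the function exits.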


The Go language is also surprisingly simple to wrap your brain around, to the point that I am seeing people learn Go as their first programming language. The Go code spec has a strict convention on formatting, and gofmt, which comes with Go, can lint your code. It's also customary to use godoc to heavily document what you're doing inside goroutines. Once again, this comes with Go; no third-party tools are necessary to stylize your code. The burden of code reviews inside development teams is greatly simplified thanks to these standards.


These standards combined with the lightweight thread power this design offers make it easy to understand why this language is taking off so rapidly and why Google invested in it.


Where Should I Start?


These are places that helped me get started in Go:


IDE Developing in Go


Golang programmers tend to develop using Visual Studio Code on Windows which has great golang support and is also available on MacOS and Linux.   There is great golang support available for emacs (configure emacs from scratch as a Go IDE) and vim as plugins which also give you function templates, code completion, syntax checking, godoc (the documentation system for go), gofmt (code formatting/style) and support Delve, the debugger for the go language which cleanly handles the concept of go routines.


You can also build Go code with nothing but your web browser using the Go Playground. This is a very handy tool where you can experiment with Go code snippets and compile and run them directly in a web browser, viewing the output.


Happy Gophering!



A few years ago, we announced the public PI Web API endpoint in order to:


  • Provide developers access to a PI System who may not be able to access PI otherwise
  • Create code samples against the public endpoint and to host them under OSIsoft organization on GitHub
  • Offer developers a playground to exercise with PI Web API
  • Create a streamlined way to offer datasets in a PI System


Although we were able to achieve the goals above, I felt that visualizing the data only through PI Web API was a challenge, since common tools like PI System Explorer and PI Vision cannot work with PI data stored on a remote PI Web API endpoint. Given this scenario, I've developed the PI Web API Data Reference, which allows local attributes to access data from remote attributes through PI Web API endpoints. As a result, I can now see PI data from the public PI Web API endpoint within PI System Explorer.


This custom data reference (CDR) was developed on top of the PI Web API client library for .NET Framework and PI AF SDK. The basic idea is to make HTTP requests against PI Web API and convert the responses into PI AF SDK objects which PSE will be able to process.




  • PI Web API 2017 R2+ installed within your domain using Kerberos or Basic Authentication. If you are using an older version, some methods might not work.
  • PI AF 2017 R2+
  • .NET Framework 4.6.2


GitHub repository

Please visit the GitHub repository for this project, where you can find the source code, download the CDR under the GitHub release section, and read about the new features added. There are two folders: src and dist. The src folder has the source code package. If you compile the VS solution in Release mode, the binaries will be created in the dist folder. Due to the settings of the .gitignore file, this folder is empty.




  • Copy all files from the dist folder to %PIHOME%\PIWebApiDR
  • Register the CDR using the following command (you can also run register_plugin.bat):


"%pihome%\AF\regplugin" "OSIsoft.PIDevClub.PIWebApiDR.dll" "OSIsoft.PIDevClub.PIWebApiClient.dll" "Newtonsoft.Json.dll" /own:OSIsoft.PIDevClub.PIWebApiDR.dll 




  • Unregister the CDR using the following command (you can also run unregister_plugin.bat):


"%pihome%\AF\regplugin" -u "OSIsoft.PIDevClub.PIWebApiDR.dll" 


  • Delete all files from the %PIHOME%\PIWebApiDR folder.


ConfigString and ConfigStringEditor


The config string of this CDR has the following structure:

RemoteDatabaseId={RemoteDatabaseId};WebId={WebId}

  • RemoteDatabaseId is the ID of the RemoteDatabase which is an AF Element with many attributes with the configuration to connect to a remote PI Web API.
  • WebId is the Web ID 2.0 of the remote AF Attribute.
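For illustration only, a config string of that shape splits into its two parts as sketched below (in Go rather than the CDR's own C#; parseConfigString is a hypothetical helper, not part of the product):

```go
package main

import (
	"fmt"
	"strings"
)

// parseConfigString splits a "Key1=Val1;Key2=Val2" style config string
// into a map of keys to values.
func parseConfigString(cfg string) map[string]string {
	parts := map[string]string{}
	for _, pair := range strings.Split(cfg, ";") {
		if k, v, ok := strings.Cut(pair, "="); ok {
			parts[k] = v
		}
	}
	return parts
}

func main() {
	cfg := "RemoteDatabaseId={RemoteDatabaseId};WebId={WebId}"
	parts := parseConfigString(cfg)
	fmt.Println(parts["RemoteDatabaseId"], parts["WebId"])
}
```

The same two keys are what the ConfigStringEditor displays read-only, as described next.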


The ConfigStringEditor (the window opened when you click the "Settings..." button in PI System Explorer, generally used to edit an attribute's config string) does not allow the user to change the ConfigString of this attribute; it is used only to visualize the information. Please use the Remote Database Manager to create and delete databases that map remote ones.




The remote databases are stored under the "OSIsoft Technology" root element of the Configuration AF database.




Remote Database Manager

The Remote Database Manager should be used to manage the remote database on the system as well as to create local AF databases mapped to remote AF databases through PI Web API. Below you can find a screenshot of this application that comes with the PI Web API Data Reference.



Visualizing PI data from remote AF databases locally in PSE

On the screenshot below, you can see a PI System Explorer accessing data from the "Universities Mapped" database. This database was created by the Remote Database Manager, mapping the "Universities" AF database from the public PI Web API.



Features of the CDR in PI AF SDK


When writing our unit tests, we created a sample AF database with attributes using the PI Point DR and then created a mapped AF database using the PI Web API DR through the Remote Database Manager. Each test gets data from the PI Point DR and the PI Web API DR and compares the results to make sure they are the same. Below you can find the features implemented in the PI Web API data reference:

  • Asynchronous
  • DataPipe (please refer to PIWebApiEventSource.cs)
  • InterpolatedValue
  • InterpolatedValues
  • InterpolatedValuesAtTimes
  • PlotValues
  • RecordedValue
  • RecordedValues
  • RecordedValuesAtTimes
  • Summary
  • Summaries
  • Bulk calls


If you use AFDataPipes with attributes using the PI Web API Data Reference, the CDR will use PI Web API Channels under the hood to get new real-time values.



If you are having issues, make sure that:

  • .NET Framework 4.6.2 is installed on the computer registering the plugin.
  • The Remote Database Manager is run with administrative privileges.
  • The files are unblocked after being extracted from the compressed file.





In case you are using Basic authentication, the username and password will be stored in fixed attributes. Although the password will be encoded, it won't be safe: every user with read access to the Configuration database will be able to retrieve the password of the remote PI Web API endpoint. For Kerberos authentication, no credentials are stored in the AF database.


This is not an official product. As described, there are security risks involved. Use it carefully.



I hope that the PI community will benefit from using this library as another tool to share data. Please test it yourself and let me know if it works fine. I plan to record a video with some tips for using this great feature.

PI AF 2018 (AF SDK 2.10) offers a very significant change in filtering on attribute values: there is no longer a restriction that the attribute must belong to an element template. This allows for greater flexibility in filtering. For example, you may now search for attributes that don't belong to any template. Or better yet, you may search for attributes with the same name but belonging to different templates!


Other Features New to AF SDK 2.10


The bulk of this blog will cover the ValueType used with attribute value filters.  Before we dig too deep into that topic, let's take a quick look at the other new features.


There is a new AFSearchFilter.ID search filter to allow searching for objects by their unique identifier (GUID).  This unique ID is the fastest way to locate a given object on the AF Server.  A much-welcomed addition is that the ID supports the Equal or IN( ) operator.  If you are developing code, the best way to pass a GUID is by using ID.ToString("B").


New Search Fields are ConfigString, DisplayDigits, IsManualDataEntry, Trait, UOM, SecurityString, and SecurityToken.  Note that with a SecurityToken field, the FindObjectFields method would return an AFSecurityRightsToken object.


PI AF 2017 R2 (AF SDK 2.9.5) introduced the AFAttributeSearch and the PlugIn filter.  You could combine that filter plus the new ability to search on attributes without specifying a template.  For example, you now have the ability to perform a completely server-side search of all attributes referencing the PI Point data reference!  Stay tuned for a blog dedicated to this topic from one of my colleagues.


And now, the remainder of the blog will discuss the new ValueType.


If using an element or event frame Template


As mentioned earlier, previous versions required a Template for any attribute value filters.  An additional requirement was that the Template needed to be specified before the attribute value filters.  If the Template was specified after, then an exception was thrown.


Since AF SDK 2.10 removes the restriction on the Template, an interesting artifact is that you may specify the Template after the attribute value filter - and an exception will not be thrown.  However, we strongly recommend against this practice.  If you want to filter on a Template, we highly recommend you specify the Template first - just as you did with AF SDK prior to 2.10.  Nothing has changed with the new version here (nor should your existing code).  If you follow this advice, then you should skip the "AS valuetype" cast (more below).


Now let's consider if you don't follow the advice and you specify the Template after the attribute value filters.  You will need to include the "AS valuetype" cast, and its behavior will be as described in the remainder of this document.  The search will still be limited to the specified Template but as far as the attribute value filters are concerned, they will be treated as if the template was entirely omitted.  Precisely how they are treated depends on the attribute's data type and the "AS valuetype" cast you declare, which is presented in detail below.


Casting AS ValueType - When not using a Template

(or the Template is specified after the attribute value filters)


To support this new capability, there are several new things to discover with AFSearch to address the issue of a filter attribute's data type.  When based on a template, the data type is easily inferred.  What happens if a template isn't specified and the attribute does not belong to a template and/or may span different templates?  How does the search know which data type to use in the filter?  The answer is that it is left up to you (the developer) to pass the desired value type as you build the search, either by a query string or search tokens:


  • New AFSearchValueType enumeration
  • A new AFSearchToken.ValueType property (a string)
  • Two new AFSearchToken constructors to allow you to indicate the ValueType (also a string)


Golden Rule:

  • If you DO specify the template, do NOT specify the value type. 
  • If you do NOT specify a template, then you SHOULD specify a value type.


How carved in stone is the above "SHOULD"?  If there is any possibility whatsoever of an ambiguous interpretation between a String versus a Numeric, then you absolutely should specify the value type.  For example,  AFSearch has no way of knowing whether 1 or '1' or "1" should be a Numeric versus a String value.  Best practice: anything numeric should always specify "AS Numeric".


If you are using search tokens, you would use the new AFSearchToken constructors.  If you are using a query string, you would use the new AS <value type> syntax.  The available values for value type are:

  • Numeric, i.e. the literal text "Numeric"
  • String, i.e. the literal text "String"
  • EnumerationSet, the name of the applicable AFEnumerationSet.  Do NOT use the literal text "EnumerationSet".
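The query-string form can be sketched with a tiny helper (in Go for consistency with the earlier posts; buildValueFilter is a hypothetical convenience, not part of the AF SDK, which expects only the resulting string):

```go
package main

import "fmt"

// buildValueFilter assembles an AFSearch attribute-value filter with the
// "AS valuetype" cast described above, e.g. '|RunStatus':1 AS 'Numeric'.
// Quoting the path and value-type keeps the string safe if they contain
// blanks, as with the 'Building Type' enumeration set.
func buildValueFilter(attrPath, value, valueType string) string {
	return fmt.Sprintf("'%s':%s AS '%s'", attrPath, value, valueType)
}

func main() {
	fmt.Println(buildValueFilter("|RunStatus", "1", "Numeric"))
}
```

For an enumeration set you would pass the set's name, e.g. buildValueFilter("|Classification", "'Commercial'", "Building Type"), matching the example in the next section.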


Typical Scenarios with Numeric or String


The brief examples below look very similar with the exception of the value type designator (Numeric or String).  This bears repeating: you only need to use the AS valuetype if you do not specify a template.


Data Type Numeric: Integer (Byte, Int16, Int32, Int64) or Floating Point (Float, Single, or Double)

Consider if you have an attribute named RunStatus, and its data type is an Int32, where 0 means not running and 1 means running.  The query string could look like either of these:

  • "|RunStatus:1 AS Numeric"
  • "|RunStatus:'1' AS Numeric"


Data Type String

And if RunStatus was a String where "0" means not running and "1" means running, you would use these:

  • "|RunStatus:1 AS String"
  • "|RunStatus:'1' AS String"


Typical Scenarios with EnumerationSet


We continue covering scenarios when you do not specify a template.  We turn to another typical scenario of when the attribute's data type is an Enumeration Set.  Here it doesn't matter where the data comes from, be it a PI point, a table lookup, formula, or even a static value.  What matters to the value being filtered on is that the attribute has been declared to use an enumeration set.  The important thing is to use the AS specifier followed by the name of the enumeration set; do NOT use the literal value "EnumerationSet".


string attrPath = "|Classification";
string attrValue = "Commercial";
string enumSetName = "Building Type";
string query = $"'{attrPath}':='{attrValue}' AS '{enumSetName}'";


The above snippet safely wraps everything I created in quotes. Note that because the enumeration set name contains a blank, I absolutely must wrap it in quotes. In this specific case, where neither attrPath nor attrValue contains a blank, I could have omitted the quotes. The value in the variable query will be:


"'|Classification':='Commercial' AS 'Building Type'"


Later when passed into an AFAttributeSearch constructor, the search instance will resolve to:


{|Classification:Commercial AS "Building Type"}



Cases Needing Special Consideration


There are special cases you may need to keep in mind beyond the typical Numeric, String, or EnumerationSet.  Obviously an attribute that is a string should use "AS String" and an attribute that has a number data type should use "AS Numeric".  What about data types that aren't so obvious?  Boolean or DateTime, for example?


Data Type Boolean: cast AS String

Let's investigate another area of caution.  An attribute whose data type is Boolean is not exactly a number and not exactly a string.  As far as AFSearch is concerned, you should compare the literal values of a Boolean, namely "True" and "False", as strings.  Therefore the following filters would all be correct for a Boolean data type:

  • "|RunStatus:True AS String"
  • "|RunStatus:'True' AS String"


It's absolutely important with Booleans to specify "AS String".  You need to be aware that the following will quietly fail by returning 0 items:

  • "|RunStatus:True"


Data Type DateTime: cast AS Numeric

This can be a bit tricky.  The safest practice when dealing with DateTime attributes, whether you have a DateTime or an AFTime instance in code, is to use Round Trip Formatting.  That is to say, use ToString("O") when converting a DateTime or AFTime to string for the AFSearchToken constructor or within a query string.  And despite passing it as a string, it will actually be treated AS Numeric.  So these snippets work:


The variable "date" may be either a DateTime or an AFTime instance:

  • $"|Timestamp:>'{date.ToString("o")}'"
  • $"|Timestamp:>'{date.ToString("o")}' AS Numeric"


The above uses an Interpolated String.  If you prefer string.Format, it would be:

  • string.Format("|Timestamp:>'{0}'", date.ToString("o"))
  • string.Format("|Timestamp:>'{0}' AS Numeric", date.ToString("o"))
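Putting the round-trip formatted date to work in a full search, a hedged sketch (the database name and the |Timestamp attribute path are assumptions) might look like:


using System;
using OSIsoft.AF;
using OSIsoft.AF.EventFrame;
using OSIsoft.AF.Search;

// Sketch: find event frames whose |Timestamp attribute falls after a given date.
PISystem sys = new PISystems().DefaultPISystem;
AFDatabase db = sys.Databases["MyDatabase"];  // hypothetical database name

DateTime date = new DateTime(2018, 1, 1, 0, 0, 0, DateTimeKind.Utc);
string query = $"|Timestamp:>'{date.ToString("o")}'";  // round-trip format, treated AS Numeric

using (AFEventFrameSearch search = new AFEventFrameSearch(db, "Recent Frames", query))
{
    foreach (AFEventFrame frame in search.FindEventFrames())
    {
        Console.WriteLine($"{frame.Name} started {frame.StartTime}");
    }
}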


TimeSpan with Data Type Anything: Not Supported

Some customers have attributes with data type "<Anything>" to hold a TimeSpan object.  Value filtering on such time span attributes is not supported in AF SDK 2.10.


If your time span attribute is defined to hold an integer or a floating point, then it would be treated as a Numeric data type (see above).


Digital PIPoint with Data Type Anything (not using an AFEnumerationSet): cast AS String or Omit AS specifier

For an attribute using the PI Point data reference that reads from a Digital tag, we recommend that the attribute data type be "<Anything>".  You do not have to map the digital tag to an AFEnumerationSet.  You can if you want to, but that means (1) you have to copy the PIStateSet as an AFEnumerationSet, and (2) the attribute then falls under "AS EnumerationSet" as discussed elsewhere in this document.


Assuming you have a string variable named "stateText", which contains the text of a given digital state, the following would be used to filter:

  • $"|Digital:'{stateText}'"
  • $"|Digital:'{stateText}' AS String"


For instance, if your digital tag used the "Modes" StateSet and you wanted to filter on those attributes with a mode of "Manual", either of these should suffice:

  • "|Digital:'Manual'"
  • "|Digital:'Manual' AS String"


Past AFSearch Blogs


What's new in AFSearch 2.9.5 (PI AF 2017 R2)  (March 2018) - AFAttributeSearch is introduced.


Coding Tips for using the AFSearch IN Operator - you may search for multiple values using IN(value1; value2; etc.).  Some code is offered to make this easier to generate.


Using the AFEventFrameSearch Class (Nov 2016) - Giving some attention to event frames, since many of the earliest AFSearch examples were element based.


Aggregating Event Frame Data Part 1 of 9 - Introduction  (May 2017) - From the UC 2017 TechCon/DevCon hands on lab for Advanced Developers.


PI World 2018 SF Developer Track - Getting the most out of AFSearch - (May 2018) From PI World 2018, DevCon presentation for Intermediate Developers.


Why you should use the new AF SDK Search  (June 2016) - The granddaddy of them all.  The earliest post explaining AFSearch, which was new at that time.

A customer recently asked about filtering on multiple values of an attribute in AFSearch.  This is easily addressed using the IN search operator in a query string, or the equivalent AFSearchToken constructor expecting an array of values.  There is one major caveat: the IN operator is not allowed for floating point types such as Single or Double, since binary floating point values are considered approximations instead of exact numbers.


There are a few tips to get developers pointed in the right direction.  First of all, values within the IN operator are delimited by semi-colons.  If you (like me) accidentally use a comma-delimited list, you will receive an odd error about a missing closing parenthesis.  If you then carefully inspect the query string, you (like me) will start pulling your hair out because you don’t see a missing closing parenthesis!  Hopefully you may benefit from my pain and quickly switch your delimiter to a semi-colon.


Another tip is that while blanks are allowed within the IN operator - or more specifically around the delimiters - any values that contain embedded blanks must be wrapped in quotes (single or double quotes).  Consider this list of cities:


  • London
  • Paris
  • San Francisco


Obviously, San Francisco needs to be wrapped in quotes due to the blank inside the name.  If the attribute path is “|City”, then you could have this C# filter in your query string:


string uglyQuery = "|City:IN(London; Paris; \"San Francisco\")";


Though ... that is a wee bit ugly.  Besides being ugly, it also might be confusing to someone using a different language, for instance, VB.NET.  A couple of weeks ago, I taught a class where a VB coder thought the unsightly \" were delimiters for AFSearch.   I explained to him that it's how C# escapes double quotes.  This can look much easier on the eyes if you just use single quotes:


string prettyQuery = "|City:IN(London; Paris; 'San Francisco')";


And it avoids any needless confusion that \" is an AFSearch delimiter.


This is all fine-and-dandy, but how often do you hard-code values?  It does make the example simple and straightforward, but what if the items you wish to filter upon are in a list or array?  Let’s look at some code that helps us out.  After all, this is the PI Developers Club, which demands code.  Strictly speaking, all we need is for the values to be in an enumerable collection.  Your source collection doesn’t have to be strings, but keep in mind that eventually we will need to pass a string to the AFSearch query or, if we use the AFSearchToken, an array of strings.


For our cities example, it makes sense that the city names are strings.  Let’s not take the easy way out and make it an array of strings.  Instead we will use a list so that we can see what we must do differently to make it all work together:


List<string> cities = new List<string>() { "London", "Paris", "San Francisco" };


A tiny bit of code is needed to put that inside a query string:


// A little LINQ to wrap single quotes around each city, and separate them with "; "
string delimitedCities = string.Join("; ", cities.Select(x => $"'{x}'"));

// An Interpolated String to substitute the delimitedCities.
string query = $"|City:IN({delimitedCities})";


This resulting value in query would be:


"|City:IN('London'; 'Paris'; 'San Francisco')"


Or if you prefer working with search tokens, this would be the equivalent:


// Tip: here you must pass a string array, so we must ToArray() on the cities list.
AFSearchToken token = new AFSearchToken(AFSearchFilter.Value, AFSearchOperator.In, cities.ToArray(), "|City");


If we had defined cities to be a string array, we would not need the ToArray(), but then this example would be boring and less educational.


What if our enumerable collection isn’t a bunch of strings?  Let’s say we have a bunch of GUIDs.  (Exactly how you got these GUIDs is an interesting question not addressed here; suffice to say this example takes a collection of things that aren't strings and converts them to one that is.)  There would now be an extra step where we must convert to string.  Once we have a collection of strings, we can implement code similar to the previous examples.  Let’s imagine we have something like this:


IEnumerable<Guid> ids = GetListOfGuids();  // magically get a list of GUID


Or it could maybe have been:


IList<Guid> ids = GetListOfGuids();


Let's say we want to filter on an attribute path of “|ReferenceID”.  First let’s tackle the problem of converting a GUID into a string that is compatible with AFSearch.  This is easy enough thanks to LINQ:


// The nicest way to convert a GUID to a string compatible with AFSearch is to use Guid.ToString("B").
IEnumerable<string> idStrings = ids.Select(x => x.ToString("B"));


Okay, so now we have an enumerable collection of strings.  Using what we learned in previous examples, we can knock this out:


// A little LINQ to wrap single quotes around each string item, and separate them with "; "
string delimitedIds = string.Join("; ", idStrings.Select(x => $"'{x}'"));

// An Interpolated String to substitute the items.
string query = $"|ReferenceID:IN({delimitedIds})";


Fantastic.  If you prefer AFSearchTokens, that’s easy enough as well, but we do need to turn idStrings into a string array.


// Tip: here you must pass a string array, so we ToArray() on the idStrings collection.
AFSearchToken token = new AFSearchToken(AFSearchFilter.Value, AFSearchOperator.In, idStrings.ToArray(), "|ReferenceID");


Granted our example would have been simplified if we defined idStrings to be an array in the first place, but what fun would there be in that?



VB.NET Examples


Some of us supporting the PI Developers Club think there should be more VB.NET examples.  Towards that end, here are code snippets for the VB coders out there:


    Dim uglyQuery As String = "|City:IN(London; Paris; ""San Francisco"")"

    Dim prettyQuery As String = "|City:IN(London; Paris; 'San Francisco')"

    Dim cities As List(Of String) = New List(Of String) From {"London", "Paris", "San Francisco"}


Cities Query Example:


Query String

        ' A little LINQ to wrap single quotes around each city, and separate them with "; "
        Dim delimitedCities As String = String.Join("; ", cities.Select(Function(x) $"'{x}'"))

        ' An Interpolated String to substitute the delimitedCities.
        Dim query As String = $"|City:IN({delimitedCities})"



' Tip: here you must pass a string array, so we ToArray() on the cities list.
Dim token As AFSearchToken = New AFSearchToken(AFSearchFilter.Value, AFSearchOperator.In, cities.ToArray(), "|City")


Guids Example (or something that is not a collection of strings)


Query String

        Dim ids As IEnumerable(Of Guid) = GetListOfGuids()   ' magically get a list of GUID

        ' The nicest way to convert a GUID to a string compatible with AFSearch is to use Guid.ToString("B").
        Dim idStrings As IEnumerable(Of String) = ids.Select(Function(x) x.ToString("B"))

        ' A little LINQ to wrap single quotes around each string item, and separate them with "; "
        Dim delimitedIds As String = String.Join("; ", idStrings.Select(Function(x) $"'{x}'"))

        ' An Interpolated String to substitute the items.
        Dim query As String = $"|ReferenceID:IN({delimitedIds})"



        ' Tip: here you must pass a string array, so we ToArray() on the idStrings collection.
        Dim token As AFSearchToken = New AFSearchToken(AFSearchFilter.Value, AFSearchOperator.In, idStrings.ToArray(), "|ReferenceID")




If you want to filter on a variety of values for an attribute, this requires the IN search operator.

  • You may use IN in a query string or an AFSearchToken.
  • Values must be exact.  Therefore, IN does not work on Single or Double floating point data types.
  • In query strings
    • The semi-colon is the delimiter between IN values.  Example: "|City:IN(London; Paris; 'San Francisco')"
    • Values containing blanks must be enclosed in single or double quotes.  Example: "|City:IN(London; Paris; 'San Francisco')"
  • AFSearchToken
    • Values must be passed as strings in an array

I have been asked on many occasions “How can I find the first recorded event for a tag?”  The direct answer to this may be as brief as the question.  However, usually there is a lurking question-behind-the-question about what they really want to do with that first event, and if you dig even slightly deeper you will uncover the overall task they are trying to accomplish.  What you may ultimately discover is there is no need to find the first event.


Let’s start off with the direct, simple answer in the form of a C# extension method:


public static AFValue BeginningOfStream(this PIPoint tag) => tag.RecordedValue(AFTime.MinValue, AFRetrievalMode.After);


This works because the PI Point data reference implements the Rich Data Access (RDA) method RecordedValue.  The earliest timestamp to query is AFTime.MinValue, that is, midnight January 1, 1970 UTC.  Thanks to the AFRetrievalMode, you ask for the first value after January 1, 1970.  If it’s only the earliest recorded timestamp you are concerned with, you can use this extension method:


public static AFTime BeginningTimestamp(this PIPoint tag) => BeginningOfStream(tag).Timestamp;


For PI points, this would give you the BeginningOfStream method to go along with the built-in EndOfStream.  Before the advent of future data, the EndOfStream was simply the Snapshot.  But there are oddities related to future data, which required different handling of data compared to the traditional historical data.  Hence, Snapshot was replaced by CurrentValue, and EndOfStream was added.
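As a usage sketch of the extension methods above (the default server and the "sinusoid" tag name are assumptions):


using System;
using OSIsoft.AF.Asset;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

// Hypothetical server and tag; BeginningOfStream and BeginningTimestamp are
// the extension methods defined above.
PIServer server = new PIServers().DefaultPIServer;
PIPoint tag = PIPoint.FindPIPoint(server, "sinusoid");

AFValue first = tag.BeginningOfStream();
AFTime firstTime = tag.BeginningTimestamp();
AFValue latest = tag.EndOfStream();  // the built-in counterpart

Console.WriteLine($"First: {first.Value} at {firstTime}; latest: {latest.Value}");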


An AFAttribute could have a BeginningOfStream method, but it doesn’t have the same nice guarantees of PIPoint.  It all depends upon the data reference being used and whether it supports the RDA Data.RecordedValue method, which is why you should properly check that it is supported before attempting to call it:


public static AFValue BeginningOfStream(this AFAttribute attribute)
{
    if (attribute.DataReferencePlugIn == null) // static attribute
        return attribute.GetValue();
    if (attribute.SupportedDataMethods.HasFlag(AFDataMethods.RecordedValue))
        // Depends on how well the data reference PlugIn handles AFRetrievalMode.After.
        return attribute.Data.RecordedValue(AFTime.MinValue, AFRetrievalMode.After, desiredUOM: null);
    // Fabricated answer.  Not the exact answer one is hoping for.
    return AFValue.CreateSystemStateValue(AFSystemStateCode.NoData, AFTime.MinValue);
}


Since the value being returned may be fabricated, I would be hesitant to include a BeginningTimestamp method, as it would mask the inaccuracies.  To compensate, further inspection of the returned value is needed, i.e. checking for “No Data”.  Such are the difficulties of trying to create a BeginningOfStream method within your code libraries.  This is why we begin to probe more and ask about your use-case, or simply ask “What are you really wanting to do?”


Virtually 100% of the people asking me how to find the first value in a stream want to find it for historical PI points only.  This greatly simplifies part of the problem because there is no need to be concerned with attributes or tags with future data.  Which brings us right back to the direct answer at the top, where you may be inclined to stop.  But if you take time to dig just a little deeper into what they are really doing, the true mission is revealed: they want to copy all historical data from an old tag to a new tag.  And trust me, there are several legitimate use-cases for doing this.


When I hear anything about copying of historical data, the first thought I have is “How much data is in the old tag?”  There are two particular answers that require the same special handling: (a) I don’t know, or (b) a lot.


The real problem they need to solve isn’t finding the earliest recorded timestamp.  Rather, they may have so much data that they will bump into the ArcMaxCollect limitation (typically 1.5 million data values in a single data request).  There are programming ways around ArcMaxCollect (more below), and they rely upon the PIPoint.RecordedValues method with a maxCount > 0 (for instance, 100K works well).  The perceived issue of knowing the earliest timestamp becomes a moot point.  The more important date is the end time, that is, the switch-over date from the old tag to the new tag.  Depending upon how the old tag was populated, this may very well be the EndOfStream.  But if there is a chance that the old tag could still be receiving “InterfaceShut” or “IOTimeout” messages, you will need to explicitly specify the end time.  Worrying about the earliest recorded date has been a distraction from solving the real problem.


What of your start time?  An in-house developer should know the earliest start of their company's archive files.  A contracted developer could use AFTime.MinValue or go with a later, but still safe, date such as “1/1/1980”.  Which brings us back to what they really want to do: copy large or unknown amounts of data.  This has been blogged about many times before:


Extracting large event counts from the PI Data Archive


PI DataPipe Events Subscription and Data Access Utility using AF SDK - PIEventsNovo


GetLargeRecordedValues - working around ArcMaxCollect
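In the spirit of the posts above, here is a hedged sketch of such a paging loop; the page size, method name, and timestamp-advance trick are my own assumptions rather than the exact approach taken in those posts:


using System.Collections.Generic;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

public static class LargeCopyHelper
{
    // Page through history maxCount events at a time, so no single call
    // approaches the ArcMaxCollect limit.  Tune pageSize for your environment.
    public static IEnumerable<AFValue> GetRecordedValuesPaged(
        PIPoint tag, AFTime startTime, AFTime endTime, int pageSize = 100000)
    {
        AFTime start = startTime;
        while (start < endTime)
        {
            AFValues page = tag.RecordedValues(
                new AFTimeRange(start, endTime),
                AFBoundaryType.Inside,
                filterExpression: null,
                includeFilteredValues: false,
                maxCount: pageSize);

            foreach (AFValue value in page)
                yield return value;

            if (page.Count < pageSize)
                yield break;  // final, partial page

            // Advance just past the last returned timestamp.  Caveat: events
            // sharing that exact timestamp across a page boundary could be
            // skipped; the posts above handle that edge case more carefully.
            start = new AFTime(page[page.Count - 1].Timestamp.UtcSeconds + 1e-7);
        }
    }
}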
