
Introduction

 

In 2014, I wrote a blog post about Developing a PHP application using PI Web API. At that time, it was necessary to build the URLs through string concatenation in order to make HTTP requests against PI Web API. With the Swagger specification that ships with the PI Web API 2017 release, I was able to generate a PI Web API client library for PHP. In this blog post, I will rewrite that PHP application to use this library instead of writing code to generate the URLs. Let's see how this works.

 

The source code package of this PHP application with PI Web API client is available on this GitHub repository.

 

Adding the library to the project

 

Download the PHP files from the PI Web API Client library for PHP GitHub repository, extract the files, and copy the lib folder to the root of your PHP application folder.

 

 

Comparing the piwebapi_wrapper.php files

 

Let's compare both piwebapi_wrapper.php files. The first one is from the 2014 blog post; the second one is from this blog post and uses the PI Web API client library.

 

piwebapi_wrapper.php with no client library.

 

<?php
class PIWebAPI
{
  public static function CheckIfPIServerExists($piServerName)
  {
    $base_service_url = "https://cross-platform-lab-uc2017.osisoft.com/piwebapi/";
    $url = $base_service_url . "dataservers";
    $obj = PIWebAPI::GetJSONObject($url);
    foreach ($obj->Items as $myServer)
    {
      if (strtolower($myServer->Name) == strtolower($piServerName))
      {
        return (true);
      }
    }
    return (false);
  }

  public static function CheckIfPIPointExists($piServerName, $piPointName)
  {
    $base_service_url = "https://cross-platform-lab-uc2017.osisoft.com/piwebapi/";
    $url = $base_service_url . "points?path=\\\\" . $piServerName . "\\" . $piPointName;
    $obj1 = PIWebAPI::GetJSONObject($url);
    try {
      if (($obj1->Name) != null)
      {
        return (true);
      }
      return (false);
    }
    catch (Exception $e)
    {
      return (false);
    }
  }

  public static function GetSnapshot($piServerName, $piPointName)
  {
    $base_service_url = "https://cross-platform-lab-uc2017.osisoft.com/piwebapi/";
    $service_url = $base_service_url . "points?path=\\\\" . $piServerName . "\\" . $piPointName;
    $obj_pipoint = PIWebAPI::GetJSONObject($service_url);
    $url = $obj_pipoint->Links->Value;
    $obj_snapshot = PIWebAPI::GetJSONObject($url);
    return ($obj_snapshot);
  }

  public static function GetRecordedValues($piServerName, $piPointName, $startTime, $endTime)
  {
    $base_service_url = "https://cross-platform-lab-uc2017.osisoft.com/piwebapi/";
    $service_url = $base_service_url . "points?path=\\\\" . $piServerName . "\\" . $piPointName;
    $obj_pipoint = PIWebAPI::GetJSONObject($service_url);
    $url = $obj_pipoint->Links->{'RecordedData'} . "?starttime=" . $startTime . "&endtime=" . $endTime;
    $obj_rec = PIWebAPI::GetJSONObject($url);
    return ($obj_rec);
  }

  public static function GetInterpolatedValues($piServerName, $piPointName, $startTime, $endTime, $interval)
  {
    $base_service_url = "https://cross-platform-lab-uc2017.osisoft.com/piwebapi/";
    $service_url = $base_service_url . "points?path=\\\\" . $piServerName . "\\" . $piPointName;
    $obj_pipoint = PIWebAPI::GetJSONObject($service_url);
    $url = $obj_pipoint->Links->{'InterpolatedData'} . "?starttime=" . $startTime . "&endtime=" . $endTime . "&interval=" . $interval;
    $obj_int = PIWebAPI::GetJSONObject($url);
    return ($obj_int);
  }

  private static function GetJSONObject($url)
  {
    $username = "username";
    $password = "password";
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    curl_setopt($ch, CURLOPT_USERPWD, $username . ":" . $password);
    $result = curl_exec($ch);
    $json_o = json_decode($result);
    return ($json_o);
  }
}

 

 

piwebapi_wrapper.php with PHP client library.

 

 

<?php
include_once(__DIR__ . "\\lib\\PIWebApiLoader.php");

use \PIWebAPI\Client\PIWebApiClient;

class PIWebAPI
{
  public $piwebapi = NULL;

  public function __construct()
  {
    $this->piwebapi = new PIWebApiClient("https://cross-platform-lab-uc2017.osisoft.com/piwebapi", "username", "password", "BASIC", FALSE, TRUE);
  }

  public function checkIfPIServerExists($piServerName)
  {
    try
    {
      $response = $this->piwebapi->dataServer->dataServerGetByNameWithHttpInfo($piServerName);
      if ($response[1] == 200)
      {
        return true;
      }
      else
      {
        return false;
      }
    }
    catch (Exception $e)
    {
      return false;
    }
  }

  public function getPIPoint($piServerName, $piPointName)
  {
    try
    {
      $path = "\\\\" . $piServerName . "\\" . $piPointName;
      $response = $this->piwebapi->point->pointGetByPathWithHttpInfo($path);
      if ($response[1] == 200)
      {
        return $response;
      }
      else
      {
        return null;
      }
    }
    catch (Exception $e)
    {
      return null;
    }
  }

  public function getSnapshot($webId)
  {
    return $this->piwebapi->stream->streamGetEnd($webId);
  }

  public function getRecordedValues($webId, $startTime, $endTime)
  {
    return $this->piwebapi->stream->streamGetRecorded($webId, null, null, $endTime, null, null, null, null, $startTime);
  }

  public function getInterpolatedValues($webId, $startTime, $endTime, $interval)
  {
    return $this->piwebapi->stream->streamGetInterpolated($webId, null, $endTime, null, null, $interval, null, $startTime);
  }
}

 

The README.md in the GitHub repository provides information about how to use this library within a PHP application. Note how easy it is to switch from Basic to Kerberos authentication: just change the value of $authMethod (an input of the PIWebApiClient constructor) from "BASIC" to "KERBEROS".

 

 

Updating index.php

 

The objects returned by the methods of the PIWebAPI class are different from those in the older project. The reason is that the Swagger code generation creates classes based on the JSON specification. As a result, the way you extract the values in index.php to render the tables is a little different:

 

function displaySnapValues($SinusoidSnap) {
  ?>
  <h2>Snapshot Value of Sinusoid</h2>
  <br />
  <table style="width: 20em; border: 1px solid #666;">
  <tr>
  <th>Value</th>
  <th>Timestamp</th>
  </tr>
  <tr>
  <td><?php echo $SinusoidSnap['value'][0]; ?></td>
  <td><?php echo $SinusoidSnap['timestamp']->format('Y-m-d H:i:s'); ?></td>
  </tr>
  </table>
  <br />
  <br />
  <?php
}
function displayRecValues($SinusoidRec) {
  ?>

  <h2>Recorded Values of Sinusoid</h2>
  <br />
  <table style="width: 20em; border: 1px solid #666;">
  <tr>
  <th>Value</th>
  <th>Timestamp</th>
  </tr><?php
  foreach ($SinusoidRec['items'] as $item) {
    echo "\n<tr>";
    echo "\n\t<td>" . $item['value'][0] . '</td>';
    echo "\n\t<td>" . $item['timestamp']->format('Y-m-d H:i:s') . "</td>";
    echo "\n</tr>";
  }
  ?>

  </table>


  <br />
  <br />
  <?php
}
function displayIntValues($SinusoidInt) {
  ?>
  <h2>Interpolated Values of Sinusoid</h2>
  <br />
  <table style="width: 20em; border: 1px solid #666;">
  <tr>
  <th>Value</th>
  <th>Timestamp</th>
  </tr>
  <?php
  foreach ($SinusoidInt['items'] as $item) {
    echo "\n<tr>";
    echo "\n\t<td>" . $item['value'][0] . '</td>';
    echo "\n\t<td>" . $item['timestamp']->format('Y-m-d H:i:s') . "</td>";
    echo "\n</tr>";
  }
  ?>
  </table>
  <?php
}

 

Conclusion

 

Although PHP is not my favorite language for web development, I have seen some questions on PI Square about using PHP to retrieve PI data through PI Web API. As a result, I think this library will be very useful for those PHP developers who want to add value to their applications by integrating them with the PI System.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary Per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

Conclusion

We covered a lot of ground in this 9-part series because much of it was new ground.  You were introduced to 4 brand new methods in PI AF SDK 2.9.  Three of those were aggregation methods, which has to be a much-welcomed addition to AFEventFrameSearch.  And FindObjectFields might be the first one any developer checks out, for its sheer speed and versatility not just for aggregation but for lightweight detail reporting.  To rehash what was covered:

 

  • Part 4: We showed the old way of doing things with the classical FindEventFrames.  This provided a performance baseline to benchmark the other new methods against.
  • Part 5: We saw the new lightweight FindObjectFields method return a skinny set of columns.  We looked at all 3 overloads of this method, each of which is concerned with casting first from the generic object to the specific underlying type, followed perhaps by additional casting or converting to the type you desire.
  • Part 6: We saw the Summary method and discovered there is an event weighted overload as well as a general weighting overload to produce custom weightings beyond just time weighted.
  • Part 7: We saw how to use the GroupedSummary method to summarize with groupings, which allowed us to make fewer calls.
  • Part 8: We finished off by showing how to use a compound AFSummaryRequest to produce a 2-level grouping.  It was a tad complicated but had great performance.

 

 

Tips to Remember

 

General:

  • Use CaptureValues() to see the performance benefits from server-side filtering.
  • Classes inherited from AFSearch, such as AFEventFrameSearch, now implement IDisposable starting with AF SDK 2.9.  You should consider wrapping your calls inside a using block, or else issue an explicit Close() when you are finished with your search activities.
  • When composing a query string, any values containing embedded blanks should be wrapped inside single or double quotes.
  • Time strings in your queries should be output using the "O" round-trip format specifier.  (Both of these tips are illustrated in the sketch just after this list.)
  • For best performance, you probably want to choose the method that makes the fewest calls to the server.
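To make those quoting and time-format tips concrete, here is a minimal sketch of composing a filter string; the template name is a placeholder and not from the lab's dataset:

//Hedged sketch: values with embedded blanks get single quotes, and time
//boundaries use the "O" round-trip format specifier.
var endTime = DateTime.UtcNow;
var startTime = endTime.AddDays(-7);

var query = $"Template:'Downtime Event' " +              //hypothetical template name containing a blank
            $"Start:>={startTime.ToString("O")} " +
            $"End:<={endTime.ToString("O")} " +
            $"|Model:'Nimbus 2000'";                     //attribute value containing a blank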

 

FindObjectFields:

  • If you are working with detail records, you should strongly consider including ID as one of the input fields.  That way if you ever have the need to perform further drilling into a specific event frame, you have the unique ID which can help you quickly locate the full event frame in question.
  • There is no weighted overload for FindObjectFields.  You would be expected to include your own weighting field (e.g. Duration or custom) in the returned set of values.
  • The underlying type of any attribute's value will be AFValue.
  • You may use fields or properties for your DTO class.
  • For the auto-mapped overload, you will have to use the ObjectField decorator to map the source attribute name that happens to begin with a "|" to your desired DTO field name.
  • For event frame properties and the auto-mapped overload, the default is to use the same property name for the mapping.  However, you may override this default.

 

AFSummaryRequest:

  • Is limited to no more than 2 levels of groupings.
  • For 1 grouping level, you should just use Summary or GroupedSummary depending upon your needs since these are less complicated and have a simpler packaging of the results.
  • Based on the previous bullets, you would probably use AFSummaryRequest only when you need 2-level groupings.

 

Async

Other than showing the method names in Part 1, we did not mention any of the async methods or show their usage.  But they are there and easily discernible by seeing a CancellationToken among the parameters.  Once LiveLibrary is active for PI AF Client 2017, you are encouraged to review the online help for:

 

  • BinnedSummaryAsync
  • FrequencyDistributionAsync
  • GroupedSummaryAsync
  • HistogramAsync
  • SummaryAsync (both event and general weighting overloads)

 

If you are curious as to why FindObjectFields does not have an async counterpart, keep in mind that FindObjectFields makes paged calls.  You are always capable of breaking out of your processing loop, which will stop requests for more pages of data.
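As a rough sketch (reusing the field string and pageSize from Part 5; the cutoff value here is purely illustrative), breaking out of the enumeration early stops any further page requests:

using (var search = new AFEventFrameSearch(database, "Early Exit Example", tokens))
{
    search.CacheTimeout = TimeSpan.FromMinutes(5);

    var processed = 0;
    const int maxRecords = 5000;   //hypothetical cutoff

    foreach (var record in search.FindObjectFields("|Manufacturer |Model Duration", pageSize: 10000))
    {
        //... work with the record here ...

        if (++processed >= maxRecords)
        {
            break;   //no more pages will be requested from the server
        }
    }
}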

 

Weighting

While the more natural weighting for data in the data archive is probably time weighted, event frames are stored in the AF database rather than the data archive, so it should be no surprise that event weighting is the more natural, or default, weighting when dealing with event frames.  Out of the new AFEventFrameSearch aggregation methods, only Summary and SummaryAsync offer a weighting overload other than event weighted.  And you aren't limited to just time weighted as the lone alternative: the new overloads are flexible enough to allow custom weightings.

 

FindObjectFields doesn't allow for weightings because it's not an aggregation method.  You may still use FindObjectFields but you should include the weighting field as part of the set of skinny columns to be returned.

 

Binning

I did not show any examples of binning.  That might be a future topic.  But you should be aware that these methods exist.

 

  • For discrete values such as integers or strings, FrequencyDistribution and FrequencyDistributionAsync generate a <gasp> frequency distribution.
  • For floating point values, you would want to bin by ranges.  See Histogram or HistogramAsync for that.  Note that your requested ranges do not have to be evenly-spaced intervals.
  • Why not have summaries by bins?  For that there are the BinnedSummary and BinnedSummaryAsync methods.

 

This is the End?

Or is it?  Don't be surprised if I do a future series about binning.

 

Thanks for reading the series.  I hope you enjoyed it.  Please remember to use this knowledge for good and not evil.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary Per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

Let's Make Only ONE Call

I'm going to assume that you haven't jumped blindly into this topic for the first time.  It should be a safe bet that you've read Parts 6 and 7 regarding Summary and GroupedSummary, respectively.  It boils down to this: I don't want to make repeated calls, because we know each call to the server takes a performance hit.  Summary needed to be called 3 times for this use case, and GroupedSummary twice.  I want to issue one and only one call.  Plus, I don't want to have to know all Manufacturers or Models before I query for them.  Maybe I absolutely don't know that info ahead of time, and that's why I'm doing this search in the first place.

 

Which brings us to AFSummaryRequest, which fits the bill perfectly.  Technically it is not a method of the AFEventFrameSearch or AFSearch classes.  It's a concrete class in the OSIsoft.AF.Data namespace that derives from the abstract AFAggregateRequest.  Other aggregation methods, like Summary and GroupedSummary, use AFSummaryRequest themselves for the most common method signatures that need a 1-level summary.  Since we want a 2-level grouping, AFSummaryRequest is the best choice for our use case.  With that in mind, we should be forgiving that the code to use it is a bit more complicated.  Note that AFSummaryRequest limits you to no more than 2 groupings.

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> tokens)
{
    using (var search = new AFEventFrameSearch(database, "Compound Request", tokens))
    {
        //Opt-in to server side caching
        search.CacheTimeout = TimeSpan.FromMinutes(5);

        //While we eventually want an average, it will be calculated from Total and Count.
        var desiredSummaryTypes = AFSummaryTypes.Count | AFSummaryTypes.Total;

        //Here we make only 1 call to the server but we must build a compound AFSummaryRequest.
        //The GroupBy order is the opposite of what you would intuitively think: Model and then Manufacturer.
        //First we bundle the AFSummaryRequest.
        var compoundRequest = new AFSummaryRequest("Duration", desiredSummaryTypes)
                                    .GroupBy<string>("|Model")
                                    .GroupBy<string>("|Manufacturer");

        //We send the request as a member of IEnumerable<AFAggregateRequest>.
        //Since we pass a collection of one member, we get a collection of one member back.
        //So we grab that one member and cast it appropriately.
        var aggResult = search.Aggregate(new[] { compoundRequest })[0] as AFCompoundPartitionedResult<string, string>;

        //Unwrap the results.
        foreach (var kvp in aggResult.PartitionedResults)
        {
            var mfr = kvp.Key.PrimaryPartition;
            var model = kvp.Key.SecondaryPartition;

            var summaries = kvp.Value;

            var totalVal = summaries[AFSummaryTypes.Total];
            var countVal = summaries[AFSummaryTypes.Count];
            var stats = new DurationStats();

            if (countVal.IsGood)
            {
                stats.Count = countVal.ValueAsInt32();
                if (totalVal.IsGood)
                {
                    stats.TotalDuration = ((AFTimeSpan)totalVal.Value).ToTimeSpan();
                }
                summary.AddToSummary(mfr, model, stats.TotalDuration, stats.Count);
            }
        }
    }
}

 

 

There you have it.  Not exactly pretty.  But you can't argue with the results since this was the fastest method for my use case.

 

I want to reiterate that you may have no more than 2 levels of grouping for AFSummaryRequest.  Also, review the two GroupBy calls in the code above to see that the grouping is composed from the inside out.  That is, if we want to group by Manufacturer first and Model second, then when we compose the AFSummaryRequest the first GroupBy is by Model and the second GroupBy is by Manufacturer.

 

Metrics Comparison (from Part 2)

The numbers below are from a 2-core VM using Release x64 mode.  Smaller values are better.  Note that the units switch between MB and KB.

 

Resource Usage:

Values displayed are in MB unless noted otherwise

Method             Total GC Memory (MB)   Working Set Memory (MB)   Network Bytes Sent   Network Bytes Received
FindEventFrames    145.48                 257.08                    9.13 MB              190.08 MB
FindObjectFields   1.28                   65.55                     5.00 KB              3.68 MB
Summary            2.54                   55.35                     8.58 KB              261.81 KB
GroupedSummary     9.86                   64.28                     6.24 KB              1.98 MB
AFSummaryRequest   7.29                   65.36                     5.00 KB              3.68 MB

 

Performance:

Method             Client RPC Calls   Client Duration (ms)   Server RPC Calls   Server Duration (ms)   Elapsed Time
FindEventFrames    120                63337.0                110                39118.1                02:27.8
FindObjectFields   10                 5360.8                 11                 4547.6                 00:06.0
Summary            15                 9484.6                 16                 9310.9                 00:10.1
GroupedSummary     12                 5527.2                 13                 4938.5                 00:06.2
AFSummaryRequest   10                 2992.2                 10                 2222.2                 00:03.7

 

We are back at the same spot where we ponder whether AFSummaryRequest is the fastest of the methods.  It appears to be so for my particular use case of reporting by Manufacturer and Model.  If we ignore my use case and compare AFSummaryRequest to Part 6's bonus Summary and Part 7's bonus GroupedSummary, both of which issued one call on the same data set, here's how those metrics line up:

 

Metric                    Summary     GroupedSummary   Compound AFSummaryRequest
Total GC Memory (MB)      4.48        12.24            7.29
Working Set Memory (MB)   52.48       61.84            65.36
Network Bytes Sent        4.77 KB     4.85 KB          5.00 KB
Network Bytes Received    260.02 KB   1.98 MB          3.68 MB
Client RPC Calls          10          10               10
Client Duration (ms)      534.0       1913.7           2992.2
Server RPC Calls          10          10               10
Server Duration (ms)      353.8       1472.8           2222.2
Elapsed Time              00:01.1     00:02.6          00:03.7

 

I know I sound like a broken record, but the same advice applies: think through your application and pick the right tool for the right job.  Which one gives the correct results while making the fewest calls to the server?

 

Up Next: End of the Series

We conclude this 9-part series naturally enough with a post that I call Part 9!

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

Query One Level Up

GroupedSummary and Summary have something in common: they both require a priori knowledge of what you will be summarizing before you can actually summarize it.  For Summary, this meant summarizing per the inner loop of Model, which required 3 calls (one for each of our 3 models).  For GroupedSummary, we can reduce the number of calls to the server by making a call on the outer loop per Manufacturer.  While we do need to know the manufacturers to filter upon for GroupedSummary, we don't need to know the models.

 

We will do something similar to what we did with Summary in Part 6:

  • Get a priori list of Manufacturers
  • Build a new token for the given Manufacturer
  • Issue the GroupedSummary call
  • Peel back the results to feed to my DurationStats and StatsTracker

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> baseTokens)
{
    //Absolutely critical to have a priori list of Manufacturers
    var mfrList = summary.Keys.ToList();

    foreach (var mfr in mfrList)
    {
        var tokens = baseTokens.ToList();
        tokens.Add(new AFSearchToken(AFSearchFilter.Value, mfr, "|Manufacturer"));

        using (var search = new AFEventFrameSearch(database, "GroupedSummary Example", tokens))
        {
            //Opt-in to server side caching
            search.CacheTimeout = TimeSpan.FromMinutes(5);

            //While we eventually want an average, it will be calculated from Total and Count.
            var desiredSummaryTypes = AFSummaryTypes.Count | AFSummaryTypes.Total;
            var groupedField = "|Model";
            var summaryField = "Duration";

            var perMfr = search.GroupedSummary(groupedField, summaryField, desiredSummaryTypes);

            foreach (var grouping in perMfr.GroupedResults)
            {
                var model = grouping.Key.ToString();
                var totalVal = grouping.Value[AFSummaryTypes.Total];
                var countVal = grouping.Value[AFSummaryTypes.Count];

                var stats = new DurationStats();

                if (countVal.IsGood)
                {
                    stats.Count = countVal.ValueAsInt32();
                    if (totalVal.IsGood)
                    {
                        stats.TotalDuration = ((AFTimeSpan)totalVal.Value).ToTimeSpan();
                    }
                    summary.AddToSummary(mfr, model, stats.TotalDuration, stats.Count);
                }
            }
        }
    }
}

 

While there are some similarities, where we invoke the server call is very different.  Here with GroupedSummary, we make the call in our outer loop, so we have fewer trips to the server.  For Summary in Part 6, we made the call inside the inner loop.  The returned results are also quite different, though the concept of what we do with them is the same: peel back the returned dictionary accordingly and have it conform to my output objects.

 

The metrics shown in Part 2 would make you think GroupedSummary is faster than Summary.  In general, that is not necessarily true.  For my particular use case it is true, but only because my app makes more server calls for Summary than for GroupedSummary.  Do not walk away thinking you should avoid Summary.  Instead, do not hesitate to use it for a better-suited use case.

 

Metrics Comparison (from Part 2)

The numbers below are from a 2-core VM using Release x64 mode.  Smaller values are better.  Note that the units switch between MB and KB.

 

Resource Usage:

Values displayed are in MB unless noted otherwise

Method             Total GC Memory (MB)   Working Set Memory (MB)   Network Bytes Sent   Network Bytes Received
FindEventFrames    145.48                 257.08                    9.13 MB              190.08 MB
FindObjectFields   1.28                   65.55                     5.00 KB              3.68 MB
Summary            2.54                   55.35                     8.58 KB              261.81 KB
GroupedSummary     9.86                   64.28                     6.24 KB              1.98 MB
AFSummaryRequest   7.29                   65.36                     5.00 KB              3.68 MB

 

Performance:

Method             Client RPC Calls   Client Duration (ms)   Server RPC Calls   Server Duration (ms)   Elapsed Time
FindEventFrames    120                63337.0                110                39118.1                02:27.8
FindObjectFields   10                 5360.8                 11                 4547.6                 00:06.0
Summary            15                 9484.6                 16                 9310.9                 00:10.1
GroupedSummary     12                 5527.2                 13                 4938.5                 00:06.2
AFSummaryRequest   10                 2992.2                 10                 2222.2                 00:03.7

 

 

BONUS: GroupedSummary Using ONE Call

Let's come up with a better use case where we only need to issue one call.  Allow me once again to temporarily change my requirements on the end report, purely for illustration purposes.  Let's imagine I am no longer interested in the averages and counts per manufacturer and model.  Instead, I want to summarize the same data set as a whole, but I only care about models.  In this new scenario I have absolutely no concern about manufacturers.  The new report would look like:

 

Manufacturer  Model            Count Avg Duration

------------- ------------ --------- ----------------

<Any>         DQ-M0L           8,136 03:53:21.4859882

<Any>         Nimbus 2000      1,499 03:44:28.8192128

<Any>         SWTG-3.6        13,678 03:53:35.3165667

------------- ------------ --------- ----------------

            1            3    23,313

 

For the code to do that, I don't need to initialize my summary object to populate itself from an AFTable.

 

//I still use StatsTracker for conformity but we don't need to initialize this from our AFTable  
var summary = new StatsTracker();  

 

That is the summary instance I will pass to my new method, which now eliminates 1 level of looping.  However, I will need to sort the results later, so I am going to pass summary by ref.

 

public void GetSummaryByMfrAndModel(ref StatsTracker summary, AFDatabase database, IList<AFSearchToken> tokens)
{
    summary = new StatsTracker();

    //In this bonus test, we want to only issue one GroupedSummary call.
    //Rather than rigorously issuing separate calls per Manufacturer, I instead issue one call grouped on Model for all Manufacturers.
    //The downside is I lose the individual Manufacturer names.

    using (var search = new AFEventFrameSearch(database, "GroupedSummary Example", tokens))
    {
        //Opt-in to server side caching
        search.CacheTimeout = TimeSpan.FromMinutes(5);

        //While we eventually want an average, it will be calculated from Total and Count.
        var desiredSummaryTypes = AFSummaryTypes.Count | AFSummaryTypes.Total;
        var groupedField = "|Model";
        var summaryField = "Duration";

        var groupedSummary = search.GroupedSummary(groupedField, summaryField, desiredSummaryTypes);

        foreach (var grouping in groupedSummary.GroupedResults)
        {
            var model = grouping.Key.ToString();
            var totalVal = grouping.Value[AFSummaryTypes.Total];
            var countVal = grouping.Value[AFSummaryTypes.Count];

            var stats = new DurationStats();

            if (countVal.IsGood)
            {
                stats.Count = countVal.ValueAsInt32();
                if (totalVal.IsGood)
                {
                    stats.TotalDuration = ((AFTimeSpan)totalVal.Value).ToTimeSpan();
                }
                summary.AddToSummary("<Any>", model, stats.TotalDuration, stats.Count);
            }
        }
    }
    //Sort the results.  They have the same Manufacturer "<Any>" but the Models will be alphabetical.
    summary = summary.SortByKeys();
}

 

I am expecting to get back 3 rows, so I sort the results before returning from my method.  Let's review the metrics from making that one bonus GroupedSummary call and compare them to making one bonus Summary call.

 

Metric                    Summary     GroupedSummary
Total GC Memory (MB)      4.48        12.24
Working Set Memory (MB)   52.48       61.84
Network Bytes Sent        4.77 KB     4.85 KB
Network Bytes Received    260.02 KB   1.98 MB
Client RPC Calls          10          10
Client Duration (ms)      534.0       1913.7
Server RPC Calls          10          10
Server Duration (ms)      353.8       1472.8
Elapsed Time              00:01.1     00:02.6

 

For the right use cases, both of these methods are extremely fast and should be welcome in your tool bag.  Don't shy away from using Summary or GroupedSummary just because one table shows sluggish performance.  Use your noggin and pick the right tool for the right job.  The emphasis should be on producing the desired results with the fewest trips to the server.

 

 

Up Next: Name That Tune In One Call

Putting aside the bonus section, let's return to the original report by Manufacturer and Model.  Here is the pattern you should have witnessed in the progression of each method in Parts 4 through 7:

Part 4: Heavy detail records

Part 5: Light detail records

Part 6: Aggregation per inner loop

Part 7: Aggregation per outer loop

 

For each successive example, we were making fewer calls or receiving fewer records.  The good news is that Summary and GroupedSummary are downright miserly with the resources they consume.  The bad news is the a priori knowledge requirement, as well as the multiple calls, which degrade performance.  Wouldn't it be great to make only ONE call, and to do so without knowing what the heck we want to summarize in the first place?  That will be covered in Part 8.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

A Bona Fide Aggregation Method

We've covered 2 different ways to produce our summaries, but neither of those approaches used a true aggregation method.  Instead, they both returned detail rows to which we had to apply our own custom aggregation.  In the case of FindEventFrames, the detail rows were heavyweight event frames.  In the case of FindObjectFields, the detail rows were data container records.  For this brand-new AFEventFrameSearch.Summary, we will be getting back an aggregation, and you will note that what is sent across the network to us (as recorded in Network Bytes Received) is only a teeny tiny bit of memory.

 

Summary requires a priori knowledge of what you want to be summarizing.  In our case, we want to summarize by Manufacturer and Model, so we must know all the Manufacturers and Models we wish to summarize before we can actually summarize them.  This was discussed in Part 3.  I chose to read an AFTable and populate a model-keyed dictionary inside a manufacturer-keyed dictionary.  You are in no way restricted to do the same.  You are encouraged to find the solution that best fits your own database and needs, and I welcome you sharing your creative solutions back on PI Square one day.
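Part 3 is not reproduced here, but as a minimal sketch (with made-up manufacturer names and ignoring the AFTable read), the a priori structure can be as simple as a model-keyed dictionary nested inside a manufacturer-keyed dictionary:

//Hypothetical seed data standing in for the AFTable lookup described in Part 3.
var byMfrAndModel = new Dictionary<string, Dictionary<string, DurationStats>>(StringComparer.OrdinalIgnoreCase)
{
    ["Acme Turbines"] = new Dictionary<string, DurationStats>(StringComparer.OrdinalIgnoreCase)
    {
        ["Nimbus 2000"] = new DurationStats(),
        ["SWTG-3.6"] = new DurationStats()
    },
    ["Contoso Wind"] = new Dictionary<string, DurationStats>(StringComparer.OrdinalIgnoreCase)
    {
        ["DQ-M0L"] = new DurationStats()
    }
};

//The outer keys drive the Manufacturer loop; the inner keys drive the Model loop.
var mfrList = byMfrAndModel.Keys.ToList();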

 

You may also remember that in Part 2 the Summary method seemed to be the slowest of the new methods.  It really isn't.  The problem is that I am trying to have all these new methods produce the exact same output, and making multiple calls on Models within Manufacturers is really not the best use case for Summary.  On the other hand, it is a very good example of the syntax for issuing a Summary call, as well as what to do with the results that come back from that call.  Let's focus on that as the main lesson to be learned in the code below.

 

The Highlights

My a priori requirement is taken care of by my dictionary in a dictionary.  However, I will need to get an independent list of the keys to the dictionaries.

 

I will also need to issue a Summary per Model.  This means I must use the same base tokens or query that I used for our previous examples, and modify them for each Summary call.  Again, I could take the lazy or sloppy approach and only worry about Model since my current data set had 3 unique models.  But that code could break in the future if I were ever to add a Model with the same name to a different Manufacturer.  Instead, I will take a rigorous approach and truly query by Manufacturer and then Model.

 

All of this is to say that I will be looping first over Manufacturers, and then over Models.  I will then modify the tokens or query string inside the inner loop.  Because I will modify the input tokens/query repeatedly, I have renamed the input argument from "tokens" to "baseTokens".

 

The final steps will be to receive the results from Summary, and unwrap them to conform to my DurationStats and StatsTracker objects discussed in Part 3.

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> baseTokens)
{
    //Absolutely critical to have a priori list of Manufacturers and Models
   //Get independent list of Manufacturers
    var mfrList = summary.Keys.ToList();

    foreach (var mfr in mfrList)
    {
        //Get independent list of Models for given Manufacturer
        var modelSubList = summary[mfr].Keys.ToList();

        foreach (var model in modelSubList)
        {
            //Safest Technique: via tokens.  
            //Get independent copy to modify inside loop
            var tokens = baseTokens.ToList();
            tokens.Add(new AFSearchToken(AFSearchFilter.Value, mfr, "|Manufacturer"));
            tokens.Add(new AFSearchToken(AFSearchFilter.Value, model, "|Model"));

            //Starting with AF 2.9, AFSearch implements IDisposable
            using (var search = new AFEventFrameSearch(database, "Summary Example", tokens))
            {
                //Opt-in to server side caching
                search.CacheTimeout = TimeSpan.FromMinutes(5);

                var desiredSummaryTypes = AFSummaryTypes.Count | AFSummaryTypes.Total;

                var perModel = search.Summary("Duration", desiredSummaryTypes);

                var totalVal = perModel.SummaryResults[AFSummaryTypes.Total];
                var countVal = perModel.SummaryResults[AFSummaryTypes.Count];

                var stats = new DurationStats();

                //Unwrap the returned results as needed
                if (countVal.IsGood)
                {
                    stats.Count = countVal.ValueAsInt32();
                    if (totalVal.IsGood)
                    {
                        stats.TotalDuration = ((AFTimeSpan)totalVal.Value).ToTimeSpan();
                    }
                    summary.AddToSummary(mfr, model, stats.TotalDuration, stats.Count);
                }
            }
        }
    }
}

 

 

The above example uses query tokens.  I mentioned in Part 3 that you could have used a query string instead.  If you wanted a string instead of tokens, I would have an input string argument named "baseQuery" containing:

 

$"AllDescendants:{allDescendants} Template:'{templateName}' Start:>={startTime.ToString("O")} End:<={endTime.ToString("O")} '{attrPath}':>={attrValue}"

 

Then, inside the inner loop of Model, the three lines that copy the base tokens and add the Manufacturer and Model tokens would become:

 

var query = $"{baseQuery} |Manufacturer:'{mfr}' |Model:'{model}'";

 

Note the use of single quotes around {mfr} and {model}.  For model this is an absolute must with our data, because we do have one model ("Nimbus 2000") that contains an embedded blank in its name.  For mfr, we did it for future-proofing, in case we ever add a Manufacturer with a blank in its name.  You may recall in Part 3 I cautioned that if it's a name or a path, the safest route is to wrap it in single quotes.  This helps make your code less fragile.

 

Event versus Time Weighting

For the Summary overload we used in the code above, the result is event weighted.  Normally with data coming from a process historian, I tend to first think in terms of time weighted values.  But we're working with event frames here, so my inclination is that the values are event weighted, that is, there is one value associated with the entire event frame.  But that's me; you may be interested in getting back a time weighted number, so you might ask, "Is there a time weighted overload?"

 

The trick answer is No.  While it's true there is not a Summary overload that lets you pass an AFCalculationBasis.TimeWeighted enumeration, there is an overload that accepts a general weighting field as the 3rd argument.  This means you aren't restricted to either event weightings or time weightings; you may pass a custom weighting!  The restriction is that you pass the name of the weighting field, and that field must belong to the event frame.  For a time weighted result, the name of the weighting field could be "Duration", or perhaps you have another time span attribute defined on your event frame.
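Going by that description (I have not verified the exact overload signature against the AF SDK 2.9 reference, so treat this purely as a sketch, and the attribute path is a placeholder), a weighted call might look like:

var desiredSummaryTypes = AFSummaryTypes.Count | AFSummaryTypes.Total;

//Event weighted, as used in the example above.
var eventWeighted = search.Summary("|Wind Velocity", desiredSummaryTypes);

//General weighting overload: per the description above, the weighting field
//name is passed as the 3rd argument and must belong to the event frame.
//"|Wind Velocity" is an illustrative attribute path, not from the lab's dataset.
var durationWeighted = search.Summary("|Wind Velocity", desiredSummaryTypes, "Duration");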

 

 

Metrics Comparison (from Part 2)

The numbers below are from a 2-core VM using Release x64 mode.  Smaller values are better.  Note that the units switch between MB and KB.

 

Resource Usage:

Values displayed are in MB unless noted otherwise

Method             Total GC Memory (MB)   Working Set Memory (MB)   Network Bytes Sent   Network Bytes Received
FindEventFrames    145.48                 257.08                    9.13 MB              190.08 MB
FindObjectFields   1.28                   65.55                     5.00 KB              3.68 MB
Summary            2.54                   55.35                     8.58 KB              261.81 KB
GroupedSummary     9.86                   64.28                     6.24 KB              1.98 MB
AFSummaryRequest   7.29                   65.36                     5.00 KB              3.68 MB

 

Performance:

Method             Client RPC Calls   Client Duration (ms)   Server RPC Calls   Server Duration (ms)   Elapsed Time
FindEventFrames    120                63337.0                110                39118.1                02:27.8
FindObjectFields   10                 5360.8                 11                 4547.6                 00:06.0
Summary            15                 9484.6                 16                 9310.9                 00:10.1
GroupedSummary     12                 5527.2                 13                 4938.5                 00:06.2
AFSummaryRequest   10                 2992.2                 10                 2222.2                 00:03.7

 

 

Caution again about CaptureValues()

The performance is realized because all of my event frames have captured values.  This means filtering by wind velocity, manufacturer, and model - all of which are attributes - is performed on the server.  That greatly reduces the network load.

 

I don't know whether you consider it a good thing or a bad thing that the code above also works if you have not captured values.  Yes, it will work.  But it may be as slow as, or slower than, FindEventFrames.
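If you control the code that writes the event frames, a minimal sketch of capturing values when an event frame closes (assuming ef is an AFEventFrame you are ending) looks like:

ef.SetEndTime(AFTime.Now);
ef.CaptureValues();   //snapshot the attribute values onto the event frame
ef.CheckIn();         //persist the change so later searches can filter server-side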

 

Use the Right Tool for the Right Job

The above example shows correct syntax and how to peel back the results as you need.  Admittedly, a 2-level summary is not a good use case for Summary.  I would absolutely reject using this method if I had to query model 5 times or more (that is, make 5 or more invocations of Summary).  I might consider it if I knew I had fewer than 5 models, but would likely reject it as the method of choice unless I only had to make 1 or 2 calls.  With 1 call, it's a no-brainer: Summary is the right choice.  Would you like proof?

 

BONUS: Summary Using ONE Call

Let's come up with a better use case where we only need to issue one call.  Allow me to temporarily (just for illustration purposes) change my requirements on the end report.  I am no longer interested in the averages and counts per manufacturer and model.  Instead, I want to summarize the exact same data set as a whole.  The new report would look like:

 

Manufacturer  Model            Count Avg Duration

------------- ------------ --------- ----------------

<All>         <All>           23,313 03:52:55.3506627

------------- ------------ --------- ----------------

            1            1    23,313

 

I get the exact same record count as the original report in Part 2, which shouldn't be surprising since I use the exact same filter.  For the code to produce the above report, I don't need to initialize my summary object to populate itself from an AFTable.

 

//I still use StatsTracker for conformity but we don't need to initialize this from our AFTable
var summary = new StatsTracker();

 

That is the summary instance I will pass to my new method, which now eliminates 2 levels of looping.

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> tokens)
{
    //Starting with AF 2.9, AFSearch implements IDisposable
    using (var search = new AFEventFrameSearch(database, "Better Summary Use Case", tokens))
    {
        //Opt-in to server side caching
        search.CacheTimeout = TimeSpan.FromMinutes(5);

        var desiredSummaryTypes = AFSummaryTypes.Count | AFSummaryTypes.Total;

        var oneCallSummary = search.Summary("Duration", desiredSummaryTypes);

        var totalVal = oneCallSummary.SummaryResults[AFSummaryTypes.Total];
        var countVal = oneCallSummary.SummaryResults[AFSummaryTypes.Count];

        var stats = new DurationStats();

        if (countVal.IsGood)
        {
            stats.Count = countVal.ValueAsInt32();
            if (totalVal.IsGood)
            {
                stats.TotalDuration = ((AFTimeSpan)totalVal.Value).ToTimeSpan();
            }
            summary.AddToSummary("<All>", "<All>", stats.TotalDuration, stats.Count);
        }
    }
}

 

Since I get back only 1 row, there is no need to sort the results.  Let's review the metrics from making that one call:

Metric                    Summary
Total GC Memory (MB)      4.48
Working Set Memory (MB)   52.48
Network Bytes Sent        4.77 KB
Network Bytes Received    260.02 KB
Client RPC Calls          10
Client Duration (ms)      534.0
Server RPC Calls          10
Server Duration (ms)      353.8
Elapsed Time              00:01.1

 

Wow, that IS FAST!!!

 

 

Up Next: Reduce the Calls to the Outer Loop

Putting aside the bonus section, let's return to the original report by Manufacturer and Model.  We had to drill down into 2 loops to build our Summary call per Model.  In Part 7 we reduce the number of calls by making a call in the outer loop per Manufacturer.  We will do this with the GroupedSummary method.  See you in Part 7.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

Welcome to the Halfway Point

We have reached the halfway point in this series.  I want to thank you for sticking with it.  Both of you.  In this part we will look into the brand new FindObjectFields, which is very lightweight.  You may want to keep a fire extinguisher handy because this method is blazing fast (when compared to FindEventFrames).  Caveat: you only see the performance benefits if you've called CaptureValues() on the event frames in question.

 

I don't want to sway your opinion but let me say that this became my favorite new method in PI AF SDK 2.9.  Besides being lightweight and super fast, it does return detail rows so it has flexibility for so many other applications, not just aggregation.  FindObjectFields has 3 different overloads.  We only covered 2 in the lab, but we will cover all 3 here.

 

Besides the sheer lightweightness of FindObjectFields when compared to FindEventFrames, there is another big, BIG difference.  The one call to FindObjectFields returns the data in the same call.  No need for a separate GetValues() call.  No need for custom chunking.

 

Plain Overload

This overload wasn't covered in the UC 2017 lab.  Essentially the values all come over as object, so one of the first steps you will undertake is most likely casting to the underlying specific type.  And if that type isn't what you ultimately want to work with, then you will have to perform an additional cast or conversion.

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> tokens)
{
    //Starting with AF 2.9, AFSearch implements IDisposable
    using (var search = new AFEventFrameSearch(database, "FindObjectFields Example 1", tokens))
    {
        //Opt-in to server side caching
        search.CacheTimeout = TimeSpan.FromMinutes(5);

        //The order of these fields determines the order of returned values in IList<object>
        var fields = "|Manufacturer |Model Duration";

        //returns IEnumerable<IList<object>> where each IEnumerable
        //represents one event frame, and the IList<object> contains 
        //values from each of the specified fields.  From our example,
        //we will have 3 values returned per event frame.
        var records = search.FindObjectFields(fields, pageSize: 10000);

        foreach (var record in records)
        {
            //Read data AND cast appropriately
            var mfr = ((AFValue)record[0]).Value.ToString();
            var model = ((AFValue)record[1]).Value.ToString();
            var duration = ((AFTimeSpan)record[2]).ToTimeSpan();

            //Summary overload is for TimeSpan
            summary.AddToSummary(mfr, model, duration, 1);
        }
    }
}

 

In the line that defines fields, we specify the order of desired fields in a (mostly) blank-delimited string.  If you have an attribute path that contains an embedded blank, you would need to wrap the path in single quotes, so those serve as a delimiter as well.  As would double quotes, but that's a topic for another day.  Internally, FindObjectFields will parse this string into its own IList<string>, something like { "|Manufacturer", "|Model", "Duration" }.

 

What's returned by FindObjectFields is an IEnumerable where each iteration of the IEnumerable is a different event frame.  The report in Part 2 shows we have over 23,000 event frames, so we would expect that many records to enumerate over.  The payload of each iteration (or event frame, or record) is an IList<object> where each indexed item is a value for the associated field.  In our example, index[0] is the Manufacturer value, index[1] is the Model, and index[2] is the Duration.

 

Since each record comes back as an IList<object>, it is your duty as the developer to cast each object to its underlying data type.  First and foremost, any attribute will be returned as an AFValue and therefore must be cast to AFValue before you can do anything else with it.  Any value from a property on the event frame will be returned with the same data type as the property.  Since Duration is an AFTimeSpan, I must first cast to AFTimeSpan, which I can then convert to a TimeSpan if so desired.

 

A word about pageSize here: with FindEventFrames, the larger the pageSize, the larger the memory needed to hold all the objects.  However, since these records are lightweight, and maybe 10X smaller than the heavy objects from FindEventFrames, using a pageSize of 10000 with FindObjectFields still uses an amount of memory comparable to FindEventFrames(pageSize: 1000).

 

Auto-Mapped DTO Class

The next overloads will use DTO (Data Transfer Object) classes.  This particular overload will Auto-Map field names into your DTO class.  The help file for 2.9 shows this quite well.

 

A DTO is a simple class that will be our lightweight data container.  You could also use a POCO (Plain Old CLR Object), but the more definitions you put in your container class, the less lightweight it becomes.  I don't want to get into an argument over the subtle differences between DTO and POCO because such academic arguments are as enjoyable as chewing on tin foil.  You are invited to research the web to learn more.  I include 2 links below.

 

P of EAA: Data Transfer Object

c# - POCO vs DTO - Stack Overflow

 

Before we can use the overload, we must first define our DTO class.  There are a couple of critical rules to apply:

  1. Any type for an attribute value must be an AFValue.
  2. The type for an event frame property must exactly match the type on the event frame.
  3. If the name of the entity on the event frame does not contain a blank, you may keep the original name.
  4. If you wish to change the name in your DTO class, you will use the AFSearch.ObjectField decorator.
  5. Since all attribute paths must begin with "|", which is not allowed in class field or property names, you must use the AFSearch.ObjectField decorator to provide a mapping to your DTO property.
  6. Your DTO may declare your objects as fields or properties.  The example below uses my own personal preference (properties).

 

    public class LightweightAutomapDto
    {
        // Field mapped using default name.
        public AFTimeSpan Duration { get; set; }

        // Attribute value mapped to property using 'ObjectField' attribute.
        [AFSearch.ObjectField("|Manufacturer")]
        public AFValue Manufacturer { get; set; }

        // Attribute value mapped to property using 'ObjectField' attribute.
        [AFSearch.ObjectField("|Model")]
        public AFValue Model { get; set; }
    }

 

Based on the mapping rules above, you will note:

  • I am using the name "Duration" exactly as it is named on the event frame.
  • The data type for my "Duration" is AFTimeSpan because that's what the event frame's Duration is.
  • Both my attributes must provide an ObjectField mapping.
  • Both my attributes have a data type of AFValue.

 

How would that look in code?

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> tokens)
{
    //Starting with AF 2.9, AFSearch implements IDisposable
    using (var search = new AFEventFrameSearch(database, "Automap DTO Example", tokens))
    {
        //Opt-in to server side caching
        search.CacheTimeout = TimeSpan.FromMinutes(5);

        var records = search.FindObjectFields<LightweightAutomapDto>(pageSize: 10000);

        foreach (var record in records)
        {
            //Read data from 1 event frame via the current DTO container
            var mfr = record.Manufacturer?.Value.ToString();
            var model = record.Model?.Value.ToString();

            //Summary overload is for TimeSpan
            summary.AddToSummary(mfr, model, record.Duration.ToTimeSpan(), 1);
        }
    }
}

 

 

DTO with Custom Factory

The 3rd overload is quite interesting and was almost left out of the UC lab for fear of making the course too long.  I am glad I included it because it soon became my favorite of the overloads.  If you choose to use this overload, then you absolutely MUST provide your own custom factory to perform the transfer of data.

 

Why would you want to use this?  Let's consider the auto-mapped DTO and what I would like to try differently:

  • The attribute values must be AFValue but I want Manufacturer and Model to be strings.
  • I want my Duration property to be a TimeSpan instead of the AFTimeSpan as it is on the event frame.

 

The resulting DTO class is far, far simpler and laid out exactly the way I want it.  We don't have to worry about ObjectField mappings because our custom factory will take care of the transfer.

 

public class LightweightDtoForCustomFactory
{
    public TimeSpan Duration { get; set; }
    public string Manufacturer { get; set; }
    public string Model { get; set; }
}

 

And here is how it would be used:

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> tokens)
{
    //Starting with AF 2.9, AFSearch implements IDisposable
    using (var search = new AFEventFrameSearch(database, "DTO For Custom Factory Example", tokens))
    {
        //Opt-in to server side caching
        search.CacheTimeout = TimeSpan.FromMinutes(5);

        //The order of these fields determines the order of returned values in IList<object>
        var fields = "|Manufacturer |Model Duration";

        //Define your custom factory 
        Func<IList<object>, LightweightDtoForCustomFactory> factory = (values) =>
        {
            var dto = new LightweightDtoForCustomFactory();
            dto.Manufacturer = ((AFValue)values[0]).ToString();
            dto.Model = ((AFValue)values[1]).ToString();
            dto.Duration = ((AFTimeSpan)values[2]).ToTimeSpan();
            return dto;
        };

        var records = search.FindObjectFields<LightweightDtoForCustomFactory>(fields, factory, pageSize: 10000);

        //The loop is a bit simplified because the transfer logic is contained in the function delegate above.
        foreach (var record in records)
        {
            //Note that Manufacturer and Model are now validated strings thanks to factory.

            //Summary overload is for TimeSpan
            summary.AddToSummary(record.Manufacturer, record.Model, record.Duration, 1);
        }
    }
}

 

You are invited to review the code for each of the 3 overloads to look for similarities and differences.  The one big similarity is that each is concerned with casting the object value to its underlying type and then converting it to a desired type.  With that in mind, all 3 overloads offer identical performance, so choosing one over the others is purely personal preference.

 

Great For Detail Reporting

As mentioned earlier, FindObjectFields is not limited to performing aggregations.  It's quite handy for detail reporting too.  If you are going to use it for detail reporting, the strongest suggestion I can give you is to be sure to include the event frame's ID in your DTO class.  That way you have the ability to quickly find and load a specific event frame if ever needed.
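For instance, a detail-reporting DTO (a sketch following the same auto-mapping rules shown above, with the event frame's ID property added; ID is a Guid on the event frame) might look like:

public class DetailReportDto
{
    //Event frame property mapped by its exact name; keeping the unique ID lets
    //you quickly reload the full AFEventFrame later if you need to drill in.
    public Guid ID { get; set; }

    public AFTimeSpan Duration { get; set; }

    [AFSearch.ObjectField("|Manufacturer")]
    public AFValue Manufacturer { get; set; }

    [AFSearch.ObjectField("|Model")]
    public AFValue Model { get; set; }
}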

 

Metrics Comparison (from Part 2)

The numbers below are from a 2-core VM using Release x64 mode.  Smaller values are better.  Note that the units switch between MB and KB.

 

Resource Usage:

Values displayed are in MB unless noted otherwise

Method             Total GC Memory (MB)   Working Set Memory (MB)   Network Bytes Sent   Network Bytes Received
FindEventFrames    145.48                 257.08                    9.13 MB              190.08 MB
FindObjectFields   1.28                   65.55                     5.00 KB              3.68 MB
Summary            2.54                   55.35                     8.58 KB              261.81 KB
GroupedSummary     9.86                   64.28                     6.24 KB              1.98 MB
AFSummaryRequest   7.29                   65.36                     5.00 KB              3.68 MB

 

Performance:

Method             Client RPC Calls   Client Duration (ms)   Server RPC Calls   Server Duration (ms)   Elapsed Time
FindEventFrames    120                63337.0                110                39118.1                02:27.8
FindObjectFields   10                 5360.8                 11                 4547.6                 00:06.0
Summary            15                 9484.6                 16                 9310.9                 00:10.1
GroupedSummary     12                 5527.2                 13                 4938.5                 00:06.2
AFSummaryRequest   10                 2992.2                 10                 2222.2                 00:03.7

 

 

Next Up: A Real Summary Method

In Part 6 we will cover the first bona fide aggregation method, AFEventFrameSearch.Summary.  I forgive you if it takes several days or a week for you to move on to Part 6.  If you are anything like me, once you saw code for FindObjectFields the gears in your head must have started spinning overtime as you became preoccupied thinking of every app you've ever written that could have benefited from a faster, lightweight method.  If that's the case, Part 6 can wait.  By all means roll up your sleeves and get busy testing out FindObjectFields.  Part 6 will be here when you get back.

Using RPC Metrics in your AF SDK has never been easier!

 

Occasionally you may have the need to view various metrics regarding your AF SDK code.  Recently I was given a bit of code to review performance metrics, which I have modified and will present below via a text file to download.  Note the code references PIServer.GetClientRpcMetrics, which is new to AF SDK 2.9.  If you are interested in the code but are working with an earlier version of AF, you will need to strip out any references to PIServer to get it to compile.

 

Modify App.Config

You will need to modify the <configuration> section of your App.Config file.  If the App.Config file does not exist, you will need to create it.  You will add the following lines:

 

<system.net>
  <settings>
    <performanceCounters enabled="true" />
  </settings>
</system.net>

 

This again would be in the <configuration> section, following the <startup> node.  If you have to create the App.Config file from scratch, the whole thing would look like:

 

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
  </startup>
  <system.net>
    <settings>
      <performanceCounters enabled="true" />
    </settings>
  </system.net>
</configuration>

 

For the metrics code further below, you may put it in a library project.  However, the library DLL as well as any application referencing the library should all have their App.Config modified as shown above.

 

MetricsTicker and MetricsSnapshot

I have 2 classes defined within one file named "Metrics.cs".  The MetricsTicker will track the starting and ending snapshot taken by MetricsSnapshot, and then neatly display the differences between the start and end.  MetricsSnapshot will take a snapshot of these metrics:

  • AF Server RPC Metrics (if any)
  • AF Client RPC Metrics (if any)
  • PI Client RPC Metrics (if any)
  • Network bytes sent  (a performance counter)
  • Network bytes received  (a performance counter)
  • Garbage collected memory
  • Allocated working set memory (a performance counter)
  • Allocated peak working set memory (a performance counter)
  • Timestamp (DateTime.UtcNow)
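
 

For a rough sense of where a few of these numbers come from, here is a minimal sketch of the non-RPC values using standard .NET calls; the RPC metrics come from calls such as PIServer.GetClientRpcMetrics mentioned above, and the actual Metrics.cs implementation may gather any of these differently:

 

//A sketch only; MetricsSnapshot may differ in its details.
long gcMemory = GC.GetTotalMemory(forceFullCollection: false);      //garbage collected memory

using (var process = System.Diagnostics.Process.GetCurrentProcess())
{
    long workingSet = process.WorkingSet64;                         //allocated working set memory
    long peakWorkingSet = process.PeakWorkingSet64;                 //allocated peak working set memory
}

DateTime timestamp = DateTime.UtcNow;                               //snapshot timestamp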

 

Simple Usage Example

Let's say I have some method named YourCustomMethod that makes lots of AF calls, and I wish to review the metrics related to that method.  I could reference the classes by passing the Asset Server (PISystem) of interest:

 

var metrics = new MetricsTicker(assetServer);
metrics.Start();
YourCustomMethod();
metrics.Stop();

 

Well, that seems easy enough so far!  If you want a high precision timer, you may optionally wrap a Stopwatch around your custom method call.  However, the last thing a starting snapshot does is grab DateTime.UtcNow, and the first thing an ending snapshot does is also grab DateTime.UtcNow so there is already a built-in way to measure the timespan between snapshots.  Plus if you want to focus just on the AF RPC calls, you can review the subtotals of the duration.
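
 

If you do want that extra precision, a Stopwatch wrapped around the same calls might look like this minimal sketch, reusing the example above:

 

var metrics = new MetricsTicker(assetServer);
var stopwatch = System.Diagnostics.Stopwatch.StartNew();

metrics.Start();
YourCustomMethod();
metrics.Stop();

stopwatch.Stop();
Console.WriteLine($"Stopwatch elapsed: {stopwatch.Elapsed}");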

 

Sample Output

Okay, starting and stopping our metrics was easy.  What about reporting?  How difficult is that?  It too is easy.  To output the difference in metrics, there is one simple command:

 

Console.WriteLine(metrics.DisplayDelta());

 

Which would produce something like:

 

AF Server RPC Metrics        Count   Duration(ms)  PerCall(ms)

-------------------------  --------  ------------  ------------

GetSDCollection                   1           0.5         0.484

GetElement                       11          81.4         7.400

GetTableList                      1           3.1         3.092

GetTable                          1           1.6         1.649

GetElementTemplate                2           9.3         4.634

GetCategory                       2          10.2         5.120

GetUOMDatabase                    1           2.0         2.020

SearchTotalCount                  1         246.4       246.412

SearchRefresh                     1           0.0         0.021

SearchObjectIds                   3         221.9        73.953

GetEventFrames                   24        5386.9       224.456

GetCategoryList                   3           9.3         3.102

GetAnalyses                      24         545.2        22.717

GetAnalysisTemplates              1           9.5         9.532

GetElementTemplates               1           3.7         3.720

GetAnalysisTemplateList           1           1.9         1.894

GetModels                        47       17257.7       367.185

-------------------------  --------  ------------  ------------

                 Subtotal       125       23790.8     23790.770

 

AF Client RPC Metrics        Count   Duration(ms)  PerCall(ms)

-------------------------  --------  ------------  ------------

GetTableList                      1          58.7        58.652

GetTable                          1          30.8        30.814

GetElementTemplate                2          51.2        25.599

GetCategory                       2          39.6        19.786

GetUOMDatabase                    1          68.5        68.490

SearchTotalCount                  1         269.2       269.233

SearchObjectIds                   3         328.5       109.488

GetEventFrames                   24       13932.8       580.535

GetCategoryList                   3          23.9         7.983

GetAnalyses                      24        1706.0        71.083

GetAnalysisTemplates              1          72.0        72.034

GetElementTemplates               1          15.0        15.026

GetAnalysisTemplateList           1          15.8        15.765

GetModels                        47       31414.9       668.402

SearchRefresh                     1           4.7         4.748

-------------------------  --------  ------------  ------------

                 Subtotal       113       48031.7     48031.692

 

Total GC Memory: 385.37 MB

Working Memory : 524.71 MB

Peak Wrk Memory: 539.85 MB

Network Sent   : 8.84 MB

Network Receivd: 181.44 MB

Elapsed Time   : 02:44.6

 

How easy is that to generate a report of your metrics?!!

 

There are different combinations available for the DisplayDelta() method, since its full signature is:

 

public string DisplayDelta(bool round = true, bool showServerRpcMetrics = true, bool showClientRpcMetrics = true)

 

Notice that the bottom of the above output neatly shows bytes in MB rounded to 2 decimal places, and the elapsed time span to 1/10th of a second.  This is due to the round parameter defaulting to true.  You can get the full bytes and time span with DisplayDelta(round: false).
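
 

The call itself, reusing the metrics object from the earlier example, is simply:

 

Console.WriteLine(metrics.DisplayDelta(round: false));

 

which produces output like: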

 

Total GC Memory: 464,306,792 bytes

Working Memory : 600,330,240 bytes

Peak Wrk Memory: 601,427,968 bytes

Network Sent   : 9,271,104 bytes

Network Receivd: 190,250,782 bytes

Elapsed Time   : 00:02:47.1073888

 

Memory is a Guesstimation

The Network Bytes Sent and Received are fairly accurate, but the values for memory usage should be considered a ballpark value rather than a highly accurate value.  There's so much that goes on inside of .NET with garbage collection, the memory heap, locked pages, etc., that it is tough to be precise.  Suffice to say that if you have an app that routinely uses 400 MB of Total GC Memory, and then you make changes to your app and see the memory routinely drop to 100 MB, then you should have peace of mind that you've reduced your memory needs by 75%.  Note that I peppered that last sentence with "routinely".  That's because, thanks to the mysteries of garbage collection, you may run the same app 10 times in a row and 9 of those times the memory may hover around 400 MB, while 1 of those times it may unexpectedly drop to 200 MB.  This should be considered a one-off due to the GC doing something independent of your app but obviously affecting your app.

 

While the memory usage is not highly accurate, I personally find it to be an acceptable gauge for comparisons.

 

Download Files

Here's the "Metrics29.cs" file to add to your projects.  You may need to change the namespace accordingly.  Note that "Metrics29.cs" requires AF SDK 2.9 or better.  For AF versions 2.6 through AF 2.8, you may use the "Metrics28.cs" version.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

The Classical Full Load Approach

We are going to be using the AFEventFrameSearch.FindEventFrames method, which was about the only thing available to us in AF SDK 2.8.  In Part 3 we talked about the filter query we will be using.  We essentially want a 2 level summary to be performed: the first level is by Manufacturer and the second level is by Model.  Take some time to think of how you would have done this.  Many of you should already have firm ideas on how you would do it, and perhaps many of you have already done something like this before.

 

My approach will be to call FindEventFrames using fullLoad: true because I do need to reference some attributes.  As explained at the bottom of Part 2, this is rather heavyweight since it will bring back a lot of stuff that I'm not interested in for the specific task at hand.  Despite this heaviness, there is one crucial thing the full load doesn't bring back: attribute data!  That means I have to compose a 2nd set of calls to fetch the data, which means additional trips to the server.  An experienced developer knows that it's inefficient to call GetValue() one at a time, so we will implement some sort of custom chunking to process in bulk in order to minimize the number of trips.

 

For those who attended the UC 2017 lab, I am going to do something a bit different.  In the lab, I would initialize a StatsTracker instance in the method below, and the method would return the StatsTracker.  Here, I decided to initialize StatsTracker shortly after I set my database and template objects, but before I record the metrics.  The initialization makes an RPC call to fetch the AFTable, which is the same for all 5 apps, so I don't really want to measure it for all 5 apps since it's the same thing.  The method below differs from the lab in that it takes the StatsTracker as an input argument and now returns void.

 

public void GetSummaryByMfrAndModel(StatsTracker summary, AFDatabase database, IList<AFSearchToken> tokens)
{
    const int chunkSize = 5000;

    //Starting with AF 2.9, AFSearch implements IDisposable
    using (var search = new AFEventFrameSearch(database, "FindEventFrames Example", tokens))
    {
        //Opt-in to server side caching
        search.CacheTimeout = TimeSpan.FromMinutes(10);

        var frames = search.FindEventFrames(fullLoad: true, pageSize: 10000);

        var chunk = new List<AFEventFrame>();

        foreach (var frame in frames)
        {
            chunk.Add(frame);

            //Process in bulk calls for a given chunk
            if (chunk.Count >= chunkSize)
            {
                ProcessChunk(chunk, summary);
                chunk = new List<AFEventFrame>();
            }
        }

        //Process last chunk (if any)
        if (chunk.Count > 0)
            ProcessChunk(chunk, summary);
    }
}

private void ProcessChunk(IList<AFEventFrame> chunk, StatsTracker summary)
{
    var attributes = new AFAttributeList();

    //First pass over each event frame in this chunk to gather the attributes
    foreach (var frame in chunk)
    {
        attributes.Add(frame.Attributes["|Manufacturer"]);
        attributes.Add(frame.Attributes["|Model"]);
    }

    //Secondly issue a bulk GetValue call on those attributes, but I need a dictionary for faster lookups
    var values = attributes.GetValue().ToDictionary(pv => pv.Attribute);

    //Finally pass over each event frame one last time to update summary using fetched values.
    foreach (var frame in chunk)
    {
        //Read data from current event frame
        var mfr = values[frame.Attributes["|Manufacturer"]].Value.ToString();
        var model = values[frame.Attributes["|Model"]].Value.ToString();

        summary.AddToSummary(mfr, model, frame.Duration, 1);
    }
}

 

You will note in the code above that I opt in to server-side caching via CacheTimeout, which I also do for the other 4 apps.  The difference is that here, where I know there is a performance penalty due to the heaviness of the objects, I use a timeout of 10 minutes.  Since the other 4 apps in the series will be much, much faster, I will use a timeout of 5 minutes for them.

 

About pageSize

The default value for pageSize is 1000.  I chose 10000 here.  Why?  Is it better?  Do I have inside information that it's better?  NO.  I did this because life is too short.  While developing the lab, and testing frequently with each new beta build, I ran this over 1000 times.  Early on, when I looked at a 3-day period, the code above took 10 minutes to run.  It would be ridiculous to have a lab exercise take 10 minutes just to execute.  So I trimmed my filter to 1 day, and the code then took 4-5 minutes to run using the default pageSize.  A pageSize of 5000 took 3-and-a-half minutes to run, but used more memory.  I settled on a pageSize of 10000, which used even more memory, but took about 2-and-a-half minutes to run, which is about as long as I would want a lab exercise to take.

 

There was a side benefit: since this took more memory using pageSize: 10000, it really helped to show off the memory savings of the new methods.  But that was just a side benefit.  It really came down to the fact that I didn't want to wait 10 minutes a couple of times a day for this to finish.

 

About Producing Metrics

I have shown you the results of metrics, but I haven't shown you how I produce them.  That is covered in a separate blog post.  It is not a part of this 9-part series; it stands on its own since metrics tracking is a topic completely independent of event frame searching or aggregation of data values.

 

Metrics Comparison (from Part 2)

The numbers below are from a 2-core VM using Release x64 Mode.  The smaller values are better.  Caution that we sometimes have a difference in UOM between MB and KB; units are shown explicitly where they differ.

 

Resource Usage:

Values displayed are in MB unless noted otherwise

Method             Total GC Memory (MB)   Working Set Memory (MB)   Network Bytes Sent   Network Bytes Received
FindEventFrames    145.48                 257.08                    9.13 MB              190.08 MB
FindObjectFields   1.28                   65.55                     5.00 KB              3.68 MB
Summary            2.54                   55.35                     8.58 KB              261.81 KB
GroupedSummary     9.86                   64.28                     6.24 KB              1.98 MB
AFSummaryRequest   7.29                   65.36                     5.00 KB              3.68 MB

 

Performance:

Method             Client RPC Calls   Client Duration (ms)   Server RPC Calls   Server Duration (ms)   Elapsed Time
FindEventFrames    120                63337.0                110                39118.1                02:27.8
FindObjectFields   10                 5360.8                 11                 4547.6                 00:06.0
Summary            15                 9484.6                 16                 9310.9                 00:10.1
GroupedSummary     12                 5527.2                 13                 4938.5                 00:06.2
AFSummaryRequest   10                 2992.2                 10                 2222.2                 00:03.7

 

 

Next up: Finally Something New

This was just one possible way of producing aggregates, or specifically a 2-level aggregation.  There's lots of different ways this could have been done, even with FindEventFrames.  How would you have done it?  Would you have done something mostly similar but differing in some details?  Or do you have a totally different approach?

 

Anyway, we have established our baseline for performance metrics.  We are now ready to venture into brand new territory with Part 5 where we see how to use the new FindObjectFields method.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

Event Frame Template

I'm going to skip over creating a nice warm-hearted, make-believe story.  This is an Advanced AF SDK series.  My audience is accustomed to reading Help files in all their minimalist glory.  This is a step up from that.  Besides, this part is going to be one of the longest in the series, so let's jump right into things:

 

Here's what my event frame template looks like:

Template: Low Efficiency inherited from Condition Alert

[Image: EF Template.png]

The report in Part 2 uses the Duration property and summarizes by the Manufacturer and Model attributes.  We will also filter on an attribute: we are only interested in when the Wind Velocity at Start is greater than or equal to 11 miles per hour.  IMPORTANT: to be able to efficiently filter on an attribute, we need to have issued CaptureValues() on all the event frames of interest.  Otherwise any performance benefit will be lost.
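
 

Capturing values is just a call per event frame.  Here is a minimal sketch, where closedFrames is a hypothetical collection of AFEventFrame objects whose end times have just been set and database is the owning AFDatabase:

 

foreach (AFEventFrame frame in closedFrames)
{
    frame.CaptureValues();   //snapshot the attribute values onto the event frame
}
database.CheckIn();          //persist the captured values to the AF server

 

If your event frames are generated by Asset Analytics, they may already be capturing values, so verify before adding code like this.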

 

CRITICAL: in order to filter on a numeric attribute, the attribute's data type must be a floating point type.  It cannot be integral (yet), though this may change in future versions.  My Wind Velocity at Start is defined as a Single, and there is a Wind Velocity that is also a Single back on the primary referenced element.  It's perfectly okay that back in the data archive the underlying PI point is an Int32.  What matters to AFEventFrameSearch is that the filtered attribute must be floating point.  Another benefit: if you ever plan to allow any UOM conversion in the future, floating point values convert with better precision than integers.

 

Analysis Template to Generate Event Frames

[Image: Analysis.png]

Note the analysis has 3 possible Start triggers and each trigger has a different Severity level.  What happens if a Warning event occurs but later degrades to be Major before the End trigger is satisfied?  This "one" low efficiency event would create at least 3 event frames:

  1. A root level frame for the entire duration of the event with the worst Severity
  2. A child event frame for the initial Warning
  3. A child event frame for the subsequent Major breach

 

Sample of Compound Event

Not every event triggered will generate multiple event frames.  Some will.  Some won't.  That keeps things interesting.  Here's a sample of what that might look like:

 

[Image: Root Level.png]

 

You will discover later that the query I use with AFEventFrameSearch will only look at root level event frames.  I want to count the total occurrences and duration of the "one" low efficiency event, so I must not double-dip and count the child frames.

 

The Query

I'm going to show 3 different ways of generating a query.  There are 2 main constructor overloads of AFEventFrameSearch I could use.  One accepts a list of AFSearchTokens.  The other takes a string.  For the one that takes a string, I will build that string 2 ways: one as a long, wide string, and the other using StringBuilder.  For all these varied techniques, I will use the same base filter variables:

 

bool allDescendants = false;
string templateName = "Low Efficiency";
DateTime startTime = new DateTime(2017, 1, 21, 0, 0, 0, DateTimeKind.Local);
DateTime endTime = new DateTime(2017, 1, 22, 0, 0, 0, DateTimeKind.Local);
//CRITICAL: the attrPath includes a prefixed "|".
string attrPath = "|Wind Velocity at Start";
int attrValue = 11;

 

Let's assume that I also previously have declared and set an AFDatabase object named "database".

 

A few sections back I said a filtered numeric attribute must have a floating point data type.  But my attrValue variable is declared as an int (Int32).  This is also okay.  Ultimately I must serialize everything to a string, so be it an int (11) or a Single (11.0F), it will eventually be converted to string ("11").

 

As discussed in the previous section, I only want to look at root level event frames.  Another way of saying that is I am not interested in all descendants.  The template name should be self-explanatory.  I care about only one specific day, which is to say it starts at midnight of one date and ends at midnight on the following date.  One key distinction for any string passed is whether it refers to an attribute or a property.  The way to make that distinction is to prefix an attribute name with "|".  Attribute names that contain blanks should be wrapped in single or double quotes.  You should treat the "|" as part of the name (it's actually a path), so the "|" would also be inside the single or double quotes.

 

Tokens

Using the above variables, here's what I would do to create a search object using tokens:

 

var tokens = new List<AFSearchToken>();
tokens.Add(new AFSearchToken(AFSearchFilter.AllDescendants, AFSearchOperator.Equal, allDescendants.ToString()));
tokens.Add(new AFSearchToken(AFSearchFilter.Template, AFSearchOperator.Equal, templateName));
tokens.Add(new AFSearchToken(AFSearchFilter.Start, AFSearchOperator.GreaterThanOrEqual, startTime.ToString("O")));
tokens.Add(new AFSearchToken(AFSearchFilter.End, AFSearchOperator.LessThanOrEqual, endTime.ToString("O")));
//Attribute values are special case:
tokens.Add(new AFSearchToken(AFSearchFilter.Value, AFSearchOperator.GreaterThanOrEqual, attrValue.ToString(), attrPath));

using (var search = new AFEventFrameSearch(database, "tokens example", tokens))
{
    //Get ready to rumble
}

 

For the Start and End tokens, I use the DateTime Round Trip specifier "O" to generate an ISO 8601 compliant time string that is (a) culturally neutral and (b) unambiguous as to its instant in time.  As a side note, the VM has its time zone set to UTC.

 

Everywhere else anything that should be a string is given an appropriate ToString().  A quick sanity check by dumping the query to my console produces:

Base Query Tokens:

    AllDescendants:False

    Template:'Low Efficiency'

    Start:>=2017-01-21T00:00:00.0000000+00:00

    End:<=2017-01-22T00:00:00.0000000+00:00

    '|Wind Velocity at Start':>=11

 

You should note that the AFSearchToken automatically inserted single quotes wherever there are embedded blanks.  See 'Low Efficiency' and '|Wind Velocity at Start'.  Another advantage of the "O" specifier for the Round Trip time string is that the time string does not contain blanks.

 

As mentioned in Part 1, AFSearch now implements IDisposable, so the search object above is wrapped in a using block.

 

Wide String

The same query could just as easily have been one wide string, but it is now my responsibility as the developer to wrap values in single quotes wherever there is a possibility of an embedded blank.  The burden is on you.

 

var query = $"AllDescendants:{allDescendants} Template:'{templateName}' Start:>={startTime.ToString("O")} End:<={endTime.ToString("O")} '{attrPath}':>={attrValue}";

using (var search = new AFEventFrameSearch(database, "string example", query))
{
    //Get ready to rumble
}

 

Let's consider any value that I absolutely know would never have an embedded blank when ToString() is applied:

  • a bool, so I don't worry about allDescendants
  • an int, so I don't worry about attrValue
  • any DateTime output with Round Trip specifier "O"

 

In general, any String variable that could be a name or path should be wrapped in single quotes for safety.  Out of sheer caution, consider anything else to be a BIG MAYBE.  Any such MAYBEs should be wrapped in single quotes.  Here again, see '{templateName}' and '{attrPath}'.

 

StringBuilder

Once again, the same query could also have been constructed using StringBuilder, particularly if the string would be very wide.  Here again the burden is on you as the developer to wrap values in single quotes when such values could contain an embedded blank.

 

var builder = new StringBuilder();
builder.Append($"AllDescendants:{allDescendants}");
//Be sure to include the leading blank in each Append below.
builder.Append($" Template:'{templateName}'");
builder.Append($" Start:>={startTime.ToString("O")}");
builder.Append($" End:<={endTime.ToString("O")}");
builder.Append($" '{attrPath}':>={attrValue}");

using (var search = new AFEventFrameSearch(database, "string example", builder.ToString()))
{
    //Get ready to rumble
}

 

There you have it.  Three different ways you may build a query to be passed to AFEventFrameSearch.  Which way is the best way?  That's up to you.  It's purely a matter of personal preference, as all 3 ways produce equivalent filters.

 

Adding Conformity to 5 Very Different Methods

I mentioned that how we interact with each of our 5 methods will be different per app.  This entails what we do in order to make each call, and later what we do with the different objects returned from each call.  In order to have some uniformity among the apps, especially since I don't want to make 5 different methods to produce the pretty report shown in Part 2, I will have the apps conform to what will be output.  What I settled upon was I would have a dictionary inside a dictionary.  The outer key will be the Manufacturer name.  The inner key will be the Model name.  The value of the inner key will be this custom structure:

 

public struct DurationStats
{
    public TimeSpan TotalDuration { get; set; }

    //Eventually some customers may have well over 2 billion event frames!
    //When that day comes, an Int64 (long) should be used for Count.
    public int Count { get; set; }

    public TimeSpan AverageDuration => Count > 0 ? TimeSpan.FromTicks(TotalDuration.Ticks / Count) : new TimeSpan();
}

 

Ok, so I want the average duration.  But I also want the count of events.  I chose to track the total duration along with the count, and calculate the average duration from those.  This makes it easier on my first 2 methods (FindEventFrames and FindObjectFields), which will return detail rows for each event frame.  I don't necessarily need to track the total duration for the last 3 methods (Summary, GroupedSummary, AFSummaryRequest) and could just as easily have gone directly with average duration.  But again, for conformity across the apps, I will be tracking total duration and calculating the average myself.

 

In order to organize the stats by Manufacturer and Model, I use a dictionary inside a dictionary as shown below.  I offer a few overloads for updating the stats, and again you will note that I apply a rigorous approach to truly segment each Model within each Manufacturer.  Also included in this class is how the pretty report is generated.

 

// Dictionary of (1) Manufacturer, with an inner Dictionary of (2) Model to (3) Stats
public class StatsTracker : Dictionary<string, Dictionary<string, DurationStats>>
{
    public void AddToSummary(string mfr, string model, DurationStats stats)
    {
        AddToSummary(mfr, model, stats.TotalDuration, stats.Count);
    }

    public void AddToSummary(string mfr, string model, AFTimeSpan duration, int countIncrement)
    {
        AddToSummary(mfr, model, duration.ToTimeSpan(), countIncrement);
    }

    public void AddToSummary(string mfr, string model, TimeSpan duration, int countIncrement)
    {
        //Add to appropriate summary, first by Manufacturer ...
        Dictionary<string, Support.DurationStats> inner;
        if (!TryGetValue(mfr, out inner))
        {
            inner = new Dictionary<string, Support.DurationStats>();
        }

        //... and secondly by Model
        Support.DurationStats stats;
        if (!inner.TryGetValue(model, out stats))
        {
            stats = new Support.DurationStats();
        }

        //Update the stats
        stats.Count += countIncrement;
        stats.TotalDuration = stats.TotalDuration.AddDuration(duration);
        inner[model] = stats;
        this[mfr] = inner;
    }

    public override string ToString() => DisplaySummary(indent: 0);
    public string ToString(byte indent) => DisplaySummary(indent);

    public string DisplaySummary(byte indent = 0)
    {
        var totalMfrs = 0;
        var totalModels = 0;
        var totalFrames = 0;
        var pad = new string(' ', indent);
        var builder = new StringBuilder();
        builder.AppendLine($"{pad}{"Manufacturer",-13} {"Model",-12} {"Count",9} Avg Duration");
        builder.AppendLine($"{pad}{new string('-', 13)} {new string('-', 12)} {new string('-', 9)} {new string('-', 16)}");
        foreach (var outer in this)
        {
            totalMfrs++;
            foreach (var inner in outer.Value)
            {
                builder.AppendLine($"{pad}{outer.Key,-13} {inner.Key,-12} {inner.Value.Count,9:N0} {inner.Value.AverageDuration}");
                totalModels++;
                totalFrames += inner.Value.Count;
            }
        }
        builder.AppendLine($"{pad}{new string('-', 13)} {new string('-', 12)} {new string('-', 9)} {new string('-', 16)}");

        // The very last thing we write should use simple Append and not AppendLine so as not to include
        // a trailing Environment.NewLine sequence.  Leave it to the calling Console.WriteLine(x) to
        // issue the last NewLine.
        builder.Append($"{pad}{totalMfrs,13:N0} {totalModels,12:N0} {totalFrames,9:N0}");

        return builder.ToString();
    }
}

 

Elsewhere (meaning in another static class) I also defined an extension method to allow me to add a TimeSpan to a Timespan:

 

public static TimeSpan AddDuration(this TimeSpan duration1, TimeSpan duration2)
{
    return TimeSpan.FromTicks(duration1.Ticks + duration2.Ticks);
}

 

There you have it.  I will be able to funnel the different outputs to StatsTracker to have uniformity of producing the report.
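
 

As a quick usage sketch, using the manufacturer and model names from this database but entirely made-up durations, updating and printing the tracker looks like:

 

var summary = new StatsTracker();

//Durations here are hypothetical, purely for illustration.
summary.AddToSummary("Sailr", "SWTG-3.6", TimeSpan.FromHours(3.5), 1);
summary.AddToSummary("Sailr", "SWTG-3.6", TimeSpan.FromHours(4.0), 1);
summary.AddToSummary("Cervantes", "DQ-M0L", TimeSpan.FromHours(3.9), 1);

//The overridden ToString() produces the formatted report.
Console.WriteLine(summary);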

 

Initializing StatsTracker

I didn't drive home the point that the report shown in Part 2 was sorted first by Manufacturer and secondly by Model.  But it was.  And you may note in the StatsTracker class I am not using a SortedDictionary.  I get around this by initializing an instance of StatsTracker that is already sorted.

 

Is this necessary?  That depends upon which method we are using.  Certainly it is a nicety.  You should not assume the results returned by any of our methods will be sorted.  A general rule of thumb is that results are produced in the order they are discovered.  If you want nice sorting, you can sort the dictionaries after you have built them, or, as I have chosen to do, you may initialize an instance to already be sorted.

 

Again I ask, is this necessary?  Not for all the methods we call.  And it's not strictly necessary that it be sorted.  But 2 of the methods, Summary and GroupedSummary, require some a priori knowledge in order to even issue the call in the first place.  For Summary, where we summarize per Model, we must know the models we want to summarize before calling Summary.  For GroupedSummary, we must know the manufacturers to summarize since we want to aggregate by Manufacturer.  This a priori information doesn't absolutely have to be sorted, but it must be accumulated.  So why not sort it as well?

 

The burning question for you should be: how do I determine this knowledge beforehand?  Will you have a hard-coded list?  How will it be maintained?  For my database, I use an AFTable.  It doesn't have to be a table of just manufacturers and models, but it does have to contain all the information you may be interested in.  For my database, I have a table of wind turbine power coefficients.  It has multiple entries per manufacturer and model:

 

[Image: screenshot of the wind turbine power coefficient AFTable]

 

That has all the information I desire, and then some.  All I need to do is organize it and populate my StatsTracker with it.

 

public static StatsTracker InitializeFromTable(AFDatabase database, string tableName, string mfrColName, string modelColName)
{
    var table = database.Tables[tableName].Table;

    // https://weblogs.asp.net/wenching/linq-datatable-query-group-aggregation
    var query = from row in table.AsEnumerable()
                group row by new { Manufacturer = row.Field<string>(mfrColName),
                    Model = row.Field<string>(modelColName) } into grp
                orderby grp.Key.Manufacturer, grp.Key.Model
                select new { grp.Key.Manufacturer, grp.Key.Model };

    var dict = new StatsTracker();
    foreach (var row in query)
    {
        Dictionary<string, DurationStats> inner;
        if (!dict.TryGetValue(row.Manufacturer, out inner))
        {
            inner = new Dictionary<string, DurationStats>();
        }
        inner[row.Model] = new DurationStats();
        dict[row.Manufacturer] = inner;
    }
    return dict;
}

 

I would call the above method passing my database object, tableName would be "OSIDemo_Wind Turbine Power Coefficient", mfrColName is "Manufacturer", and modelColName is "Model".  I set the method up to take arguments in case I ever had another source table that used slightly different names, e.g. a column named "Mfr".  This will produce a StatsTracker instance with:

  • The outer dictionary has 2 entries.
  • Outer keys are "Cervantes" and "Sailr" in that order.
  • The inner dictionary for Manufacturer "Cervantes" has 1 item for Model "DQ-M0L".
  • The inner dictionary for Manufacturer "Sailr" has 2 items: Models "Nimbus 2000" and "SWTG-3.6" in that order.

 

If you don't need to initialize StatsTracker, which is a choice if you are not calling Summary or GroupedSummary, and you want to sort it after the fact, then you may use this extension method:

 

public static StatsTracker SortByKeys(this StatsTracker input)
{
    var output = new StatsTracker();
    var mfrKeys = input.Keys.ToList();
    mfrKeys.Sort();
    foreach (var mfr in mfrKeys)
    {
        var modelKeys = input[mfr].Keys.ToList();
        modelKeys.Sort();
        foreach (var model in modelKeys)
        {
            var stats = input[mfr][model];
            output.AddToSummary(mfr, model, stats.TotalDuration, stats.Count);
        }
    }
    return output;
}

 

Mental Exercise: I have shown you how I set up MY application.  Your database is probably organized very differently.  There are many ways to achieve the same thing.  If you need to know certain information before making a call, you must ask yourself how you would acquire that information for your application.  What's right for my app could be wrong for yours.  Think through your problem and come up with a solution that works for you.

 

Next Up: Getting a Metrics Baseline

I warned you these were going to get longer!  We have established many of the pieces and parts used by the database and the subsequent applications.  In Part 4, we will establish a baseline for metrics by doing it the old way.  The old way is our trusty FindEventFrames, which has been available since AF 2.8.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab which showcases AFEventFrameSearch methods new to PI AF SDK 2.9.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

The Final Output Report

In Part 3, we'll cover more about the AF objects being used by the 5 different applications, all of which will produce the exact same report but use completely different methods to do so.  The report we want to generate will show the count of event frames and average duration per manufacturer and model.  Here's what the desired report looks like:

 

Manufacturer  Model            Count Avg Duration

------------- ------------ --------- ----------------

Cervantes     DQ-M0L           8,136 03:53:21.4859882

Sailr         Nimbus 2000      1,499 03:44:28.8192128

Sailr         SWTG-3.6        13,678 03:53:35.3165667

------------- ------------ --------- ----------------

            2            3    23,313

 

Keep in mind this is a simple example with a small dataset.  Granted, each Model currently happens to be unique within the entire dataset, but we are going to assume that in the near future there may be a "Nimbus 2000" model offered by Cervantes.  This means we can't be lazy in our programming.  And by lazy, I don't mean deferred execution, but rather sloppy.  Instead we must be rigorous in our programming to truly count by a Model within a Manufacturer.  You are invited to think about how this would be done in AF 2.8, and in Part 4 we will go over one such implementation.

 

Different Methods to Produce Same Report

In Parts 4 - 8 we are going to dedicate each part to a different method that produces the exact same report.  Mind you, calling 5 different methods means we must consider 5 different ways of how we formulate a call to the respective method, plus the different type of objects that are returned from each method.  In order, the 5 apps will be dedicated to these 5 AFEventFrameSearch methods:

  1. FindEventFrames (Part 4)
  2. FindObjectFields (Part 5)
  3. Summary (Part 6)
  4. GroupedSummary (Part 7)
  5. Composite AFSummaryRequest call (Part 8 and not really an AFEventFrameSearch call)

 

These apps will not necessarily be presented to you in order of slowest to fastest, nor in terms of biggest resource hog to skimpiest.  Rather, I present them in terms of how many rows of data are returned:

  1. FindEventFrames, returns all records with fairly heavyweight objects
  2. FindObjectFields, returns all records but with lightweight classes of just the desired values
  3. Summary, issues one aggregate call per Model (total of 3 calls for sample dataset)
  4. GroupedSummary, issues one aggregate call per Manufacturer (total of 2 calls)
  5. Composite AFSummaryRequest call, issues one aggregate call for entire dataset (total of 1 call)

 

If you are wondering why we don't just skip to the last one, since making one call seems to make the most sense, I would caution that it is a bit more complicated to call.  Plus we learn a bit more about the other methods, which will definitely have a place in your bag of tricks.  Furthermore, each method offers different benefits that we are about to explore, and we would miss out on an opportunity for such comparisons if we skipped to the last one.

 

Based on my output requirements, I will be shoehorning Summary and GroupedSummary to conform to my contrived output needs.  You will see in their respective parts that I am forced to make multiple calls.  In this respect, it's not a great use case.  To make up for this, I will show some bonus code and metrics for use cases more tailor-made for those respective methods in their associated parts.

 

Metrics Comparison

The numbers below are from a 2-core VM using Release x64 Mode.  The smaller values are better.  Caution that we sometimes have a difference in UOM between MB and KB; units are shown explicitly where they differ.

 

Resource Usage:

Values displayed are in MB unless noted otherwise

Method             Total GC Memory (MB)   Working Set Memory (MB)   Network Bytes Sent   Network Bytes Received
FindEventFrames    145.48                 257.08                    9.13 MB              190.08 MB
FindObjectFields   1.28                   65.55                     5.00 KB              3.68 MB
Summary            2.54                   55.35                     8.58 KB              261.81 KB
GroupedSummary     9.86                   64.28                     6.24 KB              1.98 MB
AFSummaryRequest   7.29                   65.36                     5.00 KB              3.68 MB

 

Performance: 

Method             Client RPC Calls   Client Duration (ms)   Server RPC Calls   Server Duration (ms)   Elapsed Time
FindEventFrames    120                63337.0                110                39118.1                02:27.8
FindObjectFields   10                 5360.8                 11                 4547.6                 00:06.0
Summary            15                 9484.6                 16                 9310.9                 00:10.1
GroupedSummary     12                 5527.2                 13                 4938.5                 00:06.2
AFSummaryRequest   10                 2992.2                 10                 2222.2                 00:03.7

 

The above tables are quite enlightening, but don't jump to premature conclusions.  For instance, one may be tempted to proclaim that the GroupedSummary method is faster than the Summary method.  That's not true.  You will see later that my application requires me to make 2 GroupedSummary calls but 3 Summary calls, so there is an extra method call involved.  I also tested my app issuing 3 GroupedSummary calls in lieu of Summary, and it took 5 seconds longer.  The lesson here is to make as few calls to the server as possible.  What if we had 50 Manufacturers and each one had 3 Models?  Then we would need 150 calls for Summary, 50 calls for GroupedSummary, but still only 1 call for the compound AFSummaryRequest.  My best advice is to avoid making too many calls if there is a better way available.

 

Later in Part 6, I will temporarily change my output requirements to show bonus numbers where I issue one and only one Summary call.  Bottom line: it takes only 534 ms for the client RPC duration, and a blazing 1.1 seconds total elapsed time.  Still think Summary is slow?  Not for the right use case, it isn't.

 

I offer a similar bonus in Part 7 as well, where I issue one and only one GroupedSummary call.  Client RPC duration is 1971 ms and total elapsed time is 2.6 seconds.

 

Explaining the Performance Boost

The first time the performance metrics were shown in the lab, a hearty discussion followed.  Why the big difference?  It's not due to caching.  You will see later that every exercise, including FindEventFrames, implements server-side caching.  The question isn't really why the new methods are faster, but rather why the older method is slower.

 

The older method is very heavy.  All we need for each event frame is its Manufacturer (string), Model (string), and Duration (AFTimeSpan).  But FindEventFrames(fullLoad: true) brings back so much more.  It brings back every property, every attribute, and every referenced element for every event frame.  And because our event frames were generated from an Analysis, it also brings over the Analysis, which includes parsing every expression in the Analysis, where it spends some time deciding whether something enclosed in single quotes is an 'attribute' or a 'time'.  It is a performance drain to serialize everything from SQL Server as it makes its way to the client workspace as AF objects.

 

The newer methods are far more lightweight.  You will only be getting back the skinny bit of data you've asked for.  That explains the reduced RPC calls, and therefore the reduced execution time, as well as the smaller resource footprint.

 

Lose Wait Now, Ask Me How!

I'm teasing you because there's lots more to cover in this series.  You have a lot more reading time to invest.  Hopefully the metrics savings I've shown you will convince you to stay tuned for more.  The next parts will get longer, and we have 2 more to cover before we even get to the new methods!  But now you have full expectations of the benefit that can be realized from these new methods, so it should be worth sticking around.

 

Up next in Part 3, we discuss the AF objects we will be working with in all the later parts.

The Advanced AF SDK lab at UC SF 2017 was on this very topic.  The material in this 9-part series follows much of that lab.

 

Blog Series: Aggregating Event Frame Data

Part 1 - Introduction

Part 2 - Let's Start at the End

Part 3 - Setting up the App

Part 4 - Classical FindEventFrames

Part 5 - Lightweight FindObjectFields

Part 6 - Summary per Model

Part 7 - GroupedSummary per Manufacturer

Part 8 - Compound AFSummaryRequest

Part 9 - Conclusion

 

INTRODUCTION

PI AF SDK 2.9 has many exciting new improvements and methods, particularly with the AFEventFrameSearch class.  This version introduces new methods that have the potential to:

  1. Simplify your code
  2. Reduce number of RPC calls
  3. Reduce overall execution time
  4. Reduce memory consumption
  5. Reduce CPU usage
  6. Reduce network traffic

 

Just as a teaser to what the blog series will illustrate, using the classical FindEventFrames(fullLoad: true) for my app takes over 140 seconds and over 100 MB of allocated memory.  Using some of these new methods can reduce the run time to less than 10 seconds and the memory to under 10 MB!  That alone should encourage you to follow this series.  Granted, there's always the disclaimer "Actual results may vary", but trust me on this.  When I put together what I felt was a real-world application, I was amazed at the improvement.  I am quite enthusiastic about sharing this information with the PISquare community.

 

Here's a list of some of the methods brand new to AFEventFrameSearch:

  1. BinnedSummary
  2. BinnedSummaryAsync
  3. FindObjectFields
  4. FrequencyDistribution
  5. FrequencyDistributionAsync
  6. GroupedSummary
  7. GroupedSummaryAsync
  8. Histogram
  9. HistogramAsync
  10. Summary
  11. SummaryAsync

 

When you get your hands on AF 2.9, you definitely want to check these new methods out. For this particular series, I will be showcasing these methods:

  1. FindObjectFields (Part 5)
  2. Summary (Part 6)
  3. GroupedSummary (Part 7)
  4. AFSummaryRequest (Part 8)

 

I grew to appreciate each method for what it does.  In particular, I really liked FindObjectFields.  While it's used here within my examples for aggregation, it's not actually an aggregation method; it's quite versatile and its primary purpose is to be used for detail data.

 

Also NEW: Implements IDisposable

Effective with AF SDK 2.9, the abstract AFSearch class now implements IDisposable. This means that any class derived from AFSearch, such as AFEventFrameSearch, also implements IDisposable. This also means you may now wrap the call in a using block as in:

 

using (var search = new AFEventFrameSearch(arguments))
{
     //do something with the search
}

 

When the using block terminates the cache will immediately be released on the server. Code prior to AF SDK 2.9 that doesn’t specify using will still work without modification, though you are encouraged to make these changes. If you don’t make this change, the cache will eventually be released when its Timeout expires.

 

Besides a using block, you may alternatively issue an explicit Close() on your search object. This too will immediately release the server cache. Though using or Close() is not required, it is considered a best practice to do so.
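
 

A minimal sketch of the explicit Close() alternative, using the same placeholder arguments as the using example above:

 

var search = new AFEventFrameSearch(arguments);
try
{
     //do something with the search
}
finally
{
     search.Close();  //immediately releases the server-side cache
}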

 

Caveat: Use CaptureValues

It's a welcome sight to see both lightweight and aggregation search methods available for event frames.  You will see in Part 2 an example of the performance improvements.  However, you will only realize such a boost in performance if you have issued CaptureValues for your event frames.  This allows server-side filtering of attribute values or aggregation of attribute values.

 

If you have not captured values, your code will still work just fine albeit more slowly as filtering and evaluation of attribute values must be performed client-side.

 

Why You Should Keep Reading the Series

So keep an eye out in the forums for updates to this blog series.  I hope I've whetted your appetite enough to want to read through all 9 parts!  Warning: they will get longer and contain code.  Don't fear that you will have to do a lot of reading only to be disappointed in the results.  In Part 2, I will go ahead and show you the final performance results!  That's right.  We start with the end.  I'm confident that by showing you the savings and benefits early, you will be glad to be vested in reading the entire series.

LATAM Regional Conference Programming Hackathon 2017

Click here to register!

 

Date: June 5, 2017, from 9:30 AM to 7:30 PM

Location: OSIsoft Brasil - Alameda Santos, 1940, 15th floor - São Paulo, SP, Brazil

 

Join the first programming Hackathon at the LATAM Regional Conference to learn, meet other professionals in your field, and compete for prizes! If you are a data scientist, administrator, integrator, developer, or PI System architect, you should take part in this challenging event. We will provide participants with all the necessary tools, plus a ready-to-use environment, to develop an application in 10 hours. Several experts will be on hand to answer participants' technical questions.

 

You will have an asset structure and real data at your disposal. Using the PI System and our developer technologies (PI Developer Technologies), you will use your creativity to develop an application that adds value for consumers and businesses through the power of data! You will work in a team to achieve this goal, using all the resources of the PI System, including real-time data, and your innovative skills. At the end, your application will be evaluated by a panel of experts. The best applications will win prizes, and you and your team will be publicly recognized for the achievement. If you have any questions about this event, contact mloeff@osisoft.com.

 

 

Q: Who should participate in this event?

A: If you are a developer, systems architect, integrator, data scientist, or technologist interested in industrial digital transformation applications, this event is for you.

 

 

Q: What are the main benefits of participating in this event?

A: You will compete in a unique event to win prizes and recognition within our community of PI System users. In addition, this event is a great opportunity to meet and work alongside other professionals in your field. Finally, you will learn about the very latest technologies, not only in the market but also in the PI System.

 

 

Q: What should I bring?

A: You will use your own laptop during this event, so don't forget to bring it. We will provide the data infrastructure and technical support.

 

 

 

Q: When and where will the event take place? Will it conflict with any other Regional Conference activity?

A: No. The hackathon will take place on Monday, June 5, 2017, from 9:30 AM to 7:30 PM at the OSIsoft office in São Paulo. This programming event does not overlap with any other main activity of the regional conference, since the official opening of the conference is the following day (Tuesday).

 

 

Q: I have already registered for the LATAM Regional Conference but not for the Programming Hackathon. What should I do?

A: Send an e-mail to mloeff@osisoft.com

As of the time of writing this, there are only a few ways I can think of to make sure that a connector is running and healthy.

 

  1. Checking the tags that it is writing to and making sure they are updating.
  2. Checking the administration page to ensure that all lights are green.

 

The purpose of this post is to show you how it is possible to monitor the connectors at your organization using Powershell and AF SDK.

 

When you first saw this post, you might have been thinking that the only way to check whether your connector is working is by checking the administration page, but that is only partially true! The items on the administration page can be retrieved by making a REST-type call to the connector webpage. So, all of the information that you can view on the connector administration site can be extracted and written to PI tags, offering an easy solution for monitoring the health of your connectors.

 

I have included a script, which is attached to this webpage. If you'd like to skip straight to the part where I talk about the Powershell script and what it can do, please use the link in the table of contents to skip to that section. The attachments can be found at the bottom of this post.

 

Table of Contents

 

 

What types of information can we pull from the connector webpage?

First, let's cover where this information is stored. Pull up the administration page for your favorite connector. I'll be using the PI Connector for OPC UA. In the screenshot you see below, each of these fields can be queried by making a REST call to the connector. So, let's work on finding how to query for the status of the Connector, Data sources, and Servers configured to receive data from the connector.

 

I am using Chrome for this, but you can also perform similar actions in other web browsers. When on the Connector Administration page, hit F12. You should see the Chrome Developer Tools window pop up. From there, let's browse to the Network tab. The Network tab will allow us to see the requests being made as well as the responses being provided by the connector. Let's take a look at the Data Source Status (this shows up as Datasource%20Status). Expanding the object allows us to see the properties underneath it. For my data source, we can see that it is named OPCServer1, it has a status of Connected, and a message stating that I have No Data Filter set.

 

We can also see the URL that was used to retrieve this information from the Headers section.

 

Information on the Connector State, PI Data Archive, and AF connections can be found in a similar manner under the Network tab by looking at ConnectorState, PI%20Data%20Archive%20Connections, and PI%20AF%20Connections respectively.

 

How can we obtain this information using Powershell?

Now that we know what types of information we can get, let's go through how Powershell can query for and store this information.

 

Because we want this script to run periodically, we will need to store the credentials on the machine. But we don't want to just store credentials in plain text on the machine running the script, so we will encrypt them. Let's set the username variable first:

#username for logging into the PI Connector Administration Page.

$user = "domain\user"

 

Next, let's store the password and encrypt it. We will then set the password to the encrypted file that contains the password:

#Convert password for user account to a secure string in a text file. It can only be decrypted by the user account it was encrypted with.

"password" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "file location for the stored password file"

$pass = "file location for the stored password file"

 

Finally, we will decrypt the credentials when running the script using the command below. These credentials can only be decrypted by the user that encrypted them, so make sure to encrypt the credentials with the same user that will be running this script.

#Credentials that will be used to login to the PI Connector Administration Page.

$cred = New-Object -TypeName System.Management.Automation.PSCredential `

-ArgumentList $user, (Get-Content $pass | ConvertTo-SecureString)

 

The connectors also use self-signed certificates, so you may get an error when attempting the GET request. To get around this, we will include the following code to ignore the certificate errors:

#Ignore invalid certificate errors when connecting to the PI Connector Administration Page. This is because the connector uses a self-signed certificate, but Powershell wants to use a validated certificate.

Add-Type @"

    using System;

    using System.Net;

    using System.Net.Security;

    using System.Security.Cryptography.X509Certificates;

    public class ServerCertificateValidationCallback

    {

        public static void Ignore()

        {

            ServicePointManager.ServerCertificateValidationCallback +=

                delegate

                (

                    Object obj,

                    X509Certificate certificate,

                    X509Chain chain,

                    SslPolicyErrors errors

                )

                {

                    return true;

                };

        }

    }

"@

 

 

[ServerCertificateValidationCallback]::Ignore();

 

Now that all of that is out of the way, let's get to the part where we pull the information from the webpage. For this, we will be using the Invoke-WebRequest function. If we wanted to query for the data source status shown above, our function would look like this:

$DataSourceStatusResponse = Invoke-WebRequest -Method GET  "https://nlewis-iis:5460/admin/api/instrumentation/Datasource%20Status" -Credential $cred | ConvertFrom-Json

We are using the GET method, which we can see as the Request Method in the Headers. The login to the connector webpage uses basic authentication, so we pass it the credentials that we have stored in the variable $cred. Finally, we pipe the result to the ConvertFrom-Json function in order to store the information retrieved from Invoke-WebRequest in a Powershell object under the variable $DataSourceStatusResponse.

 

For our data source status, we can then take a look at the variable to see what information we now have. We can see our data source OPCServer1 under the variable. If we had additional data sources, they would show up here.

 

If we browse further into the variable, we can then find the Message and Status fields we were looking for:

 

If we wanted to store the status in a variable ($OPCServer1_Status), we could then achieve this as follows:

$OPCServer1_Status = $DataSourceStatusResponse.OPCServer1.Object.Status

 

Now we just need to retrieve the other information we want in a similar fashion and we are ready to write the values to PI tags!

 

 

Writing the values to PI

For this, we will be using AF SDK in Powershell. There are also native Powershell functions for the PI System that come with PI System Management Tools and could be used instead of AF SDK.

 

There are a few steps in order to do this.

 

1. We need to load the AF SDK assemblies.

# Load AFSDK

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDKCommon") | Out-Null

[System.Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null

 

2. We need an object that can store the point attributes for the tags we will be creating. The script I created will automatically create the PI tags if it cannot find them.

#Create an object with point attributes for the points you are creating

$myattributes = New-Object 'System.Collections.Generic.Dictionary[[String], [Object]]'

 

3. Store the tag attributes in the tag attribute object. For myself, I am making these string tags with a point source of CM.

<#Add the attributes to your point. I am making the points that will be created string tags, which corresponds to a value of 105.

Different point types can be found here: https://techsupport.osisoft.com/Documentation/PI-AF-SDK/html/T_OSIsoft_AF_PI_PIPointType.htm

#>

$myattributes.Add("pointtype", 105)

$myattributes.Add("pointsource","CM")

 

4. Next, we need to initialize the PI Data Archive, AF Server, and buffering options, and instantiate the new value object we will be using. We are using the default PI Data Archive and AF Server for this.

# Create AF Object

$PISystems=New-object 'OSIsoft.AF.PISystems'

$PISystem=$PISystems.DefaultPISystem

$myAFDB=$PISystem.Databases.DefaultDatabase

 

# Create PI Object

$PIDataArchives=New-object 'OSIsoft.AF.PI.PIServers'

$PIDataArchive=$PIDataArchives.DefaultPIServer

 

# Create AF UpdateOption

$AFUpdateOption = New-Object 'OSISoft.AF.Data.AFUpdateOption'

 

#Set AF Update Option to Replace

$AFUpdateOption.value__ = "0"

 

# Create AF BufferOption

$AFBufferOption = New-Object 'OSISoft.AF.Data.AFBufferOption'

 

#Set AF Buffer Option to Buffer if Possible

$AFBufferOption.value__ = "1"

 

# Instantiate a new 'AFValue' object to persist...

$newValueX = New-Object 'OSIsoft.AF.Asset.AFValue'

 

# Apply timestamp

$newValueX.Timestamp = New-object 'OSIsoft.AF.Time.AFTime'(Get-Date)

 

With that all out of the way, we just need to create our PI tag, assign it a value and timestamp, and send it on its way.

 

5. Assign a name to the PI tag.

# Assign Tag Name to the PI Point. Here I denote that this is for the data source OPCServer1 and I am retrieving the status.

$tagNameX = "OPCUAConnector.DataSource.OPCServer1.Status"

 

6. Find the tag, and create it if it does not exist. This finds the tag based on the PI Data Archive we specified earlier and the tag name.

#initiate the PI Point

$piPointX = $null

 

#Find the PI Point, and create it if it does not exist

 

 

if([OSIsoft.AF.PI.PIPoint]::TryFindPIPoint($PIDataArchive,$tagNameX,[ref]$piPointX) -eq $false)
{
     $piPointX = $PIDataArchive.CreatePIPoint($tagNameX, $myattributes)
}

#Set the PI tag for $newValueX to $piPointX

$newValueX.pipoint = $piPointX

 

7. Lastly, we can apply the value to $newValueX and write it to PI! We set the value equal to the status we retrieved earlier from the data source status response. We then use $newValueX.PIPoint.UpdateValue in order to write the new value to the tag.

    #Use a subexpression so the full property path is expanded inside the string
    $newValueX.Value = "$($DataSourceStatusResponse.OPCServer1.Object.Status)"

    $newValueX.PIPoint.UpdateValue($newValueX.Value,$AFUpdateOption)

 

And that's it! That is all the code required in order to pull the information from the connector page and write it to a PI tag.

 

 

The Connector Monitoring Script

If you were wondering while reading this whether or not someone built out a script that already pulls in some of the relevant information, then you're in the right place. While writing this I also developed a script that will write the data from the connector pages you supply it with to PI tags. The output looks something like this:

 

How the script works:

  1. You supply it credentials based off of the method we discussed earlier, where we encrypt the password.
  2. You provide it a list of connectors. In this list there is the name of the connector (which will be used in the tag naming convention), as well as the URL to the admin page. Make sure to exclude the /ui at the end of the URL. These are then stored in an object called $Connectors.

#Create an object called Connectors. This will hold the different connectors you want to gather information from.

$Connectors = @{}

#When adding a connector, give it a name, followed by the link to the admin page like below.

$Connectors.add("OPCUA_nlewis-iis","https://nlewis-iis:5460/admin")

$Connectors.add("PING_nlewis-iis","https://nlewis-iis:8001/admin")

 

  3. A connector object is then generated and the items in the list are added to this object.

  4. We query the web pages and then write the values to PI for each of the objects in the connector object (a minimal sketch of this loop is shown after this list).
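
A minimal sketch of that query loop, assuming the Data Source Status endpoint shown earlier (each admin URL stored in $Connectors plus /api/instrumentation/Datasource%20Status); variable names are illustrative and the complete logic is in the attached script:

#Illustrative loop: query the Data Source Status page of each connector in $Connectors

foreach ($connector in $Connectors.GetEnumerator())
{
    #Build the instrumentation URL from the admin page URL
    $statusUrl = $connector.Value + "/api/instrumentation/Datasource%20Status"

    #Same GET request and JSON conversion shown earlier, using the stored credentials
    $statusResponse = Invoke-WebRequest -Method GET $statusUrl -Credential $cred | ConvertFrom-Json

    #The retrieved values are then written to tags named <Connector Name>.DataSource.<Server>.Status
}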

 

What the script does

  • Tags are based on the naming convention of: <Provided Connector Name>.<Type>.<Server>.Status, where the type can be DataSource, AFServer, or PIServer
    • For the status of the AF Server (AF Server is named nlewis-af1) for a connector I named OPCUA_nlewis-iis, the tag would be named OPCUA_nlewis-iis.AFServer.nlewis-af1.Status.
  • If the script cannot connect to the admin page, it writes an error message to the connector state tag.
  • If the service is running but the connector is stopped via the webpage, the script writes to all tags for that connector that the connector is stopped.

 

There are two parts to the script. The first part generates the encrypted password file. The second part is the one you run to pull the information, create the tags if they do not exist, and write to them.
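
As a sketch, the encrypted password file can be generated with standard PowerShell cmdlets so that the Get-Content $pass | ConvertTo-SecureString call shown at the beginning can read it back later; the file path here is just an example:

#Prompt for the connector admin password and save it to an encrypted file
#Note: ConvertFrom-SecureString encrypts with DPAPI, so the file can only be read back by the same user on the same machine

Read-Host "Enter the connector admin password" -AsSecureString |
    ConvertFrom-SecureString |
    Out-File "C:\Scripts\connector_password.txt"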

 

Please edit the code to include the connectors used in your environment. The scripts can be found attached to this post.

 


Introduction

 

In the first 3 blog posts (part 1, part 2 and part 3) about developing the Google Maps PI Coresight custom symbol, I have shown you how to create a custom symbol showing a Google Map with markers that represent the updated locations of many assets.

 

In this blog post, I will show how to display historical data on the map by using event frames.

 

You can download the custom symbol in this GitHub repository.

 

Understanding the problem

 

In the past, I recorded my geolocation coordinates several times while walking from my old home to our old OSIsoft office here in São Paulo, using an Android app called RunKeeper. After stopping each recording, the app allowed me to download the route as a GPX file. With a simple console application, I was able to send all the latitude and longitude values to a PI System.

 

On my PI AF Server, I have created a new database called GMapsPart4 with only one element, called Marcos. This element is derived from an element template called UserTemplate with two attribute templates, Latitude and Longitude.

 

 

I've also created an event frame for each activity downloaded in GPX format to make it easier to access this information programmatically. There are 3 event frames in this database, and the primary referenced element in all of them is the Marcos element. Each event frame attribute maps to an attribute of the Marcos element, as shown below. An event frame template was created since all EFs follow the same pattern.

 

 

Below is a screenshot taken with the final version of the custom symbol developed in this blog post (part 4):

 

 

 

As you can see, the event frames mapped to the Marcos element are listed below the map. Once the user clicks on any event frame, the symbol retrieves the historical geolocation data within the time range defined by that event frame. Using the Google Maps API, we add a path and a marker to the map. The marker moves when the user changes the value of the slider above the map.

 

This will only happen if the user selects Historical Mode in the configuration options, as shown below:

 

 

Migrating from PI Coresight 2016 to PI Coresight 2016 R2

 

If you compare the source code of the version described in part 3 with this new version, you will see some architectural changes, because part 3 was developed for PI Coresight 2016 and part 4 was developed for PI Coresight 2016 R2. Please refer to the PI-Coresight-2016-R2-(CTP)-Extensibility-Documentation for more information about how to upgrade existing symbols from PI Coresight 2016 to 2016 R2.

 

One important change I want to point out is the AngularJS directive for the color picker. In 2016, this is how you would define it in your HTML:

 

<format-color-picker id="markerColor" property="MarkerColor" config="config"></format-color-picker>

 

In 2016 R2, you would use:

 

<cs-color-picker id="markerColor" ng-model="config.MarkerColor"></cs-color-picker>

 

 

 

Adding the PI Web API Client library for AngularJS to the PI Coresight infrastructure

 

PI Coresight does not allow you to retrieve event frames or recorded values through its native extensibility model. Although this is not a problem for the majority of symbols, there are some use cases which require the custom symbol to interact with PI Web API in order to retrieve additional information. Please refer to my previous blog post to make the piwebapi Angular service available on your PI Coresight infrastructure; otherwise, this symbol won't work on your PI Coresight 2016 R2.

 

 

RangeSlider

 

The slider above the map is an object created with the RangeSlider.js JavaScript library. Please download the library from its official site and paste it into the \symbols\ext\libraries folder. I am using version 2.3.

 

 

Updating the sym-gmaps-template.html

 

Following the examples provided on the official site of the RangeSlider.js library, I've added a horizontal slider at the top of the symbol. Below the map, I have added a div node to list the retrieved event frames.

 

 

<div ng-if="config.HistoricalMode" style="width:100%;height:50px;">
    <input type="range"
           min="0"
           max="{{rangeMax}}"
           step="1"
           value="0"
           data-orientation="horizontal">
</div>
<div id="container" style="width:100%;height:calc(100% - 150px);">


</div>


<div ng-if="config.HistoricalMode" class="activities-pane">
    <div ng-repeat="activity in activitiesList" ng-class="activity.WebId == selectedActivity.WebId ? 'activity-non-selected' : 'activity-selected'" ng-click="updateActivity(activity)">
        <p>
            {{activity.Name}}
        </p>
    </div>
</div>


<style>
    .activities-pane {
        background-color: white;
        padding: 1px;
        height: 100px;
        overflow-y: auto;
    }
        .activities-pane div {
            height: 31px;
            border: azure;
            margin: 4px;
            padding-top: 8px;
        }
        .activities-pane p {
            color: white;
        }
    .activity-selected {
        background: brown;
    }
    .activity-non-selected {
        background: blue;
    }
</style>

 

Finally, to make things easier, I've added the new styles within the HTML itself, although the best practice would be to have a separate CSS file just for this purpose.

 

Updating the sym-gmaps.js

 

I won't describe all the steps required to update the sym-gmaps.js from part 3 to part 4. I will focus on the main logic so you can understand what is going on under the hood.

 

The first thing is to inject the piwebapi service and add the HistoricalMode property to the main definition:

 

       inject: ['piwebapi'],
        getDefaultConfig: function () {
            return {
                DataShape: 'Table',
                Height: 600,
                Width: 400,
                MarkerColor: 'rgb(255,0,0)',
                LatName: 'Latitude',
                LngName: 'Longitude',
                HistoricalMode: false,
                OpenInfoBox: true,

 

 

If HistoricalMode is false, the symbol works very much like the symbol developed in part 3. The new features can be seen when HistoricalMode is true.

 

In the init function, call the piwebapi functions to initialize the service, as described in the other blog post:

 

 

    symbolVis.prototype.init = function init(scope, elem, piwebapi) {


        piwebapi.SetServiceBaseUrl("https://marc-web-sql.marc.net/piwebapi");
        piwebapi.SetKerberosAuth();
        piwebapi.CreateObjects();

 

We need to use the extensibility model to extract the AF database path and the root element name.

 

 

            if ((scope.elementName == undefined) && (scope.lastDataWithPath.Rows[0].Path.substring(0, 3) == "af:")) {
                var elementPath = (data.Rows[0].Path.split("|")[0]).substring(3);
                var stringData = elementPath.substring(2).split("\\");
                scope.databasePath = "\\\\" + stringData[0] + "\\" + stringData[1];
                scope.elementName = stringData[2];
            }

 

 

The AF database path is used to get the AF database WebId, which, together with the element name, is used as input for the GetEventFrames method from the AssetDatabase controller.

 

 if (scope.activitiesList == undefined) {
                        piwebapi.assetDatabase.assetDatabaseGetByPath(scope.databasePath, null, null).then(function (response) {
                            var webId = response.data.WebId;
                            piwebapi.assetDatabase.assetDatabaseGetEventFrames(webId, null, null, "*", null, 100, null, scope.elementName, "UserTemplate", true, null, null, null, null, null, null, "*-900d").then(function (response) {
                                scope.activitiesList = response.data.Items;
                                scope.selectedActivity = response.data.Items[0];
                                scope.updateActivity(scope.selectedActivity);
                            });
                        });
                    }

 

The scope.updateActivity method updates the UI for a given activity (which is an Event Frame) following the steps below:

 

  • Clean markers and paths from the map.
  • Get the attributes WebId from the user element.
  • Get interpolated values in bulk by using the GetInterpolatedAdHoc method from the StreamSet controller.
  • Polish the values to be consumed by the Google Maps API.
  • Create the path with the retrieved PI Values and add it to the Google Map.
  • Update the color of the marker if needed.
  • Instantiate the rangeSlider object using the JavaScript library methods.

 

   scope.updateActivity = function (activity) {
            if ((scope.marker != null) && (scope.marker != undefined)) {
                scope.marker.setMap(null);


            }
            if ((scope.routePath != null) && (scope.routePath != undefined)) {
                scope.routePath.setMap(null);
            }


            scope.selectedActivity = activity;
            scope.loadingGeolocation = true;
            var elementWebId = scope.selectedActivity.RefElementWebIds[0];
            piwebapi.element.elementGetAttributes(elementWebId).then(function (response) {
                scope.attributes = response.data.Items;
                var webIds = new Array(scope.attributes.length);
                for (var i = 0; i < scope.attributes.length; i++) {
                    webIds[i] = scope.attributes[i].WebId;
                }


                piwebapi.streamSet.streamSetGetInterpolatedAdHoc(webIds, activity.EndTime, null, true, "30s", null, activity.StartTime).then(function (response) {
                    for (var i = 0; i < response.data.Items.length; i++) {
                        var currentItem = response.data.Items[i];
                        if (currentItem.Name == scope.config.LatName) {
                            scope.latitudeTrack = currentItem.Items;
                        }
                        if (currentItem.Name == scope.config.LngName) {
                            scope.longitudeTrack = currentItem.Items;
                        }
                    }


                    var bounds = new google.maps.LatLngBounds();
                    var routeCoordinates = [];
                    for (var i = 0; i < scope.latitudeTrack.length; i++) {
                        if ((scope.latitudeTrack[i].Good == true) && (scope.longitudeTrack[i].Good == true)) {
                            var pos = { lat: scope.latitudeTrack[i].Value, lng: scope.longitudeTrack[i].Value };
                            routeCoordinates.push(pos);
                            var point = new google.maps.LatLng(pos.lat, pos.lng);
                            bounds.extend(point);
                        }
                    }
                    scope.rangeMax = routeCoordinates.length;


                    scope.routePath = new google.maps.Polyline({
                        path: routeCoordinates,
                        geodesic: true,
                        strokeColor: '#FF0000',
                        strokeOpacity: 1.0,
                        strokeWeight: 2
                    });


                    scope.routePath.setMap(scope.map);
                    scope.map.fitBounds(bounds);


                    scope.marker = new google.maps.Marker({
                        position: routeCoordinates[0],
                        map: scope.map
                    });


                    updateMarkerColor(scope.marker, scope.config);


                    $('input[type="range"]').rangeslider({
                        polyfill: false,
                        onSlide: function (position, value) {
                            x = Math.round(position);
                            var geoPosition = routeCoordinates[x];
                            scope.marker.setPosition(geoPosition);
                        }
                    });
                });
            });
        };

 

 

 

Conclusions

 

PI Coresight 2016 was the first version with the extensibility model, which allows you to develop custom symbols for PI Coresight. The extensibility model keeps sending live data to be consumed by the custom symbol. Nevertheless, there are a lot of use cases and interest from the PI DevClub community in integrating custom symbols with PI Web API. This blog post shows how to achieve this goal in order to create richer and more valuable custom symbols for your enterprise and/or your customers.

Introduction

 

Since the release of PI Vision extensibility, I have seen a lot of questions on PI Developers Club about how to make HTTP calls against PI Web API within the custom symbol methods.

 

In my last blog post, I showed step by step how to generate a PI Web API client library for AngularJS using Swagger Codegen. This library is available for download in the dist folder of this GitHub repository. What I think would be very interesting and useful for our community is to use this library when developing custom PI Vision symbols. Who wouldn't want to use this feature?

 

Disclaimer

 

Before we start describing the procedure, it is good to remind you that we will change PI Vision's JavaScript source code. As a result, there are risks involved, such as:

  • If the changes are not made correctly, your PI Vision might not load anymore. In this case, you might need to repair the installation to restore the original JavaScript files.
  • After a PI Vision upgrade, all your changes will be undone and you will have to repeat the procedure.
  • I strongly recommend testing on a development environment first.
  • Although I don't expect major issues, we haven't tested this library extensively within PI Vision.

 

All in all, there are risks involved. Just consider them before following the procedure.

 

 

Making the piwebapi service available on PI Vision 2016 R2

 

Yes, I am writing this procedure only for PI Vision 2016 R2 (or PI Coresight 2016 R2). It should also work on PI Vision 2016, but I haven't tested it. If you do, please write a comment below!

 

Here are the steps:

 

1 - Open the browser and go to this GitHub repository. Download the source code package as a zip file. From this file, copy the piwebapi-kerberos.min.js (or piwebapi-kerberos.js) file to the %PIHOME64%\Coresight\Scripts\app\editor folder.

2 - Edit the Index.html file located in the %PIHOME64%\Coresight\Views\Home folder by adding a reference to piwebapi-kerberos.min.js just below "@Scripts.Render("~/bundles/libraries/jstz")". Please refer to the code below:

 


    @Scripts.Render("~/bundles/libraries/jquery")    
    @Scripts.Render("~/bundles/jqueryui")    
    @Scripts.Render("~/bundles/jqueryui/layout")
    @Scripts.Render("~/bundles/jquery-ui-patch")
    @Scripts.Render("~/bundles/libraries/hammer")
    @Scripts.Render("~/bundles/libraries/angular")
    @Scripts.Render("~/bundles/libraries/angular-gridster")
    @Scripts.Render("~/bundles/libraries/jstz")


  <script src="/Coresight/Scripts/app/editor/piwebapi-kerberos.min.js"></script>


    @Scripts.Render("~/bundles/kendo-patch")

 

3 - Edit the coresight.app.js file located in the %PIHOME64%\Coresight\Scripts\app\editor folder by adding piwebapiClientLib to the module dependency list of PI Vision. Please refer to the code below:

 

    angular.module(APPNAME, ['ngAnimate', 'Chronicle', 'osi.PiDialog', 'osi.PiToast', 'coresight.routing', 'kendo.directives', 'gridster', 'piwebapiClientLib'])
        .config([
            '$animateProvider',
            '$compileProvider',
            '$httpProvider',
            config])

 

Do not forget to save both files.

 

That is all! Easy, right?

 

 

Creating a custom symbol using the piwebapi service

 

Now that PI Vision is able to inject our piwebapi service, which is a PI Web API client library for AngularJS, let's create a custom symbol that makes an HTTP request against the main PI Web API endpoint and displays the JSON on the screen.

 

Create a new file named sym-piwebapi-test.js in the %PIHOME64%\Coresight\Scripts\app\editor\symbols\ext folder with the following content:

 

(function (CS) {

    function symbolVis() { }
    CS.deriveVisualizationFromBase(symbolVis);

    var definition = {
        typeName: 'piwebapitest',
        datasourceBehavior: CS.Extensibility.Enums.DatasourceBehaviors.Multiple,
        inject: ['piwebapi'],
        getDefaultConfig: function () {
            return {
                DataShape: 'Table',
                Height: 400,
                Width: 400,
                MarkerColor: 'rgb(255,0,0)'
            };
        },
        visObjectType: symbolVis
    };

    symbolVis.prototype.init = function init(scope, elem, piwebapi) {
        piwebapi.SetServiceBaseUrl("https://marc-web-sql.marc.net/piwebapi");
        piwebapi.SetKerberosAuth();
        piwebapi.CreateObjects();

        piwebapi.home.homeGet({}).then(function (response) {
            scope.data = response.data;
        });
    };

    CS.symbolCatalog.register(definition);
})(window.Coresight);

 

Create a new file named sym-piwebapitest-template.html in the %PIHOME64%\Coresight\Scripts\app\editor\symbols\ext folder with the following content:

 

<center>
    <br /><br />
    <p style="color: white;">{{data}}</p>
</center>

 

 

Time to test our custom symbol. Dragging any element and dropping it on the display, we can see the JSON response from the main PI Web API endpoint.

 


Conclusions

 

The custom symbol developed in this blog post is very simple. Nevertheless, its main purpose is to demonstrate that, after making the changes to add the PI Web API client library for AngularJS to PI Vision, it is possible to inject the piwebapi service. As a result, it becomes much easier to make HTTP requests against PI Web API using this client library.

 

Finally, please write comments about this project and share your thoughts!
