
Certain FilteredSummaries results in AFSDK aren't matching PISDK results

Question asked by kilgored on Jan 8, 2015
Latest reply on Jan 14, 2015 by ling

I have an application that has been under test and recently some of the tests failed miserably, as the resulting values do not match expectations. After reviewing the code and not understanding why things could be wrong I did the unthinkable - I challenged the validity of the results from the AFSDK. In doing so, I have found certain cases where FilteredSummaries calls, on either AFAttribute or PIPoint, do not produce the same results as the PISDK.


Here are some details, please show me where my blind spot is in finding the real problem, or confirm that it is truly an issue with the AFSDK RDA methods.


The problem exists in both .NET 4.0 and 4.5 projects that use the AFSDK 2012 or 2014 when compared to several PISDK versions, and I can reproduce it on my DEV system (PI 2012 - 3.4.390.16, AF 2012) and on another TEST system (PI 2012, AF 2014). In each case, regardless of SDK versions, the PISDK always returns the same result - which is always different from the AFSDK results (which also match each other from version to version).


The scenario is getting a filtered average for a single tag where the filter is a boundary check on that tag itself. In my DEV system, I am using CDT158, in the TEST system it is a contrived dataset specific to the test being run. The filter is simple, using tag expression formatting it is:


'cdt158' >= 60 and 'cdt158' < 100


Where the limits (just for illustration) of 60 and 100 are values that get substituted in, based upon the measurement criteria of the application and test. The goal of the result is to produce a time weighted average that excludes outliers. We also request the Count summary, which provides us with a gauge of how much data was bad or unreasonable during this period.
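Just to make the substitution concrete, here is a minimal sketch of the templating step (Python for brevity; the tag name and limits are the illustrative values from above, not anything special):

```python
# The filter template: {0} is the tag name, {1}/{2} are the limits
# derived from the application's measurement criteria.
core_filter = "'{0}' >= {1} and '{0}' < {2}"

tag, lo, hi = "cdt158", 60, 100
print(core_filter.format(tag, lo, hi))
# → 'cdt158' >= 60 and 'cdt158' < 100
```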


The results between the AFSDK and PISDK match perfectly as long as the data is all good and all within limits, or as long as only bad data is intermixed with data that is within the limits. However, the results from a period where outliers exist are a different story - the AFSDK results appear to be proportionally skewed towards the outliers. For example, if the limits are 1 and 2, and some data within the time range is -10000, the AFSDK result becomes a negative number - whereas the PISDK result is within the range of 1-2 (which makes sense, because all of the other data should have been filtered out of the averaging).
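To illustrate the arithmetic I expect (independent of either SDK), here is a small Python sketch of a time-weighted average over stepped recorded values, with and without a boundary filter. The sample data and limits are hypothetical; the point is that including one outlier interval drags the unfiltered result far outside the band, while excluding it from both the numerator and the time denominator keeps the result in range:

```python
def time_weighted_average(samples, start, end, keep=lambda v: True):
    """Average over [start, end], weighting each recorded value by the
    time it remains in effect (step interpolation); intervals whose
    value fails `keep` are excluded from numerator AND denominator."""
    total = 0.0
    duration = 0.0
    for (t0, v), (t1, _) in zip(samples, samples[1:] + [(end, None)]):
        t0, t1 = max(t0, start), min(t1, end)
        if t1 <= t0 or not keep(v):
            continue
        total += v * (t1 - t0)
        duration += t1 - t0
    return total / duration if duration else float("nan")

# Hypothetical (timestamp_seconds, value) samples; one gross outlier.
samples = [(0, 1.2), (100, 1.8), (200, -10000.0), (300, 1.5)]

in_limits = lambda v: 1 <= v < 2          # the "'tag' >= min and 'tag' < max" filter
unfiltered = time_weighted_average(samples, 0, 400)
filtered = time_weighted_average(samples, 0, 400, keep=in_limits)

print(unfiltered)   # dominated by the -10000 interval: a large negative number
print(filtered)     # stays within the 1..2 band
```

This is what the PISDK result looks like to me; the AFSDK result behaves as if the outlier interval were still contributing weight.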


So, I created a simple C# query test application to make it simpler to verify my theory. It isn't pretty or factored for error handling, but it is effective.


using System;
using System.Linq;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.Time;

namespace QueryTest
{
    class Program
    {
        static void Main(string[] args)
        {
            string coreFilter = "'{0}' >= {1} AND '{0}' < {2}";

            string attPath = args[0];
            string startTime = args[1];
            string endTime = args[2];
            string min = args[3];
            string max = args[4];

            var afAtt = AFAttribute.FindAttribute(attPath, null) as AFAttribute;
            var afPoint = afAtt.PIPoint;
            var piPoint = afAtt.RawPIPoint as PISDK.PIPoint;
            var ptData = piPoint.Data as PISDK.IPIData2;
            var afRange = new AFTimeRange(startTime, endTime);
            var afSpan = new AFTimeSpan(afRange.Span);  // one interval covering the whole range
            var afFilter = string.Format(coreFilter, ".", min, max);  // '.' refers to the attribute itself
            var ptFilter = string.Format(coreFilter, piPoint.Name, min, max);

            AFValues afAvg, afPct;
            PISDK.PIValues piAvg, piPct;

            // 1) AFSDK, via the attribute's data reference
            var attDict = afAtt.Data.FilteredSummaries(afRange, afSpan, afFilter,
                AFSummaryTypes.Average | AFSummaryTypes.Count,
                AFCalculationBasis.TimeWeighted, AFSampleType.ExpressionRecordedValues,
                AFTimeSpan.Zero, AFTimestampCalculation.Auto);
            afAvg = attDict[AFSummaryTypes.Average];
            afPct = attDict[AFSummaryTypes.Count];
            Console.WriteLine("{0} :: {1} @ {2}", afAtt.GetPath(), afFilter, afAvg.First().Timestamp);
            Console.WriteLine("{0:0.000} :: {1:0.000}", afAvg.First().Value, afPct.First().Value);

            // 2) AFSDK, directly against the underlying PIPoint
            var ptDict = afPoint.FilteredSummaries(afRange, afSpan, ptFilter,
                AFSummaryTypes.Average | AFSummaryTypes.Count,
                AFCalculationBasis.TimeWeighted, AFSampleType.ExpressionRecordedValues,
                AFTimeSpan.Zero, AFTimestampCalculation.Auto);
            afAvg = ptDict[AFSummaryTypes.Average];
            afPct = ptDict[AFSummaryTypes.Count];
            Console.WriteLine("{0} :: {1} @ {2}", afPoint.Name, ptFilter, afAvg.First().Timestamp);
            Console.WriteLine("{0:0.000} :: {1:0.000}", afAvg.First().Value, afPct.First().Value);

            // 3) PISDK (COM), via IPIData2
            var ptNVS = ptData.FilteredSummaries(afRange.StartTime.UtcSeconds, afRange.EndTime.UtcSeconds,
                null, ptFilter, PISDK.ArchiveSummariesTypeConstants.asAll,
                PISDK.CalculationBasisConstants.cbTimeWeighted,
                PISDK.FilterSampleTypeConstants.fstPIPointRecordedValues);
            piAvg = ptNVS["Average"].Value as PISDK.PIValues;
            piPct = ptNVS["Count"].Value as PISDK.PIValues;
            Console.WriteLine("{0} :: {1} @ {2}", piPoint.Name, ptFilter, piAvg[1].TimeStamp.LocalDate);
            Console.WriteLine("{0:0.000} :: {1:0.000}", piAvg[1].Value, piPct[1].Value);
        }
    }
}


When run from the command line with the parameters attribute path, start time, end time, minimum, and maximum, it queries the attribute and underlying point using each of the three possible methods across the two SDKs and outputs the results to the screen. For example:


QueryTest "\\myafserver\mydatabase\myelement|myattribute" "7-jan-15 11:00" "7-jan-15 11:10" 60 100


Note that the quotes around the attribute path are required due to the use of the pipe symbol, and around the timestamps because they contain spaces.


Any ideas on what I'm missing or where I'm going wrong?