Yes, this can be easily implemented in PI AF Analysis, and the output can be written to an AF attribute that can then be used in a PI Vision display.
AF Analysis : Configuring Analytics with PI AF
PI Vision : Visualizing PI System Data with PI Vision Discussion Forum
Hope this helps.
By coincidence, I was investigating ProcessBook data set solutions today for one of my users. I wanted to see whether the 2018 versions of the Data Archive, PI AF, and PI Vision offered anything new to solve the problem. I have used PI AF for calculations that can replace ProcessBook data sets for facility or enterprise displays, by updating an attribute and assigning the attribute to a PI tag so the values get archived... The question comes from users who created data sets in their ProcessBook displays but do not have privileges to create PI tags in the data archive...
I created a simple asset and some analyses to test a few filtering options: 1) a 2-hour average, 2) a 1-hour average, and 3) a 15-minute lag, as shown below.
I was able to see the values in PSE as well as trend them there. Trending 24 hours of data in PSE took about 10 seconds...
I wanted to make sure the data was visible to PI Datalink, so I created an Excel spreadsheet and pulled sampled data (24 hours @ 15-minute time step) for the four attributes. The RawData attribute returned in 1-2 seconds, but each of the filtered attributes took about 30 seconds (90 seconds total)... I am not sure why Datalink took so long, but I suspect lots of round-trip calculations...
Next I wanted to see if I could return the results via a PI SQL call, so I created an asset data transformation, but all the values returned were zero, which is incorrect...
Next I wanted to see if I could display the results in PI Vision... Once again, all the numeric values and trend values were zero, which is incorrect...
I will continue to investigate alternatives and post an update if I find something...
Revised Jan 6, 2019 @ 8:00 PM ET
Revised based on Brent's suggestion below to compare attributes using PI Point data references against the technique using analytics: the 24 hours of filtered values now return to Datalink in a few seconds... The PI Vision trends also displayed properly.
This information is extremely helpful as we transition from ProcessBook to the new toolset. I learn best by doing, so your step-by-step testing of the different components is perfect. I agree the security is more complicated in the new tools. Please keep me updated.
Is your sinusoid tag still using the standard settings? I'm surprised by the slow retrieval from the Analysis Data Reference since sinusoid is typically not a very dense data stream. I'm curious how the performance of the PI Point DR is in comparison. Have you tried setting up the same attributes as PI Pt DRs? The equivalents would be:
2h Avg: By time: Time Range Override, Relative time: -2h, By Time Range: Average
1h Avg: By time: Time Range Override, Relative time: -1h, By Time Range: Average
15m Lag: By time: Automatic, Relative time: -15m, By Time Range: End Time
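For anyone unsure what a "Time Range Override: -2h, By Time Range: Average" retrieval actually computes, here is a minimal Python sketch of a trailing time-weighted (stepped) average over archived events. This is purely illustrative: the real calculation is performed server-side by the Data Archive, and the function name `trailing_average` is my own invention, not part of any PI API.

```python
from datetime import datetime, timedelta

def trailing_average(events, query_time, window):
    """Time-weighted (stepped) average over [query_time - window, query_time].

    events: (datetime, float) pairs sorted by time, with at least one event
    at or before the window start. Illustrative only -- a PI Point DR with a
    Time Range Override asks the Data Archive to do this via a summary call.
    """
    start = query_time - window
    current = None          # value in effect at the left edge of each segment
    total = 0.0
    prev_t = start
    for t, v in events:
        if t <= start:
            current = v     # remember the last value at or before the window start
        elif t <= query_time:
            total += current * (t - prev_t).total_seconds()
            prev_t, current = t, v
        else:
            break
    # Close out the final segment up to the query time.
    total += current * (query_time - prev_t).total_seconds()
    return total / window.total_seconds()
```

For example, a tag sitting at 0 for the first hour of a 2-hour window and at 10 for the second hour averages to 5.0.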
You will see some differences in trending because of the way the DRs sample data for the trend, but interpolated values in DataLink and other clients should be the same.
I'm not sure why you aren't seeing values in your Vision trends; the Analysis DR should work fine there. Have you tried contacting Tech Support about this issue?
Brent - I like your solution better... Sometimes we forget the easy solutions and go with the tools we used most recently.
The Datalink and PI Vision response times were much faster and more in line with what I was expecting.
Brent - I have often wondered why I should use the PI Point DR over an analysis for doing these types of calculations. Why is one approach better than the other from a performance standpoint? How is the PI Point DR evaluated (like an analysis on a new snapshot value?). I am not sure I have seen a primer or tip on why you should use the PI Point DR for some simple calculations versus an analysis. Can you expound? I, like Rick, have tended to default to writing analyses for these types of calculations, but the DR is much simpler and if it offers performance benefits, I would like to understand why.
Hi Jim and Rick.
Hopefully I can clear up some of the confusion. One of the main differences between PI Point Data References and Analysis Data References is when and where the calculation is handled. As you can imagine, these two factors affect the performance of data retrieval.
Let's start with the PI Point Data Reference, since it's the simpler of the two. In this approach, the calculation runs on the schedule configured for the analysis and is performed by the Analysis Service on the server machine. As described, event-triggered analyses calculate with every new snapshot value for the PI Tags involved in the calculation, while periodic analyses calculate after a given time interval. Once the calculation runs, the resultant value is written to a PI Tag. This means the calculation is evaluated ONLY at that time, with the exception of backfilling or recalculating through the PSE Analysis Management tools. From the client application's perspective, the data is retrieved straight from the PI Tag itself. In other words, the client application reaches out to the PI AF Server for the configuration of the attribute and determines which PI Tag is needed for data retrieval. At no point does the client application notice that the attribute has an analysis attached to it.
As an extension of the above, summary methods for PI Point Data Reference retrievals are handled on the PI Data Archive side. The client application constructs the appropriate AF SDK summary value call to the PI Data Archive, and the Data Archive then performs the summary calculation (e.g., Average). As you can imagine, you generally do not want the PI Data Archive to handle those loads, as there is usually a lot of traffic to and from the PI Data Archive already. In general, we'd recommend using these retrieval methods only if the time range is short or the data is not dense in the time range you are interested in.
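To make the server-side vs. client-side distinction concrete, here is a toy Python sketch (no real AF SDK calls; both function names are invented for illustration) contrasting the wire cost of a summary computed where the data lives against pulling every raw event to the client:

```python
def server_side_average(archive, start, end):
    """Pretend Data Archive: computes the summary where the data lives.

    Returns (result, values_sent_over_the_wire) -- one number crosses the wire.
    """
    vals = [v for t, v in archive if start <= t <= end]
    return sum(vals) / len(vals), 1

def client_side_average(archive, start, end):
    """Client pulls every raw event, then averages locally.

    Wire cost and client memory grow with the density of the data.
    """
    vals = [v for t, v in archive if start <= t <= end]
    return sum(vals) / len(vals), len(vals)
```

Both return the same average, but for a 1-second tag over 4 hours the client-side path moves 14,400 values across the network instead of one, which is why data density drives the recommendation.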
This brings us to the Analysis Data Reference (AnalysisDR). Because AnalysisDR does not write data to a PI Tag, historical data is not stored at all. This means that when we retrieve data from a client application, the analysis must evaluate the calculation at every data request. For example, the current value function in Datalink must calculate the analysis using the current time context. Because of this, there are several implications:
1) AnalysisDRs cannot be periodic. All AnalysisDRs are event-triggered, which means that if a fast-acting PI Tag is involved in your calculation, the AnalysisDR will calculate many times. This will lead to performance issues if the data retrieval time range is large.
2) AnalysisDRs are evaluated by the client application and NOT by the Analysis Service, unlike PI Point Data References. In other words, the client application, such as Datalink or Vision, is in charge of the actual calculation for the analysis. Generally speaking, client applications run on machines with fewer resources than the actual AF Server machines, which use server-grade hardware. You see the most issues in Excel, where the 32-bit version is most common: 32-bit applications have a maximum RAM allotment of 2 GB. When you retrieve data for large time ranges, the results of the retrieval are held in RAM until all calculations are completed. Because the calculation is also done in RAM, you have less and less space available for calculations over a large time range.
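As a rough illustration of how points 1 and 2 combine, assuming (hypothetically) one client-side evaluation per triggering snapshot, the evaluation count grows linearly with both the retrieval range and the tag's update rate:

```python
def analysisdr_evaluations(range_seconds, snapshot_period_seconds):
    """Rough count of client-side evaluations an event-triggered AnalysisDR
    performs when a client requests values over a time range.

    Illustrative assumption: one evaluation per triggering snapshot.
    """
    return range_seconds // snapshot_period_seconds

# A 1-second tag trended over 24 hours: 86,400 evaluations on the client,
# versus zero on-demand evaluations if an analysis had written to a PI Tag.
```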
Hopefully, this clears it up a little. Please let us know if you have additional questions with regards to this.
I am more curious to understand the performance difference between the configuration below and a tagavg('Level Indicator','*-15m','*') analysis stored as a PI Tag. What is the performance difference, and why should I use one over the other? I also don't understand how often the configuration below actually calculates, whereas if I write an analysis and schedule it, I know what load is on the system and how often the tag/attribute updates.
I haven't used this configuration very much, but I have seen others use it and wondered why I don't use it more; I need to understand its methodology better:
Sorry, I realized I misinterpreted your question. I edited my response to include my remarks on it, but I'll copy and paste the difference here.
Summary methods for PI Point Data Reference retrievals are handled on the PI Data Archive side. The client application constructs the appropriate AF SDK summary value call to the PI Data Archive, and the Data Archive then performs the summary calculation (e.g., Average). As you can imagine, you generally do not want the PI Data Archive to handle those loads, as there is usually a lot of traffic to and from the PI Data Archive already. In general, we'd recommend using PI Point Data Reference summary methods if the time range is short or the data is not dense in the time range you are interested in.
Jesse - thanks, so to clarify:
1. A 15-minute average for a level tag that updates once a minute is probably fine to handle with a PI Point DR as configured, since this is not a data-dense calculation.
2. On the other hand, if I need the 4-hour average of the level and the tag has a 1-second update, it might be best to schedule a PI Analysis once every four hours to get the four-hour average of the level and store that as a PI Point? That way, we archive one value every four hours instead of having the client side make the AF SDK call and do the calculation on the fly whenever I need it. Is this correct?
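A quick back-of-the-envelope check (my own numbers, assuming perfectly regular updates) shows why the two scenarios above land on opposite sides of the recommendation:

```python
def events_in_window(window_seconds, update_period_seconds):
    """How many raw snapshot values fall inside one averaging window."""
    return window_seconds // update_period_seconds

scenario_1 = events_in_window(15 * 60, 60)   # 15-min average of a 1-min tag
scenario_2 = events_in_window(4 * 3600, 1)   # 4-hour average of a 1-sec tag
# scenario_1 -> 15 values: cheap to summarize on demand with a PI Point DR.
# scenario_2 -> 14,400 values: better to schedule an analysis and archive
#               one result every four hours instead.
```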
That is an accurate summary of our recommendations and description of the behavior.
William - sorry to hijack your thread, but hopefully this was useful for the problem you are trying to solve as well. I am a fan of moving datasets into AF Analytics, so that the calculations are stored and we know where they come from and how they got there. Too often, we see people use the same calculation in PEs, datasets, and an Excel spreadsheet somewhere. If someone changes only ONE of them, you can have problems, since they are not linked.