11 Replies Latest reply on Oct 1, 2015 11:46 AM by Roger Palmen

    Asset Framework & Recalculations - Real World Examples


      Hi everyone,


      There are many articles and entries regarding the use of recalculations within the AF 2015 system and how users are managing or getting around the current implementation of the Analysis engine within AF.


      I am interested to see how other users have implemented their systems with regard to calculations and recalculations whilst taking into consideration the following:


      - Late-bound data

      - Out-of-order data

      - Sequenced calculations

      - Change-of-algorithm recalculations

      - Calculation data timestamped at a particular time (e.g. 08:00:00), with the recalculation timestamped at the same time


      We are trying to avoid using ACE/PE for these particular functions.


      What I'm particularly interested in hearing from users is the design considerations they made whilst keeping some of the above constraints in mind. If you have run into these issues, how did you structure your AF hierarchy to satisfy these criteria?


      A common occurrence is the calculation of data at the beginning of a shift for a specific timestamp (e.g. 08:00:00); there may be some late-bound, manually entered data an hour or so later, with the recalculation timestamped again at 08:00:00, taking into account the newly entered data.


      Has anybody had any experience with using Event Frame attributes and the 'Recapture Values' function to provide some sort of recalculation functionality? Are people saving all calculation results as PI Points, or in the AF database? Are users deleting any data before commencing a Backfill within the tool to rewrite the data - and is this a manual or automated process for you?


      If anybody has any examples for reference that would be appreciated - such as screenshots, documents or comments.




        • Re: Asset Framework & Recalculations - Real World Examples
          Rhys Kirk

          Ramon Carnovale:


          A common occurrence is the calculation of data at the beginning of a shift for a specific timestamp (e.g. 08:00:00); there may be some late-bound, manually entered data an hour or so later, with the recalculation timestamped again at 08:00:00, taking into account the newly entered data.



          This one is fairly easy now with AF 2.7.

          Your shift calculations can be offset by a couple of hours, but you can now alter the output timestamp to align with your shift time. You still need to decide on a suitable offset, but in your example of 08:00 you could run the calculation at 10:00 and use the "Advanced" output timestamp to set the calculation output to 't+8h'. In your algorithm you would use value calls such as TagVal('Attribute','t+8h') if you wanted to capture the values at the end of the shift, or totals over a shift, etc.
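          As an illustrative sketch of that offset pattern (the analysis and attribute names here are made up, not from the thread), a shift total scheduled at 10:00 but stamped back to 08:00 might be configured along these lines:

          ```
          Analysis:  ShiftTotal  (periodic, scheduled daily at 10:00)
          Total  :=  TagTot('Flow', 'y+8h', 't+8h')   -- previous shift, 08:00 to 08:00
          Output :   Total -> |Shift Total|, with Advanced output timestamp = 't+8h' (today 08:00)
          ```

          The two-hour lag between the schedule (10:00) and the output timestamp ('t+8h' = today 08:00) is what gives late manual entries time to arrive before the calculation runs.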

          There are some AF service configuration options for dealing with late arriving data, but for fixed period calculations such as a shift then the above is typically more suitable. You just know that operators will never enter manual data on time, there will always be a lag.


          Out-of-order data is a different ball game and will depend on what you are calculating. There are a number of discussions on here about recalculating and the need to remove previously calculated outputs first. There are certain scenarios where I've automated this approach, but generally the calculations I've implemented in AF are considered correct at that point in time... if there is missing data then the calculation would reflect that, usually with a digital status.

            • Re: Asset Framework & Recalculations - Real World Examples

              Thanks for the reply Rhys. I have read lots of articles on the recalculation methods other users are adopting to fulfill their needs, but have yet to see an approach that might be considered 'out of the box' in terms of the fundamental design of the AF hierarchy.


              Traditionally, where a flat time-series structure existed, calculations were a case of looking at time-series data and running it through an algorithm to produce a final result. That result is then 'recalculated' so that any changes or additions to the time-series data during the calculation period are included in the new output value. Viewed in terms of how an AF structure might be implemented, this approach is arguably dated: we can now use many different types of data to achieve the calculation result, drawing information from potentially many different sources depending on the attributes configured for an element.


              We are investigating the use of Event Frames to tie calculations to a period of time, so that calculation results are not configured as PI Points but are instead saved in the AF database. The intention is that we can select the Event Frames that need to be 'recalculated' and use the 'Recapture Values' option to pull in any information related to the time period, which then updates the calculation result for that particular Event Frame. This is a shift in thinking from a time-series context, where a shift calculation is tied to a hard '8AM to 8AM' timestamp, to one where it is associated with a 'shift' - so that when users request data they move away from 'Monday 8AM to Tuesday 8AM' towards an Event Frame such as 'Monday Night Shift'.


              Is anybody tying in calculations with Event Frames for 'shifts'? If so, what considerations were made in order to complete the calculations?



                • Re: Asset Framework & Recalculations - Real World Examples
                  Rhys Kirk

                  Using Event Frames and recapturing the values is a viable approach, but it has some limitations. Some client tools don't natively support accessing data directly from an Event Frame - they're still focused on an AF Attribute or PI Point - and it seems you're looking for as much out of the box as possible. Also, say you have multiple shift Event Frames that really show the same calculated value, segregated by their containing Event Frame: if you want to see a trend of that value across shifts, you'll first need a method to 'stitch together' the Event Frames. I would still, and do, continuously calculate outside of an Event Frame and use the Event Frame as a quick way to navigate time periods. Others will use them differently.


                  On the changing-algorithm topic, it depends on how the calculation outputs are used. It may be that you don't go back and recalculate, because the previously calculated values were used as the basis for a decision made with the information available at that point in time, and recalculating skews the basis for that decision. Alternatively, in these days of AF you could simply create a new set of calculation PI Points and recalculate with your new algorithm, without affecting your AF hierarchy and while preserving the previous calculated results. After all, clients are (or at least should be) looking at AF Attributes, for which the data source is somewhat irrelevant, so swapping out the PI Points upon an algorithm change will have minimal impact.

              • Re: Asset Framework & Recalculations - Real World Examples

                Hi Ramon,

                I had PI asset-based calculations scheduled naturally (by event). This turned out to be a problem with out-of-order events: since the PI Analysis Service writes through the PI Buffer Subsystem, I detected 'Error -11429 Out of order event discarded from buffered source point', and it turned out the data was missing from the Data Archive. So far OSIsoft support has stated that there is no solution for this. So be aware of this behaviour when thinking about late-arriving data, manual input and, of course, out-of-order events.

                When I tried to generate Event Frames with the PI asset-based calculations, I had an issue doing calculations within the Event Frame. If I remember correctly, I could not access the start and end time of the Event Frame from the event attributes.

                We are doing recalculation in a rather simple way: identify the elements that require potential recalculation (a simple rule based on a risk assessment), remove the values from the output attributes' PI Points (that's easy when you don't have to consider high-availability scenarios), and rerun the calculation for the potential timeframe. Of course this is not perfect, but it's rather simple and easy to manage.
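                As a rough illustration of that delete-then-rerun pattern, here is a minimal Python sketch over an in-memory archive. The dictionaries stand in for PI Points, and the function names are made up for illustration; none of this uses the actual PI AF SDK:

                ```python
                from datetime import datetime, timedelta

                def recalculate(archive, inputs, output, calc, start, end):
                    """Remove previously calculated output values in [start, end),
                    then rerun the calculation over the same window.
                    'archive' maps point name -> {timestamp: value}."""
                    # Step 1: remove stale output values in the window
                    # (stands in for deleting values from the output PI Point).
                    out = archive.setdefault(output, {})
                    for ts in [t for t in out if start <= t < end]:
                        del out[ts]
                    # Step 2: rerun the calculation for every input timestamp
                    # in the window (stands in for a backfill).
                    timestamps = sorted({t for p in inputs for t in archive.get(p, {})
                                         if start <= t < end})
                    for ts in timestamps:
                        values = [archive[p][ts] for p in inputs
                                  if ts in archive.get(p, {})]
                        out[ts] = calc(values)
                    return out

                # Usage: a stale total written before late manual data arrived.
                t8 = datetime(2015, 10, 1, 8)
                archive = {
                    'FlowA': {t8: 10.0},
                    'FlowB': {t8: 5.0},
                    'Total': {t8: 12.0},  # stale result
                }
                recalculate(archive, ['FlowA', 'FlowB'], 'Total', sum,
                            t8, t8 + timedelta(hours=1))
                # archive['Total'][t8] is now 15.0
                ```

                In a real deployment the two steps would be PI calls (value deletion and a backfill request) rather than dictionary operations, but the control flow is the same.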



                  • Re: Asset Framework & Recalculations - Real World Examples

                    Hello Ralf


                    "rather simple and easy to manage."

                    Curious - how many re-calcs would you run daily? And is removing the existing data and re-running a scripted or a manual process?

                    We would have a few thousand, mostly based on late-bound lab assay results or manual entries done post-shift, and would like to understand how the level of effort compares to your experience.




                      • Re: Asset Framework & Recalculations - Real World Examples


                        We have just started to migrate calculations to PI asset-based analytics. So far I have only 300 calculations running, and all of them are subject to recalculation. However, we will have about 4,000 calculations when the migration is finished.


                        Well, let me explain what 'simple' means from my perspective. Simple means that I don't do fancy monitoring of the inputs (registering for updates) to identify late-bound and/or out-of-order data, nor do I care about load balancing. I make a risk-based assumption about the occurrence of recalc trigger events and start the recalculation at a specified time for the potential timeframe. This ensures that I have the best possible data available for reporting. Once reporting is done, I don't want to do recalculation again. This approach might not be valid for others, since this solution is adapted to our business processes.

                        I use attribute categories and analysis categories to identify recalculation requirements. This makes it easy to run the process programmatically.
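                        A sketch of that category-driven selection, using plain dictionaries in place of real AF analysis objects (the category and analysis names are made up for illustration):

                        ```python
                        def analyses_needing_recalc(analyses, category='Recalculate'):
                            """Return the analyses tagged with the given category.
                            Each analysis is a dict standing in for an AF analysis object."""
                            return [a for a in analyses if category in a.get('categories', [])]

                        analyses = [
                            {'name': 'ShiftTotal', 'categories': ['Recalculate', 'Shift']},
                            {'name': 'LiveRollup', 'categories': ['Streaming']},
                        ]
                        to_rerun = analyses_needing_recalc(analyses)
                        # to_rerun contains only the 'ShiftTotal' analysis
                        ```

                        With the AF SDK the same filter would run over the analyses' category collections, so the recalculation service never needs a hard-coded list of outputs.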

                        Of course, everything is put into a self-developed Windows service that does the data removal and recalculation. Luckily, it's not much coding effort to get this working.

                        Since OSIsoft is aware of the recalculation issue, I expect some recalc functionality to be added in the future. From experience, it probably won't suit my needs perfectly, so I need a recalculation mechanism that I can easily adapt to whatever changes OSIsoft may introduce.



