7 Replies Latest reply on Dec 24, 2013 6:51 PM by osvaldo

    PI Calculation Best Practice Continued : Why I love AF  for doing calculations (and am looking forward to ABAcus)


      This short post was inspired by a thread started by David M. Fairchild (hence the similar title).




      My objective is to give something back to the vCampus community in terms of my own personal experiences and opinions (disclaimer: so I may state something that is not entirely right) with regard to PI calculations, and to spark discussion on taking the organizational aspects of managing and supporting your calculations into the decision process of choosing one solution over another.




      Business Case


      For a recently started-up oil field in the Caspian Sea area, my customer (a consortium of oil majors) was required by the government to supply volumetric flare emissions reports with an information resolution down to the valve and compositional level. The challenge in this case was that some valves (about 800) did not have a (calibrated) flow meter, so volumes had to be estimated based on other real-time inputs like:


      Upstream/downstream pressure difference, valve percentage opening, temperature, and valve specifications


      To complicate matters, due to the startup phase of the operation, estimates and event ‘sensitivity’ had to be ‘tuned’ and recalculated retrospectively multiple times.


      To make a long story short: in the end I built my own ‘event frames’ generator and a (sort of) asset-based analytics engine.


      The overall process was as described below:

      1. Detect valve open/close ‘events’
      2. Estimate/calculate flow rates; based on the flow rate, a custom integral calculation was done to get the volume for the duration of the event.
      3. Report events
      4. User selects report date range and validates flare events and classifies events according to permit
      5. Report to government is generated
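      The detection and integration steps above (1 and 2) can be sketched roughly as follows. This is a minimal, hypothetical Python illustration of the general idea only; the sample format, the opening threshold, and the trapezoidal integration are my assumptions for the sketch, not the actual project logic:

```python
def detect_open_events(samples, threshold=1.0):
    """Detect intervals where valve opening exceeds a threshold.

    samples: time-ordered list of (timestamp_seconds, percent_open) tuples.
    Returns a list of (start, end) event intervals.
    """
    events, start = [], None
    for t, pct in samples:
        if pct > threshold and start is None:
            start = t                      # valve just opened: event begins
        elif pct <= threshold and start is not None:
            events.append((start, t))      # valve closed: event ends
            start = None
    if start is not None:                  # still open at the end of the data
        events.append((start, samples[-1][0]))
    return events


def event_volume(flow_samples, start, end):
    """Integrate flow rate over one event with the trapezoidal rule.

    flow_samples: time-ordered list of (timestamp_seconds, flow_rate) tuples.
    Returns the estimated volume (flow-rate units times seconds).
    """
    pts = [(t, f) for t, f in flow_samples if start <= t <= end]
    vol = 0.0
    for (t0, f0), (t1, f1) in zip(pts, pts[1:]):
        vol += 0.5 * (f0 + f1) * (t1 - t0)
    return vol
```

      The real solution of course read its inputs from PI and wrote results back, but the shape of steps 1 and 2 is essentially this.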

      Technical and organizational Landscape and challenges

      1. Separate DEV/TEST/ACC/PROD pi collectives
      2. Separate dedicated AF instances
      3. Large AF model (the largest our company has so far)
      4. NO AF 2 MDB sync in place due to technical limitations
      5. Separate support groups and owners for each environment due to the ‘enterprise’ environment, and (for information security reasons) a strict separation-of-duties model in place for access to each environment.
      6. Flow estimation logic was a complex formula with in principle the same ‘pattern’ but different ‘coefficients’ per valve
      7. Totalizers over the flow were required to get the volumes.
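      To illustrate point 6: the estimation logic followed one shared ‘pattern’ with different ‘coefficients’ per valve. A hedged Python sketch of that shape (the formula, coefficient names, and numbers here are purely illustrative, not the real valve model):

```python
import math

def estimate_flow(dp, pct_open, temp_k, cv, k_temp=1.0):
    """Hypothetical flow estimate: one shared formula, per-valve coefficients.

    dp       : upstream/downstream pressure difference
    pct_open : valve opening in percent (0-100)
    temp_k   : temperature in kelvin
    cv       : valve flow coefficient from the valve specifications
    k_temp   : optional temperature correction factor
    The formula itself is illustrative only.
    """
    if dp <= 0 or pct_open <= 0:
        return 0.0                         # closed valve or no driving pressure
    opening = pct_open / 100.0
    return cv * opening * math.sqrt(dp * k_temp / temp_k)

# Per-valve 'coefficients' table: the pattern is shared, only the numbers differ.
VALVE_COEFFS = {
    "FV-001": {"cv": 12.5, "k_temp": 1.02},
    "FV-002": {"cv": 8.0,  "k_temp": 0.98},
}

def flow_for_valve(tag, dp, pct_open, temp_k):
    return estimate_flow(dp, pct_open, temp_k, **VALVE_COEFFS[tag])
```

      In AF this coefficient table naturally becomes configuration attributes on an element template, which is exactly what made option 3 attractive later on.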



      (Personal) Considerations with each solution option


      During implementation of the solution I had to weigh the pros and cons of each of the solution directions; the following lists log this.


      Option 1: Performance Equations


      My first go at the solution was performance equations, since they seemed to provide the best maintenance model. This proved to be quite difficult with 800 PEs. The formula contained some if/else clauses and lots of valve-specific inputs. In the end I was left with a PE that was very, very hard to troubleshoot for mistakes. To support it properly (and to save myself a load of work) I ended up building an Excel sheet with macros to generate all the PEs. That was when the trouble started… The next list sums up the pros and cons of this approach.


      + PE’s are reliable & accurate


      + PEs require no ‘coding’, making them (theoretically) easier to support.


      + Able to recover from telemetric outages.


      -          To force recalculation, PEs require admin access on the PI server.


      -          Recalculation of totalizers based on recalculated PEs is not supported by a 100% PE approach.


      -          No ability to template/re-use generic calculation logic, so ‘code’ is repeated for each valve.


      Option 2: PI ACE Modules


      PI ACE would have been a good solution for me if it had similar AF integration to what it has for the module database.


      + Ability to re-use programming logic


      +  Recalculation of totalizers possible


      -          Forced recalculation requires elevated/admin rights.


      -          Stability of the scheduler.


      -          No automatic history recovery from telemetric outages, or when the scheduler has been down.




      Option 3: AF Based Calculations + custom windows service


      What sparked me initially to go the ‘AF’ route as an alternative for PI calculations was the fact that I, as an ‘external’ solution provider, did not have proper access to the PI admin teams. This greatly blocked the required ‘tuning’ cycle, since PEs as well as ACE modules require that access. As I worked along I discovered a couple of unexpected ‘gems’ in AF that are *not at all* possible with the other approaches.


      + Using element templates and valve configuration data stored in the asset model, we can re-use calculation logic.


      + When performing calculations, AF automatically takes a ‘step size’ that gives the most detailed resolution.


      + We can use the ‘divide and conquer’ method to divide the calculation logic into small sub-steps in a formula reference.


      + It is much easier to change the retrieval method of the input values (interpolated, averaged, last value) without the need to change the formula itself


      + Given the above, we have a full ‘asset-based’ calculation model (that’s why I am really looking forward to Abacus).


      + Changes in calculation parameters are instantly reflected


      + AF data retrieval is much faster than you would expect… (I recalculated the estimations since startup of the plant, over a month of data, in less than one hour.)


      + Using writable attributes in AF and some custom code we can have a fully client managed backfilling mechanism.


      (neutral) Using some simple programmatic logic, it is possible to exert full control over recalculation.


      -          Scheduled calculation and backfilling/recalculation logic requires some custom coding (but this also gave us better stability and control).


      -          Custom solutions always get extra scrutiny from the run & maintain and apps support teams (rightfully so), so they might require additional step-out approval.


      -          (minor) AF works with archive values, so (depending on compression settings) there is possibly some information loss.


      -          Trending over long periods in element-relative ProcessBook displays might cause time/resource consumption issues.
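      For the curious: the ‘client managed backfilling’ mentioned above boiled down to a loop that re-reads inputs and re-writes results over a requested time window. A conceptual Python sketch of that loop (this is not AF SDK code; the function names and fixed-step scheduling are hypothetical simplifications of what our custom Windows service did):

```python
from datetime import datetime, timedelta

def backfill(calc, read_inputs, write_output, start, end,
             step=timedelta(minutes=1)):
    """Client-managed backfill loop (conceptual sketch).

    calc         : function mapping a dict of input values to an output value
    read_inputs  : function(timestamp) -> dict of input values at that time
    write_output : function(timestamp, value) persisting the recalculated result
    Walks the requested window in fixed steps, recalculating and rewriting
    each result. Returns the number of values written.
    """
    t = start
    count = 0
    while t <= end:
        write_output(t, calc(read_inputs(t)))
        count += 1
        t += step
    return count
```

      In the real service, `read_inputs` and `write_output` worked against AF attributes (including writable ones holding the recalculation window), which is what made the mechanism fully client-managed.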




      In summary, I can conclude that AF-based calculations offer (me) the most flexibility and maintainability for my calculations. It also eases some of the pains of developing AF-based solutions (in theory you can have multiple application-specific AF instances serving as application server and ‘container’ for your asset models).

        • Re: PI Calculation Best Practice Continued : Why I love AF  for doing calculations (and am looking forward to ABAcus)
          Roger Palmen

          Hi Aldo,


          A quick first comment (I'll try to post a more elaborate addition to this thread later on... end-of-year pressure!):


          I share most of your opinions on the options you've listed. Which option did you eventually end up with?


          Option 1: PE is easy for simple and small-scale stuff. Anything beyond that, imho the drawbacks quickly outweigh the benefits. Luckily, Abacus and the AFSDK support PE's to be resolved, and separate the evaluation of the PE from the persistence of the result. You could almost use the AFSDK in Excel to run the PE's from your Excel PE management sheet!


          Option 2: ACE is flexible, and can be linked to AF (see this thread: http://vcampus.osisoft.com/discussion_hall/development_with_other_osisoft_products/f/12/t/3501.aspx). But ACE is also a pain to scale up and manage. If one would look at using the AFSDK from ACE, it's just a small step to actually create your own calculation service. I did once read a great writeup of that somewhere on OSIsoft, but can't find it again quickly.


          Option 3: AF is very well placed to store and maintain the 'functional logic' required to build such solutions, and the templates support that very well. AF does need a set of custom DataReferences though to support more complex logic effectively. I am also impressed by how much data AF can calculate through in mere seconds, which kept me using rollup calculations. Even creating trends that require thousands of calculations runs acceptably (meaning seconds). Before Abacus, the best combination of options 2 & 3 I have always found to be a solution (again, can't find the link quickly) that used ACE to copy the value of an AFattribute to another AFattribute. The source AFattribute can then be used to leverage all the good stuff AF gives us.




          2014 will bring Abacus, and will allow us to move on to the next level of problems and complexity!