
Solutions for a flexible, user-correctable, AF-based production report?

Question asked by bneufeld on May 13, 2019
Latest reply on May 18, 2019 by TimCarmichael

I am looking for input from the user community on best practices for production reporting. 


Goals / criteria:

  1. Automatic daily production and metallurgical calculations 
  2. Calculations in PI AF (instead of spreadsheets, which have version control & readability problems, or Performance Equations, which are more trouble than AF) 
  3. Process engineers must have the ability to correct bad data


Present situation: Years ago, we came up with "The Reconciler" (or perhaps the idea was borrowed from an old PI World conference) to do daily production reporting at sites. The process looks like this: 

  1. A process engineer starts their day by opening up a PI DataLink spreadsheet
  2. The spreadsheet queries 24 hours of data from the Data Archive for many tags, then generates totals, averages, and dependent (multivariable) calculations. 
  3. The process engineer is given the opportunity to correct any "bad" data. For example, if a belt scale failed for 3 hours, and we know our production was X during that time, they can adjust the belt scale total/average/whatever. 
  4. The process engineer "submits" the data, where it is written to PI Manual Entry tags (which we denote with the .ME suffix) with a 6:00:00am timestamp. 
  5. The same Excel spreadsheet is also the basis of daily mill reports. 


This system works pretty well, but it's not perfect. Problems include: 

  • It is manually initiated by a process engineer. We could "fix" this by running a spreadsheet automatically on a server with a macro, but I dislike adding complexity in Excel. The calculations should run themselves, I say! 
  • It is slow. For some remote sites, the Reconciler spreadsheet might query 250-300 tags over a few minutes. Even a lag of a few minutes is wasted time, especially at the start of a busy process engineer's day. 
  • Correcting or backfilling data is nightmarishly slow. For example, suppose a transmitter failed in a normal-looking state a few weeks ago. For each affected production day, the process engineer must set the date on the spreadsheet, query all the data, fix the bad values, and submit the sheet to re-write the Manual Entry tags. 
  • Since the Reconciler and Reports live on the same spreadsheet, the spreadsheet is unnecessarily complex. I think a better solution would be to decouple data reconciliation/validation and reporting. 


Proposed Solution: "The Reconciler 2.0". See attached PDF for a data flowchart. The essence of the process is:

  1. At 6:00:00am, a periodic AF analytic runs, filling a batch of Manual Entry (.ME) tags with the results of INDEPENDENT calculations. What I mean by independent is the calcs only require 1 tag: averages, totals, maximums, time in a certain state, etc. This distinction is important because it's these independent .ME tags that will later be user-correctable. 
  2. The final calculation in the periodic AF analytic is incrementing a counter tag; a trigger. 
  3. The trigger tag triggers an event-based AF analytic to compute DEPENDENT calculations; calcs that use the results of INDEPENDENT calculations as their inputs. For example, (average production rate) / (time running) = (average production rate while running). 
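The split between the two analytics can be sketched in plain Python. To be clear, this is not AF analysis syntax; the function names, tag names, and sample data below are illustrative stand-ins for the pattern:

```python
# Hypothetical sketch of the Reconciler 2.0 calculation split.
# INDEPENDENT calcs each read only one raw stream; DEPENDENT calcs
# read only the (correctable) INDEPENDENT results.

def compute_independent(samples, running_flags):
    """INDEPENDENT calcs: totals, averages, time in state, per tag."""
    hours = len(samples)  # assume hourly samples over the 24 h day
    return {
        "production_total.ME": sum(samples),
        "production_avg.ME": sum(samples) / hours,
        "time_running.ME": sum(running_flags),  # hours in 'running' state
    }

def compute_dependent(independent):
    """DEPENDENT calcs: inputs are the (possibly corrected) .ME results."""
    return {
        "avg_rate_while_running": independent["production_total.ME"]
        / independent["time_running.ME"]
    }

# 24 hourly production samples; the belt ran for the first 20 hours.
samples = [100.0] * 20 + [0.0] * 4
running = [1] * 20 + [0] * 4

ind = compute_independent(samples, running)   # the 6 am periodic analytic
trigger = 0
trigger += 1                                  # last step: bump the trigger tag
dep = compute_dependent(ind)                  # the event-based analytic fires
print(dep["avg_rate_while_running"])          # 2000 / 20 = 100.0
```

The key property is that `compute_dependent` never touches the raw samples, so re-running it after a correction cannot clobber an engineer's fix.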


The result is that when the process engineer starts their day, they will already have their results sitting in front of them. A big step ahead of where we are today!


What if the source data is bad? I'm glad you asked! To correct bad data:

  1. Process engineer opens a DataLink spreadsheet. 
  2. Spreadsheet pulls in 50-100 INDEPENDENT values (totals, sums, averages, times in state; not dependent on results of other calcs) from this morning's calculations. This operation just takes a second, because it's only looking for values at the 6am timestamp, not summing/averaging/totalizing a whole day of data. 
  3. Process engineer corrects the results of the INDEPENDENT calculations: adds the equivalent of 3 hours' production to the belt scale total, or whatever. 
  4. Process engineer hits SUBMIT on their spreadsheet. The submit button does two things:
    - Writes the corrected data to the INDEPENDENT Manual Entry tags
    - Increments the Trigger for the DEPENDENT calculations, which causes the dependent analyses to re-calculate based on these new inputs. 
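The submit step can be sketched the same way. Again, `me_tags` and `recompute_dependent` are just stand-ins for the DataLink write and the event-triggered analytic, not real API calls:

```python
# Hypothetical correction pass: the engineer overwrites an INDEPENDENT
# .ME value, then bumps the trigger so dependent calcs re-run.

def recompute_dependent(independent):
    return {
        "avg_rate_while_running": independent["production_total.ME"]
        / independent["time_running.ME"]
    }

# This morning's 6 am results; the belt scale missed 3 hours.
me_tags = {"production_total.ME": 1700.0, "time_running.ME": 20.0}
trigger = 1

# Belt scale failed for 3 h at ~100 t/h: add the missing 300 t.
me_tags["production_total.ME"] += 300.0

# SUBMIT: write the corrected values, then increment the trigger tag,
# which fires the DEPENDENT analytic against the corrected inputs.
trigger += 1
dep = recompute_dependent(me_tags)
print(dep["avg_rate_while_running"])  # (1700 + 300) / 20 = 100.0
```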


The other nice thing is that reporting should be decoupled from data validation. A user can build a spreadsheet, PI Vision dashboard, or anything they want, to display the daily production reporting calcs. It will not be one highly-coupled mess of a spreadsheet. 


Where I need help. 

  • I am scared that these 2 AF analytics, INDEPENDENT and DEPENDENT, could be triggered accidentally. Since process engineers can (by design) modify the results of INDEPENDENT calculations, the INDEPENDENT AF analytic can never be re-run or backfilled. Why? It will scrape the Data Archive for those original totals, averages, times in state, etc, that the process engineer decided were bad and needed correcting. 
    • I was thinking a safety option might be writing the results of all production calcs, INDEPENDENT or DEPENDENT, to a secondary database so we have history on all results. But this seems too complicated. 
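If you do go the secondary-database route, the history might not need to be complicated: an append-only table where every run (automatic or corrected) adds a row would preserve all prior results. A minimal SQLite sketch, with made-up table and column names:

```python
# Hypothetical append-only audit history for calc results, so a
# correction never destroys the original automatically-computed value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE calc_history (
    tag TEXT, day TEXT, run_at TEXT, value REAL)""")

def record(tag, day, run_at, value):
    conn.execute("INSERT INTO calc_history VALUES (?, ?, ?, ?)",
                 (tag, day, run_at, value))

# The automatic 6 am run, then an engineer's correction later that morning.
record("production_total.ME", "2019-05-13", "06:00", 1700.0)
record("production_total.ME", "2019-05-13", "09:15", 2000.0)

# Reporting takes the latest value; the original stays on record.
latest = conn.execute("""SELECT value FROM calc_history
    WHERE tag = ? AND day = ?
    ORDER BY run_at DESC LIMIT 1""",
    ("production_total.ME", "2019-05-13")).fetchone()[0]
print(latest)  # 2000.0
```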
  • Overall, is this too complicated? What other tools are out there for production reporting? Am I overthinking the problem and solution? What other solutions meet my 3 criteria at the top of this post?


Thanks for reading, and apologies if this is in the wrong place. This is my first PI Square post. 

Edit: Couple corrections.