I have a group of PI AF Servers that have a large number of analyses running on them. As we've been creating more and more analyses, we've begun running into performance issues from time to time. What I've noticed is that it's not just the number of analyses that seems to be giving us problems - an even bigger problem seems to be computationally expensive analyses that are being executed way too often. In many cases, these are analyses that are event-triggered based on inputs that are populated by other analyses. So we sometimes end up with a "domino effect" of inputs that cause analyses to be executed far too often.
So I'm trying to track down which analyses may be getting hit the hardest by this. I've written a PowerShell script that uses AFSDK to inspect every analysis on an AF Server and tell me two things about each:
- How long an analysis takes to run. For this, I've been measuring how long Analysis.AnalysisRule.Run() takes to execute.
- How often an analysis has executed over the past day. I have not found a good way to do this so far. Essentially, I look for the trigger inputs and estimate how many times the analysis should have executed based on the timestamps of each of their recorded values. I'm especially having a hard time doing this for Rollups, for which the AnalysisRule and TimeRule properties don't seem to provide much information about when these analyses are executed.
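For what it's worth, here's the counting logic I've been using for the second bullet, stripped of the AFSDK plumbing. The idea is: once you've pulled the recorded-value timestamps for each trigger input over the past day, an event-triggered analysis should have evaluated roughly once per *distinct* timestamp across all of its trigger inputs, since multiple inputs updating at the same timestamp only trigger one evaluation. This is just an estimate, and the function name and inputs are my own, not anything from AFSDK:

```python
from datetime import datetime, timedelta

def estimate_trigger_count(inputs_timestamps, window_end, window_hours=24):
    """Estimate how many times an event-triggered analysis evaluated.

    inputs_timestamps: one list of timestamps per trigger input,
    already fetched from the archive (e.g. via a RecordedValues call).
    Timestamps shared across inputs are deduplicated, since one
    trigger timestamp produces one evaluation.
    """
    window_start = window_end - timedelta(hours=window_hours)
    distinct = set()
    for series in inputs_timestamps:
        for ts in series:
            if window_start <= ts <= window_end:
                distinct.add(ts)
    return len(distinct)

# Hypothetical example: two trigger inputs that share one timestamp.
now = datetime(2024, 1, 2, 12, 0, 0)
input_a = [now - timedelta(minutes=m) for m in (5, 10, 15)]
input_b = [now - timedelta(minutes=m) for m in (10, 30)]
print(estimate_trigger_count([input_a, input_b], window_end=now))  # 4
```

Note this over- or under-counts whenever exception/compression settings hide input updates from the archive, which is part of why I'm not happy with the approach.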
Based on how System Explorer operates, I feel like there's a clean way of getting both of these pieces of information for each analysis with AFSDK, but it has been eluding me so far. Does anyone have any ideas how I could go about programmatically finding this information?