High CPU usage in AF SDK code when PI Server is unavailable

Discussion created by MIPAC on Jun 14, 2013
Latest reply on Jul 15, 2013 by cmanhard



I have some code that runs AF analysis rules on a scheduled basis. The analysis rules perform differing functions (some are custom written by me, others are Sigmafine analyses). The scheduling code runs on the AF server, which is a separate machine from the PI server.

I have found some rather strange behaviour when the PI server becomes unavailable: the CPU on the AF server jumps to 100% and stays there, and the process consuming the CPU is the one where my scheduling code is executing. The analyses typically read from and write to attributes that use the PIPoint data reference. I've observed that these are correctly being read with a bad value while the PI server is unavailable, and each analysis still completes and exits after gracefully handling the bad values. During this time, and in between scheduled analyses, the CPU on the AF server runs at 100%, even though my code is not actually doing anything between scheduled analysis tasks. Once the PI server becomes available again, the CPU utilisation on the AF server (and in my process) returns to normal.

My code makes use of thread pooling to handle multiple scheduled tasks, but that level of multithreading doesn't seem to be the issue.
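For reference, the shape of my scheduler is roughly the following. This is a Python analogue purely for illustration, not my actual .NET/AF SDK code; `read_attribute` and `AFValueStub` are hypothetical stand-ins for a PIPoint data-reference read that returns a bad value while the PI server is down:

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class AFValueStub:
    """Stand-in for an attribute value: payload plus a good/bad flag."""
    value: object
    is_good: bool

def read_attribute(server_up: bool) -> AFValueStub:
    # While the PI server is unavailable the read comes back flagged as bad.
    return AFValueStub(42.0, True) if server_up else AFValueStub(None, False)

def run_analysis(server_up: bool) -> str:
    v = read_attribute(server_up)
    if not v.is_good:
        # Bad values are handled gracefully; the analysis still completes.
        return "completed-with-bad-input"
    return "completed"

# Scheduled tasks are dispatched to a thread pool, as in my service.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_analysis, [True, False, True]))

print(results)
```

The point is that every task finishes and returns, whether or not the server is reachable, so nothing in my own task code should be left running between schedules.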


I've done some testing against a fairly stripped-down version of the code (all other functions removed; it just does basic analysis execution using the thread pool), and I still see the same behaviour.
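I don't know what the SDK is doing internally, but for what it's worth, the symptom matches a reconnect loop that retries with no delay between attempts: it pegs a core at 100% for as long as the dependency is down, then drops back to idle the moment it recovers. A hypothetical sketch of the two variants (plain Python, nothing to do with the AF SDK):

```python
import time

def reconnect(try_connect, deadline, delay=0.0):
    """Retry try_connect() until it succeeds or the deadline passes.

    With delay == 0 this is a busy-wait: the loop spins the CPU the whole
    time the dependency is down. A small sleep (or exponential backoff)
    between attempts keeps the process nearly idle instead.
    """
    attempts = 0
    while time.monotonic() < deadline:
        attempts += 1
        if try_connect():
            return attempts
        if delay:
            time.sleep(delay)
    return attempts

# Simulated dependency that stays down for the whole window.
always_down = lambda: False

busy = reconnect(always_down, time.monotonic() + 0.1)          # spins hot
polite = reconnect(always_down, time.monotonic() + 0.1, 0.02)  # mostly sleeps
print(busy, polite)  # busy will be far larger than polite
```

Both loops stop the instant a connection succeeds, which would also explain why CPU returns to normal as soon as the PI server comes back.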


I'm using the latest released versions of the PI System software, and my code is written against .NET 4.


I'd appreciate any guidance or suggestions as to what is likely going on here, or what to look for with this type of interaction.