I see the following possible causes of your performance issues:
- The PI Data Archive server may not support bulk calls; knowing the version of your PI Data Archive server will help identify this.
- Your PI Data Archive is under heavy load or high memory use.
Can you also tell us more about your data density?
How many events do you get back from your 2-day query?
The hardware you have may also greatly influence your read time; what type of hardware do you have?
SSDs currently provide the best performance.
To see what is happening, I suggest running this PowerShell script on the PI Data Archive server and sharing the resulting .blg file on this thread.
Make sure the PI Data Archive machine has enough free memory before running the script: at least 200 MB.
- Start the script before executing your bulk call.
- Set the duration long enough (the default is 10 seconds; you could set it to 300 s, i.e. 5 minutes) so the chart shows what happens before and after the data call is made.
- Please provide the time at which you started the bulk call.
Sorry to end up with more questions than answers, but I am sure we can find out why you are seeing slow performance!
Awaiting your input.
Thank you for your reply. Please find answers to your questions below:
- PI Version: 3.4.390.28
- The server I'm working with is a development environment and I'm the only one connected to it, so I don't imagine it is under heavy load.
- Data Density: For each of the tags, a new value comes in every 2 minutes or so, so I'm expecting quite a lot of data.
- Hardware: The system IS using an SSD.
Sorry, I'm not sure how to get the PowerShell script you mentioned. The hyperlink just links back to this page. Am I missing something?
- I assume this is 3.4.390.18; I am not aware of a 3.4.390.28 release. From the documentation:
The bulk Summary methods require PI Data Archive version 3.4.390.18 or greater.
So you should be fine with this version.
- Maybe not under heavy load, but this call will require a lot of memory on the server side. How much RAM does the server have?
- Data every 2 minutes means 30 values per hour, i.e. 720 values per day per tag.
- For 50k tags that is 36,000,000 values for a single day. That is a lot of values, and retrieving it will certainly take some time.
- How do you chunk your data, by tags or by values? You should definitely chunk by tags when you create your pagingConfig object.
- Keep in mind that your client will also need significant time to process this data (converting it into .NET objects).
- Why do you need to retrieve such a large amount of data so quickly?
- I updated the link to the PowerShell script.
Just in case: is your development system running on an external hard drive? If so, USB 2.0 would give poor performance; you would need at least USB 3.0.
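To make the event-count arithmetic and the "chunk by tags" idea above concrete, here is a minimal, language-agnostic sketch in Python (your real client would of course use the AF SDK in .NET). The tag names and the page size of 1000 are purely hypothetical examples chosen for illustration, not values from this thread:

```python
def events_per_day(interval_minutes: int) -> int:
    """Events produced per tag per day at a fixed sampling interval."""
    return (60 // interval_minutes) * 24

def chunk_tags(tags, page_size):
    """Split a tag list into pages so each bulk call stays bounded."""
    return [tags[i:i + page_size] for i in range(0, len(tags), page_size)]

# One value every 2 minutes -> 30 values/hour -> 720 values/day per tag.
per_tag = events_per_day(2)

# 50k tags for one day -> 36,000,000 events to transfer and convert.
total = per_tag * 50_000

# Hypothetical tag list, chunked into pages of 1000 tags each;
# each page would correspond to one bounded bulk call.
tags = [f"tag{i}" for i in range(50_000)]
pages = chunk_tags(tags, 1000)
```

Chunking by tags keeps the per-call memory footprint on both server and client predictable, which matters given the totals computed above.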