If I understand you correctly, you basically want to test how much data you can write to the PI server.
To ensure no data loss, this is more a hardware question than a software configuration. The tuning parameters can only take you so far: it doesn't matter if you raise the max queue parameters if the server can't even receive the requests.
I would start by looking at the hardware sizing guidance at: https://techsupport.osisoft.com/troubleshooting/hardwaresizing/hardwaresizing.aspx
Note that when buffering is enabled, all data is written to disk on the client node first and then sent to PI, so you need good disks there as well.
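To illustrate the point about client-side disks, here is a minimal sketch of the buffer-to-disk-then-forward pattern described above. This is not the actual PI Buffer Subsystem implementation; the class name, file format, and `flush_to_server` call are purely illustrative:

```python
import os
import tempfile

class DiskBuffer:
    """Hypothetical sketch: persist events locally before forwarding them."""

    def __init__(self, path):
        self.path = path
        open(self.path, "w").close()  # start with an empty queue file

    def write(self, event):
        # Every event hits the local disk first, which is why client disk
        # throughput becomes part of the end-to-end ingress bottleneck.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(event + "\n")

    def flush_to_server(self, send):
        # Forward buffered events, then truncate the local queue file.
        with open(self.path, "r", encoding="utf-8") as f:
            events = [line.rstrip("\n") for line in f]
        for event in events:
            send(event)               # stand-in for the network call to PI
        open(self.path, "w").close()  # clear the buffer after a good send
        return len(events)

sent = []
buf = DiskBuffer(os.path.join(tempfile.gettempdir(), "pi_buffer_demo.dat"))
buf.write("tag1,2017-10-01T00:00:00Z,42.0")
buf.write("tag2,2017-10-01T00:00:00Z,17.5")
print(buf.flush_to_server(sent.append))  # → 2
```

The takeaway: a slow disk on the client node throttles `write`, and therefore the whole pipeline, no matter how the server is tuned.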
I believe I have PI Server tuning issues rather than hardware issues; the hardware is more than sufficient based on the sizing tool and on observed usage (latency, uptime, CPU, network bandwidth).
I still have no recipe for tuning PI Server parameters to handle this data flow. Disabling archiving should help, but it does not affect performance much.
I find it quite strange not to be able to process this simple data flow with a new server and PI Server 2017 R2.
That is strange. You should not have received a recommended hardware sizing result based on your requirements.
This is because we generally want you to talk to us when the number of archived data points exceeds a certain threshold. Please also remember that your requirements describe steady-state operation, and we need to account for emergency situations as well.
Please check that your inputs are correct (point count, interface scan rate, and compression rate).
Also, the default tuning parameters work best in most cases. I would expect the same to be true for your setup.
Based on the hardware sizing tool described above, the current hardware is more than enough, but I still have issues. Does that mean the tuning parameters in PI Server 2017 R2 won't help with my steady, simple data flow? Does it mean PI Server has hard limits and cannot scale up or scale out?
What is the maximum snapshot event rate (events/s) ever tested or recorded by OSIsoft?
PI Server itself has no such limits in place; however, in order to protect and support our customers, we set a hard limit in the hardware sizing web tool (and in the spreadsheets we had before).
So the web tool should have given you a warning instead of an actual sizing recommendation. If this is not the case, then please let us know!
In general, we want customers to scale out once they archive more than 500K events per second. If you only want to see snapshot data, with an archiving rate below 500K, the limit set in the web tool is 1M snapshot events per second. You can experiment with the web tool to find the limits.
From my understanding, the presentation below still applies, with some improvements:
Back when we released PI Server 2012, we did ingress testing against the snapshot and the archive. The numbers we published at that time were 1M events/s to the snapshot and 500K events/s to the archive. What you are looking to do is beyond what we have tested or documented for archive performance, and it is right on the ragged edge of possible snapshot performance, with some important caveats:
- These numbers were pure ingress only: not a single client was connected to the server. Client connections, especially to the update manager for real-time streams, will absolutely impact ingress capability. The numbers on this slide (https://pisquare.osisoft.com/message/78428-re-how-many-points-can-one-pi-data-archive-machine-handle#comment-78428 ) are individual maximums, not all attainable simultaneously.
- It required multiple client sources of data to achieve these input rates. The highest rate we have seen for ingress from a single instance of the buffer subsystem (the fastest ingress mechanism we have) is around 300K events/s to the snapshot.
On the surface, it looks to me like you need a minimum of two Data Archive servers to accomplish your goal, and even then you may not be able to achieve the required data rates leaving your client machine.
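Putting the published figures together, the scale-out arithmetic can be sketched as a back-of-the-envelope calculation. The per-server and per-buffer-instance limits below are the numbers quoted above; the 900K events/s target workload is a hypothetical example, not a figure from this thread:

```python
import math

# Published PI Server 2012 ingress test figures quoted above.
ARCHIVE_LIMIT_PER_SERVER = 500_000    # events/s archived per Data Archive
SNAPSHOT_LIMIT_PER_SERVER = 1_000_000 # events/s to the snapshot per server
BUFFER_LIMIT_PER_INSTANCE = 300_000   # events/s from one buffer subsystem

def instances_needed(target_rate, per_instance_limit):
    """Minimum number of parallel servers/instances to sustain target_rate."""
    return math.ceil(target_rate / per_instance_limit)

# Hypothetical workload: 900K archived events/s.
target = 900_000
print(instances_needed(target, ARCHIVE_LIMIT_PER_SERVER))   # → 2 Data Archives
print(instances_needed(target, BUFFER_LIMIT_PER_INSTANCE))  # → 3 buffer instances
```

Note how the client side, not just the server, forces scale-out: even a workload that fits in one server's snapshot limit can exceed what a single buffer subsystem instance can push out.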
I'd like to talk with you a bit more about your use case if you have the time. We are just about to embark on a new round of PI Data Archive testing, and if I can understand exactly what you are trying to accomplish, it may help us design better tests.