This value is controlled by the MaxReturnedItemsPerCall configuration setting. You can look at the list of configuration settings at https://<yourserver>/piwebapi/system/configuration. If this setting does not exist, it defaults to 150000. This property limits the maximum number of items returned in the response. It affects the Stream and StreamSet controllers, as well as any action on any controller that accepts a maxCount URL parameter. Note that if you are using the StreamSet controller and requesting values from multiple streams, the MaxReturnedItemsPerCall value is divided by the number of requested streams. Assuming you have not changed this setting, are you requesting close to 600 streams in your StreamSet requests?
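To illustrate the division rule, here is a quick sketch (assuming the default limit of 150000; check your own /system/configuration endpoint for the actual value):

```python
# Sketch: effective per-stream item limit for a StreamSet request.
# Assumes the default MaxReturnedItemsPerCall of 150000 -- verify your
# server's value at https://<yourserver>/piwebapi/system/configuration.
MAX_RETURNED_ITEMS_PER_CALL = 150_000

def per_stream_limit(stream_count: int) -> int:
    """Items PI Web API will return per stream in a single StreamSet call."""
    return MAX_RETURNED_ITEMS_PER_CALL // stream_count

print(per_stream_limit(600))  # 250 items per stream when requesting 600 streams
```

So with 600 streams in one request, each stream is capped at 250 returned items.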
To change this setting, you can add an Int32 configuration attribute named "MaxReturnedItemsPerCall" in the AF database that is configured to store the PI Web API configuration. Before you do that, make sure you understand the implications of increasing this value. The parameter was introduced to protect the server from a single client requesting too much information in a single request.
Hi Daphne, thanks for the answer!
I do understand the risks of such requests, and I am actually testing how much the system can take (inside a test environment, of course).
I intend to request interpolated data from the 600 points over a period of one month at a 5-minute interval (over 5 million items). Is this reasonable to request through the PI Web API?
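For scale, the arithmetic behind that estimate works out as follows:

```python
# Rough item-count estimate for the scenario above:
# 600 points, 30 days, 5-minute interpolation interval.
points = 600
days = 30
intervals_per_day = 24 * 60 // 5   # 288 five-minute intervals per day
total_items = points * days * intervals_per_day
print(total_items)  # 5184000 items in total
```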
I want to use it because HTTP/JSON requires much less configuration on the client side compared to OLEDB, ODBC, the SDK, and other technologies.
While we have done some internal load testing, specific load numbers depend on your configuration, environment, and server resources. Keep in mind that the more items are requested, the more memory is needed on the PI Web API server (e.g. for serialization). Network latency can also be a big factor when transferring a large payload between the client and the server. Also, if the server is busy working on one large request, that may have performance implications for handling other requests arriving at the same time. Since you already have your test environment set up, you can try chunking the points into separate StreamSet requests to see whether there is a combination that minimizes response time. Please feel free to share your results here!
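A minimal sketch of the chunking idea: split the 600 streams into batches and build one ad-hoc StreamSet request per batch. The WebIds and the batch size of 100 are placeholders; substitute your own values and server name.

```python
# Hedged sketch: batch 600 streams into separate StreamSet requests.
# The WebIds below are illustrative placeholders, not real identifiers.
from urllib.parse import urlencode

def chunk(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

web_ids = [f"webid-{n}" for n in range(600)]  # placeholder WebIds

def build_url(batch):
    # Ad-hoc StreamSet interpolated endpoint; the webId query
    # parameter is repeated once per requested stream.
    query = urlencode([("webId", w) for w in batch]
                      + [("startTime", "*-30d"),
                         ("endTime", "*"),
                         ("interval", "5m")])
    return "https://<yourserver>/piwebapi/streamsets/interpolated?" + query

urls = [build_url(batch) for batch in chunk(web_ids, 100)]
print(len(urls))  # 6 requests of 100 streams each
```

You could then time each batch size (e.g. 50, 100, 200 streams per request) against your test server to find the combination with the best overall response time.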