3 Replies Latest reply on Feb 12, 2013 3:11 PM by mhamel

    ArcMaxCollect and PI SDK

    rohanar

      Is there any way to read the value of ArcMaxCollect set on a PI Server via the PI SDK?  We're running into limitations that occur based on this value. The error (11128) indicates there are too many events returned from a TimedValues call. According to the documentation, that limit is the ArcMaxCollect value on the PI archive.

       

      Thanks!

        • Re: ArcMaxCollect and PI SDK

          Hello Rosanne,

           

          Error [-11128] indicates that the ArcMaxCollect limit has been exceeded.

           

          You don't have access to tuning parameters like ArcMaxCollect via the PI SDK, but you can use PI SMT -> Operation -> Tuning Parameters to look up or change them. Alternatively, piconfig.exe can be used - with recent versions, also remotely. It is usually necessary to restart the subsystem that uses a tuning parameter for the change to take effect. In the case of ArcMaxCollect, the PI Archive Subsystem needs to be restarted.
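
           

          For reference, a piconfig session along these lines can list the current value without changing anything. This is a sketch from memory (the tuning parameters live in the pi_gen, pitimeout table as far as I recall); please verify the table and attribute names against your server's documentation before relying on it:

```
@table pi_gen, pitimeout
@mode list
@ostr name, value
@select name=ArcMaxCollect
@ends
```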

            • Re: ArcMaxCollect and PI SDK

              Hello Rosanne,

               

              There is one thing that I forgot to mention. ArcMaxCollect is intended to protect a PI System against expensive user queries. Before increasing the number of events that can be retrieved by a single call, please think about how you could change your query so that it returns fewer results, e.g. by limiting the number of tags and the time period you query.  

                • Re: ArcMaxCollect and PI SDK
                  mhamel

                  @Rosanne: As Gregor mentioned, this parameter is meant to be a "safety valve" for when too many events are pulled by a single connection to the PI Server (PI Data Archive). Since PI Server version 3.4.380.36, the default value has been set to 1.5 million events per call per connection. This does not mean you cannot retrieve more than 1.5 million events in total, but that you should optimize how you pull them out. A divide-and-conquer technique works best.

                   

                  In a past project, I had to pull more than 100 million events, which I could not do in a single call. My best strategy was to "slice" this big query into smaller, more manageable ones. You can easily reassemble the data sets at the end before passing them back to your client. Where applicable, you could also run the slices in parallel to go faster, using techniques such as asynchronous calls or the Parallel class from .NET. Some of these techniques were demonstrated at the vCampus Live! 2012 conference; you can access that presentation from the Download Center if you are interested.
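
                   

                  To make the slicing idea concrete, here is a minimal, generic sketch of the technique (plain Python, not the PI SDK API; `fetch_slice` is a hypothetical stand-in for whatever call you use, e.g. a TimedValues query sized to stay below ArcMaxCollect):

```python
from datetime import datetime, timedelta

def slice_time_range(start, end, chunk):
    """Split [start, end) into consecutive sub-ranges no longer than `chunk`."""
    slices = []
    t = start
    while t < end:
        t_next = min(t + chunk, end)
        slices.append((t, t_next))
        t = t_next
    return slices

def fetch_all(fetch_slice, start, end, chunk=timedelta(days=1)):
    """Run `fetch_slice(s, e)` per sub-range and reassemble the results in order.

    `fetch_slice` is a placeholder for your actual data call; pick `chunk`
    so each slice returns comfortably fewer events than ArcMaxCollect.
    """
    results = []
    for s, e in slice_time_range(start, end, chunk):
        results.extend(fetch_slice(s, e))
    return results
```

Because each slice is independent, the same loop can be dispatched to a thread pool or async tasks if you want the parallel variant.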

                   

                  I hope this helped!