19 Replies Latest reply on Feb 16, 2011 12:13 AM by Ahmad Fattahi

    Timeout on PI RPC or System Call

      Hi, My client is trying to write data into more than 150 tags using our PI web service. He occasionally receives the following error: Status: -2147220478: Unable to open a session on a server. [-10722] PINET: Timeout on PI RPC or System Call. However, in spite of the error, data is written to the tags successfully most of the time. My preliminary guess is that the PI server is too busy handling large request calls and is therefore throwing the above error. Has anyone seen this issue before and can suggest a solution? Thanks!
        • Re: Timeout on PI RPC or System Call
          Ahmad Fattahi

          Did you have a chance to investigate the PI SDK logs on the PI Server to see if they include any more clues? In general, updating 150 tags shouldn't overload a typical PI Server unless the server is too busy doing other tasks, the hardware is very old, or the data rate is very fast (tens or hundreds of events per second per tag).


          Another question is how often you update the 150 tags. Do you update them all together? Does it make a difference if you break them down into a number of smaller groups of tags? What if you decrease the rate of tag updates?
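          To illustrate the "smaller groups" suggestion above, here is a minimal, language-neutral sketch in Python of splitting one 150-tag batch into several smaller write calls. The tag names and group size are purely illustrative; this is not PI SDK code.

```python
def chunked(items, size):
    """Yield successive fixed-size groups from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# 150 hypothetical tag names, written in six groups of 25
# instead of a single 150-tag call.
tags = [f"tag{i:03d}" for i in range(150)]
groups = list(chunked(tags, 25))
```

          Each group can then be written (and retried) independently, so one slow or timed-out call no longer affects the whole batch.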

            • Re: Timeout on PI RPC or System Call

              Your post indicates you are using "our pi webservice".  I assume this means that you developed a web service based on the PISDK or AFSDK to write data.  Please fill in the details if this assumption is incorrect.


              A typical web service generates multithreaded (MT) calls that reference the assemblies, in this case AFSDK or PISDK.  The current AFSDK leverages the PISDK to write to PI Servers, so we can reduce this problem to using the PISDK to write from a web service.  (Maybe I am oversimplifying this.)


              The PISDK is STA COM, so if you call it directly from your MT thread, you will end up marshaling into the STA thread that hosts the PISDK.  This could slow down your web service.  Thus, to enable reasonable performance, you should create an STA background thread and create the ServerManager object first, before doing work in that thread.  Let's call this your 'worker thread'.  You will need to marshal the query from the calling MT thread to the worker thread and the results back, using your favorite method.
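              The worker-thread pattern described above can be sketched generically in Python: one long-lived background thread owns the SDK object and serves requests from a queue, so request threads never touch it directly. `SdkStandIn` is a hypothetical placeholder; in a real service the worker would create the PISDK ServerManager object inside its run loop instead.

```python
import queue
import threading

class SdkStandIn:
    """Hypothetical stand-in for the STA-hosted SDK object."""
    def write(self, tag, value):
        return f"{tag}={value}"

class Worker:
    def __init__(self):
        self._requests = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # The SDK object is created first, inside the worker thread,
        # and lives for the lifetime of that thread.
        sdk = SdkStandIn()
        while True:
            tag, value, reply = self._requests.get()
            if tag is None:
                break
            reply.put(sdk.write(tag, value))

    def write(self, tag, value):
        # Called from any request thread; marshals the call into the
        # worker and blocks for the result.
        reply = queue.Queue()
        self._requests.put((tag, value, reply))
        return reply.get()

    def stop(self):
        self._requests.put((None, None, None))

worker = Worker()
result = worker.write("sinusoid", 1.5)
worker.stop()
```

              The queue pair plays the role of the cross-thread marshaling ("your favorite method" above); any equivalent request/reply mechanism works.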


              Another need when using both the AFSDK and the PISDK is lifetime management.  The SDK object should be kept around and not destroyed and recreated frequently.  This prevents connection/reconnection cycles that could slow down the web service, and it also enables some internal caching of metadata.  Thus, a typical call will pass the query along with identity and locale information to the worker thread.  Set those for the worker thread and then do the work.
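              A minimal sketch of that lifetime-management advice, assuming a hypothetical `Connection` class in place of a real SDK server object: the connection is created once and reused across calls rather than rebuilt per request.

```python
import functools

class Connection:
    """Hypothetical stand-in for a long-lived SDK server connection."""
    instances = 0

    def __init__(self, host):
        Connection.instances += 1
        self.host = host

@functools.lru_cache(maxsize=None)
def get_connection(host):
    # First call constructs the connection; later calls for the same
    # host return the cached instance, avoiding reconnection cycles.
    return Connection(host)

a = get_connection("my-pi-server")
b = get_connection("my-pi-server")
```

              In a web service, the same effect is usually achieved by holding the SDK object in application-scoped state rather than per-request state.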


              Eventually, you may find that you need more throughput.  The next step is to make all calls asynchronously (PIAsynchStatus objects) in the PISDK.  Depending on the complexity of the call and the network topology, this could be enough to meet performance goals.
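              The shape of that asynchronous step, sketched with Python futures rather than actual PISDK PIAsynchStatus objects: the caller submits the write, continues with other work, and harvests the result later instead of blocking on each call. `write_value` is a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor

def write_value(tag, value):
    """Stand-in for an asynchronous SDK write."""
    return (tag, value)

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(write_value, "sinusoid", 42)
    # The caller is free to do other work here,
    # then collect the result when it is needed.
    result = future.result()
```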


              If more throughput is needed, the next step involves adding a thread pool of multiple worker threads.
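              A rough illustration of the thread-pool step, again in generic Python rather than PISDK code: several worker threads share the write workload. In a real STA design, each pool thread would host its own SDK instance; `write_value` here is a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor

def write_value(tag):
    """Stand-in for a per-thread SDK write."""
    return f"{tag} written"

# Eight hypothetical tags spread across four worker threads.
tags = [f"tag{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(write_value, tags))
```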


              Whether you use the packaged PI Web Services, or create your own depends on your needs.  However, the above discussion should give you an idea of the work involved.

              • Re: Timeout on PI RPC or System Call

                This appears to be a timeout on connection or reconnection.  The timeout error may be resolved by increasing the connection timeout (use the AboutPI-SDK Connection Manager dialog for your PI Server settings).  Verify the network transport time using tracert or ping to see whether this could be the problem.  You can also use AboutPI-SDK/Connection Manager to get a feel for how long a connection takes.
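                One quick way to quantify "how long a connection takes" is to time a raw TCP connect yourself. A small Python sketch, assuming the standard PI network manager port 5450; the demo below connects to a local listener only, and a real check would target the PI Server host instead.

```python
import socket
import time

def connect_time(host, port, timeout=15.0):
    """Return the seconds taken to open a TCP connection."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.monotonic() - start

# Demo against a throwaway local listener so the sketch is
# self-contained; substitute your PI Server host and port 5450.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
elapsed = connect_time("127.0.0.1", listener.getsockname()[1])
listener.close()
```

                If the raw TCP connect is fast but the SDK connection is slow, the delay lies above the transport layer (name resolution, authentication, or server load) rather than in the network itself.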


                The pibasess subsystem on the PI Server usually logs messages about connection attempts.  See if you can discover any messages from pibasess and pinetmgr pertaining to the web service node in the PI Server message log.

                  • Re: Timeout on PI RPC or System Call

                    I increased the connection timeout to 200 seconds on the client side, but that made no difference. The data push seems to work fine from the top of the hour to hh:50 but fails when run from approximately hh:50 to hh:00. I saw the following error message at hh:57 in pigetmsg -f:


                    "Idle Point CleanUp: 154 points have been unloaded[20]"


                    This error seems to be related to the data insert, as 153 of the 154 points get updated in every job run. I saw the following article on the OSIsoft website.




                    It sounds like this may be the cause, but I don't know where to find the following parameters: ModuleDB_MaxIdleCleanSec and PointDB_MaxIdleCleanSec.

                      • Re: Timeout on PI RPC or System Call
                        Ahmad Fattahi

                        To access/edit the Timeout Table parameters, please see this article.

                          • Re: Timeout on PI RPC or System Call

                            I found that the tuning parameters were set to the default value of 3600, so I did not make any changes there.


                            From the logs, it appears that PI ACE is using up resources between hh:50 and hh+1:00, and hence inserting PI data via the web service may not always succeed in this timeframe. However, reads are not seen failing during this timeframe. I am wondering if there is a resolution to this other than changing the schedule of the data-insert job.

                              • Re: Timeout on PI RPC or System Call
                                Ahmad Fattahi

                                Can you spot any other error messages in the log file during that time period indicating that the calls to insert data are in fact timing out?


                                Another question is what resources, and how much of them, are being used by the PI ACE calculations after minute 50?

                                  • Re: Timeout on PI RPC or System Call

                                    I can see logs on the client side attempting a connection to the PI server, but no messages appear on the PI server when the client fails to connect. The only logs available on the PI server during that timeframe are from PI ACE, pisnapss, and piarchss.


                                    I can see logs on PI server only when client connects to it successfully.


                                    What should I check to see how many resources are being used?

                                      • Re: Timeout on PI RPC or System Call
                                        Ahmad Fattahi

                                        You mentioned in your earlier post that "From the logs, it is seen that PI ACE is using up the resources between hh:50 and hh+1:00". Could you elaborate some more on this assumption? I was talking about those resources you mentioned there. The main factors would be CPU and memory usage for which you can use Windows performance monitor (type perfmon in start menu).


                                        Confirming this assumption will tell us if that's actually the root cause.

                                          • Re: Timeout on PI RPC or System Call

                                            I am also encountering the same problem. I am retrieving time-series data using the PI web service. The service works fine, but a few calls return a similar error.


                                            Note: I am only performing read operations (GetArchieveData).


                                            I have also raised this point with OSIsoft tech support.


                                            Is this a bug or a known issue?


                                              • Re: Timeout on PI RPC or System Call

                                                It sounds like Rashmi is using his own web service, not the PI Web Services product. However, you might troubleshoot the same way.


                                                @Farooq & Rashmi


                                                As this seems to be more of a tech support issue (the service works fine most of the time, so there is no programming issue here), it is probably more suitable for tech support.


                                                However, as some hints (and questions): as Ahmad asked earlier, could you elaborate more on what happens during the time the service fails?


                                                It sounds very strange to me that the PI System should be extremely busy for 10 minutes because of ACE calculations. What are these calculations doing? How much CPU load? How many points are on the server? What server hardware? Is it a virtual machine? Is anything else going on on that machine during those 10 minutes?

                                                  • Re: Timeout on PI RPC or System Call

                                                    One troubleshooting tool that you could use is PISDK tracing.  You enable this from the AboutPI-SDK utility: set verbose mode and the level to 50 (200 is too much information for most purposes).  Set the time to '*+1h' and restart the app pool.


                                                    You can use the DebugView tool from TechNet for immediate feedback (it can save output to a file as well), or look at the file in the PIHOME\PISDK directory after you see some of the behavior.


                                                    This, in combination with the PI Server log and the local PINS log covering the same time period (the time leading up to an event is important in log files), will give a full picture of the state of the system.

                                      • Re: Timeout on PI RPC or System Call

                                        The usual connection timeout is 10-15 seconds.  On some systems it needs to be increased, up to 30 seconds for unusual network topologies.  However, increasing it to 200 seconds is probably not advisable.  If that increase has not solved the problem, I suggest returning the value to 15 seconds and continuing to search for the root cause.