6 Replies Latest reply on Nov 8, 2013 5:51 PM by ekuwana

    AF 2.6 Delete Snapshot Scenario

    Rick Davin

      I currently have an AFSDK 2.5 application to perform data maintenance for some of our controllers.  The app implements its own data fanning.  I want to investigate giving it better or simpler features offered in AF 2.6, especially regarding buffering.


      While our end-users think of a controller as a hardware device in the field, I think of it as a collection of PI tags associated by their controller type and serial number, e.g. all tags with names beginning with “3DTCW/12345:*”.  For this example, there are 150 individual tags.  Most are Float32.  Some are true strings.  Some are digital tags.  Some are integers.  Some have data as frequently as every 5 minutes; others have data once a year (a configuration tag, for example).  We do not use sub-second timestamps.


      Our environment has many PI Servers.  Some are stand-alone.  Some are in an HA collective.  Consider an example where I want to delete a range of data for a controller, that is to say, delete data in the same time range for all 150 associated tags.  This is not the same as the recent thread “Using AF SDK for deleting values in HA”.  Rather, I might have an HA collective with buffering, an HA collective without buffering, or a stand-alone PI Server.  I need one app that handles all of these possibilities.


      The current 2.5 app that accommodates all this safely does the following:

      1. Loop through all member servers and call my Delete method.  (For the context of this app, I consider a stand-alone PI Server to be a one-member system.)
      2. My Delete method does the following per member server:
         a. Set a class-level Boolean named IsBuffered = false.
         b. Fetch all related tags.
         c. Per tag:
            - Fetch the tag’s snapshot.
            - Set a bSnapshot flag if the snapshot falls within the time range.
            - If bSnapshot is false, retrieve data for the entire range.
            - If bSnapshot is true, retrieve data from the start of the time range, with the adjusted end time being 1 second before the snapshot.
            - Delete the retrieved data, i.e. tag.UpdateValues(arcvalues, AFUpdateOption.Remove).
            - If bSnapshot is false, continue to the next tag.
            - If IsBuffered is true, log a message and continue to the next tag.
            - Otherwise bSnapshot is true and IsBuffered is false, so attempt to remove the snapshot.  This is wrapped in a try block; if it fails, set IsBuffered to true in the catch block.
      3. When all tags are processed, continue to the next member server.
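
      As a rough sketch, the per-tag portion of the steps above looks something like this in 2.5-era AF SDK.  DeleteRangeForTag and isBuffered are my names from the description; the snapshot/archive calls are the standard PIPoint methods, though treat exact overloads as from memory:

```csharp
using System;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

// Class-level flag, set once a delete fails against a buffered member.
bool isBuffered = false;

void DeleteRangeForTag(PIPoint tag, AFTimeRange range)
{
    AFValue snapshot = tag.CurrentValue();
    bool bSnapshot = snapshot.Timestamp >= range.StartTime
                  && snapshot.Timestamp <= range.EndTime;

    // If the snapshot falls in the range, stop the fetch 1 second before it.
    AFTimeRange fetchRange = bSnapshot
        ? new AFTimeRange(range.StartTime,
                          new AFTime(snapshot.Timestamp.UtcTime.AddSeconds(-1)))
        : range;

    AFValues arcvalues = tag.RecordedValues(fetchRange, AFBoundaryType.Inside, null, false);
    if (arcvalues.Count > 0)
        tag.UpdateValues(arcvalues, AFUpdateOption.Remove);

    if (!bSnapshot)
        return;                 // snapshot untouched; nothing special to do
    if (isBuffered)
        return;                 // known buffered member: log and skip

    try
    {
        // Attempt to remove the snapshot itself.
        tag.UpdateValues(new AFValues { snapshot }, AFUpdateOption.Remove);
    }
    catch
    {
        isBuffered = true;      // buffered member: the snapshot is locked
    }
}
```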

      Yes, I know I could do a bulk call for the snapshots.  I originally had that.  But when running remotely across a continent against many member servers, it was taking upwards of 10 minutes, so the bulk-fetched snapshots could have been stale.  Fetching the snapshot per tag rather than in a bulk call is not the problem.


      The whole problem is all the checks and juggling around whether to apply special handling when deleting the snapshot, and fetching everything in the time range except the snapshot.  What is the best way to do this in 2.6?  With the default AFData.BufferOption set to BufferIfPossible, do I even have to do anything?  Do I have to loop over each member?  What if I am running against a PI Collective that has buffering turned off?  I imagine I would still have to fan over the members.


      Just from what little I have read today - and please, someone from OSIsoft, correct me - what I’m thinking is something along these lines:

      1. When the user clicks the delete button, I detect whether the requested PI Server has buffering enabled around the same time I check whether it’s a collective.  Something like PIServer.GetHealthStatus()?  Buffering is not running if its health is among {NotRunning, Disabled, NotConfigured}.  Right?
      2. If a collective has healthy buffering, should I not fan out across its members and instead just process it once?
      3. If a collective has buffering turned off, I probably must still fan out across its members.
      4. In either case, for the per tag delete, I can stop jumping through hoops to have special handling if the snapshot falls within my time range.
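
      In code, the decision in steps 1 through 3 might look something like the sketch below.  IsBufferingHealthy is a hypothetical helper wrapping whatever GetHealthStatus() actually returns (I have not confirmed its signature), and the collective member types are from memory:

```csharp
using OSIsoft.AF.PI;

PIServer server = new PIServers().DefaultPIServer;
bool isCollective = server.Collective != null;

// Hypothetical: treat NotRunning / Disabled / NotConfigured as "not buffering".
bool buffered = IsBufferingHealthy(server);

if (isCollective && !buffered)
{
    // Buffering off on a collective: fan the delete across the members myself.
    foreach (PICollectiveMember member in server.Collective.Members)
    {
        // connect to the member and run the per-tag delete against it
    }
}
else
{
    // Healthy buffering (or stand-alone server): one pass; PIBufSS fans for me.
}

bool IsBufferingHealthy(PIServer s)
{
    // Placeholder: inspect s.GetHealthStatus() per step 1 above.
    return true;
}
```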

      Any guidance is appreciated.

        • Re: AF 2.6 Delete Snapshot Scenario

          Hi Rick,


          In AF 2.6 with BufferIfPossible, if buffering is turned off then the AF SDK will try to send the data directly to the connected member (no fanning), where the behavior is the same as AF 2.5 UpdateValue/s.


          I'm wondering whether it would be more efficient, instead of checking for the snapshot, to simply not delete the last value/s: fetch the data of each tag within the time range, remove the last value/s from the fetched data (of each tag), then delete the remaining values.
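
          In rough code, something like this (a sketch only, using the standard PIPoint calls):

```csharp
// Fetch everything in the range, drop the trailing value(s), delete the rest.
AFValues values = tag.RecordedValues(range, AFBoundaryType.Inside, null, false);
if (values.Count > 1)
{
    values.RemoveAt(values.Count - 1);            // leave the last value alone
    tag.UpdateValues(values, AFUpdateOption.Remove);
}
```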

            • Re: AF 2.6 Delete Snapshot Scenario

              The downside is that the last (archive) value/s may remain; or maybe they can be taken care of in the next maintenance cycle?

                • Re: AF 2.6 Delete Snapshot Scenario
                  Rick Davin

                  Thanks for the reply, Eddy.  I could try that, but again, the efficiency of checking the snapshot is not the issue.  The app does a lot more than my original post let on.  There are options to transfer data in a time range from a source controller to a target controller, to mothball a controller for a time range (in which case the target tags are created), or to mothball to the new targets and then delete the same data from the source tags.  The basic operations boil down to (A) Copy from Source to Target controller, or (B) Delete from Source controller.


                  Again, PI doesn't have a concept of a controller; it only has a collection of individual tags.  For me to safely handle a controller, I must perform operations as a whole that might not be the most efficient for an individual tag but are considered safer for the controller as a whole.  For instance, the Transfer first attempts to copy data from source to target, and only if successful does it delete the data in the source.  On a per-tag basis, the most efficient way to do this would be to fetch that tag's source data, copy the fetched AFValues to the target, and then delete those same AFValues from the source.  But I can't do it that way because of the concept of a controller.  Instead, I must make sure that ALL individual tags were copied successfully.  If any one of them fails, I do not want to delete any data.  Thus the copy and the delete, both of which must retrieve data, are done separately.


                  Fetching the snapshot gives me important information, in contrast to your suggestion of just special-handling the last value.  I create a detailed log file, and if the snapshot's value is before the requested time range, I log an appropriate message.  If the time range is before the snapshot, then the copy and/or deletes should not cause a problem.  The only potential problem arises if the snapshot is within the time range: the delete might fail due to buffering, and with 2.5 I don't know that until I try something.  What I am hoping for in 2.6 is to be able to detect that potential problem before I issue a delete.  So my post is more about how to use the new features of 2.6.


                  Let me stress that the 2.5 application as-is runs extremely well under 2.6.  I haven't tested it in 2.6 against a collective yet, or with any new features.  My current test is against a stand-alone PI Server in Azure, and we only have 3 developers who might be on that box.  While my 2.5 runs may sometimes be across a slow WAN connection and take many minutes to step over all collective members, the 2.6 run on a stand-alone server in the cloud takes about 1.2 seconds.  So fetching the snapshot is important to the application where I must choose safety over efficiency.

                    • Re: AF 2.6 Delete Snapshot Scenario

                      Hi Rick, if I understand you correctly, for the case of the snapshot, "the delete might fail due to buffering" mainly because the PI SDK (by design) re-routes the update with delete mode directly to the PI Server, i.e. it does not go through PIBufSS, and hence it may fail if the point snapshot is locked.  Is that correct?


                      In 2.6, AF SDK deletes will go through PIBufSS, hence if everything works properly, they will not fail due to point locking.


                      However, as Chris and Denis have pointed out in a separate thread that you're probably aware of, there may still be other potential issues related to delete and buffering:




                      "Note that deletes could fail to produce desirable results on a buffered system if the events are not in-sync on all members, so, care by the client is required to ensure the data is synchronized and that time is given for archive and snapshot events to clear the queue to all collective members.  Otherwise, the collective members may become desynchronized.  Deleting the snapshot, for example, when it is actively being written by another interface/application, would not be recommended.  Like with PI SDK, you do not want to write to a point's snapshot via buffering if the interface responsible for writing the data to the point is not also buffered, because point locking would occur."

                        • Re: AF 2.6 Delete Snapshot Scenario
                          Rick Davin

                          Eddy, I am aware of the general nature of some of the new features and capabilities in 2.6, and that there could be potential issues.  That's all fine and dandy, but it brings me back to my original post.  Given these new capabilities, how would we specifically use them?  How would an application know whether or not PIBufSS is running and configured for the PI Server in question?  What would I do differently in my code to exploit these new features?

                            • Re: AF 2.6 Delete Snapshot Scenario

                              Hi Rick,


                              Firstly, if you want to make sure data goes through PIBufSS, you need to manually configure buffering security.  The PIBufSS that comes with AF 2.6 will include a buffering utility (GUI) to do this.  This utility can be launched from PSE.  Note that PSE, as well as the buffering utility, can show the status of buffering for a particular server.


                              You can also get the buffering status programmatically through the PIServer object, i.e. the PIServer.GetBufferStatus() method.


                              There are three different ways to specify the buffering mode (i.e. DoNotBuffer, BufferIfPossible, Buffer):


                              - in AFSDK.config, valid for all AF SDK applications running on the machine


                              - programmatically through AFData.BufferOption, valid for each application process


                              - in the UpdateValue/s methods


                              By default, upon installation, the mode in AFSDK.config is "BufferIfPossible".
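
                              In code, the two programmatic options look something like this (the AFSDK.config setting is edited by hand or via the utility; I am leaving out its exact element name):

```csharp
using OSIsoft.AF.Data;

// Process-wide default for this application:
AFData.BufferOption = AFBufferOption.Buffer;

// Or per call, overriding the default for just this update:
tag.UpdateValues(values, AFUpdateOption.Remove, AFBufferOption.BufferIfPossible);
```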


                              This means that if PIBufSS is configured and running, the application will go through PIBufSS (buffered and fanned) by default, so no code changes need to be made.


                              If PIBufSS is not properly configured or not running, then the AF SDK will send the data directly to the PI Server in this mode.  As I noted before, for a collective the data will not be fanned in this case.


                              If the application wants to handle fanning by itself when PIBufSS is down, it can either:


                              - check GetBufferStatus(), and handle fanning if the status is not OK, or


                              - set the mode to "Buffer", in which case the AF SDK will return an error and the app can handle fanning.


                              For better performance, I suggest the latter, since GetBufferStatus() makes an RPC to PIBufSS and hence will have some impact (though it may not be much, since it's local).
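
                              A sketch of that latter approach (the catch type is my assumption; the UpdateValues overload taking AFBufferOption is the per-call option from the list above):

```csharp
try
{
    // Buffer mode: fail outright rather than silently bypassing PIBufSS.
    tag.UpdateValues(values, AFUpdateOption.Remove, AFBufferOption.Buffer);
}
catch (PIException)
{
    // PIBufSS is down or not configured: fall back to fanning the delete
    // manually across the collective members.
}
```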


                              Hope this helps.