
    Delay in new tags being replicated to Secondary Data Archive

    RolandRich

      I have an older PISDK application that creates PI tags and writes values to them. The PI Data Archive is an HA Collective of two servers (Primary and Secondary, both running PI 2015). The application uses the latest version of the PI Buffer Subsystem to fan and buffer data to both members of the collective.

       

      However, if the application creates a new tag and immediately attempts to write a value to it, the value does not reach the Secondary Data Archive. The PI message logs show that it takes about 9-10 seconds after creation on the Primary for the new tag to be replicated and created on the Secondary. During this window, the Secondary will not accept values for the tag.

       

      One simple workaround is to just wait 20 seconds after creating the tag before writing values to it. This gives the PI collective enough time to replicate the new tag to the Secondary (and presumably to register the new tag with the buffering subsystem on the client).
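
      For illustration, here's a minimal sketch of that workaround in Python. The create_pi_point and write_value helpers are hypothetical stand-ins for whatever PISDK calls the application already makes; the point is only the delay between creation and the first write.

          import time

          def create_pi_point(server, tag_name):
              # Hypothetical stand-in for your existing tag-creation call
              raise NotImplementedError

          def write_value(server, tag_name, value):
              # Hypothetical stand-in for your existing buffered write call
              raise NotImplementedError

          def create_and_write(server, tag_name, value, settle_seconds=20):
              # Create the tag on the Primary, then give the collective time
              # to replicate it to the Secondary before the first write.
              create_pi_point(server, tag_name)
              time.sleep(settle_seconds)
              write_value(server, tag_name, value)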

       

      However, this workaround is not particularly neat. I was wondering if anybody else had encountered this issue before, and if so, what workarounds they have for it.

       

      Thanks.

        • Re: Delay in new tags being replicated to Secondary Data Archive
          Kenji Hashimoto

          Please try the following.

          Open Collective Manager > select the Secondary member > try setting the "SyncPeriod" attribute to 0.

          I think "CommPeriod" is also related.

          • Re: Delay in new tags being replicated to Secondary Data Archive
            gregor

            Hello Roland,

             

            We've seen the issue you describe with interfaces that can create PI Points on demand, e.g. the PI Interface for Universal File and Stream Loading (UFL).

            When a new point is created, this happens on the Primary collective member first, and a sync record is created to ensure the same configuration change is applied to the Secondary collective members. Because the members of a PI Collective operate largely independently, no acknowledgement is required; a Secondary collective member can even be down for maintenance without users noticing.

            The PI Buffer Subsystem is not aware of whether the point has been created on the Secondary nodes yet.

            As far as the PI UFL Interface is concerned, I understand the issue is addressed by delaying events for recently created PI Points, which reduces the risk that a point doesn't yet exist on the Secondary nodes when the data is sent.

            So one thing you could do is queue events for recently created PI Points for a few minutes, or until you've verified the new points have been created on the Secondary nodes as well. One way to do this would be to sign up for point updates with every member of the receiving Collective. Likely the easier approach is to actively query all Collective members for the new PI Points until they are found on every member; a sketch follows.
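
            A rough sketch of that polling approach in Python, assuming a hypothetical point_exists(member, tag_name) lookup that searches for the point on one specific collective member rather than on the collective as a whole:

                import time

                def point_exists(member, tag_name):
                    # Hypothetical lookup against a single collective member
                    raise NotImplementedError

                def wait_for_point_on_all_members(members, tag_name,
                                                  timeout=120.0, interval=2.0):
                    # Poll every member until the new tag is visible everywhere,
                    # or give up once the timeout expires.
                    pending = set(members)
                    deadline = time.time() + timeout
                    while pending and time.time() < deadline:
                        pending = {m for m in pending
                                   if not point_exists(m, tag_name)}
                        if pending:
                            time.sleep(interval)
                    return not pending  # True once the tag exists on all members

            Only release queued events for the tag once this returns True; otherwise keep buffering them (or fall back to a fixed delay).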
