1 Reply Latest reply on Aug 11, 2017 7:31 PM by jaevans

    Adjusting system configuration according to PI Cloud Connect new functionality

    Abel_PM

      Hello,

       

      We are using PI Cloud Connect to connect two manufacturing sites to a central location. I wonder which configuration scenario would be best for our operation now that there is new functionality in the PICC service, such as Replace mode. These are my comments about the possible scenarios we could implement going forward:

       

      Scenario 1:

           * Publisher PI Tags:
                 Compression= ON
                 Exception Max= 10 min
           * Publisher option: No delay
           * Subscriber write option: Insert Mode

       

      This is the scenario we have used in the past. The system was writing compressed data to the subscriber data archive (DA), since the compression settings of the PI tags are replicated to the subscriber PI point database. The disadvantage is that whenever we recalculate an analysis on the publisher DA, the resulting out-of-order values would not get written to the subscriber DA. This forced us to write those values manually via piconfig to the subscriber DA.
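
      For reference, the manual workaround looks roughly like the piconfig script below (the tag name, timestamp, and value are placeholders for illustration); we run something like this against the subscriber DA for each out-of-order value that did not propagate:

            @table piarc
            @mode edit
            @istr tag,time,value
            MyReplicatedTag,11-Aug-2017 08:00:00,42.5
            @quit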

       

       

      Scenario 2:

           * Publisher PI Tags:
                 Compression= ON
                 Exception Max= 10 min
           * Publisher option: No delay
           * Subscriber write option: Replace Mode

       

      When one of the updates to PI Cloud Connect was rolled out, our subscriptions were changed to Replace mode. In this case I noticed that out-of-order values I wrote manually in SMT on the publisher DA did get propagated to the subscriber DA. What would be the expected behavior in this scenario if I recalculate values with analytics?

      Also, in this scenario we observed a huge increase in the number of audit log files "pisnapssAuditDD_MMM_YY_..". We think it is caused by Replace mode, although it is difficult to tell since we only enabled the audit viewer a few days ago. I opened a case with tech support for this.

       

      Scenario 3:

           * Publisher PI Tags:
                 Compression= ON
                 Exception Max= 10 min
           * Publisher option: Delay 1 min (or No delay, if Replace mode is or will be implemented with compression at the subscriber DA)
           * Subscriber write option: Replace Mode

       

      I was reading some posts that mention that, with a delay, PICC reads data directly from the publisher DA's archive, and the subscription would then be in Replace mode (not doing compression at the subscriber DA). Would this allow me to recalculate analyses at the publisher data archive and propagate those values to the subscriber? And how would the audit trail behave?
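
      Once this is set up, my plan for checking whether compression is actually in effect at the subscriber is to spot-check the compression attributes of the subscriber points with piconfig and compare the archive density against the publisher. A rough sketch (the tag mask is just a placeholder):

            @table pipoint
            @mode list
            @istr tag
            @ostr tag,compressing,compdev,compmax,excdev,excmax
            MyReplicatedTag*
            @ends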

       

      It seems to me that, for our requirements, Scenario 3 is the best option going forward. It would be ideal if Replace mode allowed compression on the subscriber side. We would also need a fix for the huge increase in audit log files. For now, we are going back to Scenario 1 to keep the audit log files from filling the disk. I would appreciate your comments and opinions on this.

       

       

      Thanks in advance,

        • Re: Adjusting system configuration according to PI Cloud Connect new functionality
          jaevans

          Scenario 1:

          We'd actually still expect these out-of-order events to be written to the subscriber, and you'd have multiple values at the same timestamp. Depending on how you are viewing this data, you may only be seeing one data point at a timestamp when multiple exist. If that isn't occurring, we should take a closer look at why.
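
          One way to confirm is to list the raw archive events around that timestamp, for example with piconfig (the tag name, time range, and count below are only placeholders):

              @table piarc
              @mode list
              @istr tag,starttime,endtime,count
              @ostr tag,time,value
              MyReplicatedTag,11-Aug-2017 08:00:00,11-Aug-2017 08:00:05,10
              @ends

          If multiple events show up at the same timestamp there, the data did make it to the subscriber and the client tool is simply collapsing them in its display.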

           

          Scenario 2:

          There should be no difference here whether the values are written through SMT or through analysis. An increase in audit logs may be due to a known issue where compression is not used with Replace mode.

           

          Scenario 3:

          A delayed publication would not allow you to recalculate, as it pulls directly from the archives only once and so would not know that a recalculation occurred.