2 Replies Latest reply on Jun 13, 2017 7:03 PM by Narendra09

    General architecture, buffering & compression mechanism

    Narendra09

      I have three questions:

       

      1)

      Is it required to configure the PI Interface on a separate node? What are the drawbacks if we configure the interface on the server?

      What is the standard recommendation?

       

      2)

      Data passes from the snapshot table to the data archives, with a buffering mechanism in between. Do we have to configure it manually, or is it default behavior of the Data Archive?

       

      3)

      Does data compression require configuration, or is it managed by default by the Data Archive?

        • Re: General architecture, buffering & compression mechanism
          kholstein

          Hi Narendra,

           

          With regards to your questions:

           

          1) It is not required to run the PI Interface on a separate node from the PI Data Archive, but doing so provides an additional layer of protection against data loss. With the PI Interface on another node, you can enable buffering, which stores data locally on the interface node whenever the PI Data Archive is down (offline or unreachable for any other reason). If the interface runs on the same machine as the PI Data Archive and something happens to that machine, the interface goes down with it, and you lose data for that period. Separate interface nodes also make it easier to keep collecting data during system maintenance, upgrades, and so on. Some additional resources that shed more light on this are topics like data loss vs. data availability, as well as the PI Interface Node Architecture - White Paper.
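          Conceptually, buffering on an interface node behaves like the toy sketch below. This is not the actual PI Buffer Subsystem, just an illustration of the idea: events are queued locally, and anything that cannot be delivered while the server is down is forwarded in order once it comes back.

```python
from collections import deque

class BufferedSender:
    """Toy model of interface-node buffering: queue events locally,
    forward them in order, and retry later if the server is down."""

    def __init__(self, send_to_server):
        self.send_to_server = send_to_server  # callable; raises ConnectionError on failure
        self.buffer = deque()

    def send(self, event):
        self.buffer.append(event)  # queue first so ordering is preserved
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.send_to_server(self.buffer[0])
            except ConnectionError:
                return  # server still unreachable; keep events for the next attempt
            self.buffer.popleft()  # delivered; drop from the local buffer
```

If the interface ran on the same machine as the server, a machine failure would take out the sender and its buffer together, which is the data-loss scenario described above.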

           

          2) The "buffering" behavior that occurs between the snapshot and the archive (known as the event queue) is automatic; you do not need to configure it, aside from specifying the location of your event queue files, which is done during PI Data Archive installation.

           

          3) Compression has default values but can be changed on a point-by-point basis. Depending on what the data is, how much granularity you need, and the precision of the instruments/sensors collecting it, you may want to tighten compression (retain more values) or loosen it (store fewer values that still capture the overall shape/trend of the data within the limits of your needs).
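          For intuition, the Data Archive's compression test is a swinging-door style algorithm driven mainly by the compression deviation (CompDev) point attribute. The sketch below is a simplified, illustrative version in Python, not OSIsoft's actual implementation; it assumes strictly increasing timestamps and ignores CompMin/CompMax time limits.

```python
def swinging_door(points, comp_dev):
    """Simplified swinging-door compression.

    points   -- list of (time, value) with strictly increasing times
    comp_dev -- deviation tolerance: archived points must let every
                skipped value be reconstructed within +/- comp_dev
    Returns the subset of points that would be archived.
    """
    if len(points) <= 2:
        return list(points)
    archived = [points[0]]
    held = points[1]  # most recent value, not yet archived
    t0, v0 = archived[-1]
    # slopes of the two "doors" opening from the last archived point
    max_slope = (held[1] + comp_dev - v0) / (held[0] - t0)
    min_slope = (held[1] - comp_dev - v0) / (held[0] - t0)
    for t, v in points[2:]:
        t0, v0 = archived[-1]
        slope = (v - v0) / (t - t0)
        if slope > max_slope or slope < min_slope:
            # new value falls outside the doors: archive the held point
            archived.append(held)
            t0, v0 = held
            max_slope = (v + comp_dev - v0) / (t - t0)
            min_slope = (v - comp_dev - v0) / (t - t0)
        else:
            # narrow the doors so all skipped points stay within tolerance
            max_slope = min(max_slope, (v + comp_dev - v0) / (t - t0))
            min_slope = max(min_slope, (v - comp_dev - v0) / (t - t0))
        held = (t, v)
    archived.append(held)  # the most recent value is always kept
    return archived
```

          A steady ramp compresses down to its endpoints, while a spike forces extra points to be archived; tightening comp_dev keeps more of the raw data.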
