We have an OPC DA Interface running offshore with 138 floats and 172 digitals, advised at a 1 s refresh rate, and I have assumed that 10% of values pass exception and get sent to PI, which is located onshore. We have a 2 Mbit link, which equates to 16 mbit required in the Bandwidth Calculation spreadsheet, with a latency of 175 ms.
I am doing a bandwidth sizing calculation to see what the current bandwidth usage is and what the impact would be of increasing the tag count from the current 310 to 2,800.
The calculation in the spreadsheet says that the current 310 tags will generate 1,001 bytes of data per second under normal operation.
As a sense check on this: 310 tags * 10% passing exception = 31 events/second sent to PI. If each event consists of, say, 14 bytes (4-byte value, 8-byte timestamp, 2-byte status), that comes to 434 bytes/second plus the message header. So 1,001 bytes seems high.
If I do the same calculation for 2,800 tags manually, I get 2,800 tags * 10% * 14 bytes/event = 3,920 bytes/second, not the 7,920 bytes calculated by the spreadsheet.
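To make the comparison concrete, here is a small sketch of the sense check above, using my assumed 10% exception rate and 14 bytes/event payload, and working backwards from the two spreadsheet figures quoted above to see what per-event size they imply:

```python
# Sense-check of the Bandwidth Calculation spreadsheet figures.
# Assumptions (mine, not from the spreadsheet): 10% of tags pass
# exception each second, and each event payload is 14 bytes
# (4-byte value + 8-byte timestamp + 2-byte status), excluding
# any message header.

EXCEPTION_RATE = 0.10
BYTES_PER_EVENT = 14

def events_per_second(tags: int) -> float:
    """Events/second sent to PI at the assumed exception rate."""
    return tags * EXCEPTION_RATE

def manual_bytes_per_second(tags: int) -> float:
    """Manual estimate: payload only, no header overhead."""
    return events_per_second(tags) * BYTES_PER_EVENT

# (tag count, bytes/second reported by the spreadsheet)
cases = [(310, 1001), (2800, 7920)]

for tags, spreadsheet in cases:
    manual = manual_bytes_per_second(tags)
    # Per-event size the spreadsheet figure would imply
    implied = spreadsheet / events_per_second(tags)
    print(f"{tags} tags: manual {manual:.0f} B/s, "
          f"spreadsheet {spreadsheet} B/s, "
          f"implied ~{implied:.1f} bytes/event")
```

Running this shows the spreadsheet figures imply roughly 28-32 bytes per event rather than 14, i.e. about double, which is what prompts the question below.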
Does anyone know why the spreadsheet figure is roughly double what you would expect? Does each event now carry a larger overhead than it used to?