PI Connector for UFL Performance Info:
Performance of the PI Connector for UFL depends mainly on the hardware configuration: 100k ev./s can be reached with a powerful machine and a good scenario, and 50k ev./s for float points with solid hardware. The connector can use up to 4 CPU cores. For better performance:
- Use real hardware instead of a VM
- At least a 4-core, 64-bit CPU
- An SSD drive
- PI Server located on a different node with a good network connection
When CPU or RAM usage exceeds 95%, performance can be improved by upgrading the hardware.
To verify this, I tested the following scenario:
- VM, Win 10 x64, RAM 8GB, 8 cores
- PI Data Archive and PI AF on the same machine
- Only UFL writes data to the PI System
- 10k points
- 5 files x 1M ev. each -> Float points created by UFL
- The same simple ini file (attached), Interface DEB=1
Data.FILTER = C1=="*,*,*"
TagName = ["(*),*,*"]
Timestamp = ["*,(*),*"]
Value_Float32 = ["*,*,(*)"]
StoreInPI(TagName, ,Timestamp, Value_Float32, ,)
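For reference, here is a minimal Python sketch of how a test input file matching the layout the ini above parses (TagName,Timestamp,Value per line) could be generated. This is illustrative only: the file name, tag naming scheme, counts, and timestamp format are assumptions, scaled well down from the 10k points x 1M events used in the actual test.

```python
# Sketch: generate a sample input file in the TagName,Timestamp,Value
# layout parsed by the ini file above. All names and formats here are
# assumptions for illustration, not taken from the original test files.
import random
from datetime import datetime, timedelta

NUM_TAGS = 10          # scaled down from the 10k points in the test
EVENTS_PER_TAG = 100   # scaled down from 1M events per file

start = datetime(2024, 1, 1)
with open("ufl_sample.csv", "w") as f:
    for i in range(EVENTS_PER_TAG):
        ts = (start + timedelta(seconds=i)).strftime("%Y-%m-%d %H:%M:%S")
        for tag in range(NUM_TAGS):
            # one float event per tag per second, random value
            f.write(f"UFL.Test.{tag:05d},{ts},{random.uniform(0, 100):.3f}\n")
```

Each line then has exactly the three comma-separated fields that the Data.FILTER, TagName, Timestamp, and Value_Float32 extraction patterns expect.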
PI Connector for UFL results:
Connector Started: 5:22:03 AM
PI Data Archive started receiving data: 5:22:25 AM (creation of 10k points -> ~20s)
The picture above shows the number of events received by the PI Snapshot Subsystem from the PI Connector for UFL during data collection. Below you can see the output of 'piartool -ss', where the most important value for us is Snapshot Events - the number of events received in the last 5 seconds.
PI Interface for UFL results:
Interface started: 5:42:58 AM
PI Data Archive started receiving data (after point creation): 5:48:13 AM (creation of 10k points -> ~315s, slowed down by writing each point creation to the message log)
PI Data Archive received all the data: 5:50:30 AM (data collection of 5M events -> ~36k ev./s)
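The ~36k ev./s figure for the Interface can be sanity-checked directly from the timestamps above (assuming both times fall on the same day):

```python
# Recompute the Interface data rate from the logged timestamps above.
from datetime import datetime

data_start = datetime.strptime("5:48:13 AM", "%I:%M:%S %p")
data_end = datetime.strptime("5:50:30 AM", "%I:%M:%S %p")

elapsed = (data_end - data_start).total_seconds()  # 137 seconds
rate = 5_000_000 / elapsed                         # events per second
print(f"{elapsed:.0f} s -> {rate / 1000:.1f}k ev./s")  # prints "137 s -> 36.5k ev./s"
```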
As you can see, the PI Connector for UFL reaches roughly twice the data rate of the Interface. Note that with multiple instances, the data rate of the PI Connector for UFL is effectively shared among them, whereas with the Interface the rate is more likely to multiply.
ufl_simple.ini.zip 560 bytes