This is already a cloud of questions, so let us start with the manual input of "normal" PI data:
If you have no PI Trust, you will have to go via PI user authentication. You can restrict access to each and every object in the PI server, as you have already mentioned, so the "authenticated" user will have access to some tags, modules, etc. By enabling the audit trail you make sure that you keep track of who made which change, and when, to the point database or to point values.
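The audit trail itself is a server feature, but the kind of record it keeps always answers the same three questions: who, when, and what. Purely as an illustration of that pattern, here is a minimal append-only sketch in Python; the class and field names are hypothetical and are not the PI server's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    # Who made the change, when, and what changed -- the three
    # questions an audit trail has to answer.
    user: str
    timestamp: datetime
    target: str      # e.g. a point name or module path
    action: str      # e.g. "edit point attribute", "edit point value"
    old_value: str
    new_value: str

class AuditTrail:
    """Append-only log: records are added, never modified or removed."""
    def __init__(self) -> None:
        self._records: list[AuditRecord] = []

    def record(self, rec: AuditRecord) -> None:
        self._records.append(rec)

    def changes_by(self, user: str) -> list[AuditRecord]:
        return [r for r in self._records if r.user == user]

trail = AuditTrail()
trail.record(AuditRecord(
    user="jsmith",
    timestamp=datetime.now(timezone.utc),
    target="sinusoid",
    action="edit point value",
    old_value="12.3",
    new_value="12.7",
))
print(len(trail.changes_by("jsmith")))  # 1
```

The important property is the append-only discipline: an edit to a point value produces a new record rather than overwriting history, which is what lets you reconstruct who did what later.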
For AF you have to authenticate as well - as to whether there is something similar to the audit trail, I will ask the AF developers to comment on that.
But what you have to make sure is that no one has physical access to the machine or can log in with access at the file-system level. And since AF stores its data in a SQL database, you can apply the same security measures to the MS SQL Server.
More info regarding AF: I assume you are not using the MDB to store AF data. For SQL Server we have an audit DB starting with AF2. It keeps track of changes but is currently not exposed (though there are plans to expose it later). All the AF tables are restricted, but if you gain enough privileges on the SQL server you may be able to modify them.
If you are writing to attributes that refer to a PI tag, AF uses the credentials of the user who first connects to the PI server within the application.
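That first-connection behavior matters for auditing, because later writes by other users are attributed to the cached identity. A toy Python sketch of the pattern as described above (an illustration only, not AF internals; all names are made up):

```python
class PIConnectionCache:
    """Sketch of the described behavior: within one application, the
    first user to connect to the PI server establishes the credentials
    that are reused for all subsequent tag writes."""

    def __init__(self) -> None:
        self._credentials = None

    def connect(self, user: str) -> str:
        if self._credentials is None:
            self._credentials = user   # first connection wins
        return self._credentials       # later callers reuse it

    def write_tag(self, tag: str, value: float, user: str) -> str:
        # The write is attributed to the cached identity, not to `user`.
        return f"{tag}={value} written as {self.connect(user)}"

conn = PIConnectionCache()
print(conn.write_tag("sinusoid", 1.0, user="alice"))
print(conn.write_tag("sinusoid", 2.0, user="bob"))  # still attributed to alice
```

If per-user attribution on the PI side is a requirement, this is the kind of behavior you would want to verify and design around.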
Hope this helps,
Andreas, thanks for the information.
So I am pretty satisfied that we can track right up to AF as we would like (I found a nice feature in PI-ML that can track all changes inside the application too). The AF audit DB is interesting and should help close the loop on a full audit trail; I need to look into this part a bit more. I guess in the next version of AF everything behind the "temporary exposure limitations" will become exposed...
AF2.1 does not provide access to the AF audit tables.
AF Dev Team
Total security usually involves a layered approach. (Physical-Network-Host-Application-Data-Procedural)
The access controls and auditing features you mention are built into the backend servers and are especially appropriate for manual data.
In my experience with manual input, the application layer has been critical. A point-of-capture data cache is a key feature affecting data availability and integrity. For instance, Manual Logger holds tour data even after approval and transmission to PI. A local data cache drives in-situ validation and provides some comfort in "data ownership" along with a short-term disaster recovery mechanism.
Some of the best home-grown manual data solutions also make good use of a local cache (sometimes just simple data files). It is also common to implement middleware and target data input from browsers. Buffering can be used to manage the distribution of writes to collective members. But ultimately, robust user interaction at the application layer tends to be the key to data integrity (avoiding mistakes and dry-labbing of the data).
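The cache-plus-validation pattern described above can be sketched roughly like this. It is a simplified illustration, not Manual Logger's actual design; the engineering limits and field names are assumptions for the example:

```python
import json
import tempfile
from pathlib import Path

def validate(reading: dict) -> bool:
    """In-situ validation at point of capture: reject obviously bad
    entries before they ever reach the server."""
    value = reading.get("value")
    return (
        isinstance(value, (int, float))
        and bool(reading.get("tag"))       # must name a target tag
        and 0.0 <= value <= 1000.0         # assumed engineering limits
    )

class LocalCache:
    """Simple file-backed cache: captured data survives locally even
    after transmission, giving a short-term disaster-recovery copy."""

    def __init__(self, path: Path) -> None:
        self.path = path
        self.readings: list[dict] = []
        if path.exists():
            self.readings = json.loads(path.read_text())

    def capture(self, reading: dict) -> bool:
        if not validate(reading):
            return False                   # rejected at point of capture
        self.readings.append(reading)
        self.path.write_text(json.dumps(self.readings))
        return True

    def transmit(self, send) -> None:
        # Send everything, but keep the local copy (do not delete it).
        for reading in self.readings:
            send(reading)

cache = LocalCache(Path(tempfile.mkdtemp()) / "tour_data.json")
cache.capture({"tag": "FIC-101.PV", "value": 42.5})
cache.capture({"tag": "FIC-101.PV", "value": -5.0})  # fails validation
sent = []
cache.transmit(sent.append)
print(len(cache.readings), len(sent))  # 1 1
```

The two points doing the real work are that validation happens before the data is ever persisted, and that transmission does not purge the local file, which is what gives the "data ownership" comfort mentioned above.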
General comment: unlike protected SCADA networks, manual data often originates from perimeter networks that should not be implicitly trusted. As such, network- and host-based defenses are integral to PI System security. Our strategy is to enable flexibility in architecture and to leverage enterprise security infrastructure in our products.