I once had an issue like the one you are having now. My system is a TDC3000 connected to the PI Server through several interfaces (23), each with a certain number of tags (ranging from 115 to 150). I found that the tags that were losing data were on the interfaces with the larger tag counts. I reduced the tag count to less than 115 and this solved the problem.
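For what it's worth, redistributing the points under a per-interface cap can be scripted before editing the interface configurations. This is just a hypothetical sketch; the tag names are illustrative and the 115 threshold is the one that happened to work in my case:

```python
# Hypothetical sketch: split a flat tag list into per-interface groups,
# keeping every interface under the ~115-tag count that worked for me.
def split_tags(tags, max_per_interface=115):
    """Chunk tags so that no group exceeds max_per_interface."""
    return [tags[i:i + max_per_interface]
            for i in range(0, len(tags), max_per_interface)]

# Example: 300 illustrative tag names -> groups of 115, 115, and 70.
groups = split_tags(["TAG%04d" % n for n in range(300)], 115)
print([len(g) for g in groups])
```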
Hope it helps.
If you use the PI OPC Client tool to browse the OPC server for a tag that is displaying "COMM FAIL" what is the quality listed on the OPC Server?
If the quality there is COMM FAIL this is possibly an issue with your OPC Server.
If the value there shows "Good" but the tags are still reporting COMM FAIL then I think your open tech support case is your best bet.
Is there any news on this problem?
I have a question to clarify the situation: is the Modbus serial interface also pointing to your GE system, or to another system? Do I understand correctly that both the Modbus serial interface and the OPC interface show alternating good/bad values?
Regarding the OPC problems: "COMM Fail" and "No Result" sound like problems on the OPC side (or maybe the system behind the OPC server), not on the PI side. Please verify that a local OPC client on the OPC server machine can read data without going through the tunneller. If the same problems occur there as you see in your PI system, the problem is at the OPC server or the GE system. If all data is good, try the same test through the tunneller. If that works, your problem is likely on the PI side (and TechSupport is the best address to solve it); if it does not work, the tunneller is broken.
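That triage boils down to a simple decision table: test with a local OPC client first, then through the tunneller, and see where the first failure appears. A hypothetical sketch of the logic (the labels are mine, not PI or Matrikon terminology):

```python
# Hypothetical sketch of the triage described above. The two inputs are
# the results of manual reads: one with a local OPC client on the OPC
# server machine, one through the tunneller.
def triage(local_read_ok, tunneller_read_ok):
    """Point at the most likely broken component given the two tests."""
    if not local_read_ok:
        # Even a local client fails -> the problem is before the tunneller.
        return "problem at the OPC Server or the GE system"
    if not tunneller_read_ok:
        # Local reads work but tunnelled reads fail.
        return "the tunneller is broken"
    # Both reads work -> whatever is failing is downstream, on the PI side.
    return "problem is likely on the PI side (contact TechSupport)"

print(triage(local_read_ok=True, tunneller_read_ok=False))
```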
This issue has not been resolved yet. Below is what OSIsoft has said so far (quoted). I am trying to escalate this higher at OSIsoft to get more help.
"Here is everything I know:
1. When “Comm Fail” is written to a tag by the PI OPC interface, that means there was a communication failure with the OPC Server or with the PLC the tag pulls data from. In the SDK logs I collected I can see messages about communication issues with the OPC server, as well as some messages about communication with the PI Server.
2. I looked through the Modbus documentation to figure out why “No Result” would be written to a tag. From what I read, it seems to be written only to health tags that monitor output points, and Modbus writes “No Result” only when there are no output tags.
3. Looking through the message logs I can see disconnections and reconnections, and there are a lot of them. Some of them are between the PI Server and the interfaces. Some of them indicate communication issues with the data sources.
Everything I have found indicates intermittent networking issues. I can think of no reason upgrading the PI Server would cause networking issues if the network traffic level was the same before and after. Is there any additional information you can think of that might explain network degradation that coincides with a PI Server upgrade? For example, did more people start using it after the upgrade?"
As to your questions:
The Modbus and GE OPC are two separate interfaces and paths. They come into PI separately.
You are correct. Essentially every other data collection period alternates between good and bad values.
I have looked at the tunneller and cannot find/see an issue. I have a help ticket in with Matrikon to ask them to look to see if there are issues.
Our network is new. We have brand new GE server, brand new PI server and software as well as new fiber, copper, and switches. There are no other connectivity issues on the system that we can see.
This issue has been resolved, through much heartache and work on the part of OSIsoft engineers remotely and on site. As it turns out, we had several issues occurring that all played a part.
- The first issue was getting data from the GE Mark VI OPC to PI. We upgraded to a brand new system with GE computers, network switches, and a new PI system. Due to NERC CIP we had to use a tunneller to get data from GE to PI. We are collecting approximately 3500 points from GE. Previously we had GE Cimpi going directly to PI, with no tunneller or anything in between, and all was good. Once we added the tunneller and another server, we had to break all of our points up into separate scan classes, 5 to be exact. This fixed the problem. It appears we were overloading the tunneller or OPC client, i.e., we were clogging the pipes with too much data. For the past week all has been good on receiving data.
- We have a Modbus Serial interface. We had new computers installed there as well. For all intents and purposes, the interface and points seemed to be installed correctly; there did not appear to be any errors. The new computer had a new instance of the interface. It was not copied over like it should have been, so when all of the points were copied over they did not match the interface exactly. We were able to retrieve the old computer and found that the timing of collecting data was slightly different. We put the old timing into the new instance of the interface and everything is working very well. As my engineer said, it took finding the needle at the bottom of one haystack and comparing it to the needle at the bottom of the other. We got lucky, or we would still have this issue.
- We found we were having issues with totalizer and calculation tags. It turns out that several years ago the piadmin user was corrupted and deleted from the system. When we tried to change some settings on these tags, we could not; they had stopped working. We needed to add the piadmin user back into our system and change some security settings. All of our totalizer tags are currently working. Again, all is good.
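On the scan-class fix in the first bullet: spreading the ~3500 GE points evenly over 5 scan classes can be sketched like this. This is a hypothetical round-robin split; point names and class numbering are illustrative, and the actual assignment is of course done in the interface/tag configuration, not in code:

```python
# Hypothetical sketch: round-robin assignment of points to scan classes,
# so no single scan class (and no single poll) carries the whole load.
def assign_scan_classes(points, n_classes=5):
    """Map scan-class number (1..n_classes) to an even share of points."""
    classes = {c: [] for c in range(1, n_classes + 1)}
    for i, point in enumerate(points):
        classes[i % n_classes + 1].append(point)
    return classes

# 3500 illustrative point names -> 5 classes of 700 points each.
classes = assign_scan_classes(["GE_POINT_%d" % n for n in range(3500)])
print({c: len(pts) for c, pts in classes.items()})
```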
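On the Modbus timing mismatch in the second bullet: finding that "needle" amounts to diffing the old computer's interface settings against the new one's. A hypothetical sketch, with made-up setting names, since the real parameters live in the interface configuration:

```python
# Hypothetical sketch: compare two interface configurations and surface
# only the settings that differ. Setting names here are illustrative.
def diff_settings(old, new):
    """Return {setting: (old_value, new_value)} for every mismatch."""
    return {k: (old.get(k), new.get(k))
            for k in sorted(set(old) | set(new))
            if old.get(k) != new.get(k)}

old_cfg = {"ScanRate": "00:00:01", "Timeout": 5000, "Port": "COM1"}
new_cfg = {"ScanRate": "00:00:02", "Timeout": 5000, "Port": "COM1"}
print(diff_settings(old_cfg, new_cfg))  # {'ScanRate': ('00:00:01', '00:00:02')}
```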
If you have any questions please ask. I am not very good at working with PI yet, but I am getting there. I am not an IT person; I just happen to work with the PI system and was the one getting it fixed.