For example, when asking for one tag over 3 years, what read speed in values per second would you expect to achieve?
The answer could vary based on your environment: the server specs, the network, and whether the client PC is local or remote. But the biggest variation depends not on the time range but on the density of the tag in question. Does the tag sample every minute, every 5 minutes, or 60 times a second? What's the typical compression rate over such a time range?
If you sample every minute and expect about 80% of the values to pass compression, that would be 1,152 values per day. If you sample 60 times a second with 95% of values passing compression, that would be almost 5 million values per day. This presents other issues. Let's say 3 years is 1,095 days. Then 1,152 values per day is about 1.2 million values over 3 years, but 5 million values per day is about 5.4 BILLION values over 3 years. The typical ArcMaxCollect value is 1.5 million, so anything beyond that requires chunking the request.
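The arithmetic above, and the chunking it implies, can be sketched in a few lines of Python (the helper names are mine; ArcMaxCollect is the PI tuning parameter mentioned above, and 1.5 million is the typical value quoted there):

```python
# Back-of-the-envelope archived-event counts for a 3-year query.
DAYS = 3 * 365  # 1095 days

def events_per_day(samples_per_sec: float, pass_ratio: float) -> float:
    """Events archived per day, given the raw sample rate and the
    fraction of samples that pass compression."""
    return samples_per_sec * 86_400 * pass_ratio

slow = events_per_day(1 / 60, 0.80)  # 1-minute samples, 80% pass -> 1,152/day
fast = events_per_day(60, 0.95)      # 60 Hz samples, 95% pass -> ~4.9M/day

print(f"slow tag: {slow * DAYS:,.0f} events over 3 years")  # ~1.26 million
print(f"fast tag: {fast * DAYS:,.0f} events over 3 years")  # ~5.4 billion

# Any request expected to return more than ArcMaxCollect events must be
# split into smaller time-range chunks; ceiling division gives the count.
ARC_MAX_COLLECT = 1_500_000
chunks = -(-int(fast * DAYS) // ARC_MAX_COLLECT)
print(f"fast tag needs at least {chunks:,} chunks")
```

So the same 3-year window is a single call for the slow tag but thousands of chunked calls for the fast one, which is exactly why tag density dominates the answer.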
But my main point is that not all tags are created equal, so you need to provide more information to get a qualified answer.
Thank you so much for your detailed reply.
Basically, what I'm trying to gauge at the moment is what an acceptable read speed to aim for would be, and I realise there are many different variables. But let's say we have a very powerful server holding the data, with no I/O problems, and a 10 Gbit network.
I've been testing a solution (a different historian) where I read 1 Hz data over 3 months for one tag. Currently I'm getting read speeds of around 250,000 values per second, and to be honest that's what you'd expect from an off-the-shelf SQL database, not a high-speed file-based historian. I can do many things to optimise this, like introducing some caching, but I wonder what you'd expect from PI in a similar situation. I was expecting closer to 1 million reads per second. Has anyone seen similar performance in PI?
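For comparing numbers like 250,000 vs. 1 million values per second across historians, a simple timing harness helps keep the measurement consistent. This is only a sketch: `read_range` is a stand-in for whichever historian client call you are testing, and the fake reader below just simulates one:

```python
import time

def measure_throughput(read_range, tag, start, end):
    """Time one bulk read and return (value_count, values_per_second)."""
    t0 = time.perf_counter()
    values = read_range(tag, start, end)  # stand-in for the real client call
    elapsed = time.perf_counter() - t0
    return len(values), len(values) / elapsed

# Fake in-memory reader standing in for a real historian connection:
def fake_read_range(tag, start, end):
    return list(range(start, end))  # pretend each int is an archived value

n, rate = measure_throughput(fake_read_range, "TAG1", 0, 1_000_000)
print(f"read {n:,} values at {rate:,.0f} values/s")
```

Timing only the read call itself (not connection setup or client-side rendering) makes the values-per-second figures from different historians directly comparable.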