We have a system with around 1.5 million tags, most of which have compression and exception reporting turned off for several reasons. As a result, the archives receive values every minute for most of these tags and are filling at a rate of about 1 GB per hour!
If we were able to use compression, this would massively reduce the archive growth rate (and performance would benefit similarly), but as I say, there are reasons why we can't do this. So is it sensible to increase the archive size from the current 4 GB up to, say, 96 GB, giving us roughly 4 days per archive? Our PI Server currently has 32 GB of memory. Would we also have to increase that to, say, 128 GB?
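For what it's worth, the 4-day figure follows directly from the numbers above; a quick back-of-envelope sketch (assuming the 1 GB/hour fill rate stays constant, which a real workload may not):

```python
# Rough archive-lifetime estimate from the figures in the question.
# Assumption: the fill rate holds steady at ~1 GB/hour.
fill_rate_gb_per_hour = 1.0
archive_size_gb = 96.0

hours_per_archive = archive_size_gb / fill_rate_gb_per_hour
days_per_archive = hours_per_archive / 24

print(f"{days_per_archive:.1f} days per archive")  # 4.0 days per archive
```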
Essentially, what are the practical constraints on archive sizing?