Thanks for reaching out to us on PI Square.
The amount of time a full backup takes depends on factors beyond the information you've provided, such as the I/O rate of the disk that houses the archive data, the I/O rate of the disk receiving the backup, the network latency and bandwidth, and how much space five years' worth of data takes up for your organization. That last point is very important: five years of data could be 5 GB or 5 TB, depending on the number of tags, the frequency of the data, and your compression settings.
Thus, we can't really give you an estimate based on the information you've provided. If you don't already have a backup strategy in place, I would strongly encourage you to set one up immediately, regardless of how long it takes to copy the data from one location to another.
Thanks for the reply.
We do have a backup strategy: we create a full backup on a weekly basis, along with incremental backups on a daily basis.
Now we want to move these backups to removable disks, and for that we want to know how long copying the data will take.
I totally agree that we would need the I/O rate of the disk receiving the backup, the network latency, and the bandwidth, but a generic view would still be helpful.
To Rob's point, these actions are entirely dependent on the parameters that he mentions. There is no "generic view" as it is totally dependent on the size of data and speed of disks and network involved.
Can you just copy a month or two and obtain the average time per month? This assumes your archives are all about the same size.
Here are some rule-of-thumb numbers that I tend to use a lot: A typical PI value (32-bit int or float, no subsecond timestamp) takes 5 bytes in the data archive. A typical value that stores subsecond timestamps takes 12 bytes. Assuming reasonably large archives (GB or more range), the header and index data in an archive can be treated as taking up negligible space. Strings and blobs take more space in the data archive, but I generally assume the number of string/blob values being stored in PI is also negligible in a typical system.
If you can make some reasonable assumptions about your average data rate archived, you can calculate roughly the amount of disk you'll use. For example, say 10% of 100K events/sec sampled after exception/compression are archived: 10K events/sec archived * 12 bytes (typical worst case)/event * 86400 sec/day * 365 days/year * 5 years = ~17TB of disk. So, I'd say for a system with an average archive rate of 10K events/sec, you'd need to back up roughly 10-17 TB of disk. If your archive rate is half that, then I'd divide the result in half, etc.
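The back-of-envelope calculation above can be sketched in a few lines. The rates and byte sizes here are the illustrative assumptions from this post (10K archived events/sec, 12 bytes/event), not measured values from any real system:

```python
# Rough PI archive disk-usage estimate using the post's example assumptions.
archived_events_per_sec = 10_000   # assume 10% of 100K events/sec survive exception/compression
bytes_per_event = 12               # typical worst case: value with subsecond timestamp
seconds = 86_400 * 365 * 5         # five years of continuous archiving

total_bytes = archived_events_per_sec * bytes_per_event * seconds
print(f"{total_bytes / 2**40:.1f} TiB")  # prints 17.2 TiB
```

Halve `archived_events_per_sec` and the result halves, as noted above; the estimate scales linearly with the archive rate.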
How long that takes to back up would depend on the I/O rates of your hardware.
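To illustrate just how much the hardware matters, here is a hypothetical duration estimate using time = size / sustained throughput. The throughput figures below are made-up round numbers for comparison, not benchmarks of any particular device:

```python
# Hypothetical backup-duration estimate: duration = data size / sustained throughput.
size_bytes = 17 * 2**40  # ~17 TiB, matching the sizing example in this thread

# Made-up example throughputs in MB/s (decimal megabytes), not benchmarks.
scenarios = [("slow external disk", 35),
             ("local SATA disk", 150),
             ("fast network/SSD copy", 1000)]

for label, mb_per_sec in scenarios:
    hours = size_bytes / (mb_per_sec * 1_000_000) / 3600
    print(f"{label} at ~{mb_per_sec} MB/s: ~{hours:.0f} hours")
```

The spread (days versus hours for the same data set) is why there is no meaningful "generic" answer without knowing the actual I/O rates involved.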
BTW, if you don't know how many events/sec you normally archive, you can run "%PISERVER%\adm\piartool -as" at the command prompt (during typical operations at your site) to get a reasonable estimate.
Hope this helps.