Bit of a general question.
When your PI Server archive files start hitting 500 GB, 1 TB and beyond, what are the fastest methods people are using (either directly or via your 3rd-party provider) for moving such large amounts of data around? I'm thinking of occasions such as forming a 4-node collective from a PI Server with 1 TB of archives, where you need to prepare the 3 secondary servers, or getting new hardware that means you need to add a secondary PI Server, etc.
Of course you can set up some batch jobs (e.g. robocopy) to run in the background for a few days, but it's a pain and I'm an impatient guy.
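For what it's worth, robocopy can be pushed a fair way before you need anything exotic, mainly via its multithreading switch. A minimal sketch of the kind of batch job I mean (the paths, the share name and the `*.arc` pattern are just placeholders for illustration, not anyone's actual setup):

```shell
:: Multithreaded copy of PI archive files to a secondary server (illustrative paths).
:: /MT:32  copy with 32 threads (often a big win on LAN links)
:: /E      include subdirectories, including empty ones
:: /Z      restartable mode, so an interrupted transfer can resume
:: /R:2 /W:5  limit retries and wait time instead of the huge defaults
:: /XO     skip files already up to date on the target (safe to re-run)
robocopy D:\PI\Archives \\SECONDARY1\PI\Archives *.arc /MT:32 /E /Z /R:2 /W:5 /XO /LOG:C:\Temp\archive_copy.log
```

Because of /XO the job is idempotent, so you can re-run it after a failure and it will only move what's missing or changed.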
I started doing some research and got sidetracked by the CERN LHC and the sheer amount of data it collects and transfers: 15 petabytes of data a year. A good read if you have some spare reading time...
http://lcg.web.cern.ch/LCG/public/data-processing.htm (1Mb @ 40,000,000 events per second)