I have a fairly large PI backfilling operation that I need to perform for a customer. The data is in an old Foxboro Historian (AIM), and we need to backfill 5 years of data for about 10K points. I'm not sure yet how many values per point, but the AIM database holds at least 40 GB of data.
I have read most of the relevant threads here on the boards and reviewed the "Backfilling with PI Config" KB article. Given the amount of data and the number of tags, it looks like PIConfig is not going to get it done, so I'm thinking of using the PI SDK to do the job. This is the basic workflow I'm considering:
1. Create the tags to backfill (using the SMT add-in for Excel)
2. Reprocess the old archives with piarchss to create primary records for the new tags (the archives will be converted to dynamic archives), as outlined in step 5 of the KB article
3. Create new archives as required (step 6 of the KB article)
4. Clear the snapshot value for all of the new tags (using the SDK to find the tags and then delete each current value; is this a good approach?)
5. Import the point values for each tag, oldest to newest, using the SDK. I may get the data from a CSV file (exported from the database) or through an ODBC query against the AIM database; we are evaluating that right now.
This is just a summary of what I intend to do. Does this sound reasonable? Is there another approach that would be faster or simpler? I would like to automate every step as much as possible.
I have a backup of the customer's PI database that is about a month old, so I will also need to merge the newly processed archives back into their system at some point. I may also want to use the utility in the future to move more (new) data from the AIM database to PI, so I am keeping that in mind as well.
Any comments, advice, or help would be appreciated. If someone has done something like this and is willing to share some code, that would be great. I will also share what I do here as we move forward to help others out.