3 Replies Latest reply on Jan 31, 2013 11:29 PM by Lonnie Bowling

    Proficy Historian backfill - to - PI

    BlairW

      Hi

       

      We have a client who is moving from Proficy to PI 2012. Everything PI-related has been set up and is running correctly. However, we need to extract a significant amount of data from Proficy and import it into PI. In terms of data quantity, we are talking about 4500 - 5000 tags with data spanning from as far back as 2002 up until now.

       

      The client has provided us with a spreadsheet that contains Proficy tags and their corresponding PI tags. We have created a spreadsheet that uses the Proficy SDK and the PI SDK to export data from Proficy and insert it into PI. We have also created a small .NET application that does the same thing. The main issue is that both applications run extremely slowly! At the current rate of processing it will take about a month for all of the data to be pulled across into PI.

       

      From what we can see, there are two significant bottlenecks:

       

      1: When we query the Proficy data (we are querying a month at a time - but we have played around with different time ranges).

       

      2: When looping through all of the Proficy data (a collection of Proficy values) to create a PIValues collection.

       

      So, based on this I have a couple of questions: Are we doing this the right way? Is there a better / more efficient / quicker way to backfill Proficy data into PI?

       

      Thanks

       

      Blair


        • Re: Proficy Historian backfill - to - PI
          Zev.Arnold@Wipro

          Blair,

           

          That sounds like more of a Proficy question than a PI question.  The approach I've used in similar projects is to write a custom application *only* to read data from the source system.  The custom app outputs this data into a flat file in PI UFL interface format to be uploaded later.  This helps significantly in verifying that you've really transferred all the data, and it reduces the footprint of your custom code, which reduces your risk.  By using PI UFL instead of your custom app, you can also be sure that things like buffering and n-way fanning are handled correctly.
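          The flat-file idea can be sketched in a few lines. The real extraction code would be .NET against the Proficy SDK, but the file-writing side is language-agnostic; this Python sketch assumes a simple comma-delimited layout of tag name, timestamp, value, which is one layout a PI UFL interface INI file can be configured to parse. The field order and timestamp format here are illustrative choices, not the only valid ones:

```python
def format_ufl_line(tag, timestamp, value):
    """Format one archive value as a delimited line for a UFL-style flat file.

    The exact layout (field order, delimiter, timestamp format) is whatever
    the UFL INI file is written to parse; this is just one plausible choice.
    """
    return "{0},{1},{2}".format(
        tag, timestamp.strftime("%d-%b-%Y %H:%M:%S"), value
    )


def write_export_file(path, values):
    """Write an iterable of (tag, timestamp, value) tuples to a flat file.

    `values` would come from the Proficy query loop; writing to disk first
    decouples the slow extraction from the PI upload and leaves an audit
    trail for verifying that everything was transferred.
    """
    with open(path, "w") as f:
        for tag, ts, val in values:
            f.write(format_ufl_line(tag, ts, val) + "\n")
```

          The payoff is that the custom code only ever reads; the upload is handled by a standard, well-tested interface.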

           

          -Zev

            • Re: Proficy Historian backfill - to - PI
              BlairW

              Hi Zev

               

              Thanks for your feedback. We are currently looking into a way of automatically exporting data from Proficy (using one of the Proficy extraction tools). The only issue with this is that the Proficy tag names are different from the PI tag names.
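              Since the client already supplied a Proficy-to-PI tag mapping spreadsheet, the rename could be folded into the export step rather than blocking it. A minimal sketch, assuming the spreadsheet has been saved as a two-column CSV (Proficy name, then PI name; the tag names below are made up for illustration):

```python
import csv


def load_tag_map(csv_path):
    """Read a two-column CSV (Proficy tag, PI tag) into a dict."""
    with open(csv_path, newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f) if row}


def to_pi_tag(tag_map, proficy_tag):
    """Translate a Proficy tag name to its PI equivalent.

    Falls back to the original name if no mapping exists, so unmapped
    tags are easy to spot in the output file rather than silently lost.
    """
    return tag_map.get(proficy_tag, proficy_tag)
```

              Each exported value would then be written under `to_pi_tag(tag_map, proficy_tag)` instead of the raw Proficy name, so the flat file already carries the correct PI tag names.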

               

              Also, just to clarify - the last performance bottleneck isn't pushing the populated PIValues collection into PI. That part is extremely fast. The bottleneck is looping through all of the values retrieved from Proficy to populate the PIValues collection. Your suggestion is to loop through the collection of Proficy values and output them to a CSV file - I'm not sure this will solve the performance issue we are having. The data is going into PI correctly - it's just taking ages to loop through the collection of Proficy values.

               

              Thanks for your help

               

              Cheers

               

              Blair

                • Re: Proficy Historian backfill - to - PI
                  Lonnie Bowling

                  Hi Blair,

                   

                   I did this sort of thing for a client last year and ran into the same type of issues.  You have identified where the problems are, and I don't think there is a single simple fix.  Your approach is the same one I took, and I was successful, but it took a while (about two weeks for the same amount of data you are talking about).  Before I started the run, it took several weeks to figure out the fastest way to do it so it would not take months.  I optimized and even did parallel processing, reading from the old historian and writing to PI on different threads.  This maxed out my CPU usage even while I was waiting on data queries.  You can also run a profiler on your code to see where the bottlenecks are.  Make sure you are not being limited by hard drive access or a slow network connection - these are hardware issues that often have a simple fix.
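                   The read/write overlap described above can be sketched with a bounded queue: one thread reads chunks from the source historian while the main thread writes to the destination, so a slow query no longer stalls the insert side. This Python sketch is structural only - `read_chunks` and `write_chunk` are stand-ins for the real Proficy and PI SDK calls, which would be .NET in practice:

```python
import queue
import threading


def pump(read_chunks, write_chunk, max_buffered=4):
    """Overlap reading and writing by running them on separate threads.

    read_chunks: iterable yielding chunks (e.g. one month of values each).
    write_chunk: callable that inserts one chunk into the destination.
    Returns the number of chunks written.
    """
    buf = queue.Queue(maxsize=max_buffered)  # bounded, so reads can't run away
    done = object()  # sentinel marking end of input

    def reader():
        for chunk in read_chunks:
            buf.put(chunk)  # blocks if the writer falls behind
        buf.put(done)

    t = threading.Thread(target=reader)
    t.start()
    written = 0
    while True:
        chunk = buf.get()
        if chunk is done:
            break
        write_chunk(chunk)
        written += 1
    t.join()
    return written
```

                   The bounded queue keeps memory flat even when one side is much faster than the other, and the same pattern extends to several reader threads if the source historian tolerates concurrent queries.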

                   

                  Lonnie