MvanderVeeken

Measuring performance across different OSIsoft Data Access products

Blog Post created by MvanderVeeken on Feb 22, 2012

OSIsoft provides several data access methods for PI System data. Most of us are familiar with them from our day-to-day PI System development efforts:

  • PI Web Services
  • PI SDK
  • AF SDK
  • PI OLEDB
  • PI OLEDB Enterprise
  • AF SDK (RDA)

There is a good reason for having these multiple data access methods: different circumstances call for different data access products. If you are developing your own high-performance PI interface, you might want to use the PI SDK for performance reasons. If you are developing a Silverlight app, you will probably use PI Web Services because of its ease of access. If you are working on more of a BI product and really want to cross-examine your data, then PI OLEDB or PI OLEDB Enterprise could be your method of choice.

 

It's difficult to state beforehand when you should use which product; it really depends on the environment, goal, and architecture.

 

One thing that can be measured is the performance of the different data access products in your architecture and situation. For the last couple of weeks I've been working on a concept that should make measuring the performance of the different data access methods a walk in the park. In this post, I want to elaborate a bit on this concept and ask for your opinion.

 

Introducing PI System DataAccess Profiler

 

The goal of this concept is to create a 'one-stop shop' for all performance measurements involving PI System data access technologies. But why stop there: why not also measure the performance of different programming paradigms, such as sequential, async, multithreaded, and parallel programming?
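
To make that paradigm comparison concrete, here is a minimal sketch (in C#, like the application itself) of what such a measurement boils down to: the same placeholder 'data call' is timed once in a plain sequential loop and once fanned out with Parallel.For. The GetSnapshot method is only a stand-in for a real PI SDK or PI Web Services call, not actual data access code.

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class ParadigmComparison
{
    // Stand-in for a real data access call (PI SDK, PI Web Services, ...);
    // here it only simulates a latency-bound request.
    static void GetSnapshot()
    {
        Thread.Sleep(50);
    }

    static void Main()
    {
        const int iterations = 20;

        // Sequential: a plain for loop, one call after the other.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            GetSnapshot();
        sw.Stop();
        Console.WriteLine("Sequential: {0} ms", sw.ElapsedMilliseconds);

        // Parallel: the Parallel Extensions in .NET 4.0 spread the calls over the thread pool.
        sw = Stopwatch.StartNew();
        Parallel.For(0, iterations, i => GetSnapshot());
        sw.Stop();
        Console.WriteLine("Parallel:   {0} ms", sw.ElapsedMilliseconds);
    }
}

For a call that is dominated by network or server latency, the parallel version typically finishes in a fraction of the sequential time, which is exactly the kind of difference this profiler is meant to expose.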

 

Some example use cases I have in mind:

  • You want to get insight into the performance of a data access method in your (network) environment.
  • You want to compare the performance of PI SDK vs. PI Web Services for data retrieval in order to make an informed architecture decision.
  • You want to know the performance of PI OLEDB against all the PI Servers in your enterprise.
  • You are creating a new .NET product and are wondering whether multithreading or parallel programming would give you the highest data throughput to your PI Server using the PI SDK.
  • ...

Overview

 

The main idea of the application is to generate detailed performance reports according to your configuration. You create an 'Execution Plan', which holds all the information.

 

An Execution Plan holds one or more 'Connections' and one or more 'Execution Groups'. A Connection is a dedicated connection using a certain data access product; for instance, you can configure one or multiple PI SDK or PI Web Services connections. The application should support PI Web Services, PI SDK, AF SDK, PI OLEDB, and AF SDK RDA. An Execution Group is dedicated to a certain Connection and consists of Operations. The application supports the following Operations (for each connection type):

  • Get Product/Server Version
  • Get Snapshot
  • Get Archive Values
  • Get Summaries

These are the basic operations that all Data Access products support, which lets us compare performance across them. (A sketch of how such a plan might look as an object model follows below.)
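
As a mental model, an Execution Plan could be represented by an object model along these lines. The class and member names below are my own illustration of the structure just described, not the application's actual code.

using System.Collections.Generic;

// Illustrative object model only; the actual application's classes may differ.
enum ConnectionType { PIWebServices, PISDK, AFSDK, PIOLEDB, AFSDKRda }
enum OperationType  { GetProductVersion, GetSnapshot, GetArchiveValues, GetSummaries }
enum ExecutionType  { Sequential, Parallel, Async, MultiThreading }

class Connection
{
    public string Name { get; set; }             // e.g. "PI Web Services to PISRV101"
    public ConnectionType Type { get; set; }
    public string Target { get; set; }           // server name or service URL
}

class Operation
{
    public OperationType Type { get; set; }
    public List<string> Requests { get; set; }   // the requests (e.g. tag names) configured for this operation
}

class ExecutionGroup
{
    public string Name { get; set; }
    public Connection Connection { get; set; }   // the dedicated connection this group runs against
    public int Iterations { get; set; }          // how many times the whole group is executed (default 1)
    public ExecutionType ExecutionType { get; set; }
    public List<Operation> Operations { get; set; }
}

class ExecutionPlan
{
    public List<Connection> Connections { get; set; }
    public List<ExecutionGroup> Groups { get; set; }
}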

 

The application looks like this when you open it. It presents you with an empty Execution Plan in the 'Execution Plan Explorer' on the left pane. The middle pane is where the report will show up, and the right pane is the 'object explorer', where you can make configuration changes. The lower pane is a log pane, where informational and error messages appear. The toolbar on top lets you:

  • Create a new Execution Plan
  • Open an existing Execution Plan
  • Save the current Execution Plan
  • Save the current Execution Plan As (specify filename)
  • Run the current execution plan and generate a report
  • Save the generated report to HTML
  • Get information about PI System Data Access Profiler

[Image: screen1.png]

 

 

 

Creating a Performance Report

 

To create an Execution Plan, we first start by adding one or more connections.

 

[Image: screen2.png]

 

After that, we can configure the connection (using the object explorer). You can rename a connection to make it more descriptive.

 

[Image: screen3.png]

 

After that, we can add our first execution group, using one of the previously defined connections.

 

[Image: screen4.png]

 

Once we have our Execution Group, we can configure it. We can rename it to make it more descriptive, for instance 'PISDK to PISRV101 Parallel'. We can also specify 'Iterations': this number indicates how many times the entire group is run during the performance test. The default is 1, but if you really want to see how it performs when getting data a few hundred times, you can configure that here. The ExecutionType setting configures how these iterations are executed (a rough sketch of how each mode might dispatch the iterations follows the list below). You can choose from the following:

  • Serial (will be renamed 'Sequential'). Uses a typical for loop to iterate.
  • Parallel. Uses the Parallel Extensions of the .NET Framework to execute the iterations in parallel.
  • Async. Uses the asynchronous mechanism, if available for this particular Data Access product. Works well with, for instance, PI Web Services or the PI SDK.
  • MultiThreading. Spawns a new thread for every iteration.
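
The sketch below shows how an Execution Group might dispatch its iterations for each ExecutionType. It is illustrative only: the Async case is approximated with .NET 4.0 Tasks here, because the real asynchronous mechanism depends on the data access product being used.

using System;
using System.Threading;
using System.Threading.Tasks;

// Same ExecutionType values as in the earlier object model sketch.
enum ExecutionType { Sequential, Parallel, Async, MultiThreading }

static class GroupRunner
{
    // Runs one iteration delegate 'iterations' times, using the chosen execution style.
    public static void Run(ExecutionType type, int iterations, Action iteration)
    {
        switch (type)
        {
            case ExecutionType.Sequential:
                // A typical for loop: one iteration after the other.
                for (int i = 0; i < iterations; i++)
                    iteration();
                break;

            case ExecutionType.Parallel:
                // Parallel Extensions (.NET 4.0): iterations are spread over the thread pool.
                Parallel.For(0, iterations, i => iteration());
                break;

            case ExecutionType.Async:
                // Approximated with Tasks; the real async mechanism depends on
                // the data access product (e.g. the product's own asynchronous calls).
                var tasks = new Task[iterations];
                for (int i = 0; i < iterations; i++)
                    tasks[i] = Task.Factory.StartNew(iteration);
                Task.WaitAll(tasks);
                break;

            case ExecutionType.MultiThreading:
                // A new dedicated thread per iteration.
                var threads = new Thread[iterations];
                for (int i = 0; i < iterations; i++)
                {
                    threads[i] = new Thread(() => iteration());
                    threads[i].Start();
                }
                foreach (var t in threads)
                    t.Join();
                break;
        }
    }
}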

[Image: screen6.png]

 

After we have configured our Execution Group, we can add Operations to it. You can add as many different Operations as you like, and the same goes for adding more Execution Groups.

 

[Image: screen5.png]

 

Now we can configure the operation. Here is an example of the 'Get Snapshot' configuration

 

[Image: screen7.png]

 

And here is an example of configuring and adding Requests to a 'Get Summary Data' operation. Again, you can add as many as you like.

 

[Image: screen8.png]

 

Here is an example of running an Execution Plan that uses PI Web Services to get snapshots. In this example we compare the results of calling 'GetPISnapshotData' 20 times sequentially and 20 times in parallel. You can see the report (in HTML) in the middle pane. Before executing the Execution Plan, the application performs latency checks (ping) and a traceroute to the target server to get some insight into the network performance. You also get a summary of the performance of the different Execution Groups (in this case, the same operation executed once sequentially and once in parallel). You can clearly see the difference in performance when comparing Parallel vs. Sequential.
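
For reference, the latency check itself needs nothing more exotic than the Ping class from System.Net.NetworkInformation. A minimal sketch (with a made-up server name) looks like this; the real application wraps the results into the report.

using System;
using System.Net.NetworkInformation;

class LatencyCheck
{
    static void Main()
    {
        const string host = "PISRV101";   // made-up target server name

        using (var ping = new Ping())
        {
            for (int i = 0; i < 4; i++)
            {
                PingReply reply = ping.Send(host, 1000);  // 1000 ms timeout
                Console.WriteLine(reply.Status == IPStatus.Success
                    ? string.Format("Reply from {0}: {1} ms", reply.Address, reply.RoundtripTime)
                    : "Ping failed: " + reply.Status);
            }
        }
    }
}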

 

[Image: screen10.png]

 

Here are some more details from the generated report (Sequential group)

 

[Image: screen11.png]

 

and for the Parallel group

 

[Image: screen12.png]

 

You can imagine using this to generate a performance report for PI SDK vs. PI Web Services, or vs. AF SDK, or any other combination. You can also use it to see the performance difference when using the same technique against different PI Servers in your organization. This could give great insight into choosing the right data access method for your environment or project.
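
The per-group summaries in such a report come down to simple aggregation of the measured iteration times. Here is a small sketch with dummy numbers, just to illustrate the kind of statistics involved; the group names and durations are invented for the example.

using System;
using System.Collections.Generic;
using System.Linq;

class GroupSummary
{
    static void Main()
    {
        // Dummy iteration durations in milliseconds, purely to illustrate the aggregation.
        var results = new Dictionary<string, List<double>>
        {
            { "PI Web Services - Sequential", new List<double> { 180, 175, 190, 185 } },
            { "PI Web Services - Parallel",   new List<double> {  60,  55,  70,  65 } }
        };

        foreach (var group in results)
        {
            Console.WriteLine("{0}: min {1} ms, avg {2:F1} ms, max {3} ms, total {4} ms",
                group.Key, group.Value.Min(), group.Value.Average(),
                group.Value.Max(), group.Value.Sum());
        }
    }
}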

 

Status of the application

 

The application shown here is in a very early stage of development and should be considered a concept application. Development has reached a point where it is feasible to show, and where I need more input. I could personally see something like this becoming a Community Project.

 

At this point, I cannot promise anything beyond what I just showed. I'm not sure how this will develop, or whether it will ever reach maturity. I would certainly like to get to a point where I can share the code.

 

The application is a .NET 4.0 WPF application, written in C#. 

 

Questions that need input

 

I'm personally convinced this could be a great tool for getting insight into the performance of the different PI Data Access products. I would like to get some input on the following:

  • What's your overall thought after reading this?
  • Would something like this be useful in your PI application development efforts?
  • What features should definitely be present in a concept like this?
  • Any further comments or input?

 

 

If you have made it to the bottom, thanks for reading! I hope you leave a comment to provide some input!

 

 
