All Places > PI Developers Club > Blog > 2009 > June

As promised in my last blog post, we’ll start the discussion of manual data entry in an HA PI System with the most popular data access technology: the PI-SDK.

 

The PI Software Development Kit is a very common technology for accessing the data in PI. The majority of PI clients use the SDK, many custom applications are built with it, and there are countless little code snippets in ProcessBook displays at most PI sites. Given how prevalent the technology is, it is important to understand how the PI-SDK behaves in a High Availability (HA) environment.

 

Before we get too in-depth with the PI-SDK, we should determine what we really need for HA-based Manual Data Entry (MDE).  Once we have done this, we can easily see how the PI-SDK measures up and what we can do to get it into shape!

 

Ensuring that data sent to a PI server is consistently and reliably stored within the PI Archive is our primary goal. When working with a collective, we need to send data to each member of the collective consistently and be able to verify that all modifications to the PI Archive are processed on each server. Furthermore, to keep the system Highly Available, the final MDE solution should not introduce a single point of failure. In short, we need replication and buffering.

 

The PI-SDK is “collective aware” in that it can seamlessly fail over a connection between the servers within the collective. Replicating updates to the PI Archive (inserting, editing or deleting events) through the SDK, however, is not as straightforward. In its current release, the PI-SDK does not support replication or buffering of events sent to PI straight out of the box. We can, however, design a solution using the PI-SDK that satisfies the goals of our MDE system with relatively little effort.

 

As we are all aware, before we can use the PI-SDK to update the data in the PI Archive, we need to initiate a connection to the PI server. Prior to the PR1 SDK release, calling the Open method of a Server object would open a connection to that server. PR1 changed this behaviour: the Open method now creates a connection to the collective, instead of a named server. Whilst you may instantiate the Server object with a specific server name, there is no guarantee that the SDK will open a connection to that server; the SDK will attempt to connect to each server in the collective, in order of server priority, until it succeeds. Any modification made to data is only performed against the connected collective member and is not replicated to the others. To achieve our first goal, replication, we will need to handle the PI Server connection slightly differently.

 

The PR1 release of the SDK included a new interface, IPICollective. This interface extends the functionality of the Server object and exposes a series of methods and properties that allow us to properly handle a collective. The MemberOpen method allows us to attempt a connection to a specific member of the collective (instead of the collective itself). This method accepts the same parameters and behaves similarly to Server.Open before the PR1 release, except that if it cannot connect to the specified member it will throw an error instead of failing over to a secondary. Once you have opened a connection, you can use the returned Server object as you would normally.

 

Here’s a quick example of connecting to a named PI Server using the IPICollective.MemberOpen method:

 
    Protected Function OpenMember(ByVal _serverName As String, _
                                  ByVal _connectionString As String) As PISDK.Server

        Dim srv As Server
        Dim sdk As PISDK.PISDK
        Dim col As IPICollective
        Dim colList As CollectiveList
        Dim colMember As CollectiveMember

        'Get a handle on the SDK
        sdk = New PISDK.PISDK()

        'Set the server to point to our desired server
        srv = sdk.Servers(_serverName)

        'Check to see if it is a collective
        col = CType(srv, IPICollective)
        If col.IsCollectiveMember() Then
            'If so, locate the correct collective member
            colList = col.ListMembers()
            colMember = colList(_serverName)

            'Open the member (not the collective)
            srv = col.MemberOpen(colMember, _connectionString)
        Else
            'It is a normal server, open it as you normally would.
            srv.Open()
        End If

        Return srv

    End Function

Once we have a Server object, we can cast it to IPICollective and manage the connection as a collective. We first check whether the server is indeed a collective before trying to open it using the collective-specific functions. Once we have ascertained that it is, we list all the servers in the PI Collective and get a handle on our specific member; then it is just a matter of calling MemberOpen, passing this handle together with a standard Server.Open connection string. The returned Server object can now be used to store and retrieve data using the standard PI-SDK methods.  For more information on IPICollective and its methods/properties, please consult the PI-SDK documentation. You will also need to ensure the “Replication_AllowSDKWrites” tuning parameter is set on all secondary members (it is not enabled by default).

 

So, using the MemberOpen method, we can now connect to each member of the collective and send data to PI. Each time we send data to PI, we will need to repeat the process for each member of the collective; we can, therefore, satisfy our first goal: replication. Note that a process can only ever hold one open connection to a server; attempting to open an additional connection will result in the error “This server is already open under a different connection string”, so the current connection must be closed before the next member is opened.
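To make that concrete, here is a minimal sketch of a replicated write, assuming the OpenMember pattern shown above. The procedure name and the tag/value/timestamp parameters are mine, and error handling, buffering and COM clean-up are deliberately omitted:

    'Sketch: replicate a manual entry to every member of the collective.
    Protected Sub WriteToAllMembers(ByVal _serverName As String, _
                                    ByVal _connectionString As String, _
                                    ByVal _tagName As String, _
                                    ByVal _value As Object, _
                                    ByVal _timestamp As Date)

        Dim sdk As New PISDK.PISDK()
        Dim col As IPICollective = CType(sdk.Servers(_serverName), IPICollective)
        Dim srv As Server
        Dim colMember As CollectiveMember

        For Each colMember In col.ListMembers()
            'Only one open connection per server is allowed, so each
            'member is opened, written to and closed in turn.
            srv = col.MemberOpen(colMember, _connectionString)
            srv.PIPoints(_tagName).Data.UpdateValue(_value, _timestamp)
            srv.Close()
        Next

    End Sub

Remember that MemberOpen throws rather than failing over, so if any member is unreachable the write to that member is simply lost; that is exactly where a buffering strategy becomes necessary.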

 

There are, however, some downsides to calling MemberOpen for each collective member. Closing a connection and opening a new one with MemberOpen takes time; constantly switching servers adds overhead and may result in a visible performance loss, as the SDK needs to authenticate each time a connection is made. If you are using event pipes, these will be lost when you switch servers, which may result in lost data or updates. Additionally, each time you open a connection to another server, you potentially orphan existing PI-SDK objects. These orphaned/zombie objects use memory and will increase the working set of your application; as they are COM-based objects, there is a good chance they won’t be cleaned up (garbage collected) until the application exits. These issues can be mitigated with good application design.
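One way to keep those zombie objects under control is to release the COM references deterministically once a member connection is finished with, rather than waiting for the garbage collector. A small sketch (the helper name is mine):

    Imports System.Runtime.InteropServices

    'Close a member connection and release the underlying COM object so
    'the wrapper does not linger in memory until the application exits.
    Protected Sub ReleaseMember(ByRef srv As PISDK.Server)
        If srv IsNot Nothing Then
            srv.Close()
            Marshal.ReleaseComObject(srv)
            srv = Nothing
        End If
    End Sub

As with any use of Marshal.ReleaseComObject, make sure no other live reference to the same Server object remains before releasing it.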

 

You may be tempted to modify the priority of the servers within the collective and then use the SwitchMember method to connect to a different member. Whilst this procedure would still allow you to connect to the other collective members, you have no guarantee that the connection is made against the desired server: if SwitchMember cannot connect to it, it will try the other members of the collective in priority order until one succeeds.

 

Now that we can effectively replicate data to the collective, we need to implement a buffering strategy to store writes when a collective member is not available. This is another in-depth topic and, if covered here, would make this quite a long blog post. As such, I will discuss some options in my next blog post in this series.

 

If you have any questions, please feel free to use the comments, the discussion forum, or email the vCampus team or myself directly, and we will gladly help out.

 

 

The smart grid is a lot of things to a lot of folks.  At this year's PI user conference Glenn Pritchard of PECO mentioned virtual SCADA systems driven by advanced meter infrastructure (AMI) and PI ACE. Another presentation by Brian Parsonnet of ICE Energy described using the smart grid for optimization and automatic control of distributed energy resources (DER).

 

Indeed, the smart grid is taking off and driving new levels of collaboration: not just technical interoperability, but also the cross-cutting teamwork needed to realize a more secure, reliable and cost-effective grid.

 

Concurrency attacks could be considered collaboration gone wrong... kind of like two people talking at the same time on a conference call.  The BlueHat presentation by Scott Stender and Alex Vidergar of iSEC Partners does a good job of describing web application concurrency attacks and the challenges of defending against them. One of their observations is that today's web development frameworks provide little defense, while strict backend defenses can seriously impede scalability.

 

Of course, concurrency attacks can happen in control systems too. What about the smart grid command-and-control infrastructure: will it be possible to generate a state mismatch from concurrent calls to the web methods supporting the smart grid?

 

The AMI and Enterprise Gateway teams at OSIsoft have been able to study concurrency attacks. Imagine a meter connect request issued by agent 1 and a disconnect request issued by agent 2.  Although the field state remained consistent, mismatched status reporting to the agents could be observed. Our conclusion is yes, extra defenses to mitigate concurrency attacks are required.  This is especially true to accommodate distributed customer services and delegated service control authority.
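As a simplified illustration of the problem (not the actual gateway code), consider two agents racing a check-then-act web method against the same meter. Without serializing command processing per meter, both can read a stale state and report a status that no longer matches the field device:

    'Hypothetical meter endpoint illustrating the race.
    Public Class MeterEndpoint
        Private _connected As Boolean = False
        Private ReadOnly _gate As New Object()

        'Unsafe: the check and the act are separated by field dead time,
        'so a concurrent disconnect can slip in between them.
        Public Function ConnectUnsafe() As String
            If Not _connected Then
                'command travels to the field here...
                _connected = True
                Return "connected"
            End If
            Return "already connected"
        End Function

        'Safer: serialize command processing for each meter.
        Public Function Connect() As String
            SyncLock _gate
                If _connected Then Return "already connected"
                _connected = True
                Return "connected"
            End SyncLock
        End Function

        Public Function Disconnect() As String
            SyncLock _gate
                If Not _connected Then Return "already disconnected"
                _connected = False
                Return "disconnected"
            End SyncLock
        End Function
    End Class

A per-meter lock is only the innermost defense, and coarse backend locking is exactly the scalability trade-off the iSEC presenters warn about.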

 

Another complication is that AMI capability to process commands varies widely depending on the technologies used and the actual implementation. Some processing schemes use batch-oriented methods where ‘on demand' commands may be scheduled for deferred execution. A lot can change in the dead time before execution.

 

Potential concurrency issues can be prevented at the Enterprise Gateway interface, starting with web method support for approval and reversal.  In addition to managing requests, the gateway is also tasked with monitoring communications and schedules. This approach exposes activity logs with detailed state and performance indicators in the Enterprise Gateway.

 

In the operational layer, PI Servers record the state of grid operational data using traditional SCADA interfaces. Smart meter information leverages the AMI interface conductor design.  The interface conductor supports plug-in modules for head-end systems and represents the innermost defense against concurrency attacks. Anomalies such as inconsistent command sets or permissions for one or more target meters could be rejected or raise an alert.

 

Smart grid solutions span many technologies that provide unique opportunities for layered defenses. A multi-level permissive and abort logic scheme could be especially effective when there are multiple authorized agents using web entry points. Is your web application vulnerable to concurrency attacks?
