All Places > PI Developers Club > Blog > 2010 > April

More Querystring Fun

Posted by pkaiser Employee Apr 30, 2010

Wow, it's been a while since I've found time to blog. I enjoyed meeting many new people at the OSIsoft Users Conference earlier this week, and also seeing lots of old familiar faces. While it's fresh in my mind, I thought I would reiterate the new querystring functionality that was a popular discussion topic at the Users Conference.


If you saw the presentation that I made with Tamara Carbaugh on Wednesday morning, then you already know about all of the new querystring support added to PI WebParts in version 3.0. If not, then you'll be glad to learn that querystrings are an even more powerful mechanism for simple integration in PI WebParts. We've supported querystrings in our TreeView and TimeRange web parts for some time, and in my last blog on the topic I talked about the querystring support in our ad-hoc trend. In PI WebParts 3.0, we've dramatically extended querystring support to the rest of the web parts, and eased some of the restrictions regarding their usage.


In the past, the limited querystring parameter support found in the TreeView and TimeRange parts suffered from a few constraints:

  • The names of applicable querystring parameters were fixed. For example, there was no way to apply a querystring parameter with a simple name such as “Start” to the “Start Time” property of an instance of the RtTimeRange web part. The only querystring parameter that could be applied to the “Start Time” property of RtTimeRange had to be named “RtTimeRange_StartTime.”
  • Querystring parameters were automatically applied to all instances of the applicable web part. In other words, the querystring parameter named “RtTimeRange_StartTime” was applied to all instances of the RtTimeRange web part on the page. There was no opportunity to “opt in” (or “opt out”) on a web part instance-by-instance basis.
  • A single querystring parameter could not be applied to instances of different parts. The querystring parameter named “RtTimeRange_StartTime” was automatically applied to every instance of RtTimeRange on the page, if any, but could not be made to have any effect upon an instance of RtTrend on that same page.

To overcome these constraints, PI WebParts 3.0 extends querystring parameter support to any connectable web part property, and lets the configuring user specify the name of the querystring parameter to use. This is accomplished through our existing web part connection dialog. Now, when you click the "lightning bolt" button to connect a property to a providing web part, you can also connect it to a querystring parameter. You don't have to do both: you can connect a property to another web part without connecting it to a querystring parameter, you can connect a property to a querystring parameter without connecting it to a providing part, or you can connect a property to both. When a web part property is connected to a querystring parameter, but the parameter does not appear in the querystring in the URL, that connection is simply ignored.
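To make the mechanism concrete, here is a minimal, language-agnostic sketch (in Python, not the PI WebParts implementation) of reading a user-chosen parameter name, such as "Start", out of a page URL's querystring. The page URL and parameter names are purely illustrative:

```python
from urllib.parse import urlparse, parse_qs

# hypothetical page URL with user-chosen parameter names
url = "http://server/sites/ops/Dashboard.aspx?Start=*-8h&End=*"

# parse_qs returns a dict mapping each parameter name to a list of values
params = parse_qs(urlparse(url).query)

print(params["Start"][0])  # *-8h
print(params["End"][0])    # *
```

The point is simply that any parameter name works; nothing forces the "PartName_PropertyName" convention of the older releases.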


So what happens when a web part property is connected to both a querystring parameter and another web part? And what if that property has a user-specified (i.e., typed-in) default value? Obviously, at the time a web part page is rendered, a web part property value can now be provided from multiple sources. For all web parts, a value provided by a connected web part takes precedence. If there is no value from a connected web part, then the value from the querystring parameter is used. If there is no value from a connected web part and no value from a querystring parameter, then the default value is applied.
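The precedence rule above can be sketched in a few lines (illustrative Python, not the PI WebParts API; the function and time-range strings are made up):

```python
def resolve_property(connected_value, querystring_value, default_value):
    """Return the effective web part property value: the connected web part
    wins, then the querystring parameter, then the typed-in default."""
    if connected_value is not None:
        return connected_value
    if querystring_value is not None:
        return querystring_value
    return default_value

print(resolve_property("*-2h", "*-8h", "*-1h"))  # connected part wins: *-2h
print(resolve_property(None, "*-8h", "*-1h"))    # querystring wins: *-8h
print(resolve_property(None, None, "*-1h"))      # default applies: *-1h
```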


As noted in my previous querystring parameter blog, and in the presentation that I gave with Tamara earlier this week, querystring parameters are a tried-and-true mechanism for simple UI-level integration between PI WebParts and other web-based or web-aware applications. They're widely used, reliable, and so easy to implement that even a Product Manager could do it. Or an Engineering Group Lead like me, who hasn't written code in a while...


ACE 2010 beta

Posted by gmoffett Employee Apr 30, 2010

As mentioned by Steve, the PI ACE 2010 beta has been posted in the "Pre-Release" area of the vCampus Download Center. Please make sure you consult the Release Notes for more information, and do not hesitate to provide any feedback you may have on this beta release.




So what is new?

  • Multiple scheduler support against one PI Server, allowing ACE users to split calculations across two or more machines
  • Performance improvements (approximately 5x in testing) due to components being rewritten in .NET
  • Native 64-bit operating system support
  • Visual Studio 2010 support

We are keen for feedback too!



I would like to draw your attention to the official Microsoft StreamInsight page, especially the "Additional Resources" section at the bottom. There you can see two videos posted on Microsoft's Channel 9 that describe how you can do real-time monitoring with PI and our PI System Adapters for StreamInsight.

For those of you who are not familiar with Microsoft StreamInsight, our StreamInsight Adapters and Complex Event Processing (CEP) in general, I invite you to watch our webinar on the topic. Also make sure you follow this blog (RSS Feed) and do not hesitate to initiate discussions in the "StreamInsight Development" discussion forum.



PIData | OData

Posted by spilon Apr 14, 2010

A couple of weeks ago in Vegas, a new protocol was released: the Open Data Protocol (OData). I can hear it from here; you're thinking "oh well, yet another standard, new protocols emerge every other day."


That's true.
But I think this one is a little different and worth taking a deeper look at.


In a nutshell, this is a new data access protocol based on REST principles, meant to facilitate the sharing of data between heterogeneous parties (i.e., data producers and data consumers). An OData-enabled source essentially allows you to query and manipulate the data it exposes by means of a simple URL.


I'd like to invite you to read what follows (a quick example I made up and a few links to more information) and then tell us what you think: are you interested in this? Do you have a brilliant idea on how you would implement this in your applications, in your organization? Do you think it would be valuable for our customers and partners if OSIsoft invested into OData?




Take our website as an example. And say you have your own website where you want to list upcoming OSIsoft events (e.g. Users Conference, OSIsoft vCampus Live!, regional seminars, etc.). With an OData-enabled version of our website, you could probably query it like this:


That query would return an XML-style document (i.e., a JSON or Atom feed) that lists the different things you can query, say:


Then, as you probably guessed, you could query the events feed, which would return an XML document listing all events with all corresponding information and links.


One could also filter the fields you get back from the service, as well as sort them in the desired order. As an example, you might want to focus on the "Regional Seminars" type of event, or on events held in San Francisco; the latter query might look like this: $filter=Location eq 'San Francisco'
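The query above is just string composition, which is part of what makes OData so approachable. A minimal sketch (the base address and entity set name are hypothetical; only the $filter query option is a real OData convention):

```python
def odata_query(base_url, entity_set, filter_expr=None):
    """Compose an OData query URL; a real client would also URL-encode it."""
    url = base_url.rstrip("/") + "/" + entity_set
    if filter_expr is not None:
        url += "?$filter=" + filter_expr
    return url

print(odata_query("http://example.com/odata.svc", "Events",
                  "Location eq 'San Francisco'"))
# http://example.com/odata.svc/Events?$filter=Location eq 'San Francisco'
```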


Now imagine a growing number of OData-enabled data sources, from various data providers, and a growing collection of OData consumer applications... everything gets to be interconnected. From some application on your smart phone, you could get from the OSIsoft website to the upcoming Users Conference (April 26-28 in San Francisco), to the actual hotel and the list of services they provide, to a list of suggested restaurants in the area, and ultimately make a reservation at that restaurant's website.


And then take this to the PI and AF level: the ability to search/read/edit/delete data from PI and AF, from all sorts of OData-enabled clients (thick client, smart phone, website, ...) - sounds good to you?




While Microsoft is the key actor behind this, they really mean it to be an open protocol that will hopefully get standardized at some point. They are serious about it and have even implemented it in SharePoint 2010 (SharePoint lists are "OData queryable").


You can find out more here:

So let me ask you again: what do you think about this? Interesting or not? Already got a couple of brilliant ideas on how you would implement this? How about OSIsoft exposing PI and AF data via an OData producer?

Seeing 'Red' over Cyber Security

Cyber security is a highly charged issue from boardrooms to regulators and solution providers. Indeed, we all depend on critical infrastructure and global supply chains. Why, then, are there still so many common security weaknesses, even in new products?

Popular theories abound: executives view security as pure extra cost; regulators believe people just don't understand the risk; solution vendors just want to sell you something... and so on. I can't put enough emphasis on the last point: pushing FUD is the wrong approach.

A better approach is seeing red over cyber security.

Seeing red in terms of the true cost of defects is a core theme in the security development lifecycle (SDL): pay now, or pay orders of magnitude more in after-the-fact remediation. Addressing security early and often can help avoid fiscal red.

It's important to note that SDL effectiveness is largely understated due to incalculable external costs incurred by customers. Not only is there a direct cost for producing fixes, but end users also carry a significant cost for rollout.

To truly be effective, we must also reduce the external cost of security measures. "So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users" from Microsoft Research challenges the assumption that people just don't get it about security.

However, seeing red over cyber security in the military sense, by engaging cyber red teams, is proving very effective. The general principle is using offense to inform defense. In familiar terms: you can't improve what you can't measure.

To see red, go on the offensive and identify security metrics. There will be gaps and some applications may need to expose better controls and indicators. As a case in point, consider the vCampus discussion about security with a lot of people managing tags. At the very least this is an externality we must address; in the meantime monitoring is possible.

According to SANS, even with incomplete metrics the US State department observed dramatic risk reduction using a data centric approach.  Through osmosis federal regulators are starting to see red too.  Security enforcement based on real time data is becoming a practical necessity because compliance penalties are per day in violation.


For software development projects, it's better to see red early rather than depend on network fuzz testing or penetration testing of fielded product. The DHS-sponsored Control Systems Cyber Security Advanced Training Workshop at Idaho National Lab can help you develop a red team mindset. This cyber war game activity has an excellent reputation and is a catalyst for teaching many engineers to see red.

Seeing red, using offense to inform defense, helps us make the right development decisions.  Perhaps just as important for developers is how red teams make security intensely personal - no one wants their code to be abused or broken by a peer!

There have been questions recently regarding how to acknowledge a notification programmatically. There seem to be two common sources of confusion: which methods to use, and what subscriber Guid to pass. Acknowledgment should be done using the ANNotification object, but there are several acknowledgment-related methods. The instance methods are used on the server side, and they won't produce any effect when run from a client. Instead, the static ANNotification.AcknowledgeInstance / AcknowledgeSubscription / AddComment methods should be used. When calling these methods, a Guid identifying a subscriber must be specified so the comment/acknowledgment can be associated with that subscriber. This Guid should be the ID of an AFNotificationContactTemplate (a delivery endpoint in the UI) that has an AFNotificationContact instance subscribed to the notification. Note that every AFNotificationContact instance will have a corresponding Template, but may or may not have an AFContact associated.


In the example below, I show a method that takes an AFContact and looks up the corresponding AFNotificationContact on a notification.  From the AFNotificationContact I pass in the Template.ID to the acknowledgment methods.  (UPDATE: I have modified the code slightly to get the last instance from a static ANNotification call - this is preferred to using an instance of ANNotification)


static void AcknowledgeActiveNotification(AFNotification notification, AFContact contact,
    ANAcknowledgmentType acknowledgmentType, string comment)
{
    // Note that to acknowledge or comment, we need to pass an AFNotificationContactTemplate (delivery endpoint in the UI)
    //   that is subscribed to the notification so that the comment/acknowledgment can be associated with that contact.
    //   This method will search an AFNotificationContacts collection to find a subscription whose AFContact matches the one we specified.
    //   We will associate the comment/acknowledgment with this AFNotificationContact's Template.
    AFNotificationContact notificationContact = FindContactSubscription(notification.NotificationContacts, contact);

    // query for the last instance of the notification
    ANInstance lastInstance = ANNotification.GetLastInstance(notification);

    if (!lastInstance.IsActive)
        Console.WriteLine("Last instance is inactive");

    try
    {
        if (acknowledgmentType == ANAcknowledgmentType.Instance)
        {
            // this acknowledges the notification, even if the required acknowledgments are not met
            ANAcknowledgmentReturnStatus ackResult = ANNotification.AcknowledgeInstance(
                notification,                       // the AF notification
                lastInstance.InstanceID,            // the instance ID
                notificationContact.Template.ID,    // the ID of the AFNotificationContactTemplate
                comment);                           // the comment to record (argument reconstructed; check the AN SDK reference)
            Console.WriteLine("Instance acknowledgment: " + ackResult);
        }
        else if (acknowledgmentType == ANAcknowledgmentType.Subscription)
        {
            // this acknowledges a single subscription, but may not cause the notification to be acknowledged
            //   if there are > 1 required acknowledgments
            // (argument list reconstructed to mirror the AcknowledgeInstance call above; check the AN SDK reference)
            ANAcknowledgmentReturnStatus ackResult = ANNotification.AcknowledgeSubscription(
                notification,
                lastInstance.InstanceID,
                notificationContact.Template.ID,
                comment);
            Console.WriteLine("Subscription acknowledgment: " + ackResult);
        }
        else if (acknowledgmentType == ANAcknowledgmentType.Comment)
        {
            // This adds a comment, but does not perform any acknowledgment
            // (argument list reconstructed to mirror the calls above; check the AN SDK reference)
            string errorMessage;
            bool result = ANNotification.AddComment(
                notification,
                lastInstance.InstanceID,
                notificationContact.Template.ID,
                comment,
                out errorMessage);
            Console.WriteLine("Comment added: " + result);
            if (!String.IsNullOrEmpty(errorMessage))
                Console.WriteLine("Comment error: " + errorMessage);
        }
    }
    catch (Exception excp)
    {
        Console.WriteLine("Error: " + excp.Message);
    }
}

// recursively check the subscriber hierarchy looking for an AFNotificationContact associated with the specified AFContact
static AFNotificationContact FindContactSubscription(AFNotificationContacts subscribers, AFContact contact)
{
    if (contact == null || subscribers == null)
        return null;

    AFNotificationContact associatedSubscriber = null;
    foreach (AFNotificationContact subscriber in subscribers)
    {
        if (subscriber.Contact == contact)
            associatedSubscriber = subscriber;
        else if (subscriber.NotificationContacts != null)
            // if we didn't find the contact, but this has subitems (a group or escalation), we need to search them
            associatedSubscriber = FindContactSubscription(subscriber.NotificationContacts, contact);

        if (associatedSubscriber != null)
            break;
    }

    return associatedSubscriber;
}

Internationalization and localization are means of adapting computer software to different languages and regional differences. As our users are distributed in many different regions in the world, this is one of the things that OSIsoft is tackling when it comes to handling different region/language/culture contents in our systems as well as our client tools.


Though a good amount of work has been devoted to adding and improving localization features within the PI products, there are still things that you should watch out for when you are using PI Server, PI Clients, and PI Data Access technologies to store, read, and display content with localization features.


Hence the idea for a new white paper that gives an overview of the different issues you can encounter in handling localized content, and the available workarounds, when using the Server and Client products as well as the Data Access technologies.


The white paper is now available in the OSIsoft vCampus Library; you can download it now.
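To give a flavor of the kind of issue the white paper covers: a classic localization pitfall is that the decimal separator differs by culture ("1,234.5" in en-US versus "1.234,5" in de-DE). A minimal sketch (the helper is hypothetical, not from the white paper) of normalizing a culture-formatted number before storing it:

```python
def parse_localized_number(text, decimal_sep=".", group_sep=","):
    """Parse a number formatted with culture-specific separators by
    stripping the group separator and normalizing the decimal separator."""
    normalized = text.replace(group_sep, "").replace(decimal_sep, ".")
    return float(normalized)

# en-US style: "." decimal separator, "," grouping
print(parse_localized_number("1,234.5"))                                  # 1234.5
# de-DE style: "," decimal separator, "." grouping
print(parse_localized_number("1.234,5", decimal_sep=",", group_sep="."))  # 1234.5
```

The same value round-trips only if every layer (server, client, data access) agrees on the culture in effect, which is exactly where the issues tend to hide.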
