All Places > PI Developers Club > Blog > 2009 > February

The Selected Data toolpart for RtTrend dictates what traces appear in the trend. It is an ordered list of PI Points, relational dataset columns, and web service parameters, and can also include one connection from another web part. This connection can consume any number of PI Points, passed as a semicolon-delimited list of the format \\piserver\tagname[;\\piserver\anothertagname]*
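For illustration, that semicolon-delimited list format can be built or parsed with a few lines of code. This is a hedged sketch: the helper functions, server name and tag names are invented for the example and are not part of RtWebParts.

```python
# Illustrative sketch (not an OSIsoft API): building and parsing the
# semicolon-delimited PI Point list passed over a web part connection.

def build_tag_list(tags):
    """Join (server, tag) pairs into \\server\tag;\\server\tag form."""
    return ";".join(rf"\\{server}\{tag}" for server, tag in tags)

def parse_tag_list(value):
    """Split a connection string back into (server, tag) pairs."""
    pairs = []
    for item in value.split(";"):
        server, tag = item.lstrip("\\").split("\\", 1)
        pairs.append((server, tag))
    return pairs

connection = build_tag_list([("piserver", "sinusoid"), ("piserver", "cdt158")])
# connection == r"\\piserver\sinusoid;\\piserver\cdt158"
```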


Often overlooked in the Selected Data toolpart for RtTrend is the "Replace Ad-hoc Traces" checkbox. This property controls the response to the web part connection for the Selected Data property. Checked by default, this causes every new batch of PI Points received via the connection to replace the last batch. In other words, by default PI Points sent via the connection to the Selected Data for RtTrend are substitutive. However, if you uncheck this property, PI Points sent via the connection will be additive -- the existing traces will remain while new traces are added for the new PI Points received via the connection.


For example, when the AliasTagList parameter from RtTreeView is connected to the Selected Data for RtTrend, and ad-hoc traces are not being replaced, each time you click on a Module in RtTreeView, the PI Points to which all of its Aliases refer will be added to the traces in the trend. No single PI Point will appear in the trend more than once, so if a subsequent connection event provides a PI Point that is already being traced, a second trace will not be added to the trend for that PI Point.
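The substitutive and additive behaviors described above can be modeled in plain Python. This is a sketch of the logic only, not OSIsoft code; the function and parameter names are made up.

```python
# Hedged model of the "Replace Ad-hoc Traces" behavior described above.

def receive_batch(current_traces, new_points, replace_adhoc=True):
    """Return the trace list after a connection event.

    replace_adhoc=True  -> substitutive: the new batch replaces the old one.
    replace_adhoc=False -> additive: new points appended, duplicates skipped.
    """
    if replace_adhoc:
        return list(dict.fromkeys(new_points))  # keep order, drop duplicates
    merged = list(current_traces)
    for point in new_points:
        if point not in merged:   # no PI Point is traced twice
            merged.append(point)
    return merged

traces = [r"\\pi1\sinusoid"]
traces = receive_batch(traces, [r"\\pi1\cdt158", r"\\pi1\sinusoid"],
                       replace_adhoc=False)
# traces -> [r"\\pi1\sinusoid", r"\\pi1\cdt158"]  (no duplicate trace added)
```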


The RtXYPlot web part provides similar functionality, allowing additional "Y" tags to be passed in via connection either additively or substitutively. Hopefully calling attention to these features will help you get more value now out of your existing RtWebParts applications.

A systematic naming convention helps automate the management of numerous interface services. But there is still the pesky problem of uniqueness in the startup .bat files.




This issue is normally handled by the PI-ICU (a.k.a. intensive care unit).  The Interface Configuration Utility helps create new interface services interactively during commissioning; however, it does not provide an automation interface.  Using the ICU to manage 80 services could be somewhat error-prone.




For example, consider multiple relational database interface services.  The startup files will usually differ only by service id, pointsource and DSN. By convention, these parameters can all use a format based on the pointsource string (e.g. “SQLDB1”).
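As a sketch, such a convention can be captured in a small helper. The exact patterns below (the DSN_ prefix, the PIRDBMS_ .bat filename) are hypothetical; adapt them to your own site convention.

```python
# Hypothetical naming convention: derive service id, DSN and startup
# .bat filename from a single pointsource string such as "SQLDB1".

def derive_names(pointsource):
    return {
        "pointsource": pointsource,             # /ps=SQLDB1
        "service_id": pointsource,              # /id=SQLDB1
        "dsn": f"DSN_{pointsource}",            # ODBC data source name
        "bat_file": f"PIRDBMS_{pointsource}.bat",
    }

names = derive_names("SQLDB1")
# names["bat_file"] -> "PIRDBMS_SQLDB1.bat"
```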




Since there isn’t really a built-in ‘grep|awk’ construct for Windows, a PowerShell function can help clone new startup files by replacing parameters derived from the pointsource:




Function ReplaceRDBMSps {
    param([string]$oldps, [string]$newps)
    # Clone the startup file, substituting every occurrence of the old
    # pointsource string (service id, pointsource and DSN all follow the
    # same naming convention, so one replace covers them all).
    # Assumes startup files are named <pointsource>.bat.
    Get-Content "$oldps.bat" |
        Foreach-Object { $_ -replace $oldps, $newps } |
        Set-Content "$newps.bat"
}

Some additional helpers could also leverage the naming convention, such as service creation.  A stop script would query Windows service status, filter on the base interface name, and issue stop commands.




Startup sequence is probably important enough to set up manually. For this number of interface services, I certainly do not recommend the automatic service startup setting.  Launching startup from a group policy logon script provides a much more orderly start on reboot.




Simple! Well, maybe. What about saving the interface startup parameters in the PI module database?  This important feature is provided by PI-ICU and drives layered management tools like automatic point synchronization and the SMT interface services control plug-in. There could be other limitations, such as interfaces or APS connectors that don’t yet support multi-character pointsources or string service IDs.




The ICU will continue to support an interface-node-centric user experience.  Scaling up to multiple interface nodes and services fits better in the PI system management tool suite.  SMT is also a good vantage point for managing service mobility between nodes, n-way buffering, and failover schemes.




13750osi8 Buffer> Add support for remote configuration using PI Management Subsystem.


16946osi8 SMTHost> Add plugin to edit ICU configuration remotely using management subsystem.




Obviously, the PLI enhancements above aren’t an all-inclusive list for easing administration of interfaces, but simple is good and we will continue to work in this direction!

If you haven't yet registered for our next User Conference (March 31-April 1 in San Francisco), hurry up! We're 10 days away from the close of the "Early Bird" registration offer, which consists of a $300 discount on conference registration, as well as a $40/night discount on your hotel room.


The User Conference is the perfect venue to learn about OSIsoft's products and roadmap for the coming months and years, and also a great opportunity to connect with OSIsoft employees, customers and partners.


Visit the UC2009 website for more details and for registration.



A recent upgrade activity was complicated by 80+ interface services running on the PI server host. Needless to say the upgrade took more than a few minutes!


PI interfaces can be very simple, and simple translates into very reliable. Reliability is the exact reason there were so many interface services. The system management team had taken the time to group points into multiple interface services because inputs were being polled from many different remote nodes.  In this case, the grouping strategy was based on commonality in connection path.


There are many other good uses for multiple interface services and point groups. Multiple interface services tend to operate in parallel and can also increase data throughput. If you don’t have redundant interfaces, dedicated services for high value and mission critical data can further increase reliability. Routine configuration changes and interface updates can also be organized by plant area to better align with operational schedules.


But how many is too many? Is 80 interfaces too many?


The PI server is a beefy machine and this configuration has been stable over the years, but startup is taking a very long time.  It turns out LOCATION1 is still being used to group the points. This mechanism loads all points with a matching POINTSOURCE from the server and then filters by LOCATION1 at the interface.  No wonder PIBASESS pegs: all the interfaces are asking for all the points at the same time.


Changing to a unique multi-character POINTSOURCE for each interface service is a better approach (and one less location parameter to set).  The UNIINT startup cache mechanism delivers the best startup performance.
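The difference between the two schemes can be sketched with a toy model. This is plain Python for illustration only, not PI code; the point counts and attribute names are invented.

```python
# Toy model of the two point-loading strategies at startup:
# 8000 points split across 80 interfaces, 100 points each.

# Old scheme: one shared POINTSOURCE, points grouped by LOCATION1.
points = [{"tag": f"tag{i}", "pointsource": "R", "location1": i % 80}
          for i in range(8000)]

def load_shared_ps(interface_loc1):
    # Every interface pulls ALL POINTSOURCE="R" points from the server,
    # then throws away 99% of them with a local LOCATION1 filter.
    fetched = [p for p in points if p["pointsource"] == "R"]  # 8000 fetched
    return [p for p in fetched if p["location1"] == interface_loc1]

# New scheme: a unique multi-character POINTSOURCE per interface service,
# so the server-side filter already returns only this interface's points.
points_unique = [{"tag": f"tag{i}", "pointsource": f"R{i % 80}"}
                 for i in range(8000)]

def load_unique_ps(ps):
    return [p for p in points_unique if p["pointsource"] == ps]

# Shared scheme:  80 interfaces x 8000 fetched points = 640,000 transfers.
# Unique scheme:  80 interfaces x  100 fetched points =   8,000 transfers.
```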


Orderly start and stop is another weak spot.  Yes, we forgot to update the site start and stop scripts too.


So, 80 interfaces on one server are not all that simple after all.  We have more work to do.





February 26th it is! Mark the date and register now!


With the help of a few eminent guest speakers, I'll be conducting the next installment in the vCampus-exclusive The Builders' Café webinar series: Programming .NET Add-Ins for PI ProcessBook.


This vCampus-exclusive webinar will present a side of PI ProcessBook that you may not know yet: automation and add-ins. In addition to showing how to develop .NET add-ins for PI ProcessBook, this webinar will introduce a set of 4 sample add-ins from which you can start developing your own. A live Q&A session will be held at the end of the webinar, for you to provide feedback or ask questions of some of the people who made all this possible!


Guest speakers are:


Don't miss this... register now!

As you may already know, PI stores data with timestamps expressed as the number of seconds since January 1st, 1970 UTC.


For the first and only time in history (that makes it sound more serious), this number will read 1234567890 on Friday the 13th: 13-Feb-09 23:31:30 UTC, more precisely.


'How can I see if this is true?', I hear from my desk... here, using a command prompt:
     D:\PI\adm\> pidiag -tz "13-feb-09 23:31:30"
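If you don't have pidiag handy, the same conversion can be cross-checked with Python's standard library:

```python
# Cross-checking the 1234567890 claim with standard Python.
from datetime import datetime, timezone

moment = datetime.fromtimestamp(1234567890, tz=timezone.utc)
print(moment.strftime("%d-%b-%y %H:%M:%S"))  # 13-Feb-09 23:31:30
assert moment.weekday() == 4                 # Friday the 13th, indeed
```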

(Sorry for the word play - or "acronym play"... I just couldn't resist)


More seriously... we would like to reiterate our invitation to our 20th Users Conference, taking place in San Francisco starting March 31st. You can review the details (there have been major changes!) and register for the event at


This year more than ever, we would be interested to hear about the impact of the current economic situation on your business and operations, and to discuss how PI can address the challenges you are facing now. Our upcoming Users Conference can perhaps help you, since a number of people - customers and partners of OSIsoft - will speak on this topic. If you have something to share that you feel would be beneficial for the whole community, please consider submitting your own talk, using the "Call for Papers" link on the left-hand side of the link provided above.


We hope to see you and your colleagues at this event in a few weeks! We'll also make sure the members of the vCampus community meet and exchange around common interests!



GE's Scarecrow Superbowl ad is just the latest hype about the smart grid.  But $3-5MM for the ad is just a drop in the bucket.  Smart grid is a huge problem space and very technology intensive.  On my unofficial scale of difficulty where global climate models are a 10, I rate smart grid a strong 7.


But what is smart grid?  Resources at UCAIUG.ORG can provide a good foundation. Specifically, check out the business case work referenced by the AMI-SEC task force security requirements.  I've extracted appendix B here for your convenience.  IMHO, Distributed Energy Resource (DER) Management is a killer app all by itself.


Today, many asset owners and operators are in the early stages of AMI and smart grid rollout. The costs are enormous but so is the incentive (Metcalfe's Law is one way to think about the value aspect).


In the coming years, I expect solutions will take many forms, and several will even use the PI System.  Perhaps some of you are already working in the smart grid space. To be ready and successful, we need to collaborate to push the envelope on scalability. That means more performance and zero administration without sacrificing security.


All of us have to be smarter if the grid is ever going to get a brain.
