
Unlike Shakespearean roses, an AF Attribute by any other name does not smell as sweet.




When naming PI tags, the naming convention often corresponds tightly with the control system topology – or, often, the PI tag names are the control system tag names. Or (as many of you are muttering right now) there is no naming convention, but the names are still cryptic. When the name is the primary identifier of an object, it makes sense to do cryptic, human-unfriendly maneuvers in an attempt to uniquely and positively identify the tags for those individuals who are in “the club” and understand the control system lingo.




Try going onsite and understanding a competitor’s tag names. Go on, I dare you. Or hire a contractor to do integration work at your site. Or hire a new employee – you get the idea. With cryptic tag names, your data is cloaked in a fog of mystery. One of the prime directives of PI AF is to make your assets understandable by any onlooker (...who has the right security permissions).




Instead of requiring you to parse a fully-qualified PI Tag name (e.g. CEL-LIS_TRK001_DLOAD.MEAS.PV), PI AF presents Truck 1 as an entity with meaningful properties of a truck: Speed, Oil temperature, Dynamic load, and so forth. Furthermore, those properties are each presented in a known (and selectable!) unit of measure. Standardization (every truck is a truck), legible naming (e.g. Oil temperature), and units of measure (Oil temperature is presented by default in °C, unless requested otherwise) are how PI AF shines the light of sanity upon complex worlds.




Thus we arrive at my sermon: When you’ve got your hard hat on and are playing PI AF builder, remember the prime directive: PI AF must be friendly. A crucial and repeated task is to give PI AF Attributes “good” names. Good names are names that immediately have meaning to mortal humans, foreign systems, and the humans establishing mappings between PI and foreign systems.





Talk like you're a human

Names must be written like you’re communicating with a  human. “ENG SPD” is a lousy and obscure representation of “Engine speed,” so don’t even think about naming your PI AF attributes in the former, cryptic way. Forget how your PI tags are named; it doesn’t matter one bit. “Engine speed” makes much more sense to everyone and that is what matters going forward. I’m a mechanical engineer, and even I get nauseated by coded names.





Don't mix names with units

Names should not include units-of-measure. It makes absolutely no sense to say “Generated watts” when consumers can request the value of “Generated watts” in watts, kilowatts,  BTUs, or horsepower. What you’re trying to describe is “Generated power,” so call it that.





Be relative

Child attributes should be named relative to their parents. Long lists of attribute soup can be avoided in three ways: by subdividing your assets, by categorizing the attributes (to enable grouping/sorting), and by nesting them (some attributes as child attributes of others). The latter case of child attributes pertains to this article on naming. If “Feed A Flow” is the parent attribute holding “Feed A Total” and “Feed A Rate of Change,” those children’s names should be reduced relative to their parent – to “Total” and “Rate of Change.” Otherwise, you’ll end up with attributes fully named as, e.g., “Feed A Flow|Feed A Total.” This likely makes more sense as “Feed A Flow|Total” – the total of Feed A’s flow.




In the new integrated PI Search experience (cf. Coresight, DataLink), search results will appear associated with their immediate parent:     








These three rules – legible names, not including units of measure, and not over-qualified if nested – will help your PI AF implementation be successful as an enterprise-level object model and data foundation. Go forth and prosper.




And I'm hoping for some great comments from those of you who can add to this list of good practices! What's missing?



Okay, so it was my son, and he is actually 2 and a half years old (as he often tells me, the half is important), but today he taught me a lesson or two that made me sit back, reflect, and come up with this blog post.




My son, Ethan, is someone I am very proud of because he is already a geek like me [:)] In fact he even has his own iPad (iPad 2, not the “new iPad” – that is mine!!!) and already the way he uses the user interface is so natural.  There is no clicking of the maximize button, no “File -> Open”, and so on; he literally just swipes his way through the iPad with ease, finding what he wants.




I had a long day today that left me mentally exhausted, and I put my iPhone down, which was the cue for Ethan to come and grab it.  First, he swipes to unlock it and enters my code (I made the mistake of telling it to him once; he still hasn’t forgotten it), which he follows with “Oooo Daddy you have 4 Facebooks and 12 e-mails”.  I smiled, “Facebooks”.  Ethan then proceeded to open Facebook and pulled down the news feed to refresh it.  I thought to myself, “why didn’t he click refresh or hit F5”, then I realized that his generation just isn’t going to know that type of interface, especially on mobile or tablet devices.  It is already the norm for them at such a young age, something we old school folk have had to adjust to, although the old way still comes back to haunt us.




The story continues…


After refreshing my news feed he then opened my notifications and shouted “Oooo Daddy tagged in a photo”, followed by a swift click on the notification link.  At that point I sat up sharply, just in case the photo was something I wasn’t expecting.  Up pops the photograph, upon which he spreads his fingers to zoom in and finds me in the photograph.  Instead of pinching to zoom out he double-touches the screen and it zooms out…well, I did not know that!  So that was my lesson on how to interface with my own phone.  Needless to say I shall be watching how Ethan uses his iPad to pick up some more tips.




The story continues just a little bit more…


When I have a few minutes spare I have an obsession, no not the kind you have to go to meetings for, it is the Microsoft Kinect.  To date I have concentrated quietly in my office on data streaming from the Kinect device using AF SDK RDA on Windows Server 8 to the PI Server 2012 as fast and efficiently as possible – some great results so far. 


After bathing Ethan I scuttled off to my home office and fired up my laptop.  Ethan comes into the office whilst I am watching the performance counter of the PI Server snapshot events/second increasing then decreasing as my tracked skeleton is moving about on the screen.  He laughs.  He then spots the second Kinect sensor lying around in my office and says, “I’ll use this one, you use that one”.  “Okay,” I said apprehensively.  I started waving at my Kinect sensor and Ethan saw my skeleton on the screen.  He laughed some more.  He then turned around so we were back to back, put his Kinect sensor on the floor facing him and started to wave at it using the opposite arm to the one I was using.  “Bing” – that was the sound of the light bulb going on above my head.  Before I work on any visualization or application with all this data being collected I need a second pair of eyes – in this case a second Kinect sensor validating the first load of data being produced.  The Kinect SDK supports multiple Kinect sensors, so I now know what I am going to be doing for the foreseeable future: working with two huge streams of data, validating each other with their inverse.




To give you all an appreciation of how all this data can be used to make sense for visualization or for a natural user interface application, PI Event Frames play a huge part in what I have built so far.  Imagine correlating specific events or gestures a user makes whilst using your application for viewing/interacting with real-time process data…not only do you see the user acknowledge an alarm, but you could check if they were even looking at what they were doing whilst acknowledging that alarm!  Validate quite literally how a user interacts with your application.




Take it even further into the future and you drive a new car off the dealership lot with a Kinect sensor embedded into the dashboard, able to detect if you are tired or didn’t check your mirrors before taking that turn (face tracking correlated with the signals from the indicator stalk + light bulb, and the speed of the car for distance to the turn)…a car’s black box based on the PI System & Kinect!




Anyway, what started out as a bit of a random story did flourish into what I was trying to get across – and what I see people like our community friend Lonnie already blogging about: the world is being taken over by 2-year-olds who don’t need mice, they just need their fingers and a piece of glass.



Sometimes we find ourselves in funny situations, where we think our past experience is going to help get us out of a bind.  We have faced a problem before, so naturally we know what needs to be done the next time.  But is next time really ever the same?  What I’m getting at here is that the mobile ecosystem is not the personal computer ecosystem.  On the face of it they seem very similar.  There is a data service over the network; there are client apps or browsers to visualize the data. We have software to build logic and create a nice user interface. We have been doing this for almost two decades on the PC, so what is the big deal?  That thinking is exactly what gets us in trouble when it comes to mobile; we are so used to seeing the world in a PC-centric way that we just cannot help ourselves.  Let me give you a couple of quick real-life discussions I have had with people, and these are smart people, mind you.  Names have been changed to protect those that need to be protected ;)




Mark Z. and I were having a discussion about getting notifications on a phone and he pointed out that he has been doing this for years now via email and SMS.  “So why Lonnie, are you so big on native push notifications?”  It is a fair question and I responded, “See Mark, it is about what you do after you get that notification.  We have technology to better manage that notification for you when it comes in and we can integrate that message into a client app that allows you to get to the answer as quickly and easily as possible.  See, a notification always requires an action of some kind, and that is what we need to focus on – what the next step is.  Provide that to a user and we have moved beyond text messages and email.”




Warren B. and I were talking at Starbucks about having something like a Process Book display on a phone or tablet, and he pointed out that it has already been done.  He pulls out his phone and remote desktops to his PC and pulls up a screen.  “See,” he said, “do we really need to have an app for that?”  My answer went along the lines of, Warren, you are my friend, and I value your opinion, but do you really think that screen is usable?  We are looking at a desktop that is 7 times larger than your phone screen and designed for keyboard and mouse interaction, do you really think that is a solution that users would pay for?




Mark and Warren are my friends and I hope that one day they will understand what I’m getting at.  Let’s look at the underlying issue here.  Both are thinking with PC brains.  They have been conditioned for years to believe that we can make do with these kinds of solutions.  They were good back in the day, so why not now?




I hope I have made my point here.  We need to let go of how things were done in the past and try our hardest to embrace the new.  Smartphones and tablets are not PCs.  They have a different form and people interact with them in very different ways – ways that are much more natural.  When you are dreaming up the next big app, try to think in those terms; you will be way ahead of everyone else!




Thanks for reading!









OSIsoft vCampus is hosting the M2M Killer App Hackathon at the Connected World Conference, June 11th - June 13th at the Pheasant Run Resort in St. Charles, Ill. (near Chicago).


The goal of this hackathon is to create a contest, where developers participate in a 36-hour programming frenzy to create awesome applications with the PI System. Two other companies who are participating are ILS Technology and Exosite. For more information, go to this site!


There are nice cash prizes involved for the winners. Each track will have three winners: the first prize is $3000 in cash, the second prize winner will get $1500, and the third prize winner will get an iPad. In addition, the winners will be featured in Connected World Magazine (350,000 subscribers!) and on the website. This will get you and your company a lot of exposure!


We are really excited about this, and we are very busy preparing the challenge. We are going to be using some really cool technology to get data in and out of the PI System. Something that hasn't been released so far, and you may have never heard of it before. It's called 'PI Data Pipeline', and it's a new way to interface with the PI System.


We are going to kick off the hackathon on Monday, where everyone will receive instructions and their developer kits. The OSIsoft track will have an ending session on Wednesday, June 13th where we will wrap up and announce the winners.


Do you want to be a part of this? You can! You can register at the Connected World Conference website. Participating in the hackathon will cost you $225, and you will get a lot of value out of this!


We want as many people as possible there participating. That's why we will be paying for 2 vCampus members to go there! Flight and hotel will be sponsored by OSIsoft, and full conference admission (worth $1300) will be sponsored by Connected World.


What do you need to do to win this? We want you to write a small testimonial, where you tell us how OSIsoft vCampus helped you out during your projects or day-to-day work. How did OSIsoft vCampus bring you value? Send this to us, along with your company logo and picture. We will pick the two winners from the submissions, and announce them a week from now. Just keep your schedule open!


Hope to see you at the Connected World Conference!





Following my previous blog post here, I am focusing on the single variable statistics for a PI tag using R. The goal here is to dig deeper into the data we have collected in the PI System and enable the Power of Data. As we will see, some very small code snippets can generate huge analytical and visual value out of data. All of this comes at no monetary price as long as you have your PI System in place; R is a free tool!


We base our operations on the same dataset as in the previous example: the power and temperature data gathered from the OSIsoft headquarters building in San Leandro, CA. We assume that the sampled data has already been exported to a CSV file using PI DataLink or other methods such as PIConfig. The data is sampled once per day, going back a full year.
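For reference, the CSV is assumed to be laid out roughly like this – the column names and the dd-mmm-yyyy timestamp format are what the snippets below expect; the values shown here are only placeholders, your own export supplies the real numbers:

Time,Power,Temperature
01-May-2011,&lt;power value&gt;,&lt;temperature value&gt;
02-May-2011,&lt;power value&gt;,&lt;temperature value&gt;
...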


First we read the data from the CSV file into a variable called PowerTemp.df. This type of data is called a data frame in R. A data frame is pretty much like a table, having rows and columns. The values in each column are usually of the same type and represent a variable, such as temperature or time. Each row is an entry or observation. Note that the whole structure is easily read from the CSV file, and the names of the variables are automatically set from the headers in the CSV file:

#Read the data from the CSV file
PowerTemp.df <- read.csv(file='C:\\Users\\afattahi\\Documents\\R\\Examples\\SL - Power - Temp - 1year - Cleaned.csv', header=TRUE)

#Converting the Power and Temperature to numerical vectors
power.numeric <- as.double(as.vector(PowerTemp.df$Power))
temperature.numeric <- as.double(as.vector(PowerTemp.df$Temperature))

The next step is to look at the individual distribution of each variable: temperature and power consumption. A histogram will give us this information and should provide more insight into how the variables have been behaving over the desired period.

#Plotting the simple histogram of the power with 20 bins
hist(power.numeric, breaks=20, col='blue')



It clearly shows the behavior we witnessed before: there are two types of behavior, or distributions; one related to weekends (base power) and the other to working days. Now let's fit a density function to the histogram above using the function density() for a smoother description of our data:

#Calculating the density function: we get an error due to NA in data. We need to clean it out.

d <- density(power.numeric)

Error in density.default(power.numeric) : 'x' contains missing values

Oops! We get an error. The problem is that our dataset contains some NA values (Not Available). The density() function cannot handle that. This is a classic example of the need for data cleaning. There is a saying that 80% of a data scientist or engineer's time is spent on cleaning data and 20% on algorithms and generating insight! So, let's take the NA out. R can do this very efficiently. In the snippet below we clean out the data, calculate the density and plot the density function. The density object, d, contains all the statistical description of the graph:

#Cleaning the data: keep only the entries that are not NA
power.numeric.clean <- power.numeric[!is.na(power.numeric)]

#Create the density, plot it, and fill it in
d <- density(power.numeric.clean)
plot(d, main="Density of power consumption")
polygon(d, col="red", border="blue")



Now it is evident that the behavior to the right (right lobe - working days - 5 days a week) is the dominant behavior as opposed to the left one (left peak - base power - weekends - 2 days a week) - beautiful!


To put the icing on the cake, let's look at the distribution of the power in different seasons and compare them. We define seasons as: Jan-Mar as winter, Apr-Jun as spring, Jul-Sep as summer, and Oct-Dec as Fall. The first step is to extract the vector of months (1-12) from the timestamps in our dataset and clean it:

#Extract the vector of months and clean it
months.vector <- as.numeric(format(as.Date(PowerTemp.df$Time, format="%d-%b-%Y"), "%m"))
months.vector.clean <- months.vector[!is.na(power.numeric)]   #drop the entries where power is NA so the months line up with power.numeric.clean

Here is a very important step. We need to bin this vector according to seasons. In other words, take the vector of months and attach the corresponding season to each entry in the vector, based on our definition of seasons. We use the function cut() to do so. It generates a factor, which is another data structure in R. A factor is a vector of categorical values; every distinct value that occurs in the whole list is called a level. Factors are very good for representing observations of categorical variables, in this case seasons.

#Create the factor of seasons
seasons <- cut(months.vector.clean, breaks=c(0,3,6,9,12), labels=c("Winter", "Spring", "Summer", "Fall"))

Now we are ready to compare the distributions of the values of power per season. To do so we use the function from the sm package; that's why we load sm first. The beauty of it is that once we know what we are doing, everything is done with very few lines of code and becomes intuitive.

#Compare the distribution of power consumption by season
require(sm), seasons, xlab="Power consumption by season")
legend("topright", legend=levels(seasons), fill=2+0:(length(levels(seasons))-1))



It shows that the dual behavior is again observed in each individual season, so this is an intrinsic behavior of the underlying process. The only curious point is that in spring the baseline power is dominant. This may be because of the moderate weather in California in springtime, when very little energy is used to cool or heat the building. To see the different behavior by season we can look at the box plots of the power consumption by season:

#Box plots of power consumption by season (bwplot comes from the lattice package)
require(lattice)
bwplot(power.numeric.clean~seasons, ylab="Power Consumption")
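If you also want the numbers behind the box plots, a one-line complement (a quick addition on top of the snippets above) prints the per-season summaries:

#Numeric summaries (min, quartiles, median, mean, max) of power consumption per season
tapply(power.numeric.clean, seasons, summary)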



The intent of this post is to delve deeper into single-variable statistical analysis of the data. To do so we need to import data from the PI System into R, clean it up, and use appropriate analysis and graphics. R proves to be very efficient in enabling the Power of Data!

I’ve been quite happy with my testing of AFSDK 2.5 CTP2.  I work in a big AF shop so the emphasis of my tests has been asset-centric.  However, my company does have some tag-centric PISDK applications and I’ve recently tested the feasibility of using AFSDK 2.5 for tag-centric apps.  Obviously there was going to be a syntax change for some commands, plus I wanted to gauge performance.  Keep in mind that when I mention AFSDK in this particular post, I am referring to the OSIsoft.AF.PI namespace for counterparts to PISDK objects and methods.








I have mixed feelings on some of the new commands.  To connect to a PIServer you must connect to a PISystem first.  That just seems awkward for some reason, but it’s such trivial code that it's really no big deal.  I do appreciate that it’s now PIServer in AFSDK, contrasted with the very generic sounding Server in PISDK.  And when you’re fetching values from PI it’s no longer PIValues being returned but rather AFValues.  I had a slight brain freeze with that last one but came to appreciate it because I like the constructor methods of AFValues and AFTime far better than their PISDK counterparts.




One big syntax difference was for anything fetching values based on a time range.  The PISDK methods usually required separate parameters for StartTime and EndTime.  The new AFSDK methods require an AFTimeRange object.  What I found myself doing for my own methods was overloading it to accommodate both method signatures.








As far as features, I’ve got to correct a misconception I’ve had about Rich Data Access in 2.5.  I was under the impression that you could not delete data as there is no equivalent RemoveValues method in AFSDK.  While it is true that this convenience method is missing, it is false that you can’t delete PI data using AFSDK.  Using the RDA UpdateValues method with the AFUpdateOption.Remove flag will remove values, or at least attempt to do so based on your user credentials ;-).




The other nice features of AFSDK are what we’ve all read about since CTP1: managed .NET code and objects that don’t use COM, so now I don’t have to worry about marshaling, etc.  Whereas PISDK could be sluggish in MTA threads, AFSDK is quite happy in MTA.




A feature missing from 2.5 but slated for 2.6 is an AF equivalent of the PIAsynchStatus object.  While it would be nice to have, it does add more complexity to an application.  Don’t get me wrong – I’m a big fan of PIAsynchStatus but in some instances it can actually make your application run slower.  It would be quite useful to have in the toolbox but is something I use in less than 20% of my PISDK applications.  I can’t write any application with it that I can’t also write without it.








Since this is a pre-release product, I’m not going to publish any hard numbers, but what I will say is that I do like the speed I’ve seen so far.  Would I like it to be faster?  Absolutely.  Then again, I want the PISDK to be faster too!  But for what I consider to be an impending first release of managed, COM-less objects, I think performance is quite acceptable.  Or as one colleague said, “It’s pretty durn fast.”  For those that don’t speak Southern, ‘pretty durn fast’ is a very good thing.  There are some commands where PISDK performs ever so slightly faster than AFSDK – though I would say it is insignificantly faster, because within a blink of an eye is still within a blink of an eye.  Yet there are other commands that perform a wee bit faster – that is to say, not so insignificant but just a smidgen noticeably faster.




There was one command where AFSDK was quite faster than PISDK:  fetching a large collection of PIPoint objects or a PointList.  When testing a small set of 200 or so tags, PISDK was slightly faster.  But when fetching my entire tag collection of 35,000+ tags, AFSDK was noticeably faster.  Over a series of several tests, PISDK would take between 3 and 5 seconds to fetch all 35K tags, whereas AFSDK would do the same thing in half that time.








Based on the performance I’ve seen and the features of AFSDK, once AF 2.5 is released into production I would not hesitate to write a tag-centric application in AF 2.5.  Not only because this is the strategic direction of where OSIsoft is heading, but just in case you ever need to promote it to be asset-based, my application would already be in AFSDK.  For example if I’m just doing tag-centric processing, I wouldn’t be concerned about UOM’s.  But what if my needs suddenly changed and I needed to worry about UOM data conversions of my PI data?  This would be an easy transition if my formerly tag-centric application was based on AFSDK 2.5.




That’s just one person’s opinion.  And that’s all for now.

I have this dream to see PI data on an iPhone.  I think this would be a fantastic thing.  I really do.  There are so many people walking about with iPhones and having access to their data would be a tremendous benefit.  How can we make this possible?


I have this conversation with myself at least once a week and have spent a lot of time thinking and researching various possibilities.  This is what led me to the cloud.  See, if you are serious about providing data to an iPhone, or any mobile device, then you have to consider the fact that this data will be traversing the Internet, or what I like to call, the wild.  Data in the wild is a problem for most of us.  How do we prevent others from seeing it? How do we ensure that the person asking for the data is who they say they are?




To be blunt, it is all about security.  And the cloud offers the capabilities that I need.  By the cloud, I really mean Microsoft’s Azure.  They have some nice services, like Access Control Service (ACS), and the service bus.  I have found that these two features enable me to solve the “wild” problem.  I can secure my data and authenticate users.  This allows me to sleep at night and tell my clients that they are OK.  To play on an old cliché, “you can have your PI and eat it too.”  I know, that was pretty bad :)




But as a developer this means I need to understand what the cloud is about, and how to talk to my clients (whether my boss, users, other departments, or paying customers) without freaking them out – and really get the point across that we can do this in a safe way.  We can get the data to your phone or tablet securely and use the power of the cloud to help us out.  I think it can be a very positive conversation if approached the right way.  But it first starts with us getting up to speed with this technology.




So how does this cloudy stuff work?  I will be talking about this subject in coming blogs.  You don’t necessarily have to write cloud services or anything that intense, but all of us should want to really try to understand what the cloud is about and why it is such an important part of getting our data to the mobile world.  This is a big part of the mobile puzzle and you will be doing yourself a favor by learning what it could mean to you and your organization.




Thanks for reading.



Ahmad Fattahi


Posted by Ahmad Fattahi Employee May 10, 2012

As geeks we all love puzzles! How about attacking a cool one using R and solving it in an elegant way? (Even if you don't know R you will probably enjoy it! I found the problem and solution here; I just tweaked it a little bit.) The syntax and code efficiency are really impressive. Here is the description:


We are given integers between 1 and n. We would like to order them in a line, when possible, so that every adjacent pair adds up to a square number. For example, for n=15 a solution would be:


9, 7, 2, 14, 11, 5, 4, 12, 13, 3, 6, 10, 15, 1, 8


There are multiple ways to model the solution; an elegant way is to model this as a graph. Each number corresponds to a vertex. If two integers add up to a square number there will be an edge between the corresponding vertices. We have a solution if there is a Hamiltonian path in the graph (a path that traverses all the vertices once and only once).


Now let's implement this in R. The comments in the code should explain each line.

#We consider numbers between 1 and n
n <- 15

#Creating the graph adjacency matrix and giving rows and columns names
d <- outer(1:n, 1:n, function(u,v) (round(sqrt(u+v), digits=6)==sqrt(u+v)))
rownames(d) <- colnames(d) <- 1:n

#Defining the graph object based on the adjacency matrix (graph.adjacency comes from the igraph package)
require(igraph)
g <- graph.adjacency(d, "undirected")

#Labeling each vertex with the actual integer it represents in the graph
V(g)$label <- V(g)$name

#Plotting the graph with a readable layout
plot(g, layout=layout.fruchterman.reingold)

 And here is the result for n=15, n=10 (no solution), and n=20 (no solution).
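The plot lets you eyeball the structure, but it doesn't spit out the ordering itself. A small brute-force backtracking search over the same adjacency matrix d will do that – just a quick sketch on top of the code above, perfectly adequate for small n:

#Brute-force backtracking search for a Hamiltonian path in the adjacency matrix d
find.path <- function(path, remaining) {
  if (length(remaining) == 0) return(path)                #every number used exactly once: done
  last <- path[length(path)]
  for (v in remaining) {
    if (d[last, v]) {                                     #only extend the path along an edge
      result <- find.path(c(path, v), setdiff(remaining, v))
      if (!is.null(result)) return(result)
    }
  }
  NULL                                                    #dead end: backtrack
}

#Try every possible starting number
square.order <- NULL
for (start in 1:n) {
  square.order <- find.path(start, setdiff(1:n, start))
  if (!is.null(square.order)) break
}
square.order                                              #one valid ordering for n=15; NULL means no solution exists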







As a follow up to this post, today I will write a value with annotations to PI.


In order to do this, we need the usual preparation. In other words, I need to get my imports:

//need to import sdk  
#pragma warning( disable : 4786 4146 )    
#import "E:\PIPC\PISDK\PISDKCommon.dll"    no_namespace 
#import "E:\PIPC\PISDK\PITimeServer.dll"   no_namespace 
#import "E:\PIPC\PISDK\PISDK.dll"          rename("Connected", "PISDKConnected") no_namespace

and some pointers and strings:

/* PISDK */
IPISDKPtr       spPISDK = NULL;                /* the PISDK */
ServerPtr       spServer = NULL;               /* the server */
PIPointPtr      spPIPoint = NULL;              /* the pi point */
_bstr_t         bstrServer = "SCHREMMERAVMPI"; /* the pi servername*/
_bstr_t         bstrPointName = "MyLabTag";      /* the tagname */  

now let's start with the initialization:

// initialize the COM library
CoInitialize(NULL);

// Create an instance of the PISDK
spPISDK.CreateInstance(__uuidof(PISDK));
// get the PI server 
spServer = spPISDK->GetServers()->GetItem(bstrServer);     
// get the PI point 
spPIPoint = spServer->PIPoints->GetItem(bstrPointName);       

To make my life easier I am going to use a Float32 tag, and I am assuming it does not have a bad value right now. I get the snapshot:

// get the current value
_PIValuePtr _pv = spPIPoint->Data->GetSnapshot();

and now I need some annotations:

// the PI Annotations collection
_PIAnnotationsPtr spPIAnns;
spPIAnns.CreateInstance(__uuidof(PIAnnotations));
// add a string, an integer and a float annotation
// (Add is assumed here to take a name, a description and the value; check the PISDK reference for the optional arguments)
spPIAnns->Add("Operator", "a string annotation", _variant_t("Andreas"));
spPIAnns->Add("BatchCount", "an integer annotation", _variant_t((long)42));
spPIAnns->Add("Efficiency", "a float annotation", _variant_t(0.87f));

As usual we have to deal with VARIANTs:

// We need a variant to pass
_variant_t vAnns;
VariantInit (&vAnns);
// the type is VT_DISPATCH
V_VT (&vAnns) = VT_DISPATCH;
// assign the annotations to the variant
V_DISPATCH(&vAnns) = spPIAnns;

Now there is a VARIANT that refers to my annotations. I am going to use UpdateValues to send my value to PI, and to pass the annotations I will need a named values collection containing the VARIANT:

// named values :-)
_NamedValuesPtr spNVValAttr;
spNVValAttr.CreateInstance(__uuidof(NamedValues));
// add the annotations to the named values
spNVValAttr->Add("Annotations", &vAnns);

Almost there now. The remaining part is as simple as creating my PIValues collection, adding my PIValue to it, and sending it to PI. Remember, I am assuming my tag is a Float32 and has no bad value:

// PI Values
_PIValuesPtr spPIValues;
spPIValues.CreateInstance(__uuidof(PIValues));

// make the PI values writeable
spPIValues->ReadOnly = VARIANT_FALSE;
// add a new value with current time, new float value and annotations
spPIValues->Add("*", _pv->Value.fltVal + 1, spNVValAttr);
// make the PI values readonly
spPIValues->ReadOnly = VARIANT_TRUE;
// write the value to PI
HRESULT hr = spPIPoint->Data->UpdateValues(spPIValues, dmInsertDuplicates, NULL);

cleaning up: 

// we don't need the variant anymore
V_VT (&vAnns) = VT_EMPTY;

// Closes the COM library
CoUninitialize();

and leaving:

if (hr < 0)
    return hr;
return 0;

I hope you had as much fun reading as I had writing!

"The Power of Data" - sounds familiar? It was the tag line for OSIsoft Users Conference 2012 in San Francisco, and not for no reason. After the awesome PI Systems collect data from different parts of an enterprise we need to extract useful information out of raw data. This is especially true given how easily the new and mighty PI System can handle millions of tags (data streams) and events in a blink of an eye. To really get to "the Power of Data" we have no choice but to use more advanced analytics to massage the raw data and seek insight out of this huge volume that comes to us very fast.


To this end, I would like to share some of the current efforts we are making here at vCampus, with the help of some of my OSIsoft colleagues as well as third-party partners, to enable more advanced analytics on PI Data. I would also love to hear your comments, feedback, and ideas along the way:


1) PI System and MATLAB: we have been working on this integration for about two years now. Not only do we have a white paper in the Library describing the integration, we have also presented some machine learning applications during vCampus Live! 2011 and elsewhere. MATLAB is a very powerful tool for general and specialized analytics across several disciplines, including machine learning, statistics in general, mathematical optimization, signal processing, and control systems, among others. It has good penetration in research communities and academia as well as some industries. We have been working with MathWorks over the past year and will continue the joint effort.


2) PI System and R: R is the emerging de facto language of Big Data analytics. A lot of organizations are actively using R for development or in production, or are adopting the technology. The open source nature of the platform makes its use ubiquitous across many different applications. The language is specifically designed for handling data; therefore, it is extremely powerful at making tables, joins, selections, and statistics on the data with a very efficient and smart syntax. Another huge advantage of R is its powerful graphics, which make data much more readable and improve interpretability. We have been working with Revolution Analytics, who commercialize R in their product Revolution R Enterprise; it offers a much better development environment (like an IDE) and also offers supported packages for parallel computing. The following infographic shows the trend in the number of books sold on different programming languages. The light green box at the bottom is R, with a 127% annual increase, on par with some other major languages.
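As a tiny, made-up illustration of that "tables, joins, and statistics" point (toy data, not PI data), joining two tables and summarizing by group takes only a couple of lines:

#Toy example of R's data handling: join two data frames and summarize by group
assets   <- data.frame(Tag = c("T1", "T2", "T3"), Site = c("SiteA", "SiteA", "SiteB"))
readings <- data.frame(Tag = c("T1", "T2", "T3", "T1"), Power = c(120, 95, 210, 130))
joined   <- merge(readings, assets, by = "Tag")     #a join on the Tag column
aggregate(Power ~ Site, data = joined, FUN = mean)  #average power per site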


In my previous blog post I showed a way to use PI Data in R. I will follow up with other integration methods and more cool applications and graphics in the weeks to come.




3) PI System and Python: Python is a programming language that lets you work more quickly and integrate your systems more effectively. It is free to use (even for commercial purposes) and runs on multiple platforms. We have recently started looking into using PI Data with Python. A group of our enthusiastic engineers at OSIsoft have some good experience in doing so. We hope we can offer more concrete documents and results in the months to come.


Do you use any such analytical tools with your PI Data? Would you be interested in doing so? Do you face any particular challenges? What would be the most valuable result of a successful analytics package running on your PI Data?

Lonnie Bowling

PI in the Cloud

Posted by Lonnie Bowling Champion May 7, 2012



What do we mean by “PI in the Cloud”?




I think when entering a discussion about “PI in the Cloud” some context needs to be set.  There are many ways of interpreting what that means.  For example, are we talking about PI Servers running in the cloud, or are we talking about being able to access PI data from a cloud service?  They might seem to mean the same thing, but they are not.  To some, PI in the cloud is a set of computers located in a datacenter hosting PI servers.  This is one scenario, but there are many, many more.  I think it goes back to what problem a PI user is faced with and trying to solve.  Is it data access?  Being able to scale a large system?  Tying a distributed system together?  Or moving hardware from on-premises into a data center?




Today, some of this is very possible, like improving data access and security.  If someone wants to access data from the cloud via a mobile device, this is very doable.  I would think that one thing OSIsoft is looking at in the near term is extending its infrastructure to the cloud.  This would mean that there would be some means for a PI system to be extended, not fully deployed, to the cloud.  I envision that this would start with a set of cloud services and security where data could be easily accessed from the cloud.  As things progress, I could see PI collectives being deployed to the cloud and connecting to local interface nodes.  Also, archives could be spread across multiple servers and data centers.  I think that being able to easily scale a PI system in the cloud will be a big deal for large enterprise users.  We may be far off from that day, but given that the PI System as a whole is becoming more and more based on a service-oriented architecture, I think we will progressively see parts of the system become able to be moved.  Users and developers will have more options and capabilities than ever before.  As I am interested in the mobile area, these are really exciting times and I’m looking forward to seeing what happens!  What do you think? What features would you like to see moved to the cloud first?



Bryan Owen

2-May-2012 NERC Updates

Posted by Bryan Owen Employee May 2, 2012

Spring has sprung in most of the US and so has a plethora of bulletins from NERC – the North American Electric Reliability Corporation.


I suspect NERC is off the radar for many in the vCampus community. So, as creators of cool apps for the PI System, I thought you might appreciate a roundup of recent updates from NERC.  After all, the North American grid is often touted as the biggest machine ever created by humans. With that awesome factoid we can expect to find some points of interest and targets for your next innovations.


The first update topic involves the NERC critical infrastructure protection (CIP) standards. Revision 4 of the standard is now law per FERC Order 761. This revision codifies “bright line” criteria for what assets are considered critical to the bulk electric system. The new criteria result in about 29% of the installed US generation capacity being designated as critical (version 4 adds about 146 generators).


Critical assets are then reviewed for critical cyber assets (CCAs) like industrial control systems with external routable connectivity. CCAs and nodes on the same network are subject to the NERC CIP requirements which include a strict auditing regime.


One of the most recent rulings from NERC involves a clarification on remote access (NERC Project 2009-26). In short, remote access creates a significant compliance burden under the existing rules. My advice for solution developers: build as much instrumentation into your application as you can.  To reuse a comment made at the UC, it is much easier to bring the bits to the experts than it is for experts to have access to critical systems.


Also buried in FERC order 761 are strong reminders to NERC to continue fixing other deficiencies in the CIP standard.  Version 5 is currently in a 2nd ballot and comment phase.  A fundamental change in version 5 is adoption of High and Medium asset classifications (everything else is considered Low). Essentially ‘everything’ will be in scope. The version 5 drafting team is also working on details in the standard that have proven to be problematic in practice.


While obvious changes are needed, it is far from clear whether the CIP standards are heading in the right direction to improve grid reliability. I make this comment in the context of the Arizona-Southern California Outage report just released by NERC.


More than 100 notable events occurred in less than 11 minutes on September 8, 2011, leaving over a million people in the dark. The report especially highlights the importance of prior planning, because there was so little time for operators to react. As in the 2003 outage, automated protection systems using very conservative settings were one of the reasons operators were unable to prevent the cascading effect. In this case some of the protection schemes were found to no longer serve a valid purpose. If you are an engineer, I think you’ll enjoy reading the entire report. Otherwise, jump to appendices B and C for a quick look at the findings.


Like NERC CIP, one of the biggest challenges is knowing what parts of the bulk electric grid are critical at any given time (and understanding what that means operationally). For power grid reliability the path forward includes using more real-time data and simulation tools.


Why should security be any different?

Lonnie Bowling

Adventures in Mobile

Posted by Lonnie Bowling Champion May 1, 2012



I see signs everywhere that mobility is fundamentally changing the way we live our everyday lives.  Think about it for a second: if you have a smartphone, which I bet you do if you are reading this, how many times did you check your email today with your phone?  How does that compare with how many times you checked email on your computer?  If you are like me, your phone is sitting next to you while you’re at your PC and you use it primarily as your email client, at least for receiving emails.  This is the story that I think will play out over and over for a variety of functions that we have been using the PC to perform for years.  The world is about to change, big time – are you ready?




OK, maybe that last statement was a bit over the top, but I want to get your attention.  As developers we really need to start thinking about this.  We are all going to be impacted by this shift, sooner or later, and I’m in the sooner camp.  I just finished up the UC 2012 conference and it was great, but mobile was not a big story, yet. I think we all have time, but again, I really believe that smartphones and tablets will one day be the primary way we interact with our PI data.  I expect that we will be seeing a lot more in this area.  OSIsoft has their plans, but let’s be honest, they can’t do it all.  That means that we, the developers and power users of PI, will need to be part of the solutions and help out.  My goal with this blog is to get the conversation going.  I hope everyone is ready to have some fun with this topic, because I think this is great stuff and there is a lot to talk about.  So please speak up on where you think the mobile world is, especially with data access and specifically PI data access.  I talked to a lot of users at the UC who are ready for this to happen – are you?




OK, I hope you can tell that I’m really, really excited about mobile technology and about finally starting my blog on the subject.  Thanks for reading.


