I am thrilled to announce the posting of the PI System Connectivity Toolkit for LabVIEW on the National Instruments Reference Design Portal.

 

National Instruments equips engineers and scientists with systems that accelerate productivity, innovation, and discovery.

 

National Instruments measurement platforms often connect to the kinds of sensors, and perform the kinds of analyses, that are not usually found in the industrial control systems traditionally connected to the PI System.  Examples include vibration analysis, motor current signature analysis, thermography, acoustics, electromagnetic induction, and the analysis of other equipment condition and performance indicators.

 

By enabling bi-directional communication between LabVIEW and the PI System, maintenance and reliability personnel can gain deeper insight not only into the condition of equipment, through analysis of the signals in LabVIEW, but also into the process conditions that affect the equipment, and vice versa.

 

LabVIEW edge analytics are enhanced by process data from other monitoring and control systems via the PI System.  The PI System's real-time data infrastructure also makes the LabVIEW data available enterprise-wide, for better insights and decision-making across an organization, and lets that data be integrated with other systems for predictive analytics, augmented reality, and computerized maintenance management to automate maintenance processes.

 

To obtain installation instructions, LabVIEW Virtual Instruments, and sample code files, see the following posting on the National Instruments Reference Design Portal:

http://forums.ni.com/t5/Reference-Design-Portal/OSIsoft-PI-System-connectivity-toolkit-for-LabVIEW/ta-p/3568074

 

The write-to-PI function requires a license for the PI-UFL Connector; please contact your Account Manager or Partner Manager for licensing.

 

The read-from-PI function requires a PI Web API license; for development purposes, PI Web API can be downloaded and used free of charge from the OSIsoft Tech Support website.

 

For more information on LabVIEW, please see http://www.ni.com/labview.

 

Please direct any questions to NationalInstruments@osisoft.com.

Balancing on the Edge

Posted by jkorman, Jan 10, 2017

Please check out my blog post about balancing Edge vs Cloud computing and let me know your thoughts!

http://www.osisoft.com/Blogs/Balancing-on-the-Edge/

 

Kind Regards,

Jeremy

While developing web-based applications that leverage the PI Web API, I often find myself asking the following question:

"How can I get all WebIds of a subset of attributes for all elements?

One obvious use case that comes to mind is displaying a table of attribute data for all elements of a certain type. Maybe I have a template that defines 20 attributes, but I only need to display 10 of them. Of those 10 attributes, maybe I want to display snapshot values for six of them and trend the other four in a sparkline.

In order to accomplish this, a series of requests to the PI Web API needs to be made:

  1. Get all the WebIds for the elements I need
  2. Loop through each of these elements, and for each element get all the WebIds for the attributes I need
  3. Make streamset requests to the PI Web API with the WebIds of my attributes

In the past, making all of these calls to the PI Web API did not scale very well in terms of performance. Today, we have batch requests with request template functionality that make implementing this use case a lot easier. Even with batch requests, however, I have discovered some performance implications that I believe are worth knowing about, and I will share them in this blog post.
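
Before the batch examples, it helps to see what the pre-batch pattern looked like. The sketch below is mine, not code from the toolkit: "getJson" is a hypothetical helper wrapping an authenticated GET, and authentication itself is assumed to be handled by the browser. It issues one round trip for the elements and then one more per element, i.e. N+1 requests before any stream data is fetched:

// Rough sketch of the pre-batch approach: N+1 round trips.
// "getJson" is a hypothetical helper wrapping an authenticated GET.
function getJson(url) {
    return fetch(url, { credentials: 'include' })
        .then(function (r) { return r.json(); });
}

getJson('https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen')
    .then(function (db) {
        return getJson(db.Links.Elements + '?templateName=Boiler&searchFullHierarchy=true');
    })
    .then(function (elements) {
        // one extra round trip per element, just for the attribute WebIds
        return Promise.all(elements.Items.map(function (element) {
            return getJson(element.Links.Attributes + '?searchFullHierarchy=true');
        }));
    });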

 

 

Setting up the batch request

 

As a starting point, we need to write a batch request to get the elements we want. Throughout this blog post, I am using the public PI Web API endpoint with the NuGreen database. Suppose I want to get all elements of the "Boiler" template type. This batch request may look something like this:

 

{
 "database": {
  "Method": "GET",
  "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["database"],
  "Parameters": ["$.database.Content.Links.Elements"]
 }
}

 

Notice that I am specifically asking for elements that are instances of the "Boiler" template. Also, I am using the "searchFullHierarchy" parameter in the element sub-request. I often include this parameter because I need all element descendants of a specific root element. As such, I will sometimes write my element query as follows:

 

{
 "rootElement": {
  "Method": "GET",
  "Resource": "https://devdata.osisoft.com/piwebapi/elements?path=\\\\PISRV1\\NuGreen\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["rootElement"],
  "Parameters": ["$.rootElement.Content.Links.Elements"]
 }
}

 

The only difference here is that I have changed my starting point from an AF database to a specific AF element. For the rest of my examples, I'll be sticking to the database method.
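
Either way, the whole JSON body is submitted to the PI Web API in a single POST. As a minimal client-side sketch (assuming the standard "batch" endpoint and that authentication is handled by the browser or environment):

// Minimal sketch: submit a batch body to the PI Web API batch controller.
// Authentication (Basic, Kerberos, ...) is assumed to be handled elsewhere.
function executeBatch(batchBody) {
    return fetch('https://devdata.osisoft.com/piwebapi/batch', {
        method: 'POST',
        credentials: 'include',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(batchBody)
    }).then(function (response) {
        // the JSON result is keyed by sub-request name: "database", "elements", ...
        return response.json();
    });
}

Every batch body in this post can be submitted the same way.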

 

Moving on to attributes

 

Now that we have the element batch query, we can expand it to get all attributes for each of the elements returned. This is where the new request template functionality of batch requests in the PI Web API comes into play. Here is what the query may look like:

 

{
  "database": {
   "Method": "GET",
   "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
  },
  "elements": {
   "Method": "GET",
   "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
   "ParentIds": ["database"],
   "Parameters": ["$.database.Content.Links.Elements"]
  },
  "attributes": {
   "Method": "GET",
   "RequestTemplate": {
    "Resource": "{0}?searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path"
   },
   "ParentIds": ["elements"],
   "Parameters": ["$.elements.Content.Items[*].Links.Attributes"]
  }
}

 

Notice the use of RequestTemplate in the "attributes" sub-request. As documented in the PI Web API:

 

A request can alternatively specify a request template in place of a resource. In this case, a single JsonPath may select multiple tokens, and a separate subrequest will be made from the template for each token. The responses of these subrequests will be returned as the content of a single outer response.

 

This means my batch query is going to loop through all of my elements, and make a separate sub-request for each element's attributes. Even better, this is all being handled internally by the PI Web API... pretty cool!
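
To make that nesting concrete, the response to this batch roughly takes the following shape (an abbreviated illustration, not a verbatim server response):

// Abbreviated, illustrative shape of the batch response:
var res = {
    elements: {
        Status: 200,
        Content: { Items: [ /* one item per Boiler element */ ] }
    },
    attributes: {
        Status: 200,
        Content: {
            Items: [
                // one nested sub-response per element, in element order
                { Status: 200, Content: { Items: [ { WebId: '...', Path: '...' } ] } }
            ]
        }
    }
};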

Before we continue, there are a few things to note about the attribute query:

 

  1. I am also using searchFullHierarchy for the attributes. This is important because an element can have many levels of nested attributes. Writing a batch query to loop through all these levels in a generic way would probably be impossible. Luckily we have "searchFullHierarchy" for this.
  2. I have included "selectedFields" in all my sub-requests. As we will see later, this results in a MAJOR performance improvement over letting PI Web API return all of its default metadata.

 

Now, moving on...

 

What if I only want certain attributes?

 

Well, you can do that. PI Web API 2016 introduced a new function to request multiple attributes by WebId or path in one call. Suppose I only want the following attributes for my Boiler elements:

 

  • Asset Name
  • Model
  • Plant

 

Then our batch query may look something like this:

 

{
  "database": {
   "Method": "GET",
   "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
  },
  "elements": {
   "Method": "GET",
   "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
   "ParentIds": ["database"],
   "Parameters": ["$.database.Content.Links.Elements"]
  },
  "attributes": {
   "Method": "GET",
   "RequestTemplate": {
    "Resource": "https://devdata.osisoft.com/piwebapi/attributes/multiple?selectedFields=Items.Object.WebId;Items.Object.Path&path={0}|Asset Name&path={0}|Model&path={0}|Plant"
   },
   "ParentIds": ["elements"],
   "Parameters": ["$.elements.Content.Items[*].Path"]
  }
}

 

Here, I'm using the multiple attributes function call and supplying the "path" parameter in the resource URL several times. Each instance of this parameter appends one relative attribute path to the sub-request parameter (which happens to be the element path). From here, it is only a matter of writing your client-side code to construct the resource URL for the attributes you want.
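
As a sketch of that client-side step (the helper name is mine; the attribute names are just the example set from above):

// Hypothetical helper: build the "multiple attributes" resource template
// from a list of relative attribute names. The "{0}" placeholder is
// substituted with each element's path by the batch Parameters mechanism.
function buildMultipleAttributesResource(baseUrl, attributeNames) {
    var resource = baseUrl + '/attributes/multiple'
        + '?selectedFields=Items.Object.WebId;Items.Object.Path';
    attributeNames.forEach(function (name) {
        resource += '&path={0}|' + name;
    });
    return resource;
}

// buildMultipleAttributesResource('https://devdata.osisoft.com/piwebapi',
//     ['Asset Name', 'Model', 'Plant'])
// yields the Resource string used in the batch body above.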

 

Where do I go from here?

 

Now that we have all the WebIds for the attributes, we have to decide what to do with them. Usually I will start by creating a flat data structure that maps element paths to attribute paths to WebIds. In JavaScript, this may look something like this:

 

// The HTTP batch response is stored in the "res" variable
var webIds = {};

// one entry per element, keyed by element path
res.elements.Content.Items.forEach(function (element) {
    webIds[element.Path] = {};
});

// each item under "attributes" is itself a sub-response, one per element
res.attributes.Content.Items.forEach(function (subRes) {
    subRes.Content.Items.forEach(function (attribute) {
        var path = attribute.Object.Path,
            elementPath = path.substring(0, path.indexOf('|')),
            attributePath = path.substring(path.indexOf('|'));

        webIds[elementPath][attributePath] = attribute.Object.WebId;
    });
});

 

After putting the WebIds into a structure that is a bit more usable, you could use them in a variety of ways. Typically I will create an additional PI Web API batch call for all of the streamsets I need. For example, I may want to use an "end" streamset for 5 of my attributes, but a "plot" streamset for only 2 of them.
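
As a sketch of such a follow-up call (assuming the ad-hoc streamset endpoints; the time range and interval count are only illustrative), the next batch body could be assembled from the webIds map like this:

// Sketch: build a follow-up batch body from the webIds map above.
// Which attributes go to "end" and which to "plot" is display logic;
// the time range and interval count here are only illustrative.
function buildStreamsetBatch(baseUrl, endWebIds, plotWebIds) {
    function webIdParams(webIds) {
        return webIds.map(function (id) { return 'webId=' + id; }).join('&');
    }
    return {
        "end": {
            "Method": "GET",
            "Resource": baseUrl + '/streamsets/end?' + webIdParams(endWebIds)
        },
        "plot": {
            "Method": "GET",
            "Resource": baseUrl + '/streamsets/plot?startTime=*-8h&endTime=*'
                + '&intervals=100&' + webIdParams(plotWebIds)
        }
    };
}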

 

How is the performance?

 

I ran several flavors of these batch queries and collected the response times. First I will present the raw results and then comment on them.

 

Query description                                   Response time           Response time
                                                    (Path and WebId only)   (all metadata)
All attributes for all boiler elements              143 ms                  2611 ms
All attributes for all elements                     435 ms                  3809 ms
Specific attributes (3) for all boiler elements     180 ms                  186 ms
Specific attributes (3) for all elements            635 ms                  1120 ms
All attributes except one for all boiler elements   453 ms                  2895 ms
All attributes except one for all elements          3249 ms                 5272 ms

 

I included results for all elements (not filtering by template type) to demonstrate how these batch queries scale. I do NOT recommend doing this on a large AF structure.

 

Based on these results, I have made the following observations. YMMV, however, as the PI Web API appears to do some internal caching that can cause successive calls to perform faster.

 

  1. When getting all attributes (not specific ones using the "multiple" function call), it is important to be mindful of which fields you are selecting. The more metadata you ask for, the longer the request will take.
  2. Using the "multiple" function call for ad-hoc attributes does not perform well if you have a lot of elements and are requesting many attributes for each of them. Unless you only need a small subset of attributes, you're better off asking for all of them with a call to "attributes".
  3. The more elements that need to be processed, the longer the query. This makes intuitive sense and is to be expected.

 

Concluding remarks

 

I hope this blog post is helpful to you as you use the PI Web API to develop your own web apps. What other ways have you discovered of getting attributes with the PI Web API?

I would like to share some thoughts about PI calculation datasets in ProcessBook and what would help us keep them (or rather, bring them) under control.

Undoubtedly, datasets come in very handy for users to do simple calculations on the fly. But they are also a challenge for PI administrators when they start to get out of control; I'd almost say they become a mess.

 

How did that come about?

In a perfect "PI world", data is accessible to anyone and anything, and tag names never change. Users start building displays with calculations, and since these are very handy, they tend to spread more and more all over the company.

 

In reality, we have PI security and systems that change over time. The display builder and the user do not always share the same access rights. That is where it gets weird: not primarily for the user, but definitely for the administrators. Thousands of error messages start to flood the server logs and make them almost unreadable:

User query failed: Connection ID: 6092, User: xxx, User ID: yyy, Point ID: 0, Type: expcalc, …

 

Users are often not even aware that they are using datasets; they picked them up with a copied display. Missing data access or renamed points are the most common sources of trouble.

Besides a small green spot turning red (in the status report), there is nothing to draw users' attention. That is nothing unusual in an extensive display, since it could also be triggered by a temporary system digital state.

Individual datasets, spread all over the company, are not manageable; and that is by far not the only handicap.

 

How to relieve the symptoms?

First of all, error messages should appear at the place where the error occurs.

What the right place would be in the case of calculation datasets depends on the point of view.

Technically, the PI Server might be the right place; practically, and for logical reasons, it is the client. That is the only place it can be fixed.

Here are my proposals:

Provide options to

  • prompt users about calculation dataset problems in ProcessBook
  • automatically disable datasets that cannot be executed (security, tag not found) in ProcessBook
  • suppress those error messages (Point ID: 0) on the server
  • disable dataset functionality in ProcessBook stepwise (e.g. creation, execution, or both)

How to cure?

What I definitely see is an urgent need (at least in our case) to replace the unmanageable decentralized approach with a centralized one.

This is where AF On Demand Calculations come in.

 

Anyway, replacing datasets with On Demand Calculations will be a big challenge and there are several topics that need to be discussed, e.g.:

  • support for migrating datasets to AF On Demand Calculations (export)
  • how to provide the individual flexibility of personal datasets (may be an organizational matter)
  • ...

 

Conclusion

From an administrator's point of view, the most annoying issue is the log entries for failing calculations in the server log.

500 messages per hour on a 50k-point server with 100 connected ProcessBooks are more the rule than the exception. An option to suppress those calculation errors with "Point ID: 0" would be a blessing.

 

A good starting point on the client could be to make users more clearly aware of failing calculation datasets.

This can (and maybe should) be annoying, which is why an option to disable specific datasets is needed. Together with an option to automatically disable permanently failing calculations, this would be perfect.

 

Finally, a centralized and manageable approach, for example with On Demand Calculations, should be the goal, with benefits for all new and future clients. Let's do it step by step.

 

Maybe someone wants to pick this up and turn it into an enhancement request.
