Solution: Write Transpose Queries for RTQP (PI SQL Client)

Document created by kduffy on Oct 5, 2018. Last modified by kduffy on Dec 18, 2018. Version 10.

This document contains the answers to Exercise: Write Transpose Queries for RTQP (PI SQL Client)


1. Generate a query, creating any necessary transpose functions, that outputs the ProcessFeedrate min, max, and total for yesterday for every piece of equipment in the NuGreen database, but also show the average uptime and quality, for the same time period, of the plant where this equipment is located. Each piece of equipment should have all of its data and the data for its plant on the same line.

SELECT t1.Path + t1.Name Element, t1.[Process Feedrate_Minimum], t1.[Process Feedrate_Maximum],
t1.[Process Feedrate_Total], t2.[Quality_Average], t2.[Reliability_Average]
FROM
(
SELECT e.Name, eh.Path, s.*
FROM [Master].[Element].[Element] e
INNER JOIN [Master].[Element].[ElementHierarchy] eh
ON e.ID = eh.ElementID
CROSS APPLY [Master].[Element].[Unit_GetSummary]
(
e.ID, --Element ID
N'y', --Start Time
N't', --End Time
N'MostRecentTime' --Time Type
) s
WHERE e.Template in (N'Boiler', N'Pump', N'Heater', N'Compressor', N'Cooling Fan')
) t1
INNER JOIN
(
SELECT eh.Path + e.Name Element, s.*
FROM [Master].[Element].[Element] e
INNER JOIN [Master].[Element].[ElementHierarchy] eh
ON e.ID = eh.ElementID
CROSS APPLY [Master].[Element].[Plant_GetSummaries]
(
e.ID, --Element ID
N'y', --Start Time
N't', --End Time
N'1d', --Time Step
N'MostRecentTime' --Time Type
) s
WHERE e.Template = N'Plant'
) t2
ON t1.Path like (t2.Element + '%')


The simple use cases of transpose functions are covered in the video and the tutorials, so we're starting this exercise with a fairly complex example. When breaking down the requirements of the query, these ideas stick out:

  • We need to find the min, max, and total for Process Feedrate for every piece of equipment in NuGreen
  • We need to find the average uptime and quality for each plant
  • We need to combine the two, correctly tying the plant statistics to the equipment statistics.


For the first two, we're going to create template-specific data models, as this allows us to use the data summary transpose functions:


We know to use transpose functions here because we need to put multiple pieces of information from the same element onto the same row of the output (and also because this is the transpose query exercise). In the RTQP schema video, we saw that the summary functions are in the Template-Specific Data Model Wizard. Alternatively, the 4th query in PI SQL Client > Queries > Element > 3-TemplateSpecificData.sql demonstrates a summary function in use:


For the first transpose summary function, we want to query every piece of equipment for its Process Feedrate, but the different equipment types are different templates (boiler, heater, etc.). In PI System Explorer, however, we see that each of these equipment templates derives from the Unit template, which has the Process Feedrate attribute:


This means that we can create a transpose function against the "Unit" template and query all of the derived templates through it. This is also a good use case for having clear template hierarchies with shared attributes: since there's a business case for querying this common attribute across all equipment types, it's meaningful for each piece of equipment and should, therefore, be on the base template. We didn't get lucky, per se, that this could be queried easily; it's a product of the template hierarchies being built in a normalized fashion, which makes queries like this easier.


Back in PI SQL Commander's Template-Specific Data Model Wizard, after selecting "Unit" in the Template menu, we can add a GetSummary object. In the new menu, we can drag the Process Feedrate attribute in three times, setting the summary types to Min, Max, and Total:

After creating the summary function, we can navigate to it in the Object Explorer under Master > Element > Table-Valued Functions > Unit_GetSummary (unless a different name was provided), and execute its predefined query after changing the template filter from "Unit" to a list of all the templates deriving from "Unit":
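Adjusted this way, the test query ends up along these lines (a sketch; the summary column names depend on the function having been created with the three Process Feedrate summaries described above):

```sql
SELECT e.Name, s.*
FROM [Master].[Element].[Element] e
CROSS APPLY [Master].[Element].[Unit_GetSummary]
(
    e.ID,             --Element ID
    N'y',             --Start Time
    N't',             --End Time
    N'MostRecentTime' --Time Type
) s
WHERE e.Template in (N'Boiler', N'Pump', N'Heater', N'Compressor', N'Cooling Fan')
```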


Note that we received 116 rows, which matches the total number of elements throughout the NuGreen Enterprise whose element templates derive from Unit. You can confirm this from PI System Explorer, but since PI SQL Client does not require the AF SDK to be installed on the client machine, it may be better to use RTQP's FindElements function to confirm this count:
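If the FindElements route isn't handy, a plain count against the Element table with the same template filter is another way to sanity-check the 116 (a sketch, using only tables already present in the queries above):

```sql
SELECT COUNT(*) ElementCount
FROM [Master].[Element].[Element] e
WHERE e.Template in (N'Boiler', N'Pump', N'Heater', N'Compressor', N'Cooling Fan')
```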


For the next transpose function, we follow the same steps, but for the Plant template, dragging just the Quality and Reliability attributes into the attribute list:


We create the function and test it in the same way, this time keeping the predefined query as is (slimming down the top portion), because the Plant template has no derived templates:
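The test query for the plant function looks much the same (a sketch; Plant_GetSummary is the assumed default-style name at this stage, before the switch to GetSummaries discussed below):

```sql
SELECT e.Name, s.*
FROM [Master].[Element].[Element] e
CROSS APPLY [Master].[Element].[Plant_GetSummary]
(
    e.ID,             --Element ID
    N'y',             --Start Time
    N't',             --End Time
    N'MostRecentTime' --Time Type
) s
WHERE e.Template = N'Plant'
```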

There are only four plants, so the four rows are the correct output.


Finally, the last piece is to combine the two outputs into one query. We could try to do this from the element hierarchy tables that feed the two transpose functions, but since the two functions apply to different elements, we would still need two instances of the element hierarchy table joined together. A cleaner way, from a query-writing perspective, is to join the two outputs as subqueries.


Both queries can be rewritten to display the element path (by joining the ElementHierarchy table).


In theory, we could join the two outputs where the second table's path is a prefix of the first table's path. This, again, is not a lucky occurrence but a result of proper element hierarchy building: if you would like to apply certain attributes at the plant level to each piece of equipment, it helps to have built the hierarchy so that the plant is always the great-grandparent element of the equipment. If this approach didn't make sense in real life, then the query we're building wouldn't make the most sense either. And if there are multiple ways this information should be grouped, parallel hierarchies could be built in AF to reflect this, again moving the burden of simplicity onto the AF Server admin and not onto the query developer. Unfortunately, this approach is not supported in the first release of RTQP, as both of these functions call the same ft_GetSummary under the hood:
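The attempted join, which the first RTQP release rejects because both derived tables resolve to the same underlying ft_GetSummary call, would look roughly like this (a sketch; Plant_GetSummary is the name assumed for the plant-side function):

```sql
SELECT t1.Path + t1.Name Element, t1.[Process Feedrate_Minimum], t2.[Quality_Average]
FROM
(
    SELECT e.Name, eh.Path, s.*
    FROM [Master].[Element].[Element] e
    INNER JOIN [Master].[Element].[ElementHierarchy] eh ON e.ID = eh.ElementID
    CROSS APPLY [Master].[Element].[Unit_GetSummary]
    (e.ID, N'y', N't', N'MostRecentTime') s
    WHERE e.Template in (N'Boiler', N'Pump', N'Heater', N'Compressor', N'Cooling Fan')
) t1
INNER JOIN
(
    SELECT eh.Path + e.Name Element, s.*
    FROM [Master].[Element].[Element] e
    INNER JOIN [Master].[Element].[ElementHierarchy] eh ON e.ID = eh.ElementID
    CROSS APPLY [Master].[Element].[Plant_GetSummary] --same ft_GetSummary under the hood
    (e.ID, N'y', N't', N'MostRecentTime') s
    WHERE e.Template = N'Plant'
) t2
ON t1.Path like (t2.Element + '%')
```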


This should work in a future release once nested queries are supported, but for now we'll work around it by replacing one of the GetSummary functions with a GetSummaries function, passing 1d as the TimeStep parameter so the output stays the same:

Using this new function, we can replace the earlier query with this one:


After altering the SELECT portion, we can get the output that was requested:


2. Rewrite the first query, creating any necessary custom database objects, so that it can be executed by applications that reject table-valued functions.

SELECT eh1.Path + e1.Name Element, s1.[Process Feedrate_Minimum], s1.[Process Feedrate_Maximum],
s1.[Process Feedrate_Total], s2.[Quality_Average], s2.[Reliability_Average]
FROM [Master].[Element].[Element] e1
INNER JOIN [Master].[Element].[ElementHierarchy] eh1
ON e1.ID = eh1.ElementID
INNER JOIN [Master].[Element].[ElementHierarchy] eh2
ON eh1.Path like (eh2.Path + eh2.Name + '%')
INNER JOIN [Master].[Element].[Element] e2
ON e2.ID = eh2.ElementID
INNER JOIN [Master].[Element].[ft_Unit_GetSummary] s1
ON s1.[@ElementID] = e1.ID
INNER JOIN [Master].[Element].[ft_Plant_GetSummaries] s2
ON s2.[@ElementID] = e2.ID
WHERE e1.Template in (N'Boiler', N'Pump', N'Heater', N'Compressor', N'Cooling Fan')
AND e2.Template = N'Plant'
AND s1.[@StartTime] = N'y'
AND s1.[@EndTime] = N't'
AND s1.[@TimeType] = N'MostRecentTime'
AND s2.[@StartTime] = N'y'
AND s2.[@EndTime] = N't'
AND s2.[@TimeStep] = N'1d'
AND s2.[@TimeType] = N'MostRecentTime'



To get around the constraint of table-valued functions being rejected, the document Table-Valued Functions vs Function Tables explains how to convert them to function tables.
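The difference boils down to how parameters are supplied; a minimal side-by-side sketch, using the Unit function from question 1:

```sql
--Table-valued function: parameters are passed positionally via CROSS APPLY
SELECT s.*
FROM [Master].[Element].[Element] e
CROSS APPLY [Master].[Element].[Unit_GetSummary]
(e.ID, N'y', N't', N'MostRecentTime') s

--Function table: the same parameters appear as @-prefixed columns,
--constrained in the ON and WHERE clauses of an ordinary join
SELECT s.*
FROM [Master].[Element].[Element] e
INNER JOIN [Master].[Element].[ft_Unit_GetSummary] s
    ON s.[@ElementID] = e.ID
WHERE s.[@StartTime] = N'y'
    AND s.[@EndTime] = N't'
    AND s.[@TimeType] = N'MostRecentTime'
```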


The RTQP data model makes it very easy to convert a table-valued function to a function table: simply right-click it in the Object Explorer and select "Create Function Table":


We keep the default names, and we observe the two function tables being created under Master > Element > Function Tables:


After converting the structure of the query to a six-table join, we can use the columns list in the Object Explorer to ensure that we provide all of the necessary parameters in the ON and WHERE clauses:


And when we run this query, we see the same output as before:


3. Generate a query, preferably not creating any new custom database objects as this is a one-time query, to show interpolated values for the Process Feedrate and Motor Amps of cooling fan F-200 for all day yesterday at 15-minute intervals.

SELECT e.Name, sv.*
FROM Master.Element.Element e
CROSS APPLY Master.Element.GetSampledValues
<
'Cooling Fan',
{ '|Process Feedrate', 'Process Feedrate_Value', NULL, NULL, NULL },
{ '|Motor Amps', 'Motor Amps_Value', NULL, NULL, NULL }
>(e.ID, 'y', 't', '15m') sv
WHERE e.Name = 'F-200'


For this portion of the exercise, we can rely on RTQP's ad-hoc table-valued functions. We have already seen the creation of template-based functions from the scripts in the Master > Element > Templates folder; these are the ad-hoc queries wrapped in a function header. To use them in an ad-hoc manner, we can also see the queries in the query compendium's Queries > Element > 3-TemplateSpecificData.sql. For the interpolated data example, we can look at the second query in particular:


We can alternatively start from the templates folder (Master > Element > Templates) in the Object Explorer and right click on GetSampledValues > Scripts > Create Function > New Query Editor Window, then remove the function header:



We'll then swap out the template with 'Cooling Fan', as this is the template name for F-200. This information can be found either from PI System Explorer or by running the following query against the Element table:
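The lookup is a one-liner against the Element table (a sketch):

```sql
SELECT e.Template
FROM [Master].[Element].[Element] e
WHERE e.Name = N'F-200'
```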

We'll also type in the attribute paths, replicating the example in the query compendium and leaving off the Error and UOM columns for now. For the first attribute, we'll use |Process Feedrate, and for the second, |Motor Amps. Next, we'll change the time step to 15m, as the exercise requested. Lastly, we'll change the filter from template name to element name, since we're not looking for all cooling fans.


Running the query shows us the results we're looking for:



To save space in the overall query, we can also remove the comments and collapse each braced portion onto one line:


Note: If we were to use GetSampledValues under Master > Element > Table-Valued Functions, we would not get the transpose style output that we are looking for. For example, this query:

Provides us with all the Process Feedrates followed by all the Motor Amps. These are the values we're looking for, but not the overall structure of the result set:
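For reference, that non-transpose call might look roughly like the following (a sketch; the exact parameter list of the generic GetSampledValues table-valued function is an assumption here and should be confirmed from its columns in the Object Explorer):

```sql
SELECT e.Name, sv.*
FROM [Master].[Element].[Element] e
CROSS APPLY [Master].[Element].[GetSampledValues]
(e.ID, N'y', N't', N'15m') sv
WHERE e.Name = N'F-200'
```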




4. Output the results of the first query (with or without a table-valued function, your choice) to a csv file. As this is a one-time development test, the generation of the csv file does not need to be automated; something that can do it once is fine. However, it is worth thinking about ways to automate this process if the application were approved for use and rolled out to production.


There are many ways to handle this, such as using an SSIS package to direct the output of an OLEDB query to a csv file; using PowerShell to parse through an OLEDB recordset and write it to a csv file; or copying the output in PI SQL Commander, pasting it into Excel, and saving it as a csv, among others.


For manually running queries and directing the output to a csv file, PI SQL Commander can do this right from the application. Navigate to Query > Result To and select Result to File:


Execute the query again, and it will prompt you with a Save As dialog box. Once complete, PI SQL Commander will show that the output finished:


And the file will be created where specified: