
In this post I will be leveraging OSIsoft's PI Web API to extract PI System data to a flat file.

To keep things simple and easy to reproduce, this post will focus on how to extract data with this technology.

 

Prerequisites:

 

Remote PI System:

PI Data Archive 2018

PI AF Server 2018

 

Client:

Python 3.7.2

 

For simplicity, only 7 days of data for the PI Point "Sinusoid" will be queried and written to a .txt file.

 

In order to retrieve data for a certain PI Point we need its WebID as a reference. It can be retrieved with the built-in search of PI Web API.

In this case the WebID can be found here:
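
If you would rather look the WebID up programmatically than through the browser, the points resource can return it by path. The snippet below is only a minimal sketch; the endpoint URL, server name and credentials are placeholders.

import requests

# Look up the WebID of the PI Point by its path (placeholders for endpoint, server and credentials)
lookup_url = "https://<piwebapi_endpoint>/piwebapi/points?path=\\\\<PIDataArchive>\\Sinusoid"
resp = requests.get(lookup_url, auth=('<user>', '<password>'), verify=False)
web_id = resp.json()["WebId"]
print(web_id)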

 

 

 

Given the WebID of the PI Point "Sinusoid", the following code will request historical data for the previous 7 days, parse the JSON response, and write "Timestamp, Value, isGood" to the data file specified.

 

Python Code:

import requests
import json

url = "https://<piwebapi_endpoint>/piwebapi/streams/<WebID_of_Sinusoid>/recorded?startTime=*-7d&endTime=*&boundaryType=Inside&maxCount=150000"  # maxCount sets the upper limit of values to be returned
filepath = "<filepath>"

response = requests.get(url, auth=('<user>', '<password>'), verify=False)  # verify=False disables the certificate verification check
json_data = response.json()

timestamp = []
value = []
isGood = []

# Parsing the JSON response
for j_object in json_data["Items"]:
    timestamp.append(j_object["Timestamp"])
    value.append(j_object["Value"])
    isGood.append(j_object["Good"])

event_array = zip(timestamp, value, isGood)

# Writing to file
with open(filepath, "w") as f:
    for item in event_array:
        try:
            writestring = "Timestamp: " + str(item[0]) + " , Value: " + str(item[1]) + " , isGood: " + str(item[2]) + " \n"
        except Exception:
            try:
                writestring = str(item[0]) + " \n"
            except Exception:
                writestring = "\n"
        f.write(writestring)


 

Result:

 

Timestamp, value and the quality for this time range were successfully written to the file.

In this post I will be leveraging OSIsoft's AF SDK to extract PI System data to a flat file.

To keep things simple and easy to reproduce, this post will focus on how to extract data with this technology.

 

Prerequisites:

PI Data Archive 2018

PI AF Server 2018

PI AF Client 2018 SP1

Microsoft Visual Studio 2017

 

For simplicity, only 7 days of data for the PI Point "Sinusoid" will be queried and written to a .txt file.

 

The following code establishes an AF SDK connection to the default PI Data Archive specified in the local Known Servers Table. A PI Point query is then launched to find the PI Point "Sinusoid".

The method PIPointList.RecordedValues is used to retrieve a list of AFValues. Their properties "Timestamp, Value, IsGood" are then written to a flat file.

 

C# Code:

using System;
using System.Collections.Generic;
using System.Linq;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Search;
using OSIsoft.AF.Time;

namespace Data_Access_AFSDK
{
    class Program
    {
        static void Main(string[] args)
        {
            PIServer myPIserver = null;
            string tagMask = "";
            string startTime = "";
            string endTime = "";
            string fltrExp = "";
            bool filtered = true;

            //Connection to the PI Data Archive
            if (myPIserver == null)
                myPIserver = new PIServers().DefaultPIServer;

            //Query for the PI Point
            tagMask = "Sinusoid";
            List<PIPointQuery> ptQuery = new List<PIPointQuery>();
            ptQuery.Add(new PIPointQuery("tag", AFSearchOperator.Equal, tagMask));
            PIPointList myPointList = new PIPointList(PIPoint.FindPIPoints(myPIserver, ptQuery));

            startTime = "*-7d";
            endTime = "*";

            //Retrieve events using PIPointList.RecordedValues into the list 'myAFvalues'
            List<AFValues> myAFvalues = myPointList.RecordedValues(new AFTimeRange(startTime, endTime), AFBoundaryType.Inside, fltrExp, filtered, new PIPagingConfiguration(PIPageType.EventCount, 10000)).ToList();

            //Convert the AFValues to string[]
            string[] query_result_string_timestamp_value = new string[myAFvalues[0].Count];
            string value_to_write;
            string quality_value;
            int i = 0;

            foreach (AFValue query_event in myAFvalues[0])
            {
                value_to_write = query_event.Value.ToString();
                quality_value = query_event.IsGood.ToString();
                query_result_string_timestamp_value[i] = "Timestamp: " + query_event.Timestamp.LocalTime + ", " + "Value: " + value_to_write + ", " + "IsGood: " + quality_value;
                i += 1;
            }

            //Writing data into the file
            System.IO.File.WriteAllLines(@"<FilePath>", query_result_string_timestamp_value);
        }
    }
}

 

 

Result:

Timestamp, value and the quality for this time range were successfully written to the file.

In this post I will be leveraging OSIsoft's PI SQL Client OLEDB to extract PI System data via the PI SQL Data Access Server (RTQP).

To keep things simple and easy to reproduce, this post will focus on how to extract data with this technology.

 

Prerequisites:

PI SQL Client OLEDB 2018

PI Data Archive 2018

PI AF Server 2018

PI SQL Data Access Server (RTQP Engine) 2018

Microsoft Visual Studio 2017

 

For simplicity, a test AF database "testDB" was created with a single element "Element1", which has a single attribute "Attribute1".

This attribute references the PIPoint "Sinusoid".

 

 

SQL Query used to extract 7 days of events from \\<AFServer>\testDB\Element1|Attribute1

 

SELECT av.Value, av.TimeStamp, av.IsValueGood
FROM [Master].[Element].[Archive] av
INNER JOIN [Master].[Element].[Attribute] ea ON av.AttributeID = ea.ID
WHERE ea.Element = 'Element1' AND ea.Name = 'Attribute1'
AND av.TimeStamp BETWEEN N't-7d' AND N't'

 

Example C# Code:

 

using System;
using System.Data;
using System.Data.OleDb;
using System.IO;


namespace Data_Access_PI_SQL_Client_OLEDB
{
    class Program
    {
        static void Main(string[] args)
        {
            DataTable dataTable = new DataTable();
            using (var connection = new OleDbConnection())
            using (var command = connection.CreateCommand())
            {
                connection.ConnectionString = "Provider=PISQLClient; Data Source=<AFServer>\\<AF_DB>; Integrated Security=SSPI;";
                connection.Open();

                string SQL_query = "SELECT av.Value, av.TimeStamp, av.IsValueGood ";
                SQL_query += "FROM [Master].[Element].[Archive] av ";
                SQL_query += "INNER JOIN [Master].[Element].[Attribute] ea ON av.AttributeID = ea.ID ";
                SQL_query += "WHERE ea.Element = 'Element1' AND ea.Name = 'Attribute1' ";
                SQL_query += "AND av.TimeStamp BETWEEN N't-7d' AND N't' ";

                command.CommandText = SQL_query;
                var reader = command.ExecuteReader();
                using (StreamWriter writer = new StreamWriter("<outputfilepath>"))
                {
                    while (reader.Read())
                    {
                        writer.WriteLine("Timestamp: {0}, Value : {1}, isGood : {2}",
                            reader["Timestamp"], reader["Value"], reader["IsValueGood"]);
                    }
                }
            }

            Console.WriteLine("Completed Successfully!");
            Console.ReadKey();
        }
    }
}

 

 

Result:

Events were successfully written to flat file:

 

Great list, though the author admits the list is more exhausting than it is exhaustive.  A must-read for VB.NET developers.

 

An Exhausting List of Differences Between VB.NET & C# | Anthony's blog

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

Until now, when installing PI interfaces on a separate node from the PI Data Archive, we needed to provision a separate physical or virtual machine just for the interface itself. Doesn't that seem like a waste of resources? To combat this, we can containerize interfaces so that they become more portable, which allows them to be scheduled anywhere inside your computing cluster. Their batch file configuration also makes them good candidates for lifting and shifting into containers.

 

We will start off by introducing the PI to PI Interface container, which is the first-ever interface container! It will have buffering capabilities (via PI Buffer Subsystem) and its performance counters will also be active.

 

Set up servers

First, let me spin up 2 PI Data Archive containers to act as the source and destination servers. Check out this link on how to build the PI Data Archive container.

PI Data Archive container health check

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h pi1 --name pi1 -e trust=%computername% pidax:18

 

For the source code to build the PI Data Archive container and the PI to PI Interface container, please send an email to technologyenablement@osisoft.com. This is a short-term measure to obtain the source code while we are revising our public code-sharing policies.

 

We shall be using pi1 as our source and pi as our destination.

 

Let's open up PI SMT to add the trust for the PI to PI Interface container. Do this on both PI Data Archives.

The IP address and NetMask are obtained by running ipconfig on your container host.

The reason I set the trusts this way is that the containers are guaranteed to spawn within this subnet, since they are attached to the default NAT network. Therefore, the 2 PI Data Archive containers and the PI to PI Interface container are all in this subnet. Container-to-container connections are bridged through an internal Hyper-V switch.

 

On pi, create a PI Point, giving it any name you want (my PI Point shall be named 'cdtclone'). Configure the other attributes of the point as follows:

Point Source: pitopi
Exception: off
Compression: off
Location1: 1
Location4: 1
Instrument Tag: cdt158

 

Leave the other attributes as default. This point will be receiving data from cdt158 on the source server. This is specified in the instrument tag attribute.
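
If you would rather script the point creation than click through SMT, the PowerShell Tools for the PI System can do it. The following is only a sketch under the assumption that the OSIsoft.PowerShell module is installed on the container host; the attribute names mirror the list above.

# Hypothetical sketch: create 'cdtclone' on the destination server with PowerShell Tools for the PI System
$con = Connect-PIDataArchive -PIDataArchiveMachineName "pi"
Add-PIPoint -Name "cdtclone" -Connection $con -Attributes @{
    pointsource   = "pitopi"    # matches the interface point source
    instrumenttag = "cdt158"    # source tag on pi1
    location1     = 1
    location4     = 1
    compressing   = 0           # compression off
    excdev        = 0           # exception deviation set to 0
}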

 

Set up interface

Now you are all set to proceed to the next step which is to create the PI to PI Interface container!

 

You can easily do so with just one command. Remember to log in to Docker with the usual credentials.

docker run -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

The environment variables that you can configure include

host: destination server

src: source server

ps: point source

Those are all the parameters that are supported for now.

 

You should be able to see data appearing in the cdtclone tag on the destination server now.

 

Don't you think it was very quick and easy to get started?

 

Buffer

As I mentioned before, the container also has buffering capabilities. We shall consider 2 scenarios.

 

1. The destination server is stopped. Same effect as losing network connectivity to the destination server.

2. The PI to PI interface container is destroyed.

 

Scenario 1

Stop pi.

docker stop pi

 

Wait for a few minutes and run

docker exec p2p cmd /c pibufss -cfg

 

You should see the following output, which indicates that the buffer is working and actively queuing data in anticipation of the destination server coming back up.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 18:39:23, regid: 3
lastsend: 2-Nov-18 18:58:59
total events sent: 47, snapshot posts: 42, queued events: 8

 

When we start up pi again

docker start pi

 

Wait a few minutes before running pibufss -cfg again. You should now see

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: SendingData, successful connections: 2
PI identities: piadmins | PIWorld, auth type: SSPI
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 19:07:24, regid: 3
total events sent: 64, snapshot posts: 45, queued events: 0

 

The buffer has re-registered with the server and flushed the queued events to the server. You can check the archive editor to make sure the events are there.

 

Scenario 2

Stop pi just so that events will start to buffer.

docker stop pi

 

Check that events are getting buffered.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering


*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 13-Nov-18 15:25:07, lastreg: 13-Nov-18 15:25:08, regid: 3
lastsend: 13-Nov-18 17:54:14
total events sent: 8901, snapshot posts: 2765, queued events: 530

 

Now while pi is still stopped, stop p2p.

docker stop p2p

 

Check the volume name that was created by Docker.

docker inspect p2p -f "{{.Mounts}}"

 

Output as below. The volume name is the long hexadecimal string; save it somewhere.

[{volume 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17 C:\ProgramData\docker\volumes\76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17\_data c:\programdata\osisoft\buffering local true }]

 

Now you can destroy p2p and start pi

docker rm p2p
docker start pi

 

Use archive editor to verify that data has stopped flowing.

The last event was at 5:54:13 PM.

 

We want to recover the data that are in the buffer queue files. We can create a new PI to PI interface container pointing to the saved volume name.

docker run -v 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

And VOILA! The events in the buffer queues have all been flushed into pi.

 

To be sure that the recovered events are not due to history recovery by the PI to PI Interface container, I have disabled history recovery.

 

I have demonstrated that the events in the buffer queue files were persisted across container destruction and creation as the data was persisted outside the container.

 

 

Performance counters

The container also has performance counters activated. Let's try to get the value of Device Status. Run the following command in the container.

Get-Counter '\pitopi(_Total)\Device Status'

 

Output

Timestamp CounterSamples
--------- --------------
11/2/2018 7:24:14 PM \\d13072c5ff8b\pitopi(_total)\device status :0

 

Device status is 0 which means healthy.

 

What if we stopped the source server?

docker stop pi1

 

Now run the Get-Counter command again and we will expect to see

Timestamp CounterSamples
--------- --------------
11/2/2018 7:29:29 PM \\d13072c5ff8b\pitopi(_total)\device status :95

 

Device status of 95 which means Network communication error to source PI server.

 

These performance counters will be perfect for writing health checks against the interface container.
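
For example, a Dockerfile HEALTHCHECK built on that counter could look roughly like the sketch below; the interval and the pass/fail logic are assumptions you would tune for your own interface.

# Hypothetical HEALTHCHECK instruction keyed off the Device Status performance counter
HEALTHCHECK --interval=60s --timeout=30s CMD powershell -Command "if ((Get-Counter '\pitopi(_Total)\Device Status').CounterSamples[0].CookedValue -eq 0) { exit 0 } else { exit 1 }"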

 

Conclusion

We have seen in this blog how to use the PI to PI Interface container to transfer data between two PI Data Archive containers. As you know, OSIsoft has hundreds of interfaces. Being able to containerize one means the likelihood of successfully containerizing others is very high. The example in this blog will serve as a proof of concept.

Dear PI Developer's Club Users:

 

OSIsoft LLC is currently revising and consolidating its GitHub presence and content-sharing policies. This change in policy has caused many links to suddenly become broken for the time being.  We are working diligently to resolve as many of these as we can once we have clarified our new code-sharing approach.

 

We apologize for the inconvenience of such broken links, and equally important, the unavailability of our learning samples that many customers and partners have found to be valuable.  Until we have transitioned to our new policies, such links and repositories will remain unavailable.

 

This announcement supersedes any previous policy stated at:

 

 

 

 

 

Thank you for your patience and understanding.

 

Michael Sloves, Director of Technology Enablement

 

 

UPDATE November 13

 

Regarding the PI Web API client libraries previously available on GitHub, we will allow access to the source code, provided you first agree and accept some terms and conditions.  Any request to the client libraries may be made to technologyenablement@osisoft.com.

 

Rick Davin, Team Lead Technology Enablement

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

We have learnt much about using containers in previous blog posts. Until now, we have been working with standalone containers. This is great for familiarizing yourself with the concept of containers in general. Today, we shall take the next step in our container journey which is to learn how to orchestrate these containers. There are several container orchestration platforms on the market today such as Docker Swarm, Kubernetes, Service Fabric and Marathon. I will be using Docker Swarm today to illustrate the concept of orchestration since it is directly integrated with the Docker Engine making it the quickest and easiest to set up.

 


 

Motivation

Before we even start on the orchestration journey, it is important that we understand the WHY behind it. For someone who is new to all this, the objective might not be clear. Let me illustrate with two analogies: one that a layman can understand and another that a PI admin can relate to.

 

First analogy

Suppose your hobby is baking cakes (containers). You have been hard at work in your kitchen trying to formulate the ultimate recipe (image) for the best chiffon cake in the world. One day, you managed to bake a cake with the perfect taste and texture after going through countless rounds of trial and error, varying the temperature of the oven, the duration in the oven, the amount of each type of ingredient, etc. Your entrepreneurial friend advises you to open a small shop selling this cake (dealing with standalone containers in a single node). You decided to heed your friend's advice and did so. Over the years, business boomed and now you want to expand your small shop to a chain of outlets (cluster of nodes). However, you have only one pair of hands and it is not possible for you to bake all the cakes that you are going to sell. How are you going to scale beyond a small shop?

Luckily, your same entrepreneurial friend found a vendor called Docker Inc who can manufacture a system of machines (orchestration platform) where you install one machine in each of your outlet stores. These machines can communicate with each other and they can take your recipe and bake cakes that taste exactly the same as the ones that you baked yourself. Furthermore, you can let the machines know how many cakes to bake each hour to address different levels of demand throughout the day. The machines even have a QA tester at the end of the process to test if the cake meets its quality criteria and will automatically discard cakes that fail to replace them with new ones. You are so impressed that you decide to buy this system and start expanding your cake empire.

 

Second analogy

Suppose you are in charge of the PI System at your company. Your boss has given you a cluster of 10 nodes. He would like you to make an AF Server service spanning this cluster that has the following capabilities

1. able to adapt to different demands to save resources

2. self-healing to maximize uptime

3. rolling system upgrades to minimize downtime

4. easy to upgrade to newer versions for bug fixes and feature enhancements

5. able to prepare for planned outages needed for maintenance

6. automated roll out of cluster wide configuration changes

7. manage secrets such as certificates and passwords for maximum security

How are you going to fulfill his crazy demands? This is where a container orchestration platform might help.

 

Terminology

Now let us get some terminology clear.

 

Swarm: A swarm consists of multiple Docker hosts which run in swarm mode and act as managers and workers. A given Docker host can be a manager, a worker, or perform both roles.

Manager: The manager delivers work (in the form of tasks) to workers, and it also manages the state of the swarm to which it belongs. Managers can also run the same services as workers, but you can also make them run only manager-related services.

Worker: Workers run tasks distributed by the swarm manager. Each worker runs an agent that reports back to the master about the state of the tasks assigned to it, so the manager can keep track of the work running in the swarm.

Service: A service defines which container images the swarm should use and which commands the swarm will run in each container. For example, it’s where you define configuration parameters for an AF Server service running in your swarm.

Task: A task is a running container which is part of a swarm service and managed by a swarm manager. It is the atomic scheduling unit of a swarm.

Stack: A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

 

There are two types of service.

Replicated: The swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state.

Global: The swarm manager runs one task for the service on every available node in the cluster.
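
As a quick illustration, the mode is chosen when the service is created; the image name below is just a placeholder.

docker service create --name myreplicated --replicas 3 <image>
docker service create --name myglobal --mode global <image>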

 

Prerequisites

To follow along with this blog, you will need two Windows Server 2016 Docker hosts. Check out how to install Docker in the Containerization Hub link above.

 

Set up

Select one of the nodes (we will call it "Manager") and run

docker swarm init

 

This will output the following

 

Swarm initialized: current node (vgppy0347mggrbam05773pz55) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

 

Now select the other node (we will call it "Worker") and run the command that was output by the previous command.

docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

 

Go back to Manager and run

docker node ls

 

to list out the nodes that are participating in the swarm. Note that this command only works on manager nodes.

 

Service

Now that the nodes have been provisioned, we can start to create some services.

 

For this blog, I will be using a new AF Server container image that I have recently developed, tagged 18s. If you have been following my series of blogs, you might be curious about the difference between the tag 18x (last seen here) and 18s. With 18s, the data is now separated from the AF Server application service. What this means is that the PIFD database mdf, ndf and ldf files are now mounted in a separate data volume. The result is that on killing the AF Server container, the data won't be lost and I can easily recreate an AF Server container pointing to this data volume to keep the previous state. This will be useful in future blogs on container fail-over with data persistence.

 

You will need to login with the usual docker credentials that I have been using in my blogs. To create the service, run

 

docker service create --name=af18 --detach=false --with-registry-auth elee3/afserver:18s

 

Note: If --detach=false was not specified, tasks will be updated in the background. If it was specified, then the command will wait for the service to converge before exiting. I do it so that I can get some visual output.

 

Output

goa9cljsek42krqgvjtwdd2nd
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Waiting 6 seconds to verify that tasks are stable...

 

Now we can list the service to find out which node is hosting the tasks of that service.

 

docker service ps af18

 

Once you know which node is hosting the task, go to that node and run

 

docker ps -f "name=af18."

 

Output

CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS                        PORTS               NAMES
9e3d26d712f9        elee3/afserver:18s   "powershell -Comma..."   About a minute ago   Up About a minute (healthy)                       af18.1.w3ui9tvkoparwjogeg26dtfz

 

The output will show the list of containers that the swarm service has started for you. Let us check which network the container belongs to by inspecting it with the container ID.

 

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks}}"

 

Output

map[nat:0xc0420c0180]

 

The output indicates that the container is attached to the nat network by default if you do not explicitly specify a network to attach to. This means that your AF Server is accessible from within the same container host.

 

You can get the IP address of the container with

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks.nat.IPAddress}}"

 

Then you can connect with PSE using the IP address. It is also possible to connect with the container ID as the container ID is the hostname by default.

 

 

Now that we have a service up and running, let us take a look at how to change some configurations of the service. In the previous image, the name of the AF Server derives from the container ID which is some random string. I would like to make it have the name 'af18'. I can do so with

 

docker service update --hostname af18 --detach=false af18

 

Once you execute that, Swarm will stop the current task that is running and reschedule it with the new configuration. To see this, run

 

docker service ps af18

 

Output

 

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
llueiqx8ke86        af18.1              elee3/afserver:18s   worker           Running             Running 8 minutes ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 9 minutes ago

 

During rescheduling, it is entirely possible for Swarm to shift the container to another node. In my case, it shifted from master to worker. It is possible to ensure that the container will only be rescheduled on a specific node by using a placement constraint.

 

docker service update --constraint-add node.hostname==master --detach=false af18

 

We can check the service state to confirm.

 

docker service ps af18

 

Output

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
r70qwri3s435        af18.1              elee3/afserver:18s   master            Running             Starting 9 seconds ago
llueiqx8ke86         \_ af18.1          elee3/afserver:18s   worker           Shutdown            Shutdown 9 seconds ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 2 hours ago

 

Now, the service will only get scheduled on the master node. You will now be able to connect with PSE on the master node using the hostname 'af18'.

 

When you are done with the service, you can remove it.

docker service rm af18

 

Conclusion

In this article, we have learnt how to set up a 2-node Swarm cluster consisting of one master and one worker. We scheduled an AF Server swarm service on the cluster and updated its configuration without needing to recreate the service. The Swarm takes care of scheduling the service's tasks on the appropriate node; we do not need to do it manually ourselves. We have also seen how to control the location of the tasks by adding a placement constraint. In the next part of the Swarm series, we will take a look at Secrets and Configs management within Swarm. Stay tuned for more!

Introduction

Lately, I've been experimenting with Microsoft Power BI and I'm impressed by how mature the tool is. The application not only supports a myriad of data sources, but now there is even a Power Query SDK that allows developers to write their own Data Connectors. Of course, the majority of my experiments use data from PI Points and AF Attributes and, because Power BI is very SQL-oriented, I end up using PI OLEDB Enterprise most of the time. But let's face it: writing a query can be tricky and is not a required skill for most Power BI users. So I decided to create a simple PI Web API Data Connector for Power BI. The reason I decided to use PI Web API is that the main use-case for the Data Connector is "Create a business analyst friendly view for a REST API". Also, there's no reason to install additional data providers.

 

Important Notes

Keep in mind that this tool is meant to help in small experiments where a PI and Power BI user wants to get some business intelligence done on PI data quickly. For production use cases, where there is a serious need for shaping data views or where scale is of importance, we highly recommend the PI Integrator for Business Analytics. Also, to avoid confusion, let me be clear that this is not a PI Connector. Microsoft named these extensions to Power BI "Data Connectors" and they are not related to our PI Connector products.

 

The custom Data Connector is a Power BI beta feature, so it may break with the release of newer versions. I will do my best to keep it updated, but please leave a comment if there's something not working. It's also limited by Power BI's capabilities, which means it currently only supports basic and anonymous authentication for web requests. If the lack of Kerberos support is a no-go for you, please refer to this article on how to use PI OLEDB Enterprise.

 

Preparation

If you are using the latest version of Power BI (October 2018), you should enable custom data connectors by going to File / Options and Settings / Options / Security and then lowering the security for Data Extensions to "Allow any extension to load".

 


 

For older versions of Power BI, you have to enable Custom data connectors. It's under File / Options and Settings / Options / Preview features / Custom data connectors.

 


 

This should automatically create a [My Documents]\Power BI Desktop\Custom Connectors folder. If it's not there, you can create it yourself. Finally, download the file (PIWebAPIConnector.mez) at the end of this article, extract it from the zip and manually place it there. If you have a Power BI instance running, you need to restart it before using this connector. Keep in mind that custom data connectors were introduced in April 2018, so versions before that will not be able to use this extension.

 

How to use it

You first have to add a new data source by clicking Get Data / More and finding the PI Web API Data Connector under Services.

 


Once you click the Connect button, a warning will pop up to remind you that this is a preview connector. Once you acknowledge it, a form will be presented and you must fill it in with the appropriate data:

 


 

Here's a short description of the parameters:

 

  • PI Web API Server: The URL where the PI Web API instance is located. Allowed values: a valid URL, e.g. https://server/piwebapi
  • Retrieval Method: The method to get your data. Allowed values: Recorded, Interpolated
  • Shape: The shape of the table. List is a row for every data entry, while Transposed is a column for every data path. Allowed values: List, Transposed
  • Data Path: The full path of a PI Point or an AF Element. Multiple values are allowed, separated by a semicolon (;). Example: \\PIServer\Sinusoid;\\AFServer\DB\Element|Attribute
  • Start Time: The timestamp at the beginning of the data set you are retrieving. Allowed values: a PI Time or a timestamp, e.g. *-1d, 2018-01-01
  • End Time: The timestamp at the end of the data set you are retrieving. Allowed values: a PI Time or a timestamp, e.g. *-1d, 2018-01-01
  • Interval: Only necessary for Interpolated. The frequency at which the data will be interpolated. Allowed values: a valid time interval, e.g. 10m

 

Once you fill in the form with what you need, hit the OK button and you may be asked for credentials. It's important to mention that, for the time being, there is no support for any authentication method other than anonymous and basic.

 


 

After selecting the appropriate login method, a preview of the data is shown:

 


 

If it seems all right, click Load and a new query result will be added to your data.

 

List vs Transposed

In order to cover the basic usage of data, I'm offering two different ways to present the query result. The first one is List, where you can get data and some important attributes that can be useful from time to time:

 


 

If you don't want the metadata and are only interested in the data, you may find Transposed to be a tad more useful as I already present the data transposed for you:

 


 

Note that for Digital data points I only bring the string value when using Transposed. And that's it. Now you can use the data the way you want!

 


 

The Code

The GitHub repository for this project is available here. If you want to extend the code I'm providing and have not worked with the M language before, I strongly suggest you watch this live-coding session. Keep in mind that it's a functional language that excels in ETL scenarios, but is not yet fully mature, so expect some quirkiness like the mandatory variable declaration for each instruction, lack of flow control and no support for unit testing.

 

Final Considerations

I hope this tool can make it easier for you to quickly load PI System data into your Power BI dashboard for small projects and use cases. Keep in mind that this extension is not designed for production environments, nor is it a full-featured ETL tool. If you need something more powerful and able to handle huge amounts of data, please consider using the PI Integrator for Business Analytics. If you have questions, suggestions or a bug report, please leave a comment.

Introduction

 

If you are getting started with PI Notifications, setting up your AF tree and creating new notification rules linked to event frames, you probably want to test them before running in production. The traditional way to test is to use your enterprise's SMTP server and send all the e-mails from PI Notifications to your e-mail account. If your notifications are sent to a group of people, you need to check with them whether they are receiving the e-mails properly.

 

If you are a developer, you might be interested in this new approach for testing by using a custom SMTP server. This will help you test your notification rules more efficiently before running in production. The idea is that the custom SMTP server will listen on port 25 and display information on the console about the e-mails sent by PI Notifications.

 

Getting started

 

You can find the source code of the program here.

 

Open Visual Studio and create a .NET Core Console Application. Then add 4 libraries with the following commands:

 

Install-Package MailKit -Version 2.0.6
Install-Package MimeKit -Version 2.0.6
Install-Package SmtpServer -Version 5.3.0
Install-Package HtmlAgilityPack -Version 1.8.9

 

 

This will install our custom SMTP server core library, some additional libraries to handle e-mails programmatically, and the Html Agility Pack, which will be described later. Please refer to their GitHub repositories for more information.

 

Writing the program

 

Based on the README.MD of their GitHub repository, we have created the application below:

 

using CustomSmtpServerForPINotifications.Models;
using HtmlAgilityPack;
using SmtpServer;
using SmtpServer.Authentication;
using SmtpServer.Mail;
using SmtpServer.Protocol;
using SmtpServer.Storage;
using System;
using System.Threading;
using System.Threading.Tasks;


namespace CustomSmtpServerForPINotifications
{
    class Program
    {
        static void Main(string[] args)
        {
            var options = new SmtpServerOptionsBuilder()
             .ServerName("localhost")
             .Port(25, 587)
             .MessageStore(new SampleMessageStore())
             .MailboxFilter(new SampleMailboxFilter())
             .UserAuthenticator(new SampleUserAuthenticator())
             .Build();


            var smtpServer = new SmtpServer.SmtpServer(options);
            smtpServer.StartAsync(CancellationToken.None).Wait();
        }
    }


    public class SampleMessageStore : MessageStore
    {
        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;


            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            return Task.FromResult(SmtpResponse.Ok);
        }
    }




    public class SampleMailboxFilter : IMailboxFilter, IMailboxFilterFactory
    {
        public Task<MailboxFilterResult> CanAcceptFromAsync(ISessionContext context, IMailbox @from, int size = 0, CancellationToken token = default(CancellationToken))
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }


        public Task<MailboxFilterResult> CanDeliverToAsync(ISessionContext context, IMailbox to, IMailbox @from, CancellationToken token)
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }


        public IMailboxFilter CreateInstance(ISessionContext context)
        {
            return new SampleMailboxFilter();
        }
    }




    public class SampleUserAuthenticator : IUserAuthenticator, IUserAuthenticatorFactory
    {
        public Task<bool> AuthenticateAsync(ISessionContext context, string user, string password, CancellationToken token)
        {
            return Task.FromResult(true);
        }


        public IUserAuthenticator CreateInstance(ISessionContext context)
        {
            return new SampleUserAuthenticator();
        }
    }
}



 

Running the console application will start the custom SMTP server, which will keep monitoring ports 25 and 587. When it receives an e-mail, it will display its information along with the HTML body of the message.
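
If you want to exercise the server before involving PI Notifications at all, a few lines of MailKit can push a test message into it. This is only a sketch; the addresses, host name and port are assumptions for a local test.

using MimeKit;
using MailKit.Net.Smtp;
using MailKit.Security;

class SmtpSmokeTest
{
    static void Main()
    {
        // Build a minimal test message (addresses are placeholders)
        var message = new MimeMessage();
        message.From.Add(new MailboxAddress("PI Notifications", "notifications@test.com"));
        message.To.Add(new MailboxAddress("Operator", "operator@test.com"));
        message.Subject = "Custom SMTP server smoke test";
        message.Body = new TextPart("html") { Text = "<html><body><p>Name: TestRule</p></body></html>" };

        using (var client = new SmtpClient())
        {
            // Connect without TLS, since this is a local test listener
            client.Connect("localhost", 25, SecureSocketOptions.None);
            client.Send(message);
            client.Disconnect(true);
        }
    }
}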

 

We can test it by opening the Email Delivery Channel Configuration through PI System Explorer, typing the IP address of the machine running the custom SMTP server and clicking on the "Test..." button.

 

 

The e-mail is shown on the console application as expected. Note that the e-mail information is sent to the custom SMTP server, but the custom SMTP server never tries to relay it to the SMTP server of the test.com domain. This is what makes it a very good tool for testing purposes.

 

 

 

Let's validate that the e-mail has all the information we need when it carries the notification content. We are going to use the Html Agility Pack library to extract information from the HTML.

 

 

        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;


            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            NotificationRequest request = GenerateRequestFromEmail(message.HtmlBody);
            if (request != null)
            {
                Console.WriteLine("AttributeFullPath: " + request.AttributeFullPath);
                Console.WriteLine("NotificationRuleName: " + request.NotificationRuleName);
                Console.WriteLine("StartTime: " + request.StartTime);
            }
            return Task.FromResult(SmtpResponse.Ok);
        }




        private NotificationRequest GenerateRequestFromEmail(string htmlBody)
        {
            try
            {
                NotificationRequest request = new NotificationRequest();
                var doc = new HtmlDocument();
                doc.LoadHtml(htmlBody);
                var nodes = doc.DocumentNode.SelectNodes("/html[1]/body[1]/div[1]/p");
                foreach (var node in nodes)
                {
                    string[] texts = node.InnerText.Split(':');
                    if (texts[0].ToLower().Trim() == "name")
                    {
                        request.NotificationRuleName = texts[1].Trim();
                    }
                    if (texts[0].ToLower().Trim() == "start time")
                    {
                        request.StartTime = node.InnerText.Replace(texts[0] + ":", string.Empty);
                    }
                    if (texts[0].ToLower().Trim() == "attribute path")
                    {
                        request.AttributeFullPath = texts[1].Trim();
                    }
                }
                return request;
            }
            catch (Exception)
            {
                return null;
            }
        }

 

After recompiling the project in Visual Studio and running the custom SMTP server again, it is possible to see the details of the notification received:

 

 

Conclusion

Although this custom SMTP server shouldn't be used for production, I am sure that it can be a good friend for developers and PI admins when they need to test their notifications.

 

Please share your thoughts with me and the community!!

Introduction

 

When I started as a vCampus Support Engineer back in 2012, one of my favorite materials to learn about our PI Developer Technologies was the material written for our vCampus Live! events. Recently, I've taken a look at the vCampus Live! 2012 workbooks and I found the "Develop Custom Delivery Channels for Use with PI Notifications" hands-on lab. In this lab, it was shown how to develop a custom delivery channel that would write notifications to the Windows Event Log using PI Notifications 2012, Visual Studio 2012 and .NET Framework 3.5.

 

The new generation of PI Notifications (starting with the 2016 release) uses Event Frames instead of PI Points to store historical data and does not support custom delivery channels. Nevertheless, it integrates with custom web services.

 

In this blog post, I will show you how to develop a custom web service that not only integrates with the newer releases of PI Notifications but also writes notification events to the Windows Event Log. This time we are going to use Visual Studio 2017 and ASP.NET Core 2.1.

 

Creating a delivery endpoint

 

Please make sure that you have PI AF 2018 and PI Notifications 2018 installed on your system. If you are interested in setting up new Notification Rules, please refer to this video.

 

Before starting to code, we need to create a new delivery endpoint as shown on the screenshot below. Note that you should select WebService as the delivery channel. REST is the standard nowadays and this is what we are going to use.  We are going to create an action that accepts HTTP POST requests. For this demo, we won't be using any type of authentication although PI Notifications does support Basic and Windows.

 

 

 

 

 

 

 

Creating the ASP.NET Core project with .NET Framework

 

You can download the source code package of this blog post by accessing this GitHub repository.

 

As explained in this blog post, an ASP.NET Core project can target either .NET Framework or .NET Core. We are going to create an ASP.NET Core application targeting .NET Framework because it is easier to integrate with the Windows Event Log. Let's create the project by referring to the screenshot below:

 

 

On the second screen, make sure to select ".NET Framework" and "ASP.NET Core 2.1". This code probably does not work with ASP.NET Core 2.0; the solution would be to update Visual Studio so that it shows newer versions of the platform.

 

 

 

 

 

Adding the ASP.NET Core MVC library

 

Open Package Manager Console and type:

 

Install-Package Microsoft.AspNetCore.Mvc

 

This will install the MVC component of the ASP.NET Core.

 

 

Editing the Startup.cs

 

Now that all the libraries are added, we need to enable MVC by editing Startup.cs.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;


namespace CustomWebServiceForPINotifications
{
    public class Startup
    {
        public IConfiguration IConfiguration { get; }


        public Startup(IConfiguration configuration)
        {
            IConfiguration = configuration;
        }


        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }


        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }


            app.UseMvc();
        }
    }
}

 

 

I have published some blog posts and videos about using ASP.NET Core with PI AF SDK and Angular. Please refer to the Developing Web Apps on top of the PI System Hub.

 

 

Creating the model

 

Create a new folder in the project root named Models and create a new class called NotificationRequest.cs. This .NET class will be used to process the JSON created by PI Notifications according to the configured WebService. In this case, the JSON will have 3 properties: NotificationRuleName, AttributeFullPath and StartTime. Please refer to this example to learn how to create the Notification Rule programmatically.

 

 

Now that the properties were defined, we can write our NotificationRequest class as:

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;


namespace CustomWebServiceForPINotifications.Models
{
    public class NotificationRequest
    {
        public string NotificationRuleName { get; set; }
        public string AttributeFullPath { get; set; }
        public DateTime StartTime { get; set; }
    }
}
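
For reference, the JSON body that PI Notifications will POST, and that this model binds to, should look roughly like the following; the values are placeholders.

{
    "NotificationRuleName": "Notification Rule Test",
    "AttributeFullPath": "\\\\AFServer\\DB\\Element1|Attribute1",
    "StartTime": "2018-11-01T10:00:00Z"
}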

 

 

Creating the controller

 

Create a new folder on the project root named Controllers and create a new class called NotificationReceiverController.cs

 

This controller has two actions: Get and Post. The first action is used to make sure that the web site is up and running properly. The second action has the same code as the hands-on lab, used to write text to the Windows Event Log. If the operation is successful, it will return a 201 status code. If not, it will return a bad request error.

 

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using CustomWebServiceForPINotifications.Models;


namespace CustomWebServiceForPINotifications.Controllers
{
    [Route("api/notifications-receiver")]
    [ApiController]
    public class NotificationReceiverController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new List<int> { 1, 2, 3, 4 });
        }


        [HttpPost]
        public IActionResult Post([FromBody]NotificationRequest request)
        {


            try
            {
                string sSource = "PI Notifications Delivery Channel Log";
                string sLog = "Application";
                if (!System.Diagnostics.EventLog.SourceExists(sSource))
                    System.Diagnostics.EventLog.CreateEventSource(sSource, sLog);


                string message = String.Format("Notification: {0} triggered for asset {1} at {2}", request.NotificationRuleName,
                    request.AttributeFullPath.ToString(), request.StartTime.ToString());
                System.Diagnostics.EventLog.WriteEntry(sSource, message, System.Diagnostics.EventLogEntryType.Information);
                return StatusCode(201);


            }
            catch (Exception ex)
            {
                return BadRequest(ex);
            }
        }


    }
}



 

Publishing to IIS

 

Now that the code is ready, we should publish the application to IIS. Please refer to the video ASP.NET Core 2 (Web API) and Angular with PI AF SDK: Part 5 - Publishing the application to IIS for more information. You will have to download the runtime version of ASP.NET Core and make sure the service running the web site has enough privileges on the folder with the web site files.

 

We can easily see if our web service is running as expected by testing it with our browser:

 

 

If there is something wrong, you will receive an error message.
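
You can also simulate the POST that PI Notifications will send with a quick PowerShell call before going back to PI System Explorer. This is only a sketch; the host name and payload values are placeholders, and the route comes from the controller above.

# Hypothetical smoke test of the notifications-receiver endpoint
$body = @{
    NotificationRuleName = "Notification Rule Test"
    AttributeFullPath    = "\\AFServer\DB\Element1|Attribute1"
    StartTime            = (Get-Date).ToString("o")
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "http://<webserver>/api/notifications-receiver" -ContentType "application/json" -Body $body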

 

 

Testing the integration between PI Notifications and the Custom Web Service

 

Ok, it is time to test our Notification. Let's go back to the Web Service Configuration window and click on the "Test Send" button. The result of the operation will be shown on the bottom of the Window.

 

 

We can confirm that the operation was successful by opening the Windows Event Viewer on the machine hosting our web service.

 

 

 

 

Conclusion

 

PI Notifications 2016 does not allow developing custom delivery channels, but it does allow you to integrate with custom web services. I hope that this blog post will help you develop great integrations with the new generation of PI Notifications!!

Below is a link to the presentation given at Barcelona on September 27, 2018 for the LiveCoding session titled "Getting the Most Out of AFSearch".

 

Link to Recording:     LiveCoding--Getting-the-Most-Out-of-the-AFSearch

 

GitHub Repository:     AF-SDK -PIWorld-EMEA-2018-AFSearch-LiveCoding

 

While some of it is a repeat of what was presented at San Francisco in April, the beginning section features new material on the upcoming AF SDK 2.10.5 release.  The new features discussed:

 

  • OR clauses in searches.
  • FindObjects replaces the now deprecated FindElements, FindEventFrames, FindAttributes, etc.
  • AFSearchToken structure being replaced by new AFSearchBaseToken abstract class with 4 concrete classes: AFSearchFilterToken, AFSearchQueryToken, AFSearchOrToken, and AFSearchExpressionToken.
  • AFSearchToken will be mapped to new AFSearchBaseToken, if possible.
  • If your query uses an OR, the AFSearch.Tokens property will throw an exception.
  • New AFSearch.TokensCollection property is meant to replace Tokens functionality and works with new AFSearchBaseToken.
  • AFSearchToken and AFSearch.Tokens property are both marked as deprecated.

We are excited to present the PI World Innovation Hackathon EMEA 2018 winners!

 

DEME kindly provided a sample of their data with sensors, jack-up vessels and soil models information. Participants were encouraged to create killer applications for DEME by leveraging the PI System infrastructure.

 


 

The participants had 23 hours to create an app using any of the following technologies:

  • PI Server 2018
  • PI Web API 2018
  • PI Vision 2017 R2

 

Our judges evaluated each app based on their creativity, technical content, potential business impact, data analysis and insight and UI/UX. Although it is a tough challenge to create an app in 23 hours, 4 groups were able to finish their app and present to the judges!

 

Prizes:

1st place: Intel NUC Barebone (Core 3-7100U, 120 GB SSD, 8 GB RAM), one year free subscription to PI Developers Club, one time free registration at OSIsoft PI World over the next 1 year

2nd place: Bose SoundLink Around-Ear Wireless Headphones II (black), one year free subscription to PI Developers Club, 50% discount for registration at OSIsoft PI World over the next 1 year

3rd place: Raspberry Pi 3 Model B+ Retro Arcade Gaming Kit incl. 2 classic controllers and one year free subscription to PI Developers Club

 

 

Without further ado, here are the winners!

 

1st place - AG Solution

 

The team members were: Sergio Hernandez, Juri Krivoruchko and Marc Torralba

 

 


 

Team AG Solution developed an application on top of PI AF SDK and PI Vision. They built a custom data reference that uses the Stochastic Dual Coordinate Ascent classifier from ML.NET to detect the current state of the vessel from the direction of the state change.

 

The team used the following technologies:

  • PI AF SDK
  • ML.NET
  • PI Vision

 

Here are some screenshots presented by the AG Solution team!

 

 

 

 

 

 

 

 

2nd place - M.E.S.S

 

The team members were:  David Rodriguez, Leandro Hideo, Alexander Hosefelder and Alexander Dixon

 


 

Team M.E.S.S developed an algorithm in R to detect sensor anomalies using machine learning.

 

The team used the following technologies:

  • R
  • PI Web API
  • PI Web API package for R

 

Here are some screenshots presented by M.E.S.S!

 

 

 

 

 

 

3rd place - Werusys Cologne

 

The team members were: Kai Weber, Ansgar Backhaus and Julian Weber

 

 


 

Team Werusys Cologne developed an application to analyze windmill installation case data based on a hidden Markov model.

 

The team used the following technologies:

  • PI Web API
  • Seeq

 

Here are some screenshots presented by Werusys Cologne!

 

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In this blog post about security and containers, we will discuss implementing a Kerberos double hop from the client machine to the PI Web API container and finally to the PI Data Archive container. Previously, when we used the PI Web API container located here Spin up PI Web API container (AF Server included), we used local accounts for authentication to the backend servers such as the AF Server or the PI Data Archive. The limitation is that without Kerberos delegation, we will not be able to have per-user security, which means that all users of PI Web API will have the same permissions; i.e. an operator can read the sensitive tags that were meant for upper management and vice versa. Obviously, this is not ideal. What we want is more granularity in assigning permissions to the right people so that they can only access the tags that they are supposed to read.

 

Prerequisites

You will need to have 2 GMSA accounts. You can request such accounts from your IT department; they can refer to this blog post if they do not know how to create a GMSA: Spin up AF Server container (Kerberos enabled). Also make sure that one of them has the TrustedForDelegation property set to True. This can be done with the Set-ADServiceAccount cmdlet.
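
A minimal sketch of that step, assuming the Active Directory PowerShell module is available and 'trusted' is the GMSA that will run PI Web API:

# Mark the PI Web API GMSA as trusted for Kerberos delegation
Import-Module ActiveDirectory
Set-ADServiceAccount -Identity trusted -TrustedForDelegation $true

# Verify the setting
Get-ADServiceAccount -Identity trusted -Properties TrustedForDelegation | Select-Object Name, TrustedForDelegation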

 

You will also need to build the PI Data Archive container by following the instructions in the Build the image section here.

PI Data Archive container health check

 

For the PI Web API container, you will need to pull it from the repository by using this command.

docker pull elee3/afserver:webapi18

 

Demo without GMSA

First, let us demonstrate what authentication looks like when we run containers without a GMSA.

 

Let's have a look at the various authentication modes that PI Web API offers.

1. Anonymous

2. Basic

3. Kerberos

4. Bearer

For a more detailed explanation about each mode, please refer to this page.

 

We will only be going through the first 3 modes as Bearer requires an external identity provider which is out of the scope of this blog.

 

Create the PI Data Archive container and the PI Web API container. We will also create a local user called 'enduser' in the two containers.

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h wa --name wa elee3/afserver:webapi18
docker exec wa net user enduser qwert123! /add
docker exec pi net user enduser qwert123! /add

 

Anonymous

Now let's open up PSE and connect to the hostname "wa". If prompted for the credentials, use

Username: afadmin

Password: qwert123!

Change the authentication to Anonymous and check in the changes. Restart the PI Web API service.

Verify that the setting has taken effect by using internet explorer to browse to /system/configuration. There will be no need for any credentials.

 

We can now try to connect to the PI Data Archive container with this URL.

https://wa/piwebapi/dataservers?path=\\pi

 

Check the PI Data Archive logs to see how PI Web API is authenticating.

Result: With Anonymous authentication, PI Web API authenticates with its service account using NTLM.

 

Basic

Now use PSE to change the authentication to Basic and check in. Restart the PI Web API service.

Close Internet Explorer and reopen it to point to /system/configuration to check the authentication method. This time, there will be a prompt for credentials. Enter

Username: enduser

Password: qwert123!

Try to connect to the same PI Data Archive as earlier. You will get an error, as the default PI Data Archive container doesn't have any mappings for 'enduser'.
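 

The same attempt can be scripted; a hedged sketch, with the same certificate considerations as the Anonymous check above:

# Supply the local 'enduser' account created earlier when PI Web API challenges for Basic credentials
$cred = Get-Credential -UserName enduser -Message "PI Web API Basic authentication"
Invoke-RestMethod -Uri "https://wa/piwebapi/dataservers?path=\\pi" -Credential $cred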

Let's see what is happening on the PI Data Archive side.

Result: With Basic authentication, the end user credential has been transferred to the PI Data Archive with NTLM.

 

Kerberos

Finally, use PSE to change the authentication to Kerberos and check in. Restart the PI Web API service.

Close Internet Explorer and reopen it to point to /system/configuration to check the authentication method. The credentials prompt will look different from the Basic authentication one. Use the same credentials as in the Basic authentication scenario.

Try to connect to the same PI Data Archive again. You should not be able to connect. When you check the PI Data Archive logs, you will see the following:

Result: With Kerberos authentication, the delegation failed and the credential became NT AUTHORITY\ANONYMOUS LOGON even though we logged on to PI Web API with the local account 'enduser'.

 

Demo with GMSA

Kerberos

Now we shall use the GMSA accounts to make the last scenario, Kerberos with delegation, work.

Download the scripts for Kerberos enabled PI Data Archive and PI Web API here.

PI-Web-API-container/New-KerberosPWA.ps1

PI-Data-Archive-container-build/New-KerberosPIDA.ps1

 

I will use 'untrusted' as the name of the GMSA account that is not trusted for delegation and 'trusted' as the name of the GMSA account that is trusted for delegation. Set the SPN for 'trusted' as follows:

setspn -s HTTP/trusted trusted
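 

To confirm the registration, setspn can also list the SPNs on the account (depending on your domain, you may need to qualify the account name, e.g. DOMAIN\trusted$):

setspn -l trusted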

 

Once you have the scripts, run them like this

.\New-KerberosPIDA.ps1 -AccountName untrusted -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName trusted -ContainerName wak

 

The scripts create a credential spec for the container based on the GMSA that you provide. A credential spec lets the container know how it can access Active Directory resources. The scripts then use this credential spec in the docker run command that creates the container, and they set the hostname of the container to the same name as the GMSA. This is required by a current limitation of the implementation; it may be resolved in the future so that you can choose your own hostnames.
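 

For reference, a minimal sketch of what the scripts do under the hood. The New-CredentialSpec parameters and the generated file name are assumptions that depend on the version of Microsoft's CredentialSpec module; the docker flags are standard:

# Assumption: the CredentialSpec module is installed (Install-Module CredentialSpec)
# and 'trusted' is the GMSA used for the PI Web API container.
New-CredentialSpec -AccountName trusted
# Run the container with the credential spec and a hostname matching the GMSA name
# (adjust the file name to whatever the module generated under C:\ProgramData\docker\CredentialSpecs)
docker run -h trusted --name wak --security-opt "credentialspec=file://trusted.json" elee3/afserver:webapi18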

 

Open Internet Explorer now with your domain account and access the PI Web API endpoint /system/userinfo. The hostname is 'trusted'.

Make sure that ImpersonationLevel is 'Delegation'.
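 

A quick way to check this from PowerShell, as a hedged sketch (it assumes you are logged on with your domain account and the certificate considerations mentioned earlier are handled):

# /system/userinfo reports how PI Web API sees the caller; we want ImpersonationLevel = Delegation
(Invoke-RestMethod -Uri "https://trusted/piwebapi/system/userinfo" -UseDefaultCredentials).ImpersonationLevel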

 

Now try to access the PI Data Archive. The hostname is 'untrusted'. You will not be able to connect. Why? Because you haven't created a mapping yet! So let's use SMT to create a mapping to your domain account. After creating the mapping, try again and you should be able to connect. The PI Data Archive logs will show that you have connected with Kerberos. You do not need any mapping for your PI Web API service account at all if Kerberos delegation is working properly.
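 

If you would rather script the mapping than use SMT, here is a hedged sketch using the PowerShell Tools for the PI System (OSIsoft.PowerShell). The account name and PI identity below are placeholders, and the exact parameter names should be checked against your installed version of the cmdlets:

# Assumption: 'MYDOMAIN\myuser' is your domain account and 'piadmins' is the PI identity to map it to.
$con = Connect-PIDataArchive -PIDataArchiveMachineName untrusted
Add-PIMapping -Connection $con -Name "DomainUserMapping" -PrincipalName "MYDOMAIN\myuser" -Identity "piadmins"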

 

Result: With Kerberos authentication method in PI Web API and the use of GMSAs, Kerberos delegation works. The end domain user is delegated from the client to the PI Web API container to the PI Data Archive container. We have successfully completed the double hop.

 

Troubleshoot

If this doesn't seem to work for you, one thing you can try is to check the Internet Explorer settings according to this KB article.

KB01223 - Kerberos and Internet Browsers

Your browser settings might differ from mine but the container settings should be the same since the containers are newly created.

 

Alternative: Resource Based Constrained Delegation

A more secure way to do Kerberos delegation, instead of trusting the PI Web API container GMSA for delegation, is to set the property "PrincipalsAllowedToDelegateToAccount" on the PI Data Archive container GMSA. This is called Resource Based Constrained Delegation (RBCD). You do not have to trust any GMSAs for delegation in this scenario, but you will still need two GMSAs.

 

This assumes that you have already created the two containers with the scripts found above. I will use 'pida' as the name of the PI Data Archive container GMSA and 'piwebapi' as the name of the PI Web API container GMSA.

.\New-KerberosPIDA.ps1 -AccountName pida -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName piwebapi -ContainerName wak

 

Execute these two additional commands to enable RBCD.

docker exec pik powershell -command "Add-WindowsFeature RSAT-AD-PowerShell"
docker exec pik powershell -command "Set-ADServiceAccount $env:computername -PrincipalsAllowedToDelegateToAccount (Get-ADServiceAccount piwebapi)"
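 

To verify that RBCD is in place, you can read the property back from any machine with the RSAT AD PowerShell module:

# Lists the accounts allowed to delegate to the PI Data Archive GMSA; 'piwebapi' should appear
Get-ADServiceAccount pida -Properties PrincipalsAllowedToDelegateToAccount | Select-Object -ExpandProperty PrincipalsAllowedToDelegateToAccount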

 

You will still be able to connect with Kerberos delegation from the client machine to the PI Web API container to the PI Data Archive container. In this case, the PI Data Archive container only strictly allows delegation from the PI Web API container with 'piwebapi' as its GMSA.

 

Conclusion

We have seen that containers are able to utilize Kerberos delegation with the use of GMSAs. This is important for middleware server containers such as PI Web API. Here is a quick summary of the various results that we have seen.

Authentication Mode | No GMSA                          | With GMSA
Anonymous           | NTLM with service account        | No reason to do this
Basic               | NTLM with local end user account | No reason to do this
Kerberos            | NTLM with anonymous logon        | Kerberos delegation with domain end user account

 

The interesting thing is that Basic authentication can also provide per-user security with local end user accounts, but you would need to maintain the list of local users in the PI Web API container and the PI Data Archive container separately, which is not recommended. The ideal approach is to go with Kerberos delegation.

ATTENTION developers, data scientists, IT cybersecurity professionals, and power users. For lack of a better word, I will refer to you collectively as "developers".

 

Developers coming to PI World Barcelona may notice more offerings than ever before.  We proudly proclaim that Barcelona will have the most robust Developer Agenda (see link) ever seen at UC or PI World EMEA.  This includes the Developer Innovation Hackathon on Day 0 (see link), thanks to our data sponsor DEME.

 

Besides the traditional hands-on labs, which require an additional fee and pre-registration, the Day 3 agenda is chock full of technical content aimed specifically at developers, thanks to 90-minute Live Coding or How-To sessions. These in-depth technical talks require neither a fee nor pre-registration, and you are free to come and go as you please. Make no mistake about it ... just because we call the LiveCoding sessions "talks" does not mean they contain less technical information than the labs. We expect to offer even more such talks at future PI World events because we think it makes for a better event for you. Our reasoning: you can attend 2 labs on Day 3 for $300, or you can sit in on 4 LiveCoding talks for free. Who can argue against more technical training for less cost? (Tip: if you are trying to convince your boss to send you to PI World, presenting it as a major training event (which it is) could be a strong justification to attend.)

 

I invite you once again to review the Day 3 agenda. You will see an Analytics Track and a Developer Track. One late correction: the PI Admin Track is not really for PI admins and should be considered a second Developer Track.
