
# PI Developers Club

17 Posts authored by: Eugene Lee

# Spin up PI to PI Interface container

Posted by Eugene Lee Nov 19, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

Until now, installing a PI interface on a separate node from the PI Data Archive meant provisioning a dedicated physical or virtual machine just for the interface itself. Don't you think that is a bit of a waste of resources? To address this, we can containerize interfaces so that they become more portable, which allows them to be scheduled anywhere inside your computing cluster. Their batch file configuration also makes them good candidates for lifting and shifting into containers.

We will start off by introducing the PI to PI interface container which is the first ever interface container! It will have buffering capabilities (via PI Buffer Subsystem) and its performance counters will also be active.

Set up servers

First, let me spin up 2 PI Data Archive containers to act as the source and destination servers. Check out this link on how to build the PI Data Archive container.

PI Data Archive container health check

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h pi1 --name pi1 -e trust=%computername% pidax:18


For the source code to build the PI Data Archive container and the PI to PI Interface container, please send an email to technologyenablement@osisoft.com. This is a short-term measure to obtain the source code while we are revising our public code sharing policies.

We shall be using pi1 as our source and pi as our destination.

Let's open up PI SMT to add the trust for the PI to PI Interface container. Do this on both PI Data Archives.

The IP address and NetMask are obtained by running ipconfig on your container host.

The reason I set the trusts this way is that the containers are guaranteed to spawn within this subnet, since they are attached to the default NAT network. Therefore, the 2 PI Data Archive containers and the PI to PI Interface container are all in this subnet. Container to container connections are bridged through an internal Hyper-V switch.
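
If you want to double-check that subnet before creating the trusts, you can also query the default NAT network directly instead of running ipconfig. This is a minimal sketch; it assumes the default Windows container network, which is simply named nat.

docker network inspect nat -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}"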

On pi, create a PI Point, giving it any name you want (my PI Point shall be named 'cdtclone'). Configure the other attributes of the point as follows:

Point Source: pitopi
Exception: off
Compression: off
Location1: 1
Location4: 1
Instrument Tag: cdt158


Leave the other attributes as default. This point will be receiving data from cdt158 on the source server. This is specified in the instrument tag attribute.

Set up interface

Now you are all set to proceed to the next step which is to create the PI to PI Interface container!

You can easily do so with just one command. Remember to login to Docker with the usual credentials.

docker run -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi


The environment variables that you can configure include

host: destination server

src: source server

ps: point source

Those are all the parameters that are supported for now.

You should be able to see data appearing in the cdtclone tag on the destination server now.

Don't you think it was very quick and easy to get started?

Buffer

As I mentioned before, the container also has buffering capabilities. We shall consider 2 scenarios.

1. The destination server is stopped. Same effect as losing network connectivity to the destination server.

2. The PI to PI interface container is destroyed.

Scenario 1

Stop pi.

docker stop pi


Wait for a few minutes and run

docker exec p2p cmd /c pibufss -cfg


You should see the following output, which indicates that the buffer is working and actively queuing data in anticipation of the destination server coming back up.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 18:39:23, regid: 3
lastsend: 2-Nov-18 18:58:59
total events sent: 47, snapshot posts: 42, queued events: 8


When we start up pi again

docker start pi


Wait a few minutes before running pibufss -cfg again. You should now see

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: SendingData, successful connections: 2
PI identities: piadmins | PIWorld, auth type: SSPI
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 19:07:24, regid: 3
total events sent: 64, snapshot posts: 45, queued events: 0


The buffer has re-registered with the server and flushed the queued events to the server. You can check the archive editor to make sure the events are there.

Scenario 2

Stop pi just so that events will start to buffer.

docker stop pi


Check that events are getting buffered.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 13-Nov-18 15:25:07, lastreg: 13-Nov-18 15:25:08, regid: 3
lastsend: 13-Nov-18 17:54:14
total events sent: 8901, snapshot posts: 2765, queued events: 530


Now while pi is still stopped, stop p2p.

docker stop p2p


Check the volume name that was created by Docker.

docker inspect p2p -f "{{.Mounts}}"


Output as below. The volume name is the long hexadecimal string; save that name somewhere.

[{volume 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17 C:\ProgramData\docker\volumes\76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17\_data c:\programdata\osisoft\buffering local true }]
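
If you prefer not to copy the name out of that output by hand, the same value can be extracted with a Go template. This is a small sketch that assumes the buffer volume is the only mount on the container.

docker inspect p2p -f "{{(index .Mounts 0).Name}}"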


Now you can destroy p2p and start pi

docker rm p2p
docker start pi


Use archive editor to verify that data has stopped flowing.

The last event was at 5:54:13 PM.

We want to recover the data that are in the buffer queue files. We can create a new PI to PI interface container pointing to the saved volume name.

docker run -v 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi


And VOILA! The events in the buffer queues have all been flushed into pi.

To be sure that the recovered events are not due to history recovery by the PI to PI interface container, I have disabled it.

I have demonstrated that the events in the buffer queue files were persisted across container destruction and creation as the data was persisted outside the container.
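
One way to avoid having to dig out the auto-generated volume name in the first place is to create a named volume up front and mount it when the interface container is first created. This is only a sketch; it assumes the image buffers to %programdata%\osisoft\buffering as shown above, and the volume name p2pbuffer is just an example.

docker volume create p2pbuffer
docker run -v p2pbuffer:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi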

Performance counters

The container also has performance counters activated. Let's try to get the value of Device Status. Run the following command in the container.

Get-Counter '\pitopi(_Total)\Device Status'


Output

Timestamp CounterSamples
--------- --------------
11/2/2018 7:24:14 PM \\d13072c5ff8b\pitopi(_total)\device status :0


Device status is 0 which means healthy.

What if we stopped the source server?

docker stop pi1


Now run the Get-Counter command again and we will expect to see

Timestamp CounterSamples
--------- --------------
11/2/2018 7:29:29 PM \\d13072c5ff8b\pitopi(_total)\device status :95


A device status of 95 means a network communication error to the source PI server.

These performance counters will be perfect for writing health checks against the interface container.
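
For instance, a health test in the style of the check.ps1 scripts from my health check posts could read the counter and turn it into an exit code. This is only a sketch; the counter path is the one shown above and the script name is hypothetical.

# check-p2p.ps1 - fail the health check whenever Device Status is non-zero
$status = (Get-Counter '\pitopi(_Total)\Device Status').CounterSamples | Select-Object -ExpandProperty CookedValue
if ($status -ne 0)
{
Write-Host "Device Status is $status - interface is unhealthy"
exit 1
}
exit 0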

Conclusion

We have seen in this blog how to use the PI to PI Interface container to transfer data between two PI Data Archive containers. As you know, OSIsoft has hundreds of interfaces. Being able to containerize one means the likelihood of successfully containerizing others is very high. The example in this blog will serve as a proof of concept.

# Containers and Swarm Part 1 (Setup, Service)

Posted by Eugene Lee Oct 26, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

We have learnt much about using containers in previous blog posts. Until now, we have been working with standalone containers. This is great for familiarizing yourself with the concept of containers in general. Today, we shall take the next step in our container journey which is to learn how to orchestrate these containers. There are several container orchestration platforms on the market today such as Docker Swarm, Kubernetes, Service Fabric and Marathon. I will be using Docker Swarm today to illustrate the concept of orchestration since it is directly integrated with the Docker Engine making it the quickest and easiest to set up.

Motivation

Before we even start on the orchestration journey, it is important that we understand the WHY behind it. For someone who is new to all these, the objective of doing this might not be clear. Let me illustrate with two analogies.

One that a layman can understand and another that a PI admin can relate to.

First analogy

Suppose your hobby is baking cakes (containers). You have been hard at work in your kitchen trying to formulate the ultimate recipe (image) for the best chiffon cake in the world. One day, you managed to bake a cake with the perfect taste and texture after going through countless rounds of trial and error of varying the temperature of the oven, the duration in the oven, the amount of each type of ingredient etc. Your entrepreneurial friend advised you to open a small shop selling this cake (dealing with standalone containers in a single node). You decided to heed your friend's advice and did so. Over the years, business boomed and you want to expand your small shop into a chain of outlets (cluster of nodes). However, you have only one pair of hands and it is not possible for you to bake all the cakes that you are going to sell. How are you going to scale beyond a small shop?

Luckily, your same entrepreneurial friend found a vendor called Docker Inc who can manufacture a system of machines (orchestration platform) where you install one machine in each of your outlet stores. These machines can communicate with each other and they can take your recipe and bake cakes that taste exactly the same as the ones that you baked yourself. Furthermore, you can let the machines know how many cakes to bake each hour to address different levels of demand throughout the day. The machines even have a QA tester at the end of the process to test if the cake meets its quality criteria and will automatically discard cakes that fail to replace them with new ones. You are so impressed that you decide to buy this system and start expanding your cake empire.

Second analogy

Suppose you are in charge of the PI System at your company. Your boss has given you a cluster of 10 nodes. He would like you to make an AF Server service spanning this cluster that has the following capabilities

1. able to adapt to different demands to save resources

2. self-healing to maximize uptime

3. rolling system upgrades to minimize downtime

4. easy to upgrade to newer versions for bug fixes and feature enhancements

5. able to prepare for planned outages needed for maintenance

6. automated roll out of cluster wide configuration changes

7. manage secrets such as certificates and passwords for maximum security

How are you going to fulfill his crazy demands? This is where a container orchestration platform might help.

Terminology

Now let us get some terminology clear.

Swarm: A swarm consists of multiple Docker hosts which run in swarm mode and act as managers and workers. A given Docker host can be a manager, a worker, or perform both roles.

Manager: The manager delivers work (in the form of tasks) to workers, and it also manages the state of the swarm to which it belongs. Managers can also run the same services as workers, or you can configure them to run only manager-related services.

Worker: Workers run tasks distributed by the swarm manager. Each worker runs an agent that reports back to the manager about the state of the tasks assigned to it, so the manager can keep track of the work running in the swarm.

Service: A service defines which container images the swarm should use and which commands the swarm will run in each container. For example, it’s where you define configuration parameters for an AF Server service running in your swarm.

Task: A task is a running container which is part of a swarm service and managed by a swarm manager. It is the atomic scheduling unit of a swarm.

Stack: A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

There are two types of service.

Replicated: The swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state.

Global: The swarm manager runs one task for the service on every available node in the cluster.
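
The mode is chosen when the service is created. Here are a couple of quick examples with a placeholder image name (replicated is the default mode):

docker service create --name myreplicated --replicas 3 <image>
docker service create --name myglobal --mode global <image>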

Prerequisites

To follow along with this blog, you will need two Windows Server 2016 Docker hosts. Check out how to install Docker in the Containerization Hub link above.

Set up

Select one of the nodes (we will call it "Manager") and run

docker swarm init


This will output the following

Swarm initialized: current node (vgppy0347mggrbam05773pz55) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377


Now select the other node (we will call it "Worker") and run the join command that was output in the previous step.

docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377


Go back to Manager and run

docker node ls


to list out the nodes that are participating in the swarm. Note that this command only works on manager nodes.
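
If you ever misplace the join command, you can ask the manager to print it again at any time.

docker swarm join-token worker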

Service

Now that the nodes have been provisioned, we can start to create some services.

For this blog, I will be using a new AF Server container image that I have recently developed, tagged 18s. If you have been following my series of blogs, you might be curious about the difference between the tag 18x (last seen here) and 18s. With 18s, the data is now separated from the AF Server application service. What this means is that the PIFD database mdf, ndf and ldf files are now mounted in a separate data volume. The result is that on killing the AF Server container, the data won't be lost and I can easily recreate an AF Server container pointing to this data volume to keep the previous state. This will be useful in future blogs on container fail-over with data persistence.
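
As a rough illustration of what that separation enables, a replacement container can simply reuse the volumes of the old one. The container names below are made up and the exact volume layout of the 18s image is not shown here, so treat this as a sketch.

docker stop af18old
docker run -d -h af18new --name af18new --volumes-from af18old elee3/afserver:18s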

You will need to login with the usual docker credentials that I have been using in my blogs. To create the service, run

docker service create --name=af18 --detach=false --with-registry-auth elee3/afserver:18s


Note: If --detach=false is not specified, the tasks are created in the background and the command returns immediately. If it is specified, the command waits for the service to converge before exiting. I specify it so that I can get some visual output.

Output

goa9cljsek42krqgvjtwdd2nd
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Waiting 6 seconds to verify that tasks are stable...


Now we can list the service to find out which node is hosting the tasks of that service.

docker service ps af18


Once you know which node is hosting the task, go to that node and run

docker ps -f "name=af18."


Output

CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS                        PORTS               NAMES
9e3d26d712f9        elee3/afserver:18s   "powershell -Comma..."   About a minute ago   Up About a minute (healthy)                       af18.1.w3ui9tvkoparwjogeg26dtfz


The output will show the list of containers that the swarm service has started for you. Let us inspect the network that the container belongs to by running docker inspect with the container ID.

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks}}"


Output

map[nat:0xc0420c0180]


The output indicates that the container is attached to the nat network by default if you do not explicitly specify a network to attach to. This means that your AF Server is accessible from within the same container host.

You can get the IP address of the container with

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks.nat.IPAddress}}"


Then you can connect with PSE using the IP address. It is also possible to connect with the container ID as the container ID is the hostname by default.

Now that we have a service up and running, let us take a look at how to change some of the service's configuration. At the moment, the name of the AF Server derives from the container ID, which is a random string. I would like it to have the name 'af18'. I can do so with

docker service update --hostname af18 --detach=false af18


Once you execute that, Swarm will stop the current task that is running and reschedule it with the new configuration. To see this, run

docker service ps af18


Output

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
llueiqx8ke86        af18.1              elee3/afserver:18s   worker           Running             Running 8 minutes ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 9 minutes ago


During rescheduling, it is entirely possible for Swarm to shift the container to another node. In my case, it shifted from master to worker. It is possible to ensure that the container will only be rescheduled on a specific node by using a placement constraint.

docker service update --constraint-add node.hostname==master --detach=false af18


We can check the service state to confirm.

docker service ps af18


Output

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
r70qwri3s435        af18.1              elee3/afserver:18s   master            Running             Starting 9 seconds ago
llueiqx8ke86         \_ af18.1          elee3/afserver:18s   worker           Shutdown            Shutdown 9 seconds ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 2 hours ago


Now, the service will only get scheduled on the master node. You will now be able to connect with PSE on the master node using the hostname 'af18'.
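
If you later want to let Swarm place the task freely again, the constraint can be removed the same way it was added.

docker service update --constraint-rm node.hostname==master --detach=false af18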

When you are done with the service, you can remove it.

docker service rm af18


Conclusion

In this article, we have learnt how to set up a 2-node Swarm cluster consisting of one master and one worker. We scheduled an AF Server swarm service on the cluster and updated its configuration without needing to recreate the service. The Swarm takes care of scheduling the service's tasks on the appropriate node; we do not need to do it manually ourselves. We also saw how to control the location of the tasks by adding a placement constraint. In the next part of the Swarm series, we will take a look at Secrets and Configs management within Swarm. Stay tuned for more!

# Container Kerberos Double Hop

Posted by Eugene Lee Sep 17, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In this blog post about security and containers, we will discuss implementing a Kerberos double hop from the client machine to the PI Web API container and finally to the PI Data Archive container. Previously, when we used the PI Web API container from Spin up PI Web API container (AF Server included), we used local accounts for authentication to backend servers such as the AF Server or the PI Data Archive. The limitation is that without Kerberos delegation, we cannot have per-user security, which means that all users of PI Web API have the same permissions, i.e. an operator can read the sensitive tags that were meant for upper management and vice versa. Obviously, this is not ideal. What we want is more granularity in assigning permissions to the right people so that they can only access the tags that they are supposed to read.

Prerequisites

You will need 2 GMSA accounts. You can request such accounts from your IT department; they can refer to the blog post Spin up AF Server container (Kerberos enabled) if they do not know how to create a GMSA. Also be sure that one of them has the TrustedForDelegation property set to True. This can be done with the Set-ADServiceAccount cmdlet.
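
For reference, setting and verifying that property might look something like the following. This is a sketch; it assumes the GMSA is named trusted, that the ActiveDirectory PowerShell module is available, and that you have sufficient rights in the domain.

Set-ADServiceAccount -Identity trusted -TrustedForDelegation $true
Get-ADServiceAccount -Identity trusted -Properties TrustedForDelegation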

You will also need to build the PI Data Archive container by following the instructions in the Build the image section here.

PI Data Archive container health check

For the PI Web API container, you will need to pull it from the repository by using this command.

docker pull elee3/afserver:webapi18


Demo without GMSA

First let us demonstrate how authentication will look like when we run containers without GMSA.

Let's have a look at the various authentication modes that PI Web API offers.

1. Anonymous

2. Basic

3. Kerberos

4. Bearer

For a more detailed explanation about each mode, please refer to this page.

We will only be going through the first 3 modes as Bearer requires an external identity provider which is out of the scope of this blog.

Create the PI Data Archive container and the PI Web API container. We will also create a local user called 'enduser' in the two containers.

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h wa --name wa elee3/afserver:webapi18
docker exec wa net user enduser qwert123! /add
docker exec pi net user enduser qwert123! /add


Anonymous

Now let's open up PSE and connect to the hostname "wa". If prompted for the credentials, use

Username: afadmin

Password: qwert123!

Change the authentication to Anonymous and check in the changes. Restart the PI Web API service.

Verify that the setting has taken effect by using internet explorer to browse to /system/configuration. There will be no need for any credentials.

We can now try to connect to the PI Data Archive container with this URL.

https://wa/piwebapi/dataservers?path=\\pi

Check the PI Data Archive logs to see how PI Web API is authenticating.

Result: With Anonymous authentication, PI Web API authenticates with its service account using NTLM.

Basic

Now use PSE to change the authentication to Basic and check in. Restart the PI Web API service.

Close internet explorer and reopen it to point to /system/configuration to check the authentication method. This time, there will be a prompt for credentials. Enter

Username: enduser

Password: qwert123!

Try to connect to the same PI Data Archive as earlier. You will get an error, as the default PI Data Archive container doesn't have any mappings for enduser.

Let's see what is happening on the PI Data Archive side.

Result: With Basic authentication, the end user credential has been transferred to the PI Data Archive with NTLM.

Kerberos

Finally, use PSE to change the authentication to Kerberos and check in. Restart the PI Web API service.

Close internet explorer and reopen it to point to /system/configuration to check the authentication method. The prompt for credentials will look different from the Basic authentication one. Use the same credentials as you did for the Basic authentication scenario.

Try to connect to the same PI Data Archive again. You should not be able to connect. When you check on the PI Data Archive logs, you will see

Result: With Kerberos authentication, the delegation failed and the credential became NT AUTHORITY\ANONYMOUS LOGON even though we logged on to PI Web API with the local account 'enduser'.

Demo with GMSA

Kerberos

Now we shall use the GMSA accounts that we have to make the last scenario with Kerberos delegation work.

Download the scripts for Kerberos enabled PI Data Archive and PI Web API here.

PI-Web-API-container/New-KerberosPWA.ps1

PI-Data-Archive-container-build/New-KerberosPIDA.ps1

I will use the name 'untrusted' as the name of the GMSA account that is not trusted for delegation and 'trusted' as the name of the GMSA account that is trusted for delegation. Set the SPN for 'trusted' like so

setspn -s HTTP/trusted trusted


Once you have the scripts, run them like this

.\New-KerberosPIDA.ps1 -AccountName untrusted -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName trusted -ContainerName wak


The scripts will create a credential spec for the container based on the GMSA that you provide. A credential spec lets the container know how it can access Active Directory resources. The script then uses this credential spec to create the container with a docker run command. It also sets the hostname of the container to match the name of the GMSA. This is required due to a current limitation of the implementation; it might be resolved in the future so that you can choose your own hostnames.
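
Under the hood, the flow the scripts follow is roughly the one below. Treat it purely as a sketch: the cmdlet names and parameters come from Microsoft's CredentialSpec PowerShell module and vary between module versions, and the JSON file name is just an example.

# Generate a credential spec JSON file for the gMSA (signature varies by module version)
Install-Module CredentialSpec
New-CredentialSpec -AccountName trusted

# Run the container with that credential spec; the hostname must match the gMSA name
docker run --security-opt "credentialspec=file://trusted.json" -h trusted --name wak elee3/afserver:webapi18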

Open internet explorer now with your domain account and access PI Web API /system/userinfo. The hostname is 'trusted'.

Make sure that ImpersonationLevel is 'Delegation'.

Now try to access the PI Data Archive. The hostname is 'untrusted'. You will be unable to access it. Why? Because you haven't created a mapping yet! So let's use SMT to create a mapping to your domain account. After creating the mapping, try again and you should be able to connect. The PI Data Archive logs will show that you have connected with Kerberos. You do not need any mapping to your PI Web API service account at all if Kerberos delegation is working properly.

Result: With Kerberos authentication method in PI Web API and the use of GMSAs, Kerberos delegation works. The end domain user is delegated from the client to the PI Web API container to the PI Data Archive container. We have successfully completed the double hop.

Troubleshoot

If this doesn't seem to work for you, one thing you can try is to check the setting for internet explorer according to this KB article.

KB01223 - Kerberos and Internet Browsers

Your browser settings might differ from mine but the container settings should be the same since the containers are newly created.

Alternative: Resource Based Constrained Delegation

A more secure way to do Kerberos delegation instead of trusting the PI Web API container GMSA for delegation is to set the property "PrincipalsAllowedToDelegateToAccount" on the PI Data Archive container GMSA. This is what we call Resource Based Constrained Delegation (RBCD). You do not have to trust any GMSAs for delegation in this scenario. You will still need two GMSAs.

Assuming that you have already downloaded the two scripts found above, I will use 'pida' as the name of the PI Data Archive container GMSA and 'piwebapi' as the name of the PI Web API container GMSA.

.\New-KerberosPIDA.ps1 -AccountName pida -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName piwebapi -ContainerName wak


Execute these two additional commands to enable RBCD.

docker exec pik powershell -command "Add-WindowsFeature RSAT-AD-PowerShell"
docker exec pik powershell -command "Set-ADServiceAccount $env:computername -PrincipalsAllowedToDelegateToAccount (Get-ADServiceAccount piwebapi)"  You will still be able to connect with Kerberos delegation from the client machine to the PI Web API container to the PI Data Archive container. In this case, the PI Data Archive container only strictly allows delegation from the PI Web API container with 'piwebapi' as its GMSA. Conclusion We have seen that containers are able to utilize Kerberos delegation with the usage of GMSAs. This is important for middleware server containers such as PI Web API. Here is a quick summary of the various results that we have seen. Authentication Mode No GMSAWith GMSA AnonymousNTLM with service accountNo reason to do this BasicNTLM with local end user accountNo reason to do this KerberosNTLM with anonymous logonKerberos delegation with domain end user account The interesting thing is that Basic authentication can also have per user security with local end user accounts. But you will need to maintain the list of local users in the PI Web API container and the PI Data Archive container separately which is not recommended. The ideal case is to go with Kerberos delegation. # PI Data Archive container health check Posted by Eugene Lee Sep 3, 2018 Note: Development and Testing purposes only. Not supported in production environments. Link to other containerization articles Containerization Hub Introduction In my previous blog on AF Server container health check, I talked about implementing a health check for the AF Server container. Naturally, we will also have to discuss about such a check for the PI Data Archive container. For an introduction to what a health check is about and also how you can integrate a health check with Docker. Please refer to the previous blog post as I won't be repeating it here. In part 1, I will be covering the definition of the health tests that we can do for the PI Data Archive and then we will hook them up in the Dockerfile. In part 2, we will be doing something interesting with these health check enabled containers by using another container that I wrote to inform us by email whenever there is a change in their health status so that we are aware when things fail. Without further ado, let's jump into the definition of the health tests for the PI Data Archive container! Define health tests There are 2 tests that we will be performing. The first test is a test on the port 5450 to determine if there are any services listening on that port. The second test will use piartool to block for some essential subsystems of the PI Data Archive with a fixed timeout so that the test will fail if it exceeds that timeout. The Powershell cmdlet Get-NetTCPConnection can accomplish the first check for us. A return value of null means that there is no service listening on port 5450. The relevant code is below $val = Get-NetTCPConnection -LocalPort 5450 -State Listen -ErrorAction SilentlyContinue
if ($val -eq$null)
{
# return 1: unhealthy - the container is not working correctly
Write-Host "Failed: No TCP Listener found on 5450"
exit 1
}


Next, piartool is a utility that is located in the adm folder in PI Data Archive home directory. It has an option called "block" which waits for the specified subsystem to respond. This command is also used in the PI Data Archive start scripts to pause the script until the subsystem is available. The subsystems that we are going to check is the following list.

$SubsystemList = @(
@("pibasess", "PI Base Subsystem"),
@("pisnapss", "PI Snapshot Subsystem"),
@("piarchss", "PI Archive Subsystem"),
@("piupdmgr", "PI Update Manager")
)


We are going to change the amount of time that we allow for each check to 10 seconds so that we do not have to wait 1 hour for it to complete. We will also grab the start and end times so that we can provide detailed logging for troubleshooting purposes. The code for this is below.

function Block-Subsystem
{
Param ([string]$Name, [string]$DisplayName, [int]$TimeoutSeconds = 10)
$StartDate = Get-Date
$rc = Start-Process -FilePath "${env:PISERVER}\adm\piartool.exe" -ArgumentList @("-block", $Name, $TimeoutSeconds) -Wait -PassThru -NoNewWindow
$EndDate = Get-Date
if ($rc.ExitCode -ne 0)
{
echo ("Block failed for {0} with exit code {1}, block started: {2}, block ended: {3}" -f $DisplayName, $rc.ExitCode, $StartDate, $EndDate)
exit 1
}
}

ForEach ($Subsystem in $SubsystemList) {Block-Subsystem -Name $Subsystem[0] -DisplayName $Subsystem[1] -TimeoutSeconds 10}


Integrate into Docker

We will add this line of code to our Dockerfile to make Docker start performing health checks.

HEALTHCHECK --start-period=60s --timeout=60s --retries=1 CMD powershell .\check.ps1


The start period is given as 60 seconds to allow the PI Data Archive to start up and initialize properly before the health check test results are taken into account. A timeout of 60 seconds is given for the entire health check to complete. If it takes longer than that, the health check is deemed to have failed. I also gave only 1 retry, which means that the health check will be unsuccessful if the first try fails. There is no second chance!

Build the image

As usual, you will have to supply the PI Server 2018 installer and pilicense.dat yourself. The rest of the files can be found here.

elee3/PI-Data-Archive-container-build

Put all the files into the same folder and run the build.bat file. Once your image is built, you can create a container.

docker run -h pi --name pi -e trust=%computername% pidax:18


Now check docker ps. The health status should be starting. After 1 minute, which is the timeout period, run docker ps again. The health status should now be healthy.

Health monitoring

Now that we have a health check enabled container up and running, we can start to do some wonderful things with it. If you are a PI administrator, don't you wish there was some way to keep tabs on your PI Data Archive's health so that if it fails, an email can be sent to notify you that it is unhealthy? This way, you won't get a shock the next time you check on your PI Data Archive and realize that it has been down for a week!

I have written an application that can help you monitor ANY health enabled containers (i.e. not only the PI Data Archive container and the AF Server container but any container that has a health check enabled) and send you an email when they become unhealthy. We can start the monitoring with just one simple command. You should change the following variables to your own values.

Name of your SMTP server: <mysmtp>

Source email: <admin@osisoft.com>

Destination email: <operator@osisoft.com>

docker run --rm -id -h test --name test -e smtp=<mysmtp> -e from=<admin@osisoft.com> -e to=<operator@osisoft.com> elee3/health


Once the application is running, we can test it by trying to break our PI Data Archive container. I will do so by stopping the PI Snapshot Subsystem since it is one of the services that is monitored by our health check. After a short while, I received an email in my inbox. Let me check docker ps again. The health status shown by docker ps corresponds to what the email has indicated. Notice that the email even provides us with the health logs so that we know exactly what went wrong. This is so useful.

Now let me go back and start the PI Snapshot Subsystem again. The monitoring application will inform me that my container is healthy again. The latest log at 2:30:47 PM has no output, which indicates that there are no errors. The logs will normally fetch the 5 most recent events. With the health monitoring application in place, we can now sleep in peace and not worry about container failures that go unnoticed.

Conclusion

In addition to what I have shown here, I want to mention that the health tests can be defined by the users themselves. You do not have to use the implementation that is provided by me. This level of flexibility is very important since health is a subjective topic. One man's trash is another man's treasure. You might think a BMI of 25 is ok but the official recommendation from the health hub is 23 and below. Therefore, the ability to define your own tests and thresholds will help you receive the right notifications that are appropriate to your own environment. You can hook them up during docker run. Here is more information if you are interested.

Source code for the health monitoring application is here.

elee3/Health-Monitor

# AF Server container health check

Posted by Eugene Lee Aug 23, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In a complex infrastructure which spans several data centers and has multiple dependencies with minimum service up-time requirements, it is inevitable that services can still fail occasionally. The question then is how we can manage that in order to continue to maintain a high availability environment and keep downtime as low as possible. In this blog post, we will be talking about how we can implement a health check in the AF Server container to help with that goal.

What is a health check?

A container that is running doesn't necessarily mean that it is working, i.e. performing the service that it is supposed to do. In Docker Engine 1.12, a new HEALTHCHECK instruction was added to the Dockerfile so that we can define a command that verifies the state of health of the container. It is the same concept as a health check for humans, such as making sure that your liver or kidney is working properly and taking preventative measures before things get worse. In the container scenario, the exit code of the command will determine whether the container is operational and doing what it is meant to do.

In the AF Server context, we will need to think about what it means for the AF Server to be 'healthy'. Luckily for us, we have a counter to indicate the health status. AF Server includes a Windows PerfMon counter called AF Health Check. If both the AF application service and the SQL Server are running and responding, this counter returns a value of 1. Another way we can check for health is to check if a service is listening on port 5457 since AF Server uses that port. We can also test if the service is running. Including all of these tests will make our health check more robust.

Define health tests

For the first measure of health, we will be using the Get-Counter Powershell cmdlet to read the value of the performance counter. A value of 1 indicates that the AF Server and SQL Server are healthy while 0 means otherwise.

The second measure of health is to test for a service listening on port 5457. We will use the Powershell cmdlet Get-NetTCPConnection to do so. When there is no listener on port 5457, we will get an error.

The third measure of health is to check if the service is running by using the Get-Service Powershell cmdlet.

Integrate into Docker

With the health tests on hand, how can we ask Docker to perform these tests? The answer is to use the HEALTHCHECK instruction in the Dockerfile to instruct the Docker Engine to carry out the tests at regular intervals that can be defined by the image builder or the user.

The syntax of the instruction is

HEALTHCHECK [OPTIONS] CMD command


The options that can appear before CMD are:

• --interval=DURATION (default: 30s)
• --timeout=DURATION (default: 30s)
• --start-period=DURATION (default: 0s)
• --retries=N (default: 3)

For more information on what the options mean, please look here. I will be using a start-period of 10s to allow the AF Server some time to initialize before the health checks start. The other options I will leave as default. The user of the image can still override these options during docker run.

The command's exit status indicates the health status of the container. The possible values are:

• 0: success - the container is healthy and ready for use
• 1: unhealthy - the container is not working correctly
• 2: reserved - do not use this exit code

The command will be a batch file that runs the aforementioned tests. The instruction will therefore look like this.

HEALTHCHECK --start-period=10s CMD powershell .\check.ps1


Here are the contents of check.ps1

#test for service listening on port 5457
Get-NetTCPConnection -LocalPort 5457 -State Listen -ErrorAction SilentlyContinue|out-null
if ($? -eq $false)
{
write-host "No one listening on 5457"
exit 1
}

#test if AF service is running
$status = Get-Service afservice|select -expand status
if ($status -ne "Running")
{
write-host "PI AF Application Service (afservice) is $status."
write-host "PI AF Application Service (afservice) is not running."
exit 1
}

#test for AF Server Health Counter
$counter = get-counter "\PI AF Server\Health"|Select -Expand CounterSamples|Select -expand CookedValue
if ($counter -eq 0)
{
write-host "The health counter is $counter. This might mean either"
write-host "1. SQL Server is non-responsive"
write-host "2. SQL Server is responding with errors"
exit 1
}


Usage

The container image elee3/afserver:18x has been updated with the health check ability. After pulling it from the Docker repository with

docker pull elee3/afserver:18x


you can have some fun with it. Let me spin up a new AF Server container based on the new image.

docker run -d -h af18 --name af18 elee3/afserver:18x


Now, let's do a

docker ps


Notice that my other container af17, which is based on the elee3/afserver:17R2 image, doesn't have any health status next to its status because a health check was not implemented for it, while container af18 indicates "(health: starting)". Let's run docker ps again after waiting for a little while. Notice that the health status has changed from 'starting' to 'healthy' after the first test, which runs interval (configured in the options) seconds after the container is started. We can also do

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -expandproperty log


to see the health logs.

Health event

When the health status of a container changes, a health_status event is generated with the new status. We can observe that using docker events. We will now intentionally break the container by stopping the SQL Server service and trying to connect with PSE. The connection fails, which is expected. Now let us check using docker events, which is a tool for getting real time events from the Docker Engine. We can filter docker events to only grab the health_status events for a certain time range so that we do not need to be concerned with irrelevant events. Let us grab those health_status events for the past hour for my container af18.

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time


Also check on

docker ps


and also docker inspect which can give us clues on what went wrong.

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -expand log|fl


With the health check, it is now obvious that even though the container is running, it doesn't work when we try to connect to it with PSE.

We shall restart the SQL Server service and try connecting with PSE. We can check if the container becomes healthy again by running

docker ps


and

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time


As expected, a new health_status event is generated which indicates healthy.

Conclusion

We can leverage the health check mechanism further when we use a container orchestrator such as Docker Swarm that can detect the unhealthy state of a container and automatically replace it with a new and working container. This will be discussed in a future blog. So stay tuned!

# AF Server container in the cloud

Posted by Eugene Lee Aug 10, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In my previous articles, I have demonstrated using the AF Server container in local Docker host deployments. The implication is that you have to manage the Docker host infrastructure yourself. The installation, patching, maintenance and upgrading work has to be done by you manually. This represents significant barriers to getting up and running. As an analogy, imagine you visit another country for vacation and need to get from the airport to the hotel. Would it be better to buy a car (if they even sold one at the airport?) and drive to the hotel or just take a taxi (transport as a service)? The first option requires a larger initial investment of time and money compared to the latter.

For quick demo, training or testing purposes, getting a Docker host infrastructure up and running requires effort (getting a machine with the right specifications, procuring an OS with Windows container capabilities, patching the OS so that you can use Docker, installing the right edition of Docker) and troubleshooting if things go south (errors during setup or services refusing to start). In the past, we had no other choice so we just had to live with it. But in this modern era of cloud computing, using a container as a service might be a faster and cheaper alternative. Today, I will show you how to operate the AF Server container in the cloud using Azure Container Instances. The very first service of its kind in the cloud, Azure Container Instances is a new Azure service delivering containers with great simplicity and speed. It is a form of serverless containers.

Prerequisites

You will need an Azure subscription to follow along with the blog. You can get a free trial account here.

Azure CLI

Install the Azure CLI, which is a command line tool for managing Azure resources. It is a small install. Once done, we need to login.

az login


If the CLI can determine your default browser and has access to open it, it will do so and direct you immediately to a sign in page. Otherwise, you need to open a browser page and follow the instructions on the command line to enter an authorization code after navigating to https://aka.ms/devicelogin in your browser. Complete the sign in via the browser.

Now set your default subscription if you have many subscriptions. If you only have one subscription on your account, you can skip this step.

az account set -s <subscription name>


Create cloud container

We are now ready to create the AF Server cloud container. First create a resource group.

az group create --name resourcegrp -l southeastasia


You can change southeastasia to a location nearest to you.

Here is the list of locations (remove the space from the display name when using it).

Create a file named af.yaml. Replace <username> and <password> with the credentials for pulling the AF Server container image. There are some variables that you can configure:

afname: The name that you choose for your AF Server.

user: Username to authenticate to your AF Server.

pw: Password to authenticate to your AF Server.

af.yaml

apiVersion: '2018-06-01'
name: af
properties:
  containers:
  - name: af
    properties:
      environmentVariables:
      - name: afname
        value: eugeneaf
      - name: user
        value: eugene
      - name: pw
        secureValue: qwert123!
      image: elee3/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  imageRegistryCredentials:
  - server: index.docker.io
    username: <username>
    password: <password>
  ipAddress:
    dnsNameLabel: eleeaf
    ports:
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups


Then run this in Azure CLI to create the container.

az container create --resource-group resourcegrp --file af.yaml


The command will return in about 5 minutes. You can check the state of the container.

az container show --resource-group resourcegrp -n af --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table


You can check the container logs.

az container logs --resource-group resourcegrp -n af


Explore with PSE

You now have an AF Server container in the cloud that can be accessed ANYWHERE as long as there is internet connectivity. You can connect to it with PSE using the FQDN. The credentials to use are those that you specified in af.yaml. Notice that the name of the AF Server is the value of the afname environment variable that was passed in af.yaml.

Run commands in container

If you have a need to log in to the container to run commands such as afdiag, you can do so with

az container exec --resource-group resourcegrp -n af --exec-command "cmd.exe"


Clean up

When you are done with using the container, you should destroy it so that you won't have to pay for it when it is not being used.

az container delete --resource-group resourcegrp -n af


You can check that the resource is deleted by listing your resources.

az resource list


Considerations

There are some tricks to hosting a container in the cloud to optimize its deployment time.

1. Base OS

The Base OS should be one of the three most recent versions of Windows Server Core 2016. These are cached in Azure Container Instances to help with deployment time. If you want to experience the difference, try pulling elee3/afserver:18 in the create container command above. The time taken will be 13 min, which is more than twice the 5 min needed to pull elee3/afserver:18x. The reason is that the old image with the "18" tag is based on the public SQL Server image, which is 7 months old and doesn't have the latest OS version, so it cannot leverage the caching mechanism to improve performance. I have rebuilt the image with the "18x" tag based on my own SQL Server image with the latest OS version.

2. Image registry location

Hosting the image in Azure Container Registry in the same region that you use to deploy your container will help to improve deployment time, as this shortens the network path that the image needs to travel, which shortens the download time. Take note that ACR is not free, unlike DockerHub. In my tests, it took 4 min to deploy with ACR.

3. Image size

This one is obviously a no-brainer. That's why I am always looking to make my images smaller.

Another consideration is the number of containers per container group. In this example, we are creating a single-container group. The current limitation of Windows containers is that we can only create single-container groups. When this limitation is lifted in the future, there are some scenarios where I see value in creating multi-container groups, such as spinning up sets of containers that are complementary to each other, e.g. a PI Data Archive container, an AF Server container and a PI Analysis Service container in a 3-container group. However, for scenarios such as spinning up 2 AF Server containers, we should still keep them in separate container groups so that they won't fight for the same port.

Limitations

Kerberos authentication is not supported in a cloud environment. We are using NTLM authentication in this example.

Conclusion

Deploying the AF Server container to Azure Container Instances might not be as fast as deploying it to a local Docker host. But it is cheaper compared to the upfront time and cost of setting up your own Docker host. This makes it ideal for demo/training/testing scenarios. The containers are billed on a per second basis so you only pay for what you use. That is like only paying for your trip from the airport to the hotel without having to pay anything extra.

# Upgrade to AF Server 2018 container with Data Persistence

Posted by Eugene Lee Jul 24, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

AF Server 2018 was released on 27 Jun 2018! Let's take a look at some of the new features that are available. The following list is not exhaustive.

• AF Server connection information is now available for administrative users.

• A new UOM Class, Computer Storage, is provided. The canonical UOM is byte (b) and multiples of 1000 and 1024.

• AFElementSearch and AFEventFrameSearch now support searching for elements and event frames by attribute values without having to specify a template.

• The AFDiag utility has been enhanced to allow for bulk deletes of event frames by database and/or template and within a specified time range.

Here are also some articles that talk about other new features in AF 2018.

Mass Event Frame Deletion in AF SDK 2.10

DisplayDigits Exposed in AF 2018 / AF SDK 2.10

What's new in AF 2018 (2.10)

OSIsoft.AF.PI Namespace

Introducing the AFSession Structure

To take advantage of these new features, we will need to upgrade to the AF Server 2018 container. Let me demonstrate how we can do that.

Create 2017R2 container and inject data

The steps for creating the container can be found in Spin up AF Server container (SQL Server included). I will use af17 as the name in this example.

docker run -di --hostname af17 --name af17 elee3/afserver:17R2


Now, we can create some elements, attributes and event frames. We will also list the version to confirm it is 2017R2 (2.9.5.8368).

Pull 2018 image

We can use the following command to pull down the 2018 image.

docker pull elee3/afserver:18


The credentials required are the same as for the 2017R2 image. Check the digest to make sure the image is correct.

18: digest: sha256:99e091dc846d2afbc8ac3c1ec4dcf847c7d3e6bb0e3945718f00e3f4deffe073

Upgrade from 2017R2 to 2018

Create an empty folder, open up a Powershell, navigate to that folder and run the following commands.
Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/afbackup.bat" -UseBasicParsing -OutFile afbackup.bat Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/upgradeto18.bat" -UseBasicParsing -OutFile upgradeto18.bat .\upgradeto18.bat af17 af18  Wait a short moment for your AF Server 2018 container to be ready. In this example, I will give it the name af18. Verification Now we can check that the element, attribute and event frame that we created earlier in the 2017R2 container is persisted to the 2018 container. First, let's connect to af18 with PSE. Upon successful connection, notice that the name and ID of the AF Server 2017R2 is retained. Our element, attribute and event frame are all persisted. Finally, we can see that the version has been upgraded to 2018 (2.10.0.8628). Congratulations. You have successfully upgraded to the AF Server 2018 container and retained your data. Rollback If you want to rollback to the AF Server 2017R2 container, you will need to use the backup that was automatically generated and stored in the folder C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup docker rm -f af17 docker exec af18 cmd /c "copy /b "C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\PIAFSqlBackup*.bak" c:\db\PIFD.bak" docker run -d -h af17 --name af17 --volumes-from af18 elee3/afserver:17R2  Once a PIFD database is upgraded, it is impossible to downgrade it as seen here stating "a downgrade of the PIFD database will not be possible". This means that it won't be possible to persist data entered after the upgrade during the rollback. Explore new features Computer Storage UOM AF Server Connections history Bulk deletes of event frames by database and/or template and within a specified time range Conclusion Now that the AF Server container has at least two versions available (2017R2 and 2018), you can really start to appreciate its usage for testing the compatibility of your applications with two different versions of the server. In the past, you would need to create two large VMs in order to host two AF Server. Those days are over. You can realize immediate savings in storage space and memory. We will look into bringing these containers into some cloud offerings for future articles. # Upgrade to PI Data Archive 2018 container with Data Persistence Posted by Eugene Lee Jul 9, 2018 Note: Development and Testing purposes only. Not supported in production environments. Link to other containerization articles Containerization Hub Introduction PI Data Archive 2018 has been released on 27 Jun 2018! It is now time for us to upgrade to experience all the latest enhancements. Legacy subsystems such as PI AF Link Subsystem, PI Alarm Subsystem, PI Performance Equation Scheduler, PI Recalculation Subsystem and PI Batch Subsystem are not installed by default. These legacy subsystems mentioned above will not be in the PI Data Archive 2018 container because of the command line that I have chosen for it. This upgrade procedure assumes that you were not using any of these legacy subsystems. We also have client side load balancing in addition to scheduled archive shifts for easier management of archives. Finally, there is the integrated PI Server installation kit which is the enhancement I am most excited about. The kit has the ability to let us generate a command line statement for use during silent installation. 
No more having to comb through the documentation to find the feature that you want to install. All you have to do is use the GUI to select the features that you desire and save the command line to a file. The command line is useful in environments without a GUI, such as a container environment.

Today, I will be guiding you on a journey to upgrade your PI Data Archive 2017R2 container to the PI Data Archive 2018 container. In the article Overcome limitations of the PI Data Archive container, I addressed most of the limitations that were present in the original article Spin up PI Data Archive container. We are now left with the final limitation to address: the example doesn't support upgrading without re-initialization of data. I will show you how we can upgrade to the 2018 container without losing your data. Let's begin on this wonderful adventure!

Create 2017R2 container and inject data

See the "Create container" section in Overcome limitations of the PI Data Archive container for the detailed procedure on how to create the container. In this example, my container name will be pi17.

docker run -id -h pi17 --name pi17 pidax:17R2

Once your container is ready, we can use PI SMT to introduce some data which we can use as validation that the data has been persisted to the new container. I will create a PI Point called "test" to store some string data. We will also change some tuning parameters such as Archive_AutoArchiveFileRoot and Archive_FutureAutoArchiveFileRoot to show that they are persisted as well.

Take a backup

Before proceeding with the upgrade, let us take a backup of the container using the backup script found here. This is so that we can roll back later if needed. The backup will be stored in a folder named after the container.

Build 2018 image

1. Get the files from elee3/PI-Data-Archive-container-build
2. Get the PI Server 2018 integrated install kit from the techsupport website
3. Procure a PI License that doesn't require an MSF, such as the demo license on the techsupport website
4. Your folder structure should look similar to this now.
5. Run build.bat.

Upgrade from 2017R2 to 2018

Now that we have the image built, we can perform the upgrade. To do so, stop the pi17 container.

docker stop pi17

Create the PI Data Archive 2018 container (I will name this pi18) by mounting the data volumes from the pi17 container.

docker run -id -h pi18 --name pi18 --volumes-from pi17 -e trust=<containerhost> pidax:18

Verification

Now let us verify that the container named pi18 has our old data and tuning parameters, and also check its version. We can do so with PI SMT. Data has been persisted! Tuning parameters have also been persisted! The version is now 3.4.420.1182, which means the upgrade is successful. Note that the legacy subsystems mentioned above are no longer present.

Congratulations. You have successfully upgraded to the PI Data Archive 2018 container and retained your data.

Rollback

Now what if you want to roll back to the previous version for whatever reason? I will show you that it is also simple to do. There are two ways that we can go about doing this.

Method | Pros | Cons
--- | --- | ---
Restore | Will always work | Data added after the upgrade will be lost after the rollback. Only data prior to the backup will be present. Requires a backup.
Non-Restore | Data added after the upgrade is persisted after the rollback | Might not always work. It depends on whether the configuration files are compatible between versions. E.g. it works for 2018 to 2017R2 but not for 2015 to earlier versions.

We will explore both methods in this blog since both methods will work for rolling back 2018 to 2017R2.

Restore method

In this method, we remove pi17, recreate a fresh instance and restore the backup. In the container world, we treat software not as pets but more like cattle.

docker rm pi17
docker run -id -h pi17 --name pi17 pidax:17R2
docker stop pi17

Copy the backup folders into the appropriate volumes at C:\ProgramData\docker\volumes

docker start pi17

Now let us compare pi17 and pi18 with PI SMT. We can see that they have the same data but their versions are different.

Non-Restore method

In this method, data that is added AFTER the upgrade will still be persisted after rollback. Let us add some data to the pi18 container. We shall also change the tuning parameter from container17 to container18. Now, let's remove any pi17 container that exists so that we only have the pi18 container running. After that, we can do

docker rm -f pi17
docker stop pi18
docker run -id -h pi17 --name pi17 --volumes-from pi18 pidax:17R2

We can now verify that the data added after the upgrade still exists when we roll back to the 2017R2 container.

Conclusion

In this article, we have shown that it is easy to perform upgrades and rollbacks with containers while preserving data throughout the process. Upgrades that used to take days can now be done in minutes. There is no worry that upgrading will break your container since data is separated from the container. One improvement that I would like to see is for archives to be downgraded by an older PI Archive Subsystem automatically. Currently, this cannot be done. If you try to connect to a newer archive format with an older piarchss without downgrading the version manually, you will see an error. However, the reverse is possible: connecting to an older archive format with a newer piarchss will upgrade the version automatically.

New updates (24 Jul 2018)

1. Fix unknown message problem in logs
2. Add trust on run-time by specifying environment variable

# Overcome limitations of the PI Data Archive container

Posted by Eugene Lee Jul 2, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In this blog post, we will be exploring how to overcome the limitations that were previously mentioned in the blog post Spin up PI Data Archive container. Container technology can contribute to the manageability of a PI System (installations/migrations/maintenance/troubleshooting that used to take weeks can potentially be reduced to minutes), so I would like to overcome as many limitations as I can so that these containers become production ready.

Let us have a look at the limitations that were previously mentioned.

1. This example does not persist data or configuration between runs of the container image.
2. This example relies on PI Data Archive trusts and local accounts for authentication.
3. This example doesn't support VSS backups.

Let us go through them one at a time.

Data and Configuration Persistence

This limitation can be solved by separating the data from the application container. In Docker, we can make use of Volumes, which are completely managed by Docker. When we persist data in volumes, the data will exist beyond the life cycle of the container. Therefore, even if we destroy the container, the data will still remain.
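Once a container built this way is running, you can confirm the behaviour on the host by listing the volumes Docker manages and checking which ones the container mounts. A minimal sketch, assuming the container is named pi17 as in the examples below:

docker volume ls
docker inspect -f "{{json .Mounts}}" pi17

The Mounts output shows where each volume lives under C:\ProgramData\docker\volumes on the host.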
We create external data volumes by including the VOLUME directive in the Dockerfile like so:

VOLUME ["C:/Program Files/PI/arc","C:/Program Files/PI/dat","C:/Program Files/PI/log"]

When we instantiate the container, Docker will now know that it has to create the external data volumes to store the data and configuration that exist in the PI Data Archive arc, dat and log directories.

Windows Authentication

This issue can be addressed with the use of GMSA and a little voodoo magic. This enables the container host to obtain the TGT for the container so that the container is able to perform Kerberos authentication and is connected to the domain. The container host will need to be domain joined for this to happen.

VSS Backups

When data is persisted externally, we can leverage the VSS provider on the container host to perform the VSS snapshot for us so that we do not have to stop the container while performing the backup. This way, the container can run 24/7 without any downtime (as required by production environments). The PI Data Archive has mechanisms to put the archive in a consistent state and freeze it to prepare for the snapshot.

Create container

1. Grab the files in the 2017R2 folder from my Github repo and place them into a folder. elee3/PI-Data-Archive-container-build
2. Get the PI Data Archive 2017 R2A Install Kit and extract it into the folder as well. Download from the techsupport website.
3. Procure a PI License that doesn't require an MSF, such as the demo license on the techsupport website, and place it in the Enterprise_X64 folder.
4. Your folder structure should look similar to this now.
5. Execute buildx.bat. This will build the image.
6. Once the build is complete, you can navigate to the Kerberos folder and run the powershell script (check 3 Aug 2018 updates) to create a Kerberos enabled container

.\New-KerberosPIDA.ps1 -AccountName <GMSA name> -ContainerName <container name>

You can request a GMSA from your IT department and get it installed on your container host with the Install-ADServiceAccount cmdlet.

OR

If you think it will be difficult for you to get a GMSA from your IT department, then you can use the following command to create a non Kerberos enabled container

docker run -id -h <DNS hostname> --name <container name> pidax:17R2

7. Go to the pantry to make some tea or coffee. After about 1.5 minutes, your container will be ready.

Demo of container abilities

1. Kerberos

This section only applies if you created a Kerberos enabled container. After creating a mapping for my domain account using PI System Management Tools (SMT) (the container automatically creates an initial trust for the container host so that you can create the mapping), let me now try to connect to the PI Data Archive container using PI System Explorer (PSE). After a successful connection, let me view the message logs of the PI Data Archive container. We can see that we have Kerberos authentication from AFExplorer.exe, a.k.a. PSE.

2. Persist Data and Configuration

When I kill off the container, I notice that I am still able to see the configuration and data volumes persisted on my container host, so I don't have to worry that my data and configuration are lost.

3. VSS Backups

Finally, what if I do not want to stop my container but I want to take a backup of my config and data? For that, we can make use of the VSS provider on the container host. Obtain the 3 files here: elee3/PI-Data-Archive-container-build. Place them anywhere on your container host.
Execute

.\backup.ps1 -ContainerName <container name>

The output of the command will look like this. Your backup will be found in the pibackup folder that is automatically created and will look like this. pi17 is the name of my container. Your container keeps running the whole time.

4. Restore a backup to a container

Now that we have a backup, let me show you how to restore it to a new container. It is a very simple 3 step process.

• docker stop the new container
• Copy the backup files into the persisted volume. (You can find the volumes at C:\ProgramData\docker\volumes)
• docker start the container

As you can see, it can't get any simpler. When I browse my new container, I can see the values that I entered in my old container which had its backup taken.

Conclusion

In this blog post, we addressed the limitations of the original PI Data Archive container to make it more production ready. Do we still have any need for the original PI Data Archive container then? My answer is yes. If you do not need the capabilities offered by this enhanced container, then you can use the original one. Why? Simply because the original one starts up in 15 seconds while this one starts up in 1.5 minutes! The 1.5 minutes is due to limitations in Windows Containers. So if you need to spin up PI Data Archive containers quickly without having to worry about these limitations (e.g. in unit testing), then the original container is for you.

New updates (3 Aug 2018)

Script updated to allow GMSA to work in both child and parent domains. For example, mycompany.com and test.mycompany.com. Refer to Upgrade to PI Data Archive 2018 container with Data Persistence to build the pidax:18 image needed for use with the script.

# Spin up PI Analysis Service container

Posted by Eugene Lee Jun 12, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

During PI World 2018, there was a request for a PI Analysis Service container. The user wanted to be able to spin up multiple PI Analysis Service containers to balance the load during periods where there is a lot of backfilling to do. Unfortunately, this is limited by the fact that each AF Server can only have exactly one instance of PI Analysis Service that runs the analytics for the server. But this has not discouraged me from making a PI Analysis Service container to add to our PI System compose architecture!

Features of this container include:

1. Ability to test for the presence of the AF Server so that set up won't fail
2. Simple configuration. The only thing you need to change is the host name of the AF Server container that you will be using.
3. Speed. Build and set up take less than 4 minutes in total.
4. Buffering ability. Data will be held in the buffer when the connection to the target PI Data Archive goes down. (Added 13 Jun 2018)

Prerequisite

You will need to be running the AF Server container since PI Analysis Service stores its run-time settings in the AF Server. You can get one from Spin up AF Server container (SQL Server included).

Procedure

1. Gather the install kits from the Techsupport website. AF Services
2. Gather the scripts and files from GitHub - elee3/PI-Analysis-Service-container-build.
3. Your folder should now look like this.
4. Run build.bat with the hostname of your AF Server container.

build.bat <AF Server container hostname>

5. Now you can execute the following to create the container.
docker run -it -h <DNS hostname> --name <container name> pias

That's all you need to do! Now when you connect to the AF Server container with PI System Explorer, you will notice that the AF Server is now enabled for asset analysis. (Originally, it wasn't enabled.)

Conclusion

By running this PI Analysis Service container, you can now configure asset analytics for your AF Server container to produce value added calculated streams from your raw data streams. I will be including this service in the Docker Compose PI System architecture so that you can run everything with just one command.

Update 2 Jul 2018

Removed telemetry and added 17R2 tag.

# Spin up AF Server container (Kerberos enabled)

Posted by Eugene Lee May 30, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In one of my previous blog posts, I was spinning up an AF Server container using local accounts for authentication. For non-production purposes, this is fine. But since Kerberos is the authentication method that we recommend, I would like to show you that it is also possible to use Kerberos authentication for the AF Server container. To do this, you will have to involve a domain administrator since a Group Managed Service Account (GMSA) will need to be created. Think of a GMSA as a usable version of the Managed Service Account. A single GMSA can be used for multiple hosts. For more details about GMSA, you can refer to this article: Group Managed Service Accounts Overview

Prerequisite

You will need the AF Server image from this blog post: Spin up AF Server container (SQL Server included)

Procedure

1. Request a GMSA from your domain administrator. The steps are listed here.

Add-KDSRootKey -EffectiveTime (Get-Date).AddHours(-10) #Best is to wait 10 hours after running this command to make sure that all domain controllers have replicated before proceeding
Add-WindowsFeature RSAT-AD-PowerShell
New-ADServiceAccount -name <name> -DNSHostName <dnshostname> -PrincipalsAllowedToRetrieveManagedPassword <containerhostname> -ServicePrincipalNames "AFServer/<name>"

2. Once you have the GMSA, you can proceed to install it on your container host.

Install-ADServiceAccount <name>

3. Test that the GMSA is working. You should get a return value of True.

Test-ADServiceAccount <name>

4. Get the script to create an AF Server container with Kerberos.

Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/New-KerberosAFServer.ps1" -UseBasicParsing -OutFile New-KerberosAFServer.ps1

5. Create a new AF Server container.

.\New-KerberosAFServer.ps1 -ContainerName <containername> -AccountName <name>

Usage

Now you can open up PI System Explorer on your container host to connect to your containerized AF Server with the <name> parameter that you have been using in the procedure section. On the very first connect, you should connect with the afadmin user (password: qwert123!) so that you can set up mappings for your domain accounts. Otherwise, your domain accounts will only have 'World' permissions. After you set up your mappings, you can choose to delete the afadmin user or keep it. With the mappings for your domain account created, you can now disconnect from your AF Server and reconnect to it with Kerberos authentication. From now on, you do not need explicit logins for your AF Server anymore!

Conclusion

We can see that security is not a limitation when it comes to using an AF Server container.
It is just more troublesome to get it going and requires the intervention of a domain administrator. However, this removes the need to use local accounts for authentication, which is definitely a step towards using the AF Server container in production. I will be showing how to overcome some other limitations of containers in future posts, such as giving containers a static IP and the ability to communicate outside of the host.

New updates (3 Aug 2018)

Script updated to allow GMSA to work in both child and parent domains. For example, mycompany.com and test.mycompany.com. Script now uses the new image with 18x tag based on a newer version of Windows Server Core.

# Compose PI System container architecture

Posted by Eugene Lee May 21, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In this blog post, I will be giving an overview of how to use Docker Compose to create a PI System compose architecture that you can use for

1. Learning PI System development
2. Running your unit tests with a clean PI System
3. Compiling your AF Client code
4. Exploring PI Web API structure
5. Testing out Asset Analytics syntax
6. Other use cases that I haven't thought of (Post in the comments!)

What is Compose?

It is a tool for defining and running multi-container Docker applications. With Compose, you use a single file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. It is both easy and convenient.

Setup images

The setup involved is simple. You can refer to my previous blog posts to set up these images. Docker setup instructions can be found in the Containerization Hub link above.

Spin up PI Web API container (AF Server included)
Spin up PI Data Archive container
Spin up AF Client container

Compose setup

In Powershell, run these commands as administrator:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile "$Env:ProgramFiles\docker\docker-compose.exe"
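You can then confirm that Compose was downloaded correctly; the exact version string will depend on the release you fetched:

docker-compose version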

Obtain Compose file from docker-compose.yml. Place it on your desktop.
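Optionally, you can validate the file before deploying. From the folder that contains it, the following prints the parsed configuration, or an error if the YAML is malformed:

docker-compose config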

Deployment

Open a command prompt and navigate to your desktop. Enter

docker-compose up


Wait until the screen shows

Once you see that, you can close the window. Your PI System architecture is now up and running!

Usage

There are various things you can try out, listed below. One note first: if you are experiencing networking issues between the containers, turn off the firewall for the Public profile on your container host, for example as in the following sketch.
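A minimal PowerShell sketch for that (run as administrator, and re-enable the profile once you are done testing):

Set-NetFirewallProfile -Profile Public -Enabled False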

1. You can try browsing the PI Web API structure by using this URL (https://eleeaf/piwebapi) in your web browser. When prompted for credentials, you can use

username: afadmin

password: qwert123!

2. Test network connectivity from client container to the PI Data Archive and AF Server by running

docker exec -it desktop_client_1 afs


The hostname of the AF Server is eleeaf. When prompted to use NTLM, enter q. The hostname of the PI Data Archive is eleepi. You should see the following results.

3. You can install PI System Management Tools on your container host and connect to the PI Data Archive via the IP address of the container (see the sketch after this list for one way to find it). Somehow, PI SMT doesn't let you connect with the hostname.

4. You can also install PI System Explorer and connect to the AF Server to create new databases.

5. You can try compiling some open source AF SDK code found in our Github repository using the AF Client container. (so that you do not have to install Visual Studio)

6. You can use PI System Explorer to experiment with some Asset Analytics equations that you have in mind to check if they are valid.
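Regarding item 3, one way to look up a container's IP address from the host is with docker inspect. A minimal sketch, assuming the PI Data Archive container created by Compose is named desktop_pi_1 (check docker ps for the actual name on your host):

docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" desktop_pi_1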

Destroy

Once you are done with the environment, you can destroy it with

docker-compose down


Limitations

This example does not persist data or configuration between runs of the container.

These applications do not yet support upgrade of container without re-initialization of the data.

This example relies on PI Data Archive trusts and local accounts for authentication.

AF Server, PI Web API, and SQL Express are all combined in a single container.

Conclusion

Notice how easy it is to set up a PI System compose architecture. You can do this in less than 10 minutes. No more having to wait hours to install a PI System to test and develop against.

The current environment contains PI Data Archive, AF Server, AF Client, PI Web API, an AF SDK sample application (called afs) and PI Analysis Service. More services will be added in the future!

# Spin up AF Client container

Posted by Eugene Lee May 21, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

In this blog post, the instructions for building an AF Client image will be shown. For instructions on how to install Docker, please see the link above.

1. Please clone this git repository. GitHub - elee3/AF-Client-container-build

2. Download AF Client 2017R2 from the Techsupport website. AF Client 2017 R2

3. Extract AF Client into the cloned folder.

4. Run build.bat

If you would prefer us to build the image for you so that you can docker pull it immediately (less hassle), please post in the comments!

Usage

This container can be used to compile your AF SDK code (so that you do not have to install Visual Studio) and you can use the container to pack an AF SDK application with its AF Client dependency for easier distribution. An AF SDK sample application (called afs) has been included in the image for you to try compiling it.
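As a rough sketch of what a compile could look like, assuming the image was tagged afclient by build.bat and that it exposes the .NET Framework compiler and the AF SDK assembly at their default locations (adjust the container name, tag and paths to match your actual build):

docker run -di --name client afclient
docker cp .\Program.cs client:C:\Program.cs
docker exec client C:\Windows\Microsoft.NET\Framework64\v4.0.30319\csc.exe /reference:"C:\Program Files (x86)\PIPC\AF\PublicAssemblies\4.0\OSIsoft.AFSDK.dll" C:\Program.cs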

Limitations

Containers cannot run applications with a GUI, such as WPF and Windows Forms applications.

Update 27 Jun 2018

Fixed an issue with the registry links breaking.

# Containerization Hub

Posted by Eugene Lee May 21, 2018

Good day everyone, welcome to the Containerization Hub. Here you will find containerization articles that have already been published in PI Square and also the titles of those articles that are planned for the future (subject to changes). Users will just need to bookmark this page rather than bookmark all the individual articles.

Important: Due to the review of our open source policies, the container Github/DockerHub repositories have been taken down. If you would like access to the source code of the Docker images that were previously hosted there, please send an email to technologyenablement@osisoft.com

Standalone

Spin up AF Server container (SQL Server included)

Spin up PI Web API container (AF Server included)

Spin up PI Data Archive container

Spin up AF Client container

Spin up PI Analysis Service container

Overcome limitations of the PI Data Archive container

Spin up stateless AF Server container (in progress)

Upgrading

Upgrade to PI Data Archive 2018 container with Data Persistence

Upgrade to AF Server 2018 container with Data Persistence

Security

Spin up AF Server container (Kerberos enabled)

Container Kerberos Double Hop

Health

AF Server container health check

PI Data Archive container health check

Cloud

AF Server container in the cloud

Orchestration

Compose PI System container architecture

Containers and Swarm Part 1 (Setup, Service)

Containers and Swarm Part 2 (Secrets, Configs)

Containers and Swarm Part 3 (Scaling, Self-Healing)

Containers and Swarm Part 4 (Overlay Networking)

Containers and Swarm Part 5 (External Traffic, Load Balancing)

Miscellaneous

Form collectives with PIDA container

Spin up PI Web API website container

Building containers with Jenkins

Containers and Kubernetes

Install Docker

The steps to set up Docker are below.

For Windows 10,

You can install Docker for Windows. Please follow the instructions here.

For Windows Server 2016/2019,

You can use the OneGet provider PowerShell module. Open an elevated PowerShell session and run the below commands.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force
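After the machine has restarted, you can confirm that the Docker engine is running before pulling any images:

docker version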



# Spin up PI Web API container (AF Server included)

Posted by Eugene Lee May 14, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

I now present to you another blog post in the containerization series on spinning up PI Web API in less than 3 minutes (My test came out to be 2 min 44 sec!).

I will repeat the steps here for setting up Docker for your convenience. If you have already done so while using the AF Server image, then you do not need to repeat them. The PI Web API image offered here is fully self contained. In other words, you do not have to worry about any dependencies such as where to store your PI Web API configuration. In a later blog post, I will cover a PI Web API image that only contains the application service, for those of you who want the application service to be separate from the database service. With that image, you will need to furnish your own AF Server. For now, you do not have to worry about that.

Set up

## Install PI Web API image

Run the following commands at a console. When prompted for the username and password during login, please contact me (elee@osisoft.com) for them. Currently, this image is only offered to users who already have a PI Server license or are PI Developers Club members (try it now for free!). You will have to log in before doing the pull; otherwise, the pull will be unsuccessful.

docker login
docker pull elee3/afserver:piwebapi
docker logout



Remember to check the digest of the image to make sure it has not been tampered with.
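One way to view the digest of the image you pulled, so that you can compare it against the published value:

docker images --digests elee3/afserver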

Update 2 Jul 2018: Please use the fast version with tag webapifast17R2 as that image is better in every possible way. Boot up time 15 sec compared to 3 minutes.

Deployment

Now that the setup is complete, you can proceed to run the container image. To do so, use the following command. Replace <DNS hostname> and <containername> with values of your own choosing. Remember to pick a DNS hostname that is unique.

docker run -it --hostname <DNS hostname> --name <containername> elee3/afserver:piwebapi

After about 3 minutes, you will see that the command prompt indicates that both the PI Web API and AF Server are Ready.

This indicates that your PI Web API is ready for usage. At this point, you can just close the window.

Update 2 Jul 2018: Please use the fast version with tag webapifast17R2 as that image is better in every possible way. Boot up time 15 sec compared to 3 minutes.

Usage

Now you can open a browser on your container host and connect to it with the DNS hostname that you chose earlier.

https://<DNS hostname>/piwebapi

When prompted for credentials, you can use

User name: afadmin

Password: qwert123!

## Browsing your PI Data Archive

You can use a URL of the form

https://<DNS hostname>/piwebapi/dataservers?path=\\<PI Data Archive hostname>

to access your PI Data Archive. Of course, you need to grant access permissions by creating a local user on the PI Data Archive machine with the same username and password above and giving that user a PI mapping.
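As a minimal sketch, the local account could be created from an elevated command prompt on the PI Data Archive machine (the PI mapping itself is then created in PI SMT under Security > Mappings & Trusts):

net user afadmin qwert123! /add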

## Browsing your AF Server

You can use a URL of the form

https://<DNS hostname>/piwebapi/assetservers?path=\\<AF Server hostname>

to access your AF Server. Again, you need to grant access permissions by creating a local user on the AF Server machine with the same username and password above. By default, everyone has the World identity on the AF Server, so you do not need to create any special AF mapping.

Multiple PI Web API instances

You can spin up several PI Web API instances by running the docker run command multiple times with a different hostname and containername each time.
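For example, two independent instances (run each from its own console, as in the deployment step above):

docker run -it --hostname web1 --name web1 elee3/afserver:piwebapi
docker run -it --hostname web2 --name web2 elee3/afserver:piwebapi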

You can see above that I have spun up several instances on my container host.

Destroy PI Web API instance

If you no longer need the PI Web API instance, you can destroy it using

docker stop <containername>
docker rm <containername>



Limitations

AF Server, PI Web API, and SQL Express are all combined in a single container. There will be an upcoming blog post for a container with just PI Web API in it.

This example relies on local accounts for authentication.

Conclusion

Observe that the steps to deploy both the AF Server and PI Web API containers are quite similar and can be easily scripted. This helps to provision testing environments quickly and efficiently which helps in DevOps.

New updates (12 Jun 2018)

In the never ending quest for speed and productivity, every minute and second that we save waiting for applications to boot up can be better utilized elsewhere such as taking a nap or watching that cat video clip that your friend sent you. Therefore, I present to you a faster PI Web API container image that is more than 60% faster than the original one.

docker pull elee3/afserver:webapifast17R2


Remember to check the digest of the image to make sure it has not been tampered with.

3 test runs were performed to compare the boot up time.

Run 1

Start time was 13:48:00 for both. The original image finished in 2 min 36 sec while the new one finished in 55 sec.

Run 2

Start time was 13:58:00 for both. The original image finished in 2 min 27 sec while the new one finished in 55 sec.

Run 3

Start time was 14:29:00 for both. The original image finished in 2 min 28 sec while the new one finished in 57 sec.

Summary of results

Run | Original (s) | New (s)
--- | --- | ---
1 | 156 | 55
2 | 147 | 55
3 | 148 | 57
Average | 150 | 55

The results show that the new image is about 63% faster than the original one.

New updates (18 Jun 2018)

1. Added reminder to check digest of the image to make sure image has not been tampered with.

New updates (2 Jul 2018)

1. Removed telemetry and changed tag from webapifast to webapifast17R2. Took down image with tag piwebapi from repository. Boot up time for webapifast17R2 has been further reduced to 15 sec!!

New updates (26 Oct 2018)

1. New tag webapi18s for version 2018.
