# PI Developers Club


# Container Kerberos Double Hop

Posted by Eugene Lee Sep 17, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

In this blog post about security and containers, we will discuss implementing a Kerberos double hop from the client machine to the PI Web API container and finally to the PI Data Archive container. Previously, when using the PI Web API container from Spin up PI Web API container (AF Server included), we relied on local accounts for authentication to the backend server, such as the AF Server or the PI Data Archive. The limitation is that without Kerberos delegation, we cannot have per-user security, which means that all users of PI Web API have the same permissions, e.g. an operator can read sensitive tags that were meant for upper management and vice versa. Obviously, this is not ideal. What we want is more granularity in assigning permissions so that people can only access the tags that they are supposed to read.

Prerequisites

You will need two GMSA accounts. You can request such accounts from your IT department; they can refer to this blog post if they do not know how to create a GMSA: Spin up AF Server container (Kerberos enabled). Also be sure that one of them has the TrustedForDelegation property set to True. This can be done with the Set-ADServiceAccount cmdlet.

You will also need to build the PI Data Archive container by following the instructions in the Build the image section here.

PI Data Archive container health check

For the PI Web API container, you will need to pull it from the repository by using this command.

docker pull elee3/afserver:webapi18


Demo without GMSA

First, let us demonstrate what authentication looks like when we run containers without GMSA.

Let's have a look at the various authentication modes that PI Web API offers.

1. Anonymous

2. Basic

3. Kerberos

4. Bearer

We will only go through the first three modes, as Bearer requires an external identity provider, which is outside the scope of this blog.

Create the PI Data Archive container and the PI Web API container. We will also create a local user called 'enduser' in the two containers.

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h wa --name wa elee3/afserver:webapi18
docker exec wa net user enduser qwert123! /add
docker exec pi net user enduser qwert123! /add


Anonymous

Now let's open up PSE and connect to the hostname "wa". If prompted for credentials, use

Change the authentication to Anonymous and check in the changes. Restart the PI Web API service.

Verify that the setting has taken effect by using Internet Explorer to browse to /system/configuration. No credentials will be needed.

We can now try to connect to the PI Data Archive container with this URL.

https://wa/piwebapi/dataservers?path=\\pi

Check the PI Data Archive logs to see how PI Web API is authenticating.

Result: With Anonymous authentication, PI Web API authenticates with its service account using NTLM.

Basic

Now use PSE to change the authentication to Basic and check in. Restart the PI Web API service.

Close Internet Explorer and reopen it to point to /system/configuration to check the authentication method. This time, there will be a prompt for credentials. Enter

Try to connect to the same PI Data Archive as earlier. You will get an error, as the default PI Data Archive container doesn't have any mappings for enduser.

Let's see what is happening on the PI Data Archive side.

Result: With Basic authentication, the end user credential has been transferred to the PI Data Archive with NTLM.
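For intuition, Basic authentication simply sends the user name and password base64-encoded in the `Authorization` header on every request, which is why PI Web API holds the end user's actual credential and can replay it to the PI Data Archive. A minimal sketch of what the browser sends (the header construction is standard HTTP; 'enduser' is the local account from this demo):

```python
import base64

def basic_auth_header(username, password):
    # Basic auth is just "Basic " + base64("username:password");
    # there is no cryptographic protection, which is why it should
    # only be used over HTTPS.
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("enduser", "qwert123!"))
```

Because the server receives the plaintext credential, it can in turn log on to the backend as that user, which is what the PI Data Archive log shows here.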

Kerberos

Finally, use PSE to change the authentication to Kerberos and check in. Restart the PI Web API service.

Close Internet Explorer and reopen it to point to /system/configuration to check the authentication method. The prompt for credentials will look different from the Basic authentication one. Use the same credentials as you did for the Basic authentication scenario.

Try to connect to the same PI Data Archive again. You should not be able to connect. When you check on the PI Data Archive logs, you will see

Result: With Kerberos authentication, the delegation failed and the credential became NT AUTHORITY\ANONYMOUS LOGON even though we logged on to PI Web API with the local account 'enduser'.

Demo with GMSA

Kerberos

Now we shall use our GMSA accounts to make the last scenario, with Kerberos delegation, work.

Download the scripts for Kerberos enabled PI Data Archive and PI Web API here.

PI-Web-API-container/New-KerberosPWA.ps1

PI-Data-Archive-container-build/New-KerberosPIDA.ps1

I will use 'untrusted' as the name of the GMSA account that is not trusted for delegation and 'trusted' as the name of the GMSA account that is trusted for delegation. Set the SPN for 'trusted' as follows:

setspn -s HTTP/trusted trusted


Once you have the scripts, run them like this

.\New-KerberosPIDA.ps1 -AccountName untrusted -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName trusted -ContainerName wak


The scripts create a credential spec for the container based on the GMSA that you provide. A credential spec lets the container know how it can access Active Directory resources. The script then uses this credential spec when creating the container with the docker run command. It also sets the hostname of the container to be the same as the name of the GMSA. This is required because of a current limitation in the implementation; it might be resolved in the future so that you can choose your own hostnames.
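To make the credential spec less abstract, here is a sketch of the shape of the JSON document that `docker run` consumes via `--security-opt "credentialspec=file://..."`. The field layout follows Docker's Windows gMSA credential spec format; the domain name, NetBIOS name, GUID and SID below are placeholders, not values from this walkthrough:

```python
import json

def make_credential_spec(account, dns_name, netbios_name, sid):
    # Placeholder GUID/SID/domain values -- the real ones come from
    # your Active Directory domain (the scripts above generate them
    # for you from the GMSA you pass in).
    return {
        "CmsPlugins": ["ActiveDirectory"],
        "DomainJoinConfig": {
            "Sid": sid,
            "MachineAccountName": account,
            "Guid": "00000000-0000-0000-0000-000000000000",
            "DnsTreeName": dns_name,
            "DnsName": dns_name,
            "NetBiosName": netbios_name,
        },
        "ActiveDirectoryConfig": {
            "GroupManagedServiceAccounts": [
                {"Name": account, "Scope": dns_name},
                {"Name": account, "Scope": netbios_name},
            ]
        },
    }

print(json.dumps(make_credential_spec("trusted", "contoso.com", "CONTOSO", "S-1-5-21-0-0-0"), indent=2))
```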

Open Internet Explorer now with your domain account and access the PI Web API /system/userinfo endpoint. The hostname is 'trusted'.

Make sure that ImpersonationLevel is 'Delegation'.

Now try to access the PI Data Archive. The hostname is 'untrusted'. You will be unable to access it. Why? Because you haven't created a mapping yet! So let's use SMT to create a mapping to your domain account. After creating the mapping, try again and you should be able to connect. The PI Data Archive logs will show that you have connected with Kerberos. You do not need any mapping for your PI Web API service account at all if Kerberos delegation is working properly.

Result: With Kerberos authentication method in PI Web API and the use of GMSAs, Kerberos delegation works. The end domain user is delegated from the client to the PI Web API container to the PI Data Archive container. We have successfully completed the double hop.

Troubleshoot

If this doesn't seem to work for you, one thing you can try is to check your Internet Explorer settings according to this KB article.

KB01223 - Kerberos and Internet Browsers

Your browser settings might differ from mine but the container settings should be the same since the containers are newly created.

Alternative: Resource Based Constrained Delegation

A more secure way to do Kerberos delegation, instead of trusting the PI Web API container GMSA for delegation, is to set the "PrincipalsAllowedToDelegateToAccount" property on the PI Data Archive container GMSA. This is what we call Resource Based Constrained Delegation (RBCD). You do not have to trust any GMSAs for delegation in this scenario, but you will still need two GMSAs.

Assuming that you have already created the two containers with the scripts found above, I will use 'pida' as the name of the PI Data Archive container GMSA and 'piwebapi' as the name of the PI Web API container GMSA.

.\New-KerberosPIDA.ps1 -AccountName pida -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName piwebapi -ContainerName wak


Execute these two additional commands to enable RBCD.

docker exec pik powershell -command "Add-WindowsFeature RSAT-AD-PowerShell"
docker exec pik powershell -command "Set-ADServiceAccount $env:computername -PrincipalsAllowedToDelegateToAccount (Get-ADServiceAccount piwebapi)"

You will still be able to connect with Kerberos delegation from the client machine to the PI Web API container to the PI Data Archive container. In this case, the PI Data Archive container strictly allows delegation only from the PI Web API container with 'piwebapi' as its GMSA.

Conclusion

We have seen that containers are able to utilize Kerberos delegation with the use of GMSAs. This is important for middleware server containers such as PI Web API. Here is a quick summary of the various results that we have seen.

| Authentication Mode | No GMSA | With GMSA |
| --- | --- | --- |
| Anonymous | NTLM with service account | No reason to do this |
| Basic | NTLM with local end user account | No reason to do this |
| Kerberos | NTLM with anonymous logon | Kerberos delegation with domain end user account |

The interesting thing is that Basic authentication can also provide per-user security with local end user accounts. But you would need to maintain the list of local users in the PI Web API container and the PI Data Archive container separately, which is not recommended. The ideal case is to go with Kerberos delegation.

# PI Data Archive container health check

Posted by Eugene Lee Sep 3, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In my previous blog on the AF Server container health check, I talked about implementing a health check for the AF Server container. Naturally, we will also have to discuss such a check for the PI Data Archive container. For an introduction to what a health check is and how you can integrate one with Docker, please refer to the previous blog post, as I won't be repeating it here.
In part 1, I will cover the definition of the health tests that we can run for the PI Data Archive and then hook them up in the Dockerfile. In part 2, we will do something interesting with these health-check-enabled containers by using another container that I wrote to inform us by email whenever there is a change in their health status, so that we are aware when things fail. Without further ado, let's jump into the definition of the health tests for the PI Data Archive container!

Define health tests

There are 2 tests that we will be performing. The first test checks port 5450 to determine if any service is listening on that port. The second test uses piartool to block for some essential subsystems of the PI Data Archive with a fixed timeout, so that the test fails if it exceeds that timeout.

The Powershell cmdlet Get-NetTCPConnection can accomplish the first check for us. A return value of null means that there is no service listening on port 5450. The relevant code is below.

$val = Get-NetTCPConnection -LocalPort 5450 -State Listen -ErrorAction SilentlyContinue
if ($val -eq $null)
{
# return 1: unhealthy - the container is not working correctly
Write-Host "Failed: No TCP Listener found on 5450"
exit 1
}
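The same question can also be asked from outside the container with a plain TCP connect. Here is a small sketch (the hostname 'pi' is the PI Data Archive container from this walkthrough; any reachable host/port pair works):

```python
import socket

def port_listening(host, port, timeout=2.0):
    # Attempt a TCP connect; success means something is accepting
    # connections on that port, mirroring the Get-NetTCPConnection
    # listener test from inside the container.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_listening("pi", 5450) against the PI Data Archive container
```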


Next, piartool is a utility located in the adm folder of the PI Data Archive home directory. It has an option called "block" which waits for the specified subsystem to respond. This command is also used in the PI Data Archive start scripts to pause the script until the subsystem is available. The subsystems that we are going to check are the following:

$SubsystemList = @(
@("pibasess", "PI Base Subsystem"),
@("pisnapss", "PI Snapshot Subsystem"),
@("piarchss", "PI Archive Subsystem"),
@("piupdmgr", "PI Update Manager")
)

We are going to change the amount of time that we allow for each check to 10 seconds so that we do not have to wait 1 hour for it to complete. We will also grab the start and end times so that we can provide detailed logging for troubleshooting purposes. The code for this is below.

function Block-Subsystem
{
Param ([string]$Name, [string]$DisplayName, [int]$TimeoutSeconds = 10)
$StartDate = Get-Date
$rc = Start-Process -FilePath "${env:PISERVER}\adm\piartool.exe" -ArgumentList @("-block", $Name, $TimeoutSeconds) -Wait -PassThru -NoNewWindow
$EndDate = Get-Date
if ($rc.ExitCode -ne 0)
{
echo ("Block failed for {0} with exit code {1}, block started: {2}, block ended: {3}" -f $DisplayName, $rc.ExitCode, $StartDate, $EndDate)
exit 1
}
}

ForEach ($Subsystem in $SubsystemList)
{
Block-Subsystem -Name $Subsystem[0] -DisplayName $Subsystem[1] -TimeoutSeconds 10
}

Integrate into Docker

We will add this line to our Dockerfile to make Docker start performing health checks.

HEALTHCHECK --start-period=60s --timeout=60s --retries=1 CMD powershell .\check.ps1

The start period is set to 60 seconds to allow the PI Data Archive to start up and initialize properly before the health check results are taken into account. A timeout of 60 seconds is given for the entire health check to complete; if it takes longer than that, the health check is deemed to have failed. I also allowed only 1 retry, which means that the health check will be unsuccessful if the first try fails. There is no second chance!

Build the image

As usual, you will have to supply the PI Server 2018 installer and pilicense.dat yourself. The rest of the files can be found here.

elee3/PI-Data-Archive-container-build

Put all the files into the same folder and run the build.bat file. Once your image is built, you can create a container.

docker run -h pi --name pi -e trust=%computername% pidax:18

Now check docker ps. The health status should be starting. After 1 minute, which is the start period, run docker ps again. The health status should now be healthy.

Health monitoring

Now that we have a health-check-enabled container up and running, we can start to do some wonderful things with it. If you are a PI administrator, don't you wish there was some way to keep tabs on your PI Data Archive's health so that if it fails, an email can be sent to notify you that it is unhealthy? This way, you won't get a shock the next time you check on your PI Data Archive and realize that it has been down for a week!
I have written an application that can help you monitor ANY health-check-enabled containers (i.e. not only the PI Data Archive container and the AF Server container, but any container that has a health check enabled) and send you an email when they become unhealthy. We can start the monitoring with just one simple command. You should change the following variables to your own values:

Name of your SMTP server: <mysmtp>
Source email: <admin@osisoft.com>
Destination email: <operator@osisoft.com>

docker run --rm -id -h test --name test -e smtp=<mysmtp> -e from=<admin@osisoft.com> -e to=<operator@osisoft.com> elee3/health

Once the application is running, we can test it by trying to break our PI Data Archive container. I will do so by stopping the PI Snapshot Subsystem, since it is one of the services monitored by our health check. After a short while, I received an email in my inbox. Let me check docker ps again. The health status in docker ps corresponds to what the email indicated. Notice that the email even provides us with the health logs so that we know exactly what went wrong. This is so useful.

Now let me go back and start the PI Snapshot Subsystem again. The monitoring application will inform me that my container is healthy again. The latest log at 2:30:47 PM has no output, which indicates that there are no errors. The logs will normally fetch the 5 most recent events. With the health monitoring application in place, we can now sleep in peace and not worry about container failures going unnoticed.

Conclusion

In addition to what I have shown here, I want to mention that the health tests can be defined by the users themselves. You do not have to use the implementation that I provided. This level of flexibility is very important since health is a subjective topic. One man's trash is another man's treasure. You might think a BMI of 25 is ok, but the official recommendation from the health hub is 23 and below.
Therefore, the ability to define your own tests and thresholds will help you receive the right notifications for your own environment. You can hook them up during docker run. Here is more information if you are interested.

The source code for the health monitoring application is here.

elee3/Health-Monitor

# AF Server container health check

Posted by Eugene Lee Aug 23, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In a complex infrastructure which spans several data centers and has multiple dependencies with minimum service up-time requirements, it is inevitable that services can still fail occasionally. The question then is how we can manage that in order to maintain a high availability environment and keep downtime as low as possible. In this blog post, we will talk about how to implement a health check in the AF Server container to help with that goal.

What is a health check?

A container that is running isn't necessarily working, i.e. performing the service that it is supposed to provide. In Docker Engine 1.12, a new HEALTHCHECK instruction was added to the Dockerfile so that we can define a command that verifies the state of health of the container. It is the same concept as a health check for humans, such as making sure that your liver or kidney is working properly so you can take preventative measures before things get worse. In the container scenario, the exit code of the command determines whether the container is operational and doing what it is meant to do.

In the AF Server context, we need to think about what it means for the AF Server to be 'healthy'. Luckily for us, we have a counter to indicate the health status: AF Server includes a Windows PerfMon counter called AF Health Check.
If both the AF application service and the SQL Server are running and responding, this counter returns a value of 1. Another way we can check for health is to check if a service is listening on port 5457, since AF Server uses that port. We can also test if the service is running. Including all of these tests will make our health check more robust.

Define health tests

For the first measure of health, we will use the Get-Counter Powershell cmdlet to read the value of the performance counter. A healthy AF Server is shown below. A value of 1 indicates that the AF Server and SQL Server are healthy, while 0 means otherwise.

The second measure of health is to test for a service listening on port 5457. We will use the Powershell cmdlet Get-NetTCPConnection to do so. When there is no listener on port 5457, we will get an error.

The third measure of health is to check if the service is running by using the Get-Service Powershell cmdlet.

Integrate into Docker

With the health tests on hand, how can we ask Docker to perform them? The answer is to use the HEALTHCHECK instruction in the Dockerfile to instruct the Docker Engine to carry out the tests at regular intervals that can be defined by the image builder or the user. The syntax of the instruction is

HEALTHCHECK [OPTIONS] CMD command

The options that can appear before CMD are:

• --interval=DURATION (default: 30s)
• --timeout=DURATION (default: 30s)
• --start-period=DURATION (default: 0s)
• --retries=N (default: 3)

For more information on what the options mean, please look here. I will be using a start-period of 10s to allow the AF Server some time to initialize before starting the health checks. The other options I will leave as default. The user of the image can still override these options during docker run. The command's exit status indicates the health status of the container.
The possible values are:

• 0: success - the container is healthy and ready for use
• 1: unhealthy - the container is not working correctly
• 2: reserved - do not use this exit code

The command will be a batch file that runs the aforementioned tests. The instruction will therefore look like this.

HEALTHCHECK --start-period=10s CMD powershell .\check.ps1

Here are the contents of check.ps1.

#test for service listening on port 5457
Get-NetTCPConnection -LocalPort 5457 -State Listen -ErrorAction SilentlyContinue | Out-Null
if ($? -eq $false)
{
write-host "No one listening on 5457"
exit 1
}

#test if AF service is running
$status = Get-Service afservice | select -expand status
if ($status -ne "Running")
{
write-host "PI AF Application Service (afservice) is $status."
write-host "PI AF Application Service (afservice) is not running."
exit 1
}

#test for AF Server Health Counter
$counter = get-counter "\PI AF Server\Health" | Select -Expand CounterSamples | Select -Expand CookedValue
if ($counter -eq 0)
{
write-host "The health counter is $counter. This might mean either"
write-host "1. SQL Server is non-responsive"
write-host "2. SQL Server is responding with errors"
exit 1
}

Usage

The container image elee3/afserver:18x has been updated with the health check ability. After pulling it from the Docker repository with

docker pull elee3/afserver:18x

you can have some fun with it. Let me spin up a new AF Server container based on the new image.

docker run -d -h af18 --name af18 elee3/afserver:18x

Now, let's do a

docker ps

Notice that my other container af17, which is based on the elee3/afserver:17R2 image, doesn't have any health status next to its status because a health check was not implemented for it, while container af18 indicates "(health: starting)". Let's run docker ps again after waiting for a little while.

Notice that the health status has changed from 'starting' to 'healthy' after the first test, which runs interval (configured in options) seconds after the container is started. We can also do

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -ExpandProperty log

to see the health logs.

Health event

When the health status of a container changes, a health_status event is generated with the new status. We can observe this using docker events, which is a tool for getting real-time events from the Docker Engine. We will now intentionally break the container by stopping the SQL Server service and trying to connect with PSE. This is expected. We can filter docker events to grab only the health_status events for a certain time range so that we do not need to be concerned with irrelevant events. Let us grab the health_status events for the past hour for my container af18.
(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time
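The ForEach-Object in that pipeline just converts the Unix epoch seconds that docker events emits in its `.time` field into a readable local timestamp. For reference, the same conversion expressed in Python:

```python
from datetime import datetime, timedelta, timezone

def epoch_to_utc(epoch_seconds):
    # docker events reports .time as seconds since 1970-01-01 UTC;
    # this mirrors the New-Object DateTime ... addSeconds() trick
    # in the PowerShell pipeline (minus the local-time conversion).
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=epoch_seconds)

print(epoch_to_utc(1534982400))  # a timestamp from around Aug 2018
```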


Also check on

docker ps


and also docker inspect, which can give us clues about what went wrong.

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -expand log|fl
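If you want to post-process those health logs programmatically (as a monitoring application might), the JSON that docker inspect prints is easy to consume. A sketch using a hand-made sample in the shape of Docker's `.State.Health` object (Status, FailingStreak and Log are Docker's field names):

```python
import json

def latest_health(inspect_output, n=5):
    # Parse the JSON printed by:
    #   docker inspect <name> -f "{{json .State.Health}}"
    # and return the overall status plus the n most recent log entries.
    health = json.loads(inspect_output)
    return health["Status"], health.get("Log", [])[-n:]

sample = '{"Status":"unhealthy","FailingStreak":1,"Log":[{"ExitCode":1,"Output":"No one listening on 5457"}]}'
status, logs = latest_health(sample)
print(status, logs[-1]["Output"])
```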


With the health check, it is now obvious that even though the container is running, it doesn't work when we try to connect to it with PSE.
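Recall that Docker derives this health verdict solely from the exit code of the check command. A tiny sketch of that contract (the sample commands are placeholders standing in for check.ps1):

```python
import subprocess, sys

def health_verdict(check_cmd):
    # Docker's HEALTHCHECK contract: exit code 0 -> healthy,
    # anything else (conventionally 1) -> unhealthy.
    rc = subprocess.run(check_cmd).returncode
    return "healthy" if rc == 0 else "unhealthy"

print(health_verdict([sys.executable, "-c", "raise SystemExit(0)"]))  # healthy
print(health_verdict([sys.executable, "-c", "raise SystemExit(1)"]))  # unhealthy
```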

We shall restart the SQL Server service and try connecting with PSE. We can check if the container becomes healthy again by running

docker ps


and

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time

As expected, a new health_status event is generated which indicates healthy.

Conclusion

We can leverage the health check mechanism further when we use a container orchestrator such as Docker Swarm, which can detect the unhealthy state of a container and automatically replace it with a new, working container. This will be discussed in a future blog. So stay tuned!

# AF Server container in the cloud

Posted by Eugene Lee Aug 10, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In my previous articles, I demonstrated using the AF Server container in local Docker host deployments. The implication is that you have to manage the Docker host infrastructure yourself. The installation, patching, maintenance and upgrading work has to be done by you manually. This represents a significant barrier to getting up and running. As an analogy, imagine you visit another country for vacation and need to get from the airport to the hotel. Would it be better to buy a car (if they even sold one at the airport?) and drive to the hotel, or just take a taxi (transport as a service)? The first option requires a larger initial investment of time and money compared to the latter. For quick demo, training or testing purposes, getting a Docker host infrastructure up and running requires effort (getting a machine with the right specifications, procuring an OS with Windows container capabilities, patching the OS so that you can use Docker, installing the right edition of Docker) and troubleshooting if things go south (errors during setup or services refusing to start).
In the past, we had no other choice, so we just had to live with it. But in this modern era of cloud computing, using containers as a service might be a faster and cheaper alternative. Today, I will show you how to operate the AF Server container in the cloud using Azure Container Instances. The very first service of its kind in the cloud, Azure Container Instances is a new Azure service delivering containers with great simplicity and speed. It is a form of serverless containers.

Prerequisites

You will need an Azure subscription to follow along with the blog. You can get a free trial account here.

Azure CLI

Install the Azure CLI, which is a command line tool for managing Azure resources. It is a small install. Once done, we need to log in.

az login

If the CLI can determine your default browser and has access to open it, it will do so and direct you immediately to a sign-in page. Otherwise, you need to open a browser page and follow the instructions on the command line to enter an authorization code after navigating to https://aka.ms/devicelogin in your browser. Complete the sign-in via the browser.

Now set your default subscription if you have multiple subscriptions. If you only have one subscription, you can skip this step.

az account set -s <subscription name>

Create cloud container

We are now ready to create the AF Server cloud container. First, create a resource group.

az group create --name resourcegrp -l southeastasia

You can change southeastasia to a location nearest to you. Here is the list of locations (remove the space when using it).

Create a file named af.yaml. Replace <username> and <password> with the credentials for pulling the AF Server container image. There are some variables that you can configure:

afname: The name that you choose for your AF Server.
user: Username to authenticate to your AF Server.
pw: Password to authenticate to your AF Server.
af.yaml

apiVersion: '2018-06-01'
name: af
properties:
  containers:
  - name: af
    properties:
      environmentVariables:
      - name: afname
        value: eugeneaf
      - name: user
        value: eugene
      - name: pw
        secureValue: qwert123!
      image: elee3/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  imageRegistryCredentials:
  - server: index.docker.io
    username: <username>
    password: <password>
  ipAddress:
    dnsNameLabel: eleeaf
    ports:
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups

Then run this in the Azure CLI to create the container.

az container create --resource-group resourcegrp --file af.yaml

The command will return in about 5 minutes. You can check the state of the container.

az container show --resource-group resourcegrp -n af --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table

You can check the container logs.

az container logs --resource-group resourcegrp -n af

Explore with PSE

You now have an AF Server container in the cloud that can be accessed ANYWHERE as long as there is internet connectivity. You can connect to it with PSE using the FQDN. The credentials to use are those that you specified in af.yaml. Notice that the name of the AF Server is the value of the afname environment variable that was passed in af.yaml.

Run commands in container

If you need to log in to the container to run commands such as afdiag, you can do so with

az container exec --resource-group resourcegrp -n af --exec-command "cmd.exe"

Clean up

When you are done with the container, you should destroy it so that you won't have to pay for it when it is not being used.

az container delete --resource-group resourcegrp -n af

You can check that the resource is deleted by listing your resources.

az resource list

Considerations

There are some tricks to hosting a container in the cloud to optimize its deployment time.

1.
Base OS

The Base OS should be one of the three most recent versions of Windows Server Core 2016. These are cached in Azure Container Instances to help with deployment time. If you want to experience the difference, try pulling elee3/afserver:18 in the create-container command above. It will take about 13 minutes, which is more than twice the 5 minutes needed to pull elee3/afserver:18x. The reason is that the old image with the "18" tag is based on the public SQL Server image, which is 7 months old and doesn't have the latest OS version, so it cannot leverage the caching mechanism to improve performance. I have rebuilt the image with the "18x" tag based on my own SQL Server image with the latest OS version.

2. Image registry location

Hosting the image in Azure Container Registry in the same region that you use to deploy your container will help to improve deployment time, as this shortens the network path that the image needs to travel, which shortens the download time. Take note that ACR is not free, unlike DockerHub. In my tests, it took 4 minutes to deploy with ACR.

3. Image size

This one is obviously a no-brainer. That's why I am always looking to make my images smaller.

Another consideration is the number of containers per container group. In this example, we are creating a single-container group. The current limitation of Windows containers is that we can only create single-container groups. When this limitation is lifted in the future, there are some scenarios where I see value in creating multi-container groups, such as spinning up sets of containers that are complementary to each other, e.g. a PI Data Archive container, AF Server container and PI Analysis Service container in a 3-container group. However, for scenarios such as spinning up 2 AF Server containers, we should still keep them in separate container groups so that they won't fight for the same port.

Limitations

Kerberos authentication is not supported in a cloud environment.
We are using NTLM authentication in this example.

Conclusion

Deploying the AF Server container to Azure Container Instances might not be as fast as deploying it to a local Docker host, but it is cheaper compared to the upfront time and cost of setting up your own Docker host. This makes it ideal for demo/training/testing scenarios. The containers are billed on a per-second basis, so you only pay for what you use. That is like only paying for your trip from the airport to the hotel without having to pay anything extra.

# Upgrade to AF Server 2018 container with Data Persistence

Posted by Eugene Lee Jul 24, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

AF Server 2018 has been released on 27 Jun 2018! Let's take a look at some of the new features that are available. The following list is not exhaustive.

• AF Server connection information is now available for administrative users.
• A new UOM class, Computer Storage, is provided. The canonical UOM is byte (b) with multiples of 1000 and 1024.
• AFElementSearch and AFEventFrameSearch now support searching for elements and event frames by attribute values without having to specify a template.
• The AFDiag utility has been enhanced to allow for bulk deletes of event frames by database and/or template and within a specified time range.

Here are also some articles that talk about other new features in AF 2018.

Mass Event Frame Deletion in AF SDK 2.10
DisplayDigits Exposed in AF 2018 / AF SDK 2.10
What's new in AF 2018 (2.10)
OSIsoft.AF.PI Namespace
Introducing the AFSession Structure

To take advantage of these new features, we will need to upgrade to the AF Server 2018 container. Let me demonstrate how we can do that.

Create 2017R2 container and inject data

The steps for creating the container can be found in Spin up AF Server container (SQL Server included). I will use af17 as the name in this example.
docker run -di --hostname af17 --name af17 elee3/afserver:17R2

Now, we can create some elements, attributes and event frames. We will also list the version to confirm that it is 2017R2 (2.9.5.8368).

Pull 2018 image

We can use the following command to pull down the 2018 image.

docker pull elee3/afserver:18

The credentials required are the same as for the 2017R2 image. Check the digest to make sure the image is correct.

18: digest: sha256:99e091dc846d2afbc8ac3c1ec4dcf847c7d3e6bb0e3945718f00e3f4deffe073

Upgrade from 2017R2 to 2018

Create an empty folder, open up a PowerShell session, navigate to that folder and run the following commands. In this example, I will give the new container the name af18.

Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/afbackup.bat" -UseBasicParsing -OutFile afbackup.bat
Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/upgradeto18.bat" -UseBasicParsing -OutFile upgradeto18.bat
.\upgradeto18.bat af17 af18

Wait a short moment for your AF Server 2018 container to be ready.

Verification

Now we can check that the element, attribute and event frame that we created earlier in the 2017R2 container have been persisted to the 2018 container. First, let's connect to af18 with PSE. Upon successful connection, notice that the name and ID of the AF Server 2017R2 are retained. Our element, attribute and event frame are all persisted. Finally, we can see that the version has been upgraded to 2018 (2.10.0.8628).

Congratulations. You have successfully upgraded to the AF Server 2018 container and retained your data.
Rollback

If you want to roll back to the AF Server 2017R2 container, you will need to use the backup that was automatically generated and stored in the folder C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup.

docker rm -f af17
docker exec af18 cmd /c "copy /b "C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\PIAFSqlBackup*.bak" c:\db\PIFD.bak"
docker run -d -h af17 --name af17 --volumes-from af18 elee3/afserver:17R2

Once a PIFD database is upgraded, it is impossible to downgrade it, as seen here stating "a downgrade of the PIFD database will not be possible". This means that it won't be possible to persist data entered after the upgrade through the rollback.

Explore new features

Computer Storage UOM
AF Server Connections history
Bulk deletes of event frames by database and/or template and within a specified time range

Conclusion

Now that the AF Server container has at least two versions available (2017R2 and 2018), you can really start to appreciate its usefulness for testing the compatibility of your applications against two different versions of the server. In the past, you would need to create two large VMs in order to host two AF Servers. Those days are over. You can realize immediate savings in storage space and memory. We will look into bringing these containers to some cloud offerings in future articles.

# Upgrade to PI Data Archive 2018 container with Data Persistence

Posted by Eugene Lee Jul 9, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

PI Data Archive 2018 has been released on 27 Jun 2018! It is now time for us to upgrade to experience all the latest enhancements. Legacy subsystems such as PI AF Link Subsystem, PI Alarm Subsystem, PI Performance Equation Scheduler, PI Recalculation Subsystem and PI Batch Subsystem are not installed by default.
These legacy subsystems will not be in the PI Data Archive 2018 container because of the install command line that I have chosen for it. This upgrade procedure assumes that you were not using any of these legacy subsystems.

We also have client-side load balancing in addition to scheduled archive shifts for easier management of archives. Finally, there is the integrated PI Server installation kit, which is the enhancement I am most excited about. The kit can generate a command line statement for use during silent installation. No more having to comb through the documentation to find the feature that you want to install. All you have to do is use the GUI to select the features that you desire and save the command line to a file. The command line is useful in environments without a GUI, such as a container environment.

Today, I will be guiding you on a journey to upgrade your PI Data Archive 2017R2 container to the PI Data Archive 2018 container. In the article Overcome limitations of the PI Data Archive container, I addressed most of the limitations that were present in the original article Spin up PI Data Archive container. We are now left with the final limitation to address: "This example doesn't support upgrading without re-initialization of data." I will show you how we can upgrade to the 2018 container without losing your data. Let's begin on this wonderful adventure!

Create 2017R2 container and inject data

See the "Create container" section in Overcome limitations of the PI Data Archive container for the detailed procedure on how to create the container. In this example, my container name will be pi17.

docker run -id -h pi17 --name pi17 pidax:17R2

Once your container is ready, we can use PI SMT to introduce some data, which we can use to validate that the data has been persisted to the new container. I will create a PI Point called "test" to store some string data.
We will also change some tuning parameters such as Archive_AutoArchiveFileRoot and Archive_FutureAutoArchiveFileRoot to show that they are persisted as well.

Take a backup

Before proceeding with the upgrade, let us take a backup of the container using the backup script found here. This is so that we can roll back later if needed. The backup will be stored in a folder named after the container.

Build 2018 image

1. Get the files from elee3/PI-Data-Archive-container-build.
2. Get the PI Server 2018 integrated install kit from the techsupport website.
3. Procure a PI License that doesn't require an MSF, such as the demo license on the techsupport website.
4. Your folder structure should look similar to this now.
5. Run build.bat.

Upgrade from 2017R2 to 2018

Now that we have the image built, we can perform the upgrade. To do so, stop the pi17 container.

docker stop pi17

Create the PI Data Archive 2018 container (I will name this pi18) by mounting the data volumes from the pi17 container.

docker run -id -h pi18 --name pi18 --volumes-from pi17 -e trust=<containerhost> pidax:18

Verification

Now let us verify that the container named pi18 has our old data and tuning parameters, and also check its version. We can do so with PI SMT. The data has been persisted! The tuning parameters have also been persisted! The version is now 3.4.420.1182, which means the upgrade is successful. Note that the legacy subsystems mentioned above are no longer present.

Congratulations. You have successfully upgraded to the PI Data Archive 2018 container and retained your data.

Rollback

Now what if you want to roll back to the previous version for whatever reason? I will show you that it is also simple to do. There are two ways we can go about this, each with its own pros and cons.

Restore method: will always work. However, data added after the upgrade will be lost after the rollback; only data prior to the backup will be present.
It also requires a backup.

Non-Restore method: data added after the upgrade is persisted after the rollback. However, it might not always work; it depends on whether the configuration files are compatible between versions. E.g. it works for 2018 to 2017R2, but not for 2015 to earlier versions.

We will explore both methods in this blog, since both will work for rolling back from 2018 to 2017R2.

Restore method

In this method, we can remove pi17, recreate a fresh instance and restore the backup. In the container world, we treat software not as pets but more like cattle.

docker rm pi17
docker run -id -h pi17 --name pi17 pidax:17R2
docker stop pi17

Copy the backup folders into the appropriate volumes at C:\ProgramData\docker\volumes.

docker start pi17

Now let us compare pi17 and pi18 with PI SMT. We can see that they have the same data, but their versions are different.

Non-Restore method

In this method, data that is added AFTER the upgrade will still be persisted after the rollback. Let us add some data to the pi18 container. We shall also change the tuning parameter from container17 to container18. Now, let's remove any pi17 container that exists so that we only have the pi18 container running. After that, we can do

docker rm -f pi17
docker stop pi18
docker run -id -h pi17 --name pi17 --volumes-from pi18 pidax:17R2

We can now verify that the data added after the upgrade still exists when we roll back to the 2017R2 container.

Conclusion

In this article, we have shown that it is easy to perform upgrades and rollbacks with containers while preserving data throughout the process. Upgrades that used to take days can now be done in minutes. There is no worry that upgrading will break your container, since data is separated from the container. One improvement that I would like to see is for archives to be downgraded by an older PI Archive Subsystem automatically. Currently, this cannot be done.
If you try to connect to a newer archive format with an older piarchss without downgrading the version manually, you will see an error. However, the reverse is possible: connecting to an older archive format with a newer piarchss will upgrade the version automatically.

New updates (24 Jul 2018)

1. Fixed the unknown message problem in the logs.
2. Added the ability to create a trust at run-time by specifying an environment variable.

# Overcome limitations of the PI Data Archive container

Posted by Eugene Lee Jul 2, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In this blog post, we will be exploring how to overcome the limitations that were previously mentioned in the blog post Spin up PI Data Archive container. Container technology can contribute to the manageability of a PI System (installations/migrations/maintenance/troubleshooting that used to take weeks can potentially be reduced to minutes), so I would like to overcome as many limitations as I can so that the containers become production ready. Let us have a look at the limitations that were previously mentioned.

1. This example does not persist data or configuration between runs of the container image.
2. This example relies on PI Data Archive trusts and local accounts for authentication.
3. This example doesn't support VSS backups.

Let us go through them one at a time.

Data and Configuration Persistence

This limitation can be solved by separating the data from the application container. In Docker, we can make use of volumes, which are completely managed by Docker. When we persist data in volumes, the data exists beyond the life cycle of the container. Therefore, even if we destroy the container, the data will still remain.
We create external data volumes by including the VOLUME directive in the Dockerfile, like so:

VOLUME ["C:/Program Files/PI/arc","C:/Program Files/PI/dat","C:/Program Files/PI/log"]

When we instantiate the container, Docker will now know that it has to create the external data volumes to store the data and configuration that exist in the PI Data Archive arc, dat and log directories.

Windows Authentication

This issue can be addressed with the use of a GMSA and a little voodoo magic. This enables the container host to obtain the TGT for the container, so that the container is able to perform Kerberos authentication and is effectively connected to the domain. The container host will need to be domain joined for this to happen.

VSS Backups

When data is persisted externally, we can leverage the VSS provider on the container host to perform the VSS snapshot for us, so that we do not have to stop the container while performing the backup. This way, the container can run 24/7 without any downtime (as required by production environments). The PI Data Archive has mechanisms to put the archive in a consistent state and freeze it to prepare for the snapshot.

Create container

1. Grab the files in the 2017R2 folder from my GitHub repo and place them into a folder. elee3/PI-Data-Archive-container-build
2. Get the PI Data Archive 2017 R2A Install Kit and extract it into the folder as well. Download it from the techsupport website.
3. Procure a PI License that doesn't require an MSF, such as the demo license on the techsupport website, and place it in the Enterprise_X64 folder.
4. Your folder structure should look similar to this now.
5. Execute buildx.bat. This will build the image.
6.
Once the build is complete, you can navigate to the Kerberos folder and run the PowerShell script (check the 3 Aug 2018 updates) to create a Kerberos-enabled container.

.\New-KerberosPIDA.ps1 -AccountName <GMSA name> -ContainerName <container name>

You can request a GMSA from your IT department and get it installed on your container host with the Install-ADServiceAccount cmdlet.

OR

If you think it will be difficult for you to get a GMSA from your IT department, then you can use the following command to create a non-Kerberos-enabled container.

docker run -id -h <DNS hostname> --name <container name> pidax:17R2

7. Go to the pantry to make some tea or coffee. After about 1.5 minutes, your container will be ready.

Demo of container abilities

1. Kerberos

This section only applies if you created a Kerberos-enabled container. After creating a mapping for my domain account using PI System Management Tools (SMT) (the container automatically creates an initial trust for the container host so that you can create the mapping), let me now try to connect to the PI Data Archive container using PI System Explorer (PSE). After a successful connection, let me view the message logs of the PI Data Archive container. We can see that we have Kerberos authentication from AFExplorer.exe, a.k.a. PSE.

2. Persist Data and Configuration

When I kill off the container, I notice that I am still able to see the configuration and data volumes persisted on my container host, so I don't have to worry that my data and configuration are lost.

3. VSS Backups

Finally, what if I do not want to stop my container but I want to take a backup of my config and data? For that, we can make use of the VSS provider on the container host. Obtain the 3 files here: elee3/PI-Data-Archive-container-build. Place them anywhere on your container host and execute

.\backup.ps1 -ContainerName <container name>

The output of the command will look like this.
Your backup will be found in the pibackup folder that is automatically created and will look like this (pi17 is the name of my container). Your container keeps running the whole time.

4. Restore a backup to a container

Now that we have a backup, let me show you how to restore it to a new container. It is a very simple 3-step process.

• docker stop the new container
• Copy the backup files into the persisted volume. (You can find the volumes at C:\ProgramData\docker\volumes)
• docker start the container

As you can see, it can't get any simpler. When I browse my new container, I can see the values that I entered in the old container from which the backup was taken.

Conclusion

In this blog post, we addressed the limitations of the original PI Data Archive container to make it more production ready. Do we still have any need of the original PI Data Archive container then? My answer is yes. If you do not need the capabilities offered by this enhanced container, then you can use the original one. Why? Simply because the original one starts up in 15 seconds, while this one starts up in 1.5 minutes! The 1.5 minutes is due to limitations in Windows containers. So if you need to spin up PI Data Archive containers quickly without having to worry about these limitations (e.g. in unit testing), then the original container is for you.

New updates (3 Aug 2018)

The script has been updated to allow the GMSA to work in both child and parent domains, for example mycompany.com and test.mycompany.com. Refer to Upgrade to PI Data Archive 2018 container with Data Persistence to build the pidax:18 image needed for use with the script.

# Spin up PI Analysis Service container

Posted by Eugene Lee Jun 12, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

During PI World 2018, there was a request for a PI Analysis Service container.
The user wanted to be able to spin up multiple PI Analysis Service containers to balance the load during periods with a lot of backfilling to do. Unfortunately, this is limited by the fact that each AF Server can have exactly one instance of PI Analysis Service running the analytics for the server. But this has not discouraged me from making a PI Analysis Service container to add to our PI System compose architecture!

Features of this container include:

1. Ability to test for the presence of the AF Server so that set up won't fail.
2. Simple configuration. The only thing you need to change is the hostname of the AF Server container that you will be using.
3. Speed. Build and set up take less than 4 minutes in total.
4. Buffering ability. Data will be held in the buffer when the connection to the target PI Data Archive goes down. (Added 13 Jun 2018)

Prerequisite

You will need to be running the AF Server container, since PI Analysis Service stores its run-time settings in the AF Server. You can get one from Spin up AF Server container (SQL Server included).

Procedure

1. Gather the install kits from the Techsupport website. AF Services
2. Gather the scripts and files from GitHub - elee3/PI-Analysis-Service-container-build.
3. Your folder should now look like this.
4. Run build.bat with the hostname of your AF Server container.

build.bat <AF Server container hostname>

5. Now you can execute the following to create the container.

docker run -it -h <DNS hostname> --name <container name> pias

That's all you need to do! Now when you connect to the AF Server container with PI System Explorer, you will notice that the AF Server is enabled for asset analysis (originally, it wasn't).

Conclusion

By running this PI Analysis Service container, you can now configure asset analytics for your AF Server container to produce value-added calculated streams from your raw data streams.
I will be including this service in the Docker Compose PI System architecture so that you can run everything with just one command.

Update 2 Jul 2018

Removed telemetry and added the 17R2 tag.

# Spin up AF Server container (Kerberos enabled)

Posted by Eugene Lee May 30, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In one of my previous blog posts, I spun up an AF Server container using local accounts for authentication. For non-production purposes, this is fine. But since Kerberos is the authentication method that we recommend, I would like to show you that it is also possible to use Kerberos authentication with the AF Server container. To do this, you will have to involve a domain administrator, since a Group Managed Service Account (GMSA) will need to be created. Think of a GMSA as a usable version of the Managed Service Account: a single GMSA can be used on multiple hosts. For more details about GMSA, you can refer to this article: Group Managed Service Accounts Overview

Prerequisite

You will need the AF Server image from this blog post: Spin up AF Server container (SQL Server included)

Procedure

1. Request a GMSA from your domain administrator. The steps are listed here.

Add-KDSRootKey -EffectiveTime (Get-Date).AddHours(-10) #Best is to wait 10 hours after running this command to make sure that all domain controllers have replicated before proceeding
Add-WindowsFeature RSAT-AD-PowerShell
New-ADServiceAccount -name <name> -DNSHostName <dnshostname> -PrincipalsAllowedToRetrieveManagedPassword <containerhostname> -ServicePrincipalNames "AFServer/<name>"

2. Once you have the GMSA, you can proceed to install it on your container host.

Install-ADServiceAccount <name>

3. Test that the GMSA is working. You should get a return value of True.

Test-ADServiceAccount <name>

4. Get the script to create an AF Server container with Kerberos.
Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/New-KerberosAFServer.ps1" -UseBasicParsing -OutFile New-KerberosAFServer.ps1

5. Create a new AF Server container.

.\New-KerberosAFServer.ps1 -ContainerName <containername> -AccountName <name>

Usage

Now you can open up PI System Explorer on your container host and connect to your containerized AF Server with the <name> parameter that you have been using in the procedure section. On the very first connection, you should connect as the afadmin user (password: qwert123!) so that you can set up mappings for your domain accounts. Otherwise, your domain accounts will only have 'World' permissions. After you set up your mappings, you can choose to delete the afadmin user or keep it. With the mappings for your domain account created, you can now disconnect from your AF Server and reconnect to it with Kerberos authentication. From now on, you do not need explicit logins for your AF Server anymore!

Conclusion

We can see that security is not a limitation when it comes to using an AF Server container. It is just more troublesome to get it going and requires the intervention of a domain administrator. However, this removes the need to use local accounts for authentication, which is definitely a step towards using the AF Server container in production. In future posts, I will show how to overcome other limitations of containers, such as giving containers a static IP and the ability to communicate outside of the host.

New updates (3 Aug 2018)

The script has been updated to allow the GMSA to work in both child and parent domains, for example mycompany.com and test.mycompany.com. The script now uses the new image with the 18x tag, which is based on a newer version of Windows Server Core.

# Compose PI System container architecture

Posted by Eugene Lee May 21, 2018

Note: Development and Testing purposes only. Not supported in production environments.
Link to other containerization articles

Containerization Hub

Introduction

In this blog post, I will give an overview of how to use Docker Compose to create a PI System compose architecture that you can use for:

1. Learning PI System development
2. Running your unit tests with a clean PI System
3. Compiling your AF Client code
4. Exploring the PI Web API structure
5. Testing out Asset Analytics syntax
6. Other use cases that I haven't thought of (post in the comments!)

What is Compose?

It is a tool for defining and running multi-container Docker applications. With Compose, you use a single file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. It is both easy and convenient.

Setup images

The setup involved is simple. You can refer to my previous blog posts to set up these images. Docker setup instructions can be found in the Containerization Hub link above.

Spin up PI Web API container (AF Server included)
Spin up PI Data Archive container
Spin up AF Client container

Compose setup

In PowerShell, run these commands as administrator:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe

Obtain Compose file from docker-compose.yml. Place it on your desktop.
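For orientation, here is a rough sketch of what such a compose file can look like. This is not the actual docker-compose.yml from the link above; the service names and the AF Client image tag are assumptions, while the PI Web API and PI Data Archive image tags are the ones built in the earlier posts.

```yaml
version: '3'

services:
  piwebapi:                  # PI Web API with AF Server and SQL Express included
    image: elee3/afserver:webapifast17R2
    hostname: eleeaf
  pidataarchive:             # PI Data Archive image built in the earlier post
    image: pidax:17R2
    hostname: eleepi
  client:                    # AF Client container with the afs sample application
    image: afclient          # hypothetical tag from your local build.bat
```

Each service becomes one container, and docker-compose up creates and starts all of them together.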

Deployment

Open a command prompt and navigate to your desktop. Enter

docker-compose up


Wait until the screen shows

Once you see that, you can close the window. Your PI System architecture is now up and running!

Usage

There are various things you can try out. If you are experiencing networking issues between the containers, turn off the firewall for the Public Profile on your container host.

1. You can try browsing the PI Web API structure by using this URL (https://eleeaf/piwebapi) in your web browser. When prompted for credentials, you can use

2. Test network connectivity from client container to the PI Data Archive and AF Server by running

docker exec -it desktop_client_1 afs


The hostname of the AF Server is eleeaf. When prompted to use NTLM, enter q. The hostname of the PI Data Archive is eleepi. You should see the following results.

3. You can install PI System Management Tools on your container host and connect to the PI Data Archive via the IP address of the container. Somehow, PI SMT doesn't let you connect with the hostname.

4. You can also install PI System Explorer and connect to the AF Server to create new databases.

5. You can try compiling some open-source AF SDK code found in our GitHub repository using the AF Client container (so that you do not have to install Visual Studio).

6. You can use PI System Explorer to experiment with some Asset Analytics equations that you have in mind to check if they are valid.

Destroy

Once you are done with the environment, you can destroy it with

docker-compose down


Limitations

This example does not persist data or configuration between runs of the container.

These applications do not yet support upgrade of container without re-initialization of the data.

This example relies on PI Data Archive trusts and local accounts for authentication.

AF Server, PI Web API, and SQL Express are all combined in a single container.

Conclusion

Notice how easy it is to set up a PI System compose architecture. You can do this in less than 10 minutes. No more waiting hours to install a PI System for testing and development.

The current environment contains the PI Data Archive, AF Server, AF Client, PI Web API, an AF SDK sample application (called afs) and PI Analysis Service. More services will be added in the future!

# Spin up AF Client container

Posted by Eugene Lee May 21, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

In this blog post, the instructions for building an AF Client image will be shown. For instructions on how to install Docker, please see the link above.

1. Please clone this git repository. GitHub - elee3/AF-Client-container-build

2. Download AF Client 2017R2 from the Techsupport website. AF Client 2017 R2

3. Extract AF Client into the cloned folder.

4. Run build.bat

If you would prefer us to build the image for you so that you can docker pull it immediately (less hassle), please post in the comments!

Usage

This container can be used to compile your AF SDK code (so that you do not have to install Visual Studio) and you can use the container to pack an AF SDK application with its AF Client dependency for easier distribution. An AF SDK sample application (called afs) has been included in the image for you to try compiling it.

Limitations

Containers cannot run applications with GUI such as WPF and Windows Forms applications.

Update 27 Jun 2018

Fixed an issue with the registry links breaking.

# Containerization Hub

Posted by Eugene Lee May 21, 2018

Good day everyone. I am creating this blog post as a convenient way for users to find the containerization articles that have already been published, and also to list those that have yet to be published (subject to change). You will just need to bookmark this page rather than all the individual articles.

Spin up AF Server container (SQL Server included)

Spin up PI Web API container (AF Server included)

Spin up PI Data Archive container

Spin up AF Client container

Compose PI System container architecture

Spin up AF Server container (Kerberos enabled)

Spin up PI Analysis Service container

Overcome limitations of the PI Data Archive container

Upgrade to PI Data Archive 2018 container with Data Persistence

Upgrade to AF Server 2018 container with Data Persistence

AF Server container in the cloud

AF Server container health check

PI Data Archive container health check

Containers and Swarm

Spin up PI to PI Interface container

Form collectives with PIDA container

Spin up stateless AF Server container

Spin up stateless PI Web API container

Spin up PI Web API website container

Let me know if you have any requests!

The steps to setup Docker are below.

Install Docker

For Windows 10,

For Windows Server 2016,

You can use the OneGet provider PowerShell module. Open an elevated PowerShell session and run the commands below.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force


# Spin up PI Web API container (AF Server included)

Posted by Eugene Lee May 14, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

I now present to you another blog post in the containerization series, on spinning up PI Web API in less than 3 minutes (my test came out to be 2 min 44 sec!).

I will repeat the steps for setting up Docker here for your convenience. If you have already done so while using the AF Server image, then you do not need to repeat them. The PI Web API image offered here is fully self-contained; in other words, you do not have to worry about any dependencies, such as where to store your PI Web API configuration. In a later blog post, I will describe a PI Web API image that contains only the application service, for those of you who want the application service to be separate from the database service. With that image, you will need to furnish your own AF Server. For now, you do not have to worry about that.

Set up

## Install PI Web API image

docker login
docker pull elee3/afserver:piwebapi
docker logout



Remember to check the digest of the image to make sure it has not been tampered with.

Update 2 Jul 2018: Please use the fast version with tag webapifast17R2 as that image is better in every possible way. Boot up time 15 sec compared to 3 minutes.

Deployment

Now that the setup is complete, you can proceed to running the container image. To do so, use the following command. Replace <DNS hostname> and <containername> with one of your own picking. Remember to pick a DNS hostname that is unique.

docker run -it --hostname <DNS hostname> --name <containername> elee3/afserver:piwebapi

After about 3 minutes, you will see that the command prompt indicates that both the PI Web API and AF Server are Ready.

This indicates that your PI Web API is ready for usage. At this point, you can just close the window.
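If you are scripting the deployment, you may not want to sit and watch the prompt until "Ready" appears. A hedged sketch of a readiness poll follows; the helper function and the grep-on-logs probe are assumptions on my part, not part of the image.

```shell
# Poll a probe command until it succeeds or the retry budget runs out.
# In real use, the probe could be: docker logs <containername> | grep -q Ready
wait_until_ready() {
  probe=$1 tries=$2 delay=$3
  i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$probe"; then
      echo "ready after $i retries"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timed out after $tries tries"
  return 1
}

# Demo with a stub probe that succeeds immediately:
wait_until_ready true 5 1
```

Swap the stub probe for the docker logs check (and a longer delay) to block a deployment script until the container reports Ready.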


Usage

Now you can open a browser on your container host and connect to it with the DNS hostname that you chose earlier.

https://<DNS hostname>/piwebapi

When prompted for credentials, you can use

## Browsing your PI Data Archive

You can use a URL of the form

https://<DNS hostname>/piwebapi/dataservers?path=\\<PI Data Archive hostname>

to access your PI Data Archive. Of course, you need to give access permissions by creating a local user on the PI Data Archive machine with the same username and password as above, and giving that user a PI mapping.

You can use a URL of the form

https://<DNS hostname>/piwebapi/assetservers?path=\\<AF Server hostname>

to access your AF Server. Again, you need to give access permissions by creating a local user on the AF Server machine with the same username and password as above. By default, everyone has the World identity in AF Server, so you do not need to give any special AF mapping.
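The two URL forms can be assembled from the hostnames used elsewhere in this series (eleeaf and eleepi; substitute your own). A small sketch; the curl line at the end is an assumption about how you might then query the endpoint, with -k because the container's certificate is typically self-signed.

```shell
# Build the PI Web API resource URLs for a Data Archive and an AF Server.
webapi_host="eleeaf"   # DNS hostname chosen in the docker run command
pida_host="eleepi"     # PI Data Archive hostname
af_host="eleeaf"       # AF Server hostname

dataserver_url="https://${webapi_host}/piwebapi/dataservers?path=\\\\${pida_host}"
assetserver_url="https://${webapi_host}/piwebapi/assetservers?path=\\\\${af_host}"

echo "$dataserver_url"
echo "$assetserver_url"

# A real query would then look something like (hypothetical credentials):
#   curl -k -u <username>:<password> "$dataserver_url"
```

Note the doubled backslashes: the path query parameter expects \\<hostname>, so the shell string needs four backslashes to emit two.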

Multiple PI Web API instances

You can spin up several PI Web API instances by running the docker run command multiple times with a different hostname and containername.

You can see above that I have spun up several instances on my container host.
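The repetition lends itself to a loop. Here is a sketch in bash syntax that only prints the commands it would run (the "webapi1".."webapi3" names are examples); remove the leading echo to actually deploy:

```shell
#!/bin/sh
# Sketch: spin up three PI Web API instances, each with a unique
# DNS hostname and containername.
# The leading "echo" makes this a dry run; remove it to really deploy.
# -d runs the containers detached instead of interactively.
for i in 1 2 3; do
  echo docker run -d --hostname "webapi$i" --name "webapi$i" elee3/afserver:piwebapi
done
```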

Destroy PI Web API instance

If you no longer need the PI Web API instance, you can destroy it using

docker stop <containername>
docker rm <containername>



Limitations

AF Server, PI Web API, and SQL Express are all combined in a single container. There will be an upcoming blog post for a container with just PI Web API in it.

This example relies on local accounts for authentication.

Conclusion

Observe that the steps to deploy the AF Server and PI Web API containers are quite similar and can easily be scripted. This helps provision testing environments quickly and efficiently, which helps with DevOps.
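As a sketch of such scripting (bash syntax; the container names are hypothetical, and the echo makes this a dry run so nothing is actually created until you remove it):

```shell
#!/bin/sh
# Sketch of scripted provisioning: one helper deploys any image tag
# under a chosen name. Remove the "echo" to actually run docker.
deploy() {
  echo docker run -d --hostname "$2" --name "$2" "$1"
}

deploy elee3/afserver:18x      af-dev
deploy elee3/afserver:piwebapi webapi-dev
```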

In the never-ending quest for speed and productivity, every minute and second we save waiting for applications to boot can be better utilized elsewhere, such as taking a nap or watching that cat video your friend sent you. Therefore, I present to you a faster PI Web API container image that is more than 60% faster than the original one.

docker pull elee3/afserver:webapifast17R2


Remember to check the digest of the image to make sure it has not been tampered with.

Three test runs were performed to compare boot-up times.

Run 1

Start time was 13:48:00 for both. The original image finished in 2 min 36 sec while the new one finished in 55 sec.

Run 2

Start time was 13:58:00 for both. The original image finished in 2 min 27 sec while the new one finished in 55 sec.

Run 3

Start time was 14:29:00 for both. The original image finished in 2 min 28 sec while the new one finished in 57 sec.

Summary of results

| Run | Original (s) | New (s) |
| --- | --- | --- |
| 1 | 156 | 55 |
| 2 | 147 | 55 |
| 3 | 148 | 57 |
| Average | 150 | 55 |

The results show that the new image is about 63% faster than the original one.
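The averages and the percentage can be reproduced with a quick bit of shell arithmetic (integer seconds, rounding down):

```shell
#!/bin/sh
# Average boot-up times from the three runs above, in seconds.
orig_avg=$(( (156 + 147 + 148) / 3 ))   # 150
new_avg=$(( (55 + 55 + 57) / 3 ))       # 55
# Percentage improvement relative to the original image.
improvement=$(( (orig_avg - new_avg) * 100 / orig_avg ))
echo "original ${orig_avg}s, new ${new_avg}s, ${improvement}% faster"
# prints: original 150s, new 55s, 63% faster
```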

1. Added reminder to check digest of the image to make sure image has not been tampered with.

1. Removed telemetry and changed tag from webapifast to webapifast17R2. Took down image with tag piwebapi from repository. Boot up time for webapifast17R2 has been further reduced to 15 sec!!

# Spin up AF Server container (SQL Server included)

Posted by Eugene Lee May 14, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

Currently, in order to set up an AF Server for testing/development purposes, you have two choices.

1. Install SQL Server and AF Server on your local machine

The problem with this method is that there is no isolation from the host operating system. Therefore, you risk the stability of the host computer if something goes wrong. You also can't spin up multiple AF Servers this way.

2. Provision a VM and then install SQL Server and AF Server on it

While this method provides isolation, the problem lies in the time it takes to get it set up and also the size of the VM which includes many unnecessary components.

There is a better way!

Today, I will be teaching you how to spin up AF Server instances in less than 1 minute (after performing the initial setup, which might take a bit longer). This is made possible by containerization technology.

## Requirements

Windows Server build 1709, Windows Server 2016 (Core or with Desktop Experience), or Windows 10 Professional or Enterprise (Anniversary Update). Ensure that your system is up to date with Windows Update.

Benefits

1. Portability. Easy to transfer containers to other container hosts that meet the prerequisites. No need to do tedious migrations.

2. Side by side versioning. Ability to run multiple versions of AF Server on the same container host for compatibility testing and debugging purposes.

3. Speed. Very fast to deploy.

4. Resource efficiency and density. More AF Servers can run on the same bare metal machine compared to virtualization.

5. Isolation. If you no longer need the AF Server, you can remove it easily. It won't leave any temporary or configuration files on your container host.

6. Able to use with container orchestration systems such as Swarm or Service Fabric.

Set up

## Install Docker

For Windows 10,

For Windows Server 2016,

You can use the OneGet provider PowerShell module. Open an elevated PowerShell session and run the below commands.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force


## Install AF Server image

docker login
docker pull elee3/afserver:18x
docker logout


Remember to check the digest of the image to make sure it has not been tampered with.

Deployment

Now that the setup is complete, you can proceed to running the container image. To do so, use the following command.

Replace <DNS hostname> and <containername> with values of your own choosing. Remember to pick a DNS hostname that is unique in your domain.

docker run -d --hostname <DNS hostname> --name <containername> elee3/afserver:18x


You can now open up PI System Explorer on your local machine and connect to the AF Server by specifying the DNS Hostname that you chose earlier. When prompted for credentials, use

Check the box to remember the credentials so that you won't have to enter them every time.

You can choose to rename the AF Server if you wish.

And you are done! Enjoy the new AF Server instance that you have created!

Using with AF SDK

To connect to the AF Server from code using AF SDK, the following Connect overload can be utilized with the same credentials as above.

PISystem.Connect Method (NetworkCredential)

Multiple AF Servers

In order to spin up another AF Server instance, follow the steps above. Once the new container is running, you have to change the Server ID. You can do this via

docker exec -i <containername> cmd /c "cd %pihome64%\af&afdiag.exe /cid:<guid>"

You can generate a new guid using this.

You do not need to manually generate a new Server ID anymore. The image does it automatically for you.

Destroy AF Server

If you no longer need the AF Server, you can destroy it using

docker stop <containername>
docker rm <containername>


Limitations

This example uses a public SQL Express container image which is currently not available for use in a production environment. (Changed to a SQL Express image that I built myself)

This example relies on local accounts for authentication. Refer to the following article if you want to use Kerberos. Spin up AF Server container (Kerberos enabled)

1. 2017R2 tag is now available

2. Image has been updated with ability to import in an existing AF Server backup in the form of PIFD.bak file. To do this, run

docker run -di --hostname <DNS hostname> --name <containername> -v <path to folder containing PIFD.bak>:c:\db elee3/afserver:2017R2 migrate.bat


1. Local account is no longer in the administrators group. Only a mapping to an AF Identity is done (better security)

1. Added reminder to check digest of the image to make sure image has not been tampered with

1. Changed tag from 2017R2 to 17R2

2. Removed telemetry

1. Changes to facilitate upgrading to 2018 container

1. Updated to use tag 18x which comes with health check and some performance improvements

# Spin up PI Data Archive container

Posted by Eugene Lee Apr 17, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

Today, I will be teaching you a recipe for cooking a PI Data Archive container. Please see my previous blog posts above on how to get Docker installed. We will be mixing the ingredients in a folder to create an image. After we have the image, we can bake the image to obtain a container.

Ingredients

1. PI Data Archive 2017 R2A Install Kit Download from techsupport website (contains the software)

2. Dockerfile GitHub - elee3/PI-Data-Archive-container-build (describes the mixing steps to form the image)

3. build.bat GitHub - elee3/PI-Data-Archive-container-build (script to start the mixing)

4. generateid.txt GitHub - elee3/PI-Data-Archive-container-build (reference commands for changing the Server ID)

5. pilicense.dat (many ways to obtain one such as through Account Manager/Partner Manager/Academic Team/PI DevClub membership, best to get a demo license that doesn't require a MSF)

6. temp.txt GitHub - elee3/PI-Data-Archive-container-build (adds host trust)

7. trust.bat GitHub - elee3/PI-Data-Archive-container-build (adds host trust)

Recipe

1. Gather all the required ingredients as listed above

2. Extract the Enterprise_X64 folder from the PI Data Archive Install Kit.

3. Add pilicense.dat into the Enterprise_X64 folder. Overwrite the existing file if needed.

4. Put the other ingredients into the parent folder of the Enterprise_X64 folder.

Your folder structure should now look like this.

5. Execute build.bat. The mixing will take less than 5 minutes.

6. Once the image is formed, you can now execute

docker run -it --hostname <DNS hostname> --name <containername> pida


at the command line to bake the image. This will take about 15 seconds.

You will see the IP address of the PI Data Archive listed in the IPv4 Address field. Use this IP address with PI SMT on your container host to connect. Your PI Data Archive container is now ready to be consumed!
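If the console output has already scrolled past, one way to look up a container's IP address afterwards is docker inspect with a Go template (replace <containername> with your own; this is a sketch, assuming the container is attached to the default NAT network):

```shell
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" <containername>
```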

Hint: Multiple PI Data Archive instances

If you want to bake another instance of the PI Data Archive container (just repeat step 6 with a different hostname and containername), you will need to change the Server ID too. This can be done in piconfig with the following procedure.

@tabl pisys,piserver
@mode ed
@istr name,serverid
hostname,
@quit


Limitations

1. This example does not persist data or configuration between runs of the container image.

2. This example relies on PI Data Archive trusts and local accounts for authentication.

3. This example doesn't support VSS backups.

4. This example doesn't support upgrading without re-initialization of data.

For Developers

Here is an example to connect with AF SDK and read the current value of a PI Point.

Conclusion

Notice how quick and easy it is to cook a PI Data Archive container. I hope you find it delicious. I like it because I can easily cook up instances for testing and remove them when I do not need them. (Please don't waste food)

Update 31 May 2018

Local account is no longer in the administrators group. Only a mapping to a PI Identity is done.

Update 2 Jul 2018