
PI Developers Club

12 Posts authored by: Eugene Lee

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In my previous articles, I demonstrated using the AF Server container on a local Docker host. The implication is that you have to manage the Docker host infrastructure yourself: installation, patching, maintenance and upgrades are all manual work. These are significant barriers to getting up and running. As an analogy, imagine you visit another country on vacation and need to get from the airport to the hotel. Would it be better to buy a car (if they even sold one at the airport?) and drive to the hotel, or just take a taxi (transport as a service)? The first option requires a far larger initial investment of time and money.

 

For quick demo, training or testing purposes, getting a Docker host up and running requires effort (finding a machine with the right specifications, procuring an OS with Windows container support, patching the OS so that you can use Docker, installing the right edition of Docker) and troubleshooting if things go south (errors during setup or services refusing to start). In the past, we had no other choice, so we lived with it. But in this modern era of cloud computing, using containers as a service can be a faster and cheaper alternative. Today, I will show you how to operate the AF Server container in the cloud using Azure Container Instances. The first service of its kind, Azure Container Instances delivers containers with great simplicity and speed: a form of serverless containers.

 

Prerequisites

You will need an Azure subscription to follow along with the blog. You can get a free trial account here.

 

Azure CLI

Install the Azure CLI, a command-line tool for managing Azure resources. It is a small install. Once it is done, we need to log in:

az login

 

If the CLI can determine your default browser and has access to open it, it will do so and direct you immediately to a sign in page.

Otherwise, follow the instructions on the command line: navigate to https://aka.ms/devicelogin in your browser and enter the authorization code shown.

 

Complete the sign-in in the browser, and the CLI will print your subscription details.

 

Now set your default subscription if you have more than one. If your account has only one subscription, you can skip this step.

az account set -s <subscription name>

 

Create cloud container

We are now ready to create the AF Server cloud container. First create a resource group.

az group create --name resourcegrp -l southeastasia

You can change southeastasia to the location nearest to you. Here is the list of locations (remove any space in the name when using it):
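If a screenshot of the list isn't handy, the CLI itself can list the valid location names (these are the short, space-free forms accepted by the -l flag):

```shell
# List all Azure location names usable with -l / --location
az account list-locations --query "[].name" --output tsv
```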

 

Create a file named af.yaml. Replace <username> and <password> with the credentials for pulling the AF Server container image. There are some variables that you can configure

 

afname: The name that you choose for your AF Server.

user: Username to authenticate to your AF Server.

pw: Password to authenticate to your AF Server.

 

af.yaml

apiVersion: '2018-06-01'
name: af
properties:
  containers:
  - name: af
    properties:
      environmentVariables:
      - name: afname
        value: eugeneaf
      - name: user
        value: eugene
      - name: pw
        secureValue: qwert123!
      image: elee3/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  imageRegistryCredentials:
  - server: index.docker.io
    username: <username>
    password: <password>
  ipAddress:
    dnsNameLabel: eleeaf
    ports:
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups

 

Then run this in Azure CLI to create the container.

az container create --resource-group resourcegrp --file af.yaml

The command will return in about 5 minutes.

 

You can check the state of the container.

az container show --resource-group resourcegrp -n af --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table
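If you would rather wait in a loop than re-run this check by hand, a small polling sketch (bash-style shell; the 30-second interval is arbitrary) can watch the provisioning state:

```shell
# Poll the container group until provisioning completes
while true; do
  state=$(az container show --resource-group resourcegrp -n af \
          --query provisioningState --output tsv)
  echo "Provisioning state: $state"
  [ "$state" = "Succeeded" ] && break
  sleep 30
done
```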

 

You can check the container logs.

az container logs --resource-group resourcegrp -n af

 

Explore with PSE

You now have an AF Server container in the cloud that can be accessed from ANYWHERE with internet connectivity. You can connect to it with PSE using the FQDN. The credentials to use are those that you specified in af.yaml.

Notice that the name of the AF Server is the value of the afname environment variable that was passed in af.yaml.

 

Run commands in container

If you need to log in to the container to run commands such as afdiag, you can do so with

az container exec --resource-group resourcegrp -n af --exec-command "cmd.exe"

 

Clean up

When you are done with the container, destroy it so that you are not billed while it sits idle.

az container delete --resource-group resourcegrp -n af

You can check that the resource is deleted by listing your resources.

az resource list
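Alternatively, if the resource group was created just for this exercise, you can delete the group and everything in it in one command:

```shell
# Remove the resource group and all resources it contains
az group delete --name resourcegrp --yes --no-wait
```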

 

Considerations

There are some tricks to optimizing the deployment time of a container hosted in the cloud.

 

1. Base OS

The base OS should be one of the three most recent versions of Windows Server Core 2016. These are cached in Azure Container Instances, which helps deployment time. If you want to experience the difference, try pulling elee3/afserver:18 in the create container command above. It takes about 13 min, more than twice the 5 min needed to pull elee3/afserver:18x. The reason is that the old image with the "18" tag is based on a public SQL Server image that is 7 months old and lacks the latest OS version, so it cannot leverage the caching mechanism. I have rebuilt the image with the "18x" tag on top of my own SQL Server image with the latest OS version.

 

2. Image registry location

Hosting the image in Azure Container Registry (ACR) in the same region you deploy your container to also improves deployment time: it shortens the network path the image travels, which shortens the download time. Note that ACR, unlike Docker Hub, is not free. In my tests, deployment with ACR took 4 min.

 

3. Image size

This one is a no-brainer, which is why I am always looking to make my images smaller.

 

Another consideration is the number of containers per container group. In this example, we are creating a single-container group. The current limitation of Windows containers is that we can only create single-container groups. When this limitation is lifted in the future, there are some scenarios where I see value in multi-container groups, such as spinning up sets of containers that are complementary to each other, e.g. a PI Data Archive container, an AF Server container and a PI Analysis Service container in a 3-container group. However, for scenarios such as spinning up two AF Server containers, we should still keep them in separate container groups so that they won't fight over the same port.

 

Limitations

Kerberos authentication is not supported in a cloud environment. We are using NTLM authentication in this example.

 

Conclusion

Deploying the AF Server container to Azure Container Instances might not be as fast as deploying it to a local Docker host, but it avoids the upfront time and cost of setting up your own Docker host, which makes it ideal for demo/training/testing scenarios. Containers are billed per second, so you pay only for what you use. That is like paying only for your trip from the airport to the hotel, with nothing extra.
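Because billing is per second, the cost of a short demo session is easy to estimate. Here is a back-of-envelope sketch for the 1 vCPU / 1 GB container above; the per-second rates are assumptions for illustration, not actual Azure prices:

```shell
# Rough cost estimate for a 2-hour demo session.
# RATES BELOW ARE HYPOTHETICAL, not current Azure pricing.
seconds=$((2 * 3600))   # two hours, in seconds
# assumed rates: $0.0000135 per vCPU-second, $0.0000015 per GB-second
awk -v s="$seconds" 'BEGIN { printf "%.4f\n", s * (0.0000135 * 1 + 0.0000015 * 1) }'
```

At these assumed rates, a two-hour session comes to roughly a tenth of a dollar, which illustrates why per-second billing suits short-lived demo containers.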

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

AF Server 2018 was released on 27 Jun 2018! Let's take a look at some of the new features that are available. The following list is not exhaustive.

  • AF Server Connection information is now available for administrative users.
  • A new UOM Class, Computer Storage, is provided. The canonical UOM is the byte (b), with multiples based on both 1000 and 1024.
  • AFElementSearch and AFEventFrameSearch now support searching for elements and event frames by attribute values without having to specify a template.

What's new in AFSearch 2.10 (PI AF 2018)

  • The AFDiag utility has been enhanced to allow for bulk deletes of event frames by database and/or template and within a specified time range.

 

Here are also some articles that talk about other new features in AF 2018.

Mass Event Frame Deletion in AF SDK 2.10

DisplayDigits Exposed in AF 2018 / AF SDK 2.10

What's new in AF 2018 (2.10) OSIsoft.AF.PI Namespace

Introducing the AFSession Structure

 

To take advantage of these new features, we will need to upgrade to the AF Server 2018 container. Let me demonstrate how we can do that.

 

Create 2017R2 container and inject data

The steps for creating the container can be found in Spin up AF Server container (SQL Server included). I will use af17 as the name in this example.

docker run -di --hostname af17 --name af17 elee3/afserver:17R2

 

Now, we can create some elements, attributes and event frames.

We will also list the version to confirm it is 2017R2 (2.9.5.8368).

 

Pull 2018 image

We can use the following command to pull down the 2018 image.

docker pull elee3/afserver:18

 

The credentials required are the same as the 2017R2 image. Check the digest to make sure the image is correct.

18: digest: sha256:99e091dc846d2afbc8ac3c1ec4dcf847c7d3e6bb0e3945718f00e3f4deffe073
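On your local machine, the digest of the pulled image can be listed with a standard Docker command:

```shell
# Show repository, tag and digest for the pulled image
docker images --digests elee3/afserver
```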

 

Upgrade from 2017R2 to 2018

Create an empty folder, open PowerShell, navigate to that folder and run the following commands.

Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/afbackup.bat" -UseBasicParsing -OutFile afbackup.bat
Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/upgradeto18.bat" -UseBasicParsing -OutFile upgradeto18.bat
.\upgradeto18.bat af17 af18

 

Wait a short moment for your AF Server 2018 container to be ready. In this example, I will give it the name af18.

 

Verification

Now we can check that the element, attribute and event frame we created earlier in the 2017R2 container have been persisted to the 2018 container. First, let's connect to af18 with PSE. Upon successful connection, notice that the name and ID of the 2017R2 AF Server are retained.

 

 

Our element, attribute and event frame are all persisted.

Finally, we can see that the version has been upgraded to 2018 (2.10.0.8628).

 

Congratulations. You have successfully upgraded to the AF Server 2018 container and retained your data.

 

Rollback

If you want to rollback to the AF Server 2017R2 container, you will need to use the backup that was automatically generated and stored in the folder

C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup

docker rm -f af17
docker exec af18 cmd /c "copy /b "C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\PIAFSqlBackup*.bak" c:\db\PIFD.bak"
docker run -d -h af17 --name af17 --volumes-from af18 elee3/afserver:17R2

 

Once a PIFD database is upgraded, it cannot be downgraded, as documented here: "a downgrade of the PIFD database will not be possible". This means that data entered after the upgrade cannot be carried through a rollback.

 

Explore new features

Computer Storage UOM

AF Server Connections history

Bulk deletes of event frames by database and/or template and within a specified time range

 

Conclusion

Now that the AF Server container is available in at least two versions (2017R2 and 2018), you can really start to appreciate its usefulness for testing the compatibility of your applications against two different versions of the server. In the past, you would need to create two large VMs in order to host two AF Servers. Those days are over: you can realize immediate savings in storage space and memory. We will look into bringing these containers to some cloud offerings in future articles.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

PI Data Archive 2018 was released on 27 Jun 2018! It is now time for us to upgrade and experience all the latest enhancements.

 

Legacy subsystems such as PI AF Link Subsystem, PI Alarm Subsystem, PI Performance Equation Scheduler, PI Recalculation Subsystem and PI Batch Subsystem are not installed by default, and they will not be in the PI Data Archive 2018 container because of the install command line that I have chosen for it. This upgrade procedure assumes that you were not using any of these legacy subsystems.

 

We also get client-side load balancing in addition to scheduled archive shifts for easier management of archives. Finally, there is the integrated PI Server installation kit, which is the enhancement I am most excited about. The kit lets us generate a command-line statement for use during silent installation. No more combing through the documentation to find the feature that you want to install: just use the GUI to select the features you desire and save the command line to a file. The command line is useful in environments without a GUI, such as a container environment.

 

Today, I will be guiding you through upgrading your PI Data Archive 2017R2 container to the PI Data Archive 2018 container. In the article Overcome limitations of the PI Data Archive container, I addressed most of the limitations present in the original article Spin up PI Data Archive container. We are now left with the final limitation to address.

 

This example doesn't support upgrading without re-initialization of data.

 

I will show you how we can upgrade to the 2018 container without losing your data. Let's begin on this wonderful adventure!

 

Create 2017R2 container and inject data

See the "Create container" section in Overcome limitations of the PI Data Archive container for the detailed procedure on how to create the container. In this example, my container name will be pi17.

docker run -id -h pi17 --name pi17 pidax:17R2

 

Once your container is ready, we can use PI SMT to introduce some data that we can later use to validate that the data has been persisted to the new container. I will create a PI Point called "test" to store some string data.

We will also change some tuning parameters such as Archive_AutoArchiveFileRoot and Archive_FutureAutoArchiveFileRoot to show that they are persisted as well.

 

 

Take a backup

Before proceeding with the upgrade, let us take a backup of the container using the backup script found here. This is so that we can roll back later on if needed.

The backup will be stored in a folder named after the container.

 

Build 2018 image

1. Get the files from elee3/PI-Data-Archive-container-build

2. Get the PI Server 2018 integrated install kit from the techsupport website

3. Procure a PI License that doesn't require an MSF, such as the demo license on the techsupport website

4. Your folder structure should look similar to this now.

5. Run build.bat.

 

Upgrade from 2017R2 to 2018

Now that we have the image built, we can perform the upgrade. To do so, stop the pi17 container.

docker stop pi17

 

Create the PI Data Archive 2018 container (I will name this pi18) by mounting the data volumes from the pi17 container.

docker run -id -h pi18 --name pi18 --volumes-from pi17 -e trust=<containerhost> pidax:18

 

Verification

Now let us verify that the container named pi18 has our old data and tuning parameters and also let us check its version. We can do so with PI SMT.

Data has been persisted!

Tuning parameters have also been persisted!

Version is now 3.4.420.1182 which means the upgrade is successful. Note that the legacy subsystems that were mentioned above are no longer present.

 

Congratulations. You have successfully upgraded to the PI Data Archive 2018 container and retained your data.

 

Rollback

Now what if you want to rollback to the previous version for whatever reasons? I will show you that it is also simple to do. There are two ways that we can go about doing this.

 

| Method | Pros | Cons |
| --- | --- | --- |
| Restore | Will always work | Data added after the upgrade will be lost after the rollback; only data prior to the backup will be present. Requires a backup. |
| Non-Restore | Data added after the upgrade is persisted after the rollback | Might not always work; it depends on whether the configuration files are compatible between versions (e.g. it works for 2018 to 2017R2, but not for 2015 to earlier versions). |

 

We will explore both methods in this blog since both methods will work for rolling back 2018 to 2017R2.

 

Restore method

In this method, we remove pi17, recreate a fresh instance and restore the backup. In the container world, we treat software not as pets but as cattle.

docker rm pi17
docker run -id -h pi17 --name pi17 pidax:17R2
docker stop pi17

Copy the backup folders into the appropriate volumes at C:\ProgramData\docker\volumes

docker start pi17

 

Now let us compare pi17 and pi18 with PI SMT. We can see that they have the same data but their versions are different.

 

 

Non-Restore method

In this method, data that is added AFTER the upgrade will still be persisted after rollback. Let us add some data to the pi18 container.

 

We shall also change the tuning parameter from container17 to container18.

 

Now, let's remove any pi17 container that exists so that we only have the pi18 container running. After that, we can do

docker rm -f pi17
docker stop pi18
docker run -id -h pi17 --name pi17 --volumes-from pi18 pidax:17R2

 

We can now verify that the data added after the upgrade still exists when we roll back to the 2017R2 container.

 

 

Conclusion

In this article, we have shown that with containers it is easy to perform upgrades and rollbacks while preserving data throughout the process. Upgrades that used to take days can now be done in minutes. There is no worry that upgrading will break your container, since the data is separated from the container. One improvement I would like to see is for an older PI Archive Subsystem to downgrade archives automatically; currently, this cannot be done. If you try to connect to a newer archive format with an older piarchss without downgrading the version manually, you will see an error.

 

 

However, the reverse is possible. Connecting to an older archive format with a newer piarchss will upgrade the version automatically.

 

New updates (24 Jul 2018)

1. Fix unknown message problem in logs

2. Add trust on run-time by specifying environment variable

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In this blog post, we will explore how to overcome the limitations previously mentioned in the blog post Spin up PI Data Archive container. Container technology can contribute to the manageability of a PI System (installations, migrations, maintenance and troubleshooting that used to take weeks can potentially be reduced to minutes), so I would like to overcome as many limitations as I can so that these containers become production-ready. Let us have a look at the limitations that were previously mentioned.

 

1. This example does not persist data or configuration between runs of the container image.

2. This example relies on PI Data Archive trusts and local accounts for authentication.

3. This example doesn't support VSS backups.

 

Let us go through them one at a time.

 

Data and Configuration Persistence

This limitation can be solved by separating the data from the application container. In Docker, we can make use of volumes, which are completely managed by Docker. When we persist data in volumes, the data exists beyond the life cycle of the container: even if we destroy the container, the data remains. We create external data volumes by including the VOLUME directive in the Dockerfile, like so:

 

VOLUME ["C:/Program Files/PI/arc","C:/Program Files/PI/dat","C:/Program Files/PI/log"]

 

When we instantiate the container, Docker now knows that it has to create the external data volumes to store the data and configuration that exist in the PI Data Archive arc, dat and log directories.
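Once a container has been created from such an image, you can confirm the volumes exist and see where they live on disk (the container name pi17 here is just an example):

```shell
# Show all Docker-managed volumes on the host
docker volume ls
# Show which volumes a given container has mounted, and their host paths
docker inspect --format "{{ json .Mounts }}" pi17
```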

 

Windows Authentication

This issue can be addressed with the use of a GMSA and a little voodoo magic. The container host obtains the TGT for the container so that the container can perform Kerberos authentication and behave as if it were joined to the domain. The container host will need to be domain-joined for this to happen.

 

VSS Backups

When data is persisted externally, we can leverage the VSS provider on the container host to perform the VSS snapshot for us, so that we do not have to stop the container while performing the backup. This way, the container can run 24/7 without any downtime (as required by production environments). The PI Data Archive has mechanisms to put the archive in a consistent state and freeze it in preparation for the snapshot.

 

Create container

1. Grab the files in the 2017R2 folder from my Github repo and place them into a folder. elee3/PI-Data-Archive-container-build

2. Get PI Data Archive 2017 R2A Install Kit and extract it into the folder as well. Download from techsupport website

3. Procure a PI License that doesn't require an MSF, such as the demo license on the techsupport website, and place it in the Enterprise_X64 folder.

4. Your folder structure should look similar to this now.

5. Execute buildx.bat. This will build the image.

6. Once the build is complete, you can navigate to the Kerberos folder and run the PowerShell script (see the 3 Aug 2018 updates) to create a Kerberos-enabled container

.\New-KerberosPIDA.ps1 -AccountName <GMSA name> -ContainerName <container name>

You can request a GMSA from your IT department and install it on your container host with the Install-ADServiceAccount cmdlet.

OR

If you think it will be difficult for you to get a GMSA from your IT department, you can use the following command instead to create a non-Kerberos-enabled container

docker run -id -h <DNS hostname> --name <container name> pidax:17R2

7. Go to the pantry to make some tea or coffee. After about 1.5 minutes, your container will be ready.

 

Demo of container abilities

1. Kerberos

This section only applies if you created a Kerberos-enabled container. After creating a mapping for my domain account using PI System Management Tools (SMT) (the container automatically creates an initial trust for the container host so that you can create the mapping), let me try to connect to the PI Data Archive container using PI System Explorer (PSE). After a successful connection, let us view the message logs of the PI Data Archive container.

We can see that we have Kerberos authentication from AFExplorer.exe a.k.a PSE.

 

2. Persist Data and Configuration

When I kill off the container, I can still see the configuration and data volumes persisted on my container host, so I don't have to worry that my data and configuration are lost.

 

3. VSS Backups

Finally, what if I do not want to stop my container but I want to take a backup of my config and data? For that, we can make use of the VSS provider on the container host. Obtain the 3 files here: elee3/PI-Data-Archive-container-build

Place them anywhere on your container host. Execute

.\backup.ps1 -ContainerName <container name>

 

The output of the command will look like this.

 

Your backup will be found in the pibackup folder that is automatically created and will look like this. pi17 is the name of my container.

 

Your container is still running all the time.

 

4. Restore a backup to a container

Now that we have a backup, let me show you how to restore it to a new container. It is a very simple 3 step process.

  • docker stop the new container
  • Copy the backup files into the persisted volume. (You can find the volumes at C:\ProgramData\docker\volumes)
  • docker start the container

As you can see, it can't get any simpler. When I browse my new container, I can see the values that I entered in the old container from which the backup was taken.
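The three restore steps above can be sketched as follows (the volume directory name is illustrative; match it against the output of `docker volume ls` on your host):

```shell
docker stop pi17
# copy the backed-up arc/dat/log folders into the matching volume folders
# under C:\ProgramData\docker\volumes\<volume-name>\_data
docker start pi17
```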

 

Conclusion

In this blog post, we addressed the limitations of the original PI Data Archive container to make it more production-ready. Do we still have any need for the original PI Data Archive container, then? My answer is yes. If you do not need the capabilities offered by this enhanced container, you can use the original one. Why? Simply because the original one starts up in 15 seconds while this one takes 1.5 minutes, due to limitations in Windows containers. So if you need to spin up PI Data Archive containers quickly without worrying about these limitations (e.g. in unit testing), the original container is for you.

 

New updates (3 Aug 2018)

Script updated to allow GMSA to work in both child and parent domains. For example, mycompany.com and test.mycompany.com.

Refer to Upgrade to PI Data Archive 2018 container with Data Persistence to build the pidax:18 image needed for use with the script.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

During PI World 2018, there was a request for a PI Analysis Service container. The user wanted to be able to spin up multiple PI Analysis Service containers to balance the load during periods with a lot of backfilling to do. Unfortunately, this is limited by the fact that each AF Server can have exactly one instance of PI Analysis Service running the analytics for that server. But this has not discouraged me from making a PI Analysis Service container to add to our PI System compose architecture!

 

Features of this container include:

1. Ability to test for the presence of the AF Server so that setup won't fail

2. Simple configuration. The only thing you need to change is the host name of the AF Server container that you will be using.

3. Speed. Build and set up takes less than 4 minutes in total.

4. Buffering ability. Data will be held in the buffer when connection to target PI Data Archive goes down. (Added 13 Jun 2018)

 

Prerequisite

You will need a running AF Server container, since PI Analysis Service stores its run-time settings in the AF Server. You can get one from Spin up AF Server container (SQL Server included).

 

Procedure

1. Gather the install kits from the Techsupport website. AF Services

2. Gather the scripts and files from GitHub - elee3/PI-Analysis-Service-container-build.

3. Your folder should now look like this.

4. Run build.bat with the hostname of your AF Server container.

build.bat <AF Server container hostname>

5. Now you can execute the following to create the container.

docker run -it -h <DNS hostname> --name <container name> pias

 

That's all you need to do! Now when you connect to the AF Server container with PI System Explorer, you will notice that the AF Server is now enabled for asset analysis. (originally, it wasn't enabled)

 

Conclusion

By running this PI Analysis Service container, you can now configure asset analytics for your AF Server container to produce value added calculated streams from your raw data streams. I will be including this service in the Docker Compose PI System architecture so that you can run everything with just one command.

 

Update 2 Jul 2018

Removed telemetry and added 17R2 tag.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In one of my previous blog posts, I spun up an AF Server container using local accounts for authentication. For non-production purposes, this is fine. But since Kerberos is the authentication method that we recommend, I would like to show you that it is also possible to use Kerberos authentication with the AF Server container. To do this, you will have to involve a domain administrator, since a Group Managed Service Account (GMSA) will need to be created. Think of a GMSA as a usable version of the Managed Service Account: a single GMSA can be used across multiple hosts. For more details about GMSAs, refer to this article: Group Managed Service Accounts Overview

 

Prerequisite

You will need the AF Server image from this blog post.

Spin up AF Server container (SQL Server included)

 

Procedure

1. Request GMSA from your domain administrator. The steps are listed here.

Add-KDSRootKey -EffectiveTime (Get-Date).AddHours(-10) #Best is to wait 10 hours after running this command to make sure that all domain controllers have replicated before proceeding
Add-WindowsFeature RSAT-AD-PowerShell
New-ADServiceAccount -name <name> -DNSHostName <dnshostname> -PrincipalsAllowedToRetrieveManagedPassword <containerhostname> -ServicePrincipalNames "AFServer/<name>"

2. Once you have the GMSA, you can proceed to install it on your container host.

Install-ADServiceAccount <name>

3. Test that the GMSA is working. You should get a return value of True

Test-ADServiceAccount <name>

4. Get script to create AF Server container with Kerberos.

Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/New-KerberosAFServer.ps1" -UseBasicParsing -OutFile New-KerberosAFServer.ps1

5. Create a new AF Server container

.\New-KerberosAFServer.ps1 -ContainerName <containername> -AccountName <name>

 

Usage

Now you can open PI System Explorer on your container host and connect to your containerized AF Server using the <name> parameter from the procedure section. On the very first connection, connect as the afadmin user (password: qwert123!) so that you can set up mappings for your domain accounts; otherwise, your domain accounts will only have 'World' permissions. After you set up your mappings, you can delete the afadmin user or keep it. With the mappings for your domain account created, you can disconnect from your AF Server and reconnect to it with Kerberos authentication. From now on, you do not need explicit logins for your AF Server anymore!

 

Conclusion

We can see that security is not a limitation when it comes to using an AF Server container. It is just more troublesome to set up and requires the intervention of a domain administrator. However, it removes the need for local-account authentication, which is definitely a step towards using the AF Server container in production. In future posts, I will show how to overcome further container limitations, such as giving containers a static IP and the ability to communicate outside of the host.

 

New updates (3 Aug 2018)

Script updated to allow GMSA to work in both child and parent domains. For example, mycompany.com and test.mycompany.com.

Script now uses the new image with 18x tag based on a newer version of Windows Server Core.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In this blog post, I will be giving an overview of how to use Docker Compose to create a PI System compose architecture that you can use for

 

1. Learning PI System development

2. Running your unit tests with a clean PI System

3. Compiling your AF Client code

4. Exploring PI Web API structure

5. Testing out Asset Analytics syntax

6. Other use cases that I haven't thought of (Post in the comments!)

 

What is Compose?

It is a tool for defining and running multi-container Docker applications. With Compose, you use a single file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. It is both easy and convenient.

 

Setup images

The setup involved is simple. You can refer to my previous blog posts to set up these images. Docker setup instructions can be found in the Containerization Hub link above.

Spin up PI Web API container (AF Server included)

Spin up PI Data Archive container

Spin up AF Client container

Spin up PI Analysis Service container

 

Compose setup

In PowerShell, run these commands as administrator:

 

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile "$Env:ProgramFiles\docker\docker-compose.exe"

 

Obtain the Compose file from docker-compose.yml and place it on your desktop.
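For reference, a Compose file for an architecture like this one might look roughly as follows. This is only a minimal sketch: the service names and the afclient and pianalysis image names are assumptions for illustration, not the contents of the actual docker-compose.yml.

```yaml
version: '3'
services:
  pi:                                    # PI Data Archive
    image: pida                          # image built in the PI Data Archive post
    hostname: eleepi
  afwebapi:                              # AF Server + PI Web API
    image: elee3/afserver:webapifast17R2
    hostname: eleeaf
  client:                                # AF Client (afs sample included)
    image: afclient                      # assumed image name
  analysis:                              # PI Analysis Service
    image: pianalysis                    # assumed image name
```

With a file like this on your desktop, a single docker-compose up brings up all four services together.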

 

Deployment

Open a command prompt and navigate to your desktop. Enter

docker-compose up

 

Wait until the screen shows that all the services are ready.

Once you see that, you can close the window. Your PI System architecture is now up and running!

 

Usage

There are various things you can try out. If you are experiencing networking issues between the containers, turn off the firewall for the Public Profile on your container host.

 

1. You can try browsing the PI Web API structure by using this URL (https://eleeaf/piwebapi) in your web browser. When prompted for credentials, you can use

username: afadmin

password: qwert123!

 

2. Test network connectivity from client container to the PI Data Archive and AF Server by running

docker exec -it desktop_client_1 afs

The hostname of the AF Server is eleeaf. When prompted to use NTLM, enter q. The hostname of the PI Data Archive is eleepi. Both connectivity checks should succeed.

 

3. You can install PI System Management Tools on your container host and connect to the PI Data Archive via the IP address of the container. For some reason, PI SMT does not let you connect by hostname.

 

4. You can also install PI System Explorer and connect to the AF Server to create new databases.

 

5. You can try compiling some open source AF SDK code found in our Github repository using the AF Client container. (so that you do not have to install Visual Studio)

 

6. You can use PI System Explorer to experiment with some Asset Analytics equations that you have in mind to check if they are valid.
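The connectivity test in step 2 can also be approximated from the container host with a plain TCP port probe. This is only a sketch: it assumes the default ports (5457 for the AF Server, 5450 for the PI Data Archive) and the hostnames used in this walkthrough.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # eleeaf hosts the AF Server (port 5457), eleepi the PI Data Archive (port 5450)
    for host, port in [("eleeaf", 5457), ("eleepi", 5450)]:
        print(f"{host}:{port} reachable: {port_open(host, port)}")
```

If both probes fail, recheck the firewall note above about the Public profile on the container host.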

 

Destroy

Once you are done with the environment, you can destroy it with

docker-compose down

 

Limitations

This example does not persist data or configuration between runs of the container.

These applications do not yet support upgrade of container without re-initialization of the data.

This example relies on PI Data Archive trusts and local accounts for authentication.

AF Server, PI Web API, and SQL Express are all combined in a single container. 

 

Conclusion

Notice how easy it is to set up a PI System compose architecture. You can do this in less than 10 minutes. No more having to wait hours to install a PI System for testing and developing with.

The current environment contains the PI Data Archive, AF Server, AF Client, PI Web API, an AF SDK sample application (called afs) and PI Analysis Service. More services will be added in the future!


Spin up AF Client container

Posted by Eugene Lee Employee May 21, 2018

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

In this blog post, the instructions for building an AF Client image will be shown. For instructions on how to install Docker, please see the link above.

 

1. Please clone this git repository. GitHub - elee3/AF-Client-container-build

2. Download AF Client 2017R2 from the Techsupport website. AF Client 2017 R2

3. Extract AF Client into the cloned folder.

4. Run build.bat

 

If you would prefer us to build the image for you so that you can docker pull it immediately (less hassle), please post in the comments!

 

Usage

This container can be used to compile your AF SDK code (so that you do not have to install Visual Studio), and you can use it to pack an AF SDK application with its AF Client dependency for easier distribution. An AF SDK sample application (called afs) has been included in the image for you to try compiling.

 

Limitations

Containers cannot run GUI applications, such as WPF and Windows Forms applications.

 

Update 27 Jun 2018

Fixed an issue with the registry links breaking.


Containerization Hub

Posted by Eugene Lee Employee May 21, 2018

Good day everyone, I am creating this blog post as a convenient way for users to find the containerization articles that have already been published, and to list those that have yet to be published (subject to change). You will just need to bookmark this page rather than all the individual articles.

 

Spin up AF Server container (SQL Server included)

Spin up PI Web API container (AF Server included)

Spin up PI Data Archive container

Spin up AF Client container

Compose PI System container architecture

Spin up AF Server container (Kerberos enabled)

Spin up PI Analysis Service container

Overcome limitations of the PI Data Archive container

Upgrade to PI Data Archive 2018 container with Data Persistence

AF Server container in the cloud

Spin up AF Server container (without SQL Server)

Spin up PI Web API container (without AF Server)

Spin up PI Web API website container

Spin up PI Interface container

AF Server container network options

 

Let me know if you have any requests!

 

To avoid repeating the same steps in every containerization article, I will include the steps to set up Docker here.

 

Install Docker

For Windows 10,

You can install Docker for Windows. Please follow the instructions here

 

For Windows Server 2016,

You can use the OneGet provider PowerShell module. Open an elevated PowerShell session and run the below commands.

 

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force    
Install-Package -Name docker -ProviderName DockerMsftProvider    
Restart-Computer -Force    

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

I now present to you another blog post in the containerization series on spinning up PI Web API in less than 3 minutes (My test came out to be 2 min 44 sec!).

 

I will repeat the steps for setting up Docker here for your convenience. If you have already done so while using the AF Server image, you do not need to repeat it. The PI Web API image offered here is fully self-contained; in other words, you do not have to worry about any dependencies, such as where to store your PI Web API configuration. In a later blog post, I will cover a PI Web API image that contains only the application service, for those of you who want the application service separate from the database service. In that image, you will need to furnish your own AF Server. For now, you do not have to worry about that.

 

Set up

Install PI Web API image

Run the following command at a console. When prompted for the username and password during login, please contact me (elee@osisoft.com) for them. Currently, this image is only offered to users who already have a PI Server license or are PI Developers Club members (try it now for free!). You will have to log in before doing the pull; otherwise, the pull will fail.

docker login  
docker pull elee3/afserver:piwebapi  
docker logout  

Remember to check the digest of the image to make sure it has not been tampered with.

Update 2 Jul 2018: Please use the fast version with the webapifast17R2 tag, as that image is better in every possible way (boot-up time of 15 sec compared to 3 minutes).

 

Deployment

Now that the setup is complete, you can proceed to run the container image with the following command. Replace <DNS hostname> and <containername> with names of your own choosing. Remember to pick a DNS hostname that is unique.

docker run -it --hostname <DNS hostname> --name <containername> elee3/afserver:piwebapi  

 

After about 3 minutes, you will see that the command prompt indicates that both the PI Web API and AF Server are Ready.

This indicates that your PI Web API is ready for usage. At this point, you can just close the window.

Update 2 Jul 2018: Please use the fast version with the webapifast17R2 tag, as that image is better in every possible way (boot-up time of 15 sec compared to 3 minutes).

 

Usage

Now you can open a browser on your container host and connect to it with the DNS hostname that you chose earlier.

https://<DNS hostname>/piwebapi

 

When prompted for credentials, you can use

User name: afadmin

Password: qwert123!

 

Browsing your PI Data Archive

You can use a URL of the form

https://<DNS hostname>/piwebapi/dataservers?path=\\<PI Data Archive hostname>

to access your PI Data Archive. Of course, you need to grant access permissions by creating a local user on the PI Data Archive machine with the same username and password as above and giving that user a PI mapping.

 

Browsing your AF Server

You can use a URL of the form

https://<DNS hostname>/piwebapi/assetservers?path=\\<AF Server hostname>

to access your AF Server. Again, you need to grant access permissions by creating a local user on the AF Server machine with the same username and password as above. By default, everyone has the World identity in AF Server, so you do not need to add any special AF mapping.
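In these "by path" URLs, the backslashes in the server path should be URL-encoded, and the credentials above are sent as HTTP Basic authentication. Here is a small sketch of building both; the mywebapi, mypi, and myaf hostnames are placeholders, not names from this walkthrough.

```python
import base64
from urllib.parse import quote

def piwebapi_url(host: str, controller: str, server_path: str) -> str:
    """Build a PI Web API 'by path' URL (controller is e.g. 'dataservers' or 'assetservers')."""
    return f"https://{host}/piwebapi/{controller}?path={quote(server_path, safe='')}"

def basic_auth_header(user: str, password: str) -> dict:
    """HTTP Basic authentication header for the local afadmin account."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(piwebapi_url("mywebapi", "dataservers", r"\\mypi"))
# https://mywebapi/piwebapi/dataservers?path=%5C%5Cmypi
print(basic_auth_header("afadmin", "qwert123!"))
```

Most browsers and HTTP clients do this encoding for you, but it helps to know what the request actually looks like when scripting against PI Web API.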

 

Multiple PI Web API instances

You can spin up several PI Web API instances by running the docker run command multiple times with a different hostname and containername each time.

I have spun up several instances on my container host this way.

 

Destroy PI Web API instance

If you no longer need the PI Web API instance, you can destroy it using

docker stop <containername>  
docker rm <containername>  

 

Limitations

AF Server, PI Web API, and SQL Express are all combined in a single container. There will be an upcoming blog post for a container with just PI Web API in it.

This example relies on local accounts for authentication.

 

Conclusion

Observe that the steps to deploy both the AF Server and PI Web API containers are quite similar and can be easily scripted. This helps to provision testing environments quickly and efficiently which helps in DevOps.

 

New updates (12 Jun 2018)

In the never-ending quest for speed and productivity, every minute and second we save waiting for applications to boot up can be better spent elsewhere, such as taking a nap or watching that cat video your friend sent you. Therefore, I present to you a faster PI Web API container image that is more than 60% faster than the original one.

 

docker pull elee3/afserver:webapifast17R2

Remember to check the digest of the image to make sure it has not been tampered with.

 

Three test runs were performed to compare the boot-up times.

 

Run 1

Start time was 13:48:00 for both. The original image finished in 2 min 36 sec while the new one finished in 55 sec.

 

Run 2

Start time was 13:58:00 for both. The original image finished in 2 min 27 sec while the new one finished in 55 sec.

 

Run 3

Start time was 14:29:00 for both. The original image finished in 2 min 28 sec while the new one finished in 57 sec.

 

Summary of results

Run      Original (s)   New (s)
1        156            55
2        147            55
3        148            57
Average  150            55

 

The results show that the new image is about 63% faster than the original one.

 

New updates (18 Jun 2018)

1. Added reminder to check digest of the image to make sure image has not been tampered with.

 

New updates (2 Jul 2018)

1. Removed telemetry and changed tag from webapifast to webapifast17R2. Took down image with tag piwebapi from repository. Boot up time for webapifast17R2 has been further reduced to 15 sec!!

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

Currently, in order to set up an AF Server for testing/development purposes, you have two choices.

 

1. Install SQL Server and AF Server on your local machine

The problem with this method is that there is no isolation from the host operating system. Therefore, you risk the stability of the host computer if something goes wrong. You also can't spin up multiple AF Servers this way.

 

2. Provision a VM and then install SQL Server and AF Server on it

While this method provides isolation, the problem lies in the time it takes to get it set up and also the size of the VM which includes many unnecessary components.

 

There is a better way!

Today, I will be teaching you how to spin up AF Server instances in less than 1 minute (after performing the initial setup, which might take a bit longer). This is made possible by containerization technology.

 

Requirements

Windows Server build 1709, Windows Server 2016 (Core or with Desktop Experience), or Windows 10 Professional or Enterprise (Anniversary Update or later). Ensure that your system is current with Windows Update.

 

Benefits

1. Portability. Easy to transfer containers to other container hosts that meet the prerequisites. No need to do tedious migrations.

2. Side by side versioning. Ability to run multiple versions of AF Server on the same container host for compatibility testing and debugging purposes.

3. Speed. Very fast to deploy.

4. Resource efficiency and density. More AF Servers can run on the same bare metal machine compared to virtualization.

5. Isolation. If you no longer need the AF Server, you can remove it easily. It won't leave any temporary or configuration files on your container host.

6. Compatibility with container orchestration systems.

 

Set up

Install Docker

For Windows 10,

You can install Docker for Windows. Please follow the instructions here

 

For Windows Server 2016,

You can use the OneGet provider PowerShell module. Open an elevated PowerShell session and run the below commands.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
Restart-Computer -Force

 

Install AF Server image

Run the following command at a console. When prompted for the username and password during login, please contact me (elee@osisoft.com) for them. Currently, this image is only offered to users who already have a PI Server license or are PI Developers Club members (try it now for free!). You will have to log in before doing the pull; otherwise, the pull will fail.

docker login
docker pull elee3/afserver:17R2
docker logout

Remember to check the digest of the image to make sure it has not been tampered with.

17R2: digest: sha256:0372be8a964841dc7abc9031472f549563ec5d9f42cfe8ea77aec67d7820235d
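An image digest is a SHA-256 hash; docker pull prints the digest of what it fetched, and docker images --digests lists it for local images, so you can compare against the published value above. As an illustration of how such a comparison works, here is a sketch that hashes a local file and checks it against a published digest string (note this is only an analogy: Docker's digest covers the image manifest, not a single file).

```python
import hashlib

def sha256_hex(path: str) -> str:
    """SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def digest_matches(published: str, path: str) -> bool:
    """Compare a published 'sha256:...' digest string against a file's hash."""
    return published.split(":", 1)[-1] == sha256_hex(path)
```

If the digest Docker reports differs from the published one, do not run the image.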

 

Deployment

Now that the setup is complete, you can proceed to run the container image with the following command. Replace <DNS hostname> and <containername> with names of your own choosing. This will take less than 1 minute. Remember to pick a DNS hostname that is unique.

docker run -di --hostname <DNS hostname> --name <containername> elee3/afserver:17R2

 

You can now open up PI System Explorer on your local machine and connect to the AF Server by specifying the DNS Hostname that you chose earlier. When prompted for credentials, use

User name: afadmin

Password: qwert123!

Check the box to remember the credentials so that you won't have to enter them every time.

 

You can choose to rename the AF Server if you wish.

 

And you are done! Enjoy the new AF Server instance that you have created!

 

Using with AF SDK

To connect to the AF Server from code using AF SDK, the following Connect overload can be utilized with the same credentials as above.

PISystem.Connect Method (NetworkCredential)

 

Multiple AF Servers

In order to spin up another AF Server instance, follow the steps above. Once the new container is running, you have to change the ServerID. You can do this via

docker exec -i <containername> cmd /c "cd %pihome64%\af&afdiag.exe /cid:<guid>"

 

You can generate a new GUID using this.
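Any GUID generator will do; for instance, Python's standard library can produce one:

```python
import uuid

# Random GUID suitable for afdiag's /cid: argument
print(uuid.uuid4())
```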

 

Destroy AF Server

If you no longer need the AF Server, you can destroy it using

docker stop <containername>
docker rm <containername>

 

Limitations

This example uses a public SQL Express container image which is currently not available for use in a production environment.

This example relies on local accounts for authentication. Refer to the following article if you want to use Kerberos. Spin up AF Server container (Kerberos enabled)

 

New updates (14 Feb 2018)

1. 2017R2 tag is now available. Commands have been updated in the blog.

2. Image has been updated with ability to import in an existing AF Server backup in the form of PIFD.bak file. To do this, run

docker run -di --hostname <DNS hostname> --name <containername> -v <path to folder containing PIFD.bak>:c:\db elee3/afserver:2017R2 migrate.bat

 

New updates (30 May 2018)

1. Local account is no longer in the administrators group. Only a mapping to an AF Identity is done (better security).

 

New updates (18 Jun 2018)

1. Added reminder to check digest of the image to make sure image has not been tampered with.

 

New updates (2 Jul 2018)

1. Changed tag from 2017R2 to 17R2.

2. Removed telemetry

 

New updates (13 Jul 2018)

1. Changes to facilitate upgrading to 2018 container

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

Today, I will be teaching you a recipe for cooking a PI Data Archive container. Please see my previous blog posts above on how to get Docker installed. We will be mixing the ingredients in a folder to create an image. After we have the image, we can bake the image to obtain a container.

 

Ingredients

1. PI Data Archive 2017 R2A Install Kit Download from techsupport website (contains the software)

2. Dockerfile GitHub - elee3/PI-Data-Archive-container-build (describes the mixing steps to form the image)

3. build.bat GitHub - elee3/PI-Data-Archive-container-build (script to start the mixing)

4. generateid.txt GitHub - elee3/PI-Data-Archive-container-build (reference commands for changing the Server ID)

5. pilicense.dat (many ways to obtain one such as through Account Manager/Partner Manager/Academic Team/PI DevClub membership; best to get a demo license that doesn't require an MSF)

6. temp.txt GitHub - elee3/PI-Data-Archive-container-build (adds host trust)

7. trust.bat GitHub - elee3/PI-Data-Archive-container-build (adds host trust)

 

Recipe

1. Gather all the required ingredients as listed above

2. Extract the Enterprise_X64 folder from the PI Data Archive Install Kit.

3. Add pilicense.dat into the Enterprise_X64 folder. Overwrite the existing file if needed.

4. Put the other ingredients into the parent folder of the Enterprise_X64 folder.

 

Your folder structure should now look like this.

5. Execute build.bat. The mixing will take less than 5 minutes.

 

6. Once the image is formed, you can now execute

 

docker run -it --hostname <DNS hostname> --name <containername> pida

 

at the command line to bake the image. This will take about 15 seconds.

You will see the IP address of the PI Data Archive listed in the IPv4 Address field. Use this IP address with PI SMT on your container host to connect. Your PI Data Archive container is now ready to be consumed!

 

Hint: Multiple PI Data Archive instances

If you want to bake another instance of the PI Data Archive container (just repeat step 6 with a different hostname and containername), you will need to change the Server ID too. The following procedure can be done in piconfig to accomplish this.

 

@tabl pisys,piserver
@mode ed
@istr name,serverid
hostname,
@quit

 

Replace hostname with the real hostname of your container. For more information about this, please refer to this KB article. Duplicate PI Server IDs cause PI SDK applications to fail

 

Limitations

1. This example does not persist data or configuration between runs of the container image.

2. This example relies on PI Data Archive trusts and local accounts for authentication.

3. This example doesn't support VSS backups.

4. This example doesn't support upgrading without re-initialization of data.

 

For Developers

Here is an example to connect with AF SDK and read the current value of a PI Point.

 

Conclusion

Notice how quick and easy it is to cook a PI Data Archive container. I hope you find it delicious. I like it because I can easily cook up instances for testing and remove them when I do not need them. (Please don't waste food)

 

Update 31 May 2018

Local account is no longer in the administrators group. Only a mapping to a PI Identity is done (better security).

 

Update 2 Jul 2018

Added 17R2 tag.
