Note: For development and testing purposes only. Not supported in production environments.

 

For links to other containerization articles, see the Containerization Hub.

 

Introduction

Until now, installing a PI interface on a separate node from the PI Data Archive has meant provisioning a separate physical or virtual machine just for the interface itself. Don't you think that is a bit of a waste of resources? To address this, we can containerize interfaces so that they become more portable, which allows them to be scheduled anywhere inside your computing cluster. Their batch-file configuration also makes them good candidates for lifting and shifting into containers.

 

We will start off by introducing the PI to PI interface container, which is the first-ever interface container! It will have buffering capabilities (via the PI Buffer Subsystem), and its performance counters will also be active.

 

Set up servers

First, let's spin up two PI Data Archive containers to act as the source and destination servers. See the PI Data Archive container health check article for how to build the PI Data Archive container.

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h pi1 --name pi1 -e trust=%computername% pidax:18
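
Before moving on, it is worth confirming that both Data Archive containers came up. This is just a standard docker ps filter on the names we assigned, nothing PI-specific:

docker ps --filter "name=pi" --format "table {{.Names}}\t{{.Status}}"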

 

For the source code to build the PI Data Archive container and the PI to PI interface container, please send an email to technologyenablement@osisoft.com. This is a short-term measure for obtaining the source code while we revise our public code-sharing policies.

 

We shall be using pi1 as our source and pi as our destination.

 

Let's open up PI SMT to add the trust for the PI to PI Interface container. Do this on both PI Data Archives.

The IP address and NetMask are obtained by running ipconfig on your container host.

The reason I set the trusts this way is that the containers are guaranteed to spawn within this subnet, since they are attached to the default NAT network. Therefore, the two PI Data Archive containers and the PI to PI Interface container are all in this subnet. Container-to-container connections are bridged through an internal Hyper-V switch.
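
If you want to double-check that subnet before creating the trusts, you can query the default NAT network directly on the container host. A quick sketch (the default network is simply named nat on a standard Docker installation for Windows containers):

docker network inspect nat -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}"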

 

On pi, create a PI Point, giving it any name you want (mine is named 'cdtclone'). Configure the other attributes of the point as follows:

Point Source: pitopi
Exception: off
Compression: off
Location1: 1
Location4: 1
Instrument Tag: cdt158

 

Leave the other attributes at their defaults. This point will receive data from cdt158 on the source server, as specified by the Instrument Tag attribute.
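
If you prefer to script the point creation instead of clicking through PI SMT, piconfig can do roughly the same thing. This is only a sketch: it assumes piconfig is on the path inside the pi container, and that you save the piconfig commands below in a file named createpoint.dif on the host.

@table pipoint
@ptclass classic
@mode create, t
@istr tag, pointsource, location1, location4, instrumenttag, compressing, excdev, excmin, excmax
cdtclone, pitopi, 1, 1, cdt158, 0, 0, 0, 0
@ends

Then copy the file into the container and feed it to piconfig:

docker cp createpoint.dif pi:C:\createpoint.dif
docker exec pi cmd /c "piconfig < C:\createpoint.dif"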

 

Set up interface

Now you are all set to proceed to the next step which is to create the PI to PI Interface container!

 

You can do so easily with just one command. Remember to log in to Docker with the usual credentials.

docker run -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

The environment variables that you can configure include

host: destination server

src: source server

ps: point source

Those are all the parameters that are supported for now.

 

You should be able to see data appearing in the cdtclone tag on the destination server now.
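
If the tag stays empty, the interface container's logs and buffer status are the first things to check. These are just standard docker commands plus the same pibufss call we will use again below:

docker logs p2p
docker exec p2p cmd /c pibufss -cfg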

 

Don't you think it was very quick and easy to get started?

 

Buffer

As I mentioned before, the container also has buffering capabilities. We shall consider two scenarios.

 

1. The destination server is stopped. This has the same effect as losing network connectivity to the destination server.

2. The PI to PI interface container is destroyed.

 

Scenario 1

Stop pi.

docker stop pi

 

Wait for a few minutes and run

docker exec p2p cmd /c pibufss -cfg

 

You should see the following output, which indicates that the buffer is working and actively queuing data in anticipation of the destination server coming back up.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 18:39:23, regid: 3
lastsend: 2-Nov-18 18:58:59
total events sent: 47, snapshot posts: 42, queued events: 8

 

When we start up pi again

docker start pi

 

Wait a few minutes before running pibufss -cfg again. You should now see

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: SendingData, successful connections: 2
PI identities: piadmins | PIWorld, auth type: SSPI
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 19:07:24, regid: 3
total events sent: 64, snapshot posts: 45, queued events: 0

 

The buffer has re-registered with the server and flushed the queued events to it. You can check with the archive editor to make sure the events are there.
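
If you do not have an archive editor handy, the PowerShell Tools for the PI System offer another way to spot-check the tag from the container host. This is a sketch only: it assumes the OSIsoft.PowerShell module is installed on the host and that the host can resolve the pi container by name.

# Connect to the destination Data Archive and read the current value of cdtclone
$con = Connect-PIDataArchive -PIDataArchiveMachineName "pi"
Get-PIValue -PointName "cdtclone" -Time (Get-Date) -Connection $con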

 

Scenario 2

Stop pi just so that events will start to buffer.

docker stop pi

 

Check that events are getting buffered by running the same pibufss -cfg command as before. You should see something like:

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering


*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 13-Nov-18 15:25:07, lastreg: 13-Nov-18 15:25:08, regid: 3
lastsend: 13-Nov-18 17:54:14
total events sent: 8901, snapshot posts: 2765, queued events: 530

 

Now while pi is still stopped, stop p2p.

docker stop p2p

 

Check the volume name that was created by Docker.

docker inspect p2p -f "{{.Mounts}}"

 

The output is shown below; the volume name is the long hexadecimal string. Save that name somewhere.

[{volume 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17 C:\ProgramData\docker\volumes\76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17\_data c:\programdata\osisoft\buffering local true }]
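
If you would rather capture just the volume name instead of picking it out of the full mounts listing, a narrower Go-template format works too:

docker inspect p2p -f "{{range .Mounts}}{{.Name}}{{end}}"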

 

Now you can destroy p2p and start pi

docker rm p2p
docker start pi

 

Use the archive editor to verify that data has stopped flowing.

The last event was at 5:54:13 PM.

 

We want to recover the data that is sitting in the buffer queue files. We can create a new PI to PI interface container pointing at the saved volume name.

docker run -v 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

And VOILA! The events in the buffer queues have all been flushed into pi.

 

To be sure that the recovered events are not the result of history recovery by the PI to PI interface container, I have disabled history recovery.

 

I have demonstrated that the events in the buffer queue files survive container destruction and re-creation, because the data is persisted in a volume outside the container.
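
A small refinement worth considering: if you create the interface container with a named volume from the start, there is no auto-generated hash to look up later. The volume name pibufq below is arbitrary, so treat this as a sketch rather than a prescribed convention:

docker run -v pibufq:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi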

 

 

Performance counters

The container also has performance counters activated. Let's try to get the value of Device Status. Run the following command in the container.

Get-Counter '\pitopi(_Total)\Device Status'

 

Output

Timestamp CounterSamples
--------- --------------
11/2/2018 7:24:14 PM \\d13072c5ff8b\pitopi(_total)\device status :0

 

A device status of 0 means healthy.
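
If you would rather not open an interactive session inside the container, the same counter can be read from the container host through docker exec (a sketch using the counter path shown above):

docker exec p2p powershell -Command "Get-Counter '\pitopi(_Total)\Device Status'"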

 

What if we stopped the source server?

docker stop pi1

 

Now run the Get-Counter command again, and you should see:

Timestamp CounterSamples
--------- --------------
11/2/2018 7:29:29 PM \\d13072c5ff8b\pitopi(_total)\device status :95

 

A device status of 95 means "Network communication error to source PI server".

 

These performance counters will be perfect for writing health checks against the interface container.
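
For example, a Dockerfile HEALTHCHECK built on this counter could mark the container unhealthy whenever the device status is non-zero. This is only a sketch of the idea, not the health check that ships with the image:

HEALTHCHECK --interval=60s CMD powershell -Command "if ((Get-Counter '\pitopi(_Total)\Device Status').CounterSamples[0].CookedValue -eq 0) { exit 0 } else { exit 1 }"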

 

Conclusion

We have seen in this blog how to use the PI to PI Interface container to transfer data between two PI Data Archive containers. As you know, OSIsoft has hundreds of interfaces, and being able to containerize one means the chances of successfully containerizing the others are very high. The example in this blog serves as a proof of concept.