
Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In a complex infrastructure that spans several data centers and has multiple dependencies with minimum service up-time requirements, it is inevitable that services will occasionally fail. The question is how we can manage those failures so as to maintain a high availability environment and keep downtime as low as possible. In this blog post, we will talk about how to implement a health check in the AF Server container to help with that goal.

 

What is a health check?

A running container isn't necessarily a working container, i.e. one performing the service it is supposed to provide. Docker Engine 1.12 added a HEALTHCHECK instruction to the Dockerfile so that we can define a command that verifies the state of health of the container. It is the same concept as a health check for humans: make sure your liver and kidneys are working properly, and take preventative measures before things get worse. In the container scenario, the exit code of the command determines whether the container is operational and doing what it is meant to do.

 

In the AF Server context, we need to think about what it means for the AF Server to be 'healthy'. Luckily for us, there is a counter that indicates exactly that: AF Server includes a Windows PerfMon counter called AF Health Check. If both the AF application service and the SQL Server are running and responding, this counter returns a value of 1. Another way to check for health is to verify that a service is listening on port 5457, the port AF Server uses. We can also test whether the service itself is running. Including all of these tests makes our health check more robust.

 

Define health tests

For the first measure of health, we will use the Get-Counter PowerShell cmdlet to read the value of the performance counter; a minimal check is sketched below.

A value of 1 indicates that the AF Server and SQL Server are healthy while 0 means otherwise.
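For example (a sketch using the same counter path as check.ps1 further down):

# Returns 1 when AF Server and SQL Server are healthy, 0 otherwise
Get-Counter "\PI AF Server\Health" | Select-Object -ExpandProperty CounterSamples | Select-Object -ExpandProperty CookedValue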

 

The second measure of health is to test for a service listening on port 5457. We will use the PowerShell cmdlet Get-NetTCPConnection to do so.

When there is no listener on port 5457, the cmdlet raises an error.
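For example (a sketch; the automatic variable $? is $true only when a listener was found):

# Suppress the error output and inspect the success status instead
Get-NetTCPConnection -LocalPort 5457 -State Listen -ErrorAction SilentlyContinue | Out-Null
$?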

 

The third measure of health is to check whether the service is running, using the Get-Service PowerShell cmdlet.
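For example (a sketch; afservice is the service name used in the script below):

# Should report 'Running' on a healthy container
Get-Service afservice | Select-Object -ExpandProperty Status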

 

Integrate into Docker

With the health tests in hand, how can we ask Docker to perform them? The answer is to use the HEALTHCHECK instruction in the Dockerfile, which instructs the Docker Engine to carry out the tests at regular intervals that can be defined by the image builder or the user. The syntax of the instruction is

 

HEALTHCHECK [OPTIONS] CMD command

 

The options that can appear before CMD are:

  • --interval=DURATION (default: 30s)
  • --timeout=DURATION (default: 30s)
  • --start-period=DURATION (default: 0s)
  • --retries=N (default: 3)

 

For more information on what the options mean, see the HEALTHCHECK section of the Dockerfile reference.

I will be using a start-period of 10s to allow the AF Server some time to initialize before the health checks start. I will leave the other options at their defaults. The user of the image can still override these options at docker run time.
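For instance (a sketch; these docker run flags override the corresponding HEALTHCHECK options baked into the image):

docker run -d -h af18 --name af18 --health-interval=60s --health-retries=5 elee3/afserver:18x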

 

The command’s exit status indicates the health status of the container. The possible values are:

  • 0: success - the container is healthy and ready for use
  • 1: unhealthy - the container is not working correctly
  • 2: reserved - do not use this exit code

 

The command will be a PowerShell script that runs the aforementioned tests. The instruction will therefore look like this.

HEALTHCHECK --start-period=10s CMD powershell .\check.ps1

 

Here are the contents of check.ps1

# Test for a service listening on port 5457
Get-NetTCPConnection -LocalPort 5457 -State Listen -ErrorAction SilentlyContinue | Out-Null
if ($? -eq $false)
{
    Write-Host "No one listening on port 5457"
    exit 1
}

# Test whether the AF application service is running
$status = Get-Service afservice | Select-Object -ExpandProperty Status
if ($status -ne "Running")
{
    Write-Host "PI AF Application Service (afservice) is $status."
    exit 1
}

# Test the AF Server health counter (1 = healthy, 0 = unhealthy)
$counter = Get-Counter "\PI AF Server\Health" | Select-Object -ExpandProperty CounterSamples | Select-Object -ExpandProperty CookedValue
if ($counter -eq 0)
{
    Write-Host "The health counter is $counter. This might mean either"
    Write-Host "1. SQL Server is non-responsive"
    Write-Host "2. SQL Server is responding with errors"
    exit 1
}

# All tests passed
exit 0

 

Usage

The container image elee3/afserver:18x has been updated with the health check capability. Pull it from the Docker repository with

docker pull elee3/afserver:18x

 

and you can have some fun with it. Let me spin up a new AF Server container based on the new image.

docker run -d -h af18 --name af18 elee3/afserver:18x

 

Now, let's do a

docker ps

 

Notice that my other container af17, which is based on the elee3/afserver:17R2 image, doesn't show any health status next to its status because no health check was implemented for it, while container af18 indicates "(health: starting)". Let's run docker ps again after waiting a little while.

Notice that the health status has changed from 'starting' to 'healthy' after the first test, which runs interval seconds (as configured in the options) after the container is started.

 

We can also do

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -expandproperty log

to see the health logs.

 

Health event

When the health status of a container changes, a health_status event is generated with the new status. We can observe this using docker events. We will now intentionally break the container by stopping the SQL Server service and trying to connect with PSE.

The connection fails, which is expected. Now let us check using docker events, a tool for getting real-time events from the Docker Engine.

 

We can filter docker events to grab only the health_status events for a certain time range, so that we are not distracted by irrelevant events. Let us grab the health_status events for the past hour for my container af18.

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time

 

Also check

docker ps

 

and docker inspect, which can give us clues about what went wrong.

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -expand log|fl

 

With the health check, it is now obvious that even though the container is running, it doesn't work when we try to connect to it with PSE.

We shall restart the SQL Server service and try connecting with PSE. We can check if the container becomes healthy again by running

 

docker ps

and

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time

As expected, a new health_status event is generated which indicates healthy.

 

Conclusion

We can leverage the health check mechanism further when we use a container orchestrator such as Docker Swarm, which can detect the unhealthy state of a container and automatically replace it with a new, working one. This will be discussed in a future blog post, so stay tuned!


Stream Updates in PI Web API

Posted by msingh Employee Aug 22, 2018

What is Stream Updates?

Stream Updates is a mechanism in PI Web API for retrieving data updates for PI Points and Attributes. It is built on top of Streams and StreamSets, which use the HTTP protocol. Stream Updates uses markers to record the specific event in a stream at which the client last received updates, and uses those markers to fetch new updates from that point in the stream onward. Every time you request updates, you get the changes since the marker you supplied, as well as a new marker to use for the next set of updates. Currently, Stream Updates is only available as a CTP feature with PI Web API 2018.

 

Why was Stream Updates added?

Before Stream Updates, retrieving new data for PI Points and Attributes was not very efficient. The only way to get new data was to continually issue requests to find out about changes (polling), which was time-consuming and inefficient. We had lots of tweaks and options to make the overall experience less time-consuming, but in order to achieve better performance, we decided to support incremental updates.

 

Why use Stream Updates?

Stream Updates is built to overcome some basic challenges with getting incremental data. Stream Updates operates over HTTP, which means that all the basic benefits of normal HTTP requests are present. This contrasts with Channels, which are implemented via the WebSockets protocol. In most cases, Stream Updates is more performant than Channels.

Response sizes are much smaller than with continuous polling because we get only the changes instead of the whole response all over again. Stream Updates also consumes fewer server and network resources than polling. Unlike Channels, Stream Updates is compatible with claims-based authentication.

 

How to use Stream Updates?

Stream Updates usage consists of these steps:

1. The client registers an attribute/point for updates by sending a POST request.

2. If successful, the client gets the updates by using the marker in the registration response. Markers are also provided as part of the "Links" object in the response and in the "Location" header of the response. Clients can get updates by sending a GET request using this marker.

3. The response to an updates request contains the "LatestMarker", which is the current position in the stream. The user can get new updates after this position by sending GET requests using this new marker. These requests can be chained together to get incremental updates for registered resources. A protocol-level sketch follows.
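Roughly, in PowerShell (a sketch; it assumes a PI Web API server at https://your-server/piwebapi, a known WebId, and the LatestMarker property name used by the C# sample below):

# Hypothetical inputs
$baseUrl = "https://your-server/piwebapi"
$webId = "<WebId of a PI Point or Attribute>"

# 1. Register for updates (POST); the response carries the initial marker.
$reg = Invoke-RestMethod -Method Post -Uri "$baseUrl/streams/$webId/updates" -UseDefaultCredentials
$marker = $reg.LatestMarker

# 2 and 3. Fetch updates (GET) and chain the returned marker into the next request.
$upd = Invoke-RestMethod -Method Get -Uri "$baseUrl/streams/updates/$marker" -UseDefaultCredentials
$marker = $upd.LatestMarker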

 

Sample C# client illustrating usage of Stream Updates:

`PIWebAPIClient.cs` is a wrapper around the `HttpClient` which implements the GET and the POST methods.

 

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class PIWebAPIClient
{
    private HttpClient client;
    private string baseUrl;

    public PIWebAPIClient(string url, string username, string password)
    {
        client = new HttpClient();

        // Basic authentication header built from the supplied credentials
        string auth = Convert.ToBase64String(Encoding.ASCII.GetBytes(string.Format("{0}:{1}", username, password)));
        client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", auth);
        baseUrl = url;
    }

    public PIWebAPIClient(string url)
    {
        client = new HttpClient();
        baseUrl = url;
    }

    public async Task<object> GetAsync(string uri)
    {
        HttpResponseMessage response = await client.GetAsync(uri);
        var jsonString = await response.Content.ReadAsStringAsync();
        var json = JsonConvert.DeserializeObject<object>(jsonString);
        if (!response.IsSuccessStatusCode)
        {
            var responseMessage = "Response status code does not indicate success: " + (int)response.StatusCode + " (" + response.StatusCode + "). ";
            throw new HttpRequestException(responseMessage + Environment.NewLine + jsonString);
        }
        return json;
    }

    public async Task<object> PostAsync(string uri)
    {
        HttpResponseMessage response = await client.PostAsync(uri, null);
        var jsonString = await response.Content.ReadAsStringAsync();
        var json = JsonConvert.DeserializeObject<object>(jsonString);
        if (!response.IsSuccessStatusCode)
        {
            var responseMessage = "Response status code does not indicate success: " + (int)response.StatusCode + " (" + response.StatusCode + "). ";
            throw new HttpRequestException(responseMessage + Environment.NewLine + jsonString);
        }
        return json;
    }

    // Register a stream for updates: POST /streams/{webId}/updates
    public async Task<dynamic> RegisterForStreamUpdates(string webId)
    {
        string url = baseUrl + "/streams/" + webId + "/updates";
        dynamic response = await PostAsync(url);
        return response;
    }

    // Retrieve updates since a marker: GET /streams/updates/{marker}
    public async Task<dynamic> RetrieveStreamUpdates(string marker)
    {
        string url = baseUrl + "/streams/updates/" + marker;
        dynamic response = await GetAsync(url);
        return response;
    }
}

`Program.cs` is a simple C# class which uses the Client to register for and retrieve Stream Updates.

 

using System;

class Program
{
    static string baseUrl = "https://your-server/piwebapi";
    static string marker = null;
    static string username = "username";
    static string password = "password";
    static PIWebAPIClient client = new PIWebAPIClient(baseUrl, username, password);

    static void Main(string[] args)
    {
        string webId = "webId";
        dynamic response = client.RegisterForStreamUpdates(webId).Result;
        marker = response.LatestMarker;

        var startTimeSpan = TimeSpan.Zero;
        var periodTimeSpan = TimeSpan.FromSeconds(10);

        // ReceiveUpdates is called every 10 seconds until the user exits the application.
        var timer = new System.Threading.Timer((e) =>
        {
            ReceiveUpdates();
        }, null, startTimeSpan, periodTimeSpan);

        Console.WriteLine("Press Enter to exit at any time.");
        Console.ReadLine();
    }

    public static void ReceiveUpdates()
    {
        // Read and advance the shared marker. (A marker parameter here would shadow
        // the static field, and every poll would reuse the original marker.)
        dynamic update = client.RetrieveStreamUpdates(marker).Result;
        Console.WriteLine(update);
        marker = update.LatestMarker;
    }
}

For more information, see the topics page of your PI Web API installation: https://your-server/piwebapi/help/topics/stream-updates

One of the coolest things about Microsoft SQL Server in the last couple of years is how it has expanded beyond the confines of Windows Server: it can now run on all three major desktop OSes, as well as sit in the cloud.

None of that expansion would have mattered much if downstream clients for SQL Server didn’t also expand their horizons to touch more platforms. And with Microsoft client tools for Linux and Mac, this is no longer an issue.

You can sneak PI Data through this mechanism

We can take advantage of Microsoft SQL Server and OSIsoft PI SQL by adding a linked server to SQL Server that forwards queries to PI Server. From there we can build SQL views, which open a portal directly into both the PI Data Archive and PI AF. You can also combine your own data stored in SQL with your real-time data. Downstream applications will see normal, everyday recordsets.

Here’s a screenshot where I’ve used this technique to pull data directly into Microsoft Excel for Mac. Not only is this data fresh, but I can refresh the query in my worksheet just like I would do in Excel for Windows. The connection from the worksheet is going straight to SQL Server.

[Screenshot: ExcelForMac.png]

Setup Steps

Set up PI SQL Data Access Server (RTQP Engine)

Make sure you’ve installed the PI SQL Data Access Server (RTQP Engine), which is in your PI Server 2018 (and later) install kit:

[Screenshot: RTQP Install.png]

Grab PI SQL Client

Next, you need to get the PI SQL Client kit and install it on the instance where your SQL Server is. You can grab it from the OSIsoft Technical Support Downloads page.

Configure the PI SQL Client provider in Microsoft SQL Server Enterprise Manager

Hop over to SQL-EM and modify the linked server provider to ensure these options are switched on:

[Screenshot: pisqlsetup.png]

Create a Linked Server connection

By right-clicking on the Linked Servers folder in SQL-EM, you can set up any number of linked server connections. Typically, you will want to set up one linked server connection per AF database. Here I’ve set up a connection to NuGreen:

[Screenshot: linkedserversetup.png]

Now the fun part: queries!

First let’s go with a basic type of query that finds all the pumps in the NuGreen database

Query

SELECT [ID]
   ,[Name]
   ,[Description]
   ,[Comment]
   ,[Revision]
   ,[HasChildren]
   ,[PrimaryPath]
   ,[Template]
   ,[Created]
   ,[CreatedBy]
   ,[Modified]
   ,[ModifiedBy]
   ,[TemplateID]
  FROM [PISERVER_TEST].[Master].[Element].[Element]
WHERE Name LIKE 'P%'
GO

Simple enough. This yields the following:

[Screenshot: pumpssql.png]

We can expose this as a view in a SQL database by wrapping the query in CREATE VIEW.

USE TEST
GO

CREATE VIEW REPORTING_PUMPS 
AS

SELECT [ID]
   ,[Name]
   ,[Description]
   ,[Comment]
   ,[Revision]
   ,[HasChildren]
   ,[PrimaryPath]
   ,[Template]
   ,[Created]
   ,[CreatedBy]
   ,[Modified]
   ,[ModifiedBy]
   ,[TemplateID]
  FROM [PISERVER_TEST].[Master].[Element].[Element]
WHERE Name LIKE 'P%'
GO

Now, when we select everything in the view we get:

[Screenshot: reportingpumpsview.png]

Perfect. Now that we have PI Server data, we can pull it across to any application that can communicate with Microsoft SQL Server.
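As a quick illustration of such a downstream client, here is a sketch in PowerShell (it assumes the SqlServer module is installed and uses a hypothetical server name, your-sql-server):

# Query the view like any ordinary SQL Server object
Invoke-Sqlcmd -ServerInstance "your-sql-server" -Database "TEST" -Query "SELECT Name, Template, Modified FROM REPORTING_PUMPS"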

Importing PI Server data into Excel for Mac

Now that we have PI Server data exposed to Microsoft SQL Server, it is fairly painless to connect it to downstream applications that can read recordsets from there. Let’s use this to connect to the view we set up.

[Screenshot: SQLServerODBCMac.png]

Microsoft Excel can import remote SQL datasets from the Data tab. From there you can select New Database Query and then SQL Server ODBC.

[Screenshot: odbcconnect.png]

Enter your SQL Server credentials and authentication method and click Connect. A “Microsoft Query” window will then appear where you can enter a SQL statement to produce a recordset. It follows the same syntax that you would use in SQL-EM.

From here I’ll select the contents of my view. Press Run to execute it on the server and inspect what comes back.

[Screenshot: querywindow.png]

Now you can press Return Data to deposit the results into your Excel worksheet. The connection to SQL Server is preserved in your worksheet when you save the Excel workbook. You can edit it and re-run the query via the Connections button, which is also on the Data ribbon.

[Screenshot: connections.png]

Data now refreshes on your terms in your Excel worksheet and your connection details are preserved between document openings.

Caveats

Read-only

Presently, the restrictions that existed with PI OLEDB Enterprise also apply to the latest PI SQL Client and PI SQL Data Access Server. You cannot post data into your asset database via this connection type.

If you attempt to write, expect this error:

Msg 7390, Level 16, State 2, Line 35 The requested operation could not be performed because OLE DB provider “PISQLClient” for linked server “PISERVER_TEST” does not support the required transaction interface.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In my previous articles, I demonstrated using the AF Server container in local Docker host deployments. The implication is that you have to manage the Docker host infrastructure yourself: the installation, patching, maintenance and upgrade work all has to be done manually. This is a significant barrier to getting up and running. As an analogy, imagine you visit another country on vacation and need to get from the airport to the hotel. Would it be better to buy a car (if they even sold one at the airport) and drive to the hotel, or just take a taxi (transport as a service)? The first option requires a much larger initial investment of time and money.

 

For quick demo, training or testing purposes, getting a Docker host infrastructure up and running requires effort (getting a machine with the right specifications, procuring an OS with Windows container capabilities, patching the OS so that you can use Docker, installing the right edition of Docker) and troubleshooting if things go south (errors during setup or services refusing to start). In the past, we had no other choice, so we just had to live with it. But in this modern era of cloud computing, using containers as a service can be a faster and cheaper alternative. Today, I will show you how to operate the AF Server container in the cloud using Azure Container Instances, a new Azure service delivering containers with great simplicity and speed, and the very first service of its kind in the cloud. It is a form of serverless containers.

 

Prerequisites

You will need an Azure subscription to follow along with this blog. You can sign up for a free trial account on the Azure website.

 

Azure CLI

Install the Azure CLI, a command line tool for managing Azure resources. It is a small install. Once done, we need to log in:

az login

 

If the CLI can determine your default browser and has access to open it, it will do so and direct you immediately to a sign in page.

Otherwise, open a browser page at https://aka.ms/devicelogin and follow the instructions on the command line to enter the authorization code.

 

Complete the sign-in via the browser, and the CLI will display your subscription details.

 

Now set your default subscription if you have multiple subscriptions. If your account has only one subscription, you can skip this step.

az account set -s <subscription name>

 

Create cloud container

We are now ready to create the AF Server cloud container. First create a resource group.

az group create --name resourcegrp -l southeastasia

You can change southeastasia to the location nearest to you. You can list the available locations with az account list-locations (remove the spaces in the display names when using them, e.g. "Southeast Asia" becomes southeastasia).

 

Create a file named af.yaml with the contents below. Replace <username> and <password> with the credentials for pulling the AF Server container image. There are some variables that you can configure:

 

afname: The name that you choose for your AF Server.

user: Username to authenticate to your AF Server.

pw: Password to authenticate to your AF Server.

 

af.yaml

apiVersion: '2018-06-01'
name: af
properties:
  containers:
  - name: af
    properties:
      environmentVariables:
      - name: afname
        value: eugeneaf
      - name: user
        value: eugene
      - name: pw
        secureValue: qwert123!
      image: elee3/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  imageRegistryCredentials:
  - server: index.docker.io
    username: <username>
    password: <password>
  ipAddress:
    dnsNameLabel: eleeaf
    ports:
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups

 

Then run this in Azure CLI to create the container.

az container create --resource-group resourcegrp --file af.yaml

The command will return in about 5 minutes.

 

You can check the state of the container.

az container show --resource-group resourcegrp -n af --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table

 

You can check the container logs.

az container logs --resource-group resourcegrp -n af

 

Explore with PSE

You now have an AF Server container in the cloud that can be accessed ANYWHERE as long as there is internet connectivity. You can connect to it with PSE using the FQDN. The credentials to use are those that you specified in af.yaml.
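Before launching PSE, you can verify that the endpoint is reachable from your machine (a sketch; the FQDN follows the ACI convention <dnsNameLabel>.<region>.azurecontainer.io, so with the af.yaml above it would be eleeaf.southeastasia.azurecontainer.io):

# Succeeds when the container's port 5457 is reachable over the internet
Test-NetConnection eleeaf.southeastasia.azurecontainer.io -Port 5457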

Notice that the name of the AF Server is the value of the afname environment variable that was passed in af.yaml.

 

Run commands in container

If you need to log in to the container to run tools such as afdiag, you can do so with

az container exec --resource-group resourcegrp -n af --exec-command "cmd.exe"

 

Clean up

When you are done with the container, you should destroy it so that you won't have to pay for it while it is not being used.

az container delete --resource-group resourcegrp -n af

You can check that the resource is deleted by listing your resources.

az resource list

 

Considerations

There are some tricks to hosting a container in the cloud to optimize its deployment time.

 

1. Base OS

The base OS should be one of the three most recent versions of Windows Server Core 2016; these are cached in Azure Container Instances to help with deployment time. If you want to experience the difference, try pulling elee3/afserver:18 in the create container command above. It takes about 13 minutes, more than twice the 5 minutes needed to pull elee3/afserver:18x. The reason is that the old image with the "18" tag is based on the public SQL Server image, which is 7 months old and doesn't have the latest OS version, so it cannot leverage the caching mechanism to improve performance. I have rebuilt the image with the "18x" tag based on my own SQL Server image with the latest OS version.

 

2. Image registry location

Hosting the image in Azure Container Registry in the same region that you use to deploy your container helps improve deployment time, as it shortens the network path that the image needs to travel, and hence the download time. Take note that ACR, unlike Docker Hub, is not free. In my tests, it took 4 minutes to deploy with ACR.

 

3. Image size

This one is obviously a no-brainer. That's why I am always looking to make my images smaller.

 

Another consideration is the number of containers per container group. In this example, we are creating a single-container group. The current limitation of Windows containers is that we can only create single-container groups. When this limitation is lifted in the future, there are scenarios where I see value in creating multi-container groups, such as spinning up sets of containers that are complementary to each other, e.g. a PI Data Archive container, an AF Server container and a PI Analysis Service container in a 3-container group. However, for scenarios such as spinning up two AF Server containers, we should still keep them in separate container groups so that they won't fight over the same port.

 

Limitations

Kerberos authentication is not supported in a cloud environment. We are using NTLM authentication in this example.

 

Conclusion

Deploying the AF Server container to Azure Container Instances might not be as fast as deploying it to a local Docker host, but it is cheap compared to the upfront time and cost of setting up your own Docker host. This makes it ideal for demo, training and testing scenarios. The containers are billed on a per-second basis, so you only pay for what you use. That is like paying only for your trip from the airport to the hotel, without having to pay anything extra.
