
# PI Developers Club


# Async Streaming with AF SDK

Posted by Eugene Lee Jun 10, 2019

Disclaimer:

Any of the code in this blog could contain bugs and shouldn’t be used in production without extensive testing.

You agree that if you use any of the provided code in your own production code that you accept all ownership, risks, liabilities, and responsibilities associated with the performance, support, and maintenance of the code.

# Introduction

Greetings everyone!

In this blog post, we shall discuss a concept called Async Streaming and how we can use it with AF SDK to help build more responsive, scalable and concurrent applications.

Async Streaming is a new feature in C# that will be natively supported in version 8 and .NET Core 3. Even though AF SDK is not supported in .NET Core, there are still libraries available out there that can bring the benefits of Async Streaming to AF SDK.

Async Streaming can be advantageous in many cases. For example:

1. In front-end applications, the main UI thread can stay responsive during a data access call.
2. In both client and server applications, the number of threads used to service a call can be reduced, as waiting threads won't be blocked and can be returned to the thread pool for re-use.
3. The effect of latency is mitigated because remote calls can be executed concurrently.
4. Data can be received and processed as it is retrieved, without blocking while we wait.

# What is Asynchronous programming?

Asynchronous programming is a means of parallel programming in which a unit of work runs separately from the main application thread and notifies the calling thread of its completion, failure or progress.

This is the first thing you will find if you do a Google search for that term. In layman's terms, it means we can build more responsive applications by not blocking the main thread.
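The AF SDK examples in this post are C#, but the idea is language-agnostic. Here is a minimal sketch in JavaScript (assuming a Node.js runtime; `fetchValue` is a hypothetical stand-in for a slow data access call):

```javascript
// fetchValue is a stand-in for a slow data access call; it resolves
// after 50 ms without ever blocking the calling thread.
function fetchValue() {
  return new Promise(resolve => setTimeout(() => resolve(42), 50));
}

async function main() {
  const pending = fetchValue();            // starts the work immediately
  console.log('main thread stays responsive here');
  const value = await pending;             // suspends this function, but does not block the thread
  console.log('result:', value);
}

main();
```

The key point is the line between starting the work and awaiting it: the caller is notified of completion rather than sitting blocked on the result.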

# AF SDK bulk calls

Let's examine the available data access bulk calls for PI Points that offer async behavior.

We notice that the methods with native async behavior tend to return one value per PI Point, while their counterparts that return multiple values per PI Point are not natively async.

I shall use the RecordedValues method as an example here. Below is a snippet of how we normally call this method.

We can make a wrapper called GetRecordedValues to return us a list of the recorded values. The return type is IEnumerable<AFValues>.


```csharp
private static IEnumerable<AFValues> GetRecordedValues(PIPointList pointList)
{
    PIPagingConfiguration config = new PIPagingConfiguration(PIPageType.TagCount, 1);
    var timeRange = new AFTimeRange("*-10y", "*");
    try
    {
        var listResults = pointList.RecordedValues(timeRange, AFBoundaryType.Inside, null, true, config);
        return listResults;
    }
    catch (OperationCanceledException)
    {
        // Errors that occur during bulk calls get trapped here
        // The actual error is stored on the PIPagingConfiguration object
        Console.WriteLine(config.Error.Message);
        return null;
    }
    catch (Exception otherEx)
    {
        // Errors that occur in an iterative fallback method get trapped here
        Console.WriteLine(otherEx.Message);
        return null;
    }
}
```

And then we can consume the wrapper using a foreach loop.

```csharp
var afvalslist = GetRecordedValues(pointList);
foreach (var pointResults in afvalslist)
{
    foreach (var item in pointResults)
    {
        Console.WriteLine("Timestamp: " + item.Timestamp + "\tValue: " + item.Value + "\tName: " + pointResults.PIPoint);
    }
    Console.WriteLine();
}
```

Now, this sample is generally fine in most cases. Its one drawback is that it has no async behavior: if the PI Data Archive is busy serving other users or applications, threads may get blocked and responsiveness and performance will suffer. What can we do to improve upon our code?

This is where Async Streaming can save the day!

# Async Streaming

Async Streaming makes it possible to await a stream of results. As I mentioned in the introduction above, there are libraries out there that integrate AF SDK with Async Streaming. For this blog post, I am going to use one of them, called AsyncEnumerator, found here:

https://www.nuget.org/packages/AsyncEnumerator/

The package can be easily installed from NuGet via

Install-Package AsyncEnumerator -Version 2.2.2

It introduces 2 new interfaces, IAsyncEnumerable and IAsyncEnumerator. Let's examine each of them to understand how they help us do Async Streaming.

```csharp
public interface IAsyncEnumerator
{
    object Current { get; }
    Task<bool> MoveNextAsync(CancellationToken cancellationToken = default);
}
```

The Current property is the same as IEnumerator's version: it gets the element in the collection at the current position of the enumerator. What's different is the MoveNextAsync method, which returns a Task. Thus, we can start the task and continue with our work while letting it run in the background. Unlike IEnumerator's MoveNext, MoveNextAsync does not block the thread.

```csharp
public interface IAsyncEnumerable
{
    Task<IAsyncEnumerator> GetAsyncEnumeratorAsync(CancellationToken cancellationToken = default);
}
```

GetAsyncEnumeratorAsync creates an enumerator that iterates through a collection asynchronously. It returns a Task that yields an IAsyncEnumerator when it completes.
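To make these two interfaces concrete, here is a rough JavaScript analogue (a sketch, not AF SDK code): JavaScript's async-iteration protocol is a close cousin, where `next()` plays the combined role of MoveNextAsync and Current by returning a Promise of `{ value, done }` instead of blocking. The `countTo` helper is purely illustrative.

```javascript
// countTo builds an async iterable by hand: next() returns a Promise,
// so the consumer can await each element without blocking a thread.
function countTo(limit) {
  let i = 0;
  return {
    [Symbol.asyncIterator]() { return this; },
    next() {
      i += 1;
      return Promise.resolve(
        i <= limit ? { value: i, done: false } : { value: undefined, done: true }
      );
    }
  };
}

async function collect(iterable) {
  const out = [];
  for await (const v of iterable) out.push(v);  // awaits each next() call
  return out;
}

collect(countTo(3)).then(vals => console.log(vals));
```

The `for await` loop here is doing what ForEachAsync does in the library: repeatedly awaiting "is there a next element?" and then reading it.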

# General usage patterns

We can use a general construct such as the one below to consume an async stream of values. Take note that this construct is specific to the library being used. C# 8 has a very similar syntax. This pattern of iteration will not block the thread which is what we desire for our application.

```csharp
await asyncEnumerable.ForEachAsync(async number =>
{
    await Console.Out.WriteLineAsync($"{number}");
});
```

Behind the scenes, the compiler translates the ForEachAsync statement into calls to the MoveNextAsync method, then accesses the Current property to get the element of interest.

# Cancellation

With this pattern, you can use a cancellation token to stop the streaming. This is useful for implementing timeouts or for letting the user cancel the operation. If you look at the parameters of MoveNextAsync, you will notice that it accepts a cancellation token which you can use to notify the stream to stop.

```csharp
public virtual Task<bool> MoveNextAsync(CancellationToken cancellationToken = default)

public static async Task ForEachAsync(this IAsyncEnumerable enumerable, Action<object> action, CancellationToken cancellationToken = default)
```

The ForEachAsync extension method passes this token to MoveNextAsync, where we can then retrieve it with the yield.CancellationToken property to check for cancellation. For example:

```csharp
token = yield.CancellationToken;
if (token.IsCancellationRequested)
{
    await Console.Out.WriteLineAsync("cancelling");
    yield.Break();
}
```

# Async Streaming + AF SDK = GetStreamingRecordedValuesAsync

Now that we know what Async Streaming is about, let us improve upon the GetRecordedValues wrapper that was introduced in the previous section. We will leverage the general usage patterns and also include cancellation in our wrapper. We will call this wrapper GetStreamingRecordedValuesAsync. We will retrieve pages of results from the PI Data Archive one tag at a time, as defined by the PIPagingConfiguration settings. The return type is IAsyncEnumerable<AFValue>.

```csharp
private static IAsyncEnumerable<AFValue> GetStreamingRecordedValuesAsync(PIPointList pointList)
{
    PIPagingConfiguration config = new PIPagingConfiguration(PIPageType.TagCount, 1);
    var timeRange = new AFTimeRange("*-10y", "*");
    return new AsyncEnumerable<AFValue>(async yield =>
    {
        try
        {
            await Task.Run(async () =>
            {
                var listResults = pointList.RecordedValues(timeRange, AFBoundaryType.Inside, null, true, config);
                CancellationToken token;
                foreach (var pointResults in listResults)
                {
                    token = yield.CancellationToken;
                    if (token.IsCancellationRequested)
                    {
                        await Console.Out.WriteLineAsync("cancelling");
                        yield.Break();
                    }
                    foreach (var result in pointResults)
                    {
                        await yield.ReturnAsync(result);
                    }
                }
            });
        }
        catch (OperationCanceledException)
        {
            // Errors that occur during bulk calls get trapped here
            // The actual error is stored on the PIPagingConfiguration object
            await Console.Out.WriteLineAsync(config.Error.Message);
            yield.Break();
        }
        catch (Exception otherEx)
        {
            // Errors that occur in an iterative fallback method get trapped here
            await Console.Out.WriteLineAsync(otherEx.Message);
            yield.Break();
        }
    });
}
```

With this sample, we will be streaming the recorded values for each PI Point in the list. We can consume the wrapper using the ForEachAsync loop and pass it a cancellation token.

```csharp
var cts = new CancellationTokenSource();
var afvalslist = GetStreamingRecordedValuesAsync(pointList);
await afvalslist.ForEachAsync(async item =>
{
    await Console.Out.WriteLineAsync("Timestamp: " + item.Timestamp + "\tValue: " + item.Value.ToString().PadRight(20) + "Name: " + item.PIPoint);
}, cts.Token);
```

This method of streaming ensures the calling thread doesn't get blocked and can continue with other work.
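The cancellation pattern can also be sketched language-agnostically. Below is a hedged JavaScript sketch: the `token` object is hypothetical, standing in for CancellationToken / yield.CancellationToken, and the cooperative check-per-iteration is the same shape as the wrapper's.

```javascript
// An infinite async stream, like an unbounded page-by-page data read.
async function* numbers() {
  let i = 0;
  while (true) yield ++i;
}

// Consume the stream, checking a cooperative cancellation token on
// every iteration before processing the next element.
async function forEachUntilCancelled(stream, token, action) {
  for await (const v of stream) {
    if (token.cancelled) break;   // stop streaming once cancellation is requested
    action(v);
  }
}

const token = { cancelled: false };
const seen = [];
forEachUntilCancelled(numbers(), token, v => {
  seen.push(v);
  if (v === 5) token.cancelled = true;  // simulate the user cancelling
}).then(() => console.log(seen)); // prints [ 1, 2, 3, 4, 5 ]
```

Because cancellation is cooperative, the stream stops at the next iteration boundary rather than mid-element, which is exactly how the token check inside the wrapper behaves.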
To refresh your memory, a PIPointList can contain points from multiple PI Data Archives. For a global enterprise, your PI Data Archives could be scattered around the world. What if your application is hosted in the USA but you need data from a server in Singapore? Latency will definitely affect performance. You can't beat the laws of physics, but at least you are free to do other work while waiting. That's what productivity and concurrency are about!

# Point of caution

With Async Streaming on the client side, we can conveniently fire and forget calls. However, keep in mind that the server still needs to process each data request. If every application dumps its data calls asynchronously onto the server, there will be negative effects on the server. Therefore, it is up to the user to implement some kind of throttling.

# Conclusion

In this blog post, we have looked at what Async Streaming is and how it can help you make responsive, scalable and concurrent applications. In AF SDK, some bulk data calls do not have async methods. However, we can still use Async Streaming to improve the performance of applications that use these methods.

I found a feature request here to expose asynchronous interfaces for bulk calls on AF Attributes. You can vote for it if you are interested.

https://pisquare.osisoft.com/ideas/5743-af-sdk-async-data-methods-for-multiple-afattributes

I hope you have learnt something useful from this article. Let me know if you have any comments!

# Search for PI Point by Point ID with PI Web API

Posted by Eugene Lee May 15, 2019

# Introduction

Normally, when we search for PI Points in PI Web API, we come equipped with the point's path. In this case, we can simply use the GetByPath action of the Point Controller.
But what if one day, your custom web application is given a list of Point IDs and needs to find the name of the PI Point corresponding to each ID? If you are developing your application on the .NET Framework, you have the option of using the PIPoint.FindPIPoint method available in AF SDK. But since you are developing a web application, you do not have that luxury. In this blog post, I will address how to search for a PI Point by its Point ID via PI Web API and WebID 2.0. An example of such a request can be found here:

https://pisquare.osisoft.com/thread/39989-find-pi-point-by-id

# Concepts

WebID version 2.0, introduced in PI Web API 2017 R2, provides different types of WebIDs (see: WebID Type). Specifying the WebID type gives you options for reducing WebID sizes (for URL length limitations), for identifying ambiguous paths/names of AF Event Frames and AF Notifications, and for accommodating path and name changes. I will be using the IDOnly type today to achieve my goal of searching via the Point ID. The language of choice will be JavaScript.

Let's take a look at the composition of the IDOnly type of a PI Point WebID. I chose a sample that is available from our public PI Web API endpoint:

I1DPW6Wlk0_Utku9vWTvxg45oA0egAAA

Let's break it down.

| Name | Value |
| --- | --- |
| I | Web ID type indicator ("IDOnly" in this case) |
| 1 | Web ID version number |
| DP | Web ID marker for PI Point objects |
| W6Wlk0_Utku9vWTvxg45oA | URL-safe Base64 encoded string of the PI Data Archive ID |
| 0egAAA | URL-safe Base64 encoded string of the PI Point ID |

With this knowledge, we can see that we can use the Point ID directly by encoding it into the WebID and using it with the Point Controller Get action.

# Function to generate IDOnly type of WebID

Disclaimer: This code could contain bugs and shouldn’t be used in production without extensive testing.
You agree that if you use any of the provided code in your own production code that you accept all ownership, risks, liabilities, and responsibilities associated with the performance, support, and maintenance of the code.

```javascript
function NewIDOnlyWebID(dataType, guid, oid, ownerType, ownerguid) {
    //get the marker for the datatype
    var marker = getOwnerMarker(dataType, ownerType)
    //encode the server id to a base64 string
    var serverwebid = encodeguid(guid)
    //encode the owner id if the datatype has an owner
    var typeswithowner = ["AFAnalysisRule", "AFAttribute", "AFAttributeTemplate", "AFEnumerationValue", "AFTimeRule"];
    if (typeswithowner.includes(dataType)) {
        if (ownerguid) {
            serverwebid += encodeguid(ownerguid)
        } else {
            throw 'please provide a valid owner guid'
        }
    }
    //return webid if datatype is a server
    if (dataType == "PIServer" || dataType == "PISystem") {
        return 'I1' + marker + (serverwebid).replace(/=/g, '').replace(/\//g, '_').replace(/\+/g, '-')
    }
    //return webid if datatype is a pi point
    if (dataType == "PIPoint") {
        if (!oid) throw 'provide a valid PI Point ID'
        var arr = new Uint8Array(new Uint32Array([oid]).buffer);
        var pointwebid = btoa(String.fromCharCode.apply(null, arr))
        return 'I1' + marker + (serverwebid + pointwebid).replace(/=/g, '').replace(/\//g, '_').replace(/\+/g, '-')
    }
    //return webid for af objects
    var afwebid = encodeguid(oid)
    return 'I1' + marker + (serverwebid + afwebid).replace(/=/g, '').replace(/\//g, '_').replace(/\+/g, '-')
}

function encodeguid(guid) {
    var s = guid.replace(/[^0-9a-f]/ig, '').toLowerCase();
    //check for invalid guid
    if (s.length != 32) throw 'invalid guid';
    //arrange bytes as PI Web API uses Microsoft style GUID
    s = s.slice(6, 8) + s.slice(4, 6) + s.slice(2, 4) + s.slice(0, 2) + s.slice(10, 12) + s.slice(8, 10) + s.slice(14, 16) + s.slice(12, 14) + s.slice(16);
    //base64 encode the byte array
    var t = '';
    for (var n = 0; n < s.length; n += 2) {
        t += String.fromCharCode(parseInt(s.substr(n, 2), 16));
    }
    return btoa(t)
}

function getOwnerMarker(dataType, ownerType) {
    var marker
    switch (dataType) {
        case 'AFAnalysis':
            marker = 'XS'
            break;
        case 'AFAnalysisCategory':
            marker = 'XC'
            break;
        case 'AFAnalysisTemplate':
            marker = 'XT'
            break;
        case 'AFAnalysisRule':
            marker = 'XR'
            switch (ownerType) {
                case 'AFAnalysis':
                    marker += "X"
                    break
                case 'AFAnalysisTemplate':
                    marker += "T"
                    break
                default:
                    throw "please provide owner type"
            }
            break;
        case 'AFAnalysisRulePlugin':
            marker = 'XP'
            break;
        case 'AFAttribute':
            marker = 'Ab'
            switch (ownerType) {
                case 'AFElement':
                    marker += "E"
                    break
                case 'AFEventFrame':
                    marker += "F"
                    break
                case 'AFNotification':
                    marker += "N"
                    break
                default:
                    throw "please provide owner type"
            }
            break;
        case 'AFAttributeCategory':
            marker = 'AC'
            break;
        case 'AFAttributeTemplate':
            marker = 'ATE'
            break;
        case 'AFDatabase':
            marker = 'RD'
            break;
        case 'AFElement':
            marker = 'Em'
            break;
        case 'AFElementCategory':
            marker = 'EC'
            break;
        case 'AFElementTemplate':
            marker = 'ET'
            break;
        case 'AFEnumerationSet':
            marker = 'MS'
            switch (ownerType) {
                case 'PISystem':
                    marker += "R"
                    break
                case 'PIServer':
                    marker += "D"
                    break
                default:
                    throw "please provide owner type"
            }
            break;
        case 'AFEnumerationValue':
            marker = 'MV'
            switch (ownerType) {
                case 'PISystem':
                    marker += "R"
                    break
                case 'PIServer':
                    marker += "D"
                    break
                default:
                    throw "please provide owner type"
            }
            break;
        case 'AFEventFrame':
            marker = 'Fm'
            break;
        case 'AFNotification':
            marker = 'Nf'
            break;
        case 'AFNotificationTemplate':
            marker = 'NT'
            break;
        case 'AFNotificationContactTemplate':
            marker = 'NC'
            break;
        case 'AFTimeRule':
            marker = 'TR'
            switch (ownerType) {
                case 'AFAnalysis':
                    marker += "X"
                    break
                case 'AFAnalysisTemplate':
                    marker += "T"
                    break
                default:
                    throw "please provide owner type"
            }
            break;
        case 'AFTimeRulePlugin':
            marker = 'TP'
            break;
        case 'AFSecurityIdentity':
            marker = 'SI'
            break;
        case 'AFSecurityMapping':
            marker = 'SM'
            break;
        case 'AFTable':
            marker = 'Bl'
            break;
        case 'AFTableCategory':
            marker = 'BC'
            break;
        case 'PIPoint':
            marker = 'DP'
            break;
        case 'PIServer':
            marker = 'DS'
            break;
        case 'PISystem':
            marker = 'RS'
            break;
        case 'UOM':
            marker = 'Ut'
            break;
        case 'UOMClass':
            marker = 'UC'
            break;
        default:
            throw "please provide a suitable datatype"
    }
    return marker
}
```

The code above provides a generalized function that can help you generate a PI Point WebID. The function takes 5 parameters.

| Name | Value |
| --- | --- |
| dataType | the type of object |
| guid | the guid of the server that the object belongs to |
| oid | the id of the object itself |
| ownerType | the type of the owner object (only required for objects with owner types) |
| ownerguid | the guid of the owner object (only required for objects with owner types) |

Example usage for PI Point objects:

```javascript
console.log(NewIDOnlyWebID('PIPoint', '93a5a55b-d44f-4bb6-bdbd-64efc60e39a0', 59601))
```

The result will be the PI Point WebID

I1DPW6Wlk0_Utku9vWTvxg45oA0egAAA

Example usage for an AF Attribute object belonging to an AF Element:

```javascript
console.log(NewIDOnlyWebID('AFAttribute', '0b101021-e3bc-433d-9f06-a6a2db5f0803', '4f46d670-487e-5aa1-38b9-cd626ea43bc6', 'AFElement', 'cd24b9af-68d5-11e8-80db-000d3a10c7ce'))
```

The result will be the AF Attribute WebID

I1AbEIRAQC7zjPUOfBqai218IAwr7kkzdVo6BGA2wANOhDHzgcNZGT35IoVo4uc1ibqQ7xg

# How to get the object guids?

For the PI Data Archive, we can make a call to the DataServer Controller, utilize its GetByPath action and select only the Id field. An example is shown below. Similarly for the AF Server, we can make a call to the AssetServer Controller, utilize its GetByPath action and select only the Id field. I will leave this as an exercise for you to try out. For AF objects, we can find the guid at the lower right corner of PI System Explorer.

# Solve the initial problem

At the start of this blog, our challenge was to find the name of the PI Point corresponding to a given Point ID. Now, with the PI Point WebID that was generated, we can easily do so with the Get action of the Point Controller.
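As a quick sanity check of the encoding described above, the Point ID fragment at the tail of an IDOnly PI Point WebID can be round-tripped in a few lines of Node.js. This is a sketch using Node's Buffer API instead of the browser's btoa, so it runs outside a browser; the function names are my own.

```javascript
// Encode a PI Point ID (a uint32, stored little-endian) as the URL-safe
// Base64 fragment that terminates an IDOnly PI Point WebID, and decode it back.
function encodePointId(pointId) {
  const bytes = Buffer.alloc(4);
  bytes.writeUInt32LE(pointId, 0);        // object IDs are serialized little-endian
  return bytes.toString('base64')
    .replace(/=/g, '')                    // strip Base64 padding
    .replace(/\//g, '_')                  // URL-safe substitutions
    .replace(/\+/g, '-');
}

function decodePointId(fragment) {
  const b64 = fragment.replace(/_/g, '/').replace(/-/g, '+');
  return Buffer.from(b64, 'base64').readUInt32LE(0);
}

console.log(encodePointId(59601));    // prints 0egAAA -- the tail of the sample WebID
console.log(decodePointId('0egAAA')); // prints 59601
```

Decoding is just the encoding run in reverse, which is handy when a WebID comes back from PI Web API and you want the numeric Point ID out of it.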
# Conclusion

We can see that WebID 2.0 has enhanced the flexibility of PI Web API and allows us to do things that were previously thought impossible. The possibilities are endless and only limited by your imagination. I hope you have enjoyed reading this blog and learnt something useful from it. Please drop any comments below!

# See also

pi-web-api-web-id-20-specification-tables
using-web-id-20-to-optimize-your-applications

# PI World 2019 San Francisco Tech Talk: Effortlessly deploying a PI System in Azure or AWS

Posted by Eugene Lee Apr 5, 2019

Greetings fellow PI Geeks!

It is about to be that time of the year again next week, where we come together to learn from each other and share the innovative and exciting things that are made possible with the wonderful PI System. Cloud computing will be one of the major themes in this year's event. At OSIsoft, our Cloud Vision is to be Ready Now, Ready for Tomorrow. To fulfill this vision today, traditional on-premises PI System workloads can be migrated to an IaaS model in the cloud. In the future, a PaaS model migration will also be possible with OSIsoft Cloud Services.

For those of you who are looking to deploy a PI System effortlessly in the cloud in just a matter of minutes, there is a Tech Talk that is perfect for you! Unlike a normal talk, this Tech Talk will go more in-depth into the topic, with live demos, and will last for 90 minutes as opposed to the usual 45 minutes. Below are the details of the Tech Talk that will be presented by me and Valentin.

Effortlessly deploying a PI System in Azure or AWS
Time: Day 2, 10 April 2019, Wednesday, 2:30 PM - 4:00 PM
Location: Parc 55, Mission II, Level 4

An increasing number of organizations are migrating their IT workloads to cloud platforms as part of their digital transformation and cloud-first initiatives. New techniques have emerged to ease some of these challenges by automating deployments of network, storage and compute.
Join us in this Tech Talk to learn about using automation and infrastructure-as-code tools to deploy a PI System simply on Amazon Web Services or Microsoft Azure. AWS CloudFormation templates and ARM templates are used for hosting the PI System on these platforms, helping to automate Dev/Test/Prod deployments of the PI System.

Hope to see you next Wednesday and have a great weekend!

# Spin up PI to PI Interface container

Posted by Eugene Lee Nov 19, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles: Containerization Hub

# Introduction

Until now, when installing PI interfaces on a separate node from the PI Data Archive, we needed to provision a separate physical or virtual machine just for the interface itself. Don't you think that is a bit of a waste of resources? To combat this, we can containerize interfaces so that they become more portable, which allows them to be scheduled anywhere inside your computing cluster. Their batch file configuration also makes them good candidates for lifting and shifting into containers. We will start off by introducing the PI to PI interface container, which is the first ever interface container! It will have buffering capabilities (via PI Buffer Subsystem) and its performance counters will also be active.

# Set up servers

First, let me spin up 2 PI Data Archive containers to act as the source and destination servers. Check out this link on how to build the PI Data Archive container: PI Data Archive container health check

```
docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h pi1 --name pi1 -e trust=%computername% pidax:18
```

For the source code to build the PI Data Archive container and also the PI to PI interface container, please send an email to technologyenablement@osisoft.com. This is a short-term measure to obtain the source code while we are revising our public code sharing policies.
We shall be using pi1 as our source and pi as our destination. Let's open up PI SMT to add the trust for the PI to PI Interface container. Do this on both PI Data Archives. The IP address and NetMask are obtained by running ipconfig on your container host. The reason I set the trusts this way is that the containers are guaranteed to spawn within this subnet, since they are attached to the default NAT network. Therefore, the 2 PI Data Archive containers and the PI to PI Interface container are all in this subnet. Container to container connections are bridged through an internal Hyper-V switch.

On pi, create a PI Point, giving it any name you want (my PI Point shall be named 'cdtclone'). Configure the other attributes of the point as follows:

Point Source: pitopi
Exception: off
Compression: off
Location1: 1
Location4: 1
Instrument Tag: cdt158

Leave the other attributes as default. This point will be receiving data from cdt158 on the source server, as specified in the Instrument Tag attribute.

# Set up interface

Now you are all set to proceed to the next step, which is to create the PI to PI Interface container! You can easily do so with just one command. Remember to log in to Docker with the usual credentials.

```
docker run -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi
```

The environment variables that you can configure include:

host: destination server
src: source server
ps: point source

These are all the parameters that are supported for now. You should be able to see data appearing in the cdtclone tag on the destination server. Don't you think it was very quick and easy to get started?

# Buffer

As I mentioned before, the container also has buffering capabilities. We shall consider 2 scenarios.

1. The destination server is stopped. This has the same effect as losing network connectivity to the destination server.
2. The PI to PI interface container is destroyed.

# Scenario 1

Stop pi.
```
docker stop pi
```

Wait for a few minutes and run

```
docker exec p2p cmd /c pibufss -cfg
```

You should see the following output, which indicates that the buffer is working and actively queuing data in anticipation of the destination server coming back up.

```
*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 18:39:23, regid: 3
lastsend: 2-Nov-18 18:58:59
total events sent: 47, snapshot posts: 42, queued events: 8
```

When we start up pi again

```
docker start pi
```

wait a few minutes before running pibufss -cfg again. You should now see

```
*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: SendingData, successful connections: 2
PI identities: piadmins | PIWorld, auth type: SSPI
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 19:07:24, regid: 3
total events sent: 64, snapshot posts: 45, queued events: 0
```

The buffer has re-registered with the server and flushed the queued events to the server. You can check the archive editor to make sure the events are there.

# Scenario 2

Stop pi so that events will start to buffer.

```
docker stop pi
```

Check that events are getting buffered.
```
*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 13-Nov-18 15:25:07, lastreg: 13-Nov-18 15:25:08, regid: 3
lastsend: 13-Nov-18 17:54:14
total events sent: 8901, snapshot posts: 2765, queued events: 530
```

Now, while pi is still stopped, stop p2p.

```
docker stop p2p
```

Check the volume name that was created by Docker.

```
docker inspect p2p -f "{{.Mounts}}"
```

The output is shown below. Save the volume name (the long hash) somewhere.

```
[{volume 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17 C:\ProgramData\docker\volumes\76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17\_data c:\programdata\osisoft\buffering local true }]
```

Now you can destroy p2p and start pi.

```
docker rm p2p
docker start pi
```

Use the archive editor to verify that data has stopped flowing. The last event was at 5:54:13 PM. We want to recover the data in the buffer queue files, so we create a new PI to PI interface container pointing to the saved volume name.

```
docker run -v 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi
```

And voilà! The events in the buffer queues have all been flushed into pi. To be sure that the recovered events are not due to history recovery by the PI to PI interface container, I have disabled it. This demonstrates that the events in the buffer queue files were persisted across container destruction and creation, since the data was persisted outside the container.

# Performance counters

The container also has performance counters activated. Let's try to get the value of Device Status. Run the following command in the container.
```
Get-Counter '\pitopi(_Total)\Device Status'
```

Output:

```
Timestamp                 CounterSamples
---------                 --------------
11/2/2018 7:24:14 PM      \\d13072c5ff8b\pitopi(_total)\device status : 0
```

A Device Status of 0 means healthy. What if we stopped the source server?

```
docker stop pi1
```

Now run the Get-Counter command again and we will see

```
Timestamp                 CounterSamples
---------                 --------------
11/2/2018 7:29:29 PM      \\d13072c5ff8b\pitopi(_total)\device status : 95
```

A Device Status of 95 means a network communication error to the source PI server. These performance counters are perfect for writing health checks against the interface container.

# Conclusion

We have seen in this blog how to use the PI to PI Interface container to transfer data between two PI Data Archive containers. As you know, OSIsoft has hundreds of interfaces. Being able to containerize one means the chance of successfully containerizing the others is very high. The example in this blog will serve as a proof of concept.

# Containers and Swarm Part 1 (Setup, Service)

Posted by Eugene Lee Oct 26, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles: Containerization Hub

# Introduction

We have learnt much about using containers in previous blog posts. Until now, we have been working with standalone containers. This is great for familiarizing yourself with the concept of containers in general. Today, we shall take the next step in our container journey, which is to learn how to orchestrate these containers. There are several container orchestration platforms on the market today, such as Docker Swarm, Kubernetes, Service Fabric and Marathon. I will be using Docker Swarm today to illustrate the concept of orchestration, since it is directly integrated with the Docker Engine, making it the quickest and easiest to set up.

# Motivation

Before we even start on the orchestration journey, it is important that we understand the WHY behind it.
For someone who is new to all this, the objective might not be clear. Let me illustrate with two analogies: one that a layman can understand, and another that a PI admin can relate to.

# First analogy

Suppose your hobby is baking cakes (containers). You have been hard at work in your kitchen trying to formulate the ultimate recipe (image) for the best chiffon cake in the world. One day, you managed to bake a cake with the perfect taste and texture after countless rounds of trial and error, varying the temperature of the oven, the duration in the oven, the amount of each ingredient, and so on. Your entrepreneurial friend advises you to open a small shop selling this cake (dealing with standalone containers on a single node). You decided to heed your friend's advice and did so. Over the years, business boomed, and you want to expand your small shop into a chain of outlets (a cluster of nodes). However, you have only one pair of hands, and it is not possible for you to bake all the cakes that you are going to sell. How are you going to scale beyond a small shop?

Luckily, your same entrepreneurial friend found a vendor called Docker Inc who can manufacture a system of machines (orchestration platform), where you install one machine in each of your outlet stores. These machines can communicate with each other, and they can take your recipe and bake cakes that taste exactly the same as the ones you baked yourself. Furthermore, you can tell the machines how many cakes to bake each hour to address different levels of demand throughout the day. The machines even have a QA tester at the end of the process to check whether each cake meets its quality criteria, automatically discarding cakes that fail and replacing them with new ones. You are so impressed that you decide to buy this system and start expanding your cake empire.

# Second analogy

Suppose you are in charge of the PI System at your company. Your boss has given you a cluster of 10 nodes.
He would like you to make an AF Server service spanning this cluster that has the following capabilities:

1. able to adapt to different demands to save resources
2. self-healing to maximize uptime
3. rolling system upgrades to minimize downtime
4. easy to upgrade to newer versions for bug fixes and feature enhancements
5. able to prepare for planned outages needed for maintenance
6. automated roll out of cluster wide configuration changes
7. manage secrets such as certificates and passwords for maximum security

How are you going to fulfill his crazy demands? This is where a container orchestration platform might help.

Terminology

Now let us get some terminology clear.

Swarm: A swarm consists of multiple Docker hosts which run in swarm mode and act as managers and workers. A given Docker host can be a manager, a worker, or perform both roles.

Manager: The manager delivers work (in the form of tasks) to workers, and it also manages the state of the swarm to which it belongs. Managers can also run the same services as workers, but you can also make them run only manager-related services.

Worker: Workers run tasks distributed by the swarm manager. Each worker runs an agent that reports back to the manager about the state of the tasks assigned to it, so the manager can keep track of the work running in the swarm.

Service: A service defines which container images the swarm should use and which commands the swarm will run in each container. For example, it is where you define configuration parameters for an AF Server service running in your swarm.

Task: A task is a running container which is part of a swarm service and managed by a swarm manager. It is the atomic scheduling unit of a swarm.

Stack: A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

There are two types of service.
Replicated: The swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state.

Global: The swarm manager runs one task for the service on every available node in the cluster.

Prerequisites

To follow along with this blog, you will need two Windows Server 2016 Docker hosts. Check out how to install Docker in the Containerization Hub link above.

Set up

Select one of the nodes (we will call it "Manager") and run

docker swarm init

This will output the following

Swarm initialized: current node (vgppy0347mggrbam05773pz55) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

Now select the other node (we will call it "Worker") and run the command that was output by the previous command.

docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

Go back to Manager and run

docker node ls

to list the nodes that are participating in the swarm. Note that this command only works on manager nodes.

Service

Now that the nodes have been provisioned, we can start to create some services. For this blog, I will be using a new AF Server container image that I have recently developed, tagged 18s. If you have been following my series of blogs, you might be curious about the difference between the tag 18x (last seen here) and 18s. With 18s, the data is now separated from the AF Server application service. What this means is that the PIFD database mdf, ndf and ldf files are now mounted in a separate data volume. The result is that on killing the AF Server container, the data won't be lost, and I can easily recreate an AF Server container pointing to this data volume to keep the previous state. This will be useful in future blogs on container fail-over with data persistence.
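As an aside, the two service types described above (replicated and global) correspond to different `docker service create` flags. The sketch below only assembles the command strings rather than running them; the service name, replica count and image tag are illustrative, borrowed from this blog's examples:

```shell
# Assemble a 'docker service create' command line for either service type.
# Usage: build_service_cmd NAME MODE IMAGE, where MODE is "global" or a replica count.
build_service_cmd() {
  name="$1"; mode="$2"; image="$3"
  if [ "$mode" = "global" ]; then
    # Global: one task on every available node in the cluster
    echo "docker service create --name=$name --mode global $image"
  else
    # Replicated: a fixed number of replica tasks spread across the nodes
    echo "docker service create --name=$name --replicas $mode $image"
  fi
}

build_service_cmd af18 3 elee3/afserver:18s
build_service_cmd af18 global elee3/afserver:18s
```

Echoing the command instead of executing it keeps the sketch side-effect free; in practice you would run the resulting string on a manager node.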
You will need to login with the usual docker credentials that I have been using in my blogs. To create the service, run

docker service create --name=af18 --detach=false --with-registry-auth elee3/afserver:18s

Note: If --detach=false is not specified, tasks will be updated in the background. If it is specified, then the command will wait for the service to converge before exiting. I specify it so that I can get some visual output.

Output

goa9cljsek42krqgvjtwdd2nd
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Waiting 6 seconds to verify that tasks are stable...

Now we can list the service to find out which node is hosting the tasks of that service.

docker service ps af18

Once you know which node is hosting the task, go to that node and run

docker ps -f "name=af18."

Output

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e3d26d712f9 elee3/afserver:18s "powershell -Comma..." About a minute ago Up About a minute (healthy) af18.1.w3ui9tvkoparwjogeg26dtfz

The output will show the list of containers that the swarm service has started for you. Let us inspect the network that the container belongs to by inspecting with the container ID.

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks}}"

Output

map[nat:0xc0420c0180]

The output indicates that the container is attached to the nat network by default if you do not explicitly specify a network to attach to. This means that your AF Server is accessible from within the same container host. You can get the IP address of the container with

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks.nat.IPAddress}}"

Then you can connect with PSE using the IP address. It is also possible to connect with the container ID, as the container ID is the hostname by default. Now that we have a service up and running, let us take a look at how to change some configurations of the service.
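Going back to the IP address step for a moment: the address printed by the inspect command can also be extracted in a script. A minimal sketch, where the JSON snippet is a hypothetical stand-in for real `docker inspect` output:

```shell
# Extract the first IPAddress field from inspect-style JSON.
extract_ip() {
  echo "$1" | sed -n 's/.*"IPAddress":"\([0-9.]*\)".*/\1/p'
}

# sample_json stands in for `docker inspect <id> -f "{{json .NetworkSettings.Networks}}"`
sample_json='{"nat":{"IPAddress":"172.26.192.10"}}'
extract_ip "$sample_json"   # 172.26.192.10
```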
In the previous image, the name of the AF Server derives from the container ID, which is a random string. I would like to make it have the name 'af18'. I can do so with

docker service update --hostname af18 --detach=false af18

Once you execute that, Swarm will stop the current task that is running and reschedule it with the new configuration. To see this, run

docker service ps af18

Output

ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
llueiqx8ke86 af18.1 elee3/afserver:18s worker Running Running 8 minutes ago
w3ui9tvkopar \_ af18.1 elee3/afserver:18s master Shutdown Shutdown 9 minutes ago

During rescheduling, it is entirely possible for Swarm to shift the container to another node. In my case, it shifted from master to worker. It is possible to ensure that the container will only be rescheduled on a specific node by using a placement constraint.

docker service update --constraint-add node.hostname==master --detach=false af18

We can check the service state to confirm.

docker service ps af18

Output

ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
r70qwri3s435 af18.1 elee3/afserver:18s master Running Starting 9 seconds ago
llueiqx8ke86 \_ af18.1 elee3/afserver:18s worker Shutdown Shutdown 9 seconds ago
w3ui9tvkopar \_ af18.1 elee3/afserver:18s master Shutdown Shutdown 2 hours ago

Now, the service will only get scheduled on the master node. You will now be able to connect with PSE on the master node using the hostname 'af18'. When you are done with the service, you can remove it.

docker service rm af18

Conclusion

In this article, we have learnt how to set up a 2 node Swarm cluster consisting of one master and one worker. We scheduled an AF Server swarm service on the cluster and updated its configuration without needing to recreate the service. The Swarm takes care of scheduling the service's tasks on the appropriate node. We do not need to do it manually ourselves.
We have also seen how to control the location of the tasks by adding a placement constraint. In the next part of the Swarm series, we will take a look at Secrets and Configs management within Swarm. Stay tuned for more!

# Container Kerberos Double Hop

Posted by Eugene Lee Sep 17, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Link to other containerization articles

Containerization Hub

Introduction

In this blog post about security and containers, we will be discussing how to implement a Kerberos Double Hop from the client machine to the PI Web API container and finally to the PI Data Archive container. Previously, when we used the PI Web API container located here Spin up PI Web API container (AF Server included), we used local accounts for authentication to the backend server such as the AF Server or the PI Data Archive. The limitation is that without Kerberos Delegation, we will not be able to have per user security, which means that all users of PI Web API will have the same permissions, i.e. an operator can read the sensitive tags that were meant for the upper management and vice versa. Obviously, this is not ideal. What we want is more granularity in assigning permissions to the right people so that they can only access the tags that they are supposed to read.

Prerequisites

You will need to have 2 GMSA accounts. You can request such accounts from your IT department. They can refer to this blog post if they do not know how to create a GMSA: Spin up AF Server container (Kerberos enabled). Also be sure that one of them has the TrustedforDelegation property set to True. This can be done with the Set-ADServiceAccount cmdlet. You will also need to build the PI Data Archive container by following the instructions in the Build the image section here: PI Data Archive container health check. For the PI Web API container, you will need to pull it from the repository by using this command.
docker pull elee3/afserver:webapi18

Demo without GMSA

First, let us demonstrate what authentication looks like when we run containers without GMSA. Let's have a look at the various authentication modes that PI Web API offers.

1. Anonymous
2. Basic
3. Kerberos
4. Bearer

For a more detailed explanation about each mode, please refer to this page. We will only be going through the first 3 modes as Bearer requires an external identity provider, which is out of the scope of this blog. Create the PI Data Archive container and the PI Web API container. We will also create a local user called 'enduser' in the two containers.

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h wa --name wa elee3/afserver:webapi18
docker exec wa net user enduser qwert123! /add
docker exec pi net user enduser qwert123! /add

Anonymous

Now let's open up PSE and connect to the hostname "wa". If prompted for credentials, use

Username: afadmin
Password: qwert123!

Change the authentication to Anonymous and check in the changes. Restart the PI Web API service. Verify that the setting has taken effect by using Internet Explorer to browse to /system/configuration. There will be no need for any credentials. We can now try to connect to the PI Data Archive container with this URL.

https://wa/piwebapi/dataservers?path=\\pi

Check the PI Data Archive logs to see how PI Web API is authenticating.

Result: With Anonymous authentication, PI Web API authenticates with its service account using NTLM.

Basic

Now use PSE to change the authentication to Basic and check in. Restart the PI Web API service. Close Internet Explorer and reopen it to point to /system/configuration to check the authentication method. This time, there will be a prompt for credentials. Enter

Username: enduser
Password: qwert123!

Try to connect to the same PI Data Archive as earlier.
You will get an error, as the default PI Data Archive container doesn't have any mappings for enduser. Let's see what is happening on the PI Data Archive side.

Result: With Basic authentication, the end user credential has been transferred to the PI Data Archive with NTLM.

Kerberos

Finally, use PSE to change the authentication to Kerberos and check in. Restart the PI Web API service. Close Internet Explorer and reopen it to point to /system/configuration to check the authentication method. The prompt for credentials will look different from the Basic authentication one. Use the same credentials as you did for the Basic authentication scenario. Try to connect to the same PI Data Archive again. You should not be able to connect. When you check the PI Data Archive logs, you will see

Result: With Kerberos authentication, the delegation failed and the credential became NT AUTHORITY\ANONYMOUS LOGON even though we logged on to PI Web API with the local account 'enduser'.

Demo with GMSA Kerberos

Now we shall use the GMSA accounts that we have to make the last scenario with Kerberos delegation work. Download the scripts for the Kerberos enabled PI Data Archive and PI Web API here.

PI-Web-API-container/New-KerberosPWA.ps1
PI-Data-Archive-container-build/New-KerberosPIDA.ps1

I will use the name 'untrusted' for the GMSA account that is not trusted for delegation and 'trusted' for the GMSA account that is trusted for delegation. Set the SPN for 'trusted' like so

setspn -s HTTP/trusted trusted

Once you have the scripts, run them like this

.\New-KerberosPIDA.ps1 -AccountName untrusted -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName trusted -ContainerName wak

The scripts will help you to create a credential spec for the container based on the GMSA that you provide. A credential spec lets the container know how it can access Active Directory resources. Then, they will use this credential spec to create the container using the docker run command.
The scripts will also set the hostname of the container to be the same as the name of the GMSA. This is required because of a current limitation of the implementation; it might be resolved in the future so that you can choose your own hostnames.

Open Internet Explorer now with your domain account and access PI Web API /system/userinfo. The hostname is 'trusted'. Make sure that ImpersonationLevel is 'Delegation'. Now try to access the PI Data Archive. The hostname is 'untrusted'. You will be unable to access it. Why? Because you haven't created a mapping yet! So let's use SMT to create a mapping to your domain account. After creating a mapping, try again and you should be able to connect. The PI Data Archive logs will show that you have connected with Kerberos. You do not need any mapping to your PI Web API service account at all if Kerberos delegation is working properly.

Result: With the Kerberos authentication method in PI Web API and the use of GMSAs, Kerberos delegation works. The end domain user is delegated from the client to the PI Web API container to the PI Data Archive container. We have successfully completed the double hop.

Troubleshoot

If this doesn't seem to work for you, one thing you can try is to check the settings for Internet Explorer according to this KB article: KB01223 - Kerberos and Internet Browsers. Your browser settings might differ from mine, but the container settings should be the same since the containers are newly created.

Alternative: Resource Based Constrained Delegation

A more secure way to do Kerberos delegation, instead of trusting the PI Web API container GMSA for delegation, is to set the property "PrincipalsAllowedToDelegateToAccount" on the PI Data Archive container GMSA. This is what we call Resource Based Constrained Delegation (RBCD). You do not have to trust any GMSAs for delegation in this scenario. You will still need two GMSAs. I am assuming that you have already created the two containers with the scripts found above.
I will use 'pida' as the name of the PI Data Archive container GMSA and 'piwebapi' as the name of the PI Web API container GMSA.

.\New-KerberosPIDA.ps1 -AccountName pida -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName piwebapi -ContainerName wak

Execute these two additional commands to enable RBCD.

docker exec pik powershell -command "Add-WindowsFeature RSAT-AD-PowerShell"
docker exec pik powershell -command "Set-ADServiceAccount $env:computername -PrincipalsAllowedToDelegateToAccount (Get-ADServiceAccount piwebapi)"


You will still be able to connect with Kerberos delegation from the client machine to the PI Web API container to the PI Data Archive container. In this case, the PI Data Archive container strictly allows delegation only from the PI Web API container with 'piwebapi' as its GMSA.

Conclusion

We have seen that containers are able to utilize Kerberos delegation with the usage of GMSAs. This is important for middleware server containers such as PI Web API. Here is a quick summary of the various results that we have seen.

| Authentication Mode | No GMSA | With GMSA |
| --- | --- | --- |
| Anonymous | NTLM with service account | No reason to do this |
| Basic | NTLM with local end user account | No reason to do this |
| Kerberos | NTLM with anonymous logon | Kerberos delegation with domain end user account |

The interesting thing is that Basic authentication can also have per user security with local end user accounts. But you will need to maintain the list of local users in the PI Web API container and the PI Data Archive container separately which is not recommended. The ideal case is to go with Kerberos delegation.

# PI Data Archive container health check

Posted by Eugene Lee Sep 3, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

In my previous blog on AF Server container health check, I talked about implementing a health check for the AF Server container. Naturally, we will also have to discuss such a check for the PI Data Archive container. For an introduction to what a health check is and how you can integrate one with Docker, please refer to the previous blog post, as I won't be repeating it here.

In part 1, I will be covering the definition of the health tests that we can do for the PI Data Archive and then we will hook them up in the Dockerfile.

In part 2, we will be doing something interesting with these health check enabled containers by using another container that I wrote to inform us by email whenever there is a change in their health status so that we are aware when things fail.

Without further ado, let's jump into the definition of the health tests for the PI Data Archive container!

Define health tests

There are 2 tests that we will be performing. The first test checks port 5450 to determine if there are any services listening on that port. The second test will use piartool to block on some essential subsystems of the PI Data Archive with a fixed timeout, so that the test fails if it exceeds that timeout.

The Powershell cmdlet Get-NetTCPConnection can accomplish the first check for us. A return value of null means that there is no service listening on port 5450.

The relevant code is below

$val = Get-NetTCPConnection -LocalPort 5450 -State Listen -ErrorAction SilentlyContinue
if ($val -eq $null)
{
    # return 1: unhealthy - the container is not working correctly
    Write-Host "Failed: No TCP Listener found on 5450"
    exit 1
}

Next, piartool is a utility located in the adm folder of the PI Data Archive home directory. It has an option called "block" which waits for the specified subsystem to respond. This command is also used in the PI Data Archive start scripts to pause the script until the subsystem is available. The subsystems that we are going to check are the following:

$SubsystemList = @(
    @("pibasess", "PI Base Subsystem"),
    @("pisnapss", "PI Snapshot Subsystem"),
    @("piarchss", "PI Archive Subsystem"),
    @("piupdmgr", "PI Update Manager")
)


We are going to change the amount of time that we allow for each check to 10 seconds so that we do not have to wait 1 hour for it to complete. We will also grab the start and end times so that we can provide detailed logging for troubleshooting purposes. The code for this is below.

function Block-Subsystem
{
    Param ([string]$Name, [string]$DisplayName, [int]$TimeoutSeconds = 10)
    $StartDate = Get-Date
    $rc = Start-Process -FilePath "${env:PISERVER}\adm\piartool.exe" -ArgumentList @("-block", $Name, $TimeoutSeconds) -Wait -PassThru -NoNewWindow
    $EndDate = Get-Date
    if ($rc.ExitCode -ne 0)
    {
        echo ("Block failed for {0} with exit code {1}, block started: {2}, block ended: {3}" -f $DisplayName, $rc.ExitCode, $StartDate, $EndDate)
        exit 1
    }
}

ForEach ($Subsystem in $SubsystemList) {
    Block-Subsystem -Name $Subsystem[0] -DisplayName $Subsystem[1] -TimeoutSeconds 10
}


Integrate into Docker

We will add this line of code to our Dockerfile to make Docker start performing health checks.

HEALTHCHECK --start-period=60s --timeout=60s --retries=1 CMD powershell .\check.ps1


The start period is given as 60 seconds to allow the PI Data Archive to start up and initialize properly before the health check results are taken into account. A timeout of 60 seconds is given for the entire health check to complete. If it takes longer than that, the health check is deemed to have failed. I also allowed only 1 retry, which means that the health check will be unsuccessful if the first try fails. There is no second chance!

Build the image

As usual, you will have to supply the PI Server 2018 installer and pilicense.dat yourself. The rest of the files can be found here.

elee3/PI-Data-Archive-container-build

Put all the files into the same folder and run the build.bat file.

Once your image is built, you can create a container.

docker run -h pi --name pi -e trust=%computername% pidax:18


Now check docker ps. The health status should be 'starting'.

After 1 minute, which is the start period, run docker ps again. The health status should now be 'healthy'.
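Scripts can poll this status with the format string `docker inspect -f "{{.State.Health.Status}}" pi`. A small sketch of a wrapper that turns the reported state into an exit code (the function and its state handling are my own illustration, not part of the image):

```shell
# Map a docker health state string to an exit code for use in wrapper scripts.
classify_health() {
  case "$1" in
    healthy)  return 0 ;;   # container passed its health check
    starting) return 2 ;;   # still within the start period
    *)        return 1 ;;   # unhealthy or unknown state
  esac
}

# Usage against a live container (not run here):
#   classify_health "$(docker inspect -f '{{.State.Health.Status}}' pi)"
```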

Health monitoring

Now that we have a health check enabled container up and running, we can start to do some wonderful things with it. If you are a PI administrator, don't you wish there were some way to keep tabs on your PI Data Archive's health, so that if it fails, an email can be sent to notify you that it is unhealthy? This way, you won't get a shock the next time you check on your PI Data Archive and realize that it has been down for a week!

I have written an application that can help you monitor ANY health enabled containers (i.e. not only the PI Data Archive container and the AF Server container but any container that has a health check enabled) and send you an email when they become unhealthy. We can start the monitoring with just one simple command. You should change the following variables

Name of your SMTP server: <mysmtp>

Destination email: <operator@osisoft.com>

docker run --rm -id -h test --name test -e smtp=<mysmtp> -e from=<admin@osisoft.com> -e to=<operator@osisoft.com> elee3/health


Once the application is running, we can test it by trying to break our PI Data Archive container. I will do so by stopping the PI Snapshot Subsystem since it is one of the services that is monitored by our health check. After a short while, I received an email in my inbox.

Let me check docker ps again.

The health status of docker ps corresponds to what the email has indicated. Notice that the email even provides us with the health logs so that we know exactly what went wrong. This is so useful. Now let me go back and start the PI Snapshot Subsystem again. The monitoring application will inform me that my container is healthy again.

The latest log at 2:30:47 PM has no output which indicates that there are no errors. The logs will normally fetch the 5 most recent events.

With the health monitoring application in place, we can now sleep in peace and not worry about container failures which go unnoticed.

Conclusion

In addition to what I have shown here, I want to mention that the health tests can be defined by the users themselves. You do not have to use the implementation that is provided by me. This level of flexibility is very important since health is a subjective topic. One man's trash is another man's treasure. You might think a BMI of 25 is ok but the official recommendation from the health hub is 23 and below. Therefore, the ability to define your own tests and thresholds will help you receive the right notifications that are appropriate to your own environment. You can hook them up during docker run. Here is more information if you are interested.

Source code for health monitoring application is here.

elee3/Health-Monitor

# AF Server container health check

Posted by Eugene Lee Aug 23, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

In a complex infrastructure which spans several data centers and has multiple dependencies with minimum service up-time requirements, it is inevitable that services can still fail occasionally. The question then is how we can manage that in order to continue to maintain a high availability environment and keep downtime as low as possible. In this blog post, we will be talking about how we can implement a health check in the AF Server container to help with that goal.

What is a health check?

A container that is running doesn't necessarily mean that it is working, i.e. performing the service that it is supposed to do. In Docker Engine 1.12, a new HEALTHCHECK instruction was added to the Dockerfile so that we can define a command that verifies the state of health of the container. It is the same concept as a health check for humans, such as making sure that your liver or kidneys are working properly and taking preventive measures before things get worse. In the container scenario, the exit code of the command will determine whether the container is operational and doing what it is meant to do.

In the AF Server context, we will need to think about what it means for the AF Server to be 'healthy'. Luckily for us, the AF Server includes a Windows PerfMon counter called AF Health Check to indicate the health status. If both the AF application service and the SQL Server are running and responding, this counter returns a value of 1. Another way we can check for health is to check if a service is listening on port 5457, since AF Server uses that port. We can also test if the service is running. Including all of these tests will make our health check more robust.

Define health tests

For the first measure of health, we will be using the Get-Counter Powershell cmdlet to read the value of the performance counter. A healthy AF Server is shown below.

A value of 1 indicates that the AF Server and SQL Server are healthy while 0 means otherwise.

The second measure of health is to test for a service listening on port 5457. We will use the Powershell cmdlet Get-NetTCPConnection to do so.

When there is no listener on port 5457, we will get an error.

The third measure of health is to check if the service is running by using the Get-Service Powershell cmdlet.

Integrate into Docker

With the health tests on hand, how can we ask Docker to perform these tests? The answer is to use the HEALTHCHECK instruction in the Dockerfile to instruct the Docker Engine to carry out the tests at regular intervals that can be defined by the image builder or the user. The syntax of the instruction is

HEALTHCHECK [OPTIONS] CMD command

The options that can appear before CMD are:

• --interval=DURATION (default: 30s)
• --timeout=DURATION (default: 30s)
• --start-period=DURATION (default: 0s)
• --retries=N (default: 3)

I will be using a start-period of 10s to allow the AF Server some time to initialize before starting the health checks. The other options I will leave as default. The user of the image can still override these options during docker run.
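Docker exposes matching docker run flags for these overrides (--health-interval, --health-timeout, --health-start-period, --health-retries). A sketch that only assembles such a command line; the container name and override values are illustrative:

```shell
# Build a 'docker run' command that overrides the image's HEALTHCHECK options.
build_run_cmd() {
  name="$1"; interval="$2"; retries="$3"; image="$4"
  echo "docker run -d -h $name --name $name --health-interval=$interval --health-retries=$retries $image"
}

build_run_cmd af18 10s 5 elee3/afserver:18x
```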

The command’s exit status indicates the health status of the container. The possible values are:

• 0: success - the container is healthy and ready for use
• 1: unhealthy - the container is not working correctly
• 2: reserved - do not use this exit code

The command will be a PowerShell script that runs the aforementioned tests. The instruction will therefore look like this.

HEALTHCHECK --start-period=10s CMD powershell .\check.ps1


Here are the contents of check.ps1

#test for service listening on port 5457
Get-NetTCPConnection -LocalPort 5457 -State Listen -ErrorAction SilentlyContinue | Out-Null
if ($? -eq $false)
{
    Write-Host "No one listening on 5457"
    exit 1
}

#test if AF service is running
$status = Get-Service afservice | Select -Expand Status
if ($status -ne "Running")
{
    Write-Host "PI AF Application Service (afservice) is $status."
    Write-Host "PI AF Application Service (afservice) is not running."
    exit 1
}

#test for AF Server Health Counter
$counter = Get-Counter "\PI AF Server\Health" | Select -Expand CounterSamples | Select -Expand CookedValue
if ($counter -eq 0)
{
    Write-Host "The health counter is $counter. This might mean either"
    Write-Host "1. SQL Server is non-responsive"
    Write-Host "2. SQL Server is responding with errors"
    exit 1
}


Usage

The container image elee3/afserver:18x has been updated with the health check ability. After pulling it from the Docker repository with

docker pull elee3/afserver:18x


You can have some fun with it. Let me spin up a new AF Server container based on the new image.

docker run -d -h af18 --name af18 elee3/afserver:18x


Now, let's do a

docker ps


Notice that my other container af17, which is based on the elee3/afserver:17R2 image, doesn't have any health status next to its status because a health check was not implemented for it, while container af18 indicates "(health: starting)". Let's run docker ps again after waiting for a little while.

Notice that the health status has changed from 'starting' to 'healthy' after the first test, which runs interval seconds (configured in the options) after the container is started.

We can also do

docker inspect af18 -f "{{json .State.Health}}" | ConvertFrom-Json | Select -ExpandProperty Log


to see the health logs.
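If you only need the overall status rather than the full log, the same inspect JSON can be picked apart in a script as well. A sketch, where the sample string stands in for real `docker inspect af18 -f "{{json .State.Health}}"` output:

```shell
# Pull the overall Status field out of inspect-style health JSON.
health_status() {
  echo "$1" | sed -n 's/.*"Status":"\([a-z]*\)".*/\1/p'
}

sample='{"Status":"healthy","FailingStreak":0,"Log":[]}'
health_status "$sample"   # healthy
```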

Health event

When the health status of a container changes, a health_status event is generated with the new status. We can observe that using docker events. We will now intentionally break the container by stopping the SQL Server service and trying to connect with PSE.

This is expected. Now let us check using docker events, which is a tool for getting real-time events from the Docker Engine.

We can do a filter on docker events to only grab the health_status events for a certain time range so that we do not need to be concerned with irrelevant events. Let us grab those health_status events for the past hour for my container af18.

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json | ForEach-Object -Process { $_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).AddSeconds($_.time).ToLocalTime(); $_ } | Select status,from,time

Also check on

docker ps

and docker inspect, which can give us clues on what went wrong.

docker inspect af18 -f "{{json .State.Health}}" | ConvertFrom-Json | Select -ExpandProperty Log | fl

With the health check, it is now obvious that even though the container is running, it doesn't work when we try to connect to it with PSE. We shall restart the SQL Server service and try connecting with PSE. We can check if the container becomes healthy again by running

docker ps

and

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json | ForEach-Object -Process { $_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).AddSeconds($_.time).ToLocalTime(); $_ } | Select status,from,time
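The ForEach-Object step in that pipeline converts the Unix epoch in each event's time field into a readable time. The same conversion in a portable shell sketch (shown in UTC so the output is deterministic; GNU date's -d @epoch syntax is assumed):

```shell
# Convert a Unix epoch timestamp (as emitted by docker events) to a readable UTC time.
epoch_to_utc() {
  date -u -d "@$1" +"%Y-%m-%d %H:%M:%S"
}

epoch_to_utc 0            # 1970-01-01 00:00:00
epoch_to_utc 1534982400
```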


As expected, a new health_status event is generated which indicates healthy.

Conclusion

We can leverage the health check mechanism further when we use a container orchestrator such as Docker Swarm, which can detect the unhealthy state of a container and automatically replace it with a new and working container. This will be discussed in a future blog, so stay tuned!

# AF Server container in the cloud

Posted by Eugene Lee Aug 10, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

In my previous articles, I demonstrated using the AF Server container in local Docker host deployments. The implication is that you have to manage the Docker host infrastructure yourself: the installation, patching, maintenance and upgrading work has to be done manually. These represent significant barriers to getting up and running. As an analogy, imagine you visit another country on vacation and need to get from the airport to the hotel. Would it be better to buy a car (if they even sold one at the airport) and drive to the hotel, or just take a taxi (transport as a service)? The first option requires a larger initial investment of time and money than the latter.

For quick demo, training or testing purposes, getting a Docker host infrastructure up and running requires effort (getting a machine with the right specifications, procuring an OS with Windows container capabilities, patching the OS so that you can use Docker, installing the right edition of Docker) and troubleshooting if things go south (errors during setup or services refusing to start). In the past, we had no other choice, so we just had to live with it. But in this modern era of cloud computing, using containers as a service can be a faster and cheaper alternative. Today, I will show you how to operate the AF Server container in the cloud using Azure Container Instances. The first service of its kind in the cloud, Azure Container Instances delivers containers with great simplicity and speed, a form of serverless containers.

Prerequisites

You will need an Azure subscription to follow along with the blog. You can get a free trial account here.

Azure CLI

Install the Azure CLI, which is a command-line tool for managing Azure resources. It is a small install. Once done, we need to log in.

az login


A browser window should open for you to sign in. Otherwise, follow the instructions on the command line: navigate to https://aka.ms/devicelogin in your browser and enter the authorization code shown.

Now set your default subscription if you have multiple subscriptions. If your account only has one subscription, you can skip this step.

az account set -s <subscription name>


Create cloud container

We are now ready to create the AF Server cloud container. First create a resource group.

az group create --name resourcegrp -l southeastasia


You can change southeastasia to the location nearest to you. Here is the list of locations (remove the space in the location name when using it).

Create a file named af.yaml. Replace <username> and <password> with the credentials for pulling the AF Server container image. There are some variables that you can configure

afname: The name that you choose for your AF Server.

af.yaml

apiVersion: '2018-06-01'
name: af
properties:
  containers:
  - name: af
    properties:
      environmentVariables:
      - name: afname
        value: eugeneaf
      - name: user
        value: eugene
      - name: pw
        secureValue: qwert123!
      image: elee3/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  imageRegistryCredentials:
  - server: index.docker.io
    username: <username>
    password: <password>
  ipAddress:
    dnsNameLabel: eleeaf
    ports:
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups


Then run this in Azure CLI to create the container.

az container create --resource-group resourcegrp --file af.yaml


The command will return in about 5 minutes.

You can check the state of the container.

az container show --resource-group resourcegrp -n af --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table
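The `--query` argument is a JMESPath expression. For readers less familiar with JMESPath, the same projection can be sketched in Python against a trimmed-down sample of the JSON that `az container show` returns (the field values below are illustrative, not real output):

```python
import json

# A trimmed, illustrative sample of `az container show` output.
sample = json.loads("""
{
  "name": "af",
  "provisioningState": "Succeeded",
  "ipAddress": {
    "fqdn": "eleeaf.southeastasia.azurecontainer.io",
    "ports": [{"port": 5457, "protocol": "TCP"}]
  }
}
""")

# Equivalent of the JMESPath query {FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}
summary = {
    "FQDN": sample["ipAddress"]["fqdn"],
    "ProvisioningState": sample["provisioningState"],
}
print(summary)
```

A ProvisioningState of Succeeded means the container group is up; the FQDN is what you will point PSE at later.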


You can check the container logs.

az container logs --resource-group resourcegrp -n af


Explore with PSE

You now have an AF Server container in the cloud that can be accessed ANYWHERE as long as there is internet connectivity. You can connect to it with PSE using the FQDN. The credentials to use are those that you specified in af.yaml.

Notice that the name of the AF Server is the value of the afname environment variable that was passed in af.yaml.

Run commands in container

If you need to log in to the container to run commands, such as using afdiag, you can do so with

az container exec --resource-group resourcegrp -n af --exec-command "cmd.exe"


Clean up

When you are done with using the container, you should destroy it so that you won't have to pay for it when it is not being used.

az container delete --resource-group resourcegrp -n af


You can check that the resource is deleted by listing your resources.

az resource list


Considerations

There are some tricks to hosting a container in the cloud to optimize its deployment time.

1. Base OS

The base OS should be one of the three most recent versions of Windows Server Core 2016. These are cached in Azure Container Instances to help with deployment time. If you want to experience the difference, try pulling elee3/afserver:18 in the create container command above. It will take about 13 minutes, more than twice the 5 minutes needed to pull elee3/afserver:18x. The reason is that the old image with the "18" tag is based on the public SQL Server image, which is 7 months old and doesn't have a recent enough OS version to benefit from the caching mechanism. I have rebuilt the image with the "18x" tag based on my own SQL Server image with the latest OS version.

2. Image registry location

Hosting the image in Azure Container Registry in the same region that you use to deploy your container helps deployment time, as it shortens the network path that the image travels and therefore the download time. Note that ACR, unlike Docker Hub, is not free. In my tests, it took 4 minutes to deploy with ACR.

3. Image size

This one is obviously a no-brainer: smaller images download and deploy faster. That's why I am always looking to make my images smaller.

Another consideration is the number of containers per container group. In this example, we are creating a single-container group. A current limitation of Windows containers is that we can only create single-container groups. When this limitation is lifted, there are scenarios where I see value in creating multi-container groups, such as spinning up sets of containers that are complementary to each other, e.g. a PI Data Archive container, an AF Server container and a PI Analysis Service container in a 3-container group. However, for scenarios such as spinning up two AF Server containers, we should still keep them in separate container groups so that they won't fight over the same port.
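The port-contention point is easy to demonstrate: containers in one group share a network namespace, and only one listener can bind a given TCP port in a namespace. A minimal Python sketch of that constraint:

```python
import socket

# First socket binds to an OS-chosen free port (port 0 = pick one for me).
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]

# A second socket trying to bind the exact same address:port fails with
# EADDRINUSE, just as two AF Servers both wanting 5457 would in one group.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True
finally:
    second.close()
    first.close()

print("port conflict:", conflict)
```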

Limitations

Kerberos authentication is not supported in a cloud environment. We are using NTLM authentication in this example.

Conclusion

Deploying the AF Server container to Azure Container Instances might not be as fast as deploying it to a local Docker host. But it is cheaper compared to the upfront time and cost of setting up your own Docker host. This makes it ideal for demo/training/testing scenarios. The containers are billed on a per second basis so you only pay for what you use. That is like only paying for your trip from the airport to the hotel without having to pay anything extra.

# Upgrade to AF Server 2018 container with Data Persistence

Posted by Eugene Lee Jul 24, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

AF Server 2018 was released on 27 Jun 2018! Let's take a look at some of the new features that are available. The following list is not exhaustive.

• AF Server Connection information is now available for administrative users.
• A new UOM class, Computer Storage, is provided. The canonical UOM is the byte (b), with multiples based on both 1000 and 1024.
• AFElementSearch and AFEventFrameSearch now support searching for elements and event frames by attribute values without having to specify a template.
• The AFDiag utility has been enhanced to allow for bulk deletes of event frames by database and/or template and within a specified time range.
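The distinction between the 1000-based and 1024-based multiples that the Computer Storage class covers can be illustrated with plain arithmetic (this is not AF SDK code, just the conversion factors):

```python
def to_si_kilobytes(num_bytes):
    """SI multiple of the canonical byte: 1 kilobyte = 1000 bytes."""
    return num_bytes / 1000

def to_kibibytes(num_bytes):
    """Binary multiple of the canonical byte: 1 kibibyte = 1024 bytes."""
    return num_bytes / 1024

print(to_si_kilobytes(2048))  # 2.048
print(to_kibibytes(2048))     # 2.0
```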

Here are also some articles that talk about other new features in AF 2018.

Mass Event Frame Deletion in AF SDK 2.10

DisplayDigits Exposed in AF 2018 / AF SDK 2.10

What's new in AF 2018 (2.10) OSIsoft.AF.PI Namespace

Introducing the AFSession Structure

To take advantage of these new features, we will need to upgrade to the AF Server 2018 container. Let me demonstrate how we can do that.

Create 2017R2 container and inject data

The steps for creating the container can be found in Spin up AF Server container (SQL Server included). I will use af17 as the name in this example.

docker run -di --hostname af17 --name af17 elee3/afserver:17R2


Now, we can create some elements, attributes and event frames.

We will also list the version to confirm it is 2017R2 (2.9.5.8368).

Pull 2018 image

We can use the following command to pull down the 2018 image.

docker pull elee3/afserver:18


The credentials required are the same as the 2017R2 image. Check the digest to make sure the image is correct.

18: digest: sha256:99e091dc846d2afbc8ac3c1ec4dcf847c7d3e6bb0e3945718f00e3f4deffe073

Create an empty folder, open up a Powershell, navigate to that folder and run the following commands.

Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/afbackup.bat" -UseBasicParsing -OutFile afbackup.bat


Wait a short moment for your AF Server 2018 container to be ready. In this example, I will give it the name af18.

Verification

Now we can check that the element, attribute and event frame that we created earlier in the 2017R2 container are persisted to the 2018 container. First, let's connect to af18 with PSE. Upon successful connection, notice that the name and ID of the AF Server 2017R2 are retained.

Our element, attribute and event frame are all persisted.

Finally, we can see that the version has been upgraded to 2018 (2.10.0.8628).

Congratulations. You have successfully upgraded to the AF Server 2018 container and retained your data.

Rollback

If you want to rollback to the AF Server 2017R2 container, you will need to use the backup that was automatically generated and stored in the folder

C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup

docker rm -f af17
docker exec af18 cmd /c "copy /b "C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\PIAFSqlBackup*.bak" c:\db\PIFD.bak"
docker run -d -h af17 --name af17 --volumes-from af18 elee3/afserver:17R2


Once a PIFD database is upgraded, it is impossible to downgrade it, as seen here stating "a downgrade of the PIFD database will not be possible". This means that data entered after the upgrade cannot be persisted through the rollback.

Explore new features

Computer Storage UOM

AF Server Connections history

Bulk deletes of event frames by database and/or template and within a specified time range

Conclusion

Now that the AF Server container has at least two versions available (2017R2 and 2018), you can really start to appreciate its usefulness for testing the compatibility of your applications with two different versions of the server. In the past, you would need to create two large VMs in order to host two AF Servers. Those days are over. You can realize immediate savings in storage space and memory. We will look into bringing these containers to some cloud offerings in future articles.

# Upgrade to PI Data Archive 2018 container with Data Persistence

Posted by Eugene Lee Jul 9, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

PI Data Archive 2018 was released on 27 Jun 2018! It is now time for us to upgrade to experience all the latest enhancements.

Legacy subsystems such as PI AF Link Subsystem, PI Alarm Subsystem, PI Performance Equation Scheduler, PI Recalculation Subsystem and PI Batch Subsystem are not installed by default. These legacy subsystems mentioned above will not be in the PI Data Archive 2018 container because of the command line that I have chosen for it. This upgrade procedure assumes that you were not using any of these legacy subsystems.

We also have client-side load balancing in addition to scheduled archive shifts for easier management of archives. Finally, there is the integrated PI Server installation kit, which is the enhancement I am most excited about. The kit can generate a command line statement for use during silent installation. No more having to comb through the documentation to find the feature that you want to install; just use the GUI to select the features you desire and save the command line to a file. The command line is useful in environments without a GUI, such as a container environment.

Today, I will be guiding you on a journey to upgrade your PI Data Archive 2017R2 container to the PI Data Archive 2018 container. In the article Overcome limitations of the PI Data Archive container, I addressed most of the limitations that were present in the original article Spin up PI Data Archive container. We are now left with the final limitation to address.

This example doesn't support upgrading without re-initialization of data.

I will show you how we can upgrade to the 2018 container without losing your data. Let's begin on this wonderful adventure!

Create 2017R2 container and inject data

See the "Create container" section in Overcome limitations of the PI Data Archive container for the detailed procedure on how to create the container. In this example, my container name will be pi17.

docker run -id -h pi17 --name pi17 pidax:17R2


Once your container is ready, we can use PI SMT to introduce some data which we can use as validation that the data has been persisted to the new container. I will create a PI Point called "test" to store some string data.

We will also change some tuning parameters such as Archive_AutoArchiveFileRoot and Archive_FutureAutoArchiveFileRoot to show that they are persisted as well.

Take a backup

Before proceeding with the upgrade, let us take a backup of the container using the backup script found here. This is so that we can roll back later on if needed.

The backup will be stored in a folder named after the container.

Build 2018 image

1. Get the files from elee3/PI-Data-Archive-container-build

2. Get the PI Server 2018 integrated install kit from the techsupport website

3. Procure a PI license that doesn't require an MSF, such as the demo license on the techsupport website

4. Your folder structure should look similar to this now.

5. Run build.bat.

Now that we have the image built, we can perform the upgrade. To do so, stop the pi17 container.

docker stop pi17


Create the PI Data Archive 2018 container (I will name this pi18) by mounting the data volumes from the pi17 container.

docker run -id -h pi18 --name pi18 --volumes-from pi17 -e trust=<containerhost> pidax:18


Verification

Now let us verify that the container named pi18 has our old data and tuning parameters and also let us check its version. We can do so with PI SMT.

Data has been persisted!

Tuning parameters have also been persisted!

Version is now 3.4.420.1182 which means the upgrade is successful. Note that the legacy subsystems that were mentioned above are no longer present.

Congratulations. You have successfully upgraded to the PI Data Archive 2018 container and retained your data.

Rollback

Now what if you want to rollback to the previous version for whatever reasons? I will show you that it is also simple to do. There are two ways that we can go about doing this.

| Method | Pros | Cons |
| --- | --- | --- |
| Restore | Will always work | Data added after the upgrade will be lost after the rollback; only data prior to the backup will be present. Requires a backup. |
| Non-Restore | Data added after the upgrade is persisted after the rollback | Might not always work; it depends on whether the configuration files are compatible between versions (e.g. it works for 2018 to 2017R2 but not for 2015 to earlier versions). |

We will explore both methods in this blog since both methods will work for rolling back 2018 to 2017R2.

Restore method

In this method, we can remove pi17, recreate a fresh instance and restore the backup. In the container world, we treat software not as pets but more like cattle.

docker rm pi17
docker run -id -h pi17 --name pi17 pidax:17R2
docker stop pi17


Copy the backup folders into the appropriate volumes at C:\ProgramData\docker\volumes

docker start pi17


Now let us compare pi17 and pi18 with PI SMT. We can see that they have the same data but their versions are different.

Non-Restore method

In this method, data that is added AFTER the upgrade will still be persisted after rollback. Let us add some data to the pi18 container.

We shall also change the tuning parameter from container17 to container18.

Now, let's remove any pi17 container that exists so that we only have the pi18 container running. After that, we can do

docker rm -f pi17
docker stop pi18
docker run -id -h pi17 --name pi17 --volumes-from pi18 pidax:17R2


We can now verify that the data added after the upgrade still exists when we roll back to the 2017R2 container.

Conclusion

In this article, we have shown that it is easy to perform upgrades and rollbacks with containers while preserving data throughout the process. Upgrades that used to take days can now be done in minutes. There is no worry that upgrading will break your container, since data is separated from the container. One improvement that I would like to see is for archives to be downgraded by an older PI Archive Subsystem automatically. Currently, this cannot be done. If you try to connect to a newer archive format with an older piarchss without downgrading the version manually, you will see an error.

However, the reverse is possible. Connecting to an older archive format with a newer piarchss will upgrade the version automatically.

1. Fix unknown message problem in logs

2. Add trust on run-time by specifying environment variable

# Overcome limitations of the PI Data Archive container

Posted by Eugene Lee Jul 2, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

In this blog post, we will be exploring how to overcome the limitations that were previously mentioned in the blog post Spin up PI Data Archive container. Container technology can contribute to the manageability of a PI System (installations/migrations/maintenance/troubleshooting that used to take weeks can potentially be reduced to minutes) so I would like to try and overcome as many limitations as I can so that they will become production ready. Let us have a look at the limitations that were previously mentioned.

1. This example does not persist data or configuration between runs of the container image.

2. This example relies on PI Data Archive trusts and local accounts for authentication.

3. This example doesn't support VSS backups.

Let us go through them one at a time.

Data and Configuration Persistence

This limitation can be solved by separating the data from the application container. In Docker, we can make use of Volumes, which are completely managed by Docker. When we persist data in volumes, the data exists beyond the life cycle of the container, so even if we destroy the container, the data will still remain. We create external data volumes by including the VOLUME directive in the Dockerfile, like so:

VOLUME ["C:/Program Files/PI/arc","C:/Program Files/PI/dat","C:/Program Files/PI/log"]

When we instantiate the container, Docker will now know that it has to create the external data volumes to store the data and configuration that exists in the PI Data Archive arc, dat and log directories.

Windows Authentication

This issue can be addressed with the use of GMSA and a little voodoo magic. This enables the container host to obtain the TGT for the container so that the container is able to perform Kerberos authentication and it will be connected to the domain. The container host will need to be domain joined for this to happen.

VSS Backups

When data is persisted externally, we can leverage the VSS provider in the container host to perform the VSS snapshot for us, so that we do not have to stop the container while performing the backup. This way, the container can run 24/7 without any downtime (as required by production environments). The PI Data Archive has mechanisms to put the archive in a consistent state and freeze it in preparation for the snapshot.

Create container

1. Grab the files in the 2017R2 folder from my Github repo and place them into a folder. elee3/PI-Data-Archive-container-build

2. Get PI Data Archive 2017 R2A Install Kit and extract it into the folder as well. Download from techsupport website

3. Procure a PI License that doesn't require a MSF such as the demo license on the techsupport website and place it in the Enterprise_X64 folder.

4. Your folder structure should look similar to this now.

5. Execute buildx.bat. This will build the image.

6. Once the build is complete, you can navigate to the Kerberos folder and run the powershell script (check 3 Aug 2018 updates) to create a Kerberos enabled container

.\New-KerberosPIDA.ps1 -AccountName <GMSA name> -ContainerName <container name>


You can request a GMSA from your IT department and get it installed on your container host with the Install-ADServiceAccount cmdlet.

OR

If you think it will be difficult for you to get a GMSA from your IT department, then you can use the following command as well to create a non Kerberos enabled container

docker run -id -h <DNS hostname> --name <container name> pidax:17R2


7. Go to the pantry to make some tea or coffee. After about 1.5 minutes, your container will be ready.

Demo of container abilities

1. Kerberos

This section only applies if you created a Kerberos enabled container. After creating a mapping for my domain account using PI System Management Tools (SMT) (the container automatically creates an initial trust for the container host so that you can create the mapping), let me now try to connect to the PI Data Archive container using PI System Explorer (PSE). After successful connection, let me go view the message logs of the PI Data Archive container.

We can see that we have Kerberos authentication from AFExplorer.exe a.k.a PSE.

2. Persist Data and Configuration

When I kill off the container, I can still see the configuration and data volumes persisted on my container host, so I don't have to worry that my data and configuration are lost.

3. VSS Backups

Finally, what if I do not want to stop my container but want to take a backup of my config and data? For that, we can make use of the VSS provider on the container host. Obtain the 3 files here. elee3/PI-Data-Archive-container-build

Place them anywhere on your container host. Execute

.\backup.ps1 -ContainerName <container name>


The output of the command will look like this.

Your backup will be found in the pibackup folder that is automatically created and will look like this. pi17 is the name of my container.

Your container is still running all the time.

4. Restore a backup to a container

Now that we have a backup, let me show you how to restore it to a new container. It is a very simple 3 step process.

• docker stop the new container
• Copy the backup files into the persisted volume. (You can find the volumes at C:\ProgramData\docker\volumes)
• docker start the container

As you can see, it can't get any simpler. When I browse my new container, I can see the values that I entered in the old container whose backup was taken.

Conclusion

In this blog post, we addressed the limitations of the original PI Data Archive container to make it more production ready. Do we still have any need of the original PI Data Archive container then? My answer is yes. If you do not need the capabilities offered by this enhanced container, then you can use the original one. Why? Simply because the original one starts up in 15 seconds while this one starts up in 1.5 minutes! The 1.5 minutes is due to limitations in Windows Containers. So if you need to spin up PI Data Archive containers quickly without having to worry about these limitations (e.g. in unit testing), then the original container is for you.

Script updated to allow GMSA to work in both child and parent domains. For example, mycompany.com and test.mycompany.com.

Refer to Upgrade to PI Data Archive 2018 container with Data Persistence to build the pidax:18 image needed for use with the script.

# Spin up PI Analysis Service container

Posted by Eugene Lee Jun 12, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

During PI World 2018, there was a request for a PI Analysis Service container. The user wanted to be able to spin up multiple PI Analysis Service containers to balance the load during periods with a lot of backfilling to do. Unfortunately, this is limited by the fact that each AF Server can have exactly one instance of PI Analysis Service running the analytics for that server. But this has not discouraged me from making a PI Analysis Service container to add to our PI System compose architecture!

Features of this container include:

1. Ability to test for the presence of the AF Server so that setup won't fail

2. Simple configuration. The only thing you need to change is the host name of the AF Server container that you will be using.

3. Speed. Build and set up takes less than 4 minutes in total.

4. Buffering ability. Data will be held in the buffer when connection to target PI Data Archive goes down. (Added 13 Jun 2018)

Prerequisite

You will need to be running the AF Server container since PI Analysis Service stores its run-time settings in the AF Server. You can get one from Spin up AF Server container (SQL Server included).

Procedure

1. Gather the install kits from the Techsupport website. AF Services

2. Gather the scripts and files from GitHub - elee3/PI-Analysis-Service-container-build.

3. Your folder should now look like this.

4. Run build.bat with the hostname of your AF Server container.

build.bat <AF Server container hostname>


5. Now you can execute the following to create the container.

docker run -it -h <DNS hostname> --name <container name> pias


That's all you need to do! Now when you connect to the AF Server container with PI System Explorer, you will notice that the AF Server is now enabled for asset analysis. (originally, it wasn't enabled)

Conclusion

By running this PI Analysis Service container, you can now configure asset analytics for your AF Server container to produce value added calculated streams from your raw data streams. I will be including this service in the Docker Compose PI System architecture so that you can run everything with just one command.

Update 2 Jul 2018

Removed telemetry and added 17R2 tag.

# Spin up AF Server container (Kerberos enabled)

Posted by Eugene Lee May 30, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

In one of my previous blog posts, I was spinning up an AF Server container using local accounts for authentication. For non-production purposes, this is fine. But since Kerberos is the authentication method that we recommend, I would like to show you that it is also possible to use Kerberos authentication for the AF Server container. To do this, you will have to involve a domain administrator, since a Group Managed Service Account (GMSA) will need to be created. Think of a GMSA as a usable version of the Managed Service Account; a single GMSA can be used on multiple hosts. For more details about GMSAs, you can refer to this article: Group Managed Service Accounts Overview

Prerequisite

You will need the AF Server image from this blog post.

Spin up AF Server container (SQL Server included)

Procedure

1. Request GMSA from your domain administrator. The steps are listed here.

Add-KDSRootKey -EffectiveTime (Get-Date).AddHours(-10) #Best is to wait 10 hours after running this command to make sure that all domain controllers have replicated before proceeding


2. Once you have the GMSA, you can proceed to install it on your container host.

Install-ADServiceAccount <name>


3. Test that the GMSA is working. You should get a return value of True

Test-ADServiceAccount <name>


4. Get script to create AF Server container with Kerberos.

Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/New-KerberosAFServer.ps1" -UseBasicParsing -OutFile New-KerberosAFServer.ps1


5. Create a new AF Server container

.\New-KerberosAFServer.ps1 -ContainerName <containername> -AccountName <name>


Usage

Conclusion

We can see that security is not a limitation when it comes to using an AF Server container. It is just more troublesome to get it going and requires the intervention of a domain administrator. However, this will remove the need of using local accounts for authentication which is definitely a step towards using the AF Server container for production. I will be showing how to overcome some limitations of containers in future posts such as letting containers have static IP and the ability to communicate outside of the host.

Script updated to allow GMSA to work in both child and parent domains. For example, mycompany.com and test.mycompany.com.

Script now uses the new image with 18x tag based on a newer version of Windows Server Core.

# Compose PI System container architecture

Posted by Eugene Lee May 21, 2018

Note: Development and Testing purposes only. Not supported in production environments.

Containerization Hub

Introduction

In this blog post, I will be giving an overview of how to use Docker Compose to create a PI System compose architecture that you can use for

1. Learning PI System development

2. Running your unit tests with a clean PI System

3. Compiling your AF Client code

4. Exploring PI Web API structure

5. Testing out Asset Analytics syntax

6. Other use cases that I haven't thought of (Post in the comments!)

What is Compose?

It is a tool for defining and running multi-container Docker applications. With Compose, you use a single file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. It is both easy and convenient.

Setup images

The setup involved is simple. You can refer to my previous blog posts to set up these images. Docker setup instructions can be found in the Containerization Hub link above.

Spin up PI Web API container (AF Server included)

Spin up PI Data Archive container

Spin up AF Client container

Compose setup

In Powershell, run as administrator these commands:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe

Obtain Compose file from docker-compose.yml. Place it on your desktop.

Deployment

Open a command prompt and navigate to your desktop. Enter

docker-compose up


Wait until the screen shows

Once you see that, you can close the window. Your PI System architecture is now up and running!

Usage

There are various things you can try out. If you are experiencing networking issues between the containers, turn off the firewall for the Public Profile on your container host.

1. You can try browsing the PI Web API structure by using this URL (https://eleeaf/piwebapi) in your web browser. When prompted for credentials, you can use

2. Test network connectivity from client container to the PI Data Archive and AF Server by running

docker exec -it desktop_client_1 afs


The hostname of the AF Server is eleeaf. When prompted to use NTLM, enter q. The hostname of the PI Data Archive is eleepi. You should see the following results.

3. You can install PI System Management Tools on your container host and connect to the PI Data Archive via IP address of the container. Somehow, PI SMT doesn't let you connect with hostname.

4. You can also install PI System Explorer and connect to the AF Server to create new databases.

5. You can try compiling some open source AF SDK code found in our Github repository using the AF Client container. (so that you do not have to install Visual Studio)

6. You can use PI System Explorer to experiment with some Asset Analytics equations that you have in mind to check if they are valid.
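If you prefer to script the connectivity check in step 2 from outside the containers, a small Python sketch can probe the server ports. The hostnames eleeaf and eleepi come from this compose setup; 5457 is the AF Server port and 5450 is the usual PI Data Archive port. Run it from a machine that can resolve those container hostnames.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or hostname not resolvable
        return False

# eleeaf = AF Server (5457), eleepi = PI Data Archive (5450) in this setup.
for host, port in [("eleeaf", 5457), ("eleepi", 5450)]:
    print(host, port, port_open(host, port))
```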

Destroy

Once you are done with the environment, you can destroy it with

docker-compose down


Limitations

This example does not persist data or configuration between runs of the container.

These applications do not yet support upgrade of container without re-initialization of the data.

This example relies on PI Data Archive trusts and local accounts for authentication.

AF Server, PI Web API, and SQL Express are all combined in a single container.

Conclusion

Notice how easy it is to set up a PI System compose architecture. You can do this in less than 10 minutes. No more having to wait hours to install a PI System for testing and developing with.

The current environment contains the PI Data Archive, AF Server, AF Client, PI Web API, an AF SDK sample application (called afs) and PI Analysis Service. More services will be added in the future!
