PI Developers Club
Introduction

 

If you are getting started with PI Notifications, setting up your AF tree and creating new notification rules linked to event frames, you probably want to test them before running in production. The traditional way to test is to use your enterprise SMTP server and send all the e-mails from PI Notifications to your own account. If your notifications are sent to a group of people, you also need to check with them that they are receiving the e-mails properly.

 

If you are a developer, you might be interested in a new approach for testing: using a custom SMTP server. This will help you test your notification rules more efficiently before running in production. The idea is that the custom SMTP server listens on port 25 and displays information on the console about the e-mails sent by PI Notifications.

 

Getting started

 

You can find the source code of the program here.

 

Open Visual Studio and create a .NET Core console application. Then add four packages with the following Package Manager Console commands:

 

Install-Package MailKit -Version 2.0.6
Install-Package MimeKit -Version 2.0.6
Install-Package SmtpServer -Version 5.3.0
Install-Package HtmlAgilityPack -Version 1.8.9

 

 

This installs the SmtpServer library at the core of our custom SMTP server, additional libraries to handle e-mails programmatically, and the Html Agility Pack, which will be described later. Please refer to their GitHub repositories for more information.

 

Writing the program

 

Based on the README.md of the SmtpServer GitHub repository, we have created the application below:

 

using CustomSmtpServerForPINotifications.Models;
using HtmlAgilityPack;
using SmtpServer;
using SmtpServer.Authentication;
using SmtpServer.Mail;
using SmtpServer.Protocol;
using SmtpServer.Storage;
using System;
using System.Threading;
using System.Threading.Tasks;


namespace CustomSmtpServerForPINotifications
{
    class Program
    {
        static void Main(string[] args)
        {
            var options = new SmtpServerOptionsBuilder()
             .ServerName("localhost")
             .Port(25, 587)
             .MessageStore(new SampleMessageStore())
             .MailboxFilter(new SampleMailboxFilter())
             .UserAuthenticator(new SampleUserAuthenticator())
             .Build();


            var smtpServer = new SmtpServer.SmtpServer(options);
            smtpServer.StartAsync(CancellationToken.None).Wait();
        }
    }


    public class SampleMessageStore : MessageStore
    {
        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;


            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            return Task.FromResult(SmtpResponse.Ok);
        }
    }




    public class SampleMailboxFilter : IMailboxFilter, IMailboxFilterFactory
    {
        public Task<MailboxFilterResult> CanAcceptFromAsync(ISessionContext context, IMailbox @from, int size = 0, CancellationToken token = default(CancellationToken))
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }


        public Task<MailboxFilterResult> CanDeliverToAsync(ISessionContext context, IMailbox to, IMailbox @from, CancellationToken token)
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }


        public IMailboxFilter CreateInstance(ISessionContext context)
        {
            return new SampleMailboxFilter();
        }
    }




    public class SampleUserAuthenticator : IUserAuthenticator, IUserAuthenticatorFactory
    {
        public Task<bool> AuthenticateAsync(ISessionContext context, string user, string password, CancellationToken token)
        {
            return Task.FromResult(true);
        }


        public IUserAuthenticator CreateInstance(ISessionContext context)
        {
            return new SampleUserAuthenticator();
        }
    }
}



 

Running the console application starts the custom SMTP server, which keeps listening on ports 25 and 587. When it receives an e-mail, it displays information including the HTML body of the message.

 

We can test it by opening the Email Delivery Channel Configuration through PI System Explorer, typing the IP address of the machine running the custom SMTP server and clicking on the "Test..." button.

 

 

The e-mail is shown in the console application as expected. Note that the e-mail is delivered to the custom SMTP server, which never tries to relay it to the real SMTP server of the test.com domain. This is why it is a very good tool for testing purposes.
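If you prefer to exercise the server without PI System Explorer, you can also push a test message from code using MailKit, which was installed above. Below is a minimal, hedged sketch; the addresses and message content are illustrative only:

using MailKit.Net.Smtp;
using MailKit.Security;
using MimeKit;

class SendTestEmail
{
    static void Main()
    {
        // Build a simple HTML test message (addresses and content are illustrative).
        var message = new MimeMessage();
        message.From.Add(new MailboxAddress("PI Notifications", "notifications@test.com"));
        message.To.Add(new MailboxAddress("Operator", "operator@test.com"));
        message.Subject = "Custom SMTP server test";
        message.Body = new TextPart("html") { Text = "<html><body><p>Name: Test rule</p></body></html>" };

        using (var client = new SmtpClient())
        {
            // Connect to the custom SMTP server on port 25 without TLS and deliver the message.
            client.Connect("localhost", 25, SecureSocketOptions.None);
            client.Send(message);
            client.Disconnect(true);
        }
    }
}

The message should then appear in the console of the custom SMTP server just like the one sent by PI Notifications.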

 

 

 

Let's validate that the e-mail contains all the information we need when it carries the notification content. We are going to use the Html Agility Pack library to extract information from the HTML.

 

 

        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;


            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            NotificationRequest request = GenerateRequestFromEmail(message.HtmlBody);
            if (request != null)
            {
                Console.WriteLine("AttributeFullPath: " + request.AttributeFullPath);
                Console.WriteLine("NotificationRuleName: " + request.NotificationRuleName);
                Console.WriteLine("StartTime: " + request.StartTime);
            }
            return Task.FromResult(SmtpResponse.Ok);
        }




        private NotificationRequest GenerateRequestFromEmail(string htmlBody)
        {
            try
            {
                NotificationRequest request = new NotificationRequest();
                var doc = new HtmlDocument();
                doc.LoadHtml(htmlBody);
                var nodes = doc.DocumentNode.SelectNodes("/html[1]/body[1]/div[1]/p");
                foreach (var node in nodes)
                {
                    string[] texts = node.InnerText.Split(':');
                    if (texts[0].ToLower().Trim() == "name")
                    {
                        request.NotificationRuleName = texts[1].Trim();
                    }
                    if (texts[0].ToLower().Trim() == "start time")
                    {
                        request.StartTime = node.InnerText.Replace(texts[0] + ":", string.Empty);
                    }
                    if (texts[0].ToLower().Trim() == "attribute path")
                    {
                        request.AttributeFullPath = texts[1].Trim();
                    }
                }
                return request;
            }
            catch (Exception)
            {
                return null;
            }
        }
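
The parser above assumes a simple NotificationRequest class in the project's Models folder. A minimal sketch of what such a class could look like is shown below; the exact implementation in the linked repository may differ (note that StartTime is kept as a string here, matching the assignment in GenerateRequestFromEmail):

namespace CustomSmtpServerForPINotifications.Models
{
    // Holds the fields extracted from the notification e-mail body.
    public class NotificationRequest
    {
        public string NotificationRuleName { get; set; }
        public string AttributeFullPath { get; set; }
        public string StartTime { get; set; }
    }
}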

 

After recompiling the project in Visual Studio and running the custom SMTP server again, the details of the received notification are displayed:

 

 

Conclusion

Although this custom SMTP server shouldn't be used in production, I am sure it can be a good friend for developers and PI admins when they need to test their notifications.

 

Please share your thoughts with me and the community!!

Introduction

 

When I started as a vCampus Support Engineer back in 2012, one of my favorite materials for learning about our PI Developer Technologies was the content written for our vCampus Live! events. Recently, I took a look at the vCampus Live! 2012 workbooks and found the "Develop Custom Delivery Channels for Use with PI Notifications" hands-on lab. That lab showed how to develop a custom delivery channel that would write notifications to the Windows Event Log, using PI Notifications 2012, Visual Studio 2012 and .NET Framework 3.5.

 

The new generation of PI Notifications (starting with the 2016 release) uses Event Frames instead of PI Points to store historical data and does not support custom delivery channels. It does, however, integrate with custom web services.

 

In this blog post, I will show you how to develop a custom web service that not only integrates with the newer releases of PI Notifications but also writes notification events to the Windows Event Log. This time we are going to use Visual Studio 2017 and ASP.NET Core 2.1.

 

Creating a delivery endpoint

 

Please make sure that you have PI AF 2018 and PI Notifications 2018 installed on your system. If you are interested in setting up new Notification Rules, please refer to this video.

 

Before starting to code, we need to create a new delivery endpoint as shown in the screenshot below. Note that you should select WebService as the delivery channel. REST is the standard nowadays and this is what we are going to use. We are going to create an action that accepts HTTP POST requests. For this demo, we won't be using any type of authentication, although PI Notifications does support Basic and Windows.

 

 

 

 

 

 

 

Creating the ASP.NET Core project with .NET Framework

 

You can download the source code package of this blog post by accessing this GitHub repository.

 

As explained in this blog post, an ASP.NET Core application can be created targeting either .NET Framework or .NET Core. We are going to create an ASP.NET Core application targeting .NET Framework because that makes it easier to integrate with the Windows Event Log. Let's create the project by referring to the screenshot below:

 

 

On the second screen, make sure to select ".NET Framework" and "ASP.NET Core 2.1". This code probably does not work with ASP.NET Core 2.0; if you only see older versions listed, update Visual Studio so that newer versions of the platform become available.

 

 

 

 

 

Adding the ASP.NET Core MVC library

 

Open Package Manager Console and type:

 

Install-Package Microsoft.AspNetCore.Mvc

 

This will install the MVC component of ASP.NET Core.

 

 

Editing the Startup.cs

 

Now that we have all the libraries added, we need to enable MVC by editing Startup.cs.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;


namespace CustomWebServiceForPINotifications
{
    public class Startup
    {
        public IConfiguration IConfiguration { get; }


        public Startup(IConfiguration configuration)
        {
            IConfiguration = configuration;
        }


        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }


        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }


            app.UseMvc();
        }
    }
}

 

 

I have published some blog posts and videos about using ASP.NET Core with PI AF SDK and Angular. Please refer to the Developing Web Apps on top of the PI System hub.

 

 

Creating the model

 

Create a new folder on the project root named Models and add a new class called NotificationRequest.cs. This .NET class will be used to process the JSON created by PI Notifications according to the configured Web Service. In this case, the JSON will have 3 properties: NotificationRuleName, AttributeFullPath and StartTime. Please refer to this example in order to learn how to create the Notification Rule programmatically.

 

 

Now that the properties are defined, we can write our NotificationRequest class as:

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;


namespace CustomWebServiceForPINotifications.Models
{
    public class NotificationRequest
    {
        public string NotificationRuleName { get; set; }
        public string AttributeFullPath { get; set; }
        public DateTime StartTime { get; set; }
    }
}

 

 

Creating the controller

 

Create a new folder on the project root named Controllers and add a new class called NotificationReceiverController.cs.

 

This controller has two actions: Get and Post. The first action is used to make sure that the web site is up and running properly. The second action has the same code as the hands-on lab used to write text to the Windows Event Log. If the Post operation is successful, it returns a 201 status code; if not, it returns a bad request error.

 

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using CustomWebServiceForPINotifications.Models;


namespace CustomWebServiceForPINotifications.Controllers
{
    [Route("api/notifications-receiver")]
    [ApiController]
    public class NotificationReceiverController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new List<int> { 1, 2, 3, 4 });
        }


        [HttpPost]
        public IActionResult Post([FromBody]NotificationRequest request)
        {


            try
            {
                string sSource = "PI Notifications Delivery Channel Log";
                string sLog = "Application";
                if (!System.Diagnostics.EventLog.SourceExists(sSource))
                    System.Diagnostics.EventLog.CreateEventSource(sSource, sLog);


                string message = String.Format("Notification: {0} triggered for asset {1} at {2}", request.NotificationRuleName,
                    request.AttributeFullPath.ToString(), request.StartTime.ToString());
                System.Diagnostics.EventLog.WriteEntry(sSource, message, System.Diagnostics.EventLogEntryType.Information);
                return StatusCode(201);


            }
            catch (Exception ex)
            {
                return BadRequest(ex);
            }
        }


    }
}



 

Publishing to IIS

 

Now that the code is ready, we should publish the application to IIS. Please refer to the video ASP.NET Core 2 (Web API) and Angular with PI AF SDK: Part 5 - Publishing the application to IIS for more information. You will have to download the ASP.NET Core runtime and make sure the account running the web site has enough privileges on the folder containing the web site files.

 

We can easily see if our web service is running as expected by testing it with our browser:

 

 

If there is something wrong, you will receive an error message.
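
Besides testing the Get action in the browser and using the "Test Send" button described below, you can also exercise the POST action directly from code. Below is a minimal, hedged sketch using HttpClient; the URL, host and payload values are illustrative and should be adjusted to your deployment:

using System;
using System.Net.Http;
using System.Text;

class PostTest
{
    static void Main()
    {
        // Sample payload matching the NotificationRequest model (values are illustrative).
        var json = @"{""NotificationRuleName"":""High Temperature"",
                      ""AttributeFullPath"":""\\\\MyAFServer\\MyDatabase\\Tank1|Temperature"",
                      ""StartTime"":""2018-10-01T12:00:00Z""}";

        using (var client = new HttpClient())
        using (var content = new StringContent(json, Encoding.UTF8, "application/json"))
        {
            // POST to the route defined on NotificationReceiverController.
            HttpResponseMessage response =
                client.PostAsync("http://localhost/api/notifications-receiver", content).Result;
            Console.WriteLine((int)response.StatusCode); // expect 201 on success
        }
    }
}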

 

 

Testing the integration between PI Notifications and the Custom Web Service

 

Ok, it is time to test our Notification. Let's go back to the Web Service Configuration window and click on the "Test Send" button. The result of the operation will be shown at the bottom of the window.

 

 

We can confirm that the operation was successful by opening the Windows Event Viewer on the machine hosting our web service.

 

 

 

 

Conclusion

 

PI Notifications 2016 and later do not allow developing custom delivery channels, but they do allow you to integrate with custom web services. I hope that this blog post will help you develop great integrations with the new generation of PI Notifications!

Below is a link to the presentation given at Barcelona on September 27, 2018 for the LiveCoding session titled "Getting the Most Out of AFSearch".

 

Link to Recording:     LiveCoding--Getting-the-Most-Out-of-the-AFSearch

 

GitHub Repository:     AF-SDK -PIWorld-EMEA-2018-AFSearch-LiveCoding

 

While some of it is a repeat of what was presented at San Francisco in April, the beginning section features new material on the upcoming AF SDK 2.10.5 release. The new features discussed (a short illustrative sketch follows the list):

 

  • OR clauses in searches.
  • FindObjects replaces the now deprecated FindElements, FindEventFrames, FindAttributes, etc.
  • AFSearchToken structure being replaced by new AFSearchBaseToken abstract class with 4 concrete classes: AFSearchFilterToken, AFSearchQueryToken, AFSearchOrToken, and AFSearchExpressionToken.
  • AFSearchToken will be mapped to new AFSearchBaseToken, if possible.
  • If your query uses an OR, the AFSearch.Tokens property will throw an exception.
  • New AFSearch.TokensCollection property is meant to replace Tokens functionality and works with new AFSearchBaseToken.
  • AFSearchToken and AFSearch.Tokens property are both marked as deprecated.
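
To make this more concrete, here is a hedged sketch of what an OR search could look like with these new features. The server and database names are illustrative, and exact method signatures may differ slightly in the final AF SDK 2.10.5 release:

using System;
using OSIsoft.AF;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Search;

class OrSearchExample
{
    static void Main()
    {
        // Connect to an AF database (names are illustrative).
        AFDatabase database = new PISystems()["MyAFServer"].Databases["MyDatabase"];

        // OR clauses require AF SDK 2.10.5 (PI AF 2018 R2) or later.
        using (var search = new AFElementSearch(database, "PumpsOrValves", "Name:Pump* OR Name:Valve*"))
        {
            // FindObjects replaces the now deprecated FindElements.
            foreach (AFElement element in search.FindObjects())
                Console.WriteLine(element.GetPath());
        }
    }
}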

We are excited to present the PI World Innovation Hackathon EMEA 2018 winners!

 

DEME kindly provided a sample of their data with sensor, jack-up vessel and soil model information. Participants were encouraged to create killer applications for DEME by leveraging the PI System infrastructure.

 


 

The participants had 23 hours to create an app using any of the following technologies:

  • PI Server 2018
  • PI Web API 2018
  • PI Vision 2017 R2

 

Our judges evaluated each app based on creativity, technical content, potential business impact, data analysis and insight, and UI/UX. Although it is a tough challenge to create an app in 23 hours, 4 groups were able to finish their app and present it to the judges!

 

Prizes:

1st place: Intel NUC Barebone (Core 3-7100U, 120 GB SSD, 8 GB RAM), one year free subscription to PI Developers Club, one time free registration at OSIsoft PI World over the next 1 year

2nd place: Bose SoundLink Around-Ear Wireless Headphones II (black), one year free subscription to PI Developers Club, 50% discount for registration at OSIsoft PI World over the next 1 year

3rd place: Raspberry Pi 3 Model B+ Retro Arcade Gaming Kit incl. 2 classic controllers and one year free subscription to PI Developers Club

 

 

Without further ado, here are the winners!

 

1st place - AG Solution

 

The team members were: Sergio Hernandez, Juri Krivoruchko and Marc Torralba

 

 


 

Team AG Solution developed an application on top of PI AF SDK and PI Vision. They built a custom data reference that uses the Stochastic Dual Coordinate Ascent classifier from ML.NET to detect the current state of the vessel based on the direction of state changes.

 

The team used the following technologies:

  • PI AF SDK
  • ML.NET
  • PI Vision

 

Here are some screenshots presented by the AG Solution team!

 

 

 

 

 

 

 

 

2nd place - M.E.S.S

 

The team members were:  David Rodriguez, Leandro Hideo, Alexander Hosefelder and Alexander Dixon

 


 

Team M.E.S.S developed an algorithm in R to detect sensor anomalies using machine learning.

 

The team used the following technologies:

  • R
  • PI Web API
  • PI Web API package for R

 

Here are some screenshots presented by M.E.S.S!

 

 

 

 

 

 

3rd place - Werusys Cologne

 

The team members were: Kai Weber, Ansgar Backhaus and Julian Weber

 

 


 

Team Werusys Cologne developed an application to analyze windmill installation case data based on a hidden Markov model.

 

The team used the following technologies:

  • PI Web API
  • Seeq

 

Here are some screenshots presented by the Werusys Cologne team!

 

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In this blog post about security and containers, we will discuss implementing a Kerberos double hop from the client machine to the PI Web API container and finally to the PI Data Archive container. Previously, when we used the PI Web API container located here Spin up PI Web API container (AF Server included), we used local accounts for authentication to back-end servers such as the AF Server or the PI Data Archive. The limitation is that without Kerberos delegation we cannot have per-user security, which means that all users of PI Web API have the same permissions; i.e. an operator can read the sensitive tags that were meant for upper management and vice versa. Obviously, this is not ideal. What we want is more granularity in assigning permissions so that people can only access the tags that they are supposed to read.

 

Prerequisites

You will need two GMSA (group Managed Service Account) accounts. You can request such accounts from your IT department; they can refer to the blog post Spin up AF Server container (Kerberos enabled) if they do not know how to create a GMSA. Also make sure that one of them has the TrustedForDelegation property set to True. This can be done with the Set-ADServiceAccount cmdlet.

 

You will also need to build the PI Data Archive container by following the instructions in the Build the image section here.

PI Data Archive container health check

 

For the PI Web API container, you will need to pull it from the repository by using this command.

docker pull elee3/afserver:webapi18

 

Demo without GMSA

First, let us demonstrate what authentication looks like when we run containers without GMSA.

 

Let's have a look at the various authentication modes that PI Web API offers.

1. Anonymous

2. Basic

3. Kerberos

4. Bearer

For a more detailed explanation about each mode, please refer to this page.

 

We will only be going through the first 3 modes as Bearer requires an external identity provider which is out of the scope of this blog.

 

Create the PI Data Archive container and the PI Web API container. We will also create a local user called 'enduser' in the two containers.

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h wa --name wa elee3/afserver:webapi18
docker exec wa net user enduser qwert123! /add
docker exec pi net user enduser qwert123! /add

 

Anonymous

Now let's open up PSE and connect to the hostname "wa". If prompted for the credentials, use

Username: afadmin

Password: qwert123!

Change the authentication to Anonymous and check in the changes. Restart the PI Web API service.

Verify that the setting has taken effect by using internet explorer to browse to /system/configuration. There will be no need for any credentials.

 

We can now try to connect to the PI Data Archive container with this URL.

https://wa/piwebapi/dataservers?path=\\pi

 

Check the PI Data Archive logs to see how PI Web API is authenticating.

Result: With Anonymous authentication, PI Web API authenticates with its service account using NTLM.

 

Basic

Now use PSE to change the authentication to Basic and check in. Restart the PI Web API service.

Close internet explorer and reopen it to point to /system/configuration to check the authentication method. This time, there will be a prompt for credentials. Enter

Username: enduser

Password: qwert123!

Try to connect to the same PI Data Archive as earlier. You will get an error, as the default PI Data Archive container doesn't have any mappings for enduser.

Let's see what is happening on the PI Data Archive side.

Result: With Basic authentication, the end user credential has been transferred to the PI Data Archive with NTLM.

 

Kerberos

Finally, use PSE to change the authentication to Kerberos and check in. Restart the PI Web API service.

Close internet explorer and reopen it to point to /system/configuration to check the authentication method. The prompt for credentials will look different from the Basic authentication one. Use the same credentials as you did for the Basic authentication scenario.

Try to connect to the same PI Data Archive again. You should not be able to connect. When you check on the PI Data Archive logs, you will see

Result: With Kerberos authentication, the delegation failed and the credential became NT AUTHORITY\ANONYMOUS LOGON even though we logged on to PI Web API with the local account 'enduser'.

 

Demo with GMSA

Kerberos

Now we shall use the GMSA accounts that we have to make the last scenario with Kerberos delegation work.

Download the scripts for Kerberos enabled PI Data Archive and PI Web API here.

PI-Web-API-container/New-KerberosPWA.ps1

PI-Data-Archive-container-build/New-KerberosPIDA.ps1

 

I will use 'untrusted' as the name of the GMSA account that is not trusted for delegation and 'trusted' as the name of the GMSA account that is trusted for delegation. Set the SPN for 'trusted' as follows:

setspn -s HTTP/trusted trusted

 

Once you have the scripts, run them like this

.\New-KerberosPIDA.ps1 -AccountName untrusted -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName trusted -ContainerName wak

 

The scripts will help you create a credential spec for the container based on the GMSA that you provide. A credential spec lets the container know how it can access Active Directory resources. The script then uses this credential spec to create the container with the docker run command. It also sets the hostname of the container to be the same as the name of the GMSA. This is required because of a current limitation in the implementation; it might be resolved in the future so that you can choose your own hostnames.

 

Open internet explorer now with your domain account and access PI Web API /system/userinfo. The hostname is 'trusted'.

Make sure that ImpersonationLevel is 'Delegation'.

 

Now try to access the PI Data Archive. The hostname is 'untrusted'. You will be unable to access it. Why? Because you haven't created a mapping yet! So let's use SMT to create a mapping to your domain account. After creating the mapping, try again and you should be able to connect. The PI Data Archive logs will show that you have connected with Kerberos. You do not need any mapping to your PI Web API service account at all if Kerberos delegation is working properly.

 

Result: With Kerberos authentication method in PI Web API and the use of GMSAs, Kerberos delegation works. The end domain user is delegated from the client to the PI Web API container to the PI Data Archive container. We have successfully completed the double hop.

 

Troubleshoot

If this doesn't seem to work for you, one thing you can try is to check the Internet Explorer settings according to this KB article.

KB01223 - Kerberos and Internet Browsers

Your browser settings might differ from mine but the container settings should be the same since the containers are newly created.

 

Alternative: Resource Based Constrained Delegation

A more secure way to do Kerberos delegation instead of trusting the PI Web API container GMSA for delegation is to set the property "PrincipalsAllowedToDelegateToAccount" on the PI Data Archive container GMSA. This is what we call Resource Based Constrained Delegation (RBCD). You do not have to trust any GMSAs for delegation in this scenario. You will still need two GMSAs.

 

Assume that you have already created the two containers with the scripts found above. I will use 'pida' as the name of the PI Data Archive container GMSA and 'piwebapi' as the name of the PI Web API container GMSA.

.\New-KerberosPIDA.ps1 -AccountName pida -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName piwebapi -ContainerName wak

 

Execute these two additional commands to enable RBCD.

docker exec pik powershell -command "Add-WindowsFeature RSAT-AD-PowerShell"
docker exec pik powershell -command "Set-ADServiceAccount $env:computername -PrincipalsAllowedToDelegateToAccount (Get-ADServiceAccount piwebapi)"

 

You will still be able to connect with Kerberos delegation from the client machine to the PI Web API container to the PI Data Archive container. In this case, the PI Data Archive container only strictly allows delegation from the PI Web API container with 'piwebapi' as its GMSA.

 

Conclusion

We have seen that containers are able to utilize Kerberos delegation with the usage of GMSAs. This is important for middleware server containers such as PI Web API. Here is a quick summary of the various results that we have seen.

Authentication Mode | No GMSA                          | With GMSA
Anonymous           | NTLM with service account        | No reason to do this
Basic               | NTLM with local end user account | No reason to do this
Kerberos            | NTLM with anonymous logon        | Kerberos delegation with domain end user account

 

The interesting thing is that Basic authentication can also provide per-user security with local end user accounts. However, you would need to maintain the list of local users in the PI Web API container and the PI Data Archive container separately, which is not recommended. The ideal approach is to go with Kerberos delegation.

ATTENTION Developers, Data Scientists, IT CyberSecurity, and Power Users.  For lack of a better word, I will refer to you collectively as "developers".

 

Developers coming to PI World Barcelona may notice more offerings than ever before.  We proudly proclaim that Barcelona will have the most robust Developer Agenda (see link) ever seen at UC or PI World EMEA.  This includes the Developer Innovation Hackathon on Day 0 (see link), thanks to our data sponsor DEME.

 

Besides the traditional hands-on labs, which require an additional fee and pre-registration, the Day 3 agenda is chock full of technical content aimed specifically at developers, thanks to 90-minute LiveCoding or How-To sessions. These in-depth technical talks require no fee and no pre-registration; you are free to come and go as you please. Make no mistake about it ... just because we call the LiveCoding sessions "talks" does not mean they contain less technical information than labs. We expect to offer even more such talks at future PI World events because we think it makes a better event for you. Our reasoning: you can attend 2 labs on Day 3 for $300, or you can sit in on 4 LiveCoding talks for free. Who can argue against more technical training for less cost? (Tip: if you are trying to convince your boss to send you to PI World, presenting it as a major training event (which it is) could be a strong justification to attend.)

 

I invite you once again to review the Day 3 agenda.  You will see an Analytics Track and Developer Track.  One late correction I would like to make is the PI Admin Track, which is not really for PI Admins but should be considered a 2nd Developer Track.


 

Day 3, Thursday September 27, 11:30 - 13:00

LiveCoding: Getting the Most Out of the New AFSearch

CCIB: Room 134 (originally 117), P1 Level

 

To any members of the PI Developer Community who will be at PI World Barcelona, you are invited to join me in a presentation on new features of AFSearch.  If you ask "Hey Rick, didn't you give this already in SF?", my answer would be "Yes BUT new sections were added to specifically cover some important NEW stuff."  PI AF 2018 R2 (AF SDK 2.10.5) will finally support OR conditions with AFSearch.  That is a highly anticipated new feature that many are looking forward to.  But in order to support OR conditions, it required replacing the older AFSearchToken structure with a new AFSearchTokenBase class that now has 4 different token instances.  Trust me, you will want to see how these new tokens will be used in code.  Everyone who has ever attended this talk has said they definitely learned something!

 

UPDATE: The room has been moved to 134 on Level P1.

Overview

Most of us have searched for PI Points, but as our PI System grows larger, or as more products like PI Connectors and Relays automatically create PI tags, it becomes imperative to understand how to narrow down and optimize search queries. You might have used the Tag Search dialog or simply copy-pasted sample queries provided in the examples and modified them to suit your needs. Most of the time these queries are intuitive to read and understand, but there are situations where we may need to utilize their full expressive power.

In this blog post we will explore the PIPoint query search syntax in PI AF SDK. We will have a deeper look into the syntax rules and parsing of queries, along with the wildcards, operators and aliases used in constructing a PI Point query string to find the desired PIPoint objects. The PIPoint Search Utility is used as an aid to accompany the examples shown in this blog post and demonstrate the syntactic and search aspects of query strings.

 

Let us look at some typical examples one might come across while performing tag searches and their query strings.

 

Below are some invalid queries. We need to be aware of the reasons that make them invalid so we can avoid such mistakes in the future.

 

Query Syntax

A query is one or more AND condition filters that are ORed together. Each AND condition contains one or more query items. A query item consists of a query filter name, an operator, and the query filter. This allows multiple conditions to be specified within a single query. The query syntax is described in Extended Backus-Naur Form (EBNF).

 

It is important to grasp the EBNF syntax rules in order to construct correct and effective queries. As we go along we will look at examples of how to do this and how to avoid potential pitfalls one may encounter with query strings. There are a large number of possible constructs filled with many nuances, but if we gain an understanding of some standard rules the task becomes a lot easier.

As an example, the query strings below (a non-exhaustive list) represent the exact same query even though they vary syntactically:

  • sin* AND PointType:Float
  • (tag:=sin* AND PointType:=Float16) OR (tag:=sin* AND PointType:=Float32) OR (tag:=sin* AND PointType:=Float64)
  • (sin* PointType:='Float16') OR (sin* PointType:='Float32') OR (sin* PointType:='Float64')
  • tag:=sin* AND PointType:Float
  • ("sin*" PointType:='Float16') OR ("sin*" PointType:='Float32') OR ("sin*" PointType:='Float64')

 

How can we parse a Query String?

Parsing can be viewed as decomposing a query string into separate conditions. Think of this as an 'exploded view' of the string where you can see how the individual components fit together. PIPointQuery is a structure in which the PIPoint attribute specified by the AttributeName in the query is compared to the query's AttributeValue using the search Operator. The PIPointQuery.ParseQuery method parses the query string into PIPointQuery lists, which can be used by the FindPIPoints(PIServer, IList<IEnumerable<PIPointQuery>>, IEnumerable<String>) method and also to verify the equivalence of search strings.

 

The example strings provided above would be transformed into the equivalent PIPointQuery list.

 

Note: Parsing the query string into PIPointQuery lists is shown in the examples in order to help understand the various aspects involved in query string parsing. In most cases this is not necessary if one gains a good understanding of the query syntax.

I highly recommend always using query strings, which are more compact and can be used both in code and in the Tag Search dialog for PI Point searches.
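
As a hedged illustration of both approaches (the server name and the query string are placeholders), parsing a query string and then running the search with AF SDK could look like this:

using System;
using System.Collections.Generic;
using OSIsoft.AF.PI;

class PIPointQueryExample
{
    static void Main()
    {
        // Connect to a PI Data Archive (the server name is illustrative).
        PIServer piServer = new PIServers()["MyPIDataArchive"];

        string query = "sin* AND PointType:Float";

        // Decompose the query string into its equivalent list of PIPointQuery conditions.
        IList<IEnumerable<PIPointQuery>> parsed = PIPointQuery.ParseQuery(piServer, query);

        // Execute the search, asking for the pointtype attribute to be loaded with each point.
        foreach (PIPoint point in PIPoint.FindPIPoints(piServer, parsed, new[] { "pointtype" }))
            Console.WriteLine("{0} ({1})", point.Name, point.GetAttribute("pointtype"));
    }
}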

 

CAUTION: Parsed Query does NOT mean Valid Query (Syntax vs Semantics)

If a query string is parsed successfully, it only indicates correct syntax. Syntactic correctness does not guarantee semantic validity. In this trivial example, it is easy to see that Float1234 is not a valid point type; however, it can still be parsed into a PIPointQuery structure because it conforms to the grammar rules.

 

The search performed using the query string will obviously fail as shown.

 

Use of Wild Card Characters

  • The string value of a filter can be enclosed in single quotes ('), double quotes ("), or without quotes. Quotations are required if non-escaped white space or quotation marks are desired within the filter string.
  • Single backslash (\) character is treated as a literal character unless followed by a wildcard character
  • Supported wild card characters are "*" to match any zero or more characters and "?" to match a single character. These characters cannot be escaped using the backslash ("\") character

 

Ex: Search tag names with pattern CD?1?8

 

Ex: Search all tags which have datasecurity of PI World (read or write, but not both) and which do not belong to point class with name starting with ba*

 

Alias Attribute Names

The following table lists the supported aliases for common PIPoint attribute names. These aliases can be used instead of the actual attribute name. The PICommonPointAttributes class contains the names of the common PIPoint attributes.

 

Ex: Query strings 1 & 2 use aliases producing the same results

 

Notice the equivalent parsing for aliases.

Personal preference: I avoid using the aliases. One less thing to remember or make mistakes with.

 

Operators

  • EqualOperator can be specified either by ":" or ":="
    • Personal preference: I use := to be consistent with the use of other operators
  • For a PIPointValueFilter "Value" query, if the PIPoint being queried is of String type, the LessThan, LessThanOrEqual, GreaterThan and GreaterThanOrEqual operators are not supported
  • For a PIPointValueFilter query with a Boolean value (i.e. "Substituted", "Questionable", "Annotated", "IsGood"), only Equal and NotEqual are supported
  • The In operator is not supported. It will be implicitly translated as a filter value
    • Name:"IN("abc", "def")" is implicitly translated to 'Tag:="IN("abc", "def")*"'

 

Syntax Rules: Cheat Sheet

  • AndOperator can be specified either by "AND" or <WHITESPACE>
    • Ex: AND is implied between pointtype and pointsource just by a space

         

  • EqualOperator  can be either  ":" or ":="

    

  • If a specific filter name is not specified, then the filter will default to the "Tag" filter and the operator will be "="

    

  • When a filter name is specified, no whitespace is allowed between the filter name, the ":" separator, and the optional operator.
    • If the operator is not specified, the default operator is "=".
  • If the type of a point attribute is DateTime, then the "TimeValue" format is supported for the filter value. This can be any recognized AFTimeString
  • Boolean can be specified by "True" or "False" or "1" or "0"
  • PointType:Float query is implicitly translated to 'PointType:=Float16 OR PointType:=Float32 OR PointType:=Float64'
  • PointType:Int query is implicitly translated to 'PointType:=Int16 OR PointType:=Int32'
  • Starting in AF 2017, it also supports querying based on PIPoint Value. OR condition is not supported if querying based on PIPoint value.
  • Queries with OR condition are not supported for PIPointValueFilters.

    

  • A filter name may only be referenced one time per AND condition of the query string.
    • This example would cause an error: PointId:>5 AND PointId:<10
  • It is possible to construct queries which include multiple attributes and query conditions

    

  • Certain PIPoint Attributes are specific to a PIPointClass (Eg. AutoAck is applicable to ALARM & SQC_ALARM)
    • See this attachment (ptclassattr.txt) for each PointClass attributes and their typical values
  • The Future point attribute is invalid for PI Data Archive versions earlier than 3.4.395
  • Security point attributes (e.g. "PtSecurity" and "DataSecurity") are invalid for PI Data Archive versions earlier than 3.4.380
  • Query strings are Case Insensitive
  • On improving readability
    • Don't use quotes unless you need them; single quotes are better than double
    • Don't use parentheses unnecessarily

 

 

Additional searches options

SearchNameAndDescriptor

If True and the Tag attribute name is specified and the Descriptor attribute name is not specified in the query, then both of these attributes will be searched using the Tag query value

 

AFSearchTextOption

Indicates the text search option to be applied to the search pattern.

  1. StartsWith
  2. Contains
  3. ExactMatch
  4. EndsWith

 

Tag Search Dialog in AF Explorer

A good way to perform these same searches and check the queries used in your application is through the Tag Search dialog in AF Explorer. You can open it from Elements view -> Search -> Tag Search.

 

 

Additional search criteria can be specified through the UI. However, not all attributes are available there; some can only be supplied when using the search string.

 

Bonus: Peek into PI Server

AF SDK makes a remote procedure call to the PI Server (PI Base Subsystem) which takes in the search parameters and returns the requested PI Points along with additionally specified attributes.

As a bonus you can run piartool -thread pibasess -history in your PI Server command line to track the RPC and see the number of points returned and the amount of time it took for it to run.

Example RPC output: 4452, 0, 14-Aug-18 13:37:01.63263, 1, piptsdk|1|GetPoints, 544, Return Count: 55. Returned Status: [0] Success


PIPoint Search Utility

Posted by tramachandran Employee Sep 5, 2018

Overview

This console utility was developed to demonstrate the PIPoint search syntax in AF SDK and as an aid to accompany the examples shown in the blog post PIPoint Search Query in AF SDK.

As a standalone tool, it provides a quick way to perform searches and verify the syntactic correctness of query strings.

 

Usage

 

0. Connect to PI Data Archive

This is required for both searching for PI Points and Parsing Query Strings as certain attributes depend on the version of the server.

 

1. Search PI Points using a Query String

Output columns: Tag name, Point ID, PointType, PointClass

 

2. Parse Query Strings into individual PI Point Queries

 

3. Specify SearchNameAndDescriptor

If true and the Tag attribute name is specified and the Descriptor attribute name is not specified in the query, then both of these attributes will be searched using the Tag query value. Default = false

 

4. Specify AFSearchTextOption

Indicates the text search option to be applied to the search pattern. Default = StartsWith

 

Source Code and Download

GitHub: GitHub - ThyagOSI/PIPointSearchSyntax

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In my previous blog post on the AF Server container health check, I talked about implementing a health check for the AF Server container. Naturally, we also have to discuss such a check for the PI Data Archive container. For an introduction to what a health check is and how you can integrate one with Docker, please refer to the previous blog post, as I won't be repeating it here.

 

In part 1, I will be covering the definition of the health tests that we can do for the PI Data Archive and then we will hook them up in the Dockerfile.

In part 2, we will be doing something interesting with these health check enabled containers by using another container that I wrote to inform us by email whenever there is a change in their health status so that we are aware when things fail.

 

Without further ado, let's jump into the definition of the health tests for the PI Data Archive container!

 

Define health tests

There are 2 tests that we will be performing. The first test checks port 5450 to determine whether any service is listening on that port. The second test uses piartool to block for some essential subsystems of the PI Data Archive with a fixed timeout, so the test fails if it exceeds that timeout.

 

The Powershell cmdlet Get-NetTCPConnection can accomplish the first check for us. A return value of null means that there is no service listening on port 5450.

The relevant code is below

$val = Get-NetTCPConnection -LocalPort 5450 -State Listen -ErrorAction SilentlyContinue
if ($val -eq $null)
{
      # return 1: unhealthy - the container is not working correctly
      Write-Host "Failed: No TCP Listener found on 5450"
      exit 1
}

 

Next, piartool is a utility located in the adm folder in the PI Data Archive home directory. It has an option called "block" which waits for the specified subsystem to respond. This command is also used in the PI Data Archive start scripts to pause the script until the subsystem is available. The subsystems that we are going to check are in the following list.

$SubsystemList = @(
   @("pibasess", "PI Base Subsystem"),
   @("pisnapss", "PI Snapshot Subsystem"),
   @("piarchss", "PI Archive Subsystem"),
   @("piupdmgr", "PI Update Manager")
)

 

We are going to change the amount of time that we allow for each check to 10 seconds so that we do not have to wait 1 hour for it to complete. We will also grab the start and end times so that we can provide detailed logging for troubleshooting purposes. The code for this is below.

function Block-Subsystem
{
    Param ([string]$Name, [string]$DisplayName, [int]$TimeoutSeconds = 10)
    $StartDate = Get-Date
    $rc = Start-Process -FilePath "${env:PISERVER}\adm\piartool.exe" -ArgumentList @("-block", $Name, $TimeoutSeconds) -Wait -PassThru -NoNewWindow
    $EndDate = Get-Date
    if ($rc.ExitCode -ne 0)
    {
        echo ("Block failed for {0} with exit code {1}, block started: {2}, block ended: {3}" -f $DisplayName, $rc.ExitCode, $StartDate, $EndDate)
        exit 1
    }
}

ForEach ($Subsystem in $SubsystemList) {Block-Subsystem -Name $Subsystem[0] -DisplayName $Subsystem[1] -TimeoutSeconds 10}

 

Integrate into Docker

We will add this line of code to our Dockerfile to make Docker start performing health checks.

HEALTHCHECK --start-period=60s --timeout=60s --retries=1 CMD powershell .\check.ps1

 

The start period is given as 60 seconds to allow the PI Data Archive to start up and initialize properly before the health check results are taken into account. A timeout of 60 seconds is given for the entire health check to complete; if it takes longer than that, the health check is deemed to have failed. I also allowed only 1 retry, which means that the health check is unsuccessful if the first try fails. There is no second chance!

 

Build the image

As usual, you will have to supply the PI Server 2018 installer and pilicense.dat yourself. The rest of the files can be found here.

elee3/PI-Data-Archive-container-build

 

Put all the files into the same folder and run the build.bat file.

Once your image is built, you can create a container.

docker run -h pi --name pi -e trust=%computername% pidax:18

 

Now check docker ps. The health status should be starting.

 

After about 1 minute, run docker ps again. The health status should now be healthy.

 

Health monitoring

Now that we have a health check enabled container up and running, we can start to do some wonderful things with it. If you are a PI administrator, don't you wish there was some way to keep tabs on your PI Data Archive's health, so that if it fails an email is sent to notify you that it is unhealthy? This way, you won't get a shock the next time you check on your PI Data Archive and realize that it has been down for a week!

 

I have written an application that can help you monitor ANY health check enabled containers (i.e. not only the PI Data Archive container and the AF Server container, but any container that has a health check enabled) and send you an email when they become unhealthy. We can start the monitoring with just one simple command. You should change the following variables

 

Name of your SMTP server: <mysmtp>

Source email: <admin@osisoft.com>

Destination email: <operator@osisoft.com>

 

to your own values.

 

docker run --rm -id -h test --name test -e smtp=<mysmtp> -e from=<admin@osisoft.com> -e to=<operator@osisoft.com> elee3/health

 

Once the application is running, we can test it by trying to break our PI Data Archive container. I will do so by stopping the PI Snapshot Subsystem since it is one of the services that is monitored by our health check. After a short while, I received an email in my inbox.

 

Let me check docker ps again.

 

The health status of docker ps corresponds to what the email has indicated. Notice that the email even provides us with the health logs so that we know exactly what went wrong. This is so useful. Now let me go back and start the PI Snapshot Subsystem again. The monitoring application will inform me that my container is healthy again.

 

The latest log at 2:30:47 PM has no output which indicates that there are no errors. The logs will normally fetch the 5 most recent events.

 

With the health monitoring application in place, we can now sleep in peace and not worry about container failures which go unnoticed.

 

Conclusion

In addition to what I have shown here, I want to mention that the health tests can be defined by the users themselves. You do not have to use the implementation provided by me. This level of flexibility is very important since health is a subjective topic: you might think a BMI of 25 is OK, but the official recommendation from the health hub is 23 and below. Therefore, the ability to define your own tests and thresholds will help you receive the right notifications for your own environment. You can hook them up during docker run. Here is more information if you are interested.

 

Source code for health monitoring application is here.

elee3/Health-Monitor

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In a complex infrastructure which spans several data centers and has multiple dependencies with minimum service up-time requirements, it is inevitable that services can still fail occasionally. The question then is how we can manage that in order to continue to maintain a high availability environment and keep downtime as low as possible. In this blog post, we will be talking about how we can implement a health check in the AF Server container to help with that goal.

 

What is a health check?

A container that is running doesn't necessarily mean that it is working, i.e. performing the service that it is supposed to provide. In Docker Engine 1.12, a new HEALTHCHECK instruction was added to the Dockerfile so that we can define a command that verifies the state of health of the container. It is the same concept as a health check for humans, such as making sure that your liver or kidneys are working properly and taking preventive measures before things get worse. In the container scenario, the exit code of the command determines whether the container is operational and doing what it is meant to do.

 

In the AF Server context, we need to think about what it means for the AF Server to be 'healthy'. Luckily for us, AF Server includes a Windows PerfMon counter called AF Health Check that indicates the health status. If both the AF application service and the SQL Server are running and responding, this counter returns a value of 1. Another way we can check for health is to check whether a service is listening on port 5457, since AF Server uses that port. We can also test whether the service is running. Including all of these tests will make our health check more robust.

 

Define health tests

For the first measure of health, we will be using the Get-Counter Powershell cmdlet to read the value of the performance counter. A healthy AF Server is shown below.

A value of 1 indicates that the AF Server and SQL Server are healthy while 0 means otherwise.

 

The second measure of health is to test for a service listening on port 5457. We will use the Powershell cmdlet Get-NetTCPConnection to do so.

When there is no listener on port 5457, we will get an error.

 

The third measure of health is to check if the service is running by using the Get-Service Powershell cmdlet.

 

Integrate into Docker

With the health tests on hand, how can we ask Docker to perform these tests? The answer is to use the HEALTHCHECK instruction in the Dockerfile to instruct the Docker Engine to carry out the tests at regular intervals that can be defined by the image builder or the user. The syntax of the instruction is

 

HEALTHCHECK [OPTIONS] CMD command

 

The options that can appear before CMD are:

  • --interval=DURATION (default: 30s)
  • --timeout=DURATION (default: 30s)
  • --start-period=DURATION (default: 0s)
  • --retries=N (default: 3)

 

For more information on what the options mean, please look here.

I will be using a start-period of 10s to allow the AF Server some time to initialize before starting the health checks. The other options I will leave as default. The user of the image can still override these options during docker run.

 

The command’s exit status indicates the health status of the container. The possible values are:

  • 0: success - the container is healthy and ready for use
  • 1: unhealthy - the container is not working correctly
  • 2: reserved - do not use this exit code

 

The command runs a PowerShell script containing the aforementioned tests. The instruction will therefore look like this.

HEALTHCHECK --start-period=10s CMD powershell .\check.ps1

 

Here are the contents of check.ps1

#test for service listening on port 5457
Get-NetTCPConnection -LocalPort 5457 -State Listen -ErrorAction SilentlyContinue|out-null
if ($? -eq $false)
{
write-host "No one listening on 5457"
exit 1
}

#test if AF service is running
$status = Get-Service afservice|select -expand status
if ($status -ne "Running")
{
write-host "PI AF Application Service (afservice) is $status."
write-host "PI AF Application Service (afservice) is not running."
exit 1
}

#test for AF Server Health Counter
$counter = get-counter "\PI AF Server\Health"|Select -Expand CounterSamples| Select -expand CookedValue;
if ($counter -eq 0)
{
write-host "The health counter is $counter. This might mean either"
write-host "1. SQL Server is non-responsive"
write-host "2. SQL Server is responding with errors"
exit 1
}

 

Usage

The container image elee3/afserver:18x has been updated with the health check ability. After pulling it from the Docker repository with

docker pull elee3/afserver:18x

 

You can have some fun with it. Let me spin up a new AF Server container based on the new image.

docker run -d -h af18 --name af18 elee3/afserver:18x

 

Now, let's do a

docker ps

 

Notice that my other container af17, which is based on the elee3/afserver:17R2 image, doesn't have any health status next to its status because a health check was not implemented for it, while container af18 indicates "(health: starting)". Let's run docker ps again after waiting for a little while.

Notice that the health status has changed from 'starting' to 'healthy' after the first test, which runs interval seconds (configured in the options) after the container is started.

 

We can also do

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -expandproperty log

to see the health logs.

 

Health event

When the health status of a container changes, a health_status event is generated with the new status. We can observe that using docker events. We will now intentionally break the container by stopping the SQL Server service and trying to connect with PSE.

This is expected. Now let us check using docker events which is a tool for getting real time events from the Docker Engine.

 

We can do a filter on docker events to only grab the health_status events for a certain time range so that we do not need to be concerned with irrelevant events. Let us grab those health_status events for the past hour for my container af18.

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time

 

Also check on

docker ps

 

and also docker inspect which can give us clues on what went wrong.

docker inspect af18 -f "{{json .State.Health}}"|ConvertFrom-Json|select -expand log|fl

 

With the health check, it is now obvious that even though the container is running, it doesn't work when we try to connect to it with PSE.

We shall restart the SQL Server service and try connecting with PSE. We can check if the container becomes healthy again by running

 

docker ps

and

(docker events --format "{{json .}}" --filter event=health_status --filter container=af18 --since 1h --until 1s) | ConvertFrom-Json|ForEach-Object -Process {$_.time = (New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0).addSeconds($_.time).tolocaltime();$_}|select status,from,time

As expected, a new health_status event is generated which indicates healthy.

 

Conclusion

We can leverage the health check mechanism further with a container orchestrator such as Docker Swarm, which can detect an unhealthy container and automatically replace it with a new, working one. This will be discussed in a future blog post, so stay tuned!

msingh

Stream Updates in PI Web API

Posted by msingh Employee Aug 22, 2018

What is Stream Updates?

Stream Updates is a mechanism in PI Web API for retrieving data updates for PI Points and Attributes. It is built on top of Streams and StreamSets, which use the HTTP protocol. Stream Updates uses markers to record the position in a stream where the client last received updates; the client passes the marker back to receive only what has changed since that point. Every time you request updates, you get the changes since the marker you supplied, along with a new link (and marker) to use for the next request. Currently, Stream Updates is only available as a CTP feature with PI Web API 2018.

 

Why was Stream Updates added?

Before Stream Updates, retrieving new data for PI points and attributes was not very efficient. The only way to get new data was to continually issue requests to find out about changes (polling), which was time-consuming and inefficient. We had lots of tweaks and options to make the overall experience less time-consuming, but in order to achieve better performance we decided to support incremental updates.

 

Why use Stream Updates?

Stream Updates is built to overcome some basic challenges with getting incremental data. It operates over plain HTTP, which means all the usual benefits of normal HTTP requests are present. This contrasts with Channels, which are implemented over the WebSockets protocol. In most cases, Stream Updates is more performant than Channels.

Response sizes are much smaller than with continuous polling because only the changes are returned instead of the whole response all over again. Stream Updates also consumes fewer server and network resources than polling. Unlike Channels, Stream Updates is compatible with claims-based authentication.

 

How to use Stream Updates?

Stream Updates usage consists of these steps:

1. The client registers an attribute or point for updates by sending a POST request.

2. If registration succeeds, the client retrieves updates by using the marker returned in the registration response. The marker is also provided in the "Links" object of the response body and in the "Location" header. The client gets updates by sending a GET request with this marker.

3. Each updates response contains a "LatestMarker", which is the current position in the stream. The client can get updates after this position by sending further GET requests with this new marker. These requests can be chained together to get incremental updates for registered resources.
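If you would like to try these two calls quickly before wiring up a full client, here is a minimal PowerShell sketch. It assumes your PI Web API instance accepts your current Windows credentials, that "https://your-server/piwebapi" and the WebId are placeholders you replace, and that updates are retrieved from /streams/updates/{marker} (the same route the C# client below constructs).

$base = "https://your-server/piwebapi"
$webId = "webId"   #WebId of the PI point or attribute to register

#1. register the stream for updates and remember the starting marker
$reg = Invoke-RestMethod -Method Post -Uri "$base/streams/$webId/updates" -UseDefaultCredentials
$marker = $reg.LatestMarker

#2. poll for incremental updates, chaining the LatestMarker from each response
while ($true) {
    $update = Invoke-RestMethod -Method Get -Uri "$base/streams/updates/$marker" -UseDefaultCredentials
    $update | ConvertTo-Json -Depth 5
    $marker = $update.LatestMarker
    Start-Sleep -Seconds 10
}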

 

Sample CSharp client illustrating usage of Stream Updates:

`PIWebAPIClient.cs` is a wrapper around the `HttpClient` which implements the GET and the POST methods.

 

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class PIWebAPIClient
  {
       private HttpClient client;
       private string baseUrl;

       public PIWebAPIClient(string url, string username, string password)
       {
            client = new HttpClient();
            string auth = Convert.ToBase64String(Encoding.ASCII.GetBytes(string.Format("{0}:{1}", username, password)));
            client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", auth);
            baseUrl = url;
       }

       public PIWebAPIClient(string url)
       {
            client = new HttpClient();
            baseUrl = url;
       }

       public async Task<object> GetAsync(string uri)
       {
            HttpResponseMessage response = await client.GetAsync(uri);
            var jsonString = await response.Content.ReadAsStringAsync();
            var json = JsonConvert.DeserializeObject<object>(jsonString);
            if (!response.IsSuccessStatusCode)
            {
                 var responseMessage = "Response status code does not indicate success: " + (int)response.StatusCode + " (" + response.StatusCode + " ). ";
                 throw new HttpRequestException(responseMessage + Environment.NewLine + jsonString);
            }
            return json;
       }

       public async Task<object> PostAsync(string uri)
       {
            HttpResponseMessage response = await client.PostAsync(uri, null);
            var jsonString = await response.Content.ReadAsStringAsync();
            var json = JsonConvert.DeserializeObject<object>(jsonString);
            if (!response.IsSuccessStatusCode)
            {
                 var responseMessage = "Response status code does not indicate success: " + (int)response.StatusCode + " (" + response.StatusCode + " ). ";
                 throw new HttpRequestException(responseMessage + Environment.NewLine + jsonString);
            }
            return json;
       }

       public async Task<dynamic> RegisterForStreamUpdates(string webId)
       {
            string url = baseUrl + "/streams/" + webId + "/updates";
            dynamic response = await PostAsync(url);  
            return response;
       }

       public async Task<dynamic> RetrieveStreamUpdates(string marker)
       {
             string url = baseUrl + "/streams/updates/" + marker;
            dynamic response = await GetAsync(url);
            return response;
       }
  }

`Program.cs` is a simple C# class which uses the Client to register for and retrieve Stream Updates.

 

using System;

class Program
 {
    static string baseUrl = "https://your-server/piwebapi";
    static string marker = null;
    static string username = "username";
    static string password = "password";
    static PIWebAPIClient client = new PIWebAPIClient(baseUrl, username, password);

    static void Main(string[] args)
    {
        string webId = "webId";
        dynamic response = client.RegisterForStreamUpdates(webId).Result;
        marker = response.LatestMarker;

        var startTimeSpan = TimeSpan.Zero;
        var periodTimeSpan = TimeSpan.FromSeconds(10);
       
       //ReceiveUpdates is called every 10 seconds until the client explicitly exits the application.
        var timer = new System.Threading.Timer((e) =>
        {
            ReceiveUpdates(marker);
        } ,null, startTimeSpan, periodTimeSpan);

        Console.ReadLine();
    }

    public static void ReceiveUpdates(string currentMarker)
    {
        dynamic update = client.RetrieveStreamUpdates(currentMarker).Result;
        Console.WriteLine(update);
        Console.WriteLine("Press any key to exit anytime!");
        //store the latest marker in the static field so the next timer tick requests only newer events
        marker = update.LatestMarker;
    }
 }

For more information, see the topics page of your PI Web API installation: https://your-server/piwebapi/help/topics/stream-updates

One of the coolest things about Microsoft SQL Server in the last couple of years is how it has expanded from the confines of Windows Server and can now run on all three major desktop OSes, as well as sit in the cloud.

None of that expansion would have mattered much if downstream clients for SQL Server didn’t also expand their horizons to touch more platforms. And with Microsoft client tools for Linux and Mac, this is no longer an issue.

You can sneak PI Data through this mechanism

We can take advantage of Microsoft SQL Server and OSIsoft PI SQL by adding a linked server in SQL Server that forwards queries to the PI Server. From there we can build SQL views, which open a portal directly into both the PI Data Archive and PI AF. You can also combine your own data stored in SQL with your real-time data. Downstream applications will simply see normal, everyday recordsets.

Here’s a screenshot where I’ve used this technique to pull data directly into Microsoft Excel for Mac. Not only is this data fresh, but I can refresh the query in my worksheet just like I would do in Excel for Windows. The connection from the worksheet is going straight to SQL Server.

ExcelForMac.png

Setup Steps

Setup PI SQL Data Access Server (RTQP Engine)

Make sure you’ve installed the PI SQL Data Access Server (RTQP) Engine which is in your PI Server 2018 (and later) install kit:

RTQP Install.png

Grab PI SQL Client

Next you need to get the PI SQL Client kit and install this on the instance where your SQL Server is. You can grab it from the OSIsoft Technical Support Downloads page.

Configure the PI SQL Client provider in Microsoft SQL Server Enterprise Manager

Hop over to SQL-EM and modify the linked server provider to ensure these options are switched on:

pisqlsetup.png

Create a Linked Server connection

By right-clicking the Linked Servers folder in SQL-EM you can set up any number of linked server connections. Typically, you will want one linked server connection per AF database. Here I've set up a connection to NuGreen:

linkedserversetup.png

Now the fun part: queries!

First, let's go with a basic query that finds all the pumps in the NuGreen database:

Query

SELECT [ID]
   ,[Name]
   ,[Description]
   ,[Comment]
   ,[Revision]
   ,[HasChildren]
   ,[PrimaryPath]
   ,[Template]
   ,[Created]
   ,[CreatedBy]
   ,[Modified]
   ,[ModifiedBy]
   ,[TemplateID]
  FROM [PISERVER_TEST].[Master].[Element].[Element]
WHERE Name LIKE 'P%'
GO

Simple enough. This yields the following:

pumpssql.png

We can expose this as a view in a SQL database by wrapping the query in CREATE VIEW.

USE TEST
GO

CREATE VIEW REPORTING_PUMPS 
AS

SELECT [ID]
   ,[Name]
   ,[Description]
   ,[Comment]
   ,[Revision]
   ,[HasChildren]
   ,[PrimaryPath]
   ,[Template]
   ,[Created]
   ,[CreatedBy]
   ,[Modified]
   ,[ModifiedBy]
   ,[TemplateID]
  FROM [PISERVER_TEST].[Master].[Element].[Element]
WHERE Name LIKE 'P%'
GO

Now, when we select everything in the view we get:

reportingpumpsview.png

Perfect. Now that we have PI Server data we can pull this across to any application that can communicate to Microsoft SQL Server.
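As a quick illustration that any SQL-capable client can now read live PI data, here is a minimal PowerShell sketch (it assumes the SqlServer module is installed and "MYSQLSERVER" is a placeholder for the SQL Server instance hosting the TEST database and the linked server):

#query the view that wraps the linked-server call to the PI SQL Data Access Server
Invoke-Sqlcmd -ServerInstance "MYSQLSERVER" -Database "TEST" `
    -Query "SELECT Name, Template, Modified FROM dbo.REPORTING_PUMPS" |
    Format-Table Name, Template, Modified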

Importing PI Server data into Excel for Mac

Now that we have PI Server data exposed through Microsoft SQL Server, it is fairly painless to connect it to any downstream application that can read recordsets from there. Let's connect Excel for Mac to the view we set up.

SQLServerODBCMac.png

Microsoft Excel can import remote SQL datasets from the Data tab. From there, select New Database Query and then SQL Server ODBC.

odbcconnect.png

Insert your SQL Server credentials and authentication method and click Connect. A “Microsoft Query” window will then appear where you can enter a SQL statement to produce a recordset. It will follow the same syntax that you would use in SQL-EM.

From here I’ll select the contents of my view. Press Run to execute it on the server and inspect what comes back.

querywindow.png

Now you can press Return Data to deposit the results into your Excel worksheet. The connection to SQL Server is preserved in your worksheet when you save the Excel workbook. You can edit it and re-run the query by visiting the connections button that’s also on the Data ribbon.

connections.png

Data now refreshes on your terms in your Excel worksheet and your connection details are preserved between document openings.

Caveats

Read-only

Presently, the restrictions that existed with PI OLEDB Enterprise also apply to the latest PI SQL Client and PI SQL Data Access Server. You cannot write data back to your asset database via this connection type.

If you attempt to write, expect this error:

Msg 7390, Level 16, State 2, Line 35 The requested operation could not be performed because OLE DB provider “PISQLClient” for linked server “PISERVER_TEST” does not support the required transaction interface.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In my previous articles, I demonstrated using the AF Server container in local Docker host deployments. The implication is that you have to manage the Docker host infrastructure yourself: installation, patching, maintenance and upgrades are all done by you, manually. This is a significant barrier to getting up and running. As an analogy, imagine you visit another country for vacation and need to get from the airport to the hotel. Would it be better to buy a car (if they even sold one at the airport) and drive to the hotel, or just take a taxi (transport as a service)? The first option requires a much larger initial investment of time and money than the latter.

 

For quick demo, training or testing purposes, getting a Docker host infrastructure up and running requires effort (getting a machine with the right specifications, procuring an OS with Windows container capabilities, patching the OS so that you can use Docker, installing the right edition of Docker) and troubleshooting if things go south (errors during setup or services refusing to start). In the past, we had no other choice, so we just had to live with it. But in this era of cloud computing, using containers as a service can be a faster and cheaper alternative. Today, I will show you how to operate the AF Server container in the cloud using Azure Container Instances. The first service of its kind in the cloud, Azure Container Instances is a new Azure service that delivers containers with great simplicity and speed; it is a form of serverless containers.

 

Prerequisites

You will need an Azure subscription to follow along with the blog. You can get a free trial account here.

 

Azure CLI

Install the Azure CLI, which is a command-line tool for managing Azure resources. It is a small install. Once it is done, we need to log in:

az login

 

If the CLI can determine your default browser and has permission to open it, it will do so and take you straight to the sign-in page.

Otherwise, open a browser yourself, navigate to https://aka.ms/devicelogin and enter the authorization code shown on the command line.

 

Complete the sign-in via the browser and the CLI will show the subscriptions your account has access to.

 

Now set your default subscription if you have more than one. If your account has only one subscription, you can skip this step.

az account set -s <subscription name>

 

Create cloud container

We are now ready to create the AF Server cloud container. First create a resource group.

az group create --name resourcegrp -l southeastasia

You can change southeastasia to the location nearest you (use the lowercase region name without spaces, e.g. "Southeast Asia" becomes southeastasia).
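If you are not sure which location names are valid, you can list them from the CLI; the Name column shows the one-word form to use:

az account list-locations -o table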

 

Create a file named af.yaml. Replace <username> and <password> with the credentials for pulling the AF Server container image. There are some variables that you can configure

 

afname: The name that you choose for your AF Server.

user: Username to authenticate to your AF Server.

pw: Password to authenticate to your AF Server.

 

af.yaml

apiVersion: '2018-06-01'
name: af
properties:
  containers:
  - name: af
    properties:
      environmentVariables:
      - name: afname
        value: eugeneaf
      - name: user
        value: eugene
      - name: pw
        secureValue: qwert123!
      image: elee3/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  imageRegistryCredentials:
  - server: index.docker.io
    username: <username>
    password: <password>
  ipAddress:
    dnsNameLabel: eleeaf
    ports:
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups

 

Then run this in Azure CLI to create the container.

az container create --resource-group resourcegrp --file af.yaml

The command will return in about 5 minutes.

 

You can check the state of the container.

az container show --resource-group resourcegrp -n af --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table

 

You can check the container logs.

az container logs --resource-group resourcegrp -n af

 

Explore with PSE

You now have an AF Server container in the cloud that can be accessed ANYWHERE as long as there is internet connectivity. You can connect to it with PSE using the FQDN. The credentials to use are those that you specified in af.yaml.

Notice that the name of the AF Server is the value of the afname environment variable that was passed in af.yaml.
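If you only need the FQDN (with the sample af.yaml above it should take the form <dnsNameLabel>.<region>.azurecontainer.io, e.g. eleeaf.southeastasia.azurecontainer.io), you can query it directly:

az container show --resource-group resourcegrp -n af --query ipAddress.fqdn -o tsv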

 

Run commands in container

If you need to log in to the container to run commands such as afdiag, you can do so with

az container exec --resource-group resourcegrp -n af --exec-command "cmd.exe"

 

Clean up

When you are done with the container, you should delete it so that you won't be charged while it is not being used.

az container delete --resource-group resourcegrp -n af

You can check that the resource is deleted by listing your resources.

az resource list

 

Considerations

There are some tricks to hosting a container in the cloud to optimize its deployment time.

 

1. Base OS

The base OS should be one of the three most recent versions of Windows Server Core 2016. These are cached by Azure Container Instances, which helps deployment time. If you want to experience the difference, try pulling elee3/afserver:18 in the create-container command above. It takes about 13 minutes, more than twice the 5 minutes needed to pull elee3/afserver:18x. The reason is that the old image with the "18" tag is based on the public SQL Server image, which is 7 months old and doesn't have the latest OS version, so it cannot take advantage of the caching mechanism. I have rebuilt the image with the "18x" tag on top of my own SQL Server image with the latest OS version.

 

2. Image registry location

Hosting the image in Azure Container Registry, in the same region that you use to deploy your container, will improve deployment time because it shortens the network path the image has to travel and therefore the download time. Note that ACR, unlike Docker Hub, is not free. In my tests, it took about 4 minutes to deploy with ACR. A rough sketch of that workflow is shown below.
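In this sketch the registry name myregistry is a placeholder, and --admin-enabled is used so the credentials returned by az acr credential show can be dropped into the imageRegistryCredentials section of af.yaml (with server set to myregistry.azurecr.io and image set to myregistry.azurecr.io/afserver:18x):

az acr create --resource-group resourcegrp --name myregistry --sku Basic -l southeastasia --admin-enabled true
az acr login --name myregistry
docker tag elee3/afserver:18x myregistry.azurecr.io/afserver:18x
docker push myregistry.azurecr.io/afserver:18x
az acr credential show --name myregistry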

 

3. Image size

This one is obviously a no-brainer: the smaller the image, the faster the pull. That's why I am always looking to make my images smaller.

 

Another consideration is the number of containers per container group. In this example, we are creating a single-container group. The current limitation of Windows containers is that we can only create single-container groups. When this limitation is lifted, there are scenarios where I see value in multi-container groups, such as spinning up sets of containers that are complementary to each other, e.g. a PI Data Archive container, an AF Server container and a PI Analysis Service container in a 3-container group. However, for scenarios such as spinning up two AF Server containers, we should still keep them in separate container groups so that they won't fight for the same port.

 

Limitations

Kerberos authentication is not supported in a cloud environment. We are using NTLM authentication in this example.

 

Conclusion

Deploying the AF Server container to Azure Container Instances might not be as fast as deploying it to a local Docker host, but it avoids the upfront time and cost of setting up your own Docker host. This makes it ideal for demo/training/testing scenarios. The containers are billed per second, so you only pay for what you use. That is like only paying for your trip from the airport to the hotel, without having to pay anything extra.
