PI Developers Club

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

Until now, when installing PI interfaces on a separate node from the PI Data Archive, we needed to provision a separate physical or virtual machine just for the interface itself. Isn't that a bit of a waste of resources? To address this, we can containerize interfaces so that they become more portable, which allows them to be scheduled anywhere inside your computing cluster. Their batch file configuration also makes them good candidates for lifting and shifting into containers.

 

We will start off by introducing the PI to PI interface container which is the first ever interface container! It will have buffering capabilities (via PI Buffer Subsystem) and its performance counters will also be active.

 

Set up servers

First, let me spin up 2 PI Data Archive containers to act as the source and destination servers. Check out this link on how to build the PI Data Archive container.

PI Data Archive container health check

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h pi1 --name pi1 -e trust=%computername% pidax:18

 

For the source code to build the PI Data Archive container and the PI to PI Interface container, please send an email to technologyenablement@osisoft.com. This is a short-term measure to obtain the source code while we are revising our public code-sharing policies.

 

We shall be using pi1 as our source and pi as our destination.

 

Let's open up PI SMT to add the trust for the PI to PI Interface container. Do this on both PI Data Archives.

The IP address and NetMask are obtained by running ipconfig on your container host.

The reason I set the trusts this way is that the containers are guaranteed to spawn within this subnet, since they are attached to the default NAT network. Therefore, the 2 PI Data Archive containers and the PI to PI Interface container are all in this subnet. Container-to-container connections are bridged through an internal Hyper-V switch.
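
Incidentally, if you prefer not to read these values off ipconfig, the subnet of the default NAT network can also be queried from Docker directly (a small convenience; the exact subnet will vary per host):

docker network inspect nat -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}"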

 

On pi, create a PI Point, giving it any name you want (my PI Point shall be named 'cdtclone'). Configure the other attributes of the point as follows:

Point Source: pitopi
Exception: off
Compression: off
Location1: 1
Location4: 1
Instrument Tag: cdt158

 

Leave the other attributes as default. This point will be receiving data from cdt158 on the source server. This is specified in the instrument tag attribute.
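
If you would rather script the point creation instead of using PI SMT, a roughly equivalent piconfig input could look like the sketch below (the attribute names come from the classic point class and the values simply mirror the settings above; double-check any defaults I have assumed before relying on it):

@table pipoint
@ptclass classic
@mode create
@istr tag,pointsource,instrumenttag,location1,location4,compressing,excdev,excmax
cdtclone,pitopi,cdt158,1,1,0,0,0
@quit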

 

Set up interface

Now you are all set to proceed to the next step which is to create the PI to PI Interface container!

 

You can easily do so with just one command. Remember to login to Docker with the usual credentials.

docker run -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

The environment variables that you can configure include

host: destination server

src: source server

ps: point source

Those are all the parameters that are supported for now.

 

You should be able to see data appearing in the cdtclone tag on the destination server now.

 

Wasn't that quick and easy to get started?

 

Buffer

As I mentioned before, the container also has buffering capabilities. We shall consider 2 scenarios.

 

1. The destination server is stopped. This has the same effect as losing network connectivity to the destination server.

2. The PI to PI interface container is destroyed.

 

Scenario 1

Stop pi.

docker stop pi

 

Wait for a few minutes and run

docker exec p2p cmd /c pibufss -cfg

 

You should see the following output, which indicates that the buffer is working and actively queuing data in anticipation of the destination server coming back up.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 18:39:23, regid: 3
lastsend: 2-Nov-18 18:58:59
total events sent: 47, snapshot posts: 42, queued events: 8

 

When we start up pi again

docker start pi

 

Wait a few minutes before running pibufss -cfg again. You should now see

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: SendingData, successful connections: 2
PI identities: piadmins | PIWorld, auth type: SSPI
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 19:07:24, regid: 3
total events sent: 64, snapshot posts: 45, queued events: 0

 

The buffer has re-registered with the server and flushed the queued events to the server. You can check the archive editor to make sure the events are there.

 

Scenario 2

Stop pi just so that events will start to buffer.

docker stop pi

 

Check that events are getting buffered.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering


*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 13-Nov-18 15:25:07, lastreg: 13-Nov-18 15:25:08, regid: 3
lastsend: 13-Nov-18 17:54:14
total events sent: 8901, snapshot posts: 2765, queued events: 530

 

Now while pi is still stopped, stop p2p.

docker stop p2p

 

Check the volume name that was created by Docker.

docker inspect p2p -f "{{.Mounts}}"

 

Output as below. The volume name is the long hexadecimal string; save that name somewhere.

[{volume 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17 C:\ProgramData\docker\volumes\76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17\_data c:\programdata\osisoft\buffering local true }]
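
If you only want the volume name itself without the surrounding output, a Go template along these lines should return it directly (assuming the buffering volume is the only mount on the container):

docker inspect p2p -f "{{range .Mounts}}{{.Name}}{{end}}"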

 

Now you can destroy p2p and start pi

docker rm p2p
docker start pi

 

Use archive editor to verify that data has stopped flowing.

The last event was at 5:54:13 PM.

 

We want to recover the data that is in the buffer queue files. We can create a new PI to PI Interface container pointing to the saved volume name.

docker run -v 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

And VOILA! The events in the buffer queues have all been flushed into pi.

 

To be sure that the recovered events are not the result of history recovery by the PI to PI Interface container, I have disabled history recovery.

 

I have demonstrated that the events in the buffer queue files were persisted across container destruction and creation as the data was persisted outside the container.

 

 

Performance counters

The container also has performance counters activated. Let's try to get the value of Device Status. Run the following command in the container.

Get-Counter '\pitopi(_Total)\Device Status'

 

Output

Timestamp CounterSamples
--------- --------------
11/2/2018 7:24:14 PM \\d13072c5ff8b\pitopi(_total)\device status :0

 

A device status of 0 means healthy.

 

What if we stopped the source server?

docker stop pi1

 

Now run the Get-Counter command again; this time we expect to see

Timestamp CounterSamples
--------- --------------
11/2/2018 7:29:29 PM \\d13072c5ff8b\pitopi(_total)\device status :95

 

A device status of 95 means a network communication error to the source PI server.

 

These performance counters will be perfect for writing health checks against the interface container.
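
As a minimal sketch of what such a check could look like, the PowerShell below exits with 0 when the interface reports a healthy device status and 1 otherwise; it could be wired into a Dockerfile HEALTHCHECK instruction or an external monitoring script (the "0 means healthy" criterion is based on the counter values shown above):

# Probe the interface's Device Status performance counter
$status = (Get-Counter '\pitopi(_Total)\Device Status').CounterSamples[0].CookedValue
if ($status -eq 0) { exit 0 } else { exit 1 }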

 

Conclusion

We have seen in this blog how to use the PI to PI Interface container to transfer data between two PI Data Archive containers. As you know, OSIsoft has hundreds of interfaces. Being able to containerize one means the likelihood of successfully containerizing others is very high. The example in this blog will serve as a proof of concept.

Dear PI Developer's Club Users:

 

OSIsoft LLC is currently revising and consolidating its GitHub presence and content-sharing policies. This change in policy has caused many links to suddenly become broken for the time being.  We are working diligently to resolve as many of these as we can once we have clarified our new code-sharing approach.

 

We apologize for the inconvenience of such broken links, and equally important, the unavailability of our learning samples that many customers and partners have found to be valuable.  Until we have transitioned to our new policies, such links and repositories will remain unavailable.

 

This announcement supersedes any previous policy stated at:

 

 

 

 

 

Thank you for your patience and understanding.

 

Michael Sloves, Director of Technology Enablement

 

 

UPDATE November 13

 

Regarding the PI Web API client libraries previously available on GitHub, we will allow access to the source code, provided you first agree and accept some terms and conditions.  Any request to the client libraries may be made to technologyenablement@osisoft.com.

 

Rick Davin, Team Lead Technology Enablement

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

We have learnt much about using containers in previous blog posts. Until now, we have been working with standalone containers, which is great for familiarizing yourself with the concept of containers in general. Today, we shall take the next step in our container journey: learning how to orchestrate these containers. There are several container orchestration platforms on the market today, such as Docker Swarm, Kubernetes, Service Fabric and Marathon. I will be using Docker Swarm to illustrate the concept of orchestration since it is directly integrated with the Docker Engine, making it the quickest and easiest to set up.

 


 

Motivation

Before we even start on the orchestration journey, it is important that we understand the WHY behind it. For someone who is new to all this, the objective might not be clear, so let me illustrate with two analogies: one that a layman can understand and another that a PI admin can relate to.

 

First analogy

Suppose your hobby is baking cakes (containers). You have been hard at work in your kitchen trying to formulate the ultimate recipe (image) for the best chiffon cake in the world. One day, you managed to bake a cake with the perfect taste and texture after going through countless rounds of trial and error of varying the temperature of the oven, the duration in the oven, the amount of each type of ingredient etc. Your entrepreneurial friend advises you to open a small shop selling this cake (dealing with standalone containers in a single node). You decided to heed your friend's advice and did so. Over the years, business boomed and you want to expand your small shop to a chain of outlets (cluster of nodes). However, you have only one pair of hands and it is not possible for you to bake all the cakes that you are going to sell. How are you going to scale beyond a small shop?

Luckily, your same entrepreneurial friend found a vendor called Docker Inc who can manufacture a system of machines (orchestration platform) where you install one machine in each of your outlet stores. These machines can communicate with each other, and they can take your recipe and bake cakes that taste exactly the same as the ones that you baked yourself. Furthermore, you can let the machines know how many cakes to bake each hour to address different levels of demand throughout the day. The machines even have a QA tester at the end of the process to check whether each cake meets its quality criteria and will automatically discard cakes that fail, replacing them with new ones. You are so impressed that you decide to buy this system and start expanding your cake empire.

 

Second analogy

Suppose you are in charge of the PI System at your company. Your boss has given you a cluster of 10 nodes. He would like you to build an AF Server service spanning this cluster that has the following capabilities:

1. able to adapt to different demands to save resources

2. self-healing to maximize uptime

3. rolling system upgrades to minimize downtime

4. easy to upgrade to newer versions for bug fixes and feature enhancements

5. able to prepare for planned outages needed for maintenance

6. automated roll out of cluster wide configuration changes

7. manage secrets such as certificates and passwords for maximum security

How are you going to fulfill his crazy demands? This is where a container orchestration platform might help.

 

Terminology

Now let us get some terminology clear.

 

Swarm: A swarm consists of multiple Docker hosts which run in swarm mode and act as managers and workers. A given Docker host can be a manager, a worker, or perform both roles.

Manager: The manager delivers work (in the form of tasks) to workers, and it also manages the state of the swarm to which it belongs. Managers can also run the same services as workers, or you can configure them to run only manager-related services.

Worker: Workers run tasks distributed by the swarm manager. Each worker runs an agent that reports back to the manager about the state of the tasks assigned to it, so the manager can keep track of the work running in the swarm.

Service: A service defines which container images the swarm should use and which commands the swarm will run in each container. For example, it’s where you define configuration parameters for an AF Server service running in your swarm.

Task: A task is a running container which is part of a swarm service and managed by a swarm manager. It is the atomic scheduling unit of a swarm.

Stack: A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

 

There are two types of service.

Replicated: The swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state.

Global: The swarm manager runs one task for the service on every available node in the cluster.
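
As a concrete illustration of the two modes (the service name and image below are placeholders, not part of this walkthrough), the mode is chosen when the service is created:

docker service create --name myservice --replicas 3 myimage
docker service create --name myservice --mode global myimage

The first command asks Swarm to maintain exactly 3 replica tasks spread across the nodes; the second runs exactly one task on every node in the cluster.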

 

Prerequisites

To follow along with this blog, you will need two Windows Server 2016 Docker hosts. Check out how to install Docker in the Containerization Hub link above.

 

Set up

Select one of the nodes (we will call it "Manager") and run

docker swarm init

 

This will output the following

 

Swarm initialized: current node (vgppy0347mggrbam05773pz55) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

 

Now select the other node (we will call it "Worker") and run the command that was output by the previous step.

docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

 

Go back to Manager and run

docker node ls

 

to list out the nodes that are participating in the swarm. Note that this command only works on manager nodes.

 

Service

Now that the nodes have been provisioned, we can start to create some services.

 

For this blog, I will be using a new AF Server container image that I have recently developed, tagged 18s. If you have been following my series of blogs, you might be curious about the difference between the tag 18x (last seen here) and 18s. With 18s, the data is now separated from the AF Server application service. What this means is that the PIFD database mdf, ndf and ldf files are now mounted in a separate data volume. The result is that on killing the AF Server container, the data won't be lost, and I can easily recreate an AF Server container pointing to this data volume to keep the previous state. This will be useful in future blogs on container fail-over with data persistence.

 

You will need to login with the usual docker credentials that I have been using in my blogs. To create the service, run

 

docker service create --name=af18 --detach=false --with-registry-auth elee3/afserver:18s

 

Note: If --detach=false is not specified, the tasks are updated in the background and the command returns immediately. When it is specified, the command waits for the service to converge before exiting. I specify it so that I can get some visual output.

 

Output

goa9cljsek42krqgvjtwdd2nd
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Waiting 6 seconds to verify that tasks are stable...

 

Now we can list the service to find out which node is hosting the tasks of that service.

 

docker service ps af18

 

Once you know which node is hosting the task, go to that node and run

 

docker ps -f "name=af18."

 

Output

CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS                        PORTS               NAMES
9e3d26d712f9        elee3/afserver:18s   "powershell -Comma..."   About a minute ago   Up About a minute (healthy)                       af18.1.w3ui9tvkoparwjogeg26dtfz

 

The output shows the list of containers that the swarm service has started for you. Let us look at the network that the container belongs to by inspecting it with the container ID.

 

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks}}"

 

Output

map[nat:0xc0420c0180]

 

The output indicates that the container is attached to the nat network by default if you do not explicitly specify a network to attach to. This means that your AF Server is accessible from within the same container host.

 

You can get the IP address of the container with

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks.nat.IPAddress}}"

 

Then you can connect with PSE using the IP address. It is also possible to connect with the container ID as the container ID is the hostname by default.

 

 

Now that we have a service up and running, let us take a look at how to change some of its configuration. In the previous image, the name of the AF Server derives from the container ID, which is a random string. I would like to give it the name 'af18'. I can do so with

 

docker service update --hostname af18 --detach=false af18

 

Once you execute that, Swarm will stop the current task that is running and reschedule it with the new configuration. To see this, run

 

docker service ps af18

 

Output

 

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
llueiqx8ke86        af18.1              elee3/afserver:18s   worker           Running             Running 8 minutes ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 9 minutes ago

 

During rescheduling, it is entirely possible for Swarm to shift the container to another node. In my case, it shifted from master to worker. It is possible to ensure that the container will only be rescheduled on a specific node by using a placement constraint.

 

docker service update --constraint-add node.hostname==master --detach=false af18

 

We can check the service state to confirm.

 

docker service ps af18

 

Output

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
r70qwri3s435        af18.1              elee3/afserver:18s   master            Running             Starting 9 seconds ago
llueiqx8ke86         \_ af18.1          elee3/afserver:18s   worker           Shutdown            Shutdown 9 seconds ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 2 hours ago

 

Now, the service will only get scheduled on the master node. You will now be able to connect with PSE on the master node using the hostname 'af18'.

 

When you are done with the service, you can remove it.

docker service rm af18

 

Conclusion

In this article, we have learnt how to set up a 2-node Swarm cluster consisting of one master and one worker. We scheduled an AF Server swarm service on the cluster and updated its configuration without needing to recreate the service. The Swarm takes care of scheduling the service's tasks on the appropriate node; we do not need to do it manually ourselves. We also saw how to control the placement of the tasks by adding a placement constraint. In the next part of the Swarm series, we will take a look at Secrets and Configs management within Swarm. Stay tuned for more!

Introduction

Lately, I've been experimenting with Microsoft Power BI and I'm impressed by how mature the tool is. The application not only supports a myriad of data sources, but there is now even a Power Query SDK that allows developers to write their own Data Connectors. Of course, the majority of my experiments use data from PI Points and AF Attributes and, because Power BI is very SQL oriented, I end up using PI OLEDB Enterprise most of the time. But let's face it: writing a query can be tricky and is not a required skill for most Power BI users. So I decided to create a simple PI Web API Data Connector for Power BI. The reason I decided to use PI Web API is that the main use case for a Data Connector is to "create a business analyst friendly view for a REST API". Also, there's no need to install additional data providers.

 

Important Notes

Keep in mind that this tool is meant to help in small experiments where a PI and Power BI user wants to get some business intelligence done on PI data quickly. For production use cases, where there is a serious need for shaping data views or where scale is of importance, we highly recommend the use of the PI Integrator for Business Analytics. Also, to avoid confusion, let me be clear that this is not a PI Connector. Microsoft named these extensions to Power BI "Data Connectors" and they are not related to our PI Connector products.

 

The custom Data Connector is a Power BI beta feature, so it may break with the release of newer versions. I will do my best to keep it updated but, please, leave a comment if something is not working. It is also limited by Power BI's capabilities, which means it currently only supports basic and anonymous authentication for web requests. If the lack of Kerberos support is a no-go for you, please refer to this article on how to use PI OLEDB Enterprise.

 

Preparation

If you are using the latest version of Power BI (October 2018), you should enable custom data connectors by going to File / Options and Settings / Options / Security and then lowering the security for Data Extensions to "Allow any extension to load".

 

2018-10-17 10_48_09-.png

 

For older versions of Power BI, you have to enable Custom data connectors. The setting is under File / Options and Settings / Options / Preview features / Custom data connectors.

 

2018-06-28 09_12_56-.png

 

This should automatically create a [My Documents]\Power BI Desktop\Custom Connectors folder. If it's not there, you can create it yourself. Finally, download the file (PIWebAPIConnector.mez) at the end of this article, extract it from the zip and manually place it there. If you have a Power BI instance running, you need to restart it before using this connector. Keep in mind that custom data connectors were introduced in April 2018, so versions before that will not be able to use this extension.

 

How to use it

You first have to add a new data source by clicking Get Data / More and finding the PI Web API Data Connector under Services.

 

2018-10-17 10_59_21-.png

Once you click the Connect button, a warning will pop up to remind you that this is a preview connector. Once you acknowledge it, a form will be presented that you must fill in with the appropriate data:

 

2018-10-17 11_09_35-.png

 

Here's a short description of the parameters:

 

  • PI Web API Server: the URL where the PI Web API instance is located. Allowed values: a valid URL, e.g. https://server/piwebapi
  • Retrieval Method: the method used to get your data. Allowed values: Recorded, Interpolated
  • Shape: the shape of the table. List is a row for every data entry, while Transposed is a column for every data path. Allowed values: List, Transposed
  • Data Path: the full path of a PI Point or an AF Element. Multiple values are allowed, separated by a semicolon (;). Example: \\PIServer\Sinusoid;\\AFServer\DB\Element|Attribute
  • Start Time: the timestamp at the beginning of the data set you are retrieving. A PI Time expression or a timestamp, e.g. *-1d or 2018-01-01
  • End Time: the timestamp at the end of the data set you are retrieving. A PI Time expression or a timestamp, e.g. *-1d or 2018-01-01
  • Interval: only necessary for Interpolated. The frequency at which the data will be interpolated. A valid time interval, e.g. 10m

 

Once you fill in the form with what you need, hit the OK button and you may be asked for credentials. It's important to mention that, for the time being, there's no support for any authentication method other than anonymous and basic.

 

2018-10-17 11_10_43-Untitled - Power BI Desktop.png

 

After selecting the appropriate login method, a preview of the data is shown:

 

2018-10-17 11_15_11-.png

 

If it seems all right, click Load and a new query result will be added to your data.

 

List vs Transposed

In order to cover the basic usage of data, I'm offering two different ways to present the query result. The first one is List, where you can get data and some important attributes that can be useful from time to time:

 

     2018-10-17 11_23_41-Untitled - Power BI Desktop.png

 

If you don't want the metadata and are only interested in the data, you may find Transposed to be a tad more useful as I already present the data transposed for you:

 

     2018-10-17 11_26_25-Untitled - Power BI Desktop.png

 

Note that for Digital data points I only bring the string value when using transposed. And that's it. Now you can use the data the way you want!

 

2018-06-28 10_39_53-Untitled - Power BI Desktop.png

 

The Code

The GitHub repository for this project is available here. If you want to extend the code I'm providing and have not worked with the M language before, I strongly suggest you watch this live-coding session. Keep in mind that it's a functional language that excels in ETL scenarios but is not yet fully mature, so expect some quirkiness such as the mandatory variable declaration for each instruction, lack of flow control and no support for unit testing.

 

Final Considerations

I hope this tool can make it easier for you to quickly load PI System data into your Power BI dashboards for small projects and use cases. Keep in mind that this extension is not designed for production environments, nor is it a full-featured ETL tool. If you need something more powerful and able to handle huge amounts of data, please consider using the PI Integrator for Business Analytics. If you have questions, suggestions or a bug report, please leave a comment.

Introduction

 

If you are getting started with PI Notifications, setting up your AF tree and creating new notification rules linked to event frames, you probably want to test them before running them in production. The traditional way to test is to set up your enterprise's SMTP server and send all the e-mails from PI Notifications to your e-mail account. If your notifications are sent to a group of people, you need to check with them whether they are receiving the e-mails properly.

 

If you are a developer, you might be interested in this new approach to testing using a custom SMTP server. This will help you test your notification rules more efficiently before running them in production. The idea is that the custom SMTP server will listen on port 25 and display information on the console about the e-mails sent by PI Notifications.

 

Getting started

 

You can find the source code of the program here.

 

Open Visual Studio and create a .NET Core Console Application. Then add 4 libraries with the following commands:

 

Install-Package MailKit -Version 2.0.6
Install-Package MimeKit -Version 2.0.6
Install-Package SmtpServer -Version 5.3.0
Install-Package HtmlAgilityPack -Version 1.8.9

 

 

This will install our custom SMTP server core library, some additional libraries to handle e-mails programmatically and the Html Agility Pack, which will be described later. Please refer to their GitHub repositories for more information.

 

Writing the program

 

Based on the README.MD of their GitHub repository, we have created the application below:

 

using CustomSmtpServerForPINotifications.Models;
using HtmlAgilityPack;
using SmtpServer;
using SmtpServer.Authentication;
using SmtpServer.Mail;
using SmtpServer.Protocol;
using SmtpServer.Storage;
using System;
using System.Threading;
using System.Threading.Tasks;


namespace CustomSmtpServerForPINotifications
{
    class Program
    {
        static void Main(string[] args)
        {
            var options = new SmtpServerOptionsBuilder()
             .ServerName("localhost")
             .Port(25, 587)
             .MessageStore(new SampleMessageStore())
             .MailboxFilter(new SampleMailboxFilter())
             .UserAuthenticator(new SampleUserAuthenticator())
             .Build();


            var smtpServer = new SmtpServer.SmtpServer(options);
            smtpServer.StartAsync(CancellationToken.None).Wait();
        }
    }


    public class SampleMessageStore : MessageStore
    {
        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;


            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            return Task.FromResult(SmtpResponse.Ok);
        }
    }




    public class SampleMailboxFilter : IMailboxFilter, IMailboxFilterFactory
    {
        public Task<MailboxFilterResult> CanAcceptFromAsync(ISessionContext context, IMailbox @from, int size = 0, CancellationToken token = default(CancellationToken))
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }


        public Task<MailboxFilterResult> CanDeliverToAsync(ISessionContext context, IMailbox to, IMailbox @from, CancellationToken token)
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }


        public IMailboxFilter CreateInstance(ISessionContext context)
        {
            return new SampleMailboxFilter();
        }
    }




    public class SampleUserAuthenticator : IUserAuthenticator, IUserAuthenticatorFactory
    {
        public Task<bool> AuthenticateAsync(ISessionContext context, string user, string password, CancellationToken token)
        {
            return Task.FromResult(true);
        }


        public IUserAuthenticator CreateInstance(ISessionContext context)
        {
            return new SampleUserAuthenticator();
        }
    }
}



 

Running the console application will start the custom SMTP server, which will keep monitoring ports 25 and 587. When it receives an e-mail, it will display information including the HTML body of the e-mail message.
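
Before involving PI Notifications at all, you can also do a quick manual test from another PowerShell window on the same machine (the addresses below are arbitrary; only the HTML body matters for this test):

Send-MailMessage -SmtpServer localhost -Port 25 -From "pinotifications@test.com" -To "operator@test.com" -Subject "Test notification" -Body "<p>Name: MyRule</p>" -BodyAsHtml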

 

We can test it by opening the Email Delivery Channel Configuration through PI System Explorer, typing the IP address of the machine running the custom SMTP server and clicking on the "Test..." button.

 

 

The e-mail is shown in the console application as expected. Note that the e-mail information is sent to the custom SMTP server, but the custom SMTP server never tries to forward it to the SMTP server of the test.com domain. This is why it is a very good tool for testing purposes.

 

 

 

Let's validate that the e-mail has all the information we need when it carries the notification content. We are going to use the Html Agility Pack library to extract information from the HTML.

 

 

        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;


            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            NotificationRequest request = GenerateRequestFromEmail(message.HtmlBody);
            if (request != null)
            {
                Console.WriteLine("AttributeFullPath: " + request.AttributeFullPath);
                Console.WriteLine("NotificationRuleName: " + request.NotificationRuleName);
                Console.WriteLine("StartTime: " + request.StartTime);
            }
            return Task.FromResult(SmtpResponse.Ok);
        }




        private NotificationRequest GenerateRequestFromEmail(string htmlBody)
        {
            try
            {
                NotificationRequest request = new NotificationRequest();
                var doc = new HtmlDocument();
                doc.LoadHtml(htmlBody);
                var nodes = doc.DocumentNode.SelectNodes("/html[1]/body[1]/div[1]/p");
                foreach (var node in nodes)
                {
                    string[] texts = node.InnerText.Split(':');
                    if (texts[0].ToLower().Trim() == "name")
                    {
                        request.NotificationRuleName = texts[1].Trim();
                    }
                    if (texts[0].ToLower().Trim() == "start time")
                    {
                        request.StartTime = node.InnerText.Replace(texts[0] + ":", string.Empty);
                    }
                    if (texts[0].ToLower().Trim() == "attribute path")
                    {
                        request.AttributeFullPath = texts[1].Trim();
                    }
                }
                return request;
            }
            catch (Exception)
            {
                return null;
            }
        }
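
For reference, the parser above assumes that the notification e-mail body contains paragraphs roughly like the following (this markup is only illustrative; the exact HTML produced by your message format may differ, in which case the XPath and field labels need adjusting):

<html><body><div>
  <p>Name: MyNotificationRule</p>
  <p>Start Time: 11/2/2018 7:24:14 PM</p>
  <p>Attribute Path: \\AFServer\DB\Element|Attribute</p>
</div></body></html>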

 

After recompiling the project in Visual Studio and running the custom SMTP server again, it is possible to see the details of the notification received:

 

 

Conclusion

Although this custom SMTP server shouldn't be used for production, I am sure that it can be a good friend for developers and PI admins when they need to test their notifications.

 

Please share your thoughts with me and the community!!

Introduction

 

When I started as a vCampus Support Engineer back in 2012, one of my favorite materials for learning about our PI Developer Technologies was the material written for our vCampus Live! events. Recently, I took a look at the vCampus Live! 2012 workbooks and found the "Develop Custom Delivery Channels for Use with PI Notifications" hands-on lab. In this lab, it was shown how to develop a custom delivery channel that would write notifications to the Windows Event Log using PI Notifications 2012, Visual Studio 2012 and .NET Framework 3.5.

 

The new generation of PI Notifications (starting with the 2016 release) uses Event Frames instead of PI Points to store historical data and does not support custom delivery channels. Nevertheless, it integrates with custom web services.

 

In this blog post, I will show you how to develop a custom web service that not only integrates with the newer releases of PI Notifications but also writes notification events to the Windows Event Log. This time we are going to use Visual Studio 2017 and ASP.NET Core 2.1.

 

Creating a delivery endpoint

 

Please make sure that you have PI AF 2018 and PI Notifications 2018 installed on your system. If you are interested in setting up new Notification Rules, please refer to this video.

 

Before starting to code, we need to create a new delivery endpoint as shown in the screenshot below. Note that you should select WebService as the delivery channel. REST is the standard nowadays and this is what we are going to use. We are going to create an action that accepts HTTP POST requests. For this demo, we won't be using any type of authentication, although PI Notifications does support Basic and Windows.

 

 

 

 

 

 

 

Creating the ASP.NET Core project with .NET Framework

 

You can download the source code package of this blog post by accessing this GitHub repository.

 

As explained in this blog post, ASP.NET Core applications can be created on top of either .NET Framework or .NET Core. We are going to create an ASP.NET Core application using .NET Framework because it is easier to integrate with the Windows Event Log. Let's create the project by referring to the screenshot below:

 

 

On the second screen, make sure to select ".NET Framework" and "ASP.NET Core 2.1". This code probably does not work with ASP.NET Core 2.0; if you only see older versions listed, update Visual Studio so that newer versions of the platform become available.

 

 

 

 

 

Adding the ASP.NET Core MVC library

 

Open Package Manager Console and type:

 

Install-Package Microsoft.AspNetCore.Mvc

 

This will install the MVC component of the ASP.NET Core.

 

 

Editing the Startup.cs

 

Now that we have all the libraries added, we need to enable MVC by editing Startup.cs.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;


namespace CustomWebServiceForPINotifications
{
    public class Startup
    {
        public IConfiguration IConfiguration { get; }


        public Startup(IConfiguration configuration)
        {
            IConfiguration = configuration;
        }


        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }


        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }


            app.UseMvc();
        }
    }
}

 

 

I have published some blog posts and videos about using ASP.NET Core with PI AF SDK and Angular. Please refer to the Developing Web Apps on top of the PI System hub.

 

 

Creating the model

 

Create a new folder in the project root named Models and add a new class called NotificationRequest.cs. This .NET class will be used to process the JSON created by PI Notifications according to the WebService delivery endpoint configured. In this case, the JSON will have 3 properties: NotificationRuleName, AttributeFullPath and StartTime. Please refer to this example to learn how to create the Notification Rule programmatically.
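
For illustration, the payload that the web service receives could look something like this (the values are made up; only the property names matter, and they must match the delivery format configured in PI System Explorer):

{
  "NotificationRuleName": "High Temperature Rule",
  "AttributeFullPath": "\\\\AFServer\\Database\\Element|Temperature",
  "StartTime": "2018-11-02T19:24:14Z"
}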

 

 

Now that the properties were defined, we can write our NotificationRequest class as:

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;


namespace CustomWebServiceForPINotifications.Models
{
    public class NotificationRequest
    {
        public string NotificationRuleName { get; set; }
        public string AttributeFullPath { get; set; }
        public DateTime StartTime { get; set; }
    }
}

 

 

Creating the controller

 

Create a new folder on the project root named Controllers and create a new class called NotificationReceiverController.cs

 

This controller has two actions: Get and Post. The first action is used to make sure that the web site is up and running properly. The second action contains the same code as the hands-on lab used to write text to the Windows Event Log. If the operation is successful, the Post action returns a 201 status code; if not, it returns a bad request error.

 

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using CustomWebServiceForPINotifications.Models;


namespace CustomWebServiceForPINotifications.Controllers
{
    [Route("api/notifications-receiver")]
    [ApiController]
    public class NotificationReceiverController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new List<int> { 1, 2, 3, 4 });
        }


        [HttpPost]
        public IActionResult Post([FromBody]NotificationRequest request)
        {


            try
            {
                string sSource = "PI Notifications Delivery Channel Log";
                string sLog = "Application";
                if (!System.Diagnostics.EventLog.SourceExists(sSource))
                    System.Diagnostics.EventLog.CreateEventSource(sSource, sLog);


                string message = String.Format("Notification: {0} triggered for asset {1} at {2}", request.NotificationRuleName,
                    request.AttributeFullPath.ToString(), request.StartTime.ToString());
                System.Diagnostics.EventLog.WriteEntry(sSource, message, System.Diagnostics.EventLogEntryType.Information);
                return StatusCode(201);


            }
            catch (Exception ex)
            {
                return BadRequest(ex);
            }
        }


    }
}



 

Publishing to IIS

 

Now that the code is ready, we should publish the application to IIS. Please refer to the video ASP.NET Core 2 (Web API) and Angular with PI AF SDK: Part 5 - Publishing the application to IIS for more information. You will have to download the ASP.NET Core runtime and make sure the identity running the web site has enough privileges on the folder with the web site files.

 

We can easily see if our web service is running as expected by testing it with our browser:
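
If you prefer the command line, the same checks can be scripted with PowerShell (the base URL is a placeholder for wherever you published the site; adjust it to your IIS binding):

Invoke-RestMethod -Uri "http://localhost/api/notifications-receiver" -Method Get
$body = @{ NotificationRuleName = "High Temperature Rule"; AttributeFullPath = "\\AFServer\Database\Element|Temperature"; StartTime = "2018-11-02T19:24:14Z" } | ConvertTo-Json
Invoke-RestMethod -Uri "http://localhost/api/notifications-receiver" -Method Post -Body $body -ContentType "application/json"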

 

 

If there is something wrong, you will receive an error message.

 

 

Testing the integration between PI Notifications and the Custom Web Service

 

Ok, it is time to test our notification. Let's go back to the Web Service Configuration window and click the "Test Send" button. The result of the operation will be shown at the bottom of the window.

 

 

We can confirm that the operation was successful by opening the Windows Event Viewer on the machine hosting our web service.

 

 

 

 

Conclusion

 

PI Notifications 2016 does not allow developing custom delivery channels, but it does allow you to integrate with custom web services. I hope that this blog post will help you develop great integrations with the new generation of PI Notifications!

Below is a link to the presentation given at Barcelona on September 27, 2018 for the LiveCoding session titled "Getting the Most Out of AFSearch".

 

Link to Recording:     LiveCoding--Getting-the-Most-Out-of-the-AFSearch

 

GitHub Repository:     AF-SDK -PIWorld-EMEA-2018-AFSearch-LiveCoding

 

While some of it is a repeat of what was presented at San Francisco in April, the beginning section features new material on the upcoming AF SDK 2.10.5 release.  The new features discussed:

 

  • OR clauses in searches.
  • FindObjects replaces the now deprecated FindElements, FindEventFrames, FindAttributes, etc.
  • AFSearchToken structure being replaced by new AFSearchBaseToken abstract class with 4 concrete classes: AFSearchFilterToken, AFSearchQueryToken, AFSearchOrToken, and AFSearchExpressionToken.
  • AFSearchToken will be mapped to new AFSearchBaseToken, if possible.
  • If your query uses an OR, the AFSearch.Tokens property will throw an exception.
  • New AFSearch.TokensCollection property is meant to replace Tokens functionality and works with new AFSearchBaseToken.
  • AFSearchToken and AFSearch.Tokens property are both marked as deprecated.

We are excited to present the PI World Innovation Hackathon EMEA 2018 winners!

 

DEME kindly provided a sample of their data with sensors, jack-up vessels and soil models information. Participants were encouraged to create killer applications for DEME by leveraging the PI System infrastructure.

 

OSI_Barcelona18_Day1_120.jpg

 

The participants had 23 hours to create an app using any of the following technologies:

  • PI Server 2018
  • PI Web API 2018
  • PI Vision 2017 R2

 

Our judges evaluated each app based on their creativity, technical content, potential business impact, data analysis and insight and UI/UX. Although it is a tough challenge to create an app in 23 hours, 4 groups were able to finish their app and present to the judges!

 

Prizes:

1st place: Intel NUC Barebone (Core 3-7100U, 120 GB SSD, 8 GB RAM), one year free subscription to PI Developers Club, one time free registration at OSIsoft PI World over the next 1 year

2nd place: Bose SoundLink Around-Ear Wireless Headphones II (black), one year free subscription to PI Developers Club, 50% discount for registration at OSIsoft PI World over the next 1 year

3rd place: Raspberry Pi 3 Model B+ Retro Arcade Gaming Kit incl. 2 classic controllers and one year free subscription to PI Developers Club

 

 

Without further ado, here are the winners!

 

1st place - AG Solution

 

The team members were: Sergio Hernandez, Juri Krivoruchko and Marc Torralba

 

 

OSI_Barcelona18_Day4_386.jpg

 

Team AG Solution developed an application on top of PI AF SDK and PI Vision. They built a custom data reference that uses the Stochastic Dual Coordinate Ascent classifier from ML.NET to detect the current state of the vessel based on the direction of the state change.

 

The team used the following technologies:

  • PI AF SDK
  • ML.NET
  • PI Vision

 

Here are some screenshots presented by the AG Solution team!

 

 

 

 

 

 

 

 

2nd place - M.E.S.S

 

The team members were:  David Rodriguez, Leandro Hideo, Alexander Hosefelder and Alexander Dixon

 

OSI_Barcelona18_Day4_384.jpg

 

Team M.E.S.S developed an algorithm in R to detect sensor anomalies using machine learning.

 

The team used the following technologies:

  • R
  • PI Web API
  • PI Web API package for R

 

Here are some screenshots presented by M.E.S.S!

 

 

 

 

 

 

3rd place - Werusys Cologne

 

The team members were: Kai Weber, Ansgar Backhaus and Julian Weber

 

 

OSI_Barcelona18_Day4_383.jpg

 

Team Werusys Cologne developed an application to analyze windmill installation case data based on a hidden Markov model.

 

The team used the following technologies:

  • PI Web API
  • Seeq

 

Here are some screenshots presented by the Werusys Cologne!

 

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In this blog post about security and containers, we will discuss implementing a Kerberos double hop from the client machine to the PI Web API container and finally to the PI Data Archive container. Previously, when we used the PI Web API container located here Spin up PI Web API container (AF Server included), we used local accounts for authentication to the backend servers such as the AF Server or the PI Data Archive. The limitation is that without Kerberos delegation, we cannot have per-user security, which means that all users of PI Web API have the same permissions; i.e., an operator can read the sensitive tags that were meant for upper management and vice versa. Obviously, this is not ideal. What we want is more granularity in assigning permissions to the right people so that they can only access the tags that they are supposed to read.

 

Prerequisites

You will need to have 2 GMSA accounts. You can request such accounts from your IT department. They can refer to this blog post if they do not know how to create a GMSA: Spin up AF Server container (Kerberos enabled). Also make sure that one of them has the TrustedForDelegation property set to True. This can be done with the Set-ADServiceAccount cmdlet.
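
If your IT department wants a starting point, the cmdlets involved look roughly like the sketch below (the account and domain names are placeholders that happen to match the names used later in this post, and prerequisites such as the KDS root key are omitted):

New-ADServiceAccount -Name untrusted -DNSHostName untrusted.mydomain.com -PrincipalsAllowedToRetrieveManagedPassword "Domain Computers"
New-ADServiceAccount -Name trusted -DNSHostName trusted.mydomain.com -PrincipalsAllowedToRetrieveManagedPassword "Domain Computers"
Set-ADServiceAccount -Identity trusted -TrustedForDelegation $true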

 

You will also need to build the PI Data Archive container by following the instructions in the Build the image section here.

PI Data Archive container health check

 

For the PI Web API container, you will need to pull it from the repository by using this command.

docker pull elee3/afserver:webapi18

 

Demo without GMSA

First, let us demonstrate what authentication looks like when we run containers without GMSA.

 

Let's have a look at the various authentication modes that PI Web API offers.

1. Anonymous

2. Basic

3. Kerberos

4. Bearer

For a more detailed explanation about each mode, please refer to this page.

 

We will only be going through the first 3 modes as Bearer requires an external identity provider which is out of the scope of this blog.

 

Create the PI Data Archive container and the PI Web API container. We will also create a local user called 'enduser' in the two containers.

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h wa --name wa elee3/afserver:webapi18
docker exec wa net user enduser qwert123! /add
docker exec pi net user enduser qwert123! /add

 

Anonymous

Now let's open up PSE and connect to the hostname "wa". If prompted for the credentials, use

Username: afadmin

Password: qwert123!

Change the authentication to Anonymous and check in the changes. Restart the PI Web API service.

Verify that the setting has taken effect by using internet explorer to browse to /system/configuration. There will be no need for any credentials.

 

We can now try to connect to the PI Data Archive container with this URL.

https://wa/piwebapi/dataservers?path=\\pi

 

Check the PI Data Archive logs to see how PI Web API is authenticating.

Result: With Anonymous authentication, PI Web API authenticates with its service account using NTLM.

 

Basic

Now use PSE to change the authentication to Basic and check in. Restart the PI Web API service.

Close internet explorer and reopen it to point to /system/configuration to check the authentication method. This time, there will be a prompt for credentials. Enter

Username: enduser

Password: qwert123!

Try to connect to the same PI Data Archive earlier. You will get an error as the default PI Data Archive container doesn't have any mappings for enduser

Let's see what is happening on the PI Data Archive side.

Result: With Basic authentication, the end user credential has been transferred to the PI Data Archive with NTLM.

 

Kerberos

Finally, use PSE to change the authentication to Kerberos and check in. Restart the PI Web API service.

Close internet explorer and reopen it to point to /system/configuration to check the authentication method. The prompt for credentials will look different from the Basic authentication one. Use the same credentials as you did for the Basic authentication scenario.

Try to connect to the same PI Data Archive again. You should not be able to connect. When you check on the PI Data Archive logs, you will see

Result: With Kerberos authentication, the delegation failed and the credential became NT AUTHORITY\ANONYMOUS LOGON even though we logged on to PI Web API with the local account 'enduser'.

 

Demo with GMSA

Kerberos

Now we shall use the GMSA accounts that we have to make the last scenario with Kerberos delegation work.

Download the scripts for Kerberos enabled PI Data Archive and PI Web API here.

PI-Web-API-container/New-KerberosPWA.ps1

PI-Data-Archive-container-build/New-KerberosPIDA.ps1

 

I will use 'untrusted' as the name of the GMSA account that is not trusted for delegation and 'trusted' as the name of the GMSA account that is trusted for delegation. Set the SPN for 'trusted' as follows:

setspn -s HTTP/trusted trusted

 

Once you have the scripts, run them like this

.\New-KerberosPIDA.ps1 -AccountName untrusted -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName trusted -ContainerName wak

 

The scripts will help you create a credential spec for the container based on the GMSA that you provide. A credential spec lets the container know how it can access Active Directory resources. The script then uses this credential spec to create the container with the docker run command. It also sets the hostname of the container to be the same as the name of the GMSA. This is required because of a current limitation of the implementation; it might be resolved in the future so that you can choose your own hostnames.
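
Under the hood, the flow is roughly the following (a sketch only; the CredentialSpec module name and parameters may differ between versions, so treat the downloaded scripts as the authoritative source):

Install-Module CredentialSpec
New-CredentialSpec -AccountName trusted
docker run -h trusted --security-opt "credentialspec=file://trusted.json" --name wak -d elee3/afserver:webapi18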

 

Open internet explorer now with your domain account and access PI Web API /system/userinfo. The hostname is 'trusted'.

Make sure that ImpersonationLevel is 'Delegation'.

 

Now try to access the PI Data Archive. The hostname is 'untrusted'. You will be unable to access it. Why? Because you haven't created a mapping yet! So let's use SMT to create a mapping to your domain account. After creating the mapping, try again and you should be able to connect. The PI Data Archive logs will show that you have connected with Kerberos. You do not need any mapping to your PI Web API service account at all if Kerberos delegation is working properly.

 

Result: With Kerberos authentication method in PI Web API and the use of GMSAs, Kerberos delegation works. The end domain user is delegated from the client to the PI Web API container to the PI Data Archive container. We have successfully completed the double hop.

 

Troubleshoot

If this doesn't seem to work for you, one thing you can try is to check the setting for internet explorer according to this KB article.

KB01223 - Kerberos and Internet Browsers

Your browser settings might differ from mine but the container settings should be the same since the containers are newly created.

 

Alternative: Resource Based Constrained Delegation

A more secure way to do Kerberos delegation instead of trusting the PI Web API container GMSA for delegation is to set the property "PrincipalsAllowedToDelegateToAccount" on the PI Data Archive container GMSA. This is what we call Resource Based Constrained Delegation (RBCD). You do not have to trust any GMSAs for delegation in this scenario. You will still need two GMSAs.

 

Assuming that you have already created the two containers with the scripts found above, I will use 'pida' as the name of the PI Data Archive container GMSA and 'piwebapi' as the name of the PI Web API container GMSA.

.\New-KerberosPIDA.ps1 -AccountName pida -ContainerName pik
.\New-KerberosPWA.ps1 -AccountName piwebapi -ContainerName wak

 

Execute these two additional commands to enable RBCD.

docker exec pik powershell -command "Add-WindowsFeature RSAT-AD-PowerShell"
docker exec pik powershell -command "Set-ADServiceAccount $env:computername -PrincipalsAllowedToDelegateToAccount (Get-ADServiceAccount piwebapi)"

 

You will still be able to connect with Kerberos delegation from the client machine to the PI Web API container to the PI Data Archive container. In this case, the PI Data Archive container only strictly allows delegation from the PI Web API container with 'piwebapi' as its GMSA.

 

Conclusion

We have seen that containers are able to utilize Kerberos delegation with the usage of GMSAs. This is important for middleware server containers such as PI Web API. Here is a quick summary of the various results that we have seen.

Authentication Mode | No GMSA                          | With GMSA
Anonymous           | NTLM with service account        | No reason to do this
Basic               | NTLM with local end user account | No reason to do this
Kerberos            | NTLM with anonymous logon        | Kerberos delegation with domain end user account

 

The interesting thing is that Basic authentication can also have per user security with local end user accounts. But you will need to maintain the list of local users in the PI Web API container and the PI Data Archive container separately which is not recommended. The ideal case is to go with Kerberos delegation.

ATTENTION Developers, Data Scientists, IT CyberSecurity, and Power Users.  For lack of a better word, I will refer to you collectively as "developers".

 

Developers coming to PI World Barcelona may notice more offerings than ever before.  We proudly proclaim that Barcelona will have the most robust Developer Agenda (see link) ever seen at UC or PI World EMEA.  This includes the Developer Innovation Hackathon on Day 0 (see link), thanks to our data sponsor DEME.

 

Besides the traditional hands-on labs, which require an additional fee and pre-registration, the Day 3 agenda is chock full of technical content aimed specifically at developers, thanks to 90-minute Live Coding or How-To sessions. These in-depth technical talks require neither a fee nor pre-registration. You are free to come and go as you please. Make no mistake about it ... just because we call the LiveCoding sessions "talks" does not mean they contain less technical information than labs. We expect to offer even more such talks at future PI World events because we think it makes for a better event for you. Our reasoning: you can attend 2 labs on Day 3 for $300, or you can sit in on 4 LiveCoding talks for free. Who can argue against more technical training for less cost? (Tip: if you are trying to convince your boss to send you to PI World, presenting it as a major training event (which it is) could be a strong justification to attend.)

 

I invite you once again to review the Day 3 agenda.  You will see an Analytics Track and Developer Track.  One late correction I would like to make is the PI Admin Track, which is not really for PI Admins but should be considered a 2nd Developer Track.


 

Day 3, Thursday September 27, 11:30 - 13:00

LiveCoding: Getting the Most Out of the New AFSearch

CCIB: Room 134 (originally Room 117), P1 Level

 

To any members of the PI Developer Community who will be at PI World Barcelona, you are invited to join me in a presentation on new features of AFSearch.  If you ask "Hey Rick, didn't you give this already in SF?", my answer would be "Yes BUT new sections were added to specifically cover some important NEW stuff."  PI AF 2018 R2 (AF SDK 2.10.5) will finally support OR conditions with AFSearch.  That is a highly anticipated new feature that many are looking forward to.  But in order to support OR conditions, it required replacing the older AFSearchToken structure with a new AFSearchTokenBase class that now has 4 different token instances.  Trust me, you will want to see how these new tokens will be used in code.  Everyone who has ever attended this talk has said they definitely learned something!

 

UPDATE: The room has been moved to 134 on Level P1.

Overview

Most of us have searched for PI Points, but as our PI System grows larger, or as more products like PI Connectors and Relays automatically create PI tags, it becomes imperative to understand how to narrow down and optimize search queries. You might have used the Tag Search Dialog, or simply copy-pasted sample queries provided in the examples and modified them to suit your needs. Most of the time these queries are intuitive to read and understand, but there are situations where we need to utilize their full expressive power.

In this blog post we will explore the PIPoint query search syntax in PI AF SDK. We will take a deeper look at the syntax rules and parsing of queries, along with the wildcards, operators, and aliases used in constructing a PI Point query string to find the desired PIPoint objects. The PIPoint Search Utility is used as an aid to accompany the examples shown in this post and to demonstrate the syntactic and search aspects of query strings.

 

Let us look at some typical examples one might come across while performing tag searches and their query strings.

 

Below are some Invalid queries. We need to be aware of the reasons that make them invalid and avoid such mistakes in the future.

 

Query Syntax

A query is one or more AND condition filters that are ORed together. Each AND condition contains one or more query items. A query item consists of a query filter name, an operator, and the query filter. This allows multiple conditions to be specified with a single query. The query syntax is described in Extended Backus-Naur Form (EBNF).

 

It is important to grasp the EBNF syntax rules in order to construct correct and effective queries. As we go along, we will look at examples of how to do this and how to avoid potential pitfalls with query strings. There are a large number of possible constructs with many nuances; however, once we understand a few standard rules, the task becomes a lot easier.

As an example, the query strings below (a non-exhaustive list) represent exactly the same query even though they vary syntactically:

  • sin* AND PointType:Float
  • (tag:=sin* AND PointType:=Float16) OR (tag:=sin* AND PointType:=Float32) OR (tag:=sin* AND PointType:=Float64)
  • (sin* PointType:='Float16') OR (sin* PointType:='Float32') OR (sin* PointType:='Float64')
  • tag:=sin* AND PointType:Float
  • ("sin*" PointType:='Float16') OR ("sin*" PointType:='Float32') OR ("sin*" PointType:='Float64')

 

How can we parse a Query String?

Parsing can be viewed as decomposing a query string into separate conditions. Think of this as an 'exploded view' of the string where you can see how the individual components fit together. PIPointQuery is a structure in which the PIPoint attribute specified by AttributeName is compared to the query's AttributeValue using the search Operator. The PIPointQuery.ParseQuery method parses a query string into PIPointQuery lists, which can be used by the FindPIPoints(PIServer, IList<IEnumerable<PIPointQuery>>, IEnumerable<String>) method and also to verify the equivalence of search strings.

 

The example strings provided above would be transformed into the equivalent PIPointQuery list.
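If you want to see this decomposition yourself, here is a minimal PowerShell sketch. It assumes the AF SDK public assembly is at its typical PIPC install path and that a default PI Data Archive is configured; adjust both to your environment.

# Minimal sketch: parse a query string into PIPointQuery lists using AF SDK from PowerShell.
# Assumption: OSIsoft.AFSDK.dll sits at the typical PIPC install path shown below.
Add-Type -Path "C:\Program Files (x86)\PIPC\AF\PublicAssemblies\4.0\OSIsoft.AFSDK.dll"

$piServers = New-Object OSIsoft.AF.PI.PIServers
$srv = $piServers.DefaultPIServer     # or $piServers["pi"] for a specific server
$srv.Connect()                        # connect so version-dependent attributes can be validated

# ParseQuery returns one inner list of PIPointQuery items per ORed AND condition.
$queries = [OSIsoft.AF.PI.PIPointQuery]::ParseQuery($srv, "tag:=sin* AND PointType:Float")

foreach ($andCondition in $queries)
{
    foreach ($item in $andCondition)
    {
        "{0} {1} {2}" -f $item.AttributeName, $item.Operator, $item.AttributeValue
    }
    "---"   # separator between ORed AND conditions
}

The same $srv variable is reused in the later sketches in this post.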

 

Note: Parsing the query strings into PIPointQuery lists is shown in the examples to help explain the various aspects of query string parsing. In most cases this is not necessary once you have a good understanding of the query syntax.

I highly recommend always using query strings, which are more compact and can be used both in code and in the Tag Search dialog for PI Point searches.

 

CAUTION: Parsed Query does NOT mean Valid Query (Syntax vs Semantics)

If a query string parses successfully, that only indicates correct syntax; syntactic correctness does not guarantee semantic validity. In this trivial example it is easy to see that Float1234 is not a valid point type, yet it can still be parsed into a PIPointQuery structure because it conforms to the grammar rules.

 

The search performed using the query string will obviously fail as shown.
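A rough sketch of what this looks like in code, reusing $srv from the parsing sketch above (the exact error returned by the server may differ in your environment):

# Syntactically valid but semantically invalid: Float1234 is not a real point type.
$queries = [OSIsoft.AF.PI.PIPointQuery]::ParseQuery($srv, "tag:=sin* AND PointType:=Float1234")   # parses fine
try
{
    $points = [OSIsoft.AF.PI.PIPoint]::FindPIPoints($srv, $queries, $null)
    "Found {0} points" -f $points.Count
}
catch
{
    "Search failed: {0}" -f $_.Exception.Message
}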

 

Use of Wild Card Characters

  • The string value of a filter can be enclosed in single quotes ('), double quotes ("), or left unquoted. Quotes are required if non-escaped white space or quotation marks are desired within the filter string.
  • A single backslash (\) character is treated as a literal character unless it is followed by a wildcard character.
  • The supported wildcard characters are "*", which matches zero or more characters, and "?", which matches a single character. These characters cannot be escaped using the backslash ("\") character.

 

Ex: Search tag names with pattern CD?1?8
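As a rough illustration, reusing the PowerShell/AF SDK setup from the parsing sketch above, this search could look like the following (the CD?1?8 mask matches tags such as CDT158):

# Find PI Points whose tag names match CD?1?8 ('?' matches exactly one character).
$queries = [OSIsoft.AF.PI.PIPointQuery]::ParseQuery($srv, "tag:=CD?1?8")
$points  = [OSIsoft.AF.PI.PIPoint]::FindPIPoints($srv, $queries, $null)
$points | ForEach-Object { $_.Name }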

 

Ex: Search all tags which have a DataSecurity entry for PI World with read or write access (but not both), and which do not belong to a point class whose name starts with ba*

 

Alias Attribute Names

The following table lists the supported aliases for common PIPoint attribute names. These aliases can be used instead of the actual attribute name. The PICommonPointAttributes class contains the names of the common PIPoint attributes.

 

Ex: Query strings 1 & 2 use aliases producing the same results

 

Notice the equivalent parsing for aliases.

Personal preference: I avoid using the aliases. One less thing to remember or make mistakes with.

 

Operators

  • The EqualOperator can be specified as either ":" or ":="
    • Personal preference: I use := to be consistent with the use of other operators
  • For a PIPointValueFilter ("Value") query where the PIPoint being queried is of String type, the LessThan, LessThanOrEqual, GreaterThan, and GreaterThanOrEqual operators are not supported
  • For a PIPointValueFilter query with a Boolean value (i.e. "Substituted", "Questionable", "Annotated", "IsGood"), only Equal and NotEqual are supported
  • The In operator is not supported. It will be implicitly translated as a filter value
    • Ex: Name:"IN("abc", "def")" is implicitly translated to 'Tag:="IN("abc", "def")*"'

 

Syntax Rules: Cheat Sheet

  • AndOperator can be specified either by "AND" or <WHITESPACE>
    • Ex: AND is implied between pointtype and pointsource just by a space

         

  • EqualOperator  can be either  ":" or ":="

    

  • If a specific filter name is not specified, then the filter will default to the "Tag" filter and the operator will be "="

    

  • When a filter name is specified, no whitespace is allowed between the filter name, the ":" separator, and the optional operator.
    • If the operator is not specified, the default operator is "=".
  • If the type of a point attribute is DateTime, then the "TimeValue" format is supported for the filter value. This can be any recognized AFTimeString
  • Boolean can be specified by "True" or "False" or "1" or "0"
  • PointType:Float query is implicitly translated to 'PointType:=Float16 OR PointType:=Float32 OR PointType:=Float64'
  • PointType:Int query is implicitly translated to 'PointType:=Int16 OR PointType:=Int32'
  • Starting in AF 2017, querying based on the PIPoint value is also supported. However, queries with an OR condition are not supported for PIPointValueFilters (i.e. when querying based on PIPoint value).

    

  • A filter name may only be referenced one time per AND condition of the query string.
    • This example would cause an error: PointId:>5 AND PointId:<10
  • It is possible to construct queries which include multiple attributes and query conditions

    

  • Certain PIPoint attributes are specific to a PIPointClass (e.g. AutoAck is applicable to the ALARM and SQC_ALARM point classes)
    • See the attachment (ptclassattr.txt) for each point class's attributes and their typical values
  • The Future point attribute is invalid for PI Data Archive versions earlier than 3.4.395
  • Security point attributes (e.g. "PtSecurity" and "DataSecurity") are invalid for PI Data Archive versions earlier than 3.4.380
  • Query strings are Case Insensitive
  • On improving readability
    • Don't use quotes unless you need them; single quotes are better than double quotes
    • Don't use parentheses unnecessarily
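Putting a few of these rules together, here is a hedged sketch of a compound query that combines tag masks, point source, and point type (the specific filter values are made up for illustration; it reuses $srv from the parsing sketch earlier):

# Implied AND between the items inside each parenthesized condition; OR between the two conditions.
$query   = "(tag:=sin* AND pointsource:=R) OR (tag:=cdt* AND pointtype:Float)"
$queries = [OSIsoft.AF.PI.PIPointQuery]::ParseQuery($srv, $query)
$points  = [OSIsoft.AF.PI.PIPoint]::FindPIPoints($srv, $queries, $null)
"Found {0} points" -f $points.Count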

 

 

Additional searches options

SearchNameAndDescriptor

If True and the Tag attribute name is specified and the Descriptor attribute name is not specified in the query, then both of these attributes will be searched using the Tag query value

 

AFSearchTextOption

Indicates the text search option to be applied to the search pattern.

The available options are: 1. StartsWith, 2. Contains, 3. ExactMatch, 4. EndsWith.

 

Tag Search Dialog in AF Explorer

A good way to perform these same searches and check the queries used in your application is through the Tag Search dialog in AF Explorer. You can open it from the Elements view via Search -> Tag Search.

 

 

Additional search criteria can be specified through the UI. However, not all attributes are available there; some can only be supplied when using the search string.

 

Bonus: Peek into PI Server

AF SDK makes a remote procedure call to the PI Server (PI Base Subsystem) which takes in the search parameters and returns the requested PI Points along with additionally specified attributes.

As a bonus you can run piartool -thread pibasess -history in your PI Server command line to track the RPC and see the number of points returned and the amount of time it took for it to run.

Example RPC output: 4452, 0, 14-Aug-18 13:37:01.63263, 1, piptsdk|1|GetPoints, 544, Return Count: 55. Returned Status: [0] Success


PIPoint Search Utility

Posted by tramachandran Employee Sep 5, 2018

Overview

This console utility was developed to demonstrate the PIPoint search syntax in AF SDK and as an aid to accompany the examples shown in the blog post PIPoint Search Query in AF SDK.

As a standalone tool, it provides a quick way to perform searches and to verify the syntactic correctness of query strings.

 

Usage

 

0. Connect to PI Data Archive

This is required both for searching for PI Points and for parsing query strings, as certain attributes depend on the version of the server.

 

1. Search PI Points using a Query String

Output columns: Tag name, Point ID, PointType, PointClass

 

2. Parse Query Strings into individual PI Point Queries

 

3. Specify SearchNameAndDescriptor

If true and the Tag attribute name is specified and the Descriptor attribute name is not specified in the query, then both of these attributes will be searched using the Tag query value. Default = false

 

4. Specify AFSearchTextOption

Indicates the text search option to be applied to the search pattern. Default = StartsWith

 

Source Code and Download

GitHub: GitHub - ThyagOSI/PIPointSearchSyntax

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In my previous blog on the AF Server container health check, I talked about implementing a health check for the AF Server container. Naturally, we also have to discuss such a check for the PI Data Archive container. For an introduction to what a health check is and how to integrate one with Docker, please refer to the previous blog post, as I won't repeat it here.

 

In part 1, I will be covering the definition of the health tests that we can do for the PI Data Archive and then we will hook them up in the Dockerfile.

In part 2, we will be doing something interesting with these health check enabled containers by using another container that I wrote to inform us by email whenever there is a change in their health status so that we are aware when things fail.

 

Without further ado, let's jump into the definition of the health tests for the PI Data Archive container!

 

Define health tests

There are 2 tests that we will perform. The first test checks port 5450 to determine whether any service is listening on that port. The second test uses piartool to block on some essential subsystems of the PI Data Archive with a fixed timeout, so that the test fails if the wait exceeds that timeout.

 

The PowerShell cmdlet Get-NetTCPConnection can accomplish the first check for us. A return value of null means that there is no service listening on port 5450.

The relevant code is below

$val = Get-NetTCPConnection -LocalPort 5450 -State Listen -ErrorAction SilentlyContinue
if ($null -eq $val)
{
      # return 1: unhealthy - the container is not working correctly
      Write-Host "Failed: No TCP Listener found on 5450"
      exit 1
}

 

Next, piartool is a utility located in the adm folder of the PI Data Archive home directory. It has an option called "block" which waits for the specified subsystem to respond. This command is also used in the PI Data Archive start scripts to pause the script until the subsystem is available. The subsystems that we are going to check are listed below.

$SubsystemList = @(
   @("pibasess", "PI Base Subsystem"),
   @("pisnapss", "PI Snapshot Subsystem"),
   @("piarchss", "PI Archive Subsystem"),
   @("piupdmgr", "PI Update Manager")
)

 

We are going to change the amount of time that we allow for each check to 10 seconds so that we do not have to wait an hour for it to complete. We will also capture the start and end times so that we can provide detailed logging for troubleshooting purposes. The code for this is below.

function Block-Subsystem
{
    Param ([string]$Name, [string]$DisplayName, [int]$TimeoutSeconds = 10)
    $StartDate = Get-Date
    # Run piartool -block and wait; a non-zero exit code is treated as a failed or timed-out block.
    $rc = Start-Process -FilePath "${env:PISERVER}\adm\piartool.exe" -ArgumentList @("-block", $Name, $TimeoutSeconds) -Wait -PassThru -NoNewWindow
    $EndDate = Get-Date
    if ($rc.ExitCode -ne 0)
    {
        echo ("Block failed for {0} with exit code {1}, block started: {2}, block ended: {3}" -f $DisplayName, $rc.ExitCode, $StartDate, $EndDate)
        exit 1
    }
}

ForEach ($Subsystem in $SubsystemList) {Block-Subsystem -Name $Subsystem[0] -DisplayName $Subsystem[1] -TimeoutSeconds 10}
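One detail worth making explicit: when every test passes, the script should finish with a zero exit code so that Docker reports the container as healthy. A minimal ending for check.ps1 could look like this (assuming the port test, $SubsystemList, and Block-Subsystem shown above all live in the same script):

# End of check.ps1: none of the tests above called 'exit 1', so report healthy.
Write-Host "All health tests passed"
exit 0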

 

Integrate into Docker

We will add this line of code to our Dockerfile to make Docker start performing health checks.

HEALTHCHECK --start-period=60s --timeout=60s --retries=1 CMD powershell .\check.ps1

 

The start period is given as 60 seconds to allow the PI Data Archive to start up and initialize properly before the health check results are taken into account. A timeout of 60 seconds is given for the entire health check to complete; if it takes longer than that, the health check is deemed to have failed. I also allowed only 1 retry, which means the health check is unsuccessful if the first try fails. There is no second chance!

 

Build the image

As usual, you will have to supply the PI Server 2018 installer and pilicense.dat yourself. The rest of the files can be found here.

elee3/PI-Data-Archive-container-build

 

Put all the files into the same folder and run the build.bat file.

Once your image is built, you can create a container.

docker run -h pi --name pi -e trust=%computername% pidax:18

 

Now check docker ps. The health status should be starting.

 

After about a minute, which is the start period, run docker ps again. The health status should now be healthy.
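If you want more detail than the one-word status shown by docker ps, docker inspect can display the full health state, including the output of the most recent probes (a standard Docker command, shown here as a quick sketch):

docker inspect --format "{{json .State.Health}}" pi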

 

Health monitoring

Now that we have a health-check-enabled container up and running, we can start to do some wonderful things with it. If you are a PI administrator, don't you wish there were some way to keep tabs on your PI Data Archive's health, so that if it fails, an email is sent to notify you that it is unhealthy? That way, you won't get a shock the next time you check on your PI Data Archive and realize that it has been down for a week!

 

I have written an application that can help you monitor ANY health-check-enabled containers (i.e. not only the PI Data Archive container and the AF Server container, but any container that has a health check enabled) and send you an email when they become unhealthy. We can start the monitoring with just one simple command. Change the following placeholders to your own values:

 

Name of your SMTP server: <mysmtp>

Source email: <admin@osisoft.com>

Destination email: <operator@osisoft.com>

 

docker run --rm -id -h test --name test -e smtp=<mysmtp> -e from=<admin@osisoft.com> -e to=<operator@osisoft.com> elee3/health

 

Once the application is running, we can test it by trying to break our PI Data Archive container. I will do so by stopping the PI Snapshot Subsystem since it is one of the services that is monitored by our health check. After a short while, I received an email in my inbox.

 

Let me check docker ps again.

 

The health status of docker ps corresponds to what the email has indicated. Notice that the email even provides us with the health logs so that we know exactly what went wrong. This is so useful. Now let me go back and start the PI Snapshot Subsystem again. The monitoring application will inform me that my container is healthy again.

 

The latest log at 2:30:47 PM has no output which indicates that there are no errors. The logs will normally fetch the 5 most recent events.

 

With the health monitoring application in place, we can now sleep in peace and not worry about container failures which go unnoticed.

 

Conclusion

In addition to what I have shown here, I want to mention that the health tests can be defined by users themselves. You do not have to use the implementation that I have provided. This level of flexibility is very important since health is a subjective topic. One man's trash is another man's treasure. You might think a BMI of 25 is OK, but the official recommendation from the health hub is 23 and below. Therefore, the ability to define your own tests and thresholds will help you receive notifications that are appropriate to your own environment. You can hook them up during docker run, as shown in the sketch below. Here is more information if you are interested.
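For example, Docker's standard run-time health flags let you override the image's built-in check when starting the container. In this sketch, mycheck.ps1 is a hypothetical custom script that you would need to include in the image (or mount into the container) yourself, and the thresholds are just illustrative values:

docker run -h pi --name pi -e trust=%computername% --health-cmd "powershell .\mycheck.ps1" --health-interval 30s --health-timeout 30s --health-retries 3 --health-start-period 120s pidax:18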

 

Source code for health monitoring application is here.

elee3/Health-Monitor
