
Note: For development and testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

We have learnt much about using containers in previous blog posts. Until now, we have been working with standalone containers, which is great for familiarizing yourself with the concept of containers in general. Today, we shall take the next step in our container journey: learning how to orchestrate these containers. There are several container orchestration platforms on the market today, such as Docker Swarm, Kubernetes, Service Fabric and Marathon. I will be using Docker Swarm today to illustrate the concept of orchestration since it is directly integrated with the Docker Engine, making it the quickest and easiest to set up.

 


 

Motivation

Before we even start on the orchestration journey, it is important that we understand the WHY behind it. For someone who is new to all this, the objective might not be clear. Let me illustrate with two analogies: one that a layperson can understand and another that a PI admin can relate to.

 

First analogy

Suppose your hobby is baking cakes (containers). You have been hard at work in your kitchen trying to formulate the ultimate recipe (image) for the best chiffon cake in the world. One day, you manage to bake a cake with the perfect taste and texture after countless rounds of trial and error varying the temperature of the oven, the duration in the oven, the amount of each ingredient and so on. Your entrepreneurial friend advises you to open a small shop selling this cake (dealing with standalone containers on a single node). You heed your friend's advice and do so. Over the years, business booms and you want to expand your small shop into a chain of outlets (a cluster of nodes). However, you have only one pair of hands and it is not possible for you to bake all the cakes that you are going to sell. How are you going to scale beyond a small shop?

Luckily, your same entrepreneurial friend finds a vendor called Docker Inc who can manufacture a system of machines (orchestration platform), with one machine installed in each of your outlet stores. These machines can communicate with each other, and they can take your recipe and bake cakes that taste exactly the same as the ones that you baked yourself. Furthermore, you can tell the machines how many cakes to bake each hour to address different levels of demand throughout the day. The machines even have a QA tester at the end of the process to check whether each cake meets its quality criteria, automatically discarding cakes that fail and replacing them with new ones. You are so impressed that you decide to buy this system and start expanding your cake empire.

 

Second analogy

Suppose you are in charge of the PI System at your company. Your boss has given you a cluster of 10 nodes. He would like you to build an AF Server service spanning this cluster with the following capabilities:

1. able to adapt to different demands to save resources

2. self-healing to maximize uptime

3. rolling system upgrades to minimize downtime

4. easy to upgrade to newer versions for bug fixes and feature enhancements

5. able to prepare for planned outages needed for maintenance

6. automated roll out of cluster wide configuration changes

7. manage secrets such as certificates and passwords for maximum security

How are you going to fulfill his crazy demands? This is where a container orchestration platform might help.

 

Terminology

Now let us get some terminology clear.

 

Swarm: A swarm consists of multiple Docker hosts which run in swarm mode and act as managers and workers. A given Docker host can be a manager, a worker, or perform both roles.

Manager: The manager delivers work (in the form of tasks) to workers, and it also manages the state of the swarm to which it belongs. Managers can also run the same services as workers, or you can configure them to run only manager-related services.

Worker: Workers run tasks distributed by the swarm manager. Each worker runs an agent that reports back to the manager about the state of the tasks assigned to it, so the manager can keep track of the work running in the swarm.

Service: A service defines which container images the swarm should use and which commands the swarm will run in each container. For example, it’s where you define configuration parameters for an AF Server service running in your swarm.

Task: A task is a running container which is part of a swarm service and managed by a swarm manager. It is the atomic scheduling unit of a swarm.

Stack: A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

 

There are two types of service.

Replicated: The swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state.

Global: The swarm manager runs one task for the service on every available node in the cluster.
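
For example, using the image introduced later in this post, the two modes would look like this on the command line (the service names here are illustrative):

docker service create --name af-replicated --replicas 2 elee3/afserver:18s

docker service create --name af-global --mode global elee3/afserver:18s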

 

Prerequisites

To follow along with this blog, you will need two Windows Server 2016 Docker hosts. Check out how to install Docker in the Containerization Hub link above.

 

Set up

Select one of the nodes (we will call it "Manager") and run

docker swarm init

 

This will output the following

 

Swarm initialized: current node (vgppy0347mggrbam05773pz55) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

 

Now select the other node (we will call it "Worker") and run the command that was output by the previous step.

docker swarm join --token SWMTKN-1-624dkyy11zmx4omebau2sin4yr9rvvzy6zm1n58g2ttiejzogp-8phpv0kb5nm8kxgvjq1pd144w 192.168.85.157:2377

 

Go back to Manager and run

docker node ls

 

to list the nodes that are participating in the swarm. Note that this command only works on manager nodes.

 

Service

Now that the nodes have been provisioned, we can start to create some services.

 

For this blog, I will be using a new AF Server container image that I have recently developed, tagged 18s. If you have been following my series of blogs, you might be curious about the difference between the tag 18x (last seen here) and 18s. With 18s, the data is now separated from the AF Server application service. What this means is that the PIFD database mdf, ndf and ldf files are now mounted in a separate data volume. The result is that on killing the AF Server container, the data won't be lost, and I can easily recreate an AF Server container pointing to this data volume to keep the previous state. This will be useful in future blogs on container fail-over with data persistence.
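
As a side note, here is a sketch of what an explicit volume mount on a service could look like; the volume name and target path are illustrative assumptions, not the actual layout of the 18s image:

docker service create --name=afdata-demo --mount type=volume,source=afdata,target=C:\Data elee3/afserver:18s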

 

You will need to log in with the usual docker credentials that I have been using in my blogs. To create the service, run

 

docker service create --name=af18 --detach=false --with-registry-auth elee3/afserver:18s

 

Note: If --detach=false is not specified, tasks are updated in the background and the command returns immediately. If it is specified, the command waits for the service to converge before exiting. I specify it here to get some visual output.

 

Output

goa9cljsek42krqgvjtwdd2nd
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Waiting 6 seconds to verify that tasks are stable...

 

Now we can list the service to find out which node is hosting the tasks of that service.

 

docker service ps af18

 

Once you know which node is hosting the task, go to that node and run

 

docker ps -f "name=af18."

 

Output

CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS                        PORTS               NAMES
9e3d26d712f9        elee3/afserver:18s   "powershell -Comma..."   About a minute ago   Up About a minute (healthy)                       af18.1.w3ui9tvkoparwjogeg26dtfz

 

The output will show the list of containers that the swarm service has started for you. Let us look at the network that the container belongs to by inspecting it with the container ID.

 

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks}}"

 

Output

map[nat:0xc0420c0180]

 

The output indicates that the container is attached to the nat network by default, since we did not explicitly specify a network to attach to. This means that your AF Server is accessible from within the same container host.
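
If you needed the AF Server to be reachable from other hosts, one option would be to attach the service to an overlay network instead, subject to the overlay networking support of your Windows hosts; the network and service names here are illustrative:

docker network create --driver overlay af-net

docker service create --name=af18o --network af-net elee3/afserver:18s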

 

You can get the IP address of the container with

docker inspect 9e3d26d712f9 -f "{{.NetworkSettings.Networks.nat.IPAddress}}"

 

Then you can connect with PSE using the IP address. It is also possible to connect using the container ID, as the container ID is the hostname by default.

 

 

Now that we have a service up and running, let us take a look at how to change some of its configuration. In the previous image, the name of the AF Server derives from the container ID, which is a random string. I would like it to have the name 'af18'. I can do so with

 

docker service update --hostname af18 --detach=false af18

 

Once you execute that, Swarm will stop the currently running task and reschedule it with the new configuration. To see this, run

 

docker service ps af18

 

Output

 

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
llueiqx8ke86        af18.1              elee3/afserver:18s   worker           Running             Running 8 minutes ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 9 minutes ago

 

During rescheduling, it is entirely possible for Swarm to shift the container to another node. In my case, it shifted from master to worker. You can ensure that the container will only be rescheduled on a specific node by using a placement constraint.

 

docker service update --constraint-add node.hostname==master --detach=false af18

 

We can check the service state to confirm.

 

docker service ps af18

 

Output

ID                  NAME                IMAGE                NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
r70qwri3s435        af18.1              elee3/afserver:18s   master            Running             Starting 9 seconds ago
llueiqx8ke86         \_ af18.1          elee3/afserver:18s   worker           Shutdown            Shutdown 9 seconds ago
w3ui9tvkopar         \_ af18.1          elee3/afserver:18s   master            Shutdown            Shutdown 2 hours ago

 

Now, the service will only get scheduled on the master node. You will now be able to connect with PSE on the master node using the hostname 'af18'.
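
The same update mechanism also handles rolling out a newer image version; a sketch with a hypothetical tag:

docker service update --image elee3/afserver:<newer tag> --detach=false af18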

 

When you are done with the service, you can remove it.

docker service rm af18

 

Conclusion

In this article, we have learnt how to set up a two-node Swarm cluster consisting of one manager and one worker. We scheduled an AF Server swarm service on the cluster and updated its configuration without needing to recreate the service. The Swarm takes care of scheduling the service's tasks on the appropriate node; we do not need to do it manually ourselves. We also saw how to control the location of the tasks by adding a placement constraint. In the next part of the Swarm series, we will take a look at Secrets and Configs management within Swarm. Stay tuned for more!

Introduction

Lately, I've been experimenting with Microsoft Power BI and I'm impressed by how mature the tool is. The application not only supports a myriad of data sources, but there is now even a Power Query SDK that allows developers to write their own Data Connectors. Of course, the majority of my experiments use data from PI Points and AF Attributes and, because Power BI is very SQL oriented, I end up using PI OLEDB Enterprise most of the time. But let's face it: writing a query can be tricky and is not a required skill for most Power BI users. So I decided to create a simple PI Web API Data Connector for Power BI. The reason I decided to use PI Web API is that the main use case for the Data Connector is to "Create a business analyst friendly view for a REST API". Also, there's no need to install additional data providers.

 

Important Notes

Keep in mind that this tool is meant to help in small experiments where a PI and Power BI user wants to get some business intelligence done on PI data quickly. For production use cases, where there is a serious need for shaping data views or where scale is important, we highly recommend the PI Integrator for Business Analytics. Also, in order to avoid confusion, let me be clear that this is not a PI Connector. Microsoft named these extensions to Power BI "Data Connectors" and they are not related to our PI Connector products.

 

The custom Data Connector is a Power BI beta feature, so it may break with the release of newer versions. I will do my best to keep it updated but, please, leave a comment if there's something not working. It's also limited by Power BI's capabilities, which means it currently only supports basic and anonymous authentication for web requests. If the lack of Kerberos support is a no-go for you, please refer to this article on how to use PI OLEDB Enterprise.

 

Preparation

If you are using the latest version of Power BI (October 2018), you should enable custom data connectors by going to File / Options and Settings / Options / Security and then lowering the security for Data Extensions to Allow any extension to load.

 


 

For older versions of Power BI, you have to enable Custom data connectors. It's under File / Options and Settings / Options / Preview features / Custom data connectors.

 


 

This should automatically create a [My Documents]\Power BI Desktop\Custom Connectors folder. If it's not there, you can create it yourself. Finally, download the file (PIWebAPIConnector.mez) at the end of this article, extract it from the zip and manually place it there. If you have a Power BI instance running, you need to restart it before using this connector. Keep in mind that custom data connectors were introduced in April 2018, so versions older than that will not be able to use this extension.

 

How to use it

You first have to add a new data source by clicking Get Data / More and finding the PI Web API Data Connector under Services.

 


Once you click the Connect button, a warning will pop up to remind you that this is a preview connector. Once you acknowledge it, a form will be presented that you must fill in with the appropriate data:

 


 

Here's a short description of the parameters:

 

PI Web API Server: The URL where the PI Web API instance is located. Allowed values: a valid URL, e.g. https://server/piwebapi
Retrieval Method: The method used to get your data. Allowed values: Recorded, Interpolated
Shape: The shape of the table. List is a row for every data entry, while Transposed is a column for every data path. Allowed values: List, Transposed
Data Path: The full path of a PI Point or an AF Attribute. Multiple values are allowed, separated by a semicolon (;). Example: \\PIServer\Sinusoid;\\AFServer\DB\Element|Attribute
Start Time: The timestamp at the beginning of the data set you are retrieving. Allowed values: a PI Time or a timestamp, e.g. *-1d, 2018-01-01
End Time: The timestamp at the end of the data set you are retrieving. Allowed values: a PI Time or a timestamp, e.g. *-1d, 2018-01-01
Interval: Only necessary for Interpolated. The frequency at which the data will be interpolated. Allowed values: a valid time interval, e.g. 10m

 

Once you fill in the form with what you need, hit the OK button; you may then be asked for credentials. It's important to mention that, for the time being, there's no support for any authentication method other than anonymous and basic.

 


 

After selecting the appropriate login method, a preview of the data is shown:

 


 

If it all seems right, click Load and a new query result will be added to your data.

 

List vs Transposed

In order to cover the basic usage of the data, I'm offering two different ways to present the query result. The first one is List, where you get the data along with some attributes that can be useful from time to time:

 


 

If you don't want the metadata and are only interested in the data, you may find Transposed a tad more useful, as the data is already transposed for you:

 


 

Note that for Digital data points, only the string value is brought over when using Transposed. And that's it. Now you can use the data the way you want!

 


 

The Code

The GitHub repository for this project is available here. If you want to extend the code I'm providing and have not worked with the M language before, I strongly suggest you watch this live-coding session. Keep in mind that it's a functional language that excels in ETL scenarios but is not yet fully mature, so expect some quirkiness, like the mandatory variable declaration for each instruction, lack of flow control and no support for unit testing.
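
If you have not seen M before, here is a minimal, hypothetical sketch of the let/in style the language forces on you; the URL, WebId and field names below are placeholders for illustration, not the connector's actual code:

let
    // Every step is a named variable; there are no loops or statements.
    webId = "A0Example",
    url = "https://server/piwebapi/streams/" & webId & "/recorded",
    response = Web.Contents(url),
    json = Json.Document(response),
    items = json[Items],
    result = Table.FromRecords(items)
in
    result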

 

Final Considerations

I hope this tool can make it easier for you to quickly load PI System data into your Power BI dashboard for small projects and use cases. Keep in mind that this extension is not designed for production environments, nor is it a full-featured ETL tool. If you need something more powerful and able to handle huge amounts of data, please consider using the PI Integrator for Business Analytics. If you have questions, suggestions or a bug report, please leave a comment.

Introduction

 

If you are getting started with PI Notifications, setting up your AF tree and creating new notification rules linked to event frames, you probably want to test them before running in production. The traditional way to test is to set up your enterprise's SMTP server and send all the e-mails from PI Notifications to your e-mail account. If your notifications are sent to a group of people, you need to check with them whether they are receiving the e-mails properly.

 

If you are a developer, you might be interested in this new approach to testing using a custom SMTP server. This will help you test your notification rules more efficiently before running in production. The idea is that the custom SMTP server will listen on port 25 and display information on the console about the e-mails sent by PI Notifications.

 

Getting started

 

You can find the source code of the program here.

 

Open Visual Studio and create a .NET Core Console Application. Then add four libraries with the following commands:

 

Install-Package MailKit -Version 2.0.6
Install-Package MimeKit -Version 2.0.6
Install-Package SmtpServer -Version 5.3.0
Install-Package HtmlAgilityPack -Version 1.8.9

 

 

This will install our custom SMTP server core library, some additional libraries to handle e-mails programmatically and the Html Agility Pack, which will be described later. Please refer to their GitHub repositories for more information.

 

Writing the program

 

Based on the README.md of the SmtpServer GitHub repository, we have created the application below:

 

using CustomSmtpServerForPINotifications.Models;
using HtmlAgilityPack;
using SmtpServer;
using SmtpServer.Authentication;
using SmtpServer.Mail;
using SmtpServer.Protocol;
using SmtpServer.Storage;
using System;
using System.Threading;
using System.Threading.Tasks;

namespace CustomSmtpServerForPINotifications
{
    class Program
    {
        static void Main(string[] args)
        {
            // Listen on ports 25 and 587 and plug in our custom store, filter and authenticator.
            var options = new SmtpServerOptionsBuilder()
             .ServerName("localhost")
             .Port(25, 587)
             .MessageStore(new SampleMessageStore())
             .MailboxFilter(new SampleMailboxFilter())
             .UserAuthenticator(new SampleUserAuthenticator())
             .Build();

            var smtpServer = new SmtpServer.SmtpServer(options);
            smtpServer.StartAsync(CancellationToken.None).Wait();
        }
    }

    public class SampleMessageStore : MessageStore
    {
        // Called for every message the server receives; print sender, recipient and HTML body.
        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;

            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            return Task.FromResult(SmtpResponse.Ok);
        }
    }

    // Accept mail from any sender to any recipient.
    public class SampleMailboxFilter : IMailboxFilter, IMailboxFilterFactory
    {
        public Task<MailboxFilterResult> CanAcceptFromAsync(ISessionContext context, IMailbox @from, int size = 0, CancellationToken token = default(CancellationToken))
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }

        public Task<MailboxFilterResult> CanDeliverToAsync(ISessionContext context, IMailbox to, IMailbox @from, CancellationToken token)
        {
            return Task.FromResult(MailboxFilterResult.Yes);
        }

        public IMailboxFilter CreateInstance(ISessionContext context)
        {
            return new SampleMailboxFilter();
        }
    }

    // Accept any credentials, since this server is for testing only.
    public class SampleUserAuthenticator : IUserAuthenticator, IUserAuthenticatorFactory
    {
        public Task<bool> AuthenticateAsync(ISessionContext context, string user, string password, CancellationToken token)
        {
            return Task.FromResult(true);
        }

        public IUserAuthenticator CreateInstance(ISessionContext context)
        {
            return new SampleUserAuthenticator();
        }
    }
}



 

Running the console application starts the custom SMTP server, which will keep monitoring ports 25 and 587. When it receives an e-mail, it will display the information along with the HTML body of the e-mail message.

 

We can test it by opening the Email Delivery Channel Configuration in PI System Explorer, typing the IP address of the machine running the custom SMTP server and clicking the "Test..." button.

 

 

The e-mail is shown on the console application as expected. Note that the e-mail is delivered to the custom SMTP server, which never tries to relay it to the real SMTP server of the test.com domain. This is why it is a very good tool for testing purposes.
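
If you would rather test without PI System Explorer, a minimal sketch using MailKit (installed earlier) can send a message to the custom server; the host, addresses and body below are placeholders:

using MailKit.Net.Smtp;
using MailKit.Security;
using MimeKit;

class SmtpTestClient
{
    static void Main()
    {
        var message = new MimeMessage();
        message.From.Add(new MailboxAddress("PI Notifications", "notifications@test.com"));
        message.To.Add(new MailboxAddress("Admin", "admin@test.com"));
        message.Subject = "Test notification";
        message.Body = new TextPart("html") { Text = "<html><body><div><p>Name: MyRule</p></div></body></html>" };

        using (var client = new SmtpClient())
        {
            // Connect to the custom SMTP server without TLS and hand over the message.
            client.Connect("localhost", 25, SecureSocketOptions.None);
            client.Send(message);
            client.Disconnect(true);
        }
    }
}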

 

 

 

Let's validate that the e-mail has all the information we need when it carries the notification content. We are going to use the Html Agility Pack library to extract information from the HTML.

 

 

        public override Task<SmtpResponse> SaveAsync(ISessionContext context, IMessageTransaction transaction, CancellationToken cancellationToken)
        {
            var textMessage = (ITextMessage)transaction.Message;

            var message = MimeKit.MimeMessage.Load(textMessage.Content);
            Console.WriteLine("\n\nNew e-mail received from {0} to {1}.", message.From.ToString(), message.To.ToString());
            Console.WriteLine("HTML: " + message.HtmlBody);
            NotificationRequest request = GenerateRequestFromEmail(message.HtmlBody);
            if (request != null)
            {
                Console.WriteLine("AttributeFullPath: " + request.AttributeFullPath);
                Console.WriteLine("NotificationRuleName: " + request.NotificationRuleName);
                Console.WriteLine("StartTime: " + request.StartTime);
            }
            return Task.FromResult(SmtpResponse.Ok);
        }

        private NotificationRequest GenerateRequestFromEmail(string htmlBody)
        {
            try
            {
                NotificationRequest request = new NotificationRequest();
                var doc = new HtmlDocument();
                doc.LoadHtml(htmlBody);
                // Each <p> inside the e-mail body holds one "Label: value" pair.
                var nodes = doc.DocumentNode.SelectNodes("/html[1]/body[1]/div[1]/p");
                foreach (var node in nodes)
                {
                    string[] texts = node.InnerText.Split(':');
                    if (texts[0].ToLower().Trim() == "name")
                    {
                        request.NotificationRuleName = texts[1].Trim();
                    }
                    if (texts[0].ToLower().Trim() == "start time")
                    {
                        // Timestamps contain colons, so take everything after the label instead of texts[1].
                        request.StartTime = node.InnerText.Replace(texts[0] + ":", string.Empty);
                    }
                    if (texts[0].ToLower().Trim() == "attribute path")
                    {
                        request.AttributeFullPath = texts[1].Trim();
                    }
                }
                return request;
            }
            catch (Exception)
            {
                return null;
            }
        }

 

After recompiling the project in Visual Studio and running the custom SMTP server again, you can see the details of the notification received:

 

 

Conclusion

Although this custom SMTP server shouldn't be used in production, I am sure it can be a good friend for developers and PI admins when they need to test their notifications.

 

Please share your thoughts with me and the community!!

Introduction

 

When I started as a vCampus Support Engineer back in 2012, one of my favorite materials for learning about our PI Developer Technologies was the material written for our vCampus Live! events. Recently, I took a look at the vCampus Live! 2012 workbooks and found the "Develop Custom Delivery Channels for Use with PI Notifications" hands-on lab. That lab showed how to develop a custom delivery channel that would write notifications to the Windows Event Log using PI Notifications 2012, Visual Studio 2012 and .NET Framework 3.5.

 

The new generation of PI Notifications (starting with the 2016 release) uses Event Frames instead of PI Points to store historical data and does not support custom delivery channels. Nevertheless, it integrates with custom web services.

 

In this blog post, I will show you how to develop a custom web service that not only integrates with the newer releases of PI Notifications but also writes notification events to the Windows Event Log. This time we are going to use Visual Studio 2017 and ASP.NET Core 2.1.

 

Creating a delivery endpoint

 

Please make sure that you have PI AF 2018 and PI Notifications 2018 installed on your system. If you are interested in setting up new Notification Rules, please refer to this video.

 

Before starting to code, we need to create a new delivery endpoint as shown in the screenshot below. Note that you should select WebService as the delivery channel; REST is the standard nowadays and it is what we are going to use. We are going to create an action that accepts HTTP POST requests. For this demo, we won't be using any type of authentication, although PI Notifications does support Basic and Windows.

 

 

 

 

 

 

 

Creating the ASP.NET Core project with .NET Framework

 

You can download the source code package of this blog post by accessing this GitHub repository.

 

As explained in this blog post, ASP.NET Core applications can be created with either .NET Framework or .NET Core. We are going to create an ASP.NET Core application using .NET Framework because it is easier to integrate with the Windows Event Log. Let's create the project as in the screenshot below:

 

 

On the second screen, make sure to select ".NET Framework" and "ASP.NET Core 2.1". This code probably does not work with ASP.NET Core 2.0; if you don't see 2.1 listed, update Visual Studio so that newer versions of the platform show up.

 

 

 

 

 

Adding the ASP.NET Core MVC library

 

Open Package Manager Console and type:

 

Install-Package Microsoft.AspNetCore.Mvc

 

This will install the MVC component of ASP.NET Core.

 

 

Editing the Startup.cs

 

Now that we have all the libraries added, we need to enable MVC by editing Startup.cs.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace CustomWebServiceForPINotifications
{
    public class Startup
    {
        public IConfiguration IConfiguration { get; }

        public Startup(IConfiguration configuration)
        {
            IConfiguration = configuration;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseMvc();
        }
    }
}

 

 

I have published some blog posts and videos about using ASP.NET Core with PI AF SDK and Angular. Please refer to the Developing Web Apps on top of the PI System Hub.

 

 

Creating the model

 

Create a new folder in the project root named Models and add a new class called NotificationRequest.cs. This .NET class will be used to deserialize the JSON created by PI Notifications according to the configured WebService endpoint. In this case, the JSON will have 3 properties: NotificationRuleName, AttributeFullPath and StartTime. Please refer to this example in order to learn how to create the Notification Rule programmatically.

 

 

Now that the properties are defined, we can write our NotificationRequest class as:

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace CustomWebServiceForPINotifications.Models
{
    public class NotificationRequest
    {
        public string NotificationRuleName { get; set; }
        public string AttributeFullPath { get; set; }
        public DateTime StartTime { get; set; }
    }
}

 

 

Creating the controller

 

Create a new folder in the project root named Controllers and add a new class called NotificationReceiverController.cs.

 

This controller has two actions: Get and Post. The first action is used to make sure that the web site is up and running properly. The second action has the same code as the hands-on lab used to write text to the Windows Event Log. If the operation is successful, it returns a 201 status code; if not, it returns a bad request error.

 

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using CustomWebServiceForPINotifications.Models;

namespace CustomWebServiceForPINotifications.Controllers
{
    [Route("api/notifications-receiver")]
    [ApiController]
    public class NotificationReceiverController : ControllerBase
    {
        // Simple health check to confirm the web site is up and running.
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new List<int> { 1, 2, 3, 4 });
        }

        [HttpPost]
        public IActionResult Post([FromBody]NotificationRequest request)
        {
            try
            {
                // Create the event source on first use, then log the notification details.
                string sSource = "PI Notifications Delivery Channel Log";
                string sLog = "Application";
                if (!System.Diagnostics.EventLog.SourceExists(sSource))
                    System.Diagnostics.EventLog.CreateEventSource(sSource, sLog);

                string message = String.Format("Notification: {0} triggered for asset {1} at {2}", request.NotificationRuleName,
                    request.AttributeFullPath.ToString(), request.StartTime.ToString());
                System.Diagnostics.EventLog.WriteEntry(sSource, message, System.Diagnostics.EventLogEntryType.Information);
                return StatusCode(201);
            }
            catch (Exception ex)
            {
                return BadRequest(ex);
            }
        }
    }
}



 

Publishing to IIS

 

Now that the code is ready, we should publish the application to IIS. Please refer to the video ASP.NET Core 2 (Web API) and Angular with PI AF SDK: Part 5 - Publishing the application to IIS for more information. You will have to download the ASP.NET Core runtime and make sure the account running the web site has enough privileges on the folder with the web site files.

 

We can easily see if our web service is running as expected by testing it with our browser:

 

 

If there is something wrong, you will receive an error message.
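
You can also exercise the POST action from code; here is a minimal sketch using HttpClient, where the site URL and payload values are placeholders:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class WebServiceTest
{
    static async Task Main()
    {
        // Placeholder payload matching the NotificationRequest model above.
        var json = "{\"NotificationRuleName\":\"MyRule\"," +
                   "\"AttributeFullPath\":\"\\\\\\\\AFServer\\\\DB\\\\Element|Attribute\"," +
                   "\"StartTime\":\"2018-10-01T00:00:00Z\"}";
        using (var client = new HttpClient())
        {
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            var response = await client.PostAsync("http://localhost/api/notifications-receiver", content);
            Console.WriteLine(response.StatusCode); // Expect Created (201)
        }
    }
}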

 

 

Testing the integration between PI Notifications and the Custom Web Service

 

Ok, it is time to test our notification. Let's go back to the Web Service Configuration window and click the "Test Send" button. The result of the operation will be shown at the bottom of the window.

 

 

We can confirm that the operation was successful by opening the Windows Event Viewer on the machine hosting our web service.

 

 

 

 

Conclusion

 

PI Notifications 2016 does not allow developing custom delivery channels, but it does allow you to integrate with custom web services. I hope that this blog post will help you develop great integrations with the new generation of PI Notifications!

Below is a link to the presentation given in Barcelona on September 27, 2018 for the LiveCoding session titled "Getting the Most Out of AFSearch".

 

Link to Recording:     LiveCoding--Getting-the-Most-Out-of-the-AFSearch

 

GitHub Repository:     AF-SDK -PIWorld-EMEA-2018-AFSearch-LiveCoding

 

While some of it is a repeat of what was presented at San Francisco in April, the beginning section features new material on the upcoming AF SDK 2.10.5 release.  The new features discussed:

 

  • OR clauses in searches (see the sketch after this list).
  • FindObjects replaces the now deprecated FindElements, FindEventFrames, FindAttributes, etc.
  • AFSearchToken structure being replaced by new AFSearchBaseToken abstract class with 4 concrete classes: AFSearchFilterToken, AFSearchQueryToken, AFSearchOrToken, and AFSearchExpressionToken.
  • AFSearchToken will be mapped to new AFSearchBaseToken, if possible.
  • If your query uses an OR, the AFSearch.Tokens property will throw an exception.
  • New AFSearch.TokensCollection property is meant to replace Tokens functionality and works with new AFSearchBaseToken.
  • AFSearchToken and AFSearch.Tokens property are both marked as deprecated.
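
Since 2.10.5 had not shipped at the time of writing, the following is only a sketch of what an OR clause with FindObjects might look like, based on the feature list above; the server, database and query values are placeholders and the final API may differ:

using System;
using OSIsoft.AF;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Search;

class AFSearchOrExample
{
    static void Main()
    {
        // Placeholder AF Server and database names.
        PISystem system = new PISystems()["MyAFServer"];
        AFDatabase database = system.Databases["MyDatabase"];

        // An OR between two query filters, new in AF SDK 2.10.5.
        using (var search = new AFElementSearch(database, "OrExample", "Name:'Pump*' OR Template:'Motor'"))
        {
            // FindObjects replaces the deprecated FindElements.
            foreach (AFElement element in search.FindObjects())
                Console.WriteLine(element.GetPath());
        }
    }
}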

We are excited to present the PI World Innovation Hackathon EMEA 2018 winners!

 

DEME kindly provided a sample of their data with sensor, jack-up vessel and soil model information. Participants were encouraged to create killer applications for DEME by leveraging the PI System infrastructure.

 


 

The participants had 23 hours to create an app using any of the following technologies:

  • PI Server 2018
  • PI Web API 2018
  • PI Vision 2017 R2

 

Our judges evaluated each app based on its creativity, technical content, potential business impact, data analysis and insight, and UI/UX. Although it is a tough challenge to create an app in 23 hours, 4 groups were able to finish their apps and present to the judges!

 

Prizes:

1st place: Intel NUC Barebone (Core i3-7100U, 120 GB SSD, 8 GB RAM), a one-year free subscription to PI Developers Club and one free registration at OSIsoft PI World within the next year

2nd place: Bose SoundLink Around-Ear Wireless Headphones II (black), a one-year free subscription to PI Developers Club and a 50% discount on registration at OSIsoft PI World within the next year

3rd place: Raspberry Pi 3 Model B+ Retro Arcade Gaming Kit incl. 2 classic controllers and a one-year free subscription to PI Developers Club

 

 

Without further ado, here are the winners!

 

1st place - AG Solution

 

The team members were: Sergio Hernandez, Juri Krivoruchko and Marc Torralba

 

 


 

Team AG Solution developed an application on top of PI AF SDK and PI Vision. They developed a custom data reference that uses the Stochastic Dual Coordinate Ascent classifier from ML.NET to detect the current state of the vessel based on the direction of state changes.

 

The team used the following technologies:

  • PI AF SDK
  • ML.NET
  • PI Vision

 

Here are some screenshots presented by the AG Solution team!

 

 

 

 

 

 

 

 

2nd place - M.E.S.S

 

The team members were:  David Rodriguez, Leandro Hideo, Alexander Hosefelder and Alexander Dixon

 


 

Team M.E.S.S developed an algorithm in R to detect sensor anomalies using Machine Learning.

 

The team used the following technologies:

  • R
  • PI Web API
  • PI Web API package for R

 

Here are some screenshots presented by M.E.S.S!

 

 

 

 

 

 

3rd place - Werusys Cologne

 

The team members were: Kai Weber, Ansgar Backhaus and Julian Weber

 

 


 

Team Werusys Cologne developed an application to analyze windmill installation case data based on a hidden Markov model.

 

The team used the following technologies:

  • PI Web API
  • Seeq

 

Here are some screenshots presented by Werusys Cologne!

 
