
One of the coolest things about Microsoft SQL Server in the last couple of years is how it has expanded beyond the confines of Windows Server: it can now run on all three major desktop OSes (natively on Windows and Linux, and via Docker on macOS), as well as sit in the cloud.

None of that expansion would have mattered much if downstream clients for SQL Server didn’t also expand their horizons to touch more platforms. And with Microsoft client tools for Linux and Mac, this is no longer an issue.

You can sneak PI Data through this mechanism

We can take advantage of Microsoft SQL Server and OSIsoft PI SQL by adding a linked server to SQL Server that forwards queries to the PI Server. From there we can build SQL views, which open a portal directly into both the PI Data Archive and PI AF. You can also combine your own data stored in SQL with your real-time data. Downstream applications will see normal, everyday recordsets.
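For example, here is a sketch that joins local SQL data against live AF element metadata through the linked server we will create below (the WorkOrders table and its columns are hypothetical):

SELECT wo.WorkOrderID, wo.Status, e.[Name], e.[Template]
  FROM dbo.WorkOrders wo
  JOIN [PISERVER_TEST].[Master].[Element].[Element] e
    ON e.[Name] = wo.EquipmentName
GO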

Here’s a screenshot where I’ve used this technique to pull data directly into Microsoft Excel for Mac. Not only is this data fresh, but I can refresh the query in my worksheet just like I would do in Excel for Windows. The connection from the worksheet is going straight to SQL Server.

ExcelForMac.png

Setup Steps

Set up the PI SQL Data Access Server (RTQP Engine)

Make sure you’ve installed the PI SQL Data Access Server (RTQP Engine), which is in your PI Server 2018 (and later) install kit:

RTQP Install.png

Grab PI SQL Client

Next, you need to get the PI SQL Client kit and install it on the machine where your SQL Server instance runs. You can grab it from the OSIsoft Technical Support Downloads page.

Configure the PI SQL Client provider in Microsoft SQL Server Enterprise Manager

Hop over to SQL-EM and modify the linked server provider to ensure these options are switched on:

pisqlsetup.png

Create a Linked Server connection

By right-clicking on the Linked Servers folder in SQL-EM you can set up any number of linked server connections. Typically, you will want one linked server connection per AF database. Here I’ve set up a connection to NuGreen:

linkedserversetup.png
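If you prefer T-SQL over the dialog, a linked server can also be created with sp_addlinkedserver. This is only a sketch: the exact @srvproduct and @datasrc strings for your AF server and database may differ, so check the PI SQL Client documentation.

EXEC master.dbo.sp_addlinkedserver
    @server = N'PISERVER_TEST',
    @srvproduct = N'PI SQL Client',
    @provider = N'PISQLClient',
    @datasrc = N'MyAFServer\NuGreen'
GO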

Now the fun part: queries!

First, let’s go with a basic query that finds all the pumps in the NuGreen database:

Query

SELECT [ID]
   ,[Name]
   ,[Description]
   ,[Comment]
   ,[Revision]
   ,[HasChildren]
   ,[PrimaryPath]
   ,[Template]
   ,[Created]
   ,[CreatedBy]
   ,[Modified]
   ,[ModifiedBy]
   ,[TemplateID]
  FROM [PISERVER_TEST].[Master].[Element].[Element]
WHERE Name LIKE 'P%'
GO

Simple enough. This yields the following:

pumpssql.png

We can use a SQL database to expose this as a view by wrapping the query in CREATE VIEW.

USE TEST
GO

CREATE VIEW REPORTING_PUMPS 
AS

SELECT [ID]
   ,[Name]
   ,[Description]
   ,[Comment]
   ,[Revision]
   ,[HasChildren]
   ,[PrimaryPath]
   ,[Template]
   ,[Created]
   ,[CreatedBy]
   ,[Modified]
   ,[ModifiedBy]
   ,[TemplateID]
  FROM [PISERVER_TEST].[Master].[Element].[Element]
WHERE Name LIKE 'P%'
GO

Now, when we select everything in the view we get:

reportingpumpsview.png

Perfect. Now that we have PI Server data, we can pull it across to any application that can communicate with Microsoft SQL Server.

Importing PI Server data into Excel for Mac

Now that we have PI Server data exposed through Microsoft SQL Server, it is fairly painless to connect downstream applications that can read recordsets from there. Let's connect to the view we set up.

SQLServerODBCMac.png

Microsoft Excel can import remote SQL datasets from the Data tab. From there you can select New Database Query and then SQL Server ODBC.

odbcconnect.png

Enter your SQL Server credentials and authentication method and click Connect. A “Microsoft Query” window will then appear where you can enter a SQL statement to produce a recordset. It follows the same syntax you would use in SQL-EM.

From here I’ll select the contents of my view. Press Run to execute it on the server and inspect what comes back.
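In this case the statement is just a select against the view we created earlier:

SELECT * FROM TEST.dbo.REPORTING_PUMPS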

querywindow.png

Now you can press Return Data to deposit the results into your Excel worksheet. The connection to SQL Server is preserved when you save the Excel workbook. You can edit it and re-run the query via the Connections button, also on the Data ribbon.

connections.png

Data now refreshes on your terms in your Excel worksheet and your connection details are preserved between document openings.

Caveats

Read-only

Presently, the restrictions that existed with PI OLEDB Enterprise also apply to the latest PI SQL Client and Data Access Server. You cannot post data into your Asset Database via this connection type.

If you attempt to write, expect this error:

Msg 7390, Level 16, State 2, Line 35 The requested operation could not be performed because OLE DB provider “PISQLClient” for linked server “PISERVER_TEST” does not support the required transaction interface.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In my previous articles, I demonstrated using the AF Server container in local Docker host deployments. The implication is that you have to manage the Docker host infrastructure yourself: installation, patching, maintenance and upgrading all have to be done by you manually. This is a significant barrier to getting up and running. As an analogy, imagine you visit another country on vacation and need to get from the airport to the hotel. Would it be better to buy a car (if they even sold one at the airport) and drive to the hotel, or just take a taxi (transport as a service)? The first option requires a much larger initial investment of time and money.

 

For quick demo, training or testing purposes, getting a Docker host up and running requires effort (getting a machine with the right specifications, procuring an OS with Windows container capabilities, patching the OS so that you can use Docker, installing the right edition of Docker) and troubleshooting if things go south (errors during setup or services refusing to start). In the past we had no other choice, so we lived with it. But in this modern era of cloud computing, containers as a service can be a faster and cheaper alternative. Today, I will show you how to operate the AF Server container in the cloud using Azure Container Instances. The first service of its kind, Azure Container Instances is a new Azure service delivering containers with great simplicity and speed; it is a form of serverless containers.

 

Prerequisites

You will need an Azure subscription to follow along with the blog. You can get a free trial account here.

 

Azure CLI

Install the Azure CLI, which is a command line tool for managing Azure resources. It is a small install. Once done, we need to log in:

az login

 

If the CLI can determine your default browser and has access to open it, it will do so and direct you immediately to a sign in page.

Otherwise, follow the instructions on the command line: navigate to https://aka.ms/devicelogin in a browser and enter the authorization code shown.

 

Complete the sign-in via the browser and you will see:

 

Now set your default subscription if you have more than one. If you have only a single subscription on your account, you can skip this step.

az account set -s <subscription name>

 

Create cloud container

We are now ready to create the AF Server cloud container. First create a resource group.

az group create --name resourcegrp -l southeastasia

You can change southeastasia to the location nearest you. Here is the list of locations (remove the spaces when using a name):
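You can also list the locations straight from the CLI:

az account list-locations -o table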

 

Create a file named af.yaml. Replace <username> and <password> with the credentials for pulling the AF Server container image. There are some variables you can configure:

 

afname: The name that you choose for your AF Server.

user: Username to authenticate to your AF Server.

pw: Password to authenticate to your AF Server.

 

af.yaml

apiVersion: '2018-06-01'
name: af
properties:
  containers:
  - name: af
    properties:
      environmentVariables:
      - name: afname
        value: eugeneaf
      - name: user
        value: eugene
      - name: pw
        secureValue: qwert123!
      image: elee3/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  imageRegistryCredentials:
  - server: index.docker.io
    username: <username>
    password: <password>
  ipAddress:
    dnsNameLabel: eleeaf
    ports:
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups

 

Then run this in Azure CLI to create the container.

az container create --resource-group resourcegrp --file af.yaml

The command will return in about 5 minutes.

 

You can check the state of the container.

az container show --resource-group resourcegrp -n af --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table

 

You can check the container logs.

az container logs --resource-group resourcegrp -n af

 

Explore with PSE

You now have an AF Server container in the cloud that can be accessed ANYWHERE as long as there is internet connectivity. You can connect to it with PSE using the FQDN. The credentials to use are those that you specified in af.yaml.

Notice that the name of the AF Server is the value of the afname environment variable that was passed in af.yaml.
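If you would rather connect programmatically than with PSE, here is a minimal AF SDK sketch. It assumes the FQDN produced by the af.yaml above (dnsNameLabel plus the ACI region suffix) and that the server is already in your local known servers table (e.g. added via PSE):

using System;
using System.Net;
using OSIsoft.AF;

class ConnectToCloudAF
{
    static void Main()
    {
        // Look up the cloud AF Server by its FQDN
        PISystem af = new PISystems()["eleeaf.southeastasia.azurecontainer.io"];

        // NTLM authentication with the credentials specified in af.yaml
        af.Connect(new NetworkCredential("eugene", "qwert123!"));
        Console.WriteLine($"Connected to {af.Name}, version {af.ServerVersion}");
    }
}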

 

Run commands in container

If you need to log in to the container to run commands, such as using afdiag, you can do so with

az container exec --resource-group resourcegrp -n af --exec-command "cmd.exe"

 

Clean up

When you are done with the container, you should destroy it so that you won't be paying for it while it is not in use.

az container delete --resource-group resourcegrp -n af

You can check that the resource is deleted by listing your resources.

az resource list

 

Considerations

There are some tricks to hosting a container in the cloud to optimize its deployment time.

 

1. Base OS

The base OS should be one of the three most recent versions of Windows Server 2016 Core. These are cached in Azure Container Instances to speed up deployment. If you want to experience the difference, try pulling elee3/afserver:18 in the create container command above. It will take about 13 minutes, more than twice the 5 minutes needed to pull elee3/afserver:18x. The reason is that the old image with the “18” tag is based on the public SQL Server image, which is 7 months old and doesn't have the latest OS version, so it cannot leverage the caching mechanism. I have rebuilt the image with the “18x” tag based on my own SQL Server image with the latest OS version.

 

2. Image registry location

Hosting the image in Azure Container Registry in the same region that you use to deploy your container also improves deployment time, as it shortens the network path the image travels and therefore the download time. Take note that ACR, unlike Docker Hub, is not free. In my tests, it took 4 minutes to deploy with ACR.
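As a sketch, pushing the image to ACR looks like this (assuming a registry named myacr):

az acr create --resource-group resourcegrp --name myacr --sku Basic
az acr login --name myacr
docker tag elee3/afserver:18x myacr.azurecr.io/afserver:18x
docker push myacr.azurecr.io/afserver:18x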

 

3. Image size

This one is obviously a no-brainer. That's why I am always looking to make my images smaller.

 

Another consideration is the number of containers per container group. In this example, we are creating a single-container group. The current limitation of Windows containers is that we can only create single-container groups. When this limitation is lifted, there will be scenarios where I see value in multi-container groups, such as spinning up sets of containers that complement each other, e.g. a PI Data Archive container, an AF Server container and a PI Analysis Service container in a 3-container group (see the sketch after this paragraph). However, for scenarios such as spinning up two AF Server containers, we should still keep them in separate container groups so that they won't fight for the same port.
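Purely as an illustration of what such a 3-container group could look like once Windows supports it (the image names below are placeholders, not published images):

apiVersion: '2018-06-01'
name: pisystem
properties:
  containers:
  - name: pidataarchive
    properties:
      image: example/pidataarchive:18
      ports:
      - port: 5450
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 2.0
  - name: af
    properties:
      image: example/afserver:18x
      ports:
      - port: 5457
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  - name: analysis
    properties:
      image: example/pianalysis:18
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.0
  ipAddress:
    dnsNameLabel: pisystem
    ports:
    - port: 5450
      protocol: TCP
    - port: 5457
      protocol: TCP
    type: Public
  osType: Windows
type: Microsoft.ContainerInstance/containerGroups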

 

Limitations

Kerberos authentication is not supported in a cloud environment. We are using NTLM authentication in this example.

 

Conclusion

Deploying the AF Server container to Azure Container Instances might not be as fast as deploying it to a local Docker host, but it is cheaper than the upfront time and cost of setting up your own Docker host. This makes it ideal for demo/training/testing scenarios. Containers are billed per second, so you only pay for what you use. That is like paying only for your trip from the airport to the hotel, without anything extra.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

AF Server 2018 was released on 27 June 2018! Let's take a look at some of the new features that are available. The following list is not exhaustive.

  • AF Server Connection information is now available for administrative users.
  • A new UOM class, Computer Storage, is provided. The canonical UOM is byte (b), with multiples of both 1000 and 1024.
  • AFElementSearch and AFEventFrameSearch now support searching for elements and event frames by attribute values without having to specify a template.

What's new in AFSearch 2.10 (PI AF 2018)

  • The AFDiag utility has been enhanced to allow bulk deletes of event frames by database and/or template and within a specified time range.

 

Here are also some articles that talk about other new features in AF 2018.

Mass Event Frame Deletion in AF SDK 2.10

DisplayDigits Exposed in AF 2018 / AF SDK 2.10

What's new in AF 2018 (2.10) OSIsoft.AF.PI Namespace

Introducing the AFSession Structure

 

To take advantage of these new features, we will need to upgrade to the AF Server 2018 container. Let me demonstrate how we can do that.

 

Create 2017R2 container and inject data

The steps for creating the container can be found in Spin up AF Server container (SQL Server included). I will use af17 as the name in this example.

docker run -di --hostname af17 --name af17 elee3/afserver:17R2

 

Now, we can create some elements, attributes and event frames.

We will also list the version to confirm it is 2017R2 (2.9.5.8368).
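If you prefer to script this check, a short AF SDK snippet in PowerShell can print the version. This is just a sketch; it assumes the af17 hostname resolves and that the server is in your known servers table:

[Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null
$af = (New-Object OSIsoft.AF.PISystems)["af17"]
$af.Connect()
# Should print 2.9.5.8368 for AF Server 2017 R2
Write-Host "$($af.Name) version: $($af.ServerVersion)"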

 

Pull 2018 image

We can use the following command to pull down the 2018 image.

docker pull elee3/afserver:18

 

The credentials required are the same as for the 2017R2 image. Check the digest to make sure the image is correct.

18: digest: sha256:99e091dc846d2afbc8ac3c1ec4dcf847c7d3e6bb0e3945718f00e3f4deffe073

 

Upgrade from 2017R2 to 2018

Create an empty folder, open PowerShell, navigate to that folder and run the following commands.

Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/afbackup.bat" -UseBasicParsing -OutFile afbackup.bat
Invoke-WebRequest "https://raw.githubusercontent.com/elee3/AF-Server-container-build/master/upgradeto18.bat" -UseBasicParsing -OutFile upgradeto18.bat
.\upgradeto18.bat af17 af18

 

Wait a short moment for your AF Server 2018 container to be ready. In this example, I will give it the name af18.

 

Verification

Now we can check that the element, attribute and event frame that we created earlier in the 2017R2 container have persisted to the 2018 container. First, let's connect to af18 with PSE. Upon successful connection, notice that the name and ID of the AF Server 2017R2 are retained.

 

 

Our element, attribute and event frame are all persisted.

Finally, we can see that the version has been upgraded to 2018 (2.10.0.8628).

 

Congratulations. You have successfully upgraded to the AF Server 2018 container and retained your data.

 

Rollback

If you want to rollback to the AF Server 2017R2 container, you will need to use the backup that was automatically generated and stored in the folder

C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup

docker rm -f af17
docker exec af18 cmd /c "copy /b "C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\PIAFSqlBackup*.bak" c:\db\PIFD.bak"
docker run -d -h af17 --name af17 --volumes-from af18 elee3/afserver:17R2

 

Once a PIFD database is upgraded, it is impossible to downgrade it, as stated here: "a downgrade of the PIFD database will not be possible". This means that data entered after the upgrade cannot be carried back during a rollback.

 

Explore new features

Computer Storage UOM

AF Server Connections history

Bulk deletes of event frames by database and/or template and within a specified time range
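For instance, here is a quick AF SDK sketch (an illustration assuming a default PI System connection, not code from the original post) that looks up the new Computer Storage UOM class and prints its units:

using System;
using OSIsoft.AF;
using OSIsoft.AF.UnitsOfMeasure;

class ComputerStorageUOM
{
    static void Main()
    {
        PISystem sys = new PISystems().DefaultPISystem;
        sys.Connect();

        // Computer Storage is the new UOM class in AF 2018; its canonical UOM is byte (b)
        UOMClass storage = sys.UOMDatabase.UOMClasses["Computer Storage"];
        foreach (UOM uom in storage.UOMs)
            Console.WriteLine($"{uom.Name} ({uom.Abbreviation})");
    }
}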

 

Conclusion

Now that the AF Server container has at least two versions available (2017R2 and 2018), you can really start to appreciate its usefulness for testing the compatibility of your applications against two different versions of the server. In the past, you would need to create two large VMs to host two AF Servers. Those days are over. You can realize immediate savings in storage space and memory. We will look into bringing these containers to some cloud offerings in future articles.

Introduction

 

After almost 4 years, it is time to rewrite and update my old ASP.NET MVC 5 blog post, since the recommended Microsoft technology for web development nowadays is ASP.NET Core MVC 2.

 

But what is ASP.NET Core MVC?

 

According to this web page, the ASP.NET Core MVC framework is a lightweight, open source, highly testable presentation framework optimized for use with ASP.NET Core.

ASP.NET Core MVC provides a patterns-based way to build dynamic websites that enables a clean separation of concerns. It gives you full control over markup, supports TDD-friendly development and uses the latest web standards.

 

Can ASP.NET Core MVC be used together with PI AF SDK?

 

Since PI AF SDK is compiled against the .NET Framework, one could argue that it is not compatible with ASP.NET Core, since it was not compiled against .NET Core. Nevertheless, you can create an ASP.NET Core MVC project against either .NET Core or the .NET Framework. If you choose the second option, you can add PI AF SDK as a reference to your project.

 

Requirements

 

If you want to follow along with this blog post, please make sure the following software is installed on your computer:

  • Visual Studio 2017 Update 3
  • PI AF SDK 2017 R2+

 

 

Source code package

You can have access to the source code package related to this blog post in this GitHub repository.

 

Creating the project

 

Let's see how this works. Open Visual Studio and create a new project from scratch. Select the Web item on the left pane and then "ASP.NET Core Web Application". Then write "ASPNETCoreWithAFSDKPart1" as the name of the project and click the "OK" button.

 

 

Figure 1 – Creating a new ASP.NET Core MVC project in Visual Studio.

 

 

On the next window, make sure that ".NET Framework" and "ASP.NET Core 2.0" are selected. Next, select the Empty template and then click "OK". Figure 2 shows the screenshot for selecting the template. We have chosen the Empty template instead of the MVC template because the latter comes not only with the MVC binaries but also with some Controllers, Views, Layouts and the ASP.NET Identity pieces responsible for authentication and authorization on your web site. As we are not going to use many of those default files, Empty seemed the better option.

 

 

 

 

Figure 2 – Selecting the ASP.NET Core template.

 

If you don't see those options on this window, make sure you have an updated version of Visual Studio 2017.

 

Right-click on Dependencies and then "Add Reference...". Browse to the PI AF SDK binary and add it to your project.

 

Figure 3 – Adding PI AF SDK to your project

 

Before we start coding, make sure the target framework is .NET Framework and not .NET Core. Right-click on the project and select Properties to verify.

 

Figure 4 – VS project properties

 

Add NuGet libraries

 

Open the ASPNETCoreWithAFSDKPart1.csproj file and update it with the following content.

 

<Project Sdk="Microsoft.NET.Sdk.Web">


  <PropertyGroup>
    <TargetFramework>net472</TargetFramework>
  </PropertyGroup>


  <ItemGroup>
    <Content Remove="bower.json" />
  </ItemGroup>


  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>


  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="2.0.3" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.0.3" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="2.0.3" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.0.2" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.0.2" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="2.0.2" />
    <PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="2.0.2" />
    <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.0.3" />
  </ItemGroup>


  <ItemGroup>
    <Reference Include="OSIsoft.AFSDK">
      <HintPath>..\..\..\..\..\Program Files (x86)\PIPC\AF\PublicAssemblies\4.0\OSIsoft.AFSDK.dll</HintPath>
    </Reference>
  </ItemGroup>


  <ItemGroup>
    <Content Update="Views\Shared\_Layout.cshtml">
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </Content>
  </ItemGroup>


</Project>

 

Save and recompile this project. The missing libraries will be retrieved from the internet and will be added to your project automatically.

 

Configure the HTTP requests pipeline

 

Open the Startup.cs file and edit it according to the content below:

 

 public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }


        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            app.UseDeveloperExceptionPage();
            app.UseStatusCodePages();
            app.UseStaticFiles();
            app.UseMvcWithDefaultRoute();
        }
    }

 

This will make sure that:

  • MVC is enabled.
  • Static files are enabled.
  • Developer exception pages are shown if an exception is thrown (don't use this in production).

 

Setting up Bower and adding Bootstrap

 

We are going to use Bootstrap 3 to style our web site. Although there are better options nowadays, we are going to use Bower to download Bootstrap and jQuery. In the VS project root, create two files: .bowerrc and bower.json.

 

.bowerrc

{
  "directory": "wwwroot/lib"
}

 

bower.json

 

{
  "name": "ASPNETCOreWithAFSDKPart1",
  "private": true,
  "dependencies": {
    "bootstrap": "3.3.7"
  }
}

 

The first file tells the system to place the retrieved libraries in the wwwroot/lib folder. The second file lists all the dependencies.

 

Open the command prompt and change directory to the VS project folder. Then run "bower install" to install Bootstrap 3.3.7 and jQuery (both are JavaScript libraries), according to the dependencies listed in the bower.json file.

 

Creating the Home Controller

 

Let's create a new folder called Controllers and our home controller (new file HomeController.cs) with the following code:

 

using System.Collections.Generic;
using System.Linq;
using ASPNETCoreWithAFSDKPart1.Models;
using Microsoft.AspNetCore.Mvc;
using OSIsoft.AF.PI;


namespace ASPNETCoreWithAFSDKPart1.Controllers
{
    public class HomeController : Controller
    {
        public IActionResult Index()
        {
            return this.RedirectToAction("Query");
        }


        public IActionResult Query()
        {
            return View();
        }


        public IActionResult Result(string pointNameQuery, string pointSourceQuery)
        {
            PIServer piServer = new PIServers()["MARC-PI2016"];
            piServer.Connect();


            PIPointQuery query1 = new PIPointQuery(PICommonPointAttributes.Tag, OSIsoft.AF.Search.AFSearchOperator.Equal, pointNameQuery);
            PIPointQuery query2 = new PIPointQuery(PICommonPointAttributes.PointSource, OSIsoft.AF.Search.AFSearchOperator.Equal, pointSourceQuery);
            IEnumerable<PIPoint> foundPoints = PIPoint.FindPIPoints(piServer, new PIPointQuery[] { query1, query2 });
            PIPointList pointList = new PIPointList(foundPoints.Take(1000));


            List<PIPointSnapshotModelView> pointSnapshotValueList = new List<PIPointSnapshotModelView>();
            foreach (PIPoint point in pointList)
            {
                pointSnapshotValueList.Add(new PIPointSnapshotModelView(point.Name.ToString(), point.CurrentValue()));
            }


            return View(pointSnapshotValueList.AsEnumerable());
        }
    }
}

 

When the web site starts, it loads the Index() action, which redirects to the Query() action. This action displays a page with a form. When the user submits the form, the Result action is called. PI AF SDK searches for PI Points matching the given conditions with the help of the PIPointQuery class. The results are converted to an IEnumerable<PIPointSnapshotModelView>.

 

 

Creating the Model

 

The model has all the information needed for display to the user (PI Point name, value and timestamp). Please create a new file PIPointSnapshotModelView.cs in the Models folder. Create the Models folder if it doesn't exist.

 

using OSIsoft.AF.Asset;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;


namespace ASPNETCoreWithAFSDKPart1.Models
{
    public class PIPointSnapshotModelView
    {
        public string PointName { get; set; }
        public object Value { get; set; }
        public DateTime TimeStamp { get; set; }


        public PIPointSnapshotModelView(string pointName, AFValue afValue)
        {
            PointName = pointName;
            Value = afValue.Value;
            TimeStamp = afValue.Timestamp;
        }
    }
}

 

 

Creating the Shared View

 

Please create a new folder in the root called Views. Then create 3 files with the content below:

 

_ViewStart.cshtml

@{
    Layout = "_Layout";
}

 

 

_ViewImports.cshtml

@using  ASPNETCoreWithAFSDKPart1.Models
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

 

Shared\_Layout.cshtml (don't forget to create the Views\Shared folder first!)

@model IEnumerable<ASPNETCoreWithAFSDKPart1.Models.PIPointSnapshotModelView>
@{
    Layout = null;
}


<!DOCTYPE html>


<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Results</title>
    <link href="~/lib/tether/dist/css/tether.css" rel="stylesheet" />
    <link href="~/lib/bootstrap/dist/css/bootstrap.css" rel="stylesheet" />
    <script src="~/lib/tether/dist/js/tether.js"></script>
    <script src="~/lib/jquery/dist/jquery.js"></script>
    <script src="~/lib/bootstrap/dist/js/bootstrap.js"></script>
</head>
<body>
    @RenderBody()
</body>
</html>

 

Notes:

  • _ViewStart file can be used to define common view code that you want to execute at the start of each View's rendering.
  • _ViewImports.cshtml serves one major purpose: to provide namespaces which can be used by all other views.
  • _Layout: the master layout that will be used by Query.cshtml and Result.cshtml.

 

Creating the Views

 

Now, we just need to create Query.cshtml and Result.cshtml to finish our application!

 

Query.cshtml


<div class="col-sm-8 col-sm-offset-2">
    <h1>My first ASP.NET Core MVC with PI AF SDK</h1>
    <br /><br />
    <h4>Search for PI Points and check its snapshots.</h4>
    <br /><br />
    <div class="container">
        <div class="row">
            <form asp-controller="Home" asp-action="Result" method="get" class="form-inline">
                <div class="form-group">
                    <label class="sr-only" for="name">Search PI Point: </label>
                    <input id="pointNameQuery" name="pointNameQuery"  placeholder="Point Name" class="form-control" />
                </div>
                <div class="form-group">
                    <label class="sr-only" for="inputfile">With this Point Source: </label>
                    <input id="pointSourceQuery" name="pointSourceQuery" placeholder="Point Source" class="form-control" />
                </div>
                <button type="submit" class="btn btn-default">Submit</button>
            </form>
        </div>
    </div>
</div>

 

 

Result.cshtml

 

@model IEnumerable<PIPointSnapshotModelView>






<div class="col-sm-8 col-sm-offset-2">
    <h1>My first ASP.NET Core MVC with PI AF SDK</h1>
    <br /><br />
    <table class="table">
        <tr>
            <th>PI Point</th>
            <th>Value</th>
            <th>Timestamp</th>
        </tr>
        @foreach (var pointSnapshotValue in Model)
            {
        <tr>
            <td>@pointSnapshotValue.PointName</td>
            <td>@pointSnapshotValue.Value.ToString()</td>
            <td>@pointSnapshotValue.TimeStamp.ToString()</td>
        </tr>
            }
    </table>
    <br />
    <center>
        <a asp-controller="Home" asp-action="Index" class="btn btn-primary">Make another search</a>
    </center>
</div>



 

 

Testing the web app

 

We have finished writing our ASP.NET Core MVC web application. It is time to test. When you run this web application on Visual Studio, you should see the Query page below.

 

 

Figure 5 – Screenshot of the Query page from our web application

 

Figure 6 shows a screenshot where we search for all PI Points matching "sin*d" from any PI Point Source. After clicking Submit, the Result page shows the table of PI Points found.

 

 

Figure 6 – Screenshot of the Result page from our web application

 

Note that the design of the page has improved using Bootstrap by adding only a few words to the View. The query string defines the input parameters, so you can edit them directly in the URI. If you click "Make another search" it goes back to the previous page.

 

Conclusions

If you are a PI developer who needs to build a web application on top of the PI System, ASP.NET Core MVC with PI AF SDK is a great option. Nevertheless, there are other options as well, which we are going to see in my upcoming blog posts. Stay tuned!

Got a bunch of Event Frames lying around?

 

Getting rid of old event frames in your database is now made much easier in AF SDK 2.10 with the new DeleteEventFrames() method that is hanging off the AFEventFrame class.

 

All you need to do is collect a list of event frame object IDs and pass it to DeleteEventFrames(). You do this by performing an event frame search. The AFEventFrameSearch object supports returning the located event frames by object ID only, which you can use to mass delete them.

 

Why might I want to do this?

 

In some shops you might be creating Event Frames at a significant rate, not from alarms and notifications but as part of an eventing process. After those event frames run through your natural business process, you may no longer want to keep them around. It makes sense, then, to set up a deletion schedule (once a week is recommended) to purge irrelevant data.

 

 

Make Sure To Use FindObjectIds() Where You Can

 

 

If you're doing regular mass deletions of objects, it is far more efficient to limit your search returns to the AFObject IDs. Not only does this reduce network traffic, but the underlying database retrievals go against well-indexed columns, which results in fast returns. By issuing mass deletes in page blocks (see the pageSize constant) you can loop through an unlimited number of event frames until none are left. If your AFDatabase undergoes lots of writes and your batch deletes need to happen when the server is busy, you can reduce the page size and introduce waits between each batch delete block to reduce the amount of locking in the underlying SQL database.

 

Observe:

using System;
using System.Linq;
using System.Collections.Generic;
using OSIsoft.AF;
using OSIsoft.AF.EventFrame;
using OSIsoft.AF.Search;

namespace MassEFDelete
{
    class Program
    {
        static void Main(string[] args)
        {
            const int pageSize = 200000;

            PISystems pi = new PISystems();
            PISystem sys = pi.DefaultPISystem;
            AFDatabase db = sys.Databases["Chris"];

            // Every event frame where this AFElement is a factor
            string query = "Element:CHRISMACAIR";
            var search = new AFEventFrameSearch(db, "CPU Search", query);

            // Use this if you really need to look inside the event frames before
            // deleting them, else use the much faster FindObjectIds() method:
            //var frames = search.FindEventFrames();

            List<Guid> ids = null;
            while ((ids = search.FindObjectIds(pageSize: pageSize).Take(pageSize).ToList()).Count > 0)
            {
                Console.WriteLine($"Deleting a batch of {ids.Count} Event Frames..");
                AFEventFrame.DeleteEventFrames(sys, ids);
            }

            Console.WriteLine("Completed.");
        }
    }
}

 

Many of you have been frustrated with clearing event frames out in batch. Now you can conduct a lightweight search (fullLoad=false), grab the IDs out of the search, and feed them into a bulk delete all at once.

I just needed to merge some data from one PI Point into another PI Point on the same PI Server.

 

Background:

The PI OPC UA Connector had been running for a while before I followed the PI Connector Mapping Guide and routed the output of the OPC UA Connector to the PI Points previously served by OPC DA.

 

This Powershell script uses AFSDK to get the events from PI Point "sinusoid" and write these into PI Point "testtag" for the last hour.

 

[Reflection.Assembly]::LoadWithPartialName("OSIsoft.AFSDK") | Out-Null
[OSIsoft.AF.PI.PIServers] $piSrvs = New-Object OSIsoft.AF.PI.PIServers
[OSIsoft.AF.PI.PIServer] $piSrv = $piSrvs.DefaultPIServer
[OSIsoft.AF.PI.PIPoint] $piPoint = [OSIsoft.AF.PI.PIPoint]::FindPIPoint($piSrv, "SINUSOID")
[OSIsoft.AF.PI.PIPoint] $piPoint2 = [OSIsoft.AF.PI.PIPoint]::FindPIPoint($piSrv, "testtag")
[OSIsoft.AF.Time.AFTimeRange] $timeRange = New-Object OSIsoft.AF.Time.AFTimeRange("*-1h", "*")
[OSIsoft.AF.Asset.AFValues] $piValues = $piPoint.RecordedValues($timeRange, [OSIsoft.AF.Data.AFBoundaryType]::Inside, $null, $true, 0)

# Copy each archived event from SINUSOID into testtag
foreach ($val in $piValues)
{
    Write-Host $val.Value " at " $val.Timestamp
    $piPoint2.UpdateValue($val, [OSIsoft.AF.Data.AFUpdateOption]::Insert)
}

I am happy to announce that the client libraries below were updated using the new PI Web API 2018 Swagger specification. All the new methods from PI Web API 2018 are available for you to use on the client side! The StreamUpdate-related methods (still in CTP) were also added to those projects.

 

 

If you have questions, please don't hesitate to ask!

In this blog we will have a look at the new features included in the OSIsoft.AF.PI Namespace, namely

  • Methods added to find PIPoints in bulk from a list of IDs.
  • A change event that can be checked to determine whether it was the result of a rename.

 

Finding PI Points using IDs

The PIPoint.FindPIPoints(PIServer, IEnumerable<Int32>, IEnumerable<String>) and PIPoint.FindPIPointsAsync(PIServer, IEnumerable<Int32>, IEnumerable<String>, CancellationToken) methods were added to find PIPoints in bulk from a list of IDs. This helps to retrieve a list of PIPoint objects from the specified point IDs without having to know the PI Point names a priori.

 

Snippets:

var piDataArchive = new PIServers()[serverName];
var pointIDs = new List<int>() { 1, 3, 4, 6, 10000 };
var pointAttributes = new List<string>() { "Name", "PointType", "PointID" };
IList<PIPoint> myPoints = FindMyPIPoints(piDataArchive, pointIDs, pointAttributes);

 

private static IList<PIPoint> FindMyPIPoints(PIServer piDataArchive, IList<int> pointIDs, IList<string> pointAttributes)
{
    return PIPoint.FindPIPoints(piServer: piDataArchive,
                                ids: pointIDs,
                                attributeNames: pointAttributes);
}

 

Asynchronous method (this call might use a background task to complete some of its work; see the Threading Overview for considerations when execution transitions to another thread):

var tokenSource = new CancellationTokenSource();
var token = tokenSource.Token;
var findpointstask = FindMyPointsAsync(piDataArchive, pointIDs, pointAttributes, token);

 

private static async Task<IList<PIPoint>> FindMyPointsAsync(PIServer piDataArchive, IList<int> pointIDs, IList<string> pointAttributes, CancellationToken token)
{
    return await PIPoint.FindPIPointsAsync(piServer: piDataArchive,
                                           ids: pointIDs,
                                           attributeNames: pointAttributes,
                                           cancellationToken: token);
}

 

NOTE:

  • attributeNames are the PIPoint attribute names to be loaded from the server as the PIPoint objects are found. The GetAttribute(String) method can be used to access the loaded attribute values. If null, no attribute values are loaded for the returned PIPoints.
  • Asynchronous methods throw AggregateException on failure, which will contain one or more inner exceptions describing the failure.
  • A cancellation token is used to abort processing before completion. Passing the default CancellationToken.None will run to completion or until the PIConnectionInfo.OperationTimeOut period elapses.

 

Check if PI Point has been renamed

The PIPointChangeInfo.IsRenameEvent method is used to check whether a PIPointChangeInfo instance is a rename event. Previously it was not possible to obtain this information through the AF SDK, and it required further digging into the PI Data Archive.

 

Method Signature

public bool IsRenameEvent(
               out string oldName,
               out string newName
)

 

Implementation through PIServer.FindChangedPIPoints Method

IList<PIPointChangeInfo> changes = piDataArchive.FindChangedPIPoints(largeCount, cookie, out cookie, null);
if (!(changes is null))
{
    foreach (PIPointChangeInfo change in changes)
    {
        if (change.IsRenameEvent(out string oldname, out string newname))
        {
            Console.WriteLine($"\tRenamed New name: {newname} Old name: {oldname} ");
        }
    }
}

 

NOTE:

  • The PIServer.FindChangedPIPoints method involves various aspects like cookies, filter points and the PIPointChangeInfo structure, which the reader is encouraged to explore in detail through the AF SDK documentation.
  • Cookie: use the return from a previous call to this method to find all changes since the last call. Pass null to begin monitoring for changes to PI Points on the PIServer. Pass an AFTime to find all changes since a particular time.
  • filterPoints (optional): a list of PIPoint objects for which the resulting changes should be filtered.

 

Code demonstrating the use of the above methods in the form of a simple console application can be obtained at: GitHub - ThyagOSI/WhatsNewAF2018

A sample output from the console application

 

Conclusion

Hope this post helps you, the PI developer, to explore these methods further and implement them in your future projects.

Please feel free to provide any feedback you may have on the new methods, their documentation or this blog post.

Introduction

 

PI Web API 2018 was released in late June with some interesting features. In this blog post, I will show you some of those features using the PI Web API client library for .NET Standard which was already upgraded to 2018.

 

According to the Changelog of the PI Web API 2018 help, the new features of PI Web API 2018 are listed below:

 

  • Read for notification contact templates, notification rules, notification rule subscribers and notification rule templates
  • Read, create, update and delete for annotation attachments on event frames
  • Retrieve relative paths of an element
  • Retrieve full inheritance branch of an element template
  • Allow reading annotations of a stream using the "associations" parameter
  • Allow showing all child attribute templates of an element template
  • Add parameter support for tables
  • Filter attributes by trait and trait category
  • Support health traits
  • Incrementally receive updates from streams with Stream Updates (CTP)

Data model changes:

  • Expose 'ServerTime' property on objects of asset server, data server and system
  • Expose 'DisplayDigits', 'Span' and 'Zero' properties on objects of attribute and point
  • Expose 'DefaultUnitsNameAbbreviation' property on objects of attribute and attribute template

 

You can have an idea of the new features by looking at the list above but showing some code snippets will help you understand better.

 

Please visit the GitHub repository to download the source code package used in this blog post.

 

Demo

 

Before we start, please create a .NET Framework or .NET Core console application and add the OSIsoft.PIDevClub.PIWebApiClient according to the README.MD file of the client library repository if you want to try yourself.

 

 

Read for notification contact templates, notification rules, notification rule subscribers and notification rule templates

 

PI Web API 2018 comes with 4 new controllers (NotificationContactTemplate, NotificationRule, NotificationRuleSubscriber and NotificationRuleTemplate) with read access to the notification objects.

 

For this demonstration, I have created a Notification Rule to make sure that it is accessible through the client.NotificationRule.GetNotificationRulesQuery() action.

 

 

            PIAssetDatabase db = client.AssetDatabase.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018");
            PIItemsNotificationRule notificationRules = client.NotificationRule.GetNotificationRulesQuery(db.WebId, query: "Name:=Not*");
            Console.WriteLine($"Found {notificationRules.Items.Count}");

 

HTTP Request: GET - /piwebapi/notificationrules/search?databaseWebId=F1RDbvBs-758HkKnOzuxolJRlgBW-drMj-Q0eZlG9e0V3JigTUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOA&query=Name%3a%3dNot*

 

Running this application, the result is:

 

 

Read, create, update and delete for annotation attachments on event frames

 

I have manually created a new Event Frame (EF) using PI System Explorer. I want to retrieve the same EF programmatically and add an annotation with a value of "EF Test" through the CreateAnnotationWithHttpInfo() method of the client library. I've checked the status code to make sure the request succeeded.

 

 

            PIAssetDatabase db = client.AssetDatabase.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018");
            PIItemsEventFrame efs = client.AssetDatabase.GetEventFrames(db.WebId);
            PIEventFrame ef = efs.Items.First();
            PIAnnotation piAnnotation = new PIAnnotation();
            piAnnotation.Value = "EF Test";
            ApiResponse<object> result = client.EventFrame.CreateAnnotationWithHttpInfo(ef.WebId, piAnnotation);
            if (result.StatusCode < 300)
            {
                Console.WriteLine("Annotation in EF was created successfully!");
            }


            PIItemsAnnotation piAnnotations = client.EventFrame.GetAnnotations(ef.WebId);
            foreach (PIAnnotation annotation in piAnnotations.Items)
            {
                Console.WriteLine($"Annotation from Event Frame: {annotation.Value}");
            }

 

HTTP Request: POST - /piwebapi/eventframes/F1FmbvBs-758HkKnOzuxolJRlgbFJEBGSE6BGbywAVXQAeEATUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFVkVOVEZSQU1FU1tFRjIwMTgwNzEwLTAwMV0/annotations

 

I can see the created annotation in PI System Explorer. Just right-click on the EF and then click "Annotate...". You will be able to see all the annotations generated programmatically.

 

 

Retrieve relative paths of an element

 

PI Web API 2018 allows you to get a list of the full or relative paths to an element. Let's see how this works. Please refer to the AF tree below.

 

 

 

Using the code snippet below.

 

            PIElement element = client.Element.GetByPath(@"\\MARC-PI2016\AFPartnerCourseWeather\Cities\Chicago");
            PIItemsstring relativePath = client.Element.GetPaths(element.WebId, @"\\MARC-PI2016\AFPartnerCourseWeather");
            Console.WriteLine($"The Relative Path is {relativePath.Items.First()}.");

 

The second line calls GetPaths() with the second input set to "\\MARC-PI2016\AFPartnerCourseWeather", the full path of one of the element's parents, in this case the AF database.

 

HTTP Request: GET -  /piwebapi/elements/F1EmbvBs-758HkKnOzuxolJRlg-Jnwgsge5hGAwwAVXX0eAQTUFSQy1QSTIwMTZcQUZQQVJUTkVSQ09VUlNFV0VBVEhFUlxDSVRJRVNcQ0hJQ0FHTw/paths?relativePath=%5c%5cMARC-PI2016%5cAFPartnerCourseWeather

 

The result is the element's relative path, that is, its path relative to the path of one of its parents.

 

 

Retrieve full inheritance branch of an element template

 

In this example I have created 3 element templates and 1 element:

  • TemplateA
  • TemplateAA derived from TemplateA
  • TemplateAAA derived from TemplateAA.

 

 

PI Web API provides two interesting methods, GetBaseElementTemplates() and GetDerivedElementTemplates(), to get the base and derived element templates of an element template. Let's see how this works:

 

            PIElementTemplate templateA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateA]");
            PIElementTemplate templateAA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateAA]");
            PIElementTemplate templateAAA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateAAA]");


            PIItemsElementTemplate baseTemplatesFromTemplateA = client.ElementTemplate.GetBaseElementTemplates(templateA.WebId);           
            PIItemsElementTemplate baseTemplatesFromTemplateAA = client.ElementTemplate.GetBaseElementTemplates(templateAA.WebId);
            PIItemsElementTemplate baseTemplatesFromTemplateAAA = client.ElementTemplate.GetBaseElementTemplates(templateAAA.WebId);


            Console.WriteLine($"There are {baseTemplatesFromTemplateA.Items.Count} base templates in Element Template A");
            Console.WriteLine($"There are {baseTemplatesFromTemplateAA.Items.Count} base templates in Element Template AA");
            Console.WriteLine($"There are {baseTemplatesFromTemplateAAA.Items.Count} base templates in Element Template AAA");




            PIItemsElementTemplate derivedTemplatesFromTemplateA = client.ElementTemplate.GetDerivedElementTemplates(templateA.WebId);
            PIItemsElementTemplate derivedTemplatesFromTemplateAA = client.ElementTemplate.GetDerivedElementTemplates(templateAA.WebId);
            PIItemsElementTemplate derivedTemplatesFromTemplateAAA = client.ElementTemplate.GetDerivedElementTemplates(templateAAA.WebId);


            Console.WriteLine($"There are {derivedTemplatesFromTemplateA.Items.Count} derived templates in Element Template A");
            Console.WriteLine($"There are {derivedTemplatesFromTemplateAA.Items.Count} derived templates in Element Template AA");
            Console.WriteLine($"There are {derivedTemplatesFromTemplateAAA.Items.Count} derived templates in Element Template AAA");

 

HTTP Request: GET - /piwebapi/elementtemplates/F1ETbvBs-758HkKnOzuxolJRlg-av52dhufE6iHqLJJk5faATUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFTEVNRU5UVEVNUExBVEVTW1RFTVBMQVRFQV0/baseelementtemplates

HTTP Request GET - /piwebapi/elementtemplates/F1ETbvBs-758HkKnOzuxolJRlg-av52dhufE6iHqLJJk5faATUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFTEVNRU5UVEVNUExBVEVTW1RFTVBMQVRFQV0/derivedelementtemplates

 

Running the application, the results are:

 

 

Allow reading annotations of a stream using the "associations" parameter

 

Some of the actions on the Stream and StreamSets controllers now accept an optional associations parameter. If this input is null, the values are retrieved without their annotations. If associations is set to "Annotations", the annotations are retrieved as well. Let's take a look at an example using the Stream.GetRecorded() method.

 

 

            PIPoint point1 = client.Point.GetByPath($"\\\\{piDataArchive.Name}\\SINUSOID");
            PIExtendedTimedValues valuesWithAnnotation = client.Stream.GetRecorded(point1.WebId, "Annotations");
            IEnumerable<List<PIStreamAnnotation>> annotationsList = valuesWithAnnotation.Items.Where(v => v.Annotated == true).Select(v => v.Annotations);
            foreach (List<PIStreamAnnotation> annotations in annotationsList)
            {
                foreach (PIStreamAnnotation annotation in annotations)
                {
                    Console.WriteLine($"Showing annotation: {annotation.Value}, {annotation.ModifyDate}");
                }
            }

 

HTTP Request: GET  /piwebapi/streams/F1DPQuorgJ0MskeiLb6TmEmH5gAQAAAATUFSQy1QSTIwMTZcU0lOVVNPSUQ/recorded?associations=Annotations

 

The result is shown below:

 

 

 

Allow showing all child attribute templates of an element template

 

In this demonstration, I have created a new attribute template called ParentAttribute on the TemplateAAA element template, with 2 child attribute templates named ChildAttribute1 and ChildAttribute2.

 

 

In previous versions of PI Web API it was not possible to access the child attribute templates of an attribute template. This is now possible in the 2018 version:

 

            PIElementTemplate templateAAA = client.ElementTemplate.GetByPath("\\\\MARC-PI2016\\WhatsNewInPIWebAPI2018\\ElementTemplates[TemplateAAA]");
            PIItemsAttributeTemplate attributesWithChild = client.ElementTemplate.GetAttributeTemplates(templateAAA.WebId, showDescendants: true);
            foreach (PIAttributeTemplate attributeTemplate in attributesWithChild.Items)
            {
                Console.WriteLine($"Showing attribute template -  Path: {attributeTemplate.Path}, HasChildren:{attributeTemplate.HasChildren}");
            }

 

HTTP Request:  /piwebapi/elementtemplates/F1ETbvBs-758HkKnOzuxolJRlgoPGTVEM9d0S3M9GObE-kagTUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxFTEVNRU5UVEVNUExBVEVTW1RFTVBMQVRFQUFBXQ/attributetemplates?showDescendants=True

 

The results are shown below:

 

 

Filter attributes by trait and trait category

 

PI Web API allows you to get the attribute traits of an attribute through two new inputs on the GetAttributes() method: trait and traitCategory. I have created the attribute traits for the MainAttribute.

 

Please refer to the code below.

 

            PIAttribute mainAttribute = client.Attribute.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018\AttributeTraits|MainAttribute");
            PIItemsAttribute attributeTraits = client.Attribute.GetAttributes(mainAttribute.WebId, trait: new List<string> { "Minimum", "Target" });

 

 

HTTP Request: GET -  /piwebapi/attributes/F1AbEbvBs-758HkKnOzuxolJRlgBG1jiF-E6BGbywAVXQAeEAUkg-GMAek0-aGWOEUJKR9QTUFSQy1QSTIwMTZcV0hBVFNORVdJTlBJV0VCQVBJMjAxOFxBVFRSSUJVVEVUUkFJVFN8TUFJTkFUVFJJQlVURQ/attributes?trait=Minimum&trait=Target

 

Please refer to the Attribute Trait page of the PI Web API help for more information.

 

Support health traits

 

PI Web API also allows you to get the health score and health status of an element by using the same inputs as in the previous example.

 

 

            PIElement mainElement = client.Element.GetByPath(@"\\MARC-PI2016\WhatsNewInPIWebAPI2018\AttributeTraits");
            PIItemsAttribute healthAttributeTraits = client.Element.GetAttributes(mainElement.WebId, trait: new List<string> { "HealthScore" });

 

The variable healthAttributeTraits will store the Health Score attribute of the element.

 

Please refer to the Attribute Trait page of the PI Web API help for more information.

 

Incrementally receive updates from streams with Stream Updates (CTP)

 

PI Web API 2018 comes with Stream Updates, whose behavior is similar to PI Web API Channels but which uses plain HTTP requests instead of WebSockets. Here is the description of this feature from the PI Web API help.

 

"Stream Updates is a way in PI Web API to stream incremental and most recent data updates for PIPoints/Attributes on streams and streamsets without opening a websocket. It uses markers to mark the specific event in a stream where the client got the last updates and uses those to get the updates since that point in the stream."

 

In the example below, we will get the WebIDs of 3 PI Points and retrieve their new real time values every 30 seconds.

 

 

            PIPoint point1 = client.Point.GetByPath("\\\\marc-pi2016\\sinusoid");
            PIPoint point2 = client.Point.GetByPath("\\\\marc-pi2016\\sinusoidu");
            PIPoint point3 = client.Point.GetByPath("\\\\marc-pi2016\\cdt158");
            List<string> webIds = new List<string>() { point1.WebId, point2.WebId, point3.WebId };


            PIItemsStreamUpdatesRegister piItemsStreamUpdatesRegister = client.StreamSet.RegisterStreamSetUpdates(webIds);
            List<string> markers = piItemsStreamUpdatesRegister.Items.Select(i => i.LatestMarker).ToList();
            int k = 3;
            while (k > 0)
            {
                PIItemsStreamUpdatesRetrieve piItemsStreamUpdatesRetrieve = client.StreamSet.RetrieveStreamSetUpdates(markers);
                markers = piItemsStreamUpdatesRetrieve.Items.Select(i => i.LatestMarker).ToList();
                foreach (PIStreamUpdatesRetrieve item in piItemsStreamUpdatesRetrieve.Items)
                {
                    foreach (PIDataPipeEvent piEvent in item.Events)
                    {
                        Console.WriteLine("Action={0}, Value={1}, SourcePath={2}", piEvent.Action, piEvent.Value, item.SourcePath);
                    }
                }
                System.Threading.Thread.Sleep(30000);
                k--;
            }

 

 

HTTP Request: POST - /piwebapi/streamsets/updates?webId=F1DPQuorgJ0MskeiLb6TmEmH5gAQAAAATUFSQy1QSTIwMTZcU0lOVVNPSUQ&webId=F1DPQuorgJ0MskeiLb6TmEmH5gAgAAAATUFSQy1QSTIwMTZcU0lOVVNPSURV&webId=F1DPQuorgJ0MskeiLb6TmEmH5g9AQAAATUFSQy1QSTIwMTZcQ0RUMTU4

 

HTTP Request: GET /piwebapi/streamsets/updates?marker=796081654a5648a3bf0d322fb58df518_0&marker=e2604da295584f498ce65976437b0c0f_0&marker=6f402964b31e4635aac2b87a68b79e60_0

 

The results are shown below:

 

 

Exposing properties of objects

 

There were some data model changes:

  • Expose 'ServerTime' property on objects of asset server, data server and system
  • Expose 'DisplayDigits', 'Span' and 'Zero' properties on objects of attribute and point
  • Expose 'DefaultUnitsNameAbbreviation' property on objects of attribute and attribute template

 

We can easily demonstrate that through the following code:

 

            PIPoint point = client.Point.GetByPath($"\\\\{piDataArchive.Name}\\SINUSOID");
            //Expose 'ServerTime' property on objects of asset server, data server and system
            PIItemsAssetDatabase dbs = client.AssetServer.GetDatabases(afServer.WebId);
            afServer = client.AssetServer.Get(afServer.WebId);
            Console.WriteLine($"PI AF Server ServerTime is {afServer.ServerTime}");




            //Expose 'DisplayDigits', 'Span' and 'Zero' properties on objects of attribute and point
            Console.WriteLine($"Sinusoid PIPoint: DisplayDigits={point.DisplayDigits}, Span={point.Span}, Zero={point.Zero}");


            //Expose 'DefaultUnitsNameAbbreviation' property on objects of attribute and attribute template
            PIAttribute attribute = client.Attribute.GetByPath(@"\\MARC-PI2016\AFPartnerCourseWeather\Cities\Chicago|Temperature");
            Console.WriteLine($"DefaultUnitsNameAbbreviation of the attribute is {attribute.DefaultUnitsNameAbbreviation}");

 

The results are shown below:

 

 

Conclusion

 

I hope you've found this blog post useful for learning the new features of PI Web API 2018. Please provide your feedback so we can do the same when future PI Web API releases are published.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

PI Data Archive 2018 was released on 27 Jun 2018! It is now time for us to upgrade to experience all the latest enhancements.

 

Legacy subsystems such as the PI AF Link Subsystem, PI Alarm Subsystem, PI Performance Equation Scheduler, PI Recalculation Subsystem and PI Batch Subsystem are not installed by default, and they will not be in the PI Data Archive 2018 container because of the install command line that I have chosen for it. This upgrade procedure assumes that you were not using any of these legacy subsystems.

 

We also have client-side load balancing in addition to scheduled archive shifts for easier management of archives. Finally, there is the integrated PI Server installation kit, which is the enhancement I am most excited about. The kit can generate a command line statement for use during silent installation, so there is no more combing through the documentation to find the feature that you want to install. All you have to do is use the GUI to select the features that you desire and save the command line to a file. The command line is useful in environments without a GUI, such as a container environment.

 

Today, I will be guiding you on a journey to upgrade your PI Data Archive 2017R2 container to the PI Data Archive 2018 container. In the article Overcome limitations of the PI Data Archive container, I addressed most of the limitations that were present in the original article, Spin up PI Data Archive container. We are now left with the final limitation to address.

 

This example doesn't support upgrading without re-initialization of data.

 

I will show you how we can upgrade to the 2018 container without losing your data. Let's begin on this wonderful adventure!

 

Create 2017R2 container and inject data

See the "Create container" section in Overcome limitations of the PI Data Archive container for the detailed procedure on how to create the container. In this example, my container name will be pi17.

docker run -id -h pi17 --name pi17 pidax:17R2

 

Once your container is ready, we can use PI SMT to introduce some data, which we will later use to validate that the data has been persisted to the new container. I will create a PI Point called "test" to store some string data.

We will also change some tuning parameters such as Archive_AutoArchiveFileRoot and Archive_FutureAutoArchiveFileRoot to show that they are persisted as well.

 

 

Take a backup

Before proceeding with the upgrade, let us take a backup of the container using the backup script found here. This is so that we can roll back later on if needed.

The backup will be stored in a folder named after the container.

 

Build 2018 image

1. Get the files from elee3/PI-Data-Archive-container-build

2. Get the PI Server 2018 integrated install kit from the techsupport website

3. Procure a PI license that doesn't require an MSF, such as the demo license on the techsupport website

4. Your folder structure should look similar to this now.

5. Run build.bat.

 

Upgrade from 2017R2 to 2018

Now that we have the image built, we can perform the upgrade. To do so, stop the pi17 container.

docker stop pi17

 

Create the PI Data Archive 2018 container (I will name this pi18) by mounting the data volumes from the pi17 container.

docker run -id -h pi18 --name pi18 --volumes-from pi17 -e trust=<containerhost> pidax:18

 

Verification

Now let us verify that the pi18 container has our old data and tuning parameters, and also check its version. We can do so with PI SMT.

Data has been persisted!

Tuning parameters have also been persisted!

Version is now 3.4.420.1182 which means the upgrade is successful. Note that the legacy subsystems that were mentioned above are no longer present.

 

Congratulations. You have successfully upgraded to the PI Data Archive 2018 container and retained your data.

 

Rollback

Now what if you want to roll back to the previous version, for whatever reason? I will show you that this is also simple to do. There are two ways we can go about it.

 

Method: Restore
  Pros: Will always work.
  Cons: Data added after the upgrade will be lost after the rollback; only data prior to the backup will be present. Requires a backup.

Method: Non-Restore
  Pros: Data added after the upgrade is persisted after the rollback.
  Cons: Might not always work; it depends on whether the configuration files are compatible between versions (e.g. it works for 2018 to 2017R2, but not for 2015 to earlier versions).

 

We will explore both methods in this blog since both methods will work for rolling back 2018 to 2017R2.

 

Restore method

In this method, we can remove pi17, recreate a fresh instance and restore the backup. In the container world, we treat software not as pets but more like cattle.

docker rm pi17
docker run -id -h pi17 --name pi17 pidax:17R2
docker stop pi17

Copy the backup folders into the appropriate volumes at C:\ProgramData\docker\volumes

docker start pi17

 

Now let us compare pi17 and pi18 with PI SMT. We can see that they have the same data but their versions are different.

 

 

Non-Restore method

In this method, data that is added AFTER the upgrade will still be persisted after rollback. Let us add some data to the pi18 container.

 

We shall also change the tuning parameter from container17 to container18.

 

Now, let's remove any pi17 container that exists so that we only have the pi18 container running. After that, we can do

docker rm -f pi17
docker stop pi18
docker run -id -h pi17 --name pi17 --volumes-from pi18 pidax:17R2

 

We can now verify that the data added after the upgrade still exists when we roll back to the 2017R2 container.

 

 

Conclusion

In this article, we have shown that it is easy to perform upgrades and rollbacks with containers while preserving data throughout the process. Upgrades that used to take days can now be done in minutes. There is no worry that upgrading will break your container, since data is separated from the container. One improvement that I would like to see is for archives to be downgraded by an older PI Archive Subsystem automatically; currently, this cannot be done. If you try to connect to a newer archive format with an older piarchss without downgrading the version manually, you will see

 

 

However, the reverse is possible. Connecting to an older archive format with a newer piarchss will upgrade the version automatically.

 

New updates (24 Jul 2018)

1. Fix unknown message problem in logs

2. Add trust on run-time by specifying environment variable

I really need to see my CPU status all the time

How many times do you fish through Task Manager in Windows, or pop open a terminal to run htop, just to check whether your CPU is running hot?  If you write code, the answer is: all the time.  For one project at OSIsoft I actually bought a DEC VT-420 dumb tube and a serial-to-USB adapter whose sole purpose was to run top as I worked, because I needed to kill errant programs that often.  But that wasn't very green, and it took up space on my desk.

 

I've found a better way. I have a Task Manager CPU display on my keyboard and I'm tracking my CPU% on the PI Server continuously.   Using Go*.

 

 

You can use any Web API Client library you want, but this was a good way to get a one-off daemon up and running quickly in a matter of minutes.

PI AF was released last week along with a new version of the AF SDK (2.10.0.8628), so let me show you a feature that has long been requested by the community and is now available to you: the AFSession structure. This structure represents a session on the AF Server and exposes the following members:

 

  • AuthenticationType: the authentication type of the account which made the connection.
  • ClientHost: the IP address of the client host which made the connection.
  • ClientPort: the port number of the client host which made the connection.
  • EndTime: the end time of the connection.
  • GracefulTermination: a boolean that indicates if the end time was logged for graceful client termination.
  • StartTime: the start time of the connection.
  • UserName: the username of the account which made the connection.

 

In order to get session information for a given PI System, the PISystem class now exposes a function called GetSessions(AFTime? startTime, AFTime? endTime, AFSortOrder sortOrder, int startIndex, int maxCount), which returns an array of AFSessions. AFSortOrder is an enumeration defining whether you want the results sorted by startTime ascending or descending. Note that you can specify AFTime.MaxValue as the endTime to search only sessions which are still open.

 

From the documentation's remarks: the returned session data can be used to determine information about clients that are connected to the server, which can be used to identify active clients. Then, from the client machine, you can use the GetClientRpcMetrics() method (for AF Server) to determine what calls the clients are making to the server. Session information is not replicated in PI AF collective environments; in these setups, make sure you connect to the member you want to retrieve session info from.

 

Shall we see it in action? The code I'm using is very simple:

 

var piSystem = (new PISystems()).DefaultPISystem;
// Sessions that started in the last day, sorted newest first
var sessions = piSystem.GetSessions(new AFTime("*-1d"), null, AFSortOrder.Descending);
foreach (var session in sessions)
{
    Console.WriteLine($"---- {session.ClientHost}:{session.ClientPort} ----");
    Console.WriteLine($"Username: {session.UserName}");
    Console.WriteLine($"Start time: {session.StartTime}");
    Console.WriteLine($"End time: {session.EndTime}");
    Console.WriteLine($"Graceful: {session.GracefulTermination}");
    Console.WriteLine();
}

 

A cropped version of the result can be seen below:

 

---- 10.10.10.10:51582 ----
Username: OSI\rborges
Start time: 07/02/18 13:18:54
End time:
Graceful:

---- 10.10.10.10:62307 ----
Username: OSI\rborges
Start time: 07/02/18 13:06:36
End time: 07/02/18 13:11:51
Graceful: True

---- 10.10.10.10:62305 ----
Username: OSI\rborges
Start time: 07/02/18 13:06:17
End time: 07/02/18 13:06:19
Graceful: True

 

As you can see, we can now easily monitor the sessions in your PI System. Share your thoughts about it in the comments, and let us know how you plan to use it.

 

Happy coding!

 

Reference:

Rick's post on how to use metrics with AF SDK.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

In this blog post, we will be exploring how to overcome the limitations that were previously mentioned in the blog post Spin up PI Data Archive container. Container technology can contribute greatly to the manageability of a PI System (installations, migrations, maintenance and troubleshooting that used to take weeks can potentially be reduced to minutes), so I would like to overcome as many limitations as I can to bring these containers closer to production ready. Let us have a look at the limitations that were previously mentioned.

 

1. This example does not persist data or configuration between runs of the container image.

2. This example relies on PI Data Archive trusts and local accounts for authentication.

3. This example doesn't support VSS backups.

 

Let us go through them one at a time.

 

Data and Configuration Persistence

This limitation can be solved by separating the data from the application container. In Docker, we can make use of volumes, which are completely managed by Docker. When we persist data in volumes, the data exists beyond the life cycle of the container; even if we destroy the container, the data will still remain. We create external data volumes by including the VOLUME directive in the Dockerfile, like so:

 

VOLUME ["C:/Program Files/PI/arc","C:/Program Files/PI/dat","C:/Program Files/PI/log"]

 

When we instantiate the container, Docker will now know that it has to create the external data volumes to store the data and configuration that exist in the PI Data Archive arc, dat and log directories.

 

Windows Authentication

This issue can be addressed with the use of a group Managed Service Account (gMSA) and a little voodoo magic. This enables the container host to obtain the Kerberos ticket-granting ticket (TGT) for the container, so that the container can perform Kerberos authentication and be connected to the domain. The container host will need to be domain joined for this to happen.

 

VSS Backups

When data is persisted externally, we can leverage the VSS provider on the container host to take the VSS snapshot for us, so that we do not have to stop the container while performing the backup. This way, the container can run 24/7 without any downtime (as required by production environments). The PI Data Archive has mechanisms to put the archive in a consistent state and freeze it in preparation for the snapshot.

 

Create container

1. Grab the files in the 2017R2 folder from my Github repo and place them into a folder. elee3/PI-Data-Archive-container-build

2. Get PI Data Archive 2017 R2A Install Kit and extract it into the folder as well. Download from techsupport website

3. Procure a PI license that doesn't require an MSF, such as the demo license on the techsupport website, and place it in the Enterprise_X64 folder.

4. Your folder structure should look similar to this now.

5. Execute buildx.bat. This will build the image.

6. Once the build is complete, you can navigate to the Kerberos folder and run the PowerShell script (check the 3 Aug 2018 updates) to create a Kerberos-enabled container

.\New-KerberosPIDA.ps1 -AccountName <GMSA name> -ContainerName <container name>

You can request a gMSA from your IT department and have it installed on your container host with the Install-ADServiceAccount cmdlet.

OR

If you think it will be difficult to get a gMSA from your IT department, then you can use the following command instead to create a non-Kerberos-enabled container

docker run -id -h <DNS hostname> --name <container name> pidax:17R2

7. Go to the pantry to make some tea or coffee. After about 1.5 minutes, your container will be ready.

 

Demo of container abilities

1. Kerberos

This section only applies if you created a Kerberos-enabled container. After creating a mapping for my domain account using PI System Management Tools (SMT) (the container automatically creates an initial trust for the container host so that you can create the mapping), let me try to connect to the PI Data Archive container using PI System Explorer (PSE). After a successful connection, let us view the message logs of the PI Data Archive container.

We can see that we have Kerberos authentication from AFExplorer.exe, a.k.a. PSE.

 

2. Persist Data and Configuration

When I kill off the container, I can still see the configuration and data volumes persisted on my container host, so I don't have to worry that my data and configuration are lost.

 

3. VSS Backups

Finally, what if I do not want to stop my container, but I want to take a backup of my config and data? For that, we can make use of the VSS provider on the container host. Obtain the 3 files here: elee3/PI-Data-Archive-container-build

Place them anywhere on your container host. Execute

.\backup.ps1 -ContainerName <container name>

 

The output of the command will look like this.

 

Your backup will be found in the pibackup folder that is automatically created and will look like this. pi17 is the name of my container.

 

Your container is still running all the time.

 

4. Restore a backup to a container

Now that we have a backup, let me show you how to restore it to a new container. It is a very simple three-step process.

  • docker stop the new container
  • Copy the backup files into the persisted volume. (You can find the volumes at C:\ProgramData\docker\volumes)
  • docker start the container

As you can see, it can't get any simpler. When I browse my new container, I can see the values that I entered in the old container from which the backup was taken.

 

Conclusion

In this blog post, we addressed the limitations of the original PI Data Archive container to make it more production ready. Do we still have any need for the original PI Data Archive container, then? My answer is yes. If you do not need the capabilities offered by this enhanced container, you can use the original one. Why? Simply because the original one starts up in 15 seconds, while this one starts up in 1.5 minutes! (The 1.5 minutes is due to limitations in Windows Containers.) So if you need to spin up PI Data Archive containers quickly and the limitations above do not concern you (e.g. in unit testing), then the original container is for you.

 

New updates (3 Aug 2018)

Script updated to allow GMSA to work in both child and parent domains. For example, mycompany.com and test.mycompany.com.

Refer to Upgrade to PI Data Archive 2018 container with Data Persistence to build the pidax:18 image needed for use with the script.

Introduction

 

We're excited to announce to PI Dev Club members that we now have a PI Web API Client Library for Go, the Google-sponsored programming language specifically designed around concurrent processing.

 

You can visit the GitHub repository of this library here.

 

Requirements

 

go1.9.7 or later

 

Installation

 

If you haven't already, install the Go software development kit.

 

Run this line to install the PI Web API Client for Go:

 

go get -u github.com/christoofar/gowebapi

 

Note: You don't need the Go SDK after you have compiled a Go program and wish to deploy it somewhere.  Go creates self-reliant executable programs that pack their dependent libraries inside them.
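
For example (a sketch using the standard GOOS and GOARCH Go toolchain variables; "myprogram" is a hypothetical name), you can cross-compile a Linux binary from any development machine and deploy just the one resulting file:

GOOS=linux GOARCH=amd64 go build -o myprogram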

 

Getting Started

 

Here is a sample Go program to get you started with the library.

 

Create a directory under %GOPATH% and let's call it webapitest. Then create a new code file with the name webapitest.go

This will print all the version numbers of your PI Web API server plugins. Replace the string literals {in braces} with the appropriate values for your environment.

 

// webapitest.go
package main

import (
    "context"
    "fmt"
    "log"

    pi "github.com/christoofar/gowebapi"
)

var cfg = pi.NewConfiguration()

var client *pi.APIClient
var auth context.Context

func Init() {
    cfg.BasePath = "https://{your web api server here}/piwebapi"

    auth = context.WithValue(context.Background(), pi.ContextBasicAuth, pi.BasicAuth{
        UserName: "{user name here}",
        Password: "{password here}",
    })

    client = pi.NewAPIClient(cfg)
}

func main() {
    Init()
    response, _, fail := client.SystemApi.SystemVersions(auth)
    if fail != nil {
        log.Fatal(fail)
    }

    fmt.Println("Here's all the plugin versions on PI Web API")
    for i := range response {
        fmt.Println(i, response[i].FullVersion)
    }
}

 

You can run the program by issuing the following commands

 

~/go/webapitest $ go build
~/go/webapitest $ ./webapitest

 

Your output should look something like this

 

~/go/webapitest $ ./webapitest
Here's all the plugin versions on PI Web API
OSIsoft.REST.Documentation 1.11.0.967
OSIsoft.REST.Services 1.11.0.967
OSIsoft.Search.SvcLib 1.8.0.3651
OSIsoft.PIDirectory 1.0.0.0
OSIsoft.REST.Core 1.11.0.967

 

Coding examples

 

There are some simple examples on how to start probing the PI Web API Client library over here.

 

Developing in Go

 

Golang programmers tend to develop using Visual Studio Code, which has great golang support and is available on Windows, macOS and Linux. There is also great golang support for emacs (configure emacs from scratch as a Go IDE) and vim via plugins, which give you function templates, IntelliSense, syntax checking, godoc (the documentation system for Go), gofmt (code formatting/style), and support for Delve, the Go debugger that cleanly handles goroutines.

 

You can also build Go code with nothing but your web browser using the Go Playground. This is a very handy tool where you can experiment with Go code snippets, compile and run them directly in the browser, and view the output.

 

Caveats

 

A WebID wrapper has been added to the library.  You can review the unit test code to see how you can create WebIDs from paths.

 

Final Thoughts

 

Most everyone's exposure to Go is minimal (mine included!). But this language is expected to grow in popularity. The reason: Go's awesome power to simplify concurrent programming is making it spread quickly, particularly within the realm of sensors and other lightweight devices. Go code is also quite fast and produces programs that are lightweight yet powerful.

 

It's also a very simple programming language to learn (so simple you can become a Go programmer in 15 minutes if you have exposure to any other programming language).  Considering that there is also a rich library of data adapters written in Go, it made obvious sense to open a portal to the world of OSIsoft in golang.

 

Happy Gophering!

datanerd.jpg

 

A gopher is the nickname Go programmers use to describe one another.

 

With the announcement of the PI Web API Client Library for the Go programming language, I have hope that we can all broaden our understanding of concurrent programming.  Go isn't just the "programming language of the year".  It really is an exciting time to be coding, particularly with a programming language as simple to implement and understand as this one.

 

What is Go?

 

package main

import (
    "fmt"
)

func main() {
    fmt.Println("Hello, playground")
}

Go (known as golang) is a language funded by Google with a design goal of making concurrent programming much easier to write, debug and manage.  The principal designers at Google are Robert Griesemer, Rob Pike, and Ken Thompson, formerly of AT&T and Bell Labs.  It's easier to explain what Go is by describing what it's not:

 

What isn't in Go

 

Objects

Yes, you can survive totally fine in a computer language these days with no objects.  It certainly lowers the barrier for new programmers who don't have the patience to memorize Design Patterns.  In a language that mimics C, avoiding holding on to state as much as possible is primarily the point, and in the case of web server-based REST programming and other types of concurrent processing, avoiding state makes concurrent code much easier to understand.

 

Threads

Well, threads as you've always known them.  Intel/ARM processor threads are used in Go programs as you would expect, but the Go runtime utilizes a concept called a goroutine, which is much more lightweight than a thread; goroutines live within your processor's threads.  The main advantage of using lightweight goroutines is massive scalability on a single server instance.  A Java service that might support 5,000 threads could theoretically scale to hundreds of thousands of goroutines when implemented in Go.  Some more benefits of goroutines (a short sketch follows the list):

  • You can run more goroutines on a typical system than you can threads.
  • Goroutines have growable segmented stacks.
  • Goroutines have a faster startup time than threads.
  • Goroutines come with built-in primitives to communicate safely between themselves (channels).
  • Goroutines allow you to avoid having to resort to mutex locking when sharing data structures.
  • Goroutines are multiplexed onto a small number of OS threads, rather than a 1:1 mapping.
  • You can write massively concurrent servers without having to resort to evented programming.
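
To make this concrete, here is a minimal sketch (my own illustration, not from the original post) of goroutines communicating safely over a channel:

package main

import "fmt"

// worker squares each job it receives and sends the result back.
func worker(jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * j
    }
}

func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)

    // Start three goroutines; each costs far less than an OS thread.
    for w := 0; w < 3; w++ {
        go worker(jobs, results)
    }

    // Queue five jobs, then close the channel so the workers' loops end.
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)

    // Channels let us collect the results without any mutex locking.
    for i := 0; i < 5; i++ {
        fmt.Println(<-results)
    }
}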

 

A process that would normally have to live on an expensive cloud VM instance or a medium-sized server can be scaled down to an Arduino device or a much smaller VM instance.  This gives you an unparalleled amount of parallel power (pun intended), not only taking full advantage of all the hardware you paid for, but also affording you the ability to move down the hardware cost curve with future hardware.

 

An even more convincing selling point for Go is the power of race-condition debugging, which is difficult in nearly every evolved programming language.  The data race detector is built into Go and can pick up the memory conflict points in your code as you run a production workload through it.  To invoke the detector, you just kick off your program with the go command using the -race option.
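
To see the detector in action, here is a minimal sketch (my own illustration, not from the original post) containing a deliberate data race:

package main

import (
    "fmt"
    "sync"
)

func main() {
    counter := 0
    var wg sync.WaitGroup

    // Two goroutines increment the same variable with no synchronization:
    // a classic data race that the detector will flag at runtime.
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for n := 0; n < 1000; n++ {
                counter++ // racy read-modify-write
            }
        }()
    }
    wg.Wait()
    fmt.Println(counter)
}

Assuming the file is named racedemo.go, run it with go run -race racedemo.go and the detector prints a report pinpointing the conflicting accesses.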

 

Anyone who has had to hunt down race conditions in .NET languages or C++, downloading loads of third-party tools to help locate the offending code, would kill to have this feature built into the language.

 

Exception handling

One of the behaviors Go definitely inherited from C is its approach to exception handling, or rather the lack of it.  Just as you must do in C and Visual Basic, in Go you need to check returns from functions and handle error states immediately after calls that have a non-zero probability of failure.  The only things you can really trust to be available are memory cells, your local variables and the CPU.  Just about everything else you touch can fail (disk, network, HTTP calls, etc.).

 

To get around this, though, Go supports multiple return values from functions, which is a very pleasing feature of the language.  A typical call to a function that might fail often looks like this:

 

theData, numRecords, err := GetRecordsFromDatabase(userContext, sql)
if err != nil {
    // No exception to catch: check the returned error and bail out immediately.
    go myapp.PrintWarning("The database is down right now.  Contact Support. " + err.Error())
    return
}

... // Begin processing the numRecords records in theData

 

What Go feels like to code in

 

The designers of Go definitely went on a shopping trip: starting with C, they picked up concepts found in Pascal (the := assignment operator) and inferred assignments from JavaScript and C# (the var keyword used in both languages).  Go also has pointers and breakouts to assembly, and the compiler produces raw-metal binary executables that are freed from the need to host a JVM or a .NET runtime kit for your programs to launch.

 

One of the strongest benefits is that Go has garbage collection, a feature otherwise associated with hosted languages that need frameworks (Java and .NET).  So there aren't any calls to free(), and destructors are not necessary since there are no objects.  Instead, Go provides the defer keyword so you can run a cleanup routine when a function ends.  And unlike C, there is no malloc() nonsense to worry about.  It's the fun of C without the items that make C frustrating.
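
Here is a minimal sketch (my own illustration, not from the original post) of defer handling cleanup:

package main

import (
    "fmt"
    "os"
)

func main() {
    f, err := os.Create("notes.txt")
    if err != nil {
        fmt.Println("could not create file:", err)
        return
    }
    // defer schedules Close to run when main returns,
    // whichever path the function takes to get there.
    defer f.Close()

    fmt.Fprintln(f, "cleanup handled by defer")
}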

 

The Go language is also surprisingly simple to wrap your brain around, to the point that I am seeing people learn Go as their first programming language.  The Go code spec has a strict convention on formatting, and gofmt, which comes with Go, can lint your code.  It's also customary to use godoc to heavily document what you're doing inside your goroutines; once again, this comes with Go, so no third-party tools are necessary to stylize your code.  The burden of code reviews that developers must do inside teams is greatly simplified thanks to these standards.
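
As a quick illustration (my own sketch, not from the original post), godoc builds documentation straight from ordinary comments placed above declarations:

// Package tempconv performs Celsius and Fahrenheit conversions.
package tempconv

// CToF converts a Celsius temperature to Fahrenheit.
// godoc renders this comment as the documentation for the function.
func CToF(c float64) float64 {
    return c*9/5 + 32
}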

 

These standards, combined with the lightweight concurrency this design offers, make it easy to understand why this language is taking off so rapidly and why Google invested in it.

 

Where Should I Start?

 

These are places that helped me get started in Go:

 

IDE Developing in Go

 

Golang programmers tend to develop using Visual Studio Code, which has great golang support and is available on Windows, macOS and Linux. There is also great golang support for emacs (configure emacs from scratch as a Go IDE) and vim via plugins, which give you function templates, code completion, syntax checking, godoc (the documentation system for Go), gofmt (code formatting/style), and support for Delve, the Go debugger that cleanly handles goroutines.

 

You can also build Go code with nothing but your web browser using the Go Playground. This is a very handy tool where you can experiment with Go code snippets, compile and run them directly in the browser, and view the output.

 

Happy Gophering!
