PI Developers Club

Last week during PI World, Rong Xu and I gave a talk about using PI Web API with PowerApps.

 

PowerApps is a Microsoft tool that features a drag-and-drop app-building experience. Adding logic to access PI Web API and to control other UI components is similar to calling functions in Microsoft Excel. This reduces the technical knowledge required to create custom solutions using your PI System data.

 

[Image: EditorScreen.PNG]

 

As promised, I've attached the PowerApps app, Postman Collection, and a copy of the AF database that was used in this talk. Be aware that exporting and importing apps is still a 'preview' feature in PowerApps, so you may run into issues using this export directly. Instead, I recommend following along with the recording to build up the app.

 

The Postman Collection contains the requests that were used in this talk. Take note that the Web IDs used in this collection will not work in your environment because they contain information specific to the AF Server that was targeted in the talk.
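
Incidentally, you can see this for yourself by unpacking a WebID: PI Web API's default "Full" WebIDs embed server and object GUIDs, while "PathOnly" WebIDs embed just the uppercased path. Here is a minimal Python sketch (assuming a PathOnly WebID for a simple object type with a two-character marker, such as a PI Point):

import base64

def decode_pathonly_webid(webid: str) -> str:
    # "P" (PathOnly) + version "1" + a two-character marker + unpadded Base64 of the uppercased path
    payload = webid[4:]
    payload += "=" * (-len(payload) % 4)  # restore the Base64 padding that WebIDs strip
    return base64.b64decode(payload).decode()

print(decode_pathonly_webid("P1DPQ0xTQUZcU0lOVVNPSUQ"))  # -> CLSAF\SINUSOID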

WebID

 

Early last year I gave you a deep dive into how WebID works internally in PI Web API.  For those of you generating Web API clients using OpenAPI and Swagger, you'll notice straight away that your generated clients do not include a helper for constructing and deconstructing WebID strings, so you need to supply one.  Here is one that I wrote by hand for the Go language (golang), which is pretty simple to use.

 

To construct a WebID, you find the right struct for the resource type and create it from a path (and optionally specify supporting information if a subtype is needed). When constructing WebIDs locally, the code below always builds PathOnly WebID types, since those are the most human-readable.

 

Encoding a WebID for an Attribute on an Element looks like this:

 

webid := EncodeWebID(NewAFAttributeWebID("CLSAF\\Chris\\CLSAF|Memory % In Use", IS_AF_ELEMENT))

 

And encoding a WebID locally for a PI Point looks like this:

 

webid := EncodeWebID(NewPIPointWebID("CLSAF\\SINUSOID"))

 

Enjoy!

 

The Go code

 

package gowebapi

import (
    "encoding/base64"
    "fmt"
    "log"
    "strings"
)

type BaseElementType int
type AnalysisRuleOwnerType int
type EnumerationSourceMarkerType int
type TimeRuleOwnerType int

// Used by AFAnalysisRuleWebID
const (
    ANALYSIS_RULE_OWNER_ANALYSIS AnalysisRuleOwnerType = 0
    ANALYSIS_RULE_OWNER_TEMPLATE AnalysisRuleOwnerType = 1
)

// Used by AFAttribute
const (
    IS_AF_ELEMENT      BaseElementType = 0
    IS_AF_EVENTFRAME   BaseElementType = 1
    IS_AF_NOTIFICATION BaseElementType = 2
)

// Used by AFEnumerationSet
const (
    ENUM_SOURCE_MARKER_IS_PISYSTEM EnumerationSourceMarkerType = 0
    ENUM_SOURCE_MARKER_IS_PISERVER EnumerationSourceMarkerType = 1
)

// Used by AFTimeRule
const (
    TIME_RULE_OWNER_ANALYSIS         TimeRuleOwnerType = 0
    TIME_RULE_OWNER_ANALYSISTEMPLATE TimeRuleOwnerType = 1
)

// *************************************************
// Structures to represent decoded WebIDs
// *************************************************

type AFAnalysisWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFAnalysisCategoryWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFAnalysisTemplateWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFAnalysisRuleWebID struct {
    Type        string
    Version     string
    Marker      string
    OwnerMarker string
    Name        string
}
type AFAnalysisRulePluginWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFAttributeWebID struct {
    Type              string
    Version           string
    Marker            string
    BaseElementMarker string
    Name              string
}

type AFAttributeCategoryWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFAttributeTemplateWebID struct {
    Type                  string
    Version               string
    Marker                string
    ElementTemplateMarker string
    Name                  string
}

type AFDatabaseWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFElementWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFElementCategoryWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFElementTemplateWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFEnumerationSetWebID struct {
    Type         string
    Version      string
    Marker       string
    SourceMarker string
    Name         string
}

type AFEnumerationValueWebID struct {
    Type         string
    Version      string
    Marker       string
    SourceMarker string
    Name         string
}

type AFEventFrameWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFNotificationWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFNotificationTemplateWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFNotificationContactTemplateWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFTimeRuleWebID struct {
    Type        string
    Version     string
    Marker      string
    OwnerMarker string
    Name        string
}

type AFTimeRulePluginWebID struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFSecurityIdentityWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFSecurityMappingWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFTableWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type AFTableCategoryWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type PIPointWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type PIServerWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type PISystemWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type UOMWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

type UOMClassWebId struct {
    Type    string
    Version string
    Marker  string
    Name    string
}

// *************************************************
// Functions to create parseable WebID structures
// *************************************************

func NewAFAnalysisWebID(path string) AFAnalysisWebID {
    var webID AFAnalysisWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "Xs"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFAnalysisCategoryWebID(path string) AFAnalysisCategoryWebID {
    var webID AFAnalysisCategoryWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "XC"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFAnalysisTemplateWebID(path string) AFAnalysisTemplateWebID {
    var webID AFAnalysisTemplateWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "XT"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFAnalysisRuleWebID(path string, ownerType AnalysisRuleOwnerType) AFAnalysisRuleWebID {
    var webID AFAnalysisRuleWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "XR"

    switch ownerType {
    case ANALYSIS_RULE_OWNER_ANALYSIS:
        webID.OwnerMarker = "X"
    case ANALYSIS_RULE_OWNER_TEMPLATE:
        webID.OwnerMarker = "T"
    }

    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFAnalysisRulePluginWebID(path string) AFAnalysisRulePluginWebID {
    var webID AFAnalysisRulePluginWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "XP"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFAttributeWebID(path string, baseType BaseElementType) AFAttributeWebID {
    var webID AFAttributeWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "Ab"

    switch baseType {
    case IS_AF_ELEMENT:
        webID.BaseElementMarker = "E"
    case IS_AF_EVENTFRAME:
        webID.BaseElementMarker = "F"
    case IS_AF_NOTIFICATION:
        webID.BaseElementMarker = "N"
    }

    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFAttributeCategoryWebID(path string) AFAttributeCategoryWebID {
    var webID AFAttributeCategoryWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "AC"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFAttributeTemplateWebID(path string) AFAttributeTemplateWebID {
    var webID AFAttributeTemplateWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "AT"
    webID.ElementTemplateMarker = "E"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFDatabaseWebID(path string) AFDatabaseWebID {
    var webID AFDatabaseWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "RD"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFElementWebID(path string) AFElementWebID {
    var webID AFElementWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "Em"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFElementCategoryWebID(path string) AFElementCategoryWebID {
    var webID AFElementCategoryWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "EC"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFElementTemplateWebID(path string) AFElementTemplateWebID {
    var webID AFElementTemplateWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "ET"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFEnumerationSetWebID(path string, sourceMarker EnumerationSourceMarkerType) AFEnumerationSetWebID {
    var webID AFEnumerationSetWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "MS"

    switch sourceMarker {
    case ENUM_SOURCE_MARKER_IS_PISYSTEM:
        webID.SourceMarker = "R"
    case ENUM_SOURCE_MARKER_IS_PISERVER:
        webID.SourceMarker = "D"
    }

    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFEnumerationValueWebID(path string, sourceMarker EnumerationSourceMarkerType) AFEnumerationValueWebID {
    var webID AFEnumerationValueWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "MV"

    switch sourceMarker {
    case ENUM_SOURCE_MARKER_IS_PISYSTEM:
        webID.SourceMarker = "R"
    case ENUM_SOURCE_MARKER_IS_PISERVER:
        webID.SourceMarker = "D"
    }

    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFEventFrameWebID(path string) AFEventFrameWebID {
    var webID AFEventFrameWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "Fm"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFNotificationWebID(path string) AFNotificationWebID {
    var webID AFNotificationWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "Nf"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFNotificationTemplateWebID(path string) AFNotificationTemplateWebID {
    var webID AFNotificationTemplateWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "NT"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFNotificationContactTemplateWebID(path string) AFNotificationContactTemplateWebID {
    var webID AFNotificationContactTemplateWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "NC"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFTimeRuleWebID(path string, ownerType TimeRuleOwnerType) AFTimeRuleWebID {
    var webID AFTimeRuleWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "TR"

    switch ownerType {
    case TIME_RULE_OWNER_ANALYSIS:
        webID.OwnerMarker = "X"
    case TIME_RULE_OWNER_ANALYSISTEMPLATE:
        webID.OwnerMarker = "T"
    }

    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFTimeRulePluginWebID(path string) AFTimeRulePluginWebID {
    var webID AFTimeRulePluginWebID

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "TP"

    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFSecurityIdentityWebID(path string) AFSecurityIdentityWebId {
    var webID AFSecurityIdentityWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "SI"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFSecurityMappingWebID(path string) AFSecurityMappingWebId {
    var webID AFSecurityMappingWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "SM"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFTableWebID(path string) AFTableWebId {
    var webID AFTableWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "Bl"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewAFTableCategoryWebID(path string) AFTableCategoryWebId {
    var webID AFTableCategoryWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "BC"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewPIPointWebID(path string) PIPointWebId {
    var webID PIPointWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "DP"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewPIServerWebID(path string) PIServerWebId {
    var webID PIServerWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "DS"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewPISystemWebID(path string) PISystemWebId {
    var webID PISystemWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "RS"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewUOMWebID(path string) UOMWebId {
    var webID UOMWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "Ut"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

func NewUOMClassWebID(path string) UOMClassWebId {
    var webID UOMClassWebId

    webID.Type = "P"
    webID.Version = "1"
    webID.Marker = "UC"
    webID.Name = strings.TrimSpace(strings.ToUpper(path))

    return webID
}

// *************************************************
// Functions that process WebIDs
// *************************************************

// Converts a string to a de-padded Base64 string
func Base64EncodeNoPadding(text string) string {
    return strings.Replace(base64.StdEncoding.EncodeToString([]byte(text)), "=", "", -1)
}

// Pass in one of the WebID structures above to get back a valid WebID string to use
// when calling PI Web API endpoints.
func EncodeWebID(webIdStructure interface{}) (webIDString string) {

    switch webIdStructure.(type) {
    case AFAnalysisWebID:
        webId := webIdStructure.(AFAnalysisWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFAnalysisCategoryWebID:
        webId := webIdStructure.(AFAnalysisCategoryWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFAnalysisTemplateWebID:
        webId := webIdStructure.(AFAnalysisTemplateWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFAnalysisRuleWebID:
        webId := webIdStructure.(AFAnalysisRuleWebID)
        return fmt.Sprintf("%v%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            webId.OwnerMarker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFAnalysisRulePluginWebID:
        webId := webIdStructure.(AFAnalysisRulePluginWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFAttributeWebID:
        webId := webIdStructure.(AFAttributeWebID)
        return fmt.Sprintf("%v%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            webId.BaseElementMarker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFAttributeCategoryWebID:
        webId := webIdStructure.(AFAttributeCategoryWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFAttributeTemplateWebID:
        webID := webIdStructure.(AFAttributeTemplateWebID)
        return fmt.Sprintf("%v%v%v%v%v",
            webID.Type,
            webID.Version,
            webID.Marker,
            webID.ElementTemplateMarker,
            Base64EncodeNoPadding(webID.Name),
        )
    case AFDatabaseWebID:
        webId := webIdStructure.(AFDatabaseWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFElementWebID:
        webId := webIdStructure.(AFElementWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFElementCategoryWebID:
        webId := webIdStructure.(AFElementCategoryWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFElementTemplateWebID:
        webId := webIdStructure.(AFElementTemplateWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFEnumerationSetWebID:
        webId := webIdStructure.(AFEnumerationSetWebID)
        return fmt.Sprintf("%v%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            webId.SourceMarker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFEnumerationValueWebID:
        webId := webIdStructure.(AFEnumerationValueWebID)
        return fmt.Sprintf("%v%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            webId.SourceMarker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFEventFrameWebID:
        webId := webIdStructure.(AFEventFrameWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFNotificationWebID:
        webId := webIdStructure.(AFNotificationWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFNotificationTemplateWebID:
        webId := webIdStructure.(AFNotificationTemplateWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFNotificationContactTemplateWebID:
        webId := webIdStructure.(AFNotificationContactTemplateWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFTimeRuleWebID:
        webId := webIdStructure.(AFTimeRuleWebID)
        return fmt.Sprintf("%v%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            webId.OwnerMarker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFTimeRulePluginWebID:
        webId := webIdStructure.(AFTimeRulePluginWebID)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFSecurityIdentityWebId:
        webId := webIdStructure.(AFSecurityIdentityWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFSecurityMappingWebId:
        webId := webIdStructure.(AFSecurityMappingWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFTableWebId:
        webId := webIdStructure.(AFTableWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case AFTableCategoryWebId:
        webId := webIdStructure.(AFTableCategoryWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case PIPointWebId:
        webId := webIdStructure.(PIPointWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case PIServerWebId:
        webId := webIdStructure.(PIServerWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case PISystemWebId:
        webId := webIdStructure.(PISystemWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case UOMWebId:
        webId := webIdStructure.(UOMWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    case UOMClassWebId:
        webId := webIdStructure.(UOMClassWebId)
        return fmt.Sprintf("%v%v%v%v",
            webId.Type,
            webId.Version,
            webId.Marker,
            Base64EncodeNoPadding(webId.Name),
        )
    }

    log.Fatal("The item you passed to gowebapi.EncodeWebID() is not a supported type. Use the pi.New..WebId functions to create a valid WebID struct.")
    return ""
}

Greetings fellow PI Geeks!

 

Next week it will be that time of the year again, when we come together to learn from each other and share the innovative and exciting things that are made possible with the wonderful PI System.

 

 

Cloud computing will be one of the major themes at this year's event. At OSIsoft, our Cloud Vision is to be Ready Now, Ready for Tomorrow.

To fulfill this vision today, traditional on-premises PI System workloads can be migrated to an IaaS model in the cloud. In the future, a PaaS model migration will also be possible with OSIsoft Cloud Services.

For those of you who are looking to deploy a PI System effortlessly in the cloud in just a matter of minutes, there is a Tech Talk that is perfect for you!

Unlike a normal talk, this Tech Talk will go more in-depth into the topic, with live demos, and will last 90 minutes as opposed to the usual 45.

 

Below are the details of the Tech Talk that will be presented by me and Valentin.

 

Effortlessly deploying a PI System in Azure or AWS

Time: Day 2, 10 April 2019, Wednesday, 2:30 PM - 4:00 PM

Location: Parc 55, Mission II, Level 4

 

An increasing number of organizations are migrating their IT workloads to cloud platforms as part of their digital transformation and cloud first initiatives. New techniques have emerged to ease some of these challenges with automating deployments of network, storage and compute. Join us in this Tech Talk to learn about using automation and infrastructure as code tools to deploy a PI System simply on Amazon Web Services or Microsoft Azure cloud. AWS CloudFormation templates and ARM scripts are used for hosting the PI System on these platforms, helping to automate Dev/Test/Prod deployment of the PI System.

 

Hope to see you next Wednesday and have a great weekend!

The PI Geek Talks are generally presented by partners and customers.  The target audience is PI Admins and Developers.  All the talks will be on Wednesday, Day 2, at the Parc 55 Hotel in the Powell room located on Level 3.  You are invited to read the PI World Agenda for more information.  Along with Day 2 Tech Talks, these are great reasons to take the walk from the Hilton to the Parc 55.

 

 

Selecting the Right Analytics Tool

David Soll, Omicron

There are several analytics tools and approaches available for working with PI data: Performance Equations, AF analytics, custom data references, PI ACE, PI DataLink, and Business Intelligence (BI) tools. It can be a quandary determining which tool to use for what. Should you focus on only one tool or use a mix? As it turns out, the answer is not as simple as basing it on the specific analytic. Other considerations should factor into the decision, including scalability, reliability, maintainability, and future-proofing, to name a few.

This talk will discuss the various tools available for performing analytics on PI data and their strengths and weaknesses, including their scalability, reliability, maintainability, and future-proofing. The tools will be separated into two major classes, server-side (persistent) analytics and client-side (query-time) analytics, and the general differences between the two classes will be covered. Attendees will learn practical guidelines for selecting analytics tools.


 

 

Providing Enterprise Level Visibility for Snowflakes Using PI and AF

David Rodriguez, EDF Renewables, and Lonnie Bowling, Diemus

As part of a larger project to monitor a large number of distributed wind farms throughout the US and Canada, the customer desired visibility into substation status information. This included showing substation one-line diagrams, voltage regulation status, breaker status, and events to notify them of any issues. Each wind project was designed and installed by others, which resulted in large differences between sites, including variability in networking, communications, and tag configuration. In other words, each project was like a snowflake. Using PI, AF Analytics, and Event Frames, a solution was developed to normalize all wind projects. Once standardization was achieved, we then defined substation one-line circuits using an AF hierarchy. Data visualization was developed to provide on-demand, real-time rendering of circuits, voltage regulation trends, events, and supporting information. This was implemented enterprise-wide, and allowed easy access and visibility for everyone in the organization.


 

Just Another Weather Application – Evaluating the OSIsoft Cloud System

Lonnie Bowling, Diemus

This session will showcase a weather application designed using the new OSIsoft Cloud Services (OCS).

A backyard weather station was used as a source of live and historical data. Forecast data was then added to provide a complete picture of historical, current, and forecasted weather. Once all the data was streaming into an OCS sequential data store, a full-stack front-end solution was developed. This included an API layer in C#, Angular for the UI, and D3 for data visualization. A complete solution was developed to fully evaluate how OCS could be used in a real-life, purpose-built application. Key takeaways, including challenges, an architectural review, and source-code highlights, will be shared.


 

Data Analytics to enhance Advanced Energy Communities planning and operation

John Rogers and Alberto Colombo, DERNetSoft

In today's energy marketplace, poor energy awareness and a lack of data visibility, coupled with the technical complexities of DER integration, lead to a gap in local Advanced Energy Community development. DERNetSoft provides a scalable solution to this issue, making it possible to build advanced energy communities by increasing energy awareness, enabling Distributed Energy Resources planning, and supporting their operational optimization. We transform data into actionable insight and value-added advanced analytics and machine learning techniques in the energy industry at the community level.

 

 

Data Quality & Shaping: Two Keys to Enabling Advanced Analytics & Data Science for the PI System

Camille Metzinger and Kleanthis Mazarakis, OSIsoft

Data quality is critical to the success of data-driven decisions. Issues with data will impact users across the organization, from operators and engineers to data scientists and leaders. Answering business intelligence questions such as "which assets are performing well and which are under-performing" requires a birds-eye view of the data, which may require (re)shaping of the data within the PI System. This talk and demo will explore the aspects of data quality and data shaping using PI System infrastructure, illustrating why they are so critical for success. We will also demonstrate how to improve data quality in the PI System and shape PI System data to give it the right context for your advanced analytics.


In this blog post I will show how to write machine learning output, produced in Python, back to the PI System.

 

This blog post is preceded by this blog post: Machine Learning Pipeline 1: Importing PI System Data into Python

 

Output of machine learning

The output of machine learning is expected to be a numpy array. The output features and output length determine the dimensions of this numpy array.

 

Dimension: (output length, output features)

 

With

output length = number of predicted timesteps

output features = number of predicted features (for example: Temperature, Humidity, ...)

 

Example:

Dimensions of this numpy array are: (192, 9)

9 columns and 192 rows
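
As a quick sanity check, a hypothetical all-zeros prediction array of that shape can be created and inspected like this:

import numpy as np

predict = np.zeros((192, 9))  # hypothetical output: 192 predicted timesteps x 9 features
print(predict.shape)  # (192, 9)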

 

These values do not yet have a timestamp.

 

Generating a timestamp

Depending on how these predicted values were generated, a timestamp for these must be generated before they can be written to the PI System.

The timestamp format for the PI System is:

 

"YYYY-MM-DDThh:mm:ssZ"

 

Python's datetime package can be used to generate timestamps:

 

from datetime import datetime
from datetime import timedelta

timestamp = datetime.now()

print(timestamp)

now = datetime.now() # current date and time

year = now.strftime("%Y")
print("year:", year)

month = now.strftime("%m")
print("month:", month)

day = now.strftime("%d")
print("day:", day)

time = now.strftime("%H:%M:%S")
print("time:", time)

date_time = now.strftime("%Y-%m-%dT%H:%M:%SZ")
print("date and time:",date_time)

(This is just an example of how the parts of a timestamp can be generated.)

 

Python's timedelta can be used to add time to a timestamp. We will use timedelta to generate the timestamps for our predicted values. In our case we know that the sampling time of our values is 1h. (This is by design, as we earlier imported events with the same sampling frequency.)
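
For example, a series of 192 hourly timestamps (matching the prediction length above) could be generated like this:

from datetime import datetime, timedelta

start = datetime.now()
timestamps = [(start + timedelta(hours=i + 1)).strftime("%Y-%m-%dT%H:%M:%SZ") for i in range(192)]
print(timestamps[0])  # the first predicted timestep, one hour from now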

 

Posting output to the PI System

 

The following code will use the Python requests library to send an HTTP POST request to the PI Web API endpoint:

Requests: HTTP for Humans™ — Requests 2.21.0 documentation

 

for event in predict:

    # build a timestamp of format "YYYY-MM-DDThh:mm:ssZ"
    timestamp = timestamp + timedelta(hours=1)  # we have a 1h delta between each predicted event
    pi_timestamp = timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")

    # take only the first column
    value = event[0]

    # write back to the PI System (b64Val is the Base64-encoded "user:password" string for Basic auth)
    response = requests.post('https://<PIWebAPI_host>/piwebapi/streams/<webID_of_target_PIPoint>/value?updateOption=InsertNoCompression', data={'Timestamp': pi_timestamp, 'UnitsAbbreviation': '', 'Good': 'true' , 'Questionable': 'false', 'Value': value}, headers={"Authorization": "Basic %s" % b64Val}, verify=True)

 

Here the UpdateValue method of PI Web API is used:

UpdateValue POST streams/{webId}/value

 

Efficiency can be enhanced by first creating all the JSON objects for the events that are to be posted to the PI System, per PI Point, and sending them in bulk using the UpdateValues method (see the sketch below):

UpdateValues POST streams/{webId}/recorded
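
Here is a minimal sketch of that bulk variant, reusing predict, timestamp and b64Val from the snippet above (host and WebID are placeholders):

import requests
from datetime import timedelta

# Collect all predicted events for one PI Point into a single JSON array...
events = []
for event in predict:
    timestamp = timestamp + timedelta(hours=1)
    events.append({
        'Timestamp': timestamp.strftime("%Y-%m-%dT%H:%M:%SZ"),
        'Value': event[0],
        'Good': True,
    })

# ...and send them in one request instead of one request per event
response = requests.post(
    'https://<PIWebAPI_host>/piwebapi/streams/<webID_of_target_PIPoint>/recorded?updateOption=InsertNoCompression',
    json=events,
    headers={"Authorization": "Basic %s" % b64Val},
    verify=True)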

With this blog post series I want to enable data scientists to quickly get started doing data science in Python, without worrying about how to get the data out of the PI System.

 

Specifically, I want to highlight 2 options for getting PI System data into Python for use in data science:

 

  1. Writing PI System Data into a .csv file and using the .csv file as data source in Python.
  2. Directly accessing the PI System using HTTP requests in Python.

 

Approach 1: Extracting PI System Data into a .csv file

Please check out these 3 ways to extract PI System data into .csv files:

 

Extracting PI System data in C# with AFSDK:

Extracting PI System Data to file using AFSDK in .NET

 

Extracting PI System data in C# using PI SQL Client OLEDB:

Extracting PI System Data to file using PI SQL Client OLEDB via PI SQL DAS RTQP in .NET

 

Extracting PI System data in Python using PI Web API:

Extracting PI System Data to file using PI Web API in Python

 

In each of the above approaches, all events for the requested PI Points are extracted, no matter how far apart the events are in time.

This can be unwanted, especially when using the data for time series prediction. In that case you would exchange the "RecordedValues" method for the "Interpolated" method, to be able to define a sampling frequency:

 

PI Web API:

GetInterpolated GET streams/{webId}/interpolated

 

AFSDK:

AFData.InterpolatedValues Method

 

  • PI DataLink can also be used to create the .csv file, but the focus here is on programmatic approaches.

 

Reading data from .csv file in Python

Sample .csv file:

The events are stripped of their timestamps, as the events have a fixed sampling frequency, which makes a timestamp unnecessary.
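
For illustration, such a file could look like this (the values are made up; the header row is what skiprows=1 skips below):

Temperature,Humidity
45.678,12.340
46.120,12.355
47.001,12.401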

 

 

import numpy as np

dataset = np.loadtxt(open('filepath_csv', "rb"), delimiter=",", skiprows=1)

 

skiprows=1 will skip the first row of the .csv file. This is useful when the header of the file contains column descriptions.

The columns of the .csv file are stored in a numpy array, which can be further used for machine learning.

 

Approach 2: Directly accessing the PI System using HTTP requests in Python

For this approach we make use of the requests library in Python.

Requests: HTTP for Humans™ — Requests 2.21.0 documentation

 

The PI Web API GetInterpolated method is used to extract evenly sampled values of a desired PI Point:

GetInterpolated GET streams/{webId}/interpolated

 

In order to retrieve data for a certain PI Point we need its WebID as a reference. It can be retrieved using the built-in search of PI Web API, or programmatically, as sketched below.
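
A minimal sketch using the points resource of PI Web API (host, Data Archive name and credentials are placeholders):

import requests
import base64

# b64Val: Base64-encoded "user:password" string for HTTP Basic authentication
b64Val = base64.b64encode(b'<user>:<password>').decode()

# Look up a PI Point by its path to obtain its WebID
response = requests.get(
    'https://<PIWebAPI_host>/piwebapi/points?path=\\\\<DataArchive>\\sinusoid',
    headers={"Authorization": "Basic %s" % b64Val},
    verify=True)
webid = response.json()["WebId"]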

 

 

 

Using the requests library of Python and the GetInterpolated method of PI Web API, we retrieve the sampled events of the desired PI Point as a JSON HTTP response:

 

import requests

# b64Val is the Base64-encoded "user:password" string created in the WebID lookup sketch above
response = requests.get('https://<PIWebAPI_host>/piwebapi/streams/<webID_of_PIPoint>/interpolated?startTime=T-10d&endTime=T&Interval=1h', headers={"Authorization": "Basic %s" % b64Val}, verify=True)

 

The response is in JSON format and will look something like this (abridged; the values are illustrative):
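
{
  "Items": [
    {
      "Timestamp": "2019-03-01T00:00:00Z",
      "Value": 45.678,
      "UnitsAbbreviation": "",
      "Good": true,
      "Questionable": false,
      "Substituted": false
    },
    ...
  ]
}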

 

 

Parsing the JSON HTTP response:

We only need the values of the events. As they are interpolated, we do not care about quality. The timestamp information is implied by the sampling interval that we specified earlier in the GetInterpolated method of PI Web API.

We assume that we have 2 JSON responses, r1 and r2, for 2 different PI Points, both generated with the GetInterpolated method, with the same sampling interval, over the same time range.

 

 

import numpy as np

json1_data = r1.json()
json2_data = r2.json()

data_list_1 = list()

for j_object in json1_data["Items"]:
    value = j_object["Value"]
    if type(value) is float:  # important: skip elements that are not floats (e.g. the last element, which is of type "dict")
        data_list_1 = np.append(data_list_1, float(value))

data_list_2 = list()

for j_object in json2_data["Items"]:
    value = j_object["Value"]
    if type(value) is float:
        data_list_2 = np.append(data_list_2, float(value))

# Stack both 1-D lists into a 2-D array:
array_request_values = np.array(np.column_stack((data_list_1, data_list_2)))

 

This Python code parses the JSON HTTP responses and writes them into 2 separate lists, which are then stacked into a numpy array:

 

Example:

 

 

This numpy array can be used as input for machine learning.

 

Please check out Machine Learning Pipeline 2, for an easy way to write back machine learning output to the PI System.


Using VS Code with AF SDK

Posted by rborges, Mar 6, 2019

From time to time we hear several excuses for not jumping into software development with PI data. Today we will discuss the top two that I get and how to fix them with a very simple solution. Here they are:

 

  1. I must be an administrator to install development applications;
  2. The tools required for developing applications in a commercial environment are expensive;

 

1. Free IDEs and Licensing

 

Let's start by talking about the cost of applications used for software development. Yes, they are usually expensive, but there are pretty good free alternatives. The problem is that you have to be really careful with the license each one uses and what your responsibilities are. For instance, the first version of Eclipse's EPL was so vague that auto-generated code could be interpreted as derivative work, which could make it mandatory for the developer to open the source code. Since EPL 2 they have fixed this, and Eclipse is now business-friendly.

 

Now, if you want to stick with Microsoft tools, there are two free alternatives. The first one is Visual Studio Community, a slim version of the full-blown Visual Studio. But it has a proprietary license that makes it unsuitable for enterprise use, as it states:

If you are an enterprise, your employees and contractors may not use the software to develop or test your applications, except for: (i) open source; (ii) Visual Studio extensions; (iii) device drivers for the Windows operating system; and, (iv) education purposes as permitted above.

 

So if you are writing a tool to process your corporate data, it's a no-go. This pushes us to the other alternative, VS Code, a free, yet very powerful, community-driven IDE that's suitable for enterprise development. It uses a simple license that allows you to use the application for commercial purposes and in an enterprise environment. As clearly stated in item 1.a: "you may use any number of copies of the software to develop and test your applications, including deployment within your internal corporate network".

 

2. You don't need to be an Admin

 

This item is really easy to fix, as Microsoft offers a standalone version of VS Code. You just have to download the .zip version from here and run the executable. If you are not able to execute the file, the company you work for may be blocking it; in this case, you should talk to your manager or IT department.

 

Well, what now? Is it that simple? Unfortunately, no. Due to the increasing participation of Microsoft in the open-source world, several of its products now target the open-source audience. More specifically for the .NET platform, Microsoft has created the .NET Foundation to handle the usage of the .NET products under OSI (Open Source Initiative, not OSIsoft) rules. Because of this, VS Code is designed to work natively with .NET Core and not the standard .NET Framework. So, in order to use AF SDK, you need some tweaks. But fear not, they are really simple!

 

3. Adding the C# Extension to VS Code

 

In this step, we will install the C# extension that allows the IDE to process .cs and .csproj files. It's very simple: just go to the Extensions tab (highlighted below, or press Ctrl+Shift+X), type "C# for Visual Studio Code", and hit install.

 

 

4. Creating a .NET Standard project in VS Code

 

First, we start VS Code and go to File -> Open Folder to select a folder that will be the base of our demo project. Once you see the welcome screen, go to View -> Terminal so you can use the integrated terminal and the .NET CLI to create a new .NET project. Because we will create a console application, we must type:

 

PS C:\Users\rafael\afdemo> dotnet new console

 

If you really need to work with VB.NET you can append "-lang VB" at the end (but please, don't).

 

You should end up with a structure similar to this:

 

 

If, by any chance, your structure is missing the .vscode folder, press Ctrl+Shift+B to compile and VS Code will ask if you want to add a default build action. Just say yes.
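
Saying yes generates a tasks.json inside the .vscode folder. It should look roughly like this (the exact content depends on your extension version; note the "build" label, which launch.json's preLaunchTask refers to):

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "command": "dotnet",
      "type": "process",
      "args": ["build", "${workspaceFolder}/afdemo.csproj"],
      "problemMatcher": "$msCompile"
    }
  ]
}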

 

5. Converting from .NET Core to .NET Standard

 

Now that we have our project, we must change it to .NET Standard, so we can later reference the AF SDK assembly. We have to change two files. The first one is the .csproj file. You have to change from this:

 

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.2</TargetFramework>
  </PropertyGroup>
</Project>

 

To this:

 

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net472</TargetFramework>
    <PlatformTarget>x64</PlatformTarget>
    <!-- portable PDBs are required to debug with the "clr" launch type -->
    <DebugType>portable</DebugType>
  </PropertyGroup>
</Project>

 

Here we see the first caveat of this approach: the application must be 64 bits. Also, note that I'm explicitly referencing the .NET Framework version that I want to work with. The list of available frameworks (as well as how to write it on the XML file) can be found here.

 

Now we move on to the recently created launch.json inside the .vscode folder. If you pay close attention, you will see that it references a type "coreclr". You have to change it to type "clr". Also, the program path must match the referenced framework version. So, if everything goes as planned, you will change from this:

 

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Core Launch (console)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/netcoreapp2.2/afdemo.dll",
      "args": [],
      "cwd": "${workspaceFolder}",
      "console": "internalConsole",
      "stopAtEntry": false,
      "internalConsoleOptions": "openOnSessionStart"
    },
    {
      "name": ".NET Core Attach",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:pickProcess}"
    }
  ]
}

 

To this:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Standard Launch (console)",
      "type": "clr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/net472/afdemo.exe",
      "args": [],
      "cwd": "${workspaceFolder}",
      "console": "internalConsole",
      "stopAtEntry": false,
      "internalConsoleOptions": "openOnSessionStart"
    },
    {
      "name": ".NET Standard Attach",
      "type": "clr",
      "request": "attach",
      "processId": "${command:pickProcess}"
    }
  ]
}

 

And that's it! If everything went all right, you can now test your Hello World and see if it's working as it should be.

 

6. Referencing OSIsoft.AFSDK.dll

 

Now, in order to get PI data into your project, you just have to add a reference to the AF SDK assembly. This is just a matter of editing your .csproj file and manually including the reference by adding an <ItemGroup> element to your <Project>. It's worth mentioning that HintPath can also handle relative paths; I'm hardcoding the full path here so the unobservant copy-paster doesn't break the code:

 

<Project Sdk="Microsoft.NET.Sdk">
  <ItemGroup>
    <Reference Include="OSIsoft.AFSDK, Version=4.0.0.0, Culture=neutral, PublicKeyToken=6238be57836698e6, processorArchitecture=MSIL">
      <SpecificVersion>False</SpecificVersion>
      <HintPath>C:\Program Files (x86)\PIPC\AF\PublicAssemblies\4.0\OSIsoft.AFSDK.dll</HintPath>
    </Reference>
  </ItemGroup>
</Project>

 

7. Code Time

 

Now let's write some simple code that lists all PI Systems and their respective AF databases:

 

using System;
using OSIsoft.AF;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

namespace afdemo
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is a VS Code Project!");
            PISystems piSystems = new PISystems();
            foreach (var piSystem in piSystems)
            {
                Console.WriteLine(piSystem.Name);
                foreach (var database in piSystem.Databases)
                {
                    Console.WriteLine(" ->" + database.Name);
                }
            }
        }
    }
}

 

You can build the code by pressing Ctrl+Shift+B (Visual Studio users, rejoice!) and debug it by pressing F5. You can now debug your code in the same way you do in the full-blown version of Visual Studio!

 

 

You can also execute the code by typing dotnet run in the integrated terminal. Here is the output of this specific code on my machine:

 

 

8. Conclusion

 

By using VS Code you have no more excuses to not jump headfirst into developing custom tools that can make your workflow simpler and easier. Oh, and one more thing: VS Code is suitable for a myriad of languages, including web development. So you can also use it to create PI Vision Custom Symbols (leave a comment if you would like a guide on how to configure an environment for PI Vision Custom Symbols development).

In about a month, PI World 2019 will kick off in San Francisco.  Like the past many years, the events will be spread over 3 hotels.  Also like the past many years, the Parc 55 hotel is where you will find events catering specifically to developers and the data science community.  A year ago we introduced "Live Coding" sessions and "How To" walk-throughs to offer more in-depth talks (more steak, less sizzle).  This year we have collectively rebranded these new formats into a common track: Tech Talks.  These 90-minute talks hit the sweet spot for many.  If you leave a traditional 45-minute PowerPoint talk wishing for more details, then the longer 90-minute Tech Talk is for you.  If you feel a 3-hour lab is too slow, or you would rather be shown the material than type it yourself, then the shorter 90-minute Tech Talk is for you too.

 

 

We have expanded the Tech Talks to begin on Day 2 rather than wait until Day 3.  Here's what you can find on the agenda.

 

Day 2 Tech Talks (Parc 55)

  • Using Stream Views for Real Time Analytics
  • Using PI Web API and PowerApps to Build Real World Apps With Your PI Data
  • Leveraging the Power of PI Vision Extensibility
  • Concurrent Programming for PI Developers
  • Generating API Clients with OpenAPI 2.0 (Swagger) specifications and interacting with REST endpoints
  • Effortlessly deploying a PI System in Azure or AWS

 

Day 3 Tech Talks (Parc 55)

  • OSIsoft Cloud Services for Developers
  • Writing High Performance Applications with AF SDK
  • Modernizing PI SQL For the Future
  • Create Dashboards to monitor PI Analysis Service

 

Check out the agenda for exact times and room locations.  While you are peeking at the agenda take a look at Day 2 PI Geek Talks and Day 3 Developer Talks too.  All are offered at the Parc 55.

 

And join us from 4:30-6:00 PM on Day 3 for the Developers Reception.

Dear Fellow PI Geeks,

 

It is with a heavy heart that I must announce that we have cancelled this year's PI World Innovation Hackathon.  It was not an easy decision.  We were given a cutoff date 8 weeks before PI World to decide whether to go forward or cancel, and it's hard to make firm predictions so far in advance.  While we were optimistic that we could have a similar number of participants as last year, the tipping point in the decision was the bitter reality that the hackathon has been shrinking in attendance for many years.  When we started out on this brave new venture 7 years ago, we were obviously filled with hopes that it would grow rather than decline.

 

I have received emails asking if we will offer hackathons in the future.  YES, we will.  However, I do not see us offering hackathons during PI World.  Many of our partners and EA customers form the core of the developer community, but PI World has so many demands pulling those partners and customers in so many different directions.  Thus, we are currently considering a special hackathon-only event.

 

I will be hanging out at the Parc 55 hotel on Day 2 and Day 3, where the agenda has lots of developer-related offerings, from 90-minute Tech Talks to some cool PI Geek Talks and the traditional 45-minute Developer Talks.  Not to mention the Day 3 hands-on labs!  So I invite all developers to come over to the Parc 55 to attend some in-depth talks. And if you happen to bump into me, I would love to talk to you about what you want to see in future hackathons.

 

There will be a Developer Reception from 4:30-6:30 PM at the Parc 55 with drinks and appetizers.  Come meet fellow developers and supporting members of the PI Developers Club.

 

If you have any concerns or comments, please email me and/or the entire team below.

 

Rick Davin    

rdavin@osisoft.com OR TechnologyEnablementTeam@osisoft.com

Last week we met with Vitens' Data Science team and they showed us this amazing Custom Data Reference that they've built for Dynamic Water Demand Forecasts.

They're using it for leak detection, but I'm sure the same could be applied to other use cases and other industries as well.

What's even more interesting is that they've been kind enough to upload the material to GitHub!

There is also a YouTube video showing the installation, configuration, and running of DBM and the DBM data reference.

 

[Image: DBM.jpg]

 

Many thanks to Johan Fitié and the rest of Vitens' Data Science team for making this available. Here are also a few comments from them:

 

Water company Vitens has created a demonstration site called the Vitens Innovation Playground (VIP), in which new technologies and methodologies are developed, tested, and demonstrated. The projects conducted in the demonstration site can be categorized into one of four themes: energy optimization, real-time leak detection, online water quality monitoring, and customer interaction. In the real-time leak detection theme, a method for leak detection based on statistical demand forecasting was developed.

 

Using historical demand patterns and statistical methods, such as median absolute deviation, linear regression, sample variance, and exponential moving averages, real-time values can be compared to a forecast demand pattern and checked to be within calculated bandwidths. The method was implemented in Vitens' real-time data historian, continuously checking that measured demand values are within operational bounds.
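
By way of illustration only (this is a toy sketch, not Vitens' actual DBM code), a bandwidth check of this kind could look like the following in Python:

import numpy as np

def within_bandwidth(history, measured, k=3.0):
    # Compare a measured value against a band around the forecast, here taken as the
    # median of historical demand, with a width of k times the median absolute deviation (MAD)
    forecast = np.median(history)
    mad = np.median(np.abs(history - forecast))
    return abs(measured - forecast) <= k * mad

# Hourly demand for the same hour on previous days (made-up numbers)
history = np.array([102.0, 98.5, 101.2, 99.8, 100.4])
print(within_bandwidth(history, 100.9))  # True: within the expected band
print(within_bandwidth(history, 140.0))  # False: possible leak or unmeasured supply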

 

One of the advantages of this method is that it doesn't require manual configuration or training sets. Next to leak detection, unmeasured supply between areas and unscheduled plant shutdowns were also detected. The method was found to be such a success within the company, that it was implemented in an operational dashboard and is now used in day-to-day operations.

 

The software is available as free software under the GPLv3 license;

 

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

 

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

 

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/

 

For more information, please take a look at https://github.com/Vitens/DBM

In this post I will be leveraging OSIsoft's PI Web API to extract PI System data to a flat file.

To keep things simple and easy to reproduce, this post will focus on how to extract data with this technology.

 

Prerequisites:

 

Remote PI System:

PI Data Archive 2018

PI AF Server 2018

 

Client:

Python 3.7.2

 

For simplicity, 7 days of data for just the PI Point "Sinusoid" will be queried and written to a .txt file.

 

In order to retrieve data for a certain PI Point we need its WebID as a reference. It can be retrieved using the built-in search of PI Web API.

 

 

 

Given the WebID of the PI Point "Sinusoid", the following code will request historical data for the previous 7 days, parse the JSON response, and write "Timestamp, Value, isGood" to the specified data file.

 

Python Code:

import requests

url = "https://<piwebapi_endpoint>/piwebapi/streams/<WebID_of_Sinusoid>/recorded?startTime=*-7d&endTime=*&boundaryType=Inside&maxCount=150000"  # maxCount sets the upper limit of values to be returned
filepath = "<filepath>"
response = requests.get(url, auth=('<user>', '<password>'), verify=False)  # verify=False disables the certificate verification check
json_data = response.json()

timestamp = []
value = []
isGood = []

# Parsing the JSON response
for j_object in json_data["Items"]:
    timestamp.append(j_object["Timestamp"])
    value.append(j_object["Value"])
    isGood.append(j_object["Good"])

event_array = zip(timestamp, value, isGood)

# Writing to file
with open(filepath, "w") as f:
    for item in event_array:
        try:
            writestring = "Timestamp: " + str(item[0]) + " , Value: " + str(item[1]) + " , isGood: " + str(item[2]) + " \n"
        except:
            try:
                writestring = "" + str(item[0]) + " \n"
            except:
                writestring = "" + " \n"
        f.write(writestring)

 

Result:

 

Timestamp, value and the quality for this time range were successfully written to the file.

In this post I will be leveraging OSIsoft's AF SDK to extract PI System data to a flat file.

To keep things simple and easy to reproduce, this post will focus on how to extract data with this technology.

 

Prerequisites:

PI Data Archive 2018

PI AF Server 2018

PI AF Client 2018 SP1

Microsoft Visual Studio 2017

 

For simplicity, 7 days of data for just the PI Point "Sinusoid" will be queried and written to a .txt file.

 

The following code will establish an AF SDK connection to the default PI Data Archive server specified in the local Known Servers Table. A query for PI Points is launched to find the PI Point "Sinusoid".

The method PIPointList.RecordedValues is used to retrieve a list of AFValues, whose properties "Timestamp, Value, IsGood" are then written to a flat file.

 

C# Code:

using System.Collections.Generic;
using System.Linq;
using OSIsoft.AF;
using OSIsoft.AF.Asset;
using OSIsoft.AF.Data;
using OSIsoft.AF.PI;
using OSIsoft.AF.Time;

namespace Data_Access_AFSDK
{
    class Program
    {
        static void Main(string[] args)
        {
            PIServer myPIserver = null;
            string tagMask = "";
            string startTime = "";
            string endTime = "";
            string fltrExp = "";
            bool filtered = true;

            // Connection to the PI server
            if (myPIserver == null)
                myPIserver = new PIServers().DefaultPIServer;

            // Query for the PI Point
            tagMask = "Sinusoid";
            List<PIPointQuery> ptQuery = new List<PIPointQuery>();
            ptQuery.Add(new PIPointQuery("tag", AFSearchOperator.Equal, tagMask));
            PIPointList myPointList = new PIPointList(PIPoint.FindPIPoints(myPIserver, ptQuery));

            startTime = "*-7d";
            endTime = "*";

            // Retrieve events using PIPointList.RecordedValues into the list 'myAFvalues'
            List<AFValues> myAFvalues = myPointList.RecordedValues(new AFTimeRange(startTime, endTime), AFBoundaryType.Inside, fltrExp, filtered, new PIPagingConfiguration(PIPageType.EventCount, 10000)).ToList();

            // Convert the AFValues to string[]
            string[] query_result_string_timestamp_value = new string[myAFvalues[0].Count];
            string value_to_write;
            string quality_value;
            int i = 0;

            foreach (AFValue query_event in myAFvalues[0])
            {
                value_to_write = query_event.Value.ToString();
                quality_value = query_event.IsGood.ToString();
                query_result_string_timestamp_value[i] = "Timestamp: " + query_event.Timestamp.LocalTime + ", " + "Value: " + value_to_write + ", " + "IsGood: " + quality_value;
                i += 1;
            }

            // Writing the data to a file
            System.IO.File.WriteAllLines(@"<FilePath>", query_result_string_timestamp_value);
        }
    }
}

 

 

Result:

Timestamp, value and the quality for this time range were successfully written to the file.

In this post I will be leveraging OSIsoft's PI SQL Client OLEDB to extract PI System data via the PI SQL Data Access Server (RTQP Engine).

To keep things simple and easy to reproduce, this post focuses on how to extract data with this technology.

 

Prerequisites:

PI SQL Client OLEDB 2018

PI Data Archive 2018

PI AF Server 2018

PI SQL Data Access Server (RTQP Engine) 2018

Microsoft Visual Studio 2017

 

For simplicity, a test AF database "testDB" was created with a single element "Element1", which has a single attribute "Attribute1".

This attribute references the PI Point "Sinusoid".
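
Before pulling data, it can help to sanity-check that the attribute is visible through the RTQP engine. A quick query along these lines (using the same [Master].[Element].[Attribute] table as the extraction query below) should return a single row:

SELECT ea.ID, ea.Element, ea.Name
FROM [Master].[Element].[Attribute] ea
WHERE ea.Element = 'Element1' AND ea.Name = 'Attribute1'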

 

 

SQL Query used to extract 7 days of events from \\<AFServer>\testDB\Element1|Attribute1

 

SELECT av.Value, av.TimeStamp, av.IsValueGood
FROM [Master].[Element].[Archive] av
INNER JOIN [Master].[Element].[Attribute] ea ON av.AttributeID = ea.ID
WHERE ea.Element = 'Element1' AND ea.Name = 'Attribute1'
AND av.TimeStamp BETWEEN N't-7d' AND N't'

 

Example C# Code:

 

using System;
using System.Data.OleDb;
using System.IO;

namespace Data_Access_PI_SQL_Client_OLEDB
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var connection = new OleDbConnection())
            using (var command = connection.CreateCommand())
            {
                connection.ConnectionString = "Provider=PISQLClient; Data Source=<AFServer>\\<AF_DB>; Integrated Security=SSPI;";
                connection.Open();

                string SQL_query = "SELECT av.Value, av.TimeStamp, av.IsValueGood ";
                SQL_query += "FROM [Master].[Element].[Archive] av ";
                SQL_query += "INNER JOIN [Master].[Element].[Attribute] ea ON av.AttributeID = ea.ID ";
                SQL_query += "WHERE ea.Element = 'Element1' AND ea.Name = 'Attribute1' ";
                SQL_query += "AND av.TimeStamp BETWEEN N't-7d' AND N't' ";

                command.CommandText = SQL_query;

                // Stream the result set straight to the output file
                using (var reader = command.ExecuteReader())
                using (StreamWriter writer = new StreamWriter("<outputfilepath>"))
                {
                    while (reader.Read())
                    {
                        writer.WriteLine("Timestamp: {0}, Value: {1}, isGood: {2}",
                            reader["TimeStamp"], reader["Value"], reader["IsValueGood"]);
                    }
                }
            }

            Console.WriteLine("Completed Successfully!");
            Console.ReadKey();
        }
    }
}

 

 

Result:

Events were successfully written to the flat file.

 

Great list, though the author admits the list is more exhausting than it is exhaustive.  A must-read for VB.NET developers.

 

An Exhausting List of Differences Between VB.NET & C# | Anthony's blog

Note: for development and testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

Until now, when installing PI interfaces on a separate node from the PI Data Archive, we needed to provision a separate physical or virtual machine just for the interface itself. Don't you think that is a bit of a waste of resources? To combat this, we can containerize interfaces so that they become more portable, which allows them to be scheduled anywhere inside your computing cluster. Their batch file configuration also makes them good candidates for lifting and shifting into containers.

 

We will start off by introducing the PI to PI interface container which is the first ever interface container! It will have buffering capabilities (via PI Buffer Subsystem) and its performance counters will also be active.

 

Set up servers

First, let me spin up 2 PI Data Archive containers to act as the source and destination servers. Check out this link on how to build the PI Data Archive container.

PI Data Archive container health check

docker run -h pi --name pi -e trust=%computername% pidax:18
docker run -h pi1 --name pi1 -e trust=%computername% pidax:18

 

For the source code to build the PI Data Archive container and the PI to PI Interface container, please send an email to technologyenablement@osisoft.com. This is a short-term measure for obtaining the source code while we are revising our public code-sharing policies.

 

We shall be using pi1 as our source and pi as our destination.

 

Let's open up PI SMT to add the trust for the PI to PI Interface container. Do this on both PI Data Archives.

The IP address and NetMask are obtained by running ipconfig on your container host.

The reason I set the trusts this way is that the containers are guaranteed to spawn within this subnet, since they are attached to the default NAT network. Therefore, the 2 PI Data Archive containers and the PI to PI Interface container are all in this subnet. Container-to-container connections are bridged through an internal Hyper-V switch.
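
If you want to confirm that subnet yourself, you can inspect the default NAT network on the container host; the format string below simply extracts the subnet:

docker network inspect nat -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}"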

 

On pi, create a PI Point, giving it any name you want (my PI Point shall be named 'cdtclone'). Configure the other attributes of the point as follows:

Point Source: pitopi
Exception: off
Compression: off
Location1: 1
Location4: 1
Instrument Tag: cdt158

 

Leave the other attributes at their defaults. This point will receive data from cdt158 on the source server, as specified in the Instrument Tag attribute. If you prefer scripting to clicking through PI SMT, see the piconfig sketch below.
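
A piconfig sketch along these lines should produce the same point (the attribute names here are assumptions worth double-checking against your PI Data Archive version; run it inside the destination container, for example via docker exec -it pi cmd):

@table pipoint
@mode create
@istr tag, pointsource, location1, location4, instrumenttag, compressing, excdev
cdtclone, pitopi, 1, 1, cdt158, 0, 0
@ends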

 

Set up interface

Now you are all set to proceed to the next step which is to create the PI to PI Interface container!

 

You can easily do so with just one command. Remember to log in to Docker with the usual credentials.

docker run -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

The environment variables that you can configure are:

host: destination server

src: source server

ps: point source

These are all the parameters that are supported for now.

 

You should be able to see data appearing in the cdtclone tag on the destination server now.

 

Don't you think it was very quick and easy to get started?

 

Buffer

As I mentioned before, the container also has buffering capabilities. We shall consider 2 scenarios.

 

1. The destination server is stopped. Same effect as losing network connectivity to the destination server.

2. The PI to PI interface container is destroyed.

 

Scenario 1

Stop pi.

docker stop pi

 

Wait for a few minutes and run

docker exec p2p cmd /c pibufss -cfg

 

You should see the following output, which indicates that the buffer is working and actively queuing data in anticipation of the destination server coming back up.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 18:39:23, regid: 3
lastsend: 2-Nov-18 18:58:59
total events sent: 47, snapshot posts: 42, queued events: 8

 

When we start up pi again

docker start pi

 

Wait a few minutes before running pibufss -cfg again. You should now see

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering

*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: SendingData, successful connections: 2
PI identities: piadmins | PIWorld, auth type: SSPI
firstcon: 2-Nov-18 18:39:23, lastreg: 2-Nov-18 19:07:24, regid: 3
total events sent: 64, snapshot posts: 45, queued events: 0

 

The buffer has re-registered with the server and flushed the queued events to the server. You can check the archive editor to make sure the events are there.

 

Scenario 2

Stop pi just so that events will start to buffer.

docker stop pi

 

Check that events are getting buffered.

*** Configuration:
Buffering: On (API data buffered)
Loaded physical server global parameters: queuePath=C:\ProgramData\OSIsoft\Buffering


*** Buffer Sessions:
1 non-HA server, name: pi, session count: 1
1 [pi] state: Disconnected, successful connections: 1
PI identities: , auth type:
firstcon: 13-Nov-18 15:25:07, lastreg: 13-Nov-18 15:25:08, regid: 3
lastsend: 13-Nov-18 17:54:14
total events sent: 8901, snapshot posts: 2765, queued events: 530

 

Now while pi is still stopped, stop p2p.

docker stop p2p

 

Check the volume name that was created by Docker.

docker inspect p2p -f "{{.Mounts}}"

 

Output as below. The volume name is the long hexadecimal string; save that name somewhere.

[{volume 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17 C:\ProgramData\docker\volumes\76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17\_data c:\programdata\osisoft\buffering local true }]

 

Now you can destroy p2p and start pi

docker rm p2p
docker start pi

 

Use archive editor to verify that data has stopped flowing.

The last event was at 5:54:13 PM.

 

We want to recover the data that is in the buffer queue files. We can create a new PI to PI Interface container pointing to the saved volume name.

docker run -v 76016ed9fd8129714f29adeead02b737394485d278781417c80af860c4927c17:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

 

And VOILA! The events in the buffer queues have all been flushed into pi.

 

To be sure that the recovered events are not due to history recovery by the PI to PI interface container, I have disabled it.

 

I have demonstrated that the events in the buffer queue files were persisted across container destruction and creation as the data was persisted outside the container.
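
As a side note, you can sidestep saving the auto-generated volume name by mounting a named volume when you first create the container (a sketch; the volume name p2pbuffer is arbitrary):

docker run -v p2pbuffer:"%programdata%\osisoft\buffering" -e host=pi -e src=pi1 -e ps=pitopi --name p2p pitopi

A replacement container created with the same -v argument will then pick up any queued events automatically.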

 

 

Performance counters

The container also has performance counters activated. Let's try to get the value of Device Status. Run the following PowerShell command in the container.

Get-Counter '\pitopi(_Total)\Device Status'

 

Output

Timestamp CounterSamples
--------- --------------
11/2/2018 7:24:14 PM \\d13072c5ff8b\pitopi(_total)\device status :0

 

A device status of 0 means healthy.

 

What if we stopped the source server?

docker stop pi1

 

Now run the Get-Counter command again, and we can expect to see

Timestamp CounterSamples
--------- --------------
11/2/2018 7:29:29 PM \\d13072c5ff8b\pitopi(_total)\device status :95

 

A device status of 95 means "Network communication error to source PI server".

 

These performance counters will be perfect for writing health checks against the interface container.
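
As a sketch of such a health check (assuming PowerShell is available in the interface image), a Dockerfile HEALTHCHECK could poll the counter and report unhealthy whenever Device Status is non-zero:

HEALTHCHECK --interval=60s CMD powershell -command "if ((Get-Counter '\pitopi(_Total)\Device Status').CounterSamples[0].CookedValue -eq 0) { exit 0 } else { exit 1 }"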

 

Conclusion

We have seen in this blog how to use the PI to PI Interface container to transfer data between two PI Data Archive containers. As you know, OSIsoft has hundreds of interfaces. Being able to containerize one means the chances of successfully containerizing the others are very high. The example in this blog will serve as a proof of concept.
