All Places > PI Developers Club > Blog > 2016 > May

Introduction

 

PI Vision 2017 has been released, and one of its great features is the ability to develop custom symbols and custom tool panes. There are many interesting JavaScript libraries available to help you create your own custom PI Vision symbols. In this blog post series, I will show you how to develop a Google Maps custom symbol for PI Vision 3 (2016, 2016 R2, 2017 and 2017 R2).

 

The idea is that when the user drags an element and drops it on the PI Vision display, a Google Map will be created with a marker located according to the values of the latitude and longitude attributes of the dropped element. If another element is dropped on the map, another marker should be created accordingly.

 

Below is a screenshot taken using the version from part 3:

 

 

 

 

Setting up your environment

 

Here is a suggested way to set up your environment so you can develop and debug any custom symbol efficiently. Let's suppose you have a client machine with your IDE/editor installed to develop your library, and a web server machine with PI Vision installed.

 

Because I am really used to Visual Studio's features, I prefer it over other IDEs/editors such as Visual Studio Code, Notepad++ or Sublime Text. I already have Visual Studio 2015 and Google Chrome installed on my client machine, so here are the steps I followed to set up my development environment:

 

  • Edit the web.config file on the PI Vision installation by changing the compilation tag under system.web from debug="false" to debug="true". This will help you debug your library using Google Chrome. Please refer to the PI Vision 2017 Custom Extension Creation documentation for more information about this step.
  • On the PI Vision server, open Windows Explorer and navigate to the folder C:\Program Files\PIPC\PIVision\Scripts\app\editor\symbols. Right-click the ext folder and select Properties. Under the Sharing tab, you will find options to share the folder on the network. Make sure that your user account has read and write privileges.
  • On the client machine, open Windows Explorer and navigate to the PI Vision server machine. You will find the ext folder that you have just shared. Right-click on it, select  "Map network drive..." and follow the steps. After that, you should have a new drive mapped to the PI Vision server ext folder.
  • Create two blank files in the shared ext folder: sym-gmaps-p1.js and sym-gmaps-p1-template.html. Open Visual Studio but do not create any new project; just drag those two files and drop them into the IDE. That's all! Whenever you save a file in Visual Studio, it is saved on the PI Vision server. If you are browsing with Google Chrome, refresh the page by pressing CTRL + F5 to clear the cache each time you save a new version of the file.

 

Getting started developing the PI Vision symbol

 

Besides the PI Vision extensibility document, there is a great OSIsoft GitHub repository called "PI Vision: Extensibility Samples and Tutorials" with many useful examples. Please refer to the "Simple Value Symbol" document as it provides more information in case you have never created a custom symbol before.

 

Before we start, you can download the source code package from this GitHub repository.

 

  • We have already created the two files on the ext folder. Open the sym-gmaps-p1.js file and add the following code to it.

 

(function (PV) {
    function symbolVis() { }
    PV.deriveVisualizationFromBase(symbolVis);
    var definition = {
        typeName: 'gmaps-p1',
        datasourceBehavior: PV.Extensibility.Enums.DatasourceBehaviors.Multiple,
        iconUrl: 'Images/google-maps.svg',
        getDefaultConfig: function () {
            return {
                DataShape: 'Table',
                Height: 600,
                Width: 400           
            };
        },
        visObjectType: symbolVis
    };
    PV.symbolCatalog.register(definition);
})(window.PIVisualization);

 

  • The code above creates an IIFE (immediately invoked function expression), a JavaScript function that runs as soon as it is defined. A JavaScript object is created with properties that describe the symbol, including typeName and datasourceBehavior. This object is the input of the PV.symbolCatalog.register() function, which makes the symbol available to PI Vision users.
  • The datasourceBehavior is set to Multiple because we want to drag and drop AF elements and monitor two of their attributes (latitude and longitude). This is why PV.Extensibility.Enums.DatasourceBehaviors.Single shouldn't be used.
  • Another property is added to the definition object called getDefaultConfig() which will return the default configuration values to save in the database.
  • If you read the extensibility document, you will find that the DataShape field returned by getDefaultConfig() defines how data should be retrieved by PI Vision. The document describes each option for this property. Table seems the most suitable choice for this use case, since we are working with multiple data sources and are not interested in historical data.
  • The Width and Height properties were added to the object returned by the getDefaultConfig() and used to determine the width and height of the symbols when they are added to the PI Vision display.
  • Finally, the init function is defined on the symbolVis prototype. Please refer to the code below:

 

(function (PV) {
    function symbolVis() { }
    PV.deriveVisualizationFromBase(symbolVis);

    var definition = {
        typeName: 'gmaps-p1',
        datasourceBehavior: PV.Extensibility.Enums.DatasourceBehaviors.Multiple,
        iconUrl: 'Images/google-maps.svg',
        getDefaultConfig: function () {
            return {
                DataShape: 'Table',
                Height: 600,
                Width: 400
            };
        },
        visObjectType: symbolVis
    };

    symbolVis.prototype.init = function init(scope, elem) {
        this.onDataUpdate = dataUpdate;
        this.onConfigChange = configChanged;
        this.onResize = resize;

        var container = elem.find('#container')[0];
        var id = "gmaps_" + Math.random().toString(36).substr(2, 16);
        container.id = id;
        scope.id = id;

        function configChanged(config, oldConfig) {
        }

        function resize(width, height) {
        }

        function dataUpdate(data) {
        }
    };

    PV.symbolCatalog.register(definition);
})(window.PIVisualization);

 

  • The first lines of the init function change the id of the symbol's DOM node to make it unique, using a random suffix. This is possible because one of the inputs of the init function is elem, the DOM node of the current symbol.
  • The content of the sym-gmaps-p1-template.html file is:

 

<div id="container" style="width:100%;height:100%;"></div>


 

  • Finally, we have defined the configChanged, resize and dataUpdate functions, which will be explained in the following blog posts on this topic.
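The unique-id scheme described above can be sketched in isolation. This is a minimal, hypothetical helper (makeSymbolId is not part of the actual symbol code) showing why a random base-36 suffix lets several map symbols coexist on one display:

```javascript
// Hypothetical helper illustrating the unique-id scheme used in init():
// a random base-36 suffix keeps each symbol's container id unique, so
// several map symbols can coexist on a single display.
function makeSymbolId(prefix) {
  return prefix + "_" + Math.random().toString(36).substr(2, 16);
}

var a = makeSymbolId("gmaps");
var b = makeSymbolId("gmaps");
```

Two calls produce two different ids, so each symbol instance can safely use its id with document.getElementById later on.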

 

It is time for us to initialize the Google Maps JavaScript API libraries.

 

Getting started with Google Maps JavaScript API

 

Google provides a programming reference and samples for Google Maps JavaScript API, which were really useful to write this blog post. Let's take a look at the most basic example. Their HTML page has the following source code:

 

<!DOCTYPE html>
<html>
  <head>
    <title>Simple Map</title>
    <meta name="viewport" content="initial-scale=1.0">
    <meta charset="utf-8">
    <style>
      html, body {
        height: 100%;
        margin: 0;
        padding: 0;
      }
      #map {
        height: 100%;
      }
    </style>
  </head>
  <body>
    <div id="map"></div>
    <script>

var map;
function initMap() {
  map = new google.maps.Map(document.getElementById('map'), {
    center: {lat: -34.397, lng: 150.644},
    zoom: 8
  });
}

    </script>
    <script src="https://maps.googleapis.com/maps/api/js?&callback=initMap" async defer></script>
  </body>
</html>

 

 

Ok, we have some problems to solve:

 

  1. Our first problem is that it is not good practice to edit PI Vision's index file in order to add a script tag that loads the external Google Maps library. Ideally, the symbol code should handle this task itself.
  2. The second problem is that the URL that refers to the GMaps (Google Maps) library includes the name of the callback function to be called after the library is loaded. How can we make this work from within the symbol code?
  3. Users can add as many instances of this symbol as they want, but if the GMaps library has already been loaded, the symbol should not try to load it again.

 

 

Solving problem 1: After some research, I found this interesting page, which shows how to dynamically load external JavaScript and CSS files. After making some changes, here is the code that does the trick:

 

                var script_tag = document.createElement('script');
                script_tag.setAttribute("type", "text/javascript");
                script_tag.setAttribute("src", "http://maps.google.com/maps/api/js?sensor=false&callback=gMapsCallback");
                (document.getElementsByTagName("head")[0] || document.documentElement).appendChild(script_tag);

 

Solving problem 2: If we define a function as property of the window JavaScript object, GMaps will be able to call it. Therefore, we have defined the window.gMapsCallback function as:

 

    window.gMapsCallback = function () {
        $(window).trigger('gMapsLoaded');
    }

 

We have also bound the gMapsLoaded event with the scope.startGoogleMaps function:

 

        $(window).bind('gMapsLoaded', scope.startGoogleMaps);

 

The scope.startGoogleMaps function will actually create the map on the display.

 

Solving problem 3: Properties of the window object can be used as global variables.

 

By assigning window.googleRequested = true, all other symbol instances will know that the page has already requested the required JavaScript libraries. By checking whether window.google is undefined, the symbol can tell whether the GMaps libraries have finished loading.
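This load-once guard can be sketched as plain logic. The names below are hypothetical (requestLibrary is not part of the actual symbol code), and `state` stands in for the browser's window object:

```javascript
// Minimal sketch of the "load once" guard described above.
// `state` stands in for the browser's window object.
function requestLibrary(state, load) {
  if (state.google !== undefined) {
    return "already-loaded";    // library object present: use it directly
  }
  if (state.googleRequested) {
    return "loading";           // another instance already started the download
  }
  state.googleRequested = true; // mark so later instances don't re-request
  load();
  return "requested";
}

var state = {};
var calls = 0;
var first = requestLibrary(state, function () { calls++; });
var second = requestLibrary(state, function () { calls++; });
state.google = {};              // simulate the external script finishing
var third = requestLibrary(state, function () { calls++; });
```

However many symbol instances run this, the external script is requested exactly once; later instances either wait or use the already-loaded library.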

 

We need to be prepared to handle the following scenario:

  1. The user adds the first instance of the GMaps symbol to a display.
  2. The symbol starts loading the GMaps libraries (window.googleRequested = true) and will create a map when window.gMapsCallback is called.
  3. The user adds a second instance of this custom symbol. At this point, the GMaps libraries are still loading (window.google is still undefined). The second instance shouldn't try to load the external libraries again, because they are already being loaded.
  4. If the second instance tries to create a map right away, an exception will be thrown; it needs to wait until window.google is no longer undefined.

 

Finally, here is the definition of scope.startGoogleMaps:

 

        scope.startGoogleMaps = function () {
            if (scope.map == undefined) {
                scope.map = new window.google.maps.Map(document.getElementById(scope.id), {
                    center: { lat: 0, lng: 0 },
                    zoom: 1
                });
            }
            configChanged(scope.config);
        };

 

With all these concepts and restrictions in mind, here is the final version of the code for part 1:

 

(function (PV) {
    function symbolVis() { }
    PV.deriveVisualizationFromBase(symbolVis);

    var definition = {
        typeName: 'gmaps-p1',
        datasourceBehavior: PV.Extensibility.Enums.DatasourceBehaviors.Multiple,
        iconUrl: 'Images/google-maps.svg',
        getDefaultConfig: function () {
            return {
                DataShape: 'Table',
                Height: 600,
                Width: 400
            };
        },
        visObjectType: symbolVis
    };

    window.gMapsCallback = function () {
        $(window).trigger('gMapsLoaded');
    }

    function loadGoogleMaps() {
        if (window.google == undefined) {
            if (window.googleRequested) {
                setTimeout(function () {
                    window.gMapsCallback();
                }, 3000);
            }
            else {
                var script_tag = document.createElement('script');
                script_tag.setAttribute("type", "text/javascript");
                script_tag.setAttribute("src", "http://maps.google.com/maps/api/js?key=AIzaSyDUQhTeNplK37EX-mXdAB-zVuYDutE5c2w&callback=gMapsCallback");
                (document.getElementsByTagName("head")[0] || document.documentElement).appendChild(script_tag);
                window.googleRequested = true;
            }
        }
        else {
            window.gMapsCallback();
        }
    }

    symbolVis.prototype.init = function init(scope, elem) {
        this.onDataUpdate = dataUpdate;
        this.onConfigChange = configChanged;
        this.onResize = resize;

        var container = elem.find('#container')[0];
        var id = "gmaps_" + Math.random().toString(36).substr(2, 16);
        container.id = id;
        scope.id = id;

        function configChanged(config, oldConfig) {
        }

        scope.startGoogleMaps = function () {
            if (scope.map == undefined) {
                scope.map = new window.google.maps.Map(document.getElementById(scope.id), {
                    center: { lat: 0, lng: 0 },
                    zoom: 1
                });
            }
            configChanged(scope.config);
        };

        function resize(width, height) {
        }

        function dataUpdate(data) {
            if ((data == null) || (data.Rows.length == 0)) {
                return;
            }
        }

        $(window).bind('gMapsLoaded', scope.startGoogleMaps);
        loadGoogleMaps();
    };

    PV.symbolCatalog.register(definition);
})(window.PIVisualization);

 

 

Save both files, open PI Vision in Google Chrome, and create a new display. Select the Google Maps symbol in the left pane, then drag and drop any element onto the new PI Vision display. Make sure that the Google Maps symbol is created and that no exceptions are thrown when multiple symbols are added to a single display.

 

 

 

Conclusions

 

When developing custom PI Vision symbols, it may be necessary to load external JavaScript and CSS libraries dynamically. This first blog post of the series showed how to get started developing the Google Maps PI Vision symbol. In the next blog post, I will show you how to add a marker to the map according to the latitude and longitude of the dropped element.

 

If you have any questions or suggestions, please post them in the comments below.

 

Introduction

Debugging, testing, and understanding what an application does is almost impossible without a good logging system. When our customers develop .NET applications with the PI Developer Technologies, we get a lot of questions pointing to a possible bad behavior of our components; further investigation, including adding logs to the .NET application, often reveals a completely different cause. This is why I'll share with you how I implement a logging system in a .NET application. You will see this is not difficult and that a logging system has huge benefits:

  • Helps you understand what happens, and when.
  • Provides a history of the actions and errors your application has performed or encountered.
  • Speeds up the development cycle: logs often let you fix the code without running the program under the Visual Studio debugger, because you can look at the log files to determine what is happening and see when something goes wrong.

 

This post explains how to use Apache log4net in your application.

 

Software used - Requirements

Visual Studio 2015 is used for this post; log4net has been compatible with .NET since .NET 1.0, so using another version of Visual Studio should not be a problem.

An internet connection is needed to get the NuGet package. You may also reference the .dll directly, but NuGet is easier to use and removes the need to keep the .dll in your version control system.

 

Configure your application to use log4net

We'll create a simple application that also contains a library .dll, to see how easy it is to get logs from every part of your application.

So go ahead and create :

  • a new console application called application.
  • a new class library project in the same solution, called library.

 

1 - Add the Nuget package to your project(s)

From the solution node in Visual Studio, right click and select "Manage NuGet Packages for Solution"

 

In the NuGet packages manager:

  1. Select Browse (this will search the Package source nuget.org, on internet)
  2. Enter "log4Net"
  3. Select log4net in the search results
  4. Select all projects in your solution for which you want to use logging
  5. Click Install

 

It is always good to look at the output to check that there are no errors; mine looks like this:

 

2 - Create the log4net Configuration file

Add an empty .xml file to your application; I like to call it log4net.config.xml.

log4net needs XML configuration to work, and this is what makes it so flexible: you can store the configuration either in your application.exe.config file or in a separate file. Personally, I prefer a separate file, and that is how we will configure it in this post.

log4net allows you to configure one or many "appenders". An appender is a destination the logging statements are written to, e.g. AdoNet, MS SQL Server, Console, EventLog, File, Memory, SMTP, etc.

I will provide the two basic configurations I use, but keep in mind that log4net is not limited to these; it has a wide range of possibilities, and you can log to almost anything you can think of!

For this post, I'll use the configuration 1 below.

 

Configuration 1 - For a Console Application

This configuration will show logs in both: the console and a rolling text file:

  • For the console, logs will have a color based on the log level: ALL, WARN, ERROR.

Content of log4net.config.xml:

<log4net debug="false">
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="Logs\CommandLine.Log" />
    <threshold value="ALL" />
    <appendToFile value="true" />
    <rollingStyle value="Composite" />
    <maximumFileSize value="1MB" />
    <maxSizeRollBackups value="10" />
    <datePattern value="yyyyMMdd" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="*%-10level %-30date %message [%logger] [%thread] %newline" />
    </layout>
  </appender>

  <appender name="ColoredConsoleAppender" type="log4net.Appender.ColoredConsoleAppender">
    <mapping>
      <level value="ERROR" />
      <foreColor value="Red, highintensity" />
    </mapping>
    <mapping>
      <level value="WARN" />
      <foreColor value="Yellow, highintensity" />
    </mapping>
    <mapping>
      <level value="ALL" />
      <foreColor value="Green, highintensity" />
    </mapping>
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="*%-10level %-30date %message [%logger] [%thread] %newline" />
    </layout>
  </appender>

  <root>
    <level value="ALL" />
    <appender-ref ref="RollingFile" />
    <appender-ref ref="ColoredConsoleAppender" />
  </root>
</log4net>

 

 

Configuration 2 - For any application

This configuration simply logs into a text file ( for services and any other application type)

Content of log4net.config.xml:

<log4net>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="Logs\Application.Log" />
    <threshold value="ALL" />
    <appendToFile value="true" />
    <rollingStyle value="Composite" />
    <maximumFileSize value="1MB" />
    <maxSizeRollBackups value="10" />
    <datePattern value="yyyyMMdd" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="*%-10level %-30date %message [%logger] [%thread] %newline" />
    </layout>
  </appender>

  <root>
    <level value="ALL" />
    <appender-ref ref="RollingFile" />
  </root>
</log4net>

 

My log4net.config.xml content is now the same as what is shown in Configuration 1.

 

One last step is to make sure the file is copied when the application is built, so I'll set the file property "Copy to Output Directory" to "Copy if newer":

 

3- Make the application load log4Net configuration

Open AssemblyInfo.cs

Insert the following line anywhere in the file; it does not matter where:

C#

[assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config.xml", Watch = true)]

VB.NET

<Assembly: log4net.Config.XMLConfigurator(ConfigFile:="Log4Net.Config.xml", Watch:=True)>

 

 

The logger variable

Each time you need to log information from a file, you'll need to declare a logger in that file as shown below. Make sure to pass your containing class to the typeof operator. (You could hard-code the logger name as a string; however, using typeof keeps the name correct if you later rename the class.)

Add the following variable in your file, at the class level (or module if VB):

C#

private readonly log4net.ILog _logger = log4net.LogManager.GetLogger(typeof(Program));

 

VB.NET

Private ReadOnly logger As log4net.ILog = log4net.LogManager.GetLogger(GetType(Program))

 

Full Example

Here is a more concrete example. Notice that the only place I catch an exception is in Main; if an issue occurs, the exception will "bubble up" to there. Unless you are handling a particular exception, you should not catch exceptions.

application - Program.cs

using System;
using System.Diagnostics;

namespace application
{
    class Program
    {
        private static readonly log4net.ILog _logger = log4net.LogManager.GetLogger(typeof(Program));
        
        static void Main(string[] args)
        {
            var timer = Stopwatch.StartNew(); // starting a timer to show how to log it later
            _logger.Info("application is starting...");
            _logger.Warn("Looks like nothing is really happening in this application");

            try
            {
                // doing something with the library
                var worker=new library.Worker();
                worker.doWork();
            }

            catch (Exception ex)
            {
                _logger.Error("Hmm... something unexpected happened", ex);
            }
            finally
            {
                _logger.InfoFormat("application completed in {0}ms",timer.ElapsedMilliseconds);
            }
            
            Console.Write("press a key to terminate");
            Console.ReadKey();
        }
    }
}

 

library - Worker.cs

using System;


namespace library
{
    public class Worker
    {
        private readonly log4net.ILog _logger = log4net.LogManager.GetLogger(typeof(Worker));
        public void doWork()
        {
            _logger.Info("Worker starting...");
            throw new Exception("This is an exception that occurred to show how to log your exceptions with log4Net");
        }
    }
}

 

Result when running the application:

 

Console logs:

The log file in the application\logs directory:

What information is available in the logs

From the log output we could gather a lot of information:

  1. The type of log message. It is up to the developer (you!) to decide which log level to use when logging a message. In the configuration file (root: level value) you can set which levels to output: ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF. For example, if you set INFO in the configuration file, you will not see debug messages in the log output.
  2. The precise time of all log events
  3. The log message
  4. The object/class that generated the log. This is really helpful to know where to look in the code to find the entity that logged the message.
  5. The thread id, very useful when debugging multi-threaded applications. FYI: I never had any issue with log4net used by multiple threads.
  6. The full stack trace in case of an error, probably the most useful thing. log4net takes the Exception object directly, as you can see on line 25 of Program.cs, so this is really easy to do and really useful.

 

Conclusion

I hope this helps you configure logging in your applications. I use log4net all the time in all my applications and strongly recommend it. It is mature and highly configurable: you can configure it to write logs to many different places, change the directory or the log file name from the configuration file, and so on. I have attached the solution to this post in case you want to see it; you'll need to restore the NuGet packages for it to work.

 

I am also looking forward to your comments!

 

Talk to you soon

Introduction

 

In this blog post, we will change the JavaScript application logic to make PI Web API Batch requests instead of using jQuery promise chains. The web app we are going to use was created from the source files of the Cordova application developed for the “Develop Cross-Platform mobile apps using PI Web API” TechCon 2016 lab. As we are now developing a web app rather than a native app, all Cordova plugins were removed from the project.

With the PI Web API 2016 release, BATCH is now part of PI Web API Core, which means it is no longer in CTP.

 

What is PI Web API Batch?

 

PI Web API Batch allows you to execute a batch of HTTP requests with a single request by making a POST against \piwebapi\batch. The content of the request is an array of objects, each representing an HTTP request.

Those inner HTTP requests can be totally independent, or the response of one request can be used as an input to another, as we will see in the following sections.
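To make this concrete, here is a sketch of what a two-request batch payload can look like (the server URL, AF server name and database path are hypothetical placeholders). Request "2" consumes the WebId returned by request "1" through the ParentIds and Parameters fields:

```javascript
// Hypothetical two-request batch payload: request "2" depends on request "1".
var base = "https://myserver/piwebapi/";   // assumed PI Web API endpoint

var batch = {
  "1": {
    "Method": "GET",
    // resolves an AF database to its WebId
    "Resource": base + "assetdatabases?path=\\\\AFSERVER\\Database"
  },
  "2": {
    "Method": "GET",
    // {0} is filled in from the parent response before execution
    "Resource": base + "assetdatabases/{0}/elements",
    "ParentIds": ["1"],
    "Parameters": ["$.1.Content.WebId"]
  }
};

// the whole batch is sent as the JSON body of a single POST to /batch
var body = JSON.stringify(batch);
```

The Parameters entry is a selector into the parent request's response; PI Web API substitutes its value into the {0} placeholder of the Resource before executing the dependent request, which is exactly the pattern used by getDatabaseWebIdAndElementsRoot later in this post.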

 

Please refer to the PI Web API Programming Reference for more information about BATCH.

 

How does the application work?

 

This application shows a Google Street View display. The image the current user is viewing can be saved to the PI System along with the following live information:

  • Latitude
  • Longitude
  • Heading
  • Pitch
  • Zoom
  • Timestamp

 

The web application sends updated values to a PI System in the cloud through a public PI Web API endpoint. This will only occur if the user provides a username. For each new user, a new element with 6 attributes is created at the root of the AF database, and 6 PI Points are created on the PI Data Archive.

 

When the application starts, these are the main HTTP requests that take place:

  • Get the AF database WebId
  • Get all root elements (usernames)
  • Check if the user already exists on the database
  • If it does exist, then it:
    • Creates a new event frame
    • Gets all PI Points WebIds
  • If it does not exist, then it:
    • Creates a new element on the AF database root
    • Creates 6 PI Points on the PI Data Archive
    • Creates a new event frame
    • Gets all PI Points WebIds

 

The updated web app can be downloaded from our GitHub repository, which includes both Visual Studio projects: the old one using jQuery promise chains and the new one using PI Web API Batch.

 

Let's compare the Start method from both projects:

 

            getDatabaseWebId().then(getRootElements).then(function (usersDataResponse) {
                if (usersDataResponse == null) {
                    return null;
                }
                var foundUser = false;
                usersData = usersDataResponse.Items;
                for (var i = 0; i < usersData.length; i++) {
                    if (usersData[i].Name.toLowerCase() == currentUserName.toLowerCase()) {
                        foundUser = true;
                    }
                }
                console.log('FoundUser: ' + foundUser);
                if (foundUser == false) {
                    //Exercise 4: Writing promise chains 
                    return createNewElement().then(getDataArchiveWebId).then(createUserPoints).done(function (r1, r2, r3, r4, r5, r6) {
                        showMessage('User was created!');
                        return getRootElements(null).then(getUserAttributes).then(saveUserAttributes).then(createNewEventFrame);
                    });
                }
                else {
                    showMessage('User was found!');
                    return getRootElements(null).then(getUserAttributes).then(saveUserAttributes).then(createNewEventFrame);
                }
            });

Start method using jQuery promises chains

 

     getDatabaseWebIdAndElementsRoot().then(function (data) {
                if (data[2].Status != 200) {
                    return null;
                }
                afDatabaseWebId = data[1].Content.WebId;
                var foundUser = false;
                usersData = data[2].Content.Items;
                for (var i = 0; i < usersData.length; i++) {
                    if (usersData[i].Name.toLowerCase() == currentUserName.toLowerCase()) {
                        foundUser = true;
                    }
                }
                console.log('FoundUser: ' + foundUser);
                if (foundUser == false) {
                    showMessage('User was created!');
                    return continueUserNotFound().then(saveUserAttributes);
                }
                else {
                    showMessage('User was found!');
                    return continueUserFound().then(saveUserAttributes);
                }
            });

Start method using PI Web API Batch

 

Three main parts of the Start function were updated:

 

  •   getDatabaseWebIdAndElementsRoot() replaces  getDatabaseWebId().then(getRootElements).
  •   continueUserNotFound() replaces all the jQuery promise chains in case the user is not found.
  •   continueUserFound() replaces all the jQuery promise chains in case the user is found.

 

 

Change 1 - Getting the AF Database WebId and Root Elements with BATCH

 

The functions getDatabaseWebId and getRootElements of the old project, which call processJsonContent internally, are defined below:

 


    function processJsonContent(type, data, url) {
        return $.ajax({
            type: type,
            headers: {
                "Content-Type": "application/json"
            },
            url: url,
            cache: false,
            data: data,
            async: true,
            username: 'pilabuser',
            password: 'PIWebAPI2015',
            crossDomain: true,
            xhrFields: {
                withCredentials: true
            }
        });
    }


    var getDatabaseWebId = function (data, textStatus, jqXHR) {
        console.log('PI Web API: Getting AF Database WebId....');
        var url = base_service_url + "assetdatabases?path=\\\\" + afServerName + "\\" + afDatabaseName;
        return processJsonContent('GET', null, url);
    }


    var getRootElements = function (data, textStatus, jqXHR) {
        console.log('PI Web API: Getting all users data....');
        if (data != null) {
            afDatabaseWebId = data.WebId;
        }
        var url = base_service_url + "assetdatabases/" + afDatabaseWebId + "/elements";
        return processJsonContent('GET', null, url);
    }

 

 

The new project is changed to:

 

    function processBatchRequest(batchData) {
        return $.ajax({
            type: 'POST',
            headers: {
                "Content-Type": "application/json"
            },
            url: base_service_url + 'batch',
            cache: false,
            data: JSON.stringify(batchData),
            async: true,
            username: 'pilabuser',
            password: 'PIWebAPI2015',
            crossDomain: true,
            xhrFields: {
                withCredentials: true
            }
        });
    }


    var getDatabaseWebId = function () {
        console.log('PI Web API: Getting AF Database WebId....');
        return {
            "Method": "GET",
            "Resource": base_service_url + "assetdatabases?path=\\\\" + afServerName + "\\" + afDatabaseName,
            "Headers": {
                "Cache-Control": "no-cache"
            }
        };
    }






    var getRootElements = function (parentId, parameter) {
        console.log('PI Web API: Getting root elements....');
        return {
            "Method": "GET",
            "Resource": base_service_url + "assetdatabases/{0}/elements",
            "Parameters": parameter,
            "ParentIds": parentId,
        };
    }




    var getDatabaseWebIdAndElementsRoot = function () {
        var batchData = {};
        batchData["1"] = getDatabaseWebId();
        batchData["2"] = getRootElements(["1"], ["$.1.Content.WebId"]);
        return processBatchRequest(batchData);
    }











 

 

The new function, getDatabaseWebIdAndElementsRoot, builds an object containing two sub-requests, each one representing an HTTP request: the first comes from getDatabaseWebId and the second from getRootElements. This object is sent to the new processBatchRequest method, which makes a single POST HTTP request against /batch. Note that getRootElements now takes two inputs, parentId and parameter. They exist because getting the root elements of the AF database requires its WebId, which is only available in the response of the first sub-request: the ParentIds entry tells PI Web API to run request "2" only after request "1" completes, and the Parameters entry ("$.1.Content.WebId") substitutes the WebId from the first response into the {0} placeholder of the second resource URL.
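As a minimal sketch, this is roughly the payload that getDatabaseWebIdAndElementsRoot serializes and POSTs to the batch endpoint. The base URL, server name, and database name below are placeholders, not the lab's real values:

```javascript
// Placeholders standing in for the real configuration values.
var base_service_url = "https://myserver/piwebapi/";
var afServerName = "MYAFSERVER";
var afDatabaseName = "MyDatabase";

var batchData = {
    // Sub-request "1": resolve the database path to a WebId.
    "1": {
        "Method": "GET",
        "Resource": base_service_url + "assetdatabases?path=\\\\" + afServerName + "\\" + afDatabaseName
    },
    // Sub-request "2": runs only after "1" (ParentIds), and the
    // Parameters selector injects the WebId from response "1" into
    // the {0} placeholder of the resource URL.
    "2": {
        "Method": "GET",
        "Resource": base_service_url + "assetdatabases/{0}/elements",
        "Parameters": ["$.1.Content.WebId"],
        "ParentIds": ["1"]
    }
};

// The whole object is serialized and sent to /batch in one round trip.
var body = JSON.stringify(batchData);
```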

 

Change 2 - Case when the user is found

 

Let's first take a look at the promise chain used when the user is found:

 

                else {
                    showMessage('User was found!');
                    return getRootElements(null).then(getUserAttributes).then(saveUserAttributes).then(createNewEventFrame);
                }

 

The createNewEventFrame, getUserAttributes and saveUserAttributes functions of the old project are defined below:

 




    var createNewEventFrame = function () {
        console.log('PI Web API: Creating a new AF Event Frame....');
        var url = base_service_url + "assetdatabases/" + afDatabaseWebId + "/eventframes";
        var data = new Object();
        var today = new Date();
        currentEventFrameName = currentUserName + " - " + today.toString().substring(0, 24);        
        data.Name = currentEventFrameName;
        data.Description = "Event Frame from user " + currentUserName;
        data.TemplateName = "GoogleStreetViewActivity";
        data.StartTime = "*";
        data.EndTime = "*";
        var jsonString = JSON.stringify(data);
        logged = true;
        return processJsonContent('POST', jsonString, url);
    }



  var getUserAttributes = function (data, textStatus, jqXHR) {
        console.log('PI Web API: Retrieving user attributes....');
        if (data == null) {
            return null;
        }
        usersData = data.Items;
        var url = null;
        for (var i = 0; i < usersData.length; i++) {
            if (usersData[i].Name.toLowerCase() == currentUserName.toLowerCase()) {
                url = usersData[i].Links.Attributes;
            }
        }


        return processJsonContent('GET', null, url);
    }


    var saveUserAttributes = function (data, textStatus, jqXHR) {
        userAttributesWebIds = {};
        for (var i = 0; i < data.Items.length; i++) {
            userAttributesWebIds[data.Items[i].Name] = data.Items[i].WebId;
        }
        console.log('PI Web API: Saving user attributes....');
    }


 

The new project is changed to:

 

    
    var createNewEventFrame = function () {
        console.log('PI Web API: Creating a new AF Event Frame....');
        var data = new Object();
        var today = new Date();
        currentEventFrameName = currentUserName + " - " + today.toString().substring(0, 24);
        data.Name = currentEventFrameName;
        data.Description = "Event Frame from user " + currentUserName;
        data.TemplateName = "GoogleStreetViewActivity";
        data.StartTime = "*";
        data.EndTime = "*";
        logged = true;
        return {
            "Method": "POST",
            "Resource": base_service_url + "assetdatabases/" + afDatabaseWebId + "/eventframes",
            "Content": JSON.stringify(data),
        };
    }




    var getUserElement = function (parentId) {
        console.log('PI Web API: Retrieving current element name....');
        return {
            "Method": "GET",
            "Resource": base_service_url + "elements?path=\\\\" + afServerName + "\\" + afDatabaseName + "\\" + currentUserName,
            "ParentIds": parentId
        };
    }


    var getUserAttributes = function (resource, parentIds) {
        console.log('PI Web API: Retrieving user attributes....');
        return {
            "Method": "GET",
            "Resource": resource,
            "ParentIds": parentIds
        };
    }


  var continueUserFound = function () {
        var batchData = {};
        batchData["2"] = getUserElement([]);
        batchData["1"] = getUserAttributes("$.2.Content.Links.Attributes", ["2"]);
        batchData["3"] = createNewEventFrame();
        return processBatchRequest(batchData);
    }

 

The migration logic is very similar to the previous change. The continueUserFound method calls three functions to build the sub-request objects for the batch request. In order to build the request from getUserAttributes(), the response from getUserElement() is needed, which is why sub-request "1" lists "2" in its ParentIds.
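The batch response maps each sub-request key back to an object carrying Status, Headers, and Content. As a rough sketch of how the attribute WebIds could be saved from such a response, here is the old saveUserAttributes loop applied to mocked content (the names and WebIds below are illustrative only, not real PI Web API data):

```javascript
// Mock of a batch response body: one entry per sub-request key.
var batchResponse = {
    "1": { "Status": 200, "Content": { "Items": [
        { "Name": "Latitude",  "WebId": "A1" },
        { "Name": "Longitude", "WebId": "A2" }
    ]}},
    "2": { "Status": 200, "Content": { "WebId": "E1", "Name": "pilabuser" } },
    "3": { "Status": 201, "Content": {} }
};

// Save each attribute's WebId keyed by its name, as saveUserAttributes
// did in the promise-chain version.
var userAttributesWebIds = {};
var items = batchResponse["1"].Content.Items;
for (var i = 0; i < items.length; i++) {
    userAttributesWebIds[items[i].Name] = items[i].WebId;
}
```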

 

 

Change 3 - Case when the user is NOT found

 

Let's first take a look at the promise chain used when the user is not found:

 

                    return createNewElement().then(getDataArchiveWebId).then(createUserPoints).done(function (r1, r2, r3, r4, r5, r6) {
                        showMessage('User was created!');
                        return getRootElements(null).then(getUserAttributes).then(saveUserAttributes).then(createNewEventFrame);
                    });

 

The createNewElement, getDataArchiveWebId, and createUserPoints functions of the old project are defined below:

 

 








    var createNewElement = function () {
        console.log('PI Web API: Creating a new user element....');
        //Exercise 2: Creating new elements
        var url = base_service_url + "assetdatabases/" + afDatabaseWebId + "/elements";
        var data = new Object();
        data.Name = currentUserName;
        data.Description = "Participant of the hands-on-lab";
        data.TemplateName = "UserTemplate";
        var jsonString = JSON.stringify(data);
        return processJsonContent('POST', jsonString, url);
    }



    var getDataArchiveWebId = function (data, textStatus, jqXHR) {
        console.log('PI Web API: Getting PI Data Archive WebId....');
        var url = base_service_url + "dataservers?path=\\\\" + piDataArchiveName;
        return processJsonContent('GET', null, url);


    }


    var createUserPoints = function (data, textStatus, jqXHR) {
        console.log('PI Web API: Creating user PI Points....');
        //Exercise 3: create user PI Points
        var createUserPoint = function (attributeName, pointType) {
            var url = base_service_url + "dataservers/" + piDataArchiveWebId + "/points";
            var data = new Object();
            data.Name = "CrossPlatformLab" + "." + currentUserName + "." + attributeName;
            data.PointClass = "classic";
            data.PointType = pointType;
            data.Future = false;
            var jsonString = JSON.stringify(data);
            return processJsonContent('POST', jsonString, url);
        }
        piDataArchiveWebId = data.WebId;
        var pt1 = createUserPoint('Heading', 'Float64');
        var pt2 = createUserPoint("Latitude", 'Float64');
        var pt3 = createUserPoint("Longitude", 'Float64');
        var pt4 = createUserPoint("Network Connection", 'String');
        var pt5 = createUserPoint("Pitch", 'Float64');
        var pt6 = createUserPoint("Zoom", 'Float64');
        return $.when(pt1, pt2, pt3, pt4, pt5, pt6);
    }

 

The new project is changed to:

 


    var createUserPoint = function (attributeName, pointType, parentId, parameter) {
        var data = new Object();
        data.Name = "CrossPlatformLab" + "." + currentUserName + "." + attributeName;
        data.PointClass = "classic";
        data.PointType = pointType;
        data.Future = false;


        return {
            "Method": "POST",
            "Resource": base_service_url + "dataservers/{0}/points",
            "Content": JSON.stringify(data),
            "Parameters": [parameter],
            "ParentIds": [parentId],
        };
    }


    var getDataArchiveWebId = function () {
        console.log('PI Web API: Getting PI Data Archive WebId....');
        return {
            "Method": "GET",
            "Resource": base_service_url + "dataservers?path=\\\\" + piDataArchiveName,
            "Headers": {
                "Cache-Control": "no-cache"
            }
        };
    }



    var createNewElement = function () {
        console.log('PI Web API: Creating a new user element....');
        var data = new Object();
        data.Name = currentUserName;
        data.Description = "Participant of the hands-on-lab";
        data.TemplateName = "UserTemplate";
        return {
            "Method": "POST",
            "Resource": base_service_url + "assetdatabases/" + afDatabaseWebId + "/elements",
            "Content": JSON.stringify(data),
        };
    }



    var createNewEventFrame = function () {
        console.log('PI Web API: Creating a new AF Event Frame....');
        var data = new Object();
        var today = new Date();
        currentEventFrameName = currentUserName + " - " + today.toString().substring(0, 24);
        data.Name = currentEventFrameName;
        data.Description = "Event Frame from user " + currentUserName;
        data.TemplateName = "GoogleStreetViewActivity";
        data.StartTime = "*";
        data.EndTime = "*";
        logged = true;
        return {
            "Method": "POST",
            "Resource": base_service_url + "assetdatabases/" + afDatabaseWebId + "/eventframes",
            "Content": JSON.stringify(data),
        };
    }

  var continueUserNotFound = function () {
        var batchData = {};
        batchData["1"] = getUserAttributes("$.9.Content.Links.Attributes", ["9"]);
        batchData["2"] = getDataArchiveWebId();
        batchData["3"] = createUserPoint('Heading', 'Float64', "2", "$.2.Content.WebId");
        batchData["4"] = createUserPoint("Latitude", 'Float64', "2", "$.2.Content.WebId");
        batchData["5"] = createUserPoint("Longitude", 'Float64', "2", "$.2.Content.WebId");
        batchData["6"] = createUserPoint("Network Connection", 'String', "2", "$.2.Content.WebId");
        batchData["7"] = createUserPoint("Pitch", 'Float64', "2", "$.2.Content.WebId");
        batchData["8"] = createUserPoint("Zoom", 'Float64', "2", "$.2.Content.WebId");
        batchData["9"] = getUserElement(["10"]);
        batchData["10"] = createNewElement();
        batchData["11"] = createNewEventFrame();
        return processBatchRequest(batchData);
    }



 

In the old project, we used the jQuery.when function to create the six PI Points: it fires six HTTP requests simultaneously, since each request is independent of the others. In the new project, those six requests are simply merged with the other sub-requests in the batch object; because each one depends only on sub-request "2" (the PI Data Archive WebId), they can all run as soon as that request completes.
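Since a single batch response now carries the status of every sub-request, one simple way to replace the old .done() check is a small helper that verifies each sub-status is in the 2xx range. This helper and the mock response are a sketch, not part of the original project:

```javascript
// Hypothetical helper: returns true only if every sub-request in a
// batch response succeeded with a 2xx status code.
function allSucceeded(batchResponse) {
    for (var key in batchResponse) {
        var status = batchResponse[key].Status;
        if (status < 200 || status >= 300) {
            return false;
        }
    }
    return true;
}

// Mock response for the six PI Point creations (illustrative only):
// GETs typically return 200, successful POSTs 201.
var mockResponse = {
    "2": { "Status": 200 },
    "3": { "Status": 201 }, "4": { "Status": 201 },
    "5": { "Status": 201 }, "6": { "Status": 201 },
    "7": { "Status": 201 }, "8": { "Status": 201 }
};
```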

 

If something is not clear, just step through the code using the Chrome Developer Tools or post a question here.

 

Conclusion

 

As you can see, migrating your web app's JavaScript functions to use PI Web API Batch is not very difficult. It is just a matter of understanding how it works: which requests are needed, and which ones must finish before others can start. The main benefit is better performance, since BATCH lets you combine multiple HTTP requests into a single round trip to the server.

 

If you have any suggestion or questions please post a comment here.
