Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

Currently, in order to set up an AF Server for testing/development purposes, you have two choices.

 

1. Install SQL Server and AF Server on your local machine

The problem with this method is that there is no isolation from the host operating system. Therefore, you risk the stability of the host computer if something goes wrong. You also can't spin up multiple AF Servers this way.

 

2. Provision a VM and then install SQL Server and AF Server on it

While this method provides isolation, the problem lies in the time it takes to set up, as well as the size of the VM, which includes many unnecessary components.

 

There is a better way!

Today, I will be teaching you how to spin up AF Server instances in less than 1 minute (after performing the initial setup, which might take a bit longer). This is made possible by containerization technology.

 

Requirements

Windows Server build 1709, Windows Server 2016 (Core and with Desktop Experience), or Windows 10 Professional and Enterprise (Anniversary Edition). Ensure that your system is current with Windows Update.

 

Benefits

1. Portability. Easy to transfer containers to other container hosts that meet the prerequisites. No need to do tedious migrations.

2. Side by side versioning. Ability to run multiple versions of AF Server on the same container host for compatibility testing and debugging purposes.

3. Speed. Very fast to deploy.

4. Resource efficiency and density. More AF Servers can run on the same bare metal machine compared to virtualization.

5. Isolation. If you no longer need the AF Server, you can remove it easily. It won't leave any temporary or configuration files on your container host.

6. Can be used with container orchestration systems such as Swarm or Service Fabric.

 

Set up

Install Docker

For Windows 10,

You can install Docker for Windows. Please follow the instructions here.

 

For Windows Server 2016,

You can use the OneGet provider PowerShell module. Open an elevated PowerShell session and run the commands below.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

 

Install AF Server image

Run the following commands at a console. When prompted for a username and password during login, please contact me (elee@osisoft.com) for them. Currently, this image is only offered to users who already have a PI Server license or are PI Developers Club members (try it now for free!). You will have to log in before doing the pull; otherwise, the pull will be unsuccessful.

docker login
docker pull elee3/afserver:18x
docker logout

Remember to check the digest of the image to make sure it has not been tampered with.

18x: digest: sha256:4ce64e6dafa4fe93d3cf2ef4fb3a38ddadd4493e10e7a68efd809b92d946a77f

 

Deployment

Now that the setup is complete, you can proceed to running the container image. To do so, use the following command.

Replace <DNS hostname> and <containername> with names of your own choosing. Remember to pick a DNS hostname that is unique in your domain.

docker run -d --hostname <DNS hostname> --name <containername> elee3/afserver:18x

 

You can now open up PI System Explorer on your local machine and connect to the AF Server by specifying the DNS Hostname that you chose earlier. When prompted for credentials, use

User name: afadmin

Password: qwert123!

Check the box to remember the credentials so that you won't have to enter them every time.

 

You can choose to rename the AF Server if you wish.

 

And you are done! Enjoy the new AF Server instance that you have created!

 

Using with AF SDK

To connect to the AF Server from code using AF SDK, the following Connect overload can be utilized with the same credentials as above.

PISystem.Connect Method (NetworkCredential)
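
As an illustration, here is a minimal sketch of that overload in use. The hostname "myafserver" is a placeholder for the DNS hostname you chose in the docker run step, and the sketch assumes the AF Server has already been added to your known servers table (for example via PI System Explorer as described above).

using System;
using System.Net;
using OSIsoft.AF;

class ContainerConnectExample
{
    static void Main()
    {
        // "myafserver" is a placeholder for the DNS hostname chosen in the docker run step.
        PISystems piSystems = new PISystems();
        PISystem piSystem = piSystems["myafserver"];

        // Connect with the local account baked into the container image.
        NetworkCredential credential = new NetworkCredential("afadmin", "qwert123!");
        piSystem.Connect(credential);

        Console.WriteLine("Connected to AF Server: " + piSystem.Name);
    }
}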

 

Multiple AF Servers

In order to spin up another AF Server instance, follow the steps above. Once the new container is running, you have to change the ServerID. You can do this via

docker exec -i <containername> cmd /c "cd %pihome64%\af&afdiag.exe /cid:<guid>"

You can generate a new guid using this.

 

Update: You do not need to manually generate a new Server ID anymore. The image does it automatically for you.

 

Destroy AF Server

If you no longer need the AF Server, you can destroy it using

docker stop <containername>
docker rm <containername>

 

Limitations

This example uses a public SQL Express container image, which is currently not available for use in a production environment. (Since changed to a SQL Express image that I built myself.)

This example relies on local accounts for authentication. Refer to the following article if you want to use Kerberos. Spin up AF Server container (Kerberos enabled)

 

New updates (14 Feb 2018)

1. 2017R2 tag is now available

2. Image has been updated with the ability to import an existing AF Server backup in the form of a PIFD.bak file. To do this, run

docker run -di --hostname <DNS hostname> --name <containername> -v <path to folder containing PIFD.bak>:c:\db elee3/afserver:2017R2 migrate.bat

 

New updates (30 May 2018)

1. Local account is no longer in the administrators group. Only a mapping to an AF Identity is done (better security)

 

New updates (18 Jun 2018)

1. Added reminder to check digest of the image to make sure image has not been tampered with

 

New updates (2 Jul 2018)

1. Changed tag from 2017R2 to 17R2

2. Removed telemetry

 

New updates (13 Jul 2018)

1. Changes to facilitate upgrading to 2018 container

 

New updates (4 Sep 2018)

1. Updated to use tag 18x which comes with health check and some performance improvements

 

New updates (26 Oct 2018)

1. New tag 18s comes with SQL data files persisted externally

Motivation

 

Recently, during PI World 2018, I was surprised by the number of people asking me if it's possible to list all PI Points and AF Attributes used in PI Vision's displays. The good news is that it's possible to do it, the bad news is that it's not that straightforward.

 

I will show two different ways to achieve this: the first uses PowerShell and the second queries PI Vision's database directly. Warning: we strongly recommend that you don't mess around with PI Vision's database unless you know what you are doing. If you have questions, please contact tech support or leave a comment on this post.

 

Powershell

 

The PowerShell method is the simplest and safest. In order to understand how it works, let's first do a quick recap of PI's architecture.

 

In a very high-level description, PI uses a producer-consumer pattern: multiple producers (interfaces, connectors, AF SDK writes, etc.) send data to a central repository, while consumers subscribe to updates on a set of PI Points. Whenever new data comes in, the Update Manager Subsystem notifies subscribers that fresh data is available.

 

If you open your PI System Management Tools and navigate to Operation -> Update Manager, you will see a list of all processes consuming and producing data.

 

[Screenshot: Update Manager in PI System Management Tools]

 

Now, if you filter by *w3wp* (the name of the IIS process), you can drill down into the data and get the list of tags being consumed by that specific signup.

 

[Screenshot: Update Manager filtered by *w3wp*]

 

But hey, this is PI DevClub! What about doing it programmatically? Unfortunately, the Update Manager information is not available in AF SDK, but we have the PowerShell tools to help us with this task:

 

$conn = Connect-PIDataArchive -PIDataArchiveMachineName "emiller-vm2";
$pointIds = @();
While ($true)
{
     $consumers = Get-PIUpdateManagerSignupStatistics -Connection $conn -ConsumerName "*w3wp*";
     $consumers | ForEach-Object -Process {
          $pointId = $_.Qualifier;
          if ($pointIds -notcontains $pointId -And $pointId -ne 0)
          {
               $pointIds += $pointId;
               $piPoint = Get-PIPoint -Connection $conn -WhereClause "pointid:=$pointId" -AllAttributes;
               $printObj = New-Object PSObject;
               $printObj | Add-Member Name $piPoint.Point.Name;
               $printObj | Add-Member Description $piPoint.Attributes.descriptor;
               $printObj | Add-Member Changer $piPoint.Attributes.changer;
               Write-Output $printObj;
          }
     }
}

 

If you run this script, it will keep listening for every call to your PI Server originating from IIS:

 

[Screenshot: PowerShell output of the script]

 

By now you may have noticed the two problems of this method: (1) it only shows a new entry if somebody requests data for a given PI Point (i.e., opens a display), and (2) we are just listing tags and totally ignoring AF Attributes. A workaround for the first one is to leave the script running for a while and pipe the result to a text file.

 

PI Vision's Database

 

Let me say this once again before we proceed: we strongly recommend users not to touch PI Vision's database. That said...

 

There are two ways to extract this information from the database. The first one is dead simple, but only works if you don't have displays imported from ProcessBook:

 

SELECT 
     E.Name [DisplayName],
     E.Owner [DisplayOwner],
     D.FullDatasource [Path]
FROM 
     BrowseElements E,
     DisplayDatasources D
WHERE
     E.ID = D.DisplayID

 

The result of this select is a table with all AF Attributes and PI Points used by PI Vision.

 

[Screenshot: query results in SQL Server Management Studio]

 

This may work for you, but one person who approached me during PI World also asked whether it was possible to list not only the data sources but also the symbols using them. Also, most of their displays were imported from ProcessBook. And that's when things get tricky:

 

SELECT
     E.Name as [DisplayName],
     E.Owner as [DisplayOwner],
     S.c.value('../@Id', 'nvarchar(128)') as [Symbol],
     D.c.value('local-name(.)', 'nvarchar(2)') as [Source],
     CASE -- Constructing the path according to the data source
          WHEN D.c.value('local-name(.)', 'nvarchar(2)') = 'PI' -- The data comes from a PI Point
          THEN '\\' +
               CASE WHEN CHARINDEX('?',D.c.value('@Node', 'nvarchar(128)')) > 0 -- Here we check if the server ID is present
               THEN LEFT(D.c.value('@Node', 'nvarchar(128)'), CHARINDEX('?',D.c.value('@Node', 'nvarchar(128)'))-1)
               ELSE D.c.value('@Node', 'nvarchar(128)')
               END 
               + '\' + 
               CASE WHEN CHARINDEX('?',T.c.value('@Name', 'nvarchar(128)')) > 0 -- Here we check if the point ID is present
               THEN LEFT(T.c.value('@Name', 'nvarchar(128)'), CHARINDEX('?',T.c.value('@Name', 'nvarchar(128)'))-1)
               ELSE T.c.value('@Name', 'nvarchar(128)')
               END
          WHEN D.c.value('local-name(.)', 'nvarchar(2)') = 'AF' -- The data comes from an AF attribute
               THEN '\\' + D.c.value('@Node', 'nvarchar(128)')  + '\' + D.c.value('@Db', 'nvarchar(256)') +  '\' 
               + CASE 
               WHEN T.c.value('@ElementPath', 'nvarchar(128)') IS NOT NULL 
               THEN T.c.value('@ElementPath', 'nvarchar(128)') + '|' + T.c.value('@Name', 'nvarchar(128)')
               ELSE O.c.value('@ElementPath', 'nvarchar(128)') + T.c.value('@Name', 'nvarchar(128)')
               END
     END as [Path]
FROM
     BaseDisplays B
     CROSS APPLY B.COG.nodes('/*:COG/*:Datasources/*/*') T(c)
     CROSS APPLY B.COG.nodes('/*:COG/*:Databases/*') D(c)
     CROSS APPLY B.COG.nodes('/*:COG/*:Symbols/*:Symbol/*') S(c)
     LEFT JOIN BaseDisplays B2 OUTER APPLY B2.COG.nodes('/*:COG/*:Contexts/*:AFAttributeParameter') O(c) 
          ON T.c.value('../@Id', 'nvarchar(128)') = O.c.value('@Datasource', 'nvarchar(128)'),
     BrowseElements E
WHERE
     E.ID = B.BrowseElementID
     AND E.DeleteFlag = 'N'
     AND D.c.value('@Id', 'nvarchar(128)') = T.c.value('../@DbRef', 'nvarchar(128)')
     AND T.c.value('../@Id', 'nvarchar(128)') = S.c.value('@Ref', 'nvarchar(128)')

 

The result is a little more comprehensive than the previous script:

 

[Screenshot: query results in SQL Server Management Studio]

 

These queries were made for the latest version available (2017 R2 Update 1) and are not guaranteed to be future-proof. It's known that PI Vision 2018 will use a different data model, so, if needed, I will revisit this post after the launch of the 2018 version.

 

I'm not going to dig into the specifics of this script, as it has a lot of T-SQL going on to deal with the XML information that is stored in the database. If you have specific questions about how it works, leave a comment. Also keep in mind that this query is a little expensive, so you should consider running it during off-peak hours or on a dev database.

 

Conclusion

 

Listing all tags and attributes used by PI Vision is a valid use case, and most PI admins will agree that it helps them understand their current tag usage. We have been increasing our efforts on system usage awareness and, with this post, I hope to help with this goal.

The recording for this Live Coding presentation may be found at: Getting the most out of AFSearch

 

Associated code may be found on this GitHub repository:

GitHub - Rick-at-OSIsoft/AF-SDK-TechCon-2018-AFSearch-LiveCoding: Presentation also titled "Getting the most out of AFSe…

 

GitHub also includes a PDF file of the slide deck.  Topics include:

 

Demo 1: What's new in AF SDK 2.9.5 and 2.10

Demo 2: Breaking it down line-by-line to see what's happening on the client versus the server

Demo 3: Caching versus Not Caching

Demo 4: Accurate counts especially when client-side filtering is required

Demo 5: The perils of LINQ

 

Along the way we touch upon AFAttributeSearch, using the skinny, lightweight FindObjectFields with AFEventFrames, why caching is so important to performance, and why you need to properly dispose of the AFSearch objects.

We are excited to present the PI World Innovation Hackathon 2018 winners!

 

DCP Midstream kindly provided a sample of their data with booster and compressor information. Participants were encouraged to create killer applications for DCP Midstream by leveraging the PI System infrastructure.

 

The participants had 23 hours to create an app using any of the following technologies:

  • PI Server 2017 R2
  • PI Web API 2017 R2
  • PI Vision 2017  R2

 

Our judges evaluated each app based on creativity, technical content, potential business impact, data analysis and insight, and UI/UX. Although it is a tough challenge to create an app in 23 hours, six groups were able to finish their apps and present to the judges!

 

Prizes:

1st place: Sonos PLAY 5 + Echo Dot, one-year free subscription to PI Developers Club, and one-time free registration at OSIsoft PI World within the next year

2nd place: Bose QuietComfort 35 (Series II), one-year free subscription to PI Developers Club, and a 50% registration discount at OSIsoft PI World within the next year

3rd place: Vilros Raspberry Pi 3 Retro Arcade Gaming Kit + 5 USB Classic Controllers, and a one-year free subscription to PI Developers Club

 

 

Without further ado, here are the winners!

 

1st place - Oogway

 

The team members were: Kshitiz Jain, Matthew Wallace, Paurav Joshi and Diego Eduardo Mercadal

 

 

[Photo: 1st place team]

 

[Photo: 1st place winners interview]

 

Team Oogway built a dashboard that enables the plants to make more informed decisions on the routes boosters should take. This is done by providing the optimal plant for each booster and showing each booster's current target, while giving the ability to drill down into each booster to understand efficiency and anomalies.

 

The team used the following technologies:

  • Google Maps API embedded into PI Vision
  • PI AF SDK

Here are some screenshots presented by the Oogway team!

 

Compressor Page:

 

 

 

 

 

2nd place - <Insert Obligatory Gas Joke Here>

 

The team members were:  Rob Raesemann, Greg Busch, Lonnie Bowling and David Rodriguez

 

[Photo: 2nd place team]

 

They developed an app that utilizes PI AF event frames to generate leading data sets for further analysis, utilizes the PI Web API client library in Python to analyze and visualize data, uses the PI AF SDK and Angular to provide visualization of and interaction with event frames for end users, and explores novel voice interaction with end users.

 

The team used the following technologies:

  • Python
  • PI AF SDK
  • PI Web API and its client library for Python
  • Angular

 

Here are some screenshots presented by <Insert Obligatory Gas Joke Here>!

 

 

 

 

3rd place - Fantastic Four

 

The team members were: Xihui Zhang, Michael Baldauff, Jonathan Mejeur  and Syed Rehanrawos

 

[Photo: 3rd place team]

 

Team Fantastic Four developed an app which leverages features of PI Vision 4 in order to display IIoT results, from a high-level overview down to multistate reporting.

 

The team used the following technologies:

  • PI Vision 4

 

Here is a screenshot presented by the Fantastic Four!

 

This is the material for the talk "Build PI Applications Faster with PI Web API Client Libraries" held during the PI World SF 2018 Developer Track.

 

GitHub - osimloeff/Building-Apps-Faster-Client-Libs-TechCon2018: Samples used on the "Building applications faster using…

 

It includes:

1. Visual Studio Solution

2. PowerPoint Presentation

 

Click "Download ZIP" on the right-side of the toolbar or fork and clone to your local repository.

 

This talk and source code have 10 examples:

 

  • Example 1 - Authentication
  • Example 2 - Retrieve basic PI Objects: PIElement, PIAttributes
  • Example 3 - Handling exceptions
  • Example 4 - Updating PI Point description
  • Example 5 - Retrieve data in bulk
  • Example 6 - Update data in bulk
  • Example 7 - PI Web API Batch
  • Examples 8 and 9 - Web ID 2.0 client generation
  • Example 10 - PI Web API Channels

 

This is the material for the "Developing Modern Web Apps Using Angular 5 and PI Web API " hands-on-lab held during the PI World SF 2018 Developer Track.

 

GitHub - osimloeff/Modern-Web-Technologies-TechCon2018

 

It includes:

1. Two Visual Studio Code Projects

2. Workbook

3. XML to be imported in PI AF

4. PowerPoint Presentation

 

Click "Download ZIP" on the right-side of the toolbar or fork and clone to your local repository.

 

The screenshot below shows the sample app used in this lab.

 

 

This lab has 5 exercises.

  • Exercises 1 and 2 are about retrieving the cities' geolocation.
  • Exercise 3 is about getting live data and performing calculations.
  • Exercise 4 will explain how to integrate your app with PI Vision 3.
  • Exercise 5 is about PI Web API Batch.

 

The Virtual Machine for this lab is available on the OSIsoft Virtual Learning Environment at the link below:

https://learning.osisoft.com/Labs/LabInformation/

On behalf of Engineering's Technology Enablement team, and in conjunction with Technical Support's Developer Technologies team, it is my privilege to announce this year's Community All-Stars.  We give special recognition to these individuals for their prolific and relentless contributions to the PI Community at large.  We thank them for sharing their experiences and knowledge of all things PI, and in particular for any posts that are developer related.  Let me add that each recipient holds the concept of "community" near and dear, and even if we did not hand out such awards, or if we did not hand out any prize other than simple recognition, these individuals would still post and contribute to our PI Square Community with the same dedication, for the sheer sake of expanding the knowledge base of the community.

 

I ask my fellow PI Geeks to help congratulate our deserving 2018 All-Stars!  It's interesting to note that our 3 Community All-Stars and one Rising Star are from 4 different continents.

 

[Photo: PI All-Stars 2018]

Rick Davin (far left), Paurav Joshi, John Messinger, and Dan Fishman.  Not shown: Roger Palmen.

 

 

PI Developers Community All-Stars 2018

  • Dan Fishman, Exele Information Systems.  Dan is a first time winner as Community All-Star but won twice before as OSIsoft All-Star.
  • John Messinger, Optimate Pty Ltd.  John wins for the 2nd year in a row.
  • Roger Palmen, CGI.  I don't remember a year when Roger wasn't a winner.

 

Last year, as an experiment, I began following a lot of people: anyone who has ever been an All-Star, received an honorable mention, or was even just nominated.  I can tell you that my Inbox was slammed all year by answers from Dan, John, and Roger.  Thanks, guys.  I don't mean that sarcastically.  Each "spam" in my Inbox from you is you helping another community member, so absolutely, sincerely THANKS!

 

PI Developers Club Community Rising Star 2018

We introduced this category last year to recognize the contributions and efforts of well-deserving contributors who steadily add to our content here but just fall short of the elite upper strata of All-Star.  A big difference between Rising Star and All-Star is that a Rising Star may only be awarded once, whereas All-Star may be won year after year (I'm looking at you, Roger).  Anyone who has ever been an All-Star or Rising Star in the past is ineligible for further consideration as a Rising Star.  Last year we handed out 3 of these as a means of playing catch-up to some folks who had been solid contributors over the years.  This year we have only one winner:

 

 

Paurav, along with Dan, John, and Roger, wins:

  1. Recognition in the community
  2. One year free subscription or renewal to PI Developers Club
  3. One-time free registration to a 2019 PI World event (SF or EMEA)
  4. A legal waiver for them to sign
  5. An Amazon gift card worth $400 USD granted after submitting the signed legal waiver back to us

 

And let's not forget there are a lot of fine contributors within OSIsoft.  While it is our job to help customers, we do want to recognize a select few for their efforts the past year.

 

PI Developers Club OSIsoft All-Stars 2018

 

Our OSIsoft recipients win:

  1. Recognition in the community
  2. Amazon gift card worth $200 USD

 

The clock has been reset and everyone's slate is clean

It's never too early to start thinking about 2019 All-Stars.  We monitor the quantity and quality of posts from one PI World SF to the next in order to pick our winners.  The new countdown has begun.  One thing I can tell you is that all of our 2018 winners have the following in common:

  • A high volume of posts
  • Posts of high quality (lots flagged as helpful or marked as answers)
  • A strong desire to help others.

 

Absolutely, positively candidates must contribute to threads other than their own.  In short, you must show strong participation in the community as a whole.  Without a doubt this year's crop of winners have done just that.

 

CONGRATULATIONS TO ALL WINNERS!

OSIsoft has released a new watchface for PI World attendees!
[Photo: OSIsoft PI World watchface]

 

Here's how to get it on your SAMSUNG Gear S2 or S3 watch.   Start the SAMSUNG Gear store on your phone:

[Screenshot: SAMSUNG Gear store]

Now jump to the Watch Face list...

[Screenshot: Watch Face list]

 

And search for "OSIsoft"...

[Screenshot: searching for "OSIsoft"]

 

And now you can install it to your watch!

 

[Screenshot: installing the watch face from Galaxy Apps]

 

The watch face runs on TIZEN, SAMSUNG's newest mobile OS.

 

 

PI World NA is next week and I am quite excited.  Join us the afternoon of Day 3 (Thursday, April 26) from 4:30 - 6:00 PM PDT for the first ever Developers Meet-Up Reception, where we will also be handing out a gaggle of awards for this year's crop of Community All-Stars, the winners of the Innovation Hackathon, and the winners of the recent Visualization Virtual Hackathon!  If you consider yourself a developer, or just wish you were a developer in training, or are a member of PI Developers Club, or at the very least are attending any DevCon presentations or labs at Parc 55, then by all means stop in for a beer or refreshing drink, have a snack, and mingle with fellow developers.  At 5:00 PM we will begin announcing awards, where you may congratulate the winners or commiserate with the other not-so-fortunate participants.  Don't overdo it, because at 7:00 PM the party shifts to the Hilton for the always fun PI Geek Night, with more food and drinks, games, and other hijinks.

 

What:     Developer Meet-Up Reception & Awards

Where:   Parc 55, Cyril Magnin I & Foyer, Level 4

When:    4:30 PM - 6:00 PM (awards start at 5:00 PM)

 

You may see more at the bottom of this link:  PI World for Developers: PI World - OSIsoft Users Conference 2018 (SF)

 

Since it is being held in the late afternoon and is open to a limited segment of PI World attendees, this is not listed as an Evening Event.  However, it does appear on the Day 3 Agenda for the Parc 55 hotel under the Sessions Agenda.

 

We hope to see you there!

C#7 & AF SDK

Posted by rborges, Apr 17, 2018

If you have Visual Studio 2017 and the .NET Framework 4.6.2, you can benefit from new features available in the language specification. Some of them are pure syntactic sugar, yet still useful. The full list can be found in this post, and here I have some examples of how you can use them in your AF SDK code.

 

1) Out variables

We use out variables by declaring them before a function assigns a value to them:

 

AFDataPipe pipe = new AFDataPipe();
var more = false;
pipe.GetUpdateEvents(out more);
if (more) { ... }

 

Now you can inline the variable declaration, so there's no need to declare it explicitly beforehand. The variable will be available throughout your current execution scope:

 

AFDataPipe pipe = new AFDataPipe();
pipe.GetUpdateEvents(out bool more);
if (more) { ... }

 

2) Pattern Matching

C# now has the idea of patterns. These are elements that can test whether an object conforms to a given pattern and extract information out of it. Right now the two most useful applications are is-expressions and switch statements.

 

2.1) Is-expressions

This is very simple and straightforward. What used to be:

 

if (obj.GetType() == typeof(AFDatabase))
{
    var db = (AFDatabase)obj;
    Console.WriteLine(db.Name);
}

 

Can now be simplified to:

 

if (obj is AFDatabase db)
     Console.WriteLine(db.Name);

 

Note that we are only instantiating the db object if it's an AFDatabase.

 

2.2) Switch Statements

So far this is my favorite because it completely changes flow control in C#. For me it is the end of if / else if, as it allows you to test variable types and values on the go with the when keyword:

 

public AFObject GetParent(AFObject obj)
{
    switch (obj)
    {
        case PISystem system:
            return null;
        case AFDatabase database:
            return database.PISystem;
        case AFElement element when element.Parent == null:
            return element.Database;
        case AFElement element when element.Parent != null:
            return element.Parent;
        case AFAttribute attribute when attribute.Parent == null:
            return attribute.Element;
        case AFAttribute attribute when attribute.Parent != null:
            return attribute.Parent;
        default:
            return null;
    }
}

 

The when keyword is a game changer for me. It will make the code simpler and way more readable.

 

3) Tuples

As a Python programmer who has been using tuples for years, I've always felt that C# could benefit from using them more across the language specification. Well, the time is now! This new feature is not available out-of-the-box; you have to install a missing assembly from NuGet:

 

PM> Install-Package System.ValueTuple

 

Once you do it, you not only have access to new ways to deconstruct a tuple but also use them as function returns. Here's an example of a function that returns the value and the timestamp for a given AFAttribute and AFTime:

 

private (double? Value, DateTime? LocalTime) GetValueAndTimestamp(AFAttribute attribute, AFTime time)
{
    var afValue = attribute?.Data.RecordedValue(time, AFRetrievalMode.Auto, null);
    var localTime = afValue?.Timestamp.LocalTime;
    var value = (afValue?.IsGood ?? false) ? afValue.ValueAsDouble() : (double?)null;
    return (value, localTime);
}

 

Then you can use it like this:

 

public void PrintLastTenMinutes(AFAttribute attribute)
{
    // First we get a list with last 10 minutes
    var timestamps = Enumerable.Range(0, 10).Select(m => 
        DateTime.Now.Subtract(TimeSpan.FromMinutes(m))).ToList();
    // Then, for each timestamp ...
    timestamps.ForEach(t => {
        // We get the attribute value
        var (val, time) = GetValueAndTimestamp(attribute, t);
        // and print it
        Console.WriteLine($"Value={val} at {time} local time.");
    });
}

 

Note how we can unwrap the tuple directly into separate variables. It's the end of out variables!

 

4) Local Functions

Have you gone through a situation where a method exists only to support another method and you don't want other team members using it? That happens frequently when you are dealing with recursion or some very specific data transformations. A good example is in our last snippet, where GetValueAndTimestamp is specific to the method that uses it. In this case, we can move the function declaration inside the method that uses it:

 

public void PrintLastTenMinutes(AFAttribute attribute)
{
    // First we get the last 10 minutes
    var timestamps = Enumerable.Range(0, 10).Select(m => 
        DateTime.Now.Subtract(TimeSpan.FromMinutes(m))).ToList();
    // Then, for each timestamp ...
    timestamps.ForEach(t => {
        // We get the attribute value
        var (val, time) = GetValueAndTimestamp(t);
        // and print it
        Console.WriteLine($"Value={val} at {time} local time.");
    });
    // Here we declare our GetValueAndTimestamp
    (double? Value, DateTime? LocalTime) GetValueAndTimestamp(AFTime time)
    {
        var afValue = attribute?.Data.RecordedValue(time, AFRetrievalMode.Auto, null);
        var localTime = afValue?.Timestamp.LocalTime;
        var value = (afValue?.IsGood ?? false) ? afValue.ValueAsDouble() : (double?)null;
        return (value, localTime);
    }
}

 

As you can see, we are declaring GetValueAndTimestamp inside PrintLastTenMinutes and blocking it from external calls. This increases encapsulation and helps you keep your code DRY. Note how the attribute is accessible from within the local function without passing it as a parameter. Just keep in mind that local variables are captured by reference by the local function (more info here).
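
As a tiny, self-contained illustration of that capture behavior (my own sketch, unrelated to the AF SDK):

using System;

class CaptureExample
{
    static void Main()
    {
        int counter = 0;

        // The local function captures 'counter' from the enclosing scope; no parameter is needed.
        void Increment() => counter++;

        Increment();
        Increment();

        // Prints 2: the local function modified the captured variable directly.
        Console.WriteLine(counter);
    }
}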

 

There are other new features, but those are my favorites so far. I hope you put them to good use and, please, let me know if you have a good example of C# 7.0 features.

Note: Development and Testing purposes only. Not supported in production environments.

 

Link to other containerization articles

Containerization Hub

 

Introduction

Today, I will be teaching you a recipe for cooking a PI Data Archive container. Please see my previous blog posts above on how to get Docker installed. We will be mixing the ingredients in a folder to create an image. After we have the image, we can bake the image to obtain a container.

 

Ingredients

For the source code to build the PI Data Archive container, please send an email to technologyenablement@osisoft.com. This is a short-term measure for obtaining the source code while we are revising our public code-sharing policies. Apologies for any inconvenience caused.

1. PI Data Archive 2017 R2A install kit, downloaded from the tech support website (contains the software)

2. Dockerfile (describes the mixing steps to form the image)

3. build.bat (script to start the mixing)

4. generateid.txt (reference commands for changing the Server ID)

5. pilicense.dat (there are many ways to obtain one, such as through your Account Manager/Partner Manager/Academic Team/PI DevClub membership; best to get a demo license that doesn't require an MSF)

6. temp.txt (adds host trust)

7. trust.bat (adds host trust)

 

Recipe

1. Gather all the required ingredients as listed above

2. Extract the Enterprise_X64 folder from the PI Data Archive install kit.

3. Add pilicense.dat into the Enterprise_X64 folder. Overwrite the existing file if needed.

4. Put the other ingredients into the parent folder of the Enterprise_X64 folder.

 

Your folder structure should now look like this.

5. Execute build.bat. The mixing will take less than 5 minutes.

 

6. Once the image is formed, you can now execute

 

docker run -it --hostname <DNS hostname> --name <containername> pida

 

at the command line to bake the image. This will take about 15 seconds.

You will see the IP address of the PI Data Archive listed in the IPv4 Address field. Use this IP address with PI SMT on your container host to connect. Your PI Data Archive container is now ready to be consumed!

 

Hint: Multiple PI Data Archive instances

If you want to bake another instance of the PI Data Archive container (just repeat step 6 with a different hostname and containername), you will need to change the Server ID too. The following procedure can be done in piconfig to accomplish this.

 

@tabl pisys,piserver
@mode ed
@istr name,serverid
hostname,
@quit

 

Replace hostname with the real hostname of your container. For more information, please refer to this KB article: Duplicate PI Server IDs cause PI SDK applications to fail.

 

Limitations

1. This example does not persist data or configuration between runs of the container image.

2. This example relies on PI Data Archive trusts and local accounts for authentication.

3. This example doesn't support VSS backups.

4. This example doesn't support upgrading without re-initialization of data.

 

For Developers

Here is an example to connect with AF SDK and read the current value of a PI Point.
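
A minimal sketch of what that could look like (the server name and point name are placeholders; it assumes the container's hostname or IP address has already been added as a PI Data Archive connection, for example via PI SMT as described above, and that the default demo points such as sinusoid exist):

using System;
using OSIsoft.AF.Asset;
using OSIsoft.AF.PI;

class PIDataArchiveContainerExample
{
    static void Main()
    {
        // "mypidacontainer" is a placeholder for the container's hostname or IP address.
        PIServer server = new PIServers()["mypidacontainer"];
        server.Connect();

        // Read the current value of one of the default demo points.
        PIPoint point = PIPoint.FindPIPoint(server, "sinusoid");
        AFValue value = point.CurrentValue();
        Console.WriteLine("{0}: {1} at {2}", point.Name, value.Value, value.Timestamp);
    }
}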

 

Conclusion

Notice how quick and easy it is to cook a PI Data Archive container. I hope you find it delicious. I like it because I can easily cook up instances for testing and remove them when I do not need them. (Please don't waste food)

 

Update 31 May 2018

Local account is no longer in the administrators group. Only a mapping to a PI Identity.

 

Update 2 Jul 2018

Added 17R2 tag.

 

Update 21 Nov 2018

For the source code to build the PI Data Archive container, please send an email to technologyenablement@osisoft.com. This is a short-term measure for obtaining the source code while we are revising our public code-sharing policies. Apologies for any inconvenience caused.

Important update to OSIsoft's GitHub Policy, November 2018

Motivation

The AF SDK provides two different ways to get live data updates and I recently did some stress tests on AFDataPipes, comparing the observer pattern (GetObserverEvents) with the more traditional GetUpdateEvents. My goal was to determine if there is a preferred implementation.

 

The Performance Test

The setup is simple: listen to 5332 attributes that are each updated at a rate of 20 events per second. This produces over 100k events per second that we should process. I agree that this is not a challenging stress test, but it is on par with what we usually see at customer sites around the globe. The server is very modest, with only 8 GB of RAM and around 1.2 GHz of processor speed (it's an old spare laptop that we have here at the office). Here is the code I used to fetch data using GetUpdateEvents (DON'T USE IT; later in this article, I will show the code I used to test the observer pattern implementation):

 

var dataPipe = new AFDataPipe();
dataPipe.AddSignups(attributes);
CancellationTokenSource source = new CancellationTokenSource();
Task.Run(async () =>
{
    try
    {
        while (!source.IsCancellationRequested)
        {
            // Here we fetch new data
            var updates = dataPipe.GetUpdateEvents();
            foreach (var update in updates)
            {
                Console.WriteLine("{0}, Value {1}, TimeStamp: {2}",
                            update.Value.Attribute.GetPath(),
                            update.Value.Value,
                            update.Value.Timestamp.ToString());
            }
            await Task.Delay(500);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine("Server sent an error: {0}", exception.Message);
    }
}, source.Token);
Console.ReadKey();
source.Cancel();
dataPipe.RemoveSignups(attributes);
dataPipe.Dispose();
Console.WriteLine("Finished");

 

After several hours running the application, I noticed that GetUpdateEvents was falling behind and sometimes leaving some data for the next iteration. This is not a problem per se as, eventually, it would catch up with current data. I suspected that this would happen, but I decided to investigate what was going on. After some bit twiddling, I noticed something weird. Below we have a chart with the memory used by the application: on top, the one that uses GetObserverEvents; on the bottom, the one using GetUpdateEvents. They both use the same amount of memory, but look closely at the number of GC calls executed by the .NET Framework.

 

[Screenshot: memory usage and GC calls in Visual Studio diagnostics]

(using GetObserverEvents)

[Screenshot: memory usage and GC calls in Visual Studio diagnostics]

(Using GetUpdateEvents)

 

Conclusion

Amazingly, this is expected, as we are running the code on a server with a limited amount of memory and GetUpdateEvents has extra code to deal with. Honestly, I was expecting increased memory usage, and the GC kicking in like this was a surprise. Ultimately, the .NET Framework is trying to save my bad code by constantly freeing resources back to the system.

 

Can this be fixed? Absolutely, but it is a waste of time, as you could use this effort to implement the observer pattern (which handles all of this natively) and get some extra benefits:

  • It allows you to decouple your event handling from the code that is responsible for fetching the new data.
  • It is OOP and easier to encapsulate.
  • It makes the flow easier to control.

 

Observer Pattern Implementation for AF SDK

In this GitHub file, you can find the full implementation of a reusable generic class that listens to AF attributes and executes a callback when new data arrives. It's very simple, efficient and has a minimal memory footprint. Let’s break down the most important aspects of it so I can explain what’s going on and show how it works.

 

The class starts by implementing the IObserver interface. This allows it to subscribe itself to receive notifications of incoming data. I also implement IDisposable because the observer pattern can cause memory leaks when you fail to explicitly unsubscribe observers. This is known as the lapsed listener problem and it is a very common cause of memory issues:

 

public class AttributeListener : IObserver<AFDataPipeEvent>, IDisposable

 

Then comes our constructor:

 

public AttributeListener(List<AFAttribute> attributes, Action<AFDataPipeEvent> onNewData, Action<Exception> onError, Action onFinish)
{
      _dataPipe = new AFDataPipe();
     _dataPipe.Subscribe(this);
}

 

Here I expect some controversy. First, because we are moving the subject inside the observer and breaking the traditional structure of the pattern. Second, by using Action callbacks I'm going against the event pattern that Microsoft has been using since the first version of the .NET Framework and which has a lot of fans. It's a matter of preference and there are no performance differences. I personally don't like events because they are too verbose and we usually don't remove the accessor (i.e., unsubscribe with -=), and that can cause memory leaks. By the way, I'm not alone in this preference for reactive design, as even the folks from Redmond think that reactive code is more suitable for the observer pattern. The takeaway here is how we subscribe the class to the AFDataPipe while keeping the data handling oblivious to it, giving us maximum encapsulation and modularity.

 

Now comes the important stuff, the code that does the polling:

 

public void StartListening()
{
    if (Attributes.Count > 0)
    {
        _dataPipe.AddSignups(Attributes);
        Task.Run(async () =>
        {
            while (!_source.IsCancellationRequested)
            {
                _dataPipe.GetObserverEvents();
                await Task.Delay(500);
            }
        }, _source.Token);
    }
}

 

There is not much to say about this code.  It starts a background thread with a cancellable loop that polls for new data every 500 milliseconds. The await operator (together with the async modifier) allows our anonymous function to run fully asynchronously. Additionally, note how the cancellation token is used twice: as a regular token for the thread created by Task.Run(), but also as a loop breaker, ensuring that there will be no more calls to the server. To see how the cancellation is handled, take a look at the StopListening method of the class.

 

When a new DataPipeEvent arrives, the AF SDK calls the OnNext method of the IObserver. In our case it's simple code that only executes the callback provided to the constructor:

 

public void OnNext(AFDataPipeEvent pipeEvent)
{
     _onNewDataCallBack?.Invoke(pipeEvent);
}

 

Caveat lector: this is an oversimplified version of the actual implementation. In the final version of the class, the IObserver implementation actually pipes data to a BufferBlock that fires your Action whenever a new AFDataPipeEvent comes in. I'm using a producer-consumer pattern based on Microsoft's Dataflow library.
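
To give a rough idea of that producer-consumer wiring, here is a simplified sketch of my own, with plain integers standing in for AFDataPipeEvents (not the actual class code):

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class BufferBlockSketch
{
    static async Task Main()
    {
        var buffer = new BufferBlock<int>();
        Action<int> callback = item => Console.WriteLine("Got {0}", item);

        // Consumer: drains the buffer and invokes the callback for each item.
        var consumer = Task.Run(async () =>
        {
            while (await buffer.OutputAvailableAsync())
            {
                callback(buffer.Receive());
            }
        });

        // Producer: in the real class, OnNext posts each incoming AFDataPipeEvent here.
        for (int i = 0; i < 5; i++)
        {
            buffer.Post(i);
        }
        buffer.Complete();

        await consumer;
    }
}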

 

Finally, here’s an example of how this class should be used. The full code is available in this GitHub repo:

 

static void Main(string[] args)
{
    // We start by getting the database that we want to get data from
    AFDatabase database = (new PISystems())["MySystem"].Databases["MyDB"];
    // Defining our callbacks
    void newDataCallback(AFDataPipeEvent pipeEvent)
    {
        Console.WriteLine(Thread.CurrentThread.ManagedThreadId);
        Console.WriteLine("{0}, Value {1}, TimeStamp: {2}",
        pipeEvent.Value.Attribute.GetPath(), pipeEvent.Value.Value, pipeEvent.Value.Timestamp.ToString());
    };
    void errorCallback(Exception exception) => Console.WriteLine("Server sent an error: {0}", exception.Message);
    void finishCallback() => Console.WriteLine("Finished");
    // Then we search for the attributes that we want
    IEnumerable<AFAttribute> attributes = null;
    using (AFAttributeSearch attributeSearch =
        new AFAttributeSearch(database, "ROPSearch", @"Element:{ Name:'Rig*' } Name:'ROP'"))
    {
        attributeSearch.CacheTimeout = TimeSpan.FromMinutes(10);
        attributes = attributeSearch.FindAttributes();
    }
    // We proceed by creating our listener
    var listener = new AttributeListener(attributes.Take(10).ToList(), finishCallback);
    listener.StartListening();
    // Now we inform the user that a key press cancels everything
    Console.WriteLine("Press any key to quit");
    // Now we consume new data arriving
    listener.ConsumeDataAsync(newDataCallback, errorCallback);
    // Then we wait for an user key press to end it all
    Console.ReadKey();
    // User pressed a key, let's stop everything
    listener.StopListening();
}

 

Simple and straightforward. I hope you like it. And please, let me know whether you agree or disagree with me. This is an important topic and everybody benefits from this discussion!

 

UPDATE: Following David's comments, I updated the class to offer both async and sync data handling. Take a look at my final code to see how to use the two methods. Keep in mind that the sync version will run on the main thread and block it, so I strongly suggest you use the async call ConsumeDataAsync(). If you need to update your GUI from a separate thread, use Control.Invoke.

 

Related posts

Barry Shang's post on Reactive extensions for AF SDK

Patrice Thivierge's post on how to use DataPipes

Marcos Loeff's post on observer pattern with PI Web API channels

David Moler's comment on GetObserverEvents performance

Barry Shang's post on async observer calls

Are you trying to capture data in the PI System from a new data source?

Do you have picture or video data that needs to be analyzed and stored into PI?

Are you interested in how you can get your IoT data into PI? Are you interested in edge or cloud data storage?

 

Check out the "Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams" hands-on lab at PI World SF 2018:

 

This lab, Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams, was created by the OSIsoft Applied Research team based on a couple of different projects undertaken by the research team in the last year.  For the lab, we will explore what it takes to get simulated sensor data, weather data, and picture classification data using machine learning into OMF and send that to the PI System, OSIsoft Cloud Services and the Edge Data Store.  This lab is offered on the Thursday (Day 3) afternoon of PI World, and during it we will do some basic programming in Python and JavaScript, with the configuration in Node-RED.  If any of these topics – OMF, PI Connector Relay, OSIsoft Cloud Services, Edge Data Store – sound interesting, come learn more about them in this lab: Learn How to Leverage OSIsoft Message Format (OMF) to Handle IoT Data Streams.

 

To learn more about the hands-on labs at PI World SF 2018, visit https://piworld.osisoft.com/us2018/sessions

If you are interested in signing up for a lab, please do so as part of registration for the conference. If you have already registered and would like to add a lab, please email piworld@osisoft.com.

PI AF 2017 R2 (AF SDK 2.9.5) was released shortly before 2018.  There are some exciting new features with AFSearch that should interest developers.

 

First off, I would hope any developer would always go to the Live Library What's New page for any major PI AF release.  That page gives a summary of what's new with that particular release, not just for AFSearch but for all namespaces.  Specific to AFSearch namespace, you would see the following:

 

OSIsoft.AF.Search Namespace

Two new query based search classes have been added in this release: AFAttributeSearch and AFNotificationContactTemplateSearch. The AFSearchToken.AFSearchToken(AFSearchFilter, AFSearchOperator, String, IList<AFSearchToken> ) constructor and the AFSearchToken.Tokens property have also been added to support enhanced nested query filters for searches.

The following new search tokens have been added: EventFrame, IsInternal, Parent, PlugIn, and PlugInName.

 

As a fellow developer and PI geek, when I first read that my reaction was "Cool! Nested queries!"  If you think about it, an AFAttributeSearch means that attributes are being searched upon some element(s) or event frame(s).  This implies there will first be a search for elements or event frames, followed by a search upon those results for the attributes.  This does not require you to create 2 search objects on the client.  Rather you will have 1 search object, namely an AFAttributeSearch, and it will have a nested query to filter on the desired elements or event frames.  But the nested queries are not limited to AFAttributeSearch.   For example, you may have an AFEventFrameSearch that uses a nested query for its elements.

 

Each release of AF SDK since 2.8.0 has introduced new features and capabilities to put more efficient searches at your fingertips.  For example, if you were searching on an attribute category to return many attributes per element, you could additionally filter on PlugIn or PlugInName to restrict the returned attributes to be PI points.
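
As a rough sketch of what that could look like in a query string (the database, element name, and categories below are placeholders of my own, and 'PI Point' is assumed here to be the display name of the PI Point data reference):

// Hypothetical filter values for illustration only.
AFDatabase database = (new PISystems())["MySystem"].Databases["MyDB"];
string query = "Element:{Name:'Boiler*'} Category:'Process Monitoring' PlugInName:'PI Point'";

using (AFAttributeSearch search = new AFAttributeSearch(database, "pi point attributes", query))
{
    search.CacheTimeout = TimeSpan.FromMinutes(10);
    foreach (AFAttribute attribute in search.FindAttributes())
    {
        // Only attributes backed by the PI Point data reference are returned here.
    }
}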

 

Contacts Search Example

 

Here's a quick example where I search for any contact with a name beginning with "Davi*".

 

using (AFNotificationContactTemplateSearch search = new AFNotificationContactTemplateSearch(assetServer, "contact demo", " Name:'Davi*' "))
{
    foreach (AFNotificationContactTemplate contactTemplate in search.FindNotificationContactTemplates(fullLoad: true))
    {
        Console.WriteLine($"   {contactTemplate.Name,-24} {contactTemplate.Contact}");
    }
}

 

The output:

 

   Template                 Contact

   David Burns_Email        David Burns

   David Doll_OCS           David Doll

   David Moler_Email        David Moler

 

 

Nested Queries

 

Let's take a look at 3 different examples that all do the exact same thing.  We want to perform an attribute search for any attributes named "Feed Rate" that belong to any elements whose name starts with "Boiler".  We will perform the same search 3 times, but each time how we setup the search object will be different.  The 3 techniques we will briefly cover are:

 

  • Using Nested Search Tokens
  • Using Nested Query String
  • Using Interpolated Nested Query String

 

The example code is kept simple.  We will perform an AFAttributeSearch, searching for attributes found within a nested element query with the following filters:

  1. Attribute Category of "Process Monitoring" (note the space in the name)
  2. Element Category of "ProcessMonitoring" (no space)
  3. Element Template of "FCC Pump Process"

 

If you've ever worked with AFSearch before, then you know from experience that you could not specify Category twice in a query prior to the nested queries introduced in 2.9.5.

 

Using Nested Search Tokens

 

string templateName = "FCC Pump Process";
string elemCatName = "ProcessMonitoring";
string attrCatName = "Process Monitoring";

List<AFSearchToken> nestedTokens = new List<AFSearchToken>();
nestedTokens.Add(new AFSearchToken(AFSearchFilter.Template, templateName));
nestedTokens.Add(new AFSearchToken(AFSearchFilter.Category, elemCatName));

List<AFSearchToken> tokens = new List<AFSearchToken>();
// The Element uses the nested token(s)
tokens.Add(new AFSearchToken(AFSearchFilter.Element, AFSearchOperator.Equal, null, nestedTokens));
// The Attribute uses the non-nested tokens, in this case just Category.
tokens.Add(new AFSearchToken(AFSearchFilter.Category, attrCatName));

using (AFAttributeSearch search = new AFAttributeSearch(root.Database, "nested tokens example", tokens))
{
    search.CacheTimeout = TimeSpan.FromMinutes(10);
    foreach (AFAttribute item in search.FindAttributes())
    {
        // Do something
    }
}

 

 

Using Nested Query String

 

The trick with a nested query string is that it will be enclosed in {braces}.

 

// Notice how element has nested { }. 
// The Category depends on nesting level. 'Process Monitoring' with a blank is outside the nesting,
// so it will be an Attribute Category, whereas 'ProcessMonitoring' inside the nesting is an
// Element Category.
string query = "Element:{Template:'FCC Pump Process' Category:'ProcessMonitoring'} Category:'Process Monitoring'";

using (AFAttributeSearch search = new AFAttributeSearch(root.Database, "nested query string", query))
{
    search.CacheTimeout = TimeSpan.FromMinutes(10);
    foreach (AFAttribute item in search.FindAttributes())
    {
        // Do Something
    }
}

 

 

Using Interpolated Nested Query Strings

 

With interpolated strings, you may easily substitute a variable's value (technically, it substitutes the string returned from the variable's ToString() method).  If you are familiar with this in either C# or VB.NET, you know that {braces} are used.  This raises an interesting question: how does the interpolated string know which brace is for the nested query and which is for the value substitution?  You denote that by escaping the braces themselves.

 

string templateName = "FCC Pump Process";
string elemCatName = "ProcessMonitoring";
string attrCatName = "Process Monitoring";

// This gives a compile error with an Interpolated String
// string query = $"Element:{Template:'{templateName}' Category:'{elemCatName}'} Category:'{attrCatName}'";

// Escape the { and } around literal braces with {{ and }}.
// Fun Fact: {{something}} is called the Mustache Template!
// See https://en.wikipedia.org/wiki/Mustache_(template_system)
string query = $"Element:{{Template:'{templateName}' Category:'{elemCatName}'}} Category:'{attrCatName}'";

using (AFAttributeSearch search = new AFAttributeSearch(root.Database, "interpolated nested query string", query))
{
    search.CacheTimeout = TimeSpan.FromMinutes(10);
    foreach (AFAttribute item in search.FindAttributes())
    {
        // Do Something
    }
}

 

An interesting bit of trivia: the {{something}} format of double braces is called the Mustache Template!

 

 

AFAttributeSearch

 

The nested query examples used AFAttributeSearch for searching upon elements.  The AFAttributeSearch may search on event frames instead.

 

One thing to note is the fullLoad parameter is missing from the FindAttributes method because it must always do a full load since the attributes cannot exist without the owning element or event frame.  However, the AFAttributeSearch.FindObjectFields is smart enough to know when it must make a full load to evaluate data references.  To state that a different way, it is smart enough to know when to skip a full load because of captured event frames. You don't have to do anything special like consult Oracles to figure this out for any given situation.  You would code it the same way regardless of the situation and let AFAttributeSearch.FindObjectFields make the right decision!

 

Why you should use the new search classes

 

Maintainability - some Find methods in AFAttribute or AFNotificationContactTemplate have already been marked as Obsolete, and it is reasonable to suspect that others may be marked so with future releases of AF SDK.  Make your applications more resilient by switching to these new methods sooner rather than later.

 

Performance - if you require fetching more than one page of results from the server and you opt in to server-side caching, any of the AFSearch classes perform much faster than the older methods.  This means that AFAttributeSearch will be much faster than the AFAttribute.FindElementAttributes overloads.

 

Ease of Use - the AFSearch classes take care of paging issues seamlessly for you.  The new nested queries makes searching for attributes on either elements or event frames very easy with fewer lines of code.  There is previously mentioned smartness built into AFAttributeSearch.FindObjectFields. All of this without extra coding on your part.

 

Reliability - when searching on attribute values that might be evaluated client-side (due to needing to evaluate the data reference), it is impossible to do paging properly using the older search methods because you don’t reliably know what to use for the next page index.  By opting in to server-side caching with the newer AFSearch methods, paging is more reliable.

"IoT and Fog Computing: Develop Data Ingress Applications from Edge to Cloud" hands-on lab at PI World SF 2018:

 

Just one year ago, at the 2017 OSIsoft User Conference in San Francisco, we had long discussions and debates around the present and future of operational intelligence: thousands of new sensors and devices, petabytes of data generated every day, fragmented in an endless number of incompatible protocols and interfaces. Furthermore, new and old players want to provide their own, isolated service, leaving developers and end users with the [almost] insurmountable problem of putting all this stuff together and making it run in a smooth and cost-effective fashion.

That was the time when the idea of an Open Edge Module, which then became FogLAMP, was conceived.

 

One year later, at PIWorld 2018 in San Francisco, we are able to demonstrate the result of twelve months of hard coding on this concept. We have worked on a thriving open source project to bring any data coming from new and old devices, sensors, actuators etc. into PI. This project is called FogLAMP (available on GitHub): free to learn, use, and adopt as every open source project is.

 

These are really exciting times, and for a good reason. PIWorld is a milestone for FogLAMP and a game changer for the whole Community involved in the development and implementation of IoT and IIoT projects. For the first time, industrial-grade solutions can combine decades of investments in existing infrastructure with new and innovative technologies.

 

We will talk about this and more at the FogLAMP Community booth, where you will be able to meet developers and contributors from OSIsoft, Dianomic Systems, software providers and hardware manufacturers.

 

We also have two talks, one on the features of FogLAMP (with a short demo) and one on Fog Computing architectures.

 

North/South and East/West components interaction in a Fog Computing Architecture.

 

Certainly, the most important appointment for developers, the one that must be added to your agenda, is the IoT Lab titled “IoT and Fog Computing: Develop Data Ingress Applications from Edge to Cloud”.

The IoT and Fog Computing Lab

If you are a developer and you are interested in learning more about collecting data from sensors and devices and sending data to PI and Cloud systems, the IoT and Fog Computing Lab is for you.

 

The lab is packed with exercises that will help you understand and learn about problems and solutions associated with data collection, transformation, buffering and transfer. Put your hands on code and hardware that can simulate typical scenarios in manufacturing, transportation, and other industrial and infrastructure sectors.

 

But there is more! Participants will have access to a Raspberry PI Zero W and a set of sensors packed into a board called Enviro PHAT. We will place the boards and a battery pack on a remote controlled truck to simulate an installation on moving devices, where you may experience intermittent connectivity with other layers of the data infrastructure.

 

Our Lab environment.

 

Last but not least, we will have a bit of fun with a game we have organized for you. We will run our RC trucks, with the Raspberry PIs mounted on them, on a track where you can score points and race for the highest amount of collected data in the room!

It will be fun to learn and play!

 

Don’t forget to register and bring your laptop. All you need to install is:

- An SSH client, such as PuTTY (Windows) or Terminal (macOS)

- An API client, such as Postman (for Windows and macOS)

- A Microsoft Remote Desktop client

 

The rest is on us for you to try, use and take home to learn even more!

 

Talks and Lab Schedule - Thursday, April 26, 2018:

- 10.30am - 11.15am: Introduction to the Open Edge Module (FogLAMP)

- 1.30pm - 4.30pm: IoT and Fog Computing: Develop Data Ingress Applications from Edge to Cloud

- 2.30pm - 3.15pm: Fog Computing and OSIsoft

 
