
I am thrilled to announce the posting of the PI System Connectivity Toolkit for LabVIEW on the National Instruments Reference Design Portal.

 

National Instruments equips engineers and scientists with systems that accelerate productivity, innovation, and discovery.

 

National Instruments measurement platforms often connect to the kinds of sensors and perform the kinds of analyses that aren’t usually found in the industrial control systems traditionally connected to the PI System.  Examples include vibration analysis, motor current signature analysis, thermography, acoustic and electromagnetic induction measurements, and the analysis of other equipment condition and performance indicators.

 

By enabling bi-directional communication between LabVIEW and the PI System, maintenance and reliability personnel can gain deeper insights, not only into the condition of equipment through analysis of the signals in LabVIEW, but also into the process conditions that affect the equipment, and vice versa.

 

LabVIEW edge analytics are enhanced by process data from other monitoring and control systems via the PI System.  The PI System real-time data infrastructure also makes the LabVIEW data available enterprise-wide for better insights and decision-making across an organization, and allows that data to be integrated with other systems for predictive analytics, augmented reality, and computerized maintenance management to automate maintenance processes.

 

To obtain installation instructions, LabVIEW Virtual Instruments, and sample code files, see the following posting on the National Instruments Reference Design Portal:

http://forums.ni.com/t5/Reference-Design-Portal/OSIsoft-PI-System-connectivity-toolkit-for-LabVIEW/ta-p/3568074

 

The write-to-PI function requires a license for the PI-UFL Connector; please contact your Account Manager or Partner Manager for licensing.

 

The read-from-PI function requires a PI Web API license, which can be downloaded and used for free for development purposes from the OSIsoft Tech Support website.

 

For more information on LabVIEW, please see http://www.ni.com/labview.

 

Please direct any questions to NationalInstruments@osisoft.com.

jkorman

Balancing on the Edge

Posted by jkorman Employee Jan 10, 2017

Please check out my blog post about balancing Edge vs Cloud computing and let me know your thoughts!

http://www.osisoft.com/Blogs/Balancing-on-the-Edge/

 

Kind Regards,

Jeremy

While developing web-based applications that leverage the PI Web API, I often find myself asking the following question:

"How can I get all WebIds of a subset of attributes for all elements?

One obvious use-case that comes to mind is to display a table of attribute data for all elements of a certain type. Maybe I have a template that defines 20 attributes, but I only need to display 10 of them. Of those 10 attributes, maybe I only want to display snapshot values for 6 of them, and the other four I want to trend in a sparkline.

In order to accomplish this, a series of requests to the PI Web API needs to be made:

  1. Get all the WebIds for the elements I need
  2. Loop through each of these elements, and for each element get all the WebIds for the attributes I need
  3. Make streamset requests to the PI Web API with the WebIds of my attributes

In the past, making all of these calls to the PI Web API did not scale very well in terms of performance. Today, we have batch requests with request template functionality that makes implementing this use-case a lot easier. Even with batch requests, however, I have discovered some performance implications that I believe are worth knowing about. I will share these in this blog post.

 

 

Setting up the batch request

 

As a starting point, we need to write a batch request to get the elements we want. Throughout this blog post, I am using the public PI Web API endpoint with the NuGreen database. Suppose I want to get all elements of the "Boiler" template type. This batch request may look something like this:

 

{
 "database": {
  "Method": "GET",
  "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["database"],
  "Parameters": ["$.database.Content.Links.Elements"]
 }
}

 

Notice that I am specifically asking for elements that are instances of the "Boiler" template. Also, I am using the "searchFullHierarchy" parameter in the element sub-request. I often include this parameter because I need all element descendants of a specific root element. As such, I will sometimes write my element query as follows:

 

{
 "rootElement": {
  "Method": "GET",
  "Resource": "https://devdata.osisoft.com/piwebapi/elements?path=\\\\PISRV1\\NuGreen\\NuGreen"
 },
 "elements": {
  "Method": "GET",
  "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
  "ParentIds": ["rootElement"],
  "Parameters": ["$.rootElement.Content.Links.Elements"]
 }
}

 

The only difference here is that I have changed my starting point from an AF database to a specific AF element.  For the rest of my examples, I'll be sticking to the database method.
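
As a side note, here is a minimal sketch of how a batch document like this can be submitted from client code. I am using Python with the requests library purely for illustration; the credentials and SSL settings are placeholders that depend on how your PI Web API instance is configured.

import requests

batch = {
    "database": {
        "Method": "GET",
        "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen"
    },
    "elements": {
        "Method": "GET",
        "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true",
        "ParentIds": ["database"],
        "Parameters": ["$.database.Content.Links.Elements"]
    }
}

# The batch controller takes the whole set of sub-requests as a single POST body.
response = requests.post("https://devdata.osisoft.com/piwebapi/batch",
                         json=batch,
                         auth=("user", "password"),  # placeholder; use whatever auth your server requires
                         verify=True)
response.raise_for_status()

# Each sub-request comes back keyed by its name, with its own Status and Content.
for item in response.json()["elements"]["Content"]["Items"]:
    print(item["WebId"], item["Path"])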

 

Moving on to attributes

 

Now that we have the element batch query, we can expand it to get all attributes for each of the elements returned. This is where the new request template functionality of batch requests in the PI Web API comes into play. Here is what the query may look like:

 

{
  "database": {
   "Method": "GET",
   "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
  },
  "elements": {
   "Method": "GET",
   "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
   "ParentIds": ["database"],
   "Parameters": ["$.database.Content.Links.Elements"]
  },
  "attributes": {
   "Method": "GET",
   "RequestTemplate": {
    "Resource": "{0}?searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path"
   },
   "ParentIds": ["elements"],
   "Parameters": ["$.elements.Content.Items[*].Links.Attributes"]
  }
}

 

Notice the use of RequestTemplate in the "attributes" sub-request. As documented in the PI Web API:

 

A request can alternatively specify a request template in place of a resource. In this case, a single JsonPath may select multiple tokens, and a separate subrequest will be made from the template for each token. The responses of these subrequests will be returned as the content of a single outer response.

 

This means my batch query is going to loop through all of my elements, and make a separate sub-request for each element's attributes. Even better, this is all being handled internally by the PI Web API... pretty cool!

Before we continue, there are a few things to note about the attribute query:

 

  1. I am also using searchFullHierarchy for the attributes. This is important because an element can have many levels of nested attributes. Writing a batch query to loop through all these levels in a generic way would probably be impossible. Luckily we have "searchFullHierarchy" for this.
  2. I have included "selectedFields" in all my sub-requests. As we will see later, this results in a MAJOR performance improvement over letting PI Web API return all of its default metadata.

 

Now, moving on...

 

What if I only want certain attributes?

 

Well, you can do that. PI Web API 2016 introduced a new function to request multiple attributes by WebId or path in one call. Suppose I only want the following attributes for my Boiler elements:

 

  • Asset Name
  • Model
  • Plant

 

Then our batch query may look something like this:

 

{
  "database": {
   "Method": "GET",
   "Resource": "https://devdata.osisoft.com/piwebapi/assetdatabases?path=\\\\PISRV1\\NuGreen&selectedFields=WebId;Path;Links"
  },
  "elements": {
   "Method": "GET",
   "Resource": "{0}?templateName=Boiler&searchFullHierarchy=true&selectedFields=Items.WebId;Items.Path;Items.Links",
   "ParentIds": ["database"],
   "Parameters": ["$.database.Content.Links.Elements"]
  },
  "attributes": {
   "Method": "GET",
   "RequestTemplate": {
    "Resource": "https://devdata.osisoft.com/piwebapi/attributes/multiple?selectedFields=Items.Object.WebId;Items.Object.Path&path={0}|Asset Name&path={0}|Model&path={0}|Plant"
   },
   "ParentIds": ["elements"],
   "Parameters": ["$.elements.Content.Items[*].Path"]
  }
}

 

Here, I'm using the multiple attributes function call and supplying the "path" parameter in the resource URL multiple times. Each instance of this parameter appends the relative path of an attribute I want to the sub-request parameter (which happens to be the element path). From here, it is only a matter of writing your client-side code to properly construct the resource URL based on the attributes you want.
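
As a hedged sketch of that client-side step, the helper below builds the request template resource string from a list of attribute names. The function name and the attribute list are hypothetical; only the attributes/multiple endpoint and its repeated path parameter come from the batch above.

def build_multiple_attributes_resource(base_url, attribute_names,
                                       selected_fields="Items.Object.WebId;Items.Object.Path"):
    # "{{0}}" keeps a literal "{0}" in the output so the batch request template
    # can substitute each element path selected by the JsonPath parameter.
    path_params = "&".join("path={{0}}|{0}".format(name) for name in attribute_names)
    return "{0}/attributes/multiple?selectedFields={1}&{2}".format(base_url, selected_fields, path_params)

# Hypothetical usage - reproduces the Resource string used in the batch above:
resource = build_multiple_attributes_resource("https://devdata.osisoft.com/piwebapi",
                                              ["Asset Name", "Model", "Plant"])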

 

Where do I go from here?

 

Now that we have all the WebIds for my attributes, we have to decide what to do with them. Usually I will start by creating a flat data structure that maps my element paths to attributes to WebIds. In JavaScript, this may look something like this:

 

// HTTP response is stored in the "res" variable
var WebIds = {};


res.elements.Content.Items.forEach(function (element) {
    WebIds[element.Path] = {};
});


res.attributes.Content.Items.forEach(function (subRes) {
    subRes.Content.Items.forEach(function (attribute) {
        var path = attribute.Object.Path,
            elementPath = path.substring(0, path.indexOf('|')),
            attributePath = path.substring(path.indexOf('|'));


        WebIds[elementPath][attributePath] = attribute.Object.WebId;
    });
});

 

After putting the WebIds into a structure that is a bit more usable, you could use them in a variety of ways. Typically I will create an additional PI Web API batch call for all of the streamsets I need. For example, I may want to use an "end" streamset for 5 of my attributes, but use a "plot" streamset for only 2 of them.
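
As an illustration of what that follow-up call might look like, here is a sketch that groups the collected WebIds into two ad hoc streamset sub-requests, "end" for current values and "plot" for trend data. The split of attributes between the two groups and the time range are hypothetical; the streamsets/end and streamsets/plot resources accept the webId parameter multiple times.

# Assume web_ids mirrors the JavaScript structure above, rebuilt as a Python dict:
# web_ids[element_path][attribute_path] = WebId
end_web_ids = [attrs["|Asset Name"] for attrs in web_ids.values()]   # hypothetical "snapshot" group
plot_web_ids = [attrs["|Model"] for attrs in web_ids.values()]       # hypothetical "trend" group

def web_id_query(web_id_list):
    # The ad hoc streamset resources take webId multiple times in the query string.
    return "&".join("webId=" + web_id for web_id in web_id_list)

streams_batch = {
    "snapshots": {
        "Method": "GET",
        "Resource": "https://devdata.osisoft.com/piwebapi/streamsets/end?" + web_id_query(end_web_ids)
    },
    "trends": {
        "Method": "GET",
        "Resource": "https://devdata.osisoft.com/piwebapi/streamsets/plot?startTime=*-1d&endTime=*&intervals=100&"
                    + web_id_query(plot_web_ids)
    }
}

# Submitted the same way as the earlier batches:
# requests.post("https://devdata.osisoft.com/piwebapi/batch", json=streams_batch, ...)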

 

How is the performance?

 

I ran different flavors of these types of batch queries and collected the response times. First I will present the raw results and then comment on them.

 

Query description                                  | Response time (Path and WebId only) | Response time (all metadata)
All attributes for all boiler elements             | 143 ms                              | 2611 ms
All attributes for all elements                    | 435 ms                              | 3809 ms
Specific attributes (3) for all boiler elements    | 180 ms                              | 186 ms
Specific attributes (3) for all elements           | 635 ms                              | 1120 ms
All attributes except one for all boiler elements  | 453 ms                              | 2895 ms
All attributes except one for all elements         | 3249 ms                             | 5272 ms

 

I included results for all elements (not filtering by template type) to demonstrate how these batch queries scale. I do NOT recommend doing this on a large AF structure.

 

Based on these results, I have made the following observations. YMMV, however, as the PI Web API appears to do some internal caching that can cause successive calls to perform faster.

 

  1. When getting all attributes (not specific ones using the "multiple" function call), it is important to be mindful of which fields you are selecting. The more metadata you ask for, the longer the request will take.
  2. Using the "multiple" function call for ad-hoc attributes does not perform very well if you have a lot of elements and are requesting a lot of attributes for each element. You're better off just asking for all attributes with a call to "attributes," unless you only need a small subset of attributes.
  3. The more elements that need to be processed, the longer the query. This makes intuitive sense and is to be expected.

 

Concluding remarks

 

I hope this blog post is helpful to you as you use the PI Web API to develop your own web apps. What other ways have you discovered of getting attributes with the PI Web API?

I would like to share some thoughts about PI calculation datasets in ProcessBook and what would help us keep them (I'd rather say bring them) under control.

Undoubtedly, datasets come in very handy for users to do some simple calculations on the fly. But they are also a challenge for PI administrators when they start to get out of control - I'd almost say they become a mess then.

 

How did that come about?

In a perfect "PI world" data is accessible by what- and whomever and tag names never change. Users start building displays with calculations and - as they are very handy - they tend to spread more and more all over the company.

 

In reality we have PI security and systems that change over time. The display builder and the user do not always share the same access rights. That's where it gets weird, not primarily for the user but definitely for the administrators: thousands of error messages start to flood the server logs and make them almost unreadable:

User query failed: Connection ID: 6092, User: xxx, User ID: yyy, Point ID: 0, Type: expcalc, …

 

Users are often not even aware they are using datasets; they picked them up with a copied display. Missing data access or renamed points are the most common sources of trouble.

Besides a small green spot becoming red (status report), there is nothing that would draw the user's notice. That's nothing unusual in an extensive display, since it could also be triggered by a temporary system digital state.

Individual datasets spread all over the company are not manageable, and that's by far not the only handicap.

 

How to relieve the symptoms?

First of all, error messages should appear at the place where the error occurs.

What the right place is in the case of calculation datasets depends on the point of view.

Technically, the PI server might be the right place; practically and for logical reasons, it'd be the client. That's the only place it can be fixed.

Here are my proposals:

Provide options to

  • prompt users for calculation dataset problems in ProcessBook
  • automatically disable datasets that can't be executed (security, tag not found) in ProcessBook
  • suppress those error messages (Point ID: 0) on the server
  • disable datasets in ProcessBook (stepwise, e.g. creation, execution, disable)

How to cure?

What I definitely see is an urgent need (at least in our case) to replace the unmanageable decentralized approach with a centralized one.

This is where AF On Demand Calculations come in.

 

Anyway, replacing datasets with On Demand Calculations will be a big challenge and there are several topics that need to be discussed, e.g.:

  • support for migration from datasets to AF On demand calculations (export)
  • how to provide the individual flexibility of personal datasets (may be an organizational matter)
  • ...

 

Conclusion

From an administrator's point of view, the most annoying issue is the log entries for failing calculations in the server log.

500 messages per hour on a 50k server with 100 connected ProcessBooks are more the rule than the exception. An option to suppress those calculation errors with "Point ID: 0" would be a blessing.

 

A good starting point on the client could be to make users more obviously aware of the problem of failing calculation datasets.

This can (and maybe should) be annoying, and therefore an option to disable specific datasets is needed. Together with an option to automatically disable permanently failing calculations, this would be perfect.

 

Finally, a centralized and manageable approach with e.g. On Demand Calculations should be the goal - with benefits for all the new and future clients. Let’s do it step by step.

 

Maybe someone wants to pick this up and make an enhancement request out of it.

Purpose

I have seen quite a few questions and confusion over the last year regarding the timestamps returned when querying data using the PowerShell Tools for the PI System, specifically that the timestamps returned do not match the client time-zone settings, unlike the behavior of other PI client tools (PI DataLink, PI ProcessBook, PI Coresight, PI SMT, etc.).  It also came to my attention that there was not a clear explanation for this behavior in documentation or here on PI Square at this time.

 

Current time on test machine:

In PI SMT - values for SINUSOID over the last two hours:

Using PowerShell tools for the PI System - values for SINUSOID over the last two hours:

# connect to PI Data Archive
$con = Connect-PIDataArchive -PIDataArchiveMachineName "afink-pi1" 

# set start time to two hours ago
$starttime = Get-Date
$starttime = $starttime.AddHours(-2)
# set end time of now
$endtime = Get-Date

# query values of SINUSOID for last two hours
$values = Get-PIValue -PointName Sinusoid -StartTime $starttime -EndTime $endtime -Connection $con

# write values to console
Write-Output $values


Background/Explanation

The PI Data Archive stores all data using UTC time.  Therefore, when a PI client tool requests data for a tag, the values are returned with UTC timestamps.  It is the responsibility of the PI client to convert these times from UTC to local time based on the Windows Time Zone settings on the client machine.  When a client requests data from the PI Data Archive, the timestamps for the returned values are absorbed by the client as System.DateTime .NET objects in UTC time.  The client then calls the ToLocalTime() method on the returned values and populates the spreadsheet, trend, or other symbol with the timestamp in the client's local time zone.  This is the behavior of OSIsoft's most used client tools such as PI DataLink, PI Coresight, PI ProcessBook.

 

The PowerShell tools for the PI System, however, do not automatically call the ToLocalTime() on returned values, so all values are still shown in UTC by default.  This is what raises the questions/confusion described above.

 

Solution

Luckily for us, since PowerShell is also built on .NET, we simply need to apply the same steps used by all other PI client tools as described above.  The results of my SINUSOID query are stored in the variable $values.  If we select the "Timestamp" property of our resultant events and pipe it to the Get-Member cmdlet (aliased as "gm"), we can see the object type is indeed System.DateTime.  We can also see the various methods and properties associated with the object type.

 

 

Specifically, notice the ToLocalTime() method.  Using this method, we can verify that we can now return the values with the proper client time zone timestamps.

 

Finally, since the ultimate goal is often to generate a .csv or .txt file with the resultant archive data, we can use some PowerShell tricks to replace our UTC timestamp column with our client local time column. Specifically, using calculated properties.

 

In fact, thanks to the PowerShell pipeline, we can select our properties, build our custom output column, and write the result to a .csv file all in a one-liner:

# selects the value, and creates a custom column for client tz timestamps and outputs to file C:\values.csv
$values | Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  | Export-Csv -Path C:\values.csv -NoTypeInformation

 

Let's break this down a little bit... this one-liner can be broken into 3 parts, separated by the "|" character.  The | allows us to take the output of one cmdlet or script block and pass it as the input to the next cmdlet or script block.

 

Part 1:

$values |

 

Here we are simply setting our "start off point".  We currently have our $values variable which contains our SINUSOID values and UTC timestamps.  Keep in mind this variable also contains other properties, such as isGood, isAnnotated, and WriteMode.  We take all of this information and pass it to the next step:

 

Part 2:

Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  |

 

In this case, we are only interested in the value and our new client time zone timestamp properties for our resultant .csv file.  Therefore, from our query results, we are only selecting the "value" column and our new column "Timestamp (Client Time Zone)", which we are creating by specifying the Name of the new column and the Expression to evaluate for it.  In PowerShell, we do this by adding the new column as a hash table, which is why we must use the @{Name='';Expression=''} syntax.  The operation we want to perform to generate the data for our "Timestamp (Client Time Zone)" column is to call the ToLocalTime() method as previously discussed.  The "$_." tells PowerShell to call the timestamp.ToLocalTime() method on whatever object was piped to it, and from Part 1, we had piped our $values variable.  Therefore, we will be taking the "timestamp" column of our $values variable and calling ToLocalTime(), resulting in our new column "Timestamp (Client Time Zone)".

 

For more information on calculated properties in PowerShell, check out the following TechNet article:

< Windows PowerShell Tip: Using Calculated Properties >

 

Part 3:

Export-Csv -Path C:\values.csv -NoTypeInformation

 

Now that we have selected only the columns we are interested in from the previous step ("value" and our new calculated "Timestamp (Client Time Zone)" columns), we can export the result to a .csv file using Export-Csv and specifying a path for the output file.  The -NoTypeInformation flag prevents object type data from being prepended to the .csv file header.  Oftentimes this extra note in the .csv file can be cumbersome, so we can remove it this way.  Without the -NoTypeInformation flag, the output file will look like this:

 

Now we have a .csv export of our desired PI data, properly returned in our client time zone, which we can use for further analysis in other programs.  Furthermore, since we were able to do this in PowerShell, we can run our queries in parallel, against multiple servers, and in a fully automated fashion.

 

The full script would look something like this:

# connect to PI Data Archive
$con = Connect-PIDataArchive -PIDataArchiveMachineName "afink-pi1" 

# set start time to two hours ago
$starttime = Get-Date
$starttime = $starttime.AddHours(-2)
# set end time of now
$endtime = Get-Date

# query values of SINUSOID for last two hours
$values = Get-PIValue -PointName Sinusoid -StartTime $starttime -EndTime $endtime -Connection $con

# write values to console
Write-Output $values

# selects the value, and creates a custom column for client tz timestamps and outputs to file C:\values.csv
$values | Select-Object -Property value,@{Name='Timestamp (Client Time Zone)';Expression={$_.timestamp.tolocaltime()}}  | Export-Csv -Path C:\values.csv -NoTypeInformation


 

Happy slinging!

 

- Adam Fink

KristofDM

Osi to the cloud

Posted by KristofDM Dec 15, 2016

So, we are moving the entire PI infrastructure to the cloud, or at least we are trying to...

We are looking at IaaS and PaaS where possible.

I'll write down my successes (and losses)  here to help the community members that might have the same idea.

 

Please note that not all things we do are 'supported' by OSIsoft... (yet!).

 

First, some info on the environment. We have:

  • a PI Data Archive 3.4.935.72 (2015 R2) server
  • PI Coresight
  • PI AF
  • PI SQL Data Access Server
  • a 'heavy consumer app' server
  • a SQL Server for Coresight and AF

 

Testing:

To make the most out of it, I wanted to have the AF and Coresight databases hosted in PaaS.

This is not yet supported and was only tested on an older PaaS SQL edition (pre v12).

We have a VPN with Azure and are able to join Azure VMs to our local AD.

 

PI Data Archive

I went ahead and installed the PI Data Archive on an Azure VM with SSD storage.

No problems here.

 

PI Coresight

Installed Coresight on an Azure VM and adapted the connection strings to use PaaS.

Migrated the on-premises database to PaaS using SQL Server Management Studio.

Issues:

1: The DvService user (in the Coresight SQL database) is linked with a SQL login from the domain!

=> Not supported in Azure. Dropped and recreated on PaaS.

2: The Coresight connection string to SQL in web.config was adapted so that an Azure database can be used.

code:

<add name="CoresightConnectionString"  connectionString="Data Source=yourPaaSSQlServer.database.windows.net;Initial Catalog=YourPaasDatabaseName;User ID=DatabaseUSer;Password=DatabaseUSerPassword;Encrypt=true;Trusted_Connection=false; Application Name='Coresight Web Service'" />

 

It works! The Coresight overview page does not show any more errors.

 

PI AF Server

Moving the AF database to the cloud SQL PaaS.

I probably have an advantage here since I'm a classic database admin, so if anything is not clear, shoot!

 

The AF database in SQL Server uses some stored procedures/functions to create a replica, get the status of the SQL Agent, get the sysadmin name, etc.

Since these are all calls to the database instance (the SQL engine), they will no longer work in a PaaS database. For obvious reasons, Microsoft does not allow these kinds of queries.

The instance catalog is not the traditional SQL Server kind and thus is unusable.

 

Since this is a tricky part, we made a copy of the database on our local SQL Server (PiAfCopy) and:

  • changed the stored proc for the sa user to always return "SA" as a string value
  • changed the SQL Agent check to always return 1
  • dropped the procs for the replica (OSIsoft discourages using it anyway)
  • changed all Windows logins to SQL logins

 

Adapted functions:

  • fn_getSaName
  • IsSqlAgentEnabled

Dropped stored procedures:

  • [dbo].[USP_AUDIT_ENABLED]
  • [dbo].[usp_replicate_primary_createdistributor]
  • [dbo].[USPC_CDC_ENABLE]

 

Now, using SQL Server Management Studio 2016, I deployed the database to the PaaS SQL server.

Issue: Out of memory

This can be quite intensive on the client computer, so having enough RAM is essential!

If you don't have enough, try closing as many apps as possible.

You could also do this on the database server itself, but it needs an internet connection!

 

After some time (it takes a long time) the database should be deployed in Azure PaaS.

 

Installing AF Server + System Explorer!

 

We've set up an IaaS server and installed AF like we normally do. We disabled the database execution utility and disabled the connection check while specifying the database server.

We just specified the PaaS SQL server.

 

Installation went smooth.

 

The connection string needed to be changed, this time in AFService.exe.config.

It's located in C:\Program Files\PIPC\AF and is somewhat different from the Coresight one...

 

    <add key="connectString" value="Persist Security Info=False;server=YourPaasSqlServer.database.windows.net,1433;database=YourPaasDatabase;USER id=DatabseUser;PASSWORD=DatabaseUserPassword; Application Name=AF Application Server;" />

 

Restarted AF and...

 

It worked!

Initially I had an error while creating new elements:

"cannot check in model with unique id {Some very long SID} because there is another model with an identical name"

But after retrying a few times, it vanished!

 

Next steps:

Analysis Service, Notifications, PI SQL Data Access Server.

To be continued!

 

 

SQL DATA ACCESS (PIOLEDB)

After installing the PI data access layer on an IaaS server, running PI SQL Commander fails.

Under the hood, PI SQL Commander tries to use the (undocumented) system stored procedure sp_oledb_ro_usrname.

The sp_oledb_ro_usrname stored procedure returns the OLE DB database read-only status.

These kinds of queries are not supported on Microsoft SQL Azure PaaS.

 

Here ends the SQL PaaS try-out.

 

We'll look at hosting the SQL Server on premises...

rkoonce

Excel Is Our Friend – Part V

Posted by rkoonce Employee Dec 14, 2016

Excel Is Our Friend – Part V


Congratulations! You have escaped Interface Island! And if you are ever caught on another similar island, you have developed the survival skills to survive and eventually escape. The bad news is that some of your toolbox was left behind on that island. When you find yourself in need of the tools there, you could always swing by and get a “copy.” You left your shelter behind, though; it will have to be rebuilt. On a new island, project references will need to be re-added and functions re-created in the new island’s project. But what if there was a way to take the recyclables with you? Like that user-defined function? That might come in handy anywhere. So, the good news is that you can have your cake and eat it too.

How to: Making Your Custom Spreadsheet Functions Available Anywhere

To use a custom function, the workbook containing the module in which you created the function must be open. If that workbook is not open, you get a #NAME? error when you try to use the function. Even if the workbook is open, if you use the function in a different workbook, you must precede the function name with the name of the workbook in which the function resides. For example, if you create a function called DISCOUNT in a workbook called Personal.xlsb and you call that function from another workbook, you must type =personal.xlsb!discount(), not simply =discount().

You can save yourself some keystrokes (and possible typing errors) by selecting your custom functions from the Insert Function dialog box. Your custom functions appear in the User Defined category:

ExcelFig7.png

In the instructions above, a workbook type of .xlsb is referenced, but the screenshot shows an .xlsm… my workbook was created long ago, originally as an .xls (I am a dinosaur).

* Tip: An even easier way to make your custom functions available at all times is to store them in a separate workbook and then save that workbook as an add-in. You can then make the add-in available whenever you run Excel. Detailed instructions are found at the office.microsoft.com site linked here. This varies slightly from my own “old skool” methods, but I agree with the authors that this new way seems an easier way. I’ll be converting to an add-in when I get the chance.


Note: An .xlsb extension indicates Excel Binary Format. The .xlsm extension denotes a macro enabled workbook. An .xlam extension indicates Add-in. All of these formats require a trust for location and/or a much more complicated trust for publisher.

How to: Let Excel Teach You VBA


All of us can learn a lot of VBA by using the “Record Macro” function. After recording your actions within Excel, just examine the VBA code afterward and learn. It’s like pulling the curtain back on the Wizard of Oz. Prior to VBA, I was using Excel 4’s macro language, and the macro recorder was instrumental for me, personally, when first learning VBA. I still use it on occasion.


* On the Developer Ribbon, click “Record Macro”.
* Note the Macro Name and location.
* Perform some actions within the workbook, select a range of cells, change the font, change the number format, type some text or a formula.
* Click the Stop Recording button (same location as above).
* Examine the code that was generated.

ExcelFig8.png

FizzBlog Exercise:


  • Create a new macro-enabled workbook.
  • Copy the mySplitter function into this new workbook (the functions that we created earlier in this blog series).
  • Save this workbook as an Excel Add-In > “MyFunctions.xlam”
  • In Excel>File>Options> Add-ins>Load the newly created add-in in Excel.
  • Create a new workbook. Click Insert Function
FizzBlog Challenge:


  • In the dialog to select the user defined functions, is the function name prepended with the add-in worksheet name? workbook name? Or none? Prove it with a screenshot.
  • Were you able to make it work with a regular workbook? Or does it only work in macro-enabled workbooks?
  • Did you have to do anything special to make it work? (Did I leave out any significant steps in the FizzBlog Exercise?) If so, please explain.
Sharing your recyclables:


  • If you have a useful reusable function(s), please share it below.
  • Perhaps you just had an idea for one, please post it as a comment! (Santa might be reading this thread?)


This entry concludes my blog series for "Excel Is Our Friend" (perhaps I will append it someday). I hope that some of you have learned a few things along the way. Regardless, I really hope you enjoyed the series.

I plan to begin another series soon, but first I will have to choose my next destination. I have several ideas that I have been kicking around, but I would welcome any suggestions regarding any "islands" you would like to explore.


Cheers! (from the grass hut)

Introduction

 

In line with our partnership with Dell, our Systems Engineering team in Singapore received an opportunity to test out a Dell Edge Gateway unit for a short period. The Dell Edge Gateway is like a mini PC designed for industrial use to aggregate, secure, and relay data from diverse sensors and equipment. It is able to accept many communication protocols, from BACnet, Modbus, and CANbus to modern wireless mesh networks like Zigbee, 6LoWPAN, and Z-Wave.

 

For this testing, we decided to install the full PI system and connect it to temperature sensors to perform simple direct data collection examples.
In this post, we will show what we have done on the Dell Gateway and go through 2 examples:

1) how to connect and collect data from an industrial thermometer transmitter via RS232 port

2) how to collect data from Temperature-Humidity sensor via Raspberry Pi through PI Web API

 

Setup

 

Sensors: Comet Thermometer (T0310) with RS232 output and Temperature-Humidity sensor (DHT22) (with jumper cables, breadboard and 10k Ohm resistor )

Development Board (for Example 2): Raspberry Pi 3 Model B

Gateway Device: Dell Edge Gateway Model 5000. OS - Windows 10 IoT Industry. Processor - Intel Atom E3827 1.75 GHz 2 Cores. Memory - 8 GB

PI System component installed: PI Data Archive 2016, PI AF 2016 (including SQL Server 2012 R2 Express), PI ProcessBook, PI Coresight and PI DataLink (including MS Excel 2013)

Data Collection Method: PI Interface for Modbus Serial PLC (Example 1) and PI Web API 2016 (Example 2)

 

In this post, we will assume that all of the PI System components have been installed successfully on the Dell Gateway beforehand. The Dell Gateway unit is running in a workgroup and does not have internet access (although that would be possible).

 

Example 1

The Comet temperature sensor (T0310) comes with an RS232 serial output port, which can be connected directly to the Dell Gateway via serial port COM1.

The PI System on the Dell Gateway can easily collect the sensor’s real-time reading (degrees Fahrenheit) into a PI tag via the PI Modbus Serial interface.

 

If you are familiar with the PI Modbus interface and its tags, the interface configuration is straightforward, with all default settings.

As for the PI tag configuration, you need to follow the interface manual. Some of the important tag attributes are:

Tagname                  | Point Source | Point Type | Location 1 | Location 2 | Location 3 | Location 4 | Location 5 | Instrument Tag
T0310_Comet_Temperature  | MODBUSS      | Float32    | 1          | 1          | 104        | 1          | 49         | COM1

 

Once we started the interface, data started flowing to the PI Data Archive.

We built a display in PI ProcessBook and imported it into PI Coresight.

 

 

Example 2

Plug the temperature-humidity sensor (DHT22) into the breadboard and connect the jumper cables as per the diagram below.

  • Connect Pin 1 of the sensor to 3.3V GPIO
  • Connect Pin 2 of the sensor to GPIO4
  • Connect Pin 4 of the sensor to Ground
  • Connect an Ethernet cable between the Dell Gateway and Raspberry pi Ethernet port

(Wiring diagram: DHT22 connected to Raspberry Pi GPIO. Ref: DHT22, gpio)

Once everything is ready, on the PI Server side we created 2 PI tags to collect the sensor’s readings (temperature and humidity). These 2 tags can have the 'Float32' type and use 'PS = L'. The other tag configuration can be kept at the defaults.

The data will be read from the Raspberry Pi into the PI tags via the PI Web API.

 

Make sure to set the AuthenticationMethods parameter under PI Web API's 'System Configuration' element in the AF Configuration database to 'Anonymous'.

Then, note down the WebIDs of the two new PI tags that were created. You should have 2 WebIDs: one for the temperature tag and one for the humidity tag.
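
If you would rather not copy the WebIDs by hand, they can also be looked up programmatically through the points controller. A minimal sketch, assuming the PI Web API endpoint and certificate used in the script below, anonymous authentication, and hypothetical tag names for the two points:

import os
import requests

base_url = 'https://gateway5000/piwebapi'
os.environ['REQUESTS_CA_BUNDLE'] = 'webapi.cert'

def get_web_id(tag_path):
    # Look up a PI Point by its path and return the WebId reported by the PI Web API.
    response = requests.get(base_url + '/points', params={'path': tag_path})
    response.raise_for_status()
    return response.json()['WebId']

# Hypothetical tag names; replace with the two tags created for the sensor.
temperature_web_id = get_web_id(r'\\GATEWAY5000\DHT22 Temperature')
humidity_web_id = get_web_id(r'\\GATEWAY5000\DHT22 Humidity')
print(temperature_web_id, humidity_web_id)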

 

Code

 

We used Python scripts. We need the 2 scripts below:

1) DHT22.py - available online via GitHub

2) dev.py - writes the sensor's data to PI every 5 seconds. Make sure to modify 'base_url' and insert the WebIDs for 'temperature' and 'humidity'.

Note: 'webapi.cert' is a self-signed certificate stored at the same location as the scripts. More information on certificate generation is available here.

 

import requests
import os
import time
import pigpio
import DHT22


base_url = 'https://gateway5000/piwebapi'
os.environ['REQUESTS_CA_BUNDLE']='webapi.cert'
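# WebIDs of the two PI tags created earlier (replace with the WebIDs noted above)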
temperature='P0KXR0VK08VUemBzWNbx97QQZgUAAAR0FURVdBWTUwMDBcREhUMjIgVEVNUEVSQVRVUkU'
humidity='P0KXR0VK08VUemBzWNbx97QQZwUAAAR0FURVdBWTUwMDBcREhUMjIgSFVNSURJVFk'
def post_pi_value(value,webid):  
    data = {'Value': value}  
    headers = {'Content-Type': 'application/json'}  
    response = requests.post(base_url + '/streams/'\
                             + webid + '/value', json=data,\
                             headers=headers)  
    return response
def get_pi_value(webid):  
    response = requests.get(base_url + '/streams/'\
                             + webid + '/value')  
    return response
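# connect to the local pigpio daemon and set up the DHT22 sensor on GPIO 4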
pi=pigpio.pi()
s=DHT22.sensor(pi,4)
i=1000
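# read the sensor and push temperature and humidity to PI every 5 seconds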
while True:
    s.trigger()
    post_pi_value(s.temperature(),temperature)
    post_pi_value(s.humidity(),humidity)
    r = get_pi_value(temperature)
    #print r.text
    i=i+1
    time.sleep(5)

Credit: Eugene Lee

 

You can set up the Python script above to run on startup, or start/stop it on demand using a pair of simple bash scripts (one to start the script and one to stop it).

These two files can be placed at the same location as the Python scripts. When you want to start or stop reading the sensor's data, you can execute the corresponding command from the terminal (command line).

 

Conclusion

 

In this post, we made use of the Dell Gateway's capability to connect directly to data sources via its serial and Ethernet ports. We selected simple sensors that are easy to connect to and come at low cost. We successfully delivered data to a PI Server residing on the same machine. Since the Dell Gateway can accept many other communication protocols, data collection from many other IoT devices is possible. In my opinion, the Dell Gateway can be a very good interface node for the PI System, where all sensors can be connected and their data gathered in one location. However, it is not suitable for hosting our full PI System, because it has physical limitations such as a 2-core processor and only 8 GB of RAM. Performance will surely be limited with a large number of tags or heavy use of AF/AF Analyses. In the future, with higher specifications, it may be possible.

 

If you have any questions, please post them here, and if you have experience using the Dell Gateway, please share it with us.

Looking for help with a technical issue or challenge? All Things PI is your place for online support from OSI Technical Support, as well as fellow users!

 

Who is available on PI Square to help?

 

All of OSIsoft Technical Support monitors and participates in our community, so the same support you can get via a phone call or email is what you'll receive here!

 

In addition, we have active Product Specialists, Product Managers, Developers, and more who are always keeping an eye out for discussions relevant to their expertise.

 

How do I post a question?

 

Use any search bar to try to see if your question has been asked before - search results will pop up automatically, and if you need to you can click the "Search all results ->" button at the bottom to see more.

 

If your issue hasn't been discussed before, simply press the "Ask It" button to launch a new post. Fill in as many details as you can, including background to the issue.

 

In the "In a Place" section, browse to the appropriate sub-location of All Things PI - we have product-specific subspaces such as PI Server, PI Interfaces & Connectors, PI Visualization, etc. See below on why this is beneficial to you!

 

Create a few tags to better catalogue your post. These tags are used in the site's search engine, and make it easier for others to find your post!

 

Why should I post in a product-specific Space?

 

Our specialized Support resources such as Product Specialists, Product Managers, and Developers will often follow specific product spaces so that they get alerted to new discussions. By posting in the right product space, you'll have a better chance of getting targeted support for your issue!

 

What if I need a remote session?

 

If we can't solve your problem on PI Square, one of our engineers will assist you in creating a full Tech Support case, and will contact you directly to start a remote session. The process is seamless, and won't be any extra work for you.

 

What about the rest of PI Square?

 

OSIsoft Technical Support resources will be monitoring discussions in the All Things PI space and product subspaces to provide technical support and assistance. (For questions: Taylor McManus)

 

OSIsoft Development Support teams will be monitoring questions in the PI Developers Club space and sub-spaces there to help provide suggestions and answers to development questions. (For questions: Ahmad Fattahi)

 

OSIsoft Learning will be monitoring the Learn PI space and class content and will be replying to questions, comments, ideas, and suggestions within that area. (For questions: Kelsey Holstein)

 

OSIsoft PI Square team will be monitoring the Welcome to PI Square! space and will reply to your questions, comments, ideas, and suggestions within that area.(For questions: Jarita Sirois)

 

How do I get started?

 

Head on over to All Things PI and dive into the Community!

This post will contain an overview of an internship project designed to showcase the value of the PI System in a Smart City context. The project was undertaken at the Montreal OSIsoft office, by two interns – Georges Khairallah and Zachary Zoldan  – both of whom are studying Mechanical Engineering at Concordia University in Montreal.

As a proof of concept for the PI System as a tool for the management of smart cities, we chose to bring in data from public bike sharing systems. We then set about collecting data from other sources that may affect bike share usage, such as weather, festivals and events, public transit disruptions, and traffic. In undertaking this project, we hope to better understand the loading patterns of bike sharing systems, analyze how they react to changes in their environment, and be able to predict future loading.

 

 

 

What is a Smart City?

 

A smart city implements technology to continuously improve its citizens’ daily lives. By analyzing data in real time, from the movement of people and traffic to noise and pollution, a smart city is able to understand its environment and quickly adjust and act accordingly to ensure it maximizes the usage of its resources. A smart city’s assets can all be monitored in real time, something which is becoming easier with the emergence of IoT technologies.

 

We chose to monitor bike sharing systems due to the abundance of live and historical open data, and the numerous analyses which can be run on the system. Bike sharing systems represent one aspect of a smart city, and by demonstrating what can be done with just this data, we can show the potential value of the PI System in a Smart City context.

 

 

 

What is a bike sharing system?

 

A bike sharing system is a service which allows users to rent bikes on a short term basis. It consists of a network of stations which are distributed around a city. Each station consists of a pay station, bike docks, and bikes that the user can take out, and drop off at any other station.  Bikes are designed to be durable, resistant to inclement weather, and to fit the majority of riders.

Users can be classified as “members” or “casual users”. Members are those who pay an annual, semi-annual, or monthly fee for unlimited access to the bike sharing network. Members access a network with a personal key fob, allowing the bike sharing system to keep a record of a members’ trips. Casual users pay for a one-way, 24-hour or 72-hour pass using their credit card at the pay station.

 

o1.png

 

Every bike is equipped with a unique identifier in the form of an RFID tag, which keeps track of the station it was taken out from and returned to. Coupled with a member’s key fob identification, or a 4-digit code generated by the pay station for a casual user, we can track each trip and determine whether it was taken by a member or a casual user.

 

 

What datasets were available to us?

 

We had access to live open data from numerous bike sharing systems, from cities such as Montreal, New York City, Philadelphia, Boston and Toronto. Each bike sharing system posts data in the form of a JSON document, updated every 10-15 minutes, containing data pertaining to individual bike stations. It reported back with the number of docks available and the number of bikes available at each station, as well as station metadata such as station name, longitude/latitude position, and station ID.

Example Live Data.png

We also had access to historical trip data from different bike sharing systems. This data came in the form of CSV files, with each row entry representing a single bike trip. Each individual trip is tracked using an RFID chip embedded in the bike, and can tell us the start time/end time of a trip, start station/end station and whether the trip was taken by a member or a casual user.

The specifics of the historical data vary from system to system. Montreal’s BIXI system only offers the trip duration and account type on top of the trip information, whereas New York City’s CITIBike also offers data on member’s gender and age.

 

o99.png

 

We then brought in data sets which we thought might affect bike usage – data such as weather data, festival and events data, traffic data, and public transit disruptions. Weather data was collected from an open weather API, and public transit data was collected using a web scraper and the PI Connector for UFL.

Importing data into the PI System:

Much of our data was text-based, so we chose to use the PI Connector for UFL to import the data into our PI System. The Connector was set up using the REST endpoint configuration for most of our data sources. It was also set up to create PI AF elements and our PI AF hierarchy.

We stored most of our live data as PI Points, while the historical trip data was stored as Event Frames.

 

To learn more about using the PI Connector for UFL to create event frames, click here.

To learn more about using the PI Connector for UFL as a REST endpoint, click here.

To learn more about creating PI AF elements with the PI Connector for UFL, click here.

 

Historical Data Analyses:

 

The historical bike trips have a designated start and end time – to us, it seemed like the best way to store this data was as Event Frames, with an Event Frame for each bike trip. We used the PI Connector for UFL to create Event Frames from our CSV files; in total, we loaded more than 5 million Event Frames into our PI System!

We used the PI Integrator for BA to publish various “Event Views”, which we could then access and visualize using a Business Intelligence tool such as Microsoft Power BI.

 

June 2015 Trips vs Rain.png

Above, we can clearly see that the days with the highest usage had zero rainy events, in other words, they were sunny days (as shown by the variable “temp_numerical”), whereas the days with the fewest trips had the most rainy events.

 

o5.png

Above, we have a dashboard that represents the trips from June 2015. It holds a total of 556,000 trips.

On the top left, we can observe bubbles that represent the locations of the stations. The color of each bubble shows which neighborhood it belongs to, and the size of the bubble indicates the number of trips made from that station.

On the bottom right, we can determine that the “Le Plateau” neighborhood is the most popular, with “Downtown” coming in second place.

Next, we can slice through “Le Plateau” using the donut chart on the top right to determine whether all of the stations were equally used, or whether some stations outperformed others.

 

o56.png

It looks like some stations outperform others; why is that? Although the two stations shown are near each other, one of them holds 7,140 trips whereas the other has only 2,378. The station that is nearest to the metro station experiences the most trips because it is the path of least resistance.

 

o7.png

 

The same was the case for the “Downtown” neighborhood!

Since the bike share companies know approximately how much revenue each bike station generates, the city can make better financial decisions about withdrawing under-performing stations, because at some point maintenance costs outweigh profit.

 

 

 

Live Data Analyses:

 

Using AF analyses and the live data available, we created several KPIs in order to keep track of usage at each station throughout a city’s bike sharing network. These include station utilization (how empty a station is), and 4/8/12/24 hour combined in/out traffic events.

 

Let's take a look at the 4-hour combined in/out traffic events in Montreal, over one weekday:

 

 

4 Hr total traffic - Montreal only.png

We see that there are two characteristic peaks, one at 11 am (representing traffic from 7 am - 11 am) and another at 8 pm (representing traffic from 4 pm - 8 pm). As expected, this corresponds to commuting times for the standard 9-5 workday. But is this trend repeated on the weekends?

Let’s look at the same metric, but over one week. Different colors indicate different days. (Saturday-Friday, Left-Right)

We can see the two characteristic weekday peaks which were present in the previous graph, but the usage on weekends tells a different story. There is only one usage peak on the weekend, since users are not using the bikes to commute to work.

 


Can we use the PI system to explain why the midday usage on Monday has dropped off compared to other weekdays? Let’s bring in our weather data and see if it had any effect on the usage.

o8.png

The added purple line, which varies from 0 to 1, acts as a marker for when it was raining in Montreal, with "1" representing rainy conditions. Now we have a clear explanation for why Monday, July 18th experienced a drop in usage during the day: it was raining!

 

o9.png

 

 

 

Predicting future demand:

 

A key issue which bike share system managers must deal with is the need to ensure bikes are available to users at all times. This means they often need to redistribute bikes from one station to another to ensure bike availability. System administrators need to be able to predict usage at a station level and on the system level to account for this.

Using data collected with the PI System, we can predict future usage at the station level and at the network level. We looked at the value at a given time, say 3:00 PM on a Monday, and averaged the values from the same time over the past three weeks. We were able to accurately predict future usage within a margin of error of 2%-5%. We store the forecasted usage using AF analyses and the PI System’s “future data” option.
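
As a rough sketch of that averaging logic (the project itself implemented it with AF analyses writing to future data tags), assuming usage samples keyed by timestamp:

from datetime import datetime, timedelta

def forecast_usage(history, target_time, weeks=3):
    # Average the usage observed at the same weekday and time over the previous weeks.
    samples = [history[target_time - timedelta(weeks=w)]
               for w in range(1, weeks + 1)
               if (target_time - timedelta(weeks=w)) in history]
    return sum(samples) / len(samples) if samples else None

# Hypothetical usage: predict Monday 3:00 PM from the three previous Mondays.
history = {
    datetime(2016, 7, 4, 15): 120,
    datetime(2016, 7, 11, 15): 132,
    datetime(2016, 7, 18, 15): 95,
}
print(forecast_usage(history, datetime(2016, 7, 25, 15)))  # ~115.7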

o43.png
Mapping Smart City assets with the PI Integrator for ESRI ArcGIS and ESRI Online:

To learn more about mapping smart city assets with the PI Integrator for ESRI ArcGIS, click here.

 

From the beginning of the project, we knew that the geographical location of our assets would be incredibly important for our analysis. The usage of any bike station is directly tied to its location – those located closer to major office buildings, pedestrian malls, and points of interest will experience an increase in usage. By mapping our assets (along with all associated PI points and KPIs), we can view any geographical trends that might emerge.

 

We started off by mapping our AF assets (bike stations in Montreal) using ESRI Online. The examples below show how we can view our elements in a much more intuitive way, and combine secondary data sets such as the location of bike paths, subway stations and bike accidents. We set the marker size to be tied to a specific station attribute, the 4-hour total in/out traffic events.

 

We can click on any circle and bring up a configured pop-up showing all relevant AF attributes and KPIs.

 

o87.pngo88.png

We can then further enrich this data by adding other map layers. For example, we can see that most bike stations coincide with the location of bicycle paths (left) and the location of subway stations (right).

 

o90.pngo91.png

 

 

Conclusion:

 

In this post, we were able to show the value of the PI System by analyzing bike sharing systems, just one small aspect of a smart city. The PI System excelled at data collection, time-based analysis, and helping us visualize numerous datasets. We could have performed a much more in-depth analysis if we had access to other closed smart city datasets such as power consumption, waste management and more. The potential for further analysis is there; all we need is access to the data.

 

 

 

Demo Video

chuck

OSIsoft's New Offices

Posted by chuck Employee Aug 14, 2016

Have you seen the progress of construction for OSIsoft's new offices in the new San Leandro Tech Campus (SLTC) ?

 

Just this month the base of the sculpture Truth in Beauty was installed.  The building is fully closed in, and interior finishing is progressing rapidly.  The parking structure is quickly coming together.

If you are interested in watching the progress, there are some videos on YouTube you can enjoy!

 

OSIsoft Headquarters (new) construction - view 1 -

https://www.youtube.com/watch?v=5k-FzZsIhwM

OSIsoft Headquarters (new) construction - view 2 -

https://www.youtube.com/watch?v=YbrnJ84b490

Parking structure construction -

https://www.youtube.com/watch?v=ggcJS73LfF0

View from the new office building - currently the tallest building in San Leandro

https://www.youtube.com/watch?v=mNVcprtuyC0

Sculpture installation - part 1 -

https://www.youtube.com/watch?v=D1gustPKpbY

 

... and more information from the official website for the San Leandro Tech Campus -

http://sl-tc.com/

 

(No, I didn't make any of the videos, but I am really impressed with what others have done to capture the SLTC construction progress!)

Last time we showed how to choose a database and print some information. Today we will look at how to get an attribute value.

So we reached the database already:

PISystems myPIsystems = new PISystems();
AFDatabases myDatabases = myPIsystems["W-SRV1"].Databases;
AFDatabase myDatabase = myDatabases["test"];

How to reach the element

I have this structure in my database:

If I would like to reach Element1, I need this code:

AFElements myElements = myDatabase.Elements;
AFElement myElement = myElements["Element1"];

This is OK, but what should I do if I would like to reach child_Element1?

 

There are two options. First, do the same again and select the element from the elements under Element1:

AFElements myElements2 = myElement.Elements;
AFElement myElement2 = myElements2 ["child_Element1"];

Or use the path in the beginning:

AFElements myElements = myDatabase.Elements;
AFElement myElement = myElements["Element1\\child_Element1"];

NOTE: The \ symbol has a special function in strings, so we need to use \\, which is translated to \ when the string is used.

How to reach the attribute

Now we have myElement and we would like to get an attribute under this element. This is similar to before:

AFAttributes myAttributes = myElement.Attributes;
AFAttribute myAttribute = myAttributes["Attribute1"];

As you see, nothing changed; we still use the same logic as with elements.

 

How to get the value

To get the value, we first need to get it as an AFValue:

AFValue myAFValue = myAttribute.GetValue();

This is the current value; how to get a value for a specific time will be shown in the next chapter.

We can get the float value from the AFValue now. The Value in the AFValue is of type object, so I decided to cast it to float (my value is a float and I know it).

float myValue = (float) myAFValue.Value; 

Printing information

Now we can print some information:

Console.WriteLine("Attribute path: {0}", myAttribute.GetPath());
Console.WriteLine("Value: {0}", myValue);
Console.WriteLine("Timestamp: {0}", myAFValue.Timestamp);
Console.WriteLine("Value type: {0}", myAFValue.ValueType);
Console.WriteLine("Source PI Point: {0}", myAFValue.PIPoint);

Results

Recapitulation

So here is the code with shortcuts

AFElement myElement = myDatabase.Elements["Element1\\child_Element1"];
AFAttribute myAttribute = myElement.Attributes["Attribute1"];
AFValue myAFValue = myAttribute.GetValue();
float myValue = (float) myAFValue.Value;

End

So now we are able to get the current value of an attribute and some other information from it. How to get a value from a specific time will be the topic of the next chapter.

chuck

Hardware LifeCycles (OpenVMS)

Posted by chuck Employee Mar 29, 2016

A couple years ago I posted articles inquiring as to the state of OpenVMS and PI System dependencies on OpenVMS within our PI System user community.  Now that a couple of years or so have passed - just the other day I was thinking about this again.  There still remain a number of PI systems, mostly PI3 servers, which depend on interface nodes or application nodes running OpenVMS.  A goodly number of these OpenVMS systems, based on VAX and Alpha platforms, are considered mission or business critical.

 

Just how long can we expect this stuff to run that we type on and archive PI data with?  Let's think about just the disk drives for a moment… 

Disk drives for "modern" OpenVMS systems had advertised very long lifetimes and smart hardware.  Disk drives in OpenVMS systems which were new in the late 1990s and early 2000s were pretty capable and included advanced features helping to implement RAID and caching in drive hardware, as compared to other computing systems of the same era, which may have implemented such features in disk controllers, in microcode on processor boards, or in operating system disk device drivers.

 

A typical Wide SCSI drive from this era might have advertised an 800,000-hour MTBF (nearly 100 years!); however, that was a hugely big "however": MTBF was affected by handling, storage, power-up method (and power failures), error rate, and so on.

  • Nearly 20% of storage capacity of such drives was reserved for system/hardware functions such as engineering partition, dedicated head landing zone, sectors reserved for error replacement, and auto calibration/head alignment functions.
  • These disks anticipated an average of as many as 100 sectors failing per year of operation. 
  • The way DC power was applied to the drive and the manner of application of the first SCSI controller commands after power up affected drive life and lifecycle.

As a consequence of these features, disk life was expected to be only 5 years in normal use.  Here at OSIsoft we are experiencing average lifetime of these disks to range from 5-8 years.

 

Similarly limited lifetimes apply to our OpenVMS servers' disk controllers, memory boards, processor boards, and so on.  The last VAX/VMS system was built in September and shipped in December of 2000.

And we know the last Alpha OpenVMS system was ordered in April 2007 and shipped later that year.  We can expect nearly all VAX and Alpha based OpenVMS systems to be past end of life at this point in time.

 

Reference:
Seagate Barracuda ST15150N specification sheet and reference manual

Hi all,

 

I have created a .Net application that can help in the maintenance of PI Datalink Reports in Excel.   Some of you may find this extremely useful.

 

This program was developed out of the need for a project that I was working on where we are renaming around 100,000 PI tags as part of a PI tag standardization effort.  Renaming PI tags has a detrimental effect on PI DataLink reports and can also impact VBA code that may contain PI tag names.

 

I have a screen shot of the GUI below that shows where you select an input folder which contains all the Excel files that you want to convert, a conversion file which contains all the old and new PI tag names, and an output folder to store the converted files.  I have tested this with 100,000 tag names.  You can easily handle all of the Excel reports in your system in a single batch, test them, and then reload them to your production folder afterwards.

 

You can get a trial version of the software at http://www.exceltextswitcher.com

 

Enjoy...

 

 

Brad

The LazyPI project was created to hopefully provide an API that feels more like the native AF SDK. The project has come along nicely and is ready for testing. Testing requires the development of unit tests and access to a PI Web API server. The currently implemented pieces should allow for basic management of AFElements, AFEventFrames, and AFAttributes. LazyPI will simplify interactions with the PI Web API and reduce the required knowledge of web APIs.

 

Contributors Needed

Anyone who might be interested in helping to test or expand the library, please see the GitHub page. If you would be interested in contributing in any way, please fill out this super short application.

 

Outstanding Tasks

  • Find and report bugs
  • Create Unit Tests for all implemented pieces
  • Implement "Controller" objects that are missing (ex. AFTable, Calculations)
  • Iron out design flaws

 

Disclaimer

This code is still not ready for production as of Feb. 15, 2016. Please do not use LazyPI in its current state for important projects.
