
PI Developers Club

24 Posts authored by: MichaelvdV@Atos

vCampus Live! 2011 is right around the corner: in less than a month (on Nov. 30th and Dec. 1st) we will have our event at the Palace Hotel in San Francisco.

 

Many of you have already registered; if you have not, check out the agenda and the abstracts. This year we will have three tracks of hands-on sessions and one track of presentations. We also have Vox Pop sessions, Roundtable discussions and the Developers Lounge.

 

We are very pleased to have two new security presentations on Track 4 by Joel Langill from ScadaHacker.com. Check out his website and blog: the aim of scadahacker.com is to bring security information to those involved in Industrial Control Systems in a simple and easy to understand manner.

 

Joel has worked for more than 25 years in the industrial automation and control industry. Joel's unique approach to security emphasizes the processes and people used to implement security programs, rather than relying solely on technology or "products": the best strategy for comprehensive security balances People, Processes and Products. His perspective has been sought and cited by numerous industry publications focused on both industrial automation and information security. Most recently he has played a central role in the analysis of the Stuxnet worm and its implications, including new methods of mitigating current and future attacks on critical infrastructure.

 

Joel is also the Director of Critical Infrastructure and SCADA representative for the Cyber Security Forum Initiative, where he was a lead contributor to a report on the use of cloud in cyber warfare. He is a Certified Ethical Hacker, Certified Penetration Tester, Cisco Certified Network Associate, and TüV Functional Safety Engineer.

 

He will be giving two presentations on Track 4:

How Stuxnet Spreads (30 mins. Track 4, Day 1 04:15 pm - 06:00 pm block)

The Stuxnet worm is a sophisticated piece of computer malware designed to sabotage industrial assets. The worm used both known and previously unknown vulnerabilities to install, infect and propagate, and was powerful enough to evade state-of-the-practice security technologies and procedures, including firewalls, authentication, and anti-virus software to name a few.

 

Since the discovery of Stuxnet, there has been extensive analysis of Stuxnet’s internal workings. What has not been discussed is how the worm might have migrated from the outside world to supposedly isolated and secure industrial control systems (ICS). Understanding the routes that a directed worm takes as it targets an ICS is critical if these vulnerable pathways are to be closed for future worms.

 

This presentation is meant to provide a summary of how modern day cyber threats may work their way through even the most protected networks. It also takes a look at what can be learned from the analysis of pathways in order to prevent infection from future worms - whether targeted or not. If the systems that control critical infrastructure are to remain safe and secure, then owners, operators, integrators, and vendors need to recognize that their control systems are now the target of sophisticated attacks. Improved defense-in-depth postures for industrial control systems are needed urgently. Waiting for the next worm may be too late.

Network Architecture and Active Directory Considerations for the PI System (30 mins. Day 1 Track 4, 04:15 pm - 06:00 pm block)

 

 

Security standards for industrial control systems (ICS) generally emphasize network segregation between corporate information and automation networks. Typical PI System information flow requires connection with data sources and potentially users residing on automation networks. Careful consideration should be given to network design and Active Directory implementation.

 

Active Directory is very flexible and scalable, but can be quite complex in a large enterprise. While there may not be a one-size-fits-all approach, this presentation will highlight common do's and don'ts related to PI System deployment with Active Directory. It will also provide insight into new features that can help improve user authentication throughout the architecture without compromising security within any particular network zone or communication segment.

 

Please do not forget to register; seating for the hands-on sessions is limited!

 

 

Daylight Saving Time (DST) is the practice of 'advancing' clocks during the summer. Normal practice is to advance the clock one hour in spring and to adjust it backwards in autumn. In Europe we adjusted the clocks last weekend (Oct. 30th), and the US will do the same next weekend (Nov. 6th). I thought this would be a good time to talk a little about DST, and what it means for us programmers.

 

DST is a rather 'weird' phenomenon. There is a lot of dispute over the actual benefits of having DST: the original arguments of reducing energy usage and crime have been proved and disproved by numerous studies. Some countries stopped using DST, only to later resume using it. Some countries stopped using it altogether.

 

Here is a map of countries that observe DST.

 

6254.DaylightSaving_2D00_World_2D00_Subdivisions.png

[World map of DST observance - legend: DST observed / DST no longer observed / DST never observed]
As you can see, almost all European countries and Northern America observe DST. Notice that the usage of DST can differ even inside a country: Arizona and Hawaii (US states) and parts of Canada do not observe DST, while the rest of the country does. The coordination of shifting clocks in areas that do observe DST can also differ.

If you look at the Windows timezones, you can see that Arizona has its own timezone designation, although it has the same UTC offset as Mountain Standard Time (MST). MST changes to Mountain Daylight Time (MDT) in summer, but the Arizona timezone doesn't.

8400.arizona_5F00_timezone.png

The European Union switches all at once (01:00 AM UTC), but most of America switches at 02:00 AM local time, so every timezone switches at a different (absolute) time. During the 1950s and 1960s each US locality could start and end DST whenever it wanted; in one year, 23 different pairs of start and end dates were used in Iowa (a US state) alone. On an Ohio to West Virginia bus route, passengers had to (officially) change the time on their watches 7 times on a 55 km bus ride.

As it seems, DST can create quite some confusion and chaos, especially when dealing with information systems spanning several countries and continents. I'm sure we have all encountered issues when dealing with timezones and DST. You will still encounter this with legacy systems, and if you are not careful, you can even experience it in your own .NET applications.

For instance, consider the following piece of code:
7875.snippet1.png
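The original code is embedded as an image; a minimal sketch of what it likely contains, based on the description below (the exact start time and loop count are assumptions):

using System;

class Program
{
    static void Main()
    {
        // Start shortly before the European DST switch of Oct. 30th 2011
        // (03:00 CEST falls back to 02:00 CET).
        var time = new DateTime(2011, 10, 30, 0, 0, 0);

        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine(time);
            time = time.AddHours(1);
        }
    }
}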
Here in Europe, DST ended this year on October 30th, when clocks went back from 03:00 CEST to 02:00 CET. That means the for loop will iterate over the DST switch.

At first, you would think and hope that this is reflected in the dates printed by the Console.WriteLine, and that this snippet of code would produce something like this:

 

8463.snippet2_5F00_output.png

 

 But sadly, it does not. It produces the following output (thus not observing the DST switch).

 

 4555.snippet1_5F00_output.png

 

Why is this? We add one hour with each iteration, so logically it should print 02:00:00 AM twice. The issue here is the DateTime structure: DateTime does not contain information about timezones. From .NET 2.0 onwards, DateTime does have a 'Kind' property, of type DateTimeKind. The Kind of a DateTime struct can be 'Local', 'Utc' or 'Unspecified'. To provide backwards compatibility with pre-.NET 2.0 versions, the Kind of a DateTime instance will always be 'Unspecified' unless the kind is specified in the constructor or set with the static DateTime.SpecifyKind method. To further provide backwards compatibility, an 'Unspecified' DateTime behaves like a 'Local' DateTime.

 

In our example, we have not set the 'Kind' of the DateTime struct in the constructor, so it behaves like a Local DateTime. Going from 01:00 AM to 02:00 AM by adding one hour, and then from 02:00 AM to 03:00 AM by adding another, is therefore expected behavior. We can change this behavior by setting the Kind of the DateTime struct to UTC in the constructor. Basically, we work only with UTC until we present output to the user. When we want to present the output, we use the ToLocalTime() method to format the date according to the current timezone and culture information.

 

1464.snippet2.png
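Again the original snippet is an image; a sketch of the UTC-based version described above (the start time is chosen, as an assumption, so the loop crosses the same switch):

// Work in UTC internally; 22:00 UTC on Oct. 29th is 00:00 CEST on Oct. 30th.
var time = new DateTime(2011, 10, 29, 22, 0, 0, DateTimeKind.Utc);

for (int i = 0; i < 5; i++)
{
    // Only convert to local time when presenting the value.
    Console.WriteLine(time.ToLocalTime());
    time = time.AddHours(1);
}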

 

This produces the following output:

 

1348.snippet2_5F00_output.png

 

A very good alternative when dealing with different timezones is the DateTimeOffset struct. This structure is a timezone-aware DateTime alternative that was introduced in .NET 3.5. It represents a point in time relative to UTC. If we want to achieve something similar, we can use the following code:

 

4087.snippet3.png
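A sketch of the DateTimeOffset variant explained in the next paragraph (start time and iteration count are again assumptions):

// Create a local DateTime and wrap it in a DateTimeOffset,
// which captures the UTC offset in effect at that moment (+02:00 CEST here).
var local = new DateTime(2011, 10, 30, 0, 0, 0);
var time = new DateTimeOffset(local);

for (int i = 0; i < 5; i++)
{
    // ToLocalTime() re-applies the offset valid at that instant,
    // so the switch from +02:00 to +01:00 shows up in the output.
    Console.WriteLine(time.ToLocalTime());
    time = time.AddHours(1);
}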

 

Here we create a local DateTime object, and use it to instantiate a DateTimeOffset. It is not necessary to supply a local DateTime in the DateTimeOffset constructor; it also supports constructors almost identical to those of DateTime. We iterate through the hours in the same fashion, and we print the localized time to the console. The output shows what we want, and it even includes the timezone offset information. You can clearly see the UTC offset switch.

 

1614.snippet3_5F00_output.png

 

This output is possible because DateTimeOffset stores its information in UTC format, so calculating with the dates also occurs in UTC. It is best practice, and really convenient, to always use UTC internally and only convert to a local time when it has to be presented (either to a user, or to another (legacy) system that requires a certain timezone).

 

The PI System works in the exact same way, so you don't have to worry about DST changes. PI uses UTC time internally, and only when data needs to be presented to the user does it use the localized time. This means that internally, PI does not have 23 or 25 hour days: only 24 hour days. There is a short video on the YouTube OSIsoftLearning channel here.

 

 For instance, this is what you would get if you create a trend in PI ProcessBook covering the period of a DST switch (in this case, advancing one hour in 2010):

 

2728.pb_5F00_dst_5F00_switch.png

 

 

 

Sometimes, however, you have to know the number of hours in a day (especially when creating daily averages or aggregates). The DaylightTime class contains the start and end timestamps of DST for a given year. You can obtain a DaylightTime instance from a System.TimeZone. To get the number of hours in a day for your current timezone:

 

3704.snippet4.png
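The helper in the image is not reproduced here; a sketch along those lines (class and method names are illustrative):

using System;
using System.Globalization;

static class DstHelper
{
    // Number of hours in the given calendar day for the current timezone.
    public static double HoursInDay(DateTime day)
    {
        DaylightTime dst = TimeZone.CurrentTimeZone.GetDaylightChanges(day.Year);

        if (day.Date == dst.Start.Date)
            return 24 - dst.Delta.TotalHours;   // clocks advance: a short day
        if (day.Date == dst.End.Date)
            return 24 + dst.Delta.TotalHours;   // clocks fall back: a long day

        return 24;
    }
}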

 

And we can use it like so:

 

4645.snippet5.png
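The usage shown in the image is probably close to this (the dates, for the 2011 European switches, are an assumption):

Console.WriteLine(DstHelper.HoursInDay(new DateTime(2011, 3, 27)));   // 23: start of DST
Console.WriteLine(DstHelper.HoursInDay(new DateTime(2011, 10, 30)));  // 25: end of DST
Console.WriteLine(DstHelper.HoursInDay(new DateTime(2011, 10, 31)));  // 24: a normal day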

 

The output will then be:

 

5773.snippet5_5F00_output.png

 

 

 

Some good reads about working with Dates, Times, TimeZones and DST in .NET:

 And now for some fun and off-topic facts on DST:

  • The proper description of DST is 'Daylight Saving Time', not 'Daylight Savings Time'
  • A man, born just after 12:00 a.m. DST, circumvented the Vietnam War draft by using a daylight saving time loophole. When drafted, he argued that standard time, not DST, was the official time for recording births in his state of Delaware in the year of his birth. Thus, under official standard time he was actually born on the previous day--and that day had a much higher draft lottery number, allowing him to avoid the draft.
  • While twins born at 11:55 p.m. and 12:05 a.m. may have different birthdays, Daylight Saving Time can change birth order -- on paper, anyway. During the time change in the fall, one baby could be born at 1:55 a.m. and the sibling born ten minutes later, at 1:05 a.m. In the spring, there is a gap when no babies are born at all: from 2:00 a.m. to 3:00 a.m.
  • In the U.S., Arizona doesn’t observe Daylight Saving Time, but the Navajo Nation (parts of which are in three states) does. However, the Hopi Reservation, which is entirely surrounded by the Navajo Nation, doesn’t observe DST. In effect, there is a donut-shaped area of Arizona that does observe DST, but the “hole” in the center does not.

I couldn't verify these, so I'm not 100% sure whether they are true:

  • Daylight saving time once single-handedly thwarted a terrorist attack, causing the would-be terrorists to blow themselves up instead of other people. In September 1999, the West Bank was on daylight saving time while Israel was on standard time; West Bank terrorists prepared bombs set on timers and smuggled them to their associates in Israel. As a result, the bombs exploded one hour sooner than the terrorists in Israel thought they would, resulting in three terrorists dying instead of the two busloads of people who were the intended targets.
  • I personally cannot really believe this one; I hope someone from the US can verify it: To keep to their published timetables, trains cannot leave a station before the scheduled time. So, when the clocks fall back one hour in October, all Amtrak trains in the U.S. that are running on time stop at 2:00 a.m. and wait one hour before resuming. Overnight passengers are often surprised to find their train at a dead stop and their travel time an hour longer than expected. At the spring Daylight Saving Time change, trains instantaneously become an hour behind schedule at 2:00 a.m., but they just keep going and do their best to make up the time.

 

 

Sources for this article:

Introduction to Project Roslyn

I have to admit that I was one of those people who were quite worried before the big Microsoft BUILD event. There were a lot of rumors about Microsoft killing off Silverlight and WPF, and going with JavaScript and HTML as the preferred application development tools.

 

After I watched some of the videos of the BUILD conference I was more at ease, but still quite worried about the direction that was taken. This was my state until I watched this presentation by Anders Hejlsberg.

 

Anders talks about future directions for C# and Visual Basic. There is a lot of interesting stuff happening. The adoption of (true) async programming in the .NET framework is very promising. One thing really made my day, and got me that same enthusiasm back as when Microsoft announced Silverlight 2. Like back then, I was happy like a little puppy.

 

I remember seeing something like this at the local Microsoft DevDays when the beta of .NET 4 was announced. The presenter showed a (very simple) C# interactive console, and told us this was something they were working on. The thought of having real C# code evaluation sounded like a dream to me. I have struggled a lot with evaluating simple expressions in C# (up to the point of writing my own interpreter for it). Having this in the framework would be huge. We had to wait a few years, and will probably have to wait until after the next .NET release... but it is finally happening!

 

I’m talking about a new project called ‘Project Roslyn’. Microsoft made Roslyn available as a CTP about a week ago. Roslyn will bring a whole variety of new and very powerful features to the C# and VB.NET languages. I think we (including Microsoft itself) cannot envision the implications this new technology is going to have. Basically, Roslyn is a new compiler for VB.NET and C#, written in VB.NET and C# respectively. And it’s all opened up!

 

Compilers are typically seen as a 'black box', at least if you are not a compiler developer. They work quite simply: you put source code in, and you get back something you can execute. What happens in between is something magical for most of us.

 

Well, most of us know some things about how a compiler works: there is the parsing of the source code, the creation of the syntax tree, creating a list of all the symbols, binding the symbols with the objects, and so on.

 

Roslyn opens all this up to us developers. We can now ‘interact’ with the compiler. This will open up a whole new world of possibilities like code generation, code evaluation, code analysis, meta-programming and refactoring.

 

Roslyn exposes the C# and VB compiler’s code analysis to us by providing an API layer that mirrors the compiler pipeline.

 

4670.compiler_5F00_pipeline.png
 
First, we have the parse phase. Here the source is tokenized and lexical analysis is performed. The result is called a 'syntax tree': a tree representation of the syntactic structure of the source code. Then we have the 'declaration phase', where declarations are analyzed to form named symbols. Next, the identifiers in the code are matched to the symbols in the bind phase. The last phase, the emit phase, takes care of emitting all the information that has been gathered and built in the previous phases into an assembly.

 

8765.compiler_5F00_pipline_5F00_and_5F00_api.png
 
If we look at the structure of the Roslyn project, each phase is represented by an object model that gives information about that phase: the parsing phase is exposed as a syntax tree, the declaration phase as a hierarchical symbol table, the binding phase as a model that exposes the result of the compiler's semantic analysis, and finally the emit phase as an API that produces IL byte codes.

 

This is all nice, but very theoretical. Now for a real-world example and application. First we will need to get Project Roslyn installed.

 

To get started with Roslyn you will need:

 

Microsoft Visual Studio 2010 SP1 (download here)
Microsoft Visual Studio 2010 SDK SP1 (download here)
Microsoft ‘Roslyn’ CTP (download here)

 

Once you have everything installed, you should go to the install directory. This is typically

 

C:\Program Files\Microsoft Codename Roslyn CTP\ for x86 systems, or
C:\Program Files (x86)\Microsoft Codename Roslyn CTP\ for x64 systems.

 

You should definitely check out the 'Readme' directory. There is a readme HTML application there that will take you through some of the basics of the libraries and the installed Visual Studio templates. Next is the 'Documentation' directory. There are 13 documents there, which will be your primary source of information when working with Roslyn.

 

One of the first and simplest real-world examples is the possibility of an interactive C# console, otherwise known as a REPL (Read-Evaluate-Print Loop) console. You can use the interactive C# console from Visual Studio. You will find it under View > Other Windows > C# Interactive Window.

 

The interactive window is a prime example of the possibilities of Roslyn. You can create variables, create expressions, etc. You can really use C# in an interactive manner.

 

6874.interactive1.png
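The screenshot is not reproduced here, but a session in the Interactive Window looks roughly like this (the values are purely illustrative):

> var x = 42;
> x * 2
84
> string.Format("the answer is {0}", x)
"the answer is 42"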
 
You can also create some more complex statements, for example:

 

3630.interactive2.png

 

This is an even more complex example, where we create a new class called 'Person'. It has Name and Age properties, and a Speak() method. This example really shows the 'magic' of Roslyn.

 

8750.interactive3.png
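Roughly what the screenshot shows, typed straight into the Interactive Window (the member names and values are illustrative):

> class Person
> {
>     public string Name { get; set; }
>     public int Age { get; set; }
>     public void Speak()
>     {
>         Console.WriteLine("Hi, I'm {0} and I'm {1} years old.", Name, Age);
>     }
> }
> var p = new Person { Name = "Michael", Age = 28 };
> p.Speak();
Hi, I'm Michael and I'm 28 years old.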
 
The really exciting thing here is not only the (on-the-fly) evaluation of the code, but especially the automatic indentation of the class declaration and the full IntelliSense and auto-completion. The console does not just evaluate a bunch of code (something we could do earlier with the Mono REPL); we also get dynamic IntelliSense and auto-completion. We had this in Visual Studio, but now it's open to us!

 

I would really like to encourage you to install Roslyn. The C# Interactive Window alone is already of great value. You will no longer have your Visual Studio projects folder filled with ConsoleApplication1 to ConsoleApplication123 just because you had to test some code.

 

In the next blog post we will dive deeper into Roslyn, and we will start writing some code using Roslyn. We will be making use of the ScriptEngine that Roslyn provides. We are going to make a simple calculation engine for the PI system, and we will see that with a few lines of code we can already build something powerful and useful.

 

 

During the BUILD conference Microsoft announced a lot more details about Windows 8. There are some big changes coming to the OS we all love. In fact, Microsoft is calling it 'the biggest change since Windows 95'. That is quite a bold statement, and with all the radical changes, also quite a gamble. Microsoft is drastically changing its flagship product, the real centerpiece of its business. The 'old' Windows interface is familiar to hundreds of millions of people. I think a lot of us will have to get used to the new 'Metro' style apps.

 

But not only has the look and feel of Windows 8 changed; maybe the even bigger changes are 'under the hood'. A lot of the internal libraries and APIs have undergone a complete overhaul. All Windows subsystems have been reimagined to be modern.

 

Windows 8 is designed to be used on desktop PCs and touch devices. It supports the ARM CPU, an architecture that is used a lot in tablet computers and other smart devices.

 

0753.20110914080134_2D00_windows8_5F00_2.jpg

 

While Windows 8 got a lot of attention, an equally interesting product got a lot less attention. I'm talking about the big server brother of Windows 8: Windows Server 8!

 

Windows Server 8 has the newest Hyper-V hypervisor, which supports the VHDX format for virtual storage. Virtual disks can now be 16TB in size, where the previous maximum size was 2TB. Hyper-V 3.0 supports virtual machines with 32 cores and 512GB of memory. Also new are two forms of deduplication, a technique that should ensure more efficient use of storage and memory capacity.

 

Page Combining allows identical memory pages to be combined into one page, which should give a performance boost to virtual machines. Storage Spaces lets a server with multiple disks combine them into one storage pool, which can then be served over the network; this does not require a SAN array.

 

It seems the message with WS8 is 'virtualization, virtualization, virtualization'. This paradigm fits perfectly with the great demand for flexible computing (cloud computing).

 

One other change further emphasizes that paradigm: the removal of the Graphical User Interface (GUI) for the server products. Windows Server 2008 could already be installed without a GUI, but that option was somewhat unpopular. It seems WS8 is here to change that opinion. 'Graphical User Interfaces are for clients, not for server products', according to Jeffrey Snover, Lead Architect of Microsoft's Windows Server Division.

 

5432.winnt40serv.png

 

A lot of existing server software needs a GUI for configuration or operation. That's why the GUI is not completely disappearing. It's recommended that you use the GUI-less edition, unless you really have to use a GUI. There are two possibilities for users that want some kind of graphical capability: the full graphical shell (Metro style) can be installed, or you can choose a 'slim' version. The slim version does not include the Taskbar, Windows Explorer or Internet Explorer (among other items).

 

One of the major changes compared to the Windows Server 2008 GUI-less install is that you can change your installation type afterwards. If you installed the GUI-less version, you can later switch to one of the versions with a GUI. Maybe this will further boost users' confidence to install their server OS without a GUI. Another factor is that the original Windows Server 2008 Core did not have PowerShell; it only became standard with Windows Server 2008 R2 Core, and even then only 230 PowerShell cmdlets were installed by default. With Windows Server 8, almost 2,500 cmdlets are installed by default!

 

3324.Windows_2D00_Server_2D00_8_2D00_Windows_2D00_PowerShell_2D00_v3.png

 

The main reason the GUI is being removed is that a GUI uses CPU and memory which then cannot be used to perform 'server duties'. Especially in an age where servers are more likely to be part of a large cluster, there is no need for GUIs. In an ideal situation, cluster nodes should be controlled 'in bulk' instead of being configured individually.

 

The memory footprint of Windows Server 8 is far smaller when no GUI is installed: that installation only requires 512MB of memory. There is no real data yet on how much is needed in practice; Microsoft says it is still optimizing the product.

 

One other reason for having a server product without a GUI is that a GUI is a security risk. The more code you introduce on your system, the more likely that code contains bugs that can be exploited. Having no GUI for your applications means less code, and therefore less chance of having critical bugs that can be exploited. Jeffrey Snover commented that the number of critical security patches required for the Core installation is reduced by 50 to 70 percent. Having no GUI also means faster installation, something that is becoming more and more important with the dynamic nature of today's computing environments.

 

What does this mean for us developers? Well, at this point compatibility with existing products should be guaranteed if you install the full version with the GUI. Developers are encouraged to design their configuration options in such a way that they can also be set without a GUI, for instance with the help of configuration files or command-line options. There is a way to detect the installed WS8 edition, so your application can adapt accordingly.
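For illustration, on existing Windows Server versions this kind of detection can be done by reading the 'InstallationType' registry value; whether Windows Server 8 keeps this exact key is an assumption, so treat the sketch below accordingly:

using System;
using Microsoft.Win32;

static class ServerEdition
{
    // Reads HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\InstallationType,
    // which is "Server Core" on GUI-less installs of current Windows Server versions
    // (assumption: Windows Server 8 exposes the same value).
    public static bool IsServerCore()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion"))
        {
            var installationType = key != null
                ? key.GetValue("InstallationType") as string
                : null;

            return string.Equals(installationType, "Server Core",
                StringComparison.OrdinalIgnoreCase);
        }
    }
}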

 

7674.Windows_2D00_Server_2D00_8_2D00_The_2D00_completely_2D00_revamped_2D00_Server_2D00_Manager.png

 

A good example is the new Server Manager. The new Server Manager is a Metro style application (corrected: it's not a Metro style application, but it does have a new Office 365-like look), which uses PowerShell commands to control the server configuration.

 

Windows Server 8 supports both graphical interfaces and command-line tools. We are encouraged to make use of the 'Core' edition, without any GUI. There is no telling whether the GUI will totally disappear from our servers, but the concept of having no GUI seems very reasonable.

 

Edit: included a handy table from this source

 

 

Feature                              Server Core      Features on      Full
                                     Installation     Demand           Installation

Windows Core                         o                o                o
Windows PowerShell                   o                o                o
.Net Framework 4                     o                o                o
Server Manager                                        o                o
Microsoft Management Consoles                         o                o
A subset of Control Panel Applets                     o                o
All Control Panel Applets                                              o
Windows Help                                                           o
Windows Explorer                                                       o
Internet Explorer                                                      o


By now most of you will have gotten the news that Steve Jobs, co-founder and CEO of Apple, died yesterday.

 

He was one of the people who changed the computer market forever. I don't think anyone can dispute that he was a great visionary and a very creative man.

 

The reactions of many important people reflect this opinion:

 

I'm truly saddened to learn of Steve Jobs' death. Melinda and I extend our sincere condolences to his family and friends, and to everyone Steve has touched through his work. Steve and I first met nearly 30 years ago, and have been colleagues, competitors and friends over the course of more than half our lives. The world rarely sees someone who has had the profound impact Steve has had, the effects of which will be felt for many generations to come. For those of us lucky enough to get to work with him, it's been an insanely great honor. I will miss Steve immensely. 

 

- Bill Gates

 

Steve was among the greatest of American innovators - brave enough to think differently, bold enough to believe he could change the world, and talented enough to do it. By building one of the planet’s most successful companies from his garage, he exemplified the spirit of American ingenuity. By making computers personal and putting the internet in our pockets, he made the information revolution not only accessible, but intuitive and fun. And by turning his talents to storytelling, he has brought joy to millions of children and grownups alike. Steve was fond of saying that he lived every day like it was his last. Because he did, he transformed our lives, redefined entire industries, and achieved one of the rarest feats in human history: he changed the way each of us sees the world.

 

- Pres. Barack Obama

 

I personally was never a big Apple user or fan, but you have to admire the man. He really pushed the boundaries between what was real and what could be real. His strong belief in easy-to-use graphical user interfaces made the company a success, and it is still a big part of Apple's success today: from the NeXT Cube to the iPhone. Once every seven years or so he would present something revolutionary after the famous words 'Oh... and one more thing...'

 

While looking for information about the history of the company, I found this great documentary from 1996 called 'Triumph of the Nerds'. It is a three-part documentary (about 50 minutes per part) that explains the beginning of the PC revolution. It features Microsoft, Xerox, IBM, Apple, etc. I found it very educational and fun to watch. It really shows the politics and atmosphere of the day, and it also makes you feel really nostalgic.

 

Here are the Youtube links:

 

Triumph of the Nerds Part I: Impressing their friends

 

Triumph of the Nerds Part II: Riding the bear

 

Triumph of the Nerds Part III: Great artists steal

 

 

 

 

This time, no programming at 'Michael's programming blog'. Something totally unrelated to programming, and even to the PI System... but we could be witnessing one of the largest scientific discoveries of our time.

 

Researchers at CERN have discovered neutrinos travelling faster than light. It is possible that the results are due to systematic measurement errors, and the experiment hasn't been reproduced yet. But if this is true, the result will be revolutionary and will have an impact on different fields of science. I'm in no way qualified to explain all this, but below are a few good reads about the subject.

 

Reading material:

 

http://www.bbc.co.uk/news/science-environment-15017484

 

http://www.wired.com/wiredscience/2011/09/neutrinos-faster-than-light/

 

http://en.wikipedia.org/wiki/Faster-than-light

 

http://usersguidetotheuniverse.com/?p=2169

 

http://arxiv.org/ftp/arxiv/papers/1109/1109.4897.pdf

 

 

 

Maybe in a few dozen years we can indeed do this: http://www.youtube.com/watch?v=QKcF5je7K2E

8203.programmer.gif

Hello fellow vCampus members,

 

As a spontaneous action, I'm going to dedicate myself to a 24-hour programming challenge starting today. Maybe you have noticed that I'm working on some research projects to create a Natural User Interface (NUI) for access to the PI System and PI AF. I'm focusing on using the Microsoft Kinect on a Windows PC. So far I've made two 'demo' projects, which I've blogged about here and here.

 

Now I want to take things further, and dedicate a 24-hour period solely to programming some sort of NUI for the PI System.

 

 

 

Here are the rules:

  • I will start at 17:00 CET on Sept. 20 (today) and finish at 17:00 CET on Sept. 21 (24 hours)
  • The project must involve the Kinect sensor and the PI System
  • The project has to be in .NET
  • I will allow myself time to eat and get some sleep :)
  • Updates will be posted regularly here in this blogpost (and the forum), and on twitter (http://twitter.com/#!/mvdveeken)
  • You can influence the project! Please tell me what you would like to see in this project.

I already have some ideas on how to spend my time:

  • Creating a reliable finger recognition library for the Kinect sensor, and use it to display trends and access information about tags and AF Elements.
  • Creating a reliable gesture recognition library
  • Create a speech recognition application, which uses the Kinect microphone array. This application should be able to answer questions about the PI system (get and display data, create trends, access AF Elements).

My preference would be idea #3. Speech recognition can be a big part of a Natural User Interface (later combined with gestures), and wouldn't it be great if you could access your PI System with speech, just like HAL 9000 from 2001: A Space Odyssey, or J.A.R.V.I.S. from Iron Man?

 

Updates will be posted here and on twitter (http://twitter.com/#!/mvdveeken)

 

Please let me know what you think! I need your suggestions!

 

edit: for further updates and discussions I've opened a discussion thread on the forum [DEAD LINK] http://vcampus.osisoft.com/discussion_hall/generic_forums/f/20/p/2285/12148.aspx

 

 

MichaelvdV@Atos

Twitter Test

Posted by MichaelvdV@Atos Sep 20, 2011

Hello, this is my twitter feed

 

 

 

 

vCampus2011_5F00_banner_5F00_764x163.png

 

The dates for vCampus Live! 2011 are set. Our annual event will take place on Nov. 30th and Dec. 1st at the Palace Hotel in San Francisco. On November 29th there will be an optional PI System Overview, and of course the Welcome Reception!

 

We are all really excited about this year's event. This event will be even better than previous years! OSIsoft has listened to all the comments and requests from previous years, and created an event with a lot of technical content and hands-on sessions! Because we are so excited about this, we are offering a '2 for 1 special': buy one Conference Package, get one free! You can find more information about it at the registration site.

 

You can find the agenda for this year's event here: http://www.osisoft.com/vcampuslive2011/agenda/ . If you are a Developer, System Integrator, Architect or Administrator, this event will be very valuable for you!

 

One recurring item this year is the vCampus All-Stars awards. The vCampus All-Star awards are granted to vCampus members who actively share their technical expertise with the community. They are considered the most active and most valuable members in the community.

 

This year we will again reward those who are considered the most valuable members in the community. Besides getting recognized by the community, being a vCampus All-Star profiles you (and your company) as PI experts. You can be awarded multiple years in a row for your continued involvement with vCampus.

 

Being a vCampus All-Star will also get you:

 

•Personal vCampus blog (if desired)
•Voluntary participation in team meetings
•Free admission for the year to come
◦OSIsoft vCampus & OSIsoft vCampus Live
◦Users Conference
•A few more surprises…

 

We as a community decide who will be granted these awards. Therefore, we really need your input! Who do you consider the most active and valuable members?

 

Let us know, and nominate your vCampus All-Stars by emailing us at vcampus@osisoft.com

 

Of course, don't forget to register for the event!

 

 

 

 

If you don't want to read about the background of this demo and want to go right to a demo video, click here (watch in 720p, fullscreen, audio on).

 

About two months ago, I published this blog post. After a blog post by Rhys, in which he said that it would be very important for us to be able to wave at PI :), an unofficial competition started up (still waiting for you, Rhys... ;-) ).

 

This is the second version of a little research project on integrating Kinect, PI and PI AF. You can find the first demo here. This demo also makes use of the Bing Maps Geocode service and the Bing Maps WPF Control.

 

The scenario is that there is a fictional renewable energy company that operates windfarms in different countries across Europe. This application lets an employee navigate the different countries and dive into the operations of the windfarms, down to the turbine level. By pointing left, right, up and down you can navigate and quickly get an overview of the operation of all the countries, windfarms and turbines.

 

Here is a general overview of the application:

 

3678.Kinect-Demo-2-_2D00_-Overview.png

 

The carousel control on top navigates through the different countries by pointing left and right. KPIs for the selected item are displayed. If you select a country and point down, you drill down and can view the different windfarms in that country. Every selected asset is displayed on the map control, and the map automatically zooms in and navigates to the location of that asset.

 

Here is an example of drilling down to a windfarm in the Netherlands, located near the city of Groningen.

 

1524.Kinect-Demo-2-_2D00_-Windfarm-view.png

 

 And here is an example of drilling down to the wind turbine level

 

3527.Kinect-Demo-2-_2D00_-Wind-Turbine-view.png

 

 At the heart of this application is an AF database, with an element tree representing our different assets

 

4477.Kinect-Demo-2-_2D00_-AF-hierarchy.png

 

As you can see, the countries are at the top level, they contain windfarms, and windfarms contain turbines. There is no 'depth' limit built into the application logic, so it is possible to create hierarchies that are very deep. The application also does not contain any logic specific to windfarms; it would be easy to configure this application to support oil platforms, solar, chemical plants, etc. just by changing the AF configuration.

 

The KPIs displayed in the application are inferred from the attributes on the element. There are three major categories currently supported. Attributes decorated with the 'History' category will show up as trends, and attributes decorated with the 'ViewInTable' category will be displayed in a table.

 

The third, the 'DisplayConfiguration' category, supports two attributes: Image and Location. Image should hold a path to an image file (local file or URL) that represents the current asset (country flag, picture of the windfarm or turbine). Location holds a location line, which can be geocoded using the Bing geocode service. The geocode service translates it into longitude and latitude information, which is placed on the map using pushpins.
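As a rough illustration of how such category-driven display logic could look with the AF SDK (the category names are the ones above; the exact calls are an assumption, not the demo's actual code):

using System.Linq;
using OSIsoft.AF.Asset;

static class DisplayHelper
{
    // Select the attributes of an element that belong to a given category,
    // e.g. "History" for trends or "ViewInTable" for the table view.
    public static AFAttribute[] AttributesInCategory(AFElement element, string category)
    {
        return element.Attributes
                      .Where(a => a.Categories.Any(c => c.Name == category))
                      .ToArray();
    }
}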

 

Here is a video of the application at work (with comments by yours truly): http://www.youtube.com/watch?v=2E2nELzEWqY (watch in 720p, fullscreen with audio on).

 

I'm working on a 'general' library to support Kinect and abstract some of the SDK logic, to make it easier for us to start developing and researching NUI (Natural User Interface) applications for PI. Stay tuned, I'm hoping to provide an update on this soon...

If you are developing using Microsoft products, you have probably heard of Channel9. Channel9 provides tutorial, interview and behind-the-scenes videos about topics that involve Microsoft development and integration. Microsoft and OSIsoft are partners; OSIsoft was even awarded the 'Sustainability Partner of the Year' award!

 

I personally love Channel9 because it has good resources; I visit the site regularly to find new videos. I really like the no-nonsense, purely technical approach of the videos. A few years ago I bumped into a three-part video series on OSIsoft and 'forgot' about it. Today I bumped into it again, and thought it would be great to share it with vCampus! If you have a coffee break coming up, it is really worth watching.

 

It's from 2007 (which is ages in IT-years), but the content is still very valid. It stars some interesting people (including Dr. Kennedy himself).

 

Here are the links: part I, part II, part III (demo)

 

 

 

 

5873.clippy.jpg

 

 

 

Everyone who has worked with Microsoft Office in the past knows the paperclip Office Assistant 'Clippy'. I think most of us had a love-hate relationship with this sometimes annoying feature. The Office Assistant was first introduced in the 1997 release of Microsoft Office, and it was removed as a feature in Microsoft Office 2007. Although most users experienced the feature as annoying and intrusive, there is actually some really nice science involved!

 

The feature originated in 1993 under the codename Lumiere. The goal was to study and improve human-computer interaction using Bayesian probabilities. They wanted to create a way to infer or predict user goals. As software in general (and in this case, Microsoft Office in particular) became more and more complex, the idea of having an 'assistant' that helps you achieve your goals makes sense to me.

 

 

MichaelvdV@Atos

The internet of things

Posted by MichaelvdV@Atos Jul 19, 2011

Cisco released a nice infographic called 'The Internet of Things'.

 

The interesting part is that there are more 'things' connected to the internet than there are people in the world. And we are not just talking about PCs, laptops, smartphones and tablets. There is a whole range of new developments and products that are connected to the internet. I think this infographic nicely visualizes that the internet is not just a 'people' internet anymore. Not every device has a user behind it requesting information from the internet. It has become much more than that. In the coming years we will see many more of these new developments, now that wireless access (WiFi, 3G) is gaining much wider acceptance due to lower costs and better coverage.

 

For us, as users, developers and architects working with PI, this opens up a whole new range of possibilities. Many of these devices need to interconnect, store and share information. In a lot of cases, this will be time-series or asset data. It really makes me think about what new possibilities and markets are out there for the PI System. With a lot of functionality and applications moving to the cloud, and now that the sharing of information becomes more and more important, I envision a whole new future of unexplored possibilities for us!

 

 

Source: http://blogs.cisco.com/news/the-internet-of-things-infographic/

 

edit: I think Cisco might have exaggerated the statement that 'at the end of 2011, 20 households will generate more traffic than the entire internet in 2008'. The facts really state otherwise; even if you compare the worldwide internet traffic of 2011 to 1998, this statement still does not seem true. Wikipedia has some nice information about the history of internet traffic here. That aside, the infographic shows some nice insights.

If you have developed with Silverlight, you will probably have noticed that there are a lot of, sometimes subtle, features that can make your life as a programmer a lot easier. Some of these features are a little bit hidden. It's always nice to see some small tutorials on how you can do certain things, so that you know they exist. When you are in a situation where they could be applicable, you will certainly think of them!

 

Here are some 'must read' tutorials for Silverlight developers. Have fun reading!

 

StringFormat for DateTime conversion

 

Explains how to use the StringFormat property of a TextBox when binding to a DateTime value.

 

Treeview Drag & Drop

 

Explains how to use drag and drop targets in Silverlight 4 to easily enable drag & drop for the Treeview.

 

Silverlight Chart Example

 

There is a very good chance that you will want to visualize PI data using a chart in Silverlight. This article should get you started with using the charts.

 

Data Validation in Silverlight

 

Validating user input data and responding with clear messages is very important. Silverlight has very nice built-in data validation mechanisms.

 

Assembly caching in Silverlight 4

 

You can drastically speed up download and load time of your Silverlight application by using 'assembly caching'. This article explains how to enable assembly caching.

 

Printing support in Silverlight 4

 

Want to be able to print out that nice report with PI data from your Silverlight application? This is a good place to start!

 

Using Duplex WCF communication with Silverlight

 

Want to push events to the client using your WCF service? This is a good place to start learning about the PollingDuplexHttpBinding

This is already the third part in a blog post series where we explore lesser known C# language features. You can find part I here, and part II here. We have already discussed several keywords and operators. We haven't received a lot of feedback on this series, and we would really appreciate any! With this third post, we will continue our quest to explore the lesser known features of C#.

 

Extension methods

 

Extension methods were introduced in .NET 3.5 (and C# 3.0). They are a special kind of method: static methods that are called as if they were instance methods. You can use extension methods to 'extend' an existing class. This means you can add methods to an already existing type, even one you don't have the source code or internals for.

 

The big advantages of using Extension methods:

  • Reusability. Declare the method once, and it will be available throughout the application.
  • Extensibility. Extension methods let you add methods to existing classes, which allows you to make better use of the object-oriented paradigm.
  • Portability. You can create your own library with extension methods that you can use across different applications.

LINQ relies heavily on extension methods. You can recognize extension methods in Visual Studio by the downward arrow in the IntelliSense list. You will also notice the '(extension)' prefix before the documentation.

 

6215.extensionmethods_5F00_intellisense.png

 

You can create your own extension methods by creating a static class. Inside this static class you declare static methods. The first parameter of these static methods should be of the type that you want to extend, and it should be preceded by the 'this' modifier.

 

In this example, we are extending the 'PIPoint' class with an extension method called 'GetAttribute', which takes an attribute name, and returns the attribute value.

 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using PISDK;

namespace PIExtensions
{
    public static class PIExtensions
    {
        public static string GetAttribute(this PIPoint point, string name)
        {
            return point.PointAttributes[name].Value.ToString();
        }
    }
}

 

 

We can use this extension method by bringing the 'PIExtensions' namespace into scope, and calling 'GetAttribute' on an object of type 'PIPoint'.

 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using PIExtensions;

namespace ExtensionMethods
{
    class Program
    {
        static void Main(string[] args)
        {
            var server = new PISDK.PISDK().Servers["hans-ottosrv"];
            var point = server.PIPoints["sinusoid"];

            var span = point.GetAttribute("span");

            Console.WriteLine(span);
            Console.ReadLine();
        }
    }
}

 

 

This way, no matter where you are in your application, if you have a 'using PIExtensions;' directive you can use the PIExtensions extension methods.

 

A more 'real life' example would be an extension method to copy a PI point:

 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using PISDK;

namespace PIExtensions
{
    public static class PIExtensions
    {
        static string[] ReadOnlyAttributes = new string[] 
        {  
            "changedate",
            "changer",  
            "creationdate",
            "creator",    
            "pointid",   
            "pointnumber", 
            "pointtype",    
            "ptclassid",   
            "ptclassrev", 
            "recno" 
        };

        public static PISDK.PIPoint CopyPoint(this PISDK.PIPoint sourcePoint, string newName)
        {
            return CopyPoint(sourcePoint, newName, sourcePoint.Server);
        }

        public static PISDK.PIPoint CopyPoint(this PISDK.PIPoint sourcePoint, string newName, PISDK.Server destServer)
        {
            var sourceAttribs = sourcePoint.PointAttributes.GetAttributes();
            var newAttribs = new PISDKCommon.NamedValues();
            var ptClass = sourcePoint.PointClass.Name;
            var ptType = sourcePoint.PointType;

            foreach (PISDKCommon.NamedValue nv in sourceAttribs)
            {
                // Skip attributes that cannot be written on a new point.
                if (!ReadOnlyAttributes.Contains(nv.Name.ToLower()))
                    newAttribs.Add(nv.Name, nv.Value);
            }

            return destServer.PIPoints.Add(newName, ptClass, ptType, newAttribs);
        }
    }
}

 This is also an example of overloading your extension methods. In this case we have a CopyPoint method that accepts only a new tag name, and a CopyPoint method that accepts a new tag name and a new server. You can use these extension methods like so:

 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using PIExtensions;

namespace ExtensionMethods
{
    class Program
    {
        static void Main(string[] args)
        {
            var sdk = new PISDK.PISDK();
            var server = sdk.Servers["hans-ottosrv"];
            var otherServer = sdk.Servers.DefaultServer;
            var point = server.PIPoints["sinusoid"];

            //copy point to same server
            point.CopyPoint("sinusoid-new-1");

            //copy point to different server
            point.CopyPoint("sinusoid-new-otherserver", otherServer);

            Console.ReadLine();
        }
    }
}

 

 

 A few tips and tricks when dealing with Extension methods:

  • You cannot 'replace' or 'override' existing methods. If your extension method has the same signature as an already existing instance method, your extension method will never get called.
  • You can overload your extension methods.
  • Extension methods are brought into scope at the namespace level. If you have multiple 'extension classes' defined in the same namespace, they will all be in scope.
  • If you have certain actions that you find cumbersome, or that you need in multiple applications, extension methods are a great way to collect them in a separate 'helper' library that you can reference from your new applications. This saves you a lot of time and effort.

 And, to conclude this article, some small tips and tricks about strings!

 

String checking and comparison

 

Since .NET 4.0, the String class has an 'IsNullOrWhiteSpace' method. There has always been an 'IsNullOrEmpty' method, but that doesn't account for extra whitespace. To be safe, you had to call 'Trim()' first, which will throw an exception if the string is actually null.

 

Old way (bad):

 
            var checkString = " ";
            if (string.IsNullOrEmpty(checkString.Trim()))
                //do something

 

 

Using 'NullOrWhiteSpace' (good):

 
      var checkString = " ";
            if (string.IsNullOrWhiteSpace(checkString))
                //do something

 

 

When dealing with string comparison, do you do it like this?

 
       var string1 = "HeLLo WoRlD";

            var string2 = "hellO wOrlD";

            if (string1.ToUpper() == string2.ToUpper())
                //they are the same!
            else
                //they are not the same!

 

 

A far better way is to use string.Equals(), which can take culture and case sensitivity into account.

 
  var string1 = "HeLLo WoRlD";
            var string2 = "hellO wOrlD";
            if (string1.Equals(string2, StringComparison.CurrentCultureIgnoreCase)
                //they are the same!
            else
                //they are not the same!

 

 

This takes the current culture into account, and is best practice when dealing with string comparison!

 

Further reads

 

 Extension Methods (MSDN)

 

Database of extension methods (Extensionmethod.net)

 

How to implement and call a custom Extension Method (MSDN)

 

String.IsNullOrWhiteSpace (MSDN)

 

String.Equals (MSDN)

 

Previous articles in this series

 

Exploring lesser known C# Language Features Part I

 

Exploring lesser known C# Language Features Part II

 

 

 

 
