All Places > PI Developers Club > Blog > 2011 > October

Daylight Saving Time (DST) is the practice of 'advancing' clocks during the summer. Normal practice is to advance the clock one hour in spring and adjust it backwards in autumn. In Europe we adjusted the clocks last weekend (Oct. 30th), and the US will do the same next weekend (Nov. 6th). I thought this would be a good time to talk a little about DST, and what it means for us programmers.


DST is a rather 'weird' phenomenon. There is a lot of dispute over the actual benefits of having DST; the original arguments of reducing energy usage and crime have been proved and disproved by numerous studies. Some countries stopped using DST, only to later resume it. Others stopped using it altogether.


Here is a map of countries that observe DST.




Map legend:
  • DST observed
  • DST no longer observed
  • DST never observed
As you can see, almost all European countries and most of North America observe DST. The usage of DST can differ even inside a country: for instance Arizona and Hawaii (US states) and parts of Canada do not observe DST, while the rest of the country does. The coordination on shifting clocks in areas that do observe DST can differ as well.
If you look at the Windows timezones, you can see Arizona has its own timezone designation, although it has the same UTC offset as Mountain Standard Time (MST). The difference is that MST changes to Mountain Daylight Time (MDT) during the summer, while the Arizona timezone doesn't.
The European Union switches all at once (01:00 UTC), but most of America switches at 02:00 local time, so every timezone switches at a different (absolute) time. During the 1950s and 1960s each US locality could start and end DST whenever it wanted. In one year, 23 pairs of start and end dates were used in Iowa (a US state) alone. On an Ohio to West Virginia bus route, passengers had to (officially) change the time on their watches 7 times on a 55 km bus ride.
As it seems, DST can create quite some confusion and chaos, especially when dealing with information systems spanning several countries and continents. I'm sure we have all encountered issues when dealing with timezones and DST. You will still encounter this with legacy systems, and if you are not careful, you can even experience this in your own .NET applications.
For instance, consider the following piece of code
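A minimal sketch of such a loop (the start time of 01:00 local time on 30 October 2011 and the four iterations are assumptions for illustration):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Start just before the European DST switch (assumed start time).
        DateTime time = new DateTime(2011, 10, 30, 1, 0, 0);

        for (int i = 0; i < 4; i++)
        {
            Console.WriteLine(time);
            time = time.AddHours(1);
        }
    }
}
```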
Here in Europe, DST ended this year on October 30th, when the clocks went back from 03:00 CEST to 02:00 CET. That means the for loop will iterate over the DST switch.

At first, you would think and hope that this is reflected in the dates printed by Console.WriteLine, and that this snippet of code would produce something like this:
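Assuming the loop starts at 01:00 local time on 30 October 2011 and adds one hour four times, a DST-aware result would look like this (exact date/time formatting depends on your culture settings):

```
30-10-2011 01:00:00
30-10-2011 02:00:00
30-10-2011 02:00:00
30-10-2011 03:00:00
```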




But sadly, it does not. It produces the following output (thus not observing the DST switch):
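Under the same assumptions (start at 01:00 local time, four one-hour iterations), the actual output is simply four consecutive wall-clock hours:

```
30-10-2011 01:00:00
30-10-2011 02:00:00
30-10-2011 03:00:00
30-10-2011 04:00:00
```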




Why is this? We add one hour with each iteration, so logically it should print 02:00:00 AM twice. The issue here is the DateTime structure: DateTime does not contain information about timezones. From .NET 2.0 onwards, DateTime does have a 'Kind' property, of type DateTimeKind. The Kind of a DateTime struct can be 'Local', 'Utc' or 'Unspecified'. For backwards compatibility with pre-.NET 2.0 versions, the Kind of a DateTime instance will always be 'Unspecified' unless the kind is specified in the constructor or set with the static DateTime.SpecifyKind method. To further provide backwards compatibility, an 'Unspecified' DateTime behaves like a 'Local' DateTime.


In our example, we have not set the 'Kind' of the DateTime struct in the constructor, so it behaves like a Local DateTime. Going from 01:00 AM to 02:00 AM by adding one hour, and then from 02:00 AM to 03:00 AM, is therefore expected behavior. We can change this behavior by specifying the Kind of the DateTime struct as UTC in the constructor. Basically, we work only with UTC until we present output to the user. When we want to present the output, we use the ToLocalTime() method to format the date according to the current timezone and culture information.
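A sketch of the UTC-based variant (the start time is again an assumption; 23:00 UTC on 29 October corresponds to 01:00 CEST on 30 October):

```csharp
using System;

class Program
{
    static void Main()
    {
        // The same loop, but the DateTime is now explicitly marked as UTC.
        DateTime time = new DateTime(2011, 10, 29, 23, 0, 0, DateTimeKind.Utc);

        for (int i = 0; i < 4; i++)
        {
            // Convert to local time only when presenting the value.
            Console.WriteLine(time.ToLocalTime());
            time = time.AddHours(1);
        }
    }
}
```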




This produces the following output:
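On a machine in the Central European timezone, and again starting at the local equivalent of 01:00 CEST, the output now repeats 02:00 as hoped:

```
30-10-2011 01:00:00
30-10-2011 02:00:00
30-10-2011 02:00:00
30-10-2011 03:00:00
```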




A very good alternative when dealing with different timezones is using the DateTimeOffset struct. This structure is a 'timezone' aware DateTime alternative that was introduced in .NET 3.5. It represents a point in time relative to UTC. If we want to achieve something similar, we can use the following code:
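A minimal sketch of such code (the start time of 01:00 local time on 30 October 2011 is my assumption for illustration):

```csharp
using System;

class Program
{
    static void Main()
    {
        // A local DateTime; the DateTimeOffset constructor picks up the
        // UTC offset of the local time zone at that moment (+02:00 here).
        DateTime start = new DateTime(2011, 10, 30, 1, 0, 0);
        DateTimeOffset time = new DateTimeOffset(start);

        for (int i = 0; i < 4; i++)
        {
            // ToLocalTime() re-evaluates the local offset for each instant.
            Console.WriteLine(time.AddHours(i).ToLocalTime());
        }
    }
}
```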




Here we create a local DateTime object and use it to instantiate a DateTimeOffset. It is not necessary to supply a local DateTime in the DateTimeOffset constructor; it also supports constructors almost identical to DateTime's. We iterate through the hours in the same fashion and print the localized time to the console. The output shows what we want, and it even includes the timezone offset information. You can clearly see the UTC offset switch.
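On a Central European machine, with the assumed start time of 01:00 local time, the output looks like this; note the offset switching from +02:00 to +01:00:

```
30-10-2011 01:00:00 +02:00
30-10-2011 02:00:00 +02:00
30-10-2011 02:00:00 +01:00
30-10-2011 03:00:00 +01:00
```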




This output is possible because DateTimeOffset stores its information in UTC format, so calculations with the dates also occur in UTC. It is best practice, and really convenient, to always use UTC internally and only convert to a local time when it has to be presented (either to a user, or to another (legacy) system that requires a certain timezone).


The PI System works in the exact same way, so you don't have to worry about DST changes. PI uses UTC time internally, and only when a timestamp needs to be presented to the user does it use the localized time. This means that internally, PI does not have 23- or 25-hour days: only 24-hour days. There is a short video about this on the OSIsoftLearning YouTube channel here.


For instance, this is what you would get if you create a trend in PI ProcessBook covering the period of a DST switch (in this case, advancing one hour in 2010):






Sometimes, however, you have to know the number of hours in a day (especially when creating daily averages or aggregates). The DaylightTime class contains the timestamps of the start and end of DST for a given year. You can obtain a DaylightTime instance from a System.TimeZone. To get the number of hours in a day for your current timezone:
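A sketch of what this could look like, wrapped in a small helper (the class and method names are my own):

```csharp
using System;
using System.Globalization;

static class DstHelper
{
    // Number of wall-clock hours in the given local day: 24 normally,
    // 23 on the day the clocks advance, 25 on the day they fall back.
    public static int GetHoursInDay(DateTime date)
    {
        DaylightTime daylight =
            TimeZone.CurrentTimeZone.GetDaylightChanges(date.Year);

        if (date.Date == daylight.Start.Date)
            return 24 - (int)daylight.Delta.TotalHours; // spring switch
        if (date.Date == daylight.End.Date)
            return 24 + (int)daylight.Delta.TotalHours; // autumn switch
        return 24;
    }
}
```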




And we can use it like so:
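Assuming a DstHelper.GetHoursInDay(DateTime) method that implements the DaylightTime lookup described above (both names are my own, for illustration), and dates chosen for a Central European machine:

```csharp
Console.WriteLine(DstHelper.GetHoursInDay(new DateTime(2011, 3, 27)));   // spring switch day
Console.WriteLine(DstHelper.GetHoursInDay(new DateTime(2011, 10, 30)));  // autumn switch day
Console.WriteLine(DstHelper.GetHoursInDay(new DateTime(2011, 10, 31)));  // a normal day
```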




The output will then be:
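On a Central European machine, where the 2011 clock changes fell on March 27th and October 30th, the values for the spring switch day, the autumn switch day and a normal day would be:

```
23
25
24
```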






Some good reads about working with Dates, Times, TimeZones and DST in .NET:

And now for some fun and off-topic facts on DST:

  • The proper description of DST is 'Daylight Saving Time', not 'Daylight Savings Time'
  • A man, born just after 12:00 a.m. DST, circumvented the Vietnam War draft by using a daylight saving time loophole. When drafted, he argued that standard time, not DST, was the official time for recording births in his state of Delaware in the year of his birth. Thus, under official standard time he was actually born on the previous day--and that day had a much higher draft lottery number, allowing him to avoid the draft.
  • While twins born at 11:55 p.m. and 12:05 a.m. may have different birthdays, Daylight Saving Time can change birth order -- on paper, anyway. During the time change in the fall, one baby could be born at 1:55 a.m. and the sibling born ten minutes later, at 1:05 a.m. In the spring, there is a gap when no babies are born at all: from 2:00 a.m. to 3:00 a.m.
  • In the U.S., Arizona doesn’t observe Daylight Saving Time, but the Navajo Nation (parts of which are in three states) does. However, the Hopi Reservation, which is entirely surrounded by the Navajo Nation, doesn’t observe DST. In effect, there is a donut-shaped area of Arizona that does observe DST, but the “hole” in the center does not.

I couldn't verify these, so I'm not 100% sure whether they are true:

  • Daylight saving time once single-handedly thwarted a terrorist attack, causing the would-be terrorists to blow themselves up instead of other people. In September 1999, the West Bank was on daylight saving time while Israel was on standard time; West Bank terrorists prepared bombs set on timers and smuggled them to their associates in Israel. As a result, the bombs exploded one hour sooner than the terrorists in Israel thought they would, resulting in three terrorists dying instead of the two busloads of people who were the intended targets.
  • I personally cannot really believe this one, I hope someone from the US can verify: To keep to their published timetables, trains cannot leave a station before the scheduled time. So, when the clocks fall back one hour in October, all Amtrak trains in the U.S. that are running on time stop at 2:00 a.m. and wait one hour before resuming. Overnight passengers are often surprised to find their train at a dead stop and their travel time an hour longer than expected. At the spring Daylight Saving Time change, trains instantaneously become an hour behind schedule at 2:00 a.m., but they just keep going and do their best to make up the time.



Sources for this article:

It sounds like a very difficult question. There are so many factors involved: laptops, desktops, servers, smart phones, etc. Also, don't forget the routers and the energy needed to produce all these devices. A pair of researchers from the University of California, Berkeley tackled the question and published their results.


Answer: 2% of the world's total energy (somewhere between 107 and 370 GW)!


It might sound like a big number, but the fact is that it substitutes for several far more power-intensive activities. For example, the researchers say, attending a meeting physically consumes 100 times more energy than attending it virtually! This is, by the way, aligned with our mentality in the PI Community: we want to use our infrastructure to make things work more efficiently and get higher value out of our investments.


Where do you think the energy share of the Internet will be in 10 or 20 years from now?

Introduction to Project Roslyn

I have to admit that I was one of those people who were quite worried before the big Microsoft BUILD event. There were a lot of rumors about Microsoft killing off Silverlight and WPF, and going with JavaScript and HTML as the preferred application development tools.


After I watched some of the videos of the BUILD conference I was more at ease, but still quite worried about the direction that was taken. This was my state until I watched this presentation by Anders Hejlsberg.


Anders talks about future directions for C# and Visual Basic. There is a lot of interesting stuff happening. The adoption of (true) async programming in the .NET Framework is very promising. One thing really made my day, and brought back the same enthusiasm I had when Microsoft announced Silverlight 2. Like back then, I was happy as a little puppy.


I remember seeing something like this at the local Microsoft DevDays when the beta of .NET 4 was announced. The presenter showed a (very simple) C# interactive console, and told us this was something they were working on. The thought of having real C# code evaluation sounded like a dream to me. I have struggled a lot with evaluating simple expressions in C# (up to the point of writing my own interpreter for it). Having this in the framework would be huge. We had to wait a few years, and probably have to wait until after the next .NET release... But it is finally happening!


I’m talking about a new project called ‘Project Roslyn’. Microsoft made Roslyn available as a CTP about a week ago. Roslyn will bring a whole variety of new and very powerful features to the C# and VB.NET languages. I think we (including Microsoft itself) cannot envision the implications this new technology is going to have. Basically, Roslyn is a new compiler for VB.NET and C#, written in VB.NET and C# respectively. And it’s all opened up!


Compilers are typically seen as a ‘black box’. That is, if you are not a compiler developer. It works quite simply: you put source code in, and you get back something you can execute. What happens in between is something magical for most of us.


Well, most of us know some things about how a compiler works: there is the parsing of the source code, the creation of the syntax tree, creating a list of all the symbols, binding the symbols with the objects, and so on.


Roslyn opens all this up to us developers. We can now ‘interact’ with the compiler. This will open up a whole new world of possibilities like code generation, code evaluation, code analysis, meta-programming and refactoring.


Roslyn exposes the C# and VB compiler’s code analysis to us by providing an API layer that mirrors the compiler pipeline.


First, we have the parse phase. Here the source is tokenized and lexical analysis is performed. The result is called a ‘syntax tree’: a tree representation of the syntactic structure of the source code. Then we have the ‘declaration phase’, where declarations are analyzed to form named symbols. Next, the identifiers in the code are matched to the symbols in the bind phase. The last phase, the emit phase, takes care of emitting all the information that has been gathered and built in the previous phases into an assembly.


If we look at the structure of the Roslyn project, each phase is represented by an object model that gives information about that phase. The parsing phase is exposed as a Syntax Tree, the declaration phase as a hierarchical symbol table, the binding phase as a model that exposes the result of the semantic analysis of the compiler. And finally the emit phase as an API that produces IL byte codes.


This is all nice and very theoretical. Now for some real-world examples and applications. First we will need to get Project Roslyn installed.


To get started with Roslyn you will need:


Microsoft Visual Studio 2010 SP1 (download here)
Microsoft Visual Studio 2010 SDK SP1 (download here)
Microsoft ‘Roslyn’ CTP (download here)


Once you have everything installed, you should go to the install directory. This is typically


C:\Program Files\Microsoft Codename Roslyn CTP\ for x86 systems, or
C:\Program Files (x86)\Microsoft Codename Roslyn CTP\ for x64 systems.


You should definitely check out the ‘Readme’ directory. There is a readme HTML application there that will take you through some of the basics of the libraries and the installed Visual Studio templates. Next is the ‘Documentation’ directory. The 13 documents there will be your primary source of information when working with Roslyn.


One of the first and simplest real world examples is the possibility of an interactive C# console, otherwise known as a REPL (Read Evaluate Print Loop) console. You can use the interactive C# console from Visual Studio. You will find it under View > Other Windows > C# Interactive Window.


The interactive window is a prime example of the possibilities of Roslyn. You can create variables, create expressions, etc. You can really use C# in an interactive manner.


You can also create some more complex statements, for example:
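For instance, a session along these lines (my own example, not necessarily the one shown in the original demo):

```csharp
> using System.Linq;
> var numbers = Enumerable.Range(1, 10);
> numbers.Where(n => n % 2 == 0).Sum()
30
```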




An even more complex example is to create a new class called ‘Person’, with a name and an age property, as well as a Speak() function. This example really shows the ‘magic’ of Roslyn.
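A reconstruction of what such a session might look like (the member bodies, names and values are illustrative):

```csharp
> public class Person
> {
>     public string Name { get; set; }
>     public int Age { get; set; }
>     public string Speak()
>     {
>         return "Hi, I am " + Name + " and I am " + Age + " years old.";
>     }
> }
> var p = new Person { Name = "John", Age = 30 };
> p.Speak()
"Hi, I am John and I am 30 years old."
```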


The really exciting thing here is not only the (on-the-fly) evaluation of the code, but especially the automatic indentation of the class declaration and the full IntelliSense and auto completion. The console does not just evaluate a bunch of code (something we could do earlier with the Mono REPL); we also have dynamic IntelliSense and auto completion. We had this in Visual Studio, but now it’s open to us!


I would really like to encourage you to install Roslyn. The C# Interactive Window alone is already of great value. You will no longer have your Visual Studio projects folder filled with ConsoleApplication1 through ConsoleApplication123 because you had to try out some code.


In the next blog post we will dive deeper into Roslyn, and we will start writing some code using Roslyn. We will be making use of the ScriptEngine that Roslyn provides. We are going to make a simple calculation engine for the PI system, and we will see that with a few lines of code we can already build something powerful and useful.



In the back rooms of OSIsoft (yes, we have those, shhh) we have been pounding out prototypes for cloud-hosted extensions to on-premises PI Systems, including prototypes of hosted PI Coresight and “Coresight-like” experiences for mobile devices (and yes, we are looking at your favorite devices, unless they happen to be running Symbian). We also have some exciting prototypes of a new “enterprise” grade PI System search engine for improved search experiences, designed to run on premises or as a hosted service.


In itself this is all really exciting work, some of which we hope will see the light of day in 2012. (All the PMs just fell over; you see, I really shouldn’t set expectations around features/products, especially with dates! So, standard disclaimer: none of this has even hit the engineering plan, so there are no promises here. But come on, inquiring minds should at least know where we are heading…’nough said.) But what has really hit home for us as we have pushed in this direction is the need for an improved (possibly new) security model in our system. Since many users haven’t even been through the upgrade to 2011 and the significant security changes that represents, I imagine there were just a few shudders and huhs – but there are some clear changes and trends that PI needs to honour.


As the boundaries of a “PI System” expand across complex enterprise topologies and out to the cloud, the need for a more flexible access model becomes very clear. All of this without any compromise to the integrity of the system and data. The clear path forward is through “claims based” security models, which allow the administrative flexibility to give cross-domain/system identities secure access to PI System assets.


From the developer point of view, the claims-based approach permits a single implementation where security aspects such as authentication and authorization are abstracted out of the code. The specifics are implemented through configuration at the deployment and administration phases. Want to allow Facebook or Yahoo users to have read access to specific data in your system? Maybe that is a stretch for some, but how about allowing authorized users from other Windows domains, or users vouched for by trusted authorization systems outside of the corporate boundary? This model potentially allows that as an administrative exercise, keeping your system secure while providing data access from a mobile device that never has to tunnel into or join a corporate network or domain. We expect that much of our customers’ future value will come from the way in which their corporate asset, data, is leveraged across corporate boundaries: with partners, with customers, with suppliers and with employees.


As always, anyone excited or interested in discussing these topics is welcome to contact me directly or comment on this blog.

Ahmad Fattahi

Steve Jobs

Posted by Ahmad Fattahi, Oct 10, 2011

Everybody was deeply saddened and shocked when the news of Steve Jobs' death was announced last week. The real impact of his absence is yet to be seen over the coming years on Apple as well as the tech world.


It is true that he was an exceptional visionary who shaped several amazing technologies, mindsets, and products. However, how high you place him in the all-time ranking is another question. These days many people compare him to Thomas Edison and Henry Ford. Some even say he had what all those folks had collectively. Where would you place him? How would you compare him with the likes of Henry Ford and Thomas Edison?


Here are a couple of pictures I took the other night at the Apple store in Palo Alto, near where he lived and where he used to show up frequently. Countless fans and enthusiasts posted Post-it notes to cherish his legacy.





During the BUILD conference Microsoft announced a lot more details about Windows 8. There are some big changes coming to the OS we all love. In fact, Microsoft is calling it 'the biggest change since Windows 95'. That is quite a bold statement, and with all the radical changes, also quite a gamble. Microsoft is drastically changing its flagship product, the real centerpiece of its business. The 'old' Windows interface is familiar to hundreds of millions of people. I think a lot of us will have to get used to the new 'Metro' style apps.


But not only the look and feel of Windows 8 has changed; maybe the even bigger changes are 'under the hood'. A lot of the internal libraries and APIs have undergone a complete overhaul. All Windows subsystems have been reimagined to be modern.


Windows 8 is designed to be used on desktop PCs and touch devices. It supports the ARM CPU, an architecture that is used a lot in tablet computers and other smart devices.




While Windows 8 got a lot of attention, an arguably even more interesting product got a lot less attention. I'm talking about the big server brother of Windows 8: Windows Server 8!


Windows Server 8 has the newest Hyper-V hypervisor, which supports the vhdx format for virtual storage. Virtual disks can now be 16TB in size, where the previous maximum size was 2TB. Hyper-V 3.0 supports virtual machines with 32 cores and 512GB of internal memory. Also new are two forms of deduplication, techniques that should ensure more efficient use of storage and memory capacity.


Page Combining ensures that identical memory pages are combined into one page, which should give a performance boost to virtual machines. Storage Spaces ensures that the multiple disks of a server can be combined into one storage pool, which can then be served over the network. This does not require a SAN array.


It seems the message with WS8 is 'virtualization, virtualization, virtualization'. This paradigm fits perfectly with the great demand for flexible computing (cloud computing).


One other change further emphasizes that paradigm: the removal of the Graphical User Interface (GUI) from the server products. Windows Server 2008 could be installed without a GUI, but that option was somewhat unpopular. It seems WS8 is here to change that opinion. 'Graphical User Interfaces are for clients, not for server products', according to Jeffrey Snover, Lead Architect of Microsoft's Windows Server Division.




A lot of existing server software needs a GUI for configuration or operation. That's why the GUI is not completely disappearing. It's recommended that you use the GUI-less edition, unless you really have to use a GUI. There are two possibilities for users that want to have some kind of graphical capability: the full graphics shell (Metro style) can be installed, or you can choose a 'slim' version. This version does not have the taskbar, Windows Explorer or Internet Explorer (among other items).


One of the major changes compared to the Windows Server 2008 GUI-less install is that you can change your installation type afterwards: if you installed the GUI-less version, you can later switch to one of the versions with a GUI. Maybe this will further boost users' confidence to install their server OS without a GUI. Another improvement: the original Windows Server 2008 Core did not have PowerShell at all. It only became standard with Windows Server 2008 R2 Core, and even then only 230 PowerShell cmdlets were installed by default. With Windows Server 8, almost 2,500 cmdlets are installed by default!




The main reason why the GUI is being removed is the fact that a GUI uses CPU and memory which then cannot be used to perform 'server duties'. Especially in an age where servers are more likely to be part of a large cluster, there is no need for GUIs. In an ideal situation, cluster nodes should be controlled 'in bulk' instead of being configured individually.


The memory footprint of Windows Server 8 is far smaller when no GUI is installed: that installation only requires 512MB of internal memory. There is no real data yet on how much is needed in production; Microsoft says it is still optimizing the product.


One other reason for having a server product without a GUI is that having a GUI is more of a security risk. The more code you introduce on your system, the more likely that code contains bugs that can be exploited. Having no GUI means less code, and therefore less chance of having critical bugs that can be exploited. Jeffrey Snover commented that the number of critical security patches required for the Core installation is reduced by 50 to 70 percent. Having no GUI also means faster installation, something that is getting more and more important with the dynamic nature of today's computer environments.


What does this mean for us developers? Well, at this point compatibility with existing products should be guaranteed if you install the full version with the GUI. Developers are encouraged (and recommended) to create their configuration options in such a way that they can also be set without a GUI, for instance with the help of configuration files or command line options. There is an option to detect the installed WS8 version, so your application can adapt accordingly.
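One way to do such detection, sketched here under the assumption that the 'InstallationType' value under the Windows NT CurrentVersion registry key reports the installed flavor (e.g. 'Server Core' versus 'Server'):

```csharp
using Microsoft.Win32;

static class InstallTypeDetector
{
    // Reads the installation type from the registry.
    // Returns e.g. "Server Core", "Server" or "Client", or null if unavailable.
    public static string GetInstallationType()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion"))
        {
            return key == null
                ? null
                : key.GetValue("InstallationType") as string;
        }
    }
}
```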




A good example is the new Server Manager. The new Server Manager is a Metro style application (corrected: it's not a Metro style application, but it does have a new Office 365-like look), which uses PowerShell commands to control the server configuration.


Windows Server 8 supports both graphical interfaces and command line tools. We are encouraged to make use of the 'Core' edition, without any GUI. There is no real saying whether the GUI will totally disappear from our servers, but the concept of having no GUI seems very reasonable.


Edit: included a handy table from this source



Server Core Installation: Windows Core, Windows PowerShell, .NET Framework 4

Features on Demand: all of the above, plus Server Manager, the Microsoft Management Consoles and a subset of Control Panel applets

Full Installation: all of the above, plus all Control Panel applets, Windows Help, Windows Explorer and Internet Explorer


By now most of you will have gotten the news that Steve Jobs, co-founder and CEO of Apple, died yesterday.


He was one of the people that changed the computer market forever. I don't think anyone can dispute that he was a great visionary and a very creative man.


The reactions of many important people reflect this opinion:


I'm truly saddened to learn of Steve Jobs' death. Melinda and I extend our sincere condolences to his family and friends, and to everyone Steve has touched through his work. Steve and I first met nearly 30 years ago, and have been colleagues, competitors and friends over the course of more than half our lives. The world rarely sees someone who has had the profound impact Steve has had, the effects of which will be felt for many generations to come. For those of us lucky enough to get to work with him, it's been an insanely great honor. I will miss Steve immensely. 


- Bill Gates


Steve was among the greatest of American innovators - brave enough to think differently, bold enough to believe he could change the world, and talented enough to do it. By building one of the planet’s most successful companies from his garage, he exemplified the spirit of American ingenuity. By making computers personal and putting the internet in our pockets, he made the information revolution not only accessible, but intuitive and fun. And by turning his talents to storytelling, he has brought joy to millions of children and grownups alike. Steve was fond of saying that he lived every day like it was his last. Because he did, he transformed our lives, redefined entire industries, and achieved one of the rarest feats in human history: he changed the way each of us sees the world.


- Pres. Barack Obama


I personally was never a big Apple user or fan, but you have to admire the man. He really pushed the boundaries between what was real and what could be real. His strong belief in easy-to-use graphical user interfaces made the company a success, and it is still a big part of Apple's success today: from the NeXT Cube to the iPhone. Once every 7 years or so he would present something revolutionary after the famous words 'oh... And one more thing...'


After looking for information about the history of the company, I found this great documentary from 1996, called 'Triumph of the Nerds'. It is a 3-part, 50 min. documentary that explains the beginning of the PC revolution. It features Microsoft, Xerox, IBM, Apple, etc. I found it very educational and fun to watch. It really shows the politics and atmosphere of the day, and it also makes you feel really nostalgic.


Here are the Youtube links:


Triumph of the Nerds Part I: Impressing their friends


Triumph of the Nerds Part II: Riding the bear


Triumph of the Nerds Part III: Great artists steal





OSIsoft vCampus Live! 2011 is around the corner (Nov 30). With so many valuable presentations and hands-on sessions, along with the developers lounge and learning labs, there is something for everyone (hint: registration is open).


I will be presenting on "Machine Learning for Prediction Purposes on PI System Data". You will learn what machine learning means and how you can apply it to your existing PI data in order to make predictions, do preemptive maintenance, and make strategic decisions with invaluable insight into the future trajectory of your data streams. We will talk about the PI System along with SQL Server Analysis Services and MATLAB. You will see some real-world examples as well. Here is a short video I made to explain what I mean by my presentation:



If you have not registered for the upcoming OSIsoft vCampus Live! 2011, the 2-for-1 special registration rate will be available until October 15th. Register two people together to enjoy this special rate. Visit the event website to register!


At the same time, remember to nominate your vCampus All-Stars for 2011!


