All Places > PI Developers Club > Blog > 2013 > August


This blog post is inspired by a post titled PI System adoption rate metrics at the Generic vCampus Discussions forum, but it is also rooted in the many questions we have been asked about how to identify suspicious connections to a PI Server, or how to compare the number of licensed clients against actual usage.


When I received this question the first time many years ago, while working at OSIsoft Technical Support, the solution I suggested was to create a scheduled task that would periodically run a batch file. The batch file would query the PI Network Manager statistics by executing a piconfig.exe script and dump the information into a text file. One can think of various options for further processing of the data, either by a custom application or with Microsoft Excel.


The major disadvantage of this solution is that it only takes a sample at certain times and may miss what has happened between the sampling periods.


Times have changed. With PowerShell, Microsoft has introduced a powerful scripting tool that also supports the use of .NET objects directly. OSIsoft's development team was kind enough to offer PowerShell Tools for the PI System to the vCampus community; this new tool builds the bridge between PowerShell and the PI System.


I must admit that I am a beginner with PowerShell and don't yet really feel "comfortable". During my research I found people praising PowerShell in the highest tones. Meanwhile I have recognized how easy it can be to do what you want, and I feel PowerShell has a bright future.


My teammate Mathieu Hamel created the major part of the solution that we would like to introduce to the vCampus community with this post. While discussing the script's concepts and details, we recognized potential unexploited areas of PowerShell scripting with the PI System. As you read further into this blog post, you might want to download and use the project files discussed. Please find them attached to this post or use this link.


As you know, vCampus has a section for community projects. We, the vCampus team, are a little sad that they are not as alive as they could be. We believe that community projects are great and enable a group of individuals, or in certain cases the whole community, to work together to create something that delivers value to the community. Doesn't that sound great? Doesn't that sound fun? I couldn't think of anything else delivering a higher social value within a forum.


To get to the point, we believe that PowerShell Tools for the PI System is not only a powerful way of scripting but also has the potential for one or more great community projects. We would love to get your opinion about this. Please also let us know what you are interested in, but bear in mind that we don't just want to serve you with examples. When you suggest something, please expect to play an active part if it becomes a community project.


We are enthusiastic about the idea of seeing PI geeks like you more active on community projects with PowerShell Tools for the PI System soon. Hence we have limited the scope of the solution we present here today accordingly. The script is designed to provide basic functionality and leave room for extensibility. It also aims at users who are just interested in testing or using it as-is without diving into the code. Please document your interest by replying to this blog.








[ Introduction ]     [ Concept ]     [ Solution ]     [ Limitations ]     [ Requirements ]     [ Preparation / Installation ]      [ Usage ]      [ Reports ]      [ FAQ ]      [ Prospect ]










Concept

The concept mainly involves the following steps:


- Retrieval of message attributes from the source PI Network Manager through PowerShell Tools for the PI System (using the Get-PIServer and Get-PIMessage cmdlets)
- Looking up the message ID as the first message attribute, comparing it against the "known" IDs, and deciding about further message treatment
- Creation of a custom objects table in memory
- Refining information using regular expressions and storing the results as facts in the custom objects table
- Exporting these facts into a comma-separated file (CSV) or a self-describing XML file that can be used for further analysis in other software such as Microsoft Excel
- [Optional] Doing further analysis to generate certain KPIs and storing them in separate comma-separated (CSV) files.
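The parsing steps above can be sketched in a few lines of PowerShell. Note that the sample message text and the regular expression below are purely illustrative assumptions made for this sketch; the actual module works against the real PI Network Manager message formats.

```powershell
# Illustrative only: invented sample messages, NOT the real PI Network Manager format
$messages = @(
    'ID: 7039; Connection accepted. Name: piconfig, IP: 192.168.1.10, ConID: 42',
    'ID: 7039; Connection accepted. Name: ProcessBook, IP: 192.168.1.11, ConID: 43'
)

# Refine the information with a regular expression and store the results as facts
$facts = foreach ($msg in $messages) {
    if ($msg -match 'ID: (?<id>\d+);.*Name: (?<app>[^,]+), IP: (?<ip>[^,]+), ConID: (?<con>\d+)') {
        # PowerShell 2.0 compatible way to build a custom object
        New-Object PSObject -Property @{
            MessageID    = $Matches['id']
            Application  = $Matches['app']
            IPAddress    = $Matches['ip']
            ConnectionID = $Matches['con']
        }
    }
}

# Export the facts table for further analysis, e.g. in Microsoft Excel
$facts | Export-Csv 'connectionfacts.csv' -NoTypeInformation
```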


















Solution

The solution consists of the following files and folders:

- Folder (1): log file target
- Folder (2): holds the PowerShell module files
- Folder (3): target folder for results
- Folder (4) (\PISystemstats\Unit Tests & Examples): holds files that can be used for testing and automation
- PISYSSTATS Module Manifest (in Folder 2)
- PISYSSTATS Core Library (in Folder 2)
- "Automatic Debug - Test Unit.bat": Windows batch file to launch "Test Unit.ps1" (in Folder 4)
- "Test Unit.ps1": test script

When you apply modifications to the module script, all you need to do is save it and load the module again in a new PowerShell session. Alternatively, you can use the Remove-Module and Import-Module commands to load the modified module script. If you are just interested in using the solution, there is no need to edit the module script, but we hope many of you are curious enough to at least read through the code. There are a lot of comments intended to make it easier for you to understand what is done within certain functions.
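For example (the module name is taken from the manifest file name; adjust the path to wherever you extracted the package):

```powershell
# Unload the old version (ignore the error if it isn't loaded) and reload it
Remove-Module PISYSSTAT -ErrorAction SilentlyContinue
Import-Module "D:\PIPC\PISystemStats\bin\PISYSSTAT"
```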


To make the usage easier we have created a Windows batch file (Automatic Debug - Test Unit.bat) and a test script (Test Unit.ps1). If you want to use them, you will have to modify the paths specified inside both files. The batch file refers to the test unit script, while "Test Unit.ps1" refers to the folder where the module files are located (Folder 2).


You will likely have to unblock both PowerShell module files (Folder 2). To do so, open Windows Explorer, browse to the module files (\PISystemstats\bin\PISYSSTAT), right-click each one, select Properties, and click the [Unblock] button (at the very bottom of the General tab).
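If you happen to be on PowerShell 3.0 or later, the same can be done from the prompt with the Unblock-File cmdlet (the folder below is an example; adjust it to your installation):

```powershell
# PowerShell 3.0+ only: clear the "file downloaded from the internet" flag
Get-ChildItem "D:\PIPC\PISystemStats\bin\PISYSSTAT" -Filter "PISYSSTAT.ps*" | Unblock-File
```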


















Limitations

Starting with PI Server 3.4.380, structured messages were introduced to the PI Message Log; messages now include fields such as "Message ID". Hence our example requires PI Server 3.4.380 or later. Please note this is also a requirement for PowerShell Tools for the PI System.


Because the Message ID is a unique identifier for the kind of message, our example uses it to select the pattern of information exposed in the message body.


Furthermore, because the approach generates statistics based on log messages from PI Network Manager, and PI Network Manager does not log messages about existing connections periodically, we can only expect complete information for those connections that were established after the query period starts and closed before the query period ends. By default (tuning parameter MessageLog_DayLimit), PI Message logs are kept for 35 days only. Please keep this in mind when defining the query period and when interpreting the results.


Connection IDs are unique within a PI Network Manager session, but after a restart PI Network Manager starts counting from 0 again. To avoid confusion, we have built in logic that recognizes, based on connection start and end times, that PI Network Manager has been restarted. When this happens, the index field within the connection facts object is increased by one. All connections with the same index fall into the same period between two PI Server restarts (if applicable).


The PISYSTATS module may only work against the Primary PI Server in a collective environment. This is due to a bug introduced in a recent version of PowerShell Tools for the PI System that will likely be fixed in the next release. I expect this will also allow PISYSTATS to work against secondary collective members.
















Requirements

For those users new to PowerShell, please read Jay's blog post about PowerShell Tools for the PI System.


- Windows PowerShell is supported on Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012. Please note: while recent Windows versions such as Windows Server 2012 ship with Windows PowerShell included, it has to be installed separately on older Windows platforms such as Windows XP.
- Windows PowerShell 2.0 or later
- PowerShell Tools for the PI System or greater















Preparation / Installation

Some of the following steps may be obvious to experienced PowerShell users but may puzzle some users at "beginner" level.


1. Make sure you have Windows PowerShell 2.0 or higher installed ("Add or Remove Programs" / "Programs and Features") on the machine that you would like to use. If this machine is remote to your PI Server, note that you don't have to install PowerShell on the PI Server node itself.
2. Download and install the most recent version of OSIsoft PowerShell Tools for the PI System, available exclusively at the vCampus Download Center under "Extras".
3. Download the package attached to this post. Create a folder on the target machine (e.g. D:\PIPC\PISystemStats) and extract the content of the attached ZIP file into this folder.


We have some additional recommendations that you may find useful when working with PowerShell, OSIsoft PowerShell Tools for the PI System, and PowerShell scripts in general.


1. Creating a Profile for PowerShell initialization
You may have noticed that after installing OSIsoft PowerShell Tools for the PI System, the corresponding cmdlets are not available in new PowerShell sessions until you execute




Add-PSSnapin OSIsoft.Powershell

Things like this can easily be automated by creating a Windows PowerShell Profile. Please see e.g. the MSDN Library for additional information about the Windows PowerShell Profile.
To create a PowerShell Profile, create profile.ps1 at C:\Users\<username>\Documents\WindowsPowerShell, where <username> refers to your logon name. Edit the file, enter the above command, and save profile.ps1.
To verify that it is working, launch a new PowerShell session and see if you can immediately use the PowerShell Tools for the PI System cmdlets, e.g.:
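For example, you could confirm the snap-in is loaded and list the cmdlets it provides (on PowerShell 2.0, Get-Command takes the snap-in name via -PSSnapin):

```powershell
# Confirm the snap-in is loaded and list its cmdlets;
# Get-PIServer and Get-PIMessage should appear in the output
Get-PSSnapin OSIsoft.Powershell
Get-Command -PSSnapin OSIsoft.Powershell
```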





Instead of loading the profile, you may see an error like the following one:


File C:\Users\<username>\Documents\WindowsPowerShell\profile.ps1 cannot be loaded because the execution
of scripts is disabled on this system. Please see "get-help about_signing" for more details.           

At line:1 char:2                                                                                       
+ . <<<<  'C:\Users\<username>\Documents\WindowsPowerShell\profile.ps1'                                
    + CategoryInfo          : NotSpecified: (:) [], PSSecurityException                                
    + FullyQualifiedErrorId : RuntimeException                                                         


The reason for this error is that script execution is disabled by default. Please scroll down to learn how you can enable script execution. Before moving on to the next topic, you may be interested in how to add the Get-PISystemStats module to your profile. For this purpose you need to provide the path to the PISYSSTATS library and manifest file, which depends on where you've extracted the package to. In order to do something new, let's introduce the use of a PowerShell variable for this purpose. Please add the following two lines to your profile.ps1 file:


$modulePath = "D:\PIPC\PS\bin\PISYSSTAT"
Import-Module $modulePath

Save the changes to disk and launch a new PowerShell session. To see if the profile has loaded properly, you can look for the PISYSTATS help:


Get-Help Get-PISystemStats



2. Enable script execution


There are several different levels for the script execution policy. To see the details you could, for example, use


Get-Help Set-ExecutionPolicy

There are again many additional options. I personally like using the online help:


Get-Help Set-ExecutionPolicy -online

Of course this doesn't work with Windows Server Core installations, but back to the topic... Our recommendation for setting the execution policy is the following, but you may want to check with your local IT:


Set-ExecutionPolicy RemoteSigned

If you run into any issues, please try launching PowerShell as Administrator to see if that helps. Please feel free to open a thread in the Scripting with Windows PowerShell forum for any kind of issue you experience.




















Usage

At this point we expect you to have the Get-PISystemStats module loaded already. If the following examples do not work for you, please revisit the Preparation / Installation section. If you still see errors, please post them at the Scripting with Windows PowerShell forum.


I would like to start by looking at the cmdlet's help first, which is usually a good idea with PowerShell:


PS C:\Users\Gregor> Get-Help Get-PISystemStats




NAME
    Get-PISystemStats

SYNOPSIS
    This function returns PI System statistics.

SYNTAX
    Get-PISystemStats [-ServerName] <String> [-StartTime] <String> [-EndTime <String>] [-OutputMode <Int32>] [-ShowUI <Boolean>] [-DBGLevel <Int32>] [<CommonParameters>]

DESCRIPTION
    This function returns PI System statistics. The Syntax is ...
    Get-PISystemStats ([-ServerName | -sn] <string>)
                      ([-StartTime | -st] <string>)
                      [[-EndTime | -et] <string>]
                      [[-OutputMode | -om] <int>]
                      [[-ShowUI] <boolean>]
                      [[-DBGLevel | -dbgl] <int>]

REMARKS
    To see the examples, type: "get-help Get-PISystemStats -examples".
    For more information, type: "get-help Get-PISystemStats -detailed".
    For technical information, type: "get-help Get-PISystemStats -full".


You should also try the following and look at the results:


Get-Help Get-PISystemStats -Examples
Get-Help Get-PISystemStats -Detailed
Get-Help Get-PISystemStats -Online

The link in the online help directs you to a forum thread, to which we will add a link to this blog post as soon as it is published. You may have to sign in with your SSO account, but isn't that cool?


Ok, let's calm down and continue discussing the cmdlet's parameters:

- -ServerName: Name of the PI Server
- -StartTime: Period start
- -EndTime: Period end
- -OutputMode: Report type definition
- -ShowUI: Show output No / Yes
- -DBGLevel: Debug level

The -ServerName parameter is required and specifies the PI Server to retrieve the messages from. It's also valid to use the name of a Collective but the connection will always go to the Primary because of a bug in PowerShell Tools for the PI System.


Messages from PI Network Manager will be retrieved for the period specified by -StartTime and -EndTime. The PI Server time format is used, e.g. "*-1d", "*" and "10-Apr-2013 12:23".


The -OutputMode parameter is optional and can be used to specify how PISYSSTATS treats and aggregates results.


- 0 (default) manipulates the facts table internally and generates up to 5 different reports in CSV format (Application Stats.csv, Applications.csv, IP Addresses.csv, Total Connections during uptime.csv and Trusts.csv).
- 1 exports the facts table in CSV format (connectionfacts.csv).
- 2 exports the facts table in XML format (connectionfacts.xml), including schema and data.


With -ShowUI you can control whether PISYSSTATS writes to the prompt ($true, the default) or not ($false). Setting it to $false is useful when running Get-PISystemStats as a scheduled task. With ShowUI enabled, PISYSSTATS displays its progress and finally outputs to the prompt where the reports have been generated.


Process PI Log messages 
    402 / 654             
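As a sketch, a scheduled task could run the following two lines silently; the server name and the module path are examples and need to be adjusted to your environment:

```powershell
# Silent run suitable for a scheduled task: no prompt output, CSV facts export
Import-Module "D:\PIPC\PISystemStats\bin\PISYSSTAT"
Get-PISystemStats "MyPIServer" "*-1d" -EndTime "*" -OutputMode 1 -ShowUI $false
```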


With -DBGLevel = 0 (the default), only non-verbose messages are logged; these are mainly the start and end of cmdlet execution. Level 1 shows warnings and internal messages, and level 2 shows parsing results and internal variables.


We found it useful to combine a batch file and a PowerShell script file for testing purposes. Another possible use case is scheduling PISYSSTATS together with one or more other tasks. Hence we enclosed the "Unit Tests & Examples" folder and its content in our delivery. Before executing "Automatic Debug - Test Unit.bat" for the first time, please review its content and modify the path to "Test Unit.ps1" if necessary.


REM 64 bit
powershell -NoLogo -ExecutionPolicy RemoteSigned -WindowStyle Normal -NoExit -File "C:\DevProjects\PISYSSTAT\v1.0.0.8\Unit Tests & Examples\Test Unit.ps1"

Please note that the batch executes PowerShell with the ExecutionPolicy set to RemoteSigned. The changed execution policy only lives inside the session launched by the batch; it is not persistent.


Please also review the path to "PISYSSTAT.psd1" and "PISYSSTAT.psm1" within "Test Unit.ps1" as well as the name of your PI Server node (here core-tex).


# Import the module
$modulePath = "C:\DevProjects\PISYSSTAT\v1.0.0.8\bin\PISYSSTAT"
Import-Module $modulePath

# See all available cmdlets
#Get-Command -Module "PISYSSTAT"

#Get-Help Get-PISystemStats -full
Get-PISystemStats "core-tex" "*-5d" -om 1 -dbgl 0















Reports

Except when executing Get-PISystemStats in silent mode (-ShowUI $false), the path to the reports will be shown at the command prompt, e.g.:


The export process is completed. See the generated files under the folder: C:\DevProjects\PISYSSTAT\v1.0.0.8\Export\20130821_151611


Please note that the date and time are encoded in the folder name that is generated for a specific execution. You will find the resulting reports within that folder. What content you'll find depends on the selected OutputMode. I would have loved to share an example of the connectionfacts report and to discuss some of the details, but even after hiding several columns, there are far too many to fit into the blog post format. The table is too wide, and the majority of the columns would be cut off. Explaining the individual columns doesn't appear necessary, since I believe the headers are self-explanatory.
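If the table is too wide for a blog post or a quick look in Excel, you can always load a report back into PowerShell and inspect a few rows at a time; the folder name below is from my sample run, yours will differ:

```powershell
# Read the facts report back in and list the first rows column by column
$report = Import-Csv "C:\DevProjects\PISYSSTAT\v1.0.0.8\Export\20130821_151611\connectionfacts.csv"
$report | Select-Object -First 5 | Format-List
```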















Frequently Asked Questions

As just said, I expect the reports to be self-explanatory, but you may still have one or the other question. I'll try to answer the ones that I expect to come up, and reserve the freedom to add more later.

Q1: I see complete information for some connections but incomplete information for others. Why is this?

This is one of the limitations mentioned above. PI Network Manager recognizes incoming connection attempts, successful attempts, and when a connection is closed. When reporting these events to the PI Message Log on the PI Server, it uses different messages that have unique message IDs and contain certain kinds of information. When we query the PI Message Log, we need to specify a period by StartTime and EndTime. This introduces the risk of excluding related messages and the information they contain. That said, we quite likely miss information that was logged before StartTime and after EndTime. This is also true for connections that are still alive when specifying EndTime as '*'.

Q2: Connectionfacts report contains an "Index" field that I couldn't locate in any of the messages from PI Network Manager. What is this about?

PI Network Manager counts connections by ID. It starts counting at 1 with the first incoming connection after the PI Network Manager service starts. In other words, if a PI Server is restarted for some reason, PI Network Manager will recycle connection IDs. To avoid confusing information from before and after a restart, we introduced the 'Index' column, which starts counting at 0 and is increased by 1 every time PI Network Manager reports that it has just been started. You can consider all information with the same Index as belonging together, while the same connection ID showing up with a higher Index indicates that PI Network Manager has been restarted. Since almost all PI services have a service dependency on PI Network Manager, you can consider a PI Network Manager restart a restart of the PI Server, but not necessarily a reboot of the PI Server node.


















Prospect

Have you tried PISYSTATS already? What information / report do you find valuable? Is it contained in one of the specific reports (OutputMode=0) or within the Connectionfacts (OutputMode 1 and 2)? If it's in the Connectionfacts, what's your idea for refining it further (using Excel, enhancing PISYSTATS, or something else, e.g. using the XML Connectionfacts report)?




We are curiously looking forward to your feedback and encourage you to develop the "project" by introducing new functionality, e.g. reports. When you do so, we would appreciate you sharing your work with the community. Hence, please introduce a new OutputMode when implementing your own treatment of the information.




We also encourage you to explore the abilities and the power of PowerShell and PowerShell Tools for the PI System. This is cool stuff with a bright future! Why wait for the future? Let's make it happen now!



Connected World Magazine hosted the M2M App Challenge this summer from June 7-9 in Santa Clara, California. This was very lucky for us at DST Controls, as we are only about an hour away. All of our PI System developers were up for the challenge, so we entered and looked forward to the fun!


A little background about the M2M App Challenge. Each team had 36 hours to develop an application that used one or more technologies from five partners. The selection was OSIsoft, Aeris Communications, Esri, ILS Technology, and ioBridge. The U.S. Department of Energy also provided us with information on how to access thousands of open data sets from the U.S. Government. The rules were pretty simple: start an application from scratch at 9:00 p.m. on Friday and be ready for judging the following Sunday at 10:00 a.m. I think there was a one-hour break in there. As if we needed only an hour of rest. Food and drinks, mostly of the quick-energy variety, were provided to keep us fueled.


Our team consisted of five DST’ers, Benny Bray, Andrew Pong, Justin Bagley, Roozbeh Nakhaee, and me. We are all PI programmers, so we naturally selected OSIsoft’s platform, but not wanting to just go with what we knew, we also planned on using Esri’s mapping and GIS technology.


Our concept was to create an application that we called SmartCity. This app would pull data in from a PI System and visualize information associated with a city's infrastructure. Things like utilities, traffic, weather, and demographics would be combined to provide a view of a city to help people better understand how their communities behave. Our inspiration was SimCity, but instead of watching a simulation, we intended to use real data and have a playback feature. A user could pick any time range, press play, and watch the city come to life. We also wanted to include a live view, where real-time information is displayed and objects are animated.


Since we had five developers, part of the challenge for us was to figure out how to design, split work, and create a working application in a very short time.  We all had to learn new things. It was great to see everyone focused so intently for a short time on a single purpose.  I was surprised with what we were able to accomplish.


Three of us worked on the back end of the system. This involved getting a PI System up and running, creating a cloud-based web API (using Microsoft's Azure cloud platform) that would allow our application to access AF and PI data directly, and populating the system with data from open data sets provided by the U.S. Government. We also created simulation data for information that we did not have access to, such as utilities.


An interesting twist that OSIsoft provided was a live feed of a car's data as it drove around the Bay Area. This data was collected and stored on our PI Server. In our application the car data represented traffic information.


The other two team members focused on using the Esri .NET SDK to create a map and overlay objects representing various aspects of the city. The last goal was to add animation and playback.


We didn't just want to use the PI System to store our real-time time-series data; we also took advantage of AF to describe our city's assets. Because of time limitations, we had to settle on a few asset types. The first one we picked was weather stations, with actual data and locations pulled from the National Oceanic and Atmospheric Administration, also known as NOAA. The NOAA National Weather Service is pretty much the primary source of all weather information for the US. The data is freely available to anyone, so we grabbed about 1,000 weather stations with a year's worth of data and imported it into our PI System.


The other two asset types were pipelines and vehicles. For all the assets we stored geolocation data to enable us to display objects on the map. The application used our API to get asset information and Esri's API for the map. As the user zoomed in and out and panned, we queried the PI System to get all assets within a set radius, such as 10 miles. Current values were shown as animation features, like changing colors based on temperature or line pressure. If the user picked a date range in the past, a play button was presented and the user could watch values change over that period. We also provided a slider bar so the user could move back and forth over the time range.


We were able to meet all our goals and showed the judges a working system!


Here are some screen shots:






What was the experience like?


This was the first time any of us had participated in a hackathon. It was long, hard, and at times seemed impossible. I would never do it again. Just kidding, it was actually the opposite in all aspects. Time was our enemy; we just didn't have enough. Sleep was also an enemy; we didn't have enough. Red Bulls, coffee, and other power drinks were our friends; we had plenty. The support from the OSIsoft and Esri staff was also a huge friend.


The line I gave upper management was that this would be a great team-building event. That's a frequently overused term, so I try to avoid saying stuff like that; but I have to admit, once you have programmed side-by-side with a few people for two days, you get to know each other. We learned how to work together and manage our time. After it was over we felt pretty proud of what we had done, and each of us did a part of the application, regardless of skills or ability.


Even though I kept telling myself that it did not matter whether we won or not, I still wanted to win! I guess it is just the competitive spirit in me. We placed 2nd on both the OSIsoft and Esri platforms, which is pretty respectable, and we even won $3,000. Looking back I think that was a great accomplishment, and I am over not taking first. What really matters most to me was the experience. We learned so much about working together, using new technologies, and just having fun. Even though we spend many of our normal working days programming, this was nothing like that. It was a blast and I would do it again.


Here is the team:




Top to Bottom, Left to Right: Roozbeh Nakhaee, Justin Bagley, Benny Bray, Andrew Pong, Lonnie Bowling




Roozbeh: That is not a good taste in my mouth, Justin: Let’s step outside, Benny: Can a person be any cooler, Andrew: This is my best tough look, Lonnie: Time to sleep, must sleep.


Here is a video of me summarizing the project.


That is a wrap! I'm looking forward to the next one!




In this blog post, I will first introduce what the Geolocation API is and how it can be used to find your current position, which is returned with a given accuracy depending on the best location information source available. I will show how to create an HTML5 web page that displays your latitude and longitude as well as their accuracy. In the next blog post, I will introduce the mapping technology to add the ability to display your location on a map.


What is Geolocation API?

 The Geolocation API is an effort by the World Wide Web Consortium (W3C) to standardize an interface to retrieve the geographical location information for a client-side device.


It is ideally suited to web applications for mobile devices such as personal digital assistants (PDAs) and smartphones. However, because of the wide variety of devices and mobile browsers, which usually lack a plugin architecture, there is not yet widespread support on such platforms. On desktop computers, the W3C Geolocation API works in Firefox since version 3.5, Google Chrome, Opera 10.6, Internet Explorer 9.0, and Safari 5. On mobile devices, it works on Android (firmware 2.0+), iOS, and Windows Phone.


The technology that determines the location, however, varies greatly depending on the device capabilities and the client’s environment.


Geolocation Technologies

There are at least 10 different systems in use or being developed that a device could use to identify its location. In most cases, several are used in combination, with one stepping in where another becomes less effective. You can find the 10 different systems in this link. Below you can find information about the four most important ones.



GPS

Using GPS satellites to track location allows for maximum accuracy and specificity, especially in a low, flat area. In rural areas the geolocation accuracy of a technology using GPS is nearly 100%. However, if you are trying to track people who live in cities, the "valleys" of the urban landscape and the tall buildings often throw off GPS sensors. Additionally, GPS satellites typically take a long time to "fix" a user's location, and if a user is indoors, the whole system might not work properly.


Wi-Fi Positioning

In any given location, there may be dozens of Wi-Fi routers or providers. Wi-Fi positioning triangulates your position based on how many Wi-Fi networks are in range of your computer. Skyhook Wireless maintains a comprehensive database of wireless access points, and this database allows for highly accurate location gathering, particularly in urban environments which may hold thousands or even millions of wireless networks. It takes a short time to fix a user's location, and all it requires is a single software installation - ideally part of a web browser package - since most computers and mobile devices come with wireless radio. The only problem is that in rural areas, Wi-Fi positioning isn't very useful due to the relative lack of wireless networks.


IP Geolocation

IP geolocation is easy to implement on a website, but it suffers from limited specificity. Each IP block corresponds roughly to a geographical area, so you can usually figure out which city someone is in. However, it often produces false positives, and the data should be checked against other forms of geolocation.


Cell Tower Triangulation

A lot like Wi-Fi positioning but on a larger scale, cell tower triangulation figures out where a user is based on the cellular signals the user receives from towers. This is mostly useful for mobile devices that have built-in cellular radios, and it serves to fill in any gaps left by the three sources of geolocation information above.



Creating a simple HTML5 page showing your geolocation information

When developing web applications, one option is to write code in a text editor, compile it at the command line, write HTML and CSS in a separate application, and manage your database in yet another one. A better option, however, is to use Visual Studio, which enables you to perform all of these tasks, and more, from a single environment.


Let’s start by creating a web project in Visual Studio 2012 using the downloaded template “Pure HTML Web Site”. You can find more information about this template in my previous blog post “HTML5 in Visual Studio 2012”.


Choose a name and a location for your solution and click OK.


In Solution Explorer, double-click the index.html file to view its content. Replace its content with the code shown below:



<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Your current position page</title>
</head>
<body>
    <div style="width:800px; height:50px;">
        <p>Information using the getCurrentPosition method:</p><br />
        <span id="gCP"> </span>
        <br />
        <p>Information using the watchPosition method:</p><br />
        <span id="wP"> </span>
    </div>
    <script type="text/javascript">
        var gCP = document.getElementById('gCP');
        var wP = document.getElementById('wP');

        if (navigator.geolocation) {
            navigator.geolocation.getCurrentPosition(showLocation, errorHandler, {
                maximumAge: 100,
                timeout: 6000,
                enableHighAccuracy: true
            });

            var l = navigator.geolocation.watchPosition(showMovingLocation, errorHandler, {
                maximumAge: 100,
                timeout: 6000,
                enableHighAccuracy: true
            });
        }
        else {
            alert('Geolocation not supported');
        }

        function showLocation(pos) {
            gCP.innerHTML =
                'Your latitude: ' + pos.coords.latitude +
                ' and longitude: ' + pos.coords.longitude +
                ' and timestamp ' + pos.timestamp +
                ' (Accuracy of: ' + pos.coords.accuracy + ' meters)';
        }

        function showMovingLocation(pos) {
            wP.innerHTML =
                'Your latitude: ' + pos.coords.latitude +
                ', longitude: ' + pos.coords.longitude +
                ', speed: ' + pos.coords.speed +
                ', altitude: ' + pos.coords.altitude +
                ' and timestamp ' + pos.timestamp +
                ' (Accuracy of: ' + pos.coords.accuracy + ' meters)';
        }

        function errorHandler(e) {
            if (e.code === 1) { // PERMISSION_DENIED
                gCP.innerHTML = 'Permission denied. - ' + e.message;
            } else if (e.code === 2) { // POSITION_UNAVAILABLE
                gCP.innerHTML = 'Make sure your network connection is active and ' +
                    'try this again. - ' + e.message;
            } else if (e.code === 3) { // TIMEOUT
                gCP.innerHTML = 'A timeout occurred; try again. - ' + e.message;
            }
        }
    </script>
</body>
</html>


Press F5 to debug your application. Internet Explorer should open, showing a page similar to the screenshot below:



The latitude and longitude shown above are from the center of the city of São Paulo, Brazil.


Explaining geolocation objects

As mentioned, the Geolocation API has access to several common sources of location information, including the Global Positioning System (GPS) and location inferred from network signals such as the IP address, RFID, Wi-Fi and Bluetooth MAC addresses, and GSM/CDMA cell IDs, as well as user input. Nevertheless, there is no guarantee that the API will return the device's actual location.


The Geolocation API is provided by the geolocation object which is accessed through the navigator object.


To begin, the JavaScript code checks whether geolocation is supported by the browser. If it is not, a message is displayed to the user. If it is supported, the code runs the getCurrentPosition() and watchPosition() methods.
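
That support check can be factored into a small helper. The following is a minimal sketch; the function name hasGeolocation is hypothetical, and the navigator-like object is passed in as a parameter so the helper can also be exercised outside the browser:

```javascript
// Hypothetical helper: returns true when the Geolocation API is available.
// Taking the navigator-like object as a parameter keeps the check testable
// outside the browser, where the global navigator does not exist.
function hasGeolocation(nav) {
    return Boolean(nav && 'geolocation' in nav);
}

// In a browser you would call it with the real navigator object:
// if (hasGeolocation(navigator)) { /* call getCurrentPosition() */ }
// else { alert('Geolocation not supported'); }
```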


If the getCurrentPosition() method succeeds, it passes a position object containing the coordinates to the callback function specified in its first parameter (showLocation).


The latitude, longitude and accuracy properties are always returned. The other properties are returned only if available. Each property is described below:

  • The latitude and longitude attributes are geographic coordinates specified in decimal degrees.
  • The altitude attribute denotes the height of the position, specified in meters above the mean sea level. If the implementation cannot provide altitude information, the value of this attribute must be null.
  • The accuracy attribute denotes the accuracy level of the latitude and longitude coordinates. It is specified in meters and must be supported by all implementations. The value of the accuracy attribute must be a non-negative real number.
  • The altitudeAccuracy attribute is specified in meters. If the implementation cannot provide altitude information, the value of this attribute must be null. Otherwise, the value of the altitudeAccuracy attribute must be a non-negative real number.
  • The heading attribute denotes the direction of travel of the hosting device and is specified in degrees, where 0° ≤ heading < 360°, counting clockwise relative to the true north. If the implementation cannot provide heading information, the value of this attribute must be null. If the hosting device is stationary (i.e. the value of the speed attribute is 0), then the value of the heading attribute must be NaN.
  • The speed attribute denotes the magnitude of the horizontal component of the hosting device's current velocity and is specified in meters per second. If the implementation cannot provide speed information, the value of this attribute must be null. Otherwise, the value of the speed attribute must be a non-negative real number.
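
Because the optional attributes may be null, display code should check them before formatting. The sketch below illustrates this with a hypothetical helper, describeCoords, applied to a plain coordinates-like object:

```javascript
// Hypothetical helper that formats a coordinates-like object, treating the
// optional attributes (altitude, speed) as nullable, as the specification
// allows. latitude, longitude and accuracy are always present.
function describeCoords(coords) {
    var parts = [
        'lat: ' + coords.latitude,
        'lon: ' + coords.longitude,
        'accuracy: ' + coords.accuracy + ' m'
    ];
    if (coords.altitude !== null && coords.altitude !== undefined) {
        parts.push('altitude: ' + coords.altitude + ' m');
    }
    if (coords.speed !== null && coords.speed !== undefined) {
        parts.push('speed: ' + coords.speed + ' m/s');
    }
    return parts.join(', ');
}
```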



Concerning the error handler, the code attribute must return the appropriate code from the following list:

  • PERMISSION_DENIED (numeric value 1)  - The location acquisition process failed because the document does not have permission to use the Geolocation API.
  • POSITION_UNAVAILABLE (numeric value 2) - The position of the device could not be determined. For instance, one or more of the location providers used in the location acquisition process reported an internal error that caused the process to fail entirely.
  • TIMEOUT (numeric value 3) - The length of time specified by the timeout property has elapsed before the implementation could successfully acquire a new Position object.
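
These numeric codes can be mapped to user-facing messages in a small pure function, which is easier to test than an error handler that writes directly to the page. The helper name geolocationErrorMessage below is hypothetical:

```javascript
// Hypothetical helper mapping a PositionError-like object (with numeric
// code and message attributes) to a user-facing string, using the codes
// from the specification.
function geolocationErrorMessage(err) {
    switch (err.code) {
        case 1: // PERMISSION_DENIED
            return 'Permission denied. - ' + err.message;
        case 2: // POSITION_UNAVAILABLE
            return 'Make sure your network connection is active and try again. - ' + err.message;
        case 3: // TIMEOUT
            return 'A timeout occurred; try again. - ' + err.message;
        default:
            return 'Unknown geolocation error. - ' + err.message;
    }
}
```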



If you have read through the HTML code, you will notice that we use not only the getCurrentPosition() method but also the watchPosition() method. So what is the difference between them? The watchPosition() method returns the user's current position and continues to deliver updated positions as the user moves (like the GPS in a car). To avoid seeing null values returned by this method, you should test it on a device with an accurate GPS sensor (such as a smartphone).
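
One more detail of the watch contract: watchPosition() returns an id, and passing that id to clearWatch() stops further updates. The sketch below illustrates this contract with a mock geolocation object, which is purely an illustration device (the _emit test hook is not part of the real API):

```javascript
// Minimal mock of the geolocation watch contract, for illustration only:
// watchPosition returns an id, and clearWatch(id) stops further updates.
// The _emit function is a test hook, not part of the real API.
function createMockGeolocation() {
    var nextId = 1;
    var watchers = {};
    return {
        watchPosition: function (success) {
            var id = nextId++;
            watchers[id] = success;
            return id;
        },
        clearWatch: function (id) {
            delete watchers[id];
        },
        _emit: function (pos) { // simulate the device reporting a position
            Object.keys(watchers).forEach(function (id) {
                watchers[id](pos);
            });
        }
    };
}

// Usage mirrors the real API: keep the id so you can stop watching later.
var geo = createMockGeolocation();
var updates = [];
var watchId = geo.watchPosition(function (pos) { updates.push(pos.coords.latitude); });
geo._emit({ coords: { latitude: -23.55 } });
geo.clearWatch(watchId);
geo._emit({ coords: { latitude: -23.56 } }); // ignored after clearWatch
```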




I hope you find this blog post about geolocation interesting and stay tuned for my upcoming blog posts! The next one will probably be about mapping using HTML5.
