RJKSolutions

I got schooled by a 2-year-old.

Blog Post created by RJKSolutions on May 24, 2012

Okay, so it was my son, and he is actually 2 and a half years old (as he often tells me, the half is important), but today he taught me a lesson or two that made me sit back, reflect, and come up with this blog post.

My son, Ethan, is someone I am very proud of because he is already a geek like me :) In fact, he even has his own iPad (an iPad 2, not the “new iPad” – that is mine!!!), and already the way he uses the user interface is so natural.  There is no clicking of the maximize button, no “File -> Open”, and so on; he literally just swipes his way through the iPad with ease, finding what he wants.

I had a long day today and was mentally exhausted, so I put my iPhone down, which was the cue for Ethan to come and grab it.  First, he swiped to unlock it and entered my code (I made the mistake of telling it to him once; he still hasn’t forgotten it), which he followed with “Oooo Daddy, you have 4 Facebooks and 12 e-mails”.  I smiled: “Facebooks”.  Ethan then proceeded to open Facebook and pulled down the news feed to refresh it.  I thought to myself, “why didn’t he click refresh or hit F5?”, then I realized that his generation just isn’t going to know that type of interface, especially on mobile or tablet devices.  It is already the norm for them at such a young age, something we old-school folk have had to adjust to, although the old way still comes back to haunt us.

The story continues…

After refreshing my news feed he then opened my notifications and shouted “Oooo Daddy tagged in a photo”, followed by a swift tap on the notification link.  At that point I sat up sharply, just in case the photo was something I wasn’t expecting.  Up popped the photograph, upon which he spread his fingers to zoom in and found me in it.  Instead of pinching to zoom out, he double-tapped the screen and it zoomed out… well, I did not know that!  So that was my lesson on how to interface with my own phone.  Needless to say, I shall be watching how Ethan uses his iPad to pick up some more tips.

The story continues just a little bit more…

When I have a few minutes spare I have an obsession (no, not the kind you have to go to meetings for): the Microsoft Kinect.  To date I have quietly concentrated in my office on streaming data from the Kinect device to the PI Server 2012, using AF SDK Rich Data Access (RDA) on Windows Server 8, as fast and efficiently as possible – some great results so far.
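For anyone curious what that kind of pipeline looks like, here is a rough sketch only – not my actual implementation – of pushing Kinect SDK v1 skeleton joint positions straight to the PI snapshot using the AF SDK PIPoint class.  The tag name and the default-server assumption are placeholders for illustration.

```csharp
// Rough sketch: stream Kinect skeleton joint positions into PI snapshot values
// via AF SDK 2.5 Rich Data Access. The tag name and default-server choice are
// placeholders, not a real configuration.
using System;
using System.Linq;
using Microsoft.Kinect;      // Kinect for Windows SDK v1
using OSIsoft.AF.Asset;      // AFValue
using OSIsoft.AF.Data;       // AFUpdateOption
using OSIsoft.AF.PI;         // PIServers, PIServer, PIPoint (RDA)
using OSIsoft.AF.Time;       // AFTime

class KinectToPi
{
    static PIPoint _handX;

    static void Main()
    {
        // Resolve a (placeholder) PI tag on the default PI Data Archive.
        PIServer piServer = new PIServers().DefaultPIServer;
        _handX = PIPoint.FindPIPoint(piServer, "Kinect.Skeleton1.HandRight.X");

        // Start the first connected Kinect sensor and subscribe to skeleton frames.
        KinectSensor sensor = KinectSensor.KinectSensors
            .First(s => s.Status == KinectStatus.Connected);
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();

        Console.ReadLine();   // stream until Enter is pressed
        sensor.Stop();
    }

    static void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;

            Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            foreach (Skeleton skeleton in skeletons)
            {
                if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;

                // Write the right-hand X position straight to the PI snapshot.
                float x = skeleton.Joints[JointType.HandRight].Position.X;
                _handX.UpdateValue(new AFValue(x, AFTime.Now), AFUpdateOption.Insert);
            }
        }
    }
}
```

For throughput you would more likely buffer the joint values and write them in batches rather than one value per frame, but the shape of the pipeline is the same.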

After bathing Ethan I scuttled off to my home office and fired up my laptop.  Ethan came into the office while I was watching the performance counter of the PI Server snapshot events/second increasing and decreasing as my tracked skeleton moved about on the screen.  He laughed.  He then spotted the second Kinect sensor lying around in my office and said, “I’ll use this one, you use that one”.  “Okay,” I said apprehensively.  I started waving at my Kinect sensor and Ethan saw my skeleton on the screen.  He laughed some more.  He then turned around so we were back to back, put his Kinect sensor on the floor facing him and started to wave at it using the opposite arm to the one I was using.  “Bing” – that was the sound of the light bulb going on above my head.  Before I work on any visualization or application with all this data being collected, I need a second pair of eyes: in this case a second Kinect sensor validating the data produced by the first.  The Kinect SDK supports multiple Kinect sensors, so I now know what I am going to be doing for the foreseeable future – working with two huge streams of data, each validating the other as its mirror image.
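If you haven’t tried more than one sensor, enumerating them is straightforward.  Here is a rough, illustrative sketch of starting every connected sensor so two parallel skeleton streams can be cross-checked; the per-sensor PI tag routing is only hinted at in a comment.

```csharp
// Rough sketch: start every connected Kinect sensor so that two (or more)
// devices produce parallel skeleton streams that can be compared later.
using System;
using System.Linq;
using Microsoft.Kinect;

class MultiKinect
{
    static void Main()
    {
        var sensors = KinectSensor.KinectSensors
            .Where(s => s.Status == KinectStatus.Connected)
            .ToList();

        for (int i = 0; i < sensors.Count; i++)
        {
            int sensorIndex = i;              // captured for use in the handler
            KinectSensor sensor = sensors[i];
            sensor.SkeletonStream.Enable();
            sensor.SkeletonFrameReady += (s, e) =>
            {
                using (SkeletonFrame frame = e.OpenSkeletonFrame())
                {
                    if (frame == null) return;
                    // Route this frame to the tags belonging to sensorIndex,
                    // e.g. "Kinect.Sensor{N}.HandRight.X", so the two streams
                    // can be compared (mirrored, since we stood back to back).
                }
            };
            sensor.Start();
        }

        Console.ReadLine();
        sensors.ForEach(s => s.Stop());
    }
}
```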

To give you all an appreciation of how all this data can be put to use for visualization or for a natural user interface application, PI Event Frames play a huge part in what I have built so far.  Imagine correlating specific events or gestures a user makes whilst using your application for viewing and interacting with real-time process data… not only do you see the user acknowledge an alarm, but you could check whether they were even looking at what they were doing whilst acknowledging that alarm!  Quite literally, validate how a user interacts with your application.
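To make that a little more tangible, here is a rough sketch of wrapping a detected gesture in an event frame with the AF SDK.  The database, element path and names are made up purely for illustration, not my actual asset model.

```csharp
// Rough sketch: bracket a detected gesture (say, "acknowledge alarm") with a
// PI Event Frame so it can later be correlated with what the user was doing.
// The database, element path and names below are placeholders.
using System;
using OSIsoft.AF;              // PISystems, AFDatabase
using OSIsoft.AF.Asset;        // AFElement
using OSIsoft.AF.EventFrame;   // AFEventFrame (PI Server 2012 / AF 2.5)
using OSIsoft.AF.Time;         // AFTime

class GestureEventFrames
{
    static void Main()
    {
        AFDatabase db = new PISystems().DefaultPISystem.Databases.DefaultDatabase;
        AFElement operatorElement = db.Elements["ControlRoom"].Elements["Operator1"];

        // Pretend the gesture recognizer reported these start/end times.
        AFTime gestureEnd = AFTime.Now;
        AFTime gestureStart = new AFTime(gestureEnd.UtcTime.AddSeconds(-2));

        // Create the event frame, tie it to the operator element, and check it in.
        AFEventFrame gesture = new AFEventFrame(db, "Acknowledge Alarm Gesture");
        gesture.SetStartTime(gestureStart);
        gesture.SetEndTime(gestureEnd);
        gesture.PrimaryReferencedElement = operatorElement;
        gesture.CheckIn();
    }
}
```

From there, the event frame’s start and end times give you the window over which to pull back the process data and the tracking data and ask what the user was actually doing.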

Take it even further into the future and you drive a new car off the dealership forecourt with a Kinect sensor embedded into the dashboard, able to detect if you are tired or if you didn’t check your mirrors before taking that turn (face tracking correlated with the signals from the indicator stalk and light bulb, and the speed of the car for distance to the turn)… a car’s black box based on the PI System & Kinect!

Anyway, what started out as a bit of a random story did flourish into what I was trying to get across, and what I see people like our community friend Lonnie already blogging about: the world is being taken over by 2-year-olds who don’t need mice, they just need their fingers and a piece of glass.
