
How can MAVIS help you search?

Blog Post created by RJKSolutions on Jun 2, 2011

I remember at this year's user conference in San Francisco someone (naming no names, Alex) asked me where I find the time to post on vCampus; well, here is an example.  Ethan (you all must know Ethan by now), who currently has a case of chicken pox, is lying on one arm whilst I am typing with the other; typing with one hand is harder than it sounds and takes ages!  Anyway, enough of the waffling, let's get to the real reason for this blog post...


OSIsoft has been talking about search quite a lot recently, including the project they are running to build a single, unified search system that will be part of Coresight and find its way into the other PI client tools.  So far I like what I hear from OSIsoft about what the search system will bring, so I am looking forward to that.  Shortly after this I started archiving some prototype projects I once started in PI and came across one that was a spin-off from a discussion we had on vCampus about archiving video streams into the PI System.  This soon got me thinking about searching video streams for data (image recognition...), swiftly followed by thoughts of what I was having for dinner that night.  Once I got my thoughts back on track, I wondered about the possibilities of being able to search within a video stream for significant events (this spawned other thoughts of event frames), or somehow find any video streams that match complex search criteria (even trawling the words spoken in a video stream).


So I set off with my best friend Google (maybe I should call on Bing one day) and within minutes, no, seconds, I came across MAVIS on Microsoft Research.  MAVIS = 'Microsoft Research Audio Video Indexing System'.  Rather than repeat the content from the article, I highly recommend you read the MAVIS article.  Here is another MAVIS link.


I understand some control systems already archive video streams of a process that can be shown along the same timeline as process data in the historian, but what if you could search the video in PI for either spoken words or recognized images (e.g. flaring) alongside your process data in one query!
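To make the idea a little more concrete, here is a minimal sketch of what such a combined query might look like.  Everything in it is hypothetical: MAVIS does not expose a public API that I know of, and the transcript index and process data below are made-up stand-ins for a speech-to-text index and a PI point.  The sketch simply finds where a given word was spoken in a video stream and pulls the process values recorded around each hit.

```python
from datetime import datetime, timedelta

# Hypothetical speech-to-text index for one video stream: (timestamp, word)
# pairs, the sort of output an indexer like MAVIS might produce.
transcript_index = [
    (datetime(2011, 6, 1, 14, 2, 5), "flaring"),
    (datetime(2011, 6, 1, 14, 2, 9), "pressure"),
    (datetime(2011, 6, 1, 15, 40, 0), "flaring"),
]

# Hypothetical process data: (timestamp, value) pairs for a single point,
# standing in for archive values pulled from the historian.
process_data = [
    (datetime(2011, 6, 1, 14, 2, 0), 96.4),
    (datetime(2011, 6, 1, 14, 3, 0), 120.8),
    (datetime(2011, 6, 1, 15, 39, 30), 130.1),
]

def search_video_and_process(word, window=timedelta(seconds=60)):
    """Find spoken-word hits, then gather process values near each hit."""
    hits = [t for (t, w) in transcript_index if w == word]
    results = []
    for hit in hits:
        nearby = [(t, v) for (t, v) in process_data
                  if abs(t - hit) <= window]
        results.append((hit, nearby))
    return results

# One "query" that spans both the video stream and the process data.
for hit_time, values in search_video_and_process("flaring"):
    print(hit_time, "->", values)
```

In a real system the transcript index would come from something like MAVIS and the values from the PI archive; the interesting part is that the join is just a time-window match, which is exactly the kind of query PI is built for.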


Interested to hear your thoughts.


After hearing the Microsoft talk at the regional seminar in Barcelona about how Kinect is being used by some developers, I'm off to hunt down the non-commercial SDK for Kinect next.
Coresight gestures, here we come!
