A couple of weeks ago I participated in the OSIsoft UC2016 Hackathon. We had 22 hours to create an application based on a “Smart City” theme. We were given access to a copy of a production PI System for the San Diego International Airport. We also had representatives from the Airport on hand to help explain their goals and the challenges of running such a facility. Overall, the event was extremely well organized, and I felt like we were able to get to work right away without the usual time lost trying to connect to and understand a system you have never seen before.

 

Lisa Slaughter, Andrew Pong, and I formed a team called MOAR BOTS!!!1 We all work at DST Controls and had the idea to design a natural language interface to a PI System. Andrew actually came up with the idea while watching the Microsoft Build Conference keynote, which took place the week before our event. As luck would have it, Microsoft had just released a beta Bot Framework that we thought would be great to try out.

 

We spent most of the morning and afternoon learning how the bot technology works and what we could do with it. As evening rolled around, we spent a few hours spinning up a bot and tying it to the PI System. We kept our sights low: our goal was to be able to ask the bot, in a natural way, what the energy usage was for an area of the airport at a certain time. For example, we could say, “What was the energy usage for terminal 1 last Saturday?”, or “Tell me how much energy was used in the commuter terminal yesterday.” Either way, the bot would need to know three things: the KPI of interest (in this case energy usage), the area (terminal 1 or the commuter terminal), and the date. As you can guess, there are countless ways to ask even a simple question like this, so I was skeptical that we would get good results.

Our application followed this flow:

 

  1. A user asks a question within our application
  2. The question is sent to our bot service via an HTTP POST
  3. The bot gets the message payload and sends the text to Microsoft's Language Understanding Intelligent Service (LUIS)
  4. LUIS returns a JSON object that gives us a consistent structure of intents and entities
  5. We examine the intents and entities to structure a PI Web API query
  6. We query the PI System and return the results to the bot
  7. The bot sends the response back to the application

 

To get started, we developed a bot within Visual Studio. The bot is an HTTP service that applications can use to have a conversation with some kind of system. Bots handle the connection I/O to applications; they keep track of who the user is, translate languages, and handle conversation state. I think of the bot as the tour guide you have on a vacation to a foreign country. Here is a diagram from Microsoft:

image1.png

After the bot application is developed, it is deployed to Azure and it becomes a simple matter of posting HTTP requests to the endpoint. From there the bot does all the hard work.
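
To make “posting HTTP requests to the endpoint” concrete, here is a minimal sketch of how a client application could send a question to the bot. The URL is just a placeholder for wherever the bot is hosted, and I am leaving out any authentication the connector may require:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class BotClientSketch
{
    // Placeholder for the deployed bot's endpoint
    private const string BotEndpoint = "https://yourbot.azurewebsites.net/api/messages";

    public static async Task<string> AskBot(string question)
    {
        using (var client = new HttpClient())
        {
            // Our bot only looks at the Type and Text fields of the incoming Message
            var payload = JsonConvert.SerializeObject(new { Type = "Message", Text = question });
            var content = new StringContent(payload, Encoding.UTF8, "application/json");

            var response = await client.PostAsync(BotEndpoint, content);
            response.EnsureSuccessStatusCode();

            // The reply is a Message as well; its Text property carries the bot's answer
            return await response.Content.ReadAsStringAsync();
        }
    }
}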

For our initial bot service, we are just doing the basics and not really using the bot capabilities like tracking user or state information; we just grab the message and process it. This could have been accomplished with a simple API, but remember, this was just a start and our time was limited!

 

The first place where something interesting happens is the HTTP request to LUIS. We send the bot message to our LUIS service, which makes sense of the question and returns a JSON object. That object is then deserialized and passed to our PIServices.GetKPI method. The code that does this looks like this:

 

public async Task<Message> Post([FromBody]Message message)
{
    if (message.Type == "Message")
    {
        string appId = @"9c1d7df5-92be-4ade-ab29-7affaa91b797";
        string subKey = @"d1a9a95fc7b5400cb7996db63ed26f66";

        // Build the LUIS query URL: application id, subscription key, and the raw utterance
        string lroot = @"https://api.projectoxford.ai/luis/v1/application?id=" + appId + "&subscription-key=" + subKey + "&q=";

        string uri = lroot + Uri.EscapeDataString(message.Text);
        string val = "I did not understand...";
        using (var client = new HttpClient())
        {
            // Send the user's text to LUIS
            HttpResponseMessage msg = await client.GetAsync(uri);

            if (msg.IsSuccessStatusCode)
            {
                var response = await msg.Content.ReadAsStringAsync();
                var data = JsonConvert.DeserializeObject<piluis>(response);

                // LUIS returns intents ranked by confidence; act on the top one
                if (data.intents[0].intent == "GetKPI")
                {
                    var piService = new PIServices(serverUrl, baseElement, userName, password);
                    val = await piService.GetKPI(data);
                }
            }
        }
        // return our reply to the user
        return message.CreateReplyMessage(val);
    }
    else
    {
        return HandleSystemMessage(message);
    }
}

 

This is just an API call to LUIS. At this point, you might be wondering what LUIS is all about. I think this is where the really cool part happens. LUIS, the Language Understanding Intelligent Service, is the same technology that Microsoft’s Cortana uses. With a little training, which I will get to in a second, you can send the API some text and it will return a JSON object that breaks the information down into intents and entities. Those intents and entities let us turn the question into a typical PI Web API request.
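
In the code above, that JSON is deserialized into a small model class called piluis. I did not show its definition, so here is a minimal sketch limited to the fields that matter for this example; the nested class names are just labels I picked for illustration:

// Minimal sketch of the LUIS response model. The real response contains more
// fields; these are the only ones this example cares about.
public class piluis
{
    public LuisIntent[] intents { get; set; }  // intents ranked by confidence, best first
    public LuisEntity[] entities { get; set; } // the entities LUIS picked out of the text
}

public class LuisIntent
{
    public string intent { get; set; }  // e.g. "GetKPI"
    public double score { get; set; }   // confidence score
}

public class LuisEntity
{
    public string entity { get; set; }  // the matched text, e.g. "terminal 1"
    public string type { get; set; }    // e.g. "Asset", "KPI", "builtin.datetime.date"
    public double score { get; set; }
}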

 

To train LUIS, we enter some typical requests that a user would make and then tell the system what the intents are (like getting a KPI) and what the entities are, such as the type of KPI, the Asset (area of the airport), and the date/time. We were able to use a pre-defined entity for date and time and added KPI and Asset ourselves. When LUIS returns a result, it assigns a confidence score (0 to 100%) to each match, so you can judge how you want to handle it. We can examine a response from LUIS and pull out the information we need to go out and find the data:

 

public async Task<string> GetKPI(piluis message)
{
    try
    {
        string time = "*";          // "*" is PI time syntax for "now"
        string kpi = "";
        string kpiDescription = "";
        string asset = "";

        // Map the entities LUIS found onto the pieces of a PI query
        foreach (var e in message.entities)
        {
            switch (e.type)
            {
                case "Asset":
                    asset = e.entity;
                    break;
                case "KPI":
                    if (e.entity.Contains("energy"))
                    {
                        kpi = "Real Power";
                        kpiDescription = "Energy Usage";
                    }
                    break;
                case "builtin.datetime.date":
                case "builtin.datetime.time":
                    // Convert the LUIS date/time entity into a PI time string
                    time = LUISParse.ParseDateTime(e);
                    break;
                default:
                    break;
            }
        }

        if (asset != "" && kpi != "")
        {
            var data = await GetKPIData(asset, kpi, time);

            // Build a readable reply, e.g. "Terminal 1 Energy Usage: 1234.56 kW <timestamp>"
            string results = data["AssetName"].Value<string>();
            results += " " + kpiDescription + ": ";
            double x;
            double.TryParse(data["Value"].Value<string>(), out x);
            results += x.ToString("F2", CultureInfo.InvariantCulture);
            results += " " + data["UnitsAbbreviation"].Value<string>();
            results += " " + data["Timestamp"].Value<string>();
            return results;
        }
        else
        {
            return "I did not understand your request...";
        }
    }
    catch (Exception)
    {
        // Rethrow without resetting the stack trace
        throw;
    }
}
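
The GetKPIData call in the middle of that method is where the PI Web API request actually happens. I did not include that method above, but here is a rough sketch of the idea. The AF layout it assumes (one element per area of the airport, with an attribute named after the KPI) and the helper itself are illustrative, not the exact hackathon code:

// Rough sketch of GetKPIData (not the exact hackathon implementation).
// Assumes each area is an AF element under baseElement with an attribute named
// after the KPI, e.g. \\MYAFSERVER\SmartCity\Terminal 1|Real Power.
// Uses System.Net, System.Net.Http, and Newtonsoft.Json.Linq like the rest of the service.
private async Task<JObject> GetKPIData(string asset, string kpi, string time)
{
    var handler = new HttpClientHandler { Credentials = new NetworkCredential(userName, password) };
    using (var client = new HttpClient(handler))
    {
        // Look up the attribute by path to get its WebId
        string attributePath = baseElement + "\\" + asset + "|" + kpi;
        string findUri = serverUrl + "/attributes?path=" + Uri.EscapeDataString(attributePath);
        var attrJson = JObject.Parse(await client.GetStringAsync(findUri));
        string webId = attrJson["WebId"].Value<string>();

        // Ask the PI Web API for the value of that attribute at the requested time
        string valueUri = serverUrl + "/streams/" + webId + "/value?time=" + Uri.EscapeDataString(time);
        var valueJson = JObject.Parse(await client.GetStringAsync(valueUri));

        // Package up just the pieces GetKPI needs for its reply
        return new JObject
        {
            ["AssetName"] = asset,
            ["Value"] = valueJson["Value"]?.ToString() ?? "",
            ["UnitsAbbreviation"] = valueJson["UnitsAbbreviation"]?.ToString() ?? "",
            ["Timestamp"] = valueJson["Timestamp"]?.ToString() ?? ""
        };
    }
}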

 

If all goes well, we can make a call to the PI System using the PI Web API to get the information. Once that is done, we take the data and send a good response back. Here is a sample conversation:

image2.png

Note how I varied the way the question was asked, and in each case the GetKPI intent, Asset, and time were successfully determined. Training the LUIS application is fairly simple, at least for this example. I defined my intents and entities, then gave sample utterances, and finally highlighted the text that mapped to my intents and entities. Here is a screenshot:

 

 

The colors show what is mapped. The cool thing is, you can monitor this after the application is published and continue training based on what users are asking. That makes it possible to improve the bot over time based on actual use.

 

I think this technology is very interesting and I can see a lot of useful applications for it. My biggest surprise was how well it actually worked without much training. I’m pretty certain that we all will be interacting with bots much more in the coming years and we most likely won’t even realize that there is a bot on the other side!

 

Notes:

We will be presenting during the PI Dev Club Webinar “The Best of the Best: Smart Cities Programming Hackathon 2016”, on May 4th, 9:00 am PT.

I also want to do a detailed YouTube video that will step through the process. I will update this blog when I have that done.

 

I would love to hear what everyone thinks about this and how it could be used. Please post a comment if you have time!

 

Lonnie