Interesting, it looks like the analytics are off.
How do you read the data from the table? If you use a time-based lookup, you might see the effects of interpolation, since the timestamps are not exactly at the start of the day.
I would approach it like this:
- Create attributes that determine the Beginning of Month (BOM) and the End of the current Day (EOD), meaning today at e.g. 23:59:59.999.
- Use a SUM function in your table lookup to sum all values from the table whose timestamp column falls between these two timestamps.
I read data from the table as shown in the picture below; the results are shown there as well:
As you can see, all the values have timestamps at 12:00:00 AM, so I use the bom() and bod() functions to specify the event time, but TagTot returns interpolated data anyway.
I'm not sure the method you suggest is suitable for us. I assume I would only see two values: 0, and the total (sum) on the last day. But I need to see the running total (sum) from the beginning of the month for every day, like in the picture:
Yes, the interpolation causes this, as you read the data interpolated from the table. That's why I would recommend NOT using interpolation in your table DR, but simply summing the values based on a date range you feed into the lookup. I don't have the time to craft a full example, but essentially you need to feed the BOM and EOD values into the WHERE clause of your lookup: WHERE Production_Day >= BOM AND Production_Day <= EOD.
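A minimal sketch of what that lookup query could look like (table and column names here are placeholders, adjust them to your own table; @BOM and @EOD are substitution parameters referring to the attributes that hold the two timestamps):

```sql
-- Sketch only: table/column names are hypothetical.
-- @BOM and @EOD substitute the values of the BOM and EOD attributes.
SELECT SUM(Production_Value)
FROM Production_Table
WHERE Production_Day >= @BOM
  AND Production_Day <= @EOD
```

Because this sums the raw rows rather than interpolating between them, you get the event-weighted month-to-date total directly.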
Then use an analysis, clock-scheduled to run once a day, and save the result in a PI Point. I use the exact same approach for a very similar situation.
Using tags could potentially lead to incorrect data appearing: the operator can change a value at any time, for any timestamp, and we cannot control that.
Unfortunately, at the moment I don't see a different way to use AF Analytics.
Then you should not use an analysis to save the data in a PI Point, but instead use the logic described above to perform the sum on the fly.
The tricky part in doing that is ensuring that for every evaluation (e.g. when shown in a trend) you get timestamps feeding your calculations. To do that, your chain of calculations needs at least one PI Point that feeds timestamps. There is a KB article on that somewhere, but I can't find it quickly.
A different approach could be to use the stepped setting on your table lookup. Depending on the maximum latency with which the operator enters the data, you read the data for a specific day e.g. 8 hours after the start of the day, giving the operator 8 hours to enter the data. Next, create individual expressions in analytics to retrieve the values for individual days based on that latency, and build the logic to sum the days you need. A very clumsy approach, but I don't see many other options.
Or you could try to use event-based analytics to write the data back to the start of the day for an x number of days backward, depending on how many days after entry you allow the operators to change the value. That way you will have values for each day at the start of the day, and can use regular logic to create a month-to-date sum.
Essentially, it comes down to analyzing the facts you have about the data you receive, and then building a derived dataset on top of that to suit your final calculation, using either ad-hoc calculations or pre-calculated (stored result) calculations.
Hope this helps a bit. There are many options to resolve this, but it all depends on the source data you have and to what extent you know what data you have at what point in time.
The differences you are seeing between the calculation in Excel and the values calculated by Asset Analytics and stored in PI might be due to the time-weighted nature of the TagTot() function (see the documentation).
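To illustrate why the numbers can differ: TagTot() computes a time-weighted total and assumes the source values are rates in units per day, so a tag in other units needs scaling. A sketch in AF analysis syntax ('Production' is a placeholder attribute name):

```
// Sketch only: 'Production' is a hypothetical attribute.
// TagTot() is time-weighted and assumes the values are rates
// in units per day; if the tag is in units per hour, scale by 24.
TagTot('Production', BOM('*'), '*') * 24
```

An event-weighted SUM over the table rows, by contrast, simply adds the entered values and will match the Excel result.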
Roger gives the best suggestion in my opinion: BOM('*') and BOD('*') can be stored in attributes using the Analysis Data Reference, and those attributes can then be used as inputs to your table lookup. Something like this:
- Analysis Configuration:
- Table Lookup Configuration:
- The result:
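In case the screenshots don't come through, here is the configuration sketched in text (attribute, table, and column names are placeholders):

```
// Attribute "BOM_Time" - Analysis Data Reference, expression:
BOM('*')

// Attribute "BOD_Time" - Analysis Data Reference, expression:
BOD('*')

// Table Lookup attribute (hypothetical names):
// SELECT SUM(Value) FROM Production_Table
// WHERE Timestamp >= @BOM_Time AND Timestamp <= @BOD_Time
```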
The only downside I found with this approach is that you get a bad value at the beginning of the month:
You can probably work around that with an analysis that replaces the value for the first day with 0. Let me know if this helps.
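A possible sketch for that clean-up analysis, assuming the month-to-date result lives in a hypothetical attribute called 'MTD_Total'; BadVal() is a standard Asset Analytics function:

```
// Replace the bad value at the start of the month with 0.
If BadVal('MTD_Total') Then 0 Else 'MTD_Total'
```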