Although compression and exception settings are essential for data quality, the concepts are difficult to understand and to apply correctly. Compression improves signal quality, especially for noisy signals, if it is set correctly. But finding the correct settings can be tedious, especially when it is done by manually tweaking tags.

 

There is a way to calculate the best fit for the compression, but this requires a lot of data and some assumptions about the true signal. A more hands-on approach is to define target sampling rates for tags; for example, a good target for a slow-moving signal is roughly 10 sec./point, or 6 points/min.:

 

pointDistance [sec] = Abs(LastTime [sec] - FirstTime [sec]) / NoPoints

writeSpeed [points/min] = 60 / pointDistance
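
Both formulas translate directly into C#. The following is a minimal sketch, assuming the archived timestamps are available as a plain list of DateTime values (the method name is illustrative):

public static (double pointDistance, double writeSpeed) SamplingRate(IReadOnlyList<DateTime> timestamps)
{
    // pointDistance [sec] = Abs(LastTime - FirstTime) / NoPoints
    var pointDistance = Math.Abs((timestamps[timestamps.Count - 1] - timestamps[0]).TotalSeconds)
                        / timestamps.Count;
    // writeSpeed [points/min] = 60 / pointDistance
    var writeSpeed = 60 / pointDistance;
    return (pointDistance, writeSpeed);
}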

 

Since both exception and compression reduce the amount of data, most companies use a ratio of 3 to 5 to derive the exception deviation from the compression deviation. For example:

 

exception deviation = compression deviation / 3
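
In code this is a one-liner; the values below are purely illustrative:

var compressionDeviation = 0.9;                                              // e.g. in engineering units
var compressionExceptionRatio = 3.0;                                         // typical range: 3 to 5
var exceptionDeviation = compressionDeviation / compressionExceptionRatio;   // 0.3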

 

Since for fast-moving signals the exception deviation degrades the signal quality, it is recommended to use a larger compression-to-exception ratio. The cost function in C# to optimize the compression looks as follows:

 

// Holds the outcome of one evaluation of the cost function.
public class OptimizationResult
{
    public double PointDistance { get; }
    public double WriteSpeed { get; }
    public double Delta { get; }
    public int NoPoints { get; }
    public bool IsValid { get; }

    public OptimizationResult(double pointDistance, double writeSpeed, double delta, int noPoints, bool isValid)
    {
        PointDistance = pointDistance;
        WriteSpeed = writeSpeed;
        Delta = delta;
        NoPoints = noPoints;
        IsValid = isValid;
    }
}

 

public static OptimizationResult CompressionCostFunction(List<TimeValue> rawValues,
    double targetSpeed,
    double compression,
    double compressionExceptionRatio)
{
    // Derive the exception deviation from the compression deviation.
    var exception = compression / compressionExceptionRatio;
    var exceptionAndCompression = new ExceptionAndCompression(exception, compression);

    // Run the raw values through exception and compression; values that are
    // filtered out come back as null and are dropped.
    var compressedValues = rawValues
        .Select(timeValue => exceptionAndCompression.Calculate(timeValue))
        .Where(compValue => compValue != null)
        .ToList();

    if (compressedValues.Count < 2) return new OptimizationResult(0, 0, 0, compressedValues.Count, false);

    // pointDistance [sec] = Abs(LastTime - FirstTime) / NoPoints
    var pointDistance = Math.Abs((compressedValues[0].TimeStamp - compressedValues[compressedValues.Count - 1].TimeStamp).TotalSeconds)
                        / compressedValues.Count;
    // writeSpeed [points/min] = 60 / pointDistance
    var writeSpeed = 60 / pointDistance;
    // The squared distance to the target write speed is the cost to minimize.
    var delta = Math.Pow(targetSpeed - writeSpeed, 2);
    return new OptimizationResult(pointDistance, writeSpeed, delta, compressedValues.Count, true);
}
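
The cost function relies on two types that are not shown here: a TimeValue holding a timestamp and a value, and an ExceptionAndCompression class whose Calculate method returns the incoming value if it should be archived and null otherwise. The sketch below shows the assumed shape of these types; the filter is reduced to a simple deviation band and is not the full exception test plus swinging-door compression used by the archive:

public class TimeValue
{
    public DateTime TimeStamp { get; }
    public double Value { get; }

    public TimeValue(DateTime timeStamp, double value)
    {
        TimeStamp = timeStamp;
        Value = value;
    }
}

public class ExceptionAndCompression
{
    private readonly double _exceptionDeviation;
    private readonly double _compressionDeviation;
    private TimeValue _lastKept;

    public ExceptionAndCompression(double exceptionDeviation, double compressionDeviation)
    {
        _exceptionDeviation = exceptionDeviation;
        _compressionDeviation = compressionDeviation;
    }

    // Returns the incoming value if it should be archived, otherwise null.
    // Simplification: exception and compression are collapsed into a single
    // deviation band around the last archived value.
    public TimeValue Calculate(TimeValue incoming)
    {
        if (_lastKept == null ||
            Math.Abs(incoming.Value - _lastKept.Value) > _exceptionDeviation + _compressionDeviation)
        {
            _lastKept = incoming;
            return incoming;
        }
        return null;
    }
}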

 

Optimizing this one-dimensional problem then only requires a simple search method such as the Golden Section Search:

 

public static double GoldenSectionSearch(List<TimeValue> rawValues,
    Func<List<TimeValue>, double, double, double, OptimizationResult> func,
    double a,
    double b,
    double targetSpeed,
    double compressionExceptionRatio,
    double tau = 1e-7, int maxiter = 1000)
{
    OptimizationResult cOptimizationResult, dOptimizationResult;
    var gr = (Math.Sqrt(5) - 1) / 2;

    var c = b - gr * (b - a);
    var d = a + gr * (b - a);
    var n = 0;
    while (true)
    {
        n++;
        // Evaluate the cost function at both interior points of the bracket.
        cOptimizationResult = func(rawValues, targetSpeed, c, compressionExceptionRatio);
        dOptimizationResult = func(rawValues, targetSpeed, d, compressionExceptionRatio);

        if (Math.Abs(c - d) < tau || n > maxiter) break;
        // The write speed is not strictly monotonic in the compression deviation
        // (the point count changes in steps), so the standard comparison is
        // tweaked to avoid premature convergence on a plateau.
        if ((cOptimizationResult.NoPoints == dOptimizationResult.NoPoints && cOptimizationResult.WriteSpeed < targetSpeed)
            || (cOptimizationResult.NoPoints != dOptimizationResult.NoPoints && cOptimizationResult.Delta < dOptimizationResult.Delta))
        {
            // Keep the left part of the bracket.
            b = d;
            d = c;
            c = b - gr * (b - a);
        }
        else
        {
            // Keep the right part of the bracket.
            a = c;
            c = d;
            d = a + gr * (b - a);
        }
    }
    return (b + a) / 2;
}
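
Putting the pieces together, a call with the settings listed below might look like this; LoadRawValues is a placeholder for whatever data access layer returns the raw archive values of a tag:

// LoadRawValues is a hypothetical helper that returns the last 1000 raw
// values of the tag as a List<TimeValue>.
var rawValues = LoadRawValues("SINUSOID", 1000);

// Search the compression deviation between 0 and 10 (10% of range) for a
// target write speed of 9 points/min and a compression-to-exception ratio of 4.
var optimalCompression = GoldenSectionSearch(rawValues, CompressionCostFunction, 0, 10, 9, 4);

// Re-evaluate the cost function at the optimum to report the resulting write speed.
var result = CompressionCostFunction(rawValues, 9, optimalCompression, 4);
Console.WriteLine($"CompDev = {optimalCompression:F2}, write speed = {result.WriteSpeed:F2} points/min");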

 

Here are the results of optimizing the default SINUSOID tag, which is configured with a scan rate of 1 sec.:

 

Settings:

     Target Write Speed = 9 points/min

     Compression-to-Exception Ratio = 4 (exception deviation = compression deviation / 4)

     Lower Bound = 0

     Upper Bound = 10 (10% of range)

     Window Size = 1000 (1000 data points for calculation)

 

Result:

     No Points = 147 (after exception and compression)

     Point Distance = 6.77 seconds

     Write Speed = 8.86 points/min.

 

Write speed, or the average scan rate, is a concept that is much easier to understand than exception and compression. Therefore some companies simply select an interface scan rate and remove exception and compression altogether. A much better approach is to optimize the compression settings and let exception and compression down-sample the raw signal.