# Scaling of large numbers

The obvious algorithm for scaling is as follows:

``````csharp
public static double Scale(this double value, double scaleMin, double scaleMax)
{
    if (scaleMin > scaleMax) throw new ArgumentOutOfRangeException(nameof(scaleMin));
    return scaleMin + value * (scaleMax - scaleMin);
}
``````
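Since Python floats are the same IEEE 754 doubles, the formula can be sketched there for a quick sanity check (the function and parameter names below are mine, and `value` is assumed to be normalised to [0, 1]):

```python
def scale(value, scale_min, scale_max):
    # Map value from [0, 1] linearly onto [scale_min, scale_max].
    if scale_min > scale_max:
        raise ValueError("scale_min must not exceed scale_max")
    return scale_min + value * (scale_max - scale_min)

print(scale(0.5, 0.0, 10.0))   # midpoint of the target range: 5.0
print(scale(0.0, -3.0, 7.0))   # lower bound: -3.0
```

For everyday ranges this works exactly as expected; the trouble only starts when the endpoints approach the representable limits.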

The less obvious drawback is that it overflows for most of the range it 'claims' to cover: whenever `scaleMax - scaleMin` exceeds `double.MaxValue`, the intermediate difference becomes infinite even when the final result is perfectly representable. Consider the following test, which fails but does not have to:

``````csharp
[Theory]
[InlineData(1)]
public void ScaleTestBorderCase10(double factor)
{
    double max = double.MaxValue / factor;
    double min = double.MinValue / factor;
    double value = factor;

    // Mathematically, scaling value = 1 onto [min, max] yields max,
    // but the naive implementation returns infinity because
    // scaleMax - scaleMin overflows.
    Assert.Equal(max, value.Scale(min, max));
}
``````
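The failure mode is easy to reproduce outside .NET. Python floats are the same IEEE 754 doubles, so the overflow of the intermediate difference can be checked directly (`sys.float_info.max` corresponds to `double.MaxValue`):

```python
import math
import sys

hi = sys.float_info.max    # like double.MaxValue
lo = -hi                   # like double.MinValue

# The span of the range overflows even though both endpoints are finite.
print(math.isinf(hi - lo))          # True

# So the naive formula returns infinity for value = 1,
# although the exact result, hi, is representable.
naive = lo + 1.0 * (hi - lo)
print(math.isinf(naive))            # True
```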

It fails because `scaleMax - scaleMin` overflows: subtracting a large negative `scaleMin` effectively adds its magnitude to `scaleMax`, producing infinity. Everywhere I look (1, 2, 3, 4), no one considers these cases. The best approach proposed so far is by Mark Shevchenko:

``````csharp
public static double ScaleSafe(this double value, double scaleMin, double scaleMax)
{
    // Algebraically identical to scaleMin + value * (scaleMax - scaleMin),
    // but distributed so that scaleMax - scaleMin is never computed.
    return scaleMin + value * scaleMax - value * scaleMin;
}

public static double Scale(this double value, double scaleMin, double scaleMax)
{
    if (scaleMin > scaleMax) throw new ArgumentOutOfRangeException(nameof(scaleMin));
    var tmp = value * (scaleMax - scaleMin);
    if (double.IsInfinity(tmp))
    {
        // The fast path overflowed; fall back to the distributed form.
        return value.ScaleSafe(scaleMin, scaleMax);
    }
    return scaleMin + tmp;
}
``````
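To see why the distributed form helps, the border case can be replayed in Python (a sketch of the same arithmetic, not the author's code): with `value` in [0, 1], every partial sum stays within the representable range, so the result comes out exact.

```python
import sys

def scale_safe(value, scale_min, scale_max):
    # Distributing the multiplication avoids ever forming
    # scale_max - scale_min, whose magnitude can reach twice
    # the largest representable double.
    return scale_min + value * scale_max - value * scale_min

hi = sys.float_info.max
lo = -hi

print(scale_safe(1.0, lo, hi) == hi)   # True: the naive formula gave inf here
print(scale_safe(0.5, lo, hi))         # 0.0, the midpoint of [lo, hi]
```

Left-to-right evaluation matters: for `value = 1` the partial sums are `lo + hi = 0` and then `0 - lo = hi`, each safely finite.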