In .NET, how do I choose between a Decimal and a Double?

We were discussing this the other day at work, and I wish there were a Stack Overflow question I could point people at, so here goes:

  • What is the difference between a Double and a Decimal?
  • When (in what cases) should you always use a Double?
  • When (in what cases) should you always use a Decimal?
  • What are the driving factors to consider in cases that don't fall into one of the two camps above?
  • There are a lot of questions that overlap this question, but they tend to be asking what someone should do in a given case, not how to decide in the general case.


    I usually think about natural vs artificial quantities.

    Natural quantities are things like weight, height and time. These will never be measured absolutely accurately, and there's rarely any notion of absolutely exact arithmetic on them: you shouldn't generally be adding up heights and then checking that the result is exactly as expected. Use double for this sort of quantity. Doubles have a huge range, but limited precision; they're also extremely fast.

    The dominant artificial quantity is money. There is such a thing as "exactly $10.52", and if you add 48 cents to it you expect to have exactly $11. Use decimal for this sort of quantity. Justification: given that it's artificial to start with, the numbers involved are artificial too, designed to meet human needs - which means they're naturally expressed in base 10. Make the storage representation match the human representation. decimal doesn't have the range of double, but most artificial quantities don't need that extra range either. It's also slower than double, but I'd personally rather have a bank account which gave me the right answer slowly than a wrong answer quickly :)
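
    Here is a minimal C# sketch of what that exactness means in practice: summing ten tenths with double accumulates binary rounding error, while decimal lands exactly on 1.

      using System;

      class MoneySketch
      {
          static void Main()
          {
              double d = 0.0;
              decimal m = 0.0m;

              // Add one tenth ten times, as you might when totalling 10-cent items.
              for (int i = 0; i < 10; i++)
              {
                  d += 0.1;   // 0.1 has no exact binary representation
                  m += 0.1m;  // 0.1m is stored exactly in base 10
              }

              Console.WriteLine(d == 1.0);        // False
              Console.WriteLine(d.ToString("R")); // roughly 0.9999999999999999
              Console.WriteLine(m == 1.0m);       // True
              Console.WriteLine(m);               // 1.0
          }
      }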

    For a bit more information, I have articles on .NET binary floating point types and the .NET decimal type. (Note that decimal is a floating point type too - but the "point" in question is a decimal point, not a binary point.)


    If you want to keep real precision, stay with decimal.

    If you want to compare values exactly, stay with decimal.

    If you use double and do this:

    ? CType(1.0, Double) / 3
    

    you will get

    0.33333333333333331

    If you use decimal and do this:

    ? CType(1.0, Decimal) / 3
    

    you will get

    0.3333333333333333333333333333D

    And one more example, an extreme one:

      // 96-bit mantissa words (lo, mid, hi) = (1, 1, 1), positive sign, scale 28
      decimal dec = new decimal(1, 1, 1, false, 28);
      var dou = (double)dec;
    

    will produce the following; the double loses some precision:

    ? dou
    0.0000000018446744078004519
    ? dec
    0.0000000018446744078004518913D
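
    To see why: decimal stores a 96-bit integer mantissa plus a power-of-ten scale (here 10^-28), which carries more significant digits than double's 53-bit binary mantissa (roughly 15-17 decimal digits) can hold. A small sketch using decimal.GetBits to expose that layout:

      using System;

      class DecimalLayoutSketch
      {
          static void Main()
          {
              // Same value as above: mantissa words (lo, mid, hi) = (1, 1, 1), positive, scale 28.
              decimal dec = new decimal(1, 1, 1, false, 28);

              // GetBits returns the three 32-bit mantissa words plus a flags word
              // whose bits 16-23 hold the decimal scale.
              int[] bits = decimal.GetBits(dec);
              Console.WriteLine($"lo={bits[0]}, mid={bits[1]}, hi={bits[2]}, scale={(bits[3] >> 16) & 0xFF}");
              // lo=1, mid=1, hi=1, scale=28

              // The cast to double has to drop the digits that don't fit in 53 bits.
              double dou = (double)dec;
              Console.WriteLine(dou.ToString("R")); // roughly 1.8446744078004519E-09
          }
      }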

    in the end,

    double = approximation

    decimal = real thing


    SQL Server: It's also worth mentioning that decimal in SQL Server maps to Decimal and Nullable Decimal in the .NET Framework, while float in SQL Server maps to Double and Nullable Double. Just in case you end up dealing with a database app.
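
    As a minimal illustration (the table and column names here are hypothetical, and the classic System.Data.SqlClient provider is assumed), the mapping shows up directly when reading rows through ADO.NET: a SQL Server decimal column surfaces as System.Decimal, and a float column as System.Double.

      using System;
      using System.Data.SqlClient;

      class SqlTypeMappingSketch
      {
          static void Main()
          {
              using (var conn = new SqlConnection("<your connection string>"))
              using (var cmd = new SqlCommand("SELECT Price, Weight FROM Products", conn))
              {
                  // Assumed schema: Price decimal(18,2) NOT NULL, Weight float NULL.
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          decimal price = reader.GetDecimal(0);  // SQL decimal -> System.Decimal
                          double? weight = reader.IsDBNull(1)
                              ? (double?)null                    // SQL float NULL -> Nullable<double>
                              : reader.GetDouble(1);
                          Console.WriteLine($"{price} / {weight}");
                      }
                  }
              }
          }
      }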

    Oracle: I no longer work with Oracle (as you can see, it is crossed out in my profile information :) ), but for those who do work with Oracle, here is an MSDN article mapping the Oracle data types:

    http://msdn.microsoft.com/en-us/library/yk72thhd(VS.80).aspx
