Why 2.9000000000000004 instead of 2.9?

This question already has an answer here:

  • Is floating point math broken?

  • How do I tell GHCi not to do that, and to show the results of operations on Doubles just as any other programming language (and calculator) would, and just as every 15-year-old would write them?

    Since those results are the actual results GHCi (and your standard calculator*) computes, you cannot change the internal representation of the result (see TNI's answer). Since you only want to show a fixed number of decimals, it is a matter of presentation rather than computation (compare printf("%.2f", ...) in C).

    A solution to this can be found at https://stackoverflow.com/a/2327801/1139697. It can be applied like this:

    import Numeric (showFFloat)
    
    -- render a value with exactly n digits after the decimal point
    fixedN :: RealFloat a => Int -> a -> String
    fixedN n x = showFFloat (Just n) x ""
    
    -- (2.3 -) subtracts each element from 2.3, same as (-) 2.3
    map (fixedN 2 . (2.3 -)) [4.0, 3.8, 5.2, 6.4, 1.3, 8.3, 13.7, 9.0, 7.5, 2.4]
    -- result: ["-1.70","-1.50","-2.90","-4.10","1.00","-6.00",...]
    

    Note that this won't work if you want to continue calculating with the results, since they are now strings. If you want exact arithmetic, you're better off using Rational anyway. Don't forget that your input has to be Rational as well in that case (see the sketch after the footnote below).

    * Yes, even your standard calculator does the same thing; the only reason you don't see it is the fixed presentation: it cannot show more than a fixed number of decimals.
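
    If you do switch to Rational, the computation from above stays exact end to end. A minimal sketch (the list values are taken from the example above; no imports needed, since Rational is in the Prelude):

    -- fractional literals at type Rational go through fromRational,
    -- so 2.3 really is the exact fraction 23/10 here
    diffs :: [Rational]
    diffs = map (2.3 -) [4.0, 3.8, 5.2]
    -- diffs == [(-17) % 10, (-3) % 2, (-29) % 10], i.e. exactly -1.7, -1.5, -2.9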


    Why does this happen?

    Because certain decimal numbers (such as 2.3) cannot be represented by a finite number of binary digits without rounding. Floating-point numbers have a limited number of digits, so they cannot represent all real numbers accurately: when there are more digits than the format allows, the leftover ones are omitted and the number is rounded.
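
    You can watch this happen in GHCi: toRational exposes the exact value a Double actually stores. A short session (outputs assume the usual IEEE 754 binary64 Double):

    ghci> toRational (2.3 :: Double)   -- the exact value stored for the literal 2.3
    2589569785738035 % 1125899906842624
    ghci> 5.2 - 2.3 :: Double          -- both operands were rounded before subtracting
    2.9000000000000004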

    You should probably read What Every Computer Scientist Should Know About Floating-Point Arithmetic and the answers to Is floating point math broken? (linked above).


    It's in the nature of floating-point numbers that they cannot exactly represent all real (or even all rational) numbers. Haskell's default conversion to string ensures that when the number is read back in, you get exactly the same value. If you want numbers printed differently, you can make your own type that shows them differently.

    Something like (untested):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Text.Printf (printf)
    
    newtype MyDouble = MyDouble { getMyDouble :: Double }
      deriving (Eq, Ord, Num, Real, RealFrac, Fractional, Floating)
    
    -- cap the precision; a bare "%g" shows full precision in GHC's Text.Printf
    instance Show MyDouble where show = printf "%.6g" . getMyDouble
    
    default (MyDouble)  -- pick MyDouble when a numeric type is ambiguous
    

    This creates a copy of the Double type, but with a different Show instance that prints only a few decimals. The default declaration makes the compiler pick this type when there is an ambiguity. To make the derived instances work you need the GeneralizedNewtypeDeriving extension, as in the pragma above.
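
    With those definitions in scope, an ambiguous numeric literal defaults to MyDouble and picks up the short rendering. A hypothetical check (the exact output of "%.6g" can vary slightly between base versions):

    main :: IO ()
    main = print (5.2 - 2.3)
    -- the literals default to MyDouble, so this prints a short form
    -- such as 2.9 (or 2.900000) instead of 2.9000000000000004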

    You could also try the CReal type from the numbers package.
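
    A sketch of that route, assuming the numbers package is installed (it exports Data.Number.CReal with showCReal :: Int -> CReal -> String, where the Int is the number of decimal places):

    import Data.Number.CReal (CReal, showCReal)
    
    main :: IO ()
    main = putStrLn (showCReal 10 (5.2 - 2.3 :: CReal))
    -- the literals convert exactly via fromRational, so this shows 2.9
    -- to the requested precision instead of a rounding artifact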
