Problems with enforcing strictness in Haskell

If I want to pretend that Haskell is strict, and I have an algorithm in mind that does not exploit laziness (for instance, it does not use infinite lists), what problems can occur if I use only strict data types and annotate every function I use to be strict in its arguments? Will there be a performance penalty, and if so, how bad? Can worse problems occur? I know it is dirty, pointless and ugly to mindlessly make every function and data type strict, and I do not intend to do so in practice, but I only want to understand whether, by doing so, Haskell becomes strict by default.

Secondly, if I tone down the paranoia and only make the data structures strict: will I have to worry about space leaks brought about by a lazy implementation only when I am using some form of accumulation? In other words, assume that the algorithm would not exhibit a space leak in a strict language. Also assume that I implemented it in Haskell using only strict data structures, but was careful to use seq to evaluate any variable that was being passed on in a recursion, or used functions which internally are careful to do that (like foldl'). Would I avoid any space leaks? Remember that I am assuming that in a strict language the same algorithm does not lead to a space leak, so this is a question about the implementation difference between lazy and strict.

The reason I ask the second question is that, apart from cases where one is trying to take advantage of laziness by using a lazy data structure (or a spine-strict one), all the examples of space leaks that I have seen until now involve thunks developing in an accumulator, because the function that was recursively called did not evaluate the accumulator before applying itself to it. I am aware that if one wants to take advantage of laziness then one has to be extra careful, but that caution would be needed in a strict-by-default language too.
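For concreteness, the kind of leak I mean is the classic lazy-accumulator sum (a minimal sketch; the names lazySum and strictSum are mine):

```haskell
import Data.List (foldl')

-- Lazy foldl builds the chain (((0+1)+2)+3)+... as one big thunk in the
-- accumulator; nothing is added until the final result is demanded.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces each partial sum before recursing, so it runs in
-- constant space -- exactly the seq-in-the-recursion discipline above.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```

Both compute the same answer; only the memory behaviour differs.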

Thank you.


Laziness speeding things up

You could be worse off. The naive definition of ++ is:

xs ++ ys = case xs of (x:xs') -> x : (xs' ++ ys)
                      []      -> ys

Laziness makes this O(1), though it may also add O(1) processing to extract the cons. Without laziness, the ++ needs to be evaluated immediately, causing an O(n) operation. (If you've never seen the O(.) notation, it is something computer science has stolen from engineers: given a function f, the set O(f(n)) is the set of all algorithms which are eventually at-worst-proportional to f(n), where n is the number of bits of input fed to the function. Formally, there exist k and N such that for all n > N the algorithm takes time less than k * f(n).) So I'm saying that laziness makes the above operation O(1), or eventually constant-time, but adds a constant overhead to each extraction, whereas strictness makes the operation O(n), or eventually linear in the number of list elements, assuming that those elements have a fixed size.
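You can see the O(1) behaviour directly: taking the head of an append only ever touches the first cons cell, so the rest of the append (here, a deliberate error) is never performed:

```haskell
-- head forces xs ++ ys only to weak head normal form: one cons cell.
-- The suspended remainder of the append, including the error, is
-- never evaluated.
lazyAppendDemo :: Int
lazyAppendDemo = head ([1, 2, 3] ++ error "tail never forced")
```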

There are some practical examples here, but the O(1) added processing time can potentially also "stack up" into an O(n) dependency, so the most obvious examples are O(n²) both ways. Still there can be a difference in these examples. For example, one situation that doesn't work well is using a stack (last-in first-out, which is the style of Haskell lists) for a queue (first-in first-out).

So here's a quick library consisting of strict left-folds; I've used case statements so that each line can be pasted into GHCi (with a let):

data SL a = Nil | Cons a !(SL a) deriving (Ord, Eq, Show)
slfoldl' f acc xs = case xs of Nil -> acc; Cons x xs' -> let acc' = f acc x in acc' `seq` slfoldl' f acc' xs'
foldl' f acc xs = case xs of [] -> acc; x : xs' -> let acc' = f acc x in acc' `seq` foldl' f acc' xs'
slappend xs ys = case xs of Nil -> ys; Cons x xs' -> Cons x (slappend xs' ys)
sl_test n = foldr Cons Nil [1..n]
test n = [1..n]
sl_enqueue xs x = slappend xs (Cons x Nil)
sl_queue = slfoldl' sl_enqueue Nil
enqueue xs x = xs ++ [x]
queue = foldl' enqueue []

The trick here is that both queue and sl_queue follow the xs ++ [x] pattern for appending an element to the end of a list, which builds up an exact copy of that list each time. GHCi can then run some simple tests. First we make the two lists and force their thunks, to show that constructing them is fast and not prohibitively expensive in memory:

*Main> :set +s
*Main> let vec = test 10000; slvec = sl_test 10000
(0.02 secs, 0 bytes)
*Main> [foldl' (+) 0 vec, slfoldl' (+) 0 slvec]
[50005000,50005000]
(0.02 secs, 8604632 bytes)

Now we do the actual tests: summing the queue-versions:

*Main> slfoldl' (+) 0 $ sl_queue slvec
50005000
(22.67 secs, 13427484144 bytes)
*Main> foldl' (+) 0 $ queue vec
50005000
(1.90 secs, 4442813784 bytes)

Notice that both of these suck in terms of memory performance (the list-append stuff is still secretly O(n²)): they eventually occupy gigabytes of space. But the strict version nevertheless occupies three times the space and takes ten times the time.

Sometimes the data structures should be changed

If you really want a strict queue, there are a couple of options. One is finger trees, as in Data.Sequence -- the viewr way of getting at the rightmost elements is a little complicated, but it works. However, that is a bit heavyweight, and one common solution is amortized O(1): define the structure

data Queue x = Queue !(SL x) !(SL x)

where the SL terms are the strict stacks above. Define a strict reverse, let's call it slreverse, the obvious way, then consider:

enqueue :: Queue x -> x -> Queue x
enqueue (Queue xs ys) el = Queue xs (Cons el ys)

dequeue :: Queue x -> Maybe (x, Queue x)
dequeue (Queue Nil Nil) = Nothing
dequeue (Queue Nil ys)         = dequeue (Queue (slreverse ys) Nil)
dequeue (Queue (Cons x xs) ys) = Just (x, Queue xs ys)

This is "amortized O(1)": each time that a dequeue reverses the list, costing O(k) steps for some k, we ensure that we are creating a structure which won't have to pay these costs for k more steps.
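Putting the pieces together, here is a self-contained sketch of the two-stack queue (SL and slreverse are repeated so it compiles standalone; dequeue reverses the back stack onto the front before popping, and drain is a hypothetical helper of mine):

```haskell
-- Spine-strict stack, as above.
data SL a = Nil | Cons a !(SL a) deriving (Eq, Show)

-- Accumulator reverse: the "obvious way".
slreverse :: SL a -> SL a
slreverse = go Nil
  where
    go acc Nil          = acc
    go acc (Cons x xs') = go (Cons x acc) xs'

-- Front stack for dequeuing, back stack for enqueuing.
data Queue x = Queue !(SL x) !(SL x)

emptyQ :: Queue x
emptyQ = Queue Nil Nil

enqueue :: Queue x -> x -> Queue x
enqueue (Queue xs ys) el = Queue xs (Cons el ys)

dequeue :: Queue x -> Maybe (x, Queue x)
dequeue (Queue Nil Nil)        = Nothing
dequeue (Queue Nil ys)         = dequeue (Queue (slreverse ys) Nil)
dequeue (Queue (Cons x xs) ys) = Just (x, Queue xs ys)

-- Pop everything off into an ordinary list, preserving FIFO order.
drain :: Queue x -> [x]
drain q = case dequeue q of
  Nothing      -> []
  Just (x, q') -> x : drain q'
```

Enqueuing 1, 2, 3 and draining gives them back in first-in first-out order.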

Laziness hides errors

Another interesting point comes from the data/codata distinction, where data are finite structures traversed by recursion on subunits (that is, every data expression halts) while codata are the rest of the structures -- strict lists vs. streams. It turns out that when you properly make this distinction, there is no formal difference between strict data and lazy data -- the only formal difference between strict and lazy is how they handle terms within themselves which loop infinitely: strict will explore the loop and hence will also loop infinitely, while lazy will simply hand the infinite-loop onwards without descending into it.

As such you will find that slhead (Cons x undefined) will fail where head (x : undefined) succeeds. So you may "uncover" hidden infinite loops or bugs when you do this.
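A sketch of the difference (slhead is the evident head for the strict-tailed SL type above):

```haskell
data SL a = Nil | Cons a !(SL a)

slhead :: SL a -> a
slhead (Cons x _) = x
slhead Nil        = error "slhead: empty"

-- head (1 : undefined) succeeds: the lazy cons never looks at its tail.
-- slhead (Cons 1 undefined) fails: forcing the Cons to pattern-match on
-- it also forces the strict tail field, which hits undefined.
</imports></comment>
```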

Caution when making "everything strict"

Not everything necessarily becomes strict when you use strict data structures in your language: notice that I made a point above to define a strict foldl', not a plain foldl, for both lists and strict lists. Common data structures in Haskell will be lazy -- lists, tuples, stuff in popular libraries -- and explicit calls to seq still help when building up a complicated expression.
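For instance, even foldl' with a strict spine leaks if the accumulator is a lazy tuple: foldl' forces the pair only to its constructor, leaving thunks inside. A sketch, with hypothetical names meanLeaky/meanStrict:

```haskell
import Data.List (foldl')

-- Leaks: the (sum, count) pair is forced to WHNF each step, but its
-- components accumulate as (+) thunks.
meanLeaky :: [Double] -> Double
meanLeaky xs = s / fromIntegral n
  where
    (s, n) = foldl' (\(a, c) x -> (a + x, c + 1)) (0, 0 :: Int) xs

-- Fixed: seq both components before rebuilding the pair, so the
-- partial sum and count are evaluated at every step.
meanStrict :: [Double] -> Double
meanStrict xs = s / fromIntegral n
  where
    (s, n) = foldl' step (0, 0 :: Int) xs
    step (a, c) x =
      let a' = a + x
          c' = c + 1
      in a' `seq` c' `seq` (a', c')
```

Bang patterns (step (!a, !c) x = ...) express the same thing more tersely.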
