ST Monad == code smell?

I'm working on implementing the UCT algorithm in Haskell, which requires a fair amount of data juggling. Without getting into too much detail, it's a simulation algorithm where, at each "step," a leaf node in the search tree is selected based on some statistical properties, a new child node is constructed at that leaf, and the stats corresponding to the new leaf and all of its ancestors are updated.
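
To give a rough idea of the shape I'm juggling, here's an illustrative sketch of the node type (the names are made up for this post, not my actual code):

-- Illustrative only: roughly the shape of the search tree I have in mind
data UCTNode move = UCTNode
    { visitCount  :: Int                       -- simulations that have passed through this node
    , totalReward :: Double                    -- accumulated payoff from those simulations
    , children    :: [(move, UCTNode move)]    -- explored children, keyed by move
    }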

Given all that juggling, I'm not really sharp enough to figure out how to make the whole search tree a nice immutable data structure à la Okasaki. Instead, I've been playing around with the ST monad a bit, creating structures composed of mutable STRefs. A contrived example (unrelated to UCT):

import Control.Monad
import Control.Monad.ST
import Data.STRef

data STRefPair s a b = STRefPair { left :: STRef s a, right :: STRef s b }

mkStRefPair :: a -> b -> ST s (STRefPair s a b)
mkStRefPair a b = do
    a' <- newSTRef a
    b' <- newSTRef b
    return $ STRefPair a' b'

derp :: (Num a, Num b) => STRefPair s a b -> ST s ()
derp p = do
    modifySTRef (left p) (\x -> x + 1)
    modifySTRef (right p) (\x -> x - 1)

herp :: (Num a, Num b) => (a, b)
herp = runST $ do
    p <- mkStRefPair 0 0
    replicateM_ 10 $ derp p
    a <- readSTRef $ left p
    b <- readSTRef $ right p
    return (a, b)

main = print herp -- should print (10, -10)
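
For comparison, the pure version of this toy example is just a one-liner:

herpPure :: (Num a, Num b) => (a, b)
herpPure = iterate (\(a, b) -> (a + 1, b - 1)) (0, 0) !! 10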

Obviously this particular example would be much easier to write without using ST, but hopefully it's clear where I'm going with this... if I were to apply this sort of style to my UCT use case, is that wrong-headed?

Somebody asked a similar question here a couple years back, but I think my question is a bit different... I have no problem using monads to encapsulate mutable state when appropriate, but it's that "when appropriate" clause that gets me. I'm worried that I'm reverting to an object-oriented mindset prematurely, where I have a bunch of objects with getters and setters. Not exactly idiomatic Haskell...

On the other hand, if it is a reasonable coding style for some set of problems, I guess my question becomes: are there any well-known ways to keep this kind of code readable and maintainable? I'm sort of grossed out by all the explicit reads and writes, and especially grossed out by having to translate from my STRef-based structures inside the ST monad to isomorphic but immutable structures outside.


I don't use ST much, but sometimes it is just the best solution. This can be in many scenarios:

  • There are already well-known, efficient ways to solve a problem. Quicksort is a perfect example of this. It is known for its speed and in-place behavior, which cannot be imitated well by pure code. (A rough ST sketch of this follows the list.)
  • You need rigid time and space bounds. Especially with lazy evaluation (and Haskell doesn't even specify whether there is lazy evaluation, just that it is non-strict), the behavior of your programs can be very unpredictable. Whether there is a memory leak could depend on whether a certain optimization is enabled. This is very different from imperative code, which has a fixed set of variables (usually) and defined evaluation order.
  • You've got a deadline. Although the pure style is almost always better practice and cleaner code, if you are used to writing imperatively and need the code soon, starting imperative and moving to functional later is a perfectly reasonable choice.
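
To illustrate the quicksort point, here is a minimal sketch of an in-place sort over a mutable unboxed array (Lomuto partition); this is just an illustration, not a tuned implementation:

import Control.Monad (forM_, when)
import Data.Array.ST (newListArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (elems)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- Minimal in-place quicksort sketch (Lomuto partition), purely illustrative.
quicksortST :: [Int] -> [Int]
quicksortST xs = elems $ runSTUArray $ do
    let n = length xs
    arr <- newListArray (0, n - 1) xs
    let swap i j = do
            vi <- readArray arr i
            vj <- readArray arr j
            writeArray arr i vj
            writeArray arr j vi
        sort lo hi = when (lo < hi) $ do
            pivot <- readArray arr hi
            iRef  <- newSTRef lo              -- next slot for elements <= pivot
            forM_ [lo .. hi - 1] $ \j -> do
                vj <- readArray arr j
                when (vj <= pivot) $ do
                    i <- readSTRef iRef
                    swap i j
                    writeSTRef iRef (i + 1)
            i <- readSTRef iRef
            swap i hi                         -- move the pivot into place
            sort lo (i - 1)
            sort (i + 1) hi
    sort 0 (n - 1)
    return arr
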
When I do use ST (and other monads), I try to follow these general guidelines:

  • Use Applicative style often. This makes the code easier to read and, if you do switch to an immutable version, much easier to convert. Not only that, but Applicative style is much more compact.
  • Don't just use ST. If you program only in ST, the result will be no better than a huge C program, possibly worse because of the explicit reads and writes. Instead, intersperse pure Haskell code where it applies. I often find myself using things like STRef s (Map k [v]). The map itself is being mutated, but much of the heavy lifting is done purely. (A small sketch of this pattern follows the list.)
  • Don't remake libraries if you don't have to. A lot of code written for IO can be cleanly, and fairly mechanically, converted to ST. Replacing all the IORefs with STRefs and IO with ST in Data.HashTable was much easier than writing a hand-coded hash table implementation would have been, and probably faster too.
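
Here is a small, hypothetical sketch of the STRef s (Map k [v]) pattern (all names are mine): the STRef is the only mutable cell, and all the real work is plain Data.Map code.

import Control.Monad.ST
import Data.STRef
import qualified Data.Map as M

-- Hypothetical example of the pattern: one mutable cell, pure map operations.
type Index s k v = STRef s (M.Map k [v])

insertValue :: Ord k => Index s k v -> k -> v -> ST s ()
insertValue ref k v = modifySTRef ref (M.insertWith (++) k [v])

lookupValues :: Ord k => Index s k v -> k -> ST s [v]
lookupValues ref k = M.findWithDefault [] k <$> readSTRef ref

demo :: [(Char, [Int])]
demo = runST $ do
    ref <- newSTRef M.empty
    mapM_ (\(k, v) -> insertValue ref k v) [('a', 1), ('b', 2), ('a', 3)]
    M.toList <$> readSTRef ref   -- [('a',[3,1]),('b',[2])]
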
One last note - if you are having trouble with the explicit reads and writes, there are ways around it.


    Algorithms which make use of mutation and algorithms which do not are different algorithms. Sometimes there is a straightforward bounds-preserving translation from the former to the latter, sometimes a difficult one, and sometimes only one which does not preserve complexity bounds.

    A skim of the paper reveals to me that I don't think it makes essential use of mutation -- and so I think a potentially really nifty lazy functional algorithm could be developed. But it would be a different but related algorithm to that described.

    Below, I describe one such approach -- not necessarily the best or most clever, but pretty straightforward:

    Here's the setup as I understand it -- A) a branching tree is constructed, B) payoffs are then pushed back from the leaves to the root, which then indicates the best choice at any given step. But this is expensive, so instead, only portions of the tree are explored to the leaves in a nondeterministic manner. Furthermore, each further exploration of the tree is determined by what's been learned in previous explorations.

    So we build code to describe the "stage-wise" tree. Then, we have another data structure to define a partially explored tree along with partial reward estimates. We then have a function of randseed -> ptree -> ptree that given a random seed and a partially explored tree, embarks on one further exploration of the tree, updating the ptree structure as we go. Then, we can just iterate this function over an empty seed ptree to get a list of increasingly more sampled spaces in the ptree. We then can walk this list until some specified cutoff condition is met.

    So now we've gone from one algorithm where everything is blended together to three distinct steps -- 1) building the whole state tree, lazily, 2) updating some partial exploration with some sampling of a structure and 3) deciding when we've gathered enough samples.
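
    Sketched as type skeletons (every name below is mine, invented just to make the three steps concrete; explore is left undefined, since that's where the actual algorithm would live):

    import Data.List (unfoldr)
    import System.Random (StdGen, split)

    -- Illustrative skeletons only; none of these names come from the description above.
    newtype GameTree move = GameTree [(move, GameTree move)]   -- 1) the full tree, built lazily

    data PTree move = PTree                                    -- 2) a partial exploration
        { unexplored :: GameTree move                          --    branches not yet expanded here
        , visits     :: Int
        , estimate   :: Double
        , explored   :: [(move, PTree move)]
        }

    -- one further exploration of the tree, driven by a random seed
    explore :: StdGen -> PTree move -> PTree move
    explore = undefined  -- the interesting part, elided here

    -- 3) iterate explorations from an "empty" PTree and walk the list until a cutoff holds
    search :: GameTree move -> StdGen -> (PTree move -> Bool) -> PTree move
    search tree gen done = head (filter done (scanl (flip explore) start seeds))
      where
        start = PTree tree 0 0 []
        seeds = unfoldr (Just . split) gen   -- an infinite supply of independent seeds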


    It can be really difficult to tell when using ST is appropriate. I would suggest you do it with ST and without ST (not necessarily in that order). Keep the non-ST version simple; using ST should be seen as an optimization, and you don't want to do that until you know you need it.
