
I've been using Scala at work, and to understand Functional Programming more deeply I picked up Graham Hutton's Programming in Haskell (love it :)

In the chapter on Monads I got my first look at the concept of Applicative Functors (AFs).

In my (limited) professional Scala capacity I've never had to use AFs and have always written code that uses Monads. I'm trying to distill an understanding of "when to use AFs", hence this question. Is this insight correct:

If all your computations are independent and parallelizable (i.e., the result of one doesn't feed into another), your needs would be better served by an AF, provided the output only needs to be piped to a pure function without effects. If, however, you have even a single dependency, AFs won't help and you'll be forced to use Monads. If the output needs to be piped to a function with effects (e.g., one returning Maybe), you'll need Monads.

For example, if you have “monadic” code like so:

val result = for {
  x <- callServiceX(...)
  y <- callServiceY(...) // not dependent on x
} yield f(x, y)

It's better to do something like the following (pseudo-AF syntax for Scala, where |@| acts as a separator between the parallel/async calls):

val result = (callServiceX(...) |@| callServiceY(...)).f(_,_)

  • If f is pure and the callService* calls are independent, AFs will serve you better.
  • If f has effects, i.e., f(x,y): Option[Response], you'll need Monads.
  • If the chain is callServiceX(...); y <- callServiceY(...); callServiceZ(y), i.e., there is even a single dependency in it, use Monads.

Is my understanding correct? I know there's a lot more to AFs/Monads, and I believe I understand the advantages of one over the other (for the most part). What I want to know is the decision-making process for deciding which one to use in a particular context.

  • At least in Haskell, there's a language extension that lets you write code using the do construct, but then the compiler determines whether it can be desugared to use an Applicative instance rather than a Monad instance (a sketch of this follows below). The primary goal was to determine when the code could be safely parallelized; I'm not sure if there are any other compiler-level reasons to prefer one over the other.
    – chepner
    Commented Mar 3, 2020 at 1:21
  • This answer I wrote back in 2013 is still relevant: stackoverflow.com/a/19881777/334519
    Commented Mar 3, 2020 at 9:03
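
A minimal sketch of the extension chepner describes (my illustration, not from the thread; the name pairUp is made up):

{-# LANGUAGE ApplicativeDo #-}

-- With ApplicativeDo enabled, GHC sees that neither bound value feeds
-- into a later action, desugars the block to (,) <$> fa <*> fb, and
-- infers only an Applicative constraint rather than Monad.
pairUp :: Applicative f => f a -> f b -> f (a, b)
pairUp fa fb = do
    x <- fa
    y <- fb
    pure (x, y)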

2 Answers


There is not really a decision to be made here: always use the Applicative interface, unless it is too weak.1

It's the essential tension of abstraction strength: more computations can be expressed with Monad; computations expressed with Applicative can be used in more ways.
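
One classic illustration of that extra usability (my example, not the answerer's): an error-accumulating validation functor is a lawful Applicative but has no Monad instance consistent with that behavior.

-- A tiny validation type. The Applicative instance runs both sides
-- and accumulates all errors. A Monad instance could not do this:
-- (>>=) needs the value inside Success to continue, so it would have
-- to short-circuit at the first Failure.
data Validation e a = Failure e | Success a
  deriving Show

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

instance Semigroup e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2)  -- keep both errors
  Failure e  <*> Success _  = Failure e
  Success _  <*> Failure e  = Failure e
  Success f  <*> Success a  = Success (f a)

For instance, (,) <$> Failure ["bad x"] <*> Failure ["bad y"] evaluates to Failure ["bad x","bad y"], whereas a monadic bind could only have reported the first error.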

You seem to be mostly correct about the conditions where you need to use Monad. I'm not sure about this one:

  • If f has effects, i.e., f(x,y): Option[Response], you'll need Monads.

Not necessarily. What is the functor in question here? There is nothing stopping you from creating an F[Option[X]] if F is the applicative. But, just as before, you won't be able to make further decisions in F depending on whether the Option succeeded or not -- the whole "call tree" of F actions must be knowable without computing any values.
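
A Haskell rendering of that point (hypothetical names; the Scala F[Option[X]] becomes f (Maybe r) here):

import Control.Applicative (liftA2)

-- The two effectful actions are combined applicatively even though
-- the pure combining function returns a Maybe; the Maybe just nests
-- inside f. What Applicative alone cannot do is run further f-actions
-- conditionally on whether that Maybe turned out Just or Nothing.
combine :: Applicative f => (x -> y -> Maybe r) -> f x -> f y -> f (Maybe r)
combine f fx fy = liftA2 f fx fy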


1 Readability concerns aside, that is. Monadic code will probably be more approachable to people from traditional backgrounds because of its imperative look.

  • Re your footnote, doesn't ApplicativeDo basically make them look the same?
    – Joseph Sible-Reinstate Monica
    Commented Mar 3, 2020 at 17:48
  • @JosephSibleReinstateMonica In principle. I hate that extension, though; it doesn't work reliably (e.g., it doesn't know what to do with let).
    – luqui
    Commented Mar 3, 2020 at 19:15

I think you'll need to be a little cautious about terms like "independent" or "parallelizable" or "dependency". For example, in the IO monad, consider the computation:

foo :: IO (String, String)
foo = do
    line1 <- getLine
    line2 <- getLine
    return (line1, line2)

The first and second lines are not independent or parallelizable in the usual sense. The second getLine's result is affected by the action of the first getLine through their shared external state (i.e., the first getLine reads a line, implying the second getLine will not read that same line but will rather read the next line). Nonetheless, this action is applicative:

foo = (,) <$> getLine <*> getLine

As a more realistic example, a monadic parser for the expression 3 + 4 might look like:

expr :: Parser Expr
expr = do
    x <- factor
    op <- operator
    y <- factor
    return $ x `op` y

The three actions here are interdependent. The success of the first factor parser determines whether or not the others will be run, and its behavior (e.g., how much of the input stream it absorbs) clearly affects the results of the other parsers. It would not be reasonable to consider these actions as operating "in parallel" or being "independent". Still, it's an applicative action:

expr = factor <**> operator <*> factor
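
If the <**> operator is unfamiliar: it is <*> with its arguments flipped, so the parser above can also be spelled with the more common combinators. Here is an equivalent form, written generically over any Applicative (my rephrasing, with factor and operator as parameters):

-- (<**>) :: Applicative f => f a -> f (a -> b) -> f b
-- so "factor <**> operator <*> factor" is the same computation as:
expr' :: Applicative f => f a -> f (a -> a -> a) -> f a
expr' factor operator = (\x op y -> x `op` y) <$> factor <*> operator <*> factor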

Or, consider this State Int action:

bar :: Int -> Int -> State Int Int
bar x y = do
    put (x + y)
    z <- gets (2*)
    return z

Clearly, the result of the gets (2*) action depends on the computation performed in the put (x + y) action. But, again, this is an applicative action:

bar x y = put (x + y) *> gets (2*)

I'm not sure that there's a really straightforward way of thinking about this intuitively. Roughly, if you think of a monadic action/computation m a as having "monadic structure" m as well as a "value structure" a, then applicatives keep the monadic and value structures separate. For example, the applicative computation:

λ> [(1+),(10+)] <*> [3,4,5]
[4,5,6,13,14,15]

has a monadic (list) structure whereby we always have:

[f,g] <*> [a,b,c] = [f a, f b, f c, g a, g b, g c]

regardless of the actual values involved. Therefore, the resulting list's length is the product of the lengths of the two "input" lists, the first element of the result involves the first elements of the "input" lists, and so on. It also has a value structure whereby the value 4 in the result clearly depends on the value (1+) and the value 3 in the inputs.

A monadic computation, on the other hand, permits a dependency of the monadic structure on the value structure, so for example in:

quux :: [Int]
quux = do
  n <- [1,2,3]
  drop n [10..15]

we can't write down the structural list computation independently of the values. The list structure (e.g., the length of the final list) depends on the value-level data (the actual values in the list [1,2,3]). This is the kind of dependency that requires a monad instead of an applicative.
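
One way to make that dependency explicit (my rewrite of the same computation):

import Control.Monad (join)

-- Written with join, the outer list -- the monadic structure -- is
-- itself computed from the values 1, 2, 3. Its shape cannot be known
-- without inspecting those values, which is exactly what Applicative
-- rules out.
quux' :: [Int]
quux' = join (map (\n -> drop n [10..15]) [1,2,3])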

  • This is useful. My choice of the words "independent" and "parallelizable" is definitely suboptimal, used only as an approximation to communicate intent; I lack the relevant domain vocabulary that would best explain my confusion. Could a similar claim be made for applicatives by translating this: A monadic computation, on the other hand, permits a dependency of the monadic structure on the value structure - an applicative computation ... ?
    – PhD
    Commented Mar 3, 2020 at 21:01
