2

If I understand the concept correctly, the goal functional languages are trying to achieve is to eliminate side effects from functions and to eliminate state. The rationale behind this is to obtain referential transparency, which leads to more predictable execution.

However, nothing prevents me from writing code with the above in mind in an imperative language. I'm thinking only of classic constructs, not functional mixins.

Let's say I have the following code in C.

int add(int x, int y)
{
    return x + y;
}

int sqr(int x)
{
    return x * x;
}

int main()
{
    return sqr(add(3, 5));
}

So, I have two functions which do not possess any side effects, and the program has no state. Is this code missing something distinctly functional?

Currently, I perceive functional languages as an impressive decoration of syntactic sugar built over the core concept of functional programming. Their syntax discourages slicing code into statements, yet I don't see any substantial difference that prevents me from taking a functional approach (and reaping its fruits) in an imperative language. Am I wrong? Hence my question.

11
  • I'm not sure it's accurate to say that functional languages are trying to eliminate side effects. Most functional languages allow side effects (though usually they must be made explicit). It's probably more accurate to say that the goal of a functional language is to make the creation, composition, and usage of functions a first-class part of the language.
    – KChaloux
    Commented Dec 11, 2014 at 13:17
  • @KChaloux wasn't that the holy grail of functional programming, with first-class functions just a means to accomplish the mission? I think the weird syntax discouraging one from using statements or state is an artifact from those days. Now it's obvious that side effects and state are unavoidable, but in the past computers were not as interactive as they are today, and many problems were expressed in simple input->results terms, so such a paradigm could be taken seriously.
    – mip
    Commented Dec 11, 2014 at 16:42
  • P.S. Of course it's still a serious paradigm, but it can be applied to parts of a program, and that's the point of my question.
    – mip
    Commented Dec 11, 2014 at 16:48
  • The problem here is assuming that all functional languages disallow side-effects (they are 'pure'), and that's just not the case. Most functional programming languages allow mutable state - it's a small, fairly modern set that strictly enforces purity. Also, don't be so quick to write off other languages as having "weird" syntax. Just because it's not based on C doesn't make it weird, just unfamiliar.
    – KChaloux
    Commented Dec 11, 2014 at 17:52
  • @KChaloux as for me, I distinguish the functional paradigm from functional languages. I know that functional languages "gave it up" and allow state and side effects, thus they are IMO not purely functional. On the other hand, imperative languages have recently tended to drift toward the functional paradigm, so I suspect both approaches will finally meet at some point. As for "weird": I'm quite familiar with CLISP, for example, and its syntax is just weird, sorry.
    – mip
    Commented Dec 11, 2014 at 18:43

5 Answers

5

It is certainly possible to program in a purely functional style in an imperative language. In fact, if you look at books like Effective Java or Java Concurrency in Practice, much of the advice in those books basically boils down to "don't use mutable state", "don't use side-effects", etc.

However, it may not always be a pleasant experience, and you may not be able to do everything you want to do.

You mentioned C specifically as an example. Unless I'm missing something, the purely functional subset of C is not Turing-complete, because there is no way to express iteration. C does not have Proper Tail Calls, so you will eventually overflow the stack when trying to express something like iterating over an infinite list. You will need to use a loop for iteration, but loops rely on mutable state and side effects.

Of course, the standard Turing-tarpit argument applies … you can do functional programming in C by implementing a Haskell interpreter in C and then programming in Haskell. But that's not how I interpret the OP's question.

7
  • Tail call elimination is an optimization feature of the compiler, not the language; most decent compilers eliminate tail calls. IMHO, going out of your way to express iteration as tail recursion just for the sake of being functional and "cool" is silly. I wouldn't consider mutating a non-shared local variable a significant side effect; most functional languages do that under the hood anyway, if they can detect that the variable is only used in one place.
    – ALXGTV
    Commented Jun 13, 2015 at 14:11
  • P.S. Iterating over an infinite list will freeze your computer; nobody ever iterates over an infinite list. What you probably call iteration is the construction of a generator, which can be done in C.
    – ALXGTV
    Commented Jun 13, 2015 at 14:12
  • TCO is a compiler optimization, but Proper Tail Calls are not; they are a language feature. You cannot rely on compiler optimizations, because they aren't part of the language's semantics. But if the language semantics guarantee Proper Tail Calls, then you can write code that takes advantage of that. For example, state machines can be trivially implemented using subroutine calls for state transitions, but those subroutines never return; they go from state to state. Without Proper Tail Calls, you will blow the stack. Another example is OO: Steele, Cook and others have repeatedly demonstrated
    Commented Jun 13, 2015 at 15:33
  • that object-oriented data abstraction is impossible to achieve without Proper Tail Calls. Continuation Passing Style is also impossible without PTC, because again, subroutines never return; instead they call a subroutine which represents the next step in the computation. Loops are also hard to implement without Proper Tail Calls; they have to be added as a separate language feature, needlessly complicating the language.
    Commented Jun 13, 2015 at 15:34
  • I'm not sure what you mean by your distinction between tail-call optimization and "Proper Tail Calls"; my assumption is that you're referring to the semantics of languages whose standards guarantee tail-call elimination, e.g. Scheme. In that case, C does not enforce that tail calls are eliminated; however, decent compilers will perform the optimization.
    – ALXGTV
    Commented Jun 13, 2015 at 18:09
13

If by functional programming you mean programming only with immutable values, sure, you can do that. But it's going to be painful. In a lot of cases you don't get to take advantage of:

  • First-class functions with lexical scoping (a.k.a. closures)
  • Functions with identifiers that are mostly special characters
  • Infix functions
  • Type inference
  • Tail call optimizations (so no recursion as a form of looping unless you're fine with stack overflows, which means you have to use looping statements, which means you need to mutate something to get results out of a loop)
  • Automatic currying
  • if/else and try expressions instead of statements.
  • Algebraic data types
  • Pattern matching
  • Not having to deal with null
  • Persistent data structures (usually need to pull in external libraries for these if they're available at all)
  • Advanced module systems
  • Lazy evaluation (though this can be simulated using lambda expressions)

And of course a compiler for a functional language will have different optimizations than a compiler for an imperative language. A language where functions aren't a primitive type is unlikely to optimize function composition or currying as well as a language where functions are primitives and it's expected that they'll be composed and curried often.

8
  • • lazy evaluation.
    – mip
    Commented Dec 11, 2014 at 2:53
  • "Infix functions": maybe you could also mention closures.
    – coredump
    Commented Dec 11, 2014 at 9:16
  • Some of the things listed here are commonly found in functional programming languages, but they are not strictly essential to a language being functional, nor is their lack necessary for a language to be imperative. For example, if/else and try as expressions instead of statements: there is no reason why an imperative language couldn't have those as expressions. It's just not as common, at least partially for historical reasons.
    – eques
    Commented Dec 11, 2014 at 13:18
  • What's the difference between an if/else expression and the popular "<expression> ? <value1> : <value2>" syntax found in many programming languages?
    – Simon B
    Commented Dec 11, 2014 at 13:33
  • @SimonB Nothing, other than the syntax of the ternary operator being considered confusing and/or ugly by many, and thus often a violation of whatever coding standards you follow. The bigger selling point is the try expression, which is rarer. try statements suck because function/method calls are expressions, so any time you need to handle an exception you can no longer use that function/method as an expression.
    – Doval
    Commented Dec 11, 2014 at 13:37
0

I'm not an absolute expert on functional languages and paradigms; however, I do know that compilers have the job of translating their source language (C, Ada, Prolog, Java…) to a machine language (x86, JVM, SPARC, amd64…). Since both functional and imperative languages can be translated to machine code, given that they are both Turing-complete and not domain-specific, it follows that they can be translated to each other. So, I would think yes.

3
  • Right, you can write code that will produce the same results. However, you can't practically do object-oriented programming in assembly. Thus, if functional languages are something more than what I can see, it would be impossible to use the functional paradigm in imperative languages.
    – mip
    Commented Dec 11, 2014 at 1:21
  • This is a Turing-tarpit argument. It is also wrong, depending on your definition of "translate". Sure, you can do functional programming in C, by implementing an interpreter for Haskell in C. But that's not what the OP means. I interpreted his question as "using a purely functional subset of an imperative language".
    Commented Dec 11, 2014 at 19:12
  • @JörgWMittag Yes, I'm asking whether such a subset natively exists, so that you can do it practically (without the need to write an interpreter/compiler).
    – mip
    Commented Dec 11, 2014 at 19:31
0

I'm learning Scala right now, and it allows for both. You can write imperatively using standard curly braces like C, or you can omit the braces and write functionally. Obviously the functions you use determine whether or not you'll be able to write purely functionally, but it is possible.

If I think of a good comparative example, I'll edit it in when I get on my laptop.

0

[...] however I don't see any substantial difference that prevents me to take functional approach (and yield its fruits) in imperative language.

My pragmatic way of trying to enjoy some of those "fruits" in languages like C and C++ is not to go all-out functional. It's very, very relaxed. I'm not trying to write higher-order functions all over the place, or utilize closures, or avoid imperative loops and local counter variables, or anything like that. I'm not trying to fight these languages much. There are some cases where lambdas and higher-order functions are a natural fit in certain generic C++ algorithms (including examples from the standard library), and I do use them when they flow off the fingertips that way, but I'm not trying to force a functional style in these languages.

Mostly I'm just trying to eliminate external side effects, or, for those who feel the need to point out that real-world programs often need side effects (and sometimes a complex series of them), to centralize side effects in the fewest possible places. I try to move external side effects toward the "bottom of a thread's call stack", which is a crude way of describing it, but I find it practical for my purposes.

And the most practical benefit I've found of favoring this, besides finding more opportunities for parallelization and having an easier time reasoning about thread safety, is that I'm not nearly as overwhelmed by the complexities of my own and others' creations. It allows me to focus on larger-scale design concerns without feeling like my brain is on the brink of exploding from repeatedly being forced to comprehend so many details. It was one of the major puzzle pieces I was missing: it keeps me from having to x-ray the abstractions we built just to make sense of what is going on.

Because when we have a complex series of function calls or method calls between objects, and many of them have external side effects (to member variables, to parameters passed into the function, or maybe even to globals, yuck), I inevitably find cases (most glaringly when things bug out and escape our tests, but also when we're trying to make changes or sandwich new functionality into the system) where I have to drill deep and piece everything together to make sense of what is happening to the relevant external (e.g. application) state. When a similar series of calls only inputs and outputs data, without something on the side being mutated, there's so much less information my brain needs to track, and (combined with sound testing procedure) far fewer mistakes fly under the radar.

That said, again I am very relaxed about this. I'm okay if we do something like this:

// 'some_mesh' is passed by value (copied, not passed by ref/pointer).
static Mesh modify_mesh(Mesh some_mesh)
{
    // Transform some_mesh using imperative statements.
    ...

    // Output the new, modified mesh.
    return some_mesh;
}

I'm okay with that sort of thing, since some_mesh is just a local to this modify_mesh function; it's not being passed around and mutated elsewhere, and the function has no external side effects and has referential transparency. Maybe it calls some mutating methods like add_triangles or whatever, which cause side effects/mutations on that local mesh object. I'm okay with that, as long as we're not passing this mesh around 20 levels deep in the call stack and mutating it while I'm overwhelmed trying to keep track of what's happening to it and in what precise order.

And I've actually built some persistent data structures with atomic ref-counting, immutable interfaces, builders, and things of this sort for the heftier stuff that would be very expensive to copy around, and for the things I would particularly prefer the software not mutate across a series of function calls and across threads (my central application scene, which stores everything, is immutable now; the only thing operations can do is output new scenes for wholesale replacement, not mutate the existing one). But mostly it's very relaxed, as you can hopefully see. I'm just trying to reduce the amount of information my brain has to keep track of, because my whole goal is to avoid this:

“The computing scientist’s main challenge is not to get confused by the complexities of his own making.” ― Edsger W. Dijkstra

... as we build larger and larger scale software.

As a side bonus, I have also found many more opportunities to multithread things. I would never have thought in the past to even consider running physics in parallel with real-time rendering, for example, since I thought of physics as mutating a central scene while rendering wants to read from it at the same time, in ways where locking might cost more than just using threads to make each stage individually finish faster. Now physics doesn't modify a central scene. It inputs one and outputs a new one, and it can devote a thread to doing that as fast as it can, while the renderer inputs a scene, keeps a lightweight copy (cheap, since the scene is a PDS), and renders it as fast as it can in its own thread. Before, I would have tried to use multiple threads to parallelize loops within a sequential pipeline (e.g. making the physics finish faster with parallel loops while the renderer then renders in the same thread afterward). Running these stages in parallel without a care in the world has not only simplified the resulting code but also translated to much smoother, faster, and more consistent frame rates for users. But that's a side bonus; I mostly sought to mitigate side effects to help comprehend the system as a whole and achieve a greater sense of clarity.

On top of that, exception safety and error handling are no-brainers now. Before, when external side effects, especially to persistent application state, were involved, the most complex part of recovering from an exception was rolling back those side effects. Now there aren't any such side effects in the vast majority of the functions in the system. If something throws, there's nothing to roll back. The undo/redo system is also ridiculously simple now, since it just copies the entire scene (which doesn't take much memory at all, since it's a PDS). Non-destructive editing is a piece of cake. Etc., etc.
