Subsequent Thoughts on Concurrency

Over the past few years my opinions on concurrency have changed. I mean, threads were always a pretty scary thing. I figured that no matter what I did, there must be a race condition or deadlock in there somewhere (and really, if you ever think otherwise, you’re just fooling yourself). I didn’t know of any alternatives. Java’s approach seemed to have some merits — all you had to do was write synchronized, right? — and I took it a bit further with a language I was designing some years back. Here’s an image from the spec:

The idea was that when you defined a class you labeled some methods as variable. Those methods would be available on an instance declared as variable or exclusive. Both immutable (the default) and exclusive objects were thread-safe, and both variable and exclusive objects were mutable. You would then declare objects according to the functionality you needed. Immutable was your safe bet and you added variable or exclusive if you got compiler errors. At the time it seemed ideal. Of course, I still thought of concurrency as something that you should avoid whenever possible. It’s dangerous. You want to be able to walk through everything step-by-step. Don’t take the chance.

Working on that language drove me toward the programming language singularity (as I like to call it). I wondered why we had to declare types at all (can’t the compiler figure it out?) and how to add metaprogramming capabilities. These questions, arrived at from my own frustrations, were answered after discovering languages like Haskell and Common Lisp.

I call this a singularity because I couldn’t believe these advances hadn’t found their way into mainstream programming. My hypothesis was that once a programmer made the leap, he saw no need to look back at the tangled mess he had left. Any other programmer who starts down the same path is drawn more and more quickly toward the realization until, pop, they cross the singularity.

Back to concurrency, though. Before the singularity, there were only threads. That was just how things worked. And “green” threads were simply inferior to native threads. However, on this side, there are so many approaches to the problem, and all of them start at a point far above the best threading available (if you haven’t noticed yet, I’m using “threads” specifically to refer to shared-memory, mutex-based concurrency, just to save typing). So what do we have?

Who needs mutation, anyway? Haskell does pretty well with its monads, and I haven’t heard Erlangers complain about the lack of mutability, either. And Guillaume Germain has taken Scheme and made it immutable to produce Termite. If you’re willing to take this (often perceived as drastic) step, you’ve made concurrency a lot easier. If an object can’t be changed, what does it matter how many concurrent processes have access to it? You don’t need to worry about whether you declared it correctly, or go back and fix declarations so your code will compile; you just send it to whatever thread you want.
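To make that concrete, here is a minimal Haskell sketch (the list and thread bodies are made up for illustration): two threads read the same immutable value at once, and neither needs a lock, because nothing can change it out from under them.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  let shared = [1 .. 10 :: Int]  -- immutable, so safe to hand to any thread
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  -- Two threads read the same structure concurrently; since nothing can
  -- mutate it, there is no race to guard against.
  _ <- forkIO (putMVar done1 (sum shared))
  _ <- forkIO (putMVar done2 (length shared))
  total <- takeMVar done1
  count <- takeMVar done2
  print (total, count)  -- (55,10) regardless of scheduling
```

The MVars here only report results back; the shared list itself crosses thread boundaries with no synchronization at all.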

Another approach, taken by Orc, is akin to treating everything as a Web service and using a very few basic primitives to “orchestrate” them. For those of us working in an SOA and feeling SOL, this model looks very promising. It also seems like it can be built into other languages fairly easily, which is something I’m planning to undertake with Common Lisp.
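As a rough illustration of the orchestration idea (this is not Orc’s actual syntax, and the “services” below are just made-up delays), Orc’s pruning combinator, which keeps whichever of two parallel computations answers first, can be sketched in Haskell with nothing but forkIO and an MVar:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Run both actions in parallel and keep the first result published,
-- roughly what Orc's pruning combinator does.
firstOf :: IO a -> IO a -> IO a
firstOf left right = do
  result <- newEmptyMVar
  _ <- forkIO (left  >>= putMVar result)
  _ <- forkIO (right >>= putMVar result)  -- the loser's value is dropped
  takeMVar result

-- Two stand-in "services" with different response times.
slowService, fastService :: IO String
slowService = threadDelay 200000 >> return "slow answer"
fastService = threadDelay 10000  >> return "fast answer"

main :: IO ()
main = firstOf slowService fastService >>= putStrLn
```

A real embedding would also need Orc’s sequential and parallel combinators and proper cancellation of the losing branch, but even this much shows how little machinery the model demands.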

When you look at these modern concurrency models, you suddenly have a lot more room to make everything happen in parallel. You stop fearing threading, avoiding communication between your processes, and crossing your fingers every time you load your application; and start thinking in terms of thousands of little robots, simultaneously working on atomic pieces of a larger job, and willing to work faster when you put them on a 4, 8, or 32 CPU machine, without you having to tell them a thing. It’s a nice future, and there are already a lot of ways to get there.

6 Responses to “Subsequent Thoughts on Concurrency”

  1. mark Says:

    “Who needs mutation, anyway? Haskell does pretty well with its monads, I haven’t heard Erlangers complain about the lack of mutability, either. And Guillome has taken Scheme and made it immutable to produce Termite.”

    A minor correction (or nitpick, if you wish): Termite is (AFAICT) not really ‘immutable’, but rather abstracts mutable state by ‘hiding’ it behind processes. I also wouldn’t say that Haskell doesn’t allow mutation; instead, it isolates code dealing with side-effects/mutable state from pure code using monads. Also, if you are careful about what you do with the value, you can actually mutate it (in place) without violating referential transparency. Incidentally, I found Haskell’s STM library a very interesting approach to dealing with concurrency (you didn’t mention it in the article, perhaps you already know it).

  2. Lispmeister Says:

    Greg, you might be interested in David P. Reed’s PhD thesis. You can find it at http://lispmeister.com/downloads/MIT-LCS-TR-205.pdf

  3. beza1e1 Says:

    Termite doesn’t make Scheme immutable. It’s just the documentation that says “don’t mutate!”

  4. Greg Pfeil Says:

    beza1e1: I think “mutation of variables and data structures is not available” is a bit stronger than “don’t mutate”. If set! and other mutating functions are available, I think that’s a bug in the implementation, since the paper is pretty explicit about them not being there.

    mark: Termite’s mutation-through-processes is like Haskell’s monads in that there is no mutation actually taking place, but the programmer can behave as if there is. There’s a great tutorial on monads that shows how they fake mutation, and aren’t just some hand-waving in order to claim a functional language is pure.
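    To sketch what I mean by faking mutation, here is a hand-rolled counter “monad” in Haskell (a toy version of the usual State monad, with names invented here): the state is just threaded through as an extra argument, so nothing is ever actually mutated.

```haskell
-- A counter "monad" built by hand: each step returns a value plus a
-- new state, and bind threads the state along.  No real mutation anywhere.
newtype Counter a = Counter { runCounter :: Int -> (a, Int) }

instance Functor Counter where
  fmap f (Counter g) = Counter (\s -> let (a, s') = g s in (f a, s'))

instance Applicative Counter where
  pure a = Counter (\s -> (a, s))
  Counter f <*> Counter g = Counter (\s ->
    let (h, s') = f s; (a, s'') = g s' in (h a, s''))

instance Monad Counter where
  Counter g >>= f = Counter (\s -> let (a, s') = g s in runCounter (f a) s')

-- Looks like it increments a counter in place, but really it just hands
-- a fresh Int to the next step.
tick :: Counter Int
tick = Counter (\s -> (s, s + 1))

main :: IO ()
main = print (fst (runCounter (tick >> tick >> tick) 0))  -- 2
```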

    I’ve heard of STM, but haven’t looked into it. I’ll do that now. Thanks.

  5. mark Says:

    Hi Greg, thanks for your answer – I disagree about Termite: I don’t think that Termite’s abstraction of mutable state is very similar to Haskell’s. Termite’s abstraction does not provide the kind of guarantees about pure code that Haskell does — sending the same message to a process can give you different output every time, while Haskell clearly separates pure functions using the type system. But I guess you are right that it still prevents mutation, if not necessarily side-effects (individual processes can easily change their state, but they do so explicitly by using e.g. recursive function calls). I didn’t mean to say that Haskell’s abstractions must actually mutate anything, but rather that the interface provided by certain monads allows Haskell to use in-place mutation of data structures ‘under the hood’ without introducing (observable) side-effects. I believe that the Data.Array.MArray type uses in-place updates, at least the documentation indicates that this is a possible strategy (to quote: “Note that because the array is possibly not copied, any subsequent modifications made to the mutable version of the array may be shared with the immutable version.”)

  6. Koray Says:

    Greg, since you mention Haskell and concurrency, you ought to know about