Subsequent Thoughts on Concurrency
Over the past few years my opinions on concurrency have changed. I mean, threads were always a pretty scary thing. I figured that no matter what I did, there must be a race condition or deadlock in there somewhere (and really, if you ever think otherwise, you’re just fooling yourself). I didn’t know of any alternatives. Java’s approach seemed to have some merits — all you had to do was write synchronized, right? — and I took it a bit further with a language I was designing some years back. Here’s an image from the spec:
The idea was that when you defined a class you labeled some methods as variable. Those methods would be available on an instance declared as exclusive. Both immutable (the default) and exclusive objects were thread-safe, and only exclusive objects were mutable. You would then declare objects according to the functionality you needed. Immutable was your safe bet, and you added exclusive if you got compiler errors. At the time it seemed ideal. Of course, I still thought of concurrency as something that you should avoid whenever possible. It’s dangerous. You want to be able to walk through everything step-by-step. Don’t take the chance.
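To make the split concrete, here is a rough sketch of the same idea in Python. The original language is long gone, so the class names and the lock-per-instance scheme are my own stand-ins: a frozen dataclass plays the role of an immutable (default) object, and a class that guards its mutating ("variable") methods with a lock plays the role of an exclusive one.

```python
import threading
from dataclasses import dataclass

# An "immutable" object: safe to share across threads because
# nothing can change it after construction.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

# An "exclusive" object: mutable, but each "variable" (mutating)
# method takes the instance's lock, so only one thread is inside
# at a time.
class Counter:
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

if __name__ == "__main__":
    c = Counter()
    threads = [
        threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
        for _ in range(4)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(c.value)  # 4000 -- the lock makes the increments race-free
```

The analogy is loose (Python checks none of this at compile time, where the original design surfaced mistakes as compiler errors), but it shows the two roles the declarations carved out.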
Working on that language drove me toward the programming language singularity (as I like to call it). I wondered why we had to declare types at all (can’t the compiler figure it out?) and how to add metaprogramming capabilities. These questions, arrived at from my own frustrations, were answered after discovering languages like Haskell and Common Lisp.
I call this a singularity because I couldn’t believe these advances hadn’t found their way into mainstream programming. My hypothesis was that once a programmer made the leap, he saw no need to look back to the tangled mess he had left. Any other programmer who starts down the same path is drawn more and more quickly toward the realization until, pop, they cross the singularity.
Back to concurrency, though. Before the singularity, there were only threads. That was just how things worked. And “green” threads were simply inferior to native threads. On this side, however, there are many approaches to the problem, and all of them start at a point far above the best threading available. (If you haven’t noticed yet, I’m using “threads” specifically to mean shared-memory, mutex-based concurrency, just so I have fewer words to type.) So what do we have?
Who needs mutation, anyway? Haskell does pretty well with its monads, and I haven’t heard Erlangers complain about the lack of mutability, either. And Guillaume Germain has taken Scheme and made it immutable to produce Termite. If you’re willing to take this (often perceived as drastic) step, you’ve made concurrency a lot easier. If an object can’t be changed, what does it matter how many concurrent processes have access to it? You don’t need to worry about whether you declared it correctly, or go back and fix declarations so your code will compile; you just send it to whatever thread you want.
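A minimal sketch of that point, using nothing beyond the Python standard library: the shared structure below is a nested tuple, so it is immutable, and any number of threads can read it with no locking at all. The threads communicate their answers by sending messages on a queue, Erlang-style, rather than by mutating shared state.

```python
import threading
from queue import Queue

# An immutable value: a nested tuple. Threads may read it freely and
# concurrently; there is no way to modify it, so no lock is needed.
matrix = ((1, 2, 3), (4, 5, 6), (7, 8, 9))

# Workers communicate by sending values, not by sharing mutable state.
results = Queue()

def row_sum(row_index):
    # Read the shared immutable data, then send the answer as a message.
    results.put(sum(matrix[row_index]))

threads = [threading.Thread(target=row_sum, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(results.get() for _ in range(3))
print(total)  # 45
```

Nothing here needed a declaration to be proven safe; the data simply cannot race, because it cannot change.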
Another approach, taken by Orc, is akin to treating everything as a Web service and using a very few basic primitives to “orchestrate” them. For those of us working in an SOA and feeling SOL, this model looks very promising. It also seems like it can be built into other languages fairly easily, which is something I’m planning to undertake with Common Lisp.
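To give a feel for the orchestration style, here is a rough Python analogue of one of Orc’s combinators, the one that runs several sites in parallel and keeps whichever publishes a value first. The “services” and their delays are invented for illustration; a real Orc program would express this far more compactly.

```python
import concurrent.futures as cf
import time

# Hypothetical "services" -- stand-ins for remote calls.
# The names and delays are made up for this sketch.
def service_a():
    time.sleep(0.05)
    return "a"

def service_b():
    time.sleep(0.01)
    return "b"

def first_of(*sites):
    # Rough analogue of Orc's pruning combinator: launch all the
    # sites in parallel and return the first value published.
    with cf.ThreadPoolExecutor() as pool:
        futures = [pool.submit(site) for site in sites]
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()

print(first_of(service_a, service_b))  # "b" -- the faster site wins
```

The appeal is that the orchestration logic (race these, sequence those) is separated cleanly from the services themselves, which is exactly what makes the model look embeddable in a host language.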
When you look at these modern concurrency models, you suddenly have a lot more room to make everything happen in parallel. You stop fearing threading, avoiding communication between your processes, and crossing your fingers every time you load your application; and start thinking in terms of thousands of little robots, simultaneously working on atomic pieces of a larger job, and willing to work faster when you put them on a 4-, 8-, or 32-CPU machine, without you having to tell them a thing. It’s a nice future, and there are already a lot of ways to get there.