Saturday, August 28, 2010

Locking and liveness

I'm back in Boston, and rebooting the blog. I may eventually get back to the series on linear logic. But for now, I'm going to shoot for more bite-sized, self-contained posts.

I've been working on a highly-concurrent implementation of the join calculus, aiming to maximize parallelism. "Highly-concurrent" is often a euphemism for lock-avoiding. Since the whole point of locking is to serialize access to resources, locking destroys concurrency.
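
To make that concrete, here's a minimal Java sketch, purely illustrative and not the implementation I'm describing, of a two-channel join pattern: a "chord" whose body runs once both channels hold a message, protected in the most naive way possible, with one big lock.

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.function.BiConsumer;

    // Hypothetical, naive two-channel join ("chord"): the body runs once each
    // channel has a pending message. A single lock guards all of the state,
    // so every send, on either channel, is serialized through that one lock.
    final class NaiveJoin<A, B> {
        private final Queue<A> lefts  = new ArrayDeque<>();
        private final Queue<B> rights = new ArrayDeque<>();
        private final BiConsumer<A, B> body;
        private final Object lock = new Object();

        NaiveJoin(BiConsumer<A, B> body) { this.body = body; }

        void sendLeft(A a) {
            B b;
            synchronized (lock) {              // every sender contends here
                if (rights.isEmpty()) { lefts.add(a); return; }
                b = rights.remove();
            }
            body.accept(a, b);                 // fire the chord outside the lock
        }

        void sendRight(B b) {
            A a;
            synchronized (lock) {
                if (lefts.isEmpty()) { rights.add(b); return; }
                a = lefts.remove();
            }
            body.accept(a, b);
        }
    }

Senders on unrelated channels still contend for the same lock, so the data structure scales no better than the lock itself; the goal of a scalable implementation is to let independent sends proceed in parallel.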

There's a nice formal characterization of lock avoidance, called obstruction freedom: any thread that is allowed to run in isolation for long enough will complete its operation. In an obstruction-free system, threads can prevent progress only by continually, actively interfering with one another. Locks are ruled out because a lock held by one thread (perhaps one the scheduler has suspended) can passively prevent others from progressing.
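
A tiny illustration (again, just a sketch, not my actual code): suppose the thread holding the lock below is suspended by the scheduler in the middle of the critical section. A second thread then running entirely in isolation still cannot complete increment(); the suspended holder blocks it without taking a single step.

    import java.util.concurrent.locks.ReentrantLock;

    // Not obstruction free: a suspended lock holder passively blocks everyone else.
    final class LockedCounter {
        private final ReentrantLock lock = new ReentrantLock();
        private long count = 0;

        void increment() {
            lock.lock();      // waits forever if the current holder never resumes
            try {
                count++;      // suspend the holder here and nobody else finishes
            } finally {
                lock.unlock();
            }
        }
    }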

Obstruction freedom is the weakest "nonblocking" progress condition. A stronger property is lock freedom, which says that some thread always completes its operation no matter how an arbitrary scheduler interleaves the threads. Since an arbitrary scheduler is free to run a single thread in isolation, lock freedom implies obstruction freedom.
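
The textbook example of a lock-free operation is a CAS retry loop, sketched below. An iteration of the loop can fail only because some other thread's CAS succeeded, so the system as a whole always makes progress under any scheduler, even though an individual thread might starve.

    import java.util.concurrent.atomic.AtomicLong;

    // Lock free: a failed compareAndSet means some other thread's CAS succeeded,
    // so the system as a whole makes progress under any scheduler.
    final class CasCounter {
        private final AtomicLong count = new AtomicLong();

        void increment() {
            long current;
            do {
                current = count.get();
            } while (!count.compareAndSet(current, current + 1));
        }
    }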

I believe my algorithm is free from deadlock and livelock -- assuming a fair scheduler. On the other hand, it is not obstruction free, much less lock free: although I do not use actual locks, my use of other synchronization mechanisms (read: CAS) is occasionally lock-like. I've minimized the likelihood of blocking, but it's still possible.
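
Here's the kind of lock-like CAS usage I mean, as a hypothetical sketch rather than my actual join implementation: a status flag claimed by CAS is a spinlock in disguise. If the thread that claimed the flag is suspended before releasing it, everyone else waits, so the code is blocking even though no lock object appears anywhere.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Lock-like CAS: claiming a flag and making others wait for its release is
    // a spinlock in disguise -- blocking, and therefore not obstruction free.
    final class ClaimedSlot {
        private final AtomicBoolean claimed = new AtomicBoolean(false);
        private int payload;

        void update(int newValue) {
            while (!claimed.compareAndSet(false, true)) {
                Thread.onSpinWait();     // spin until the current owner is done
            }
            try {
                payload = newValue;      // "critical section" guarded only by CAS
            } finally {
                claimed.set(false);      // release the claim
            }
        }
    }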

I'm left wondering: how important are these theoretical measures? I can see how to make the algorithm lock free, but it's a complicated change and likely to slow down the average case. Scientifically, I suppose there's only one answer: implement and compare.
