Let's say you have a set of keys, and you want to lock on a per-key level. A naive solution could look like this:
import java.util.concurrent.locks.{Lock, ReentrantLock}
// (Atomic and getLogger are assumed to come from libraries such as Monix and log4s.)

class LockManager[K] {
  private[this] val logger = getLogger
  // One lock per key, stored in an atomically updated immutable Map.
  val locks: Atomic[Map[K, Lock]] = Atomic(Map[K, Lock]())
  def run[T](key: K)(fn: => T): T = {
    // Look up this key's lock, locking it if present or adding a new one if absent.
    locks.getAndTransform { locksMap =>
      locksMap.get(key) match {
        case Some(lock) => {
          lock.lock()
          locksMap
        }
        case None => locksMap + (key -> new ReentrantLock())
      }
    }
    try {
      fn
    } finally {
      // Unlock this key's lock and drop it from the map.
      locks.getAndTransform { locksMap =>
        locksMap.get(key) match {
          case Some(lock) => {
            lock.unlock()
            locksMap - key
          }
          case None => {
            logger.warn(s"Lock for ${key} suspiciously missing")
          locksMap  // wtf?? this shouldn't happen
          }
        }
      }
    }
  }
}
However, there are quite a few things wrong with this:
- The function passed to getAndTransform is meant to be pure: atomic references typically apply it in a compare-and-set retry loop, so under contention the lock() and unlock() side effects hidden inside it can run more than once.
- In the None branch we add a fresh ReentrantLock to the map but never actually lock it, so the first caller for a key runs fn without holding anything.
- Unlocking removes the lock from the map even though other threads may still be queued on it, so a latecomer can install a second lock for the same key and break mutual exclusion.
- Every call, for every key, funnels through a single Atomic reference, which becomes a global point of contention. (Using a ConcurrentHashMap, we could get around this, but that'd be un-Scala-like.)
The thing is, we usually don't need all resources to be accessible simultaneously; people rarely do. Thus, we can create a locking scheme using the mod of a hash, similar to a hash table, where each "bucket" is simply a lock. This is also known as lock striping.
With this strategy in mind, we can rewrite the above class like so:
import java.util.concurrent.locks.{Lock, ReentrantLock}

class LockManager[K](power: Int) {
  private[this] val logger = getLogger
  // 1 << power lock "stripes"; a power of two so that a bitmask can replace the modulo.
  val locks: Vector[Lock] = Vector.fill[Lock](1 << power)(new ReentrantLock())
  val modMask = locks.length - 1
  def get(key: K): Lock = {
    // Masking with (length - 1) acts as a modulo, since the length is a power of
    // two, and always yields a valid, non-negative index even for negative hashes.
    locks(key.hashCode() & modMask)
  }
  def run[T](key: K)(fn: => T): T = {
    get(key).lock()
    try {
      fn
    } finally {
      // The Vector never changes, so this returns the same lock acquired above.
      get(key).unlock()
    }
  }
}
Note that we've fixed quite a few problems:
- A lock is always acquired before fn runs and released exactly once afterwards, with no missing-lock case to warn about.
- There are no side effects smuggled into an atomic transform; in fact, there's no shared mutable state left at all.
- Memory use is fixed up front: 1 << power locks in total, rather than a lock allocated and discarded for every key.
- The lock table never changes, so lookups are cheap and contention-free (and it's an immutable Vector, yay!)
The only downside to this is that several keys can hash to the same lock, so unrelated keys will occasionally contend, though you can probably tweak power to find a good balance of throughput and memory usage.
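To see it in use, here's a small, hypothetical sketch (StripedLockExample, counters, and increment are made-up names for illustration, not part of the class above): several threads bump per-key counters, the striped lock makes each read-modify-write atomic for its key, and keys that land on different stripes proceed in parallel.
import scala.collection.concurrent.TrieMap

object StripedLockExample {
  // 1 << 4 = 16 stripes; purely illustrative sizing.
  val manager = new LockManager[String](power = 4)
  // TrieMap is safe for individual concurrent operations; the per-key lock
  // makes the compound read-then-write below atomic for a given key.
  val counters = TrieMap[String, Int]()

  def increment(key: String): Unit =
    manager.run(key) {
      val current = counters.getOrElse(key, 0)
      counters.update(key, current + 1)
    }

  def main(args: Array[String]): Unit = {
    val threads = (1 to 8).map { i =>
      new Thread(() => (1 to 1000).foreach(_ => increment(s"user-${i % 3}")))
    }
    threads.foreach(_.start())
    threads.foreach(_.join())
    // Exact totals despite the concurrency: user-1 -> 3000, user-2 -> 3000, user-0 -> 2000.
    println(counters)
  }
}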
Thanks for reading! Have any questions, comments, or suggestions? Feel free to use the comment section below or email me at blog@igm.pub and I'll do my best to respond.
Alternatively, you can view the source of the post here and send a pull request.