
Clearly, updating the two maps in the code below is not atomic. Any thoughts on how to achieve that without a big synchronized block?

Just added more details: getFirstName() is a method that returns a value from map2, so if method1 uses synchronization then getFirstName should be synchronized on the same lock too. And which lock should be acquired to make this fully atomic: map1's or map2's?

import java.util.concurrent.ConcurrentHashMap
import scala.collection.JavaConverters._

object MyApp {
  private val map1 = (new ConcurrentHashMap[String, String]).asScala
  private val map2 = (new ConcurrentHashMap[String, String]).asScala

  def method1(firstName: String, lastName: String) = {
    .....
    map1 += firstName -> lastName
    map2 += lastName -> firstName
  }

  def getFirstName(lastName: String): Option[String] = {
    map2.get(lastName)
  }
}
  • You can synchronize the entire operation, synchronize the individual steps, or invent some sort of "soft" scheme that will tolerate inconsistency.
    – Hot Licks
    Commented Mar 14, 2014 at 12:03
  • 1
    @HotLicks : just added some more details. there is another method too which returns a value from map2. which locks should be used? map1 or map2? in method1 and getFirstName() methods? Commented Mar 14, 2014 at 12:09
  • You just need to lock against THE EXACT SAME OBJECT in ALL methods which access the sets (in a manner where their being in sync is important). Could be map1, map2, or something else. There are those who strongly advocate using a specific singletonish lock object, but I'm not convinced of that.
    – Hot Licks
    Commented Mar 14, 2014 at 12:17
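The advice in the comments above can be sketched as follows. This is a minimal, hedged sketch, not the asker's actual application: the `MyStore` object name and the dedicated `lock` field are illustrative. The key point is that both `method1` and `getFirstName` synchronize on the *same* monitor, so a reader can never observe map1 updated but map2 not yet updated.

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.collection.JavaConverters._

object MyStore {
  // A dedicated lock object (could equally be map1 or map2, as long as
  // every method that needs the maps to stay consistent uses the same one).
  private val lock = new Object

  private val map1 = (new ConcurrentHashMap[String, String]).asScala
  private val map2 = (new ConcurrentHashMap[String, String]).asScala

  def method1(firstName: String, lastName: String): Unit = lock.synchronized {
    // Both updates happen while holding the lock, so they appear atomic
    // to any other method that also synchronizes on `lock`.
    map1 += firstName -> lastName
    map2 += lastName -> firstName
  }

  def getFirstName(lastName: String): Option[String] = lock.synchronized {
    map2.get(lastName)
  }
}
```

Which object serves as the monitor does not matter for correctness; what matters is that every method touching the pair of maps acquires the same one.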

1 Answer


Be sure atomicity is the ACID property you're looking for. If you're worried about another thread reading from the first map before the second map is populated, then you're going to need to lock the first map until the second map is populated. I don't think ConcurrentMaps offer many guarantees across separate reads.

One option might be to create your own implementation of ConcurrentMap with the same sort of underlying striped locking (locking only part of the hash table on a write), but locking the appropriate parts of both underlying hash tables until the write operation is complete.
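Short of reimplementing ConcurrentMap's internal striping, a coarser version of the same idea can be sketched with a `ReentrantReadWriteLock`: many readers proceed in parallel, while a writer holds both maps exclusively until the pair of updates is complete. The `NameStore` object and its method names are illustrative, not from the question.

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.locks.ReentrantReadWriteLock
import scala.collection.JavaConverters._

object NameStore {
  private val rw = new ReentrantReadWriteLock

  private val map1 = (new ConcurrentHashMap[String, String]).asScala
  private val map2 = (new ConcurrentHashMap[String, String]).asScala

  def put(firstName: String, lastName: String): Unit = {
    // Writers are exclusive: no reader can see map1 updated while
    // map2 is still stale.
    rw.writeLock.lock()
    try {
      map1 += firstName -> lastName
      map2 += lastName -> firstName
    } finally rw.writeLock.unlock()
  }

  def getFirstName(lastName: String): Option[String] = {
    // Readers share the lock, so lookups do not serialize each other.
    rw.readLock.lock()
    try map2.get(lastName)
    finally rw.readLock.unlock()
  }
}
```

This trades a little write throughput for read concurrency; if writes are rare, it behaves much like the lock-free reads of a plain ConcurrentHashMap while still keeping the two maps mutually consistent.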
