
I'm doing some reading on the advantages and disadvantages of using timestamps for concurrency control in a distributed database. The material I'm reading mentions that although timestamps overcome the traditional deadlock problems that can affect locking, such a system is still vulnerable to the problem of "global deadlock".

The material describes global deadlock as a situation where no cycle exists in any of the local wait-for graphs, but there is a cycle in the global wait-for graph.

I'm wondering how this could happen. Could someone describe a situation where a timestamp system could run into this problem?

2 Answers


Here is an example, probably the simplest possible. We have machines A and B. Machine A runs transactions T1 and T2 with timestamps T1 < T2; machine B runs transactions T3 and T4 with timestamps T4 < T3.

Now, the local wait-for graphs just say that T2 must wait for T1 (on machine A) and T3 must wait for T4 (on machine B), so there are no local cycles. But now assume that T1 also has to wait for T3, say because T3 holds a lock on machine B that T1 needs, and at the same time T4 has to wait for T2. The union of the two local graphs then contains the cycle T1 → T3 → T4 → T2 → T1: a global deadlock, even though each local graph on its own is acyclic.

So how does that cycle happen? The key is that no single site ever has full information in a distributed system. Machine A only sees the edge T2 → T1 and machine B only sees T3 → T4, so each of them looks deadlock-free on its own. The inter-machine dependencies only become visible when the local graphs are combined, and by then the deadlock already exists.
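
To make that concrete, here is a small Python sketch (the dict-based graph encoding and the cycle check are just illustrative, not from any particular system): each local wait-for graph is acyclic on its own, but their union contains the cycle.

```python
# Wait-for graphs as adjacency lists: an edge X -> Y means "X waits for Y".
machine_a = {"T1": [], "T2": ["T1"]}    # local view on machine A
machine_b = {"T3": ["T4"], "T4": []}    # local view on machine B
cross = {"T1": ["T3"], "T4": ["T2"]}    # edges no single machine sees

def has_cycle(graph):
    """Depth-first search for a back edge in a directed graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[node] == WHITE and visit(node) for node in list(graph))

def merge(*graphs):
    """Union of several wait-for graphs."""
    merged = {}
    for g in graphs:
        for node, succs in g.items():
            merged.setdefault(node, []).extend(succs)
    return merged

print(has_cycle(machine_a))   # False: machine A alone sees no deadlock
print(has_cycle(machine_b))   # False: machine B alone sees no deadlock
print(has_cycle(merge(machine_a, machine_b, cross)))
# True: T1 -> T3 -> T4 -> T2 -> T1
```

A global deadlock detector has to run exactly that merge step, which is why it needs sites to ship their local graphs (or at least their cross-site edges) somewhere before the cycle can be seen.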


Timestamping is used to resolve conflicts between local processes on a machine; it gives a means to solve deadlocks at that level. For distributed processes there is the possibility of two processes on different machines waiting on each other. That is in fact a regular deadlock, just across machines, which is why it is called a 'global' deadlock. In my opinion timestamping might be used there as well, but it is apparently impractical.
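
To illustrate the local level, here is a minimal sketch of the wound-wait rule, one common timestamp-based scheme (the choice of scheme is my assumption; the notes linked below cover others):

```python
def wound_wait(requester_ts, holder_ts):
    """Wound-wait: an older transaction (smaller timestamp) never waits
    for a younger one, so every wait-for edge points from a larger
    timestamp to a smaller one and no cycle can ever form."""
    if requester_ts < holder_ts:
        return "wound"  # requester is older: abort (wound) the holder
    return "wait"       # requester is younger: it may safely wait

print(wound_wait(10, 20))  # 'wound' - the older transaction aborts the holder
print(wound_wait(20, 10))  # 'wait'  - the younger transaction queues behind
```

Because every wait-for edge runs from a younger transaction to an older one, no cycle can form on a single machine; applying the same rule across machines would require globally ordered timestamps on every cross-site wait, which is presumably what makes it impractical.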

Some info on this can be found at http://www.cse.scu.edu/~jholliday/dd_9_16.htm
