Overview

Example of a ticket and "Now Serving" sign used in the Ticket Queue Management System.

The basic concept of a ticket lock is similar to the ticket queue management system. This is the method that many bakeries and delis use to serve customers in the order that they arrive, without making them stand in a line. Generally, there is some type of dispenser that customers pull sequentially numbered tickets from upon arrival. The dispenser usually has a sign above or near it stating something like "Please take a number". There is also typically a dynamic sign (usually digital) that displays the ticket number that is now being served. Each time the next ticket number (customer) is ready to be served, the "Now Serving" sign is incremented and the number called out. This allows all of the waiting customers to know how many people are still ahead of them in the queue (line).

Like this system, a ticket lock is a first in first out (FIFO) queue-based mechanism. It adds the benefit of fairness of lock acquisition and works as follows: there are two integer values which begin at 0. The first value is the queue ticket, the second is the dequeue ticket. The queue ticket is the thread's position in the queue, and the dequeue ticket is the ticket (queue position) that now holds the lock (Now Serving).

When a thread arrives, it atomically obtains and then increments the queue ticket. The atomicity of this operation is required to prevent two threads from simultaneously being able to obtain the same ticket number. It then atomically compares its ticket value (before the increment) with the dequeue ticket's value. If they are the same, the thread is permitted to enter the critical section. If they are not the same, then another thread must already be in the critical section and this thread must busy wait or yield. When a thread leaves the critical section controlled by the lock, it atomically increments the dequeue ticket. This permits the next waiting thread, the one with the next sequential ticket number, to enter the critical section.

Fairness of Lock Acquisition


The notion of fairness in lock acquisition applies to the order in which threads acquire a lock successfully.[1] If some form of fairness is implemented, it prevents a thread from being starved of execution for a long time because it is unable to acquire the lock while other threads repeatedly win it. Without fairness guarantees, a thread (or several threads) can take a disproportionately long time to execute compared to others. A simple example follows to show how a thread can be excessively delayed by a lack of fairness in lock acquisition.

Assume a case where three threads, each executing on one of three processors, are executing the following pseudo code that uses a lock with no consideration for fairness.

while(1) {
     lock {
          
          critical section
          
     }
     unlock
}

Now further assume the physical arrangement of the three processors, P1, P2, and P3, results in a non-uniform memory access time to the location of the shared lock variable. The order of increasing access time to the lock variable for the three processors is P1 < P2 < P3. So P1 is always the most advantaged at acquiring the lock, followed by P2, with P3 being most disadvantaged. How this situation leads to thread starvation is shown in the following illustration of the execution of the above pseudo code by these three processors.

    1    2          3    4 5           6    7
    ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ time
P1: lock ~crit sec~ unlock lock ~spin~ lock ~crit sec~~~~
P2: lock ~spin~~~~~ lock ~crit sec~~~~ unlock lock ~spin~
P3: lock ~spin~~~~~ lock ~spin ~~~~~~~ lock ~spin~~~~~~~~
Time | P1                     | P2                     | P3
-----|------------------------|------------------------|-------------
  1  | lock attempt (success) | lock attempt           | lock attempt
  2  | critical section       | spin                   | spin
  3  | release lock           | lock attempt (success) | lock attempt
  4  | ...                    | critical section       | spin
  5  | lock attempt           | ...                    | ...
  6  | lock attempt (success) | release lock           | lock attempt
  7  | critical section       | spin                   | spin
 ... | ...                    | ...                    | ...

Initially, the lock is free and all three processors attempt to acquire the lock simultaneously (point 1). Due to P1 having the fastest access time to the lock, it acquires it first and enters the critical section. P2 and P3 now spin while P1 is in the critical section (point 2). Upon exiting the critical section (point 3), P1 executes an unlock, releasing the lock. Since P2 has faster access to the lock than P3, it acquires the lock next and enters the critical section (point 4). While P2 is in the critical section, P1 once again attempts to acquire the lock but can’t (point 5), forcing it to spin wait along with P3. Once P2 finishes the critical section and issues an unlock, both P1 and P3 simultaneously attempt to acquire it once again (point 6). But P1, with its faster access time wins again, thus entering the critical section (point 7). This pattern of P3 being unable to obtain the lock will continue indefinitely until either P1 or P2 stops attempting to acquire it.

This illustrates the need to ensure some level of fairness in lock acquisition in certain circumstances.

Implementation


In a Non-Uniform Memory Architecture (NUMA) system it is important to have a lock implementation that guarantees some level of fairness of lock acquisition. The ticket lock is an implementation of a spinlock that adds this desired attribute.

ticketLock_init(int *next_ticket, int *now_serving)
{
     *now_serving = *next_ticket = 0;
}

ticketLock_acquire(int *next_ticket, int *now_serving)
{
     int my_ticket = fetch_and_inc(next_ticket); /* atomically take the next queue ticket */
     while (*now_serving != my_ticket) {}        /* spin until this ticket is being served */
}

ticketLock_release(int *now_serving)
{
     (*now_serving)++;   /* serve the next ticket; increment the value, not the pointer */
}



References

  1. ^ Sottile, Matthew J.; Mattson, Timothy G.; Rasmussen, Craig E. (September 28, 2009). Introduction to Concurrency in Programming Languages. Boca Raton, FL, USA: CRC Press. p. 56. ISBN 978-1-4200-7214-3.