Livelock
Livelock occurs when multiple threads continue to run (i.e. they do not
block indefinitely as in deadlock), yet the system as a whole makes no
progress because of a repeating pattern of non-productive resource contention.
Livelock may arise from attempts to avoid blocking threads (which
can hurt performance) by using a try-lock.
A try-lock attempts to lock a mutex but does not block if the mutex is already
locked. The following example should make usage of the Boost try-lock clear.
Listing 1. Contrived livelock
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

boost::try_mutex resourceX;
boost::try_mutex resourceY;

void threadAFunc()
{
    unsigned int counter = 0;
    while(true)
    {
        // Block until resourceX is acquired.
        boost::try_mutex::scoped_lock lockX(resourceX);
        boost::thread::yield(); // encourage the livelock
        // Try to acquire resourceY without blocking.
        boost::try_mutex::scoped_try_lock lockY(resourceY);
        if(lockY.locked() == false)
            continue; // failed: lockX is released at the end of this iteration
        std::cout << "threadA working: " << ++counter << "\n";
    }
}

void threadBFunc()
{
    unsigned int counter = 0;
    while(true)
    {
        // Block until resourceY is acquired (the opposite order to threadA).
        boost::try_mutex::scoped_lock lockY(resourceY);
        boost::thread::yield(); // encourage the livelock
        // Try to acquire resourceX without blocking.
        boost::try_mutex::scoped_try_lock lockX(resourceX);
        if(lockX.locked() == false)
            continue; // failed: lockY is released at the end of this iteration
        std::cout << "threadB working: " << ++counter << "\n";
    }
}
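For reference, a minimal driver (not part of the original listing) that launches both functions with the same old Boost.Thread API might look like this:
int main()
{
    // Start both workers; each loops forever, so the joins never return.
    boost::thread threadA(&threadAFunc);
    boost::thread threadB(&threadBFunc);
    threadA.join();
    threadB.join();
    return 0;
}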
This code exhibits an almost total livelock, though for each yield
statement removed the livelock becomes a little less severe. When I run this
example, at best the threads manage a few pieces of work per second. How does
the livelock occur? The probable sequence is:
1. Thread A locks resourceX and thread B locks resourceY; both yield.
2. Thread A try-locks resourceY and fails because thread B holds it; thread B try-locks resourceX and fails because thread A holds it.
3. Both threads hit continue, their scoped locks release resourceX and resourceY at the end of the iteration, and the cycle starts again.
Both threads remain runnable the whole time, yet almost no work gets done.
Another use of the term livelock involves starvation, where one
part of a system monopolises system resources and starves another part of the
system. For example, a system composed of request-queueing and
request-servicing components might exhibit starvation if an overwhelming number
of requests causes the request-queueing component to consume all system resources.
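To make the starvation scenario concrete, here is a minimal sketch (using standard C++11 threads rather than the Boost API of Listing 1, with purely illustrative names): the queueing thread accepts requests far faster than the servicing thread can drain them, so the queue, and the memory it occupies, grows without bound while the servicer falls ever further behind.
Listing 2. Starvation sketch (illustrative)
#include <atomic>
#include <chrono>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex queueMutex;
std::deque<int> requestQueue;
std::atomic<bool> running(true);

// Request-queueing component: an overwhelming stream of incoming requests
// keeps this thread grabbing the lock and growing the queue.
void queueRequests()
{
    int request = 0;
    while(running)
    {
        std::lock_guard<std::mutex> lock(queueMutex);
        for(int i = 0; i < 1000; ++i)
            requestQueue.push_back(++request);
    }
}

// Request-servicing component: it never blocks indefinitely, but it drains
// only one request per lock acquisition and falls ever further behind.
void serviceRequests()
{
    unsigned long serviced = 0;
    while(running)
    {
        std::lock_guard<std::mutex> lock(queueMutex);
        if(!requestQueue.empty())
        {
            requestQueue.pop_front();
            ++serviced;
        }
    }
    std::cout << "serviced " << serviced << " requests\n";
}

int main()
{
    std::thread producer(queueRequests);
    std::thread consumer(serviceRequests);

    // Let the system run briefly, then stop both components and report.
    std::this_thread::sleep_for(std::chrono::seconds(2));
    running = false;
    producer.join();
    consumer.join();

    std::cout << requestQueue.size() << " requests still queued\n";
    return 0;
}
The usual remedy is a bounded queue or some other form of back-pressure, so that the queueing component blocks once the servicing component falls too far behind.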