What is RCU?

Date and time: Friday, January 11, 2008 - 15:00
Location: 220 Deschutes
Speaker: Paul McKenney
  • Virginia Lo


Read-copy update (RCU) is a synchronization mechanism that was added to the Linux kernel in October of 2002. RCU achieves scalability improvements by allowing reads to occur concurrently with updates. In contrast with conventional locking primitives, which ensure mutual exclusion among concurrent threads regardless of whether they are readers or updaters, and with reader-writer locks, which allow concurrent reads but not in the presence of updates, RCU supports concurrency between a single updater and multiple readers. RCU ensures that reads are coherent by maintaining multiple versions of objects and by ensuring that old versions are not freed until all pre-existing read-side critical sections complete. RCU defines and uses efficient and scalable mechanisms for publishing and reading new versions of an object, and also for deferring the collection of old versions. These mechanisms distribute the work between the read and update paths in such a way as to make read paths extremely fast. In some cases (non-preemptible kernels), RCU's read-side primitives have zero overhead.

This leads to the question "what exactly is RCU?", and perhaps also to the question "how can RCU possibly work?" (or, not infrequently, the assertion that RCU cannot possibly work). This talk addresses these questions from a fundamental viewpoint.


Paul E. McKenney is a Distinguished Engineer at IBM and has worked on SMP, NUMA, and RCU algorithms. He has recently become quite interested in getting realtime response from mid-range SMP systems. Prior to that, he worked on packet-radio and Internet protocols (but long before the Internet became popular), system administration, business applications, and realtime systems, the latter on early-80s eight-bit systems. He is extremely thankful that today's computers have much more than 64 Kbytes of memory. His hobbies include running and the usual house-wife-and-kids habit.