July 31, 2012

This article was contributed by Paul E. McKenney

Introduction

This article describes RCU's data structures and their relationship to each other. Other aspects of RCU are covered by other articles in this series.
  1. Data-Structure Relationships
  2. The rcu_state Structure
  3. The rcu_node Structure
  4. The rcu_data Structure
  5. The rcu_dynticks Structure
  6. The rcu_head Structure
  7. RCU-Specific Fields in the task_struct Structure
  8. Accessor Functions
  9. Locking Hierarchy
At the end we have the answers to the quick quizzes.

Data-Structure Relationships

RCU is for all intents and purposes a large state machine, and its data structures maintain the state in such a way as to allow RCU readers to execute extremely quickly, while also processing the RCU grace periods requested by updaters in an efficient and extremely scalable fashion. The efficiency and scalability of RCU updaters is provided primarily by a combining tree, as shown below:

BigTreeClassicRCU.png

This diagram shows an enclosing rcu_state structure containing a tree of rcu_node structures. Each leaf node of the rcu_node tree has up to 16 rcu_data structures associated with it, so that there are NR_CPUS rcu_data structures in all, one for each possible CPU.

The purpose of this combining tree is to allow per-CPU events such as quiescent states, dyntick-idle transitions, and CPU hotplug operations to be processed efficiently and scalably. Quiescent states are recorded by the per-CPU rcu_data structures, and other events are recorded by the leaf-level rcu_node structures. All of these events are combined at each level of the tree until finally grace periods are completed at the tree's root rcu_node structure. A grace period can be completed at the root once every CPU (or, in the case of CONFIG_TREE_PREEMPT_RCU, task) has passed through a quiescent state. Once a grace period has completed, record of that fact is propagated back down the tree.

As can be seen from the diagram, on a 64-bit system a two-level tree with 64 leaves can accommodate 1,024 CPUs, with a fanout of 64 at the root and a fanout of 16 at the leaves.

Quick Quiz 1: Why isn't the fanout at the leaves also 64?
Answer

If your system has more than 1,024 CPUs (or more than 512 CPUs on a 32-bit system), then RCU will automatically add more levels to the tree. For example, if you are crazy enough to build a 64-bit system with 65,536 CPUs, RCU would configure the rcu_node tree as follows:

HugeTreeClassicRCU.png

RCU currently permits up to a four-level tree, which on a 64-bit system accommodates up to 4,194,304 CPUs, though a mere 524,288 CPUs for 32-bit systems. On the other hand, you can set CONFIG_RCU_FANOUT to be as small as 2 if you wish, which permits only 16 CPUs; I do this for testing purposes.

The Linux kernel actually supports multiple flavors of RCU running concurrently. It accomplishes this by providing separate data structures for each flavor, for example, CONFIG_TREE_RCU builds provide rcu_sched and rcu_bh, as shown below:

BigTreeClassicRCUBH.png

Energy efficiency is increasingly important, and for that reason the Linux kernel provides CONFIG_NO_HZ, which turns off the scheduling-clock interrupts on idle CPUs, in turn allowing those CPUs to attain deeper sleep states and to consume less energy. CPUs whose scheduling-clock interrupts have been turned off are said to be in “dyntick-idle mode”. RCU must handle dyntick-idle CPUs specially because RCU would otherwise need to wake up each CPU on every grace period, which would defeat the purpose of CONFIG_NO_HZ. RCU uses the rcu_dynticks structure to track which CPUs are in dyntick-idle mode, as shown below:

BigTreeClassicRCUBHdyntick.png

However, if a CPU is in dyntick-idle mode, it is in that mode for all flavors of RCU. Therefore, a single rcu_dynticks structure is allocated per CPU, and all of a given CPU's rcu_data structures share that rcu_dynticks, as shown in the figure.

CONFIG_TREE_PREEMPT_RCU kernel builds support rcu_preempt in addition to rcu_sched and rcu_bh, as shown below:

BigTreePreemptRCUBHdyntick.png

RCU updaters wait for normal grace periods by registering RCU callbacks, either directly via call_rcu() and friends (namely call_rcu_bh() and call_rcu_sched(), there being a separate interface per flavor of RCU) or indirectly via synchronize_rcu() and friends. RCU callbacks are represented by rcu_head structures, which are queued on rcu_data structures while they are waiting for a grace period to elapse, as shown in the following figure:

BigTreePreemptRCUBHdyntickCB.png

This figure shows how TREE_RCU's and TREE_PREEMPT_RCU's major data structures are related. Lesser data structures will be introduced with the algorithms that make use of them.

Note that each of the data structures in the above figure has its own synchronization:

  1. Each rcu_state structure has a pair of spinlocks, and some fields are protected by the corresponding root rcu_node structure's lock.
  2. Each rcu_node structure has a spinlock.
  3. The fields in rcu_data are private to the corresponding CPU, although a few can be read by other CPUs.
  4. Similarly, the fields in rcu_dynticks are private to the corresponding CPU, although again a few can be read by other CPUs.

It is important to note that different data structures can have very different ideas about the state of RCU at any given time. For but one example, awareness of the start or end of a given RCU grace period propagates slowly through the data structures. This slow propagation is absolutely necessary for RCU to have good read-side performance. If this balkanized implementation seems foreign to you, one useful trick is to consider each instance of these data structures to be a different person, each having the usual slightly different view of reality.

The general role of each of these data structures is as follows:

  1. rcu_state: This structure forms the interconnection between the rcu_node and rcu_data structures, tracks grace periods, contains the lock used to synchronize with CPU-hotplug events, and maintains state used to force quiescent states when grace periods extend too long.
  2. rcu_node: This structure forms the combining tree that propagates quiescent-state information from the leaves to the root, and also propagates grace-period information from the root to the leaves. It provides local copies of the grace-period state in order to allow this information to be accessed in a synchronized manner without suffering the scalability limitations that would otherwise be imposed by global locking. In CONFIG_TREE_PREEMPT_RCU kernels, it manages the lists of tasks that have blocked while in their current RCU read-side critical section. In CONFIG_TREE_PREEMPT_RCU with CONFIG_RCU_BOOST, it manages the per-rcu_node priority-boosting kernel threads (kthreads) and state. Finally, it records CPU-hotplug state in order to determine which CPUs should be ignored during a given grace period.
  3. rcu_data: This per-CPU structure is the focus of quiescent-state detection and RCU callback queuing. It also tracks its relationship to the corresponding leaf rcu_node structure to allow more-efficient propagation of quiescent states up the rcu_node combining tree. Like the rcu_node structure, it provides a local copy of the grace-period information to allow for-free synchronized access to this information from the corresponding CPU. Finally, this structure records past dyntick-idle state for the corresponding CPU and also tracks statistics.
  4. rcu_dynticks: This per-CPU structure tracks the current dyntick-idle state for the corresponding CPU. Unlike the other three structures, the rcu_dynticks structure is not replicated per RCU flavor.
  5. rcu_head: This structure represents RCU callbacks, and is the only structure allocated and managed by RCU users. The rcu_head structure is normally embedded within the RCU-protected data structure.

If all you wanted from this article was a general notion of how RCU's data structures are related, you are done. Otherwise, each of the following sections gives more details on the rcu_state, rcu_node, rcu_data, and rcu_dynticks data structures.

The rcu_state Structure

The rcu_state structure is the base structure that represents a flavor of RCU. This structure forms the interconnection between the rcu_node and rcu_data structures, tracks grace periods, contains the lock used to synchronize with CPU-hotplug events, and maintains state used to force quiescent states when grace periods extend too long.

The rcu_state structure's fields are discussed, singly and in groups, in the following sections.

Relationship to rcu_node and rcu_data Structures

This portion of the rcu_state structure is declared as follows:
  1   struct rcu_node node[NUM_RCU_NODES];
  2   struct rcu_node *level[NUM_RCU_LVLS];
  3   u32 levelcnt[MAX_RCU_LVLS + 1];
  4   u8 levelspread[NUM_RCU_LVLS];
  5   struct rcu_data __percpu *rda;

Quick Quiz 2: Wait a minute! You said that the rcu_node structures formed a tree, but they are declared as a flat array! What gives?
Answer

The rcu_node tree is embedded into the ->node[] array as shown in the following figure:

TreeMapping.png

One interesting consequence of this mapping is that a breadth-first traversal of the tree is implemented as a simple linear scan of the array, which is in fact what the rcu_for_each_node_breadth_first() macro does. This macro is used at the beginnings and ends of grace periods.

Each entry of the ->level array references the first rcu_node structure on the corresponding level of the tree, for example, as shown below:

TreeMappingLevel.png

The zeroth element of the array references the root rcu_node structure, the first element references the first child of the root rcu_node, and finally the second element references the first leaf rcu_node structure.

Quick Quiz 3: Given that this array represents a tree, why can't the diagram that includes the ->level array be planar?
Answer

Each entry of the ->levelcnt array contains the number of rcu_node structures at the corresponding level of the tree, with an additional entry that gives the number of rcu_data structures. The ->levelspread array is used internally at initialization to compute the shape of the tree, and will be discussed further with the code that uses it.

Finally, the ->rda field references a per-CPU pointer to the corresponding CPU's rcu_data structure.

All of these fields are constant once initialization is complete, and therefore need no protection.

Grace-Period Tracking

This portion of the rcu_state structure is declared as follows:

  1   unsigned long gpnum;
  2   unsigned long completed;

RCU grace periods are numbered, and the ->gpnum field contains the number of the grace period that started most recently. The ->completed field contains the number of the grace period that completed most recently. If the two fields are equal, the RCU grace period that most recently started has already completed, and therefore the corresponding flavor of RCU is idle. If ->gpnum is one greater than ->completed, then ->gpnum gives the number of the current RCU grace period, which has not yet completed. Any other combination of values indicates that something is broken. These two fields are protected by the root rcu_node's ->lock field.

There are ->gpnum and ->completed fields in the rcu_node and rcu_data structures as well. The fields in the rcu_state structure represent the most current values, and those of the other structures are compared in order to detect the start of a new grace period in a distributed fashion. The values flow from rcu_state to rcu_node (down the tree from the root to the leaves) to rcu_data.
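
For example, whether a given flavor of RCU is idle can be determined by comparing these two fields, which is what the kernel's rcu_gp_in_progress() helper does. The following is a lightly simplified rendition:

  /*
   * A grace period is in progress exactly when ->completed lags ->gpnum.
   * The ACCESS_ONCE() wrappers permit this check to be made locklessly.
   */
  static int rcu_gp_in_progress(struct rcu_state *rsp)
  {
          return ACCESS_ONCE(rsp->completed) != ACCESS_ONCE(rsp->gpnum);
  }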

Synchronizing with CPU-Hotplug Events

This field of the rcu_state structure is declared as follows:

  1   raw_spinlock_t onofflock;

This interrupt-disabled spinlock is used to synchronize with RCU's handling of CPU-hotplug events. Given that this is a global lock, most of the RCU implementation strenuously avoids caring about CPU-hotplug events.

Quick Quiz 4: How can the RCU implementation possibly be correct if most of it ignores events as profound as CPUs appearing and disappearing???
Answer

Forcing Quiescent States

This portion of the rcu_state structure is declared as follows:
  1   raw_spinlock_t fqslock;
  2   u8 signaled;
  3   u8 fqs_active;
  4   u8 fqs_need_gp;
  5   u8 boost;
  6   unsigned long gp_start;
  7   unsigned long jiffies_force_qs;
  8   unsigned long jiffies_stall;
  9   unsigned long n_force_qs;
 10   unsigned long n_force_qs_lh;
 11   unsigned long n_force_qs_ngp;

These fields are used to determine when RCU should take corrective action when a grace period has extended for too long. Corrective actions include detecting that CPUs are in dyntick-idle mode (and thus unable to be in RCU read-side critical sections, and therefore not in need of being bothered), sending reschedule interrupts to CPUs that have not yet passed through a quiescent state for the current grace period, boosting the priority of tasks blocked in RCU read-side critical sections, and detecting that CPUs are offline.

The ->fqslock field is a spinlock that ensures that only one CPU at a time is forcing quiescent states for a given flavor of RCU. It protects all the fields described in this section, unless otherwise stated. This spinlock is conditionally acquired, which prevents contention problems. This works because if any given CPU is forcing quiescent states for a given flavor of RCU, there is no point in other CPUs attempting to also do so.
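
The following simplified sketch shows how such a conditional acquisition might look at the beginning of force_quiescent_state(); the real function of course goes on to do the actual forcing:

  unsigned long flags;

  if (!raw_spin_trylock_irqsave(&rsp->fqslock, flags)) {
          rsp->n_force_qs_lh++;   /* inexact: increments can be lost */
          return;                 /* someone else is already on the job */
  }
  rsp->n_force_qs++;              /* lock acquired, so count this attempt */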

The ->signaled field is used as the state variable for forcing quiescent states. It takes on values as follows:

  1. RCU_GP_IDLE indicates that there is no quiescent-state forcing in progress.
  2. RCU_GP_INIT indicates that grace-period initialization is in progress, during which time it is forbidden to force quiescent states.
  3. RCU_SAVE_DYNTICK indicates that the forcing of quiescent states is underway, starting with the collection of dyntick-idle states of all CPUs that have not yet passed through a quiescent state for the current grace period. RCU priority boosting might also be carried out in this state, but not typically.
  4. RCU_FORCE_QS indicates that quiescent states are being forced. This includes checks for dyntick-idle and offline CPUs, sending of resched IPIs, and RCU priority boosting.

The ->fqs_active field is used to hold off starting new grace periods while quiescent states are being forced (otherwise it is all too easy for force_quiescent_state() to jump to the conclusion that it has managed to complete not the old grace period, but the new one, with disastrous results). The ->fqs_need_gp field is used to record the fact that someone wanted to start a grace period, but was unable to do so because force_quiescent_state() was active. When force_quiescent_state() completes, if ->fqs_need_gp is set, then force_quiescent_state() starts a new grace period. Accesses to these two fields are protected by the root rcu_node's ->lock. The ->boost field is used to indicate that this flavor of RCU supports priority boosting. It is constant, so needs no protection.

The ->gp_start field records the start of the current grace period in jiffies. The ->jiffies_force_qs field contains the time in jiffies at which the next quiescent-state forcing is scheduled to occur, and the ->jiffies_stall field contains the time in jiffies at which the next RCU CPU stall detection is scheduled to occur. These fields are protected by the root rcu_node's ->lock, but may be accessed without protection.

The ->n_force_qs field records the number of times that force_quiescent_state() was invoked and managed to acquire ->fqslock. The ->n_force_qs_lh and ->n_force_qs_ngp fields record the number of times that force_quiescent_state() declined to force quiescent states due to the ->fqslock already being held and there being no active grace period, respectively. The ->n_force_qs_lh field is unprotected and may therefore lose counts.

Quick Quiz 5: If there is no active grace period, why was force_quiescent_state() invoked in the first place???
Answer

Miscellaneous

This portion of the rcu_state structure is declared as follows:

  1   unsigned long gp_max;
  2   char *name;

The ->gp_max field tracks the duration of the longest grace period in jiffies. It is protected by the root rcu_node's ->lock.

The ->name field points to the name of the RCU flavor (for example, “rcu_sched”), and is constant.

The rcu_node Structure

The rcu_node structures form the combining tree that propagates quiescent-state information from the leaves to the root and also that propagates grace-period information from the root down to the leaves. They provide local copies of the grace-period state in order to allow this information to be accessed in a synchronized manner without suffering the scalability limitations that would otherwise be imposed by global locking. In CONFIG_TREE_PREEMPT_RCU kernels, they manage the lists of tasks that have blocked while in their current RCU read-side critical section. In CONFIG_TREE_PREEMPT_RCU with CONFIG_RCU_BOOST, they manage the per-rcu_node priority-boosting kernel threads (kthreads) and state. Finally, they record CPU-hotplug state in order to determine which CPUs should be ignored during a given grace period.

The rcu_node structure's fields are discussed, singly and in groups, in the following sections.

Connection to Combining Tree

This portion of the rcu_node structure is declared as follows:

  1   struct rcu_node *parent;
  2   u8 level;
  3   u8 grpnum;
  4   unsigned long grpmask;
  5   int grplo;
  6   int grphi;

The ->parent pointer references the rcu_node one level up in the tree, and is NULL for the root rcu_node. The RCU implementation makes heavy use of this field to push quiescent states up the tree. The ->level field gives the level in the tree, with the root being at level zero, its children at level one, and so on. The ->grpnum field gives this node's position within the children of its parent, so this number can range between 0 and 31 on 32-bit systems and between 0 and 63 on 64-bit systems. The ->level and ->grpnum fields are used only during initialization and for tracing. The ->grpmask field is the bitmask counterpart of ->grpnum, and therefore always has exactly one bit set. This mask is used to clear the bit corresponding to this rcu_node structure in its parent's bitmasks, which are described later. Finally, the ->grplo and ->grphi fields contain the lowest and highest numbered CPU served by this rcu_node structure, respectively.

All of these fields are constant, and thus do not require any synchronization.

Synchronization

This field of the rcu_node structure is declared as follows:

  1   raw_spinlock_t lock;

This field is used to protect the remaining fields in this structure, unless otherwise stated. That said, all of the fields in this structure can be accessed without locking for tracing purposes. Yes, this can result in confusing traces, but better some tracing confusion than to be heisenbugged out of existence.

Grace-Period Tracking

This portion of the rcu_node structure is declared as follows:

  1   unsigned long gpnum;
  2   unsigned long completed;

These fields are the counterparts of the fields of the same name in the rcu_state structure. They each may lag up to one behind their rcu_state counterparts. If a given rcu_node structure's ->gpnum and ->completed fields are equal, then this rcu_node structure believes that RCU is idle. Otherwise, as with the rcu_state structure, the ->gpnum field will be one greater than the ->completed field, with ->gpnum indicating which grace period this rcu_node believes is still being waited for.

The ->gpnum field of each rcu_node structure is updated at the beginning of each grace period, and the ->completed field is updated at the end of each grace period.

Quiescent-State Tracking

These fields manage the propagation of quiescent states up the combining tree.

This portion of the rcu_node structure has fields as follows:

  1   unsigned long qsmask;
  2   unsigned long expmask;
  3   unsigned long qsmaskinit;

The ->qsmask field tracks which of this rcu_node structure's children still need to report quiescent states for the current normal grace period. Such children will have a value of 1 in their corresponding bit. Note that the leaf rcu_node structures should be thought of as having rcu_data structures as their children. Similarly, the ->expmask field tracks which of this rcu_node structure's children still need to report quiescent states for the current expedited grace period. An expedited grace period has the same conceptual properties as a normal grace period, but the expedited implementation accepts extreme CPU overhead to obtain much lower grace-period latency, for example, consuming a few tens of microseconds worth of CPU time to reduce grace-period duration from milliseconds to tens of microseconds. The ->qsmaskinit field tracks which of this rcu_node structure's children cover for at least one online CPU. This mask is used to initialize both ->qsmask and ->expmask at the beginning of the corresponding sort of grace period.

Quick Quiz 6: Why are these bitmasks protected by locking? Come on, haven't you heard of atomic instructions???
Answer
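
To make the interaction between the bitmasks and the locking concrete, the following is a simplified sketch of how a quiescent state might be propagated up the tree; the kernel's rcu_report_qs_rnp() function does this job, but with grace-period sanity checks and bookkeeping that are omitted here, and the function name below is made up for illustration:

  static void report_qs_sketch(struct rcu_node *rnp, unsigned long mask)
  {
          unsigned long flags;

          for (;;) {
                  raw_spin_lock_irqsave(&rnp->lock, flags);
                  rnp->qsmask &= ~mask;   /* clear reporting child's bit */
                  if (rnp->qsmask != 0 || rnp->parent == NULL) {
                          /* Either more children to wait for, or at the
                           * root, where ->qsmask == 0 means the grace
                           * period can end (handled elsewhere). */
                          raw_spin_unlock_irqrestore(&rnp->lock, flags);
                          return;
                  }
                  mask = rnp->grpmask;    /* this node's bit in its parent */
                  raw_spin_unlock_irqrestore(&rnp->lock, flags);
                  rnp = rnp->parent;
          }
  }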

Blocked-Task Management

TREE_PREEMPT_RCU allows tasks to be preempted in the midst of their RCU read-side critical sections, and these tasks must be tracked explicitly. The details of exactly why and how they are tracked will be covered in a separate article on RCU read-side processing. For now, it is enough to know that the rcu_node structure tracks them.

  1   struct list_head blkd_tasks;
  2   struct list_head *gp_tasks;
  3   struct list_head *exp_tasks;

The ->blkd_tasks field is a list header for the list of blocked and preempted tasks. As tasks undergo context switches within RCU read-side critical sections, their task_struct structures are enqueued (via the task_struct's ->rcu_node_entry field) onto the head of the ->blkd_tasks list for the leaf rcu_node structure corresponding to the CPU on which the outgoing context switch executed. As these tasks later exit their RCU read-side critical sections, they remove themselves from the list. This list is therefore in reverse time order, so that if one of the tasks is blocking the current grace period, all subsequent tasks must also be blocking that same grace period. Therefore, a single pointer into this list suffices to track all tasks blocking a given grace period. That pointer is stored in ->gp_tasks for normal grace periods and in ->exp_tasks for expedited grace periods. These last two fields are NULL if there is no grace period in flight or if there are no blocked tasks preventing that grace period from completing. If either of these two pointers is referencing a task that removes itself from the ->blkd_tasks list, then that task must advance the pointer to the next task on the list, or set the pointer to NULL if there are no subsequent tasks on the list.
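
This pointer-advance rule might be sketched as follows; the helper names are made up for illustration, and the kernel carries out this work under the leaf rcu_node structure's ->lock in rcu_read_unlock_special():

  /* Next task on ->blkd_tasks, or NULL if the departing task was last. */
  static struct list_head *next_blkd_or_null(struct task_struct *t,
                                             struct rcu_node *rnp)
  {
          struct list_head *np = t->rcu_node_entry.next;

          return np == &rnp->blkd_tasks ? NULL : np;
  }

  static void remove_blkd_task_sketch(struct task_struct *t,
                                      struct rcu_node *rnp)
  {
          struct list_head *np = next_blkd_or_null(t, rnp);

          if (rnp->gp_tasks == &t->rcu_node_entry)
                  rnp->gp_tasks = np;     /* advance normal-GP pointer */
          if (rnp->exp_tasks == &t->rcu_node_entry)
                  rnp->exp_tasks = np;    /* advance expedited-GP pointer */
          list_del_init(&t->rcu_node_entry);
  }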

For example, suppose that tasks T1, T2, and T3 are all hard-affinitied to the largest-numbered CPU in the system. Then if task T1 blocked in an RCU read-side critical section, then an expedited grace period started, then task T2 blocked in an RCU read-side critical section, then a normal grace period started, and finally task T3 blocked in an RCU read-side critical section, then the state of the last leaf rcu_node structure's blocked-task list would be as shown below:

blkd_task.png

Task T1 is blocking both grace periods, task T2 is blocking only the normal grace period, and task T3 is blocking neither grace period. Note that these tasks will not remove themselves from this list immediately upon resuming execution. They will instead remain on the list until they execute the outermost rcu_read_unlock() that ends their RCU read-side critical section.

RCU Priority Boosting

TREE_PREEMPT_RCU implements RCU priority boosting if CONFIG_RCU_BOOST=y. The following rcu_node fields support RCU priority boosting:

  1   struct task_struct *node_kthread_task;
  2   unsigned int node_kthread_status;
  3   atomic_t wakemask;
  4   struct task_struct *boost_kthread_task;
  5   unsigned int boost_kthread_status;
  6   struct list_head *boost_tasks;
  7   unsigned long boost_time;
  8   unsigned long n_tasks_boosted;
  9   unsigned long n_exp_boosts;
 10   unsigned long n_normal_boosts;
 11   unsigned long n_balk_blkd_tasks;
 12   unsigned long n_balk_exp_gp_tasks;
 13   unsigned long n_balk_boost_tasks;
 14   unsigned long n_balk_notblocked;
 15   unsigned long n_balk_notyet;
 16   unsigned long n_balk_nos;

The ->node_kthread_task field references this structure's per-rcu_node task and the ->node_kthread_status field records its status for tracing and debugging purposes. This kthread awakens per-CPU callback-handling kthreads if they remain preempted too long after yielding the CPU. The possible values of this status field are as follows:

  1. RCU_KTHREAD_STOPPED indicates that the kthread is not present, in which case the ->node_kthread_task field should be NULL.
  2. RCU_KTHREAD_RUNNING indicates that the kthread is running (or maybe preempted).
  3. RCU_KTHREAD_WAITING indicates that the kthread is waiting for work to do.
  4. RCU_KTHREAD_OFFCPU indicates that the kthread is refraining from taking any action because it found itself executing on the wrong CPU. This can happen during CPU-hotplug events.
  5. RCU_KTHREAD_YIELDING indicates that the kthread is refraining from executing because it is trying to avoid hogging the CPU.

In the current implementation, the rcu_node kthread never actually enters the RCU_KTHREAD_OFFCPU or RCU_KTHREAD_YIELDING states, but the per-rcu_data kthreads discussed later can, and having all the values in one place is convenient.

The bits in the ->wakemask field indicate which of the per-rcu_data kthreads need to be awakened. This field will be non-zero only on leaf rcu_node structures, as only these rcu_node structures have rcu_data structures as descendants.

Quick Quiz 7: But ->wakemask is only 32 bits wide, while the ->qsmask, ->expmask, and ->qsmaskinit fields can be up to 64 bits wide. Just how is this supposed to work on 64-bit systems???
Answer

The ->boost_kthread_task field references this structure's per-rcu_node priority-boosting task and the ->boost_kthread_status field tracks its status in a manner similar to the way the ->node_kthread_status field tracks the status of the task referenced by ->node_kthread_task. This kthread boosts the priority of tasks that remain blocked or preempted for too long within RCU read-side critical sections.

The ->boost_tasks field references the next task in the ->blkd_tasks list that is to be priority boosted, or NULL if there is no need to priority boost any task on the ->blkd_tasks list. The ->boost_time field indicates the time in jiffies at which boosting will start if the current grace period does not end beforehand.

The remaining fields are used for statistics and tracing, and will be discussed elsewhere.

Sizing the rcu_node Array

The rcu_node array is sized via a series of C-preprocessor expressions as follows:

  1 #define MAX_RCU_LVLS 4
  2 #if CONFIG_RCU_FANOUT > 16
  3 #define RCU_FANOUT_LEAF       16
  4 #else /* #if CONFIG_RCU_FANOUT > 16 */
  5 #define RCU_FANOUT_LEAF       (CONFIG_RCU_FANOUT)
  6 #endif /* #else #if CONFIG_RCU_FANOUT > 16 */
  7 #define RCU_FANOUT_1        (RCU_FANOUT_LEAF)
  8 #define RCU_FANOUT_2        (RCU_FANOUT_1 * CONFIG_RCU_FANOUT)
  9 #define RCU_FANOUT_3        (RCU_FANOUT_2 * CONFIG_RCU_FANOUT)
 10 #define RCU_FANOUT_4        (RCU_FANOUT_3 * CONFIG_RCU_FANOUT)
 11 
 12 #if NR_CPUS <= RCU_FANOUT_1
 13 #  define NUM_RCU_LVLS        1
 14 #  define NUM_RCU_LVL_0        1
 15 #  define NUM_RCU_LVL_1        (NR_CPUS)
 16 #  define NUM_RCU_LVL_2        0
 17 #  define NUM_RCU_LVL_3        0
 18 #  define NUM_RCU_LVL_4        0
 19 #elif NR_CPUS <= RCU_FANOUT_2
 20 #  define NUM_RCU_LVLS        2
 21 #  define NUM_RCU_LVL_0        1
 22 #  define NUM_RCU_LVL_1        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
 23 #  define NUM_RCU_LVL_2        (NR_CPUS)
 24 #  define NUM_RCU_LVL_3        0
 25 #  define NUM_RCU_LVL_4        0
 26 #elif NR_CPUS <= RCU_FANOUT_3
 27 #  define NUM_RCU_LVLS        3
 28 #  define NUM_RCU_LVL_0        1
 29 #  define NUM_RCU_LVL_1        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
 30 #  define NUM_RCU_LVL_2        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
 31 #  define NUM_RCU_LVL_3        (NR_CPUS)
 32 #  define NUM_RCU_LVL_4        0
 33 #elif NR_CPUS <= RCU_FANOUT_4
 34 #  define NUM_RCU_LVLS        4
 35 #  define NUM_RCU_LVL_0        1
 36 #  define NUM_RCU_LVL_1        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_3)
 37 #  define NUM_RCU_LVL_2        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
 38 #  define NUM_RCU_LVL_3        DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
 39 #  define NUM_RCU_LVL_4        (NR_CPUS)
 40 #else
 41 # error "CONFIG_RCU_FANOUT insufficient for NR_CPUS"
 42 #endif /* #if (NR_CPUS) <= RCU_FANOUT_1 */
 43 
 44 #define RCU_SUM (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2 + NUM_RCU_LVL_3 + NUM_RCU_LVL_4)
 45 #define NUM_RCU_NODES (RCU_SUM - NR_CPUS)

The maximum number of levels in the rcu_node structure is currently limited to four, as specified by line 1. For 32-bit systems, this allows 16*32*32*32=524,288 CPUs, which should be sufficient for the next few years at least. For 64-bit systems, 16*64*64*64=4,194,304 CPUs is allowed, which should see us through the next decade or so. This four-level tree also allows kernels built with CONFIG_RCU_FANOUT=8 to support up to 4096 CPUs, which might be useful in very large systems having eight CPUs per socket (but please note that no one has yet shown any measurable performance degradation due to misaligned socket and rcu_node boundaries). In addition, building kernels with a full four levels of rcu_node tree permits better testing of RCU's combining-tree code.

The RCU_FANOUT_LEAF symbol controls how many CPUs are handled by each leaf rcu_node structure. Experience has shown that allowing a given leaf rcu_node structure to handle 64 CPUs, as permitted by the number of bits in the ->qsmask field on a 64-bit system, results in excessive contention for the leaf rcu_node structures' ->lock fields. The number of CPUs per leaf rcu_node structure is therefore limited to 16 or the value specified by CONFIG_RCU_FANOUT, whichever is smaller. Lines 2-6 perform this computation.

Lines 7-10 compute the maximum number of CPUs supported by a single-level (which contains a single rcu_node structure), two-level, three-level, and four-level rcu_node tree, respectively, given the fanout specified by CONFIG_RCU_FANOUT. These numbers of CPUs are retained in the RCU_FANOUT_1, RCU_FANOUT_2, RCU_FANOUT_3, and RCU_FANOUT_4 C-preprocessor variables, respectively.

These variables are used to control the C-preprocessor #if statement spanning lines 12-42 that computes the number of rcu_node structures required for each level of the tree, as well as the number of levels required. The number of levels is placed in the NUM_RCU_LVLS C-preprocessor variable by lines 13, 20, 27, and 34. The number of rcu_node structures for the topmost level of the tree is always exactly one, and this value is unconditionally placed into NUM_RCU_LVL_0 by lines 14, 21, 28, and 35. The rest of the levels (if any) of the rcu_node tree are computed by dividing the maximum number of CPUs by the fanout supported by the number of levels from the current level down, rounding up. This computation is performed by lines 22, 29-30, and 36-38. The level of the combining tree beneath the leaf rcu_node structures (which corresponds to the rcu_data structures) is handled by lines 15, 23, 31, and 39, and is always set to the maximum number of CPUs NR_CPUS. Finally, lines 40-42 produce an error if the maximum number of CPUs is too large for the specified fanout.

Lines 44-45 can then sum the per-level counts and subtract NR_CPUS to obtain the number of rcu_node structures in the combining tree. (Recall that the computations in lines 12-40 include the number of per-CPU rcu_data structures as well as the rcu_node structures.)
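
As a worked example, consider a hypothetical 64-bit kernel built with NR_CPUS=4096 and CONFIG_RCU_FANOUT=64. The leaf fanout is capped at 16, so that RCU_FANOUT_1=16, RCU_FANOUT_2=1,024, and RCU_FANOUT_3=65,536, which means that three levels suffice. The preprocessor computations then yield:

  NUM_RCU_LVL_0 = 1
  NUM_RCU_LVL_1 = DIV_ROUND_UP(4096, 1024) = 4
  NUM_RCU_LVL_2 = DIV_ROUND_UP(4096, 16)   = 256
  NUM_RCU_LVL_3 = 4096  (the rcu_data structures)
  NUM_RCU_NODES = (1 + 4 + 256 + 4096) - 4096 = 261

In other words, such a kernel's combining tree would consist of one root rcu_node structure, four second-level rcu_node structures, and 256 leaf rcu_node structures, with each leaf serving 16 CPUs.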

The rcu_data Structure

The rcu_data structure maintains the per-CPU state for the corresponding flavor of RCU. The fields in this structure may be accessed only from the corresponding CPU (and from tracing) unless otherwise stated. This structure is the focus of quiescent-state detection and RCU callback queuing. It also tracks its relationship to the corresponding leaf rcu_node structure to allow more-efficient propagation of quiescent states up the rcu_node combining tree. Like the rcu_node structure, it provides a local copy of the grace-period information to allow for-free synchronized access to this information from the corresponding CPU. Finally, this structure records past dyntick-idle state for the corresponding CPU and also tracks statistics.

The rcu_data structure's fields are discussed, singly and in groups, in the following sections.

Connection to Other Data Structures

This portion of the rcu_data structure is declared as follows:

  1   int cpu;
  2   struct rcu_state *rsp;
  3   struct rcu_node *mynode;
  4   struct rcu_dynticks *dynticks;
  5   unsigned long grpmask;
  6   bool beenonline;
  7   bool preemptible;

The ->cpu field contains the number of the corresponding CPU, the ->rsp pointer references the corresponding rcu_state structure (and is most frequently used to locate the name of the corresponding flavor of RCU for tracing), and the ->mynode field references the corresponding rcu_node structure. The ->mynode field is used to propagate quiescent states up the combining tree.

The ->dynticks pointer references the rcu_dynticks structure corresponding to this CPU. Recall that a single per-CPU instance of the rcu_dynticks structure is shared among all flavors of RCU. These first four fields are constant and therefore require no synchronization.

The ->grpmask field indicates the bit in the ->mynode->qsmask corresponding to this rcu_data structure, and is also used when propagating quiescent states. The ->beenonline flag is set whenever the corresponding CPU comes online, which means that the debugfs tracing need not dump out any rcu_data structure for which this flag is not set. The ->preemptible flag indicates whether or not this flavor of RCU is preemptible, and is used to determine how to go about forcing quiescent states. This last field is constant and therefore requires no synchronization.

Quiescent-State and Grace-Period Tracking

This portion of the rcu_data structure is declared as follows:

  1   unsigned long completed;
  2   unsigned long gpnum;
  3   bool qs_pending;
  4   bool passed_quiesce;
  5   unsigned long passed_quiesce_gpnum;

These fields are the counterparts of the fields of the same name in the rcu_state and rcu_node structures. They may each lag up to one behind their rcu_node counterparts, but in CONFIG_NO_HZ kernels can lag arbitrarily far behind for CPUs in dyntick-idle mode (but these counters will catch up upon exit from dyntick-idle mode). If a given rcu_data structure's ->gpnum and ->completed fields are equal, then this rcu_data structure believes that RCU is idle. Otherwise, as with the rcu_state and rcu_node structures, the ->gpnum field will be one greater than the ->completed field, with ->gpnum indicating which grace period this rcu_data believes is still being waited for.

Quick Quiz 8: All this replication of the grace period numbers can only cause massive confusion. Why not just keep a global pair of counters and be done with it???
Answer

The ->qs_pending flag indicates that the RCU core needs a quiescent state from the corresponding CPU, while the ->passed_quiesce flag indicates that the CPU has passed through a quiescent state. Finally, ->passed_quiesce_gpnum records which grace period was (and maybe still is) in effect when the ->passed_quiesce flag was set.

RCU Callback Handling

In the absence of CPU-hotplug events, RCU callbacks are invoked by the same CPU that registered them. This is strictly a cache-locality optimization: callbacks can and do get invoked on CPUs other than the one that registered them. After all, if the CPU that registered a given callback has gone offline before the callback can be invoked, there really is no other choice.

This portion of the rcu_data structure is declared as follows:

  1   struct rcu_head *nxtlist;
  2   struct rcu_head **nxttail[RCU_NEXT_SIZE];
  3   long qlen;
  4   long blimit;
  5   long qlen_last_fqs_check;
  6   unsigned long n_force_qs_snap;
  7   unsigned long n_cbs_invoked;
  8   unsigned long n_cbs_orphaned;
  9   unsigned long n_cbs_adopted;

The ->nxtlist pointer and the ->nxttail[] array form a four-segment list with older callbacks near the head and newer ones near the tail. Each segment contains callbacks with the corresponding relationship to the current grace period. The pointer out of the end of each of the four segments is referenced by the element of the ->nxttail[] array indexed by RCU_DONE_TAIL (for callbacks handled by a prior grace period), RCU_WAIT_TAIL (for callbacks waiting on the current grace period), RCU_NEXT_READY_TAIL (for callbacks that will wait on the next grace period), and RCU_NEXT_TAIL (for callbacks that are not yet associated with a specific grace period) respectively, as shown in the following figure.

nxtlist.png

In this figure, the ->nxtlist pointer references the first RCU callback in the list. The ->nxttail[RCU_DONE_TAIL] array element references the ->nxtlist pointer itself, indicating that none of the callbacks is ready to invoke. The ->nxttail[RCU_WAIT_TAIL] array element references callback CB 2's ->next pointer, which indicates that CB 1 and CB 2 are both waiting on the current grace period. The ->nxttail[RCU_NEXT_READY_TAIL] array element references the same RCU callback that ->nxttail[RCU_WAIT_TAIL] does, which indicates that there are no callbacks waiting on the next RCU grace period. The ->nxttail[RCU_NEXT_TAIL] array element references CB 4's ->next pointer, indicating that all the remaining RCU callbacks have not yet been assigned to an RCU grace period. Note that the ->nxttail[RCU_NEXT_TAIL] array element always references the last RCU callback's ->next pointer unless the callback list is empty, in which case it references the ->nxtlist pointer.
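
As a concrete illustration, enqueuing a new callback onto the RCU_NEXT_TAIL segment might be sketched as follows; the function name is made up, and the kernel's __call_rcu() adds interrupt disabling, length accounting, and quiescent-state forcing that are omitted here:

  static void enqueue_callback_sketch(struct rcu_data *rdp,
                                      struct rcu_head *head,
                                      void (*func)(struct rcu_head *))
  {
          head->func = func;
          head->next = NULL;
          *rdp->nxttail[RCU_NEXT_TAIL] = head;       /* append at the end */
          rdp->nxttail[RCU_NEXT_TAIL] = &head->next; /* tail past new CB */
          rdp->qlen++;
  }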

CPUs advance their callbacks from the RCU_NEXT_TAIL to the RCU_NEXT_READY_TAIL to the RCU_WAIT_TAIL to the RCU_DONE_TAIL list segments as grace periods advance. The CPU advances the callbacks in its rcu_data structure whenever it notices that another RCU grace period has completed. The CPU detects the completion of an RCU grace period by noticing that the value of its rcu_data structure's ->completed field differs from that of its leaf rcu_node structure. Recall that each rcu_node structure's ->completed field is updated at the end of each grace period.

The ->qlen counter contains the number of callbacks in ->nxtlist. The ->blimit counter is the maximum number of RCU callbacks that may be invoked at a given time. The ->qlen_last_fqs_check and ->n_force_qs_snap fields coordinate the forcing of quiescent states from call_rcu() and friends when callback lists grow excessively long.

Finally, the ->n_cbs_invoked, ->n_cbs_orphaned, and ->n_cbs_adopted fields count the number of callbacks invoked, sent to other CPUs when this CPU goes offline, and received from other CPUs when those other CPUs go offline, respectively.

Dyntick-Idle Handling

This portion of the rcu_data structure is declared as follows:

  1   int dynticks_snap;
  2   unsigned long dynticks_fqs;

The ->dynticks_snap field is used to take a snapshot of the corresponding CPU's dyntick-idle state when forcing quiescent states, and is therefore accessed from other CPUs. Finally, the ->dynticks_fqs field is used to count the number of times this CPU is determined to be in dyntick-idle state, and is used for tracing and debugging purposes.

Statistics

This portion of the rcu_data structure has fields as follows:

  1   unsigned long offline_fqs;
  2   unsigned long resched_ipi;
  3   unsigned long n_rcu_pending;
  4   unsigned long n_rp_qs_pending;
  5   unsigned long n_rp_report_qs;
  6   unsigned long n_rp_cb_ready;
  7   unsigned long n_rp_cpu_needs_gp;
  8   unsigned long n_rp_gp_completed;
  9   unsigned long n_rp_gp_started;
 10   unsigned long n_rp_need_fqs;
 11   unsigned long n_rp_need_nothing;

These fields capture statistics for debugging and tracing, and are discussed elsewhere.

The rcu_dynticks Structure

The rcu_dynticks structure maintains the per-CPU dyntick-idle state for the corresponding CPU. Unlike the other structures, rcu_dynticks is not replicated over the different flavors of RCU. The fields in this structure may be accessed only from the corresponding CPU (and from tracing) unless otherwise stated. Its fields are as follows:

  1   int dynticks_nesting;
  2   int dynticks_nmi_nesting;
  3   atomic_t dynticks;

The ->dynticks_nesting field counts the nesting depth of normal interrupts. In addition, this counter is incremented when exiting dyntick-idle mode and decremented when entering it. This counter can therefore be thought of as counting the number of reasons why this CPU cannot be permitted to enter dyntick-idle mode, aside from non-maskable interrupts (NMIs). NMIs are counted by the ->dynticks_nmi_nesting field, except that NMIs that interrupt non-dyntick-idle execution are not counted.

Finally, the ->dynticks field counts the corresponding CPU's transitions to and from dyntick-idle mode, so that this counter has an even value when the CPU is in dyntick-idle mode and an odd value otherwise.
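
For example, a CPU forcing quiescent states can sample some other CPU's ->dynticks counter roughly as follows; this is a lightly simplified rendition of the kernel's dyntick_save_progress_counter() function, with a made-up name:

  /*
   * atomic_add_return(0, ...) takes a fully ordered snapshot of the
   * ->dynticks counter.  An even value means that the CPU is in
   * dyntick-idle mode, which is an extended quiescent state.
   */
  static int in_dyntick_idle_sketch(struct rcu_data *rdp)
  {
          rdp->dynticks_snap = atomic_add_return(0, &rdp->dynticks->dynticks);
          return (rdp->dynticks_snap & 0x1) == 0;
  }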

Quick Quiz 9: Why not just count all NMIs? Wouldn't that be simpler and less error prone?
Answer

The rcu_head Structure

Each rcu_head structure represents an RCU callback. These structures are normally embedded within RCU-protected data structures whose algorithms use asynchronous grace periods. In contrast, when using algorithms that block waiting for RCU grace periods, RCU users need not provide rcu_head structures.

The rcu_head structure has fields as follows:

  1   struct rcu_head *next;
  2   void (*func)(struct rcu_head *head);

The ->next field is used to link the rcu_head structures together in the lists within the rcu_data structures. The ->func field is a pointer to the function to be called when the callback is ready to be invoked, and this function is passed a pointer to the rcu_head structure. However, kfree_rcu() uses the ->func field to record the offset of the rcu_head structure within the enclosing RCU-protected data structure.

Both of these fields are used internally by RCU. From the viewpoint of RCU users, this structure is an opaque “cookie”.

Quick Quiz 10: Given that the callback function ->func is passed a pointer to the rcu_head structure, how is that function supposed to find the beginning of the enclosing RCU-protected data structure?
Answer

RCU-Specific Fields in the task_struct Structure

The CONFIG_TREE_PREEMPT_RCU implementation uses some additional fields in the task_struct structure:

  1 #ifdef CONFIG_PREEMPT_RCU
  2   int rcu_read_lock_nesting;
  3   char rcu_read_unlock_special;
  4 #if defined(CONFIG_RCU_BOOST) && defined(CONFIG_TREE_PREEMPT_RCU)
  5   int rcu_boosted;
  6 #endif /* #if defined(CONFIG_RCU_BOOST) && defined(CONFIG_TREE_PREEMPT_RCU) */
  7   struct list_head rcu_node_entry;
  8 #endif /* #ifdef CONFIG_PREEMPT_RCU */
  9 #ifdef CONFIG_TREE_PREEMPT_RCU
 10   struct rcu_node *rcu_blocked_node;
 11 #endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
 12 #ifdef CONFIG_RCU_BOOST
 13   struct rt_mutex *rcu_boost_mutex;
 14 #endif /* #ifdef CONFIG_RCU_BOOST */
 15 
 16 #define RCU_READ_UNLOCK_BLOCKED (1 << 0)
 17 #define RCU_READ_UNLOCK_BOOSTED (1 << 1)
 18 #define RCU_READ_UNLOCK_NEED_QS (1 << 2)

The ->rcu_read_lock_nesting field records the nesting level for RCU read-side critical sections, and the ->rcu_read_unlock_special field is a bitmask that records special conditions that require rcu_read_unlock() to do additional work. There are currently three bits defined as shown on lines 16-18. The ->rcu_boosted field indicates that the current task was subjected to RCU priority boosting during its current RCU read-side critical section.
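
For reference, the read-side primitives that manipulate ->rcu_read_lock_nesting and ->rcu_read_unlock_special are quite small. The following is a deliberately simplified sketch that omits the reentrancy hardening found in the real __rcu_read_unlock():

  void __rcu_read_lock(void)
  {
          current->rcu_read_lock_nesting++;
          barrier();      /* keep the critical section after the increment */
  }

  void __rcu_read_unlock(void)
  {
          struct task_struct *t = current;

          barrier();      /* keep the critical section before the decrement */
          if (--t->rcu_read_lock_nesting == 0 &&
              unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
                  rcu_read_unlock_special(t); /* blocked, boosted, or QS needed */
  }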

Quick Quiz 11: Why is ->rcu_boosted required, given that there is a RCU_READ_UNLOCK_BOOSTED bit in ->rcu_read_unlock_special?
Answer

Accessor Functions

The following listing shows the rcu_get_root(), rcu_for_each_node_breadth_first(), rcu_for_each_nonleaf_node_breadth_first(), and rcu_for_each_leaf_node() function and macros:

  1 static struct rcu_node *rcu_get_root(struct rcu_state *rsp)
  2 {
  3   return &rsp->node[0];
  4 }
  5 
  6 #define rcu_for_each_node_breadth_first(rsp, rnp) \
  7   for ((rnp) = &(rsp)->node[0]; \
  8        (rnp) < &(rsp)->node[NUM_RCU_NODES]; (rnp)++)
  9 
 10 #define rcu_for_each_nonleaf_node_breadth_first(rsp, rnp) \
 11   for ((rnp) = &(rsp)->node[0]; \
 12        (rnp) < (rsp)->level[NUM_RCU_LVLS - 1]; (rnp)++)
 13 
 14 #define rcu_for_each_leaf_node(rsp, rnp) \
 15   for ((rnp) = (rsp)->level[NUM_RCU_LVLS - 1]; \
 16        (rnp) < &(rsp)->node[NUM_RCU_NODES]; (rnp)++)

The rcu_get_root() function simply returns a pointer to the first element of the specified rcu_state structure's ->node[] array, which is the root rcu_node structure.

As noted earlier, the rcu_for_each_node_breadth_first() macro takes advantage of the layout of the rcu_node structures in the rcu_state structure's ->node[] array, performing a breadth-first traversal by simply traversing the array in order. The rcu_for_each_nonleaf_node_breadth_first() macro operates similarly, but traverses only the first part of the array, thus excluding the leaf rcu_node structures. Finally, the rcu_for_each_leaf_node() macro traverses only the last part of the array, thus traversing only the leaf rcu_node structures.
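
For example, a hypothetical debugging function might use rcu_for_each_leaf_node() to print the range of CPUs served by each leaf rcu_node structure:

  static void print_leaf_ranges(struct rcu_state *rsp)
  {
          struct rcu_node *rnp;

          rcu_for_each_leaf_node(rsp, rnp)
                  pr_info("leaf rcu_node covers CPUs %d-%d\n",
                          rnp->grplo, rnp->grphi);
  }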

Quick Quiz 12: What do rcu_for_each_nonleaf_node_breadth_first() and rcu_for_each_leaf_node() do if the rcu_node tree contains only a single node?
Answer

Locking Hierarchy

The following is a schematic of RCU's locking hierarchy:

lockhierarchy.png

Note that the pi->lock and rq->lock are scheduler locks rather than RCU locks. However, given that the scheduler invokes RCU with these locks held, and given that RCU invokes portions of the scheduler that acquire these locks, it is important that RCU treat these scheduler locks with as much care and attention as it does its own locks. And yes, I did find out about this the hard way!

Summary

So each flavor of RCU is represented by an rcu_state structure, which contains a combining tree of rcu_node and rcu_data structures. Finally, in CONFIG_NO_HZ kernels, each CPU's dyntick-idle state is tracked by an rcu_dynticks structure. If you made it this far, you are well prepared to read the code walkthroughs in the other articles in this series.

Acknowledgments

I owe thanks to Cyrill Gorcunov, Mathieu Desnoyers, Dhaval Giani, Paul Turner, Abhishek Srivastava, Matt Kowalczyk, Serge Hallyn, @@@ for helping me get this document into a more human-readable state.

Legal Statement

This work represents the view of the author and does not necessarily represent the view of IBM.

Linux is a registered trademark of Linus Torvalds.

Other company, product, and service names may be trademarks or service marks of others.

Answers to Quick Quizzes

Quick Quiz 1: Why isn't the fanout at the leaves also 64?

Answer: Because there are more types of events that affect the leaf-level rcu_node structures than further up the tree. Therefore, if the leaf rcu_node structures have a fanout of 64, the contention on these structures' ->lock fields becomes excessive. Experimentation on a wide variety of systems has shown that a fanout of 16 works well for the leaves of the rcu_node tree.

Of course, further experience with systems having hundreds or thousands of CPUs may demonstrate that the fanout for the non-leaf rcu_node structures must also be reduced. Such reduction can be easily carried out when and if it proves necessary. In the meantime, if you are using such a system and running into contention problems on the non-leaf rcu_node structures, you may use the CONFIG_RCU_FANOUT kernel configuration parameter to reduce the non-leaf fanout as needed.

Kernels built for systems with strong NUMA characteristics might also need to adjust CONFIG_RCU_FANOUT so that the domains of the rcu_node structures align with hardware boundaries. However, there has thus far been no need for this.

Back to Quick Quiz 1.

Quick Quiz 2: Wait a minute! You said that the rcu_node structures formed a tree, but they are declared as a flat array! What gives?

Answer: The tree is laid out in the array. The first node in the array is the head, the next set of nodes in the array are children of the head node, and so on until the last set of nodes in the array are the leaves.

See the following diagrams to see how this works.

Back to Quick Quiz 2.

Quick Quiz 3: Given that this array represents a tree, why can't the diagram that includes the ->level array be planar?

Answer: It can be planar; it just looks uglier that way. But don't take my word for it, draw it yourself!

But if you draw the tree to be tree-shaped rather than array-shaped, it is easy to draw a planar representation:

TreeLevel.png

The above diagram also makes it easy to see how the ->levelcnt array works: element zero contains the value 1 (for the root rcu_node structure), element one contains the number of rcu_node structures on the second level of the tree, element two contains the number of rcu_node structures on the third level of the tree, and element three contains the number of rcu_data structures, which in turn is equal to NR_CPUS.

Back to Quick Quiz 3.

Quick Quiz 4: How can the RCU implementation possibly be correct if most of it ignores events as profound as CPUs appearing and disappearing???

Answer: Very easily. The bulk of the code implementing RCU treats a CPU going offline in exactly the same way that it would treat that CPU having entered any other type of quiescent state, so that there is no need to specially treat CPU-hotplug events, and thus no reason to acquire this global lock. There is also a bitmask that RCU uses to track online CPUs, and this bitmask is updated upon each CPU-hotplug event. This bitmask is referred to when initializing for a new RCU grace period.

As a result, only grace-period initialization (both normal and expedited) needs to synchronize with CPU-hotplug events. And part of that synchronization is the fact that force_quiescent_state() will resolve any races that can cause RCU to think that a CPU is online when in fact it is not.

And sorry, but no, RCU cannot treat CPU-hotplug events as occurring atomically. For more information on this topic, see the article in this series that covers how RCU handles CPU-hotplug events.

Back to Quick Quiz 4.

Quick Quiz 5: If there is no active grace period, why was force_quiescent_state() invoked in the first place???

Answer: Indeed, normally force_quiescent_state() would not be invoked if there was no active grace period. However, it is possible for the grace period to come to an end between the time that ->jiffies_force_qs is checked against the current value of the jiffies counter and the time that force_quiescent_state() acquires the lock. In this case, the ->n_force_qs_ngp counter will be incremented.

Unless of course a new grace period starts during that time. Which can happen...

Back to Quick Quiz 5.

Quick Quiz 6: Why are these bitmasks protected by locking? Come on, haven't you heard of atomic instructions???

Answer: Lockless grace-period computation! Such a tantalizing possibility!

But consider the following sequence of events:

  1. CPU 0 has been in dyntick-idle mode for quite some time. When it wakes up, it notices that the current RCU grace period needs it to report in, so it sets a flag where the scheduling-clock interrupt will find it.
  2. Meanwhile, CPU 1 is running force_quiescent_state(), and notices that CPU 0 has been in dyntick-idle mode, which qualifies as an extended quiescent state.
  3. CPU 0's scheduling-clock interrupt fires in the middle of an RCU read-side critical section, and notices that the RCU core needs something, so commences RCU softirq processing.
  4. CPU 0's softirq handler executes and is just about ready to report its quiescent state up the rcu_node tree.
  5. But CPU 1 beats it to the punch, completing the current grace period and starting a new one.
  6. CPU 0 now reports its quiescent state for the wrong grace period. That grace period might now end before the RCU read-side critical section. If that happens, disaster will ensue.

So the locking is absolutely required in order to coordinate clearing of the bits with the grace-period numbers in ->gpnum and ->completed.

Back to Quick Quiz 6.

Quick Quiz 7: But ->wakemask is only 32 bits wide, while the ->qsmask, ->expmask, and ->qsmaskinit fields can be up to 64 bits wide. Just how is this supposed to work on 64-bit systems???

Answer: The ->wakemask field is used only by leaf-level rcu_node structures, where the fanout is limited to 16. Therefore, 32 bits not only suffices, but is actually twice as large as necessary.

Back to Quick Quiz 7.

Quick Quiz 8: All this replication of the grace period numbers can only cause massive confusion. Why not just keep a global pair of counters and be done with it???

Answer: Because if there were only a single global pair of grace-period numbers, there would need to be a single global lock to allow safely accessing and updating them. And if we are not going to have a single global lock, we need to carefully manage the numbers on a per-node basis. Recall from the answer to a previous Quick Quiz that the consequences of applying a previously sampled quiescent state to the wrong grace period are quite severe.

Back to Quick Quiz 8.

Quick Quiz 9: Why not just count all NMIs? Wouldn't that be simpler and less error prone?

Answer: It seems simpler only until you think hard about how to go about updating the rcu_dynticks structure's ->dynticks field.

Back to Quick Quiz 9.

Quick Quiz 10: Given that the callback function ->func is passed a pointer to the rcu_head structure, how is that function supposed to find the beginning of the enclosing RCU-protected data structure?

Answer: In actual practice, there is a separate callback function per type of RCU-protected data structure. The callback function can therefore use the container_of() macro in the Linux kernel (or other pointer-manipulation facilities in other software environments) to find the beginning of the enclosing structure.
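
For example, a hypothetical RCU-protected struct foo might embed its rcu_head and free itself as follows:

  struct foo {
          int a;
          struct rcu_head rh;
  };

  static void foo_rcu_free(struct rcu_head *head)
  {
          struct foo *fp = container_of(head, struct foo, rh);

          kfree(fp);      /* fp points to the enclosing struct foo */
  }

An updater would then pass &fp->rh and foo_rcu_free to call_rcu() in order to free an old version of a struct foo once a grace period has elapsed.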

Back to Quick Quiz 10.

Quick Quiz 11: Why is ->rcu_boosted required, given that there is a RCU_READ_UNLOCK_BOOSTED bit in ->rcu_read_unlock_special?

Answer: The ->rcu_read_unlock_special field may only be updated by the task itself. By definition, RCU priority boosting must be carried out by some other task. This other task cannot safely update the boosted task's ->rcu_read_unlock_special field without the use of expensive atomic instructions. The ->rcu_boosted field is therefore used by the boosting task to let the boosted task know that it has been boosted. The boosted task makes use of the RCU_READ_UNLOCK_BOOSTED bit in ->rcu_read_unlock_special when deboosting itself.

Back to Quick Quiz 11.

Quick Quiz 12: What do rcu_for_each_nonleaf_node_breadth_first() and rcu_for_each_leaf_node() do if the rcu_node tree contains only a single node?

Answer: In the single-node case, rcu_for_each_nonleaf_node_breadth_first() is a no-op and rcu_for_each_leaf_node() traverses the single node.

Back to Quick Quiz 12.