XNU use of Atomics and Memory Barriers
======================================

Goal
----

This document discusses the use of atomics and memory barriers in XNU. It is
meant as a guide to best practices, and warns against a variety of possible
pitfalls in the handling of atomics in C.

It is assumed that the reader has a decent understanding of
the [C11 memory model](https://en.cppreference.com/w/c/atomic/memory_order)
as this document builds on it, and explains the liberties XNU takes with said
model.

All the interfaces discussed in this document are available through
the `<os/atomic_private.h>` header.

Note: Linux has thorough documentation around memory barriers
(Documentation/memory-barriers.txt); some of it is Linux specific,
but most of it is not, and it is a valuable read.


Vocabulary
----------

In the rest of this document we'll refer to the various memory orderings defined
by C11 as relaxed, consume, acquire, release, acq\_rel and seq\_cst.

`os_atomic` also tries to make the distinction between compiler **barriers**
(which limit how much the compiler can reorder code), and memory **fences**.


The dangers and pitfalls of C11's `<stdatomic.h>`
-------------------------------------------------

While the C11 memory model has likely been one of the most important additions
to modern C, in the purest C tradition, it is a sharp tool.

By default, C11 comes with two variants of each atomic "operation":

- an *explicit* variant where memory orderings can be specified,
- a regular variant which is equivalent to the former with the *seq_cst*
  memory ordering.

When an `_Atomic` qualified variable is accessed directly without using
any `atomic_*_explicit()` operation, the compiler will generate the
matching *seq_cst* atomic operations on your behalf.

The sequentially consistent world is extremely safe from a lot of compiler
and hardware reorderings and optimizations, which is great, but comes with
a huge cost in terms of memory barriers.


It seems very tempting to use the `atomic_*_explicit()` functions with explicit
memory orderings; however, the compiler is entitled to perform a number of
optimizations with relaxed atomics that most developers will not expect.
Indeed, the compiler is perfectly allowed to perform the same optimizations it
does with other plain memory accesses, such as coalescing, reordering, hoisting
out of loops, etc.

For example, when the compiler can know what `doit` is doing (which due to LTO
is almost always the case for XNU), it is allowed to transform this code:

```c
    void
    perform_with_progress(int steps, long _Atomic *progress)
    {
        for (int i = 0; i < steps; i++) {
            doit(i);
            atomic_store_explicit(progress, i, memory_order_relaxed);
        }
    }
```

Into this, which obviously defeats the entire purpose of `progress`:

```c
    void
    perform_with_progress(int steps, long _Atomic *progress)
    {
        for (int i = 0; i < steps; i++) {
            doit(i);
        }
        if (steps > 0) {
            atomic_store_explicit(progress, steps - 1, memory_order_relaxed);
        }
    }
```


How `os_atomic_*` tries to address `<stdatomic.h>` pitfalls
-----------------------------------------------------------

1. the memory locations passed to the various `os_atomic_*`
   functions do not need to be marked `_Atomic` or `volatile`
   (or `_Atomic volatile`), which allows for the use of atomic
   operations in code written before C11 was even a thing.

   It is however recommended to use the `_Atomic`
   specifier in new code.

2. `os_atomic_*` accesses cannot be coalesced by the compiler:
   all accesses are performed on the specified locations
   as if their type was `_Atomic volatile` qualified.

3. `os_atomic_*` only comes with the explicit variants:
   orderings must be provided, and can express either memory orders,
   where the name is the same as in C11 without the `memory_order_` prefix,
   or a compiler barrier ordering: `compiler_acquire`, `compiler_release`,
   `compiler_acq_rel`.

4. `os_atomic_*` emits the proper compiler barriers that
   correspond to the requested memory ordering (using
   `atomic_signal_fence()`).


Best practices for the use of atomics in XNU
--------------------------------------------

For most generic code, the `os_atomic_*` functions from
`<os/atomic_private.h>` are the preferred interfaces.

The `__sync_*`, `__c11_*` and `__atomic_*` compiler builtins should not be used.

`<stdatomic.h>` functions may be used if:

- compiler coalescing / reordering is desired (refcounting
  implementations may desire this, for example).


Qualifying atomic variables with `_Atomic` or even
`_Atomic volatile` is encouraged; however, authors must
be aware that a direct access to such a variable will
result in quite heavy memory barriers.

The *consume* memory ordering should not be used
(see the *dependency* memory order later in this document).

**Note**: `<libkern/OSAtomic.h>` provides a number of legacy
atomic interfaces, but this header is considered obsolete
and these functions should not be used in new code.


High level overview of `os_atomic_*` interfaces
-----------------------------------------------

### Compiler barriers and memory fences

`os_compiler_barrier(mem_order?)` provides a compiler barrier,
with an optional barrier ordering. It is implemented with C11's
`atomic_signal_fence()`. The barrier ordering argument is optional
and defaults to the `acq_rel` compiler barrier (which prevents the
compiler from reordering code in any direction around this barrier).

`os_atomic_thread_fence(mem_order)` provides a memory barrier
according to the semantics of `atomic_thread_fence()`. It always
implies the equivalent `os_compiler_barrier()`, even on UP systems.

### Init, load and store

`os_atomic_init`, `os_atomic_load` and `os_atomic_store` provide
facilities equivalent to `atomic_init`, `atomic_load_explicit`
and `atomic_store_explicit` respectively.

Note that `os_atomic_load` and `os_atomic_store` promise that they will
compile to a plain load or store. `os_atomic_load_wide` and
`os_atomic_store_wide` can be used to get access to atomic loads and stores
that involve more costly codegen (such as compare exchange loops).

### Basic RMW (read/modify/write) atomic operations

The following basic atomic RMW operations exist:

- `inc`: atomic increment (equivalent to an atomic add of `1`),
- `dec`: atomic decrement (equivalent to an atomic sub of `1`),
- `add`: atomic add,
- `sub`: atomic sub,
- `or`: atomic bitwise or,
- `xor`: atomic bitwise xor,
- `and`: atomic bitwise and,
- `andnot`: atomic bitwise andnot (equivalent to an atomic and of `~value`),
- `min`: atomic min,
- `max`: atomic max.

For any such operation, two variants exist:

- `os_atomic_${op}_orig` (for example `os_atomic_add_orig`),
  which returns the value stored at the specified location
  *before* the atomic operation took place,
- `os_atomic_${op}` (for example `os_atomic_add`), which
  returns the value stored at the specified location
  *after* the atomic operation took place.

This convention is picked for two reasons:

1. `os_atomic_add(p, value, ...)` is essentially equivalent to the C
   in-place addition `(*p += value)`, which returns the result of the
   operation and not the original value of `*p`.

2. Most subtle atomic algorithms do actually require the original value
   stored at the location, especially for bit manipulations:
   `(os_atomic_or_orig(p, bit, relaxed) & bit)` will atomically perform
   `*p |= bit` but also tell you whether `bit` was set in the original value.

   Making it more explicit that the original value is used is hence
   important for readers and worth the extra five keystrokes.

Typically:

```c
    static int _Atomic i = 0;

    printf("%d\n", os_atomic_inc_orig(&i, relaxed)); // prints 0
    printf("%d\n", os_atomic_inc(&i, relaxed)); // prints 2
```

### Atomic swap / compare and swap

`os_atomic_xchg` is a simple wrapper around `atomic_exchange_explicit`.

There are two variants of `os_atomic_cmpxchg`, which are wrappers around
`atomic_compare_exchange_strong_explicit`. Both of these variants will
return false/0 if the compare exchange failed, and true/1 if the expected
value was found at the specified location and the new value was stored.

1. `os_atomic_cmpxchg(address, expected, new_value, mem_order)`, which
   will atomically store `new_value` at `address` if the current value
   is equal to `expected`.

2. `os_atomic_cmpxchgv(address, expected, new_value, orig_value, mem_order)`,
   which has an extra `orig_value` argument that must be a pointer to a local
   variable; it will be filled with the current value at `address`, whether the
   compare exchange was successful or not. In case of success, the loaded value
   will always be `expected`; in case of failure it will be filled with the
   current value, which is helpful to redrive compare exchange loops.

Unlike `atomic_compare_exchange_strong_explicit`, a single ordering is
specified, which only takes effect in case of a successful compare exchange.
In C11 speak, `os_atomic_cmpxchg*` always specifies `memory_order_relaxed`
for the failure case ordering, as that is what is used most of the time.

There is no wrapper around `atomic_compare_exchange_weak_explicit`,
as `os_atomic_rmw_loop` offers a much better alternative for CAS-loops.

### `os_atomic_rmw_loop`

This expressive and versatile construct allows for really terse and
much more readable compare exchange loops. It also uses LL/SC constructs more
efficiently than a compare exchange loop would allow.

Instead of a typical CAS-loop in C11:

```c
    int _Atomic *address;
    int old_value, new_value;
    bool success = false;

    old_value = atomic_load_explicit(address, memory_order_relaxed);
    do {
        if (!validate(old_value)) {
            break;
        }
        new_value = compute_new_value(old_value);
        success = atomic_compare_exchange_weak_explicit(address, &old_value,
                new_value, memory_order_acquire, memory_order_relaxed);
    } while (__improbable(!success));
```

`os_atomic_rmw_loop` allows this form:

```c
    int _Atomic *address;
    int old_value, new_value;
    bool success;

    success = os_atomic_rmw_loop(address, old_value, new_value, acquire, {
        if (!validate(old_value)) {
            os_atomic_rmw_loop_give_up(break);
        }
        new_value = compute_new_value(old_value);
    });
```

Unlike the C11 variant, it lets the reader know in program order that this will
be a CAS loop, and exposes the ordering upfront, while for traditional CAS loops
one has to jump to the end of the code to understand what it does.

Any control flow that attempts to exit the scope of the loop needs to be
wrapped with `os_atomic_rmw_loop_give_up` (so that LL/SC architectures can
abort their opened LL/SC transaction).

Because these loops are LL/SC transactions, it is undefined to perform
any store to memory (register operations are fine) within these loops,
as such stores may cause the store-conditional to always fail.
In particular, nesting of `os_atomic_rmw_loop` is invalid.

Use of `continue` within an `os_atomic_rmw_loop` is also invalid; instead, an
`os_atomic_rmw_loop_give_up(goto again)` jumping to an `again:` label placed
before the loop should be used, in this way:

```c
    int _Atomic *address;
    int old_value, new_value;
    bool success;

again:
    success = os_atomic_rmw_loop(address, old_value, new_value, acquire, {
        if (needs_some_store_that_can_thwart_the_transaction(old_value)) {
            os_atomic_rmw_loop_give_up({
                // Do whatever you need to do/store to central memory
                // that would cause the loop to always fail
                do_my_rmw_loop_breaking_store();

                // And only then redrive.
                goto again;
            });
        }
        if (!validate(old_value)) {
            os_atomic_rmw_loop_give_up(break);
        }
        new_value = compute_new_value(old_value);
    });
```

### The *dependency* memory order

Because the C11 *consume* memory order is broken in various ways,
most compilers, clang included, implement it as an equivalent
of `memory_order_acquire`. However, its concept is useful
for certain algorithms.

As an attempt to provide a replacement for this, `<os/atomic_private.h>`
implements an entirely new *dependency* memory ordering.

The purpose of this ordering is to provide a relaxed load followed by an
implicit compiler barrier, which can be used as the root of a chain of hardware
dependencies that pairs with store-releases done at this address,
very much like the *consume* memory order is intended to provide.

However, unlike the *consume* memory ordering, where the compiler has to track
the dependencies, the *dependency* memory ordering relies on explicit
annotations of where the dependencies are expected:

- loads through a pointer loaded with a *dependency* memory ordering
  will provide a hardware dependency,

- dependencies may be injected into other loads not performed through this
  particular pointer with the `os_atomic_load_with_dependency_on` and
  `os_atomic_inject_dependency` interfaces.

Here is an example of how it is meant to be used:

```c
    struct foo {
        long value;
        long _Atomic flag;
    };

    void
    publish(struct foo *p, long value)
    {
        p->value = value;
        os_atomic_store(&p->flag, 1, release);
    }


    bool
    broken_read(struct foo *p, long *value)
    {
        /*
         * This isn't safe, as there's absolutely no hardware dependency involved.
         * Using an acquire barrier would of course fix it but is quite expensive...
         */
        if (os_atomic_load(&p->flag, relaxed)) {
            *value = p->value;
            return true;
        }
        return false;
    }

    bool
    valid_read(struct foo *p, long *value)
    {
        long flag = os_atomic_load(&p->flag, dependency);
        if (flag) {
            /*
             * Further the chain of dependency to any loads through `p`
             * which properly pair with the release barrier in `publish`.
             */
            *value = os_atomic_load_with_dependency_on(&p->value, flag);
            return true;
        }
        return false;
    }
```

There are four interfaces involved with hardware dependencies:

1. `os_atomic_load(..., dependency)` to initiate roots of hardware dependencies;
   it should pair with a store or RMW with release semantics or stronger
   (release, acq\_rel or seq\_cst),

2. `os_atomic_inject_dependency`, which can be used to inject the dependency
   provided by a *dependency* load, or by any other value that has had a
   dependency injected,

3. `os_atomic_load_with_dependency_on` to do an otherwise relaxed load
   that still prolongs a dependency chain,

4. `os_atomic_make_dependency` to create an opaque token out of a given
   dependency root to inject into multiple loads.


**Note**: this technique is NOT safe when the compiler can reason about the
pointers that you are manipulating, for example if the compiler can know that
the pointer can only take a couple of values, and can therefore ditch all these
manually crafted dependency chains. Hopefully there will be a future C2Y
standard that provides a similar construct as a language feature instead.
417*fdd8201dSApple OSS Distributionsprovides a similar construct as a language feature instead.
418