xref: /xnu-8796.121.2/doc/atomics.md (revision c54f35ca767986246321eb901baf8f5ff7923f6a)
XNU use of Atomics and Memory Barriers
======================================

Goal
----

This document discusses the use of atomics and memory barriers in XNU. It is
meant as a guide to best practices, and warns against a variety of possible
pitfalls in the handling of atomics in C.

It is assumed that the reader has a decent understanding of
the [C11 memory model](https://en.cppreference.com/w/c/atomic/memory_order)
as this document builds on it, and explains the liberties XNU takes with said
model.

All the interfaces discussed in this document are available through
the `<os/atomic_private.h>` header.

Note: Linux has thorough documentation around memory barriers
(Documentation/memory-barriers.txt), some of which is Linux specific,
but most is not and is a valuable read.


Vocabulary
----------

In the rest of this document we'll refer to the various memory orderings defined
by C11 as relaxed, consume, acquire, release, acq\_rel and seq\_cst.

`os_atomic` also tries to make the distinction between compiler **barriers**
(which limit how much the compiler can reorder code), and memory **fences**.


The dangers and pitfalls of C11's `<stdatomic.h>`
-------------------------------------------------

While the C11 memory model has likely been one of the most important additions
to modern C, in the purest C tradition, it is a sharp tool.

By default, C11 comes with two variants of each atomic "operation":

- an *explicit* variant where memory orderings can be specified,
- a regular variant which is equivalent to the former with the *seq_cst*
  memory ordering.

When an `_Atomic` qualified variable is accessed directly without using
any `atomic_*_explicit()` operation, then the compiler will generate the
matching *seq_cst* atomic operations on your behalf.
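As a portable illustration (plain C11, not an XNU interface), both of the
following functions perform the same sequentially consistent read-modify-write;
the direct access merely hides the ordering:

```c
#include <stdatomic.h>

static _Atomic int counter;

int
bump_direct(void)
{
    /* Direct access: the compiler emits a seq_cst atomic increment. */
    return ++counter;
}

int
bump_explicit(void)
{
    /* The same operation, with the ordering spelled out. */
    return atomic_fetch_add_explicit(&counter, 1,
               memory_order_seq_cst) + 1;
}
```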

The sequentially consistent world is extremely safe from a lot of compiler
and hardware reorderings and optimizations, which is great, but comes with
a huge cost in terms of memory barriers.


It seems very tempting to use `atomic_*_explicit()` functions with explicit
memory orderings; however, the compiler is entitled to perform a number of
optimizations with relaxed atomics that most developers will not expect.
Indeed, the compiler is perfectly allowed to perform various optimizations it
does with other plain memory accesses, such as coalescing, reordering, or
hoisting out of loops.

For example, when the compiler can know what `doit` is doing (which due to LTO
is almost always the case for XNU), it is allowed to transform this code:

```c
    void
    perform_with_progress(int steps, long _Atomic *progress)
    {
        for (int i = 0; i < steps; i++) {
            doit(i);
            atomic_store_explicit(progress, i, memory_order_relaxed);
        }
    }
```

Into this, which obviously defeats the entire purpose of `progress`:

```c
    void
    perform_with_progress(int steps, long _Atomic *progress)
    {
        for (int i = 0; i < steps; i++) {
            doit(i);
        }
        atomic_store_explicit(progress, steps, memory_order_relaxed);
    }
```
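The `os_atomic_*` interfaces (described below) defeat this optimization by
performing the access as if the location were `volatile` qualified. The same
effect can be sketched in portable C11; the `volatile` cast is the illustrative
point here, not XNU's literal implementation:

```c
#include <stdatomic.h>

static int doit_calls;

static void
doit(int i)
{
    (void)i;
    doit_calls++;
}

void
perform_with_progress(int steps, long _Atomic *progress)
{
    for (int i = 0; i < steps; i++) {
        doit(i);
        /* The volatile qualification forbids the compiler from
         * coalescing these stores: one is emitted per iteration. */
        atomic_store_explicit((long volatile _Atomic *)progress, i,
            memory_order_relaxed);
    }
}
```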


How `os_atomic_*` tries to address `<stdatomic.h>` pitfalls
-----------------------------------------------------------

1. the memory locations passed to the various `os_atomic_*`
   functions do not need to be marked `_Atomic` or `volatile`
   (or `_Atomic volatile`), which allows for use of atomic
   operations in code written before C11 was even a thing.

   It is however recommended in new code to use the `_Atomic`
   specifier.

2. `os_atomic_*` cannot be coalesced by the compiler:
   all accesses are performed on the specified locations
   as if their type was `_Atomic volatile` qualified.

3. `os_atomic_*` only comes with the explicit variants:
   orderings must be provided, and can express either memory orders
   (named as in C11, without the `memory_order_` prefix) or a
   compiler barrier ordering: `compiler_acquire`, `compiler_release`,
   `compiler_acq_rel`.

4. `os_atomic_*` emits the proper compiler barriers that
   correspond to the requested memory ordering (using
   `atomic_signal_fence()`).


Best practices for the use of atomics in XNU
--------------------------------------------

For most generic code, the `os_atomic_*` functions from
`<os/atomic_private.h>` are the preferred interfaces.

`__sync_*`, `__c11_*` and `__atomic_*` compiler builtins should not be used.

`<stdatomic.h>` functions may be used if:

- compiler coalescing / reordering is desired (refcounting
  implementations may desire this for example).


Qualifying atomic variables with `_Atomic` or even
`_Atomic volatile` is encouraged, however authors must
be aware that a direct access to such a variable will
result in quite heavy memory barriers.

The *consume* memory ordering should not be used
(see the *dependency* memory order later in this document).

**Note**: `<libkern/OSAtomic.h>` provides a bunch of legacy
atomic interfaces, but this header is considered obsolete
and these functions should not be used in new code.


High level overview of `os_atomic_*` interfaces
-----------------------------------------------

### Compiler barriers and memory fences

`os_compiler_barrier(mem_order?)` provides a compiler barrier,
with an optional barrier ordering. It is implemented with C11's
`atomic_signal_fence()`. The barrier ordering argument is optional
and defaults to the `acq_rel` compiler barrier (which prevents the
compiler from reordering code in any direction around this barrier).

`os_atomic_thread_fence(mem_order)` provides a memory barrier
according to the semantics of `atomic_thread_fence()`. It always
implies the equivalent `os_compiler_barrier()`, even on UP systems.
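A fence lets the ordering live apart from the accesses themselves. This
portable C11 sketch pairs two relaxed accesses with explicit fences; the
equivalent XNU calls would be `os_atomic_thread_fence(release)` and
`os_atomic_thread_fence(acquire)`:

```c
#include <stdatomic.h>
#include <stdbool.h>

static int payload;
static _Atomic int ready;

void
publish_with_fence(int value)
{
    payload = value;
    /* Orders the payload store before the flag store. */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

bool
consume_with_fence(int *value)
{
    if (atomic_load_explicit(&ready, memory_order_relaxed)) {
        /* Orders the flag load before the payload load. */
        atomic_thread_fence(memory_order_acquire);
        *value = payload;
        return true;
    }
    return false;
}
```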

### Init, load and store

`os_atomic_init`, `os_atomic_load` and `os_atomic_store` provide
facilities equivalent to `atomic_init`, `atomic_load_explicit`
and `atomic_store_explicit` respectively.

Note that `os_atomic_load` and `os_atomic_store` promise that they will
compile to a plain load or store. `os_atomic_load_wide` and
`os_atomic_store_wide` can be used to get access to atomic loads and stores
that involve more costly codegen (such as compare exchange loops).

### Basic RMW (read/modify/write) atomic operations

The following basic atomic RMW operations exist:

- `inc`: atomic increment (equivalent to an atomic add of `1`),
- `dec`: atomic decrement (equivalent to an atomic sub of `1`),
- `add`: atomic add,
- `sub`: atomic sub,
- `or`: atomic bitwise or,
- `xor`: atomic bitwise xor,
- `and`: atomic bitwise and,
- `andnot`: atomic bitwise andnot (equivalent to an atomic and of `~value`),
- `min`: atomic min,
- `max`: atomic max.

For any such operation, two variants exist:

- `os_atomic_${op}_orig` (for example `os_atomic_add_orig`)
  which returns the value stored at the specified location
  *before* the atomic operation took place,
- `os_atomic_${op}` (for example `os_atomic_add`) which
  returns the value stored at the specified location
  *after* the atomic operation took place.

This convention is picked for two reasons:

1. `os_atomic_add(p, value, ...)` is essentially equivalent to the C
   in place addition `(*p += value)`, which returns the result of the
   operation and not the original value of `*p`.

2. Most subtle atomic algorithms do actually require the original value
   stored at the location, especially for bit manipulations:
   `(os_atomic_or_orig(p, bit, relaxed) & bit)` will atomically perform
   `*p |= bit` but also tell you whether `bit` was set in the original value.

   Making it more explicit that the original value is used is hence
   important for readers and worth the extra five keystrokes.

Typically:

```c
    static int _Atomic i = 0;

    printf("%d\n", os_atomic_inc_orig(&i)); // prints 0
    printf("%d\n", os_atomic_inc(&i)); // prints 2
```
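The bit-manipulation pattern from point 2 above can be written with the C11
equivalent of `os_atomic_or_orig` (`MY_FLAG` is a hypothetical flag bit,
picked for illustration):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define MY_FLAG 0x4u    /* hypothetical flag bit */

/* Atomically sets MY_FLAG, and reports whether it was already set. */
bool
test_and_set_flag(unsigned _Atomic *p)
{
    return (atomic_fetch_or_explicit(p, MY_FLAG,
        memory_order_relaxed) & MY_FLAG) != 0;
}
```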

### Atomic swap / compare and swap

`os_atomic_xchg` is a simple wrapper around `atomic_exchange_explicit`.

There are two variants of `os_atomic_cmpxchg`, which are wrappers around
`atomic_compare_exchange_strong_explicit`. Both of these variants will
return false/0 if the compare exchange failed, and true/1 if the expected
value was found at the specified location and the new value was stored.

1. `os_atomic_cmpxchg(address, expected, new_value, mem_order)`, which
   will atomically store `new_value` at `address` if the current value
   is equal to `expected`.

2. `os_atomic_cmpxchgv(address, expected, new_value, orig_value, mem_order)`,
   which has an extra `orig_value` argument which must be a pointer to a local
   variable, and will be filled with the current value at `address` whether the
   compare exchange was successful or not. In case of success, the loaded value
   will always be `expected`; however, in case of failure it will be filled with
   the current value, which is helpful to redrive compare exchange loops.
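In portable C11 terms, the contract of `os_atomic_cmpxchgv` can be sketched as
follows (an illustration of the semantics, not XNU's implementation; `acquire`
stands in for whatever single ordering the caller picked):

```c
#include <stdatomic.h>
#include <stdbool.h>

bool
cmpxchgv_sketch(int _Atomic *address, int expected, int new_value,
    int *orig_value)
{
    int e = expected;
    bool ok = atomic_compare_exchange_strong_explicit(address, &e,
        new_value, memory_order_acquire, memory_order_relaxed);
    /* `expected` on success, the current value on failure. */
    *orig_value = e;
    return ok;
}
```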

Unlike `atomic_compare_exchange_strong_explicit`, a single ordering is
specified, which only takes effect in case of a successful compare exchange.
In C11 speak, `os_atomic_cmpxchg*` always specifies `memory_order_relaxed`
for the failure case ordering, as it is what is used most of the time.

There is no wrapper around `atomic_compare_exchange_weak_explicit`,
as `os_atomic_rmw_loop` offers a much better alternative for CAS-loops.

### `os_atomic_rmw_loop`

This expressive and versatile construct allows for much terser and more
readable compare exchange loops. It also uses LL/SC constructs more
efficiently than a compare exchange loop would allow.

Instead of a typical CAS-loop in C11:

```c
    int _Atomic *address;
    int old_value, new_value;
    bool success = false;

    old_value = atomic_load_explicit(address, memory_order_relaxed);
    do {
        if (!validate(old_value)) {
            break;
        }
        new_value = compute_new_value(old_value);
        success = atomic_compare_exchange_weak_explicit(address, &old_value,
                new_value, memory_order_acquire, memory_order_relaxed);
    } while (__improbable(!success));
```

`os_atomic_rmw_loop` allows this form:

```c
    int _Atomic *address;
    int old_value, new_value;
    bool success;

    success = os_atomic_rmw_loop(address, old_value, new_value, acquire, {
        if (!validate(old_value)) {
            os_atomic_rmw_loop_give_up(break);
        }
        new_value = compute_new_value(old_value);
    });
```

Unlike the C11 variant, it lets the reader know in program order that this will
be a CAS loop, and exposes the ordering upfront, while for traditional CAS loops
one has to jump to the end of the code to understand what it does.

Any control flow that attempts to exit the scope of the loop needs to be
wrapped with `os_atomic_rmw_loop_give_up` (so that LL/SC architectures can
abort their open LL/SC transaction).

Because these loops are LL/SC transactions, it is undefined to perform
any store to memory (register operations are fine) within these loops,
as these may cause the store-conditional to always fail.
In particular, nesting of `os_atomic_rmw_loop` is invalid.

Use of `continue` within an `os_atomic_rmw_loop` is also invalid; instead, an
`os_atomic_rmw_loop_give_up(goto again)` jumping to an `again:` label placed
before the loop should be used, in this way:

```c
    int _Atomic *address;
    int old_value, new_value;
    bool success;

again:
    success = os_atomic_rmw_loop(address, old_value, new_value, acquire, {
        if (needs_some_store_that_can_thwart_the_transaction(old_value)) {
            os_atomic_rmw_loop_give_up({
                // Do whatever you need to do/store to central memory
                // that would cause the loop to always fail
                do_my_rmw_loop_breaking_store();

                // And only then redrive.
                goto again;
            });
        }
        if (!validate(old_value)) {
            os_atomic_rmw_loop_give_up(break);
        }
        new_value = compute_new_value(old_value);
    });
```

### The *dependency* memory order

Because the C11 *consume* memory order is broken in various ways,
most compilers, clang included, implement it as equivalent
to `memory_order_acquire`. However, the concept is useful
for certain algorithms.

As an attempt to provide a replacement for this, `<os/atomic_private.h>`
implements an entirely new *dependency* memory ordering.

The purpose of this ordering is to provide a relaxed load followed by an
implicit compiler barrier, that can be used as a root for a chain of hardware
dependencies that would otherwise pair with store-releases done at this address,
very much like the *consume* memory order is intended to provide.

However, unlike the *consume* memory ordering, where the compiler has to follow
the dependencies, the *dependency* memory ordering relies on explicit
annotations of where the dependencies are expected:

- loads through a pointer loaded with a *dependency* memory ordering
  will provide a hardware dependency,

- dependencies may be injected into other loads not performed through this
  particular pointer with the `os_atomic_load_with_dependency_on` and
  `os_atomic_inject_dependency` interfaces.

Here is an example of how it is meant to be used:

```c
    struct foo {
        long value;
        long _Atomic flag;
    };

    void
    publish(struct foo *p, long value)
    {
        p->value = value;
        os_atomic_store(&p->flag, 1, release);
    }


    bool
    broken_read(struct foo *p, long *value)
    {
        /*
         * This isn't safe, as there's absolutely no hardware dependency involved.
         * Using an acquire barrier would of course fix it but is quite expensive...
         */
        if (os_atomic_load(&p->flag, relaxed)) {
            *value = p->value;
            return true;
        }
        return false;
    }

    bool
    valid_read(struct foo *p, long *value)
    {
        long flag = os_atomic_load(&p->flag, dependency);
        if (flag) {
            /*
             * Further the chain of dependency to any loads through `p`
             * which properly pair with the release barrier in `publish`.
             */
            *value = os_atomic_load_with_dependency_on(&p->value, flag);
            return true;
        }
        return false;
    }
```

There are four interfaces involved with hardware dependencies:

1. `os_atomic_load(..., dependency)` to initiate roots of hardware dependencies,
   which should pair with a store or RMW with release semantics or stronger
   (release, acq\_rel or seq\_cst),

2. `os_atomic_inject_dependency`, which can be used to inject the dependency
   provided by a *dependency* load, or any other value that has had a
   dependency injected,

3. `os_atomic_load_with_dependency_on` to perform an otherwise relaxed load
   that still prolongs a dependency chain,

4. `os_atomic_make_dependency` to create an opaque token out of a given
   dependency root, to inject into multiple loads.


**Note**: this technique is NOT safe when the compiler can reason about the
pointers that you are manipulating: for example, if the compiler can prove
that a pointer can only take a couple of values, it may ditch all these
manually crafted dependency chains. Hopefully a future C2Y standard will
provide a similar construct as a language feature instead.
418