/xnu-12377.61.12/doc/observability/
mt_stackshot.md
     8: - **Initiating / Calling CPU**: The CPU which stackshot was called from.
     9: - **Main CPU**: The CPU which populates workqueues and collects global state.
    10: - **Auxiliary CPU**: A CPU which is not the main CPU.
    16: When a stackshot is taken, the initiating CPU (the CPU from which stackshot was
    21: a CPU is derecommended due to thermal limits or otherwise, it will still be
    22: IPI'd into the debugger trap, and we want to avoid overheating the CPU).
    24: On AMP systems, a suitable P-core is chosen to be the “main” CPU, and begins
    26: global state (On SMP systems, the initiating CPU is always assigned to be the
    27: main CPU).
    29: The other CPUs begin chipping away at the queues, and the main CPU joins
    [all …]
|
recount.md
     3: CPU resource accounting interfaces and implementation.
     7: Recount is a resource accounting subsystem in the kernel that tracks the CPU resources consumed by …
     8: It supports attributing counts to a specific level of the CPU topology (per-CPU and per-CPU kind).
    17: …, Recount tracks its counters per-CPU kind (e.g. performance or efficiency) for threads, per-CPU f…
    51: …spects counters in an LLDB session and is generally useful for retrospective analysis of CPU usage.
    52: … each metric as a column and then uses rows for the groupings, like per-CPU or per-CPU kind values.
    56: - `recount thread <thread-ptr> [...]` prints a table of per-CPU kind counts for threads.
    58: - `recount task <task-ptr> [...]` prints a table of per-CPU counts for tasks.
    62: - `recount coalition <coalition-ptr>` prints a table of per-CPU kind counts for each coalition, not…
    77: To count CPU resource usage, a `struct recount_usage` has the following fields:
    [all …]
|
cpu_counters.md
     1: # CPU Counters
     3: The xnu subsystems that manage CPU performance counters.
     7: CPU performance counters are hardware registers that count events of interest to efficient CPU exec…
     8: Counters that measure events closely correlated with each CPU's execution pipeline are managed by t…
    10: … Monitoring Unit (UPMU), which measures effects that aren't necessarily correlated to a single CPU.
    20: There are several subsystems that provide access to CPU counter hardware:
    28: - cpc: The CPU Performance Counter subsystem provides a policy layer on top of kpc and Monotonic to…
    34: - The Recount subsystem makes extensive use of the fixed CPMU counters to attribute CPU resources b…
    41: And CPU counter values can be sampled by kperf on other triggers, like timers or kdebug events.
|
coalitions.md
    27: …CPU usage, energy consumption, I/O etc. The idea is we can make statements like 'Safari is using 5…
    45: …t resource limits like 'this process should use no more than 10 seconds of CPU time in a 20 second…
    63: ### CPU time and energy billing
    65: Through the magic of Mach vouchers, XNU can track CPU time and energy consumed *on behalf of* other…
|
/xnu-12377.61.12/bsd/dev/dtrace/
dtrace_glue.c
    431: (omni->cyo_online)(omni->cyo_arg, CPU, &cH, &cT);    in _cyclic_add_omni()
    467: (omni->cyo_offline)(omni->cyo_arg, CPU, oarg);    in _cyclic_remove_omni()
    961: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = uaddr;    in dtrace_copycheck()
    975: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = src;    in dtrace_copyin()
   1000: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = src;    in dtrace_copyinstr()
   1014: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = dst;    in dtrace_copyout()
   1036: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = dst;    in dtrace_copyoutstr()
   1085: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = uaddr;    in dtrace_fuword8()
   1103: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = uaddr;    in dtrace_fuword16()
   1121: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = uaddr;    in dtrace_fuword32()
   [all …]
|
dtrace.c
    517: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = addr; \
    566: &cpu_core[CPU->cpu_id].cpuc_dtrace_flags; \
    582: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = addr; \
    596: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = addr; \
   1134: volatile uint64_t *illval = &cpu_core[CPU->cpu_id].cpuc_dtrace_illval;    in dtrace_canload_remains()
   1336: flags = (volatile uint16_t *)&cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_strncmp()
   1390: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = kaddr;    in dtrace_istoxic()
   1396: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = taddr;    in dtrace_istoxic()
   1480: flags = (volatile uint16_t *)&cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_bcmp()
   1684: cpu_core[CPU->cpu_id].cpuc_dtrace_flags |= CPU_DTRACE_UPRIV;    in dtrace_priv_proc_destructive()
   [all …]
|
/xnu-12377.61.12/doc/scheduler/
sched_clutch_edge.md
     7: …CPU for latency sensitive workloads (eg. UI interactions, multimedia recording/playback) to starva…
     9: …CPU accounting at the thread level incentivizes creating more threads on the system. Also in the w…
    24: …ementation. The goal of this level is to provide low latency access to the CPU for high QoS classe…
    41: …CPU in the recent past such that they slip behind the lower buckets in deadline order. Now, if a s…
    55: …CPU access for Above UI threads while supporting the use case of high priority timeshare threads c…
    68: …g on behalf of a specific workload. The goal of this level is to share the CPU among various user …
    72: …ued on all clusters on the platform. The clutch bucket group maintains the CPU utilization history…
    77: …s an interactivity score based on the ratio of voluntary blocking time and CPU usage time for the …
    83: * **Clutch Bucket Group CPU Time**: Maintains the CPU time used by all threads of this clutch bucke…
    87: …ows for a fair sharing of CPU among thread groups based on their recent behavior. Since the algori…
    [all …]
|
/xnu-12377.61.12/bsd/dev/arm64/
dtrace_isa.c
    110: if (pArg->cpu == CPU->cpu_id || pArg->cpu == DTRACE_CPUALL) {    in xcRemote()
    193: volatile uint16_t *flags = (volatile uint16_t *) &cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_getustack_common()
    231: volatile uint16_t *flags = (volatile uint16_t *) &cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_getupcstack()
    341: volatile uint16_t *flags = (volatile uint16_t *) &cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_getufpstack()
    473: int on_intr = CPU_ON_INTR(CPU);    in dtrace_getpcstack()
    476: uintptr_t caller = CPU->cpu_dtrace_caller;    in dtrace_getpcstack()
|
fbt_arm.c
    118: if (0 == CPU->cpu_dtrace_invop_underway) {    in fbt_invop()
    119: CPU->cpu_dtrace_invop_underway = 1; /* Race not possible on    in fbt_invop()
    160: CPU->cpu_dtrace_caller = get_saved_state_lr(regs);    in fbt_invop()
    170: CPU->cpu_dtrace_caller = 0;    in fbt_invop()
    171: CPU->cpu_dtrace_invop_underway = 0;    in fbt_invop()
|
dtrace_subr_arm.c
    117: rwp = &CPU->cpu_ft_lock;    in dtrace_user_probe()
    129: rwp = &CPU->cpu_ft_lock;    in dtrace_user_probe()
|
fasttrap_isa.c
    196: pid_mtx = &cpu_core[CPU->cpu_id].cpuc_pid_lock;
    965: pid_mtx = &cpu_core[CPU->cpu_id].cpuc_pid_lock;
|
/xnu-12377.61.12/bsd/dev/i386/
dtrace_isa.c
    144: if ( pArg->cpu == CPU->cpu_id || pArg->cpu == DTRACE_CPUALL ) {    in xcRemote()
    282: cpu_core[CPU->cpu_id].cpuc_dtrace_illval = ndx;    in dtrace_getvmreg()
    443: (volatile uint16_t *)&cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_getustack_common()
    526: volatile uint16_t *flags = (volatile uint16_t *) &cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_adjust_stack()
    565: missing_tos = cpu_core[CPU->cpu_id].cpuc_missing_tos;    in dtrace_adjust_stack()
    594: (volatile uint16_t *)&cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_getupcstack()
    719: (volatile uint16_t *)&cpu_core[CPU->cpu_id].cpuc_dtrace_flags;    in dtrace_getufpstack()
    835: uintptr_t caller = CPU->cpu_dtrace_caller;    in dtrace_getpcstack()
    838: if ((on_intr = CPU_ON_INTR(CPU)) != 0)    in dtrace_getpcstack()
|
dtrace_subr_x86.c
    136: rwp = &CPU->cpu_ft_lock;    in dtrace_user_probe()
    152: rwp = &CPU->cpu_ft_lock;    in dtrace_user_probe()
|
fbt_x86.c
    113: … CPU->cpu_dtrace_caller = *(uintptr_t *)(((uintptr_t)(regs->isf.rsp))+sizeof(uint64_t)); // 8(%rsp)    in fbt_invop()
    116: CPU->cpu_dtrace_caller = 0;    in fbt_invop()
    120: CPU->cpu_dtrace_caller = 0;    in fbt_invop()
|
fasttrap_isa.c
    684: pid_mtx = &cpu_core[CPU->cpu_id].cpuc_pid_lock;    in fasttrap_return_common()
    736: cpu_core[CPU->cpu_id].cpuc_missing_tos = pc;    in fasttrap_return_common()
    750: cpu_core[CPU->cpu_id].cpuc_missing_tos = 0;    in fasttrap_return_common()
    991: pid_mtx = &cpu_core[CPU->cpu_id].cpuc_pid_lock;    in fasttrap_pid_probe32()
   1555: pid_mtx = &cpu_core[CPU->cpu_id].cpuc_pid_lock;    in fasttrap_pid_probe64()
|
/xnu-12377.61.12/doc/arm/
sme.md
     68: `PSTATE.SM` moves the CPU in and out of a special execution mode called
     72: things even more complicated, these transitions cause the CPU to zero out the
     89: `PSTATE.{SM,ZA} = {0,0}` acts as a hint to the CPU that it may power down
    110: case, the per-CPU `SMPRI_EL1` controls the relative priority of the SME
    111: instructions issued by that CPU. ARM guarantees that higher `SMPRI_EL1` values
    150: become illegal while the CPU is in streaming SVE mode. This poses a problem if
    183: CPU's `PSTATE.ZA` bit is cleared (executing `smstop za` if necessary). xnu does
    185: thread: the next time `PSTATE.ZA` is enabled, the CPU is architecturally
    203: Accordingly xnu resets `SMPRI_EL1` to `0` during CPU initialization, and
    228: register state, xnu tries to keep the guest matrix state resident in the CPU as
    [all …]
|
/xnu-12377.61.12/doc/building/
xnu_build_consolidation.md
     41: various CPU-specific parameters.
     52: ### Performing CPU/Revision-specific checks at runtime
     54: CPU and revision checks may be required at various places, although the focus here has been the app…
     60: * On a subset of all of the CPU revisions.
     73: type, CPU ID, revision(s), or a combination of these.
     76: `MIDR_EL1` register against a CPU revision that is passed as a parameter to the macro, where applic…
    110: * Similarly, deriving CPU physical IDs from the topology parser.
|
/xnu-12377.61.12/doc/primitives/
sched_cond.md
     42: This results in precious CPU cycles being spent in (A) to wake the thread despite the fact that
     45: …the thread will still yield (D), thus spending precious CPU cycles setting itself up to block only
|
/xnu-12377.61.12/tests/sched/sched_test_harness/
README.md
     19: …ueue, and dequeue threads to validate the order in which they will receive CPU time. `sched_runque…
     22: …lidate implementations of a migration policy that determines which cluster/CPU a thread will run o…
|
/xnu-12377.61.12/doc/lifecycle/
hibernation.md
     72: is in progress on this CPU.
     79: * By the time regular sleep has completed, all CPUs but the boot CPU have been
     80: halted, and we are running on the boot CPU's idle thread in the shutdown
    226: + The boot CPU's idle thread preemption_count also has to be fixed up. This
    230: * After the platform CPU init code is called, `hibernate_machine_init()` is
|
startup.md
    128: - `cpu_data_startup_init`: Allocate per-CPU memory that needs to be accessible with MMU disabled
|
/xnu-12377.61.12/san/coverage/
kcov-denylist-x86_64
     24: # sumac, x86, bootstrap 2nd+ CPU
|
/xnu-12377.61.12/bsd/sys/
dtrace.h
   2608: (cpu_core[CPU->cpu_id].cpuc_dtrace_flags & (flag))
   2611: (cpu_core[CPU->cpu_id].cpuc_dtrace_flags |= (flag))
   2614: (cpu_core[CPU->cpu_id].cpuc_dtrace_flags &= ~(flag))
|
dtrace_glue.h
    147: #define CPU (&(cpu_list[cpu_number()]))    /* Pointer to current CPU */
|
/xnu-12377.61.12/iokit/DriverKit/
IODMACommand.iig
    169: * @brief Perform CPU access to the DMA mapping.
|