
TOMOYO Linux Cross Reference
Linux/tools/sched_ext/scx_flatcg.bpf.c



/* SPDX-License-Identifier: GPL-2.0 */
/*
 * A demo sched_ext flattened cgroup hierarchy scheduler. It implements
 * hierarchical weight-based cgroup CPU control by flattening the cgroup
 * hierarchy into a single layer by compounding the active weight share at each
 * level. Consider the following hierarchy with weights in parentheses:
 *
 * R + A (100) + B (100)
 *   |         \ C (100)
 *   \ D (200)
 *
 * Ignoring the root and threaded cgroups, only B, C and D can contain tasks.
 * Let's say all three have runnable tasks. The total share that each of these
 * three cgroups is entitled to can be calculated by compounding its share at
 * each level.
 *
 * For example, B is competing against C and in that competition its share is
 * 100/(100+100) == 1/2. At its parent level, A is competing against D and A's
 * share in that competition is 100/(200+100) == 1/3. B's eventual share in the
 * system can be calculated by multiplying the two shares, 1/2 * 1/3 == 1/6. C's
 * eventual share is the same at 1/6. D is only competing at the top level and
 * its share is 200/(100+200) == 2/3.
 *
 * So, instead of hierarchically scheduling level-by-level, we can consider it
 * as B, C and D competing against each other with respective shares of 1/6,
 * 1/6 and 2/3, and keep updating the eventual shares as the cgroups' runnable
 * states change.
 *
 * This flattening of the hierarchy can bring a substantial performance gain
 * when the cgroup hierarchy is nested multiple levels deep. In a simple
 * benchmark using wrk[8] on apache serving a CGI script calculating sha1sum of
 * a small file, it outperforms CFS by ~3% with the CPU controller disabled and
 * by ~10% with two apache instances competing with a 2:1 weight ratio nested
 * four levels deep.
 *
 * However, the gain comes at the cost of not being able to properly handle a
 * thundering herd of cgroups. For example, if many cgroups which are nested
 * behind a low priority parent cgroup wake up around the same time, they may be
 * able to consume more CPU cycles than they are entitled to. In many use cases,
 * this isn't a real concern especially given the performance gain. Also, there
 * are ways to mitigate the problem further by e.g. introducing an extra
 * scheduling layer on cgroup delegation boundaries.
 *
 * The scheduler first picks the cgroup to run and then schedules the tasks
 * within by using nested weighted vtime scheduling by default. The
 * cgroup-internal scheduling can be switched to FIFO with the -f option.
 */
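
/*
 * Illustrative sketch, not part of the original file: how the compounded
 * shares in the example above map to the fixed-point hweights computed by
 * cgrp_refresh_hweight() below, assuming FCG_HWEIGHT_ONE is defined as
 * 1 << 16 in scx_flatcg.h:
 *
 *   root: FCG_HWEIGHT_ONE                      == 65536   (1.0)
 *   A:    div_round_up(65536 * 100, 100 + 200) == 21846   (~1/3)
 *   B:    div_round_up(21846 * 100, 100 + 100) == 10923   (~1/6)
 *   D:    div_round_up(65536 * 200, 100 + 200) == 43691   (~2/3)
 */
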
#include <scx/common.bpf.h>
#include "scx_flatcg.h"

/*
 * Maximum number of retries to find a valid cgroup.
 */
enum {
        FALLBACK_DSQ            = 0,
        CGROUP_MAX_RETRIES      = 1024,
};

char _license[] SEC("license") = "GPL";

const volatile u32 nr_cpus = 32;        /* !0 for veristat, set during init */
const volatile u64 cgrp_slice_ns = SCX_SLICE_DFL;
const volatile bool fifo_sched;

u64 cvtime_now;
UEI_DEFINE(uei);

struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __type(key, u32);
        __type(value, u64);
        __uint(max_entries, FCG_NR_STATS);
} stats SEC(".maps");

static void stat_inc(enum fcg_stat_idx idx)
{
        u32 idx_v = idx;

        u64 *cnt_p = bpf_map_lookup_elem(&stats, &idx_v);
        if (cnt_p)
                (*cnt_p)++;
}

struct fcg_cpu_ctx {
        u64                     cur_cgid;
        u64                     cur_at;
};

struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __type(key, u32);
        __type(value, struct fcg_cpu_ctx);
        __uint(max_entries, 1);
} cpu_ctx SEC(".maps");

struct {
        __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, struct fcg_cgrp_ctx);
} cgrp_ctx SEC(".maps");

struct cgv_node {
        struct bpf_rb_node      rb_node;
        __u64                   cvtime;
        __u64                   cgid;
};

private(CGV_TREE) struct bpf_spin_lock cgv_tree_lock;
private(CGV_TREE) struct bpf_rb_root cgv_tree __contains(cgv_node, rb_node);

struct cgv_node_stash {
        struct cgv_node __kptr *node;
};

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 16384);
        __type(key, __u64);
        __type(value, struct cgv_node_stash);
} cgv_node_stash SEC(".maps");

struct fcg_task_ctx {
        u64             bypassed_at;
};

struct {
        __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, struct fcg_task_ctx);
} task_ctx SEC(".maps");

/* gets inc'd on weight tree changes to expire the cached hweights */
u64 hweight_gen = 1;

static u64 div_round_up(u64 dividend, u64 divisor)
{
        return (dividend + divisor - 1) / divisor;
}

static bool vtime_before(u64 a, u64 b)
{
        return (s64)(a - b) < 0;
}
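
/*
 * Illustrative note, not part of the original file: the signed cast keeps the
 * comparison correct across u64 wraparound. For example, with a == (u64)-10
 * (just before the wrap) and b == 5 (just after it), a - b == (u64)-15, which
 * is -15 as an s64, so a is still correctly ordered before b.
 */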

static bool cgv_node_less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
        struct cgv_node *cgc_a, *cgc_b;

        cgc_a = container_of(a, struct cgv_node, rb_node);
        cgc_b = container_of(b, struct cgv_node, rb_node);

        return cgc_a->cvtime < cgc_b->cvtime;
}

static struct fcg_cpu_ctx *find_cpu_ctx(void)
{
        struct fcg_cpu_ctx *cpuc;
        u32 idx = 0;

        cpuc = bpf_map_lookup_elem(&cpu_ctx, &idx);
        if (!cpuc) {
                scx_bpf_error("cpu_ctx lookup failed");
                return NULL;
        }
        return cpuc;
}

static struct fcg_cgrp_ctx *find_cgrp_ctx(struct cgroup *cgrp)
{
        struct fcg_cgrp_ctx *cgc;

        cgc = bpf_cgrp_storage_get(&cgrp_ctx, cgrp, 0, 0);
        if (!cgc) {
                scx_bpf_error("cgrp_ctx lookup failed for cgid %llu", cgrp->kn->id);
                return NULL;
        }
        return cgc;
}

static struct fcg_cgrp_ctx *find_ancestor_cgrp_ctx(struct cgroup *cgrp, int level)
{
        struct fcg_cgrp_ctx *cgc;

        cgrp = bpf_cgroup_ancestor(cgrp, level);
        if (!cgrp) {
                scx_bpf_error("ancestor cgroup lookup failed");
                return NULL;
        }

        cgc = find_cgrp_ctx(cgrp);
        if (!cgc)
                scx_bpf_error("ancestor cgrp_ctx lookup failed");
        bpf_cgroup_release(cgrp);
        return cgc;
}

static void cgrp_refresh_hweight(struct cgroup *cgrp, struct fcg_cgrp_ctx *cgc)
{
        int level;

        if (!cgc->nr_active) {
                stat_inc(FCG_STAT_HWT_SKIP);
                return;
        }

        if (cgc->hweight_gen == hweight_gen) {
                stat_inc(FCG_STAT_HWT_CACHE);
                return;
        }

        stat_inc(FCG_STAT_HWT_UPDATES);
        bpf_for(level, 0, cgrp->level + 1) {
                struct fcg_cgrp_ctx *cgc;
                bool is_active;

                cgc = find_ancestor_cgrp_ctx(cgrp, level);
                if (!cgc)
                        break;

                if (!level) {
                        cgc->hweight = FCG_HWEIGHT_ONE;
                        cgc->hweight_gen = hweight_gen;
                } else {
                        struct fcg_cgrp_ctx *pcgc;

                        pcgc = find_ancestor_cgrp_ctx(cgrp, level - 1);
                        if (!pcgc)
                                break;

                        /*
                         * We can be opportunistic here and not grab the
                         * cgv_tree_lock and deal with the occasional races.
                         * However, hweight updates are already cached and
                         * relatively low-frequency. Let's just do the
                         * straightforward thing.
                         */
                        bpf_spin_lock(&cgv_tree_lock);
                        is_active = cgc->nr_active;
                        if (is_active) {
                                cgc->hweight_gen = pcgc->hweight_gen;
                                cgc->hweight =
                                        div_round_up(pcgc->hweight * cgc->weight,
                                                     pcgc->child_weight_sum);
                        }
                        bpf_spin_unlock(&cgv_tree_lock);

                        if (!is_active) {
                                stat_inc(FCG_STAT_HWT_RACE);
                                break;
                        }
                }
        }
}

static void cgrp_cap_budget(struct cgv_node *cgv_node, struct fcg_cgrp_ctx *cgc)
{
        u64 delta, cvtime, max_budget;

        /*
         * A node which is on the rbtree can't be pointed to from elsewhere yet
         * and thus can't be updated and repositioned. Instead, we collect the
         * vtime deltas separately and apply them asynchronously here.
         */
        delta = __sync_fetch_and_sub(&cgc->cvtime_delta, cgc->cvtime_delta);
        cvtime = cgv_node->cvtime + delta;

        /*
         * Allow a cgroup to carry the maximum budget proportional to its
         * hweight such that a full-hweight cgroup can immediately take up half
         * of the CPUs at the most while staying at the front of the rbtree.
         */
        max_budget = (cgrp_slice_ns * nr_cpus * cgc->hweight) /
                (2 * FCG_HWEIGHT_ONE);
        if (vtime_before(cvtime, cvtime_now - max_budget))
                cvtime = cvtime_now - max_budget;

        cgv_node->cvtime = cvtime;
}
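
/*
 * Illustrative sketch, not part of the original file: with the values assumed
 * here (nr_cpus == 32, cgrp_slice_ns == SCX_SLICE_DFL == 20ms and
 * cgc->hweight == FCG_HWEIGHT_ONE), the cap evaluates to
 *
 *   max_budget = (20ms * 32 * FCG_HWEIGHT_ONE) / (2 * FCG_HWEIGHT_ONE) = 320ms
 *
 * of cgroup vtime, i.e. a fully weighted cgroup that has been idle can be at
 * most 16 cgroup slices behind cvtime_now when re-queued, enough to occupy
 * half of the 32 CPUs for one slice period.
 */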

static void cgrp_enqueued(struct cgroup *cgrp, struct fcg_cgrp_ctx *cgc)
{
        struct cgv_node_stash *stash;
        struct cgv_node *cgv_node;
        u64 cgid = cgrp->kn->id;

        /* paired with cmpxchg in try_pick_next_cgroup() */
        if (__sync_val_compare_and_swap(&cgc->queued, 0, 1)) {
                stat_inc(FCG_STAT_ENQ_SKIP);
                return;
        }

        stash = bpf_map_lookup_elem(&cgv_node_stash, &cgid);
        if (!stash) {
                scx_bpf_error("cgv_node lookup failed for cgid %llu", cgid);
                return;
        }

        /* NULL if the node is already on the rbtree */
        cgv_node = bpf_kptr_xchg(&stash->node, NULL);
        if (!cgv_node) {
                stat_inc(FCG_STAT_ENQ_RACE);
                return;
        }

        bpf_spin_lock(&cgv_tree_lock);
        cgrp_cap_budget(cgv_node, cgc);
        bpf_rbtree_add(&cgv_tree, &cgv_node->rb_node, cgv_node_less);
        bpf_spin_unlock(&cgv_tree_lock);
}

static void set_bypassed_at(struct task_struct *p, struct fcg_task_ctx *taskc)
{
        /*
         * Tell fcg_stopping() that this bypassed the regular scheduling path
         * and should be force charged to the cgroup. 0 is used to indicate that
         * the task isn't bypassing, so if the current runtime is 0, go back by
         * one nanosecond.
         */
        taskc->bypassed_at = p->se.sum_exec_runtime ?: (u64)-1;
}
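
/*
 * Illustrative note, not part of the original file: bypassed_at doubles as a
 * flag and a timestamp. A task that has never run has sum_exec_runtime == 0,
 * which would read as "not bypassing", so (u64)-1 (i.e. -1ns) is recorded
 * instead; the unsigned subtraction in fcg_stopping() then charges
 * sum_exec_runtime + 1, off by at most one nanosecond.
 */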

s32 BPF_STRUCT_OPS(fcg_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
{
        struct fcg_task_ctx *taskc;
        bool is_idle = false;
        s32 cpu;

        cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);

        taskc = bpf_task_storage_get(&task_ctx, p, 0, 0);
        if (!taskc) {
                scx_bpf_error("task_ctx lookup failed");
                return cpu;
        }

        /*
         * If select_cpu_dfl() is recommending local enqueue, the target CPU is
         * idle. Follow it and charge the cgroup later in fcg_stopping() after
         * the fact.
         */
        if (is_idle) {
                set_bypassed_at(p, taskc);
                stat_inc(FCG_STAT_LOCAL);
                scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
        }

        return cpu;
}

void BPF_STRUCT_OPS(fcg_enqueue, struct task_struct *p, u64 enq_flags)
{
        struct fcg_task_ctx *taskc;
        struct cgroup *cgrp;
        struct fcg_cgrp_ctx *cgc;

        taskc = bpf_task_storage_get(&task_ctx, p, 0, 0);
        if (!taskc) {
                scx_bpf_error("task_ctx lookup failed");
                return;
        }

        /*
         * Use the direct dispatching and force charging to deal with tasks with
         * custom affinities so that we don't have to worry about per-cgroup
         * dq's containing tasks that can't be executed from some CPUs.
         */
        if (p->nr_cpus_allowed != nr_cpus) {
                set_bypassed_at(p, taskc);

                /*
                 * The global dq is deprioritized as we don't want to let tasks
                 * boost themselves by constraining their cpumask. The
                 * deprioritization is rather severe, so let's not apply that to
                 * per-cpu kernel threads. This is ham-fisted. We probably wanna
                 * implement per-cgroup fallback dq's instead so that we have
                 * more control over when tasks with custom cpumask get issued.
                 */
                if (p->nr_cpus_allowed == 1 && (p->flags & PF_KTHREAD)) {
                        stat_inc(FCG_STAT_LOCAL);
                        scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, enq_flags);
                } else {
                        stat_inc(FCG_STAT_GLOBAL);
                        scx_bpf_dispatch(p, FALLBACK_DSQ, SCX_SLICE_DFL, enq_flags);
                }
                return;
        }

        cgrp = __COMPAT_scx_bpf_task_cgroup(p);
        cgc = find_cgrp_ctx(cgrp);
        if (!cgc)
                goto out_release;

        if (fifo_sched) {
                scx_bpf_dispatch(p, cgrp->kn->id, SCX_SLICE_DFL, enq_flags);
        } else {
                u64 tvtime = p->scx.dsq_vtime;

                /*
                 * Limit the amount of budget that an idling task can accumulate
                 * to one slice.
                 */
                if (vtime_before(tvtime, cgc->tvtime_now - SCX_SLICE_DFL))
                        tvtime = cgc->tvtime_now - SCX_SLICE_DFL;

                scx_bpf_dispatch_vtime(p, cgrp->kn->id, SCX_SLICE_DFL,
                                       tvtime, enq_flags);
        }

        cgrp_enqueued(cgrp, cgc);
out_release:
        bpf_cgroup_release(cgrp);
}
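
/*
 * Illustrative note, not part of the original file: the clamp above means a
 * task that slept for a long time does not bank unlimited vtime credit. For
 * example, if cgc->tvtime_now has advanced to 100 * SCX_SLICE_DFL while the
 * task's dsq_vtime is still at 10 * SCX_SLICE_DFL, the task is enqueued at
 * 99 * SCX_SLICE_DFL, i.e. with at most one slice of advantage over tasks
 * running at the cgroup's current vtime.
 */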

/*
 * Walk the cgroup tree to update the active weight sums as tasks wake up and
 * sleep. The weight sums are used as the base when calculating the proportion a
 * given cgroup or task is entitled to at each level.
 */
static void update_active_weight_sums(struct cgroup *cgrp, bool runnable)
{
        struct fcg_cgrp_ctx *cgc;
        bool updated = false;
        int idx;

        cgc = find_cgrp_ctx(cgrp);
        if (!cgc)
                return;

        /*
         * In most cases, a hot cgroup would have multiple threads going to
         * sleep and waking up while the whole cgroup stays active. In leaf
         * cgroups, ->nr_runnable which is updated with __sync operations gates
         * ->nr_active updates, so that we don't have to grab the cgv_tree_lock
         * repeatedly for a busy cgroup which is staying active.
         */
        if (runnable) {
                if (__sync_fetch_and_add(&cgc->nr_runnable, 1))
                        return;
                stat_inc(FCG_STAT_ACT);
        } else {
                if (__sync_sub_and_fetch(&cgc->nr_runnable, 1))
                        return;
                stat_inc(FCG_STAT_DEACT);
        }

        /*
         * If @cgrp is becoming runnable, its hweight should be refreshed after
         * it's added to the weight tree so that enqueue has the up-to-date
         * value. If @cgrp is becoming quiescent, the hweight should be
         * refreshed before it's removed from the weight tree so that the usage
         * charging which happens afterwards has access to the latest value.
         */
        if (!runnable)
                cgrp_refresh_hweight(cgrp, cgc);

        /* propagate upwards */
        bpf_for(idx, 0, cgrp->level) {
                int level = cgrp->level - idx;
                struct fcg_cgrp_ctx *cgc, *pcgc = NULL;
                bool propagate = false;

                cgc = find_ancestor_cgrp_ctx(cgrp, level);
                if (!cgc)
                        break;
                if (level) {
                        pcgc = find_ancestor_cgrp_ctx(cgrp, level - 1);
                        if (!pcgc)
                                break;
                }

                /*
                 * We need the propagation protected by a lock to synchronize
                 * against weight changes. There's no reason to drop the lock at
                 * each level but bpf_spin_lock() doesn't want any function
                 * calls while locked.
                 */
                bpf_spin_lock(&cgv_tree_lock);

                if (runnable) {
                        if (!cgc->nr_active++) {
                                updated = true;
                                if (pcgc) {
                                        propagate = true;
                                        pcgc->child_weight_sum += cgc->weight;
                                }
                        }
                } else {
                        if (!--cgc->nr_active) {
                                updated = true;
                                if (pcgc) {
                                        propagate = true;
                                        pcgc->child_weight_sum -= cgc->weight;
                                }
                        }
                }

                bpf_spin_unlock(&cgv_tree_lock);

                if (!propagate)
                        break;
        }

        if (updated)
                __sync_fetch_and_add(&hweight_gen, 1);

        if (runnable)
                cgrp_refresh_hweight(cgrp, cgc);
}

void BPF_STRUCT_OPS(fcg_runnable, struct task_struct *p, u64 enq_flags)
{
        struct cgroup *cgrp;

        cgrp = __COMPAT_scx_bpf_task_cgroup(p);
        update_active_weight_sums(cgrp, true);
        bpf_cgroup_release(cgrp);
}

void BPF_STRUCT_OPS(fcg_running, struct task_struct *p)
{
        struct cgroup *cgrp;
        struct fcg_cgrp_ctx *cgc;

        if (fifo_sched)
                return;

        cgrp = __COMPAT_scx_bpf_task_cgroup(p);
        cgc = find_cgrp_ctx(cgrp);
        if (cgc) {
                /*
                 * @cgc->tvtime_now always progresses forward as tasks start
                 * executing. The test and update can be performed concurrently
                 * from multiple CPUs and are thus racy. Any error should be
                 * contained and temporary. Let's just live with it.
                 */
                if (vtime_before(cgc->tvtime_now, p->scx.dsq_vtime))
                        cgc->tvtime_now = p->scx.dsq_vtime;
        }
        bpf_cgroup_release(cgrp);
}

void BPF_STRUCT_OPS(fcg_stopping, struct task_struct *p, bool runnable)
{
        struct fcg_task_ctx *taskc;
        struct cgroup *cgrp;
        struct fcg_cgrp_ctx *cgc;

        /*
         * Scale the execution time by the inverse of the weight and charge.
         *
         * Note that the default yield implementation yields by setting
         * @p->scx.slice to zero and the following would treat the yielding task
         * as if it has consumed all its slice. If this penalizes yielding tasks
         * too much, determine the execution time by taking explicit timestamps
         * instead of depending on @p->scx.slice.
         */
        if (!fifo_sched)
                p->scx.dsq_vtime +=
                        (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;

        taskc = bpf_task_storage_get(&task_ctx, p, 0, 0);
        if (!taskc) {
                scx_bpf_error("task_ctx lookup failed");
                return;
        }

        if (!taskc->bypassed_at)
                return;

        cgrp = __COMPAT_scx_bpf_task_cgroup(p);
        cgc = find_cgrp_ctx(cgrp);
        if (cgc) {
                __sync_fetch_and_add(&cgc->cvtime_delta,
                                     p->se.sum_exec_runtime - taskc->bypassed_at);
                taskc->bypassed_at = 0;
        }
        bpf_cgroup_release(cgrp);
}
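
/*
 * Illustrative sketch, not part of the original file: the charge in
 * fcg_stopping() is scaled by the inverse of the task weight, with 100 being
 * the default. A task that consumed its full slice is charged SCX_SLICE_DFL
 * of vtime at weight 100, half of that at weight 200 and double at weight 50,
 * so heavier tasks accrue vtime more slowly and get picked more often.
 */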
579                                                   579 
580 void BPF_STRUCT_OPS(fcg_quiescent, struct task    580 void BPF_STRUCT_OPS(fcg_quiescent, struct task_struct *p, u64 deq_flags)
581 {                                                 581 {
582         struct cgroup *cgrp;                      582         struct cgroup *cgrp;
583                                                   583 
584         cgrp = __COMPAT_scx_bpf_task_cgroup(p)    584         cgrp = __COMPAT_scx_bpf_task_cgroup(p);
585         update_active_weight_sums(cgrp, false)    585         update_active_weight_sums(cgrp, false);
586         bpf_cgroup_release(cgrp);                 586         bpf_cgroup_release(cgrp);
587 }                                                 587 }
588                                                   588 
void BPF_STRUCT_OPS(fcg_cgroup_set_weight, struct cgroup *cgrp, u32 weight)
{
	struct fcg_cgrp_ctx *cgc, *pcgc = NULL;

	cgc = find_cgrp_ctx(cgrp);
	if (!cgc)
		return;

	if (cgrp->level) {
		pcgc = find_ancestor_cgrp_ctx(cgrp, cgrp->level - 1);
		if (!pcgc)
			return;
	}

	bpf_spin_lock(&cgv_tree_lock);
	if (pcgc && cgc->nr_active)
		pcgc->child_weight_sum += (s64)weight - cgc->weight;
	cgc->weight = weight;
	bpf_spin_unlock(&cgv_tree_lock);
}

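/*
 * Pop the cgroup with the earliest cvtime from the cgv_tree and try to
 * consume a task from its DSQ. Returns true when dispatch can stop retrying
 * (a cgroup was picked, the tree was empty, or an error was raised) and
 * false when the popped cgroup was empty or gone and the caller should try
 * the next candidate.
 */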
static bool try_pick_next_cgroup(u64 *cgidp)
{
	struct bpf_rb_node *rb_node;
	struct cgv_node_stash *stash;
	struct cgv_node *cgv_node;
	struct fcg_cgrp_ctx *cgc;
	struct cgroup *cgrp;
	u64 cgid;

	/* pop the front cgroup and wind cvtime_now accordingly */
	bpf_spin_lock(&cgv_tree_lock);

	rb_node = bpf_rbtree_first(&cgv_tree);
	if (!rb_node) {
		bpf_spin_unlock(&cgv_tree_lock);
		stat_inc(FCG_STAT_PNC_NO_CGRP);
		*cgidp = 0;
		return true;
	}

	rb_node = bpf_rbtree_remove(&cgv_tree, rb_node);
	bpf_spin_unlock(&cgv_tree_lock);

	if (!rb_node) {
		/*
		 * This should never happen. bpf_rbtree_first() was called
		 * above while the tree lock was held, so the node should
		 * always be present.
		 */
		scx_bpf_error("node could not be removed");
		return true;
	}

	cgv_node = container_of(rb_node, struct cgv_node, rb_node);
	cgid = cgv_node->cgid;

	if (vtime_before(cvtime_now, cgv_node->cvtime))
		cvtime_now = cgv_node->cvtime;

	/*
	 * If lookup fails, the cgroup's gone. Free and move on. See
	 * fcg_cgroup_exit().
	 */
	cgrp = bpf_cgroup_from_id(cgid);
	if (!cgrp) {
		stat_inc(FCG_STAT_PNC_GONE);
		goto out_free;
	}

	cgc = bpf_cgrp_storage_get(&cgrp_ctx, cgrp, 0, 0);
	if (!cgc) {
		bpf_cgroup_release(cgrp);
		stat_inc(FCG_STAT_PNC_GONE);
		goto out_free;
	}

	if (!scx_bpf_consume(cgid)) {
		bpf_cgroup_release(cgrp);
		stat_inc(FCG_STAT_PNC_EMPTY);
		goto out_stash;
	}

	/*
	 * Successfully consumed from the cgroup. This will be our current
	 * cgroup for the new slice. Refresh its hweight.
	 */
	cgrp_refresh_hweight(cgrp, cgc);

	bpf_cgroup_release(cgrp);

	/*
	 * As the cgroup may have more tasks, add it back to the rbtree. Note
	 * that here we charge the full slice upfront and then settle up later
	 * according to the actual consumption. This prevents a low-priority
	 * thundering herd from saturating the machine.
	 */
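	/*
	 * For illustration: the charge below scales the slice by the inverse
	 * of the cgroup's hierarchical weight. E.g. a cgroup whose hweight is
	 * half of FCG_HWEIGHT_ONE advances its cvtime by two slices' worth per
	 * slice consumed and is therefore picked roughly half as often as a
	 * full-hweight competitor.
	 */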
	bpf_spin_lock(&cgv_tree_lock);
	cgv_node->cvtime += cgrp_slice_ns * FCG_HWEIGHT_ONE / (cgc->hweight ?: 1);
	cgrp_cap_budget(cgv_node, cgc);
	bpf_rbtree_add(&cgv_tree, &cgv_node->rb_node, cgv_node_less);
	bpf_spin_unlock(&cgv_tree_lock);

	*cgidp = cgid;
	stat_inc(FCG_STAT_PNC_NEXT);
	return true;

out_stash:
	stash = bpf_map_lookup_elem(&cgv_node_stash, &cgid);
	if (!stash) {
		stat_inc(FCG_STAT_PNC_GONE);
		goto out_free;
	}

	/*
	 * Paired with cmpxchg in cgrp_enqueued(). If they see the following
	 * transition, they'll enqueue the cgroup. If they are earlier, we'll
	 * see their task in the dq below and requeue the cgroup.
	 */
	__sync_val_compare_and_swap(&cgc->queued, 1, 0);

	if (scx_bpf_dsq_nr_queued(cgid)) {
		bpf_spin_lock(&cgv_tree_lock);
		bpf_rbtree_add(&cgv_tree, &cgv_node->rb_node, cgv_node_less);
		bpf_spin_unlock(&cgv_tree_lock);
		stat_inc(FCG_STAT_PNC_RACE);
	} else {
		cgv_node = bpf_kptr_xchg(&stash->node, cgv_node);
		if (cgv_node) {
			scx_bpf_error("unexpected !NULL cgv_node stash");
			goto out_free;
		}
	}

	return false;

out_free:
	bpf_obj_drop(cgv_node);
	return false;
}

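/*
 * Called when @cpu is looking for tasks to run. Keep consuming from the CPU's
 * current cgroup while its slice lasts; once the slice is up or the cgroup's
 * DSQ runs dry, settle the vtime accounting and pick the next cgroup,
 * preferring the fallback DSQ over the cgv_tree.
 */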
void BPF_STRUCT_OPS(fcg_dispatch, s32 cpu, struct task_struct *prev)
{
	struct fcg_cpu_ctx *cpuc;
	struct fcg_cgrp_ctx *cgc;
	struct cgroup *cgrp;
	u64 now = bpf_ktime_get_ns();
	bool picked_next = false;

	cpuc = find_cpu_ctx();
	if (!cpuc)
		return;

	if (!cpuc->cur_cgid)
		goto pick_next_cgroup;

	if (vtime_before(now, cpuc->cur_at + cgrp_slice_ns)) {
		if (scx_bpf_consume(cpuc->cur_cgid)) {
			stat_inc(FCG_STAT_CNS_KEEP);
			return;
		}
		stat_inc(FCG_STAT_CNS_EMPTY);
	} else {
		stat_inc(FCG_STAT_CNS_EXPIRE);
	}

	/*
	 * The current cgroup is expiring. It was already charged a full slice.
	 * Calculate the actual usage and accumulate the delta.
	 */
	cgrp = bpf_cgroup_from_id(cpuc->cur_cgid);
	if (!cgrp) {
		stat_inc(FCG_STAT_CNS_GONE);
		goto pick_next_cgroup;
	}

	cgc = bpf_cgrp_storage_get(&cgrp_ctx, cgrp, 0, 0);
	if (cgc) {
		/*
		 * We want to update the vtime delta and then look for the next
		 * cgroup to execute but the latter needs to be done in a loop
		 * and we can't keep the lock held. Oh well...
		 */
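		/*
		 * (cpuc->cur_at + cgrp_slice_ns - now) is the wall-clock
		 * remainder of the slice; scaling it by FCG_HWEIGHT_ONE /
		 * hweight converts it into cgroup vtime, the same unit in
		 * which the full slice was charged in try_pick_next_cgroup().
		 */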
		bpf_spin_lock(&cgv_tree_lock);
		__sync_fetch_and_add(&cgc->cvtime_delta,
				     (cpuc->cur_at + cgrp_slice_ns - now) *
				     FCG_HWEIGHT_ONE / (cgc->hweight ?: 1));
		bpf_spin_unlock(&cgv_tree_lock);
	} else {
		stat_inc(FCG_STAT_CNS_GONE);
	}

	bpf_cgroup_release(cgrp);

pick_next_cgroup:
	cpuc->cur_at = now;

	if (scx_bpf_consume(FALLBACK_DSQ)) {
		cpuc->cur_cgid = 0;
		return;
	}

	bpf_repeat(CGROUP_MAX_RETRIES) {
		if (try_pick_next_cgroup(&cpuc->cur_cgid)) {
			picked_next = true;
			break;
		}
	}

	/*
	 * This only happens if try_pick_next_cgroup() races against the
	 * enqueue path more than CGROUP_MAX_RETRIES times, which is extremely
	 * unlikely and likely indicates an underlying bug. There shouldn't be
	 * any stall risk as the race is against enqueue.
	 */
	if (!picked_next)
		stat_inc(FCG_STAT_PNC_FAIL);
}

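/*
 * A new task is joining the scheduler. Create its local storage and start its
 * vtime at the current task-vtime of its cgroup so it enters with neither
 * credit nor debt.
 */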
s32 BPF_STRUCT_OPS(fcg_init_task, struct task_struct *p,
		   struct scx_init_task_args *args)
{
	struct fcg_task_ctx *taskc;
	struct fcg_cgrp_ctx *cgc;

	/*
	 * @p is new. Let's ensure that its task_ctx is available. We can sleep
	 * in this function and the following will automatically use GFP_KERNEL.
	 */
	taskc = bpf_task_storage_get(&task_ctx, p, 0,
				     BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (!taskc)
		return -ENOMEM;

	taskc->bypassed_at = 0;

	if (!(cgc = find_cgrp_ctx(args->cgroup)))
		return -ENOENT;

	p->scx.dsq_vtime = cgc->tvtime_now;

	return 0;
}

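/*
 * A cgroup is coming under sched_ext control. Create a DSQ keyed by the
 * cgroup ID, the cgroup-local storage, a stash slot, and the cgv_node that
 * will represent the cgroup on the cgv_tree, parked in the stash until the
 * cgroup is enqueued. Any failure unwinds the steps done so far.
 */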
int BPF_STRUCT_OPS_SLEEPABLE(fcg_cgroup_init, struct cgroup *cgrp,
			     struct scx_cgroup_init_args *args)
{
	struct fcg_cgrp_ctx *cgc;
	struct cgv_node *cgv_node;
	struct cgv_node_stash empty_stash = {}, *stash;
	u64 cgid = cgrp->kn->id;
	int ret;

	/*
	 * Technically incorrect as cgroup ID is full 64bit while dsq ID is
	 * 63bit. Should not be a problem in practice and easy to spot in the
	 * unlikely case that it breaks.
	 */
	ret = scx_bpf_create_dsq(cgid, -1);
	if (ret)
		return ret;

	cgc = bpf_cgrp_storage_get(&cgrp_ctx, cgrp, 0,
				   BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (!cgc) {
		ret = -ENOMEM;
		goto err_destroy_dsq;
	}

	cgc->weight = args->weight;
	cgc->hweight = FCG_HWEIGHT_ONE;

	ret = bpf_map_update_elem(&cgv_node_stash, &cgid, &empty_stash,
				  BPF_NOEXIST);
	if (ret) {
		if (ret != -ENOMEM)
			scx_bpf_error("unexpected stash creation error (%d)",
				      ret);
		goto err_destroy_dsq;
	}

	stash = bpf_map_lookup_elem(&cgv_node_stash, &cgid);
	if (!stash) {
		scx_bpf_error("unexpected cgv_node stash lookup failure");
		ret = -ENOENT;
		goto err_destroy_dsq;
	}

	cgv_node = bpf_obj_new(struct cgv_node);
	if (!cgv_node) {
		ret = -ENOMEM;
		goto err_del_cgv_node;
	}

	cgv_node->cgid = cgid;
	cgv_node->cvtime = cvtime_now;

	cgv_node = bpf_kptr_xchg(&stash->node, cgv_node);
	if (cgv_node) {
		scx_bpf_error("unexpected !NULL cgv_node stash");
		ret = -EBUSY;
		goto err_drop;
	}

	return 0;

err_drop:
	bpf_obj_drop(cgv_node);
err_del_cgv_node:
	bpf_map_delete_elem(&cgv_node_stash, &cgid);
err_destroy_dsq:
	scx_bpf_destroy_dsq(cgid);
	return ret;
}

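/*
 * The cgroup is going away. Drop its stash slot and per-cgroup DSQ; see the
 * comment below for how a cgv_node still on the tree is handled.
 */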
void BPF_STRUCT_OPS(fcg_cgroup_exit, struct cgroup *cgrp)
{
	u64 cgid = cgrp->kn->id;

	/*
	 * For now, there's no way to find and remove the cgv_node if it's on
	 * the cgv_tree. Let's drain them in the dispatch path as they get
	 * popped off the front of the tree.
	 */
	bpf_map_delete_elem(&cgv_node_stash, &cgid);
	scx_bpf_destroy_dsq(cgid);
}

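/*
 * @p is moving from @from to @to. Rebase its vtime onto the destination's
 * tvtime_now while preserving how far ahead of or behind its old cgroup it
 * was, so the move neither penalizes nor favors the task.
 */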
void BPF_STRUCT_OPS(fcg_cgroup_move, struct task_struct *p,
		    struct cgroup *from, struct cgroup *to)
{
	struct fcg_cgrp_ctx *from_cgc, *to_cgc;
	s64 vtime_delta;

	/* find_cgrp_ctx() triggers scx_ops_error() on lookup failures */
	if (!(from_cgc = find_cgrp_ctx(from)) || !(to_cgc = find_cgrp_ctx(to)))
		return;

	vtime_delta = p->scx.dsq_vtime - from_cgc->tvtime_now;
	p->scx.dsq_vtime = to_cgc->tvtime_now + vtime_delta;
}

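/* Create the fallback DSQ, which fcg_dispatch() consumes ahead of any cgroup. */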
s32 BPF_STRUCT_OPS_SLEEPABLE(fcg_init)
{
	return scx_bpf_create_dsq(FALLBACK_DSQ, -1);
}

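/* Record the exit info so the userspace side can report why the scheduler exited. */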
void BPF_STRUCT_OPS(fcg_exit, struct scx_exit_info *ei)
{
	UEI_RECORD(uei, ei);
}

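/* Wire the callbacks above into the flatcg_ops struct_ops that userspace loads. */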
SCX_OPS_DEFINE(flatcg_ops,
	       .select_cpu		= (void *)fcg_select_cpu,
	       .enqueue			= (void *)fcg_enqueue,
	       .dispatch		= (void *)fcg_dispatch,
	       .runnable		= (void *)fcg_runnable,
	       .running			= (void *)fcg_running,
	       .stopping		= (void *)fcg_stopping,
	       .quiescent		= (void *)fcg_quiescent,
	       .init_task		= (void *)fcg_init_task,
	       .cgroup_set_weight	= (void *)fcg_cgroup_set_weight,
	       .cgroup_init		= (void *)fcg_cgroup_init,
	       .cgroup_exit		= (void *)fcg_cgroup_exit,
	       .cgroup_move		= (void *)fcg_cgroup_move,
	       .init			= (void *)fcg_init,
	       .exit			= (void *)fcg_exit,
	       .flags			= SCX_OPS_HAS_CGROUP_WEIGHT | SCX_OPS_ENQ_EXITING,
	       .name			= "flatcg");