author	Vlastimil Babka <vbabka@suse.cz>	2024-08-07 12:31:19 +0200
committer	Vlastimil Babka <vbabka@suse.cz>	2024-08-27 14:12:51 +0200
commit	6c6c47b063b593785202be158e61fe5c827d6677 (patch)
tree	b321aa9e9442a580ea0cf785a352e1af2bea4230 /mm
parent	2b55d6a42d14c8675e38d6d9adca3014fdf01951 (diff)
mm, slab: call kvfree_rcu_barrier() from kmem_cache_destroy()
We would like to replace call_rcu() users with kfree_rcu() where the existing callback is just a kmem_cache_free(). However, this causes issues when the cache can be destroyed (such as due to module unload).

Currently such modules should be issuing rcu_barrier() before kmem_cache_destroy() to have their call_rcu() callbacks processed first. This barrier is however not sufficient for kfree_rcu() in flight due to the batching introduced by a35d16905efc ("rcu: Add basic support for kfree_rcu() batching").

This is not a problem for kmalloc caches, which are never destroyed, but since the removal of SLOB, kfree_rcu() is also allowed for any other cache, which might be destroyed.

In order not to complicate the API, put the responsibility for handling outstanding kfree_rcu() in kmem_cache_destroy() itself. Use the newly introduced kvfree_rcu_barrier() to wait before destroying the cache. This is similar to how we issue rcu_barrier() for SLAB_TYPESAFE_BY_RCU caches, but it has to be done earlier, as the latter only needs to wait for the freeing of empty slab pages, not for objects from the slab.

Users of call_rcu() with arbitrary callbacks should still issue rcu_barrier() before destroying the cache and unloading the module, as kvfree_rcu_barrier() is not a superset of rcu_barrier() and the callbacks may be invoking module code or performing other actions that are necessary for a successful unload.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
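As an illustration of the scenario described above, here is a minimal sketch of a module that frees an object from its own cache via kfree_rcu() and then destroys the cache on unload. The names (my_obj, my_cache, pending, my_module_init/exit) are hypothetical and not part of the patch; the sketch only shows where the new kvfree_rcu_barrier() in kmem_cache_destroy() matters.

  #include <linux/module.h>
  #include <linux/slab.h>
  #include <linux/rcupdate.h>

  struct my_obj {
  	struct rcu_head rcu;
  	int data;
  };

  static struct kmem_cache *my_cache;
  static struct my_obj *pending;

  static int __init my_module_init(void)
  {
  	my_cache = KMEM_CACHE(my_obj, 0);
  	if (!my_cache)
  		return -ENOMEM;

  	pending = kmem_cache_zalloc(my_cache, GFP_KERNEL);
  	if (!pending) {
  		kmem_cache_destroy(my_cache);
  		return -ENOMEM;
  	}
  	return 0;
  }

  static void __exit my_module_exit(void)
  {
  	/* queue the object for freeing after a grace period */
  	kfree_rcu(pending, rcu);

  	/*
  	 * With this patch, kmem_cache_destroy() calls kvfree_rcu_barrier()
  	 * and drains the object queued above before tearing the cache down.
  	 * Previously, even an rcu_barrier() here would not have been enough,
  	 * because kfree_rcu() batches its frees. Modules using call_rcu()
  	 * with arbitrary callbacks must still call rcu_barrier() before
  	 * destroying the cache.
  	 */
  	kmem_cache_destroy(my_cache);
  }

  module_init(my_module_init);
  module_exit(my_module_exit);
  MODULE_LICENSE("GPL");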
Diffstat (limited to 'mm')
-rw-r--r--	mm/slab_common.c	3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index c40227d5fa07..1a2873293f5d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -508,6 +508,9 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (unlikely(!s) || !kasan_check_byte(s))
 		return;
 
+	/* in-flight kfree_rcu()'s may include objects from our cache */
+	kvfree_rcu_barrier();
+
 	cpus_read_lock();
 	mutex_lock(&slab_mutex);