
linux - Does the virtual-to-physical address translation for slab memory (e.g., xarray) get recorded in the page table?


My question is similar to this one (What is the relation between virt_to_phys and the CPU's MMU in the Linux kernel?), but it has not been clearly answered yet.

In the Linux kernel (I am using version 6.10), the xas_alloc() API (line 364 in the code below) is used for node allocation in the xarray (see line 697 in the code below), and this function allocates slab memory via kmem_cache_alloc_lru() (line 380). I have the following questions (I would appreciate an answer to even one of the three):

[Q1] Are the virtual-to-physical translation entries for this kind of kernel slab memory (which appears to live in low memory) stored in each application's page table?

[Q2] If so, then I assume the page table entry is created when the allocated xarray node is first accessed (line 698 in the code below). In that case, how does the kernel determine that the physical address corresponding to the virtual address is the one previously handed out by the slab allocator, so that it can create the page table entry?

Or, is it possible that slab memory belongs to the linear mapping region (Why is there a 1:1 linear mapping in Linux kernel address space?), so the physical address can be derived directly from the virtual address, allowing the kernel to exploit this property when constructing the page table entry (PTE)?

(Since slab memory is shared by all processes once it is allocated, the PTE-creation mechanism used by malloc(), where a new physical page is allocated for each PTE on demand, likely doesn't apply in this case; see What is the difference between vmalloc and kmalloc?)

 658 static void *xas_create(struct xa_state *xas, bool allow_root)
 659 {
 660         struct xarray *xa = xas->xa;
 661         void *entry;
 662         void __rcu **slot;
 663         struct xa_node *node = xas->xa_node;
 664         int shift;
 665         unsigned int order = xas->xa_shift;
 666 
 667 
 668         if (xas_top(node)) {
 669                 entry = xa_head_locked(xa);
 670                 xas->xa_node = NULL;
 671                 if (!entry && xa_zero_busy(xa))
 672                         entry = XA_ZERO_ENTRY;
 673                 shift = xas_expand(xas, entry);
 674                 if (shift < 0) 
 675                         return NULL;
 676                 if (!shift && !allow_root)
 677                         shift = XA_CHUNK_SHIFT;
 678                 entry = xa_head_locked(xa);
 679                 slot = &xa->xa_head;
 680         } else if (xas_error(xas)) {
 681                 return NULL;
 682         } else if (node) {
 683                 unsigned int offset = xas->xa_offset;
 684         
 685                 shift = node->shift;
 686                 entry = xa_entry_locked(xa, node, offset);
 687                 slot = &node->slots[offset];
 688         } else {
 689                 shift = 0;
 690                 entry = xa_head_locked(xa); 
 691                 slot = &xa->xa_head;
 692         } 
 693                 
 694         while (shift > order) {
 695                 shift -= XA_CHUNK_SHIFT;
 696                 if (!entry) {
 697                         node = xas_alloc(xas, shift);
 698                         if (!node)
 699                                 break;
 700                         if (xa_track_free(xa))
 701                                 node_mark_all(node, XA_FREE_MARK);
     ...




 364 static void *xas_alloc(struct xa_state *xas, unsigned int shift)
 365 {
 366         struct xa_node *parent = xas->xa_node;
 367         struct xa_node *node = xas->xa_alloc;
 368 
 369         if (xas_invalid(xas))
 370                 return NULL;
 371 
 372         if (node) {
 373                 xas->xa_alloc = NULL;
 374         } else {
 375                 gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;
 376 
 377                 if (xas->xa->xa_flags & XA_FLAGS_ACCOUNT)
 378                         gfp |= __GFP_ACCOUNT;
 379 
 380                 node = kmem_cache_alloc_lru(radix_tree_node_cachep, xas->xa_lru, gfp); 
 381                 if (!node) {
 382                         xas_set_err(xas, -ENOMEM);
 383                         return NULL;
 384                 }
 385         }               
 386                         
 387         if (parent) {
