This article looks at the question "Why does allocating pages of order 10 or 11 with __get_free_pages() usually fail?" and the recommended answer; it should be a useful reference for anyone running into the same problem.

Problem Description

My system has plenty of memory (a server with 24GB). In my system, the kernel space is allocated 320MB, plus 120MB for the crash kernel; the rest of the memory is used for other purposes. However, when I use __get_free_pages() to allocate contiguous pages with an order of 10 or 11, the kernel fails to allocate the 2^10 pages. Why?
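For reference, a minimal kernel-module sketch of the kind of allocation being described (the module name and the hard-coded order of 10 are illustrative assumptions, not taken from the original poster's code):

```c
#include <linux/module.h>
#include <linux/gfp.h>

static unsigned long buf;

static int __init bigalloc_init(void)
{
	/* Try to grab 2^10 contiguous pages (4MB with 4KB pages).
	 * __GFP_NOWARN suppresses the "page allocation failure" dump
	 * so the failure can be handled here instead. */
	buf = __get_free_pages(GFP_KERNEL | __GFP_NOWARN, 10);
	if (!buf) {
		pr_err("bigalloc: order-10 allocation failed\n");
		return -ENOMEM;
	}
	pr_info("bigalloc: got 4MB at virtual address 0x%lx\n", buf);
	return 0;
}

static void __exit bigalloc_exit(void)
{
	if (buf)
		free_pages(buf, 10);
}

module_init(bigalloc_init);
module_exit(bigalloc_exit);
MODULE_LICENSE("GPL");
```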

According to makelinux:

Why is that so? Each page in my system is 4KB (4096 bytes), so 2^10 pages = 1024 pages, and the total size is 1024 * 4096 = 4,194,304 bytes ~ 4MB. It's only 4MB of contiguous space, and the kernel is very small: vmlinuz is only 2.1MB and initrd is 15MB. The total memory consumption of the whole kernel is about 300MB, which must be more than enough for it to allocate 4MB of contiguous pages. Even on a normal machine with a 1GB/3GB kernel/user split, the kernel surely won't use up the whole 1GB. How can an allocation of only 4MB of contiguous pages possibly fail? And I think that in kernel space, memory is not scattered across physical memory (as it is with virtual memory mapping), but is linear and contiguous.

I tried loading my kernel module, which starts with the 2^10-page allocation, but it fails and dumps this stack trace:

[    6.037056]  [<ffffffff810041ec>] dump_trace+0x86/0x2de
[    6.037063]  [<ffffffff8122fe83>] dump_stack+0x69/0x6f
[    6.037070]  [<ffffffff8108704e>] warn_alloc_failed+0x13f/0x151
[    6.037076]  [<ffffffff8108786a>] __alloc_pages_nodemask+0x80a/0x871
[    6.037081]  [<ffffffff81087959>] __get_free_pages+0x12/0x50

Recommended Answer

If I remember correctly, __get_free_pages uses buddy allocation, which not only scatters its allocations throughout physical memory, it does so in the worst possible pattern for subsequent attempts to allocate large contiguous blocks. If my calculations are correct, on your system with 24GB of physical RAM, even if no space whatsoever were occupied by anything but buddy allocations, it would take fewer than 8192 order-0 (4KB) allocations to render allocation of a 4MB chunk with __get_free_pages impossible.
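As a rough back-of-the-envelope check of that claim (this calculation is mine, not the answerer's): 24GB of RAM contains 6144 aligned 4MB regions, and a single resident 4KB page inside a region is enough to stop that region from satisfying an order-10 request, so a few thousand scattered single-page allocations can block every possible 4MB block. A small user-space C sketch of the arithmetic:

```c
#include <stdio.h>

int main(void)
{
	const unsigned long long ram   = 24ULL << 30; /* 24GB of physical RAM */
	const unsigned long long page  = 4096;        /* order-0 page, 4KB    */
	const unsigned long long chunk = page << 10;  /* order-10 block, 4MB  */

	/* Number of aligned 4MB regions the buddy allocator could hand out. */
	unsigned long long regions = ram / chunk;

	/* One pinned 4KB page per region is enough to split that region,
	 * so this many scattered pages already make order-10 impossible. */
	printf("aligned 4MB regions in 24GB : %llu\n", regions);
	printf("scattered 4KB pages needed  : %llu (< 8192)\n", regions);
	return 0;
}
```

On a live system, reading /proc/buddyinfo shows how many free blocks of each order remain per zone, which is a quick way to confirm this kind of fragmentation.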

There is a thing called the contiguous memory allocator (CMA), which is supposed to address the genuine need for large physically contiguous allocations by device drivers; as of June 2011 it was not in the official kernel, but that was more than a year ago now. You should look into it.
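For completeness, a hedged sketch of how a device driver typically obtains a large physically contiguous buffer: the DMA API's dma_alloc_coherent(), which on kernels built with CMA support (CONFIG_DMA_CMA) can be satisfied from the CMA region. The device pointer and buffer size here are placeholders:

```c
#include <linux/dma-mapping.h>

/* Sketch: allocate a 4MB physically contiguous, DMA-able buffer.
 * 'dev' is assumed to be the driver's struct device. */
static void *alloc_big_buffer(struct device *dev, dma_addr_t *dma_handle)
{
	size_t size = 4 * 1024 * 1024; /* 4MB */

	/* With CONFIG_DMA_CMA enabled, this can be satisfied from the
	 * CMA region even when the buddy allocator is too fragmented
	 * for a direct order-10 allocation. */
	return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
}

/* Later, release it with:
 *   dma_free_coherent(dev, 4 * 1024 * 1024, cpu_addr, dma_handle);
 */
```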

That wraps up this look at why allocating pages of order 10 or 11 with __get_free_pages() usually fails. We hope the recommended answer is helpful, and thank you for your continued support!
