CN103984599B - Method for improving utilization rate of large pages of operating system

Method for improving utilization rate of large pages of operating system

Info

Publication number
CN103984599B
CN103984599B CN201410146873.3A
Authority
CN
China
Prior art keywords
heap
heap top
top position
variable
page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410146873.3A
Other languages
Chinese (zh)
Other versions
CN103984599A (en)
Inventor
汪小林
罗韬威
罗英伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201410146873.3A priority Critical patent/CN103984599B/en
Publication of CN103984599A publication Critical patent/CN103984599A/en
Application granted granted Critical
Publication of CN103984599B publication Critical patent/CN103984599B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method for improving the utilization rate of large pages of an operating system. The method comprises the steps that: (1) the system adds a variable a to the virtual address space data structure of each process, recording the heap-top position of the process's allocated virtual addresses; (2) at process startup, the system initializes a to 0; when a process calls the heap-top setting function to request memory, a heap-top parameter b is passed to the system; (3) from the process's previously requested heap-top position and the currently requested heap-top position b, the system computes the value c obtained by rounding the heap-top position up to large-page alignment; (4) the system assigns the current value of a to the variable recording the highest heap address of the process's allocated memory and compares it with c; if it is smaller than c, the process's heap space is grown by the difference between them; if it is greater than c, memory is released; if it is equal to c, no memory call is made. The method fully improves the utilization rate of large pages, thereby improving program performance.

Description

A method for improving operating-system huge-page utilization
Technical field
The present invention relates to a method for improving operating-system huge-page utilization, and belongs to the technical field of operating-system memory management.
Technical background
Paging is a common feature of modern CPUs. It establishes a mapping from a program's virtual memory space to the machine's physical memory, and memory is managed in units of pages. Many CPU architectures, including Intel x86, implement a huge-page mechanism supporting several physical-page granularities such as 4K, 2M, and 4M.
For simplicity, current operating systems (such as Linux) commonly use 4K pages, so each mapping covers a virtual address range of only 4K. To access a 2M segment of memory, the operating system must handle 512 page faults, each of which allocates 4K of memory.
Taking the Linux operating system as an example, Linux has supported transparent huge pages since 2.6.38. The operating system establishes virtual-to-physical mappings of size 2M (4M on 32-bit operating systems), called huge-page mappings (huge pages for short). Using huge pages reduces TLB misses and the CPU's computation and cache overhead during address translation. The mechanism includes several functions, such as physical-page defragmentation, which ensures that enough huge-page-aligned physical memory is available for use. When a page fault occurs, the kernel judges whether conditions permit establishing a huge-page mapping. A set of huge-page memory operations is also provided, supplying basic operations such as copy-on-write.
The huge-page module mainly optimizes a program's private anonymous memory (the overwhelming majority of the memory a program uses is private anonymous; huge pages are less suitable in other situations: backing shared memory with huge pages increases copy-on-write overhead, and backing file-mapped memory with huge pages burdens the page cache). The huge-page optimization is not mandatory; the logic is "if the conditions are met, then use a huge page". Linux measures the module's usage with the "huge page utilization rate": within a process's private anonymous memory, the amount allocated in huge-page form divided by the program's total private anonymous memory usage.
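The ratio defined above can be written out as a minimal sketch; the function name and byte-count inputs are illustrative assumptions, not a kernel API:

```c
/* The "huge page utilization rate" defined above: huge-page-backed private
 * anonymous memory divided by total private anonymous memory. */
double hugepage_utilization(unsigned long hugepage_anon_bytes,
                            unsigned long total_anon_bytes)
{
    if (total_anon_bytes == 0)
        return 0.0;                  /* no anonymous memory in use */
    return (double)hugepage_anon_bytes / (double)total_anon_bytes;
}
```

For example, a process with 4M of its 8M private anonymous memory backed by huge pages has a utilization rate of 0.5.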
The operating system's allocation of memory is driven by page faults. Each process has its own address space (corresponding to the mm_struct structure in Linux); an address space maintains the process's virtual addresses and the mappings from virtual to physical addresses. When a process has just been created or requests memory, it calls a function to request some virtual memory. The operating system does not allocate physical memory immediately; instead it establishes a virtual memory area (hereafter vma) indicating that this range of virtual addresses is in use. Each mm_struct contains several vmas, and each vma must belong to exactly one mm_struct. When the process actually reads or writes this virtual range, a page fault necessarily occurs, because no memory has been allocated. The operating system uses the faulting address to locate the corresponding vma through the mm_struct, and determines how to handle the page fault from the cause of the fault and the vma's attributes. For example, if the accessed virtual address does not lie inside any vma, or the accessed region is not readable, then an illegal address has been accessed: a segmentation fault is returned and the process is killed. If a write accesses a vma marked writable but the page-table entry is marked read-only, then this is a copy-on-write page: the operating system copies the page and marks the copy writable. If the accessed position has no virtual-to-physical mapping established within the vma, memory has not yet been allocated; the kernel then allocates physical memory and establishes the mapping according to the vma's type.
When establishing a memory mapping, the kernel can decide whether to use a huge page according to the vma. In general, if the 2M-aligned 2M span of virtual addresses containing the faulting address is entirely contained in one vma, and no mapping has yet been established anywhere inside that 2M span (the second-level page table is empty), then the Linux kernel allocates a huge page for that span. If the upper layer did not consider huge-page alignment when requesting memory, the requested vma is often not 2M-aligned, and huge pages cannot be used.
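The containment-and-alignment condition above can be sketched in a few lines; this is a simplification for illustration (the real kernel additionally checks that no 4K mappings exist in the span, which is omitted here):

```c
#include <stdbool.h>

#define HPAGE_SIZE (2UL << 20)          /* 2M huge page */
#define HPAGE_MASK (~(HPAGE_SIZE - 1))

/* A huge page can back the faulting address only if the enclosing
 * 2M-aligned span lies entirely inside the vma [vm_start, vm_end). */
bool hugepage_fits(unsigned long addr,
                   unsigned long vm_start, unsigned long vm_end)
{
    unsigned long haddr = addr & HPAGE_MASK;   /* round down to 2M */
    return haddr >= vm_start && haddr + HPAGE_SIZE <= vm_end;
}
```

A vma starting even one small page past the 2M boundary fails the check, which is exactly why unaligned upper-layer requests defeat huge-page allocation.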
Due to the inherent limitation of the huge-page paging mechanism, all physical and virtual addresses must be aligned. If a 64-bit operating system maps with 2M pages, both ends of the virtual address range must be 2M-aligned, and both ends of the physical range must be 2M-aligned as well. On a 32-bit operating system, huge pages can be used only when physical and virtual addresses are 4M-aligned. Physical-page management is maintained by the operating system itself, and the transparent-huge-page implementation already includes defragmentation, guaranteeing that sufficiently large aligned physical pages are available for establishing huge-page mappings. Virtual addresses, however, are usually requested by the user, who cannot be expected to know whether the lower layer supports transparent huge pages; hence requested virtual addresses are frequently not huge-page-aligned. The existing Linux transparent huge pages have exactly this problem: in many circumstances an application's virtual-address request does not satisfy the huge-page alignment requirement, so the Linux kernel cannot allocate huge pages and falls back to small pages. As a result, in many cases the Linux kernel allocates no huge pages to a program, and the huge page utilization rate is low.
Content of the invention
In view of the technical problems in the prior art, the object of the present invention is to provide a method for improving operating-system huge-page utilization, which can effectively raise huge-page utilization and thereby improve program performance.
The technical scheme of the present invention is:
A method for improving operating-system huge-page utilization, whose steps are:
1) the system adds a variable a to the virtual address space data structure of each process, recording the heap-top position of the process's already-allocated virtual addresses, and modifies the heap-start setup function so that the heap start address it returns is huge-page-aligned;
2) at process startup, the system initializes variable a to 0; when the process calls the heap-top setting function to request memory, it passes a heap-top parameter b to the system, b being the new heap-top position requested;
3) from the process's previously requested heap-top position and the currently requested heap-top position b, the system computes c, the value obtained by rounding the heap-top position satisfying the process's current memory demand up to huge-page alignment;
4) the system assigns the current value of the process's variable a to the variable recording the highest heap address of the process's allocated memory, and compares it with c: if it is less than c, the system grows the process's heap space by the difference between the two; if it is greater than c, it releases memory by the difference, shrinking the process's heap space; if they are equal, no memory call is made.
Further, if the process's previously requested heap-top position is greater than the currently requested heap-top position b, the smaller of the previously requested heap-top position and c is chosen; the system checks whether the virtual addresses between b and that smaller value have been assigned physical memory, and if so, zeroes that physical memory.
Further, if the value of variable a is 0, i.e. the system has not yet allocated any memory to the process's heap, the system takes the process's previously requested heap-top position to be the start address of the process's heap.
Further, the heap start position returned by the heap-start setup function is aligned up to a huge-page boundary and then decremented by a set value.
Further, the variable a is a long integer address variable.
The core idea by which the present invention solves the alignment problem is: when the operating system receives the system call that requests memory, the logic handling the request is modified so that, while still satisfying the user's request, a portion of extra memory is additionally allocated so as to build aligned vmas. When allocating extra memory, consideration must be given to using it effectively and reducing waste; for unavoidable waste, a trade-off between system performance and memory cost must be made to select the better scheme. The invention requires modifying operating-system kernel functions so that even when the upper layer's memory request does not satisfy the alignment requirement, the kernel can create huge-page-aligned vmas, satisfy the upper-layer request, and realize huge pages. In the process, the modifications made must remain compatible with existing mechanisms, and the extra overhead they may cause must be considered.
Fig. 1 is a schematic diagram of how a process uses memory under the Linux operating system. Arrows denote calls; text denotes the parameters given when calling. A C program mainly uses memory in these places: the code segment and data segment, the global-variable segments, the heap, and the stack. At process startup, the code segment, data segment (initialized global variables), bss segment (where uninitialized global variables are stored), and stack have already been established. Each has a function responsible for setting it up; the parts used are marked in the figure.
If memory must be requested dynamically while the program runs, two methods are available.
One method is to call the sys_brk function. Each process has a heap space for quickly requesting new memory. Inside the operating system the heap is a vma, managed by a dedicated set of functions; the vma's low address is called the heap bottom and its high address the heap top. This vma's memory is used in private anonymous fashion, and the user may access the region freely. The user has only one way to operate on the heap: setting the heap top (the position of the heap bottom is initialized when the process is created and cannot change while the process runs). To request K bytes of memory, simply set the new heap top to the original heap top + K. Conversely, to release memory, only the memory at the heap top can be released: the new heap top being set must be smaller than the original, and the operating system automatically releases the memory above the new heap top. The function the operating system provides for setting the heap top is sys_brk.
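The heap contract just described (a fixed bottom, one movable top) can be modeled in a few lines of user-space C. This is an illustration of the brk semantics only, not the kernel's sys_brk:

```c
/* One region with a fixed bottom and a single movable top. */
struct heap_model {
    unsigned long bottom;   /* fixed when the process is created */
    unsigned long top;      /* raised to request memory, lowered to release */
};

/* Set a new heap top; a request that would move below the heap bottom is
 * refused and leaves the heap unchanged.  Returns the resulting top. */
unsigned long model_brk(struct heap_model *h, unsigned long new_top)
{
    if (new_top < h->bottom)
        return h->top;
    h->top = new_top;
    return h->top;
}

/* Demo: request 0x2000 bytes, then release the top 0x1000 of them. */
unsigned long model_brk_demo(void)
{
    struct heap_model h = { 0x10000, 0x10000 };
    model_brk(&h, h.top + 0x2000);   /* apply for memory: old top + K */
    model_brk(&h, h.top - 0x1000);   /* release only from the top */
    return h.top;
}
```

The single-degree-of-freedom design is what the invention later exploits: since only the high address of the heap vma ever moves, alignment need only be maintained on that one end.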
All the private anonymous memory a program uses is allocated through these two functions. This patent modifies the memory requested through the brk function. The aim is, through the modification, to make it produce huge-page-aligned vmas, so that when page faults occur the operating system can allocate huge pages within this vma, thereby raising huge-page utilization. The difficulty of the problem is that upper-layer programs such as malloc do not consider huge-page alignment when requesting memory; their parameters are not necessarily huge-page-aligned. We must make modifications at the bottom layer that, on the one hand, satisfy the upper layer's memory requests and, on the other, maintain vmas that satisfy the huge-page alignment constraint.
For the operating system, what sys_brk modifies is one specific vma; as long as we ensure this vma is always huge-page-aligned, we ensure that all memory requested through sys_brk can use huge pages. When sys_brk changes the vma, the vma's low address is never changed; only the vma's high address moves back and forth. We design the algorithm for this characteristic. The heap vma's low address is set to huge-page alignment when the heap is initialized, guaranteeing that one side is aligned. On the other side, the heap-top pointer the user sets is not necessarily huge-page-aligned. The method used is to extend the vma's high address upward to huge-page alignment, while recording the position of the heap top the user actually uses. Huge-page alignment of the high address is realized by allocating memory in advance. At the same time, the position the user actually uses is recorded: what the user sees is their own actually-used position, without knowing that more memory may have been allocated. When the user later requests memory, the request can start from the previous heap-top position; if part of the memory has already been allocated, it can be used directly, causing no waste. This mechanism is transparent to the upper layer and can be realized without modifying upper-layer applications.
In summary, by controlling the heap start address and the length of each heap extension, the vma corresponding to the heap is guaranteed to use huge pages throughout.
Compared with the prior art, the positive effects of the present invention are:
The invention improves computer-system performance at low cost. The cost of the optimization is mainly the extra memory spent when expanding virtual addresses for huge pages. It can effectively improve program performance. The restrictions of the original Linux module leave huge-page utilization low; in some cases, huge pages are not used at all. With this optimization, huge-page utilization is fully improved, and program performance is improved in turn. The optimization targets kernel memory management, so the range of programs it affects is very wide: almost all programs can benefit from it. The invention's advantages are demonstrated by the experiments described below.
Using SPEC CPU2006 as the test program set, the Linux huge-page module and the optimization were tested; the results are shown in Table 1:
Table 1. Test result comparison
In Table 1, perlbench through xalancbmk are 12 test programs of SPEC2006, and on the right are the corresponding running times. We estimate huge-page coverage by detecting how page faults are handled in the Linux kernel. Native is the run with Linux's built-in Transparent Hugepage Support enabled; optimized is the effect of only the modified Linux kernel; as a reference there is also None, the run data with Linux's Transparent Hugepage Support disabled.
The data show that the optimization further improves program performance over the original baseline. For the two samples bzip2 and gcc, the original mechanism can hardly improve program performance, but after adding the optimization, performance improves. For the three examples omnetpp, astar, and xalancbmk, the original huge-page module already gives some improvement; after adding this part of the optimization, the improvement is larger, exceeding the original margin. For the other examples, although the huge-page alignment optimization does not improve performance, it does not degrade performance either, so we may conclude that the huge-page alignment optimization brings no extra performance cost.
Page faults are an important indicator affecting program performance. Handling a page fault incurs extra system overhead, and using huge pages can effectively reduce page faults. Here the number of page faults generated by each individual test program was measured.
In Fig. 2 the vertical axis is the number of page faults; because different benchmarks have different memory-access characteristics, there are large differences, so the vertical axis grows exponentially. The horizontal axis lists the statistics for each benchmark. Three counts were taken: the number of page faults without the Linux kernel's Transparent Huge Page module; with the module; and after the optimization. The data show that after the optimization the number of page faults drops markedly, generally to within 10,000, effectively reducing the number of page faults a program generates.
The above analysis shows that the optimization of the invention works well: it can effectively reduce the number of page faults, raise huge-page utilization, and in turn improve program performance.
Description of the drawings
Fig. 1 is a schematic diagram of memory-request function calls.
Fig. 2 compares the number of page faults before and after the huge-page optimization.
Specific embodiment
The following describes how the huge-page optimization is realized on the Linux operating system. The method is based on Linux kernel version 3.6.3; the modification to glibc is based on glibc-2.17; it applies to machines of the 64-bit x86 architecture. This method is one concrete instance of realizing the huge-page optimization.
Modify the data structure mm_struct that records a process's virtual address space, adding the member variable long allocated_brk to the structure, which records the heap-top position of the process's already-allocated virtual addresses. When the process's virtual address space is initialized, the value is set to 0.
In Linux, the file include/linux/mm_types.h defines the memory-related data structures, including mm_struct. The file kernel/fork.c implements the functions for creating a new process in Linux; there, the mm_struct structure is initialized in the mm_init function. Adding the statement mm->allocated_brk = 0 in this function initializes the newly added heap-top location variable to 0.
Make the start address of the vma corresponding to the heap huge-page-aligned: find the function that initializes the start of the brk segment. In general, different architectures have different initialization functions. For the x86 architecture, the file implementing architecture-specific functions, arch/x86/kernel/process.c, implements the function arch_randomize_brk, which initializes the brk segment. In that function, rounding the return value obtained from randomize_range up to huge-page alignment makes the start address of the heap's vma huge-page-aligned.
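The heap-start computation can be sketched as plain arithmetic; the align-up-then-subtract-16K form follows the embodiment's later description of the modified arch_randomize_brk, and the input address here is an illustrative assumption, not a real randomized base:

```c
#define HPAGE_SIZE (2UL << 20)          /* 2M huge page */
#define HPAGE_MASK (~(HPAGE_SIZE - 1))
#define HEAP_GAP   (16UL << 10)         /* the 16K deduction */

/* Round the randomized brk base up to the 2M boundary, then subtract 16K
 * so that small heaps stay on 4K pages below the aligned boundary. */
unsigned long aligned_heap_base(unsigned long randomized)
{
    unsigned long up = (randomized + HPAGE_SIZE - 1) & HPAGE_MASK;
    return up - HEAP_GAP;
}
```

By construction, the returned base plus 16K always lands exactly on a 2M boundary.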
Modify the execution logic of the sys_brk function: the memory-management operations are defined in the file mm/mmap.c, where the function defined by SYSCALL_DEFINE1(brk, unsigned long, brk) implements the function of the sys_brk system call.
This function computes the current heap-top position and the heap-top position the user asks to reach in the call. Having obtained the two heap-top positions, the operating system decides whether to extend the heap upward, or to shrink the heap space and release memory.
In the statements

newbrk = PAGE_ALIGN(brk);
oldbrk = PAGE_ALIGN(mm->brk);

the original heap-top position and the new heap-top position are calculated. After adding the huge-page alignment restriction, the heap-top positions must be recomputed. Add the following statements after the ones above:
newbrk = (brk + SUPERPAGE_MASK) & PMD_MASK;
if (likely(mm->allocated_brk)) oldbrk = mm->allocated_brk;
The new heap-top position is the user's desired heap-top position rounded up to huge-page alignment; the original heap-top position is the heap-top position up to which the last sys_brk call allocated memory.
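The newbrk computation can be restated as a small self-contained function. SUPERPAGE_MASK is taken, as the statements suggest, to be the huge-page size minus one, and PMD_MASK its complement (the x86-64 values with 2M pages); these definitions are assumptions for illustration:

```c
#define HPAGE_SIZE     (2UL << 20)
#define SUPERPAGE_MASK (HPAGE_SIZE - 1)
#define PMD_MASK       (~SUPERPAGE_MASK)

/* Round the user's requested brk up to the next 2M boundary. */
unsigned long hpage_align_up(unsigned long brk)
{
    return (brk + SUPERPAGE_MASK) & PMD_MASK;
}
```

An already-aligned address is returned unchanged; any address even one byte past a boundary is rounded up to the next one.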
If the sys_brk call changes the heap successfully, the function reaches the statements governed by the label set_brk, which set the relevant member variables of mm_struct. Here, add the statement mm->allocated_brk = newbrk to update the heap-top pointer.
Logic must also be added at the label set_brk: judge whether the current sys_brk call is the user shrinking the heap top; if so, check whether the virtual memory the user's shrink should release has been assigned physical memory, and if it has, zero that part of the memory. This step is added because upper-layer applications assume by default that memory newly requested through sys_brk is all initialized to 0. In the unmodified version, since all operations are in units of pages, each do_munmap releases the mappings from all virtual addresses in the region to physical addresses, and if a region newly allocated by do_brk is accessed, a page fault necessarily occurs; when the Linux kernel's memory-allocation routine allocates the physical memory, it zeroes the physical memory to be handed out. In the modified version, because of huge-page alignment, part of the memory is not released, so it must be checked whether that part has been allocated physical memory, and if so, it must be zeroed.
An implementation is provided for reference: the following code is added at the label set_brk.
This code first checks whether the situation is a heap-top shrink. If the condition is met, it enters the if statement; inside, find_vma locates the vma region whose memory should, by the original semantics, be released, and, in units of pages, the follow_page function checks whether each virtual address has physical memory assigned; if memory is assigned, it is zeroed.
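As a rough user-space stand-in for the kernel logic described (walking allocated pages with find_vma/follow_page and zeroing them), here is a sketch under simplified assumptions: a byte array stands in for allocated physical memory, and offsets stand in for virtual addresses:

```c
#include <string.h>

#define SIM_CAP 4096

struct sim_heap {
    unsigned char mem[SIM_CAP];  /* stands in for allocated physical memory */
    unsigned long top;           /* user-visible heap top (as an offset) */
    unsigned long allocated;     /* high-water mark actually allocated */
};

/* Lower the user-visible top; the span up to `allocated` stays mapped, so
 * the retained-but-released range [new_top, min(old_top, allocated)) is
 * zeroed explicitly, as the added set_brk logic requires. */
void sim_shrink(struct sim_heap *h, unsigned long new_top)
{
    if (new_top < h->top) {
        unsigned long end = h->top < h->allocated ? h->top : h->allocated;
        if (new_top < end)
            memset(h->mem + new_top, 0, end - new_top);
        h->top = new_top;
    }
}

/* Demo: dirty the heap, shrink it, and check what each span now reads. */
int sim_shrink_demo(void)
{
    struct sim_heap h = { .top = 256, .allocated = 512 };
    memset(h.mem, 0xAB, 256);    /* user data written into the heap */
    sim_shrink(&h, 64);
    return h.mem[63] == 0xAB && h.mem[64] == 0 && h.mem[255] == 0;
}
```

The invariant being preserved is the one the text states: any byte the user later re-requests from the kept-but-"released" span must read as zero, just as freshly faulted-in memory would.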
After adding the huge-page optimization, the operating flow is as follows:
When a process initializes, the mm->allocated_brk variable is additionally zeroed. Then arch_randomize_brk is called to initialize the heap's start address. The heap-start setup function arch_randomize_brk is modified so that, on return, the original return address is first rounded up to huge-page alignment and then decremented by 16K, and this newly computed address is used as the function's return value. When the program later calls the heap-top setting function sys_brk, the heap-top parameter brk passed in represents the heap-top position the system call sets. Through this adjustment, if the program uses little heap memory, that memory stays below the huge-page-aligned position and small pages can be used, effectively avoiding the waste that would be caused by allocating too much memory to programs that use little. For programs with larger memory usage, the remainder can be allocated huge pages, in general ensuring a good huge page utilization rate.
Let mm->brk denote the heap-top position set by the previous system call.
Let newbrk denote the heap-top position set by this system call, rounded up to huge-page alignment.
The variable oldbrk records the highest heap address of allocated memory. If the value of mm->allocated_brk is 0, the heap has not yet been allocated any memory, and oldbrk is taken to be the heap's start address; otherwise oldbrk = mm->allocated_brk.
Compare the values of oldbrk and newbrk:
If oldbrk < newbrk, the vma must be expanded: call do_brk to grow the vma.
If oldbrk > newbrk, memory must be released: call do_munmap to release the memory between newbrk and oldbrk.
Compare brk with mm->brk: if brk < mm->brk, check the virtual addresses between brk and min(mm->brk, newbrk), and if physical memory is assigned, zero that memory.
On success of the system call, update the data structures: set mm->brk = brk and mm->allocated_brk = newbrk, and the function ends.
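The flow above can be simulated in user space, assuming 2M huge pages; do_brk and do_munmap are modeled only as counters of bytes grown and released, and the zeroing step is omitted here (it is sketched earlier):

```c
#define HPAGE_SIZE (2UL << 20)
#define PMD_MASK   (~(HPAGE_SIZE - 1))

struct mm_sim {
    unsigned long start_brk;      /* heap start, already huge-page-aligned */
    unsigned long brk;            /* user-visible heap top */
    unsigned long allocated_brk;  /* aligned allocated top (0 = none yet) */
    unsigned long grown;          /* total bytes "do_brk" would add */
    unsigned long released;       /* total bytes "do_munmap" would drop */
};

unsigned long sim_sys_brk(struct mm_sim *mm, unsigned long brk)
{
    unsigned long newbrk = (brk + HPAGE_SIZE - 1) & PMD_MASK;
    unsigned long oldbrk = mm->allocated_brk ? mm->allocated_brk
                                             : mm->start_brk;
    if (oldbrk < newbrk)
        mm->grown += newbrk - oldbrk;     /* models do_brk extending the vma */
    else if (oldbrk > newbrk)
        mm->released += oldbrk - newbrk;  /* models do_munmap releasing */
    mm->brk = brk;                        /* user-visible top */
    mm->allocated_brk = newbrk;           /* aligned allocated top */
    return mm->brk;
}

/* Demo: an unaligned request allocates one whole huge page; a later request
 * that still fits inside it allocates nothing new (no waste). */
unsigned long sim_demo(void)
{
    struct mm_sim mm = { .start_brk = 0x200000, .brk = 0x200000 };
    sim_sys_brk(&mm, 0x200000 + 12345);  /* grows allocation to 0x400000 */
    sim_sys_brk(&mm, 0x300000);          /* within the allocated huge page */
    return mm.grown + mm.released;       /* only the first call allocated */
}
```

The demo shows the mechanism's transparency claim in miniature: the second request is satisfied from the memory allocated in advance, so exactly one huge page's worth of growth occurs in total.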
The above steps realize the huge-page optimization of the heap.
For other situations, as long as the architecture supports page mappings of different sizes and the system implements dynamic memory allocation, the method proposed by the present invention can be applied. Any system that allocates memory by setting a heap-top pointer can use the method of the present invention: set up an auxiliary variable recording the position of allocated memory; make the requested heap start position huge-page-aligned; and recompute the aligned address when growing and reclaiming memory. This achieves the optimization and the goal of raising huge-page utilization.
The scheme we propose for optimizing huge-page utilization is chiefly technically characterized by guaranteeing huge-page utilization through generating huge-page-aligned vmas in the operating system. The technical scheme can be used in all operating systems that support paging. Its main method is to restrict the granularity of heap growth and release in the heap part, so that the operating system generates huge-page-aligned vmas, with the aim of improving system performance. The method extends memory selectively, so that while system performance is improved, the waste that extending memory may bring is effectively avoided. Every method that, by reasonably extending the size of allocated memory regions, enables the operating system to meet the requirements for using huge pages and thereby raises huge-page utilization falls within the protection scope of this patent.

Claims (4)

1. A method for improving operating-system huge-page utilization, whose steps are:
1) the system adds a variable a to the virtual address space data structure of each process, recording the heap-top position of the process's already-allocated virtual addresses, and modifies the heap-start setup function so that the heap start address it returns is huge-page-aligned;
2) at process startup, the system initializes variable a to 0; when the process calls the heap-top setting function to request memory, it passes a heap-top parameter b to the system, b being the new heap-top position requested;
3) from the process's previously requested heap-top position and the currently requested heap-top position b, the system computes c, the value obtained by rounding the heap-top position satisfying the process's current memory demand up to huge-page alignment;
4) the system assigns the current value of the process's variable a to the variable recording the highest heap address of the process's allocated memory, and compares it with c: if it is less than c, the system grows the process's heap space by the difference between the two; if it is greater than c, it releases memory by the difference, shrinking the process's heap space; if they are equal, no memory call is made;
wherein, if the process's previously requested heap-top position is greater than the currently requested heap-top position b, the smaller of the previously requested heap-top position and c is chosen; the system checks whether the virtual addresses between b and that smaller value have been assigned physical memory, and if so, zeroes that physical memory.
2. the method for claim 1, it is characterised in that if the value of variable a is 0, i.e. system also not to the heap of the process Distributed internal memory, then system is by the initial address of heap that the heap top position value of the last request of the process is the process.
3. the method for claim 1, it is characterised in that the heap starting arrange the heap original position that function returns be to A setting value is deducted again after upper big page alignment.
4. The method of claim 1, 2 or 3, wherein the variable a is a long-integer address variable.
CN201410146873.3A 2014-04-14 2014-04-14 Method for improving utilization rate of large pages of operating system Active CN103984599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410146873.3A CN103984599B (en) 2014-04-14 2014-04-14 Method for improving utilization rate of large pages of operating system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410146873.3A CN103984599B (en) 2014-04-14 2014-04-14 Method for improving utilization rate of large pages of operating system

Publications (2)

Publication Number Publication Date
CN103984599A CN103984599A (en) 2014-08-13
CN103984599B true CN103984599B (en) 2017-05-17

Family

ID=51276590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410146873.3A Active CN103984599B (en) 2014-04-14 2014-04-14 Method for improving utilization rate of large pages of operating system

Country Status (1)

Country Link
CN (1) CN103984599B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326144B (en) * 2015-06-24 2019-08-06 龙芯中科技术有限公司 Method for reading data and device based on big page mapping
CN105893269B (en) * 2016-03-31 2018-08-21 武汉虹信技术服务有限责任公司 EMS memory management process under a kind of linux system
CN106843906B (en) * 2017-02-22 2021-02-02 苏州浪潮智能科技有限公司 Method and server for adjusting system page size
CN108664419A (en) * 2018-04-03 2018-10-16 郑州云海信息技术有限公司 A kind of method and its device of determining memory big page number
CN110109761B (en) * 2019-05-11 2021-06-04 广东财经大学 Method and system for managing kernel memory of operating system in user mode
CN112035379B (en) * 2020-09-09 2022-06-14 浙江大华技术股份有限公司 Method and device for using storage space, storage medium and electronic device
CN112596913B (en) * 2020-12-29 2022-08-02 海光信息技术股份有限公司 Method and device for improving performance of transparent large page of memory, user equipment and storage medium
CN112905497B (en) * 2021-02-20 2022-04-22 迈普通信技术股份有限公司 Memory management method and device, electronic equipment and storage medium
CN113687873B (en) * 2021-07-30 2024-02-23 济南浪潮数据技术有限公司 Large page memory configuration method, system and related device in cloud service page table
CN115061954B (en) * 2022-08-18 2022-11-29 统信软件技术有限公司 Missing page interrupt processing method, computing device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8417913B2 (en) * 2003-11-13 2013-04-09 International Business Machines Corporation Superpage coalescing which supports read/write access to a new virtual superpage mapping during copying of physical pages
EP2449469B1 (en) * 2009-06-29 2019-04-03 Hewlett-Packard Enterprise Development LP Hypervisor-based management of local and remote virtual memory pages
CN102446136B (en) * 2010-10-14 2014-09-03 无锡江南计算技术研究所 Self-adaptive large-page allocation method and device
CN103019949B (en) * 2012-12-27 2015-08-19 华为技术有限公司 A kind of distribution method and device writing merging Attribute Memory space
CN103257929B (en) * 2013-04-18 2016-03-16 中国科学院计算技术研究所 A kind of virutal machine memory mapping method and system

Also Published As

Publication number Publication date
CN103984599A (en) 2014-08-13

Similar Documents

Publication Publication Date Title
CN103984599B (en) Method for improving utilization rate of large pages of operating system
CN105893269B (en) Memory management method under a Linux system
US8866831B2 (en) Shared virtual memory between a host and discrete graphics device in a computing system
US8176282B2 (en) Multi-domain management of a cache in a processor system
EP2266040B1 (en) Methods and systems for dynamic cache partitioning for distributed applications operating on multiprocessor architectures
US8661181B2 (en) Memory protection unit in a virtual processing environment
US9183157B2 (en) Method for creating virtual machine, a virtual machine monitor, and a virtual machine system
US9280486B2 (en) Managing memory pages based on free page hints
EP2581828B1 (en) Method for creating virtual machine, virtual machine monitor and virtual machine system
CN103116556B (en) Internal storage static state partition and virtualization method
KR20120106696A (en) Extended page size using aggregated small pages
Haldar et al. Operating systems
US9208088B2 (en) Shared virtual memory management apparatus for providing cache-coherence
DE112005003736T5 (en) Virtual Translation Buffer
US20160259689A1 (en) Managing reuse information in caches
US20160259732A1 (en) Managing reuse information for memory pages
US9471509B2 (en) Managing address-independent page attributes
US10268592B2 (en) System, method and computer-readable medium for dynamically mapping a non-volatile memory store
US10013360B2 (en) Managing reuse information with multiple translation stages
CN112445767A (en) Memory management method and device, electronic equipment and storage medium
US7562204B1 (en) Identifying and relocating relocatable kernel memory allocations in kernel non-relocatable memory
CN112596913A (en) Method and device for improving performance of transparent large page of memory, user equipment and storage medium
KR20220024206A (en) Hardware-Based Memory Compression
Wu et al. DWARM: A wear-aware memory management scheme for in-memory file systems
CN116681578B (en) Memory management method, graphic processing unit, storage medium and terminal equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant