CN102184142B - Method and apparatus for reducing CPU resource consumption by using huge page mapping - Google Patents


Publication number
CN102184142B
CN102184142B (application CN201110097693.7A)
Authority
CN
China
Prior art keywords
huge page
map table
region
address
Prior art date
Legal status
Active
Application number
CN201110097693.7A
Other languages
Chinese (zh)
Other versions
CN102184142A
Inventor
刘强
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201110097693.7A
Publication of CN102184142A
Priority to PCT/CN2012/072481
Application granted
Publication of CN102184142B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10: Address translation
    • G06F 12/1009: Address translation using page tables, e.g. page table structures
    • G06F 12/1027: Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/65: Details of virtual memory and virtual address translation
    • G06F 2212/652: Page size control

Abstract

The invention discloses a method and apparatus for reducing CPU resource consumption by using huge page mapping. The method comprises: generating, for each system process, a huge page map table that records mappings from virtual addresses to physical addresses; when a system process accesses a virtual address and enters the page-fault flow, searching the huge page map table, obtaining the mapping from the virtual address to a physical address, and loading it into a translation lookaside buffer entry. The apparatus comprises a huge page map table generation unit and a huge page mapping execution unit. By mapping a chosen section of memory with huge pages, the method and apparatus greatly reduce the number of TLB entries required, so that the TLB carried in the memory management unit of a general-purpose CPU can map a very large memory space; after a short warm-up period, a service process no longer triggers TLB MISS exceptions during operation, which greatly improves performance.

Description

Method and apparatus for reducing CPU resource consumption by using huge page mapping
Technical field
The present invention relates to the field of computer technology, and in particular to a method and apparatus for reducing CPU resource consumption by using huge page mapping.
Background art
At present, the Linux operating system is widely used in embedded fields such as data communication, medical instruments, and industrial control. As demand keeps growing, ever more powerful processors are applied in these fields, and multi-core processors in particular are used more and more widely, playing an increasing role in many industries.
However, some applications, such as data communication and media processing, place high demands on performance. While Linux provides powerful memory management, it also brings a disadvantage that is hard to avoid: paged memory management causes TLB (Translation Lookaside Buffer) switching, which degrades performance. The degradation is especially severe on RISC (Reduced Instruction Set Computing) processors that load page table entries through exception handling. Many embedded CPUs with RISC architectures keep the number of TLB entries small for reasons of hardware design complexity and cost, and all TLB page table entries are loaded through exception handling. This is the key problem: every page table switch and load is completed in an exception handler of the Linux kernel, and exception (interrupt) handling consumes a large amount of CPU resources. For example, suppose a multi-core processor implements communication service processing and a single physical core must reach a processing rate of 200 Kpps, that is, 200,000 packets per second. If every packet-processing flow triggers a TLB MISS exception, there are 200,000 exception-handling events per second, and a large amount of CPU resources is consumed in saving and restoring context for exception handling.
At present, the Linux kernel provides a huge page file system to address this problem, but it is complex to implement and inflexible to use, with many restrictions. First, the existing huge page feature is implemented as a file system, which is complicated to realize and to operate. Second, the huge page size it uses is fixed and cannot be reconfigured flexibly. Third, it is inconvenient to use: a file must first be created in the huge page file system and then mapped before the huge pages can be used, so the benefit of huge pages is obtained only after many steps and with many accompanying restrictions.
Summary of the invention
The present invention provides a method and apparatus for reducing CPU resource consumption by using huge page mapping, so as to solve the prior-art problem that TLB MISS exceptions are easily triggered and thereby consume a large amount of CPU resources.
To solve the above problem, the present invention adopts the following scheme.
In one aspect, the present invention provides a method for reducing CPU resource consumption by using huge page mapping, comprising:
generating, for each system process, a huge page map table that records mappings from virtual addresses to physical addresses;
when a system process accesses a virtual address and enters the page-fault flow, searching the huge page map table, obtaining the mapping from the virtual address to a physical address, and loading it into a translation lookaside buffer entry.
Each entry of the huge page map table comprises: a physical address, a virtual address, page attributes, and the huge page size.
Further, generating, for each system process, a huge page map table that records mappings from virtual addresses to physical addresses comprises:
creating a huge page map table for each system process, and configuring the number of entries of the table and the huge page size according to user requirements;
for each system process, requesting an unused virtual address region that matches the legal physical address region to be mapped;
according to the correspondence between the physical region and the virtual region, obtaining the mappings from virtual addresses to physical addresses and generating the entries of the huge page map table.
Preferably, obtaining the mappings from virtual addresses to physical addresses according to the correspondence between the physical region and the virtual region and generating the entries of the huge page map table comprises:
calculating the number of map table entries required to map the virtual region to the physical region, and checking whether the huge page map table has enough free entries; if so, obtaining the mappings and generating the entries as described; otherwise, returning a mapping-failure message.
A legal physical address region means that the start address and the length of the physical region both meet the alignment requirement of the configured huge page size.
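As a concrete illustration of this alignment rule, the following userspace sketch checks a candidate physical region against a configured huge page size. The function name and signature are illustrative assumptions, not code from the patent:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A physical region is "legal" for huge page mapping when both its
 * start address and its length are integer multiples of the
 * configured huge page size. */
bool is_legal_phys_region(uint64_t start, uint64_t len,
                          uint64_t hugepage_size)
{
    if (hugepage_size == 0 || len == 0)
        return false;
    return (start % hugepage_size == 0) && (len % hugepage_size == 0);
}
```

For example, with a 16 MiB huge page size, a region starting at 16 MiB with a length of 32 MiB is legal, while a region whose start or length strays from a 16 MiB multiple is rejected.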
Further, the method for the invention also comprises:
According to instruction, the list item of the region, Virtual Space that release is specified and the huge page map table that this region, Virtual Space takies, completes demapping.
In another aspect, the present invention provides an apparatus for reducing CPU resource consumption by using huge page mapping, comprising:
a huge page map table generation unit, configured to generate, for each system process, a huge page map table that records mappings from virtual addresses to physical addresses;
a huge page mapping execution unit, configured to, when a system process accesses a virtual address and enters the page-fault flow, search the huge page map table, obtain the mapping from the virtual address to a physical address, and load it into a translation lookaside buffer entry.
Each entry of the huge page map table generated by the huge page map table generation unit comprises: a physical address, a virtual address, page attributes, and the huge page size.
Further, the huge page map table generation unit specifically comprises:
a table creation subunit, configured to create a huge page map table for each system process and to configure the number of entries of the table and the huge page size according to user requirements;
a virtual memory request subunit, configured to request, for each system process, an unused virtual address region that matches the legal physical address region to be mapped;
a mapping subunit, configured to obtain, according to the correspondence between the physical region and the virtual region, the mappings from virtual addresses to physical addresses and to generate the entries of the huge page map table.
In the virtual memory request subunit, a legal physical address region means that the start address and the length of the physical region both meet the alignment requirement of the configured huge page size.
Compared with the prior art, the beneficial effects of the present invention are as follows.
By mapping a chosen section of memory with huge pages, the method and apparatus of the present invention greatly reduce the number of TLB entries required, so that the TLB carried in the memory management unit of a general-purpose CPU can map a very large memory space. After a short warm-up period, a service process no longer triggers TLB MISS exceptions during operation, which greatly improves performance. In this way, the capabilities provided by the operating system can still be used while system performance is improved, achieving high-throughput communication processing capability.
In addition, the huge page mapping function of the present invention integrates seamlessly with the operating system's memory mapping and demapping flows and is simple and convenient to use; the huge page size can be specified at mapping time, giving very strong extensibility.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the method for reducing CPU resource consumption by using huge page mapping provided by the present invention;
Fig. 2 is a flowchart of huge page mapping in an embodiment of the present invention;
Fig. 3 is a structural diagram of the huge page map table in an embodiment of the present invention;
Fig. 4 is a flowchart of the implementation of the huge page mapping function in an embodiment of the present invention;
Fig. 5 is a flowchart of demapping in an embodiment of the present invention;
Fig. 6 is a structural diagram of the apparatus for reducing CPU resource consumption by using huge page mapping provided by the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To reduce the TLB processing time in the Linux operating system conveniently, without affecting the capabilities the operating system provides, the present invention provides a method and apparatus for reducing CPU resource consumption by using huge page mapping.
As shown in Fig. 1, the method for reducing CPU resource consumption by using huge page mapping provided by the present invention specifically comprises:
Step S101: generating, for each system process, a huge page map table that records mappings from virtual addresses to physical addresses; each entry of the table comprises a physical address, a virtual address, page attributes, and the huge page size.
Step S102: when a system process accesses a virtual address and enters the page-fault flow, searching the huge page map table, obtaining the mapping from the virtual address to a physical address, and loading it into a translation lookaside buffer entry.
In this step, entering the page-fault flow when the system process accesses a virtual address means: when the system process accesses the virtual address, if the mapping from that virtual address to a physical address is found neither in the TLB entries of the CPU core nor in the page table of the current process, the page-fault flow is entered.
Further, in this step, when the mapping from the virtual address to a physical address is not found in the huge page map table either, the standard page-fault handling flow is executed.
By mapping a chosen section of memory with huge pages, the method of the present invention greatly reduces the number of TLB entries required, so that the TLB carried in the memory management unit of a general-purpose CPU can map a very large memory space; after a short warm-up period, a service process no longer triggers TLB MISS exceptions during operation, which greatly improves performance.
To explain the specific implementation of the present invention more clearly, a preferred embodiment is given below with reference to Fig. 2 to Fig. 5, and further technical details are provided in the description of the embodiment.
The core idea of the method in the embodiment of the present invention is as follows. First, a huge page map table is created and huge page mapping is performed, yielding, for the system process, huge page map table entries from virtual addresses to physical addresses. Then, in the page-fault flow, the huge page map table is searched first; if a matching mapping is found, it is loaded into a TLB entry and the system process returns to user space to continue executing; otherwise, the standard page-fault handling flow is executed.
The two aspects, namely the generation of the huge page map table and the implementation of the huge page mapping function, are described in detail below.
1. Generating the huge page map table, which specifically comprises:
Step 1: before huge page mapping is performed, checking whether a huge page map table has already been created for the current system process; if not, creating a huge page map table for the current process and then performing step 2; otherwise, performing step 2 directly.
The number of entries and the huge page size of the created table can both be configured flexibly according to user requirements, and can be changed through the kernel menuconfig to suit the needs of different applications.
The created huge page map table is recorded in the process descriptor (struct mm_struct).
Step 2: performing huge page mapping, i.e., generating, for each system process, the mappings from virtual addresses to physical addresses and storing them as entries of the huge page map table.
As shown in Fig. 2, the huge page mapping process in this step is specifically as follows.
Step S201: performing legality verification on the physical address region to be mapped.
Legality verification checks whether the start address and the length of the physical region to be mapped meet the alignment requirement of the configured huge page size; that is, the start address and the length of the physical region must both be integer multiples of the configured huge page size.
Step S202: calling the kernel interface function hugetlb_get_unmapped_area to find, for the current system process, an unused virtual address region of the same size as the legal physical region.
The virtual address region may be a specified region or a region found by random search; the base address of the obtained virtual region should be an integer multiple of the huge page size.
Step S203: based on the virtual region found, calling the kernel function do_mmap to create a VMA virtual address region for the current system process (that is, converting the virtual region found into a legal virtual region of the process).
It should be noted that after the VMA virtual region is created, the subsequent huge page mapping flow does not actually use it; it only serves to protect the huge page mapping space, preventing this address range from being occupied by other mappings (e.g., IO/MEM mappings, anonymous mappings, etc.).
After the physical region to be mapped and the virtual region have been obtained, preferably, the number of entries required to map the virtual address space to the physical address space is first calculated; then it is checked whether the huge page map table has enough unoccupied entries for the mapping. If so, step S204 is performed; otherwise, a mapping-failure message is returned. This pre-calculation and check may also be omitted in a specific implementation; in that case an error occurs when the mapping entries are generated, and the mapping-failure message is then returned.
Step S204: according to the correspondence between the physical region and the virtual region, obtaining the mappings from virtual addresses to physical addresses and generating the corresponding entries in the huge page map table.
It should be noted that after the huge page map table has been created, huge page mapping can be performed multiple times for each system process, and the mappings obtained by each huge page mapping are stored as entries of the huge page map table.
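The pre-calculation, capacity check, and entry generation described above can be sketched in userspace C as follows. The structure layout, the fixed table size, and all names are illustrative assumptions, not the patent's actual kernel structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One entry of the huge page map table, with the fields the patent
 * lists: physical address, virtual address, page attributes, huge
 * page size.  Names and layout are illustrative. */
struct hp_entry {
    uint64_t phys;
    uint64_t virt;
    uint32_t attrs;      /* readable/writable/executable/cacheable bits */
    uint64_t page_size;
    int      used;
};

#define HP_TABLE_ENTRIES 64   /* entry count is configurable in the patent */

static struct hp_entry hp_table[HP_TABLE_ENTRIES];

/* Map a physical region to a same-sized virtual region: compute how
 * many table entries the mapping needs (one per huge page), check
 * that enough free entries remain, then generate one entry per page.
 * Returns the number of entries created, or -1 on failure (the
 * "mapping failed" case). */
int hp_map_region(uint64_t phys, uint64_t virt, uint64_t len,
                  uint64_t page_size, uint32_t attrs)
{
    if (page_size == 0 || len == 0 || len % page_size ||
        phys % page_size || virt % page_size)
        return -1;

    uint64_t needed = len / page_size, free_slots = 0;
    for (size_t i = 0; i < HP_TABLE_ENTRIES; i++)
        if (!hp_table[i].used)
            free_slots++;
    if (free_slots < needed)
        return -1;

    uint64_t done = 0;
    for (size_t i = 0; i < HP_TABLE_ENTRIES && done < needed; i++) {
        if (hp_table[i].used)
            continue;
        hp_table[i].phys = phys + done * page_size;
        hp_table[i].virt = virt + done * page_size;
        hp_table[i].attrs = attrs;
        hp_table[i].page_size = page_size;
        hp_table[i].used = 1;
        done++;
    }
    return (int)needed;
}
```

For instance, mapping a 32 MiB region with 16 MiB huge pages consumes two table entries, while an unaligned length or an insufficient number of free entries produces the failure result.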
Fig. 3 shows the structure of the huge page map table obtained in the embodiment of the present invention. Each valid entry reflects the relation between a huge page mapping region in the process address space and physical memory. An entry mainly contains the following information: physical address, virtual address, page attributes (readable/writable/executable/cacheable, etc.), and huge page size.
The huge page map table in the embodiment of the present invention is fully compatible with the Linux memory management requirements. More importantly, the huge page mapping function allows the user to specify the huge page size to map with, provided the current CPU supports that page size; this is a significant difference from other huge page file systems. Being able to choose the page size makes it possible to fit the user's needs as far as possible: large blocks of memory can be mapped with large pages, reducing the number of huge page map entries and improving performance, while small blocks of memory can be mapped with small pages yet still use huge page mapping. This avoids the situation where, in order to use huge pages, memory must be wasted to fit a fixed huge page size.
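To see why choosing a larger page size reduces the number of map entries, as the paragraph above argues, one can count the translation entries needed to cover a region; the sizes below are illustrative examples, not values from the patent:

```c
#include <assert.h>
#include <stdint.h>

/* Number of translation entries needed to cover a region at a given
 * page size; larger pages mean fewer entries competing for the TLB. */
uint64_t entries_needed(uint64_t region_len, uint64_t page_size)
{
    return (region_len + page_size - 1) / page_size; /* round up */
}
```

Covering a 256 MiB region with 4 KiB pages takes 65,536 entries, far beyond the TLB of a typical embedded RISC CPU, while 16 MiB huge pages cover the same region with only 16 entries.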
2. The implementation of the huge page mapping function, as shown in Fig. 4, comprises:
Step S401: a user-space system process accesses a virtual address A.
Step S402: it is determined whether the TLB entries of the currently running CPU core contain a mapping from virtual address A to a physical address; if so, execution simply continues; otherwise, a TLB MISS exception is raised and the flow goes to step S403.
Step S403: in the TLB MISS exception service routine of the Linux kernel, the page table of the current process is searched according to virtual address A to determine whether it contains a mapping from virtual address A to a physical address; if so, step S404 is performed; otherwise, step S405 is performed.
Step S404: the page table entry mapping virtual address A to a physical address is taken out and loaded into a TLB entry, and execution returns to user space and continues.
Step S405: the page-fault flow is entered; the huge page map table is searched to determine whether it contains a mapping from virtual address A to a physical address; if so, step S406 is performed; otherwise, step S407 is performed.
Before the huge page map table is searched, it is preferably checked whether virtual address A is a legal address of the current process, for example by determining whether it belongs to an anonymous mapping or a copy-on-write context; if the address is illegal, an exception signal is returned directly to the process.
Step S406: a TLB replacement algorithm is used to load the huge page map table entry for virtual address A into a TLB entry of the CPU core, and execution then returns to user space and continues.
The TLB replacement algorithm is determined by the CPU; some implementations use pseudo-random replacement, others a least-recently-used (LRU) algorithm.
Step S407: the standard Linux kernel page-fault flow is executed.
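The table search of steps S405 and S406 amounts to a range lookup over the mapped huge pages. The following userspace sketch models that lookup; the structure, the names, and the use of 0 as a "not found" sentinel are illustrative simplifications, not the patent's actual implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal huge page map entry for the lookup step. */
struct hp_map {
    uint64_t virt;       /* start of the mapped virtual huge page */
    uint64_t phys;       /* start of the backing physical huge page */
    uint64_t page_size;  /* huge page size of this entry */
};

/* Given a faulting virtual address, scan the huge page map table; if
 * the address falls inside a mapped huge page, return the translated
 * physical address (which the kernel would load into a TLB entry);
 * otherwise return 0, and the caller falls through to the standard
 * page-fault flow (step S407). */
uint64_t hp_lookup(const struct hp_map *tbl, size_t n, uint64_t addr)
{
    for (size_t i = 0; i < n; i++) {
        if (addr >= tbl[i].virt && addr < tbl[i].virt + tbl[i].page_size)
            return tbl[i].phys + (addr - tbl[i].virt);
    }
    return 0;
}

/* Demo table: one 16 MiB huge page mapping 0x40000000 -> 0x10000000. */
static const struct hp_map demo_table[] = {
    { 0x40000000ULL, 0x10000000ULL, 0x1000000ULL },
};

uint64_t demo_lookup(uint64_t addr)
{
    return hp_lookup(demo_table, 1, addr);
}
```

Note how the page offset is preserved: an address 0x123 bytes into the virtual huge page translates to 0x123 bytes into the physical huge page, while an address just past the page misses and falls back to the standard flow.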
In summary, in the implementation of the huge page mapping function in the embodiment of the present invention, a new processing branch, the huge page branch do_hugetlb_fault, is added to the page-fault flow; in this branch, it is checked whether virtual address A is present in the huge page map table. In this way, the huge page handling flow combines seamlessly with the standard kernel page-fault flow, improving the portability, extensibility, and maintainability of the huge page function, which is a great improvement over the existing huge page file system.
Further, after the huge page mapping function has been implemented, the method in the embodiment of the present invention can also perform a demapping operation as actually needed. Demapping is the inverse of creating a mapping and involves fewer steps: do_munmap is used to demap the virtual huge page region, releasing the virtual region under the current process, and the huge page map table entries used by that virtual region are then released.
As shown in Fig. 5, the specific demapping flow comprises:
Step S501: obtaining, according to an instruction, the virtual region to be demapped.
Step S502: performing a legality check on the virtual region; if it is legal, performing step S503; otherwise, performing step S505.
The legality check determines whether the base address and the length of the virtual region are aligned with the huge page size.
Step S503: releasing the virtual region under the current process.
Step S504: releasing the huge page map table entries occupied by the virtual region.
Step S505: the operation ends.
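The demapping steps above, an alignment check followed by entry release, can be sketched as follows; the entry layout and names are illustrative assumptions, not the patent's actual kernel structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal map table entry for the demapping sketch. */
struct hp_ent { uint64_t virt; uint64_t page_size; int used; };

/* First the legality check (base address and length must align with
 * the huge page size, as in step S502), then release every map table
 * entry whose huge page lies inside the region (step S504).  Returns
 * the number of entries released, or -1 if the region is illegal. */
int hp_unmap(struct hp_ent *tbl, size_t n,
             uint64_t base, uint64_t len, uint64_t page_size)
{
    if (page_size == 0 || base % page_size || len % page_size)
        return -1;
    int freed = 0;
    for (size_t i = 0; i < n; i++) {
        if (tbl[i].used && tbl[i].virt >= base && tbl[i].virt < base + len) {
            tbl[i].used = 0;   /* release the map table entry */
            freed++;
        }
    }
    return freed;
}

/* Demo: a table holding two mapped 16 MiB huge pages, demapped in one
 * 32 MiB region. */
int demo_unmap(void)
{
    struct hp_ent tbl[2] = {
        { 0x40000000ULL, 0x1000000ULL, 1 },
        { 0x41000000ULL, 0x1000000ULL, 1 },
    };
    return hp_unmap(tbl, 2, 0x40000000ULL, 0x2000000ULL, 0x1000000ULL);
}
```

A misaligned base address fails the legality check and takes the early exit of step S505 without touching the table.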
In summary, the method provided by the embodiment of the present invention can bring the following beneficial effects:
(1) Huge page mapping can be used for service memory, greatly reducing the number of TLB entries required, so that the TLB carried in the MMU (Memory Management Unit) of a general-purpose CPU can map a very large memory space. In actual use, the huge page mapping function allows a service process, after a short warm-up period, to stop triggering TLB MISS exceptions during operation; the mappings for all memory that needs to be accessed can be resident in the CPU's MMU/TLB, which greatly improves processing performance in a non-real-time operating system and has great practical value.
(2) The huge page mapping function of the present invention integrates seamlessly with the Linux operating system's memory mapping and demapping flows and is simple and convenient to use; the huge page size can be specified at mapping time, giving very strong extensibility.
Moreover, the huge page mapping of the present invention differs greatly from the huge page file system, mainly in the following respects: 1) on the user interface, the standard Linux mapping flow is adopted, so the user perceives no difference between normal mapping and huge page mapping; 2) the present invention supports specifying the huge page size when mapping and has very strong extensibility, whereas the huge page file system does not; 3) the present invention is simple, flexible, and convenient to use, unlike the huge page file system, which requires many steps before huge page mapped memory can be used.
As shown in Fig. 6, the present invention also provides an apparatus for reducing CPU resource consumption by using huge page mapping, comprising:
a huge page map table generation unit, configured to generate, for each system process, a huge page map table that records mappings from virtual addresses to physical addresses;
a huge page mapping execution unit, configured to, when a system process accesses a virtual address and enters the page-fault flow, search the huge page map table, obtain the mapping from the virtual address to a physical address, and load it into a translation lookaside buffer entry.
Each entry of the huge page map table generated by the huge page map table generation unit comprises: a physical address, a virtual address, page attributes, and the huge page size.
In the apparatus of the present invention, the huge page map table generation unit specifically comprises:
a table creation subunit, configured to create a huge page map table for each system process and to configure the number of entries of the table and the huge page size according to user requirements;
a virtual memory request subunit, configured to request, for each system process, an unused virtual address region that matches the legal physical address region to be mapped;
a mapping subunit, configured to obtain, according to the correspondence between the physical region and the virtual region, the mappings from virtual addresses to physical addresses and to generate the entries of the huge page map table.
In the virtual memory request subunit, a legal physical address region means that the start address and the length of the physical region both meet the alignment requirement of the configured huge page size.
Further, the apparatus of the present invention also comprises a demapping unit, configured to release, according to an instruction, a specified virtual region and the huge page map table entries occupied by that region, thereby completing the demapping operation.
By mapping a chosen section of memory with huge pages, the apparatus of the present invention greatly reduces the number of TLB entries required, so that the TLB carried in the memory management unit of a general-purpose CPU can map a very large memory space; after a short warm-up period, a service process no longer triggers TLB MISS exceptions during operation, which greatly improves performance.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. The present invention is thus also intended to cover these changes and modifications, provided they fall within the scope of the claims of the present invention and their technical equivalents.

Claims (8)

1. utilize huge page to map the method reducing cpu resource consumption, it is characterized in that, comprising:
For each system process creates huge page map table, and configure the list item number of huge page map table and huge page size according to user's request;
For described each system process, for needing the region, untapped Virtual Space of the legal physical space region application coupling mapped;
According to the corresponding relation in described physical space region and region, Virtual Space, obtain, by the mapping relations of virtual address to physical address, generating the list item of described huge page map table;
When system process accesses certain virtual address, if skip leaf flow process, then search described huge page map table, be loaded in translation lookaside buffer list item after obtaining described virtual address to the mapping relations of physical address.
2. the method for claim 1, is characterized in that, the list item of described huge page map table comprises: physical address, virtual address, page attribute and huge page size information.
3. The method as claimed in claim 1, wherein obtaining the virtual-address-to-physical-address mapping relation according to the correspondence between the physical space region and the virtual space region and generating the entries of the huge page map table comprises:
calculating the number of huge page map table entries needed to map the virtual space region to the physical space region, and checking whether the huge page map table has enough free entries; if so, obtaining the virtual-address-to-physical-address mapping relation according to the correspondence between the physical space region and the virtual space region, and generating the entries of the huge page map table; otherwise, returning a mapping failure message.
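The entry-count calculation in claim 3 reduces to a round-up division of the region length by the huge page size. A minimal sketch, with the function name assumed (the patent names no such routine):

```c
#include <stddef.h>
#include <stdint.h>

/* Number of huge page map table entries needed to map a region of
 * `len` bytes with pages of `huge_page_size` bytes; rounds up so a
 * partial trailing page still costs one entry. */
size_t entries_needed(uint64_t len, uint64_t huge_page_size)
{
    return (size_t)((len + huge_page_size - 1) / huge_page_size);
}
```

The caller would compare this count against the table's remaining free entries and return the mapping-failure message when it does not fit.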
4. The method as claimed in claim 1, wherein
the legal physical space region means that the start address of the physical space region and the length of the physical space region both satisfy the alignment requirements of the configured huge page size.
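For power-of-two huge page sizes, the alignment requirement of claim 4 can be checked with a bit mask. This is only a sketch of the check; the function name and the power-of-two assumption are mine, not the patent's:

```c
#include <stdbool.h>
#include <stdint.h>

/* A physical region is "legal" in the sense of claim 4 when both its
 * start address and its length are multiples of the configured huge
 * page size. Assumes huge_page_size is a power of two, so the
 * remainder test can be a mask against (size - 1). */
bool region_is_legal(uint64_t phys_start, uint64_t len,
                     uint64_t huge_page_size)
{
    return (phys_start & (huge_page_size - 1)) == 0
        && (len        & (huge_page_size - 1)) == 0
        && len != 0;
}
```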
5. The method as claimed in claim 1, further comprising:
releasing, according to an instruction, a specified virtual space region and the huge page map table entries occupied by that virtual space region, thereby completing the unmapping.
6. A device for reducing CPU resource consumption using huge page mapping, characterized in that it comprises:
a huge page map table generation unit, configured to generate, for each system process, a huge page map table recording virtual-address-to-physical-address mapping relations;
a huge page mapping execution unit, configured to, when a system process accesses a virtual address and a page-fault flow is entered, search the huge page map table, obtain the virtual-address-to-physical-address mapping relation for that virtual address, and load it into a translation lookaside buffer entry;
wherein the huge page map table generation unit specifically comprises:
a table creation subunit, configured to create a huge page map table for each system process, and to configure the number of entries in the huge page map table and the huge page size according to user requirements;
a virtual memory request subunit, configured to, for each system process, request an unused virtual space region matching a legal physical space region that needs to be mapped;
a mapping subunit, configured to obtain the virtual-address-to-physical-address mapping relation according to the correspondence between the physical space region and the virtual space region, and to generate the entries of the huge page map table.
7. The device as claimed in claim 6, wherein each entry of the huge page map table generated by the huge page map table generation unit comprises: a physical address, a virtual address, page attributes, and huge page size information.
8. The device as claimed in claim 6, wherein:
the legal physical space region means that the start address of the physical space region and the length of the physical space region both satisfy the alignment requirements of the configured huge page size.
CN201110097693.7A 2011-04-19 2011-04-19 Method and apparatus for reducing CPU resource consumption using huge page mapping Active CN102184142B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201110097693.7A CN102184142B (en) 2011-04-19 2011-04-19 Method and apparatus for reducing CPU resource consumption using huge page mapping
PCT/CN2012/072481 WO2012142894A1 (en) 2011-04-19 2012-03-16 Method and apparatus for reducing cpu resource consumption using giant page mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110097693.7A CN102184142B (en) 2011-04-19 2011-04-19 Method and apparatus for reducing CPU resource consumption using huge page mapping

Publications (2)

Publication Number Publication Date
CN102184142A CN102184142A (en) 2011-09-14
CN102184142B true CN102184142B (en) 2015-08-12

Family

ID=44570322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110097693.7A Active CN102184142B (en) 2011-04-19 2011-04-19 Method and apparatus for reducing CPU resource consumption using huge page mapping

Country Status (2)

Country Link
CN (1) CN102184142B (en)
WO (1) WO2012142894A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184142B (en) * 2011-04-19 2015-08-12 中兴通讯股份有限公司 Method and apparatus for reducing CPU resource consumption using huge page mapping
CN102662864B (en) * 2012-03-29 2015-07-08 华为技术有限公司 Processing method, device and system of missing page abnormality
CN102929722A (en) * 2012-10-18 2013-02-13 曙光信息产业(北京)有限公司 Packet reception based on large-page 10-gigabit network card and system thereof
CN103793331B (en) * 2012-10-31 2016-12-21 安凯(广州)微电子技术有限公司 A kind of physical memory management method and device
CN104516826B (en) * 2013-09-30 2017-11-17 华为技术有限公司 The corresponding method and device of a kind of virtual big page and the big page of physics
CN104899159B (en) 2014-03-06 2019-07-23 华为技术有限公司 The mapping treatment method and device of the address cache memory Cache
US20170046274A1 (en) * 2015-08-14 2017-02-16 Qualcomm Incorporated Efficient utilization of memory gaps
CN107766259B (en) * 2016-08-23 2021-08-20 华为技术有限公司 Page table cache access method, page table cache, processor chip and storage unit
CN110321079B (en) * 2019-06-27 2023-04-25 暨南大学 Disk cache deduplication method based on mixed page
CN111666230B (en) * 2020-05-27 2023-08-01 江苏华创微系统有限公司 Method for supporting macro page in set associative TLB
CN111913893A (en) * 2020-06-22 2020-11-10 成都菁蓉联创科技有限公司 Mapping method and device for reserved memory, equipment and storage medium
CN112612623B (en) * 2020-12-25 2022-08-09 苏州浪潮智能科技有限公司 Method and equipment for managing shared memory
CN113360243B (en) * 2021-03-17 2023-07-14 龙芯中科技术股份有限公司 Device processing method, device, electronic device and readable medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246452A (en) * 2007-02-12 2008-08-20 国际商业机器公司 Method and apparatus for fast MMU simulation, and full-system simulator

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7243208B2 (en) * 2003-08-13 2007-07-10 Renesas Technology Corp. Data processor and IP module for data processor
US7552308B2 (en) * 2006-04-04 2009-06-23 International Business Machines Corporation Method and apparatus for temporary mapping of executable program segments
CN102184142B (en) * 2011-04-19 2015-08-12 中兴通讯股份有限公司 Method and apparatus for reducing CPU resource consumption using huge page mapping

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246452A (en) * 2007-02-12 2008-08-20 国际商业机器公司 Method and apparatus for fast MMU simulation, and full-system simulator

Also Published As

Publication number Publication date
CN102184142A (en) 2011-09-14
WO2012142894A1 (en) 2012-10-26

Similar Documents

Publication Publication Date Title
CN102184142B (en) Method and apparatus for reducing CPU resource consumption using huge page mapping
US20220027287A1 (en) System for address mapping and translation protection
US10552337B2 (en) Memory management and device
EP3575970B1 (en) Process-based multi-key total memory encryption
EP2889777B1 (en) Modifying memory permissions in a secure processing environment
US10416890B2 (en) Application execution enclave memory page cache management method and apparatus
US8402248B2 (en) Explicitly regioned memory organization in a network element
CN109074316B (en) Page fault solution
US9703703B2 (en) Control of entry into protected memory views
US10558584B2 (en) Employing intermediary structures for facilitating access to secure memory
US8521919B2 (en) Direct memory access in a computing environment
EP2889778B1 (en) Shared memory in a secure processing environment
KR20120113584A (en) Memory device, computer system having the same
AU2009308007A1 (en) Opportunistic page largification
TWI526832B (en) Methods and systems for reducing the amount of time and computing resources that are required to perform a hardware table walk (hwtw)
US20140040577A1 (en) Automatic Use of Large Pages
CN104781794A (en) In-place change between transient and persistent state for data structures in non-volatile memory
EP2965211A1 (en) Method and apparatus for preventing unauthorized access to contents of a register under certain conditions when performing a hardware table walk (hwtw)
US20130297881A1 (en) Performing zero-copy sends in a networked file system with cryptographic signing
US10372606B2 (en) System and method for integrating overprovisioned memory devices
KR20120070326A (en) A apparatus and a method for virtualizing memory
US20150186240A1 (en) Extensible i/o activity logs
CN115481053A (en) Independently controlled DMA and CPU access to a shared memory region
CN113590509B (en) Page exchange method, storage system and electronic equipment
EP2889757A1 (en) A load instruction for code conversion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant