CN108897618A - Task-aware resource allocation method under a heterogeneous memory architecture - Google Patents

Task-aware resource allocation method under a heterogeneous memory architecture

Info

Publication number
CN108897618A
CN108897618A
Authority
CN
China
Prior art keywords
task
node
memory
page
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810632230.8A
Other languages
Chinese (zh)
Other versions
CN108897618B (en)
Inventor
许胤龙
陈吉强
李永坤
郭帆
刘军明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pingkai Star Beijing Technology Co ltd
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201810632230.8A priority Critical patent/CN108897618B/en
Publication of CN108897618A publication Critical patent/CN108897618A/en
Application granted granted Critical
Publication of CN108897618B publication Critical patent/CN108897618B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration

Abstract

The invention discloses a task-aware resource allocation method under a heterogeneous memory architecture, characterized by the steps of process performance metadata recording, per-node task assignment recording, a task-characteristic-aware scheduling strategy, and a page-aware migration strategy. Because different task types are distinguished, tasks are distributed relatively evenly across the NUMA nodes; compared with the system's default task assignment policy, this relieves CPU cache contention and memory access contention. Because pages with different read/write behavior are also distinguished, an adaptive placement strategy is used under the heterogeneous memory architecture, reducing the number of write operations to NVM and extending NVM lifetime. With the method of the invention, most write operations occur in DRAM, so performance loss is minimized.

Description

Task-aware resource allocation method under a heterogeneous memory architecture
Technical field
The invention belongs to the field of computer memory management, and in particular relates to servers with the widely used non-uniform memory access (NUMA) IA architecture, in which heterogeneous memory is built from emerging non-volatile memory (NVM) and conventional dynamic random access memory (DRAM), and on this basis efficient task resource allocation is achieved through task-characteristic awareness.
Background art
In September 1999, IBM integrated NUMA technology into IBM Unix. NUMA's disruptive design freed multiprocessor systems from the constraints of the traditional oversized shared bus, greatly increasing the number of processors, memory modules, and I/O slots a single operating system can manage. Facing today's big-data scenarios, more and more applications have shifted from traditional compute-intensive to data-intensive workloads, and heterogeneous memory architectures have gradually been proposed to satisfy their larger memory requirements. A future NUMA heterogeneous memory architecture will therefore exhibit a high degree of non-uniformity: complex and varied application types, asymmetric read/write speeds across storage media, and the access non-uniformity inherent to NUMA. Traditional NUMA techniques cannot distinguish the characteristics of different memory media: they cannot differentiate application types to optimize running performance, nor place pages selectively on different storage media to obtain optimal storage performance, so actual system performance falls far short of the theoretical optimum.
Summary of the invention
The purpose of the present invention is to propose a task-aware resource allocation method under a heterogeneous memory architecture. On the one hand, adaptive CPU and memory allocation is used for different types of applications; on the other hand, different page placement strategies are used for the pages accessed by applications with different characteristics. This addresses the shortcomings of existing NUMA management techniques when applied to heterogeneous memory, achieving efficient multi-task allocation and effective use of heterogeneous memory while guaranteeing low software overhead.
The task-aware resource allocation method under a heterogeneous memory architecture of the present invention is characterized by comprising the following steps:
Step 1: process performance metadata recording
For every task process being optimized, hardware performance counters are used to obtain two performance parameters: the process's memory write requests per second, WAPS (Write Accesses per Second), and the process's total memory footprint, MF (Memory Footprint). From these, the task classification metric TC (Task Classification) = WAPS * MF is computed, where WAPS is measured in millions per second and MF in GB. According to the TC value, tasks fall into two classes: when TC < 1, the task is a compute-intensive application; when TC > 1, it is a data-intensive application;
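The step-1 classification can be sketched as follows. This is a minimal illustration: the function name and sampled values are hypothetical, real WAPS/MF figures would come from hardware performance counters, and the boundary case TC = 1 (which the method leaves unspecified) is grouped with the data-intensive class here as an assumption.

```python
def classify_task(waps_millions: float, mf_gb: float) -> str:
    """Task classification metric TC = WAPS * MF.

    waps_millions: memory write requests per second, in millions
    (sampled from hardware performance counters);
    mf_gb: total memory footprint of the process, in GB.
    TC < 1 -> compute-intensive; TC > 1 -> data-intensive.
    """
    tc = waps_millions * mf_gb
    return "compute-intensive" if tc < 1 else "data-intensive"

# Embodiment-style figures: application A1 has TC = 0.0005,
# application B1 has TC = 2.5.
a1 = classify_task(0.001, 0.5)  # compute-intensive
b1 = classify_task(0.5, 5.0)    # data-intensive
```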
Step 2: per-node task assignment recording
According to each process's CPU occupancy, memory allocation, and performance metadata, a task process record table is created for every node in the NUMA architecture, recording the metadata of the processes on that node; a resource allocation record table is also created for each node, recording the node's CPU core occupancy and free memory capacity;
Step 3: task-characteristic-aware scheduling strategy
Starting from the system's default task resource allocation, inter-node task migration adjustments are completed periodically according to each node's task assignment records, so that the different application types are distributed evenly across all nodes;
First, the assignment record tables of all NUMA nodes are traversed to find the node Node1 running the most compute-intensive applications (TC < 1) and the node Node2 running the most data-intensive applications (TC > 1); the two maxima are recorded as computing_task_MAX and data_task_MAX respectively;
If the absolute value of computing_task_MAX minus data_task_MAX is greater than 1, the task placement on Node1 and Node2 is insufficiently uniform. If the free memory of both nodes can support task migration, one compute-intensive application on Node1 is migrated to Node2 while one data-intensive application on Node2 is migrated to Node1; if the free memory of the two nodes cannot support migration, no task migration is performed;
If the absolute value of computing_task_MAX minus data_task_MAX is less than or equal to 1, the task placement on Node1 and Node2 is essentially uniform, which in turn means the different application types are evenly distributed across all nodes, and no task migration adjustment is needed;
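The step-3 decision can be sketched as follows, under stated assumptions: the node names, the record layout, and the choice of the smallest-footprint application as the migration candidate (taken from the embodiment later in the text) are illustrative, not the patent's literal data structures.

```python
def plan_swap(nodes):
    """nodes: {name: {"compute": [mf_gb, ...], "data": [mf_gb, ...],
    "free_gb": float}}. Returns (compute_src, data_src) when one
    compute-intensive and one data-intensive application should swap
    nodes, else None."""
    # Node1: most compute-intensive apps; Node2: most data-intensive apps.
    node1 = max(nodes, key=lambda n: len(nodes[n]["compute"]))
    node2 = max(nodes, key=lambda n: len(nodes[n]["data"]))
    computing_task_max = len(nodes[node1]["compute"])
    data_task_max = len(nodes[node2]["data"])
    if abs(computing_task_max - data_task_max) <= 1:
        return None  # placement essentially uniform: no adjustment
    if not nodes[node1]["compute"] or not nodes[node2]["data"]:
        return None  # nothing available to swap
    a = min(nodes[node1]["compute"])  # smallest compute-intensive app (GB)
    b = min(nodes[node2]["data"])     # smallest data-intensive app (GB)
    # Free-memory feasibility check: each candidate must fit in the
    # other node's remaining free memory.
    if a <= nodes[node2]["free_gb"] and b <= nodes[node1]["free_gb"]:
        return (node1, node2)  # migrate a -> node2 and b -> node1
    return None  # free memory cannot support the migration
```

When the layout is already balanced (equal counts on both nodes), the function returns None, matching the "no adjustment" branch above.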
Step 4: task page-aware migration strategy
If an application's memory footprint is still growing, the task is still in its initial memory allocation phase, and no page migration is performed;
If the application's memory footprint is relatively stable, the task is in its computation phase and page migration is enabled, in two parts: (1) pages in DRAM with no recent write operations are migrated into NVM; (2) pages in NVM with recent write operations are migrated into DRAM.
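The two-part rule of step 4 can be sketched as follows. The Page record and its recently_written flag are illustrative stand-ins for the page-table dirty bit that the embodiment later reads; the function only computes target media and does not model the actual page copy.

```python
from dataclasses import dataclass

@dataclass
class Page:
    pid: str                # page identifier (illustrative)
    medium: str             # current medium: "DRAM" or "NVM"
    recently_written: bool  # stand-in for the page-table dirty bit

def migrate_pages(pages, footprint_stable: bool):
    """Return the target medium for each page. While the application's
    memory footprint is still growing, no page migration occurs."""
    if not footprint_stable:
        return {p.pid: p.medium for p in pages}
    targets = {}
    for p in pages:
        if p.medium == "DRAM" and not p.recently_written:
            targets[p.pid] = "NVM"   # cold page: free up scarce DRAM
        elif p.medium == "NVM" and p.recently_written:
            targets[p.pid] = "DRAM"  # written page: spare NVM endurance
        else:
            targets[p.pid] = p.medium
    return targets
```

With footprint_stable=False every page keeps its current medium, matching the "initial allocation phase" branch.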
The task-aware resource allocation method under a heterogeneous memory architecture of the present invention mainly performs the following operations: process performance metadata recording, per-node task assignment recording, a task-characteristic-aware scheduling strategy, and a page-aware migration strategy. Because different task types are distinguished, tasks are distributed relatively evenly across the NUMA nodes; compared with the system's default task assignment policy, this relieves CPU cache contention and memory access contention. Because pages with different read/write behavior are also distinguished, an adaptive placement strategy is used under the heterogeneous memory architecture, reducing the number of NVM write operations and extending NVM lifetime. Since most write operations occur in DRAM, performance loss is minimized.
Description of the drawings
Fig. 1 is a schematic flowchart of the task-aware resource allocation method under a heterogeneous memory architecture of the present invention.
Fig. 2 is a schematic diagram of the default application allocation under a two-node NUMA architecture.
Fig. 3 is a schematic diagram of task placement after characteristic-aware adjustment.
Fig. 4 is a schematic diagram of memory occupancy after the original allocation.
Fig. 5 is a schematic diagram of memory occupancy after page-aware migration.
Detailed description of the embodiments
The task-aware resource allocation method under a heterogeneous memory architecture of the present invention is described in further detail below through a specific embodiment with reference to the accompanying drawings.
Embodiment 1:
In this embodiment of the task-aware resource allocation method under a heterogeneous memory architecture, 4 compute-intensive applications (A1, A2, A3, A4) and 4 data-intensive applications (B1, B2, B3, B4) run on a two-node NUMA heterogeneous memory architecture. Each node has 4 cores, 4 GB of DRAM, and 12 GB of NVM. The initial allocation uses the system's default allocation method, and the task-aware strategy of the invention then adjusts it periodically. Fig. 1 shows the operating flow of the method in this embodiment, consisting of two major parts: the periodic task-characteristic-aware scheduling strategy and the periodic task page-aware migration strategy.
The task-aware resource allocation method of this embodiment specifically comprises the following steps:
Step 1: process performance metadata recording
For every task process being optimized, hardware performance counters are used to obtain two performance parameters: the process's memory write requests per second, WAPS (Write Accesses per Second), and the process's total memory footprint, MF (Memory Footprint) (see box ① in Fig. 1). From these, the task classification metric TC (Task Classification) = WAPS * MF is computed, where WAPS is measured in millions per second and MF in GB. According to the TC value, tasks fall into two classes: when TC < 1, the task is a compute-intensive application; when TC > 1, it is a data-intensive application.
Fig. 2 shows the allocation of the 8 applications under the system's default allocation method used in this embodiment. Dashed boxes represent NUMA nodes; each solid square represents an application, with grey squares for compute-intensive applications and white squares for data-intensive applications. Compute-intensive applications access little data, with strong locality; their main bottleneck is CPU computation. Data-intensive applications access large amounts of data, with poor locality; their main bottleneck is memory access. As shown in Fig. 2, Node1 is assigned 3 compute-intensive applications (A1, A2, A3) and 1 data-intensive application (B1), while Node2 is assigned 1 compute-intensive application (A4) and 3 data-intensive applications (B2, B3, B4). The performance metadata of each application is sampled separately via hardware performance counters; the per-second memory write requests WAPS and total memory footprint MF are as shown in the figure. The TC value computed for each application serves as the system's task-awareness criterion. For the compute-intensive applications A1, A2, A3, A4, the TC values are 0.0005, 0.002, 0.0075, and 0.0005 respectively; for the data-intensive applications B1, B2, B3, B4, they are 2.5, 1.8, 2.5, and 2.5. In the present invention, TC = 1 is set as the threshold distinguishing compute-intensive from data-intensive applications.
Step 2: per-node task assignment recording
According to each process's CPU occupancy, memory allocation, and performance metadata, a task process record table is created for every node in the NUMA architecture, recording the metadata of the processes on that node; a resource allocation record table is also created for each node, recording the node's CPU core occupancy and free memory capacity (see box ② in Fig. 1).
In this embodiment, all process metadata records are traversed periodically and the task process record table of each NUMA node is updated. Under the system's default allocation, Node1 holds 3 compute-intensive applications and 1 data-intensive application; Node2 holds 1 compute-intensive application and 3 data-intensive applications.
The node resource allocation record tables are updated at the same time according to system resource occupancy. The memory footprints MF of the 4 applications on Node1 are 0.5, 1.0, 1.5, and 5.0 GB, so the remaining free memory is 8 GB and there are no idle cores. The memory footprints MF of the 4 applications on Node2 are 0.5, 4.0, 5.0, and 5.0 GB, so the remaining free memory is 1.5 GB and there are no idle cores.
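The per-node record above can be reproduced with the embodiment's figures (16 GB total per node: 4 GB DRAM + 12 GB NVM, 4 cores). The table layout is illustrative, and the assumption that each application occupies exactly one core is mine, inferred from "idle cores is 0" with 4 applications per 4-core node.

```python
NODE_TOTAL_GB = 4 + 12  # 4 GB DRAM + 12 GB NVM per node (embodiment figures)
NODE_CORES = 4

def resource_record(footprints_gb):
    """Build a node's resource allocation record from the memory
    footprints (GB) of its applications; assumes one core per app."""
    return {
        "free_gb": NODE_TOTAL_GB - sum(footprints_gb),
        "idle_cores": max(0, NODE_CORES - len(footprints_gb)),
    }

node1 = resource_record([0.5, 1.0, 1.5, 5.0])  # 8.0 GB free, 0 idle cores
node2 = resource_record([0.5, 4.0, 5.0, 5.0])  # 1.5 GB free, 0 idle cores
```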
Step 3: task-characteristic-aware scheduling strategy
Starting from the system's default task resource allocation, inter-node task migration adjustments are completed periodically according to each node's task assignment records, so that the different application types are distributed evenly across all nodes.
First, the assignment record tables of all NUMA nodes are traversed to find the node Node1 running the most compute-intensive applications (TC < 1) and the node Node2 running the most data-intensive applications (TC > 1); the two maxima are recorded as computing_task_MAX and data_task_MAX respectively.
If the absolute value of computing_task_MAX minus data_task_MAX is greater than 1, the task placement on Node1 and Node2 is insufficiently uniform. If the free memory of both nodes can support task migration, one compute-intensive application on Node1 is migrated to Node2 while one data-intensive application on Node2 is migrated to Node1; if the free memory of the two nodes cannot support migration, no task migration is performed.
If the absolute value of computing_task_MAX minus data_task_MAX is less than or equal to 1, the task placement on Node1 and Node2 is essentially uniform, which in turn means the different application types are evenly distributed across all nodes, and no task migration adjustment is needed.
The task resource allocation strategy currently used by default in existing systems assigns tasks to different nodes in round-robin fashion according to their arrival time, and tries to keep each task's CPU and memory on the same node. This approach is limited: it does not consider the characteristics of arriving tasks and cannot make adaptive adjustments. The task-characteristic-aware scheduling strategy starts from the system's default allocation and, according to each node's task assignment records, periodically completes inter-node task migration adjustments so that the different application types remain evenly distributed across all nodes.
In this embodiment, the task process record tables of all nodes are traversed first. Comparison shows that Node1 runs the most compute-intensive applications (TC < 1) and Node2 runs the most data-intensive applications (TC > 1), with computing_task_MAX = 3 and data_task_MAX = 3. The imbalance condition of the current adjustment round is satisfied (see box ③ in Fig. 1), indicating that task placement on Node1 and Node2 is insufficiently uniform. The compute-intensive and data-intensive applications to be scheduled are selected (see box ④ in Fig. 1), and the node resource allocation record tables are checked to determine whether the two nodes have enough free memory to support the migration (see box ⑤ in Fig. 1): the compute-intensive application with the smallest memory footprint on Node1, A1, occupies 0.5 GB, less than Node2's remaining free memory of 1.5 GB; meanwhile, the data-intensive application with the smallest memory footprint on Node2, B2, occupies 4 GB, less than Node1's remaining free memory of 8 GB. The free memory therefore suffices to complete the migration: task A1 is migrated from Node1 to Node2, and task B2 is migrated from Node2 to Node1. Fig. 3 shows the task placement after the adjustment.
The task process record tables of all nodes are then traversed again periodically. Comparison shows that Node1 and Node2 each hold equal numbers of compute-intensive and data-intensive applications, with computing_task_MAX = 2 and data_task_MAX = 2; the condition |computing_task_MAX - data_task_MAX| > 1 no longer holds (see box ③ in Fig. 1), indicating that task placement on Node1 and Node2 is essentially uniform and no task migration adjustment is needed.
Step 4: task page-aware migration strategy
If an application's memory footprint is still growing, the task is still in its initial memory allocation phase, and no page migration is performed.
If the application's memory footprint is relatively stable, the task is in its computation phase and page migration is enabled, in two parts: (1) pages in DRAM with no recent write operations are migrated into NVM; (2) pages in NVM with recent write operations are migrated into DRAM.
Compared with traditional DRAM, emerging NVM media suffer from drawbacks such as slower writes and limited write endurance. Related research shows that NVM write operations are 10 to 20 times slower than DRAM, and write endurance under server workloads is 3 to 5 years. At the same time, NVM offers large capacity and non-volatility. To better exploit the performance of heterogeneous memory, the present invention periodically performs task page awareness and applies a corresponding page migration strategy according to the differing read/write behavior of pages.
The task page-aware migration strategy is based on the following experimental conclusion, obtained from analyzing the page behavior of a large number of applications: excluding writes during the initial memory allocation phase, during an application's computation phase the number of pages receiving writes is far smaller than the total number of pages the application has allocated, and the set of written pages remains relatively fixed throughout the computation phase. The following page migration strategy is therefore carried out periodically:
First, the task process record tables of all nodes are traversed and compared with the previous traversal result to determine which applications are still in the memory build-up phase; these applications undergo no page migration. For applications whose memory footprint is relatively stable, the page-aware migration strategy is performed:
Fig. 4 shows the memory occupancy of the 8 applications in this embodiment after the initial memory allocation phase. The dashed box divides each node's memory into two parts by storage medium: DRAM and NVM. White squares represent page sets with no recent write operations; grey squares represent page sets with recent write operations. Because the system's default memory allocation fills DRAM first and NVM only afterwards, some recently-unwritten pages end up in DRAM (A, C, E, F in the figure) and some recently-written pages end up in NVM (H, I, K in the figure). The page table entries of all pages are traversed (see box ⑥ in Fig. 1) and the pages are classified by their dirty-page flag (see box ⑦ in Fig. 1): A, C, G, E, F, J, and L are page sets with no recent writes; B, H, I, D, and K are page sets with writes. Recently-unwritten pages are then detected in DRAM (see box ⑧ in Fig. 1), and the page sets A, C, E, F are migrated into NVM; recently-written pages are detected in NVM (see box ⑨ in Fig. 1), and the page sets H, I, K are migrated into DRAM.
Fig. 5 shows the page placement after migration. Because the set of written pages is relatively fixed, it can be ensured that during the application's computation phase the written pages reside in DRAM as far as possible, without causing heavy swapping between DRAM and NVM. Meanwhile, whether a page was recently written is determined from the dirty-page flag of its page table entry, so no additional per-page metadata recording is needed and the operating overhead is low.
In this embodiment, by using the task-aware resource allocation method under a heterogeneous memory architecture, the 4 compute-intensive applications and 4 data-intensive applications are distributed evenly across the two NUMA nodes, mitigating CPU cache contention and memory access contention. By distinguishing the pages of an application with different read/write behavior and using the adaptive placement strategy under the heterogeneous memory architecture, the number of NVM write operations is reduced and NVM lifetime is extended. Since most write operations occur in DRAM, the impact of heterogeneous memory on application running time is mitigated.

Claims (1)

1. A task-aware resource allocation method under a heterogeneous memory architecture, characterized by comprising the following steps:
Step 1: process performance metadata recording
For every task process being optimized, hardware performance counters are used to obtain two performance parameters, the process's memory write requests per second WAPS and the process's total memory footprint MF, from which the task classification metric TC = WAPS * MF is computed, where WAPS is measured in millions per second and MF in GB; according to the TC value, tasks fall into two classes: when TC < 1, the task is a compute-intensive application; when TC > 1, it is a data-intensive application;
Step 2: per-node task assignment recording
According to each process's CPU occupancy, memory allocation, and performance metadata, a task process record table is created for every node in the NUMA architecture, recording the metadata of the processes on that node; a resource allocation record table is also created for each node, recording the node's CPU core occupancy and free memory capacity;
Step 3: task-characteristic-aware scheduling strategy
Starting from the system's default task resource allocation, inter-node task migration adjustments are completed periodically according to each node's task assignment records, so that the different application types are distributed evenly across all nodes;
First, the assignment record tables of all NUMA nodes are traversed to find the node Node1 running the most compute-intensive applications (TC < 1) and the node Node2 running the most data-intensive applications (TC > 1); the two maxima are recorded as computing_task_MAX and data_task_MAX respectively;
If the absolute value of computing_task_MAX minus data_task_MAX is greater than 1, the task placement on Node1 and Node2 is insufficiently uniform; if the free memory of both nodes can support task migration, one compute-intensive application on Node1 is migrated to Node2 while one data-intensive application on Node2 is migrated to Node1; if the free memory of the two nodes cannot support migration, no task migration is performed;
If the absolute value of computing_task_MAX minus data_task_MAX is less than or equal to 1, the task placement on Node1 and Node2 is essentially uniform, which in turn means the different application types are evenly distributed across all nodes, and no task migration adjustment is needed;
Step 4: task page-aware migration strategy
If the application's memory footprint is still growing, the task is still in its initial memory allocation phase, and no page migration is performed;
If the application's memory footprint is relatively stable, the task is in its computation phase and page migration is enabled, in two parts: (1) pages in DRAM with no recent write operations are migrated into NVM; (2) pages in NVM with recent write operations are migrated into DRAM.
CN201810632230.8A 2018-06-19 2018-06-19 Resource allocation method based on task perception under heterogeneous memory architecture Active CN108897618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810632230.8A CN108897618B (en) 2018-06-19 2018-06-19 Resource allocation method based on task perception under heterogeneous memory architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810632230.8A CN108897618B (en) 2018-06-19 2018-06-19 Resource allocation method based on task perception under heterogeneous memory architecture

Publications (2)

Publication Number Publication Date
CN108897618A true CN108897618A (en) 2018-11-27
CN108897618B CN108897618B (en) 2021-10-01

Family

ID=64345409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810632230.8A Active CN108897618B (en) 2018-06-19 2018-06-19 Resource allocation method based on task perception under heterogeneous memory architecture

Country Status (1)

Country Link
CN (1) CN108897618B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214302A (en) * 2020-10-30 2021-01-12 中国科学院计算技术研究所 Process scheduling method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117285A (en) * 2015-09-09 2015-12-02 Chongqing University Non-volatile memory scheduling optimization method based on a mobile virtualization system
US20180024750A1 (en) * 2016-07-19 2018-01-25 Sap Se Workload-aware page management for in-memory databases in hybrid main memory systems
CN107391031A (en) * 2017-06-27 2017-11-24 Beijing University of Posts and Telecommunications Data migration method and device in a computing system based on hybrid storage

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AMRO AWAD: "Write-Aware Management of NVM-based Memory Extensions", Proceedings of the 2016 International Conference on Supercomputing *
SOYOON LEE: "CLOCK-DWF: A Write-History-Aware Page Replacement Algorithm for Hybrid PCM and DRAM Memory Architectures", IEEE Transactions on Computers *
SUN ZHIWEN: "Research on an Adaptive Page Management Algorithm for Hybrid Storage Architectures", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214302A (en) * 2020-10-30 2021-01-12 Institute of Computing Technology, Chinese Academy of Sciences Process scheduling method
CN112214302B (en) * 2020-10-30 2023-07-21 Institute of Computing Technology, Chinese Academy of Sciences Process scheduling method

Also Published As

Publication number Publication date
CN108897618B (en) 2021-10-01

Similar Documents

Publication Publication Date Title
CN110134514B (en) Extensible memory object storage system based on heterogeneous memory
CN105205014B (en) A kind of date storage method and device
CN107168654B (en) A kind of isomery memory allocation method and system based on data object temperature
US8812454B2 (en) Apparatus and method for managing storage of data blocks
CN109144887B (en) Memory system and control method for controlling nonvolatile memory
DE602005005557T2 (en) Module for reducing the power consumption of a hard disk drive
WO2017000658A1 (en) Storage system, storage management device, storage device, hybrid storage device, and storage management method
US10049040B2 (en) Just in time garbage collection
US20090210464A1 (en) Storage management system and method thereof
CN108139872A (en) A kind of buffer memory management method, cache controller and computer system
US8572321B2 (en) Apparatus and method for segmented cache utilization
CN111427969A (en) Data replacement method of hierarchical storage system
CN103324466A (en) Data dependency serialization IO parallel processing method
CN113093993A (en) Flash memory space dynamic allocation method and solid state disk
CN103514110A (en) Cache management method and device for nonvolatile memory device
US20170123975A1 (en) Centralized distributed systems and methods for managing operations
US7660964B2 (en) Windowing external block translations
US20070011214A1 (en) Oject level adaptive allocation technique
JP2023508676A (en) Memory manipulation considering wear leveling
CN110347338B (en) Hybrid memory data exchange processing method, system and readable storage medium
An et al. Avoiding read stalls on flash storage
CN109726145B (en) Data storage space distribution method and device and electronic equipment
CN108897618A (en) The resource allocation methods that task based access control perceives under a kind of isomery memory architecture
CN111538677A (en) Data processing method and device
CN106326132A (en) Storage system, storage management device, storage, hybrid storage device and storage management method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220908

Address after: 100192 207, floor 2, building C-1, Zhongguancun Dongsheng science and Technology Park, No. 66, xixiaokou Road, Haidian District, Beijing

Patentee after: Pingkai star (Beijing) Technology Co.,Ltd.

Address before: 230026 Jinzhai Road, Baohe District, Hefei, Anhui Province, No. 96

Patentee before: University of Science and Technology of China