CN102662865B - Multi-core CPU (central processing unit) cache management method, device and equipment - Google Patents


Info

Publication number: CN102662865B (application CN201210098772.4A)
Authority: CN (China)
Prior art keywords: CPU, buffer, queue, buffer memory, exclusive
Legal status: Active (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN102662865A
Inventor: 彭琮
Current assignee: Ruijie Networks Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Fujian Star Net Communication Co., Ltd., which filed the application
Priority to: CN201210098772.4A
Publication of application CN102662865A; application granted and published as CN102662865B

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a cache management method, apparatus, and device for a multi-core CPU. The method comprises: when a first CPU in the multi-core CPU allocates a buffer, allocation starts from the buffer pointer designated by the head pointer of one of a plurality of first buffer queues exclusive to the first CPU; when the first CPU releases a buffer, the second CPU to which the buffer belongs is determined, the queue the first CPU may release into is identified among a plurality of second buffer queues exclusive to the second CPU, and the buffer is released at the buffer pointer designated by the tail pointer of that queue. In every buffer queue, the head pointer and the tail pointer never coincide.

Description

Cache management method, apparatus, and device for a multi-core CPU
Technical field
The present invention relates to caching technology, and in particular to a cache management method, apparatus, and device for a multi-core CPU.
Background technology
In embedded systems, the memory regions used to hold received and transmitted data are commonly called buffers. A buffer is generally of fixed size and is used mainly for processing various data fields. In a multi-task real-time system, buffer management is a basic function with a marked impact on system throughput. In the prior art, buffer management on a multi-core CPU is typically implemented with a spin lock built from the special instructions the multi-core CPU provides: the spin lock protects the buffers so that at any moment only one CPU can allocate or release a buffer. When all the CPUs are allocating and/or releasing buffers, this scheme does protect each buffer and prevents one buffer from being obtained by two or more CPUs. However, it forces the other CPUs into a busy-wait state in which they can do no other work, reducing overall system efficiency.
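For illustration only, the spin-lock scheme described above might be sketched in C11 as follows: one shared free list guarded by an `atomic_flag`, so every CPU contends for the same lock on every allocation and release. All identifiers here (`pool_alloc`, `POOL_SIZE`, and so on) are ours, not the patent's.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Prior-art sketch: a single shared free list of buffers protected by a
 * spin lock.  While one CPU holds the lock, every other CPU that wants a
 * buffer busy-waits -- the efficiency loss the invention sets out to avoid. */

#define POOL_SIZE 8

static void *free_list[POOL_SIZE];
static int   free_top;                       /* number of free buffers */
static atomic_flag pool_lock = ATOMIC_FLAG_INIT;

static void spin_lock(atomic_flag *l)   { while (atomic_flag_test_and_set(l)) { /* busy-wait */ } }
static void spin_unlock(atomic_flag *l) { atomic_flag_clear(l); }

void pool_init(void *bufs[], int n) {
    free_top = (n < POOL_SIZE) ? n : POOL_SIZE;
    for (int i = 0; i < free_top; i++) free_list[i] = bufs[i];
}

void *pool_alloc(void) {                     /* any CPU: lock, pop one buffer */
    void *b = NULL;
    spin_lock(&pool_lock);
    if (free_top > 0) b = free_list[--free_top];
    spin_unlock(&pool_lock);
    return b;
}

void pool_free(void *b) {                    /* any CPU: lock, push it back */
    spin_lock(&pool_lock);
    if (free_top < POOL_SIZE) free_list[free_top++] = b;
    spin_unlock(&pool_lock);
}
```

Because the list is shared, correctness depends entirely on the lock; the invention below removes the lock by giving each CPU its own queues.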
Summary of the invention
To manage the caches of a multi-core CPU effectively and improve system efficiency, the invention provides a cache management method for a multi-core CPU, comprising:
when a first CPU in the multi-core CPU allocates a buffer, starting allocation from the buffer pointer designated by the head pointer of one of a plurality of first buffer queues exclusive to the first CPU;
when the first CPU releases a buffer, determining the second CPU to which the buffer to be released belongs, identifying, among a plurality of second buffer queues exclusive to the second CPU, the queue the first CPU may release into, and releasing the buffer at the buffer pointer designated by the tail pointer of the identified queue;
wherein, in each buffer queue, the head pointer and the tail pointer never coincide.
Another aspect of the invention provides a cache management apparatus for a multi-core CPU, comprising:
an allocation module, configured to start allocation, when a first CPU in the multi-core CPU allocates a buffer, from the buffer pointer designated by the head pointer of one of a plurality of first buffer queues exclusive to the first CPU; and
a release module, configured to, when the first CPU releases a buffer, determine the second CPU to which the buffer to be released belongs, identify among a plurality of second buffer queues exclusive to the second CPU the queue the first CPU may release into, and release the buffer at the buffer pointer designated by the tail pointer of the identified queue; wherein, in each buffer queue, the head pointer and the tail pointer never coincide.
Another aspect of the invention provides a network device comprising the cache management apparatus for a multi-core CPU described above.
The technical effect of the invention is as follows. The buffer pointers of the buffers exclusive to each CPU in the multi-core CPU are arranged into a plurality of exclusive buffer queues, each with a head pointer and a tail pointer. When a first CPU allocates a buffer, allocation starts from the buffer pointer designated by the head pointer of one of its exclusive first buffer queues. When a first CPU releases a buffer, the CPU to which the buffer belongs is determined, the queue the first CPU may release into is identified among that CPU's exclusive buffer queues, and the buffer is released at the buffer pointer designated by that queue's tail pointer. Because the head pointer and the tail pointer of each queue never coincide, any position in any CPU's exclusive queues is operated on by only one CPU at a time. The buffers are thus protected without a spin lock, which improves system efficiency.
Brief description of the drawings
Fig. 1 is a flowchart of the cache management method for a multi-core CPU provided by Embodiment 1 of the invention;
Fig. 2 is a schematic diagram of the first buffer queues exclusive to a first CPU provided by Embodiment 1;
Fig. 3 shows the basic format of the buffer provided in Embodiment 1;
Fig. 4 is a structural diagram of the cache management apparatus for a multi-core CPU provided by Embodiment 2;
Fig. 5 is a structural diagram of the network device provided by Embodiment 3.
Detailed description of the embodiments
Fig. 1 is a flowchart of the cache management method for a multi-core CPU provided by Embodiment 1. As shown in Fig. 1, the method comprises:
Step 101: when a first CPU in the multi-core CPU allocates a buffer, allocation starts from the buffer pointer designated by the head pointer of one of a plurality of first buffer queues exclusive to the first CPU.
First, "the first CPU" is a generic designation for any CPU in the multi-core CPU. Multi-core CPU technology connects several CPUs with an internal bus so that they share memory and peripherals.
Next, take as an example a first CPU with four exclusive first buffer queues; Fig. 2 shows them schematically. Each queue can be realized as an array of pointers, and the length of each queue can be set as the user requires. Suppose the first CPU owns 1024 buffers in total and each queue has length 1024; each queue must be filled with at least one buffer pointer, and the pointers may or may not be divided evenly among the queues, which is not limited here. In this embodiment each queue is filled with 256 buffer pointers, each pointing to one specific buffer. Each queue has a head pointer (head) and a tail pointer (tail), which never coincide. The head pointer indicates where in the queue allocation starts: the buffer pointer there is taken out, the slot is set to 0, and the buffer that pointer designates is handed out. The tail pointer indicates where in the queue release starts: the pointer of the buffer being released is written into the slot the tail pointer indicates. Each first buffer queue supports allocation only by its owning CPU, but the number of buffers a queue can allocate at once is at most the number of buffer pointers in the queue minus one; that is, if a queue currently holds m buffer pointers, at most m-1 buffers can be allocated from it. The benefit is that the head pointer and the tail pointer can never overlap, so an allocation and a release can never act on the same buffer at the same time. Since allocation is performed only by the owning CPU while release may be performed by any CPU, this rule effectively prevents two or more CPUs from operating on one buffer simultaneously.
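A minimal C sketch of one such exclusive queue follows: a ring of buffer pointers with a head index (where the owning CPU allocates) and a tail index (where releasing CPUs return buffers), with allocation refused once advancing head would make it meet tail, so at most m-1 of m pointers can ever be outstanding. All names and sizes here are illustrative, not from the patent.

```c
#include <stddef.h>

#define QUEUE_LEN 256                  /* 256 buffer pointers per queue in the example */

typedef struct {
    void    *slot[QUEUE_LEN];          /* array of buffer pointers        */
    unsigned head;                     /* owner CPU allocates from here   */
    unsigned tail;                     /* releasing CPUs append here      */
} buf_queue;

/* Any CPU entitled to this queue: write the released pointer at tail. */
void queue_release(buf_queue *q, void *buf) {
    q->slot[q->tail] = buf;
    q->tail = (q->tail + 1) % QUEUE_LEN;
}

/* Seed the queue with up to QUEUE_LEN-1 initial buffer pointers. */
void queue_init(buf_queue *q, void *bufs[], unsigned n) {
    q->head = 0;
    q->tail = 0;
    for (unsigned i = 0; i < n && i < QUEUE_LEN - 1; i++)
        queue_release(q, bufs[i]);
}

/* Owner CPU only: take the buffer the head pointer designates and clear
 * the slot.  Allocation is refused when the queue is unseeded (head ==
 * tail) or when advancing head would make it meet tail -- this is the
 * "at most m-1 of m pointers" rule from the description above. */
void *queue_alloc(buf_queue *q) {
    if (q->head == q->tail || (q->head + 1) % QUEUE_LEN == q->tail)
        return NULL;
    void *b = q->slot[q->head];
    q->slot[q->head] = NULL;
    q->head = (q->head + 1) % QUEUE_LEN;
    return b;
}
```

With the owner alone moving head and a single designated releaser alone moving tail, each index has exactly one writer, which is why no lock is needed.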
Step 102: when the first CPU in the multi-core CPU releases a buffer, the second CPU to which the buffer to be released belongs is determined, the queue the first CPU may release into is identified among a plurality of second buffer queues exclusive to the second CPU, and the buffer is released at the buffer pointer designated by the tail pointer of the identified queue.
It should be noted here that "the second CPU" is likewise a generic designation for any CPU in the multi-core CPU. That is, the first CPU and the second CPU may be the same CPU or different CPUs.
The basic format of the buffer provided in Embodiment 1 is shown in Fig. 3. Each buffer is divided into at least two parts, a management field and a data segment, which occupy one contiguous region of memory: the data segment immediately follows the management field. The lengths of both parts can be set as the user requires. The management field carries a CPU ID field identifying which CPU the buffer is exclusive to; its remaining contents are user-defined and are not detailed here. In other words, in the embodiments of the invention a buffer is exclusive to exactly one CPU, but this does not prevent CPUs from passing buffers to one another for use. The exclusive CPU alone may allocate the buffer, yet because buffers carry data between CPUs, the CPU that releases a buffer need not be its exclusive CPU. At release time, the CPU ID field in the management field reveals which CPU the buffer is exclusive to, and the buffer is released accordingly.
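The layout just described, a management field followed immediately by the data segment in one contiguous region, can be sketched as a C struct. Field names and sizes (`cpu_id`, `DATA_LEN`, the `reserved` slot) are our assumptions; Fig. 3 specifies only the two-part contiguous layout and the CPU ID field.

```c
#include <stddef.h>
#include <stdint.h>

#define DATA_LEN 128                   /* data-segment length, user-chosen */

typedef struct {
    uint32_t cpu_id;                   /* which CPU this buffer is exclusive to  */
    uint32_t reserved;                 /* room for other user-defined management */
} buf_mgmt;

typedef struct {
    buf_mgmt mgmt;                     /* management field ...              */
    uint8_t  data[DATA_LEN];           /* ... data segment right after it   */
} buffer;

/* Any releasing CPU recovers the owner from the management field. */
uint32_t buffer_owner(const buffer *b) {
    return b->mgmt.cpu_id;
}
```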
Put differently, each buffer carries the identifier of the CPU it belongs to, so determining the second CPU to which the buffer to be released belongs means: reading the identifier of the owning CPU carried in the buffer to be released, and resolving the second CPU from that identifier.
For example, when the first CPU releases a certain buffer and the CPU ID carried in its management field is 2, the buffer is determined to be exclusive to CPU2; the queue the first CPU may release into is then obtained among CPU2's exclusive queues, and the buffer is released at the buffer pointer designated by that queue's tail pointer. Note again that, as said above, the second CPU is a generic designation while CPU2 in this example is a specific one: the second CPU may be CPU2 or any other CPU.
A correspondence exists between the queues in the plurality of second buffer queues exclusive to the second CPU and the CPUs of the multi-core CPU: it records, for each CPU, which of the second CPU's exclusive queues that CPU may release into. Consulting this correspondence therefore yields, among the second CPU's exclusive second buffer queues, the queue the first CPU may release into. The correspondence may be, but is not limited to, the one shown in Table 1 below:
Table 1
CPU ID   Releasable queue in CPU1   Releasable queue in CPU2   ……   Releasable queue in CPUN
CPU1     CPU1-CPU1                  CPU2-CPU1                  ……   CPUN-CPU1
CPU2     CPU1-CPU2                  CPU2-CPU2                  ……   CPUN-CPU2
……       ……                         ……                         ……   ……
CPUN     CPU1-CPUN                  CPU2-CPUN                  ……   CPUN-CPUN
Taking the correspondence in Table 1 as an example, suppose CPU1 is about to release a buffer whose CPU ID field records 2, i.e. the buffer is exclusive to CPU2. Table 1 shows that the queue CPU1 may release into within CPU2's queues is CPU2-CPU1, so CPU1 writes the buffer's pointer into the slot designated by the tail pointer of queue CPU2-CPU1, thereby releasing the buffer.
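Table 1 is naturally an N×N arrangement: one queue per (owner, releaser) pair, so that each queue has exactly one CPU that allocates from it and one CPU that releases into it. A hedged sketch, with all identifiers ours (the patent describes the mapping, not this code):

```c
#define NCPU      4
#define QUEUE_LEN 256

typedef struct {
    void    *slot[QUEUE_LEN];
    unsigned head, tail;               /* head: owner allocates; tail: releaser appends */
} buf_queue;

/* queues[owner][releaser]: e.g. queues[2][1] is queue CPU2-CPU1 of Table 1,
 * the queue of CPU2's buffers that CPU1 alone may release into.  With one
 * allocator and one releaser per queue, no queue ever needs a lock. */
buf_queue queues[NCPU][NCPU];

/* The releasing CPU appends the buffer pointer at the tail of its
 * dedicated queue inside the owner's set of queues. */
void release_to(unsigned owner, unsigned releaser, void *buf) {
    buf_queue *q = &queues[owner][releaser];
    q->slot[q->tail] = buf;
    q->tail = (q->tail + 1) % QUEUE_LEN;
}
```

In the running example, CPU1 freeing a buffer whose management field records owner 2 would call `release_to(2, 1, buf)`, matching the CPU2-CPU1 cell of Table 1.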
Of course, the above describes only a one-to-one correspondence between the queues exclusive to a CPU and the CPUs of the multi-core CPU; a many-to-one relation is also allowed, in which more than one of a CPU's exclusive buffer queues corresponds to a single CPU. This is not elaborated here. Each exclusive buffer queue has its own identifier, distinct from the others, which may take various forms such as a number, a letter, or a character string.
Regarding the order of step 101 and step 102, it should be noted that Fig. 1 depicts only one possible ordering: step 101 may equally occur after step 102, or concurrently with it, so the ordering shown in Fig. 1 does not limit the order of the steps of the invention.
In the method provided by this embodiment, the buffer pointers of the buffers exclusive to each CPU in the multi-core CPU are arranged into a plurality of exclusive buffer queues, each with a head pointer and a tail pointer. When a first CPU allocates a buffer, allocation starts from the buffer pointer designated by the head pointer of one of its exclusive first buffer queues; when a first CPU releases a buffer, the CPU to which the buffer belongs is determined, the queue the first CPU may release into is identified among that CPU's exclusive buffer queues, and the buffer is released at the buffer pointer designated by that queue's tail pointer. Because the head and tail pointers of each queue never coincide, any position in any CPU's exclusive queues is operated on by only one CPU at a time: the buffers are protected without a spin lock, and system efficiency improves.
Those of ordinary skill in the art will appreciate that all or part of the steps of the method embodiments above may be carried out by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; the storage medium may be any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Fig. 4 is a structural diagram of the cache management apparatus for a multi-core CPU provided by Embodiment 2. This apparatus executes the method embodiment above; for the concrete steps, refer to that embodiment, which is not repeated here. As shown in Fig. 4, the apparatus may comprise an allocation module 401 and a release module 402. The allocation module 401 is configured to start allocation, when a first CPU in the multi-core CPU allocates a buffer, from the buffer pointer designated by the head pointer of one of a plurality of first buffer queues exclusive to the first CPU. The release module 402 is configured to, when the first CPU releases a buffer, determine the second CPU to which the buffer to be released belongs, identify among a plurality of second buffer queues exclusive to the second CPU the queue the first CPU may release into, and release the buffer at the buffer pointer designated by the tail pointer of the identified queue; in each buffer queue, the head pointer and the tail pointer never coincide.
The release module 402 may comprise:
an owning-CPU determining unit, configured to determine the second CPU to which the buffer to be released belongs;
a releasable-queue determining unit, configured to identify, among the second buffer queues exclusive to the second CPU, the queue the first CPU may release into; and
a releasing unit, configured to release the buffer at the buffer pointer designated by the tail pointer of the identified queue.
Each buffer carries the identifier of the CPU it belongs to, and the owning-CPU determining unit may comprise:
a reading subunit, configured to read the identifier of the owning CPU carried in the buffer to be released; and
a determining subunit, configured to determine from that identifier the second CPU to which the buffer belongs.
The first CPU and the second CPU may be the same CPU.
On the basis of the embodiment above, a correspondence exists between the queues in the second buffer queues exclusive to the second CPU and the CPUs of the multi-core CPU, recording which of the second CPU's exclusive queues each CPU may release into.
The releasable-queue determining unit is configured to identify, according to this correspondence, the queue the first CPU may release into among the second buffer queues exclusive to the second CPU.
In this correspondence, the relation between each second buffer queue and the CPUs of the multi-core CPU is one-to-one or many-to-one.
Fig. 5 is a structural diagram of the network device provided by Embodiment 3. As shown in Fig. 5, the network device comprises the cache management apparatus 501 for a multi-core CPU described in the apparatus embodiment above. It should be noted that the apparatus 501 may exist in the network device as part of its hardware or as a software functional module running on it.
Finally, it should be noted that the embodiments above merely illustrate, and do not limit, the technical solutions of the invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the solutions recorded in those embodiments may still be modified, and some or all of their technical features may be replaced by equivalents, without such modifications or replacements departing in essence from the scope of the technical solutions of the embodiments of the invention.

Claims (7)

1. A cache management method for a multi-core CPU, characterized by comprising:
when a first CPU in the multi-core CPU allocates a buffer, starting allocation from the buffer pointer designated by the head pointer of one of a plurality of first buffer queues exclusive to the first CPU;
when the first CPU releases a buffer, determining the second CPU to which the buffer to be released belongs, identifying among a plurality of second buffer queues exclusive to the second CPU the queue the first CPU may release into, and releasing the buffer at the buffer pointer designated by the tail pointer of the identified queue;
wherein, in each buffer queue, the head pointer and the tail pointer never coincide;
wherein each buffer carries the identifier of the CPU it belongs to, and determining the second CPU to which the buffer to be released belongs comprises:
reading the identifier of the owning CPU carried in the buffer to be released, and determining the second CPU from that identifier;
wherein a correspondence exists between the queues in the second buffer queues exclusive to the second CPU and the CPUs of the multi-core CPU, the correspondence recording which of the second CPU's exclusive queues each CPU may release into, and identifying the queue the first CPU may release into comprises:
identifying, according to the correspondence, the queue the first CPU may release into among the second buffer queues exclusive to the second CPU.
2. The method according to claim 1, characterized in that the first CPU and the second CPU are the same CPU.
3. The method according to claim 2, characterized in that, in the correspondence, the relation between each second buffer queue and the CPUs of the multi-core CPU is one-to-one or many-to-one.
4. A cache management apparatus for a multi-core CPU, characterized by comprising:
an allocation module, configured to start allocation, when a first CPU in the multi-core CPU allocates a buffer, from the buffer pointer designated by the head pointer of one of a plurality of first buffer queues exclusive to the first CPU; and
a release module, configured to, when the first CPU releases a buffer, determine the second CPU to which the buffer to be released belongs, identify among a plurality of second buffer queues exclusive to the second CPU the queue the first CPU may release into, and release the buffer at the buffer pointer designated by the tail pointer of the identified queue; wherein, in each buffer queue, the head pointer and the tail pointer never coincide;
wherein the release module comprises:
an owning-CPU determining unit, configured to determine the second CPU to which the buffer to be released belongs;
a releasable-queue determining unit, configured to identify, among the second buffer queues exclusive to the second CPU, the queue the first CPU may release into; and
a releasing unit, configured to release the buffer at the buffer pointer designated by the tail pointer of the identified queue;
wherein each buffer carries the identifier of the CPU it belongs to, and the owning-CPU determining unit comprises:
a reading subunit, configured to read the identifier of the owning CPU carried in the buffer to be released; and
a determining subunit, configured to determine the second CPU from that identifier;
wherein a correspondence exists between the queues in the second buffer queues exclusive to the second CPU and the CPUs of the multi-core CPU, the correspondence recording which of the second CPU's exclusive queues each CPU may release into, and the releasable-queue determining unit is configured to identify, according to the correspondence, the queue the first CPU may release into among the second buffer queues exclusive to the second CPU.
5. The apparatus according to claim 4, characterized in that the first CPU and the second CPU are the same CPU.
6. The apparatus according to claim 5, characterized in that, in the correspondence, the relation between each second buffer queue and the CPUs of the multi-core CPU is one-to-one or many-to-one.
7. A network device, characterized by comprising the cache management apparatus for a multi-core CPU according to any one of claims 4 to 6.
CN201210098772.4A 2012-04-06 2012-04-06 Multi-core CPU (central processing unit) cache management method, device and equipment Active CN102662865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210098772.4A CN102662865B (en) 2012-04-06 2012-04-06 Multi-core CPU (central processing unit) cache management method, device and equipment

Publications (2)

Publication Number Publication Date
CN102662865A CN102662865A (en) 2012-09-12
CN102662865B true CN102662865B (en) 2014-11-26

Family

ID=46772361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210098772.4A Active CN102662865B (en) 2012-04-06 2012-04-06 Multi-core CPU (central processing unit) cache management method, device and equipment

Country Status (1)

Country Link
CN (1) CN102662865B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5808450B1 (en) * 2014-04-04 2015-11-10 ファナック株式会社 Control device for executing sequential program using multi-core processor
CN108897630B (en) * 2018-06-06 2021-11-09 郑州云海信息技术有限公司 OpenCL-based global memory caching method, system and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6405292B1 (en) * 2000-01-04 2002-06-11 International Business Machines Corp. Split pending buffer with concurrent access of requests and responses to fully associative and indexed components
CN101051281A (en) * 2007-05-16 2007-10-10 杭州华三通信技术有限公司 Method and device for mutual repulsion access of multiple CPU to critical resources
CN101650698A (en) * 2009-08-28 2010-02-17 曙光信息产业(北京)有限公司 Method for realizing direct memory access
CN101853149A (en) * 2009-03-31 2010-10-06 张力 Method and device for processing single-producer/single-consumer queue in multi-core system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8327047B2 (en) * 2010-03-18 2012-12-04 Marvell World Trade Ltd. Buffer manager and methods for managing memory


Also Published As

Publication number Publication date
CN102662865A (en) 2012-09-12


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP01 Change in the name or title of a patent holder

Address after: Cangshan District of Fuzhou City, Fujian province 350002 Jinshan Road No. 618 Garden State Industrial Park building 19#

Patentee after: RUIJIE NETWORKS CO., LTD.

Address before: Cangshan District of Fuzhou City, Fujian province 350002 Jinshan Road No. 618 Garden State Industrial Park building 19#

Patentee before: Fujian Xingwangruijie Network Co., Ltd.