CN103927123B - Buffer management method and device - Google Patents
Buffer management method and device
- Publication number
- CN103927123B (application CN201310013751.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- packet buffer
- queue
- privately owned
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention provides a buffer management method. After a first user performs first processing on a message to be processed held in a packet buffer of the first user's private queue, the first user notifies a second user of the address of the packet buffer, and the access permission of the packet buffer is changed so that only the second user can access it; the second user then accesses the packet buffer and performs second processing on the message. The method avoids having to share buffers by copying memory, allows different users to share buffer space under a unified lock management mechanism, and improves the performance of buffer-based communication between users.
Description
Technical field
The present invention relates to the field of internet technologies, and in particular to a buffer management method and device.
Background technology
An access router (English: Access Router, abbreviated AR) needs unified buffer (English: buffer) management across its functional modules. For example, the receiving process of the receiver module applies for a section of memory as a packet buffer (English: packet buffer, abbreviated PBUF) when receiving a message; a PBUF is a memory region used to store messages received from, or waiting to be transmitted to, the network. The forwarding process of the forwarding module applies for another section of memory as a packet buffer when forwarding messages. Likewise, when a kernel-mode (English: kernel mode) process and a user-mode (English: user mode) process of the operating system exchange messages at high speed, they also need to apply for memory as packet buffers. Commonly, different processes exchange messages through shared memory. Shared memory (shared memory) is an inter-process communication mechanism under Unix or Linux; it is usually used for communication among the processes of one program, although multiple programs can also exchange information through it. Shared memory relies on a lock mechanism to prevent memory-usage conflicts among processes. For example, under a Linux operating system, message exchange between a kernel-mode process and a user-mode process needs a lock mechanism to resolve concurrent access to shared memory, such as two or more processes accessing the same section of memory at the same instant; however, a kernel-mode process and a user-mode process cannot use the same set of locks.
Summary of the invention
The present invention provides a buffer management method, to realize unified buffer management for multiple subsystems under a common lock mechanism.
A first aspect of the present invention provides a buffer management method, the method including:
storing, by a first user, a message to be processed in the packet buffer corresponding to an address in a private queue allocated to the first user, and removing the address from the private queue, where only the first user can access the packet buffers corresponding to the addresses in the first user's private queue;
performing, by the first user, first processing on the message to be processed;
notifying, by the first user, a second user of the address of the packet buffer in which the message resides after the first processing, and changing the access permission of the packet buffer corresponding to the address so that only the second user can access it;
accessing, by the second user, the packet buffer corresponding to the address, and performing second processing on the message to be processed.
Based on the first aspect, in a first possible implementation of the first aspect, before the first user stores the message to be processed in the packet buffer corresponding to an address in the private queue allocated to the first user and removes the address from the private queue, the method further includes:
when the packet buffers corresponding to all addresses in the first user's private queue are exhausted, applying for a packet buffer from a first resource pool of the first user, where the packet buffers corresponding to the addresses in the first resource pool are memory preferentially used by the first user;
when the packet buffers corresponding to all addresses in the first resource pool of the first user are exhausted, performing a packet-buffer exchange with a second resource pool of a second user, that is, swapping the packet buffer space corresponding to all addresses in the first resource pool with the packet buffer space corresponding to the addresses in the second resource pool, where all buffers corresponding to all addresses in the second user's second resource pool are in an available state, and the packet buffers corresponding to the addresses in the second resource pool are memory preferentially used by the second user;
when no second user exists whose resource pool's packet buffers are all in an available state, storing the message to be processed in the packet buffer corresponding to an address in a shared queue, where the packet buffers corresponding to the addresses in the shared queue form a lock-free queue accessible to all users.
Based on the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the method further includes: when the size of the first user's private queue exceeds a set threshold, or the available packet buffer space in the private queue is below a set threshold, releasing the packet buffer holding the message to be processed.
Based on the second possible implementation of the first aspect, an embodiment of the present invention further provides a third possible implementation, where releasing the packet buffer holding the message to be processed when the size of the first user's private queue exceeds a set threshold or the available packet buffer space in the private queue is below a set threshold specifically includes:
if the first user's private queue has not reached the set threshold, or the available packet buffer space in the private queue is below the set value, releasing the address of the packet buffer into the first user's private queue;
if the first user's private queue has reached the set threshold and the first user's resource pool has not reached the set threshold, releasing the address of the packet buffer into the first user's resource pool;
if the first user's private queue has reached the set threshold and the first user's resource pool has also reached the set threshold, releasing the address of the packet buffer into the shared queue.
Based on the first aspect and any one of the first to third possible implementations of the first aspect, in a fourth possible implementation of the first aspect, before the first user stores the message to be processed in the packet buffer corresponding to an address in the private queue allocated to the first user and removes the address from the private queue, the method further includes:
creating a shared queue, and saving the address information of the shared queue;
creating a private queue for each user, where each private queue is created according to the address information of the shared queue and a set private queue size;
creating a resource pool in the memory outside the shared queue and the private queues, where one resource pool corresponds to each user.
A second aspect of the present invention provides a buffer management device, the device including:
a storage unit, configured for a first user to store a message to be processed in the packet buffer corresponding to an address in a private queue allocated to the first user, and to remove the address from the private queue, where only the first user can access the packet buffers corresponding to the addresses in the first user's private queue;
a first processing unit, configured for the first user to perform first processing on the message to be processed;
a control unit, configured for the first user to notify a second user of the address of the packet buffer in which the message resides after the first processing, and to change the access permission of the packet buffer corresponding to the address so that only the second user can access it;
a second processing unit, configured for the second user to access the packet buffer corresponding to the address and perform second processing on the message to be processed.
Based on the second aspect, in a first possible implementation of the second aspect, the storage unit is further configured to:
when the packet buffers corresponding to all addresses in the first user's private queue are exhausted, apply for a packet buffer from a first resource pool of the first user, where the packet buffers corresponding to the addresses in the first resource pool are memory preferentially used by the first user;
when the packet buffers corresponding to all addresses in the first resource pool of the first user are exhausted, perform a packet-buffer exchange with a second resource pool of a second user, that is, swap the packet buffer space corresponding to all addresses in the first resource pool with the packet buffer space corresponding to the addresses in the second resource pool, where all buffers corresponding to all addresses in the second user's second resource pool are in an available state, and the packet buffers corresponding to the addresses in the second resource pool are memory preferentially used by the second user;
when no second user exists whose resource pool's packet buffers are all in an available state, store the message to be processed in the packet buffer corresponding to an address in a shared queue, where the packet buffers corresponding to the addresses in the shared queue form a lock-free queue accessible to all users.
Based on the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the device further includes a releasing unit, configured to release the packet buffer holding the message to be processed when the size of the first user's private queue exceeds a set threshold, or the available packet buffer space in the private queue is below a set threshold.
Based on the second possible implementation of the second aspect, an embodiment of the present invention further provides a third possible implementation, where the releasing unit is specifically configured to:
if the first user's private queue has not reached the set threshold, or the available packet buffer space in the private queue is below the set value, release the address of the packet buffer into the first user's private queue;
if the first user's private queue has reached the set threshold and the first user's resource pool has not reached the set threshold, release the address of the packet buffer into the first user's resource pool;
if the first user's private queue has reached the set threshold and the first user's resource pool has also reached the set threshold, release the address of the packet buffer into the shared queue.
Based on the second aspect and any one of the first to third possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the device further includes a creating unit, configured to:
create a shared queue, and save the address information of the shared queue;
create a private queue for each user, where each private queue is created according to the address information of the shared queue and a set private queue size;
create a resource pool in the memory outside the shared queue and the private queues, where one resource pool corresponds to each user.
Embodiments of the present invention provide a buffer management method in which, after a first user performs first processing on a message to be processed in a packet buffer of the first user's private queue, the address of the packet buffer is notified to a second user, and the central processing unit changes the access permission of the packet buffer so that only the second user can access it; the second user then accesses the packet buffer and performs second processing on the message. The embodiments avoid having to share buffers by copying memory, allow different users to share buffer space under a unified lock management mechanism, and improve the performance of buffer-based communication between users.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the buffer space created in an embodiment of the present invention;
Fig. 2 is a flowchart of an embodiment of a buffer management method provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of an embodiment of a buffer management device provided by an embodiment of the present invention;
Fig. 4 is a structural diagram of another embodiment of a buffer management device provided by an embodiment of the present invention.
Detailed description of embodiments
The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.
The core idea of the embodiments of the present invention is to divide the buffer (buffer) space of the system into private space and shared space. Private space is buffer space for storing messages to be processed that only a specific user can access; shared space is buffer space for storing messages to be processed that all users can access. The queue composed of the pointers to the addresses of the packet buffers storing messages in private space, together with the node value preceding each pointer, is called a private queue; the queue composed of the pointers to the addresses of the packet buffers storing messages in shared space, together with the node value preceding each pointer, is called a shared queue. Each node value is stored in the section of packet buffer space starting at the first address pointed to by the previous node's pointer. Depending on the situation, each functional module can choose to place a message to be processed either in the packet buffer corresponding to the address of a pointer in its private queue or in the packet buffer corresponding to the address of a pointer in the shared queue.
In the embodiments of the present invention, the CPU first divides the system's buffer space into a shared queue and private queues. First, the CPU creates a shared queue in memory according to a default size, using part of the memory addresses as the addresses pointed to by the pointers in the queue, and saves the address information of the shared queue. The shared queue is shared by multiple users; that is, all users can access the packet buffers corresponding to the addresses in the shared queue. Then, the CPU creates a private queue for each user; a private queue is a queue that only one user is allowed to access, and it is created according to the address of the shared queue and a default private queue size. Finally, the CPU creates a resource pool for each private queue; each user's resource pool is memory that the user preferentially uses and that can be exchanged, likewise organized as a queue of packet buffer addresses, as shown in Fig. 1.
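The initialization just described — one shared queue plus a private queue and a resource pool per user, all carved out of one buffer space — can be sketched as a toy Python model. Addresses are modeled as indices into a flat list; the class, field, and user names are illustrative assumptions, not from the patent.

```python
from collections import deque

class BufferManager:
    """Toy model of the Fig. 1 layout: packet-buffer 'addresses' are
    indices into a flat list (all names are illustrative)."""
    def __init__(self, total, shared_size, private_size, users):
        self.buffers = [None] * total              # raw packet-buffer space
        free = deque(range(total))
        # shared queue: accessible to every user
        self.shared_queue = deque(free.popleft() for _ in range(shared_size))
        # one private queue per user, accessible only to that user
        self.private = {u: deque(free.popleft() for _ in range(private_size))
                        for u in users}
        # remaining addresses split into one resource pool per user
        pool_size = len(free) // len(users)
        self.pool = {u: deque(free.popleft() for _ in range(pool_size))
                     for u in users}

mgr = BufferManager(total=16, shared_size=4, private_size=2,
                    users=["rx", "fwd"])
print(len(mgr.shared_queue), len(mgr.private["rx"]), len(mgr.pool["rx"]))
```

With 16 buffers, 4 go to the shared queue, 2 to each private queue, and the remaining 8 are split evenly between the two resource pools.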
As shown in Fig. 2, an embodiment of the present invention provides a buffer management method, the method including:
201. The first user stores a message to be processed in the packet buffer corresponding to an address in the private queue allocated to the first user, and removes the address from the private queue; only the first user can access the packet buffers corresponding to the addresses in the first user's private queue.
Specifically, in the CPU, different functional modules need to perform different processing on different messages, such as compression, encryption, encoding/decoding, and forwarding. A module that performs processing on a message to be processed can therefore be called a user, and each user, when processing a message, needs a corresponding packet buffer to store the message that the user is to process.
Accordingly, when a first user among multiple users needs to process a message, the first user stores the message in the packet buffer corresponding to some address in the first user's private queue; only the first user can access the packet buffers corresponding to the addresses in the first user's private queue.
Before the first processing is performed on the message, in order that the packet buffer space occupied by the message cannot be taken by other processes of the first user, the first user removes the address corresponding to this packet buffer from the first user's private queue. After that, the pointer in the corresponding node of the first user's private queue no longer points to the first address of this packet buffer; the addresses in the private queue are correspondingly reduced, and the size of the first user's private queue shrinks accordingly.
The private queue may be a singly linked list composed of multiple nodes, whose pointers point to the addresses of different sections of memory: the pointer of the previous node in the queue points to the first address of the next node, each node corresponds to the section of memory starting at that first address, and so on.
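The singly linked private queue described above can be sketched as follows. This is an illustrative Python model (the node and method names are assumptions, not from the patent); popping a node corresponds to removing a packet-buffer address from the queue in step 201.

```python
class Node:
    """One node of a private queue: a packet-buffer address plus a
    pointer to the next node (names are illustrative)."""
    def __init__(self, addr, next=None):
        self.addr = addr      # first address of this node's packet buffer
        self.next = next      # pointer to the next node

class PrivateQueue:
    def __init__(self, addrs):
        self.head = None
        for a in reversed(addrs):
            self.head = Node(a, self.head)

    def take(self):
        """Remove an address from the queue: the corresponding buffer
        is now held exclusively by the user doing the processing."""
        node, self.head = self.head, self.head.next
        return node.addr

q = PrivateQueue([0x100, 0x104, 0x108])
print(hex(q.take()))   # the removed buffer address, 0x100
```

After the `take`, the queue's head pointer has moved on, so the removed buffer can no longer be reached through the private queue, matching the removal described in the text.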
202. The first user performs first processing on the message to be processed.
Specifically, the first user performs, in the packet buffer in its private queue, whatever processing it is able to perform on the message, such as compression or encoding.
After the first processing, if the message still needs to be processed by another user, step 203 is executed.
203. The first user notifies a second user of the address of the packet buffer, and the access permission of the packet buffer corresponding to the address is changed so that only the second user can access it.
Specifically, the address of the packet buffer storing the message processed in step 202 may be notified to the second user either by the first user itself, or by an address notification module in the CPU responsible for the address-notification function, which notifies the second user of the address of the packet buffer in the first user's private queue that stores the message. However, since the aforementioned private queue is a queue that only the first user can access, the CPU needs to change the access permission of this section of packet buffer so that only the second user can access it.
Of course, after this adjustment is completed, the size of the first user's private queue decreases, and the size of the second user's private queue increases.
In addition, the first user may likewise receive a section of message after a third user has processed it, with the packet buffer transferred from the third user's private queue into the first user's private queue.
204. The second user accesses the packet buffer and performs second processing on the message to be processed.
Specifically, after the CPU assigns to the second user the packet buffer in the first user's private queue that stores the message to be processed by the second user, the second user accesses this packet buffer and performs second processing on the message. The second processing differs from the first processing performed by the first user; for example, it may be forwarding or encryption.
After the second user has processed the message, the address of the packet buffer can be released into the second user's private queue, which is equivalent to assigning a section of packet buffer from the first user's private queue to the second user.
This can be realized by adjusting the pointers of the singly linked lists of the private queues. For example, suppose the packet buffer in step 202 is the buffer space of the 102nd node in the first user's private queue: the pointer of the 101st node points to the first address of the 102nd node, and the pointer of the 102nd node points to the first address of the 103rd node. When the buffer space of the 102nd node needs to be transferred to the second user, it suffices to point the pointer of the 101st node at the first address of the buffer space of the 103rd node, and to point the pointer of the last node in the second user's private queue at the first address of the buffer space of the first user's former 102nd node. The adjustment of the private queues is then complete.
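The pointer adjustment just described amounts to unlinking one node from the first user's list and appending it to the tail of the second user's list. A minimal sketch in the same illustrative linked-list model (all names are assumptions, and small integers stand in for buffer addresses):

```python
class Node:
    def __init__(self, addr, next=None):
        self.addr, self.next = addr, next

def build(addrs):
    """Build a singly linked queue from a list of buffer addresses."""
    head = None
    for a in reversed(addrs):
        head = Node(a, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.addr)
        head = head.next
    return out

def transfer(prev, dst_tail):
    """Unlink prev.next from the source queue and append it after
    dst_tail: the two pointer updates described in the text."""
    moved = prev.next
    prev.next = moved.next        # 101st node now points at the 103rd
    moved.next = None
    dst_tail.next = moved         # second user's last node points at it
    return moved

first = build([101, 102, 103])    # node numbers stand in for buffers
second = build([201])
transfer(first, second)           # move the "102nd" node to user two
print(to_list(first), to_list(second))   # → [101, 103] [201, 102]
```

No buffer contents move; only two pointers change, which is exactly why no memory copy is needed.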
In this way, when buffers are shared, the first user only needs to inform the second user of the address of the packet buffer by a message, without any memory copy, which saves time.
After the second user performs the second processing, if the message still needs to be processed by a third user, or needs to be processed by the first user again, the second user notifies the third user or the first user of the address of the packet buffer, realizing buffer sharing.
In step 201, the first user stores the message to be processed in the packet buffer corresponding to an address in the private queue allocated to the first user. But if the packet buffers in the first user's private queue are exhausted, that is, the packet buffers corresponding to the first user's private queue have all been given to other users or are all occupied, the first user applies for a packet buffer from the first resource pool; the packet buffers corresponding to the addresses in the first resource pool are memory preferentially used by the first user.
If the packet buffers corresponding to the first user's first resource pool are not fully occupied, the CPU assigns the addresses corresponding to some of the packet buffers in the first resource pool into the first user's private queue; for example, the pointer of the last node in the first user's private queue is pointed at the first address of the first packet buffer in the first resource pool.
If the packet buffers in the first user's first resource pool are exhausted, the first user performs a packet-buffer exchange with the second resource pool of a second user, that is, swaps the packet buffer space corresponding to the first resource pool with the packet buffer space corresponding to the second resource pool, where all buffers in the second user's second resource pool are in an available state; the packet buffers corresponding to the addresses in the second resource pool are memory preferentially used by the second user. "All buffers in the second user's second resource pool are available" means that none of the available packet buffer space in the second resource pool is occupied by the second user.
The buffer exchange can be implemented by pointing the pointer of the last node in the first user's private queue at the first address of the second user's second resource pool, and pointing the pointer of the last node in the second user's private queue at the first address of the first user's first resource pool.
If the resource pools of all users are occupied, that is, there is no second user whose resource pool's packet buffers are all in an available state, the first user stores the message to be processed in a packet buffer in the shared queue; the packet buffers corresponding to the addresses in the shared queue form a lock-free queue that all users can access, and the first user performs the first processing on the message in the packet buffer corresponding to some address in the shared queue.
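The fallback order above — own private queue, then own resource pool, then a pool exchange with another user, then the lock-free shared queue — can be sketched as follows. This is an illustrative Python model: the availability test for the exchange partner is simplified to "has buffers in its pool" (the patent requires the other pool to be entirely unused), and all names are assumptions.

```python
from collections import deque

def alloc(user, private, pool, shared):
    """Return a packet-buffer address for `user`, trying sources in the
    fallback order described in the text (a sketch, not the patent's code)."""
    if private[user]:                       # 1. own private queue
        return private[user].popleft()
    if pool[user]:                          # 2. own resource pool
        return pool[user].popleft()
    for other in pool:                      # 3. exchange pools with another
        if other != user and pool[other]:   #    user (simplified test)
            pool[user], pool[other] = pool[other], pool[user]
            return pool[user].popleft()
    return shared.popleft()                 # 4. lock-free shared queue

private = {"a": deque(), "b": deque([7])}
pool = {"a": deque(), "b": deque([8, 9])}
shared = deque([10, 11])
print(alloc("a", private, pool, shared))   # pools swap: "a" gets 8
```

Here user "a" has an empty private queue and pool, so it swaps pools with "b" and draws address 8; only when every pool is empty does allocation fall through to the shared queue.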
In a possible implementation, in step 203 the first user keeps handing packet buffers in its private queue over to the second user, or keeps receiving packet buffers from a third user or a fourth user. As a result, the first user's private queue may run short of available packet buffers, or may keep growing.
After the first user has performed the first processing on a message, if the size of the first user's private queue exceeds a set threshold, or the available packet buffer space in the private queue is below a set value, the first user releases the packet buffer that stored the message. "Release" here means returning the address corresponding to the packet buffer into a private queue, a resource pool, or the shared queue, so that users can reuse the packet buffer corresponding to this address; for example, the pointer of the last node in a private queue is pointed at the first address of the released packet buffer. The set threshold may be, for example, the private queue size value used when the private queue was created.
Specifically, if the first user's private queue has not reached the set threshold, or the available packet buffer space in the private queue is below the set value, the packet buffer is released into the first user's private queue, that is, the address corresponding to the packet buffer is released into the first user's private queue. In other words, when the reserved space in the private queue is insufficient, the packet buffer address is released into the first user's private queue so that the queue satisfies the set size.
If the first user's private queue has reached the set threshold and the first user's resource pool has not reached the set threshold, the packet buffer is released into the first user's resource pool, by linking the address of the released packet buffer to the tail address pointer of the first user's first resource pool.
If the first user's private queue has reached the set threshold and the first user's resource pool has also reached the set threshold, the packet buffer is released into the shared queue.
The reason for the above order is that, if the first user released the buffer directly into the shared queue, then the next time the first user applies for a buffer it would find no available packet buffer in its private queue, then judge that its resource pool has no available packet buffer either, and then have to apply in the shared queue yet again; the result would be that the user can hardly ever store a message to be processed in a packet buffer of its private queue.
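The release order mirrors the allocation order: back into the private queue first, then the resource pool, then the shared queue. A minimal Python sketch under the same illustrative model (the thresholds and names are assumptions, not the patent's values):

```python
from collections import deque

def release(addr, private, pool, private_limit, pool_limit, shared):
    """Return a finished packet-buffer address to the nearest store
    that is below its threshold (the order described in the text)."""
    if len(private) < private_limit:     # private queue below set threshold
        private.append(addr)
    elif len(pool) < pool_limit:         # then the user's resource pool
        pool.append(addr)
    else:                                # both full: the shared queue
        shared.append(addr)

private, pool, shared = deque([1]), deque([2, 3]), deque()
release(4, private, pool, private_limit=2, pool_limit=2, shared=shared)
release(5, private, pool, private_limit=2, pool_limit=2, shared=shared)
print(list(private), list(pool), list(shared))  # → [1, 4] [2, 3] [5]
```

The first release tops up the private queue; the second finds both the private queue and the pool at their limits, so the address falls through to the shared queue, keeping future allocations for this user local for as long as possible.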
Accordingly, an embodiment of the present invention further provides a buffer management apparatus 300, which includes:
a storage unit 301, used for the first user to store a pending packet into the packet buffer corresponding to an address in the private queue allocated to the first user and to remove the address from the private queue, only the first user being able to access the packet buffers corresponding to the addresses in the first user's private queue;
a first processing unit 302, used for the first user to perform first processing on the pending packet;
a control unit 303, used for the first user to notify the second user of the address of the packet buffer where the first-processed pending packet is located, the access rights of the packet buffer corresponding to the address being changed so that only the second user can access it;
a second processing unit 304, used for the second user to access the packet buffer corresponding to the address and to perform second processing on the pending packet.
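The flow through the four units is a zero-copy handoff: the packet stays in one buffer while its address and access rights pass from the first user to the second. A minimal model, with hypothetical names (`PacketBuffer`, `handoff`, the `"user1"`/`"user2"` labels) that are not from the patent:

```python
class PacketBuffer:
    """Hypothetical model of one packet buffer whose access rights
    name the only user currently allowed to touch it."""
    def __init__(self, addr):
        self.addr = addr
        self.data = None
        self.owner = None  # only this user may access the buffer

def handoff(buf, packet):
    # Storage unit: the first user writes the packet into the buffer;
    # only the first user may access it at this point.
    buf.owner = "user1"
    buf.data = packet
    # First processing unit: the first user transforms the packet in place.
    buf.data = ("first-processed", buf.data)
    # Control unit: the address is notified to the second user and the
    # access rights are flipped so only the second user may access it.
    buf.owner = "user2"
    # Second processing unit: the second user works on the same buffer;
    # only the address changed hands, the packet was never copied.
    buf.data = ("second-processed", buf.data)
    return buf
```

The design choice this models is that exclusive ownership at every step removes the need for locking on the buffer itself.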
Preferably, the storage unit 301 is further used for:
when the packet buffers in the first user's private queue are exhausted, requesting a packet buffer from the first user's first resource pool, the packet buffers corresponding to the addresses in the first resource pool being memory space used preferentially by the first user;
when the packet buffers corresponding to all addresses in the first user's first resource pool are exhausted, performing a packet buffer exchange with a second user's second resource pool, whereby the packet buffer space corresponding to all addresses in the first resource pool is swapped with the packet buffer space corresponding to the addresses in the second resource pool, all buffers corresponding to all addresses in the second user's second resource pool being in an available state and the packet buffers corresponding to the addresses in the second resource pool being memory space used preferentially by the second user;
when there is no second user whose resource pool has all its packet buffers in an available state, storing the pending packet in the packet buffer corresponding to an address in the shared queue, the packet buffers corresponding to the addresses in the shared queue forming a lock-free queue accessible to all users.
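The allocation side is thus a four-level fallback chain: private queue, own resource pool, pool exchange with an idle user, and finally the shared queue. A sketch under the same hypothetical names as before (`User`, `allocate`, `pool_fully_available` are all illustrative, not the patent's):

```python
from collections import deque

class User:
    """Hypothetical per-user state; names are illustrative only."""
    def __init__(self, private_addrs=(), pool_addrs=(), pool_size=2):
        self.private_queue = deque(private_addrs)
        self.resource_pool = deque(pool_addrs)
        self.pool_size = pool_size

    def pool_fully_available(self):
        # "All packet buffers in the pool are in an available state."
        return len(self.resource_pool) == self.pool_size

def allocate(user, all_users, shared_queue):
    # 1) Fast path: take an address from the user's own private queue.
    if user.private_queue:
        return user.private_queue.popleft()
    # 2) Private queue empty: request from the user's own resource pool.
    if user.resource_pool:
        return user.resource_pool.popleft()
    # 3) Pool empty too: find another user whose pool is fully
    #    available and swap the whole pools.
    for other in all_users:
        if other is not user and other.pool_fully_available():
            user.resource_pool, other.resource_pool = (
                other.resource_pool, user.resource_pool)
            return user.resource_pool.popleft()
    # 4) Last resort: the lock-free shared queue all users can access.
    return shared_queue.popleft()
```

Only the last level touches memory shared by everyone, which is why it is the level that must be lock-free.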
Preferably, the buffer management apparatus further includes a releasing unit, used for releasing the packet buffer storing the pending packet when the size of the first user's private queue exceeds the set threshold or the available packet buffer space in the private queue is below the set threshold.
The releasing unit is specifically used for:
if the first user's private queue has not reached the set threshold, or the available packet buffer space in the private queue is below the set value, releasing the address of the packet buffer into the first user's private queue;
if the first user's private queue has reached the set threshold and the first user's resource pool has not reached the set threshold, releasing the address of the packet buffer into the first user's resource pool;
if the second user's private queue has reached the set threshold and the first user's resource pool has also reached the set threshold, releasing the address of the packet buffer into the shared queue.
The buffer management apparatus 300 may further include a creating unit, used for: creating the shared queue and saving the address information of the shared queue; creating a plurality of private queues, one corresponding to each user, the private queues being created according to the address information of the shared queue and the set private queue size; and creating resource pools, the resource pools being memory space outside the shared queue and the private queues, one corresponding to each user.
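The creating unit's layout can be sketched as carving one flat buffer region into the three structures, the private queues being placed using the saved address information of the shared queue. All constants and names here (`BUFFER_SIZE`, `create_structures`, the depths) are hypothetical illustrations, not values from the patent:

```python
from collections import deque

BUFFER_SIZE = 2048   # hypothetical packet buffer size
SHARED_DEPTH = 8     # hypothetical shared queue depth
PRIVATE_DEPTH = 4    # hypothetical per-user private queue size
POOL_DEPTH = 2       # hypothetical per-user resource pool size

def create_structures(num_users):
    """Carve a flat region into a shared queue plus per-user private
    queues and resource pools, all holding buffer addresses."""
    next_addr = 0
    def take(n):
        nonlocal next_addr
        addrs = [next_addr + i * BUFFER_SIZE for i in range(n)]
        next_addr += n * BUFFER_SIZE
        return deque(addrs)

    # The shared queue is created first and its address range saved;
    # the private queues are then laid out from that information.
    shared_queue = take(SHARED_DEPTH)
    private_queues = [take(PRIVATE_DEPTH) for _ in range(num_users)]
    # Resource pools live outside both queue regions, one per user.
    resource_pools = [take(POOL_DEPTH) for _ in range(num_users)]
    return shared_queue, private_queues, resource_pools
```

Because every address is handed out exactly once at creation time, ownership of a buffer is always determined by which queue or pool currently holds its address.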
Accordingly, an embodiment of the present invention further provides a buffer management apparatus. Fig. 4 is a schematic diagram of the buffer management apparatus of this embodiment. As shown, the present embodiment includes a network interface 41, a processor 42 and a memory 43, which are connected by a system bus 44.
The network interface 41 is used to communicate with other devices.
The memory 43 may be persistent storage, such as a hard disk drive or flash memory, and holds software modules and device drivers. The software modules are functional modules capable of carrying out the above-described method of the present invention; the device drivers may be network and interface drivers.
On startup, these software modules are loaded into the memory 43 and are then accessed and executed by the processor 42 to carry out the following instructions:
the first user stores a pending packet into the packet buffer corresponding to an address in the private queue allocated to the first user, and removes the address from the private queue, only the first user being able to access the packet buffers corresponding to the addresses in the first user's private queue;
the first user performs first processing on the pending packet;
the first user notifies the second user of the address of the packet buffer where the first-processed pending packet is located, the access rights of the packet buffer corresponding to the address being changed so that only the second user can access it;
the second user accesses the packet buffer corresponding to the address and performs second processing on the pending packet.
Those skilled in the art should further appreciate that the units of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or in software depends on the particular application and the design constraints of the technical solution. A skilled person may use different methods to realize the described functions for each particular application, but such a realization should not be considered to go beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), main memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the technical field.
The specific embodiments described above further explain in detail the purpose, technical solution and advantages of the present invention. It should be understood that the foregoing describes only specific embodiments of the present invention and is not intended to limit its scope of protection; any modification, equivalent substitution, improvement and the like made on the technical basis of the present invention shall be included within the scope of protection of the present invention.
Claims (10)
1. A buffer management method, characterized in that the method comprises:
a first user storing a pending packet into the packet buffer corresponding to an address in a private queue allocated to the first user, and removing the address from the private queue, only the first user being able to access the packet buffers corresponding to the addresses in the first user's private queue;
the first user performing first processing on the pending packet;
the first user notifying a second user of the address of the packet buffer where the first-processed pending packet is located, the access rights of the packet buffer corresponding to the address being changed so that only the second user can access it;
the second user accessing the packet buffer corresponding to the address and performing second processing on the pending packet.
2. The buffer management method according to claim 1, characterized in that before the first user stores the pending packet into the packet buffer corresponding to the address in the private queue allocated to the first user, removes the address from the private queue, and only the first user is able to access the packet buffers corresponding to the addresses in the first user's private queue, the method further comprises:
when the packet buffers corresponding to all addresses in the first user's private queue are exhausted, requesting a packet buffer from the first user's first resource pool, the packet buffers corresponding to the addresses in the first resource pool being memory space used preferentially by the first user;
when the packet buffers corresponding to all addresses in the first user's first resource pool are exhausted, performing a packet buffer exchange with a second user's second resource pool, whereby the packet buffer space corresponding to all addresses in the first resource pool is swapped with the packet buffer space corresponding to the addresses in the second resource pool, all buffers corresponding to all addresses in the second user's second resource pool being in an available state and the packet buffers corresponding to the addresses in the second resource pool being memory space used preferentially by the second user;
when there is no second user whose resource pool has all its packet buffers in an available state, storing the pending packet in the packet buffer corresponding to an address in a shared queue, the shared queue being a lock-free queue accessible to all users.
3. The buffer management method according to claim 2, characterized by further comprising: when the size of the first user's private queue exceeds the set threshold or the available packet buffer space in the private queue is below the set threshold, releasing the packet buffer storing the pending packet.
4. The buffer management method according to claim 3, characterized in that releasing the packet buffer storing the pending packet when the size of the first user's private queue exceeds the set threshold or the available packet buffer space in the private queue is below the set threshold is specifically:
if the first user's private queue has not reached the set threshold, or the available packet buffer space in the private queue is below the set value, releasing the address of the packet buffer into the first user's private queue;
if the first user's private queue has reached the set threshold and the first user's resource pool has not reached the set threshold, releasing the address of the packet buffer into the first user's resource pool;
if the second user's private queue has reached the set threshold and the first user's resource pool has also reached the set threshold, releasing the address of the packet buffer into the shared queue.
5. The buffer management method according to any one of claims 1 to 4, characterized in that before the first user stores the pending packet into the packet buffer corresponding to the address in the private queue allocated to the first user, removes the address from the private queue, and only the first user is able to access the packet buffers corresponding to the addresses in the first user's private queue, the method further comprises:
creating a shared queue, and saving the address information of the shared queue;
creating a plurality of private queues, one corresponding to each user, the private queues being created according to the address information of the shared queue and the set private queue size;
creating resource pools, the resource pools being memory space outside the shared queue and the private queues, one corresponding to each user.
6. A buffer management apparatus, characterized in that the apparatus includes:
a storage unit, used for a first user to store a pending packet into the packet buffer corresponding to an address in a private queue allocated to the first user and to remove the address from the private queue, only the first user being able to access the packet buffers corresponding to the addresses in the first user's private queue;
a first processing unit, used for the first user to perform first processing on the pending packet;
a control unit, used for the first user to notify a second user of the address of the packet buffer where the first-processed pending packet is located, the access rights of the packet buffer corresponding to the address being changed so that only the second user can access it;
a second processing unit, used for the second user to access the packet buffer corresponding to the address and to perform second processing on the pending packet.
7. The buffer management apparatus according to claim 6, characterized in that the storage unit is further used for:
when the packet buffers corresponding to all addresses in the first user's private queue are exhausted, requesting a packet buffer from the first user's first resource pool, the packet buffers corresponding to the addresses in the first resource pool being memory space used preferentially by the first user;
when the packet buffers corresponding to all addresses in the first user's first resource pool are exhausted, performing a packet buffer exchange with a second user's second resource pool, whereby the packet buffer space corresponding to all addresses in the first resource pool is swapped with the packet buffer space corresponding to the addresses in the second resource pool, all buffers corresponding to all addresses in the second user's second resource pool being in an available state and the packet buffers corresponding to the addresses in the second resource pool being memory space used preferentially by the second user;
when there is no second user whose resource pool has all its packet buffers in an available state, storing the pending packet in the packet buffer corresponding to an address in a shared queue, the shared queue being a lock-free queue accessible to all users.
8. The buffer management apparatus according to claim 7, characterized by further including a releasing unit, used for releasing the packet buffer storing the pending packet when the size of the first user's private queue exceeds the set threshold or the available packet buffer space in the private queue is below the set threshold.
9. The buffer management apparatus according to claim 8, characterized in that the releasing unit is specifically used for:
if the first user's private queue has not reached the set threshold, or the available packet buffer space in the private queue is below the set value, releasing the address of the packet buffer into the first user's private queue;
if the first user's private queue has reached the set threshold and the first user's resource pool has not reached the set threshold, releasing the address of the packet buffer into the first user's resource pool;
if the second user's private queue has reached the set threshold and the first user's resource pool has also reached the set threshold, releasing the address of the packet buffer into the shared queue.
10. The buffer management apparatus according to claim 6, characterized by further including a creating unit, used for:
creating a shared queue, and saving the address information of the shared queue;
creating a plurality of private queues, one corresponding to each user, the private queues being created according to the address information of the shared queue and the set private queue size;
creating resource pools, the resource pools being memory space outside the shared queue and the private queues, one corresponding to each user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310013751.2A CN103927123B (en) | 2013-01-15 | 2013-01-15 | Buffer management method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310013751.2A CN103927123B (en) | 2013-01-15 | 2013-01-15 | Buffer management method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103927123A CN103927123A (en) | 2014-07-16 |
CN103927123B true CN103927123B (en) | 2017-02-08 |
Family
ID=51145360
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310013751.2A Active CN103927123B (en) | 2013-01-15 | 2013-01-15 | Buffer management method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103927123B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105912273B (en) * | 2016-04-15 | 2019-05-24 | 成都欧飞凌通讯技术有限公司 | A kind of message shares the FPGA implementation method of storage management |
CN109543080B (en) * | 2018-12-04 | 2020-11-06 | 北京字节跳动网络技术有限公司 | Cache data processing method and device, electronic equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100384178C (en) * | 2004-05-10 | 2008-04-23 | 北京航空航天大学 | Process method for parsing communication message data |
CN102204183A (en) * | 2011-05-09 | 2011-09-28 | 华为技术有限公司 | Message order-preserving processing method, order-preserving coprocessor and network equipment |
US9379970B2 (en) * | 2011-05-16 | 2016-06-28 | Futurewei Technologies, Inc. | Selective content routing and storage protocol for information-centric network |
CN102404213B (en) * | 2011-11-18 | 2014-09-10 | 盛科网络(苏州)有限公司 | Method and system for cache management of message |
- 2013-01-15: CN CN201310013751.2A patent/CN103927123B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN103927123A (en) | 2014-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2386962B1 (en) | Programmable queue structures for multiprocessors | |
EP3073374B1 (en) | Thread creation method, service request processing method and related device | |
WO2018035856A1 (en) | Method, device and system for implementing hardware acceleration processing | |
US9411637B2 (en) | Adaptive process importance | |
EP3758311B1 (en) | Techniques to facilitate a hardware based table lookup | |
CN102577278B (en) | For the Dynamic Resource Allocation for Multimedia of distributed cluster storage network | |
CN106406764A (en) | A high-efficiency data access system and method for distributed SAN block storage | |
AU2021269201B2 (en) | Utilizing coherently attached interfaces in a network stack framework | |
CN104424122B (en) | A kind of electronic equipment and memory division methods | |
CN103729236A (en) | Method for limiting resource using limit of cloud computing users | |
WO2017032152A1 (en) | Method for writing data into storage device and storage device | |
WO2015084506A1 (en) | System and method for managing and supporting virtual host bus adaptor (vhba) over infiniband (ib) and for supporting efficient buffer usage with a single external memory interface | |
CN108351838B (en) | Memory management functions are provided using polymerization memory management unit (MMU) | |
Egi et al. | Forwarding path architectures for multicore software routers | |
CN103927123B (en) | Buffer management method and device | |
CN106326143B (en) | A kind of caching distribution, data access, data transmission method for uplink, processor and system | |
CN106803841A (en) | The read method of message queue data, device and distributed data-storage system | |
So et al. | Toward terabyte-scale caching with SSD in a named data networking router | |
US9304706B2 (en) | Efficient complex network traffic management in a non-uniform memory system | |
US9128771B1 (en) | System, method, and computer program product to distribute workload | |
CN105144099B (en) | Communication system | |
JP2016509306A (en) | System and method for supporting work sharing multiplexing in a cluster | |
CN115766729A (en) | Data processing method for four-layer load balancing and related device | |
US10565004B2 (en) | Interrupt and message generation independent of status register content | |
Xue et al. | Network interface architecture for remote indirect memory access (rima) in datacenters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20231201 Address after: No. 1-9, 24th Floor, Unit 2, Building 1, No. 28, North Section of Tianfu Avenue, High tech Zone, Chengdu, Sichuan Province, 610000 Patentee after: Sichuan Huakun Zhenyu Intelligent Technology Co.,Ltd. Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd. |