CN108234348A - Processing method and device for queue operations - Google Patents
- Publication number: CN108234348A (application CN201611158994.5A)
- Authority
- CN
- China
- Prior art keywords
- queue
- descriptor
- information
- address
- updated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/625—Queue scheduling characterised by scheduling criteria for service slots or service orders
- H04L47/627—Queue scheduling characterised by scheduling criteria for service slots or service orders policing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a processing method for queue operations, including: obtaining the queue number of the queue to which a pending message belongs; querying a mapping table for the storage-location information and address information of the queue descriptor corresponding to that queue number; obtaining the queue descriptor according to its storage-location information and address information, moving the queue descriptor into a register, and updating the storage-location information and address information of the queue descriptor corresponding to that queue number in the mapping table; and performing the queue operation on the pending message according to the queue descriptor, and, after the queue operation, updating the queue descriptor according to the queue descriptor and its updated storage-location information and address information. The invention also discloses a processing device for queue operations.
Description
Technical field
The present invention relates to the field of Internet technology, and in particular to a processing method and device for queue operations.
Background art
Network devices such as network processors (Network Processor, NP), fabric access processors (Fabric Access Processor, FAP), switches, bridges, and traffic-management chips integrate a queue-management function, and messages are processed by queue. As traffic grows, the number of queues a network device must support increases exponentially, reaching storage capacities of 512K or even 1M queues. The difficulty of managing such a huge number of high-density queues rises accordingly: storing the queues and their identifiers, i.e. queue descriptors (Queue Descriptor, QD), requires a large amount of memory capacity, and reducing chip cost while guaranteeing performance is a problem that must be considered.
In the prior art, patent documents describe using low-cost, large-capacity, high-density off-chip memory, i.e. dynamic random-access memory (DRAM), to compensate for the insufficient capacity of on-chip memory, i.e. static random-access memory (SRAM), when storing high-density queues and QDs. For example, patent document US008180966B2, "System and method for operating a packet buffer in an intermediate node", provides a message-data caching method that stores queues and QDs in DRAM combined with a small-capacity, high-speed on-chip cache memory (Cache), solving the problem of insufficient Cache capacity. Each queue has independent storage space in the Cache: the first message fragments of several queues are stored in the Cache together with their QDs, while the remaining fragments and tail fragments of each queue are stored in DRAM. When a new message is enqueued, its fragments are written to the tail of the corresponding queue stored in DRAM; when a message is dequeued, the first message of the corresponding queue stored in the Cache is read out and dequeued, and once all message fragments in the Cache have been dequeued, newly enqueued message fragments may be stored in the Cache, or message fragments stored in DRAM may be moved into the Cache.
Although the above method compensates to some extent for the insufficient capacity caused by storing queues and QDs in SRAM, for high-density queues the storage of the QDs itself consumes a large amount of space. For example, with a capacity of 1M queues and 80 bits of QD storage per queue, 80 Mbit of SRAM is required; storing all QDs in the Cache would be prohibitively expensive under the prior art. The storage and processing of QDs during queue operations therefore needs improvement. Patent document US7277990, "Method and apparatus providing efficient queue descriptor memory access", proposes an improved method for QD storage and processing. In that method, QDs are stored in the Cache or in DRAM; an enqueue or dequeue request triggers a content-addressable memory (Content Addressable Memory, CAM) to query whether the QD is stored in the Cache. If it is found, the CAM returns the QD's storage address in the Cache and the QD is read from that address; if it is not found, the QD is read from DRAM, the CAM discharges the QD label of one queue unrelated to the triggering operation, the vacated label entry stores the newly moved-in QD label, and the discharged QD is moved from the Cache into DRAM. However, this method has three drawbacks. First, the read/write bandwidth of a CAM cannot be scaled arbitrarily; when the CAM is written frequently, the queue numbers it maintains change frequently, which inevitably degrades CAM retrieval efficiency. Second, for the same capacity a CAM occupies a larger area and consumes more power than a Cache, and the CAM cannot hold enough QD labels, so the number of QDs that can be placed in the Cache is limited by CAM capacity. Third, high-speed, high-performance traffic management requires large-capacity, high-bandwidth on-chip QD storage, and a Cache cannot provide large capacity and high read/write bandwidth at the same time. As can be seen, this method must frequently access the Cache and DRAM to obtain QDs during queue operations, resulting in low QD access efficiency.
Therefore, there is an urgent need for a processing method for queue operations that solves the problem of low QD access efficiency during queue operations.
Summary of the invention
To solve the existing problems, embodiments of the present invention provide a processing method and device for queue operations, which can improve the access efficiency of QDs during queue operations and realize fast QD access.
To achieve the above objectives, the technical solution of the invention is realized as follows:
An embodiment of the present invention provides a processing method for queue operations, the method including:
obtaining the queue number of the queue to which a pending message belongs;
querying a mapping table for the storage-location information and address information of the queue descriptor corresponding to the queue number;
obtaining the queue descriptor according to its storage-location information and address information, moving the queue descriptor into a register, and updating the storage-location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
performing a queue operation on the pending message according to the queue descriptor, and, after the queue operation, updating the queue descriptor according to the queue descriptor and its updated storage-location information and address information.
In the above scheme, performing a queue operation on the pending message according to the queue descriptor includes:
based on the queue descriptor and the queue number, determining according to a preset congestion-avoidance policy whether the pending message may be enqueued, and if so performing an enqueue operation on the pending message according to the queue descriptor; or,
performing a dequeue operation on the pending message according to the queue descriptor.
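The "preset congestion-avoidance policy" is not specified further in this section. As a hedged illustration only, the simplest policy consistent with the description is a tail-drop admission check against the queue state recorded in the QD; the names `admit` and `max_queue_bytes` are illustrative, not from the patent:

```python
def admit(qd_bytes: int, packet_bytes: int, max_queue_bytes: int) -> bool:
    """Tail-drop stand-in for the unspecified congestion-avoidance policy:
    admit the packet only if the queue, per its QD, still has room for it."""
    return qd_bytes + packet_bytes <= max_queue_bytes
```

A real device would more likely use a WRED-style probabilistic policy; tail drop is merely the minimal check that fits the text.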
In the above scheme, when the queue operation is an enqueue operation, performing the queue operation on the pending message according to the queue descriptor and, after the queue operation, updating the queue descriptor according to the queue descriptor and its updated storage-location information and address information includes:
applying for a message-buffer pointer pointing into dynamic random-access memory;
storing the pending message and the message-buffer pointer into the dynamic random-access memory according to the message-buffer pointer;
updating the queue descriptor according to the message-buffer pointer and the enqueued message, and storing the updated queue descriptor to a destination address, the destination address being the address determined by the queue descriptor's updated storage-location information and address information.
In the above scheme, when the queue operation is a dequeue operation, performing the queue operation on the pending message according to the queue descriptor and, after the queue operation, updating the queue descriptor according to the queue descriptor and its updated storage-location information and address information includes:
reading, according to the queue descriptor, the pending message and the message-buffer pointer of the next queued message from the dynamic random-access memory, and dequeuing the pending message;
releasing the message-buffer pointer of the dequeued message;
updating the queue descriptor according to the buffer pointer of the next queued message and the dequeued message, and storing the updated queue descriptor to a destination address, the destination address being the address determined by the queue descriptor's updated storage-location information and address information.
In the above scheme, after the queue descriptor is updated, the method further includes:
detecting the queue numbers of the queues to which the not-yet-updated queue descriptors correspond;
when the queue number of the queue corresponding to the updated queue descriptor is inconsistent with the queue numbers of the queues corresponding to all not-yet-updated queue descriptors, moving the updated queue descriptor stored in the register into a cache memory;
updating, in the mapping table, the storage-location information and address information of the updated queue descriptor corresponding to the queue number.
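The eviction condition above, keep an updated QD in the register only while another pending operation still targets the same queue, can be sketched as follows; the dict-based stores and the tuple-valued mapping table are assumptions for illustration:

```python
def maybe_evict(qno: int, reg: dict, cache: dict, map_table: dict,
                pending_qnos: set) -> None:
    """After updating the QD of queue `qno`, move it Reg -> Cache only if no
    not-yet-updated descriptor belongs to the same queue."""
    if qno in pending_qnos:
        return                         # still needed in the register: keep it
    cache[qno] = reg.pop(qno)          # move the updated QD into the cache
    map_table[qno] = ("cache", qno)    # record the new location and address
```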
In the above scheme, the method further includes:
detecting the space usage of the cache memory in real time to obtain a first detection result;
when the first detection result exceeds a preset first threshold, moving queue descriptors that are stored in the cache memory and whose queue activity is below a preset second threshold into dynamic random-access memory;
releasing the pointers of the queue descriptors moved out of the cache memory.
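The two-threshold relief step above can be sketched as follows, taking the cache occupancy count as the "first detection result" and a per-queue activity counter as the liveness measure; both representations are assumptions for illustration:

```python
def relieve_cache(cache: dict, dram: dict, map_table: dict, activity: dict,
                  first_threshold: int, second_threshold: int) -> None:
    """When cache usage (the 'first detection result') exceeds the first
    threshold, move QDs of queues whose activity is below the second
    threshold from the cache into DRAM, freeing their cache slots."""
    if len(cache) <= first_threshold:
        return
    for qno in [q for q in cache if activity.get(q, 0) < second_threshold]:
        dram[qno] = cache.pop(qno)       # move the cold QD into DRAM;
                                         # popping releases its cache pointer
        map_table[qno] = ("dram", qno)   # DRAM is addressed by queue number
```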
In the above scheme, the method further includes:
detecting the space usage of the register in real time to obtain a second detection result;
when the second detection result exceeds a preset third threshold, stopping the output of message descriptors from pending messages and the enqueueing of pending messages, until the second detection result falls below a preset fourth threshold, at which point the output of message descriptors from pending messages and the enqueueing of pending messages resume; the preset fourth threshold is smaller than the preset third threshold.
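Because the fourth threshold is strictly below the third, the stop/resume rule above is a hysteresis gate: between the two thresholds the gate holds its last state rather than flapping. A minimal sketch (class and method names are illustrative):

```python
class EnqueueGate:
    """Hysteresis gate over register space usage: stop enqueues above the
    third threshold, resume only once usage falls below the fourth
    threshold (fourth < third)."""
    def __init__(self, third_threshold: int, fourth_threshold: int):
        assert fourth_threshold < third_threshold
        self.high = third_threshold
        self.low = fourth_threshold
        self.stopped = False
    def may_enqueue(self, reg_used: int) -> bool:
        if reg_used > self.high:
            self.stopped = True       # stop descriptor output and enqueueing
        elif reg_used < self.low:
            self.stopped = False      # resume once usage has drained
        return not self.stopped       # in between: hold the previous state
```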
An embodiment of the present invention also provides a processing device for queue operations, the device including an acquisition module, a query module, a first moving module, and a first processing module, wherein:
the acquisition module is configured to obtain the queue number of the queue to which a pending message belongs;
the query module is configured to query a mapping table for the storage-location information and address information of the queue descriptor corresponding to the queue number;
the first moving module is configured to obtain the queue descriptor according to its storage-location information and address information, move the queue descriptor into a register, and update the storage-location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
the first processing module is configured to perform a queue operation on the pending message according to the queue descriptor and, after the queue operation, update the queue descriptor according to the queue descriptor and its updated storage-location information and address information.
In the above scheme, the first processing module is specifically configured to, based on the queue descriptor and the queue number and according to a preset congestion-avoidance policy, perform an enqueue operation on the pending message according to the queue descriptor when it determines that the pending message may be enqueued; or to perform a dequeue operation on the pending message according to the queue descriptor.
In the above scheme, when the queue operation is an enqueue operation, the first processing module includes an application unit, a storage unit, and a first updating unit, wherein:
the application unit is configured to apply for a message-buffer pointer pointing into dynamic random-access memory;
the storage unit is configured to store the pending message and the message-buffer pointer into the dynamic random-access memory according to the message-buffer pointer;
the first updating unit is configured to update the queue descriptor according to the message-buffer pointer and the enqueued message, and to store the updated queue descriptor to a destination address, the destination address being the address determined by the queue descriptor's updated storage-location information and address information.
In the above scheme, when the queue operation is a dequeue operation, the first processing module includes a reading unit, a releasing unit, and a second updating unit, wherein:
the reading unit is configured to read, according to the queue descriptor, the pending message and the buffer pointer of the next queued message from the dynamic random-access memory, and to dequeue the pending message;
the releasing unit is configured to release the message-buffer pointer of the dequeued message;
the second updating unit is configured to update the queue descriptor according to the buffer pointer of the next queued message and the dequeued message, and to store the updated queue descriptor to a destination address, the destination address being the address determined by the queue descriptor's updated storage-location information and address information.
In the above scheme, the device further includes a first detection module, a second moving module, and a mapping-table update module, wherein:
the first detection module is configured to detect the queue numbers of the queues to which the not-yet-updated queue descriptors correspond;
the second moving module is configured to move the updated queue descriptor stored in the register into a cache memory when the queue number of the queue corresponding to the updated queue descriptor is inconsistent with the queue numbers of the queues corresponding to all not-yet-updated queue descriptors;
the mapping-table update module is configured to update, in the mapping table, the storage-location information and address information of the updated queue descriptor corresponding to the queue number.
In the above scheme, the device further includes a second detection module, a third moving module, and a release module, wherein:
the second detection module is configured to detect the space usage of the cache memory in real time and obtain a first detection result;
the third moving module is configured to move queue descriptors that are stored in the cache memory and whose queue activity is below a preset second threshold into dynamic random-access memory when the first detection result exceeds a preset first threshold;
the release module is configured to release the pointers of the queue descriptors moved out of the cache memory.
In the above device, the device further includes a third detection module and a second processing module, wherein:
the third detection module is configured to detect the space usage of the register in real time and obtain a second detection result;
the second processing module is configured to stop the output of message descriptors from pending messages and the enqueueing of pending messages when the second detection result exceeds a preset third threshold, and to resume the output of message descriptors from pending messages and the enqueueing of pending messages once the second detection result falls below a preset fourth threshold; the preset fourth threshold is smaller than the preset third threshold.
With the processing method and device for queue operations provided by embodiments of the present invention, the queue number of the queue to which a pending message belongs is obtained first; the storage-location information and address information of the QD corresponding to that queue number are then queried in a mapping (Map) table; according to the QD's storage-location information and address information, the QD is obtained and moved into a register (Reg), and the storage-location information and address information of the QD corresponding to the queue number in the Map table are updated; finally, a queue operation is performed on the pending message according to the QD, and after the queue operation the QD is updated according to the QD and its updated storage-location information and address information.
As can be seen, embodiments of the present invention query the Map table for the storage location and storage address of the QD corresponding to the queue number of the pending message's queue; obtain the QD accordingly and move it into the Reg, updating the Map table so that the QD's storage location is the Reg and its storage address is the corresponding address within the Reg; and perform the queue operation on the pending message according to the QD, updating the QD afterwards according to the QD and its updated storage location and storage address. This guarantees the timeliness of dynamic QD access during queue operations, improves the access efficiency of QDs in queue operations, realizes fast QD access, and safeguards the performance of systems that integrate queue-management functions.
Description of the drawings
Fig. 1 is the first implementation flowchart of Embodiment 1 of the processing method for queue operations of the present invention;
Fig. 2 is a schematic diagram of the mapping relations among the Reg, the Cache, and the DRAM in the Map table;
Fig. 3 is the second implementation flowchart of Embodiment 1 of the processing method for queue operations of the present invention;
Fig. 4 is the third implementation flowchart of Embodiment 1 of the processing method for queue operations of the present invention;
Fig. 5 is the implementation flowchart of Embodiment 2 of the processing method for queue operations of the present invention;
Fig. 6 is a schematic flowchart of the real-time detection of Cache space usage;
Fig. 7 is the implementation flowchart of moving a QD from the Cache to the DRAM;
Fig. 8a is a schematic diagram of the mapping relation between queue numbers and QDs;
Fig. 8b is a structural diagram of the Cache active linked list;
Fig. 8c is a structural diagram of the Cache idle linked list;
Fig. 9 is a schematic flowchart of the real-time detection of Reg space usage;
Fig. 10 is the first application-scenario diagram of Embodiment 5 of the processing method for queue operations of the present invention;
Fig. 11 is a flowchart of the move performed when a QD is read;
Fig. 12 is the second application-scenario diagram of Embodiment 5 of the processing method for queue operations of the present invention;
Fig. 13 is a structural diagram of Embodiment 1 of the processing device for queue operations of the present invention;
Fig. 14 is the first refined structural diagram of the first processing module in the processing device shown in Fig. 13;
Fig. 15 is the second refined structural diagram of the first processing module in the processing device shown in Fig. 13;
Fig. 16 is a structural diagram of Embodiment 2 of the processing device for queue operations of the present invention;
Fig. 17 is a functional-module diagram of the real-time detection of Cache space usage;
Fig. 18 is a functional-module diagram of the real-time detection of Reg space usage.
Detailed description of embodiments
The processing method for queue operations provided by embodiments of the present invention is mainly applied to systems that integrate queue-management functions. The Map table is queried for the storage location and storage address of the QD corresponding to the queue number of the pending message's queue; the QD is obtained accordingly and moved into the Reg, and the Map table is updated so that the QD's storage location is the Reg and its storage address is the corresponding address within the Reg; a queue operation is performed on the pending message according to the QD, after which the QD is updated according to the QD and its updated storage location and storage address. This improves the access efficiency of QDs during queue operations and realizes fast QD access.
To make the objectives, features, and advantages of the present invention clearer, the embodiments are further described below with reference to the accompanying drawings. It should be understood that the specific embodiments described here merely illustrate the present invention and are not intended to limit it.
Fig. 1 is the first implementation flowchart of Embodiment 1 of the processing method for queue operations of the present invention. As shown in Fig. 1, the processing method for queue operations in this embodiment includes the following steps:
Step 101: obtain the queue number of the queue to which the pending message belongs.
Here, the pending message information includes message data and a message descriptor, and the message descriptor includes the queue number and the message length.
Specifically, the way the queue number is obtained differs according to the queue operation performed on the pending message; queue operations include enqueue operations and dequeue operations. In this embodiment, for an enqueue operation, the pending message is received from the network and the queue number is determined from the message descriptor in the message; for a dequeue operation, the port information and queue information awaiting scheduling are obtained, and the queue number is calculated from the port information and queue information by a scheduling algorithm such as round-robin (RR) scheduling or strict-priority (SP) scheduling.
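The two paths for Step 101 can be sketched as follows: on enqueue the queue number is read straight out of the message descriptor, while on dequeue a scheduler picks it. The round-robin stand-in below is a deliberately minimal assumption; the patent's RR/SP algorithms operate on port and queue information not modeled here:

```python
from itertools import cycle

def queue_number_enqueue(msg_descriptor: dict) -> int:
    """Enqueue path: the queue number is carried in the message descriptor
    (alongside the message length)."""
    return msg_descriptor["queue_no"]

def rr_schedule(nonempty_queues: list):
    """Dequeue path: a minimal round-robin stand-in for the RR scheduling
    algorithm that yields the next queue number to serve."""
    return cycle(nonempty_queues)
```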
Step 102: query the mapping table for the storage-location information and address information of the queue descriptor corresponding to the queue number.
Here, the Map table stores the storage-location information and address information of the QDs in one-to-one correspondence with queue numbers. Fig. 2 is a schematic diagram of the mapping relations among the Reg, the Cache, and the DRAM in the Map table. As shown in Fig. 2, each row of the Map table gives the storage-location information and address information of the QD corresponding to a queue number. For example, in the first row the number "1" is the queue number; "in Reg" is the storage-location information of the QD corresponding to queue number 1, indicating that this QD is stored in the Reg; and "Reg pointer 5" is the address information of that QD, indicating that it is stored in the Reg at the address indicated by pointer 5. In this embodiment, the Map table is queried with the queue number of the pending message's queue for the storage-location information and address information of the corresponding QD. The storage-location information indicates whether the QD corresponding to the queue number is stored in the Reg, in the Cache, or in the DRAM, or that the queue is empty; an empty queue means that no QD corresponding to the queue number exists in the Reg, in the Cache, or in the DRAM. The address information indicates the address within the QD's storage location.
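The Map-table row just described can be sketched as a queue-number-indexed pair of (storage location, address); the tuple encoding and the "empty" marker are illustrative assumptions rather than the patent's bit layout:

```python
# One Map-table row per queue number: (storage location, address).
map_table = {
    1: ("reg", 5),      # QD of queue 1: in the Reg, at pointer 5 (as in Fig. 2)
    2: ("cache", 17),   # in the Cache, at address 17
    3: ("dram", 3),     # in DRAM, addressed by the queue number itself
}

def lookup(qno: int):
    """Return (location, address); an absent row means the queue is empty,
    i.e. no QD exists in the Reg, the Cache, or the DRAM."""
    return map_table.get(qno, ("empty", None))
```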
Step 103: obtain the queue descriptor according to its storage-location information and address information, move the queue descriptor into the register, and update the storage-location information and address information of the queue descriptor corresponding to the queue number in the mapping table.
Here, a specific address is determined from the QD's storage-location information and address information, and the QD is read from that address; according to the QD's storage-location information, a QD not already stored in the Reg is moved into the Reg, the storage-location information of the QD corresponding to the queue number in the Map table is updated to "in Reg", and the address information is updated to the QD's address within the Reg.
Specifically, in this embodiment, the QD's storage-location information is used to judge whether the QD is stored in the Reg, in the Cache, or in the DRAM, or whether the queue is empty. When the QD is stored in the Reg, it is read from the Reg at the address indicated by its address information; since the queue descriptor is already in the Reg, no move operation or Map-table update is needed. When the QD is stored in the Cache, it is read from the Cache at the address indicated by its address information; a free address is applied for in the Reg, the QD stored in the Cache is moved into that free address, and the storage-location information and address information of the QD corresponding to the queue number in the Map table are updated to "in Reg, at the address within the Reg". When the QD is stored in the DRAM, its address information is meaningless; the QD is read from the DRAM at the address corresponding to the queue number, a free address is applied for in the Reg, the QD stored in the DRAM is moved into that free address, and the storage-location information and address information of the QD corresponding to the queue number in the Map table are updated to "in Reg, at the address within the Reg". When the queue is empty, a free address is applied for in the Reg, a QD is created at that free address, and the storage-location information and address information of the QD corresponding to the queue number in the Map table are updated to "in Reg, at the address within the Reg".
Step 104, queue operation is carried out, and carrying out to the outstanding message information according to the queue descriptor
According to the queue descriptor, the updated storage location information of the queue descriptor and address information after queue operation,
The queue descriptor is updated.
Specifically, in enqueue operations, message information to be joined the team is received from network, the message in message information
Descriptor determines queue number;Or in dequeue operation, obtain etc. port information to be scheduled and etc. queuing message to be scheduled, root
According to the port information and queuing message, by dispatching algorithm such as RR dispatching algorithms or SP dispatching algorithms, queue number is calculated;
The storage location information and address information of QD corresponding with the queue number are inquired in Map tables;
According to the storage location information of the QD, judge that the QD is stored in Reg or in Cache or DRAM
In or QD show queue for sky;When the QD is stored in Reg, using address indicated in the address information of the QD
QD is read from Reg;When the QD is stored in Cache, using address indicated in the address information of the QD from
QD is read in Cache, while applies for a free address in Reg, the QD stored in Cache is moved into Reg
In the free address, and update in Map tables the storage location information of QD corresponding with the queue number and address information for "
In Reg, the address in storage location Reg ";When the QD is stored in DRAM, at this point, using the queue number from DRAM
QD is read in address corresponding with the queue number, while applies for a free address in Reg, will be stored in described in DRAM
QD is moved in the free address into Reg, and updates the storage location information of QD corresponding with the queue number in Map tables
It is " in Reg, the address in storage location Reg " with address information;When it is empty that the QD, which shows queue, to Reg applications one
A free address creates QD in the free address, and updates the storage location letter of QD corresponding with the queue number in Map tables
Breath and address information are " in Reg, the address in storage location Reg ";
A queue operation is then carried out on the message information according to the QD, and after the queue operation the QD is updated according to the QD and the updated storage location information and address information of the QD.
It is understood that, in a queue operation, the storage location information and address information of the QD corresponding to the queue number are queried in the Map table; the QD is obtained according to that storage location information and address information; the QD is moved into the Reg, and the storage location information and address information of the QD corresponding to the queue number in the Map table are updated; and the QD is then updated according to the QD and its updated storage location information and address information. Fast access to QDs is thereby achieved, for the following reasons. The Map table stores the storage location information and address information of the QD in one-to-one correspondence with each queue number, so during a queue operation the storage location and storage address of the QD corresponding to the queue number can be obtained by a single Map table lookup. Thus, when the QD is stored in the Cache or the DRAM, only one access to the Cache or the DRAM is needed to obtain the QD, which improves the efficiency of obtaining QDs and achieves fast QD retrieval. Moving the QD stored in the Cache or the DRAM into the Reg, and updating the storage location information and address information of the QD corresponding to the queue number in the Map table, means that after the queue operation the QD is updated according to its updated storage location information and address information, so only the Reg needs to be accessed when updating the QD, which improves the efficiency of storing QDs and achieves fast QD storage.
Further, Fig. 3 is the second part of the implementation flow diagram of embodiment one of the processing method in queue operation of the present invention. Based on the QD and the queue number, according to a preset congestion avoidance policy, when it is determined that the message information is to be enqueued, an enqueue operation is carried out on the message information according to the QD. With reference to Fig. 3, step 104 specifically includes the following steps:
Step 1041: requesting a packet buffer pointer pointing into the dynamic random access memory;
Step 1042: storing the message information and the packet buffer pointer into the dynamic random access memory according to the packet buffer pointer;
Step 1043: updating the queue descriptor according to the packet buffer pointer and the message information of the enqueue operation, and storing the updated queue descriptor into a destination address, where the destination address is the address determined by the updated storage location information and address information of the queue descriptor.
Here, the QD includes the head pointer of the queue, the tail pointer of the queue, and the queue depth. The preset congestion avoidance policy may be the Weighted Random Early Detection (WRED) algorithm. In this embodiment, according to the QD of the queue to which the message information belongs and the queue number, the WRED algorithm is used to decide whether the message information is to be enqueued. When it is decided that the message information cannot be enqueued, the message information is discarded; when it is decided that the message information is to be enqueued, an enqueue operation is carried out on the message information.
Specifically, when it is determined by the WRED algorithm, according to the QD and the queue number, that the message information is to be enqueued, a packet buffer pointer pointing into the DRAM is requested.
The message data in the message information is written to the DRAM message data buffer region pointed to by the packet buffer pointer, the message descriptor in the message information is written to the DRAM message descriptor buffer region pointed to by the packet buffer pointer, and the packet buffer pointer is written to the DRAM packet buffer pointer buffer region.
After the enqueue operation is carried out on the message information, the tail pointer of the queue in the QD is updated to the packet buffer pointer, the queue depth in the QD is updated to the queue depth in the QD plus the message length in the message descriptor, and the updated QD is stored into the address determined by the updated storage location information and address information of the QD.
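A minimal sketch of steps 1041 to 1043, assuming the QD layout described here (head pointer, tail pointer, depth); the buffer-pointer allocator and the three DRAM regions are illustrative stand-ins, not the patented structures.

```python
def enqueue(qd, msg_data, msg_descriptor, dram, alloc_ptr):
    """Steps 1041-1043 for one message (all names illustrative)."""
    ptr = alloc_ptr()                          # step 1041: packet buffer pointer
    dram["data"][ptr] = msg_data               # step 1042: store the data,
    dram["descriptor"][ptr] = msg_descriptor   # the descriptor,
    dram["next_ptr"][ptr] = None               # and the pointer region entry
    if qd["tail"] is not None:                 # step 1043: link behind old tail
        dram["next_ptr"][qd["tail"]] = ptr
    else:
        qd["head"] = ptr                       # first entry: head points here too
    qd["tail"] = ptr
    qd["depth"] += msg_descriptor["length"]    # depth grows by message length
    return ptr
```

The pointer region doubles as the per-queue linked list: each entry records the buffer pointer of the next queued message, which is what the dequeue side later reads as "the packet buffer pointer of the next queued entry".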
Fig. 4 is the third part of the implementation flow diagram of embodiment one of the processing method in queue operation of the present invention. When the queue operation is a dequeue operation, with reference to Fig. 4, step 104 specifically includes the following steps:
Step 1044: reading, from the dynamic random access memory according to the queue descriptor, the message information and the packet buffer pointer of the next queued entry, and dequeuing the message information;
Step 1045: releasing the packet buffer pointer of the message information that has been dequeued;
Step 1046: updating the queue descriptor according to the packet buffer pointer of the next queued entry and the message information of the dequeue operation, and storing the updated queue descriptor into a destination address, where the destination address is the address determined by the updated storage location information and address information of the queue descriptor.
Specifically, the message data is read from the DRAM message data buffer region using the head pointer of the queue in the QD and dequeued; the message descriptor is read from the DRAM message descriptor buffer region using the head pointer of the queue in the QD, and the message length is determined; the packet buffer pointer, together with the packet buffer pointer of the next queued entry, is read from the DRAM packet buffer pointer buffer region using the head pointer of the queue in the QD.
The packet buffer pointer of the dequeued message information is released.
After the dequeue operation is carried out on the message information, the head pointer of the queue in the QD is updated to the packet buffer pointer of the next queued entry, the queue depth in the QD is updated to the queue depth in the QD minus the message length in the message descriptor, and the updated QD is stored into the address determined by the updated storage location information and address information of the QD.
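Steps 1044 to 1046 mirror the enqueue sketch above; again the DRAM regions and the pointer-release callback are illustrative stand-ins.

```python
def dequeue(qd, dram, free_ptr):
    """Steps 1044-1046 for one message (all names illustrative)."""
    ptr = qd["head"]
    msg_data = dram["data"].pop(ptr)           # step 1044: read via head pointer
    descriptor = dram["descriptor"].pop(ptr)   # message length comes from here
    next_ptr = dram["next_ptr"].pop(ptr)       # pointer of the next queued entry
    free_ptr(ptr)                              # step 1045: release the pointer
    qd["head"] = next_ptr                      # step 1046: new head
    if next_ptr is None:
        qd["tail"] = None                      # last entry dequeued: queue empty
    qd["depth"] -= descriptor["length"]        # depth shrinks by message length
    return msg_data, descriptor
```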
Fig. 5 is the implementation flow diagram of embodiment two of the processing method in queue operation of the present invention. With reference to Fig. 5, the processing method in queue operation of this embodiment further includes, after step 104 of embodiment one:
Step 105: detecting the queue numbers of the queues corresponding to the queue descriptors that have not yet completed their update;
Step 106: when the queue number of the queue corresponding to the updated queue descriptor is inconsistent with all of the queue numbers of the queues corresponding to the queue descriptors still being updated, moving the updated queue descriptor stored in the register into the cache memory;
Step 107: updating, in the mapping table, the storage location information and address information of the updated queue descriptor corresponding to the queue number.
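Steps 105 to 107 amount to a post-operation eviction check. In the sketch below, `pending_queues` stands in for the set of queue numbers whose operations are still in flight; this set, and all class and function names, are assumptions made for illustration, since the patent does not spell out the pipeline bookkeeping.

```python
class Store:
    """Toy addressable store with a free-address list (stands in for Reg/Cache)."""
    def __init__(self, size):
        self.slots = {}
        self.free = list(range(size))

def maybe_evict(queue_no, reg_addr, map_table, reg, cache, pending_queues):
    """Steps 105-107: evict the updated QD from Reg to Cache when no other
    operation in flight refers to its queue."""
    if queue_no in pending_queues:            # step 105: queue still in use
        return False
    qd = reg.slots.pop(reg_addr)              # step 106: move Reg -> Cache
    reg.free.append(reg_addr)
    cache_addr = cache.free.pop(0)
    cache.slots[cache_addr] = qd
    map_table[queue_no] = ("Cache", cache_addr)   # step 107: update Map table
    return True
```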
Based on the processing method in queue operation of embodiment one, this embodiment describes in detail a concrete example of the processing method in an enqueue operation.
Message information is received from the network, and it is determined from the message descriptor in the message information that the queue number is 2 and the message length is 128.
With reference to Fig. 2, the Map table query returns, for the QD corresponding to queue number 2, storage location information "in Cache" and address information "Cache pointer 1". According to this storage location information and address information, it is determined that the QD is stored at address 1 pointed to by pointer 1 in the Cache; the QD is read from that address, a free address 6 is requested in the Reg, the QD stored in the Cache is moved into the Reg, and the storage location information and address information of the QD corresponding to queue number 2 in the Map table are updated to "in Reg, Reg pointer 6".
According to the QD and the egress port of queue number 2, using the WRED algorithm, it is determined that the message information is to be enqueued, and a packet buffer pointer pointing into the DRAM is requested.
The message data in the message information is written to the DRAM message data buffer region pointed to by the packet buffer pointer, the message descriptor in the message information is written to the DRAM message descriptor buffer region pointed to by the packet buffer pointer, and the packet buffer pointer is written to the DRAM packet buffer pointer buffer region.
After the enqueue operation is carried out on the message information, the tail pointer of the queue in the QD corresponding to queue number 2 is updated to the packet buffer pointer, the queue depth in the QD corresponding to queue number 2 is updated to the queue depth in that QD plus the message length 128, and the updated QD is stored into address 6 of the Reg.
After the QD corresponding to queue number 2 has been updated, it is detected that the queue numbers corresponding to the QDs still being updated are 5 and 10. Since queue number 2 is inconsistent with both queue number 5 and queue number 10, a free address 6 is requested in the Cache, and the QD stored at address 6 of the Reg is moved into address 6 of the Cache.
The storage location information and address information of the QD corresponding to queue number 2 in the Map table are updated to "in Cache, Cache pointer 6".
Further, based on the processing method in queue operation of embodiment one, this embodiment describes in detail a concrete example of the processing method in a dequeue operation.
The port information and queue information to be scheduled are obtained, and the queue number is calculated, from the port information and queue information by the RR scheduling algorithm, to be 65536.
With reference to Fig. 2, the Map table query returns, for the QD corresponding to queue number 65536, storage location information "in DRAM" and address information "Null". The QD is therefore read from DRAM address 65536 using the queue number 65536, a free address 7 is requested in the Reg, the QD stored in the DRAM is moved into the Reg, and the storage location information and address information of the QD corresponding to queue number 65536 in the Map table are updated to "in Reg, Reg pointer 7".
The message data is read from the DRAM message data buffer region using the head pointer of the queue in the QD and dequeued; the message descriptor is read from the DRAM message descriptor buffer region using the head pointer of the queue in the QD, and the message length is determined to be 128; the packet buffer pointer and the packet buffer pointer of the next queued entry are read from the DRAM packet buffer pointer buffer region using the head pointer of the queue in the QD.
The packet buffer pointer of the dequeued message data is released.
After the dequeue operation is carried out, the head pointer of the queue in the QD corresponding to queue number 65536 is updated to the packet buffer pointer of the next queued entry, the queue depth in the QD corresponding to queue number 65536 is updated to the queue depth in that QD minus the message length 128, and the updated QD is stored into address 7 of the Reg.
After the QD corresponding to queue number 65536 has been updated, it is detected that the queue numbers corresponding to the QDs still being updated are 4 and 9. Since queue number 65536 is inconsistent with both queue number 4 and queue number 9, a free address 7 is requested in the Cache, and the QD stored at address 7 of the Reg is moved into address 7 of the Cache.
The storage location information and address information of the QD corresponding to queue number 65536 in the Map table are updated to "in Cache, Cache pointer 7".
Further, since the QD undergoing queue operation processing needs to be moved into the Reg, while the QD whose queue operation processing has completed and which is stored in the Reg needs to be moved into the Cache, in order to ensure that the Cache has sufficient space to store the QDs in queue operation processing and that queue operation processing proceeds normally, in embodiment three of the processing method in queue operation of the present invention, the space usage of the Cache may also be detected in real time.
Fig. 6 is a schematic diagram of the real-time detection flow for the space usage of the Cache. With reference to Fig. 6, the processing method in queue operation of the present invention further includes:
Step 201: detecting the space usage of the cache memory in real time to obtain a first detection result;
Step 202: when the first detection result exceeds a preset first threshold, moving the queue descriptors of queues that are stored in the cache memory and whose activity level is below a preset second threshold into the dynamic random access memory, so that the space usage of the cache memory meets a preset condition.
The preset first threshold may be configured according to actual needs; for example, the value range of the first threshold may be a percentage between 90% and 98%. In this embodiment, the first threshold is taken to be 95% for the purpose of the detailed description.
Each time a QD stored in the Reg is moved into the Cache, one queue operation processing has been completed. Therefore, in this embodiment, the activity level of a queue may be graded by how long its QD has been stored in the Cache. For example, the queue whose QD has been stored in the Cache for the longest time without undergoing queue operation processing has the lowest activity level, which may be set to 0; when it is detected that a QD has moved from the Reg into the Cache, a queue has just undergone queue operation processing and is therefore active, so its activity level may be set to 10; the activity levels of the remaining queues may be set to integer values from 1 to 9.
The preset second threshold may be configured according to actual needs; in this embodiment, the second threshold is taken to be 0.5 for the purpose of the detailed description.
Step 203: releasing the pointers of the queue descriptors moved out of the cache memory.
Specifically, the space usage of the Cache is detected in real time to obtain the first detection result. When the first detection result exceeds 95% and the activity level of the queue corresponding to a QD stored in the Cache is below 0.5, that QD is moved into the DRAM, and the pointer of the QD moved out of the Cache is released.
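Steps 201 to 203 amount to threshold-driven demotion of inactive QDs. The sketch below uses the example thresholds of this embodiment (95% and 0.5); the `Cache` class, the numeric activity map and all names are illustrative assumptions.

```python
class Cache:
    """Toy cache: fixed number of slots, each holding (queue_no, QD)."""
    def __init__(self, size):
        self.size = size
        self.slots = {}
        self.free = list(range(size))

def demote_inactive(cache, dram, map_table, activity,
                    first_threshold=0.95, second_threshold=0.5):
    """Steps 201-203: when Cache usage exceeds the first threshold, move QDs
    of queues whose activity level is below the second threshold into DRAM."""
    usage = len(cache.slots) / cache.size          # step 201: detection result
    if usage <= first_threshold:
        return []
    demoted = []
    for addr, (queue_no, qd) in list(cache.slots.items()):
        if activity[queue_no] < second_threshold:  # step 202: inactive queue
            dram[queue_no] = qd                    # DRAM addressed by queue no.
            map_table[queue_no] = ("DRAM", None)
            del cache.slots[addr]
            cache.free.append(addr)                # step 203: release pointer
            demoted.append(queue_no)
    return demoted
```

With the activity grades of this embodiment (integers 0 to 10), a second threshold of 0.5 means only the longest-untouched queues, graded 0, are demoted.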
Fig. 7 is the implementation flow diagram of moving a QD from the Cache to the DRAM. With reference to Fig. 7, the flow of moving a QD from the Cache to the DRAM includes the following steps:
Step 301: building the queue numbers corresponding to all QDs stored in the Cache into a doubly linked list. If a trigger event occurs in which a QD corresponding to queue number qN is released from the Reg into the Cache, step 302 is performed; if a trigger event occurs in which a QD corresponding to queue number qM is moved from the Cache into the Reg, step 303 is performed;
Step 302: appending queue number qN to the tail of the linked list and increasing the list depth by 1, then performing step 304;
Step 303: removing queue number qM from the linked list and connecting the nodes preceding and following queue number qM;
Step 304: detecting the list depth, that is, the number of queue numbers corresponding to QDs present in the Cache (i.e. the number of queues), and judging whether the list depth exceeds the preset first threshold. If the list depth exceeds the preset first threshold, step 305 is performed; otherwise step 307 is performed;
Step 305: moving the QD corresponding to the first queue number in the Cache active linked list into the DRAM;
Step 306: releasing the pointer of the QD moved out of the Cache;
Step 307: leaving the QDs in the Cache unmoved.
To explain why, when the list depth exceeds the preset first threshold, it is the QD corresponding to the first queue number in the Cache active linked list that is moved into the DRAM, the mapping relationship between queue numbers and QDs, the Cache active linked list, and the Cache free linked list are first introduced.
Fig. 8a is a schematic diagram of the mapping relationship between queue numbers and QDs. With reference to Fig. 8a, the Cache linked list depth is 1024, so the QDs corresponding to 1024 queue numbers can be stored; the mapping depth between queue number qnum and the QDs in the Cache is 1024, and queue numbers and QDs in the Cache are in one-to-one correspondence, as shown by the Cache-qnum table in Fig. 8a.
Fig. 8b is a schematic structural diagram of the Cache active linked list. With reference to Fig. 8b, the depth of the RAM underlying the Cache active linked list is 1024; each entry stores, for the corresponding Cache pointer, its previous node (the previous cache pointer) and its next node (the next cache pointer). The Cache active linked list also contains the head pointer and tail pointer of the active linked list, which indicate the Cache addresses corresponding to the head and tail nodes of the active linked list. In Fig. 8b, the Cache active linked list has 4 nodes, which represent the link relationship of Cache addresses 0, 1, 2 and 3 respectively.
Fig. 8c is a schematic structural diagram of the Cache free linked list. With reference to Fig. 8c, the Cache free linked list contains the head pointer and tail pointer of the free linked list, which indicate the Cache free addresses corresponding to the head and tail nodes of the free linked list; the RAM underlying the Cache free linked list is used to manage the free addresses of the Cache.
Then, calling the event of carrying out queue operation processing an Active event: among the queues of all queue numbers in the list, the queue of the first queue number in the Cache active linked list is the one for which no Active event has occurred for the longest time, that is, the queue with the lowest activity level, while the queue of the tail queue number in the Cache active linked list is the one for which an Active event occurred most recently. Therefore, when the list depth exceeds the preset first threshold, the QD corresponding to the first queue number in the Cache active linked list is moved into the DRAM.
Further, the specific implementation flow of moving a QD from the Cache to the DRAM is described in detail with reference to Fig. 8a, Fig. 8b and Fig. 8c.
When it is detected that the QD corresponding to queue number n has moved from the Reg to the Cache, an address cp is requested from the Cache free linked list, the QD is written into the Cache at address cp, and queue number n is written into the Cache-qnum table. For the RAM underlying the Cache active linked list, cp is written to the next-node field at the address given by the tail pointer of the Cache active linked list, the tail pointer of the Cache active linked list is written to the previous-node field at address cp, the tail pointer of the active linked list is updated to cp, and the list depth is increased by 1.
When it is detected that the QD of queue number m has moved from address cp' in the Cache to the Reg, the QD corresponding to queue number m is read from Cache address cp', a free address is requested from the Reg free linked list, and the QD stored in the Cache is moved into the Reg. The RAM underlying the Cache active linked list is read at address cp' to obtain the previous node x and next node y of queue number m; y is written to the next-node field at address x, and x is written to the previous-node field at address y. That is, queue number m is deleted from the active linked list, nodes x and y are connected, and the depth of the Cache active linked list is decreased by 1.
The preset number of supported active queues, that is, the first threshold, is Cache_th. If the Cache linked list depth exceeds Cache_th, the QD stored in the Cache and the queue number in the Cache-qnum table are read using the head pointer of the Cache active linked list, and the QD corresponding to that queue number is moved into the DRAM.
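The active linked list behaves like an LRU list keyed by queue number: tail appends on every Reg-to-Cache release, unlinks on every Cache-to-Reg move, and head eviction when the list depth exceeds Cache_th. A compact sketch using Python's `OrderedDict` in place of the pointer-linked RAM (the class and method names are illustrative):

```python
from collections import OrderedDict

class CacheActiveList:
    """LRU-style active list: head = least recently active queue."""
    def __init__(self, cache_th):
        self.cache_th = cache_th       # preset number of supported active queues
        self.order = OrderedDict()     # queue_no -> QD; insertion order = activity

    def on_reg_to_cache(self, queue_no, qd):
        """QD released from Reg into Cache: append at the tail (most active).
        Returns the (queue_no, qd) pair to demote to DRAM, if any."""
        self.order[queue_no] = qd
        if len(self.order) > self.cache_th:
            # Head of the list: the queue longest without an Active event.
            return self.order.popitem(last=False)
        return None

    def on_cache_to_reg(self, queue_no):
        """QD moved from Cache into Reg: unlink it from the active list."""
        return self.order.pop(queue_no)
```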
Further, in order to ensure that the Reg has sufficient space to store the QDs in queue operation processing and that queue operation processing proceeds normally, in embodiment four of the processing method in queue operation of the present invention, the space usage of the Reg may also be detected in real time.
Fig. 9 is a schematic diagram of the real-time detection flow for the space usage of the Reg. With reference to Fig. 9, the processing method in queue operation of the present invention further includes:
Step 401: detecting the space usage of the register in real time to obtain a second detection result;
Step 402: when the second detection result exceeds a preset third threshold, stopping the output of the message descriptors in the message information and the enqueue operations on the message information, until the second detection result falls below a preset fourth threshold, at which point the output of the message descriptors in the message information and the enqueue operations on the message information are resumed; the preset fourth threshold is lower than the preset third threshold.
Here, the preset third threshold may be configured according to actual needs; for example, the value range of the third threshold may be a percentage between 90% and 98%. In this embodiment, the third threshold is taken to be 95% for the purpose of the detailed description.
The preset fourth threshold may be configured according to actual needs; for example, the value range of the fourth threshold may be a percentage between 80% and 88%. In this embodiment, the fourth threshold is taken to be 85% for the purpose of the detailed description.
Specifically, the space usage of the Reg is detected in real time to obtain the second detection result. When the second detection result exceeds 95%, the output of message descriptors in the message information is stopped and the message information enqueue flow is stopped; when the second detection result falls below 85%, the output of message descriptors in the message information is resumed and the message information enqueue flow is resumed.
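Step 402 is a hysteresis (high/low watermark) backpressure scheme: because the fourth threshold is lower than the third, the enqueue flow does not flap on or off around a single level. A sketch with the example thresholds 95% and 85% (the class and method names are illustrative):

```python
class RegBackpressure:
    """Stop enqueue when Reg usage crosses the high watermark; resume only
    after usage falls back below the (lower) low watermark."""
    def __init__(self, capacity, third_threshold=0.95, fourth_threshold=0.85):
        assert fourth_threshold < third_threshold   # fourth must be lower
        self.capacity = capacity
        self.high, self.low = third_threshold, fourth_threshold
        self.enqueue_enabled = True

    def update(self, used_slots):
        usage = used_slots / self.capacity          # second detection result
        if self.enqueue_enabled and usage > self.high:
            self.enqueue_enabled = False            # stop descriptor output
        elif not self.enqueue_enabled and usage < self.low:
            self.enqueue_enabled = True             # resume the enqueue flow
        return self.enqueue_enabled
```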
In embodiment five of the processing method in queue operation of the present invention, to illustrate the practical application of the processing method in queue operation of the present invention, the processing method is described in detail in combination with application scenario diagrams.
The processing method in queue operation of the present invention is mainly applied to the queue management system of a network device. Figure 10 is the first application scenario schematic diagram of embodiment five of the processing method in queue operation of the present invention. With reference to Figure 10, the system includes an ingress packet buffer module, an egress packet buffer module, a congestion avoidance module, an enqueue processing module, a queue scheduling module, a dequeue processing module, a packet buffer management module, the Map table, a QD management module and a QD cache module; the QD management module includes a move management module, and the QD cache module includes the DRAM, the Reg and the Cache.
The move flow when reading a QD during queue operation processing is first introduced with reference to Figure 10. Figure 11 is a schematic diagram of the move flow when reading a QD. With reference to Figure 11, the move flow when reading a QD specifically includes the following steps:
Step 501: the Map table receives query request information from the congestion avoidance module, the enqueue module or the dequeue module, the query request information including a queue number;
Step 502: the storage location information and address information of the QD corresponding to the queue number are queried in the Map table and sent to the QD management module;
Step 503: the move management module in the QD management module processes the QD according to the storage location information and address information of the QD queried from the Map table. When the storage location information of the QD indicates that the QD is stored in the Reg, step 504 is performed; when the storage location information of the QD indicates that the QD is stored in the Cache, step 505 is performed; when the storage location information of the QD indicates that the QD is stored in the DRAM, step 508 is performed;
Step 504: the QD is obtained from the address determined by the storage location information and address information of the QD, and step 5010 is performed;
Step 505: a free pointer is requested from the Reg;
Step 506: the QD is obtained from the address determined by the storage location information and address information of the QD, and at the same time the QD stored in the Cache is moved into the address pointed to by the Reg free pointer;
Step 507: the pointer of the QD moved out of the Cache is released, and step 5010 is performed;
Step 508: a free pointer is requested from the Reg;
Step 509: the QD is obtained from the address determined by the storage location information and address information of the QD, and at the same time the QD stored in the DRAM is moved into the address pointed to by the Reg free pointer;
Step 5010: the QD is returned to the congestion avoidance module, the enqueue processing module or the dequeue processing module.
The enqueue operation processing flow is then introduced with reference to Figure 10. Message information is received from the network and stored in the ingress packet buffer module; the message descriptor in the message information is sent to the congestion avoidance module. The congestion avoidance module determines the queue number of the queue to which the message information belongs according to the message descriptor, and sends the queue number to the Map table to request the query of the QD corresponding to the queue number.
The Map table receives the query request from the congestion avoidance module; the query request triggers the Map table to query the storage location information and address information of the QD corresponding to the queue number and send them to the QD management module.
The QD management module obtains the QD from the address determined by the storage location information and address information of the QD, and the move management module in the QD management module moves the QD stored in the Cache or the DRAM into the Reg; the storage location information and address information of the QD corresponding to the queue number in the Map table are updated, and the QD, together with its updated storage location information and address information, is sent to the congestion avoidance module.
After the congestion avoidance module receives the QD and the updated storage location information and address information of the QD, it reads the egress port of the queue number and, according to the QD and the egress port of the queue number, decides by the WRED algorithm whether the incoming message information can be enqueued. The decision result, the egress port of the queue number, the QD and the message descriptor in the message information are sent to the enqueue processing module through the enqueue operation pipeline. When the decision result indicates that the message information is not to be enqueued, the congestion avoidance module decides, according to the queue number, whether the QD stored in the Reg needs to be released: when the queue number is inconsistent with all queue numbers undergoing queue operation processing, the congestion avoidance module decides to release the QD stored in the Reg, and the move management module then moves the QD corresponding to the queue number stored in the Reg into the Cache.
The enqueue processing module receives the decision result, the egress port of the queue number, the QD and the message descriptor in the message information, and processes the message information according to the decision result. When the decision result indicates that the message information is not to be enqueued, the enqueue processing module reads the message data from the ingress packet buffer module and directly discards it. When the decision result indicates that the message information is to be enqueued, the enqueue processing module initiates a query request to the Map table and obtains the QD corresponding to the queue number and the updated storage location information and address information of the QD.
The enqueue processing module requests a packet buffer pointer from the packet buffer management module and writes the message information, through the enqueue operation pipeline, into the DRAM pointed to by the packet buffer pointer. After carrying out the enqueue operation processing on the message information, the enqueue processing module updates the QD and stores the updated QD into the address determined by the updated storage location information and address information of the QD. After the processing of the enqueue processing module is complete, the enqueued message information is available for scheduling by the queue scheduling module.
The congestion avoidance module decides, according to the queue number, whether the QD corresponding to the queue number stored in the Reg needs to be released: when the queue number is inconsistent with all queue numbers undergoing queue operation processing, the congestion avoidance module decides to release the QD stored in the Reg, and the move management module then moves the QD corresponding to the queue number stored in the Reg into the Cache.
Finally, the dequeue processing flow is introduced with reference to Figure 10:
the queue scheduling module obtains the port information to be scheduled and, according to the port information and the queue information, computes a queue number through the scheduling algorithm and sends it to the dequeue processing module;
the dequeue processing module sends the queue number to the Map table to request a lookup of the QD corresponding to the queue number;
the Map table receives the lookup request from the dequeue processing module, the request triggers the Map table lookup, and the storage location information and address information of the QD corresponding to the queue number obtained by the lookup are sent to the QD management module;
the QD management module obtains the QD from the address determined by the storage location information and address information of the QD, and the migration management module within the QD management module moves the QD stored in the Cache or DRAM into Reg; the storage location information and address information of the QD corresponding to the queue number in the Map table are updated, and the updated storage location information and address information of the QD are sent to the dequeue processing module;
the dequeue processing module reads from DRAM, at the head pointer of the queue in the QD, the message information, the packet buffer pointer and the next queued packet buffer pointer; after performing dequeue processing on the message information, the dequeue processing module updates the QD and stores the updated QD at the address determined by the updated storage location information and address information of the QD;
the Congestion Avoidance module decides, according to the queue number, whether the QD corresponding to the queue number stored in Reg needs to be released; when the queue number is inconsistent with every queue number currently undergoing queue operation processing, the Congestion Avoidance module decides to release the QD stored in Reg, at which point the migration management module moves the QD corresponding to the queue number out of Reg and into the Cache.
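The dequeue flow above can be sketched as follows; this is a minimal software illustration of the hardware modules the patent describes, and every class, field and function name (QDManager, fetch_to_reg, and so on) is a hypothetical stand-in, not an identifier from the patent:

```python
# Illustrative sketch of the dequeue flow: Map-table lookup, migration of the
# QD into Reg, dequeue at the head pointer, write-back, release to Cache.

class QD:
    """Queue descriptor: head/tail pointers and queue depth."""
    def __init__(self, head=None, tail=None, depth=0):
        self.head, self.tail, self.depth = head, tail, depth

class QDManager:
    """Holds QDs in one of three tiers: 'Reg', 'Cache' or 'DRAM'."""
    def __init__(self):
        self.map_table = {}  # queue number -> (tier, address)
        self.tiers = {"Reg": {}, "Cache": {}, "DRAM": {}}

    def install(self, qnum, qd, tier="DRAM", addr=0):
        self.tiers[tier][addr] = qd
        self.map_table[qnum] = (tier, addr)

    def fetch_to_reg(self, qnum):
        """Look up the QD via the Map table and migrate it into Reg."""
        tier, addr = self.map_table[qnum]
        qd = self.tiers[tier].pop(addr)
        self.tiers["Reg"][qnum] = qd          # Reg addressed by queue number here
        self.map_table[qnum] = ("Reg", qnum)  # update the Map table after migration
        return qd

    def release_to_cache(self, qnum):
        """After processing, move the QD from Reg back into the Cache."""
        qd = self.tiers["Reg"].pop(qnum)
        self.tiers["Cache"][qnum] = qd
        self.map_table[qnum] = ("Cache", qnum)

def dequeue(mgr, qnum, dram_cells):
    qd = mgr.fetch_to_reg(qnum)
    cell, next_ptr = dram_cells[qd.head]  # read cell and next queued pointer
    qd.head = next_ptr                    # advance the head pointer
    qd.depth -= 1                         # here depth counts cells, not bytes
    mgr.release_to_cache(qnum)            # no other operation pending on qnum
    return cell
```

The `release_to_cache` call models the case where the Congestion Avoidance module finds the queue number inconsistent with every queue still in processing; a fuller model would make that call conditional.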
To ensure that Reg and the Cache have sufficient space to store the QDs undergoing queue operation processing, the migration management module detects the space usage of Reg and the Cache in real time. When the migration management module detects that the used space of the Cache exceeds a preset first threshold, it moves the QD of the least active queue in the Cache into DRAM and releases the pointer of the QD moved out of the Cache. When the migration management module detects that the used space of Reg exceeds a preset third threshold, back-pressure is applied to the packet buffer module: outputting message descriptors is stopped and the message-information enqueue flow is stopped, until the migration management module detects that the used space of Reg falls below a preset fourth threshold, whereupon outputting message descriptors and the message-information enqueue flow are resumed.
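The Reg back-pressure rule above is a hysteresis loop: enqueueing stops above the third threshold and resumes only once usage has drained below the lower fourth threshold. A minimal sketch, with illustrative threshold values not taken from the patent:

```python
# Hysteresis back-pressure on Reg: stop the enqueue flow above THIRD_THRESHOLD,
# resume it only below FOURTH_THRESHOLD (which must be the smaller value).

THIRD_THRESHOLD = 80   # stop enqueueing above this usage level
FOURTH_THRESHOLD = 40  # resume enqueueing below this level

class RegMonitor:
    def __init__(self):
        self.enqueue_enabled = True

    def on_usage_sample(self, used_space):
        """Called on each real-time detection of Reg's used space."""
        if self.enqueue_enabled and used_space > THIRD_THRESHOLD:
            self.enqueue_enabled = False  # back-pressure: stop descriptors + enqueue
        elif not self.enqueue_enabled and used_space < FOURTH_THRESHOLD:
            self.enqueue_enabled = True   # usage drained: resume the enqueue flow
        return self.enqueue_enabled
```

Because the two thresholds differ, usage oscillating between them does not toggle the enqueue flow on every sample.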
Further, Figure 12 is the second application-scenario schematic diagram of embodiment five of the processing method in a queue operation of the present invention. Referring to Figure 12, a Cell is received from the network; according to the port number carried by the Cell, the storage location information and address information of the QD corresponding to the port number are looked up in the Map table, so that the QD is read from the address determined by the storage location information and address information, and the tail pointer and queue depth of the queue are determined from the QD (here the port number may equal the queue number). The Cell is written to the tail of the queue, the tail pointer of the queue in the corresponding QD is updated to the storage address of the Cell currently being enqueued, the queue depth in the QD is increased by the length of the Cell currently being enqueued, the Cell data is written to the Cell data buffer area in DRAM, and the Cell pointer is written to the Cell pointer buffer area in DRAM; at this point, enqueue processing is complete. On dequeue, the queue number to be scheduled is selected among the queues according to a round-robin (RR) scheduling rule; according to the queue number, the storage location information and address information of the QD corresponding to the queue number are looked up in the Map table, so that the QD is read from the address determined by the storage location information and address information; the head pointer and queue depth of the queue are determined from the QD; the Cell data is read from DRAM using the head pointer of the queue and output; the next queued Cell pointer is read from DRAM using the head pointer of the queue, and the head pointer of the queue in the QD is updated to that pointer; the queue depth in the QD is updated by subtracting the length of the Cell currently being dequeued; at this point, dequeue processing is complete.
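The Cell flow above can be sketched as a linked list threaded through two DRAM buffer areas, with the queue depth kept in bytes; the data structures here are illustrative assumptions, not the patent's hardware layout:

```python
# Sketch of the Figure 12 cell flow: enqueue updates the tail pointer and adds
# the cell length to the queue depth; dequeue follows the head pointer and
# subtracts the length of the departing cell.

class CellQueueQD:
    def __init__(self):
        self.head = None  # address of the first queued cell
        self.tail = None  # address of the last queued cell
        self.depth = 0    # queue depth in bytes of cell data

def enqueue_cell(qd, cell_store, ptr_store, addr, cell):
    cell_store[addr] = cell        # Cell data buffer area in DRAM
    ptr_store[addr] = None         # Cell pointer buffer area (next-pointer slot)
    if qd.tail is not None:
        ptr_store[qd.tail] = addr  # link the previous tail to the new cell
    else:
        qd.head = addr             # first cell: head points at it too
    qd.tail = addr                 # tail pointer := address of the enqueued cell
    qd.depth += len(cell)          # depth += length of the cell just enqueued

def dequeue_cell(qd, cell_store, ptr_store):
    addr = qd.head
    cell = cell_store[addr]        # read the cell data at the head pointer
    qd.head = ptr_store[addr]      # head pointer := next queued cell pointer
    qd.depth -= len(cell)          # depth -= length of the departing cell
    if qd.head is None:
        qd.tail = None             # queue is now empty
    return cell
```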
It can be seen that the processing method in a queue operation of the present invention can be applied to a device that performs virtual output queue (VOQ) queuing management on cells (Cells); accordingly, application of the processing method is not limited to the queue management system of a network device, and it can be applied in any system or device that integrates a queue queuing-management function.
The present invention further provides a processing apparatus in a queue operation, which is used to implement the details of the processing method in a queue operation of the present invention and achieves the same effects.
Figure 13 is a schematic diagram of the composition of embodiment one of the processing apparatus in a queue operation of the present invention. Referring to Figure 13, the processing apparatus in a queue operation of this embodiment includes: an acquisition module 61, a query module 62, a first migration module 63 and a first processing module 64; wherein,
the acquisition module 61 is configured to obtain the queue number of the queue to which message information to be processed belongs;
the query module 62 is configured to look up, in the mapping table, the storage location information and address information of the queue descriptor corresponding to the queue number;
the first migration module 63 is configured to obtain the queue descriptor according to the storage location information and address information of the queue descriptor, move the queue descriptor into the register, and update the storage location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
the first processing module 64 is configured to perform a queue operation on the message information to be processed according to the queue descriptor, and, after the queue operation is performed, update the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
Optionally, the first processing module 64 is specifically configured to: based on the queue descriptor and the queue number, according to a preset congestion avoidance policy, when it is determined that the message information to be processed is to be enqueued, perform an enqueue operation on the message information to be processed according to the queue descriptor; or, perform a dequeue operation on the message information to be processed according to the queue descriptor.
Figure 14 is the first schematic diagram of the refined composition of the first processing module in the processing apparatus shown in Figure 13. Referring to Figure 14, when the queue operation is an enqueue operation, the first processing module 64 includes: an application unit 641, a storage unit 642 and a first updating unit 643; wherein,
the application unit 641 is configured to apply for a packet buffer pointer pointing into the dynamic random access memory;
the storage unit 642 is configured to store, according to the packet buffer pointer, the message information to be processed and the packet buffer pointer into the dynamic random access memory;
the first updating unit 643 is configured to update the queue descriptor according to the packet buffer pointer and the message information of the enqueue operation, and store the updated queue descriptor into a target address, the target address being the address determined by the updated storage location information and address information of the queue descriptor.
Figure 15 is the second schematic diagram of the refined composition of the first processing module in the processing apparatus shown in Figure 13. Referring to Figure 15, when the queue operation is a dequeue operation, the first processing module 64 includes: a reading unit 644, a releasing unit 645 and a second updating unit 646; wherein,
the reading unit 644 is configured to read, according to the queue descriptor, the message information to be processed and the next queued packet buffer pointer from the dynamic random access memory, and dequeue the message information to be processed;
the releasing unit 645 is configured to release the packet buffer pointer of the message information that has been dequeued;
the second updating unit 646 is configured to update the queue descriptor according to the next queued packet buffer pointer and the message information of the dequeue operation, and store the updated queue descriptor into a target address, the target address being the address determined by the updated storage location information and address information of the queue descriptor.
Figure 16 is a schematic diagram of the composition of embodiment two of the processing apparatus in a queue operation of the present invention. In addition to the acquisition module 61, the query module 62, the first migration module 63 and the first processing module 64, the processing apparatus in a queue operation of this embodiment further includes: a first detection module 65, a second migration module 66 and a mapping-table updating module 67; wherein,
the first detection module 65 is configured to detect the queue number of the queue corresponding to the queue descriptor before the update;
the second migration module 66 is configured to, when the queue number of the queue corresponding to the updated queue descriptor is inconsistent with the queue numbers of the queues corresponding to all queue descriptors before update, move the updated queue descriptor stored in the register into the cache memory;
the mapping-table updating module 67 is configured to update, in the mapping table, the storage location information and address information of the updated queue descriptor corresponding to the queue number.
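The release decision performed by the second migration module reduces to a single check; a minimal sketch, with hypothetical names:

```python
# Sketch of the release decision in embodiment two: the updated QD is moved
# from the register to the cache only when its queue number matches none of
# the queue numbers still undergoing queue-operation processing.

def should_release(updated_qnum, qnums_in_processing):
    """True when the updated QD's queue number is inconsistent with every
    queue number currently being processed, so its register slot can be
    freed and the QD migrated to the cache."""
    return all(updated_qnum != q for q in qnums_in_processing)
```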
Since the QDs undergoing queue operation processing need to be moved into Reg, while the QDs that have completed queue operation processing and are stored in Reg need to be moved into the Cache, in order to ensure that the Cache has sufficient space to store the QDs undergoing queue operation processing and so ensure that queue operation processing proceeds normally, in embodiment three of the processing apparatus in a queue operation of the present invention, the used space of the Cache can also be detected in real time.
Specifically, Figure 17 is a schematic diagram of the functional modules for real-time detection of the used space of the Cache. Referring to Figure 17, the functional modules for real-time detection of the used space of the Cache include: a second detection module 71, a third migration module 72 and a release module 73; wherein,
the second detection module 71 is configured to detect the space usage of the cache memory in real time and obtain a first detection result;
the third migration module 72 is configured to, when the first detection result exceeds a preset first threshold, move the queue descriptors that are stored in the cache memory and belong to queues whose activity is below a preset second threshold into the dynamic random access memory;
the release module 73 is configured to release the pointers of the queue descriptors moved out of the cache memory.
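The eviction policy of embodiment three can be sketched as follows; threshold values, the activity metric and all names are illustrative assumptions (the patent does not fix how activity is measured):

```python
# Sketch of the Cache space management: when used space exceeds the first
# threshold, the least-active QD whose activity is below the second threshold
# is migrated to DRAM and its cache pointer released.

FIRST_THRESHOLD = 4    # max number of QDs the cache may hold (illustrative)
SECOND_THRESHOLD = 10  # activity level below which a QD may be evicted

def evict_if_needed(cache, dram, activity):
    """cache/dram: dicts mapping queue number -> QD; activity: qnum -> score.
    Returns the evicted queue number, or None if no eviction happened."""
    if len(cache) <= FIRST_THRESHOLD:
        return None  # used space still within the first threshold
    victim = min(cache, key=lambda q: activity[q])  # least-active queue
    if activity[victim] < SECOND_THRESHOLD:
        dram[victim] = cache.pop(victim)  # migrate the QD to DRAM
        return victim                     # its cache pointer is now released
    return None
```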
In order to ensure that Reg has sufficient space to store the QDs undergoing queue operation processing and so ensure that queue operation processing proceeds normally, in embodiment four of the processing apparatus in a queue operation of the present invention, the used space of Reg can also be detected in real time.
Specifically, Figure 18 is a schematic diagram of the functional modules for real-time detection of the used space of Reg. Referring to Figure 18, the functional modules for real-time detection of the used space of Reg include: a third detection module 81 and a second processing module 82; wherein,
the third detection module 81 is configured to detect the space usage of the register in real time and obtain a second detection result;
the second processing module 82 is configured to, when the second detection result exceeds a preset third threshold, stop outputting the message descriptor in the message information to be processed and stop performing enqueue operations on the message information to be processed, until the second detection result falls below a preset fourth threshold, whereupon outputting the message descriptor in the message information to be processed and performing enqueue operations on the message information to be processed are resumed; the preset fourth threshold is less than the preset third threshold.
In practical applications, the acquisition module 61, the query module 62, the first migration module 63, the first processing module 64, the first detection module 65, the second migration module 66, the mapping-table updating module 67, the second detection module 71, the third migration module 72, the release module 73, the third detection module 81 and the second processing module 82, as well as the application unit 641, the storage unit 642, the first updating unit 643, the reading unit 644, the releasing unit 645 and the second updating unit 646, can be implemented by a central processing unit (CPU, Central Processing Unit), a microprocessor (MPU, Micro Processor Unit), a digital signal processor (DSP, Digital Signal Processor) or a field programmable gate array (FPGA, Field Programmable Gate Array) located in the device.
The foregoing is only the preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (14)
1. A processing method in a queue operation, wherein the method comprises:
obtaining the queue number of the queue to which message information to be processed belongs;
looking up, in a mapping table, storage location information and address information of a queue descriptor corresponding to the queue number;
obtaining the queue descriptor according to the storage location information and address information of the queue descriptor, moving the queue descriptor into a register, and updating the storage location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
performing a queue operation on the message information to be processed according to the queue descriptor, and, after the queue operation is performed, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
2. The method according to claim 1, wherein performing the queue operation on the message information to be processed according to the queue descriptor comprises:
based on the queue descriptor and the queue number, according to a preset congestion avoidance policy, when it is determined that the message information to be processed is to be enqueued, performing an enqueue operation on the message information to be processed according to the queue descriptor; or,
performing a dequeue operation on the message information to be processed according to the queue descriptor.
3. The method according to claim 2, wherein, when the queue operation is an enqueue operation, performing the queue operation on the message information to be processed according to the queue descriptor, and, after the queue operation is performed, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor, comprises:
applying for a packet buffer pointer pointing into a dynamic random access memory;
storing, according to the packet buffer pointer, the message information to be processed and the packet buffer pointer into the dynamic random access memory;
updating the queue descriptor according to the packet buffer pointer and the message information of the enqueue operation, and storing the updated queue descriptor into a target address, the target address being the address determined by the updated storage location information and address information of the queue descriptor.
4. The method according to claim 2, wherein, when the queue operation is a dequeue operation, performing the queue operation on the message information to be processed according to the queue descriptor, and, after the queue operation is performed, updating the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor, comprises:
reading, according to the queue descriptor, the message information to be processed and the next queued packet buffer pointer from the dynamic random access memory, and dequeuing the message information to be processed;
releasing the packet buffer pointer of the message information that has been dequeued;
updating the queue descriptor according to the next queued packet buffer pointer and the message information of the dequeue operation, and storing the updated queue descriptor into a target address, the target address being the address determined by the updated storage location information and address information of the queue descriptor.
5. The method according to claim 1, wherein, after updating the queue descriptor, the method further comprises:
detecting the queue number of the queue corresponding to the queue descriptor before the update;
when the queue number of the queue corresponding to the updated queue descriptor is inconsistent with the queue numbers of the queues corresponding to all queue descriptors before update, moving the updated queue descriptor stored in the register into a cache memory;
updating, in the mapping table, the storage location information and address information of the updated queue descriptor corresponding to the queue number.
6. The method according to claim 5, wherein the method further comprises:
detecting the space usage of the cache memory in real time to obtain a first detection result;
when the first detection result exceeds a preset first threshold, moving the queue descriptors that are stored in the cache memory and belong to queues whose activity is below a preset second threshold into the dynamic random access memory;
releasing the pointers of the queue descriptors moved out of the cache memory.
7. The method according to claim 2, wherein the method further comprises:
detecting the space usage of the register in real time to obtain a second detection result;
when the second detection result exceeds a preset third threshold, stopping outputting the message descriptor in the message information to be processed and stopping performing enqueue operations on the message information to be processed, until the second detection result falls below a preset fourth threshold, whereupon outputting the message descriptor in the message information to be processed and performing enqueue operations on the message information to be processed are resumed, the preset fourth threshold being less than the preset third threshold.
8. A processing apparatus in a queue operation, wherein the apparatus comprises: an acquisition module, a query module, a first migration module and a first processing module; wherein,
the acquisition module is configured to obtain the queue number of the queue to which message information to be processed belongs;
the query module is configured to look up, in a mapping table, storage location information and address information of a queue descriptor corresponding to the queue number;
the first migration module is configured to obtain the queue descriptor according to the storage location information and address information of the queue descriptor, move the queue descriptor into a register, and update the storage location information and address information of the queue descriptor corresponding to the queue number in the mapping table;
the first processing module is configured to perform a queue operation on the message information to be processed according to the queue descriptor, and, after the queue operation is performed, update the queue descriptor according to the queue descriptor and the updated storage location information and address information of the queue descriptor.
9. The apparatus according to claim 8, wherein the first processing module is specifically configured to: based on the queue descriptor and the queue number, according to a preset congestion avoidance policy, when it is determined that the message information to be processed is to be enqueued, perform an enqueue operation on the message information to be processed according to the queue descriptor; or, perform a dequeue operation on the message information to be processed according to the queue descriptor.
10. The apparatus according to claim 9, wherein, when the queue operation is an enqueue operation, the first processing module comprises: an application unit, a storage unit and a first updating unit; wherein,
the application unit is configured to apply for a packet buffer pointer pointing into a dynamic random access memory;
the storage unit is configured to store, according to the packet buffer pointer, the message information to be processed and the packet buffer pointer into the dynamic random access memory;
the first updating unit is configured to update the queue descriptor according to the packet buffer pointer and the message information of the enqueue operation, and store the updated queue descriptor into a target address, the target address being the address determined by the updated storage location information and address information of the queue descriptor.
11. The apparatus according to claim 9, wherein, when the queue operation is a dequeue operation, the first processing module comprises: a reading unit, a releasing unit and a second updating unit; wherein,
the reading unit is configured to read, according to the queue descriptor, the message information to be processed and the next queued packet buffer pointer from the dynamic random access memory, and dequeue the message information to be processed;
the releasing unit is configured to release the packet buffer pointer of the message information that has been dequeued;
the second updating unit is configured to update the queue descriptor according to the next queued packet buffer pointer and the message information of the dequeue operation, and store the updated queue descriptor into a target address, the target address being the address determined by the updated storage location information and address information of the queue descriptor.
12. The apparatus according to claim 8, wherein the apparatus further comprises: a first detection module, a second migration module and a mapping-table updating module; wherein,
the first detection module is configured to detect the queue number of the queue corresponding to the queue descriptor before the update;
the second migration module is configured to, when the queue number of the queue corresponding to the updated queue descriptor is inconsistent with the queue numbers of the queues corresponding to all queue descriptors before update, move the updated queue descriptor stored in the register into a cache memory;
the mapping-table updating module is configured to update, in the mapping table, the storage location information and address information of the updated queue descriptor corresponding to the queue number.
13. The apparatus according to claim 12, wherein the apparatus further comprises: a second detection module, a third migration module and a release module; wherein,
the second detection module is configured to detect the space usage of the cache memory in real time and obtain a first detection result;
the third migration module is configured to, when the first detection result exceeds a preset first threshold, move the queue descriptors that are stored in the cache memory and belong to queues whose activity is below a preset second threshold into a dynamic random access memory;
the release module is configured to release the pointers of the queue descriptors moved out of the cache memory.
14. The apparatus according to claim 9, wherein the apparatus further comprises: a third detection module and a second processing module; wherein,
the third detection module is configured to detect the space usage of the register in real time and obtain a second detection result;
the second processing module is configured to, when the second detection result exceeds a preset third threshold, stop outputting the message descriptor in the message information to be processed and stop performing enqueue operations on the message information to be processed, until the second detection result falls below a preset fourth threshold, whereupon outputting the message descriptor in the message information to be processed and performing enqueue operations on the message information to be processed are resumed; the preset fourth threshold is less than the preset third threshold.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611158994.5A CN108234348B (en) | 2016-12-13 | 2016-12-13 | Processing method and device in queue operation |
PCT/CN2017/088613 WO2018107681A1 (en) | 2016-12-13 | 2017-06-16 | Processing method, device, and computer storage medium for queue operation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108234348A true CN108234348A (en) | 2018-06-29 |
CN108234348B CN108234348B (en) | 2020-09-25 |
Family
ID=62557883
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109656515A (en) * | 2018-11-16 | 2019-04-19 | 深圳证券交易所 | Operating method, device and the storage medium of queue message |
CN111526097A (en) * | 2020-07-03 | 2020-08-11 | 新华三半导体技术有限公司 | Message scheduling method, device and network chip |
CN112804156A (en) * | 2019-11-13 | 2021-05-14 | 深圳市中兴微电子技术有限公司 | Congestion avoidance method and device and computer readable storage medium |
WO2021128104A1 (en) * | 2019-12-25 | 2021-07-01 | 华为技术有限公司 | Message buffering method, integrated circuit system, and storage medium |
CN113343735A (en) * | 2021-08-05 | 2021-09-03 | 深圳市成为信息技术有限公司 | Tag processing method of reader-writer, reader-writer and storage medium |
CN114844847A (en) * | 2021-12-14 | 2022-08-02 | 合肥哈工轩辕智能科技有限公司 | High-reliability real-time message distribution method and device |
CN115277607A (en) * | 2022-07-15 | 2022-11-01 | 天津市滨海新区信息技术创新中心 | Two-stage mimicry judgment method under heterogeneous system complex flow condition |
CN117193669A (en) * | 2023-11-06 | 2023-12-08 | 格创通信(浙江)有限公司 | Discrete storage method, device and equipment for message descriptors and storage medium |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210130766A (en) | 2019-02-22 | 2021-11-01 | 후아웨이 테크놀러지 컴퍼니 리미티드 | Memory management methods and devices |
CN112350996A (en) * | 2020-10-15 | 2021-02-09 | 中国船舶重工集团公司第七一六研究所 | Communication message analysis system and method adaptable to protocol upgrading |
CN114401072B (en) * | 2021-12-12 | 2024-02-06 | 西安电子科技大学 | Dynamic buffer control method and system for frame disassembly reordering queue based on HINOC protocol |
CN115242726B (en) * | 2022-07-27 | 2024-03-01 | 阿里巴巴(中国)有限公司 | Queue scheduling method and device and electronic equipment |
CN115955441A (en) * | 2022-11-22 | 2023-04-11 | 中国第一汽车股份有限公司 | Management scheduling method and device based on TSN queue |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060069854A1 (en) * | 2004-09-30 | 2006-03-30 | Sanjeev Jain | Method and apparatus providing efficient queue descriptor memory access |
CN102750245A (en) * | 2012-05-29 | 2012-10-24 | 中国人民解放军国防科学技术大学 | Message receiving method, module and system as well as device |
CN103546392A (en) * | 2012-07-12 | 2014-01-29 | 中兴通讯股份有限公司 | Single queue cycle dispatching method and device |
US20140181409A1 (en) * | 2012-12-20 | 2014-06-26 | Oracle International Corporation | Method and system for queue descriptor cache management for a host channel adapter |
CN103914341A (en) * | 2013-01-06 | 2014-07-09 | 中兴通讯股份有限公司 | Data queue de-queuing control method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2018107681A1 (en) | 2018-06-21 |
CN108234348B (en) | 2020-09-25 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |