CN105099948A - Message information processing method and device - Google Patents

Message information processing method and device

Info

Publication number
CN105099948A
CN105099948A (Application No. CN201510386869.9A)
Authority
CN
China
Prior art keywords
message information
queue number
message
storage space
internal storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510386869.9A
Other languages
Chinese (zh)
Other versions
CN105099948B (en)
Inventor
王彬 (Wang Bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou H3C Technologies Co Ltd
Original Assignee
Hangzhou H3C Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou H3C Technologies Co Ltd filed Critical Hangzhou H3C Technologies Co Ltd
Priority to CN201510386869.9A priority Critical patent/CN105099948B/en
Publication of CN105099948A publication Critical patent/CN105099948A/en
Application granted granted Critical
Publication of CN105099948B publication Critical patent/CN105099948B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Multi Processors (AREA)

Abstract

The invention provides a message information processing method comprising the following steps: extracting message information after a message is received, and writing the message information with the same queue number, in receiving order, into adjacent nodes of a first internal storage space, wherein each node stores one piece of message information and the message information comprises the queue number and the message length of the message; reading the message information from the first internal storage space in the order in which it was written; compressing the message information with the same queue number into one message information package and storing it in a second internal storage space; and storing the message information in the second internal storage space into an external information cache. The invention further provides a message information processing device. The method and the device solve the problem of how to process queued message information at line rate under conditions of large-scale queues and high bandwidth.

Description

Message information processing method and device
Technical Field
The present invention relates to the field of communication technologies, and in particular to a message information processing method and device.
Background
Data messages are divided into different flows according to certain characteristics, and these different flows are referred to as queues. In the prior art, when the queue scale is small, queue information can be processed inside the chip; when the queue scale is large, the internal information cache is insufficient, and an external information cache is needed to store the queue information.
In the prior-art handling of message information, sending a read command to the external information cache and receiving the message information returned by the external information cache takes roughly 8 clock cycles. When two consecutive messages belong to the same queue, the previous message information may not yet have been flushed into the external information cache before the queue is read again, so the queue information read out is not the latest, which eventually causes system information errors. To avoid this, the processing of the next message information can only start after the previous message information has been completely processed. However, when the access latency of the external information cache is large and the interface traffic is heavy, starting the next message information only after the previous one has been processed means that incoming message information cannot be handled in time, and the traffic cannot reach line rate.
Summary of the Invention
Embodiments of the present invention provide a message information processing method and device, which solve the problem of how to process the message information of queues at line rate under large-scale queues and high bandwidth.
An embodiment of the present invention provides a message information processing method, comprising: extracting message information after a message is received, and writing the message information of the same queue number, in receiving order, into adjacent nodes of a first internal storage space, wherein each node stores one piece of message information, and the message information comprises the queue number and the message length of the message; reading message information from the first internal storage space in write order, compressing the message information of the same queue number into one message information package, and storing the package in a second internal storage space; and storing the message information in the second internal storage space into an external information cache.
An embodiment of the present invention further provides a message information processing device, comprising: a receiving unit, configured to extract message information after a message is received, and write the message information of the same queue number, in receiving order, into adjacent nodes of a first internal storage space, wherein each node stores one piece of message information, and the message information comprises the queue number and the message length of the message; a recombination unit, configured to read message information from the first internal storage space in write order, compress the message information of the same queue number into one message information package, and store the package in a second internal storage space; and a processing unit, configured to store the message information in the second internal storage space into an external information cache.
With the message information processing method and device provided by the present invention, the message information of the same queue number is written into adjacent nodes of the first internal storage space, the message information of that queue number is then compressed into one message information package and stored in the second internal storage space, and the message information in the second internal storage space is stored into the external information cache. In this way, even when the access latency of the external information cache is large and the interface traffic is heavy, the message information of the queues can still be processed at line rate.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a message information processing method according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a message information processing device according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of the first internal storage space according to an embodiment of the present invention when it is a ring comprising multiple nodes.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
To this end, an embodiment of the present invention provides a message information processing method. As shown in the schematic flowchart of Fig. 1, the method comprises the following steps:
Step 11: extract message information after a message is received, and write the message information of the same queue number, in receiving order, into adjacent nodes of the first internal storage space, wherein each node stores one piece of message information, and the message information comprises the queue number and the message length corresponding to the message.
The first internal storage space of the present invention may comprise multiple nodes, and each node may be a register. In a specific embodiment, the first internal storage space may be a ring comprising multiple nodes, each node stores one piece of message information, and read operations and write operations shift along a predetermined direction.
In this embodiment, writing the message information of the same queue number, in receiving order, into adjacent nodes of the first internal storage space comprises: taking the write-operation node as the search start node, searching the message information stored at each node in the direction opposite to the predetermined direction until the read-operation node is reached, and comparing the queue number of each message information found with the queue number of the current message information.
If the queue number of a found message information matches the queue number of the current message information, the message information stored from the node next to the found message information up to the write-operation node is shifted along the predetermined direction, the current message information is written into the node next to the found message information, and the node next to the write-operation node is taken as the new write-operation node.
If the queue number of the message information found does not match the queue number of the current message information, the current message information is written into the write-operation node, and the node next to the write-operation node is taken as the new write-operation node.
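As an illustration only (not part of the patent text), a minimal software sketch of the ring-insertion behaviour described above might look as follows; the ring size of 8, the MsgInfo and Ring names, and the pointer handling are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MsgInfo:
    queue_no: int   # queue number of the message
    length: int     # message length in bytes

class Ring:
    """Ring of nodes (e.g. registers); read/write pointers shift in one direction."""

    def __init__(self, size: int = 8):
        self.nodes: list[Optional[MsgInfo]] = [None] * size
        self.size = size
        self.read_ptr = 0    # read-operation node
        self.write_ptr = 0   # write-operation node

    def write(self, info: MsgInfo) -> None:
        # Search backwards from the write-operation node towards the
        # read-operation node for an entry with the same queue number.
        idx = self.write_ptr
        found = None
        while idx != self.read_ptr:
            idx = (idx - 1) % self.size
            entry = self.nodes[idx]
            if entry is not None and entry.queue_no == info.queue_no:
                found = idx
                break
        if found is not None:
            # Shift the entries between the match and the write-operation node
            # one slot forward, then place the new entry right after the match.
            dst = self.write_ptr
            while dst != (found + 1) % self.size:
                src = (dst - 1) % self.size
                self.nodes[dst] = self.nodes[src]
                dst = src
            self.nodes[(found + 1) % self.size] = info
        else:
            # No same-queue entry found: write into the write-operation node.
            self.nodes[self.write_ptr] = info
        # The node next to the write-operation node becomes the new one.
        self.write_ptr = (self.write_ptr + 1) % self.size
```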
Step 12: read message information from the first internal storage space in write order, and compress the message information of the same queue number that has been read into one message information package, which is stored in the second internal storage space.
In the present invention, the second internal storage space may be a FIFO (first-in first-out memory). The message information read from the first internal storage space is stored in the FIFO, and message information is then read from the FIFO for processing.
Specifically, reading message information from the first internal storage space in write order, compressing the message information of the same queue number into one message information package, and storing the package in the second internal storage space comprises:
reading message information in sequence when the second internal storage space is in a non-near-empty state;
when the queue number of the message information read differs from that of the previously read message information, accumulating the message lengths of the message informations with the same queue number to form a message information package of that queue number, which is stored in the second internal storage space.
Different FIFOs have different depths, and the thresholds corresponding to the set states also differ. In one embodiment, suppose the depth of the FIFO is 16, so it can hold 16 messages; an occupancy of 0 is defined as the empty state, [1, 5] as the near-empty state, [6, 16] as the non-near-empty state, and [10, 16] as the near-full state. It follows that the upper threshold of the non-near-empty state is 16 and its lower threshold is 6.
Further, before the message lengths of the message informations with the same queue number are accumulated, it is judged whether the message informations with the same queue number reach a predetermined quantity; if so, the message lengths of the predetermined quantity of message informations with the same queue number are accumulated to form one message information package of that queue number, which is stored in the second internal storage space.
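A simplified, software-only sketch of this grouping step follows (an illustration, not the patent's implementation): it reuses the MsgInfo structure from the previous sketch, consumes entries in write order, and merges up to a predetermined quantity (4 in the later example) of consecutive same-queue entries into one package. The gating on the FIFO's near-empty and near-full states and the clock-cycle timing are omitted.

```python
from collections import deque
from typing import Iterable

def compress_into_packages(entries: Iterable["MsgInfo"],
                           fifo: deque,
                           max_group: int = 4) -> None:
    """Merge consecutive entries with the same queue number into one
    message information package (reusing MsgInfo from the sketch above)
    and push each package into the FIFO (the second internal storage space)."""
    group = None
    count = 0
    for info in entries:
        if group is not None and info.queue_no == group.queue_no and count < max_group:
            # Same queue number: accumulate the message length.
            group = MsgInfo(group.queue_no, group.length + info.length)
            count += 1
        else:
            if group is not None:
                fifo.append(group)     # queue number changed or group full: emit the package
            group, count = info, 1     # start a new package
    if group is not None:
        fifo.append(group)             # emit the final package
```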
Step 13: store the message information in the second internal storage space into the external information cache.
The message information in the second internal storage space is read, and the queue number of the currently read message information is compared with the queue number of the previous message information. If the queue numbers are identical, the message information of the current queue number is stored into the external information cache only after the message information corresponding to the previous queue number has been stored into the external information cache; otherwise, the message information of the current queue number is stored into the external information cache directly.
The processing in this step consists of read and write operations on the external information cache. Specifically, a read command is sent to the external information cache, and the read message returned by the external information cache is received, the read message containing the message count of the current queue; a write command is then sent to the external information cache, and the message count of the current queue, incremented by 1, is written back into the external information cache.
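The following sketch (again an illustration under assumed names, not the patent's hardware) shows the processing-unit behaviour just described: compare the queue number with the previous one, wait out the external-cache read latency when they match, then perform the read-modify-write of the per-queue message count. The external cache is modelled as a dict, the packages as MsgInfo objects from the earlier sketches, and the wait as a rough cycle counter.

```python
from collections import deque

READ_LATENCY_CYCLES = 8   # approximate external-cache read latency (from the text)

def process_fifo(fifo: deque, ext_cache: dict) -> int:
    """Drain the second internal storage space into the external information
    cache, returning the number of clock cycles spent (a rough model)."""
    cycles = 0
    prev_queue = None
    while fifo:
        pkg = fifo.popleft()
        if pkg.queue_no == prev_queue:
            # Same queue as the previous package: wait until the previous
            # update has been flushed into the external cache.
            cycles += READ_LATENCY_CYCLES
        else:
            cycles += 2               # normal budget: one package per 2 cycles
        # Read-modify-write of the per-queue record: the returned read message
        # carries the queue's message count, which is written back incremented.
        count = ext_cache.get(pkg.queue_no, 0)
        ext_cache[pkg.queue_no] = count + 1
        prev_queue = pkg.queue_no
    return cycles
```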
The present invention is described in detail below by way of an example.
The number of clock cycles required to process one message is determined by the chip working frequency, the message length, and the interface bandwidth. In the embodiment of the present invention, with a chip working frequency of 312.5 MHz, an interface bandwidth of 100 Gbps, and messages being the shortest 64-byte packets, each message information must be processed within 2 clock cycles to guarantee line-rate processing of the traffic. The present invention therefore adds a recombination unit between the receiving unit and the processing unit, which compresses the message information of the same queue into one new message information. On the one hand, this makes the queue numbers of the message information read by the processing unit all different, so the processing unit does not need to wait for the previous message to finish before processing the next one and can keep processing one message information every 2 clock cycles. On the other hand, it reduces the frequency with which packages of the same queue appear: even if the processing unit does read message information of the same queue, as long as each large package after compression contains 4 original message informations, the recombination unit deposits one compressed package into the processing unit's FIFO only every 8 clock cycles, so the interval between successive packages is no shorter than the latency of the external information cache, and the traffic line rate is not affected even when each message has to wait for the previous one of the same queue to finish.
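The 2-cycle budget can be sanity-checked with a few lines (an informal check, not from the patent; the 20 bytes of per-frame preamble and inter-frame gap are an assumption about the Ethernet framing):

```python
CLK_MHZ = 312.5        # chip working frequency (from the text)
LINK_GBPS = 100        # interface bandwidth (from the text)
MIN_FRAME_BYTES = 64   # shortest packet length (from the text)
OVERHEAD_BYTES = 20    # preamble + inter-frame gap per frame (assumed)

clock_ns = 1e3 / CLK_MHZ                                       # 3.2 ns per cycle
frame_ns = (MIN_FRAME_BYTES + OVERHEAD_BYTES) * 8 / LINK_GBPS  # 6.72 ns per frame
print(frame_ns / clock_ns)   # ~2.1, i.e. about 2 clock cycles per message
```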
Fig. 2 is a schematic structural diagram of the message information processing device according to an embodiment of the present invention; the device comprises a receiving unit 201, a recombination unit 202, and a processing unit 203. The recombination unit 202 comprises the first internal storage space, which is a ring comprising multiple nodes. The processing unit 203 comprises the second internal storage space, which is a FIFO. The receiving unit 201 writes message information to the nodes of the ring in the recombination unit 202; the recombination unit 202 reads message information from the ring nodes, compresses it, and deposits it in the FIFO of the processing unit; and the processing unit 203 reads message information from the FIFO and performs read and write operations on the external information cache.
The message information processing device shown in Fig. 2 is described in detail below.
The receiving unit 201 extracts message information after a message is received, and writes the message information of the same queue number, in receiving order, into adjacent nodes of the first internal storage space, wherein each node stores one piece of message information, and the message information comprises the queue number and the message length of the message.
The recombination unit 202 is configured to read message information from the first internal storage space in write order, and to compress the message information of the same queue number into one message information package that is stored in the second internal storage space.
The processing unit 203 is configured to store the message information in the second internal storage space into the external information cache.
The read operations and write operations shift along a predetermined direction.
The receiving unit 201 is specifically configured to:
take the write-operation node as the search start node, search the message information stored at each node in the direction opposite to the predetermined direction until the read-operation node is reached, and compare the queue number of each message information found with the queue number of the current message information;
if the queue number of a found message information matches the queue number of the current message information, shift, along the predetermined direction, the message information stored from the node next to the found message information up to the write-operation node, write the current message information into the node next to the found message information, and take the node next to the write-operation node as the new write-operation node.
After comparing the queue number of the message information found with the queue number of the currently written message information, the receiving unit 201 is further configured to:
if the queue number of the message information found does not match the queue number of the current message information, write the current message information into the write-operation node, and take the node next to the write-operation node as the new write-operation node.
The recombination unit 202 is specifically configured to:
read message information in sequence when the second internal storage space is in a non-near-empty state;
when the queue number of the message information read differs from that of the previously read message information, accumulate the message lengths of the message informations with the same queue number to form a message information package of that queue number, which is stored in the second internal storage space.
Before the message lengths of the message informations with the same queue number are accumulated, the recombination unit 202 is further configured to:
judge whether the message informations with the same queue number reach a predetermined quantity;
if the predetermined quantity is reached, accumulate the message lengths of the predetermined quantity of message informations with the same queue number to form one message information package of that queue number, which is stored in the second internal storage space.
The processing unit 203 is specifically configured to:
read the message information in the second internal storage space, and compare the queue number of the currently read message information with the queue number of the previous message information;
if the queue numbers are identical, store the message information of the current queue number into the external information cache only after the message information corresponding to the previous queue number has been stored into the external information cache; otherwise, store the message information of the current queue number into the external information cache directly.
The working process of the units in Fig. 2 is described in detail below.
After the receiving unit receives a message, one piece of message information is always written onto a ring node of the recombination unit every 2 clock cycles; the message information comprises the queue number corresponding to the message and the message length. The specific handling of the ring nodes by the recombination unit is described in detail below.
1) When the FIFO is in the empty or near-empty state, the processing unit is efficient enough and there is no need to improve its efficiency by package recombination. In this case, after receiving the empty or near-empty status signal of the FIFO, the recombination unit only needs to keep reading one message information from the ring every 2 clock cycles and depositing it in the FIFO. It should be noted that when the FIFO is in the empty or near-empty state, the recombination unit does not compress the message information read from the ring but deposits it in the FIFO directly; even if message informations with the same queue number arrive consecutively, the recombination unit does not compress them.
The processing unit reads data from the FIFO every 2 clock cycles. When the queue number of the currently read message information is the same as the previous queue number, the processing unit must wait until the previous queue information has been processed before processing the current queue; this waiting time is 8 clock cycles. During these 8 clock cycles the processing unit does not read data from the FIFO, and it resumes reading from the FIFO after the 8 clock cycles. Because the processing unit does not read message information from the FIFO in time, messages back up in the FIFO; the backlog is 4 messages. Conversely, when the queue number of the currently read message information differs from the previous queue number, pipelined processing guarantees the traffic line rate; that is, the processing of the next message information can start without waiting for the previous operation to finish.
2) When the FIFO is in the non-near-empty state, the processing unit is not efficient enough. In this case, after receiving the non-near-empty status signal of the FIFO, the recombination unit still reads one message information from the ring every 2 clock cycles, compresses the message information read from the ring, and then deposits it in the FIFO.
The recombination unit reads the current message information from the ring and then examines the queue number of the next message information. If it is the same as the queue number of the current message information, the message lengths are accumulated; this continues until the queue number of the next message information differs from the queue number of the current message information, at which point the lengths of the message informations with the same queue number are accumulated to form a new message information package, which is stored in the FIFO. The queue number of this message information package is the same as the queue number of the message informations before compression. It should be noted that, since the latency of reading the external information cache is 8 clock cycles and the processing unit is required to process one message information every 2 clock cycles, compressing 4 message informations of the same queue number is enough to keep line rate even when the same queue has to wait. Therefore, the recombination unit finishes the current compression after compressing at most 4 message informations of the same queue number; subsequent message information of the same queue is compressed into the next new package.
It should be noted that when the recombination unit reads the current message information from the ring, if the message length is greater than a certain value, the recombination unit does not need to check whether the queue number of the next message information is the same as that of the current message information even if the FIFO is in the non-near-empty state, and writes the message information into the FIFO directly. This is because, in this embodiment, when a message is the shortest 64-byte packet, each message information must be processed within 2 clock cycles, and the latency of the external information cache is roughly 8 clock cycles; therefore, when the message length is greater than a certain value, namely 64 * 4 = 256 bytes, the time needed to process the message already exceeds the external cache latency, and the packet-loss problem addressed by the present invention does not arise.
The processing unit reads data from the FIFO every 2 clock cycles. At this point, the data read by the processing unit may be compressed data, and the message information corresponding to a queue number may be a large package formed by compressing multiple message informations. When the queue number of the currently read message information is the same as the previous queue number, the processing unit must wait until the previous queue information has been processed before processing the current queue; this waiting time is 8 clock cycles. Conversely, when the queue number of the currently read message information differs from the previous queue number, pipelined processing guarantees the traffic line rate; that is, the processing of the next message information can start without waiting for the previous operation to finish.
3) When the FIFO is in the near-full state, after receiving the near-full status signal of the FIFO, the recombination unit stops reading message information into the FIFO and, at the same time, stops compressing message information. However, one piece of message information is still written onto a ring node of the recombination unit every 2 clock cycles.
For the processing unit, when the FIFO is in the near-full state: if the queue number of the currently read message information is the same as the previous queue number, the processing unit resumes reading data from the FIFO only after 8 clock cycles, that is, after the previous queue has been processed; the FIFO state then changes from near-full to non-near-empty, and once the recombination unit receives the non-near-empty status signal of the FIFO, it starts again to read one message information every 2 clock cycles and deposit it in the FIFO. In the other case, if the queue number of the currently read message information differs from the previous queue number, processing continues at line rate and the processing unit resumes reading data from the FIFO after 2 clock cycles; the FIFO state likewise changes from near-full to non-near-empty, and once the recombination unit receives the non-near-empty status signal, it starts again to read one message information every 2 clock cycles and deposit it in the FIFO. In other words, after a certain number of clock cycles the processing unit starts reading data from the FIFO, the number of messages in the FIFO decreases, and the state changes, which triggers the recombination unit to read one message information from the ring every 2 clock cycles, compress the message information read from the ring, and deposit it in the FIFO.
During these 8 clock cycles, 4 message informations are written onto the ring nodes. After the 8 clock cycles, the FIFO changes from near-full to non-near-empty; once the recombination unit receives the non-near-empty status signal of the FIFO, it starts again to read one message information every 2 clock cycles and deposit it in the FIFO. While the above 4 messages written onto the ring nodes are being read and deposited into the FIFO, the information of the same queue on the ring must also be compressed: multiple message informations of the same queue number are compressed into one new package, the small packages becoming one new large package whose length is the sum of the lengths of the corresponding same-queue message informations on the ring.
It should be noted that when the FIFO is near-full, at most 4 messages back up on the ring; after 8 clock cycles the FIFO state changes from near-full to non-near-empty, and from then on the rate of information entering the ring matches the rate of information leaving it, so no further messages back up and the ring does not overflow.
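To summarize the three cases, a toy decision function is sketched below (an illustration only; the thresholds follow the depth-16 example given earlier, and the state names are paraphrases of the near-empty and near-full signals in the text):

```python
def recombination_action(fifo_occupancy: int) -> str:
    """What the recombination unit does in one 2-clock-cycle slot, keyed on
    the FIFO occupancy (depth-16 example: near-empty <= 5, near-full >= 10)."""
    if fifo_occupancy >= 10:
        # Case 3: FIFO near-full, so stop moving entries from the ring to the
        # FIFO and stop compressing; the ring keeps absorbing new entries.
        return "stall"
    if fifo_occupancy <= 5:
        # Case 1: FIFO empty or near-empty, so move one entry to the FIFO
        # without compression.
        return "pass-through"
    # Case 2: non-near-empty, so merge up to 4 consecutive same-queue entries
    # into one package before depositing it in the FIFO.
    return "compress"
```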
Since the processing unit sends a read command to the external information cache and the external information cache takes roughly 8 clock cycles to return the read message to the processing unit, the time actually required to completely process one message information is 8 clock cycles, while this embodiment requires one message to be processed every 2 clock cycles. As explained above, within the 8 clock cycles of latency, at most 4 messages are written onto the ring nodes if one message is processed every 2 clock cycles; since each message information is written onto one ring node, setting the number of ring nodes to 8 is sufficient.
Fig. 3 is a schematic structural diagram of the first internal storage space according to an embodiment of the present invention when it is a ring comprising multiple nodes. The number of nodes on the ring is 8, and one piece of message information is written into the corresponding ring node every 2 clock cycles. In this embodiment, assume that both read and write operations shift clockwise.
The write-operation node is taken as the search start node, and the message information stored at each node is searched counterclockwise until the read-operation node is reached; the queue number of each message information found is compared with the queue number of the message information currently being written. If they do not match, the current message information is written into the write-operation node, and the node next to the write-operation node is taken as the new write-operation node.
If they match, the message information stored from the node next to the found message information up to the write-operation node is shifted along the predetermined direction, the current message information is written into the node next to the found message information, and the node next to the write-operation node is taken as the new write-operation node.
In a concrete scenario, suppose the depth of the FIFO is 16, so it can hold 16 messages; an occupancy of 0 is the empty state, [1, 5] is the near-empty state, [6, 16] is the non-near-empty state, and [10, 16] is the near-full state.
In the initial state of the message information processing device, the processing unit is efficient enough, the FIFO is in the empty or near-empty state, and the recombination unit is not triggered to compress message information.
When the backlog of messages in the FIFO of the processing unit reaches 6, the FIFO is in the non-near-empty state, and the recombination unit is triggered to compress message information.
When the backlog of messages in the FIFO of the processing unit reaches 10, the FIFO is in the near-full state, and the recombination unit is triggered to stop compressing message information and, at the same time, to stop reading message information into the FIFO. After 8 or 2 clock cycles, the processing unit starts reading data from the FIFO, and the number of messages in the FIFO drops from 10 to 9. This means that the FIFO state changes from near-full to non-near-empty; once the recombination unit receives the non-near-empty status signal of the FIFO, it starts again to read one message information from the ring every 2 clock cycles, compress the message information read from the ring, and deposit it in the FIFO.
Throughout the processing, regardless of whether the message information stored in the FIFO has been compressed, the processing unit keeps reading from the FIFO and compares the queue number of the currently read message information with the previous queue number. If they are identical, the current queue is processed only after the queue information corresponding to the previous queue number has been processed; otherwise, the queue information of the current queue is processed directly.
In summary, the present invention recombines message information by packing the message information of the same queue into one new message information package in which the queue number is unchanged and the message lengths are accumulated. This reduces the frequency with which packages of the same queue appear and solves the waiting-delay problem when messages of the same queue are enqueued.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. A message information processing method, characterized by comprising:
extracting message information after a message is received, and writing the message information of the same queue number, in receiving order, into adjacent nodes of a first internal storage space, wherein each node stores one piece of message information, and the message information comprises the queue number and the message length of the message;
reading message information from the first internal storage space in write order, compressing the message information of the same queue number into one message information package, and storing the package in a second internal storage space;
storing the message information in the second internal storage space into an external information cache.
2. The method according to claim 1, characterized in that read operations and write operations shift along a predetermined direction;
and writing the message information of the same queue number, in receiving order, into adjacent nodes of the first internal storage space comprises:
taking the write-operation node as the search start node, searching the message information stored at each node in the direction opposite to the predetermined direction until the read-operation node is reached, and comparing the queue number of each message information found with the queue number of the current message information;
if the queue number of a found message information matches the queue number of the current message information, shifting, along the predetermined direction, the message information stored from the node next to the found message information up to the write-operation node, writing the current message information into the node next to the found message information, and taking the node next to the write-operation node as the new write-operation node.
3. The method according to claim 2, characterized in that, after comparing the queue number of the message information found with the queue number of the currently written message information, the method further comprises:
if the queue number of the message information found does not match the queue number of the current message information, writing the current message information into the write-operation node, and taking the node next to the write-operation node as the new write-operation node.
4. The method according to claim 1, characterized in that reading message information from the first internal storage space in write order, compressing the message information of the same queue number into one message information package, and storing the package in the second internal storage space comprises:
reading message information in sequence when the second internal storage space is in a non-near-empty state;
when the queue number of the message information read differs from that of the previously read message information, accumulating the message lengths of the message informations with the same queue number to form a message information package of that queue number, which is stored in the second internal storage space.
5. The method according to claim 4, characterized in that, before accumulating the message lengths of the message informations with the same queue number, the method further comprises:
judging whether the message informations with the same queue number reach a predetermined quantity;
if the predetermined quantity is reached, accumulating the message lengths of the predetermined quantity of message informations with the same queue number to form one message information package of that queue number, which is stored in the second internal storage space.
6. The method according to claim 1, characterized in that storing the message information in the second internal storage space into the external information cache comprises:
reading the message information in the second internal storage space, and comparing the queue number of the currently read message information with the queue number of the previous message information;
if the queue numbers are identical, storing the message information of the current queue number into the external information cache only after the message information corresponding to the previous queue number has been stored into the external information cache; otherwise, storing the message information of the current queue number into the external information cache directly.
7. A message information processing device, characterized by comprising:
a receiving unit, configured to extract message information after a message is received, and write the message information of the same queue number, in receiving order, into adjacent nodes of a first internal storage space, wherein each node stores one piece of message information, and the message information comprises the queue number and the message length of the message;
a recombination unit, configured to read message information from the first internal storage space in write order, compress the message information of the same queue number into one message information package, and store the package in a second internal storage space;
a processing unit, configured to store the message information in the second internal storage space into an external information cache.
8. The device according to claim 7, characterized in that read operations and write operations shift along a predetermined direction;
and the receiving unit is specifically configured to:
take the write-operation node as the search start node, search the message information stored at each node in the direction opposite to the predetermined direction until the read-operation node is reached, and compare the queue number of each message information found with the queue number of the current message information;
if the queue number of a found message information matches the queue number of the current message information, shift, along the predetermined direction, the message information stored from the node next to the found message information up to the write-operation node, write the current message information into the node next to the found message information, and take the node next to the write-operation node as the new write-operation node.
9. The device according to claim 8, characterized in that, after comparing the queue number of the message information found with the queue number of the currently written message information, the receiving unit is further configured to:
if the queue number of the message information found does not match the queue number of the current message information, write the current message information into the write-operation node, and take the node next to the write-operation node as the new write-operation node.
10. The device according to claim 7, characterized in that the recombination unit is specifically configured to:
read message information in sequence when the second internal storage space is in a non-near-empty state;
when the queue number of the message information read differs from that of the previously read message information, accumulate the message lengths of the message informations with the same queue number to form a message information package of that queue number, which is stored in the second internal storage space.
11. The device according to claim 10, characterized in that, before accumulating the message lengths of the message informations with the same queue number, the recombination unit is further configured to:
judge whether the message informations with the same queue number reach a predetermined quantity;
if the predetermined quantity is reached, accumulate the message lengths of the predetermined quantity of message informations with the same queue number to form one message information package of that queue number, which is stored in the second internal storage space.
12. The device according to claim 7, characterized in that the processing unit is specifically configured to:
read the message information in the second internal storage space, and compare the queue number of the currently read message information with the queue number of the previous message information;
if the queue numbers are identical, store the message information of the current queue number into the external information cache only after the message information corresponding to the previous queue number has been stored into the external information cache; otherwise, store the message information of the current queue number into the external information cache directly.
CN201510386869.9A 2015-06-30 2015-06-30 Message information processing method and device Active CN105099948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510386869.9A CN105099948B (en) 2015-06-30 2015-06-30 Message information processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510386869.9A CN105099948B (en) 2015-06-30 2015-06-30 Message information processing method and device

Publications (2)

Publication Number Publication Date
CN105099948A true CN105099948A (en) 2015-11-25
CN105099948B CN105099948B (en) 2018-06-15

Family

ID=54579527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510386869.9A Active CN105099948B (en) Message information processing method and device

Country Status (1)

Country Link
CN (1) CN105099948B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080279195A1 (en) * 2007-05-07 2008-11-13 Hitachi, Ltd. Multi-plane cell switch fabric system
CN101714947A (en) * 2009-10-30 2010-05-26 清华大学 Extensible full-flow priority dispatching method
CN102594694A (en) * 2012-03-06 2012-07-18 北京中创信测科技股份有限公司 Data distribution method and equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019024763A1 (en) * 2017-07-31 2019-02-07 新华三技术有限公司 Message processing
US11425057B2 (en) 2017-07-31 2022-08-23 New H3C Technologies Co., Ltd. Packet processing
WO2020073907A1 (en) * 2018-10-09 2020-04-16 华为技术有限公司 Method and apparatus for updating forwarding entry
CN111026324A (en) * 2018-10-09 2020-04-17 华为技术有限公司 Updating method and device of forwarding table entry
CN111026324B (en) * 2018-10-09 2021-11-19 华为技术有限公司 Updating method and device of forwarding table entry
US11316804B2 (en) 2018-10-09 2022-04-26 Huawei Technologies Co., Ltd. Forwarding entry update method and apparatus in a memory

Also Published As

Publication number Publication date
CN105099948B (en) 2018-06-15

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 466 Changhe Road, Binjiang District, Hangzhou, Zhejiang 310052, China

Applicant after: New H3C Technologies Co., Ltd.

Address before: No. 466 Changhe Road, Binjiang District, Hangzhou, Zhejiang 310052, China

Applicant before: Hangzhou H3C Technologies Co., Ltd.

GR01 Patent grant