WO2021047612A1 - Message processing method and apparatus, and computer storage medium - Google Patents

Message processing method and apparatus, and computer storage medium

Info

Publication number
WO2021047612A1
Authority
WO
WIPO (PCT)
Prior art keywords
message
original
group
size
original messages
Prior art date
Application number
PCT/CN2020/114600
Other languages
English (en)
French (fr)
Inventor
史济源
陈庆
张明礼
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201911207236.1A (published as CN112564856A)
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20863129.1A (published as EP3993294A4)
Priority to KR1020227004719A (published as KR20220029751A)
Priority to MX2022002529A (published as MX2022002529A)
Priority to JP2022515555A (published as JP7483867B2)
Publication of WO2021047612A1
Priority to US17/679,758 (published as US11683123B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0041 Arrangements at the transmitter end
    • H04L 1/0042 Encoding specially adapted to other signal generation operation, e.g. in order to reduce transmit distortions, jitter, or to improve signal shape
    • H04L 1/0056 Systems characterized by the type of code used
    • H04L 1/0057 Block codes
    • H04L 1/0064 Concatenated codes
    • H04L 1/0001 Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0006 Systems modifying transmission characteristics according to link quality by adapting the transmission format
    • H04L 1/0007 Systems modifying transmission characteristics according to link quality by adapting the transmission format by modifying the frame length
    • H04L 1/0008 Systems modifying transmission characteristics according to link quality by adapting the transmission format by modifying the frame length by supplementing frame payload, e.g. with padding bits
    • H04L 1/0078 Avoidance of errors by organising the transmitted data in a format specifically designed to deal with errors, e.g. location
    • H04L 1/0083 Formatting with frames or packets; Protocol or part of protocol for error control
    • H04L 1/12 Arrangements for detecting or preventing errors in the information received by using return channel
    • H04L 1/16 Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
    • H04L 1/18 Automatic repetition systems, e.g. Van Duuren systems
    • H04L 1/1809 Selective-repeat protocols

Definitions

  • This application relates to the field of data encoding technology, and in particular to a method, device and computer-readable storage medium for message processing.
  • When transmitting a data stream, a network device can first perform forward error correction (FEC) encoding on multiple original messages in the data stream to obtain redundant messages, and then send the original messages and the redundant messages to the decoding end.
  • In the related art, the process in which a network device encodes multiple original messages can be as follows: the network device fills invalid data into every original message other than the largest message among the multiple original messages, so that the size of each filled message is the size of the largest message among the multiple original messages, and the network device then performs FEC encoding based on the filled messages and the largest message to obtain redundant messages.
  • Because the network device fills invalid data into every original message that is smaller than the largest message, the filled original messages carry a large amount of invalid data. When the network device sends the filled original messages to the decoding end, they occupy a large amount of network bandwidth, resulting in a waste of network resources.
  • the present application provides a message processing method, device, and computer-readable storage medium, which can reduce the network bandwidth occupied in the process of message sending and avoid waste of network resources.
  • the technical solution is as follows:
  • According to a first aspect, a message processing method is provided. The method includes: performing splicing processing on the original messages other than a first message among multiple original messages to obtain at least one spliced message, where the first message is the largest message among the multiple original messages; and when the size of any spliced message in the at least one spliced message is smaller than the size of the first message, performing padding on the at least one spliced message to obtain equal-length data blocks, where the size of each data block in the equal-length data blocks is the size of the first message.
  • In this method, splicing processing is performed on the original messages other than the largest first message among the multiple original messages, and a spliced message is padded only when its size is smaller than the size of the largest message; it is not necessary to pad every other original message individually. Therefore, the padded equal-length data blocks contain much less filler data, which reduces the network bandwidth occupied during the transmission of the FEC-encoded messages and avoids the waste of network resources. A minimal sketch comparing the two padding strategies is given after this paragraph.
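  • The following is an illustrative sketch (not the patent's implementation) that compares the padding overhead of the related-art scheme, which pads every packet up to the largest size, with the splicing scheme described above; the packet sizes are arbitrary example values.

```python
# Illustrative comparison of padding overhead: pad every packet to the largest
# size (related art) versus splice smaller packets into groups that fit within
# the largest size and pad only the spliced result (this application's method).

def pad_all(sizes):
    """Related-art scheme: every packet is padded up to the largest size."""
    target = max(sizes)
    return sum(target - s for s in sizes)

def splice_then_pad(sizes):
    """Splicing scheme: greedily pack the non-largest packets into groups whose
    total size does not exceed the largest size, then pad each group."""
    target = max(sizes)
    rest = sorted(sizes, reverse=True)[1:]   # everything except the largest packet
    groups = []                              # current total size of each group
    for s in rest:
        for i, g in enumerate(groups):
            if g + s <= target:              # first group that still fits
                groups[i] = g + s
                break
        else:
            groups.append(s)                 # open a new group
    return sum(target - g for g in groups)

sizes = [1500, 400, 300, 700, 200, 600]      # hypothetical original packet sizes in bytes
print("padding bytes, pad-all:        ", pad_all(sizes))          # 5300
print("padding bytes, splice-then-pad:", splice_then_pad(sizes))  # 800
```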
  • In a possible implementation, performing splicing processing on the original messages other than the first message among the multiple original messages to obtain at least one spliced message includes: grouping the original messages other than the first message among the multiple original messages to obtain at least one message group, where each message group includes at least one original message and the sum of the sizes of all original messages in each message group is less than or equal to the size of the first message; and performing splicing processing on each message group to obtain the at least one spliced message.
  • In a possible implementation, performing splicing processing on the original messages other than the first message among the multiple original messages to obtain at least one spliced message includes:
  • the processor groups the original messages other than the first message among the multiple original messages to obtain at least one message group, and generates an encoding task according to the at least one message group, where the encoding task includes address information of each message group; and
  • the target hardware engine performs splicing processing on each message group according to the encoding task to obtain the at least one spliced message.
  • In a possible implementation, grouping the original messages other than the first message among the multiple original messages to obtain at least one message group includes:
  • sorting the multiple original messages according to the size of each original message to obtain the order i of each original message, where i is an integer greater than or equal to zero and the original message with order 0 is the first message;
  • when i = 1, dividing the original message with order 1 into the first target message group;
  • when i > 1, if the sum of the size of the j-th target message group and the size of the original message with order i is less than or equal to the target size, merging the j-th target message group and the original message with order i into the i-th target message group, where the target size is the size of the first message, the j-th target message group is the message group with the smallest order among the at least one target message group whose size plus the size of the original message with order i is less than or equal to the target size, the order of the j-th target message group is j, and 0 < j < i;
  • when i > 1, if no such j-th target message group exists, adding an i-th target message group and dividing the original message with order i into the i-th target message group; and
  • after the last original message among the multiple original messages is grouped, using each remaining target message group as a message group in the at least one message group.
  • In a possible implementation, performing splicing processing on each message group to obtain the at least one spliced message includes: when the sum of the number of the at least one message group and the number of first messages among the multiple original messages reaches a first target number, and the last message group in the at least one message group cannot accept any further original message, splicing each message group, where the first target number is the dimension of the matrix to be encoded when FEC encoding is performed once; or, when the number of the multiple original messages reaches a second target number, splicing each message group, where the second target number is the preset number of original messages used for performing FEC encoding once.
  • In a possible implementation, the method further includes: when the size of each spliced message in the at least one spliced message is equal to the size of the first message, performing FEC encoding on the first message and the at least one spliced message to obtain at least one redundant message.
  • According to a second aspect, a message processing apparatus is provided. The apparatus includes a plurality of functional modules, and the plurality of functional modules interact with each other to implement the method in the first aspect and the implementations thereof.
  • the multiple functional modules may be implemented based on software, hardware, or a combination of software and hardware, and the multiple functional modules may be combined or divided arbitrarily based on specific implementations.
  • According to a third aspect, a computer-readable storage medium is provided. The storage medium stores instructions, and the instructions are loaded and executed by a processor to implement the message processing method described above.
  • FIG. 1 is a schematic diagram of a message transmission system provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a message processing method provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for implementing packet processing in a network device according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of a message group determination process provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a message grouping provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an original message grouping process provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an encoding task provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an FEC message provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a flow chart of internal interaction of a network device provided by an embodiment of the present application.
  • FIG. 10 is a flowchart of an encoding and decoding provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a message processing apparatus provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a message transmission system provided by an embodiment of the present application.
  • As shown in FIG. 1, the system includes a first terminal 101, a first network device 102, a second network device 103, and a second terminal 104, where the first terminal 101 is used to generate a data stream and send the data stream to the first network device 102.
  • The first network device 102 splices the multiple original messages in the data stream into equal-length data blocks, where each data block has the same size, performs FEC encoding on the equal-length data blocks to obtain redundant messages for the multiple original messages, adds an FEC header to each data block and to each redundant message to encapsulate them into multiple FEC messages, and then sends the encoded stream composed of the FEC messages to the second network device 103.
  • When an original FEC message in the encoded stream is lost, the second network device 103 performs decoding and recovers the lost original FEC message, where an original FEC message is an FEC message obtained by encapsulating an original message.
  • The second network device 103 then recovers the data stream based on the recovered original FEC messages and the FEC messages that were not lost, and sends the data stream to the second terminal 104.
  • the first terminal 101 and the second terminal 104 may be mobile phones, laptop computers, etc., and the first network device 102 and the second network device 103 may be devices such as routers and switches.
  • the data stream may be a video stream, an audio stream, or a text stream composed of text data. The embodiment of the present application does not specifically limit the type of the data stream.
  • FIG. 1 takes a video stream as an example for illustration.
  • For example, the first user can use the camera on the first terminal 101 to record a video of the first user, and the first terminal 101 sends the original messages that make up the video to the first network device 102, forming a video stream (data stream).
  • the first network device 102 performs FEC encoding on the original packets in the video stream to obtain redundant packets.
  • An FEC message header is added to each original message and each redundant message to obtain the corresponding FEC messages.
  • the first network device 102 can send the FEC message to the second network device 103 through a wide area network (WAN).
  • After receiving the encoded stream, the second network device 103 determines, according to the FEC message headers, which original messages have been lost, and recovers the lost original messages according to the received FEC messages (including original messages and redundant messages). The second network device 103 assembles the recovered original messages and the original messages that were not lost into a data stream, and sends the data stream to the second terminal 104.
  • After the second terminal 104 receives the video stream, it plays the video, so that the second user can watch the video of the first user on the second terminal 104, thereby realizing a cross-LAN video session between the first user and the second user.
  • the first terminal 101, the first network device 102, the second network device 103, and the second terminal 104 may be distributed in the same local area network or in different local area networks.
  • the distribution mode of the first terminal 101, the first network device 102, the second network device 103, and the second terminal 104 is not specifically limited.
  • the first network device may directly use the processor to splice the original messages in the data stream to obtain equal-length data blocks, and the processor then performs FEC encoding on the equal-length data blocks.
  • Alternatively, the processor in the first network device can determine a splicing scheme according to the size of each original message and send the splicing scheme to a target hardware engine; the target hardware engine then uses the splicing scheme provided by the processor to splice the original messages in the data stream into equal-length data blocks and performs FEC encoding on the equal-length data blocks.
  • the processor may be a network processor (network processer, NP), a central processing unit (central processing unit, CPU), or a combination of NP and CPU.
  • the processor may also include a hardware chip, and the above-mentioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL) or any combination thereof.
  • an embodiment of the present application further provides a computer-readable storage medium, such as a memory including instructions, which may be executed by a processor in a network device to implement the methods provided in the following embodiments.
  • The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • In order to further illustrate the process of encoding the original messages in the data stream by the first network device, refer to the flowchart of a message processing method provided by an embodiment of the present application shown in FIG. 2. The method may include the following steps 201-208. In a possible implementation, when describing the process shown in FIG. 2, the steps of the entire process are first outlined, and then each step is described in detail.
  • Step 201 The first network device obtains multiple original messages.
  • the first network device can obtain multiple data streams.
  • the multiple original packets are packets in the first data stream among the multiple data streams, and are packets used by the first network device to perform FEC encoding once.
  • the first data stream is any data stream among multiple data streams.
  • The first data stream may include multiple original messages, each original message is used to carry data, and the data carried in an original message may be video data, audio data, or text data; the embodiment of the present application does not specifically limit the type of data in the first data stream.
  • Step 202 The processor groups the original messages other than the first message among the multiple original messages according to the size of the first message to obtain at least one message group, where each message group includes at least one original message and the sum of the sizes of all original messages in each message group is less than or equal to the size of the first message.
  • Step 203 The processor generates an encoding task according to the at least one message group, where the encoding task includes address information of each message group.
  • Step 204 The processor sends the encoding task to the target hardware engine.
  • the processor may also deliver the encoding task to the memory, and the target hardware engine reads the encoding task from the memory.
  • the processor can also issue encoding tasks to the memory or the target hardware engine according to the priority of the data stream. The processor can first issue the encoding task of the data stream with higher priority, and then issue the encoding task of the lower priority.
  • the embodiment of the application does not specifically limit the manner in which the processor issues the encoding task.
  • Step 205 The target hardware engine performs splicing processing on each message group according to the encoding task to obtain at least one spliced message.
  • Step 206 When the size of any spliced message in the at least one spliced message is smaller than the size of the first message, the target hardware engine performs padding on the at least one spliced message to obtain equal-length data blocks, where the size of each data block in the equal-length data blocks is the size of the first message.
  • The target hardware engine can pad any spliced message with target data to obtain a data block of the size of the first message.
  • The target data can be, for example, 0s or 1s. After the target hardware engine pads the at least one spliced message, equal-length data blocks are obtained; each data block in the equal-length data blocks corresponds to one spliced message, and the size of each data block is the size of the first message. A small helper illustrating this padding step is sketched below.
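  • The padding step itself can be illustrated with a minimal helper; the choice of zero bytes as the target data is just one of the options mentioned above.

```python
# Minimal sketch of the padding step: a spliced message that is smaller than the
# first (largest) message is filled with target data (zero bytes here) so that
# every data block ends up with the same length.

def pad_to_target(spliced: bytes, target_size: int, fill: bytes = b"\x00") -> bytes:
    """Pad a spliced message with fill bytes up to target_size."""
    if len(spliced) > target_size:
        raise ValueError("spliced message larger than the first message")
    return spliced + fill * (target_size - len(spliced))

# Example: two spliced messages padded to the size of the first message (12 bytes).
blocks = [pad_to_target(m, 12) for m in (b"hello", b"spliced-msg")]
assert all(len(b) == 12 for b in blocks)
```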
  • this step 206 may be executed by the processor.
  • Step 207 The target hardware engine performs forward error correction (FEC) encoding on the first message and the equal-length data blocks to obtain at least one redundant message.
  • Step 208 When the size of each spliced message in the at least one spliced message is equal to the size of the first message, the target hardware engine performs FEC encoding on the first message and the at least one spliced message to obtain at least one redundant message.
  • It should be noted that the first network device may perform FEC encoding on multiple original messages in the data stream, and after one round of FEC encoding is completed, it performs FEC encoding on other original messages in the data stream.
  • step 201 can be implemented through the process shown in the following steps 2011-2012.
  • Step 2011 The first network device stores the received original message in the data stream into the memory of the first network device.
  • FIG. 3 shows a flowchart of a method for implementing packet processing in a network device according to an embodiment of the present invention.
  • As shown in FIG. 3, the network interface of the first network device sends the received original message to the packet parse engine (PPE).
  • After the PPE receives the original message, it stores the original message in the memory and records the storage address of the original message, so that the PPE can collect the original messages of the data stream received by the network interface.
  • the network interface may be a gigabit ethernet (GE) network port or a ten-gigabit ethernet (GXE) network port.
  • Step 2012 If the amount of unencoded original messages of the data stream in the memory is equal to or greater than the second target number, the processor obtains the second target number of original messages from the unencoded original messages of the data stream.
  • the second target number is a preset number of original messages for performing FEC encoding once.
  • Since the first network device performs FEC encoding on the second target number of original messages at a time, it will subsequently encode the next set of the second target number of unencoded original messages. Therefore, for this data stream, each time the second target number of original messages are newly cached in the memory of the first network device, the newly cached second target number of original messages may be used as the multiple original messages to be encoded next.
  • the processor includes at least one of the NP and the CPU.
  • Figure 3 is still used as an example for description.
  • After the PPE collects a first original message, it stores the first original message in the memory and sends a storage completion message of the first original message to the NP.
  • the storage complete message of the first original message may carry the flow identifier of the first data flow to which the first original message belongs and the storage address of the first original message.
  • The flow identifier may be the name or number of the first data stream and is used to indicate that the first original message belongs to the first data stream.
  • the first original message is any original message in the first data stream.
  • the first network device sets a corresponding priority for each received data stream according to service requirements.
  • The first network device can preferentially process the data stream with the higher priority; accordingly, the storage completion message can also carry the priority of the data stream, so that each module in the first network device can process the original messages of the data stream according to the priority.
  • a priority information table can be configured on the first network device.
  • the priority information table includes multiple priorities from high to low. Each priority corresponds to a service type.
  • The service type of a data stream can be determined according to the data carried in the original messages of the data stream, and the priority corresponding to that service type can be determined from the priority information table, so that the first network device can set the corresponding priority for the data stream.
  • When the NP receives the storage completion message of the first original message, it parses the first original message in the memory according to the storage address carried in the storage completion message to determine whether the first original message needs to be encoded; if the first original message needs to be encoded, the NP generates an encoding notification message.
  • The encoding notification message can carry at least one of the storage address of the first original message, the size of the first original message, and the flow identifier and priority of the first data stream. The NP sends the encoding notification message to the protocol stack, and the user datagram protocol (UDP) proxy collects the encoding notification message from the protocol stack.
  • The UDP proxy can send the second target number of encoding notification messages to the FEC software module (this is also the process in which the FEC software module obtains the notification messages from the NP), and the FEC software module determines the splicing scheme for the second target number of original messages. A possible in-memory form of such a notification is sketched below.
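  • As an illustration only, the encoding notification message could be modeled as a small record carrying the fields listed above; the field names below are hypothetical and not taken from the source.

```python
# Illustrative sketch only: a possible in-memory layout for the encoding
# notification message. The source only states that the notification can carry
# the storage address, the message size, the flow identifier, and the priority.
from dataclasses import dataclass

@dataclass
class EncodingNotification:
    storage_address: int   # where the PPE stored the original message in memory
    message_size: int      # size of the original message in bytes
    flow_id: str           # identifier of the first data stream
    priority: int          # priority of the first data stream (ordering is an assumption)

notif = EncodingNotification(storage_address=0x1000, message_size=836,
                             flow_id="flow-7", priority=1)
```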
  • In a possible implementation, the first network device does not limit the number of original messages used for each FEC encoding, but instead limits the number of messages to be encoded in each FEC encoding, which is the dimension of the matrix to be encoded; that is, the first network device does not preset the second target number but presets a first target number.
  • Each time the UDP proxy receives an encoding notification message, it sends the encoding notification message to the FEC software module, so that the FEC software module can obtain multiple encoding notification messages.
  • After obtaining the multiple encoding notification messages, the FEC software module can determine the largest first message from the multiple original messages, and determine a third target number of splicing schemes according to the size of the largest message (denoted as the target size) and the sizes of the messages other than the first message (that is, the process in FIG. 3 in which the FEC software module determines the splicing scheme of the original messages). Each splicing scheme splices at least two original messages into one spliced message, the size of each spliced message is less than or equal to the target size, and the sum of the third target number and the number of first messages among the multiple original messages is equal to the first target number. The FEC software module then sends the splicing schemes to the target hardware engine, and the target hardware engine splices the original messages according to the splicing schemes.
  • the step 202 is described as follows:
  • The first message is the largest message among the multiple original messages. Each message group can be regarded as a splicing scheme, so the process of determining the message groups is also the process of determining the splicing schemes. The size of each message group is the sum of the sizes of all original messages in the message group.
  • the processor may first sort the multiple original messages according to the sizes of the multiple original messages, and then determine at least one message group according to the sorting of the multiple original messages and the size of the first message.
  • This step 202 can be implemented by the process shown in the following steps 2021-2025. In a possible implementation, this step can be executed by the FEC software module in the processor.
  • Step 2021 The processor sorts the multiple original messages according to the size of each original message and obtains the order i of each original message, where i is an integer greater than or equal to zero and the original message with order 0 is the first message.
  • FIG. 4 shows a schematic diagram of a message group determination process provided by an embodiment of the present application.
  • FIG. 4 includes a left picture, a middle picture, and a right picture, where the left picture includes an original message sequence containing five original messages, messages P1-P5, and message P1 is the first message.
  • The processor sorts messages P1-P5 according to their sizes and obtains the order of the five messages shown in the middle picture: messages P1, P4, P3, P5, and P2, with orders 0-4, respectively.
  • Since the size of the first message is the target size, the processor does not need to group the first message, or it may directly treat each first message as a message group.
  • Step 2022 When i = 1, the processor divides the original message with order 1 into the first target message group.
  • The processor may group the original messages other than the first message sequentially according to their orders from low to high. Still taking the middle picture in FIG. 4 as an example, the order of message P4 is 1, so the processor can directly divide message P4 into the first target message group.
  • Step 2023 When i > 1, if the sum of the size of the j-th target message group and the size of the original message with order i is less than or equal to the target size, the processor merges the j-th target message group and the original message with order i into the i-th target message group. The target size is the size of the first message, the j-th target message group is the message group with the smallest order among the at least one target message group whose size plus the size of the original message with order i is less than or equal to the target size, and the order of the j-th target message group is j, where 0 < j < i.
  • The size of a target message group is the sum of the sizes of the messages in the target message group. The j-th target message group is the message group in which the original message of order j is located, and the i-th target message group is the message group in which the original message of order i is located; that is, after the original message of order j is grouped, the j-th target message group is obtained, and after the original message of order i is grouped, the i-th target message group is obtained.
  • Because the processor groups multiple original messages, there may be multiple target message groups during the grouping process. For ease of description, each target message group is assigned an order: the order of the j-th target message group is j, and the order of the i-th target message group is i.
  • Still taking FIG. 4 as an example, the processor then groups message P3, whose order is 2. At this time, the first target message group includes only message P4, so the sum of the size of the first target message group and the size of message P3 is the sum of the sizes of messages P4 and P3. If this sum is less than or equal to the size of the first message P1 (that is, the target size), the processor merges messages P4 and P3 into the second target message group. Because the first target message group is merged into the second target message group, only the second target message group remains.
  • In other words, the processor combines the original message with order i and the j-th target message group to form the i-th target message group, where the j-th target message group is the message group with the smallest order among the at least one target message group.
  • Step 2024 When i > 1, if there is no j-th target message group, the processor adds an i-th target message group and divides the original message with order i into the i-th target message group.
  • That is, if the original message with order i cannot be placed in any existing target message group, the processor additionally adds a target message group as the i-th target message group and divides the original message with order i into it.
  • Still taking FIG. 4 as an example, when message P5 with order 3 is grouped, the current remaining target message group is the second target message group composed of messages P4 and P3. If the sum of the sizes of messages P4, P3, and P5 is less than or equal to the target size, the second target message group is the j-th target message group in the at least one target message group, and messages P4, P3, and P5 can be directly merged into the third target message group. If the sum of the sizes of messages P4, P3, and P5 is greater than the target size, the j-th target message group does not exist among the current remaining target message groups, so the processor adds a third target message group and divides message P5 into the newly added third target message group; at this time, both the second target message group and the third target message group remain.
  • Step 2025 After grouping the last original message among the multiple original messages, the processor uses each remaining target message group as a message group in the at least one message group.
  • After the processor completes the grouping of the last original message, the remaining target message groups are the final message groups. Therefore, the processor can use each remaining target message group as a message group in the at least one message group.
  • Still taking FIG. 4 as an example, suppose the processor merges messages P4, P3, and P5 into the third target message group. Message P2 with order 4 is the last of messages P1-P5. If the processor then merges the third target message group and message P2 into the fourth target message group, the fourth target message group is the final grouping of messages P2-P5; that is, the fourth target message group includes messages P2-P5 and is also a splicing scheme.
  • Each splicing scheme is used to indicate a spliced message of the size of the first message; if the size of a spliced message is smaller than the size of the first message, padding data can be added to the spliced message so that the size of the padded message is equal to the size of the first message.
  • In a possible implementation, at every interval of a preset time, the processor can group the original messages cached by the first network device within that preset time; the grouping process can be the process shown in steps 2021-2025.
  • the target size can be the size of the largest message among the original messages buffered by the first network device within the preset time.
  • The target size may also be the size of the largest message in the first data stream to which the multiple original messages belong. When the original messages buffered by the first network device within the preset time are exactly divided into the third target number of message groups, and the third target number of message groups cannot accommodate any other message, the current grouping ends. When the original messages buffered within the preset time have been classified into at least one message group, if the number of the at least one message group is equal to the third target number but the at least one message group can still accommodate other messages, the processor can continue to collect new original messages until the at least one message group cannot accommodate any new original message, and then the current grouping ends. If the number of the at least one message group is less than the third target number, the processor continues to collect new original messages and groups them until the third target number of message groups is obtained and none of them can accommodate a new original message, and then the current grouping ends.
  • In order to facilitate understanding of the process shown in steps 2021-2025, refer to the flowchart of message grouping provided by an embodiment of the present application shown in FIG. 5.
  • Step 501 The processor sets the number of original messages to be sorted once as the second target number M, where M is a positive integer.
  • Step 502 The network interface accumulatively receives M original messages.
  • Step 503 The processor arranges the M original messages in descending order according to the size of the M original messages to obtain an original message sequence including M original messages.
  • In the original message sequence, the larger an original message is, the closer it is to the front of the sequence, and the smaller it is, the closer it is to the back.
  • Step 504 The processor constructs a virtual message according to the maximum message size of the M original messages and inserts the virtual message into a virtual message sequence, where the size of the virtual message is the maximum message size of the M original messages and the virtual message sequence is used to hold virtual messages.
  • Each virtual message can be regarded as a virtual storage space whose size is the maximum message size of the M original messages; that is, a virtual message can be regarded as a message group.
  • Step 505 The processor traverses the message Pk of order k in the original message sequence, where 0 ≤ k < M.
  • Step 506 The processor queries whether there is a virtual message in the virtual message sequence whose remaining space can accommodate the message Pk; if such a virtual message exists, the processor places the message Pk in it.
  • Step 507 If it does not exist, the processor creates a new virtual message, inserts the newly created virtual message into the virtual message sequence, and places the message Pk in the newly created virtual message.
  • Step 508 After placing the message Pk in a virtual message, the processor queries whether there is an original message in the original message sequence that has not been traversed; if there is, the process jumps to step 505, otherwise, step 509 is executed.
  • Step 509 After all the original messages in the original message sequence are traversed, the processor treats each virtual message in the virtual message sequence as a final message group. A compact sketch of this grouping flow is given below.
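  • The following is a compact sketch, assuming message sizes are known in bytes, of the first-fit grouping flow of steps 501-509; it is an illustration rather than the patent's code.

```python
# Sketch of the grouping flow of steps 501-509: first-fit packing into "virtual
# messages" whose capacity is the largest message size. The return value is a
# list of message groups, each a list of indices into the original message list.

def group_messages(sizes):
    target = max(sizes)                       # step 503/504: capacity = largest message size
    order = sorted(range(len(sizes)), key=lambda i: sizes[i], reverse=True)
    groups, remaining = [], []                # parallel lists: members, free space per group
    for k in order:                           # step 505: traverse in descending size order
        for g, free in enumerate(remaining):  # step 506: find a virtual message that fits
            if sizes[k] <= free:
                groups[g].append(k)
                remaining[g] -= sizes[k]
                break
        else:                                 # step 507: no fit -> create a new virtual message
            groups.append([k])
            remaining.append(target - sizes[k])
    return groups                             # step 509: each virtual message is a final group

print(group_messages([1500, 400, 300, 700, 200, 600]))
# [[0], [3, 5, 4], [1, 2]]  -> the largest message fills its own group
```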
  • In a possible implementation, when the processor groups multiple original messages, it may divide part of the data of one original message into one message group and divide the remaining data of that original message into another message group.
  • In a possible implementation, in a grouping process, the processor divides the first received original message into the first message group, where the size of each message group is the target size and the target size can be any preset size. If the size of the first original message is less than or equal to the target size, the processor divides the entire first original message into the first message group; if the size of the first original message is greater than the target size, the processor virtually splits the first original message into at least two data blocks, where the size of the last data block is less than or equal to the target size and the size of every other data block is equal to the target size, and the processor divides each data block into a message group.
  • When the processor subsequently receives a new original message, if the sum of the size of the new original message and the current size of the last message group among all current message groups is less than or equal to the target size, the new original message is divided into the last message group; if the sum is greater than the target size, the new original message is virtually split into a first target data block and at least one second target data block, where the size of the first target data block is the difference between the target size and the current size of the last message group, the size of the last data block among the at least one second target data block is less than or equal to the target size, and the size of every other second target data block is equal to the target size. The processor divides the first target data block into the last message group and divides each second target data block into a newly added message group.
  • When the processor completes the division of the second target number of original messages, or when the first target number of message groups has been determined and the first target number of message groups can no longer accommodate other original messages, the current grouping ends. A sketch of this split-based packing is given below.
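  • A short sketch of the split-based packing variant described above follows; the target size and message sizes are arbitrary example values, and representing a part of a message as an (index, byte-count) pair is an assumption made only for illustration.

```python
# Sketch of the splitting variant: messages are packed into fixed-size groups in
# arrival order, and a message that does not fit in the last group is virtually
# split, with the first part filling the last group and the rest opening new groups.

def group_with_splitting(sizes, target):
    groups = [[]]                 # each group: list of (msg_index, bytes_taken)
    space = target                # free space in the last group
    for idx, size in enumerate(sizes):
        left = size
        while left > 0:
            take = min(left, space)
            groups[-1].append((idx, take))
            space -= take
            left -= take
            if space == 0:        # last group is full -> open a new one
                groups.append([])
                space = target
    if not groups[-1]:            # drop a trailing empty group
        groups.pop()
    return groups

# Example: target size 1000, messages arriving with sizes 600, 600, 900.
print(group_with_splitting([600, 600, 900], 1000))
# [[(0, 600), (1, 400)], [(1, 200), (2, 800)], [(2, 100)]]
```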
  • FIG. 6 shows a schematic diagram of an original message grouping process provided by an embodiment of the present application.
  • As shown in FIG. 6, original messages 1-2 and the former part of original message 4 are divided into message group 1, the latter part of original message 4, original message 5, and the former part of original message 6 are divided into message group 2, and the latter part of original message 6 and original messages 7-8 are divided into message group 3.
  • If the size of message group 3 is smaller than the target size, subsequent messages can still be spliced into message group 3. The output spliced messages are padded with data so that the size of each padded message is the target size.
  • In a possible implementation, when the first network device directly performs FEC encoding on the multiple original messages through the processor, the processor directly executes the following step 205, that is, performs splicing processing on each message group to obtain at least one spliced message, where each spliced message corresponds to one message group.
  • Otherwise, the processor executes the following step 203.
  • the step 203 is described as follows:
  • The encoding task is used to instruct the target hardware engine to splice the original messages in each message group into one spliced message and to encode the spliced messages; the address information of each message group includes the storage addresses of all original messages in that message group.
  • In a possible implementation, the address information of each message group also includes a splicing identifier; see, for example, the schematic diagram of an encoding task provided by the embodiment of this application shown in FIG. 7.
  • The encoding task in FIG. 7 includes multiple pieces of address information, and each piece of address information includes a splicing identifier and the storage address of each original message.
  • the splicing identifier can be used to indicate that all original messages in a message group are spliced into one splicing message.
  • the splicing identifier can also be used to indicate that all storage addresses in the address information of a message group are grouped together.
  • The embodiment of this application does not specifically limit the way of representing the splicing identifier.
  • the address information of each message group can be represented in the form of a set.
  • For example, if message group 2 includes original messages 3 and 4, and set 1 includes the storage addresses of original messages 3 and 4 and the splicing identifier, then set 1 is the address information of message group 2.
  • The address information of each message group can also be represented in the form of a character string. Still taking message group 2 as an example, if string 1 is "storage address of original message 3; storage address of original message 4; splicing identifier", then string 1 is also the address information of message group 2.
  • The address information of each message group can also be represented in the form of a table. Still taking message group 2 as an example, see Table 1 below: the storage addresses of original messages 3 and 4 in Table 1 both correspond to the same splicing identifier, and Table 1 is also the address information of message group 2.
  • the encoding task may also carry the storage address of the first message, so that the subsequent target hardware engine can obtain the first message according to the storage address of the first message.
  • the encoding task may also carry at least one of an encoding parameter, an identifier of a data stream, and a priority identifier of the first data stream.
  • The first data stream is the data stream in which the multiple original messages are located, and the encoding parameters may include the amount of messages to be encoded (that is, the first target number), the amount of redundant messages, and the target size, where the amount of messages to be encoded is the sum of the number of first messages among the multiple original messages and the number of the at least one message group. An illustrative encoding-task structure is sketched below.
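  • Purely as an illustration, an encoding task carrying the address information and encoding parameters described above might look like the following structure; all field names and address values are hypothetical.

```python
# Hypothetical sketch of an encoding task: each piece of address information
# groups the storage addresses of one message group under a splicing identifier,
# and the task also carries the first message's address and the encoding
# parameters (messages to be encoded Q, redundant messages R, target size L).

encoding_task = {
    "flow_id": "flow-7",
    "priority": 1,
    "first_message_address": 0x9000,
    "encoding_params": {"to_encode": 4, "redundant": 3, "target_size": 1500},
    "address_info": [
        {"splice_id": 1, "addresses": [0x1000, 0x2000]},   # one message group
        {"splice_id": 2, "addresses": [0x3000, 0x4000]},   # another message group
    ],
}
```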
  • the step 205 is described as follows:
  • The target hardware engine may obtain the at least one message group from the memory according to the address information in the encoding task, where the original messages in each message group are the messages at the storage addresses in one piece of address information. The target hardware engine splices the original messages in each message group into one spliced message, so that the target hardware engine obtains at least one spliced message.
  • In a possible implementation, when the first network device encodes the multiple original messages through the processor, if the sum of the number of the at least one message group and the number of first messages among the multiple original messages reaches the first target number, and the last message group in the at least one message group cannot accept any original message other than the multiple original messages, the processor splices each message group, where the first target number is the dimension of the matrix to be encoded in one FEC encoding.
  • Alternatively, when the number of the multiple original messages reaches the second target number, the processor performs splicing processing on each message group, where the second target number is the preset number of original messages used for one FEC encoding.
  • The process in which the processor splices each message group is the same as the process in which the target hardware engine splices each message group, and the embodiment of the present application does not repeat the description of the processor's splicing process.
  • Step 207 is described as follows:
  • The target hardware engine can obtain the encoding parameters from the encoding task. Assume that the amount of messages to be encoded in the encoding parameters is Q, the amount of redundant messages is R, and the target size is L, where Q and R are both positive integers and L is a value greater than 0. The amount of redundant messages R is the number of redundant messages obtained after FEC encoding the Q messages to be encoded, and the Q messages to be encoded include the first message among the multiple original messages and the equal-length data blocks.
  • the target hardware engine may acquire the first message from the memory according to the storage address of the first message carried by the encoding task, and use the acquired first message and the equal-length data block obtained in step 206 as the message to be encoded.
  • The target hardware engine can first compose the Q messages to be encoded into a Q*L matrix to be encoded, where each row of the matrix to be encoded is one equal-length data block or one first message. Based on the amount of messages to be encoded Q and the amount of redundant messages R, the target hardware engine constructs a (Q+R)*Q generator matrix composed of a first sub-matrix and a second sub-matrix, where the first sub-matrix is the Q*Q identity matrix and the second sub-matrix is an R*Q Cauchy matrix.
  • The element in the i-th row and j-th column of the second sub-matrix is 1/(x_{i-1} + y_{j-1}), where x_{i-1} and y_{j-1} are elements of the Galois field (GF) (2^w) whose sum x_{i-1} + y_{j-1} is non-zero, i and j are integers greater than or equal to 1, and w can be 8.
  • The target hardware engine multiplies the generator matrix by the matrix to be encoded to obtain a (Q+R)*L encoding matrix, where the encoding matrix includes the matrix to be encoded and an R*L check matrix, and each row of the check matrix is one redundant message. A minimal sketch of this Cauchy-matrix FEC encoding over GF(2^8) is given below.
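  • The following is a minimal sketch of such Cauchy-matrix FEC encoding over GF(2^8); the reduction polynomial 0x11d and the particular choice of the x and y field elements are assumptions, since the text only states that w can be 8.

```python
# Minimal sketch of Cauchy-matrix FEC encoding over GF(2^8). The generator
# matrix is [I_Q ; C], so only the R redundant rows (C times the data matrix)
# need to be computed; the first Q rows are the data blocks themselves.

def gf_mul(a, b):
    """Multiply two elements of GF(2^8), polynomial basis, modulus 0x11d (assumed)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D          # reduce by x^8 + x^4 + x^3 + x^2 + 1
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse via a^(2^8 - 2) = a^254 (a must be non-zero)."""
    r, e = 1, a
    for _ in range(7):          # accumulates a^(2 + 4 + ... + 128) = a^254
        e = gf_mul(e, e)
        r = gf_mul(r, e)
    return r

def cauchy_submatrix(r_rows, q_cols):
    """R x Q Cauchy matrix with x_i = i and y_j = R + j, so x_i + y_j != 0.
    Requires r_rows + q_cols <= 256 so that all chosen field elements are distinct."""
    return [[gf_inv(i ^ (r_rows + j)) for j in range(q_cols)] for i in range(r_rows)]

def fec_encode(blocks, redundancy):
    """blocks: Q equal-length rows (the first message plus the padded spliced
    messages). Returns the R redundant rows of the check matrix."""
    q, length = len(blocks), len(blocks[0])
    c = cauchy_submatrix(redundancy, q)
    redundant_rows = []
    for i in range(redundancy):
        row = bytearray(length)
        for j, block in enumerate(blocks):
            coef = c[i][j]
            for k, byte in enumerate(block):
                row[k] ^= gf_mul(coef, byte)   # addition in GF(2^8) is XOR
        redundant_rows.append(bytes(row))
    return redundant_rows

# Example with Q = 4 rows of length L = 4 and R = 2 redundant messages.
rows = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]),
        bytes([9, 10, 11, 12]), bytes([13, 14, 15, 0])]
print([r.hex() for r in fec_encode(rows, 2)])
```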
  • the process shown in steps 205-207 is also a process in which the target hardware engine performs splicing processing on each packet group according to the encoding task.
  • this step 207 may be executed by the processor.
  • the step 208 is described as follows:
  • In step 208, when the size of each spliced message in the at least one spliced message is equal to the size of the first message, the at least one spliced message can be regarded as equal-length data blocks; therefore, the target hardware engine can execute this step 208 directly. In a possible implementation, when the first network device encodes the multiple original messages through the processor, this step 208 may be executed by the processor.
  • When the target hardware engine obtains the at least one redundant message, it stores the at least one redundant message in the memory and sends an encoding completion message to the processor.
  • The encoding completion message may carry the storage address of the at least one redundant message, so that the processor can obtain the at least one redundant message from the memory.
  • Alternatively, the target hardware engine directly sends the at least one redundant message to the processor; the embodiment of this application does not specifically limit the manner in which the processor obtains the at least one redundant message.
  • the processor can obtain the original message in each message group from the memory, and perform splicing processing on each message group to obtain at least one spliced message.
  • When the size of any spliced message in the at least one spliced message is less than the size of the first message, the at least one spliced message is padded to obtain equal-length data blocks, and the first message among the multiple original messages is obtained from the memory.
  • the target hardware engine can also directly send the equal-length data block and the first message to the processor, and the embodiment of the present application does not specifically limit the manner in which the processor obtains the equal-length data block and the first message.
  • After the processor obtains the equal-length data blocks, the first message, and the at least one redundant message, it adds an FEC message header to each data block in the equal-length data blocks and to the first message to obtain multiple original FEC messages, and adds an FEC message header to each redundant message to obtain at least one redundant FEC message, where the FEC message header carries the encoding parameters.
  • For example, in the schematic diagram of an FEC message provided by the embodiment of this application shown in FIG. 8, the FEC message includes an FEC message header and a payload message. When the payload message is an original message, the FEC message is an original FEC message; when the payload message is a redundant message, the FEC message is a redundant FEC message.
  • the FEC message header is used to indicate the encoding conditions of the multiple original messages.
  • The FEC message header carries the encoding parameters of the multiple original messages. For example, the FEC message header in FIG. 8 carries the amount of messages to be encoded, the amount of redundant messages, and the target size.
  • the FEC packet header may also carry the target identifiers of the multiple original packets, and the target identifier is used to indicate the number of times the data streams to which the multiple original packets belong is encoded.
  • The FEC message header can also carry the sequence numbers of the multiple original messages in the data stream (that is, the original sequence numbers of the original messages before splicing), the sizes of the original messages, the sequence number of each first message and each data block in the matrix to be encoded, the sequence number of each redundant message in the encoding matrix, and the target splicing identifier corresponding to each data block, where the target splicing identifier is used to indicate that the data block is a spliced data block. An illustrative header layout is sketched below.
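  • The exact on-wire layout of the FEC message header is not given in the text; the following hypothetical packing merely illustrates how a header carrying a few of the fields listed above could be prepended to a payload.

```python
# Hypothetical packing of an FEC message header: amount of messages to be
# encoded, amount of redundant messages, target size, block sequence number,
# and a splicing flag, followed by the payload. Field widths are assumptions.
import struct

FEC_HEADER = struct.Struct("!BBHHB")   # to_encode, redundant, target_size, block_seq, spliced_flag

def make_fec_message(payload: bytes, to_encode: int, redundant: int,
                     target_size: int, block_seq: int, spliced: bool) -> bytes:
    header = FEC_HEADER.pack(to_encode, redundant, target_size, block_seq,
                             1 if spliced else 0)
    return header + payload

msg = make_fec_message(b"\x00" * 1500, to_encode=4, redundant=3,
                       target_size=1500, block_seq=2, spliced=True)
assert len(msg) == FEC_HEADER.size + 1500
```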
  • After the processor obtains the original FEC messages and the redundant FEC messages, it may send the original FEC messages and the redundant FEC messages to the second network device.
  • Alternatively, the target hardware engine can directly encapsulate the first message and the equal-length data blocks into original FEC messages, encapsulate the redundant messages into redundant FEC messages, and send the original FEC messages and redundant FEC messages to the second network device.
  • FIG. 9 illustrates the interaction of the internal modules when the first network device performs FEC encoding on multiple original messages.
  • Step 901 Whenever the network interface receives an original message from the network, it sends the original message to the PPE.
  • Step 902 The PPE writes the received original message into the memory.
  • Step 903 The PPE sends the storage completion message of the original message to the NP to notify the NP to process the original message in the memory.
  • Step 904 After receiving the storage completion message, the NP analyzes the original message to determine whether the original message needs encoding, and if encoding is required, the NP sends an encoding notification message of the original message to the CPU.
  • The CPU can send the encoding notification message to the FEC software module through the protocol stack and the UDP proxy.
  • the specific process is described in the above step 2012, and will not be repeated here.
  • Step 905 After the FEC software module in the CPU collects the second target number of encoding notification messages, it has in effect collected all of the second target number of original messages that need to be encoded. The FEC software module then generates, based on the second target number of original messages, the encoding task for these original messages and sends it to the target hardware engine.
  • The encoding task is equivalent to a notification message informing the target hardware engine to perform FEC encoding according to the encoding task.
  • Step 906 The target hardware engine obtains the second target number of original messages from the memory according to the encoding task, and performs FEC encoding on the second target number of original messages to obtain at least one redundant message.
  • step 906 is the same as the process shown in steps 205-207.
  • the embodiment of the present application does not repeat the process shown in this step 906.
  • Step 907 The target hardware engine writes the at least one redundant message into the memory and sends the storage address of the at least one redundant message to the FEC software module, so that the FEC software module obtains the at least one redundant message and sends the second target number of original messages and the at least one redundant message to the second network device.
  • the FEC software module may be deployed in the NP, and may also be deployed in the CPU.
  • the embodiment of the present application does not specifically limit the deployment manner of the FEC software module.
  • In the method provided in this embodiment of this application, the original messages other than the largest first message among the multiple original messages are spliced, and a spliced message is padded only when its size is smaller than the size of the largest message, so that each of the other original messages does not need to be padded individually. Therefore, the padded equal-length data blocks contain less padding data, which reduces the network bandwidth occupied when the FEC-encoded messages are transmitted and avoids wasting network resources.
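  • A minimal sketch of this splice-then-pad idea, under the assumption that the grouping step has already decided which original messages go together: each group is concatenated into one spliced message, and only the concatenation (not each individual message) is padded up to the size of the largest message. Function and variable names are illustrative, not taken from the patent.

```python
from typing import List

def splice_and_pad(groups: List[List[bytes]], target_size: int, fill: bytes = b"\x00") -> List[bytes]:
    """Concatenate each group of original messages and pad the result to target_size.

    Padding happens once per spliced message, instead of once per original
    message, which is where the bandwidth saving comes from.
    """
    blocks = []
    for group in groups:
        spliced = b"".join(group)
        if len(spliced) > target_size:
            raise ValueError("group exceeds the size of the largest message")
        if len(spliced) < target_size:
            spliced = spliced + fill * (target_size - len(spliced))
        blocks.append(spliced)
    return blocks

if __name__ == "__main__":
    first = b"X" * 1000                      # largest original message
    groups = [[b"A" * 400, b"B" * 500],      # 900 bytes -> 100 bytes of padding
              [b"C" * 300, b"D" * 600]]      # 900 bytes -> 100 bytes of padding
    rows = [first] + splice_and_pad(groups, len(first))
    print([len(r) for r in rows])            # every row is 1000 bytes
```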
  • FIG. 10 is a flow chart of encoding and decoding provided in an embodiment of this application. The process includes the following steps 1001-1006.
  • Step 1001 After the FEC software module in the first network device has collected the second target number of original messages, it determines, from these original messages, the first message and at least one message group, and sends an encoding task to the target hardware engine.
  • Before step 1001, the UDP proxy collects the original messages in the data stream sent by the first terminal and sends the storage addresses and sizes of the collected original messages to the FEC software module. After the FEC software module has received the storage addresses and sizes of the second target number of original messages, step 1001 is performed.
  • Taking the second target number being 6 as an example: after the FEC software module has collected the storage addresses and sizes of original messages 1-6 from the UDP proxy, it has collected the 6 original messages. The FEC software module uses original messages 1 and 6 as first messages, divides original messages 2 and 4 into message group 1, and divides original messages 3 and 5 into message group 2.
  • The FEC software module may further splice original messages 2 and 4 into spliced message 1, and splice original messages 3 and 5 into spliced message 2. When the size of either spliced message is smaller than the size of the first message, that spliced message is padded to obtain a padded message; otherwise, no padding is applied.
  • FIG. 10 uses as an example the case where each spliced message already has the size of the first message. The FEC software module uses original messages 1 and 6 and spliced messages 1-2 as one matrix to be encoded; in FIG. 10, original message 1 is recorded as message 1 in the matrix to be encoded, spliced messages 1-2 are recorded as messages 2-3, and original message 6 is recorded as message 4. (A small sketch of this grouping with hypothetical sizes follows below.)
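  • The concrete message sizes in FIG. 10 are not given in this excerpt, so the snippet below simply invents plausible sizes to reproduce the grouping above: messages 1 and 6 are the largest, (2, 4) and (3, 5) are spliced, and the four resulting rows all end up with the size of the first message.

```python
# Hypothetical sizes only; the figure does not state them.
sizes = {1: 1200, 2: 700, 3: 600, 4: 500, 5: 600, 6: 1200}
messages = {i: bytes([i]) * n for i, n in sizes.items()}

target = max(sizes.values())                       # size of the first message(s)
rows = [
    messages[1],                                   # message 1 in the matrix
    messages[2] + messages[4],                     # spliced message 1 -> message 2
    messages[3] + messages[5],                     # spliced message 2 -> message 3
    messages[6],                                   # message 4 in the matrix
]
# Pad any row that is still shorter than the target size.
rows = [r + b"\x00" * (target - len(r)) for r in rows]
print([len(r) for r in rows])                      # all rows equal the target size
```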
  • Step 1002 According to the address set in the encoding task, the target hardware engine obtains the matrix to be encoded formed by original messages 1 and 6 and spliced messages 1-2, constructs the corresponding generator matrix according to the encoding parameters in the encoding task, and performs FEC encoding based on the matrix to be encoded and the generator matrix to obtain redundant messages a, b, and c. It stores redundant messages a, b, and c in the memory and sends their storage addresses to the FEC software module.
  • Step 1003 The FEC software module obtains redundant messages a, b, and c from the memory according to their storage addresses, adds an FEC message header to each of messages 1-4 and redundant messages a-c to obtain multiple FEC messages, and sends the encoded stream composed of these FEC messages to the UDP proxy. The UDP proxy sends the encoded stream to the second network device over the WAN.
  • Step 1004 The UDP proxy of the second network device receives the encoded stream, in which messages 1 and 3 have been lost. After the FEC software module of the second network device has collected messages 2 and 4 and any two of the redundant messages a, b, and c, it delivers a decoding task to the target hardware engine in the second network device. The decoding task carries the storage addresses of messages 2 and 4 and of the two collected redundant messages, together with the decoding parameters, where the decoding parameters include the encoding parameters, the number of lost messages, the positions of the lost messages in the encoding matrix, and the target size.
  • Step 1005 According to the storage addresses in the decoding task, the target hardware engine obtains messages 2 and 4 and any two of the redundant messages a, b, and c from the memory and combines them into the matrix to be decoded. The target hardware engine constructs the corresponding decoding matrix according to the decoding parameters, performs FEC decoding based on the matrix to be decoded and the decoding matrix to recover the lost messages 1 and 3, stores the recovered messages 1 and 3 in the memory, and sends their storage addresses to the FEC software module.
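  • A decode-side sketch, again using the single-XOR-parity simplification from earlier rather than the matrix-based code of the embodiment: with one parity block, exactly one lost row can be rebuilt by XOR-ing the surviving rows with the parity. The real scheme can recover as many lost rows as there are redundant messages; this toy version only shows the bookkeeping (lost position plus surviving rows in, recovered row out).

```python
from typing import Dict

def recover_one_lost_row(received: Dict[int, bytes], parity: bytes, lost_pos: int) -> bytes:
    """Recover the single lost row at lost_pos from surviving rows and one XOR parity.

    received maps matrix positions to surviving equal-length rows; parity is the
    XOR of all original rows. This stands in for the matrix inversion used by
    the actual decoder.
    """
    recovered = bytearray(parity)
    for pos, row in received.items():
        if pos == lost_pos:
            continue
        for i, byte in enumerate(row):
            recovered[i] ^= byte
    return bytes(recovered)

if __name__ == "__main__":
    rows = {1: b"A" * 8, 2: b"B" * 8, 3: b"C" * 8, 4: b"D" * 8}
    parity = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*rows.values()))
    survived = {k: v for k, v in rows.items() if k != 3}        # row 3 is "lost"
    print(recover_one_lost_row(survived, parity, lost_pos=3) == rows[3])
```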
  • Step 1006 According to the storage addresses of the recovered messages 1 and 3, the FEC software module obtains original message 1 and spliced message 2 from the memory, together with the received spliced message 1 and original message 6. It splits spliced message 1 into original messages 2 and 4 and spliced message 2 into original messages 3 and 5, restores the order of original messages 1-6 in the data stream, and sends original messages 1-6 to the UDP proxy in that order to form the data stream; the UDP proxy sends the data stream to the second terminal.
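  • Assuming, as the description suggests, that the FEC header records the original sequence numbers and original sizes of the messages carried inside each spliced block, the split-and-reorder step might look like the sketch below. The metadata layout (`(seq, size)` pairs per block) is an assumption for illustration.

```python
from typing import Dict, List, Tuple

def split_and_reorder(blocks: Dict[int, bytes],
                      layout: Dict[int, List[Tuple[int, int]]]) -> List[bytes]:
    """Split spliced/padded blocks back into original messages and restore stream order.

    blocks maps matrix position -> block payload (spliced blocks may carry padding).
    layout maps matrix position -> list of (original sequence number, original size)
    describing which original messages the block contains, in order.
    """
    originals = {}
    for pos, payload in blocks.items():
        offset = 0
        for seq, size in layout[pos]:
            originals[seq] = payload[offset:offset + size]   # padding past the last message is dropped
            offset += size
    return [originals[seq] for seq in sorted(originals)]

if __name__ == "__main__":
    blocks = {1: b"1" * 1200,
              2: b"2" * 700 + b"4" * 500,
              3: b"3" * 600 + b"5" * 600,
              4: b"6" * 1200}
    layout = {1: [(1, 1200)], 2: [(2, 700), (4, 500)], 3: [(3, 600), (5, 600)], 4: [(6, 1200)]}
    stream = split_and_reorder(blocks, layout)
    print([len(m) for m in stream])   # original messages 1-6, back in order
```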
  • FIG. 11 is a schematic structural diagram of a message processing apparatus provided by an embodiment of the present application.
  • the apparatus 1100 includes:
  • the processor 1101 is configured to execute the above step 201;
  • the processor 1101 is further configured to splice, according to the size of the first message among the multiple original messages, the original messages other than the first message among the multiple original messages to obtain at least one spliced message, where the first message is the message with the largest data amount among the multiple original messages;
  • the processor 1101 is further configured to: when the size of any spliced message in the at least one spliced message is smaller than the size of the first message, perform padding processing on the at least one spliced message to obtain equal-length data blocks, where the size of each of the equal-length data blocks is the size of the first message;
  • the processor 1101 is further configured to perform forward error correction FEC encoding processing on the first message and the equal-length data blocks to obtain at least one redundant message.
  • the device further includes a memory for storing program instructions.
  • the processor 1101 may be a CPU, configured to run instructions stored in the memory, and used to implement the message processing method shown in FIG. 2.
  • In a possible implementation, the processor 1101 is configured to: perform the above step 202, and splice each message group to obtain the at least one spliced message, where each spliced message corresponds to one message group.
  • In a possible implementation, the message processing apparatus further includes a target hardware engine 1102;
  • the processor 1101 is further configured to perform the above steps 202-203;
  • the target hardware engine 1102 is configured to perform the above step 205.
  • In a possible implementation, the processor 1101 is configured to:
  • sort the multiple original messages according to the size of each original message to obtain an order i for each original message, where i is an integer greater than or equal to zero and the original message with order 0 is the first message;
  • when i=1, divide the original message with order 1 into the 1st target message group, where the order of the 1st target message group is 1;
  • when i>1, if the sum of the size of the j-th target message group and the size of the original message with order i is less than or equal to the target size, merge the j-th target message group and the original message with order i into the i-th target message group, where the target size is the size of the first message, the j-th target message group is the message group with the smallest order among the at least one target message group whose size plus the size of the original message with order i is less than or equal to the target size, and the order of the j-th target message group is j, 0<j<i;
  • when i>1, if no such j-th target message group exists, add an i-th target message group and divide the original message with order i into the i-th target message group;
  • after the last original message among the multiple original messages has been grouped, use each remaining target message group as one message group in the at least one message group (a sketch of this grouping procedure follows below).
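  • This grouping is essentially a first-fit placement over messages sorted by decreasing size, with the group capacity equal to the size of the largest message. The sketch below follows that reading; names are illustrative, and the function returns the groups of "other" messages together with the first message.

```python
from typing import List, Tuple

def group_messages(messages: List[bytes]) -> Tuple[bytes, List[List[bytes]]]:
    """Group original messages so each group's total size fits the largest message.

    Messages are sorted by decreasing size (order 0 is the first message); every
    other message is placed into the lowest-order existing group it still fits
    into, otherwise a new group is opened.
    """
    ordered = sorted(messages, key=len, reverse=True)
    first, others = ordered[0], ordered[1:]
    target = len(first)

    groups: List[List[bytes]] = []          # one entry per target message group
    totals: List[int] = []                  # running size of each group
    for msg in others:
        for k, total in enumerate(totals):  # lowest-order group that can take msg
            if total + len(msg) <= target:
                groups[k].append(msg)
                totals[k] += len(msg)
                break
        else:                               # no group fits: open a new one
            groups.append([msg])
            totals.append(len(msg))
    return first, groups

if __name__ == "__main__":
    msgs = [b"P1" * 500, b"P2" * 100, b"P3" * 300, b"P4" * 350, b"P5" * 250]
    first, groups = group_messages(msgs)
    print(len(first), [[len(m) for m in g] for g in groups])
```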
  • In a possible implementation, the processor 1101 is further configured to:
  • splice each message group when the sum of the number of the at least one message group and the number of first messages among the multiple original messages reaches a first target number and no original message other than the multiple original messages can be added to the at least one message group, where the first target number is the dimension of the matrix to be encoded in one FEC encoding pass; or,
  • splice each message group when the number of the multiple original messages equals a second target number and the second target number of original messages have been divided into the at least one message group, where the second target number is a preset number of original messages used for one FEC encoding pass (a small check of these two trigger conditions is sketched below).
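  • The two triggering modes can be restated as two simple predicates; which one a device uses depends on whether it fixes the matrix dimension (first target number) or the number of collected original messages (second target number). This is only an illustrative restatement of the conditions above.

```python
def ready_by_matrix_dimension(num_groups: int, num_first: int,
                              first_target: int, groups_full: bool) -> bool:
    """Mode 1: the matrix rows (first messages + groups) reach the FEC matrix
    dimension and no further original message can be added to any group."""
    return (num_groups + num_first) == first_target and groups_full

def ready_by_message_count(num_collected: int, second_target: int,
                           grouping_done: bool) -> bool:
    """Mode 2: a preset number of original messages has been collected and grouped."""
    return num_collected == second_target and grouping_done

if __name__ == "__main__":
    print(ready_by_matrix_dimension(num_groups=2, num_first=2, first_target=4, groups_full=True))
    print(ready_by_message_count(num_collected=6, second_target=6, grouping_done=True))
```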
  • In a possible implementation, the processor 1101 is further configured to: when the size of each spliced message in the at least one spliced message is equal to the size of the first message, perform forward error correction FEC encoding processing on the first message and the at least one spliced message to obtain at least one redundant message.
  • In a possible implementation, the target hardware engine 1102 is further configured to perform the above step 208.
  • When the message processing apparatus provided in the foregoing embodiment processes a message, the division into the above functional modules is used only as an example; in actual applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above.
  • In addition, the message processing apparatus provided in the foregoing embodiment and the message processing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
  • A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.


Abstract

A message processing method and apparatus, and a computer-readable storage medium, relating to the field of data coding technologies. In the method, the original messages other than the largest first message among multiple original messages are spliced, and a spliced message is padded only when its size is smaller than the size of the largest message, so that each of the other original messages does not need to be padded individually. Therefore, the padded equal-length data blocks contain less padding data, which reduces the network bandwidth occupied during transmission of FEC-encoded messages and avoids wasting network resources.

Description

报文处理方法、装置以及计算机存储介质
本申请要求了2019年11月29日提交的,申请号为201911207236.1,发明名称为“报文处理方法、装置及计算机存储介质”的中国申请的优先权,以及要求2019年09月10日提交的申请号为201910854179.X、申请名称为“等长编码块生成方法、装置和系统”的中国专利申请的优先权,它们全部内容通过引用结合在本申请中。
技术领域
本申请涉及数据编码技术领域,特别涉及一种报文处理方法、装置以及计算机可读存储介质。
背景技术
随着编解码技术的发展,网络设备在传输数据流过程中,可以先对数据流中的多个原始报文进行前向纠错(forward error correction,FEC)编码,得到冗余报文,并将原始报文和冗余报文发送至解码端。
目前,网络设备对多个原始报文进行编码的过程可以是:网络设备在多个原始报文中除最大报文以外的其他原始报文中填充无效数据,使得填充后的报文大小均为多个原始报文中最大报文的大小,网络设备基于填充后的报文以及最大报文进行FEC编码,得到冗余报文。
在上述编码过程中,由于网络设备会在每个比最大报文小的原始报文内填充无效数据,从而得到填充后的原始报文中填充了大量的无效数据,填充后的原始报文的数据量较大,当网络设备将填充后的原始报文发送至解码端的过程中,导致填充后的原始报文占用大量的网络带宽,造成网络资源浪费。
发明内容
本申请提供了一种报文处理方法、装置以及计算机可读存储介质,能够降低报文发送过程中所占用的网络带宽,避免网络资源浪费。该技术方案如下:
第一方面,提供了一种报文处理方法,该方法包括:
获取多个原始报文;
根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的其他原始报文进行拼接处理,得到至少一个拼接报文,所述第一报文为所述多个原始报文中最大的报文;
当所述至少一个拼接报文中任一拼接报文的大小小于所述第一报文的大小时,对所述至少一个拼接报文进行填充处理,得到等长数据块,所述等长数据块中每个数据块的大小为所述第一报文的大小;
将所述第一报文和所述等长数据块做前向纠错FEC编码处理,得到至少一个冗余报文。
本方法通过将多个原始报文中除最大的第一报文以外的其他原始报文进行了拼接处理,只有当拼接出的拼接报文大小小于最大报文的大小时,才会对拼接报文进行填充处理,而无 需对每个其他原始报文都进行填充处理,因此,填充后的等长数据块中填充的数据较少,降低了FEC编码报文传输时所占用的网络带宽,避免了网络资源浪费。
在一种可能的实现方式中,所述根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的其他原始报文进行拼接处理,得到至少一个拼接报文包括:
根据所述多个原始报文中的第一报文的大小,对所述多个原始报文中除所述第一报文以外的其他原始报文进行分组,得到至少一个报文组,每个报文组包括至少一个原始报文,每个报文组中所有原始报文的大小之和小于或等于所述第一报文的大小;
对每个报文组进行拼接处理,得到所述至少一个拼接报文,每个拼接报文对应一个报文组。
在一种可能的实现方式中,所述根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的报文进行拼接处理,得到至少一个拼接报文包括:
处理器根据所述多个原始报文中的第一报文的大小,对所述多个原始报文中除所述第一报文以外的其他原始报文进行分组,得到至少一个报文组,并根据所述至少一个报文组,生成编码任务,所述编码任务包括每个报文组的地址信息;
目标硬件引擎根据所述编码任务,对每个报文组进行拼接处理,得到所述至少一个拼接报文。
在一种可能的实现方式中,所述根据所述多个原始报文中的第一报文的大小,对所述多个原始报文中除所述第一报文以外的其他原始报文进行分组,得到至少一个报文组包括:
根据所述多个原始报文中各个原始报文的大小,对所述多个原始报文进行排序,得到每个原始报文的次序i,i为大于等于零的整数,其中,次序为0的原始报文为所述第一报文;
当i=1时,将次序为1的原始报文划分至第1个目标报文组,所述第1个目标报文组的次序为1;
当i>1时,若第j个目标报文组的大小和次序为i的原始报文的大小之和小于或等于目标大小,则将第j个目标报文组和所述次序为i的原始报文合并为第i个目标报文组,所述目标大小为所述第一报文的大小,所述第j个目标报文组为至少一个目标报文组中次序最小的报文组,且所述第j个目标报文组的次序为j,所述至少一个目标报文组中每个报文组的大小和次序为i的原始报文的大小之和均小于或等于所述目标大小,0<j<i;
当i>1时,若不存在所述第j个目标报文组,则增加一个所述第i个目标报文组,将所述次序为i的原始报文划分至所述第i个目标报文组;
当将所述多个原始报文中最后一个原始报文分组完成后,将当前剩余的每个目标报文组作为所述至少一个报文组中的一个报文组。
在一种可能的实现方式中,所述对每个报文组进行拼接处理,得到所述至少一个拼接报文包括:
当所述至少一个报文组的个数与所述多个原始报文中第一报文的个数之和达到第一目标个数,且所述至少一个报文组中不能再添加所述多个原始报文以外的其他原始报文时,对每个报文组进行拼接处理,所述第一目标个数为进行一次FEC编码时待编码矩阵的维数;或,
当所述多个原始报文的个数等于第二目标个数,且将所述第二目标个数的原始报文划分为至少一个报文组时,对每个报文组进行拼接处理,所述第二目标个数为预设的用于进行一 次FEC编码的原始报文的个数。
在一种可能的实现方式中,所述方法还包括:
当所述至少一个拼接报文中每个拼接报文的大小均等于所述第一报文的大小时,将所述第一报文和所述至少一个拼接报文做FEC编码处理,得到至少一个冗余报文。
第二方面,提供了一种报文处理装置,包括多个功能模块,所述多个功能模块相互作用,实现上述第一方面及其各实施方式中的方法。所述多个功能模块可以基于软件、硬件,或软件和硬件的结合实现,且所述多个功能模块可以基于具体实现进行任意组合或分割。
第三方面,提供一种计算机可读存储介质,该存储介质中存储有指令,该指令由处理器加载并执行以实现如上述报文处理方法。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种报文传输系统的示意图;
图2是本申请实施例提供的一种报文处理方法的流程图;
图3是本申请实施例提供的一种网络设备内实现报文处理方法的流程图;
图4是本申请实施例提供的一种报文组确定过程的示意图;
图5是本申请实施例提供的一种报文分组的流程图;
图6是本申请实施例提供的一种原始报文分组流程的示意图;
图7是本申请实施例提供的一种编码任务的示意图;
图8是本申请实施例提供的一种FEC报文的示意图;
图9是本申请实施例提供的一种网络设备内部交互的流程示意图;
图10是本申请实施例提供的一种编解码的流程图;
图11是本申请实施例提供的一种报文处理装置的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
图1是本申请实施例提供的一种报文传输系统的示意图,参见图2,第一终端101、第一网络设备102、第二网络设备103以及第二终端104,其中,第一终端101用于生成数据流,并将数据流发送至第一网络设备102。第一网络设备102将数据流中的多个原始报文拼接为一个等长数据块,等长数据块中的每个数据块的大小相等,对等长数据块做FEC编码处理,得到多个原始报文的冗余报文,并为等长数据块中每个数据块以及每个冗余报文添加相同的FEC头,封装成多个FEC报文,然后向第二网络设备103发送由FEC报文组成的编码流。第二网络设备303对编码流中丢失的原始FEC报文进行解码,恢复丢失的原始FEC报文,其中,原始FEC报文为任一FEC报文。第二网络设备103基于恢复的原始FEC报文和未丢失的FEC报文,恢复数据流,并将数据流发送至第二终端104。第一终端101和第二终端104可以是手机、笔记本电脑等,第一网络设备102和第二网络设备103可以是路由器和交换机等设备。 数据流可以是视频流、音频流,也可以是文本数据组成的文本流,本申请实施例对数据流类型不做具体限定。
图1中是以视频流为例的示意图,当第一用户和第二用户在进行视频会话时,第一用户可以使用第一终端302上的摄像头录制第一用户的视频,并向第一网络设备102发送用于组成该视频的原始报文,形成视频流(数据流),由第一网络设备102对视频流中的原始报文进行FEC编码,得到冗余报文,在每个原始报文以及每个冗余报文上添加FEC报文头,得到对应的FEC报文,第一网络设备102可以通过广域网(wide area network,WAN)向第二网络设备103发送由FEC报文组成的编码流;第二网络设备103在获取到编码流中的FEC报文后,根据FEC报文头判断哪些原始报文丢失,并根据收到的FEC报文(包括原始报文以及冗余报文),恢复丢失的原始报文,第二网络设备103将恢复的原始报文和未丢失的原始报文组成数据流,并向局域网A和B中的第二终端104分别发送数据流,当多个第二终端104接收到视频流以后,进行视频播放,从而使得第二用户可以在第二终端304上观看到第一用户的视频,从而实现第一用户与第二用户的跨局域网的视频会话。在一种可能的实现中,第一终端101、第一网络设备102、第二网络设备103以及第二终端104,可以分布在同一局域网,也可以分布在不同的局域网,本申请实施例对第一终端101、第一网络设备102、第二网络设备103以及第二终端104的分布方式不做具体限定。
在一些可能的实现中,第一网络设备可以直接通过处理器对数据流中的原始报文进行拼接处理,得到等长数据块,处理器再对等长数据块进行FEC编码。在一些可能的实现中,第一网络设备中的处理器可以根据各个原始报文的大小,确定出拼接方案,并将拼接方案发送至目标硬件引擎,由目标硬件引擎根据处理器提供的拼接方案,对数据流中的原始报文进行拼接处理,得到等长数据块,并对等长数据块进行FEC编码。其中,处理器可以是网络处理器(network processer,NP),中央处理器(central processing units,CPU),或者NP与CPU的结合。所述处理器还可以包括硬件芯片,上述硬件芯片可以是专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
可选的,本申请实施例还提供了一种计算机可读存储介质,例如包括指令的存储器,上述指令可由网络设备中的处理器执行以完成下述实施例中的所提供的方法。例如,该计算机可读存储介质可以是只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、只读光盘(compact disc read-only memory,CD-ROM)、磁带、软盘和光数据存储设备等。
为了进一步说明第一网络设备对数据流中的原始报文进行编码的过程,参见图2所示的本申请实施例提供的一种报文处理方法的流程图,该方法流程具体可以包括下述步骤201-208。在一种可能的实现中,本申请在对图2所示的过程进行叙述时,先对图2的整个流程所包括的步骤进行描述,再对各个步骤进行细化描述。
201、第一网络设备获取多个原始报文。
该第一网络设备可以获取多个数据流。该多个原始报文为该多个数据流中的第一数据流,内的报文,且为第一网络设备进行一次FEC编码所使用到的报文。该第一数据流为多个数据 流中任一个数据流,该第一数据流可以包括多个原始报文,每个原始报文用于携带数据,原始报文携带的数据可以是视频数据、音频数据或文本数据等,本申请实施例对第一数据流内的数据的类型不做具体限定。
202、处理器根据多个原始报文中的第一报文的大小,对多个原始报文中除第一报文以外的其他原始报文进行分组,得到至少一个报文组,每个报文组包括至少一个原始报文,每个报文组中所有原始报文的大小之和小于或等于该第一报文的大小。
203、处理器根据该至少一个报文组,生成编码任务,该编码任务包括每个报文组的地址信息。
204、处理器向目标硬件引擎发送该编码任务。
在一些可能的实现中,处理器还可以将编码任务下发至内存,由目标硬件引擎从内存中读取该编码任务。在一些可能的实现中,处理器还可以根据数据流的优先级向内存或目标硬件引擎下发编码任务,处理器可以先下发优先级高的数据流的编码任务,再下发优先级低的数据流的编码任务,本申请实施例对处理器下发编码任务的方式不做具体限定。
205、目标硬件引擎根据该编码任务,对每个报文组进行拼接处理,得到至少一个拼接报文。
206、当该至少一个拼接报文中任一拼接报文的大小小于第一报文的大小时,目标硬件引擎对该至少一个拼接报文进行填充处理,得到等长数据块,该等长数据块中每个数据块的大小为该第一报文的大小。
该目标硬件引擎可以在该任一拼接报文中填充目标数据,得到第一报文大小的数据块,该目标数据可以是0或1等数据,当目标硬件引擎对至少一个拼接报文填充完成后,得到一个等长数据块,且等长数据块中每个数据块对应一个拼接报文,且每个数据块的大小均为第一报文的大小。
在一种可能的实现中,当第一网络设备通过处理器对多个原始报文进行编码时,本步骤206可以由处理器来执行。
207、目标硬件引擎将该第一报文和该等长数据块做前向纠错FEC编码处理,得到至少一个冗余报文。
208、当该至少一个拼接报文中每个拼接报文的大小均等于该第一报文的大小时,目标硬件引擎将该第一报文和该至少一个拼接报文做FEC编码处理,得到至少一个冗余报文。
下面开始对图2中的步骤进行细化描述。
在步骤201中,第一网络设备可以对该数据流中的多个原始报文进行FEC编码,当一次FEC编码完成后,再对该数据流中另外多个原始报文进行FEC编码,在一种可能的实现方式中,步骤201可以通过如下步骤2011-2012所示的过程来实现。
步骤2011、第一网络设备将接收到的该数据流中的原始报文存入第一网络设备的内存中。
例如,图3所示的本发明实施例提供的一种网络设备内实现报文处理方法的流程图,第一网络设备的网络接口接收每到该数据流中的一个原始报文后,将该原始报文发送至包解析引擎(packet parse engine,PPE),每当PPE接收到该原始报文后,将该原始报文存储在内存,并记录下该原始报文的存储地址,从而使得PPE可以对网络接口接收的该数据流中的原始报文进行收集。其中,网络接口可以是千兆(gigabit ethernet,GE)网络端口或万兆(ten-gigabit ethernet,GXE)网络端口。
步骤2012,若内存中该数据流的未编码的原始报文的报文量等于或大于第二目标个数,处理器从该数据流的未编码的原始报文中获取第二目标个数的原始报文,该第二目标个数为预设的用于进行一次FEC编码的原始报文的个数。
由于第一网络设备每对第二目标个数的原始报文进行一次FEC编码后,就会对下一组第二目标个数的未编码的原始报文进行编码,因此,对于该数据流,当第一网络设备的内存中每新缓存了第二目标个数的原始报文时,就可以将新缓存的第二目标个数的原始报文作为下一次待编码的多个原始报文。
该处理器包括NP和CPU中的至少一个,仍以图3为例进行说明,当PPE每收集到第一原始报文后,将第一原始报文存储在内存,并将第一原始报文的存储完成消息发送至NP,第一原始报文的存储完成消息可以携带该第一原始报文所属的第一数据流的流标识以及该第一原始报文的存储地址,流标识可以是该第一数据流的名称或编号,用于指示该第一原始报文属于第一数据流。第一原始报文为第一数据流中任一原始报文。
在一种可能的实现方式中,第一网络设备根据业务需求,对每个接收的数据流设置对应的优先级,第一网络设备可以优先处理优先级高的数据流,相应地,存储完成消息还可以携带数据流的优先级,以便第一网络设备内的各个模块可以根据优先级处理数据流中的原始报文。可以在该第一网络设备上配置优先级信息表,该优先级信息表中包括从高到低的多个优先级,每个优先级对应一种业务类型,当第一网络设备接收到一个数据流时,可以根据该数据流中的原始报文携带的数据,确定该数据流的业务类型,并从该优先级信息表中确定该数据流的业务类型所对应的优先级,从而第一网络设备可以为该数据流设置对应的优先级。
仍以图3为例进行说明,当NP接收到该第一原始报文的存储完成消息后,根据该存储完成消息携带的该第一原始报文的存储地址,对内存中的该第一原始报文进行解析,获取该第一原始报文是否需要编码,若该第一原始报文需要编码,则该NP生成编码通知消息,该编码通知消息可以携带该第一原始报文的存储地址、该第一原始报文的大小、该第一数据流的流标识以及优先级中的至少一个,并将该编码通知消息发送至协议栈;用户数据报协议(user datagram protocol,UDP)代理可以从协议栈中收集编码通知消息,每当UPD代理新收集的编码通知消息的数目为第二目标个数时,则新收集的第二目标个数的编码通知消息可以指示内存中新存储了第二目标个数的待编码的原始报文,因此,UDP代理可以将该第二目标个数的编码通知消息发送给FEC软件模块(该过程也即是FEC软件模块从NP获取通知消息的过程),由FEC软件模块确定第二目标个数的原始报文的拼接方案。
在一些可能的实现中,第一网络设备不限定每一次进行FEC编码的原始报文的个数,而是限定每一次进行FEC编码时待编码的原始报文的个数,该待编码的原始报文的个数为在进行FEC编码时待编码矩阵的维数,也即是第一网络设备不预设第二目标个数,而是预设一个第一目标个数,对于这种情况,UPD代理每接收到一个编码通知消息,就会将该编码通知消息发送给FEC软件模块,从而使得FEC软件模块可以得到多个编码通知消息,当FEC软件模块可以从多个编码通知消息获取到多个原始报文的大小,该FEC软件模块可以从多个原始报文中确定最大的第一报文,并根据最大报文的大小(记为目标大小)以及多个原始报文中除第一报文以外的报文的大小,确定出第三目标个数的拼接方案(也即是图3中的FEC软件模块确定原始报文的拼接方案的过程),每个拼接方案为将至少两个原始报文拼接为一个拼接报文拼接方案,每个拼接报文的大小小于或等于目标大小,第三目标个数与多个原始报文中第 一报文的个数之和等于第一目标个数,具体拼接方案的确定过程参见步骤202;后续FEC软件模块向目标硬件引擎发送拼接方案,由目标硬件引擎根据拼接方案,对原始报文进行拼接。
对于步骤202进行如下描述:
在步骤202中,该第一报文为该多个原始报文中最大的报文,每个报文组可以视为一个拼接方案,报文组的确定过程也即是拼接方案确定过程。为了使得每个报文组的大小小于或等于第一报文的大小,其中每个报文组的大小也即是每个报文组中所有原始报文的大小。处理器可以先根据多个原始报文的大小,对多个原始报文进行排序,然后在根据多个原始报文的排序以及第一报文的大小,确定至少一个报文组。在一种可能的实现方式中,本步骤202可以由下述2021-2025所示的过程来实现。在一种可能的实现中,本步骤可以由处理器中FEC软件模块来执行。
步骤2021、处理器根据多个原始报文中各个原始报文的大小,对多个原始报文进行排序,得到每个原始报文的次序i,i为大于等于零的整数,其中,次序为0的原始报文为该第一报文。
处理器在对多个原始报文排序时,越大的原始报文的次序越小,越小的原始报文次序越大,由于第一报文为多个原始报文中最大的报文,第一报文的次序最小且为0,多个原始报文中最小的原始报文的次序最大。例如图4所示的本申请实施例提供的一种报文组确定过程的示意图,图4包括左图、中图和右图,其中左图包括一个原始报文序列,该原始报文序列有5个原始报文分别为报文P1-P5,报文P1为第一报文,处理器根据报文P1-P5的大小,对报文P1-P5进行排序,得到中图所示的5个报文的排序:报文P1、P4、P3、P5以及P2,次序分别为0-4。
步骤2022、当i=1时,处理器将次序为1的原始报文划分至第1个目标报文组,该第1个目标报文组的次序为1。
由于第一报文的大小为目标大小,则处理器无需对第一报文进行分组,或者可以直接将每个第一报文视为一个报文组。处理器可以根据多个原始报文的由低至高的顺序,依次对除第一报文以外的其他原始报文进行分组。仍以图4中的中图为例,报文P4的次序为1,处理器可以直接将报文P4划分至第1个目标报文组。
步骤2023、当i>1时,若第j个目标报文组的大小和次序为i的原始报文的大小之和小于或等于目标大小,则处理器将第j个目标报文组和所述次序为i的原始报文合并为第i个目标报文组,该目标大小为该第一报文的大小,该第j个目标报文组为至少一个目标报文组中次序最小的报文组,且该第j个目标报文组的次序为j,该至少一个目标报文组中任一报文组的大小和次序为i的原始报文的大小之和小于或等于该目标大小,0<j<i。
目标报文组的大小,也即是目标报文组内各个报文的大小之和,第j个目标报文组是次序为j的原始报文所在的报文组,第i个目标报文组是次序为i的原始报文所在的报文组,也即是对j个原始报文分组完成后,得到第j个目标报文组,对i个原始报文分组完成后,得到第i个目标报文组。由于处理器为多个原始报文进行了分组,因此,在分组过程中可能存在多个目标报文组,为了便于描述,为每个目标报文组分配一个次序,第j个目标报文组的次序为j,第i个目标报文组的次序为i,也即是第几个目标报文组的次序为几。
仍以图4中的中图为例,当i=2时,处理器对次序为2的报文P3进行分组,此时,第1个目标报文组仅包括报文P4,则第1个目标报文组的大小与次序为2的报文P3的大小之和, 也即是报文P4和P3的大小之和,若报文P4和P3的大小之和小于或等于第一报文P1的大小(也即是目标大小),则处理器将报文P4和P3合并为第2个目标报文组,由于第1个目标报文组被合并在第2个目标报文组,则当前仅剩余第2个目标报文组。
在一些可能的实现中,当前可能剩余多个目标报文组,若多个目标报文组中的至少一个目标报文组内每个报文组的大小与次序为i的原始报文的大小之和均小于或等于目标大小,处理器将次序为i的原始报文与第j个目标报文组合并为第i个目标报文组,其中,第j个目标报文组为至少一个目标报文组中次序最小的报文组。
步骤2024、当i>1时,若不存在第j个目标报文组,处理器则增加一个第i个目标报文组,将该次序为i的原始报文划分至该第i个目标报文组。
当前剩余的多个目标报文组中的每个报文组的大小与次序为i的原始报文的大小之和均大于目标大小,则说明该多个目标报文组中不存在第j个目标报文组,则处理器可以另外新增一个目标报文组,作为第i个目标报文组,并将次序为i的原始报文划分至该第i个目标报文组。
仍以图4中的中图为例,当i=3时,当前剩余的目标报文组为由报文P4和P3组成的第2个目标报文组,报文P4、P3以及次序为3的报文P5的大小小于或等于目标大小,则说明第2个目标报文组也即是至少一个目标报文组中的第j个目标报文组,可以直接将报文P4、P3以及p5合并为第3个目标报文组。若报文P4、P3以及次序为3的报文P5的大小大于目标大小,说明当前剩余的目标报文组中不存在第j个目标报文组,则处理器新增第3个目标报文组,并将报文P5划分组新增的第3个目标报文组,此时,就会剩余第2个目标报文组和第3个目标报文组。
步骤2025、当将多个原始报文中最后一个原始报文分组完成后,处理器将当前剩余的每个目标报文组作为该至少一个报文组中的一个报文组。
当处理器将最后一个原始报文分组完成后,当前剩余的目标报文组也即是最终确定的报文组,因此,处理器可以将当前剩余的每个目标报文组作为该至少一个报文组中的一个报文组。
以图4中的右图为例,处理器将报文P4、P3以及P5合并为第3个目标报文组,当i=4时,次序为4的报文P2为报文P1-5中的最后一个报文,报文P4、P3、P5以及P2的大小之和小于目标大小,则处理器将第3个目标报文组和报文P2合并为第4个目标报文组,当前仅剩下第4个目标报文组,因此,第4个目标报文组为对报文P2-P5的最终分组,第4个目标报文组包括报文P2-P5,第4个目标报文组也即是一个拼接方案,由于每个拼接方案用于指示拼接一个第一报文大小的拼接报文,后续若拼接报文的大小小于第一报文的大小时,可以在拼接报文上填充数据,使得拼填充后的报文的大小等于第一报文的大小。
当处理器没有将原始报文的个数设置成第二目标个数,而设置了第一目标个数时,处理器可以每间隔预设时间,对第一网络设备在该预设时间内缓存的原始报文进行分组,分组过程可以是步骤21-25所示的过程,此时的目标大小可以是第一网络设备在该预设时间内缓存的原始报文中最大报文的大小,也可以是预设的报文大小,该预设的报文大小可以是该多个原始报文所属第一数据流中的最大报文的报文;当第一网络设备在该预设时间内缓存的原始报文正好被分入至第三目标个数的报文组内,且第三目标个数的报文组均不能容纳其他报文时,则而本次分组结束;当第一网络设备在该预设时间内缓存的原始报文正好被分入至少一 个的报文组内后,若该至少一个报文组的个数等于第三目标个数,但是该至少一个报文组内还能容纳其他报文,则该处理器可以持续收集新原始报文,直至该至少一个报文组内不能容纳新原始报文,则本次分组结束;若至少一个报文组的个数小于第三目标个数,则该处理器持续收集新原始报文,将收集到的新原始报文进行分组,直至得到第三目标个数的报文组,且第三目标个数的报文组均不能容纳新原始报文时,则本次分组结束。
为了便于理解步骤2021-2025所示的过程,参见图5所示的本申请实施例提供的一种报文分组的流程图,该流程包括步骤501-509。
步骤501、处理器设定进行一次排序的原始报文的个数为第二目标个数M,其中,M为正整数。
步骤502、网络接口累计接收M个原始报文。
步骤503、处理器根据M个原始报文的大小,对M个原始报文降序排列,得到包括M个原始报文的原始报文序列,在原始报文序列中,M个原始报文中越大的原始报文排序越小,越小的原始报文排序越大
步骤504、处理器根据M个原始报文的最大报文的大小,构建虚拟报文,并将该虚拟报插入虚拟报文序列,该虚拟报文的大小为M个原始报文的最大报文的大小,该虚拟报文序列用于放置虚拟报文。
每个虚拟报文可以看做一个虚拟存储空间,该虚拟存储空间的虚拟存储大小为M个原始报文的最大报文的大小,也即是一个拟报文可以视为一个报文组。
步骤505、处理器遍历原始报文序列中的次序为k的报文Pk,其中,0≤k<M。
步骤506、处理器查询虚拟报文序列中是否存在可以容纳该报文Pk的虚拟报文,若存在,则处理器将报文Pk存储到虚拟报文序列中第一个可以容纳报文Pk的虚拟报文中。
步骤507、若不存在,则处理器新建一个虚拟报文,并将新建的虚拟报文插入虚拟报文序列,并将报文Pk放置在新建的虚拟报文中。
步骤508、当将报文Pk放置在虚拟报文后,处理器查询原始报文中是否还存在未遍历的原始报文,若存在则跳转执行步骤505,否则执行步骤509。
步骤509、当原始报文序列中的所有原始报文均遍历完成后,则处理器将虚拟报文序列中的每个虚拟报文视为一个最终的报文组。
在一些可能的实现中,处理器在对多个原始报文进行分组时,可能会将一个原始报文中的部分数据分至一个报文组,将该原始报文中的其他数据分至其他报文组。在一种可能的实现方式中,在一次分组过程中,若处理器将接收的第一个原始报文划分至第一个报文组中,第一个报文组的大小为目标大小,此时该目标大小可以是任一预设的大小,若一个原始报文的大小小于或等于目标大小,处理器则将该第一个原始报文划分子第一个报文组,若第一个原始报文的大小大于目标大小,处理器则将第一原始报文虚拟拆分成至少两个数据块,该至少两个数据块中最后一个数据块的大小小于或等于目标大小,该至少两个数据块中除最后一个数据块以外的其他数据块的大小均等于目标大小,处理器将每个数据块分别划分至一个报文组中;当处理器后续每接收到一个新原始报文时,若新原始报文的大小和当前所有报文组中最后一个报文组的大小之和小于或等于目标大小时,将该新原始报文划分至该最后一个报文组;若新原始报文的大小和当前所有报文组中最后一个报文组的大小之和大于目标大小时,将该新原始报文虚拟划分为一个第一目标数据块和至少一个第二目标数据块,其中,第一目 标数据块的大小为最后一个目标大小与该最后一个报文当前大小的差值,至少一个第二目标数据块中最后一个数据块的大小小于或等于目标大小,该至少一个第二目标数据块中除最后一个数据块以外的数据块的大小均等于目标大小,处理器将第一目标数据块划分至最后一个报文组,并新增至少一个报文组,将每个第二目标数据块划分别划分至一个新增的报文组内;当处理器将第二目标个数的原始报文划分完成后,或者当确定出第一目标个数的报文组,且第一目标个数的报文组不能再容纳其他原始报文后,本次分组结束。例如,图6所示的本申请实施例提供的一种原始报文分组流程的示意图,本次对原报文1-8进行分组,每个报文组的大小固定为目标大小,处理器将原报文1-2以及原报文4的前部分报文划分至报文组1,将原报文4的后部分报文、原本报文5以及原报文6的前部分报文划分至报文组2,将原报文6的后部分报文以及原报文7-8划分至报文组2,报文组3的大小小于目标大小,后续可以在报文3中的报文拼接出的拼接报文上填充数据,以便填充后的报文的大小为目标大小。
在一种可能的实现中,当第一网络设备通过处理器直接对多个原始报文进行FEC编码时,则处理器直接执行下述步骤205,也即是对每个报文组进行拼接处理,得到至少一个拼接报文,每个拼接报文对应一个报文组。而不与目标硬件引擎进行交互,若第一网络设备通过目标硬件引擎对多个原始报文进行FEC编码时,则处理器执行下述步骤203。
对于步骤203进行如下描述:
在步骤203中,该编码任务用于指示将每个报文组内的原始报文拼接为一个拼接报文,并对拼接出的多个拼接报文进行编码,每个报文组的地址信息包括一个报文组中所有原始报文的存储地址,在一些可能的实现中,每个报文组的地址信息还包括拼接标识,例如,图7所示的本申请实施例提供的一种编码任务的示意图,图7中的编码任务包括多个地址信息,每个地址信息包括一个拼接标识和各个原始报文的存储地址。该拼接标识可以用于指示将一个报文组内的所有原始报文拼接为一个拼接报文,该拼接标识还可以用于指示一个报文组的地址信息内的所有存储地址为一组,本申请实施例对该拼接标识的表示方式不做具体限定。
每个报文组的地址信息可以以一个集合的形式来表示,例如,报文组2包括原始报文3和4,若集合1包括原始报文3和4的存储地址以及拼接标识,则该集合1为该报文组2的地址信息。在一些可能的实现中,每个报文组的地址信息可以以一个字符串的形式来表示,仍以报文组2为例,字符串1为“原始报文3的存储地址;原始报文4的存储地址;拼接标识”,则该字符串1也即是报文组2的地址信息。在一些可能的实现中,每个报文组的地址信息可以以一个表格的形式来表示,仍以报文组2为例,参见下述表1,表1中的原始报文3和4的存储地址均对应同一个拼接标识,则表1也即是报文组2的地址信息。
表1
Figure PCTCN2020114600-appb-000001
该编码任务还可以携带第一报文的存储地址,以便后续目标硬件引擎可以根据第一报文的存储地址获取第一报文。该编码任务还可以携带编码参数、数据流的标识、第一数据流的优先级标识中的至少一个,该第一数据流为多个原始报文所在的数据流,该编码参数可以包括待编码报文量(也即是第一目标个数)、冗余报文量以及目标大小,其中,待编码报文量为多个原始报文中第一报文的个数与至少一个报文组的个数之和,以便后续目标硬件引擎可以 根据编码参数,对多个原始报文进行编码。
对于步骤205进行如下描述:
在步骤205中,目标硬件引擎可以根据编码任务中的地址信息,从内存中获取至少一个报文组,每个报文组中的原始报文为一个地址信息内的一个存储地址中的报文;目标硬件引擎将每个报文组内的原始报文拼接为一个拼接报文,从而目标硬件引擎可以得到至少一个拼接报文。
当第一网络设备通过处理器对多个原始报文进行编码时,若至少一个报文组的个数与多个原始报文中第一报文的个数之和达到第一目标个数,且该至少一个报文组中的最后一个报文组不能再添加该多个原始报文以外的其他原始报文时,处理器对每个报文组进行拼接处理,第一目标个数为进行一次FEC编码时待编码矩阵的维数。
或者是,当多个原始报文的个数等于第二目标个数,且将第二目标个数的原始报文划分为至少一个报文组时,处理器对每个报文组进行拼接处理,该第二目标个数为预设的用于进行一次FEC编码的原始报文的个数。其中,处理器对每个报文组进行拼接处理于目标硬件引擎对每个报文组进行拼接处理的过程同理,在此,本申请实施例对处理器对每个报文组进行拼接处理的过程不做赘述。
对于步骤207进行如下描述:
在步骤207中,该目标硬件引擎可以从该编码任务中获取到编码参数,设该编码参数中的待编码报文量为Q、冗余报文量为R以及目标大小为L,其中,Q和R均为正整数,L为大于0的数值,冗余报文量R为对Q个待编码报文进行FEC编码后所得的冗余报文的个数,Q个待编码的原始报文包括多个原始报文中的第一报文以及等长数据块。目标硬件引擎可以根据编码任务携带的第一报文的存储地址,从内存中获取第一报文,将获取的第一报文以及步骤206中获得的等长数据块作为待编码报文。目标硬件引擎可以先将Q个待编码报文组成Q*L待编码矩阵,待编码矩阵的每一行为一个等长数据块或一个第一报文;目标硬件引擎根据待编码报文量Q以及冗余报文量R,构建一个(Q+R)*Q的生成矩阵,生成矩阵由第一子矩阵和第二子矩阵组成,其中,第一子矩阵为Q*Q的单位矩阵,第二子矩阵为R*Q的柯西矩阵。
其中,第二子矩阵中第i行第j列的元素为
Figure PCTCN2020114600-appb-000002
x i-1以y i-1及和均为迦罗华域(galois field,GF)(2 w)中的元素,其中,i和j均为大于或等于0的整数,w可以是8;目标硬件引擎将该生成矩阵和待编码矩阵进行乘法计算,得到(Q+R)*L的编码矩阵,其中,编码矩阵包括待编码矩阵以及R*L的校验矩阵,其中,校验矩阵的每一行为一个冗余报文。
在一种可能的实现中,步骤205-207所示的过程也即是目标硬件引擎根据编码任务,对每个报文组进行拼接处理的过程。当第一网络设备通过处理器对多个原始报文进行编码时,本步骤207可以由处理器来执行。
对于步骤208进行如下描述:
在步骤208中,当该至少一个拼接报文中每个拼接报文的大小均等于该第一报文的大小时,该至少一个报文可以视为一个等长数据块,因此,目标硬件引擎可以直接执行本步骤208。在一种可能的实现方式中,当第一网络设备通过处理器对多个原始报文进行编码时,本步骤207可以由处理器来执行。
当该目标硬件引擎获取到该至少一个冗余报文后,将至少一个冗余报文存储至内存,并向处理器发送编码完成消息,该编码完成消息可以携带该至少一个冗余报文的存储地址,以便处理器可以从内存中获取到该至少一个冗余报文。或者是,该目标引擎直接将该至少一个冗余报文发送给处理器,本申请实施对处理器获取至少一个冗余报文的方式不做具体限定。
处理器可以从内存中获取每个报文组内的原始报文,对每个报文组进行拼接处理,得到至少一个拼接报文,当至少一个拼接报文中任一拼接报文的大小小于第一报文的大小时,对至少一个拼接报文进行填充处理,得到等长数据块,并从内存中获取多个原始报文中的第一报文。当然,该目标硬件引擎还可以直接将该等长数据块以及第一报文发送给处理器,本申请实施例对处理器获取等长数据块与第一报文的方式不做具体限定。
当处理器获取到等长数据块、第一报文以及至少一个冗余报文后,在等长数据块及第一报文中的每个数据块上添加FEC报文头,得到多个原始FEC报文,并在每个冗余报文中均添加FEC报文头,得到至少一个冗余FEC报文,FEC报文头携带编码参数。例如,图8所示的本申请实施例提供的一种FEC报文的示意图中的FEC报文,FEC报文包括FEC报文和负载报文,当该负载报文为第一报文或等长数据块中的一个数据块时,该FEC报文为原始FEC报文,当该FEC报文的负载报文为冗余报文时,该FEC报文为冗余FEC报文。
FEC报文头用于指示该多个原始报文的编码情况。FEC报文头携带多个原始报文的编码参数,例如图8中的FEC报文头携带待编码报文量、冗余报文量以及目标大小。FEC报文头还可以携带该多个原始个报文的目标标识,该目标标识用于指示对该多个原始报文所属数据流进行编码的次数。该FEC报文头还可以携带该多个原始报文在数据流中的序列号(也即是拼接前原始报文的原始序号)、原始报文的大小、每个第一报文以及等长数据块中的每个数据块在待编码矩阵中的序列号、每个冗余报文在编码矩阵中的序列号以及与每个数据块对应的目标拼接标识,该目标拼接标识用于指示数据块为拼接出的数据块。
当处理器获取到原始FEC报文和冗余FEC报文后,可以将原始FEC报文和冗余FEC报文发送至第二网络设备。当然,在一些可能的实现中,目标硬件引擎可以直接将第一报文和等长数据块封装为原始FEC报文,将冗余报文封装为冗余FEC报文,并向第二网络设备发送原始FEC报文和冗余FEC报文。
参见图9所示的本申请实施例提供一种网络设备内部交互的流程示意图,该流程具体包括步骤901-907。图9示意了第一网络设备对多个原始报文进行FEC编码时,内部各个模块的交换情况。
步骤901、网络接口每从网络中接收到1个原始报文,将该原始报文发送给PPE。
步骤902、PPE将接收到的该原始报文写入内存。
步骤903、PPE将该原始报文的存储完成消息发送至NP,以通知NP处理内存中的该原始报文。
步骤904、NP接收到该存储完成消息后,对该原始报文进行解析,以确定该原始报文是否需要编码,若需要编码,则NP向CPU发送该原始报文的编码通知消息。
CPU可以通过协议栈以及UDP代理将编码完成消息发送至FEC软件模块,具体的过程在上述步骤2012中有相关描述,在此不做赘述。
步骤905、当CPU中的FEC软件模块收集到第二目标个数的编码通知消息后,相当于FEC软件模块收集齐了需要编码的第二目标个数的原始报文,该FEC软件模块基于第二目标个数 的原始报文中,向目标硬件引擎发送对该第二目标个数的原始报文的编码任务。
该编码任务相当于一种通知消息,通知目标硬件处理器根据编码任务进行FEC编码。
步骤906、目标硬件引擎根据编码任务,从内存中获取第二目标个数的原始报文,并对第二目标个数的原始报文进行FEC编码,得到至少一个冗余报文。
本步骤906所示的过程与步骤205-207所示的过程同理,在此,本申请实施例对本步骤906所示的过程不做赘述。
步骤907、目标硬件引擎将至少一个冗余报文写入内存,并将至少一个冗余报文的存储地址发送至FEC软件模块,以便出FEC软件模块获取至少一个冗余报文,并向第二网络设备发送第二目标个数的原始报文和至少一个冗余报文。
在一种可能的实现方式中,FEC软件模块可以部署在NP中,还可以部署在CPU中,本申请实施例对FEC软件模块的部署方式不做具体限定。
本申请实施例提供的方法,通过将多个原始报文中除最大的第一报文以外的其他原始报文进行了拼接处理,只有当拼接出的拼接报文大小小于最大报文的大小时,才会对拼接报文进行填充处理,而无需对每个其他原始报文都进行填充处理,因此,填充后的等长数据块中填充的数据较少,降低了FEC编码报文传输时所占用的网络带宽,避免了网络资源浪费。
当第二网络设备在接收到第一网络设备发送的FEC报文时,若原始FEC报文在传输过程中有丢失,则第二网络设备对未丢失的FEC报文进行解码,以恢复丢失的报文。为了进一步说明多个原始报文的编解码过程,参见图10,为本申请实施例提供的一种编解码的流程图,该流程包括下述步骤1001-1006。
步骤1001、当第一网络设备内的FEC软件模块收集到第二目标个数的原始报文后,根据第二目标个数的原始报文,确定第二目标个数的原始报文中的第一报文和至少一个报文组,并向目标硬件引擎发送编码任务。
在步骤1001之前,UDP代理收集第一终端发送的数据流中的原始报文,并将其收集的原始报文的存储地址以及原始报文的大小发送至FEC软件模块,当FEC软件模块接收到第二目标个数的原始报文的存储地址和大小后,执行本步骤1001。以第二目标个数为6为例,当FEC软件模块从UDP代理收集到原始报文1-6的存储地址以及大小后,说明FEC软件模块集齐了6个原始报文,FEC软件模块将原始报文1和6分别作为第一报文,将原始报文2和4划分至报文组1,将原始报文3和5划分至报文组2。FEC软件模块还可以将该原始报文2和4拼接为拼接报文1,FEC软件模块还可以将该原始报文3和5拼接为拼接报文2,当任一拼接报文的大小小于原始报文3的大小时,对任一拼接报文进行填充处理,得到填充报文,否则不对任一拼接报文进行填充处理。在图10中,任一拼接报文的大小为原始报文3的大小为例进行说明,FEC软件模块将原始报文1和6以及拼接报文1-2作为一个待编码矩阵,在图10中原始报文在待编码矩阵中记为报文1,拼接报文1-2在待编码矩阵中分别记为报文2-3,原始报文6在待编码矩阵中记为报文4。
步骤1002、目标硬件引擎根据编码任务中的地址集合,获取原始报文1和6以及拼接报文1-2所形成的待编码矩阵,根据编码任务中的编码参数构建对应的生成矩阵,并根据待编码矩阵和生成矩阵,进行FEC编码,得到冗余报文a、b和c,并将冗余报文a、b和c存储在内存中,并且将冗余报文a、b和c的存储地址发送给FEC软件模块。
步骤1003、FEC软件模块根据冗余报文a、b和c的存储地址,从内存中获取冗余报文a、 b和c,并在报文1-4以及冗余报文a-c上分别添加一个FEC报文头,得到多个FEC报文,并将FEC报文组成的编码流发送UDP代理,UDP代理通过WAN将编码流发送给第二网络设备。
步骤1004、第二网络设备的UDP代理接收编码流,编码流中的报文1和3丢失,当第二网络设备的FEC软件模块收集到报文2、4以及冗余报文a、b、c中任意2个冗余报文后,FEC软件模块向第二网络设备内的目标硬件引擎下发解码任务,该解码任务携带报文2、4以及冗余报文a、b、c中任意2个冗余报文的存储地址以及解码参数,其中,解码参数包括编码参数丢失的报文的个数、丢失的报文在编码矩阵中的位置以及目标大小。
步骤1005、目标硬件引擎根据解码任务中的存储地址,从内存中获取报文2、4以及冗余报文a、b、c中任意2个冗余报文,将报文2、4以及冗余报文a、b、c中任意2个冗余报文组成待解码矩阵,目标硬件引擎根据解码参数,构建对应的解码矩阵,目标硬件引擎基于待解码矩阵和解码矩阵进行FEC解码,得到丢失的报文1和3,并将丢失的报文1和3存储在内存中,且将报文1和3的存储地址发送中FEC软件模块。
步骤1006、FEC软件模块根据原始报文1和拼接报文1存储地址,从内存中获取原始报文1和拼接报文1存储地址,并获取原始报文2和拼接报文2,从拼接报文1拆分出原始报文2和3、从拼接报文2拆分出原始报文3和5,并恢复原始报文1-6在数据流中顺序,按照顺序向UDP代理发送原始报文1-6,形成数据流,UDP代理向第二终端发送数据流。
图11是本申请实施例提供的一种报文处理装置的结构示意图,该装置1100包括:
处理器1101,用于执行上述步骤201;
所述处理器1101,还用于根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的其他原始报文进行拼接处理,得到至少一个拼接报文,所述第一报文为所述多个原始报文中数据量最大的报文;
所述处理器1101,还用于当所述至少一个拼接报文中任一拼接报文的大小小于所述第一报文的大小时,对所述至少一个拼接报文进行填充处理,得到等长数据块,所述等长数据块中每个数据块的大小为所述第一报文的大小;
所述处理器1101,还用于将所述第一报文和所述等长数据块做前向纠错FEC编码处理,得到至少一个冗余报文。
可选地,所述装置还包括存储器,用于存储程序指令。所述处理器1101可以为CPU,用于运行所述存储器中存储的指令,用于实现如图2所示的报文处理方法。
在一种可能的实现方式中,所述处理器1101用于:
执行上述步骤202;
对每个报文组进行拼接处理,得到所述至少一个拼接报文,每个拼接报文对应一个报文组。
在一种可能的实现方式中所述报文处理装置还包括目标硬件引擎1102;
所述处理器1101,还用于执行上述步骤202-203;
所述目标硬件引擎1102,用于执行上述步骤205。
在一种可能的实现方式中,所述处理器1101用于:
根据所述多个原始报文中各个原始报文的大小,对所述多个原始报文进行排序,得到每个原始报文的次序i,i为大于等于零的整数,其中,次序为0的原始报文为所述第一报文;
当i=1时,将次序为1的原始报文划分至第1个目标报文组,所述第1个目标报文组的 次序为1;
当i>1时,若第j个目标报文组的大小和次序为i的原始报文的大小之和小于或等于目标大小,则将第j个目标报文组和所述次序为i的原始报文合并为第i个目标报文组,所述目标大小为所述第一报文的大小,所述第j个目标报文组为至少一个目标报文组中次序最小的报文组,且所述第j个目标报文组的次序为j,所述至少一个目标报文组中每个报文组的大小和次序为i的原始报文的大小之和均小于或等于所述目标大小,0<j<i;
当i>1时,若不存在所述第j个目标报文组,则增加一个所述第i个目标报文组,将所述次序为i的原始报文划分至所述第i个目标报文组;
当将所述多个原始报文中最后一个原始报文分组完成后,将当前剩余的每个目标报文组作为所述至少一个报文组中的一个报文组。
在一种可能的实现方式中,所述处理器1101还用于:
当所述至少一个报文组的个数与所述多个原始报文中第一报文的个数之和达到第一目标个数,且所述至少一个报文组中不能再添加所述多个原始报文以外的其他原始报文时,对每个报文组进行拼接处理,所述第一目标个数为进行一次FEC编码时待编码矩阵的维数;或,
当所述多个原始报文的个数等于第二目标个数,且将所述第二目标个数的原始报文划分为至少一个报文组时,对每个报文组进行拼接处理,所述第二目标个数为预设的用于进行一次FEC编码的原始报文的个数。
在一种可能的实现方式中,所述处理器1101还用于:
当所述至少一个拼接报文中每个拼接报文的大小均等于所述第一报文的大小时,将所述第一报文和所述至少一个拼接报文做前向纠错FEC编码处理,得到至少一个冗余报文。
在一种可能的实现方式中,所述目标硬件引擎1101还用于执行上述步骤208。
上述所有可选技术方案,可以采用任意结合形成本公开的可选实施例,在此不再一一赘述。
上述实施例提供的报文处理装置在在报文进行处理时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的报文处理方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (11)

  1. 一种报文处理方法,其特征在于,所述方法包括:
    获取多个原始报文;
    根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的其他原始报文进行拼接处理,得到至少一个拼接报文,所述第一报文为所述多个原始报文中最大的报文;
    当所述至少一个拼接报文中任一拼接报文的大小小于所述第一报文的大小时,对所述至少一个拼接报文进行填充处理,得到等长数据块,所述等长数据块中每个数据块的大小为所述第一报文的大小;
    将所述第一报文和所述等长数据块做前向纠错FEC编码处理,得到至少一个冗余报文。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的其他原始报文进行拼接处理,得到至少一个拼接报文包括:
    根据所述多个原始报文中的第一报文的大小,对所述多个原始报文中除所述第一报文以外的其他原始报文进行分组,得到至少一个报文组,每个报文组包括至少一个原始报文,每个报文组中所有原始报文的大小之和小于或等于所述第一报文的大小;
    对每个报文组进行拼接处理,得到所述至少一个拼接报文,每个拼接报文对应一个报文组。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的报文进行拼接处理,得到至少一个拼接报文包括:
    处理器根据所述多个原始报文中的第一报文的大小,对所述多个原始报文中除所述第一报文以外的其他原始报文进行分组,得到至少一个报文组,并根据所述至少一个报文组,生成编码任务,所述编码任务包括每个报文组的地址信息;
    目标硬件引擎根据所述编码任务,对每个报文组进行拼接处理,得到所述至少一个拼接报文。
  4. 根据权利要求1所述的方法,其特征在于,所述对每个报文组进行拼接处理,得到所述至少一个拼接报文包括:
    当所述至少一个报文组的个数与所述多个原始报文中第一报文的个数之和达到第一目标个数,且所述至少一个报文组中不能再添加所述多个原始报文以外的其他原始报文时,对每个报文组进行拼接处理,所述第一目标个数为进行一次FEC编码时待编码矩阵的维数;或,
    当所述多个原始报文的个数等于第二目标个数,且将所述第二目标个数的原始报文划分为至少一个报文组时,对每个报文组进行拼接处理,所述第二目标个数为预设的用于进行一次FEC编码的原始报文的个数。
  5. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    当所述至少一个拼接报文中每个拼接报文的大小均等于所述第一报文的大小时,将所述 第一报文和所述至少一个拼接报文做FEC编码处理,得到至少一个冗余报文。
  6. 一种报文处理装置,其特征在于,所述装置包括:
    处理器,用于获取多个原始报文;
    所述处理器,还用于根据所述多个原始报文中的第一报文的大小,将所述多个原始报文中除所述第一报文以外的其他原始报文进行拼接处理,得到至少一个拼接报文,所述第一报文为所述多个原始报文中数据量最大的报文;
    所述处理器,还用于当所述至少一个拼接报文中任一拼接报文的大小小于所述第一报文的大小时,对所述至少一个拼接报文进行填充处理,得到等长数据块,所述等长数据块中每个数据块的大小为所述第一报文的大小;
    所述处理器,还用于将所述第一报文和所述等长数据块做前向纠错FEC编码处理,得到至少一个冗余报文。
  7. 根据权利要求6所述的装置,其特征在于,所述处理器用于:
    根据所述多个原始报文中的第一报文的大小,对所述多个原始报文中除所述第一报文以外的其他原始报文进行分组,得到至少一个报文组,每个报文组包括至少一个原始报文,每个报文组中所有原始报文的大小之和小于或等于所述第一报文的大小;
    对每个报文组进行拼接处理,得到所述至少一个拼接报文,每个拼接报文对应一个报文组。
  8. 根据权利要求7所述的装置,其特征在于,所述装置还包括目标硬件引擎;
    所述处理器,还用于根据所述多个原始报文中的第一报文的大小,对所述多个原始报文中除所述第一报文以外的其他原始报文进行分组,得到至少一个报文组,并根据所述至少一个报文组,生成编码任务,所述编码任务包括每个报文组的地址信息;
    所述目标硬件引擎,用于根据所述编码任务,对每个报文组进行拼接处理,得到所述至少一个拼接报文。
  9. 根据权利要求7所述的装置,其特征在于,所述处理器还用于:
    当所述至少一个报文组的个数与所述多个原始报文中第一报文的个数之和达到第一目标个数,且所述至少一个报文组中不能再添加所述多个原始报文以外的其他原始报文时,对每个报文组进行拼接处理,所述第一目标个数为进行一次FEC编码时待编码矩阵的维数;或,
    当所述多个原始报文的个数等于第二目标个数,且将所述第二目标个数的原始报文划分为至少一个报文组时,对每个报文组进行拼接处理,所述第二目标个数为预设的用于进行一次FEC编码的原始报文的个数。
  10. 根据权利要求8所述的装置,其特征在于,所述处理器还用于:
    当所述至少一个拼接报文中每个拼接报文的大小均等于所述第一报文的大小时,将所述第一报文和所述至少一个拼接报文做前向纠错FEC编码处理,得到至少一个冗余报文。
  11. 一种计算机可读存储介质,其特征在于,所述存储介质中存储指令,所述指令由处理器加载并执行以实现如权利要求1至权利要求5任一项所述的报文处理方法。
PCT/CN2020/114600 2019-09-10 2020-09-10 报文处理方法、装置以及计算机存储介质 WO2021047612A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP20863129.1A EP3993294A4 (en) 2019-09-10 2020-09-10 DEVICE AND METHOD FOR PROCESSING PACKETS AND COMPUTER STORAGE MEDIA
KR1020227004719A KR20220029751A (ko) 2019-09-10 2020-09-10 패킷 처리 방법 및 장치, 그리고 컴퓨터 저장 매체
MX2022002529A MX2022002529A (es) 2019-09-10 2020-09-10 Metodo y aparato de procesamiento de paquete, y medio de almacenamiento en computadora.
JP2022515555A JP7483867B2 (ja) 2019-09-10 2020-09-10 パケット処理方法及び装置、及びコンピュータ記憶媒体
US17/679,758 US11683123B2 (en) 2019-09-10 2022-02-24 Packet processing method and apparatus, and computer storage medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910854179.X 2019-09-10
CN201910854179 2019-09-10
CN201911207236.1A CN112564856A (zh) 2019-09-10 2019-11-29 报文处理方法、装置以及计算机可读存储介质
CN201911207236.1 2019-11-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/679,758 Continuation US11683123B2 (en) 2019-09-10 2022-02-24 Packet processing method and apparatus, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2021047612A1 true WO2021047612A1 (zh) 2021-03-18

Family

ID=74866584

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114600 WO2021047612A1 (zh) 2019-09-10 2020-09-10 报文处理方法、装置以及计算机存储介质

Country Status (6)

Country Link
US (1) US11683123B2 (zh)
EP (1) EP3993294A4 (zh)
JP (1) JP7483867B2 (zh)
KR (1) KR20220029751A (zh)
MX (1) MX2022002529A (zh)
WO (1) WO2021047612A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101656593A (zh) * 2009-09-15 2010-02-24 中国人民解放军国防科学技术大学 前向纠错编码方法、前向纠错译码方法及其装置
CN101662335A (zh) * 2009-09-15 2010-03-03 中国人民解放军国防科学技术大学 前向纠错编码方法、前向纠错译码方法及其装置
CN101667887A (zh) * 2009-09-02 2010-03-10 中兴通讯股份有限公司 编码方法及其装置、解码方法及其装置
WO2015135120A1 (zh) * 2014-03-11 2015-09-17 华为技术有限公司 端到端的网络QoS控制系统、通信设备和端到端的网络QoS控制方法
CN108650061A (zh) * 2018-04-24 2018-10-12 达闼科技(北京)有限公司 基于fec的vpn代理方法、装置、存储介质和系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5524001A (en) * 1994-02-07 1996-06-04 Le Groupe Videotron Ltee Dynamic cable signal assembly
US7218632B1 (en) 2000-12-06 2007-05-15 Cisco Technology, Inc. Packet processing engine architecture
JP2006217530A (ja) 2005-02-07 2006-08-17 Mitsubishi Electric Corp データ伝送システム、送信側端末装置、受信側端末装置、データ送信プログラム及びデータ受信プログラム
JP4485383B2 (ja) 2005-03-03 2010-06-23 日本電信電話株式会社 データ送受信システム及びデータ送信装置
ATE410874T1 (de) * 2005-09-20 2008-10-15 Matsushita Electric Ind Co Ltd Vefahren und vorrichtung zur packetsegmentierung und verknüpfungssignalisierung in einem kommunikationssystem
US8483125B2 (en) * 2007-04-27 2013-07-09 Intellectual Ventures Holding 81 Llc Multiplexing packets in high speed downlink packet access (HSDPA) communications
KR101829923B1 (ko) * 2011-10-13 2018-02-22 삼성전자주식회사 데이터 통신 시스템에서 부호화 장치 및 방법
US10003434B2 (en) * 2016-04-08 2018-06-19 Cisco Technology, Inc. Efficient error correction that aggregates different media into encoded container packets
US11272391B2 (en) 2017-02-03 2022-03-08 Apple Inc. Concatenation of service data units above a packet data convergence protocol layer


Also Published As

Publication number Publication date
KR20220029751A (ko) 2022-03-08
US20220182180A1 (en) 2022-06-09
JP7483867B2 (ja) 2024-05-15
EP3993294A1 (en) 2022-05-04
EP3993294A4 (en) 2022-09-07
MX2022002529A (es) 2022-04-01
US11683123B2 (en) 2023-06-20
JP2022547179A (ja) 2022-11-10

Similar Documents

Publication Publication Date Title
US9158806B2 (en) Integrity checking and selective deduplication based on network parameters
Wu Existence and construction of capacity-achieving network codes for distributed storage
US6320520B1 (en) Information additive group code generator and decoder for communications systems
US6701479B2 (en) Fast cyclic redundancy check (CRC) generation
MX2014013560A (es) Aparato y metodo de transmision y recepcion de paquete en sistema de radiofusion y comunicacion.
WO2020210779A2 (en) Coded data chunks for network qualitative services
EP3497839A2 (en) Method and apparatuse for coding and decoding polar codes
CN111291770A (zh) 一种参数配置方法及装置
CN114513418A (zh) 一种数据处理方法及相关设备
KR101314301B1 (ko) 통신 네트워크에서 데이터를 인코딩하기 위한 방법 및 장치
RU2646346C2 (ru) Устройство и способ передачи и приема пакета с прямой коррекцией ошибок
WO2021047612A1 (zh) 报文处理方法、装置以及计算机存储介质
CN105763375A (zh) 一种数据包发送方法、接收方法及微波站
CN112564856A (zh) 报文处理方法、装置以及计算机可读存储介质
US20230057487A1 (en) Data packet format to communicate across different networks
CN114157716B (zh) 基于区块链的数据处理方法、装置和电子设备
KR20210022102A (ko) 데이터 송신 방법, 송신 장치, 데이터 수신 방법 및 수신 장치
CN112564855A (zh) 报文处理方法、装置以及芯片
WO2021047606A1 (zh) 报文处理方法、装置以及芯片
KR100605117B1 (ko) Wcdma 패킷 과금 데이터 처리 방법 및 시스템
Cabrera-Medina et al. Video transmission in multicast networks using network coding
Li et al. Design and implementation of multimedia file transfer System using fountain codes technology on Android
CN117440067A (zh) 获取微观特征的方法、装置、设备及计算机可读存储介质
Jamali et al. Encoding and Compressing Data for Decreasing Number of Switches in Baseline Networks
CN116527202A (zh) 数据传输的方法和系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20863129; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020863129; Country of ref document: EP; Effective date: 20220127)
ENP Entry into the national phase (Ref document number: 20227004719; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2022515555; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)