CN116112119A - Method and device for processing data - Google Patents


Info

Publication number
CN116112119A
CN116112119A (application CN202111331219.6A)
Authority
CN
China
Prior art keywords
data
node
sets
codewords
encoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111331219.6A
Other languages
Chinese (zh)
Inventor
李沫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111331219.6A
Priority to PCT/CN2022/125804 (published as WO2023082950A1)
Publication of CN116112119A
Legal status: Pending

Classifications

    • H - Electricity
    • H04 - Electric communication technique
    • H04L - Transmission of digital information, e.g. telegraphic communication
    • H04L 1/00 - Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001 - Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0009 - Systems modifying transmission characteristics according to link quality by adapting the channel coding
    • H04L 1/004 - Arrangements for detecting or preventing errors by using forward error control
    • H04L 1/0041 - Arrangements at the transmitter end
    • H04L 12/00 - Data switching networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)

Abstract

The embodiment of the application provides a method and a device for processing data. The method comprises the following steps: a first node divides first data into K data groups, where K is an integer greater than 1 and the first data is data sent to a second node; the first node encodes the K data groups using a first coding scheme to obtain second data, where the second data comprises M1 first codewords, each first codeword is generated by encoding, with the first coding scheme, a portion of data selected from each of the K data groups, and M1 is an integer greater than 1; and the first node sends the second data to the second node. With the method and device provided by the embodiments of the application, packet loss during the data exchange process can be effectively avoided, thereby improving data exchange efficiency.

Description

Method and device for processing data
Technical Field
Embodiments of the present application relate to the field of communications, and more particularly, to a method and apparatus for processing data.
Background
Current data center networks (DCN) and other switching networks mostly adopt multi-layer switching architectures, under which data is typically exchanged in a "header + payload" format, for example under the Ethernet protocol. The header comprises routing information and control information, which allow the data in the payload to be forwarded through each intermediate switching node to the destination node.
In the data exchange process, an intermediate switching node stores and forwards received data packets according to the routing information and control information in the header, and is configured with a buffer to avoid port congestion, for example when multiple data packets need to be sent out of the same port at the same time.
However, the buffer resources at a switching node are limited. When buffer occupancy reaches its upper limit, the switching node discards further packets, resulting in packet loss. After the receiving node detects that a data packet has been lost, it notifies the sending node to resend the lost packet (a retransmission mechanism); retransmission introduces millisecond-level switching delay and degrades data exchange efficiency.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing data, which can effectively avoid the problem of packet loss in the data exchange process, thereby improving the data exchange efficiency.
In a first aspect, a method of processing data is provided, the method being applied to a data switching network. The method comprises the following steps:
the first node divides first data into K data groups, where K is an integer greater than 1 and the first data is data sent to the second node; the first node encodes the K data groups using a first coding scheme to obtain second data, where the second data comprises M1 first codewords, each first codeword is a codeword generated by encoding, with the first coding scheme, a portion of data selected from each of the K data groups, and M1 is an integer greater than 1; and the first node sends the second data to the second node.
Based on the above scheme, data from every data group is selected in the process of generating each first codeword. Therefore, if a data group belonging to the second data is lost, the receiving end of the second data can retrieve the lost data group by decoding the first codewords, thereby improving the efficiency of data exchange.
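As an illustration only, the following Python sketch mimics this first-aspect flow under simplified assumptions: the first data is split into K equal-length data groups, and each first codeword is formed column-wise from one data unit of every group. A single XOR parity unit stands in for the first coding scheme (so N = K + 1 here); the application does not mandate this particular code, and the function and variable names are illustrative.

    from functools import reduce

    def split_into_groups(first_data: bytes, k: int) -> list[bytes]:
        """Divide the first data into K equal-length data groups (zero-padded)."""
        length = -(-len(first_data) // k)                    # ceil(len / K)
        padded = first_data.ljust(k * length, b"\x00")
        return [padded[i * length:(i + 1) * length] for i in range(k)]

    def encode_first_scheme(groups: list[bytes]) -> list[bytes]:
        """Column-wise encoding: the j-th first codeword is built from the j-th
        data unit of every group and, in this sketch, gains one XOR parity unit,
        so N = K + 1 data groups (the second data) are produced."""
        parity = bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*groups))
        return groups + [parity]

    second_data = encode_first_scheme(split_into_groups(b"example first data", 4))

If any single data group (row) of the second data is lost in transit, each column then misses exactly one unit, which the parity row allows the receiver to restore.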
With reference to the first aspect, in certain implementations of the first aspect, the M1 first codewords include N data sets, a first data set of the N data sets includes location information, the location information indicates a location of the first data set in the N data sets, the first data set is any data set of the N data sets, and N is an integer greater than K.
Based on the above scheme, adding the position information to the data set encoded by the first encoding scheme can enable the receiving end of the second data to recover the arrangement of the data set encoded by the first encoding scheme according to the received position information in each data set, thereby determining the first codeword to recover the lost data set.
With reference to the first aspect, in certain implementations of the first aspect, the N data sets are the same length.
With reference to the first aspect, in some implementations of the first aspect, when the first codeword is generated using the first coding scheme, a number of symbols selected to carry data in each of the K data groups is the same.
With reference to the first aspect, in some implementations of the first aspect, when the first codeword is generated using the first coding scheme, a position of a symbol selected to carry data in each of the K data groups is the same.
With reference to the first aspect, in certain implementations of the first aspect, the M1 first codewords include N data groups, N is an integer greater than K, the first node encodes the N data groups using a second coding scheme to generate third data, the third data includes M2 second codewords, the second codewords are codewords generated using the second coding scheme for all data of U data groups, the U data groups are selected from the N data groups, 1 ≤ U ≤ N, 1 ≤ M2 ≤ N, and the third data is transmitted to the second node.
Based on this scheme, on the basis of the first coding scheme, the N data sets are encoded again using the second coding scheme, so that each data set can be transmitted accurately and garbling and errors of the data sets during the exchange process are avoided.
With reference to the first aspect, in certain implementations of the first aspect, each of the N data sets includes an identification characterizing the second data.
Based on the scheme, after the second node receives the data set, the data set belonging to the second data can be distinguished through the identifier, so that whether the second data has the lost data set in the data exchange process can be judged more accurately.
With reference to the first aspect, in certain implementations of the first aspect, the first coding scheme includes forward error correction coding, FEC.
With reference to the first aspect, in certain implementations of the first aspect, the second coding scheme includes forward error correction coding, FEC.
In a second aspect, a method of processing data is provided, the method being applied to a data switching network. The method comprises the following steps: the second node receives a data group belonging to second data from the first node, the second data including M1 first codewords, the first codewords being codewords generated by encoding a portion of data selected from each of the K data groups using a first coding scheme, K and M1 being integers greater than 1.
Based on the scheme, the first node selects the data in each data set in the process of generating the first code word, so that the second node can decode the first code word to retrieve the lost data set under the condition that the data set of the second data is lost, and the efficiency of data exchange is improved.
With reference to the second aspect, in some implementations of the second aspect, the M1 first codewords include N data groups, N is an integer greater than K, the second node starts a timer when receiving the first data group belonging to the second data, determines that the second data has a missing data group if, before the timer ends, the number of data groups belonging to the second data received by the second node is less than N, and determines that the second data has no missing data group if, before the timer ends, the number of data groups belonging to the second data received by the second node is equal to N.
Based on this scheme, a time period can be set manually, and whether packet loss has occurred is determined within that time period, thereby avoiding excessive detection delay.
With reference to the second aspect, in some implementations of the second aspect, the second node caches the received data set, and in a case that it is determined that the second data has a missing data set, the second node restores the missing data set according to the received data set that belongs to the second data.
With reference to the second aspect, in some implementations of the second aspect, a first data set of the N data sets includes location information, the first data set is any one of the N data sets, and the second node determines M1 first codewords according to the received data set belonging to the second data and the location information in the data sets, and uses the first decoding to recover the lost data set for the M1 first codewords.
Based on the above scheme, the second node restores the arrangement of the data set encoded by the first node using the first coding scheme through the position information added by the first node in the data set encoded by the first coding scheme, thereby determining the first codeword capable of restoring the lost data set.
With reference to the second aspect, in certain implementations of the second aspect, the cached data set of the second data is purged.
Based on this scheme, clearing the cache frees buffer capacity at the second node.
With reference to the second aspect, in certain implementations of the second aspect, the N data sets are the same length.
With reference to the second aspect, in some implementations of the second aspect, the first codeword is a codeword generated using a first coding scheme to encode data over the same number of symbols selected from each of the K data sets.
With reference to the second aspect, in certain implementations of the second aspect, the same number of symbols as described above are located in the same position in each of the K data sets.
With reference to the second aspect, in certain implementations of the second aspect, each of the N data sets includes an identification characterizing the second data.
Based on the scheme, after the second node receives the data set, the data set belonging to the second data can be distinguished through the identifier, so that whether the second data has the lost data set or not can be judged more accurately.
With reference to the second aspect, in some implementations of the second aspect, the M1 first codewords include N data groups, the second node receives third data from the first node, the third data includes M2 second codewords, the second codewords are codewords generated by encoding all data of U data groups using the second coding scheme, 1 ≤ U ≤ N, 1 ≤ M2 ≤ N, the U data groups are selected from the N data groups, and the second node generates the second data by applying a second decoding to the M2 second codewords.
Based on the above scheme, since the second data is encoded twice, the second node can accurately receive each data set through the second decoding (if there is an error in the transmission process of the data set, the second node can correct the error of the data set through the second decoding), and can recover the lost data set through the first decoding.
With reference to the second aspect, in certain implementations of the second aspect, the first decoding includes decoding corresponding to forward error correction coding FEC.
With reference to the second aspect, in certain implementations of the second aspect, the second decoding includes decoding corresponding to forward error correction coding FEC.
In a third aspect, an apparatus for processing data is provided, the apparatus being an apparatus in a data switching network. The device comprises: the processing unit is used for dividing first data into K data groups, K is an integer larger than 1, the first data are data transmitted to a second node, the processing unit is further used for encoding the K data groups by using a first encoding scheme to obtain second data, the second data comprise M1 first code words, the first code words are code words generated by encoding part of data selected from each data group of the K data groups by using the first encoding scheme, and M1 is an integer larger than 1; the transceiver unit is configured to send the second data to a second node.
Based on the above scheme, data from every data group is selected in the process of generating each first codeword. Therefore, if a data group belonging to the second data is lost, the receiving end of the second data (the second node) can retrieve the lost data group by decoding the first codewords, thereby improving the efficiency of data exchange.
With reference to the third aspect, in certain implementations of the third aspect, the M1 first codewords include N data sets, a first data set of the N data sets includes location information, the location information indicates a location of the first data set in the N data sets, the first data set is any data set of the N data sets, and N is an integer greater than K.
Based on the above scheme, adding the position information to the data set encoded by the first encoding scheme can enable the receiving end of the second data to recover the arrangement of the data set encoded by the first encoding scheme according to the received position information in each data set, thereby determining the first codeword to recover the lost data set.
With reference to the third aspect, in some implementations of the third aspect, the N data sets are the same length.
With reference to the third aspect, in some implementations of the third aspect, when the processing unit generates the first codeword using the first coding scheme, a number of symbols selected to carry data in each of the K data groups is the same.
With reference to the third aspect, in some implementations of the third aspect, when the processing unit generates the first codeword using the first coding scheme, a position of a symbol selected to carry data in each of the K data groups is the same.
With reference to the third aspect, in some implementations of the third aspect, the M1 first codewords include N data groups, N is an integer greater than K, the processing unit is further configured to encode the N data groups using a second encoding scheme to generate third data, the third data includes M2 second codewords, the second codewords are codewords generated by encoding all data of U data groups using the second encoding scheme, the U data groups are selected from the N data groups, 1 ≤ U ≤ N, 1 ≤ M2 ≤ N, and the transceiver unit is configured to transmit the third data to the second node.
Based on this scheme, on the basis of the first coding, the N data sets are encoded again using the second coding, so that each data set can be transmitted accurately and garbling and bit errors of the data sets during the transmission process are avoided.
With reference to the third aspect, in certain implementations of the third aspect, each of the N data sets includes an identification characterizing the second data.
Based on the scheme, after the second node receives the data set, the data set belonging to the second data can be distinguished through the identifier, so that whether the second data has the lost data set in the data exchange process can be judged more accurately.
With reference to the third aspect, in certain implementations of the third aspect, the first coding scheme includes forward error correction coding FEC.
With reference to the third aspect, in certain implementations of the third aspect, the second coding scheme includes forward error correction coding FEC.
In a fourth aspect, there is provided an apparatus for processing data, the apparatus being an apparatus in a data switching network. The device comprises: the data transmission system comprises a transceiving unit and a processing unit, wherein the transceiving unit is used for receiving a data group belonging to second data from a first node, the second data comprises M1 first code words, the first code words are code words generated by encoding part of data selected from each data group of K data groups by using a first coding scheme, and K and M1 are integers larger than 1.
Based on the scheme, the first node selects the data in each data set in the process of generating the first code word, so that the second node can decode the first code word to retrieve the lost data set under the condition that the data set of the second data is lost, and the efficiency of data exchange is improved.
With reference to the fourth aspect, in some implementations of the fourth aspect, the M1 first codewords include N data groups, where N is an integer greater than K, and the processing unit is configured to: start a timer when the transceiver unit receives the first data group belonging to the second data; determine that the second data has a lost data group if, before the timer ends, the number of data groups belonging to the second data received by the transceiver unit is less than N; and determine that the second data has no lost data group if, before the timer ends, the number of data groups belonging to the second data received by the transceiver unit is equal to N.
Based on this scheme, a time period can be set manually, and whether packet loss has occurred is determined within that time period, thereby avoiding excessive detection delay.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the apparatus further includes a storage unit: the storage unit is used for caching the received data set, and the processing unit is also used for recovering the lost data set according to the data set belonging to the second data received by the receiving and transmitting unit under the condition that the second data has the lost data set.
With reference to the fourth aspect, in some implementations of the fourth aspect, the second data includes N data groups, a first data group of the N data groups includes location information, the first data group is any one of the N data groups, and the processing unit is further configured to determine the first codeword according to the data group belonging to the second data and the location information in the data group received by the transceiver unit, and recover the lost data group using a first decoding on the first codeword.
Based on the above scheme, the second node restores the arrangement of the data set encoded by the first node using the first coding scheme through the position information added by the first node in the data set encoded by the first coding scheme, thereby determining the first codeword capable of restoring the lost data set.
With reference to the fourth aspect, in some implementations of the fourth aspect, the processing unit is further configured to purge the data set of the second data cached in the storage unit.
Based on this scheme, clearing the cache frees buffer capacity at the second node.
With reference to the fourth aspect, in some implementations of the fourth aspect, the N data sets are the same length.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first codeword is a codeword generated using a first coding scheme to encode data over the same number of symbols selected from each of the K data sets.
With reference to the fourth aspect, in some implementations of the fourth aspect, the same number of symbols are located at the same positions in each of the K data sets.
With reference to the fourth aspect, in certain implementations of the fourth aspect, each of the N data sets includes an identification characterizing the second data.
Based on the scheme, after the second node receives the data set, the data set belonging to the second data can be distinguished through the identifier, so that whether the second data has the lost data set or not can be judged more accurately.
With reference to the fourth aspect, in some implementations of the fourth aspect, the M1 first codewords include N data groups, the transceiver unit is configured to receive third data from the first node, the third data includes M2 second codewords, the second codewords are codewords generated by encoding all data of U data groups using the second coding scheme, 1 ≤ U ≤ N, 1 ≤ M2 ≤ N, the U data groups are selected from the N data groups, and the processing unit is further configured to generate second data by using a second decoding for the M2 second codewords.
Based on the above scheme, since the second data is encoded twice, the second node can accurately receive each data set through the second decoding (if there is an error in the transmission process of the data set, the second node can correct the error of the data set through the second decoding), and can recover the lost data set through the first decoding.
With reference to the fourth aspect, in some implementations of the fourth aspect, the first decoding includes decoding corresponding to forward error correction coding FEC.
With reference to the fourth aspect, in some implementations of the fourth aspect, the second decoding includes decoding corresponding to forward error correction coding FEC.
In a fifth aspect, there is provided an apparatus for processing data, the apparatus comprising a processor coupled to a memory, operable to execute instructions in the memory to implement the method in the first aspect or any one of the possible implementations of the first aspect. Optionally, the apparatus further comprises the memory, which may be disposed separately from the processor or integrated with the processor. Optionally, the apparatus further comprises a communication interface, the processor being coupled to the communication interface.
In one implementation, the communication interface may be a transceiver, or an input/output interface.
In another implementation, the apparatus is a device in a data exchange network. When the apparatus is a chip or a chip system, the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin, or a related circuit on the chip or the chip system. The processor may also be embodied as a processing circuit or a logic circuit.
Alternatively, the transceiver may be a transceiver circuit. Alternatively, the input/output interface may be an input/output circuit.
In a specific implementation process, the processor may be one or more chips, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, and the like. The input signal received by the input circuit may be, but not limited to, received by and input to the receiver, the output signal output by the output circuit may be, but not limited to, output to and transmitted by the transmitter, and the input circuit and the output circuit may be the same circuit, which functions as the input circuit and the output circuit, respectively, at different times. The embodiments of the present application do not limit the specific implementation manner of the processor and the various circuits.
In a sixth aspect, there is provided an apparatus for processing data, the apparatus comprising a processor coupled to a memory, operable to execute instructions in the memory to implement the method in the second aspect or any one of the possible implementations of the second aspect. Optionally, the apparatus further comprises the memory, which may be disposed separately from the processor or integrated with the processor. Optionally, the apparatus further comprises a communication interface, the processor being coupled to the communication interface.
In one implementation, the communication interface may be a transceiver, or an input/output interface.
In another implementation, the apparatus is a device in a data exchange network. When the apparatus is a chip or a chip system, the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin, or a related circuit on the chip or the chip system. The processor may also be embodied as a processing circuit or a logic circuit.
Alternatively, the transceiver may be a transceiver circuit. Alternatively, the input/output interface may be an input/output circuit.
In a specific implementation process, the processor may be one or more chips, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, and the like. The input signal received by the input circuit may be, but not limited to, received by and input to the receiver, the output signal output by the output circuit may be, but not limited to, output to and transmitted by the transmitter, and the input circuit and the output circuit may be the same circuit, which functions as the input circuit and the output circuit, respectively, at different times. The embodiments of the present application do not limit the specific implementation manner of the processor and the various circuits.
In a seventh aspect, there is provided an apparatus for processing data, the apparatus comprising logic circuitry and an input/output interface, the logic circuitry being coupled to the input/output interface and transmitting data through the input/output interface, so as to perform the method in any one of the first to second aspects described above, or in any one of the possible implementations of the first to second aspects.
In an eighth aspect, there is provided a computer readable storage medium storing a computer program (which may also be referred to as code, or instructions) which, when run on a computer, causes the computer to perform any one of the above-described first to second aspects, and a method in any one of the possible implementations of the first to second aspects.
In a ninth aspect, there is provided a computer program product comprising: a computer program (which may also be referred to as code, or instructions) which, when executed, causes a computer to perform the method of any of the above-described first to second aspects, and any of the possible implementations of the first to second aspects.
The advantages of the fifth to ninth aspects may be specifically referred to the description of the advantages of the first to second aspects, and are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a data exchange architecture according to an embodiment of the present application.
Fig. 2 shows a method of processing data at an intermediate switching node.
Fig. 3 is a flowchart of a method for processing data by a transmitting node according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of processing data at a physical layer according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a data set before encoding using a first encoding scheme according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a data set encoded using a first encoding scheme according to an embodiment of the present application.
Fig. 7 is a schematic flow chart of another embodiment of processing data at a physical layer.
Fig. 8 is a schematic diagram of data after two encoding processes according to an embodiment of the present application.
Fig. 9 is a schematic block diagram of an apparatus for processing data according to an embodiment of the present application.
Fig. 10 is a schematic block diagram of another apparatus for processing data provided by an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an apparatus for processing data according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of another apparatus for processing data according to an embodiment of the present application.
Fig. 13 is a schematic structural diagram of yet another apparatus for processing data according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
The technical solutions of the embodiments of the application can be applied to various communication systems with data exchange, for example: the global system for mobile communications (GSM), code division multiple access (CDMA), wideband code division multiple access (WCDMA) systems, general packet radio service (GPRS), long term evolution (LTE) systems, LTE frequency division duplex (FDD) systems, LTE time division duplex (TDD), the universal mobile telecommunications system (UMTS), worldwide interoperability for microwave access (WiMAX) communication systems, fifth generation (5G) systems, new radio (NR), future communication systems, etc.
Fig. 1 is a schematic diagram of a data exchange architecture according to an embodiment of the present application.
Fig. 1 illustrates a multi-layer switching architecture in a data switching network. The architecture includes nodes such as central switching (CSW) nodes (e.g., intranet switches), aggregation switching (ASW) nodes (e.g., access-layer switches), top-of-rack (TOR) nodes, and servers (or computing element arrays); under this architecture, the distances between switching nodes at different levels differ. Specifically, each switching node stores and forwards received data packets according to the routing information and control information in the header. When multiple data packets need to be sent out of the same port at the same time, a buffer must be configured to avoid port congestion; however, the buffer resources at each node are limited, and when buffer occupancy reaches the upper limit, the node discards further data packets, causing packet loss. When a node receiving data packets finds that a packet has been lost, a retransmission mechanism is started, that is, the receiving node sends a message so that the sending node retransmits the data packet; this process has a certain delay and affects the efficiency of data exchange.
Fig. 2 shows a method of processing data at an intermediate switching node.
At the transmitting node, the data to be transmitted is grouped, and the resulting data groups are encapsulated layer by layer from the upper layers to the lower layers according to the seven-layer network protocol model. At the physical layer, the encapsulated data is encoded using forward error correction (FEC) coding and then transmitted. In this coding process the data is encoded only once, taking one encapsulated data group or several encapsulated data groups as the coding unit: if one encapsulated data group is the unit, all data in that data group is FEC-encoded to generate one codeword; if several encapsulated data groups are the unit, all data in those data groups is FEC-encoded to generate one codeword. This coding process can improve the transmission accuracy of each data group, but it cannot avoid the problem of lost data groups.
At the receiving node, the received data is decoded and decapsulated according to the inverse of the process at the transmitting node to obtain the data of the corresponding user. At a certain layer, for example the transmission control protocol (TCP) layer, the receiving node checks whether the received data is complete (i.e., whether the number of received data groups matches the number of groups into which the transmitting node divided the data to be sent), that is, whether packet loss has occurred; if a data group is found to be lost, the transmitting node is notified to resend the data. In this process, there is a delay on the order of milliseconds from the moment the loss is detected to the moment the retransmitted data is received again, which greatly affects the data exchange efficiency of the network.
In view of this, the embodiments of the present application provide a method and apparatus for processing data at a physical layer, which can effectively avoid the problem of packet loss in the data exchange process, thereby improving the efficiency of data exchange.
Fig. 3 is a flowchart of a method for processing data by a transmitting node according to an embodiment of the present application. The method 300 shown in fig. 3 includes:
in step S310, the first node divides the first data into K data groups, where K is an integer greater than 1, and the first data is data sent to the second node.
It should be understood that the first node and the second node are nodes in the data switching network, where the first node and the second node may be intermediate nodes, and the first node may also be the originating node that sends the first data; this is not limited in this application.
Illustratively, the first node and the second node may be devices in a data exchange network, such as TORs, computers, a single central processing unit (CPU), or a graphics processing unit (GPU).
Optionally, the K data groups have the same length, which may also be described as each of the K data groups occupying the same number of symbols or bits; the two descriptions are used interchangeably herein and are not distinguished further.
Illustratively, as shown in fig. 4, the first node groups data to be transmitted (first data) into K data groups, each of which has the same length.
In step S320, the first node encodes the K data sets using a first encoding scheme to obtain second data, where the second data includes M1 first codewords, the first codewords are generated by encoding a portion of data selected from each of the K data sets using the first encoding scheme, and M1 is an integer greater than 1.
It should be appreciated that in the process of encoding using the first encoding scheme to generate a first codeword, the portion of data selected by the first node in each of the K data groups may be a portion of symbols in each data group or data carried on a portion of bits.
Optionally, the M1 first codewords comprise N data sets, a first data set of the N data sets comprises position information, the position information indicates the position of the first data set in the N data sets, the first data set is any one of the N data sets, and N is an integer greater than K.
It should be understood that the second data comprises M1 first codewords, which can also be described as the second data comprising N data sets, or that the M1 first codewords comprise N data sets. The N data sets include K encoded data sets and Q encoded overhead data sets, i.e., N = K + Q.
Optionally, the N data sets are the same length.
Optionally, the first codeword is generated using a first coding scheme encoding data over the same number of symbols selected from each of the K data sets. That is, in the process of generating a first codeword, the number of symbols or bits occupied by data selected by the first node from each of the K data sets is the same.
Optionally, the first codeword is generated using a first coding scheme encoding data on symbols that select the same position from each of the K data sets. That is, in the process of generating a first codeword, the symbol or bit position occupied by the data selected by the first node from each of the K data sets is the same.
Optionally, the first codeword is generated using a first coding scheme encoding data over the same number of symbols selected from the same position in each of the K data sets. That is, in generating a first codeword, the first node selects data over the same number of symbols or bits from the same position in each of the K data sets.
Optionally, each of the N data sets includes an identification characterizing the second data.
I.e. the second node receives the data sets, can determine which data sets are data sets belonging to the second data by means of the identification in the data sets.
It will be appreciated that the second data is generated by encoding the first data, and thus characterizes the identity of the second data, and may also characterize the first data.
In step S330, the first node sends second data to the second node, and the second node receives the second data.
The process of encoding first data using a first encoding scheme to generate second data is illustrated below in connection with fig. 4-6.
For example, as shown in fig. 4, the first node divides the first data into K data groups and then arranges the K data groups vertically; the arranged data group 1, data group 2, …, data group K are shown in fig. 5, and each data group laterally consists of data units, where a data unit may be a symbol or a bit. The data unit covered by one dashed box in the longitudinal direction in each data group is the smallest coding unit of that data group; the smallest coding unit differs in different fields, for example, in the binary field the smallest coding unit is 1 bit, and in a finite field the smallest coding unit is 1 symbol, which is not limited in this application.
In the process of encoding the first data by using the first encoding scheme, assuming that the lengths of the K data groups are the same, encoding the first data to generate second data shown in fig. 6, where the second data includes N data groups, the lengths of the N data groups are the same, N is an integer greater than K, and the data groups that are more than the first data after encoding by using the first encoding scheme are encoding overheads.
The second data includes M1 first codewords (each column of the second data is a first codeword). A first codeword, for example the leftmost one shown in fig. 6, may be obtained by applying the first coding scheme to data in data units selected from each of the K data groups shown in fig. 5. During this process, the number of data units selected in each data group is the same, and the positions of the selected data units may be the same in every data group (for example, the 1st data unit is selected in all K data groups) or may differ (for example, with interleaved coding, the 1st data unit is selected in the 1st data group, the 2nd data unit in the 2nd data group, the 1st data unit in the 3rd data group, the 2nd data unit in the 4th data group, and so on, with the 1st data unit selected in the (K-1)th data group and the 2nd data unit in the Kth data group). It will be appreciated that the number of data units selected from the K data groups may differ when generating different first codewords. For example, first codeword #1 is obtained by applying the first coding scheme to the data in one data unit selected from each data group, while first codeword #2 is obtained by applying the first coding scheme to the data in 2 data units selected from each data group.
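The two selection patterns described in the preceding paragraph can be sketched as follows (illustrative only; groups is the list of K equal-length data groups of fig. 5, one data unit is one symbol or bit, and the generalization of the interleaving example to the j-th codeword is an assumption):

    def codeword_input_same_position(groups: list[bytes], j: int) -> list[int]:
        """Inputs of the j-th first codeword when the same position is selected:
        the j-th data unit of each of the K data groups."""
        return [group[j] for group in groups]

    def codeword_input_interleaved(groups: list[bytes], j: int) -> list[int]:
        """Inputs of the j-th first codeword under the interleaving example:
        odd-numbered groups (1st, 3rd, ...) contribute their j-th data unit,
        even-numbered groups (2nd, 4th, ...) contribute their (j+1)-th unit.
        Assumes j + 1 is still inside the group."""
        return [group[j] if i % 2 == 0 else group[j + 1]
                for i, group in enumerate(groups)]

Either list of data units is then fed to the first coding scheme to produce one first codeword.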
It will be appreciated that each data group of the second data may include position information; for example, data group 4 in fig. 6 includes position information indicating that data group 4 is located in the fourth row of the second data. The receiving node (second node) of the data can recover the second data from the position information and determine the M1 first codewords. Alternatively, the second data may include no position information in the data groups; in that case the first node sends the N data groups in sequence according to the longitudinal arrangement position of each data group among the N data groups, and the second node can recover the second data shown in fig. 6 according to the order in which the data groups are received.
It should also be understood that the above description with respect to fig. 6 is by way of example only and is not limiting of the present application.
After encoding the first data using the first encoding scheme, the second data may be processed in 3 ways:
Mode 1 (without performing the second encoding):
As shown in fig. 4, the first node adds a header to each of the N data groups belonging to the second data generated by the encoding using the first encoding scheme, the header including routing information and control information for forwarding the data by the intermediate switching node.
The first node adds a preamble for delimitation for the N data groups after adding the header.
For example, one preamble may be added to one data set, or the same preamble may be added to a plurality of data sets, which is not limited in this application.
Step S330 is replaced by the first node sending second data to the second node, wherein the second data comprises N data groups to which the preamble and the header are added. Correspondingly, the second node receives the second data.
For the second node, after receiving the data (including the second data), the second node identifies the data set according to the preamble delimitation, determines the data set belonging to the second data, and determines whether the second data has the missing data set.
It should be understood that the second data comprises M1 first codewords, which can also be described as the second data comprising N data groups, or the M1 first codewords comprising N data groups. As shown in fig. 6, when the second data is divided in the lateral direction, each row is a data group; when the second data is divided in the longitudinal direction, each column is a first codeword. The first node transmits the second data to the second node in the form of data groups, and the second node receives the second data in the form of data groups. The second node may arrange the received data groups belonging to the second data vertically according to the received data groups and the position information included in them, with the result of the arrangement as shown in fig. 6; the M1 first codewords can therefore be determined after the vertical arrangement, and if a lost data group exists, the first codewords can be decoded to recover the lost data group.
Optionally, the second node starts a timer after receiving the first data group belonging to the second data, and if the number of the data groups belonging to the second data received by the second node is smaller than N before the timer ends, it is determined that the second data has a lost data group; and if the number of the data groups belonging to the second data received by the second node is equal to N before the timer is finished, determining that the second data does not lose the data groups.
The second data comprises, for example, 10 data sets, and is considered to have a missing data set if only 9 data sets belonging to the second data are received before the timer expires. If 10 data sets belonging to the second data are received before the timer expires, the second data is considered to have no missing data set.
It should be appreciated that the timer may be a time period set by a person.
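A minimal sketch of this timer-based check (the window length, class name, and data structures are assumptions, not part of the application):

    import time

    class SecondDataCollector:
        """Collects data groups carrying the identification of the second data and
        decides, within a fixed time window, whether any data group was lost."""

        def __init__(self, n_expected: int, window_seconds: float):
            self.n_expected = n_expected        # N data groups are expected
            self.window = window_seconds        # manually configured time period
            self.deadline = None
            self.received = {}                  # position information -> data group

        def on_group(self, position: int, group: bytes) -> None:
            if self.deadline is None:           # first group of the second data
                self.deadline = time.monotonic() + self.window   # start the timer
            self.received[position] = group     # cache the received data group

        def has_lost_groups(self) -> bool:
            """Evaluated when the timer ends: fewer than N groups means loss."""
            return len(self.received) < self.n_expected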
Optionally, the second node caches the received data set, and in case it is determined that the second data has a missing data set, the second node restores the missing data set according to the cached data set belonging to the second data.
Optionally, the data set belonging to the second data includes location information, and the second node determines a first codeword according to the received data set belonging to the second data and the location information in the data set, and restores the lost data set using the first decoding on the first codeword.
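Continuing the XOR-parity stand-in used in the earlier sketch, recovering a single lost data group could look like this (illustrative; a real first decoding, for example RS erasure decoding, would replace the XOR step and could recover more than one lost group):

    from functools import reduce

    def recover_lost_group(received: dict[int, bytes], n: int) -> dict[int, bytes]:
        """received maps position information -> data group for the groups that
        arrived. With one XOR parity group (N = K + 1), any single missing group
        equals the column-wise XOR of all groups that were received."""
        missing = [pos for pos in range(n) if pos not in received]
        if len(missing) == 1:
            columns = zip(*received.values())
            received[missing[0]] = bytes(reduce(lambda a, b: a ^ b, col)
                                         for col in columns)
        return received                          # rows 0..N-1, ordered by position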
It should be understood that the first decoding is the decoding corresponding to the first coding scheme; for example, the first decoding is the decoding corresponding to FEC, which is not limited in this application.
Optionally, the second node receives the data set including an identification characterizing the second data.
It will be appreciated that the second node may distinguish whether the data set belongs to the second data based on the identity in the data set.
Optionally, the second node clears the cached data set.
For example, if the second data is completely received by the second node within a set period of time, for example, before the timer expires, it is determined that no packet loss occurs in the second data during the data transmission process, at which time the timing may be stopped and the buffered data group belonging to the second data may be cleared.
If the second data is not completely received by the second node within a set period of time, for example, before the timer expires, that is, there is a packet loss occurring during the transmission of the second data, the lost data set is recovered according to the above-mentioned method, after the recovery is completed, the complete second data is sent to the upper layer for processing, and then the cached data set belonging to the second data is cleared.
If the recovery of the lost data group according to the above method fails, the cached data groups belonging to the second data may also be cleared at this time. In this case, the physical layer has attempted and failed to recover the lost data group; the data is handed to the upper layer for processing, and the upper layer can notify the first node to retransmit the second data.
It should be understood that, since the buffer resources at the switching node in the data switching network are limited, clearing the buffer can release the buffer resources at the switching node (the second node), and packet loss caused by the buffer of the switching node reaching the upper limit can be avoided, so as to improve the performance of data transmission.
Mode 2 (second encoding, header and payload jointly):
as shown in fig. 4, the first node adds a header to each of the N data groups generated by the encoding using the first encoding scheme, the header including routing information and control information for forwarding data by the intermediate switching node.
The first node encodes the N data groups to which the header has been added using a second coding scheme to generate third data, where the third data comprises M2 second codewords, each second codeword is generated by encoding all data of U data groups together with their headers using the second coding scheme, 1 ≤ U ≤ N, and 1 ≤ M2 ≤ N.
It will be appreciated that the third data generated using the second coding scheme comprises N data sets, i.e. the number of data sets before and after coding does not change during the coding using the second coding scheme, but that a second codeword is generated based on the total data of at least one data set when coded using the second coding scheme.
It should also be understood that the third data comprises M2 second codewords, which can also be described as the third data comprising N data sets, or the M2 second codewords comprising N data sets.
For example, if the second data comprises 10 data groups, the third data generated by encoding with the second coding scheme also comprises 10 data groups; the third data may equally be described as comprising 5 second codewords (every 2 data groups of the second data, taken as a whole, are encoded with the second coding scheme to generate one second codeword).
The U data groups are the coding units of the second encoding. Assuming U = 1, this can be understood as follows: for all data of 1 data group (with its header) out of the N data groups to which headers have been added, one second codeword is generated using the second coding scheme, and M2 = N, that is, the third data includes N second codewords. Assuming U = 2, for all data of 2 data groups out of the N data groups to which headers have been added, one second codeword is generated using the second coding scheme; if N is even, M2 = N/2, that is, the third data includes N/2 second codewords, and if N is odd, M2 = (N+1)/2, that is, the third data includes (N+1)/2 second codewords.
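The relationship between N, U, and M2 described above can be sketched as follows (illustrative names; the second coding scheme itself is not shown, only the grouping into coding units):

    import math

    def chunk_for_second_encoding(groups_with_header: list[bytes], u: int) -> list[bytes]:
        """Each second codeword covers all data of U consecutive data groups (with
        their headers), so M2 = ceil(N / U) second codewords are produced."""
        n = len(groups_with_header)
        m2 = math.ceil(n / u)                    # e.g. N = 10, U = 2 gives M2 = 5
        return [b"".join(groups_with_header[i * u:(i + 1) * u]) for i in range(m2)]

For N odd and U = 2 this yields M2 = (N + 1) / 2, matching the case above.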
It should be understood that in mode 2, as shown in fig. 4, the data group (payload) and the header are encoded with the same encoding (second encoding) to generate the second codeword, that is, to generate the third data.
Thereafter, the first node adds a preamble for delimitation for the second codeword in the third data.
For example, one second codeword (which may be generated by one data set encoding or may be generated by multiple data set encoding) may be added with one preamble, or multiple second codewords may be added with the same preamble, which is not limited in this application.
Step S330 is replaced by the first node sending third data to the second node, wherein the third data comprises a preamble and a header. Correspondingly, the second node receives the third data.
For the second node, after the second node receives the data (including the third data), it delimits according to the preamble, identifies the data set, and determines the data set belonging to the third data.
The second node may generate the second data and the headers by applying a second decoding to the M2 second codewords (since each second codeword was generated by the first node encoding the data groups and the header together using the second coding scheme, the decoding result includes both the payload and the header).
After the second data is generated, whether the second data has the condition of losing the data group or not is judged.
Specifically, the method for the second node to determine whether the second data is missing the data set, and the method for the second node to retrieve the missing data set if the second node determines that the second data is missing the data set, and other operations of the second node may refer to the description of the specific operation steps of the second node in the mode 1, which is not repeated herein for brevity.
Mode 3 (second encoding, header and payload encoded separately):
As shown in fig. 7, the first node encodes the N data groups that were encoded using the first coding scheme with a second coding scheme to generate third data, the third data including M2 second codewords generated by encoding all data of U data groups using the second coding scheme, where 1 ≤ U ≤ N and M2 is an integer greater than 1.
The U data groups are the coding units of the second encoding. Assuming U = 1, for all data of 1 data group out of the N data groups, one second codeword is generated using the second coding scheme, and M2 = N, that is, the third data includes N second codewords. Assuming U = 2, for all data of 2 data groups out of the N data groups, one second codeword is generated using the second coding scheme; if N is even, M2 = N/2, that is, the third data includes N/2 second codewords, and if N is odd, M2 = (N+1)/2, that is, the third data includes (N+1)/2 second codewords.
Next, the first node adds a header to each second codeword in the third data, the header including routing information and control information, and the header is encoded using a third coding scheme to generate a header codeword.
It should be understood that in mode 3, the data set (payload) and the header are encoded separately using different encodings.
After encoding the data set and the header respectively, the first node adds a preamble for delimitation for the third data to which the header is added.
For example, when adding a preamble to the second codeword included in the third data, one second codeword may be added with one preamble, or a plurality of second codewords may be added with the same preamble, which is not limited in this application.
Step S330 may be replaced by the first node sending third data to the second node, the third data comprising a preamble and a header. Correspondingly, the second node receives the third data.
For the second node, after the second node receives the data (including the third data), it delimits according to the preamble, identifies the data set, and determines the data set belonging to the third data.
The second node generates a header using a third decoding of the header codeword, and generates second data using a second decoding of M2 second codewords (including the payload since the second codeword was generated by the first node encoding the data set using the second encoding scheme).
After the second data is generated, whether the second data has the condition of losing the data group or not is judged.
Specifically, the method for the second node to determine whether the second data is missing the data set, and the method for the second node to retrieve the missing data set if the second node determines that the second data is missing the data set, and other operations of the second node may refer to the description of the specific operation steps of the second node in the mode 1, which is not repeated herein for brevity.
In Modes 1, 2, and 3, the second node may receive or forward data according to the routing information and control information included in the header; for data destined for the present node, the payload is extracted from the data groups and then delivered to the upper layer for processing.
Optionally, the first coding scheme, the second coding scheme, or the third coding scheme in the above embodiment includes FEC.
Optionally, the first decoding, the second decoding, or the third decoding in the foregoing embodiments includes decoding corresponding to FEC.
Fig. 8 is a schematic diagram of data after two encoding processes according to an embodiment of the present application.
Fig. 8 is a specific example in which the data to be sent (the first data described above) is encoded using the first encoding (the vertical encoding) and the second encoding (the horizontal encoding) to obtain the third data described above. The third data encoded with the first and second encoding schemes includes 98 data groups and 2 overhead data groups (the overhead generated by encoding with the first encoding scheme). That is, the first data described above is divided into K data groups, the K data groups are encoded with the first encoding to generate N data groups, and the N data groups include the K encoded data groups and Q encoded overhead data groups, where K = 98, Q = 2 and N = 100 in the example of fig. 8. In each twice-encoded data group, the header occupies 9 symbols or bits, the position information occupies 1 symbol or bit, the payload occupies 504 symbols or bits, and the overhead generated by the second encoding scheme occupies 30 symbols or bits per data group. The application is not limited to these numbers.
In the embodiments provided herein, the second coding scheme employs KP4 FEC, i.e., Reed-Solomon RS(544,514), which is constructed over the Galois field GF(2^10) and has an error correction capability of 15 symbols within the field.
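The example numbers above are internally consistent, as the short arithmetic check below shows (values taken from fig. 8 and the RS(544,514) parameters; this is only a check, not part of the encoding itself):

```python
# Per-data-group symbol budget from fig. 8 versus RS(544, 514).
header, position, payload, parity = 9, 1, 504, 30
info_symbols = header + position + payload       # 514 information symbols
total_symbols = info_symbols + parity            # 544 symbols -> one RS(544, 514) codeword
K, Q = 98, 2                                     # data groups and first-encoding overhead groups
print(info_symbols, total_symbols)               # 514 544
print(round(Q / K * 100, 1), "% vertical overhead")  # 2.0 % vertical overhead
```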
In the scheme provided by this application, a first coding scheme (vertical) is added on top of the second coding scheme (horizontal). For simplicity of the data arrangement, the first coding scheme may adopt an RS(100,98) codeword constructed over GF(2^10); when used for error correction and erasure correction, it can recover the data carried by any two lost symbols within the field.
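A toy erasure code with the same structure is sketched below. It is not the RS(100,98) code over GF(2^10) described above: for brevity it uses 4 data symbols, 2 parity symbols and arithmetic over a small prime field, but it illustrates how any two erased positions can be rebuilt from the surviving ones by interpolation.

```python
# Toy systematic erasure code via polynomial evaluation over GF(p), p prime.
# A simplified stand-in for the RS(100,98) vertical code, not the construction of this application.
P = 929  # any prime larger than the symbol values used below

def _interp_eval(points, x):
    """Evaluate at x the unique degree-(k-1) polynomial through the k given (xi, yi) points, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(data, q):
    """Systematic: data occupies positions 0..K-1, q parity symbols follow at positions K..K+q-1."""
    pts = list(enumerate(data))
    return data + [_interp_eval(pts, x) for x in range(len(data), len(data) + q)]

def recover(received, k, n):
    """received: {position: symbol} with at least k entries; rebuilds all n positions."""
    pts = list(received.items())[:k]
    return [received[x] if x in received else _interp_eval(pts, x) for x in range(n)]

codeword = encode([11, 22, 33, 44], q=2)                              # N = 6 symbols in total
survived = {i: s for i, s in enumerate(codeword) if i not in (1, 4)}  # any two positions lost
assert recover(survived, k=4, n=6)[:4] == [11, 22, 33, 44]
```

In the scheme of this application, the corresponding role is played by erasure decoding of the RS(100,98) codewords across the 100 data groups.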
It should be understood that the first codeword may be designed as required to balance the retrieval capability against metrics such as overhead, delay and buffering, which is not limited in this application.
With the embodiment illustrated in fig. 8, an additional overhead of about 2% allows any two or fewer of the 100 data groups to be discarded during the data exchange, and the lost data groups can still be fully retrieved at the receiving node (the second node).
With the method provided in the embodiments of the present application, in the case of packet loss, the delay of retrieving a lost data group via the first codewords can be understood as the sum of the delay of collecting the remaining data groups and the erasure-decoding delay. Compared with the delay from detecting the packet loss, starting a retransmission mechanism, and receiving the complete data again, the method provided in the embodiments of the present application can reduce the data exchange delay.
It should be understood that the methods provided in the embodiments of the present application may be used alone or in combination, and are not limited in this application.
It should be noted that the execution body mentioned in the above method embodiment is only an example, and the execution body may also be a chip, a chip system, or a processor that supports the execution body to implement the above method embodiment, which is not limited in this application.
Method embodiments of the present application are described above with reference to the accompanying drawings, and device embodiments of the present application are described below. It will be appreciated that the description of the method embodiments and the description of the apparatus embodiments correspond to each other; accordingly, for parts that are not described, reference may be made to the previous method embodiments.
It will be appreciated that, in the foregoing embodiments, the methods and operations implemented by the first node may also be implemented by a component (e.g., a chip or a circuit) that can be used in the first node, and the methods and operations implemented by the second node may also be implemented by a component (e.g., a chip or a circuit) that can be used in the second node.
The above description has been presented mainly in terms of interaction between nodes, and the solution provided in the embodiments of the present application is described. It will be appreciated that each network element, e.g. the transmitting device or the receiving device, in order to implement the above-mentioned functions, comprises corresponding hardware structures and/or software modules for performing each function. Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application may divide the function modules of the transmitting end device or the receiving end device according to the above method example, for example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation. The following description will take an example of dividing each functional module into corresponding functions.
Fig. 9 is a schematic block diagram of an apparatus for processing data provided in an embodiment of the present application. The apparatus 400 comprises a transceiver unit 410 and a processing unit 420. The transceiver unit 410 may communicate with the outside, and the processing unit 420 is used for data processing. The transceiver unit 410 may also be referred to as a communication interface or a communication unit.
Optionally, the apparatus 400 may further include a storage unit, where the storage unit may be used to store instructions and/or data, and the processing unit 420 may read the instructions and/or data in the storage unit.
In one design, the apparatus 400 may be a first node in a data exchange network, the transceiver unit 410 is configured to perform the operations of receiving or transmitting by the first node in the above method embodiment, and the processing unit 420 is configured to perform the operations of processing inside the first node in the above method embodiment.
In another design, the apparatus 400 may be a device including a first node. Alternatively, the apparatus 400 may be a component configured in the first node, for example, a chip in the first node. In this case, the transceiver unit 410 may be an interface circuit, a pin, or the like. In particular, the interface circuit may include an input circuit and an output circuit, and the processing unit 420 may include a processing circuit.
In a possible implementation manner, the processing unit 420 is configured to divide first data into K data groups, where K is an integer greater than 1, and the first data is data sent to the second node, and the processing unit 420 is further configured to encode the K data groups using a first encoding scheme to obtain second data, where the second data includes M1 first codewords, where the first codewords are codewords generated by encoding a portion of data selected from each of the K data groups using the first encoding scheme, and M1 is an integer greater than 1; the transceiver unit 410 is configured to send the second data to a second node.
Based on the above scheme, in the process of generating the first codeword, the data in each data set is selected, so if the data set belonging to the second data is lost at the receiving end (second node) of the second data, the lost data set can be retrieved according to the decoding of the first codeword, thereby improving the efficiency of data exchange.
In one possible implementation, the M1 first codewords include N data sets, a first data set of the N data sets includes location information, the location information indicates a location of the first data set in the N data sets, the first data set is any data set of the N data sets, and N is an integer greater than K.
Based on the above scheme, adding the position information to the data set encoded by the first encoding scheme can enable the receiving end of the second data to recover the arrangement of the data set encoded by the first encoding scheme according to the received position information in each data set, thereby determining the first codeword to recover the lost data set.
One possible implementation, the N data sets are the same length.
In one possible implementation, when the processing unit 420 generates the first codeword by using the first coding scheme, the number of symbols carrying data selected in each of the K data groups is the same.
In one possible implementation, when the processing unit 420 generates the first codeword by using the first coding scheme, the positions of the symbols carrying data selected in each of the K data groups are the same.
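A minimal sketch of this column-wise selection is shown below (the first encoder itself is represented by a placeholder, since the specific code used for the first coding scheme is not fixed here):

```python
# Sketch: take the symbol at the same position from each of the K data groups,
# so each column across the groups becomes the input of one first codeword.
def first_codeword_inputs(data_groups):
    """data_groups: K equal-length symbol lists -> list of columns, one per first codeword."""
    return [list(column) for column in zip(*data_groups)]

K_groups = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # K = 3 groups of 3 symbols each
print(first_codeword_inputs(K_groups))         # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
# first_codewords = [vertical_encode(col) for col in first_codeword_inputs(K_groups)]
# where vertical_encode is a hypothetical placeholder for the first coding scheme
```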
In a possible implementation manner, the M1 first codewords include N data sets, N is an integer greater than K, the processing unit 420 is further configured to encode the N data sets using a second encoding scheme to generate third data, where the third data includes M2 second codewords, and the second codewords are codewords generated by encoding all data of U data sets using the second encoding scheme, where the U data sets are selected from the N data sets, 1 ≤ U ≤ N, and 1 ≤ M2 ≤ N, and the transceiver unit 410 is configured to send the third data to the second node.
Based on this scheme, on top of the first coding scheme, the N data sets are encoded again using the second coding scheme, so that each data set can be transmitted accurately, avoiding garbling and bit errors of the data sets during transmission.
In one possible implementation, each of the N data sets includes an identification characterizing the second data.
Based on the scheme, after the second node receives the data set, the data set belonging to the second data can be distinguished through the identifier, so that whether the second data has the lost data set in the data exchange process can be judged more accurately.
In one possible implementation, the first coding scheme includes forward error correction coding, FEC.
In one possible implementation, the second coding scheme includes forward error correction coding FEC.
Fig. 10 is a schematic block diagram of an apparatus for processing data provided in an embodiment of the present application. The apparatus 500 comprises a transceiver unit 510 and a processing unit 520. The transceiver unit 510 may communicate with the outside, and the processing unit 520 is used for data processing. The transceiver unit 510 may also be referred to as a communication interface or a communication unit.
Optionally, the apparatus 500 may further include a storage unit 530, where the storage unit 530 may be used to store instructions and/or data, and the processing unit 520 may read the instructions and/or data in the storage unit 530.
In one design, the apparatus 500 may be a second node in a data exchange network, where the transceiver unit 510 is configured to perform the operations of receiving or transmitting by the second node in the above method embodiment, and the processing unit 520 is configured to perform the operations of processing inside the second node in the above method embodiment.
In another design, the apparatus 500 may be a device including a second node. Alternatively, the apparatus 500 may be a component configured in the second node, for example, a chip in the second node. In this case, the transceiver unit 510 may be an interface circuit, a pin, or the like. In particular, the interface circuit may include an input circuit and an output circuit, and the processing unit 520 may include a processing circuit.
In a possible implementation manner, the transceiver unit 510 is configured to receive, from the first node, a data group that belongs to second data, where the second data includes M1 first codewords, and the first codewords are codewords generated by encoding a portion of data selected from each of K data groups using the first coding scheme, where K and M1 are integers greater than 1.
Based on the scheme, the first node selects the data in each data set in the process of generating the first code word, so that the second node can decode the first code word to retrieve the lost data set under the condition that the data set of the second data is lost, and the efficiency of data exchange is improved.
In one possible implementation manner, the M1 first codewords include N data groups, where N is an integer greater than K. The processing unit 520 is configured to start a timer when the transceiver unit 510 receives the first data group belonging to the second data. If, before the timer ends, the number of data groups belonging to the second data received by the transceiver unit 510 is less than N, the processing unit 520 determines that a data group of the second data has been lost; if, before the timer ends, the number of data groups belonging to the second data received by the transceiver unit 510 is equal to N, the processing unit 520 determines that no data group of the second data has been lost.
Based on this scheme, a time period can be set in advance, and whether packet loss has occurred is judged within that period, which avoids an unbounded judgment delay.
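A minimal receiver-side sketch of this timer check follows (the timeout value, the threading model and the result callback are assumptions; the scheme only requires starting a timer on the first received data group of the second data and comparing the count with N when the timer ends):

```python
# Sketch of the loss check described above; not a complete receiver.
import threading

class LossDetector:
    def __init__(self, n, timeout_s, on_result):
        self.n, self.timeout_s, self.on_result = n, timeout_s, on_result
        self.received_positions = set()
        self.timer = None

    def on_data_group(self, position):
        if self.timer is None:  # first data group belonging to the second data
            self.timer = threading.Timer(self.timeout_s, self._expired)
            self.timer.start()
        self.received_positions.add(position)

    def _expired(self):
        # fewer than N groups collected before the timer ended -> data groups were lost
        self.on_result(len(self.received_positions) < self.n)
```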
In a possible implementation manner, the storage unit 530 is configured to buffer the received data set, and the processing unit 520 is further configured to, in case it is determined that the second data has a missing data set, recover the missing data set according to the data set belonging to the second data received by the transceiver unit 510.
In a possible implementation manner, the second data includes N data sets, a first data set of the N data sets includes location information, the first data set is any one of the N data sets, the processing unit 520 is further configured to determine the first codeword according to the data set belonging to the second data and the location information in the data sets received by the transceiver unit 510, and recover the lost data set using the first decoding on the first codeword.
Based on the above scheme, the second node restores the arrangement of the data set encoded by the first node using the first coding scheme through the position information added by the first node in the data set encoded by the first coding scheme, thereby determining the first codeword capable of restoring the lost data set.
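The reassembly step can be sketched as follows; the erasure decoder is left as a placeholder (the toy interpolation example given earlier shows one way such a decoder can work):

```python
# Sketch: use the position information to line up the received data groups and
# rebuild the column-wise inputs of the first codewords, marking lost positions as erasures.
def rebuild_first_codeword_inputs(received, n):
    """received: {position: list of symbols}; returns (columns, erased_positions)."""
    erased = [p for p in range(n) if p not in received]
    length = len(next(iter(received.values())))
    columns = [{p: grp[i] for p, grp in received.items()} for i in range(length)]
    return columns, erased

# lost data groups are then restored column by column, e.g.
# restored = [first_decode(col, erasures=erased) for col in columns]
# where first_decode is a hypothetical placeholder for the first decoding
```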
In a possible implementation, the processing unit 520 is further configured to purge the data set of the second data buffered in the storage unit 530.
Based on this scheme, clearing the buffered data frees up buffer capacity at the second node.
One possible implementation, the N data sets are the same length.
One possible implementation is that the first codeword is a codeword generated using a first coding scheme encoding data over the same number of symbols selected from each of the K data sets.
One possible implementation is that the same number of symbols are located the same in each of the K data sets.
In one possible implementation, each of the N data sets includes an identification characterizing the second data.
Based on the scheme, after the second node receives the data set, the data set belonging to the second data can be distinguished through the identifier, so that whether the second data has the lost data set or not can be judged more accurately.
In a possible implementation manner, the M1 first code words include N data groups, the transceiver unit 510 is configured to receive third data from the first node, the third data includes M2 second code words, the second code words are code words generated by encoding all data of U data groups using the second coding scheme, 1 ≤ U ≤ N, 1 ≤ M2 ≤ N, the U data groups are selected from the N data groups, and the processing unit 520 is further configured to generate the second data by using the second decoding for the M2 second codewords.
Based on the above scheme, since the second data is encoded twice, the second node can accurately receive each data set through the second decoding (if there is an error in the transmission process of the data set, the second node can correct the error of the data set through the second decoding), and can recover the lost data set through the first decoding.
In one possible implementation, the first decoding includes decoding corresponding to forward error correction coding FEC.
In one possible implementation, the second decoding includes decoding corresponding to forward error correction coding FEC.
As shown in fig. 11, the embodiment of the present application further provides an apparatus 600 for processing data. The apparatus 600 comprises a processor 610, the processor 610 being coupled to a memory 620, the memory 620 being for storing computer programs, instructions and/or data, and the processor 610 being for executing the computer programs or instructions stored in the memory 620, such that the method in the above method embodiments is performed.
Optionally, the apparatus 600 includes one or more processors 610.
Optionally, as shown in fig. 11, the apparatus 600 may further include a memory 620.
Optionally, the apparatus 600 may include one or more memories 620.
Alternatively, the memory 620 may be integrated with the processor 610 or provided separately.
Optionally, as shown in fig. 11, the apparatus 600 may further comprise a transceiver 630 and/or a communication interface, the transceiver 630 and/or the communication interface being used for receiving and/or transmitting signals. For example, the processor 610 is configured to control the transceiver 630 and/or the communication interface to receive and/or transmit signals.
As an alternative, the apparatus 600 is configured to implement the operations performed by the first node in the above method embodiment. For example, the processor 610 is configured to implement operations performed internally by the first node in the above method embodiment, and the transceiver 630 is configured to implement operations performed by the first node in the above method embodiment for receiving or transmitting. The processing unit 420 in the apparatus 400 may be a processor in fig. 11, and the transceiver unit 410 may be a transceiver in fig. 11. The operations performed by the processor 610 may be specifically referred to the above description of the processing unit 420, and the operations performed by the transceiver 630 may be referred to the description of the transceiver unit 410, which is not repeated herein.
As shown in fig. 12, the embodiment of the present application further provides an apparatus 700 for processing data. The apparatus 700 comprises a processor 710, the processor 710 being coupled to a memory 720, the memory 720 being for storing computer programs, instructions and/or data, and the processor 710 being for executing the computer programs or instructions stored in the memory 720, such that the method in the above method embodiments is performed.
Optionally, the apparatus 700 includes one or more processors 710.
Optionally, as shown in fig. 12, the apparatus 700 may further comprise a memory 720.
Alternatively, the apparatus 700 may include one or more memories 720.
Alternatively, the memory 720 may be integrated with the processor 710 or provided separately.
Optionally, as shown in fig. 12, the apparatus 700 may further comprise a transceiver 730 and/or a communication interface, the transceiver 730 and/or the communication interface being used for receiving and/or transmitting signals. For example, the processor 710 is configured to control the transceiver 730 to receive and/or transmit signals.
As an aspect, the apparatus 700 is configured to implement the operations performed by the second node in the above method embodiment.
For example, the processor 710 is configured to implement operations performed internally by the second node in the above method embodiment, and the transceiver 730 is configured to implement operations of receiving or transmitting performed by the second node in the above method embodiment. The processing unit 520 in the apparatus 500 may be the processor in fig. 12, and the transceiver unit 510 may be the transceiver and/or the communication interface in fig. 12. The operations performed by the processor 710 may be specifically referred to the above description of the processing unit 520, and the operations performed by the transceiver 730 may be referred to the description of the transceiver unit 510, which is not repeated herein.
The embodiment of the application also provides a device for processing data, which comprises a processor, wherein the processor is coupled with an input/output interface, and is used for transmitting data through the input/output interface, and the processor is used for executing the method in any method embodiment.
As shown in fig. 13, the embodiment of the present application further provides an apparatus 800 for processing data. The apparatus 800 includes logic 810 and an input/output interface (input/output interface) 820.
Logic 810 may be, among other things, processing circuitry in apparatus 800. Logic 810 may be coupled to a memory unit to invoke instructions in the memory unit so that apparatus 800 may implement the methods and functions of embodiments of the present application. The input/output interface 820 may be an input/output circuit in the apparatus 800, outputting information processed by the apparatus 800, or inputting data or signaling information to be processed into the apparatus 800.
As an alternative, the apparatus 800 is configured to implement the operations performed by the first node in the method embodiments above.
For example, logic 810 is configured to implement the processing-related operations performed by the first node in the above method embodiments, and input/output interface 820 is configured to implement the sending and/or receiving-related operations performed by the first node in the above method embodiments. The operations performed by the logic 810 may be specifically referred to the above description of the processing unit 420, and the operations performed by the input/output interface 820 may be referred to the above description of the transceiver unit 410, which is not repeated here.
Alternatively, the apparatus 800 is configured to implement the operations performed by the second node in the method embodiments above.
For example, logic 810 is configured to implement the processing-related operations performed by the second node in the above method embodiments, and input/output interface 820 is configured to implement the sending and/or receiving-related operations performed by the second node in the above method embodiments. The operations performed by the logic circuit 810 may be specifically referred to the above description of the processing unit 520, and the operations performed by the input/output interface 820 may be referred to the above description of the transceiver unit 510, which is not repeated here.
It should be understood that the above-described device may be one or more chips. For example, the device may be a field programmable gate array (field programmable gate array, FPGA), an application specific integrated circuit (application specific integrated circuit, ASIC), a system on chip (system on chip, SoC), a central processing unit (central processor unit, CPU), a network processor (network processor, NP), a digital signal processor (digital signal processor, DSP), a microcontroller unit (micro controller unit, MCU), a programmable logic device (programmable logic device, PLD) or other integrated chip.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in a processor or instructions in software form. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in hardware in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
It will be appreciated that the memory in embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (random access memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The embodiment of the application also provides a data exchange system, which comprises a first node and a second node.
According to the method provided by the embodiment of the application, the application further provides a computer readable medium storing program code which when run on a computer causes the computer to perform the method of the above embodiment. For example, the computer program, when executed by a computer, enables the computer to implement the method performed by the first node or the method performed by the second node in the above-described method embodiments.
Embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to implement the method performed by the first node or the method performed by the second node in the method embodiments described above.
Any of the above-mentioned explanation and beneficial effects of the related content in the data processing apparatus can refer to the corresponding method embodiments provided above, and are not repeated here.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (digital video disc, DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The devices in the above apparatus embodiments correspond to the first node or the second node in the method embodiments, and the respective steps are performed by the corresponding modules or units; for example, the communication unit (transceiver) performs the steps of receiving or transmitting in the method embodiments, and steps other than transmitting and receiving may be performed by the processing unit (processor). Reference may be made to the corresponding method embodiments for the function of a specific unit. There may be one or more processors.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Furthermore, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with one another in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (27)

1. A method of processing data, the method being applied to a data switching network, comprising:
the first node divides first data into K data groups, wherein K is an integer greater than 1, and the first data is data sent to the second node;
the first node encodes the K data groups by using a first encoding scheme to obtain second data, wherein the second data comprises M1 first code words, the first code words are code words generated by encoding part of data selected from each data group of the K data groups by using the first encoding scheme, and M1 is an integer greater than 1;
the first node sends the second data to the second node.
2. The method of claim 1, wherein the M1 first codewords comprise N data sets, a first data set of the N data sets comprising location information indicating a location of the first data set among the N data sets, the first data set being any data set of the N data sets, N being an integer greater than K.
3. A method according to claim 1 or 2, characterized in that the N data sets are of the same length.
4. A method according to any one of claims 1 to 3, characterized in that the number of data-carrying symbols selected in each of the K data sets is the same when the first codeword is generated using a first coding scheme encoding.
5. The method according to any of claims 1 to 4, wherein the position of the data-carrying symbol selected in each of the K data sets is the same when the first codeword is generated using a first coding scheme encoding.
6. The method according to any one of claims 1 to 5, wherein the M1 first codewords comprise N data sets, N being an integer greater than K, the method further comprising:
the first node encodes the N data groups by using a second encoding scheme to generate third data, wherein the third data comprises M2 second code words, the second code words are code words generated by encoding all data of U data groups by using the second encoding scheme, and the U data groups are selected from the N data groups, wherein U is more than or equal to 1 and less than or equal to N, and M2 is more than or equal to 1 and less than or equal to N;
The first node sending the second data to the second node, comprising:
the first node sends the third data to the second node.
7. The method of any of claims 2 to 6, wherein each of the N data sets comprises an identification characterizing the second data.
8. A method of processing data, the method being applied to a data switching network, comprising:
the second node receives a data group belonging to second data from the first node, the second data including M1 first codewords, the first codewords being codewords generated by encoding a portion of data selected from each of the K data groups using a first coding scheme, K and M1 being integers greater than 1.
9. The method of claim 8, wherein the M1 first codewords comprise N data sets, N being an integer greater than K, the method further comprising:
the second node starts a timer upon receipt of a first data set belonging to the second data,
if the number of the data groups belonging to the second data received by the second node before the timer is finished is smaller than N, determining that the second data has a lost data group;
And if the number of the data groups belonging to the second data received by the second node is equal to N before the timer is finished, determining that the second data has no lost data groups.
10. The method according to claim 9, wherein the method further comprises:
the second node caches the received data set;
and under the condition that the second data has the lost data group, the second node restores the lost data group according to the received data group belonging to the second data.
11. The method of claim 10, wherein a first data set of the N data sets includes location information, the first data set being any one of the N data sets, the second node recovering a lost data set from a received data set belonging to the second data, comprising:
the second node determines the M1 first code words according to the received data group belonging to the second data and the position information in the data group;
the second node recovers the lost data set using a first decoding on the M1 first codewords.
12. The method according to any of claims 9 to 11, wherein the N data sets are of the same length.
13. The method according to any of claims 8 to 12, wherein the first codeword is a codeword generated using the first coding scheme for data over the same number of symbols selected from each of the K data sets.
14. The method of claim 13, wherein the same number of symbols are located the same in each of the K data sets.
15. The method of any of claims 9 to 14, wherein each of the N data sets comprises an identification characterizing the second data.
16. The method according to any of claims 8 to 15, wherein the M1 first codewords comprise N data sets, and the second node receives a data set from the first node that belongs to the second data, comprising:
the second node receives third data from the first node, the third data comprises M2 second code words, the second code words are code words generated by encoding all data of U data groups by using a second encoding scheme, U is not less than 1 and not more than N, M2 is not less than 1 and not more than N, and the U data groups are selected from the N data groups;
The second node generates the second data using a second decoding of the M2 second codewords.
17. An apparatus for processing data, the apparatus being in a data switching network, comprising:
the processing unit is used for dividing first data into K data groups, wherein K is an integer greater than 1, and the first data is data sent to the second node; the processing unit is further configured to encode the K data sets using a first encoding scheme to obtain second data, where the second data includes M1 first codewords, the first codewords are codewords generated by encoding a portion of data selected from each of the K data sets using the first encoding scheme, and M1 is an integer greater than 1;
and the receiving and transmitting unit is used for transmitting the second data to the second node.
18. The apparatus of claim 17, wherein the M1 first codewords comprise N data sets, a first data set of the N data sets comprising location information indicating a location of the first data set among the N data sets, the first data set being any data set of the N data sets, N being an integer greater than K.
19. The apparatus of claim 17 or 18, wherein the N data sets are the same length.
20. The apparatus according to any of claims 17 to 19, wherein the number of data carrying symbols selected in each of the K data sets is the same when the processing unit generates the first codeword using a first encoding scheme encoding.
21. The apparatus according to any of claims 17 to 20, wherein the processing unit uses a first coding scheme to encode the first codeword with the same position of the selected data-carrying symbol in each of the K data sets.
22. The apparatus according to any one of claims 17 to 21, wherein the M1 first codewords comprise N data sets, N being an integer greater than K,
the processing unit is further configured to encode the N data sets using a second encoding scheme to generate third data, where the third data includes M2 second codewords, the second codewords are codewords generated by encoding all data of U data sets using the second encoding scheme, and the U data sets are selected from the N data sets, where U is greater than or equal to 1 and less than or equal to N, and M2 is greater than or equal to 1 and less than or equal to N;
The receiving and transmitting unit is configured to send the third data to the second node.
23. The apparatus of any one of claims 18 to 22, wherein each of the N data sets comprises an identification characterizing the second data.
24. An apparatus for processing data, the apparatus being an apparatus in a data switching network, the apparatus comprising a processor coupled to a memory, the memory storing instructions that, when executed by the processor,
cause the processor to perform the method of any one of claims 1 to 7, or
cause the processor to perform the method of any one of claims 8 to 16.
25. An apparatus for processing data, characterized in that the apparatus is an apparatus in a data switching network, the apparatus comprising logic circuitry for coupling with an input/output interface through which data is transmitted for performing the method of any of claims 1 to 7 or for performing the method of any of claims 8 to 16.
26. A computer readable storage medium for storing a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7 or causes the computer to perform the method of any one of claims 8 to 16.
27. A computer program product, the computer program product comprising: computer program code implementing the method according to any of claims 1 to 7 or implementing the method according to any of claims 8 to 16 when said computer program code is run.
