CN111316582A - Transmission channel rate matching method and device, unmanned aerial vehicle and storage medium - Google Patents


Info

Publication number
CN111316582A
CN111316582A (application CN201980005400.2A)
Authority
CN
China
Prior art keywords: bit data, bit, parallel, preset, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980005400.2A
Other languages
Chinese (zh)
Inventor
刘瑛 (Liu Ying)
翟春华 (Zhai Chunhua)
Current Assignee
SZ DJI Technology Co Ltd
Shenzhen Dajiang Innovations Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN111316582A

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 — Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 — Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 — Systems characterised by the type of code used
    • H04L 1/0067 — Rate matching
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 — Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 — Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0041 — Arrangements at the transmitter end
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 — Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 — Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 — Systems characterised by the type of code used
    • H04L 1/0057 — Block codes
    • H04L 1/0058 — Block-coded modulation
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 — Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 — Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 — Systems characterised by the type of code used
    • H04L 1/0071 — Use of interleaving

Abstract

Embodiments of the invention provide a rate matching method and device for a transmission channel, an unmanned aerial vehicle, and a storage medium. When first bit data output by an encoder is received, the first bit data is stored in a plurality of parallel caches according to a preset storage manner; second bit data is read from the plurality of parallel caches based on a preset reading manner, such that the second bit data read out equals the data that would be obtained by interleaving the first bit data; and target data is obtained by performing bit deletion and bit splicing on the second bit data. Embodiments of the invention can improve rate matching efficiency.

Description

Transmission channel rate matching method and device, unmanned aerial vehicle and storage medium
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to a rate matching method and device for a transmission channel, an unmanned aerial vehicle, and a storage medium.
Background
In the prior art, data must be channel-coded before uplink or downlink transmission, and rate matching of the transmission channel is performed after channel coding is completed.
Taking Turbo coding as an example, rate matching of a Turbo-coded transmission channel is performed in units of code blocks. During rate matching, the three information bit streams output by the Turbo coding are interleaved separately, and the interleaved bit streams then undergo bit collection, bit selection, bit deletion and other processing, completing rate matching of the transmission channel. However, the existing rate matching method is time-consuming and inefficient.
Disclosure of Invention
The embodiment of the invention provides a rate matching method and device for a transmission channel, an unmanned aerial vehicle and a storage medium, which are used for improving the rate matching efficiency of the transmission channel.
A first aspect of an embodiment of the present invention provides a rate matching method for a transmission channel, where the method includes:
when first bit data output by an encoder is received, storing the first bit data in a plurality of parallel caches based on a preset storage mode; reading second bit data from the plurality of parallel caches based on a preset reading mode to enable the second bit data to be equal to data obtained after the first bit data is subjected to interleaving processing; and carrying out bit deletion and bit splicing processing on the second bit data to obtain target data.
A second aspect of the embodiments of the present invention provides a rate matching device for a transmission channel, including a communication interface, one or more processors, and a plurality of parallel caches. The one or more processors operate independently or cooperatively; the communication interface is connected with the plurality of parallel caches, and the plurality of parallel caches are respectively connected with the processors. The communication interface is configured to: when first bit data output by an encoder is received, store the first bit data in the plurality of parallel caches based on a preset storage manner. The processor is configured to: read second bit data from the plurality of parallel caches based on a preset reading manner, so that the second bit data equals the data obtained after the first bit data is interleaved; and perform bit deletion and bit splicing processing on the second bit data to obtain target data.
A third aspect of an embodiment of the present invention provides an unmanned aerial vehicle, including:
a body;
a wireless communication device mounted on the body for performing wireless communication;
the power system is arranged on the machine body and used for providing power;
and the rate matching device provided by the second aspect above.
A fourth aspect of embodiments of the present invention is to provide a computer-readable storage medium, comprising instructions which, when executed on a computer, cause the computer to perform the method according to the first aspect.
According to the embodiments of the invention, when first bit data output by an encoder is received, the first bit data is stored in a plurality of parallel caches according to a preset storage manner, and second bit data is read from the plurality of parallel caches based on a preset reading manner, such that the second bit data read out equals the data obtained by interleaving the first bit data; target data is then obtained by performing bit deletion and bit splicing on the second bit data. Because, during rate matching, the data is stored in the plurality of caches in parallel and the interleaved form of the first bit data is read directly from the caches according to the preset reading manner, a separate interleaving step is avoided, which improves rate matching efficiency; parallel storage and parallel reading of the data improve rate matching efficiency further.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a rate matching method for a transmission channel based on Turbo coding according to the prior art;
fig. 2 is a schematic diagram of a communication scenario according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for rate matching of a transmission channel according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an 8-cache data storage according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a method for adding dummy bit data according to an embodiment of the present invention;
fig. 6 is a diagram of a system architecture for rate matching according to an embodiment of the present invention;
fig. 7 is a flowchart of a method for rate matching of a transmission channel according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a rate matching apparatus for a transmission channel according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a schematic diagram of a rate matching method for a transmission channel based on Turbo coding according to the prior art. In fig. 1, $d_k^{(0)}$, $d_k^{(1)}$ and $d_k^{(2)}$ are the three bit streams output by the Turbo encoder. When rate matching is performed, the three bit streams output by the Turbo encoder are interleaved separately, and the bit streams obtained after interleaving are then subjected to bit collection, bit selection, bit deletion and other processing, yielding the rate-matched target data. The bit streams are interleaved as follows.

Suppose $d_0^{(i)}, d_1^{(i)}, \ldots, d_{D-1}^{(i)}$ denote the input bits of the sub-block interleaver, where $D$ is the number of bits. The output bit sequence of the sub-block interleaver is generated as follows:

1. Let the number of columns of the matrix be $C_{\mathrm{subblock}} = 32$. The columns of the matrix are numbered $0, 1, 2, \ldots, C_{\mathrm{subblock}} - 1$ from left to right.

2. The number of rows of the matrix, $R_{\mathrm{subblock}}$ (written simply as $R$ below), is the minimum integer satisfying

$$D \le R_{\mathrm{subblock}} \times C_{\mathrm{subblock}}.$$

The rows of the matrix are numbered $0, 1, 2, \ldots, R_{\mathrm{subblock}} - 1$ (i.e. $R - 1$) from top to bottom.

3. If $R_{\mathrm{subblock}} \times C_{\mathrm{subblock}} > D$, then $N_D = R_{\mathrm{subblock}} \times C_{\mathrm{subblock}} - D$ dummy bits are added at the head position of the input bits, giving the bit sequence $y_k$ ($k = 0, 1, \ldots, R \times C_{\mathrm{subblock}} - 1$), where:

$$y_k = \langle\mathrm{NULL}\rangle \;\; (k = 0, \ldots, N_D - 1), \qquad y_{N_D + k} = d_k^{(i)} \;\; (k = 0, \ldots, D - 1).$$

Further, the bit sequence $y_k$ is written row by row into the $R \times C_{\mathrm{subblock}}$ matrix, starting from the position of row 0, column 0, to obtain matrix (1) (writing $C = C_{\mathrm{subblock}}$ for brevity):

$$\begin{bmatrix} y_0 & y_1 & \cdots & y_{C-1} \\ y_C & y_{C+1} & \cdots & y_{2C-1} \\ \vdots & \vdots & & \vdots \\ y_{(R-1)C} & y_{(R-1)C+1} & \cdots & y_{RC-1} \end{bmatrix} \qquad (1)$$

For $d_k^{(0)}$ and $d_k^{(1)}$:

1. Inter-column permutation of the matrix is performed based on the pattern $\langle P(j) \rangle_{j \in \{0,1,\ldots,C-1\}}$ given in Table 1, where $P(j)$ denotes the original column position of the $j$-th permuted column.

Table 1 (inter-column permutation pattern, $C_{\mathrm{subblock}} = 32$):

$$\langle P(0), \ldots, P(31) \rangle = \langle 0, 16, 8, 24, 4, 20, 12, 28, 2, 18, 10, 26, 6, 22, 14, 30, 1, 17, 9, 25, 5, 21, 13, 29, 3, 19, 11, 27, 7, 23, 15, 31 \rangle$$

The $R \times C$ matrix (2) obtained after the inter-column permutation is:

$$\begin{bmatrix} y_{P(0)} & y_{P(1)} & \cdots & y_{P(C-1)} \\ y_{P(0)+C} & y_{P(1)+C} & \cdots & y_{P(C-1)+C} \\ \vdots & \vdots & & \vdots \\ y_{P(0)+(R-1)C} & y_{P(1)+(R-1)C} & \cdots & y_{P(C-1)+(R-1)C} \end{bmatrix} \qquad (2)$$

2. The output of the sub-block interleaver is the bit sequence read out column by column from the inter-column-permuted $R \times C$ matrix (2). The output bits of the sub-block interleaving are denoted $v_0^{(i)}, v_1^{(i)}, \ldots, v_{K_\Pi - 1}^{(i)}$, where $v_0^{(i)}$ corresponds to $y_{P(0)}$, $v_1^{(i)}$ corresponds to $y_{P(0)+C}$, and so on, with $K_\Pi = R \times C$.

For $d_k^{(2)}$:

1. The output of the sub-block interleaver is denoted $v_0^{(2)}, v_1^{(2)}, \ldots, v_{K_\Pi - 1}^{(2)}$, where $v_k^{(2)} = y_{\pi(k)}$ and

$$\pi(k) = \left( P\!\left(\left\lfloor \frac{k}{R} \right\rfloor\right) + C \times (k \bmod R) + 1 \right) \bmod K_\Pi,$$

with the permutation pattern $P$ as given in Table 1.
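As an illustration, the sub-block interleaving procedure described above can be sketched in a few lines of Python. This is only a sketch, not the patent's implementation: dummy bits are represented by None, and the standard 32-column permutation pattern is used.

```python
# Illustrative sketch of the sub-block interleaver described above.
# Dummy (<NULL>) bits are represented by None; a real implementation
# would track them so they can be pruned during bit selection.

# Inter-column permutation pattern: P[j] is the original column
# position of the j-th permuted column (32 columns).
P = [0, 16, 8, 24, 4, 20, 12, 28, 2, 18, 10, 26, 6, 22, 14, 30,
     1, 17, 9, 25, 5, 21, 13, 29, 3, 19, 11, 27, 7, 23, 15, 31]

def subblock_interleave(d, C=32):
    """Interleave one bit stream (used for the streams d^(0) and d^(1))."""
    D = len(d)
    R = -(-D // C)                        # minimum R such that D <= R * C
    y = [None] * (R * C - D) + list(d)    # prepend dummy bits
    # Write row by row, then read the permuted columns column by column.
    return [y[r * C + P[j]] for j in range(C) for r in range(R)]

def subblock_interleave_2(d, C=32):
    """Interleave the third stream d^(2): v_k = y_{pi(k)}."""
    D = len(d)
    R = -(-D // C)
    K = R * C
    y = [None] * (K - D) + list(d)
    return [y[(P[k // R] + C * (k % R) + 1) % K] for k in range(K)]
```

For example, with D = 40 input bits, R = 2 and 24 dummy bits are prepended; the first output bit is then a dummy bit and the second is input bit 8, reflecting the column-wise read-out.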
As can be seen from the above, the existing rate matching process is very complex; in particular, the interleaving step occupies considerable processing resources and time, so rate matching efficiency is low.
To solve the problems in the prior art, embodiments of the present invention provide a rate matching scheme for a transmission channel. During rate matching, first bit data output by an encoder is stored in parallel in multiple parallel caches; the interleaved form of the first bit data is obtained by reading the data directly from the caches in a preset reading manner; and target data is obtained by performing bit deletion and bit splicing on that data. This eliminates the interleaving step of the rate matching process and improves rate matching efficiency, and parallel storage and parallel reading of the data improve the efficiency further.
The technical solution of the present invention is explained in detail with reference to the following examples.
A first aspect of embodiments of the present invention provides a rate matching method for a transmission channel. The method may be performed by a rate matching device installed in a device having a wireless communication function; this embodiment takes an aircraft as an example. As shown in fig. 2, which is a schematic diagram of a communication scenario provided by an embodiment of the present invention, the scenario includes an aircraft 20, a rate matching device 21 mounted on the aircraft 20, and a ground station 22. The ground station 22 is a device having wireless communication capability as well as computing and/or processing capability, and may specifically be a remote control, a smartphone, a tablet, a laptop, a watch, a wristband, etc., or a combination thereof. The aircraft 20 may specifically be an unmanned aircraft with wireless communication capability, a helicopter, a manned fixed-wing aircraft, a hot air balloon, or the like. As shown in fig. 2, the ground station 22 and the aircraft 20 are connected via a mobile communication network (e.g., a 4G or 5G mobile communication network, although not limited to 4G or 5G). When the aircraft 20 communicates with the ground station 22, the transmitted data is rate-matched to the transmission channel using the rate matching method provided by this embodiment.
Fig. 3 is a flowchart of a method for rate matching of a transmission channel according to an embodiment of the present invention, and as shown in fig. 3, the method includes:
step 101, when first bit data output by an encoder is received, storing the first bit data in a plurality of parallel caches based on a preset storage mode.
And 102, reading second bit data from the plurality of parallel caches based on a preset reading mode, so that the second bit data is equal to data obtained by interleaving the first bit data.
And 103, carrying out bit deletion and bit splicing processing on the second bit data to obtain target data.
The cache related to this embodiment may be a dual-port type cache or a single-port type cache, and the number of caches may be set as needed.
The encoder according to the present embodiment may be understood as a Turbo encoder, for example, which may perform parallel storage and parallel reading of data when performing an encoding operation.
For example, fig. 4 is a schematic diagram of data storage across 8 caches according to an embodiment of the present invention. In fig. 4, 8 caches are connected in parallel; they may be dual-port or single-port caches, and in this embodiment it is assumed that all 8 are dual-port caches. When a code block to be encoded is input into a Turbo encoder, the Turbo encoder stores the data of the code block block-by-block in the 8 parallel caches according to a preset storage policy. The upper branch encoder of the Turbo encoder reads the data from the 8 caches, orders the read data based on the storage policy, and encodes the data stream obtained after ordering. In the lower branch encoder of the Turbo encoder, the inner interleaver reorders the data stream produced by the upper branch's ordering based on a preset interleaving relation, and the lower branch encoder encodes the data stream generated after reordering. For example, in one possible design, the storage policy may store the first bit of the code block in the first cache, the second bit in the second cache, and so on up to the 8th bit in the 8th cache, then continue cyclically from the ninth bit. In another possible design, the storage policy may partition the code block into blocks of 8 bits each, and then store the data of each block by a method similar to the first design. In yet another possible design, building on the first design, a preset amount of data (e.g., 2 bits or 3 bits) may be stored in the same cache at a time. Correspondingly, when reading and ordering the data, the reverse of the chosen design is used.
Of course, these possible designs are for illustration only and are not the only options of the present invention.
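As a rough sketch of the first storage policy described above, distributing bits round-robin over 8 parallel caches and then reading them back in the reverse order can be expressed as follows. All names here are illustrative, not the patent's implementation.

```python
# Hypothetical sketch of the cyclic storage policy: bit i of the code
# block is stored in cache i % n_caches; reading in the reverse order
# restores the original bit sequence.

def store_round_robin(code_block, n_caches=8):
    caches = [[] for _ in range(n_caches)]
    for i, bit in enumerate(code_block):
        caches[i % n_caches].append(bit)
    return caches

def read_round_robin(caches):
    """Reverse process of store_round_robin: merge the caches back."""
    n = sum(len(c) for c in caches)
    return [caches[i % len(caches)][i // len(caches)] for i in range(n)]
```

Reading is the exact inverse of storing, so `read_round_robin(store_round_robin(cb))` returns `cb` unchanged.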
In this embodiment, the first bit data output by the encoder may include multiple bit streams. For example, when the encoder in this embodiment is specifically a Turbo encoder, the first bit data output by the Turbo encoder may include a bit stream of systematic bits (hereinafter the first bit stream), a bit stream of first check bits (hereinafter the second bit stream), and a bit stream of second check bits (hereinafter the third bit stream).
In addition, considering that the number of columns of the existing interleaving matrix is generally 32, dummy bit data may be added to the first bit data after it is received, so that the total number of bits of the first bit data plus the dummy bit data is a multiple of 32, matching the column count of the interleaving matrix. Still taking the Turbo encoder as an example, after the Turbo encoder outputs the first, second and third bit streams, dummy bit data may be added to each of them so that the number of bits of each stream after padding is a multiple of 32. Fig. 5 is a schematic diagram of a method for adding dummy bit data according to an embodiment of the present invention. As shown in fig. 5, in one implementation, dummy bit data (e.g., a run of "0" bits) may be added at the head position of each bit stream so that the number of bits of the stream becomes a multiple of 32.
Of course, the above method for adding dummy bit data is only an example and not the only possible method. In fact, in other scenarios, if the number of bits of each bit stream included in the first bit data is already an integer multiple of 32, the step of adding dummy bit data may be skipped.
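The padding step just described can be sketched as follows (a minimal illustration, assuming the dummy bits are "0" bits added at the head of each stream):

```python
# Pad a bit stream at its head so its length becomes a multiple of 32,
# matching the column count of the interleaving matrix. If the length
# is already a multiple of 32, nothing is added.

def pad_to_multiple_of_32(bits):
    n_dummy = (-len(bits)) % 32
    return [0] * n_dummy + list(bits)
```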
Further, after the first bit data is obtained, the present embodiment stores the first bit data in parallel in a plurality of parallel buffers.
For example, fig. 6 is a system architecture diagram of rate matching according to an embodiment of the present invention. In this system architecture, the encoder is specifically a Turbo encoder, which outputs a first bit stream dk(0), a second bit stream dk(1) and a third bit stream dk(2), each with bit width N (N is a positive integer). After dummy bit data is added, dk(0), dk(1) and dk(2) are stored in multiple caches in parallel, where each bit stream may be stored separately in one or more caches. For example, in one implementation, dk(0) may be stored in two parallel caches (e.g., two 52x64 single-port caches), while dk(1) and dk(2) may each be stored in one cache (e.g., one 100x64 single-port cache). When storing the bit data of dk(0), bits 0-31 of dk(0) may be stored in the first cache, bits 32-63 in the second cache, bits 64-95 in the first cache, bits 96-127 in the second cache, and so on, so that dk(0) is ultimately stored across the two caches. That is, in the scenario shown in fig. 6, the storage manner of the first bit data may be exemplarily described as: storing the first bit stream in parallel in two first caches, storing the second bit stream in a second cache, and storing the third bit stream in a third cache, where the first, second and third caches are arranged in parallel. It is to be understood that this is by way of illustration and not limitation.
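The chunked layout of dk(0) can be sketched as follows, assuming the 32-bit chunks simply alternate between the two caches. This is an illustration only; the chunk size and cache count are parameters of the design, not fixed by the patent.

```python
# Store a bit stream across two parallel caches in alternating 32-bit
# chunks: chunk 0 -> cache 0, chunk 1 -> cache 1, chunk 2 -> cache 0, ...

def store_alternating_chunks(bits, chunk=32):
    caches = ([], [])
    for i in range(0, len(bits), chunk):
        caches[(i // chunk) % 2].extend(bits[i:i + chunk])
    return caches
```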
Further, in this embodiment, data is read from multiple caches in parallel, for example, in a scenario shown in fig. 6, data may be read from the first cache, the second cache, and the third cache in parallel based on a reading manner corresponding to the first cache, a reading manner corresponding to the second cache, and a reading manner corresponding to the third cache. In order to reduce the influence of interleaving on the rate matching efficiency, the present embodiment may set the reading mode in advance, so that the second bit data read from the parallel cache is equal to the data obtained by interleaving the first bit data, and thus a complex interleaving process is not required, the occupation of interleaving on resources is reduced, and the rate matching efficiency is improved.
In one embodiment, the manner of reading data from the plurality of buffers according to this embodiment may be set based on the storage manner of the first bit data. Specifically, the storage position of each bit of the first bit data in the parallel cache can be known according to the storage manner of the first bit data, and the position of each bit on the first bit data in the interleaved data can be known based on the interleaving principle, so that the data (i.e., the second bit data) obtained after interleaving the first bit data can be directly read from the multiple caches in parallel according to the two positions. Further, the second bit data is subjected to bit deletion, bit splicing and other processing to obtain the target data after rate matching.
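The idea of deriving the reading manner from the storage manner can be made concrete by composing the two mappings involved: the storage map (input bit index to cache and offset) and the interleaving permutation (output position to input bit index). The sketch below is purely illustrative; all names are hypothetical.

```python
# Compose the storage map with the interleaving permutation to get a
# read schedule: schedule[k] tells which cache/offset holds the bit
# that belongs at interleaved position k, so no separate interleaving
# pass over the data is needed.

def build_read_schedule(store_map, perm):
    """store_map[i] = (cache, offset) of input bit i;
    perm[k] = index of the input bit at interleaved position k."""
    return [store_map[perm[k]] for k in range(len(perm))]

# Example: 16 bits stored round-robin over 4 caches, with a simple
# reversal standing in for the interleaving permutation.
store_map = {i: (i % 4, i // 4) for i in range(16)}     # bit i -> (cache, offset)
caches = [[b, b + 4, b + 8, b + 12] for b in range(4)]  # contents match store_map
perm = list(range(15, -1, -1))                          # position k <- input 15 - k
schedule = build_read_schedule(store_map, perm)
read = [caches[c][o] for (c, o) in schedule]
assert read == list(range(15, -1, -1))
```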
In this embodiment, when first bit data output by an encoder is received, the first bit data is stored in a plurality of parallel caches according to a preset storage manner, and second bit data is obtained by reading from the plurality of parallel caches based on a preset reading manner, so that the second bit data obtained by reading is equal to data obtained by interleaving the first bit data, and the target data is obtained by performing bit deletion and bit splicing processing on the second bit data. In the embodiment, when the rate is matched, the data is stored in the plurality of buffers in parallel, and the data after the first bit data interleaving processing is directly read from the plurality of buffers according to the preset reading mode, so that the interleaving process of the data is saved, the rate matching efficiency is improved, and the rate matching efficiency can also be improved through the parallel storage and the parallel reading of the data.
Further optimization and extension of the above embodiment are provided below.
Fig. 7 is a flowchart of a method for rate matching of a transmission channel according to an embodiment of the present invention, and as shown in fig. 7, on the basis of the foregoing embodiment, the method according to the present embodiment includes:
step 201, when first bit data output by an encoder is received, storing the first bit data in a plurality of parallel buffers row by row or column by column according to a preset arrangement order of the first bit data in an interleaving matrix.
Step 202, reading second bit data from the plurality of parallel buffers based on a preset reading mode, so that the second bit data is equal to data obtained by interleaving the first bit data.
And 203, performing bit deletion and bit splicing processing on the second bit data to obtain target data.
The method for storing the first bit data in the multiple parallel buffers row by row or column by column according to the preset arrangement order of the first bit data in the interleaving matrix may include the following steps:
in one embodiment, the bits in the first bit data may be stored in the plurality of parallel buffers row by row or column by column according to a bit arrangement order after the first bit data is input to the interleaving matrix. Specifically, in the prior art, the first bit data needs to be stored into the interleaving matrix row by row according to the bit arrangement sequence thereof, so that in order to facilitate directly reading the row or column of the first bit data in the interleaving matrix from the parallel buffer, this embodiment may store the first bit data into the parallel buffer according to the row or column of the first bit data in the interleaving matrix according to the method of writing the first bit data into the interleaving matrix in the prior art. However, in this method, before reading data from the parallel buffer, it is necessary to determine the position h of a bit at the position i in the first bit data after interleaving processing in advance based on the storage method and the interleaving method of the first bit data. Therefore, according to the position relation of the bits before and after interleaving, second bit data is directly extracted from the parallel cache, and the second bit data is equal to data obtained after interleaving of the first bit data.
In another embodiment, the bits in the first bit data may be stored in the plurality of parallel buffers row by row or column by column according to the arrangement order of the bits in the interleaving matrix after the first bit data is subjected to row transformation or column transformation in the interleaving matrix. Specifically, in the actual interleaving process, after the bit data is input into the interleaving matrix, column conversion or row conversion processing is performed on the interleaving matrix, and in this case, in order to improve the efficiency of reading data from the parallel buffer, the bit position in the first bit data may be adjusted first, so that the adjusted bit position is consistent with the position of the first bit data after column conversion or row conversion in the interleaving matrix, and then the adjusted data is input into the parallel buffer. Thus, the second bit data can be obtained only by reading the data in the order of writing the data into the parallel buffer.
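The second variant above, i.e. applying the column permutation while writing so that a plain read in write order already yields the interleaved sequence, can be sketched as follows. This assumes the 32-column pattern P of the sub-block interleaver and round-robin distribution over the caches; all names are illustrative.

```python
# Write the R x 32 matrix column by column in permuted column order,
# spreading consecutive bits round-robin over the parallel caches; a
# plain round-robin read then yields the interleaved sequence directly.

P = [0, 16, 8, 24, 4, 20, 12, 28, 2, 18, 10, 26, 6, 22, 14, 30,
     1, 17, 9, 25, 5, 21, 13, 29, 3, 19, 11, 27, 7, 23, 15, 31]

def store_pre_permuted(y, n_caches=4, C=32):
    """y: bit sequence with dummy bits already prepended (len(y) = R * C)."""
    R = len(y) // C
    ordered = [y[r * C + P[j]] for j in range(C) for r in range(R)]
    caches = [[] for _ in range(n_caches)]
    for k, bit in enumerate(ordered):
        caches[k % n_caches].append(bit)
    return caches

def read_in_write_order(caches):
    """Plain round-robin read; the output is already interleaved."""
    n = sum(len(c) for c in caches)
    return [caches[k % len(caches)][k // len(caches)] for k in range(n)]
```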
It will of course be appreciated by those skilled in the art that the above two approaches are merely illustrative and are not all embodiments of the invention. In fact, the storage method of the first bit data in the parallel cache may be set according to needs, and is not limited to a specific method.
In this embodiment, the first bit data is stored in the plurality of parallel buffers row by row or column by column according to the preset arrangement order of the first bit data in the interleaving matrix, and when reading, the second bit data is directly read from the parallel buffers according to the corresponding reading mode, so that the second bit data is equal to the data obtained by interleaving the first bit data, the time required by interleaving can be saved, and the rate matching efficiency can be improved.
Fig. 8 is a schematic structural diagram of a rate matching apparatus for a transmission channel according to an embodiment of the present invention. As shown in fig. 8, the rate matching apparatus 70 includes a communication interface 71, one or more processors 72 and a plurality of parallel caches 73. The one or more processors 72 operate independently or cooperatively; the communication interface 71 is connected to the plurality of parallel caches 73, and the plurality of parallel caches 73 are connected to the one or more processors 72. The communication interface 71 is configured to: when first bit data output by an encoder is received, store the first bit data in the plurality of parallel caches based on a preset storage manner. The processor 72 is configured to: read second bit data from the plurality of parallel caches based on a preset reading manner, so that the second bit data equals the data obtained after the first bit data is interleaved; and perform bit deletion and bit splicing processing on the second bit data to obtain target data.
In one possible design, when performing the operation of storing the first bit data in a plurality of parallel caches based on a preset storage manner, the communication interface 71 is configured to:
when first bit data output by an encoder is received, the first bit data are stored in a plurality of parallel caches row by row or column by column according to the preset arrangement sequence of the first bit data in an interleaving matrix.
In one possible design, when performing the operation of storing the first bit data row by row or column by column in a plurality of parallel caches according to the preset arrangement order of the first bit data in the interleaving matrix, the communication interface 71 is configured to:
when first bit data output by an encoder is received, the bits in the first bit data are stored in a plurality of parallel caches row by row or column by column, according to the order in which the bits are arranged after the first bit data is input into the interleaving matrix.
In one possible design, when performing the operation of storing the first bit data row by row or column by column in a plurality of parallel caches according to the preset arrangement order of the first bit data in the interleaving matrix, the communication interface 71 is configured to:
when first bit data output by an encoder is received, the bits in the first bit data are stored in a plurality of parallel caches according to the order in which the bits are arranged in the interleaving matrix after the first bit data undergoes row transformation or column transformation in the interleaving matrix.
In one possible design, when performing the operation of storing the first bit data row by row or column by column in a plurality of parallel caches according to the preset arrangement order of the first bit data in the interleaving matrix, the communication interface 71 is configured to:
when first bit data output by an encoder is received, dummy bit data is added to the first bit data so that the total number of bits of the first bit data plus the dummy bit data is a multiple of 32.
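The padding step just described can be sketched as follows. This is illustrative only: the embodiment fixes the multiple-of-32 total, while the placement of the dummy bits (prepended, as is common for block interleavers) and the `None` marker value are assumptions.

```python
# Hedged sketch of the dummy-bit padding step. The dummy value None
# marks bits that can be deleted later during bit deletion.
def pad_to_multiple_of_32(bits):
    n_dummy = (-len(bits)) % 32               # 0..31 dummy bits needed
    return [None] * n_dummy + bits, n_dummy   # prepending is an assumed convention

padded, n_dummy = pad_to_multiple_of_32([1] * 40)
assert len(padded) % 32 == 0 and n_dummy == 24
```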
In one possible design, the first bit data includes a first bit stream of systematic bits, a second bit stream of first parity bits, and a third bit stream of second parity bits; when performing the operation of storing the first bit data in a plurality of parallel caches based on a preset storage mode, the communication interface 71 is configured to:
store the first bit stream in two parallel first caches, store the second bit stream in a second cache, and store the third bit stream in a third cache, where the first caches, the second cache, and the third cache are caches arranged in parallel.
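A minimal sketch of this buffer layout follows: the systematic stream is split across two parallel caches while each parity stream gets its own cache, so all caches can be accessed in the same cycle. The even/odd split rule is an assumption; the embodiment does not fix how the systematic bits are divided between the two first caches.

```python
# Hedged sketch of the assumed three-stream buffer layout.
def store_streams(systematic, parity1, parity2):
    # even/odd split across the two first caches is an assumption
    first_caches = [systematic[0::2], systematic[1::2]]
    return {"first": first_caches, "second": parity1, "third": parity2}

layout = store_streams([0, 1, 2, 3], ["a", "b"], ["x", "y"])
assert layout["first"] == [[0, 2], [1, 3]]
```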
In one possible design, when performing the operation of reading second bit data from the plurality of parallel caches based on a preset reading mode, so that the second bit data equals the data obtained by interleaving the first bit data, the processor 72 is configured to:
read data in parallel from the first caches, the second cache, and the third cache based on the preset reading modes respectively corresponding to the first caches, the second cache, and the third cache.
In one possible design, before reading the second bit data from the plurality of parallel caches based on the preset reading mode, so that the second bit data equals the data obtained by interleaving the first bit data, the processor 72 is further configured to:
determine a reading mode for the data in the plurality of parallel caches based on the preset storage mode and a preset interleaving processing mode.
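How a reading mode can be derived from the storage mode and the interleaving mode may be sketched as a composition of two mappings; the concrete permutations below are hypothetical examples, not values from the patent.

```python
# Illustrative sketch: given how bits were stored (a storage mapping)
# and the desired interleaving order, derive the read-address sequence
# so that a single pass of reads yields interleaved data.
def derive_read_order(storage_order, interleave_order):
    """storage_order[i] = address where input bit i was written;
    interleave_order[k] = index of the input bit that must come k-th.
    Returns the address sequence to read, composing the two mappings."""
    return [storage_order[i] for i in interleave_order]

storage = [3, 0, 1, 2]     # bit i was written at address storage[i]
interleave = [2, 0, 3, 1]  # desired output: bits 2, 0, 3, 1
assert derive_read_order(storage, interleave) == [1, 3, 2, 0]
```

Reading address `storage_order[interleave_order[k]]` at step k returns exactly the bit the interleaver would have placed k-th, so no separate interleaving pass is needed.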
In one possible design, the encoder includes a Turbo encoder.
The apparatus provided by this embodiment can be used to execute the method of the above embodiments; its execution manner and beneficial effects are similar and are not described again here.
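The bit deletion and bit splicing mentioned in the apparatus description can be sketched minimally as follows. This is a hedged illustration: in a real system, bit deletion would also puncture to a target rate according to the redundancy version, which is not modeled here, and the `None` dummy marker is an assumption.

```python
# Hedged sketch of the final step: "bit deletion" drops the dummy
# placeholders, and "bit splicing" concatenates/truncates what remains
# into the target data of the requested length.
def prune_and_collect(read_bits, target_len):
    kept = [b for b in read_bits if b is not None]  # delete dummy bits
    return kept[:target_len]                        # splice/truncate to target

out = prune_and_collect([None, 1, 0, None, 1, 1], 3)
assert out == [1, 0, 1]
```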
An embodiment of the present invention further provides an unmanned aerial vehicle, including:
a body;
a wireless communication device mounted on the body for performing wireless communication;
a power system mounted on the body for providing power;
and the rate matching apparatus according to the above embodiments.
Here, the unmanned aerial vehicle includes an unmanned aircraft or an unmanned vehicle.
An embodiment of the present invention further provides a computer-readable storage medium including instructions that, when executed on a computer, cause the computer to perform the technical solutions of the above embodiments. The computer referred to in this embodiment is a device having processing capability; for example, it may be an unmanned aerial vehicle or a mobile phone, but is not limited to these. The computer-readable storage medium is a storage medium storing instructions executable by such a device.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit, if implemented in the form of a software functional unit, may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is obvious to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the above described functions. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiment, which is not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (20)

1. A method for rate matching of a transmission channel, comprising:
when first bit data output by an encoder is received, storing the first bit data in a plurality of parallel caches based on a preset storage mode;
reading second bit data from the plurality of parallel caches based on a preset reading mode to enable the second bit data to be equal to data obtained after the first bit data is subjected to interleaving processing;
and carrying out bit deletion and bit splicing processing on the second bit data to obtain target data.
2. The method according to claim 1, wherein the storing the first bit data in a plurality of parallel buffers based on a preset storage mode when receiving the first bit data output by the encoder comprises:
when first bit data output by an encoder is received, the first bit data are stored in a plurality of parallel caches row by row or column by column according to the preset arrangement sequence of the first bit data in an interleaving matrix.
3. The method according to claim 2, wherein the storing the first bit data row by row or column by column in a plurality of parallel buffers according to a preset arrangement order of the first bit data in the interleaving matrix when receiving the first bit data output by the encoder, comprises:
when first bit data output by an encoder is received, storing bits in the first bit data in a plurality of parallel caches row by row or column by column according to a bit arrangement order after the first bit data is input into an interleaving matrix.
4. The method according to claim 2, wherein the storing the first bit data row by row or column by column in a plurality of parallel buffers according to a preset arrangement order of the first bit data in the interleaving matrix when receiving the first bit data output by the encoder, comprises:
when first bit data output by an encoder is received, storing bits in the first bit data in a plurality of parallel caches according to the arrangement order of the bits in an interleaving matrix after the first bit data is subjected to row transformation or column transformation in the interleaving matrix.
5. The method according to claim 2, wherein the storing the first bit data row by row or column by column in a plurality of parallel buffers according to a preset arrangement order of the first bit data in the interleaving matrix when receiving the first bit data output by the encoder, comprises:
when first bit data output by an encoder is received, dummy bit data is added to the first bit data so that the sum of the number of bits of the first bit data and the dummy bit data is a multiple of 32.
6. The method of claim 1, wherein the first bit data comprises a first bit stream of systematic bits, a second bit stream of first parity bits, and a third bit stream of second parity bits;
when first bit data output by an encoder is received, storing the first bit data in a plurality of parallel caches based on a preset storage mode, including:
and storing the first bit stream in two parallel first caches in parallel, storing the second bit stream in a second cache, and storing the third bit stream in a third cache, wherein the first cache, the second cache and the third cache are caches arranged in parallel.
7. The method according to claim 6, wherein the reading second bit data from the plurality of parallel buffers based on a preset reading manner to make the second bit data equal to data obtained by interleaving the first bit data includes:
and reading data from the first cache, the second cache and the third cache in parallel based on a preset reading mode corresponding to the first cache, a preset reading mode corresponding to the second cache and a preset reading mode corresponding to the third cache.
8. The method according to claim 1, wherein before reading second bit data from the plurality of parallel buffers based on a preset reading manner so that the second bit data is equal to data obtained by interleaving the first bit data, the method further comprises:
and determining a reading mode of the data in the plurality of parallel caches based on a preset storage mode and a preset interleaving processing mode.
9. The method of any of claims 1-8, wherein the encoder comprises a Turbo encoder.
10. A rate matching device for a transmission channel, comprising: a communication interface, one or more processors, and a plurality of parallel caches;
the one or more processors operate individually or jointly; the communication interface is connected to the plurality of parallel caches, and the plurality of parallel caches are respectively connected to the one or more processors;
the communication interface is to: when first bit data output by an encoder is received, storing the first bit data in a plurality of parallel caches based on a preset storage mode;
the processor is configured to: reading second bit data from the plurality of parallel caches based on a preset reading mode to enable the second bit data to be equal to data obtained after the first bit data is subjected to interleaving processing; and carrying out bit deletion and bit splicing processing on the second bit data to obtain target data.
11. The apparatus according to claim 10, wherein the communication interface, when performing the operation of storing the first bit data in a plurality of parallel buffers based on a preset storage manner, is configured to:
when first bit data output by an encoder is received, the first bit data are stored in a plurality of parallel caches row by row or column by column according to the preset arrangement sequence of the first bit data in an interleaving matrix.
12. The apparatus according to claim 11, wherein the communication interface, when performing the operation of storing the first bit data in a plurality of parallel buffers row by row or column by column according to a preset arrangement order of the first bit data in the interleaving matrix, is configured to:
when first bit data output by an encoder is received, storing bits in the first bit data in a plurality of parallel caches row by row or column by column according to a bit arrangement order after the first bit data is input into an interleaving matrix.
13. The apparatus according to claim 11, wherein the communication interface, when performing the operation of storing the first bit data in a plurality of parallel buffers row by row or column by column according to a preset arrangement order of the first bit data in the interleaving matrix, is configured to:
when first bit data output by an encoder is received, storing bits in the first bit data in a plurality of parallel caches according to the arrangement sequence of the bits in an interleaving matrix after the first bit data is subjected to row transformation or column transformation in the interleaving matrix.
14. The apparatus according to claim 11, wherein the communication interface, when performing the operation of storing the first bit data in a plurality of parallel buffers row by row or column by column according to a preset arrangement order of the first bit data in the interleaving matrix, is configured to:
when first bit data output by an encoder is received, dummy bit data is added to the first bit data so that the sum of the number of bits of the first bit data and the dummy bit data is a multiple of 32.
15. The apparatus of claim 10, wherein the first bit data comprises a first bit stream of systematic bits, a second bit stream of first parity bits, and a third bit stream of second parity bits;
when the communication interface executes an operation of storing the first bit data in a plurality of parallel caches based on a preset storage mode, the communication interface is configured to:
and storing the first bit stream in two parallel first caches in parallel, storing the second bit stream in a second cache, and storing the third bit stream in a third cache, wherein the first cache, the second cache and the third cache are caches arranged in parallel.
16. The apparatus of claim 15, wherein the processor, when performing an operation of reading second bit data from the plurality of parallel buffers based on a preset reading manner to make the second bit data equal to data obtained by interleaving the first bit data, is configured to:
and reading data from the first cache, the second cache and the third cache in parallel based on a preset reading mode corresponding to the first cache, a preset reading mode corresponding to the second cache and a preset reading mode corresponding to the third cache.
17. The apparatus of claim 10, wherein the processor, before reading second bit data from the plurality of parallel buffers based on a preset reading manner to make the second bit data equal to data obtained by interleaving the first bit data, is further configured to:
and determining a reading mode of the data in the plurality of parallel caches based on a preset storage mode and a preset interleaving processing mode.
18. The apparatus of any of claims 10-17, wherein the encoder comprises a Turbo encoder.
19. An unmanned aerial vehicle, comprising:
a body;
a wireless communication device mounted on the body for performing wireless communication;
the power system is arranged on the machine body and used for providing power;
and a rate matching device as claimed in any one of claims 10 to 18.
20. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1-9.
CN201980005400.2A 2019-04-26 2019-04-26 Transmission channel rate matching method and device, unmanned aerial vehicle and storage medium Pending CN111316582A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/084632 WO2020215326A1 (en) 2019-04-26 2019-04-26 Rate matching method and device for transmission channel, unmanned aerial vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN111316582A true CN111316582A (en) 2020-06-19

Family

ID=71162803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980005400.2A Pending CN111316582A (en) 2019-04-26 2019-04-26 Transmission channel rate matching method and device, unmanned aerial vehicle and storage medium

Country Status (2)

Country Link
CN (1) CN111316582A (en)
WO (1) WO2020215326A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101895374A (en) * 2010-07-20 2010-11-24 华为技术有限公司 Method and device for velocity matching
CN102098126A (en) * 2009-12-15 2011-06-15 上海贝尔股份有限公司 Interleaving device, rating matching device and device used for block coding
CN102118217A (en) * 2010-01-04 2011-07-06 中兴通讯股份有限公司 Rate matching parallel processing method and device
CN103546232A (en) * 2012-07-11 2014-01-29 中兴通讯股份有限公司 Data processing method and data processing device
CN103684659A (en) * 2012-09-04 2014-03-26 中兴通讯股份有限公司 Velocity matching processing method and device in long term evolution system
CN108323228A (en) * 2017-09-06 2018-07-24 南通朗恒通信技术有限公司 A kind of user being used for low latency communication, the method and apparatus in base station
CN108631919A (en) * 2017-03-24 2018-10-09 华为技术有限公司 The speed matching method and equipment of polar code
US10103843B1 (en) * 2017-12-08 2018-10-16 Qualcomm Incorporated On the fly interleaving/rate matching and deinterleaving/de-rate matching for 5G NR
CN108809573A (en) * 2017-05-05 2018-11-13 华为技术有限公司 The method and apparatus for determining the QCL of antenna port

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112787762A (en) * 2021-04-12 2021-05-11 南京创芯慧联技术有限公司 Rate matching method and device for channel coded data
CN115603864A (en) * 2022-12-13 2023-01-13 南京创芯慧联技术有限公司(Cn) Rate matching method and device, and channel interleaving method and device
CN115603864B (en) * 2022-12-13 2023-02-17 南京创芯慧联技术有限公司 Rate matching method and device, and channel interleaving method and device

Also Published As

Publication number Publication date
WO2020215326A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
RU2459372C2 (en) Llr nulling, using bit map of demodulator to enhance efficiency of modem decoder
CN101800619B (en) Interleaver or deinterleaver method and device thereof based on block interleaver
CN111316582A (en) Transmission channel rate matching method and device, unmanned aerial vehicle and storage medium
JP6022085B2 (en) Method and apparatus for realizing multimode decoder
CN102414991B (en) Data rearrangement for decoder
CN101707510B (en) High-speed Turbo decoding method and device
CN109981117B (en) Four-mode forward error correction code processor
CN105874774A (en) Count table maintenance apparatus for maintaining count table during processing of frame and related count table maintenance method
CN102801981B (en) Multipath compressed kernel parallel encoding control method on basis of JPEG-LS (Joint Pho-tographic Experts Group-Lossless Standard) algorithm
CN101938330A (en) Multi-code rate Turbo encoder and storage resource optimization method thereof
CN103812510A (en) Decoding method and device
CN102857239A (en) LDPC (Low Density Parity Check) serial encoder and encoding method based on lookup table in CMMB (China Mobile Multimedia Broadcasting)
CN102868495B (en) Lookup table based LDPC (low-density parity-check) serial encoder and encoding method in near-earth communication
CN102468902B (en) Method for Turbo coding of rate match/de-rate match in LTE (long term evolution) system
CN1954503A (en) Turbo decoder input reordering
CN108270452A (en) A kind of Turbo decoders and interpretation method
CN111047037A (en) Data processing method, device, equipment and storage medium
CN102594369A (en) Quasi-cyclic low-density parity check code decoder based on FPGA (field-programmable gate array) and decoding method
CN103546232A (en) Data processing method and data processing device
CN102769506B (en) The de-interweaving method of a kind of rate de-matching and device
CN110710112A (en) Turbo coding method, Turbo encoder and unmanned aerial vehicle
CN103873188A (en) Parallel rate de-matching method and parallel rate de-matching device
CN110022158B (en) Decoding method and device
CN106712778A (en) Turbo decoding device and method
CA2817467C (en) Method and apparatus for decoding low-density parity-check codes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200619