WO2019218130A1 - Turbo coding method, Turbo encoder, and unmanned aerial vehicle - Google Patents

Turbo coding method, Turbo encoder, and unmanned aerial vehicle

Info

Publication number
WO2019218130A1
WO2019218130A1 PCT/CN2018/086799 CN2018086799W
Authority
WO
WIPO (PCT)
Prior art keywords
data
bit data
parallel
turbo
bit
Prior art date
Application number
PCT/CN2018/086799
Other languages
English (en)
French (fr)
Inventor
刘瑛
翟春华
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2018/086799 priority Critical patent/WO2019218130A1/zh
Priority to CN201880031265.4A priority patent/CN110710112A/zh
Publication of WO2019218130A1 publication Critical patent/WO2019218130A1/zh
Priority to US17/096,140 priority patent/US20210083691A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/27Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques
    • H03M13/2771Internal interleaver for turbo codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/27Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes using interleaving techniques
    • H03M13/2739Permutation polynomial interleaver, e.g. quadratic permutation polynomial [QPP] interleaver and quadratic congruence interleaver
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • G06F12/0817Cache consistency protocols using directory methods
    • G06F12/082Associative directories
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • G06F12/0884Parallel mode, e.g. in parallel with main memory or CPU
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2903Methods and arrangements specifically for encoding, e.g. parallel encoding of a plurality of constituent codes
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957Turbo codes and decoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6508Flexibility, adaptability, parametrability and configurability of the implementation
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6561Parallelized implementations
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/65Purpose and implementation aspects
    • H03M13/6563Implementations using multi-port memories

Definitions

  • the embodiments of the present invention relate to the field of communications technologies, and in particular, to a Turbo coding method, a Turbo encoder, and a drone.
  • In the prior art, data undergoes channel coding before uplink or downlink transmission.
  • Commonly used channel coding methods include Turbo coding.
  • During Turbo coding, the uplink or downlink data is input to the Turbo encoder serially.
  • The Turbo encoder includes two encoders: one directly encodes the serially input data, while the other is connected to an interleaver.
  • The serially input data is processed by the interleaver and then passed to that encoder for encoding; the coding rate of the existing Turbo code is low.
  • the embodiment of the invention provides a Turbo coding method, a Turbo encoder and a drone to improve the efficiency of Turbo coding.
  • A first aspect of the embodiments of the present invention provides a Turbo coding method, where the method includes: acquiring a code block for Turbo coding; storing the data in the code block, in blocks, in multiple parallel caches; and obtaining parallel data from the multiple parallel caches for Turbo coding.
  • a second aspect of the embodiments of the present invention provides a Turbo encoder, including: a communication interface, one or more processors, multiple parallel buffers, a first branch encoder, a second branch encoder, and an inner interleaver;
  • The one or more processors operate separately or in cooperation; the communication interface is connected to the processor and to the plurality of parallel caches; the plurality of parallel caches are respectively connected to the first branch encoder and the inner interleaver; and the inner interleaver is connected to the second branch encoder;
  • the communication interface is configured to: acquire a code block for Turbo coding
  • the processor is configured to: control the communication interface to store data in the code block in the plurality of parallel caches;
  • the first branch encoder is configured to: obtain parallel data from the plurality of parallel caches for Turbo coding;
  • the second branch encoder is configured to: obtain, by the inner interleaver, parallel data from the plurality of parallel caches for Turbo coding.
  • A third aspect of the embodiments of the present invention provides a drone, including: a fuselage; a wireless communication device mounted on the fuselage for wireless communication; a power system mounted on the fuselage for providing power; and the Turbo encoder provided in the second aspect above.
  • a fourth aspect of an embodiment of the present invention provides a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the Turbo encoding method as described in the first aspect above.
  • In the embodiments of the present invention, a code block for Turbo coding is acquired, the data in the code block is stored in blocks in a plurality of parallel caches, and parallel data is obtained from the plurality of parallel caches for Turbo coding.
  • Parallel storage and parallel reading of data during Turbo encoding are thus realized, improving the efficiency of Turbo coding. Compared with the conventional serial approach, the embodiments of the present invention can also reduce the number of intermediate variables generated by internal interleaving during Turbo coding, and can therefore reduce the cost of the application-specific integrated circuit (ASIC) in a Turbo encoder.
  • FIG. 1 is a schematic structural diagram of a Turbo encoder provided by the prior art
  • FIG. 2 is a schematic diagram of a communication scenario according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a Turbo coding method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of data storage of an 8-cache according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a Turbo coding method according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a Turbo coding method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a Turbo encoder according to an embodiment of the present invention.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component or an intervening component may be present.
  • FIG. 1 is a schematic structural diagram of a Turbo encoder provided by the prior art.
  • The existing Turbo encoder includes two 8-state branch encoders: a first branch encoder 11 and a second branch encoder 12, where the second branch encoder is connected to the inner interleaver 13 of the Turbo encoder.
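  • For concreteness, an 8-state branch encoder of this kind can be sketched as a recursive systematic convolutional (RSC) encoder. The polynomials below are those of the 3GPP LTE constituent code (feedback 1 + D² + D³, feedforward 1 + D + D³) and are an assumption for illustration; the patent itself does not specify them:

```python
def rsc_encode(bits):
    """8-state recursive systematic convolutional encoder (sketch).

    Assumes the LTE constituent code: feedback 1 + D^2 + D^3,
    feedforward 1 + D + D^3; trellis termination is omitted for brevity.
    Returns the parity sequence z_k for the input bits c_k.
    """
    s = [0, 0, 0]  # shift register: s[0] most recent
    parity = []
    for c in bits:
        fb = c ^ s[1] ^ s[2]   # register input via feedback 1 + D^2 + D^3
        z = fb ^ s[0] ^ s[2]   # parity output via feedforward 1 + D + D^3
        parity.append(z)
        s = [fb, s[0], s[1]]   # shift the register
    return parity
```

The systematic output is simply the input bits themselves; only the parity path is shown here.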
  • In a communication scenario, when the transport block to be transmitted is large, it needs to undergo code block segmentation to obtain a number of code blocks of length K+ or K-, where K+ and K- are positive integers.
  • For convenience, assume that the code blocks obtained after segmentation of the transport block are c_0, c_1, c_2, c_3, ..., c_{K-1}, where c_i is the i-th element from c_0 to c_{K-1}.
  • c_0, c_1, c_2, c_3, ..., c_{K-1} are serially input to the Turbo encoder.
  • The first branch encoder 11 directly encodes the serially input code block and outputs the encoded data z_k.
  • On the other branch, the inner interleaver interleaves the serially input code block.
  • The second branch encoder encodes based on the output data of the inner interleaver and outputs data z'_k, where the relationship between the input c_Π(i) and the output c'_i of the inner interleaver is c'_i = c_Π(i), with Π(i) = (f_1·i + f_2·i²) mod K.
  • The parameters f_1 and f_2 depend on the code block size K, and can be obtained from a preset correspondence table of K to f_1 and f_2.
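  • The table-driven quadratic permutation polynomial (QPP) interleaver described above can be sketched as follows. The K=40 entry below matches the 3GPP TS 36.212 correspondence table; the formula Π(i) = (f1·i + f2·i²) mod K is the standard QPP and is assumed here:

```python
# Illustrative one-entry subset of the K -> (f1, f2) correspondence table.
QPP_PARAMS = {40: (3, 10)}

def qpp_interleave(code_block):
    """Interleave a code block with the quadratic permutation polynomial.

    Output bit i is input bit pi(i), where pi(i) = (f1*i + f2*i*i) % K.
    """
    K = len(code_block)
    f1, f2 = QPP_PARAMS[K]
    return [code_block[(f1 * i + f2 * i * i) % K] for i in range(K)]
```

Because f1 is coprime to K and f2's prime factors divide K in the standardized tables, Π is a permutation, so interleaving only reorders bits and loses nothing.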
  • In this context, an embodiment of the present invention provides a new Turbo coding method: a code block for Turbo coding is acquired, and the data in the code block is stored in blocks in multiple parallel caches.
  • Parallel data is then obtained from the multiple parallel caches for Turbo coding.
  • Parallel storage and parallel reading of data during Turbo encoding are thus realized, improving the efficiency of Turbo coding. Compared with the conventional serial approach, this can also reduce the number of intermediate variables generated by internal interleaving during Turbo coding, and can therefore reduce the cost of the application-specific integrated circuit (ASIC) in a Turbo encoder.
  • FIG. 2 is a schematic diagram of a communication scenario according to an embodiment of the present invention.
  • the scenario includes an aircraft 20, a Turbo encoder 21 mounted on the aircraft 20, and a ground station 22.
  • The ground station 22 is a device having wireless communication capability and computing and/or processing capability; the device may specifically be a remote controller, a smartphone, a tablet computer, a laptop computer, a watch, a wristband, etc., or a combination thereof.
  • The aircraft 20 may specifically be an unmanned aerial vehicle with a wireless communication function, a helicopter, a manned fixed-wing aircraft, a hot air balloon, or the like. As shown in FIG. 2, the ground station 22 and the aircraft 20 are connected by a mobile communication network, such as a 4G or 5G mobile communication network, though not limited to 4G or 5G. When the aircraft 20 communicates with the ground station 22, the transmitted data is encoded using the Turbo coding method provided in this embodiment.
  • the Turbo coding method provided in this embodiment includes:
  • Step 101 Acquire a code block for Turbo coding.
  • Step 102 Store data in the code block into multiple parallel caches.
  • Step 103 Acquire parallel data from the plurality of parallel caches for Turbo coding.
  • The "code block for Turbo coding" refers to a code block obtained after code block segmentation of the transport block to be transmitted.
  • the number of parallel caches can be set as needed.
  • the explanation is made by taking 8 parallel caches as an example:
  • FIG. 4 is a schematic diagram of data storage across 8 caches according to an embodiment of the present invention. As shown in FIG. 4, eight caches are connected in parallel; the eight caches may be dual-port caches or single-port caches. This embodiment assumes that all eight caches are dual-port caches.
  • When the code block to be encoded is input into the Turbo encoder, the Turbo encoder stores the data in the code block in the 8 parallel buffers according to a preset storage strategy. The first branch encoder reads data from the 8 buffers, sorts the read data based on that storage strategy, and encodes the data stream obtained after sorting. On the other branch, the inner interleaver reorders the data stream obtained by the first branch's sorting based on a preset interleaving relationship, and the second branch encoder encodes the data stream generated by this reordering.
  • The strategy for storing data in the parallel caches can be set as needed. For example, in one possible design, the storage policy stores the first bit of data in the code block in the first cache, the second bit in the second cache, and so on, with the eighth bit stored in the eighth cache; the procedure then repeats cyclically from the ninth bit onward.
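  • A minimal sketch of this first storage design (round-robin distribution), in Python purely for illustration since the patent targets hardware caches:

```python
def store_round_robin(code_block, n_caches=8):
    """Distribute bits cyclically: bit i goes to cache i % n_caches."""
    caches = [[] for _ in range(n_caches)]
    for i, bit in enumerate(code_block):
        caches[i % n_caches].append(bit)
    return caches

def read_back(caches):
    """Reverse of the storage: read position p of cache 1..8, then p+1, ..."""
    out = []
    for p in range(max(len(c) for c in caches)):
        for cache in caches:
            if p < len(cache):
                out.append(cache[p])
    return out
```

Reading back in the reverse of the storage order recovers the original serial bit sequence, which is what the first branch encoder consumes.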
  • In another possible design, the storage policy first performs block processing on the code block so that each resulting data block contains 8 bits of data, and then stores the data of each data block in a manner similar to the first design.
  • Alternatively, a predetermined amount of data (e.g., 2 bits or 3 bits) from the code block may be stored in each cache in turn.
  • For data reading and data sorting, the reverse of the chosen design is used.
  • the above two possible designs are for illustrative purposes only and are not intended to be limiting of the invention.
  • In this embodiment, the data in the code block is stored in blocks in a plurality of parallel caches, and parallel data is obtained from the plurality of parallel caches for Turbo coding.
  • Parallel storage and parallel reading of data during Turbo encoding are thus realized, improving the efficiency of Turbo coding; compared with the conventional serial approach, this embodiment can also reduce the number of intermediate variables generated by internal interleaving during Turbo coding, and can therefore reduce the cost of the application-specific integrated circuit (ASIC) in a Turbo encoder.
  • FIG. 5 is a flowchart of a Turbo coding method according to an embodiment of the present invention. As shown in FIG. 5, based on the embodiment of FIG. 3, the method includes:
  • Step 201 Acquire a code block for Turbo coding.
  • Step 202: Based on a preset association between bit order and cache, store each bit of data in the code block in the corresponding cache according to its position in the code block.
  • Step 203: Obtain bit data from the multiple parallel caches, and reorder the acquired bit data based on the association between bit order and cache.
  • Step 204: Perform Turbo coding based on the reordered bit data.
  • The association between bit order and cache specifies which of the multiple parallel caches stores the bit data at each position in the code block.
  • Eight parallel caches are again taken as an example.
  • For instance, the bits at positions 0, 8, 16, ... in the code block can be stored in the first cache, the bits at positions 1, 9, 17, ... in the second cache, and so on, with the eighth cache storing the bits at positions 7, 15, 23, ....
  • Alternatively, the association between bit order and cache may be set irregularly; that is, the preset order of the bit data stored in each cache follows no pattern and is entirely hard-specified. For example, the first cache may store the bits at positions 0, 3, 11, etc. of the code block, and the second cache the bits at positions 1, 4, 16, etc.; the remaining caches are not enumerated one by one here.
  • the above examples are merely illustrative and not limiting of the invention.
  • When the association between bit order and cache follows the first design in the example above, one bit of data can be read from each cache in parallel, and the read bits are sorted in the order of the first cache to the eighth cache; for the second, third, ..., nth reads, the data read each time is likewise sorted in the order of the first cache to the eighth cache, and the data read in the first, second, ..., nth reads is concatenated to form a data stream.
  • In other words, the data is read in the reverse of the manner in which it was stored, and the read data is reordered based on the storage manner, i.e., the association between bit order and cache.
  • The first branch encoder can encode directly based on the bit data obtained after the above reordering.
  • The inner interleaver on the second branch, however, must reorder these bits again based on the preset interleaving relationship before the second branch encoder performs Turbo encoding on the result; the interleaving relationship here may specifically be the relationship between the bit ordering before interleaving and the bit ordering after interleaving in this embodiment.
  • r(8, N) = (r(0, N) + 16·N·f2) mod K;
  • The interleave calculation yields eight interleave addresses, denoted N1 to N8, and the bit data read from addresses N1 to N8 is the input data of the second branch encoder.
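  • A hedged sketch of producing eight interleave addresses per cycle without evaluating the quadratic polynomial from scratch. This uses the standard QPP increment identity Π(i+1) = (Π(i) + f1 + f2·(2i+1)) mod K rather than the patent's exact r(·) shorthand, so only modular additions are needed per address:

```python
def qpp_addresses(K, f1, f2, width=8):
    """Yield the batches of interleave addresses (N1..N8) cycle by cycle.

    pi(0) = 0, and the per-step increment g(i) = f1 + f2*(2*i + 1)
    itself grows by 2*f2 each step, so each address costs two additions mod K.
    """
    pi, g = 0, (f1 + f2) % K
    cycle = []
    for _ in range(K):
        cycle.append(pi)
        if len(cycle) == width:
            yield cycle
            cycle = []
        pi = (pi + g) % K
        g = (g + 2 * f2) % K
    if cycle:
        yield cycle  # final partial batch when K is not a multiple of width
```

This incremental form is why a hardware interleaver can emit one address per cache per clock with no multiplier.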
  • In this embodiment, based on the preset association between bit order and cache, the bit data in the acquired code block is stored according to its position in the code block, and the data is then Turbo coded, so that fast storage and reading of the data to be encoded can be realized conveniently and quickly, improving the efficiency of Turbo coding.
  • FIG. 6 is a flowchart of a Turbo coding method according to an embodiment of the present invention. As shown in FIG. 6, on the basis of the embodiment of FIG. 3, the method includes:
  • Step 301 Acquire a code block for Turbo coding.
  • Step 302: Perform block processing on the data in the code block based on the number of caches, so that each resulting data block contains as many bits as there are caches.
  • Step 303: Based on a preset storage order of bit data across the multiple caches, store the bit data of each data block in the multiple parallel caches in sequence.
  • Step 304 Acquire bit data from the same location of the multiple parallel caches, and reorder the acquired bit data based on the storage order.
  • Step 305 Perform Turbo coding based on the reordered bit data.
  • Taking 8 parallel caches as an example, every 8 consecutive bits are grouped into one data block according to the order of the bits in the code block.
  • The bits in each data block are then stored in the eight caches in the preset storage order.
  • For example, the first bit may be stored in the first cache, the second bit in the second cache, ..., and the eighth bit in the eighth cache; of course, this is only an example and not the only possibility.
  • The data of each data block is stored, in sequence, at the same position of the corresponding caches.
  • For example, the bit data of the first data block is stored at the first position of the respective caches, the bit data of the second data block at the second position, and so on, until data storage is complete.
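  • A sketch of this second design, where data block p occupies position p of every cache. The Python lists stand in for cache word addresses, which is an assumption for illustration; in hardware each "position" would be a RAM word index:

```python
def store_by_block(code_block, n_caches=8):
    """Bit j of data block p is written to cache j at position p."""
    caches = [[] for _ in range(n_caches)]
    for p in range(0, len(code_block), n_caches):
        block = code_block[p:p + n_caches]
        for j, bit in enumerate(block):
            caches[j].append(bit)
    return caches

def read_block(caches, p):
    """Reading position p of every cache recovers data block p in one access."""
    return [cache[p] for cache in caches if p < len(cache)]
```

Note that this layout coincides with the round-robin design bit for bit; its distinctive feature is that one same-position read across all caches yields a whole data block in parallel.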
  • When reading data from the eight parallel buffers, bit data is read from the same position of the eight buffers each time, and a first ordering number of each read bit is determined based on the storage order of the bit data; the bits are sorted by their first ordering numbers to generate a first data stream.
  • The inner interleaver determines, according to a preset interleaving relationship, the second ordering number corresponding to the first ordering number of each bit acquired from the buffers, and sorts the bits by their second ordering numbers to generate a second data stream.
  • The first branch encoder encodes based on the first data stream, and the second branch encoder encodes based on the second data stream.
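  • The two branch feeds can be sketched end to end. The QPP form Π(i) = (f1·i + f2·i²) mod K is assumed for the interleaving relationship, and the names here are illustrative, not from the patent:

```python
def branch_streams(code_block, f1, f2):
    """Build the two data streams fed to the branch encoders.

    The first stream is the natural-order bit sequence; the second is the
    interleaved sequence, where stream bit i is input bit pi(i), with
    pi(i) = (f1*i + f2*i*i) % K (assumed QPP interleaving relationship).
    """
    K = len(code_block)
    first_stream = list(code_block)                      # first branch encoder input
    second_stream = [code_block[(f1 * i + f2 * i * i) % K]
                     for i in range(K)]                  # second branch encoder input
    return first_stream, second_stream
```

Both streams are permutations of the same code block, which is why the parallel caches can serve the two branches simultaneously from one stored copy.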
  • FIG. 7 is a schematic structural diagram of a turbo encoder according to an embodiment of the present invention.
  • As shown in FIG. 7, the Turbo encoder 70 includes: a communication interface 71, one or more processors 72, multiple parallel caches 73, a first branch encoder 74, a second branch encoder 75, and an inner interleaver 76. The one or more processors 72 operate separately or in cooperation; the communication interface 71 is connected to the processor 72 and to the multiple parallel caches 73; the multiple parallel caches 73 are respectively connected to the first branch encoder 74 and the inner interleaver 76; and the inner interleaver 76 is connected to the second branch encoder 75. The communication interface 71 is configured to acquire a code block for Turbo coding; the processor 72 is configured to control the communication interface 71 to store the data in the code block, in blocks, in the multiple parallel caches; the first branch encoder 74 is configured to obtain parallel data from the multiple parallel caches 73 for Turbo coding; and the second branch encoder 75 is configured to obtain parallel data from the multiple parallel caches 73, via the inner interleaver 76, for Turbo coding.
  • Optionally, when the processor 72 controls the communication interface 71 to store the data in the code block in the multiple parallel caches, it is specifically configured to store the data in the code block in the multiple parallel caches based on a preset storage policy.
  • Optionally, when the processor 72 stores the data in the code block in the multiple parallel caches based on the preset storage policy, it is specifically configured to store each bit of data in the code block in the corresponding cache, according to its position in the code block, based on a preset association between bit order and cache.
  • When the first branch encoder 74 obtains parallel data from the multiple parallel caches for Turbo coding, bit data is acquired from the multiple parallel caches; the processor 72 is configured to reorder the bit data acquired by the first branch encoder based on the association between bit order and cache, and the first branch encoder 74 performs Turbo encoding based on the reordered bit data.
  • When the second branch encoder 75 obtains parallel data from the multiple parallel caches 73 for Turbo coding through the inner interleaver 76, the inner interleaver 76 is specifically configured to reorder the reordered bit data based on a preset interleaving relationship, and the second branch encoder 75 is configured to perform Turbo encoding based on the resulting bit data.
  • Optionally, when the processor 72 stores the data in the code block in the multiple parallel caches based on the preset storage policy, it is specifically configured to: perform block processing on the data in the code block based on the number of caches, so that each resulting data block contains as many bits as there are caches; and, based on a preset storage order of bit data across the multiple caches, store the bit data of each data block in the multiple parallel caches in sequence.
  • Specifically, when the processor 72 sequentially stores the bit data of the data blocks in the multiple parallel caches based on the preset storage order, the bit data on each data bit of a data block is stored at the same position of the corresponding cache.
  • When the first branch encoder 74 obtains parallel data from the multiple parallel caches for Turbo coding, bit data is acquired from the same position of the multiple parallel caches; the processor 72 is configured to reorder the acquired bit data based on the storage order, and the first branch encoder 74 performs Turbo coding based on the reordered bit data.
  • When the processor 72 reorders the acquired bit data based on the storage order, it is specifically configured to: determine, based on the storage order, a first ordering number for the bit data acquired from each cache; and sort the bits by their first ordering numbers to generate a first data stream.
  • When the second branch encoder 75 obtains parallel data from the multiple parallel caches for Turbo encoding through the inner interleaver, the inner interleaver 76 is specifically configured to: determine, based on a preset interleaving relationship, the second ordering number corresponding to the first ordering number of the bit data acquired from each cache; and sort the bits by their second ordering numbers to generate a second data stream.
  • The second branch encoder 75 is specifically configured to perform Turbo coding based on the second data stream.
  • An embodiment of the invention also provides a drone, the drone comprising: a fuselage; a wireless communication device mounted on the fuselage for wireless communication; a power system mounted on the fuselage for providing power; and the Turbo encoder described above.
  • the drone includes an unmanned aerial vehicle or an unmanned vehicle.
  • An embodiment of the invention further provides a computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to execute the technical solutions of the embodiments above.
  • The computer referred to in this embodiment is a device with arithmetic processing capability, for example a drone or a mobile phone, but is not limited to drones and mobile phones.
  • A computer-readable storage medium is a storage medium storing instructions executable by such a device.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • There may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • The above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods of the various embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

A Turbo coding method, a Turbo encoder, and an unmanned aerial vehicle. The method acquires a code block for Turbo coding, stores the data in the code block in blocks in multiple parallel caches (101), and obtains parallel data from the multiple parallel caches for Turbo coding (103). Parallel storage and parallel reading of data during Turbo encoding are thus realized, improving the efficiency of Turbo coding; compared with the conventional serial approach, this also reduces the number of intermediate variables generated by internal interleaving during Turbo coding, and can therefore reduce the cost of the application-specific integrated circuit (ASIC) in the Turbo encoder.

Description

Turbo Encoding Method, Turbo Encoder, and Unmanned Aerial Vehicle. Technical Field
Embodiments of the present invention relate to the field of communication technologies, and in particular, to a Turbo encoding method, a Turbo encoder, and an unmanned aerial vehicle.
Background
In the prior art, data undergoes channel coding before uplink or downlink transmission, and Turbo coding is one of the most commonly used channel coding schemes.
During Turbo encoding, uplink or downlink data is fed into the Turbo encoder serially. The Turbo encoder contains two constituent encoders: one encodes the serially input data directly, while the other is connected to an interleaver and encodes the serially input data after it has been processed by the interleaver. The encoding rate of existing Turbo encoding is therefore low.
Summary
Embodiments of the present invention provide a Turbo encoding method, a Turbo encoder, and an unmanned aerial vehicle to improve the efficiency of Turbo encoding.
A first aspect of the embodiments of the present invention provides a Turbo encoding method, including:
obtaining a code block for Turbo encoding;
storing the data of the code block in blocks across multiple parallel buffers; and
fetching parallel data from the multiple parallel buffers for Turbo encoding.
A second aspect of the embodiments of the present invention provides a Turbo encoder, including: a communication interface, one or more processors, multiple parallel buffers, a first branch encoder, a second branch encoder, and an internal interleaver;
the one or more processors work individually or in cooperation; the communication interface is connected to the processor; the communication interface is connected to the multiple parallel buffers; the multiple parallel buffers are connected to the first branch encoder and to the internal interleaver, respectively; and the internal interleaver is connected to the second branch encoder;
the communication interface is configured to: obtain a code block for Turbo encoding;
the processor is configured to: control the communication interface to store the data of the code block in blocks across the multiple parallel buffers;
the first branch encoder is configured to: fetch parallel data from the multiple parallel buffers for Turbo encoding; and
the second branch encoder is configured to: fetch parallel data from the multiple parallel buffers via the internal interleaver for Turbo encoding.
A third aspect of the embodiments of the present invention provides an unmanned aerial vehicle, including:
a fuselage;
a wireless communication device mounted on the fuselage and configured to perform wireless communication;
a power system mounted on the fuselage and configured to provide power; and
the Turbo encoder provided in the second aspect above.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform the Turbo encoding method of the first aspect.
In the embodiments of the present invention, a code block for Turbo encoding is obtained, the data of the code block is stored in blocks across multiple parallel buffers, and parallel data is fetched from the multiple parallel buffers for Turbo encoding. Data is thus stored and read in parallel during Turbo encoding, which improves the efficiency of Turbo encoding. Compared with the conventional serial approach, the embodiments of the present invention reduce the number of intermediate variables produced by internal interleaving during Turbo encoding and can therefore lower the cost of the application-specific integrated circuit (ASIC) in the Turbo encoder.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a Turbo encoder provided in the prior art;
FIG. 2 is a schematic diagram of a communication scenario provided by an embodiment of the present invention;
FIG. 3 is a flowchart of a Turbo encoding method provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of data storage with eight buffers provided by an embodiment of the present invention;
FIG. 5 is a flowchart of a Turbo encoding method provided by an embodiment of the present invention;
FIG. 6 is a flowchart of a Turbo encoding method provided by an embodiment of the present invention; and
FIG. 7 is a schematic structural diagram of a Turbo encoder provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that when a component is referred to as being "fixed to" another component, it may be directly on the other component or an intermediate component may be present. When a component is considered to be "connected to" another component, it may be directly connected to the other component or an intermediate component may be present at the same time.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments and the features in the embodiments described below may be combined with each other where no conflict arises.
FIG. 1 is a schematic structural diagram of a Turbo encoder provided in the prior art. As shown in FIG. 1, the existing Turbo encoder includes two 8-state branch encoders: a first branch encoder 11 and a second branch encoder 12, where the second branch encoder is connected to the internal interleaver 13 of the Turbo encoder. In a communication scenario, when the transport block actually to be sent is large, the transport block undergoes code block segmentation to obtain several code blocks of length K+ or K-, where K+ and K- are positive integers. For convenience of description, assume the bit sequence obtained after segmenting the transport block is c_0, c_1, c_2, c_3, ..., c_{K-1}, where c_i is the i-th element of c_0 to c_{K-1}; then c_0, c_1, c_2, c_3, ..., c_{K-1} are input to the Turbo encoder serially. In the Turbo encoder, on one branch the first branch encoder 11 encodes directly based on the serially input bits and outputs the encoded data z_k; on the other branch, the internal interleaver interleaves the serially input bits, and the second branch encoder encodes based on the output of the internal interleaver and outputs data z'_k. The relationship between the interleaver input c_{Π(i)} and output c'_i is as follows:
c'_i = c_{Π(i)}, i = 0, 1, ..., (K-1)
where the output index i and the input index Π(i) satisfy the following quadratic form:
Π(i) = (f1·i + f2·i^2) mod K
where the parameters f1 and f2 depend on the code block size K, and f1 and f2 can be obtained from a preset lookup table mapping K to f1 and f2.
In view of the above problems in the prior art, embodiments of the present invention provide a new Turbo encoding method. The method obtains a code block for Turbo encoding, stores the data of the code block in blocks across multiple parallel buffers, and fetches parallel data from the multiple parallel buffers for Turbo encoding. Data is thus stored and read in parallel during Turbo encoding, which improves the efficiency of Turbo encoding. Compared with the conventional serial approach, the embodiments of the present invention reduce the number of intermediate variables produced by internal interleaving during Turbo encoding and can therefore lower the cost of the application-specific integrated circuit (ASIC) in the Turbo encoder.
The embodiments of the present invention are described in detail below with reference to the accompanying drawings.
A first aspect of the embodiments of the present invention provides a Turbo encoding method, which may be performed by a Turbo encoder installed in a device with a wireless communication function; this embodiment takes an aircraft as an example. FIG. 2 is a schematic diagram of a communication scenario provided by an embodiment of the present invention. As shown in FIG. 2, the scenario includes an aircraft 20, a Turbo encoder 21 carried on the aircraft 20, and a ground station 22. The ground station 22 is a device with wireless communication capability as well as computing and/or processing capability; it may specifically be a remote controller, a smartphone, a tablet computer, a laptop computer, a watch, a wristband, or the like, or a combination thereof. The aircraft 20 may specifically be an unmanned aerial vehicle, a helicopter, a manned fixed-wing aircraft, a hot-air balloon, or the like with a wireless communication function. As shown in FIG. 2, the ground station 22 and the aircraft 20 are connected through a mobile communication network (for example, a 4G or 5G mobile communication network, but not limited to 4G or 5G). When the aircraft 20 communicates with the ground station 22, the data to be sent is encoded using the Turbo encoding method provided by this embodiment.
FIG. 3 is a flowchart of a Turbo encoding method provided by an embodiment of the present invention. As shown in FIG. 3, the Turbo encoding method provided by this embodiment includes:
Step 101: obtain a code block for Turbo encoding.
Step 102: store the data of the code block in blocks across multiple parallel buffers.
Step 103: fetch parallel data from the multiple parallel buffers for Turbo encoding.
In this embodiment, a "code block for Turbo encoding" refers to a code block obtained after the transport block to be transmitted has undergone code block segmentation.
In this embodiment, the number of parallel buffers may be set as needed. For ease of understanding, eight parallel buffers are taken as an example below:
FIG. 4 is a schematic diagram of data storage with eight buffers provided by an embodiment of the present invention. As shown in FIG. 4, the eight buffers are connected in parallel. The eight buffers may be dual-port buffers or single-port buffers; this embodiment assumes that all eight are dual-port buffers.
When a code block to be encoded is input to the Turbo encoder, the Turbo encoder stores the data of the code block in blocks across the eight parallel buffers according to a preset storage strategy. The first branch encoder reads data from the eight buffers, orders the read data based on the above storage strategy, and encodes the data stream obtained after ordering. On the other branch, the internal interleaver reorders, based on a preset interleaving relationship, the data stream obtained by the first-branch ordering, and the second branch encoder encodes based on the data stream generated after reordering. The storage strategy for data in the parallel buffers may be set as needed. For example, in one possible design, the strategy may be to store the first bit of the code block in the first buffer, the second bit in the second buffer, and so on, with the eighth bit stored in the eighth buffer, and the above storage process then repeats cyclically starting from the ninth bit. In another possible design, the strategy may be to first partition the code block into blocks so that each resulting data block contains eight bits, and then store the bits of each data block in a manner similar to the first possible design. In yet another possible design, on the basis of the first possible design, a preset amount of data (for example, 2 bits or 3 bits) may be stored in the same buffer each time. Correspondingly, during data reading and ordering, the inverse of the above designs is used to read and order the data. Of course, these possible designs are only illustrative examples and are not the only limitation of the present invention.
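The first possible design above (one bit per buffer, cycling through the eight buffers, with reading as the inverse process) can be sketched as follows; Python lists stand in for the hardware buffers, and the function names are illustrative only:

```python
# Round-robin storage of a code block across N parallel buffers:
# bit i goes to buffer i % N; reading back visits the buffers in the
# same order (the inverse process) to restore the original bit order.

def store_parallel(code_block, num_buffers=8):
    """Distribute the bits of a code block across num_buffers buffers."""
    buffers = [[] for _ in range(num_buffers)]
    for i, bit in enumerate(code_block):
        buffers[i % num_buffers].append(bit)
    return buffers

def read_parallel(buffers):
    """Inverse of store_parallel: reassemble the original bit order."""
    out = []
    depth = max(len(b) for b in buffers)
    for pos in range(depth):
        for b in buffers:
            if pos < len(b):
                out.append(b[pos])
    return out

block = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
assert read_parallel(store_parallel(block)) == block
```

In hardware, one read cycle can fetch one bit from each of the eight buffers in parallel, so eight bits of the code block become available per cycle instead of one.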
In this embodiment, a code block for Turbo encoding is obtained, the data of the code block is stored in blocks across multiple parallel buffers, and parallel data is fetched from the multiple parallel buffers for Turbo encoding. Data is thus stored and read in parallel during Turbo encoding, which improves the efficiency of Turbo encoding. Compared with the conventional serial approach, this embodiment reduces the number of intermediate variables produced by internal interleaving during Turbo encoding and can therefore lower the cost of the application-specific integrated circuit (ASIC) in the Turbo encoder.
FIG. 5 is a flowchart of a Turbo encoding method provided by an embodiment of the present invention. As shown in FIG. 5, on the basis of the embodiment of FIG. 3, the method includes:
Step 201: obtain a code block for Turbo encoding.
Step 202: based on a preset association between bit order and buffers, store the bits of the code block in the corresponding buffers according to their order in the code block.
Step 203: fetch bits from the multiple parallel buffers, and reorder the fetched bits based on the association between bit order and buffers.
Step 204: perform Turbo encoding based on the reordered bits.
In this embodiment, the "association between bit order and buffers" specifies which positions of the code block each of the multiple parallel buffers stores. For ease of understanding, eight parallel buffers are again taken as an example. In one possible design, the first buffer may store the bits at positions 0, 8, ..., i+8, ... of the code block, the second buffer the bits at positions 1, 9, ..., i+8, ..., and so on, with the eighth buffer storing the bits at positions 7, 15, ..., i+8, .... Alternatively, in another possible design, the association between bit order and buffers may be set irregularly; that is, the bit positions assigned to each buffer follow no pattern and are specified entirely by hand. For example, the first buffer may be set to store the bits at positions 0, 3, 11, and so on of the code block, and the second buffer the bits at positions 1, 4, 16, and so on; further examples are omitted here. Of course, the above examples are illustrative only and are not the only limitation of the present invention.
Further, when reading data from the multiple parallel buffers and reordering the read data, this embodiment still relies on the association between bit order and buffers described above. For example, when the association between bit order and buffers is the first design in the example above, one read operation may read one bit from each buffer in parallel and order the read bits in the order of the first buffer to the eighth buffer; then a second, third, ..., n-th read is performed, with the bits of each read ordered in the order of the first buffer to the eighth buffer, and the data of the first, second, ..., n-th reads concatenated to form a data stream. Of course, this description takes only the first design in the example above as an example and is not the only limitation of the present invention. In fact, when the association between bit order and buffers takes another form, a similar approach can be used: read the data in the manner inverse to how it was stored, and reorder the read data based on the storage manner, that is, the above association between bit order and buffers.
Further, during Turbo encoding, on one branch the first branch encoder can encode directly based on the reordered bits obtained above. On the other branch, the internal interleaver reorders the above reordered bits again based on a preset interleaving relationship, and the second branch encoder then performs Turbo encoding based on the re-reordered bits. In this embodiment, the above interleaving relationship may specifically be the relationship between the bit order before interleaving and the bit order after interleaving.
The interleaving algorithm of the internal interleaver is described below, again taking the first design in the example above:
Assume that, after the fetched bits have been reordered based on the association between bit order and buffers, the interleaved position of the i-th bit is f(i). The preset interleaving relationship can then be expressed as:
f(i) = (f1*i + f2*i*i) mod K, where K is the data length.
Then, for N = 0..7 and i = 0, 8, 16, 24, ...:
f(i+N) = (f1*(i+N) + f2*(i+N)*(i+N)) mod K
= ((f1*i + f2*i*i) + (f1*N + 2*i*N*f2 + f2*N*N)) mod K
= ((f(i) mod K) + ((f1*N + 2*i*N*f2 + f2*N*N) mod K)) mod K
Let g(i,N) = (f1*N + 2*i*N*f2 + f2*N*N) mod K; then
f(i+N) = (f(i) + g(i,N)) mod K
Let const1(N) = (f1*N + f2*N*N) mod K and r(i,N) = (2*i*N*f2) mod K; then
g(i,N) = (const1(N) + r(i,N)) mod K
f(i+N) = (f(i) + g(i,N)) mod K
= (f(i) + ((const1(N) + r(i,N)) mod K)) mod K
From r(i,N) = (2*i*N*f2) mod K, we obtain:
r(0,N) = 0
r(8,N) = (r(0,N) + 16*N*f2) mod K;
r(16,N) = (r(8,N) + 16*N*f2) mod K;
...
r(i+8,N) = (r(i,N) + ((2*8*N*f2) mod K)) mod K
= (r(i,N) + const2(N)) mod K
It can be seen that if const1(N) and const2(N) are computed in advance (both const1(N) and const2(N) are less than K), then each evaluation of f(i+N) = (f(i) + const1(N) + r(i,N)) mod K amounts to two additions, followed by comparing the sum with K and 2*K and subtracting whichever of K or 2*K is closest to the computed result, which yields the exact value of f(i+N).
In this embodiment there are eight parallel buffers, that is, N = 0 to 7, so based on the above expression for r(i,N) the computation proceeds as follows:
Initially, r(0,0) = 0;
from r(0,0), r(0,1) to r(0,8) are computed, and r(8,0) = r(0,8);
from r(8,0), r(8,1) to r(8,8) are computed, and r(16,0) = r(8,8);
from r(16,0), r(16,1) to r(16,8) are computed, and r(24,0) = r(16,8); and so on.
The interleaving computation yields eight interleaved addresses, denoted N1 to N8; the bits read out from the interleaved addresses N1 to N8 are the input data of the second branch encoder.
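The incremental address computation above can be modeled in software as follows. This is an illustrative sketch rather than the ASIC implementation: K, f1, and f2 are taken as given (for example from the K-to-(f1, f2) lookup table mentioned earlier), the group width of eight matches the eight parallel buffers of this embodiment, and a direct evaluation of f(i) is used only to check the result:

```python
# Incremental computation of the interleaved addresses, eight per group:
#   f(i+N)   = (f(i) + const1(N) + r(i,N)) mod K
#   r(i+8,N) = (r(i,N) + const2(N)) mod K
# so each address costs two additions plus conditional subtractions,
# with const1(N) and const2(N) precomputed.

def interleave_addresses(K, f1, f2, P=8):
    """Yield [f(i), ..., f(i+P-1)] for i = 0, P, 2P, ... (K a multiple of P)."""
    const1 = [(f1 * N + f2 * N * N) % K for N in range(P + 1)]
    const2 = [(2 * P * N * f2) % K for N in range(P + 1)]
    r = [0] * (P + 1)          # r(0, N) = 0 for every N
    f_i = 0                    # f(0) = 0
    for i in range(0, K, P):
        group = []
        for N in range(P + 1):
            a = f_i + const1[N] + r[N]   # two additions; each term < K
            while a >= K:                # compare with K and 2*K, then reduce
                a -= K
            group.append(a)
        yield group[:P]                  # the P addresses f(i) .. f(i+P-1)
        f_i = group[P]                   # f(i+P), the base of the next group
        r = [(r[N] + const2[N]) % K for N in range(P + 1)]

# Check against the direct formula for the example parameters K=40, f1=3, f2=10:
addrs = [a for g in interleave_addresses(40, 3, 10) for a in g]
assert addrs == [(3 * i + 10 * i * i) % 40 for i in range(40)]
```

Because every term fed into the adder is already reduced below K, the sum never exceeds 3K, so at most two subtractions are needed per address; this is what makes the scheme cheap in hardware.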
Of course, the above example is illustrative only and is not the only limitation of the present invention.
In this embodiment, after a code block for Turbo encoding is obtained, the bits of the obtained code block are stored in the corresponding buffers according to their order in the code block, based on a preset association between bit order and buffers. During encoding, bits are fetched from the multiple parallel buffers, the fetched bits are reordered based on the association between bit order and buffers, and Turbo encoding is performed based on the reordered bits. The data to be encoded can thus be stored and read quickly and conveniently, improving the efficiency of Turbo encoding.
FIG. 6 is a flowchart of a Turbo encoding method provided by an embodiment of the present invention. As shown in FIG. 6, on the basis of the embodiment of FIG. 3, the method includes:
Step 301: obtain a code block for Turbo encoding.
Step 302: partition the data of the code block into blocks based on the number of buffers, so that each resulting data block contains as many bits as there are buffers.
Step 303: store the bits of each data block in the multiple parallel buffers in sequence, based on a preset storage order of bits in the multiple buffers.
Step 304: fetch bits from the same position of the multiple parallel buffers, and reorder the fetched bits based on the storage order.
Step 305: perform Turbo encoding based on the reordered bits.
Again taking eight parallel buffers as an example, when partitioning the data of the code block, every eight consecutive bits are grouped into one data block according to the bit order in the code block. The bits of each data block are then stored in the eight buffers according to the preset storage order of bits in the eight buffers. For example, the first bit is stored in the first buffer, the second bit in the second buffer, and so on, with the eighth bit stored in the eighth buffer; of course, this is an illustrative example only and not the only limitation.
For convenience of data reading, this embodiment stipulates that the bits of each data block are stored in sequence at the same position of the corresponding buffers: for example, the bits of the first data block are all stored at the first bit position of the corresponding buffers, the bits of the second data block are all stored at the second bit position of the corresponding buffers, and so on, until data storage is complete.
Based on the above storage method, when reading data from the eight parallel buffers, bits are likewise read from the same position of the eight buffers each time. A first ordering index of each read bit is determined based on the above storage order of the bits, the bits are ordered based on their first ordering indices, and a first data stream is generated. On the second branch, the internal interleaver determines, based on a preset interleaving relationship, the second ordering index corresponding to the first ordering index of each bit fetched from the buffers, orders the bits based on their second ordering indices, and generates a second data stream. During encoding, the first branch encoder encodes based on the first data stream, and the second branch encoder encodes based on the second data stream.
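The construction of the two branch inputs described above can be sketched as follows; the row-wise (same-position) read and the quadratic interleaving relationship are from this embodiment, while the helper names and the toy sizes are illustrative:

```python
# First data stream: read the same position of all buffers, so position p
# of buffers 0..7 restores bits 8p..8p+7 in their first ordering indices.
# Second data stream: read the first stream at the interleaved (second)
# ordering indices Pi(i) = (f1*i + f2*i^2) mod K.

def first_stream(buffers):
    """Row-wise read across equal-depth buffers, restoring natural bit order."""
    return [b[p] for p in range(len(buffers[0])) for b in buffers]

def second_stream(stream, f1, f2):
    """Interleaved read feeding the second branch encoder."""
    K = len(stream)
    return [stream[(f1 * i + f2 * i * i) % K] for i in range(K)]

bits = list(range(16))  # toy 16-bit code block, 8 buffers of depth 2
bufs = [[bits[p * 8 + n] for p in range(2)] for n in range(8)]
assert first_stream(bufs) == bits
```

The first branch encoder consumes `first_stream`, while `second_stream` plays the role of the internal interleaver output for the second branch encoder.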
Of course, the above description takes eight parallel buffers only as an example and is not the only limitation of the present invention.
The beneficial effects of this embodiment are similar to those of the above embodiments and are not repeated here.
FIG. 7 is a schematic structural diagram of a Turbo encoder provided by an embodiment of the present invention. As shown in FIG. 7, the Turbo encoder 70 includes: a communication interface 71, one or more processors 72, multiple parallel buffers 73, a first branch encoder 74, a second branch encoder 75, and an internal interleaver 76. The one or more processors 72 work individually or in cooperation; the communication interface 71 is connected to the processor 72; the communication interface 71 is connected to the multiple parallel buffers 73; the multiple parallel buffers 73 are connected to the first branch encoder 74 and to the internal interleaver 76, respectively; and the internal interleaver 76 is connected to the second branch encoder 75. The communication interface 71 is configured to: obtain a code block for Turbo encoding. The processor 72 is configured to: control the communication interface 71 to store the data of the code block in blocks across the multiple parallel buffers. The first branch encoder 74 is configured to: fetch parallel data from the multiple parallel buffers 73 for Turbo encoding. The second branch encoder 75 is configured to: fetch parallel data from the multiple parallel buffers via the internal interleaver 76 for Turbo encoding.
In a possible design, when controlling the communication interface 71 to store the data of the code block in blocks across the multiple parallel buffers, the processor 72 is specifically configured to: store the data of the code block in blocks across multiple parallel buffers based on a preset storage strategy.
In a possible design, when storing the data of the code block in blocks across multiple parallel buffers based on the preset storage strategy, the processor 72 is specifically configured to: based on a preset association between bit order and buffers, store the bits of the code block in the corresponding buffers according to their order in the code block.
In a possible design, when fetching parallel data from the multiple parallel buffers 73 for Turbo encoding, the first branch encoder 74 is specifically configured to: fetch bits from the multiple parallel buffers; the processor 72 is configured to: reorder the bits fetched by the first branch encoder based on the association between bit order and buffers; and the first branch encoder 74 performs Turbo encoding based on the reordered bits.
In a possible design, when the second branch encoder 75 fetches parallel data from the multiple parallel buffers 73 via the internal interleaver 76 for Turbo encoding, the internal interleaver 76 is specifically configured to: reorder the reordered bits again based on a preset interleaving relationship; and the second branch encoder 75 is configured to perform Turbo encoding based on the re-reordered bits.
In a possible design, when storing the data of the code block in blocks across multiple parallel buffers based on the preset storage strategy, the processor 72 is specifically configured to: partition the data of the code block into blocks based on the number of buffers, so that each resulting data block contains as many bits as there are buffers; and store the bits of each data block in the multiple parallel buffers in sequence, based on a preset storage order of bits in the multiple buffers.
In a possible design, when storing the bits of each data block in the multiple parallel buffers in sequence based on the preset storage order of bits in the multiple buffers, the processor 72 is specifically configured to: based on the preset storage order of bits in the multiple buffers, store the bit at each data position of the data block at the same position of the corresponding buffer.
In a possible design, when fetching parallel data from the multiple parallel buffers for Turbo encoding, the first branch encoder 74 is specifically configured to: fetch bits from the same position of the multiple parallel buffers; the processor 72 is configured to: reorder the fetched bits based on the storage order; and the first branch encoder 74 performs Turbo encoding based on the reordered bits.
In a possible design, when reordering the fetched bits based on the storage order, the processor 72 is specifically configured to: determine, based on the storage order, a first ordering index of each bit fetched from the buffers; and order the bits based on their first ordering indices to generate a first data stream.
In a possible design, when the second branch encoder 75 fetches parallel data from the multiple parallel buffers via the internal interleaver for Turbo encoding, the internal interleaver 76 is specifically configured to: determine, based on a preset interleaving relationship, the second ordering index corresponding to the first ordering index of each bit fetched from the buffers, and order the bits based on their second ordering indices to generate a second data stream; and the second branch encoder 75 is specifically configured to: perform Turbo encoding based on the second data stream.
An embodiment of the present invention further provides an unmanned aerial vehicle, including:
a fuselage;
a wireless communication device mounted on the fuselage and configured to perform wireless communication;
a power system mounted on the fuselage and configured to provide power; and
the Turbo encoder of the above embodiments.
The unmanned aerial vehicle includes an unmanned aircraft or an unmanned ground vehicle.
An embodiment of the present invention further provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to execute the technical solutions of the embodiments. The computer referred to in this embodiment is a device with computing and processing capability; for example, it may be, but is not limited to, a UAV or a mobile phone. The computer-readable storage medium is a storage medium that stores instructions executable by such a device.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional modules is used only as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are intended only to illustrate, rather than to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some or all of their technical features; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (22)

  1. A Turbo encoding method, comprising:
    obtaining a code block for Turbo encoding;
    storing the data of the code block in blocks across multiple parallel buffers; and
    fetching parallel data from the multiple parallel buffers for Turbo encoding.
  2. The method according to claim 1, wherein the storing the data of the code block in blocks across multiple parallel buffers comprises:
    storing the data of the code block in blocks across multiple parallel buffers based on a preset storage strategy.
  3. The method according to claim 2, wherein the storing the data of the code block in blocks across multiple parallel buffers based on a preset storage strategy comprises:
    based on a preset association between bit order and buffers, storing the bits of the code block in the corresponding buffers according to their order in the code block.
  4. The method according to claim 3, wherein the fetching parallel data from the multiple parallel buffers for Turbo encoding comprises:
    fetching bits from the multiple parallel buffers, and reordering the fetched bits based on the association between bit order and buffers; and
    performing Turbo encoding based on the reordered bits.
  5. The method according to claim 3, wherein the performing Turbo encoding based on the reordered bits comprises:
    reordering the reordered bits again based on a preset interleaving relationship; and
    performing Turbo encoding based on the re-reordered bits and the reordered bits.
  6. The method according to claim 2, wherein the storing the data of the code block in blocks across multiple parallel buffers based on a preset storage strategy comprises:
    partitioning the data of the code block into blocks based on the number of buffers, so that each resulting data block contains as many bits as there are buffers; and
    storing the bits of each data block in the multiple parallel buffers in sequence, based on a preset storage order of bits in the multiple buffers.
  7. The method according to claim 6, wherein the storing the bits of each data block in the multiple parallel buffers in sequence based on a preset storage order of bits in the multiple buffers comprises:
    based on the preset storage order of bits in the multiple buffers, storing the bit at each data position of the data block at the same position of the corresponding buffer.
  8. The method according to claim 7, wherein the fetching parallel data from the multiple parallel buffers for Turbo encoding comprises:
    fetching bits from the same position of the multiple parallel buffers, and reordering the fetched bits based on the storage order; and
    performing Turbo encoding based on the reordered bits.
  9. The method according to claim 8, wherein the fetching bits from the multiple parallel buffers and reordering the fetched bits based on the storage order comprises:
    determining, based on the storage order, a first ordering index of each bit fetched from the buffers;
    ordering the bits based on their first ordering indices to generate a first data stream; and
    determining, based on a preset interleaving relationship, the second ordering index corresponding to the first ordering index of each bit fetched from the buffers, and ordering the bits based on their second ordering indices to generate a second data stream;
    wherein the performing Turbo encoding based on the reordered bits comprises:
    performing Turbo encoding based on the first data stream and the second data stream.
  10. A Turbo encoder, comprising: a communication interface, one or more processors, multiple parallel buffers, a first branch encoder, a second branch encoder, and an internal interleaver;
    wherein the one or more processors work individually or in cooperation; the communication interface is connected to the processor; the communication interface is connected to the multiple parallel buffers; the multiple parallel buffers are connected to the first branch encoder and to the internal interleaver, respectively; and the internal interleaver is connected to the second branch encoder;
    the communication interface is configured to: obtain a code block for Turbo encoding;
    the processor is configured to: control the communication interface to store the data of the code block in blocks across the multiple parallel buffers;
    the first branch encoder is configured to: fetch parallel data from the multiple parallel buffers for Turbo encoding; and
    the second branch encoder is configured to: fetch parallel data from the multiple parallel buffers via the internal interleaver for Turbo encoding.
  11. The Turbo encoder according to claim 10, wherein, when controlling the communication interface to store the data of the code block in blocks across the multiple parallel buffers, the processor is specifically configured to:
    store the data of the code block in blocks across multiple parallel buffers based on a preset storage strategy.
  12. The Turbo encoder according to claim 11, wherein, when storing the data of the code block in blocks across multiple parallel buffers based on the preset storage strategy, the processor is specifically configured to:
    based on a preset association between bit order and buffers, store the bits of the code block in the corresponding buffers according to their order in the code block.
  13. The Turbo encoder according to claim 12, wherein, when fetching parallel data from the multiple parallel buffers for Turbo encoding, the first branch encoder is specifically configured to: fetch bits from the multiple parallel buffers;
    the processor is configured to: reorder the bits fetched by the first branch encoder based on the association between bit order and buffers; and
    the first branch encoder performs Turbo encoding based on the reordered bits.
  14. The Turbo encoder according to claim 13, wherein, when the second branch encoder fetches parallel data from the multiple parallel buffers via the internal interleaver for Turbo encoding, the internal interleaver is specifically configured to: reorder the reordered bits again based on a preset interleaving relationship; and
    the second branch encoder is configured to perform Turbo encoding based on the re-reordered bits.
  15. The Turbo encoder according to claim 11, wherein, when storing the data of the code block in blocks across multiple parallel buffers based on the preset storage strategy, the processor is specifically configured to:
    partition the data of the code block into blocks based on the number of buffers, so that each resulting data block contains as many bits as there are buffers; and
    store the bits of each data block in the multiple parallel buffers in sequence, based on a preset storage order of bits in the multiple buffers.
  16. The Turbo encoder according to claim 15, wherein, when storing the bits of each data block in the multiple parallel buffers in sequence based on the preset storage order of bits in the multiple buffers, the processor is specifically configured to:
    based on the preset storage order of bits in the multiple buffers, store the bit at each data position of the data block at the same position of the corresponding buffer.
  17. The Turbo encoder according to claim 16, wherein, when fetching parallel data from the multiple parallel buffers for Turbo encoding, the first branch encoder is specifically configured to: fetch bits from the same position of the multiple parallel buffers;
    the processor is configured to: reorder the fetched bits based on the storage order; and
    the first branch encoder performs Turbo encoding based on the reordered bits.
  18. The Turbo encoder according to claim 17, wherein, when reordering the fetched bits based on the storage order, the processor is specifically configured to:
    determine, based on the storage order, a first ordering index of each bit fetched from the buffers; and
    order the bits based on their first ordering indices to generate a first data stream.
  19. The Turbo encoder according to claim 18, wherein, when the second branch encoder fetches parallel data from the multiple parallel buffers via the internal interleaver for Turbo encoding, the internal interleaver is specifically configured to: determine, based on a preset interleaving relationship, the second ordering index corresponding to the first ordering index of each bit fetched from the buffers, and order the bits based on their second ordering indices to generate a second data stream; and
    the second branch encoder is specifically configured to: perform Turbo encoding based on the second data stream.
  20. An unmanned aerial vehicle, comprising:
    a fuselage;
    a wireless communication device mounted on the fuselage and configured to perform wireless communication;
    a power system mounted on the fuselage and configured to provide power; and
    the Turbo encoder according to any one of claims 10 to 19.
  21. The unmanned aerial vehicle according to claim 20, wherein the unmanned aerial vehicle comprises an unmanned aircraft or an unmanned ground vehicle.
  22. A computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 9.
PCT/CN2018/086799 2018-05-15 2018-05-15 Turbo encoding method, Turbo encoder, and unmanned aerial vehicle WO2019218130A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/086799 WO2019218130A1 (zh) 2018-05-15 2018-05-15 Turbo encoding method, Turbo encoder, and unmanned aerial vehicle
CN201880031265.4A CN110710112A (zh) 2018-05-15 2018-05-15 Turbo encoding method, Turbo encoder, and unmanned aerial vehicle
US17/096,140 US20210083691A1 (en) 2018-05-15 2020-11-12 Turbo encoding method, turbo encoder and uav

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/086799 WO2019218130A1 (zh) 2018-05-15 2018-05-15 Turbo encoding method, Turbo encoder, and unmanned aerial vehicle

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/096,140 Continuation US20210083691A1 (en) 2018-05-15 2020-11-12 Turbo encoding method, turbo encoder and uav

Publications (1)

Publication Number Publication Date
WO2019218130A1 true WO2019218130A1 (zh) 2019-11-21

Family

ID=68539239

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/086799 WO2019218130A1 (zh) 2018-05-15 2018-05-15 Turbo编码方法、Turbo编码器及无人机

Country Status (3)

Country Link
US (1) US20210083691A1 (zh)
CN (1) CN110710112A (zh)
WO (1) WO2019218130A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259992A (zh) * 2021-06-11 2021-08-13 Suzhou HYC Technology Co., Ltd. Code block segmentation method, computer device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101674161A (zh) * 2009-10-15 2010-03-17 Huawei Technologies Co., Ltd. Rate de-matching method and apparatus
CN101777924A (zh) * 2010-01-11 2010-07-14 New Postcom Equipment Co., Ltd. Turbo code decoding method and apparatus
CN102111163A (zh) * 2009-12-25 2011-06-29 ZTE Corporation Turbo encoder and encoding method
US20160028513A1 (en) * 2013-12-10 2016-01-28 Telefonaktiebolaget L M Ericsson (Publ) Group-Based Resource Element Mapping for Radio Transmission of Data

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6298463B1 (en) * 1998-07-31 2001-10-02 Nortel Networks Limited Parallel concatenated convolutional coding
JP4298140B2 (ja) * 2000-06-29 2009-07-15 Fujitsu Limited Transmitting and receiving apparatus
US8706968B2 (en) * 2007-12-06 2014-04-22 Fusion-Io, Inc. Apparatus, system, and method for redundant write caching
CN102098061B (zh) * 2009-12-15 2014-09-17 Alcatel-Lucent Shanghai Bell Co., Ltd. Parallel Turbo encoder
US20150014482A1 (en) * 2013-07-15 2015-01-15 Design Intelligence Incorporated, LLC Unmanned aerial vehicle (uav) with inter-connecting wing sections
WO2016112286A1 (en) * 2015-01-09 2016-07-14 Massachusetts Institute Of Technology Link architecture and spacecraft terminal for high rate direct to earth optical communications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101674161A (zh) * 2009-10-15 2010-03-17 Huawei Technologies Co., Ltd. Rate de-matching method and apparatus
CN102111163A (zh) * 2009-12-25 2011-06-29 ZTE Corporation Turbo encoder and encoding method
CN101777924A (zh) * 2010-01-11 2010-07-14 New Postcom Equipment Co., Ltd. Turbo code decoding method and apparatus
US20160028513A1 (en) * 2013-12-10 2016-01-28 Telefonaktiebolaget L M Ericsson (Publ) Group-Based Resource Element Mapping for Radio Transmission of Data


Also Published As

Publication number Publication date
CN110710112A (zh) 2020-01-17
US20210083691A1 (en) 2021-03-18

Similar Documents

Publication Publication Date Title
US11431351B2 (en) Selection of data compression technique based on input characteristics
JP5479580B2 Method and apparatus for parallel turbo decoding in LTE
US10187081B1 (en) Dictionary preload for data compression
KR20090094262A Turbo encoding using contention-free interleavers
US9176977B2 (en) Compression/decompression accelerator protocol for software/hardware integration
JP2008219892A Method and apparatus for encoding and decoding data
CN111222532B Edge-cloud collaborative deep learning model training method with classification accuracy preservation and bandwidth protection
WO2018171401A1 Information processing method, apparatus, and device
WO2019218130A1 Turbo encoding method, Turbo encoder, and unmanned aerial vehicle
JP2021505028A Encoding method and apparatus, electronic device, and storage medium
CN102523076A Universal configurable high-rate Turbo code decoding system and method
CN103354483A Universal high-performance Radix-4 SOVA decoder and decoding method
Xianjun et al. A 122Mb/s turbo decoder using a mid-range GPU
CN109495116A SC-BP hybrid decoding method for polar codes and adjustable hardware architecture therefor
Halim et al. Software-based turbo decoder implementation on low power multi-processor system-on-chip for Internet of Things
KR20140124214A Decoding apparatus and decoding method
CN101882933B Method for Turbo decoding in LTE and Turbo decoder
CN105375934A Viterbi decoder and decoding method for tail-biting convolutional codes
Krishnan et al. A universal parallel two-pass MDL context tree compression algorithm
CN103152567A Exponential-Golomb encoder of arbitrary order and method therefor
CN106712778A Turbo decoding apparatus and method
CN105515591B Turbo code decoding system and method
CN109831217A Turbo code decoder, component decoder for Turbo codes, and component decoding method
CN103905066A Turbo code decoding apparatus and method
CN103208997B Encoder/decoder structure with simultaneous input, decoding, and output

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18918793

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18918793

Country of ref document: EP

Kind code of ref document: A1