CN113472478A - Decoding front-end processing method, device, computer equipment and storage medium - Google Patents

Decoding front-end processing method, device, computer equipment and storage medium

Info

Publication number
CN113472478A
CN113472478A (application CN202010245005.6A; granted as CN113472478B)
Authority
CN
China
Prior art keywords
decoder
data
target data
buffer area
buffer
Prior art date
Legal status
Granted
Application number
CN202010245005.6A
Other languages
Chinese (zh)
Other versions
CN113472478B (en)
Inventor
彭剑
王宗谦
Current Assignee
Guangzhou Haige Communication Group Inc Co
Original Assignee
Guangzhou Haige Communication Group Inc Co
Priority date
Filing date
Publication date
Application filed by Guangzhou Haige Communication Group Inc Co
Priority to CN202010245005.6A
Publication of CN113472478A
Application granted
Publication of CN113472478B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0045 Arrangements at the receiver end
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/0001 Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L 1/0036 Systems modifying transmission characteristics according to link quality, e.g. power backoff arrangements specific to the receiver
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The application relates to a decoding front-end processing method, a decoding front-end processing device, computer equipment and a storage medium. The method comprises the following steps: receiving target data sent by a sending end and parsing the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed; storing the target data into a first buffer area and the associated state information into a second buffer area; judging whether a data reading condition is met according to the storage state of the second buffer area and the working state of a decoder; if the data reading condition is met, reading the target data from the first buffer area according to the associated state information to obtain the read target data; and finally, controlling the decoder to decode according to the read target data. By adopting the method, decoding efficiency can be improved.

Description

Decoding front-end processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a decoding front-end processing method, apparatus, computer device, and storage medium.
Background
In the field of communication technology, ad hoc network systems are widely used. In an ad hoc network system, communication links can be established directly between terminals without going through a central base station.
When data is transmitted between terminals in an ad hoc network system, it must be encoded and decoded. This includes encoding and decoding in fixed time slots and encoding and decoding in variable time slots. For decoding in fixed time slots, a ping-pong buffer processing strategy is usually adopted: the receiving terminal is provided with two identical buffer areas that can be read and written alternately, and the receiving terminal switches between the two buffer areas at a fixed time interval, reads data from the buffer areas, and finally decodes the data.
However, the conventional ping-pong buffer processing strategy can only read data from a buffer at fixed time intervals and must wait until the buffer is full before the data can be read out and decoded, so it suffers from low decoding efficiency.
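For illustration only, the conventional scheme described above can be summarized by the following C sketch; the buffer size, variable names and switching function are assumptions rather than anything taken from this application.

```c
/* Illustrative-only sketch of the conventional ping-pong scheme: two
 * identical buffers are swapped on a fixed time grid, and a buffer can
 * be handed to the decoder only after it has been completely filled. */
#include <stdint.h>

#define PP_SLOT_LEN 1024u                 /* assumed fixed slot length */

static int16_t ping[PP_SLOT_LEN];         /* buffer A */
static int16_t pong[PP_SLOT_LEN];         /* buffer B */
static unsigned active;                   /* 0: write ping, read pong; 1: the reverse */

/* Called once per fixed time interval: swap the roles of the two buffers
 * and return the one that has just been filled for reading and decoding. */
static const int16_t *pingpong_switch(void)
{
    const int16_t *full = active ? pong : ping;
    active ^= 1u;
    return full;
}
```

The fixed switching interval shown here is exactly what the method of the present application removes.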
Disclosure of Invention
In view of the foregoing, it is desirable to provide a decoding front-end processing method, apparatus, computer device and storage medium capable of improving decoding efficiency.
In a first aspect, a method for decoding front-end processing is provided, the method including:
receiving target data sent by a sending end and parsing the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed;
storing the target data into a first buffer area, and storing the associated state information into a second buffer area;
judging whether a data reading condition is met or not according to the storage state of the second buffer area and the working state of a decoder;
if the data reading condition is met, reading the target data from the first buffer area according to the associated state information to obtain the read target data;
and controlling the decoder to decode according to the read target data.
In one embodiment, the determining whether the data reading condition is satisfied according to the storage state of the second buffer and the operating state of the decoder includes:
detecting whether the storage state of the second buffer area is a non-empty state;
detecting whether the working state of the decoder is a non-busy state;
and when the storage state of the second buffer area is a non-empty state and the working state of the decoder is a non-busy state, determining that the data reading condition is met.
In one embodiment, the controlling the decoder to decode according to the read target data includes:
de-interleaving and de-spreading the read target data according to the channel associated state information of the second buffer area to obtain processed data;
and sending the processed data to the decoder, wherein the processed data is used for decoding by the decoder.
In one embodiment, the sending the processed data to the decoder includes:
dividing the processed data into N data segments, wherein N is an integer greater than or equal to 1;
and sending the N data segments to the decoder respectively.
In one embodiment, the target data is equalized single-hop soft information, and the first buffer area is provided with a safety interval whose size is 1.5 times the data volume of the soft information contained in a time slot with the maximum hop count.
In one embodiment, the associated state information includes: at least one of code rate, spreading multiple, interleaved hop count, decoding length, and number of symbols.
In one embodiment, the first buffer is a LoopBuffer buffer; the second buffer area is a FIFO buffer area; the decoder is a Turbo decoder, a convolutional decoder, a Polar decoder or an LDPC decoder.
In a second aspect, a decoding front-end processing apparatus is provided, which includes:
the receiving module is used for receiving target data sent by the sending end and parsing the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed;
the storage module is used for storing the target data into a first buffer area and storing the associated state information into a second buffer area;
the judging module is used for judging whether the data reading condition is met or not according to the storage state of the second buffer area and the working state of the decoder;
the reading module is used for reading the target data from the first buffer area according to the associated state information if the data reading condition is met, and obtaining the read target data;
and the control module is used for controlling the decoder to decode according to the read target data.
In one embodiment, the determining module is specifically configured to detect whether a storage state of the second buffer is a non-empty state; detecting whether the working state of the decoder is a non-busy state; and when the storage state of the second buffer area is a non-empty state and the working state of the decoder is a non-busy state, determining that the data reading condition is met.
In one embodiment, the control module is specifically configured to perform de-interleaving processing and de-spreading processing on the read target data according to the channel associated state information of the second buffer area to obtain processed data; and sending the processed data to the decoder, wherein the processed data is used for decoding by the decoder.
In one embodiment, the control module is specifically configured to divide the processed data into N data segments, where N is an integer greater than or equal to 1; and sending the N data segments to the decoder respectively.
In one embodiment, the target data is equalized single-hop soft information, and the first buffer area is provided with a safety interval whose size is 1.5 times the data volume of the soft information contained in a time slot with the maximum hop count.
In one embodiment, the associated state information includes: at least one of code rate, spreading multiple, interleaved hop count, decoding length, and number of symbols.
In one embodiment, the first buffer is a LoopBuffer buffer; the second buffer area is a FIFO buffer area; the decoder is a Turbo decoder, a convolutional decoder, a Polar decoder or an LDPC decoder.
In a third aspect, a computer device is provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the decoding front-end processing method according to any one of the first aspect when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the decoding front-end processing method according to any one of the first aspect.
According to the decoding front-end processing method, device, computer equipment and storage medium, target data sent by the sending end is received and parsed to obtain associated state information, the target data being transmitted over a communication resource whose time slot length is not fixed; the target data is then stored in a first buffer area and the associated state information in a second buffer area; whether a data reading condition is met is judged according to the storage state of the second buffer area and the working state of a decoder; if the data reading condition is met, the target data is read from the first buffer area according to the associated state information; and finally the decoder is controlled to decode according to the read target data. In this decoding front-end processing method, after the target data is obtained and the target data and its channel associated state information are stored in the first and second buffer areas respectively, whether to decode can be judged by combining the states of the second buffer area and the decoder; if the decoding condition is met, the decoding action is triggered immediately, waiting time is shortened, the system is used at its maximum efficiency, and decoding efficiency is improved.
Drawings
FIG. 1 is a flow diagram illustrating a method for decoding front-end processing in one embodiment;
FIG. 2 is a flowchart illustrating a method for determining whether a data read condition is satisfied in a decoding front-end processing method according to an embodiment;
FIG. 3 is a flowchart illustrating a method for controlling a decoder to decode according to the read target data in another embodiment of a decoding front-end processing method;
FIG. 4 is a flowchart illustrating a method for sending processed data to a decoder in a front-end decoding processing method according to another embodiment;
FIG. 5 is a schematic diagram of a decode front-end processing method in one embodiment;
FIG. 6 is a block diagram of a decoding front-end processing device in one embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a decoding front-end processing method is provided. The method is described here as applied to a terminal, which may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer or portable wearable device, and it includes the following steps:
In step 101, the terminal receives target data sent by a sending end and parses the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed.
In an ad hoc network system, terminals can communicate with each other directly and wirelessly. During such communication, a time slot is the minimum time unit for transmitting data between terminals. Data can be transmitted with a fixed time slot length or with a variable time slot length. A fixed slot length, however, has a drawback: because the slot length is fixed, the waveform structure is easy to intercept and identify, which increases the risk of the communication being deciphered. A flexible, variable-length slot structure effectively solves this problem, so making the slot length variable is one of the current directions in the evolution of ad hoc network technology.
After receiving the target data sent by the sending end, the terminal parses the target data to obtain its associated state information. The associated state information includes the time slot parameters of the target data. Because the target data is transmitted over a communication resource whose time slot length is not fixed, the slot parameters must be parsed so that the target data can be decoded later.
In step 102, the terminal stores the target data into a first buffer area and stores the associated state information into a second buffer area.
After the target data and its associated state information are acquired, they need to be stored separately. The target data is the data to be decoded next, while the associated state information assists decoding: it records the time slot parameters of the target data, generally including the slot rate, the spreading multiple, the number of hops in the slot, and so on, and these parameters describe the format of the recorded target data. In general, different decoders can decode different data formats, and when the time slot is variable the data format changes as well, so the channel associated state information of the target data must be parsed in order to control the format of the data input to the decoder.
In step 103, the terminal judges whether a data reading condition is met according to the storage state of the second buffer area and the working state of the decoder.
In this step, the terminal considers the storage state of the second buffer area and the working state of the decoder together to judge whether to trigger the decoding action. This judgement may be made in real time or periodically. The main purpose is to take the data-processing capacity of the whole system into account so that the target data can be processed in a timely manner.
In step 104, if the data reading condition is met, the terminal reads the target data from the first buffer area according to the associated state information to obtain the read target data.
In this step, the data reading condition may be preset, for example, the data reading condition may be that the second buffer is not empty and the decoder is not busy.
If the data reading condition is met, the target data in the first buffer area is read according to the channel associated state information; reading it according to this information yields data in a format that the decoder can process.
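As a minimal sketch only (the capacity, field names and element type are assumptions, not values from this application), reading the target data out of a circular first buffer for a length derived from the slot parameters could look like this:

```c
/* Sketch of reading target data from a circular first buffer (LoopBuffer)
 * according to the parsed slot parameters. All names and sizes are
 * illustrative assumptions. */
#include <stddef.h>
#include <stdint.h>

#define LOOPBUF_CAPACITY 8192u            /* assumed capacity in soft values */

typedef struct {
    int16_t data[LOOPBUF_CAPACITY];       /* equalized soft information */
    size_t  rd;                           /* read index */
    size_t  wr;                           /* write index */
} loop_buffer_t;

/* Copy 'len' soft values out of the circular buffer, wrapping around the
 * end of the storage array; 'len' would be derived from the associated
 * state information (e.g. decoding length, hop count). */
static void loopbuf_read(loop_buffer_t *lb, int16_t *dst, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        dst[i] = lb->data[lb->rd];
        lb->rd = (lb->rd + 1u) % LOOPBUF_CAPACITY;
    }
}
```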
In step 105, the terminal controls the decoder to decode according to the read target data.
After the processing in the above steps, the target data has been converted into a format that the decoder can process, so the processed target data can be input into the decoder for decoding. Optionally, the read target data can be deinterleaved and despread before formal decoding.
In this decoding front-end processing method, target data sent by the sending end is received and parsed to obtain the channel associated state information, the target data being transmitted over a communication resource whose time slot length is not fixed; the target data is then stored in a first buffer area and the associated state information in a second buffer area; whether a data reading condition is met is judged according to the storage state of the second buffer area and the working state of a decoder; if the condition is met, the target data is read from the first buffer area according to the associated state information; and finally the decoder is controlled to decode according to the read target data. After the target data is obtained and the target data and its channel associated state information are stored in the first and second buffer areas respectively, whether to decode can be judged by combining the states of the second buffer area and the decoder; if the decoding condition is met, the decoding action is triggered, the system is used at its maximum efficiency, and decoding efficiency is improved.
In an embodiment of the present application, referring to fig. 2, a method for determining whether the data reading condition is satisfied in the decoding front-end processing method is provided. The method includes:
In step 201, the terminal detects whether the storage state of the second buffer is a non-empty state.
In this step, the storage state of the second buffer indicates whether the second buffer currently stores data. Optionally, the second buffer may be implemented as a first-in-first-out buffer queue (FIFO). When the second buffer is a FIFO, its storage state can be indicated by an Empty flag; when Empty equals 0, the second buffer is non-empty.
In step 202, the terminal detects whether the working state of the decoder is a non-busy state.
In this step, the working state of the decoder indicates whether the decoder's current data-processing load has reached its upper limit. For example, when the decoder is a Turbo decoder, its working state can be represented by a Busy flag; when Busy is 0, the decoder has not reached its processing limit and can still accept new data.
In step 203, when the storage status of the second buffer is a non-empty status and the working status of the decoder is a non-busy status, the terminal determines that the data reading condition is satisfied.
In this step, a specific data reading condition is given: the second buffer is non-empty and the decoder is not busy. In other words, the data-processing capacity of the whole system is considered as a whole, waiting time is reduced, and data can be processed in real time.
In the embodiment of the application, considering the states of the second buffer area and the decoder together avoids the waiting time of the traditional ping-pong buffer processing strategy, in which data can only be read once a buffer is full, and thereby improves data reading efficiency.
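A minimal sketch of this trigger condition, assuming an Empty flag for the second buffer and a Busy flag for the decoder as described above (the accessor names are illustrative):

```c
/* Decode-trigger sketch: read and decode as soon as the slot-parameter
 * FIFO holds at least one entry and the decoder can accept new data.
 * fifo_is_empty() and decoder_is_busy() are assumed accessors. */
#include <stdbool.h>

extern bool fifo_is_empty(void);     /* Empty flag of the second buffer */
extern bool decoder_is_busy(void);   /* Busy flag of the decoder */

static bool read_condition_met(void)
{
    /* Empty == 0 and Busy == 0: trigger the read/decode action at once,
     * with no fixed waiting interval as in a ping-pong scheme. */
    return !fifo_is_empty() && !decoder_is_busy();
}
```

Evaluating this condition on every FIFO write and on every decoder-done event (or by polling) is what removes the fixed waiting interval of the ping-pong scheme.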
In an embodiment of the present application, referring to fig. 3, a method for controlling the decoder to decode according to the read target data in the decoding front-end processing method is provided. The method includes:
In step 301, the terminal deinterleaves and despreads the read target data according to the channel associated state information in the second buffer area to obtain processed data.
In this step, the associated state information is used not only to interpret the format of the target data but also to deinterleave and despread it.
Interleaving and deinterleaving are operations that occur in pairs. During communication, a channel often produces burst errors because of impulse interference or multipath fading, so once an uncorrectable error occurs it tends to persist; convolutional interleaving solves this problem. Convolutional interleaving scrambles the time order of the data according to a fixed rule to weaken its correlation before the data is sent over the channel, and a deinterleaver restores the order with the inverse rule. After deinterleaving, burst errors are dispersed in time and resemble independently occurring random errors, which forward error correction coding can correct effectively; the combined effect of forward error correction and interleaving can be understood as extending the burst length that the code can withstand.
Spreading and despreading are also operations that occur in pairs. When terminals communicate using spread spectrum, the bandwidth used to transmit the information is far larger than the bandwidth of the information itself. In this transmission scheme, the transmitting terminal performs spread-spectrum modulation with a spreading code (typically a pseudo-random code), and the receiving terminal uses the same code for coherent, synchronized reception, despreading, and recovery of the transmitted information data.
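The sketch below is illustrative only: it uses a simple block deinterleaver and a direct-sequence despreader to show the data flow, whereas a real receiver would apply the convolutional deinterleaver and the spreading code defined by the waveform; all names and parameters are assumptions.

```c
/* Illustrative deinterleaving and despreading of soft values; shown with
 * a block deinterleaver for simplicity (the application describes
 * convolutional interleaving). */
#include <stddef.h>
#include <stdint.h>

/* Block deinterleaver: the transmitter wrote data row by row into a
 * rows x cols matrix and sent it column by column; reverse that mapping. */
static void deinterleave_block(const int16_t *in, int16_t *out,
                               size_t rows, size_t cols)
{
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            out[r * cols + c] = in[c * rows + r];
}

/* Despreading: accumulate 'factor' consecutive chips per symbol, each
 * multiplied by the corresponding spreading-code chip (+1/-1). */
static void despread(const int16_t *chips, const int8_t *code,
                     int32_t *symbols, size_t num_symbols, size_t factor)
{
    for (size_t s = 0; s < num_symbols; s++) {
        int32_t acc = 0;
        for (size_t k = 0; k < factor; k++)
            acc += (int32_t)chips[s * factor + k] * (int32_t)code[k];
        symbols[s] = acc;
    }
}
```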
In step 302, the terminal sends the processed data to the decoder, and the processed data is used by the decoder for decoding.
After the above format conversion, deinterleaving and despreading of the target data, the processed data is obtained and can be decoded by the decoder. Optionally, in this embodiment of the application, after the processed data is obtained it may be divided into a plurality of data blocks that are input into the decoder sequentially, in a pipelined manner, according to the data format the decoder accepts.
In the embodiment of the application, the received target data can be converted through de-interleaving and de-spreading, so that the final decoding result is more accurate.
In an embodiment of the present application, referring to fig. 4, a method for sending the processed data to the decoder in the decoding front-end processing method is provided. The method includes:
In step 401, the terminal divides the processed data into N data segments, where N is an integer greater than or equal to 1.
In this step, in order to improve the processing performance of the decoder, a component (segmented) decoding principle is adopted: the processed data is divided into N data segments so that the decoder can process the N segments simultaneously. The value of N depends on the decoder used.
In step 402, the terminal sends the N data segments to the decoder, respectively.
In this step, the terminal sends the divided data segments to the decoder so that the decoder can decode the N data segments.
In the embodiment of the application, by adopting this component-style processing idea and dividing the processed data into N data segments that the decoder processes simultaneously, the decoding efficiency of the decoder is improved.
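A minimal sketch of this segmentation, assuming a hypothetical decoder_submit_segment() interface (the function name and the segment layout are illustrative, not part of the application):

```c
/* Sketch of component (segmented) decoding: split the processed data into
 * N roughly equal segments and hand each segment to the decoder. */
#include <stddef.h>
#include <stdint.h>

extern void decoder_submit_segment(const int32_t *seg, size_t len, size_t index);

static void send_in_segments(const int32_t *data, size_t total_len, size_t n)
{
    if (n == 0)
        return;                          /* N is at least 1 per the method */

    size_t base = total_len / n;         /* nominal segment length */
    size_t rem  = total_len % n;         /* remainder goes to the last segment */

    for (size_t i = 0; i < n; i++) {
        size_t len = (i == n - 1) ? base + rem : base;
        decoder_submit_segment(data + i * base, len, i);
    }
}
```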
In this embodiment, the target data is equalized single-hop soft information, and the first buffer area is provided with a safety interval whose size is 1.5 times the data volume of the soft information contained in a time slot with the maximum hop count.
In the embodiment of the present application, wireless communication is implemented through the propagation of electromagnetic waves between terminals. Because electromagnetic waves are analog signals, the received waves must undergo analog-to-digital conversion, that is, sampling; the soft information mentioned here refers to the sampled data points. Single-hop soft information refers to the sample points contained within one hop period. Electromagnetic waves also suffer losses during propagation, and equalization is the process of reducing these losses.
In the embodiment of the present application, in order to prevent the first buffer from overflowing while storing data, a safety interval is set in the first buffer. The amount of data the safety interval can hold may be preset, and the factor of 1.5 mentioned above may also be another value. Setting a safety interval in the first buffer area avoids data overflow to a certain extent and improves the reliability of the decoding processing method.
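As a worked illustration of the sizing rule, with placeholder numbers that are not taken from the application:

```c
/* Illustrative sizing of the first buffer with a safety interval equal to
 * 1.5 times the soft-information volume of a maximum-hop time slot.
 * The hop count and per-hop volume below are assumed placeholders. */
#define MAX_HOPS_PER_SLOT    16u   /* assumed maximum hop count of a slot */
#define SOFT_VALUES_PER_HOP 256u   /* assumed soft values per hop         */

/* soft values carried by a maximum-length slot */
#define MAX_SLOT_SOFT   (MAX_HOPS_PER_SLOT * SOFT_VALUES_PER_HOP)   /* 4096  */

/* safety interval: 1.5 x the maximum-slot soft-information volume */
#define SAFETY_INTERVAL ((MAX_SLOT_SOFT * 3u) / 2u)                 /* 6144  */

/* total LoopBuffer capacity: working area plus the safety interval */
#define LOOPBUF_TOTAL   (MAX_SLOT_SOFT + SAFETY_INTERVAL)           /* 10240 */
```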
In this embodiment of the present application, the associated state information includes: at least one of code rate, spreading multiple, interleaved hop count, decoding length, and number of symbols.
The code rate can be understood as the bit rate, i.e., the number of bits transmitted per second. The spreading multiple relates to spread-spectrum communication, in which the bandwidth of the signal used to transmit the information is far larger than the bandwidth of the information; depending on the gain value (that is, the spreading multiple), the signal is processed with different amplification to obtain different bandwidths. The hop count refers to the number of time slot changes per unit time. The decoding length is the data length required for decoding. The number of symbols is the number of symbols in the target data. Using the associated state information, the target data can be fully interpreted and converted into the required format.
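For illustration, a record carrying this information and pushed to the second buffer might be laid out as follows; the field names and widths are assumptions, since the application only lists the kinds of information carried:

```c
/* Sketch of one slot-parameter (associated state information) record. */
#include <stdint.h>

typedef struct {
    uint16_t code_rate_idx;    /* code rate (e.g. index into a rate table) */
    uint16_t spread_factor;    /* spreading multiple                       */
    uint16_t interleave_hops;  /* interleaved hop count                    */
    uint32_t decode_length;    /* data length required for decoding        */
    uint32_t num_symbols;      /* number of symbols in the target data     */
} slot_params_t;
```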
In the embodiment of the present application, the first buffer is a circular buffer (LoopBuffer); the second buffer area is a FIFO buffer area; the decoder is a Turbo decoder, a convolutional decoder, a Polar decoder or a low-density parity-check (LDPC) decoder.
In the embodiment of the application, because the LoopBuffer supports simultaneous reading and writing, the efficiency of storing and reading data is improved. A Turbo code constructs a long code with pseudo-random characteristics by concatenating two simple component codes in parallel through a pseudo-random interleaver, and decoding proceeds through multiple iterations between two component decoders, which greatly improves decoding performance. In addition, the application supports several kinds of decoders, which increases the flexibility of the decoding processing method.
In the embodiment of the present application, referring to fig. 5, a schematic diagram of the decoding front-end processing method is provided. The example assumes that the first buffer is a LoopBuffer, the second buffer is a FIFO buffer, the decoder is a Turbo decoder, and the target data is transmitted with a non-fixed slot length, and it is used to explain the decoding front-end processing method provided in the present application.
Specifically, after receiving target data sent by a sending terminal, the terminal performs two kinds of processing on it. First, it equalizes the target data hop by hop to obtain equalized M-hop soft information, where M is a positive integer greater than or equal to 1, and stores the equalized M-hop soft information in the LoopBuffer. Second, it parses the associated state information (also called the time slot parameters) of the target data and stores the parsed information in the FIFO buffer. The terminal then combines the state of the FIFO buffer and the state of the Turbo decoder to determine whether to input the data in the LoopBuffer into the Turbo decoder for decoding. When the FIFO buffer is in a non-empty state (i.e., Empty is 0) and the Turbo decoder is in a non-busy state (i.e., Busy is 0), data is read from the LoopBuffer according to the channel associated state information in the FIFO buffer, the read data is deinterleaved and despread according to that information to obtain processed data, and finally the processed data is divided into three streams and input to the Turbo decoder for decoding.
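The following C sketch ties the FIG. 5 flow together under stated assumptions: every type, flag and function name is a placeholder (the slot-parameter record is a reduced form of the one sketched earlier), and equalization, deinterleaving/despreading and the Turbo decoder are represented by assumed external routines.

```c
/* End-to-end sketch of the FIG. 5 flow: equalized soft information sits in
 * the LoopBuffer, slot parameters sit in the FIFO, and decoding is started
 * whenever the FIFO is non-empty (Empty == 0) and the Turbo decoder is not
 * busy (Busy == 0). Everything named here is an assumed placeholder. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint16_t spread_factor;    /* spreading multiple      */
    uint16_t interleave_hops;  /* interleaved hop count   */
    uint32_t decode_length;    /* soft values in the slot */
} slot_params_t;

extern bool fifo_pop(slot_params_t *p);              /* false when Empty == 1 */
extern bool turbo_busy(void);                        /* Busy flag of the decoder */
extern void loopbuf_read_slot(int16_t *dst, size_t len);
extern void deinterleave_despread(const int16_t *in, int32_t *out,
                                  const slot_params_t *p);
extern void turbo_submit(const int32_t *seg, size_t len, unsigned stream);

#define MAX_SLOT_SOFT 4096u   /* assumed upper bound on soft values per slot */

static void decode_front_end_poll(void)
{
    slot_params_t p;
    int16_t soft[MAX_SLOT_SOFT];
    int32_t proc[MAX_SLOT_SOFT];

    if (turbo_busy())                    /* Busy == 1: decoder cannot take data yet */
        return;
    if (!fifo_pop(&p))                   /* Empty == 1: no complete slot parsed yet */
        return;
    if (p.decode_length > MAX_SLOT_SOFT) /* guard against an oversized slot */
        return;

    /* Read exactly the amount described by the slot parameters, then
     * deinterleave and despread according to the same parameters. */
    loopbuf_read_slot(soft, p.decode_length);
    deinterleave_despread(soft, proc, &p);

    /* Divide the processed data into three streams for the Turbo decoder,
     * as in the example of FIG. 5. */
    size_t seg = p.decode_length / 3u;
    for (unsigned i = 0; i < 3u; i++) {
        size_t len = (i == 2u) ? (size_t)p.decode_length - 2u * seg : seg;
        turbo_submit(proc + (size_t)i * seg, len, i);
    }
}
```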
By contrast, the two identical buffer areas in a ping-pong caching strategy can only be switched at a preset fixed interval to read and write data. When processing data with variable time slots, suppose one of the buffers stores the data of a longer time slot; the time required to read that data exceeds the preset fixed interval, so the system switches to the other buffer before the longer slot has been fully read, which confuses the data reading and leads to decoding errors.
In the embodiment of the present application, whether to read data from the LoopBuffer buffer is determined according to the state of the FIFO buffer and the state of the Turbo decoder, and when facing data with variable time slots, the data can still be flexibly read, thereby avoiding confusion of data reading and ensuring that decoding can be normally performed.
It should be understood that although the steps in the flowcharts of fig. 2 to 5 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 to 5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In an embodiment of the present application, as shown in fig. 6, there is provided a decoding front-end processing apparatus 600, including: a receiving module 601, a storing module 602, a judging module 603, a reading module 604 and a control module 605, wherein:
a receiving module 601, configured to receive target data sent by a sending end and parse the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed;
a storage module 602, configured to store the target data into a first buffer, and store the associated state information into a second buffer;
a determining module 603, configured to determine whether a data reading condition is satisfied according to the storage state of the second buffer and the working state of the decoder;
a reading module 604, configured to, if the data reading condition is met, read the target data from the first buffer according to the associated state information to obtain read target data;
and a control module 605, configured to control the decoder to decode according to the read target data.
In this embodiment of the application, the determining module 603 is specifically configured to detect whether the storage state of the second buffer is a non-empty state; detecting whether the working state of the decoder is a non-busy state; and when the storage state of the second buffer area is a non-empty state and the working state of the decoder is a non-busy state, determining that the data reading condition is met.
In this embodiment, the control module 605 is specifically configured to perform de-interleaving processing and de-spreading processing on the read target data according to the channel associated state information of the second buffer area, so as to obtain processed data; and sending the processed data to the decoder, wherein the processed data is used for decoding by the decoder.
In this embodiment, the control module 605 is specifically configured to divide the processed data into N data segments, where N is an integer greater than or equal to 1; and sending the N data segments to the decoder respectively.
In this embodiment, the target data is equalized single-hop soft information, and the first buffer area is provided with a safety interval whose size is 1.5 times the data volume of the soft information contained in a time slot with the maximum hop count.
In this embodiment of the present application, the associated state information includes: at least one of code rate, spreading multiple, interleaved hop count, decoding length, and number of symbols.
In one embodiment, the first buffer is a LoopBuffer buffer; the second buffer area is a FIFO buffer area; the decoder is a Turbo decoder, a convolutional decoder, a Polar decoder or an LDPC decoder.
For the specific limitations of the decoding front-end processing apparatus, reference may be made to the above limitations of the decoding front-end processing method, which is not described herein again. The various modules in the decoding front-end processing device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a decoding front-end processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment of the present application, there is provided a computer device including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
receiving target data sent by a sending end and parsing the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed;
storing the target data into a first buffer area, and storing the associated state information into a second buffer area;
judging whether a data reading condition is met or not according to the storage state of the second buffer area and the working state of a decoder;
if the data reading condition is met, reading the target data from the first buffer area according to the associated state information to obtain the read target data;
and controlling the decoder to decode according to the read target data.
In the embodiment of the present application, the processor, when executing the computer program, further implements the following steps:
detecting whether the storage state of the second buffer area is a non-empty state; detecting whether the working state of the decoder is a non-busy state; and when the storage state of the second buffer area is a non-empty state and the working state of the decoder is a non-busy state, determining that the data reading condition is met.
In the embodiment of the present application, the processor, when executing the computer program, further implements the following steps:
de-interleaving and de-spreading the read target data according to the channel associated state information of the second buffer area to obtain processed data; and sending the processed data to the decoder, wherein the processed data is used for decoding by the decoder.
In the embodiment of the present application, the processor, when executing the computer program, further implements the following steps:
dividing the processed data into N data segments, wherein N is an integer greater than or equal to 1; and sending the N data segments to the decoder respectively.
In this embodiment, the target data is equalized single-hop soft information, and the first buffer area is provided with a safety interval whose size is 1.5 times the data volume of the soft information contained in a time slot with the maximum hop count.
In this embodiment of the present application, the associated state information includes: at least one of code rate, spreading multiple, interleaved hop count, decoding length, and number of symbols.
In the embodiment of the present application, the first buffer is a LoopBuffer buffer; the second buffer area is a FIFO buffer area; the decoder is a Turbo decoder, a convolutional decoder, a Polar decoder or an LDPC decoder.
In an embodiment of the application, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of:
receiving target data sent by a sending end and parsing the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed;
storing the target data into a first buffer area, and storing the associated state information into a second buffer area;
judging whether a data reading condition is met or not according to the storage state of the second buffer area and the working state of a decoder;
if the data reading condition is met, reading the target data from the first buffer area according to the associated state information to obtain the read target data;
and controlling the decoder to decode according to the read target data.
In an embodiment of the application, the computer program when executed by the processor further performs the steps of:
detecting whether the storage state of the second buffer area is a non-empty state; detecting whether the working state of the decoder is a non-busy state; and when the storage state of the second buffer area is a non-empty state and the working state of the decoder is a non-busy state, determining that the data reading condition is met.
In an embodiment of the application, the computer program when executed by the processor further performs the steps of:
de-interleaving and de-spreading the read target data according to the channel associated state information of the second buffer area to obtain processed data; and sending the processed data to the decoder, wherein the processed data is used for decoding by the decoder.
In an embodiment of the application, the computer program when executed by the processor further performs the steps of:
dividing the processed data into N data segments, wherein N is an integer greater than or equal to 1; and sending the N data segments to the decoder respectively.
In this embodiment, the target data is equalized single-hop soft information, and the first buffer area is provided with a safety interval whose size is 1.5 times the data volume of the soft information contained in a time slot with the maximum hop count.
In this embodiment of the present application, the associated state information includes: at least one of code rate, spreading multiple, interleaved hop count, decoding length, and number of symbols.
In the embodiment of the present application, the first buffer is a LoopBuffer buffer; the second buffer area is a FIFO buffer area; the decoder is a Turbo decoder, a convolutional decoder, a Polar decoder or an LDPC decoder.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A decoding front-end processing method, the method comprising:
receiving target data sent by a sending end and parsing the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed;
storing the target data into a first buffer area, and storing the associated state information into a second buffer area;
judging whether a data reading condition is met according to the storage state of the second buffer area and the working state of a decoder;
if the data reading condition is met, reading the target data from the first buffer area according to the associated state information to obtain the read target data;
and controlling the decoder to decode according to the read target data.
2. The method of claim 1, wherein the determining whether the data reading condition is satisfied according to the storage state of the second buffer and the operating state of the decoder comprises:
detecting whether the storage state of the second buffer area is a non-empty state;
detecting whether the working state of the decoder is a non-busy state;
and when the storage state of the second buffer area is a non-empty state and the working state of the decoder is a non-busy state, determining that the data reading condition is met.
3. The method according to claim 1, wherein said controlling said decoder to decode according to said read target data comprises:
de-interleaving and de-spreading the read target data according to the channel associated state information of the second buffer area to obtain processed data;
and sending the processed data to the decoder, wherein the processed data is used for the decoder to decode.
4. The method of claim 3, wherein sending the processed data to the decoder comprises:
dividing the processed data into N data segments, wherein N is an integer greater than or equal to 1;
and respectively sending the N data segments to the decoder.
5. The method according to claim 1, wherein the target data is equalized single-hop soft information, and the first buffer area is provided with a safety interval whose size is 1.5 times the data volume of the soft information contained in a time slot with the maximum hop count.
6. The method of claim 1, wherein the associated state information comprises: at least one of code rate, spreading multiple, interleaved hop count, decoding length, and number of symbols.
7. The method of claim 1, wherein the first buffer is a LoopBuffer buffer; the second buffer area is a FIFO buffer area; the decoder is a Turbo decoder, a convolutional decoder, a Polar decoder or an LDPC decoder.
8. A decoding front-end processing apparatus, the apparatus comprising:
the receiving module is used for receiving target data sent by a sending end and parsing the target data to obtain associated state information, the target data being transmitted over communication resources whose time slot length is not fixed;
the storage module is used for storing the target data into a first buffer area and storing the associated state information into a second buffer area;
the judging module is used for judging whether the data reading condition is met or not according to the storage state of the second buffer area and the working state of the decoder;
the reading module is used for reading the target data from the first buffer area according to the associated state information if the data reading condition is met, and obtaining the read target data;
and the control module is used for controlling the decoder to decode according to the read target data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010245005.6A 2020-03-31 2020-03-31 Decoding front-end processing method, decoding front-end processing device, computer equipment and storage medium Active CN113472478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010245005.6A CN113472478B (en) 2020-03-31 2020-03-31 Decoding front-end processing method, decoding front-end processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010245005.6A CN113472478B (en) 2020-03-31 2020-03-31 Decoding front-end processing method, decoding front-end processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113472478A 2021-10-01
CN113472478B (en) 2023-12-12

Family

ID=77866170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010245005.6A Active CN113472478B (en) 2020-03-31 2020-03-31 Decoding front-end processing method, decoding front-end processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113472478B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116881183A (en) * 2023-09-06 2023-10-13 北京融为科技有限公司 Method and device for processing decoded data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1791089A (en) * 2004-12-17 2006-06-21 华为技术有限公司 Method for improving code and decode treatment efficiency
CN101515805A (en) * 2009-03-26 2009-08-26 华为技术有限公司 Turbo encoder and encoding method thereof
WO2017000682A1 (en) * 2015-06-30 2017-01-05 深圳市中兴微电子技术有限公司 Decoding method and apparatus and storage medium


Also Published As

Publication number Publication date
CN113472478B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
US11303298B2 (en) Encoding and decoding method and terminal
JP7471357B2 (en) Encoding method, decoding method, and device
EP3533147B1 (en) Iterative decoding of polar code with bit-flipping of unreliable bits
CN101471689B (en) Method for transmitting data in communication system, communication device and communication system
CN108429599B (en) Method and apparatus for data processing in a communication system
US20190372591A1 (en) Methods and apparatuses for data processing in communication system
US11323209B2 (en) Modem chips and receivers for performing hybrid automatic repeat request processing
US11343018B2 (en) Polar code interleaving processing method and apparatus
CN113472478B (en) Decoding front-end processing method, decoding front-end processing device, computer equipment and storage medium
CN110098891B (en) Interleaving method and interleaving apparatus
US11398879B2 (en) Data processing method and communications device
US11128320B2 (en) Encoding method, decoding method, encoding apparatus, and decoding apparatus
US20180212630A1 (en) Encoder device, decoder device, and methods thereof
WO2019042370A1 (en) Data transmission method and device
JP3920220B2 (en) Communication device
CN109495207B (en) Method and apparatus for interleaving data in wireless communication system
WO2018141271A1 (en) Data processing method and device
KR102350909B1 (en) Method and apparatus for error-correcting based on error-correcting code
JP2019083507A (en) Reception device, transmission device, reception method and transmission method
CN112703687B (en) Channel coding method and device
CN109495210B (en) Method, apparatus, and computer-readable storage medium for interleaving data in a wireless communication system
US20210126659A1 (en) Apparatus and method for processing multi-user transmissions to discard signals or data carrying interference
JP2010021886A (en) Data decoding device, receiver and data decoding method
CN112865815A (en) Turbo decoding method, Turbo decoding device, Turbo decoder and storage medium
CN112688694A (en) Decoder for list type continuous elimination and decoding method thereof

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant