CN102655440A - Method and device for scheduling multiple sets of Turbo decoders - Google Patents

Method and device for scheduling multiple sets of Turbo decoders

Info

Publication number
CN102655440A
CN102655440A CN2011100513776A CN201110051377A
Authority
CN
China
Prior art keywords
idle
queue
decoder
pointer
decoders
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100513776A
Other languages
Chinese (zh)
Inventor
张薇
刘伟达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ZTE Microelectronics Technology Co Ltd
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN2011100513776A priority Critical patent/CN102655440A/en
Publication of CN102655440A publication Critical patent/CN102655440A/en
Pending legal-status Critical Current

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a method and a device for scheduling multiple sets of Turbo decoders. The method comprises the following steps: presetting and managing an idle decoder queue and an idle ID (identification) queue; detecting the state of an input buffer area; starting an idle decoder when data to be decoded exist in the input buffer area, and deleting the ID number of that decoder from the idle ID queue; and caching the decoded data in an output buffer area, where it is read out. According to the method, in a communication system adopting multiple sets of Turbo decoders, an input scheduling unit uses the idle identification (ID) queue to distribute data to idle decoders in time, and an output scheduling unit reads decoded data in time through a polling mechanism, so that parallel processing by the multiple sets of decoders is realized and processing efficiency is improved.

Description

Method and device for scheduling multiple sets of Turbo decoders
Technical Field
The invention relates to the field of communication, in particular to a method and a device for scheduling multiple sets of Turbo decoders in application scenarios with single-path data input and single-path data output.
Background
A decoder is a logic circuit with a decoding function in a communication system; its role is to convert an address code into a valid signal. The processing capacity of a single decoder is limited, and raising the clock frequency contributes little. When the symbol rate in the system is high, the system requirements can be met by using multiple sets of decoders in parallel, which in turn requires a scheduling algorithm for the input and output of those decoders.
A Turbo decoder in particular is usually scheduled by a loop-iteration feedback method: the multiple sets of decoders are polled, their idle or busy states are recorded, and a decoder is then chosen according to the polling result. This scheduling method has a long processing time and limited throughput. When the system demands high throughput, the data processing time is strictly constrained, and the existing loop-iteration feedback method cannot meet that demand.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a method and an apparatus for scheduling multiple sets of Turbo decoders, in which a decoder scheduling apparatus handles the scheduling of the multiple sets of decoders in parallel, saving data processing time and meeting the system's high-throughput requirements.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a method for dispatching a plurality of sets of Turbo decoders comprises: presetting and managing an idle decoder queue and an idle ID queue, detecting the state of an input buffer area, starting an idle decoder when the input buffer area contains data to be decoded, and deleting the ID number of that decoder from the idle ID queue;
the decoded data is buffered in the output buffer and read.
Further, a pointer A for indicating the number of idle decoders is preset; the managing a free decoder queue comprises:
when pointer A is at the end of the free decoder queue, indicating that all decoders are free;
when a decoder is started, the pointer A moves forward one bit in the idle decoder queue, and the number of the idle decoders is reduced by one;
when the decoded data is read, pointer a is shifted backward by one bit in the idle decoder queue, and the number of idle decoders is increased by one.
Further, the managing the idle decoder queue further comprises: detecting whether an ID in the idle ID queue is read;
when an ID is detected to be read, which indicates that the decoder corresponding to the ID is started, the pointer A moves forward one bit in the idle decoder queue, and the number of the idle decoders is reduced by one.
Further, the method for managing the idle decoder queue further comprises: detecting whether the decoded data is cached in an output buffer area;
when detecting that the decoded data is buffered in the output buffer area, the pointer A moves backwards by one bit in the idle decoder queue, and the number of the idle decoders is increased by one.
Further, the method for managing the idle decoder queue further comprises:
and when the ID in the idle ID queue is detected to be read and the decoded data is detected to be cached in an output buffer area, the number of idle decoders is unchanged.
Further, the method for managing the idle decoder queue further comprises:
when the pointer A is positioned at the head end of the idle decoder queue, the number of the idle decoders is zero, and the idle decoders are all busy and are not allowed to write data.
The method also comprises the steps of presetting a pointer C for representing the number of the effective idle IDs and a pointer B for representing the head position of the idle ID queue;
the managing the idle ID queue includes:
the pointer C is positioned at the tail end of the effective idle ID queue and represents the number of idle IDs which can be effectively used; pointer B is always at the head of the free ID queue and when the decoder needs to assign an ID, the free ID is taken from the position indicated by pointer B.
Further, the method for managing the idle ID queue further comprises:
detecting whether the idle ID is read;
when the idle ID is detected to be read, selecting the idle ID at the position of the pointer B, recording the ID, moving the pointer C forward by one bit, and adding an invalid ID mark at the original position of the pointer C;
and when the idle ID is not detected to be read, the idle ID queue is not changed.
Further, the method for managing the idle ID queue further comprises:
detecting whether the decoded data is read or not;
when detecting that the decoded data is read, moving the pointer C backward by one bit to indicate that the ID indicated by the original position of the pointer C is idle;
and when the decoded data is not detected to be read, the idle ID queue is not changed.
Further, the method for managing the idle ID queue further comprises:
when an invalid ID flag is added at the pointer B, all IDs are occupied.
A device for scheduling a plurality of sets of decoders comprises an input scheduling unit, an input buffer area, a decoder resource parallel processing unit, an output buffer area and an output scheduling unit; wherein,
the input scheduling unit is used for setting and managing an idle decoder and an idle ID queue, distributing ID for the decoder and detecting whether an input buffer area is occupied;
the input buffer area is used for caching data to be decoded; the input buffer area caches only one transmission block (TB) of data at a time, and caches the next data to be decoded after the cached data has been read;
the decoder resource parallel processing unit is used for realizing the parallel decoding processing of the multiple sets of decoders when the multiple sets of decoders all occupy the ID;
the output buffer area is used for caching the decoded data, caching only one TB data at a time, and caching the next decoded data after the cached data is read;
and the output scheduling unit is used for reading the decoded data and informing the input scheduling unit to adjust the idle decoder and the idle ID queue.
Furthermore, the input scheduling unit comprises an idle decoder number register and an idle ID queue register; wherein,
the idle decoder number register is used for managing the number of idle decoders, and when one decoder is started or released, the pointer A used for indicating the number of idle decoders moves forward or backward by one bit, and the count of the idle decoder register is decreased by one or increased by one; when the idle decoder register count is zero, the multiple sets of decoders are all in working states.
The idle ID queue register is used for managing idle IDs in the idle ID queue, the pointer B is always positioned at the head end of the idle ID queue, and when the decoder needs to distribute the IDs, the ID at the position of the pointer B is taken; the position indicated by the pointer C represents the number of valid idle IDs, when one valid ID is read, the pointer C moves forward one bit in sequence, an invalid ID mark is added to the original position, and when an invalid ID mark is added to the pointer B, all the IDs are occupied.
It can be seen from the scheme provided by the present invention that, in a communication system using multiple sets of Turbo decoders, an input scheduling unit uses an idle identification (ID) queue to allocate data to an idle decoder in time, and an output scheduling unit reads the decoded data in time through a polling mechanism, so that parallel processing by multiple sets of decoders is realized and processing efficiency is improved. Only one transmission block (TB) of data is cached in the input buffer area and in the output buffer area at a time, occupying little storage space and further improving the processing efficiency of the system. Each decoder determines whether to access the input buffer area according to whether an ID has been allocated to it, so the input scheduling unit does not need to poll the idle/busy state of each decoder; control is simple, the number of decoders is not limited, and portability is strong. Moreover, the idle decoder queue and the idle ID queue are managed through pointer positions: when the decoded data are read by the output scheduling unit, the ID is released. The decoders have no priority ordering, which improves data processing efficiency.
Drawings
FIG. 1 is a flow chart of a method for scheduling sets of decoders in accordance with the present invention;
FIG. 2 is a schematic diagram of an apparatus for scheduling multiple sets of decoders according to the present invention;
FIG. 3 is a flowchart of a method for counting the number of idle decoders by the input scheduling unit according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for managing a free ID queue by an input scheduling unit according to an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for scheduling multiple sets of decoders in the present invention, as shown in fig. 1, including:
step 100: and presetting and managing an idle decoder register and an idle ID register in real time, and detecting the state of an input buffer zone.
Step 101: and allocating an ID to the idle decoder and starting the idle decoder.
In this step, each decoder corresponds to an ID, and when an idle decoder needs to assign an ID to operate, the idle decoder is assigned a corresponding ID.
When the data to be decoded in the input buffer area is ready and the idle ID queue is not empty, the decoder at the head of the queue is started, the ID allocated to it is cleared from the idle ID queue, and the number of idle decoders is decreased by 1; when the number of idle decoders is 0, all decoders are in a working state.
Step 102: and the decoder assigned to the ID performs decoding work, and new data to be decoded waits for the assigned decoder to perform decoding.
In this step, a plurality of sets of decoders are processed in parallel, the decoders are not divided into priority levels, and the decoders assigned with the IDs can work effectively.
Step 103: and polling whether the plurality of sets of decoders finish decoding or not.
In this step, for data whose decoding is complete, go to step 104; decoders that have not finished decoding continue to be polled.
Step 104: and storing the decoded data which is decoded completely in an output buffer area to wait for being read.
In this step, when the output buffer is empty, the decoded data after decoding is stored in the output buffer; when the output buffer area is occupied, the decoded data after decoding is stored in the output buffer area after the data of the output buffer area is read.
Step 105: the decoded data is read, and the number of idle decoders and the ID queue are adjusted.
In this step, the decoder that finishes decoding releases the ID it occupies; this ID is sent to the idle ID queue to wait for the next allocation, and the count of the idle decoder register is increased by 1. The next round of decoder scheduling then proceeds.
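The flow of steps 100 to 105 can be sketched in software as follows. This is an illustrative model only — the patent describes hardware registers and pulse signals — and all names (`TurboDecoderScheduler`, `start_decoding`, `finish_decoding`) are hypothetical:

```python
from collections import deque

class TurboDecoderScheduler:
    """Illustrative software model of the Fig. 1 scheduling flow."""

    def __init__(self, num_decoders):
        # Step 100: after reset, every decoder ID is in the idle ID queue.
        self.idle_ids = deque(range(num_decoders))
        self.busy = {}  # decoder ID -> data currently being decoded

    def start_decoding(self, data):
        # Steps 101-102: start a decoder only if an idle ID exists;
        # otherwise the data waits in the input buffer.
        if not self.idle_ids:
            return None  # all decoders busy
        dec_id = self.idle_ids.popleft()  # take ID at the queue head
        self.busy[dec_id] = data
        return dec_id

    def finish_decoding(self, dec_id):
        # Steps 104-105: reading the decoded data releases the ID,
        # which is queued again for the next allocation.
        result = self.busy.pop(dec_id)
        self.idle_ids.append(dec_id)
        return result
```

With two decoders, a third transmission block is refused until one of the first two finishes, mirroring the "all decoders in working state" condition of step 101.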
Fig. 2 is a schematic diagram of an apparatus for scheduling multiple sets of decoders in the present invention, as shown in fig. 2, including:
and the input scheduling unit is used for setting and managing the idle decoder and the idle ID queue and detecting whether the input buffer area is occupied. The input scheduling unit allocates ID to the decoder, and the decoder allocated with the ID enters a working state; wherein,
the input scheduling unit at least comprises an idle decoder number register and an idle ID queue register:
the number register of the idle decoder is used for managing the number of the idle decoders, when one decoder is started or released, the pointer A used for indicating the number of the idle decoders moves forward or backward by one bit from the current position, and correspondingly, the count of the number register of the idle decoders is subtracted by 1 or added by 1; when the idle decoder register count is 0, the multiple sets of decoders are all in working state.
The idle ID queue register is used for managing idle IDs in the idle ID queue, the pointer B is always positioned at the head end of the idle ID queue, and when the decoder needs to distribute the IDs, the ID at the position of the pointer B is used; the position indicated by the pointer C represents the number of valid idle IDs, when each valid ID is read, the pointer C sequentially moves forward by one bit, an invalid ID mark is added to the original position, and when an invalid ID mark is added to the pointer B, all the IDs are occupied.
The input buffer area is used for caching data to be decoded which are to enter the parallel processing unit of the decoder, the input buffer area only caches one Transmission Block (TB) data at a time, the data to be cached is read and then the next data to be decoded is cached, and the occupied storage space is small.
The decoder resource parallel processing unit is used for realizing parallel processing by the multiple sets of decoders when the decoders occupy IDs. The decoders have no priority ordering and are treated equally. For example, suppose decoder A first obtains long-packet data and decoder B later obtains short-packet data; because the packet lengths differ, decoder B finishes decoding earlier than decoder A. Decoder B can then immediately be granted new data to decode and start a new round of decoding without waiting for decoder A to finish, which improves data processing efficiency.
When the data to be decoded in the input buffer area is ready and the idle ID queue is not empty, the decoder at the head of the queue is started, the ID it occupies is cleared from the idle ID queue, and the number of idle decoders is decreased by 1; when the number of idle decoders is 0, all decoders are in a working state.
And the output buffer area is used for buffering the decoded data and reading the data by the scheduling unit to be output. The output buffer zone only buffers one TB data at a time, and buffers the next decoding completion data after the buffered data is read, so that the occupied storage space is small.
The output scheduling unit is used for reading the decoded data and adjusting the number of idle decoders and the ID queue. Through a polling mechanism it detects the decoding-complete flag of each decoder in the decoder parallel processing unit and the state of the output buffer area; the decoding-complete flag marks a decoder that has finished decoding, and is generated when a decoder in the parallel processing unit completes its work. When the decoding-complete flag of a decoder is detected and the output buffer area holds the decoded data, the data are read and the output buffer area is emptied. The decoder that finished decoding releases the ID it occupied; the ID is sent to the idle ID queue of the input scheduling unit to wait for the next allocation, and the count of the idle decoder register is increased by 1.
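The output scheduling unit's polling of decoding-complete flags can be sketched as below. The function name and the dict-shaped output buffer are illustrative assumptions (the patent's output buffer holds a single TB in hardware):

```python
def poll_output(done_flags, output_buffer):
    """Scan each decoder's decoding-complete flag; when a flag is set and
    the output buffer holds that decoder's result, read the result, clear
    the flag, and return the ID so it can be released to the idle ID queue."""
    for dec_id, done in enumerate(done_flags):
        if done and dec_id in output_buffer:
            result = output_buffer.pop(dec_id)  # empty the buffer slot
            done_flags[dec_id] = False          # clear the complete flag
            return dec_id, result
    return None  # nothing finished in this polling round
```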
Fig. 3 is a flowchart of a method for counting the number of idle decoders by the input scheduling unit in an embodiment of the present invention. The method maintains the position of pointer A by detecting two pulse signals: first, whether an ID in the idle ID queue has been read; second, whether decoded data have been cached in the output buffer area and read. The position of pointer A is maintained according to the detection results of these two pulse signals to determine the number of idle decoders. As shown in Fig. 3, the method includes:
step 300: the pointer a for indicating the number of idle decoders is reset and the input scheduling unit is ready to manage the number of idle decoders.
Step 301: pointer a is located at the end of the idle decoder queue, indicating that the number of idle decoders is M.
In this step, it is assumed that there are M decoders in the idle decoder register, and when the pointer a is located at the end of the idle decoder queue, it indicates that all decoders are idle, that is, the number of idle decoders is M, and at this time, data is allowed to be written.
Step 302: it is determined whether a free decoder is enabled to maintain the location of pointer a.
In this step, whether an ID in the idle ID queue has been read is detected. If an ID has been read from the ID queue, a pulse signal is generated, indicating that the ID has been allocated to an idle decoder, i.e., an idle decoder is started; if no ID has been read from the ID queue, the ID queue and the number of idle decoders are unchanged. The determination of step 303 is made in parallel.
Step 302 a: according to the result of the determination in step 302, if there is an ID read from the ID queue, the pointer a is moved forward by one bit, indicating that the number of idle decoders is decreased by 1. Thereafter, step 304 is entered.
Step 302 b: according to the result of the determination in step 302, if no ID is read from the ID queue, the pointer position a is unchanged, which means that the number of idle decoders is unchanged, and the process is ended.
Step 303: it is determined whether any decoded data has been read to maintain the position of pointer a.
In this step, whether the decoded data have been read is detected. When it is detected that a decoder's decoded data, cached in the output buffer area, have been read, another pulse signal is generated, indicating that the decoder has completed its decoding work and become idle; if no such read is detected, the position of pointer A and the number of idle decoders are unchanged.
Step 303 a: according to the result of the determination in step 303, if it is detected that the decoded data is read, the pointer a is moved backward by one bit, which indicates that the number of idle decoders is increased by 1. Thereafter, step 304 is entered.
Step 303 b: according to the result of the determination in step 303, if it is not detected that the decoded data is read, the position of the pointer a is not changed, and the process is ended.
In one embodiment of the method, it is possible to detect simultaneously that an ID has been read from the idle ID queue and that decoded data have been read; in that case the number of idle decoders is unchanged.
Step 304: it is determined whether the decoder is fully busy.
In this step, after the position of pointer A has been maintained and updated by detecting the two pulse signals, whether the decoders are fully busy is judged according to the position of pointer A: when pointer A is located at the head of the idle decoder queue, the number of idle decoders is 0 and the decoders are fully busy.
Step 304 a: according to the judgment result in step 304, when the decoder is fully busy, the data writing is not accepted, and the procedure returns to step 302.
Step 304 b: if the decoder is not fully busy as a result of the determination in step 304, it indicates that there is an idle decoder and data writing is acceptable, and the process returns to step 302.
Fig. 4 is a flowchart of a method for managing the idle ID queue by the input scheduling unit in an embodiment of the present invention. The method maintains the position of pointer C by detecting two pulse signals: first, whether an ID in the idle ID queue has been read; second, whether decoded data have been read. The position of pointer C is maintained according to the detection results of these two pulse signals to form the idle ID queue. As shown in Fig. 4, the method includes:
step 400: the pointer is reset and the input scheduling unit is ready to manage the idle ID queue.
Step 401: the pointer B is always positioned at the head end of the idle ID queue in the idle ID register, and the pointer C is positioned at the tail end of the effective idle ID queue and represents the number of idle IDs which can be effectively used;
in this step, it is assumed that there are M decoders in the idle ID register, and when the decoders need to allocate IDs, the idle ID is taken from the head of the idle ID queue, i.e., the position indicated by the pointer B.
Step 402: it is detected whether a free ID is read.
Step 402 a: according to the judgment result in step 402, if it is detected that a free ID is read, the free ID at the position of the pointer B is selected, and the ID number is recorded to indicate that the ID is occupied. Step 404 is entered.
Step 402 b: according to the result of the determination in step 402, if no free ID is detected to be read, the free ID queue in the free ID register is unchanged. The flow is ended.
Step 403: whether the decoded data is read or not is detected to maintain the position of the pointer C.
Step 403 a: according to the judgment result of step 403, if it is detected that the decoded data is read, the pointer C is moved backward by one bit, indicating that the ID indicated by the home position of the pointer C is free. Returning to step 402.
Step 403 b: according to the judgment result of step 403, if it is not detected that the decoded data is read, the free ID queue in the free ID register is not changed. The flow is ended.
Step 404: and sequentially moving the sequence in the register forward by one bit, moving the pointer C forward by one bit, adding an invalid ID mark to the original position of the pointer C, indicating that all the IDs are occupied when the invalid ID mark is added to the pointer B, and returning to the step 402 when all the IDs are occupied.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. that are within the spirit and principle of the present invention should be included in the present invention.

Claims (12)

1. A method for dispatching a plurality of sets of Turbo decoders is characterized in that an idle decoder queue and an idle ID queue are preset and managed; further comprising:
detecting the state of an input buffer area, starting an idle decoder when the input buffer area has data to be decoded, and deleting the ID number of the decoder from an idle ID queue;
the decoded data is buffered in the output buffer and read.
2. The method of claim 1, wherein a pointer a for indicating the number of idle decoders is preset; the managing a free decoder queue comprises:
when pointer A is at the end of the free decoder queue, indicating that all decoders are free;
when a decoder is started, the pointer A moves forward one bit in the idle decoder queue, and the number of the idle decoders is reduced by one;
when the decoded data is read, pointer a is shifted backward by one bit in the idle decoder queue, and the number of idle decoders is increased by one.
3. The method of claim 2, wherein managing the free decoder queue further comprises: detecting whether an ID in the idle ID queue is read;
when an ID is detected to be read, which indicates that the decoder corresponding to the ID is started, the pointer A moves forward one bit in the idle decoder queue, and the number of the idle decoders is reduced by one.
4. The method of claim 2, wherein the method of managing a free decoder queue further comprises: detecting whether the decoded data is cached in an output buffer area;
when detecting that the decoded data is buffered in the output buffer area, the pointer A moves backwards by one bit in the idle decoder queue, and the number of the idle decoders is increased by one.
5. The method of any of claims 2 to 4, wherein the method of managing a free decoder queue further comprises:
and when the ID in the idle ID queue is detected to be read and the decoded data is detected to be cached in an output buffer area, the number of idle decoders is unchanged.
6. The method of claim 5, wherein the method of managing a free decoder queue further comprises:
when the pointer A is positioned at the head end of the idle decoder queue, the number of the idle decoders is zero, and the idle decoders are all busy and are not allowed to write data.
7. The method according to claim 1, characterized in that a pointer C for indicating the number of valid free IDs and a pointer B for indicating the head position of a free ID queue are preset;
the managing the idle ID queue includes:
the pointer C is positioned at the tail end of the effective idle ID queue and represents the number of idle IDs which can be effectively used; pointer B is always at the head of the free ID queue and when the decoder needs to assign an ID, the free ID is taken from the position indicated by pointer B.
8. The method of claim 7, wherein the method of managing a free ID queue further comprises:
detecting whether the idle ID is read;
when the idle ID is detected to be read, selecting the idle ID at the position of the pointer B, recording the ID, moving the pointer C forward by one bit, and adding an invalid ID mark at the original position of the pointer C;
and when the idle ID is not detected to be read, the idle ID queue is not changed.
9. The method of claim 7, wherein the method of managing a free ID queue further comprises:
detecting whether the decoded data is read or not;
when detecting that the decoded data is read, moving the pointer C backward by one bit to indicate that the ID indicated by the original position of the pointer C is idle;
and when the decoded data is not detected to be read, the idle ID queue is not changed.
10. The method of claim 8, wherein the method of managing a free ID queue further comprises:
when an invalid ID flag is added at the pointer B, all IDs are occupied.
11. A device for scheduling a plurality of sets of decoders, characterized by comprising an input scheduling unit, an input buffer area, a decoder resource parallel processing unit, an output buffer area and an output scheduling unit; wherein,
the input scheduling unit is used for setting and managing an idle decoder and an idle ID queue, distributing ID for the decoder and detecting whether an input buffer area is occupied;
the input buffer area is used for caching data to be decoded; the input buffer area caches only one transmission block TB of data at a time, and caches the next data to be decoded after the cached data has been read;
the decoder resource parallel processing unit is used for realizing the parallel decoding processing of the multiple sets of decoders when the multiple sets of decoders all occupy the ID;
the output buffer area is used for caching the decoded data, caching only one TB data at a time, and caching the next decoded data after the cached data is read;
and the output scheduling unit is used for reading the decoded data and informing the input scheduling unit to adjust the idle decoder and the idle ID queue.
12. The apparatus of claim 11, wherein the input dispatch unit includes a free decoder number register and a free ID queue register; wherein,
the idle decoder number register is used for managing the number of idle decoders, and when one decoder is started or released, the pointer A used for indicating the number of idle decoders moves forward or backward by one bit, and the count of the idle decoder register is decreased by one or increased by one; when the idle decoder register count is zero, the multiple sets of decoders are all in working states.
The idle ID queue register is used for managing idle IDs in the idle ID queue, the pointer B is always positioned at the head end of the idle ID queue, and when the decoder needs to distribute the IDs, the ID at the position of the pointer B is taken; the position indicated by the pointer C represents the number of valid idle IDs, when one valid ID is read, the pointer C moves forward one bit in sequence, an invalid ID mark is added to the original position, and when an invalid ID mark is added to the pointer B, all the IDs are occupied.
CN2011100513776A 2011-03-03 2011-03-03 Method and device for scheduling multiple sets of Turbo decoders Pending CN102655440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100513776A CN102655440A (en) 2011-03-03 2011-03-03 Method and device for scheduling multiple sets of Turbo decoders

Publications (1)

Publication Number Publication Date
CN102655440A true CN102655440A (en) 2012-09-05

Family

ID=46730971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100513776A Pending CN102655440A (en) 2011-03-03 2011-03-03 Method and device for scheduling multiple sets of Turbo decoders

Country Status (1)

Country Link
CN (1) CN102655440A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813759A (en) * 2020-07-13 2020-10-23 Beijing Jiuwei Shuan Technology Co., Ltd. Packet data parallel processing device and method
CN111833232A (en) * 2019-04-18 2020-10-27 Hangzhou Hikvision Digital Technology Co., Ltd. Image processing device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1360427A (en) * 2000-12-19 2002-07-24 ZTE Corporation Two-stage optimizing method for selecting user circuit
CN1464714A (en) * 2002-06-28 2003-12-31 Huawei Technologies Co., Ltd. Method for improving data processing capability of remote user dialing authentication protocol
CN1556654A (en) * 2003-12-31 2004-12-22 ZTE Corporation Timing and control method of software timer
CN1713612A (en) * 2004-06-25 2005-12-28 ZTE Corporation Data packet storing method by pointer technology
CN101084488A (en) * 2004-09-14 2007-12-05 CoWare, Inc. Debug in a multicore architecture
CN101141225A (en) * 2006-09-08 2008-03-12 ZTE Corporation Data loss processing method in mobile communication system
CN101355401A (en) * 2007-07-23 2009-01-28 ZTE Corporation Method and apparatus for decoding turbo code
CN101453296A (en) * 2007-11-29 2009-06-10 ZTE Corporation Waiting queue control method and apparatus for convolutional Turbo code decoder
CN101674502A (en) * 2009-09-28 2010-03-17 Hangzhou Dianzi University Method for reorganizing data frames in GPON system

Similar Documents

Publication Publication Date Title
US20230004329A1 (en) Managed fetching and execution of commands from submission queues
US9015451B2 (en) Processor including a cache and a scratch pad memory and memory control method thereof
CN101567849B (en) Data buffer caching method and device
US9058208B2 (en) Method of scheduling tasks for memories and memory system thereof
US8918595B2 (en) Enforcing system intentions during memory scheduling
CN103631624A (en) Method and device for processing read-write request
US8954652B2 (en) Method and controller for identifying a unit in a solid state memory device for writing data to
US9223373B2 (en) Power arbitration for storage devices
CN102934076A (en) Instruction issue and control device and method
KR20010066933A (en) A method for determining whether to issue a command from a disk controller to a disk drive, a disk controller and a memory media that stores a program
CN103403681A (en) Descriptor scheduler
CN109992205B (en) Data storage device, method and readable storage medium
JP2011059777A (en) Task scheduling method and multi-core system
CN110716691B (en) Scheduling method and device, flash memory device and system
CN101877666B (en) Method and device for receiving multi-application program message based on zero copy mode
CN106874081B (en) Queuing decode tasks according to priority in NAND flash controllers
CN102402401A (en) Method for scheduling input output (IO) request queue of disk
CN104717160A (en) Interchanger and scheduling algorithm
US20190095213A1 (en) Enhanced performance-aware instruction scheduling
CN108021516B (en) Command scheduling management system and method for parallel storage medium storage controller
CN114500401B (en) Resource scheduling method and system for coping with burst traffic
CN102655440A (en) Method and device for scheduling multiple sets of Turbo decoders
US20090265526A1 (en) Memory Allocation and Access Method and Device Using the Same
CN112181887A (en) Data transmission method and device
WO2007099659A1 (en) Data transmitting device and data transmitting method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20151008

Address after: Building No. 1, Dameisha, Yantian District, Shenzhen City, Guangdong Province 518085

Applicant after: SHENZHEN ZTE MICROELECTRONICS TECHNOLOGY CO., LTD.

Address before: Legal Affairs Department, ZTE Building, Keji South Road, Hi-tech Industrial Park, Nanshan District, Shenzhen, Guangdong 518057

Applicant before: ZTE Corporation

C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120905