CN100592640C - Decoder for LDPC code based on pipeline operation mode - Google Patents

Decoder for LDPC code based on pipeline operation mode

Info

Publication number
CN100592640C
CN100592640C CN200710092476A
Authority
CN
China
Prior art keywords
dual-port RAM
VNU
checkpoint
CNU
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200710092476A
Other languages
Chinese (zh)
Other versions
CN101093999A (en)
Inventor
王琳
谢东福
徐位凯
范雷
张建文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN200710092476A priority Critical patent/CN100592640C/en
Publication of CN101093999A publication Critical patent/CN101093999A/en
Application granted granted Critical
Publication of CN100592640C publication Critical patent/CN100592640C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

Using a pipelined mode of operation, the LDPC decoder accepts a modest increase in computational work and RAM capacity in order to keep the variable-point units (VNU) and the check-point unit (CNU) operating concurrently. The VNUs supply the variable-point information the CNU needs for its computation, while a series of dual-port RAM arrays buffers the CNU output. The front part of each dual-port RAM array stores the information needed by the current iteration; the back part stores the information needed by the next iteration. Within the time sequence of one iteration there is enough time to resolve conflicts, so no blocking occurs. At the cost of a small amount of extra resources, the invention effectively raises the decoder's speed, and the decoder structure is suitable for a matrix of any type.

Description

LDPC code decoder based on pipelined operation
Technical field
The present invention relates to the field of electronic technology, in particular to communication data transmission and data storage, and specifically to an LDPC code decoder structure.
Background technology
In VLSI design, resources and speed are a perpetually irreconcilable pair, and architectural design is largely the problem of balancing them. For an LDPC code decoder, what must be balanced is not only resources and speed but also bit-error-rate (BER) performance. In other words, the design of an LDPC decoder must strike a balance among resources, speed and BER performance as a whole.
BER performance is determined mainly by two factors. The first is the decoding algorithm the decoder adopts: using the min-sum algorithm, for example, is certain to lose some performance relative to the BP algorithm, but in many cases a portion of performance must be sacrificed in exchange for lower resource occupation. The second is the data format the decoder uses. Computer simulations work with single- or double-precision floating-point numbers, whereas an actual hardware implementation must approximate them with binary words of a certain length; the longer the word, the better the precision, and the closer the decoder's real performance comes to the software-simulation performance. Where the performance requirement is extremely strict, the IEEE 754 standard may even be adopted for the VLSI data format. To a certain extent, the design of BER performance can therefore be regarded as the design of the internal structure of the check-point and variable-point computing modules.
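To illustrate the algorithm/precision trade-off just described, the following Python sketch shows a min-sum check-point update operating on uniformly quantized messages. The word length (6 bits) and step size (0.25) are arbitrary choices for the example, not values from the patent; the real hardware data format would be fixed by the design.

```python
def quantize(x, n_bits=6, step=0.25):
    # Saturating uniform quantizer: approximate a floating-point message
    # by a two's-complement word of n_bits with resolution `step`.
    max_level = (1 << (n_bits - 1)) - 1
    q = max(-max_level, min(max_level, round(x / step)))
    return q * step

def min_sum_check_update(incoming):
    """Min-sum CNU kernel: each outgoing message is the product of the signs
    of all *other* incoming messages times their minimum magnitude."""
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out

msgs = [quantize(v) for v in (0.9, -2.3, 1.4, -0.6)]
print(min_sum_check_update(msgs))  # [0.5, -0.5, 0.5, -1.0]
```

Shortening the quantizer word makes this kernel cheaper in hardware but moves its results further from the floating-point BP reference, which is exactly the trade-off discussed above.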
Before designing the overall decoder structure, it must be made clear that for an LDPC decoder the BER performance is determined mainly by the decoding algorithm, while speed and resource occupation are determined mainly by the decoder's structure. Whatever the decoding algorithm, the LDPC decoding process consists chiefly of channel-information initialization, check-point computation, variable-point computation, hard decision and output decision. The fully serial decoding structure was developed on the basis of this process; its main feature is having only one check-point computing unit (CNU) and one variable-point computing unit (VNU), with a RAM array buffering all computation results between the two. This structure is simple, but its computation speed is slow.
The main feature of the fully parallel decoding structure is that it contains m check-point computing units and n variable-point computing units and basically needs no storage of data during the iterative computation. Its main advantage is very high computation speed; its main drawback is that the large number of computing modules occupies too many resources. Furthermore, because the number of interconnect lines is huge, routing may fail when the frame is long, since too many data wires must be connected. For example, a fully parallel LDPC decoder with n=20 once completed by this research group occupied 5334 slices on a Xilinx Virtex-II 3000 after place-and-route, 37% of all slices (14336). Scaling proportionally, at n=1000 a fully parallel LDPC decoder would occupy more than 250,000 slices, which no FPGA can accommodate, and the DVB-S2 frame length reaches 60,000.
The most commonly used LDPC decoder at present is the part-parallel decoding structure. As shown in Fig. 1, the part-parallel structure buffers data between the check-point computing modules and the variable-point computing modules through a dual-port RAM array; the numbers of CNUs and VNUs needed are 1/f of the numbers of check points and variable points respectively, where the folding factor f is a positive integer in [2, M-1] reflecting the degree of multiplexing. The information computed by each CNU is deposited into the dual-port RAM array, and only after the computing units on one side (e.g. the CNUs) have completely finished can the units on the other side (e.g. the VNUs) compute with the updated information. With this structure, resource consumption falls to 1/f of the original, but its drawback is that decoding speed also falls to 1/f, and a certain number of dual-port RAMs is required.
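The folding-factor arithmetic above can be sketched as follows. This is illustrative only; it assumes f divides both M and N evenly, which the text does not require.

```python
def part_parallel_plan(m_checks, n_vars, f):
    """Unit counts for a part-parallel LDPC decoder with folding factor f.
    Per the section above, both resources and throughput scale by ~1/f."""
    assert 2 <= f <= m_checks - 1, "f must be a positive integer in [2, M-1]"
    assert m_checks % f == 0 and n_vars % f == 0, "sketch assumes f | M and f | N"
    return {"CNUs": m_checks // f, "VNUs": n_vars // f}

print(part_parallel_plan(m_checks=1000, n_vars=2000, f=10))
# {'CNUs': 100, 'VNUs': 200}
```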
However, the part-parallel decoding structure works only for specific matrices, and the performance of such matrices is often not very good. Moreover, even with a matrix constructed under particular constraints, blocking caused by conflicts in the information exchange between check points and variable points is unavoidable. Existing part-parallel structures can only resolve data conflicts by increasing storage at the cost of heavy resource consumption, or by halting decoding; they cannot solve the data-conflict problem fundamentally. In fact, we find that if the timing is planned reasonably by analyzing the matrix structure, aided by a moderate amount of storage, data conflicts can be avoided during decoding.
Summary of the invention
To overcome the above defects of existing LDPC decoder structures, the technical problem to be solved by the present invention is to provide an LDPC code decoder that, through reasonable timing planning aided by a moderate amount of storage, avoids data conflicts during decoding.
The technical scheme of the present invention is an LDPC code decoder that computes variable-point and check-point information in a pipelined manner. The decoder comprises two parts: one part, formed by variable-point computing units (VNU) and a check-point computing unit (CNU) working as a pipeline, computes the variable-point and check-point information; the other part, formed by RAM arrays, stores the information generated by the current and the previous iteration.
The decoder comprises one check-point computing unit CNU, a series of variable-point computing units VNU and a series of dual-port RAM arrays. Each variable-point computing unit is cascaded with a dual-port RAM array; the outputs of the VNUs connect to the input of the CNU and supply the variable-point information the CNU needs; the CNU's results are buffered by the series of dual-port RAM arrays. The number of variable-point computing units VNU and the number of dual-port RAM arrays are determined by the check-point degree d_c; each dual-port RAM array consists of a series of physically independent dual-port RAMs whose number is determined by the variable-point degree d_v. Each dual-port RAM array is divided into two parts: the front part stores the information needed by the current iteration, and the back part stores the information needed by the next iteration (i.e. the information the CNU is currently generating).
Each dual-port RAM array is cascaded with its corresponding VNU, and the VNUs supply the variable-point information the CNU's computation needs. To let the CNU compute without blocking in pipelined fashion, d_c VNUs are used in total, so that the CNU can obtain all the variable-point information it needs simultaneously; this guarantees the CNU's non-blocking pipelined computation. Viewed as a whole, the CNU and the VNUs form a pipelined mode of operation.
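The non-blocking property just described can be modelled behaviourally: with d_c VNU lanes firing together, the CNU receives a complete input set at every step and never waits. The sketch below is a software illustration only (the patent describes hardware), and its CNU kernel is a placeholder sum, not a real check-point update.

```python
def run_pipeline(check_to_vars, var_messages, d_c):
    """One pipelined iteration of the 1-CNU / d_c-VNU structure: at each
    step (one check point) every VNU lane delivers one variable message,
    so the CNU input is always complete and the pipeline never stalls."""
    results = []
    for vars_of_c in check_to_vars:              # CNU processes checks in order
        lane_outputs = [var_messages[v] for v in vars_of_c]  # all lanes fire at once
        assert len(lane_outputs) <= d_c          # d_c lanes suffice for every check
        results.append(sum(lane_outputs))        # placeholder CNU kernel
    return results

msgs = {1: 0.5, 2: -1.0, 3: 2.0, 4: 0.25}
checks = [[1, 2, 3], [2, 3, 4], [1, 4]]
print(run_pipeline(checks, msgs, d_c=3))  # [1.5, 1.25, 0.75]
```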
Because the VNU and the CNU occupy symmetric positions in this decoder structure, the decoder may also adopt a structure with d_v CNUs and one VNU. That decoder comprises: a series of check-point computing units, each cascaded with its corresponding dual-port RAM array; the outputs of the CNUs connect to the input of the VNU; the VNU's output is buffered by the dual-port RAM arrays; the number of CNUs and the number of dual-port RAM arrays are determined by the variable-point degree; and each dual-port RAM array consists of a series of independent dual-port RAMs whose number is determined by the check-point degree. The degree of a check point is the maximum number of variable points associated with it, the degree of a variable point is the maximum number of check points associated with it, and these counts are given by the number of nonzero elements in the corresponding row or column of the matrix.
Viewed as a whole, the CNU and the VNUs form a pipelined mode of operation. For the current iteration, the information generated at the check points is written back into the dual-port RAMs in readiness for the next iteration; because this write-back is a one-to-many process, the severe data-conflict problem faced by the part-parallel decoding structure is thereby solved. Optionally, to raise data throughput in high-throughput applications, several decoding units may work simultaneously. Which decoder structure to adopt is decided according to the resources occupied by the CNU and the VNU.
At the cost of extra variable-point computations and storage, the operating speed of this structure is at least d_c times that of the serial decoding structure, and it effectively solves the conflicts in iterative information exchange that the randomness of the LDPC matrix brings to actual hardware implementation. At the same time, because only one CNU with several VNUs (or one VNU with several CNUs) is needed, the structure has high practical value in fields where the resource occupation of CNU and VNU differs greatly, such as m-ary LDPC codes.
Description of drawings
Fig. 1 shows the part-parallel decoder structure.
Fig. 2 shows the pipelined decoder structure with 1 CNU and d_c VNUs.
Embodiment
The implementation of the present invention is described in detail below with reference to the drawings and specific embodiments.
Suppose the check-point degree is d_c and the variable-point degree is d_v; the check-point computing unit is denoted CNU and the variable-point computing unit VNU. Fig. 2 shows the pipelined decoder structure with 1 CNU and d_c VNUs. Each VNU is cascaded with a dual-port RAM array; the outputs of the VNUs connect to the input of the CNU, and the CNU's results are buffered through d_c dual-port RAM arrays, each consisting of d_v physically independent dual-port RAMs. Each dual-port RAM is divided into two parts: the front part stores the check-point information generated by the CNU in the previous iteration and needed by the current iteration; the back part stores the check-point information generated by the CNU in the current iteration and needed by the next iteration.
To let the CNU compute without blocking in pipelined fashion, the number of VNUs required is determined by the check-point degree d_c; this decoder uses d_c VNUs in total. The VNUs supply the variable-point information the CNU's computation needs, so that all of it can be obtained simultaneously, guaranteeing the CNU's non-blocking pipelined computation. Viewed as a whole, the CNU and the VNUs form a pipelined mode of operation.
Because each check point is associated with at most d_c variable points, guaranteeing the CNU's non-blocking pipelined computation requires that, whenever an update is to be computed, all d_c variable-point messages associated with that check point have already been updated; therefore d_c VNUs should compute simultaneously and pass their results to the CNU immediately. If the CNU processes the check points in their order of appearance in the matrix, then the VNUs should compute the variable-point messages in the order given by the correspondence between check points and variable points.
Because each variable point may be associated with at most d_v check points, each dual-port RAM array consists of d_v physically independent dual-port RAMs. The check-point information required for an iteration's variable-point computations is deposited into the dual-port RAMs in advance, which guarantees that the VNU computations do not block.
For the current iteration, the information generated at the check points is written back into the dual-port RAMs in readiness for the next iteration; because this write-back is a one-to-many process, the severe data-conflict problem faced by the part-parallel decoder structure is successfully solved. After each iteration finishes, the dual-port RAM halves are switched once.
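The iteration-by-iteration switching of the two RAM halves is a classic ping-pong scheme. Below is a minimal software model, assuming one read port for the current iteration and one write port for the next; this is an illustration, not the patent's hardware.

```python
class PingPongRAM:
    """Sketch of one dual-port RAM split in two halves: the 'front' half is
    read by the current iteration, the 'back' half collects the messages
    being generated for the next iteration.  switch() swaps the roles once
    per iteration, so reads and writes never collide."""
    def __init__(self, depth):
        self.banks = [[0.0] * depth, [0.0] * depth]
        self.read_bank = 0
    def read(self, addr):                  # port A: current-iteration reads
        return self.banks[self.read_bank][addr]
    def write(self, addr, value):          # port B: next-iteration writes
        self.banks[1 - self.read_bank][addr] = value
    def switch(self):                      # called once at each iteration boundary
        self.read_bank = 1 - self.read_bank

ram = PingPongRAM(depth=4)
ram.write(2, 1.5)      # deposited for the *next* iteration...
ram.switch()
print(ram.read(2))     # ...visible after the halves swap -> 1.5
```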
Because the VNU and the CNU occupy symmetric positions in this decoder structure, the decoder may also adopt a structure with d_v CNUs and one VNU. That decoder comprises one variable-point computing unit VNU, a series of check-point computing units CNU and a series of dual-port RAM arrays. Each check-point computing unit is cascaded with its corresponding dual-port RAM array; the outputs of the CNUs connect to the input of the VNU, and the VNU's output is buffered by the dual-port RAM arrays. The number of CNUs and of dual-port RAM arrays is determined by the variable-point degree; each array consists of a series of independent dual-port RAMs whose number is determined by the check-point degree. The VNU's results are buffered through d_v (the variable-point degree) dual-port RAM arrays, each consisting of d_c physically independent dual-port RAMs. Each dual-port RAM is divided into two parts: the front part stores the information needed by the current iteration, and the back part stores the information needed by the next iteration (i.e. the information the VNU is currently generating). This structure puts the emphasis on the VNU: the CNUs supply the check-point information the VNU's computation needs, and to let the VNU compute without blocking, d_v CNUs are used in total, so that all the check-point information the VNU needs can be obtained simultaneously, guaranteeing the VNU's non-blocking pipelined computation. Viewed as a whole, the CNUs and the VNU form a pipelined mode of operation. For the current iteration, the information generated at the variable points is deposited into the dual-port RAMs in readiness for the next iteration.
Next, a concrete binary LDPC code is taken as an example to explain the structure of a decoder realizing the present invention. The matrix of the binary LDPC code is a 10*20 matrix H, in which the rows of H represent variable points and the columns represent check points.
[H matrix of the example binary LDPC code: Figure C20071009247600101]
In the matrix, the number of 1s in a row is the number of check points associated with the corresponding variable point, and the number of 1s in a column is the number of variable points associated with the corresponding check point. The maximum number of variable points associated with a check point is the check-point degree, and the maximum number of check points associated with a variable point is the variable-point degree.
Observing matrix H, each check point is associated with at most 6 variable points, and each variable point with at most 3 check points; that is, the maximum check-point degree is d_c = 6 and the maximum variable-point degree is d_v = 3. The decoder structure can therefore use one CNU and six VNUs (or one VNU and three CNUs), with the check points computed in order in the CNU.
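The degree extraction just performed on H can be expressed as a short sketch. The toy matrix below is NOT the patent's 10*20 matrix (which is only available as a figure); it merely demonstrates the computation, using this section's convention that rows represent variable points and columns represent check points.

```python
def node_degrees(H):
    """Max check-point degree d_c and variable-point degree d_v of H,
    with rows = variable points and columns = check points (the section's
    convention): a degree is the nonzero count of the row or column."""
    d_v = max(sum(row) for row in H)            # max check points per variable
    d_c = max(sum(col) for col in zip(*H))      # max variable points per check
    return d_c, d_v

H_toy = [  # toy 6x3 matrix (6 variable points, 3 check points) -- illustrative only
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
print(node_degrees(H_toy))  # (3, 2): d_c = 3, d_v = 2
```

For the patent's example matrix the same computation yields d_c = 6 and d_v = 3, as stated above.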
To compute the 1st check point, the 6 variable-point messages 3, 6, 9, 12, 14, 18 must be available;
To compute the 2nd check point, the 5 variable-point messages 5, 6, 7, 15, 16 must be available;
To compute the 3rd check point, the 6 variable-point messages 2, 7, 8, 10, 14, 20 must be available;
To compute the 4th check point, the 5 variable-point messages 1, 4, 9, 12, 19 must be available;
To compute the 5th check point, the 5 variable-point messages 5, 6, 11, 13, 20 must be available;
To compute the 6th check point, the 6 variable-point messages 1, 10, 11, 16, 17, 19 must be available;
To compute the 7th check point, the 6 variable-point messages 3, 5, 10, 12, 15, 18 must be available;
To compute the 8th check point, the 6 variable-point messages 2, 4, 8, 9, 16, 20 must be available;
To compute the 9th check point, the 5 variable-point messages 1, 2, 7, 17, 18 must be available;
To compute the 10th check point, the 5 variable-point messages 3, 4, 13, 14, 17 must be available;
The 1st VNU computes, in order, the messages of variable points 3, 5, 2, 1, 5, 1, 3, 2, 1, 3;
The 2nd VNU computes, in order, the messages of variable points 6, 6, 7, 4, 6, 10, 5, 4, 2, 4;
The 3rd VNU computes, in order, the messages of variable points 9, 7, 8, 9, 11, 11, 10, 8, 7, 13;
The 4th VNU computes, in order, the messages of variable points 12, 15, 10, 12, 13, 16, 12, 9, 17, 14;
The 5th VNU computes, in order, the messages of variable points 14, 16, 14, 19, 20, 17, 15, 16, 18, 17;
The 6th VNU computes, in order, the messages of variable points 18, *, 20, *, *, 19, 18, 20, *, * (* denotes information to be filled in as determined by the actual algorithm).
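The per-VNU schedules above follow mechanically from the check-point lists: lane k handles the k-th variable point of each check point in order, with * wherever a check point has fewer than d_c = 6 neighbours. A sketch that reproduces the schedules listed above (None standing for *):

```python
check_to_vars = [  # variable points needed by check points 1..10 (from the text)
    [3, 6, 9, 12, 14, 18],
    [5, 6, 7, 15, 16],
    [2, 7, 8, 10, 14, 20],
    [1, 4, 9, 12, 19],
    [5, 6, 11, 13, 20],
    [1, 10, 11, 16, 17, 19],
    [3, 5, 10, 12, 15, 18],
    [2, 4, 8, 9, 16, 20],
    [1, 2, 7, 17, 18],
    [3, 4, 13, 14, 17],
]

def vnu_schedules(check_to_vars, d_c):
    """Lane k of the d_c VNUs computes the k-th variable point of each
    check point in order; None marks the '*' slots whose content is
    determined by the actual algorithm."""
    return [[vs[k] if k < len(vs) else None for vs in check_to_vars]
            for k in range(d_c)]

lanes = vnu_schedules(check_to_vars, d_c=6)
print(lanes[0])  # [3, 5, 2, 1, 5, 1, 3, 2, 1, 3] -- the 1st VNU's schedule above
```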
To compute its variable-point messages, the 1st VNU needs check-point information from the previous iteration: for each variable point in its sequence, the set of associated check points is listed below.
{3,5,2,1,5,1,3,2,1,3}
[Address-mapping figure: Figure C20071009247600111]
{{7,10}{2,5}{3,8,9}{4,6,9}{2,5}{4,6,9}{7,10}{3,8,9}{4,6,9}{1,10}}
The information generated by the check points in the previous iteration is read from the front half of the dual-port RAM, while the check-point information generated by the CNU in the current iteration for use in the next iteration is stored at the same position in the back half of the dual-port RAM. At the start of each iteration the memory halves are switched once.
The implementation of this LDPC decoder has been explained above with a concrete H matrix as the example. Clearly such an implementation is feasible for any H matrix. With reasonable timing partitioning, it obtains better performance than other decoder implementations at equal resource occupation. Because the numbers of VNUs and CNUs in the design are asymmetric, using the single-VNU structure when a VNU occupies more resources than a CNU, or the single-CNU structure when a CNU occupies more resources than a VNU, gives higher practical value than other decoder structures. And because transferring information through memory arrays is insensitive to the concrete form of the check matrix, this decoder structure is applicable to matrices of any type.
For example, with a quaternary (4-ary) LDPC code, calculation shows that a CNU occupies about 18 times the resources of a VNU. The resource occupation is shown in the following table:
Resource        Slices   Slice Flip-Flops   4-input LUTs
Number (CNU)    10301    14978              16208
Number (VNU)    899      1571               691
Since the CNU's resource occupation is 18 times the VNU's, a decoder structure using only one CNU was considered in order to save resources. Experiment verified that a decoder based on the part-parallel structure runs too slowly and falls far short of the requirement, whereas the decoder structure based on pipelined operation reaches the desired operating speed using only 70% of the resources of the same FPGA. Under a 200 MHz clock, with a maximum of 20 iterations, the minimum decoding speed is 34.96 Mbps. The decoder's final resource occupation is shown in the following table:
Resource        Slices   Slice Flip-Flops   4-input LUTs   Block RAM
Number          16232    24003              21160          117
Utilization     60%      45%                39%            73%
The above instance is only one implementation of the present invention, given as an example, and must not be taken as limiting the protection scope of the invention. It will be apparent to those skilled in the art that some details may be adjusted in a concrete implementation to optimize the decoder; but such adjustments, which must be made in practice, still rest on the core idea of the invention, the pipelined mode of operation. The protection scope of the invention is therefore defined by the claims.

Claims (7)

1. An LDPC code decoder based on pipelined operation, comprising one check-point computing unit CNU, a series of variable-point computing units VNU and a series of dual-port RAM arrays, characterized in that: each of the series of variable-point computing units is cascaded with a dual-port RAM array; the outputs of the VNUs all connect to the input of the CNU; the output of the CNU connects to the inputs of the dual-port RAM arrays, which buffer the CNU's output; the number of VNUs and the number of dual-port RAM arrays are determined by the check-point degree; each dual-port RAM array consists of a series of independent dual-port RAMs whose number is determined by the variable-point degree; the degree of a check point is the maximum number of variable points associated with it, the degree of a variable point is the maximum number of check points associated with it, and these counts are given by the number of nonzero elements in the corresponding row or column of the matrix.
2. The LDPC code decoder according to claim 1, characterized in that each dual-port RAM array is divided into two parts: the front part stores the information needed by the current iteration, and the back part stores the information the CNU is currently generating.
3. The LDPC code decoder according to claim 1 or 2, characterized in that the rows of the matrix of the LDPC code represent variable points and the columns represent check points.
4. An LDPC code decoder based on pipelined operation, comprising one variable-point computing unit VNU, a series of check-point computing units CNU and a series of dual-port RAM arrays, characterized in that: each of the series of check-point computing units is cascaded with its corresponding dual-port RAM array; the outputs of the CNUs all connect to the input of the VNU; the output of the VNU connects to the inputs of the dual-port RAM arrays, which buffer the VNU's output information; the number of CNUs and the number of dual-port RAM arrays are determined by the variable-point degree; each dual-port RAM array consists of a series of independent dual-port RAMs whose number is determined by the check-point degree; the degree of a check point is the maximum number of variable points associated with it, the degree of a variable point is the maximum number of check points associated with it, and these counts are given by the number of nonzero elements in the corresponding row or column of the matrix.
5. The LDPC code decoder according to claim 4, characterized in that each dual-port RAM array is divided into two parts: the front part stores the information needed by the current iteration, and the back part stores the information the VNU is currently generating.
6. The LDPC code decoder according to claim 4 or 5, characterized in that the rows of the matrix of the LDPC code represent variable points and the columns represent check points.
7. The LDPC code decoder according to claim 6, characterized in that the degree of a check point or of a variable point is determined by the number of nonzero elements in the corresponding row or column of the matrix.
CN200710092476A 2007-07-24 2007-07-24 Decoder for LDPC code based on pipeline operation mode Expired - Fee Related CN100592640C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200710092476A CN100592640C (en) 2007-07-24 2007-07-24 Decoder for LDPC code based on pipeline operation mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200710092476A CN100592640C (en) 2007-07-24 2007-07-24 Decoder for LDPC code based on pipeline operation mode

Publications (2)

Publication Number Publication Date
CN101093999A CN101093999A (en) 2007-12-26
CN100592640C true CN100592640C (en) 2010-02-24

Family

ID=38992069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200710092476A Expired - Fee Related CN100592640C (en) 2007-07-24 2007-07-24 Decoder for LDPC code based on pipeline operation mode

Country Status (1)

Country Link
CN (1) CN100592640C (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8537938B2 (en) * 2009-01-14 2013-09-17 Thomson Licensing Method and apparatus for demultiplexer design for multi-edge type LDPC coded modulation
CN102377437B (en) * 2010-08-27 2014-12-10 中兴通讯股份有限公司 Method and device for coding quasi-cyclic low density parity check codes

Also Published As

Publication number Publication date
CN101093999A (en) 2007-12-26

Similar Documents

Publication Publication Date Title
CN105912501B (en) A kind of SM4-128 Encryption Algorithm realization method and systems based on extensive coarseness reconfigurable processor
CN105049061B (en) Based on the higher-dimension base stage code decoder and polarization code coding method calculated in advance
CN107590085B (en) A kind of dynamic reconfigurable array data path and its control method with multi-level buffer
CN101777921B (en) Structured LDPC code decoding method and device for system on explicit memory chip
CN105975251B (en) A kind of DES algorithm wheel iteration systems and alternative manner based on coarseness reconstruction structure
CN101119118A (en) Encoder of LDPC code of layered quasi-circulation extended structure
CN103970720A (en) Embedded reconfigurable system based on large-scale coarse granularity and processing method of system
CN105335331A (en) SHA256 realizing method and system based on large-scale coarse-grain reconfigurable processor
CN104391759A (en) Data archiving method for load sensing in erasure code storage
CN100592640C (en) Decoder for LDPC code based on pipeline operation mode
CN104052495A (en) Low density parity check code hierarchical decoding architecture for reducing hardware buffer
CN102064835B (en) Decoder suitable for quasi-cyclic LDPC decoding
CN103761072A (en) Coarse granularity reconfigurable hierarchical array register file structure
CN107168927B (en) Sparse Fourier transform implementation method based on flowing water feedback filtering structure
CN102411557B (en) Multi-granularity parallel FFT (Fast Fourier Transform) computing device
CN102201817B (en) Low-power-consumption LDPC (low density parity check) decoder based on optimization of folding structure of memorizer
CN103473368A (en) Virtual machine real-time migration method and system based on counting rank ordering
CN101136638A (en) Multi-code rate irregular LDPC code decoder
CN109672524A (en) SM3 algorithm wheel iteration system and alternative manner based on coarseness reconstruction structure
CN102594369A (en) Quasi-cyclic low-density parity check code decoder based on FPGA (field-programmable gate array) and decoding method
CN112632465B (en) Data storage method for decomposing characteristic value of real symmetric matrix based on FPGA
CN101106382B (en) High-speed LDPC code decoder based on category routing technology
CN203706196U (en) Coarse-granularity reconfigurable and layered array register file structure
CN102346728B (en) A kind of method and apparatus adopting vector processor to realize FFT/DFT inverted order
CN106371805B (en) The dynamic dispatching interconnected registers of processor and the method for dispatching data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100224

Termination date: 20130724