WO2001035267A1 - Methods and apparatus for efficient cosine transform implementations - Google Patents


Info

Publication number
WO2001035267A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
matrix
idct
pes
dimensional
Prior art date
Application number
PCT/US2000/031120
Other languages
French (fr)
Inventor
Charles W. Kurak, Jr.
Gerald G. Pechanek, Jr.
Original Assignee
Bops, Incorporated
Priority date
Filing date
Publication date
Application filed by Bops, Incorporated
Publication of WO2001035267A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/3001Arithmetic instructions
    • G06F9/30014Arithmetic instructions with variable precision
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F17/147Discrete orthonormal transforms, e.g. discrete cosine transform, discrete sine transform, and variations therefrom, e.g. modified discrete cosine transform, integer transforms approximating the discrete cosine transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30032Movement instructions, e.g. MOVE, SHIFT, ROTATE, SHUFFLE
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30007Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30036Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3885Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units

Definitions

  • the present invention relates generally to improvements in parallel processing, and more particularly to methods and apparatus for efficient cosine transform implementations on the manifold array ("ManArray") processing architecture.
  • ManArray manifold array
  • DCT discrete cosine transform
  • IDCT inverse discrete cosine transform
  • the compression standards are typically complex and specify that a high data rate must be handled.
  • MPEG at Main Profile and Main Level specifies 720x576 picture elements (pels) per frame at 30 frames per second and up to 15 Mbits per second.
  • the MPEG Main Profile at High Level specifies 1920x1152 pels per frame at 60 frames per second and up to 80 Mbits per second.
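As a rough sense of the scale implied by the two figures above, the pel rates can be computed directly (illustrative arithmetic only; bit-rate handling is not modeled):

```python
# Rough throughput implied by the two MPEG profile/level figures quoted
# above (numbers from the text; this is illustrative arithmetic only).
def pel_rate(width, height, fps):
    """Picture elements per second for a given frame size and rate."""
    return width * height * fps

main_level = pel_rate(720, 576, 30)    # Main Profile at Main Level
high_level = pel_rate(1920, 1152, 60)  # Main Profile at High Level

print(f"Main Level: {main_level / 1e6:.1f} Mpels/s")   # ~12.4 Mpels/s
print(f"High Level: {high_level / 1e6:.1f} Mpels/s")   # ~132.7 Mpels/s
```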
  • Video processing is a time constrained application with multiple complex compute intensive algorithms such as the two dimensional (2D) 8x8 IDCT.
  • processors with high clock rates, fixed function application specific integrated circuits (ASICs), or combinations of fast processors and ASICs are typically used to meet the high processing load.
  • ASICs fixed function application specific integrated circuits
  • Having efficient 2D 8x8 DCT and IDCT implementations is of great advantage in providing a low cost solution.
  • Prior art approaches, such as Pechanek et al. U.S. Patent No. 5,546,336, used a specialized folded memory array with embedded arithmetic elements to achieve high performance with 16 processing elements. The folded memory array and large number of processing elements do not map well to a low cost regular silicon implementation.
  • the ManArray processor as adapted as further described herein provides efficient software implementations of the IDCT using the ManArray indirect very long instruction word (iVLIW) architecture and a unique data-placement that supports software pipelining between processor elements (PEs) in the 2x2 ManArray processor.
  • iVLIW ManArray indirect very long instruction word
  • PEs processor elements
  • a two-dimensional (2D) 8x8 IDCT, used in many video compression algorithms such as MPEG, can be processed in 34-cycles on a 2x2 ManArray processor and meets IEEE Standard 1180-1990 for precision of the IDCT.
  • the 2D 8x8 DCT algorithm using the same distributed principles covered in the distributed 2D 8x8 IDCT, can be processed in 35-cycles on the same 2x2 ManArray processor. With this level of performance, the clock rate can be much lower than is typically used in MPEG processing chips thereby lowering overall power usage.
  • An alternative software process for implementing the cosine transforms on the ManArray processor provides a scalable algorithm that can be executed on various arrays, such as a 1x1, a 1x2, a 2x2, a 2x3, a 2x4, and so on, allowing scalable performance.
  • this new software process makes use of the scalable characteristics of the ManArray architecture, unique ManArray instructions, and a data placement optimized for the MPEG application.
  • the number of VLIWs is minimized through reuse of VLIWs in the processing of both dimensions of the 2D computation.
  • the present invention defines a collection of eight hardware ManArray instructions that use the ManArray iVLIW architecture and communications network to efficiently calculate the distributed two-dimensional 8x8 IDCT.
  • appropriate data distribution and software pipeline techniques are provided to achieve a 34- cycle distributed two-dimensional 8x8 IDCT on a 2x2 ManArray processor that meets IEEE Standard 1180-1990 for precision of the IDCT.
  • appropriate data distribution patterns are used in local processor element memory in conjunction with a scalable algorithm to effectively and efficiently reuse VLIW instructions in the processing of both dimensions of the two dimensional algorithm.
  • FIG. 1 illustrates an exemplary 2x2 ManArray iVLIW processor
  • Fig. 2 illustrates an IDCT signal flow graph
  • Fig. 3A illustrates input data in 8x8 matrix form
  • Fig. 3B illustrates reordered input 8x8 data matrix per the signal flow graph of Fig. 2;
  • Fig. 3C illustrates the input 8x8 data matrix after it has been folded on the vertical axis
  • Fig. 3D illustrates the input 8x8 data matrix after it has been folded on the horizontal axis with data placement per PE in the 2x2 ManArray iVLIW processor of Fig. 1;
  • Fig 3E illustrates an example of data placement into the ManArray PE compute register files
  • Fig. 4 illustrates the data of Fig. 3D as loaded into the four PEs' registers
  • Fig. 5A illustrates a presently preferred execute VLIW instruction, XV
  • Fig. 5B illustrates the syntax and operation of the XV instruction of Fig. 5A
  • Fig. 6A illustrates a presently preferred sum of two products instruction, SUM2P
  • Fig. 6B illustrates the syntax and operation of the SUM2P instruction of Fig. 6A;
  • Fig. 7A illustrates a presently preferred sum of two products accumulate instruction, SUM2PA
  • Fig. 7B illustrates the syntax and operation of the SUM2PA instruction of Fig. 7A;
  • Fig. 8A illustrates a presently preferred butterfly with saturate instruction, BFLYS
  • Fig. 8B illustrates the syntax and operation of the BFLYS instruction of Fig. 8A
  • Fig. 9A illustrates a presently preferred addition instruction, ADD
  • Fig. 9B illustrates the syntax and operation of the ADD instruction of Fig. 9A
  • Fig. 10A illustrates a presently preferred permute instruction, PERM
  • Fig. 10B illustrates the syntax and operation of the PERM instruction of Fig. 10A
  • Fig. 10C illustrates one example of an 8-byte to 4-byte permute operation
  • Fig. 10D illustrates another example of an 8-byte to 4-byte permute operation
  • Fig. 10E illustrates one example of an 8-byte to 8-byte permute operation
  • Fig. 11 A illustrates a presently preferred PE to PE exchange instruction, PEXCHG
  • Fig. 11B illustrates the syntax and operation of the PEXCHG instruction of Fig. 11A;
  • Fig. 11C illustrates a 2x2 cluster switch arrangement showing PE and cluster switch notation
  • Fig. 11D illustrates a key to PEXCHG 2x2 Operation Table with an example path highlighted
  • Fig. 11E illustrates the PEXCHG 1x1 Operation Table
  • Fig. 11F illustrates the PEXCHG 1x2 Operation Table
  • Fig. 11G illustrates the PEXCHG 2x2 Operation Table
  • Fig. 11H illustrates a 2x2 PEXCHG operation
  • Fig. 12A illustrates a presently preferred load modulo indexed with scaled update instruction, LMX;
  • Fig. 12B illustrates the syntax and operation of the LMX instruction of Fig. 12A
  • Fig. 13A illustrates the first 18-cycles of ManArray 8x8 2D IDCT program code and the VLIWs associated with each XV program instruction
  • Fig. 13B illustrates cycles 19-34 of the ManArray 8x8 2D IDCT program code
  • Fig. 14 illustrates an 8x8 input data matrix
  • Fig. 15 illustrates the intermediate 8x8 results after processing the first dimension of the 8x8 IDCT algorithm
  • Fig. 16 illustrates the 8x8 output pixel results after processing of both dimensions of the 8x8 IDCT algorithm
  • Fig. 17 illustrates a 1x8 IDCT algorithm in VLIW coding format
  • Figs. 18A-M illustrate 2D 8x8 IDCT ManArray code
  • Fig. 19A illustrates a load indirect with scaled immediate update instruction, LII
  • Fig. 19B illustrates the syntax and operation of the LII instruction of Fig. 19A
  • Fig. 20A illustrates a subtract instruction, SUB;
  • Fig. 20B illustrates the syntax and operation of the SUB instruction of Fig. 20A;
  • Fig. 21A illustrates a shift right immediate instruction, SHRI;
  • Fig. 21B illustrates the syntax and operation of the SHRI instruction of Fig. 21A;
  • Fig. 22A illustrates a store indirect with scaled immediate update instruction, SII;
  • Fig. 22B illustrates the syntax and operation of the SII instruction of Fig. 22A;
  • Fig. 23 illustrates exemplary 2D 8x8 IDCT code showing the software pipeline operation;
  • Fig. 24 illustrates an exemplary process for a 2x2 DCT.
  • Provisional Application Serial No. 60/139,946 entitled “Methods and Apparatus for Data Dependent Address Operations and Efficient Variable Length Code Decoding in a VLIW Processor” filed June 18, 1999
  • Provisional Application Serial No. 60/140,245 entitled “Methods and Apparatus for Generalized Event Detection and Action Specification in a Processor” filed June 21, 1999
  • Provisional Application Serial No. 60/140,163 entitled “Methods and Apparatus for Improved Efficiency in Pipeline Simulation and Emulation” filed June 21, 1999
  • Provisional Application Serial No. 60/140,162 entitled “Methods and Apparatus for Initiating and Re-Synchronizing Multi-Cycle SIMD Instructions” filed June 21, 1999
  • Provisional Application Serial No. 60/140,244 entitled “Methods and Apparatus for Providing One-By-One Manifold Array (1x1 ManArray) Program Context Control” filed June 21, 1999
  • Provisional Application Serial No. 60/140,325 entitled “Methods and Apparatus for Establishing Port Priority Function in a VLIW Processor” filed June 21, 1999
  • Provisional Application Serial No. 60/140,425 entitled “Methods and Apparatus for Parallel Processing Utilizing a Manifold Array (ManArray) Architecture and Instruction Syntax” filed June 22, 1999
  • Provisional Application Serial No. 60/165,337 entitled “Efficient Cosine Transform Implementations on the ManArray Architecture” filed November 12, 1999
  • Provisional Application Serial No. 60/171,911 entitled “Methods and Apparatus for DMA Loading of Very Long Instruction Word Memory” filed December 23, 1999
  • Provisional Application Serial No. 60/184,668 entitled “Methods and Apparatus for Providing Bit-Reversal and Multicast Functions Utilizing DMA Controller” filed February 24, 2000
  • a ManArray 2x2 iVLIW single instruction multiple data stream (SIMD) processor 100 shown in Fig. 1 comprises a sequence processor (SP) controller combined with a processing element-0 (PE0) SP PE0 101, as described in further detail in U. S. Patent Application Serial No. 09/169,072 entitled “Methods and Apparatus for Dynamically Merging an Array Controller with an Array Processing Element".
  • SP sequence processor
  • PE0 processing element-0
  • Three additional PEs 151, 153, and 155 are also utilized to demonstrate the efficient algorithm and mechanisms for fast cosine transforms on the ManArray architecture.
  • PEs can also be labeled with their matrix positions as shown in parentheses for PE0 (PE00) 101, PE1 (PE01) 151, PE2 (PE10) 153, and PE3 (PE11) 155.
  • the SP/PE0 101 contains the fetch controller 103 to allow the fetching of short instruction words (SIWs) from a 32-bit instruction memory 105.
  • the fetch controller 103 provides the typical functions needed in a programmable processor such as a program counter (PC), branch capability, event point (EP) loop operations (see U.S. Application Serial No.
  • PC program counter
  • EP event point
  • the execution units 131 in the combined SP/PE0 101 can be separated into a set of execution units optimized for the control function, e.g., fixed point execution units, and the PE0 as well as the other PEs can be optimized for a floating point application.
  • the execution units 131 are of the same type in the SP/PE0 and the PEs.
  • SP/PE0 and the other PEs use a five instruction slot iVLIW architecture which contains a very long instruction word memory (VIM) 109 and an instruction decode and VIM controller function unit 107 which receives instructions as dispatched from the SP/PE0's I-Fetch unit 103 and generates the VIM addresses-and-control signals 108 required to access the iVLIWs stored in VIM.
  • Referenced instruction types are identified by the letters SLAMD in VIM 109, where the letters are matched up with instruction types as follows: Store (S), Load (L), ALU (A), MAU (M), and DSU (D). The basic concept of loading of the iVLIWs is described in greater detail in U.S. Patent Application Serial No.
  • the data memory interface controller 125 must handle the data processing needs of both the SP controller, with SP data in memory 121, and PE0, with PE0 data in memory 123.
  • the SP/PE0 controller 125 also is the controlling point of the data that is sent over the 32-bit broadcast data bus 126.
  • the other PEs, 151, 153, and 155 contain common physical data memory units 123', 123", and 123"' though the data stored in them is generally different as required by the local processing done on each PE.
  • the interface to these PE data memories is also a common design in PEs 1, 2, and 3 and indicated by PE local memory and data bus interface logic 157, 157' and 157".
  • Interconnecting the PEs for data transfer communications is the cluster switch 171 various aspects of which are described in greater detail in U.S. Patent Application Serial No. 08/885,310 entitled “Manifold Array Processor", U.S. Application Serial No. 09/949,122 entitled “Methods and Apparatus for Manifold Array Processing", and U.S. Application Serial No. 09/169,256 entitled “Methods and Apparatus for ManArray PE-to-PE Switch Control".
  • the interface to a host processor, other peripheral devices, and/or external memory can be done in many ways.
  • a primary interface mechanism is contained in a direct memory access (DMA) control unit 181 that provides a scalable ManArray data bus 183 that connects to devices and interface units external to the ManArray core.
  • the DMA control unit 181 provides the data flow and bus arbitration mechanisms needed for these external devices to interface to the ManArray core memories via the multiplexed bus interface symbolically represented by 185.
  • a high level view of the ManArray Control Bus (MCB) 191 is also shown in Fig. 1.
  • This 2D 8x8 IDCT matrix can be separated into 1D 1x8 IDCTs on the rows and columns.
  • the ManArray 2D 8x8 IDCT uses this property and applies the following 1D 1x8 IDCT on all eight rows (columns) followed by applying it to all eight columns (rows).
  • the calculation of the 2D 8x8 IDCT in 34-cycles does not include the loading of data into the processing elements of the array, since the equation does not account for a data loading function and there are many methods to accomplish the data loading, such as direct memory access (DMA), as dictated by the application program.
  • DMA direct memory access
  • compressed encoded data is received and sequentially preprocessed by variable length decoding, run length and inverse scan steps and then placed in the PEs after which the data is further processed by an inverse quantization step prior to using the IDCT processing step.
  • This IDCT algorithm can also be used in an MPEG encoder and other applications requiring an IDCT function.
  • the data is uniquely placed in the four PEs based upon symmetric properties of the IDCT signal flow graph as shown in Fig. 2.
  • the present approach separates the 2-dimensional 8x8 IDCT into 1-dimensional 1x8 IDCTs on the rows and then on the columns.
  • the symmetric form of the 1x8 IDCT was used in a prior art processor further described in a paper "M.F.A.S.T.: A Highly Parallel Single Chip DSP with a 2D IDCT Example", by G. Pechanek, C. W. Kurak, C. J. Glossner, C. H. L. Moller, and S. J. Walsh.
  • the present invention describes a 2D 8x8 IDCT algorithm on a 4-processing element array in 34-cycles, a performance that is less than twice the processing time with only one quarter of the processing elements as compared to the M.F.A.S.T. processor.
  • no special folding of the data matrix on the diagonal is required.
  • the present ManArray approach uses a data placement that matches the signal flow graph directly.
  • IDCT is composed of a sum of products stage-1 210 and a butterfly stage-2 220.
  • the stage-1 operation consists of 32 multiply operations indicated by the input being multiplied (*) by cosine coefficients indicated by a "pc#" where p indicates a positive value or "mc#" where m indicates a negative value.
  • Groups of four multiplied values are then summed as indicated in the signal flow graph by the eight 4-input adders 211-218.
  • the sum of product results are identified by the letters A-H.
  • This 1x8 IDCT operation is performed on each row of the 8x8 input data followed by the 1x8 IDCT operation on each column to form the final two- dimensional result.
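The row-then-column separation described above can be sketched in floating point. This illustrative model uses the standard orthonormal 1x8 DCT-III; the patent's fixed-point "pc#"/"mc#" cosine coefficients and 34-cycle schedule are not reproduced.

```python
import math

# Separable 2D 8x8 IDCT sketch: apply a 1x8 IDCT to every row, then to
# every column of the intermediate result. Orthonormal DCT-III scaling
# is assumed here for illustration.
N = 8

def idct_1x8(f):
    """1x8 IDCT (DCT-III with orthonormal scaling) of an 8-element vector."""
    out = []
    for x in range(N):
        s = 0.0
        for u in range(N):
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            s += cu * f[u] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        out.append(s)
    return out

def idct_2d(F):
    """2D 8x8 IDCT as row passes followed by column passes."""
    rows = [idct_1x8(r) for r in F]                   # first dimension
    cols = [idct_1x8([rows[i][j] for i in range(N)])  # second dimension
            for j in range(N)]
    return [[cols[j][i] for j in range(N)] for i in range(N)]
```

A DC-only input (a single nonzero coefficient at position (0,0)) produces a flat output block, which is a quick sanity check on the two passes.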
  • the input data is shown in Fig. 3A in a row-major matrix 300 where in an MPEG decoder each element represents a DCT coefficient as processed through the variable length decoder.
  • the data types are assumed to be greater than 8-bits at this stage and for ManArray processing 16-bit data types are used for each element.
  • the data elements are identified in both a linear numeric ordering from M0 to M63, and a row and column matrix form corresponding to the input notation used in the signal flow graph 200 of Fig. 2.
  • the data is to be distributed to the PEs of the 2x2 ManArray processor 100 appropriately to allow fast execution as a distributed 2D 8x8 IDCT. Consequently, the data should be distributed among the four PEs to minimize communication operations between the PEs.
  • the input data of the Fig. 2 signal flow graph separates the even data elements from the odd data elements.
  • This even and odd arrangement 310 of the data is shown graphically in Fig. 3B where the rows and columns are ordered in the same even and odd arrangement used in Fig. 2.
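A minimal sketch of this even/odd rearrangement, assuming the permutation [0, 2, 4, 6, 1, 3, 5, 7] is applied to both rows and columns as Fig. 3B suggests:

```python
# Even/odd reordering sketch (Fig. 3B): rows and columns of the 8x8
# input are permuted so all even indices precede all odd indices,
# matching the input grouping of the signal flow graph. The exact
# permutation is an assumption drawn from the figure description.
ORDER = [0, 2, 4, 6, 1, 3, 5, 7]

def even_odd_reorder(m):
    """Apply the even-then-odd index order to rows and columns."""
    return [[m[r][c] for c in ORDER] for r in ORDER]
```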
  • the second stage 220 of the signal flow graph 200 of Fig. 2 is a butterfly operation.
  • the butterfly operations require the A and H, B and G, C and F, and D and E values to be located locally on a PE. This is equivalent to folding the signal flow graph and is further equivalent to folding the input data matrix. Since the 2D 8x8 IDCT operation first operates on the rows (columns) and then the columns (rows), two folds on two different axes must be accounted for.
  • a vertical axis data fold 320 is illustrated in Fig. 3C, while Fig. 3D shows data 330 corresponding to the data of Fig. 3C folded again on the horizontal axis. Fig. 3D further shows how the data would be assigned to the 4 PEs, PE0-PE3, of the 2x2 ManArray processor 100 of Fig. 1. There are a number of ways the data can be loaded into the PEs.
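One plausible reading of the two folds can be sketched as mirror-pairing: the vertical fold pairs each column with its mirror column, and the horizontal fold then pairs each row with its mirror row. The exact pairing and per-PE register layout of Figs. 3C/3D are not reproduced here; this only illustrates the folding idea.

```python
# Data-fold sketch (Figs. 3C/3D). The folds are shown on a raw 8x8
# matrix; in the patent they are applied to the even/odd reordered
# matrix. The mirror pairing below is an assumption from the figures.
def fold_vertical(m):
    """Pair column c with its mirror column 7-c (vertical-axis fold)."""
    return [[(m[r][c], m[r][7 - c]) for c in range(4)] for r in range(8)]

def fold_horizontal(m):
    """Pair row r with its mirror row 7-r (horizontal-axis fold)."""
    return [[(m[r][c], m[7 - r][c]) for c in range(4)] for r in range(4)]
```

After both folds, each of the sixteen 4x4 cells carries four mirror-image elements; the four 4x4 quadrant groupings map one-to-one onto the four PEs of the 2x2 array.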
  • VLD variable length decoder
  • the output of the variable length decoder could be in a linear order of 64 packed halfword data elements, two DCT coefficients per word. This packed format is organized to match the packed format required in the IDCT step.
  • the data is then loaded via DMA operation or by using ManArray load instructions.
  • the data is loaded into the PE configurable compute register file which is used in a 16x64 configuration for high performance packed data operations.
  • Such a data placement 340 is shown in Fig. 3E for each PE and the registers chosen for this description.
  • the even-indexed 16-bit data values are placed together in a compute register file (CRF) high halfword (H1) and low halfword (H0) 32-bit register, for example, register blocks 350 and 352 of Fig. 3E.
  • the odd-indexed 16-bit data values are placed together in a CRF high halfword (H1) and low halfword (H0) 32-bit register, for example, register blocks 360 and 362 of Fig. 3E.
  • This data placement supports the signal flow graph of Fig. 2 where the even- indexed input values and odd-indexed input values provide two independent sets of inputs for the sum of product operations. For example, the four sum of product values A, B, C and D are generated from only the even-indexed input values.
  • The register data placement of Fig. 3E is shown in a compute register file format 400 in Fig. 4. It is noted that a number of different register choices are possible while still achieving the same performance and using the same processing approach. It is further noted that different data orderings that allow the same efficient sum of four products operation are also possible while still achieving the same performance and using the inventive approach.
  • the strategy for processing the IDCT algorithm is implemented using a number of features of the ManArray processor to provide a unique software pipeline employing indirect VLIWs operating in multiple PEs.
  • a syntax operation table 510 for the XV instruction 500 is shown in Fig. 5B.
  • Quad 16x16 multiplications are used in the sum of two products with and without accumulate instructions to produce two 32-bit results.
  • These instructions 600 and 700 are shown in Figs. 6A and 7A.
  • Their syntax/operation tables 610 and 710 are shown in Figs. 6B and 7B.
  • a syntax/operation table 810 for BFLYS instruction 800 is shown in Fig. 8B.
  • Other instructions common to the MAU and ALU include ADD instruction 900 of Fig. 9A having syntax operation table 910 of Fig. 9B, add immediate (ADDI), add with saturate (ADDS), butterfly divide by 2 (BFLYD2), butterfly with saturate (BFLYS), the mean of two numbers (MEAN2), subtract (SUB), subtract immediate (SUBI), and subtract with saturate (SUBS).
  • the DSU supports permute instructions 1000 (PERM) of Fig. 10A having syntax/operation 1010 of Fig. 10B to aid in organizing the data prior to processing and communicating between PEs by use of the PEXCHG instruction 1100 of Fig. 11A having syntax/operation table 1110 of Fig. 11B.
  • the PEXCHG instruction 1100 used in the IDCT algorithm swaps two data items between two PEs.
  • the loading of the cosine coefficients is accomplished with a load modulo indexed with scaled update instruction 1200 (LMX) of Fig. 12A having syntax operation table 1210 of Fig. 12B.
  • LMX load modulo indexed with scaled update instruction
  • Fig. 10C illustrates one example of an 8-byte to 4-byte permute operation 1020.
  • Fig. 10D illustrates another example of an 8-byte to 4-byte permute operation 1030.
  • Fig. 10E illustrates an example of an 8-byte to 8-byte permute operation 1040.
  • Fig. 11C illustrates a 2 x 2 cluster switch arrangement 1130 showing PE and cluster switch notation.
  • Fig. 11D illustrates a key to PEXCHG 2 x 2 operation table 1140.
  • Figs. 11E-11G illustrate PEXCHG 1 x 1, 1 x 2 and 2 x 2 operation tables, 1160, 1170 and 1180, respectively.
  • Fig. 11H illustrates a 2 x 2 PEXCHG operation 1150.
  • the frequency or DCT coefficient data and the first set of cosine coefficients are loaded into the four PEs, some pointers and control values are initialized, and a rounding bit, used to enhance the precision of the results, is loaded.
  • the IDCT signal flow graph 200 of Fig. 2 begins with a sum of products operation consisting of thirty-two 16x16 multiplications and twenty-four 2-to-1 32-bit additions for each row (column) processed.
  • the ManArray IDCT algorithm maintains maximum precision through the sum-of-products operation.
  • the IDCT signal flow graph for each of the eight rows (columns) of processing illustrated in Fig. 2 uses four butterfly operations, where each butterfly operation consists of a separate add and subtract on 16-bit data.
  • the most significant 16-bits of the 32-bit sum-of-product data are operated on, producing the final 1x8 IDCT results in 16-bit form.
  • a total of 32 butterfly operations are required for the eight rows (columns) which is accomplished in four cycles on the 2x2 array using the BFLYS instruction since its execution produces two butterfly operations per PE. Many of these butterfly operations are overlapped by use of VLIWs with other IDCT computation to improve performance.
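The four-cycle figure follows from simple arithmetic on the counts just stated:

```python
# Cycle count implied by the text: 8 rows x 4 butterflies = 32 butterfly
# operations; BFLYS performs 2 butterflies per PE per cycle on 4 PEs.
butterflies = 8 * 4
per_cycle = 2 * 4     # two butterflies per PE, four PEs
cycles = butterflies // per_cycle
print(cycles)  # 4
```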
  • the 1x8 IDCT operations are now repeated on the columns (rows) to complete the 2D 8x8 IDCT.
  • the load unit (LU) is also busy supporting the IDCT computations by loading the appropriate data cycle by cycle.
  • the 8x8 2D IDCT program 1300 shown in Figs. 13A and 13B is discussed below.
  • the first column 1305 labeled Clk indicates the program instruction execution cycle counts.
  • Fig. 13A depicts steps or cycles 1-18 for the program 1300 and
  • Fig. 13B depicts steps or cycles 19-34 of that program. These illustrated steps are made up primarily of 32-bit XV instructions listed under the column 1310 titled 8x8 2D IDCT program.
  • VLIW slot instructions, such as instruction 1341 in column 1340 of Fig. 13A, are listed for each XV instruction.
  • no store operations are required and all store units have a no operation (nop) indicated in each VLIW.
  • the instructions used in the program are described in further detail as follows:
  • the XV instruction 500 is used to execute an indirect VLIW (iVLIW).
  • the iVLIWs that are available for execution by the XV instruction are stored at individual addresses of the specified SP or PE VLIW memory (VIM).
  • the VIM address is computed as the sum of a base VIM address register Vb (V0 or V1) plus an unsigned 8-bit offset VIMOFFS.
  • the VIM address must be in the valid range for the hardware configuration; otherwise the operation of this instruction is undefined.
  • a blank 'E=' parameter does not execute any slots.
  • the override does not affect the UAF setting specified via the LV instruction.
  • a blank 'F=' selects the UAF specified when the VLIW was loaded.
  • the SUM2P instruction 600, operation version 610: the products of the high halfwords of the even and odd source registers (Rxe, Rxo) and (Rye, Ryo) are added to the products of the low halfwords of the same registers, and the results are stored in the even and odd target registers (Rte, Rto).
  • the SUM2PA instruction 700, operation version 710: the products of the high halfwords of the even and odd source registers (Rxe, Rxo) and (Rye, Ryo) are added to the products of the low halfwords of the same registers, and the results are added to the even and odd target registers (Rte, Rto) prior to storing the results in (Rte, Rto).
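A behavioral sketch of the SUM2P/SUM2PA semantics just described, with each 32-bit register modeled as a Python integer packing two signed 16-bit halfwords (the packing convention is an assumption of this sketch; the even/odd register pairing simply runs this operation twice):

```python
# SUM2P/SUM2PA behavioral sketch: multiply corresponding signed 16-bit
# halfwords of two 32-bit source values and sum the two products into a
# 32-bit result; SUM2PA additionally accumulates into the target.
def halves(r32):
    """Split a 32-bit value into signed (high, low) 16-bit halfwords."""
    def s16(v):
        return v - 0x10000 if v & 0x8000 else v
    return s16((r32 >> 16) & 0xFFFF), s16(r32 & 0xFFFF)

def sum2p(rx, ry):
    """Sum of two products of corresponding halfwords."""
    xh, xl = halves(rx)
    yh, yl = halves(ry)
    return xh * yh + xl * yl

def sum2pa(rt, rx, ry):
    """Sum of two products, accumulated into the target value."""
    return rt + sum2p(rx, ry)
```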
  • the BFLYS instruction 800: results of a butterfly operation consisting of a sum and a difference of source registers Rx and Ry are stored in the odd/even target register pair Rto||Rte. Saturated arithmetic is performed such that if a result does not fit within the target format, it is clipped to a minimum or maximum as necessary.
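The BFLYS semantics can be sketched similarly, assuming the 16-bit data type used elsewhere in the IDCT description:

```python
# BFLYS sketch: a butterfly produces a sum (Rx + Ry) and a difference
# (Rx - Ry), each clipped to the signed 16-bit range.
I16_MIN, I16_MAX = -0x8000, 0x7FFF

def saturate16(v):
    """Clip a value to the signed 16-bit range."""
    return max(I16_MIN, min(I16_MAX, v))

def bflys(rx, ry):
    """Butterfly with saturate: returns the (sum, difference) pair."""
    return saturate16(rx + ry), saturate16(rx - ry)
```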
  • the ADD instruction 900: the sum of source registers Rx and Ry is stored in target register Rt. Multiple packed data type operations are defined in syntax/operation table 910.
  • the PERM instruction 1000: bytes from the source register pair Rxo||Rxe are placed into the target register Rt or the target register pair Rto||Rte.
  • the PEXCHG instruction 1100: a PE's target register receives data from its input port.
  • Fig. 11C illustrates a 2x2 cluster switch diagram 1110 and four PEs 1112, 1114, 1116 and 1118 arranged in a 2x2 array.
  • the PE's source register is made available on its output port.
  • the PE's input and output ports are connected to a cluster switch 1120 as depicted in Fig. 11C.
  • the cluster switch is made up of multiplexers 1122, 1124, 1126 and 1128 (muxes), whose switching is controlled by individual PEs. The combination of the PeXchgCSctrl and the PE's ID controls the PE's mux in the cluster switch.
  • the PEXCHG table specifies how the muxes are controlled as illustrated in the key to PEXCHG 2x2 operation table 1140 shown in Fig. 11D.
  • Each PE's mux control in conjunction with its partner's mux control, determines how the specified source data is routed to the PE's input port.
  • Each PE also contains a 4-bit hardware physical identification (PID) stored in a special purpose PID register.
  • the 2x2 array uses two bits of the PID.
  • the PID of a PE is unique and never changes.
  • Each PE can take an identity associated with a virtual organization of PEs. This virtual ID (VID) consists of a Gray encoded row and column value.
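A small sketch of composing a virtual ID from Gray-encoded row and column values, assuming a standard binary-reflected Gray code (the hardware's exact encoding is not detailed in the text):

```python
# Virtual PE ID sketch: Gray-encode the row and column of the virtual
# organization and pack them into one ID. The bit widths and packing
# order are illustrative assumptions.
def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def vid(row, col, bits=1):
    """Pack Gray-encoded row and column into a virtual PE ID."""
    return (gray(row) << bits) | gray(col)
```

For a 2x2 array one bit per coordinate suffices, giving the four distinct IDs 0 through 3.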
  • the LMX instruction 1200 Loads a byte, halfword, word, or doubleword operand into an SP target register from SP memory or into a PE target register from PE local memory. Even address register Ae contains a 32-bit base address of a memory buffer.
  • the high halfword of odd address register Ao contains an unsigned 16-bit value representing the memory buffer size in bytes. This value is the modulo value.
  • the low halfword of Ao is an unsigned 16-bit index loaded into the buffer.
  • the index value is updated prior to (pre-decrement) or after (post-increment) its use in forming the operand effective address.
  • a pre-decrement update involves subtracting the unsigned 7-bit update value UPDATE7 scaled by the size of the operand being loaded (i.e. no scale for a byte, 2 for a halfword, 4 for a word, or 8 for a doubleword) from the index. If the resulting index becomes negative, the modulo value is added to the index.
  • a post-increment update involves adding the scaled UPDATE7 to the index. If the resulting index is greater than or equal to the memory buffer size (modulo value), the memory buffer size is subtracted from the index. The effect of the index update is that the index moves a scaled UPDATE7 bytes forward or backward within the memory buffer.
  • the operand effective address is the sum of the base address and the index. Byte and halfword operands can be sign-extended to 32-bits.
  • Fig. 14 is a logical representation 1400 in two-dimensional form of the 8x8 IDCT input data, F iw,jz, as required, where:
  • matrix elements xa, xb, xc and xd are non-repeating members of {0,2,4,6}, and
  • matrix elements xa, xb, xc and xd are non-repeating members of {1,3,5,7}, and
  • matrix elements ym, yn, yo and yp are non-repeating members of {0,2,4,6}.
  • on the Manta™, which has a fixed endianness ordering for memory accesses, the values are loaded "reverse-order" into a register pair.
  • Fig. 15 illustrates the intermediate output table 1500 after the completion of the first dimension of processing. Note that for an input row I xa,jz, the xa subscripts are common across the row.
  • the post-increment/decrement feature of the LII instruction allows for easy and efficient address generation supporting the stores to memory in the specified memory organization.
  • the output is in 16-bit format.
  • the code can be modified to store 8-bit data elements instead of 16-bit data elements.
  • the code for a 1x8 IDCT is shown in table 1700 of Fig. 17, and then expanded into the full 8x8 IDCT, code listings 1800, 1805, 1810, 1815, 1820, 1825, 1830, 1835, 1840, 1845, 1850, 1855 and 1860 of Figs. 18A-M, respectively.
  • the special ordering of the input data can be incorporated as part of a de-zigzag scan ordering in a video decompression scheme (e.g. MPEG-1, MPEG-2, H.263, etc.).
  • the output data is in row-major order.
  • the 1x8 code for any row in Fig. 17 will output values 0, 1, 2, 3, 4, 5, 6, and 7.
  • the incoming data is loaded into registers R0-R3 of a processing element's CRF, packing two data values in each register, as follows:
  • the construction of the complete 2D 8x8 IDCT is accomplished by performing a series of 8 1x8 IDCTs on the rows in a pipelined fashion, storing the output values in a second memory storage area in transpose format, performing a second series of 8 1x8 IDCTs on the rows of the transposed array (these are the columns of the original data), then storing the output values in a new storage area in transposed format.
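The row-pass/transpose-store/column-pass construction above can be sketched independently of the IDCT arithmetic. A minimal Python model (the function name is illustrative, not from the original) shows that storing each pass's output in transpose format makes the second pass operate on the original columns, and that the second transposed store restores row-major order:

```python
def two_pass_with_transposed_stores(block, row_transform):
    # Pass 1: apply the 1x8 transform to each row, storing the results
    # in transpose format (each row's outputs go down a column).
    t = [list(col) for col in zip(*[row_transform(row) for row in block])]
    # Pass 2: transform the rows of the transposed array (these are the
    # columns of the original data), storing transposed again, which
    # leaves the final result in row-major order.
    return [list(col) for col in zip(*[row_transform(row) for row in t])]
```

With the identity transform the block comes back unchanged, confirming that the two transposed stores cancel and no final reordering pass is needed.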
  • Exemplary ManArray 2D 8x8 IDCT code is shown in Figs. 18 A-M.
  • the scalable algorithm uses four additional instructions.
  • Fig. 19A shows a load indirect with scaled immediate update (LII) instruction 1900
  • Fig. 19B shows a syntax operation table 1910 for the LII instruction 1900 of Fig. 19A.
  • LII instruction 1900 loads a byte, halfword, word, or doubleword operand into an SP target register from SP memory or into a PE target register from PE local memory.
  • Source address register An is updated prior to (pre-decrement/pre-increment) or after (post-decrement/post-increment) its use as the operand effective address.
  • the update to An is an addition or subtraction of the unsigned 7-bit update value UPDATE7 scaled by the size of the operand being loaded. In other words, no scaling is done for a byte, scaling by 2 for a halfword, 4 for a word, or 8 for a doubleword.
  • Byte and halfword operands can be sign-extended to 32-bits.
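The LII address update can be modeled as follows. This is a sketch: the flag arguments stand in for the instruction's pre/post and increment/decrement encodings, which the original describes but does not name this way:

```python
def lii_address(an, update7, size, pre=False, decrement=False):
    # Model of the LII update: the scaled UPDATE7 is applied to An either
    # before (pre) or after (post) An is used as the effective address.
    # Returns (effective_address, new_an).
    delta = -update7 * size if decrement else update7 * size
    if pre:
        an += delta
        return an, an
    return an, an + delta
```

Post-increment by one word steps An forward by 4 after the access; pre-decrement by one word moves An back by 4 and uses the new value as the address.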
  • Fig. 20A shows a subtract (SUB) instruction 2000 and Fig. 20B shows a syntax operation table 2010 for the SUB instruction 2000 of Fig. 20A.
  • For the SUB instruction 2000, the difference of source registers Rx and Ry is stored in target register Rt.
  • Fig. 21A shows a shift right immediate (SHRI) instruction 2100
  • Fig. 21B shows a syntax/operation table 2110 for the SHRI instruction 2100 of Fig. 21A.
  • each source register element is shifted right by the specified number of bits Nbits. The range of Nbits is 1-32.
  • signed (arithmetic) shifts vacated bit positions are filled with the value of the most significant bit of the element.
  • unsigned (logical) shifts vacated bit positions are filled with zeroes.
  • Each result is copied to the corresponding target register element.
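The arithmetic/logical distinction can be modeled on a single 32-bit element. This is a sketch; Python's `>>` on negative integers is already an arithmetic shift, so the masking only reconstructs the 32-bit register view of the result:

```python
def shri(value, nbits, signed=True, width=32):
    # Model of SHRI on one element: arithmetic shifts replicate the most
    # significant bit into vacated positions, logical shifts fill with zeroes.
    mask = (1 << width) - 1
    value &= mask
    if signed and value & (1 << (width - 1)):
        value -= 1 << width   # reinterpret as negative two's complement
    return (value >> nbits) & mask
```

Shifting 0x80000000 right by 4 yields 0xF8000000 arithmetically (sign bits fill in) but 0x08000000 logically.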
  • Fig. 22A illustrates a store indirect with scaled immediate update (SII) instruction 2200
  • Fig. 22B shows a syntax operation table 2210 for the SII instruction 2200 of Fig. 22A.
  • SII instruction 2200 stores a byte, halfword, word, or doubleword operand to SP memory from an SP source register or to PE local memory from a PE source register.
  • Source address register An is updated prior to (pre-decrement/pre-increment) or after (post-decrement/post-increment) its use as the operand effective address.
  • the update to An is an addition or subtraction of the unsigned 7-bit update value UPDATE7 scaled by the size of the operand being stored. In other words, no scaling is done for a byte, scaling by 2 for a halfword, 4 for a word, or 8 for a doubleword.
  • the storage instructions of the 1x8 IDCT code table 1700 of Fig. 17, are modified as shown in code table 2300 of Fig. 23. Note the storage offsets in the SII instructions.
  • the "+8” updates the address pointer so that an element is stored in the transpose format.
  • the "-55" resets the address pointer to the beginning of the next column.
  • the intermediate output is stored in transpose format.
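With 16-bit intermediate elements, the "+8" and "-55" element offsets trace the following store pattern for each row's eight outputs. This sketch infers seven "+8" post-increments followed by one "-55" per row, which is one consistent reading of the description:

```python
def transpose_store_indices(row):
    # Post-increment offsets used by the SII stores, in halfword elements:
    # seven "+8" steps place a row's outputs down one column of the 8x8
    # intermediate array; the final "-55" lands on the start of the next
    # column (row + 56 - 55 = row + 1).
    idx, out = row, []
    for step in [8] * 7 + [-55]:
        out.append(idx)
        idx += step
    return out, idx
```

Row 0's outputs land at elements 0, 8, 16, ..., 56 (column 0 of the intermediate array) and the pointer finishes at element 1, the top of column 1.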
  • A0 is loaded with the address of the intermediate data
  • A1 is loaded with the start of the cosine coefficient table again
  • A2 is loaded with the address of the final output data's destination.
  • R30 is again loaded with either the rounding constant or 0.
  • R26 is loaded with the permute control word (0x32107654).
  • the code consists of VLIW loads (VIM initialization), Figs. 18A-C, then the actual execution code for the first dimension, Figs. 18D-H, and then the code for the processing of the second dimension, Figs. 18I-M.
  • the final output is stored in row-major order at the location specified by output_buffer.
  • the total cycle count is 202 cycles for a single 2D 8x8 IDCT. It is noted that while this version of the IDCT is not as fast as the 34-cycle IDCT on a 2x2 array, the scalable algorithm runs on a single processing element.
  • the IDCT runs on a single SP.
  • four 8x8 IDCTs can be performed at once.
  • the effective cycle count is about 50 cycles per IDCT, as the 202 cycles are divided by the total number of PEs. If a 2x4 array were to be used for video decompression, the effective cycle count would be only 25 cycles.
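The effective-cycle arithmetic is simply the single-PE cycle count divided across the PEs, since each PE runs an independent 8x8 IDCT in SIMD:

```python
def effective_cycles(total_cycles, num_pes):
    # With one independent 8x8 IDCT running on each PE in SIMD mode, the
    # per-IDCT cost is the single-PE cycle count divided by the PE count.
    return total_cycles / num_pes
```

For the 2x2 array this gives 202 / 4 = 50.5 (about 50 cycles per IDCT); for a 2x4 array, 202 / 8 = 25.25 (about 25 cycles).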
  • cosine coefficients are replicated in each PE's local memory and the 8x8 data for each locally computed 8x8 2D IDCT is stored in each local PE memory. After the data is distributed, the process is then run in SIMD mode on an array of PEs.
  • the output of each IDCT is in row-major order. No further data manipulation is required as in the 34-cycle version. While the cycle count for this IDCT is slightly higher, the effective throughput demand on a video decompression engine is reduced. Third, the VIM resources for this process are less: by using a more regular pattern, this approach uses only 11 VLIW locations instead of the 27 required for the 34-cycle version.
  • Fig. 24 shows an exemplary process 2400 for a 2x2 DCT implementation expected to take 35 cycles to run.
  • the illustrated process 2400 starts with input data already loaded on the PEs of a 2x2 array such as that of Fig. 1. It ends with the data across the 2x2 array, but not in row-major order. Actual coding and testing of the process has not been completed and various adaptations and adjustments should be expected to optimize the final process for a desired end application.


Abstract

Many video processing applications are time constrained applications with multiple complex computation intensive algorithms such as the two-dimensional 8x8 inverse discrete cosine transform (IDCT). It is advantageous for meeting performance requirements to have a programmable processor that can achieve extremely high performance on the 2D 8x8 function. The ManArray 2x2 processor (100) is able to process the 2D 8x8 IDCT in 34-cycles and meet the IEEE standard 1180-1990 for precision of the IDCT. A unique distributed 2D 8x8 IDCT process is presented along with the unique data placement supporting the high performance algorithm. Also, a scalable 2D 8x8 IDCT algorithm operable on various distinct arrays of differing numbers of processors is presented. Very long instruction word memory (VIM) (109) size is minimized by reuse of VLIWs, and further application processing is streamlined by having the IDCT results output in a standard row-major order. The techniques are applicable to cosine transforms generally.

Description

Methods and Apparatus For Efficient Cosine Transform Implementations
Cross-Reference to Related Applications
The present application claims the benefit of U.S. Provisional Application Serial No. 60/165,337 entitled "Methods and Apparatus for Efficient Cosine Transform Implementations" filed November 12, 1999, which is herein incorporated by reference in its entirety.
Field of the Invention
The present invention relates generally to improvements in parallel processing, and more particularly to methods and apparatus for efficient cosine transform implementations on the manifold array ("ManArray") processing architecture.
Background of the Invention
Many video processing applications, such as the moving picture experts group (MPEG) decoding and encoding standards, use a discrete cosine transform (DCT) and its inverse, the inverse discrete cosine transform (IDCT), in their compression algorithms. The compression standards are typically complex and specify that a high data rate must be handled. For example, MPEG at Main Profile and Main Level specifies 720x576 picture elements (pels) per frame at 30 frames per second and up to 15 Mbits per second. The MPEG Main Profile at High Level specifies 1920x1152 pels per frame at 60 frames per second and up to 80 Mbits per second. Video processing is a time constrained application with multiple complex compute intensive algorithms such as the two dimensional (2D) 8x8 IDCT. The consequence is that processors with high clock rates, fixed function application specific integrated circuits (ASICs), or combinations of fast processors and ASICs are typically used to meet the high processing load. Having efficient 2D 8x8 DCT and IDCT implementations is of great advantage to providing a low cost solution. Prior art approaches, such as Pechanek et al. U.S. Patent No. 5,546,336, used a specialized folded memory array with embedded arithmetic elements to achieve high performance with 16 processing elements. The folded memory array and large number of processing elements do not map well to a low cost regular silicon implementation. It will be shown in the present invention that high performance cosine transforms can be achieved with one quarter of the processing elements as compared to the 16 PE Mfast design in a regular array structure without need of a folded memory array. In addition, the unique instructions, indirect VLIW capability, and use of the ManArray network communication instructions allow a general programmable solution of very high performance.
Summary of the Invention
To this end, the ManArray processor as adapted as further described herein provides efficient software implementations of the IDCT using the ManArray indirect very long instruction word (iVLIW) architecture and a unique data-placement that supports software pipelining between processor elements (PEs) in the 2x2 ManArray processor. For example, a two-dimensional (2D) 8x8 IDCT, used in many video compression algorithms such as MPEG, can be processed in 34-cycles on a 2x2 ManArray processor and meets IEEE Standard 1180-1990 for precision of the IDCT. The 2D 8x8 DCT algorithm, using the same distributed principles covered in the distributed 2D 8x8 IDCT, can be processed in 35-cycles on the same 2x2 ManArray processor. With this level of performance, the clock rate can be much lower than is typically used in MPEG processing chips thereby lowering overall power usage.
An alternative software process for implementing the cosine transforms on the ManArray processor provides a scalable algorithm that can be executed on various arrays, such as a 1x1, a 1x2, a 2x2, a 2x3, a 2x4, and so on allowing scalable performance. Among its other aspects, this new software process makes use of the scalable characteristics of the ManArray architecture, unique ManArray instructions, and a data placement optimized for the MPEG application. In addition, due to the symmetry of the algorithm, the number of VLIWs is minimized through reuse of VLIWs in the processing of both dimensions of the 2D computation.
The present invention defines a collection of eight hardware ManArray instructions that use the ManArray iVLIW architecture and communications network to efficiently calculate the distributed two-dimensional 8x8 IDCT. In one aspect of the present invention, appropriate data distribution and software pipeline techniques are provided to achieve a 34- cycle distributed two-dimensional 8x8 IDCT on a 2x2 ManArray processor that meets IEEE Standard 1180-1990 for precision of the IDCT. In another aspect of the present invention, appropriate data distribution patterns are used in local processor element memory in conjunction with a scalable algorithm to effectively and efficiently reuse VLIW instructions in the processing of both dimensions of the two dimensional algorithm.
These and other advantages of the present invention will be apparent from the drawings and the Detailed Description which follow.
Brief Description of the Drawings
Fig. 1 illustrates an exemplary 2x2 ManArray iVLIW processor;
Fig. 2 illustrates an IDCT signal flow graph;
Fig. 3A illustrates input data in 8x8 matrix form;
Fig. 3B illustrates reordered input 8x8 data matrix per the signal flow graph of Fig. 2;
Fig. 3C illustrates the input 8x8 data matrix after it has been folded on the vertical axis;
Fig. 3D illustrates the input 8x8 data matrix after it has been folded on the horizontal axis with data placement per PE in the 2x2 ManArray iVLIW processor of Fig. 1;
Fig. 3E illustrates an example of data placement into the ManArray PE compute register files; Fig. 4 illustrates the data of Fig. 3D as loaded into the four PEs' registers;
Fig. 5A illustrates a presently preferred execute VLIW instruction, XV;
Fig. 5B illustrates the syntax and operation of the XV instruction of Fig. 5A; Fig. 6A illustrates a presently preferred sum of two products instruction, SUM2P;
Fig. 6B illustrates the syntax and operation of the SUM2P instruction of Fig. 6A;
Fig. 7A illustrates a presently preferred sum of two products accumulate instruction, SUM2PA; Fig. 7B illustrates the syntax and operation of the SUM2PA instruction of Fig. 7A;
Fig. 8A illustrates a presently preferred butterfly with saturate instruction, BFLYS;
Fig. 8B illustrates the syntax and operation of the BFLYS instruction of Fig. 8A;
Fig. 9A illustrates a presently preferred addition instruction, ADD;
Fig. 9B illustrates the syntax and operation of the ADD instruction of Fig. 9A; Fig. 10A illustrates a presently preferred permute instruction, PERM;
Fig. 10B illustrates the syntax and operation of the PERM instruction of Fig. 10A;
Fig. 10C illustrates one example of an 8-byte to 4-byte permute operation;
Fig. 10D illustrates another example of an 8-byte to 4-byte permute operation;
Fig. 10E illustrates one example of an 8-byte to 8-byte permute operation; Fig. 11A illustrates a presently preferred PE to PE exchange instruction, PEXCHG;
Fig. 11B illustrates the syntax and operation of the PEXCHG instruction of Fig. 11A;
Fig. 11C illustrates a 2x2 cluster switch arrangement showing PE and cluster switch notation;
Fig. 11D illustrates a key to PEXCHG 2x2 Operation Table with an example path highlighted;
Fig. 11E illustrates the PEXCHG 1x1 Operation Table;
Fig. 11F illustrates the PEXCHG 1x2 Operation Table;
Fig. 11G illustrates the PEXCHG 2x2 Operation Table;
Fig. 11H illustrates a 2x2 PEXCHG operation; Fig. 12A illustrates a presently preferred load modulo indexed with scaled update instruction, LMX;
Fig. 12B illustrates the syntax and operation of the LMX instruction of Fig. 12A; Fig. 13A illustrates the first 18-cycles of ManArray 8x8 2D IDCT program code and the VLIWs associated with each XV program instruction; and
Fig. 13B illustrates cycles 19-34 of the ManArray 8x8 2D IDCT program code; Fig. 14 illustrates an 8x8 input data matrix; Fig. 15 illustrates the intermediate 8x8 results after processing the first dimension of the 8x8 IDCT algorithm;
Fig. 16 illustrates the 8x8 output pixel results after processing of both dimensions of the 8x8 IDCT algorithm;
Fig. 17 illustrates a 1x8 IDCT algorithm in VLIW coding format; Figs. 18A-M illustrate 2D 8x8 IDCT ManArray code;
Fig. 19A illustrates a load indirect with scaled immediate update instruction, LII; Fig. 19B illustrates the syntax and operation of the LII instruction of Fig. 19A; Fig. 20A illustrates a subtract instruction, SUB;
Fig. 20B illustrates the syntax and operation of the SUB instruction of Fig. 20A; Fig. 21A illustrates a shift right immediate instruction, SHRI;
Fig. 21B illustrates the syntax and operation of the SHRI instruction of Fig. 21A; Fig. 22A illustrates a store indirect with scaled immediate update instruction, SII; Fig. 22B illustrates the syntax and operation of the SII instruction of Fig. 22A; Fig. 23 illustrates exemplary 2D 8x8 IDCT code showing the software pipeline operation; and
Fig. 24 illustrates an exemplary process for a 2x2 DCT.
Detailed Description
Further details of a presently preferred ManArray architecture for use in conjunction with the present invention are found in U.S. Patent Application Serial No. 08/885,310 filed June 30, 1997, now U.S. Patent No. 6,023,753, U.S. Patent Application Serial No.
08/949,122 filed October 10, 1997, U.S. Patent Application Serial No. 09/169,255 filed
October 9, 1998, U.S. Patent Application Serial No. 09/169,256 filed October 9, 1998, U.S. Patent Application Serial No. 09/169,072 filed October 9, 1998, U.S. Patent Application Serial No. 09/187,539 filed November 6, 1998, U.S. Patent Application Serial No. 09/205,558 filed December 4, 1998, U.S. Patent Application Serial No. 09/215,081 filed December 18, 1998, U.S. Patent Application Serial No. 09/228,374 filed January 12, 1999 and entitled "Methods and Apparatus to Dynamically Reconfigure the Instruction Pipeline of an Indirect Very Long Instruction Word Scalable Processor", U.S. Patent Application Serial No. 09/238,446 filed January 28, 1999, U.S. Patent Application Serial No. 09/267,570 filed March 12, 1999, U.S. Patent Application Serial No. 09/337,839 filed June 22, 1999, U.S. Patent Application Serial No. 09/350,191 filed July 9, 1999, U.S. Patent Application Serial No. 09/422,015 filed October 21, 1999 entitled "Methods and Apparatus for Abbreviated Instruction and Configurable Processor Architecture", U.S. Patent Application Serial No. 09/432,705 filed November 2, 1999 entitled "Methods and Apparatus for Improved Motion Estimation for Video Encoding", U.S. Patent Application Serial No. 09/471,217 filed December 23, 1999 entitled "Methods and Apparatus for Providing Data Transfer Control", U.S. Patent Application Serial No. 09/472,372 filed December 23, 1999 entitled "Methods and Apparatus for Providing Direct Memory Access Control", U.S. Patent Application Serial No. 09/596,103 entitled "Methods and Apparatus for Data Dependent Address Operations and Efficient Variable Length Code Decoding in a VLIW Processor" filed June 16, 2000, U.S. Patent Application Serial No. 09/598,567 entitled "Methods and Apparatus for Improved Efficiency in Pipeline Simulation and Emulation" filed June 21, 2000, U.S. Patent Application Serial No. 
09/598,564 entitled "Methods and Apparatus for Initiating and Resynchronizing Multi-Cycle SIMD Instructions" filed June 21, 2000, U.S. Patent Application Serial No. 09/598,566 entitled "Methods and Apparatus for Generalized Event Detection and Action Specification in a Processor" filed June 21, 2000, and U.S. Patent Application Serial No. 09/599,980 entitled "Methods and Apparatus for Establishing Port Priority Functions in a VLIW Processor" filed June 21, 2000, as well as, Provisional Application Serial No. 60/113,637 entitled "Methods and Apparatus for Providing Direct Memory Access (DMA) Engine" filed December 23, 1998, Provisional Application Serial No. 60/1 13,555 entitled "Methods and Apparatus Providing Transfer Control" filed December 23, 1998, Provisional Application Serial No. 60/139,946 entitled "Methods and Apparatus for Data Dependent Address Operations and Efficient Variable Length Code Decoding in a VLIW Processor" filed June 18, 1999, Provisional Application Serial No. 60/140,245 entitled "Methods and Apparatus for Generalized Event Detection and Action Specification in a Processor" filed June 21, 1999, Provisional Application Serial No. 60/140,163 entitled "Methods and Apparatus for Improved Efficiency in Pipeline Simulation and Emulation" filed June 21, 1999, Provisional Application Serial No. 60/140,162 entitled "Methods and Apparatus for Initiating and Re- Synchronizing Multi-Cycle SIMD Instructions" filed June 21, 1999, Provisional Application Serial No. 60/140,244 entitled "Methods and Apparatus for Providing One-By-One Manifold Array (lxl ManArray) Program Context Control" filed June 21 , 1999, Provisional Application Serial No. 60/140,325 entitled "Methods and Apparatus for Establishing Port Priority Function in a VLIW Processor" filed June 21, 1999, Provisional Application Serial No. 
60/140,425 entitled "Methods and Apparatus for Parallel Processing Utilizing a Manifold Array (ManArray) Architecture and Instruction Syntax" filed June 22, 1999, Provisional Application Serial No. 60/165,337 entitled "Efficient Cosine Transform Implementations on the ManArray Architecture" filed November 12, 1999, and Provisional Application Serial No. 60/171,911 entitled "Methods and Apparatus for DMA Loading of Very Long Instruction Word Memory" filed December 23, 1999, Provisional Application Serial No. 60/184,668 entitled "Methods and Apparatus for Providing Bit-Reversal and Multicast Functions Utilizing DMA Controller" filed February 24, 2000, Provisional Application Serial No. 60/184,529 entitled "Methods and Apparatus for Scalable Array Processor Interrupt Detection and Response" filed February 24, 2000, Provisional Application Serial No. 60/184,560 entitled "Methods and Apparatus for Flexible Strength Coprocessing Interface" filed February 24, 2000, and Provisional Application Serial No. 60/203,629 entitled "Methods and Apparatus for Power Control in a Scalable Array of Processor Elements" filed May 12, 2000, respectively, all of which are assigned to the assignee of the present invention and incorporated by reference herein in their entirety.
In a presently preferred embodiment of the present invention, a ManArray 2x2 iVLIW single instruction multiple data stream (SIMD) processor 100 shown in Fig. 1 comprises a sequence processor (SP) controller combined with a processing element-0 (PE0) SP PE0 101, as described in further detail in U. S. Patent Application Serial No. 09/169,072 entitled "Methods and Apparatus for Dynamically Merging an Array Controller with an Array Processing Element". Three additional PEs 151, 153, and 155, are also utilized to demonstrate the efficient algorithm and mechanisms for fast cosine transforms on the
ManArray architecture in accordance with the present invention. It is noted that PEs can also be labeled with their matrix positions as shown in parenthesis for PE0 (PE00) 101, PE1 (PE01) 151, PE2 (PE10) 153, and PE3 (PE11) 155. The SP/PE0 101 contains the fetch controller 103 to allow the fetching of short instruction words (SIWs) from a 32-bit instruction memory 105. The fetch controller 103 provides the typical functions needed in a programmable processor such as a program counter (PC), branch capability, event point (EP) loop operations (see U.S. Application Serial No. 09/598,556 entitled "Methods and Apparatus for Generalized Event Detection and Action Specification in a Processor" filed June 21, 2000 and claiming the benefit of U.S. Provisional Application Serial No. 60/140,245 for further details), and support for interrupts. It also provides the instruction memory control which could include an instruction cache if needed by an application. In addition, the SIW I-Fetch controller 103 dispatches 32-bit SIWs to the other PEs in the system by means of the 32-bit instruction bus 102.
In this exemplary system, common elements are used throughout to simplify the explanation, though actual implementations are not limited to this restriction. For example, the execution units 131 in the combined SP/PE0 101 can be separated into a set of execution units optimized for the control function, e.g., fixed point execution units, and the PE0 as well as the other PEs can be optimized for a floating point application. For the purposes of this description, it is assumed that the execution units 131 are of the same type in the SP/PEO and the PEs. In a similar manner, SP/PEO and the other PEs use a five instruction slot iVLIW architecture which contains a very long instruction word memory (VIM) 109 and an instruction decode and VIM controller function unit 107 which receives instructions as dispatched from the SP/PEO's I-Fetch unit 103 and generates the VIM addresses-and-control signals 108 required to access the i VLIWs stored in VIM. Referenced instruction types are identified by the letters SLAMD in VIM 109, where the letters are matched up with instruction types as follows: Store (S), Load (L), ALU (A), MAU (M), and DSU (D). The basic concept of loading of the iVLIWs is described in greater detail in U.S. Patent Application Serial No. 09/187,539 entitled "Methods and Apparatus for Efficient Synchronous MIMD Operations with iVLIW PE-to-PE Communication". Also contained in the SP/PEO and the other PEs is a common PE configurable register file 127 which is described in further detail in U.S. Patent Application Serial No. 09/169,255 entitled "Methods and Apparatus for Dynamic Instruction Controlled Reconfiguration Register File with Extended Precision".
Due to the combined nature of the SP/PEO, the data memory interface controller 125 must handle the data processing needs of both the SP controller, with SP data in memory 121, and PE0, with PE0 data in memory 123. The SP/PEO controller 125 also is the controlling point of the data that is sent over the 32-bit broadcast data bus 126. The other PEs, 151, 153, and 155 contain common physical data memory units 123', 123", and 123"' though the data stored in them is generally different as required by the local processing done on each PE. The interface to these PE data memories is also a common design in PEs 1, 2, and 3 and indicated by PE local memory and data bus interface logic 157, 157' and 157". Interconnecting the PEs for data transfer communications is the cluster switch 171 various aspects of which are described in greater detail in U.S. Patent Application Serial No. 08/885,310 entitled "Manifold Array Processor", U.S. Application Serial No. 09/949,122 entitled "Methods and Apparatus for Manifold Array Processing", and U.S. Application Serial No. 09/169,256 entitled "Methods and Apparatus for ManArray PE-to-PE Switch Control". The interface to a host processor, other peripheral devices, and/or external memory can be done in many ways. For completeness, a primary interface mechanism is contained in a direct memory access (DMA) control unit 181 that provides a scalable ManArray data bus 183 that connects to devices and interface units external to the ManArray core. The DMA control unit 181 provides the data flow and bus arbitration mechanisms needed for these external devices to interface to the ManArray core memories via the multiplexed bus interface symbolically represented by 185. A high level view of the ManArray Control Bus (MCB) 191 is also shown in Fig. 1.
All of the above noted patents are assigned to the assignee of the present invention and incorporated herein by reference in their entirety. Inverse Discrete Cosine Transform
The 2D 8x8 IDCT equation is given as:

p(x,y) = (1/4) Σ_{u=0}^{7} Σ_{v=0}^{7} C(u) C(v) f(u,v) cos[(2x+1)uπ/16] cos[(2y+1)vπ/16]

Where: C(u) = 1/√2 for u = 0
C(u) = 1 for u > 0
C(v) = 1/√2 for v = 0
C(v) = 1 for v > 0
f(u,v) = 2D DCT coefficient
p(x,y) = 2D result value
This 2D 8x8 IDCT matrix can be separated into 1D 1x8 IDCTs on the rows and columns. The ManArray 2D 8x8 IDCT uses this property and applies the following 1D 1x8 IDCT on all eight rows (columns) followed by applying it to all eight columns (rows).
P(x) = (1/2) Σ_{u=0}^{7} C(u) F(u) cos[(2x+1)uπ/16]

Where: C(u) = 1/√2 for u = 0
C(u) = 1 for u > 0
F(u) = 1D DCT coefficient
P(x) = 1D result value
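As a floating-point reference for the 1D 1x8 IDCT equation above (the PE code itself uses fixed-point halfword arithmetic, so this sketch is for checking values, not a model of the hardware):

```python
import math

def idct_1x8(coeffs):
    # Direct evaluation of P(x) = (1/2) * sum over u of
    # C(u) * F(u) * cos((2x+1) * u * pi / 16), for x = 0..7.
    out = []
    for x in range(8):
        s = 0.0
        for u in range(8):
            c = 1.0 / math.sqrt(2.0) if u == 0 else 1.0
            s += c * coeffs[u] * math.cos((2 * x + 1) * u * math.pi / 16)
        out.append(s / 2.0)
    return out
```

A DC-only input F = (8, 0, ..., 0) produces the constant output 8/(2√2) = 2√2 in every position, a convenient sanity check.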
The calculation of the 2D 8x8 IDCT in 34-cycles does not include the loading of data into the processing elements of the array since the equation does not account for a data loading function and there are many methods to accomplish the data loading such as direct memory access (DMA) and other means as dictated by the application program. For example, in an MPEG decoder, compressed encoded data is received and sequentially preprocessed by variable length decoding, run length and inverse scan steps and then placed in the PEs, after which the data is further processed by an inverse quantization step prior to using the IDCT processing step. This IDCT algorithm can also be used in an MPEG encoder and other applications requiring an IDCT function. For the fast IDCT ManArray 2x2 approach of the present invention, the data is uniquely placed in the four PEs based upon symmetric properties of the IDCT signal flow graph as shown in Fig. 2. The present approach separates the 2-dimensional 8x8 IDCT into 1-dimensional 1x8 IDCTs on the rows and then on the columns. The symmetric form of the 1x8 IDCT was used in a prior art processor further described in a paper "M.F.A.S.T.: A Highly Parallel Single Chip DSP with a 2D IDCT Example", by G. Pechanek, C. W. Kurak, C. J. Glossner, C. H. L. Moller, and S. J. Walsh. The approach described in the M.F.A.S.T. article required a special folding of the data on the diagonal of the data matrix to determine the data placement for the diagonally folded 4x4 M.F.A.S.T. array of 16 processing elements to accomplish the 2D 8x8 IDCT processing in 18-22 cycles. The present invention describes a 2D 8x8 IDCT algorithm on a 4-processing element array in 34-cycles, a performance that is less than twice the processing time with only one quarter of the processing elements as compared to the M.F.A.S.T. processor. In addition, no special folding of the data matrix on the diagonal is required. 
The present ManArray approach uses a data placement that matches the signal flow graph directly.
Fig. 2 illustrates a signal flow graph 200 for a symmetric 1x8 IDCT operation in accordance with the present invention in which the input data elements are row elements given as fz,j, where z = {0,1,...,7} is the row identifier and j = {0,1,...,7} is the column identifier. The 1x8 IDCT is composed of a sum of products stage-1 210 and a butterfly stage-2 220. The stage-1 operation consists of 32 multiply operations indicated by the input being multiplied (*) by cosine coefficients indicated by a "pc#", where p indicates a positive value, or "mc#", where m indicates a negative value. Groups of four multiplied values are then summed as indicated in the signal flow graph by the eight 4-input adders 211-218. The sum of product results are identified by the letters A-H. These values are then processed by butterfly operations in stage 220 to generate the output values Pz,j, where z = {0,1,...,7} is the row identifier and j = {0,1,...,7} is the column identifier. This 1x8 IDCT operation is performed on each row of the 8x8 input data followed by the 1x8 IDCT operation on each column to form the final two-dimensional result.
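The two-stage structure of the flow graph can be sketched behaviorally as follows. This is an illustrative model, not the hardware implementation: it assumes the "pc#"/"mc#" coefficients are the standard scaled IDCT cosine weights, with the even inputs feeding sums A-D and the odd inputs feeding sums E-H, followed by the butterfly pairing described above.

```python
import math

def idct_1x8_symmetric(f):
    """Behavioral sketch of the symmetric 1x8 IDCT: a sum-of-products
    stage producing eight 4-input sums (A-H), then a butterfly stage."""
    c = lambda u: math.sqrt(0.5) if u == 0 else 1.0
    # Stage 1: eight 4-input sums of products (32 multiplies total).
    # Even inputs f[0], f[2], f[4], f[6] -> A, B, C, D (one per x = 0..3);
    # odd inputs f[1], f[3], f[5], f[7] -> H, G, F, E respectively.
    even = [0.5 * sum(c(u) * f[u] * math.cos((2 * x + 1) * u * math.pi / 16)
                      for u in (0, 2, 4, 6)) for x in range(4)]
    odd = [0.5 * sum(c(u) * f[u] * math.cos((2 * x + 1) * u * math.pi / 16)
                     for u in (1, 3, 5, 7)) for x in range(4)]
    # Stage 2: butterflies pair (A,H), (B,G), (C,F), (D,E).
    out = [0.0] * 8
    for x in range(4):
        out[x] = even[x] + odd[x]        # e.g. P1 = B + G
        out[7 - x] = even[x] - odd[x]    # e.g. P6 = B - G
    return out
```

The butterfly stage works because the odd cosine terms change sign between outputs x and 7-x while the even terms do not, so only half the sums of products need be computed.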
The input data is shown in Fig. 3A in a row-major matrix 300 where in an MPEG decoder each element represents a DCT coefficient as processed through the variable length decoder. The data types are assumed to be greater than 8-bits at this stage and for ManArray processing 16-bit data types are used for each element. The data elements are identified in both a linear numeric ordering from M0 to M63, and a row and column matrix form corresponding to the input notation used in the signal flow graph 200 of Fig. 2. The data is to be distributed to the PEs of the 2x2 ManArray processor 100 appropriately to allow fast execution as a distributed 2D 8x8 IDCT. Consequently, the data should be distributed among the four PEs to minimize communication operations between the PEs. To discover an appropriate data distribution pattern, it is noted that the input data of the Fig. 2 signal flow graph separates the even data elements from the odd data elements. This even and odd arrangement 310 of the data is shown graphically in Fig. 3B where the rows and columns are ordered in the same even and odd arrangement used in Fig. 2.
It is next noted that the second stage 220 of the signal flow graph 200 of Fig. 2 is a butterfly operation. For example, Pz,1 = B+G while Pz,6 = B-G. The butterfly processing
requires A and H, B and G, C and F, D and E values to be located locally on a PE. This is equivalent to folding the signal flow graph and is further equivalent to folding the input data matrix. Since the 2D 8x8 IDCT operation first operates on the rows (columns) and then the columns (rows), two folds on two different axes must be accounted for. A vertical axis data fold 320 is illustrated in Fig. 3C, while Fig. 3D shows data 330 corresponding to the data of Fig. 3C folded again on the horizontal axis. Fig. 3D further shows how the data would be assigned to the 4 PEs, PE0-PE3, of the 2x2 ManArray processor 100 of Fig. 1. There are a number of ways the data can be loaded into the PEs. For example, in an MPEG decoder, the output of the variable length decoder (VLD) could be in a linear order of 64 packed halfword data elements, two DCT coefficients per word. This packed format is organized to match the packed format required in the IDCT step. The data is then loaded via DMA operation or by using ManArray load instructions. The data is loaded into the PE configurable compute register file, which is used in a 16x64 configuration for high performance packed data operations. Such a data placement 340 is shown in Fig. 3E for each PE and the registers chosen for this description. Within each PE, the even-indexed 16-bit data values are placed together in a compute register file (CRF) high halfword (H1) and low halfword (H0) 32-bit register, for example, register blocks 350 and 352 of Fig. 3E. In the same manner, the odd-indexed 16-bit data values are placed together in a CRF high halfword (H1) and low halfword (H0) 32-bit register, for example, register blocks 360 and 362 of Fig. 3E. This data placement supports the signal flow graph of Fig. 2 where the even-indexed input values and odd-indexed input values provide two independent sets of inputs for the sum of product operations.
For example, the four sum of product values A, B, C and D are generated from only the even-indexed input values. The register data placement of Fig. 3E is shown in a compute register file format 400 in Fig. 4. It is noted that a number of different register choices are possible while still achieving the same performance and using the same processing approach. It is further noted that different data orderings that allow the same efficient sum of four products operation are also possible while still achieving the same performance and using the inventive approach.
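The reorder-then-fold placement can be sketched as follows. The actual arrangement is the one specified by Figs. 3B-3D; this model assumes an even-indices-first ordering and a particular quadrant-to-PE assignment purely for illustration, keeping each value together with its fold partners.

```python
def fold_for_2x2(matrix):
    """Illustrative sketch: reorder an 8x8 matrix into an even/odd
    arrangement, fold on the vertical then horizontal axes so that
    butterfly partners land together, and split into four PE quadrants."""
    order = [0, 2, 4, 6, 1, 3, 5, 7]          # assumed even-first ordering
    m = [[matrix[r][c] for c in order] for r in order]
    # Vertical fold: keep 4 columns, pairing column k with its mirror 7-k.
    v = [[(row[k], row[7 - k]) for k in range(4)] for row in m]
    # Horizontal fold: keep 4 rows, pairing row k with its mirror 7-k.
    h = [[v[k][c] + v[7 - k][c] for c in range(4)] for k in range(4)]
    # Split the folded 4x4 (of 4-tuples) into 2x2 quadrants, one per PE
    # (the quadrant-to-PE assignment here is an assumption).
    quad = lambda r0, c0: [row[c0:c0 + 2] for row in h[r0:r0 + 2]]
    return {'PE0': quad(0, 0), 'PE1': quad(0, 2),
            'PE2': quad(2, 0), 'PE3': quad(2, 2)}
```

Each PE ends up holding 16 of the 64 values, with every value co-located with the three partners it is combined with across the two butterfly passes, so no inter-PE communication is needed for the butterflies.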
In a presently preferred embodiment, the strategy for processing the IDCT algorithm is implemented using a number of features of the ManArray processor to provide a unique software pipeline employing indirect VLIWs operating in multiple PEs. For example, the indirect execute VLIW (XV) instruction 500 shown in Fig. 5A has a unique enable feature, E = SLAMD, specified in bits 14-10, which allows a software pipeline to be built up or torn down without creating new VLIWs. A syntax/operation table 510 for the XV instruction 500 is shown in Fig. 5B. Quad 16x16 multiplications are used in the sum of two products with and without accumulate instructions to produce two 32-bit results. These instructions 600 and 700 are shown in Figs. 6A and 7A. Their syntax/operation tables 610 and 710 are shown in Figs. 6B and 7B. The use of common instructions that can execute in either the ALU or MAU or both, for example the butterfly with saturate instruction 800 of Fig. 8A, improves performance. A syntax/operation table 810 for BFLYS instruction 800 is shown in Fig. 8B. Other instructions common to the MAU and ALU include ADD instruction 900 of Fig. 9A having syntax/operation table 910 of Fig. 9B, add immediate (ADDI), add with saturate (ADDS), butterfly divide by 2 (BFLYD2), butterfly with saturate (BFLYS), the mean of two numbers (MEAN2), subtract (SUB), subtract immediate (SUBI), and subtract with saturate (SUBS). In addition, the DSU supports permute instruction 1000 (PERM) of Fig. 10A having syntax/operation table 1010 of Fig. 10B to aid in organizing the data prior to processing, and communication between PEs by use of the PEXCHG instruction 1100 of Fig. 11A having syntax/operation table 1110 of Fig. 11B. The PEXCHG instruction 1100 used in the IDCT algorithm swaps two data items between two PEs. The loading of the cosine coefficients is accomplished with a load modulo indexed with scaled update instruction 1200 (LMX) of Fig. 12A having syntax/operation table 1210 of Fig. 12B.
As addressed further below, additional aspects of the present invention are illustrated in Figs. 10C-10E and Figs. 11C-11H as follows. Fig. 10C illustrates one example of an 8-byte to 4-byte permute operation 1020. Fig. 10D illustrates another example of an 8-byte to 4-byte permute operation 1030. Fig. 10E illustrates an example of an 8-byte to 8-byte permute operation 1040. Fig. 11C illustrates a 2 x 2 cluster switch arrangement 1130 showing PE and cluster switch notation. Fig. 11D illustrates a key to PEXCHG 2 x 2 operation table 1140. Figs. 11E-11G illustrate PEXCHG 1 x 1, 1 x 2 and 2 x 2 operation tables, 1160, 1170 and 1180, respectively. Fig. 11H illustrates a 2 x 2 PEXCHG operation 1150.
Prior to beginning the IDCT process, the frequency or DCT coefficient data and the first set of cosine coefficients are loaded onto the four PEs, some pointers and control values are initialized, and a rounding bit used to enhance the precision of the results is loaded. The IDCT signal flow graph 200 for each row (column), Fig. 2, begins with a sum of products operation consisting of thirty-two 16x16 multiplications and twenty-four 2-to-1 32-bit additions for each row (column) processed. The ManArray IDCT algorithm maintains maximum precision through the sum-of-products operation. The second stage of the 1D
IDCT signal flow graph for each of the eight rows (columns) of processing illustrated in Fig. 2 uses four butterfly operations where each butterfly operation consists of a separate add and subtract on 16-bit data. At this point in the ManArray IDCT process, the most significant 16-bits of the 32-bit sum-of-product data are operated on, producing the final 1x8 IDCT results in 16-bit form. A total of 32 butterfly operations are required for the eight rows (columns), which is accomplished in four cycles on the 2x2 array using the BFLYS instruction since its execution produces two butterfly operations per PE. Many of these butterfly operations are overlapped by use of VLIWs with other IDCT computation to improve performance. The 1x8 IDCT operations are now repeated on the columns (rows) to complete the 2D 8x8 IDCT. The load unit (LU) is also busy supporting the IDCT computations by loading the appropriate data cycle by cycle. To provide further details of these operations, the 8x8 2D IDCT program 1300 shown in Figs. 13A and 13B is discussed below. The first column 1305, labeled Clk, indicates the program instruction execution cycle counts. Fig. 13A depicts steps or cycles 1-18 for the program 1300 and Fig. 13B depicts steps or cycles 19-34 of that program. These illustrated steps are made up primarily of 32-bit XV instructions listed under the column 1310 titled 8x8 2D IDCT program. For VLIW slot instructions, such as instruction 1341 in column 1340 of Fig. 13A or instruction 1342 in column 1340 of Fig. 13B, the instructions are struck out or crossed through. In these instances, the instructions shown are located in VLIW memory, but are not enabled by the E= parameter of the XV instruction that selects that VLIW for execution. Column 1310 lists the actual program stored in 32-bit instruction memory. The VLIWs which the XVs indirectly selected for execution are stored in the local PE VIMs.
The other columns 1320, 1330, 1340, 1350 and 1360 represent the SLAMD VLIWs that are indirectly executed with each XV instruction. The XV syntax 510 of Fig. 5B as used in column 1310 of Figs. 13A and 13B is as follows: XV is the base mnemonic, .p indicates this is a PE instruction, V0 selects the V0 VIM base address register, # represents the VIM offset value added to V0 to create the VIM VLIW address, the E=SLAMD indicates which execution units are enabled for that execution cycle, and F=AMD indicates the unit affecting flags selection option. Note that with the E=SLAMD option VLIWs can sometimes be reused, as in this IDCT algorithm where the first six XVs Clk=1, 2, ..., 6 are repeated in XVs Clk=9, 10, ..., 14 respectively and differ in use by appropriate choice of the E= enabled unit. For the anticipated use of this ManArray IDCT algorithm, no store operations are required and all store units have a no operation (nop) indicated in each VLIW.
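The slot-enable reuse described above can be sketched conceptually. The slot names follow the patent's SLAMD notation; the dictionary model is purely illustrative and is not how the hardware represents a VLIW.

```python
def xv_execute(vliw, enable):
    """Conceptual sketch of XV slot enabling: only the slots named in the
    E= mask (any of S, L, A, M, D) execute; disabled slots behave as nops.
    One stored VLIW can thus serve the pipeline prologue, steady state,
    and epilogue without loading new VLIWs into VIM."""
    return {slot: op if slot in enable else 'nop' for slot, op in vliw.items()}
```

For example, a prologue XV might enable only the load and multiply slots (E=LM) of a VLIW whose full steady-state form is E=SLAMD, and an epilogue XV only the arithmetic slots, all against the same VIM entry.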
The instructions used in the program are described in further detail as follows: • The XV instruction 500 is used to execute an indirect VLIW (iVLIW). The iVLIWs that are available for execution by the XV instruction are stored at individual addresses of the specified SP or PE VLIW memory (VIM). The VIM address is computed as the sum of a base VIM address register Vb (V0 or V1) plus an unsigned 8-bit offset VIMOFFS. The VIM address must be in the valid range for the hardware configuration otherwise the operation of this instruction is undefined. Any combination of individual instruction slots may be executed via the execute slot parameter 'E={SLAMD}', where S=store unit (SU), L=load unit (LU), A=arithmetic logic unit (ALU), M=multiply accumulate unit (MAU), D=data select unit (DSU). A blank 'E=' parameter does not execute any slots. The unit affecting flags (UAF) parameter 'F=[AMDN]' overrides the UAF specified for the VLIW when it was loaded via the LV instruction. The override selects which arithmetic instruction slot (A=ALU, M=MAU, D=DSU) or none (N=NONE) is allowed to set condition flags for this execution of the VLIW. The override does not affect the UAF setting specified via the LV instruction. A blank 'F=' selects the UAF specified when the VLIW was loaded.
• The SUM2P instruction 600 operation version 610: The product of the high halfwords of the even and odd source registers (Rxe, Rxo) and (Rye, Ryo) are added to the product of the low halfwords of the even and odd source registers (Rxe, Rxo) and (Rye, Ryo) and the results are stored in the even and odd target register (Rte, Rto). • The SUM2PA instruction 700 operation version 710: The product of the high halfwords of the even and odd source registers (Rxe, Rxo) and (Rye, Ryo) are added to the product of the low halfwords of the even and odd source registers (Rxe, Rxo) and (Rye, Ryo) and the results are added to the even and odd target register (Rte, Rto) prior to storing the results in (Rte, Rto). • The BFLYS instruction 800: Results of a butterfly operation consisting of a sum and a difference of source registers Rx and Ry are stored in odd/even target register pair Rto||Rte. Saturated arithmetic is performed such that if a result does not fit within the target format, it is clipped to a minimum or maximum as necessary.
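The packed sum-of-products and saturating butterfly semantics just described can be sketched behaviorally. This is a software model of the instruction descriptions above, not the hardware definition; registers are modeled as (hi, lo) tuples of 16-bit halfwords.

```python
def sum2p(rx, ry):
    """Behavioral sketch of SUM2P: rx and ry are register pairs, each
    register a (hi, lo) tuple of 16-bit halfwords.  Each 32-bit target
    word gets hi*hi + lo*lo from the corresponding source registers."""
    return tuple(x[0] * y[0] + x[1] * y[1] for x, y in zip(rx, ry))

def sum2pa(rt, rx, ry):
    """SUM2PA: the same two sums of products, accumulated into the target pair."""
    return tuple(t + s for t, s in zip(rt, sum2p(rx, ry)))

def bflys(rx, ry):
    """BFLYS: butterfly with saturation, returning (rx+ry, rx-ry)
    clipped to the signed 16-bit range."""
    sat = lambda v: max(-32768, min(32767, v))
    return sat(rx + ry), sat(rx - ry)
```

Chaining one SUM2P with one SUM2PA yields a four-term sum of products per 32-bit result, which is exactly the shape of the 4-input adders A-H in the flow graph of Fig. 2.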
• The ADD instruction 900: The sum of source registers Rx and Ry is stored in target register Rt. Multiple packed data type operations are defined in syntax/operation table 910.
• The PERM instruction 1000: Bytes from the source register pair Rxo||Rxe are placed into the target register Rt or Rto||Rte based on corresponding 4-bit indices in permute control word Ry. Figs. 10C, 10D, and 10E depict three exemplary operations 1020, 1030, and 1040 of this instruction. • The PEXCHG instruction 1100: A PE's target register receives data from its input port.
Fig. 11C illustrates a 2x2 cluster switch diagram 1130 and four PEs 1112, 1114, 1116 and 1118 arranged in a 2x2 array. The PE's source register is made available on its output port. The PE's input and output ports are connected to a cluster switch 1120 as depicted in Fig. 11C. The cluster switch is made up of multiplexers 1122, 1124, 1126 and 1128 (muxes), whose switching is controlled by individual PEs. The combination of the
PeXchgCSctrl and the PE's ID controls the PE's mux in the cluster switch. The PEXCHG table specifies how the muxes are controlled as illustrated in the key to PEXCHG 2x2 operation table 1140 shown in Fig. 11D. Each PE's mux control, in conjunction with its partner's mux control, determines how the specified source data is routed to the PE's input port. Each PE also contains a 4-bit hardware physical identification (PID) stored in a special purpose PID register. The 2x2 array uses two bits of the PID. The PID of a PE is unique and never changes. Each PE can take an identity associated with a virtual organization of PEs. This virtual ID (VID) consists of a Gray encoded row and column value. For the allowed virtual organization of PEs 1150, shown in Fig. 11H, the last 2 digits of the VID match the last 2 digits of the PID. Tables 1160, 1170 and 1180 of Figs. 11E, 11F and 11G, respectively, show the control settings for a 1x1, 1x2, and a 2x2 configuration of PEs, respectively.
• The LMX instruction 1200: Loads a byte, halfword, word, or doubleword operand into an SP target register from SP memory or into a PE target register from PE local memory. Even address register Ae contains a 32-bit base address of a memory buffer.
The high halfword of odd address register Ao contains an unsigned 16-bit value representing the memory buffer size in bytes. This value is the modulo value. The low halfword of Ao is an unsigned 16-bit index into the buffer. The index value is updated prior to (pre-decrement) or after (post-increment) its use in forming the operand effective address. A pre-decrement update involves subtracting the unsigned 7-bit update value UPDATE7 scaled by the size of the operand being loaded (i.e. no scale for a byte, 2 for a halfword, 4 for a word, or 8 for a doubleword) from the index. If the resulting index becomes negative, the modulo value is added to the index. A post-increment update involves adding the scaled UPDATE7 to the index. If the resulting index is greater than or equal to the memory buffer size (modulo value), the memory buffer size is subtracted from the index. The effect of the index update is that the index moves a scaled UPDATE7 bytes forward or backward within the memory buffer. The operand effective address is the sum of the base address and the index. Byte and halfword operands can be sign-extended to 32-bits.

An alternate implementation of the 8x8 2D IDCT of the present invention that is scalable so that it is operable on arrays of different numbers of PEs, such as 1x0, 1x1, 1x2, 2x2, 2x3, and so on, is described below. This alternate approach makes efficient use of the SUM2PA ManArray instructions as well as the indirect VLIW architecture. Fig. 14 is a logical representation 1400 in two-dimensional form of the 8x8 IDCT input data, Fiw,jz, as required
for this scalable approach. The ordering shown maintains a relationship of the rows and columns given by the following formula that allows the reuse of the VLIWs and cosine coefficient table without modification for each dimension (rows or columns). By reuse of the VLIWs, the VIM memory is minimized and by common use of the cosine coefficient tables, the local PE data memory is minimized. The relationship between the different matrix elements as shown in Fig. 14 is specified as follows:
Where a selected subscript "iw" is even or odd, and
If iw is even then
jz is odd,
matrix elements xa, xb, xc, xd are non-repeating members of {0,2,4,6}, and
matrix elements ym, yn, yo, yp are non-repeating members of {1,3,5,7}
else jz is even,
matrix elements xa, xb, xc, xd are non-repeating members of {1,3,5,7}, and
matrix elements ym, yn, yo, yp are non-repeating members of {0,2,4,6}.
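The subscript rule above can be expressed as a small predicate. This is only a restatement of the stated condition for checking candidate orderings; the concrete arrangement actually used is the one shown in Fig. 14.

```python
def check_fig14_ordering(iw, jz, xs, ys):
    """Sketch of the stated subscript rule: an even 'iw' pairs with an
    odd 'jz', with xa..xd a permutation of {0,2,4,6} and ym..yp a
    permutation of {1,3,5,7}; an odd 'iw' swaps the two index sets."""
    even, odd = {0, 2, 4, 6}, {1, 3, 5, 7}
    if iw % 2 == 0:
        return jz % 2 == 1 and set(xs) == even and set(ys) == odd
    return jz % 2 == 0 and set(xs) == odd and set(ys) == even
```

Any ordering satisfying this predicate lets the same VLIWs and the same cosine coefficient table serve both the row and the column dimension.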
For this approach, four 16-bit values are loaded into the CRF by use of a double-word load instruction. In the current implementation of the ManArray architecture, termed Manta™, which has a fixed endianness ordering for memory accesses, the values are loaded "reverse-order" into a register pair. For example, the four memory values:
{ Fxa,xa, Fxa,xb, Fxa,xc, Fxa,xd } are loaded into a register pair as:
{ Fxa,xd, Fxa,xc, Fxa,xb, Fxa,xa }.
Note that if the endianness of a processor core changes, the only change needed to the process is the arrangement of the cosine coefficients in the static table stored in memory. The orderings of xa, xb, xc, xd and ym, yn, yo, yp will determine the ordering of the static cosine coefficient table. The groupings of even and odd indices allow efficient use of the SUM2PA instruction in the ManArray architecture. The outputs are determined by the internal workings of the algorithm itself and are in the subscript order 0,1,2,3,4,5,6,7.
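The "reverse-order" load can be sketched as follows. The packing mechanism shown (little-endian halfwords into an even/odd 32-bit register pair) is an assumption used to illustrate why the values read back reversed; the actual behavior is fixed by the Manta memory interface.

```python
def load_reverse_order(mem):
    """Hedged sketch of the reverse-order doubleword load: four consecutive
    16-bit memory values {Fa, Fb, Fc, Fd} packed into an even/odd 32-bit
    register pair read back, halfword by halfword from the odd register's
    high half, as {Fd, Fc, Fb, Fa}."""
    fa, fb, fc, fd = mem
    reg_even = (fb << 16) | fa           # H1 = Fb, H0 = Fa
    reg_odd = (fd << 16) | fc            # H1 = Fd, H0 = Fc
    return [reg_odd >> 16, reg_odd & 0xFFFF, reg_even >> 16, reg_even & 0xFFFF]
```

Because only the cosine coefficient table has to compensate for this ordering, a change of endianness never touches the algorithm code itself.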
Fig. 15 illustrates the intermediate output table 1500 after the completion of the first dimension of processing. Note that the input row Ixa,jz (xa subscripts are common across the first row) is processed and stored in a transpose position as column Ixa,jz in Fig. 15 (xa subscripts are common across the first column). Likewise row Ixb,jz (xb subscripts are common across the second row) is processed and stored in column Ixb,jz in Fig. 15 (xb subscripts are common across the second column), etc. The post-increment/decrement feature of the LII instruction allows for easy and efficient address generation supporting the stores to memory in the specified memory organization.
During the second pass through the array, the incoming rows are again processed and stored in the transpose positions as columns. The result is a complete 8x8 2D IDCT stored in row-major order. The logical matrix organization of the pixel results, Pij, is shown in table
1600 of Fig. 16. In the present implementation, the output is in 16-bit format. However, if it is known that the output values can be represented as 8-bit values, the code can be modified to store 8-bit data elements instead of 16-bit data elements.
Since the 8x8 IDCT is linearly separable, the code for a 1x8 IDCT is shown in table 1700 of Fig. 17, and then expanded into the full 8x8 IDCT, code listings 1800, 1805, 1810, 1815, 1820, 1825, 1830, 1835, 1840, 1845, 1850, 1855 and 1860 of Figs. 18A-M, respectively.
In the code table 1700 for a 1x8 IDCT, Fig. 17, note that A0 holds the address of the incoming data that has been properly arranged, A1 holds the address of the cosine coefficient table, and A2 holds the address of the output data. R30 holds a special rounding value (where needed, 0 otherwise). R26 holds the permute control word (value = 0x32107654). All incoming and outgoing values are 16-bit signed data. Internal working values use 16- and 32-bit signed data types.
The special ordering of the input data can be incorporated as part of a de-zigzag scan ordering in a video decompression scheme (e.g. MPEG-1, MPEG-2, H.263, etc.). The output data is in row-major order. The 1x8 code for any row in Fig. 17 will output values 0, 1, 2, 3, 4, 5, 6, and 7. The incoming data is loaded into registers R0-R3 of a processing element's CRF, packing two data values in each register, as follows:
[Register packing tables for R0-R3 are shown in the figures of the original publication.]
The construction of the complete 2D 8x8 IDCT is accomplished by performing a series of 8 1x8 IDCTs on the rows in a pipelined fashion, storing the output values in a second memory storage area in transpose format, performing a second series of 8 1x8 IDCTs on the rows of the transposed array (these are the columns of the original data), then storing the output values in a new storage area in transposed format. Exemplary ManArray 2D 8x8 IDCT code is shown in Figs. 18A-M. The scalable algorithm uses four additional instructions. Fig. 19A shows a load indirect with scaled immediate update (LII) instruction 1900 and Fig. 19B shows a syntax operation table 1910 for the LII instruction 1900 of Fig. 19A. LII instruction 1900 loads a byte, halfword, word, or doubleword operand into an SP target register from SP memory or into a PE target register from PE local memory. Source address register An is updated prior to (pre-decrement/pre-increment) or after (post-decrement/post-increment) its use as the operand effective address. The update to An is an addition or subtraction of the unsigned 7-bit update value UPDATE7 scaled by the size of the operand being loaded. In other words, no scale for a byte, 2 for a halfword, 4 for a word, or 8 for a doubleword. Byte and halfword operands can be sign-extended to 32-bits.
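The LII address-update rule just described can be modeled in a few lines. This is a behavioral sketch of the update arithmetic only, not of the load itself; the mode names are illustrative labels for the four pre/post increment/decrement variants.

```python
def lii_address(an, update7, size, mode):
    """Sketch of the LII update rule: UPDATE7 is scaled by operand size
    (x1 byte, x2 halfword, x4 word, x8 doubleword); 'pre' modes adjust
    the address register before it is used as the effective address,
    'post' modes adjust it afterwards.  Returns (effective_address, new_an)."""
    step = update7 * {1: 1, 2: 2, 4: 4, 8: 8}[size]
    if mode.startswith('pre'):
        an = an + step if mode.endswith('increment') else an - step
        return an, an
    ea = an
    an = an + step if mode.endswith('increment') else an - step
    return ea, an
```

The LMX instruction follows the same scaled-update idea but additionally wraps the index modulo the buffer size held in Ao, which is what makes cyclic walks through the cosine coefficient table free of explicit pointer-reset instructions.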
Fig. 20A shows a subtract (SUB) instruction 2000 and Fig. 20B shows a syntax operation table 2010 for the SUB instruction 2000 of Fig. 20A. Utilizing SUB instruction 2000, the difference of source registers Rx and Ry is stored in target register Rt. Fig. 21A shows a shift right immediate (SHRI) instruction 2100, and Fig. 21B shows a syntax/operation table 2110 for the SHRI instruction 2100 of Fig. 21A. Utilizing instruction 2100, each source register element is shifted right by the specified number of bits Nbits. The range of Nbits is 1-32. For signed (arithmetic) shifts, vacated bit positions are filled with the value of the most significant bit of the element. For unsigned (logical) shifts, vacated bit positions are filled with zeroes. Each result is copied to the corresponding target register element.
Fig. 22A illustrates a store indirect with scaled immediate update (SII) instruction 2200, and Fig. 22B shows a syntax operation table 2210 for the SII instruction 2200 of Fig. 22A. SII instruction 2200 stores a byte, halfword, word, or doubleword operand to SP memory from an SP source register or to PE local memory from a PE source register. Source address register An is updated prior to (pre-decrement/pre-increment) or after (post-decrement/post-increment) its use as the operand effective address. The update to An is an addition or subtraction of the unsigned 7-bit update value UPDATE7 scaled by the size of the operand being loaded. In other words, no scaling is done for a byte, 2 for a halfword, 4 for a word, or 8 for a doubleword.
To realize the storage of the transpose values, the storage instructions of the 1x8 IDCT code table 1700 of Fig. 17 are modified as shown in code table 2300 of Fig. 23. Note the storage offsets in the SII instructions. The "+8" updates the address pointer so that an element is stored in the transpose format. The "-55" resets the address pointer to the beginning of the next column.
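The effect of the "+8"/"-55" offsets can be sketched in element-index terms (a model of the address arithmetic, assuming the offsets are expressed in 16-bit elements of the 8x8 output buffer):

```python
def transpose_store_order(column):
    """Sketch of the modified SII addressing: one row's eight results are
    stored down a column of the 8x8 output.  Seven stores post-increment
    the element index by +8 (one output row apart); the eighth applies
    -55 so the pointer lands at the top of the next column."""
    idx, sequence = column, []
    for i in range(8):
        sequence.append(idx)
        idx += 8 if i < 7 else -55
    return sequence, idx
```

Starting from column 0, the stores hit indices 0, 8, 16, ..., 56, and the final -55 update (56 + 8 - 63) leaves the pointer at index 1, ready for the next row's transposed column.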
After the 8 rows of the first dimension are completed, the intermediate output is stored in transpose format. A0 is loaded with the address of the intermediate data, A1 is loaded with the start of the cosine coefficient table again, and A2 is loaded with the address of the final output data's destination. R30 is again loaded with either the rounding constant or 0. R26 is loaded with the permute control word (0x32107654). By pipelining the rows (then the columns), a portion of the code, Figs. 18D-M, is shown in Fig. 23 for a sample set of VLIWs progressing through rows of the 8x8, i.e. 1x8 IDCTs in succession. Entry 2302 indicates the start of next row processing and entry 2304 indicates the finish of the previous row processing. The VLIW number (VIM address) and the execution slot enable controls, #,E=execution unit, are given in a first column 2310. First shown are the VLIW loads (VIM initialization), Figs. 18A-C, then the actual execution code for the first dimension, Figs. 18D-H, and then the code for the processing of the second dimension, Figs. 18I-M. The final output is stored in row-major order at the location specified by output_buffer. The total cycle count is 202 cycles for a single 2D 8x8 IDCT. It is noted that while this version of the IDCT is not as fast as the 34-cycle IDCT on a
2x2, there are several differences and advantages. First, it is scalable. That is, the scalable algorithm runs on a single processing element. In the code discussed above, the IDCT runs on a single SP. On a ManArray 2x2, four 8x8 IDCTs can be performed at once. Thus, the effective cycle count is about 50 cycles per IDCT, as the 202 cycles are divided by the total number of PEs. If a 2x4 array were to be used for video decompression, the effective cycle count would be only 25 cycles. These improvements are made without any change to the process and minimal change to the code. For example, for the exemplary code given in Figs. 17-23, the .s is changed to a .p, meaning the instructions are to be executed in the PEs. In addition, the cosine coefficients, a relatively small table, are replicated in each PE's local memory, and the 8x8 data for each locally computed 8x8 2D IDCT is stored in each local PE memory. After the data is distributed, the process is then run in SIMD mode on an array of PEs.
Second, the output of each IDCT is in row-major order. No further data manipulation is required as in the 34-cycle version. While the cycle count for this IDCT is slightly higher, the overall processing load of a video decompression engine is reduced. Third, the VIM resources for this process are less. By using a more regular pattern, this approach uses only 11 VLIW locations instead of the 27 required for the 34-cycle version.
Further optimization of the cycle count may be possible with this approach, but could result in a corresponding increase in other areas, such as VIM size for example.
While the present invention has been disclosed in a presently preferred context, it will be recognized that the present teachings may be adapted to a variety of contexts consistent with this disclosure and the claims that follow. By way of example, while the present invention is principally disclosed in the context of specific IDCT implementations, it will be recognized that the present teachings can be applied to more effective implementations of a variety of cosine transforms, such as discrete cosine transforms (DCTs), for example. As one example of such an implementation, Fig. 24 shows an exemplary process 2400 for a 2x2 DCT implementation expected to take 35 cycles to run. The illustrated process 2400 starts with input data already loaded on the PEs of a 2x2 array such as that of Fig. 1. It ends with the data across the 2x2 array, but not in row-major order. Actual coding and testing of the process has not been completed and various adaptations and adjustments should be expected to optimize the final process for a desired end application.

Claims

We claim:
1. A method for efficiently computing a two dimensional inverse discrete cosine transform (IDCT) for a two dimensional data matrix comprising the steps of: distributing the row and column data for said matrix into processing element configurable compute register files for a plurality of processing elements (PEs) in a manner so as to allow fast execution by minimizing communication operations between the PEs; applying a one dimensional IDCT on all rows of said matrix; and then applying the one dimensional IDCT on all columns of said matrix to form the two dimensional IDCT.
2. The method of claim 1 wherein the organization of the distributed data is obtained by translating a row-major data matrix into a form prescribed by a cosine transform signal flow graph.
3. The method of claim 2 wherein the plurality of PEs comprises a 2x2 array of PEs, and the matrix translation further comprises the steps of: reordering a row-major data matrix by grouping odd and even data elements together; then folding the reordered matrix on a vertical axis and on a horizontal axis; and then assigning 4x4 quadrants of the translated matrix into each of the four PEs.
4. The method of claim 1 wherein the plurality of PEs comprises a 2x2 array of PEs and said step of distributing further comprises the steps of: loading frequency coefficient data and a first set of cosine coefficients for said PEs; initializing pointer and control values; and utilizing a rounding bit to enhance precision.
5. The method of claim 4 wherein the step of applying a one dimensional IDCT on all rows of said matrix further comprises: performing a sum of product operation comprising thirty-two 16 x 16 multiplications and twenty-four 2-to-1 32-bit additions for each row to produce 32-bit sum of product data.
6. The method of claim 4 wherein the step of applying a one dimensional IDCT on all columns of said matrix further comprises: performing a sum of product operation comprising thirty-two 16 x 16 multiplications and twenty-four 2-to-1 32-bit additions for each column to produce 32-bit sum of product data.
7. The method of claim 5 wherein said step of applying a one dimensional IDCT on all rows of said matrix further comprises: performing four butterfly operations, where each butterfly operation comprises a separate add and subtract on 16-bit data.
8. The method of either claim 6 or 7 wherein said step of applying a one dimensional IDCT on all columns of said matrix further comprises: performing four butterfly operations, where each butterfly operation comprises a separate add and subtract on 16-bit data.
9. The method of either of claims 6 or 7 wherein as results of initial ones of said sum of product operations are produced, said butterfly operations commence and overlap with the completion of further of said sum of product operations.
10. The method of either claim 5 or 6 further comprising the step of: operating on the most significant 16 bits of the 32-bit sum of product data to produce a final IDCT result in 16-bit form.
11. The method of claim 1 wherein the plurality of PEs comprises a 2 x 2 array of
PEs connected in a manifold array architecture and an indirect execute very long instruction word (XV) instruction is utilized to allow a software pipeline to be built up or torn down without creating new very long instruction words (VLIWs) to support each stage of building up or tearing down a software pipeline.
12. The method of claim 1 wherein said step of distributing further comprises the steps of: receiving from an MPEG decoder a linear order of 64 packed halfword data elements with two AC coefficients per word; and organizing this data in packed format to match a packed format required in one of said IDCT steps.
13. The method of claim 12 further comprising the step of: loading the data via a direct memory access (DMA).
14. The method of claim 12 further comprising the step of: loading the data utilizing manifold array load instructions.
15. The method of claim 14 wherein the data is loaded into the processing element configurable compute register files used in a 16 x 64 configuration for high performance packed data operation.
16. A system for efficiently computing a two dimensional inverse discrete cosine transform (IDCT) for a two dimensional data matrix comprising: means for distributing the row and column data for said matrix into processing element configurable compute register files for a plurality of processing elements (PEs) to allow fast execution by minimizing communication operations between the PEs; means for applying a one dimensional IDCT on all rows of said matrix; and means for applying the one dimensional IDCT on all columns of said matrix.
17. The system of claim 16 wherein the plurality of PEs comprises a 2x2 array of PEs and said means for distributing further comprises: means for loading frequency coefficient data and a first set of cosine coefficients for said PEs; means for initializing pointer and control values; and means for utilizing a rounding bit to enhance precision.
18. The system of claim 17 wherein the means for applying a one dimensional IDCT on all rows of said matrix further comprises: means for performing a sum of product operation comprising thirty-two 16 x 16 multiplications and twenty-four 2-to-1 32-bit additions for each row to produce 32-bit sum of product data.
19. The system of claim 17 wherein the means for applying a one dimensional IDCT on all columns of said matrix further comprises: means for performing a sum of product operation comprising thirty-two 16 x 16 multiplications and twenty-four 2-to-1 32-bit additions for each column to produce 32-bit sum of product data.
20. The system of claim 18 wherein the means for applying a one dimensional IDCT on all rows of said matrix further comprises: means for performing four butterfly operations, where each butterfly operation comprises a separate add and subtract on 16-bit data.
21. The system of either claim 19 or 20 wherein said means for applying a one dimensional IDCT on all columns of said matrix further comprises: means for performing four butterfly operations, where each butterfly operation comprises a separate add and subtract on 16-bit data.
22. The system of either of claims 19 or 20 wherein as results of initial ones of said sum of product operations are produced, said means for performing butterfly operations commence operation and said operations overlap with the completion of further of said sum of product operations.
23. The system of either claim 18 or 19 further comprising: means for operating on the most significant 16 bits of the 32-bit sum of product data to produce a final IDCT result in 16-bit form.
24. The system of claim 16 wherein the plurality of PEs comprises a 2 x 2 array of PEs connected in a manifold array architecture and an indirect execute very long instruction word (XV) instruction is utilized to allow a software pipeline to be built up or torn down without creating new very long instruction words (VLIWs) to support each stage of building up or tearing down a software pipeline.
25. The system of claim 16 wherein said means for distributing further comprises: means for receiving from an MPEG decoder a linear order of 64 packed halfword data elements with two AC coefficients per word; and means for organizing this data in packed format to match a packed format required in one of said IDCT steps.
26. The system of claim 25 further comprising: means for loading the data via a direct memory access (DMA).
27. The system of claim 25 further comprising: means for loading the data utilizing manifold array load instructions.
28. The system of claim 27 wherein the data is loaded into the processing element configurable compute register files configured in a 16 x 64 configuration for high performance packed data operation.
29. A method for efficiently computing a two dimensional inverse discrete cosine transform (IDCT) for a two dimensional data matrix comprising the steps of: distributing the row and column data for said matrix into processing element configurable compute register files for a plurality of processing elements (PEs) in a manner so as to allow fast execution by minimizing communication operations between the PEs; applying a one dimensional IDCT on all columns of said matrix; and then applying the one dimensional IDCT on all rows of said matrix to form the two dimensional IDCT.
30. The method of claim 29 wherein the organization of the distributed data is obtained by translating a column-major data matrix into a form prescribed by a cosine transform signal flow graph.
31. The method of claim 29 wherein the plurality of PEs comprises a 2x2 array of PEs and the matrix translation further comprises the steps of: reordering a column-major data matrix by grouping odd and even data elements together; then folding the reordered matrix on a vertical axis and on a horizontal axis; and then assigning 4x4 quadrants of the translated matrix into each of the four PEs.
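One possible reading of the reorder and quadrant-assignment steps of claim 31, sketched in Python. The exact interleaving and fold order are assumptions for illustration; the claim does not fix a single ordering:

```python
def distribute_to_pes(m):
    """Distribute an 8x8 matrix m (list of 8 rows of 8 elements) across a
    2x2 PE array as 4x4 quadrants, after an illustrative even/odd regrouping."""
    # Group even-indexed rows first, then odd-indexed rows -- one possible
    # reading of "grouping odd and even data elements together".
    reordered = [m[r] for r in (0, 2, 4, 6, 1, 3, 5, 7)]
    # Assign each 4x4 quadrant of the regrouped matrix to one PE of the
    # 2x2 array, so each PE works on local data with minimal communication.
    quads = {}
    for pe_row in range(2):
        for pe_col in range(2):
            quads[(pe_row, pe_col)] = [
                row[pe_col * 4:(pe_col + 1) * 4]
                for row in reordered[pe_row * 4:(pe_row + 1) * 4]
            ]
    return quads
```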
32. The method of claim 29 wherein the plurality of PEs comprises a 2x2 array of
PEs and said step of distributing further comprises the steps of: loading frequency coefficient data and a first set of cosine coefficients for said PEs; initializing pointer and control values; and utilizing a rounding bit to enhance precision.
33. The method of claim 32 wherein the step of applying a one dimensional IDCT on all columns of said matrix further comprises: performing a sum of product operation comprising thirty-two 16 x 16 multiplications and twenty-four 2-to-1 32-bit additions for each column to produce 32-bit sum of product data.
34. The method of claim 32 wherein the step of applying a one dimensional IDCT on all rows of said matrix further comprises: performing a sum of product operation comprising thirty-two 16 x 16 multiplications and twenty-four 2-to-1 32-bit additions for each row to produce 32-bit sum of product data.
35. The method of claim 33 wherein said step of applying a one dimensional IDCT on all columns of said matrix further comprises: performing four butterfly operations, where each butterfly operation comprises a separate add and subtract on 16-bit data.
36. The method of either claim 34 or 35 wherein said step of applying a one dimensional IDCT on all rows of said matrix further comprises: performing four butterfly operations, where each butterfly operation comprises a separate add and subtract on 16-bit data.
37. The method of either of claims 34 or 35 wherein as results of initial ones of said sum of product operations are produced, said butterfly operations commence and overlap with the completion of further of said sum of product operations.
38. The method of either claim 33 or 34 further comprising the step of: operating on the most significant 16 bits of the 32-bit sum of product data to produce a final IDCT result in 16-bit form.
39. The method of claim 29 wherein the plurality of PEs comprises a 2 x 2 array of PEs connected in a manifold array architecture and an indirect execute very long instruction word (XV) instruction is utilized to allow a software pipeline to be built up or torn down without creating new very long instruction words (VLIWs) to support each stage of building up or tearing down a software pipeline.
40. The method of claim 29 wherein said step of distributing further comprises the steps of: receiving from an MPEG decoder a linear order of 64 packed halfword data elements with two AC coefficients per word; and organizing this data in packed format to match a packed format required in one of said IDCT steps.
41. The method of claim 40 further comprising the step of: loading the data via a direct memory access (DMA).
42. The method of claim 40 further comprising the step of: loading the data utilizing manifold array load instructions.
43. The method of claim 42 wherein the data is loaded into the processing element configurable compute register files used in a 16 x 64 configuration for high performance packed data operation.
44. A method for efficiently computing a cosine transform for a two dimensional data matrix comprising the steps of: distributing the row and column data for said matrix into processing element configurable compute register files for a plurality of processing elements (PEs) in a manner so as to allow fast execution by minimizing communication operations between the PEs; applying a one dimensional cosine transform on all rows of said matrix; and then applying the one dimensional cosine transform on all columns of said matrix to form the two dimensional cosine transform.
45. A method for efficiently computing a two dimensional cosine transform for a two dimensional data matrix comprising the steps of: distributing the row and column data for said matrix into processing element configurable compute register files for a plurality of processing elements (PEs) in a manner so as to allow fast execution by minimizing communication operations between the PEs; applying a one dimensional cosine transform on all columns of said matrix; and then applying the one dimensional cosine transform on all rows of said matrix to form the two dimensional cosine transform.
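Claims 44 and 45 rely on the separability of the 2D cosine transform: applying the 1D transform rows-first or columns-first yields the same 2D result. A minimal floating-point sketch (the DCT-II normalization below is an assumption for illustration; the claimed fixed-point arithmetic is not shown):

```python
import math

def dct_1d(v):
    """1D DCT-II of vector v with orthonormal scaling."""
    N = len(v)
    out = []
    for k in range(N):
        s = sum(v[n] * math.cos((2 * n + 1) * k * math.pi / (2 * N))
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def dct_2d(m):
    """Separable 2D DCT: 1D transform on every column, then on every row."""
    cols = list(zip(*m))                    # transpose: columns become rows
    t = [dct_1d(list(c)) for c in cols]     # 1D DCT on each column
    t = list(zip(*t))                       # transpose back
    return [dct_1d(list(r)) for r in t]     # 1D DCT on each row
```

Because the 2D transform is the matrix product C·M·Cᵀ, the row and column passes commute, which is why claims 44 and 45 may recite either order.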
46. A scalable method for efficiently computing a two dimensional inverse discrete cosine transform (IDCT) for a two dimensional data matrix comprising the steps of: assigning input data F_(iw,jz) to positions in the two dimensional data matrix so that the relationship between different matrix elements is defined such that if iw is even, then jz is odd, matrix elements x_a, x_b, x_c, x_d are non-repeating members of {0,2,4,6}, and matrix elements y_m, y_n, y_o, y_p are non-repeating members of {1,3,5,7}; otherwise jz is even, matrix elements x_a, x_b, x_c, x_d are non-repeating members of {1,3,5,7}, and matrix elements y_m, y_n, y_o, y_p are non-repeating members of {0,2,4,6}; and
processing the two dimensional data matrix to efficiently compute the two dimensional inverse discrete cosine transform.
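The index constraint recited in claim 46 can be captured by a small checker (the function and parameter names here are illustrative, not part of the claim):

```python
def valid_assignment(iw_even, xs, ys):
    """Check claim 46's constraint: if iw is even, the x elements are
    non-repeating members of {0,2,4,6} and the y elements of {1,3,5,7};
    otherwise the two index sets swap."""
    evens, odds = {0, 2, 4, 6}, {1, 3, 5, 7}
    x_set, y_set = (evens, odds) if iw_even else (odds, evens)
    return (len(set(xs)) == 4 and set(xs) <= x_set and
            len(set(ys)) == 4 and set(ys) <= y_set)
```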
47. The method of claim 46 further comprising the step of: loading four 16-bit values in a compute register file utilizing a double-word load instruction.
48. The method of claim 47 wherein said four 16-bit values are loaded in reverse- order in a register pair.
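Claims 47 and 48 describe loading four packed 16-bit values with a single double-word load, optionally in reverse order across the register pair. A host-side sketch of the packing arithmetic (illustrative only; the actual load is a ManArray double-word load instruction, not Python):

```python
def unpack_halfwords(dword):
    """Unpack four 16-bit halfwords from a 64-bit value, least-significant
    halfword first -- modeling a double-word load into a register pair."""
    return [(dword >> (16 * i)) & 0xFFFF for i in range(4)]

def reverse_order(halves):
    """Reverse the halfword order within the register pair, as in claim 48."""
    return halves[::-1]
```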
49. The method of claim 46 further comprising the step of storing a static cosine coefficient table in a local memory of each processing element of an array comprising multiple processing elements; and distributing said processing amongst said multiple processing elements.
50. The method of claim 46 further comprising the step of: utilizing a SUM2PA instruction in the ManArray architecture to produce outputs in subscript order 0,1,2,3,4,5,6, and 7.
51. The method of claim 49 further comprising the step of: utilizing a post-increment/decrement feature of an LII instruction in the ManArray architecture for easy and efficient address generation supporting stores to memory in a specified memory organization.
PCT/US2000/031120 1999-11-12 2000-11-09 Methods and apparatus for efficient cosine transform implementations WO2001035267A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16533799P 1999-11-12 1999-11-12
US60/165,337 1999-11-12

Publications (1)

Publication Number Publication Date
WO2001035267A1 true WO2001035267A1 (en) 2001-05-17

Family

ID=22598497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/031120 WO2001035267A1 (en) 1999-11-12 2000-11-09 Methods and apparatus for efficient cosine transform implementations

Country Status (1)

Country Link
WO (1) WO2001035267A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4829465A (en) * 1986-06-19 1989-05-09 American Telephone And Telegraph Company, At&T Bell Laboratories High speed cosine transform
US5285402A (en) * 1991-11-22 1994-02-08 Intel Corporation Multiplyless discrete cosine transform
EP0720103A1 (en) * 1994-12-29 1996-07-03 Daewoo Electronics Co., Ltd Two-dimensional inverse discrete cosine transform circuit
US5854757A (en) * 1996-05-07 1998-12-29 Lsi Logic Corporation Super-compact hardware architecture for IDCT computation
US5870497A (en) * 1991-03-15 1999-02-09 C-Cube Microsystems Decoder for compressed video signals
US5978508A (en) * 1996-09-20 1999-11-02 Nec Corporation Two-dimensional inverse discrete cosine transformation circuit for MPEG2 video decoder

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PECHANEK G.G. ET AL.: "M. F.A.S.T.: A single chip highly parallel image processing architecture", PROCEEDINGS INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, vol. 1, October 1995 (1995-10-01), pages 69 - 72, XP002936265 *
WANG C.-L. ET AL.: "Highly parallel VLSI architectures for the 2-D DCT and IDCT computations", IEEE REGION 10'S NINTH ANNUAL INTERNATIONAL CONFERENCE, vol. 1, August 1994 (1994-08-01), pages 295 - 299, XP002936264 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007044598A2 (en) * 2005-10-05 2007-04-19 Qualcomm Incorporated Fast dct algorithm for dsp with vliw architecture
WO2007044598A3 (en) * 2005-10-05 2008-08-14 Qualcomm Inc Fast dct algorithm for dsp with vliw architecture
US7725516B2 (en) 2005-10-05 2010-05-25 Qualcomm Incorporated Fast DCT algorithm for DSP with VLIW architecture
US8396916B2 (en) 2005-10-05 2013-03-12 Qualcomm, Incorporated Fast DCT algorithm for DSP with VLIW architecture
CN112825257A (en) * 2019-11-20 2021-05-21 美光科技公司 Method and apparatus for performing video processing matrix operations within a memory array

Similar Documents

Publication Publication Date Title
US6141673A (en) Microprocessor modified to perform inverse discrete cosine transform operations on a one-dimensional matrix of numbers within a minimal number of instructions
US6061521A (en) Computer having multimedia operations executable as two distinct sets of operations within a single instruction cycle
US5801975A (en) Computer modified to perform inverse discrete cosine transform operations on a one-dimensional matrix of numbers within a minimal number of instruction cycles
US6047372A (en) Apparatus for routing one operand to an arithmetic logic unit from a fixed register slot and another operand from any register slot
US6397324B1 (en) Accessing tables in memory banks using load and store address generators sharing store read port of compute register file separated from address register file
US9557994B2 (en) Data processing apparatus and method for performing N-way interleaving and de-interleaving operations where N is an odd plural number
EP1692613B1 (en) A data processing apparatus and method for moving data between registers and memory
EP1692612B1 (en) A data processing apparatus and method for moving data between registers and memory
US7756347B2 (en) Methods and apparatus for video decoding
US7761693B2 (en) Data processing apparatus and method for performing arithmetic operations in SIMD data processing
US5941938A (en) System and method for performing an accumulate operation on one or more operands within a partitioned register
US6963341B1 (en) Fast and flexible scan conversion and matrix transpose in a SIMD processor
US20070239967A1 (en) High-performance RISC-DSP
WO2000054144A1 (en) Register file indexing methods and apparatus for providing indirect control of register addressing in a vliw processor
US8174532B2 (en) Programmable video signal processor for video compression and decompression
US6754687B1 (en) Methods and apparatus for efficient cosine transform implementations
CN109213525B (en) Streaming engine with shortcut initiation instructions
US7210023B2 (en) Data processing apparatus and method for moving data between registers and memory in response to an access instruction having an alignment specifier identifying an alignment to be associated with a start address
US20050125631A1 (en) Data element size control within parallel lanes of processing
US20090276576A1 (en) Methods and Apparatus storing expanded width instructions in a VLIW memory for deferred execution
WO2001035267A1 (en) Methods and apparatus for efficient cosine transform implementations
Holmann et al. Real-time MPEG-2 software decoding with a dual-issue RISC processor
Pechanek et al. MFAST: a single chip highly parallel image processing architecture
Pechanek et al. MFAST: A Highly Parallel Single Chip DSP with a 2D IDCT Example
Wu et al. Parallel architectures for programmable video signal processing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA IL JP KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase