CN102073479B - Data size-based shuffle switch matrix compression method - Google Patents

Data size-based shuffle switch matrix compression method

Info

Publication number
CN102073479B
CN102073479B (application CN201010559590A)
Authority
CN
China
Prior art keywords
switch matrix
current
request
shuffle
data granularity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201010559590
Other languages
Chinese (zh)
Other versions
CN102073479A (en)
Inventor
陈书明
刘胜
万江华
陈海燕
陈跃跃
孙海燕
张凯
王耀华
刘祥远
李振涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN 201010559590
Publication of CN102073479A
Application granted
Publication of CN102073479B
Legal status: Active
Anticipated expiration


Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a data size-based shuffle switch matrix compression method, which comprises the following steps: step 1, when shuffle switch matrix compression based on data size is to be performed for a shuffle request, first set the data size of the current shuffle request to the smallest data size; step 2, judge whether the data size of the current shuffle request can be doubled; if it cannot be doubled, skip to step 4, otherwise skip to step 3; step 3, double the current data size, compress the current shuffle switch matrix according to the doubled data size, and return to step 2; step 4, further compress the current switch matrix using the current-size indexing method; step 5, judge whether the current shuffle request is the last shuffle request in the program; if it is not, process the next shuffle request and return to step 1, otherwise end the algorithm. The data size-based shuffle switch matrix compression method is simple in principle, easy to apply, and can effectively improve the utilization of the shuffle-pattern memory and the compression efficiency.

Description

Data granularity-based shuffle switch matrix compression method
Technical field
The present invention relates generally to the field of general-purpose processors and digital signal processors (DSPs) that adopt single instruction stream, multiple data stream (SIMD) technology, and in particular to a shuffle switch matrix compression method applied in SIMD processors or DSPs.
Background technology
Sub-word parallel and vector techniques, represented by SIMD technology, have great advantages in data-intensive applications such as media processing and wireless communication, so they are widely used and developed and are favored by processor designers, DSP designers in particular. The shuffle unit is mainly responsible for exchanging data among the registers of the processing elements of a SIMD unit, and it is a key component that limits the performance of the parallel processing units.
The hardware structure of a shuffle unit generally takes one of three forms: a crossbar switch (Crossbar), a multistage network, or a perfect-shuffle exchange network. Of the three, the Crossbar is the most flexible, the multistage network second, and the perfect-shuffle exchange network the least flexible; the perfect-shuffle exchange network has the lowest hardware cost and the best scalability, the multistage network is second, and the Crossbar is the worst. Because embedded applications require a wide variety of shuffle patterns, the Crossbar is the most widely used, and the method discussed in the present invention likewise assumes a shuffle unit whose hardware structure is a Crossbar.
In a shuffle unit built around a Crossbar, each shuffle pattern corresponds one-to-one to a matrix of 0/1 values that controls whether each switching node in the Crossbar is open or closed. This 0/1 matrix controlling the switching nodes of the Crossbar is called the switch matrix, and when (or before) a shuffle instruction is executed, the Crossbar must be told, directly or indirectly, the switch matrix required by that shuffle. Because the data volume of the switch matrix itself is large, it needs to be compressed. On the one hand, this compression is important: the data volume of the switch matrix is proportional to the square of the SIMD datapath width, so it expands sharply as the datapath widens. On the other hand, the complexity of the decompression circuit must also be considered, so a fairly simple compression and decompression technique should be adopted in order to trade off critical-path delay against compression efficiency.
Traditional shuffle units and shuffle instructions generally compress the switch matrix with a minimum-data-granularity indexing method. Fig. 1 is a schematic diagram of the compression flow of this traditional method: when a shuffle request is to have its switch matrix compressed, its data granularity is not examined but is simply set to the minimum data granularity; the switch matrix is then index-compressed according to the current granularity indexing method; finally, it is judged whether the current shuffle request is the last shuffle request in the program, and if not, the next shuffle request is processed and its switch matrix compressed, otherwise processing ends.
Take the AltiVec instruction set, designed mainly by Motorola, as an example. In AltiVec the vector width is 128 bits (16 bytes) and the minimum shuffle data granularity is 8 bits. The AltiVec shuffle instruction VPERM uses one vector register (128 bits) to express the shuffle pattern: the register is divided into 16 fields (8 bits each), and each field indicates which byte of the source vector registers the corresponding byte of the target vector register comes from (that is, each field represents one row of the switch matrix). The SHF instruction of the EVP processor from Philips is similar.
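To illustrate this index-field encoding, the following is a minimal Python sketch of a byte-granularity index shuffle. It is a simplified single-source model, not the exact two-source semantics of VPERM, and the example pattern is arbitrary; each pattern entry plays the role of one row of the switch matrix.

```python
# Minimal sketch (simplified model: one 16-byte source; the real VPERM selects
# from two concatenated source registers).
def byte_shuffle(src, pattern):
    """src: 16 byte values; pattern[i]: index of the source byte that goes to
    destination byte i (one pattern entry per row of the switch matrix)."""
    return [src[k] for k in pattern]

src = list(b"abcdefghABCDEFGH")
pattern = [4, 5, 6, 7, 12, 13, 14, 15, 8, 9, 10, 11, 0, 1, 2, 3]
print(bytes(byte_shuffle(src, pattern)))  # b'efghEFGHABCDabcd'
```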
This minimum-data-granularity indexing approach to switch matrix compression is widely used in current vector processors, but it yields the same compression ratio for shuffle requests of every data granularity, so its overall compression efficiency is low.
Programs of different applications (or different program segments of the same application) often have different data widths. In video algorithms, for example, byte and half-word data dominate, while in wireless communication algorithms complex operations (whose data width is a word or double word) account for a large share. As a result, the data granularities of the shuffle requests of different programs (or of different program segments of the same application) also differ considerably, which gives the present invention the opportunity to seek a more efficient switch matrix compression method.
Summary of the invention
The technical problem to be solved by the present invention is: in view of the problems of the prior art, to provide a data granularity-based shuffle switch matrix compression method that is simple in principle, easy to apply, and able to effectively improve the utilization of the shuffle-pattern memory and the compression efficiency.
To solve the above technical problem, the present invention adopts the following technical solution:
A data granularity-based shuffle switch matrix compression method, characterized by the following steps:
Step 1: when data granularity-based switch matrix compression is to be performed for a shuffle request, first set the data granularity of the current shuffle request to the minimum data granularity;
Step 2: judge whether the data granularity of the current shuffle request can be doubled; if it cannot be doubled, go to step 4, otherwise go to step 3;
Step 3: double the current data granularity, compress the current shuffle switch matrix according to the doubled data granularity, and return to step 2;
Step 4: for this shuffle request, further compress the current switch matrix using the current granularity indexing method;
Step 5: judge whether the current shuffle request is the last shuffle request in the program; if not, process the next shuffle request and go to step 1, otherwise the algorithm ends.
As a further improvement on the present invention:
The flow of said step 2 is:
2.1 Set the variable s, which indicates whether the data granularity of the current shuffle request can be doubled, to 1, and set the variables i and j to 0;
2.2 Judge whether the elements of the current shuffle switch matrix x satisfy one of the following two conditions: (1) the four elements x_{i,j}, x_{i+1,j}, x_{i,j+1} and x_{i+1,j+1} are all 0; (2) the two elements x_{i,j} and x_{i+1,j+1} are 1 while the two elements x_{i+1,j} and x_{i,j+1} are 0; if neither condition is satisfied, set s to 0 and go to 2.5, otherwise go to 2.3;
2.3 Judge whether j equals the size n of the current switch matrix minus 2; if so, go to 2.4, otherwise increase j by 2 and go to 2.2;
2.4 Judge whether i equals the size n of the current switch matrix minus 2; if so, go to 2.5, otherwise increase i by 2, set j to 0, and go to 2.2;
2.5 If s = 1 at this point, the data granularity of the current shuffle request can be doubled; otherwise it cannot be doubled.
The flow of said step 4 is:
4.1 Set the variable i to 0;
4.2 Judge whether the elements of row i of the switch matrix at the current data granularity are all 0; if so, assign 1 to the most significant bit SF_i[log2 n] of field i of the final shuffle pattern SF, assign 0 to the remaining bits SF_i[log2 n - 1:0], and go to 4.4; otherwise assign 0 to SF_i[log2 n], set the variable j to 0, and go to 4.3;
4.3 Judge whether the element in row i, column j of the switch matrix at the current data granularity is 1; if so, assign j to SF_i[log2 n - 1:0] and go to 4.4; otherwise increase j by 1 and go to 4.3;
4.4 Increase i by 1 and judge whether i is less than the size n of the switch matrix at the current data granularity; if so, go to 4.2 and continue compressing the next row of the switch matrix; otherwise the compression is complete, and the resulting SF_0, SF_1, ..., SF_{n-1} form the final shuffle pattern.
Compared with the prior art, the present invention has the following advantages:
1. The proposed shuffle matrix compression technique can effectively improve the utilization of the shuffle-pattern memory, so that the memory storing the shuffle patterns is used to the full;
2. With the present invention, the total amount of shuffle-pattern data needed by a given application program is smaller, so when the application is executed the amount of shuffle-pattern data that must be loaded decreases correspondingly, which reduces power consumption;
3. The decompression circuit is simple and adds little to the critical-path delay. With the method proposed by the present invention, the decompression circuit only needs a small amount of signal-replication logic added to the decoding circuit of the traditional method, so the additional hardware complexity is small;
4. In summary, the method proposed by the present invention can effectively compress the switch matrix, has little impact on the existing decompression circuit, and provides a new approach to the design of shuffle instructions and of the shuffle-unit interface of vector processors.
Description of drawings
Fig. 1 is a schematic diagram of the compression flow of the minimum-data-granularity indexing method;
Fig. 2 is a schematic flow chart of the data granularity-based shuffle switch matrix compression method of the present invention;
Fig. 3 is a schematic flow chart of the algorithm in the present invention for judging whether the data granularity of the current shuffle request can be doubled;
Fig. 4 is a schematic flow chart of the current granularity indexing method in a specific implementation;
Fig. 5 is a schematic diagram of compressing the switch matrix of a shuffle request with larger data granularity using the traditional method;
Fig. 6 is a schematic diagram of compressing the switch matrix of a shuffle request with larger data granularity using the method proposed by the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.
At the preprocessing stage, for each shuffle request the present invention determines its final shuffle data granularity step by step, compresses the switch matrix step by step according to the determined data granularity, and finally compresses it further with the current granularity indexing method, thereby completing the final compression of the switch matrix.
Suppose the overall datapath width of the processor is W bits, the minimum data granularity is G bits, the size of the Crossbar is N*N, and the width of each port is G bits (here W and G are integral powers of 2, and N = W/G). The shuffle pattern of each shuffle request is then initially an N*N switch matrix, denoted X, whose elements are X_{i,j} (0≤i≤N-1, 0≤j≤N-1). Suppose further that the maximum data granularity supported by the processor is M bits; then T = log2(M/G).
Let the switch matrix of the current shuffle request be x, let its size be n*n with elements x_{i,j} (0≤i≤n-1, 0≤j≤n-1), and let the data granularity of the current shuffle request be g.
As shown in Fig. 2, the flow of the data granularity-based shuffle switch matrix compression method of the present invention is:
Step 1: when data granularity-based switch matrix compression is to be performed for a shuffle request, first set the data granularity of the current shuffle request to the minimum data granularity; that is, set x = X, g = G, n = N, x_{i,j} = X_{i,j} (0≤i≤N-1, 0≤j≤N-1), t = 0, and go to step 2;
Step 2: judge whether the data granularity of the current shuffle request can be doubled; if it cannot be doubled, go to step 4, otherwise go to step 3;
Step 3: double the current data granularity, compress the current switch matrix according to the doubled data granularity, and return to step 2. Let the compressed switch matrix be y; the compression sets y_{i,j} = x_{2i,2j} (0≤i≤n/2-1, 0≤j≤n/2-1) and then sets g = 2g, x = y, n = n/2, t = t+1; if t ≤ T, go to step 2, otherwise go to step 4;
Step 4: for this shuffle request, further compress the current switch matrix using the current granularity indexing method;
Step 5: judge whether the current shuffle request is the last shuffle request in the program; if not, process the next shuffle request and go to step 1, otherwise the algorithm ends.
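The matrix compression performed in step 3 can be summarized by the following minimal Python sketch, assuming for illustration that the switch matrix is held as a list of lists of 0/1 values:

```python
def halve_granularity(x):
    """Step-3 compression: keep every second row and column of the n*n switch
    matrix, i.e. y[i][j] = x[2*i][2*j], giving an (n/2)*(n/2) matrix."""
    n = len(x)
    return [[x[2 * i][2 * j] for j in range(n // 2)] for i in range(n // 2)]
```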
Fig. 3 is a schematic flow chart of the algorithm in the above step 2 for judging whether the data granularity of the current shuffle request can be doubled:
2.1 Set the variable s, which indicates whether the data granularity of the current shuffle request can be doubled, to 1, and set the variables i and j to 0; that is, define the variable s to indicate whether the data granularity of the current shuffle request is 2g (s = 1 means yes, s = 0 means no), and set s = 1, i = 0, j = 0;
2.2 Judge whether the elements of the current switch matrix x satisfy one of the following two conditions: (1) the four elements x_{i,j}, x_{i+1,j}, x_{i,j+1} and x_{i+1,j+1} are all 0; (2) the two elements x_{i,j} and x_{i+1,j+1} are 1 while the two elements x_{i+1,j} and x_{i,j+1} are 0. If neither condition is satisfied, set s to 0 and go to 2.5; otherwise go to 2.3;
2.3 Judge whether j equals the size n of the current switch matrix minus 2 (i.e. n-2); if so, go to 2.4, otherwise increase j by 2 (j = j+2) and go to 2.2;
2.4 Judge whether i equals the size n of the current switch matrix minus 2 (i.e. n-2); if so, go to 2.5, otherwise increase i by 2 (i = i+2), set j to 0, and go to 2.2;
2.5 If s = 1 at this point, the data granularity of the current shuffle request can be doubled (that is, the data granularity of the current shuffle request is 2g); otherwise it cannot be doubled (that is, the data granularity of the current shuffle request is g).
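The following Python sketch restates the judgment of Fig. 3 (steps 2.1 to 2.5) under the same list-of-lists representation of the switch matrix assumed above; the loop over 2x2 blocks corresponds to stepping i and j by 2:

```python
def can_double(x):
    """Return True if every 2x2 block of x is all zero (condition 1) or has
    ones only on its main diagonal (condition 2); this is the s flag of 2.5."""
    n = len(x)
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            all_zero = not (x[i][j] or x[i+1][j] or x[i][j+1] or x[i+1][j+1])
            diagonal = (x[i][j] == 1 and x[i+1][j+1] == 1
                        and x[i+1][j] == 0 and x[i][j+1] == 0)
            if not (all_zero or diagonal):
                return False
    return True
```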
Fig. 4 is a schematic flow chart of the current granularity indexing method, the sub-algorithm shared by the traditional minimum-data-granularity indexing method and by the data granularity-based shuffle switch matrix compression method proposed by the present invention; it corresponds to the above step 4. Let the final shuffle pattern be SF; then SF has n fields, each of 1+log2 n bits, and SF_i[j] (0≤i≤n-1, 0≤j≤log2 n) denotes bit j of field i of the shuffle pattern. The specific flow is:
4.1 Set the variable i to 0;
4.2 Judge whether the elements of row i of the switch matrix at the current data granularity are all 0; if so, assign 1 to the most significant bit SF_i[log2 n] of field i of the final shuffle pattern SF, assign 0 to the remaining bits SF_i[log2 n - 1:0], and go to 4.4; otherwise assign 0 to SF_i[log2 n], set the variable j to 0, and go to 4.3;
4.3 Judge whether the element in row i, column j of the switch matrix at the current data granularity is 1; if so, assign j to SF_i[log2 n - 1:0] and go to 4.4; otherwise increase j by 1 and go to 4.3;
4.4 Increase i by 1 and judge whether i is less than the size n of the switch matrix at the current data granularity; if so, go to 4.2 and continue compressing the next row of the switch matrix; otherwise the compression is complete, and the resulting SF_0, SF_1, ..., SF_{n-1} form the final shuffle pattern.
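A minimal Python sketch of this current granularity indexing method (steps 4.1 to 4.4), under the same assumed list-of-lists representation: each row of the n*n switch matrix becomes one field of 1+log2(n) bits, written here as a bit string.

```python
import math

def index_compress(x):
    """Step-4 indexing: field i is '1' followed by log2(n) zero bits if row i
    is all zero, otherwise '0' followed by the column index of its single 1."""
    n = len(x)
    bits = int(math.log2(n))
    fields = []
    for row in x:
        if 1 in row:
            fields.append(format(row.index(1), '0{}b'.format(bits + 1)))
        else:
            fields.append('1' + '0' * bits)
    return fields
```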
As can be seen from the above, the method proposed by the present invention is an improvement of the traditional minimum-data-granularity indexing method: if the granularity-doubling judgment and doubling (steps 2 and 3 above) are omitted and the index compression of step 4 is applied directly, the method degenerates into the minimum-data-granularity indexing method. The main characteristic of the proposed method is that it distinguishes the data granularity of each shuffle request: an N*N switch matrix can be compressed to (N/2^{x-1})*(1+log2(N/2^{x-1})) bits, where x is defined such that the data granularity of the current shuffle request is 2^{x-1} times the minimum data granularity, whereas the traditional minimum-data-granularity indexing method can only compress it to N*(1+log2 N) bits. When the granularity of a shuffle request is larger than the minimum data granularity, i.e. when x > 1, the compression achieved by the proposed method is therefore far better than that of the traditional method.
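As a check of these two expressions on the example of Fig. 5 and Fig. 6 below (N = 16, request granularity 4 times the minimum, so x = 3):

```latex
\text{traditional: } N\,(1+\log_2 N) = 16\times(1+4) = 80\ \text{bits}, \qquad
\text{proposed: } \frac{N}{2^{x-1}}\Bigl(1+\log_2\frac{N}{2^{x-1}}\Bigr) = 4\times(1+2) = 12\ \text{bits}.
```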
For the above data granularity-based switch matrix compression method, the present invention proposes the following shuffle instructions: SHUF(B, H, W, DW) (Mode), VRi, VRj, where SHUFB, SHUFH, SHUFW and SHUFDW indicate that the shuffle granularity is byte, half-word, word and double word respectively; Mode indicates the address in memory of the corresponding shuffle pattern (in a specific implementation Mode may also be left out, in which case the corresponding shuffle pattern is by default kept in a certain register, and the programmer must move the shuffle pattern into that register in advance); VRi and VRj indicate, respectively, the number of the register holding the source operand of the shuffle operation and the number of the register holding the destination operand.
When a shuffle instruction is executed with the method proposed by the present invention and the shuffle pattern is decompressed into a switch matrix, only a small amount of signal-replication logic needs to be added to the decoding circuit of the traditional method, so the impact on the critical-path delay is very small.
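A minimal Python sketch of this decompression, assuming the compressed pattern is the list of bit-string fields produced by the index_compress sketch above and that `multiple` is the shuffle-request granularity divided by the minimum granularity; each field drives a block of switching nodes through signal replication along the block diagonal:

```python
def decompress(fields, multiple):
    """Expand granularity-compressed fields back to an N*N switch matrix,
    where N = len(fields) * multiple; the selected column index of each field
    is replicated along the diagonal of a multiple*multiple block."""
    size = len(fields) * multiple
    x = [[0] * size for _ in range(size)]
    for i, field in enumerate(fields):
        if field[0] == '1':          # all-zero row marker: no switch closed
            continue
        j = int(field[1:], 2)
        for k in range(multiple):    # signal replication
            x[i * multiple + k][j * multiple + k] = 1
    return x
```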
Fig. 5 is a schematic diagram of compressing the switch matrix of a shuffle request with larger data granularity using the traditional method. In the figure, each of the elements a, b, c, d, e, f, g, h, A, B, C, D, E, F, G, H is 8 bits wide; the size of the Crossbar in the shuffle unit is 16*16, the port width is 8 bits, and the minimum supported data granularity of a shuffle request is also 8 bits. The vector abcdefghABCDEFGH is shuffled into the vector efghEFGHabcd0000, and the original switch matrix corresponding to this shuffle pattern is a 16*16 matrix of 0/1 values (shown in the lower left part of the figure). The traditional minimum-data-granularity indexing method directly compresses the original switch matrix according to the steps shown in Fig. 4: row 0 of the original switch matrix is 0000_1000_0000_0000; because this row is not all 0, the most significant bit of field 0 of the compressed shuffle pattern is 0, and because element 4 of this row is 1, the lowest four bits of field 0 of the compressed shuffle pattern are 4 (the compression of rows 1 to 15 of the original switch matrix proceeds in the same way and is not repeated here). With the traditional method, the switch matrix of 256 elements in this example is compressed to 80 bits.
Fig. 6 is a schematic diagram of compressing the switch matrix of a shuffle request with larger data granularity using the method proposed by the present invention; the shuffle request shown in the figure is clearly identical to the example in Fig. 5. According to the proposed method, the data granularity of the original shuffle request can first be doubled to 2 times the minimum data granularity, i.e. 16 bits, so the original switch matrix can be compressed into the switch matrix at 2 times the minimum data granularity (its size becomes 1/4 of the original). Further, the switch matrix at 2 times the minimum data granularity can be compressed again, i.e. compressed according to a data granularity of 32 bits, giving the switch matrix at 4 times the minimum data granularity (its size again becomes 1/4, i.e. 1/16 of the original switch matrix). Finally, it is compressed by the indexing method at 32-bit granularity (the method described for Fig. 4) into 001_011_000_100. With the proposed method, the switch matrix of 256 elements in this example is thus finally compressed to 12 bits, so the compression ratio of the proposed method is improved by more than 6 times relative to the traditional method.
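For reference, the following self-contained Python sketch reproduces the comparison of Fig. 5 and Fig. 6 under the assumed list-of-lists representation of the switch matrix (rows are destination bytes, columns are source bytes); it prints 80 bits for the traditional compression and the 12-bit pattern 001_011_000_100 for the proposed method:

```python
import math

def can_double(x):
    n = len(x)
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            zero = not (x[i][j] or x[i+1][j] or x[i][j+1] or x[i+1][j+1])
            diag = x[i][j] == 1 and x[i+1][j+1] == 1 and x[i+1][j] == 0 and x[i][j+1] == 0
            if not (zero or diag):
                return False
    return True

def halve(x):
    n = len(x)
    return [[x[2*i][2*j] for j in range(n // 2)] for i in range(n // 2)]

def index_compress(x):
    n, bits = len(x), int(math.log2(len(x)))
    return [format(r.index(1), '0{}b'.format(bits + 1)) if 1 in r else '1' + '0' * bits
            for r in x]

# Shuffle request of Fig. 5/6: abcdefghABCDEFGH -> efghEFGHabcd0000
# (destination byte i takes source byte sources[i]; None means the byte is zeroed).
sources = [4, 5, 6, 7, 12, 13, 14, 15, 0, 1, 2, 3, None, None, None, None]
x = [[1 if s == j else 0 for j in range(16)] for s in sources]

traditional = index_compress(x)
print(len(traditional) * len(traditional[0]), 'bits')        # 80 bits (Fig. 5)

while len(x) > 1 and can_double(x):   # a full implementation also bounds t by T
    x = halve(x)
proposed = index_compress(x)
print('_'.join(proposed), len(proposed) * len(proposed[0]), 'bits')  # 001_011_000_100 12 bits
```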
The above is only a preferred embodiment of the present invention; the scope of protection of the present invention is not limited to the above embodiment, and all technical solutions falling under the idea of the present invention belong to the scope of protection of the present invention. It should be pointed out that, for those skilled in the art, several improvements and modifications made without departing from the principle of the present invention should also be regarded as falling within the scope of protection of the present invention.

Claims (1)

  1. A data granularity-based shuffle switch matrix compression method, characterized by the following steps:
    Step 1: when data granularity-based switch matrix compression is to be performed for a shuffle request, first set the data granularity of the current shuffle request to the minimum data granularity and set t = 0;
    Step 2: judge whether the data granularity of the current shuffle request can be doubled; if it cannot be doubled, go to step 4, otherwise go to step 3;
    Step 3: double the current data granularity, compress the current shuffle switch matrix according to the doubled data granularity, and go to step 2 or step 4. Let the switch matrix of the current shuffle request be x, let the size of the switch matrix of the current shuffle request be n*n with elements x_{i,j}, where 0≤i≤n-1 and 0≤j≤n-1, let the data granularity of the current shuffle request be g, and let the compressed switch matrix be y; suppose the maximum data granularity supported by the processor is M bits and the minimum data granularity is G bits, with T = log2(M/G), where G is an integral power of 2; the compression is then performed by setting y_{i,j} = x_{2i,2j}, where 0≤i≤n/2-1 and 0≤j≤n/2-1, and setting g = 2g, x = y, n = n/2, t = t+1; if t ≤ T, go to step 2, otherwise go to step 4;
    Step 4: for this shuffle request, further compress the current switch matrix using the current granularity indexing method;
    Step 5: judge whether the current shuffle request is the last shuffle request in the program; if not, process the next shuffle request and go to step 1, otherwise finish;
    The flow of said step 2 is:
    2.1 Set the variable s, which indicates whether the data granularity of the current shuffle request can be doubled, to 1, and set the variables i and j to 0;
    2.2 Judge whether the elements of the current shuffle switch matrix x satisfy one of the following two conditions: (1) the four elements x_{i,j}, x_{i+1,j}, x_{i,j+1} and x_{i+1,j+1} are all 0; (2) the two elements x_{i,j} and x_{i+1,j+1} are 1 while the two elements x_{i+1,j} and x_{i,j+1} are 0; if neither condition is satisfied, set s to 0 and go to 2.5, otherwise go to 2.3;
    2.3 Judge whether j equals the size n of the current switch matrix minus 2; if so, go to 2.4, otherwise increase j by 2 and go to 2.2;
    2.4 Judge whether i equals the size n of the current switch matrix minus 2; if so, go to 2.5, otherwise increase i by 2, set j to 0, and go to 2.2;
    2.5 If s = 1 at this point, the data granularity of the current shuffle request can be doubled; otherwise it cannot be doubled.
CN 201010559590 2010-11-25 2010-11-25 Data size-based shuffle switch matrix compression method Active CN102073479B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN 201010559590 (CN102073479B) | 2010-11-25 | 2010-11-25 | Data size-based shuffle switch matrix compression method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN 201010559590 (CN102073479B) | 2010-11-25 | 2010-11-25 | Data size-based shuffle switch matrix compression method

Publications (2)

Publication Number Publication Date
CN102073479A CN102073479A (en) 2011-05-25
CN102073479B 2013-03-27

Family

ID=44032029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010559590 Active CN102073479B (en) 2010-11-25 2010-11-25 Data size-based shuffle switch matrix compression method

Country Status (1)

Country Link
CN (1) CN102073479B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104011673B (en) * 2011-12-30 2016-12-07 英特尔公司 Vector frequency compression instruction


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1409520A (en) * 2001-09-26 2003-04-09 深圳市中兴通讯股份有限公司上海第二研究所 Coding method for improving end-to-end service quality
CN1581725A (en) * 2003-08-07 2005-02-16 三星电子株式会社 Method and apparatus for determining a shuffling pattern in a double space-time transmit diversity system

Also Published As

Publication number Publication date
CN102073479A (en) 2011-05-25

Similar Documents

Publication Publication Date Title
JP6073385B2 (en) Data processing system and method using scalar / vector instructions
EP2972792B1 (en) Vector indirect element vertical addressing mode with horizontal permute
US9348792B2 (en) Coarse-grained reconfigurable processor and code decompression method thereof
CN102053816B (en) Data shuffling unit with switch matrix memory and shuffling method thereof
CN101986262B (en) Shuffle switch matrix compression method based on mode shift
CN104868922A (en) Data compression method and device
MY122682A (en) System and method for performing context switching and rescheduling of a processor
CN202931289U (en) Hardware LZ 77 compression implement system
CN105468568A (en) High-efficiency coarse granularity reconfigurable computing system
CN102073479B (en) Data size-based shuffle switch matrix compression method
CN104756069A (en) An element selection unit and a method therein
CN105446666A (en) Full digital sequence storage method and system based on mobile terminal
JP2007500394A (en) Microcontroller with interrupt structure having programmable priority levels associated with different register sets for each priority level
CN103399727A Hardware integer saturation detector, method for detecting saturation, and hardware device thereof
US20050278504A1 (en) System capable of dynamically arranging coprocessor number
US6877062B2 (en) Method and apparatus for increasing the memory read/write speed by using internal registers
CN206757592U (en) A kind of data handling system
CN102004672B (en) Reduction device capable of configuring auto-increment interval of reduction target
US4956805A (en) Circuitry for character translate functions
US6510444B2 (en) Data processor architecture and instruction format for increased efficiency
SE1151231A1 (en) Digital signal processor and baseband communication device
KR101927255B1 (en) Processing device of register window overflow/underflow
Gu et al. A segmental bus-invert coding method for instruction memory data bus power efficiency
CN104714777A (en) Microcontroller instruction fetching method and implementation circuit thereof
CN104699219A (en) Electronic equipment and information processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant