CN101504600A - Data transmission method used for micro-processor and micro-processor - Google Patents


Info

Publication number
CN101504600A
CN101504600A (application numbers CNA2009100768142A, CN200910076814A)
Authority
CN
China
Prior art keywords
data
memory bank
bytes
memory
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009100768142A
Other languages
Chinese (zh)
Other versions
CN101504600B (en
Inventor
石艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijin Hongqi Shengli Technology Development Co Ltd
Original Assignee
Beijin Hongqi Shengli Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijin Hongqi Shengli Technology Development Co Ltd filed Critical Beijin Hongqi Shengli Technology Development Co Ltd
Priority to CN200910076814.2A priority Critical patent/CN101504600B/en
Publication of CN101504600A publication Critical patent/CN101504600A/en
Application granted granted Critical
Publication of CN101504600B publication Critical patent/CN101504600B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Executing Machine-Instructions (AREA)

Abstract

The invention provides a data transmission method for a microprocessor CPU, comprising the following steps: storing image data in an internal data storage array of the CPU, wherein the data of the different rows of each operational data block in the image data are stored in different memory banks, the internal data storage array comprising a plurality of such banks; issuing a load instruction for loading an operational data block, the instruction being delivered to each memory bank over a bus directed at that bank and providing corresponding addressing information for the different banks; and, according to the corresponding addressing information, reading the corresponding data from each memory bank and loading it, in order, into a register. With this method, a plurality of data items scattered across the storage array can be read into a Load/Store register within a single read cycle and a single Load instruction, greatly improving read efficiency without wasting precious register space.

Description

Data transmission method for a microprocessor, and microprocessor
Technical field
The present invention relates to the field of data transmission control, and in particular to a data transmission method for a microprocessor and to a microprocessor.
Background technology
Microprocessor CPUs are applied very widely; for example, a CPU is an essential component of the various SoCs (System on Chip). Referring to the structural diagram of a CPU device shown in Fig. 1, it may specifically comprise:
a data storage system 101, for storing data;
an instruction memory 102, for storing instructions;
a data storage manager 103, for managing access requests directed at the data storage array;
an instruction storage manager 104, for managing access requests directed at the instruction memory;
a controller 105, for controlling and coordinating the operation of the functional units;
an arithmetic unit 106, for performing the various arithmetic and logic operations.
The CPU of Fig. 1 adopts a Harvard architecture, in which data and instructions are stored and fetched separately: the CPU comprises a data path (data storage array 101 and data storage manager 103) and an instruction path (instruction memory 102 and instruction storage manager 104), the data path being connected to a data bus and the instruction path to an instruction bus.
Data transfer between a microprocessor CPU and external devices is generally performed by DMA (Direct Memory Access). As is well known, a DMA transfer between CPU-internal memory and external memory needs no involvement of the CPU; the transfer path is provided by the bus.
In general, the internal data storage system 101 of the CPU may comprise a cache (cache memory, e.g. registers) and a TCM (tightly coupled memory). The TCM can be a RAM (Random Access Memory) of fixed size, each storage location of which can be accessed randomly and individually by instructions. When performing logic operations, the required data can be read from or written to the TCM by Load/Store instructions.
Fig. 2 illustrates the data structure of image data. Assuming an image resolution of 640 x 480, each row of the structure shown in Fig. 2 is 640 bytes 201, and there are 480 rows. In the prior art the unit operated on is a data block (for example the 4 x 4-byte block 202 in Fig. 2), while the image is normally stored in the TCM row by row, so the data of the block to be operated on (for example block 202 in Fig. 2) end up scattered over different storage locations of the TCM.
A characteristic of Load/Store instructions, however, is that each instruction accesses no more than one memory address, and memory accesses cannot be mixed with arithmetic operations; for image data, the prior art therefore cannot read all the data of a whole block from the TCM into the registers of the Load/Store instruction with a single Load. Several Load operations are usually needed, each reading part of the data into a separate register (the Load/Store registers typically being only 256 bits), before the whole block has been read; this is inefficient and wastes precious register space.
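The scattering just described can be made concrete with a short sketch (illustrative only, not from the patent; the image geometry is the 640 x 480 example of Fig. 2, and `block_row_starts` is a hypothetical helper name):

```python
# Row-major layout of a 640x480 byte image: the four rows of a 4x4-byte
# block start 640 bytes apart, so a conventional Load, which accesses a
# single contiguous region, needs four separate operations to fetch them.

ROW_STRIDE = 640  # bytes per image row

def block_row_starts(x, y, size=4, stride=ROW_STRIDE):
    """Start address of each row of a size x size byte block at column x, row y."""
    return [(y + r) * stride + x for r in range(size)]

starts = block_row_starts(x=100, y=50)
print(starts)  # [32100, 32740, 33380, 34020] -> four scattered reads
```

The four addresses are 640 bytes apart, which is exactly why the prior art needs four Load instructions and four registers for one block.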
Hence, an urgent technical problem for those skilled in the art is how to improve the efficiency of reading scattered data.
Summary of the invention
The technical problem to be solved by the invention is to provide a data transmission method applied in a microprocessor CPU that can read scattered data directly into a register with a single Load/Store instruction, saving register space and improving read efficiency.
Correspondingly, the invention also provides a microprocessor CPU that reads data stored scattered across its internal memory array with higher efficiency.
To solve the above technical problem, an embodiment of the invention discloses a data transmission method for a microprocessor CPU, comprising the following steps:
storing image data in the CPU-internal data storage array, wherein the data of the different rows of each operational data block in the image data are stored in different memory banks, the CPU-internal data storage array comprising a plurality of memory banks; issuing a load instruction for loading an operational data block, the load instruction being delivered to each memory bank over the bus directed at that bank and providing corresponding addressing information for the different banks; and, according to the corresponding addressing information, reading the corresponding data from each memory bank and loading it, in order, into a register.
Preferably, the addressing information for a memory bank comprises the first addressed byte and an offset within that bank.
Preferably, the CPU-internal data storage array comprises eight 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
Preferably, the CPU-internal data storage array comprises sixteen 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
Preferably, the dot matrix of the image data is 640 x 480, 1024 x 768, 1600 x 1200 or 2048 x 1536.
According to another embodiment, the invention also discloses a microprocessor CPU, specifically comprising:
a data storage array, comprising a plurality of memory banks and storing image data, the data of the different rows of each operational data block in the image data being stored in different memory banks;
an instruction memory, for storing instructions;
a data storage manager, connected to the plurality of memory banks of the data storage array by multiple buses, for managing access requests directed at the data storage array;
an instruction storage manager, for managing access requests directed at the instruction memory;
a controller, for controlling and coordinating the operation of the functional units;
an arithmetic unit, for performing the various arithmetic and logic operations;
and further comprising:
a Load/Store unit, for sending the data storage manager a load instruction for loading an operational data block, the load instruction providing corresponding addressing information for each memory bank of the data storage array;
the data storage manager reading, according to the corresponding addressing information, the corresponding data from each memory bank and loading it, in order, into a register of the Load/Store unit.
Preferably, the addressing information for a memory bank comprises the first addressed byte and an offset within that bank.
Preferably, the CPU-internal data storage array comprises eight 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
Preferably, the CPU-internal data storage array comprises sixteen 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
Preferably, the dot matrix of the image data is 640 x 480, 1024 x 768, 1600 x 1200 or 2048 x 1536.
Compared with the prior art, the invention has the following advantages:
The microprocessor of the invention internally adopts a data storage array with a plurality of memory banks, so that every row of the image data block required by a single operation (for example a 4 x 4-byte block) is stored in a different bank, and each bank is connected to the data storage manager by its own bus. When data must be read, a single Load/Store instruction is issued and the data storage manager accesses all banks simultaneously, the Load instruction supplying different addressing information for the position of the required data in each bank. A plurality of data items scattered across the storage array can thus be read into the Load/Store registers within one read cycle, greatly improving read efficiency without wasting precious register space.
Description of drawings
Fig. 1 is a structural diagram of a prior-art microprocessor CPU system embodiment;
Fig. 2 is a diagram of an existing image data structure;
Fig. 3 is a flow chart of the steps of a data reading method for a microprocessor CPU according to the invention;
Fig. 4 is a diagram of the storage layout of a required data block in the CPU-internal memory array according to the invention;
Fig. 5 is a structural diagram of a microprocessor CPU system embodiment of the invention.
Embodiment
To make the above objects, features and advantages of the invention more apparent, the invention is described in further detail below with reference to the drawings and specific embodiments.
Microprocessor CPU devices of the RISC (reduced instruction set computer) architecture are applied in all kinds of embedded system development, for example the widely used ARM embedded systems. In an ARM embedded system, the data storage manager (MMU, Memory Management Unit) controls memory access rights and maps the virtual memory space onto physical memory (ARM adopting paged virtual memory management). The data storage manager receives all access requests directed at the storage array (i.e. at each memory bank) and has capabilities such as access-rights control and arbitration management. The present invention connects the data storage manager to each memory bank by multiple buses, so that the required data scattered across the banks can be read separately according to addressing information directed at each bank.
An access request directed at the data storage array of the microprocessor CPU may, in one case, be initiated from outside the CPU; for example, DMA (Direct Memory Access) can temporarily take over the data bus and issue access requests for the data storage array directly to the data storage manager.
In the other case it is initiated inside the CPU: in a RISC architecture, for example, Load/Store instructions are always sent to the data storage manager by the Load/Store unit, and the data storage manager then completes the corresponding read or write; if computation is involved, the arithmetic unit is invoked on the data read or written. A specific access request usually comprises address information and read/write control information, and for a write operation also the data to be written.
In CPU processing of image data, a common typical use is to read the contents of some operational data block of the image stored in the data storage array (the usual operational unit being a 4 x 4-byte block) into the registers of the Load/Store unit by Load/Store instructions, so that the arithmetic unit of the CPU can operate on it; such operations may include enlarging, shrinking, rotation, special effects and so on. By operating on every data block of the image in turn, the processing of the whole image is completed.
For the image data of Fig. 2, if some 4 x 4-byte block is to be operated on, then because a Load instruction can only read contiguous data and, when executed, generally accesses no more than one memory address, the prior art needs four Load instructions for the target block: four separate reads occupying four registers before the read is complete. With the present invention, a single Load instruction suffices to read out all the information of the required target block and store it in one register, saving precious register space and improving read efficiency. The specific technical solution is described below.
Referring to Fig. 3, an embodiment of a data reading method for a microprocessor CPU according to the invention is shown, which may specifically comprise the following steps:
Step 301: store image data in the CPU-internal data storage array, wherein the data of the different rows of each operational data block in the image data are stored in different memory banks, the CPU-internal data storage array comprising a plurality of memory banks;
Step 302: issue a load instruction for loading an operational data block, the load instruction being delivered to each memory bank over the bus directed at that bank and providing corresponding addressing information for the different banks;
wherein the addressing information for a memory bank may comprise the first addressed byte and an offset within that bank; preferably, the offset is 1 to 10;
Step 303: according to the corresponding addressing information, read the corresponding data from each memory bank and load it, in order, into a register.
Step 301 is characterised by storing the data of the different rows of each operational data block in different memory banks; data of one and the same row may, of course, also be spread over different banks. Fig. 4 illustrates the storage of a required operational data block in the CPU-internal memory array: four groups of 4 consecutive bytes, taken modulo 64 (the four groups of bytes 22-25, marked in yellow), are stored across eight memory banks of 32 bits each, and each bank holds only one row of the target data (4 bytes) or part of a row (2 bytes); that is, each scattered item is contiguous within its bank and can be read completely by one Load instruction.
To read the data scattered over the banks in one go, the Load instruction targeting the data block must supply separate addressing information for each bank. Since a Load instruction, when concretely executed, generally accesses no more than one memory address, the invention uses eight buses to deliver the respective addressing information to the corresponding banks, and the read control unit of each bank then completes the read of the required information within that bank. In this way the read process of each bank performs only one memory access, meeting the requirement of the Load instruction. Through the separate operation of the eight banks, and by storing the results in order into the register of the Load instruction, a read of several scattered data items, stored into one register, is accomplished within a single Load instruction cycle, improving efficiency and saving register resources. For example, the four groups of bytes 22-25 are stored sequentially in the register, and the arithmetic unit can call them directly for computation.
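The single-instruction gather described above can be sketched as a minimal simulation (names such as `banked_load`, the bank sizes and the addressing tuples are illustrative assumptions, not taken from the patent):

```python
# Each bank is modelled as an independent byte array with its own bus:
# one Load carries a (bank, first_byte, length) entry per bank, every
# bank performs its single access, and the results are packed in order
# into one register.

N_BANKS = 8
BANK_SIZE = 64  # bytes per bank in this toy model

def banked_load(banks, entries):
    """entries: list of (bank_index, first_byte, length), one per bank used."""
    register = bytearray()
    for bank, first, length in entries:
        register += banks[bank][first:first + length]  # one access per bank
    return bytes(register)

# Place the four 4-byte rows of a block in banks 1, 3, 5, 7 at byte 22,
# loosely mirroring the byte-22..25 groups of Fig. 4.
banks = [bytearray(BANK_SIZE) for _ in range(N_BANKS)]
for row, bank in enumerate((1, 3, 5, 7)):
    banks[bank][22:26] = bytes(16 * row + c for c in range(4))

register = banked_load(banks, [(b, 22, 4) for b in (1, 3, 5, 7)])
assert register == bytes(16 * r + c for r in range(4) for c in range(4))
```

One call, four per-bank accesses, one filled register: this is the behaviour the per-bank addressing information is intended to achieve.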
The following briefly describes how the rows of an operational data block (a 4 x 4-byte block) come to be stored in different memory banks. First, existing image resolutions are generally the 640 x 480, 1024 x 768, 1600 x 1200 or 2048 x 1536 dot matrices.
Taking 640 x 480 as an example, and referring to Fig. 4, assume the CPU-internal data storage array comprises eight memory banks of 32 bits (4 bytes) each; Fig. 4 is drawn modulo 64 (640 being an integer multiple of 64, this is merely equivalent to adding n rows, and the principle is identical). After each row of image data, 8 null bytes are appended to separate consecutive rows of the image; the null bytes are drawn as 'X' in the figure.
The 8 null bytes appended after each image row are the decisive factor: a padded row of 648 bytes leaves a remainder of 8 bytes when divided by 32 bytes, and 32 bytes is exactly one row of the storage array of the invention (eight 4-byte banks). That is, for an operational data block, the positions of the first and second rows in the array of eight 4-byte banks differ by 8 bytes (for example byte No. 22 of bank 5 versus byte No. 22 of bank 7); the second and third rows likewise differ by 8 bytes, as do the third and fourth, and the fourth and first. The operational data block (4 x 4 bytes) is thus spread uniformly over the banks, no two of its rows ever falling into the same bank. From this analysis it can be seen that, under these conditions, a storage array of eight or more banks satisfies the demand.
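The bank arithmetic above can be checked mechanically. The sketch below assumes (these assumptions are mine, not stated verbatim in the patent) word-interleaving of 4-byte words across the 8 banks, i.e. bank index = (address // 4) mod 8, with 8 null bytes appended to each 640-byte row:

```python
N_BANKS = 8
WORD = 4                 # bytes per bank word
STRIDE = 640 + 8         # image row plus 8 appended null bytes = 648

def bank_of(addr):
    return (addr // WORD) % N_BANKS

# 648 bytes = 162 words, and 162 mod 8 = 2: each image row starts 2
# banks after the previous one, so the 4 rows of any 4x4 block start in
# the 4 distinct banks b, b+2, b+4, b+6 (mod 8).
for x in range(0, 637, 7):           # sample of block columns
    for y in range(0, 477, 11):      # sample of block top rows
        banks = {bank_of((y + r) * STRIDE + x) for r in range(4)}
        assert len(banks) == 4       # four rows, four different banks
```

Under these assumptions the check passes for every block position, matching the 8-byte (2-bank) shift per row derived in the text.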
Admittedly, a fifth row and the first row would fall into the same bank; but since existing operational data blocks are all 4 x 4 bytes, this case, though possible in theory, does not affect the practical application of the invention. If, with technical development, the byte matrix of the operational data block grows, it suffices to adjust the number of memory banks of the invention accordingly (for example to 16 or more).
It should also be noted that if, when storing the image data, 4 null bytes are used between rows instead, only four memory banks are needed (more than four certainly also works). That is, supposing the first row of an operational data block is stored in the first bank starting at byte No. 0, the second row (4 bytes) lands exactly in the second bank, the third row (4 bytes) exactly in the third bank, and the fourth row (4 bytes) exactly in the fourth bank. This operational data block (4 x 4 bytes) is thereby also spread uniformly over the banks, with no two rows appearing in the same bank, likewise satisfying the invention's requirement of reading the whole block's information with a single Load instruction.
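The 4-bank variant can be checked the same way (again a sketch under assumptions of mine: 4 null bytes between rows, four 4-byte banks, and word-aligned block columns as in the byte-No.-0 example above):

```python
N_BANKS = 4
WORD = 4
STRIDE = 640 + 4         # image row plus 4 null bytes = 644

def bank_of(addr):
    return (addr // WORD) % N_BANKS

# 644 bytes = 161 words, and 161 mod 4 = 1: each image row starts one
# bank after the previous one, so the 4 rows of a word-aligned 4x4
# block occupy the four banks in rotation.
for x in range(0, 640, 4):          # word-aligned columns only
    for y in range(0, 477, 9):      # sample of block top rows
        banks = {bank_of((y + r) * STRIDE + x) for r in range(4)}
        assert len(banks) == 4      # four rows, four different banks
```

With only a 1-bank shift per row, an unaligned block row would straddle two adjacent banks, which is why this sketch restricts itself to the aligned case the text describes.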
For the 1024 x 768, 1600 x 1200 and 2048 x 1536 dot matrices the calculation principle is identical, and 640, 1024, 1600 and 2048 are all integer multiples of 64, so the situation is essentially similar to the foregoing analysis and is not detailed here.
Of course, the examples above are all described with a 4 x 4-byte data block; when the operational unit changes, correcting the number of memory banks on the basis of the foregoing principles still satisfies the requirements of the invention. Nor is the invention limited to the dot matrices above: for other dot matrices that may appear under future technical conditions, the invention can be applied by correcting the number of memory banks on the basis of the foregoing principles.
Referring to Fig. 5, a microprocessor CPU embodiment 500 of the invention is shown, which may specifically comprise the following components:
a data storage array 501, comprising a plurality of memory banks and storing image data, the data of the different rows of each operational data block in the image data being stored in different memory banks;
an instruction memory 502, for storing instructions;
a data storage manager 503, connected to the plurality of memory banks of the data storage array by multiple buses, for managing access requests directed at the data storage array;
an instruction storage manager 504, for managing access requests directed at the instruction memory;
a controller 505, for controlling and coordinating the operation of the functional units;
an arithmetic unit 506, for performing the various arithmetic and logic operations;
and further comprising:
a Load/Store unit 507, for sending the data storage manager 503 a load instruction for loading an operational data block, the load instruction providing corresponding addressing information for each memory bank of the data storage array 501. Preferably, the addressing information for a memory bank comprises the first addressed byte and an offset within that bank; for example, for bank 5 of Fig. 4 the addressing information may comprise: first addressed byte, the address of byte No. 22; offset, bytes 1-4.
The data storage manager 503 reads, according to the corresponding addressing information, the corresponding data from each memory bank and loads it, in order, into a register of the Load/Store unit 507. For Fig. 4, for example, the four groups of bytes 22-25 are read separately and stored sequentially into the register of the Load instruction.
Preferably, the CPU-internal data storage array comprises eight 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes; alternatively, the array comprises sixteen 32-bit memory banks with a 4 x 4-byte operational data block.
The embodiment of Fig. 5 also comprises an instruction bus 510 and a data bus 511; the two are only a logical division and, if the bus is time-multiplexed, are physically one and the same.
It should be noted that, for simplicity of description, each of the foregoing method embodiments is expressed as a series of combined actions; those skilled in the art should know, however, that the invention is not limited by the order of actions described, since according to the invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the invention.
In addition, the embodiments in this specification are described in a progressive manner, each embodiment focusing on its differences from the others, while identical or similar parts of the embodiments may be found by mutual reference; likewise, for parts of the device embodiments not described in full, refer to the corresponding description of the method embodiments.
The microprocessor CPU provided by the invention and the data transmission method applied to a microprocessor CPU have been described in detail above. Specific examples have been used herein to expound the principles and embodiments of the invention, the description of the above embodiments serving only to help understand the method of the invention and its core idea; at the same time, a person of ordinary skill in the art may, following the idea of the invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this description should not be construed as limiting the invention.

Claims (10)

1. A data transmission method for a microprocessor CPU, characterised by comprising:
storing image data in the CPU-internal data storage array, wherein the data of the different rows of each operational data block in the image data are stored in different memory banks, the CPU-internal data storage array comprising a plurality of memory banks;
issuing a load instruction for loading an operational data block, the load instruction being delivered to each memory bank over the bus directed at that bank and providing corresponding addressing information for the different memory banks;
according to the corresponding addressing information, reading the corresponding data from each memory bank and loading it, in order, into a register.
2. The method of claim 1, characterised in that:
the addressing information for a memory bank comprises the first addressed byte and an offset within that bank.
3. The method of claim 1, characterised in that:
the CPU-internal data storage array comprises eight 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
4. The method of claim 1, characterised in that:
the CPU-internal data storage array comprises sixteen 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
5. The method of claim 1, characterised in that:
the dot matrix of the image data is 640 x 480, 1024 x 768, 1600 x 1200 or 2048 x 1536.
6. A microprocessor CPU, characterised by comprising:
a data storage array, comprising a plurality of memory banks and storing image data, the data of the different rows of each operational data block in the image data being stored in different memory banks;
an instruction memory, for storing instructions;
a data storage manager, connected to the plurality of memory banks of the data storage array by multiple buses, for managing access requests directed at the data storage array;
an instruction storage manager, for managing access requests directed at the instruction memory;
a controller, for controlling and coordinating the operation of the functional units;
an arithmetic unit, for performing the various arithmetic and logic operations;
and further comprising:
a Load/Store unit, for sending the data storage manager a load instruction for loading an operational data block, the load instruction providing corresponding addressing information for each memory bank of the data storage array;
the data storage manager reading, according to the corresponding addressing information, the corresponding data from each memory bank and loading it, in order, into a register of the Load/Store unit.
7. The microprocessor CPU of claim 6, characterised in that:
the addressing information for a memory bank comprises the first addressed byte and an offset within that bank.
8. The microprocessor CPU of claim 6, characterised in that:
the CPU-internal data storage array comprises eight 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
9. The microprocessor CPU of claim 6, characterised in that:
the CPU-internal data storage array comprises sixteen 32-bit memory banks, and the operational data block is a block of 4 x 4 bytes.
10. The microprocessor CPU of claim 6, characterised in that:
the dot matrix of the image data is 640 x 480, 1024 x 768, 1600 x 1200 or 2048 x 1536.
CN200910076814.2A 2009-01-21 2009-01-21 Data transmission method used for micro-processor and micro-processor Expired - Fee Related CN101504600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910076814.2A CN101504600B (en) 2009-01-21 2009-01-21 Data transmission method used for micro-processor and micro-processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910076814.2A CN101504600B (en) 2009-01-21 2009-01-21 Data transmission method used for micro-processor and micro-processor

Publications (2)

Publication Number Publication Date
CN101504600A true CN101504600A (en) 2009-08-12
CN101504600B CN101504600B (en) 2014-05-07

Family

ID=40976857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910076814.2A Expired - Fee Related CN101504600B (en) 2009-01-21 2009-01-21 Data transmission method used for micro-processor and micro-processor

Country Status (1)

Country Link
CN (1) CN101504600B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040064670A1 (en) * 2000-11-21 2004-04-01 John Lancaster Data addressing
CN1688965A (en) * 2002-10-11 2005-10-26 皇家飞利浦电子股份有限公司 VLIW processor with power saving
CN101127578A (en) * 2007-09-14 2008-02-20 广东威创日新电子有限公司 A method and system for processing a magnitude of data

Cited By (16)

Publication number Priority date Publication date Assignee Title
US10866807B2 (en) 2011-12-22 2020-12-15 Intel Corporation Processors, methods, systems, and instructions to generate sequences of integers in numerical order that differ by a constant stride
US10565283B2 (en) 2011-12-22 2020-02-18 Intel Corporation Processors, methods, systems, and instructions to generate sequences of consecutive integers in numerical order
CN104011644B (en) * 2011-12-22 2017-12-08 英特尔公司 Processor, method, system and instruction for generation according to the sequence of the integer of the phase difference constant span of numerical order
US11650820B2 (en) 2011-12-22 2023-05-16 Intel Corporation Processors, methods, systems, and instructions to generate sequences of integers in numerical order that differ by a constant stride
CN104011644A (en) * 2011-12-22 2014-08-27 英特尔公司 Processors, methods, systems, and instructions to generate sequences of integers in numerical order that differ by a constant stride
US10223112B2 (en) 2011-12-22 2019-03-05 Intel Corporation Processors, methods, systems, and instructions to generate sequences of integers in which integers in consecutive positions differ by a constant integer stride and where a smallest integer is offset from zero by an integer offset
US10223111B2 (en) 2011-12-22 2019-03-05 Intel Corporation Processors, methods, systems, and instructions to generate sequences of integers in which integers in consecutive positions differ by a constant integer stride and where a smallest integer is offset from zero by an integer offset
US10732970B2 (en) 2011-12-22 2020-08-04 Intel Corporation Processors, methods, systems, and instructions to generate sequences of integers in which integers in consecutive positions differ by a constant integer stride and where a smallest integer is offset from zero by an integer offset
CN105740156A (en) * 2014-12-08 2016-07-06 联想(北京)有限公司 Method and device for access control, access method, storage method and access system
CN105740156B (en) * 2014-12-08 2019-01-15 联想(北京)有限公司 Access control method and device, access method, storage method and access system
CN112445525A (en) * 2019-09-02 2021-03-05 中科寒武纪科技股份有限公司 Data processing method, related device and computer readable medium
CN113434198A (en) * 2021-06-25 2021-09-24 深圳市中科蓝讯科技股份有限公司 RISC-V instruction processing method, storage medium and electronic device
CN113434198B (en) * 2021-06-25 2023-07-14 深圳市中科蓝讯科技股份有限公司 RISC-V instruction processing method, storage medium and electronic device
WO2023184705A1 (en) * 2022-04-02 2023-10-05 长鑫存储技术有限公司 Data transmission circuit and method, and storage device
US11816361B2 (en) 2022-04-02 2023-11-14 Changxin Memory Technologies, Inc. Circuit and method for transmitting data to memory array, and storage apparatus
US11837304B2 (en) 2022-04-02 2023-12-05 Changxin Memory Technologies, Inc. Detection circuit

Also Published As

Publication number Publication date
CN101504600B (en) 2014-05-07

Similar Documents

Publication Publication Date Title
CN101504600B (en) Data transmission method used for micro-processor and micro-processor
US20100191918A1 (en) Cache Controller Device, Interfacing Method and Programming Method Using the Same
CN1983196B (en) System and method for grouping execution threads
US20030103056A1 (en) Computer system controller having internal memory and external memory control
US11163710B2 (en) Information processor with tightly coupled smart memory unit
CN107291424A (en) Accelerator based on flash memory and the computing device comprising it
CN104221005B (en) For sending a request to the mechanism of accelerator from multithreading
JP2003504757A (en) Buffering system bus for external memory access
US7948498B1 (en) Efficient texture state cache
CN102446159B (en) Method and device for managing data of multi-core processor
US7836221B2 (en) Direct memory access system and method
US11907814B2 (en) Data path for GPU machine learning training with key value SSD
CN101150486A (en) A management method for receiving network data of zero copy buffer queue
CN107710175A (en) Memory module and operating system and method
US8397005B2 (en) Masked register write method and apparatus
CN102402422A (en) Processor component and memory sharing method thereof
CN104808950B (en) Modal dependence access to in-line memory element
CN111459543B (en) Method for managing register file unit
EP2689325B1 (en) Processor system with predicate register, computer system, method for managing predicates and computer program product
US8478946B2 (en) Method and system for local data sharing
JP5527340B2 (en) Vector processing apparatus and vector processing method
CN100456232C (en) Storage access and dispatching device aimed at stream processing
CN115878517A (en) Memory device, operation method of memory device, and electronic device
CN104424130A (en) Increasing the efficiency of memory resources in a processor
CN113157602A (en) Method and device for distributing memory and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140507

Termination date: 20190121