CN115328551A - Microprocessor architecture design method and system based on operator

Info

Publication number
CN115328551A
CN115328551A
Authority
CN
China
Prior art keywords
operator
execution
instruction set
target instruction
execution component
Legal status
Pending
Application number
CN202210916248.7A
Other languages
Chinese (zh)
Inventor
邓全
孙彩霞
郑重
隋兵才
王永文
倪晓强
郭维
黄立波
雷国庆
王俊辉
郭辉
沈俊忠
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Application filed by National University of Defense Technology
Priority to CN202210916248.7A
Publication of CN115328551A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 - Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3867 - Concurrent instruction execution, e.g. pipeline, look ahead using instruction pipelines
    • G06F9/3885 - Concurrent instruction execution, e.g. pipeline, look ahead using a plurality of independent parallel functional units
    • G06F9/3889 - Concurrent instruction execution, e.g. pipeline, look ahead using a plurality of independent parallel functional units controlled by multiple instructions, e.g. MIMD, decoupled access or execute
    • G06F8/00 - Arrangements for software engineering
    • G06F8/30 - Creation or generation of source code
    • G06F8/36 - Software reuse

Abstract

The invention discloses an operator-based microprocessor architecture design method and system. For a target instruction set architecture, the calculation functions contained in the target instruction sets it supports are determined and abstracted into operators to obtain an operator set. For each operator in the operator set, an execution component assembly is built for the instructions of the target instruction set calculation functions that the operator maps to, yielding an execution component assembly library. An execution pipeline template is then updated with the assemblies in the library to complete the design of the execution pipeline in the target instruction set architecture. The invention effectively reduces the coupling between instruction sets and the hardware architecture of the pipeline execution stage, allows the execution pipeline of a target instruction set architecture to be designed quickly, provides a lightweight and extensible implementation approach for microprocessors supporting multiple instruction sets, alleviates the area overhead caused by incremental multi-instruction-set design, and shortens the design cycle of microprocessor architecture upgrades.

Description

Microprocessor architecture design method and system based on operator
Technical Field
The invention belongs to the technical field of microprocessor design, in particular to microprocessor architecture and execution component design, and specifically relates to an operator-based microprocessor architecture design method and system.
Background
The instruction set is the interface between software and hardware and the underlying support of a microprocessor architecture. The development of instruction sets is currently undergoing new changes: open instruction sets represented by RISC-V are flourishing, while traditional closed instruction sets such as x86 and ARM face continuous challenges. The growing variety of instruction sets means that software faces compatibility problems on microprocessors built for a different instruction set. This problem not only limits the development of the software ecosystem around emerging instruction sets, but also limits the deployment of hardware devices based on them. A microprocessor that supports multiple instruction sets can solve, at the hardware level, the software incompatibility caused by differing instruction sets, and offers higher performance and better compatibility than solutions at other design levels such as binary translation. The execution pipeline is responsible for performing the numerical and logical operations specified by the instruction set. As the computational power of processor chips grows, the area and power consumption of the execution pipeline account for an increasing share, so its design is crucial: the microarchitecture should be designed to maximize the computational power provided per unit area. Designing a microprocessor that supports multiple instruction sets poses many challenges. For the execution pipeline, because different instruction sets define calculation operations differently, naively implementing the calculation operations of each instruction set causes the hardware area to grow sharply, and such a design cannot be rapidly extended to other instruction sets.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the invention provides an operator-based microprocessor architecture design method and system, not limited to microprocessors that support multiple instruction sets, with the aim of shortening the design cycle and reducing the hardware area overhead caused by supporting multiple instruction sets.
In order to solve the technical problems, the invention adopts the technical scheme that:
an operator-based microprocessor architecture design method, comprising:
s101, aiming at a target instruction set architecture, determining a calculation function contained in a target instruction set supported by the target instruction set architecture, abstracting the calculation function into an operator to obtain an operator set, and thus establishing mapping from the operator to an instruction of the calculation function of the target instruction set;
s102, aiming at operators in an operator set, establishing an execution component assembly for the instructions of the calculation function of the mapped target instruction set, and accordingly obtaining an execution component assembly library;
s103, updating the execution pipeline template based on the execution component in the execution component library, and finally finishing the design of the execution pipeline in the target instruction set architecture.
Optionally, the target instruction set architecture in step S101 supports multiple target instruction sets. After the calculation functions are abstracted into operators to obtain the operator set, the operator set contains common operators, similar operators, and independent operators: a common operator is an operator used by multiple target instruction sets; a similar operator is an operator used by one target instruction set that resembles an operator used by another target instruction set; and an independent operator is an operator used by a single target instruction set alone.
Optionally, when the calculation functions are abstracted into operators in step S101 to obtain the operator set, the method further includes decomposing a complex calculation function into several simple calculation functions, each of which is mapped to one operator.
Optionally, the execution pipeline template preset in step S103 includes an issue queue, an execution path, a bypass, a result bus, and control logic. Updating the execution pipeline template means placing the execution component assemblies from the execution component assembly library into the execution path, connecting the placed execution component assemblies to the issue queue and the result bus, and updating the control logic and the bypass decode module so that the required execution component assembly can be located.
Optionally, in the preset execution pipeline template, the issue queue, the execution path, and the result bus are connected in sequence, the bypass is connected in parallel with the execution path, and the issue queue, the execution path, the bypass, and the result bus are each connected to the control logic.
Optionally, when execution component assemblies are built in step S102 for the instructions of the calculation functions of the target instruction set mapped by the operators in the operator set, building an execution component assembly for the instructions mapped by similar operators includes: a group of similar operators shares one execution component assembly, which comprises an execution component shared by the identical parts of the instructions of the similar operators, independent execution components for the differing parts of the instructions of the similar operators, and control logic and a control port that control the execution state of the independent execution components so that the execution component assembly can be switched between the calculation functions of the different similar operators.
Optionally, step S103 is followed by a step of adding support for a new target instruction set to the target instruction set architecture:
S201, determining the calculation functions contained in the target instruction set to be newly supported and abstracting them into operators, thereby establishing a mapping from these operators to the instructions of the calculation functions contained in the newly supported target instruction set, and updating the operator set with the operators of the newly supported target instruction set;
S202, for the newly added operators in the operator set, building execution component assemblies for the instructions of the calculation functions of the target instruction set that they map to, and updating the execution component assembly library;
S203, updating the execution pipeline template based on the execution component assemblies in the updated execution component assembly library, finally completing the update of the execution pipeline in the target instruction set architecture.
Optionally, the target instruction sets supported by the target instruction set architecture include an ARM instruction set and a RISC-V instruction set.
In addition, the invention also provides a microprocessor, which comprises a microprocessor body and an execution pipeline arranged in the microprocessor body, wherein the execution pipeline is obtained by adopting the microprocessor architecture design method based on the operator.
Furthermore, the present invention also provides an operator based microprocessor architecture design system comprising a microprocessor and a memory connected to each other, said microprocessor being programmed or configured to perform the steps of said operator based microprocessor architecture design method.
Furthermore, the present invention also provides a computer readable storage medium having stored therein a computer program for being programmed or configured by a microprocessor to carry out the steps of the operator-based microprocessor architecture design method.
Compared with the prior art, the invention mainly has the following advantages: for a target instruction set architecture, the calculation functions contained in the target instruction sets it supports are determined and abstracted into operators to obtain an operator set, thereby establishing a mapping from the operators to the instructions of the calculation functions of the target instruction sets; for each operator in the operator set, an execution component assembly is built for the instructions of the calculation functions of the target instruction set that the operator maps to, thereby obtaining an execution component assembly library; and the execution pipeline template is updated based on the execution component assemblies in the library, finally completing the design of the execution pipeline in the target instruction set architecture. Through the abstraction into operators and the updating of the execution pipeline template, the design of the execution pipeline in the target instruction set architecture can be completed quickly, the coupling between the instruction sets and the hardware architecture of the pipeline execution stage is effectively reduced, a lightweight and extensible implementation approach is provided for designing microprocessors that support multiple instruction sets, the area overhead caused by incremental multi-instruction-set design is alleviated, and the design cycle of microprocessor architecture upgrades is shortened.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of the basic principle of the method according to an embodiment of the present invention.
FIG. 3 is an example of an execution pipeline template for a method of an embodiment of the present invention.
Detailed Description
The basic principle of the operator-based microprocessor architecture design method is to establish a mapping from operators to the instructions of the calculation functions of a target instruction set, abstract those instructions into operators, quickly determine the execution component assemblies that the target instruction set architecture needs in order to support the target instruction set, and quickly complete the design of the execution pipeline in the target instruction set architecture by updating and replacing the execution path in an execution pipeline template. The operator-based microprocessor architecture design method supports two design scenarios: the first is a lightweight pipeline design method for multiple instruction sets; the second extends the range of instructions supported by an existing design, i.e., upgrades the existing design for a new instruction set. The core idea of the design method is the same in both scenarios; only the specific execution differs slightly.
The first scenario provides a lightweight pipeline design method for multiple instruction sets. Specifically, as shown in FIG. 1 and FIG. 2, the operator-based microprocessor architecture design method of this embodiment includes:
s101, aiming at a target instruction set architecture, determining a calculation function contained in a target instruction set supported by the target instruction set architecture, abstracting the calculation function into an operator to obtain an operator set, and thus establishing mapping from the operator to an instruction of the calculation function of the target instruction set;
s102, aiming at an operator in an operator set, establishing an execution component assembly for the instruction of the calculation function of the target instruction set mapped by the operator, and obtaining an execution component assembly library;
s103, updating the execution pipeline template based on the execution component in the execution component library, and finally finishing the design of the execution pipeline in the target instruction set architecture.
Step S101 clarifies the design requirements: by analyzing the target instruction set architecture, the calculation functions contained in the target instruction sets are identified and abstracted into operators. Note that the target instruction set architecture in step S101 may support a single target instruction set or multiple target instruction sets as required. As an optional implementation, in this embodiment the target instruction set architecture in step S101 supports multiple target instruction sets. After the calculation functions are abstracted into operators to obtain the operator set, the operator set contains common operators, similar operators, and independent operators: a common operator is an operator used by multiple target instruction sets; a similar operator is an operator used by one target instruction set that resembles an operator used by another target instruction set (their overall functions are similar but slightly different, and the similar operator merges operators that would otherwise be separate for different instruction sets); and an independent operator is an operator used by a single target instruction set alone.
The instruction set A and instruction set B shown in FIG. 2 are the target instruction sets to be supported in the first scenario; instruction set C is the target instruction set to be newly supported in the second scenario, described in detail later. In this embodiment, instruction set A is the ARM instruction set (ARM64) and instruction set B is the RISC-V instruction set (RV64G). A common operator is an operator used by multiple instruction sets, for example an integer addition operator to which both the ADD instruction of ARM64 and the ADD instruction of RV64G are mapped. A similar operator resembles a common operator in that it is used by both the ARM64 and RV64G instruction sets, but its function differs slightly between the two instruction sets; the decoder and the execution pipeline can be shared, but the slight difference must be handled during execution. For example, the LDCLR instruction of ARM64 and the AMOAND instruction of RV64G both map to a bitwise-AND memory atomic operator: both perform an atomic memory operation, but their functions differ. After LDCLR loads data from memory (call it OldValue), it applies a clear operation with the register value (call it RegValue), i.e., OldValue AND NOT(RegValue), whereas AMOAND computes OldValue AND RegValue. An independent operator is used by only one instruction set, i.e., only RV64G or only ARM64; for example, the floating-point sign-injection operator is needed only by instructions of the RV64G instruction set, since ARM64 has no instruction with this function.
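To make the classification concrete, the following sketch (illustrative only; the Operator and OpClass types, the operator names, and the mapping table are assumptions introduced here, not part of the patent) models an operator set shared by ARM64 and RV64G and the mapping from instructions to the operators described above.

```python
from dataclasses import dataclass
from enum import Enum, auto

class OpClass(Enum):
    COMMON = auto()       # used identically by several instruction sets
    SIMILAR = auto()      # shared datapath, slight per-instruction-set difference
    INDEPENDENT = auto()  # used by a single instruction set only

@dataclass(frozen=True)
class Operator:
    name: str
    op_class: OpClass

# Operator set abstracted from the calculation functions of ARM64 and RV64G
INT_ADD     = Operator("int_add", OpClass.COMMON)
ATOMIC_AND  = Operator("mem_atomic_and", OpClass.SIMILAR)      # LDCLR vs. AMOAND
FP_SIGN_INJ = Operator("fp_sign_inject", OpClass.INDEPENDENT)  # RV64G only

# Mapping from (instruction set, instruction) to the operator it uses
INSTR_TO_OPERATOR = {
    ("ARM64", "ADD"):    INT_ADD,
    ("RV64G", "ADD"):    INT_ADD,
    ("ARM64", "LDCLR"):  ATOMIC_AND,
    ("RV64G", "AMOAND"): ATOMIC_AND,
    ("RV64G", "FSGNJ"):  FP_SIGN_INJ,
}

def operators_for(isa: str) -> set:
    """Collect the operators a given instruction set needs."""
    return {op for (set_name, _), op in INSTR_TO_OPERATOR.items() if set_name == isa}

print(operators_for("RV64G"))  # int_add, mem_atomic_and, fp_sign_inject
```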
As an optional implementation, when the calculation functions are abstracted into operators in step S101 to obtain the operator set, this embodiment further decomposes complex calculation functions into several simple calculation functions, each of which is mapped to one operator; for example, a multiply-accumulate operation is split into a multiplication operator and an addition operator, as sketched below.
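Continuing the same illustrative sketch (the instruction and operator names are assumptions, not the patent's notation), the decomposition of a complex calculation function can be recorded as an ordered list of simple operators:

```python
# A complex calculation function decomposes into simple calculation functions,
# each of which maps to exactly one operator.
DECOMPOSITION = {
    # ARM64 MADD (multiply-add) splits into a multiplication operator followed by
    # an addition operator.
    ("ARM64", "MADD"): ["int_mul", "int_add"],
}
assert DECOMPOSITION[("ARM64", "MADD")] == ["int_mul", "int_add"]
```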
Step S102 builds the execution component assembly library based on the operators. There is no functional overlap among the execution components of common operators and independent operators, so execution component assemblies are built for them in a direct one-to-one correspondence. Similar operators, however, do overlap functionally, so there is room for optimization when building their execution component assemblies. In this embodiment, when execution component assemblies are built in step S102 for the instructions of the calculation functions of the target instruction set mapped by the operators in the operator set, building an execution component assembly for the instructions mapped by similar operators works as follows: a group of similar operators shares one execution component assembly, which comprises an execution component shared by the identical parts of the instructions of the similar operators, independent execution components for the differing parts, and control logic and a control port that control the execution state of the independent components so that the assembly can be switched between the calculation functions of the different similar operators. In this way a group of similar operators is reduced to a single multi-mode execution component assembly, which reduces the number of execution component assemblies, lowers the area overhead of the microprocessor design, and introduces only a small timing change from the control signals for the similar calculation functions. When the execution component assembly library is built in this embodiment, each execution component assembly can be regarded as an implementation of a specific operator. Operators are used only to determine the functions of the hardware execution units; the specific circuit implementation of the hardware is not discussed here. The operators obtained from the preceding example include the common integer-addition (ADD) operator, the similar bitwise-AND memory atomic operator, and the independent sign-injection operator; the execution component assembly library accordingly contains an arithmetic unit supporting ADD, an arithmetic unit supporting the bitwise-AND memory atomic operation, and an arithmetic unit supporting sign injection.
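As a behavioural sketch of the multi-mode idea (an assumption for illustration, not the patent's circuit), the execution component assembly for the bitwise-AND memory atomic operator can be modelled with a shared AND datapath and a one-bit control port that selects whether the register operand is complemented, switching between ARM64 LDCLR semantics and RV64G AMOAND semantics:

```python
from enum import Enum, auto

class AtomicAndMode(Enum):
    ARM64_LDCLR  = auto()  # OldValue AND NOT(RegValue)
    RV64G_AMOAND = auto()  # OldValue AND RegValue

MASK64 = (1 << 64) - 1

def atomic_and_assembly(old_value: int, reg_value: int, mode: AtomicAndMode) -> int:
    """Multi-mode execution component assembly for the bitwise-AND memory atomic operator.

    The 64-bit AND datapath is the execution component shared by both instructions;
    the conditional complement of reg_value is the small independent component whose
    execution state is selected through the control port `mode`.
    """
    operand = (~reg_value & MASK64) if mode is AtomicAndMode.ARM64_LDCLR else reg_value
    return (old_value & operand) & MASK64  # new value written back to memory

# The same assembly executes either instruction's calculation function.
assert atomic_and_assembly(0b1111, 0b0101, AtomicAndMode.ARM64_LDCLR)  == 0b1010
assert atomic_and_assembly(0b1111, 0b0101, AtomicAndMode.RV64G_AMOAND) == 0b0101
```

In hardware the control port would be driven by the decode logic, so the timing impact is limited to the extra select signal, consistent with the small timing change noted above.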
As shown in FIG. 3, the execution pipeline template preset in step S103 of this embodiment includes an issue queue, an execution path, a bypass, a result bus, and control logic. Updating the execution pipeline template means placing the execution component assemblies from the execution component assembly library into the execution path, connecting the placed assemblies to the issue queue and the result bus, and updating the control logic and the bypass decode module so that the required execution component assembly can be located. Using a preset execution pipeline template decouples the hardware design of the execution pipeline from the instruction set to the greatest possible extent. Assuming that the latencies of the three execution component assemblies are all 1, an operation-unit control encoding can be added by reusing the decode multiplexing of the original bypass mechanism, so that data can find the correct execution unit in the execution path.
As shown in FIG. 3, in the execution pipeline template preset in this embodiment, the issue queue, the execution path, and the result bus are connected in sequence, the bypass is connected in parallel with the execution path, and the issue queue, the execution path, the bypass, and the result bus are each connected to the control logic. With the execution pipeline template of FIG. 3, the execution path is updated with the execution component assemblies obtained in the previous step, the issue queue and bypass designs update the control logic according to the execution latencies, and the control logic is adapted with minor modifications. In addition, this embodiment places a register file behind the result bus to output results and to buffer them for multi-cycle operations. Note that the method of this embodiment is not limited to a specific execution pipeline template.
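The following sketch (again an illustrative assumption; the class names and the string-based decode table are not from the patent) shows the update step on a minimal model of the preset template: assemblies from the library are placed into the execution path, and a decode table standing in for the control logic and bypass decode locates the required assembly.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionComponentAssembly:
    operator_name: str      # operator the assembly implements
    latency_cycles: int = 1

@dataclass
class ExecutionPipelineTemplate:
    """Preset template: issue queue -> execution path -> result bus, bypass in
    parallel with the execution path, all four connected to the control logic."""
    execution_path: list = field(default_factory=list)
    locate_table: dict = field(default_factory=dict)  # decode: operator name -> slot

    def update(self, library):
        """Place each assembly from the library into the execution path and update
        the control/bypass decode so the required assembly can be located."""
        for assembly in library:
            self.locate_table[assembly.operator_name] = len(self.execution_path)
            self.execution_path.append(assembly)

    def locate(self, operator_name):
        return self.execution_path[self.locate_table[operator_name]]

# Example library of this embodiment: ADD, bitwise-AND memory atomic, sign injection
library = [ExecutionComponentAssembly("int_add"),
           ExecutionComponentAssembly("mem_atomic_and"),
           ExecutionComponentAssembly("fp_sign_inject")]
pipeline = ExecutionPipelineTemplate()
pipeline.update(library)
assert pipeline.locate("mem_atomic_and").latency_cycles == 1
```

Keeping the issue queue, bypass, result bus, and control logic fixed in the template and refreshing only the execution path and the decode table is what decouples the pipeline hardware from any particular instruction set.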
In the second scenario, the range of supported instruction sets is extended for an existing hardware execution pipeline design. Specifically, as shown by the dotted-line portion of FIG. 2, step S103 in this embodiment is followed by a step of adding support for a new target instruction set to the target instruction set architecture:
s201, determining a calculation function contained in a target instruction set which needs to be newly added and supported, abstracting the calculation function into an operator, thereby establishing mapping from the operator to an instruction of the calculation function contained in the target instruction set which needs to be newly added and supported, and updating an operator set according to the operator of the target instruction set which needs to be newly added and supported;
s202, aiming at the newly added operators in the operator set, establishing an execution component assembly for the instructions of the calculation function of the mapped target instruction set, and updating an execution component assembly library;
s203, updating the execution pipeline template based on the execution component in the updated execution component library, and finally completing the updating of the execution pipeline in the target instruction set architecture.
By mapping operators to the instructions of the calculation functions contained in the newly supported target instruction set, updating the execution component assembly library, and updating the execution pipeline template, the target instruction set architecture can quickly support the new instruction set. This provides a lightweight and extensible implementation approach for microprocessor designs supporting multiple instruction sets, alleviates the area overhead caused by incremental multi-instruction-set design, and shortens the design cycle of microprocessor architecture upgrades.
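A sketch of the incremental flow of steps S201 to S203 under the same assumptions (operator and assembly names are placeholders, not the patent's implementation): only operators that are genuinely new to the operator set trigger new execution component assemblies, while operators already covered reuse the existing ones, which is where the area saving of the incremental design comes from.

```python
def extend_with_new_isa(operator_set: set, assembly_library: dict, new_isa_operators: set) -> None:
    """S201-S203 for a newly supported target instruction set.

    operator_set      -- names of operators already abstracted (S101/S201)
    assembly_library  -- operator name -> execution component assembly (S102/S202)
    new_isa_operators -- operators abstracted from the new instruction set's
                         calculation functions
    """
    # S201: update the operator set with the operators of the new instruction set.
    newly_added = new_isa_operators - operator_set
    operator_set |= new_isa_operators

    # S202: build execution component assemblies only for the newly added operators.
    for op in newly_added:
        assembly_library[op] = f"assembly<{op}>"  # placeholder for a real assembly

    # S203: the execution pipeline template is then updated from the (now larger)
    # assembly library, e.g. with ExecutionPipelineTemplate.update() from the
    # sketch above.

# Example: instruction set C adds one genuinely new operator; the rest are reused.
ops = {"int_add", "mem_atomic_and", "fp_sign_inject"}
lib = {op: f"assembly<{op}>" for op in ops}
extend_with_new_isa(ops, lib, {"int_add", "popcount"})
assert set(lib) == {"int_add", "mem_atomic_and", "fp_sign_inject", "popcount"}
```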
In summary, this embodiment provides an operator-based execution pipeline design method for multiple instruction sets that reduces the coupling between instruction sets and the hardware architecture of the pipeline execution stage, provides a lightweight and extensible structure for designing microprocessors supporting multiple instruction sets, alleviates the area overhead caused by incremental multi-instruction-set design, shortens the design cycle of microprocessor architecture upgrades, and improves the efficiency with which hardware designs accumulated for different instruction sets can be migrated.
In addition, the present embodiment further provides a microprocessor, which includes a microprocessor body and an execution pipeline disposed in the microprocessor body, where the execution pipeline is obtained by using the above microprocessor architecture design method based on operators.
In addition, the present embodiment also provides an operator-based microprocessor architecture design system, which includes a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the steps of the operator-based microprocessor architecture design method.
Furthermore, the present embodiment also provides a computer-readable storage medium, in which a computer program is stored, the computer program being programmed or configured by a microprocessor to perform the steps of the foregoing operator-based microprocessor architecture design method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (10)

1. An operator-based microprocessor architecture design method, comprising:
s101, aiming at a target instruction set architecture, determining a calculation function contained in a target instruction set supported by the target instruction set architecture, abstracting the calculation function into an operator to obtain an operator set, and thus establishing mapping from the operator to an instruction of the calculation function of the target instruction set;
s102, aiming at operators in an operator set, establishing an execution component assembly for the instructions of the calculation function of the mapped target instruction set, and accordingly obtaining an execution component assembly library;
s103, updating the execution pipeline template based on the execution component in the execution component library, and finally finishing the design of the execution pipeline in the target instruction set architecture.
2. The method according to claim 1, wherein the target instruction set architecture in step S101 supports multiple target instruction sets, and after the calculation functions are abstracted into operators to obtain the operator set, the operator set includes common operators, similar operators, and independent operators, wherein a common operator is an operator used by multiple target instruction sets, a similar operator is an operator used by one target instruction set that resembles an operator used by another target instruction set, and an independent operator is an operator used by a single target instruction set alone.
3. The method according to claim 1, wherein, when the calculation functions are abstracted into operators in step S101 to obtain the operator set, the method further comprises decomposing a complex calculation function into several simple calculation functions, each of which is mapped to one operator.
4. The method according to claim 1, wherein the execution pipeline template preset in step S103 includes an issue queue, an execution path, a bypass, a result bus, and control logic, and updating the execution pipeline template means placing the execution component assemblies from the execution component assembly library into the execution path, connecting the placed execution component assemblies to the issue queue and the result bus, and updating the control logic and the bypass decode module so that the required execution component assembly can be located.
5. The method according to claim 4, wherein, in the preset execution pipeline template, the issue queue, the execution path, and the result bus are connected in sequence, the bypass is connected in parallel with the execution path, and the issue queue, the execution path, the bypass, and the result bus are each connected to the control logic.
6. The method according to claim 1, wherein, when execution component assemblies are built in step S102 for the instructions of the calculation functions of the target instruction set mapped by the operators in the operator set, building an execution component assembly for the instructions mapped by similar operators comprises: a group of similar operators shares one execution component assembly, which comprises an execution component shared by the identical parts of the instructions of the similar operators, independent execution components for the differing parts of the instructions of the similar operators, and control logic and a control port that control the execution state of the independent execution components so that the execution component assembly can be switched between the calculation functions of the different similar operators.
7. The method according to claim 1, further comprising, after step S103, a step of adding support for a new target instruction set to the target instruction set architecture:
S201, determining the calculation functions contained in the target instruction set to be newly supported and abstracting them into operators, thereby establishing a mapping from these operators to the instructions of the calculation functions contained in the newly supported target instruction set, and updating the operator set with the operators of the newly supported target instruction set;
S202, for the newly added operators in the operator set, building execution component assemblies for the instructions of the calculation functions of the target instruction set that they map to, and updating the execution component assembly library;
S203, updating the execution pipeline template based on the execution component assemblies in the updated execution component assembly library, finally completing the update of the execution pipeline in the target instruction set architecture.
8. A microprocessor comprising a microprocessor body and an execution pipeline disposed in the microprocessor body, wherein the execution pipeline is obtained by using the operator-based microprocessor architecture design method of any one of claims 1 to 7.
9. An operator based microprocessor architecture design system comprising a microprocessor and a memory connected to each other, characterized in that said microprocessor is programmed or configured to perform the steps of the operator based microprocessor architecture design method according to any of the claims 1-7.
10. A computer-readable storage medium, having stored thereon a computer program for being programmed or configured by a microprocessor to perform the steps of the method for operator-based microprocessor architecture design according to any of claims 1-7.
CN202210916248.7A (priority date 2022-08-01, filing date 2022-08-01) Microprocessor architecture design method and system based on operator, Pending, published as CN115328551A (en)

Priority Applications (1)

Application Number: CN202210916248.7A
Priority Date: 2022-08-01
Filing Date: 2022-08-01
Title: Microprocessor architecture design method and system based on operator (CN115328551A)

Applications Claiming Priority (1)

Application Number: CN202210916248.7A
Priority Date: 2022-08-01
Filing Date: 2022-08-01
Title: Microprocessor architecture design method and system based on operator (CN115328551A)

Publications (1)

Publication Number: CN115328551A
Publication Date: 2022-11-11

Family

ID=83918719

Family Applications (1)

Application Number: CN202210916248.7A (pending; published as CN115328551A (en))
Priority Date: 2022-08-01
Filing Date: 2022-08-01
Title: Microprocessor architecture design method and system based on operator

Country Status (1)

Country: CN; Publication: CN115328551A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117369867A (en) * 2023-09-28 2024-01-09 中国人民解放军国防科技大学 Instruction set and tool chain automatic generation oriented instruction set architecture model description method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination