CN104008216B - Method for utilizing a memory compiler to generate an optimized memory instance - Google Patents

Method for utilizing a memory compiler to generate an optimized memory instance

Info

Publication number
CN104008216B
Authority
CN
China
Prior art keywords
memory
optimizing
block
database
compiler
Prior art date
Legal status
Active
Application number
CN201310056648.6A
Other languages
Chinese (zh)
Other versions
CN104008216A
Inventor
连南钧
林孝平
石维强
林育均
叶有伟
Current Assignee
M31 Technology Corp
Original Assignee
M31 Technology Corp
Priority date
Filing date
Publication date
Application filed by M31 Technology Corp
Priority to CN201310056648.6A
Publication of CN104008216A
Application granted
Publication of CN104008216B

Landscapes

  • Devices For Executing Special Programs (AREA)
  • Design And Manufacture Of Integrated Circuits (AREA)

Abstract

Provided is a method for using a memory compiler to generate an optimized memory instance. Data describing the designed memory are provided, and a front-end model and a back-end model are produced to build a database. Design criteria are received through a user interface, and the memory design is optimized against the database and the design criteria by considering speed, power and area simultaneously, so as to generate a memory instance.

Description

Method for producing an optimized memory instance using a memory compiler
Technical field
The present invention relates to a memory compiler, and more particularly to a memory compiler that automatically optimizes speed, power and area simultaneously.
Prior art
A memory compiler (such as a random access memory compiler) can be used to automatically generate memory instances. A memory compiler is also used to support system-on-chip (SoC) design capacity. However, when generating a memory instance, a traditional memory compiler offers only a single characteristic specification of speed, power or density. The generated memory instance is therefore usually not optimized for all three aspects at once and cannot simultaneously meet the customer's requirements.
Additionally, when generating a memory instance, a traditional memory compiler operates at the device level. Because the number of devices is enormous, often beyond a million, tuning the overall performance is difficult, and optimizing a memory instance takes considerable time.
Since traditional memory compilers cannot generate optimized memory instances effectively and quickly, a novel memory compiler is needed to overcome the shortcomings of legacy memory compilers.
Summary of the invention
In view of the above problems of the prior art, an object of an embodiment of the present invention is to propose a method of using a memory compiler to generate an optimized memory instance, which considers the three factors of speed, power and area simultaneously to optimize the memory design. In one embodiment, the proposed memory compiler operates at the architecture level, the block level and the device level, to accelerate the generation of memory instances.
According to an embodiment of the present invention, description data associated with the designed memory is provided, and a front-end model and a back-end model are generated to provide a database. Design criteria are received through a user interface. According to the database and the design criteria, the three factors of speed, power and area are considered simultaneously to optimize the design of the memory, thereby generating a memory instance.
In a specific embodiment, the optimization step uses a top-down approach: the architecture of the designed memory is decomposed into a plurality of blocks, the blocks are analyzed according to their characteristics, and the best combination is selected. For these decomposed blocks, at least one high-speed database, at least one low-power database and at least one small-area database are obtained from the database. Coarse block selection and adjustment is performed according to the performance characteristics of the blocks. After the best combination of blocks is selected, the device parameters inside the blocks are further fine-tuned to achieve the optimization. The optimization step also uses a bottom-up approach: the adjusted devices are linked to form the blocks, and the blocks are combined to form the memory, after which the overall optimization result is checked.
Description of the drawings
Fig. 1 shows a flowchart of a method according to an embodiment of the present invention for using a memory compiler to generate an optimized memory instance.
Fig. 2 shows a detailed flowchart of the optimization step of Fig. 1.
Fig. 3 illustrates block decomposition.
Fig. 4 illustrates a three-dimensional constraint surface.
Description of reference numerals
11: Provide memory-related data
12: Front-end model and back-end model
13: Memory compiler user interface
14: Optimization
141: Define control rules (formulas)
142: Select relevant portion of the database
143: Architecture decomposition
144: Obtain high-speed, low-power and small-area databases
145: Device adjustment
146: Block remapping
147: Architecture remapping
148: Constraints met?
149: Instance generation
15: Candidate list
16: Requirements met?
41: Three-dimensional constraint surface
2A: Top-down approach
2B: Bottom-up approach
XDEC: X-decoder
IO: Input/output circuit
Specific embodiment
Fig. 1 shows a flowchart of a method according to an embodiment of the present invention for using a memory compiler to generate an optimized memory instance. The present embodiment may be used to generate an optimized memory instance such as a static random access memory (SRAM), a read-only memory (ROM), a content addressable memory (CAM) or a flash memory.
First, in step 11, description data associated with the designed memory is provided, for example by a semiconductor foundry. The data provided in step 11 may be a circuit described in a circuit-simulation (SPICE) netlist, design rules (for example, integrated-circuit topological layout rules (TLR)) or a device type (such as random access memory), but is not limited thereto. According to the provided data, a front-end (F/E) model and a back-end (B/E) model are generated in step 12, so as to supply a database (library) of design behavior models to an optimizer, which considers the three factors of speed, power and area (or density) simultaneously to optimize the memory design. In contrast, a traditional memory compiler is developed for only a single characteristic factor among speed, power or density rather than all three. In this specification, the front-end model relates to the currents, voltages and/or power of the designed memory, and the back-end model relates to the layout pattern of the designed memory. In a preferred embodiment, the proposed method is suitable for designing small-area (or high-density) memories; compared with conventional methods, this preferred embodiment is more efficient in optimizing the design of small-area (or high-density) memory instances.
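As a concrete illustration of the kind of database steps 11 and 12 assemble, the sketch below models each characterized block variant with front-end (electrical) and back-end (layout) figures of merit. The class name, fields and numeric values are illustrative assumptions, not data from the patent.

```python
from dataclasses import dataclass


@dataclass
class BlockVariant:
    """One characterized implementation of a memory building block."""
    block_type: str   # "memory_cell", "xdec", "control" or "io"
    name: str         # variant identifier derived from the foundry data
    speed_ns: float   # front-end model: access/decode delay
    power_uw: float   # front-end model: current/voltage folded into power
    area_um2: float   # back-end model: layout-pattern footprint


# A toy database as it might be handed to the optimizer (values invented).
DATABASE = [
    BlockVariant("memory_cell", "hs_cell", speed_ns=0.8, power_uw=5.0, area_um2=0.30),
    BlockVariant("memory_cell", "lp_cell", speed_ns=1.4, power_uw=2.0, area_um2=0.28),
    BlockVariant("xdec",        "hs_xdec", speed_ns=0.5, power_uw=3.0, area_um2=12.0),
    BlockVariant("io",          "sa_io",   speed_ns=0.9, power_uw=4.0, area_um2=9.5),
]
```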
On the other hand, in step 13, a user interface, such as a graphical user interface (GUI), is installed on a computer running the memory compiler to receive design criteria from the customer, for example an exemplary configuration. The user interface also receives the priority among speed, power and area. Additionally, the user interface receives the storage capacity of the designed memory (such as 2MB or 1GB). In the following steps, the memory is designed and optimized according to the storage capacity and the priority.
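The design criteria gathered through the user interface can likewise be pictured as a small record holding the capacity, the speed/power/area priority and any hard limits. The structure and field names below are hypothetical, chosen only to make the later sketches concrete.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DesignCriteria:
    """Design criteria entered through the compiler's user interface."""
    capacity_bits: int                    # e.g. 2 MB = 2 * 2**20 * 8 bits
    priority: tuple                       # ranking of "speed", "power", "area"
    max_delay_ns: Optional[float] = None  # optional hard limits (constraints)
    max_power_uw: Optional[float] = None
    max_area_um2: Optional[float] = None


criteria = DesignCriteria(
    capacity_bits=2 * 2**20 * 8,           # a 2MB instance
    priority=("speed", "area", "power"),   # speed ranked first by the customer
    max_delay_ns=2.0,
)
```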
Then, in step 14, the design is optimized with respect to speed, power and area according to the database provided in step 12 and the constraints received in step 13. Details of the optimization are illustrated below with reference to Fig. 2.
After the optimization of step 14 is performed, a candidate list is prepared in step 15; it includes a plurality of generated memory instances to be finally evaluated against the customer's requirements. In step 16, the memory instance that best fits the customer's requirements is selected from the candidate list.
Fig. 2 shows a detailed flowchart of the optimization (that is, step 14) of Fig. 1. In step 141, according to the constraints received in step 13, control rules (or formulas) are defined for the speed, power and area of the designed memory. Meanwhile, in step 142, the relevant portion of the provided database is selected according to the constraints received in step 13.
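Steps 141 and 142 can be read as deriving a scoring rule from the priority ranking and pruning the database to the portion that can still satisfy the hard limits. The sketch below builds on the hypothetical BlockVariant and DesignCriteria records above and shows one possible reading; the weights are arbitrary placeholders.

```python
def control_rule(variant, criteria):
    """Step 141: turn the priority ranking into a scalar scoring rule.
    Lower is better; the weights are arbitrary placeholders."""
    weights = dict(zip(criteria.priority, (4.0, 2.0, 1.0)))
    return (weights["speed"] * variant.speed_ns
            + weights["power"] * variant.power_uw
            + weights["area"] * variant.area_um2)


def relevant_portion(database, criteria):
    """Step 142: keep only variants that can still meet the hard limits."""
    kept = []
    for v in database:
        if criteria.max_delay_ns is not None and v.speed_ns > criteria.max_delay_ns:
            continue
        if criteria.max_power_uw is not None and v.power_uw > criteria.max_power_uw:
            continue
        if criteria.max_area_um2 is not None and v.area_um2 > criteria.max_area_um2:
            continue
        kept.append(v)
    return kept
```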
One of feature according to the present embodiment, using mode from top to bottom(top-down approach)2A is optimizing The design of memory.Wherein, in step 143, as illustrated in fig. 3, the whole framework of designed memory is decomposed into into multiple areas Block:Memory cell, X-decoder(XDEC), control circuit and imput output circuit(IO).Thus, it is possible to block level is come The framework of memory is represented, to carry out the specificity analysis of block and select best of breed.Contrary, traditional memory compiler Then it is carried out in element level, therefore the more difficult manipulation of its reservoir designs.The block of the present embodiment can be based on leaf unit (leaf-cell-based)Block, but be not limited to this.
Next, in step 144, at least one high-speed database, at least one low-power database and at least one small-area (or high-density) database related to these blocks are obtained from the database provided in step 12. In the present embodiment, the qualifier "high" or "low/small" means that the value of a physical quantity (such as speed, power or area) is respectively greater than or less than a preset critical value. Coarse block selection and adjustment is then performed according to the performance characteristics of the blocks. Finally, in step 145, after the best combination of blocks is selected, the parameters of the devices (for example, transistors) of these blocks are further adjusted or fine-tuned if necessary. In the present embodiment, the adjusted parameters may include the threshold voltage (such as a low, standard or high threshold voltage), the width/length of P-type metal-oxide-semiconductor (PMOS) or N-type metal-oxide-semiconductor (NMOS) devices, the parallel/series devices of the physical layout pattern and the dynamic/static combinational/sequential gate circuit type.
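Steps 144 and 145 can be sketched as threshold-based classification into high-speed, low-power and small-area sub-libraries, followed by a first-order device-width tweak of a chosen variant. The thresholds and the scaling model below are assumptions for illustration only, not the patent's device models.

```python
def classify(variants, speed_cut, power_cut, area_cut):
    """Step 144: split one block's variants into high-speed, low-power and
    small-area sub-libraries using preset critical values (thresholds)."""
    return {
        "high_speed": [v for v in variants if v.speed_ns <= speed_cut],
        "low_power":  [v for v in variants if v.power_uw <= power_cut],
        "small_area": [v for v in variants if v.area_um2 <= area_cut],
    }


def tune(variant, width_scale=1.0):
    """Step 145: fine-tune device parameters of a chosen variant, here by
    scaling transistor widths.  The first-order trade-off below (wider
    devices switch faster but burn more power and area) is a placeholder."""
    variant.speed_ns /= width_scale
    variant.power_uw *= width_scale
    variant.area_um2 *= width_scale
    return variant
```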
According to another feature of the present embodiment, a bottom-up approach 2B is used to fine-tune the optimization. In step 146, the adjusted devices (for example, some devices adjusted and the others not) are linked (or remapped) to form the individual blocks; in step 147, these blocks are combined (or remapped) to form the memory, and a full combined simulation is then run on the memory to check the overall optimization result. If the simulation result meets the constraints (step 148), the corresponding memory instance is generated (step 149); otherwise, another portion of the provided database is selected in step 142 according to the priority received in step 13, and the top-down approach 2A and the bottom-up approach 2B are performed again. The top-down approach 2A and the bottom-up approach 2B are thus performed one or more times to obtain a candidate list that includes a plurality of generated memory instances.
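The bottom-up recombination and constraint check of steps 146-148 can be pictured as enumerating block combinations, estimating whole-memory speed, power and area, and keeping the combinations that pass. A real flow would run the full combined simulation; the roll-up below is a deliberately crude stand-in under the assumed data structures above.

```python
import itertools


def assemble(combination):
    """Steps 146-147: remap tuned devices into blocks and blocks into a
    whole-memory estimate; taking the worst delay and summing power/area
    is only a stand-in for the full combined simulation."""
    return {
        "speed_ns": max(v.speed_ns for v in combination),
        "power_uw": sum(v.power_uw for v in combination),
        "area_um2": sum(v.area_um2 for v in combination),
    }


def meets_constraints(result, criteria):
    """Step 148: compare the estimated whole-memory figures with the limits."""
    return ((criteria.max_delay_ns is None or result["speed_ns"] <= criteria.max_delay_ns)
            and (criteria.max_power_uw is None or result["power_uw"] <= criteria.max_power_uw)
            and (criteria.max_area_um2 is None or result["area_um2"] <= criteria.max_area_um2))


def candidate_list(blocks, criteria):
    """Enumerate one variant per block, keep the passing combinations and
    return them as the candidate memory instances of step 15."""
    candidates = []
    for combo in itertools.product(*blocks.values()):
        result = assemble(combo)
        if meets_constraints(result, criteria):
            candidates.append((combo, result))
    return candidates
```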
As mentioned above, the present embodiment considers the three factors of speed, power and area simultaneously to optimize the memory design. Therefore, as exemplified in Fig. 4, a three-dimensional constraint surface 41 is built up during the optimization. One or more memory instances close to the three-dimensional constraint surface 41 are selected as the optimal candidates.
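One way to read the selection against the three-dimensional constraint surface 41 is as a nearest-distance ranking in (speed, power, area) space. The sketch below assumes a single reference point standing in for the surface; the metric and names are illustrative, not the patent's definition.

```python
def distance_to_surface(result, reference):
    """Euclidean distance from a candidate's (speed, power, area) point to an
    assumed reference point on the three-dimensional constraint surface."""
    return ((result["speed_ns"] - reference["speed_ns"]) ** 2
            + (result["power_uw"] - reference["power_uw"]) ** 2
            + (result["area_um2"] - reference["area_um2"]) ** 2) ** 0.5


def best_candidates(candidates, reference, top_n=3):
    """Rank candidate instances by closeness to the constraint surface and
    return the closest ones as the optimal candidates."""
    return sorted(candidates, key=lambda c: distance_to_surface(c[1], reference))[:top_n]
```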
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the present invention; other equivalent changes or modifications made without departing from the spirit disclosed by the invention should be included in the scope of the present application.

Claims (16)

1. A method of using a memory compiler to generate an optimized memory instance, comprising:
providing description data associated with a designed memory;
generating a front-end model and a back-end model to provide a database;
receiving design criteria through a user interface; and
optimizing the design of the memory according to the database and the design criteria while considering speed, power and area simultaneously, so as to generate a memory instance,
wherein the design criteria comprise a priority of speed, power and area,
wherein the optimizing step comprises:
defining control rules for the speed, power and area of the designed memory according to the priority and specification requirements;
selecting a relevant portion of the database according to the priority and specification requirements;
decomposing the architecture of the designed memory into a plurality of blocks;
for the decomposed blocks, obtaining at least one high-speed database, at least one low-power database and at least one small-area database from the database;
performing coarse block selection and adjustment according to the performance characteristics of the blocks;
adjusting parameters of devices of the blocks;
linking the adjusted devices to form the blocks;
combining the blocks to form the memory; and
performing a full combined simulation on the memory.
2. The method of using a memory compiler to generate an optimized memory instance according to claim 1, further comprising:
preparing a candidate list for evaluation, the candidate list comprising a plurality of memory instances; and
selecting one of the memory instances from the candidate list.
3. The method of using a memory compiler to generate an optimized memory instance according to claim 1, wherein the description data comprise a described circuit, design rules or a device type.
4. The method of using a memory compiler to generate an optimized memory instance according to claim 1, wherein the front-end model relates to currents, voltages and/or power of the designed memory.
5. The method of using a memory compiler to generate an optimized memory instance according to claim 1, wherein the back-end model relates to a layout pattern of the designed memory.
6. The method of using a memory compiler to generate an optimized memory instance according to claim 1, further comprising: receiving a storage capacity of the designed memory.
7. The method of using a memory compiler to generate an optimized memory instance according to claim 1, wherein the decomposed blocks comprise a memory cell, an X-decoder, a control circuit and an input/output circuit.
8. The method of using a memory compiler to generate an optimized memory instance according to claim 1, wherein the parameters comprise a threshold voltage, a width/length of a P-type metal-oxide-semiconductor (PMOS) or N-type metal-oxide-semiconductor (NMOS) device, parallel/series devices, and a dynamic/static gate circuit type.
9. The method of using a memory compiler to generate an optimized memory instance according to claim 1, wherein the optimizing step generates a three-dimensional constraint surface, and one or more memory instances close to the three-dimensional constraint surface are selected as optimal candidates.
10. A three-dimensional memory compiler optimization method, comprising:
defining control rules for the three dimensions of a designed memory according to a three-dimensional priority of speed, power and area, so as to generate a three-dimensional constraint surface;
decomposing the designed memory into a plurality of blocks;
for the decomposed blocks, obtaining at least one high-speed database, at least one low-power database and at least one small-area database from a provided database;
performing coarse block selection and adjustment according to the performance characteristics of the blocks;
adjusting parameters of devices of the blocks;
linking the adjusted devices to form the blocks;
combining the blocks to generate a plurality of memory instances; and
selecting one or more memory instances close to the three-dimensional constraint surface as optimal candidates.
11. The three-dimensional memory compiler optimization method according to claim 10, wherein the decomposed blocks comprise a memory cell, an X-decoder, a control circuit and an input/output circuit.
12. The three-dimensional memory compiler optimization method according to claim 10, wherein the parameters comprise a threshold voltage, a width/length of a P-type metal-oxide-semiconductor (PMOS) or N-type metal-oxide-semiconductor (NMOS) device, parallel/series devices, and a dynamic/static gate circuit type.
13. A memory compiler optimization method, comprising:
defining control rules for the speed, power and area of a designed memory according to a priority;
selecting a relevant portion of a provided database according to the priority;
in a top-down manner, decomposing the architecture of the designed memory into a plurality of blocks and further into a plurality of devices;
in a bottom-up manner, linking the devices to form the blocks and recombining the blocks to form the architecture of the memory; and
performing a full combined simulation on the memory,
wherein the top-down step comprises:
decomposing the architecture of the designed memory into a plurality of blocks;
for the decomposed blocks, obtaining at least one high-speed database, at least one low-power database and at least one small-area database from the database;
performing coarse block selection and adjustment according to the performance characteristics of the blocks; and
adjusting parameters of devices of the blocks.
14. The memory compiler optimization method according to claim 13, wherein the bottom-up step comprises:
linking the adjusted devices to form the blocks; and
combining the blocks to form the memory.
15. The memory compiler optimization method according to claim 13, wherein the decomposed blocks comprise a memory cell, an X-decoder, a control circuit and an input/output circuit.
16. The memory compiler optimization method according to claim 13, wherein the parameters comprise a threshold voltage, a width/length of a P-type metal-oxide-semiconductor (PMOS) or N-type metal-oxide-semiconductor (NMOS) device, parallel/series devices, and a dynamic/static gate circuit type.
CN201310056648.6A 2013-02-22 2013-02-22 Method for utilizing a memory compiler to generate an optimized memory instance Active CN104008216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310056648.6A CN104008216B (en) 2013-02-22 2013-02-22 Method for utilizing a memory compiler to generate an optimized memory instance

Publications (2)

Publication Number Publication Date
CN104008216A CN104008216A (en) 2014-08-27
CN104008216B true CN104008216B (en) 2017-04-26

Family

ID=51368873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310056648.6A Active CN104008216B (en) Method for utilizing a memory compiler to generate an optimized memory instance

Country Status (1)

Country Link
CN (1) CN104008216B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106383938B (en) * 2016-09-07 2020-01-10 京微齐力(北京)科技有限公司 FPGA memory inference method and device
CN116362199B (en) * 2023-05-26 2023-08-11 上海韬润半导体有限公司 Method and device for optimizing type selection of memory in chip design

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8037432B2 (en) * 2005-11-16 2011-10-11 Lsi Corporation Method and apparatus for mapping design memories to integrated circuit layout
CN101449256A (en) * 2006-04-12 2009-06-03 索夫特机械公司 Apparatus and method for processing an instruction matrix specifying parallel and dependent operations

Also Published As

Publication number Publication date
CN104008216A (en) 2014-08-27

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant