CN106383938B - FPGA memory inference method and device - Google Patents

FPGA memory inference method and device

Info

Publication number
CN106383938B
CN106383938B (application CN201610808450.2A)
Authority
CN
China
Prior art keywords
memory
splitting
result
write
mapping
Prior art date
Legal status
Active
Application number
CN201610808450.2A
Other languages
Chinese (zh)
Other versions
CN106383938A (en)
Inventor
张云哲
耿嘉
樊平
Current Assignee
Jing Wei Qi Li (Beijing) Technology Co., Ltd.
Original Assignee
Jingwei Qili (beijing) Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingwei Qili (Beijing) Technology Co Ltd
Priority to CN201610808450.2A
Publication of CN106383938A
Application granted
Publication of CN106383938B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/34Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Design And Manufacture Of Integrated Circuits (AREA)
  • Logic Circuits (AREA)

Abstract

The invention relates to an FPGA memory inference method and apparatus. The method provided by the embodiment of the invention comprises the following steps: identifying each memory instance in a netlist synthesized from the user RTL by a net, then locating, through that net, the read/write ports and peripheral logic resources connected to the memory instance, and expanding the peripheral logic resources to obtain a complete representation of the memory instance; inferring the number of ports, the initial value and the operating mode of the memory to obtain an inference result; performing high-level mapping on the complete memory instance to obtain a high-level mapping result, and determining a splitting strategy from the address bus width, the data bus width and the RAM primitives of the chip to obtain a splitting result; and completing technology mapping of the memory instance according to the splitting result and the high-level mapping result. The embodiment of the invention provides a standard operation flow for memory inference; the flow is simple and clear, each step has a well-defined task, and the method is highly operable and widely applicable.

Description

FPGA memory inference method and device
Technical Field
The invention relates to the field of electronic technology, and in particular to an FPGA memory inference method and apparatus.
Background
A Field Programmable Gate Array (FPGA) is a semi-custom circuit in the field of application-specific integrated circuits; it remedies the shortcomings of fully custom circuits and overcomes the limited gate count of earlier programmable devices. In practical design, the memory resources on an FPGA chip can be used either by instantiating a memory of a specific process or by automatic memory inference. The former approach depends on a specific process and destroys the portability of the HDL (Hardware Description Language) source code, so the latter is usually adopted; however, the prior art lacks a standardized design for the latter.
Disclosure of Invention
In one aspect, an embodiment of the present invention provides an FPGA memory inference method, where the method includes: synthesizing the user RTL into a netlist, wherein each memory instance in the netlist is identified by a net, and the net is connected to a read port/write port and to peripheral logic resources; traversing the netlist to acquire the net that identifies the memory instance and the read port/write port connected to the net; expanding the peripheral logic resources of the read port/write port to complete the expansion of the memory instance; inferring the number of ports, the initial value and the operating mode of the memory instance to obtain an inference result; performing high-level mapping on the memory instance according to the inference result to obtain a high-level mapping result, and splitting the memory instance to obtain a splitting result; and completing technology mapping of the memory instance according to the splitting result and the high-level mapping result.
Optionally, in the foregoing method, splitting the memory instance to obtain a splitting result includes: determining the address bus width and the data bus width of the memory instance, and determining the various RAM primitives of the chip; and determining a splitting strategy according to the primitive sizes of the various RAM primitives, and splitting the memory instance according to the splitting strategy to obtain the splitting result.
Optionally, in the above method, the operating mode includes a write-first mode, a read-first mode, and/or a write-hold mode.
In another aspect, an embodiment of the present invention provides an FPGA memory inference apparatus, where the apparatus includes: a netlist synthesis unit, configured to synthesize the user RTL into a structured netlist, wherein each memory instance in the netlist is identified by a net, and the net is connected to the read port/write port WP/RP and to peripheral logic resources; a traversal query unit, configured to traverse the netlist, acquire each net that identifies a memory instance, and determine the read port/write port WP/RP connected to the net; an expansion unit, configured to expand the peripheral logic resources of the read port/write port WP/RP and complete the expansion of the memory instance; an inference unit, configured to infer the number of ports, the initial value and the operating mode of the memory instance to obtain an inference result T1; a splitting and mapping unit, configured to perform high-level mapping on the memory instance according to the inference result T1 to obtain a high-level mapping result Y1, and to split the memory instance to obtain a splitting result C1; and a technology mapping unit, configured to perform technology mapping on the memory instance according to the splitting result C1 and the high-level mapping result Y1, mapping the memory instance to specific devices.
Optionally, in the above apparatus, the splitting and mapping unit includes: a high-level mapping subunit, configured to perform high-level mapping on the memory instance according to the inference result T1 to obtain a high-level mapping result Y1; and a splitting subunit, configured to determine, according to the inference result T1, the address bus width and the data bus width of the memory instance and the various RAM primitives of the chip, determine a splitting strategy according to the primitive sizes of the various RAM primitives, and split the memory instance according to the splitting strategy to obtain a splitting result C1.
The FPGA memory inference method and apparatus provided by the embodiments of the invention establish a standard operation flow for memory inference; the flow is simple and clear, each step has a well-defined task, it is highly operable, and it applies broadly to the inference of various memory resources on various FPGAs.
Drawings
FIG. 1 is a schematic flow chart of an FPGA memory inference method according to an embodiment of the present invention;
FIG. 2 shows the user RTL of a single-port memory instance according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the structured netlist after synthesis of a single-port RAM instance according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an FPGA memory inference apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
The embodiment of the invention provides an FPGA memory inference method: each memory instance in a netlist synthesized from the user RTL is identified by a net; the read/write ports and peripheral logic resources connected to the memory instance are then located through that net, and the peripheral logic resources are expanded to obtain a complete representation of the memory instance; the number of ports, the initial value and the operating mode of the memory are inferred to obtain an inference result; high-level mapping is performed on the complete memory instance to obtain a high-level mapping result, and a splitting strategy is determined from the address bus width, the data bus width and the RAM primitives of the chip to obtain a splitting result; and technology mapping of the memory instance is completed according to the splitting result and the high-level mapping result.
The syntactic structure of a memory to which the inference method applies comprises a two-dimensional signal assignment statement and a conditional (case) structure controlled by a clock edge and a write enable signal. The method supports inference of single-port, dual-port and simple dual-port memories, inference of the write-first (WRITE FIRST), read-first (READ FIRST) and write-hold (NO CHANGE) operating modes, and inference of initial values.
An embodiment of the present invention provides an FPGA memory inference method. FIG. 1 is a schematic flow chart of the FPGA memory inference method provided by this embodiment; as shown in FIG. 1, the method includes:
Step S101, synthesizing the user RTL (Register Transfer Level) into a structured netlist, wherein each memory instance in the netlist is identified by a net, and the net is connected to a read port (ReadPort)/write port (WritePort) and to peripheral logic resources; the peripheral logic resources include: a data selector (Mux), a register (Reg), an AND gate (And) and an inverter (Not).
To describe the method provided by the embodiment of the present invention more intuitively, a specific example follows. FIG. 2 shows the user RTL of a single-port memory instance provided by an embodiment of the present invention, which is synthesized into a structured netlist according to step S101. FIG. 3 is a schematic diagram of the structured netlist after synthesis of this single-port RAM instance; as shown in FIG. 3, the netlist structure of the single-port memory instance includes a write port, a read port, and peripheral logic resources (Mux, Reg).
Step S102, traversing the structured netlist, acquiring each net that identifies a memory instance, and determining the read port/write port connected to that net.
Step S103, expanding the peripheral logic resources of the read port/write port and completing the expansion of the memory instance, so as to obtain a complete representation of the memory instance.
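Steps S101 to S103 can be sketched as a small search over the netlist. This is a minimal illustrative Python model, not the patent's implementation: the netlist representation (a map from net names to the cells attached to each net) and the helper `expand_memory_instances` are assumptions introduced for illustration.

```python
# Illustrative sketch of steps S101-S103: each memory instance is identified
# by a net; following that net yields its read/write ports, and the
# peripheral logic (Mux, Reg, And, Not) attached to it is then collected
# to complete the representation of the memory instance.

PERIPHERAL = {"Mux", "Reg", "And", "Not"}  # peripheral logic cell types

def expand_memory_instances(netlist):
    """netlist: dict mapping net name -> list of (cell_type, cell_name).
    Returns one record per net that identifies a memory instance."""
    instances = []
    for net, cells in netlist.items():
        ports = [c for c in cells if c[0] in ("ReadPort", "WritePort")]
        if not ports:
            continue  # this net does not identify a memory instance
        # expansion: gather the peripheral logic attached to the same net
        periph = [c for c in cells if c[0] in PERIPHERAL]
        instances.append({"net": net, "ports": ports, "peripheral": periph})
    return instances
```

For the single-port example of FIG. 3, the record would contain one write port, one read port, and the Mux/Reg peripheral logic.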
Step S104, inferring the number of ports, the initial value and the operating mode of the memory instance to obtain an inference result. The operating modes are as follows: in the write-first mode (WRITE FIRST), when the write enable signal input to the memory is active, the input value is used as the output value of the memory; in the read-first mode (READ FIRST), when the write enable signal input to the memory is active, the value previously stored by the memory is used as the output value; in the write-hold mode (NO CHANGE), when the write enable signal input to the memory is active, the output value remains unchanged and is not affected by the input value.
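The three operating modes can be captured in a short behavioral model. This Python sketch is an illustration of the mode semantics only (the function `clock_edge` and its signature are assumptions, not part of the patent):

```python
# Behavioral model of one clock edge of an inferred memory port.
# With the write enable active, the read value at the written address is:
#   WRITE_FIRST - the new input value
#   READ_FIRST  - the value stored before the write
#   NO_CHANGE   - the previous output, unaffected by the write

def clock_edge(mode, mem, addr, din, we, prev_out):
    if not we:                # write enable inactive: plain synchronous read
        return mem[addr]
    old = mem[addr]
    mem[addr] = din           # the write itself always takes effect
    if mode == "WRITE_FIRST":
        return din            # output the value just written
    if mode == "READ_FIRST":
        return old            # output the value held before the write
    return prev_out           # NO_CHANGE: output holds its previous value
```

Writing 7 to an address holding 5 returns 7 in write-first mode, 5 in read-first mode, and the previous output in write-hold mode; in all three cases the stored value becomes 7.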
Step S105, completing the high-level mapping and splitting of the memory instance; specifically, step S105 includes step S1051 and step S1052.
Step S1051, performing high-level mapping on the memory instance according to the inference result to obtain a high-level mapping result, i.e. mapping the memory instance to a memory that completely represents all information of the inference result.
Step S1052, determining the address bus width and the data bus width of the memory instance as well as the various RAM primitives of the chip; determining a splitting strategy according to the primitive sizes of the various RAM primitives, and splitting the memory instance to obtain a splitting result.
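A splitting strategy of this kind can be sketched as follows. The greedy choice (minimizing the primitive count over a grid split along the address and data buses) and the primitive shapes in the test are assumptions for illustration; the patent only states that the strategy is determined from the bus widths and the sizes of the chip's RAM primitives.

```python
# Illustrative splitting sketch for step S1052: cover a memory instance of
# (2**addr_width) words x data_width bits with a grid of RAM primitives,
# choosing the primitive shape that minimizes the number of primitives used.
import math

def split_memory(addr_width, data_width, primitives):
    """primitives: list of (depth, width) shapes a RAM primitive can take.
    Returns (shape, rows, cols): the chosen shape and the primitive grid."""
    depth = 1 << addr_width  # number of words addressed by the address bus
    best = None
    for (p_depth, p_width) in primitives:
        rows = math.ceil(depth / p_depth)       # split along the address bus
        cols = math.ceil(data_width / p_width)  # split along the data bus
        count = rows * cols
        if best is None or count < best[0]:
            best = (count, (p_depth, p_width), rows, cols)
    _, shape, rows, cols = best
    return shape, rows, cols
```

For example, a 1024 x 16 instance fits one hypothetical 1024x16 primitive, while a 4096 x 8 instance would be split along the data bus into two 4096x4 primitives.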
It should be noted that there is no restriction on the order of step S1051 and step S1052.
Step S106, performing technology mapping on the memory instance according to the splitting result and the high-level mapping result, mapping the memory instance to specific devices.
An embodiment of the present invention further provides an FPGA memory inference apparatus. FIG. 4 is a schematic structural diagram of the FPGA memory inference apparatus according to an embodiment of the present invention; as shown in FIG. 4, the apparatus includes:
a netlist synthesis unit 10, configured to synthesize the user RTL into a structured netlist, where each memory instance in the netlist is identified by a net, and the net is connected to the read/write port and to peripheral logic resources; the peripheral logic resources include a Mux, a Reg, an And and a Not.
The traversal query unit 20 is configured to traverse the structured netlist, obtain each net that identifies a memory instance, and determine the read port/write port WP/RP connected to that net.
The expansion unit 30 expands the peripheral logic resources of the read port/write port WP/RP obtained by the traversal query unit 20 and completes the expansion of the memory instance, so as to obtain a complete representation of the memory instance.
The inference unit 40 is configured to analyze the memory instance produced by the expansion unit 30, inferring its number of ports, initial value and operating mode to obtain an inference result T1. The operating modes are as follows: in the write-first mode, when the write enable signal input to the memory is active, the input value is used as the output value of the memory; in the read-first mode, when the write enable signal input to the memory is active, the value previously stored in the memory is used as the output value; in the write-hold mode, when the write enable signal input to the memory is active, the output value remains unchanged and is not affected by the input value.
The splitting and mapping unit 50 is configured to perform high-level mapping on the memory instance according to the inference result T1 to obtain a high-level mapping result Y1, and to split the memory instance to obtain a splitting result C1.
The technology mapping unit 60 is configured to perform technology mapping on the memory instance according to the splitting result C1 and the high-level mapping result Y1, mapping the memory instance to specific devices.
Optionally, the splitting and mapping unit 50 includes a high-level mapping subunit 51 and a splitting subunit 52, wherein:
the high-level mapping subunit 51 is configured to perform high-level mapping on the memory instance according to the inference result T1 to obtain a high-level mapping result Y1, that is, to map the memory instance into a memory that completely expresses all information of the inference result;
the splitting subunit 52 is configured to determine, according to the inference result T1, the address bus width and the data bus width of the memory instance as well as the various RAM primitives of the chip, determine a splitting strategy according to the primitive sizes of the various RAM primitives, and split the memory instance to obtain a splitting result C1.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are intended to further explain the objects, technical solutions and advantages of the present invention in detail. It should be understood that they are merely exemplary embodiments of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within the scope of the present invention.

Claims (4)

1. An FPGA memory inference method, comprising:
synthesizing the user RTL into a netlist, wherein each memory instance in the netlist is identified by a net, and the net is connected to a read port/write port and to peripheral logic resources;
traversing the netlist, acquiring the net that identifies the memory instance, and acquiring the read port/write port connected to the net;
expanding the peripheral logic resources of the read port/write port to complete the expansion of the memory instance;
inferring the number of ports, the initial value and the operating mode of the memory instance to obtain an inference result; the operating mode comprises a write-first mode, a read-first mode and/or a write-hold mode, wherein in the write-first mode, when the write enable signal input to the memory is active, the input value is used as the output value of the memory; in the read-first mode, when the write enable signal input to the memory is active, the value stored by the memory is used as the output value; and in the write-hold mode, when the write enable signal input to the memory is active, the output value remains unchanged and is not affected by the input value;
performing high-level mapping on the memory instance according to the inference result to obtain a high-level mapping result, and splitting the memory instance to obtain a splitting result, with no restriction on the order of the high-level mapping and the splitting; and
completing technology mapping of the memory instance according to the splitting result and the high-level mapping result.
2. The method of claim 1, wherein splitting the memory instance to obtain a splitting result comprises:
determining the address bus width and the data bus width of the memory instance, and determining the various RAM primitives of a chip; and
determining a splitting strategy according to the primitive sizes of the various RAM primitives, and splitting the memory instance according to the splitting strategy to obtain the splitting result.
3. An FPGA memory inference device, the device comprising:
a netlist synthesis unit, configured to synthesize the user RTL into a structured netlist, wherein each memory instance in the netlist is identified by a net, and the net is connected to the read port/write port and to peripheral logic resources;
a traversal query unit, configured to traverse the netlist, acquire each net that identifies a memory instance, and determine the read port/write port connected to the net;
an expansion unit, configured to expand the peripheral logic resources of the read port/write port to complete the expansion of the memory instance;
an inference unit, configured to infer the number of ports, the initial value and the operating mode of the memory instance to obtain an inference result; the operating mode comprises a write-first mode, a read-first mode and/or a write-hold mode, wherein in the write-first mode, when the write enable signal input to the memory is active, the input value is used as the output value of the memory; in the read-first mode, when the write enable signal input to the memory is active, the value stored by the memory is used as the output value; and in the write-hold mode, when the write enable signal input to the memory is active, the output value remains unchanged and is not affected by the input value;
a splitting and mapping unit, configured to perform high-level mapping on the memory instance according to the inference result to obtain a high-level mapping result, and to split the memory instance to obtain a splitting result, with no restriction on the order of the high-level mapping and the splitting; and
a technology mapping unit, configured to perform technology mapping on the memory instance according to the splitting result and the high-level mapping result, mapping the memory instance to specific devices.
4. The apparatus of claim 3, wherein the splitting and mapping unit comprises:
a high-level mapping subunit, configured to perform high-level mapping on the memory instance according to the inference result to obtain a high-level mapping result;
a splitting subunit, configured to determine, according to the inference result, the address bus width and the data bus width of the memory instance and the various RAM primitives of the chip, determine a splitting strategy according to the primitive sizes of the various RAM primitives, and split the memory instance according to the splitting strategy to obtain a splitting result.
CN201610808450.2A 2016-09-07 2016-09-07 FPGA memory inference method and device Active CN106383938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610808450.2A CN106383938B (en) 2016-09-07 2016-09-07 FPGA memory inference method and device


Publications (2)

Publication Number Publication Date
CN106383938A CN106383938A (en) 2017-02-08
CN106383938B (en) 2020-01-10

Family

ID=57939514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610808450.2A Active CN106383938B (en) 2016-09-07 2016-09-07 FPGA memory inference method and device

Country Status (1)

Country Link
CN (1) CN106383938B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329362B (en) * 2020-10-30 2023-12-26 苏州盛科通信股份有限公司 General method, device and storage medium for complex engineering modification of chip
CN112948324A (en) * 2021-04-16 2021-06-11 山东高云半导体科技有限公司 Memory mapping processing method and device and FPGA chip
CN113255258B (en) * 2021-06-23 2021-10-01 上海国微思尔芯技术股份有限公司 Logic synthesis method and device, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008216B (en) * 2013-02-22 2017-04-26 円星科技股份有限公司 Method for utilizing storage complier to generate optimized storage example
CN105426314B (en) * 2014-09-23 2018-09-11 京微雅格(北京)科技有限公司 A kind of process mapping method of FPGA memories
CN104361171B (en) * 2014-11-07 2018-06-19 中国科学院微电子研究所 The processing method of ROM Technology Mappings



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190103

Address after: 901-903, 9th Floor, Satellite Building, 63 Zhichun Road, Haidian District, Beijing

Applicant after: Jing Wei Qi Li (Beijing) Technology Co., Ltd.

Address before: 100080 Beijing Haidian A62, East of Building No. 27, Haidian Avenue, 4th Floor, A District, Haidian District

Applicant before: Beijing deep science and Technology Co., Ltd.

GR01 Patent grant