CN103116493A - Automatic mapping method applied to coarse-grained reconfigurable array - Google Patents

Automatic mapping method applied to coarse-grained reconfigurable array

Info

Publication number
CN103116493A
Authority
CN
China
Prior art keywords
data flow
priority
mapped
flow diagram
running node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100277768A
Other languages
Chinese (zh)
Other versions
CN103116493B (en)
Inventor
齐志
马璐
刘波
葛伟
曹鹏
杨军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201310027776.8A
Publication of CN103116493A
Application granted
Publication of CN103116493B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Devices For Executing Special Programs (AREA)

Abstract

The invention discloses an automatic mapping method applied to a coarse-grained reconfigurable array. The method comprises the following steps: an application algorithm written in a high-level language is divided into a software portion, which is executed under the control of a main control processor, and a hardware portion, which is accelerated by the reconfigurable array; the hardware portion accelerated on the array is compiled by a compiler to obtain a data flow graph representing that portion; an operation node of the data flow graph is selected and mapped onto the array, and the remaining operation nodes are selected repeatedly until all operation nodes in the data flow graph have been mapped, yielding a configuration file that can run on the array; the configuration file thus obtained is integrated with the software portion executed by the main control processor to obtain a new, integrated application algorithm; and the new application algorithm is compiled by the compiler of the main control processor to generate machine code that can run on the hardware. The method achieves efficient allocation and scheduling of the hardware resources of the reconfigurable array and shortens the mapping time.

Description

Automatic mapping method applied to a coarse-grained reconfigurable array
Technical field
The invention belongs to the field of embedded information technology, and specifically relates to an automatic mapping method applied to a coarse-grained reconfigurable array.
Background technology
In the computing paradigms that users have long been accustomed to, the processor and the application-specific integrated circuit (ASIC) have always been the two mainstream approaches. As applications, particularly in embedded environments, place ever higher demands on performance, energy consumption and time to market, traditional computing paradigms have exposed their drawbacks. The processor approach can implement a wide variety of applications flexibly but falls short in performance, whereas hardwired logic achieves high performance but offers very poor flexibility. Reconfigurable computing emerged to strike a good balance between computational performance and flexibility. It combines the advantages of processors and ASICs, provides efficient and flexible computing power, and at the same time offers a new way to address the high design and tape-out costs of nanoscale chips. Many mainstream compute-intensive applications in the embedded field are well suited to implementation with reconfigurable computing.
A typical coarse-grained reconfigurable system consists of one or more processors and an array of reconfigurable functional units. For an application algorithm written in a high-level language, the processor executes serial or non-critical code, while code that can be efficiently mapped to hardware runs on the reconfigurable array. The code mapped onto the reconfigurable array can effectively exploit hardware parallelism and execute in a pipelined fashion. Implementing an application algorithm on a reconfigurable processor involves the following main steps: (1) hardware/software partitioning, which maps the critical loop bodies occupying most of the execution time onto the reconfigurable array; (2) generation of an intermediate representation of the loop program, which describes the computational operations contained in the loop body, the data and control dependences between them, and information such as the execution time of each operation; (3) mapping of the intermediate representation, in which the intermediate representation describing the critical loops is mapped onto the reconfigurable array and a configuration file executable on the reconfigurable hardware is generated. The mapping method used for the intermediate representation of the critical loop bodies largely determines whether the advantages of the reconfigurable hardware can be fully exploited. Mapping can be performed manually or by an automated tool, but manual mapping has the following drawbacks: on the one hand, it requires a great deal of manpower and time and demands that the mapping engineer have a deep understanding of the hardware structure of the reconfigurable system; on the other hand, as the array scale grows and application algorithms become more complex, the difficulty of manual mapping and the probability of errors increase markedly.
Summary of the invention
Objective of the invention: in view of the problems and shortcomings of the prior art described above, the purpose of the present invention is to provide an automatic mapping method applied to a coarse-grained reconfigurable array, which achieves efficient allocation and scheduling of the hardware resources of the reconfigurable array, reduces the mapping time and improves the mapping quality.
Technical solution: to achieve the above objective, the technical solution adopted by the present invention is an automatic mapping method applied to a coarse-grained reconfigurable array, comprising the following steps:
1a) the application algorithm written in a high-level language is divided into a software portion executed by the main control processor and a hardware portion accelerated by the reconfigurable array;
1b) the hardware portion accelerated on the reconfigurable array is compiled with a compiler to obtain a data flow graph describing this portion of the code;
1c) an operation node to be mapped is selected from the data flow graph: all unmapped operation nodes in the data flow graph are sorted by priority from high to low, and the node with the highest priority is selected as the node to be mapped;
1d) the selected operation node is mapped onto the reconfigurable array, and the sorting and selection described in step 1c) are repeated for the remaining operation nodes until all operation nodes in the data flow graph have been mapped, yielding a configuration file that can run on the reconfigurable array;
1e) the resulting configuration file is integrated with the software portion executed by the main control processor described in step 1a), producing a new, integrated application algorithm;
1f) the new application algorithm is compiled with the compiler of the main control processor, generating machine code that can run on the hardware.
Further, the steps for mapping the selected operation node onto the reconfigurable array are as follows:
2a) establish a priority list of functional units: compute the computation cost of each functional unit in the reconfigurable array and build a priority list according to the cost, where a larger cost gives a lower priority and a smaller cost gives a higher priority;
2b) determine the functional unit to allocate: examine the functional units in the priority list one by one and select the first unoccupied functional unit with the highest priority;
2c) determine the routing of the input and output data: after a functional unit has been allocated to the operation node in step 2b), select routing paths for the input and output data of this node;
2d) backtracking analysis: if no mappable functional unit is found in step 2b) and the number of backtracks has not yet exceeded the preset threshold, release some already-mapped nodes and return to step 1c);
2e) cut the data flow graph: when the number of backtracks exceeds the preset threshold and no mappable functional unit has been found, cut the data flow graph, form a new data flow graph from the remaining unmapped operation nodes, and return to step 1c); repeat the above process until all operation nodes have been mapped.
Further, the compiler is the open-source compiler IMPACT.
As the criterion for ordering and selecting the priority of each operation node, the height of each node in the data flow graph is considered first: the greater the height, the higher the priority. For operation nodes of identical height, the number of child nodes of each node is then considered: the more child nodes, the higher the priority.
As the criterion for ordering and searching the priority of each functional unit, the priority of a functional unit is proportional to the number of its routing resources: the more routing resources, the higher the priority.
Beneficial effects: the automatic mapping method applied to a coarse-grained reconfigurable array proposed by the present invention adopts a mapping algorithm that jointly considers the operation nodes and the computation cost of the functional units in the reconfigurable array (hereafter "the array") to allocate and schedule the computation and storage resources within the array. It analyzes the dependences between the operation nodes in the data flow graph of the application program, makes full use of the hardware resources of the reconfigurable array, improves the occupancy of the functional units in the array, avoids the time wasted by manual mapping, and optimizes the utilization of the computation and storage resources within the reconfigurable array.
Description of drawings
Fig. 1 is a structural block diagram of the coarse-grained reconfigurable system provided by an embodiment of the present invention;
Fig. 2 is a workflow diagram of the present invention;
Fig. 3 is a data flow graph representing a loop body, provided by an embodiment of the present invention;
Fig. 4 is a topology diagram of a reconfigurable array, provided by an embodiment of the present invention.
Reference numerals in the figures: main control processor 1, reconfigurable array 2, IMPACT compiler 3, preprocessor 4, resource allocation module 5, subgraph partitioning module 6.
Embodiment
The present invention is further illustrated below in conjunction with the drawings and specific embodiments. It should be understood that these embodiments are intended only to illustrate the present invention and not to limit its scope; after reading the present disclosure, modifications of various equivalent forms of the present invention made by those skilled in the art all fall within the scope defined by the appended claims of the present application.
Fig. 1 is a structural block diagram of the coarse-grained reconfigurable system. This coarse-grained reconfigurable system comprises a main control processor 1 and a reconfigurable array 2.
The workflow of the present invention, referring to Fig. 2, is as follows:
In the first step, the application algorithm written in a high-level language is divided into a software portion executed by the main control processor 1 and a hardware portion accelerated by the reconfigurable array 2.
In the second step, the hardware portion accelerated on the array is compiled with the open-source IMPACT compiler 3, which performs a series of analyses, optimizations and transformations and produces a data flow graph describing this portion of the code.
In the third step, the operation node to be mapped is selected from the data flow graph: all unmapped operation nodes in the data flow graph are sorted by priority, and the node with the highest priority is selected; the selected node is then mapped, and the sorting and selection are repeated for the remaining operation nodes until all operations in the data flow graph have been mapped, yielding a configuration file that can run on the reconfigurable array.
In the fourth step, the resulting configuration file is integrated with the software portion executed by the main processor, producing a new, integrated application program.
In the fifth step, the new application program is compiled with the compiler of the main control processor, generating machine code that can run on the hardware.
The steps of the present invention for mapping the selected operation node onto the array are as follows: a) establish a priority list of functional units: compute the computation cost of each functional unit in the reconfigurable array and build a priority list according to the cost; b) determine the functional unit to allocate: examine the functional units in the priority list one by one and select the first unoccupied functional unit with the highest priority; c) determine the routing of the input and output data: after a functional unit has been allocated to the operation node in step b), select routing paths for the input and output data of this node; d) backtracking analysis: if no mappable functional unit is found in step b) and the number of backtracks has not yet exceeded the preset threshold, release some already-mapped nodes and return to step c); e) cut the data flow graph: when the number of backtracks exceeds the preset threshold and no mappable functional unit has been found, cut the data flow graph, form a new data flow graph from the remaining unmapped operation nodes, and return to step c). This process is repeated until all operation nodes have been mapped.
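For illustration only, the following Python sketch shows the control flow of steps a) to e). It is a minimal model, not the implementation of the present invention: functional units are represented only by a computation cost, the routing of step c) is omitted, and the names (fu_costs, map_dfg, backtrack_limit and so on) are hypothetical.

```python
def fu_priority_list(fu_costs):
    """Step a): a smaller computation cost gives a higher priority (earlier in the list)."""
    return sorted(fu_costs, key=fu_costs.get)

def map_dfg(nodes_by_priority, fu_costs, backtrack_limit=3):
    """Map operation nodes onto functional units with backtracking and graph cutting."""
    placement, backtracks = {}, 0
    pending = list(nodes_by_priority)            # already sorted by node priority
    deferred = []                                # nodes pushed into a new data flow graph
    while pending:
        node = pending.pop(0)
        free = [f for f in fu_priority_list(fu_costs) if f not in placement.values()]
        if free:                                 # step b): first unoccupied unit in the list
            placement[node] = free[0]            # step c) (routing) omitted in this sketch
        elif backtracks < backtrack_limit:       # step d): release a mapped node and retry
            backtracks += 1
            victim, _ = placement.popitem()
            pending = [node, victim] + pending
        else:                                    # step e): cut the graph, defer the rest
            deferred = [node] + pending
            pending = []
    return placement, deferred

if __name__ == "__main__":
    costs = {"FU0": 2, "FU1": 1, "FU2": 3}       # hypothetical computation costs
    mapped, rest = map_dfg(["OP0", "OP1", "OP2", "OP4"], costs)
    print(mapped)                                # three nodes fit on the three units
    print(rest)                                  # ['OP4'] is deferred to a new configuration
```

In the full method, step c) would also have to allocate routing paths before a placement is accepted; a routability check of that kind is sketched under the resource allocation module below.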
As one embodiment of the present invention, the criterion for ordering and selecting the priority of each operation node is that the priority is proportional to the node's height in the data flow graph and to its number of child nodes: the greater the height and the more child nodes, the higher the priority. Referring to Fig. 3, the method of computing the priority of operation nodes in the present invention is described using the data flow graph of Fig. 3; the detailed process is as follows:
First, the nodes are ordered according to their tree height in the data flow graph: the greater the height, the higher the priority. Since the height of OP0 (operation node 0) is 3, the heights of OP1 and OP2 are 2, the heights of OP3 and OP4 are 1, and the heights of OP5 and OP6 are 0, the resulting priority list of operation nodes is {OP0, OP1, OP2, OP3, OP4, OP5, OP6};
Second, among nodes of equal height, the node with more child nodes has the higher priority. OP0 has no node of the same height; between OP1 and OP2, OP1 has 2 child nodes while OP2 has only 1, so the priority of OP1 is higher than that of OP2; between OP3 and OP4, OP3 has no child nodes while OP4 has 2, so the priority of OP4 is higher than that of OP3; OP5 and OP6 both have no child nodes.
Finally, the priority list of operation nodes is {OP0, OP1, OP2, OP4, OP3, OP5, OP6}.
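The same computation can be sketched in a few lines of Python. Since Fig. 3 is not reproduced here, the edge set below is an assumption chosen to be consistent with the heights and child counts stated above, and the height of a node is taken to be its level counted from the bottom of the graph.

```python
def node_priorities(children):
    """Sort operation nodes by height in the DFG, then by number of children, both descending."""
    parents = {n: [] for n in children}
    for n, cs in children.items():
        for c in cs:
            parents[c].append(n)
    depth = {}
    def d(n):                                    # depth = longest path from a root node
        if n not in depth:
            depth[n] = 0 if not parents[n] else 1 + max(d(p) for p in parents[n])
        return depth[n]
    max_depth = max(d(n) for n in children)
    height = {n: max_depth - depth[n] for n in children}   # level counted from the bottom
    return sorted(children, key=lambda n: (height[n], len(children[n])), reverse=True)

# Hypothetical edges consistent with the description: OP0 has height 3, OP1/OP2 height 2,
# OP3/OP4 height 1, OP5/OP6 height 0; OP1 has 2 children, OP2 has 1 and OP4 has 2.
children = {"OP0": ["OP1", "OP2"], "OP1": ["OP3", "OP4"], "OP2": ["OP4"],
            "OP3": [], "OP4": ["OP5", "OP6"], "OP5": [], "OP6": []}
print(node_priorities(children))   # ['OP0', 'OP1', 'OP2', 'OP4', 'OP3', 'OP5', 'OP6']
```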
As another embodiment of the present invention, the criterion for ordering and searching the priority of each functional unit is proportional to the number of its routing resources: the more routing resources, the higher the priority. Referring to Fig. 4, the method of computing the priority of functional units in the present invention is described using the reconfigurable array topology of Fig. 4, in which the functional units are numbered 0 to 15. The analysis is as follows: in the illustrated structure, the functional units in the middle have more interconnect lines than the functional units at the periphery. From the routing point of view, this means that if an operation node is mapped onto a functional unit with more interconnect lines, its child nodes are more likely to find mapping positions, so such functional units are given a higher priority. From the above, the priority order of the functional units of the array in Fig. 4 is: 5, 6, 9, 10, 1, 2, 4, 7, 8, 11, 13, 14, 0, 3, 12, 15.
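This ordering can be reproduced with a short sketch. Because Fig. 4 is not shown here, a plain 4 x 4 mesh is assumed, the routing resources of a unit are counted as its direct neighbours, and ties are assumed to be broken by the lower unit number; these modelling choices are illustrative, not taken from the patent.

```python
def fu_priority_order(rows=4, cols=4):
    """Order functional units of a rows x cols mesh by number of interconnect lines, descending."""
    def neighbours(i):
        r, c = divmod(i, cols)
        cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        return [nr * cols + nc for nr, nc in cand if 0 <= nr < rows and 0 <= nc < cols]
    # more neighbours -> higher priority; the lower unit number breaks ties
    return sorted(range(rows * cols), key=lambda i: (-len(neighbours(i)), i))

print(fu_priority_order())
# [5, 6, 9, 10, 1, 2, 4, 7, 8, 11, 13, 14, 0, 3, 12, 15] -- the order given in the text
```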
The four key modules involved in the automatic mapping method applied to a coarse-grained reconfigurable array of the present invention are introduced below.
1. The open-source front-end IMPACT compiler 3:
The open-source IMPACT compiler 3 performs lexical, syntactic and semantic analysis of the input source program, carries out the related optimizations and transformations, and generates an intermediate representation file in three-address code. This intermediate representation file contains all the information of the source program and is used in the subsequent steps.
2. Preprocessor 4:
For generating efficient executable code and performing various optimizations, information about the program's control flow graph and data flow graph is essential. This includes analysis of the control flow graph and, in the data flow graph, of flow dependences, anti-dependences, input dependences and loop-carried (inter-iteration) flow dependences, as well as the attributes of each operation node in the graphs.
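As an illustration of the kind of dependence information the preprocessor collects, the sketch below classifies flow, anti- and input dependences between operations from their read and write sets, using the usual definitions; the operations shown and the helper name dependences are hypothetical, and loop-carried dependences are not modelled.

```python
def dependences(ops):
    """ops: list of (name, reads, writes); returns (kind, earlier, later, variable) tuples."""
    deps = []
    for i, (a, reads_a, writes_a) in enumerate(ops):
        for b, reads_b, writes_b in ops[i + 1:]:
            for v in writes_a & reads_b:
                deps.append(("flow", a, b, v))     # write in a, later read in b
            for v in reads_a & writes_b:
                deps.append(("anti", a, b, v))     # read in a, later write in b
            for v in reads_a & reads_b:
                deps.append(("input", a, b, v))    # both operations read the variable
    return deps

ops = [("OP0", {"x"}, {"t"}), ("OP1", {"t", "y"}, {"u"}), ("OP2", {"t"}, {"y"})]
print(dependences(ops))
# flow OP0->OP1 on t, flow OP0->OP2 on t, anti OP1->OP2 on y, input OP1->OP2 on t
```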
3. Resource allocation module 5:
Mapping the data flow graph that represents the core code of the application algorithm onto the reconfigurable array is equivalent to assigning each operation node of the data flow graph to a functional unit of the array. Connecting the computational resources according to the result of the node allocation and the interconnections between the functional units of the array is equivalent to realizing the edges of the data flow graph on the reconfigurable hardware. The mapping method adopted by the present invention performs placement and routing simultaneously, that is, routing is carried out during placement, and only when routing succeeds can the placement be regarded as successful and be terminated. Simultaneous placement and routing may find better solutions than placement and routing performed separately, so it is a good choice. A backtracking algorithm is used in the mapping process: when some operation node cannot be mapped, some operations that have already been mapped successfully are selected to be remapped, so that they are placed at different positions. The number of backtracks must be limited by a threshold; otherwise the algorithm would search all possibilities, incurring very high complexity and excessive compilation time. Backtracking increases the possibility of finding a solution, trading some compilation time for a higher degree of parallelism.
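The "route while placing" acceptance test can be illustrated as follows: before a candidate functional unit is committed to an operation node, it is checked that a path of unoccupied units still exists from the unit of every already-placed parent. The mesh adjacency model and the names reachable and candidate_is_routable are assumptions made for this sketch, not data structures of the present invention.

```python
from collections import deque

def reachable(src, dst, adjacency, occupied):
    """Breadth-first search over unoccupied units (the two endpoints themselves may be occupied)."""
    if src == dst:
        return True
    seen, frontier = {src}, deque([src])
    while frontier:
        u = frontier.popleft()
        for v in adjacency[u]:
            if v == dst:
                return True
            if v not in seen and v not in occupied:
                seen.add(v)
                frontier.append(v)
    return False

def candidate_is_routable(candidate_fu, parent_fus, adjacency, occupied):
    """Accept the placement only if every placed parent can still be routed to the candidate."""
    return all(reachable(p, candidate_fu, adjacency, occupied) for p in parent_fus)

# Example on a 2 x 2 mesh (units 0..3), with unit 1 already occupied:
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(candidate_is_routable(3, parent_fus=[0], adjacency=adjacency, occupied={1}))  # True, via unit 2
```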
4. Data flow graph partitioning module 6:
The data flow graph partitioning algorithm proposed by the present invention can be divided into three steps. First, the operation node whose mapping has failed is chosen. Second, the functional unit onto which the parent node of this operation node is mapped is checked for a routing path connected to another unoccupied functional unit; if such a path exists, a feasible cut has been found, otherwise the same process is repeated for the node one level up, until a feasible cut is found. Finally, corresponding output and input ports are added to the already-mapped part and the cut-off part respectively, and a new data flow graph is generated. This process is repeated until all operation nodes have been successfully mapped.
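A rough sketch of this three-step procedure is given below, reusing the toy placement model of the previous sketches. The routing-path check is simplified to a direct link to a free unit, the port bookkeeping of the last step is reduced to recording which mapped nodes feed the cut-off part, and the choice of following the first placed parent "one level up" is an assumption rather than the patent's exact rule.

```python
def find_feasible_cut(failed_node, parents, placement, adjacency, occupied):
    """Steps 1-2: walk upwards from the failed node until a placed parent's unit can reach a free unit."""
    node = failed_node
    while True:
        placed = [p for p in parents.get(node, []) if p in placement]
        has_free_route = any(fu not in occupied
                             for p in placed for fu in adjacency[placement[p]])
        if has_free_route or not placed:
            return node                      # a feasible cut (or the top of the graph) was reached
        node = placed[0]                     # otherwise repeat the check one level up

def split_dfg(parents, placement):
    """Step 3: the mapped part gains output ports; the unmapped remainder forms a new data flow graph."""
    new_dfg = [n for n in parents if n not in placement]
    ports = sorted({p for n in new_dfg for p in parents.get(n, []) if p in placement})
    return new_dfg, ports                    # each port is an output of the mapped part and an
                                             # input of the new data flow graph
```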

Claims (6)

1. An automatic mapping method applied to a coarse-grained reconfigurable array, characterized by comprising the following steps:
1a) the application algorithm written in a high-level language is divided into a software portion executed by the main control processor and a hardware portion accelerated by the reconfigurable array;
1b) the hardware portion accelerated on the reconfigurable array is compiled with a compiler to obtain a data flow graph describing this portion of the code;
1c) an operation node to be mapped is selected from the data flow graph: all unmapped operation nodes in the data flow graph are sorted by priority from high to low, and the node with the highest priority is selected as the node to be mapped;
1d) the selected operation node is mapped onto the reconfigurable array, and the sorting and selection described in step 1c) are repeated for the remaining operation nodes until all operation nodes in the data flow graph have been mapped, yielding a configuration file that can run on the reconfigurable array;
1e) the resulting configuration file is integrated with the software portion executed by the main control processor described in step 1a), producing a new, integrated application algorithm;
1f) the new application algorithm is compiled with the compiler of the main control processor, generating machine code that can run on the hardware.
2. The automatic mapping method applied to a coarse-grained reconfigurable array according to claim 1, characterized in that the steps for mapping the selected operation node onto the reconfigurable array are as follows:
2a) establish a priority list of functional units: compute the computation cost of each functional unit in the reconfigurable array and build a priority list according to the cost, where a larger cost gives a lower priority and a smaller cost gives a higher priority;
2b) determine the functional unit to allocate: examine the functional units in the priority list one by one and select the first unoccupied functional unit with the highest priority;
2c) determine the routing of the input and output data: after a functional unit has been allocated to the operation node in step 2b), select routing paths for the input and output data of this node;
2d) backtracking analysis: if no mappable functional unit is found in step 2b) and the number of backtracks has not yet exceeded the preset threshold, release some already-mapped nodes and return to step 1c);
2e) cut the data flow graph: when the number of backtracks exceeds the preset threshold and no mappable functional unit has been found, cut the data flow graph, form a new data flow graph from the remaining unmapped operation nodes, and return to step 1c); repeat the above process until all operation nodes have been mapped.
3. The automatic mapping method applied to a coarse-grained reconfigurable array according to claim 1, characterized in that the priority of said operation node is proportional to its height in the data flow graph.
4. The automatic mapping method applied to a coarse-grained reconfigurable array according to claim 3, characterized in that, for said operation nodes of identical height in the data flow graph, the priority is proportional to the number of their child nodes.
5. The mapping method according to claim 2, characterized in that the priority of said functional unit is proportional to the number of its routing resources.
6. The automatic mapping method applied to a coarse-grained reconfigurable array according to claim 1, characterized in that said compiler is the open-source compiler IMPACT.
CN201310027776.8A 2013-01-21 2013-01-21 A kind of automatic mapping method being applied to coarse-grained reconfigurable array Expired - Fee Related CN103116493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310027776.8A CN103116493B (en) 2013-01-21 2013-01-21 A kind of automatic mapping method being applied to coarse-grained reconfigurable array

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310027776.8A CN103116493B (en) 2013-01-21 2013-01-21 A kind of automatic mapping method being applied to coarse-grained reconfigurable array

Publications (2)

Publication Number Publication Date
CN103116493A true CN103116493A (en) 2013-05-22
CN103116493B CN103116493B (en) 2016-01-06

Family

ID=48414879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310027776.8A Expired - Fee Related CN103116493B (en) 2013-01-21 2013-01-21 A kind of automatic mapping method being applied to coarse-grained reconfigurable array

Country Status (1)

Country Link
CN (1) CN103116493B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107402745A (en) * 2017-07-04 2017-11-28 清华大学 The mapping method and device of DFD
CN107679012A (en) * 2017-09-27 2018-02-09 清华大学无锡应用技术研究院 Method and apparatus for the configuration of reconfigurable processing system
CN109144702A (en) * 2018-09-06 2019-01-04 陈彦楠 One kind being used for row-column parallel calculation coarse-grained reconfigurable array multiple-objection optimization automatic mapping dispatching method
CN109471636A (en) * 2018-09-14 2019-03-15 上海交通大学 The operator mapping method and system of coarseness reconfigurable architecture
CN111045959A (en) * 2019-11-18 2020-04-21 中国航空工业集团公司西安航空计算技术研究所 Complex algorithm variable mapping method based on storage optimization
CN111090613A (en) * 2019-11-25 2020-05-01 中国人民解放军国防科技大学 Low-complexity hardware and software partitioning and scheduling method based on graph partitioning
CN112306500A (en) * 2020-11-30 2021-02-02 上海交通大学 Compiling method for reducing multi-class access conflict aiming at coarse-grained reconfigurable structure
CN113094030A (en) * 2021-02-09 2021-07-09 北京清微智能科技有限公司 Easily compiling method and system for reconfigurable chip
WO2022057185A1 (en) * 2020-09-17 2022-03-24 北京清微智能科技有限公司 Reconfigurable array mapping method and apparatus
WO2023241027A1 (en) * 2022-06-15 2023-12-21 东南大学 Information security-oriented reconfigurable system chip compiler and automatic compilation method
CN117573607A (en) * 2023-11-28 2024-02-20 北京智芯微电子科技有限公司 Reconfigurable coprocessor, chip, multi-core signal processing system and computing method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020100029A1 (en) * 2000-07-20 2002-07-25 Matt Bowen System, method and article of manufacture for compiling and invoking C functions in hardware
US20030117971A1 (en) * 2001-12-21 2003-06-26 Celoxica Ltd. System, method, and article of manufacture for profiling an executable hardware model using calls to profiling functions
US20050278680A1 (en) * 2004-06-15 2005-12-15 University Of North Carolina At Charlotte Methodology for scheduling, partitioning and mapping computational tasks onto scalable, high performance, hybrid FPGA networks
CN101630274A (en) * 2009-07-31 2010-01-20 清华大学 Method for dividing cycle task by means of software and hardware and device thereof
CN101630275A (en) * 2009-07-31 2010-01-20 清华大学 Realizing method of configuration information for generating cycle task and device thereof
CN102508816A (en) * 2011-11-15 2012-06-20 东南大学 Configuration method applied to coarse-grained reconfigurable array

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020100029A1 (en) * 2000-07-20 2002-07-25 Matt Bowen System, method and article of manufacture for compiling and invoking C functions in hardware
US20030117971A1 (en) * 2001-12-21 2003-06-26 Celoxica Ltd. System, method, and article of manufacture for profiling an executable hardware model using calls to profiling functions
US20050278680A1 (en) * 2004-06-15 2005-12-15 University Of North Carolina At Charlotte Methodology for scheduling, partitioning and mapping computational tasks onto scalable, high performance, hybrid FPGA networks
CN101630274A (en) * 2009-07-31 2010-01-20 清华大学 Method for dividing cycle task by means of software and hardware and device thereof
CN101630275A (en) * 2009-07-31 2010-01-20 清华大学 Realizing method of configuration information for generating cycle task and device thereof
CN102508816A (en) * 2011-11-15 2012-06-20 东南大学 Configuration method applied to coarse-grained reconfigurable array

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PENG CAO et al.: "Hybrid-Priority Configuration Cache Supervision Method for Coarse Grained Reconfigurable Architecture", Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2012 International Conference on *
WANG Dawei et al.: "Pipelined mapping of kernel loops onto coarse-grained reconfigurable architectures" (核心循环到粗粒度可重构体系结构的流水化映射), Chinese Journal of Computers (计算机学报) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107402745B (en) * 2017-07-04 2020-05-22 清华大学 Mapping method and device of data flow graph
CN107402745A (en) * 2017-07-04 2017-11-28 清华大学 The mapping method and device of DFD
CN107679012A (en) * 2017-09-27 2018-02-09 清华大学无锡应用技术研究院 Method and apparatus for the configuration of reconfigurable processing system
CN109144702A (en) * 2018-09-06 2019-01-04 陈彦楠 One kind being used for row-column parallel calculation coarse-grained reconfigurable array multiple-objection optimization automatic mapping dispatching method
CN109144702B (en) * 2018-09-06 2021-12-07 兰州大学 Multi-objective optimization automatic mapping scheduling method for row-column parallel coarse-grained reconfigurable array
CN109471636A (en) * 2018-09-14 2019-03-15 上海交通大学 The operator mapping method and system of coarseness reconfigurable architecture
CN109471636B (en) * 2018-09-14 2020-07-14 上海交通大学 Operator mapping method and system of coarse-grained reconfigurable architecture
CN111045959A (en) * 2019-11-18 2020-04-21 中国航空工业集团公司西安航空计算技术研究所 Complex algorithm variable mapping method based on storage optimization
CN111045959B (en) * 2019-11-18 2024-03-19 中国航空工业集团公司西安航空计算技术研究所 Complex algorithm variable mapping method based on storage optimization
CN111090613B (en) * 2019-11-25 2022-03-15 中国人民解放军国防科技大学 Low-complexity hardware and software partitioning and scheduling method based on graph partitioning
CN111090613A (en) * 2019-11-25 2020-05-01 中国人民解放军国防科技大学 Low-complexity hardware and software partitioning and scheduling method based on graph partitioning
WO2022057185A1 (en) * 2020-09-17 2022-03-24 北京清微智能科技有限公司 Reconfigurable array mapping method and apparatus
CN112306500B (en) * 2020-11-30 2022-06-07 上海交通大学 Compiling method for reducing multi-class access conflict aiming at coarse-grained reconfigurable structure
CN112306500A (en) * 2020-11-30 2021-02-02 上海交通大学 Compiling method for reducing multi-class access conflict aiming at coarse-grained reconfigurable structure
CN113094030A (en) * 2021-02-09 2021-07-09 北京清微智能科技有限公司 Easily compiling method and system for reconfigurable chip
WO2023241027A1 (en) * 2022-06-15 2023-12-21 东南大学 Information security-oriented reconfigurable system chip compiler and automatic compilation method
CN117573607A (en) * 2023-11-28 2024-02-20 北京智芯微电子科技有限公司 Reconfigurable coprocessor, chip, multi-core signal processing system and computing method

Also Published As

Publication number Publication date
CN103116493B (en) 2016-01-06

Similar Documents

Publication Publication Date Title
CN103116493B (en) A kind of automatic mapping method being applied to coarse-grained reconfigurable array
CN100543753C (en) Hardware description language (HDL) program implementation
CN101436128B (en) Software test case automatic generating method and system
CN102831011B (en) A kind of method for scheduling task based on many core systems and device
CN102508816B (en) Configuration method applied to coarse-grained reconfigurable array
Edwards Moore's law: What comes next?
CN101464799A (en) MPI parallel programming system based on visual modeling and automatic skeleton code generation method
CN104850866B (en) Via Self-reconfiguration K-means clustering technique implementation methods based on SoC-FPGA
CN112527262B (en) Automatic vector optimization method for non-uniform width of deep learning framework compiler
CN102176200A (en) Software test case automatic generating method
CN114995823A (en) Deep learning compiler optimization method for special accelerator for CNN
CN105302525B (en) Method for parallel processing for the reconfigurable processor of multi-level heterogeneous structure
CN110704364A (en) Automatic dynamic reconstruction method and system based on field programmable gate array
US7983890B2 (en) Method and apparatus performing automatic mapping for a multi-processor system
CN102163248B (en) Advanced synthesizing method for integrated circuit
CN105302624B (en) Start spacing automatic analysis method between cycle flowing water iteration in a kind of reconfigurable compiling device
CN101464965A (en) Multi-nuclear parallel ant group design method based on TBB
CN102622334B (en) Parallel XSLT (Extensible Style-sheet Language Transformation) conversion method and device for use in multi-thread environment
CN113031954A (en) Code compiling method and device, electronic equipment, storage medium and heterogeneous system
CN105260222A (en) Optimization method for initiation interval between circulating pipeline iterations in reconfigurable compiler
CN109471636A (en) The operator mapping method and system of coarseness reconfigurable architecture
CN112527304B (en) Self-adaptive node fusion compiling optimization method based on heterogeneous platform
CN100559344C (en) A kind of disposal route of supporting with regular record variables access special register group
CN103605573A (en) Reconfigurable architecture mapping decision-making method based on expense calculation
US20030037319A1 (en) Method and apparatus for partitioning and placement for a cycle-based simulation system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160106

Termination date: 20200121

CF01 Termination of patent right due to non-payment of annual fee