CN110032374A - Parameter extraction method, device, equipment and medium - Google Patents
Parameter extraction method, device, equipment and medium
- Publication number
- CN110032374A (application number CN201910216412.1A)
- Authority
- CN
- China
- Prior art keywords
- parameter
- layer
- attribute
- target component
- stored
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 43
- 238000003062 neural network model Methods 0.000 claims abstract description 26
- 238000004590 computer program Methods 0.000 claims description 18
- 238000000605 extraction Methods 0.000 claims description 18
- 238000012216 screening Methods 0.000 claims description 18
- 238000003860 storage Methods 0.000 claims description 10
- 230000005540 biological transmission Effects 0.000 claims description 5
- 238000013528 artificial neural network Methods 0.000 claims description 4
- 238000012545 processing Methods 0.000 abstract description 11
- 238000010586 diagram Methods 0.000 description 11
- 238000004891 communication Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000012856 packing Methods 0.000 description 3
- 238000005516 engineering process Methods 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/42—Syntactic analysis
- G06F8/427—Parsing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Stored Programmes (AREA)
Abstract
The invention discloses a parameter extraction method, device, equipment, and medium, intended to improve the working efficiency of a central processing unit. The parameter extraction method comprises: obtaining a parameter attribute of at least one layer of parameters in a deep neural network model; filtering out a target parameter from a pre-stored parameter set based on the parameter attribute; and sending the target parameter to a field-programmable gate array (FPGA) running the deep neural network model.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a parameter extraction method, device, equipment, and medium.
Background technique
In the prior art, when a field-programmable gate array (Field-Programmable Gate Array, FPGA) chip needs certain parameters, the memory storing the parameters transmits all of its stored data to the central processing unit (Central Processing Unit, CPU). After receiving all of the data transmitted by the memory, the CPU parses the packed data, extracts the parameters needed by the FPGA chip, and sends those parameters to the FPGA.
When parameters are extracted in this way, the CPU must parse the packed data, which occupies a large amount of CPU resources and reduces the working efficiency of the CPU.
Summary of the invention
The embodiments of the present invention provide a parameter extraction method, device, equipment, and medium, so as to improve the working efficiency of the central processing unit.
In a first aspect, an embodiment of the present invention provides a parameter extraction method, comprising:
obtaining a parameter attribute of at least one layer of parameters in a deep neural network model;
filtering out a target parameter from a pre-stored parameter set based on the parameter attribute; and
sending the target parameter to a field-programmable gate array (FPGA) running the deep neural network model.
The parameter extraction method provided by the embodiments of the present invention obtains the parameter attribute of at least one layer of parameters in a deep neural network model, filters out a target parameter from a pre-stored parameter set based on that attribute, and sends the target parameter to the FPGA running the deep neural network model, thereby extracting the parameters needed by the deep neural network. Compared with the existing approach in which the CPU parses all of the data to extract the parameters needed by the FPGA, the parameters needed by the FPGA can be extracted according to the parameter attribute of each layer's parameters in the deep neural network model, without the CPU having to parse the packed data, which improves the working efficiency of the CPU.
In a possible embodiment of the above method provided by the embodiments of the present invention, the parameter attribute includes a layer identifier and/or a layer parameter attribute.
In a possible embodiment of the above method provided by the embodiments of the present invention, the layer parameter attribute includes one or more of the following: input layer name, current channel type, kernel size, sliding stride, and padding size.
In a possible embodiment of the above method provided by the embodiments of the present invention, the parameter attribute includes a layer identifier, and filtering out the target parameter from the pre-stored parameter set based on the parameter attribute comprises:
screening, from the pre-stored parameter set, parameters whose identifiers match the layer identifier as the target parameter.
In another possible embodiment of the above method provided by the embodiments of the present invention, the parameter attribute includes a layer identifier, and filtering out the target parameter from the pre-stored parameter set based on the parameter attribute comprises:
determining the layer identifier of the layer currently being computed by the FPGA; and
screening, from the pre-stored parameter set, parameters whose identifiers match the layer identifier of the FPGA's current computation layer as the target parameter.
In a possible embodiment of the above method provided by the embodiments of the present invention, the parameter attribute includes a layer identifier and a layer parameter attribute, and filtering out the target parameter from the pre-stored parameter set based on the parameter attribute comprises:
determining the layer identifier of the layer currently being computed by the FPGA; and
screening, from the pre-stored parameter set, parameters whose identifiers match the layer identifier of the FPGA's current computation layer and whose attributes match the layer parameter attribute as the target parameter.
With this scheme, when the FPGA needs certain parameters in its current computation layer, all of the data used by the FPGA's current computation layer is first screened out, and the parameters matching the layer parameter attribute are then extracted from those layer parameters, which ensures that the extracted parameters are more accurate.
In a possible embodiment of the above method provided by the embodiments of the present invention, the parameter attribute is set when the deep neural network model is established in advance.
In a second aspect, an embodiment of the present invention further provides a parameter extraction device, comprising:
an acquiring unit, configured to obtain a parameter attribute of at least one layer of parameters in a deep neural network;
a screening unit, configured to filter out a target parameter from a pre-stored parameter set based on the parameter attribute; and
a transmission unit, configured to send the target parameter to a field-programmable gate array (FPGA) running the deep neural network model.
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier and/or a layer parameter attribute.
In a possible embodiment of the above device provided by the embodiments of the present invention, the layer parameter attribute includes one or more of the following: input layer name, current channel type, kernel size, sliding stride, and padding size.
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier, and the screening unit is specifically configured to screen, from the pre-stored parameter set, parameters whose identifiers match the layer identifier as the target parameter.
In another possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier, and the screening unit is specifically configured to determine the layer identifier of the layer currently being computed by the FPGA, and to screen, from the pre-stored parameter set, parameters whose identifiers match that layer identifier as the target parameter.
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier and a layer parameter attribute, and the screening unit is specifically configured to determine the layer identifier of the layer currently being computed by the FPGA, and to screen, from the pre-stored parameter set, parameters whose identifiers match that layer identifier and whose attributes match the layer parameter attribute as the target parameter.
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute is set when the deep neural network model is established in advance.
In a third aspect, an embodiment of the present invention further provides parameter extraction equipment, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, where the computer program instructions, when executed by the processor, implement the parameter extraction method provided in the first aspect of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the parameter extraction method provided in the first aspect of the embodiments of the present invention.
Brief description of the drawings
The accompanying drawings are provided to facilitate further understanding of the present invention and constitute a part of the specification. Together with the embodiments of the present invention, they serve to explain the present invention and are not to be construed as limiting the present invention. In the drawings:
Fig. 1 is a schematic flowchart of a parameter extraction method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a detailed process of a parameter extraction method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a parameter extraction device provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of parameter extraction equipment provided by an embodiment of the present invention.
Specific embodiment
The embodiments of the present application are described below with reference to the accompanying drawings. It should be understood that the embodiments described herein are only intended to describe and explain the present application, and are not intended to limit it.
Specific embodiments of the parameter extraction method, device, equipment, and medium provided by the embodiments of the present invention are described below with reference to the accompanying drawings.
It should be noted that the FPGA provided in the embodiments of the present invention may be connected to the CPU directly, or indirectly through other devices; the embodiments of the present invention do not limit this.
An embodiment of the present invention provides a parameter extraction method which, as shown in Fig. 1, may include the following steps:
Step 101: obtain a parameter attribute of at least one layer of parameters in a deep neural network model.
It should be noted that the deep neural network model includes convolutional layers, deconvolution layers, and pooling layers.
The parameter attribute is set for each layer's parameters when the deep neural network model is established in advance.
It should also be noted that the parameter attribute includes a layer identifier and/or a layer parameter attribute.
In a possible embodiment, the layer identifier is a layer number. In the embodiments of the present invention, the layer number may be an Arabic numeral, an English letter, or any other letter or number that can serve as an identifier; the embodiments of the present invention do not limit this.
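As an illustration only (not part of the patent text), the per-layer parameter attributes described above can be pictured as a simple record. All field names below are hypothetical choices for the attributes the specification lists (layer identifier, input layer name, channel type, kernel size, sliding stride, padding size):

```python
# Hypothetical sketch of a per-layer parameter attribute record.
# Field names are illustrative, not taken from the patent.
from dataclasses import dataclass

@dataclass(frozen=True)
class LayerAttribute:
    layer_id: str       # layer identifier, e.g. a number or letter
    input_layer: str    # input layer name
    channel_type: str   # current channel type
    kernel_size: int    # kernel (core) size
    stride: int         # sliding stride
    padding: int        # padding (filling) size

attr = LayerAttribute(layer_id="3", input_layer="conv2",
                      channel_type="feature", kernel_size=3,
                      stride=1, padding=1)
print(attr.layer_id, attr.kernel_size)
```

Such a record would be populated once, when the model is built, matching the statement above that the attributes are set when the model is established in advance.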
Step 102: filter out a target parameter from a pre-stored parameter set based on the parameter attribute.
In a specific implementation, if the parameter attribute includes a layer identifier, parameters whose identifiers match that layer identifier are screened from the pre-stored parameter set as the target parameter.
In one possible embodiment, when filtering out the target parameter from the pre-stored parameter set based on the parameter attribute, the layer identifier of the layer currently being computed by the FPGA is determined, and parameters whose identifiers match that layer identifier are screened from the pre-stored parameter set as the target parameter.
In another possible embodiment, the layer identifier of the next layer after the FPGA's current computation layer is determined, and parameters whose identifiers match that next layer's identifier are screened from the pre-stored parameter set as the target parameter.
In yet another possible embodiment, when the parameter attribute includes both a layer identifier and a layer parameter attribute, the layer identifier of the FPGA's current computation layer is determined, and parameters whose identifiers match that layer identifier and whose attributes match the layer parameter attribute are screened from the pre-stored parameter set as the target parameter.
In a specific implementation of this embodiment, the layer identifier of the FPGA's current computation layer may first be determined, and a first parameter set whose identifiers match that layer identifier may be screened from the pre-stored parameter set; parameters matching the layer parameter attribute are then screened from the first parameter set as the target parameter.
In a further possible embodiment, the layer identifier of the next layer after the FPGA's current computation layer is determined, a first parameter set whose identifiers match that next layer's identifier is screened from the pre-stored parameter set, and parameters matching the layer parameter attribute are screened from the first parameter set as the target parameter.
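The two-stage screening just described (first by layer identifier, then by layer parameter attribute) can be sketched as follows. The flat list-of-dicts layout and the helper name are assumptions for illustration, not the patent's actual data structures:

```python
# Minimal sketch of the two-stage screening described above.
# The parameter-set layout (a list of dicts) is an assumption.
def screen_target_parameters(parameter_set, layer_id, layer_attr=None):
    """Return parameters matching layer_id and, optionally, layer_attr."""
    # Stage 1: screen the first parameter set by layer identifier.
    first_set = [p for p in parameter_set if p["layer_id"] == layer_id]
    if layer_attr is None:
        return first_set
    # Stage 2: keep only parameters whose attributes match layer_attr.
    return [p for p in first_set
            if all(p.get(k) == v for k, v in layer_attr.items())]

params = [
    {"layer_id": "1", "kernel_size": 3, "value": b"\x01"},
    {"layer_id": "2", "kernel_size": 3, "value": b"\x02"},
    {"layer_id": "2", "kernel_size": 5, "value": b"\x03"},
]
print(screen_target_parameters(params, "2", {"kernel_size": 3}))
```

Calling the helper with `layer_attr=None` corresponds to the identifier-only embodiments; passing a layer attribute corresponds to the combined embodiment, which narrows the first parameter set further.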
Step 103: send the target parameter to the field-programmable gate array (FPGA) running the deep neural network model.
In a specific implementation, after the target parameter is sent to the FPGA, the FPGA can use it directly when running the deep neural network model.
The specific steps of the parameter extraction method provided by the embodiments of the present invention are described in detail below with reference to Fig. 2, taking as an example a parameter attribute that includes a layer identifier and a layer parameter attribute.
As shown in Fig. 2, the parameter extraction method provided by this embodiment of the present invention may include the following specific steps:
Step 201: obtain a parameter attribute of at least one layer of parameters in a deep neural network model.
It should be noted that the parameter attribute is set for each layer's parameters when the deep neural network model is established in advance.
Step 202: determine the layer identifier of the layer currently being computed by the FPGA.
Step 203: screen, from the pre-stored parameter set, a first parameter set whose identifiers match the layer identifier of the FPGA's current computation layer.
It should be noted that, in other embodiments of the present invention, a second parameter set whose identifiers match the layer identifier of the next layer after the FPGA's current computation layer may instead be screened from the pre-stored parameter set; the embodiments of the present invention do not limit this.
Step 204: screen, from the first parameter set, parameters matching the layer parameter attribute as the target parameter.
Step 205: send the target parameter to the FPGA running the deep neural network model.
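Steps 201-205 can be combined into a single end-to-end sketch. The `send_to_fpga` callback stands in for the actual CPU-to-FPGA transfer and, like the other names here, is a hypothetical illustration rather than the patented implementation:

```python
# End-to-end sketch of steps 201-205. All names are illustrative;
# send_to_fpga stands in for the real CPU-to-FPGA transfer.
def extract_and_send(parameter_set, current_layer_id, layer_attr, send_to_fpga):
    # Steps 202-203: screen the first parameter set by the current layer id.
    first_set = [p for p in parameter_set if p["layer_id"] == current_layer_id]
    # Step 204: keep parameters whose attributes match the layer attribute.
    target = [p for p in first_set
              if all(p.get(k) == v for k, v in layer_attr.items())]
    # Step 205: send the target parameters to the FPGA.
    send_to_fpga(target)
    return target

sent = []  # simulated FPGA-side buffer
extract_and_send(
    [{"layer_id": "1", "stride": 1}, {"layer_id": "1", "stride": 2}],
    "1", {"stride": 2}, sent.extend)
print(sent)
```

Note that the CPU here never parses a packed blob of all stored data; it only filters by the per-layer attribute, which is the efficiency gain the specification claims over the prior art.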
Based on the same inventive concept, an embodiment of the present invention also provides a parameter extraction device.
As shown in Fig. 3, the parameter extraction device provided by this embodiment of the present invention comprises:
an acquiring unit 301, configured to obtain a parameter attribute of at least one layer of parameters in a deep neural network;
a screening unit 302, configured to filter out a target parameter from a pre-stored parameter set based on the parameter attribute; and
a transmission unit 303, configured to send the target parameter to the field-programmable gate array (FPGA) running the deep neural network model.
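Purely as an illustration of how the three units above divide the work, a software analogue might group them into one class. This is a sketch under assumed data layouts, not the patented device itself:

```python
# Sketch mapping units 301-303 onto one class. Hypothetical, not the
# patented device; the dict-based model and parameter layouts are assumed.
class ParameterExtractionDevice:
    def __init__(self, parameter_set, send_to_fpga):
        self.parameter_set = parameter_set  # pre-stored parameter set
        self.send_to_fpga = send_to_fpga    # sink used by transmission unit

    def acquire_attribute(self, model, layer_id):  # acquiring unit 301
        return model[layer_id]

    def screen(self, attr):                        # screening unit 302
        return [p for p in self.parameter_set
                if p["layer_id"] == attr["layer_id"]]

    def transmit(self, target):                    # transmission unit 303
        self.send_to_fpga(target)

out = []  # simulated FPGA-side buffer
dev = ParameterExtractionDevice([{"layer_id": "1", "w": 0.5}], out.extend)
dev.transmit(dev.screen(dev.acquire_attribute({"1": {"layer_id": "1"}}, "1")))
print(out)
```

The one-method-per-unit split mirrors the acquire/screen/transmit pipeline of the method claims, which is why the device and method embodiments below read almost identically.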
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier and/or a layer parameter attribute.
In a possible embodiment of the above device provided by the embodiments of the present invention, the layer parameter attribute includes one or more of the following: input layer name, current channel type, kernel size, sliding stride, and padding size.
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier, and the screening unit 302 is specifically configured to screen, from the pre-stored parameter set, parameters whose identifiers match the layer identifier as the target parameter.
In another possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier, and the screening unit 302 is specifically configured to determine the layer identifier of the layer currently being computed by the FPGA, and to screen, from the pre-stored parameter set, parameters whose identifiers match that layer identifier as the target parameter.
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute includes a layer identifier and a layer parameter attribute, and the screening unit 302 is specifically configured to determine the layer identifier of the layer currently being computed by the FPGA, and to screen, from the pre-stored parameter set, parameters whose identifiers match that layer identifier and whose attributes match the layer parameter attribute as the target parameter.
In a possible embodiment of the above device provided by the embodiments of the present invention, the parameter attribute is set when the deep neural network model is established in advance.
In addition, the parameter extraction method and device of the embodiments of the present invention described in conjunction with Figs. 1-3 may be implemented by parameter extraction equipment. Fig. 4 shows a schematic diagram of the hardware structure of parameter extraction equipment provided by an embodiment of the present invention.
The parameter extraction equipment may include a processor 401 and a memory 402 storing computer program instructions.
Specifically, the processor 401 may include a central processing unit (CPU), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 402 may include mass storage for data or instructions. By way of example and not limitation, the memory 402 may include a hard disk drive (Hard Disk Drive, HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of the above. Where appropriate, the memory 402 may include removable or non-removable (or fixed) media. Where appropriate, the memory 402 may be internal or external to the data processing device. In a particular embodiment, the memory 402 is non-volatile solid-state memory. In a particular embodiment, the memory 402 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), flash memory, or a combination of two or more of the above.
The processor 401 implements any of the parameter extraction methods in the above embodiments by reading and executing the computer program instructions stored in the memory 402.
In one example, the parameter extraction equipment may also include a communication interface 403 and a bus 410. As shown in Fig. 4, the processor 401, the memory 402, and the communication interface 403 are connected by the bus 410 and communicate with one another through it.
The communication interface 403 is mainly used to realize communication among the modules, apparatuses, units, and/or devices in the embodiments of the present invention.
The bus 410 includes hardware, software, or both, and couples the components of the parameter extraction equipment to one another. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a VESA Local Bus (VLB), or another suitable bus, or a combination of two or more of the above. Where appropriate, the bus 410 may include one or more buses. Although specific buses are described and illustrated in the embodiments of the present invention, the present invention contemplates any suitable bus or interconnect.
The parameter extraction equipment may execute the parameter extraction method in the embodiments of the present invention based on the obtained parameter attribute of each layer's parameters in the deep neural network model, thereby realizing the parameter extraction device described in conjunction with Figs. 1-3.
In addition, in combination with the parameter extraction method in the above embodiments, an embodiment of the present invention may provide a computer-readable storage medium for implementation. Computer program instructions are stored on the computer-readable storage medium; when executed by a processor, the computer program instructions implement any of the parameter extraction methods in the above embodiments.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, equipment (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, where the instruction device realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (10)
1. A parameter extraction method, characterized by comprising:
obtaining a parameter attribute of at least one layer of parameters in a deep neural network model;
filtering out a target parameter from a pre-stored parameter set based on the parameter attribute; and
sending the target parameter to a field-programmable gate array (FPGA) running the deep neural network model.
2. The method according to claim 1, characterized in that the parameter attribute includes a layer identifier and/or a layer parameter attribute.
3. The method according to claim 2, characterized in that the layer parameter attribute includes one or more of the following: input layer name, current channel type, kernel size, sliding stride, and padding size.
4. The method according to claim 1, characterized in that the parameter attribute includes a layer identifier, and filtering out the target parameter from the pre-stored parameter set based on the parameter attribute comprises:
screening, from the pre-stored parameter set, parameters whose identifiers match the layer identifier as the target parameter.
5. The method according to claim 1, characterized in that the parameter attribute includes a layer identifier, and filtering out the target parameter from the pre-stored parameter set based on the parameter attribute comprises:
determining the layer identifier of the layer currently being computed by the FPGA; and
screening, from the pre-stored parameter set, parameters whose identifiers match the layer identifier of the FPGA's current computation layer as the target parameter.
6. The method according to claim 1, characterized in that the parameter attribute includes a layer identifier and a layer parameter attribute, and filtering out the target parameter from the pre-stored parameter set based on the parameter attribute comprises:
determining the layer identifier of the layer currently being computed by the FPGA; and
screening, from the pre-stored parameter set, parameters whose identifiers match the layer identifier of the FPGA's current computation layer and whose attributes match the layer parameter attribute as the target parameter.
7. The method according to claim 1, characterized in that the parameter attribute is set when the deep neural network model is established in advance.
8. A parameter extraction apparatus, comprising:
an acquiring unit, configured to acquire parameter attributes of parameters of at least one layer in a deep neural network;
a screening unit, configured to screen out target parameters from a pre-stored parameter set based on the parameter attributes;
a transmission unit, configured to send the target parameters to the field-programmable gate array (FPGA) that runs the deep neural network model.
9. A parameter extraction device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, wherein the method according to any one of claims 1-7 is implemented when the computer program instructions are executed by the processor.
10. A computer-readable storage medium having computer program instructions stored thereon, wherein the method according to any one of claims 1-7 is implemented when the computer program instructions are executed by a processor.
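The method of claim 1 and the apparatus of claim 8 together describe an acquire-screen-send pipeline. The sketch below wires those three units together under assumed names and data shapes; the `send_to_fpga` callable merely stands in for the transmission unit, as the patent does not specify the transfer interface:

```python
# End-to-end sketch of the claimed pipeline (all names hypothetical):
# acquire each layer's identifier, screen the pre-stored parameter set,
# and pass the target parameters to a send function standing in for the
# FPGA transmission unit.
def extract_and_send(pre_stored, layer_ids, send_to_fpga):
    for layer_id in layer_ids:  # acquiring unit: one attribute (the id) per layer
        # screening unit: pick parameters whose layer identifier matches
        targets = [p for p in pre_stored if p["layer_id"] == layer_id]
        send_to_fpga(layer_id, targets)  # transmission unit

# Demo with an in-memory "FPGA" that records what it receives.
sent = {}
extract_and_send(
    [{"layer_id": "conv1", "weights": [1]}, {"layer_id": "fc1", "weights": [2]}],
    ["conv1", "fc1"],
    lambda layer_id, targets: sent.setdefault(layer_id, targets),
)
```

Screening per layer, rather than loading all parameters at once, matches the per-current-layer extraction described in claims 5-6.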
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910216412.1A CN110032374B (en) | 2019-03-21 | 2019-03-21 | Parameter extraction method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110032374A true CN110032374A (en) | 2019-07-19 |
CN110032374B CN110032374B (en) | 2023-04-07 |
Family
ID=67236483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910216412.1A Active CN110032374B (en) | 2019-03-21 | 2019-03-21 | Parameter extraction method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110032374B (en) |
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2987325A1 (en) * | 2016-01-12 | 2017-07-20 | Tencent Technology (Shenzhen) Company Limited | Cnn processing method and device |
US20180082175A1 (en) * | 2016-01-12 | 2018-03-22 | Tencent Technology (Shenzhen) Company Limited | Convolutional Neural Network Processing Method and Device |
US20180005106A1 (en) * | 2016-06-30 | 2018-01-04 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and non-transitory computer-readable storage medium |
CN106228238A (en) * | 2016-07-27 | 2016-12-14 | 中国科学技术大学苏州研究院 | The method and system of degree of depth learning algorithm is accelerated on field programmable gate array platform |
CN106228240A (en) * | 2016-07-30 | 2016-12-14 | 复旦大学 | Degree of depth convolutional neural networks implementation method based on FPGA |
US20180114117A1 (en) * | 2016-10-21 | 2018-04-26 | International Business Machines Corporation | Accelerate deep neural network in an fpga |
CN107451653A (en) * | 2017-07-05 | 2017-12-08 | 深圳市自行科技有限公司 | Computational methods, device and the readable storage medium storing program for executing of deep neural network |
US20190050717A1 (en) * | 2017-08-11 | 2019-02-14 | Google Llc | Neural network accelerator with parameters resident on chip |
CN107822622A (en) * | 2017-09-22 | 2018-03-23 | 成都比特律动科技有限责任公司 | Electrocardiographic diagnosis method and system based on depth convolutional neural networks |
US20190042945A1 (en) * | 2017-12-12 | 2019-02-07 | Somdeb Majumdar | Methods and arrangements to quantize a neural network with machine learning |
CN108416422A (en) * | 2017-12-29 | 2018-08-17 | 国民技术股份有限公司 | A kind of convolutional neural networks implementation method and device based on FPGA |
CN108334945A (en) * | 2018-01-30 | 2018-07-27 | 中国科学院自动化研究所 | The acceleration of deep neural network and compression method and device |
CN108710941A (en) * | 2018-04-11 | 2018-10-26 | 杭州菲数科技有限公司 | The hard acceleration method and device of neural network model for electronic equipment |
CN108537328A (en) * | 2018-04-13 | 2018-09-14 | 众安信息技术服务有限公司 | Method for visualizing structure neural network |
CN108665059A (en) * | 2018-05-22 | 2018-10-16 | 中国科学技术大学苏州研究院 | Convolutional neural networks acceleration system based on field programmable gate array |
US20190050715A1 (en) * | 2018-09-28 | 2019-02-14 | Intel Corporation | Methods and apparatus to improve data training of a machine learning model using a field programmable gate array |
CN109409518A (en) * | 2018-10-11 | 2019-03-01 | 北京旷视科技有限公司 | Neural network model processing method, device and terminal |
CN109784484A (en) * | 2019-01-31 | 2019-05-21 | 深兰科技(上海)有限公司 | Neural network accelerated method, device, neural network accelerate chip and storage medium |
Non-Patent Citations (4)
Title |
---|
Wei Junlei et al.: "Design and Implementation of an FPGA-Based Neural Network PID Controller", Automation & Instrumentation *
Yang Yichen et al.: "A Convolutional Neural Network Coprocessor Design Based on Programmable Logic Devices", Journal of Xi'an Jiaotong University *
Wang Kun et al.: "Convolutional Neural Network System Design and Hardware Implementation in Deep Learning", Application of Electronic Technique *
Jiang Lin et al.: "Design and Implementation of FPGA-Based Convolutional Neural Networks", Microelectronics & Computer *
Also Published As
Publication number | Publication date |
---|---|
CN110032374B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2991004A2 (en) | Method and apparatus for labeling training samples | |
CN106354645B (en) | Test method and test platform based on background system service or interface | |
CN110189013A (en) | A kind of determination method, apparatus, equipment and the medium of operation flow | |
CN110691035A (en) | Method and device for determining network congestion, electronic equipment and storage medium | |
CN104318562A (en) | Method and device for confirming quality of internet images | |
CN104951842B (en) | A kind of new oilfield production forecast method | |
CN109996269A (en) | A kind of cordless communication network abnormal cause determines method, apparatus, equipment and medium | |
CN106224278B (en) | A kind of compressor control system automatic generation method and device | |
CN113010944B (en) | Model verification method, electronic equipment and related products | |
CN105187092B (en) | A kind of method and apparatus for the interference signal for reducing mobile communication | |
CN104933138A (en) | Webpage crawler system and webpage crawling method | |
CN110032374A (en) | A kind of parameter extracting method, device, equipment and medium | |
CN113191671B (en) | Engineering amount calculating method and device and electronic equipment | |
CN108920377B (en) | Log playback test method, system and device and readable storage medium | |
CN110334018A (en) | A kind of big data introduction method and relevant device | |
CN108073510A (en) | Method for testing software and device | |
CN109862392B (en) | Method, system, device and medium for identifying video traffic of internet game | |
CN116067359A (en) | Low-precision track data processing method and system based on delaunay triangle network | |
CN116071335A (en) | Wall surface acceptance method, device, equipment and storage medium | |
CN104753934A (en) | Method for separating known protocol multi-communication-parties data stream into point-to-point data stream | |
EP3048812A1 (en) | Voice signal processing apparatus and voice signal processing method | |
CN109981548B (en) | Method and device for analyzing charging message | |
CN110008940B (en) | Method and device for removing target object in image and electronic equipment | |
CN108965903A (en) | A kind of method and device in the optional network specific digit region for identifying live video | |
CN107315196A (en) | Improved VSP wave field separations method and apparatus based on medium filtering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |

Effective date of registration: 20240510 Address after: Room 6227, No. 999, Changning District, Shanghai 200050 Patentee after: Shenlan robot (Shanghai) Co.,Ltd. Country or region after: China Address before: Unit 1001, 369 Weining Road, Changning District, Shanghai, 200336 (9th floor of actual floor) Patentee before: DEEPBLUE TECHNOLOGY (SHANGHAI) Co.,Ltd. Country or region before: China |