CN108388943A - A pooling device and method suitable for neural networks - Google Patents
A pooling device and method suitable for neural networks
- Publication number
- CN108388943A (application CN201810014396.3A)
- Authority
- CN
- China
- Prior art keywords
- pooling
- neuron
- module
- device
- reuse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Image Analysis (AREA)
- Advance Control (AREA)
Abstract
The present invention relates to a pooling device suitable for neural networks, comprising: a neuron input interface module for receiving neuron data and identifying valid neuron data; a pooling cache module for temporarily storing reused neuron data; a pooling computation module for performing pooling calculations on the neuron data; a neuron output interface module for outputting the pooling calculation results; and a pooling control module for controlling the modules of the pooling device and the pooling process.
Description
Technical field
The present invention relates to the field of computing, and in particular to a pooling device and method suitable for neural networks.
Background technology
Neural networks are among the most highly developed perception models in the field of artificial intelligence, and have become a research hotspot in both academia and industry owing to their wide applicability and remarkable performance. A neural network builds its model by simulating the neural connection structure of the human brain, bringing breakthroughs to large-scale data processing tasks (such as image, video, or audio). The computation of a neural network can generally be divided into convolution, activation, and pooling, where the feature-map size of each layer is reduced by the pooling operation to achieve a converging effect on the computation; an efficient pooling device therefore helps reduce the hardware cost of a neural network.
In practical applications, different neural network models differ in pooling size, the choice of pooling data reuse, and the scheduling of pooling data. Pooling devices in the prior art find it difficult to maintain low energy consumption in a neural network chip while remaining compatible with neural network accelerators, which severely limits the efficiency of neural network chips and their compatibility with different networks.
Therefore, a pooling device and method suitable for neural networks with good compatibility and low energy consumption is needed.
Summary of the invention
The present invention provides a pooling device suitable for neural networks, comprising: a neuron input interface module for receiving neuron data and identifying valid neuron data; a pooling cache module for temporarily storing reused neuron data; a pooling computation module for performing pooling calculations on the neuron data; a neuron output interface module for outputting the pooling calculation results; and a pooling control module for controlling the modules of the pooling device and the pooling process.
Preferably, the pooling control module is further configured to receive and analyze pooling parameters.
Preferably, the pooling control module determines from the pooling parameters whether a reuse strategy is used during pooling.
Preferably, the pooling parameters include the stride and side length of the pooling window.
Preferably, if the stride is less than the side length, the reuse strategy is used: the pooling control module controls the pooling computation module to perform the calculation and enables the pooling cache module.
Preferably, the pooling computation module receives neuron data from the neuron input interface module and the pooling cache module.
Preferably, the neuron data are the neuron data of a single pooling window formed by splicing.
Preferably, if the stride equals the side length, the reuse strategy is not used: the pooling control module controls the pooling computation module to perform the calculation directly on the neurons, and the pooling cache module is not enabled.
Preferably, the pooling control module is further configured to control the sleep and startup of the pooling device.
According to another aspect of the present invention, a pooling method suitable for neural networks is also provided, comprising the following steps:
receiving and analyzing the pooling parameters, generating a valid-data encoding, and determining a reuse strategy;
receiving valid neuron data according to the valid-data encoding, and determining according to the reuse strategy whether to store reused neuron data;
performing the pooling calculation on the valid neuron data, or on the neuron data formed by splicing the valid neuron data with the reused neuron data, and outputting the final result of the calculation.
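The three steps above can be sketched as a small software model (an illustration under assumed names such as `pooling_method`; the patent describes hardware modules, not this code):

```python
def pooling_method(new_neurons, cached, side, stride, op="max"):
    """Sketch of the claimed method: valid neuron data is received and,
    when the reuse strategy applies (stride < side), spliced with the
    cached reused neuron data to form one full pooling window."""
    use_reuse = stride < side
    window = new_neurons + (cached if use_reuse else [])
    assert len(window) == side * side, "spliced data must fill one pooling window"
    return max(window) if op == "max" else sum(window) / len(window)

# stride 2 < side 3: 6 newly received neurons spliced with 3 reused ones
print(pooling_method([4, 1, 7, 2, 9, 5], cached=[3, 8, 6], side=3, stride=2))  # 9
```

With the stride equal to the side length, the cached data is ignored and the window must arrive entirely as new input, matching the no-reuse branch of the method.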
Compared with the prior art, the present invention achieves the following advantageous effects: the pooling device and method for neural networks provided by the invention obtain a valid-data encoding and a reuse strategy by analyzing the pooling parameters of the neural network, and realize data reuse during pooling by temporarily storing data, so that the neurons of pooling windows of different sizes can be processed in batches with fixed arithmetic units, improving the compatibility of the pooling device; meanwhile, a sleep and startup mechanism is established for the pooling device, reducing the energy consumption of the neural network chip.
Description of the drawings
Fig. 1 shows the pooling device suitable for neural networks provided by the present invention.
Fig. 2 is a flow chart of the method of pooling using the pooling device shown in Fig. 1.
Fig. 3 is a schematic structural diagram of the pooling device of a preferred embodiment of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical solutions, and advantages of the present invention clearer, the pooling device and method suitable for neural networks provided in the embodiments of the invention is further described below in conjunction with the accompanying drawings.
With the development of artificial intelligence in recent years, neural networks based on deep learning have been widely applied to solving abstract problems. A deep neural network describes data features hierarchically through multiple transformation stages, establishing an operational model in which a large number of nodes, commonly known as neurons, are interconnected in a mesh. In general, the computation of a neural network is heavy and its process complex; taking the maximum/minimum value or the average with a pooling device can make the computation of the neural network converge. Therefore, designing an efficient pooling device is of great significance to neural network computation.
To address the generally poor compatibility of existing pooling devices, the inventors propose, after research, a pooling device and method that can complete pooling tasks of different scales with a pooling computation module of fixed scale and use a flexible calling method to complete neuron reuse, thereby ensuring compatibility with neural network accelerators; meanwhile, a working mechanism combining startup and sleep is adopted to realize low energy consumption of the neural network chip.
Fig. 1 shows the pooling device suitable for neural networks provided by the present invention. As shown in Fig. 1, the pooling device 101 includes a neuron input interface module 102, a neuron output interface module 105, a pooling cache module 103, a pooling computation module 104, and a pooling control module 106.
The neuron input interface module 102 may be used to receive, according to a received control signal and a transport protocol, the neuron data transferred to the pooling device 101 at different effective bandwidths by an external module (such as an activation module or an external cache module), and to ensure that the data is transmitted accurately. This interface module can establish a persistent data transmission channel with the external module, and identifies the valid part of the input neuron data according to the valid-data segment address code in the control signal.
The pooling cache module 103 may be used to store the reused neurons awaiting pooling, supplying the reused part of the data for the pooling operation on the neurons; according to the received control signal, this cache module can assist in completing the data reading process during neuron data reuse.
The pooling computation module 104 may be used to complete the pooling operation on the neurons input each time; the number of pooling arithmetic units is fixed, and the neurons of pooling windows of different sizes can be pooled in a batched manner. Meanwhile, this computation module can also temporarily store the intermediate results of the pooling operation, to assist in completing the iterative computation of the neuron pooling operation.
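The batched, iterative behavior described for the computation module can be modeled as follows (a sketch; the running `partial` value stands in for the module's temporary storage of intermediate results):

```python
def pool_in_batches(batches, op="max"):
    """Fold a pooling operation over successive batches of one window,
    carrying an intermediate result between iterations as the
    computation module does with its scratch storage."""
    partial = None
    count = 0
    for batch in batches:
        if op == "max":
            m = max(batch)
            partial = m if partial is None else max(partial, m)
        else:  # average: keep a running sum and the true element count
            partial = (partial or 0) + sum(batch)
            count += len(batch)
    return partial if op == "max" else partial / count

# a 7x7 window (49 neurons) processed as batches of 16, 16, 16, 1
window = list(range(49))
batches = [window[0:16], window[16:32], window[32:48], window[48:49]]
print(pool_in_batches(batches))          # 48
print(pool_in_batches(batches, "avg"))   # 24.0
```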
The neuron output interface module 105 may be used to output the operation results of the pooling device 101 to an external module according to a received control signal and a transport protocol.
The pooling control module 106 may be used to receive the pooling parameters sent to the pooling device 101 and, according to those parameters, send control signals to the modules of the pooling device 101, managing the working state of each module and the pipelined flow of pooling data during pooling. For example, the control module 106 can control the valid-data input/output amounts of the neuron input interface module 102 and the neuron output interface module 105, as well as the writing, deletion, and transmission of data in the pooling cache module 103.
In one embodiment of the invention, the pooling control module 106 can also control the startup and sleep of each module of the pooling device 101. For example, the Resnet50 neural network model has about 50 layers in total, of which only 2 require a pooling operation: one with pooling side length 3x3 and stride 2, and one with pooling side length 7x7 and stride 7. Since pooling is a minority of the computation in this model, the startup and sleep mechanism allows the pooling device to remain dormant in the other network layers, reducing the extra energy consumption the pooling module brings to the neural network accelerator.
According to another aspect of the present invention, a method of pooling neuron data using the above pooling device 101 is also provided. Fig. 2 is a flow chart of the method of pooling using the pooling device shown in Fig. 1. As shown in Fig. 2, the method specifically includes the following steps:
Step S10: receive and analyze the pooling parameters
When the neural network needs to perform pooling, the pooling control module 106 receives an activation signal from the neural network, which starts the modules of the pooling device 101 that are in the dormant state.
After the pooling device starts, the pooling control module 106 receives the pooling parameters sent to the pooling device 101 from the external neural network module and analyzes them, to determine the reuse strategy of the pooling operation and generate control signals.
The pooling parameters may include the pooling side length, the pooling stride, and the pooling operation type. The pooling control module can judge whether reuse is necessary in the current pooling layer by analyzing the pooling side length and stride: for example, if the stride equals the side length, the control module does not select the reuse strategy; if the stride is less than the side length, the reuse strategy needs to be started and the reuse amount of neurons is calculated. The neurons in the pooling window are then batched according to the number of arithmetic units of the pooling computation module 104, generating a data encoding that indicates the number of neurons in each batch.
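For a unidirectionally sliding window, the reuse amount can be derived from the overlap between consecutive windows; a minimal sketch (the formula is inferred from the worked example later in the text, not stated explicitly by the patent):

```python
def reuse_amount(side, stride):
    """Neurons shared between consecutive windows when the pooling
    window slides in one direction: the overlapping columns times the
    window height. Zero overlap means the reuse strategy is not used."""
    overlap_cols = max(side - stride, 0)
    return overlap_cols * side

# 3x3 window, stride 2: one overlapping column of 3 neurons is reused
print(reuse_amount(3, 2))  # 3
# 7x7 window, stride 7: no overlap, the reuse mechanism stays off
print(reuse_amount(7, 7))  # 0
```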
In one embodiment of the invention, the amount of neuron data corresponding to the input bandwidth of the pooling device 101 is the same as the number of arithmetic units of the pooling computation module 104.
Step S20: receive and store neurons
The pooling control module 106 sends the valid-data encoding and the neuron reuse amount generated in step S10 to the neuron input interface module 102, which receives the valid neuron data from outside according to the valid-data encoding and the corresponding transport protocol; meanwhile, according to the neuron reuse amount, the part of the input neurons to be reused in the next activation is assigned and temporarily stored in the pooling cache module 103 for the next use.
Step S30: perform the pooling calculation and output the result
After the pooling control module 106 has controlled the completion of the input and temporary storage of the neuron data in step S20, neurons can be loaded from the neuron input interface module 102 and the pooling cache module 103, spliced to form the neurons of a single pooling window, and input to the pooling computation module 104. The pooling computation module 104 selects the calculation type and performs the pooling operation on the neuron data according to the control information received from the pooling control module 106, where the control information includes the pooling calculation type corresponding to the current neuron data.
For the neuron data of the current batch, if the result obtained by the pooling computation module 104 is an intermediate result, it is temporarily stored; when the next batch of neuron data belonging to the current pooling window is input to the pooling device, the stored intermediate result and the next batch of neuron data are fed into the pooling computation module together for iterative calculation. After several iterations, when the result obtained by the pooling computation module 104 is the final pooling result, the final result is transmitted to the neuron output interface module 105, which outputs it to the external module according to the data transport protocol.
In one embodiment of the invention, when the pooling computation module 104 performs a pooling operation, if the input neuron data cannot completely fill the input bandwidth of the pooling computation module 104, the pooling control module 106 can pad the spare bits of the input neuron data according to the type of pooling operation, to ensure the accuracy of the pooling calculation.
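One way to fill the spare bits without disturbing the result (an assumption for illustration; the patent does not state the fill values) is to pad with the identity of the pooling operation:

```python
def pad_input(values, width, op="max"):
    """Fill the unused lanes of a fixed-width input with values that
    are neutral for the chosen pooling operation."""
    fill = float("-inf") if op == "max" else 0.0  # 0 is safe for a running sum
    return values + [fill] * (width - len(values))

vals = [3.0, 7.0, 5.0]
print(max(pad_input(vals, 16)))  # 7.0  (padding can never win the max)
# for averaging, divide by the true neuron count, not the padded width
print(sum(pad_input(vals, 16, op="avg")) / len(vals))  # 5.0
```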
Fig. 3 is a schematic structural diagram of the pooling device of a preferred embodiment of the present invention. As shown in Fig. 3, the method of pooling neuron data with the pooling device 101 provided by the invention is illustrated below with a specific example.
Assume that the input bandwidth of the pooling device 101 is 128 bits, each neuron is 8 bits, and the pooling window has a stride of 2 and a side length of 3. When the pooling device 101 receives an activation signal, it is started under the control of the pooling control module 106 and enters the pooling calculation state from the dormant state.
Step S10 is performed first: the pooling control module 106 receives the pooling parameters input by the external module, including the pooling window side length 3, the pooling stride 2, and the pooling operation type; meanwhile, the pooling control module 106 analyzes these parameter data. Since the pooling stride is less than the pooling side length, it is judged that the neuron reuse mechanism needs to be started; assuming the pooling window moves in one direction, its reuse amount is 3.
Next, step S20 is performed: neurons are received, and the neurons in the pooling window are batched according to the number of arithmetic units of the pooling computation module 104. Assume the number of input neurons is 16 and the number of valid neurons in the pooling window is 9; a single input transmission can then accommodate all the neurons of a single pooling window. If the neurons form the first pooling window of each row of the input feature map, the number of valid neurons is 9, and an encoding indicating 9 neurons per batch is generated.
According to this valid-data encoding, the input interface module 102 receives the 128 bits of valid data from the data (containing both valid and invalid parts) sent by the external module; meanwhile, according to the reuse-amount information, the 3 neurons of the input that are to be reused in the next activation are copied and stored in the pooling cache module 103.
Finally, step S30 is performed: the pooling control module 106 splices the 6 input neurons with the 3 neurons to be reused in the cache module into the 9 input neurons of the 3x3 pooling window, and the splicing result is transmitted to the pooling computation module, which performs the pooling calculation; after the final result is obtained, it is transmitted to the external module through the neuron output interface module 105.
In another embodiment of the present invention, assume the stride of the pooling window equals its side length, for example both are 7. When the pooling device 101 receives the activation signal and enters the pooling calculation state, there is no need to start the neuron reuse mechanism, that is, no need to start the pooling cache module 103; the input neurons can be transferred directly to the pooling computation module 104 for the pooling operation. If the neurons of a pooling window need to be pooled in multiple batches, intermediate results can be temporarily stored and pooled together with the neurons of subsequent batches, until the pooling operation for all neurons of the given pooling window is completed and the result is output. For example, assuming a single pooling window contains 49 neurons, they can be divided into 4 batches, generating the encoding (16-16-16-1).
Although the pooling device and method provided by the invention have been illustrated in the above embodiments using the Resnet50 neural network model, those of ordinary skill in the art will recognize that the pooling device and method herein can also be used with other neural network models.
Compared with the prior art, the pooling device and method suitable for neural networks provided in the embodiments of the invention use a corresponding reuse strategy, so that pooling tasks of different scales can be completed with only pooling arithmetic units of a fixed scale, while neuron reuse is completed with a flexible neuron calling method, realizing the compatibility of the pooling device; a startup and sleep mechanism is established for the pooling device, reducing its energy consumption.
Although the present invention has been described by means of preferred embodiments, it is not limited to the embodiments described here, but also includes various changes and variations made without departing from the present invention.
Claims (10)
1. A pooling device suitable for neural networks, comprising:
a neuron input interface module for receiving neuron data and identifying valid neuron data;
a pooling cache module for temporarily storing reused neuron data;
a pooling computation module for performing pooling calculations on the neuron data;
a neuron output interface module for outputting the pooling calculation results; and
a pooling control module for controlling the modules of the pooling device and the pooling process.
2. The pooling device according to claim 1, wherein the pooling control module is further configured to receive and analyze pooling parameters.
3. The pooling device according to claim 2, wherein the pooling control module determines from the pooling parameters whether a reuse strategy is used during pooling.
4. The pooling device according to claim 3, wherein the pooling parameters include the stride and side length of the pooling window.
5. The pooling device according to claim 4, wherein if the stride is less than the side length, the reuse strategy is used: the pooling control module controls the pooling computation module to perform the calculation and enables the pooling cache module.
6. The pooling device according to claim 5, wherein the pooling computation module receives neuron data from the neuron input interface module and the pooling cache module.
7. The pooling device according to claim 6, wherein the neuron data are the neuron data of a single pooling window formed by splicing.
8. The pooling device according to claim 4, wherein if the stride equals the side length, the reuse strategy is not used: the pooling control module controls the pooling computation module to perform the calculation directly on the neurons, and the pooling cache module is not enabled.
9. The pooling device according to any one of claims 1 to 8, wherein the pooling control module is further configured to control the sleep and startup of the pooling device.
10. A pooling method suitable for neural networks, comprising the following steps:
receiving and analyzing the pooling parameters, generating a valid-data encoding, and determining a reuse strategy;
receiving valid neuron data according to the valid-data encoding, and determining according to the reuse strategy whether to store reused neuron data;
performing a pooling calculation on the valid neuron data, or on the neuron data formed by splicing the valid neuron data with the reused neuron data, and outputting the final result of the calculation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810014396.3A CN108388943B (en) | 2018-01-08 | 2018-01-08 | Pooling device and method suitable for neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810014396.3A CN108388943B (en) | 2018-01-08 | 2018-01-08 | Pooling device and method suitable for neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108388943A true CN108388943A (en) | 2018-08-10 |
CN108388943B CN108388943B (en) | 2020-12-29 |
Family
ID=63076734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810014396.3A Active CN108388943B (en) | 2018-01-08 | 2018-01-08 | Pooling device and method suitable for neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108388943B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109558564A (en) * | 2018-11-30 | 2019-04-02 | Shanghai Cambricon Information Technology Co., Ltd. | Operation method and device, and related product |
CN117273102A (en) * | 2023-11-23 | 2023-12-22 | 深圳鲲云信息科技有限公司 | Apparatus and method for pooling accelerators and chip circuitry and computing device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015036939A (en) * | 2013-08-15 | 2015-02-23 | Fuji Xerox Co., Ltd. | Feature extraction program and information processing apparatus |
CN106228240A (en) * | 2016-07-30 | 2016-12-14 | Fudan University | FPGA-based deep convolutional neural network implementation method |
CN106355244A (en) * | 2016-08-30 | 2017-01-25 | Shenzhen Nuobilin Technology Co., Ltd. | CNN (convolutional neural network) construction method and system |
CN106682734A (en) * | 2016-12-30 | 2017-05-17 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Method and apparatus for increasing generalization capability of convolutional neural network |
CN106875012A (en) * | 2017-02-09 | 2017-06-20 | Wuhan Meitong Technology Co., Ltd. | A pipelined acceleration system for FPGA-based deep convolutional neural networks |
CN106940815A (en) * | 2017-02-13 | 2017-07-11 | Xi'an Jiaotong University | A programmable convolutional neural network coprocessor IP core |
CN107103113A (en) * | 2017-03-23 | 2017-08-29 | Institute of Computing Technology, Chinese Academy of Sciences | Automated design method and device for a neural network processor, and optimization method |
US20170300812A1 (en) * | 2016-04-14 | 2017-10-19 | International Business Machines Corporation | Efficient determination of optimized learning settings of neural networks |
-
2018
- 2018-01-08 CN CN201810014396.3A patent/CN108388943B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015036939A (en) * | 2013-08-15 | 2015-02-23 | Fuji Xerox Co., Ltd. | Feature extraction program and information processing apparatus |
US20170300812A1 (en) * | 2016-04-14 | 2017-10-19 | International Business Machines Corporation | Efficient determination of optimized learning settings of neural networks |
CN106228240A (en) * | 2016-07-30 | 2016-12-14 | Fudan University | FPGA-based deep convolutional neural network implementation method |
CN106355244A (en) * | 2016-08-30 | 2017-01-25 | Shenzhen Nuobilin Technology Co., Ltd. | CNN (convolutional neural network) construction method and system |
CN106682734A (en) * | 2016-12-30 | 2017-05-17 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Method and apparatus for increasing generalization capability of convolutional neural network |
CN106875012A (en) * | 2017-02-09 | 2017-06-20 | Wuhan Meitong Technology Co., Ltd. | A pipelined acceleration system for FPGA-based deep convolutional neural networks |
CN106940815A (en) * | 2017-02-13 | 2017-07-11 | Xi'an Jiaotong University | A programmable convolutional neural network coprocessor IP core |
CN107103113A (en) * | 2017-03-23 | 2017-08-29 | Institute of Computing Technology, Chinese Academy of Sciences | Automated design method and device for a neural network processor, and optimization method |
Non-Patent Citations (2)
Title |
---|
NICOLA CALABRETTA et al.: "Flow controlled scalable optical packet switch for low latency flat data center network", 2013 15th International Conference on Transparent Optical Networks (ICTON) *
CHANG Liang et al.: "Convolutional neural networks in image understanding", Acta Automatica Sinica *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109558564A (en) * | 2018-11-30 | 2019-04-02 | Shanghai Cambricon Information Technology Co., Ltd. | Operation method and device, and related product |
CN109558564B (en) * | 2018-11-30 | 2022-03-11 | Shanghai Cambricon Information Technology Co., Ltd. | Operation method and device, and related product |
CN117273102A (en) * | 2023-11-23 | 2023-12-22 | 深圳鲲云信息科技有限公司 | Apparatus and method for pooling accelerators and chip circuitry and computing device |
CN117273102B (en) * | 2023-11-23 | 2024-05-24 | 深圳鲲云信息科技有限公司 | Apparatus and method for pooling accelerators and chip circuitry and computing device |
Also Published As
Publication number | Publication date |
---|---|
CN108388943B (en) | 2020-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110348574B (en) | ZYNQ-based universal convolutional neural network acceleration structure and design method | |
CN108345934B (en) | Activation device and method for neural network processor | |
CN107844826A (en) | Neural-network processing unit and the processing system comprising the processing unit | |
CN108388943A (en) | A kind of pond device and method suitable for neural network | |
WO2022007880A1 (en) | Data accuracy configuration method and apparatus, neural network device, and medium | |
CN108304925A (en) | A kind of pond computing device and method | |
CN110991630A (en) | Convolutional neural network processor for edge calculation | |
CN108304926A (en) | A kind of pond computing device and method suitable for neural network | |
CN111831359B (en) | Weight precision configuration method, device, equipment and storage medium | |
WO2022078334A1 (en) | Processing method for processing signals using neuron model and network, medium and device | |
Yang et al. | Deep reinforcement learning based wireless network optimization: A comparative study | |
CN112383439B (en) | Intelligent gas meter air upgrading system and upgrading method | |
CN114691765A (en) | Data processing method and device in artificial intelligence system | |
CN109086871A (en) | Training method, device, electronic equipment and the computer-readable medium of neural network | |
CN109299487B (en) | Neural network system, accelerator, modeling method and device, medium and system | |
CN111831358A (en) | Weight precision configuration method, device, equipment and storage medium | |
CN113191504B (en) | Federated learning training acceleration method for computing resource isomerism | |
CN114169506A (en) | Deep learning edge computing system framework based on industrial Internet of things platform | |
CN111344719A (en) | Data processing method and device based on deep neural network and mobile device | |
CN106971229B (en) | Neural network computing core information processing method and system | |
CN109858341B (en) | Rapid multi-face detection and tracking method based on embedded system | |
CN108090865B (en) | Optical satellite remote sensing image on-orbit real-time streaming processing method and system | |
CN109190755A (en) | Matrix conversion device and method towards neural network | |
CN109919655A (en) | A kind of charging method of charging equipment, device and intelligent terminal | |
CN114791768A (en) | Computer system and electronic equipment based on brain-computer interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |