CN107256423A - Augmented neural network architecture, training method therefor, and computer-readable storage medium - Google Patents
Augmented neural network architecture, training method therefor, and computer-readable storage medium
- Publication number
- CN107256423A CN107256423A CN201710312916.4A CN201710312916A CN107256423A CN 107256423 A CN107256423 A CN 107256423A CN 201710312916 A CN201710312916 A CN 201710312916A CN 107256423 A CN107256423 A CN 107256423A
- Authority
- CN
- China
- Prior art keywords
- network model
- model
- second neural network
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides an augmented neural network architecture, a training method therefor, and a computer-readable storage medium. The augmented neural network architecture comprises a first neural network model and a second neural network model: the first neural network model is a model that has already been trained on samples, while the second neural network model is a model not yet trained on samples; the inputs of the first and second neural network models are connected. The training method comprises the following steps: inputting new samples, whereupon the first neural network model and the second neural network model output a first result and a second result respectively based on the new samples; taking the expected result minus the first result as the target value of the second neural network model; and training the second neural network model based on that target value. The present invention extends the range of samples that the augmented neural network architecture as a whole can recognize, while ensuring effective recognition capability for both the original samples and the new samples.
Description
Technical field
The present invention relates to the field of deep learning and in particular to an augmented neural network architecture, a training method therefor, and a computer-readable storage medium.
Background technology
In deep learning research, the prior art has developed many well-trained convolutional neural network (CNN) models, such as LeNet, AlexNet, VGGNet, GoogLeNet and ResNet, as well as feedforward neural network (FNN) models. These models often require large numbers of training samples and a great deal of time for repeated training, optimization and debugging before they perform well.
For example, each element of a convolutional layer in a CNN is obtained by multiplying a convolution kernel element-wise against the corresponding positions of all (or part) of the previous layer's output matrix (also called a feature map), accumulating the products, adding a bias, and finally passing the sum through a nonlinear activation function, which yields one element of one of this layer's matrices. Typically, the earlier layers are convolutional layers used for feature extraction, while the last few layers are fully connected layers, identical to a traditional feedforward neural network, used for recognizing different features. When a trained CNN is required to be able to recognize new samples, continuing to train the original CNN on those new samples will most likely change the CNN's weights, so that its ability to recognize the original samples declines. The same problem exists for feedforward neural networks.
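A minimal NumPy sketch of the per-element convolutional-layer computation just described (shapes, ReLU activation and names are illustrative assumptions, not taken from the patent): multiply the kernel against the corresponding region of the feature map, accumulate, add a bias, then apply a nonlinear activation.

```python
import numpy as np

def conv2d_single(feature_map, kernel, bias):
    """Valid 2-D convolution producing one output channel, with a ReLU activation."""
    H, W = feature_map.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # multiply kernel with the corresponding region, accumulate, add bias
            region = feature_map[i:i + kh, j:j + kw]
            s = np.sum(region * kernel) + bias
            # nonlinear activation function (here ReLU) gives one output element
            out[i, j] = max(0.0, s)
    return out

fm = np.arange(16, dtype=float).reshape(4, 4)
print(conv2d_single(fm, np.ones((2, 2)), bias=-10.0).shape)  # → (3, 3)
```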
Summary of the invention
According to a first aspect of the invention, an embodiment provides a training method for an augmented neural network architecture. The augmented neural network architecture comprises a first neural network model and a second neural network model; the first neural network model is a model that has already been trained on samples, while the second neural network model is a model not yet trained on samples; the inputs of the first and second neural network models are connected. The method comprises the following steps:
inputting the new samples to be learned by the neural network architecture into the first neural network model and the second neural network model respectively, whereupon the first neural network model and the second neural network model output a first result and a second result respectively based on the new samples;
taking the expected result minus the first result as the target value of the second neural network model;
training the second neural network model based on the target value.
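The three steps above can be sketched with toy linear models standing in for the two networks (all shapes, values and the learning rate are hypothetical; only the untrained second model receives gradient updates, so the pretrained model's weights never change):

```python
import numpy as np

W1 = np.arange(12.0).reshape(3, 4) / 10.0   # frozen: stands in for the pretrained first model
W2 = np.zeros((3, 4))                       # untrained: stands in for the second model

x = np.array([1.0, -0.5, 0.25, 2.0])        # a new sample
y_star = np.array([1.0, 0.0, 0.0])          # expected result for the new sample

for _ in range(200):
    y1 = W1 @ x                             # first result (W1 is never updated)
    y2 = W2 @ x                             # second result
    target2 = y_star - y1                   # target value of the second model
    grad = np.outer(y2 - target2, x)        # gradient of 0.5 * ||y2 - target2||**2
    W2 -= 0.05 * grad                       # train only the second model

print(np.allclose(W1 @ x + W2 @ x, y_star))  # → True: combined output approaches y*
```

Because `W1` is never touched, recognition of the original samples is unaffected; the second model learns only the residual the first model cannot explain.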
According to a second aspect of the invention, an embodiment provides a computer-readable storage medium comprising a program which can be executed by a processor to implement the above method.
According to a third aspect of the invention, an embodiment provides an augmented neural network architecture comprising a first neural network model, the first neural network model being a model already trained on samples, and further comprising a second neural network model obtained by training according to the above method; the first neural network model and the second neural network model have their inputs connected to constitute the augmented neural network architecture.
In the present invention, the first neural network model can recognize the sample types covered by the original sample set. The first neural network model is not trained further, so its weights do not change and its ability to recognize the original samples does not decline. For the new samples, the second neural network model has good learning and recognition capability. The present invention therefore extends the range of samples that the augmented neural network architecture as a whole can recognize, while ensuring effective recognition capability for both the original samples and the new samples.
Brief description of the drawings
Fig. 1 is a schematic diagram of the first convolutional neural network model of embodiment one;
Fig. 2 is a schematic diagram of the augmented convolutional neural network architecture of embodiment one;
Fig. 3 is a schematic diagram of the augmented convolutional neural network architecture of embodiment two.
Detailed description of the embodiments
The present invention is described in further detail below through embodiments in combination with the accompanying drawings, in which similar components in different embodiments use similar reference numerals. In the following embodiments, many details are described in order to facilitate understanding of the application. However, those skilled in the art will readily recognize that some of these features may be omitted in different cases, or may be replaced by other elements, materials or methods. In some cases, certain operations involved in the application are not shown or described in the specification, to avoid the core of the application being obscured by excessive description; for those skilled in the art, a detailed description of these operations is not necessary, since they can fully understand them from the description in the specification together with general technical knowledge in the field.
In addition, the features, operations or characteristics described in this specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the described methods may be reordered or adjusted in ways apparent to those skilled in the art. The various orders in the specification and drawings are therefore only for clearly describing a particular embodiment and do not imply a required order, unless it is otherwise stated that a certain order must be followed.
The ordinal numbers used herein for components, such as "first" and "second", are only used to distinguish the objects described and do not carry any ordinal or technical meaning. "Connection" as used in this application includes both direct and indirect connection, unless otherwise indicated.
Embodiment one:
This embodiment provides an augmented convolutional neural network architecture and a training method therefor. The augmented convolutional neural network architecture comprises a first convolutional neural network model CNN1 and a second convolutional neural network model CNN2 connected in parallel.
The first convolutional neural network model CNN1 is a neural network that has already been trained on the original samples. As shown in Fig. 1, when a sample x0 from the original sample set is input into CNN1, the ideal output result y0 is obtained. CNN1 can therefore recognize well images belonging to the types in the original sample set.
For example, CNN1 may be an AlexNet able to recognize the 1000-plus classes of navigation objects in the original sample set ImageNet. Suppose a new practical requirement arises: in addition to recognizing those 1000-plus classes, the network must also recognize objects of 2000 newly added classes of marine species. If CNN1 is simply trained further on these new objects, CNN1's weights will most likely change, so that its ability to recognize the original 1000-plus classes of navigation objects declines.
To solve the above problem, this embodiment proposes a training method for the augmented convolutional neural network architecture, mainly comprising the following steps:
St1: as shown in Fig. 2, connect the second convolutional neural network model CNN2 in parallel with the first convolutional neural network model CNN1, so that the inputs of CNN1 and CNN2 are connected;
St2: input a new sample x to be learned by the network architecture into the first neural network model and the second neural network model respectively; CNN1 and CNN2 output a first result y1 and a second result y2 respectively based on the new sample x. For example, the new sample x may come from a sample set of the 2000 classes of marine species;
St3: take the expected result y* minus the first result y1 as CNN2's target value y* - y1 for this new sample x;
St4: based on CNN2's target values for the numerous new samples, train CNN2 using the backpropagation (BP) algorithm.
Backpropagation is currently the most commonly used and most effective algorithm for training artificial neural networks. Its main idea is as follows: the training data are fed to the input layer of the network, pass through the hidden layers, and finally reach the output layer, which produces the output; this is the network's forward propagation process. Because the network's output differs from the actual result, the error between the estimate and the actual value is computed and propagated backwards from the output layer through the hidden layers until it reaches the input layer. During backpropagation, the network's parameters are adjusted according to the error. This process is iterated until convergence.
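The forward-then-backward procedure just described can be sketched for a tiny one-hidden-layer network (the shapes, tanh activation, and learning rate are illustrative assumptions, not the patent's configuration):

```python
import numpy as np

def train_step(x, y_true, W1, W2, lr=0.1):
    """One backpropagation step for a one-hidden-layer network; returns the loss."""
    # forward propagation: input layer -> hidden layer -> output layer
    h = np.tanh(W1 @ x)
    y = W2 @ h
    # error between the network's estimate and the actual value
    err = y - y_true
    # backpropagate the error from the output layer toward the input layer
    grad_W2 = np.outer(err, h)
    grad_h = W2.T @ err
    grad_W1 = np.outer(grad_h * (1 - h ** 2), x)   # tanh'(z) = 1 - tanh(z)^2
    # adjust the parameters according to the error
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
    return float(0.5 * err @ err)

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(5, 3))
W2 = rng.normal(scale=0.5, size=(2, 5))
x, y_true = np.array([0.5, -1.0, 0.25]), np.array([1.0, -1.0])

losses = [train_step(x, y_true, W1, W2) for _ in range(100)]
print(losses[-1] < losses[0])  # the error shrinks as iteration proceeds
```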
Specifically, the overall output of the augmented convolutional neural network architecture is y = y1 + y2, and y* is the target value of the augmented convolutional neural network architecture, so y* - y1 is the target value of the second convolutional neural network model. At the beginning, the error e = (y* - y1 - y2)^2 is large; after training with the backpropagation algorithm, the error e gradually decreases, until eventually, whenever a sample x is input, the output y approaches the target value y*.
Thus, as shown in Fig. 2, the first convolutional neural network model CNN1 and the second convolutional neural network model CNN2 in parallel constitute the augmented convolutional neural network architecture. For an input image, CNN1 and CNN2 exhibit their respective recognition strengths for navigation objects and marine species. For example, if the input is an image of a marine species, CNN1 does not assert that it is a marine species (or makes no identification at all), while CNN2's output determines that it is a marine species. That is, when an input test sample belongs to the samples on which the first network was trained, the output y1 is the target value corresponding to that sample and y2 is close to zero; when the input test sample belongs to the newly added samples, the output is the target value corresponding to y2.
In this embodiment, for the common output, the output of CNN1 and the output of CNN2 are required to be vectors of the same dimension, so that y1 + y2 is meaningful, i.e. the two outputs can be added.
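One way to read the same-dimension requirement (a hypothetical encoding; the patent does not spell out the class layout): if the original model scores classes 0-2 and the augmented model scores the new classes 3-4, both emit 5-dimensional vectors and each leaves the other's slots near zero, so y1 + y2 is a well-defined score vector:

```python
import numpy as np

# y1: trained model's scores; confident only on its original classes 0-2
y1 = np.array([0.1, 2.3, 0.2, 0.0, 0.0])
# y2: augmented model's scores; confident only on the new classes 3-4
y2 = np.array([0.0, 0.0, 0.0, 0.1, 1.9])

y = y1 + y2                # common output of the parallel architecture
print(int(np.argmax(y)))   # → 1: an original-class sample wins on y1's slots
```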
The first convolutional neural network model CNN1 and the second convolutional neural network model CNN2 may use models such as LeNet, AlexNet, VGGNet, GoogLeNet, LeNet-5 or ResNet, and CNN1 and CNN2 may use either the same model or different models. For example, in this embodiment, the CNN1 and CNN2 obtained by training are a first AlexNet and a second AlexNet respectively; in other embodiments of the invention, the CNN1 and CNN2 obtained by training may instead be a first AlexNet and a VGGNet respectively.
Embodiment two:
This embodiment differs from embodiment one in that the first convolutional neural network model CNN1 of embodiment one is a single convolutional neural network, whereas this embodiment replaces CNN1 with a first convolutional neural network model framework: a framework of two or more convolutional neural networks connected in parallel. For example, the first convolutional neural network model framework of this embodiment includes a first auxiliary convolutional neural network model CNN1' and a second auxiliary convolutional neural network model CNN1'', where CNN1' can recognize the 1000-plus classes of navigation objects in the original sample set ImageNet and CNN1'' can recognize the 3000-plus classes of everyday objects in the original sample set ImageNet. In order to recognize the newly added 2000 classes of marine species without disturbing the first convolutional neural network model framework, this embodiment adds the second convolutional neural network model CNN2 on the basis of the first framework.
This embodiment uses the same method as embodiment one: the second neural network model is connected in parallel onto the first convolutional neural network model framework to form a new augmented convolutional neural network architecture; a new sample x is input into this architecture; the second neural network model's target value for the new sample is computed as y* - y1; and the second neural network model is trained using the backpropagation algorithm. As shown in Fig. 3, this yields an augmented convolutional neural network architecture consisting of the first convolutional neural network model framework in parallel with the second convolutional neural network model CNN2.
The first auxiliary convolutional neural network model CNN1', the second auxiliary convolutional neural network model CNN1'' and the second convolutional neural network model CNN2 may use either the same model or different models. For example, in this embodiment, the models of CNN1', CNN1'' and CNN2 obtained by training are all AlexNet. In other embodiments of the invention, the models of CNN1', CNN1'' and CNN2 obtained by training are AlexNet, AlexNet and LeNet respectively. The first convolutional neural network model framework may also include more networks; for example, the models of the first, second and third auxiliary convolutional neural network models and of CNN2 obtained by training may be AlexNet, AlexNet, ResNet and LeNet respectively.
Following the approach provided by this embodiment, a complicated sample class can be decomposed into several subclasses (for example, a complex sample class decomposed into navigation articles, everyday articles and marine organisms). The first convolutional neural network model is trained first, to recognize the first subclass; then a second convolutional neural network is augmented on to recognize the second subclass, then a third to recognize the third subclass, and so on, until the convolutional neural network architecture can recognize all the subclasses.
For feedforward neural networks, the augmented feedforward neural network architecture of the invention and its training method are similar in principle to the augmented convolutional neural network architecture and training method of the above embodiments, simply replacing the convolutional networks with feedforward networks; they are therefore not described further.
In other embodiments of the present invention, a computer-readable storage medium is also provided, comprising a program executable by a processor to implement the above method.
With the augmented neural network architecture, training method and computer-readable storage medium of the present invention, the first neural network model can recognize the sample types covered by the original sample set (including the samples in the original sample set and other samples of the same types). The first neural network model is not trained on the new samples, so its weights do not change and its ability to recognize the original samples does not decline. For the new samples (including the samples in the new sample set and other samples of the same types), the second neural network model has good learning and recognition capability. The present invention therefore extends the range of samples that the augmented neural network architecture as a whole can recognize, while ensuring effective recognition capability for both the original samples and the new samples. Moreover, where the first neural network model recognizes the original samples well but is unsuited to recognizing the newly added samples, supplementing it with a second neural network model suited to recognizing the newly added samples yields an augmented neural network architecture with good recognition capability for objects of both the original and the new sample set types.
The specific examples above illustrate the present invention and are only intended to aid its understanding, not to limit it. For those skilled in the art, some simple deductions, variations or substitutions can also be made according to the idea of the present invention.
Claims (10)
1. A training method for an augmented neural network architecture, characterized in that the augmented neural network architecture comprises a first neural network model and a second neural network model, the first neural network model being a model already trained on samples and the second neural network model being a model not trained on samples, and the method comprises:
inputting new samples to be learned by the neural network architecture into the first neural network model and the second neural network model respectively, the first neural network model and the second neural network model outputting a first result and a second result respectively based on the new samples;
taking an expected result minus the first result as a target value of the second neural network model;
training the second neural network model based on the target value.
2. The method of claim 1, characterized in that
the first neural network model and the second neural network model are convolutional neural networks, and the first neural network model and the second neural network model are connected in parallel to constitute an augmented convolutional neural network architecture.
3. The method of claim 1 or 2, characterized in that
the first neural network model is a single neural network;
or the first neural network model is a network architecture formed by connecting more than one neural network in parallel, the network architecture being obtained by training according to the method of claim 1.
4. The method of claim 1 or 2, characterized in that
the second neural network model is trained using the backpropagation algorithm, based on the second neural network model's target values for the numerous new samples.
5. The method of claim 1 or 2, characterized in that
the output of the first neural network model and the output of the second neural network model are vectors of the same dimension.
6. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the method of any one of claims 1-5.
7. An augmented neural network architecture comprising a first neural network model, the first neural network model being a model already trained on samples;
characterized by
further comprising a second neural network model obtained by training according to the method of any one of claims 1-6;
the first neural network model and the second neural network model having their inputs connected to constitute the augmented neural network architecture.
8. The architecture of claim 7, characterized in that
the first neural network model and the second neural network model are convolutional neural networks, connected in parallel to constitute an augmented convolutional neural network architecture; or
the first neural network model and the second neural network model are feedforward neural networks, connected in parallel to constitute an augmented feedforward neural network architecture.
9. The architecture of claim 7 or 8, characterized in that
the first neural network model is a single neural network;
or the first neural network model is a network architecture formed by connecting more than one neural network in parallel, the network architecture being obtained by training according to the method of any one of claims 1-6.
10. The architecture of claim 7 or 8, characterized in that
the first neural network model and the second neural network model use models such as LeNet, AlexNet, VGGNet, GoogLeNet, LeNet-5 or ResNet;
and the first neural network model and the second neural network model use either the same model or different models.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710312916.4A CN107256423A (en) | 2017-05-05 | 2017-05-05 | Augmented neural network architecture, training method therefor, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107256423A true CN107256423A (en) | 2017-10-17 |
Family
ID=60027282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710312916.4A Pending CN107256423A (en) | 2017-05-05 | 2017-05-05 | Augmented neural network architecture, training method therefor, and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107256423A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111492382B (en) * | 2017-11-20 | 2024-05-07 | 皇家飞利浦有限公司 | Training a first neural network model and a second neural network model |
CN111492382A (en) * | 2017-11-20 | 2020-08-04 | 皇家飞利浦有限公司 | Training a first neural network model and a second neural network model |
CN110598504A (en) * | 2018-06-12 | 2019-12-20 | 北京市商汤科技开发有限公司 | Image recognition method and device, electronic equipment and storage medium |
WO2020062262A1 (en) * | 2018-09-30 | 2020-04-02 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for generating a neural network model for image processing |
US11907852B2 (en) | 2018-09-30 | 2024-02-20 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for generating a neural network model for image processing |
US11599796B2 (en) | 2018-09-30 | 2023-03-07 | Shanghai United Imaging Healthcare Co., Ltd. | Systems and methods for generating a neural network model for image processing |
CN109934184A (en) * | 2019-03-19 | 2019-06-25 | 网易(杭州)网络有限公司 | Gesture identification method and device, storage medium, processor |
US11080558B2 (en) | 2019-03-21 | 2021-08-03 | International Business Machines Corporation | System and method of incremental learning for object detection |
WO2020188436A1 (en) * | 2019-03-21 | 2020-09-24 | International Business Machines Corporation | System and method of incremental learning for object detection |
CN110334233A (en) * | 2019-07-12 | 2019-10-15 | 福建省趋普物联科技有限公司 | Advertising image classification method based on depth convolutional neural networks model |
CN112949107A (en) * | 2019-12-10 | 2021-06-11 | 通用汽车环球科技运作有限责任公司 | Composite neural network architecture for stress distribution prediction |
CN111739300B (en) * | 2020-07-21 | 2020-12-11 | 成都恒创新星科技有限公司 | Training method of intelligent parking deep learning network based on FPGA |
CN111739300A (en) * | 2020-07-21 | 2020-10-02 | 成都恒创新星科技有限公司 | Training method of intelligent parking deep learning network based on FPGA |
WO2023184958A1 (en) * | 2022-03-29 | 2023-10-05 | 上海商汤智能科技有限公司 | Target recognition, and training of neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107256423A (en) | Augmented neural network architecture, training method therefor, and computer-readable storage medium | |
WO2022135066A1 (en) | Temporal difference-based hybrid flow-shop scheduling method | |
Hunsberger et al. | Spiking deep networks with LIF neurons | |
Kang et al. | An adaptive PID neural network for complex nonlinear system control | |
Hou et al. | Fruit recognition based on convolution neural network | |
CN105772407A (en) | Waste classification robot based on image recognition technology | |
US20170024661A1 (en) | Methods and systems for implementing deep spiking neural networks | |
CN107506722A (en) | One kind is based on depth sparse convolution neutral net face emotion identification method | |
CN106022465A (en) | Extreme learning machine method for improving artificial bee colony optimization | |
CN106919951A (en) | A kind of Weakly supervised bilinearity deep learning method merged with vision based on click | |
CN104217214A (en) | Configurable convolutional neural network based red green blue-distance (RGB-D) figure behavior identification method | |
CN110427006A (en) | A kind of multi-agent cooperative control system and method for process industry | |
CN103914711B (en) | A kind of improved very fast learning device and its method for classifying modes | |
CN104573621A (en) | Dynamic gesture learning and identifying method based on Chebyshev neural network | |
CN109766995A (en) | The compression method and device of deep neural network | |
CN103679139A (en) | Face recognition method based on particle swarm optimization BP network | |
CN110135582A (en) | Neural metwork training, image processing method and device, storage medium | |
CN104850531A (en) | Method and device for establishing mathematical model | |
CN111931813A (en) | CNN-based width learning classification method | |
CN112288080A (en) | Pulse neural network-oriented adaptive model conversion method and system | |
CN104942015A (en) | Intelligent control method and system for pickling process section in cold-rolling unit | |
Niu et al. | A multi-swarm optimizer based fuzzy modeling approach for dynamic systems processing | |
CN105427241A (en) | Distortion correction method for large-field-of-view display device | |
CN104050505A (en) | Multilayer-perceptron training method based on bee colony algorithm with learning factor | |
CN110245602A (en) | A kind of underwater quiet target identification method based on depth convolution feature |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171017 |