CN109948651A - Pooling method, apparatus, storage medium, and computer equipment for a convolutional neural network - Google Patents
Pooling method, apparatus, storage medium, and computer equipment for a convolutional neural network
- Publication number
- CN109948651A CN109948651A CN201910113187.9A CN201910113187A CN109948651A CN 109948651 A CN109948651 A CN 109948651A CN 201910113187 A CN201910113187 A CN 201910113187A CN 109948651 A CN109948651 A CN 109948651A
- Authority
- CN
- China
- Prior art keywords
- data
- pooling
- matrix
- layer
- neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
Abstract
The present invention provides a pooling method, apparatus, storage medium, and computer equipment for a convolutional neural network. The method comprises: obtaining a training sample for the convolutional neural network and inputting the training sample into a convolutional neural network model; inputting the sample data matrix output by the convolutional layer of the model into the pooling layer of the model, where the sample data matrix is divided into multiple data submatrices; randomly extracting one sample datum from each data submatrix; and generating a pooled matrix from the extracted sample data, the pooled matrix serving as the output of the pooling layer. The above method enlarges the set of possible pooled samples and enriches the input information of the convolutional neural network.
Description
Technical field
The present invention relates to the field of convolutional neural network modelling, and in particular to a pooling method, apparatus, storage medium, and computer equipment for a convolutional neural network.
Background
A convolutional neural network (CNN) is a kind of feedforward neural network. A convolutional neural network uses a large number of artificial neurons, each of which can respond to the surrounding units within part of its coverage area, and it is commonly used in large-scale image processing. The layers a convolutional neural network uses include the convolutional layer and the pooling layer.
The pooling functions in general use at present are max-pooling and mean-pooling. The shortcoming of max-pooling is that each pooling operation loses all information other than the value at the maximum position; the shortcoming of mean-pooling is that it cannot bring out information that lies far from the average but is nonetheless important. The pooling schemes used by current convolutional neural networks therefore reduce the input information of the network, causing the accuracy of the model's output results to decrease.
Summary of the invention
The present invention proposes a pooling method, apparatus, storage medium, and computer equipment for a convolutional neural network, in order to enlarge the set of pooled samples and enrich the input information of the convolutional neural network. The present invention provides the following scheme:
A pooling method for a convolutional neural network, comprising: obtaining a training sample for the convolutional neural network and inputting the training sample into a convolutional neural network model; inputting the sample data matrix output by the convolutional layer of the model into the pooling layer of the model, where the sample data matrix is divided into multiple data submatrices; randomly extracting one sample datum from each data submatrix; and generating a pooled matrix from the extracted sample data, the pooled matrix serving as the output of the pooling layer.
In one embodiment, the multiple data submatrices include data submatrices whose number of rows equals their number of columns. Dividing the sample data matrix into multiple data submatrices in the pooling layer comprises: obtaining the number of rows and the number of columns of the sample data matrix; and, according to those dimensions, dividing the sample data matrix in the pooling layer into multiple data submatrices whose number of rows equals their number of columns.
In one embodiment, the multiple data submatrices include first data submatrices whose number of rows equals their number of columns, and second data submatrices whose numbers of rows and columns differ. Dividing the sample data matrix into multiple data submatrices in the pooling layer comprises: obtaining the number of rows and columns of the sample data matrix; and, according to those dimensions, dividing the sample data matrix in the pooling layer into multiple first data submatrices and multiple second data submatrices.
In one embodiment, the data submatrix is a matrix with multiple rows and multiple columns. Randomly extracting one sample datum from each data submatrix comprises: randomly selecting any one row from the rows of the data submatrix, and then randomly selecting the datum in any one column of that row as the sample datum; or randomly selecting any one column from the columns of the data submatrix, and then randomly selecting the datum in any one row of that column as the sample datum.
In one embodiment, the data submatrix is a matrix with multiple rows and multiple columns, and randomly extracting one sample datum from each data submatrix comprises: randomly selecting any row and any column from the rows and columns of the data submatrix and taking the corresponding datum as the sample datum.
In one embodiment, the training sample is image sample data, and obtaining the training sample for the convolutional neural network and inputting it into the convolutional neural network model comprises: obtaining the image sample data as the training sample of the convolutional neural network model.
In one embodiment, obtaining the image sample data as the training sample of the convolutional neural network model comprises: obtaining picture sample data from the image sample data as the training sample of the model. Inputting the sample data matrix output by the convolutional layer into the pooling layer, dividing the sample data matrix into multiple data submatrices in the pooling layer, randomly extracting one sample datum from each data submatrix, and generating a pooled matrix from the extracted sample data as the output of the pooling layer then comprises: after activation, inputting the picture sample data matrix output by the first convolutional layer of the model into the first pooling layer, where the sample data matrix is divided into multiple data submatrices; randomly extracting one picture sample datum from each data submatrix; generating a pooled matrix from the extracted picture sample data and outputting it from the first pooling layer; then inputting the data output by the first pooling layer into the second convolutional layer for convolution; applying random activation to the data output by the second convolutional layer; and inputting the randomly activated data into the second pooling layer for pooling training, with the pooling training result serving as the output of the model's pooling layers.
A pooling apparatus for a convolutional neural network, comprising: an obtaining module, for obtaining the training sample of the convolutional neural network and inputting the training sample into a convolutional neural network model; a division module, for inputting the sample data matrix output by the convolutional layer of the model into the pooling layer and dividing the sample data matrix into multiple data submatrices in the pooling layer; an extraction module, for randomly extracting one sample datum from each data submatrix; and a generation module, for generating a pooled matrix from the extracted sample data, the pooled matrix serving as the output of the pooling layer.
A storage medium on which a computer program is stored, the computer program being adapted to be loaded by a processor and to execute the pooling method of a convolutional neural network described in any of the above embodiments.
A computer device comprising: one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to carry out the pooling method of a convolutional neural network according to any of the above embodiments.
In the pooling method of a convolutional neural network provided by the above embodiments, the training sample is input into the convolutional neural network model, and after the convolutional layer of the model performs convolution, the sample data matrix output by the convolutional layer is input into the pooling layer of the model. In the pooling layer, the sample data matrix is divided into multiple data submatrices, and one data sample is extracted at random from each data submatrix to generate the pooled matrix. Because each data submatrix of the sample data matrix is sampled at random in the pooling layer, a single M x M sample data matrix can yield many distinct pooled results (for example, 4 x 4 x 4 x 4 = 256 for a 4 x 4 matrix divided into four 2 x 2 submatrices). Compared with the traditional max-pooling or mean-pooling schemes, this enlarges the set of pooled samples and enriches the input information of the convolutional neural network.
Additional aspects and advantages of the present invention will be set forth in part in the following description; they will become apparent from that description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the internal structure of one embodiment of the convolutional neural network model provided by the invention;
Fig. 2 is a schematic diagram of one embodiment of the max-pooling method;
Fig. 3 is a schematic diagram of one embodiment of the mean-pooling method;
Fig. 4 is a flowchart of one embodiment of the pooling method of a convolutional neural network provided by the invention;
Fig. 5 is a flowchart of one embodiment of step S200;
Fig. 6 is a schematic diagram of one embodiment of the random pooling method;
Fig. 7 is a structural block diagram of one embodiment of the pooling apparatus of a convolutional neural network provided by the invention;
Fig. 8 is a structural schematic diagram of one embodiment of the computer equipment provided by the invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference labels denote, throughout, the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they serve only to explain the invention and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said", and "the" used herein may also include the plural forms, and that "first" and "second" are used only to distinguish technical features, not to limit their order, quantity, or the like. It should further be understood that the word "comprising" used in this specification indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in ordinary dictionaries are to be understood as having meanings consistent with their meaning in the context of the prior art and, unless specifically defined as here, are not to be interpreted in an idealized or overly formal sense.
The present invention provides a pooling method for a convolutional neural network, applied in the pooling layer of the network. A guiding explanation of the background of the pooling method of the present invention is first given below:
As shown in Fig. 1, between the input layer and the output layer the convolutional neural network contains two convolutional layers, two pooling layers, and one fully connected hidden layer. In general, the input of a pooling layer comes from the preceding convolutional layer, and pooling mainly provides strong robustness. Besides reducing the number of parameters to be learned, it also prevents overfitting. A reduced parameter count means that the convolutional neural network trains faster, the trained model file is smaller, and inference on forecast samples is faster. The pooling function is therefore of great significance to convolutional neural networks.
In convolutional neural networks, the pooling function generally uses the max-pooling or the mean-pooling method. The max-pooling method is shown in Fig. 2: a 4 x 4 matrix, after pooling with a 2 x 2 max-pooling function, becomes a 2 x 2 matrix. The pooling computation proceeds as follows: split the 4 x 4 matrix into four 2 x 2 submatrices, take the maximum value of each submatrix, and assemble these maxima into a new matrix; this is the max-pooled result. The mean-pooling method is shown in Fig. 3: a 4 x 4 matrix, after pooling with a 2 x 2 mean-pooling function, becomes a 2 x 2 matrix. The computation: split the 4 x 4 matrix into four 2 x 2 submatrices, take the average value of each submatrix, and assemble these averages into a new matrix; this is the mean-pooled result. However, with the max-pooling method each pooling operation loses all information other than the value at the maximum position, and with the mean-pooling method information that lies far from the average but is nonetheless important cannot be brought out.
The present invention provides a pooling method for a convolutional neural network in order to enlarge the set of pooled samples and enrich the input information of the network. In one embodiment, as shown in Fig. 4, the pooling method of the convolutional neural network comprises the following steps:
S100: obtain a training sample for the convolutional neural network, and input the training sample into the convolutional neural network model.
In this embodiment, before performing the data operations of the convolutional neural network structure, the server first obtains a training sample of feature data. The training sample carries the target features that the server computes through the convolutional neural network. The training sample is then input into the convolutional neural network model. The model includes a convolutional layer, a sampling layer, an activation layer, a pooling layer, and a fully connected layer. The convolutional layer is used to extract the spatial features of the input data and may include multiple convolution kernels so as to extract multiple spatial features. The activation layer may use a nonlinear activation function. The pooling layer is used to avoid overfitting during the convolution process. The fully connected layer connects adjacent neurons in the network, and its output can be passed through a softmax computation to obtain the different probability values.
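The softmax computation mentioned above, which turns the fully connected layer's outputs into probability values, can be sketched as follows. This is an illustrative sketch, not part of the patent; the names are ours.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    shifted = [x - max(logits) for x in logits]  # subtract the max for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
# Each value lies in (0, 1), the values sum to 1, and larger scores get larger probabilities.
```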
In one embodiment, the training sample is image sample data.Step S100, comprising: obtain described image sample
Training sample of the notebook data as the convolutional neural networks model.
In this embodiment, image sample data can be for the sample data of model training, model output is then figure
The classification of decent notebook data.Further, the acquisition described image sample data is as the convolutional neural networks model
Training sample, comprising: obtain the picture sample data in described image sample data as the convolutional neural networks model
Training sample.
Here, inputting the sample data matrix output by the convolutional layer of the model into the pooling layer of the model, dividing the sample data matrix into multiple data submatrices in the pooling layer, randomly extracting one sample datum from each data submatrix, and generating a pooled matrix from the extracted sample data as the output of the pooling layer comprises: after activation, inputting the picture sample data matrix output by the first convolutional layer of the model into the first pooling layer, where the sample data matrix is divided into multiple data submatrices; randomly extracting one picture sample datum from each data submatrix; generating a pooled matrix from the extracted picture sample data and outputting it from the first pooling layer; then inputting the data output by the first pooling layer into the second convolutional layer for convolution; applying random activation to the data output by the second convolutional layer; and inputting the randomly activated data into the second pooling layer for pooling training, with the pooling training result serving as the output of the model's pooling layers.
S200: input the sample data matrix output by the convolutional layer of the convolutional neural network model into the pooling layer of the model, and divide the sample data matrix into multiple data submatrices in the pooling layer.
In this embodiment, when the server inputs the training sample into the convolutional neural network model, the training sample is first passed to the convolutional layer of the model. The convolutional layer performs convolution training on the training sample, extracts different spatial features of the input, and outputs a sample data matrix containing multiple features. The feature matrix obtained after the convolutional layer usually has a large dimension; the features are therefore cut into several regions whose maximum or average value is taken, yielding new features of smaller dimension. This is the pooling operation of the convolutional neural network. When the pooling-layer operation is carried out, the sample data matrix output by the convolutional layer is divided into submatrices. Specifically, a kernel of a preset size smaller than the sample data matrix can be set. The kernel window usually has equal height and width, although windows with unequal height and width are not excluded. The kernel divides the sample data matrix into multiple submatrices.
In one embodiment, the multiple data submatrices include data submatrices whose number of rows equals their number of columns. As shown in Fig. 5, dividing the sample data matrix into multiple data submatrices in the pooling layer in step S200 comprises:
S210: obtain the number of rows and the number of columns of the sample data matrix.
S230: according to the number of rows and columns of the sample data matrix, divide the sample data matrix in the pooling layer into multiple data submatrices whose number of rows equals their number of columns.
Convolution kernels in convolutional neural networks are generally small square matrices, and what mainstream deep learning frameworks generally support is likewise a kernel of equal height and width; for example, a 3 x 3 kernel or a 2 x 2 kernel can be used. In this embodiment, the number of rows M and the number of columns N of the sample data matrix are obtained, and the sample data matrix is divided according to M and N into multiple data submatrices whose number of rows equals their number of columns. The values of M and N may themselves be equal.
In other embodiments, the multiple data submatrices include first data submatrices whose number of rows equals their number of columns, and second data submatrices whose numbers of rows and columns differ. That is, when the sample data matrix is divided, multiple square first data submatrices and multiple non-square second data submatrices can be marked off at the same time. Specifically, dividing the sample data matrix into multiple data submatrices in the pooling layer in step S200 comprises: obtaining the number of rows and columns of the sample data matrix, and dividing the sample data matrix in the pooling layer into multiple first data submatrices and multiple second data submatrices according to those dimensions.
S300: randomly extract one sample datum from each data submatrix.
In this embodiment, when the server performs the pooling operation in the convolutional neural network model, it extracts one sample datum at random from each data submatrix, so that a new matrix can be formed from the data extracted from the submatrices. Here, the datum extracted at random each time can be any one data value in the submatrix, and any data value may also be extracted repeatedly.
In one embodiment, the data submatrix is a matrix with multiple rows and multiple columns. Step S300 comprises: randomly selecting any one row from the rows of each data submatrix, and then randomly selecting the datum in any one column of that row as the sample datum; or randomly selecting any one column from the columns of each data submatrix, and then randomly selecting the datum in any one row of that column as the sample datum. In this embodiment, the server first randomly screens out the data of any one row from the multi-row, multi-column data submatrix and then screens any one column from that row; the datum at that column is the sample datum. Alternatively, the server first randomly screens out the data of any one column and then screens any one row from that column; the datum at that row is the sample datum.
In one embodiment, the data submatrix is a matrix with multiple rows and multiple columns. Step S300 comprises: randomly selecting any row and any column from the rows and columns of each data submatrix and taking the corresponding datum as the sample datum. In this embodiment, the server selects a random row value and column value, and the datum at that row and column is the sample datum. For example, if the server randomly selects the row and column pair (4, 3), the datum at position (4, 3) of the data submatrix is the sample datum.
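The two random-extraction strategies of step S300 (row then column, or both indices at once) can be sketched as follows. This is illustrative only; the function names are ours.

```python
import random

def draw_row_then_column(sub):
    """Pick any one row, then the datum in any one column of that row."""
    row = random.choice(sub)
    return random.choice(row)

def draw_row_and_column(sub):
    """Pick a random row index and column index at once."""
    i = random.randrange(len(sub))
    j = random.randrange(len(sub[0]))
    return sub[i][j]

sub = [[1, 2], [3, 4]]
# For a 2x2 submatrix, both strategies return one of the four values,
# each with probability 1/4.
```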
S400: generate a pooled matrix from the extracted sample data, and take the pooled matrix as the output of the pooling layer.
In this embodiment, a new matrix, namely the pooled matrix, can be generated from the sample data extracted from each data submatrix, and the pooled matrix is taken as the output of the pooling layer so as to be passed to the fully connected hidden layer of the convolutional neural network model. In one specific embodiment, as shown in Fig. 6, a 4 x 4 sample data matrix, after being divided into 2 x 2 data submatrices and randomly pooled, becomes a 2 x 2 matrix (one of many possible outcomes). Specifically, the random pooling computation proceeds as follows: split the 4 x 4 matrix into four 2 x 2 submatrices, take one value at random from inside each submatrix, and assemble these values into a new matrix; this is the random pooled matrix. With random pooling, a single 4 x 4 matrix can produce 4 x 4 x 4 x 4 = 256 different pooled results, which is equivalent to expanding the sample 256-fold and greatly enriches the input information of the convolutional neural network.
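The full random pooling computation described above can be sketched as follows. This is an illustrative sketch with our own naming, assuming a square matrix whose side is divisible by the window size.

```python
import random

def random_pool(matrix, k=2):
    """Return a pooled matrix with one randomly chosen value per k x k block (step S400)."""
    n = len(matrix)
    out = []
    for i in range(0, n, k):
        row = []
        for j in range(0, n, k):
            # Flatten one k x k submatrix and draw one value at random.
            block = [matrix[r][c] for r in range(i, i + k) for c in range(j, j + k)]
            row.append(random.choice(block))
        out.append(row)
    return out

m = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]

pooled = random_pool(m)
# A 2x2 matrix; each entry comes from its own 2x2 block, so the 4x4 input
# admits 4**4 == 256 distinct pooled results.
```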
In the pooling method of a convolutional neural network provided by the above embodiments, the training sample is input into the convolutional neural network model, and after the convolutional layer of the model performs convolution, the sample data matrix output by the convolutional layer is input into the pooling layer of the model. In the pooling layer, the sample data matrix is divided into multiple data submatrices, and one data sample is extracted at random from each data submatrix to generate the pooled matrix. Because each data submatrix of the sample data matrix is sampled at random in the pooling layer, a single M x M sample data matrix can yield many distinct pooled results (for example, 4 x 4 x 4 x 4 = 256 for a 4 x 4 matrix divided into four 2 x 2 submatrices). Compared with the traditional max-pooling or mean-pooling schemes, this enlarges the set of pooled samples and enriches the input information of the convolutional neural network.
In one embodiment, the above pooling method of a convolutional neural network can be used for image classification. Specifically, an image classification method using this pooling method comprises: obtaining the image data of a target image; inputting the image data into a convolutional neural network model to obtain the to-be-classified result data of the target image, the model being used to perform image feature category analysis on the image data and to output the to-be-classified result data; wherein the pooling layer in the model divides the image data output by the convolutional layer of the model into multiple data submatrices, randomly extracts one image sample datum from each data submatrix, generates a pooled matrix from the extracted image sample data, and outputs the pooled matrix as the pooling output; and classifying the target image according to the to-be-classified result data.
Further, the convolutional neural network model also includes a first activation unit and a second activation unit; the convolutional layer includes a first convolutional layer and a second convolutional layer; and the pooling layer includes a first pooling layer and a second pooling layer. The first pooling layer and the second pooling layer are each used to divide the image data output by the convolutional layers of the model into multiple data submatrices, randomly extract one image sample datum from each data submatrix, generate a pooled matrix from the extracted image sample data, and output the pooled matrix as the pooling output. The first activation unit is used to apply non-random activation to the image data output by the first convolutional layer and input the resulting values into the first pooling layer. The first pooling layer is used to pool the input image data and output the result to the second convolutional layer. The second convolutional layer is used to convolve the input image data and output the result to the second activation unit. The second activation unit is used to apply random activation to the input image data and output the result to the fully connected layer of the model, the fully connected layer being used to output the to-be-classified result data.
Specifically, a typical convolutional neural network has convolutional layers, activation layers, pooling layers, and a fully connected layer. A three-layer convolutional neural network can be expressed as: input image -> convolution -> activation -> pooling -> convolution -> activation -> pooling -> convolution -> activation -> pooling -> fully connected layer -> output to be classified. If random pooling is applied to this convolutional neural network, the structure of the network becomes: input image -> convolution -> activation -> random pooling -> convolution -> random activation -> pooling -> convolution -> random activation -> pooling -> fully connected layer -> output to be classified.
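The staged structure above can be sketched as a composition of callables. This is only an illustration of the layer ordering, not the patent's implementation: `conv_layers` and `pool` are hypothetical stand-ins for the real layers, and ReLU stands in for the activation units.

```python
import numpy as np

def relu(x):
    """Nonlinear activation applied after each convolution."""
    return np.maximum(x, 0.0)

def pipeline(image, conv_layers, pool):
    """Apply repeated conv -> activation -> pooling stages, then flatten
    the result as input to a fully connected layer. conv_layers and pool
    are assumed callables, not the patent's actual layer implementations."""
    x = image
    for conv in conv_layers:
        x = pool(relu(conv(x)))
    return x.reshape(-1)  # flattened input to the fully connected layer
```

With `pool` set to a random-pooling function, this reproduces the input image -> convolution -> activation -> random pooling -> ... -> fully connected layer ordering described above.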
The present invention also provides a pooling apparatus for a convolutional neural network. In one embodiment, as shown in Fig. 7, the pooling apparatus of the convolutional neural network includes an obtaining module 10, a division module 20, an extraction module 30 and a generation module 40.
The obtaining module 10 is used to obtain training samples of the convolutional neural network and input the training samples into the convolutional neural network model. In this embodiment, before performing the data operations of the convolutional neural network structure, the server first obtains training samples of feature data; the training samples carry the target features that the server will compute through the convolutional neural network. The training samples are then input into the convolutional neural network model. The convolutional neural network model includes a convolutional layer, a sampling layer, an activation layer, a pooling layer and a fully connected layer. The convolutional layer extracts the spatial features of the input data and may include multiple convolution kernels to extract multiple spatial features. The activation layer may use a nonlinear activation function. The pooling layer is used to avoid overfitting in the convolution process. The fully connected layer connects adjacent neurons in the network, and different probability values can be obtained from its output after computation by the softmax function.
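The probability computation at the fully connected layer can be illustrated with the standard softmax function (a generic sketch, not code from the patent):

```python
import numpy as np

def softmax(logits):
    """Convert the fully connected layer's outputs into class probabilities."""
    shifted = logits - np.max(logits)  # shift for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs sums to 1; the largest logit receives the largest probability
```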
The division module 20 is used to input the sample data matrix output by the convolutional layer of the convolutional neural network model into the pooling layer of the model, where the sample data matrix is divided into multiple data submatrices. In this embodiment, when the server inputs a training sample into the convolutional neural network model, the training sample first enters the convolutional layer. The convolutional layer performs convolution training on the training sample, extracts the different spatial features of the input training sample, and outputs a sample data matrix containing multiple features. The feature matrix obtained after the convolutional layer usually has a large dimension. The feature map is therefore cut into several regions, and the maximum or average value of each region is taken to obtain a new feature of smaller dimension; this is the pooling operation of the convolutional neural network. When the pooling layer operation of the convolutional neural network is performed, the sample data matrix output by the convolutional layer is divided into submatrices. Specifically, a kernel of a preset size (smaller than the sample data matrix) can be set. The kernel window typically has equal height and width, although windows with different height and width values are not excluded. The sample data matrix can be divided into multiple submatrices by this kernel.
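For a square window whose size divides both matrix dimensions evenly, the division into submatrices can be sketched in NumPy as follows (an illustration of the described step, not the patent's implementation):

```python
import numpy as np

def split_into_submatrices(m, k):
    """Split an (H, W) matrix into non-overlapping k-by-k submatrices.
    Assumes H and W are both divisible by k."""
    h, w = m.shape
    return (m.reshape(h // k, k, w // k, k)
             .transpose(0, 2, 1, 3)   # group the rows and columns of each block
             .reshape(-1, k, k))

blocks = split_into_submatrices(np.arange(16).reshape(4, 4), 2)
# 4 submatrices of shape (2, 2); the first is [[0, 1], [4, 5]]
```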
The extraction module 30 is used to extract one sample datum at random from each data submatrix. In this embodiment, when the server performs the pooling operation in the convolutional neural network model, one sample datum is extracted at random from each data submatrix, so that a new matrix can be formed from the data extracted from the submatrices. Here, the datum extracted at random each time can be any data value in the submatrix, and each data value can be extracted repeatedly.
The generation module 40 is used to generate a pooling matrix from the extracted sample data and use the pooling matrix as the output of the pooling layer. In this embodiment, a new matrix, namely the pooling matrix, can be generated from the sample data extracted from each data submatrix and used as the output of the pooling layer, to be output to the fully connected hidden layer of the convolutional neural network model.
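The two steps together — dividing the matrix into submatrices, then drawing one element uniformly at random from each — amount to the following sketch. It assumes a square window size `k` that divides both dimensions, and is an illustration rather than the patent's implementation:

```python
import numpy as np

def random_pool(m, k, seed=None):
    """Random pooling: draw one element uniformly at random from each
    non-overlapping k-by-k submatrix of an (H, W) matrix."""
    rng = np.random.default_rng(seed)
    h, w = m.shape
    # Rearrange into (H//k, W//k, k*k): one flattened block per output cell.
    blocks = (m.reshape(h // k, k, w // k, k)
               .transpose(0, 2, 1, 3)
               .reshape(h // k, w // k, k * k))
    # Pick a random index inside every block.
    idx = rng.integers(0, k * k, size=(h // k, w // k))
    return np.take_along_axis(blocks, idx[..., None], axis=2)[..., 0]

pooled = random_pool(np.arange(16, dtype=float).reshape(4, 4), 2, seed=0)
# pooled has shape (2, 2); each entry is drawn from its own 2x2 block
```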
In other embodiments, the modules in the pooling apparatus of the convolutional neural network provided by the present invention are also used to perform the operations of the corresponding steps of the pooling method of the convolutional neural network of the present invention, which are not described in detail again here.
The present invention also provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the pooling method of the convolutional neural network described in any of the above embodiments is implemented. The storage medium can be a memory, for example an internal memory or an external memory, or both. The internal memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory or random access memory. The external memory may include a hard disk, a floppy disk, a ZIP disk, a USB flash drive, a magnetic tape, and the like. The storage media disclosed in the present invention include but are not limited to these types of memory, which are given only as examples and not as limitations.
The present invention also provides a computer device. The computer device includes one or more processors, a memory, and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to perform the pooling method of the convolutional neural network of any of the above embodiments.
Fig. 8 is a structural schematic diagram of a computer device in one embodiment of the present invention. The computer device described in this embodiment can be a server, a personal computer or a network device. As shown in Fig. 8, the device includes a processor 803, a memory 805, an input unit 807, a display unit 809 and other components. Those skilled in the art will understand that the device structure shown in Fig. 8 does not constitute a limitation on all devices; a device may include more or fewer components than shown, or combine certain components. The memory 805 can be used to store the application program 801 and the various functional modules, and the processor 803 runs the application program 801 stored in the memory 805, thereby executing the various functional applications and data processing of the device. The memory can be an internal memory or an external memory, or include both. The internal memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory or random access memory. The external memory may include a hard disk, a floppy disk, a ZIP disk, a USB flash drive, a magnetic tape, and the like. The memories disclosed in the present invention include but are not limited to these types and are given only as examples and not as limitations.
The input unit 807 is used to receive signal input, including keywords input by the user. The input unit 807 may include a touch panel and other input devices. The touch panel collects the user's touch operations on or near it (for example, operations performed by the user with a finger, a stylus or any other suitable object or accessory on or near the touch panel) and drives the corresponding connected device according to a preset program. Other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as play control buttons and switch keys), a trackball, a mouse and a joystick. The display unit 809 can be used to display information input by the user or provided to the user, as well as the various menus of the computer device; it may take the form of a liquid crystal display, organic light-emitting diodes, and the like. The processor 803 is the control center of the computer device; it connects the various parts of the entire computer through various interfaces, and performs various functions and processes data by running or executing the software programs and/or modules stored in the memory 805 and calling the data stored in the memory.
In one embodiment, the device includes one or more processors 803, one or more memories 805, and one or more application programs 801, wherein the one or more application programs 801 are stored in the memory 805 and configured to be executed by the one or more processors 803, and the one or more application programs 801 are configured to perform the pooling method of the convolutional neural network described in the above embodiments.
In addition, the functional units in the embodiments of the present invention can be integrated in one processing module, or each unit can exist physically alone, or two or more units can be integrated in one module. The integrated module can be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be completed by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium may include a memory, a magnetic disk, an optical disc, and the like.
The above are only some embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A pooling method for a convolutional neural network, characterized by comprising:
obtaining training samples of the convolutional neural network, and inputting the training samples into a convolutional neural network model;
inputting a sample data matrix output by a convolutional layer of the convolutional neural network model into a pooling layer of the convolutional neural network model, and dividing the sample data matrix into multiple data submatrices in the pooling layer;
extracting one sample datum at random from each data submatrix; and
generating a pooling matrix from the extracted sample data, and using the pooling matrix as the output of the pooling layer.
2. The method according to claim 1, wherein the multiple data submatrices include data submatrices whose number of rows is equal to their number of columns, and dividing the sample data matrix into multiple data submatrices in the pooling layer comprises:
obtaining the number of rows and the number of columns of the sample data matrix; and
dividing the sample data matrix, according to its number of rows and columns, into multiple data submatrices with equal numbers of rows and columns in the pooling layer.
3. The method according to claim 1, wherein the multiple data submatrices include first data submatrices whose number of rows is equal to their number of columns and second data submatrices whose number of rows differs from their number of columns, and dividing the sample data matrix into multiple data submatrices in the pooling layer comprises:
obtaining the number of rows and the number of columns of the sample data matrix; and
dividing the sample data matrix, according to its number of rows and columns, into multiple first data submatrices and multiple second data submatrices in the pooling layer.
4. The method according to claim 1, wherein each data submatrix is a matrix of multiple rows and multiple columns, and extracting one sample datum at random from each data submatrix comprises:
randomly selecting any row of data from the rows of each data submatrix, and randomly selecting from that row the datum in any column as the sample datum; or
randomly selecting any column of data from the columns of each data submatrix, and randomly selecting from that column the datum in any row as the sample datum.
5. The method according to claim 1, wherein each data submatrix is a matrix of multiple rows and multiple columns, and extracting one sample datum at random from each data submatrix comprises:
randomly selecting, from the rows and columns of each data submatrix, the datum at any row and any column as the sample datum.
6. The method according to claim 1, wherein the training samples are image sample data, and obtaining the training samples of the convolutional neural network and inputting the training samples into the convolutional neural network model comprises:
obtaining the image sample data as the training samples of the convolutional neural network model.
7. The method according to claim 6, wherein obtaining the image sample data as the training samples of the convolutional neural network model comprises: obtaining picture sample data in the image sample data as the training samples of the convolutional neural network model;
and wherein inputting the sample data matrix output by the convolutional layer of the convolutional neural network model into the pooling layer of the convolutional neural network model, dividing the sample data matrix into multiple data submatrices in the pooling layer, extracting one sample datum at random from each data submatrix, generating a pooling matrix from the extracted sample data, and using the pooling matrix as the output of the pooling layer, comprises:
after activation, inputting the picture sample data matrix output by the first convolutional layer of the convolutional neural network model into the first pooling layer of the convolutional neural network model; dividing the sample data matrix into multiple data submatrices in the first pooling layer; extracting one picture sample datum at random from each data submatrix; generating a pooling matrix from the extracted picture sample data and outputting it from the first pooling layer; then inputting the data output by the first pooling layer into the second convolutional layer for convolution; applying random activation to the data output by the second convolutional layer; inputting the randomly activated data into the second pooling layer for pooling training; and using the pooling training result as the output of the system pooling layer.
8. A pooling apparatus for a convolutional neural network, characterized by comprising:
an obtaining module, configured to obtain training samples of the convolutional neural network and input the training samples into a convolutional neural network model;
a division module, configured to input the sample data matrix output by the convolutional layer of the convolutional neural network model into the pooling layer of the convolutional neural network model, and divide the sample data matrix into multiple data submatrices in the pooling layer;
an extraction module, configured to extract one sample datum at random from each data submatrix; and
a generation module, configured to generate a pooling matrix from the extracted sample data and use the pooling matrix as the output of the pooling layer.
9. A storage medium on which a computer program is stored, characterized in that the computer program is adapted to be loaded by a processor to execute the pooling method of the convolutional neural network according to any one of claims 1 to 7.
10. A computer device, characterized in that it comprises:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to perform the pooling method of the convolutional neural network according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910113187.9A CN109948651A (en) | 2019-02-13 | 2019-02-13 | Pond method, apparatus and storage medium, the computer equipment of convolutional neural networks |
PCT/CN2019/117863 WO2020164271A1 (en) | 2019-02-13 | 2019-11-13 | Pooling method and device for convolutional neural network, storage medium and computer device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910113187.9A CN109948651A (en) | 2019-02-13 | 2019-02-13 | Pond method, apparatus and storage medium, the computer equipment of convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109948651A true CN109948651A (en) | 2019-06-28 |
Family
ID=67007583
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910113187.9A Pending CN109948651A (en) | 2019-02-13 | 2019-02-13 | Pond method, apparatus and storage medium, the computer equipment of convolutional neural networks |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109948651A (en) |
WO (1) | WO2020164271A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020164271A1 (en) * | 2019-02-13 | 2020-08-20 | 平安科技(深圳)有限公司 | Pooling method and device for convolutional neural network, storage medium and computer device |
CN111882565A (en) * | 2020-07-28 | 2020-11-03 | 深圳市雨滴科技有限公司 | Image binarization method, device, equipment and storage medium |
CN114124973A (en) * | 2021-09-27 | 2022-03-01 | 烽火通信科技股份有限公司 | Multi-cloud-scene-oriented mirror image synchronization method and device |
CN115985465A (en) * | 2023-03-21 | 2023-04-18 | 天津医科大学总医院 | Electromyographic signal feature extraction method, device and equipment based on time sequence and storage medium |
CN117251715A (en) * | 2023-11-17 | 2023-12-19 | 华芯程(杭州)科技有限公司 | Layout measurement area screening method and device, electronic equipment and storage medium |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112052627A (en) * | 2020-08-21 | 2020-12-08 | 海南星瞰信息咨询中心(有限合伙) | Method, device, medium and equipment for estimating near-surface ozone space distribution |
CN112099737B (en) * | 2020-09-29 | 2023-09-01 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for storing data |
CN116167148B (en) * | 2023-04-26 | 2023-07-07 | 青岛理工大学 | Urban neighborhood form optimization method and system based on local microclimate |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104408435A (en) * | 2014-12-05 | 2015-03-11 | 浙江大学 | Face identification method based on random pooling convolutional neural network |
CN107346448B (en) * | 2016-05-06 | 2021-12-21 | 富士通株式会社 | Deep neural network-based recognition device, training device and method |
CN107871136A (en) * | 2017-03-22 | 2018-04-03 | 中山大学 | The image-recognizing method of convolutional neural networks based on openness random pool |
CN107506722A (en) * | 2017-08-18 | 2017-12-22 | 中国地质大学(武汉) | One kind is based on depth sparse convolution neutral net face emotion identification method |
CN109948651A (en) * | 2019-02-13 | 2019-06-28 | 平安科技(深圳)有限公司 | Pond method, apparatus and storage medium, the computer equipment of convolutional neural networks |
Also Published As
Publication number | Publication date |
---|---|
WO2020164271A1 (en) | 2020-08-20 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||