CN109002882A - Convolutional neural network model construction method, system, device and readable storage medium - Google Patents
Convolutional neural network model construction method, system, device and readable storage medium
- Publication number
- CN109002882A (application CN201810717777.8A)
- Authority
- CN
- China
- Prior art keywords
- hidden nodes
- layer
- degree
- incidence coefficient
- current sampling layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
This application discloses a convolutional neural network model construction method, system, device and computer-readable storage medium. S1: using the reference sequence of the target convolutional neural network model and the comparison sequence of each feature map of the current sampling layer, obtain the incidence coefficient of each feature map of the current sampling layer; S2: using the incidence coefficient of each feature map, obtain the degree of association of each hidden node of the current sampling layer; S3: using a threshold, count the number of hidden nodes whose degree of association is greater than or equal to the threshold; S4: use that number as the hidden-node count of the convolutional layer preceding the current sampling layer; S5: iterate S1 to S4 until the current sampling layer is the first sampling layer. Based on grey relational analysis, the application computes the degree of association of each hidden node of the current sampling layer, screens out the hidden nodes whose degree of association falls below the threshold, and retains the higher-precision hidden nodes, yielding high-precision convolutional layers and thus an efficient convolutional neural network model.
Description
Technical field
The present invention relates to the field of artificial neural networks, and in particular to a convolutional neural network model construction method, system, device and computer-readable storage medium.
Background art
In a convolutional neural network model, the original image to be processed is input into the system, and local features of the image are extracted using the activation functions of the neurons; each neuron in the network is connected to a local receptive field of the preceding layer. The neurons in each hidden layer of the convolutional neural network map the extracted local image features onto a plane.
Most recognition methods rely on features: before an image can be classified, its features must be extracted. Feature extraction is the difficult part of image recognition, and a convolutional neural network can learn features implicitly from the training image data without explicit extraction, which is one of the important advantages of using convolutional neural network algorithms for image recognition.
The number of hidden nodes is one of the most important factors affecting the structure of a convolutional neural network, and the hidden feature maps determine the network's hidden-node count. With fewer hidden feature maps, the network obtains less information, so precision is lower and convergence is slower; conversely, with more hidden feature maps, accuracy is higher, but the network's training time becomes too long and over-training can occur, actually degrading image-recognition performance.
In research on and applications of convolutional neural networks, the hidden-node count is usually chosen by experience: network structures with different numbers of hidden nodes are constructed and trained, and the structure with the best final performance is selected. The structures that can be explored this way are limited, so it is difficult to find a suitable network structure.
Therefore, a method that can construct efficient convolutional neural network models is needed.
Summary of the invention
In view of this, the object of the present invention is to provide a convolutional neural network model construction method, system, device and computer-readable storage medium that can construct efficient convolutional neural network models. The concrete scheme is as follows:
A convolutional neural network model construction method, comprising:
S1: using the reference sequence of the target convolutional neural network model and the comparison sequence of each feature map of the current sampling layer, obtain the incidence coefficient of each feature map of the current sampling layer;
S2: using the incidence coefficient of each feature map, obtain the degree of association of each hidden node of the current sampling layer;
S3: using a threshold, count the number of hidden nodes whose degree of association is greater than or equal to the threshold;
S4: use that number as the hidden-node count of the convolutional layer preceding the current sampling layer;
S5: iterate S1 to S4 until the current sampling layer is the first sampling layer;
wherein the reference sequence is the output of the target convolutional neural network model, and the comparison sequence is the output of each hidden node of the current sampling layer; the first iteration takes the last sampling layer in the target convolutional neural network model as the current sampling layer, and iteration then proceeds layer by layer in the back-propagation direction to the first sampling layer.
Optionally, the process of obtaining the incidence coefficient of each feature map of the current sampling layer using the reference sequence of the target convolutional neural network model and the comparison sequence of the current sampling layer comprises:
substituting the reference sequence and the comparison sequence into the incidence-coefficient formula to obtain the incidence coefficient of each feature map of the current sampling layer; wherein
the incidence-coefficient formula is:
ξi(k) = (min_i min_k |yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|) / (|yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|)
where ρ (ρ ∈ [0,1]) is the resolution ratio in the incidence-coefficient calculation, yk denotes the output of the target convolutional neural network model, i.e. the reference sequence, ti(k) denotes the output of each hidden node of the current sampling layer, i.e. the comparison sequence, and ξi(k) denotes the incidence coefficient of the k-th feature map in the i-th hidden node.
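The incidence-coefficient calculation described here is the standard grey relational coefficient; a minimal numpy sketch (function and variable names are illustrative, not taken from the patent) computes it for every hidden node and sample at once:

```python
import numpy as np

def incidence_coefficients(y, t, rho=0.5):
    """Grey relational (incidence) coefficients xi_i(k).

    y   : reference sequence, shape (N,) -- model output over N sample images
    t   : comparison sequences, shape (M, N) -- outputs of the M hidden nodes
    rho : resolution ratio, rho in [0, 1]
    """
    diff = np.abs(y[None, :] - t)          # |yk - ti(k)| for every i, k
    d_min, d_max = diff.min(), diff.max()  # global min/max over both i and k
    return (d_min + rho * d_max) / (diff + rho * d_max)

# toy data: 3 hidden nodes observed over 4 sample images
y = np.array([0.9, 0.1, 0.8, 0.3])
t = np.array([[0.8, 0.2, 0.7, 0.4],
              [0.1, 0.9, 0.2, 0.9],
              [0.9, 0.1, 0.8, 0.3]])   # node 3 matches the reference exactly
xi = incidence_coefficients(y, t)
print(xi.shape)  # (3, 4): one coefficient per node per sample
```

A node that tracks the reference sequence exactly receives the maximum coefficient of 1 for every sample, so larger coefficients indicate stronger association with the model output.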
Optionally, the process of obtaining the degree of association of each hidden node of the current sampling layer using the incidence coefficient of each feature map comprises:
substituting the incidence coefficient of each feature map into the degree-of-association formula to obtain the degree of association of each hidden node; wherein
the degree-of-association formula is:
γi = (1/N) · Σ_{k=1}^{N} ξi(k)
where γi is the degree of association of the i-th hidden node and N denotes the number of sample images.
Optionally, the threshold is the mean of the degrees of association of the hidden nodes of the current sampling layer obtained in the first iteration.
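Steps S2 through S4 then condense each node's coefficients into one degree of association and count the nodes that clear the threshold; a small sketch with hypothetical names, assuming (as the first iteration here does) that the threshold defaults to the mean degree of association:

```python
import numpy as np

def degree_of_association(xi):
    """gamma_i = (1/N) * sum_k xi_i(k): average over the N sample images (S2)."""
    return xi.mean(axis=1)

def kept_node_count(gamma, threshold=None):
    """S3/S4: number of hidden nodes whose degree meets the threshold.

    With no explicit threshold, use the mean of the degrees themselves,
    as the first iteration described in the text does.
    """
    if threshold is None:
        threshold = gamma.mean()
    return int((gamma >= threshold).sum())

xi = np.array([[1.0, 0.9, 0.8, 0.7],    # incidence coefficients, 3 nodes x 4 samples
               [0.4, 0.3, 0.5, 0.4],
               [0.9, 0.8, 0.9, 1.0]])
gamma = degree_of_association(xi)       # [0.85, 0.4, 0.9]
print(kept_node_count(gamma))           # 2: the mean (~0.717) screens out node 2
```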
The invention also discloses a convolutional neural network model construction system, comprising:
an incidence-coefficient computing module, for obtaining the incidence coefficient of each feature map of the current sampling layer using the reference sequence of the target convolutional neural network model and the comparison sequence of each feature map of the current sampling layer;
a degree-of-association computing module, for obtaining the degree of association of each hidden node of the current sampling layer using the incidence coefficient of each feature map;
a quantity-statistics module, for counting, using a threshold, the number of hidden nodes whose degree of association is greater than or equal to the threshold;
an optimization module, for using that number as the hidden-node count of the convolutional layer preceding the current sampling layer;
an iteration module, for re-invoking the incidence-coefficient computing module until the current sampling layer is the first sampling layer;
wherein the reference sequence is the output of the target convolutional neural network model, and the comparison sequence is the output of each hidden node of the current sampling layer; the first iteration takes the last sampling layer in the target convolutional neural network model as the current sampling layer, and iteration then proceeds layer by layer in the back-propagation direction to the first sampling layer.
Optionally, the incidence-coefficient computing module is specifically configured to substitute the reference sequence and the comparison sequence into the incidence-coefficient formula to obtain the incidence coefficient of each feature map of the current sampling layer; wherein
the incidence-coefficient formula is:
ξi(k) = (min_i min_k |yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|) / (|yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|)
where ρ (ρ ∈ [0,1]) is the resolution ratio in the incidence-coefficient calculation, yk denotes the output of the target convolutional neural network model, i.e. the reference sequence, ti(k) denotes the output of each hidden node of the current sampling layer, i.e. the comparison sequence, and ξi(k) denotes the incidence coefficient of the k-th feature map in the i-th hidden node.
Optionally, the degree-of-association computing module is specifically configured to substitute the incidence coefficient of each feature map into the degree-of-association formula to obtain the degree of association of each hidden node; wherein
the degree-of-association formula is:
γi = (1/N) · Σ_{k=1}^{N} ξi(k)
where γi is the degree of association of the i-th hidden node and N denotes the number of sample images.
The invention also discloses a convolutional neural network model construction device, comprising:
a memory, for storing a convolutional neural network model construction program;
a processor, for executing the convolutional neural network model construction program to implement the aforementioned convolutional neural network model construction method.
The invention also discloses a computer-readable storage medium on which a convolutional neural network model construction program is stored; when the program is executed by a processor, the steps of the aforementioned convolutional neural network model construction method are implemented.
In the present invention, the convolutional neural network model construction method comprises: S1: using the reference sequence of the target convolutional neural network model and the comparison sequence of each feature map of the current sampling layer, obtain the incidence coefficient of each feature map of the current sampling layer; S2: using the incidence coefficient of each feature map, obtain the degree of association of each hidden node of the current sampling layer; S3: using a threshold, count the number of hidden nodes whose degree of association is greater than or equal to the threshold; S4: use that number as the hidden-node count of the convolutional layer preceding the current sampling layer; S5: iterate S1 to S4 until the current sampling layer is the first sampling layer; wherein the reference sequence is the output of the target convolutional neural network model and the comparison sequence is the output of each hidden node of the current sampling layer; the first iteration takes the last sampling layer in the target convolutional neural network model as the current sampling layer, and iteration proceeds layer by layer in the back-propagation direction to the first sampling layer.
The present invention uses grey relational analysis with the reference sequence of the target convolutional neural network model and the comparison sequence of each feature map of the current sampling layer to obtain the incidence coefficient of each feature map of the current sampling layer, and from it the degree of association of each hidden node, which corresponds one-to-one with a feature map. Hidden nodes whose degree of association is below the threshold, i.e. those that have little influence on the result and are more likely to introduce error, are screened out; the higher-precision hidden nodes are retained, yielding high-precision convolutional layers and thus an efficient convolutional neural network model.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of a convolutional neural network model construction method disclosed by an embodiment of the present invention;
Fig. 2 is a structural diagram of a convolutional neural network model disclosed by an embodiment of the present invention;
Fig. 3 is a structural diagram of a convolutional neural network model construction system disclosed by an embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the invention discloses a convolutional neural network model construction method, comprising:
S1: using the reference sequence of the target convolutional neural network model and the comparison sequence of the current sampling layer, obtain the incidence coefficient of each feature map of the current sampling layer;
S2: using the incidence coefficient of each feature map, obtain the degree of association of each hidden node of the current sampling layer;
S3: using a threshold, count the number of hidden nodes whose degree of association is greater than or equal to the threshold;
S4: use that number as the hidden-node count of the convolutional layer preceding the current sampling layer;
S5: iterate S1 to S4 until the current sampling layer is the first sampling layer;
wherein the reference sequence is the output of the target convolutional neural network model, and the comparison sequence is the output of each hidden node of the current sampling layer; the first iteration takes the last sampling layer in the target convolutional neural network model as the current sampling layer, and iteration then proceeds layer by layer in the back-propagation direction to the first sampling layer.
Specifically, as shown in Fig. 2, the target convolutional neural network model is a typical pre-established convolutional neural network model, comprising in order an input layer R1, several alternating convolutional layers (C1 and C3) and sampling layers (S2 and S4), a fully connected layer Q1 and an output layer O1. In the first iteration over the target convolutional neural network model, the current sampling layer is the last sampling layer in the model, i.e. the one connected to the fully connected layer; in each subsequent iteration, the current sampling layer moves to the sampling layer preceding it, following the back-propagation direction, until the first sampling layer is reached. For example, if the target convolutional neural network model has three sampling layers (first through third), then in the first iteration the current sampling layer is the third (last) sampling layer, in the second iteration it is the second sampling layer, and in the third iteration it is the first sampling layer; since the current sampling layer is then the first sampling layer, iteration ends.
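The three-sampling-layer example can be illustrated with a trivial sketch of the iteration order (the layer names are purely illustrative):

```python
sampling_layers = ["first", "second", "third"]     # forward (input-to-output) order
iteration_order = list(reversed(sampling_layers))  # back-propagation order
for n, layer in enumerate(iteration_order, start=1):
    print(f"iteration {n}: current sampling layer = {layer}")
# iteration 1: current sampling layer = third
# iteration 2: current sampling layer = second
# iteration 3: current sampling layer = first
```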
Specifically, grey relational analysis is applied to the reference sequence and the comparison sequence to obtain the degree of correlation between the two under the different image samples, i.e. the incidence coefficient. For N sample images to be processed, each hidden node has N outputs, so the incidence coefficient of each feature map comprises N coefficients; for convenience of analysis, these are condensed into a single value, the degree of association of the hidden node. Using a threshold preset according to practical experience or obtained by calculating the degrees of association, the hidden nodes whose degree of association is below the threshold are dropped, the number of hidden nodes whose degree of association is greater than or equal to the threshold is counted, and that number becomes the hidden-node count of the convolutional layer preceding the current sampling layer. Before dropping hidden nodes, the degrees of association can be sorted in descending order, which makes it easier to filter out the nodes below the threshold. Likewise, by repeating S1 to S4, the hidden-node counts of the convolutional layers in the target convolutional neural network model can be optimized one by one, yielding a convolutional neural network model with better runtime performance while precision is preserved.
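The S1 to S4 loop over the sampling layers, iterated in the back-propagation order described above, can be sketched as follows; `prune_widths` and the random stand-in layer outputs are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def gra_coefficients(y, t, rho=0.5):
    # grey relational coefficients of each hidden-node output against y
    diff = np.abs(y[None, :] - t)
    return (diff.min() + rho * diff.max()) / (diff + rho * diff.max())

def prune_widths(y, sampling_layer_outputs):
    """Iterate S1-S4 from the last sampling layer back to the first (S5).

    sampling_layer_outputs: one (M_j, N) array per sampling layer, in
    forward order.  Returns the hidden-node count chosen for the
    convolutional layer preceding each sampling layer, in forward order.
    """
    widths = []
    threshold = None
    for t in reversed(sampling_layer_outputs):       # back-propagation order
        gamma = gra_coefficients(y, t).mean(axis=1)  # S1 + S2
        if threshold is None:
            threshold = gamma.mean()                 # first-iteration mean degree
        widths.append(int((gamma >= threshold).sum()))  # S3 + S4
    return widths[::-1]

rng = np.random.default_rng(0)
y = rng.random(16)                                     # reference sequence, N=16
layers = [rng.random((10, 16)), rng.random((30, 16))]  # S2: 10 nodes, S4: 30 nodes
widths = prune_widths(y, layers)
print(widths)
```

Each returned count is at most the layer's original width, so the sketch only ever shrinks the convolutional layers, mirroring the screening step.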
As it can be seen that the embodiment of the present invention using gray relative analysis method, target convolution neural network model reference sequences and
The comparison sequence of each characteristic pattern of present sample layer, obtains the incidence coefficient of each characteristic pattern of present sample layer, so obtain with
The degree of association of characteristic pattern each concealed nodes correspondingly, the concealed nodes by the degree of association lower than threshold value screen out, to protect
The higher concealed nodes of precision are stayed, the high convolutional layer of precision is obtained, to obtain efficient convolutional neural networks model.
An embodiment of the invention discloses a specific convolutional neural network model construction method; relative to the previous embodiment, this embodiment further explains and optimizes the technical solution. Specifically:
Specifically, the process in S1 of obtaining the incidence coefficient of each feature map of the current sampling layer using the reference sequence of the target convolutional neural network model and the comparison sequence of the current sampling layer can comprise:
substituting the reference sequence and the comparison sequence into the incidence-coefficient formula to obtain the incidence coefficient of each feature map of the current sampling layer; wherein
the incidence-coefficient formula is:
ξi(k) = (min_i min_k |yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|) / (|yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|)
where ρ (ρ ∈ [0,1]) is the resolution ratio in the incidence-coefficient calculation, yk denotes the output of the target convolutional neural network model, i.e. the reference sequence, ti(k) denotes the output of each hidden node of the current sampling layer, i.e. the comparison sequence, and ξi(k) denotes the incidence coefficient of the k-th feature map in the i-th hidden node.
The process in S2 of obtaining the degree of association of each hidden node of the current sampling layer using the incidence coefficient of each feature map can comprise:
substituting the incidence coefficient of each feature map into the degree-of-association formula to obtain the degree of association of each hidden node; wherein
the degree-of-association formula is:
γi = (1/N) · Σ_{k=1}^{N} ξi(k)
where γi is the degree of association of the i-th hidden node and N denotes the number of sample images.
Specifically, since each hidden node has N outputs, the incidence coefficient of each feature map comprises N coefficients; by averaging these coefficients, they are condensed into a single value, which gives the degree of association of each hidden node.
It should be noted that the threshold can be the mean of the degrees of association of the hidden nodes of the current sampling layer obtained in the first iteration, i.e. the mean degree of association of the hidden nodes of the last sampling layer.
In addition, an embodiment of the invention also discloses an application scenario of the convolutional neural network model construction method. Specifically:
As shown in Fig. 2, the initial structure of the target convolutional neural network model is set to 1-10-30-10: one input layer R1, a first convolutional layer C1 with 10 hidden nodes, a second convolutional layer C3 with 30 hidden nodes, and a fully connected layer Q1 with 10 hidden nodes. The resolution ratio is set to ρ = 0.5. The convolutional neural network is trained in advance, and the ranking of the degrees of association of the C3 and S4 layers is obtained, as shown in Table 1 (S4-layer hidden-node degrees of association).
Table 1
Specifically, with the threshold set to ε = 0.514, the first 18 feature maps of Table 1 are selected, and the structure of the target convolutional neural network model is adjusted to 1-10-18-10. The adjusted convolutional neural network model is then trained; the training result is shown in Table 2 (adjusted S4-layer hidden-node degrees of association).
Table 2
Serial number | Degree of association | Serial number | Degree of association | Serial number | Degree of association |
1 | 0.6464 | 2 | 0.6328 | 3 | 0.6252 |
4 | 0.6147 | 5 | 0.6128 | 6 | 0.6057 |
7 | 0.6044 | 8 | 0.5794 | 9 | 0.5769 |
10 | 0.5747 | 11 | 0.5671 | 12 | 0.5641 |
13 | 0.5601 | 14 | 0.5526 | 15 | 0.5468 |
16 | 0.5344 | 17 | 0.5275 | 18 | 0.5214 |
As shown in Table 2, all 18 selected feature maps have a relatively large influence on the result, i.e. the adjustment of the network structure is reasonable.
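As a quick check, the degrees of association in Table 2 can be compared against the stated threshold ε = 0.514; all 18 clear it, consistent with retaining the adjusted 1-10-18-10 structure:

```python
# degrees of association from Table 2 (S4 layer after adjustment)
table2 = [0.6464, 0.6328, 0.6252, 0.6147, 0.6128, 0.6057,
          0.6044, 0.5794, 0.5769, 0.5747, 0.5671, 0.5641,
          0.5601, 0.5526, 0.5468, 0.5344, 0.5275, 0.5214]
eps = 0.514
kept = sum(g >= eps for g in table2)
print(kept)  # 18: every retained feature map still exceeds the threshold
```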
Specifically, after the feature-map counts of C3 and S4 have been determined, the feature-map counts of the C1 and S2 layers are determined in the same way; the initial ranking of the C1/S2 degrees of association is shown in Table 3 (S2-layer hidden-node degrees of association).
Table 3
Serial number | Degree of association | Serial number | Degree of association | Serial number | Degree of association |
1 | 0.6665 | 2 | 0.6650 | 3 | 0.6634 |
4 | 0.6439 | 5 | 0.6418 | 6 | 0.6391 |
7 | 0.6340 | 8 | 0.6339 | 9 | 0.6228 |
10 | 0.6045 |
Specifically, with a preselected threshold ε = 0.630, the first 8 feature maps of Table 3 are chosen, and the structure of the target convolutional neural network model is adjusted to 1-8-18-10. The adjusted target convolutional neural network model is then trained; the training result is shown in Table 4 (adjusted S2-layer hidden-node degrees of association).
Table 4
As shown in Table 4, when the S2 layer has 8 feature maps, each feature map has a large influence on the result; the final adjusted structure of the target convolutional neural network model is therefore 1-8-18-10.
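The same check on the Table 3 degrees with the threshold ε = 0.630 keeps exactly the first 8 feature maps, matching the final 1-8-18-10 structure:

```python
# initial degrees of association from Table 3 (S2 layer, 10 hidden nodes)
table3 = [0.6665, 0.6650, 0.6634, 0.6439, 0.6418,
          0.6391, 0.6340, 0.6339, 0.6228, 0.6045]
eps = 0.630
kept = sum(g >= eps for g in table3)
print(f"1-{kept}-18-10")  # 1-8-18-10: the adjusted network structure
```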
Correspondingly, an embodiment of the invention also discloses a convolutional neural network model construction system, shown in Fig. 3, comprising:
an incidence-coefficient computing module 1, for obtaining the incidence coefficient of each feature map of the current sampling layer using the reference sequence of the target convolutional neural network model and the comparison sequence of each feature map of the current sampling layer;
a degree-of-association computing module 2, for obtaining the degree of association of each hidden node of the current sampling layer using the incidence coefficient of each feature map;
a quantity-statistics module 3, for counting, using a threshold, the number of hidden nodes whose degree of association is greater than or equal to the threshold;
an optimization module 4, for using that number as the hidden-node count of the convolutional layer preceding the current sampling layer;
an iteration module 5, for re-invoking the incidence-coefficient computing module 1 until the current sampling layer is the first sampling layer;
wherein the reference sequence is the output of the target convolutional neural network model, and the comparison sequence is the output of each hidden node of the current sampling layer; the first iteration takes the last sampling layer in the target convolutional neural network model as the current sampling layer, and iteration then proceeds layer by layer in the back-propagation direction to the first sampling layer.
As it can be seen that the embodiment of the present invention using gray relative analysis method, target convolution neural network model reference sequences and
The comparison sequence of each characteristic pattern of present sample layer, obtains the incidence coefficient of each characteristic pattern of present sample layer, so obtain with
The degree of association of characteristic pattern each concealed nodes correspondingly, the concealed nodes by the degree of association lower than threshold value screen out, to protect
The higher concealed nodes of precision are stayed, the high convolutional layer of precision is obtained, to obtain efficient convolutional neural networks model.
In this embodiment of the present invention, the incidence-coefficient computing module 1 is specifically configured to substitute the reference sequence and the comparison sequence into the incidence-coefficient formula to obtain the incidence coefficient of each feature map of the current sampling layer; wherein
the incidence-coefficient formula is:
ξi(k) = (min_i min_k |yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|) / (|yk − ti(k)| + ρ·max_i max_k |yk − ti(k)|)
where ρ (ρ ∈ [0,1]) is the resolution ratio in the incidence-coefficient calculation, yk denotes the output of the target convolutional neural network model, i.e. the reference sequence, ti(k) denotes the output of each hidden node of the current sampling layer, i.e. the comparison sequence, and ξi(k) denotes the incidence coefficient of the k-th feature map in the i-th hidden node.
The degree-of-association computing module 2 is specifically configured to substitute the incidence coefficient of each feature map into the degree-of-association formula to obtain the degree of association of each hidden node; wherein
the degree-of-association formula is:
γi = (1/N) · Σ_{k=1}^{N} ξi(k)
where γi is the degree of association of the i-th hidden node and N denotes the number of sample images.
Here, the threshold is the mean of the degrees of association of the hidden nodes of the current sampling layer obtained in the first iteration.
In addition, an embodiment of the invention also discloses a convolutional neural network model construction device, comprising:
a memory, for storing a convolutional neural network model construction program;
a processor, for executing the convolutional neural network model construction program to implement the aforementioned convolutional neural network model construction method.
For the specific steps of the aforementioned convolutional neural network model construction method, reference may be made to the corresponding content disclosed in the previous embodiments, which is not repeated here.
In addition, an embodiment of the invention also discloses a computer-readable storage medium on which a convolutional neural network model construction program is stored; when the program is executed by a processor, the steps of the aforementioned convolutional neural network model construction method are implemented.
For the specific steps of the aforementioned convolutional neural network model construction method, reference may be made to the corresponding content disclosed in the previous embodiments, which is not repeated here.
Finally, it should be noted that relational terms such as first and second are used herein merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
Those skilled in the art will further appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The convolutional neural network model construction method, system, device and computer-readable storage medium provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (9)
1. A convolutional neural network model construction method, characterized by comprising:
S1: obtaining the incidence coefficient of each characteristic map of a current sampling layer by using the reference sequence of a target convolutional neural network model and the comparison sequences of the characteristic maps of the current sampling layer;
S2: obtaining the degree of association of each concealed node of the current sampling layer by using the incidence coefficients of the characteristic maps;
S3: counting, by using a threshold value, the number of concealed nodes whose degree of association is greater than or equal to the threshold value;
S4: taking the counted number as the concealed-node count of the convolutional layer preceding the current sampling layer;
S5: iterating S1 to S4 until the current sampling layer is the first sampling layer;
wherein the reference sequence is the output of the target convolutional neural network model, and the comparison sequences are the outputs of the concealed nodes of the current sampling layer; in the first iteration the current sampling layer is the last sampling layer of the target convolutional neural network model, and the iteration then proceeds layer by layer along the back-propagation direction to the first sampling layer.
2. The convolutional neural network model construction method according to claim 1, wherein the process of obtaining the incidence coefficient of each characteristic map of the current sampling layer by using the reference sequence of the target convolutional neural network model and the comparison sequences of the current sampling layer comprises:
substituting the reference sequence and the comparison sequences into an incidence coefficient calculation formula to obtain the incidence coefficient of each characteristic map of the current sampling layer; wherein
the incidence coefficient calculation formula is:
ξ_i(k) = (min_i min_k |y(k) − t_i(k)| + ρ · max_i max_k |y(k) − t_i(k)|) / (|y(k) − t_i(k)| + ρ · max_i max_k |y(k) − t_i(k)|)
where ρ (ρ ∈ [0, 1]) is the resolution coefficient in the incidence coefficient calculation, y(k) denotes the output of the target convolutional neural network model, i.e. the reference sequence, t_i(k) denotes the output of each concealed node of the current sampling layer, i.e. the comparison sequence, and ξ_i(k) denotes the incidence coefficient of the k-th characteristic map in the i-th concealed node.
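The grey incidence coefficient can be evaluated numerically as below; this is a minimal sketch assuming the standard grey relational form, with the minima and maxima taken over all nodes i and all maps k (the function name and array shapes are hypothetical):

```python
import numpy as np

def incidence_coefficients(y, t, rho=0.5):
    """xi_i(k) for a reference sequence y (shape (N,)) and comparison
    sequences t (shape (M, N)); rho is the resolution coefficient."""
    diff = np.abs(y[None, :] - t)  # |y(k) - t_i(k)| for every node and map
    d_min = diff.min()             # min over i and k
    d_max = diff.max()             # max over i and k
    return (d_min + rho * d_max) / (diff + rho * d_max)
```

A comparison sequence identical to the reference yields coefficients of 1; larger deviations push the coefficient toward ρ/(1 + ρ).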
3. The convolutional neural network model construction method according to claim 2, wherein the process of obtaining the degree of association of each concealed node of the current sampling layer by using the incidence coefficients of the characteristic maps comprises:
substituting the incidence coefficients of the characteristic maps into a degree-of-association calculation formula to obtain the degree of association of each concealed node; wherein
the degree-of-association calculation formula is:
γ_i = (1/N) · Σ_{k=1}^{N} ξ_i(k)
where γ_i is the degree of association of the i-th concealed node and N denotes the number of sample images.
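Given a matrix of incidence coefficients, the degree of association of each concealed node is simply the per-node mean over the N sample images; a one-line sketch (shape convention assumed as (M, N)):

```python
import numpy as np

def degree_of_association(xi):
    """gamma_i = (1/N) * sum_k xi_i(k), for xi of shape (M, N)."""
    return xi.mean(axis=1)
```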
4. The convolutional neural network model construction method according to any one of claims 1 to 3, wherein the threshold value is the average of the degrees of association of the concealed nodes of the current sampling layer obtained in the first iteration.
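Under this claim the threshold is just the mean degree of association from the first pass; a trivial sketch (function name assumed):

```python
import numpy as np

def first_pass_threshold(gamma_first):
    """Threshold = average degree of association from the first iteration."""
    return float(np.mean(gamma_first))
```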
5. A convolutional neural network model construction system, characterized by comprising:
an incidence coefficient computing module, configured to obtain the incidence coefficient of each characteristic map of a current sampling layer by using the reference sequence of a target convolutional neural network model and the comparison sequences of the characteristic maps of the current sampling layer;
a degree-of-association computing module, configured to obtain the degree of association of each concealed node of the current sampling layer by using the incidence coefficients of the characteristic maps;
a quantity statistics module, configured to count, by using a threshold value, the number of concealed nodes whose degree of association is greater than or equal to the threshold value;
an optimization module, configured to take the counted number as the concealed-node count of the convolutional layer preceding the current sampling layer;
an iteration module, configured to re-invoke the incidence coefficient computing module until the current sampling layer is the first sampling layer;
wherein the reference sequence is the output of the target convolutional neural network model, and the comparison sequences are the outputs of the concealed nodes of the current sampling layer; in the first iteration the current sampling layer is the last sampling layer of the target convolutional neural network model, and the iteration then proceeds layer by layer along the back-propagation direction to the first sampling layer.
6. The convolutional neural network model construction system according to claim 5, wherein the incidence coefficient computing module is specifically configured to substitute the reference sequence and the comparison sequences into an incidence coefficient calculation formula to obtain the incidence coefficient of each characteristic map of the current sampling layer; wherein
the incidence coefficient calculation formula is:
ξ_i(k) = (min_i min_k |y(k) − t_i(k)| + ρ · max_i max_k |y(k) − t_i(k)|) / (|y(k) − t_i(k)| + ρ · max_i max_k |y(k) − t_i(k)|)
where ρ (ρ ∈ [0, 1]) is the resolution coefficient in the incidence coefficient calculation, y(k) denotes the output of the target convolutional neural network model, i.e. the reference sequence, t_i(k) denotes the output of each concealed node of the current sampling layer, i.e. the comparison sequence, and ξ_i(k) denotes the incidence coefficient of the k-th characteristic map in the i-th concealed node.
7. The convolutional neural network model construction system according to claim 6, wherein the degree-of-association computing module is specifically configured to substitute the incidence coefficients of the characteristic maps into a degree-of-association calculation formula to obtain the degree of association of each concealed node; wherein
the degree-of-association calculation formula is:
γ_i = (1/N) · Σ_{k=1}^{N} ξ_i(k)
where γ_i is the degree of association of the i-th concealed node and N denotes the number of sample images.
8. A convolutional neural network model construction device, characterized by comprising:
a memory, configured to store a convolutional neural network model construction program;
a processor, configured to execute the convolutional neural network model construction program, so as to implement the convolutional neural network model construction method according to any one of claims 1 to 4.
9. A computer-readable storage medium, characterized in that a convolutional neural network model construction program is stored on the computer-readable storage medium, and when the convolutional neural network model construction program is executed by a processor, the steps of the convolutional neural network model construction method according to any one of claims 1 to 4 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810717777.8A CN109002882A (en) | 2018-07-03 | 2018-07-03 | Convolutional neural networks model building method, system, device and readable storage medium storing program for executing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109002882A true CN109002882A (en) | 2018-12-14 |
Family
ID=64599742
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810717777.8A Pending CN109002882A (en) | 2018-07-03 | 2018-07-03 | Convolutional neural networks model building method, system, device and readable storage medium storing program for executing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109002882A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140079297A1 (en) * | 2012-09-17 | 2014-03-20 | Saied Tadayon | Application of Z-Webs and Z-factors to Analytics, Search Engine, Learning, Recognition, Natural Language, and Other Utilities |
CN105528638A (en) * | 2016-01-22 | 2016-04-27 | 沈阳工业大学 | Method for grey correlation analysis method to determine number of hidden layer characteristic graphs of convolutional neural network |
CN107563430A (en) * | 2017-08-28 | 2018-01-09 | 昆明理工大学 | A kind of convolutional neural networks algorithm optimization method based on sparse autocoder and gray scale correlation fractal dimension |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181214 |