CN106372656A - Method and device for obtaining a deep one-shot learning model, and image recognition method and device - Google Patents
- Publication number
- CN106372656A CN106372656A CN201610761364.0A CN201610761364A CN106372656A CN 106372656 A CN106372656 A CN 106372656A CN 201610761364 A CN201610761364 A CN 201610761364A CN 106372656 A CN106372656 A CN 106372656A
- Authority
- CN
- China
- Prior art keywords
- dimensionality reduction
- image
- characteristic pattern
- image set
- pattern image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method and device for obtaining a deep one-shot learning model, and an image recognition method and device. The method for obtaining the deep one-shot learning model comprises the following steps: inputting each target image of a preset data set, which contains a small number of target images and background images, into a preset CNN model; selecting the output images of any hidden layer of the preset CNN model as the feature-map set of that target image; determining a dimensionality-reduction matrix from the background images of the preset data set by principal component analysis (PCA), and reducing the dimensionality of the feature-map set to generate a reduced feature-map set; inputting each reduced feature-map set into a preset Bayesian learning model, recognizing the target image corresponding to that reduced feature-map set, and thereby constructing the deep one-shot learning model; and training the deep one-shot learning model on the preset data set until the model converges, obtaining the converged deep one-shot learning model. When the converged deep one-shot learning model is used to recognize images, the recognition rate is higher.
Description
Technical field
The present invention relates to the technical field of image recognition, and more particularly to a method for obtaining a deep one-shot learning model, and to an image recognition method and device.
Background technology
In the prior art, when an image to be recognized needs to be recognized and classified, various trained network learning models can be used. For example, a convolutional neural network (CNN) model can be used to recognize the image, a Bayesian learning model can be used, and other network learning models can likewise be applied.
When a CNN model is used to recognize the image, the CNN model must first be built and trained with a large amount of labeled data; the trained CNN model can then recognize the image, accurately determine whether it contains the object to be recognized, and classify the image accordingly. In practice, however, large amounts of labeled data are usually difficult to obtain, and only a small number of labeled samples are available. If a CNN model is still chosen in that case, the small number of labeled training samples causes the resulting CNN model to overfit to some extent, which leads to a very low recognition rate and a poor recognition effect.
Therefore, a CNN model obtained by training on a small number of samples suffers from a degree of overfitting: its accuracy in image recognition is low and its applicability is poor.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a method for obtaining a deep one-shot learning model, and an image recognition method and device, so as to solve the problem that a CNN model trained on a small number of samples overfits to some extent, yielding low image recognition accuracy and poor applicability.
To solve the above technical problem, the embodiments of the invention disclose the following technical solutions:
In a first aspect, an embodiment of the present invention provides a method for obtaining a deep one-shot learning model, the method comprising:
for all target images in a preset data set containing a small number of target images and background images, inputting each target image into a preset convolutional neural network (CNN) model, and selecting the output images of any hidden layer of the preset CNN model as the feature-map set of that target image, wherein a target image is an image labeled as containing the object to be recognized, and a background image is an image labeled as not containing the object to be recognized;
determining a dimensionality-reduction matrix from the background images of the preset data set by principal component analysis (PCA);
reducing the dimensionality of the feature-map set of each target image in the preset data set with the dimensionality-reduction matrix, to generate the reduced feature-map set of that target image;
inputting each of the reduced feature-map sets into a preset Bayesian learning model, recognizing the target image corresponding to that reduced feature-map set, and constructing the deep one-shot learning model;
training the deep one-shot learning model on the preset data set until the model converges, thereby obtaining the converged deep one-shot learning model.
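The steps of the first aspect can be sketched end to end. This is a minimal toy illustration, not the patent's implementation: the "CNN hidden layer" is mocked as a fixed bank of random linear filters with a ReLU, the PCA matrix is computed by SVD over background features only (as the patent specifies), and all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the patent's CNN: one feature map per filter,
# sigma(image @ w_i + b_i) with sigma = ReLU. Any layer's output would do.
def cnn_hidden_layer(image, filters, biases):
    return np.stack([np.maximum(image @ w + b, 0.0)
                     for w, b in zip(filters, biases)])

def pca_matrix(background_feats, dim):
    # Dimensionality-reduction matrix from background images only (per the patent)
    x = background_feats - background_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:dim].T                      # columns = principal directions

# Toy data: 3 "target" and 20 "background" images of size 8x8
targets = rng.normal(size=(3, 8, 8))
backgrounds = rng.normal(size=(20, 8, 8))
filters = rng.normal(size=(4, 8, 8))       # k = 4 "convolution kernels"
biases = rng.normal(size=(4, 8))

# Step 1: feature-map set i_n for each target image, flattened to a row
feats = np.array([cnn_hidden_layer(img, filters, biases).ravel()
                  for img in targets])

# Step 2: PCA reduction matrix w_c from background features
bg_feats = np.array([cnn_hidden_layer(img, filters, biases).ravel()
                     for img in backgrounds])
w_c = pca_matrix(bg_feats, dim=5)

# Step 3: reduced feature sets i'_n = i_n . w_c
reduced = feats @ w_c
print(reduced.shape)                       # (3, 5)
```

The reduced rows would then be fed to the Bayesian learning model of the later steps; the training-until-convergence step is omitted here.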
With reference to the first aspect, in a first possible implementation of the first aspect, selecting the output images of any hidden layer of the preset CNN model as the feature-map set of the target image specifically includes:
selecting the output images of the n-th hidden layer of the preset CNN model as the feature-map set of the target image, and representing the feature-map set as a feature matrix;
computing each matrix element $i_n^i$ of the feature matrix with an activation function according to the following equation, thereby determining the feature-map set:
$$i_n^i = \sigma\left(i_0 * w_n^i + b_n^i\right), \quad 1 \le i \le k$$
wherein $i_n$ denotes the output images of the n-th hidden layer of the preset CNN model, i.e. the feature-map set; $i_n^i$ denotes the i-th matrix element of the feature matrix corresponding to the feature-map set, i.e. the i-th feature image in $i_n$; $i_0$ denotes the target image input into the preset CNN model; $w_n^i$ denotes the i-th filter of the n-th hidden layer of the preset CNN model; $b_n^i$ denotes the i-th bias matrix of the n-th hidden layer; $k$ denotes the number of convolution kernels in the n-th hidden layer, i.e. the number of feature images in $i_n$; $(*)$ denotes the convolution operation; and $\sigma(\cdot)$ denotes the activation function.
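The per-map equation $i_n^i = \sigma(i_0 * w_n^i + b_n^i)$ can be demonstrated directly. The sketch below is illustrative only: it uses a hand-rolled 'valid' 2-D convolution, a sigmoid as the (unspecified) activation $\sigma$, and a scalar bias per map instead of a bias matrix; all sizes are assumptions.

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Plain 'valid' 2-D convolution (kernel flipped); enough for the sketch
    kh, kw = kernel.shape
    h, w = img.shape
    flipped = kernel[::-1, ::-1]
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * flipped)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
i0 = rng.normal(size=(6, 6))              # input target image i_0
k = 3                                      # number of kernels in layer n
filters = rng.normal(size=(k, 3, 3))      # w_n^i
biases = rng.normal(size=(k,))            # b_n^i (scalar per map here)

# i_n^i = sigma(i0 * w_n^i + b_n^i), 1 <= i <= k
i_n = np.stack([sigmoid(conv2d_valid(i0, filters[i]) + biases[i])
                for i in range(k)])
print(i_n.shape)                           # (3, 4, 4): k feature images
```

The stack of k activation maps is exactly the feature-map set $i_n$ the patent describes.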
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, reducing the dimensionality of the feature-map set of each target image in the preset data set with the dimensionality-reduction matrix to generate the reduced feature-map set of that target image specifically includes:
reducing the dimensionality of the feature-map set of each target image in the preset data set with the dimensionality-reduction matrix according to the following formula, to generate the reduced feature-map set of that target image:
$$i'_n = i_n \cdot w_c$$
wherein $i'_n$ denotes the reduced feature-map set, $i_n$ denotes the feature-map set, and $w_c$ denotes the dimensionality-reduction matrix.
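The reduction $i'_n = i_n \cdot w_c$ is a single matrix product once $w_c$ is known. The sketch below shows one standard way PCA could yield $w_c$, via the eigenvectors of the background-feature covariance; the data and the target dimension $d$ are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical background feature maps, flattened to rows; axis 0 is given
# a much larger variance so that PCA has a clear leading direction.
bg = rng.normal(size=(50, 12)) * np.array([5.0] + [1.0] * 11)

def pca_reduction_matrix(x, d):
    # w_c: top-d eigenvectors of the background covariance matrix
    xc = x - x.mean(axis=0)
    cov = xc.T @ xc / (len(x) - 1)
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues ascending
    return vecs[:, ::-1][:, :d]               # columns = leading components

w_c = pca_reduction_matrix(bg, d=3)

# i'_n = i_n . w_c applied to one feature-map set (a row vector here)
i_n = rng.normal(size=(1, 12))
i_n_reduced = i_n @ w_c
print(i_n_reduced.shape)                      # (1, 3)
```

Because the columns of $w_c$ are orthonormal eigenvectors, the product simply projects each feature-map set onto the directions of greatest background variance.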
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, inputting each of the reduced feature-map sets into the preset Bayesian learning model, recognizing the corresponding target image, and constructing the deep one-shot learning model specifically includes:
inputting each of the reduced feature-map sets into the following preset Bayesian learning model, and recognizing the target image corresponding to that reduced feature-map set:
$$R = \frac{p(c \mid i'_n)}{p(c_{bg} \mid i'_n)} = \frac{p(i'_n \mid \theta)\, p(c)}{p(i'_n \mid \theta_{bg})\, p(c_{bg})}$$
wherein $p(c \mid i'_n)$ denotes the posterior probability that, given the reduced feature-map set $i'_n$, the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a target image; $c$ denotes the event that the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a target image; $p(c_{bg} \mid i'_n)$ denotes the posterior probability that, given $i'_n$, the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a background image; and $c_{bg}$ denotes the event that the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a background image;
$p(i'_n \mid \theta)$ is the likelihood function of the target image: the probability density function of $i'_n$ given that the target image corresponding to $i'_n$ is a target image and follows some distribution with parameters $\theta$; $p(c)$ denotes the prior probability that the target image corresponding to $i'_n$ contains the object to be recognized; $p(i'_n \mid \theta_{bg})$ is the likelihood function of the background image: the probability density function of $i'_n$ given that the target image corresponding to $i'_n$ is a background image and follows some distribution with parameters $\theta_{bg}$; and $p(c_{bg})$ denotes the prior probability that the target image corresponding to $i'_n$ does not contain the object to be recognized;
selecting, by the hypothesis method and according to the following equation, the reduced feature images among all reduced feature-map sets $i'_n$ that characterize the features of the object to be recognized, to determine the likelihood function of the target image:
$$p(i'_n \mid \theta) = \sum_{h \in H} p(i'_n, h \mid \theta) = \sum_{h \in H} p(i'_n \mid h, \theta)\, p(h \mid \theta)$$
wherein $h$ denotes the index vector used in the hypothesis method, whose components take values in $[1, k]$, and $H$ denotes the set of all $h$;
selecting, by the null-hypothesis method and according to the following equation, the reduced feature images among all reduced feature-map sets $i'_n$ that characterize the background features, to determine the likelihood function of the background image:
$$p(i'_n \mid \theta_{bg}) = p(i'_n, h_0 \mid \theta_{bg}) = p(i'_n \mid h_0, \theta_{bg})\, p(h_0 \mid \theta_{bg})$$
wherein $h_0$ denotes the index vector used in the null-hypothesis method;
transforming $p(i'_n \mid h, \theta)$ in the likelihood function of the target image into the following first Gaussian distribution function:
$$p(i'_n \mid h, \theta) = \prod_{q=1}^{Q} G\left(i'_n(h_q) \mid \mu, \gamma\right) \prod_{j \notin h} G\left(i'_n(j) \mid \mu_{bg}, \gamma_{bg}\right)$$
wherein $G$ denotes the Gaussian distribution; $h_q$ denotes the q-th feature index in the index vector $h$; $Q$ denotes the length of $h$, i.e. the number of feature indexes in $h$; $i'_n(h_q)$ denotes the $h_q$-th reduced feature image selected from $i'_n$ according to $h$; $i'_n(j)$ denotes a reduced feature image in $i'_n$ not selected by $h$; and $\mu, \gamma, \mu_{bg}, \gamma_{bg}$ denote the parameters of the first Gaussian distribution function;
transforming $p(i'_n \mid h_0, \theta_{bg})$ in the likelihood function of the background image into the following second Gaussian distribution function:
$$p(i'_n \mid h_0, \theta_{bg}) = \prod_{j} G\left(i'_n(j) \mid \mu_{bg}, \gamma_{bg}\right)$$
constructing the deep one-shot learning model from the likelihood function of the target image, the likelihood function of the background image, the preset Bayesian learning model, and the first and second Gaussian distribution functions.
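The Bayesian decision above can be demonstrated with a toy numeric sketch of the ratio $R = p(c \mid i'_n)/p(c_{bg} \mid i'_n)$. Everything here is invented for illustration: the densities are 1-D Gaussians, the index vector $h$ is fixed rather than summed over all hypotheses, and the parameter values stand in for the converged $\theta$ and $\theta_{bg}$.

```python
import math

def gauss(x, mu, var):
    # 1-D Gaussian density G(x | mu, var)
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def likelihood(features, selected, mu, var, mu_bg, var_bg):
    # Features picked by the index vector h follow the object Gaussian,
    # the rest follow the background Gaussian (the "first Gaussian" form).
    p = 1.0
    for j, f in enumerate(features):
        p *= gauss(f, mu, var) if j in selected else gauss(f, mu_bg, var_bg)
    return p

features = [2.1, 1.9, 0.1, -0.2]      # a reduced feature set i'_n
h = {0, 1}                            # hypothesis: features 0,1 show the object
p_c, p_cbg = 0.5, 0.5                 # equal priors (assumed)

num = likelihood(features, h, mu=2.0, var=0.25, mu_bg=0.0, var_bg=1.0) * p_c
den = likelihood(features, set(), mu=2.0, var=0.25, mu_bg=0.0, var_bg=1.0) * p_cbg
R = num / den
print(R > 1)   # True: the object hypothesis explains the features better
```

With $R > 1$ the model would declare that the image contains the object; the null hypothesis $h_0$ corresponds to the empty selection in the denominator.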
In a second aspect, the invention discloses a method for recognizing images with the converged deep one-shot learning model described above, the method comprising:
inputting the image to be recognized into the preset CNN model, and selecting the output images of the n-th hidden layer of the preset CNN model as the to-be-recognized feature-map set of that image;
reducing the dimensionality of the to-be-recognized feature-map set with the dimensionality-reduction matrix, to generate the to-be-recognized reduced feature-map set of the image;
inputting the to-be-recognized reduced feature-map set into the converged deep one-shot learning model, and recognizing the image, thereby determining whether the image to be recognized contains the object to be recognized.
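The three recognition steps of the second aspect — extract features, reduce them, decide — can be sketched as a single pipeline. All components are mocks for illustration: the "converged" extractor, reduction matrix, and Gaussian parameters are invented, and the decision uses a log posterior ratio with equal priors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mocked "converged" components (all hypothetical): a fixed feature
# extractor standing in for hidden layer n, a stored reduction matrix
# w_c, and Gaussian parameters assumed to come from training.
W = rng.normal(size=(16, 6))
w_c = np.linalg.qr(rng.normal(size=(6, 2)))[0]   # orthonormal w_c

mu, var = 1.0, 0.5          # object feature distribution (assumed)
mu_bg, var_bg = 0.0, 1.0    # background feature distribution (assumed)

def log_gauss(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

def decide(reduced):
    # Log of the posterior ratio with equal priors: > 0 means "object"
    log_r = (log_gauss(reduced, mu, var).sum()
             - log_gauss(reduced, mu_bg, var_bg).sum())
    return bool(log_r > 0)

def recognize(image_vec):
    feat = np.tanh(image_vec @ W)   # feature-map set of the unseen image
    reduced = feat @ w_c            # i'_n = i_n . w_c
    return decide(reduced)

print(decide(np.array([1.0, 1.0])))   # True: features match the object model
print(decide(np.array([0.0, 0.0])))   # False: features match the background
```

`recognize` accepts any 16-vector "image" and returns the yes/no decision the patent's image recognition module would produce.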
In a third aspect, the invention discloses a device for obtaining a deep one-shot learning model, the device comprising:
a feature-map-set acquisition module, configured to, for all target images in a preset data set containing a small number of target images and background images, input each target image into a preset convolutional neural network (CNN) model and select the output images of any hidden layer of the preset CNN model as the feature-map set of that target image, wherein a target image is an image labeled as containing the object to be recognized, and a background image is an image labeled as not containing the object to be recognized;
a dimensionality-reduction-matrix determination module, configured to determine a dimensionality-reduction matrix from the background images of the preset data set by principal component analysis (PCA);
a first dimensionality-reduction computation module, configured to reduce the dimensionality of the feature-map set of each target image in the preset data set with the dimensionality-reduction matrix, to generate the reduced feature-map set of that target image;
a deep one-shot learning model construction module, configured to input each of the reduced feature-map sets into a preset Bayesian learning model, recognize the target image corresponding to that reduced feature-map set, and construct the deep one-shot learning model;
a model training module, configured to train the deep one-shot learning model on the preset data set until the model converges, thereby obtaining the converged deep one-shot learning model.
With reference to the third aspect, in a first possible implementation of the third aspect, the feature-map-set acquisition module is specifically configured to:
for all target images in the preset data set containing a small number of target images and background images, input each target image into the preset convolutional neural network (CNN) model, select the output images of the n-th hidden layer of the preset CNN model as the feature-map set of that target image, and represent the feature-map set as a feature matrix;
compute each matrix element $i_n^i$ of the feature matrix with an activation function according to the following equation, thereby determining the feature-map set:
$$i_n^i = \sigma\left(i_0 * w_n^i + b_n^i\right), \quad 1 \le i \le k$$
wherein $i_n$ denotes the output images of the n-th hidden layer of the preset CNN model, i.e. the feature-map set; $i_n^i$ denotes the i-th matrix element of the feature matrix corresponding to the feature-map set, i.e. the i-th feature image in $i_n$; $i_0$ denotes the target image input into the preset CNN model; $w_n^i$ denotes the i-th filter of the n-th hidden layer of the preset CNN model; $b_n^i$ denotes the i-th bias matrix of the n-th hidden layer; $k$ denotes the number of convolution kernels in the n-th hidden layer, i.e. the number of feature images in $i_n$; $(*)$ denotes the convolution operation; and $\sigma(\cdot)$ denotes the activation function.
With reference to the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the first dimensionality-reduction computation module is specifically configured to:
reduce the dimensionality of the feature-map set of each target image in the preset data set with the dimensionality-reduction matrix according to the following formula, to generate the reduced feature-map set of that target image:
$$i'_n = i_n \cdot w_c$$
wherein $i'_n$ denotes the reduced feature-map set, $i_n$ denotes the feature-map set, and $w_c$ denotes the dimensionality-reduction matrix.
With reference to the second possible implementation of the third aspect, in a third possible implementation of the third aspect, the deep one-shot learning model construction module is specifically configured to:
input each of the reduced feature-map sets into the following preset Bayesian learning model, and recognize the target image corresponding to that reduced feature-map set:
$$R = \frac{p(c \mid i'_n)}{p(c_{bg} \mid i'_n)} = \frac{p(i'_n \mid \theta)\, p(c)}{p(i'_n \mid \theta_{bg})\, p(c_{bg})}$$
wherein $p(c \mid i'_n)$ denotes the posterior probability that, given the reduced feature-map set $i'_n$, the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a target image; $c$ denotes the event that the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a target image; $p(c_{bg} \mid i'_n)$ denotes the posterior probability that, given $i'_n$, the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a background image; and $c_{bg}$ denotes the event that the preset Bayesian learning model recognizes the target image corresponding to $i'_n$ as a background image;
$p(i'_n \mid \theta)$ is the likelihood function of the target image: the probability density function of $i'_n$ given that the target image corresponding to $i'_n$ is a target image and follows some distribution with parameters $\theta$; $p(c)$ denotes the prior probability that the target image corresponding to $i'_n$ contains the object to be recognized; $p(i'_n \mid \theta_{bg})$ is the likelihood function of the background image: the probability density function of $i'_n$ given that the target image corresponding to $i'_n$ is a background image and follows some distribution with parameters $\theta_{bg}$; and $p(c_{bg})$ denotes the prior probability that the target image corresponding to $i'_n$ does not contain the object to be recognized;
select, by the hypothesis method and according to the following equation, the reduced feature images among all reduced feature-map sets $i'_n$ that characterize the features of the object to be recognized, to determine the likelihood function of the target image:
$$p(i'_n \mid \theta) = \sum_{h \in H} p(i'_n, h \mid \theta) = \sum_{h \in H} p(i'_n \mid h, \theta)\, p(h \mid \theta)$$
wherein $h$ denotes the index vector used in the hypothesis method, whose components take values in $[1, k]$, and $H$ denotes the set of all $h$;
select, by the null-hypothesis method and according to the following equation, the reduced feature images among all reduced feature-map sets $i'_n$ that characterize the background features, to determine the likelihood function of the background image:
$$p(i'_n \mid \theta_{bg}) = p(i'_n, h_0 \mid \theta_{bg}) = p(i'_n \mid h_0, \theta_{bg})\, p(h_0 \mid \theta_{bg})$$
wherein $h_0$ denotes the index vector used in the null-hypothesis method;
transform $p(i'_n \mid h, \theta)$ in the likelihood function of the target image into the following first Gaussian distribution function:
$$p(i'_n \mid h, \theta) = \prod_{q=1}^{Q} G\left(i'_n(h_q) \mid \mu, \gamma\right) \prod_{j \notin h} G\left(i'_n(j) \mid \mu_{bg}, \gamma_{bg}\right)$$
wherein $G$ denotes the Gaussian distribution; $h_q$ denotes the q-th feature index in the index vector $h$; $Q$ denotes the length of $h$, i.e. the number of feature indexes in $h$; $i'_n(h_q)$ denotes the $h_q$-th reduced feature image selected from $i'_n$ according to $h$; $i'_n(j)$ denotes a reduced feature image in $i'_n$ not selected by $h$; and $\mu, \gamma, \mu_{bg}, \gamma_{bg}$ denote the parameters of the first Gaussian distribution function;
transform $p(i'_n \mid h_0, \theta_{bg})$ in the likelihood function of the background image into the following second Gaussian distribution function:
$$p(i'_n \mid h_0, \theta_{bg}) = \prod_{j} G\left(i'_n(j) \mid \mu_{bg}, \gamma_{bg}\right)$$
construct the deep one-shot learning model from the likelihood function of the target image, the likelihood function of the background image, the preset Bayesian learning model, and the first and second Gaussian distribution functions.
In a fourth aspect, the invention discloses a device for recognizing images with the converged deep one-shot learning model described above, the device comprising:
a to-be-recognized feature-map-set acquisition module, configured to input the image to be recognized into the preset CNN model and select the output images of the n-th hidden layer of the preset CNN model as the to-be-recognized feature-map set of that image;
a second dimensionality-reduction computation module, configured to reduce the dimensionality of the to-be-recognized feature-map set with the dimensionality-reduction matrix, to generate the to-be-recognized reduced feature-map set of the image;
an image recognition module, configured to input the to-be-recognized reduced feature-map set into the converged deep one-shot learning model and recognize the image, thereby determining whether the image to be recognized contains the object to be recognized.
The technical solutions provided by the embodiments of the invention can have the following beneficial effects. The invention provides a method for obtaining a deep one-shot learning model, and an image recognition method and device. In the provided method, a deep one-shot learning model is constructed on the basis of a small data set containing a small number of target images and background images (a data set in which the labeled samples are few in number or rather uniform), and the deep one-shot learning model is trained on this small data set, finally yielding the converged deep one-shot learning model. Experiments confirm that the deep one-shot learning model obtained by the provided method achieves a higher image recognition rate, and a better recognition effect, than a CNN model obtained from the same small data set.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the invention.
To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a method for obtaining a deep one-shot learning model according to an embodiment of the present invention;
Fig. 2 is a flow chart of a method for recognizing images with the converged deep one-shot learning model according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of a device for obtaining a deep one-shot learning model according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a device for recognizing images with the converged deep one-shot learning model according to an embodiment of the present invention.
Detailed description of the embodiments
To enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions of the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The invention provides a method for obtaining a deep one-shot learning model, and an image recognition method and device. In the provided method, a deep one-shot learning model can be built from a small number of samples and trained on the data set containing those samples, finally yielding the converged deep one-shot learning model. Experiments confirm that, when the converged deep one-shot learning model is applied to actual image recognition, the accuracy of image recognition is clearly improved compared with a CNN model trained on the same small number of samples: the recognition rate is higher and the applicability is better.
The specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in figure 1, Fig. 1 is illustrated that a kind of stream of the method for the disposable learning model of acquisition depth that the present invention provides
Cheng Tu, the method includes:
Step 101, all target images concentrated for the preset data comprising a small amount of target image and background image, will
Convolutional neural networks cnn model is preset in each width target image input, selects any one layer in this default cnn model hidden layer
Output image is as the characteristic pattern image set of this target image.
Generally, people want to obtain certain network learning model for images to be recognized is carried out with image recognition classification
When, need first to obtain a certain amount of sample data demarcated and build network learning model, and demarcated by a certain amount of
Sample data is trained to network learning model until convergence, obtaining the network learning model after convergence, afterwards using convergence
Network learning model afterwards can carry out image recognition to images to be recognized and classify, therefore, it is desirable to obtain sexology of depth
Practise model it is necessary first to obtain a certain amount of sample data demarcated.
In this embodiment, before starting to obtain the deep one-shot learning model, a data set comprising a small number of target images and background images can first be obtained. The number of images in this data set is less than 2000, and may be only a few hundred. The data set is then pre-stored in the system for obtaining the deep one-shot learning model, so that it can be used directly when the model is obtained; it is therefore referred to herein as the preset data set. A target image is an image labeled as containing the object to be recognized, and a background image is an image labeled as not containing it; the target object whose presence in the image to be recognized is to be recognized and verified is referred to herein as the object to be recognized. In a specific implementation, the preset data set can be obtained by manual labeling: images containing the object to be recognized are manually labeled as target images, images not containing it are manually labeled as background images, and the labeled target and background images are pre-stored in the system for obtaining the deep one-shot learning model, yielding the preset data set.
In the method for obtaining a deep one-shot learning model provided by this embodiment, the target images in the preset data set are recognized by a preset Bayesian learning model, thereby constructing the deep one-shot learning model. Recognizing an image essentially means determining whether it contains the object to be recognized, which can be done by comparing the probability that the image contains the object to be recognized with the probability that it does not, i.e. that it contains only background (herein, everything in the image other than the object to be recognized is defined as background). If the probability that the image contains the object to be recognized is greater than the probability that it contains only background, the image is determined to contain the object. The recognition of a target image in the preset data set by the preset Bayesian learning model can therefore be expressed as determining, via the model, the relative magnitude of these two probabilities.
To determine, with the preset Bayesian learning model, the probability that a target image contains the object to be recognized and the probability that it contains only background, the feature image set of the target image must first be determined. In a specific implementation, a CNN model can be used to extract it; therefore a CNN model must also be obtained before starting to obtain the deep one-shot learning model, and is pre-stored in the system for obtaining the deep one-shot learning model so that it can be used directly. It is referred to herein as the preset CNN model, and can be either of the following: a CNN model trained on a large amount of labeled sample data, or a CNN model trained on a small amount of labeled sample data.
When the preset CNN model is used to extract the feature image set of each target image in the preset data set, the target image is input into the preset CNN model and the output image of any one layer of its hidden layers is selected as the feature image set of that target image. In a specific implementation, the process of selecting the output image of one hidden layer as the feature image set comprises:
selecting the output image of the n-th hidden layer of the preset CNN model as the feature image set of the target image, and representing the feature image set as a feature matrix;
computing the matrix elements i_i^n of the feature matrix with the activation function according to i_i^n = σ(i_0 * w_i^n + b_i^n), thereby determining the feature image set;
where i^n denotes the output image of the n-th hidden layer of the preset CNN model, i.e. the feature image set; i_i^n denotes the i-th matrix element of the feature matrix corresponding to the feature image set, i.e. the i-th feature image in the feature image set i^n; i_0 denotes the target image input into the preset CNN model; w_i^n denotes the i-th filter of the n-th hidden layer; b_i^n denotes the i-th bias matrix of the n-th hidden layer; 1 ≤ i ≤ k, where k is the number of convolution kernels in the n-th hidden layer, i.e. the number of feature images in the feature image set i^n (how many feature images each feature image set i^n contains); (*) denotes convolution; and σ(·) denotes the activation function.
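The per-filter computation described above (convolve the input with the i-th filter, add the i-th bias, apply the activation function) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the function names, the use of a sigmoid as the activation σ, and scalar biases are all assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]  # flip for true convolution (vs. correlation)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

def hidden_layer_features(i0, filters, biases,
                          sigma=lambda x: 1.0 / (1.0 + np.exp(-x))):
    """Return the k feature images i_i^n = sigma(i0 * w_i^n + b_i^n)."""
    return [sigma(conv2d(i0, w) + b) for w, b in zip(filters, biases)]
```

With k filters this yields a feature image set of k feature maps, one per convolution kernel, matching the description above.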
Step 102: using the background images in the preset data set, determine the dimensionality-reduction matrix by principal component analysis (PCA).
In a specific implementation, each background image in the preset data set is input into the preset CNN model, the output image of the n-th hidden layer of the preset CNN model is extracted as the background feature image set of that background image, and the corresponding background feature matrix is determined. Principal component analysis (PCA) is then performed on the background feature matrices of all background images in the preset data set to generate a coefficient matrix, and the first c columns of this coefficient matrix are selected as the dimensionality-reduction matrix, where the value of c must satisfy the following condition: the cumulative contribution rate of the PCA eigenvalues reaches 90% or more. For example, assume the preset data set contains 20 background images, each with a corresponding 256 × 169 background feature matrix; the 20 background images then form one 5120 × 169 background feature matrix. Performing principal component analysis on this background feature matrix generates a 169 × 169 coefficient matrix, and the first c columns of this coefficient matrix are selected as the dimensionality-reduction matrix.
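The PCA step above can be sketched with an SVD: stack the background feature matrices, compute the principal-axis (coefficient) matrix, and keep the first c columns whose cumulative eigenvalue contribution reaches 90%. A minimal sketch under those assumptions (the function and variable names are illustrative):

```python
import numpy as np

def pca_reduction_matrix(background_features, target_ratio=0.90):
    """background_features: (m, d) matrix of stacked background feature rows.

    Returns the first c columns of the PCA coefficient matrix, with c chosen
    so the cumulative explained-variance ratio reaches target_ratio."""
    x = background_features - background_features.mean(axis=0)
    # Rows of vt are the principal axes; vt.T is the d x d coefficient matrix.
    _, s, vt = np.linalg.svd(x, full_matrices=False)
    var = s ** 2                       # proportional to the PCA eigenvalues
    cum = np.cumsum(var) / var.sum()   # cumulative contribution rate
    c = int(np.searchsorted(cum, target_ratio) + 1)
    return vt[:c].T                    # d x c dimensionality-reduction matrix w_c
```

The reduced feature image set of Step 103 is then simply the matrix product of a feature matrix with this w_c.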
Step 103: using the dimensionality-reduction matrix, reduce the dimensionality of the feature image set of each target image in the preset data set, generating the reduced feature image set of that target image.
In a specific implementation, this process specifically includes: using the dimensionality-reduction matrix, reducing the dimensionality of the feature image set of each target image in the preset data set according to the following formula, generating the reduced feature image set of that target image:
i'_n = i_n · w_c;
where i'_n denotes the reduced feature image set, i_n denotes the feature image set, and w_c denotes the dimensionality-reduction matrix.
Step 104: input each reduced feature image set among all the reduced feature image sets into the preset Bayesian learning model to recognize the target image corresponding to that reduced feature image set, and construct the deep one-shot learning model.
In a specific implementation, this process specifically includes: inputting each reduced feature image set into the following preset Bayesian learning model, r = p(c | i'_n) / p(c_bg | i'_n), to recognize the target image corresponding to that reduced feature image set;
where p(c | i'_n) denotes the posterior probability, given the reduced feature image set i'_n, that the preset Bayesian learning model recognizes the target image corresponding to the input i'_n as a target image; c denotes that the model recognizes the target image corresponding to i'_n as a target image; p(c_bg | i'_n) denotes the posterior probability, given i'_n, that the model recognizes the corresponding target image as a background image; c_bg denotes that the model recognizes the corresponding target image as a background image; and r denotes the ratio of the posterior probability that the model recognizes the corresponding target image as a target image to the posterior probability that it recognizes it as a background image.
In general, each probability satisfies some probability distribution whose distribution function contains parameters. The posterior probability p(c | i'_n) is such a distribution function and therefore implicitly contains a parameter θ, meaning that p(c | i'_n) satisfies some probability distribution with parameter θ; likewise, p(c_bg | i'_n) implicitly contains a parameter θ_bg. The formula r = p(c | i'_n) / p(c_bg | i'_n) can thus be rewritten by Bayes' rule as r = [p(i'_n | θ) p(c, θ) / p(i'_n, θ)] / [p(i'_n | θ_bg) p(c_bg, θ_bg) / p(i'_n, θ_bg)];
where p(c, θ) denotes the prior probability that the target image corresponding to the reduced feature image set i'_n contains the object to be recognized, and may be abbreviated as p(c); p(c_bg, θ_bg) denotes the prior probability that the target image corresponding to i'_n does not contain the object to be recognized, and may be abbreviated as p(c_bg); and p(i'_n, θ) is the evidence factor that ensures the posterior probabilities of all categories sum to 1. Since the evidence factor is generally a constant, and likewise p(i'_n, θ_bg) is a constant, the above formula can be simplified to r = p(i'_n | θ) p(c) / [p(i'_n | θ_bg) p(c_bg)], where p(i'_n | θ) is the likelihood function of the target image, i.e. the probability density function of i'_n on the premise that the target image corresponding to i'_n is a target image satisfying some distribution with parameter θ; p(c) denotes the prior probability that the target image corresponding to i'_n contains the object to be recognized; p(i'_n | θ_bg) is the likelihood function of the background image, i.e. the probability density function of i'_n on the premise that the target image corresponding to i'_n is a background image satisfying some distribution with parameter θ_bg; and p(c_bg) denotes the prior probability that the target image corresponding to i'_n does not contain the object to be recognized;
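The derivation in the paragraphs above can be written compactly as follows. The intermediate formulas in the original are figures not reproduced in the text, so this is a reconstruction from the surrounding definitions: Bayes' rule applied to the posterior ratio, then the constant evidence factors dropped.

```latex
r \;=\; \frac{p(c \mid i'_n)}{p(c_{bg} \mid i'_n)}
  \;=\; \frac{p(i'_n \mid \theta)\,p(c,\theta)\,/\,p(i'_n,\theta)}
             {p(i'_n \mid \theta_{bg})\,p(c_{bg},\theta_{bg})\,/\,p(i'_n,\theta_{bg})}
  \;\propto\; \frac{p(i'_n \mid \theta)\,p(c)}{p(i'_n \mid \theta_{bg})\,p(c_{bg})}
```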
By the hypothesis method, the reduced feature images that can characterize the features of the object to be recognized are picked out from each reduced feature image set i'_n (each of which contains multiple reduced feature images), and the likelihood function of the target image is determined according to p(i'_n | θ) = Σ_h p(i'_n, h | θ) = Σ_h p(i'_n | h, θ) p(h | θ);
where h denotes the index vector adopted in the hypothesis method, whose value range is [1, k], and H denotes the set of all h;
By the null-hypothesis method, the reduced feature images that can characterize background features are picked out from each reduced feature image set i'_n, and the likelihood function of the background image is determined according to
p(i'_n | θ_bg) = p(i'_n, h_0 | θ_bg) = p(i'_n | h_0, θ_bg) p(h_0 | θ_bg);
where h_0 denotes the index vector adopted in the null-hypothesis method;
Since p(h | θ) is a constant, and assuming p(i'_n | h, θ) satisfies a Gaussian distribution, the factor p(i'_n | h, θ) in the likelihood function of the target image can be transformed into the following first Gaussian distribution function;
where g denotes the Gaussian distribution, h_q denotes the q-th feature index in the index vector h, q denotes the length of the index vector h, i.e. the number of feature indices in h, i'_n(h_q) denotes the h_q-th reduced feature image selected from the reduced feature image set i'_n according to h, i'_n(j) denotes a reduced feature image in i'_n not selected by h, and μ, γ, μ_bg, γ_bg denote the parameters of the first Gaussian distribution function;
For a background image, none of the background reduced feature images characterizing background features is selected by h, so the factor p(i'_n | h_0, θ_bg) in the likelihood function of the background image can be transformed into the following second Gaussian distribution function;
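The two Gaussian distribution functions referred to above appear as figures in the original and are not reproduced in the text. A plausible reconstruction from the surrounding definitions, under the assumption that features indexed by h are drawn from the foreground Gaussian with parameters μ, γ and all other features from the background Gaussian with parameters μ_bg, γ_bg, is:

```latex
p(i'_n \mid h, \theta) \;=\; \prod_{q} g\bigl(i'_n(h_q) \mid \mu, \gamma\bigr)
  \prod_{j \notin h} g\bigl(i'_n(j) \mid \mu_{bg}, \gamma_{bg}\bigr),
\qquad
p(i'_n \mid h_0, \theta_{bg}) \;=\; \prod_{j} g\bigl(i'_n(j) \mid \mu_{bg}, \gamma_{bg}\bigr)
```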
After a target image is input into the preset CNN model, its feature image set is extracted by the preset CNN model and its dimensionality is reduced by the dimensionality-reduction matrix; once the reduced feature image set of the target image is obtained, p(i'_n | h_0, θ_bg) is a fixed value, and p(c)/p(c_bg) is a constant. Therefore, from the likelihood function of the target image, the likelihood function of the background image, the preset Bayesian learning model, the first Gaussian distribution function and the second Gaussian distribution function, the following deep one-shot learning model can be constructed:
Step 105: train the deep one-shot learning model on the preset data set until the model converges, thereby obtaining the converged deep one-shot learning model.
There are multiple methods for training the deep one-shot learning model on the preset data set. For example: the first method trains the model on the preset data set by the expectation maximization (EM) method; the second by the majorize-minimize (MM) method; the third by the expectation conditional maximization (ECM) method; and the fourth by the α-EM (α-expectation maximization) method. Training the deep one-shot learning model with any of these methods yields the values of μ and γ after model convergence; substituting the obtained values of μ and γ into the deep one-shot learning model then yields the converged deep one-shot learning model.
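As an illustration of the first training option, here is a minimal EM sketch that estimates the foreground Gaussian parameters (a mean μ and variance γ) from scalar feature values, with the background Gaussian held fixed. The 1-D simplification, the fixed mixing weight, and all names are assumptions for illustration, not the patent's training procedure:

```python
import numpy as np

def em_foreground_gaussian(x, mu_bg, var_bg, n_iter=50, pi=0.5):
    """EM sketch: x holds scalar features drawn from a mixture of a
    foreground Gaussian (mu, var, to be learned) and a fixed background
    Gaussian (mu_bg, var_bg). Returns the converged (mu, var)."""
    mu, var = x.mean(), x.var() + 1e-6  # crude initialization

    def gauss(t, m, v):
        return np.exp(-(t - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

    for _ in range(n_iter):
        # E-step: responsibility of the foreground component for each feature
        rf = pi * gauss(x, mu, var)
        rb = (1 - pi) * gauss(x, mu_bg, var_bg)
        w = rf / (rf + rb)
        # M-step: re-estimate foreground mean and variance
        mu = np.sum(w * x) / np.sum(w)
        var = np.sum(w * (x - mu) ** 2) / np.sum(w) + 1e-6
    return mu, var
```

The MM, ECM, and α-EM variants mentioned above differ in how the M-step (or the surrogate objective) is organized, but follow the same iterate-to-convergence pattern.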
When the labeled samples available for training a network learning model are few or homogeneous, the deep one-shot learning model obtained by the method of this embodiment achieves a markedly higher recognition rate on other images. Taking recognition of the object "car" as an example: experiments confirm that, with a CNN model and a deep one-shot learning model obtained by training on identical data samples, each used to recognize 80 images, the recognition rate of the CNN model is 65%, while that of the deep one-shot learning model is 71.25%, a marked improvement.
In the method for obtaining a deep one-shot learning model provided by the present invention, a deep one-shot learning model is constructed from a small data set comprising a small number of target images and background images and trained on that small data set, finally yielding the converged deep one-shot learning model. Experiments confirm that the deep one-shot learning model obtained by the method provided by the present invention achieves a higher image recognition rate than a CNN model obtained from the same small data set, with better recognition performance and better applicability.
As shown in Fig. 2, which is a flowchart of a method provided by the present invention for image recognition using the converged deep one-shot learning model described above, the method includes:
Step 201: input the image to be recognized into the preset CNN model, and select the output image of the n-th hidden layer of the preset CNN model as the to-be-recognized feature image set of the image to be recognized.
Step 202: reduce the dimensionality of the to-be-recognized feature image set using the dimensionality-reduction matrix, generating the to-be-recognized reduced feature image set of the image to be recognized.
Step 203: input the to-be-recognized reduced feature image set into the converged deep one-shot learning model to recognize the image to be recognized, thereby determining whether the image to be recognized contains the object to be recognized.
The implementation of steps 201 and 202 may refer to the corresponding implementations in the above embodiment and is not repeated here. In step 203, the to-be-recognized reduced feature image set is input into the converged deep one-shot learning model, r is computed by the converged model, and r is compared with a preset threshold for the posterior probability ratio, which can be chosen anywhere in [0.5, 1.5]. If r is greater than or equal to the preset threshold for the posterior probability ratio, the image to be recognized is determined to contain the object to be recognized; if r is less than the preset threshold, the image to be recognized is determined not to contain the object to be recognized.
Because the method provided by this embodiment uses the deep one-shot learning model described above to recognize the image to be recognized, its recognition accuracy is higher.
Corresponding to the above methods, embodiments of the present invention also disclose a device for obtaining a deep one-shot learning model, and a device for image recognition using the converged deep one-shot learning model described above.
As shown in Fig. 3, which is a structural block diagram of a device for obtaining a deep one-shot learning model, the device 300 includes:
a feature image set acquisition module 301, configured to, for all target images in a preset data set comprising a small number of target images and background images, input each target image into a preset convolutional neural network (CNN) model and select the output image of any one layer of the hidden layers of the preset CNN model as the feature image set of that target image, where a target image is an image labeled as containing the object to be recognized and a background image is an image labeled as not containing the object to be recognized;
a dimensionality-reduction matrix determination module 302, configured to determine the dimensionality-reduction matrix by principal component analysis (PCA) using the background images in the preset data set;
a first dimensionality-reduction calculation module 303, configured to reduce the dimensionality of the feature image set of each target image in the preset data set using the dimensionality-reduction matrix, generating the reduced feature image set of that target image;
a deep one-shot learning model construction module 304, configured to input each reduced feature image set among all the reduced feature image sets into the preset Bayesian learning model, recognize the target image corresponding to that reduced feature image set, and construct the deep one-shot learning model; and
a model training module 305, configured to train the deep one-shot learning model on the preset data set until the model converges, thereby obtaining the converged deep one-shot learning model.
Further, the feature image set acquisition module 301 is specifically configured to:
for all target images in the preset data set comprising a small number of target images and background images, input each target image into the preset convolutional neural network (CNN) model, select the output image of the n-th hidden layer of the preset CNN model as the feature image set of that target image, and represent the feature image set as a feature matrix;
compute the matrix elements i_i^n of the feature matrix with the activation function according to i_i^n = σ(i_0 * w_i^n + b_i^n), thereby determining the feature image set;
where i^n denotes the output image of the n-th hidden layer of the preset CNN model, i.e. the feature image set; i_i^n denotes the i-th matrix element of the feature matrix corresponding to the feature image set, i.e. the i-th feature image in the feature image set i^n; i_0 denotes the target image input into the preset CNN model; w_i^n denotes the i-th filter of the n-th hidden layer; b_i^n denotes the i-th bias matrix of the n-th hidden layer; 1 ≤ i ≤ k, where k is the number of convolution kernels in the n-th hidden layer, i.e. the number of feature images in the feature image set i^n; (*) denotes convolution; and σ(·) denotes the activation function.
Further, the first dimensionality-reduction calculation module 303 is specifically configured to:
reduce the dimensionality of the feature image set of each target image in the preset data set using the dimensionality-reduction matrix according to the following formula, generating the reduced feature image set of that target image:
i'_n = i_n · w_c;
where i'_n denotes the reduced feature image set, i_n denotes the feature image set, and w_c denotes the dimensionality-reduction matrix.
Further, the deep one-shot learning model construction module 304 is specifically configured to:
input each reduced feature image set among all the reduced feature image sets into the following preset Bayesian learning model, r = p(c | i'_n) / p(c_bg | i'_n), to recognize the target image corresponding to that reduced feature image set;
where p(c | i'_n) denotes the posterior probability, given the reduced feature image set i'_n, that the preset Bayesian learning model recognizes the target image corresponding to the input i'_n as a target image; c denotes that the model recognizes the target image corresponding to i'_n as a target image; p(c_bg | i'_n) denotes the posterior probability, given i'_n, that the model recognizes the corresponding target image as a background image; and c_bg denotes that the model recognizes the corresponding target image as a background image;
p(i'_n | θ) is the likelihood function of the target image, i.e. the probability density function of i'_n on the premise that the target image corresponding to i'_n is a target image satisfying some distribution with parameter θ; p(c) denotes the prior probability that the target image corresponding to i'_n contains the object to be recognized; p(i'_n | θ_bg) is the likelihood function of the background image, i.e. the probability density function of i'_n on the premise that the target image corresponding to i'_n is a background image satisfying some distribution with parameter θ_bg; and p(c_bg) denotes the prior probability that the target image corresponding to i'_n does not contain the object to be recognized;
by the hypothesis method, pick out from each reduced feature image set i'_n the reduced feature images that can characterize the features of the object to be recognized, and determine the likelihood function of the target image according to p(i'_n | θ) = Σ_h p(i'_n, h | θ) = Σ_h p(i'_n | h, θ) p(h | θ);
where h denotes the index vector adopted in the hypothesis method, whose value range is [1, k], and H denotes the set of all h;
by the null-hypothesis method, pick out from each reduced feature image set i'_n the reduced feature images that can characterize background features, and determine the likelihood function of the background image according to
p(i'_n | θ_bg) = p(i'_n, h_0 | θ_bg) = p(i'_n | h_0, θ_bg) p(h_0 | θ_bg);
where h_0 denotes the index vector adopted in the null-hypothesis method;
transform the factor p(i'_n | h, θ) in the likelihood function of the target image into the following first Gaussian distribution function;
where g denotes the Gaussian distribution, h_q denotes the q-th feature index in the index vector h, q denotes the length of the index vector h, i.e. the number of feature indices in h, i'_n(h_q) denotes the h_q-th reduced feature image selected from i'_n according to h, i'_n(j) denotes a reduced feature image in i'_n not selected by h, and μ, γ, μ_bg, γ_bg denote the parameters of the first Gaussian distribution function;
transform the factor p(i'_n | h_0, θ_bg) in the likelihood function of the background image into the following second Gaussian distribution function; and
construct the following deep one-shot learning model from the likelihood function of the target image, the likelihood function of the background image, the preset Bayesian learning model, the first Gaussian distribution function and the second Gaussian distribution function:
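The final model formula is given in the original as a figure that is not reproduced in the text. As a loose, hypothetical sketch of the kind of score such a model produces, the snippet below treats each reduced feature as a scalar, considers single-feature hypotheses, and lets the background factors over unselected features cancel against the background likelihood, leaving an average of per-feature foreground/background likelihood ratios. Everything here, including the names and the simplifications, is illustrative rather than the patent's model:

```python
import numpy as np

def gauss(x, m, v):
    """Univariate Gaussian density."""
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def one_shot_score(features, mu, var, mu_bg, var_bg, prior_ratio=1.0):
    """Hypothetical posterior-ratio score r: per-feature likelihood ratios
    (foreground over background), averaged over single-feature hypotheses
    and scaled by the prior ratio p(c)/p(c_bg)."""
    ratios = gauss(features, mu, var) / gauss(features, mu_bg, var_bg)
    return prior_ratio * float(np.mean(ratios))
```

A score above the preset threshold would then indicate that the image contains the object to be recognized.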
When the deep one-shot learning model obtained by the device provided by the present invention is used to recognize images, its recognition rate is higher than that of a CNN model obtained from the same small data set, with better recognition performance and better applicability.
As shown in Fig. 4, which is a structural block diagram of a device provided by the present invention for image recognition using the converged deep one-shot learning model described above, the device 400 includes:
a to-be-recognized feature image set acquisition module 401, configured to input the image to be recognized into the preset CNN model and select the output image of the n-th hidden layer of the preset CNN model as the to-be-recognized feature image set of the image to be recognized;
a second dimensionality-reduction calculation module 402, configured to reduce the dimensionality of the to-be-recognized feature image set using the dimensionality-reduction matrix, generating the to-be-recognized reduced feature image set of the image to be recognized; and
an image recognition module 403, configured to input the to-be-recognized reduced feature image set into the converged deep one-shot learning model to recognize the image to be recognized, thereby determining whether the image to be recognized contains the object to be recognized.
When this device is used to recognize images, the effects of the above method for image recognition using the converged deep one-shot learning model can be achieved, and the image recognition rate is higher.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, for the device and system embodiments, since they are substantially similar to the method embodiments, their description is relatively simple, and relevant parts may refer to the description of the method embodiments. The device and system embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
It should be noted that, herein, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
The above are only specific embodiments of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the invention, and such improvements and modifications shall also fall within the protection scope of the present invention.
Claims (10)
1. A method for obtaining a deep one-shot learning model, characterized by comprising:
for all target images in a preset data set comprising a small number of target images and background images, inputting each target image into a preset convolutional neural network (CNN) model and selecting the output image of any one hidden layer of the preset CNN model as the feature image set of that target image, wherein the target images are images annotated as containing an object to be recognized and the background images are images annotated as not containing the object to be recognized;
determining a dimensionality reduction matrix by principal component analysis (PCA) using the background images in the preset data set;
performing dimensionality reduction on the feature image set of each target image in the preset data set using the dimensionality reduction matrix, so as to generate a dimension-reduced feature image set for that target image;
inputting each of the dimension-reduced feature image sets into a preset Bayesian learning model, recognizing the target image corresponding to that dimension-reduced feature image set, and constructing a deep one-shot learning model; and
training the deep one-shot learning model with the preset data set until the model converges, thereby obtaining the converged deep one-shot learning model.
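For illustration only (this sketch is not part of the claimed method, and every function name and array shape in it is an assumption), the PCA step of the claim — deriving the dimensionality reduction matrix from the background images' feature vectors — might look as follows in NumPy:

```python
import numpy as np

def pca_reduction_matrix(background_features, n_components):
    """Compute a PCA projection matrix from background feature vectors.

    background_features: (num_samples, dim) array, one flattened feature
    image per row. Returns a (dim, n_components) matrix whose columns are
    the leading principal directions of the centered data.
    """
    centered = background_features - background_features.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components].T

# Toy example: 20 background feature vectors of dimension 64, reduced to 8.
rng = np.random.default_rng(0)
w_c = pca_reduction_matrix(rng.normal(size=(20, 64)), 8)
print(w_c.shape)  # (64, 8)
```

The columns of the returned matrix are orthonormal, so projecting onto them preserves the dominant background variance while discarding the rest, which is the usual motivation for a PCA-based reduction matrix.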
2. The method according to claim 1, characterized in that the process of selecting the output image of any one hidden layer of the preset CNN model as the feature image set of the target image specifically comprises:
selecting the output image of the n-th hidden layer of the preset CNN model as the feature image set of the target image, and expressing the feature image set as the following feature matrix;
computing each matrix element i_i^n of the feature matrix with the activation response function according to the following formula, thereby determining the feature image set;
i_i^n = σ(i_0 * w_i^n + b_i^n);
wherein i^n denotes the output image of the n-th hidden layer of the preset CNN model, i.e. the feature image set; i_i^n denotes the i-th matrix element of the feature matrix corresponding to the feature image set, i.e. the i-th feature image in the feature image set i^n; i_0 denotes the target image input into the preset CNN model; w_i^n denotes the i-th filter in the n-th hidden layer of the preset CNN model; b_i^n denotes the i-th bias matrix in the n-th hidden layer of the preset CNN model; 1 ≤ i ≤ k, where k denotes the number of convolution kernels in the n-th hidden layer of the preset CNN model, i.e. the number of feature images in the feature image set i^n; (*) denotes the convolution operation; and σ(·) denotes the activation response function.
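As a hedged illustration of the per-filter computation in this claim — the ReLU activation and the toy shapes below are assumptions, since the claim leaves the activation response function unspecified — one feature image σ(i_0 * w_i^n + b_i^n) per convolution kernel can be sketched as:

```python
import numpy as np

def sigma(x):
    # Activation response function; ReLU is assumed here for illustration.
    return np.maximum(x, 0.0)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution (kernel flipped, as the * operator implies)."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

def feature_image_set(i0, filters, biases):
    """One feature image per filter: sigma(i0 * w + b) for each of the k kernels."""
    return [sigma(conv2d_valid(i0, w) + b) for w, b in zip(filters, biases)]

i0 = np.arange(25, dtype=float).reshape(5, 5)   # toy input image
filters = [np.ones((3, 3)), np.eye(3)]          # k = 2 toy kernels
biases = [np.zeros((3, 3)), np.zeros((3, 3))]   # matching bias matrices
maps = feature_image_set(i0, filters, biases)
print(len(maps), maps[0].shape)  # 2 (3, 3)
```

In a real deployment this whole block would of course be replaced by the n-th hidden-layer output of a pretrained CNN framework; the loop form is only meant to mirror the formula term by term.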
3. The method according to claim 2, characterized in that the process of performing dimensionality reduction on the feature image set of each target image in the preset data set using the dimensionality reduction matrix, so as to generate the dimension-reduced feature image set of that target image, specifically comprises:
performing dimensionality reduction on the feature image set of each target image in the preset data set according to the following formula using the dimensionality reduction matrix, so as to generate the dimension-reduced feature image set of that target image;
i'^n = i^n · w_c;
wherein i'^n denotes the dimension-reduced feature image set, i^n denotes the feature image set, and w_c denotes the dimensionality reduction matrix.
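A minimal sketch of the projection i'^n = i^n · w_c in this claim, under two assumptions not stated in the claim: each feature image is flattened to one row of the matrix, and a random orthonormal matrix stands in for the PCA matrix:

```python
import numpy as np

# Each of the k feature images is flattened to a row, then projected by w_c.
rng = np.random.default_rng(1)
k, dim, reduced = 4, 64, 8
i_n = rng.normal(size=(k, dim))                          # k flattened feature images
w_c = np.linalg.qr(rng.normal(size=(dim, reduced)))[0]   # stand-in PCA matrix
i_n_reduced = i_n @ w_c                                  # dimension-reduced feature image set
print(i_n_reduced.shape)  # (4, 8)
```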
4. The method according to claim 3, characterized in that the process of inputting each of the dimension-reduced feature image sets into the preset Bayesian learning model, recognizing the target image corresponding to that dimension-reduced feature image set, and constructing the deep one-shot learning model specifically comprises:
inputting each of the dimension-reduced feature image sets into the following preset Bayesian learning model, and recognizing the target image corresponding to that dimension-reduced feature image set;
R = p(c | i'^n) / p(c_bg | i'^n) = (p(i'^n | θ) p(c)) / (p(i'^n | θ_bg) p(c_bg));
wherein p(c | i'^n) denotes the posterior probability that, given the dimension-reduced feature image set i'^n, the preset Bayesian learning model recognizes the target image corresponding to i'^n as a target image, and c denotes that the preset Bayesian learning model recognizes the target image corresponding to the input dimension-reduced feature image set i'^n as a target image; p(c_bg | i'^n) denotes the posterior probability that, given the dimension-reduced feature image set i'^n, the preset Bayesian learning model recognizes the target image corresponding to i'^n as a background image, and c_bg denotes that the preset Bayesian learning model recognizes the target image corresponding to the input dimension-reduced feature image set i'^n as a background image;
p(i'^n | θ) is the likelihood function of the target image, i.e. the probability density function of the dimension-reduced feature image set i'^n on the premise that the target image corresponding to i'^n is a target image satisfying a certain distribution with parameter θ; p(c) denotes the prior probability that the target image corresponding to i'^n contains the object to be recognized; p(i'^n | θ_bg) is the likelihood function of the background image, i.e. the probability density function of i'^n on the premise that the target image corresponding to i'^n is a background image satisfying a certain distribution with parameter θ_bg; and p(c_bg) denotes the prior probability that the target image corresponding to i'^n does not contain the object to be recognized;
picking out, by a hypothesis method according to the following formula, the dimension-reduced feature images in each dimension-reduced feature image set i'^n that can characterize features of the object to be recognized, and determining the likelihood function of the target image;
p(i'^n | θ) = Σ_{h∈H} p(i'^n, h | θ) = Σ_{h∈H} p(i'^n | h, θ) p(h | θ);
wherein h denotes the index vector adopted in the hypothesis method, whose entries take values in [1, k], and H denotes the set of all h;
picking out, by a null-hypothesis method according to the following formula, the dimension-reduced feature images in each dimension-reduced feature image set i'^n that can characterize background features, and determining the likelihood function of the background image;
p(i'^n | θ_bg) = p(i'^n, h_0 | θ_bg) = p(i'^n | h_0, θ_bg) p(h_0 | θ_bg);
wherein h_0 denotes the index vector adopted in the null-hypothesis method;
transforming p(i'^n | h, θ) in the likelihood function of the target image into the following first Gaussian distribution function;
p(i'^n | h, θ) = Π_{q=1}^{Q} G(i'^n(h_q) | μ, γ) · Π_{j∉h} G(i'^n(j) | μ_bg, γ_bg);
wherein G denotes the Gaussian distribution; h_q denotes the q-th feature index in the index vector h, and Q denotes the length of the index vector h, i.e. the number of feature indices in h; i'^n(h_q) denotes the h_q-th dimension-reduced feature image selected from the dimension-reduced feature image set i'^n according to h; i'^n(j) denotes a dimension-reduced feature image in i'^n not selected by h; and μ, γ, μ_bg, γ_bg denote the parameters of the first Gaussian distribution function;
transforming p(i'^n | h_0, θ_bg) in the likelihood function of the background image into the following second Gaussian distribution function;
p(i'^n | h_0, θ_bg) = Π_{j=1}^{k} G(i'^n(j) | μ_bg, γ_bg); and
constructing the following deep one-shot learning model according to the likelihood function of the target image, the likelihood function of the background image, the preset Bayesian learning model, the first Gaussian distribution function, and the second Gaussian distribution function;
R = (p(c) Σ_{h∈H} p(h | θ) Π_{q=1}^{Q} G(i'^n(h_q) | μ, γ) Π_{j∉h} G(i'^n(j) | μ_bg, γ_bg)) / (p(c_bg) p(h_0 | θ_bg) Π_{j=1}^{k} G(i'^n(j) | μ_bg, γ_bg)).
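The decision rule of the Bayesian learning model above can be illustrated as follows. This is a deliberately simplified sketch, not the claimed model: it assumes a single hypothesis h instead of a sum over H, scalar per-feature summaries instead of feature images, and equal priors, and it computes the posterior ratio with univariate Gaussian likelihoods:

```python
import math

def gauss_pdf(x, mu, var):
    """Univariate Gaussian density G(x | mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_ratio(features, selected, mu, var, mu_bg, var_bg,
                    p_c=0.5, p_cbg=0.5):
    """Ratio p(c | i'^n) / p(c_bg | i'^n) with one hypothesis h for brevity.

    features: scalar summaries of the reduced feature images.
    selected: indices that the hypothesis h picks as characterizing the object.
    """
    lik_target = 1.0
    for j, x in enumerate(features):
        # Selected features follow the object Gaussian; the rest, the background one.
        if j in selected:
            lik_target *= gauss_pdf(x, mu, var)
        else:
            lik_target *= gauss_pdf(x, mu_bg, var_bg)
    lik_bg = 1.0
    for x in features:  # null hypothesis h_0: every feature is background
        lik_bg *= gauss_pdf(x, mu_bg, var_bg)
    return (lik_target * p_c) / (lik_bg * p_cbg)

r = posterior_ratio([2.1, 0.1, 1.9], selected={0, 2},
                    mu=2.0, var=0.25, mu_bg=0.0, var_bg=1.0)
print(r > 1.0)  # features near mu favour the object class
```

A ratio above 1 favours the object class; the full model would additionally sum this over all index vectors h weighted by p(h | θ).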
5. A method for performing image recognition using the converged deep one-shot learning model according to any one of claims 1 to 4, characterized by comprising:
inputting an image to be recognized into the preset CNN model, and selecting the output image of the n-th hidden layer of the preset CNN model as the to-be-recognized feature image set of the image to be recognized;
performing dimensionality reduction on the to-be-recognized feature image set using the dimensionality reduction matrix, so as to generate the to-be-recognized dimension-reduced feature image set of the image to be recognized; and
inputting the to-be-recognized dimension-reduced feature image set into the converged deep one-shot learning model, and recognizing the image to be recognized, so as to determine whether the image to be recognized contains the object to be recognized.
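Putting the recognition flow of this claim together, a hedged end-to-end stub might look like the following; `recognize`, `ratio_fn`, and the threshold are all hypothetical stand-ins (in particular, `ratio_fn` stands in for the trained deep one-shot model, and extracting CNN features is elided):

```python
import numpy as np

def recognize(image_features, w_c, ratio_fn, threshold=1.0):
    """Project the CNN feature image set with the PCA matrix w_c, score it
    with the model's posterior ratio, and threshold the result."""
    reduced = image_features @ w_c
    score = ratio_fn(reduced)
    return score > threshold

# Toy stand-ins: identity "projection" and a norm-based "model".
w_c = np.eye(3)
contains_object = recognize(np.array([[3.0, 0.0, 0.0]]), w_c,
                            ratio_fn=lambda r: float(np.linalg.norm(r)))
print(contains_object)  # True: the score 3.0 exceeds the threshold
```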
6. An apparatus for obtaining a deep one-shot learning model, characterized by comprising:
a feature image set acquisition module, configured to, for all target images in a preset data set comprising a small number of target images and background images, input each target image into a preset convolutional neural network (CNN) model and select the output image of any one hidden layer of the preset CNN model as the feature image set of that target image, wherein the target images are images annotated as containing an object to be recognized and the background images are images annotated as not containing the object to be recognized;
a dimensionality reduction matrix determination module, configured to determine a dimensionality reduction matrix by principal component analysis (PCA) using the background images in the preset data set;
a first dimensionality reduction computation module, configured to perform dimensionality reduction on the feature image set of each target image in the preset data set using the dimensionality reduction matrix, so as to generate a dimension-reduced feature image set for that target image;
a deep one-shot learning model construction module, configured to input each of the dimension-reduced feature image sets into a preset Bayesian learning model, recognize the target image corresponding to that dimension-reduced feature image set, and construct a deep one-shot learning model; and
a model training module, configured to train the deep one-shot learning model with the preset data set until the model converges, thereby obtaining the converged deep one-shot learning model.
7. The apparatus according to claim 6, characterized in that the feature image set acquisition module is specifically configured to:
for all target images in the preset data set comprising a small number of target images and background images, input each target image into the preset convolutional neural network (CNN) model, select the output image of the n-th hidden layer of the preset CNN model as the feature image set of that target image, and express the feature image set as the following feature matrix;
compute each matrix element i_i^n of the feature matrix with the activation response function according to the following formula, thereby determining the feature image set;
i_i^n = σ(i_0 * w_i^n + b_i^n);
wherein i^n denotes the output image of the n-th hidden layer of the preset CNN model, i.e. the feature image set; i_i^n denotes the i-th matrix element of the feature matrix corresponding to the feature image set, i.e. the i-th feature image in the feature image set i^n; i_0 denotes the target image input into the preset CNN model; w_i^n denotes the i-th filter in the n-th hidden layer of the preset CNN model; b_i^n denotes the i-th bias matrix in the n-th hidden layer of the preset CNN model; 1 ≤ i ≤ k, where k denotes the number of convolution kernels in the n-th hidden layer of the preset CNN model, i.e. the number of feature images in the feature image set i^n; (*) denotes the convolution operation; and σ(·) denotes the activation response function.
8. The apparatus according to claim 7, characterized in that the first dimensionality reduction computation module is specifically configured to:
perform dimensionality reduction on the feature image set of each target image in the preset data set according to the following formula using the dimensionality reduction matrix, so as to generate the dimension-reduced feature image set of that target image;
i'^n = i^n · w_c;
wherein i'^n denotes the dimension-reduced feature image set, i^n denotes the feature image set, and w_c denotes the dimensionality reduction matrix.
9. The apparatus according to claim 8, characterized in that the deep one-shot learning model construction module is specifically configured to:
input each of the dimension-reduced feature image sets into the following preset Bayesian learning model, and recognize the target image corresponding to that dimension-reduced feature image set;
R = p(c | i'^n) / p(c_bg | i'^n) = (p(i'^n | θ) p(c)) / (p(i'^n | θ_bg) p(c_bg));
wherein p(c | i'^n) denotes the posterior probability that, given the dimension-reduced feature image set i'^n, the preset Bayesian learning model recognizes the target image corresponding to i'^n as a target image, and c denotes that the preset Bayesian learning model recognizes the target image corresponding to the input dimension-reduced feature image set i'^n as a target image; p(c_bg | i'^n) denotes the posterior probability that, given the dimension-reduced feature image set i'^n, the preset Bayesian learning model recognizes the target image corresponding to i'^n as a background image, and c_bg denotes that the preset Bayesian learning model recognizes the target image corresponding to the input dimension-reduced feature image set i'^n as a background image;
p(i'^n | θ) is the likelihood function of the target image, i.e. the probability density function of the dimension-reduced feature image set i'^n on the premise that the target image corresponding to i'^n is a target image satisfying a certain distribution with parameter θ; p(c) denotes the prior probability that the target image corresponding to i'^n contains the object to be recognized; p(i'^n | θ_bg) is the likelihood function of the background image, i.e. the probability density function of i'^n on the premise that the target image corresponding to i'^n is a background image satisfying a certain distribution with parameter θ_bg; and p(c_bg) denotes the prior probability that the target image corresponding to i'^n does not contain the object to be recognized;
pick out, by a hypothesis method according to the following formula, the dimension-reduced feature images in each dimension-reduced feature image set i'^n that can characterize features of the object to be recognized, and determine the likelihood function of the target image;
p(i'^n | θ) = Σ_{h∈H} p(i'^n, h | θ) = Σ_{h∈H} p(i'^n | h, θ) p(h | θ);
wherein h denotes the index vector adopted in the hypothesis method, whose entries take values in [1, k], and H denotes the set of all h;
pick out, by a null-hypothesis method according to the following formula, the dimension-reduced feature images in each dimension-reduced feature image set i'^n that can characterize background features, and determine the likelihood function of the background image;
p(i'^n | θ_bg) = p(i'^n, h_0 | θ_bg) = p(i'^n | h_0, θ_bg) p(h_0 | θ_bg);
wherein h_0 denotes the index vector adopted in the null-hypothesis method;
transform p(i'^n | h, θ) in the likelihood function of the target image into the following first Gaussian distribution function;
p(i'^n | h, θ) = Π_{q=1}^{Q} G(i'^n(h_q) | μ, γ) · Π_{j∉h} G(i'^n(j) | μ_bg, γ_bg);
wherein G denotes the Gaussian distribution; h_q denotes the q-th feature index in the index vector h, and Q denotes the length of the index vector h, i.e. the number of feature indices in h; i'^n(h_q) denotes the h_q-th dimension-reduced feature image selected from the dimension-reduced feature image set i'^n according to h; i'^n(j) denotes a dimension-reduced feature image in i'^n not selected by h; and μ, γ, μ_bg, γ_bg denote the parameters of the first Gaussian distribution function;
transform p(i'^n | h_0, θ_bg) in the likelihood function of the background image into the following second Gaussian distribution function;
p(i'^n | h_0, θ_bg) = Π_{j=1}^{k} G(i'^n(j) | μ_bg, γ_bg); and
construct the following deep one-shot learning model according to the likelihood function of the target image, the likelihood function of the background image, the preset Bayesian learning model, the first Gaussian distribution function, and the second Gaussian distribution function;
R = (p(c) Σ_{h∈H} p(h | θ) Π_{q=1}^{Q} G(i'^n(h_q) | μ, γ) Π_{j∉h} G(i'^n(j) | μ_bg, γ_bg)) / (p(c_bg) p(h_0 | θ_bg) Π_{j=1}^{k} G(i'^n(j) | μ_bg, γ_bg)).
10. An apparatus for performing image recognition using the converged deep one-shot learning model according to any one of claims 1 to 4, characterized by comprising:
a to-be-recognized feature image set acquisition module, configured to input an image to be recognized into the preset CNN model and select the output image of the n-th hidden layer of the preset CNN model as the to-be-recognized feature image set of the image to be recognized;
a second dimensionality reduction computation module, configured to perform dimensionality reduction on the to-be-recognized feature image set using the dimensionality reduction matrix, so as to generate the to-be-recognized dimension-reduced feature image set of the image to be recognized; and
an image recognition module, configured to input the to-be-recognized dimension-reduced feature image set into the converged deep one-shot learning model and recognize the image to be recognized, so as to determine whether the image to be recognized contains the object to be recognized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610761364.0A CN106372656B (en) | 2016-08-30 | 2016-08-30 | Obtain method, image-recognizing method and the device of the disposable learning model of depth |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106372656A true CN106372656A (en) | 2017-02-01 |
CN106372656B CN106372656B (en) | 2019-05-10 |
Family
ID=57901020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610761364.0A Active CN106372656B (en) | 2016-08-30 | 2016-08-30 | Obtain method, image-recognizing method and the device of the disposable learning model of depth |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106372656B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101464950A (en) * | 2009-01-16 | 2009-06-24 | 北京航空航天大学 | Video human face identification and retrieval method based on on-line learning and Bayesian inference |
CN102147867A (en) * | 2011-05-20 | 2011-08-10 | 北京联合大学 | Method for identifying traditional Chinese painting images and calligraphy images based on subject |
CN103839072A (en) * | 2013-12-31 | 2014-06-04 | 浙江工业大学 | False fingerprint detecting method based on naive Bayes classifiers |
CN104021577A (en) * | 2014-06-19 | 2014-09-03 | 上海交通大学 | Video tracking method based on local background learning |
CN104899255A (en) * | 2015-05-15 | 2015-09-09 | 浙江大学 | Image database establishing method suitable for training deep convolution neural network |
US9400925B2 (en) * | 2013-11-15 | 2016-07-26 | Facebook, Inc. | Pose-aligned networks for deep attribute modeling |
CN105825511A (en) * | 2016-03-18 | 2016-08-03 | 南京邮电大学 | Image background definition detection method based on deep learning |
Non-Patent Citations (4)
Title |
---|
LI ZHANG ET AL.: "ROBUST FACE ALIGNMENT BASED ON LOCAL TEXTURE CLASSIFIERS", IEEE * |
ZHANG ZHAOXU: "Exploration of methods for facial expression feature extraction using CNN deep learning models", Graphics & Image * |
CHEN GUANYU ET AL.: "Recognition and classification of unfavorable geological bodies based on convolutional neural networks", Geological Science and Technology Information * |
HUANG ZI ET AL.: "An implicitly trained convolutional neural network model for pedestrian detection", Computer Applications and Software * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107578448A (en) * | 2017-08-31 | 2018-01-12 | 广东工业大学 | Blending surfaces number recognition methods is included without demarcation curved surface based on CNN |
CN107633236A (en) * | 2017-09-28 | 2018-01-26 | 北京达佳互联信息技术有限公司 | Picture material understanding method, device and server |
CN107633236B (en) * | 2017-09-28 | 2019-01-22 | 北京达佳互联信息技术有限公司 | Picture material understanding method, device and server |
CN109857864A (en) * | 2019-01-07 | 2019-06-07 | 平安科技(深圳)有限公司 | Text sentiment classification method, device, computer equipment and storage medium |
WO2020248581A1 (en) * | 2019-06-11 | 2020-12-17 | 中国科学院自动化研究所 | Graph data identification method and apparatus, computer device, and storage medium |
CN111179235A (en) * | 2019-12-23 | 2020-05-19 | 沈阳先进医疗设备技术孵化中心有限公司 | Image detection model generation method and device, and application method and device |
CN111179235B (en) * | 2019-12-23 | 2024-03-08 | 东软医疗系统股份有限公司 | Image detection model generation method and device, and application method and device |
Also Published As
Publication number | Publication date |
---|---|
CN106372656B (en) | 2019-05-10 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
 | C06 | Publication | |
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |
 | TR01 | Transfer of patent right | Effective date of registration: 20211227. Patentee before: TONGGUAN TECHNOLOGY (SHENZHEN) CO.,LTD., 518057 No. 04, 22/F, International Student Entrepreneurship Building, No. 29, South Ring Road, High-tech Zone, Nanshan District, Shenzhen, Guangdong Province. Patentee after: Gao Qianwen, 230000 No. 67, Jiatang Village North, Dayang Town, Luyang District, Hefei City, Anhui Province. |