CN105224948A - Method for generating a max-margin deep generative model based on image processing - Google Patents

Method for generating a max-margin deep generative model based on image processing

Info

Publication number
CN105224948A
Authority
CN
China
Prior art keywords
image
latent variable
max-margin
parameter
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510609808.4A
Other languages
Chinese (zh)
Other versions
CN105224948B (en)
Inventor
朱军
李崇轩
张钹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Real AI Technology Co Ltd
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201510609808.4A priority Critical patent/CN105224948B/en
Publication of CN105224948A publication Critical patent/CN105224948A/en
Application granted granted Critical
Publication of CN105224948B publication Critical patent/CN105224948B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for generating a max-margin deep generative model based on image processing, comprising: building a set of labeled image samples, obtaining the hidden representation of each image sample, and obtaining a max-margin regularization term; obtaining the parameters of the latent-variable distribution, sampling the latent variables according to those parameters, and computing the relative entropy between the variational posterior and the prior of the latent variables; obtaining the parameters of the generative distribution of each image sample, performing a probabilistic reconstruction of the sample according to those parameters, and obtaining the probabilistic reconstruction error; and summing the max-margin regularization term, the relative entropy and the probabilistic reconstruction error to obtain the max-margin deep generative model. The max-margin deep generative model provided by the invention improves performance on discriminative tasks while retaining the data-modeling ability of deep generative models, can handle large-scale data, and can be applied to image-processing tasks.

Description

Method for generating a max-margin deep generative model based on image processing
Technical field
The present invention relates to the field of data mining and machine learning, and in particular to a method for generating a max-margin deep generative model based on image processing.
Background art
With the development of deep learning, feedforward neural networks have achieved remarkable results in many fields, such as speech recognition, image classification and text classification. Convolutional neural networks in particular have taken the leading position on image-recognition datasets. However, a plain feedforward neural network cannot model training samples probabilistically, and therefore cannot handle inputs with missing information. Deep generative models, which extract high-order nonlinear features of the samples, perform well at data modeling, sample generation and missing-data prediction. However, the performance of generative models on purely discriminative tasks is generally inferior to that of discriminative models; moreover, feedforward neural networks have an explicit classification-error objective, whereas inference in deep generative models remains a challenge.
At present, many scholars have carried out extensive research on generative models and related techniques, described as follows:
Max-margin learning is very effective for discriminative models, for example Support Vector Machines and multi-output max-margin Markov networks. Some researchers have therefore introduced latent variables into max-margin models, which significantly improves the discriminative performance of generative models. However, these methods only improve the discriminative power of shallow generative models and have difficulty handling increasingly complex data.
Other scholars have proposed approximating the posterior distribution of the latent variables with a variational inference recognition model (encoding network) built independently of the generative model (decoding network), which can essentially be understood as a probabilistic autoencoder. This method can efficiently learn complex latent representations, but it does not explore the performance of the features learned by the deep generative model on discriminative tasks, so its discriminative power is poor. Nor does it investigate how to apply convolution operations in the decoding network.
Still other scholars have proposed an unpooling operation and, by nonlinearly combining unpooling and convolution, constructed a deterministic network from handcrafted features to chair images. However, that method is a deterministic network rather than a generative model and involves no probabilistic modeling; its top-level features are hand-designed rather than learned automatically; and it does not learn an encoding network from the data to the hidden representation.
An ideal deep generative model should have the following properties: it should be comparable to feedforward convolutional neural networks on discriminative tasks; it should model data well, automatically learn deep representations and handle missing data; and its parameters should be fast to learn. None of the above prior-art schemes provides such a complete deep generative model.
Summary of the invention
The technical problem to be solved by the present invention is that the prior art lacks a deep generative model applicable to image processing that performs well on discriminative tasks, automatically learns deep representations, handles missing data, and whose parameters can be learned quickly.
To achieve the above object, the invention provides a method for generating a max-margin deep generative model based on image processing, comprising:
building a set of labeled image samples, obtaining the hidden representation of each image sample in the set, and combining the hidden representation with the label of the image sample to obtain a max-margin regularization term;
obtaining the parameters of the latent-variable distribution, sampling the latent variables according to those parameters, and computing the relative entropy between the variational posterior and the prior of the latent variables;
obtaining the parameters of the generative distribution of each image sample, performing a probabilistic reconstruction of the image sample according to those parameters, and obtaining the probabilistic reconstruction error;
summing the max-margin regularization term, the relative entropy and the probabilistic reconstruction error to obtain the max-margin deep generative model;
wherein the parameters of the latent-variable distribution are computed from the hidden representation;
and the parameters of the generative distribution of each image sample are computed from the latent variables.
Preferably, the hidden representation of each image sample in the set is computed with an encoding network;
and the parameters of the generative distribution of each image sample are computed from the latent variables with a decoding network.
Preferably, the decoding network comprises:
unpooling: expanding each unit of the latent variable into a square composed of several sub-units, where the value of the upper-left sub-unit of the square equals the value of the latent-variable unit and the values of the remaining sub-units are 0, obtaining the unpooling result;
convolution: applying a convolution to the unpooling result;
nonlinear activation: applying a nonlinear activation to the convolution result;
repeating the unpooling, convolution and nonlinear activation steps, stacking the results obtained after each repetition, and performing stochastic sampling according to the probability distribution of the result.
Preferably, the method further comprises generating random images according to the max-margin deep generative model, comprising:
obtaining the latent variables of the model;
mapping the latent variables, with the decoding network of the model, into a first matrix of the same size as the image to be generated, each element of the first matrix representing the mean of the corresponding pixel of the image to be generated;
and performing stochastic sampling of each pixel of the image sample according to the means and the pixel-distribution parameters assumed by the model, obtaining a randomly generated image.
Preferably, the method further comprises classifying images according to the max-margin deep generative model, comprising:
inputting a first image to be classified;
obtaining the hidden representation of the first image with the encoding network of the model;
mapping the hidden representation of the first image into the image-label space;
and outputting the class of the first image.
Preferably, the method further comprises predicting missing image pixels according to the max-margin deep generative model, comprising:
inputting a second image with missing pixels, the positions of the missing pixels of the second image being known;
obtaining the hidden representation of the second image with the encoding network of the model;
stochastically sampling the latent variables of the second image according to its hidden representation;
mapping the latent variables of the second image, with the decoding network of the model, into a second matrix of the same size as the second image, each position of the second matrix representing the mean of the probabilistic reconstruction of the corresponding pixel of the second image;
and replacing the pixel values at the missing positions of the second image with the probabilistic reconstruction means, taking the result as the new input, and repeating the steps of obtaining the hidden representation, obtaining the latent variables and obtaining the reconstruction means.
Preferably, the set of labeled image samples is contained in a training set and is a fixed-size subset of the training set.
Preferably, the max-margin regularization term is obtained from the hidden representations and the labels of the image samples by constructing a linear Support Vector Machine.
Preferably, the parameters of the latent-variable distribution are computed from the hidden representation by a linear mapping;
and the latent variables have a fixed dimension and are obtained by sampling with a random number generator according to the parameters of the latent-variable distribution.
Preferably, after the max-margin deep generative model is obtained, the model is optimized with stochastic gradient descent.
The invention provides a method for generating a max-margin deep generative model. On the one hand, the model can learn latent representations that are more effective for discriminative tasks. On the other hand, it retains the data-modeling ability of deep generative models: it can randomly generate meaningful images and can predict the missing part of an image with missing pixels, with generative ability comparable to deep generative models in the mean-squared-error sense. When test images have missing pixels, the max-margin generative model obtains better classification results than convolutional neural networks and ordinary deep generative models. Moreover, because stochastic gradient descent is used to optimize the encoding network, the decoding network and the max-margin classifier simultaneously, the training time of the max-margin deep generative model is only about twice that of a conventional convolutional neural network, so the method can be applied to large-scale data.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered limiting of the invention. Throughout the drawings, the same reference symbols denote the same parts. In the drawings:
Fig. 1 is a flowchart of the method for generating a max-margin deep generative model based on image processing provided by the first embodiment of the invention.
Detailed description of the embodiments
The specific embodiments of the present invention are described in further detail below with reference to the drawings and examples. The following examples are intended to illustrate the invention but not to limit its scope.
Embodiment one
This embodiment provides a method for generating a max-margin deep generative model based on image processing, comprising:
S101: given a subset of the training set, i.e. a set of images, compute the hidden representations with the encoding network, build a Support Vector Machine, and compute the max-margin regularization term. The sub-steps of S101 are as follows:
S1011: assume that each image sample x_n in the training set is a color image, i.e. a three-dimensional matrix, with a label y_n ∈ {1...C}, where y_n denotes the class of the image and C is the total number of classes, the classes being represented abstractly by integers. While building the training set, a part of it is split off as a validation set.
S1012: randomly select a subset of size k from the training set and use a deep convolutional neural network, i.e. the encoding network, to compute the hidden representation f(x_n; φ) of each sample in the subset, where φ denotes all weight and bias parameters of the network and f(x_n; φ) is a function of the input x_n with parameters φ.
The result is stored as a d-dimensional vector; correspondingly, f(x_n, y; φ) denotes a vector of length d × C whose entries from d × y + 1 to d × (y + 1) take the values of f(x_n; φ) and whose remaining entries are 0.
S1013: from the hidden representations f(x_n, y; φ) obtained in step S1012 and the input labels y_n, build a linear Support Vector Machine with weight and bias parameters ω, and at the same time obtain the max-margin regularization term of formula (1):
R_n = max_y [ l_n(y) - ω^T Δf(x_n; φ) ]    (1)
where y_n is the correct label and y is one of all possible labels being enumerated; l_n(y) is the loss incurred when the Support Vector Machine predicts the currently enumerated label y instead of the correct label y_n; Δf(x_n; φ) = f(x_n, y_n; φ) - f(x_n, y; φ) is the difference of feature vectors; and the max_y operation selects, among the losses over all possible labels, the largest one as the final regularization term.
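As an illustration of formula (1), the following numpy sketch evaluates the regularization term for a single sample, assuming 0-indexed integer labels, a simple 0-1 label loss for l_n(y) and an SVM weight vector ω with the bias absorbed; the function and variable names are illustrative and not part of the patent:

    import numpy as np

    def max_margin_regularizer(f_x, y_n, omega, num_classes):
        # R_n of formula (1) for one sample.
        # f_x:   d-dimensional hidden representation f(x_n; phi)
        # y_n:   correct label, an integer in {0, ..., num_classes - 1}
        # omega: weight vector of the linear SVM, length d * num_classes
        d = f_x.shape[0]

        def long_vector(y):
            # f(x_n, y; phi): a d*C vector whose y-th block holds f_x and is 0 elsewhere
            v = np.zeros(d * num_classes)
            v[d * y:d * (y + 1)] = f_x
            return v

        values = []
        for y in range(num_classes):
            l_ny = 0.0 if y == y_n else 1.0              # 0-1 label loss (an assumption)
            delta_f = long_vector(y_n) - long_vector(y)  # difference of feature vectors
            values.append(l_ny - omega @ delta_f)        # l_n(y) - omega^T * delta_f
        return max(values)                               # maximum over enumerated labels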
S102: compute the parameters of the latent-variable distribution, sample the latent variables, and compute the relative entropy, i.e. the KL divergence (Kullback-Leibler divergence), between the variational posterior and the prior of the latent variables. The sub-steps of S102 are as follows:
S1021: from the hidden representation f(x_n; φ) obtained in step S1012, compute the distribution parameters of the latent variables by a linear mapping, i.e. the mean and variance of the Gaussian distribution of the latent variables, as in formulas (2)-(3):
μ_n = W_1^T f(x_n; φ) + b_1    (2)
log σ_n² = W_2^T f(x_n; φ) + b_2    (3)
where μ_n is the mean and σ_n² the variance of the Gaussian distribution of the latent variables, φ denotes all weight and bias parameters of the network, and W_1, W_2, b_1, b_2 are the weight and bias parameters of the above linear mappings. For convenience of notation these parameters are also absorbed into φ, which finally gives the results of formulas (4)-(5):
μ_n = h_1(x_n; φ)    (4)
log σ_n² = h_2(x_n; φ)    (5)
where h denotes the composition of f from step S1012 with the above linear mappings.
S1022: according to the distribution parameters μ_n and σ_n² computed in step S1021, for each sample of step S1012 draw a fixed-dimensional noise vector ε_n ~ N(0, 1) with a random number generator, each dimension following an independent standard Gaussian distribution, and apply the change-of-variables trick for the Gaussian distribution to obtain formula (6):
z_n = μ_n + σ_n ⊙ ε_n    (6)
where ⊙ denotes the elementwise product, z_n is the latent variable, μ_n is the mean of the Gaussian distribution and σ_n is its standard deviation.
The KL divergence between the variational posterior and the prior of the latent variables is then computed for each sample as formula (7):
K_n = 0.5 × Σ_j ( 1 + log(σ_n,j²) - μ_n,j² - σ_n,j² )    (7)
where μ_n is the mean and σ_n the standard deviation of the Gaussian distribution, and the subscripts denote the j-th dimension of the n-th sample.
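A minimal numpy sketch of steps S1021 and S1022 follows, under the assumption that the linear-mapping parameters W_1, b_1, W_2, b_2 are given as arrays; the names are illustrative only:

    import numpy as np

    def sample_latent_and_kl_term(f_x, W1, b1, W2, b2, rng=None):
        # Formulas (2)-(7) for one sample: Gaussian parameters of the latent variable,
        # a reparameterized sample z_n, and the closed-form term K_n of formula (7).
        rng = rng or np.random.default_rng()
        mu = W1.T @ f_x + b1                  # (2)/(4): mean mu_n
        log_var = W2.T @ f_x + b2             # (3)/(5): log of the variance sigma_n^2
        sigma = np.exp(0.5 * log_var)         # standard deviation sigma_n

        eps = rng.standard_normal(mu.shape)   # epsilon_n ~ N(0, 1) in each dimension
        z = mu + sigma * eps                  # (6): change-of-variables trick

        k = 0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))  # (7)
        return z, k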
S103: compute the parameters of the generative distribution of each sample with the decoding network, perform a probabilistic reconstruction of the image, and compute the probabilistic reconstruction error. The sub-steps of S103 are as follows:
S1031: reshape the latent variable z_n obtained in step S1022 from a vector into a three-dimensional matrix, and use a deep convolutional neural network, i.e. the decoding network (whose weight and bias parameters are θ), to compute for each sample the parameters of the generative distribution of each pixel of the image, as in formula (8):
μ_n' = g(z_n; θ)    (8)
where g is the function represented by the neural network and μ_n' is preferably the mean of a Bernoulli variable.
S1032: according to the parameters μ_n' of the generative distribution obtained in step S1031, perform a probabilistic reconstruction of the corresponding sample of step S1012. The probabilistic reconstruction error can be approximated by sampling or obtained in analytical form; its analytical form is given by formula (9):
E_n = Σ_j [ x_n,j log μ'_n,j + (1 - x_n,j) log(1 - μ'_n,j) ]    (9)
where x_n is the original input image, μ_n' is the mean of the reconstructed image, E_n is the analytical form of the probabilistic reconstruction error, and the subscripts denote the j-th dimension of the n-th sample.
It should be noted that the pixels of the input image are assumed to follow Bernoulli distributions whose means are the corresponding outputs, which yields the probabilistic reconstruction error in cross-entropy form.
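A short sketch of formula (9), assuming pixel values scaled to [0, 1] and the Bernoulli means μ' produced by formula (8); the clipping constant is added here only for numerical safety and is not part of the patent:

    import numpy as np

    def reconstruction_error(x, mu_prime, eps=1e-7):
        # Formula (9): probabilistic reconstruction error in cross-entropy form.
        mu_prime = np.clip(mu_prime, eps, 1.0 - eps)  # avoid log(0); an added safeguard
        return np.sum(x * np.log(mu_prime) + (1.0 - x) * np.log(1.0 - mu_prime))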
It should also be noted that the probabilistic decoding network used in model training in steps S1031 and S1032 specifically comprises the following operations (a sketch follows this list):
Unpooling: the inverse of the pooling operation used in step S1012; each unit of the feature map is expanded into a 2 × 2 or 3 × 3 square whose upper-left value equals the value of the unit and whose remaining values are 0;
Convolution: a convolution operation is applied to the unpooling result of (a);
Nonlinear activation: a nonlinear activation is applied to the convolution result of (b), i.e. the maximum of each value and 0 is taken as the output of the activation function;
The structures obtained by the unpooling, convolution and nonlinear activation steps are stacked in order, and stochastic sampling is performed according to the probability distribution of the result.
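The unpooling and activation operations can be sketched as follows, assuming feature maps stored as (channels, height, width) arrays; the convolution of step (b) would be any standard 2-D convolution applied between the two, and stacking several unpooling-convolution-activation rounds forms the decoding network g(z_n; θ) of formula (8):

    import numpy as np

    def unpool(feature_map, size=2):
        # Step (a): expand each unit into a size x size square whose upper-left entry
        # carries the unit's value and whose remaining entries are 0.
        c, h, w = feature_map.shape
        out = np.zeros((c, h * size, w * size))
        out[:, ::size, ::size] = feature_map
        return out

    def nonlinear_activation(x):
        # Step (c): take the maximum of each value and 0 as the activation output.
        return np.maximum(x, 0.0)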
S104: obtain the objective function of the max-margin deep generative model; this function constitutes the max-margin deep generative model. Optimize the objective with stochastic gradient descent and judge whether it can be optimized further. The sub-steps of S104 are as follows:
S1041: take the weighted sum of the KL divergence K_n between the variational posterior and the prior obtained in step S1022, the probabilistic reconstruction error E_n obtained in step S1032 and the max-margin regularization term R_n obtained in step S1013, giving the objective function (which is the max-margin deep generative model) of formula (10):
min_{θ,φ,ω} Σ_n ( K_n + E_n + λ R_n )    (10)
where K_n is the KL divergence, E_n is the probabilistic reconstruction error, K_n + E_n is the variational upper bound of the negative log-likelihood, and λ is a control parameter governing the relative weight of the variational bound and the max-margin regularization term.
S1042: optimize the objective function by stochastic gradient descent.
If it is judged that the objective function can still be improved, return to step S1011 and randomly sample a new subset;
if it is judged that the objective function can no longer be improved, i.e. it no longer decreases, proceed to the next step.
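For illustration, the objective of formula (10) over one minibatch could be assembled as in the following sketch, which reuses the helper functions sketched above and treats encode and decode as placeholders for the encoding and decoding networks; the gradient computation and parameter updates of step S1042 are left to the stochastic-gradient optimizer and are not shown:

    def minibatch_objective(batch, encode, decode, omega, W1, b1, W2, b2,
                            num_classes, lam):
        # Formula (10): sum over the minibatch of K_n + E_n + lambda * R_n.
        total = 0.0
        for x_n, y_n in batch:
            f_x = encode(x_n)                                       # f(x_n; phi)
            z_n, k_n = sample_latent_and_kl_term(f_x, W1, b1, W2, b2)
            mu_prime = decode(z_n)                                  # formula (8)
            e_n = reconstruction_error(x_n.ravel(), mu_prime.ravel())
            r_n = max_margin_regularizer(f_x, y_n, omega, num_classes)
            total += k_n + e_n + lam * r_n
        return total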
S105: select the optimal parameters according to the classification error on the validation set of step S1011, completing the optimization of the parameters of the encoding network, the decoding network and the max-margin classifier.
This embodiment provides a method for generating a max-margin deep generative model. By introducing a max-margin regularization term, it improves the performance of the deep generative model on discriminative tasks, while retaining the data-modeling ability of the deep generative model and the ability to handle missing data. When the test data contain missing values, the classification results of the max-margin deep generative model are better than those of convolutional neural networks and ordinary deep generative models. Moreover, since stochastic gradient descent is used to train the decoding network, the encoding network and the max-margin classifier jointly, the method can handle large-scale data.
Embodiment two
This embodiment provides a method for randomly generating images with the max-margin deep generative model of embodiment one, comprising:
performing stochastic sampling according to the prior distribution of the latent variable z, for example independent standard Gaussian distributions in each dimension;
mapping the sampled latent variable z, with the decoding network of step S1031 of embodiment one, into a matrix of the same size as the image to be generated, each element of the matrix representing the mean of the corresponding pixel of the image to be generated;
performing stochastic sampling of each pixel of the image according to the obtained pixel means and the pixel-distribution parameters assumed by the model in step S1031 of embodiment one, which yields a randomly generated image; the randomly sampled image approximately follows the distribution of the training data.
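A sketch of this generation procedure, in which decode stands for the trained decoding network and the Bernoulli pixel model of embodiment one is assumed; the names are illustrative only:

    import numpy as np

    def generate_random_image(decode, latent_dim, rng=None):
        # Sample z from the standard Gaussian prior, decode it to per-pixel means,
        # then sample every pixel from a Bernoulli distribution with that mean.
        rng = rng or np.random.default_rng()
        z = rng.standard_normal(latent_dim)        # prior sample
        mu_prime = decode(z)                       # image-sized matrix of pixel means
        return (rng.random(mu_prime.shape) < mu_prime).astype(float)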
Embodiment three
This embodiment provides a method for classifying an input image sample with the max-margin deep generative model of embodiment one, comprising:
letting the input first image be x_1, and mapping the first image with the encoding network of step S1012 of embodiment one to its hidden representation f(x_1; φ);
mapping the hidden representation f(x_1; φ) of the first image into the image-label space with the Support Vector Machine of step S1013 of embodiment one, and outputting the class of the first image, thereby classifying the first image.
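A sketch of the classification step, in which encode stands for the trained encoding network and omega for the learned SVM weights laid out in per-class blocks as in step S1012; the names are illustrative only:

    import numpy as np

    def classify_image(x, encode, omega, num_classes):
        # Score the hidden representation against every label block of the linear SVM
        # weights and return the highest-scoring class.
        f_x = encode(x)                            # f(x_1; phi)
        d = f_x.shape[0]
        scores = [omega[d * y:d * (y + 1)] @ f_x for y in range(num_classes)]
        return int(np.argmax(scores))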
Embodiment four
This embodiment provides a method for predicting the missing pixels of an image with the max-margin deep generative model of embodiment one, comprising:
letting the input second image be x_2, with the positions of the missing part known, and mapping the second image with the encoding network of step S1012 of embodiment one to its hidden representation f(x_2; φ);
according to the hidden representation f(x_2; φ) of the second image, performing stochastic sampling as in step S102 of embodiment one to obtain the latent variables of the second image;
mapping the latent variables of the second image, with the decoding network of step S1031 of embodiment one, into a matrix of the same size as the second image, each position of the matrix representing the mean of the probabilistic reconstruction of the corresponding pixel of the second image;
replacing the pixel values of the missing part of the second image with the obtained means, taking the result as the new input, and repeating the above steps for several rounds, which yields the prediction of the missing pixels of the second image.
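A sketch of this imputation loop, in which encode, sample_latent and decode stand for the trained networks of embodiment one and missing_mask is a boolean array marking the known missing positions; the number of rounds is an illustrative choice:

    def predict_missing_pixels(x, missing_mask, encode, sample_latent, decode, rounds=10):
        # Iteratively refill the missing positions with the probabilistic
        # reconstruction means produced by the decoding network.
        x = x.copy()
        for _ in range(rounds):
            f_x = encode(x)                            # hidden representation of current image
            z = sample_latent(f_x)                     # stochastic latent sample (step S102)
            mu_prime = decode(z)                       # matrix of reconstruction means
            x[missing_mask] = mu_prime[missing_mask]   # replace only the missing pixels
        return x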
Embodiment five
This embodiment tests the max-margin deep generative model of embodiment one on the handwritten-digit recognition dataset MNIST and the street-view house-number recognition dataset SVHN. The error rates of the max-margin deep generative model on the two datasets are 0.45% and 3.09% respectively, a significant improvement over the 1.04% and 25.3% of unsupervised deep generative models, and comparable to the 0.39% and 1.92% of the best convolutional neural networks.
The above embodiments are intended only to illustrate the invention and not to limit it. Those of ordinary skill in the relevant art may make various changes and modifications without departing from the spirit and scope of the invention; therefore all equivalent technical solutions also fall within the scope of the invention, and the patent protection scope of the invention shall be defined by the claims.

Claims (10)

1. A method for generating a max-margin deep generative model based on image processing, characterized in that it comprises:
building a set of labeled image samples, obtaining the hidden representation of each image sample in the set, and combining the hidden representation with the label of the image sample to obtain a max-margin regularization term;
obtaining the parameters of the latent-variable distribution, sampling the latent variables according to those parameters, and computing the relative entropy between the variational posterior and the prior of the latent variables;
obtaining the parameters of the generative distribution of each image sample, performing a probabilistic reconstruction of the image sample according to those parameters, and obtaining the probabilistic reconstruction error;
summing the max-margin regularization term, the relative entropy and the probabilistic reconstruction error to obtain the max-margin deep generative model;
wherein the parameters of the latent-variable distribution are computed from the hidden representation;
and the parameters of the generative distribution of each image sample are computed from the latent variables.
2. The generation method according to claim 1, characterized in that
the hidden representation of each image sample in the set is computed with an encoding network;
and the parameters of the generative distribution of each image sample are computed from the latent variables with a decoding network.
3. The generation method according to claim 2, characterized in that the decoding network comprises:
unpooling: expanding each unit of the latent variable into a square composed of several sub-units, where the value of the upper-left sub-unit of the square equals the value of the latent-variable unit and the values of the remaining sub-units are 0, obtaining the unpooling result;
convolution: applying a convolution to the unpooling result;
nonlinear activation: applying a nonlinear activation to the convolution result;
repeating the unpooling, convolution and nonlinear activation steps, stacking the results obtained after each repetition, and performing stochastic sampling according to the probability distribution of the result.
4. The generation method according to claim 2, characterized in that it further comprises generating random images according to the max-margin deep generative model, comprising:
obtaining the latent variables of the model;
mapping the latent variables, with the decoding network of the model, into a first matrix of the same size as the image to be generated, each element of the first matrix representing the mean of the corresponding pixel of the image to be generated;
performing stochastic sampling of each pixel of the image sample according to the means and the pixel-distribution parameters assumed by the model, obtaining a randomly generated image.
5. The generation method according to claim 2, characterized in that it further comprises classifying images according to the max-margin deep generative model, comprising:
inputting a first image to be classified;
obtaining the hidden representation of the first image with the encoding network of the model;
mapping the hidden representation of the first image into the image-label space;
outputting the class of the first image.
6. The generation method according to claim 2, characterized in that it further comprises predicting missing image pixels according to the max-margin deep generative model, comprising:
inputting a second image with missing pixels, the positions of the missing pixels of the second image being known;
obtaining the hidden representation of the second image with the encoding network of the model;
stochastically sampling the latent variables of the second image according to its hidden representation;
mapping the latent variables of the second image, with the decoding network of the model, into a second matrix of the same size as the second image, each position of the second matrix representing the mean of the probabilistic reconstruction of the corresponding pixel of the second image;
replacing the pixel values at the missing positions of the second image with the probabilistic reconstruction means, taking the result as the new input, and repeating the steps of obtaining the hidden representation, obtaining the latent variables and obtaining the reconstruction means.
7. The generation method according to claim 1, characterized in that the set of labeled image samples is contained in a training set and is a fixed-size subset of the training set.
8. The generation method according to claim 1, characterized in that the max-margin regularization term is obtained from the hidden representations and the labels of the image samples by constructing a linear Support Vector Machine.
9. The generation method according to claim 1, characterized in that
the parameters of the latent-variable distribution are computed from the hidden representation by a linear mapping;
and the latent variables have a fixed dimension and are obtained by sampling with a random number generator according to the parameters of the latent-variable distribution.
10. The generation method according to claim 1, characterized in that, after the max-margin deep generative model is obtained, the model is optimized with stochastic gradient descent.
CN201510609808.4A 2015-09-22 2015-09-22 Method for generating a max-margin deep generative model based on image processing Active CN105224948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510609808.4A CN105224948B (en) 2015-09-22 2015-09-22 Method for generating a max-margin deep generative model based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510609808.4A CN105224948B (en) 2015-09-22 2015-09-22 Method for generating a max-margin deep generative model based on image processing

Publications (2)

Publication Number Publication Date
CN105224948A true CN105224948A (en) 2016-01-06
CN105224948B CN105224948B (en) 2019-03-01

Family

ID=54993908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510609808.4A Active CN105224948B (en) 2015-09-22 2015-09-22 Method for generating a max-margin deep generative model based on image processing

Country Status (1)

Country Link
CN (1) CN105224948B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100074053A1 (en) * 2008-07-18 2010-03-25 William Marsh Rice University Methods for concurrent generation of velocity models and depth images from seismic data
US20140067738A1 (en) * 2012-08-28 2014-03-06 International Business Machines Corporation Training Deep Neural Network Acoustic Models Using Distributed Hessian-Free Optimization
CN104778070A (en) * 2014-01-15 2015-07-15 富士通株式会社 Extraction method and equipment for hidden variables and information extraction method and equipment

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718959A (en) * 2016-01-27 2016-06-29 中国石油大学(华东) Object identification method based on own coding
CN105718959B (en) * 2016-01-27 2018-11-16 中国石油大学(华东) A kind of object identification method based on from coding
CN106127230B (en) * 2016-06-16 2019-10-01 上海海事大学 Image-recognizing method based on human visual perception
CN106127230A (en) * 2016-06-16 2016-11-16 上海海事大学 Image-recognizing method based on human visual perception
CN106203628A (en) * 2016-07-11 2016-12-07 深圳先进技术研究院 A kind of optimization method strengthening degree of depth learning algorithm robustness and system
CN106203628B (en) * 2016-07-11 2018-12-14 深圳先进技术研究院 A kind of optimization method and system enhancing deep learning algorithm robustness
CN106355191A (en) * 2016-08-12 2017-01-25 清华大学 Deep generating network random training algorithm and device
CN106778700A (en) * 2017-01-22 2017-05-31 福州大学 One kind is based on change constituent encoder Chinese Sign Language recognition methods
CN107463953B (en) * 2017-07-21 2019-11-19 上海媒智科技有限公司 Image classification method and system based on quality insertion in the noisy situation of label
CN107463953A (en) * 2017-07-21 2017-12-12 上海交通大学 Image classification method and system based on quality insertion in the case of label is noisy
CN109685087A (en) * 2017-10-18 2019-04-26 富士通株式会社 Information processing method and device and information detecting method and device
CN109685087B (en) * 2017-10-18 2022-11-01 富士通株式会社 Information processing method and device and information detection method
CN109685087B9 (en) * 2017-10-18 2023-02-03 富士通株式会社 Information processing method and device and information detection method
CN113435488A (en) * 2021-06-17 2021-09-24 深圳大学 Image sampling probability improving method and application thereof
CN113435488B (en) * 2021-06-17 2023-11-07 深圳大学 Image sampling probability improving method and application thereof
CN113642447A (en) * 2021-08-09 2021-11-12 杭州弈胜科技有限公司 Monitoring image vehicle detection method and system based on convolutional neural network cascade
CN113642447B (en) * 2021-08-09 2022-03-08 杭州弈胜科技有限公司 Monitoring image vehicle detection method and system based on convolutional neural network cascade
CN114831621A (en) * 2022-05-23 2022-08-02 西安大数据与人工智能研究院 Distributed ultrafast magnetic resonance imaging method and imaging system thereof
CN114831621B (en) * 2022-05-23 2023-05-26 西安大数据与人工智能研究院 Distributed ultrafast magnetic resonance imaging method and imaging system thereof
CN115563655A (en) * 2022-11-25 2023-01-03 承德石油高等专科学校 User dangerous behavior identification method and system for network security

Also Published As

Publication number Publication date
CN105224948B (en) 2019-03-01

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210524

Address after: 100084 a1901, 19th floor, building 8, yard 1, Zhongguancun East Road, Haidian District, Beijing

Patentee after: Beijing Ruili Wisdom Technology Co.,Ltd.

Address before: 100084 mailbox, 100084-82 Tsinghua Yuan, Beijing, Haidian District, Beijing

Patentee before: TSINGHUA University

EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20160106

Assignee: Beijing Intellectual Property Management Co.,Ltd.

Assignor: Beijing Ruili Wisdom Technology Co.,Ltd.

Contract record no.: X2023110000073

Denomination of invention: A Method of Generating Maximum Interval Depth Generative model Based on Image Processing

Granted publication date: 20190301

License type: Common License

Record date: 20230531