CN109583563A - Variational expectation-maximization routing algorithm based on a capsule network - Google Patents

Variational expectation-maximization routing algorithm based on a capsule network

Info

Publication number
CN109583563A
CN109583563A
Authority
CN
China
Prior art keywords
capsule
updates
parameter
probability
distribution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811250635.1A
Other languages
Chinese (zh)
Inventor
徐宁
楚昕
刘小峰
缪晓宇
姚潇
蒋爱民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN201811250635.1A priority Critical patent/CN109583563A/en
Publication of CN109583563A publication Critical patent/CN109583563A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a variational expectation-maximization (VBEM) routing algorithm based on the capsule network. The pose matrices of lower-level capsules are treated as the data points of a GMM, and the pose matrix of a higher-level capsule is treated as a Gaussian distribution; the VBEM routing algorithm clusters the data points into Gaussian distributions and computes their distribution parameters, grouping lower-level capsules at runtime to form a higher-level capsule and then computing the activation value a from the updated Gaussian distribution parameters. The present invention uses the VBEM algorithm in the capsule network. Compared with the EM algorithm, this method requires almost no additional computation and resolves the intractability of the maximum likelihood method: based on variational inference, each factor of the factorized distribution is optimized in turn to complete the overall optimization, so an approximate solution is obtained while the singularity that arises when a Gaussian component "collapses" onto a single data point is avoided; moreover, the number of latent classes k can be determined automatically within the algorithm, and overfitting is avoided even when k is large.

Description

Variational expectation-maximization routing algorithm based on a capsule network
Technical field
The present invention relates to the fields of variational inference and Bayesian methods, and in particular to a variational expectation-maximization routing algorithm based on a capsule network, used to cluster lower-level capsules into higher-level capsules.
Background technique
The capsule network is a new type of neural network model. A capsule is a spatial concept whose concrete form can be a vector or a matrix of scalars, and can be chosen according to the features of the input data. In the field of handwritten digit recognition, a capsule network needs only 3 hidden layers to match, or even exceed, the performance of a deep neural network.
The expectation-maximization (EM) algorithm is a general method for finding maximum-likelihood solutions of probabilistic models with latent variables. A Gaussian mixture model (GMM) is a linear combination of Gaussian distribution functions and is widely used in data mining, machine learning, and statistical analysis. In many applications the parameters are determined by the maximum likelihood method, usually via the EM algorithm. However, the maximum likelihood method has significant limitations: it requires computing the expectation of the complete-data log-likelihood with respect to the posterior probability distribution of the latent variables, and for many models in practical applications, computing this posterior distribution, or expectations with respect to it, is infeasible. Moreover, under maximum likelihood a singularity arises when a Gaussian component "collapses" onto a single data point; this singularity does not occur in the Bayesian method.
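For contrast with the VBEM approach below, one maximum-likelihood EM iteration for a diagonal-covariance GMM can be sketched as follows. This is a generic textbook illustration, not the patent's routing algorithm, and all names are illustrative assumptions:

```python
# Generic single EM iteration for a diagonal-covariance GMM (maximum
# likelihood); illustrative sketch, not the patent's code.
import numpy as np

def em_step(X, pi, mu, var):
    """X: (N, D) data; pi: (K,) mixing coefficients; mu, var: (K, D)."""
    # E step: responsibilities r[n, k] proportional to pi_k * N(x_n | mu_k, var_k)
    d2 = (X[:, None, :] - mu[None, :, :]) ** 2
    log_p = -0.5 * (d2 / var + np.log(2 * np.pi * var)).sum(-1)
    log_r = np.log(pi) + log_p
    r = np.exp(log_r - log_r.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    # M step: maximum-likelihood re-estimation of pi, mu, var.
    # Note: nothing prevents var -> 0 when a component collapses onto a single
    # data point; this is exactly the singularity the Bayesian treatment avoids.
    Nk = r.sum(0)
    pi = Nk / len(X)
    mu = (r.T @ X) / Nk[:, None]
    var = (r.T @ X**2) / Nk[:, None] - mu**2
    return pi, mu, var, r
```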
The method of variational inference is based on a factorized approximation of the true posterior probability distribution: each factor is optimized in turn to complete the overall optimization. With the variational Bayes method, an approximate solution is obtained while the singularity that arises when a Gaussian component "collapses" onto a single data point is avoided. The present invention uses the VBEM algorithm in the capsule network; compared with the EM algorithm, this method requires almost no additional computation, resolves the main difficulty of the maximum likelihood method, determines the number of latent classes k automatically within the algorithm, and avoids overfitting even when k is large. VBEM is a two-stage iterative optimization algorithm, generally divided into a VBE step and a VBM step: the VBE step computes the posterior distribution from the current parameters, and the VBM step updates the parameters using the optimal-solution forms, iterating until convergence.
Summary of the invention
To solve the problem that the routing algorithms of existing capsule networks cannot be computed for some practical problems, the present invention discloses a variational expectation-maximization routing algorithm based on the capsule network. The purpose of the variational expectation-maximization routing algorithm is to group capsules so as to form part-whole relationships: the pose matrices of lower-level capsules are treated as the data points of a GMM, the pose matrix of a higher-level capsule is treated as a Gaussian distribution, and the VBEM routing algorithm clusters the data points into Gaussian distributions and computes their distribution parameters; that is, lower-level capsules are grouped at runtime to form a higher-level capsule, and the activation value a is then computed from the updated Gaussian distribution parameters.
Technical solution: to achieve the above object, the technical solution adopted by the present invention is as follows:
In the variational expectation-maximization routing algorithm based on the capsule network, the VBEM routing algorithm is used in the capsule network layers to compute the output pose matrix and activation value of each capsule. The VBEM routing algorithm models the pose matrix of the parent capsule with a Gaussian model having 16 values of μ and 16 values of σ; each μ represents one element of the pose matrix, and σ is used to compute the activation value a. The two stages of the VBEM routing algorithm iterate until convergence, yielding the pose matrix and activation value a of the parent capsule:
S01: input the current pose matrices and activation values; the pose matrices of the capsules are taken as the data points X = {x_1, x_2 ... x_N}, where N is the number of capsules, and the pose matrix classes are taken as the latent variables Z = {z_1, z_2 ... z_K}, where the number of capsule classes k is determined automatically by the VBEM algorithm;
S02: use variational inference to find the optimal solutions of the prior distributions of the parameters θ and the optimal solution q*(z) of the posterior probability p(z | x); θ comprises the mixing coefficients π = {π_1, π_2 ... π_k}, the means of the GMM μ = {μ_1, μ_2 ... μ_k}, and the covariances of the GMM Λ = {Λ_1, Λ_2 ... Λ_k};
S03: define and initialize the variables, assigning the pose matrix elements as the corresponding Gaussian initial values μ_0; the covariance Λ is forced to be a diagonal matrix, and the variance σ^2 is taken from the diagonal of the covariance matrix;
S04: perform the VBM step, updating all variables;
S05: perform the VBE step, updating the r_nk that represent the posterior probabilities;
S06: iterate n times, or until convergence, to obtain the final activation value a, and output the means of the mixture of Gaussians obeyed by x, which are converted into the parent capsule pose matrix.
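Under strong simplifications (diagonal covariances, a fixed class count K, and plain responsibilities in place of the full Dirichlet and Gauss-Wishart machinery), the loop S01 to S06 can be sketched as follows. All names and the constants beta_u, beta_a, lam are illustrative assumptions; the patent states that the corresponding β_μ and β_α are learned by back-propagation:

```python
# Simplified sketch of the S01-S06 VBEM routing loop; an assumption-laden
# illustration, not the patent's implementation.
import numpy as np

def vbem_routing(poses, n_iters=3, K=4, seed=0):
    """poses: (N, 16) array of flattened 4x4 child-capsule pose matrices."""
    N, D = poses.shape
    rng = np.random.default_rng(seed)
    r = rng.dirichlet(np.ones(K), size=N)          # S03: random initial r_nk
    for _ in range(n_iters):                       # S06: iterate to convergence
        # VBM step (S04): update class statistics and Gaussian parameters
        Nk = r.sum(0) + 1e-8
        mu = (r.T @ poses) / Nk[:, None]                     # class means (K, D)
        var = (r.T @ poses**2) / Nk[:, None] - mu**2 + 1e-8  # diagonal variances
        # VBE step (S05): update responsibilities r_nk in log space
        log_rho = (np.log(Nk / N)
                   - 0.5 * np.log(2 * np.pi * var).sum(1)
                   - 0.5 * (((poses[:, None, :] - mu) ** 2) / var).sum(-1))
        r = np.exp(log_rho - log_rho.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
    # S06: activation from a cost of the form in eqs. (20)-(21)
    beta_u, beta_a, lam = 1.0, 1.0, 0.01           # assumed; learned in practice
    cost = Nk * (beta_u + 0.5 * np.log(var)).sum(1)
    a = 1.0 / (1.0 + np.exp(-lam * (beta_a - cost)))  # logistic
    return mu, a, r
```

The returned means mu play the role of the parent-capsule pose matrices, and a is the per-class activation value.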
The above step S02 specifically includes:
The probability distribution of a data point x belonging to the k-th class is:
p(x | z_k = 1) = N(x | μ_k, Λ_k^(-1)), (1)
wherein π_k is the k-th element of the mixing coefficients π, and μ_k, Λ_k are the k-th elements of μ and Λ;
π_k denotes the probability that z_k = 1, i.e. p(z_k = 1) = π_k; when the k-th element of z is 1, the other elements are 0, so the marginal probability distribution of z is expressed as:
p(z) = Π_k π_k^(z_k), (2)
The optimal solution q*(π) of the prior probability q(π) of the mixing coefficients π is taken to be a Dirichlet distribution, i.e.:
q*(π) = Dir(π | α), (3)
wherein α is the Dirichlet coefficient;
The optimal solution q*(μ, Λ) of the prior probability q(μ, Λ) of the Gaussian means μ and covariances Λ is taken to obey independent Gauss-Wishart distributions, i.e.:
q*(μ, Λ) = Π_k N(μ_k | m_k, (β_k Λ_k)^(-1)) W(Λ_k | W_k, ν_k), (4)
wherein m_k, β_k, W_k, ν_k are the Gauss-Wishart distribution parameters;
According to variational inference, the logarithmic function of the optimal solution q*(z) of the posterior probability p(z | x) can be written directly:
ln q*(z) = Σ_n Σ_k z_nk ln ρ_nk + const, (5)
wherein:
ln ρ_nk = E[ln π_k] + (1/2) E[ln|Λ_k|] - (D/2) ln(2π) - (1/2) E_(μ_k, Λ_k)[(x_n - μ_k)^T Λ_k (x_n - μ_k)], (6)
ln ρ_nk is one of the factors representing ln q*(z), E[·] denotes the expectation, and D is the dimension of the data point x;
Exponentiating both sides of formula (5) yields the proportionality relation:
q*(z) ∝ Π_n Π_k ρ_nk^(z_nk), (7)
Requiring the probability to be normalized gives:
q*(z) = Π_n Π_k r_nk^(z_nk), (8)
wherein:
r_nk = ρ_nk / Σ_j ρ_nj, (9)
r_nk is the normalized form of ρ_nk; r_nk is non-negative and satisfies Σ_k r_nk = 1.
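The normalization in formulas (5) to (9), taking ln ρ_nk to r_nk, is a softmax over k and is typically computed in log space for numerical stability. A minimal generic sketch (the function name is an assumption, not the patent's code):

```python
# Log-space normalization turning ln rho_nk (eq. (6)) into r_nk (eq. (9)).
import numpy as np

def responsibilities(log_rho):
    """log_rho: (N, K) array of ln rho_nk; returns r_nk with sum_k r_nk = 1."""
    log_rho = log_rho - log_rho.max(axis=1, keepdims=True)  # avoid overflow
    rho = np.exp(log_rho)
    return rho / rho.sum(axis=1, keepdims=True)
```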
The working process of the above step S03 includes the following steps:
(3-1) Define the following three statistics of the observed data with respect to r_nk:
N_k = Σ_n r_nk, (10)
x̄_k = (1/N_k) Σ_n r_nk x_n, (11)
S_k = (1/N_k) Σ_n r_nk (x_n - x̄_k)(x_n - x̄_k)^T, (12)
Regarding r_nk as a posterior probability, N_k denotes the probability mass of the k-th class, x̄_k denotes the mean of the x belonging to the k-th class, and S_k denotes the covariance of the x belonging to the k-th class; initialize N_k, x̄_k, S_k. In the following formulas a subscript 0 denotes an initial value and a subscript k denotes the k-th component;
(3-2) Define the Dirichlet parameter update equation: α_k = α_0 + N_k, (13)
(3-3) Define the Gauss-Wishart parameter update equations:
β_k = β_0 + N_k, (14)
m_k = (β_0 m_0 + N_k x̄_k) / β_k, (15)
W_k^(-1) = W_0^(-1) + N_k S_k + (β_0 N_k / (β_0 + N_k)) (x̄_k - m_0)(x̄_k - m_0)^T, (16)
ν_k = ν_0 + N_k + 1, (17)
(3-4) Define the mean update formula: μ_k = m_k, (18)
(3-5) Define the variance update formula: σ^2 = diag(inv(W_k^(-1))), (19)
(3-6) Define the capsule network cost function update formula: cost = (β_μ + lg σ) Σ_ij r_ij, (20)
wherein β_μ is a network parameter obtained by back-propagation training;
(3-7) Define the activation value update formula: a_j = logistic(λ(β_α - Σ cost)), (21)
wherein β_α is a network parameter obtained by back-propagation training;
(3-8) Initialize the variables defined above, assigning the pose matrix elements as the corresponding Gaussian initial values μ_0: if the pose matrix is the 4x4 matrix with elements μ_1 to μ_16, then μ_0 = (μ_1, μ_2 ... μ_15, μ_16)^T. In the following formulas a subscript 0 denotes an initial value and a subscript k denotes the k-th component.
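The statistics (10) to (12) and the hyperparameter updates (13) to (17) can be sketched as follows, keeping S_k diagonal for simplicity. The prior values alpha0, beta0, m0, nu0 are illustrative assumptions, and the m_k update follows the usual Gauss-Wishart form consistent with eqs. (14) and (17):

```python
# Sketch of the VBM-side statistics and hyperparameter updates, eqs. (10)-(17);
# a simplified illustration, not the patent's implementation.
import numpy as np

def vbm_updates(X, r, alpha0=1.0, beta0=1.0, m0=None, nu0=None):
    """X: (N, D) data points; r: (N, K) responsibilities r_nk."""
    N, D = X.shape
    m0 = np.zeros(D) if m0 is None else m0
    nu0 = float(D) if nu0 is None else nu0
    Nk = r.sum(0)                                  # N_k, eq. (10)
    xbar = (r.T @ X) / Nk[:, None]                 # x_bar_k, eq. (11)
    Sk = (r.T @ X**2) / Nk[:, None] - xbar**2      # diagonal of S_k, eq. (12)
    alpha = alpha0 + Nk                            # Dirichlet update, eq. (13)
    beta = beta0 + Nk                              # eq. (14)
    m = (beta0 * m0 + Nk[:, None] * xbar) / beta[:, None]   # eq. (15)
    nu = nu0 + Nk + 1                              # eq. (17), as stated in the text
    return Nk, xbar, Sk, alpha, beta, m, nu
```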
The working process of the above step S04 includes the following steps:
(4-1) Update the statistics N_k, x̄_k, S_k according to formulas (10)-(12);
(4-3) Update the Dirichlet parameter α_k according to formula (13);
(4-4) Update the Gauss-Wishart parameters β_k, m_k, W_k^(-1), ν_k according to formulas (14)-(17);
(4-5) Update the mean μ_k and variance σ^2 according to formulas (18)-(19);
(4-6) Update the capsule network cost function cost and the activation value a according to formulas (20)-(21).
The working process of the above step S05 includes the following steps:
(5-1) Update ln ρ_nk according to formula (6), in which:
E[ln π_k] = ψ(α_k) - ψ(Σ_j α_j),
E[ln|Λ_k|] = Σ_(i=1..D) ψ((ν_k + 1 - i)/2) + D ln 2 + ln|W_k|,
E_(μ_k, Λ_k)[(x_n - μ_k)^T Λ_k (x_n - μ_k)] = D/β_k + ν_k (x_n - m_k)^T W_k (x_n - m_k),
where ψ(·) is the digamma function;
(5-2) Update r_nk according to formula (9).
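The expectations entering formula (6) are digamma-based; for example, the Dirichlet expectation is E[ln π_k] = ψ(α_k) - ψ(Σ_j α_j). A self-contained sketch with a plain-Python digamma (recurrence plus asymptotic series); function names are illustrative assumptions:

```python
# Digamma-based expectation used in the VBE step; generic sketch.
import math

def digamma(x):
    """psi(x) via psi(x) = psi(x+1) - 1/x, then an asymptotic series for x >= 6."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0/12 - f * (1.0/120 - f / 252))

def expected_log_pi(alpha):
    """E[ln pi_k] under Dir(alpha): psi(alpha_k) - psi(sum_j alpha_j)."""
    total = digamma(sum(alpha))
    return [digamma(a) - total for a in alpha]
```

For a symmetric Dir(1, 1) prior, for instance, each E[ln π_k] equals ψ(1) - ψ(2) = -1.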
Beneficial effects of the present invention: the present invention uses the VBEM algorithm in the capsule network. Compared with the EM algorithm, this method requires almost no additional computation and resolves the intractability of the maximum likelihood method: based on variational inference, each factor of the factorized distribution is optimized in turn to complete the overall optimization, so an approximate solution is obtained while the singularity that arises when a Gaussian component "collapses" onto a single data point is avoided; moreover, the number of latent classes k can be determined automatically within the algorithm, and overfitting is avoided even when k is large. The variational expectation-maximization routing algorithm based on the capsule network is a general-purpose algorithm.
Detailed description of the invention
Fig. 1 shows the VBEM algorithm routing process;
Fig. 2 shows the model structure in the embodiment of the present invention;
Fig. 3 shows the data structure of a picture after each network layer in the embodiment of the present invention.
Specific embodiment
The invention will be further described below with reference to the accompanying drawings and specific embodiments. The following embodiments are descriptive, not restrictive, and do not limit the protection scope of the present invention.
In order that the technical means, creative features, workflow and usage of the invention may achieve their purpose and effect, and be easy to understand, the present invention is further explained below with reference to specific embodiments.
As shown in Fig. 2 and Fig. 3, the MNIST data set is used as the training data of the capsule network. The original data size is 28x28; an ordinary convolutional layer with a 5x5 convolution kernel, stride 2 and padding convolves the picture, giving a 14x14x32 input;
In the initial capsule layer, the 32 channels are converted into 32 primary capsules using a 1x1 convolution kernel; each capsule contains a 4x4 matrix and an activation value;
After the initial capsule layer comes convolutional capsule layer 1, with a 3x3 convolution kernel and stride 2; convolutional capsule layer 1 uses VBEM routing to compute the capsule outputs (pose matrices and activation values a);
Convolutional capsule layer 2 is another convolutional capsule layer, with a 3x3 convolution kernel and stride 1, likewise using VBEM routing to compute the capsule outputs;
The output capsules of convolutional capsule layer 2 are connected to the classification capsule layer by a 1x1 convolution kernel; each class is represented by one capsule (in MNIST there are 10 classes).
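The spatial sizes quoted in the embodiment can be checked with standard convolution output arithmetic. A sketch, where the padding of the first layer is assumed to be 2 ("same"-style, consistent with 28 to 14 at stride 2); the helper name is an assumption:

```python
# Shape bookkeeping for the embodiment's pipeline (MNIST, 28x28 input).
def conv_out(size, kernel, stride, pad=0):
    """Standard conv output size: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

s1 = conv_out(28, kernel=5, stride=2, pad=2)  # ordinary conv layer
s2 = conv_out(s1, kernel=3, stride=2)         # conv capsule layer 1 (VBEM routing)
s3 = conv_out(s2, kernel=3, stride=1)         # conv capsule layer 2 (VBEM routing)
print(s1, s2, s3)  # -> 14 6 4; s1 matches the stated 14x14x32 input
```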
Implementation of the VBEM routing algorithm:
As shown in Fig. 1, the VBEM routing algorithm models the pose matrix of the parent capsule with a Gaussian model having 16 values of μ and 16 values of σ; each μ represents one element of the pose matrix, and σ is used to compute the activation value a. The two stages of the VBEM routing algorithm iterate until convergence, yielding the pose matrix and activation value a of the parent capsule:
S01: input the current pose matrices and activation values; the pose matrices of the capsules are taken as the data points X = {x_1, x_2 ... x_N}, where N is the number of capsules, and the pose matrix classes are taken as the latent variables Z = {z_1, z_2 ... z_K}, where the number of capsule classes k is determined automatically by the VBEM algorithm;
S02: use variational inference to find the optimal solutions of the prior distributions of the parameters θ and the optimal solution q*(z) of the posterior probability p(z | x); θ comprises the mixing coefficients π = {π_1, π_2 ... π_k}, the means of the GMM μ = {μ_1, μ_2 ... μ_k}, and the covariances of the GMM Λ = {Λ_1, Λ_2 ... Λ_k};
S03: define and initialize the variables, assigning the pose matrix elements as the corresponding Gaussian initial values μ_0; the covariance Λ is forced to be a diagonal matrix, and the variance σ^2 is taken from the diagonal of the covariance matrix;
S04: perform the VBM step, updating all variables;
S05: perform the VBE step, updating the r_nk that represent the posterior probabilities;
S06: iterate n times, or until convergence, to obtain the final activation value a, and output the means of the mixture of Gaussians obeyed by x, which are converted into the parent capsule pose matrix.
The above step S02 specifically includes:
The probability distribution of a data point x belonging to the k-th class is:
p(x | z_k = 1) = N(x | μ_k, Λ_k^(-1)), (1)
wherein π_k is the k-th element of the mixing coefficients π, and μ_k, Λ_k are the k-th elements of μ and Λ;
π_k denotes the probability that z_k = 1, i.e. p(z_k = 1) = π_k; when the k-th element of z is 1, the other elements are 0, so the marginal probability distribution of z is expressed as:
p(z) = Π_k π_k^(z_k), (2)
The optimal solution q*(π) of the prior probability q(π) of the mixing coefficients π is taken to be a Dirichlet distribution, i.e.:
q*(π) = Dir(π | α), (3)
wherein α is the Dirichlet coefficient;
The optimal solution q*(μ, Λ) of the prior probability q(μ, Λ) of the Gaussian means μ and covariances Λ is taken to obey independent Gauss-Wishart distributions, i.e.:
q*(μ, Λ) = Π_k N(μ_k | m_k, (β_k Λ_k)^(-1)) W(Λ_k | W_k, ν_k), (4)
wherein m_k, β_k, W_k, ν_k are the Gauss-Wishart distribution parameters;
According to variational inference, the logarithmic function of the optimal solution q*(z) of the posterior probability p(z | x) can be written directly:
ln q*(z) = Σ_n Σ_k z_nk ln ρ_nk + const, (5)
wherein:
ln ρ_nk = E[ln π_k] + (1/2) E[ln|Λ_k|] - (D/2) ln(2π) - (1/2) E_(μ_k, Λ_k)[(x_n - μ_k)^T Λ_k (x_n - μ_k)], (6)
ln ρ_nk is one of the factors representing ln q*(z), E[·] denotes the expectation, and D is the dimension of the data point x;
Exponentiating both sides of formula (5) yields the proportionality relation:
q*(z) ∝ Π_n Π_k ρ_nk^(z_nk), (7)
Requiring the probability to be normalized gives:
q*(z) = Π_n Π_k r_nk^(z_nk), (8)
wherein:
r_nk = ρ_nk / Σ_j ρ_nj, (9)
r_nk is the normalized form of ρ_nk; r_nk is non-negative and satisfies Σ_k r_nk = 1.
The working process of the above step S03 includes the following steps:
(3-1) Define the following three statistics of the observed data with respect to r_nk:
N_k = Σ_n r_nk, (10)
x̄_k = (1/N_k) Σ_n r_nk x_n, (11)
S_k = (1/N_k) Σ_n r_nk (x_n - x̄_k)(x_n - x̄_k)^T, (12)
Regarding r_nk as a posterior probability, N_k denotes the probability mass of the k-th class, x̄_k denotes the mean of the x belonging to the k-th class, and S_k denotes the covariance of the x belonging to the k-th class; initialize N_k, x̄_k, S_k. In the following formulas a subscript 0 denotes an initial value and a subscript k denotes the k-th component;
(3-2) Define the Dirichlet parameter update equation: α_k = α_0 + N_k, (13)
(3-3) Define the Gauss-Wishart parameter update equations:
β_k = β_0 + N_k, (14)
m_k = (β_0 m_0 + N_k x̄_k) / β_k, (15)
W_k^(-1) = W_0^(-1) + N_k S_k + (β_0 N_k / (β_0 + N_k)) (x̄_k - m_0)(x̄_k - m_0)^T, (16)
ν_k = ν_0 + N_k + 1, (17)
(3-4) Define the mean update formula: μ_k = m_k, (18)
(3-5) Define the variance update formula: σ^2 = diag(inv(W_k^(-1))), (19)
(3-6) Define the capsule network cost function update formula: cost = (β_μ + lg σ) Σ_ij r_ij, (20)
wherein β_μ is a network parameter obtained by back-propagation training;
(3-7) Define the activation value update formula: a_j = logistic(λ(β_α - Σ cost)), (21)
wherein β_α is a network parameter obtained by back-propagation training;
(3-8) Initialize the variables defined above, assigning the pose matrix elements as the corresponding Gaussian initial values μ_0: if the pose matrix is the 4x4 matrix with elements μ_1 to μ_16, then μ_0 = (μ_1, μ_2 ... μ_15, μ_16)^T. In the following formulas a subscript 0 denotes an initial value and a subscript k denotes the k-th component.
The working process of the above step S04 includes the following steps:
(4-1) Update the statistics N_k, x̄_k, S_k according to formulas (10)-(12);
(4-3) Update the Dirichlet parameter α_k according to formula (13);
(4-4) Update the Gauss-Wishart parameters β_k, m_k, W_k^(-1), ν_k according to formulas (14)-(17);
(4-5) Update the mean μ_k and variance σ^2 according to formulas (18)-(19);
(4-6) Update the capsule network cost function cost and the activation value a according to formulas (20)-(21).
The working process of the above step S05 includes the following steps:
(5-1) Update ln ρ_nk according to formula (6), in which:
E[ln π_k] = ψ(α_k) - ψ(Σ_j α_j),
E[ln|Λ_k|] = Σ_(i=1..D) ψ((ν_k + 1 - i)/2) + D ln 2 + ln|W_k|,
E_(μ_k, Λ_k)[(x_n - μ_k)^T Λ_k (x_n - μ_k)] = D/β_k + ν_k (x_n - m_k)^T W_k (x_n - m_k),
where ψ(·) is the digamma function;
(5-2) Update r_nk according to formula (9).

Claims (5)

1. A variational expectation-maximization routing algorithm based on a capsule network, characterized in that:
the pose matrices of lower-level capsules are treated as the data points of a GMM, and the pose matrix of a higher-level capsule is treated as a Gaussian distribution; the VBEM routing algorithm clusters the data points into Gaussian distributions and computes their distribution parameters, i.e. lower-level capsules are grouped at runtime to form a higher-level capsule, and the activation value a is then computed from the updated Gaussian distribution parameters;
S01: input the current pose matrices and activation values; the pose matrices of the capsules are taken as the data points X = {x_1, x_2 ... x_N}, where N is the number of capsules, and the pose matrix classes are taken as the latent variables Z = {z_1, z_2 ... z_K}, where the number of capsule classes k is determined automatically by the VBEM algorithm;
S02: use variational inference to find the optimal solutions of the prior distributions of the parameters θ and the optimal solution q*(z) of the posterior probability p(z | x); θ comprises the mixing coefficients π = {π_1, π_2 ... π_k}, the means of the GMM μ = {μ_1, μ_2 ... μ_k}, and the covariances of the GMM Λ = {Λ_1, Λ_2 ... Λ_k};
S03: define and initialize the variables, assigning the pose matrix elements as the corresponding Gaussian initial values μ_0; the covariance Λ is forced to be a diagonal matrix, and the variance σ^2 is taken from the diagonal of the covariance matrix;
S04: perform the VBM step, updating all variables;
S05: perform the VBE step, updating the r_nk that represent the posterior probabilities;
S06: iterate n times, or until convergence, to obtain the final activation value a, and output the means of the mixture of Gaussians obeyed by x, which are converted into the parent capsule pose matrix.
2. The variational expectation-maximization routing algorithm based on a capsule network according to claim 1, characterized in that said step S02 specifically includes:
The probability distribution of a data point x belonging to the k-th class is:
p(x | z_k = 1) = N(x | μ_k, Λ_k^(-1)), (1)
wherein π_k is the k-th element of the mixing coefficients π, and μ_k, Λ_k are the k-th elements of μ and Λ;
π_k denotes the probability that z_k = 1, i.e. p(z_k = 1) = π_k; when the k-th element of z is 1, the other elements are 0, so the marginal probability distribution of z is expressed as:
p(z) = Π_k π_k^(z_k), (2)
The optimal solution q*(π) of the prior probability q(π) of the mixing coefficients π is taken to be a Dirichlet distribution, i.e.:
q*(π) = Dir(π | α), (3)
wherein α is the Dirichlet coefficient;
The optimal solution q*(μ, Λ) of the prior probability q(μ, Λ) of the Gaussian means μ and covariances Λ is taken to obey independent Gauss-Wishart distributions, i.e.:
q*(μ, Λ) = Π_k N(μ_k | m_k, (β_k Λ_k)^(-1)) W(Λ_k | W_k, ν_k), (4)
wherein m_k, β_k, W_k, ν_k are the Gauss-Wishart distribution parameters;
According to variational inference, the logarithmic function of the optimal solution q*(z) of the posterior probability p(z | x) can be written directly:
ln q*(z) = Σ_n Σ_k z_nk ln ρ_nk + const, (5)
wherein:
ln ρ_nk = E[ln π_k] + (1/2) E[ln|Λ_k|] - (D/2) ln(2π) - (1/2) E_(μ_k, Λ_k)[(x_n - μ_k)^T Λ_k (x_n - μ_k)], (6)
ln ρ_nk is one of the factors representing ln q*(z), E[·] denotes the expectation, and D is the dimension of the data point x;
Exponentiating both sides of formula (5) yields the proportionality relation:
q*(z) ∝ Π_n Π_k ρ_nk^(z_nk), (7)
Requiring the probability to be normalized gives:
q*(z) = Π_n Π_k r_nk^(z_nk), (8)
wherein:
r_nk = ρ_nk / Σ_j ρ_nj, (9)
r_nk is the normalized form of ρ_nk; r_nk is non-negative and satisfies Σ_k r_nk = 1.
3. The variational expectation-maximization routing algorithm based on a capsule network according to claim 1, characterized in that the working process of said step S03 includes the following steps:
(3-1) Define the following three statistics of the observed data with respect to r_nk:
N_k = Σ_n r_nk, (10)
x̄_k = (1/N_k) Σ_n r_nk x_n, (11)
S_k = (1/N_k) Σ_n r_nk (x_n - x̄_k)(x_n - x̄_k)^T, (12)
Regarding r_nk as a posterior probability, N_k denotes the probability mass of the k-th class, x̄_k denotes the mean of the x belonging to the k-th class, and S_k denotes the covariance of the x belonging to the k-th class; initialize N_k, x̄_k, S_k. In the following formulas a subscript 0 denotes an initial value and a subscript k denotes the k-th component;
(3-2) Define the Dirichlet parameter update equation: α_k = α_0 + N_k, (13)
(3-3) Define the Gauss-Wishart parameter update equations:
β_k = β_0 + N_k, (14)
m_k = (β_0 m_0 + N_k x̄_k) / β_k, (15)
W_k^(-1) = W_0^(-1) + N_k S_k + (β_0 N_k / (β_0 + N_k)) (x̄_k - m_0)(x̄_k - m_0)^T, (16)
ν_k = ν_0 + N_k + 1, (17)
(3-4) Define the mean update formula: μ_k = m_k, (18)
(3-5) Define the variance update formula: σ^2 = diag(inv(W_k^(-1))), (19)
(3-6) Define the capsule network cost function update formula: cost = (β_μ + lg σ) Σ_ij r_ij, (20)
wherein β_μ is a network parameter obtained by back-propagation training;
(3-7) Define the activation value update formula: a_j = logistic(λ(β_α - Σ cost)), (21)
wherein β_α is a network parameter obtained by back-propagation training;
(3-8) Initialize the variables defined above, assigning the pose matrix elements as the corresponding Gaussian initial values μ_0: if the pose matrix is the 4x4 matrix with elements μ_1 to μ_16, then μ_0 = (μ_1, μ_2 ... μ_15, μ_16)^T. In the following formulas a subscript 0 denotes an initial value and a subscript k denotes the k-th component.
4. The variational expectation-maximization routing algorithm based on a capsule network according to claim 1, characterized in that the working process of said step S04 includes the following steps:
(4-1) Update the statistics N_k, x̄_k, S_k according to formulas (10)-(12);
(4-3) Update the Dirichlet parameter α_k according to formula (13);
(4-4) Update the Gauss-Wishart parameters β_k, m_k, W_k^(-1), ν_k according to formulas (14)-(17);
(4-5) Update the mean μ_k and variance σ^2 according to formulas (18)-(19);
(4-6) Update the capsule network cost function cost and the activation value a according to formulas (20)-(21).
5. The variational expectation-maximization routing algorithm based on a capsule network according to claim 1, characterized in that the working process of said step S05 includes the following steps:
(5-1) Update ln ρ_nk according to formula (6), in which E[ln π_k] = ψ(α_k) - ψ(Σ_j α_j), E[ln|Λ_k|] = Σ_(i=1..D) ψ((ν_k + 1 - i)/2) + D ln 2 + ln|W_k|, and E_(μ_k, Λ_k)[(x_n - μ_k)^T Λ_k (x_n - μ_k)] = D/β_k + ν_k (x_n - m_k)^T W_k (x_n - m_k), where ψ(·) is the digamma function;
(5-2) Update r_nk according to formula (9).
CN201811250635.1A 2018-10-25 2018-10-25 Variation expectation maximization routing algorithm based on capsule network Pending CN109583563A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811250635.1A CN109583563A (en) 2018-10-25 2018-10-25 Variation expectation maximization routing algorithm based on capsule network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811250635.1A CN109583563A (en) 2018-10-25 2018-10-25 Variation expectation maximization routing algorithm based on capsule network

Publications (1)

Publication Number Publication Date
CN109583563A true CN109583563A (en) 2019-04-05

Family

ID=65920782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811250635.1A Pending CN109583563A (en) 2018-10-25 2018-10-25 Variation expectation maximization routing algorithm based on capsule network

Country Status (1)

Country Link
CN (1) CN109583563A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046671A (en) * 2019-04-24 2019-07-23 吉林大学 A kind of file classification method based on capsule network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190405