Image processing evidence function estimation method based on relevance vector machine (CN113779502B)

Publication number: CN113779502B (granted publication of application CN202110963746.2A; earlier publication CN113779502A)
Authority: CN (China)
Prior art keywords: equation, function, vector, image, gamma
Legal status: Active
Other languages: Chinese (zh)
Inventors: 邹大伟 (Dawei Zou), 马春华
Original and current assignee: Suihua University
Application filed by Suihua University; priority to CN202110963746.2A

Classifications

    • G06F17/17 Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Operations Research (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing evidence function estimation method based on a relevance vector machine. Step 1: using the mean and covariance of the normal distribution, prove that under the corrected prior form of the weight parameters the posterior distribution of the data in the image is a normal distribution. Step 2: using the multivariate Taylor formula, integrate the product of the likelihood function and the prior distribution of the weights over the weight parameters of the data in the image to obtain an explicit expression for the evidence function, i.e. the marginal likelihood function. Step 3: based on the marginal likelihood function of the data in the image from step 2, maximize the evidence function containing the hyperparameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an optimization iterative algorithm for each hyperparameter of the image. The invention addresses the problem that, in image processing, obtaining the evidence function by integrating the product of the likelihood function and the prior distribution of the weights over the weight parameters involves complex integrals that are difficult to compute.

Description

Image processing evidence function estimation method based on relevance vector machine
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image processing evidence function estimation method based on a relevance vector machine.
Background
In estimating the evidence function associated with the relevance vector machine in the field of image processing, the posterior distribution must be shown to be a normal distribution and its mean and covariance must be solved for, so the traditional method has to hunt for a completed-square term, which is difficult to carry out and lacks a clear logic. Meanwhile, in the current theory of the evidence function associated with the relevance vector machine in image processing, the prior distribution of the weight parameters is a product of zero-mean normal distributions, which lacks generality.
In obtaining the evidence function by integrating the product of the likelihood function and the prior distribution of the weights over the weight parameters, complex integrals arise and the computation is difficult. How to find a new method and logical framework for image processing that carries out this integration more simply and logically and maximizes the evidence function more effectively has received little study.
Disclosure of Invention
The invention provides an image processing evidence function estimation method based on a relevance vector machine, which addresses the problem that, in image processing, obtaining the evidence function by integrating the product of the likelihood function and the prior distribution of the weights over the weight parameters involves complex integrals that are difficult to compute.
The invention is realized by the following technical scheme:
an image processing evidence function estimation method based on a relevance vector machine, the evidence function estimation method comprising the following steps:
Step 1: using the mean and covariance of the normal distribution, prove that under the corrected prior form of the weight parameters the posterior distribution of the data in the image is a normal distribution;
Step 2: using the multivariate Taylor formula, integrate the product of the likelihood function and the prior distribution of the weights over the weight parameters of the data in the image to obtain an explicit expression for the evidence function, i.e. the marginal likelihood function;
Step 3: based on the marginal likelihood function of the data in the image from step 2, maximize the evidence function containing the hyperparameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an optimization iterative algorithm for each hyperparameter of the image.
Further, the step 1 is specifically: when $x$ is a scalar and $A$ is an $n \times n$ invertible symmetric matrix whose entries depend on $x$,

$$\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right),$$

where $\mathrm{Tr}(\cdot)$ is the trace of the matrix; and $\mathrm{Tr}(xy^{T}) = x^{T}y$, where $x$ is a vector and $y$ is a vector. The operator $\nabla$ is defined as

$$\nabla = \left(\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_n}\right)^{T},$$

and $(h^{T}\nabla)^{k}$ denotes its $k$-fold application in the direction $h$, where $k \in \mathbb{N}$.

Let $h = (h_1, h_2)^{T}$ and $x = (x_1, x_2)^{T}$, where $h_1$ is the first component of the vector $h$, $h_2$ the second component of $h$, $x_1$ the first component of $x$ and $x_2$ the second component of $x$. Then

$$(h^{T}\nabla)f(x) = h_1\frac{\partial f}{\partial x_1} + h_2\frac{\partial f}{\partial x_2},$$

and applying the operator $(h^{T}\nabla)$ twice gives

$$(h^{T}\nabla)^{2}f(x) = h_1^{2}\frac{\partial^{2}f}{\partial x_1^{2}} + 2h_1h_2\frac{\partial^{2}f}{\partial x_1\partial x_2} + h_2^{2}\frac{\partial^{2}f}{\partial x_2^{2}},$$

where $h = (h_1, h_2)^{T}$. By means of the multivariate Taylor formula,

$$f(x+h) = \sum_{k=0}^{\infty}\frac{1}{k!}(h^{T}\nabla)^{k}f(x).$$

If $f : B(w, R) \to \mathbb{R}$ is defined as $f(w) = w^{T}Aw + w^{T}b + c$, where $A$ is an $n \times n$ invertible symmetric matrix, $b$ and $w$ are $n$-dimensional column vectors and $c$ is a scalar, then the Taylor expansion of $f(w)$ at the stationary point $w^{*} = -\frac{1}{2}A^{-1}b$ is

$$f(w) = f(w^{*}) + (w - w^{*})^{T}A(w - w^{*}),$$

with Hessian $H = 2A$; the $(i,j)$-th element of $H$ is defined by $H_{ij} = \frac{\partial^{2}f}{\partial w_i\partial w_j}$.
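The quadratic Taylor identity above can be checked numerically. The following sketch uses arbitrary test values for $A$, $b$ and $c$ (all names and values are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric invertible A, plus b and c, for f(w) = w^T A w + w^T b + c.
n = 4
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)   # symmetric positive definite, hence invertible
b = rng.normal(size=n)
c = 1.7

def f(w):
    return w @ A @ w + w @ b + c

# Stationary point w* = -A^{-1} b / 2, from the gradient 2 A w + b = 0.
w_star = -0.5 * np.linalg.solve(A, b)

# For a quadratic the expansion f(w) = f(w*) + (w - w*)^T A (w - w*) is exact.
w = rng.normal(size=n)
lhs = f(w)
rhs = f(w_star) + (w - w_star) @ A @ (w - w_star)
assert np.isclose(lhs, rhs)
```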
the general form of a linear regression model in machine learning is

$$y(x, w) = w_0 + \sum_{i=1}^{M-1} w_i\,\phi_i(x), \qquad (1)$$

where $\phi_i(x)$ is a nonlinear basis function of the input variable, $w_0$ is a bias parameter and $x$ is an image data vector. Defining $\phi_0(x) = 1$, equation (1) can be rewritten as

$$y(x, w) = w^{T}\phi(x), \qquad (2)$$

where $w = (w_0, \ldots, w_{M-1})^{T}$ and $\phi(x) = (\phi_0(x), \ldots, \phi_{M-1}(x))^{T}$.

The target variable is a deterministic function $y(x, w)$ with additive Gaussian noise, i.e.

$$t = y(x, w) + \varepsilon, \qquad (3)$$

where $\varepsilon$ is a zero-mean normal random variable with precision $\beta$, so that

$$p(t \mid x, w, \beta) = N(t \mid y(x, w), \beta^{-1}). \qquad (4)$$
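As an illustration of the model (1)-(4), the sketch below builds a design matrix from Gaussian basis functions and draws targets with noise precision $\beta$. The choice of basis, the toy data, and every variable name are assumptions made for the example, not specified by the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-dimensional inputs and Gaussian basis functions (illustrative only).
N, M = 50, 6
x = np.linspace(0.0, 1.0, N)
centers = np.linspace(0.0, 1.0, M - 1)

# Design matrix Phi: first column is phi_0(x) = 1 (bias term), as in eq. (2).
Phi = np.column_stack(
    [np.ones(N)]
    + [np.exp(-(x - c) ** 2 / (2 * 0.1 ** 2)) for c in centers])

# Targets t = y(x, w) + eps with eps ~ N(0, beta^{-1}), eqs. (3)-(4).
beta = 25.0
w_true = rng.normal(size=M)
t = Phi @ w_true + rng.normal(scale=beta ** -0.5, size=N)

assert Phi.shape == (N, M)
```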
Further, the prior of the weight parameters in step 1 takes the following form:

$$p(w \mid \alpha, \gamma) = \prod_{i=1}^{M} N(w_i \mid \gamma_i, \alpha_i^{-1}), \qquad (5)$$

where $\alpha$ is the precision (inverse of variance) vector, $\alpha = (\alpha_1, \ldots, \alpha_M)^{T}$, and $\gamma$ is the mean vector, $\gamma = (\gamma_1, \ldots, \gamma_M)^{T}$.

Using equation (4), the likelihood function is obtained as

$$p(t \mid X, w, \beta) = \prod_{n=1}^{N} N(t_n \mid w^{T}\phi(x_n), \beta^{-1}) = \left(\frac{\beta}{2\pi}\right)^{N/2}\exp\!\left(-\frac{\beta}{2}\|t - \Phi w\|^{2}\right), \qquad (6)$$

where $t = (t_1, \ldots, t_N)^{T}$, $\Phi$ is the $N \times M$ design matrix with rows $\phi(x_n)^{T}$, and $M$ is the number of parameters to be determined.

Similarly,

$$p(w \mid \alpha, \gamma) = (2\pi)^{-M/2}|A|^{1/2}\exp\!\left(-\frac{1}{2}(w - \gamma)^{T}A(w - \gamma)\right), \qquad (7)$$

where $A = \mathrm{diag}(\alpha_i)$.

The posterior distribution of the weight parameters, $p(w \mid t, X, \alpha, \beta, \gamma) = N(w \mid m, \Sigma)$, is also a normal distribution, in which

$$m = (A + \beta\Phi^{T}\Phi)^{-1}(A\gamma + \beta\Phi^{T}t), \qquad (8)$$

$$\Sigma = (A + \beta\Phi^{T}\Phi)^{-1}. \qquad (9)$$
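The posterior quantities (8) and (9) translate directly into a few lines of linear algebra. A minimal sketch, using synthetic stand-ins for the image-derived design matrix and targets (all data here are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for image-derived features and targets.
N, M = 40, 5
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)

# Hyperparameters: weight precisions alpha (A = diag(alpha)), nonzero prior
# means gamma, and noise precision beta.
alpha = np.full(M, 2.0)
gamma = rng.normal(size=M)
beta = 10.0
A = np.diag(alpha)

Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)   # eq. (9)
m = Sigma @ (A @ gamma + beta * Phi.T @ t)      # eq. (8)

# Sanity check: as beta -> 0 the data carry no information and the posterior
# mean reduces to the prior mean gamma.
m_prior = np.linalg.inv(A) @ (A @ gamma)
assert np.allclose(m_prior, gamma)
assert Sigma.shape == (M, M)
```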
Further, the step 2 is specifically:

$$p(t \mid X, \alpha, \beta, \gamma) = \int p(t \mid X, w, \beta)\,p(w \mid \alpha, \gamma)\,dw. \qquad (10)$$

Using equation (6) and equation (7), one obtains

$$p(t \mid X, \alpha, \beta, \gamma) = \left(\frac{\beta}{2\pi}\right)^{N/2}(2\pi)^{-M/2}|A|^{1/2}\int \exp(-E(w))\,dw, \qquad (11)$$

where $E(w) = \frac{\beta}{2}\|t - \Phi w\|^{2} + \frac{1}{2}(w - \gamma)^{T}A(w - \gamma)$.

Letting $\nabla E(w) = 0$ gives $w = (A + \beta\Phi^{T}\Phi)^{-1}(\beta\Phi^{T}t + A\gamma) = m$; by the Taylor expansion of the quadratic $E(w)$ at $m$, $E(w) = E(m) + \frac{1}{2}(w - m)^{T}(A + \beta\Phi^{T}\Phi)(w - m)$, where

$$E(m) = \frac{\beta}{2}\|t - \Phi m\|^{2} + \frac{1}{2}(m - \gamma)^{T}A(m - \gamma). \qquad (12)$$

Using equation (11) and equation (12), one obtains

$$p(t \mid X, \alpha, \beta, \gamma) = \left(\frac{\beta}{2\pi}\right)^{N/2}|A|^{1/2}|\Sigma|^{1/2}\exp(-E(m)), \qquad (13)$$

where $m = (A + \beta\Phi^{T}\Phi)^{-1}(\beta\Phi^{T}t + A\gamma)$ and $X = (x_1, x_2, \ldots, x_N)$.
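The evidence expression (13) can be cross-checked numerically against the direct Gaussian marginal of the model, $t \sim N(\Phi\gamma,\ \beta^{-1}I + \Phi A^{-1}\Phi^{T})$ (a standard result for a Gaussian prior with mean $\gamma$, not stated in the patent; the synthetic data below are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the design matrix and targets.
N, M = 30, 4
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
alpha = rng.uniform(0.5, 3.0, size=M)
gamma = rng.normal(size=M)
beta = 8.0
A = np.diag(alpha)

Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)   # eq. (9)
m = Sigma @ (A @ gamma + beta * Phi.T @ t)      # eq. (8)

# E(m) as in eq. (12) and the log of the evidence, from eq. (13).
E_m = 0.5 * beta * np.sum((t - Phi @ m) ** 2) \
    + 0.5 * (m - gamma) @ A @ (m - gamma)
log_ev = 0.5 * N * np.log(beta / (2 * np.pi)) \
    + 0.5 * np.linalg.slogdet(A)[1] \
    + 0.5 * np.linalg.slogdet(Sigma)[1] - E_m

# Direct marginal: t ~ N(Phi gamma, beta^{-1} I + Phi A^{-1} Phi^T).
C = np.eye(N) / beta + Phi @ np.linalg.inv(A) @ Phi.T
r = t - Phi @ gamma
log_ev_direct = -0.5 * (N * np.log(2 * np.pi)
                        + np.linalg.slogdet(C)[1]
                        + r @ np.linalg.solve(C, r))
assert np.isclose(log_ev, log_ev_direct)
```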
Further, the step 3 is specifically:

Taking the logarithm of equation (13) gives

$$\ln p(t \mid X, \alpha, \beta, \gamma) = \frac{N}{2}\ln\frac{\beta}{2\pi} + \frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma| - E(m). \qquad (14)$$

Using equation (9), equation (14) and $\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right)$, one obtains

$$\frac{\partial}{\partial \alpha_i}\ln|\Sigma| = -\mathrm{Tr}\!\left(\Sigma\,\frac{\partial(A + \beta\Phi^{T}\Phi)}{\partial \alpha_i}\right) = -\Sigma_{ii}, \qquad (15)$$

because $\frac{\partial(A + \beta\Phi^{T}\Phi)}{\partial \alpha_i} = e_ie_i^{T}$ and $\mathrm{Tr}(\Sigma e_ie_i^{T}) = \Sigma_{ii}$.

By means of $\frac{\partial}{\partial \alpha_i}\ln|A| = \frac{1}{\alpha_i}$ and equation (15), one obtains

$$\frac{\partial}{\partial \alpha_i}\left(\frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma|\right) = \frac{1}{2\alpha_i} - \frac{\Sigma_{ii}}{2}. \qquad (16)$$

From equation (16),

$$\frac{\partial}{\partial \alpha_i}\left(\frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma|\right) = \frac{1 - \alpha_i\Sigma_{ii}}{2\alpha_i}, \qquad (17)$$

where $\Sigma_{ii}$ is the $i$-th element of the main diagonal of the posterior covariance $\Sigma$.

From equation (12), it can be obtained that

$$\frac{\partial E(m)}{\partial \alpha_i} = \frac{1}{2}(m_i - \gamma_i)^{2}, \qquad (18)$$

where $m_i$ is the $i$-th component of the posterior mean $m$ (the dependence of $m$ on $\alpha_i$ contributes nothing because $\nabla_w E(w)\big|_{w=m} = 0$).

From equation (17) and equation (18), we can obtain

$$\frac{\partial}{\partial \alpha_i}\ln p(t \mid X, \alpha, \beta, \gamma) = \frac{1 - \alpha_i\Sigma_{ii}}{2\alpha_i} - \frac{1}{2}(m_i - \gamma_i)^{2}. \qquad (19)$$

Setting the derivative in (19) to zero thereby obtains

$$\alpha_i^{\mathrm{new}} = \frac{\lambda_i}{(m_i - \gamma_i)^{2}}, \qquad (20)$$

where $\lambda_i = 1 - \alpha_i\Sigma_{ii}$.

By the definition of $\Sigma$ and according to equation (9) together with $\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right)$, one obtains

$$\frac{\partial}{\partial \beta}\ln|\Sigma| = -\mathrm{Tr}(\Sigma\Phi^{T}\Phi). \qquad (21)$$

According to equation (21) and $\mathrm{Tr}(xy^{T}) = x^{T}y$, one obtains

$$\frac{\partial}{\partial \beta}\left(\frac{N}{2}\ln\beta + \frac{1}{2}\ln|\Sigma|\right) = \frac{N}{2\beta} - \frac{1}{2}\mathrm{Tr}(\Sigma\Phi^{T}\Phi). \qquad (22)$$

Because $(A + \beta\Phi^{T}\Phi)(A + \beta\Phi^{T}\Phi)^{-1} = I_M$, it can be obtained that

$$\Phi^{T}\Phi\Sigma = \beta^{-1}(I_M - A\Sigma). \qquad (23)$$

From equation (22) and equation (23), it is possible to obtain

$$\mathrm{Tr}(\Sigma\Phi^{T}\Phi) = \beta^{-1}\sum_{i=1}^{M}(1 - \alpha_i\Sigma_{ii}) = \beta^{-1}\sum_{i=1}^{M}\lambda_i. \qquad (24)$$

From equation (12), it can be obtained that

$$\frac{\partial E(m)}{\partial \beta} = \frac{1}{2}\|t - \Phi m\|^{2}. \qquad (25)$$

From equation (24) and equation (25), setting $\frac{\partial}{\partial \beta}\ln p(t \mid X, \alpha, \beta, \gamma) = 0$ thereby obtains

$$\frac{1}{\beta^{\mathrm{new}}} = \frac{\|t - \Phi m\|^{2}}{N - \sum_{i}\lambda_i}. \qquad (26)$$

Differentiating equation (12) with respect to $\gamma$ gives $\nabla_\gamma E(m) = A(\gamma - m)$; letting $\nabla_\gamma E(m) = 0$ gives $\gamma = m$, so

$$\gamma_i = m_i, \qquad (27)$$

where $\gamma_i$ is the $i$-th component of $\gamma$ and $m_i$ is the $i$-th component of $m$.
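One sweep of the re-estimation rules (20), (26) and (27) can be sketched as follows. The data are synthetic stand-ins for image-derived features; initial hyperparameter values are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic regression data standing in for image-derived features.
N, M = 60, 7
Phi = rng.normal(size=(N, M))
t = Phi @ rng.normal(size=M) + rng.normal(scale=0.1, size=N)

# Hyperparameter initialisation (arbitrary starting values).
alpha = np.ones(M)
gamma = np.zeros(M)
beta = 1.0

# Posterior for the current hyperparameters, eqs. (8)-(9).
A = np.diag(alpha)
Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)
m = Sigma @ (A @ gamma + beta * Phi.T @ t)

# One sweep of the update rules.
lam = 1.0 - alpha * np.diag(Sigma)                        # lambda_i = 1 - alpha_i * Sigma_ii
alpha_new = lam / (m - gamma) ** 2                        # eq. (20)
beta_new = (N - lam.sum()) / np.sum((t - Phi @ m) ** 2)   # eq. (26)
gamma_new = m.copy()                                      # eq. (27)

assert np.all(alpha_new > 0) and beta_new > 0
```

In practice the posterior and the three updates are alternated until convergence. Note that once $\gamma_i$ has been set to $m_i$ the denominator of (20) vanishes at the next sweep, so an implementation would typically guard or damp that update; the sketch above therefore shows a single sweep only.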
The beneficial effects of the invention are as follows:
The invention adopts a more general prior form for the weight parameters in place of the traditional assumption that each weight parameter follows a zero-mean normal distribution, so the parameters have a larger value range; the image data maximize the evidence function containing the hyperparameters by matrix calculus, matrix algebra and optimization methods, which helps to improve the resolution of the image data.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An image processing evidence function estimation method based on a relevance vector machine, the evidence function estimation method comprising the following steps:
Step 1: using the mean and covariance of the normal distribution, prove that under the corrected prior form of the weight parameters the posterior distribution of the data in the image is a normal distribution;
Step 2: using the multivariate Taylor formula, integrate the product of the likelihood function and the prior distribution of the weights over the weight parameters of the data in the image to obtain an explicit expression for the evidence function, i.e. the marginal likelihood function;
Step 3: based on the marginal likelihood function of the data in the image from step 2, maximize the evidence function containing the hyperparameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an optimization iterative algorithm for each hyperparameter of the image.
Further, the step 1 is specifically: when $x$ is a scalar and $A$ is an $n \times n$ invertible symmetric matrix whose entries depend on $x$,

$$\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right),$$

where $\mathrm{Tr}(\cdot)$ is the trace of the matrix; and $\mathrm{Tr}(xy^{T}) = x^{T}y$, where $x$ is a vector and $y$ is a vector. The operator $\nabla$ is defined as

$$\nabla = \left(\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_n}\right)^{T},$$

and $(h^{T}\nabla)^{k}$ denotes its $k$-fold application in the direction $h$, where $k \in \mathbb{N}$.

Let $h = (h_1, h_2)^{T}$ and $x = (x_1, x_2)^{T}$, where $h_1$ is the first component of the vector $h$, $h_2$ the second component of $h$, $x_1$ the first component of $x$ and $x_2$ the second component of $x$. Then

$$(h^{T}\nabla)f(x) = h_1\frac{\partial f}{\partial x_1} + h_2\frac{\partial f}{\partial x_2},$$

and applying the operator $(h^{T}\nabla)$ twice gives

$$(h^{T}\nabla)^{2}f(x) = h_1^{2}\frac{\partial^{2}f}{\partial x_1^{2}} + 2h_1h_2\frac{\partial^{2}f}{\partial x_1\partial x_2} + h_2^{2}\frac{\partial^{2}f}{\partial x_2^{2}},$$

where $h = (h_1, h_2)^{T}$. By means of the multivariate Taylor formula,

$$f(x+h) = \sum_{k=0}^{\infty}\frac{1}{k!}(h^{T}\nabla)^{k}f(x).$$

If $f : B(w, R) \to \mathbb{R}$ is defined as $f(w) = w^{T}Aw + w^{T}b + c$, where $A$ is an $n \times n$ invertible symmetric matrix, $b$ and $w$ are $n$-dimensional column vectors and $c$ is a scalar, then the Taylor expansion of $f(w)$ at the stationary point $w^{*} = -\frac{1}{2}A^{-1}b$ is

$$f(w) = f(w^{*}) + (w - w^{*})^{T}A(w - w^{*}),$$

with Hessian $H = 2A$; the $(i,j)$-th element of $H$ is defined by $H_{ij} = \frac{\partial^{2}f}{\partial w_i\partial w_j}$.

The general form of a linear regression model in machine learning is

$$y(x, w) = w_0 + \sum_{i=1}^{M-1} w_i\,\phi_i(x), \qquad (1)$$

where $\phi_i(x)$ is a nonlinear basis function of the input variable, $w_0$ is a bias parameter and $x$ is an image data vector. Defining $\phi_0(x) = 1$, equation (1) can be rewritten as

$$y(x, w) = w^{T}\phi(x), \qquad (2)$$

where $w = (w_0, \ldots, w_{M-1})^{T}$ and $\phi(x) = (\phi_0(x), \ldots, \phi_{M-1}(x))^{T}$.

The target variable is a deterministic function $y(x, w)$ with additive Gaussian noise, i.e.

$$t = y(x, w) + \varepsilon, \qquad (3)$$

where $\varepsilon$ is a zero-mean normal random variable with precision $\beta$, so that

$$p(t \mid x, w, \beta) = N(t \mid y(x, w), \beta^{-1}). \qquad (4)$$
Further, the prior of the weight parameters in step 1 takes the following form:

$$p(w \mid \alpha, \gamma) = \prod_{i=1}^{M} N(w_i \mid \gamma_i, \alpha_i^{-1}), \qquad (5)$$

where $\alpha$ is the precision (inverse of variance) vector, $\alpha = (\alpha_1, \ldots, \alpha_M)^{T}$, and $\gamma$ is the mean vector, $\gamma = (\gamma_1, \ldots, \gamma_M)^{T}$.

Using equation (4), the likelihood function is obtained as

$$p(t \mid X, w, \beta) = \prod_{n=1}^{N} N(t_n \mid w^{T}\phi(x_n), \beta^{-1}) = \left(\frac{\beta}{2\pi}\right)^{N/2}\exp\!\left(-\frac{\beta}{2}\|t - \Phi w\|^{2}\right), \qquad (6)$$

where $t = (t_1, \ldots, t_N)^{T}$, $\Phi$ is the $N \times M$ design matrix with rows $\phi(x_n)^{T}$, and $M$ is the number of parameters to be determined.

Similarly,

$$p(w \mid \alpha, \gamma) = (2\pi)^{-M/2}|A|^{1/2}\exp\!\left(-\frac{1}{2}(w - \gamma)^{T}A(w - \gamma)\right), \qquad (7)$$

where $A = \mathrm{diag}(\alpha_i)$.

The posterior distribution of the weight parameters, $p(w \mid t, X, \alpha, \beta, \gamma) = N(w \mid m, \Sigma)$, is also a normal distribution, in which

$$m = (A + \beta\Phi^{T}\Phi)^{-1}(A\gamma + \beta\Phi^{T}t), \qquad (8)$$

$$\Sigma = (A + \beta\Phi^{T}\Phi)^{-1}. \qquad (9)$$
Before the proof, note the following: for a normal distribution $N(x \mid \mu, \Sigma)$, the negative exponent is

$$f(x) = \frac{1}{2}(x - \mu)^{T}\Sigma^{-1}(x - \mu).$$

Letting $\nabla f(x) = 0$ gives $x = \mu$, which implies that the stationary point of $f(x)$ is the mean of the normal distribution, while $\nabla\nabla f(x) = \Sigma^{-1}$: the second-order gradient of $f(x)$ is the inverse of the covariance.

The following demonstrates that $p(w \mid t, X, \alpha, \beta, \gamma)$ is a normal distribution. From equation (6) and equation (7), the negative exponent of the product $p(t \mid X, w, \beta)\,p(w \mid \alpha, \gamma)$ is

$$E(w) = \frac{\beta}{2}\|t - \Phi w\|^{2} + \frac{1}{2}(w - \gamma)^{T}A(w - \gamma).$$

Letting $\nabla E(w) = 0$ gives

$$m = w = (A + \beta\Phi^{T}\Phi)^{-1}(A\gamma + \beta\Phi^{T}t),$$

and because $\nabla\nabla E(w) = A + \beta\Phi^{T}\Phi$, it follows that $\Sigma = (A + \beta\Phi^{T}\Phi)^{-1}$, which completes the proof.
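This stationary-point argument is easy to verify numerically: the gradient of $E$ should vanish at $m$, with the constant Hessian $A + \beta\Phi^{T}\Phi$ playing the role of the inverse posterior covariance. A small sketch on assumed synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Small synthetic problem (all values are illustrative).
N, M = 20, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
alpha = np.array([1.0, 2.0, 0.5])
gamma = np.array([0.3, -0.1, 0.7])
beta = 5.0
A = np.diag(alpha)

def E(w):
    # Negative exponent of p(t|X,w,beta) * p(w|alpha,gamma).
    return 0.5 * beta * np.sum((t - Phi @ w) ** 2) \
         + 0.5 * (w - gamma) @ A @ (w - gamma)

# Claimed stationary point m, eq. (8).
m = np.linalg.solve(A + beta * Phi.T @ Phi, A @ gamma + beta * Phi.T @ t)

# Central-difference gradient of E at m; it should be (numerically) zero.
eps = 1e-6
grad = np.array([(E(m + eps * e) - E(m - eps * e)) / (2 * eps)
                 for e in np.eye(M)])
assert np.allclose(grad, 0.0, atol=1e-4)
```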
Further, the step 2 is specifically:

$$p(t \mid X, \alpha, \beta, \gamma) = \int p(t \mid X, w, \beta)\,p(w \mid \alpha, \gamma)\,dw. \qquad (10)$$

Using equation (6) and equation (7), one obtains

$$p(t \mid X, \alpha, \beta, \gamma) = \left(\frac{\beta}{2\pi}\right)^{N/2}(2\pi)^{-M/2}|A|^{1/2}\int \exp(-E(w))\,dw, \qquad (11)$$

where $E(w) = \frac{\beta}{2}\|t - \Phi w\|^{2} + \frac{1}{2}(w - \gamma)^{T}A(w - \gamma)$.

Letting $\nabla E(w) = 0$ gives $w = (A + \beta\Phi^{T}\Phi)^{-1}(\beta\Phi^{T}t + A\gamma) = m$; by the Taylor expansion of the quadratic $E(w)$ at $m$, $E(w) = E(m) + \frac{1}{2}(w - m)^{T}(A + \beta\Phi^{T}\Phi)(w - m)$, where

$$E(m) = \frac{\beta}{2}\|t - \Phi m\|^{2} + \frac{1}{2}(m - \gamma)^{T}A(m - \gamma). \qquad (12)$$

Using equation (11) and equation (12), one obtains

$$p(t \mid X, \alpha, \beta, \gamma) = \left(\frac{\beta}{2\pi}\right)^{N/2}|A|^{1/2}|\Sigma|^{1/2}\exp(-E(m)), \qquad (13)$$

where $m = (A + \beta\Phi^{T}\Phi)^{-1}(\beta\Phi^{T}t + A\gamma)$ and $X = (x_1, x_2, \ldots, x_N)$.
Further, the step 3 is specifically:

Taking the logarithm of equation (13) gives

$$\ln p(t \mid X, \alpha, \beta, \gamma) = \frac{N}{2}\ln\frac{\beta}{2\pi} + \frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma| - E(m). \qquad (14)$$

Using equation (9), equation (14) and $\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right)$, one obtains

$$\frac{\partial}{\partial \alpha_i}\ln|\Sigma| = -\mathrm{Tr}\!\left(\Sigma\,\frac{\partial(A + \beta\Phi^{T}\Phi)}{\partial \alpha_i}\right) = -\Sigma_{ii}, \qquad (15)$$

because $\frac{\partial(A + \beta\Phi^{T}\Phi)}{\partial \alpha_i} = e_ie_i^{T}$ and $\mathrm{Tr}(\Sigma e_ie_i^{T}) = \Sigma_{ii}$.

By means of $\frac{\partial}{\partial \alpha_i}\ln|A| = \frac{1}{\alpha_i}$ and equation (15), one obtains

$$\frac{\partial}{\partial \alpha_i}\left(\frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma|\right) = \frac{1}{2\alpha_i} - \frac{\Sigma_{ii}}{2}. \qquad (16)$$

From equation (16),

$$\frac{\partial}{\partial \alpha_i}\left(\frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma|\right) = \frac{1 - \alpha_i\Sigma_{ii}}{2\alpha_i}, \qquad (17)$$

where $\Sigma_{ii}$ is the $i$-th element of the main diagonal of the posterior covariance $\Sigma$.

From equation (12), it can be obtained that

$$\frac{\partial E(m)}{\partial \alpha_i} = \frac{1}{2}(m_i - \gamma_i)^{2}, \qquad (18)$$

where $m_i$ is the $i$-th component of the posterior mean $m$ (the dependence of $m$ on $\alpha_i$ contributes nothing because $\nabla_w E(w)\big|_{w=m} = 0$).

From equation (17) and equation (18), we can obtain

$$\frac{\partial}{\partial \alpha_i}\ln p(t \mid X, \alpha, \beta, \gamma) = \frac{1 - \alpha_i\Sigma_{ii}}{2\alpha_i} - \frac{1}{2}(m_i - \gamma_i)^{2}. \qquad (19)$$

Setting the derivative in (19) to zero thereby obtains

$$\alpha_i^{\mathrm{new}} = \frac{\lambda_i}{(m_i - \gamma_i)^{2}}, \qquad (20)$$

where $\lambda_i = 1 - \alpha_i\Sigma_{ii}$.

By the definition of $\Sigma$ and according to equation (9) together with $\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right)$, one obtains

$$\frac{\partial}{\partial \beta}\ln|\Sigma| = -\mathrm{Tr}(\Sigma\Phi^{T}\Phi). \qquad (21)$$

According to equation (21) and $\mathrm{Tr}(xy^{T}) = x^{T}y$, one obtains

$$\frac{\partial}{\partial \beta}\left(\frac{N}{2}\ln\beta + \frac{1}{2}\ln|\Sigma|\right) = \frac{N}{2\beta} - \frac{1}{2}\mathrm{Tr}(\Sigma\Phi^{T}\Phi). \qquad (22)$$

Because $(A + \beta\Phi^{T}\Phi)(A + \beta\Phi^{T}\Phi)^{-1} = I_M$, it can be obtained that

$$\Phi^{T}\Phi\Sigma = \beta^{-1}(I_M - A\Sigma). \qquad (23)$$

From equation (22) and equation (23), it is possible to obtain

$$\mathrm{Tr}(\Sigma\Phi^{T}\Phi) = \beta^{-1}\sum_{i=1}^{M}(1 - \alpha_i\Sigma_{ii}) = \beta^{-1}\sum_{i=1}^{M}\lambda_i. \qquad (24)$$

From equation (12), it can be obtained that

$$\frac{\partial E(m)}{\partial \beta} = \frac{1}{2}\|t - \Phi m\|^{2}. \qquad (25)$$

From equation (24) and equation (25), setting $\frac{\partial}{\partial \beta}\ln p(t \mid X, \alpha, \beta, \gamma) = 0$ thereby obtains

$$\frac{1}{\beta^{\mathrm{new}}} = \frac{\|t - \Phi m\|^{2}}{N - \sum_{i}\lambda_i}. \qquad (26)$$

Differentiating equation (12) with respect to $\gamma$ gives $\nabla_\gamma E(m) = A(\gamma - m)$; letting $\nabla_\gamma E(m) = 0$ gives $\gamma = m$, so

$$\gamma_i = m_i, \qquad (27)$$

where $\gamma_i$ is the $i$-th component of $\gamma$ and $m_i$ is the $i$-th component of $m$.

Claims (2)

1. An image processing evidence function estimation method based on a relevance vector machine, characterized by comprising the following steps:
Step 1: using the mean and covariance of the normal distribution, prove that under the corrected prior form of the weight parameters the posterior distribution of the data in the image is a normal distribution;
Step 2: using the multivariate Taylor formula, integrate the product of the likelihood function and the prior distribution of the weights over the weight parameters of the data in the image to obtain an explicit expression for the evidence function, i.e. the marginal likelihood function;
Step 3: based on the marginal likelihood function of the data in the image from step 2, maximize the evidence function containing the hyperparameters by means of matrix calculus, matrix algebra and optimization methods, thereby obtaining an optimization iterative algorithm for each hyperparameter of the image;
the step 1 is specifically that whenWherein x is a scalar, A is an n x n reversible symmetric matrix, tr (·) is the trace of the matrix,>Tr(xy T )=x T y, where Tr (·) is the trace of the matrix and x is the vector; operatorDefined as->And-> wherein k∈N;
let h= (h 1 ,h 2 ) T And (3) with wherein h1 As the first component of the vector h, h 2 As the second component of the vector h, x 1 As the first component of vector x, x 2 As the second component of the vector x, can be made +.>And (3) with
When an operatorActing on x to obtain
Then according to the operatorThe formula acting on x can be obtained
wherein h=(h1 ,h 2 ) T And (3) with
By means ofIs available in the form of
If f: B (w, R) →R is defined as f (w) =w T Aw+w T b+c; wherein A is an n reversible symmetric matrix, b and w are n-dimensional column vectors, and c is a scalar; taylor at f (w)Taylor expansion of (2) is
The (i, j) th element of H is composed ofDefinition;
the general form of a linear regression model in machine learning is
wherein φi (x) As a nonlinear basis function of the input variable, w 0 As a deviation parameter, x is an image data vector;
definition phi 0 (x) =1, whereby (1) is rewritable as
wherein w=(w0 ,…,w M-1 ) T And phi (x) = (phi) 0 (x),…,φ M-1 (x)) T
The objective function is a deterministic function y (x, w) with additive Gaussian noise, i.e
t=y(x,w)+ε (3)
Where ε is 0 mean and the precision is β, to obtain a normal random variable
p(t|x,w,β)=N(t|y(x,w),β -1 ) (4);
the prior of the weight parameters in the step 1 takes the following form:

$$p(w \mid \alpha, \gamma) = \prod_{i=1}^{M} N(w_i \mid \gamma_i, \alpha_i^{-1}), \qquad (5)$$

where $\alpha$ is the precision vector, $\alpha = (\alpha_1, \ldots, \alpha_M)^{T}$, and $\gamma$ is the mean vector, $\gamma = (\gamma_1, \ldots, \gamma_M)^{T}$;

using equation (4), the likelihood function is obtained as

$$p(t \mid X, w, \beta) = \prod_{n=1}^{N} N(t_n \mid w^{T}\phi(x_n), \beta^{-1}) = \left(\frac{\beta}{2\pi}\right)^{N/2}\exp\!\left(-\frac{\beta}{2}\|t - \Phi w\|^{2}\right), \qquad (6)$$

where $t = (t_1, \ldots, t_N)^{T}$, $\Phi$ is the $N \times M$ design matrix with rows $\phi(x_n)^{T}$, and $M$ is the number of parameters to be determined;

similarly,

$$p(w \mid \alpha, \gamma) = (2\pi)^{-M/2}|A|^{1/2}\exp\!\left(-\frac{1}{2}(w - \gamma)^{T}A(w - \gamma)\right), \qquad (7)$$

where $A = \mathrm{diag}(\alpha_i)$;

the posterior distribution of the weight parameters, $p(w \mid t, X, \alpha, \beta, \gamma) = N(w \mid m, \Sigma)$, is also a normal distribution, in which

$$m = (A + \beta\Phi^{T}\Phi)^{-1}(A\gamma + \beta\Phi^{T}t), \qquad (8)$$

$$\Sigma = (A + \beta\Phi^{T}\Phi)^{-1}; \qquad (9)$$
the step 2 is specifically:

$$p(t \mid X, \alpha, \beta, \gamma) = \int p(t \mid X, w, \beta)\,p(w \mid \alpha, \gamma)\,dw; \qquad (10)$$

using equation (6) and equation (7), one obtains

$$p(t \mid X, \alpha, \beta, \gamma) = \left(\frac{\beta}{2\pi}\right)^{N/2}(2\pi)^{-M/2}|A|^{1/2}\int \exp(-E(w))\,dw, \qquad (11)$$

where $E(w) = \frac{\beta}{2}\|t - \Phi w\|^{2} + \frac{1}{2}(w - \gamma)^{T}A(w - \gamma)$;

letting $\nabla E(w) = 0$ gives $w = (A + \beta\Phi^{T}\Phi)^{-1}(\beta\Phi^{T}t + A\gamma) = m$; by the Taylor expansion of the quadratic $E(w)$ at $m$, $E(w) = E(m) + \frac{1}{2}(w - m)^{T}(A + \beta\Phi^{T}\Phi)(w - m)$, where

$$E(m) = \frac{\beta}{2}\|t - \Phi m\|^{2} + \frac{1}{2}(m - \gamma)^{T}A(m - \gamma); \qquad (12)$$

using equation (11) and equation (12), one obtains

$$p(t \mid X, \alpha, \beta, \gamma) = \left(\frac{\beta}{2\pi}\right)^{N/2}|A|^{1/2}|\Sigma|^{1/2}\exp(-E(m)), \qquad (13)$$

where $m = (A + \beta\Phi^{T}\Phi)^{-1}(\beta\Phi^{T}t + A\gamma)$ and $X = (x_1, x_2, \ldots, x_N)$.
2. The image processing evidence function estimation method based on the relevance vector machine according to claim 1, wherein the step 3 is specifically:
taking the logarithm of equation (13) gives

$$\ln p(t \mid X, \alpha, \beta, \gamma) = \frac{N}{2}\ln\frac{\beta}{2\pi} + \frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma| - E(m); \qquad (14)$$

using equation (9), equation (14) and $\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right)$, one obtains

$$\frac{\partial}{\partial \alpha_i}\ln|\Sigma| = -\mathrm{Tr}\!\left(\Sigma\,\frac{\partial(A + \beta\Phi^{T}\Phi)}{\partial \alpha_i}\right) = -\Sigma_{ii}, \qquad (15)$$

because $\frac{\partial(A + \beta\Phi^{T}\Phi)}{\partial \alpha_i} = e_ie_i^{T}$ and $\mathrm{Tr}(\Sigma e_ie_i^{T}) = \Sigma_{ii}$;

by means of $\frac{\partial}{\partial \alpha_i}\ln|A| = \frac{1}{\alpha_i}$ and equation (15), one obtains

$$\frac{\partial}{\partial \alpha_i}\left(\frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma|\right) = \frac{1}{2\alpha_i} - \frac{\Sigma_{ii}}{2}; \qquad (16)$$

from equation (16),

$$\frac{\partial}{\partial \alpha_i}\left(\frac{1}{2}\ln|A| + \frac{1}{2}\ln|\Sigma|\right) = \frac{1 - \alpha_i\Sigma_{ii}}{2\alpha_i}, \qquad (17)$$

where $\Sigma_{ii}$ is the $i$-th element of the main diagonal of the posterior covariance $\Sigma$;

from equation (12), it can be obtained that

$$\frac{\partial E(m)}{\partial \alpha_i} = \frac{1}{2}(m_i - \gamma_i)^{2}, \qquad (18)$$

where $m_i$ is the $i$-th component of the posterior mean $m$;

from equation (17) and equation (18), we can obtain

$$\frac{\partial}{\partial \alpha_i}\ln p(t \mid X, \alpha, \beta, \gamma) = \frac{1 - \alpha_i\Sigma_{ii}}{2\alpha_i} - \frac{1}{2}(m_i - \gamma_i)^{2}; \qquad (19)$$

setting the derivative in (19) to zero thereby obtains

$$\alpha_i^{\mathrm{new}} = \frac{\lambda_i}{(m_i - \gamma_i)^{2}}, \qquad (20)$$

where $\lambda_i = 1 - \alpha_i\Sigma_{ii}$;

by the definition of $\Sigma$ and according to equation (9) together with $\frac{\partial}{\partial x}\ln|A| = \mathrm{Tr}\!\left(A^{-1}\frac{\partial A}{\partial x}\right)$, one obtains

$$\frac{\partial}{\partial \beta}\ln|\Sigma| = -\mathrm{Tr}(\Sigma\Phi^{T}\Phi); \qquad (21)$$

according to equation (21) and $\mathrm{Tr}(xy^{T}) = x^{T}y$, one obtains

$$\frac{\partial}{\partial \beta}\left(\frac{N}{2}\ln\beta + \frac{1}{2}\ln|\Sigma|\right) = \frac{N}{2\beta} - \frac{1}{2}\mathrm{Tr}(\Sigma\Phi^{T}\Phi); \qquad (22)$$

because $(A + \beta\Phi^{T}\Phi)(A + \beta\Phi^{T}\Phi)^{-1} = I_M$, it can be obtained that

$$\Phi^{T}\Phi\Sigma = \beta^{-1}(I_M - A\Sigma); \qquad (23)$$

from equation (22) and equation (23), it is possible to obtain

$$\mathrm{Tr}(\Sigma\Phi^{T}\Phi) = \beta^{-1}\sum_{i=1}^{M}(1 - \alpha_i\Sigma_{ii}) = \beta^{-1}\sum_{i=1}^{M}\lambda_i; \qquad (24)$$

from equation (12), it can be obtained that

$$\frac{\partial E(m)}{\partial \beta} = \frac{1}{2}\|t - \Phi m\|^{2}; \qquad (25)$$

from equation (24) and equation (25), setting $\frac{\partial}{\partial \beta}\ln p(t \mid X, \alpha, \beta, \gamma) = 0$ thereby obtains

$$\frac{1}{\beta^{\mathrm{new}}} = \frac{\|t - \Phi m\|^{2}}{N - \sum_{i}\lambda_i}; \qquad (26)$$

differentiating equation (12) with respect to $\gamma$ gives $\nabla_\gamma E(m) = A(\gamma - m)$; letting $\nabla_\gamma E(m) = 0$ gives $\gamma = m$, so

$$\gamma_i = m_i, \qquad (27)$$

where $\gamma_i$ is the $i$-th component of $\gamma$ and $m_i$ is the $i$-th component of $m$.
CN202110963746.2A (priority and filing date 2021-08-20): Image processing evidence function estimation method based on relevance vector machine; Active; granted as CN113779502B


Publications (2)

CN113779502A, published 2021-12-10
CN113779502B, granted 2023-08-29


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254193A (en) * 2011-07-16 2011-11-23 西安电子科技大学 Relevance vector machine-based multi-class data classifying method
CN103258213A (en) * 2013-04-22 2013-08-21 中国石油大学(华东) Vehicle model dynamic identification method used in intelligent transportation system
CN104732215A (en) * 2015-03-25 2015-06-24 广西大学 Remote-sensing image coastline extracting method based on information vector machine
CN106709918A (en) * 2017-01-20 2017-05-24 成都信息工程大学 Method for segmenting images of multi-element student t distribution mixed model based on spatial smoothing
CN108197435A (en) * 2018-01-29 2018-06-22 绥化学院 Localization method between a kind of multiple characters multi-region for containing error based on marker site genotype
CN108228535A (en) * 2018-01-02 2018-06-29 佛山科学技术学院 A kind of optimal weighting parameter evaluation method of unequal precision measurement data fusion
CN111914865A (en) * 2019-05-08 2020-11-10 天津科技大学 Probability main component analysis method based on random core
CN112053307A (en) * 2020-08-14 2020-12-08 河海大学常州校区 X-ray image linear reconstruction method
US10867171B1 (en) * 2018-10-22 2020-12-15 Omniscience Corporation Systems and methods for machine learning based content extraction from document images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8250003B2 (en) * 2008-09-12 2012-08-21 Microsoft Corporation Computationally efficient probabilistic linear regression


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dawei Zou et al., "A Logical Framework of the Evidence Function Approximation Associated with Relevance Vector Machine", Mathematical Problems in Engineering, vol. 2020, pp. 1-10. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant